Electronics and Control

Bayesian sample size determination for integrated test of missile hit accuracy

  • DONG Guangling,
  • YAO Yu,
  • HE Fenghua,
  • HE Chi
  • 1. School of Astronautics, Harbin Institute of Technology, Harbin 150080, China;
    2. Department of Test Technology, Baicheng Ordnance Test Center of China, Baicheng 137001, China

Received date: 2014-03-12

Revised date: 2014-04-12

Online published: 2014-04-17

Supported by

National Natural Science Foundation of China (61021002, 61304239)

Abstract

Sample size determination (SSD) methods for the integrated test of missile hit accuracy are analyzed, revealing the shortcomings of the classical method and the contradiction that arises in the Bayesian method when the standard power prior is used as the design prior. To resolve the contradiction between the standard power prior for design and the average posterior variance criterion of Bayesian SSD when the prior sample size is very large, the design effect of an experiment is introduced, which jointly accounts for simulation test credibility and prior sample size. On this basis, a modified power exponent for design prior elicitation is derived from the equivalence of experimental design effects. Taking the Bayesian average posterior variance of the parameter of interest as the output precision, optimization equations for the SSD of an integrated test scheme are obtained under both a test cost constraint and a required posterior precision constraint. Finally, the effectiveness of the proposed Bayesian SSD method for the integrated test of missile hit accuracy is illustrated with two examples.
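To give a concrete feel for the criterion described in the abstract, the sketch below (not taken from the paper) works through a minimal version of the idea in Python: a normal model for miss distance with known variance, a power prior whose exponent is simply set equal to the simulation credibility (a stand-in for the paper's modified exponent based on design effect equivalence), and a search for the smallest number of live firings whose posterior variance meets a precision target within a test cost budget. All function names and numerical values are illustrative assumptions, not the authors' formulas.

```python
import numpy as np

def power_prior_effective_size(m_sim, credibility):
    """Effective prior sample size contributed by m_sim simulation shots.
    Assumption for illustration: the power-prior exponent is set directly
    to the simulation credibility in [0, 1]."""
    delta = credibility
    return delta * m_sim

def posterior_variance(sigma2, n_field, n0_eff):
    """Posterior variance of the mean miss distance for a normal model with
    known variance sigma2, given n_field live shots and an effective prior
    sample size n0_eff from the (discounted) simulation data."""
    return sigma2 / (n0_eff + n_field)

def min_field_sample_size(sigma2, m_sim, credibility,
                          target_var, cost_per_shot, budget):
    """Smallest number of live firings whose posterior variance meets the
    precision target while staying within the test budget; returns None
    if no feasible sample size exists."""
    n0_eff = power_prior_effective_size(m_sim, credibility)
    n_max = int(budget // cost_per_shot)
    for n in range(1, n_max + 1):
        if posterior_variance(sigma2, n, n0_eff) <= target_var:
            return n
    return None

if __name__ == "__main__":
    # Illustrative numbers only: 200 simulation shots with credibility 0.6,
    # unit shot dispersion, a 0.007 posterior-variance target, and a budget
    # that allows at most 40 live firings.
    n = min_field_sample_size(sigma2=1.0, m_sim=200, credibility=0.6,
                              target_var=0.007, cost_per_shot=1.0, budget=40.0)
    print("required live firings:", n)
```

In the paper, the power exponent is instead elicited from the design effect equivalence of the experiment, which accounts jointly for simulation credibility and prior sample size; fixing the exponent to the credibility here is purely a simplification for illustration.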

Cite this article

DONG Guangling, YAO Yu, HE Fenghua, HE Chi. Bayesian sample size determination for integrated test of missile hit accuracy[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2015, 36(2): 575-584. DOI: 10.7527/S1000-6893.2014.0051
