Composite Tests under Corrupted Data

2019 Jan 14;21(1). [Epub 2019 Jan 14]

Status: PubMed-not-MEDLINE. Language: English. Country: Switzerland. Medium: electronic.

Document type: journal article

Persistent link: https://www.medvik.cz/link/pmid33266779

This paper focuses on test procedures under corrupted data. We assume that the observations Z_i are mismeasured due to the presence of measurement errors: instead of Z_i for i = 1, …, n, we observe X_i = Z_i + δV_i, with an unknown parameter δ and an unobservable random variable V_i. The random variables Z_i are assumed i.i.d., as are the X_i and the V_i, and the density of the V_i is assumed known. The test procedure aims at deciding between two simple hypotheses pertaining to the density of Z_i, namely f_0 and g_0. The proposed procedure aggregates likelihood ratio tests over a collection of values of δ. A new definition of least-favorable hypotheses for the aggregate family of tests is introduced, together with a relation to the Kullback-Leibler divergence between the families {f_δ}_δ and {g_δ}_δ. Finite-sample lower bounds for the power of these tests are obtained, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed in which the individual likelihood ratios are replaced by divergence-based test statistics; it is shown and discussed that the resulting aggregated test may outperform the aggregated likelihood ratio procedure.
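As a concrete illustration of the aggregation step, the following minimal sketch (not the paper's implementation) computes an aggregated likelihood-ratio statistic over a grid of δ values. The setup is assumed for illustration: f_0 = N(0, 1), g_0 = N(1, 1), and V_i ~ N(0, 1), so the convolved densities f_δ and g_δ are again Gaussian with variance 1 + δ²; aggregation by maximum over the grid is one natural choice, not necessarily the paper's rule.

```python
import numpy as np
from scipy import stats

# Minimal sketch of an aggregated likelihood-ratio test under the
# corrupted-data model X_i = Z_i + delta * V_i.
# Assumed toy setup (not from the paper): f0 = N(0, 1), g0 = N(1, 1),
# V_i ~ N(0, 1); convolution gives f_delta = N(0, 1 + delta^2) and
# g_delta = N(1, 1 + delta^2).

def log_lr(x, delta):
    """Log-likelihood ratio sum_i log[g_delta(x_i) / f_delta(x_i)] for one delta."""
    s = np.sqrt(1.0 + delta ** 2)
    return (stats.norm.logpdf(x, loc=1.0, scale=s)
            - stats.norm.logpdf(x, loc=0.0, scale=s)).sum()

def aggregated_stat(x, deltas):
    """Aggregate the fixed-delta statistics over a grid of delta values;
    taking the maximum is one natural (assumed) aggregation rule."""
    return max(log_lr(x, d) for d in deltas)

rng = np.random.default_rng(0)
n, true_delta = 200, 0.5
deltas = np.linspace(0.0, 1.0, 11)   # candidate values of the unknown delta

# Data generated under the alternative: Z_i ~ g0, corrupted by delta * V_i.
z = rng.normal(1.0, 1.0, size=n)
x = z + true_delta * rng.normal(0.0, 1.0, size=n)

print(aggregated_stat(x, deltas))    # large positive values favor g0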

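The finite-sample power statements can likewise be probed by Monte Carlo under this toy setup. The sketch below, reusing the functions and constants above, calibrates a simulated 5%-level critical value under f_0 and estimates the rejection frequency under g_0; it is an illustrative simulation, not the paper's least-favorable-hypothesis construction.

```python
# Monte Carlo sketch of the finite-sample power of the aggregated test,
# reusing log_lr, aggregated_stat, deltas, n and true_delta from above.
# Illustrative only: the paper derives analytical lower bounds and simulates
# under least-favorable hypotheses, which are not reproduced here.

def power_estimate(reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    # Calibrate the critical value under H0 (Z_i ~ f0 = N(0, 1)).
    null_stats = [
        aggregated_stat(rng.normal(0.0, 1.0, n)
                        + true_delta * rng.normal(0.0, 1.0, n), deltas)
        for _ in range(reps)
    ]
    crit = np.quantile(null_stats, 0.95)   # simulated 5%-level cutoff
    # Estimate the rejection frequency under H1 (Z_i ~ g0 = N(1, 1)).
    hits = sum(
        aggregated_stat(rng.normal(1.0, 1.0, n)
                        + true_delta * rng.normal(0.0, 1.0, n), deltas) > crit
        for _ in range(reps)
    )
    return hits / reps

print(power_estimate())   # empirical power at nominal level 0.05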
