Proving admissibility using the stepwise Bayes technique: with applications to maximum likelihood estimation

Date
1989
Authors
Funo, Eiichiro
Major Professor
Glen Meeden
Department
Statistics
Abstract

The stepwise Bayes technique is a simple but versatile method for proving admissibility of estimators under a strictly convex loss function such as squared error loss. For example, when X ~ Binomial(n, θ), the technique makes it easy to prove that the MLE of θ is admissible under squared error loss. Similarly, the admissibility of the joint MLE can be proven when X is Multinomial or a vector of independent Binomial random variables.

These results can be extended. Let X ~ Multinomial(n, p), where p ∈ Ξ = {(p_0, p_1, ..., p_k) : 0 ≤ p_i ≤ 1 for each i = 0, 1, ..., k, and the p_i sum to 1}, and p = φ(θ) = (φ_0(θ), φ_1(θ), ..., φ_k(θ))′, where θ ∈ Θ = {(θ_10, θ_11, ..., θ_1s_1; θ_20, θ_21, ..., θ_2s_2; ...; θ_r0, θ_r1, ..., θ_rs_r) : 0 ≤ θ_ij ≤ 1 for all i, j, and the θ_ij in the i-th group sum to 1 for each i = 1, 2, ..., r}. Assume φ : Θ → Ξ is an onto map and each φ_i(θ) is a monomial in the coordinates of θ.

Then the stepwise Bayes technique can be used to show that the MLE of θ is admissible under squared error loss. This result is useful for proving the admissibility of maximum likelihood estimators in many areas of statistics, for example missing data analysis, censored data analysis, log-linear models, and finite population sampling problems.

In contrast to the above admissibility theorem, in binomial or multinomial problems where the parameter space is restricted or truncated to a subset of the natural parameter space, the MLE may be inadmissible under squared error loss. A quite general condition for the inadmissibility of maximum likelihood estimators in such cases can be established using the stepwise Bayes technique and the complete class theorem of Brown.

References. (1) Brown, L. 1981. A complete class theorem for statistical problems with finite sample spaces. Ann. Statist. 9:1289-1300. (2) Meeden, G., and Ghosh, M. 1981. Admissibility in finite problems. Ann. Statist. 9:846-852.
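A minimal numerical sketch of the idea behind the binomial case (an illustration, not the thesis's proof): under a Beta(α, α) prior, the Bayes estimator of θ under squared error loss is the posterior mean (x + α)/(n + 2α), which converges to the MLE x/n as α → 0. The code below illustrates that convergence and also checks, by direct summation over the sample space, that the exact squared error risk of the MLE is θ(1 − θ)/n.

```python
from math import comb

def bayes_estimate(x, n, alpha):
    """Posterior mean of theta under a Beta(alpha, alpha) prior,
    i.e. the Bayes estimator under squared error loss."""
    return (x + alpha) / (n + 2 * alpha)

def mle(x, n):
    """Maximum likelihood estimator of theta for X ~ Binomial(n, theta)."""
    return x / n

def risk(estimator, n, theta):
    """Exact squared error risk E[(d(X) - theta)^2], summing over x = 0..n."""
    return sum(
        comb(n, x) * theta**x * (1 - theta)**(n - x)
        * (estimator(x, n) - theta)**2
        for x in range(n + 1)
    )

n, x = 10, 3
# The Bayes estimators approach the MLE x/n = 0.3 as alpha -> 0.
for alpha in (1.0, 0.1, 0.001):
    print(alpha, bayes_estimate(x, n, alpha))
print("MLE:", mle(x, n))

# The risk of the MLE under squared error loss equals theta*(1-theta)/n.
theta = 0.3
print(risk(mle, n, theta), theta * (1 - theta) / n)
```

This shows only the limiting-Bayes flavor of the argument; the actual stepwise Bayes proof proceeds in stages, handling priors concentrated on boundary points of the parameter space first, along the lines of Meeden and Ghosh (1981).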

Copyright
1989