Degree Type


Date of Award


Degree Name

Doctor of Philosophy




In a typical inferential problem, the conclusion reached by a Bayesian statistician depends on three components: the prior distribution, the likelihood function, and the observed sample data. These components are combined in the posterior distribution, which updates the information in the prior by merging it with the information in the likelihood function. Assuming that the likelihood function is correct, we evaluate the consonance between the prior and posterior distributions of the parameter of interest, using the squared Hellinger distance and the Kullback-Leibler divergence. If the distance measure is very large, this could indicate either a misspecified prior or a "surprising" data value. Considering the first possibility, we show how the distance measure can be used to specify more robust prior distributions. If the prior is assumed to be correct, we indicate how the measure can isolate disconsonant sample values.

If we have a sample x = x_1, ..., x_n, then the posterior distribution of θ given x_1, ..., x_{n-1} is the prior distribution for the observation x_n. Calculating the distance between the posterior distribution of θ given x and the posterior distributions with each x_i deleted gives an ordering of the sample values and establishes a Bayesian technique for the detection of outliers. We show that for the general linear model, the distance is a function of a classical measure of the influence of the i-th value on the estimation of the regression parameters.

We define a new measure of the amount of information obtained from an experiment. The measure is a convex combination of the information in the posterior and the information in the likelihood function, with weights given by an appropriate form of the distance function.

We apply the distance idea to the question of which subset of the independent variables in the general linear model should be included when predicting a future y value.
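The consonance check and the case-deletion ordering described above can be sketched numerically for the conjugate normal model, where both prior and posterior are normal and the squared Hellinger distance has a closed form. The data values, prior parameters, and function names below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def hellinger2_normal(m1, s1, m2, s2):
    """Closed-form squared Hellinger distance between N(m1, s1^2) and N(m2, s2^2)."""
    return 1.0 - np.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * np.exp(
        -((m1 - m2) ** 2) / (4.0 * (s1 ** 2 + s2 ** 2)))

def posterior_normal(x, m0, tau, sigma):
    """Posterior mean and sd of theta when x_i ~ N(theta, sigma^2), theta ~ N(m0, tau^2)."""
    prec = 1.0 / tau ** 2 + len(x) / sigma ** 2
    mean = (m0 / tau ** 2 + np.sum(x) / sigma ** 2) / prec
    return mean, np.sqrt(1.0 / prec)

# Illustrative setup: unit-variance observations, N(0, 1) prior on theta.
m0, tau, sigma = 0.0, 1.0, 1.0
x = np.array([0.1, -0.3, 0.4, 0.2, 5.0])  # last value is a planted outlier

# Consonance between prior and posterior: a large value suggests either a
# misspecified prior or a surprising sample.
m_post, s_post = posterior_normal(x, m0, tau, sigma)
h2 = hellinger2_normal(m0, tau, m_post, s_post)

# Case-deletion ordering: distance between the full posterior and each
# leave-one-out posterior; the largest distance points at the outlier.
deletion_distances = []
for i in range(len(x)):
    m_i, s_i = posterior_normal(np.delete(x, i), m0, tau, sigma)
    deletion_distances.append(hellinger2_normal(m_post, s_post, m_i, s_i))
```

Here the leave-one-out distance is largest when the planted outlier 5.0 is the deleted value, which is exactly the ordering the abstract uses to flag disconsonant observations.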
We calculate the distance between the predictive density with all variables included and the predictive densities from reduced models that include only a fixed subset of the variables, and we choose the model for which the distance is minimum. We show that with a uniform prior, the distance is a function of the classical reduction in sum of squares.

Statistics and Probability
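Since the abstract states that, under a uniform prior, the distance between full- and reduced-model predictive densities is a function of the classical reduction in sum of squares, a minimal sketch can rank fixed-size subsets by the increase in residual sum of squares over the full model. The simulated data and the use of the RSS increase as a stand-in for the predictive distance are assumptions for illustration only:

```python
import numpy as np
from itertools import combinations

def rss(X, y):
    """Residual sum of squares from a least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Simulated example: y depends on columns 0 and 1 only; column 2 is irrelevant.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

full = rss(X, y)

# Score each two-variable subset by its RSS increase over the full model,
# the quantity the predictive-density distance reduces to under a uniform prior.
scores = {S: rss(X[:, list(S)], y) - full for S in combinations(range(p), 2)}
best = min(scores, key=scores.get)
```

On this simulated data the subset (0, 1), which contains the two truly active variables, gives the smallest increase in sum of squares and so the smallest distance from the full-model predictive density.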



Digital Repository @ Iowa State University

Copyright Owner

Cindy Lynn Martin



Proquest ID


File Format


File Size

105 pages