Efficiency of Markov chain Monte Carlo algorithms for Bayesian inference in random regression models

Date
2000-01-01
Authors
Liu, Ho
Major Professor
Hal S. Stern
Jack C. M. Dekkers
Department
Statistics
Abstract

A random regression model can be used to fit repeated measurements, such as the weight gain of an animal over time, in order to accommodate both between- and within-individual variation. The Bayesian approach is an alternative to the REML-BLUP approach for drawing inference and often depends on Markov chain Monte Carlo (MCMC) methods. Our studies focus primarily on the efficiency of MCMC methods in models for repeated measurements in time. Efficient methods should make Bayesian analysis feasible for studying animal growth traits.

With conjugate prior distributions, posterior samples under the random regression model can be obtained with a Gibbs sampling algorithm. Orthogonality of parameters can reduce posterior correlations among model parameters and thereby improve the convergence rate of MCMC methods. Hierarchical centering can improve the convergence rate when the variance of the random effects is much larger than the variance of the residuals. Adopting hierarchical centering and orthogonality simultaneously yields the greatest improvement in convergence rate. When there is more than one random component besides the random residuals, using a cycling algorithm along with orthogonal polynomials leads to the best performance. In addition, using a batching scheme for drawing correlated parameters can also improve the convergence rate.

It is also possible to develop Metropolis-Hastings (M-H) algorithms for nonlinear models. We find that M-H algorithms with a normal jumping distribution, centered at the current value and with its variance evaluated either at the individual-model-fitting MLE or at the current value, perform best.
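To make the Gibbs sampling and hierarchical centering ideas concrete, here is a minimal Python sketch (illustrative, not code from the thesis) for the simplest random-effects case, a random-intercept model y_ij = mu + b_i + e_ij. Hierarchical centering reparameterizes to eta_i = mu + b_i, and conjugate inverse-gamma priors yield closed-form full conditionals; all data dimensions, prior settings, and names below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced data: m animals, n records each, y_ij = mu + b_i + e_ij
m, n = 50, 8
mu_true, sb2_true, se2_true = 10.0, 4.0, 1.0
b = rng.normal(0.0, np.sqrt(sb2_true), m)
y = mu_true + b[:, None] + rng.normal(0.0, np.sqrt(se2_true), (m, n))

def gibbs_centered(y, iters=2000, a0=0.01, b0=0.01):
    """Gibbs sampler in the centered form: y_ij = eta_i + e_ij, eta_i ~ N(mu, sb2)."""
    m, n = y.shape
    ybar = y.mean(axis=1)
    eta, mu, sb2, se2 = ybar.copy(), ybar.mean(), 1.0, 1.0
    draws = np.empty((iters, 3))
    for it in range(iters):
        # eta_i | rest: conjugate normal combining n records with the N(mu, sb2) prior
        prec = n / se2 + 1.0 / sb2
        eta = (n * ybar / se2 + mu / sb2) / prec + rng.normal(size=m) / np.sqrt(prec)
        # mu | rest: flat prior, so normal around the mean of the centered effects
        mu = rng.normal(eta.mean(), np.sqrt(sb2 / m))
        # variances | rest: inverse-gamma updates under IG(a0, b0) priors
        sb2 = 1.0 / rng.gamma(a0 + m / 2, 1.0 / (b0 + 0.5 * np.sum((eta - mu) ** 2)))
        se2 = 1.0 / rng.gamma(a0 + y.size / 2, 1.0 / (b0 + 0.5 * np.sum((y - eta[:, None]) ** 2)))
        draws[it] = mu, sb2, se2
    return draws

print(gibbs_centered(y)[500:].mean(axis=0))  # posterior means of (mu, sb2, se2)
```

Under the centered parameterization, the draws of eta and mu mix well precisely in the regime the abstract describes, sigma_b^2 much larger than sigma_e^2, where the uncentered (mu, b_i) parameterization produces strong posterior correlation.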

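For the Metropolis-Hastings finding in the final paragraph, the following is a rough sketch of the jumping rule it describes: a normal proposal centered at the current value, with a covariance fixed in advance (e.g., inverse curvature at an individual-model-fitting MLE). The Gompertz growth curve and all settings below are illustrative assumptions, not the thesis's models or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gompertz growth curve for one animal: y_t = A * exp(-exp(-k * (t - t0))) + e_t
t = np.arange(1.0, 11.0)
theta_true = np.array([30.0, 0.4, 4.0])  # (A, k, t0)
mean_fn = lambda th: th[0] * np.exp(-np.exp(-th[1] * (t - th[2])))
y = mean_fn(theta_true) + rng.normal(0.0, 1.0, t.size)

def log_post(th, sigma=1.0):
    # flat priors on (A, k, t0) with a Gaussian likelihood
    r = y - mean_fn(th)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

def metropolis(theta0, prop_cov, iters=5000):
    th = np.asarray(theta0, dtype=float)
    lp = log_post(th)
    L = np.linalg.cholesky(prop_cov)  # proposal covariance fixed in advance
    chain = np.empty((iters, th.size))
    for i in range(iters):
        # normal jump centered at the current value
        cand = th + L @ rng.normal(size=th.size)
        lp_cand = log_post(cand)
        # symmetric proposal, so accept with the posterior ratio alone
        if np.log(rng.uniform()) < lp_cand - lp:
            th, lp = cand, lp_cand
        chain[i] = th
    return chain

chain = metropolis([25.0, 0.5, 3.0], np.diag([0.5, 0.01, 0.1]) ** 2)
print(chain[1000:].mean(axis=0))
```

Note the design choice the abstract's second variant implies: if the proposal variance is instead re-evaluated at the current value at every step, the jump is no longer symmetric and the full M-H acceptance ratio, including the proposal densities, is required.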
Copyright
2000