Degree Name

Doctor of Philosophy


First Advisor

Vivekananda Roy


This dissertation consists of three research articles on Markov chain Monte Carlo (MCMC) diagnostics and sparse Bayesian learning models. The first article develops MCMC diagnostic tools based on Kullback-Leibler (KL) divergence and smoothing methods. These tools can assess the joint convergence of multiple variables and detect non-convergence when MCMC chains get stuck at a particular mode of a multimodal stationary distribution. Further, when multiple MCMC chains fail to converge, the accompanying visualization tool can be used to investigate the reasons for non-convergence. The second article assesses the posterior propriety of some sparse Bayesian learning models. The Relevance Vector Machine (RVM) is a popular sparse Bayesian learning model that places an improper prior on its hyperparameters. We prove that this improper prior leads to an improper posterior, and we provide necessary and sufficient conditions for posterior propriety of the RVM. Additionally, we prove the posterior impropriety of some other Bayesian learning models whose prior structure is similar to that of the RVM. In the third article, we propose replacing the multiple penalties of the RVM with a single penalty. The new model, named the single penalty relevance vector machine (SPRVM), is analyzed using a semi-Bayesian approach. The SPRVM allows for the computation of Monte Carlo standard errors, since we prove the geometric ergodicity of its associated Gibbs sampler; such a standard error cannot be computed for the RVM because the rate of convergence of its associated Gibbs sampler is not known. Through these three articles we hope to make valuable additions to the literature on MCMC diagnostics and sparse Bayesian learning models.
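The KL-divergence-and-smoothing idea behind the first article's diagnostics can be illustrated with a minimal sketch (this is an illustrative stand-in, not the dissertation's actual method): estimate the KL divergence between the empirical distributions of two chains using kernel density smoothing, so that chains trapped at different modes yield a large divergence. The function name and the grid-based numerical integration are choices made for this example only.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_between_chains(chain_a, chain_b, grid_size=512):
    """Rough KL(p_a || p_b) between the smoothed empirical
    distributions of two 1-D MCMC chains (illustrative only)."""
    kde_a = gaussian_kde(chain_a)  # Gaussian kernel density estimate
    kde_b = gaussian_kde(chain_b)
    # Evaluate both densities on a common grid covering both chains.
    lo = min(chain_a.min(), chain_b.min())
    hi = max(chain_a.max(), chain_b.max())
    grid = np.linspace(lo, hi, grid_size)
    p = kde_a(grid)
    q = kde_b(grid)
    # Normalize to discrete probabilities on the grid.
    p /= p.sum()
    q /= q.sum()
    # Guard against numerical underflow in q before taking the log.
    q = np.maximum(q, 1e-300)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Two chains exploring the same distribution give a divergence near zero, while two chains stuck at well-separated modes give a large value, which is the kind of signal such a diagnostic relies on.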

Copyright Owner

Anand Ulhas Dixit



File Size

98 pages