Degree Type

Dissertation

Date of Award

2014

Degree Name

Doctor of Philosophy

Department

Statistics

First Advisor

Stephen B. Vardeman

Abstract

This dissertation focuses on two topics in Statistical Learning: biclustering and deep learning. The dissertation has three chapters; Chapters 1 and 2 focus on biclustering, and Chapter 3 focuses on deep learning.

Biclustering is a Statistical Learning technique that simultaneously partitions the set of samples and the set of their attributes into homogeneous subsets. In Chapter 1, motivated by movie rating data, we first propose a Bayesian model and an MCMC algorithm for model estimation. Because this algorithm is too slow to be of practical use with current computation power, we then propose a simplified model and design a Genetic Algorithm for maximizing the likelihood function. This approach works well on a small data set, but due to the NP-hard nature of the problem, neither approach is practically useful with current computation power. Nonetheless, they provide principled ways of solving a biclustering problem for future use as computation power develops.
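
To make the likelihood-maximization idea concrete, the following R sketch (the function name and model details are illustrative assumptions, not the dissertation's actual model) scores a candidate biclustering, i.e., a joint assignment of rows and columns to groups, with a normal block-mean log-likelihood; a search procedure such as a Genetic Algorithm could then compare candidate assignments by this score.

# Illustrative sketch (assumed model): log-likelihood of a normal block-mean biclustering.
# X: numeric data matrix; row_grp, col_grp: integer group labels for rows and columns.
# Every (row group, column group) block is treated as N(mu_block, sigma^2).
bicluster_loglik <- function(X, row_grp, col_grp, sigma = 1) {
  ll <- 0
  for (r in unique(row_grp)) {
    for (k in unique(col_grp)) {
      block <- X[row_grp == r, col_grp == k, drop = FALSE]
      vals <- block[!is.na(block)]        # skip missing entries
      if (length(vals) == 0) next
      mu <- mean(vals)                    # plug-in estimate of the block mean
      ll <- ll + sum(dnorm(vals, mean = mu, sd = sigma, log = TRUE))
    }
  }
  ll
}

# Toy usage: a 6 x 4 matrix with 2 row groups and 2 column groups.
set.seed(1)
X <- matrix(rnorm(24), nrow = 6, ncol = 4)
bicluster_loglik(X, row_grp = rep(1:2, each = 3), col_grp = rep(1:2, each = 2))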

Also motivated by movie rating data, where missing values need to be addressed, in Chapter 2 we propose a new Prototype-based Biclustering method. We evaluate our method on test cases with various percentages of missing values, using the Rand Index between our result and the "true" partitions. Our method performs well even on test cases with a large percentage of missing values. We further evaluate our method on a gene expression data set that contains no missing values, where our method outperforms an existing biclustering method, Spectral Biclustering, under the Mean Squared Error criterion.
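
For reference, the Rand Index used in this evaluation compares two partitions by the fraction of object pairs treated consistently (placed together in both partitions or apart in both). A minimal R sketch, with a function name of our own choosing, is:

# Rand Index between two partitions given as label vectors of equal length.
# Counts pairs grouped together in both partitions or apart in both,
# divided by the total number of pairs.
rand_index <- function(labels1, labels2) {
  n <- length(labels1)
  same1 <- outer(labels1, labels1, "==")
  same2 <- outer(labels2, labels2, "==")
  agree <- (same1 == same2)
  # use only the upper triangle so each pair is counted once
  sum(agree[upper.tri(agree)]) / choose(n, 2)
}

# Toy usage: identical partitions give 1; partial agreement gives a value in (0, 1).
rand_index(c(1, 1, 2, 2, 3), c(1, 1, 2, 3, 3))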

Deep Learning is a Statistical Learning topic involving "deep" network architectures that mimic the information representation structure of the human brain. In Chapter 3, motivated by a hand-written digit classification problem, we propose a Bayesian framework for fitting Boltzmann machine models. The proposed approach surpasses previously available methods by providing a principled fitting procedure based on an MCMC algorithm. The approach presented here also provides a reasonably effective way to extract features from multivariate data for use in classification.
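
As generic background for the Gibbs-sampling computations implemented in the C files listed below (this sketch is a standard construction, not the dissertation's algorithm), one alternating Gibbs sweep for a binary restricted Boltzmann machine samples the hidden units given the visible units and then the visible units given the hidden units:

# One alternating Gibbs sweep for a binary RBM (generic sketch, not the dissertation's code).
# v: binary visible vector; W: nv x nh weight matrix; vbias, hbias: bias vectors.
sigmoid <- function(x) 1 / (1 + exp(-x))

rbm_gibbs_sweep <- function(v, W, vbias, hbias) {
  # Sample hidden units given visible units: P(h_j = 1 | v) = sigmoid(hbias_j + sum_i v_i W_ij)
  p_h <- sigmoid(hbias + as.vector(t(W) %*% v))
  h <- rbinom(length(p_h), 1, p_h)
  # Sample visible units given hidden units: P(v_i = 1 | h) = sigmoid(vbias_i + sum_j W_ij h_j)
  p_v <- sigmoid(vbias + as.vector(W %*% h))
  v_new <- rbinom(length(p_v), 1, p_v)
  list(v = v_new, h = h)
}

# Toy usage: 4 visible units, 3 hidden units, small random parameters.
set.seed(1)
W <- matrix(rnorm(12, sd = 0.1), nrow = 4, ncol = 3)
rbm_gibbs_sweep(v = rbinom(4, 1, 0.5), W = W, vbias = rnorm(4, sd = 0.1), hbias = rnorm(3, sd = 0.1))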

Copyright Owner

Jing Li

Language

en

File Format

application/pdf

File Size

72 pages

gibbsBM.c (16 kB)
C code for fitting full Boltzmann Machines

gibbsRBM.c (5 kB)
C code for fitting RBMs with one hidden layer

gibbsRBM2layer.c (9 kB)
C code for fitting RBMs with two hidden layers

5_20_13.R (23 kB)
R functions
