Publication Date

6-2003

Series Number

Preprint # - 2003-08

Abstract

Model-combining (i.e., mixing) methods have been proposed in recent years to deal with uncertainty in model selection. Even though advantages of model combining over model selection have been demonstrated in simulations and data examples, it remains largely unclear when model combining should be preferred. In this work, first we propose an instability measure, perturbation instability in estimation (PIE), which captures the uncertainty of model selection in estimation based on perturbations of the sample. We demonstrate that estimators from model selection can have large PIE values and that model combining substantially reduces the instability in such cases. Second, we propose a model combining method, adaptive regression by mixing with model screening (ARMS), and derive a theoretical property. In ARMS, a screening step is taken to narrow down the list of candidate models before combining, which not only saves computing time but can also improve estimation accuracy. Third, we compare ARMS with EBMA (an empirical Bayesian model averaging) and with model selection methods in a number of simulations and real data examples. The comparison shows that model combining produces better estimators when the instability of model selection is high, and that ARMS performs better than EBMA in most such cases in our simulations. With respect to the choice between model selection and model combining, we propose a rule of thumb in terms of PIE. The empirical results support PIE as a sensible indicator of model selection instability in estimation, useful for judging whether model combining is a better choice than model selection for the data at hand.
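The abstract's idea of a perturbation-based instability measure can be illustrated with a small sketch. This is not the paper's actual PIE definition (which is given in the full text); it is a hypothetical analogue, assuming Gaussian noise perturbations of the response, AIC-based model selection among candidate linear designs, and a normalized distance between the fitted values before and after perturbation. All function names and tuning constants (`tau`, `n_perturb`) are illustrative.

```python
import numpy as np

def fit_ols(X, y):
    # Ordinary least squares via numpy's least-squares solver.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def select_by_aic(y, designs):
    # Pick the candidate design matrix minimizing AIC under Gaussian errors.
    n = len(y)
    best, best_aic = None, np.inf
    for X in designs:
        beta = fit_ols(X, y)
        rss = np.sum((y - X @ beta) ** 2)
        aic = n * np.log(rss / n) + 2 * X.shape[1]
        if aic < best_aic:
            best, best_aic = (X, beta), aic
    return best

def perturbation_instability(y, designs, sigma_hat, n_perturb=50,
                             tau=0.5, rng=None):
    # Hypothetical PIE-style measure: add small Gaussian perturbations to
    # the response, rerun model selection each time, and average the
    # normalized root-mean-square distance between the perturbed fit and
    # the fit on the original data. Large values signal unstable selection.
    rng = np.random.default_rng(rng)
    X0, b0 = select_by_aic(y, designs)
    f0 = X0 @ b0
    n = len(y)
    dists = []
    for _ in range(n_perturb):
        y_pert = y + tau * sigma_hat * rng.standard_normal(n)
        Xp, bp = select_by_aic(y_pert, designs)
        fp = Xp @ bp
        dists.append(np.sqrt(np.mean((fp - f0) ** 2)) / sigma_hat)
    return float(np.mean(dists))

# Illustrative usage on simulated data with three nested candidate models.
rng = np.random.default_rng(0)
n = 60
x = rng.uniform(size=n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)
designs = [np.ones((n, 1)),
           np.column_stack([np.ones(n), x]),
           np.column_stack([np.ones(n), x, x ** 2])]
pie = perturbation_instability(y, designs, sigma_hat=1.0,
                               n_perturb=20, rng=1)
```

Under this sketch, a value of `pie` near zero would mean perturbing the data rarely changes the selected model or its fit, while larger values would indicate the kind of selection instability for which the abstract suggests model combining is preferable.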

Comments

This preprint was published as Zheng Yuan and Yuhong Yang, "Combining Linear Regression Models: When and how?", Journal of the American Statistical Association (2005): 1202-1214, doi: 10.1198/016214505000000088.

Language

en