Markov network structure discovery using independence tests

Date
2007-01-01
Authors
Bromberg, Facundo
Major Professor
Dimitris Margaritis
Department
Computer Science
Abstract

We investigate efficient algorithms for learning the structure of a Markov network from data using the independence-based approach. Such algorithms conduct a series of conditional independence tests on the data, successively restricting the set of possible structures until, ideally, only a single structure remains consistent with the outcomes of the tests executed. As Pearl has shown, the instances of the conditional independence relation in any domain are theoretically interdependent, a fact made explicit by his well-known conditional independence axioms. The first two algorithms we present, GSMN and GSIMN, exploit Pearl's independence axioms to reduce the number of tests required to learn a Markov network. This is useful in domains where independence tests are expensive, such as very large or distributed data sets. Subsequently, we explore how these axioms can be exploited to "correct" the outcomes of unreliable statistical independence tests, as arise in applications where little data is available. We show how the problem of incorrect tests can be mapped to inference in inconsistent knowledge bases, a problem studied extensively in the field of non-monotonic logic. We present an algorithm for inferring independence values based on a sub-class of non-monotonic logics: the argumentation framework. Our results show the advantage of this approach for structure learning, with improvements in the accuracy of the learned networks of up to 20%. As an alternative to logic-based interdependence among independence tests, we also explore probabilistic interdependence. Our algorithm, called PFMN, takes a Bayesian particle-filtering approach, using a population of Markov network structures to maintain the posterior probability distribution over structures given the outcomes of the tests performed. The result is an approximate algorithm (due to the use of particle filtering) that is useful in domains where independence tests are expensive.
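
To illustrate the independence-based approach the abstract describes, the following is a minimal grow-shrink-style sketch of structure learning over discrete data. The helper names (ci_test, markov_blanket), the pandas/scipy dependencies, and the significance level alpha are illustrative assumptions; this is not the GSMN, GSIMN, or PFMN algorithms presented in the thesis.

# Minimal grow-shrink-style sketch of independence-based structure learning
# on discrete data (illustrative only; not the thesis's GSMN/GSIMN/PFMN).
import numpy as np
import pandas as pd
from scipy.stats import chi2

def ci_test(df, x, y, cond, alpha=0.05):
    """Stratified chi-square test of X independent of Y given cond.
    Returns True when the data are consistent with conditional independence."""
    strata = [df] if not cond else [g for _, g in df.groupby(list(cond))]
    stat, dof = 0.0, 0
    for g in strata:
        table = pd.crosstab(g[x], g[y]).values.astype(float)
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue  # this stratum carries no evidence either way
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
        stat += ((table - expected) ** 2 / np.maximum(expected, 1e-9)).sum()
        dof += (table.shape[0] - 1) * (table.shape[1] - 1)
    return True if dof == 0 else chi2.sf(stat, dof) > alpha

def markov_blanket(df, target, alpha=0.05):
    """Grow-shrink estimate of the Markov blanket of `target`."""
    mb = []
    # Grow: one pass for brevity; a full grow-shrink algorithm repeats this
    # loop until the blanket stops changing.
    for v in (c for c in df.columns if c != target):
        if not ci_test(df, target, v, mb, alpha):
            mb.append(v)
    # Shrink: drop variables rendered independent of the target by the rest
    # of the blanket.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if ci_test(df, target, v, rest, alpha):
            mb.remove(v)
    return mb

# Connecting each variable to its estimated blanket yields a candidate
# Markov network structure:
# edges = {frozenset((t, v)) for t in df.columns for v in markov_blanket(df, t)}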

Copyright
2007