Keywords (7): Experimental Tests, Heuristic Search, High-Dimensional Dataset, Hill Climbing, NP-Hard Problem, Probability Distribution, Bayesian Network
Learning Bayesian networks by hill climbing: efficient methods based on progressive restriction of the neighborhood (Citations: 2)
José A. Gámez, Juan L. Mateo, José Miguel Puerta
Learning Bayesian networks is known to be an NP-hard problem, which is why heuristic search has proven advantageous in many domains. This learning approach is computationally efficient and, even though it does not guarantee an optimal result, many previous studies have shown that it obtains very good solutions. Hill climbing algorithms are particularly popular because of their good trade-off between computational demands and the quality of the models learned. In spite of this efficiency, these algorithms can be improved upon when dealing with high-dimensional datasets, and that is the goal of this paper. We present an approach to improving hill climbing algorithms based on dynamically restricting the candidate solutions to be evaluated during the search process. This proposal, dynamic restriction, is new because previous studies on restricted search are based on two stages rather than the single stage presented here. In addition to the aforementioned advantages of hill climbing algorithms, we show that under certain conditions the model they return is a minimal I-map of the joint probability distribution underlying the training data, a nice theoretical property with practical implications. We provide theoretical results guaranteeing that, under these same conditions, the proposed algorithms also output a minimal I-map. Furthermore, we experimentally test the proposed algorithms over a set of different domains, some of them quite large (up to 800 variables), in order to study their behavior in practice.
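The core idea of restricting the neighborhood dynamically can be illustrated with a toy sketch. The code below is NOT the authors' implementation: the add-arc-only neighborhood, the forbid-on-first-failure rule, and the scoring function are all simplifying assumptions standing in for the paper's full operator set and a decomposable metric such as BDeu.

```python
# Toy sketch of hill climbing over DAG structures with a progressively
# restricted neighborhood (a simplified reading of the paper's idea).

def would_create_cycle(parents, x, y):
    """Adding arc x -> y creates a cycle iff x is already reachable from y."""
    stack, seen = [y], set()
    while stack:
        n = stack.pop()
        if n == x:
            return True
        if n in seen:
            continue
        seen.add(n)
        # children of n are the nodes that list n among their parents
        stack.extend(m for m, ps in parents.items() if n in ps)
    return False

def hill_climb(nodes, score):
    """Greedy search over arc additions; an arc whose tentative addition
    fails to improve the score is forbidden in all later iterations, so
    the candidate neighborhood shrinks as the search progresses."""
    parents = {n: set() for n in nodes}
    forbidden = set()
    current = score(parents)
    improved = True
    while improved:
        improved = False
        best_delta, best_arc = 0.0, None
        for x in nodes:
            for y in nodes:
                if x == y or x in parents[y] or (x, y) in forbidden:
                    continue
                if would_create_cycle(parents, x, y):
                    continue
                parents[y].add(x)                  # tentative addition
                delta = score(parents) - current
                parents[y].remove(x)
                if delta <= 0:
                    forbidden.add((x, y))          # progressive restriction
                elif delta > best_delta:
                    best_delta, best_arc = delta, (x, y)
        if best_arc is not None:
            x, y = best_arc
            parents[y].add(x)
            current += best_delta
            improved = True
    return parents, current

# Stand-in score: reward arcs of a hypothetical true structure A -> B -> C,
# penalise all others (a real learner would score against data).
TRUE_ARCS = {("A", "B"), ("B", "C")}

def toy_score(parents):
    arcs = {(p, c) for c, ps in parents.items() for p in ps}
    return len(arcs & TRUE_ARCS) - len(arcs - TRUE_ARCS)
```

On this toy score, `hill_climb(["A", "B", "C"], toy_score)` recovers the true structure while evaluating fewer arcs on each pass. The sketch also shows the trade-off the paper analyses: an arc forbidden early can never re-enter the neighborhood, which is precisely what makes later iterations cheaper.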
Journal: Data Mining and Knowledge Discovery - DATAMINE, vol. 22, no. 1-2, pp. 106-148, 2011
DOI: 10.1007/s10618-010-0178-6
View Publication
The following links allow you to view full publications. These links are maintained by other sources not affiliated with Microsoft Academic Search.
(www.springerlink.com)
(dx.doi.org)
(www.informatik.uni-trier.de)
Citation Context (2)
...2. This problem is exponential, and there are papers which use an exhaustive search equipped with a parameter k to limit the size of parent sets, as in Friedman and Koller (2003) and Teyssier and Koller (2005)...
...The first improvement, used in previous studies (Friedman and Koller 2003; de Campos and Puerta 2001; Teyssier and Koller 2005), allows the reduction of the number of computations of parent sets needed in any insert operation Insert(σ; i, j) from n to i − j + 1. Thus, the computation of the parent sets for those variables which are not located between the variables at positions i and j can be omitted, as the set of variables ...
Juan I. Alonso-Barba, et al.
Structural learning of Bayesian networks using local algorithms based ...
...Furthermore, it is possible to improve upon the scalability of GES by methods successfully used in the D-space [10,5,6], which basically consist in restricting the search space...
...In this second proposal, denoted as GES-C, we apply a similar approach to the progressive restriction of the neighborhood used in [6]...
Juan I. Alonso-Barba, et al.
Scaling Up the Greedy Equivalence Search Algorithm by Constraining the...
References (52)

Adaptive Probabilistic Networks with Hidden Variables (Citations: 12)
John Binder, Daphne Koller, Stuart Russell, Keiji Kanazawa
Journal: Machine Learning - ML, vol. 29, no. 2, pp. 213-244, 1997

Bayesian Networks for Data Mining (Citations: 10)
David Heckerman, Usama Fayyad
Journal: Data Mining and Knowledge Discovery - DATAMINE, vol. 1, no. 1, pp. 79-119, 1997

Using Bayesian Networks to Analyze Expression Data (Citations: 90)
Nir Friedman, Michal Linial, Iftach Nachman, Dana Pe'er
Journal: Journal of Computational Biology - JCB, vol. 7, no. 3-4, pp. 601-620, 2000

Some Variations on the PC Algorithm (Citations: 5)
Joaquín Abellán, Manuel Gómez-Olmedo, Serafín Moral
Conference: Probabilistic Graphical Models - PGM, pp. 1-8, 2006

A hybrid methodology for learning belief networks: BENEDICT (Citations: 14)
Silvia Acid
Journal: International Journal of Approximate Reasoning - IJAR, vol. 27, no. 3, pp. 235-262, 2001
Citations (2)

Structural learning of Bayesian networks using local algorithms based on the space of orderings
Juan I. Alonso-Barba, Luis de la Ossa, Jose M. Puerta
Journal: Soft Computing - SOCO, vol. 14, no. 11, pp. 115

Scaling Up the Greedy Equivalence Search Algorithm by Constraining the Search Space of Equivalence Classes
Juan I. Alonso-Barba, Luis de la Ossa, Jose Gámez, Jose Puerta