Keywords (3): Computational Biology, Convex Relaxation, Graphical Model
Accelerated Dual Decomposition for MAP Inference (Citations: 1)
Vladimir Jojic, Stephen Gould, Daphne Koller
Approximate MAP inference in graphical models is an important and challenging problem for many domains including computer vision, computational biology and natural language understanding. Current state-of-the-art approaches employ convex relaxations of these problems as surrogate objectives, but only provide weak running time guarantees. In this paper, we develop an approximate inference algorithm that is both efficient and has strong theoretical guarantees. Specifically, our algorithm is guaranteed to converge to an ε-accurate solution of the convex relaxation in O(1/ε) time. We demonstrate our approach on synthetic and real-world problems and show that it outperforms current state-of-the-art techniques.
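The accelerated scheme the abstract refers to is, at its core, Nesterov's optimal first-order method: a gradient step taken from an extrapolated point, followed by a momentum update. A minimal sketch on a toy smooth objective — the quadratic, the iteration count, and the function names here are illustrative assumptions, not the paper's actual smoothed dual:

```python
import numpy as np

def accelerated_gradient(grad, x0, L, num_iters):
    """Nesterov's accelerated gradient method for an L-smooth convex objective."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()                                  # extrapolated point
    t = 1.0
    for _ in range(num_iters):
        x_next = y - grad(y) / L                  # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum update
        x, t = x_next, t_next
    return x

# Toy smooth objective: f(x) = 0.5 x^T A x - b^T x, with gradient A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = float(np.linalg.eigvalsh(A).max())    # gradient Lipschitz constant = largest eigenvalue
x_star = np.linalg.solve(A, b)            # exact minimizer, for comparison
x_hat = accelerated_gradient(lambda x: A @ x - b, np.zeros(2), L, 500)
```

For an L-smooth objective this scheme needs O(√(L/ε)) iterations to reach accuracy ε; when a nonsmooth dual is smoothed so that L grows like 1/ε, that is roughly how the overall O(1/ε) rate quoted in the abstract arises.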
Conference: International Conference on Machine Learning - ICML, pp. 503-510, 2010
View Publication
Full-text links: robotics.stanford.edu, ai.stanford.edu, www.icml2010.org, www.informatik.uni-trier.de
Citation Context (1)
...In a most recent work [7], smoothing of the dual objective was addressed with Nesterov's optimal first-order optimization scheme...
...Our approach is similar to the method described in [7] but differs from it in essential technical details: (a) Instead of using a fixed Lipschitz constant for a gradient step of the algorithm, we adaptively estimate this constant during the iteration...
...In contrast, no stopping criterion was specified in [7]...
...This lemma is analogous to the "Computing Lipschitz" lemma in [7] with the significant difference that we consider the ℓ2-norm instead of the ℓ1-norm...
...Jojic et al. [7] inconsistently apply an algorithm based on the ℓ2-norm, however...
...Therefore, the role of the ℓ1-Lipschitz estimate (which reads L = 2, see [7]) for the algorithm design remains unclear in [7]...
...A simple alternative is to set ω_t = L in view of Lemma 2, as done in [7]...
...Theorem 1. Thus, according to (14) and (9), this lemma can be directly applied to Algorithm 1, as done in [14] and [7]...
...We also applied the calculation of the Lipschitz constant L as suggested in [7]...
...For the Tsukuba model, this effect is not so pronounced, which explains good applied results reported in [7]...
...As can be seen in Figure 1 (bottom), the remaining gap is not significantly larger in this case, however we observed violations of (13)... (Figure 1: Nesterov's method for the synthetic (top) and Tsukuba (bottom) models with 3 different ways of Lipschitz constant L selection: (a) fixed, (b) adaptive, (c) L selected according to [7])...
...While adaptive estimation outperforms the fixed setting, the method suggested in [7] produces invalid values of the Lipschitz constant and does not converge to the optimum...
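The adaptive Lipschitz estimation discussed in the excerpts above can be sketched as a standard backtracking procedure: increase a trial constant L until the descent-lemma condition holds, so the gradient step with step size 1/L makes guaranteed progress. A minimal illustration — the quadratic test objective, the doubling factor, and the function names are assumptions for the example, not taken from either paper:

```python
import numpy as np

def backtracking_lipschitz(f, grad, y, L0=1.0, eta=2.0):
    """Adaptively estimate a local Lipschitz constant for the gradient.

    Increases the trial constant L by the factor eta until the
    descent-lemma condition
        f(y - grad(y)/L) <= f(y) - ||grad(y)||^2 / (2 L)
    holds at the gradient step taken from y.
    """
    g = grad(y)
    L = L0
    while True:
        x = y - g / L                              # trial gradient step
        if f(x) <= f(y) - g.dot(g) / (2.0 * L):
            return L, x                            # L is locally valid
        L *= eta                                   # otherwise increase and retry

# Toy smooth objective: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

L_est, x1 = backtracking_lipschitz(f, grad, np.zeros(2))
```

Because L only ever grows within one call, the estimate stays within a factor eta of the smallest constant that satisfies the condition at that point, which can be much smaller than the global worst-case Lipschitz constant.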
Bogdan Savchynskyy, et al.
A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling
References (16)
Nonlinear Programming (Citations: 1716)
D P Bertsekas
Journal: Journal of The Operational Research Society - J OPER RES SOC, vol. 48, no. 3, pp. 334-334, 1997
Training structural SVMs when exact inference is intractable (Citations: 44)
Thomas Finley, Thorsten Joachims
Conference: International Conference on Machine Learning - ICML, pp. 304-311, 2008
Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations (Citations: 37)
Amir Globerson, Tommi Jaakkola
Conference: Neural Information Processing Systems - NIPS, 2007
Lagrangian Relaxation for MAP Estimation in Graphical Models (Citations: 18)
Jason K. Johnson, Dmitry M. Malioutov, Alan S. Willsky
Journal: Computing Research Repository - CORR, vol. abs/0710.0, 2007
Convergent Tree-Reweighted Message Passing for Energy Minimization (Citations: 255)
Vladimir Kolmogorov
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence - PAMI, vol. 28, no. 10, pp. 1568-1583, 2006
Citations (1)
A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling (Citations: 1)
Bogdan Savchynskyy, Jorg Kappes, Stefan Schmidt, Christoph Schnorr
Conference: Computer Vision and Pattern Recognition - CVPR, pp. 1817-1823, 2011