Lifted first-order probabilistic inference
Publisher:
  • University of Illinois at Urbana-Champaign
  • Champaign, IL
  • United States
ISBN: 978-0-549-34110-9
Order Number: AAI3290183
Pages: 91
Abstract

There has been a long-standing division in AI between logical symbolic and probabilistic reasoning approaches. While probabilistic models can deal well with inherent uncertainty in many real-world domains, they operate on a mostly propositional level. Logic systems, on the other hand, can deal with much richer representations, especially first-order ones. In the last two decades, many probabilistic algorithms accepting first-order specifications have been proposed, but in the inference stage they still operate mostly on a propositional level, where the rich and useful first-order structure is no longer explicit. In this thesis we present a framework for lifted inference on first-order models, that is, inference where the main operations occur on a first-order level, without the need to propositionalize the model. We clearly define the semantics of first-order probabilistic models, present an algorithm (FOVE) that performs lifted inference, and show detailed proofs of its correctness. Furthermore, we describe how to solve the Most Probable Explanation problem with a variant of FOVE, and present a new anytime probabilistic inference algorithm, ABVE, meant to generalize the ability of logical systems to gradually process a model and stop as soon as an answer is available.
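The core saving that lifted inference exploits can be sketched in a few lines: when a model contains many ground factors that are identical up to the name of a logical variable, they can be summed out in one symbolic step instead of being propositionalized one by one. The toy model below (the pairwise factor `phi`, the domain size `n`, and the two marginal routines are illustrative assumptions, not the thesis's FOVE algorithm) compares the grounded computation against the lifted shortcut of raising a single summed-out factor to the n-th power.

```python
import itertools

# Toy parfactor model: a Boolean query variable q and n interchangeable
# Boolean ground variables p(x1)..p(xn), each tied to q by the SAME
# pairwise factor phi[q, p]. (Deliberately unnormalized.)
phi = {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}
n = 5  # size of the logical variable's domain

def propositional_marginal(q):
    """Ground all n copies of p(X) and sum them out explicitly: O(2^n)."""
    total = 0.0
    for assignment in itertools.product([0, 1], repeat=n):
        weight = 1.0
        for p in assignment:
            weight *= phi[(q, p)]
        total += weight
    return total

def lifted_marginal(q):
    """The n ground factors are identical given q, so summing each p(X)
    out independently equals raising one summed-out factor to the n-th
    power: O(1) in the domain size (up to the cost of exponentiation)."""
    return sum(phi[(q, p)] for p in (0, 1)) ** n

# Both routes agree for every value of the query variable.
for q in (0, 1):
    assert abs(propositional_marginal(q) - lifted_marginal(q)) < 1e-12
```

The assertion holds because the n-fold sum of products factorizes as (Σ_p φ(q, p))^n; this interchangeability of ground instances is the same structural property that FOVE's lifted elimination operations exploit at scale.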

Cited By

  1. Braun T and Möller R Lifted Junction Tree Algorithm 39th Annual German Conference on AI on KI 2016: Advances in Artificial Intelligence - Volume 9904, (30-42)
  2. De Salvo Braz R, O'Reilly C, Gogate V and Dechter R Probabilistic inference modulo theories Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, (3591-3599)
  3. Venugopal D, Sarkhel S and Cherry K Non-parametric domain approximation for scalable Gibbs sampling in MLNs Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, (745-754)
  4. Gogate V and Domingos P (2016). Probabilistic theorem proving, Communications of the ACM, 59:7, (107-115), Online publication date: 24-Jun-2016.
  5. Sarkhel S, Singla P and Gogate V Fast lifted MAP inference via partitioning Proceedings of the 29th International Conference on Neural Information Processing Systems - Volume 2, (3240-3248)
  6. Sarkhel S, Venugopal D, Singla P and Gogate V An Integer Polynomial Programming based framework for lifted MAP inference Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, (3302-3310)
  7. Venugopal D and Gogate V Scaling-up importance sampling for Markov Logic Networks Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, (2978-2986)
  8. Venugopal D and Gogate V On lifting the Gibbs sampling algorithm Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, (1655-1663)
  9. Gogate V and Domingos P Probabilistic theorem proving Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, (256-265)
Contributors
  • University of Pennsylvania
  • SRI International