Probabilistic Logical Models (PLMs) combine the powerful formalisms of probability theory and first-order logic to handle uncertainty in large, complex problems. While PLMs provide a very effective learning paradigm under the umbrella of Statistical Relational Learning (SRL) methods, tractable inference remains a significant challenge in these models. Earlier approaches focused on grounding the model into a propositional network so that existing inference algorithms could be applied. Other popular techniques include sampling and lifted inference, with the latter attracting considerable recent interest.
In this talk, I will present three different approaches to accelerating inference in PLMs: first, a preprocessing method for Markov Logic Networks that makes exact grounded inference tractable; second, an approximate inference method called `counting belief propagation' that performs belief propagation on compressed factor graphs; and finally, an `anytime' inference algorithm that returns bounds on the marginal distribution of the query variable. I will present experimental results demonstrating the usefulness of these three distinct, yet related, inference methodologies.
Sriraam Natarajan is currently a Post-Doctoral Research Associate in the Department of Computer Science at the University of Wisconsin-Madison. He received his PhD from Oregon State University, working with Dr. Prasad Tadepalli. His research interests lie in the field of Artificial Intelligence, with emphasis on Machine Learning, Statistical Relational Learning, Reinforcement Learning, Graphical Models, and Biomedical Applications.