Abstract
Dynamic Bayesian networks (DBNs) extend Bayesian networks from static domains to dynamic domains. The only known generic method for exact inference in DBNs is based on dynamic expansion and reduction of active slices. It is effective when the domain evolves relatively slowly, but is reported to be “too expensive” for fast-evolving domains where inference is under time pressure.
This study exploits the stationary nature of problem domains to improve the efficiency of exact inference in DBNs. We propose the construction of a temporally invariant template of a DBN that directly supports exact inference, and discuss issues arising in its construction. The method eliminates the computation associated with the dynamic expansion and reduction of the existing method, and is demonstrated by experimental results.
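To make the idea of a reusable, temporally invariant structure concrete, the following is a minimal sketch, not the paper's junction-tree construction: it assumes a toy HMM-style two-slice DBN with one hidden variable and one observation per slice. The transition and emission potentials form a fixed "template" compiled once, and every time step applies the same local computation instead of expanding the network and re-triangulating.

```python
import numpy as np

# Assumed toy model (illustration only): binary hidden state X_t, binary
# observation Y_t.  The fixed potentials below play the role of the
# temporally invariant template reused at every slice.
transition = np.array([[0.9, 0.1],   # P(X_t | X_{t-1})
                       [0.2, 0.8]])
emission   = np.array([[0.7, 0.3],   # P(Y_t | X_t)
                       [0.1, 0.9]])
prior      = np.array([0.5, 0.5])    # P(X_0)

def advance(belief, observation):
    """One time step with the fixed template: predict, absorb evidence, renormalise."""
    predicted = belief @ transition               # marginalise out X_{t-1}
    updated = predicted * emission[:, observation]
    return updated / updated.sum()

belief = prior
for y in [0, 1, 1, 0]:                            # stream of observations
    belief = advance(belief, y)                   # same computation every slice
    print(belief)
```

Because the structure of the computation never changes from slice to slice, no per-step expansion or reduction of the model is needed; only the numerical beliefs are rolled forward. The paper's contribution is an analogous construction for general DBNs via a temporally invariant junction tree.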