
A review of inference algorithms for hybrid Bayesian networks

Published: 01 May 2018

Abstract

Hybrid Bayesian networks have received increasing attention in recent years. They differ from standard Bayesian networks in that they can host discrete and continuous variables simultaneously, which broadens the applicability of the Bayesian network framework. However, this extra flexibility comes at a cost: inference in these models is computationally more challenging, and the underlying models and updating procedures may not admit closed-form solutions. In this paper we provide an overview of the main trends and principled approaches for performing inference in hybrid Bayesian networks. The methods covered are organized and discussed according to their methodological basis. We consider how the methods have been extended and adapted to (hybrid) dynamic Bayesian networks, and we end with an overview of established software systems supporting inference in these types of models.
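As a minimal illustration of the sampling-based inference the survey reviews, the sketch below estimates a posterior in a tiny hypothetical hybrid network by likelihood weighting: a discrete parent A and a continuous child X whose distribution given A is Gaussian (a one-node conditional linear Gaussian model). The model, parameter values, and function names are illustrative assumptions, not taken from the paper.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_a_given_x(x_obs, p_a1=0.3, mu=(0.0, 2.0), sigma=(1.0, 1.0),
                        n=50_000, seed=0):
    """Estimate P(A=1 | X=x_obs) by likelihood weighting.

    Hypothetical model: A ~ Bernoulli(p_a1), X | A=a ~ N(mu[a], sigma[a]^2).
    Evidence on the continuous variable enters only through the weight,
    so no sample is ever rejected.
    """
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    for _ in range(n):
        a = 1 if rng.random() < p_a1 else 0        # sample the discrete parent
        weights[a] += normal_pdf(x_obs, mu[a], sigma[a])  # weight by evidence
    return weights[1] / (weights[0] + weights[1])
```

Observing x near the A=1 component's mean pushes the posterior toward A=1; in realistic hybrid networks with many interleaved discrete and continuous variables, this is where the exact and approximate schemes surveyed in the paper come into play.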


Cited By

  • (2024) Ask Less, Learn More: Adapting Ecological Momentary Assessment Survey Length by Modeling Question-Answer Information Gain. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(4), 1-32. DOI: 10.1145/3699735.
  • (2023) A Review of Bayesian Networks for Spatial Data. ACM Transactions on Spatial Algorithms and Systems, 9(1), 1-21. DOI: 10.1145/3516523.
  • (2023) Disentangling causality: assumptions in causal discovery and inference. Artificial Intelligence Review, 56(9), 10613-10649. DOI: 10.1007/s10462-023-10411-9.
  • (2020) Bayesian attack graphs for platform virtualized infrastructures in clouds. Journal of Information Security and Applications, 51(C). DOI: 10.1016/j.jisa.2020.102455.
  • (2019) A Latent Variable Model in Conflict Research. Computer Aided Systems Theory – EUROCAST 2019, pp. 36-43. DOI: 10.1007/978-3-030-45093-9_5.
  • (2019) Multiscalar genetic pathway modeling with hybrid Bayesian networks. WIREs Computational Statistics, 12(1). DOI: 10.1002/wics.1479.


Information

Published In

Journal of Artificial Intelligence Research, Volume 62, Issue 1
May 2018, 871 pages
ISSN: 1076-9757

Publisher

AI Access Foundation, El Segundo, CA, United States
