
Learning with globally predictive tests

  • Special Feature
  • New Generation Computing

Abstract

We introduce a new bias for rule learning systems. The bias only allows a rule learner to create a rule that predicts class membership if each test of the rule in isolation is predictive of that class. Although the primary motivation for the bias is to improve the understandability of rules, we show that it also improves the accuracy of learned models on a number of problems. We also introduce a related preference bias that allows creating rules that violate this restriction if they are statistically significantly better than alternative rules without such violations.
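To make the restriction concrete, the following is a minimal sketch of the declarative bias described above: a rule may predict a class only if each of its tests, taken in isolation, raises the probability of that class above its base rate. All function and variable names here are illustrative, and the actual system additionally uses a statistical significance test (the preference bias) rather than this bare probability comparison.

```python
def globally_predictive(examples, test, target_class):
    """Return True if `test` alone is predictive of `target_class`,
    i.e. P(target_class | test) exceeds the base rate P(target_class).
    `examples` is a list of (feature_dict, label) pairs; `test` is a
    (feature, value) pair. A simplified sketch, not the paper's exact code.
    """
    feature, value = test
    covered = [label for feats, label in examples if feats.get(feature) == value]
    if not covered:
        return False
    base_rate = sum(1 for _, label in examples if label == target_class) / len(examples)
    covered_rate = sum(1 for label in covered if label == target_class) / len(covered)
    return covered_rate > base_rate

def rule_allowed(examples, tests, target_class):
    """The bias: a rule predicting `target_class` is admitted only if
    every one of its tests is individually predictive of that class."""
    return all(globally_predictive(examples, t, target_class) for t in tests)
```

For example, with a small dataset in which `color=red` correlates with the positive class but `size=big` does not, a rule containing both tests would be rejected even if its overall coverage is pure, because `size=big` is not predictive in isolation.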




Author information


Michael J. Pazzani, Ph.D.: He is a Full Professor and Chair of the Department of Information and Computer Science at the University of California, Irvine. He obtained his bachelor's degree from the University of Connecticut in 1980 and his Ph.D. from the University of California, Los Angeles in 1987. His research interests are in machine learning, cognitive modeling, and information access. He has published over 100 research papers and 2 books, and has served on the editorial boards of Machine Learning and the Journal of Artificial Intelligence Research.


Cite this article

Pazzani, M.J. Learning with globally predictive tests. New Gener Comput 18, 29–38 (2000). https://doi.org/10.1007/BF03037566
