Abstract
We show that, within the Gold paradigm for language learning, an informer for a superfinite class can force an optimal MDL learner to make an infinite number of mind changes. In this setting an optimal learner can make an infinite number of wrong choices without approximating the right solution. This result clarifies the relation between MDL and identification in the limit: MDL is an optimal model-selection paradigm, whereas identification in the limit imposes recursion-theoretic conditions for the convergence of a learner.
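The phenomenon can be illustrated with the classic superfinite class from Gold's argument: the finite languages L_n = {a^i : 1 ≤ i ≤ n} together with the infinite language L_∞ = {a^i : i ≥ 1}. The sketch below (illustrative only; the function names, unary alphabet, and learner are assumptions, not the paper's exact construction) shows a learner that always conjectures the smallest language consistent with the data. On a presentation of L_∞ it changes its mind at every step, so its mind changes grow without bound:

```python
# Illustrative sketch of the superfinite argument, NOT the paper's construction.
# Class: L_n = {a^i : 1 <= i <= n} for n >= 1, plus L_inf = {a^i : i >= 1}.
# The learner conjectures the smallest consistent language; each longer
# string in a presentation of L_inf forces a new hypothesis.

def smallest_consistent(sample):
    """Return n such that L_n is the minimal finite language covering the sample."""
    return max(len(s) for s in sample)

def count_mind_changes(steps):
    """Feed the presentation a, aa, aaa, ... and count hypothesis changes."""
    sample, hypothesis, changes = [], None, 0
    for i in range(1, steps + 1):
        sample.append("a" * i)              # next datum from L_inf
        new_h = smallest_consistent(sample)  # minimal consistent conjecture
        if new_h != hypothesis:              # a mind change
            changes += 1
            hypothesis = new_h
    return changes

print(count_mind_changes(100))  # one mind change per datum: 100
```

Each prefix of the presentation is consistent with some finite L_n, so the minimal conjecture is always overturned by the next datum; a learner that prefers the shortest consistent description never stabilizes on L_∞.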
References
Gold, E.M.: Language Identification in the Limit. Information and Control 10(5), 447–474 (1967)
Grünwald, P.D.: The Minimum Description Length Principle. MIT Press, Cambridge (2007)
Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and Its Applications, 3rd edn. Springer, New York (2008)
Zeugmann, T., Lange, S.: A Guided Tour Across the Boundaries of Learning Recursive Languages. In: Lange, S., Jantke, K.P. (eds.) GOSLER 1994. LNCS (LNAI), vol. 961, pp. 190–258. Springer, Heidelberg (1995)
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Adriaans, P., Mulder, W. (2010). MDL in the Limit. In: Sempere, J.M., García, P. (eds.) Grammatical Inference: Theoretical Results and Applications. ICGI 2010. Lecture Notes in Computer Science, vol. 6339. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15488-1_21
DOI: https://doi.org/10.1007/978-3-642-15488-1_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15487-4
Online ISBN: 978-3-642-15488-1