Abstract
Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer deterministic finite automata (DFA) when trained with positive and negative string examples. This paper shows that 2ORNNs can also learn DFA from samples consisting of pairs (W, μ_W), where W is a noisy string of input vectors describing the degree of resemblance of each input to the symbols in the alphabet, and μ_W is the degree of acceptance of the noisy string, computed with a DFA whose behavior has been extended to deal with noisy strings.
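The second-order architecture named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tensor shapes, the sigmoid activation, the random weights, and the choice of reading the acceptance degree from state unit 0 are all assumptions made for the example. It shows the defining feature of a 2ORNN, namely that each next-state unit is driven by products of current-state units and input components, and how a noisy string (degrees of resemblance rather than one-hot symbol codes) feeds the same update.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run_2ornn(W, s0, inputs):
    """Run a second-order recurrent network over a string of input vectors.

    W      : weight tensor of shape (N, N, M) -- one weight per
             (next-state unit, current-state unit, input component) triple.
    s0     : initial state vector of length N.
    inputs : sequence of length-M input vectors; for a noisy string,
             each vector holds the degree of resemblance of the input
             to each alphabet symbol instead of a one-hot code.
    """
    s = s0
    for x in inputs:
        # Second-order update: next state depends on products s_i * x_k.
        s = sigmoid(np.einsum('jik,i,k->j', W, s, x))
    return s

# Toy example (illustrative): 3 state units, 2-symbol alphabet.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3, 2))
s0 = np.array([1.0, 0.0, 0.0])
noisy_string = [np.array([0.9, 0.1]),   # "mostly symbol a"
                np.array([0.2, 0.8])]   # "mostly symbol b"
acceptance = run_2ornn(W, s0, noisy_string)[0]  # degree of acceptance in [0, 1]
```

Training would compare this acceptance degree against the target μ_W and adjust W by gradient descent (e.g. real-time recurrent learning), but that loop is omitted here.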
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Carrasco, R.C., Forcada, M.L. (1995). Second-order recurrent neural networks can learn regular grammars from noisy strings. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_228
Print ISBN: 978-3-540-59497-0
Online ISBN: 978-3-540-49288-7