Artificial Development of Biologically Plausible Neural-Symbolic Networks

Abstract

Neural-symbolic networks are neural networks designed to represent logic programs. One motivation for this is to work towards a biologically plausible model of knowledge representation in the brain. This paper reviews work in this area and suggests that a promising next step is to evolve neural-symbolic networks using artificial development, which itself has some biological plausibility. This idea is supported by a review of artificial development, followed by initial results in which artificial development is used to evolve a neural-symbolic SHRUTI network, demonstrating how the fields of neural-symbolic integration and artificial development may be integrated. The experiments succeeded in evolving genomes that could develop connections between neurons in working SHRUTI networks.
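
The evolve-develop-evaluate loop described above can be illustrated with a small toy sketch. The Python listing below is a hypothetical simplification, not the paper's actual method: it assumes a genome is just a list of (source, target) genes, that "development" means expressing each gene as a connection, and that fitness counts how many of a required set of SHRUTI connections the developed network contains. The names TARGET_EDGES, develop and fitness are illustrative inventions; the real genome representation and developmental model are described in the paper's experiments.

# Illustrative sketch only: toy evolve-develop-evaluate loop in the spirit of
# the abstract. The genome encoding, development rule and fitness function are
# hypothetical simplifications, not the paper's SHRUTI genome representation.
import random

TARGET_EDGES = {(0, 3), (1, 3), (2, 4), (3, 4)}   # connections a working network needs (assumed)
NUM_NEURONS = 5
GENOME_LEN = 8                                     # each gene may grow one connection

def develop(genome):
    """Grow a set of connections by expressing each gene as a (source, target) pair."""
    edges = set()
    for src, dst in genome:
        if src != dst:
            edges.add((src, dst))
    return edges

def fitness(genome):
    """Count how many required connections the developed network contains."""
    return len(develop(genome) & TARGET_EDGES)

def random_genome():
    return [(random.randrange(NUM_NEURONS), random.randrange(NUM_NEURONS))
            for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.2):
    """Resample each gene with a small probability."""
    return [((random.randrange(NUM_NEURONS), random.randrange(NUM_NEURONS))
             if random.random() < rate else gene) for gene in genome]

# Simple truncation-selection evolutionary loop.
population = [random_genome() for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "developed edges:", develop(best))

In the work reviewed here the developmental rules are far richer than a direct gene-to-connection mapping, but the same loop of evolving genomes, developing them into networks and evaluating the result applies.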

Notes

  1. One exception is Learning Classifier Systems [64], which are beyond the scope of this review.

  2. Though the term has not been cited in any papers, it was coined by Murray Shanahan in a presentation at COGRIC, the slides for which can be found at: http://www.cogric.reading.ac.uk/presentations/murray_shanahan.ppt.

  3. SHRUTI is a Sanskrit word meaning “what is heard directly”. The name was chosen because the SHRUTI developers draw a parallel between the dynamic pattern of acoustic energy required to encode “what is heard” and the dynamic patterns of neuron spikes which SHRUTI uses to propagate information.

References

  1. Bader S, Hitzler P. Dimensions of neural-symbolic integration: a structured survey. In: Artemov S, Barringer H, d’Avila Garcez AS, Lamb LC, Woods J, editors. We will show them: essays in honour of Dov Gabbay. London: College Publications; 2005. p. 167–94.

  2. Hammer B, Hitzler P. Perspectives of neural-symbolic integration. Berlin: Springer; 2007.

  3. d’Avila Garcez AS, Lamb LC, Gabbay DM. Neural-symbolic cognitive reasoning. New York City: Springer Publishing Company; 2008.

  4. de Penning L, Kappé B, van den Bosch K. A neural-symbolic system for automated assessment in training simulators. In: Proceedings of the fifth international workshop on neural-symbolic learning and reasoning (NeSy 09); 2009. p. 35–8.

  5. de Penning L, den Hollander RJM, Bouma H, Burghouts GJ, d’Avila Garcez AS. A neural-symbolic cognitive agent with a mind’s eye. In: Proceedings of AAAI workshop on neural-symbolic learning and reasoning NeSy12. Toronto, Canada; 2012. p 9–14.

  6. Shastri L, Ajjanagadde V. From simple associations to systematic reasoning. Behav Brain Sci. 1993;16(3):417–94.

  7. Shastri L. Advances in SHRUTI—a neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony. Appl Intell. 1999;11:79–108.

  8. Shastri L. SHRUTI: a neurally motivated architecture for rapid, scalable inference. In: Hammer B, Hitzler P, editors. Perspectives of neural-symbolic integration. Berlin: Springer; 2007. p. 183–203.

  9. Mukerjee A. Using attentive focus to discover action ontologies from perception. In: Proceedings of the fifth international workshop on neural-symbolic learning and reasoning (NeSy 09); 2009. p. 9–15.

  10. Wichert A. Neural sub-symbolic reasoning. In: Seventh international workshop on neural-symbolic learning and reasoning (NeSy ‘11); 2011. p. 2–7.

  11. Chavoya A. Artificial development. In: Abraham A, Vasilakos AV, Pedrycz W, Hassanien A, editors. Foundations of computational intelligence. Berlin: Springer; 2009. p. 185–215.

  12. Siegel A, Sapru HN. Essential neuroscience. Lippincott Williams & Wilkins; 2011.

  13. Twyman RM. Instant notes in developmental biology. Oxford: BIOS Scientific Publishers Limited; 2001.

  14. Bowers JS. On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychol Rev. 2009;116(1):220–51.

  15. Plaut DC, McClelland JL. Locating object knowledge in the brain: comment on Bowers’s (2009) attempt to revive the grandmother cell hypothesis. Psychol Rev. 2010;117(1):284–90.

  16. Goel V. Cognitive neuroscience of thinking. In: Berntson GG, Cacioppo JT, editors. Handbook of neuroscience for the behavioural sciences. New York, NY: Wiley; 2009.

  17. Prado J, Chadha A, Booth JR. The brain network for deductive reasoning: a quantitative meta-analysis of 28 neuroimaging studies. J Cogn Neurosci. 2011;23(11):3483–97.

  18. Baars BJ. A cognitive theory of consciousness. Cambridge, MA: Cambridge University Press; 1988.

  19. Shanahan M. A cognitive architecture that combines internal simulation with a global workspace. Conscious Cogn. 2006;15(2).

  20. Shanahan M. A spiking neuron model of cortical broadcast and competition. Conscious Cogn. 2008;17(1):288–303.

  21. Jacobsson H. Rule extraction from recurrent neural networks: a taxonomy and review. Neural Comput. 2005;17(6):1223–63.

  22. Gross CG. Genealogy of the “grandmother cell”. Neuroscientist. 2002;8(5):512–8.

  23. Hölldobler S, Kalinke Y. Towards a new massively parallel computational model for logic programming. In: ECAI 94 workshop on combining symbolic and connectionist processing; 1994. p. 68–77.

  24. Hölldobler S, Kalinke Y, Storr HP. Approximating the semantics of logic programs by recurrent neural networks. Appl Intell. 1999;11:45–58.

  25. Bader S, Hitzler P, Hölldobler S, Witzel A. A fully connectionist model generator for covered first-order logic programs. In: Veloso MM, editor. Proceedings of the twentieth international joint conference on artificial intelligence (IJCAI-07). Hyderabad: AAAI Press; 2007. p. 666–71.

  26. d’Avila Garcez AS, Gabbay DM. Fibring neural networks. In: Proceedings of 19th national conference on artificial intelligence (AAAI ‘04); 2004. p. 342–7.

  27. Ray O, Golénia B. A neural network approach for first-order abductive inference. In: Proceedings of the fifth international workshop on neural-symbolic learning and reasoning (NeSy 09); 2009. p. 2–8.

  28. d’Avila Garcez AS, Broda K, Gabbay DM. Neural-symbolic learning systems: foundations and applications. Berlin: Springer; 2002.

  29. Colombo Tosatto S, Boella G, Van Der Torre L, d’Avila Garcez AS, Genovese V. Embedding normative reasoning into neural symbolic systems. In: Seventh international workshop on neural-symbolic reasoning and learning (NeSy ‘11); 2011. p. 19–24.

  30. Komendantskaya E, Zhang Q. SHERLOCK—an interface for neuro-symbolic networks. In: Proceedings of the seventh international workshop on neural-symbolic learning and reasoning (NeSy ‘11); 2011. p. 39–40.

  31. d’Avila Garcez AS, Lamb LC, Gabbay DM. Connectionist modal logic: representing modalities in neural networks. Theor Comput Sci. 2007;371(1–2):34–53.

  32. d’Avila Garcez AS, Lamb LC. Reasoning about time and knowledge in neural-symbolic learning systems. In: Advances in neural information processing systems 16, proceedings of NIPS 2003; 2003. p. 921–928.

  33. d’Avila Garcez AS, Lamb LC, Gabbay DM. Neural-symbolic intuitionistic reasoning. In: Abraham A, Köppen M, Franke K, editors. Design and application of hybrid intelligent systems. Amsterdam, The Netherlands: IOS Press; 2003.

  34. Wendelken C, Shastri L. Acquisition of concepts and causal rules in SHRUTI. In: Proceedings of the twenty fifth annual conference of the cognitive science society. Boston, MA; 2003.

  35. Hebb DO. The organization of behavior: a neuropsychological theory. New York: Wiley; 1949.

  36. Kumar A, Rotter S, Aertsen A. Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding. Nat Rev Neurosci. 2010;11(9):615–27.

  37. Feldman JA. Dynamic connections in neural networks. Biol Cybern. 1982;46:27–39.

  38. Slater A, Bremner G, editors. An introduction to developmental psychology. 2nd ed. Hoboken: Wiley; 2011.

  39. Marcus G. Plasticity and nativism: towards a resolution of an apparent paradox. In: Wermter S, Austin J, Willshaw D, editors. Emergent neural computational architectures based on neuroscience. Berlin: Springer; 2001. p. 368–82.

  40. Townsend J, Keedwell E, Galton A. A scalable genome representation for neural-symbolic networks. In: Proceedings of the first symposium on nature inspired computing and applications (NICA) at the AISB/IACAP world congress 2012. Birmingham; 2012.

  41. Pollack JB. Recursive distributed representations. Artif Intell. 1990;46(1–2):77–105.

  42. Martinetz T, Schulten K. Topology representing networks. Neural Netw. 1994;7(3):507–22.

  43. Mukerjee A, Dabbeeru MM. Symbol emergence in design. In: Proceedings of the fifth international workshop on neural-symbolic learning and reasoning (NeSy 09); 2009. p. 29–34.

  44. Anderson JR. Cognitive psychology and its implications. 4th ed. San Francisco: W. H. Freeman and Company; 1995.

  45. Stanley KO, Miikkulainen R. Efficient evolution of neural network topologies. In: Langdon WB, Cantu-Paz E, Mathias KE, Roy R, Davis D, Poli R, et al., editors. Proceedings of the genetic and evolutionary computation conference. Piscataway, NJ: Morgan Kaufmann; 2002. p. 1757–62.

  46. Siebel NT, Sommer G. Evolutionary reinforcement learning of artificial neural networks. Int J Hybrid Intel Syst. 2007;4(3):171–83.

  47. Eggenberger Hotz P, Gómez G, Pfeifer R. Evolving the morphology of a neural network for controlling a foveating retina—and its test on a real robot. In: Proceedings of the eighth international symposium on artificial life; 2003. p. 243–51.

  48. Eggenberger P. Creation of neural networks based on developmental and evolutionary principles. In: Proceedings of the international conference on artificial neural networks. Berlin: Springer; 1997. p. 337–42.

  49. de Garis H, Korkin M, Fehr G. The CAM-brain machine (CBM). J Auton Robots. 2001;10.

  50. Khan GM, Miller JF, Halliday DM. Intelligent agents capable of developing memory of their environment. In: Loula A, Queiroz J, editors. Advances in modeling adaptive and cognitive systems. UEFS; 2010. p. 77–114.

  51. Kitano H. Neurogenetic learning: an integrated method of designing and training neural networks using genetic algorithms. Phys D. 1994;75(1–3):225–38.

  52. Gruau F. Automatic definition of modular neural networks. Adapt Behav. 1994;3(2):151–83.

  53. Lee DW, Kong SG, Sim KB. Evolvable neural networks based on developmental models for mobile robot navigation. In: Proceedings of the international joint conference on neural networks (IJCNN); 2005. p. 337–42.

  54. Dawkins R. The blind watchmaker: why the evidence of evolution reveals a universe without design. New York: Norton; 1986.

  55. Lindenmayer A. Mathematical models for cellular interactions in development. J Theor Biol. 1968;18(3):280–99.

  56. Nolfi S, Parisi D. Learning to adapt to changing environments in evolving neural networks. Adapt Behav. 1996;5:75–98.

  57. de Campos LML, Roisenberg M, de Oliveira RCL. Automatic design of neural networks with L-systems and genetic algorithms—a biologically inspired methodology. In: Proceedings of international joint conference on neural networks. San Jose, California; 2011.

  58. de Jong H. Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol. 2002;9(1):67–103.

  59. Eggenberger P. Evolving morphologies of simulated 3d organisms based on differential gene expression. In: Proceedings of the 4th European conference on artificial life (ECAL97). Cambridge: MIT Press; 1997. p. 205–13.

  60. de Garis H. Artificial embryology and cellular differentiation. In: Evolutionary design by computers. Morgan Kaufmann; 1999. p. 281–95.

  61. Miller JF, Thomson P. Cartesian genetic programming. In: Proceedings of the 3rd European conference on genetic programming. Berlin: Springer; 2000. p. 121–32.

  62. Miller JF, Khan GM. Where is the brain inside the brain? Memet Comput. 2011;3(3):217–28.

  63. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6(2):182–97.

  64. Butz MV. Rule-based evolutionary online learning systems: a principled approach to LCS analysis and design. Berlin: Springer; 2005.

Author information

Correspondence to Joe Townsend.

Cite this article

Townsend, J., Keedwell, E. & Galton, A. Artificial Development of Biologically Plausible Neural-Symbolic Networks. Cogn Comput 6, 18–34 (2014). https://doi.org/10.1007/s12559-013-9217-0
