Differences of Training Structures on Stimulus Class Formation in Computational Agents
Figure 1. Example of 2 stimulus classes (1 and 2) with 3 members (A, B, and C). The left part displays the training phase, and the right part corresponds to the transitivity, symmetry, and reflexivity tests.
Figure 2. Members (A, B, C, and D) of class n are trained according to the training structure, as shown by the solid arrows. The emergent relations of reflexivity, transitivity, and symmetry used for evaluation are shown as dashed arrows.
Figure 3. Network architectures for the 4 agents. All ANNs have the same 60 input and 3 output units. The numbers of hidden units per layer are: Wide1 (20,000); Wide2 (2000, 2000); Deep1 (105, 90, 75, 60, 45, 30); Deep2 (100, 100, 100, 100, 100, 100, 100, 100, 100, 100).
Figure 4. The emergent relations of transitivity (AC) and reflexivity (BB) observed in the Wide1 and Wide2 agents are shown as dashed arrows; the LS-trained relations (AB, BC) are shown as solid arrows.
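The four agents share the input and output dimensions and differ only in their hidden layers. Below is a minimal sketch of how such architectures could be instantiated, assuming a scikit-learn-style MLPClassifier; the activation, solver, and iteration limit are illustrative assumptions, not the values reported by the authors.

```python
from sklearn.neural_network import MLPClassifier

# Hidden-layer configurations taken from the Figure 3 caption;
# 60 input units and 3 output units are fixed by the task encoding.
HIDDEN_LAYERS = {
    "Wide1": (20000,),
    "Wide2": (2000, 2000),
    "Deep1": (105, 90, 75, 60, 45, 30),
    "Deep2": (100,) * 10,
}

def build_agent(name: str) -> MLPClassifier:
    """Build one of the four ANN agents (hyperparameters are illustrative)."""
    return MLPClassifier(hidden_layer_sizes=HIDDEN_LAYERS[name],
                         activation="relu",   # assumption
                         solver="adam",       # assumption
                         max_iter=1000)       # assumption

agents = {name: build_agent(name) for name in HIDDEN_LAYERS}
```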
Abstract
1. Introduction
Related Work
2. Materials and Methods
2.1. MTS Procedure
2.2. Stimulus Encoding
2.3. ANN Agents
2.4. Simulations
3. Results
4. Discussion
4.1. Behavioural Analytical Perspective
4.2. ML Perspective
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
SE | Stimulus Equivalence
ANN | Artificial Neural Network
ML | Machine Learning
DL | Deep Learning
RL | Reinforcement Learning
MTS | Matching To Sample
TS | Training Structure
MTO | Many-To-One
OTM | One-To-Many
LS | Linear Structure
Stimulus | Digit Sequence
---|---
A1 | 100000000000000 |
A2 | 010000000000000 |
A3 | 001000000000000 |
A4 | 000100000000000 |
TX | 000010000000000 |
B1 | 000001000000000 |
B2 | 000000100000000 |
B3 | 000000010000000 |
B4 | 000000001000000 |
TY | 000000000100000 |
C1 | 000000000010000 |
C2 | 000000000001000 |
C3 | 000000000000100 |
C4 | 000000000000010 |
TZ | 000000000000001 |
Response | Digit Sequence
---|---
1 | 100 |
2 | 010 |
3 | 001 |
none | 000 |
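Each matching-to-sample trial can thus be encoded by concatenating the 15-digit codes of the sample and the three comparison stimuli into the 60-digit input, with the 3-digit code marking the correct comparison as output. The sketch below illustrates this encoding; the helper names and the ordering of comparisons within a trial are assumptions for illustration.

```python
import numpy as np

# Stimulus order matches the digit positions in the encoding table above.
STIMULI = ["A1", "A2", "A3", "A4", "TX",
           "B1", "B2", "B3", "B4", "TY",
           "C1", "C2", "C3", "C4", "TZ"]

def one_hot(stimulus):
    """15-digit one-hot code for a single stimulus."""
    code = np.zeros(len(STIMULI))
    code[STIMULI.index(stimulus)] = 1.0
    return code

def encode_trial(sample, comparisons):
    """Concatenate the sample and 3 comparisons into the 60-digit input."""
    return np.concatenate([one_hot(s) for s in [sample] + comparisons])

# Example: an AB training trial with sample A1 and comparisons B1, B2, B3.
x = encode_trial("A1", ["B1", "B2", "B3"])
y = np.array([1, 0, 0])   # response "1": the first comparison (B1) is correct
assert x.shape == (60,)
```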
TS | Pairs’ Group | Wide1 | Wide2 | Deep1 | Deep2
---|---|---|---|---|---
LS | Train | 1.00 | 1.00 | 1.00 | 0.00
LS | Reflexivity | 0.50 | 0.50 | 0.43 | 0.00
LS | Symmetry | 0.00 | 0.00 | 0.00 | 0.00
LS | Transitivity | 1.00 | 1.00 | 0.90 | 0.00
LS | Transitivity–Symmetry | 0.00 | 0.00 | 0.00 | 0.00
OTM | Train | 1.00 | 1.00 | 1.00 | 0.00
OTM | Reflexivity | 0.00 | 0.00 | 0.00 | 0.00
OTM | Symmetry | 0.00 | 0.00 | 0.00 | 0.00
OTM | Transitivity | 0.00 | 0.00 | 0.00 | 0.00
MTO | Train | 1.00 | 1.00 | 1.00 | 0.33
MTO | Reflexivity | 0.00 | 0.00 | 0.00 | 0.10
MTO | Symmetry | 0.00 | 0.00 | 0.00 | 0.00
MTO | Transitivity | 0.00 | 0.00 | 0.00 | 0.00
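The values in the table above are proportions of correct responses per group of pairs. A minimal sketch of such a per-group accuracy computation is given below, assuming trials grouped into a dictionary and an agent exposing a scikit-learn-style predict_proba; both are hypothetical placeholders, not the authors' evaluation code.

```python
import numpy as np

def group_accuracy(agent, trials_by_group):
    """Mean proportion of correct responses per pairs' group.

    trials_by_group maps a group name (e.g., "Train", "Reflexivity") to a
    list of (x, y) pairs: x is a 60-digit trial encoding, y the 3-digit
    one-hot code of the correct comparison.
    """
    scores = {}
    for group, trials in trials_by_group.items():
        X = np.stack([x for x, _ in trials])
        y_true = np.argmax(np.stack([y for _, y in trials]), axis=1)
        y_pred = np.argmax(agent.predict_proba(X), axis=1)
        scores[group] = float(np.mean(y_pred == y_true))
    return scores
```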
Pair | Wide1 | Wide2 | Deep1 | Deep2
---|---|---|---|---
A1A1 | 0.00 | 0.00 | 0.00 | 0.00 |
A2A2 | 0.00 | 0.00 | 0.00 | 0.00 |
A3A3 | 0.00 | 0.00 | 0.00 | 0.00 |
A4A4 | 0.00 | 0.00 | 0.00 | 0.00 |
B1B1 | 1.00 | 1.00 | 1.00 | 0.00 |
B2B2 | 1.00 | 1.00 | 0.67 | 0.00 |
B3B3 | 1.00 | 1.00 | 1.00 | 0.00 |
B4B4 | 1.00 | 1.00 | 0.67 | 0.00 |
C1C1 | 0.00 | 0.00 | 0.00 | 0.00 |
C2C2 | 0.00 | 0.00 | 0.00 | 0.00 |
C3C3 | 0.00 | 0.00 | 0.00 | 0.00 |
C4C4 | 0.00 | 0.00 | 0.00 | 0.00 |
Pair | Wide1 | Wide2 | Deep1 | Deep2
---|---|---|---|---
A1C1 | 1.00 | 1.00 | 1.00 | 0.00 |
A2C2 | 1.00 | 1.00 | 0.67 | 0.00 |
A3C3 | 1.00 | 1.00 | 1.00 | 0.00 |
A4C4 | 1.00 | 1.00 | 0.67 | 0.00 |
C1A1 | 0.00 | 0.00 | 0.00 | 0.00 |
C2A2 | 0.00 | 0.00 | 0.00 | 0.00 |
C3A3 | 0.00 | 0.00 | 0.00 | 0.00 |
C4A4 | 0.00 | 0.00 | 0.00 | 0.00 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).