Abstract
We model a multiagent system (MAS) in socio-technical terms, combining a social layer consisting of norms with a technical layer consisting of actions that the agents execute. This approach emphasizes autonomy, and makes assumptions about both the social and technical layers explicit. Autonomy means that agents may violate norms. In our approach, agents are computational entities, with each representing a different stakeholder. We express stakeholder requirements of the form that a MAS is resilient in that it can recover (sufficiently) from a failure within a (sufficiently short) duration. We present ReNo, a framework that computes probabilistic and temporal guarantees on whether the underlying requirements are met or, if failed, recovered. ReNo supports the refinement of the specification of a socio-technical system through methodological guidelines to meet the stated requirements. An important contribution of ReNo is that it shows how the social and technical layers can be modeled jointly to enable the construction of resilient systems of autonomous agents. We demonstrate ReNo using a manufacturing scenario with competing public, industrial, and environmental requirements.
Acknowledgements
We dedicate this article to the memory of Professor Aditya Ghose, who passed away unexpectedly in February 2023. This work was initiated with support from a grant from the University of Wollongong and NC State University through a collaboration network called the University Global Partnership Network. MPS additionally thanks the US National Science Foundation (grant IIS-1908374) for partial support.
Appendix A Java code: translation of an STS specification into a PRISM model
1.1 Code snippets for the initial state in the PRISM model
Listing 5 shows the Java code that returns the initial state of the PRISM model. The list variables contains the agent variable and all state variables. The call setValue(0, 1) on Line 2 of Listing 5 assigns the value 1 to the variable at index 0 of the variables list (here, the agent variable is at index 0). Similarly, random values are assigned to the other state variables.
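Listing 5 itself is not reproduced in this excerpt. As a rough illustration only, the following sketch assumes PRISM's explicit-model API (the parser.State class and a ModelGenerator-style getInitialState() method); the variable names and the random binary initial values are illustrative assumptions, not the authors' actual code.

import java.util.List;
import java.util.Random;

import parser.State;

// Illustrative sketch of returning the initial PRISM state; assumes PRISM's
// parser.State API. Variable names are hypothetical.
public class InitialStateSketch {

    // Index 0 holds the agent turn variable; the rest are state variables.
    private final List<String> variables = List.of("agent", "stateVar1", "stateVar2");
    private final Random random = new Random();

    public State getInitialState() {
        State initialState = new State(variables.size());
        // Assign value 1 to the variable at index 0 (the agent variable),
        // so the first agent takes the first turn.
        initialState.setValue(0, 1);
        // Assign random (here binary) initial values to the remaining state variables.
        for (int v = 1; v < variables.size(); v++) {
            initialState.setValue(v, random.nextInt(2));
        }
        return initialState;
    }
}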
1.2 Code snippets to calculate the action selection probability
1.3 Code snippets for state transitions in the PRISM model
Listing 7 shows a snippet of the Java code for state transitions in the model. The function computeTransitionTarget(int i, int offset) computes the nondeterministic outcomes of executing all mechanisms of the selected agent. Line 2 of Listing 7 retrieves the current state. Lines 3 to 5 give each agent a turn using round-robin execution. In the initial state, the value of the agent variable is 1 (as explained in the use case), meaning that the acting agent is Company. In the next state, the value of agent is 2 (i.e., Hospital); then 3 (i.e., Regulator); and then 1 (i.e., Company) again. The remaining state variables are updated with new values on Line 8 of Listing 7.
The function computeTransitionTarget(int i, int offset) computes the nondeterministic outcomes of executing all mechanisms for the selected agent, and the function getTransitionProbability(int i, int offset) computes the probability of each state transition.
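Listing 7 is likewise not reproduced here. As an illustration only, the following sketch assumes PRISM's ModelGenerator-style callbacks (exploreState, computeTransitionTarget, getTransitionProbability) and the parser.State class; the number of agents, the state layout, the applyMechanism helper, and the uniform outcome probabilities are assumptions made for illustration, not the authors' actual code.

import parser.State;
import prism.PrismException;

// Illustrative sketch of round-robin state transitions in a PRISM model.
public class TransitionSketch {

    private final int numAgents = 3; // 1 = Company, 2 = Hospital, 3 = Regulator
    private State exploreState;      // the state currently being explored

    public void exploreState(State state) {
        this.exploreState = state;
    }

    // Target of probabilistic outcome 'offset' of nondeterministic choice 'i'.
    public State computeTransitionTarget(int i, int offset) throws PrismException {
        State target = new State(exploreState);          // copy the current state
        int agent = (Integer) exploreState.varValues[0];  // whose turn it is now
        // Round-robin turn-taking: 1 -> 2 -> 3 -> 1 -> ...
        target.setValue(0, (agent % numAgents) + 1);
        // Update the remaining state variables according to the outcome of the
        // selected agent's mechanism (hypothetical helper).
        applyMechanism(target, agent, i, offset);
        return target;
    }

    // Probability of outcome 'offset' of choice 'i'; uniform here for illustration.
    public double getTransitionProbability(int i, int offset) {
        return 1.0 / getNumTransitions(i);
    }

    private int getNumTransitions(int i) {
        return 2; // illustrative: two probabilistic outcomes per mechanism
    }

    private void applyMechanism(State target, int agent, int i, int offset) {
        target.setValue(1, offset); // placeholder update of a state variable
    }
}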