
Proceedings of the 17th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 30

Individual and Collective Self-Development: Concepts and Challenges


DOI: http://dx.doi.org/10.15439/2022F301

Citation: Proceedings of the 17th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 30, pages 15–21 (2022)


Abstract. The increasing complexity and unpredictability of many ICT scenarios represent a major challenge for future intelligent systems. The capability to dynamically and autonomously adapt to evolving and novel situations, with partial or limited knowledge of the domain, both at the level of individual components and at the collective level, will become a crucial need for smart devices in many application domains. In this paper, we envision future systems able to self-develop mental models of themselves and of the environment they act in. Key properties include: learning models of their own capabilities; learning how to act purposefully towards the achievement of specific goals; and learning how to act in the presence of others, i.e., at the collective level. We introduce the vision of self-development in ICT systems by framing its key concepts and illustrating suitable application domains. We then overview the many research areas that contribute, or can potentially contribute, to realising the vision, and identify some key research challenges.
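The first key property named in the abstract, learning a model of one's own capabilities, can be made concrete with a minimal sketch. Everything below (the `ToySwitch` environment, its failure rate, and all function names) is our own illustrative assumption, not from the paper: an agent repeatedly tries actions in a toy smart-home setting and estimates, from experience alone, how reliably each action changes the world.

```python
import random
from collections import defaultdict

# Hypothetical toy environment: a light driven by a sometimes-faulty switch.
class ToySwitch:
    def __init__(self, failure_rate=0.2):
        self.failure_rate = failure_rate
        self.light_on = False

    def act(self, action):
        # "toggle" succeeds only with probability 1 - failure_rate;
        # "wait" (or any other action) never changes the state.
        if action == "toggle" and random.random() > self.failure_rate:
            self.light_on = not self.light_on
        return self.light_on

def learn_capability_model(env, n_trials=2000):
    """Estimate P(state changes | action) purely from interaction."""
    counts = defaultdict(lambda: [0, 0])  # action -> [state changes, tries]
    for _ in range(n_trials):
        action = random.choice(["toggle", "wait"])
        before = env.light_on
        after = env.act(action)
        counts[action][0] += int(before != after)
        counts[action][1] += 1
    return {a: changes / tries for a, (changes, tries) in counts.items()}

random.seed(0)
model = learn_capability_model(ToySwitch())
# The learned model approximates the true effect probabilities:
# "toggle" changes the light roughly 80% of the time, "wait" never does.
```

Such a self-acquired capability model is the simplest instance of the vision: the agent is not told what its actuators can do, but discovers it, and could then plan purposefully (e.g., retry "toggle" when it fails) on top of the learned estimates.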

References

  1. P. Rochat, “Self-perception and action in infancy,” Experimental Brain Research, vol. 123, no. 1-2, pp. 102–109, 1998.
  2. J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur, and E. Thelen, “Autonomous mental development by robots and animals,” Science, vol. 291, no. 5504, pp. 599–600, 2001.
  3. M. Lippi, S. Mariani, and F. Zambonelli, “Developing a sense of agency in IoT systems: Preliminary experiments in a smart home scenario,” in 17th CoMoRea workshop at PerCom. IEEE, 2021.
  4. S. Jha, M. Schiemer, F. Zambonelli, and J. Ye, “Continual learning in sensor-based human activity recognition: An empirical benchmark analysis,” Inf. Sci., vol. 575, pp. 1–21, 2021.
  5. S. Mariani, G. Cabri, and F. Zambonelli, “Coordination of autonomous vehicles: Taxonomy and survey,” ACM Comput. Surv., vol. 54, no. 1, pp. 19:1–19:33, 2021.
  6. Y. Yang, M. Taylor, J. Luo, Y. Wen, O. Slumbers, D. Graves, H. Bou Ammar, and J. Wang, “Diverse auto-curriculum is critical for successful real-world multiagent learning systems,” in 20th International Conference on Autonomous Agents and Multiagent Systems. IFAAMAS, 2021.
  7. M. Salehie and L. Tahvildari, “Self-adaptive software: Landscape and research challenges,” ACM Transactions on Autonomous and Adaptive Systems, vol. 4, no. 2, pp. 1–42, 2009.
  8. B. Subagdja and A.-H. Tan, “Beyond autonomy: The self and life of social agents,” in Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019, pp. 1654–1658.
  9. S. Mariani and A. Omicini, “Anticipatory coordination in socio-technical knowledge-intensive environments: Behavioural implicit communication in MoK,” in AI*IA 2015. Springer, 2015, pp. 102–115.
  10. A. Morris-Martin, M. De Vos, and J. Padget, “Norm emergence in multiagent systems: a viewpoint paper,” Autonomous Agents and Multi-Agent Systems, vol. 33, no. 6, pp. 706–749, 2019.
  11. J. Z. Leibo, E. Hughes, M. Lanctot, and T. Graepel, “Autocurricula and the emergence of innovation from social interaction: A manifesto for multi-agent intelligence research,” arXiv preprint arXiv:1903.00742, 2019.
  12. J. Bongard, V. Zykov, and H. Lipson, “Resilient machines through continuous self-modeling,” Science, vol. 314, no. 5802, pp. 1118–1121, 2006.
  13. N. Cambier, R. Miletitch, V. Frémont, M. Dorigo, E. Ferrante, and V. Trianni, “Language evolution in swarm robotics: A perspective,” Frontiers in Robotics and AI, vol. 7, p. 12, 2020.
  14. M. Martinelli, S. Mariani, M. Lippi, and F. Zambonelli, “Self-development and causality in intelligent environments,” in Workshops at 18th International Conference on Intelligent Environments (IE2022), Biarritz, France, 20-23 June 2022, ser. Ambient Intelligence and Smart Environments, vol. 31. IOS Press, 2022, pp. 248–257.
  15. J. Zhang, X. Yao, J. Zhou, J. Jiang, and X. Chen, “Self-organizing manufacturing: Current status and prospect for industry 4.0,” in 5th International Conference on Enterprise Systems, 2017, pp. 319–326.
  16. M. Saelens, Y. Kinoo, and D. Weyns, “HeyCiti: Healthy cycling in a city using self-adaptive internet-of-things,” in IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, 2020, pp. 226–227.
  17. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  18. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
  19. Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in Proceedings of the 26th International Conference on Machine Learning, 2009, pp. 41–48.
  20. J. Schmidhuber, “Formal theory of creativity, fun, and intrinsic motivation (1990–2010),” IEEE Transactions on Autonomous Mental Development, vol. 2, no. 3, pp. 230–247, 2010.
  21. Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros, “Large-scale study of curiosity-driven learning,” in International Conference on Learning Representations, 2018.
  22. K. Khetarpal, Z. Ahmed, G. Comanici, D. Abel, and D. Precup, “What can I do here? A theory of affordances in reinforcement learning,” in International Conference on Machine Learning, 2020, pp. 5243–5253.
  23. B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio, “Toward causal representation learning,” Proceedings of the IEEE, 2021.
  24. J. Pearl and D. Mackenzie, The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.
  25. Y. Zhao, Y. Chen, K. Tu, and J. Tian, “Learning bayesian network structures under incremental construction curricula,” Neurocomputing, vol. 258, pp. 30–40, 2017.
  26. K. Javed, M. White, and Y. Bengio, “Learning causal models online,” arXiv preprint arXiv:2006.07461, 2020.
  27. R. B. Myerson, Game Theory. Harvard University Press, 2013.
  28. A. Nowé, P. Vrancx, and Y.-M. De Hauwere, “Game theory and multiagent reinforcement learning,” in Reinforcement Learning. Springer, 2012, pp. 441–470.
  29. B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch, “Emergent tool use from multi-agent autocurricula,” arXiv preprint arXiv:1909.07528, 2020.
  30. S. Meganck, S. Maes, B. Manderick, and P. Leray, “Distributed learning of multi-agent causal models,” in IEEE/WIC/ACM International Conference on Intelligent Agent Technology. IEEE, 2005, pp. 285–288.
  31. S. Gupta and A. Dukkipati, “Winning an election: On emergent strategic communication in multi-agent networks,” in International Conference on Autonomous Agents and Multiagent Systems, 2020, pp. 1861–1863.
  32. J. N. Foerster, Y. M. Assael, N. De Freitas, and S. Whiteson, “Learning to communicate with deep multi-agent reinforcement learning,” arXiv preprint arXiv:1605.06676, 2016.
  33. N. A. Grupen, D. D. Lee, and B. Selman, “Low-bandwidth communication emerges naturally in multi-agent learning systems,” arXiv preprint arXiv:2011.14890, 2020.
  34. M. Esteva, J.-A. Rodriguez-Aguilar, C. Sierra, P. Garcia, and J. L. Arcos, “On the formal specification of electronic institutions,” in Agent Mediated Electronic Commerce. Springer, 2001, pp. 126–147.
  35. M. A. Nowak, “Five rules for the evolution of cooperation,” Science, vol. 314, no. 5805, pp. 1560–1563, 2006.
  36. C. Yu, M. Zhang, and F. Ren, “Collective learning for the emergence of social norms in networked multiagent systems,” IEEE Trans. Cybern., vol. 44, no. 12, pp. 2342–2355, 2014.
  37. R. Beheshti, “Normative agents for real-world scenarios,” in International Conference on Autonomous Agents and Multi-Agent Systems, 2014, pp. 1749–1750.
  38. B. Porter and R. Rodrigues Filho, “Distributed emergent software: Assembling, perceiving and learning systems at scale,” in IEEE International Conference on Self-Adaptive and Self-Organizing Systems, 2019, pp. 127–136.
  39. A. Yapo and J. Weiss, “Ethical implications of bias in machine learning,” in 51st Hawaii International Conference on System Sciences, 2018.
  40. E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, and I. Rahwan, “The moral machine experiment,” Nature, vol. 563, no. 7729, pp. 59–64, 2018.