
Annals of Computer Science and Information Systems, Volume 21

Proceedings of the 2020 Federated Conference on Computer Science and Information Systems

Game AI Competitions: Motivation for the Imitation Game-Playing Competition

DOI: http://dx.doi.org/10.15439/2020F126

Citation: Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 21, pages 155–160 (2020)


Abstract. Games have played a crucial role in advancing research in Artificial Intelligence and in tracking its progress. In this article, a new proposal for a game AI competition is presented. The goal is to create computer players that learn to mimic the behavior of particular human players, given access to their game records. We motivate the usefulness of such an approach from various angles, e.g., new ways of understanding what constitutes human-like AI, or how well it fits into existing game production workflows. Such a competition may integrate many problems, such as learning, representation, approximation and compression of AI, pattern recognition, and knowledge extraction. This leads to multi-directional implications for both research and industry. In addition to the proposal, we include a short survey of the available game AI competitions.
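The core learning task implied by the proposal, imitating a particular player from their game records, corresponds closely to behavioral cloning: supervised learning on (state, action) pairs extracted from recorded games. The following is a minimal sketch of that idea, not the competition's prescribed method; the state/action dimensions, the load_game_records helper, and the network architecture are hypothetical placeholders.

# Minimal behavioral-cloning sketch: fit a policy to the actions a specific
# human player took, given encoded game states from their game records.
# STATE_DIM, N_ACTIONS and load_game_records are illustrative assumptions.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM = 64    # size of an encoded game state (assumption)
N_ACTIONS = 16    # number of discrete actions (assumption)

def load_game_records():
    # Placeholder: a real implementation would parse the player's game logs
    # into encoded states and the actions the player actually chose.
    states = torch.randn(1024, STATE_DIM)
    actions = torch.randint(0, N_ACTIONS, (1024,))
    return TensorDataset(states, actions)

# Small policy network mapping a state to a distribution over actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)

loader = DataLoader(load_game_records(), batch_size=64, shuffle=True)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for states, actions in loader:
        logits = policy(states)
        loss = loss_fn(logits, actions)  # penalize deviation from the human's choices
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In an actual imitation competition the evaluation would compare the trained policy's decisions against held-out games of the same player, rather than the training loss above.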
