StarCraft strategy learning refinement using replay snapshotting
Štefan Krištofík, Michaela Hanková
DOI: http://dx.doi.org/10.15439/2025F2657
Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 315–320 (2025)
Abstract. We propose a new replay snapshotting (RS) technique for strategy learning from past matches in the real-time strategy game StarCraft: Brood War (SCBW). It allows for a more precise understanding of particular strategy aspects by sampling the state of selected game features at important checkpoints during a match. We use RS to extract and refine a set of strategies from the large replay dataset STARDATA. To validate our approach in a competitive environment, we implement an AI agent for SCBW that performs the extracted strategy set against opponents in the BASIL Ladder competition. The agent consistently achieves rank C with a 56% win rate, a significant improvement over our previous approaches.
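To make the snapshotting idea concrete, the following is a minimal sketch in Python of sampling selected game features at fixed checkpoints while stepping through a replay. The feature names, checkpoint times, and the `frames` input format are illustrative assumptions; the abstract does not specify which features or checkpoints the paper uses, nor the replay parser.

```python
from dataclasses import dataclass
from typing import Dict, List

# Sketch of replay snapshotting (RS): keep the state of selected game
# features the first time each checkpoint is reached. All feature names
# and checkpoint times below are hypothetical placeholders.

@dataclass
class Snapshot:
    time_s: int                     # checkpoint, seconds into the match
    unit_counts: Dict[str, int]     # e.g. {"Zealot": 4, "Probe": 21}
    tech_buildings: Dict[str, int]  # e.g. {"Cybernetics Core": 1}
    minerals: int
    gas: int

def take_snapshots(frames, checkpoints_s=(180, 360, 540, 720)) -> List[Snapshot]:
    """Walk replay frames in order; record the first frame at/after each checkpoint.

    `frames` is assumed to be an iterable of (time_s, features) pairs produced
    by whatever replay parser is in use; the parser itself is out of scope here.
    """
    snapshots: List[Snapshot] = []
    targets = list(checkpoints_s)
    for time_s, features in frames:
        while targets and time_s >= targets[0]:
            snapshots.append(Snapshot(
                time_s=targets[0],
                unit_counts=dict(features.get("units", {})),
                tech_buildings=dict(features.get("tech", {})),
                minerals=features.get("minerals", 0),
                gas=features.get("gas", 0),
            ))
            targets.pop(0)
    return snapshots
```

Per the abstract, sequences of such snapshots taken across the STARDATA replays are what RS uses to extract and refine the strategy set.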
References
- Z. Lin, J. Gehring, V. Khalidov and G. Synnaeve, “STARDATA: A StarCraft AI Research Dataset,” 13th AAAI Conf. Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2017, pp. 50–56, https://dx.doi.org/10.48550/arXiv.1708.02139
- S. Ontañón, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill and M. Preuss, “A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft,” IEEE Trans. Computational Intelligence and AI in Games, IEEE Computational Intelligence Society, 2013, 5(4), pp. 293–311, https://dx.doi.org/10.1109/TCIAIG.2013.2286295
- Mi. Čertický, D. Churchill, K.-J. Kim, Ma. Čertický and R. Kelly, “StarCraft AI Competitions, Bots and Tournament Manager Software,” IEEE Trans. Games, 2018, 11(3), pp. 227–237, https://dx.doi.org/10.1109/TG.2018.2883499
- O. Vinyals, I. Babuschkin et al., “Grandmaster level in StarCraft II using multi-agent reinforcement learning,” Nature, 2019, 575, pp. 350–354, https://dx.doi.org/10.1038/s41586-019-1724-z
- B. G. Weber and M. Mateas, “A data mining approach to strategy prediction,” IEEE Symp. Computational Intelligence and Games, 2009, pp. 140–147, https://dx.doi.org/10.1109/CIG.2009.5286483
- H. C. Cho, K. J. Kim and S. B. Cho, “Replay-based strategy prediction and build order adaptation for StarCraft AI bots,” IEEE Conf. Computational Intelligence in Games (CIG), 2013, pp. 1–7, https://dx.doi.org/10.1109/CIG.2013.6633666
- G. Synnaeve and P. Bessière, “A Dataset for StarCraft AI & an Example of Armies Clustering,” Artificial Intelligence in Adversarial Real-Time Games, 2012, https://dx.doi.org/10.48550/arXiv.1211.4552
- Š. Krištofík, P. Malík, M. Kasáš, Š. Neupauer, “StarCraft agent strategic training on a large human versus human game replay dataset,” Federated Conf. Computer Science and Information Systems, FedCSIS 2020, 21, ACSIS, pp. 391–399, https://dx.doi.org/10.15439/2020F178
- M. Świechowski, “Game AI Competitions: Motivation for the Imitation Game-Playing Competition,” Federated Conf. Computer Science and Information Systems, FedCSIS 2020, 21, ACSIS, pp. 155–160, https://dx.doi.org/10.15439/2020F126
- Š. Krištofík, M. Kasáš, P. Malík, “StarCraft strategy classification of a large human versus human game replay dataset,” Federated Conf. Computer Science and Information Systems, FedCSIS 2021, 25, ACSIS, pp. 137–140, https://dx.doi.org/10.15439/2021F48
- G. Robertson, I. Watson, “A Review of Real-Time Strategy Game AI,” AI Magazine, 2014, 35(4), pp. 75–104, https://dx.doi.org/10.1609/aimag.v35i4.2478
- S. Xu, H. Kuang et al., “Macro action selection with deep reinforcement learning in StarCraft,” 15th AAAI Conf. Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2019, pp. 94–99, https://dx.doi.org/10.48550/ARXIV.1812.00336
- J. J. Merelo-Guervós, A. Fernández-Ares et al., “RedDwarfData: a simplified dataset of StarCraft matches,” 2017, https://dx.doi.org/10.48550/arXiv.1712.10179
- F. Dai, J. Gong, J. Huang, J. Hao, “Macromanagement and Strategy Classification in Real-Time Strategy Games,” 2nd China Symp. Cognitive Computing and Hybrid Intelligence (CCHI), 2019, pp. 263–267, https://dx.doi.org/10.1109/CCHI.2019.8901957
- N. Justesen, S. Risi, “Learning Macromanagement in StarCraft from Replays using Deep Learning,” 2017, https://dx.doi.org/10.48550/arXiv.1707.03743
- C. Shi, B. Wei et al., “A quantitative discriminant method of elbow point for the optimal number of clusters in clustering algorithm,” J. Wireless Comm. and Networking, 2021, https://dx.doi.org/10.1186/s13638-021-01910-w
- N. Justesen, M. Kaselimi et al., “Human-like Bots for Tactical Shooters Using Compute-Efficient Sensors,” 2024, https://dx.doi.org/10.48550/arXiv.2501.00078
- J. Gehring, D. Ju, V. Mella, D. Gant, N. Usunier, G. Synnaeve, “High-Level Strategy Selection under Partial Observability in StarCraft: Brood War,” 2018, https://dx.doi.org/10.48550/arXiv.1811.08568