Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 11

Proceedings of the 2017 Federated Conference on Computer Science and Information Systems

Helping AI to Play Hearthstone using Neural Networks

DOI: http://dx.doi.org/10.15439/2017F561

Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 131–134


Abstract. This paper presents a winning solution to the AAIA'17 Data Mining Challenge. The challenge focused on creating an efficient prediction model for the digital card game Hearthstone. Our final solution is an ensemble of various neural network models, including convolutional neural networks.
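The abstract describes combining several neural network models into an ensemble. As a minimal sketch (not the authors' code), one common way to ensemble such models is to average the per-game win probabilities each model predicts; the function and example values below are purely illustrative.

```python
# Hedged sketch: averaging win-probability predictions from several
# models, one standard way to build the kind of ensemble the abstract
# mentions. Model outputs here are made-up illustrative numbers.

def ensemble_average(predictions):
    """Average per-game win probabilities across models.

    predictions: list with one inner list per model, each inner list
    holding one probability per game. Returns the averaged
    probability per game.
    """
    n_models = len(predictions)
    n_games = len(predictions[0])
    return [
        sum(model[i] for model in predictions) / n_models
        for i in range(n_games)
    ]

# Three hypothetical models scoring the same two games:
model_outputs = [
    [0.80, 0.40],
    [0.70, 0.50],
    [0.60, 0.30],
]
averaged = ensemble_average(model_outputs)
print([round(p, 2) for p in averaged])
```

In practice, weighted averaging or stacking a meta-model on top of the base models' outputs are common refinements of this simple scheme.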
