Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 131–134 (2017)
Abstract. This paper presents a winning solution to the AAIA'17 Data Mining Challenge. The challenge focused on building an efficient prediction model for the digital card game Hearthstone. Our final solution is an ensemble of neural network models, including convolutional neural networks.
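The core idea mentioned above, combining the predictions of several neural networks into a single ensemble, can be sketched as simple probability averaging. This is a minimal illustration only: the model names, predictions, and the unweighted mean are assumptions for demonstration, not the paper's actual models or combination scheme.

```python
import numpy as np

# Hypothetical win-probability predictions from three member models
# for three game states. Values are illustrative placeholders.
model_predictions = {
    "dense_net": np.array([0.62, 0.48, 0.91]),
    "conv_net":  np.array([0.58, 0.52, 0.88]),
    "wide_net":  np.array([0.65, 0.45, 0.93]),
}

def ensemble_average(preds):
    """Average the per-state win probabilities across all member models."""
    return np.mean(np.stack(list(preds.values())), axis=0)

ensemble = ensemble_average(model_predictions)
print(ensemble)  # one averaged win probability per game state
```

In practice an ensemble may also weight its members (e.g. by validation performance) rather than average them uniformly; the uniform mean is the simplest baseline.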