Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 11

Proceedings of the 2017 Federated Conference on Computer Science and Information Systems

Evaluation of Hearthstone Game States With Neural Networks and Sparse Autoencoding

DOI: http://dx.doi.org/10.15439/2017F559

Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 135–138.


Abstract. In this paper, an approach to evaluating game states of the collectible card game Hearthstone is described. A deep neural network is employed to predict the probability of winning associated with a given game state. The encoding of the game state as an input vector is based on another neural network, an autoencoder trained with a sparsity-inducing loss. The autoencoder encodes minion information in a sparse fashion so that it can be efficiently aggregated. Additionally, the model is regularized by decorrelating hidden-layer neuron activations, a concept derived from the existing regularization method DeCov. The approach was developed for the AAIA'17 data mining competition "Helping AI to play Hearthstone" and achieved 5th place out of 188 submissions.
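The two regularizers named in the abstract, the KL-divergence sparsity penalty of Ng's sparse autoencoder and the DeCov decorrelation penalty of Cogswell et al., can be sketched as plain NumPy loss terms. This is a minimal illustration assuming sigmoid hidden activations; the function names and the hyperparameters (`beta`, `lam`, `rho`) are illustrative choices and are not taken from the paper:

```python
import numpy as np

def decov_penalty(h):
    """DeCov penalty: half the squared Frobenius norm of the
    off-diagonal entries of the batch covariance of activations h
    (shape: batch x hidden)."""
    hc = h - h.mean(axis=0, keepdims=True)      # center per hidden unit
    C = hc.T @ hc / h.shape[0]                  # batch covariance matrix
    return 0.5 * ((C ** 2).sum() - (np.diag(C) ** 2).sum())

def kl_sparsity(h, rho=0.05):
    """KL-divergence sparsity penalty: pushes the mean activation of
    each hidden unit toward the target sparsity level rho (assumes
    activations in (0, 1), e.g. sigmoid outputs)."""
    rho_hat = np.clip(h.mean(axis=0), 1e-7, 1 - 1e-7)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sae_loss(x, x_hat, h, beta=3.0, lam=0.1, rho=0.05):
    """Total loss: reconstruction MSE plus the two weighted penalties.
    beta, lam, rho are illustrative hyperparameters."""
    recon = np.mean((x - x_hat) ** 2)
    return recon + beta * kl_sparsity(h, rho) + lam * decov_penalty(h)
```

Both penalties vanish in the intended limit: `decov_penalty` is zero when hidden units are uncorrelated across the batch, and `kl_sparsity` is zero when each unit's mean activation equals `rho`.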

References

  1. Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.
  2. Deng, Li. "A tutorial survey of architectures, algorithms, and applications for deep learning." APSIPA Transactions on Signal and Information Processing 3 (2014): e2.
  3. Brügmann, Bernd. "Monte Carlo Go." Technical report, Physics Department, Syracuse University, Syracuse, NY (1993).
  4. Vaizman, Yonatan, Brian McFee, and Gert Lanckriet. "Codebook-based audio feature representation for music information retrieval." IEEE/ACM Transactions on Audio, Speech, and Language Processing 22.10 (2014): 1483-1493.
  5. Nam, Juhan, Jorge Herrera, Malcolm Slaney, and Julius Smith. "Learning sparse feature representations for music annotation and retrieval." Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR) (2012): 565-570.
  6. Cogswell, Michael, et al. "Reducing overfitting in deep networks by decorrelating representations." arXiv preprint, https://arxiv.org/abs/1511.06068 (2015).
  7. Ng, Andrew. "Sparse autoencoder." CS294A Lecture Notes 72 (2011): 1-19.
  8. Theano Development Team. "Theano: A Python framework for fast computation of mathematical expressions." arXiv preprint, https://arxiv.org/abs/1605.02688 (2016).
  9. Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of Machine Learning Research 12 (2011): 2121-2159.