Hybrid Boosting and Multi-Modal Fusion for Chess Puzzle Difficulty Prediction
Ming Liu, Junye Wang, Yinghan Hu, Xiaolin Yang, Defu Lin
DOI: http://dx.doi.org/10.15439/2025F3675
Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 825–830 (2025)
Abstract. The FedCSIS 2025 Challenge on Predicting Chess Puzzle Difficulty tasked participants with estimating puzzle ratings directly from board states and solution sequences, without relying on human solver statistics. We propose a three-stage hybrid framework integrating gradient-boosting regressors, a multi-modal neural network, and an XGBoost stacking ensemble. The boosting stage modeled handcrafted structural features derived from FEN strings and engine metadata, while the multi-modal network jointly learned from structured features and image-rendered chessboards to capture positional and tactical patterns. The residual-based stacking stage explicitly modeled prediction errors to correct systematic biases and improve accuracy, particularly on high-difficulty puzzles. Our method achieved competitive performance, ranking 7th in the preliminary stage and 8th on the final leaderboard. These results demonstrate that combining interpretable boosting models with visual-tactical deep representations and meta-learning provides a robust and computationally efficient alternative to large-scale transformer-based approaches.
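The residual-based stacking idea in the third stage can be illustrated with a toy sketch: a base model predicts puzzle ratings, a meta-model is then fit to the base model's residuals, and the final prediction adds the learned correction. Everything below (the bucket models, the solution-length and tactical-motif features, the data) is a hypothetical stand-in for illustration, not the paper's actual pipeline or feature set.

```python
def fit_mean_per_bucket(xs, ys):
    """Fit a trivial 'model': the mean target per discrete feature value."""
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[x] = sums.get(x, 0.0) + y
        counts[x] = counts.get(x, 0) + 1
    return {x: sums[x] / counts[x] for x in sums}

def predict(model, xs, default=0.0):
    return [model.get(x, default) for x in xs]

# Toy training set: feature = solution length (plies), target = puzzle rating.
lengths = [2, 2, 4, 4, 6, 6]
ratings = [1100.0, 1200.0, 1500.0, 1700.0, 2100.0, 2300.0]

# Stage 1 stand-in: base model on the raw feature.
base = fit_mean_per_bucket(lengths, ratings)
base_preds = predict(base, lengths)

# Stage 3 idea: model the residuals explicitly, then add the correction.
residuals = [y - p for y, p in zip(ratings, base_preds)]
# A second (hypothetical) feature, e.g. a tactical-motif flag, explains
# part of the base model's systematic error.
motif = [0, 1, 0, 1, 0, 1]
resid_model = fit_mean_per_bucket(motif, residuals)
corrections = predict(resid_model, motif)

final_preds = [p + c for p, c in zip(base_preds, corrections)]
```

In the paper's framework, the base predictions come from the boosting and multi-modal stages and the residual model is an XGBoost regressor; the mechanism, however, is the same: the meta-learner only has to explain the errors the earlier stages leave behind.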