Symbolic vs Black-Box Explanations: A Model-Driven Approach Using Grammatical Evolution
Dominik Sepioło, Antoni Ligęza
DOI: http://dx.doi.org/10.15439/2025F5371
Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 381–386 (2025)
Abstract. Black-box explainability tools such as LIME and SHAP are widely used to interpret machine learning models. However, their post-hoc and local nature often leads to inconsistent and semantically opaque explanations. In contrast, this paper explores a model-driven approach to explainability using grammatical evolution (GE), which enables the discovery of symbolic, human-readable models. We contrast black-box explanations with symbolic GE-generated models on two benchmark tasks: a quadratic equation classification problem and the Iris dataset. The results show that GE produces interpretable, consistent, and semantically meaningful expressions that reflect domain knowledge, offering a more trustworthy foundation for explainable AI. The integration of Meaningful Intermediate Variables (MIVs) further enhances the expressiveness and clarity of the symbolic models.
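The core mechanism behind the symbolic models the abstract describes is grammatical evolution's genotype-to-phenotype mapping: a linear genome of integer codons is decoded against a context-free grammar, each codon selecting (modulo the number of alternatives) a production for the leftmost non-terminal. The following is a minimal illustrative sketch of that decoding step; the grammar and genome are hypothetical examples, not taken from the paper, and a full GE system would add fitness evaluation and evolutionary operators on top.

```python
# Minimal sketch of grammatical evolution's genotype-to-phenotype mapping.
# The grammar and the example genome below are illustrative, not from the paper.

GRAMMAR = {
    "<expr>": ["<expr> <op> <expr>", "<var>", "<const>"],
    "<op>": ["+", "-", "*"],
    "<var>": ["x", "y"],
    "<const>": ["1", "2"],
}

def decode(genome, start="<expr>", max_wraps=2):
    """Map a list of integer codons to a symbolic expression string.

    Each codon picks a production (codon mod number of alternatives)
    for the leftmost non-terminal; the genome wraps around when
    exhausted, which is standard GE behaviour. Returns None if the
    derivation does not terminate within the wrap limit.
    """
    symbols = [start]          # derivation frontier, leftmost first
    out = []                   # terminals emitted so far
    i = 0                      # codon index
    steps = 0
    limit = len(genome) * (max_wraps + 1)
    while symbols:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            if steps >= limit:  # guard against non-terminating expansion
                return None
            rules = GRAMMAR[sym]
            choice = rules[genome[i % len(genome)] % len(rules)]
            i += 1
            steps += 1
            symbols = choice.split() + symbols
        else:
            out.append(sym)
    return " ".join(out)

# Example: a fixed genome decodes deterministically to one expression.
print(decode([0, 1, 0, 2, 2, 1]))  # → x * 2
```

Because every individual is an expression over named variables, the evolved model is directly human-readable, which is the property contrasted here with post-hoc, local explanations such as those of LIME and SHAP.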
References
- Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., et al.: Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, Elsevier (2019), https://doi.org/10.1016/j.inffus.2019.12.012.
- Guidotti, R., Monreale, A., Ruggieri, S., et al.: A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys 51(5), 1–42 (2019), https://doi.org/10.1145/3236009.
- Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 23(1), 18 (2021), https://doi.org/10.3390/e23010018.
- Ribeiro, M. T., Singh, S., Guestrin, C.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In: KDD 2016, pp. 1135–1144. ACM (2016), https://doi.org/10.1145/2939672.2939778.
- Lundberg, S. M., Lee, S. I.: A Unified Approach to Interpreting Model Predictions. In: Advances in Neural Information Processing Systems 30, pp. 4765–4774 (2017), https://dl.acm.org/doi/10.5555/3295222.3295230.
- Sepioło, D., Ligęza, A.: Towards Explainability of Tree-Based Ensemble Models: A Critical Overview. In: New Advances in Dependability of Networks and Systems, pp. 287–296. Springer (2022), https://doi.org/10.1007/978-3-031-06746-4_28.
- Rudin, C.: Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1(5), 206–215 (2019), https://doi.org/10.1038/s42256-019-0048-x.
- Ligęza, A., et al.: Explainable Artificial Intelligence. Model Discovery with Constraint Programming. In: ISMIS 2020, pp. 171–191. Springer (2020), https://doi.org/10.1007/978-3-030-67148-8_13.
- Hu, T.: Can Genetic Programming Perform Explainable Machine Learning for Bioinformatics? In: Genetic and Evolutionary Computation, Springer, pp. 63–77 (2020), https://doi.org/10.1007/978-3-030-39958-0_4.
- Ryan, C., O’Neill, M., Collins, J. J. (eds.): Handbook of Grammatical Evolution. Springer (2018), https://doi.org/10.1007/978-3-319-78717-6.
- Sepioło, D., Ligęza, A.: Towards Model-Driven Explainable Artificial Intelligence. An Experiment with Shallow Methods Versus Grammatical Evolution. In: ECAI 2023 Workshops, Springer, pp. 360–365 (2024), https://doi.org/10.1007/978-3-031-50485-3_36.
- Orzechowski, P., La Cava, W., Moore, J. H.: Where Are We Now? A Large Benchmark Study of Recent Symbolic Regression Methods. In: GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1183–1190. ACM (2018), https://doi.org/10.1145/3205455.3205539.
- Sepioło, D., Ligęza, A.: Towards Model-Driven Explainable Artificial Intelligence: Function Identification with Grammatical Evolution. Applied Sciences 14, 5950 (2024), https://doi.org/10.3390/app14135950.
- Tsoulos, I. G., Tzallas, A., Karvounis, E.: Using Optimization Techniques in Grammatical Evolution. Future Internet 16, 172 (2024), https://doi.org/10.3390/fi16050172.
- Ligęza, A.: An Experiment in Causal Structure Discovery. A Constraint Programming Approach. In: ISMIS 2017, pp. 261–268. Springer (2017), https://doi.org/10.1007/978-3-319-60438-1_26.