Rule-based approximation of black-box classifiers for tabular data to generate global and local explanations
Cezary Maszczyk, Michal Kozielski, Marek Sikora
DOI: http://dx.doi.org/10.15439/2022F258
Citation: Proceedings of the 17th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 30, pages 89–92 (2022)
Abstract. The need to understand the decision bases of artificial intelligence methods is becoming widespread. One method to obtain explanations of machine learning models and their decisions is the approximation of a complex model, treated as a black box, by an interpretable rule-based model. Such an approach allows detailed and understandable explanations to be generated from the elementary conditions contained in the rule premises. However, there is a lack of research on the evaluation of such an approximation and on the influence of the parameters of the rule-based approximator. In this work, a rule-based approximation of a complex classifier for tabular data is evaluated. Moreover, the influence of selected rule quality measures on the approximation is investigated. The obtained results show what quality of approximation can be expected and indicate which measure of rule quality is worth using in such an application.
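The general surrogate workflow that the abstract describes can be illustrated with a minimal sketch. This is not the paper's RuleKit-based setup; it uses a scikit-learn decision tree as a stand-in rule-like surrogate, and the dataset, black-box model, and depth limit are illustrative assumptions. The key point is that the surrogate is fitted to the black box's predictions rather than the true labels, and approximation quality is measured as fidelity, i.e. agreement with the black box on held-out data.

```python
# Sketch: approximate a black-box classifier with an interpretable surrogate
# and measure fidelity (agreement with the black box), assuming scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model to be explained.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate is trained on the black box's predictions, not the true labels,
# so its rules describe the black box rather than the data.
bb_labels = black_box.predict(X_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, bb_labels)

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"fidelity = {fidelity:.3f}")

# Each root-to-leaf path is a rule whose elementary conditions can serve as a
# global explanation of the black box's behaviour.
print(export_text(surrogate, feature_names=list(X.columns)))
```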