
Proceedings of the 19th Conference on Computer Science and Intelligence Systems (FedCSIS)

Annals of Computer Science and Information Systems, Volume 39

Attentiveness on criticisms and definition about Explainable Artificial Intelligence

DOI: http://dx.doi.org/10.15439/2024F0001

Citation: Proceedings of the 19th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 39, pages 45–52 (2024)

Abstract. The emergence of deep learning at the beginning of the last decade has driven the advancement of increasingly complex models, culminating in large language models and generative AI, which represent the peak of model size and complexity. Explainability plays a key role in making AI-assisted decision-making understandable and in ensuring accountability. This contribution delves into the complexities of explainable artificial intelligence (XAI) from several perspectives, drawing on an extensive and growing body of literature. The discussion begins with the challenges posed by complex data, complex models, and high-risk scenarios. Given the rapid growth of the field, the criticisms and challenges that emerge as it matures deserve thorough exploration. This contribution addresses them through three aspects that may shed light on the debate. First, it focuses on the lack of definitional cohesion, examining how and why XAI is defined from the perspectives of audience and understanding. Second, it explores XAI explanations as a bridge between complex AI models and human understanding. Third, it considers how to analyze and evaluate the maturity level of explainability along three dimensions: practicality, governance, and auditability.
