
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)

Annals of Computer Science and Information Systems, Volume 43

Simultaneous pursuit of accountability for regulatory compliance, financial benefits, and societal impacts in artificial intelligence (AI) projects

DOI: http://dx.doi.org/10.15439/2025F6392

Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 207–217


Abstract. This exploratory study, grounded in agency theory, employs quantitative analyses to investigate the simultaneous pursuit of accountability for regulatory compliance, financial benefits, and societal impacts within artificial intelligence (AI) projects. An agent-principal matrix was developed, synthesizing knowledge from the AI stakeholder model into 11 accountability indicators. These indicators establish a standard of responsibility among project actors for regulatory compliance, ethical practices, and financial benefits. Using quantitative methods, we analyzed survey data on accountability and defined the scope of the AI systems under development. We identified two clusters of AI systems (autonomous and non-autonomous) based on seven features. We then examined how these two types of systems, as well as the importance of sustainability and fairness, affect the promotion of accountability. Results indicate that accountabilities shift with the scope of the AI system and the project role. Regulatory compliance, financial benefits, and societal impacts are not mutually exclusive project goals; they coexist. The findings provide quantitative evidence for what has so far been subjective and theoretical speculation about accountabilities within AI projects. Additionally, the study contributes empirical data to the literature on AI, ethics, and project management.
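The two-cluster grouping described above can be illustrated with a minimal sketch. The paper reports clustering AI systems on seven features; the sketch below uses a simple two-cluster k-means over invented binary feature vectors as a stand-in (the study itself relies on a different, survey-based cluster analysis, and the feature names and data here are hypothetical, not taken from the paper).

```python
# Hypothetical sketch: splitting AI systems into two clusters
# (autonomous-like vs. non-autonomous-like) from seven 0/1 features.
# k-means is used here only as a simplified stand-in for the
# cluster analysis performed in the study; all data is invented.
import random

def kmeans_two_clusters(rows, iters=20, seed=0):
    """Partition feature vectors into two clusters by squared distance."""
    rng = random.Random(seed)
    centers = rng.sample(rows, 2)  # pick two distinct rows as initial centers
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for r in rows:
            # assign each system to its nearest cluster center
            d = [sum((a - b) ** 2 for a, b in zip(r, c)) for c in centers]
            groups[d.index(min(d))].append(r)
        # recompute each center as the mean of its assigned rows
        centers = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Invented responses: 1 = a feature is present (e.g. "acts without review").
systems = [
    (1, 1, 1, 1, 0, 1, 1),  # autonomous-looking profiles
    (1, 1, 0, 1, 1, 1, 1),
    (1, 0, 1, 1, 1, 1, 0),
    (0, 0, 0, 1, 0, 0, 0),  # non-autonomous-looking profiles
    (0, 1, 0, 0, 0, 0, 1),
    (0, 0, 1, 0, 0, 1, 0),
]
centers, groups = kmeans_two_clusters(systems)
print(len(groups[0]), len(groups[1]))
```

In practice a model-based approach such as latent class analysis (as cited in the study's references) would also yield fit statistics for choosing the number of clusters, which plain k-means does not provide.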
