
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)

Annals of Computer Science and Information Systems, Volume 43

ELEVATE-AI: Evaluation of Learning Environments Via Assessment Tools Enhanced by AI


DOI: http://dx.doi.org/10.15439/2025F3908

Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 499–506


Abstract. The number of STEM camps and extracurricular initiatives has risen considerably in recent years, driven by growing emphasis on addressing the workforce shortage in STEM fields. Despite this growth, these programs often lack structured evaluation practices, a well-known issue that hinders a comprehensive understanding of their effectiveness. This work focuses specifically on outreach initiatives and proposes ELEVATE-AI, a standardized evaluation platform that combines data-cleaning procedures, Exploratory Factor Analysis (EFA), and regression analysis to measure impact effectively. Furthermore, we discuss the potential integration of AI-based tools to support non-experts in interpreting the results of the proposed analysis flow. The platform aims to lower technical barriers, promote systematic assessment, and encourage the widespread adoption of data-driven practices in evaluating CS and STEM outreach activities. To facilitate adoption and reproducibility, the platform will be made available as an open-source tool.
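The analysis flow named in the abstract (data cleaning, EFA, then regression) can be sketched in a few lines. This is a minimal illustration, not the platform's actual implementation: the synthetic survey data, column names, and the use of listwise deletion and scikit-learn's `FactorAnalysis` are all assumptions made for the example.

```python
# Hedged sketch of a data-cleaning -> EFA -> regression pipeline on a
# hypothetical pre/post survey. None of the data or names come from the paper.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic responses: 6 Likert-style items driven by 2 latent constructs,
# plus an outcome score tied to the first construct.
n = 120
latent = rng.normal(size=(n, 2))
items = latent @ rng.normal(size=(2, 6)) + rng.normal(scale=0.5, size=(n, 6))
df = pd.DataFrame(items, columns=[f"q{i+1}" for i in range(6)])
df["outcome"] = 2.0 * latent[:, 0] + rng.normal(scale=0.3, size=n)
df.iloc[3, 1] = np.nan  # simulate one incomplete response

# 1) Data cleaning: drop incomplete responses (listwise deletion).
clean = df.dropna()

# 2) EFA: reduce the observed items to a small number of factors.
item_cols = [f"q{i+1}" for i in range(6)]
efa = FactorAnalysis(n_components=2, random_state=0)
scores = efa.fit_transform(clean[item_cols])

# 3) Regression: relate factor scores to the outcome measure.
reg = LinearRegression().fit(scores, clean["outcome"])
r2 = reg.score(scores, clean["outcome"])
print(f"rows kept: {len(clean)}, factor scores: {scores.shape}, R^2: {r2:.2f}")
```

A real pipeline would add reliability checks (e.g. loadings inspection, factor-count selection) before the regression step; this sketch only shows how the three stages chain together.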
