Automatic Assessment of Narrative Answers Using Information Retrieval Techniques
Liana Stanescu, Beniamin Savu
DOI: http://dx.doi.org/10.15439/2019F96
Citation: Proceedings of the 2019 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 18, pages 355–358 (2019)
Abstract. This paper presents a system for the automatic assessment of narrative answers using information retrieval algorithms. It is designed to help professors evaluate the answers they receive from their students. The system is a Java application that communicates through a REST API; this API is built around the Lucene library and exposes Lucene's search and scoring functionality. The application provides one UI for students and one for the professor: the student selects the professor, selects the question, uploads the answer and submits it, and the professor then evaluates the student's answer using the algorithms discussed in this paper. The paper also presents a series of experiments whose results give a better understanding of how these algorithms behave in practice.
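As a rough illustration of the approach described in the abstract (not the authors' actual implementation), the sketch below shows one common way Lucene can be used for this kind of assessment: the student answers are indexed as documents, the professor's reference answer is turned into a query, and Lucene's default scoring ranks the answers by textual similarity. It assumes Lucene 8.x; the field names `student` and `answer` and the example texts are purely illustrative.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class AnswerScoringSketch {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory index = new ByteBuffersDirectory();

        // Index each submitted student answer as a Lucene document.
        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("student", "student-001", Field.Store.YES));
            doc.add(new TextField("answer",
                    "Information retrieval ranks documents by their relevance to a query.",
                    Field.Store.YES));
            writer.addDocument(doc);
        }

        // Treat the professor's reference answer as the query; Lucene's
        // default similarity (BM25 in recent versions) produces the score.
        String referenceAnswer = "Information retrieval is about ranking documents by relevance";
        Query query = new QueryParser("answer", analyzer)
                .parse(QueryParser.escape(referenceAnswer));

        try (DirectoryReader reader = DirectoryReader.open(index)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                Document d = searcher.doc(hit.doc);
                System.out.printf("%s -> similarity score %.3f%n",
                        d.get("student"), hit.score);
            }
        }
    }
}
```

In a setup like the one the paper describes, this scoring logic would sit behind the REST API, with the two UIs submitting answers to and reading ranked results from it; the raw Lucene score would then be mapped onto whatever grading scale the professor uses.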