Sign language interpreting - relationships between research in different areas - overview
Barbara Probierz, Jan Kozak, Adam Piasecki, Angelika Podlaszewska
DOI: http://dx.doi.org/10.15439/2023F2503
Citation: Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 35, pages 213–223 (2023)
Abstract. Translation from a national language into sign language is a vitally important area of research and practice whose aim is to ensure communication between deaf or hard of hearing people and the hearing community. This article provides an overview of the most important research on sign language interpretation conducted across various research areas. We present the latest scientific and theoretical developments, which contribute to a better understanding of sign language translation and to improving the quality of translation services. Our main goal is to identify outstanding interdisciplinary research related to sign language translation and the links between studies conducted in different areas. The conclusions aim to broaden knowledge and awareness of sign language translation and to pinpoint areas that require further research and development. The work is connected to a project on applying machine learning to increase accessibility for deaf people.