British Sign Language Recognition In The Wild Based On Multi-Class SVM
Joanna Isabelle Olszewska, M. Quinn
DOI: http://dx.doi.org/10.15439/2019F274
Citation: Proceedings of the 2019 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 18, pages 81–86 (2019)
Abstract. Developing assistive, cost-effective, non-invasive technologies to aid communication for people with hearing impairments is of prime importance in widening accessibility and inclusiveness in our society. For this purpose, we have developed an intelligent vision system that is embedded on a smartphone and deployed in the wild. In particular, it integrates computer vision methods involving Histograms of Oriented Gradients (HOG) with machine learning techniques, namely a multi-class Support Vector Machine (SVM), to detect and recognize British Sign Language (BSL) signs automatically. Our system was successfully tested on a real-world dataset containing 13,066 samples, achieving an accuracy of over 99% with an average processing time of 170 ms, which makes it suitable for real-time visual signing.
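The abstract describes a pipeline that couples HOG feature extraction with a multi-class SVM classifier. The sketch below is only a minimal illustration of that combination, not the authors' smartphone implementation; it assumes scikit-image's hog function and scikit-learn's SVC, and the image size, HOG cell/block layout, and linear kernel are hypothetical parameter choices.

```python
# Minimal sketch: HOG descriptors + multi-class SVM for sign classification.
# Assumes scikit-image and scikit-learn; parameters are illustrative only.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def extract_hog(image, size=(64, 64)):
    """Resize a grayscale hand image and compute its HOG descriptor."""
    patch = resize(image, size, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_classifier(images, labels):
    """Fit a multi-class SVM on HOG descriptors of labelled sign images."""
    features = np.array([extract_hog(img) for img in images])
    clf = SVC(kernel='linear', C=1.0, decision_function_shape='ovo')
    clf.fit(features, labels)
    return clf

def recognise(clf, image):
    """Predict the sign class of a single hand image."""
    return clf.predict(extract_hog(image).reshape(1, -1))[0]
```

Note that scikit-learn's SVC handles the multi-class case through a one-vs-one decomposition of binary SVMs, which is one common way to realize the multi-class SVM setting named in the title.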