Deep Learning Transformer Architecture for Named Entity Recognition on Low-Resourced Languages: State-of-the-Art Results
Citation: Proceedings of the 17th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 30, pages 53–60 (2022)
Abstract. This paper reports on the evaluation of Deep Learning (DL) transformer architecture models for Named Entity Recognition (NER) on ten low-resourced South African (SA) languages. In addition, these DL transformer models were compared with other neural network and Machine Learning (ML) NER models. The findings show that transformer models substantially improve performance when discrete fine-tuning parameters are applied per language. Furthermore, fine-tuned transformer models outperform the other neural network and machine learning models on NER for the low-resourced SA languages. For example, the transformer models obtained the highest F-scores for six of the ten SA languages and the highest average F-score, surpassing the Conditional Random Fields (CRF) ML model. Practical implications include developing high-performance NER capability with less effort and lower resource costs, potentially improving downstream NLP tasks such as Machine Translation (MT). Therefore, the application of DL transformer architecture models to NLP NER sequence-tagging tasks on low-resourced SA languages is viable. Additional research could evaluate more recent transformer architecture models on other Natural Language Processing tasks and applications, such as phrase chunking, MT, and Part-of-Speech tagging.
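The key recipe the abstract highlights, applying discrete fine-tuning parameters per language rather than one shared setting, can be sketched as a simple configuration lookup. This is a minimal illustration only: the language codes, hyperparameter values, and helper names below are assumptions for demonstration, not values taken from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FineTuneConfig:
    """Hyperparameters for fine-tuning a transformer NER model."""
    learning_rate: float
    epochs: int
    batch_size: int


# Shared fallback configuration (illustrative values).
DEFAULT = FineTuneConfig(learning_rate=5e-5, epochs=3, batch_size=16)

# Discrete per-language settings, keyed by ISO 639-1 code.
# Values are hypothetical; in practice each would come from a
# per-language hyperparameter search on held-out NER data.
PER_LANGUAGE = {
    "af": FineTuneConfig(learning_rate=3e-5, epochs=4, batch_size=16),  # Afrikaans
    "zu": FineTuneConfig(learning_rate=5e-5, epochs=6, batch_size=32),  # isiZulu
    "xh": FineTuneConfig(learning_rate=4e-5, epochs=5, batch_size=32),  # isiXhosa
}


def config_for(lang_code: str) -> FineTuneConfig:
    """Return the fine-tuning parameters for a language,
    falling back to the shared default when none are defined."""
    return PER_LANGUAGE.get(lang_code, DEFAULT)
```

Each selected configuration would then be passed to the training loop for that language's NER dataset, so every language is tuned independently rather than forced onto one global setting.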