
Communication Papers of the 18th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 37

An Evaluation of a Zero-Shot Approach to Aspect-Based Sentiment Classification in Historic German Stock Market Reports


DOI: http://dx.doi.org/10.15439/2023F3725

Citation: Communication Papers of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 37, pages 51–60 (2023)


Abstract. A critical obstacle to applying state-of-the-art neural networks to text analysis in applied research remains the need for manual data annotation. Computer science research places a strong emphasis on maximizing the data efficiency of fine-tuning language models, which has led to zero-shot text classification methods that promise to work effectively without task-specific fine-tuning. In this paper, we conduct an in-depth analysis of aspect-based sentiment analysis in historic German stock market reports to evaluate the reliability of this promise. We compare a zero-shot approach with a meticulously fine-tuned three-step process of training and applying text classification models. The study empirically assesses the reliability of zero-shot text classification and weighs the benefits it offers in reducing the burden of data labeling and model training for analysis purposes. Our findings show a strong correlation between the sentiment time series generated through aspect-based sentiment analysis with the zero-shot approach and those derived from the fine-tuned supervised pipeline, validating the viability of the zero-shot approach. While the zero-shot pipeline tends to underestimate negative examples, the overall trend remains discernible. A qualitative analysis of the linguistic patterns reveals no explicit error patterns. Nevertheless, we acknowledge and discuss the practical and epistemological obstacles associated with employing zero-shot algorithms in untested domains.
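To make the entailment-based zero-shot setup concrete, the sketch below shows how an aspect-aware sentiment classifier of this kind can be assembled with the Hugging Face zero-shot-classification pipeline. It is an illustration under assumptions, not the authors' pipeline: the multilingual NLI model, the German label set, the aspect, and the hypothesis template are placeholder choices, and the example sentence is merely typical of the source material.

```python
# Illustrative sketch only: entailment-based zero-shot sentiment classification
# with the Hugging Face `zero-shot-classification` pipeline. Model, labels,
# aspect, and template are assumptions for this sketch, not the paper's setup.
from transformers import pipeline

# A multilingual NLI model capable of handling German input (illustrative choice).
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)

# Example sentence in the style of a historic stock market report.
sentence = "Auch heute war die Stimmung im Allgemeinen fest."

# Fold the aspect into the hypothesis template so the NLI model judges the
# sentiment toward that aspect rather than the sentence as a whole.
aspect = "Aktienmarkt"
template = f"Die Stimmung gegenüber dem {aspect} ist {{}}."

result = classifier(
    sentence,
    candidate_labels=["positiv", "negativ", "neutral"],
    hypothesis_template=template,
)

# `result["labels"]` is sorted by descending entailment score.
print(result["labels"][0], result["scores"][0])
```

The design choice follows the entailment framing of zero-shot classification: each candidate label is scored as a hypothesis against the input sentence, and embedding the aspect in the hypothesis steers the judgment toward that aspect instead of the document as a whole, without any task-specific fine-tuning.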
