Exploring BERT-Based Pretrained Models for Polarity Analysis of Tweets in Spanish
DOI: https://doi.org/10.61467/2007.1558.2023.v14i1.336
Keywords: Polarity analysis, NLP, BERT
Abstract
This paper reviews the implementation of three BERT-based pre-trained models ("bert-base-multilingual-cased", "IIC/beto-base-spanish-sqac", and "MarcBrun/ixambert-finetuned-squad-eu-en") to solve Tasks 1.1 and 1.2 of the Workshop on Semantic Analysis at SEPLN 2020 (TASS 2020), which consist of the polarity analysis of tweets in Spanish from different Spanish-speaking countries. Each model is evaluated individually, both with text pre-processing and with synonym replacement. The goal of this research is to identify where the polarity analysis of tweets can be improved, mainly in how the pre-trained models interpret words that are absent from their vocabulary due to language variation, regional expressions, misspellings, and the use of emojis.
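To make the setup concrete, the sketch below (not taken from the paper) shows how one of the three checkpoints named above could be loaded for polarity classification with the Hugging Face transformers library, which hosts all three models. The three-class polarity head and the slang example are illustrative assumptions, not values reported by the authors.

```python
# Minimal sketch: load one of the reviewed checkpoints for polarity
# classification. Assumes the Hugging Face "transformers" library;
# the label count and the example tweet are illustrative, not from
# the paper.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # one of the three models reviewed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=3,  # assumed polarity classes: negative / neutral / positive
)

# Regionalisms absent from the vocabulary are likely split into subword
# pieces, which is the behavior the abstract flags as a source of error:
print(tokenizer.tokenize("qué chido"))  # Mexican slang; expect subword splits
```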