Comparing BERT, RoBERTa & ALBERT for Sentiment Analysis
1. Comparative Analysis of Transformer-based Pre-Trained NLP Models
Authors: Saurav Singla & Ramachandra N
2. Abstract:
● Transformer-based self-supervised pre-trained models have transformed the concept of transfer learning in Natural Language Processing (NLP) using deep learning approaches.
● In this project we analyze the performance of self-supervised models for multi-class sentiment analysis on a non-benchmark dataset.
● We used the BERT, RoBERTa, & ALBERT models for this study.
● We fine-tuned these models for sentiment analysis with a proposed architecture.
● We used the f1-score & AUC (Area Under the ROC Curve) to evaluate model performance.
● We found that the BERT model with the proposed architecture performed best, with the highest f1-score of 0.85, followed by RoBERTa (f1-score = 0.80) & ALBERT (f1-score = 0.78).
● This analysis reveals that the BERT model with the proposed architecture is the best for the multi-class sentiment task on a non-benchmark dataset.
3. Related work:
● Aßenmacher & Heumann provide a concise overview of several large pre-trained language models that achieve state-of-the-art results on benchmark datasets viz. GLUE, RACE, & SQuAD [5].
● Cristóbal Colón-Ruiz et al. presented a benchmark comparison of various deep learning architectures, such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) recurrent neural networks, and BERT with a Bi-LSTM, for the sentiment analysis of drug reviews [6].
● Horsuwan et al. systematically compared four modern language models (ULMFiT, ELMo with biLSTM, OpenAI GPT, and BERT) across different dimensions, including speed of pre-training and fine-tuning, perplexity, downstream classification benchmarks, and performance with limited pre-training data, on Thai social text categorization [7].
● Carlos Aspillaga et al. performed a stress-test evaluation of Transformer-based models (RoBERTa, XLNet, & BERT) on Natural Language Inference (NLI) & Question Answering (QA) tasks with adversarial examples [8].
4. Methodology:
In this section we briefly explain the model architecture & the dataset used in this task.
Dataset
● We used the COVID-19 tweets dataset, publicly available on Kaggle [11].
● The train dataset contains 41,157 tweets & the test dataset contains 3,798 tweets.
● There are 5 classes in the sentiment variable, viz. Extremely Negative (0), Extremely Positive (1), Negative (2), Neutral (3), & Positive (4).
We used the PyTorch framework for building the deep learning models with the help of Hugging Face transformers.
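The five sentiment classes map to the integer ids 0-4. As a minimal sketch (pure Python; the dict and function names are ours, not from the paper), the label encoding used before training looks like:

```python
# Hypothetical label encoding; the mapping follows the class ids
# listed in the Dataset section.
LABEL2ID = {
    "Extremely Negative": 0,
    "Extremely Positive": 1,
    "Negative": 2,
    "Neutral": 3,
    "Positive": 4,
}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_labels(sentiments):
    """Map raw sentiment strings to the integer ids used in this study."""
    return [LABEL2ID[s] for s in sentiments]

print(encode_labels(["Neutral", "Positive"]))  # [3, 4]
```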
5. Methodology: Model Architecture
We propose architectures on top of the BERT, RoBERTa, & ALBERT models for this study.
BERT
● BERT stands for Bidirectional Encoder Representations from Transformers.
● It is a bidirectional transformer, meaning that it uses both left & right context in all layers, as in Fig 1.
● Fig 2 shows the BERT input representation, which includes Token, Segment, & Position embeddings.
● In practice, the inputs also include an attention mask used to differentiate between actual tokens & padding tokens.
6. Methodology: BERT
Fig 1. BERT architecture
Fig 2. BERT input representation
● In our task, we fine-tuned the BERT model on the preprocessed tweet data using a dropout layer, a hidden layer, a fully connected layer & a softmax layer for classification on top of the BERT embeddings, as shown in Fig 3.
● We used the bert-base-uncased pre-trained model for this task, which has 12 layers, a hidden size of 768, & 110 M parameters.
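The classification head described above (dropout, a hidden layer, a fully connected layer & a softmax) can be sketched in PyTorch. This is a sketch under assumptions, not the authors' code: the hidden size of 256 is our guess (the slides do not report it), and the 768-dim pooled BERT embedding is assumed to be computed upstream by the encoder; the dropout of 0.35 comes from Table 1.

```python
import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    """Sketch of the proposed head: dropout -> hidden layer -> fully
    connected output -> softmax, applied to the 768-dim pooled BERT
    embedding. hidden_dim=256 is an assumption (not reported)."""
    def __init__(self, embed_dim=768, hidden_dim=256, num_classes=5, dropout=0.35):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.hidden = nn.Linear(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, pooled_embedding):
        x = self.dropout(pooled_embedding)
        x = torch.relu(self.hidden(x))
        return torch.softmax(self.out(x), dim=-1)

head = SentimentHead()
probs = head(torch.randn(8, 768))  # a batch of 8 pooled embeddings
print(probs.shape)                 # torch.Size([8, 5])
```

The same kind of head sits on top of the RoBERTa and ALBERT encoders in the following slides, with the ALBERT variant dropping the hidden layer.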
8. Methodology: RoBERTa
● RoBERTa stands for Robustly Optimized BERT Pre-training Approach. It replicates the BERT model while tweaking the hyperparameter settings & increasing the training data size.
● For this task, we fine-tuned the model on the preprocessed tweet data using a dropout layer, a hidden layer, a fully connected layer & a softmax layer on top of the RoBERTa embeddings, as shown in Fig 4.
● We chose the distilroberta-base pre-trained model, which has 6 layers, a hidden size of 768, 12 heads, & 82 M parameters.
ALBERT
● ALBERT (A Lite BERT) was introduced to overcome TPU/GPU memory limitations & long training times.
● We fine-tuned this model on the preprocessed tweet data using a dropout layer, a fully connected layer & finally a softmax layer on top of the ALBERT embeddings, as shown in Fig 5.
● We selected the albert-base-v2 pre-trained model for this task, which has 12 layers, a hidden size of 768, & 11 M parameters.
9. Results & Discussions:
Table 1 shows the sentiment analysis results for all models & the corresponding hyperparameters.
Table 1. Comparison between models
We kept a constant learning rate (lr) of 2e-5 & sequence length (sent length) of 120 for all models, varying the batch size & dropout.
Model   | f1-score | lr   | dropout | batch size | sent length
BERT    | 0.85     | 2e-5 | 0.35    | 8          | 120
RoBERTa | 0.80     | 2e-5 | 0.32    | 32         | 120
ALBERT  | 0.78     | 2e-5 | 0.35    | 8          | 120
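The same settings can be written as a small config dict (a sketch: key names are ours; the model identifiers are the pre-trained checkpoints named in the Methodology section):

```python
# Hyperparameters from Table 1. lr & sent length are constant across
# models; batch size & dropout vary per model.
CONFIGS = {
    "bert-base-uncased":  {"f1": 0.85, "lr": 2e-5, "dropout": 0.35, "batch_size": 8,  "sent_len": 120},
    "distilroberta-base": {"f1": 0.80, "lr": 2e-5, "dropout": 0.32, "batch_size": 32, "sent_len": 120},
    "albert-base-v2":     {"f1": 0.78, "lr": 2e-5, "dropout": 0.35, "batch_size": 8,  "sent_len": 120},
}

# Sanity check: every model shares the same learning rate & sequence length.
assert all(c["lr"] == 2e-5 and c["sent_len"] == 120 for c in CONFIGS.values())
```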
10. Results & Discussions: BERT
Fig 6. Precision-Recall curve
Fig 7. ROC curve
We obtained the best results for BERT at a batch size of 8 & a dropout of 0.35.
11. Results & Discussions: RoBERTa
Fig 8. Precision-Recall curve
Fig 9. ROC curve
We achieved the best results for RoBERTa at a batch size of 32 & a dropout of 0.32.
12. Results & Discussions: ALBERT
Fig 8. Precision-Recall curve
Fig 9. ROC curve
We obtained good model performance for ALBERT at a batch size of 8 & a dropout of 0.35.
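The f1-scores reported in Table 1 summarize performance across all five classes. The slides do not state the averaging scheme, so the dependency-free sketch below assumes macro averaging (per-class f1, then the unweighted mean):

```python
# Macro-averaged f1-score over num_classes classes; macro averaging
# is an assumption, not stated in the slides.
def macro_f1(y_true, y_pred, num_classes=5):
    f1s = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / num_classes

print(round(macro_f1([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2), 4))  # 0.7333
```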
13. Conclusions & Future work:
● In this paper, we fine-tuned Transformer-based pre-trained models, viz. BERT, RoBERTa, & ALBERT, with the proposed method for a multi-class sentiment analysis task on the COVID-19 tweets dataset.
● We obtained the best results for BERT, at the cost of a high training time (batch size = 8). The RoBERTa model achieved acceptable results with less training time (batch size = 32). We got reasonable results for ALBERT, also with a high training time (batch size = 8).
● From an accuracy point of view, the BERT model is the best for multi-class sentiment classification on our dataset, followed by the RoBERTa & ALBERT models. If speed is the main consideration, we recommend using RoBERTa due to its speed of pre-training and fine-tuning with acceptable results.
● This study was conducted at specific batch sizes & dropout values for 5 epochs, so model performance may differ beyond 5 epochs & for other batch sizes & dropout values.
● Future work can investigate how these models perform for different batch sizes & dropout values.
● This work can help practitioners choose the best pre-trained models for sentiment analysis based on accuracy & speed.
14. References:
1. Ashish Vaswani et al., "Attention is all you need", 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, pp. 5998-6008, 2017.
2. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding", Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171-4186.
3. Yinhan Liu et al., "RoBERTa: A robustly optimized BERT pretraining approach", 2019, arXiv preprint arXiv:1907.11692.
4. Zhenzhong Lan et al., "ALBERT: A lite BERT for self-supervised learning of language representations", 2019, arXiv preprint arXiv:1909.11942.
5. Matthias Aßenmacher, Christian Heumann, "On the comparability of pre-trained language models", CEUR Workshop Proceedings, Vol. 2624.
6. Cristóbal Colón-Ruiz, Isabel Segura-Bedmar, "Comparing deep learning architectures for sentiment analysis on drug reviews", Journal of Biomedical Informatics, Volume 110, 2020, 103539, ISSN 1532-0464.
7. Thanapapas Horsuwan, Kasidis Kanwatchara, Peerapon Vateekul, Boonserm Kijsirikul, "A Comparative Study of Pretrained Language Models on Thai Social Text Categorization", 2019, arXiv:1912.01580v1.
8. Carlos Aspillaga, Andrés Carvallo, Vladimir Araujo, "Stress Test Evaluation of Transformer-based Models in Natural Language Understanding Tasks", Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), European Language Resources Association (ELRA), Marseille, pp. 1882-1894, 2020.
9. Vishal Shirsat, Rajkumar Jagdale, Kanchan Shende, Sachin N. Deshmukh, Sunil Kawale, "Sentence Level Sentiment Analysis from News Articles and Blogs using Machine Learning Techniques", International Journal of Computer Sciences and Engineering, Vol. 7, Issue 5, 2019.
10. Avinash Kumar, Savita Sharma, Dinesh Singh, "Sentiment Analysis on Twitter Data using a Hybrid Approach", International Journal of Computer Sciences and Engineering, Vol. 7, Issue 5, May 2019.
11. Dataset: https://www.kaggle.com/datatattle/covid-19-nlp-text-classification