key: cord-0664268-i8p7iapp
authors: Martin, Gati L.; Mswahili, Medard E.; Jeong, Young-Seob
title: Sentiment Classification in Swahili Language Using Multilingual BERT
date: 2021-04-19
journal: nan
DOI: nan
sha: f2fb0f6d437f80ec91f25d488ffcface586c3318
doc_id: 664268
cord_uid: i8p7iapp

The evolution of the Internet has increased the amount of information that people express on different platforms, whether as product reviews, discussions on forums, or posts on social media. The accessibility of these opinions and people's feelings opens the door to opinion mining and sentiment analysis. As language and speech technologies have advanced, many languages have been studied and strong models obtained; however, due to linguistic diversity and a lack of datasets, African languages have been left behind. In this study, we perform sentiment classification on Swahili data using the current state-of-the-art model, multilingual BERT. The dataset was created by extracting and annotating 8.2k reviews and comments from different social media platforms and from the ISEAR emotion dataset, with each text classified as either positive or negative. The model was fine-tuned and achieved a best accuracy of 87.59%.

1 Introduction

The growth of the Internet has increased the amount of data that holds valuable insights into public opinion. Since the volume of generated data is too large for users to analyze manually, sentiment analysis techniques are used to automate the process. Sentiment classification deals with identifying and classifying opinions in text using natural language processing (NLP) techniques. Sentiment analysis is popular in applications such as customer feedback analysis, social media monitoring, and product and service analysis, and its results are useful for understanding users' perceptions of and satisfaction with products and services.

Sentiment classification has evolved with different machine learning techniques, including traditional machine learning (Samuel et al., 2020) and deep learning (Kim and Jeong, 2019). Recently, significant results have been reported using the pre-trained state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) model (Devlin et al., 2018). Many researchers have applied BERT-based models to different NLP tasks, including sentiment classification, intent detection, and emotion classification. The majority of these studies have focused on English or other high-resource languages; such implementations are still lacking for low-resource languages such as Swahili, Yoruba, and Zulu.

Swahili is a Bantu language spoken in multiple countries in Africa, mainly in Tanzania, Kenya, and Uganda, where it is an official language. It contains many loanwords from Arabic, English, and other Bantu languages. Africa accounts for approximately one-third of the world's languages, with a diversity of over 2,000 languages, many of which are primarily oral with little written material. This shortage of online resources and datasets has stunted research in these geographical areas despite the free availability of NLP architectures. In this study, we perform binary sentiment classification using multilingual BERT (mBERT) on an 8.2k Swahili dataset that we created from different Swahili online platforms such as JamiiForum, 1 and DW Kiswahili (Deutsche Welle).
2 Related Work

Sentiment classification is one of the most popular tasks in NLP, and much research has been conducted on it using different machine learning techniques. These studies have focused on either binary (positive and negative) or ternary (positive, negative, and neutral) sentiment classification. Jagdale et al. (2019) used support vector machines (SVM) and Naïve Bayes (NB) for binary classification of Amazon product reviews and achieved 98.17% and 93.54% accuracy, respectively, on camera reviews. Samuel et al. (2020) conducted binary classification of COVID-19 tweets to understand COVID-19's informational crisis by comparing two essential ML methods in the context of textual analytics; NB achieved an accuracy of 91%, compared to 74% for logistic regression, on short tweets, while both performed poorly on longer tweets. Although these traditional machine learning (ML) models have made great contributions, their performance is limited by their reliance on feature selection: features must be defined and extracted either manually or with feature selection methods. Deep learning (DL) techniques, in contrast, are known for their competence in extracting features automatically. Kumar et al. (2020) compared ML approaches (maximum entropy, NB, SVM) and DL approaches (long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN) (Kim, 2019)) to explore the impact of age and gender on the binary classification of sentiment reviews; CNN achieved the best accuracy, 78% for age and 80% for gender. Li et al. (2020) applied a simple recurrent network (SRN), LSTM, and CNN to sentiment analysis of a movie-review dataset to evaluate the effect of data quality on model performance; in that study, CNN achieved higher accuracy on short, readable reviews. Kim and Jeong (2019) used three movie-review datasets to design binary and ternary sentiment classification models with accuracies of 81% and 68%, respectively, and showed that employing consecutive convolutional layers is effective for longer texts. Dang et al. (2020) examined two text processing techniques, word embeddings and term frequency-inverse document frequency (TF-IDF), on deep neural network, recurrent neural network, and CNN models using 8 datasets (tweets and reviews); their results show that it is better to combine deep learning techniques with word embeddings than with TF-IDF.

Pre-trained word embeddings such as Word2Vec (Mikolov et al., 2013) have limitations: they cannot handle out-of-vocabulary words and are context-independent. The pre-trained state-of-the-art BERT model (Devlin et al., 2018) addresses these limitations with an attention mechanism that takes context into consideration, and it has achieved remarkable results in many NLP tasks, including sentiment classification. Lee et al. (2020) performed binary sentiment classification on U.S. stock reviews and achieved an accuracy of 87.3%. Wang et al. (2020) used a Chinese BERT model to classify ternary sentiments and analyze the characteristics of negative sentiment about COVID-19 on a popular Chinese social medium (Weibo). Biswas et al. (2020) classified sentences from Stack Overflow posts into three sentiment classes using BERT and achieved an F1 score of 87%. Other studies show the strong impact of mBERT on low-resource languages: Messaoudi et al. (2020) compared several deep learning models for binary classification of 9k Tunisian social media comments and achieved an accuracy of 93.8% using mBERT.
Cruz and Cheng (2019) evaluated fine-tuning techniques (BERT and Universal Language Model Fine-tuning (Howard and Ruder, 2018)) on binary sentiment data in the Filipino language and achieved better performance with BERT. To our knowledge, no BERT-based model for sentiment classification has been implemented for the Swahili language. Using mBERT, we perform binary sentiment classification on annotated Swahili data.

3 BERT

BERT is a neural network-based language model developed by Google (Devlin et al., 2018). It is a bidirectional, unsupervised language representation, pre-trained using only plain text from the BooksCorpus (800M words) and English Wikipedia (2,500M words). Its bidirectionality allows the model to learn the context of a word based on all of its surrounding text. There are two architecture sizes, BERT Base and BERT Large, with 12 and 24 encoder layers, respectively. The model has two phases: pre-training and fine-tuning. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the model is first initialized with the pre-trained parameters and then fine-tuned using labeled data from the downstream task.

The model takes input in a specific format, with input sequences limited to 512 tokens. Input to BERT can be a single sentence or a sentence pair, with the special token [CLS] marking the beginning of a sequence and [SEP] marking the end of a sentence or the boundary between the two sentences of a pair. To feed our sentences to BERT, they must be split into tokens by the WordPiece tokenizer, and these tokens must then be mapped to their indices in the tokenizer vocabulary. BERT uses the WordPiece algorithm to generate a fixed-size vocabulary of individual characters, the most common words, and subwords from a training corpus. Because the vocabulary is fixed, some tokens may not appear in it, causing an out-of-vocabulary (OOV) issue; the WordPiece algorithm handles this by splitting such words into smaller subword or character tokens that can be mapped to the vocabulary file, prefixing them with ## to indicate a suffix that follows another subword (as shown in Figure 1).

In our experiments, we use the multilingual version of BERT (mBERT), which is trained on the Wikipedia pages of 104 languages with a shared WordPiece vocabulary. Swahili is among the languages that mBERT was trained on, which gives the model an advantage in capturing the linguistic features of the language. Tanzania has many tribes (e.g., Sukuma, Konde, Maasai) that differ in their accents, and this has affected the representation of the Swahili language in both oral and written form. While most existing research is based on the standard (formal) language, the language most used on social media platforms is informal (Table 1). This is influenced by age differences (Kumar et al., 2020) and by loanwords from other languages. Some English words are modified and used as Swahili although they carry no Swahili semantic meaning; for example, ccta (sister) means dada, and faza (father) means baba. All of these features appear in our dataset, because social media platforms involve people of different ages, locations, and perspectives. These characteristics make mBERT the better choice for handling OOV words and accommodating loanwords from other languages. Figure 1 shows the BERT model architecture that we use in fine-tuning for sentiment classification: we add a dense layer on top of the pre-trained model and maintain the other hyper-parameters as stated in the original paper.
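The tokenization and classification-head setup described above can be illustrated with a minimal sketch. The paper does not name its implementation; the HuggingFace transformers library, the checkpoint name bert-base-multilingual-cased, and the example tokenization output are our assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' code): WordPiece tokenization and a dense
# classification head on top of mBERT, via the HuggingFace transformers library.
from transformers import BertTokenizer, BertForSequenceClassification

# Checkpoint name is an assumption; mBERT covers 104 Wikipedia languages,
# Swahili included, with a shared WordPiece vocabulary.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# Informal or out-of-vocabulary words are split into subwords; the "##" prefix
# marks a piece that continues the preceding subword.
print(tokenizer.tokenize("manzi mkali"))  # e.g. ['man', '##zi', 'mka', '##li']

# [CLS] and [SEP] are added automatically; sequences are capped at 512 tokens.
batch = tokenizer("miyeyusho kinoma", truncation=True, max_length=512,
                  return_tensors="pt")

# BertForSequenceClassification places a single dense (linear) layer over the
# pooled [CLS] representation, matching the architecture described above.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # positive vs. negative
logits = model(**batch).logits  # shape: (1, 2)
```

During fine-tuning, only the dense layer is newly initialized; all other weights start from the pre-trained checkpoint and are updated on the labeled downstream data, as described above.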
Table 1: Examples of informal and formal Swahili expressions with English translations.

           Positive         Negative
informal   manzi mkali      miyeyusho kinoma
formal     binti mzuri      tenda kinyume
English    gorgeous lady    disappointment

4 Dataset

We use the ISEAR 3 emotion dataset, which contains seven emotions (joy, fear, anger, sadness, disgust, shame, and guilt), and convert them into sentiments by taking joy as positive and all the others as negative. We collect further data from different sources, including online discussion forums (e.g., JamiiForum, DW Kiswahili) and social media (e.g., Twitter, YouTube), and annotate them manually; in total we have 8.2k examples, comprising 2.7k positive and 5.4k negative sentiments. The data were extracted manually, except for the tweets, for which we use Tweepy, the open-source Python library for accessing the Twitter API. 4 The dataset covers different topics, including politics, various aspects of daily life, and psychology (ISEAR).

The annotation was done by two native Swahili speakers from Tanzania. Each annotator identified the sentiment of each text, and the resulting labels were compared. Where the labels differed, the annotators had to agree on the correct label without uncertainty; we removed all texts whose labels remained uncertain, as well as neutral ones. Social media data are very noisy and include much unstructured content that may affect model performance. To be on the safe side, we pre-process our data to remove unwanted characters and words such as usernames, links, and some punctuation. We split the data into two sets, with 10% used for testing and the remaining 90% for training; a sketch of this pipeline follows the conclusion below.

5 Results

Table 3 shows the sentiment classification results on our dataset; the accuracy and the per-class precision, recall, and F1-score were computed. Our model achieves an accuracy of 87.59%. The scores for the negative class are higher than those for the positive class. Table 4 shows the confusion matrix, in which the proportion of positive texts predicted as negative is high. This is due to the unbalanced data: the higher ratio of negative examples makes the model more sensitive to negative predictions.

6 Conclusion

In this paper, we performed sentiment classification on a Swahili dataset that we extracted from different online social media platforms. We applied the pre-trained mBERT model and achieved a best accuracy of 87.59%. As stated above, freely available social media data carry biases such as sarcasm, poor data quality, and semantic problems that can affect model performance. In addition, our observations of the predictions show that the size and class ratio of the data matter for model performance. Since this research is in progress, the following issues will be addressed: the class ratio of the data and comparison with other NLP models. While mBERT has shown great performance, other studies demonstrate the power of language-specific BERT models over mBERT. In the future, we will train a Swahili-specific BERT that can better accommodate the language's features and be applicable to different tasks.
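As referenced in Section 4, the cleaning, 90/10 split, and per-class evaluation described above can be sketched as follows. This is a minimal illustration: the regular expressions, the toy example texts, the random seed, and the use of scikit-learn are our assumptions about a typical pipeline, not the authors' released code.

```python
# Minimal sketch (not the authors' code) of the preprocessing, split, and
# evaluation steps described in the paper, using scikit-learn.
import re
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

def clean(text: str) -> str:
    """Remove usernames, links, and punctuation, as the paper describes."""
    text = re.sub(r"@\w+", "", text)                    # usernames
    text = re.sub(r"https?://\S+|www\.\S+", "", text)   # links
    text = re.sub(r"[^\w\s]", "", text)                 # punctuation
    return re.sub(r"\s+", " ", text).strip()

# Toy stand-ins for the 8.2k annotated examples (hypothetical data);
# label 1 = positive, 0 = negative.
raw = ["binti mzuri sana! https://t.co/xyz", "@rafiki tenda kinyume",
       "manzi mkali kweli", "miyeyusho kinoma sana",
       "kazi nzuri", "habari mbaya"]
texts = [clean(t) for t in raw]
labels = [1, 0, 1, 0, 1, 0]

# 90% training / 10% testing, as in the paper (the fixed seed is ours).
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.10, random_state=42)

# After fine-tuning, the per-class precision/recall/F1 and the confusion
# matrix (Tables 3 and 4) follow from the model's test-set predictions.
y_pred = y_test  # placeholder for the fine-tuned model's predictions
print(classification_report(y_test, y_pred, labels=[0, 1],
                            target_names=["negative", "positive"],
                            zero_division=0))
print(confusion_matrix(y_test, y_pred, labels=[0, 1]))
```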
References

Biswas et al. (2020). Achieving reliable sentiment analysis in the software engineering domain using BERT.
Cruz and Cheng (2019). Evaluating language model finetuning techniques for low-resource languages.
Dang et al. (2020). Sentiment analysis based on deep learning: A comparative study.
Devlin et al. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding.
Hochreiter and Schmidhuber (1997). Long short-term memory.
Howard and Ruder (2018). Universal language model fine-tuning for text classification.
Jagdale et al. (2019). Sentiment analysis on product reviews using machine learning techniques.
Kim and Jeong (2019). Sentiment classification using convolutional neural networks.
Kim (2014). Convolutional neural networks for sentence classification.
Kingma and Ba (2015). Adam: A method for stochastic optimization.
Kumar et al. (2020). Exploring impact of age and gender on sentiment analysis using machine learning.
Lee et al. (2020). BERT-based stock market sentiment analysis.
Li et al. (2020). How textual quality of online reviews affect classification performance: A case of deep learning sentiment analysis.
Messaoudi, Haddad, Ben HajHmida, et al. (2020). Learning word representations for Tunisian sentiment analysis.
Mikolov et al. (2013). Efficient estimation of word representations in vector space.
Samuel et al. (2020). COVID-19 public sentiment insights and machine learning for tweets classification.
Wang et al. (2020). COVID-19 sensing: Negative sentiment analysis on social media in China via BERT model.