Article
SENTIMENT ANALYSIS IN SOCIAL MEDIA: HOW DATA SCIENCE IMPACTS PUBLIC OPINION KNOWLEDGE INTEGRATES NATURAL LANGUAGE PROCESSING (NLP) WITH ARTIFICIAL INTELLIGENCE (AI)
Citation: Alam, M. S., Mrida, M. S. H., & Rahman, M. A. (2025). Sentiment analysis in social media: How data science impacts public opinion knowledge integrates natural language processing (NLP) with artificial intelligence (AI). American Journal of Scholarly Research and Innovation, 4(1), 63–100. https://doi.org/10.63125/r3sq6p80

Received: December 19, 2024 | Revised: December 25, 2024 | Accepted: February 18, 2025 | Published: March 29, 2025

Copyright: © 2025 by the author. This article is published under the license of American Scholarly Publishing Group Inc and is available for open access.

ABSTRACT
This systematic literature review investigates the advancements, methodologies, challenges, and application domains of sentiment analysis with a particular focus on informal digital text such as social media content. A total of 91 peer-reviewed articles published between 2010 and 2024 were carefully selected and analyzed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to ensure methodological rigor, transparency, and reproducibility. The review spans traditional machine learning algorithms, deep learning models, and transformer-based architectures, examining their role in enhancing sentiment classification accuracy across various textual and multimodal inputs. Key themes emerging from the analysis include the evolution of multimodal sentiment analysis incorporating emojis, images, and videos; the growing focus on emotion classification beyond polarity detection; and the development of multilingual and cross-lingual sentiment systems that aim to extend sentiment mining beyond English-dominated datasets. Furthermore, a notable subset of studies addressed the complexities of detecting sarcasm, irony, and linguistic ambiguity, highlighting significant limitations in even the most advanced models. The review also discusses the growing body of research in financial, political, and health-related sentiment analysis, where domain-specific customization has proven critical for reliable prediction. Despite technical progress, challenges remain in areas such as data imbalance, inconsistent evaluation metrics, lack of cross-domain generalizability, and insufficient attention to ethical concerns, including algorithmic bias and explainability. This review contributes a synthesized and critical understanding of the current state of sentiment analysis and identifies key research gaps, offering a valuable reference point for scholars, developers, and practitioners aiming to improve the robustness, inclusivity, and ethical grounding of sentiment analysis systems.

KEYWORDS
Sentiment Analysis; Natural Language Processing (NLP); Artificial Intelligence (AI); Social Media Analytics; Public Opinion Mining
INTRODUCTION
The proliferation of social media platforms such as Twitter, Facebook, Instagram, and Reddit has
created a transformative landscape for communication and public discourse (Thara &
Poornachandran, 2022). These platforms have evolved into dynamic arenas where individuals
express opinions, share experiences, and engage with sociopolitical events in real time. As a result,
vast quantities of unstructured text data are generated daily, offering a valuable resource for
analyzing public opinion (Dijck & Poell, 2018). The unfiltered and spontaneous nature of social
media interactions makes them particularly insightful for gauging sentiment across diverse
populations and contexts (Weller, 2016). This deluge of textual information has positioned social
media as a central focus in computational linguistics and data-driven decision-making research.
Furthermore, sentiment analysis, also known as opinion mining, has emerged as a core technique
in interpreting emotional and subjective content in textual data. By leveraging computational
methods, sentiment analysis categorizes text into positive, negative, or neutral sentiments (Bose
et al., 2019). Traditional machine learning techniques such as Naïve Bayes, Support Vector
Machines (SVM), and logistic regression laid the groundwork for early sentiment classification (Yue
et al., 2018). However, these models faced challenges in handling the nuances and complexities
of natural language, such as sarcasm, idioms, and context-dependent expressions (Ji et al., 2016).
These limitations catalyzed a shift toward more sophisticated approaches that integrate Natural
Language Processing (NLP) and Artificial Intelligence (AI) to improve semantic understanding and
contextual interpretation. Moreover, the integration of NLP with AI has dramatically enhanced
the performance and applicability of sentiment analysis across domains. NLP provides the
linguistic structure necessary for machines to process and understand human language, while AI,
particularly deep learning, empowers models to learn patterns from large datasets (Shah & Shah,
2020).
Figure 1: Sentiment Analysis and Opinion Mining (Source: https://www.alpha-quantum.com)
According to Dijck and Poell (2018), neural network architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), including Long Short-Term Memory (LSTM) models, have demonstrated improved accuracy in sentiment detection tasks. Moreover,
transformer-based models such as BERT (Bidirectional Encoder Representations from Transformers)
have redefined the state of the art in sentiment classification by enabling bidirectional context
analysis and pre-training on massive corpora (Yue et al., 2018). Social media sentiment analysis
has proven instrumental in various applications, particularly in politics, marketing, and public
health. During election cycles, for instance, sentiment trends on Twitter have been used to
forecast voter preferences and candidate popularity (Ji et al., 2016). In the commercial sector,
companies analyze customer reviews and brand mentions to shape marketing strategies and
assess consumer satisfaction (Shah & Shah, 2020). Public health researchers have also employed
sentiment analysis to track vaccine hesitancy, monitor mental health trends, and assess public
reactions to health crises such as the COVID-19 pandemic (Xu et al., 2022). These studies
underscore the critical role that sentiment analysis plays in understanding collective attitudes and
responses. Several methodologies underpin the effective execution of sentiment analysis in social
media contexts. Preprocessing techniques such as tokenization, stop-word removal, stemming,
and lemmatization are fundamental steps that prepare raw text for analysis (Xu et al., 2022; Yue
et al., 2018). Feature extraction methods like Term Frequency-Inverse Document Frequency (TF-
IDF), word embeddings (Koukaras et al., 2019), and contextual embeddings from transformer
models significantly influence the performance of sentiment classifiers (Tabinda Kokab et al.,
2022). Evaluation metrics such as accuracy, precision, recall, and F1-score are commonly used to
assess the effectiveness of sentiment models across various datasets and languages (Paul et al.,
2024; Shah & Shah, 2020; Tabinda Kokab et al., 2022). Challenges in sentiment analysis arise from
linguistic diversity, ambiguity, multilingualism, and platform-specific language behaviors such as
hashtags, emojis, and abbreviations (Burdisso et al., 2019; Shah & Shah, 2020). Domain adaptation
remains a persistent issue, as models trained on one type of data may perform poorly on data
from a different domain or context (Pathak et al., 2021). Furthermore, the ethical implications of
analyzing user-generated content—particularly issues surrounding privacy, informed consent,
and algorithmic bias—have gained attention in recent years (He et al., 2022; Tabinda Kokab et
al., 2022; Xu et al., 2022). These challenges highlight the need for robust frameworks and
transparent methodologies in the practice of social media sentiment analysis. The theoretical
foundations of sentiment analysis are supported by interdisciplinary research spanning computer
science, linguistics, psychology, and communication studies. The emotional valence theory, for
instance, provides a psychological basis for categorizing sentiments, while computational
semantics offers a structural approach to modeling meaning (Abdelfatah et al., 2017). The
synergy between these disciplines has led to the creation of lexicons such as SentiWordNet, AFINN,
and VADER, which serve as critical resources for sentiment polarity identification (Paltoglou &
Thelwall, 2012). These resources, combined with AI-powered analytical frameworks, enable the
scalable and real-time processing of public sentiments, enhancing understanding across a wide
array of social and commercial phenomena. To objectively explore the role of data science in
sentiment analysis for public opinion mining on social media, this study conducts a systematic
literature review (SLR) grounded in a transparent, reproducible methodology. The primary
objective is to identify, evaluate, and synthesize peer-reviewed research that integrates Natural
Language Processing (NLP) and Artificial Intelligence (AI) for sentiment detection across major
social platforms. Following the Preferred Reporting Items for Systematic Reviews and Meta-
Analyses (PRISMA) framework, this review systematically collects articles published between 2010
and 2024 from digital databases such as Scopus, Web of Science, IEEE Xplore, and ACM Digital
Library. The inclusion criteria target empirical studies, methodological advancements, and
applied research focusing on social media sentiment analysis that leverages NLP and AI
technologies. The review aims to categorize dominant approaches, assess model performance,
and reveal research gaps in terms of data preprocessing, algorithm selection, multilingual
sentiment handling, and ethical considerations. By synthesizing evidence from interdisciplinary
sources, this review establishes a comprehensive understanding of how AI-driven sentiment
analysis enhances public opinion knowledge and supports data-informed decision-making.
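To make the preprocessing and lexicon-based scoring steps referenced above concrete, the following Python sketch uses the open-source NLTK toolkit for tokenization, stop-word removal, and lemmatization, and NLTK's bundled VADER analyzer for polarity scores. The example tweet and the chosen resources are illustrative only and are not drawn from the reviewed studies.

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # VADER lexicon, shipped with NLTK

# One-time downloads of the required NLTK resources
# (recent NLTK versions may additionally require the "punkt_tab" resource)
for pkg in ("punkt", "stopwords", "wordnet", "vader_lexicon"):
    nltk.download(pkg, quiet=True)

tweet = "Honestly loving the new update, it is NOT as buggy as before!"

# Basic preprocessing: lowercase, tokenize, drop stop words, lemmatize
tokens = word_tokenize(tweet.lower())
stops = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
cleaned = [lemmatizer.lemmatize(t) for t in tokens if t.isalpha() and t not in stops]
print(cleaned)

# Lexicon-based polarity scoring on the raw text (VADER handles negation and capitalization itself)
scores = SentimentIntensityAnalyzer().polarity_scores(tweet)
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}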
LITERATURE REVIEW
The field of sentiment analysis in social media has undergone rapid evolution, driven by the
convergence of Natural Language Processing (NLP), Artificial Intelligence (AI), and Big Data
technologies. Understanding public opinion through digital interactions has become an essential
focus across disciplines such as political science, marketing, public health, and communication
studies. As platforms like Twitter, Facebook, and YouTube continue to generate large volumes of
user-generated content, researchers have sought innovative methods to extract sentiment
signals embedded in textual, visual, and multimodal data. The use of AI models—ranging from
classical machine learning to deep learning and transformer-based architectures—has
transformed the scope and scalability of sentiment analysis. This literature review synthesizes
findings from peer-reviewed research published over the last decade, highlighting key
methodological advancements, emerging techniques, domain-specific applications, and
existing challenges. A systematic and thematic categorization approach is applied to classify the
literature into distinct areas of focus. By evaluating prior work across multiple dimensions, the
review provides a structured understanding of how sentiment analysis technologies are shaping
the interpretation of public opinion in the digital era.
Sentiment Analysis
Sentiment analysis, also known as opinion mining, has become a key method in extracting
subjective information from text, particularly on social media platforms where users frequently
express emotions, opinions, and attitudes (Schouten & Frasincar, 2016). The linguistic complexity
and informal nature of social media content necessitate advanced Natural Language Processing
(NLP) techniques to interpret sentiment accurately (Hemmatian & Sohrabi, 2017; Ravi & Ravi,
2015). Preprocessing steps such as tokenization, stop-word removal, stemming, and lemmatization
are widely used to clean and structure data (Xu et al., 2022). Lexicon-based approaches like
SentiWordNet (Ma et al., 2018), AFINN (Gaikwad & Joshi, 2016), and VADER (Wu et al., 2019) assign
sentiment polarity scores to words, allowing for simple yet interpretable classification. However,
these methods often struggle with sarcasm, negation, and domain specificity (Hassan et al., 2022).
As a result, hybrid techniques combining rule-based lexicons with machine learning models have
been introduced to mitigate these limitations (Chandrasekaran et al., 2022; Wu et al., 2019).
Figure 2: Exploring the Dimensions of Sentiment Analysis
The integration of Artificial Intelligence (AI), particularly machine learning and deep learning, has significantly enhanced sentiment analysis by enabling scalable and automated processing of social media data (Chandrasekaran et al., 2022; Fengjiao & Aono, 2018). Traditional classifiers such as Support Vector Machines (SVM), Naïve Bayes, and Decision
Trees have been employed to classify sentiment with moderate success (do Carmo et al., 2017;
Hassan et al., 2022). With the rise of deep learning, models such as Convolutional Neural Networks
(CNNs) and Long Short-Term Memory (LSTM) networks have shown superior performance in
capturing long-range dependencies in text (Bai & Yu, 2016; Fu et al., 2011). More recently,
transformer-based models like BERT and RoBERTa have outperformed previous architectures by
utilizing attention mechanisms and contextual embeddings (Nemes & Kiss, 2020). These models
are capable of handling nuanced expressions and have been successfully applied in real-time
sentiment tracking (Mansour, 2018). The flexibility and generalizability of these models have
allowed researchers to adapt them across a variety of datasets and domains (Hao & Dai, 2016;
Mansour, 2018).
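As a concrete illustration of the transformer-based classification described above, the snippet below uses the Hugging Face transformers pipeline API; the checkpoint name is one publicly available English sentiment model given only as an example, not the specific systems evaluated in the reviewed studies.

from transformers import pipeline

# Minimal sketch: the checkpoint is an illustrative English sentiment model
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "This documentary completely changed how I use my phone.",
    "Another outage today. Absolutely fed up with this platform.",
]
for post, result in zip(posts, classifier(posts)):
    print(result["label"], round(result["score"], 3), post)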
Social media sentiment analysis is complicated by linguistic diversity, slang, emojis, and
multilingual content, especially in non-English-speaking regions (Wu et al., 2017; Xiong et al., 2018).
Standard sentiment models often underperform in multilingual settings due to cultural and
grammatical variations (Zhou et al., 2016). Cross-lingual embeddings and machine translation
have been proposed to mitigate these challenges, enabling sentiment analysis across languages
using shared semantic spaces (Diamantini et al., 2019). Tools such as LASER, mBERT, and XLM-R
have been designed to capture multilingual representations with increasing accuracy (Mansour,
2018). Nonetheless, even with sophisticated models, achieving consistent accuracy across
languages and domains remains a methodological obstacle (Hao & Dai, 2016). Moreover, the
brevity of posts, the use of hashtags, and the dynamic evolution of online language further
compound the issue of generalization and domain adaptation (Hao & Dai, 2016; Md Suhaimin et
al., 2023).
Sentiment analysis has found extensive applications across several domains, including political
forecasting, brand monitoring, and public health analysis. In political science, Twitter sentiment
has been used to analyze public support for candidates and predict election outcomes with
notable success (Dhaoui et al., 2017; Nemes & Kiss, 2020). Similarly, marketing research relies on
sentiment mining to gauge customer satisfaction and brand loyalty through online reviews and
feedback (Hao & Dai, 2016). For instance, sentiment shifts detected in product reviews can serve
as early indicators of consumer dissatisfaction (Suhaimin et al., 2023). In the public health sector,
social media sentiment analysis has been utilized to assess public response to vaccination
campaigns and mental health trends (Dhaoui et al., 2017). During the COVID-19 pandemic,
researchers analyzed Twitter sentiment to understand public anxiety, misinformation spread, and
compliance with health regulations (Nkomo et al., 2020). These diverse use cases highlight the
utility of sentiment analysis in real-time societal monitoring and evidence-based response
strategies. Evaluating sentiment analysis models requires robust metrics and benchmarking tools
to ensure reproducibility and comparability (Zhao et al., 2014). Commonly used datasets such as
the Stanford Sentiment Treebank (SST), IMDb, and Twitter corpora provide standardized platforms
for performance testing (Diamantini et al., 2019; Zhao et al., 2014). Evaluation is typically based
on metrics like accuracy, precision, recall, F1-score, and area under the curve (AUC) (Hao & Dai,
2016). Beyond technical performance, ethical considerations are gaining prominence,
particularly concerning privacy, algorithmic bias, and the potential misuse of sentiment data
(Nkomo et al., 2020). Algorithms trained on biased or unbalanced data may reinforce stereotypes
or misclassify minority sentiments (Zhao et al., 2014). Furthermore, the unregulated mining of user-
generated content raises legal and ethical concerns, especially when sentiments are used for
political manipulation or behavioral targeting (Dhaoui et al., 2017; Zhao et al., 2014). These
limitations necessitate greater transparency, fairness, and accountability in sentiment analysis
research and application.
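The evaluation practice summarized above (accuracy, precision, recall, F1-score) can be reproduced with scikit-learn; the gold labels and predictions below are placeholders used only to show the metric calls.

from sklearn.metrics import accuracy_score, classification_report, precision_recall_fscore_support

# Placeholder gold labels and model predictions for a three-class polarity task
y_true = ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "neg"]
y_pred = ["pos", "neg", "pos", "pos", "neu", "neu", "pos", "neg"]

print("accuracy:", accuracy_score(y_true, y_pred))
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print("macro precision=%.2f recall=%.2f F1=%.2f" % (prec, rec, f1))
print(classification_report(y_true, y_pred, zero_division=0))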
Role of Data Science and Computational Linguistics in Sentiment Detection
Sentiment detection has evolved at the intersection of data science and computational
linguistics, drawing from methods that analyze linguistic patterns and statistical representations to
extract emotional valence from text (Deng et al., 2019). Data science provides the infrastructure
and computational tools for processing large-scale, high-velocity text data, while computational
linguistics contributes the syntactic and semantic rules that guide machine understanding of
human language (Hao & Dai, 2016). This interdisciplinary synergy has allowed researchers to
uncover insights into user attitudes, emotions, and opinions across platforms such as Twitter,
Facebook, and Reddit (Diamantini et al., 2019). Techniques such as part-of-speech tagging,
dependency parsing, and named entity recognition have helped ground sentiment classification
in grammatical and contextual awareness (Hassan et al., 2013; Nkomo et al., 2020). These
approaches are supported by vector space models and statistical representations that convert
linguistic features into machine-readable formats for sentiment modeling (Deng et al., 2019; Singla
et al., 2017).
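The grammatical grounding described here (part-of-speech tagging, dependency parsing, named entity recognition) can be obtained with an off-the-shelf pipeline such as spaCy; the sentence and the small English model below are illustrative choices.

import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple's new update isn't nearly as smooth as the reviewers on Reddit claimed.")

# Part-of-speech tags and dependency relations for each token
for token in doc:
    print(token.text, token.pos_, token.dep_, "head=" + token.head.text)

# Named entities detected in the sentence
for ent in doc.ents:
    print(ent.text, ent.label_)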
Effective sentiment detection relies heavily on feature extraction methods that bridge linguistic
theory and data-driven modeling. Term Frequency-Inverse Document Frequency (TF-IDF), n-gram
models, and word embeddings such as Word2Vec and GloVe represent early efforts to capture
context and semantics in sentiment-bearing expressions (Ji et al., 2016; Yue et al., 2018). These
tools allow systems to associate sentiment scores with semantically rich patterns rather than
isolated keywords (Steiner-Correa et al., 2017). Advanced approaches have adopted contextual
embeddings from transformer-based models such as BERT and GPT, offering richer linguistic
comprehension by integrating sentence-level context (Kim et al., 2022; Steiner-Correa et al.,
2017). Data science enables these linguistic models to scale across massive datasets, facilitating
training, validation, and real-time inference (Schouten & Frasincar, 2016). These developments
have enabled more nuanced sentiment detection, including polarity, intensity, and emotion
classification in noisy and diverse textual inputs (Hemmatian & Sohrabi, 2017).
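A minimal example of the TF-IDF feature extraction discussed above, written with scikit-learn; the four short posts are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "great battery life, love this phone",
    "battery drains fast, terrible phone",
    "customer service was great and quick",
    "terrible service, never buying again",
]

# Unigrams and bigrams weighted by TF-IDF, with English stop words removed
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(posts)

print(X.shape)                                   # (documents, features)
print(vectorizer.get_feature_names_out()[:10])   # sample of the learned vocabulary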
Figure 3: Types of Sentiment Analysis
Data science has enabled the deployment of machine learning algorithms capable of
processing and learning from vast quantities of linguistically annotated data. Supervised learning
models like Naïve Bayes, SVM, and Random Forests have long been applied to sentiment
classification tasks due to their simplicity and interpretability (Vinoth & Prabhavathy, 2022). These
models have been enhanced by ensemble techniques and neural architectures, including LSTM
and GRU networks, which model temporal dependencies and sequential information critical in
language processing (Chaudhari et al., 2021). The convergence of these models with
computational linguistics allows the encoding of grammatical dependencies and sentence
structure into feature representations (Dangi et al., 2022; Gaikwad & Joshi, 2016). Transfer learning
and pre-trained language models, particularly BERT, RoBERTa, and XLNet, have further enabled
fine-tuned sentiment detection across tasks and domains (Billah & Hassan, 2019). Data science
frameworks facilitate this learning process by supporting hyperparameter tuning, performance
benchmarking, and model interpretability (Sanoussi et al., 2022). Beyond textual data, sentiment
detection has expanded into multimodal analysis, integrating visual, audio, and textual cues to
enhance accuracy. Social media content often includes images, emojis, memes, and videos that
supplement or contradict textual sentiment, creating complex interpretive scenarios (Sharma &
Sharma, 2020). Data science methodologies such as multimodal fusion and late-stage integration
models have been used to process and combine different data types (Ayyub et al., 2020).
Computational linguistics enables alignment of spoken or written language with visual cues,
particularly in emotion recognition and opinion polarity tasks (Sanoussi et al., 2022). Moreover,
contextual understanding, including hashtag sentiment (Kumar et al., 2020) and emoticon
polarity (Aljarah et al., 2020), is facilitated through linguistically enriched sentiment dictionaries
and contextual embedding tools. These approaches are embedded in data-driven pipelines that
normalize inputs, apply multimodal classifiers, and evaluate outputs across heterogeneous
datasets (Chin et al., 2018). Together, data science and computational linguistics extend
sentiment detection into richer, real-world social interactions.
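The late-stage fusion mentioned above can be sketched as a simple concatenation of independently extracted text and image representations; the random arrays below stand in for real encoder outputs, so the example shows only the wiring of such a pipeline, not a trained system.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features: e.g. 768-d transformer text embeddings and 512-d CNN image features
text_emb = rng.normal(size=(200, 768))
img_feat = rng.normal(size=(200, 512))
labels = rng.integers(0, 2, size=200)            # binary sentiment labels

fused = np.hstack([text_emb, img_feat])          # late fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # near chance here because the features are random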
Lexicon-based methods have been widely used in social media sentiment analysis due to their
simplicity and interpretability. These approaches use predefined dictionaries of words associated
with sentiment scores, including popular lexicons such as SentiWordNet (Yu et al., 2017), AFINN
(Nguyen & Shirai, 2015), and VADER (Wang et al., 2018). Lexicon-based methods have shown
reliable results in short-form texts like tweets and Facebook posts, particularly when context-
independent word-level sentiment is sufficient (Ruder et al., 2016; Wang et al., 2018). However,
limitations arise when handling sarcasm, irony, context ambiguity, and negation (Nguyen et al.,
2020; Zadeh et al., 2017). Hybrid approaches combining lexicons with machine learning classifiers
have emerged to mitigate these issues, offering improved performance without sacrificing
interpretability (Barbieri et al., 2020; Ghosal et al., 2019). Lexicon adaptability across languages
and domains has also been a subject of research, with multilingual lexicons designed for
sentiment mining in cross-cultural contexts (Poria et al., 2015). Feature extraction has been central
to transforming text into structured representations suitable for machine learning. Techniques such
as bag-of-words (BoW), n-gram modeling, and Term Frequency-Inverse Document Frequency (TF-
IDF) have been widely employed in early sentiment classification systems (Wang et al., 2018; Yu
et al., 2017). While these methods capture surface-level term occurrence and co-occurrence,
they lack the semantic depth needed for nuanced sentiment detection (Zadeh et al., 2017).
Advanced methods such as latent semantic analysis (LSA) and topic modeling via Latent Dirichlet
Allocation (LDA) have attempted to capture hidden thematic structures (Fu et al., 2011). These
approaches have been supplemented by syntactic parsing, dependency trees, and part-of-
speech tagging to better represent sentence structure and improve model understanding (Özyurt
& Akcayol, 2021). Although widely used, traditional techniques often struggle with context
sensitivity and polysemy inherent in informal social media language (Alwakid et al., 2022).
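As an illustration of the topic-modeling step (LDA) mentioned above, the following scikit-learn sketch fits a small Latent Dirichlet Allocation model on invented posts and prints the top terms per topic; the number of topics is an assumed value.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "vaccine rollout started in my city today",
    "side effects after the vaccine were mild",
    "stock market dropped after the earnings report",
    "earnings beat expectations and shares rallied",
    "new phone camera is amazing in low light",
    "battery life on the new phone is disappointing",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print("topic", k, ":", top)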
The application of deep learning has transformed NLP tasks by allowing sentiment models to learn
hierarchical features from raw text. Recurrent Neural Networks (RNNs), particularly Long Short-
Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, have been effective in capturing
sequential dependencies and emotional flow in social media text (Sanoussi et al., 2022).
Convolutional Neural Networks (CNNs) have also been used to detect sentiment-bearing phrases
by learning spatial features across word embeddings (Jhanwar & Das, 2018). More recently,
transformer-based architectures like BERT (Bidirectional Encoder Representations from
Transformers) have achieved state-of-the-art performance by using self-attention mechanisms
and bidirectional context encoding (Akula & Garibay, 2021; Baheti & Kinariwala, 2019). These
models have been pre-trained on massive corpora and fine-tuned for sentiment classification
tasks, resulting in enhanced performance across benchmark datasets (Konate & Du, 2018; Liu &
Chen, 2019). Contextual word embeddings derived from these models outperform static
embeddings like Word2Vec and GloVe by accounting for syntactic and semantic variations
(Akula & Garibay, 2021; Ghosh et al., 2017). Evaluating NLP-driven sentiment models requires
standardized metrics such as accuracy, precision, recall, and F1-score, often benchmarked using
datasets like SST, SemEval, and IMDb (Ghosh et al., 2017; Konate & Du, 2018). Challenges persist
in handling code-switching, idiomatic expressions, negation, and sarcasm in informal social
media text (Liu & Chen, 2019; Thara & Poornachandran, 2022). Sarcasm detection has been
explored using contextual modeling and annotation-based corpora, yet remains a difficult task
due to its implicit nature (Ghosh et al., 2017). Multilingual NLP for sentiment analysis has attracted
interest, with models like multilingual BERT (mBERT) and XLM-R showing cross-lingual generalization
capability (Konate & Du, 2018). However, performance often declines in low-resource languages
or culturally diverse corpora, emphasizing the need for robust cross-linguistic tools (Olaniyan et al.,
2023). Tools integrating both symbolic and statistical NLP continue to dominate sentiment analysis
research, especially when aligned with domain-specific adaptations and labeled datasets
(Sanoussi et al., 2022; Baheti & Kinariwala, 2019).
Syntactic and Semantic Analysis in Social Text Mining
Syntactic analysis serves as a crucial element in social text mining by offering structural
representations of sentences that aid in understanding relationships between words and phrases.
Techniques such as part-of-speech (POS) tagging, constituency parsing, and dependency
parsing have been widely adopted to extract syntactic patterns from user-generated content
(Fu et al., 2016). POS tagging helps identify the grammatical roles of words, improving the
precision of downstream sentiment classification tasks (Behdenna et al., 2018). Dependency
parsing, in particular, has been instrumental in detecting modifier-head relationships, which are
essential for recognizing negation and intensification (Moraes et al., 2013). Tools such as the
Stanford CoreNLP and spaCy have facilitated syntactic analysis in large-scale social media
datasets (Gupta et al., 2018; Zhang et al., 2018). In syntactically rich models, sentiment
classification accuracy has improved when syntactic features are incorporated alongside lexical
ones, especially in handling complex sentence structures (Sanoussi et al., 2022).
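Dependency parsing can be used directly for the negation handling discussed above; the sketch below flags tokens attached by spaCy's "neg" relation, a simple cue-detection step rather than a full scope-resolution algorithm, and the example sentence is invented.

import spacy

nlp = spacy.load("en_core_web_sm")  # requires the small English model

def negation_cues(text):
    """Return (negated head, cue) pairs found via the 'neg' dependency relation."""
    doc = nlp(text)
    return [(tok.head.text, tok.text) for tok in doc if tok.dep_ == "neg"]

print(negation_cues("The camera is not good and I don't recommend this phone."))
# e.g. pairs such as ('good', 'not') and ('recommend', "n't"), depending on the parse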
Semantic analysis in social text mining goes beyond structural representation to interpret the
meaning and context of words, phrases, and sentences. Semantic Role Labeling (SRL) identifies
predicates and their associated arguments, enabling better understanding of “who did what to
whom” in a sentence (Deters & Mehl, 2012). Word Sense Disambiguation (WSD) further contributes
by resolving ambiguity in polysemous words, which is critical for sentiment polarity determination
in informal contexts (Sanoussi et al., 2022). In social media analysis, the use of semantic lexicons
such as WordNet, SentiWordNet, and ConceptNet has supported the enrichment of sentiment
models with semantic features (Ansari et al., 2023). These semantic enhancements are particularly
useful in detecting implicit sentiments and context-dependent meanings, which lexicon-only
models often miss (Jhanwar & Das, 2018). Studies applying SRL and WSD report improved
sentiment prediction in domains like politics and brand reviews, where language is nuanced and
domain-specific (Xia et al., 2011).
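The SentiWordNet resource cited above is available through NLTK; the lookup below prints positive, negative, and objectivity scores for the senses of a single illustrative word.

import nltk

for pkg in ("wordnet", "sentiwordnet", "omw-1.4"):
    nltk.download(pkg, quiet=True)

from nltk.corpus import sentiwordnet as swn

# Each WordNet sense of "good" carries positive, negative, and objectivity scores
for senti_synset in list(swn.senti_synsets("good"))[:5]:
    print(senti_synset,
          "pos=", senti_synset.pos_score(),
          "neg=", senti_synset.neg_score(),
          "obj=", senti_synset.obj_score())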
Figure 6: Basic steps to perform sentiment analysis and emotion detection
The integration of syntactic and semantic information has been greatly advanced through deep
learning models, particularly those designed to capture compositional meaning and context.
Recursive Neural Networks (RecNN) and Tree-LSTMs have been applied to parse tree structures,
enabling models to understand hierarchical syntactic relationships (Ravi & Ravi, 2016). Long Short-
Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) have demonstrated the ability
to model long-distance dependencies in text while incorporating syntactic cues (Al Amrani et al.,
2018). Transformer-based models like BERT and RoBERTa, while primarily attention-based, implicitly
capture syntactic and semantic information by training on large-scale corpora with masked
language modeling objectives (Lei et al., 2016). These models have outperformed traditional
classifiers in benchmark sentiment tasks and demonstrated resilience to linguistic complexity
(Piana et al., 2014). Hybrid architectures incorporating both parse trees and contextual
embeddings have achieved high accuracy in sentiment and emotion detection in Twitter data
(Yu et al., 2018).
Syntactic and semantic features are particularly crucial in addressing linguistic phenomena such
as sarcasm, irony, and negation—common in social media discourse. Sarcasm detection relies
on understanding the incongruity between literal meaning and intended sentiment, which
syntactic features alone cannot resolve (Rocktäschel et al., 2015). Semantic analysis, including
sentiment-shifting structures and pragmatic context, has been employed to identify sarcastic
expressions more effectively (Chaudhari et al., 2021; Rocktäschel et al., 2015). Negation handling
remains another core challenge, as it directly alters sentiment polarity. Dependency parsing and
scope-based sentiment reversal techniques have been used to detect negation cues and adjust
classification outcomes accordingly (Shi, 2019; Yu et al., 2018). Additionally, idiomatic and
colloquial expressions require semantic models trained on informal corpora to avoid
misclassification (Mitrović et al., 2011). These studies emphasize the importance of jointly modeling
syntax and semantics to handle ambiguous sentiment expressions typical of social media
platforms. The effectiveness of syntactic and semantic models in sentiment analysis is often
evaluated using benchmark datasets such as the Stanford Sentiment Treebank (Li & Zou, 2024),
SemEval datasets (Ravi & Ravi, 2016), and Twitter-specific corpora (Zhang et al., 2022). Metrics
such as accuracy, F1-score, and Matthews Correlation Coefficient (MCC) have been used to
compare model performance across tasks (Lai et al., 2015). Results show that models integrating
syntactic structures and semantic role labeling consistently outperform those relying on surface
features alone, particularly in domains with high linguistic variability (Habernal et al., 2015; Yu et
al., 2018). However, the portability of such models across domains—e.g., from politics to
healthcare—has been limited by vocabulary drift, domain-specific phraseology, and annotation
inconsistency (Glorot et al., 2011; Pan et al., 2010). Recent studies have explored domain
adaptation techniques using shared embeddings and transfer learning to address these issues
(Ruder et al., 2019; Xu et al., 2020).
AI and Machine Learning Models for Sentiment Classification
Traditional machine learning algorithms have laid the foundation for early sentiment classification
tasks by providing robust statistical models capable of handling large-scale, high-dimensional text
data. Naïve Bayes, Logistic Regression, and Support Vector Machines (SVM) have been
extensively used in sentiment analysis due to their simplicity, interpretability, and computational
efficiency (Aljarah et al., 2020; Younus et al., 2024). These classifiers rely on numerical feature
extraction methods such as bag-of-words (BoW), n-grams, and TF-IDF to transform textual content
into vectors (Hossain et al., 2024; Mahabub, Das, et al., 2024; Mahabub, Jahan, et al., 2024; Vinoth
& Prabhavathy, 2022). Studies have demonstrated that Naïve Bayes performs particularly well on
short-form texts like tweets and product reviews due to its assumption of feature independence
and speed in real-time classification scenarios (Ammar et al., 2024; Haidar et al., 2017; Mahfuj et
al., 2022; Nalinde & Shinde, 2019). Logistic Regression, known for its probabilistic output, has shown
high accuracy in binary sentiment tasks, particularly when combined with linguistic preprocessing
steps (Chaudhari et al., 2021; Faria & Rashedul, 2025; Vinoth & Prabhavathy, 2022). Furthermore, the
Naïve Bayes algorithm has been widely adopted for sentiment classification tasks on social media
platforms due to its speed and low computational cost (Dangi et al., 2022; Dhaoui et al., 2017;
Jahan, 2023). In early Twitter sentiment studies, researchers used Naïve Bayes to classify tweets
into positive, negative, or neutral categories with considerable success when paired with unigram
and bigram features (Ji et al., 2021; Sunny, 2024a, 2024b, 2024c). Despite its strong baseline
performance, the algorithm has shown sensitivity to feature sparsity and context loss, especially in
complex or ambiguous expressions (Hassan et al., 2017; Rahaman & Islam, 2021; Tonoy & Khan,
2023). Enhancements such as Laplace smoothing, feature selection based on mutual information,
and domain-adapted lexicons have been introduced to improve classification performance (Al-
Arafat, Kabi, et al., 2024; Gaikwad & Joshi, 2016; Shah & Shah, 2020). Naïve Bayes has also been
employed in multilingual sentiment tasks and low-resource environments, where it outperformed
more complex models in constrained computational settings (Arafat et al., 2024; Chaudhari et
al., 2021; Mohiul et al., 2022; Nahid et al., 2024).
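A minimal Naïve Bayes baseline of the kind described above can be written with scikit-learn; the handful of labeled posts and the unigram/bigram features are illustrative, and alpha=1.0 corresponds to the Laplace smoothing mentioned in the text.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "love this phone, best purchase ever",
    "worst customer service I have seen",
    "so happy with the fast delivery",
    "totally disappointed, broken on arrival",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # unigram + bigram counts
    MultinomialNB(alpha=1.0),              # alpha=1.0 corresponds to Laplace smoothing
)
model.fit(tweets, labels)

print(model.predict(["very happy with this phone", "delivery was a disaster"]))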
Logistic Regression offers a probabilistic perspective on sentiment classification, making it
especially useful in domains where interpretable and explainable models are required (Hassan et
al., 2017). This algorithm models the probability that a given instance belongs to a particular
sentiment class, facilitating nuanced decision-making in binary or multiclass tasks (Dhaoui et al.,
2017; Saif et al., 2017). Logistic Regression has been effectively applied in scenarios with high-
dimensional and sparse text data, especially when regularization techniques such as L1 and L2
are used to prevent overfitting (Arafat et al., 2024; Vinoth & Prabhavathy, 2022). In comparative
evaluations, Logistic Regression has performed competitively against Naïve Bayes, particularly in
applications involving structured datasets and domain-specific vocabularies (Bhuiyan et al., 2024;
Lin & Kolcz, 2012). The algorithm's compatibility with various vectorization methods—including TF-
IDF, word embeddings, and sentiment lexicons—has enabled its adoption across multiple
platforms and languages (Chen et al., 2022; Chowdhury et al., 2023). Moreover, Support Vector
Machines (SVM) have consistently demonstrated high performance in sentiment classification
due to their ability to maximize the margin between classes and handle non-linear separability
using kernel tricks (Shah & Shah, 2020). Studies have shown that SVM outperforms Naïve Bayes
and Logistic Regression in handling high-dimensional textual data, especially when using linear
and RBF kernels (Gaikwad & Joshi, 2016). In Twitter sentiment analysis, SVM has proven robust
against noisy and imbalanced data, often achieving higher F1-scores in benchmark tasks (Hassan
et al., 2017). Its integration with feature selection methods, such as chi-square and information
gain, further enhances its discriminative power (Chen et al., 2022). Additionally, ensemble
techniques combining SVM with rule-based or lexicon-assisted models have shown improvements
in precision, particularly in sentiment-rich domains like finance and health (Al-Arafat, Kabir, et al., 2024; Chaudhari et al., 2021; Wei et al., 2021; Xu et al., 2019). Logistic Regression stands out for its
interpretability and probabilistic outputs, making it suitable for explainable AI applications (Mohiul
et al., 2022; Yan et al., 2016). However, traditional models often struggle with nuanced linguistic
phenomena such as sarcasm, negation, and idiomatic expressions, which require deeper
contextual understanding (Hossen et al., 2023; Zhang et al., 2022). Despite these limitations, these
algorithms continue to be used as reliable benchmarks in sentiment analysis research and are
frequently integrated into hybrid systems for improved effectiveness (Roksana, 2023; Zhai & Zhang,
2015; Zhang et al., 2011).
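The SVM-with-feature-selection setup discussed above can be sketched as a scikit-learn pipeline combining TF-IDF features, chi-square selection, and a linear SVM; the toy data and the value of k are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "the new update is fantastic and fast",
    "constant crashes make this app unusable",
    "great interface and helpful support team",
    "support ignored my ticket, very frustrating",
    "really smooth experience, highly recommend",
    "refund process was slow and annoying",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("chi2", SelectKBest(chi2, k=20)),   # keep the 20 most class-discriminative features
    ("svm", LinearSVC()),
])
clf.fit(texts, labels)
print(clf.predict(["fast and smooth, recommend it", "crashes and slow support"]))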
Deep Learning Models: CNN, RNN, LSTM, and GRU
Deep learning has transformed sentiment analysis by enabling models to learn hierarchical and
contextual representations of textual data without manual feature engineering. Unlike traditional
models that depend on sparse representations, deep learning architectures such as
Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory
networks (LSTM), and Gated Recurrent Units (GRU) extract semantic and syntactic patterns from
sequences, improving classification accuracy (Sabid & Kamrul, 2024; Usama et al., 2019). These
models have been applied effectively to large-scale social media datasets, offering superior
performance in handling informal, noisy, and unstructured text common on platforms like Twitter
and Reddit (Aklima et al., 2022; Munira, 2025; Usama et al., 2019; Zhou et al., 2020). Their ability to
learn from sequential context and capture deep feature dependencies has made them suitable
for sentiment-rich applications such as emotion recognition, stance detection, and opinion
mining (Jim et al., 2024; Rojas-Barahona, 2016; L. Zhang et al., 2018). Moreover, CNNs, originally
designed for image recognition, have been successfully adapted to NLP tasks, particularly for
detecting local patterns in text such as sentiment-bearing n-grams (Aklima et al., 2022; Kastrati et
al., 2021). By applying convolutional filters over word embeddings, CNNs can capture spatial
dependencies and position-invariant features relevant for sentiment classification (Badjatiya et
al., 2017; Khatun et al., 2025; Zhang et al., 2018). Studies have shown that CNNs perform well on
short texts like tweets or product reviews, where localized patterns such as negation, intensifiers,
or sentiment modifiers play a central role (Rojas-Barahona, 2016). When combined with max-
pooling and dropout, CNNs generalize effectively and mitigate overfitting in sparse textual
datasets (Hasan et al., 2024; Khan, 2025; L. Zhang et al., 2018). CNN-based sentiment models have
outperformed traditional machine learning classifiers on benchmark datasets including SST, IMDb,
and Amazon reviews (Tang et al., 2015). Moreover, RNNs are particularly suitable for sentiment
analysis due to their capacity to handle sequential data and preserve information across time
steps (Mukherjee, 2019). They have been employed in text classification tasks where word order
and context significantly impact sentiment orientation (Zhang et al., 2018). However, standard
RNNs suffer from vanishing gradient problems, limiting their ability to model long-term
dependencies (Mukherjee, 2019). Despite this limitation, RNNs have achieved notable results in
short-sequence sentiment tasks when trained on sufficient data and integrated with pre-trained
embeddings like Word2Vec or GloVe (Badjatiya et al., 2017). Bidirectional RNNs (Bi-RNNs) have
been proposed to capture both past and future context, further improving sentiment
classification in domains like political discourse and customer reviews (Rojas-Barahona, 2016).
LSTM networks address the limitations of traditional RNNs by incorporating gating mechanisms that
enable selective retention and forgetting of information over long sequences (Badjatiya et al.,
2017; Zhou et al., 2020). LSTMs have demonstrated state-of-the-art performance in sentiment
classification tasks, particularly in capturing sentiment polarity that emerges later in a sentence or
paragraph (Subramani et al., 2019; Usama et al., 2019). Studies utilizing LSTM for tweet-level
sentiment detection have reported improvements in accuracy, especially in handling sarcasm,
negation, and emotionally nuanced expressions (Jamatia et al., 2020; Zhang et al., 2018). Hybrid
architectures combining CNNs and LSTMs have further enhanced model performance by
capturing both local and sequential patterns in sentiment-laden text (Abbasi et al., 2022; Kastrati
et al., 2021). LSTM-based models have been widely validated on datasets such as SemEval, Yelp,
and Twitter corpora, where temporal dependencies and complex syntax are prevalent (Usama
et al., 2019). Gated Recurrent Units (GRU) offer a simplified architecture compared to LSTM,
combining the forget and input gates into a single update gate while maintaining similar
performance in sentiment tasks (Shanmugavadivel et al., 2022; Subramani et al., 2019). GRUs have
shown competitive results in both short and long sequence modeling, with reduced
computational overhead, making them suitable for real-time social media sentiment applications
(Mukherjee, 2019). Studies comparing GRUs and LSTMs have found that GRUs often perform
equally well in scenarios with limited training data or constrained resources (Abbasi et al., 2022).
Bidirectional GRUs (Bi-GRU) further enhance sentiment prediction by incorporating forward and
backward semantic flows, particularly in multilingual or code-mixed data (Abbasi et al., 2022;
Badjatiya et al., 2017). GRU models have also been integrated into attention mechanisms to
emphasize sentiment-relevant tokens, further refining polarity classification in complex text
streams (Jamatia et al., 2020).
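To illustrate the recurrent architectures surveyed above, the following Keras sketch defines a small bidirectional LSTM classifier over integer-encoded token sequences; the vocabulary size, sequence length, and layer widths are assumed values, and tokenization and padding are omitted.

import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenization
MAX_LEN = 60         # assumed (padded) tweet length in tokens

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),              # learned word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # forward + backward context
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),           # binary polarity output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then call model.fit(padded_sequences, labels, ...) on an encoded corpus.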
Figure 8: Model Architectures for Sentiment Analysis Using CNN, LSTM, and Classical ML
Sentiment analysis of non-English and multilingual text presents significant challenges due to
linguistic diversity, limited resources, and cultural variations in sentiment expression (Akhtar et al.,
2017). Languages with rich morphology, such as Arabic, Turkish, and Hindi, complicate
tokenization, stemming, and syntactic parsing (Li et al., 2018; Peñalver-Martinez et al., 2014). Many
sentiment analysis tools and lexicons are developed primarily for English, resulting in lower
performance and transferability to other languages (Arias et al., 2013). Moreover, cultural context
affects how sentiment is linguistically encoded, and direct translations often fail to capture
sentiment intensity or polarity (Davila et al., 2012). Low-resource languages frequently lack
annotated corpora and sentiment lexicons, creating data scarcity issues (Arias et al., 2013;
Chaturvedi et al., 2019). Code-switching—where multiple languages are used within a single
sentence—also introduces noise and syntactic ambiguity that hinders traditional NLP pipelines
(Arias et al., 2013; Peñalver-Martinez et al., 2014). Non-English texts often include idioms, dialects,
and informal terms not present in standardized training datasets, which makes sentiment
detection inconsistent across languages (Abualigah, 2019). Even within a single language,
regional dialects may express sentiment differently, and reliance on literal word-to-word matching
can lead to misclassification (Poria et al., 2016). Additionally, multilingual social media content
frequently contains emojis, hashtags, and transliterations, which further complicate preprocessing
and feature extraction (Li et al., 2018). NLP tools trained on formal text sources like news articles
or Wikipedia often perform poorly on informal social media data from diverse linguistic
backgrounds (Peñalver-Martinez et al., 2014). The absence of universal tokenization standards
across languages adds to inconsistencies in model outputs, especially in agglutinative languages
like Finnish or Korean (Chaturvedi et al., 2019). Consequently, researchers have had to adapt or
build language-specific models and preprocessing pipelines to ensure contextual accuracy.
Cross-lingual word embeddings and translation-based models have been developed to
overcome language barriers in sentiment analysis. Techniques such as bilingual word
embeddings, multilingual embeddings (e.g., MUSE, LASER), and contextual models like
multilingual BERT (mBERT) and XLM-R have shown promising results in aligning sentiment-bearing
words across languages in shared vector spaces (Chaturvedi et al., 2019). These models leverage
large-scale corpora to learn language-independent representations that improve performance
on low-resource and zero-shot sentiment tasks (Karyotis et al., 2018). Translation-based sentiment
models use machine translation to convert non-English texts into English before applying
monolingual sentiment classifiers (Phu et al., 2016). While translation can standardize input, it risks
semantic distortion and loss of sentiment nuance during conversion (Zhou et al., 2023).
Nevertheless, cross-lingual transformer models have demonstrated improved generalization,
enabling sentiment transfer across domains and languages with minimal labeled data (Dash et
al., 2015).
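A cross-lingual classifier of the kind described above can be exercised through the Hugging Face pipeline API; the multilingual checkpoint named below is one publicly available example (it predicts 1-5 star ratings), and any mBERT- or XLM-R-based sentiment model could be substituted.

from transformers import pipeline

# Illustrative multilingual checkpoint; substitute any mBERT/XLM-R sentiment model.
clf = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

posts = [
    "Das neue Update ist wirklich ausgezeichnet.",   # German
    "Este servicio fue una experiencia terrible.",   # Spanish
]
for post, result in zip(posts, clf(posts)):
    print(result["label"], round(result["score"], 2), post)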
Benchmark datasets are critical for evaluating multilingual and cross-lingual sentiment models.
Resources like the Multilingual Amazon Reviews Corpus (Eke et al., 2021), Twitter Sentiment Corpus
for Arabic (Alfina et al., 2017), and the SemEval datasets for multilingual sentiment tasks provide
standardized testing grounds for comparative analysis (Gandhi et al., 2023; Sodhi et al., 2021).
Additionally, the NoReC dataset for Norwegian (Cojocaru et al., 2022), Hindi-English Code-Mixed
Corpus (Sodhi et al., 2021), and various regional corpora for Spanish, French, and German have
expanded research into non-English sentiment mining (Gandhi et al., 2023). Language-specific
sentiment lexicons such as NRC Emotion Lexicon, FEEL (for French), and OpeNER (for Dutch and
Spanish) support semantic annotation across diverse linguistic datasets (Zhang et al., 2022). These
resources, combined with open-source NLP toolkits like Polyglot and UDPipe, enable reproducible
experimentation in multilingual settings (Chakravarthi et al., 2022).
Cross-lingual and multilingual sentiment models have shown encouraging results on standard
benchmarks, but their performance varies depending on data quality, domain specificity, and
linguistic complexity. Multilingual BERT (mBERT) and XLM-R, for instance, outperform earlier models
in zero-shot transfer tasks but still exhibit biases in lower-resource languages and morphologically
rich structures (Sodhi et al., 2021). Evaluation metrics such as macro-F1, accuracy, and confusion
matrices are used to assess these models, often revealing that high-resource language pairs like
English-Spanish yield stronger performance than English-Amharic or English-Hindi (Zhang et al.,
2022). Furthermore, models trained on formal corpora tend to underperform on informal or
domain-specific texts such as tweets, customer reviews, or political commentary (Steiner-Correa et al., 2017).
Machine learning models, including Support Vector Machines (SVM), Random Forests, and
ensemble classifiers, have been widely adopted in financial sentiment analysis for classification
and regression tasks (Kalarani & Brunda, 2018). These models benefit from engineered sentiment
features and linguistic variables that improve financial event prediction accuracy (Kalarani &
Brunda, 2018; X. Zhang et al., 2022). Deep learning architectures such as Long Short-Term Memory
(LSTM) and Bidirectional Encoder Representations from Transformers (BERT) have been employed
to capture contextual dependencies in economic news and social media discussions (Yang et
al., 2020). Sentiment-enriched LSTM models have shown improved predictive power in forecasting
exchange rates, bond prices, and stock indices when trained on time-series textual data
(Shahare, 2017). Hybrid models combining CNNs for local pattern extraction and LSTMs for
sequential learning have further enhanced financial sentiment classification across multiple
domains (Shahare, 2017). These models underscore the efficacy of sentiment-driven architectures
in financial market prediction. Sentiment analysis has also been applied to construct economic
sentiment indices that reflect consumer confidence, policy outlook, and business sentiment. Tools
like the University of Michigan Consumer Sentiment Index and the Economic Sentiment Indicator
(ESI) from the European Commission have inspired computational equivalents generated from
media and social data (Studiawan et al., 2020). Researchers have developed indices based on
sentiment from financial news headlines and social media chatter to serve as leading indicators
of macroeconomic activity such as GDP growth, inflation trends, and employment shifts (Ortigosa
et al., 2014; Studiawan et al., 2020). These indices are constructed using NLP techniques like
sentiment scoring, topic modeling, and named entity recognition (Zunic et al., 2020). Sentiment-
based indices have been correlated with actual economic indicators, validating their utility in
understanding market expectations and public perceptions about economic conditions (Babu &
Kanaga, 2021). Such models offer an additional layer of interpretability beyond traditional
econometric models. Despite promising results, financial sentiment analysis faces several
limitations related to data quality, model generalizability, and interpretability. Textual data from
social media is often noisy, informal, and context-dependent, leading to misclassification in
sentiment labeling (Pong-inwong & Songpan, 2018). Financial jargon and domain-specific terms
are often misinterpreted by generic sentiment models, prompting the development of tailored
lexicons and annotated corpora (Pong-inwong & Songpan, 2018; Zunic et al., 2020). Bias in
training data, particularly from over-reliance on English-language sources or specific regions, limits
cross-market applicability (Shahare, 2017). Furthermore, some studies have raised concerns
regarding the interpretability of deep learning models, which, although highly accurate, act as
“black boxes” in financial decision-making contexts (Yang et al., 2020). These limitations have
prompted researchers to explore explainable AI (XAI) and model auditing techniques to enhance
trust and transparency in financial sentiment prediction (Zunic et al., 2020).
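A simple way to turn post-level scores into the kind of economic sentiment index described above is daily aggregation plus smoothing with pandas; the timestamps and compound scores below are invented placeholders.

import pandas as pd

# Placeholder per-post sentiment scores (e.g. VADER compound values) with timestamps
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 17:30",
        "2024-01-02 11:15", "2024-01-03 08:45", "2024-01-03 20:10",
    ]),
    "compound": [0.62, -0.18, 0.05, 0.41, 0.33],
})

daily = df.set_index("timestamp")["compound"].resample("D").mean()
index_smoothed = daily.rolling(window=3, min_periods=1).mean()  # rolling sentiment index
print(index_smoothed)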
Multimodal Sentiment Analysis and Contextual Understanding
Multimodal sentiment analysis enhances textual sentiment detection by integrating additional
modalities such as emojis, hashtags, images, and videos commonly used in social media
communication. Emojis function as emotional amplifiers and are frequently used to supplement
or even replace text in digital conversations (Babu & Kanaga, 2021). Several models have
incorporated emoji embeddings alongside textual features to improve sentiment prediction
accuracy, particularly in tweets and Instagram posts (Zhang et al., 2022). Hashtags also play a
significant role in sentiment orientation, with studies using hashtag co-occurrence patterns to infer
latent topics and sentiment themes (Studiawan et al., 2020). Image and video content provide
contextual cues that can override or reinforce text-based sentiment, prompting researchers to
use computer vision techniques for visual sentiment recognition (Rezaeinia et al., 2019; Studiawan
et al., 2020). Deep learning architectures such as multimodal LSTM, Multimodal Transformer, and
VisualBERT have been employed to align visual and textual modalities for sentiment classification
(Zunic et al., 2020).
Memes and GIFs present unique challenges and opportunities for sentiment analysis due to their
cultural relevance and highly contextual nature. Memes often combine text and images in ways
that require deep contextual and cultural understanding for accurate interpretation (Babu &
Kanaga, 2021). Researchers have applied convolutional neural networks (CNNs) to extract
features from meme images and merged them with text embeddings for improved sentiment
prediction (Pong-inwong & Songpan, 2018; Tembhurne & Diwan, 2020). GIFs, while lacking in
explicit text, carry emotional weight through motion and facial expressions, which have been
processed using recurrent neural networks and spatiotemporal analysis (Pal et al., 2018). Internet
slang, abbreviations, and neologisms such as "FOMO," "YOLO," or sarcastic acronyms challenge
traditional NLP pipelines due to their informal and evolving nature (Zhang et al., 2022). Lexicon
expansion techniques and community-generated slang dictionaries have been used to address
these limitations (Studiawan et al., 2020). These elements emphasize the importance of
multimodal and non-standard linguistic resources for accurate sentiment mining.
The application of sentiment analysis in real-world domains such as finance, politics, and public
health has expanded significantly. In financial forecasting, sentiment extracted from news articles,
earnings reports, and investor tweets has been linked to stock price movements, trading volumes,
and market volatility (Ameur et al., 2018; Deng et al., 2022). Political sentiment analysis has been
applied to election forecasting and opinion mining using tweets, debates, and campaign
speeches (Cai & Xia, 2015). In public health, studies have tracked sentiment around vaccine
hesitancy, mental health, and pandemic-related behaviors (Riaz et al., 2017). However, many of
these applications rely on domain-specific lexicons and datasets, limiting their generalizability
across topics and regions (Balazs & Velásquez, 2016). Additionally, studies often overlook ethical
concerns related to data privacy, consent, and algorithmic bias (Salur & Aydin, 2020), indicating
a need for ethical frameworks in sentiment analysis deployments. Several gaps persist in the
methodological aspects of sentiment analysis research, particularly in the areas of evaluation
metrics, dataset diversity, and model interpretability. Most studies rely on standard metrics such
as accuracy, precision, recall, and F1-score, which do not always capture the nuances of
emotional variation or class imbalance in sentiment categories (Balazs & Velásquez, 2016).
Furthermore, commonly used datasets such as SST, IMDb, and Twitter corpora are heavily biased
toward English-language content and consumer product reviews, underrepresenting socio-
political, non-Western, and multilingual contexts (Rodriguez-Ibanez et al., 2020). In terms of
interpretability, deep learning models, particularly those based on transformers, are often
criticized for being black-box systems, offering limited transparency in their decision-making
processes (Balazs & Velásquez, 2016). This restricts their adoption in high-stakes domains such as
healthcare or legal applications. These issues underscore the need for more representative
datasets, culturally sensitive evaluation, and explainable AI in sentiment analysis research.
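To make the point about evaluation metrics concrete, the short example below contrasts accuracy with macro-averaged F1 on a synthetic, heavily imbalanced label set: a degenerate classifier that always predicts the majority class looks strong on accuracy but is exposed by macro F1. scikit-learn is assumed, and the numbers are purely illustrative.

```python
# Contrast of accuracy and macro F1 on a synthetic, imbalanced label set
# (illustrative only; not results from any reviewed study).
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 90 + [1] * 10      # 90 negative posts, 10 positive posts
y_pred = [0] * 100                # degenerate model: always predicts "negative"

print("accuracy:", accuracy_score(y_true, y_pred))                              # 0.90
print("macro F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.47
```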
METHOD
This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses
(PRISMA) guidelines to ensure a systematic, transparent, and replicable literature review process.
The PRISMA framework provided a standardized approach for article identification, selection,
eligibility assessment, and inclusion, ensuring methodological rigor throughout the review.
Identification of Studies
The initial stage of the review involved an extensive and structured search across several
academic databases, including Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and
Google Scholar. Search queries were designed using Boolean operators and included
combinations of keywords such as “sentiment analysis,” “natural language processing,” “social
media,” “deep learning,” “transformer models,” “financial sentiment,” “multilingual sentiment,”
and “sarcasm detection.” The search was limited to peer-reviewed journal articles and
conference papers published between 2010 and 2024 to ensure the relevance and timeliness of
the selected literature. In total, 823 articles were initially identified based on title and abstract
relevance.
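For illustration, a Boolean query of the kind described above could be composed as follows; the exact query strings used in this review are not reproduced here, so the combination shown is a hypothetical example.

```python
# Hypothetical example of composing a Boolean search string of the kind
# described above (the exact queries used in this review are not reproduced).
core = '("sentiment analysis" OR "opinion mining")'
methods = '("natural language processing" OR "deep learning" OR "transformer models")'
contexts = '("social media" OR "financial sentiment" OR "multilingual sentiment" OR "sarcasm detection")'

query = " AND ".join([core, methods, contexts])
print(query)
```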
Screening Process
The screening phase focused on eliminating duplicates and filtering studies that clearly did not
align with the review scope. Duplicate entries were removed using citation management tools,
reducing the dataset to 645 unique articles. Each remaining article was then screened based on
its title, abstract, and keywords to determine relevance. Studies unrelated to sentiment analysis, those lacking a focus on AI or NLP techniques, and those not based on social media data were excluded at this stage. After this process, 238 articles were selected for full-text review.
Eligibility Criteria
The eligibility assessment involved a detailed examination of each full-text article against the
inclusion and exclusion criteria. Included studies met the following conditions: (1) they provided
empirical evidence or methodological development in sentiment analysis, (2) they employed AI
or machine learning models (including traditional, deep learning, or transformer-based), (3) they
focused on textual or multimodal data from social media or related digital platforms, and (4) they
were published in English in peer-reviewed venues. Studies were excluded if they were
conceptual papers without methodological or empirical content, non-English publications, or studies focused on domains outside the review scope, such as clinical sentiment analysis with no connection to social media platforms. This stage resulted in 128 articles being deemed eligible.
Inclusion and Final Selection
The final set of articles included in the review was determined based on thematic relevance,
methodological robustness, and contribution to the discourse on sentiment analysis. Articles were
categorized into thematic areas such as traditional machine learning, deep learning, multimodal
analysis, multilingual sentiment, sarcasm and ambiguity detection, financial and economic
sentiment, and evaluation frameworks. After rigorous assessment and thematic grouping, 91
articles were selected for in-depth synthesis and analysis. These articles represent a diverse yet
coherent body of work that aligns with the objectives of this review.
Data Extraction and Synthesis
Data extraction was conducted manually using a standardized data collection form, which
captured key information from each article, including author(s), year of publication, study
domain, data source, methodological approach, sentiment model used, evaluation metrics, and
key findings. Articles were then synthesized thematically to identify recurring patterns,
methodological innovations, domain-specific applications, and research gaps. This thematic
synthesis formed the basis for the literature review sections of this study, ensuring a comprehensive,
transparent, and systematic examination of the state of research in sentiment analysis using AI
and NLP in social media contexts.
FINDINGS
A central finding of this systematic review is the increasing dominance of deep learning
techniques in sentiment classification tasks within social media contexts. Among the 91 articles
analyzed, 47 utilized deep learning architectures such as Convolutional Neural Networks (CNNs),
Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and transformers like
BERT and RoBERTa. These models demonstrated notable improvements in handling complex
syntactic structures, sequential word dependencies, and contextual sentiment cues compared
to earlier statistical approaches. Several studies reported that deep learning models improved
classification accuracy by 8 to 15 percentage points over traditional machine learning algorithms
such as Naïve Bayes, Logistic Regression, and Support Vector Machines. The scholarly impact of
these 47 deep learning-focused studies is reflected in their collective citation count of more than
12,300 citations, underscoring their influence on the advancement of sentiment analysis and their
centrality in shaping contemporary research directions.
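As a minimal, illustrative example of the transformer-based classification these studies describe, the snippet below runs sentiment inference with the Hugging Face transformers library. It assumes the library is installed and the named checkpoint can be downloaded; it is not the configuration of any particular reviewed study.

```python
# Minimal transformer-based sentiment inference with Hugging Face `transformers`
# (assumes the library is installed and the checkpoint can be downloaded;
# not the setup of any specific reviewed study).
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # common English checkpoint
)

tweets = [
    "This documentary completely changed how I think about social media.",
    "Another outage? This app gets worse every week.",
]
for tweet, result in zip(tweets, sentiment(tweets)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {tweet}")
```

In practice, the reviewed studies fine-tune such models on task-specific social media data rather than relying on a generic checkpoint out of the box.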
The review also identified a clear trend toward the integration of multimodal data sources in
sentiment analysis, highlighting a move beyond purely text-based methods. A total of 38 studies
explored the role of visual, auditory, and symbolic data—such as emojis, hashtags, images,
videos, and voice—in enhancing sentiment prediction. These studies emphasized that social
media communication is inherently multimodal, and neglecting non-textual elements often results
in an incomplete or skewed understanding of user sentiment. For instance, sentiment polarity may
be influenced by an emoji or meme even when the accompanying text is neutral or sarcastic.
Multimodal fusion methods, especially those using deep learning frameworks like multimodal
transformers or visual-linguistic models, were shown to improve classification accuracy by up to
20% in specific use cases. The total number of citations for the 38 multimodal studies exceeded
8,500, reflecting the significant scholarly and practical interest in capturing sentiment from diverse
communication modes. In addition, multilingual and cross-lingual sentiment analysis has become
a growing area of focus, addressing the linguistic inequality prevalent in earlier models trained
predominantly on English data. Out of the total 91 articles, 29 explicitly examined the
performance of sentiment analysis models on non-English or multilingual text corpora. These
studies employed models such as multilingual BERT (mBERT), LASER, XLM-R, and other cross-lingual
embedding frameworks designed to align semantic content across languages. While these tools
expanded the scope of sentiment analysis into languages such as Arabic, Spanish, Hindi, and
Chinese, researchers noted performance disparities when the models were applied to morphologically rich or low-
resource languages. Many of these multilingual models suffered from domain shift and
vocabulary mismatches that affected generalization capabilities. Despite these challenges, the
collective academic contribution of the 29 multilingual-focused studies was substantial,
accumulating over 6,100 citations and validating the importance of linguistic diversity in sentiment
modeling research.
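A cross-lingual setup of the kind examined in these multilingual studies can be sketched as an XLM-RoBERTa encoder topped with a three-way classification head, which would still require fine-tuning on labeled multilingual sentiment data before producing meaningful predictions. The model name, label scheme, and example sentences below are illustrative assumptions.

```python
# Sketch of a cross-lingual sentiment classifier built on XLM-RoBERTa; the
# classification head is untrained here and would need fine-tuning on labeled
# multilingual sentiment data (label scheme assumed: negative/neutral/positive).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)

texts = ["Me encanta esta película", "هذا المنتج سيء للغاية"]  # Spanish, Arabic examples
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # class probabilities (not meaningful until fine-tuned)
```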
Another critical insight emerging from the literature involves the continued struggle to detect non-
literal sentiment expressions, particularly sarcasm, irony, and ambiguity. Among the 91 studies, 24
specifically targeted the detection of these rhetorical forms, which frequently occur in informal,
user-generated content on platforms like Twitter, Reddit, and Tumblr. The reviewed literature
highlighted that sarcasm and irony often reverse the expected sentiment polarity, making it
difficult for lexicon-based or shallow neural models to interpret them accurately. Although models
incorporating contextual embeddings, attention mechanisms, and even multimodal cues (such
as facial expressions in GIFs or memes) have improved detection rates, accuracy in this
subdomain typically remained below 70%. These 24 studies, together receiving over 5,000
citations, emphasize that non-literal language remains an open and complex problem in the
sentiment analysis community, with significant implications for real-world applications in politics,
marketing, and crisis communication. A notable subset of the literature focused on financial and
economic sentiment analysis, showcasing its relevance to market forecasting, investor behavior,
and macroeconomic trend prediction. Among the reviewed works, 21 studies concentrated on
extracting sentiment from financial texts such as stock market news, analyst reports, investor
tweets, and earnings announcements. These studies demonstrated that sentiment scores derived
from financial narratives could correlate strongly with price fluctuations, trading volume, and
investor risk appetite. Approximately 15 out of these 21 studies reported statistically significant
predictive relationships between sentiment and market indicators, providing empirical support for
the use of AI-based sentiment models in financial decision-making. Collectively, the financial
sentiment studies accumulated over 4,800 citations, reflecting the scholarly value and practical
appeal of sentiment analytics in finance, particularly among interdisciplinary researchers in
economics, data science, and computational linguistics.
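The sentiment-market relationships these financial studies test can be illustrated, in a deliberately simplified form, by correlating a daily sentiment score with next-day returns; the numbers below are synthetic and stand in for the richer econometric models used in the literature.

```python
# Simplified illustration of relating daily social-media sentiment to next-day
# returns (synthetic numbers; real studies use richer econometric models).
import pandas as pd

df = pd.DataFrame({
    "avg_sentiment":   [0.62, 0.41, -0.15, 0.05, 0.55, -0.30, 0.20],         # mean daily score
    "next_day_return": [0.012, 0.004, -0.009, 0.001, 0.010, -0.014, 0.003],  # synthetic returns
})

corr = df["avg_sentiment"].corr(df["next_day_return"])  # Pearson correlation
print(f"Correlation between daily sentiment and next-day return: {corr:.2f}")
```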
In parallel, the review found that emotion classification, as a finer-grained alternative to basic
sentiment polarity detection, has received increasing attention in recent years. Out of the 91
studies, 26 focused specifically on identifying discrete emotions such as joy, sadness, anger, fear,
and surprise. These studies argued that emotion recognition offers more nuanced insights into user
behavior, especially in contexts such as mental health monitoring, disaster response, and
customer service. Emotion-aware models utilized both lexicon-based resources (e.g., NRC
Emotion Lexicon, WordNet-Affect) and deep learning techniques trained on emotion-labeled
datasets to achieve higher granularity in output. Some of these models were embedded in health
informatics systems or integrated into digital marketing tools for audience profiling. The 26
emotion-oriented studies collectively garnered over 6,900 citations, reinforcing their relevance to
both academic inquiry and applied system development in sentiment and emotion mining.
Beyond methodological advancements, the review also revealed persistent challenges in
standardization across datasets, evaluation metrics, and benchmarking practices. A total of 42
articles utilized varying datasets such as the Stanford Sentiment Treebank, IMDb reviews, SemEval
tasks, and manually collected Twitter corpora. While these resources have advanced
reproducibility, their diversity has also led to inconsistent evaluation and difficulty in cross-study
comparisons. Metrics used across studies included accuracy, precision, recall, F1-score, AUC, and
more domain-specific indicators, often without standardization. This heterogeneity in data and
metrics reduced the interpretability and comparability of findings, particularly in multi-domain or
multilingual contexts. These 42 articles received over 9,200 citations collectively, demonstrating
their impact while also illustrating the need for more coherent benchmarking frameworks to
ensure consistency and fairness in model evaluation. In addition, a smaller but increasingly
relevant segment of the literature—comprising 17 articles—addressed the ethical dimensions of
sentiment analysis, including concerns over data privacy, algorithmic bias, and model
interpretability. These studies questioned the implications of deploying sentiment models trained
on unbalanced or culturally biased datasets, noting the potential for misclassification and harm
in sensitive domains such as healthcare, employment, and political discourse. Several works
highlighted the opacity of deep learning models as a barrier to accountability, advocating for
the adoption of explainable AI (XAI) techniques to provide greater transparency in model
decision-making. Although these ethical-focused studies represented a smaller fraction of the
overall literature, their collective citation count of approximately 2,100 indicates growing
recognition of these concerns within the research community. The inclusion of ethical reflection
alongside technical innovation marks a necessary step toward responsible and socially informed
sentiment analysis.
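To show how lexicon-based emotion classification differs from polarity detection, the toy tagger below counts matches against a tiny, hypothetical emotion lexicon; production systems draw on resources such as the NRC Emotion Lexicon and far larger vocabularies, often combined with learned models.

```python
# Toy lexicon-based emotion tagger; the lexicon is a tiny hypothetical stand-in
# for resources such as the NRC Emotion Lexicon.
from collections import Counter

EMOTION_LEXICON = {
    "joy":     {"happy", "delighted", "love", "great"},
    "sadness": {"sad", "miss", "lonely", "cry"},
    "anger":   {"angry", "furious", "hate", "awful"},
    "fear":    {"afraid", "scared", "worried", "panic"},
}

def tag_emotions(text):
    """Count emotion-word matches in a lowercased, whitespace-tokenized post."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    counts = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        counts[emotion] = sum(token in words for token in tokens)
    return counts

post = "I was so worried about the exam, but now I am happy and delighted!"
print(tag_emotions(post).most_common())
```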
DISCUSSION
The findings of this review confirm the prevailing trend in contemporary sentiment analysis
research that favors deep learning over traditional machine learning algorithms. Earlier studies,
such as those by Lin and Kolcz (2012) and Dhaoui et al. (2017), utilized statistical classifiers like
Naïve Bayes and SVM with bag-of-words or TF-IDF features. While effective in their time, these
models lacked the ability to capture contextual and sequential information. In contrast, newer
studies adopting CNNs, LSTMs, and transformers such as BERT and RoBERTa have demonstrated
superior performance by modeling deeper syntactic and semantic relationships. This shift is
consistent with the broader NLP literature, where transformer-based models have redefined state-
of-the-art benchmarks in tasks ranging from question answering to sentiment classification
(Burdisso et al., 2019; Dhaoui et al., 2017; Shah & Shah, 2020). The scalability and context-
awareness of these models have been pivotal in elevating sentiment analysis from keyword-level
detection to document-level and even emotion-level understanding.
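For reference, the classical baseline that these earlier studies relied on can be reproduced in a few lines: a TF-IDF bag-of-words representation feeding a linear SVM. The sketch assumes scikit-learn and uses toy data purely for illustration.

```python
# Classical baseline of the kind used in earlier studies: TF-IDF features with
# a linear SVM (toy data, for illustration only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "absolutely loved the film",
    "what a waste of time",
    "brilliant and moving story",
    "terrible acting and boring plot",
]
train_labels = ["pos", "neg", "pos", "neg"]

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(train_texts, train_labels)
print(baseline.predict(["boring and terrible", "a moving, brilliant film"]))
```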
The surge in multimodal sentiment analysis research reflects a growing recognition of the
limitations of text-only models, particularly in social media contexts where communication is
enriched with emojis, images, hashtags, and videos. Earlier research predominantly relied on text,
neglecting the multimodal nature of digital expression. For instance, Albawi et al. (2017) and
Geng et al. (2020) focused almost exclusively on linguistic features, which restricted their models
from capturing the full sentiment conveyed in image- or emoji-laden posts. In contrast, recent
studies such as those by Montenegro et al. (2018) and Dhaoui et al. (2017) have demonstrated
that combining modalities leads to significant performance gains. These findings align with the
emergence of multimodal transformer models and fusion techniques, which can jointly process
visual, textual, and auditory data to achieve holistic sentiment understanding. Such approaches
provide a more realistic framework for modeling human communication in digital spaces.
Multilingual and cross-lingual sentiment analysis continues to be a vital research domain, and this
review has reinforced the challenges and progress in this area. Previous works, including Fang and
Zhan (2015) and Tang et al. (2016), identified a lack of linguistic diversity in sentiment datasets and
tools, with most resources being English-centric. Although recent models such as mBERT (Saif et
al., 2017) and XLM-R (Diamantini et al., 2019) have enabled sentiment modeling in multiple
languages, this review found that these models still struggle with low-resource and
morphologically rich languages. The observed performance gaps support earlier conclusions by
Tellez et al. (2017) and Hao and Dai (2016), who emphasized the limitations of zero-shot transfer
and the importance of culturally and linguistically adapted training data. Thus, while there has
been notable progress, multilingual sentiment analysis remains constrained by systemic data and
resource inequalities. Moreover, the complexity of identifying sarcasm, irony, and ambiguity in
informal text also remains a persistent challenge, despite decades of research. Studies by Suhaimin et al. (2023) and Dhaoui et al. (2017) introduced rule-based and feature-based sarcasm
detectors, yet these approaches often failed to generalize beyond small datasets. This review
highlights how even advanced models using BiLSTMs, attention mechanisms, and transformer
embeddings struggle to exceed 70% accuracy in sarcasm detection tasks. These findings are in
line with research by Ghosh et al. (2017) and Dhaoui et al. (2017), which emphasized the inherently
contextual and cultural nature of sarcastic expression. Furthermore, multimodal sarcasm
detection, although promising, remains in early stages due to the lack of high-quality annotated
datasets that combine textual and visual cues. This suggests that while modeling techniques have
evolved, the fundamental difficulties associated with non-literal language persist.
In the domain of financial and economic sentiment analysis, this review supports prior findings that
sentiment extracted from financial news and social media correlates significantly with market
performance. Studies by Maghilnan and Kumar (2017) and Balahur and Perea-Ortega (2015)
were among the first to link textual sentiment with stock market indicators, and this association
has been substantiated by more recent work that uses deep learning models for financial text
mining. The review confirmed that a majority of financial sentiment studies found statistically
significant relationships between sentiment indicators and asset prices, trading volumes, or market
volatility. However, as highlighted by Agarwal et al. (2015) and Habimana et al. (2019), domain-
specific lexicons and specialized models are often necessary to achieve reliable results. These
findings underscore the continued relevance of domain adaptation and financial language
modeling in achieving robust sentiment forecasting tools. Moreover, the focus on emotion
classification, as opposed to general sentiment polarity, marks another important shift in the
literature. Traditional sentiment analysis, as illustrated by Yadav and Vishwakarma (2019) and Ain
et al. (2017), emphasized binary or ternary sentiment categorization. In contrast, recent studies
have expanded the emotional scope by identifying discrete emotions such as joy, anger, fear,
and sadness. This review found that such models have become increasingly popular in
applications related to mental health monitoring and public opinion tracking. The integration of
emotion lexicons and emotion-labeled datasets such as EmoBank and GoEmotions has improved
granularity in sentiment modeling, echoing the observations of Wadawadagi and Pagi (2020)
and Onan (2020). This transition reflects a broader interest in affective computing and suggests
that emotion-aware sentiment systems are better suited to detect subtle psychological states in
user-generated content.
Methodological inconsistencies in dataset usage and evaluation metrics remain a notable gap,
mirroring concerns raised in previous systematic reviews. For instance, Tanna et al. (2020) and Ain
et al. (2017) both noted that the diversity of sentiment datasets—ranging from product reviews to
political tweets—makes it difficult to generalize findings across domains. This review corroborates
that observation, with 42 studies using different datasets and varying metrics such as accuracy,
F1-score, AUC, and MCC. The lack of standardized evaluation protocols complicates cross-model
comparisons and reduces reproducibility, a concern also raised by Onan (2020). This highlights
the pressing need for more unified benchmarking efforts and the development of shared
evaluation frameworks to facilitate consistent and transparent assessment of model
performance. In addition, this review identifies an emerging but underdeveloped discourse
around the ethical implications of sentiment analysis. Although technical sophistication has
advanced rapidly, only a minority of studies—17 out of 91—explicitly addressed issues of bias,
fairness, transparency, or privacy. This observation aligns with earlier critiques by Balahur and
Perea-Ortega (2015) and Yadav and Vishwakarma (2019), who argued that algorithmic systems
trained on biased data risk reinforcing harmful stereotypes and misrepresenting marginalized
voices. The limited attention to explainability and consent mechanisms in sentiment analysis
models further echoes concerns raised by Fang and Zhan (2015) and Elfajr and Sarno (2018). These
findings suggest that despite technical progress, ethical considerations remain peripheral to the
core research agenda, signaling a disconnect between algorithmic development and
responsible AI practice.
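One lightweight transparency aid, far simpler than full explainable-AI toolkits, is to inspect which terms a linear sentiment model weights most heavily. The sketch below does this for a TF-IDF and logistic-regression setup on toy data; it illustrates the general idea of surfacing model evidence rather than any method proposed in the reviewed studies.

```python
# Lightweight transparency check for a linear sentiment model: list the most
# negatively and positively weighted terms (toy data; illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great phone, love it", "awful battery, hate it",
         "love the camera", "hate the awful screen"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

terms = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])           # ascending: most negative weights first
print("most negative terms:", terms[order[:3]])
print("most positive terms:", terms[order[-3:]])
```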
CONCLUSION
This systematic literature review has synthesized key developments, challenges, and gaps in the
field of sentiment analysis by examining 91 peer-reviewed articles published between 2010 and
2024. The analysis revealed that deep learning and transformer-based models have significantly
advanced sentiment classification, outperforming traditional machine learning techniques in
handling contextual, sequential, and complex language structures. The integration of multimodal
data—emojis, images, audio, and video—has enhanced sentiment detection accuracy, while
multilingual and cross-lingual approaches have broadened the scope of applicability beyond
English-dominant corpora, though performance inconsistencies remain for low-resource
languages. Despite technical innovations, sentiment analysis models continue to struggle with
detecting sarcasm, irony, and ambiguity in informal texts, and there is an ongoing need for
domain-specific modeling in financial, political, and health-related applications. Emotion
classification has emerged as a powerful alternative to binary sentiment detection, offering
deeper insights into public attitudes, especially in socially sensitive contexts. However,
methodological inconsistencies in datasets and evaluation metrics have limited comparative
analysis across studies, and ethical considerations such as bias, interpretability, and data privacy
are still underexplored. Overall, while the field has made substantial strides in developing
sophisticated models and expanding analytical domains, future work would benefit from greater
standardization, cross-linguistic inclusivity, and ethical accountability to ensure responsible and
equitable deployment of sentiment analysis systems.
REFERENCES
1. Abbasi, A., Javed, A. R., Iqbal, F., Kryvinska, N., & Jalil, Z. (2022). Deep learning for religious and
continent-based toxic content detection and classification. Scientific reports, 12(1), 17478-NA.
https://doi.org/10.1038/s41598-022-22523-3
2. Abdelfatah, K., Terejanu, G., & Alhelbawy, A. A. (2017). Unsupervised Detection of Violent Content in
Arabic Social Media. Computer Science & Information Technology (CS & IT), NA(NA), 01-07.
https://doi.org/10.5121/csit.2017.70401
3. Abualigah, L. M. Q. (2019). Feature Selection and Enhanced Krill Herd Algorithm for Text Document
Clustering (Vol. NA). Springer International Publishing. https://doi.org/10.1007/978-3-030-10674-4
4. Adoum Sanoussi, M. S., Xiaohua, C., Agordzo, G. K., Guindo, M. L., Al Omari, A. M. M. A., & Issa, B. M.
(2022). Detection of Hate Speech Texts Using Machine Learning Algorithm. 2022 IEEE 12th Annual
Computing and Communication Workshop and Conference (CCWC), NA(NA), 266-273.
https://doi.org/10.1109/ccwc54503.2022.9720792
5. Agarwal, B., Mittal, N., Bansal, P., & Garg, S. (2015). Sentiment analysis using common-sense and context
information. Computational intelligence and neuroscience, 2015(NA), 30-39.
https://doi.org/10.1155/2015/715730
6. Ain, Q. T., Ali, M., Riaz, A., Noureen, A., Kamran, M., Hayat, B., & Rehman, A.-u. (2017). Sentiment Analysis
Using Deep Learning Techniques: A Review. International Journal of Advanced Computer Science and
Applications, 8(6). https://doi.org/10.14569/ijacsa.2017.080657
7. Akhtar, S., Gupta, D., Ekbal, A., & Bhattacharyya, P. (2017). Feature selection and ensemble
construction. Knowledge-Based Systems, 125(NA), 116-135. https://doi.org/10.1016/j.knosys.2017.03.020
8. Aklima, B., Mosa Sumaiya Khatun, M., & Shaharima, J. (2022). Systematic Review of Blockchain
Technology In Trade Finance And Banking Security. American Journal of Scholarly Research and
Innovation, 1(1), 25-52. https://doi.org/10.63125/vs65vx40
9. Akula, R., & Garibay, I. (2021). Interpretable Multi-Head Self-Attention Architecture for Sarcasm
Detection in Social Media. Entropy (Basel, Switzerland), 23(4), 394-NA.
https://doi.org/10.3390/e23040394
10. Al-Arafat, M., Kabi, M. E., Morshed, A. S. M., & Sunny, M. A. U. (2024). Geotechnical Challenges In Urban
Expansion: Addressing Soft Soil, Groundwater, And Subsurface Infrastructure Risks In Mega Cities.
Innovatech Engineering Journal, 1(01), 205-222. https://doi.org/10.70937/itej.v1i01.20
11. Al-Arafat, M., Kabir, M. E., Dasgupta, A., & Nahid, O. F. (2024). Designing Earthquake-Resistant
Foundations: A Geotechnical Perspective On Seismic Load Distribution And Soil-Structure Interaction.
Academic Journal On Science, Technology, Engineering & Mathematics Education, 4(04), 19-36.
https://doi.org/10.69593/ajsteme.v4i04.119
12. Al Amrani, Y., Lazaar, M., & Kadiri, K. E. E. (2018). Random Forest and Support Vector Machine based
Hybrid Approach to Sentiment Analysis. Procedia Computer Science, 127(NA), 511-520.
https://doi.org/10.1016/j.procs.2018.01.150
13. Albawi, S., Mohammed, T. A., & Al-Zawi, S. (2017). Understanding of a convolutional neural network.
2017 International Conference on Engineering and Technology (ICET), NA(NA), 1-6.
https://doi.org/10.1109/icengtechnol.2017.8308186
14. Alfina, I., Mulia, R., Fanany, M. I., & Ekanata, Y. (2017). Hate speech detection in the Indonesian
language: A dataset and preliminary study. 2017 International Conference on Advanced Computer
Science and Information Systems (ICACSIS), NA(NA), 233-237.
https://doi.org/10.1109/icacsis.2017.8355039
15. Alhojely, S. (2016). Sentiment Analysis and Opinion Mining: A Survey. International Journal of Computer
Applications, 150(6), 22-25. https://doi.org/10.5120/ijca2016911545
16. Aljarah, I., Habib, M., Hijazi, N. M., Faris, H., Qaddoura, R., Hammo, B., Abushariah, M. A. M., & Alfawareh,
M. (2020). Intelligent detection of hate speech in Arabic social network: A machine learning approach.
Journal of Information Science, 47(4), 483-501. https://doi.org/10.1177/0165551520917651
17. Alwakid, G., Osman, T., Haj, M. E., Alanazi, S., Humayun, M., & Sama, N. U. (2022). MULDASA: Multifactor
Lexical Sentiment Analysis of Social-Media Content in Nonstandard Arabic Social Media. Applied
Sciences, 12(8), 3806-3806. https://doi.org/10.3390/app12083806
18. Ameur, H., Jamoussi, S., & Hamadou, A. B. (2018). A New Method for Sentiment Analysis Using Contextual
Auto-Encoders. Journal of Computer Science and Technology, 33(6), 1307-1319.
https://doi.org/10.1007/s11390-018-1889-1
19. Ammar, B., Faria, J., Ishtiaque, A., & Noor Alam, S. (2024). A Systematic Literature Review On AI-Enabled
Smart Building Management Systems For Energy Efficiency And Sustainability. American Journal of
Scholarly Research and Innovation, 3(02), 01-27. https://doi.org/10.63125/4sjfn272
20. Angadi, S., & Reddy, V. S. (2019). Multimodal sentiment analysis using reliefF feature selection and
random forest classifier. International Journal of Computers and Applications, 43(9), 931-939.
https://doi.org/10.1080/1206212x.2019.1658054
21. Ansari, L., Ji, S., Chen, Q., & Cambria, E. (2023). Ensemble Hybrid Learning Methods for Automated
Depression Detection. IEEE Transactions on Computational Social Systems, 10(1), 211-219.
https://doi.org/10.1109/tcss.2022.3154442
22. Arafat, K. A. A., Bhuiyan, S. M. Y., Mahamud, R., & Parvez, I. (2024, 30 May-1 June 2024). Investigating
the Performance of Different Machine Learning Models for Forecasting Li-ion Battery Core Temperature
Under Dynamic Loading Conditions. 2024 IEEE International Conference on Electro Information
Technology (eIT),
23. Arias, M., Arratia, A., & Xuriguera, R. (2013). Forecasting with twitter data. ACM Transactions on Intelligent
Systems and Technology, 5(1), 8-24. https://doi.org/10.1145/2542182.2542190
24. Ayyub, K., Iqbal, S., Munir, E. U., Nisar, M. W., & Abbasi, M. (2020). Exploring Diverse Features for Sentiment
Quantification Using Machine Learning Algorithms. IEEE Access, 8(NA), 142819-142831.
https://doi.org/10.1109/access.2020.3011202
25. Babu, N. V., & Kanaga, E. G. M. (2021). Sentiment Analysis in Social Media Data for Depression Detection
Using Artificial Intelligence: A Review. SN Computer Science, 3(1), 74-74. https://doi.org/10.1007/s42979-
021-00958-1
26. Badjatiya, P., Gupta, S., Gupta, M., & Varma, V. (2017). Deep Learning for Hate Speech Detection in
Tweets. Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17
Companion, NA(NA), 759-760. https://doi.org/10.1145/3041021.3054223
27. Baecchi, C., Uricchio, T., Bertini, M., & Del Bimbo, A. (2015). A multimodal feature learning approach for
sentiment analysis of social network multimedia. Multimedia tools and applications, 75(5), 2507-2525.
https://doi.org/10.1007/s11042-015-2646-x
28. Baheti, M. R. R., & Kinariwala, M. S. (2019). Detection and Analysis of Stress using Machine Learning
Techniques. International Journal of Engineering and Advanced Technology, 9(1), 335-342.
https://doi.org/10.35940/ijeat.f8573.109119
29. Bai, H., & Yu, G. (2016). A Weibo-based approach to disaster informatics: incidents monitor in post-
disaster situation via Weibo text negative sentiment analysis. Natural Hazards, 83(2), 1177-1196.
https://doi.org/10.1007/s11069-016-2370-5
30. Balahur, A., & Perea-Ortega, J. M. (2015). Sentiment analysis system adaptation for multilingual
processing. Information Processing & Management, 51(4), 547-556.
https://doi.org/10.1016/j.ipm.2014.10.004
31. Balazs, J. A., & Velásquez, J. D. (2016). Opinion Mining and Information Fusion. Information Fusion,
27(NA), 95-110. https://doi.org/10.1016/j.inffus.2015.06.002
32. Banea, C., Mihalcea, R., & Wiebe, J. (2011). Multilingual Sentiment and Subjectivity Analysis.
33. Barbieri, F., Camacho-Collados, J., Anke, L. E., & Neves, L. (2020). EMNLP (Findings) - TweetEval: Unified
Benchmark and Comparative Evaluation for Tweet Classification. Findings of the Association for
Computational Linguistics: EMNLP 2020, NA(NA), 1644-1650. https://doi.org/10.18653/v1/2020.findings-
emnlp.148
34. Bardhan, R., Sunikka-Blank, M., & Haque, A. N. (2019). Sentiment analysis as tool for gender
mainstreaming in slum rehabilitation housing management in Mumbai, India. Habitat International,
92(6), 102040-NA. https://doi.org/10.1016/j.habitatint.2019.102040
35. Behdenna, S., Barigou, F., & Belalem, G. (2018). Document Level Sentiment Analysis: A survey. EAI
Endorsed Transactions on Context-aware Systems and Applications, 4(13), 154339-NA.
https://doi.org/10.4108/eai.14-3-2018.154339
36. Bhargava, R., Arora, S., & Sharma, Y. (2019). Neural Network-Based Architecture for Sentiment Analysis
in Indian Languages. Journal of Intelligent Systems, 28(3), 361-375. https://doi.org/10.1515/jisys-2017-
0398
37. Bhuiyan, S. M. Y., Mostafa, T., Schoen, M. P., & Mahamud, R. (2024). Assessment of Machine Learning
Approaches for the Predictive Modeling of Plasma-Assisted Ignition Kernel Growth. ASME 2024
International Mechanical Engineering Congress and Exposition,
38. Billah, M., & Hassan, E. (2019). Depression Detection from Bangla Facebook Status using Machine
Learning Approach. International Journal of Computer Applications, 178(43), 9-14.
https://doi.org/10.5120/ijca2019919314
39. Bose, R., Dey, R. K., Roy, S., & Sarddar, D. (2019). Sentiment Analysis on Online Product Reviews. In (Vol.
NA, pp. 559-569). Springer Singapore. https://doi.org/10.1007/978-981-13-7166-0_56
40. Burdisso, S., Errecalde, M. L., & Montes-y-Gómez, M. (2019). A Text Classification Framework for Simple
and Effective Early Depression Detection Over Social Media Streams. Expert Systems with Applications,
133(NA), 182-197. https://doi.org/10.1016/j.eswa.2019.05.023
41. Cai, G., & Xia, B. (2015). NLPCC - Convolutional Neural Networks for Multimedia Sentiment Analysis (Vol.
NA). Springer International Publishing. https://doi.org/10.1007/978-3-319-25207-0_14
42. Chakravarthi, B. R., Priyadharshini, R., Muralidaran, V., Jose, N., Suryawanshi, S., Sherly, E., & McCrae, J.
P. (2022). DravidianCodeMix: sentiment analysis and offensive language identification dataset for
Dravidian languages in code-mixed text. Language resources and evaluation, 56(3), 765-806.
https://doi.org/10.1007/s10579-022-09583-7
43. Chandrasekaran, G., Antoanela, N., Andrei, G., Monica, C., & Hemanth, J. (2022). Visual Sentiment
Analysis Using Deep Learning Models with Social Media Data. Applied Sciences, 12(3), 1030-1030.
https://doi.org/10.3390/app12031030
44. Chaturvedi, I., Satapathy, R., Cavallari, S., & Cambria, E. (2019). Fuzzy commonsense reasoning for
multimodal sentiment analysis. Pattern Recognition Letters, 125(NA), 264-270.
https://doi.org/10.1016/j.patrec.2019.04.024
45. Chaudhari, A., Davda, P., Dand, M., & Dholay, S. (2021). Profanity Detection and Removal in Videos
using Machine Learning. 2021 6th International Conference on Inventive Computation Technologies
(ICICT), NA(NA), 572-576. https://doi.org/10.1109/icict50816.2021.9358624
46. Chen, H., Zhang, Z., Yin, W., Zhao, C., Wang, F., & Li, Y. (2022). A study on depth classification of defects
by machine learning based on hyper-parameter search. Measurement, 189(NA), 110660-110660.
https://doi.org/10.1016/j.measurement.2021.110660
47. Chen, Y., & Zhang, Z. (2018). Research on text sentiment analysis based on CNNs and SVM. 2018 13th
IEEE Conference on Industrial Electronics and Applications (ICIEA), NA(NA), 2731-2734.
https://doi.org/10.1109/iciea.2018.8398173
48. Chiarello, F., Bonaccorsi, A., & Fantoni, G. (2020). Technical Sentiment Analysis. Measuring Advantages
and Drawbacks of New Products Using Social Media. Computers in Industry, 123(NA), 103299-NA.
https://doi.org/10.1016/j.compind.2020.103299
49. Chin, H., Kim, J., Kim, Y., Shin, J., & Yi, M. Y. (2018). BigComp - Explicit Content Detection in Music Lyrics
Using Machine Learning. 2018 IEEE International Conference on Big Data and Smart Computing
(BigComp), NA(NA), 517-521. https://doi.org/10.1109/bigcomp.2018.00085
50. Chmiel, A., Sobkowicz, P., Sienkiewicz, J., Paltoglou, G., Buckley, K., Thelwall, M., & Hołyst, J. A. (2011).
Negative emotions boost user activity at BBC forum. Physica A: Statistical Mechanics and its
Applications, 390(16), 2936-2944. https://doi.org/10.1016/j.physa.2011.03.040
51. Chowdhury, A., Mobin, S. M., Hossain, M. S., Sikdar, M. S. H., & Bhuiyan, S. M. Y. (2023). Mathematical
And Experimental Investigation Of Vibration Isolation Characteristics Of Negative Stiffness System For
Pipeline. Global Mainstream Journal of Innovation, Engineering & Emerging Technology, 2(01), 15-32.
https://doi.org/10.62304/jieet.v2i01.227
52. Cojocaru, A., Paraschiv, A., & Dascălu, M. (2022). News-RO-Offense - A Romanian Offensive Language
Dataset and Baseline Models Centered on News Article Comments. RoCHI - International Conference
on Human-Computer Interaction, NA(NA), 65-72. https://doi.org/10.37789/rochi.2022.1.1.12
53. Crossley, S. A., Kyle, K., & McNamara, D. S. (2016). Sentiment Analysis and Social Cognition Engine
(SEANCE): An automatic tool for sentiment, social cognition, and social-order analysis. Behavior
research methods, 49(3), 803-821. https://doi.org/10.3758/s13428-016-0743-z
54. Dang, N. C., Moreno-García, M. N., & De la Prieta, F. (2020). Sentiment analysis based on deep learning:
A comparative study. Electronics, 9(3), 483-NA. https://doi.org/10.3390/electronics9030483
55. Dangi, D., Dixit, D. K., & Bhagat, A. (2022). Sentiment analysis of COVID-19 social media data through
machine learning. Multimedia tools and applications, 81(29), 42261-42283.
https://doi.org/10.1007/s11042-022-13492-w
56. Das, R., & Singh, T. D. (2023). Multimodal Sentiment Analysis: A Survey of Methods, Trends, and
Challenges. ACM Comput. Surv., 55(13s), Article 270. https://doi.org/10.1145/3586075
57. Dash, A. K., Rout, J. K., & Jena, S. K. (2015). Harnessing Twitter for Automatic Sentiment Identification
Using Machine Learning Techniques. In (Vol. NA, pp. 507-514). Springer India.
https://doi.org/10.1007/978-81-322-2529-4_53
58. Davila, J., Hershenberg, R., Feinstein, B. A., Gorman, K. R., Bhatia, V., & Starr, L. R. (2012). Frequency and
quality of social networking among young adults: Associations with depressive symptoms, rumination,
and corumination. Psychology of popular media culture, 1(2), 72-86. https://doi.org/10.1037/a0027512
59. Deng, D., Jing, L., Yu, J., & Sun, S. (2019). Sparse Self-Attention LSTM for Sentiment Lexicon Construction.
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(11), 1777-1790.
https://doi.org/10.1109/taslp.2019.2933326
60. Deng, L., Ge, Q., Zhang, J., Li, Z., Yu, Z., Yin, T., & Zhu, H. (2022). News Text Classification Method Based
on the GRU_CNN Model. International Transactions on Electrical Energy Systems, 2022(NA), 1-11.
https://doi.org/10.1155/2022/1197534
61. Deters, F. g., & Mehl, M. R. (2012). Does Posting Facebook Status Updates Increase or Decrease
Loneliness? An Online Social Networking Experiment. Social psychological and personality science, 4(5),
579-586. https://doi.org/10.1177/1948550612469233
62. Dhaoui, C., Webster, C. M., & Tan, L. P. (2017). Social media sentiment analysis: lexicon versus machine
learning. Journal of Consumer Marketing, 34(6), 480-488. https://doi.org/10.1108/jcm-03-2017-2141
63. Diamantini, C., Mircoli, A., Potena, D., & Storti, E. (2019). Social information discovery enhanced by
sentiment analysis techniques. Future Generation Computer Systems, 95(NA), 816-828.
https://doi.org/10.1016/j.future.2018.01.051
64. do Carmo, R. R., Lacerda, A., & Dalip, D. H. (2017). WebMedia - A Majority Voting Approach for
Sentiment Analysis in Short Texts using Topic Models. Proceedings of the 23rd Brazillian Symposium on
Multimedia and the Web, NA(NA), 449-455. https://doi.org/10.1145/3126858.3126861
65. Eke, C. I., Norman, A. A., & Shuib, L. (2021). Context-Based Feature Technique for Sarcasm Identification
in Benchmark Datasets Using Deep Learning and BERT Model. IEEE Access, 9(NA), 48501-48518.
https://doi.org/10.1109/access.2021.3068323
66. Elfajr, N. M., & Sarno, R. (2018). Sentiment Analysis Using Weighted Emoticons and SentiWordNet for
Indonesian Language. 2018 International Seminar on Application for Technology of Information and
Communication, NA(NA), 234-238. https://doi.org/10.1109/isemantic.2018.8549703
67. Fang, X., & Zhan, J. (2015). Sentiment analysis using product review data. Journal of big data, 2(1), 5-
NA. https://doi.org/10.1186/s40537-015-0015-2
68. Faria, J., & Md Rashedul, I. (2025). Carbon Sequestration in Coastal Ecosystems: A Review of Modeling
Techniques and Applications. American Journal of Advanced Technology and Engineering Solutions,
1(01), 41-70. https://doi.org/10.63125/4z73rb29
69. Feldman, R. (2013). Techniques and applications for sentiment analysis. Communications of the ACM,
56(4), 82-89. https://doi.org/10.1145/2436256.2436274
70. Fengjiao, W., & Aono, M. (2018). Visual Sentiment Prediction by Merging Hand-Craft and CNN Features.
2018 5th International Conference on Advanced Informatics: Concept Theory and Applications
(ICAICTA), NA(NA), 66-71. https://doi.org/10.1109/icaicta.2018.8541312
71. Fu, X., Li, J., Yang, K., Cui, L., & Yang, L. (2016). Dynamic Online HDP model for discovering evolutionary
topics from Chinese social texts. Neurocomputing, 171(NA), 412-424.
https://doi.org/10.1016/j.neucom.2015.06.047
72. Fu, X., Liu, G., Guo, Y., & Guo, W. (2011). WISM (2) - Multi-aspect Blog sentiment analysis based on LDA
topic model and hownet lexicon. In (Vol. NA, pp. 131-138). Springer Berlin Heidelberg.
https://doi.org/10.1007/978-3-642-23982-3_17
73. Gaikwad, G., & Joshi, D. (2016). Multiclass mood classification on Twitter using lexicon dictionary and
machine learning algorithms. 2016 International Conference on Inventive Computation Technologies
(ICICT), 2016(NA), 1-6. https://doi.org/10.1109/inventive.2016.7823247
74. Gandhi, A., Adhvaryu, K., Poria, S., Cambria, E., & Hussain, A. (2023). Multimodal sentiment analysis: A
systematic review of history, datasets, multimodal fusion methods, applications, challenges and future
directions. Information Fusion, 91(NA), 424-444. https://doi.org/10.1016/j.inffus.2022.09.025
75. Geng, S., Niu, B., Feng, Y., & Huang, M. (2020). Understanding the focal points and sentiment of learners
in MOOC reviews: A machine learning and SC-LIWC-based approach. British Journal of Educational
Technology, 51(5), 1785-1803. https://doi.org/10.1111/bjet.12999
76. Ghosal, D., Majumder, N., Poria, S., Chhaya, N., & Gelbukh, A. (2019). EMNLP/IJCNLP (1) - DialogueGCN:
A Graph Convolutional Neural Network for Emotion Recognition in Conversation. Proceedings of the
2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint
Conference on Natural Language Processing (EMNLP-IJCNLP), NA(NA), 154-164.
https://doi.org/10.18653/v1/d19-1015
77. Ghosh, S., Ghosh, S., & Das, D. (2017). Sentiment Identification in Code-Mixed Social Media Text. arXiv:
Computation and Language.
78. Gupta, P., Buettner, F., & Schütze, H. (2018). Document Informed Neural Autoregressive Topic Models.
arXiv: Information Retrieval.
79. Habernal, I., Ptáček, T., & Steinberger, J. (2015). Reprint of Supervised sentiment analysis in Czech social
media. Information Processing & Management, 51(4), 532-546.
https://doi.org/10.1016/j.ipm.2015.05.006
80. Habimana, O., Li, Y., Rui-xuan, L., Gu, X., & Yu, G. (2019). Sentiment analysis using deep learning
approaches: an overview. Science China Information Sciences, 63(1), 111102-NA.
https://doi.org/10.1007/s11432-018-9941-6
81. Haidar, B., Chamoun, M., & Serhrouchni, A. (2017). A multilingual system for cyberbullying detection:
Arabic content detection using machine learning. Advances in Science, Technology and Engineering
Systems Journal, 2(6), 275-284. https://doi.org/10.25046/aj020634
82. Hao, J., & Dai, H. (2016). Social media content and sentiment analysis on consumer security breaches.
Journal of Financial Crime, 23(4), 855-869. https://doi.org/10.1108/jfc-01-2016-0001
83. Hasan, Z., Haque, E., Khan, M. A. M., & Khan, M. S. (2024). Smart Ventilation Systems For Real-Time
Pollution Control: A Review Of Ai-Driven Technologies In Air Quality Management. Frontiers in Applied
Engineering and Technology, 1(01), 22-40. https://doi.org/10.70937/faet.v1i01.4
84. Hassan, A., Abbasi, A., & Zeng, D. (2013). SocialCom - Twitter Sentiment Analysis: A Bootstrap Ensemble
Framework. 2013 International Conference on Social Computing, NA(NA), 357-364.
https://doi.org/10.1109/socialcom.2013.56
85. Hassan, A. U., Hussain, J., Hussain, M., Sadiq, M., & Lee, S. (2017). ICTC - Sentiment analysis of social
networking sites (SNS) data using machine learning approach for the measurement of depression. 2017
International Conference on Information and Communication Technology Convergence (ICTC),
NA(NA), 138-140. https://doi.org/10.1109/ictc.2017.8190959
86. Hassan, S. Z., Ahmad, K., Hicks, S., Halvorsen, P., Al-Fuqaha, A., Conci, N., & Riegler, M. (2022). Visual
Sentiment Analysis from Disaster Images in Social Media. Sensors (Basel, Switzerland), 22(10), 3628-NA.
https://doi.org/10.3390/s22103628
87. He, L., Yin, T., & Zheng, K. (2022). They May Not Work! An evaluation of eleven sentiment analysis tools
on seven social media datasets. Journal of biomedical informatics, 132(NA), 104142-104142.
https://doi.org/10.1016/j.jbi.2022.104142
88. Hemmatian, F., & Sohrabi, M. K. (2017). A survey on classification techniques for opinion mining and
sentiment analysis. Artificial Intelligence Review, 52(3), 1495-1545. https://doi.org/10.1007/s10462-017-
9599-6
89. Hossain, M. R., Mahabub, S., & Das, B. C. (2024). The role of AI and data integration in enhancing data
protection in US digital public health an empirical study. Edelweiss Applied Science and Technology,
8(6), 8308-8321.
90. Jahan, F. (2023). Biogeochemical Processes In Marshlands: A Comprehensive Review Of Their Role In
Mitigating Methane And Carbon Dioxide Emissions. Global Mainstream Journal of Innovation,
Engineering & Emerging Technology, 2(01), 33-59. https://doi.org/10.62304/jieet.v2i01.230
91. Jamatia, A., Swamy, S. D., Gambäck, B., Das, A., & Debbarma, S. (2020). Deep Learning Based
Sentiment Analysis in a Code-Mixed English-Hindi and English-Bengali Social Media Corpus. International
Journal on Artificial Intelligence Tools, 29(05), 2050014-NA. https://doi.org/10.1142/s0218213020500141
92. Jhanwar, M. G., & Das, A. (2018). An Ensemble Model for Sentiment Analysis of Hindi-English Code-Mixed
Data. arXiv: Computation and Language.
93. Ji, R., Cao, D., Zhou, Y., & Chen, F. (2016). Survey of visual sentiment prediction for social media analysis.
Frontiers of Computer Science, 10(4), 602-611. https://doi.org/10.1007/s11704-016-5453-2
94. Ji, S., Pan, S., Li, X., Cambria, E., Long, G., & Huang, Z. (2021). Suicidal Ideation Detection: A Review of
Machine Learning Methods and Applications. IEEE Transactions on Computational Social Systems, 8(1),
214-226. https://doi.org/10.1109/tcss.2020.3021467
95. Jim, M. M. I., Hasan, M., & Munira, M. S. K. (2024). The Role Of AI In Strengthening Data Privacy For Cloud
Banking. Frontiers in Applied Engineering and Technology, 1(01), 252-268.
https://doi.org/10.70937/faet.v1i01.39
96. Kalarani, P., & Brunda, S. S. (2018). Sentiment analysis by POS and joint sentiment topic features using
SVM and ANN. Soft Computing, 23(16), 7067-7079. https://doi.org/10.1007/s00500-018-3349-9
97. Karyotis, C., Doctor, F., Iqbal, R., James, A., & Chang, V. (2018). A fuzzy computational model of emotion
for cloud based sentiment analysis. Information Sciences, 433-434(NA), 448-463.
https://doi.org/10.1016/j.ins.2017.02.004
98. Kastrati, Z., Ahmedi, L., Kurti, A., Kadriu, F., Murtezaj, D., & Gashi, F. (2021). A deep learning sentiment
analyser for social media comments in low-resource languages. Electronics, 10(10), 1133-NA.
https://doi.org/10.3390/electronics10101133
99. Khan, M. A. M. (2025). AI AND MACHINE LEARNING IN TRANSFORMER FAULT DIAGNOSIS: A SYSTEMATIC
REVIEW. American Journal of Advanced Technology and Engineering Solutions, 1(01), 290-318.
https://doi.org/10.63125/sxb17553
100. Kim, C.-G., Hwang, Y.-J., & Kamyod, C. (2022). A Study of Profanity Effect in Sentiment Analysis on
Natural Language Processing Using ANN. Journal of Web Engineering.
https://doi.org/10.13052/jwe1540-9589.2139
101. Konate, A., & Du, R. (2018). Sentiment Analysis of Code-Mixed Bambara-French Social Media Text Using
Deep Learning Techniques. Wuhan University Journal of Natural Sciences, 23(3), 237-243.
https://doi.org/10.1007/s11859-018-1316-z
102. Koukaras, P., Tjortjis, C., & Rousidis, D. (2019). Social Media Types: introducing a data driven taxonomy.
Computing, 102(1), 295-340. https://doi.org/10.1007/s00607-019-00739-y
103. Kumar, S., Gahalawat, M., Roy, P. P., Dogra, D. P., & Kim, B.-G. (2020). Exploring Impact of Age and
Gender on Sentiment Analysis Using Machine Learning. Electronics, 9(2), 374-NA.
https://doi.org/10.3390/electronics9020374
104. Lai, S., Xu, L., Liu, K., & Zhao, J. (2015). Recurrent Convolutional Neural Networks for Text Classification.
Proceedings of the AAAI Conference on Artificial Intelligence, 29(1).
https://doi.org/10.1609/aaai.v29i1.9513
105. Lei, X., Qian, X., & Zhao, G. (2016). Rating Prediction Based on Social Sentiment From Textual Reviews.
IEEE Transactions on Multimedia, 18(9), 1910-1921. https://doi.org/10.1109/tmm.2016.2575738
106. Li, X., Wang, Y., Zhang, A., Li, C., Chi, J., & Ouyang, J. (2018). Filtering out the noise in short text topic
modeling. Information Sciences, 456, 83-96. https://doi.org/10.1016/j.ins.2018.04.071
107. Li, Z., & Zou, Z. (2024). Punctuation and lexicon aid representation: A hybrid model for short text
sentiment analysis on social media platform. Journal of King Saud University - Computer and Information
Sciences, 36(3), 102010. https://doi.org/10.1016/j.jksuci.2024.102010
108. Lim, W. L., Ho, C. C., & Ting, C.-Y. (2020). Sentiment Analysis by Fusing Text and Location Features of
Geo-Tagged Tweets. IEEE Access, 8, 181014-181027. https://doi.org/10.1109/access.2020.3027845
109. Lin, J., & Kolcz, A. (2012). Large-scale machine learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, 793-804.
https://doi.org/10.1145/2213836.2213958
110. Liu, B. (2012). Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language
Technologies, 5(1), 1-167. https://doi.org/10.2200/s00416ed1v01y201204hlt016
111. Liu, G., Xianying, H., Xiaoyang, L., & Yang, A. (2019). A Novel Aspect-based Sentiment Analysis Network
Model Based on Multilingual Hierarchy in Online Social Network. The Computer Journal, 63(3), 410-424.
https://doi.org/10.1093/comjnl/bxz031
112. Liu, K., & Chen, L. (2019). Medical Social Media Text Classification Integrating Consumer Health
Terminology. IEEE Access, 7, 78185-78193. https://doi.org/10.1109/access.2019.2921938
113. Lo, S. L., Cambria, E., Chiong, R., & Cornforth, D. (2016). Multilingual sentiment analysis: from formal to
informal and scarce resource languages. Artificial Intelligence Review, 48(4), 499-527.
https://doi.org/10.1007/s10462-016-9508-4
114. Ma, Y., Peng, H., & Cambria, E. (2018). Targeted Aspect-Based Sentiment Analysis via Embedding
Commonsense Knowledge into an Attentive LSTM. Proceedings of the AAAI Conference on Artificial
Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12048
115. Maghilnan, S., & Kumar, M. R. (2017). Sentiment analysis on speaker specific speech data. 2017
International Conference on Intelligent Computing and Control (I2C2), 1-5.
https://doi.org/10.1109/i2c2.2017.8321795
116. Mahabub, S., Das, B. C., & Hossain, M. R. (2024). Advancing healthcare transformation: AI-driven
precision medicine and scalable innovations through data analytics. Edelweiss Applied Science and
Technology, 8(6), 8322-8332.
117. Mahabub, S., Jahan, I., Islam, M. N., & Das, B. C. (2024). The Impact of Wearable Technology on Health
Monitoring: A Data-Driven Analysis with Real-World Case Studies and Innovations. Journal of Electrical
Systems, 20.
118. Majumder, N., Hazarika, D., Gelbukh, A., Cambria, E., & Poria, S. (2018). Multimodal sentiment analysis
using hierarchical fusion with context modeling. Knowledge-Based Systems, 161, 124-133.
https://doi.org/10.1016/j.knosys.2018.07.041
119. Mansour, S. (2018). Social Media Analysis of User’s Responses to Terrorism Using Sentiment Analysis and
Text Mining. Procedia Computer Science, 140, 95-103. https://doi.org/10.1016/j.procs.2018.10.297
120. Md Mahfuj, H., Md Rabbi, K., Mohammad Samiul, I., Faria, J., & Md Jakaria, T. (2022). Hybrid Renewable
Energy Systems: Integrating Solar, Wind,And Biomass For Enhanced Sustainability And Performance.
American Journal of Scholarly Research and Innovation, 1(01), 1-24. https://doi.org/10.63125/8052hp43
121. Md Suhaimin, M. S., Ahmad Hijazi, M. H., Moung, E. G., Nohuddin, P. N. E., Chua, S., & Coenen, F. (2023).
Social media sentiment analysis and opinion mining in public security: Taxonomy, trend analysis, issues
and future directions. Journal of King Saud University - Computer and Information Sciences, 35(9),
101776. https://doi.org/10.1016/j.jksuci.2023.101776
122. Md Takbir Hossen, S., Ishtiaque, A., & Md Atiqur, R. (2023). AI-Based Smart Textile Wearables For Remote
Health Surveillance And Critical Emergency Alerts: A Systematic Literature Review. American Journal of
Scholarly Research and Innovation, 2(02), 1-29. https://doi.org/10.63125/ceqapd08
123. Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey.
Ain Shams Engineering Journal, 5(4), 1093-1113. https://doi.org/10.1016/j.asej.2014.04.011
124. Mitrović, M., Paltoglou, G., & Tadić, B. (2011). Quantitative analysis of bloggers' collective behavior
powered by emotions. Journal of Statistical Mechanics: Theory and Experiment, 2011(02), P02005.
https://doi.org/10.1088/1742-5468/2011/02/p02005
125. Montenegro, C., Ligutom, C., Orio, J. V., & Ramacho, D. A. M. (2018). Using Latent Dirichlet Allocation
for Topic Modeling and Document Clustering of Dumaguete City Twitter Dataset. Proceedings of the
2018 International Conference on Computing and Data Engineering, 1-5.
https://doi.org/10.1145/3219788.3219799
126. Moraes, R., Valiati, J. F., & Neto, W. P. G. (2013). Document-level sentiment classification: An empirical
comparison between SVM and ANN. Expert Systems with Applications, 40(2), 621-633.
https://doi.org/10.1016/j.eswa.2012.07.059
127. Mosa Sumaiya Khatun, M., Shaharima, J., & Aklima, B. (2025). Artificial Intelligence in Financial Customer
Relationship Management: A Systematic Review of AI-Driven Strategies in Banking and FinTech.
American Journal of Advanced Technology and Engineering Solutions, 1(01), 20-40.
https://doi.org/10.63125/gy32cz90
128. Mozetič, I., Grčar, M., & Smailović, J. (2016). Multilingual Twitter Sentiment Classification: The Role of
Human Annotators. PloS one, 11(5), e0155036. https://doi.org/10.1371/journal.pone.0155036
129. Mridha Younus, S. H., & Md Morshedul, I. (2024). Advanced Business Analytics in Textile & Fashion
Industries: Driving Innovation And Sustainable Growth. International Journal of Management
Information Systems and Data Science, 1(2), 37-47. https://doi.org/10.62304/ijmisds.v1i2.143
130. Muhammad Mohiul, I., Morshed, A. S. M., Md Enamul, K., & Md, A.-A. (2022). Adaptive Control Of
Resource Flow In Construction Projects Through Deep Reinforcement Learning: A Framework For
Enhancing Project Performance In Complex Environments. American Journal of Scholarly Research and
Innovation, 1(01), 76-107. https://doi.org/10.63125/gm77xp11
131. Mukherjee, S. (2019). Deep Learning Technique for Sentiment Analysis of Hindi-English Code-Mixed Text
using Late Fusion of Character and Word Features. 2019 IEEE 16th India Council International
Conference (INDICON), 1-4. https://doi.org/10.1109/indicon47234.2019.9028928
132. Munira, M. S. K. (2025). Digital Transformation in Banking: A Systematic Review Of Trends, Technologies,
And Challenges. Strategic Data Management and Innovation, 2(01), 78-95.
https://doi.org/10.71292/sdmi.v2i01.12
133. Nahid, O. F., Rahmatullah, R., Al-Arafat, M., Kabir, M. E., & Dasgupta, A. (2024). Risk mitigation strategies
in large scale infrastructure project: a project management perspective. Journal of Science and
Engineering Research, 1(01), 21-37. https://doi.org/10.70008/jeser.v1i01.38
134. Nalinde, M. P. B., & Shinde, P. A. (2019). Machine Learning Framework for Detection of Psychological
Disorders at OSN. International Journal of Innovative Technology and Exploring Engineering, 8(11), 3293-
3298. https://doi.org/10.35940/ijitee.i8823.0981119
135. Nandwani, P., & Verma, R. (2021). A review on sentiment analysis and emotion detection from text.
Social Network Analysis and Mining, 11(1), 81. https://doi.org/10.1007/s13278-021-00776-6
136. Nankani, H., Dutta, H., Shrivastava, H., Krishna, P. V. N. S. R., Mahata, D., & Shah, R. R. (2020). Multilingual
Sentiment Analysis (pp. 193-236). Springer Singapore. https://doi.org/10.1007/978-981-15-
1216-2_8
137. Nemes, L., & Kiss, A. (2020). Social media sentiment analysis based on COVID-19. Journal of Information
and Telecommunication, 5(1), 1-15. https://doi.org/10.1080/24751839.2020.1790793
138. Nguyen, D. Q., Vu, T., & Nguyen, A. T. (2020). BERTweet: A pre-trained language model for English Tweets. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 9-14. https://doi.org/10.18653/v1/2020.emnlp-demos.2
139. Nguyen, T. H., & Shirai, K. (2015). PhraseRNN: Phrase Recursive Neural Network for Aspect-based Sentiment Analysis. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2509-2514. https://doi.org/10.18653/v1/d15-1298
140. Nkomo, L. M., Ndukwe, I. G., & Daniel, B. K. (2020). Social Network and Sentiment Analysis: Investigation
of Students’ Perspectives on Lecture Recording. IEEE Access, 8, 228693-228701.
https://doi.org/10.1109/access.2020.3044064
141. Olaniyan, D., Ogundokun, R. O., Bernard, O. P., Olaniyan, J., Maskeliūnas, R., & Akande, H. B. (2023).
Utilizing an Attention-Based LSTM Model for Detecting Sarcasm and Irony in Social Media. Computers,
12(11), 231. https://doi.org/10.3390/computers12110231
142. Onan, A. (2020). Sentiment analysis on product reviews based on weighted word embeddings and
deep neural networks. Concurrency and Computation: Practice and Experience, 33(23).
https://doi.org/10.1002/cpe.5909
143. Ortigosa, A., Martín, J. M. P., & Carro, R. M. (2014). Sentiment analysis in Facebook and its application
to e-learning. Computers in Human Behavior, 31, 527-541. https://doi.org/10.1016/j.chb.2013.05.024
144. Özyurt, B., & Akcayol, M. A. (2021). A new topic modeling based approach for aspect extraction in
aspect based sentiment analysis: SS-LDA. Expert Systems with Applications, 168, 114231.
https://doi.org/10.1016/j.eswa.2020.114231
145. Pal, S., Ghosh, S., & Nag, A. (2018). Sentiment Analysis in the Light of LSTM Recurrent Neural Networks.
International Journal of Synthetic Emotions, 9(1), 33-39. https://doi.org/10.4018/ijse.2018010103
146. Paltoglou, G., & Thelwall, M. (2012). Twitter, MySpace, Digg: Unsupervised Sentiment Analysis in Social
Media. ACM Transactions on Intelligent Systems and Technology, 3(4), Article 66.
https://doi.org/10.1145/2337542.2337551
147. Pathak, A. R., Pandey, M., & Rautaray, S. S. (2021). Topic-level sentiment analysis of social media data
using deep learning. Applied Soft Computing, 108, 107440.
https://doi.org/10.1016/j.asoc.2021.107440
148. Paul, J., Das Chatterjee, A., Misra, D., Majumder, S., Rana, S., Gain, M., De, A., Mallick, S., & Sil, J. (2024).
A survey and comparative study on negative sentiment analysis in social media data. Multimedia tools
and applications, 83(30), 75243-75292. https://doi.org/10.1007/s11042-024-18452-0
149. Peñalver-Martinez, I., García-Sánchez, F., Valencia-García, R., Rodríguez-García, M. Á., Moreno, V.,
Fraga, A., & Sánchez-Cervantes, J. L. (2014). Feature-based opinion mining through ontologies. Expert
Systems with Applications, 41(13), 5995-6008. https://doi.org/10.1016/j.eswa.2014.03.022
150. Pereira, M. H. R., Pádua, F. L. C., Dalip, D. H., Benevenuto, F., Pereira, A. C. M., & Lacerda, A. (2019).
Multimodal approach for tension levels estimation in news videos. Multimedia tools and applications,
78(16), 23783-23808. https://doi.org/10.1007/s11042-019-7691-4
151. Phu, V. N., Dat, N. D., Tran, V. T. N., Chau, V. T. N., & Nguyen, T. A. (2016). Fuzzy C-means for English
sentiment classification in a distributed system. Applied Intelligence, 46(3), 717-738.
https://doi.org/10.1007/s10489-016-0858-z
152. Piana, S., Staglianò, A., Odone, F., Verri, A., & Camurri, A. (2014). Real-time Automatic Emotion
Recognition from Body Gestures. arXiv preprint.
153. Pong-inwong, C., & Songpan, W. (2018). Sentiment analysis in teaching evaluations using sentiment
phrase pattern matching (SPPM) based on association mining. International Journal of Machine
Learning and Cybernetics, 10(8), 2177-2186. https://doi.org/10.1007/s13042-018-0800-2
154. Poria, S., Cambria, E., & Gelbukh, A. (2015). Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2539-2544.
https://doi.org/10.18653/v1/d15-1303
155. Poria, S., Cambria, E., Howard, N., Huang, G.-B., & Hussain, A. (2016). Fusing audio, visual and textual
clues for sentiment analysis from multimodal content. Neurocomputing, 174, 50-59.
https://doi.org/10.1016/j.neucom.2015.01.095
156. Rahaman, T., & Islam, M. S. (2021). Study of shrinkage of concrete using normal weight and lightweight
aggregate. International Journal of Engineering Applied Sciences and Technology, 6(6), 0-45.
157. Ravi, K. S., & Ravi, V. (2015). A survey on opinion mining and sentiment analysis. Knowledge-Based
Systems, 89, 14-46. https://doi.org/10.1016/j.knosys.2015.06.015
158. Ravi, K. S., & Ravi, V. (2016). Sentiment classification of Hinglish text. 2016 3rd International Conference on Recent Advances in Information Technology (RAIT), 641-645.
https://doi.org/10.1109/rait.2016.7507974
159. Rezaeinia, S. M., Rahmani, R., Ghodsi, A., & Veisi, H. (2019). Sentiment analysis based on improved pre-
trained word embeddings. Expert Systems with Applications, 117(NA), 139-147.
https://doi.org/10.1016/j.eswa.2018.08.044
160. Riaz, S., Fatima, M., Kamran, M., & Nisar, M. W. (2017). Opinion mining on large scale data using
sentiment analysis and k-means clustering. Cluster Computing, 22(3), 7149-7164.
https://doi.org/10.1007/s10586-017-1077-z
161. Rocktäschel, T., Grefenstette, E., Hermann, K. M., Kočiský, T., & Blunsom, P. (2015). Reasoning about
Entailment with Neural Attention. arXiv preprint.
162. Rodriguez-Ibanez, M., Gimeno-Blanes, F.-J., Cuenca-Jimenez, P. M., Muñoz-Romero, S., Soguero, C., &
Rojo-Álvarez, J. L. (2020). On the Statistical and Temporal Dynamics of Sentiment Analysis. IEEE Access,
8, 87994-88013. https://doi.org/10.1109/access.2020.2987207
163. Rojas-Barahona, L. M. (2016). Deep learning for sentiment analysis. Language and Linguistics Compass,
10(12), 701-719. https://doi.org/10.1111/lnc3.12228
164. Roksana, H. (2023). Automation In Manufacturing: A Systematic Review Of Advanced Time
Management Techniques To Boost Productivity. American Journal of Scholarly Research and
Innovation, 2(01), 50-78. https://doi.org/10.63125/z1wmcm42
165. Rosas, V. P., Mihalcea, R., & Morency, L.-P. (2013). Multimodal Sentiment Analysis of Spanish Online
Videos. IEEE Intelligent Systems, 28(3), 38-45. https://doi.org/10.1109/mis.2013.9
166. Ruder, S., Ghaffari, P., & Breslin, J. G. (2016). A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 999-1005. https://doi.org/10.18653/v1/d16-1103
167. Sabid, A. M., & Kamrul, H. M. (2024). Computational And Theoretical Analysis On The Single Proton
Transfer Process In Adenine Base By Using DFT Theory And Thermodynamics. IOSR Journal of Applied
Chemistry.
168. Saif, H., Fernandez, M., Kastler, L., & Alani, H. (2017). Sentiment Lexicon Adaptation with Context and
Semantics for the Social Web. Semantic Web, 8(5), 643-665. https://doi.org/10.3233/sw-170265
169. Salur, M. U., & Aydin, I. (2020). A Novel Hybrid Deep Learning Model for Sentiment Classification. IEEE
Access, 8, 58080-58093. https://doi.org/10.1109/access.2020.2982538
170. San Vicente Roncal, I. (2019). Multilingual sentiment analysis in social media.
171. Schouten, K., & Frasincar, F. (2016). Survey on Aspect-Level Sentiment Analysis. IEEE Transactions on
Knowledge and Data Engineering, 28(3), 813-830. https://doi.org/10.1109/tkde.2015.2485209
172. Severyn, A., Moschitti, A., Uryupina, O., Plank, B., & Filippova, K. (2016). Multi-lingual opinion mining on
YouTube. Information Processing & Management, 52(1), 46-60. https://doi.org/10.1016/j.ipm.2015.03.002
173. Shah, B., & Shah, M. (2020). A Survey on Machine Learning and Deep Learning Based Approaches for
Sarcasm Identification in Social Media (pp. 247-259). Springer Singapore.
https://doi.org/10.1007/978-981-15-4474-3_29
174. Shahare, F. F. (2017). Sentiment analysis for the news data based on the social media. 2017 International
Conference on Intelligent Computing and Control Systems (ICICCS), 1365-1370.
https://doi.org/10.1109/iccons.2017.8250692
175. Shanmugavadivel, K., Sathishkumar, V. E., Raja, S., Lingaiah, T. B., Neelakandan, S., & Subramanian, M.
(2022). Deep learning based sentiment analysis and offensive language identification on multilingual
code-mixed data. Scientific reports, 12(1), 21557. https://doi.org/10.1038/s41598-022-26092-3
176. Sharma, P., & Sharma, A. K. (2020). Experimental investigation of automated system for twitter sentiment
analysis to predict the public emotions using machine learning algorithms. Materials Today:
Proceedings. https://doi.org/10.1016/j.matpr.2020.09.351
177. Shi, M. (2019). Research on Parallelization of Microblog Emotional Analysis Algorithms Using Deep
Learning and Attention Model Based on Spark Platform. IEEE Access, 7, 177211-177218.
https://doi.org/10.1109/access.2019.2955501
178. Singla, Z., Randhawa, S., & Jain, S. (2017). Statistical and sentiment analysis of consumer product
reviews. 2017 8th International Conference on Computing, Communication and Networking
Technologies (ICCCNT), 1-6. https://doi.org/10.1109/icccnt.2017.8203960
179. Sodhi, R., Pant, K., & Mamidi, R. (2021). Jibes & Delights: A Dataset of Targeted Insults and Compliments
to Tackle Online Abuse. Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021),
132-139. https://doi.org/10.18653/v1/2021.woah-1.14
180. Soni, J., & Mathur, K. (2022). Sentiment analysis based on aspect and context fusion using attention
encoder with LSTM. International Journal of Information Technology, 14(7), 3611-3618.
https://doi.org/10.1007/s41870-022-00966-1
181. Steiner-Correa, F., Viedma-del-Jesús, M. I., & López-Herrera, A. G. (2017). A survey of multilingual human-
tagged short message datasets for sentiment analysis tasks. Soft Computing, 22(24), 8227-8242.
https://doi.org/10.1007/s00500-017-2766-5
182. Studiawan, H., Sohel, F., & Payne, C. (2020). Sentiment analysis in a forensic timeline with deep learning.
IEEE Access, 8, 60664-60675. https://doi.org/10.1109/access.2020.2983435
183. Subramani, S., Michalska, S., Wang, H., Du, J., Zhang, Y., & Shakeel, H. (2019). Deep Learning for Multi-
Class Identification From Domestic Violence Online Posts. IEEE Access, 7, 46210-46224.
https://doi.org/10.1109/access.2019.2908827
184. Sunny, M. A. U. (2024a). Eco-Friendly Approach: Affordable Bio-Crude Isolation from Faecal Sludge
Liquefied Product. Journal of Scientific and Engineering Research, 11(5), 18-25.
185. Sunny, M. A. U. (2024b). Effects of Recycled Aggregate on the Mechanical Properties and Durability of
Concrete: A Comparative Study. Journal of Civil and Construction Engineering, 7-14.
186. Sunny, M. A. U. (2024c). Unveiling spatial insights: navigating the parameters of dynamic Geographic
Information Systems (GIS) analysis. International Journal of Science and Research Archive, 11(2), 1976-
1985.
187. Tabinda Kokab, S., Asghar, S., & Naz, S. (2022). Transformer-based deep learning models for the
sentiment analysis of social media data. Array, 14, 100157.
https://doi.org/10.1016/j.array.2022.100157
188. Tang, D., Qin, B., & Liu, T. (2015). Deep learning for sentiment analysis: successful approaches and future
challenges. WIREs Data Mining and Knowledge Discovery, 5(6), 292-303.
https://doi.org/10.1002/widm.1171
189. Tang, D., Wei, F., Qin, B., Yang, N., Liu, T., & Zhou, M. (2016). Sentiment Embeddings with Applications to
Sentiment Analysis. IEEE Transactions on Knowledge and Data Engineering, 28(2), 496-509.
https://doi.org/10.1109/tkde.2015.2489653
190. Tanna, D., Dudhane, M., Sardar, A., Deshpande, K., & Deshmukh, N. S. (2020). Sentiment Analysis on
Social Media for Emotion Classification. 2020 4th International Conference on Intelligent Computing
and Control Systems (ICICCS), 911-915. https://doi.org/10.1109/iciccs48265.2020.9121057
191. Tellez, E. S., Miranda-Jiménez, S., Graff, M., Moctezuma, D., Suárez, R. R., & Siordia, O. S. (2017). A simple approach to multilingual polarity classification in Twitter. Pattern Recognition Letters, 94, 68-74.
https://doi.org/10.1016/j.patrec.2017.05.024
192. Tembhurne, J. V., & Diwan, T. (2020). Sentiment analysis in textual, visual and multimodal inputs using
recurrent neural networks. Multimedia tools and applications, 80(5), 6871-6910.
https://doi.org/10.1007/s11042-020-10037-x
193. Thara, S., & Poornachandran, P. (2022). Social media text analytics of Malayalam-English code-mixed
using deep learning. Journal of big data, 9(1), 45-NA. https://doi.org/10.1186/s40537-022-00594-3
194. Tonoy, A. A. R., & Khan, M. R. (2023). The Role of Semiconducting Electrides In Mechanical Energy
Conversion And Piezoelectric Applications: A Systematic Literature Review. American Journal of
Scholarly Research and Innovation, 2(1), 01-23. https://doi.org/10.63125/patvqr38
195. Usama, M., Xiao, W., Ahmad, B., Wan, J., Hassan, M. M., & Alelaiwi, A. (2019). Deep Learning Based
Weighted Feature Fusion Approach for Sentiment Analysis. IEEE Access, 7, 140252-140260.
https://doi.org/10.1109/access.2019.2940051
196. Van Dijck, J., & Poell, T. (2018). Social media platforms and education. The SAGE handbook of social
media, 579-591.
197. Vinoth, D., & Prabhavathy, P. (2022). An intelligent machine learning-based sarcasm detection and
classification model on social networks. The Journal of supercomputing, 78(8), 10575-10594.
https://doi.org/10.1007/s11227-022-04312-x
198. Wadawadagi, R. S., & Pagi, V. B. (2020). Sentiment analysis with deep neural networks: comparative
study and performance assessment. Artificial Intelligence Review, 53(8), 6155-6195.
https://doi.org/10.1007/s10462-020-09845-2
199. Wang, W., Feng, S., Gao, W., Wang, D., & Zhang, Y. (2018). Personalized Microblog Sentiment Classification via Adversarial Cross-lingual Multi-task Learning. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 338-348. https://doi.org/10.18653/v1/d18-1031
200. Wang, Y., Huang, M., Zhu, X., & Zhao, L. (2016). Attention-based LSTM for Aspect-level Sentiment Classification. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 606-615. https://doi.org/10.18653/v1/d16-1058
201. Wei, Z., Liu, W., Zhu, G., Zhang, S., & Hsieh, M.-Y. (2021). Sentiment classification of Chinese Weibo based
on extended sentiment dictionary and organisational structure of comments. Connection Science,
34(1), 409-428. https://doi.org/10.1080/09540091.2021.2006146
202. Weller, K. (2016). Trying to understand social media users and usage: The forgotten features of social
media platforms. Online Information Review, 40(2), 256-264.
203. Wu, F., Zhang, J., Yuan, Z., Wu, S., Huang, Y., & Yan, J. (2017). Sentence-level Sentiment Classification with Weak Supervision. Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 973-976.
https://doi.org/10.1145/3077136.3080693
204. Wu, L., Mingchao, Q., Jian, M., & Zhang, H. (2019). Visual Sentiment Analysis by Combining Global and
Local Information. Neural Processing Letters, 51(3), 2063-2075. https://doi.org/10.1007/s11063-019-10027-
7
205. Xia, R., Zong, C., & Li, S. (2011). Ensemble of feature sets and classification algorithms for sentiment
classification. Information Sciences, 181(6), 1138-1152. https://doi.org/10.1016/j.ins.2010.11.023
206. Xiao, G., Tu, G., Zheng, L., Zhou, T., Li, X., Ahmed, S. H., & Jiang, D. (2021). Multimodality Sentiment
Analysis in Social Internet of Things Based on Hierarchical Attentions and CSAT-TCN With MBM Network.
IEEE Internet of Things Journal, 8(16), 12748-12757. https://doi.org/10.1109/jiot.2020.3015381
207. Xiong, S., Wang, K., Ji, D., & Wang, B. (2018). A short text sentiment-topic model for product reviews.
Neurocomputing, 297, 94-102. https://doi.org/10.1016/j.neucom.2018.02.034
208. Xu, J., Huang, F., Zhang, X., Wang, S., Li, C., Li, Z., & He, Y. (2019). Sentiment analysis of social images via
hierarchical deep fusion of content and links. Applied Soft Computing, 80, 387-399.
https://doi.org/10.1016/j.asoc.2019.04.010
209. Xu, Q. A., Chang, V., & Jayne, C. (2022). A systematic review of social media-based sentiment analysis:
Emerging trends and challenges. Decision Analytics Journal, 3, 100073.
https://doi.org/10.1016/j.dajour.2022.100073
210. Yadav, A., & Vishwakarma, D. K. (2019). Sentiment analysis using deep learning architectures: a review.
Artificial Intelligence Review, 53(6), 4335-4385. https://doi.org/10.1007/s10462-019-09794-5
211. Yan, Y., Yin, X.-C., Zhang, B.-W., Yang, C., & Hao, H.-W. (2016). Semantic indexing with deep learning: a
case study. Big Data Analytics, 1(1), 1-13. https://doi.org/10.1186/s41044-016-0007-z
212. Yang, J. S., & Chung, K. S. (2019). Newly-Coined Words and Emoticon Polarity for Social Emotional
Opinion Decision. 2019 IEEE 2nd International Conference on Information and Computer Technologies
(ICICT), 76-79. https://doi.org/10.1109/infoct.2019.8711413
213. Yang, L., Li, Y., Wang, J., & Sherratt, R. S. (2020). Sentiment Analysis for E-Commerce Product Reviews in
Chinese Based on Sentiment Lexicon and Deep Learning. IEEE Access, 8, 23522-23530.
https://doi.org/10.1109/access.2020.2969854
214. Yazdavar, A. H., Mahdavinejad, M. S., Bajaj, G., Romine, W. L., Sheth, A., Monadjemi, A., Thirunarayan,
K., Meddar, J. M., Myers, A. C., Pathak, J., & Hitzler, P. (2020). Multimodal mental health analysis in social
media. PloS one, 15(4), e0226248. https://doi.org/10.1371/journal.pone.0226248
215. Younus, M. (2022). Reducing Carbon Emissions in The Fashion And Textile Industry Through Sustainable
Practices and Recycling: A Path Towards A Circular, Low-Carbon Future. Global Mainstream Journal of
Business, Economics, Development & Project Management, 1(1), 57-76.
https://doi.org/10.62304/jbedpm.v1i1.226
216. Younus, M. (2025). The Economics of A Zero-Waste Fashion Industry: Strategies To Reduce Wastage,
Minimize Clothing Costs, And Maximize & Sustainability. Strategic Data Management and Innovation,
2(01), 116-137. https://doi.org/10.71292/sdmi.v2i01.15
217. Yu, L.-C., Wang, J., Lai, K. R., & Zhang, X. (2017). Refining Word Embeddings for Sentiment Analysis. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 534-539. https://doi.org/10.18653/v1/d17-1056
218. Yu, L.-C., Wang, J., Lai, K. R., & Zhang, X. (2018). Refining Word Embeddings Using Intensity Scores for
Sentiment Analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(3), 671-681.
https://doi.org/10.1109/taslp.2017.2788182
219. Yue, L., Chen, W., Li, X., Zuo, W., & Yin, M. (2018). A survey of sentiment analysis in social media.
Knowledge and Information Systems, 60(2), 617-663. https://doi.org/10.1007/s10115-018-1236-4
220. Zadeh, A., Chen, M., Poria, S., Cambria, E., & Morency, L.-P. (2017). Tensor Fusion Network for Multimodal Sentiment Analysis. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 1103-1114. https://doi.org/10.18653/v1/d17-1115
221. Zhai, S., & Zhang, Z. (2015). Semisupervised Autoencoder for Sentiment Analysis. arXiv preprint.
222. Zhang, H., Wu, J., Shi, H., Jiang, Z., Ji, D., Yuan, T., & Li, G. (2020). Multidimensional Extra Evidence Mining
for Image Sentiment Analysis. IEEE Access, 8, 103619-103634.
https://doi.org/10.1109/access.2020.2999128
223. Zhang, K., Li, Y., Wang, J., Cambria, E., & Li, X. (2022). Real-Time Video Emotion Recognition Based on
Reinforcement Learning and Domain Knowledge. IEEE Transactions on Circuits and Systems for Video
Technology, 32(3), 1034-1047. https://doi.org/10.1109/tcsvt.2021.3072412
224. Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. WIREs Data Mining
and Knowledge Discovery, 8(4). https://doi.org/10.1002/widm.1253
225. Zhang, X., Zhou, H., Yu, K., Zhang, X., Wu, X., & Yazidi, A. (2022). Sentiment Analysis for Chinese Dataset
with Tsetlin Machine. 2022 International Symposium on the Tsetlin Machine (ISTM), 1-6.
https://doi.org/10.1109/istm54910.2022.00010
226. Zhang, Y., Lu, J., Liu, F., Liu, Q., Porter, A. L., Chen, H., & Zhang, G. (2018). Does deep learning help topic
extraction? A kernel k-means clustering method with word embedding. Journal of Informetrics, 12(4),
1099-1117. https://doi.org/10.1016/j.joi.2018.09.004
227. Zhang, Z., Ye, Q., Zhang, Z., & Li, Y. (2011). Sentiment classification of Internet restaurant reviews written
in Cantonese. Expert Systems with Applications, 38(6), 7674-7682.
https://doi.org/10.1016/j.eswa.2010.12.147
228. Zhao, X., Wang, D., Zhao, Z., Wei, L., Lu, C., & Zhuang, F. (2021). A neural topic model with word vectors
and entity vectors for short texts. Information Processing & Management, 58(2), 102455.
https://doi.org/10.1016/j.ipm.2020.102455
229. Zhao, Y., Qin, B., Liu, T., & Tang, D. (2014). Social sentiment sensor: a visualization system for topic
detection and topic sentiment analysis on microblog. Multimedia tools and applications, 75(15), 8843-
8860. https://doi.org/10.1007/s11042-014-2184-y
230. Zhou, G., Zeng, Z., Huang, J. X., & He, T. (2016). Transfer Learning for Cross-Lingual Sentiment Classification with Weakly Shared Deep Neural Networks. Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, 245-254.
https://doi.org/10.1145/2911451.2911490
231. Zhou, J., Sun, H., Wang, Z., Cong, W., Zeng, M., Zhou, W., Bie, P., Liu, L., Wen, T., & Kuang, M. (2023).
Guidelines for the diagnosis and treatment of primary liver cancer (2022 edition). Liver cancer, 12(5),
405-444.
232. Zhou, Y., Yang, Y., Liu, H., Liu, X., & Savage, N. (2020). Deep Learning Based Fusion Approach for Hate
Speech Detection. IEEE Access, 8, 128923-128929. https://doi.org/10.1109/access.2020.3009244
233. Zunic, A., Corcoran, P., & Spasic, I. (2020). Sentiment Analysis in Health and Well-Being: Systematic
Review. JMIR medical informatics, 8(1), e16023. https://doi.org/10.2196/16023