2024-11-21
Fateme Daneshfar

Academic rank: Assistant Professor
Education: Ph.D.
ScopusId: 35078447100
Faculty: Faculty of Engineering
Address: Department of Computer Engineering, Faculty of Engineering, University of Kurdistan

Research

Title
Multi-objective Manifold Representation for Opinion Mining
Type
Thesis
Keywords
Opinion mining; Dimension reduction; Manifold Learning; Deep global features extraction; Local manifold feature extraction
Year
2024
Researchers
Pshtiwan Rahman Raheem (Student), Fateme Daneshfar (Primary Advisor), Hashem Parvin (Advisor)

Abstract

Sentiment analysis is an essential task in numerous domains, and it requires effective dimensionality reduction and feature extraction techniques. This study introduces Multi-objective Manifold Representation for Opinion Mining (MOMR), a novel approach that combines deep global and local manifold feature extraction to reduce dimensionality while efficiently capturing intricate data patterns. In addition, a self-attention mechanism enhances MOMR's ability to focus on the relevant parts of a text, further improving performance on sentiment analysis tasks. MOMR was evaluated against established techniques such as Long Short-Term Memory (LSTM), Naive Bayes (NB), Support Vector Machines (SVM), Recurrent Neural Networks (RNN), and Convolutional Neural Networks (CNN), as well as recent state-of-the-art models, on multiple datasets: IMDB, Fake News, Twitter, and Yelp. On the IMDB dataset, MOMR achieved an accuracy of 99.7% and an F1 score of 99.6%, outperforming methods such as LSTM, NB, SMSR, and various SVM and CNN models. On the Twitter dataset, MOMR attained an accuracy of 88.0% and an F1 score of 88.0%, surpassing models including LSTM, CNN, BiLSTM, Bi-GRU, NB, and RNN. On the Fake News dataset, MOMR demonstrated superior performance with an accuracy of 97.0% and an F1 score of 97.6%, compared to techniques such as RF, RNN, BiLSTM+CNN, and NB. On the Yelp dataset, MOMR achieved an accuracy of 80.0% and an F1 score of 80.0%, outperforming models such as Bidirectional Encoder Representations from Transformers (BERT), the aspect-sentence graph convolutional neural network (ASGCN), a multi-layer neural network, LSTM, and the bidirectional recurrent convolutional neural network with attention (BRCAN). Together, these comparisons underscore MOMR's efficacy across diverse datasets and highlight its applicability to real-world sentiment analysis.
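The abstract does not specify MOMR's implementation, but the two core ingredients it names, dimensionality reduction over text features and a self-attention mechanism, can be sketched in a minimal, hypothetical form. The sketch below uses scaled dot-product self-attention over token embeddings and plain PCA as a stand-in for the deep global and local manifold extraction described above; all function names, shapes, and data here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token embeddings X of shape
    (n_tokens, d). Illustrative only: Q, K, V all use the identity projection."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                       # pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ X                                  # attended token vectors

def pca_reduce(X, k):
    """Linear dimensionality reduction via PCA -- a simple stand-in for the
    manifold-based feature extraction the abstract describes."""
    Xc = X - X.mean(axis=0)                             # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal directions
    return Xc @ Vt[:k].T                                # project to k dims

# Hypothetical input: 12 tokens with 16-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 16))
attended = self_attention(tokens)       # (12, 16) context-aware token vectors
reduced = pca_reduce(attended, 4)       # (12, 4) low-dimensional features
doc_vec = reduced.mean(axis=0)          # (4,) pooled document representation
```

In a full system, `doc_vec` would feed a downstream classifier that predicts the sentiment label; the attention step lets each token's representation weight the parts of the text most relevant to it before reduction.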