Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/5943
Title: Fuzzy Layered Convolutional Neural Network for Feature Level Fusion Based On Multimodal Sentiment Classification
Authors: Nooraini Yusoff 
Onasoga Olukayode Ayodele 
Nor Hazlyna Harun 
Keywords: Fuzzy;deep learning;CNN;MSA;fusion
Issue Date: 2023
Publisher: UTHM Press
Journal: Emerging Advances in Integrated Technology (EmAIT) 
Abstract: 
Multimodal sentiment analysis (MSA) is one of the core research topics in natural language processing (NLP). MSA remains a challenge for scholars and is equally complicated for a machine to comprehend, since it involves learning opinions, emotions, and attitudes from audio-visual content. In other words, it is necessary to draw on such diverse modalities to obtain opinions and identify emotions, and this can be achieved through modality data fusion, such as feature-level fusion. To handle the fusion of such diverse modalities while maintaining high performance, a typical machine learning approach is deep learning (DL), in particular the Convolutional Neural Network (CNN), which can handle tasks of great intricacy and difficulty. In this paper, we present a CNN architecture with an integrated layer built via fuzzy methodologies for MSA, an approach yet to be explored for improving the accuracy of CNNs on diverse inputs. Experiments conducted on a benchmark multimodal dataset, MOSI, yield 37.5% and 81% accuracy on seven (7) class and binary classification respectively, an improvement over the typical CNN, which achieved 28.9% and 78%, respectively.
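To make the feature-level fusion idea concrete, the sketch below shows one possible reading of the abstract: per-modality 1D-CNN feature extractors whose concatenated (feature-level fused) outputs pass through a fuzzy membership layer before classification. The branch dimensions, Gaussian membership functions, and concatenation-based fusion are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumptions noted above), using PyTorch.
import torch
import torch.nn as nn


class FuzzyLayer(nn.Module):
    """Maps each fused feature to learnable Gaussian membership degrees."""

    def __init__(self, in_features: int, n_memberships: int = 3):
        super().__init__()
        # Learnable centres and (log) widths for each (feature, membership) pair.
        self.centers = nn.Parameter(torch.randn(in_features, n_memberships))
        self.log_sigmas = nn.Parameter(torch.zeros(in_features, n_memberships))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> memberships: (batch, in_features * n_memberships)
        diff = x.unsqueeze(-1) - self.centers          # (batch, feat, memb)
        sigma = self.log_sigmas.exp()
        memberships = torch.exp(-0.5 * (diff / sigma) ** 2)
        return memberships.flatten(start_dim=1)


class FuzzyFusionCNN(nn.Module):
    """1D-CNN per modality, feature-level fusion, fuzzy layer, then classifier."""

    def __init__(self, text_dim=300, audio_dim=74, visual_dim=47,
                 conv_channels=32, n_classes=7):
        super().__init__()

        def branch(in_dim):
            return nn.Sequential(
                nn.Conv1d(in_dim, conv_channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),
            )

        self.text_branch = branch(text_dim)
        self.audio_branch = branch(audio_dim)
        self.visual_branch = branch(visual_dim)
        fused_dim = 3 * conv_channels
        self.fuzzy = FuzzyLayer(fused_dim, n_memberships=3)
        self.classifier = nn.Linear(fused_dim * 3, n_classes)

    def forward(self, text, audio, visual):
        # Each input: (batch, seq_len, feature_dim) -> transpose for Conv1d.
        feats = [
            self.text_branch(text.transpose(1, 2)).squeeze(-1),
            self.audio_branch(audio.transpose(1, 2)).squeeze(-1),
            self.visual_branch(visual.transpose(1, 2)).squeeze(-1),
        ]
        fused = torch.cat(feats, dim=1)        # feature-level fusion
        return self.classifier(self.fuzzy(fused))


if __name__ == "__main__":
    model = FuzzyFusionCNN()
    logits = model(torch.randn(4, 20, 300),    # text features (hypothetical dims)
                   torch.randn(4, 20, 74),     # audio features
                   torch.randn(4, 20, 47))     # visual features
    print(logits.shape)                        # torch.Size([4, 7])
```

The fuzzy layer here simply re-expresses each fused feature as a vector of membership degrees, which is one common way to insert fuzzy reasoning into a differentiable pipeline; the paper's exact membership design may differ.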
Description: 
Others
URI: http://hdl.handle.net/123456789/5943
ISSN: 2773-5540
Appears in Collections:Journal Indexed Era/Google Scholar and Others - FSDK

Files in This Item:
File: EmAIT_2023_UTHM_Fuzzy.pdf | Size: 1.6 MB | Format: Adobe PDF