Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/5943
DC Field | Value | Language
dc.contributor.author | Nooraini Yusoff | en_US
dc.contributor.author | Onasoga Olukayode Ayodele | en_US
dc.contributor.author | Nor Hazlyna Harun | en_US
dc.date.accessioned | 2024-01-29T04:18:46Z | -
dc.date.available | 2024-01-29T04:18:46Z | -
dc.date.issued | 2023 | -
dc.identifier.issn | 2773-5540 | -
dc.identifier.uri | http://hdl.handle.net/123456789/5943 | -
dc.description | Others | en_US
dc.description.abstract | Multimodal sentiment analysis (MSA) is one of the core research topics of natural language processing (NLP). MSA has become a challenge for scholars and is equally complicated for a machine to comprehend. MSA involves learning opinions, emotions, and attitudes from audio-visual content; in other words, it is necessary to use such diverse modalities to obtain opinions and identify emotions. This can be achieved via modality data fusion, such as feature fusion. A typical machine learning approach for handling the fusion of such diverse modalities while maintaining high performance is deep learning (DL), particularly the Convolutional Neural Network (CNN), which has the capacity to handle tasks of great intricacy and difficulty. In this paper, we present a CNN architecture with an integrated layer built via fuzzy methodologies for MSA, an approach yet to be explored for improving the accuracy of CNNs on diverse inputs. Experiments conducted on a benchmark multimodal dataset, MOSI, obtained 37.5% and 81% accuracy on seven-class and binary classification respectively, an improvement over the typical CNN, which achieved 28.9% and 78%, respectively. | en_US
dc.language.iso | en | en_US
dc.publisher | UTHM Press | en_US
dc.relation.ispartof | Emerging Advances in Integrated Technology (EmAIT) | en_US
dc.subject | Fuzzy | en_US
dc.subject | deep learning | en_US
dc.subject | CNN | en_US
dc.subject | MSA | en_US
dc.subject | fusion | en_US
dc.title | Fuzzy Layered Convolution Neutral Network for Feature Level Fusion Based On Multimodal Sentiment Classification | en_US
dc.type | National | en_US
dc.description.page | 65-78 | en_US
dc.volume | 3(2) | en_US
dc.description.articleno | 2 | en_US
dc.description.type | Article | en_US
item.fulltext | With Fulltext | -
item.openairetype | National | -
item.languageiso639-1 | en | -
item.grantfulltext | open | -
crisitem.author.dept | Universiti Malaysia Kelantan | -
crisitem.author.orcid | 0000-0003-2703-2531 | -
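The abstract above describes feature-level fusion of diverse modality features followed by a CNN layer integrated via fuzzy methodologies. The record does not specify the paper's actual layer design, so the following is only a rough illustrative sketch: concatenation-based feature fusion and a Gaussian fuzzy membership mapping, with all function names, shapes, and parameters being hypothetical.

```python
import numpy as np

def fuse_features(text_feat, audio_feat, visual_feat):
    """Feature-level fusion: concatenate per-modality feature vectors
    into one fused representation (a common, simple fusion scheme)."""
    return np.concatenate([text_feat, audio_feat, visual_feat])

def fuzzy_membership_layer(x, centers, sigmas):
    """Map each fused feature through k Gaussian fuzzy membership
    functions, producing a (d, k) grid of membership degrees in (0, 1]."""
    diff = x[:, None] - centers[None, :]          # (d, k) pairwise offsets
    return np.exp(-0.5 * (diff / sigmas[None, :]) ** 2)

# Toy example with made-up feature dimensions per modality
rng = np.random.default_rng(0)
fused = fuse_features(rng.standard_normal(4),     # text features
                      rng.standard_normal(3),     # audio features
                      rng.standard_normal(2))     # visual features
memberships = fuzzy_membership_layer(
    fused,
    centers=np.array([-1.0, 0.0, 1.0]),           # fuzzy set centers
    sigmas=np.array([1.0, 1.0, 1.0]))             # fuzzy set widths
print(memberships.shape)                          # (9, 3)
```

In a full model, such a membership grid would feed into subsequent convolutional layers; the published architecture should be consulted for the actual design.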
Appears in Collections:Journal Indexed Era/Google Scholar and Others - FSDK
Files in This Item:
File | Description | Size | Format
EmAIT_2023_UTHM_Fuzzy.pdf | | 1.6 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.