Multimodal Machine Learning for an Efficient Information Retrieval Step into Next Generation Computing

dc.contributor.guide: Tiwari, Shailendra and Pannu, H S
dc.creator.researcher: Saklani, Avantika
dc.date.accessioned: 2024-11-26T10:16:19Z
dc.date.available: 2024-11-26T10:16:19Z
dc.date.awarded: 2024
dc.date.completed: 2024
dc.description.abstract: Living creatures perceive the external environment, including their own bodies, through sensory information, or modalities, such as vision, touch, and hearing. Because the environment is so rich, a single modality rarely provides complete knowledge about any phenomenon of interest; when several senses are engaged in processing information, understanding improves. The growing availability of multiple modalities in the same space provides new degrees of freedom for fusing them. Modality fusion is the process of combining features from different sources to obtain complementary information from each. This dissertation focuses on information fusion of multimodal data to provide high accuracy, scalability, and enhanced performance across various tasks. In this research work we integrate the visual and linguistic modalities to build improved decision-making machine learning models, proposing three different frameworks for multimodal classification. The primary focus is to develop robust frameworks that use deep learning architectures to enhance multimodal classification accuracy and efficiency. The first proposed work addresses the challenge of effectively fusing features to improve food classification accuracy. The proposed model is evaluated on the UPMC Food-101 dataset and a newly created Bharatiya Food dataset. It involves feature extraction using a fine-tuned Inception-v4 for the visual component and RoBERTa for the related text, followed by early-stage fusion to integrate these features effectively. The second proposed work introduces the Deep Attentive Multimodal Fusion Network (DAMFN), an improvement over the previous multimodal food classification model. In this model, two significant improvements have been made: an updated feature extraction model for the visual component, and an increase in the size of the newly developed dataset. The model
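As a rough illustration of the early-stage fusion the abstract describes, the sketch below concatenates a visual feature vector with a text feature vector into one joint representation before classification. This is not the dissertation's actual code; the feature dimensions (1536 for Inception-v4 pooled features, 768 for a RoBERTa embedding) and the function name `early_fusion` are illustrative assumptions.

```python
import numpy as np

def early_fusion(visual_feat: np.ndarray, text_feat: np.ndarray) -> np.ndarray:
    """Early-stage fusion: concatenate per-modality feature vectors
    into a single joint representation fed to a downstream classifier."""
    return np.concatenate([visual_feat, text_feat], axis=-1)

# Illustrative dimensions only (not taken from the thesis):
visual = np.random.rand(1536)  # e.g. pooled features from a fine-tuned Inception-v4
text = np.random.rand(768)     # e.g. a RoBERTa sentence embedding
fused = early_fusion(visual, text)
print(fused.shape)  # (2304,)
```

The fused vector would then be passed to a classification head; later-stage or attentive fusion schemes (as in DAMFN) combine the modalities with learned weights instead of plain concatenation.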
dc.format.accompanyingmaterial: None
dc.format.extent: xiv, 139p.
dc.identifier.uri: http://hdl.handle.net/10603/602993
dc.language: English
dc.publisher.institution: Department of Computer Science and Engineering
dc.publisher.place: Patiala
dc.publisher.university: Thapar Institute of Engineering and Technology
dc.rights: university
dc.source.university: University
dc.subject.keyword: Computer Science
dc.subject.keyword: Computer Science Information Systems
dc.subject.keyword: Engineering and Technology
dc.subject.keyword: Information retrieval
dc.subject.keyword: Machine learning
dc.title: Multimodal Machine Learning for an Efficient Information Retrieval Step into Next Generation Computing
dc.type.degree: Ph.D.

Files

Original bundle (showing 1 - 5 of 13):

- 01_title.pdf (125.48 KB, Adobe Portable Document Format)
- 02_prelimpages.pdf (592.41 KB, Adobe Portable Document Format)
- 03_content.pdf (63.67 KB, Adobe Portable Document Format)
- 04_abstract.pdf (75.86 KB, Adobe Portable Document Format)
- 05_chapter 1.pdf (2.65 MB, Adobe Portable Document Format)

License bundle (showing 1 - 1 of 1):

- license.txt (1.79 KB, Plain Text)