Investigations on biosignal sensors based assistive technology for speech recognition
Abstract
Speech communication involves the transmission of messages or information through spoken language, encompassing both verbal and nonverbal components such as tone, gestures, and facial expressions. Proficient speech communication requires skills such as listening, speaking, interpreting, and appropriately responding to verbal cues, and plays a vital role in interpersonal, professional, and public interactions. Technological advancements have integrated speech recognition and synthesis systems into various applications, enabling human-machine interaction through spoken language. However, conventional acoustic-based interfaces remain poorly prepared to serve individuals with speech impairments, and this lack of preparedness exacerbates the stigma surrounding this group. Silent Speech Recognition (SSR) presents an alternative to conventional acoustic-based speech interfaces.
Silent Speech Recognition is a technology that aims to interpret and understand speech without requiring the speaker to vocalize the words aloud. Instead, SSR systems typically rely on capturing and analyzing subtle physiological signals associated with speech production and neural activity. Despite progress, the effectiveness of SSR systems faces numerous obstacles, including the requirements for high-quality data collection, transfer learning, customization, adaptation, noise reduction, and signal enhancement. Addressing these issues, the proposed research presents four key contributions: a hybrid feature extraction technique employing the Integrated Stacking Classifier, a cross-subject analysis hybrid model combining Long Short-Term Memory (LSTM) and Graph Attention Network (GAT), a few-shot domain adaptation model named Supervised Domain Adaptation-Convolutional Neural Network (SDA-CNN), and a Multi-Source Marginal Domain Adaptation (MSMDA) model specifically designed to enhance SSR systems.