Abstract
Most current Brain-Computer Interfaces (BCIs) achieve high information transfer rates using spelling paradigms based on stimulus-evoked potentials. Despite the success of these interfaces, this mode of communication can be cumbersome and unnatural. Direct synthesis of speech from neural activity represents a more natural mode of communication that would enable users to convey verbal messages in real time. In this pilot study with one participant, we demonstrate that electrocorticographic (ECoG) intracranial activity from temporal areas can be used to resynthesize speech in real time. This is accomplished by reconstructing the audio magnitude spectrogram from neural activity and subsequently creating the audio waveform from these reconstructed spectrograms. We show that significant correlations between the original and reconstructed spectrograms and temporal waveforms can be achieved. While this pilot study uses audibly spoken speech for the models, it represents a first step towards speech synthesis from speech imagery.
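The paper itself does not include code; the sketch below is only a rough illustration of the pipeline the abstract describes (decode spectrogram magnitudes from neural features, measure correlations against the original, then synthesize a waveform). The frame-wise linear-regression decoder, the Griffin-Lim phase reconstruction, and all array shapes are assumptions made for illustration, not the authors' actual method.

```python
import numpy as np
import librosa
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Hypothetical dimensions -- the paper does not specify these.
n_frames   = 2000   # time frames
n_channels = 64     # ECoG electrodes (assumed)
n_freqbins = 65     # spectrogram magnitude bins (65 bins <-> n_fft = 128)
rng = np.random.default_rng(0)

# Stand-ins for aligned training data: neural features per frame (e.g. high-gamma
# power per electrode) and the magnitude spectrogram of the spoken audio.
neural_feats = rng.standard_normal((n_frames, n_channels))
target_spec  = np.abs(rng.standard_normal((n_frames, n_freqbins)))

# 1) Fit a frame-wise linear mapping from neural features to spectrogram
#    magnitudes (one possible decoding model; the paper's model may differ).
split = int(0.8 * n_frames)
model = LinearRegression().fit(neural_feats[:split], target_spec[:split])
reconstructed = model.predict(neural_feats[split:])
reconstructed = np.clip(reconstructed, 0.0, None)   # magnitudes are non-negative

# 2) Evaluate reconstruction quality: Pearson correlation per frequency bin
#    between the original and reconstructed spectrograms, as in the abstract.
corrs = [pearsonr(target_spec[split:, k], reconstructed[:, k])[0]
         for k in range(n_freqbins)]
print(f"mean spectral-bin correlation: {np.mean(corrs):.3f}")

# 3) Convert the reconstructed magnitude spectrogram back to a waveform.
#    Griffin-Lim phase estimation is one common choice for this step
#    (the paper may use a different synthesis approach).
mag = reconstructed.T                       # librosa expects (freq_bins, frames)
waveform = librosa.griffinlim(mag, n_iter=32)
print("synthesized waveform length:", waveform.shape[0], "samples")
```

In a real-time setting, the decoding and synthesis steps would operate on a short sliding window of incoming frames rather than on the full recording at once.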
| Field | Value |
|---|---|
| Original language | English (US) |
| Title of host publication | 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1540-1543 |
| Number of pages | 4 |
| Volume | 2016-October |
| ISBN (Electronic) | 9781457702204 |
| DOIs | |
| State | Published - Oct 13 2016 |
| Event | 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016, Orlando, United States. Duration: Aug 16 2016 → Aug 20 2016 |
ASJC Scopus subject areas
- Signal Processing
- Biomedical Engineering
- Computer Vision and Pattern Recognition
- Health Informatics