Towards direct speech synthesis from ECoG

A pilot study

Christian Herff, Garett Johnson, Lorenz Diener, Jerry Shih, Dean Krusienski, Tanja Schultz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Most current Brain-Computer Interfaces (BCIs) achieve high information transfer rates using spelling paradigms based on stimulus-evoked potentials. Despite the success of these interfaces, this mode of communication can be cumbersome and unnatural. Direct synthesis of speech from neural activity represents a more natural mode of communication that would enable users to convey verbal messages in real time. In this pilot study with one participant, we demonstrate that intracranial electrocorticography (ECoG) activity from temporal areas can be used to resynthesize speech in real time. This is accomplished by reconstructing the audio magnitude spectrogram from neural activity and subsequently creating the audio waveform from these reconstructed spectrograms. We show that significant correlations between the original and reconstructed spectrograms and temporal waveforms can be achieved. While this pilot study uses audibly spoken speech for the models, it represents a first step towards speech synthesis from speech imagery.
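
The abstract's pipeline (reconstruct the audio magnitude spectrogram from neural activity, generate a waveform from the reconstruction, and evaluate with correlations) can be illustrated with a short Python sketch. This is not the paper's implementation: the linear regression, the Griffin-Lim inversion, the STFT parameters, and names such as ecog_features and audio_spec are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch of the two-stage approach described in the abstract:
# (1) map ECoG features to an audio magnitude spectrogram, frame by frame;
# (2) invert the reconstructed spectrogram to a waveform.
# Assumptions (not from the paper): ecog_features has shape
# (n_frames, n_channels), audio_spec has shape (n_frames, n_fft // 2 + 1),
# both frame-aligned; a plain linear regression and Griffin-Lim inversion
# stand in for whatever models and synthesis method the authors used.
import numpy as np
import librosa
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression


def fit_reconstruction_model(ecog_features, audio_spec):
    """Fit a frame-wise linear mapping from ECoG features to spectral magnitudes."""
    model = LinearRegression()
    model.fit(ecog_features, audio_spec)
    return model


def synthesize(model, ecog_features, n_fft=512, hop_length=256):
    """Predict a magnitude spectrogram and invert it to audio via Griffin-Lim."""
    # Clip negative predictions: magnitudes must be non-negative.
    spec_pred = np.clip(model.predict(ecog_features), 0.0, None)
    # librosa expects (n_freq_bins, n_frames), hence the transpose.
    waveform = librosa.griffinlim(spec_pred.T, n_fft=n_fft, hop_length=hop_length)
    return spec_pred, waveform


def spectrogram_correlation(spec_true, spec_pred):
    """Mean Pearson correlation across frequency bins, a simple quality measure."""
    return np.mean([pearsonr(spec_true[:, b], spec_pred[:, b])[0]
                    for b in range(spec_true.shape[1])])
```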

Original language: English (US)
Title of host publication: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1540-1543
Number of pages: 4
Volume: 2016-October
ISBN (Electronic): 9781457702204
DOIs: https://doi.org/10.1109/EMBC.2016.7591004
State: Published - Oct 13 2016
Event: 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016 - Orlando, United States
Duration: Aug 16 2016 - Aug 20 2016

Other

Other: 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016
Country: United States
City: Orlando
Period: 8/16/16 - 8/20/16

Fingerprint

  • Speech synthesis
  • Brain computer interface
  • Communication
  • Bioelectric potentials
  • Brain-Computer Interfaces
  • Imagery (Psychotherapy)
  • Evoked Potentials

ASJC Scopus subject areas

  • Signal Processing
  • Biomedical Engineering
  • Computer Vision and Pattern Recognition
  • Health Informatics

Cite this

Herff, C., Johnson, G., Diener, L., Shih, J., Krusienski, D., & Schultz, T. (2016). Towards direct speech synthesis from ECoG: A pilot study. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016 (Vol. 2016-October, pp. 1540-1543). [7591004] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/EMBC.2016.7591004

@inproceedings{f3f6977725c548c7bcedf78dcfdc0280,
title = "Towards direct speech synthesis from ECoG: A pilot study",
abstract = "Most current Brain-Computer Interfaces (BCIs) achieve high information transfer rates using spelling paradigms based on stimulus-evoked potentials. Despite the success of these interfaces, this mode of communication can be cumbersome and unnatural. Direct synthesis of speech from neural activity represents a more natural mode of communication that would enable users to convey verbal messages in real time. In this pilot study with one participant, we demonstrate that intracranial electrocorticography (ECoG) activity from temporal areas can be used to resynthesize speech in real time. This is accomplished by reconstructing the audio magnitude spectrogram from neural activity and subsequently creating the audio waveform from these reconstructed spectrograms. We show that significant correlations between the original and reconstructed spectrograms and temporal waveforms can be achieved. While this pilot study uses audibly spoken speech for the models, it represents a first step towards speech synthesis from speech imagery.",
author = "Christian Herff and Garett Johnson and Lorenz Diener and Jerry Shih and Dean Krusienski and Tanja Schultz",
year = "2016",
month = "10",
day = "13",
doi = "10.1109/EMBC.2016.7591004",
language = "English (US)",
volume = "2016-October",
pages = "1540--1543",
booktitle = "2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
