
Special Session on Silent Speech Interfaces

Interspeech 2009 – Call for Papers for the Special Session on "Silent Speech Interfaces"

Organizer

Bruce Denby (Université Pierre et Marie Curie / ESPCI-ParisTech (CNRS)), denby@ieee.org

Tanja Schultz (Cognitive Systems Lab, University of Karlsruhe), tanja@ira.uka.de

Scope of the Special Session

Interspeech 2009 will feature a special session on the topic of “Silent Speech Interfaces”. A Silent Speech Interface (SSI) is an electronic system enabling speech communication to take place without the necessity of emitting an audible acoustic signal. By acquiring sensor data from elements of the human speech production process – from the articulators, their neural pathways, or the brain itself – an SSI produces a digital representation of speech which can be synthesized directly, interpreted as data, or routed into a communications network. Due to this novel approach, Silent Speech Interfaces have the potential to overcome the major limitations of traditional speech interfaces today, i.e. (a) limited robustness in the presence of ambient noise; (b) lack of secure transmission of private and confidential information; and (c) disturbance of bystanders created by audibly spoken speech in quiet environments; while at the same time retaining speech as the most natural human communication modality.

To date, SSI experiments based on seven different types of technology have been described in the literature: (1) Capture of the movement of fixed points on the articulators using Electromagnetic Articulography (EMA) sensors, (2) Real-time characterization of the vocal tract using ultrasound (US) and optical imaging of the tongue and lips, (3) Digital transformation of signals from a Non-Audible Murmur (NAM) microphone (a type of stethoscopic microphone), (4) Analysis of glottal activity using electromagnetic or vibration sensors, (5) Surface electromyography (sEMG) of the articulator muscles or the larynx, (6) Interpretation of signals from electro-encephalographic (EEG) sensors, and (7) Interpretation of signals from implants in the speech-motor cortex. Developing working Silent Speech Interface solutions will require improved methodologies which are robust to sensor positioning and signal conditioning issues, environmental factors, speaker variability, etc. Furthermore, the devices developed will have to be more user-friendly, less intrusive, and completely portable.

The special session intends to bring together researchers in the areas of human articulation, speech and language technologies, data acquisition and signal processing, as well as human interface design, software engineering and systems integration. Its goal is to promote the exchange of ideas on current SSI challenges and to discuss solutions, by highlighting, for each of the technological approaches presented, its range of applications, key advantages, potential drawbacks, and current state of development. The session is thus an ideal forum for participants of different backgrounds who share a common interest in Silent Speech Interface technology.

Papers and Presentation Form

We invite contributors to submit 4-page papers on the subject of silent speech interfaces and related topics. When you submit your paper using the electronic submission system of Interspeech, please tick the box indicating that you would like your paper to be included in the special session “Silent Speech Interfaces”. All papers will follow the normal paper review process. The presentations will be oral. We plan to reserve about 20 minutes to allow selected contributors to demonstrate their system prototypes on stage. If you are interested in such a demonstration, please indicate this via email to the session chairs.

Important Deadlines

4-page Full Paper Submission: 17 April 2009

Notification of Paper Acceptance: 17 June 2009

Indicate your interest to demonstrate a system (via email to the chairs): 25 June 2009