Here are some thoughts on how Silent Speech technology could advance human-computer interaction.
Silent speech, also known as sub-vocal speech, is likely to play a key role in how humans interact with machines in the near future. Such interaction spans voice assistants, text and voice communication, and authentication. Silent speech addresses many of the issues associated with these forms of interaction, including privacy and security concerns and noisy environments, and overcomes them in a natural way.
Front-end features (likely a wearable sensing device)
1. Signal capture (surface EMG sensors)
2. Signal transmission (likely over Bluetooth to a nearby processing device such as a smart phone)
3. Energy harvesting technology (so the device rarely, if ever, needs battery replacement or a complicated recharging procedure)
4. Built using human-safe non-magnetic materials
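To make the first two front-end steps concrete, here is a minimal sketch of how captured EMG samples might be framed and packed into payloads for Bluetooth transmission. The frame size, sequence-number header, and int16 quantisation are all hypothetical choices for illustration, not a real device protocol.

```python
import struct

PACKET_SAMPLES = 64   # hypothetical number of samples per Bluetooth packet
SAMPLE_SCALE = 32767  # int16 full scale

def frame_samples(samples, frame_len=PACKET_SAMPLES):
    """Split a raw EMG sample stream (floats in [-1, 1]) into fixed-size
    frames, dropping any partial tail."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def pack_frame(seq, frame):
    """Pack one frame as a little-endian payload:
    uint16 sequence number followed by int16 samples."""
    clamped = [max(-1.0, min(1.0, s)) for s in frame]       # guard range
    ints = [int(round(s * SAMPLE_SCALE)) for s in clamped]  # quantise
    return struct.pack('<H%dh' % len(ints), seq, *ints)
```

A 64-sample frame packs into a 130-byte payload (2 bytes of header plus 64 two-byte samples), comfortably under typical Bluetooth LE payload limits once the link is negotiated.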
Back-end features (likely a smart phone, with or without cloud support)
1. Signal processing (including the transformation of captured signals into text, and subsequent processing such as text-to-speech)
2. Synthesised voice creation (for use in voice communications, and ideally tailored to the user's own voice)
3. Vocabulary training and learning (automated as part of associating signals with audible speech)
4. Integration with voice assistants and other smart phone features (including messaging and voice communications)
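The signal-processing and vocabulary-training items above can be sketched as a toy classifier: extract simple features from an EMG frame and map it to the nearest word template learned during training. The feature choices (mean absolute value and zero-crossing rate) and the nearest-neighbour lookup are illustrative assumptions; a real system would use far richer features and a learned model.

```python
import math

def features(frame):
    """Toy EMG features for one frame: mean absolute value and
    the fraction of adjacent sample pairs that cross zero."""
    mav = sum(abs(s) for s in frame) / len(frame)
    zc = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return (mav, zc / len(frame))

def classify(frame, vocabulary):
    """Map a frame to the vocabulary word whose stored feature
    template is nearest in Euclidean distance."""
    f = features(frame)
    return min(vocabulary, key=lambda word: math.dist(f, vocabulary[word]))
```

Here the `vocabulary` dict stands in for item 3: templates would be built automatically by associating captured signals with audible speech during training, then used at run time to emit text for text-to-speech or messaging.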
Closing the interaction loop on a Silent Speech HCI is auditory (and optionally visual) capture and feedback. Whilst this can be partly achieved today with headphones and a microphone, or with augmented/mixed-reality headsets (such as Google Glass or Microsoft's HoloLens), in the future implants are also likely to play a key role, replacing such wearables with more natural extensions of these human senses.