Research Repository

All Outputs (2)

Direct Speech Reconstruction from Articulatory Sensor Data by Machine Learning (2017)
Journal Article
Gonzalez, J. A., Cheah, L. A., Gomez, A. M., Green, P. D., Gilbert, J. M., Ell, S. R., Moore, R. K., & Holdsworth, E. (2017). Direct Speech Reconstruction from Articulatory Sensor Data by Machine Learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(12), 2362-2374. https://doi.org/10.1109/TASLP.2017.2757263

© 2014 IEEE. This paper describes a technique that generates speech acoustics from articulator movements. Our motivation is to help people who can no longer speak following laryngectomy, a procedure that is carried out tens of thousands of times per...

A silent speech system based on permanent magnet articulography and direct synthesis (2016)
Journal Article
Gonzalez, J. A., Cheah, L. A., Gilbert, J. M., Bai, J., Ell, S. R., Green, P. D., & Moore, R. K. (2016). A silent speech system based on permanent magnet articulography and direct synthesis. Computer Speech & Language, 39, 67-87. https://doi.org/10.1016/j.csl.2016.02.002

In this paper we present a silent speech interface (SSI) system aimed at restoring speech communication for individuals who have lost their voice due to laryngectomy or diseases affecting the vocal folds. In the proposed system, articulatory data cap...