AI-powered neural interface helps a paralysed woman speak through a digital avatar
Researchers have helped a paralysed woman speak using a digital avatar and artificial intelligence algorithms that analyse her brainwaves and convert them into speech.
Here's What We Know
The patient, named Ann, can move her facial muscles but cannot speak. Scientists trained a recurrent neural network to convert her brain signals into 39 distinct phonemes, the sound units that combine to form words.
Ann's brain activity was recorded using a brain-computer interface. Once trained, the model decoded her neural signals into phonemes, which the digital avatar on the screen spoke aloud.
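The article doesn't detail how the network's outputs become phoneme sequences. A common approach for this kind of decoder is CTC-style greedy decoding: the model emits a probability distribution over phoneme classes (plus a "blank") at each time step, and repeated or blank predictions are collapsed. The sketch below is purely illustrative; the phoneme list, probabilities, and function names are assumptions, not the researchers' actual model, which covers 39 phonemes.

```python
# Illustrative sketch of CTC-style greedy phoneme decoding.
# All names and numbers here are hypothetical; the real system
# classifies brain signals into 39 phonemes.

PHONEMES = ["AA", "AE", "B", "K", "T", "_"]  # "_" = blank symbol

def greedy_decode(frame_probs):
    """frame_probs: one probability list over PHONEMES per time frame."""
    # Pick the most likely phoneme at each frame.
    best = [PHONEMES[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
    out, prev = [], None
    for ph in best:
        if ph != prev and ph != "_":  # collapse repeats, drop blanks
            out.append(ph)
        prev = ph
    return out

frames = [
    [0.10, 0.10, 0.60, 0.10, 0.05, 0.05],   # most likely: "B"
    [0.10, 0.10, 0.60, 0.10, 0.05, 0.05],   # repeated "B" (collapsed)
    [0.70, 0.10, 0.10, 0.05, 0.03, 0.02],   # "AA"
    [0.05, 0.05, 0.05, 0.05, 0.10, 0.70],   # blank
    [0.05, 0.05, 0.05, 0.05, 0.70, 0.10],   # "T"
]
print(greedy_decode(frames))  # ['B', 'AA', 'T']
```

The decoded phoneme string can then be mapped to words and fed to the avatar's speech synthesiser.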
Ann can speak through her digital avatar at a rate of 62 words per minute, roughly 40 per cent of the pace of natural conversation. With the system's vocabulary of 125,000 words, the word error rate is 23.8 per cent.
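The 23.8 per cent figure is a word error rate (WER): the word-level edit distance between the decoded text and what the speaker intended, divided by the number of intended words. A minimal sketch of the standard metric (not the researchers' evaluation code):

```python
# Word error rate: edit distance (substitutions, insertions, deletions)
# between hypothesis and reference word sequences, over reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```

A WER of 23.8 per cent means roughly one word in four needs correction, which is usable for open-vocabulary communication but well above the error rates of consumer speech recognition.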
The technology was developed by a team of computer scientists from the University of California, San Francisco and the University of California, Berkeley. They hope the research will lead to a regulatory-approved device so that paralysed people can express themselves.
Source: The Register