
Groundbreaking A.I. can synthesize speech based on a person’s brain activity

Scientists from the University of California, San Francisco have demonstrated a way to use artificial intelligence to turn brain signals into spoken words. The approach could one day allow people who cannot speak or otherwise communicate to talk with those around them.

The work began with researchers studying five volunteers with severe epilepsy. These volunteers had electrodes temporarily placed on the surface of their brains in order to locate the part of the brain responsible for triggering seizures. As part of this work, the team was also able to study the way that the brain responds when a person is speaking. This included analyzing the brain signals that translate into movements of the vocal tract, which includes the jaw, larynx, lips, and tongue. An artificial neural network was then used to decode this intentionality, which was in turn used to generate understandable synthesized speech.

While still at a relatively early stage, the hope is that this work will open up some exciting possibilities. A future step will involve carrying out clinical trials to test the technology on patients who are physically unable to speak (which was not the case in this demonstration). It will also be necessary to develop a Food and Drug Administration-approved electrode device with the kind of high channel capacity (256 channels in this latest study) required to capture the necessary level of brain activity.

This isn’t the first time we’ve covered impressive brain-computer interfaces at Digital Trends. In 2017, researchers from Carnegie Mellon University developed technology that used A.I. machine learning algorithms to read complex thoughts based on brain scans, including interpreting complete sentences in some cases.

A similar project, carried out by researchers in Japan, was able to analyze fMRI brain scans and generate a written description of what that person was viewing — such as “a dog is sitting on the floor in front of an open door” or “a group of people standing on the beach.” As this technology matures, more and more examples of similarly groundbreaking work will no doubt emerge.

A paper describing UC San Francisco’s recent work, titled “Speech Synthesis From Neural Decoding of Spoken Sentences,” was recently published in the journal Nature.

