
How Brain-Computer Interfaces Could Restore Speech and Help Fight Depression

Already, brain-computer interfaces have helped to control epileptic seizures and decrease tremors in patients with Parkinson’s disease. The next wave will tackle even more complex applications, like restoring speech and regulating mood.

Eddie Chang, a neurosurgeon and brain-computer interface pioneer at the University of California, San Francisco Weill Institute for Neurosciences, is among those at the forefront of that work. In July 2021, he led a groundbreaking study, funded in part by Facebook, that translated the brain signals of a paralyzed man into words on a screen.

The amount of research on brain-computer interfaces, or BCIs, has been growing steadily in recent years, according to PubMed, the federal online database of biomedical research. In 2021, there were almost 600 studies, compared with roughly 340 in 2016. Fueling some of this activity are the availability of more powerful computers, improvements in artificial intelligence, and the further miniaturization of devices that can be implanted in the brain. Investors, entrepreneurs like Elon Musk and tech giants like Meta Platforms Inc. are getting in on the action as advances in hardware and software have made decoding the human brain seem less daunting.

Federal investments in neuroscience have also played a part in the field’s upswing.

Dr. Chang says the aim is to decipher the brain activity that underpins complex human behaviors, like speech and emotions, in an attempt to develop therapies that could help people who can’t speak or who suffer from neuropsychiatric conditions like depression and anxiety.

That promise also comes with some peril, such as the potential of further erosion of privacy as BCIs give direct access to the brain and the processes that underlie thoughts.

Dr. Chang spoke to The Wall Street Journal about where the field is going, his work on restoring speech and improving mood, and whether the experimental worlds of BCI and psychedelics could one day collide.

Why speech?

Speech is really special. [It’s] just one of the unique and defining behaviors we have as a species. And so it’s been quite exciting to be trying to understand how the human brain processes such a unique behavior.

What makes that difficult?

One of the biggest challenges is trying to translate electrical signals. The brain uses [electrical signals] as its own language for communication. That language has its own logic, its own code. And the true challenge is to understand how that code works.

Dr. Chang shows a brain-computer interface. He paired the technology with AI to help paralyzed patients communicate. Photo: Carolyn Fong for The Wall Street Journal

Has technology made that challenge easier?

The signals that we are interpreting—they look nothing like what we’re trying to decode, like words. They look like squiggles on a screen. And the patterns are so complex. A lot of our work now leverages computer-science advances in artificial intelligence and machine learning because they are very, very powerful ways of pattern recognition. So the kind of stuff that you use for Siri or Alexa.
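
As a loose illustration of the pattern-recognition step Dr. Chang describes, here is a minimal sketch that treats decoding as a classification problem: feature vectors derived from electrode recordings are mapped to word labels. The vocabulary, feature dimensions and data below are synthetic placeholders, not material from the study.

```python
# Minimal sketch (not the study's actual pipeline): decoding framed as
# pattern recognition, mapping neural-activity feature vectors to words.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
words = ["yes", "no", "water", "hello"]      # hypothetical 4-word vocabulary
n_trials, n_features = 400, 128              # e.g., electrode-by-time-window features

# Fake "neural activity": each word gets its own mean pattern plus noise.
prototypes = rng.normal(size=(len(words), n_features))
labels = rng.integers(len(words), size=n_trials)
X = prototypes[labels] + 0.5 * rng.normal(size=(n_trials, n_features))

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted word:", words[clf.predict(X_test[:1])[0]])
```

In practice the field leans on far richer models, such as deep networks over time-series signals, but the basic framing of signals in, labels out is the same.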

How do you see the technology evolving?

Ten to 15 years ago, the state of the field was at the level of trying to decode vowels, for example. What has been really incredible over the last five years is how much progress has been made from moving beyond just individual sounds to trying to decode words. And it’s still really at its very beginning. The first project that we published on this last year really focused on a vocabulary of 50 words. Our current efforts are towards two endeavors. One is to expand the vocabulary beyond our [Weill Institute’s] 50-word vocabulary while also having high accuracy.

The second thing that we’re trying to do is move from essentially trying to say a word and having a word appear on the screen to synthesizing words that you can hear. So the goal is for someone who’s paralyzed not just to think of a word and have it appear on a screen for writing things out, but to actually synthesize those sounds. And that turns out to be a very, very difficult and big challenge because of the complexity of producing words audibly [for] a very large vocabulary. For a small number of words, it’s very doable now.

Another area that we’re really excited about is not just hearing words, but actually controlling [an avatar] of the face that is not only speaking, but actually making the movements that you normally see when you’re talking to someone in person. The reason we think that’s important is that it will help the learning process for someone [who’s paralyzed] to essentially feel like the brain-computer interface is part of the way that they speak because it feels more natural.

How is that different from turning text on a screen to words you can hear?

Speech synthesis to us is not just the words themselves, but the nuance of creating the full richness of voice, like intonation and rhythm, in real time. For example, “Sally went to the store.” To change that from a statement to a question, all I’ve done there is increase the pitch of my voice on the last word. It’s the same words, but what that does is that changes the meaning. So we’re trying to tap into that as well.
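
A tiny sketch of the intonation point: the same words, but the fundamental frequency (pitch) assigned to the last word rises for a question and stays flat for a statement. The base pitch and the size of the rise are assumptions made for illustration, not values from the research.

```python
# Toy sketch of intonation: identical words, but the pitch (F0) assigned to
# the final word rises for a question. Base pitch and the 40% rise are
# invented for illustration.
def f0_contour(words, question=False, base_hz=120.0):
    """Return one (word, F0 in Hz) pair per word."""
    contour = [(w, base_hz) for w in words]
    if question:
        last_word, f0 = contour[-1]
        contour[-1] = (last_word, f0 * 1.4)   # raise pitch on the last word
    return contour

sentence = ["Sally", "went", "to", "the", "store"]
print("statement:", f0_contour(sentence))
print("question: ", f0_contour(sentence, question=True))
```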

You’re also working on systems that can handle multiple languages. Can you tell me about that?

Part of what we’re trying to develop is technology that is cued by the patient to switch from one language to another. So the probability of one word following another word in English is very, very different from the word sequences you see in Spanish. As a person switches between two languages, we’re looking at ways that the machine can detect that and continue to communicate in multiple languages. I’m excited about it because it’s just another step in making the technology more usable and more practical for people and allowing them to really express who they are.
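
As a rough sketch of that word-sequence idea, the toy code below scores a decoded word sequence under two tiny bigram “language models” and picks the better fit. The probabilities, smoothing floor and decision rule are invented for illustration; this is not the team’s actual method.

```python
# Toy sketch: decide which language a decoded word sequence fits better by
# comparing its probability under two tiny bigram models. The probabilities
# and the smoothing floor are made up for illustration.
import math

english_bigrams = {("i", "want"): 0.20, ("want", "water"): 0.10, ("i", "am"): 0.15}
spanish_bigrams = {("yo", "quiero"): 0.20, ("quiero", "agua"): 0.12, ("yo", "soy"): 0.15}

def log_prob(words, bigrams, floor=1e-4):
    """Sum of log bigram probabilities, with a small floor for unseen pairs."""
    return sum(math.log(bigrams.get(pair, floor)) for pair in zip(words, words[1:]))

decoded = ["yo", "quiero", "agua"]
scores = {"English": log_prob(decoded, english_bigrams),
          "Spanish": log_prob(decoded, spanish_bigrams)}
print(scores)
print("best guess:", max(scores, key=scores.get))
```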

Does that make it more complicated?

It’s much, much more complex because you don’t want the two to be confused. It’s not like there’s the part of your brain that is Spanish and a different area that’s processing English. For people who are bilingual, it’s coming from the same area. And having a computer figure that out is tricky.

Are there signatures that, to the machine, suggest that a person is trying to speak in one language versus another?

So that’s actually a fascinating question, and I don’t have the answer to that yet. We’re definitely getting into territory of work that’s under way now.

How are you using BCI to help patients with mental-health conditions?

We’re really interested in trying to understand what is going on when someone is processing emotions normally, and what the signals look like in people who have depression and don’t have normal regulation of their mood. Our hope is that by understanding these electrical signaling patterns, we can use them as biomarkers, as ways to understand when and what parts of the brain are involved when someone is having depressive episodes. And then the second thing, which is far more important, is to use that information to intervene and to regulate some of these areas so that someone feels more normal, like they aren’t in incapacitating depression.
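
To make that closed-loop idea concrete, here is a highly simplified sketch: a hypothetical “biomarker” score is computed from each window of neural recordings, and stimulation is triggered when it crosses a threshold. The biomarker, threshold and stimulation call are placeholders invented for illustration; actual clinical systems are far more sophisticated.

```python
# Highly simplified closed-loop sketch: monitor a hypothetical mood
# "biomarker" computed from neural recordings and trigger stimulation when
# it crosses a threshold. Every name and number here is a placeholder.
import numpy as np

THRESHOLD = 0.8                        # hypothetical trigger level

def biomarker_level(recording_window):
    """Placeholder: reduce a window of neural samples to one score."""
    return float(np.mean(np.abs(recording_window)))

def deliver_stimulation():
    """Placeholder for commanding an implanted device to stimulate."""
    print("stimulation pulse delivered")

rng = np.random.default_rng(1)
for second in range(5):                # pretend stream of 1-second windows
    window = rng.normal(size=1000)
    if biomarker_level(window) > THRESHOLD:
        deliver_stimulation()
```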

Do you see a world in which your language and mood projects collide?

I haven’t connected these two projects but I do think that there’s potential. We’re talking about different brain areas and essentially different computer algorithms to understand and decode them. But a lot of the hardware has a lot of overlap. I do think that in the coming decade, there will be a lot more device-based medical therapy, and that we’ll be able to interact with the brain and integrate [neural] recordings from different parts of the brain.


Psychedelics are another class of exciting experimental therapies in mental health. Do you think there will be crossover between that field and yours?

I do have a feeling that those worlds will combine. They’re very complementary. [Psychedelics are] primarily a chemical approach. We’re exploring this complementary signal, which is the electrical signal. The brain uses both heavily and they are related directly. As we start to understand the signaling changes that occur with things like psychedelics or psychiatric medications in general, I think there’s a lot of potential.

Would you get a BCI for yourself, not as treatment, but for augmentation?

I personally wouldn’t. There are so many things that I wish I were better at. But I personally really respect who I am as I am right now.

Is that because of privacy? Why the hesitation?

Part of the hesitation is that there are risks. We’re talking brain surgery. The second thing is that we’re still very much in the learning phase. We’re really far from any kind of scenario where I can imagine it being used for augmentation. There’s also an ethical line. I’m not supportive right now, until we have a much, much better regulatory and ethical framework around this.

This interview has been condensed and edited.

Write to Daniela Hernandez at [email protected]
