Tiny sensors allow an ALS patient to communicate

A year and a half ago, a neurosurgeon at Stanford University in California placed two tiny sensors, each about the size of a small aspirin pill, in the brain of Pat Bennett, now 68, a former human resources manager and daily jogger.

In 2012, she was diagnosed with amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease), a progressive neurodegenerative disorder that attacks the nerve cells controlling movement, causing physical weakness, paralysis, and eventually death.

The sensors are components of an intracortical brain-computer interface (iBCI). Implanted in two separate brain regions – both involved in speech production – and paired with state-of-the-art decoding software, they are designed to translate the brain activity associated with attempts to speak into words on a screen.

About a month after the surgery, the Stanford scientists began twice-weekly research sessions to train the software that interprets her attempted speech. Four months later, Bennett’s attempts at articulation were being converted into words on a computer screen at a rate of 62 words per minute – three times faster than the previous record for brain-computer-assisted communication.

Dr Jaimie Henderson, the surgeon who performed the implants, said: ‘These initial results have validated the concept, and eventually the technology will catch up to make it more accessible to people who cannot speak easily. Our brains remember how to articulate words even if the muscles responsible for pronouncing them out loud are powerless. The brain-computer interface makes the dream of restoring speech a reality.’

The devices transmit signals from two speech-related regions of Bennett’s brain to state-of-the-art software that decodes her brain activity and converts it into text displayed on a computer screen.

ALS took away Pat Bennett’s ability to speak. (Credit: Steve Fish)

How do the symptoms of amyotrophic lateral sclerosis begin to appear?

Usually, amyotrophic lateral sclerosis (ALS) first appears on the periphery of the body — the arms, legs, hands, and fingers. For Bennett, the deterioration did not start in her spinal cord, as is usual, but in her brainstem. She can still move around, dress herself, and use her fingers to write, albeit with increasing difficulty. But she can no longer use the muscles of her lips, tongue, throat, and jaw to clearly articulate the phonemes — sound units like sh — that are the building blocks of speech. And although her brain can still formulate the directions to generate those phonemes, her muscles cannot carry out the commands.

“We’ve shown that you can decode intended speech by recording activity from a very small area on the surface of the brain,” said Henderson, who co-authored the paper describing the findings, published in Nature in 2023.

Dr. Frank Willett, a Howard Hughes Medical Institute scientist in the Neural Prosthetics Lab, which Henderson helped found in 2009 with electrical engineering professor Krishna Shenoy, who died before the study was published, shares lead authorship of the study with graduate students Erin Kunz and Chaofei Fan.

In 2021, Henderson, Shenoy, and Willett co-authored a study, published in the journal Nature, describing their success in converting a paralyzed person’s imagined handwriting into on-screen text using an iBCI, reaching a speed of 90 characters, or 18 words, per minute – at the time a world record for an iBCI-related methodology.

In 2021, Bennett learned of Henderson and Shenoy’s work and volunteered to participate in the clinical trial. The sensors that Henderson implanted in Bennett’s cerebral cortex — the outermost layer of the brain — are square arrays of tiny silicon electrodes. The electrodes penetrate the cerebral cortex to a depth roughly equal to that of two stacked quarters. The arrays are attached to fine gold wires that exit through pedestals fastened to the skull, which are then connected by cable to a computer.

An AI algorithm takes in the electronic information from Bennett’s brain and decodes it, eventually teaching itself to distinguish the distinct brain activity associated with her attempts to form each of the 39 phonemes that make up spoken English. It feeds its best guess about the sequence of phonemes Bennett attempted into a sophisticated autocorrect system that converts the phoneme stream into the sequence of words it represents.
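To make that two-stage idea concrete, here is a minimal, purely illustrative sketch in Python: a toy linear classifier stands in for the real neural-network decoder, an eight-phoneme subset stands in for all 39 English phonemes, and a greedy dictionary lookup stands in for the language-model-style autocorrect. Every name and number in it is an assumption for illustration, not the team’s actual software.

```python
import numpy as np

PHONEMES = ["HH", "EH", "L", "OW", "W", "ER", "D", "SIL"]  # toy subset of the 39 phonemes

def decode_phoneme_stream(neural_windows, weights):
    """Map each time window of multi-electrode activity to its most likely phoneme.

    neural_windows: (n_windows, n_features) array of binned neural features.
    weights:        (n_features, n_phonemes) toy linear classifier standing in
                    for the recurrent network used in the real system.
    """
    scores = neural_windows @ weights              # (n_windows, n_phonemes)
    return [PHONEMES[i] for i in scores.argmax(axis=1)]

def autocorrect_to_words(phonemes, lexicon):
    """Greedily match runs of phonemes against a pronunciation lexicon.

    A real language model scores many candidate word sequences; this greedy
    longest-match lookup only illustrates the phonemes-to-words step.
    """
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):      # try the longest match first
            word = lexicon.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:
            i += 1                                 # skip unmatched phonemes (e.g. silence)
    return words

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_activity = rng.normal(size=(7, 16))       # 7 time windows, 16 neural features
    fake_weights = rng.normal(size=(16, len(PHONEMES)))
    lexicon = {("HH", "EH", "L", "OW"): "hello", ("W", "ER", "D"): "word"}
    stream = decode_phoneme_stream(fake_activity, fake_weights)
    print(stream, "->", autocorrect_to_words(stream, lexicon))
```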

To teach the algorithm to recognize the patterns of brain activity associated with phonemes, Bennett participated in 25 four-hour training sessions during which she tried to repeat sentences randomly selected from a large dataset of samples of telephone conversations. As she attempted to read each sentence, Bennett’s brain activity — translated by the decoder into a phoneme stream and then assembled into words by the autocorrect system — was displayed on the screen below the original text, after which a new sentence appeared. She repeated between 260 and 480 sentences in each training session, and the system continued to improve as it learned Bennett’s brain activity during her attempts to speak.
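Very roughly, that session-by-session improvement can be pictured as refitting the decoder on an ever-growing pool of labeled speech attempts. The sketch below assumes a toy least-squares classifier and randomly generated stand-in data; it illustrates why accumulating sessions helps, not how the real training pipeline works.

```python
import numpy as np

def refit_decoder(features, labels, n_classes):
    """Refit a toy linear decoder by least squares on one-hot phoneme labels."""
    one_hot = np.eye(n_classes)[labels]                      # (n_samples, n_classes)
    weights, *_ = np.linalg.lstsq(features, one_hot, rcond=None)
    return weights                                           # (n_features, n_classes)

def run_training_sessions(sessions, n_classes):
    """Accumulate (features, labels) pairs across sessions and refit after each one."""
    all_x, all_y = [], []
    weights = None
    for session_x, session_y in sessions:
        all_x.append(session_x)
        all_y.append(session_y)
        x, y = np.vstack(all_x), np.concatenate(all_y)
        weights = refit_decoder(x, y, n_classes)
        accuracy = ((x @ weights).argmax(axis=1) == y).mean()
        print(f"after {len(all_x)} session(s): training accuracy {accuracy:.2f}")
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-in for three sessions' worth of attempted-sentence data (260-480 each).
    fake_session = lambda n: (rng.normal(size=(n, 16)), rng.integers(0, 39, size=n))
    run_training_sessions([fake_session(300), fake_session(400), fake_session(480)], n_classes=39)
```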

When the vocabulary was expanded to 125,000 words – large enough to compose almost anything a person might want to say – the word error rate was 23.8%: far from perfect, but a giant step beyond the previous state of the art.
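For context, a word error rate of this kind is normally computed as the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the one the speaker intended, divided by the number of intended words. The short sketch below shows that standard calculation; the example sentences are invented.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between intended and decoded text, per intended word."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# One wrong word and one dropped word out of six intended words: about 33% error.
print(word_error_rate("i would like some water please",
                      "i would like sun water"))
```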

The device described in this study is licensed for investigational use only and is not yet commercially available.
