611 W. Union Street
Benson, AZ 85602
(520) 586-0800

member support line
M-F 5pm-8pm
24/7 weekends/holidays

AzCH Nurse Assist Line



Mind-Reading Tech Could Bring 'Synthetic Speech' to Brain-Damaged Patients

HealthDay News
By Dennis Thompson
HealthDay Reporter
Updated: Apr 24th 2019


WEDNESDAY, April 24, 2019 (HealthDay News) -- Reading the brain waves that control a person's vocal tract might be the best way to help return a voice to people who've lost their ability to speak, a new study suggests.

A brain-machine interface creates natural-sounding synthetic speech by using brain activity to control a "virtual" vocal tract -- an anatomically detailed computer simulation that reflects the movements of the lips, jaw, tongue and larynx that occur as a person talks.

This interface created artificial speech that could be accurately understood up to 70% of the time, said study co-author Josh Chartier, a bioengineering graduate student at the University of California, San Francisco (UCSF) Weill Institute for Neuroscience.

The participants involved in this proof-of-concept study still had the ability to speak. They were five patients being treated at the UCSF Epilepsy Center who had electrodes temporarily implanted in their brains to map the source of their seizures, in preparation for neurosurgery.

But researchers believe the speech synthesizer ultimately could help people who've lost the ability to talk due to stroke, traumatic brain injury, cancer or neurodegenerative conditions like Parkinson's disease, multiple sclerosis or amyotrophic lateral sclerosis (Lou Gehrig's disease).

"We found that the neural code for vocal movements is partially shared across individuals," said senior researcher Dr. Edward Chang, a professor of neurosurgery at the UCSF School of Medicine. "An artificial vocal tract modeled on one person's voice can be adapted to synthesize speech from another person's brain activity," he explained.

"This means that a speech decoder that's trained in one person with intact speech could maybe someday act as a starting point for someone who has a speech disability, who could then learn to control the simulated vocal tract using their own brain activity," Chang said.

Reading brain waves to create 'synthetic' speech

Current speech synthesis technology requires people to spell out their thoughts letter-by-letter using devices that track very small eye or facial muscle movements, a laborious and error-prone method, the researchers said.

Directly reading brain activity could produce more accurate synthetic speech more quickly, but researchers have struggled to extract speech sounds from the brain, Chang noted.

So, Chang and his colleagues came up with a different approach -- creating speech by focusing on the signals that the brain sends out to control the various parts of the vocal tract.

For this study, the researchers asked the five epilepsy patients to read several hundred sentences aloud while readings were taken from a brain region in the frontal cortex known to be involved in language production, Chartier said.

Sample sentences included "Is this seesaw safe," "Bob bandaged both wounds with the skill of a doctor," "Those thieves stole thirty jewels," and "Get a calico cat to keep the rodents away."

According to co-researcher Gopala Anumanchipalli, "These are sentences that are particularly geared towards covering all of the articulatory phonetic contexts of the English language." Anumanchipalli is a UCSF School of Medicine speech scientist.

Audio recordings of the participants' voices were used to reverse-engineer the vocal tract movements required to make those sounds, including lip movements, vocal cord tightening, tongue manipulation and jaw movement.

Using that information, the investigators created a computer-based vocal tract for each participant that could be controlled by their brain activity. An algorithm transformed brain patterns produced during speech into movements of the "virtual" vocal tract, and then a synthesizer converted those movements into synthetic speech.
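The two-stage decoding described above can be sketched in code. In this illustrative sketch, simple linear maps stand in for the recurrent neural networks the study actually used, and all dimensions and variable names are made up for demonstration; only the overall structure (brain activity to articulator movements to acoustic features) reflects the approach reported here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the study's): neural features per time step,
# articulatory kinematic features (lips, jaw, tongue, larynx), and acoustic
# features handed to a speech synthesizer.
N_NEURAL, N_ARTIC, N_ACOUSTIC, T = 256, 33, 32, 100

# Stage 1: map recorded brain activity to "virtual vocal tract" movements.
# Stage 2: map those movements to acoustic features for synthesis.
# Linear maps are placeholders for the trained recurrent networks.
brain_to_artic = rng.normal(size=(N_NEURAL, N_ARTIC))
artic_to_sound = rng.normal(size=(N_ARTIC, N_ACOUSTIC))

def decode_speech(neural_activity: np.ndarray) -> np.ndarray:
    """Two-stage decode: neural activity -> articulators -> acoustics."""
    articulators = neural_activity @ brain_to_artic   # virtual vocal tract
    acoustics = articulators @ artic_to_sound         # synthesizer input
    return acoustics

neural = rng.normal(size=(T, N_NEURAL))  # T time steps of neural features
features = decode_speech(neural)
print(features.shape)
```

The key design point is the intermediate articulatory representation: rather than decoding sound directly from brain activity, the decoder first recovers the vocal-tract movements that the brain is actually controlling.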

Listeners understood words up to 70% of the time

Researchers then tested whether the synthetic speech could be understood by asking hundreds of human listeners to write down what they thought they heard.

The transcribers understood the sentences more successfully when given shorter lists of words to choose from. For example, when choosing from lists of 25 alternatives, they correctly identified 69% of synthesized words, and 43% of sentences were transcribed with perfect accuracy.

But when given lists of 50 words to choose from, they correctly identified only 47% of words and understood just 21% of synthesized sentences.
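The evaluation above amounts to computing word-level and sentence-level accuracy over listener transcripts. A minimal scorer, using two made-up transcripts rather than the study's data, could look like this:

```python
def word_accuracy(reference: str, transcript: str) -> float:
    """Fraction of reference words the listener got right, position by position."""
    ref, hyp = reference.lower().split(), transcript.lower().split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / len(ref)

# Hypothetical listener transcripts of two synthesized sentences.
pairs = [
    ("those thieves stole thirty jewels", "those thieves stole thirty jewels"),
    ("is this seesaw safe", "is this seesaw same"),
]

word_acc = sum(word_accuracy(ref, hyp) for ref, hyp in pairs) / len(pairs)
sentence_acc = sum(ref == hyp for ref, hyp in pairs) / len(pairs)
print(f"word accuracy: {word_acc:.0%}, sentence accuracy: {sentence_acc:.0%}")
```

Sentence accuracy is the stricter measure, since a single misheard word fails the whole sentence, which is why the study's sentence-level figures run well below its word-level figures.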

Chartier pointed out that the researchers "still have a ways to go to perfectly mimic spoken language. We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy."

Still, he added, "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

The research team is working to extend these findings, using more advanced electrode arrays and computer algorithms to further improve brain-generated synthetic speech.

The next major test will be to determine whether someone who has lost the ability to speak could learn to use the system, even though it's been geared to work with another person who can still talk, the researchers said.

The new study was published April 24 in the journal Nature.

More information

The U.S. National Institutes of Health has more about human speech.