

Mind-Reading Tech Could Bring 'Synthetic Speech' to Brain-Damaged Patients

HealthDay News
By Dennis Thompson
HealthDay Reporter
Updated: Apr 24th 2019

WEDNESDAY, April 24, 2019 (HealthDay News) -- Reading the brain waves that control a person's vocal tract might be the best way to help return a voice to people who've lost their ability to speak, a new study suggests.

A brain-machine interface creates natural-sounding synthetic speech by using brain activity to control a "virtual" vocal tract -- an anatomically detailed computer simulation that reflects the movements of the lips, jaw, tongue and larynx that occur as a person talks.

This interface created artificial speech that could be accurately understood up to 70% of the time, said study co-author Josh Chartier, a bioengineering graduate student at the University of California, San Francisco (UCSF) Weill Institute for Neuroscience.

The participants involved in this proof-of-concept study still had the ability to speak. They were five patients being treated at the UCSF Epilepsy Center who had electrodes temporarily implanted in their brains to map the source of their seizures, in preparation for neurosurgery.

But researchers believe the speech synthesizer ultimately could help people who've lost the ability to talk due to stroke, traumatic brain injury, cancer or neurodegenerative conditions like Parkinson's disease, multiple sclerosis or amyotrophic lateral sclerosis (Lou Gehrig's disease).

"We found that the neural code for vocal movements is partially shared across individuals," said senior researcher Dr. Edward Chang, a professor of neurosurgery at the UCSF School of Medicine. "An artificial vocal tract modeled on one person's voice can be adapted to synthesize speech from another person's brain activity," he explained.

"This means that a speech decoder that's trained in one person with intact speech could maybe someday act as a starting point for someone who has a speech disability, who could then learn to control the simulated vocal tract using their own brain activity," Chang said.

Reading brain waves to create 'synthetic' speech

Current speech synthesis technology requires people to spell out their thoughts letter-by-letter using devices that track very small eye or facial muscle movements, a laborious and error-prone method, the researchers said.

Directly reading brain activity could produce more accurate synthetic speech more quickly, but researchers have struggled to extract speech sounds from the brain, Chang noted.

So, Chang and his colleagues came up with a different approach -- creating speech by focusing on the signals that the brain sends out to control the various parts of the vocal tract.

For this study, the researchers asked the five epilepsy patients to read several hundred sentences aloud while readings were taken from a brain region in the frontal cortex known to be involved in language production, Chartier said.

Sample sentences included "Is this seesaw safe," "Bob bandaged both wounds with the skill of a doctor," "Those thieves stole thirty jewels," and "Get a calico cat to keep the rodents away."

According to co-researcher Gopala Anumanchipalli, "These are sentences that are particularly geared towards covering all of the articulatory phonetic contexts of the English language." Anumanchipalli is a UCSF School of Medicine speech scientist.

Audio recordings of the participants' voices were used to reverse-engineer the vocal tract movements required to make those sounds, including lip movements, vocal cord tightening, tongue manipulation and jaw movement.

Using that information, the investigators created a computer-based vocal tract for each participant that could be controlled by their brain activity. An algorithm transformed brain patterns produced during speech into movements of the "virtual" vocal tract, and then a synthesizer converted those movements into synthetic speech.
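
To make that two-stage design concrete, here is a minimal Python sketch of the data flow. The plain linear maps stand in for the study's actual trained decoders, and the channel and feature counts (electrode channels, articulator features, synthesizer parameters) are illustrative assumptions, not figures from the paper.

    # Minimal sketch of the two-stage pipeline: neural recordings are
    # first mapped to vocal tract movements, and those movements are then
    # mapped to acoustic parameters for a speech synthesizer. The linear
    # maps below are stand-ins for the study's trained decoders, and all
    # dimensions are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    N_CHANNELS = 256      # assumed number of cortical electrode channels
    N_ARTICULATORS = 33   # assumed kinematic features (lips, jaw, tongue, larynx)
    N_ACOUSTIC = 32       # assumed synthesizer parameters per frame

    # Stage 1: brain activity -> vocal tract movements.
    W_kinematics = rng.normal(size=(N_CHANNELS, N_ARTICULATORS))
    # Stage 2: vocal tract movements -> acoustic parameters.
    W_acoustics = rng.normal(size=(N_ARTICULATORS, N_ACOUSTIC))

    def decode_speech(neural_frames):
        """Map neural recordings (frames x channels) to acoustic parameters."""
        articulator_traj = neural_frames @ W_kinematics   # stage 1
        acoustic_params = articulator_traj @ W_acoustics  # stage 2
        return acoustic_params

    # One second of fake neural data at 200 frames -> synthesizer input.
    neural = rng.normal(size=(200, N_CHANNELS))
    print(decode_speech(neural).shape)  # (200, 32)

In the real system each stage would be a trained statistical model rather than a fixed matrix, but the data flow, from neural recordings to articulator trajectories to synthesizer parameters, is the point of the design.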

Listeners understood words up to 70% of the time

Researchers then tested whether the synthetic speech could be understood by asking hundreds of human listeners to write down what they thought they heard.

The transcribers understood the sentences more successfully when given shorter lists of words to choose from. For example, when choosing from lists of 25 alternatives, they correctly identified 69% of the synthesized words and transcribed 43% of the sentences with perfect accuracy.

But when given lists of 50 words to choose from, they correctly identified only 47% of the words and accurately transcribed just 21% of the synthesized sentences.
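
For readers curious how such a closed-vocabulary listening test is scored, the short Python sketch below computes word-level accuracy and the share of sentences transcribed perfectly. The two trials are invented for illustration and are not data from the study.

    # Toy scoring of a closed-vocabulary listening test: each listener picks
    # one word per slot from a fixed list of alternatives, and responses are
    # scored against the reference sentences. Trial data are invented.
    reference = [["is", "this", "seesaw", "safe"],
                 ["those", "thieves", "stole", "thirty", "jewels"]]
    transcribed = [["is", "this", "seesaw", "safe"],
                   ["those", "thieves", "stole", "dirty", "jewels"]]

    total_words = sum(len(sentence) for sentence in reference)
    correct_words = sum(ref_word == hyp_word
                        for ref, hyp in zip(reference, transcribed)
                        for ref_word, hyp_word in zip(ref, hyp))
    perfect_sentences = sum(ref == hyp for ref, hyp in zip(reference, transcribed))

    print(f"word accuracy: {correct_words / total_words:.0%}")         # 89%
    print(f"perfect sentences: {perfect_sentences}/{len(reference)}")  # 1/2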

Chartier pointed out that the researchers "still have a ways to go to perfectly mimic spoken language. We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy."

Still, he added, "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

The research team is working to extend these findings, using more advanced electrode arrays and computer algorithms to further improve brain-generated synthetic speech.

The next major test will be to determine whether someone who has lost the ability to speak could learn to use the system, even though it's been geared to work with another person who can still talk, the researchers said.

The new study was published April 24 in the journal Nature.

More information

The U.S. National Institutes of Health has more about human speech.