HealthDay In-Depth
The AI Revolution: For Patients, Promise and Challenges Ahead

HealthDay News
By Anne Harding
HealthDay Reporter
Updated: June 10, 2020

WEDNESDAY, June 10, 2020 (HealthDay News) -- Streaks of color swirl through a pulsing, black-and-white image of a patient's heart. They represent blood, and they're color-coded based on speed: turquoise and green for the fastest flow, yellow and red for the slowest.

This real-time video, which can be rotated and viewed from any angle, allows doctors to spot problems like a leaky heart valve or a failing surgical repair with unprecedented speed. And artificial intelligence (AI) imaging technology made it possible.

"It's quite simple, it's like a video game," said Dr. Albert Hsiao, an associate professor of radiology at the University of California, San Diego, who developed the technology while a medical resident at Stanford University.

There's a lot going on behind the scenes to support this simplicity: each 10-minute scan produces 2 to 10 gigabytes of data. To handle such huge, complicated data sets, Hsiao and his colleagues at Arterys, the company he helped found in 2012 to develop the technology, decided to build the infrastructure in the cloud, where researchers anywhere can access it.

And now, investigators around the world are using this cloud-based infrastructure to share and test medical AI imaging models in the Arterys Marketplace. "We've made it almost as easy to get medical AI online as to upload a YouTube video," said Arterys product strategy manager Christian Ulstrup.

Arterys decided to open up its $50 million platform to all comers -- a move that raised eyebrows in the competitive world of health care and medicine -- because the company realized that the full potential of the technology to transform medicine couldn't be realized without collaboration from others, Ulstrup explained.

"There are all these brilliant researchers, startup founders and individual developers who are working with machine learning models with the data they find online," Ulstrup explained. "The thing that's really heartbreaking is most of these models that could be used to meet unmet clinical needs end up dying on hard drives. We're just trying to connect these people who don't really have a communication channel."

Artificial intelligence -- basically, computer programs or machines that can learn -- has the potential to open up access to health care, improve health care quality and even reduce costs, but it also carries real risks. AI tools have to be "trained" with huge quantities of high-quality data, and to be useful they have to be robust enough to work in any setting. And using AI that is trained on biased data could harm patients.

AI as 'double-edged sword'

"It's very important that we start looking at the unconscious biases in the data to make sure that we don't hardwire discriminatory recommendations," said Dr. Kevin Johnson, chair of biomedical informatics at Vanderbilt University Medical Center in Nashville. He prefers the term "augmented intelligence" to "artificial intelligence," since AI aims to extend the abilities of clinicians, not to steal their jobs.

One key application of AI in health care will be to identify patients who are at risk of poor outcomes, but such predictions are worse than useless if doctors don't know how to prevent those outcomes, or the resources aren't available to help patients, Johnson added. "We don't have the work force who plays the role of the catcher's mitt" and can step in to help these at-risk patients, Johnson noted, especially in a health care system now stretched to the limit by the coronavirus pandemic.

"I think we have to think creatively about how we restructure the system to support some of the outcomes that are of interest to us," he said.

Dr. Ravi Parikh is an instructor in medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia. He pointed out that "AI and machine learning are sort of a double-edged sword, particularly in my field of oncology."

AI has proven its potential for interpreting images -- for example, diagnosing lung cancer from a CT scan. But when it comes to using AI to support clinical decisions, such as whether one patient should have chemo or another should go to the hospital, there's a risk that it won't help patients or could even harm them, Parikh noted.

"Even though you might have an AI that's accurate on the whole, if it's mischaracterizing an outcome for a specific group of patients you really have to question whether it's worth it," he said.

What's been missing in the development of health care AI, Parikh added, are rigorous prospective studies to determine whether the technology actually helps patients.

Just as the U.S. Food and Drug Administration requires that drug companies run clinical trials to confirm that their product is safe and effective, Parikh said, the FDA should start requiring makers of AI tools to test their safety and effectiveness in humans.

And just as the agency tracks the safety of drugs once they reach the market, the FDA should set up frameworks to study whether the AI algorithms it approves are enforcing existing biases, he noted.

"We really need to start focusing on that as these things are making their way to the clinic," Parikh said.