Medtech Insight is part of Pharma Intelligence UK Limited


‘Biggest Transformation In The History Of Medicine’: Oncologist And Cardiologist Talk AI-Enabled Health Care

Executive Summary

At the recent Precision Med TRI-CON conference, health care leaders and AI enthusiasts Eric Topol and Doug Flora discussed how AI integration will change the way doctors practice medicine and patients receive care.

An AI-enabled, personalized health care future is coming, though challenges in provider uptake and regulatory oversight remain obstacles to overcome.

Already, St. Elizabeth Healthcare in Kentucky is using third-party software called Eon Patient Management (EPM) to augment radiologists’ reading of scans – including about 1,000 lung cancer screens a month – enhancing ability to detect cancers at earlier stages.

“That includes cancers that would have been incidentally found on scans and not as readily identified as the lung masses we were looking for,” explained oncologist Douglas Flora, who is also the executive medical director of oncology services at St. Elizabeth, in an interview with Medtech Insight.

Denver, CO-based Eon says its platform uses superior computational linguistics models to capture incidental findings with up to 98.3% accuracy and 98.1% precision. The system “tracks and predicts patient follow-up and arms providers with intelligence that saves lives and increases new hospital revenue,” according to the company’s website.

St. Elizabeth is investigating avenues for deploying other AI-driven tools in radiology to improve productivity and accuracy, including in mammography.   

“We are screening 88% of women who will never get breast cancer, and we know that definitively,” said Flora in a panel discussion with Kevin Davies, executive editor of The CRISPR Journal, during the recent annual Precision Med TRI-CON meeting in San Diego. “We just don’t know who the 88% of women are of 100 that we’re screening inappropriately.”

More refined approaches incorporating polygenic risk scores should help with this. “I think the tools that we’re putting in our center will help us drive those innovations more quickly,” Flora said.

Also at Precision Med TRI-CON, Eric Topol, founder and director of the Scripps Research Translational Institute, said in a fireside chat with Mara Aspinall, partner at Illumina Ventures, that he’s been urging doctors to adopt alternative screening approaches using polygenic risk scores along with AI tools to calculate a patient’s risk for disease.

A recent study in Finland showed that using polygenic risk scores along with family history can help predict which women would benefit the most from breast cancer screening.

A renowned cardiologist who has written bestselling books on the future of medicine, Topol believes AI should be combined with a wide range of personalized data – electronic health records, lab tests and images, the gut microbiome, and an “organ clock” that evaluates the age of individual organs – to anticipate disease, including neurological diseases with longer timescales such as Alzheimer’s.

AI could be applied at multiple stages in the health care journey, from helping with diagnosis to planning treatments and predicting outcomes. By layering fields of highly personalized data, it would allow oncologists, for instance, to provide targeted therapy and immunotherapy rather than standardized treatments that may or may not work for each individual.

Flora is similarly confident in AI’s health care future. In the next five years as AI models become more robust and reliable, he expects they will be integrated in everyday practice across numerous specialties to assist doctors.  (Also see "Identifying Optimal Use Cases For Generative AI In Health Care: DHIS West Panelists Share Views" - Medtech Insight, 13 Feb, 2024.)

Currently, AI is most prominent in radiology – which has hundreds of regulatory-approved programs for pattern recognition – and pathology, with cardiology and oncology close behind.

He told the audience at the conference, “We can now assess gender from a retinal scan or an EKG, we can do gestational sweeps of the uterus and identify in low-resource settings genders and potential development of disabilities.”

Flora also expects hospitals to deploy AI increasingly to gauge risk of hospital readmissions or emergency department visits.

“The next generation of these tools will more likely even help doctors make medical diagnoses and treatment decisions,” he said.

Within 10 years, Flora predicted, AI will lead to major breakthroughs in understanding cancer. As AI models are better trained and begin learning on their own, they will unravel the genetic and molecular interactions that underlie cancer. He envisions a future in which a newborn baby’s heel may be pricked to stratify disease risk based on genomics.

Doctors’ Reservations Persist

In Topol’s view, “AI will be the biggest transformation in the history of medicine.”

There are challenges to be overcome, however, starting with resistance in the professional health care community. Doctors are wary of adopting AI in their workflows and decision-making processes, with concerns ranging from liability risks to their outright replacement by machines.

A self-proclaimed machine-learning enthusiast, Flora agreed that health care has lagged behind other industries when it comes to embracing AI and other emerging technologies.

But there are compelling cases to be made for AI.

Flora noted that St. Elizabeth serves a community with high rates of obesity, as well as alcohol and tobacco use.

“We’re at the center of the universe for lung cancer and lung cancer screening,” he said. “About 25 to 30% of the patient population has small cell lung cancer.”

Thanks to medical advances and new cancer drugs, cancer patients have the potential to live much longer today compared with the past. However, providing care to the aging population will grow increasingly difficult due to shortages of oncologists and other physicians. AI represents a promising solution, and providers are beginning to come around, according to Flora.

“My personal journey talking about this topic around the country the last couple of years has been to try and be a bridge between the folks who are designing these tools and the folks who have no idea they exist,” Flora said at the conference. “I've tried to introduce the concept of generative AI to other clinicians, and they've been suspicious and then ultimately sort of worn down and responsive, because they need these tools as much as the rest of us.”

He emphasized that using generative AI tools like ChatGPT to compose emails and dispense with other secondary tasks has already reduced his administrative burden by two to three hours a day.

One area where ChatGPT has made a big difference for Flora is in navigating the appeals process with insurers. Rather than waiting for an often time-consuming peer-to-peer authorization review on the phone, Flora now has his AI assistant compose letters with citations and references to get tests approved – an approach that has proven highly successful.


“Now most of my work that can be automated is automated, and I’m trying to broaden that to a wider audience,” he said.

Automated Clinical Notes Will Become Norm, With Coaching Options

In 2024, St. Elizabeth Healthcare will pilot Nuance Communications’ Dragon Ambient eXperience (DAX) Copilot automated clinical documentation tool. Flora is confident that DAX will spare doctors from getting bogged down in administrative tasks by listening in and recording patient visits, with the patient’s consent, and creating clinical notes that doctors can review afterward.

“Once they [doctors] recognize that they've already been dictating into Nuance software, it's not so daunting to tell them that this Nuance software is like the last one, except it's going to hear everything you say and not include the things that you didn't want in your doctor's note,” Flora said.

Topol identified other advantages of digitized notes, which will lead to automation of appointment scheduling, lab work and tests, prescriptions, pre-authorizations, and more. He believes such systems will be adopted rapidly due to widely reported reductions in data entry and clerical burdens.

“This doesn’t require FDA approval, so it’s going to happen fast,” he said. He predicts that in the next year or two, it will be the norm.  (Also see "In Five Years, People Will Navigate Their Health Care With An AI Advisor – Verily’s Andrew Trister" - Medtech Insight, 11 Mar, 2024.)

Not only can AI assistants help to streamline doctors’ workloads, they also have the potential to provide feedback and coaching. 

For example, Topol said, “These notes will tell the doctor, why did you interrupt Mrs. Jones after nine seconds?” They may also identify points in a patient visit where a doctor could have been more attentive or empathetic.

Flora noted studies showing that chatbots can generate more empathetic responses to patient input than doctors do, which he agreed is a problem.

“We [doctors] should do better, and doctors need to do better,” said Flora, who, as a two-time cancer survivor, has spent considerable time on the patient side of the equation. He also lost his mother to breast cancer at a young age, which inspired him to become a physician. 

Regulatory Challenges

Lawmakers and regulators continue to grapple with the issue of AI regulation.

AI tools such as ChatGPT are known to pose risks and ethical quandaries for health care stakeholders, with a known capacity to hallucinate, put data privacy and intellectual property rights at peril, magnify biases in health care data, and reduce transparency in medical decision making.  (Also see "Generative AI Providing Behavioral Health Solutions, But Not Therapy … Yet" - Medtech Insight, 12 Oct, 2023.)

A landmark study published in the medical journal Science, for instance, found that a widely used algorithm for predicting health care needs systematically scored Black patients as healthier than equally sick white patients.

“These aren’t necessarily devices, and we typically don’t regulate software, so it’s a whole paradigm for regulatory agencies to figure out who is ultimately responsible and how do we police this,” Flora told Medtech Insight. “We don’t know how to police things that are not transparent, that may not be reliable yet.”

Topol further noted, “We can’t expect machines to never make mistakes, but clearly there is a lot of work to show that the net-benefit is extraordinary.”

In the US, AI policy is developing on numerous fronts. President Joe Biden issued an Executive Order in October 2023 to promote development of new standards for AI safety and security to protect Americans’ privacy.

Regulators like the US Food and Drug Administration are making moves in the way of rulemaking and guidance, but currently lack resources needed to effectively regulate AI. FDA Commissioner Robert Califf stated at a conference in January that with AI/ML, “the algorithm’s not only living, but the assessment of the algorithm needs to be continuous,” which would require the FDA to double its size to keep pace.  (Also see "Creating Standards For Responsible AI In Health Care" - Medtech Insight, 26 Mar, 2024.)

Meanwhile there continues to be a steady drip of AI solutions, such as chatbots, into health care settings.

“I would like to make sure that physicians and providers are at the table when these things are discussed,” Flora said. “I think transparency is paramount in terms of importance so we can understand how these models were trained, to make sure that we understand there is no inherent bias, to make sure there’s no drift of the outputs of the systems as they self-train.”

He added, “You can’t hold the computer responsible for a medical error, so it has to be the physician that is the final arbiter of good medicine, and we are the primary protector of the patient, so we should take that charge very seriously, no matter how technology races forward.”

