
Canary Speech’s Voice AI Can Help Detect Alzheimer’s With 40-Second Conversation

Executive Summary

AI voice tech start-up Canary Speech is developing algorithmic models that detect behavioral health conditions and progressive neurological and cognitive diseases based on a 40-second recording of speech. Medtech Insight caught up with Canary CEO Henry O’Connell at DHIS West to learn more about the company and its plans.

Just as canaries once were used in coal mines to detect life-threatening levels of toxic gases before they sickened miners, Canary Speech is using vocal biomarkers to identify “previously silent health conditions” earlier and more precisely.

Voice AI is a small but growing field. What sets Canary apart from competing start-ups is the breadth of its platform, with clinically supported applications in behavioral health, progressive neurological diseases and cognitive diseases, the Lehi, Utah-based company says.

Canary's patented language processing and machine learning technology analyzes acoustic and linguistic features to model and screen for diseases ranging from depression, anxiety and schizophrenia to respiratory conditions, Alzheimer’s and other neurodegenerative diseases.

The platform facilitates proactive care through earlier detection and remote monitoring of patients for clinical deterioration.

Henry O’Connell, CEO of Canary Speech (Source: Canary Speech)

At the recent DHIS West conference in San Diego, Canary CEO and co-founder Henry O’Connell showed Medtech Insight how the Canary Speech app translates speech patterns, inflections and other data captured in a 40-second audio clip into a score reflecting vocal stress, mood and other qualities, which can augment standardized clinical assessments like GAD-7 for anxiety and PHQ-8 for depression.  (Also see "SVB’s Milo Bissin Predicts ‘Reckoning In 2024’ For Health Care Companies With Sky-High Valuations" - Medtech Insight, 20 Feb, 2024.)

Canary’s voice AI also is being used to help detect early-stage Alzheimer’s, Parkinson’s disease, Huntington’s disease, and post-traumatic stress. This year, the company plans to expand its models into multiple sclerosis and pulmonary tract diseases, and next year plans to add autism, ADHD (attention-deficit/hyperactivity disorder) and asthma to its disease targets.

O’Connell noted that Canary’s models can analyze speech in Spanish, Japanese, American English, Irish English and UK English, with German-language models currently being built out as well.

“You can measure multiple things during that [40-second] conversation,” he said. “That provides information about the disease state that can help the neurologist understand, ‘Do I need to make a referral to the psychiatrist for that patient?’”

Amazon Alexa Builder Brings Expertise

Jeff Adams, Canary’s co-founder and chief speech officer, led the team that built Amazon Alexa’s speech technology and Amazon Echo. From 2001 to 2009, Adams was also the director of language modeling R&D at software solutions company Nuance Communications, which provides AI and speech recognition software. Nuance was bought by Microsoft Corporation in 2022 for about $16bn, and Microsoft is now a partner of Canary.

“Canary’s approach is different than Nuance’s approach, but they augment each other,” O’Connell said. “We could provide in the same space of time additional information that can be used objectively to evaluate patients.”

O’Connell said he and Adams have been friends for decades. They first met in Bethesda, Maryland roughly 40 years ago, when O’Connell was a research fellow at the National Institutes of Health studying rare neurological diseases. It wasn’t until 2015 that the time seemed ripe to start a company together.

During one of their early conversations, O’Connell recalled, Adams described how the use of natural language processing [NLP] to analyze disease was being spearheaded by leading academic institutions like MIT and Carnegie Mellon, but had yet to produce a commercially available product.

“NLP produces about 150 data words a minute, and then spaces and gaps and syllables – you might get up to 500, 600, 800 data points,” O’Connell said. Using AI, Canary’s mathematical models analyze 15 million data points every minute. “We generally take a 40-second sample of conversational speech – it’s about 12.5 million data points. We actually analyze that 40 seconds every 20 milliseconds and then slide 10 milliseconds and we extract 2,548 different data elements.”
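Those figures follow from the windowing scheme O’Connell describes: a 20-millisecond analysis window advanced in 10-millisecond steps yields roughly 100 frames per second of speech, so a 40-second clip produces about 4,000 frames. Below is a minimal sketch of that kind of sliding-window extraction in Python with NumPy; the energy and zero-crossing-rate features are simple stand-ins chosen purely for illustration, since Canary’s actual 2,548 data elements are proprietary and undisclosed.

```python
import numpy as np

def sliding_window_features(audio, sample_rate=16_000, window_ms=20, hop_ms=10):
    """Frame a clip with a 20 ms window and 10 ms hop, extracting per-frame features.

    Canary reportedly extracts 2,548 data elements per frame; the energy and
    zero-crossing rate computed here are simple illustrative stand-ins.
    """
    window = sample_rate * window_ms // 1000   # 320 samples at 16 kHz
    hop = sample_rate * hop_ms // 1000         # 160 samples at 16 kHz
    frames = []
    for start in range(0, len(audio) - window + 1, hop):
        chunk = audio[start:start + window]
        energy = float(np.mean(chunk ** 2))                        # frame energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(chunk)))) / 2)  # zero-crossing rate
        frames.append((energy, zcr))
    return np.array(frames)

# A 40-second clip framed this way yields ~4,000 frames; at 2,548 elements per
# frame that is on the order of 10 million data points per sample, the same
# order of magnitude as the figures O'Connell cites.
clip = np.random.default_rng(0).standard_normal(40 * 16_000)  # stand-in for real audio
print(sliding_window_features(clip).shape)  # (3999, 2)
```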

Working with research partners, such as neurologists at Harvard University, Canary obtains audio from patients who have been diagnosed with a particular disease, correlates the 2,548 data elements with that diagnosis and, from there, builds an algorithm with an accuracy of between 80% and 98%, he explained.
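In conventional supervised-learning terms, that workflow amounts to pooling per-recording feature vectors, pairing each with a clinician-confirmed label, and fitting a classifier whose held-out accuracy can then be reported. The sketch below shows the shape of such a pipeline using scikit-learn; the synthetic data, 2,548-element vectors and logistic-regression model are illustrative assumptions, as Canary has not disclosed its model architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one 2,548-element feature vector per recording
# (e.g., per-frame features pooled over the clip), labeled with a
# clinician-confirmed diagnosis. Random data stands in for patient audio.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2548))    # 500 recordings x 2,548 features
y = rng.integers(0, 2, size=500)    # 1 = diagnosed, 0 = control

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Canary's model family is undisclosed; a regularized linear classifier
# is used here only as a placeholder.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```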

O’Connell stressed that Canary’s technology is backed by rigorous clinical studies. “We function within the clinical setting,” he said. “We are a partner in that [clinical] setting, we are fully data-secure, and we are part of the informed consent.”

Canary is working with clients that include major US health insurers Cigna Corp. and UnitedHealth Group’s Optum, as well as leaders in other industries such as Microsoft, telecommunications company Nippon Telegraph and Telephone, and Japanese conglomerate Hitachi. Under its software-as-a-service business model, Canary charges clients between $1 and $3 per speech assessment, he said.

Canary also is working with three unnamed pharmaceutical companies to validate potential therapies for Alzheimer’s and Parkinson’s disease. O’Connell explained that by looking at speech patterns over time, the technology can identify how well drugs are working in patients during clinical trials.

“The biomarker information provides them [pharmaceutical companies] with another objective piece of data to validate the efficacy of the drug,” he explained. “We look at the patient after the administration of the drug – are their symptoms improving?” Speech analysis can also be used to pinpoint precisely when symptoms improve, which can help with managing dosage rates.

On 6 February, Canary announced its latest collaboration, with pharmaceutical company Halia Therapeutics, to provide voice data and insights to enhance clinical development of the latter’s HT-4253, an Alzheimer’s drug that targets a component of neuroinflammation known to play a key role in the progression of the disease.

“Canary AI-driven speech technology will provide invaluable data to help us tailor our therapies more effectively and improve outcomes for patients with this debilitating neurodegenerative disease,” said Halia CEO David Bearss.

Canary conducted an initial trial with a large US health insurer in which it analyzed speech recordings from phone calls with 651 policyholders with early-stage Alzheimer’s and 1,018 people without the condition. The recordings were used to create a first diagnostic model for the disease, which identified Alzheimer’s with 96% accuracy.

Growing Body Of Research

Researchers worldwide are exploring the potential for voice to predict the onset of Alzheimer’s disease. 

In November 2023, the Alzheimer’s Drug Discovery Foundation’s (ADDF) Diagnostic Accelerator launched its first longitudinal international study to create the largest repository of speech and voice data to accelerate the detection, diagnosis and monitoring of Alzheimer’s disease. The three-year study will include clinical sites in the US, Spain, and Australia collecting data from 2,650 participants, who will be given handheld tablets with SpeechDx pre-installed to capture voice data, according to the ADDF.

A research study led by Ihab Hajjar, a professor at the University of Texas Southwestern Medical Center, suggests that advanced machine learning and natural language processing tools hold promise for diagnosing Alzheimer’s disease before symptoms begin to show. Findings were published in the Alzheimer’s Association publication Diagnosis, Assessment & Disease Monitoring last February.

The study assessed speech patterns in 206 people – 114 who met the criteria for mild cognitive impairment and 92 who were unimpaired. Researchers mapped the findings to commonly used biomarkers to determine how effective they were in measuring impairment.

Study participants, who were enrolled in a research program at Emory University in Atlanta, GA, were given several standard cognitive assessments before being asked to record a 1-to-2-minute description of artwork. Researchers compared the speech analytics with cerebrospinal fluid samples and MRI scans to see how accurate the voice biomarkers were in detecting both mild cognitive impairment and Alzheimer’s disease status and progression.

Hajjar said in a press release that, if confirmed with larger studies, the use of AI could provide primary care providers with an easy-to-perform screening tool for at-risk individuals and possibly allow for earlier intervention.

Competitive Scene

Other start-ups working on AI voice technology include Ellipsis Health, headquartered in San Francisco, CA, which developed a vocal stress test with Cigna that is administered through a mobile app. According to an article published in New Scientist last December, however, a new study suggests that Ellipsis’ test produces inconsistent results when given to the same person twice.

Boston, MA-based Sonde Health uses vocal biomarkers to assess patient health, wellness and fitness, and Newton, MA-based Vocalis Health, Inc. uses vocal biomarkers to identify the likelihood of hospitalizations for various conditions, as well as to detect respiratory diseases such as asthma, chronic obstructive pulmonary disease, and COVID-19. (Also see "Kintsugi’s AI Software Analyzes Voice For Signs Of Depression, Anxiety" - Medtech Insight, 5 Apr, 2023.)

In 2023, Canary had about $1m in revenues, which O’Connell expects to grow to $5m-$8m in 2024, driven by strategic partnerships. In November 2023, Canary signed a deal giving SMK Corp. exclusive rights to commercialize and distribute Canary’s speech-based algorithm for analyzing brain-related diseases in Japan and the rest of Asia. Later this year, Canary plans to introduce a HIPAA-compliant consumer app that will be marketed with an undisclosed partner.

To date, Canary has raised $16m in funding – including a strategic investment by Hackensack Meridian Health and its Bear’s Den innovation accelerator program in June 2023 – and is looking to raise $10m in a Series A preferred round.
