Bonnie Young

"Speak Up!" Speech Tech Addressing COVID-19, w/ Insights from Henry O'Connell, CEO of Canary Speech


Just when it seemed like the spread of COVID-19 was easing, we’re seeing massive spikes in cases all around the country. This has prompted both the private and public sectors to double down on technologies addressing COVID-19. Reducing the spread is important, and a big part of that is maintaining the welfare of our brave healthcare professionals. Keeping them healthy and in a positive mindset will be critical to seeing the pandemic through to the end.

One of the most interesting technology applications I’ve seen is the use of speech to detect stress and anxiety in healthcare professionals. The company leading the charge is Utah-based Canary Speech. Canary applies advanced modeling and machine learning to speech in order to identify conditions such as Alzheimer’s, Parkinson’s, anxiety, and stress. The company holds 6 patents around the world with 6 more pending, and its technology is deployed in 4 languages today, with more rolling out soon.

I got the chance to chat with Henry O’Connell, Founder and CEO of Canary Speech, to discuss these projects and more.

What COVID-19 related projects is Canary working on?

We are working with a number of hospital groups around the country that are treating an influx of COVID-19 patients. Today, we are providing solutions for 2 primary use cases:

1. Monitoring COVID patients after discharge. Hospitals need to be able to track COVID-19 patients after they are discharged to make sure the disease has not returned and the patient’s overall wellbeing is intact. The sheer volume of COVID patients makes traditional phone call monitoring impossible, so Canary’s app is critical in monitoring discharged patients at scale. In the Canary app, discharged patients answer a series of questions, and the app alerts the clinical team to specific individuals who need intervention (a simplified version of this flagging logic is sketched after this list). For context, one of our hospital groups has seen 30,000 COVID-19 patients.

2. Tracking the welfare of clinical staff. Clinical staff have worked extremely hard and put themselves in danger treating COVID-19 patients. The toll of treating COVID patients can manifest in depression, stress, anxiety, and PTSD. Our app monitors clinical staff for these conditions and helps them seek the support they need.
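To make the discharge-monitoring workflow concrete, here is a rough Python sketch of the kind of flagging rule such an app could apply. The questions, scoring weights, and threshold are my own illustrative assumptions, not Canary’s actual logic.

```python
# Hypothetical triage sketch: flag discharged patients whose check-in
# responses suggest they need clinical follow-up. The questions, scoring,
# and threshold are illustrative, not Canary Speech's actual logic.
from dataclasses import dataclass

@dataclass
class CheckIn:
    patient_id: str
    answers: dict[str, int]   # question id -> self-reported severity (0-3)
    speech_risk: float        # 0-1 risk score from a speech model (assumed)

def needs_intervention(check_in: CheckIn, threshold: float = 0.7) -> bool:
    # Combine questionnaire severity with the speech-derived risk score.
    symptom_score = sum(check_in.answers.values()) / (3 * len(check_in.answers))
    combined = 0.5 * symptom_score + 0.5 * check_in.speech_risk
    return combined >= threshold

check_ins = [
    CheckIn("patient-001", {"breathing": 3, "fatigue": 2, "mood": 2}, 0.8),
    CheckIn("patient-002", {"breathing": 0, "fatigue": 1, "mood": 0}, 0.2),
]
flagged = [c.patient_id for c in check_ins if needs_intervention(c)]
print(flagged)   # -> ['patient-001']
```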

How are you collecting and analyzing patient audio samples?

For the diseases we have modeled, we can take a 20-second piece of audio spoken into our app and return an accurate assessment within 3 seconds. We have achieved accuracy comparable to GAD-7 and STAI, the gold-standard assessments for anxiety. As far as I’m aware, we are the only company that can do this.

In terms of what the patient needs to say in the audio sample, they can actually say anything because the biomarkers we measure occur underneath the words they speak. Nevertheless, we usually prompt patients with a question like “talk about the best moment in your day” to get them speaking naturally.
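As a rough illustration of that flow (prompt the speaker, capture a roughly 20-second sample, return a score within seconds), here is a minimal Python sketch. The prompt handling, feature set, file name, and `score_sample` function are hypothetical stand-ins, not Canary’s proprietary model.

```python
# Minimal sketch of the assessment flow described above. The prompt, feature
# set, example file name, and score_sample() are hypothetical stand-ins.
import time
import numpy as np
import librosa

PROMPT = "Talk about the best moment in your day."

def score_sample(features: np.ndarray) -> float:
    # Placeholder scoring: a real system would apply a trained anxiety model.
    return float(1 / (1 + np.exp(-features.mean())))

print(PROMPT)
# "patient_recording.wav" is an assumed example file containing ~20 s of speech.
y, sr = librosa.load("patient_recording.wav", sr=16000, duration=20.0)

start = time.perf_counter()
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)    # stand-in acoustic features
score = score_sample(mfcc)
print(f"anxiety screen: {score:.2f} (returned in {time.perf_counter() - start:.1f}s)")
```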

How are you creating models for each disease and what specific speech biomarkers do you look at?

Our models look at 2,548 different spectral characteristics in speech. Then, we look at the 1st, 2nd, and 3rd order derivatives of each of these. Every 20 milliseconds, we measure all 2,548 markers. We look for characteristics such as “jitter”, “shimmer”, and the rate of change in tonal qualities. As humans, we are accustomed to identifying emotions like stress, alarm, or fear in someone’s voice; our mathematical models are simply able to recognize and categorize these systematically.
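For readers curious what frame-level measurement with derivatives can look like, here is a hedged sketch using the open-source librosa library. The MFCC and spectral-centroid features (and the example file name) are generic stand-ins; Canary’s 2,548 spectral characteristics are proprietary and not reproduced here.

```python
# Sketch of frame-level feature extraction every 20 ms with 1st-3rd order
# derivatives. MFCCs and spectral centroid stand in for Canary's proprietary
# spectral characteristics; "sample_20s.wav" is an assumed example file.
import numpy as np
import librosa

def frame_features(path: str, frame_ms: int = 20) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    hop = sr * frame_ms // 1000                    # one measurement every 20 ms

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, hop_length=hop)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)
    base = np.vstack([mfcc, centroid])             # (n_features, n_frames)

    # 1st, 2nd, and 3rd order derivatives of every base feature.
    deltas = [librosa.feature.delta(base, order=k) for k in (1, 2, 3)]
    return np.vstack([base] + deltas)              # stacked feature matrix

features = frame_features("sample_20s.wav")
print(features.shape)                              # (84, ~1000) for a 20 s clip
```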

Each disease will have different speech biomarkers associated with it, but the process for identifying them is the same. We start with a group of patients we know have the disease and look for similarities across the 2,548 spectral characteristics. Then, we selectively identify the biomarkers we can see are related to that disease. Notably, these biomarkers are very similar across different spoken languages, which is what allows Canary’s technology to be highly accurate across languages.
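Here is a simplified sketch of that kind of biomarker selection, using scikit-learn on synthetic data. The labels, feature counts, selection method, and classifier are illustrative assumptions, not Canary’s actual method.

```python
# Hypothetical illustration of selecting disease-related speech biomarkers
# from a large feature set, using synthetic data and scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2548))     # 200 speakers x 2,548 spectral characteristics
y = rng.integers(0, 2, size=200)     # 1 = known diagnosis, 0 = control (synthetic)

model = Pipeline([
    # Keep only the characteristics that best separate the diagnosed group.
    ("select", SelectKBest(score_func=f_classif, k=100)),
    # Score new speakers using just those selected biomarkers.
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.predict_proba(X[:1]))    # probability of the condition for one speaker
```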

Where does the company’s name “Canary” come from?

The name “Canary” was inspired by the phrase “canary in a coal mine”, which refers to an early indicator of danger. It comes from the practice of coal miners bringing caged birds down into the mines: if dangerous gases were present, the gas would affect the bird first, warning the miners to flee. Our goal is for Canary Speech’s solutions to be an effective early indicator of all types of diseases.

About the Founder

Henry O'Connell has over 20 years of executive and C-level experience. Following graduate school, he began his career at the National Institutes of Health in a neurological disease group and went on to a successful business career specializing in turnaround situations in the tech industry. He has served on several boards in both the private and public sectors, and his experience spans the globe: he has managed companies in North America, Europe, and Asia.

Fundraising

Canary has raised about $4 million in funding to date. The company is currently raising a $6 million round, with $3 million already committed and $3 million still open. Interested investors can reach out to Henry at henry@canaryspeech.com.


About the Author

Bonnie Young runs the Amplified blog. She shares her insights on market trends from the US to Asia and interviews founders who are shaking up the tech scene. Please reach out to her at bonnieyoung@berkeley.edu with questions and feedback. She is currently looking for a growth equity or VC role in the Bay Area.
