Dive Brief:
- Suki, a voice-enabled digital assistant powered by artificial intelligence, is partnering with Google's cloud computing business.
- Redwood City, Calif.-based Suki launched in 2017 and has been used across a variety of medical specialties, including family practice, gynecology, orthopedics and cardiology. The startup focuses on clinical documentation but plans to expand its use cases to data queries, ordering, prescribing and billing.
- The company processes about 3,200 patient interactions weekly, Suki CEO Punit Soni told Healthcare Dive, and expects that figure to triple or even quadruple by January. The tech is deployed at Ascension, OB-GYN practice giant Unified and 60 to 70 small to midsize practices. Soni said two "very, very large deployment announcements" with major U.S. health systems are expected in the next month or so.
Dive Insight:
Voice-enabled technologies that free up physicians' hands and give them more face time with patients are generating serious hype in the tech and healthcare worlds. Suki combines speech recognition and natural language processing from partners like Google with internally developed software.
"This partnership is actually about strengthening that bond and figuring out a way to integrate many of their speech APIs into the product such that it can become even better and smarter," Soni said, speaking on the sidelines of the HLTH conference in Las Vegas this week.
The former Google exec said Suki was "agnostic" in looking for additional partners. As part of the newly inked relationship, Google will also point its partners, such as health systems, toward the product.
Google will integrate Suki into its Cloud Partner Advantage Program, giving the startup access to the cloud computing division's tech and AI services.
"Basically, we intend to take all of the scut work that doctors have to do off their plate," Soni said, "so that they can focus on their practice."
According to a 2016 study funded by the American Medical Association and published in the Annals of Internal Medicine, for every hour of face time with a patient, a doctor spends almost two additional hours on paperwork, and providers spend more than half of their workday on EHR and other computer tasks.
The startup, founded by Soni, is backed by investors including Venrock, First Round and Social Capital. It recently raised a $20 million Series A led by Venrock to advance the technology.
Suki's raise wasn't an outlier: Robin Healthcare, another voice-enabled clinician workflow tool, raised $11.5 million in a Series A round in September, bringing its total funding to $15 million.
The medical voice assistant market is still young but growing, and potentially lucrative. Google researchers have spent more than a year developing speech recognition technology for transcribing doctor-patient conversations and aiding documentation, and Amazon's Alexa is HIPAA-eligible. The e-commerce giant also offers a speech-to-text service under AWS, its cloud business.
Earlier this month, Blue Shield of California announced it was partnering with Notable, an Apple Watch-based platform that uses machine learning to document physician-patient interactions.
Also this month, tech giant Microsoft announced a partnership with clinical documentation company Nuance Communications to accelerate the development of ambient sensing tech, AI software that understands patient-clinician conversations and automatically integrates that data into the patient's medical record.
Suki said it is already integrating with a number of EHR systems, including athenahealth's Marketplace, Epic's App Orchard and Cerner, and plans to add more.