This article is from wired.com. Original URL: https://www.wired.com/story/does-your-doctor-need-a-voice-assistant/
“Siri, where is the nearest Starbucks?”
“Alexa, order me an Uber.”
“Suki, let’s get Mr. Jones a two-week run of clarithromycin and schedule him back here for a follow-up in two weeks.”
Doesn’t sound that crazy, does it? For years, voice assistants have been changing the way people shop, get around, and manage their home entertainment systems. Now they’re starting to show up someplace even a little more personal: the doctor’s office. The goal isn’t to replace physicians with sentient speakers. Quite the opposite. Drowning in a sea of e-paperwork, docs are quitting, retiring, and scaling back hours in droves. By helping them spend more time listening to patients and less time typing into electronic health records, voice assistants aim to keep physicians from getting burned out.
It’s a problem that started when doctors switched from handwritten records to electronic ones. Health care organizations have tried more manual fixes—human scribes either in the exam room or outsourced to Asia, and dictation tools that only transcribe speech verbatim. But these new assistants—you’ll meet Suki in a sec—go one step further. Equipped with advanced artificial intelligence and natural language processing algorithms, all a doc has to do is ask them to listen. From there they’ll parse the conversation, structure it into medical and billing lingo, and insert it cleanly into an EHR.
“We must reduce the burden on clinicians,” says John Halamka, chief information officer at Boston-based Beth Israel Deaconess Medical Center.1 He’s been conducting extensive early research around how Alexa might be used in a hospital, to help patients locate their care team or request additional services, for example. “Ambient listening—the notion that technologies like Alexa and Siri turn clinician speech and clinician-patient conversations into medical records—is a key strategy.”
Alexa and Siri might be the best known voice assistants, but they’re not the first ones doctors are trusting with their patients. While Amazon and Apple are rumored to be working on voice applications for health care, so far they’re still piloting potential use cases with hospitals and long-term care facilities. They don’t yet have any HIPAA-compliant products on the market.
Not so for Sopris Health, a Denver-based health intelligence company that launched today after starting to roll out its app at the beginning of the year. You don’t summon it by name to turn it on; you just tap the app when you want it to start listening. It automatically converts the audio to free text, then turns that speech into a doctor’s note, thanks to hours of training data from actual doctors’ visits. So “I think I’d like to see you again if things aren’t feeling better within a few days” becomes “Schedule three-day follow-up.” Or, “We’re going to need to get an MRI of that left knee to figure out what’s going on in there” becomes “Order left knee MRI.”
Much in the same way that Google’s neural networks learned that cats and dogs are different animals that people like to keep as pets, Sopris’ algorithms learned to use context clues to pull out the medically actionable parts of a conversation. A cardinal number becomes an interesting feature—maybe it’s a calendar date or the dose of a medication. The words around it help the app decide to schedule a follow-up or order a prescription. And because it integrates directly with the EHR vendor, no separate orders or emails or phone calls are necessary: You just hit a button.
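The context-clue idea described above can be illustrated with a toy, rule-based sketch. To be clear, this is not Sopris’ actual system, which relies on models trained on real visit data; the cue words, body parts, and output format below are illustrative assumptions only. The sketch shows how nearby words can disambiguate what a number or a key term means—an order versus a follow-up:

```python
import re

# Toy cue lists standing in for what a trained model would learn.
ACTION_CUES = {
    "order": ["mri", "x-ray", "blood test", "ct scan"],
    "schedule": ["follow-up", "see you again", "come back"],
}

def extract_order(utterance):
    """Map free-form doctor speech to a structured action string, or None."""
    text = utterance.lower()
    # A cardinal number is an interesting feature: here, a follow-up interval.
    number = re.search(r"\b(\d+|one|two|three|four|five)\b", text)
    for cue in ACTION_CUES["order"]:
        if cue in text:
            # Surrounding words (laterality, body part) attach to the order.
            side = "left" if "left" in text else ("right" if "right" in text else "")
            part = next((p for p in ["knee", "shoulder", "hip"] if p in text), "")
            return " ".join(w for w in ["Order", side, part, cue.upper()] if w)
    for cue in ACTION_CUES["schedule"]:
        if cue in text:
            days = number.group(1) if number else "?"
            return f"Schedule {days}-day follow-up"
    return None

print(extract_order("We're going to need to get an MRI of that left knee"))
# Order left knee MRI
print(extract_order("I'd like to see you again within three days"))
# Schedule three-day follow-up
```

Real systems replace these hand-written rules with neural networks that learn the cues from data, but the pipeline shape—detect an action, resolve its arguments from context, emit a structured order—is the same.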
By signing off on the note, physicians assume responsibility (and liability) that everything in it is correct. That might sound like a leap of faith, but Sopris CEO and co-founder Patrick Leonard says it’s actually a positive feature. “What’s really cool is it’s changing physician behavior in a good way,” he says. “The app forces them to practice active listening, double-checking with patients that they got everything right. Which they actually have time for, now that they’re not sitting at a computer for six hours a day.” And if the assistant gets anything wrong, doctors can manually overwrite it.
Sopris plans to eventually move beyond orthopedics into other specialties; it’s currently in talks with a large children’s hospital about creating a pediatrics module. Another clinical voice company also launching today has even bigger plans. With $20 million in funding and stacked with engineers from Google and Apple, Redwood City-based Suki unveiled its AI-powered digital voice assistant this morning. Former Googler Punit Soni founded the company a year ago (it was originally called Robin), and has since launched a dozen pilots in internal medicine, ophthalmology, orthopedics, and plastic surgery practices in California and Georgia. Preliminary results from the company show Suki cuts physician paperwork by 60 percent.
For now, the app still needs some hand-holding. You have to say “Suki, this patient is 67 years old,” and “Suki, we need to order a blood test.” That’s because Soni’s team gave it just enough seed data to survive. But eventually, with enough data flowing through its neural nets, doctors will be able to say simply, “Suki, pay attention.” And then it’s on to tackling bigger problems.
“We’re starting with documentation, but then we can apply the same methods to billing and coding, and other higher order architectures,” says Soni. Things like prescription management, and maybe even decision support—an algorithm whispering hints in your doctor’s ear about a care plan. “I think it’s unreasonable to imagine that 10 years from now doctors will still be using clunky 1990s-style UI to take care of patients,” says Soni.
The health care system has long been impervious to this kind of disruption. But as deep learning gets even better, these kinds of assistants begin to look more plausible. The space is filling up rapidly; last year a third startup, SayKara, helmed by former Amazon engineers, announced it was developing its own Alexa for health care. Others are sure to follow. And that’s when lawyers focused on privacy and cybersecurity start to get concerned. “When you’re talking about AI in the health care space, the appetite to capture more and more data becomes insatiable,” says Aaron Tantleff, a partner at the Foley & Lardner law firm in Chicago. He points out that one of HIPAA’s key privacy protections is a rule that says businesses should only collect the minimum amount of information that is necessary. It’s a provision that is fundamentally at odds with data-hungry neural networks.
Voice assistants also raise questions about unauthorized disclosures in the exam room. “We already know these listening devices can get hacked and allow third parties to record conversations,” says Tantleff. “In a medical setting, there’s a very different level of risk. What are companies doing to prevent that from happening?”
Both Suki and Sopris recognize the significant privacy and security considerations involved with their products. The companies encrypt audio on the device and in transit to the HIPAA-compliant clouds where their algorithms run. And both apps require a prompt from someone in the room to enable listening. Plus, patients have to opt in; docs can’t just record people who don’t consent. The potential benefit to physicians seems clear. The tradeoff for patients, less so. Then again, if you want to keep your doctor around for the long haul, maybe it’s worth asking, “Suki, can you keep my data safe?”
1 Disclosure: Halamka was also formerly a member of Suki’s advisory board.
The Algorithm Will See You Now
When time is brain, AI can help stroke patients get better care, faster.
Google has developed software that can detect early signs of diabetes-related eye problems, and is testing it in eye hospitals in India.
To keep pace with all the new ideas for using computers and machine learning in health care, the Food and Drug Administration had to create a new team of digital health experts.