
Can Google save doctors and patients from the misery of electronic medical records?


By Monya De
Photo by Kevork Djansezian/Getty Images

Google is a company that likes to simplify tasks that used to be much bigger hassles, like reading maps, sharing documents, and finding old emails. Now, recognizing that health systems have not exactly jumped to help doctors with soul-crushing levels of daily data entry, Google wants to use speech recognition to help doctors get patient histories and plans into the electronic health record, or EHR.

Most doctors working today have been forced to become doctor-stenographers, either because they work for a large health system adopting an EHR system, such as Epic or Cerner, or because their private practices have been offered bonuses from the government for using approved software. Doctors have always taken notes, but EHRs are computer documentation systems — they don’t lend themselves to natural narratives like the paper charts of old. One does not have to mouse around on a piece of paper, looking for the “order Vicodin” button.

Electronic records certainly store information in an easier-to-access format than do paper charts, and an absurd number of startups now exist with the aim of mining the health data within them. That being said, these systems were designed by software engineers who apparently thought each physician has hours to write up each patient’s history, do an exam, make a plan, and write orders. Scribes, the specially trained assistants who type in the history and physical findings and start medication orders while the doctor and patient talk, help with the otherwise impossible goal of solving medical problems every 15 minutes, while pages, interruptions, and inbox tasks pile up relentlessly.

But apart from a few settings, like emergency rooms and the private offices of specialists, scribes are still a novelty at the doctor’s office. This is despite ample evidence that they improve productivity and doctor satisfaction. Doctors often cannot convince administrators to cough up the money to hire and train them. Google’s innovation, discussed on a company blog, involves tweaking its already-competent speech recognition software to serve as a scribe for doctors.

Katherine Chou, a product manager, and Chung-Cheng Chiu, an engineer on something delightfully called the Google Brain Team, found that the algorithms used to turn the rat-a-tat medical dictation by doctors into an accurate representation of the speaker’s words could be tweaked for a more normal human conversation — like an office visit.

Existing medical dictation sounds roughly like: “36-year-old Filipina presenting for the second time with complaints of hirsute elbows period. Patient has tried all recommendations including Nair and spironolactone period.” In other words, it is a one-sided doctor’s assessment of a conversation with the patient.

But Google’s Chou and Chiu have extended this capability to understand conversations. The computer is getting a double dose of information: It must recognize two different voices, which sometimes overlap or interrupt each other. The speakers are also not necessarily using medical terminology: “How long have you been feeling this way?” “I dunno … Maybe since March? April?” “I started coughing, you know, when we went up to the mountains, you know, to get away from the smoke in Napa.”

The Googlers analyzed thousands of anonymized conversations with the help of a professional medical scribe, who helped them parse out the parts of the conversations that were medically relevant and needed to be captured. The team realized they needed to do “data cleaning,” meaning that the discussion of how cute the patient’s kids are could be eliminated from the medical record. They even added artificial noise to challenge their software to pick out the words. Overall, Google found it could achieve 80 to 90 percent accuracy in documenting these doctor-patient conversations, transcripts the researchers deemed a “reasonable” representation of what was said in the exam room.
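The blog post does not say exactly how Google computes its “80 to 90 percent” figure, but the standard yardstick for speech transcripts is word error rate: the number of word-level substitutions, insertions, and deletions relative to a human reference transcript. A minimal, illustrative sketch of that metric (not Google’s actual code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one dropped word ("has") and one misheard word
# ("nair" transcribed as "hair") out of 9 reference words.
ref = "patient has tried all recommendations including nair and spironolactone"
hyp = "patient tried all recommendations including hair and spironolactone"
print(word_error_rate(ref, hyp))  # 2 errors / 9 words, roughly 78 percent word accuracy
```

Note that even a transcript scoring “90 percent accurate” on this kind of metric can still mangle the one word that matters clinically, such as a drug name or a dose.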

But there are some concerns with this practice of “roboscribing,” at least at the level of sophistication Chou and Chiu describe. For one, they say “utterances under five words” are simply removed. Many of these utterances could well be words like “Hello” or “Uh-huh” and other phrases that make no difference in the chart. But a doctor could respond to a patient with a crucial short phrase such as “I agree” or “Take half a dose,” which the software might miss. This could lead to confusion in the emergency room or on the part of a new doctor, not to mention legal headaches when the malpractice attorney says, “But doctor, the electronic record makes no mention of you assessing this patient for risk of harming anyone.”
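To see why a blanket cutoff is risky, consider a naive cleaning pass that drops every utterance under five words, the rule Chou and Chiu describe. The function and the visit transcript below are hypothetical, for illustration only:

```python
def filter_short_utterances(utterances, min_words=5):
    """Keep only utterances of at least min_words words (a naive cleaning rule)."""
    return [u for u in utterances if len(u.split()) >= min_words]

# Hypothetical exam-room exchange.
visit = [
    "How long have you been feeling this way?",
    "I dunno maybe since March",
    "Any thoughts of harming yourself or anyone else?",
    "No",                                   # one word: silently dropped
    "Should I keep taking the blood thinner?",
    "Take half a dose",                     # clinically crucial, but only four words
]

kept = filter_short_utterances(visit)
# Both "No" and "Take half a dose" vanish from the record,
# even though each one changes the clinical picture.
```

A smarter filter would have to weigh content, not just length, which is exactly the hard part the short-cutoff heuristic sidesteps.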

Another concern is privacy. The Google team doesn’t discuss the implications of Google being privy to the most intimate details of patients’ lives. Twitter employees can allegedly check out our “direct messages.” Will rogue Google engineers be able to browse clinical notes?

Nor is Google the only player in the field. The company got some competition from M*Modal, an EHR and transcription company that launched a similar virtual scribe service in March 2017. Reviews for the service seem wildly mixed: some doctors say M*Modal has the best speech recognition they have ever seen, while other doctors dictated their reviews through M*Modal itself to demonstrate how many words it dropped.

While freeing doctors to talk at length with their patients without the interference of a computer screen is definitely an admirable goal, and even though errors certainly occur in paper charts, a head-to-head trial should happen before this technology debuts widely in medical offices and large health systems. The trial would compare the work of medical scribes with that of Google’s software. Only then could researchers determine whether critical information was falling out of the medical record. If it is, then hospitals will have no choice but to pay for scribes — or get better software that is more user-friendly for doctors than today’s woefully bad options.



Geez, for over 100 years we have done time-motion analyses, and even a few minutes devoted to this area reveals the major problems. You can lump them under the headings of screen-change delays and lack of automaticity.

For a cold, this should happen: boot up, security, load program, click on patient, click one of four or five suggested templates to load “cold,” and voila - documentation ready to review, along with prescriptions (none) and personalized information sheets you may change slightly. After all, these should be pre-filled based on your past 1,000 forms filled out for much the same scenario, adjusted for age and insurance.

But that is not what happens.

Boot up - wait.
Do more security - wait.
Click on patient - wait.
Click on Documentation - wait.
Click on Template (should have been preloaded most of the time).

Fill in documentation from dictation (a small part of most encounters, as Google should know).
Click on the multiple-choice fields in the template (a large part of documentation).

Click on prescriptions (most delays are in this module) - wait a long, long time, longer where bandwidth is poor. Often you will be presented with preferred choices - more screen changes, more delays.

Then diagnosis - God help you find the right fracture, eye, or diabetes diagnosis. These are so overpopulated with every possible permutation that you cannot find the simple ones. So you build your own templates to save time, and it takes weeks or months to get up to speed, due to the lack of speed.

And nurses have to input instructions and prescriptions that are printed in the wrong priorities and that few patients read through anyway.

Your own template, loaded at the start, should have your optional diagnoses, your usual prescription choices adjusted for allergies, the patient’s pharmacy, referrals, and needed equipment - all ready for your review and sending.

And your template should have the instructions that you want loaded - which you review and adjust - first in the packet, reviewed by you with the patient; then general instructions based on diagnosis, which you can also review; then your meds added by template with instructions (not by the nurse at her time cost). There should be an option to share this directly to a device (not print 12 pages).

And a link should be on this device to give feedback and request changes - such as a medicine that made you sick - or to set up a follow-up as scheduled or as needed.

Ideally this would also allow patients to see needed specialists straight from ER or urgent care visits instead of having to go back to primary care - which half the nation cannot get, because payers pay too little and there are woefully too few primary care doctors - great for saving insurance dollars.

The end of net neutrality will make matters worse for EHRs, as bandwidth is worst where primary care is scarcest and will get worse - slowing down the process further as computers bounce information to servers and back. I suspect that health practices will have to pay more for better bandwidth to make their EHRs work well.

And EHR generally costs twice as much per primary care doctor in smaller practices - much the same as ACA, MACRA, and Primary Care Medical Home costs ($40,000 per doctor for larger practices, $80,000 and up for smaller ones).

Medicaid prescriptions are the worst in the prescription module:

1. Nothing is approved; everything appears as red NF, with no green or preference 1 or 2. For a patient with no insurance or plan there is no back-and-forth, so that is fastest. Generally Medicaid should be fastest, since you just click on the prescription and then sign or send, as there are no alternatives - but no. You still have to click and click to get rid of the red NF alternatives that are also not approved, and then you often have to provide an expiration date - ridiculous in 99 percent of prescriptions.

2. Medicaid patients often ask for a prescription for over-the-counter items. Most Dual Eligible plans take care of this with a $60 allowance every three months to save time and costs and to cover fever, minor pain, heartburn, bandages, bacitracin, and allergy meds - and perhaps prevent an extra visit - but not Medicaid alone. Also, the over-the-counter prescription area has the highest error/delay rate - dose, size, availability at pharmacy.


"...then hospitals will have no choice but to pay for scribes — or get better software that is more user-friendly for doctors than today’s woefully bad options."
Agree that roboscribing missed the mark as a perfect solution. Also worth noting that EMR products are used in non-hospital, outpatient settings as well, where they lead to end-user frustration and create additional time constraints. I see a need for compartmentalizing the “conversation” aspect to preserve the requisite technical aspects and medico-legal nature of clinical documentation. For some interactions, accurate capture of the dialog that occurred in the exam room may be preferable, though this is not a universal need. If needed, conversations between patient and provider can be captured (verbatim) in a separate document that can be added to an existing EMR yet kept separate from the provider’s clinical documentation. Also worth considering: release of records should make certain parts of the EMR non-visible to the prying eyes of the many who are now granted access (patient, payer, auditor, researcher, etc.).


I tested a scribe for myself; it’s not worth it. Accuracy in keeping medical records is very important. There are many companies that offer reasonable fees to record and input medical records into the system. Hiring a local transcriber is a little pricey compared to outsourcing the service to other countries. But it really takes time to find the right one.



