Google Big Data Mining The Doctor's Office? It May Save A Lot Of Time

By Chuck Dinerstein, MD, MBA — Nov 29, 2017
Speech recognition could allow information to be placed in medical records automatically, freeing doctors to get back to providing care instead of performing data entry.

One of the enormous problems with electronic health records (EHRs) is that they require a great deal of data entry. When physicians perform this task, it takes valuable time and expertise away from patients and gives it to screens. [1]

Google, in some exploratory work, is showing how the same type of speech recognition system used in Google Home can be adapted to medical care.

The preprint paper by Chung-Cheng Chiu et al. used two deep learning models to abstract medically relevant information from conversations between physicians and patients.

They trained the models on a dataset of 14,000 hours of conversation, using the annotations of medical scribes as the learning and accuracy standard. The models correctly transcribed and abstracted pertinent medical information 80% of the time. It is the abstraction of medically relevant information that is new and pertinent here. Commercially available transcription systems are already used by physicians, and many physicians employ medical scribes, people present in the room, like a court stenographer, taking down every word.
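To make the two-stage idea concrete, here is a minimal, hypothetical Python sketch. It stands in for the paper's deep learning models with a toy keyword matcher; the transcript, the vocabulary, and the field names are all invented for illustration and are not drawn from the paper.

```python
# Toy illustration of the two-stage idea: (1) a transcript of the visit,
# (2) abstraction of medically relevant items into structured fields.
# The real system uses deep learning models; this keyword matcher is a
# stand-in to show the shape of the problem, not the paper's method.

# Stage 1 output: a hypothetical transcript from a speech recognizer.
transcript = [
    ("patient", "I've had a headache for three days"),
    ("doctor", "Are you still taking lisinopril?"),
    ("patient", "Yes, ten milligrams every morning"),
    ("doctor", "Let's check your blood pressure today"),
]

# Stage 2: map utterances to EHR-style fields using a toy vocabulary.
VOCABULARY = {
    "symptoms":    ["headache", "nausea", "fatigue"],
    "medications": ["lisinopril", "metformin", "ibuprofen"],
    "exams":       ["blood pressure", "heart rate"],
}

def abstract_record(transcript):
    """Return a dict of EHR-style fields found in the conversation."""
    record = {field: [] for field in VOCABULARY}
    for speaker, utterance in transcript:
        text = utterance.lower()
        for field, terms in VOCABULARY.items():
            for term in terms:
                if term in text and term not in record[field]:
                    record[field].append(term)
    return record

print(abstract_record(transcript))
# {'symptoms': ['headache'], 'medications': ['lisinopril'],
#  'exams': ['blood pressure']}
```

The hard part, of course, is that real conversations do not announce which words matter; that is what the 14,000 hours of scribe-annotated training data teach the models to decide.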

Cloud-based speech recognition systems such as Siri, Alexa, and Google Assistant have already demonstrated greater speed, reliability, and computational power in transcription than commercial systems. But in all of these cases, someone still has to enter the information abstracted from the conversation into the EHR.
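As an illustration of what cloud transcription looks like in practice, here is a minimal sketch using the Google Cloud Speech-to-Text Python client. The audio file, bucket name, and phrase list are placeholders, and this is generic cloud transcription, not the research system described in the paper.

```python
# A minimal sketch of cloud transcription with the Google Cloud
# Speech-to-Text client library. Credentials setup is omitted, and the
# audio URI is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://example-bucket/office-visit.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # A phrase list can bias recognition toward clinical vocabulary.
    speech_contexts=[
        speech.SpeechContext(phrases=["lisinopril", "hypertension"])
    ],
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)

# The transcript still has to be abstracted and entered into the EHR,
# which is the gap the Google research aims to close.
```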

A system that can identify the essential parts of a medical discussion and automatically place the information into electronic health records would be a blessing for physicians, although not so much for medical scribes.

Eighty percent accuracy is not great, and it is clear from the paper that the machines will have to learn from a variety of clinical situations (e.g., wellness visits, follow-up for diabetes) and will have to learn the relevant clinical information in each of the medical specialties. But it is a basic proof of concept, and it offers hope to physicians who feel enslaved to screens when they would rather be with patients. Google plans to continue the research with clinical partners at Stanford.

Note:

[1] Physicians may spend 50 percent or more of their clinical day within EHRs, with 25 percent of their time spent inputting data. Patients also cite EHRs as a problem, feeling they compete with the screen for attention, as do physicians, who find that EHRs impede the flow of clinical care and relationships.

Chuck Dinerstein, MD, MBA

Director of Medicine

Charles Dinerstein, MD, MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
