
Microsoft and Epic set their AI on medical records

19 April 2023


Looking for trends 

Software king of the world Microsoft and Epic Systems have announced that they are bringing OpenAI's GPT-4 language model into health care, where it will be used to draft message responses from health care workers to patients and to analyse medical records for trends.

Epic Systems is one of America's largest health care software companies. Its electronic health records (EHR) software, such as MyChart, is reportedly used in over 29 percent of acute care hospitals in the United States, and over 305 million patients worldwide have an electronic record in Epic. On the downside, Epic's use of predictive algorithms in health care has attracted criticism in the past.

Vole said that Epic will use its Azure OpenAI Service, which provides API access to OpenAI's large language models (LLMs), such as GPT-3 and GPT-4. The first use of GPT-4 will let doctors and health care workers automatically draft message responses to patients.
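Neither company has published its integration code, but at the API level a drafting call through the Azure OpenAI Service looks roughly like the sketch below, using OpenAI's Python client. The endpoint, deployment name, and prompts here are illustrative assumptions, not Epic's actual configuration.

import os

from openai import AzureOpenAI  # pip install openai

# Hypothetical endpoint and deployment name -- real values come from an
# Azure OpenAI resource; this is not Epic's actual setup.
client = AzureOpenAI(
    azure_endpoint="https://example-hospital.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

patient_message = "Are the side effects of my new blood pressure tablets normal?"

response = client.chat.completions.create(
    model="gpt-4",  # name of the GPT-4 deployment in Azure
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a polite reply for a clinician to review. "
                "Do not give a diagnosis; flag anything urgent."
            ),
        },
        {"role": "user", "content": patient_message},
    ],
)

# The draft lands in the clinician's inbox for review and editing --
# it is never sent to the patient as-is.
print(response.choices[0].message.content)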

Microsoft quoted Chero Goswami, chief information officer at UW Health in Wisconsin, as saying: "Integrating generative AI into some of our daily workflows will increase productivity for many of our providers, allowing them to focus on the clinical duties that truly require their attention."

The Microsoft/Epic system will also bring natural language queries and "data analysis" to SlicerDicer, Epic's data-exploration tool, which allows searches across large numbers of patients to identify trends that could be useful for making new discoveries or for financial reasons.

According to Microsoft, that will help "clinical leaders explore data in a conversational and intuitive way." Imagine talking to a chatbot similar to ChatGPT and asking it questions about trends in patient medical records, and you might get the picture.
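Epic hasn't said how that natural-language layer is wired up, but the general pattern, an LLM translating a plain-English question into a structured query that runs against the records database, can be sketched as below. The schema, table, and prompt are invented for illustration and bear no relation to Epic's actual data model.

import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-hospital.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Hypothetical schema -- Epic's real data model is far richer than this.
SCHEMA = "admissions(patient_id, age, diagnosis_code, admit_date)"

question = "How many patients over 65 were admitted for heart failure last quarter?"

completion = client.chat.completions.create(
    model="gpt-4",  # Azure deployment name; illustrative
    messages=[
        {
            "role": "system",
            "content": (
                "Translate the user's question into one SQL query against "
                f"this schema: {SCHEMA}. Return only the SQL."
            ),
        },
        {"role": "user", "content": question},
    ],
)

# The generated query would be validated and run against the records
# database; only aggregate results would go back to the clinician.
print(completion.choices[0].message.content)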

Some are less than happy about this, as GPT-4 can make up information that isn't represented in its data set and might discriminate against certain patients based on gender, race, and age.

Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, said: "Combined with the well-known problem of automation bias, where even experts will believe things that are incorrect if they're generated automatically by a system, this work will foreseeably generate false information. In the clinical setting, this can mean the difference between life and death."

 
