Artificial Intelligence in Medicine – Part 4: Will AI Make Mistakes that Harm Patients?
The most obvious risk is that AI systems will sometimes be wrong, and that patient injury or other health-care problems may result.[i]
Artificial Intelligence (AI) is already serving the field of medicine on many levels, but two of them involve direct impact on patients:
- enhancing the patient information available to doctors (e.g., diagnosis, treatment options, etc.)
- taking over time-consuming chores like electronic medical record keeping, organizing individual patient files to filter relevant data, appointment setting, etc.
Other areas that utilize AI have an indirect impact on patients:
- as medicine moves increasingly into the hands of large hospital corporations, AI will affect the administration of medicine by analyzing large data sets that include demographics, geography, and more, and by making predictions about future resource utilization
It doesn’t take much to imagine how beneficial AI can be for all of the humans involved in healthcare services. Digging a little deeper, however, reveals it may not all be rosy.
Errors in information used by doctors
Medicine’s history is filled with errors and mistakes—even some horrors. Imagine the fates of surgery patients or women in childbirth before anesthesia, antiseptics and antibiotics. How often was the cure worse than the disease? Medical science has come a very long way, but is still imperfect and doctors are not gods. Mistakes happen. Efforts at cure often fall short. Most often, however, the damage is limited within a single doctor-patient relationship.
With AI, though, it’s different. For instance, if AI makes a mistake in interpreting radiological scans (MRI, CT, ultrasound, etc.), matching drugs with disease states, or assigning hospital beds during an epidemic, an untold number of patients could be at risk before the problem is detected, traced, and corrected.
Also, whether such errors affect just a few patients or hundreds, where will those who suffer because of impersonal computer error direct their anger and grief? The doctor? The computer? The hospital that invested in the software? Who will bear the burden and cost of compensating patients? These are some of the issues that must be identified, along with plans for resolving them.
Managing time-consuming tasks
The last time you visited your primary care doctor, was he/she busy typing your responses to questions into a laptop computer during your consultation? At the end of the appointment, were you given a printed after-visit summary? Did you ever wonder if your doctor was filling in a template with your data, where the data will be stored, and how it will be used? Who else might have access to it?
AI makes medical record-keeping much more efficient for the doctor. In addition, it adds to a vast pool of data that AI can rapidly call up, sort, and analyze. A growing number of research studies are based on thousands of patient records within large systems like an academic medical center or the Veterans Administration; in the blink of an eye, correlations and associations can be established for things like repurposing a drug or discovering a radiation side effect that shows up years later. Your record may have been used in a research study cleared by an ethics board without your ever knowing it. Even if it contributed to a medical advance and didn’t identify you personally, was your privacy violated?
Making business projections
In addition to individual consequences from erroneous AI, masses of people might be affected. Healthcare in the U.S. is big business; sadly, it is not equally distributed. The Kaiser Family Foundation defines health care disparity as “differences between groups in health insurance coverage, access to and use of care, and quality of care.” The reasons for disparities are complex, and “often refer to differences that are not explained by variations in health needs, patient preferences, or treatment recommendations and are closely linked with social, economic, and/or environmental disadvantage.”[ii]
Analyzing the many factors involved in medical inequities within populations is an area in which machine learning could be of enormous benefit. However, inputting huge amounts of data without mistakes or bias will be time-consuming and costly. Knowing that allocations of funds and investments may be based on AI’s evaluations and recommendations, we have to wonder: Who will be in charge of designing the program, and what’s at stake for them? Literally, the health of millions of today’s underserved patients may be affected by simple input errors or biased training data within deep learning. We have to be prepared for patient harm on a grander demographic and economic scale than we want to imagine.
In Part 5, I will explore implications for patient privacy, which also touches on legal and regulatory issues. Keep an eye out for another aspect of getting the medical world ready for AI.
NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you are experiencing pelvic pain, or have any other health concerns or questions of a personal medical nature.
[i] Price, W. Nicholson II. “Risks and Remedies for Artificial Intelligence in Healthcare.” The Brookings Institution. Nov. 14, 2019. https://www.brookings.edu/research/risks-and-remedies-for-artificial-intelligence-in-health-care/
[ii] Artiga S, Orgera K, Pham O. “Disparities in Health and Health Care: Five Key Questions and Answers.” KFF Disparities Policy. Mar. 4, 2020. https://www.kff.org/disparities-policy/issue-brief/disparities-in-health-and-health-care-five-key-questions-and-answers/