Artificial Intelligence in Medicine: Complex Economic, Legal and Ethical Issues
No consideration of Artificial Intelligence/Machine Learning (AI/ML) in medicine and healthcare would be complete without identifying the swirl of economic, legal and regulatory issues that surround it.
The economics of medical AI
In the broad field of healthcare, Artificial Intelligence and Machine Learning (AI/ML) offer the enticing lure of a solid return on investment. Compared to 10 other high-income countries, the U.S. spends more on health care as a share of its economy (nearly twice the average of those nations), yet it has the lowest life expectancy and highest suicide rates among them.[i] The promise of better healthcare delivered with greater economic efficiency is what drives the billions of investment dollars flowing into AI/ML development: healthcare AI/ML investment deals reached $2.7 billion in 2018, and in 2019 investors opened their wallets to the tune of $4 billion.[ii] This is big business, and it encourages the creation of small startup companies which, if successful, secure further funding by going public or by being acquired by, or merging with, megacorporations like Microsoft.
As all this economic activity surges, the healthcare AI/ML market is expected to reach $6.6 billion by 2021. With applications ranging from diagnostics, virtual nursing assistants, and robot-assisted surgery to the administration of vast amounts of healthcare records and data and support for research and clinical trials, AI is projected to save the U.S. healthcare industry $150 billion annually by 2026.[iii]
As rosy as the economics sound, the downside is that adoption remains slow and cautious, which means many investors are pinning their hopes on the future. The caution is due in some measure to the complex legal and regulatory issues entailed in AI/ML development. Imagine yourself as the CEO of a large hospital system wanting to spend your budget wisely as you integrate AI into your radiology departments or Electronic Health Records (EHR). Of course, you embrace the core ethical value of “above all, do no harm,” but you wonder whether the growing body of legislative and regulatory guidance will be adequate to protect against the potential snares and snarls of, let’s say, a class action lawsuit over an AI slip-up.
Legal and regulatory questions
It isn’t easy to separate legal and regulatory issues from ethics because there is a great deal of overlap. In Part 5 of this series, I discussed the ethics of giving AI/ML access to millions of private case records, records created for the good of the individual patient, for research purposes that might serve the common good at the risk of violating a patient’s individual rights. The legal question layered over this ethical one is simple: Who owns the data? According to a 2019 journal article, this issue “…is characterized by a complex tension between health care provider proprietary interests, patient privacy, copyright issues, and AI developer intellectual property, and an overarching public interest in open access to data that can improve medical care.”[iv] Bright legal minds, including lawyers, judges, and lawmakers, will have more than enough work creating equitable laws, which will almost certainly be tested through lawsuits. It’s not surprising that much healthcare-related AI is developing in the relative shelter of academic and clinical research (e.g., diagnostics) rather than in the marketplace (e.g., EHR), though commercial enterprises are catching up.
In the U.S., where approved AI applications are already in use, both White House policy-makers and Congress have taken baby steps toward guidance via executive orders, bills introduced in the legislature, and even a public website, “AI For The American People.” (There is also sensitivity about protecting U.S. interests against international competition, but that’s beyond my scope.) At this point, nothing is binding in a meaningful way, but lawmakers at the federal and state levels are anticipating the legal needs ahead.
Regulation
The broad legal realm includes regulatory issues. If AI/ML software is defined as a medical device, it falls under the watchdog function of the U.S. Food & Drug Administration (FDA). The FDA does not regulate the practice of medicine; it regulates products and medical devices. With regard to the latter, the FDA determines whether “an instrument … or other apparatus, component, or accessory” is intended for the diagnosis, treatment, mitigation, cure, or prevention of disease.[v] If so, it must comply with FDA regulations.
The FDA has already approved about 40 AI applications within healthcare, yet it is still pondering how to define AI/ML-based software and how to regulate it:
Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care. The FDA is considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device is maintained.[vi]
This is a good example of where the legal and regulatory realm overlaps with the ethical responsibility to keep patients from harm. At the same time, regulatory oversight should not stifle the incredible progress and potential benefits of current and future development.
The armada of AI applications being launched into the vast ocean of healthcare needs will meet complex economic, legal and regulatory crosscurrents, but I foresee that the waters surrounding AI/ML will eventually be well navigated. Watch for Part 7 of this series, which focuses on AI/ML in my own specialty, radiology. I will explore real-world examples of AI outperforming experienced diagnosticians and radiologic readers, offering true advantages for patients and doctors alike.
NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you have any health concerns or questions of a personal medical nature.
[i] https://www.commonwealthfund.org/publications/issue-briefs/2020/jan/us-health-care-global-perspective-2019
[ii] https://www.fiercehealthcare.com/tech/investors-poured-4b-into-healthcare-ai-starups-2019
[iii] Uzialko, Adam. “Artificial Intelligence Will Change Healthcare as We Know It.” Business News Daily, June 9, 2019.
[iv] Jaremko JL, Azar M, Bromwich R, Lum A, et al. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology. Canadian Association of Radiologists Journal 70 (2019): 107-118.
[v] Tsang L, Kracov D, Mulryne J, Strom L, et al. The Impact of Artificial Intelligence on Medical Innovation in the European Union and United States. Intellectual Property and Technology Law Journal, August 2017. https://www.arnoldporter.com/~/media/files/perspectives/publications/2017/08/the-impact-of-artificial-inteelligence-on-medical-innovation.pdf
[vi] https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device