Artificial Intelligence in Medicine: Can AI Have True Humanity?
When it comes to making decisions, do you consider yourself a head or heart person? According to a psychology article, “… a heart-based thinking style is intuitive (‘follow your heart’), whereas a head-based thinking style is rational (‘use your head’).”[i] Both styles are among the many traits of a well-rounded human, and it’s good to be able to draw on either capacity when appropriate. Let’s say a close friend’s wife has unexpectedly passed away; in his shock he turns to you for consolation and support. Engaging your feeling side, you express compassion and empathy and offer to help. As the familiar expression says, your heart goes out to him. If, however, your only inner resource is logic, you’d be more likely to say something like, “Sorry to hear it, but let’s be practical. Have you planned the funeral? Is her estate in order? Have you made a list of the accounts that will need name changes?” You’re using your head, but no matter how well-intentioned you are, there’s a good chance your friend would—to use another common expression—feel like you’re leaving him out in the cold.
In medicine, advances in Artificial Intelligence (AI) are rapidly penetrating clinical decision-making. An American Hospital Association (AHA) blog tells us, “One of AI’s most promising roles is in clinical decision support at the point of patient care. AI algorithms analyze a vast amount of patient data to assist medical professionals in making more informed decisions about care …” Such use of AI improves the accuracy of the information available to a doctor so he/she can choose the best recommendations for the patient. No doctor, however, has yet surrendered the entire decision-making process to AI itself.
As a human being, the doctor not only brings informed and experienced logic (head) to patient care; he/she also brings the less tangible qualities of intuition and empathy (heart) to the process. There is recognition that today, AI technology:
… still lacks empathy, which is a crucial element of human interaction. While it’s true that AI can mimic empathy, it remains a programmed response rather than genuine emotion. The question is, can we achieve ethical AI that exhibits empathy and compassion toward humans?[ii]
Recently, JAMA Editor in Chief Dr. Kirsten Bibbins-Domingo interviewed Dr. Ida Sim, a primary care physician and UCSF’s Chief Research Informatics Officer, about AI and its relationship to humanity in medicine. They agree that AI algorithms using clinical data aid clinical prediction. After all, medicine is about prognosis—what is the probability that this patient will recover or die? This is not about being psychic. Clinical predictions are based on aggregated research as well as a doctor’s cumulative experience with patients. The more data and the more experience, the better the ability to make a correct prognosis and treat the patient accordingly. In its capacity to process massive amounts of data faster and more accurately than the human brain, AI is a doctor’s best friend. But processing data is not the same as caring about a patient and judging what is in his or her best interest.
However, AI is increasingly infiltrating the realm of relationships because it can now use language in a way that a patient may perceive as medical advice delivered by a human doctor. Think of online AI services like chatbots. A patient with a clinical question about his symptoms or condition enters the query into a chatbot or AI-enabled search engine. Even though the patient knows a human doctor is not on the other end typing a personal response, the answer arrives in familiar words (or sometimes unfamiliar medical terms). The patient’s brain may overlook that the words were assembled by a machine-learning language program.
Drs. Bibbins-Domingo and Sim make the point that the innovators are techies in computer science, not necessarily physicians or others in the medical domain. To return to the AHA blog quoted above: “Deployment of medical AI systems in routine clinical care presents an important, yet largely unfulfilled, opportunity as the medical AI community navigates the complex ethical issues that can arise from using AI in health care. According to the Futurescan survey, more than 48% of hospital CEOs and strategy leaders are confident that by 2028, health systems will have the infrastructure in place to make use of AI in augmenting clinical decision-making. AI is designed to enhance — not replace — traditional care delivery. Thoughtful implementation of AI offers boundless opportunities for clinical care improvements. The greatest potential for AI in the next five years is in human-centered AI design.”
Bringing true humanity to AI is not only an ethical challenge; it’s a practical one. Human emotions are complex and difficult to categorize in a binary (yes/no) manner. Going back to the earlier example of a close friend whose spouse has died, your ability to “read” your friend’s feelings depends not only on his words, but also on his tone of voice, facial expressions, and body language. Would an AI program equipped with a video camera and language function recognize and respond to your friend as you do when your heart goes out to him? For that matter, we may wonder whether it makes a difference in a doctor-patient relationship if the doctor has empathy or not. In fact, it does:
Empathy extends far beyond a patient’s medical history, signs, and symptoms. It is more than a clinical diagnosis and treatment. Empathy encompasses a connection and an understanding that includes the mind, body, and soul. Expressing empathy is highly effective and powerful, which builds patient trust, calms anxiety, and improves health outcomes. Research has shown empathy and compassion to be associated with better adherence to medications, decreased malpractice cases, fewer mistakes, and increased patient satisfaction. Expressing empathy, one patient at a time, advances humanism in healthcare.[iii]
The fact that so many physicians use their heart as well as their head sets a high bar for developing AI that incorporates the human side of medicine. May ethics be the moral compass that helps us all navigate the humanization of AI.
NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you have any health concerns or questions of a personal medical nature.
[i] Fetterman AK, Robinson MD. Do you use your head or follow your heart? Self-location predicts personality, emotion, decision making, and performance. J Pers Soc Psychol. 2013 Aug;105(2):316-34.
[ii] “Artificial Empathy: Is Ethical AI Possible?” OriginStamp Blog. https://originstamp.com/blog/artificial-empathy-is-ethical-ai-possible/
[iii] Jerry Stone. “The Importance of Empathy in Healthcare: Advancing Humanism.” MedicalGPS, Jan. 28, 2019. https://blog.medicalgps.com/the-importance-of-empathy-in-healthcare/