Sperling Medical Group


Artificial Intelligence in Medicine: AI Confounded by Contaminated Prostate Cancer Tissue

I’ve posted numerous blogs on the promising applications of Artificial Intelligence (AI) in the medical world. Early research and development of Machine Learning (ML) and Deep Learning (DL) show results that offer accuracy rivaling that of humans, and speed/efficiency no human can begin to approach. This all sounds glorious! However, as AI tools have become commercially available in the real world, some users are starting to voice concern over possible errors.

One such voice is raised by a multidisciplinary team out of Northwestern University/Feinberg School of Medicine. Their study, titled “Tissue contamination challenges the credibility of machine learning models in real world digital pathology,” was published in the journal Modern Pathology.[i] The team members represent expertise in several sciences: genitourinary pathology, computational pathology/imaging, bioimaging and informatics, electrical engineering, and machine learning science. Who better qualified to delve into potential accuracy errors in ML detection of prostate cancer in prostate biopsy slides?

The authors explain, “While human pathologists are extensively trained to consider and detect tissue contaminants, we examined their impact on ML models.” They note that when ML models are trained on tissue slides labeled to distinguish healthy tissue from disease, the training may not have taught the model to recognize deviations, or errors, called contaminants. What causes such deviations? As the team writes,

The process of tissue handling, wherein patient tissue becomes a slide, contains multiple steps in which tissues from one patient can appear on the slide of a different patient. This could be a “push” from an insufficiently cleaned tool at the grossing bench, “block contamination” that occurs during transport processing in a retort shared by tissues from multiple patients, or a “floater” that occurs when histology water baths are insufficiently cleaned between blocks.[ii]

Such unexpected tissue may not be recognized correctly by ML models. To discover the range of error rates, the team experimented by digitally adding image patches of contaminant tissue to known slide images. One such experiment involved prostate cancer (PCa) slides to which were added patches of bladder tissue. These were submitted to an ML program trained to identify PCa. The model’s performance was then evaluated for the proportion of attention the model gave to contaminants, and the impact on results.

The team discovered that adding bladder tissue to PCa biopsy slides caused false positive identification (the model mistook bladder patches for PCa) at a rate of 97%. In the ML model, these patches “received attention at or above the rate of the average patch of patient tissue.”
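For readers curious about what “attention” means here, the diagnostic can be sketched in toy form. The study’s models use attention-based aggregation over image patches; the sketch below is purely illustrative and assumes a generic attention-weighted pooling scheme with random stand-in features. None of the variable names or numbers come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors for patches on one slide: 90 patient-tissue
# patches plus 10 digitally added contaminant patches (hypothetical
# counts, chosen only for illustration).
n_patient, n_contaminant = 90, 10
features = rng.normal(size=(n_patient + n_contaminant, 8))
is_contaminant = np.array([False] * n_patient + [True] * n_contaminant)

# Stand-in attention scorer: in attention-based multiple-instance
# learning, each patch gets a score that is softmax-normalized into
# attention weights summing to 1 across the slide.
w = rng.normal(size=8)
scores = features @ w
attention = np.exp(scores - scores.max())
attention /= attention.sum()

# The study's diagnostic question: do contaminant patches receive
# attention at or above the level of the average patient-tissue patch?
mean_contaminant = attention[is_contaminant].mean()
mean_patient = attention[~is_contaminant].mean()
print(f"mean attention, contaminant patches: {mean_contaminant:.4f}")
print(f"mean attention, patient patches:     {mean_patient:.4f}")
```

A well-behaved model would give contaminant patches far below average attention; the study’s finding was that contaminants instead drew attention at or above the average, which is what drove the false positives.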

The team concluded, “Tissue contaminants induce errors in modern ML models. The high level of attention given to contaminants indicates a failure to encode biological phenomena.” This suggests a strong need to correct the problem by accounting for tissue contaminants when training ML models such as those used to flag PCa in biopsy slides. In the meantime, it is up to pathologists to be on the alert for unexpected contaminants in slides that AI programs mark as suspicious.

In all cases, patients should be assured that all slides flagged by ML or other models are subject to final determination made by an experienced professional clinician. As development continues, AI is still a valued partner, but not the final authority.

NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you are experiencing pelvic pain, or have any other health concerns or questions of a personal medical nature.

[i] Irmakci I, Nateghi R, Zhou R, Vescovo M et al. Tissue contamination challenges the credibility of machine learning models in real world digital pathology. Mod Pathol. 2024 Jan 5:100422.
[ii] Ibid.