Artificial Intelligence in Medicine: Deep Learning in Image-Based Prostate Cancer Diagnosis
Gone are the days of “exploratory surgery.” Today’s magnetic resonance imaging (MRI) is a widely used scanning method that provides detailed 3D pictures of anatomy and bodily processes. It is noninvasive, and unlike CT or PET scans it involves no exposure to ionizing radiation. An MRI scanner places the body in a strong magnetic field and uses radiofrequency pulses to excite hydrogen atoms; the signals those atoms emit as they relax are registered as pictures on a monitor. While radio waves and magnetic fields may sound scary, MRI has no known harmful effects.
What does multiparametric mean?
The potential applications for MRI in every part of the body are virtually limitless, thanks to the ability to combine multiple imaging sequences, each of which highlights different tissue features. Each sequence is called a parameter. Thus, multiple + parameter = multiparametric MRI, or mpMRI for short.
mpMRI is beautifully versatile. Each parameter highlights a specific anatomic feature, tissue characteristic or cellular function. By combining parameters, a pictorial map is generated and fine-tuned for a patient’s unique situation. For example, here are the parameters we can use for prostate scans:
- T1 weighted MRI (T1W MRI)
- T2 weighted MRI (T2W MRI)
- Diffusion Weighted Imaging (DWI-MRI)
- Contrast-enhanced MRI (CE-MRI)
- MRI Spectroscopy (MRI-S)
AI can help interpret imaging results
As with any cancer, when mpMRI is applied to the diagnosis of prostate cancer (PCa), the stakes are high. Other abnormalities can be mistaken for cancer. As one team of authors states, “…the reviewing, weighing and coupling of multiple images not only places additional burden on the radiologist, it also complicates the reviewing process.”[i] Therefore, correct interpretation of complex mpMRI scans is crucial. This is where AI can improve accuracy through computer-aided diagnosis (CAD).
To perform this service, computer programs must be trained for both content (image features) and process (self-teaching, so that the program can generate identifications without human input). This is modeled on what human brains do naturally from birth. In fact, newborns can perceive faces and prefer to look at them, and by about two months they learn to recognize individuals, especially their primary caregivers. However, when it comes to distinguishing cancer cell types, a powerful, properly programmed computer can identify, integrate, and make judgments about millions of bits of data much faster than, say, a team of 200 scientists working full-time in an information laboratory.
Deep Learning
Diagnostic interpretation of mpMRI utilizes a subtype of AI called Deep Learning (DL) that is specially trained to distinguish tumors from healthy structures. The MathWorks website offers this clear summary of DL:
In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.[ii]
DL requires an initial input of huge amounts of labeled data (e.g., components of mpMRI images from prostate scans), but the model then rapidly improves on its own, learning which combinations of image features are most strongly associated with each diagnosis. Each MRI parameter acquires specific information regarding anatomy and cellular function, so training must include pattern recognition and textural analysis. In some cases, there are features that cannot be perceived by the human eye, but that computer analysis can detect.
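To make “training on labeled data” concrete, here is a minimal sketch in Python using the PyTorch library. It is purely illustrative, not the code behind any actual radiology CAD product: it trains a tiny convolutional network to label synthetic stand-in “image patches” as healthy versus suspicious. The network architecture, patch size, and random data are all hypothetical.

```python
# Minimal sketch: training a small CNN to classify labeled image patches.
# Illustrative only -- real CAD systems use far larger networks and
# curated, expert-labeled mpMRI datasets. All data here is random stand-in.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(16 * 8 * 8, 2)       # 2 labels: healthy / suspicious

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical training set: 64 single-channel 32x32 "patches" with labels.
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                     # a real model trains far longer
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the model's labels?
    loss.backward()                        # the "self-teaching" step: compute corrections
    optimizer.step()                       # adjust the network's internal filters
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

In a real system, the random tensors would be replaced by thousands of expert-labeled mpMRI patches, and the loop would run until the model’s labels reliably agree with the experts’.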
As DL proceeds, the computer learns to extract and co-register imaging voxels (tiny graphic units in 3D space) from one parameter to another, allowing voxel-to-voxel matching of visual information as the mpMRI map of the prostate is generated. I want to emphasize that a radiological reader’s brain is educated to do the same thing, but computers do it faster, with more data, and at higher magnification.
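As a toy illustration of voxel-to-voxel matching, here is a hedged sketch using the NumPy and SciPy libraries: it resamples one synthetic 3D volume onto the grid of another, so that each voxel index points to the same physical location in both. Real co-registration also corrects for patient motion and distortion with rigid or deformable transforms; the volumes and resolutions below are invented for the example.

```python
# Toy sketch of co-registering two MRI "parameters" onto a shared voxel grid.
# Real pipelines solve for rigid/deformable transforms; here we only fix a
# resolution mismatch with interpolation. All data and sizes are invented.
import numpy as np
from scipy.ndimage import zoom

# Hypothetical volumes: T2-weighted at fine resolution, DWI at coarse resolution.
t2w = np.random.rand(64, 64, 32)   # reference voxel grid
dwi = np.random.rand(32, 32, 16)   # half the resolution along each axis

# Per-axis scale factors that map the DWI grid onto the T2W grid.
factors = [t / d for t, d in zip(t2w.shape, dwi.shape)]
dwi_on_t2w_grid = zoom(dwi, factors, order=1)  # trilinear interpolation

assert dwi_on_t2w_grid.shape == t2w.shape
# Now index (i, j, k) refers to the same location in both volumes, so the
# parameters can be stacked voxel-to-voxel as input channels for a model.
stacked = np.stack([t2w, dwi_on_t2w_grid], axis=0)  # shape (2, 64, 64, 32)
print(stacked.shape)
```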
Who is smarter, the human or the computer?
In my Introduction to this series, I mentioned that AI in medicine could be a double-edged sword. At this stage of development, CAD will only be as good as the teams that feed data in and continually refine the analytic algorithms. As healthcare finance writer Jeff Gorke observed in a Forbes column, “Bad ‘training’ of the computer and bad data inputs lead to bad and/or inaccurate outputs.”[iii]
AI in medicine is not an experiment, toy or hobby. The purpose is not to see whether computers are smarter or better than radiologic readers, who will have the final diagnostic say in any case. The term computer-aided diagnosis (or computer-augmented diagnosis) is the best descriptor for its intended purpose: assistance that can enhance accuracy and efficiency, liberating doctors for something no computer on earth can offer: personal patient contact. Experts agree that DL may not have any bedside manner to speak of, but it has begun to fulfill its diagnostic promise in many cancers, including prostate cancer.
NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you are experiencing pelvic pain, or have any other health concerns or questions of a personal medical nature.
[i] Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Comput Methods Programs Biomed. 2020 Jun;189:105316.
[ii] MathWorks. “What Is Deep Learning?” https://www.mathworks.com/discovery/deep-learning.html
[iii] Gorke, Jeff. “AI and Machine Learning in Healthcare: Garbage In, Garbage Out.” Forbes, June 18, 2020. https://www.forbes.com/sites/jeffgorke/2020/06/18/ai-and-machine-learning-in-healthcare-garbage-in-garbage-out/#7b73367a50a7