The second article in the series discusses the application of AI in developing clinical prediction models. These models aim to forecast patient outcomes, guide treatment decisions, and improve the efficiency of clinical trials. In cancer research, prediction models can assist clinicians in determining which treatments may be most effective for a patient based on a vast range of variables, from tumor type to genetic information. Yet, as Lyman and Kuderer note, developing a reliable model is no simple task. Large amounts of high-quality data are needed, as are rigorous validation procedures to ensure the model works across diverse patient groups.
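To make the modeling and validation requirements concrete, the sketch below fits a simple prediction model to synthetic data and checks it with cross-validation. It is purely illustrative: the features (tumor size, a mutation flag, age), the outcome, and the data are hypothetical and are not drawn from the series.

# Illustrative only: a toy clinical prediction model on synthetic data.
# The feature names (tumor_size, mutation_present, age) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
tumor_size = rng.normal(3.0, 1.0, n)        # cm
mutation_present = rng.integers(0, 2, n)    # 0/1 flag for a hypothetical mutation
age = rng.normal(62, 10, n)                 # years

# Synthetic "response to treatment" outcome driven by the three features.
logit = -2.0 + 0.6 * tumor_size + 1.1 * mutation_present + 0.02 * (age - 62)
p = 1 / (1 + np.exp(-logit))
responded = rng.binomial(1, p)

X = np.column_stack([tumor_size, mutation_present, age])
model = LogisticRegression(max_iter=1000)

# Cross-validation is only an internal check of model performance.
scores = cross_val_score(model, X, responded, cv=5, scoring="roc_auc")
print("Cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))

Even in this toy setting, internal cross-validation only establishes that the model fits the data it was trained on; showing that it generalizes to other institutions and patient populations requires separate, external validation.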
Bias presents a significant challenge in AI modeling for healthcare. If the data used to train a model are biased, for instance by underrepresenting certain demographic groups, its predictions may be inaccurate when applied to a broader, more diverse population. The "black box" problem, in which the model's decision-making process is unclear even to experts, makes it difficult for clinicians to identify which specific biases are at play and to correct for them. Moreover, many studies of AI prediction models fail to adhere to established reporting standards, such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines, which specify how model development and validation should be documented. This lack of transparency makes it harder for other researchers to replicate or validate AI findings, further complicating the integration of AI into clinical practice.
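The underrepresentation problem can be made concrete with a small synthetic example: a model trained mostly on one group can look accurate overall while performing poorly, here even worse than chance, for an underrepresented group in which the biomarker-outcome relationship differs. The groups, biomarker, and effect sizes are invented for illustration.

# Illustrative only: how underrepresentation in training data can surface
# as a performance gap between demographic groups. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_group(n, slope):
    """Synthetic cohort in which one biomarker predicts the outcome with a
    group-specific strength (slope)."""
    biomarker = rng.normal(0.0, 1.0, n)
    p = 1 / (1 + np.exp(-(slope * biomarker)))
    outcome = rng.binomial(1, p)
    return biomarker.reshape(-1, 1), outcome

# Group A dominates the training data; group B is underrepresented and the
# biomarker-outcome relationship is weaker (here reversed) in that group.
Xa, ya = make_group(5000, slope=2.0)
Xb, yb = make_group(200, slope=-0.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out test sets drawn from each group's own generating process.
Xa_test, ya_test = make_group(2000, slope=2.0)
Xb_test, yb_test = make_group(2000, slope=-0.5)
print("AUC, well-represented group A:",
      round(roc_auc_score(ya_test, model.predict_proba(Xa_test)[:, 1]), 2))
print("AUC, underrepresented group B:",
      round(roc_auc_score(yb_test, model.predict_proba(Xb_test)[:, 1]), 2))

Validating and reporting performance stratified by demographic subgroup is one way such gaps become visible before a model reaches the clinic.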
In their most recent publication in this series, Drs. Lyman and Kuderer discuss the intrinsic limitations of AI, which go beyond technical challenges such as data bias and model opacity. They draw on philosophical and mathematical principles to argue that AI may never fully replicate human cognition or consciousness. The authors cite results such as Gödel's Incompleteness Theorem and Turing's Halting Problem, which show that there are true statements no formal system can prove and problems no algorithm can solve. On this basis they argue that human intelligence, with its ability to grasp abstract concepts and develop new insights, cannot be entirely replicated by AI. They also invoke Roger Penrose's argument that aspects of human thought lie beyond what can be formalized as a set of rules, suggesting that AI may never achieve the depth of understanding and self-awareness found in human beings.
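For readers unfamiliar with the second of these results, the standard diagonalization argument for the Halting Problem illustrates the kind of principled limit involved; this is the textbook argument, not a derivation from the article.

Suppose a total computable function $H$ decided halting:
$$
H(P, x) = \begin{cases} 1 & \text{if program } P \text{ halts on input } x, \\ 0 & \text{otherwise.} \end{cases}
$$
Define a program $D$ that, given a program $P$, runs forever if $H(P, P) = 1$ and halts if $H(P, P) = 0$. Then $D(D)$ halts if and only if $H(D, D) = 0$, that is, if and only if $D(D)$ does not halt, a contradiction. Hence no such $H$ can exist: halting is a well-defined question that no algorithm can answer in general.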
A core limitation of AI is its reliance on inductive reasoning, where predictions are made based on observed data patterns. While this approach works well for repetitive, structured tasks, it falls short when unexpected or novel situations arise. Human intelligence, on the other hand, often uses abductive reasoning—the ability to form hypotheses and generate new ideas in response to surprises or anomalies. This capability is critical in fields like medicine, where clinicians must make quick decisions based on incomplete information, often drawing on intuition and empathy, elements that AI cannot emulate.
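A small synthetic example of this failure mode: a model fit to past cases confidently extrapolates the pattern it has learned into a novel patient subtype where, by construction, the pattern no longer holds. The marker, threshold, and subtype are hypothetical.

# Illustrative only: an inductively trained model extrapolates its learned
# pattern into a novel regime where that pattern no longer applies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Historical cases: a hypothetical marker in the 0-5 range, where higher
# values mean a higher chance of treatment response.
marker_train = rng.uniform(0, 5, 3000)
p_train = 1 / (1 + np.exp(-(marker_train - 2.5)))
response_train = rng.binomial(1, p_train)
model = LogisticRegression().fit(marker_train.reshape(-1, 1), response_train)

# Novel cases: a previously unseen subtype with marker values above 5 in
# which, by assumption, high marker levels no longer predict response.
marker_new = rng.uniform(6, 10, 1000)
response_new = rng.binomial(1, 0.2, 1000)   # mostly non-responders
pred = model.predict(marker_new.reshape(-1, 1))
print("Accuracy on the novel subtype:", round((pred == response_new).mean(), 2))

The model cannot recognize that it has left the territory its training data described; a clinician faced with an unfamiliar presentation can notice the anomaly and reason abductively toward a new explanation.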
Overall, AI has transformative potential in cancer research, offering tools that can analyze data and assist in clinical decision-making with remarkable speed. However, the inherent limitations of AI, including its susceptibility to bias, its opacity, the ethical questions it raises, and its inability to replicate human intuition and empathy, mean that it should complement, not replace, the expertise of healthcare professionals. Future research and development should focus on improving the transparency, accuracy, and ethical use of AI models in medicine. Only through a balanced approach that integrates AI's computational strengths with the irreplaceable insights of human clinicians can we ensure that AI truly enhances cancer care and ultimately benefits patients and society.