Balancing innovation and human insight: navigating the promise and limitations of AI in cancer research

From the Gary Lyman research group, Public Health Sciences Division

Artificial Intelligence (AI) is transforming many aspects of our lives, including the field of medicine. In cancer research—where timely and accurate predictions are essential for effective treatment—AI promises to revolutionize clinical decision-making and patient outcomes. However, AI also has notable limitations, particularly when applied to complex and deeply human tasks like understanding patient needs and interpreting nuanced medical data. A series of articles published in Cancer Investigation by Drs. Gary Lyman and Nicole Kuderer provides a comprehensive exploration of AI’s potential, its limitations, and the essential role of human oversight in healthcare.

In their introductory article, Drs. Lyman and Kuderer begin by examining the nature of intelligence itself, distinguishing human cognitive abilities from machine learning (ML) capabilities. They highlight that human intelligence involves not only data processing but also perception, creativity, and emotional awareness, capacities that go beyond logical calculation. AI, despite its impressive advances, currently lacks the depth of subjective experience unique to humans. Human intelligence includes symbolic thinking, in which symbols evoke internal, abstract responses such as emotions or insights—elements that machines cannot replicate, as they are confined to processing signs and patterns according to rules and data.

AI in cancer research, however, is undeniably impactful. Through ML, computers are trained to recognize complex patterns in data, enabling them to make predictions in diagnostic imaging, patient prognosis, and even treatment planning. While traditional ML focuses on learning from labeled data (supervised learning), unsupervised learning allows models to find hidden patterns in unlabeled datasets, expanding AI’s scope in fields like oncology. Deep learning—a more advanced form of ML—uses multilayered neural networks loosely inspired by the structure of the brain, further enhancing AI’s ability to handle large volumes of medical data with remarkable speed and accuracy.
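To make these distinctions concrete, here is a minimal sketch, not drawn from the articles themselves, that uses the open-source scikit-learn library on synthetic data: a supervised model learns from labeled outcomes, while an unsupervised method searches for structure without any labels.

```python
# Illustrative sketch (not from the articles): supervised vs. unsupervised
# learning with scikit-learn on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic stand-in for tabular clinical data: 500 "patients", 10 features,
# with a binary outcome label y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Supervised learning: known outcome labels guide the model's training.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"Supervised training accuracy: {clf.score(X, y):.3f}")

# Unsupervised learning: no labels; the algorithm looks for hidden structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", [(clusters == k).sum() for k in (0, 1)])
```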

The second article in the series discusses the application of AI in developing clinical prediction models. These models aim to forecast patient outcomes, guide treatment decisions, and improve the efficiency of clinical trials. In cancer research, prediction models can assist clinicians in determining which treatments may be most effective for a patient based on a vast range of variables, from tumor type to genetic information. Yet, as Lyman and Kuderer note, developing a reliable model is no simple task. Large amounts of high-quality data are needed, as are rigorous validation procedures to ensure the model works across diverse patient groups.
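As a hypothetical illustration of that workflow (a sketch using synthetic data and the scikit-learn library, not the authors’ methods), the snippet below pairs internal cross-validation during development with evaluation on held-out data, a simplified stand-in for validating a model in a new patient population.

```python
# Illustrative sketch (synthetic data, not the authors' methods): developing
# and validating a simple clinical prediction model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic cohort: 1,000 "patients" with 15 predictor variables.
X, y = make_classification(n_samples=1000, n_features=15, random_state=1)

# Hold out 30% of the cohort; the model never sees it during development.
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.3, random_state=1
)

model = LogisticRegression(max_iter=1000)

# Internal validation: cross-validated discrimination on the development set.
cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {cv_auc.mean():.3f}")

# Held-out evaluation approximates performance on previously unseen patients.
model.fit(X_dev, y_dev)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Held-out AUC: {val_auc:.3f}")
```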

Bias presents a significant challenge in AI modeling for healthcare. If the data used to train an AI model is biased—perhaps underrepresenting certain demographic groups—the model’s predictions may be inaccurate when applied to a broader, more diverse population. The “black box” problem, in which a model’s decision-making process is opaque even to experts, makes it difficult for clinicians to identify which biases are at play and to correct for them. Moreover, many studies of AI prediction models fail to adhere to strict reporting standards, such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines, which specify how model development and validation should be documented. This lack of transparency makes it harder for other researchers to replicate or validate AI findings, further complicating the integration of AI into clinical practice.
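One simplified but concrete way to probe for such bias is a subgroup performance audit. The sketch below (hypothetical data and group labels, not drawn from the articles) compares a model’s discrimination, measured by the area under the ROC curve (AUC), between an overrepresented and an underrepresented group.

```python
# Illustrative sketch (hypothetical data and groups): auditing a trained model
# for performance gaps across demographic subgroups.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=2)

# Hypothetical demographic label, deliberately imbalanced (80% / 20%) to
# mimic underrepresentation in the training data.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=len(y), p=[0.8, 0.2])

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# A sizable AUC gap between subgroups is a red flag worth investigating
# before a model is used clinically.
for g in ("A", "B"):
    mask = group == g
    auc = roc_auc_score(y[mask], scores[mask])
    print(f"Group {g}: n={mask.sum():4d}, AUC={auc:.3f}")
```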

In their most recent publication in this series, Drs. Lyman and Kuderer discuss the intrinsic limitations of AI, which go beyond technical challenges like data bias and model opacity. They draw on philosophical and mathematical principles to argue that AI may never fully replicate human cognition or consciousness. The authors reference Gödel’s Incompleteness Theorems, which show that any sufficiently powerful formal system contains true statements it cannot prove, and Turing’s Halting Problem, which shows that no algorithm can decide, for every program and input, whether the program will eventually halt. These results, the authors argue, imply that human intelligence, with its ability to grasp abstract concepts and develop new insights, cannot be entirely replicated by AI. Roger Penrose has similarly argued that aspects of human thought lie beyond what can be formalized as a set of rules, suggesting that AI may never achieve the depth of understanding and self-awareness found in human beings.
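To make Turing’s result concrete, the sketch below (an illustration in Python, not from the articles) replays the classic diagonal argument: assume a universal halting test exists, then construct a program that defeats it.

```python
# Illustrative sketch (not from the articles): the diagonal argument behind
# Turing's Halting Problem. `halts` is a hypothetical oracle that, as the
# argument shows, no real program can implement.

def halts(program, argument) -> bool:
    """Hypothetically returns True iff program(argument) eventually halts."""
    raise NotImplementedError("No general halting test can exist.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    # oracle says "loops forever" -> halt immediately

# Consider paradox(paradox): if halts(paradox, paradox) returns True, then
# paradox(paradox) loops forever; if it returns False, paradox(paradox)
# halts. Either answer is wrong, so the oracle `halts` cannot exist.
```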

A core limitation of AI is its reliance on inductive reasoning, where predictions are made based on observed data patterns. While this approach works well for repetitive, structured tasks, it falls short when unexpected or novel situations arise. Human intelligence, on the other hand, often uses abductive reasoning—the ability to form hypotheses and generate new ideas in response to surprises or anomalies. This capability is critical in fields like medicine, where clinicians must make quick decisions based on incomplete information, often drawing on intuition and empathy, elements that AI cannot emulate.

Overall, AI has transformative potential in cancer research, offering tools that can analyze data and assist in clinical decision-making with remarkable speed. However, the inherent limitations of AI—its susceptibility to bias, opacity, ethical issues, and inability to replicate human intuition and empathy—mean that it should complement, not replace, the expertise of healthcare professionals. Future research and development should focus on improving the transparency, accuracy, and ethical use of AI models in medicine. Only through a balanced approach that integrates AI’s computational strengths with the irreplaceable insights of human clinicians can we ensure that AI truly enhances cancer care and ultimately benefits patients and society.


The authors reported no funding associated with the works featured in this article.

Lyman, G. H., & Kuderer, N. M. (2024). Artificial Intelligence in Cancer Clinical Research: I. Introduction. Cancer Investigation, 42(6), 443–446.

Lyman, G. H., & Kuderer, N. M. (2024). Artificial Intelligence in Cancer Clinical Research: II. Development and Validation of Clinical Prediction Models. Cancer Investigation, 42(6), 447–451.

Lyman, G. H., & Kuderer, N. M. (2024). Artificial Intelligence in Cancer Clinical Research: IV. Inherent Limitations of Artificial Intelligence. Cancer Investigation, 42(9), 741–744.

Darya Moosavi

Science Spotlight writer Darya Moosavi is a postdoctoral research fellow within Johanna Lampe's research group at Fred Hutch. Darya studies the nuanced connections between diet, gut epithelium, and gut microbiome in relation to colorectal cancer using high-dimensional approaches.