10 insightful questions about AI answered by our experts

In this article, we bring together two unique perspectives united by the shared mission of improving the future of breast cancer screening. We gathered questions from several women, including some breast cancer survivors, and asked Dr Mehran Taghipour-Gorjikolaie, our own leading scientist from LSBU who specialises in AI, to answer them. The result is an insightful conversation that delves into the hopes, challenges, and real-world impact of AI in oncology and explains how it is transforming cancer diagnosis, treatment, and patient support. We hope to contribute to the dialogue between people with lived experience and scientists that is shaping innovation in healthcare under the Horizon Europe programme.

MammoScreen (MS): How does AI help doctors make better decisions about diagnosis and treatment?

Dr. Mehran Taghipour-Gorjikolaie (MTG): AI assists doctors by rapidly analysing vast amounts of complex medical data to identify subtle patterns that might be difficult for humans to detect. For instance, in the context of breast cancer detection, AI can analyse microwave signal responses from breast tissue and highlight anomalies that may indicate the presence of a tumour. By providing data-driven insights and enhancing pattern recognition, AI supports clinicians in making more accurate, timely, and personalized diagnostic and treatment decisions.
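
To make the idea of highlighting anomalies a little more concrete, here is a minimal, purely illustrative sketch using a generic anomaly-detection model from the open-source scikit-learn library. It is not the MammoScreen algorithm, and the signal features and thresholds are invented for demonstration only.

```python
# Hypothetical sketch of anomaly highlighting: an unsupervised model learns
# what "typical" signal measurements look like and scores how unusual each
# new measurement is. Not the MammoScreen method; all values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
typical_signals = rng.normal(loc=0.0, scale=1.0, size=(300, 6))   # routine cases
new_cases = np.vstack([rng.normal(size=(4, 6)),                   # look typical
                       rng.normal(loc=4.0, size=(1, 6))])         # looks unusual

detector = IsolationForest(random_state=0).fit(typical_signals)
scores = detector.decision_function(new_cases)   # lower score = more anomalous

for i, s in enumerate(scores):
    flag = "flag for review" if s < 0 else "typical"
    print(f"case {i}: anomaly score {s:+.3f} -> {flag}")
```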

MS: What type of medical data does AI analyse, and how does it learn from it? 

MTG: AI can analyse a wide range of medical data, including medical images (such as MRIs, CT scans, and mammograms), physiological waveforms (like ECGs or microwave signal responses), laboratory test results, and even unstructured data such as clinical notes and patient records. It typically learns through a process known as supervised learning, in which it is trained on large datasets of labelled examples: by analysing thousands – or even millions – of cases with known outcomes (such as confirmed diagnoses), it gradually learns the patterns that distinguish one outcome from another.
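
As a rough illustration of what supervised learning looks like in code, the sketch below trains a simple classifier on labelled examples with scikit-learn. The features, labels, and model choice are assumptions made for this example and are not drawn from any real clinical dataset.

```python
# Minimal sketch of supervised learning: a classifier is shown many
# labelled examples (features + known outcome) and learns a decision rule.
# All numbers below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 1,000 "cases", each described by 5 numeric features
# (in reality these could be measurements derived from signals or images).
X = rng.normal(size=(1000, 5))
# A known outcome for each case: 0 = healthy, 1 = anomaly present.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out part of the data so the model is judged on cases it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)            # "learning" from labelled examples
print("accuracy on unseen cases:", model.score(X_test, y_test))
```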

MS: Can AI detect diseases earlier or more accurately than traditional methods?

MTG: It has been demonstrated that it can, in many cases. For instance, in the case of breast cancer, traditional imaging might miss tumours in dense breast tissue. AI that analyses microwave signals, as we are developing in MammoScreen, can offer a different type of insight, potentially detecting small or early-stage tumours that aren’t yet visible on a mammogram. Early results in research are certainly promising, but for the moment AI is only used to support, not replace, traditional diagnostic methods.

MS: How accurate are AI tools compared to human doctors?

MTG: AI tools can match or sometimes exceed human performance in specific tasks, like spotting signs of disease in images. However, they are most powerful when combined with expert review. For example, AI might catch a tiny signal that a human might miss, but the doctor adds context (such as patient history) to decide what it really means.

MS: How can we ensure that AI doesn’t make mistakes or miss important details?

MTG: Ensuring the safety and reliability of AI in healthcare requires a multi-layered approach. First, AI systems must be trained on large, diverse, and high-quality datasets that represent the full spectrum of patient populations and conditions. Rigorous validation and testing are essential before deployment in clinical settings. Additionally, AI models can be designed to flag cases where their confidence is low, prompting human review rather than automated decisions. Continuous performance monitoring, regular updates, and real-world audits are also critical to detect and correct any errors or biases early. Ultimately, AI should be seen as a tool that supports – not replaces – clinical expertise. 
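
One of the safeguards mentioned above, routing low-confidence predictions to human review rather than deciding automatically, can be sketched as a simple triage rule. The thresholds, wording, and function name below are hypothetical.

```python
# Hypothetical sketch: route low-confidence AI predictions to a human reviewer.
def triage(probability_of_anomaly: float, low: float = 0.2, high: float = 0.8) -> str:
    """Return a suggested action based on the model's confidence.

    Cases where the model is unsure (probability between `low` and `high`)
    are flagged for clinician review rather than decided automatically.
    """
    if probability_of_anomaly >= high:
        return "flag as suspicious - clinician confirms"
    if probability_of_anomaly <= low:
        return "likely normal - clinician still reviews routinely"
    return "uncertain - send for expert human review"

for p in (0.05, 0.55, 0.93):
    print(f"model probability {p:.2f}: {triage(p)}")
```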

MS: Can AI explain its decisions or is it a “black box” that doctors must trust blindly?

MTG: This is one of the key challenges in integrating AI into clinical practice. While some AI models – particularly deep learning systems – have traditionally functioned as “black boxes,” offering little insight into how decisions are made, the field is rapidly advancing toward greater transparency. Increasingly, researchers are developing explainable AI (XAI) techniques that highlight which features of a signal, image, or dataset contributed most to a particular decision. These tools can, for example, show specific areas of a medical image that influenced the AI’s diagnosis. This interpretability is crucial for building clinicians’ trust, supporting informed decision-making, and ensuring AI serves as a transparent, accountable partner in patient care.
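
As a small illustration of one model-agnostic explainability technique, the sketch below uses permutation importance: each input feature is shuffled in turn, and the features whose shuffling most degrades the model's performance are the ones that influenced its decisions most. The model, data, and feature names are synthetic placeholders, not an XAI method specific to MammoScreen.

```python
# Sketch of a simple explainability technique (permutation importance):
# shuffle each input feature and measure how much the model's accuracy drops;
# features whose shuffling hurts most contributed most to the decisions.
# Data and feature names here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["feature_A", "feature_B", "feature_C", "feature_D"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # outcome driven mainly by A and C

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance ~ {score:.3f}")
```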

MS: How can AI improve my personal healthcare experience and treatment outcomes?

MTG: AI can lead to faster diagnoses, fewer unnecessary tests, and more personalized treatment plans. In breast cancer screening, for instance, AI may reduce false alarms (which cause stress) and catch cancers earlier, which can improve survival rates and quality of life.

MS: What happens if an AI system gives a recommendation that conflicts with a doctor’s opinion?

MTG: Doctors always have the final say. AI provides another perspective, but it’s the doctor’s job to decide what’s best for the patient. AI-based models work as assistants to doctors; they cannot make the final decision. Moreover, some systems can trigger a second review or deeper investigation in cases of disagreement.

MS: Are there risks to relying on AI in medicine, and how are they managed?

MTG: While AI has the potential to greatly support clinical decision-making, it is still too early to rely on AI models without human oversight. There are inherent risks – such as misdiagnosis, overfitting to specific datasets, or failing to generalize across diverse populations. These risks can be mitigated through several key strategies: training AI models on large, diverse, and high-quality datasets; integrating information at multiple levels – from raw sensor data to final decision-making stages (a process known as multi-level data fusion); and rigorously testing the models on previously unseen “blind” datasets to assess real-world performance. Fine-tuning and continuous monitoring are also essential to ensure safe and reliable operation. Importantly, AI should be viewed as a clinical decision support tool – meant to assist, not replace, healthcare professionals.
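
The "blind" testing idea mentioned above can be illustrated with cross-validation, which repeatedly holds out part of the data and scores the model only on cases it never saw during training. Again, the data and model below are synthetic placeholders rather than any real evaluation pipeline.

```python
# Sketch of checking whether a model generalises beyond the data it was
# trained on: cross-validation repeatedly holds out a portion of the data
# and scores the model only on those unseen cases. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("accuracy on held-out folds:", np.round(scores, 3))
print("mean:", round(scores.mean(), 3), "std:", round(scores.std(), 3))
```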

MS: How do you see AI evolving in healthcare, and will it ever replace doctors?

MTG: AI will keep getting better at helping with diagnosis, planning treatment, and even predicting future health risks. But it won’t replace doctors. Medicine is not just data; it is also empathy, communication, and expertise. AI will be like a super-smart assistant, helping doctors spend more time with patients and make better decisions, not taking over their role.
