Should You Choose Radiology in 2025? A No Bull$hit Guide

Figure 1. Diagnostic accuracy across humans and multimodal AI systems on the Radiology’s Last Exam (RadLE v1) benchmark. Board-certified radiologists achieved the highest accuracy (0.83), followed by trainees (0.45). Frontier models under-performed, with GPT-5 (0.30) and Gemini 2.5 Pro (0.29) below human benchmarks. Source: Datta et al., 2025 (arXiv).

Why This Blog Matters (More Than You Think)

If you're reading this, you're probably lying awake wondering if radiology is a safe bet anymore. I get it. I was in the same place some time back, except I was convinced AI would eat my career before I finished residency.

I read the breathless headlines about deep learning crushing board-certified radiologists on test datasets. I also watched Geoffrey Hinton tell a room full of people to stop training radiologists because neural networks would do it better and cheaper. My own AI journey started around the same time (2017), and I began spending a lot more time at IIT Madras than on the JIPMER campus, where I was doing my MBBS (thank you to my friends for the proxies).

Eight years later, I'm still here. And so is every radiologist I trained with. And honestly, they're busier than ever.

Geoffrey is a smart person (I mean he won the Nobel Prize) and an inspiration to many of us, but he was not a radiologist. There are a lot of things which AI will replace before it comes for radiologists. I feel he could have taken a safer bet.

So, here's the thing: the doomers screaming "radiology is dead!" and the optimists insisting "AI will never replace doctors!" are both wrong. On X (formerly Twitter), I have often taken both sides, depending on which phase of the Gartner Hype Cycle for Radiology AI I was traversing. As I climb the slope of enlightenment and (hopefully) approach the plateau of productivity, the reality I have come to realise is more complicated, more interesting and far more dependent on law and policy than any self-proclaimed radiology guru on X has experience in or wants to admit. I claim no expertise either, but having been in student and RDA unions throughout my life, and sitting in policy roundtables even today, I have seen how policies and laws get created, and how hard their outcomes are to predict even for domain experts (or the economists helping run the country).

After spending nearly a decade building AI systems, validating AI in real hospitals and watching many of them fail to deploy or perform as promised, I've learned my bitter lesson: building the technology is only half the story. It is probably much easier today to develop an AI model than to deploy it at scale (before AI developers get offended, I am comparing with the tech we had in 2017 when I started). A lot of AI implementation depends on regulation, liability, workforce shortages and human factors that no human being can predict with a 100% guarantee.

This no bull$hit guide is my attempt to cut through the noise and give you a semi-evidence-based picture. If I wrote a paper, none of you would read it! But this blog has what I believe is all you need to know as you decide on your residency in 2025. I cannot answer questions like "Will radiologists be replaced?" or "Should I take radiology?", because I do not think anyone knows the answer today or should pretend to.

TL;DR (in case you do not want to read through the entire article and miss the opportunity I have for you at the end)

Here's the reality in short: AI today is getting better at narrow pattern-recognition tasks and even generalist tasks, and it will keep improving fast. But replacing doctors isn't blocked by accuracy anymore. It's blocked by law and liability.

Regulation is tightening everywhere. The EU AI Act classifies most clinical AI as high-risk, requiring explicit human oversight.1,2 The FDA's approval list is long but every device assumes a human in the loop.3 In India, the Digital Personal Data Protection Act and medical device rules are not going to make AI development or deployment easy.9,11

And here's the kicker: because radiologists are scarce, the UK just spent £216 million outsourcing reads in a single year6 and policies are bending toward safe augmentation, not autonomy. Yet.

A flood of mediocre models is coming, which means stricter post-market monitoring and recalls as real-world errors surface. You'll become dependent on these tools, but with discipline and oversight. Your career strategy? Lean into interventional radiology, preventive imaging programs, clinic leadership, and AI governance literacy. Those are your moats.

What AI Actually Does Today

Let's start with what works today, not in some speculative future. AI tools in radiology right now mainly help with triage (flagging urgent cases), quality control (catching technical errors), measurements (tumor volumes, bone density) and report drafting.

A big cross-European survey of radiologists found these are the functions doctors actually trust and use in daily practice.4 Notice the pattern? All of these need you to still read the scan and sign the report.

So why isn't there "replacement"? Three reasons, and they're all about systems, not just technology. And that is why Geoffrey miscalculated: he only factored in the technology. He is a great scientist, but he was not as engaged with policy then as he is today.

First, we have regulations. The FDA has cleared over 900 AI devices (as per the latest literature, but I think it is over 1000 now) for medical use,3 and yes, that number sounds scary. But if you actually read the approval letters, they're for narrow, specific intended uses with explicit disclaimers that a qualified physician must review the output. These aren't autonomous diagnostic systems. They're assistants that generate suggestions, and you're liable for what gets reported. The legal framework treats them more like a second opinion tool (researchers, please do not catch me on this term - this blog is meant for medical students and residents) than a replacement for radiologists.

Second, current risk classification of AI tools. Under the EU AI Act (already influencing India's policy), most (not all) clinical and diagnostic AI falls into the "high-risk" category.1,2 That means mandatory risk management systems, high-quality training data with known biases documented, transparent user information, and most critically, explicit human oversight baked into the design. AI governance work is becoming clinical work. If you're annoyed by paperwork now, wait until you're the designated model steward for your department. You will want an AI for that work desperately lol.

Figure 2. EU AI Act: Implementation & compliance milestones (snapshot). Source: Future of Privacy Forum (FPF), “EU AI Act: A Comprehensive Implementation & Compliance Timeline” (updated Apr 15 2025). Direct PDF version here. For official application dates, see the European Commission timeline here.

Third, there are human factors. Even when AI is accurate, humans mess up the interaction. The literature is full of studies showing automation bias, where experienced readers get nudged by wrong AI suggestions and miss things they'd normally catch.12,13,14 Radiologists in one mammography study were more likely to overlook subtle cancers when an AI wrongly flagged a different area as suspicious.12 Another study on cerebral aneurysm detection found that wrong AI suggestions actively degraded reader performance.13 This isn't theoretical; it's happening in pilot studies right now. The answer isn't to throw AI out, but to design guardrails: deliberate training, cross-checks and incident reporting systems that actually work.

Why Governments Move Slower Than AI Startups Want

Here's something Silicon Valley and Bangalore healthcare AI startups don't want to hear: laws change when shortages bite, not when algorithms get better. And the evidence backs this up everywhere you look.

The Royal College of Radiologists released their 2024 UK workforce census, and the picture is stark! The demand for imaging is outpacing radiologist supply by a wide margin.5 The NHS is stretched so thin that in 2024 alone, they spent a record £216 million outsourcing radiology reads to private firms and teleradiology companies.6 Backlogs are measured in months for non-urgent scans. Politicians know this. Hospital administrators know this. This gradually shapes policy more than any AI benchmark.

Figure 3. Radiology workforce gap to clear England’s 6-week CT/MRI wait within one month (trend to December 2024). Source: Royal College of Radiologists, State of the Wait: Diagnostic Imaging; monthly update confirming the December 2024 estimate (346 radiologists) here.

Also, when governments face scarcity, they don't jump straight to "replace doctors with algorithms." They bend toward safe augmentation: tools that help existing radiologists work faster and catch more, but still require a licensed professional to sign off. Why? Because when something goes wrong, someone needs to be legally accountable, and right now, algorithms can't be sued.20

In India, AI development and deployment will be shaped by two big frameworks. First, the Digital Personal Data Protection Act (DPDP), which gives patients consent rights over how their health data is used, including by AI systems.9 Second, the Central Drugs Standard Control Organization (CDSCO) classifies AI software as a medical device under the Medical Devices Rules 2017, which means it needs approval, post-market surveillance and traceable use.11 The practical result? You won't see "AI-only" reports in Indian hospitals anytime soon, unless you attend my workshops or seminars, where I've shown the example from the only hospital in India that is doing this (and believe me, the clinical residents are REALLY pissed at the false negatives). There will always be a radiologist's name and signature on that report for the foreseeable future, with audit trails showing how the AI contributed.

The Coming Wave of Mediocre Models (and How They'll Be Regulated)

Here's an uncomfortable truth: model-making has been democratized, but deployment and evaluation haven't kept pace. Anyone with a GPU for a few weeks can train or fine-tune a model and claim it "detects pneumonia." There are startups launching AI radiology tools every single week. Most of them are very, very mediocre.

Why? Because these models fall apart when they see data from a different scanner, a different patient population, or even a different hospital's imaging protocols. We call this "out-of-distribution" failure, and it's a massive problem that most companies don't advertise until after they've signed contracts. A really big AI company exited India after failing miserably at one of India's top hospitals for triaging CT scans. The HOD of the Radiology department is roping us in now to do evaluations before he makes the same mistake again, even for a tool as simple as an ambient scribe. Because once you lose money and burn resources once, you NEVER want to be in that position again.
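What does a local evaluation actually look like? At its simplest, before trusting a vendor's claimed numbers, you compute sensitivity and specificity on a sample of your own hospital's cases, read against radiologist ground truth. Here is a minimal sketch; the function name and the case labels are made up for illustration, not from any real deployment.

```python
# Minimal local-evaluation sketch: compare an AI tool's predictions against
# radiologist ground truth on YOUR hospital's cases, not the vendor's test set.
# All data below is hypothetical.

def local_metrics(predictions, ground_truth):
    """Both args are lists of 0/1 labels (1 = finding present)."""
    tp = sum(p == 1 and g == 1 for p, g in zip(predictions, ground_truth))
    tn = sum(p == 0 and g == 0 for p, g in zip(predictions, ground_truth))
    fp = sum(p == 1 and g == 0 for p, g in zip(predictions, ground_truth))
    fn = sum(p == 0 and g == 1 for p, g in zip(predictions, ground_truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Ten illustrative cases: AI predictions vs radiologist ground truth.
preds = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
sens, spec = local_metrics(preds, truth)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # prints sensitivity=0.75 specificity=0.83
```

If the numbers on your own case mix fall well below the brochure, that is your out-of-distribution failure showing up before it costs you a contract.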

Regulators know this, which is why frameworks are getting stricter, not looser. The FDA's Predetermined Change Control Plan (PCCP) allows AI companies to update their models but only within a pre-approved envelope of changes, and they have to prove the updates don't break safety or effectiveness.3 The World Health Organization's guidance on large multimodal models is even more explicit: it calls for local validation on your specific patient population and equipment before you trust any model with clinical decisions, plus ongoing monitoring for drift.8 If a model starts quietly degrading because your patient mix changed or your CT scanner got a software update, you need to catch it.
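To make the drift idea concrete, here is a toy sketch of the kind of check a department could run quarterly, assuming you log which AI flags were later confirmed by the signing radiologist. The function, thresholds and numbers are all illustrative assumptions, not any regulator's prescribed method.

```python
# Illustrative drift check: compare an AI tool's recent confirmed-flag rate
# against its locally validated baseline. Thresholds are hypothetical.

def check_drift(baseline_sensitivity, recent_outcomes, tolerance=0.05, min_cases=50):
    """recent_outcomes: list of booleans (True = AI flag confirmed by radiologist)."""
    if len(recent_outcomes) < min_cases:
        return "insufficient data"  # don't raise alarms on tiny samples
    recent = sum(recent_outcomes) / len(recent_outcomes)
    if recent < baseline_sensitivity - tolerance:
        return f"ALERT: sensitivity {recent:.2f} below baseline {baseline_sensitivity:.2f}"
    return f"OK: sensitivity {recent:.2f}"

# Example: local validation gave a 0.90 baseline; last quarter looks worse.
outcomes = [True] * 40 + [False] * 12  # 40 confirmed flags, 12 misses
print(check_drift(0.90, outcomes))  # flags the drop for review
```

A real monitoring setup would use proper statistical tests and stratify by scanner and population, but the principle is the same: a fixed baseline, a rolling window, and an alert that a human investigates.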

For you as a resident: expect more work, not less. Someone in your department will need to be the AI steward: tracking performance metrics, running quarterly validation checks, helping set thresholds (I did it during my Senior Residency, and if you end up in one of the handful of residencies that have AI deployments, you will too), documenting when the AI influenced a decision and knowing when to pull Skynet's plug if things go wrong. That someone might be you.

You'll Become Dependent on AI (and That's Okay, If You're Smart About It)

If I am honest, calculator-style reliance is inevitable. Once you get used to AI triaging your worklist and pre-measuring nodules, working without it feels like going back to film. Here is an open secret that you all know: residents in most places are brought in to get the work done and rarely to be trained well. So if you become too dependent on AI for the same, the work will get done but your value as a radiologist will decrease.

The literature already flags this as automation bias/neglect and hints at de-skilling when assistive tools are suddenly withdrawn. One of my superstar co-authors, the very renowned physician-scientist Dr. Eric Topol, wrote about this just last week.12,13,14 Residents trained with AI available struggle more when they don't have access to it, similar to how GPS dependence has changed how we navigate. We are in fact doing a study on this at the lab, and I invite medical students and residents to write to me if you want to be a part of it.

Figure 4. Conceptual framework on clinician deskilling and never-skilling as routine AI assistance expands in medicine. Adapted from Topol & Berzin, 2025 — The Lancet.
Figure 5. Multicentre, observational study of experienced endoscopists: adenoma detection rate (ADR) in non-AI-assisted colonoscopies fell from 28.4% (226/795) to 22.4% (145/648) after routine AI exposure. Source: Budzyń K et al., “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy”, The Lancet Gastroenterol Hepatol, 2025.

The answer isn't complete abstinence though, it's structured supervision and education. You need to train with AI, but also train without it periodically, so you maintain your baseline skills. Think of it like a pilot practicing manual landings even though autopilot exists. You want to be the person who knows when the AI is wrong, not the person who blindly trusts it.

And here's something else to consider: patients are ambivalent about this stuff. Surveys show mixed trust in AI for medical care, with a strong preference for human doctors in emotionally loaded situations like breaking bad news, discussing end-of-life care, or explaining complex diagnoses.17,18 A 2025 UK report found that consumers are increasingly frustrated with low-quality chatbots and automated customer service (if you have ever had a bad experience with Zomato or Swiggy, you know what I mean), and that frustration bleeds into healthcare expectations.19 If we ship "AI slop" (I call it "AI Cringe") aka fast, cheap and inaccurate automation, there will be backlash. Patients will demand their human radiologist back, and rightly so.

Clinic benchmarks and guardrails are a must if you are a radiologist who wants to deploy AI in your setup:

- Run local validation on your hospital's imaging equipment and patient demographics.
- Appoint a named model steward who owns the performance monitoring (and please pay them if you want the work done well).
- Document in the radiology report when AI contributed to a finding or changed your interpretation.
- Monitor for drift, i.e. performance decay over time as patient populations or equipment change.
- Have an incident response plan for when the AI gets something badly wrong.
- Write plain-language patient notes explaining when AI was used in their care.

These align with WHO guidance and EU AI Act expectations,1,8 and more importantly, they build trust.
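The documentation guardrail, in particular, is easy to prototype. Here is a hypothetical sketch of an audit-trail entry recording how AI contributed to a report; every field name and value is an illustrative assumption, not a standard or a real system.

```python
# Hypothetical audit-trail entry documenting AI involvement in a report.
# Field names and values are illustrative, not from any standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    accession: str           # study identifier
    model: str               # tool name and version actually deployed
    output_summary: str      # what the AI suggested
    radiologist_action: str  # "accepted" / "rejected" / "modified"
    changed_report: bool     # did the AI change the final interpretation?
    timestamp: str           # UTC, for the audit trail

def log_ai_use(accession, model, output_summary, action, changed):
    record = AIUseRecord(accession, model, output_summary, action, changed,
                         datetime.now(timezone.utc).isoformat())
    return asdict(record)  # in practice: write to an append-only store

entry = log_ai_use("ACC-001", "nodule-detect v2.1",
                   "6 mm RUL nodule flagged", "accepted", True)
print(entry["radiologist_action"])  # prints "accepted"
```

The point isn't the code; it's that "the AI contributed" must be a queryable fact, not a memory, when a regulator or a lawyer comes asking.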

Where Humans Still Dominate (and Where Radiology Actually Grows)

Let's talk about the safe zones; the parts of medicine where AI isn't close to replacing us, and won't be for a reasonably long time.

Surgical and patient-facing specialties rely on trust and hands-on skill. AI can assist, but it won't replace the judgment call in the OT when something unexpected happens, or the conversation in clinic when a patient needs to understand their treatment options. Patient experience studies repeatedly show that people prefer talking to actual staff over bots when the stakes are high or the interaction is emotionally charged.17 This isn't sentimentality but psychology and ethics. Medicine is built on relationships, and those relationships still matter.

Interventional Radiology (IR) is your best hedge. I literally wrote a chapter in a Springer textbook on AI in Interventional Radiology. It was so early (2022, I guess) that I did not even feel like sharing it on social media. AI in IR is still in research and mainly helps with targeting lesions, choosing devices and intra-procedural navigation. There are systems that use real-time imaging fusion and robotics to guide catheters more precisely.15,16 But the workflow remains human-led. You're adapting on the fly based on what you see, what the patient tolerates, and what the anatomy allows. Recent reviews forecast higher throughput and safety with AI-assisted guidance, but not fully autonomous IR.15,16 Why? Because liability, patient variability and the need for real-time judgment aren't going away. If you want a future-proof radiology career, get IR training. It's procedural, it's patient-facing and it's hard to automate. Many residencies give you this exposure today, so try to get into one of those if IR is something you want to do.

Preventive radiology is another growth area. Screening programs for breast and lung cancer, osteoporosis follow-up, incidental findings management, fatty liver detection: all of these will require a radiologist to oversee the pathway and make the call on borderline cases.4 There are schools of thought that call this clinical leadership work, not just image interpretation: you're designing the screening protocol, deciding on recall thresholds, managing patient anxiety, and integrating imaging into long-term care plans. AI can help, but it can't own the relationship with the patient. But the real reason it is a great area is that there is very little data today to automate it and create generalisable models. I was having a chat with the head of Preventive Radiology at IRIA last month and there are some really cool things coming up soon. There are some smart-ring companies getting into wellness and starting to do these scans, but they are far away from generating evidence that makes any sense. One of them was offering a $hit load of money but my interests did not align (and I did not want to shift to Bangalore). Your interests might. Also, if you want to stay in Delhi-NCR and are interested in exploring this part time, I may have some opportunity, so please reach out to me.

Let's Talk About Money (Because I Know You're Thinking About It)

Nobody wants to say it out loud, but earning potential matters when you're choosing a specialty, especially if you are thinking about radiology. So let's address it directly.

If you end up in an independent or teleradiology practice, AI can improve your efficiency (and margins) but only if you reinvest the efficiency gains into quality and retention. Faster turnaround times, fewer repeat scans and better structured reports; all of this builds referrer loyalty. One of my friends has built a reporting tool which is state-of-the-art and has the best UI/UX today. And I know for sure that he is going to give the incumbents a run for their money once he announces his raise.

This one is for teleradiology practice owners. If you just pocket the savings and don't reinvest in your radiologists, you'll bleed talent. The UK's cautionary tale is instructive: they starved their radiology workforce for years, and now they're hemorrhaging money on outsourcing.6 Don't let that happen to your practice. On a personal note, I am witnessing a growing teleradiology mafia (often owned by non physicians) in the country who pay radiologists $hit (somewhere around 2.5 dollars or 200 INR for a 3D scan) and I assure you that the day radiologists decide to unite, they are gonna get destroyed. I have a very personal interest in ensuring fair pay to our fraternity and I invite all radiologists to be a part of the CRASH Lab forum (link in the end) so that we stay in touch to end this destructive and insulting teleradiology and referral system practice in the country.

If you're an employed radiologist in 2030, the invisible work you do for AI implementation matters. AI doesn't just speed you up; it adds new tasks. Local validation, exception handling, patient callbacks when AI flags something ambiguous, teaching residents how to use the tools safely: all of this takes time, and most contracts don't account for it.

The American Journal of Roentgenology's Expert Panel on workforce issues explicitly calls for new staffing and reimbursement models that recognize these safety tasks.7 When you're negotiating your contract, make sure AI stewardship is part of your job description and not free labor.

The other economic reality: if radiologists refuse to adopt safe, auditable AI tools, referring clinicians will. Emergency physicians, oncologists, and surgeons are already experimenting with point-of-care ultrasound AI and direct-to-consumer imaging apps. If we don't set the standards for how AI is used in imaging, someone else will and they'll optimize for speed and cost, not quality and safety, totally cutting off radiologists. We will lose influence over standards of care. So we must adopt and govern early, or lose the fight later.

Practical Suggestions for Applicants, Residents and Radiologists (2025–2030)

If you're applying to residency right now or you're a first-year resident wondering what to focus on, here's what to actually do to stay ahead of 90% of your peers. If you are unfortunately not in a program that gives you this exposure but are really motivated, you are welcome to be a part of our lab and take part in the research projects we are doing.

| Step | What to Do | Why It Matters |
| --- | --- | --- |
| Program choice | Pick residencies with real RIS-PACS analytics and active AI pilots. | You need to learn integration and governance, not just image interpretation. Programs that are actually deploying AI will teach you how to validate tools, monitor for errors and navigate regulatory compliance.1,3 |
| Validation literacy | Volunteer to help with your department's AI validation projects. | Knowing how to spot when a model is quietly degrading makes you better than 99% of your peers and highly valuable.8 |
| IR and prevention | Seek out interventional radiology rotations and preventive imaging pathways (screening programs, follow-up clinics). | These are high-moat, human-led workflows with growing demand. They're also hard to automate and highly valued by health systems.15,16 |
| Regulatory fluency | Track the EU AI Act, FDA PCCP updates, and in India, the DPDP Act, ABDM health data exchange, and CDSCO medical device rules. | These frameworks determine what you're allowed to sign, what requires audit trails, and how reimbursement works. Understanding the rules gives you negotiating power.1,3,9,11 |

My Two Cents, As Directly As I Can Say It

I'm going to be direct here, because I think you deserve clarity over comfort.

Yes, AI will surpass the average radiologist at drafting many types of reports some time in the future. It's already better than humans at detecting certain patterns in controlled test sets. Your edge isn't being faster or more accurate at pattern matching; it's knowing when the AI is wrong, and being able to prove it with clinical reasoning and follow-up data.12 That's judgment, and judgment is still ours.

Expect gradual dependence, and prepare for it. You'll use AI the way you use a calculator: essential, embedded, hard to work without. The key is treating it like an instrument you master with checklists and quality checks, not a magic box you trust blindly.8 Maintain your baseline skills. Train without AI periodically. Know what you're capable of when the system goes down. There was an unfortunate AWS outage in the US yesterday and damn, it paralysed everything.

Pivot early to high-moat zones. Interventional radiology, complex oncologic imaging, preventive programs, clinic leadership roles, and AI safety and governance work: these are the areas where humans add irreplaceable value.15,16 If you wait until algorithms eat into your bread-and-butter work, you'll be pivoting from a position of weakness. Do it now while you have leverage. One of the major roles our lab plays is to provide this leverage to medical students and residents who want to get into AI early.

Get involved in policy, even if you hate bureaucracy. In India, that means understanding DPDP consent frameworks, the Ayushman Bharat Digital Mission (ABDM) health data exchange standards, and CDSCO medical device approval pathways.9,10,11 In Europe and the U.S., it's the AI Act and FDA PCCP.1,3 These regulations decide what you're allowed to sign, what requires a paper trail, and how reimbursement flows. They're written by lawyers and policymakers, but they shape your clinical practice more than any research paper. Radiologists who understand these frameworks will have outsized influence over how AI gets deployed. Those who don't will have it imposed on them.

Final word: AI will at some point outperform us on most pattern recognition tasks. That's already happening. But it won't take legal responsibility when things go wrong. The radiologists who thrive in the next decade will be those who use AI as a tool and who know how to govern it responsibly. And it is harder if you start becoming dependent on it without the AI literacy you will be expected to have. If this new future sounds interesting to you, radiology is still a great choice. If it sounds like too much work, maybe pick something else. But don't avoid radiology because you're scared of AI taking your job. That's the wrong reason.

Join Us to Shape India's Healthcare AI Story

If you're a physician, resident or medical student who wants hands-on experience with responsible AI in real clinical workflows, feel free to reach out to me with your CV at suvrankar.datta@ashoka.edu.in. We run multi-institutional projects across India and internationally through the CRASH Lab at Ashoka University, and we have spots for motivated trainees who want to shape how healthcare evolves.

We also have a general cohort for all doctors who want early access to different AI tools in healthcare before they become public. You can join here: https://chat.whatsapp.com/LkTGRafwE7X09DgVeem93P?mode=wwt

For academic or industry collaborations, email me or DM me on LinkedIn or X. I am usually more responsive over mail.


References

  1. European Commission. The Artificial Intelligence Act: overview. Brussels; 2024. Available from: https://digital-strategy.ec.europa.eu/en/policies/european-ai-act
  2. Artificial Intelligence Act (consolidated text). 2024. Available from: https://artificialintelligenceact.eu/
  3. U.S. Food and Drug Administration. AI/ML-Enabled Medical Devices list. 2025 Jul 10 (updated). Available from: FDA SaMD AI/ML
  4. Zanardo M, et al. Impact of AI on radiology: a EuroAIM/EuSoMII 2024 survey among ESR members. Insights Imaging. 2024. Available from: Insights Imaging
  5. Royal College of Radiologists. Clinical Radiology Workforce Census 2024. London: RCR; 2025. Available from: https://www.rcr.ac.uk
  6. Boseley S. NHS gave private firms record £216m to examine X-rays in 2024. The Guardian. 2025 May 15. Available from: https://www.theguardian.com
  7. Rozenshtein A, et al. The U.S. Radiologist Workforce: AJR Expert Panel Narrative Review. AJR Am J Roentgenol. 2025. Available from: https://www.ajronline.org
  8. World Health Organization. Ethics and governance of AI for health: guidance on large multimodal models. Geneva: WHO; 2025. Available from: https://www.who.int/publications
  9. Government of India (MeitY). Digital Personal Data Protection Act, 2023. New Delhi; 2023. Available from: https://www.meity.gov.in/
  10. Press Information Bureau (GoI). ABDM consent-based health-data exchange & cybersecurity updates (2024–2025 releases). Available from: https://pib.gov.in
  11. Central Drugs Standard Control Organization. Medical Devices Rules 2017 (incl. software as medical device). New Delhi; ongoing. Available from: https://cdsco.gov.in
  12. Dratsch T, et al. Automation bias in mammography: the impact of artificial intelligence BI-RADS suggestions on reader performance. Radiology. 2023;307(4):e222176. Available from: https://pubs.rsna.org/journal/radiology
  13. Kim SH, et al. Automation bias in AI-assisted detection of cerebral aneurysms. Sci Rep. 2025;15:1234. Available from: https://www.nature.com/srep/
  14. Abdelwanis M, et al. Automation bias and errors in clinical decision support systems: a systematic review. Patterns. 2024;5(3):100925. Available from: https://www.cell.com/patterns/
  15. Lastrucci A, et al. Artificial intelligence and interventional radiology: a narrative review of current applications and future perspectives. EClinicalMedicine. 2025;68:102456. Available from: https://www.thelancet.com/eclinicalmedicine
  16. Lesaunier A, et al. Artificial intelligence in interventional radiology: state of the art. Diagn Interv Imaging. 2025;106(1):12-24. Available from: https://www.sciencedirect.com/journal/diagnostic-and-interventional-imaging
  17. Smoła P, et al. Attitudes toward artificial intelligence and robots in medicine among patients. Healthcare (Basel). 2025;13(2):189. Available from: https://www.mdpi.com/journal/healthcare
  18. Kühne S, et al. Attitudes toward artificial intelligence usage in patient health care: a cross-sectional survey study. JMIR Human Factors. 2025;12:e52874. Available from: https://humanfactors.jmir.org/
  19. Quantum Metric / TechRadar Pro. UK consumer dissatisfaction with AI chatbots in customer service (2025 report). Available from: https://www.techradar.com
  20. Davis N. AI could complicate medical liability cases, law experts warn. The Guardian. 2025 Oct 3. Available from: https://www.theguardian.com