We need a thread on why AI WILL NOT replace radiologists in the next several decades. I’ll start.


SeisK

This post is prompted by a recent study showing that roughly 1 in 6 medical students interested in radiology decide against it after hearing a minimal amount about AI from other attendings (almost certainly in specialties that do not generate radiology reports). It's something that keeps coming up, at first amusingly, but it has slowly become annoying.

Radiology is the best specialty. We deal with essentially none of the crap that other specialties have to deal with on a day-to-day basis, we're extraordinarily efficient, we deal with ALL the types of things you learn about in med school (even those pesky lysosomal storage diseases you were told never mattered), you are directly exposed to the applications of the coolest modern physical and technological sciences, and you're paid appropriately for it, unlike a large swath of the rest of medicine.

My motivation in this is, well, I'm a jealous guy. I want all the smart, driven, charismatic people to come to my specialty, and in their (necessarily) naive state as young, impressionable medical students, I think a bunch of smooth-brained window-lickers (with the utmost respect) are dissuading them from it. So I want to start a thread on why this is so horribly mistaken.

---

This is a post I made on auntminnie in a related thread, which I think really drives the point home. I'd love to hear others' thoughts (it doesn't matter how well thought out they are or not). This comes from a background that includes no small amount of literature review and clinical trial research.


We don't really have successful models that can predict the future well in economic terms, and when that's the case, emotions run rife and dominate the conversation.

You have computer scientists and software developers who immediately show their futurology bias by repeatedly spouting "radiologists will be obsolete" even today, despite the fact that what little AI has been implemented probably isn't saving anyone any time, and the only ROI comes in the form of higher-quality reads.

You have radiologists on the other hand who, possibly in an ego-defense kind of way, state "AI will only assist us, not replace us," when assisting you is tantamount to replacing you. If I need five radiologists instead of ten to get through a list in a day, I've replaced five with the implementation.

But the fact of the matter is, actual clinical implementations of algorithms and reproducibility studies have not matched the original trials in accuracy, and will continue not to for the next several decades at least, for many reasons:

I'll start with the obvious: no radiologist is being replaced until radiologist+AI is better than the radiologist alone in a large-scale, heterogeneous population. I'll go into that more below. Starting with that:

1. Edge cases are not a negligible proportion of our studies. Even if they were, there are no studies or software available that assess an AI's accuracy at identifying edge cases, so how am I going to know "you don't need to look at this study" even if AI surpasses my ability? This is why AI that is a "normal identifier" is far away. Far away. FAR. AWAY.
2. Training datasets are not generalizable because of subtle differences in the scanners underlying the data acquisition, and heterogeneous datasets are proprietary, which can make it extremely difficult to acquire larger datasets to train your algorithms. There are some efforts to overcome this, but five large homogeneous datasets do not a heterogeneous sample make.
3. The black box problem. This is tied to problem 2. There's often something else consistently on the image, coincidentally tied to the pathology, that may explain why something is going to happen and that we can't identify. "Who cares if the diagnoses are accurate?" I do, MFer, because if a multivariate analysis accounting for this hidden "black box variable" finds the machine is now worse than humans, I'm not going to use the thing. I have no idea whether there are black box variables in your algorithm, so I can't even begin to set up a multivariate analysis to eliminate them. This right here is almost certainly why clinical implementation of extremely promising algorithms has been milquetoast. Frankly, there's s*** I can't see that the thing is using to cheat. When you deploy the algorithm in another population that doesn't have that hidden variable, it fails. Two ways of getting around this are localizers that help the radiologist figure out what the AI is seeing, and testing the algorithm on an extremely heterogeneous population (lots of different types of patients, lots of different types of scanners, lots of different types of clinical settings and acquisitions); see the sketch after this list for the kind of audit I mean.
4. AI is exceptionally vulnerable to artifacts that are trivial to us.
5. AI does not reproduce human-level sensitivity or specificity on cross-sectional imaging, which is likely our most important work, since it's here that we often truly make diagnoses, whereas on planar imaging we mostly provide descriptions that lean in favor of diagnoses.
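To make points 2 and 3 concrete, here's a minimal sketch (in Python, with scikit-learn) of the kind of audit I'm talking about. Everything in it is a placeholder: the "features" stand in for whatever a trained imaging model extracts from a study, and "scanner_id" stands in for which machine or site produced it. The only point is the two questions it asks.

```python
# A minimal sketch, assuming you already have (per study) a feature vector from the
# model, the scanner/site that produced the study, and the ground-truth label.
# All arrays below are random placeholders so the snippet runs on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_studies = 1000
features = rng.normal(size=(n_studies, 32))       # stand-in for the model's learned features
scanner_id = rng.integers(0, 5, size=n_studies)   # stand-in for which scanner produced each study
disease = rng.integers(0, 2, size=n_studies)      # stand-in for ground truth

# Question 1: can the scanner be predicted from the model's features?
# If this is well above chance (~0.20 here), the "diagnosis" may be riding on
# acquisition quirks rather than pathology (the hidden black box variable).
probe = cross_val_score(LogisticRegression(max_iter=1000), features, scanner_id, cv=5)
print(f"scanner predictable from features: {probe.mean():.2f} accuracy")

# Question 2: among disease-negative studies, does the model's output shift by scanner?
# A score that drifts with the scanner in normals is the same red flag from the other side.
model_score = features[:, 0]                      # stand-in for the model's output score
for s in range(5):
    neg = (scanner_id == s) & (disease == 0)
    print(f"scanner {s}: mean score in normals = {model_score[neg].mean():+.2f}")
```

With placeholder data both checks come out boring, obviously; the point is that you can, and should, run them on a real model before trusting it across sites.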

Additionally, here are the bigger deals:
6. Greater accuracy doesn't save anyone any time. Or at least it morally shouldn't. AI+radiologist surpassing radiologist performance assumes the radiologist hasn't changed their behavior in the presence of AI, unless the software accounted for that behavior in its pre-release trial. A radiologist going through studies quicker because they have AI on board isn't reproducing the study conditions, so its conclusions can't be guaranteed to extrapolate, and the person suffering from that decision is the patient. Because of this, AI doesn't actually yield an ROI for the radiology practice when used. Then again, there are a lot of dubious radiology practices out there, and they're becoming dubiouser with private equity expansion.

Finally:
7. No prospective trials. This is a big deal, probably the biggest. Nothing, I mean nothing, in any field of medicine becomes or supplants the standard of care until you have a national-scale, large AND SUFFICIENTLY HETEROGENEOUS randomized clinical trial demonstrating the new method surpasses the old in terms of morbidity and mortality years down the line. NOT FOR MODALITIES AS A WHOLE, but for the thousands of specific pathologies picked up on that modality. There is a lot of groundwork to be done before you'll let the experimental arm be put at risk of the study going wrong. You do this by performing quite exhaustive retrospective studies analyzing the variables important to the outcome, and for AI that's a lot of variables. Additionally, and most importantly, this is also handled by making the experimental arm "existing standard + new intervention," which I'll again remind you doesn't replace a single radiologist. Only after that bar is met can you maybe attempt to use the "new intervention" alone, without the existing standard. Even a single such Phase 3 trial takes YEARS, and a simple search of clinicaltrials.gov will show that there is not even a Phase 1 trial of ANY imaging modality AI versus radiologist. The FDA will NEVER clear these devices as standard of care until a Phase 3 looks gorgeous and is published on the front page of NEJM, and right now we don't even know how to set up an appropriately sampled population for such a Phase 3 because, again, generalizability is an enormous issue (you'd have to be sure any new variant of image acquisition is covered). Keep in mind, though, that while this is the biggest hurdle, it is also the BIGGEST deal. Once an AI has overcome this hurdle for a specific pathology, the radiologist has lost. If AI says "acute interstitial edematous pancreatitis" and AI > AI + radiologist for this pathology, that's what goes in the report even if you don't see it.

And again, I'll remind you: you set up clinical trials NOT FOR MODALITIES AS A WHOLE, but for specific pathologies. You need a Phase 1 for acute interstitial edematous pancreatitis, acute necrotizing pancreatitis, chronic pancreatitis, pancreatic adenocarcinoma… and so on, for the thousands of such diagnoses a radiologist is required to identify and describe. That's a lot of work for a small group of software devs who don't know what pancreatitis is.
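To put a number on "YEARS," here's a back-of-the-envelope sample-size sketch in Python for just one of those pathology-specific questions. The sensitivities, prevalence, alpha, and power are illustrative assumptions pulled out of the air, not figures from any actual trial, and a real paired or non-inferiority design would change the numbers somewhat, but not the order of magnitude.

```python
# Back-of-the-envelope: patients needed to show a small sensitivity difference
# between two reading strategies for ONE pathology. All inputs are assumptions.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Standard two-proportion sample size per arm for detecting p1 vs p2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

sens_old, sens_new = 0.85, 0.88   # assumed sensitivities of the two strategies
prevalence = 0.05                 # assumed prevalence of the finding among scanned patients

diseased = n_per_arm(sens_old, sens_new)
# Sensitivity is only estimated from diseased patients, so the number of patients
# you actually have to scan per arm is larger by a factor of 1/prevalence.
scanned = diseased / prevalence

print(f"diseased patients per arm: ~{diseased:,.0f}")      # on the order of 2,000
print(f"scanned patients per arm:  ~{scanned:,.0f}")       # on the order of 40,000
```

And that's one pathology, one modality, a surrogate endpoint rather than morbidity and mortality, and before a single patient has been followed up.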

Given the above, and probably because private equity would prefer a modest short-term return to a huge long-term return, the AI software we do see is relatively small in scope, sold to radiologists rather than directly to other providers, and always advertised as an adjunct to the standard of care rather than any kind of replacement for it, lest they suffer the wrath of the FDA and of litigation.

And I'll remind everyone, finally, that all of this will reduce the need for radiologists, but still will not replace them. I see the future of radiology as one that is much more data / mathematics / physical science driven as the number and complexity of imaging modalities grows and as the importance of AI grows. We have to become experts in it. We have to become as familiar with the language of AI implementation in healthcare as the oncologist is with their various chemotherapies and the subtleties of using them depending on the cancer in question. We really should be the experts and keepers of this, and become as familiar with it as the computer scientists themselves, for the benefit of our patients. Learn it, not because you fear it (if you're new, you don't have much to fear) but because you want to employ it to save your patients' lives.

 
I have read your post on auntminnie and loved it.
Great post.
 
I don’t care if medical students are scared off by AI and don’t go into radiology because of it. I care deeply if there is an unchecked overexpansion of residencies, which would lead to an oversupply of radiologists which in turn would cause crashes in income, job availability, and job security. Right now, rads are in the driver’s seat and I like it that way. I don’t want radiology to turn into EM, RadOnc, or hospital medicine. I don’t want radiology to be taken over by corporate America.
 
Pathologists were worried about being replaced by automated Pap smear screening in 1983. Products got approved in the late 90s. Today, we kind of have automated screening? But pathologists are still around and doing less mind numbing stuff.

The future of radiology: less screening, more being a doctor - synthesizing information, problem solving, working with people.
 
Pathologists were worried about being replaced by automated Pap smear screening in 1983. Products got approved in the late 90s. Today, we kind of have automated screening? But pathologists are still around and doing less mind numbing stuff.

The future of radiology: less screening, more being a doctor - synthesizing information, problem solving, working with people.

I could be wrong, but isn't the market for pathologists consistently pretty bad? If it is, I don't think it's related to AI etc. More likely too many residency programs and/or private equity

I think AI will eventually replace human rads but don't see this happening before Uber/bus/amazon/truck drivers, fast food workers, pilots, pharmacists, and many other professions are replaced as well.
 
I could be wrong, but isn't the market for pathologists consistently pretty bad? If it is, I don't think it's related to AI etc. More likely too many residency programs and/or private equity

I think AI will eventually replace human rads but don't see this happening before Uber/bus/amazon/truck drivers, fast food workers, pilots, pharmacists, and many other professions are replaced as well.
Bad job market because of massive unchecked residency expansion, not because of any reduction in the work available.

Also, a comment on current AI: we have AiDoc, and yesterday it failed to detect a saddle PE that the techs had no problem recognizing. Basically, working with real-life data is a whole lot different from the controlled environments the publications come from. No radiologist would EVER miss this; heck, 99% of PAs and 90% of NPs could pick it up.
Currently AI is completely unreliable, but it can function as an early alerting system... when it works.
 
AI has the potential to make radiologists better...eventually. I think it's going to provide a heat map of "possible anomalies" that radiologists can glance at and make sure before they sign off a normal report, that they aren't missing something subtle. Such a system does not make us faster/more efficient, so the effect on workload would be negligible. We have similar capability in CAD for mammo/tomo. But it is only "ok". It cannot differentiate suspicious calcs from vascular calcs or artifact from metal clips, for example.

I think we are way, way, WAY far away from AI generating and signing their own reports. There's just too much liability. Who gets sued when AI generates a report that has mistakes?

Elon Musk said, (I'm paraphrasing) "Robots are good at some things and really terrible at other things". An example he gave was loose tubing hanging in front of you that needs to be connected to a fitting. The robot has to calculate position, sway, distance to coupling, diameter, force applied, etc in order to perform the duty. But such a maneuver is simple even for the least skilled human. An anecdote for radiology I heard somewhere was automatic segmentation of the spleen. A 10 year old can be taught to identify the spleen with high reliability after a few hours of instruction, but it is exceedingly hard to get AI to do so with the same level of accuracy.
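For what it's worth, the display side of that heat-map idea is the easy part; here's a toy sketch in Python where the image and the probability map are random placeholders standing in for a real study and a real model's output. The hard part, as noted above, is making the underlying probabilities worth looking at.

```python
# Toy sketch of a "possible anomalies" overlay. Image and probability map are
# random placeholders, not real model output.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.1, size=(256, 256))    # stand-in for a grayscale slice
prob = rng.random(size=(256, 256)) ** 8          # stand-in for per-pixel anomaly probability

threshold = 0.5
# Mask everything below threshold so only the flagged regions are painted over the image.
overlay = np.ma.masked_where(prob < threshold, prob)

fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(image, cmap="gray")
ax.imshow(overlay, cmap="hot", alpha=0.5, vmin=threshold, vmax=1.0)
ax.set_title("Toy anomaly heat map (placeholder data)")
ax.axis("off")
plt.show()
```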
 
For the foreseeable future, my take so far on AI is that it won't replace human rads but instead increase our efficiency so that we can read more studies per hour. It's like how the numbers for ultrasound measurements can prepopulate templates so the rad doesn't have to waste time doing it.

Of course, if you extrapolate this nationwide so that the average rad can read more studies per hour thanks to AI, then you will decrease or at least flatten the demand curve for more radiologists, but each rad generates more RVU per hour. If the rad is employed by corporate radiology, then corporate radiology gets a cut and increases their revenue while employing the same number of rads. If you bend the demand curve for rads, then you can also pay them less and have more control over them because they're more desperate to get and keep their jobs. That's the wet dream of these corporate shills anyway.

No doubt AI will increase efficiency, but the jury is still out on how much and which areas will see the most benefit. If this happens, radiology will need to keep looking for additional technologies or areas to expand into to keep the demand high for more radiologists. Increasing RVU per hour through AI will initially be good for everyone's bottom line…until the government catches on and makes even deeper cuts to imaging, and you're back to square one.
 
The nuances, variations, and overlapping findings are so numerous that if we humans, with our very flexible yet complex way of thinking and our ability to consolidate different diagnoses, can't be fully confident in identifying everything we see, how can we expect an AI that's far more limited in its algorithmic approach and coding to replace us?

I don't see encroachment. Rather, partnership with AI is what I see.
 
Bad job market because of massive unchecked residency expansion, not because of any reduction in the work available.

Also, a comment on current AI: we have AiDoc, and yesterday it failed to detect a saddle PE that the techs had no problem recognizing. Basically, working with real-life data is a whole lot different from the controlled environments the publications come from. No radiologist would EVER miss this; heck, 99% of PAs and 90% of NPs could pick it up.
Currently AI is completely unreliable, but it can function as an early alerting system... when it works.

This 100%.

As someone who uses RAPID stroke perfusion AI every day, I'd say the biggest hurdle to AI succeeding is being able to handle bad/heterogeneous input data like a real rad would. Making the diagnosis on clean data from a controlled environment is all well and good, but my group covers several hospital systems. Within those hospital systems there are different brands of CT scanners, different models, different levels of maintenance, and definitely different levels of tech quality. Every now and then I get a RAPID study where the software has correctly identified a large vessel occlusion with an ischemic penumbra worth mechanical thrombectomy. Way, way, way more often, the software is worthless due to the heterogeneity in the CT acquisitions.
 
I forgot to mention that the pathologists have also delegated Pap screening to cytotechnologists in part as well. This is the model for the high volume mind numbing tasks: delegate to AI for the first pass, to a midlevel for the second pass, and the third pass is for the physician.

Note the limitations here - it has to be a high volume, single pathology task. First to go this way will be screening mammography. Then CT chest screening for lung nodules.

I don't like screening radiology. We are diagnostic radiologists.
 
AI has the potential to make radiologists better...eventually. I think it's going to provide a heat map of "possible anomalies" that radiologists can glance at and make sure before they sign off a normal report, that they aren't missing something subtle. Such a system does not make us faster/more efficient, so the effect on workload would be negligible. We have similar capability in CAD for mammo/tomo. But it is only "ok". It cannot differentiate suspicious calcs from vascular calcs or artifact from metal clips, for example.

I think we are way, way, WAY far away from AI generating and signing their own reports. There's just too much liability. Who gets sued when AI generates a report that has mistakes?

Elon Musk said, (I'm paraphrasing) "Robots are good at some things and really terrible at other things". An example he gave was loose tubing hanging in front of you that needs to be connected to a fitting. The robot has to calculate position, sway, distance to coupling, diameter, force applied, etc in order to perform the duty. But such a maneuver is simple even for the least skilled human. An anecdote for radiology I heard somewhere was automatic segmentation of the spleen. A 10 year old can be taught to identify the spleen with high reliability after a few hours of instruction, but it is exceedingly hard to get AI to do so with the same level of accuracy.
 
My AI wishlist:
Do all the screening mammos
Auto-compare all the index lesions on PET
Find new and compare old lung nodules

If you figure these things out, I will die a happy radiologist. But I don't even see these advances on the horizon let alone by the time I retire.
 
My AI wishlist:
Do all the screening mammos
Auto-compare all the index lesions on PET
Find new and compare old lung nodules

If you figure these things out, I will die a happy radiologist. But I don't even see these advances on the horizon let alone by the time I retire.

Screening mammo trials are underway in Europe to try to replace the second reader. They're phase 1s in Sweden, I believe.
 
I’m in the US. We only have one reader.
 
For screening tasks, in an AI+rad setup, the only way it can start displacing rads is when the AI is so good that the rads start trusting the AI and shortcut their search pattern to save time. That's hard to imagine for anyone who has used mammo CAD or even the current AI products out there right now.

The medicolegal exposure is also different between radiology and pathology. In radiology, you just show the old film to a plaintiff's expert witness and they'll point out the subtle thing you missed. In pathology, is anyone archiving the old slides of cell smears to let themselves get sued?
 
I don’t know much about path, but yeah, I’ve assumed that they are saving their slides or at least a digital representation.
 
Universal adoption of AI in radiology to the point of REPLACING a radiologist is going to take a very, very long time. I am referencing true replacement, not just replacement in the setting of improved efficiency, which could happen much sooner if more mundane tasks are deemed important enough for CMS to throw money at. However, I don't think this is going to happen anytime soon (see below).

Look, I have no background in CS or AI/ML. I can maybe read a paper or two and sort of understand what those nerds are talking about. But it doesn't take a learned PhD to know that these studies are using carefully curated data to make their algorithms look more accurate than radiologists. Yet you take a different dataset and the algorithm is worse than 50/50 guessing. So on that front, like OP said, the absence of data heterogeneity precludes most of these algorithms from working in a real-life setting.
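That curated-versus-real-world gap can be shown in a few lines of Python: score the same simple model once with a random split and once with whole sites held out. The data below is synthetic, with a scanner "fingerprint" and a per-site prevalence deliberately baked in, purely to show why the two numbers diverge; none of it comes from a real product.

```python
# Synthetic demo: random splits flatter a model that leans on site-specific cues,
# held-out-site splits do not. All data is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

rng = np.random.default_rng(0)
n_sites, per_site = 6, 200
site = np.repeat(np.arange(n_sites), per_site)

prevalence = rng.uniform(0.1, 0.9, size=n_sites)        # each site has its own case mix
y = rng.binomial(1, prevalence[site])                   # disease label
signal = y * 0.5 + rng.normal(size=site.size)           # weak genuine pathology signal
fingerprint = np.eye(n_sites)[site]                     # scanner/site "fingerprint" features
X = np.column_stack([signal, fingerprint])

clf = LogisticRegression(max_iter=1000)
random_cv = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
site_cv = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=6), groups=site)

print(f"random-split accuracy : {random_cv.mean():.2f}")   # flattered by the fingerprint
print(f"held-out-site accuracy: {site_cv.mean():.2f}")     # closer to a new hospital's reality
```

The gap between those two printed numbers (if any) is, in miniature, the curation problem described above.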

The other thing, and really the only thing, that will determine whether AI is adopted in radiology is MONEY. If the algorithm is not reimbursed by CMS or private insurers, who often follow CMS reimbursement policy, then the algorithm will be DOA. No money, no support, no one is buying it, and no new data will be fed into the algorithm in the real world. As it currently stands, there are only a handful of ways that new tech, such as AI, gets reimbursed in radiology, and there are only a handful of companies that have obtained those reimbursement pathways. As far as I've read and learned, they all concern more acute, life-threatening diagnoses, such as PEs, strokes, or brain bleeds. And that makes sense, because these are things that are actually time-sensitive. Even then, these reimbursements are re-evaluated every 1-2 years by CMS to make sure that outcome data is available.

There are a few more things I want to talk about, but I have some Netflix to catch up on. But when in doubt, just FOLLOW THE MONEY. It will never lie to you.
 
Also, any non-radiologist physician or midlevel practitioner who spouts anything about AI replacing radiologists is ignorant and should not be speaking about what they do not know. It is damaging to the field, and like OP, I also want the best and brightest to lead our specialty well after I am gone. I am happy to correct anyone who does so in my presence.
 
As it currently stands, there are only a handful of ways that new tech, such as AI, gets reimbursed in radiology, and there are only a handful of companies that have obtained those reimbursement pathways. As far as I've read and learned, they all concern more acute, life-threatening diagnoses, such as PEs, strokes, or brain bleeds. And that makes sense, because these are things that are actually time-sensitive. Even then, these reimbursements are re-evaluated every 1-2 years by CMS to make sure that outcome data is available.
Elaborating on this... The large vessel occlusion detectors won temporary reimbursement (max 3 years) from CMS as a new technology add-on payment. To get paid through this pathway, you have to demonstrate a substantial clinical benefit, novelty, and inadequate reimbursement based on the existing diagnosis-related groups.

Viz doesn't claim to outperform radiologists in diagnosing a large vessel occlusion. This is a relatively elementary task for humans, and the program is only ~90% sensitive and specific at it, which is not great. While it performs only about as well as a great ape could, it does so within minutes and skips multiple people in the chain of results-telephone (the outside hospital radiologist, the emergency PA, the emergency physician, the neurologist) by alerting the interventionalist directly to a possibly actionable case. The burden is then on the interventionalist to figure out what's actually going on. It does not claim to be making a diagnosis, with the liability associated with that; it sells itself as a care coordination app.

This is important for a time-critical diagnosis, but this model will apply to few other situations. Showing a substantial clinical benefit will be hard, I think, even for the PE and intracranial hemorrhage apps to replicate. You have to show that it shortens time to intervention and that this improves clinical outcomes. The reality is that the outcomes of PEs and brain bleeds are usually not changed by the hour.
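To put "~90% sensitive and specific" in perspective, a quick Bayes calculation shows what it means for whoever gets the alert. The prevalence below is just an assumed share of truly LVO-positive studies among those the tool screens, not a published figure.

```python
# Hedged arithmetic: what ~90%/90% sensitivity/specificity implies for alert quality.
sensitivity, specificity = 0.90, 0.90
prevalence = 0.10   # assumed fraction of screened studies that truly contain an LVO

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)      # chance that a given alert is a real LVO
missed_per_1000 = (1 - sensitivity) * prevalence * 1000

print(f"PPV of an alert: {ppv:.0%}")                       # 50% with these assumptions
print(f"missed LVOs per 1000 screened: {missed_per_1000:.0f}")
```

Half the alerts being real is fine for a care-coordination nudge that a human immediately verifies; it is nowhere near what you'd accept from something signing the report.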
 
It's pretty straightforward to project 5 years from now but quite hard to project 20 years out. Example: 20 years ago, we were using payphones. Today payphones belong in museums. Who would have expected the changes that smartphones have wrought?

Most of the issues discussed here (data heterogeneity, scanner heterogeneity) are real problems but could be immediately fixed if only we could pool data from multiple institutions. Suppose in five years that self-driving cars become mainstream and some upstart AI company begins pooling data from 5 different universities and is sued for violations of patient privacy. The case makes its way to the Supreme Court.

Now suppose the court rules in favor of the AI company. This would move data pooling from a gray legal area to a white one, and Google might enter the field, make agreements with 40 hospital systems across 12 states, and get massive data pooling.

Or suppose some other country (e.g., China) decides AI is a national priority and they have enough "authoritarian" in their politics that they demand that their hospitals work together, annotate their data uniformly and pass their cases to a central repository. This would also fix the problems we discussed.

We do not yet have a solution for rare diagnoses or complex critical thinking, and (unless there is some true AI revolution that allows AI to mimic critical thinking -- truly a moon shot today) my guess is that radiologists will move more towards these holistic skills. I would expect that more routine "find a lesion" tasks will get automated. In the same way that docs today are less good at clinical gestalt and better at referring to/interpreting labs and CTs/MRIs, radiologists will get worse at reading cross-sectional imaging and better at interpreting AI-generated reports.

We already have "AI beats radiologists" studies published in Nature Medicine, and I would guess we are 3-6 years out from seeing these similar studies published in a prospective fashion with out-of-sample data in NEJM.
 
We already have "AI beats radiologists" studies published in Nature Medicine
I would argue that until it's prospective, you don't have those studies at all, for ultimately the same reason that for every 5,000 preclinical drugs only 1 makes it to market. Frankly, if I were one of the referees for several such "groundbreaking" studies, I would have laughed some of them off and rejected them in the first round. My guess is they weren't rejected because the people refereeing these papers shouldn't be.

And I think you missed the major takeaway, which is the prospective hurdle. That hurdle takes years to jump for specific diagnoses. There is only one group I know of that's doing any type of prospective study (and you'd have to start now to publish in 3 years), in a single-modality, single-question study that doesn't replace any US radiologists.

And the data heterogeneity issue is all well and good: "Well, if only this would happen…" The pathway to modern medicine is littered with the millions of cute, dead ideas that died because of "would have, should have." Your data heterogeneity issue is not solved until randomly sampled scanners of indeterminate age, with randomly sampled artifacts, prove inconsequential in a prospective manner. Until then, the FDA is not letting AI supplant the radiologist's task and won't OK radiologists taking shortcuts in their reads.

This is all more of the same handwavy nonsense that made me post this in the first place.

Finally - it's AI beats AI plus radiologist, not AI beats radiologist, that is the benchmark to replace anyone. Right now the only thing I see on the horizon, maybe, is AI taking mammography in 15ish years, because mammography is particularly situated in a place that's simple and straightforward to design for, addresses a single issue, is very easy for industry to monetize, and is low-hanging fruit. And that's assuming the prospective studies bear out, which they may not - that's why we do them.
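And just to spell out what that benchmark would look like statistically: you'd run both strategies over the same prospective cases and compare them pairwise, something like the exact McNemar test sketched below. The counts are invented purely to show the mechanics.

```python
# Sketch of a paired comparison ("AI alone" vs "AI + radiologist") on the SAME cases,
# using an exact McNemar test. The discordant counts below are invented.
from scipy.stats import binom

ai_only_correct = 18      # cases AI alone got right and AI+radiologist got wrong
combo_only_correct = 34   # cases AI+radiologist got right and AI alone got wrong

n_discordant = ai_only_correct + combo_only_correct
k = min(ai_only_correct, combo_only_correct)

# Under the null (the strategies are equivalent), each discordant case is a coin flip.
p_value = min(1.0, 2 * binom.cdf(k, n_discordant, 0.5))

print(f"discordant cases: {n_discordant}, two-sided exact McNemar p = {p_value:.3f}")
# Cases where both strategies agree (both right or both wrong) don't enter the test.
```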
 
I said it before and I'll say it again. If AI is so good that it passes the Turing test version of radiology, then all of humanity is in trouble, not just radiology. Think Terminator future. Can you imagine AI being able to cross-reference different imaging modalities and clinical notes, synthesize diagnoses, and develop reports indistinguishable from a good radiologist's? I don't think people appreciate how much image recognition skill and higher-order thinking are required to be a good radiologist. It's the same level required to be a good CPA, lawyer, banker, poet, artist, etc. If AI is good enough to replace a radiologist, then you don't need human truck drivers, accountants, or many other professions. Literally no medical specialty or any profession would be safe. Even humans may not be necessary. The people who say AI could replace radiologists either don't really know what a radiologist does or don't know the limitations of computers.

Anyways, I don’t know why people care if medical students are scared off by AI. Makes no difference to me. Who cares. The main thing that would affect me negatively and significantly is if radiology organizations are infiltrated by corporate goons and they step on the gas pedal to expand residencies. That’s what has happened to EM. EM organizations are full of corporate representatives. They have no interest in scaling back residencies. In fact, they are opening even more residencies. That’s why there is a glut of EM docs and they predict an oversupply of 10k by 2030.

The Emergency Medicine Physician Workforce: Projections for 2030

 
Anyways, I don’t know why people care if medical students are scared off by AI. Makes no difference to me. Who cares. The main thing that would affect me negatively and significantly is if radiology organizations are infiltrated by corporate goons and they step on the gas pedal to expand residencies. That’s what has happened to EM. EM organizations are full of corporate representatives. They have no interest in scaling back residencies. In fact, they are opening even more residencies. That’s why there is a glut of EM docs and they predict an oversupply of 10k by 2030.

The Emergency Medicine Physician Workforce: Projections for 2030


I think this could be a potential concern. The primary hospital corporation that has recently created so many EM residencies has also recently created a decent number of radiology residencies (~5). The EM job market was very good just several years before these predictions came out, so I wonder if in 10-15 years it could be a problem for radiology
 
I think this could be a potential concern. The primary hospital corporation that has recently created so many EM residencies has also recently created a decent number of radiology residencies (~5). The EM job market was very good just several years before these predictions came out, so I wonder if in 10-15 years it could be a problem for radiology
Where are you finding this info? I figured you were talking about HCA, but they only have like two programs.

Edit: well, that's what their site shows when you filter by diagnostic radiology. But there are obviously a couple more just by Google search.
 
I will add the following from Quanta Magazine's review of the year in mathematics. It appears that a number of fundamental mathematical results published this year raise possible theoretical limits on certain types of "artificial intelligence."

But there have been setbacks, too. Related kinds of AI known as convolutional neural networks have a very hard time distinguishing between similar and different objects, and there’s a good chance they always will. Likewise, recent work has shown that gradient descent — an algorithm useful for training neural networks and performing other computational tasks — is a fundamentally difficult problem, meaning some tasks may be forever beyond its reach.

 

I hope this doesn't happen to radiology. AI is a non-issue; the corporatization of medicine, and of radiology in particular, is what we should oppose.

Private equity is a real issue. Many of the ACR job postings are from Rad Partners, Lucid, etc. (particularly in OH for some strange reason)... at this point the biggest difference between radiology and EM/anesthesia, etc. is that midlevels are unable to replace a physician rad. This could change with AI, but it's hard to see that happening anytime soon.
 
It's amazing how easily some people are duped by ludicrous claims about technology. Look no further than yesterday's Theranos founder ruling.
 