Research in Machine Learning & Artificial Intelligence


sudo
Hey guys, this might be a long shot but I wanted to at least give it a shot...

I'm a 3rd year med student interested in diagnostic radiology. I have an extensive, published background in computer programming and informatics. I'm interested in developing machine learning algorithms for different needs in radiology (including image recognition/classification).

It would be great if I could get involved in some research in this field but I'm not sure where to start with looking for places that are researching in these fields and willing to let a med student help out. Any tips for me would be much appreciated - thanks!

 
Hi sudo (nice username by the way), I was a software developer before medical school and I'm applying this year for residency in radiology. I'm a big proponent of machine learning in radiology. What I would suggest is finding a mentor and/or a summer project with someone who is well-known in the field. One of my mentors was Dr. Daniel Rubin at Stanford. He's a radiologist who publishes a truly enormous number of radiology informatics papers. You could send him a cold email explaining who you are and offer to provide free programming work. He likes to interview people before bringing them onboard, but if you have a solid resume and are willing to work for free, he'll likely welcome you. Then you could try and arrange a summer project with him.
 
What I would suggest is finding a mentor and/or a summer project with someone who is well-known in the field.

Naijaba, thanks for the information. This sounds like the perfect opportunity if I can make it work! Thank you so much for the quick reply - I'm glad I made this thread!
 
I wonder if other specialties have people in the field actively trying to ruin it
 
There's an idea that radiology can be automated. The move rests on analogies from the past, where mass production of goods (like cars or cotton textiles) resulted in cheaper, more reliable products for all. And enormous wealth for those who control the machines.

Then there's computing power, which can be put to use powering the machines instead of gas or steam.

I don't worry about it too much because I think the analogies are misguided. The underlying assumption is that radiologists produce a commodity (a popular and incorrect belief) that can be better and better homogenized and eventually mass produced. On this view, the artisans in the guild who kick at the automation of their work should just get out of the way of progress.

But an MD interpreting an image is a fundamentally different act than building a car, IMO. I tend to think of cooking or writing as better analogies. You can automate food prep all you want, but when it comes down to really good food, you need a trained chef to prepare it. The flexibility and self-awareness are important. Similarly, you could have a computer generate novels for mass consumption, but the quality would be poor and they would become somewhat of a joke.

What the believers in machine/deep learning, A.I. (or whatever term you want to use) seem to say is that they can replicate a human mind so closely that they can capture the things about it that make it human. Visual interpretation with any degree of workable flexibility and subsequent interpretation is one of those things. They're either excited about something that's more magic and sci-fi comics than actual science, in which case I'm not going to worry about it; excited about something over 100 years in the future, in which case I'm not going to worry about it because it will phase in slowly; or the world is about to change immediately because we have workable simulacra of the human mind, in which case I'm not going to worry about radiology, because there are much bigger things to worry about.
 
I do agree that the motivation for people who want to create a human mind from computers is a little odd. It's certainly not a thing to take lightly. What's that saying, "the road to hell...
 
I wonder if other specialties have people in the field actively trying to ruin it

Well, maybe they'd like to be part of radiology's "leadership", so they figured they might as well get on with ruining the specialty even before residency starts.
 
Well, maybe they'd like to be part of radiology's "leadership", so they figured they might as well get on with ruining the specialty even before residency starts.

The specialty has been changing for a while:

1. Workstations have templates, dictaphones, hotkeys, high-speed PACS connections, thin-client viewers, server-side rendering of images, etc. All of these features exist for one reason - helping radiologists read more images in less time.
2. Twenty-four DR programs had unfilled spots last year, and yet new programs continue to open.
3. The radiologist job market is good.

These observations point to the fact that image volumes are continuing to rise.

I think the leadership is well aware of these points, although I'd be happy to give them my thoughts on how machine learning might be rolled out. A lot of companies are focusing on the "science fiction" or "magic" (not that I'd call it that), instead of fitting in with the radiologist's existing workflow. I've been following these trends for a while, and it looks like RadLogics is the first company to go about it the right way. Radiologists already rely on templates, so why not automatically modify those templates within PowerScribe itself?
 
I have no problem with changes that signal improvement in the field. You want to use technology to streamline the job of the radiologist? Please do. But if you have an interest in training computers to replace the job of the radiologist, I wish you abject failure. And that includes training computers to read "only" normal CXRs or head CTs. The successful specialties pursue expansion of their trainees' roles, not the limitation thereof. And if it does come to computer replacement of DRs, it will not result from consideration of better care - it will result from a better bottom line for payors. I, for one, have no interest in sacrificing myself or my career on that particular altar.
 
But if you have an interest in training computers to replace the job of the radiologist, I wish you abject failure.

Hi radsisrad, I appreciate your honesty, and I think you've hit the nail on the head with respect to the attitude of many practicing radiologists. Many welcome enhancements in workflow, but are starkly opposed to a fully automated approach.

The point I try to argue is that a fully automated approach is nearing feasibility, and, if it comes to fruition, the market will have to adapt. I believe that radiologists are the ones who should lead the adoption of this technology; otherwise other departments (or payors, as you mention) will be the drivers of the technology. It's not far-fetched to imagine a referring physician simply not referring if they can purchase an in-house system to read chest x-rays and receive the reimbursement themselves.
 
Clearly you see no need to temper your claims with practical experience. Faith finds experience an annoyance, I suppose.

Your grasp of how a fully automated system would play out does not seem accurate. The hospital would buy the system and pocket both the technical and professional fee for a year or two as a ghost crew of employee radiologists would sign off on generated interpretations. Then there would be arguments that the professional fee is no longer valid and the whole thing would be lumped as a technical fee, which would be a true commodity and priced accordingly.

You seem to be laboring under the illusion that thinking is a mechanical process, a commodity... stereotypical engineer stuff. I refer you back to the chef example. I can tell the difference between a fast food hamburger and a well prepared hamburger. The fast food hamburger could be "noninferior" depending on how you want to look at it, but it won't be better. If we have truly created a human mind with all its flexibility, nuances, and expressive capability, then kudos to us and god help us.

My dear, since you seem to think IR will save you, I think it's "nearly feasible" that I could replace most interventionalists with NPs or techs wearing augmented reality goggles. Since when is skilled manual labor immune from automation? Do you really think getting arterial access is the dividing line for job security? I could automate the IR clinic (or any clinic) as well in this fantasy world we'd be living in.

Your motives to me are a mystery. I feel you are being strongly influenced by some researcher, since no medical student I've ever met has ever had the pure arrogance to say that they would like to meet with the leaders of the specialty to discuss how the roll-out of new technology should be implemented. I don't care that you were a computer scientist before medical school; half the residents in my program were also, and they are much more circumspect in their opinions (at least in public). I get the sense that someone told you that informatics and "leadership" are the smart way to an easy and wealthy life in radiology, and you apparently think interventional radiology is a "safe" path with prestige (because you wear a blue scrub cap, I guess). If you are correct that in the "near" future computer capabilities will render knowledge and experience valueless in the world (certainly radiology is not the only field at risk), then you've got much bigger things to worry about than your match.

I think it might be easier to automate an airplane than many aspects of medicine. It's all about routine and checklists. Pilots are expensive and suffer from human error. That day will come, I guess, after we learn from a few burning wrecks. Lucky us. I suppose you could say that the pilotless plane is "nearing feasibility."
 
Hi Gadofosveset, let me try to address your points as best I can.

Clearly you see no need to temper your claims with practical experience. Faith finds experience an annoyance, I suppose.

Your grasp of how a fully automated system would play out does not seem accurate. The hospital would buy the system and pocket both the technical and professional fee for a year or two as a ghost crew of employee radiologists would sign off on generated interpretations. Then there would be arguments that the professional fee is no longer valid and the whole thing would be lumped as a technical fee, which would be a true commodity and priced accordingly.

I don't have a background in healthcare economics, so I concede that it is not appropriate for me to predict how it plays out.

You seem to be laboring under the illusion that thinking is a mechanical process, a commodity... stereotypical engineer stuff. I refer you back to the chef example. I can tell the difference between a fast food hamburger and a well prepared hamburger. The fast food hamburger could be "noninferior" depending on how you want to look at it, but it won't be better. If we have truly created a human mind with all its flexibility, nuances, and expressive capability, then kudos to us and god help us.

While charged, your statement is correct. I have an engineer's perspective on radiology reads and medicine in general. Since you opened the box, let me nerd out for a bit.

I see the full complement of a physician's knowledge as a finite dataset. There are theorems from computer science about what is computable over a finite dataset. Take Google, for example. If you were to collect every radiology image ever recorded, it would not nearly approach the size of Google's index. Yet Google can return the correct webpage across trillions of webpages in a fraction of a second. This is because Google maintains a global ordering of all webpages, originally known as PageRank. The number of comparisons it takes to search a finite, ordered dataset is log2(# items). So if Google has 1 trillion webpages, it only takes log2(1 trillion) ≈ 40 comparisons. Not a billion, not a million. Forty. A truly amazing result that we use every day.
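
To make that arithmetic concrete, here's a minimal sketch of binary search over a sorted list, counting comparisons as it goes (illustrative only; Google's real index machinery is of course far more involved than this):

```python
import math

def binary_search(sorted_items, target):
    """Return (index, comparisons), or (-1, comparisons) if absent."""
    lo, hi, comparisons = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1, comparisons

# Worst case grows as log2(n): about 40 comparisons for a trillion items.
print(math.ceil(math.log2(10**12)))  # 40
```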

My dear, since you seem to think IR will save you, I think it's "nearly feasible" that I could replace most interventionalists with NPs or techs wearing augmented reality goggles. Since when is skilled manual labor immune from automation? Do you really think getting arterial access is the dividing line between automation and not? I could automate the clinic as well in this fantasy world we'd be living in.

Automating robots for procedures is a 3D vision task, and is extremely difficult. Do you remember the Strategic Defense Initiative ("Star Wars") under Ronald Reagan? Shooting down missiles with missiles is also a 3D vision task. Machine learning relies upon training, testing and validation inputs. With 3D vision, your inputs are constantly changing without a "gold standard" by which to compare. Should you move the catheter forward by 1 mm? By 2 mm? What about the angle of approach? There are so many variables. This is the same problem that Uber/Waymo/Tesla/GE, etc. are trying to solve with their self-driving cars. Their approach is to combine deep learning with reinforcement learning. Whereas I think that DR reads will be automated in 10-15 years, procedures are likely to be 20-25 years off or more.
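
To give a flavor of what reinforcement learning means here, a toy sketch with entirely made-up numbers (real robotic control is vastly harder): an agent learns by trial and error how far to advance toward a target position.

```python
import random

TARGET = 10                 # hypothetical target position, in mm
ACTIONS = [1, 2]            # advance 1 mm or 2 mm per step
alpha, gamma, epsilon = 0.5, 0.9, 0.1

# Q[(state, action)] estimates the long-run value of each move.
Q = {(s, a): 0.0 for s in range(TARGET + 2) for a in ACTIONS}

for episode in range(2000):
    state = 0
    while state < TARGET:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = state + action
        # Reward landing exactly on target; penalize overshooting.
        reward = 1.0 if nxt == TARGET else (-1.0 if nxt > TARGET else 0.0)
        future = max(Q[(nxt, a)] for a in ACTIONS) if nxt < TARGET else 0.0
        Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
        state = nxt
```

Deep reinforcement learning replaces the Q table with a neural network so the same idea can scale to raw sensor inputs.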

Your motives to me are a mystery. I feel you are being strongly influenced by some researcher, since no medical student I've ever met has ever had the pure arrogance to say that they would like to meet with the leaders of the specialty to discuss how the roll-out of new technology should be implemented. I don't care that you were a computer scientist before medical school; half the residents in my program were also, and they are much more circumspect in their opinions (at least in public). I get the sense that someone told you that informatics and "leadership" are the smart way to an easy and wealthy life in radiology, and you apparently think interventional radiology is a "safe" path with prestige (because you wear a blue scrub cap, I guess).

I love computer science and I like IR; see my other post here. Medicine is, in my opinion, the most important field to mankind. That it lags behind others in adopting technology is a travesty. I mean, hospitals adopted EMRs only because a law forced them to. There are still providers who complain about electronic notes. I don't have a strong track record of leadership and I'm more introverted than most, but we need more people in medicine who have strong opinions about technology. Call me crazy, but I think computer science should be on the MCAT. I haven't seen a kinematics equation since undergrad, but I know at least two residents who wrote scripts to automate their rounding lists.
 
I am a 3rd year rad resident currently interviewing for IR, here's my take on the matter.

While it can be said that machine image recognition will potentially meet or exceed human performance with deep learning, a radiologist is not just a pair of visual cortices.

The complete practice of a diagnostic radiologist requires all facets of human intelligence, including creativity, in dealing with complicated cases.

Therefore, machine learning to REPLACE radiology is more than an engineering problem; it's a scientific and philosophical problem.

I believe radiology cannot be entirely automated until GENERAL ARTIFICIAL INTELLIGENCE is created.

Prior to that, deep learning software + human radiologist will always beat software alone.

So say you manage to develop a general AI in 50 years (which is more than a radiology problem; it's a technological singularity/survival-of-mankind problem). I'll entertain you and say you can try to train a fully autonomous AI organism to read films.

But then we have other problems. For example, how do you ascertain a limitless general AI's motives? Are they going to be like the AIs in the Culture series and babysit us? Are they going to want to wipe us out?

OK, say you imprint the Three Laws of Robotics in an AI. Will the AI refuse to CT scan a patient because the radiation risk is greater than the benefit in a patient who has nebulous complaints?

Basically, if I were a student right now, I would feel comfortable that human radiologists will ALWAYS be needed in some way until the debut of general AI (as in independent, fully conscious artificial life).
 
At the risk of sounding simplistic, I would ask the following rhetorical question: who is it that is digging and scratching after automation in radiology? Is it radiologists? No. Is it referring primary care providers? No. Is it orthopedists, neurosurgeons, or radiation oncologists? No. It is computer scientists and venture capitalists.

Predict what you will... I will say one thing quite confidently: given the forces behind automation in radiology, this development (whenever it occurs) will be bad medicine.
 
I think there is a role for machine learning in interpretation. It will catch my misses from having been awake for 15 hours. It will craft a preliminary report which I will modify. It will make me faster and enable me to do other stuff (I would like to be 90 percent IR; I don't want to give up diagnostic entirely).
 
I'm always a little bit confused by why people think automation in radiology is just SO easy. What do the hippy tight-jeans mocha-sipping pansies think radiologists do all day? Who is going to answer the phone call from the orthopod? Who will biopsy the liver lesion? Protocol studies properly? Interact with techs? Go to tumor board? Make line recommendations to the intensivist? Is mystical Cylon-watson-tron going to do the fluoro study in room 2? Etc etc.

BUT WE'LL BE ABLE TO AUTOMATE MRIs WITHIN 3 YEARS!! Give me a break. Show me an EKG, a freaking 2D line, that doesn't get read by a cardiologist (who always ignores the "automated" read on top) and we'll talk.
 
At the risk of sounding simplistic, I would ask the following rhetorical question: who is it that is digging and scratching after automation in radiology? Is it radiologists? No. Is it referring primary care providers? No. Is it orthopedists, neurosurgeons, or radiation oncologists? No. It is computer scientists and venture capitalists.

I agree with you: computer scientists and venture capitalists are very interested in automating radiology for financial gain. They see a volume-based service that is data-driven and increasing in demand. The recent developments in deep learning suggest that automatic reads are feasible.

Deep learning relies heavily on calculus, linear algebra and statistics. It's just math, but the similarities to the human brain are striking: signals sent between a network of interconnected nodes, where one node's "action potential" is determined by the input of other nodes. The same neural network can be taught to solve multiple unrelated problems. You can embed, within the same network, complex pathways for identification of unrelated objects. A paper released five days ago shows that a deep learning network can learn to play a 3D game better than the best human players.
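
As a minimal sketch of that picture (random placeholder weights, not a trained model), here's a tiny network in which a single shared hidden layer feeds two unrelated "task" outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # Each node's output: a nonlinear squash of its weighted inputs,
    # the loose analogue of an "action potential" threshold.
    return np.tanh(W @ x + b)

x = rng.normal(size=4)                          # a 4-feature input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # shared hidden layer
W_a = rng.normal(size=(1, 8))                   # head for "task A"
W_b = rng.normal(size=(1, 8))                   # head for "task B"

h = layer(x, W1, b1)           # one shared internal representation...
print(layer(h, W_a, 0.0))      # ...read out by two unrelated task heads
print(layer(h, W_b, 0.0))
```

Training would adjust the weights; the point is only that one set of interconnected nodes can serve several tasks at once.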

The results above are impressive, but not lucrative. Computer scientists and venture capitalists see the above results and look for places where they can be applied to make money. Radiology becomes a natural target. I don't believe this is a bad thing; I believe that machine learning will eventually meet or exceed a radiologist's ability to interpret an image. Please do not dismiss the results above by saying that radiology image interpretation relies on the flexibility, creativity and self-awareness of the human mind. Deep learning is certainly flexible. There have been deep-learning-created pieces of art accepted at museums (without the curators knowing they were created by machine). The Turing test demonstrates that we can never know whether an AI is self-aware, only whether it is indistinguishable from a human.

Ultimately it comes down to a question of beliefs. Some may choose to believe that machine learning cannot exceed a radiologist's ability to interpret an image. As in other topics, an educated opinion requires perspective from both sides. I believe that as more and more computer science-trained individuals enter medicine, these discussions will become less hostile and more about how best to welcome the next innovation, even if that means the venture capitalists make some money.
 
No one denies that machines will one day be able to replace a radiologist, a worker at McDonald's, or your mechanic. The debate is over when.
 
I believe that machine learning will eventually meet or exceed a radiologist's ability to interpret an image.

Can you elaborate on when machine learning will produce a general AI? As I mentioned in my post, until we have true general AI, a radiologist's job cannot be replaced.

Again, even diagnostic radiology isn't entirely image interpretation. You have to answer a clinical question, which would take a general AI to do.
 
Agree with PL198. Will radiology be automated someday? Of course. So will pretty much every other job. The question is when.

Could McDonald's workers be automated today? Probably, but they're not. If a burger flipper isn't yet automated (due to whatever reason - practicality, cost, customer desire for human interaction) but someone is predicting the demise of radiologists within the next 10 years they are beyond debate and just trolling IMO.
 
Can you elaborate on when machine learning will produce a general AI? As I mentioned in my post, until we have true general AI, a radiologist's job cannot be replaced.

Again, even diagnostic radiology isn't entirely image interpretation. You have to answer a clinical question, which would take a general AI to do.

The challenge with general AI is that our current systems rely upon thousands (usually millions) of training samples to learn a new task. The human mind is capable of learning a new task that is similar to an existing task very quickly. For example, we don't have to give people examples of rib fractures for every rib. We simply teach what a rib fracture looks like, what a rib looks like, and the person puts two and two together. What deep learning would need, in its present state, is many examples of fractures from many different ribs. It would exceed human level of performance at fracture detection within its training domain, but would lack generalizability (for example, missing fractures in congenital cervical ribs).

With that said, the capacity of a neural network is exponential. A 12-layer network with 10 nodes at each layer has 10x10x...x10 = 10^12 = 1 trillion distinct paths from input to output, a scale often compared to the number of neuronal connections in the human brain. Such a network is so large that it can certainly store information about "What is a fracture?" and "What is a rib?" The challenge is training the model such that these subnetworks are activated correctly for new images containing a rib fracture in a novel location.
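
Here's the back-of-envelope arithmetic, for concreteness. Note the path count is a capacity intuition, not the number of trainable weights, which is far smaller:

```python
layers, width = 12, 10

paths = width ** layers                 # 10^12: one node choice per layer
weights = (layers - 1) * width * width  # 11 weight matrices of 10x10
print(f"{paths:,} paths vs {weights:,} weights")
# 1,000,000,000,000 paths vs 1,100 weights
```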

I know this sounds very existential, like science fiction, so let me give a concrete example from machine translation. Here's the paper from Google that came out this January. Machine translation is the process of automatically translating words and phrases into different languages (e.g., English to Spanish). A similar analogy can be drawn:

"What is a rib?" ---- "Using the word 'a' appropriately"
"What is a fracture?" --- "Context of the word 'innovation'."
"Signs of pulmonary congestion." --- "Words that mean fast."

The examples on the right are from Google's paper. They show specific (randomly chosen) subnetworks within the overall neural network.

When Google's network is asked to translate a phrase such as "Innovation is a fast moving target," these three subnetworks (along with many others) are triggered.

A "general AI", in my mind, is a deep learning model that contains billions of subnetworks each capable of handling a given task and whose abilities can be linked together to solve de novo tasks.
 
A "general AI", in my mind, is a deep learning model that contains billions of subnetworks each capable of handling a given task and whose abilities can be linked together to solve de novo tasks.

Couldn't you wait 'til after match to crush all of our dreams?
 
All these fancy terms, but nobody can tell me why AI can't read an EKG, yet is expected to replace radiologists in the near future.
 
A "general AI", in my mind, is a deep learning model that contains billions of subnetworks each capable of handling a given task and whose abilities can be linked together to solve de novo tasks.

I don't want to go on record as an enemy of progress, and I consider myself somewhat computer-comfortable -- I taught myself C++ in high school -- although I'm certainly no computer scientist.

I think there's somewhat of a confusion of ideas here. One is whether a "neural network" can essentially become an error-free, tireless human mind. Maybe, sure. Last time I checked, we don't really understand the human mind that perfectly, but sure, maybe it's possible that we can create an electronic replica of higher brain functions.

But as one gets older, the interesting questions become not so much "can we do this?" as "why?" and "who benefits?" I don't buy the humanitarian angle... technology as a panacea for the world's ills is an old-fashioned idea. The world has enough technology to generate enough food to feed everyone in the Sudan, but we don't. We're lucky we haven't blown ourselves up (fingers crossed). Industrialization wiped out a lot of manual labor and made everyone's lives better, but at a cost. Marx got a movement going as a result of concentration of power with industrial capitalists and workers alienated from their labor by machines. It'll be interesting to see what kind of movements people come up with when knowledge workers are displaced by neural networks. According to your analysis, creative endeavors are also redundant since a machine can simulate human artwork.

I think the argument here is not so much disagreement with the details of the theoretical stuff you're pasting from the Google think tank. I think the argument is with the one-sided way you're presenting it... more like creating godlike computer minds is a great way to advance your individual career, since you don't really seem to bring any other perspective into this.

There was a short online article (WSJ?), in which some physicist was comparing neural networks with the transition from oral history to written history. Needless to say, he was all for it. An imperfect analogy. Thinking is what defines us as human (memory is a part of that, sure). Flexibility in thought (creativity) is something you mentioned as possible with the neural network. If humans are displaced from thinking and creativity, then what is their role? This is a question bigger than "How can I ride this tech movement to make some bucks and advance my career?"
 
If humans are displaced from thinking and creativity, then what is their role?

something none of the computer science guys understand.
 
Thinking is what defines us as human (memory is a part of that, sure).

I'm giving this an A++ :)

We are so much more than "neural circuitry."

A machine will never be a Doctor, nor will a machine ever "care" for a patient.
 
All these fancy terms, but nobody can tell me why AI can't read an EKG, yet is expected to replace radiologists in the near future.

I am not a cardiologist, but I have seen automated EKG interpretation miss stuff. There is a reason why each EKG has to be signed off as having been read by an MD.
 
I am not a cardiologist, but I have seen automated EKG interpretation miss stuff. There is a reason why each EKG has to be signed off as having been read by an MD.

Exactly.
 
Who is going to answer the phone call from the orthopod? Who will biopsy the liver lesion? Protocol studies properly? Interact with techs? Go to tumor board? Make line recommendations to the intensivist? Is mystical Cylon-watson-tron going to do the fluoro study in room 2? Etc etc.

This should be higher up in the conversation and also falls in line with what DrFluffyMD is discussing about general AI. It seems that there is a big misconception (even among other physicians) that Radiologists only read images and that those images are either "benign" or "malignant." Even if computers are able to provide full reports similar to a Radiologist's, further consultation is often needed after the report that will influence patient care; only a general AI would be able to handle this. And this isn't even counting the myriad of other tasks that a Radiologist is responsible for, mentioned above.
 
Departments and PPs, as well as radiology websites, should refuse to get on board with any such effort that's not being driven by us. If AI in radiology is driven and directed by physicians in the field, great. Silicon Valley can go screw itself.
 
Excellent New Yorker article by the author of "The Emperor of All Maladies", Siddhartha Mukherjee: A.I. VERSUS M.D.

Highlights of the article:
-He straightaway addresses old-school rules-based systems vs. deep learning.
-He addresses old technology failing on mammography datasets
-Highlights how deep learning has already surpassed dermatologists
-Covers the "black box" problem with deep learning (i.e., do we really know what it's doing?)

I think the most salient paragraph is the quote by Dr. David Bickers, chair of dermatology at Columbia:

“Believe me, I’ve tried to understand all the ramifications of Thrun’s paper,” he said. “I don’t understand the math behind it, but I do know that such algorithms might change the practice of dermatology. Will dermatologists be out of jobs? I don’t think so, but I think we have to think hard about how to integrate these programs into our practice. How will we pay for them? What are the legal liabilities if the machine makes the wrong prediction? And will it diminish our practice, or our self-image as diagnosticians, to rely on such algorithms? Instead of doctors, will we end up training a generation of technicians?”

These are the same questions that radiologists must ask themselves now, and even more so for diagnostic radiologists who provide diagnoses only (i.e. are not treating providers).

Edit: It's my personal belief that MDs must understand the math behind these systems. Why do we have physics and chemistry on the MCAT, but spend 80% of our time sitting behind computer systems without understanding how they work? Computer science and math are becoming essential to medicine, especially in technical fields like radiology/radiation oncology/neurosurgery.
 
These are the same questions that radiologists must ask themselves now, and even more so for diagnostic radiologists who provide diagnoses only (i.e. are not treating providers).

By becoming a diagnostic radiology resident, you are in a unique position to tackle this problem. We must own and control AI. As you go through your training, you will be fascinated by how much more a human radiologist is than the machine.
 
Agreed - I will be very interested to see if and how your views change once you actually start doing radiology, Naijaba; it's not nearly as mechanical as people think. In that article you mention, for example, that arrogant AI guy went from saying that all radiologists would be out of jobs in a few years to saying it would just "change their role" and they aren't threatened at all, once someone explained to him what radiologists do.
 

Some of the pro-AI crowd says extremely inflammatory things. Quotes like "reading a radiograph is easier than driving" or "hospitals need to stop training radiologists now," per Hinton.

I do agree that we need to cut our field to rad onc size (200-300 spots) if someone comes out with an AI solution that is 90 percent as good as a human and can read most general stuff.
 
Lemme start by saying, I'm pro augmented intelligence. Note that I didn't say "artificial" and that it's not an arbitrary distinction.

But...I'm skeptical. More on that to follow...

I'd like to offer a few thoughts on things discussed above. Take them for what they're worth...musings from a semi-anonymous radiology resident posting on the interwebs.

1. Naijaba is straight up avoiding the EKG question...just like every other artificial intelligence proselyte on these (or any) boards.

2. From the neuroscience perspective, "neural networks" in AI are based on a simplified version of the cable model of electrochemical integration and conduction of potentials through a neuron. Many assumptions are made in this simplification...many of which have been proven wrong. So when a med student with a background in CS (or at least enough knowledge of CS acquired from programming experience to fool me) lauds the "striking similarities" between NNs and neuronal processing, it suggests that he/she either doesn't understand or greatly underestimates the leaps required for the assumptions underpinning the math that facilitates the model. I won't deny that they produce similar results, but to argue that AI computation and neural computation have a "striking" degree of similarity betrays a poor understanding of current theories of neural computation (which are incomplete, at best, and even recently significantly updated - see news about dendritic processing). Furthermore, most NNs have a very different architecture and greatly increased network density from what "we" understand of the connectome.

3. Bridging to the CS perspective, many AI systems have been trained in specific image analysis tasks to be better than humans. However, the difficulty ahead lies in expanding the skill set of any one system to incorporate many tasks, and doing so within a reasonable computational capacity in a reasonable amount of time. Currently, AI requires brute-force training on extremely large data sets to even achieve similar results to humans. Whereas I can show a mediocre med student a handful of extra-axial hemorrhages on noncon head CTs and he can somewhat reliably distinguish between epidural and subdural hematomas about as well as some of my more neuro-averse co-residents, CNNs require orders of magnitude more such "examples" in order to achieve a similar degree of accuracy to humans in classifying perifissural lung nodules. Our brains are really good at learning new tasks with sparse data (thanks, evolution!). However, where our brains excel in learning new tasks efficiently, they also fail us in trying to simplify and automate too much, resulting in heuristic failure and susceptibility to bias. Currently, it seems that NNs don't generalize very well (with a few exceptions), leading SV programmers and VCs to regard image analysis as a problem solvable with "big data" (omfg, that's so sexy, I wanna pump a billion in seed funding into something just thinking about it). However, as the problem scales, so does the computational power required to solve the problem. "But Moore's Law..." you may argue. "...is probably no longer valid..." many electrical engineers and computer scientists will respond. Though quantum computation offers the future possibility of a significant leap into a new dimension of computational power that has yet to be quantified, practically speaking.

4. Finally, the humanistic perspective... Much of the art of radiology relies on integration of the clinical history with laboratory and pathologic data, prior imaging across different modalities and ancillary clues to the diagnosis contained within a given imaging study. Anyone who has practiced radiology for any amount of time knows that any of these factors can significantly influence your final interpretation. Computers (AI or otherwise) are only as good as their inputs. Also, those with at least a few days of experience as radiologists will have experienced the daily battle of closing the loop in communicating critical results or clarifying the desired imaging study when you suspect the order was entered wrong in Epic and the structured indication has very little to do with anything that's actually wrong with the patient. So...I think it becomes pretty clear that any AI will have to come a lot closer to general AI before it replaces human radiologists. Then there are all of the regulatory hurdles, including FDA approval, insurance reimbursement, provider/patient/public acceptance... Try convincing a Trump voter that a computer should read his/her MRI and get paid for it...

Radiologists should be at the forefront of developing AI technology, because it's best for everyone if we help guide it toward supplementing/improving what we offer our patients and referring providers, rather than to let "enterprising" SV programmers try to brute force their way into generating a product that will do more harm than good. We should design it to do well what we can't. I'd love for a CNN to analyze every pulmonary nodule I see (or don't see) in the lung apices on a neck CT and give me a probability of malignancy that accounts for that patient's risk factors with error margins that account for missing information (above and beyond the Fleischner guidelines). I wouldn't copy-paste it into the report necessarily, but it'd help me feel better about not recommending the follow-up chest CT that will inevitably continue the incidental domino game.
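
To be concrete about what I'm imagining, something like this hypothetical sketch (made-up coefficients, not a validated model) that folds a CNN's image score into a patient-level malignancy probability:

```python
import math

def malignancy_probability(cnn_score, age, smoker, nodule_mm):
    """Hypothetical logistic blend of a CNN output and risk factors."""
    z = (-6.0
         + 3.0 * cnn_score             # CNN's image-based score in [0, 1]
         + 0.03 * age                  # made-up age coefficient
         + 0.8 * (1 if smoker else 0)  # made-up smoking coefficient
         + 0.1 * nodule_mm)            # made-up size coefficient
    return 1 / (1 + math.exp(-z))

# e.g., CNN score 0.7, 62-year-old smoker, 6 mm apical nodule
print(f"{malignancy_probability(0.7, 62, True, 6):.1%}")
```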
 
Radiologists should be at the forefront of developing AI technology, because it's best for everyone if we help guide it toward supplementing/improving what we offer our patients and referring providers.

Hi Puff-of-Snow, thanks for your insightful comments on the topic of AI. Here are my thoughts:

1. Ignoring EKGs: You're right, there isn't a successful EKG reader. I believe this problem could be 100% solved by deep learning. It's an order of magnitude easier than radiographic imaging. I think nobody's gotten behind it because there isn't any money in it. For better or worse, financial considerations drive where people spend their time, and the computer vision world is much more interested in dermatology, pathology, radiology, self-driving cars, terraforming Mars, etc. (a sketch of what an EKG model might look like follows this list).

2-3. Regarding the similarity to the neural cortex, let's take a step back; forget about machine learning for a moment. Suppose we didn't know anything about algorithms or data science and we just had a calculator sitting on our desk. The calculator, though a dumb, simple device, is remarkably similar to our minds. Why? The transistor is to the human neuron as a bat wing is to a bird wing. Transistors and neurons are electrical switches controlled by electricity. The idea that an electrical component can control other electrical components is what makes thought and computation possible. I love the history of computer science almost as much as the field itself. I recommend an excellent nonfiction book called The Chip by T.R. Reid, as well as the more popular Code: The Hidden Language of Computer Hardware and Software by Charles Petzold. Now, of course calculators and computers aren't sentient. Sentience relies upon the computational model: just how do those transistors or neurons talk to one another? A computer's processor uses a clock-based mechanism: every pulse of the clock updates the instruction pointer and a new instruction is executed. The human mind works very differently; maybe we run under a quantum model of computation? Who knows. Software-driven neural networks are an approximation of the human model of computation built on top of the clock-based mechanism of the computer's hardware. If you found a way to train a software neural network using a different model of computation, then we'd be entering the Twilight Zone...

4. I agree that imaging-based models are insufficient to capture 360 degrees of a patient's clinical story. There's a lot of work yet to be done in integrating written and oral communication into learning systems. The obstacles, as you mention, are enormous, and I'm not personally convinced of the value of machine learning in "reading clinical notes." I work part-time for a natural language processing (NLP) company, and our primary business focus has been billing. There aren't many monetary opportunities for NLP on the provider side of things... and unfortunately revenue drives a lot of technological production, especially in healthcare, where upfront R&D costs are high.
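
Back on point 1, part of why EKGs look tractable to the computer vision crowd is that the input is so simple: a 12-lead tracing is just a 12-channel 1-D signal. Here's a sketch of the kind of small 1-D CNN people use for rhythm classification, in PyTorch; the layer sizes, the 4-class label set, and the random tensors standing in for real tracings are all invented for illustration:

```python
# Sketch of a 1-D CNN for EKG rhythm classification (PyTorch).
# Random tensors stand in for real tracings; the shapes and the
# 4-class label set are assumptions for illustration.
import torch
import torch.nn as nn

class EKGNet(nn.Module):
    def __init__(self, n_leads: int = 12, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, leads, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = EKGNet()
fake_batch = torch.randn(8, 12, 5000)  # 8 tracings, 10 s at 500 Hz
logits = model(fake_batch)             # -> (8, 4) class scores
print(logits.shape)
```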
 
@Naijaba if I may ask, do you believe AI will take over radiology soon? If so, why are you entering radiology? Not trying to be a jerk at all - just curious about your viewpoint. Thanks!
 
He wanted to do IR; IR and rads are totally different.

I do think the lack of any machine-learning-based EKG reader potentially illustrates that the approach breaks down in a "messy" situation. As of RSNA 2016, the state of the art identifies the adrenal gland (not nodules, the gland itself) 50-60 percent of the time, I believe.
 
I think 15 years is a good number for widespread adoption. I know there's a lot of focus on AI and its impact on radiology, but I think there are other factors that diagnostic radiologists should be concerned about. The PACS was a major boon to radiologists' throughput, but it also gave referring physicians easy access to images. Images are no longer siloed in the radiologist's workroom, and residents routinely learn to read images within their own domain. The finances haven't caught up with this situation, and the current radiology reimbursement model is quite at odds with value-based care. Let me give some concrete examples:

1. A patient presents to the ED with suspected pneumonia. The ED attending orders a chest radiograph and confirms the diagnosis on the mobile monitor (the screen attached to the mobile x-ray unit). The attending admits the patient to medicine and the internist reviews the imaging on their local workstation (e.g. via an Epic link to the PACS). Neither the ED attending nor the internist needed the radiologist's read. An hour or so later the x-ray is reviewed by the radiologist. The radiologist is reimbursed even though their impression was not used to guide care. Should the patient be charged for the radiologist's read when their contribution did not affect treatment? Many would say, "Well there could have been something else going on, only visible to the radiologist!" But that's exactly the wrong answer in an ACA world. If a referring provider reads an image and identifies a reasonable diagnosis, the marginal value of a radiologist's read is too low given the expense.

2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

The question about AI / machine learning is set against the backdrop of these observations about value-based care. I'm fascinated by radiology because it has long been the one specialty that values innovation and embraces technology. I think the future of radiology looks like MSK/Breast/IR => more procedures and interaction with patients, with fewer reads. The read volume can be reduced by a) not reading every image that hits the PACS, as noted above, and b) using machine learning / AI to screen out simpler reads such as normals (a sketch of the operating-point math follows).
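
On point (b), the crux is the operating point: a "screen out the normals" model has to be tuned for near-perfect sensitivity, not overall accuracy, or it quietly buries the misses. A sketch with simulated scores (the score distributions and the 99.9% sensitivity target are assumptions, not a validated system):

```python
# Pick a triage threshold so that essentially no abnormal study is
# auto-labeled "normal". Scores here are simulated; a real system
# would use validated model outputs on a held-out clinical set.
import numpy as np

rng = np.random.default_rng(0)
# Simulated P(abnormal): true abnormals score high, normals low.
p_abnormal = np.concatenate([rng.beta(8, 2, 1000),    # 1000 abnormals
                             rng.beta(2, 8, 9000)])   # 9000 normals
is_abnormal = np.concatenate([np.ones(1000), np.zeros(9000)]).astype(bool)

# Lowest threshold that still catches >= 99.9% of abnormals.
target_sensitivity = 0.999
threshold = np.quantile(p_abnormal[is_abnormal], 1 - target_sensitivity)

auto_normal = p_abnormal < threshold
missed = (auto_normal & is_abnormal).sum()
print(f"threshold={threshold:.3f}, "
      f"auto-triaged as normal: {auto_normal.sum()} of 10000, "
      f"abnormals missed: {missed}")
```

On these made-up numbers the threshold auto-triages a chunk of the normals while missing roughly one abnormal in a thousand by construction; whether that trade is acceptable is a clinical and legal question, not an engineering one.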
 
As you progress in your residency, you'll realize the "normals" are the hardest exams.
 
Thanks for your answer.

I'm surprised our experiences have been so different. In my experience, the ED and IM docs only catch the obvious pneumonia. Otherwise, they hold off on treatment/dispo (unless treating empirically) until the radiologist's report. Probably 9/10 times the ED doc says, "yea, let's see what radiology thinks first," particularly for anything other than radiographs and maybe some ultrasounds. Just my experience. Same thing with the surgeons. Sure, they scroll through the CT prior to an abdominal case and maybe even during the case, but they still rely on the radiologists' reports, and most would never want the liability, or even spend the time, of interpreting cases themselves. While it is true neurosurg residents are good at reading their own imaging, they don't spend time studying the theory, physics, etc. of the modalities.

I find it odd you predict doom for radiology yet ranked DR programs above some IR/DR programs on your match list, but no judgment from me. I wish you the best!
 
1. A patient presents to the ED with suspected pneumonia. The ED attending orders a chest radiograph and confirms the diagnosis on the mobile monitor (the screen attached to the mobile x-ray unit). The attending admits the patient to medicine and the internist reviews the imaging on their local workstation (e.g. via an Epic link to the PACS). Neither the ED attending nor the internist needed the radiologist's read. An hour or so later the x-ray is reviewed by the radiologist. The radiologist is reimbursed even though their impression was not used to guide care. Should the patient be charged for the radiologist's read when their contribution did not affect treatment? Many would say, "Well there could have been something else going on, only visible to the radiologist!" But that's exactly the wrong answer in an ACA world. If a referring provider reads an image and identifies a reasonable diagnosis, the marginal value of a radiologist's read is too low given the expense.

The ED put one of our smaller hospitals on divert a couple of months ago because the on-call radiology residents and attending were too busy reading cross-sectional studies from the main hospital to keep up with the radiographs. They couldn't get an official read on that chest series, so they basically shut the whole place down.
 
2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

First of all, let's recognize that the patient would still be wallowing at home with headaches had the radiologist not seen the mass, called the nurse practitioner, and told him/her to send the patient to neurosurgery.

Secondly, who cares whether or not they're reading the note during the procedure? What a strange metric.

Thirdly, many subspecialists are quite good at interpreting within their field, but you know who's better? Radiologists in the same subspecialty. I guarantee you that, save an emergent intervention (e.g. epidural hematoma), the surgeon is at least taking a peek at the read. If nothing else, they want to know what will officially be in the record, and if the corresponding subspecialist in radiology is working, then that radiologist is adding value.

Fourthly, your last idea is entirely at odds with both practical and legal considerations. Nobody else wants the legal burden of owning the interpretation, and the radiologists aren't going to accept a contract that lets the ordering providers determine which studies filter to them.
 
Thanks for your answer.

I'm surprised our experiences have been so different. In my experience, the ED and IM docs only catch the obvious pneumonia. Otherwise, they hold off on treatment/dispo (unless treating empirically) until the radiologist's report. Probably 9/10 times the ED doc says, "yea, let's see what radiology thinks first," particularly for anything other than radiographs and maybe some ultrasounds. Just my experience. Same thing with the surgeons. Sure, they scroll through the CT prior to an abdominal case and maybe even during the case, but they still rely on the radiologists' reports, and most would never want the liability, or even spend the time, of interpreting cases themselves. While it is true neurosurg residents are good at reading their own imaging, they don't spend time studying the theory, physics, etc. of the modalities.

I find it odd you predict doom for radiology yet ranked DR programs above some IR/DR programs on your match list, but no judgment from me. I wish you the best!
I've had a similar experience. There are a couple of high-profile neurosurgeons at my institution who are regular visitors to the neurorads reading room, especially before complex cases.

Also, I agree about the liability point. In order to be reimbursed for reading images, one would also have to agree to assume the legal liability for the interpretation. For the most part, surgeons are very averse to this idea, and would prefer to let the rads provide an official read while they use the images in the OR for their own orientation.
 
I appreciate everyone's perspective. Certainly the model of practice I've seen is not consistent across all hospitals, and may even be a minority case. My purpose was to highlight that referring providers are able to view images very easily now. I think that fee-for-service is synonymous with "read everything that hits the PACS," whereas value-based care calls for something else.
 
1. A patient presents to the ED with suspected pneumonia. The ED attending orders a chest radiograph and confirms the diagnosis on the mobile monitor (the screen attached to the mobile x-ray unit). The attending admits the patient to medicine and the internist reviews the imaging on their local workstation (e.g. via an Epic link to the PACS). Neither the ED attending nor the internist needed the radiologist's read. An hour or so later the x-ray is reviewed by the radiologist. The radiologist is reimbursed even though their impression was not used to guide care. Should the patient be charged for the radiologist's read when their contribution did not affect treatment? Many would say, "Well there could have been something else going on, only visible to the radiologist!" But that's exactly the wrong answer in an ACA world. If a referring provider reads an image and identifies a reasonable diagnosis, the marginal value of a radiologist's read is too low given the expense.

2. Another example: There's a joke amongst neurosurgical residents that they should be dual-certified in neurorads, because seven years of looking at neurosurgical images is at least as good as a one-year neurorads fellowship. No surgeon, let alone neurosurgeon, would go into the OR without reviewing the patient's imaging, and many surgeons have the patient's imaging up throughout the whole operation. I've never (personally) seen a surgeon read the radiologist's note during the procedure. Shouldn't the surgeon therefore be reimbursed for reading the patient's image while in the OR? Again, it's a misalignment of value-based care. If the surgeon has a question about the image, he/she should request a radiologist's interpretation. The model of reading every image that hits the PACS needs to change.

These are two interesting examples. For the first, pneumonia is a clinical diagnosis, with imaging on an as-needed basis, so many physicians are likely to feel comfortable starting abx before a final read. Having spent some time in a level 1 ED and on the wards, I can say with certainty that even in "straightforward" cases such as looking for a pneumonia, there is enough hesitation/uncertainty among ED physicians and internists that they wait for final reads. And who can blame them, really? They are swamped with work, and learning to read images is on the back burner for most of them.

For your second point, I've also never seen a surgeon read a note during a procedure (which would be odd, but maybe not what you meant), but I certainly see them reading reports in clinic and in the OR before the patient is wheeled in. I wouldn't take what is simple ribbing among residents as truth. We've all heard residents and attendings discussing how easy the work of another field is when they've never worked a day in it. I do agree completely that the field needs to continue to push for integration into patient care versus only sitting in a room reading images.
 
At a major academic center here. A mentally disabled kid came in with a major cellulitis infection, and we noticed some bruising in various stages of healing. Alarm bells rang and we imaged the crap out of him. The IM, Peds, IM/Peds, and ER attendings all "read the X-rays" and were about to discharge the kid. Literally 30 minutes before the kid goes home, rads calls up the ER attending and is like DONT SEND THE KID HOME! Rads found 2 fine fractures in various stages of healing that all the other attendings had missed, even when they were looking for them. We found out that he was being abused at home.

Give rads an indication and we will read the **** out of the scan. I can sleep easy knowing that it will be a long time (if ever) before AI can do that. And this is coming from someone with an undergrad CS degree who has done big data research at a national level.

It's like the EKG machine reading "normal" but missing the early signs of the STEMI that cards finds (this has happened multiple times).

If anything, rads is going to be augmented and we will just read more.
 