This timely article from JAMA popped into my Inbox today. Not oncology specific by any means, but rather than a bunch of internet randos (like me) spouting off on things we may or may not have any actual knowledge of, it was written by a medical informaticist (and hospitalist) at UCSF and an AI researcher at Stanford, so worth considering as a "Category 2B" recommendation per NCCN guidelines.
I'll also note that it reiterates a lot of things that have been said here already (hooray for internet randos!), noting that there are a lot of easy wins for AI in medicine (scribing, prior auth/appeals, billing, scheduling, etc) that are likely to be implemented relatively quickly. But any meaningful impacts on what we consider "patient care" are going to be quite a bit further down the road. The legal, regulatory, privacy, "Six Sigma" and other concerns inherent in healthcare are going to be much larger barriers to AI in medicine than the technology itself, or even its adoption by physicians and healthcare systems, will be.
I can definitely see some impressive inroads in decision support tools in the short term as well. My prior employer used (and was instrumental in creating) what is now the Elsevier Clinical Pathways program. I spent (and still spend, as my current employer is also a user) a lot of time working with them on creating, maintaining, and updating the pathways. I think it's a very useful tool, but it requires far too much human interfacing to maintain and improve. AI would definitely be a huge benefit in a setting like that.
I just finished reading a New Yorker profile/interview of Jensen Huang, the founder and CEO of Nvidia, which morphed from a gaming chip company into an AI company that is currently the Tesla/Microsoft/Amazon/Walmart combined of AI computing power. The article was an interesting bit of insight into the business of AI that I certainly wasn't aware of. What a lot of people talk about as the advantage of AI is not so much the work it can do, but the cost savings of the work it does. We've known for some time how much energy generative AI uses (ChatGPT-4 by itself consumes the same energy as 33K US households daily), but I wasn't aware of the hardware cost on top of that. Nvidia's flagship A100 machine goes for ~$500K a box. ChatGPT-4 by itself was created (and is maintained) using ~25,000 of them, which, at retail (which OpenAI clearly didn't pay), works out to $12.5 motherf***** BILLION(...with a B) for hardware alone. For scale, the entire US government budget for FY22 was $6.5T and the US GDP for FY22 was ~$25T, so a single model's hardware bill is already in the territory of a sizable federal program, and it's hard to imagine sustaining that kind of spend across the whole AI industry indefinitely.
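As a quick sanity check on those figures (taking the ~$500K-per-box and ~25,000-box numbers at face value; both are the article's estimates, not verified specs):

```python
# Back-of-the-envelope check of the GPT-4 hardware cost estimate,
# using the figures quoted above (both are rough estimates).
cost_per_box = 500_000            # ~$500K per Nvidia A100 system (article's figure)
num_boxes = 25_000                # ~25,000 systems reportedly used (article's figure)

total_cost = cost_per_box * num_boxes
print(f"${total_cost:,}")         # $12,500,000,000 -> $12.5 billion

us_gdp_fy22 = 25_000_000_000_000  # ~$25T US GDP, FY22
print(f"{total_cost / us_gdp_fy22:.4%} of US GDP")  # 0.0500% of US GDP
```

So the retail hardware tab comes to billions, not trillions — still a staggering outlay for a single model, but a small fraction of GDP rather than half of it.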
On a related note, a surgeon colleague of mine called me to talk through a few cases this morning, and we wound up talking about AI in medicine. He said that the first time an AI bot starts screaming at a patient and storms out of the room over their circular logic and asking the same questions over and over again (which, let's be honest, is probably a pretty common part of most of our days in clinic) will be the end of AI in patient-facing clinical medicine.