Timmerman Editorial and Discussion


elementaryschooleconomics

[attached screenshot]


This is, quite possibly, one of my favorite Red Journal articles to be published in a long time, perhaps ever.

[attached screenshot]

 
  • Like
Reactions: 3 users

While I respect Timmerman, this is what I've railed against for years and years when people just accept Timmerman constraints (which are frequently much higher than what I would routinely feel comfortable with) given their completely unvalidated nature.

Having read the editorial, though, I get why he did it. Still, I think anyone NOT using HyTEC in 2021 is a damn fool, given that it reflects an actual literature review and not just the "feelings" of one guy. The main issue is the paucity of high-quality data in this space for individual dose constraints. I agree that trials to see if dose constraints can be safely pushed, on protocol, are worthwhile.
 
  • Like
Reactions: 3 users
I think you're absolutely correct.

Part of the reason I love this editorial is that I feel it lays bare what underpins A LOT of modern Radiation Oncology. We're a very small field that has accumulated hyper-competitive people with somewhat delicate egos. We are incredibly susceptible to deferring to authority and doing things a certain way "because that's how they've always been done".

Newer, evidence-based doses/regimens/constraints, etc., are very slow to be adopted (if ever) because the people in power are slow to change.

I do not think this is unique to Radiation Oncology, just that many of these common issues are very amplified because we're so small, and I think it's very rare to see something spelled out in black and white like this editorial is.
 
  • Like
Reactions: 3 users
Just because I know people have a habit of reading headlines and not reading articles:

[attached screenshot]


I would argue these constraints are foundational for modern SBRT.

He made them up, asked Fowler to look them over (which implies Fowler may or may not have actually done so), intentionally exploited a loophole in the medical literature to get them published, and then exploited the human tendency to copy references instead of reading them yourself (which we're all guilty of, I know I am).

This editorial is remarkable. I know there are probably a lot of things we do in medicine that were born the same way, but having this spelled out so plainly is amazing.
 
  • Like
Reactions: 3 users
[attached screenshot]


As many times as I have used the original 2008 tables and the versions updated since then, I had never noticed the title of the table, as he pointed out. I had to see it with my own eyes. That is absolutely hilarious and brilliant. Things that are practice-changing have to start somewhere. At least he was honest about its beginnings, which were considered extremely radical back then (60 Gy in 3 fractions, no heterogeneity corrections!).
 
  • Like
Reactions: 6 users
I feel kinda dirty after reading that Timmerman editorial.
 
  • Like
  • Haha
Reactions: 3 users
I'll be honest, I'm a little surprised at the reception. Drew was the first to post it on Twitter, and it has like 100 "likes" with basically only positive comments, and only positive comments from the other people who have talked about it.

To me, this is the most obvious example of "the ends justify the means" that I have seen in a long time.

To summarize:

Timmerman wanted to advance SBRT
There was very little "real" data about OAR constraints re: SBRT
He knew people needed constraints to "buy in"
He made them up from what appears to be mostly animal data and math
He snuck his constraints into an editorial so they would be "published", intentionally exploiting the academic publishing system
Those "published" constraints have been used on tens of thousands of people across the globe for 20 years

While his constraints were "invented", because it "worked out", the immediate take I'm seeing on this from the RadOnc community is "haha awesome, classic Timmerman".

To juxtapose: this is, in a sense, a version of what Paolo Macchiarini did. For those that don't know, he was a surgeon at Karolinska inappropriately transplanting engineered tracheas into patients. It's an absolutely wild story otherwise (here's a Guardian article). But basically, Macchiarini had weak pre-clinical data on the efficacy of these synthetic tracheas, but started putting them into people anyway. Again, so much else to that story is absolutely insane, but that's the important part here.

Now, I know making this comparison might trigger some people, and I want to be very very clear: I am not saying that making up constraints is on the same level of doing fake trachea transplants.

What I am saying is this:

Timmerman made up some bio-plausible numbers based on pre-clinical data, math, and guessing, and got them published by exploiting the system. These numbers were used to assess the safety of radiation treatment for almost 20 years in, what I assume to be, thousands to tens-of-thousands of patients. It seems like it worked out, and we have better (HyTEC) numbers now, backed by some level of data.

Macchiarini created synthetic tracheas with a bio-plausible mechanism and weak pre-clinical data, and started putting them in people by exploiting the system. The patients did poorly/died. It did not work out.

I want to stress I'm not trying to hyperbolize here and am not making a one-to-one comparison. But both stories follow the same structure: a doctor invented something and introduced it with little to no data, and it had an immediate real-world impact on patients. In one story, it was OK. In the other, it was not.

I think there's some fertile ground here for an actual ethical debate (not just me musing on my laptop).
 
  • Like
Reactions: 7 users
We exist in a medical field that seems to tout how it’s the most evidence based in all of oncology, if not medicine, and yet… Timmerman table. And protons. Oy vey!
 
  • Like
  • Haha
Reactions: 6 users
Isn't all the hytec data based on a bunch of outcomes in patients treated using the timmerman tables? There's lots of examples of this historically. In my mind, the bigger ethical question is why is it still being used?
 


Because thousands of patients have been treated and it seems to work
 
  • Like
Reactions: 1 users
Isn't all the hytec data based on a bunch of outcomes in patients treated using the timmerman tables? There's lots of examples of this historically. In my mind, the bigger ethical question is why is it still being used?
I think the HyTEC stuff is an evolution? I haven't dug into the methods in a while, so I don't know off the top of my head.

Prior to the editorial, I would imagine many of us assumed there was a level of voodoo to these (any) constraints. I have seen OARs blasted with no issue; I have seen Grade 5 toxicity from OARs kept well within even the most conservative constraints. It's an art, not a science.

Where this changed, for me, is reading "how the sausage was made" in the editorial. I think this is fundamentally different than the folks who are pro-proton for breast or NanoKnife for pancreas (or, at least, how they present their arguments).

Since reading this yesterday morning, I am stuck with what I consider to be two truths:

1) Timmerman's work in the early 2000s in SBRT was foundational to the way we practice Radiation Oncology today. He has had an obvious and measurable positive impact on humanity. I am grateful for his work.

2) Timmerman knowingly breathed imagined constraints - built with shoestrings and glue - into the academic literature by exploiting a loophole, as he knew they could not be published AT THAT TIME in a traditional manuscript. Since we now know they "work" (whatever that means), they could have been published later with actual data. He titled the table coyly, essentially admitting they were imaginary, but the intent of putting these numbers in that paper was so he could have an indexed manuscript to reference for his future publications to justify his work.

In an alternate reality, what if he had been wrong about one or more of his imagined numbers? How many people were treated in the early days using just this table alone? Probably enough where at least a handful of people would have died. What would this conversation look like in that reality?

His work has had a tremendously positive impact on modern cancer care. Does that wash away the original sin of fabricating numbers? I don't have an answer. Why would he publish something like this now? This seems more appropriate for a posthumous memoir.
 
  • Like
Reactions: 3 users
These numbers are not quite “fabricated”. They are extrapolated from the LQM. I do something similar with dose painting ultra central lung tumors. If I know dose X is well tolerated in 10 fractions, what dose should I accept in 15?

Granted, I have a lot of specific references that I use to determine EQD2 tolerances that I extrapolate from, but end result is not dramatically different from timmerman.

The Timmerman constraints are often applied in situations where the well characterized treatments are inadequate. Personally, I don’t think the ethics of this are really that dubious
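To make the arithmetic concrete, here is a minimal sketch of the kind of LQ/EQD2 extrapolation being described, in Python. The alpha/beta value, the 10-fraction tolerance, and the function names are illustrative assumptions only - they are not the poster's actual references or any published constraint.

```python
# Minimal sketch of LQ-model iso-effect arithmetic (illustrative numbers only).

def eqd2(total_dose_gy: float, n_fractions: int, alpha_beta: float) -> float:
    """Equieffective dose in 2 Gy fractions under the linear-quadratic model."""
    d = total_dose_gy / n_fractions  # dose per fraction
    return total_dose_gy * (d + alpha_beta) / (2.0 + alpha_beta)

def isoeffective_total_dose(ref_total_gy: float, ref_fractions: int,
                            new_fractions: int, alpha_beta: float) -> float:
    """Total dose in new_fractions with the same BED as the reference scheme."""
    bed_ref = ref_total_gy * (1.0 + (ref_total_gy / ref_fractions) / alpha_beta)
    # BED = n*d*(1 + d/alpha_beta); solve (n/ab)*d^2 + n*d - BED = 0 for d
    a, b, c = new_fractions / alpha_beta, new_fractions, -bed_ref
    d_new = (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)
    return d_new * new_fractions

if __name__ == "__main__":
    AB_LATE = 3.0          # assumed alpha/beta for a late-responding normal tissue (Gy)
    tolerated_10fx = 40.0  # hypothetical tolerance in 10 fractions (Gy)
    print(f"EQD2 of 40 Gy / 10 fx: {eqd2(tolerated_10fx, 10, AB_LATE):.1f} Gy")
    print(f"Iso-effective total dose in 15 fx: "
          f"{isoeffective_total_dose(tolerated_10fx, 10, 15, AB_LATE):.1f} Gy")
```

With these made-up inputs, a 40 Gy / 10-fraction tolerance converts to roughly 46 Gy in 15 fractions at an alpha/beta of 3 - the flavor of back-of-the-envelope conversion the post describes.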
 
  • Like
Reactions: 2 users

If we are being fair, 99% of our dose constraints were made up at the time of their inception. The only way to know human dose constraints is to do a human dose escalation study, which has not been done for 99% of dose constraints
 
  • Like
Reactions: 6 users
We exist in a medical field that seems to tout how it’s the most evidence based in all of oncology, if not medicine, and yet… Timmerman table. And protons. Oy vey!
I wonder who paid for the original lung SBRT study? Doesn't say in Timmerman's original phase I SBRT paper, only that Elekta supplied ("loaned" to be precise) the frame. Given the way NIH/NCI supports radiation oncology research, one wonders if pre-clinical animal studies in pigs to provide evidence for SBRT would have ever been funded.

Heard Timmerman speak once at ASTRO, and don't remember anything about his talk except his impatience with the risk aversion of our field. His response to this line of questioning/fear was something along the lines of: "surgeons and medical oncologists hurt people every day".
 
  • Like
Reactions: 4 users
If we are being fair, 99% of our dose constraints were made up at the time of their inception. The only way to know human dose constraints is to do a human dose escalation study, which has not been done for 99% of dose constraints
Totally agree.

These numbers are not quite “fabricated”. They are extrapolated from the LQM. I do something similar with dose painting ultra central lung tumors. If I know dose X is well tolerated in 10 fractions, what dose should I accept in 15?

Granted, I have a lot of specific references that I use to determine EQD2 tolerances that I extrapolate from, but end result is not dramatically different from timmerman.

The Timmerman constraints are often applied in situations where the well characterized treatments are inadequate. Personally, I don’t think the ethics of this are really that dubious
"I'm not proud of engineering (sounds better than fabricating) the constraints in a national protocol that ultimately changed the standard of care for treating lung cancer." - direct quote from editorial

"Many argued in favor of using the popular linear-quadratic (LQ) model to simply convert our 3-fraction table into the other SABR options. However, that exercise quickly showed problems as the LQ model clearly overpredicted the potency of 1-fraction treatments in particular." - direct quote from editorial

Your post/opinion is extremely reasonable, here in 2021, knowing how this all turned out.

However, I'm thinking about this in the context of when this all happened. The timeline doesn't quite add up as written, but:

"... by 1998 we took the bold step to initiate treatments in patients with early lung cancer who would likely live long enough to actually experience late effects. Our pilot experience showed positive results and was appreciated by open-minded thought leaders."

He references the 2003 "Extracranial Stereotactic Radioablation" paper at the end of that sentence. That paper says they enrolled 37 patients starting in February of 2000 - so perhaps they conceived of this in 1998? I don't know what this 2-year gap means.

"...the RTOG disease committees wisely negotiated terms for buy-in at enrolling sites. For example, they wanted a list of dose constraints for normal tissues for a 3-fraction course of therapy..." "Although we had treated several hundred patients by this time, no such constraints existed. We simply created compact dose distributions with isotropic falloff. Given these firm terms required for activating the trial, and not to be dissuaded, I set out to develop the first ever table of dose constraints for SABR."

I'm not quite clear (and don't have the energy to go looking in the literature) on how he calculates that he had treated "several hundred patients", because he's not clear with his timeline about the RTOG pitch. As it is written and referenced, it appears he's talking about pitching RTOG trials, which he writes started in the year 2000, and with only that 2003 paper referenced, he had treated 37 patients. Perhaps he treated many, many more and didn't publish about it, or he's using the collective "we" and means all the members in the group pitching the SBRT trial to RTOG?

However, he is clearly stating that he/they had treated "several hundred patients" with SBRT at the time of the RTOG request for constraints. While they normally "created compact dose distributions with isotropic falloff"...didn't that mean they theoretically had the ability to go back through their "hundreds" of treated cases, back-calculate OAR doses, see if there was any reported toxicity, and factor that into the constraint table?

I know that's easy for me to say and do in 2021 - all I have to do is open Eclipse, go to ANY of the cases I have EVER treated, and draw new contours, which the software automatically calculates dose for. I know it was probably much more work back then to do this - but wasn't it theoretically possible?
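For what it's worth, the retrospective exercise being described (re-contour, pull the dose, tabulate) is conceptually simple. Here is a hypothetical Python sketch assuming the dose grid and a freshly drawn OAR mask have already been exported from the planning system; the array names, voxel size, and the approximate D0.5cc metric are illustrative assumptions, not any particular TPS's API.

```python
import numpy as np

def oar_dose_stats(dose_gy: np.ndarray, oar_mask: np.ndarray,
                   voxel_volume_cc: float) -> dict:
    """Back-calculate simple DVH-style numbers for one organ-at-risk."""
    oar_doses = dose_gy[oar_mask]              # doses in the OAR voxels only
    hottest_first = np.sort(oar_doses)[::-1]
    # approximate dose to the hottest 0.5 cc by counting voxels
    idx_half_cc = min(int(0.5 / voxel_volume_cc), oar_doses.size - 1)
    return {
        "volume_cc": float(oar_mask.sum() * voxel_volume_cc),
        "max_dose_gy": float(oar_doses.max()),
        "mean_dose_gy": float(oar_doses.mean()),
        "approx_d0.5cc_gy": float(hottest_first[idx_half_cc]),
    }

# Synthetic stand-in for one treated case (3 mm voxels, roughly 0.027 cc each)
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 60.0, size=(50, 50, 50))  # fake dose grid, Gy
mask = np.zeros(dose.shape, dtype=bool)
mask[20:25, 20:25, 20:25] = True                  # fake re-drawn OAR contour
print(oar_dose_stats(dose, mask, voxel_volume_cc=0.027))
```

Run something like that over a few hundred archived plans and join it against the toxicity records, and you have the sort of retrospective dataset the post is imagining.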

That ABSOLUTELY would have gotten published without trickery. Jack Fowler has a ton of publications from the 90s/2000s where he basically just muses about radbio math. There could have been a Fowler and Timmerman (et al) paper combining math+data which, to me, would have much better ethical optics.

Based on Twitter and the texts from my friends, maybe I'm in the minority in experiencing cognitive dissonance after reading this editorial.
 
  • Like
Reactions: 1 users
These numbers are not quite “fabricated”. They are extrapolated from the LQM. I do something similar with dose painting ultra central lung tumors. If I know dose X is well tolerated in 10 fractions, what dose should I accept in 15?

Granted, I have a lot of specific references that I use to determine EQD2 tolerances that I extrapolate from, but end result is not dramatically different from timmerman.

The Timmerman constraints are often applied in situations where the well characterized treatments are inadequate. Personally, I don’t think the ethics of this are really that dubious
I agree. The ethics are attackable, but not dubious. Fool me once, shame on you; fool me twice, shame on me. Timmerman was not oncology's first fooler. And it's not like the numbers are out of whole cloth, really.
 
  • Like
Reactions: 1 user
I wonder who paid for the original lung SBRT study? Doesn't say in Timmerman's original phase I SBRT paper, only that Elekta supplied ("loaned" to be precise) the frame. Given the way NIH/NCI supports radiation oncology research, one wonders if pre-clinical animal studies in pigs to provide evidence for SBRT would have ever been funded.

Heard Timmerman speak once at ASTRO, and don't remember anything about his talk except his impatience with the risk aversion of our field. His response to this line of questioning/fear was something along the lines of: "surgeons and medical oncologists hurt people every day".
Did he show the slide where he burned a hole in someone with an early AP-PA field arrangement to a high dose? It is scary. But I respected it. I think there is room to push. Nay, I know there is room to push. I saw almost fifty patients at my training institution receive 60+ Gy accidentally to the cervical cord with no spine toxicity at 1 year or so.

A problem is we have almost no way of knowing when it’s ok to hit the gas versus the brakes. I think we all share the suspicion that it varies patient to patient and tumor to tumor. Sometimes a tiny bit and sometimes a lot. For now we are cursed with stochasticity.

Let us not forget that Gil Lederman, who spoke of and touted SBRT in 2000 and before, was considered a heretic, outcast, charlatan, and worse.
 
  • Like
Reactions: 1 users
Based on what I heard, I wouldn't trust his rectum constraints. A fair number of colostomies were needed in those UTSW dose-escalated prostate SBRT series.
 
  • Like
Reactions: 2 users
He has killed some patients. Comes with the territory.
 
  • Like
Reactions: 4 users
The Emami constraints were just survey-based. Many of our constraints had no original basis in reality. Still, many of our constraints come from retrospective reviews and are thus weakly supported. There are only a handful of well-validated constraints, like V20 in lung. Where the limit on max small bowel dose lies still eludes me.
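For context, V20 is simply the percentage of lung volume receiving at least 20 Gy. A trivial sketch of that bookkeeping, assuming synthetic per-voxel doses and equal voxel volumes, looks like this:

```python
import numpy as np

def v20_percent(lung_dose_gy: np.ndarray) -> float:
    """Percent of lung voxels receiving >= 20 Gy (equal voxel volumes assumed)."""
    return 100.0 * np.count_nonzero(lung_dose_gy >= 20.0) / lung_dose_gy.size

# Fake per-voxel lung doses standing in for a real DVH export
lung_doses = np.random.default_rng(1).uniform(0.0, 45.0, size=100_000)
print(f"V20 = {v20_percent(lung_doses):.1f}%")
```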
 
  • Like
Reactions: 1 users

I've been to the Timmerman SBRT/SABR course, and he spends quite a bit of time discussing the evolution in the development of SBRT, which I found fascinating.

Here's a few things that I remember from that course:
- The idea for SBRT came about for medically inoperable early-stage lung cancer, with no potential interventions offered to this group of patients. I don't treat lung, so I don't know what the status of treatment was like back in the mid- to late-90s.
- In the early days, he discussed spending a lot of time coming up with beam arrangements on many patients, starting with AP-PA all the way up to arrangements that mimic those of Gamma Knife. That probably explains the gap in time that you found?
- There was virtually no image guidance like we have today, so he had to rely on external Elekta frames for stereotactic coordinates.
- Dosimetry was done by hand, as was normal back then.
- He also mentioned that he pitched the idea of SBRT to the RTOG multiple years in a row, only to be shot down each time. I think it was mentioned that James Cox called him out publicly on how dangerous Cox thought SBRT was. I think this was probably the most impressive part of it all, convincing the old guard to do something completely outside the norm and their own comfort zone. I do think that most, if not all, rad oncs, me included, are always nervous about toxicities related to our own treatment. I remember that Bruce Minsky had a theMednet comment stating that the etiology of 1.8 Gy fractions (down from 2.0 Gy) came from the advent of concurrent chemotherapy, worrying about additional acute toxicities, without any substantial data, and yet it became part of the standard fractionation scheme across many sites.

It may not be the most scientifically linear or elegant route to how we got to where we are today, but I don't think most of us, especially those trained in the past 10-15 years, really think twice now about doing SBRT, given how many thousands of patients have been treated in the past 20 years.
 
  • Like
Reactions: 3 users
I think in medicine we heavily weight errors of commission, where we cause toxicity resulting in morbidity/mortality from an active treatment. This probably comes from the Hippocratic oath and malpractice fears.

But errors of omission, where somebody has toxicity or progression of disease because we were not aggressive enough, likely happen much more commonly. The classic scenario is progressive tumor around the spinal cord that was previously radiated to cord tolerance. What is more likely to harm the patient - re-irradiation or progressive tumor? Almost always progressive tumor, but you can bet most people would be gun-shy about aggressively treating. If the pt progresses without radiation in this situation, is that still included in our "kill count"?
 
  • Like
Reactions: 4 users
Obviously, this seems to be the battle Timmerman et al had to fight. He was innovative and transformed Oncology for the better.

I'm just hung up on:

He proposes a multi-center clinical trial
RTOG asks for dose constraints for safety
He makes them up

He wants to publish constraints without validating them (or allow any scrutiny/peer-review at all)
He uses a loophole to publish them; starts self-referencing in future pubs

Per the story, the original constraints he gave RTOG were a solo project. Could this get past an IRB today? Let's say I want to start the first FLASH clinical trial. I have some animal data. Maybe I've treated some end-of-life patients and nothing too obviously toxic has happened, so based on my anecdotal experience I feel it works and is safe. The IRB asks for my constraints, and I make "educated guesses" and give them to them. No references or explanations.

Would it get through? Should it get through?

Again, I'm thinking about this divorced from knowing how it turned out.
 
  • Like
Reactions: 1 users
Really important point. All the gladhanding and backslapping on social media misses the point. This was a dangerous thing to do, and we are only chuckling about it because it turned out okay. It certainly could have been a disaster.
 
  • Like
Reactions: 5 users
This is essentially how cardiac SBRT was done, minus the RTOG part. There are actually no heart constraints published for that study
 
  • Like
Reactions: 2 users
Bingo. Cardiac SBRT was the WAGgiest. Oh, and btw, rad oncs are still calling it "ablation", but the cardiologists are showing that it is not functioning in an ablative manner at all. So I'm glad our guesses worked out there too; we were wrong, but no one seems hurt. I thought of all this when Ralph W was yelling about 1 Gy of RT to the heart during COVID and I'm all like "Ralph, have you seen what they're doing with cardiac SBRT."
 
  • Like
Reactions: 1 user
This is essentially how cardiac SBRT was done, minus the RTOG part. There are actually no heart constraints published for that study
ENCORE-VT had heart constraints:

[attached screenshot]


Which is interesting, considering the heart was the target. They also have the following:

[attached screenshot]


It's not the fact that "no one knows" - that's very common in medicine, and safety/toxicity is literally the definition of a Phase I clinical trial.

My feelings on this are influenced by the way he tells the story. I'm reading it as if the RTOG asked for constraints before agreeing to the trial, he "engineered" some and gave them to the RTOG - and he gave them with some sort of authority, meaning "yes, these are excellent to keep patients safe, I know this to be true".

If everyone involved knew these numbers were just *wink wink* "constraints", and were doing it just to move the trial forward - as in, all parties involved knew these were guesses and not supported by data, I would feel differently.
 
  • Like
Reactions: 1 users

Is it really so different from RTOG 0529 constraints? Kachnic is basically on record saying those are essentially made up, and admits even her own team can't meet some of them most of the time.
 
Bingo. Cardiac SBRT was the WAGgiest. Oh, and btw, rad oncs are still calling it "ablation", but the cardiologists are showing that it is not functioning in an ablative manner at all. So I'm glad our guesses worked out there too; we were wrong, but no one seems hurt. I thought of all this when Ralph W was yelling about 1 Gy of RT to the heart during COVID and I'm all like "Ralph, have you seen what they're doing with cardiac SBRT."
"Ablation" has never meant ablation when it comes to RT, and there is nothing wrong with guessing and being wrong. This is especially true in the case of CRA patients who frequently have an awful QOL (getting ICD shocks multiple times a day, waking up on the floor of a convenience store, considering hospice). So long as the initial patients knew that the treatment was experimental... whose to say that they shouldn't get to try something out of the box before giving up?

FWIW
Ralph did have a back and forth about CRA on twitter a few months ago... I can't link it because I am at the office and my hospital blocks it.
 
  • Like
Reactions: 2 users

Those numbers look the same as the Timmerman numbers, except for the esophagus and trachea/large bronchus.
[attached screenshot]


Also, the treatment planning priorities are in a similar order to those on studies like NRG Oncology LU-002 (probably no surprise, since Timmerman and Cliff Robinson are co-chairs on that study):
[attached screenshot]
 
  • Like
Reactions: 1 users
ENCORE-VT came after their initial study, published in 2017, from their experience April through November 2015 (and it passed the IRB).

This is the constraint section of their protocol:
[attached screenshot]
 
  • Like
Reactions: 1 user
Right - so what I consider the issue is distinct from this. The cardiac SBRT crew didn't pretend to have constraints. I don't know the backstory to 0529, but if it was generally known that the constraints were "made up"/not evidence based and all parties involved collectively agreed it was fine, then I don't see that as an issue either.

This story, from Timmerman's POV: he (and others) wanted to open an RTOG SBRT trial at multiple sites. The RTOG required OAR constraints for safety. Timmerman, by himself (with perhaps a cursory glance by Fowler) created constraints from "educated guessing" and gave them to the RTOG, which was an essential backbone to the trial.

What is not clear from this editorial: was the crew at RTOG under the impression these constraints had validity? Did the investigators enrolling patients on trial and utilizing these constraints do so with the belief that adhering to them would keep patients safer than not adhering to them? If the investigators enrolling patients assumed these constraints had validity, would they have participated in the trial had they known the constraints did not?

Essentially: did all parties involved really have informed consent? If the RTOG and investigators enrolling patients on trial KNEW these constraints were nothing more than educated guesses, I have no problem with that. If, as this editorial seems to read, these constraints were disseminated with some sort of authority, and having these constraints (and assuming their validity) played ANY sort of role in an investigator agreeing to participate and/or deciding who to enroll on trial (as well as guide the conversation with patients being enrolled on the trial), that is of potential ethical concern to me.

To go back to my theoretical FLASH trial - let's say every TrueBeam on the planet can have a quick software update to deliver FLASH treatments. I propose an APBI FLASH trial to the NRG. I make up constraints with some "educated guessing" for the heart and lungs. The NRG somehow accepts these constraints without verifying how I got them.

You are interested in enrolling patients in my APBI FLASH trial. You get my protocol, like everything you see, and observe I have constraints. While my protocol doesn't explicitly lie and say "these are valid constraints", it also doesn't say they're NOT valid constraints, either. You think, "well the NRG thought this was OK, so I will, too".

You enroll your patients into my trial and do your treatment planning with my constraints, because you assume they have merit. There is now a branch point:

1) The "Timmerman Branch", whereby my educated guessing was reasonable and things turn out OK. APBI FLASH becomes a standard treatment in 20 years.

2) The "Macchiarini Branch", whereby my educated guessing was NOT reasonable and patients developed severe cardiac issues a year after treatment. My work comes under scrutiny and I am accused of misconduct.

I dug out the 0236 protocol. Read the passage describing the table creation, then read the language in the protocol. Timmerman NEVER claims these are valid in any sense of the word. It's more...a sin of omission:

[attached screenshot]


[attached screenshot]


The ends justify the means.
 

The RTOG protocols routinely make up constraints without references. Look how they magically increased the tolerance of the brainstem from RTOG 0825 so they could shoehorn in dose escalation for BN-001. Did anyone get informed consent on the brainstem surface tolerance somehow changing? Is there a reference for that?

[attached screenshot]


[attached screenshot]



I don't really follow your branch-point example. Timmerman actually did cause terrible toxicity. The central airway toxicity stuff is probably some of the most cited constraint literature we have (1,500 citations). But because it was a prospective trial, with clear dosimetric data and constraints that were scrutinized, they were able to come up with the "no fly zone".

That doesn't happen if the "made up" constraints are vague or buried like in the first cardiac trial, because we would never know what threshold was reached. They realized this for ENCORE-VT, and that's why those constraints are much more detailed.
 
  • Like
Reactions: 1 users
The ends justify the means.
I've been fortunate to spend time with Timmerman. On the dose escalation study, after increasing doses to a point where cancer control seemed likely and additional dose would only cause toxicity, Timmerman wanted to stop escalation prior to finding the maximum tolerated dose. The NCI director told him that if he did so, he would be blacklisted from proposing future trials because "you said you would find the maximum tolerated dose." He had support, if not a push from behind, to make some of these clinical efforts practice-changing.
 
  • Like
Reactions: 1 user

It sounds like you are taking issue with the fact that the protocol makes it sound like the constraints were heavily scrutinized, but the editorial implies they weren't.

Given that they were indeed doing these sorts of treatments at IU and overseas before 0236, I would wager that these constraints were as heavily scrutinized as they could have been, and that he is being cavalier in the editorial to emphasize how daring it was.
 
  • Like
Reactions: 1 users
ESE, I do get where you're coming from, and the "it worked so it's OK, but is it really" point is well taken from your posts.

That being said, I don't think Timmerman really did anything too different from the first trials that put together dose constraints for, say, IMRT. Yes, the fractionation scheme makes everyone pause a bit, but same thing with LDR prostate brachy, or HDR prostate brachy, or initial use of CT-guided Gyn brachy, where constraints were just made up, until people did enough of x,y,z to develop actual constraints (relatively robust in the Gyn brachy literature because of how similar a lot of patients are).

I don't think he "fabricated" his constraints any more than other RTOG trial constraints are "fabricated". He may feel content to post this now b/c, if he gets cancelled, well, maybe it's been a good career; or he's built up so much goodwill that he's confident a campaign to cancel him (if you so chose) is unlikely to put his career in actual danger, and may instead put the careers of those campaigning against him in jeopardy, given how many see Timmerman as this prophet of SBRT.

*EDIT* - As an aside, I have moved this into its own thread to facilitate discussion away from the RO Twitter megathread.
 
  • Like
Reactions: 2 users
ESE, I do get where you're coming from, and the "it worked so it's OK, but is it really" point is well taken from your posts.

That being said, I don't think Timmerman really did anything too different from the first trials that put together dose constraints for, say, IMRT. Yes, the fractionation scheme makes everyone pause a bit, but same thing with LDR prostate brachy, or HDR prostate brachy, or initial use of CT-guided Gyn brachy, where constraints were just made up, until people did enough of x,y,z to develop actual constraints (relatively robust in the Gyn brachy literature because of how similar a lot of patients are).

I don't think he "fabricated" his constraints any more than other RTOG trial constraints are "fabricated". He may feel content to post this now b/c, if he gets cancelled, well, maybe it's been a good career; or he's built up so much goodwill that he's confident a campaign to cancel him (if you so chose) is unlikely to put his career in actual danger, and may instead put the careers of those campaigning against him in jeopardy, given how many see Timmerman as this prophet of SBRT.

*EDIT* - As an aside, I have moved this into its own thread to facilitate discussion away from the RO Twitter megathread.
Ah, thanks man. I know I really went on a dissertation-level tangent with this one - sorry to derail the Twitter thread.

I've been trying to think of a more succinct way to phrase my feelings about this. I don't necessarily disagree with @Lamount or @radiation on any of their points. I do think of Timmerman as the "SBRT Prophet", I think he's incredibly important and a pivotal figure for RadOnc, and I'm glad he accomplished everything he did. I don't want him "canceled" in any sense.

None of this would have crossed my mind had he chosen to tell the story a different way. I have never considered constraints for anything as inviolable laws of the universe, though I know some folks who do. The editorial is glib, and reads as "I took liberties with the world of Radiation Oncology to accomplish something I believed in". Additionally, he sprinkles in several defensive statements about the whole thing, and the editorial appears to be prompted by him seeing the Twitter poll and thinking "y'all really need to use other stuff now".

I guess I'm perplexed that I appear to be the only one currently talking about potential ethical issues on the internet (maybe I missed something on Twitter though). In this era of hyper-woke social media, where a lot of people claim to be passionate about things like informed consent/autonomy/medical ethics and topics of that nature, I would have expected at least one or two raised eyebrows on Twitter, instead of the avalanche of "likes" it received.

Not to be overly dramatic, but Timmerman "invented" these constraints for the greater good of medicine. While not on this scale, I encounter situations of this nature not infrequently in my own practice. Many times I've watched patients make decisions which have a high probability of negatively impacting their outcomes. There's usually a point where I could massage the truth and lead them down the "right" path, and I have never made that decision, because I would be taking away their autonomy.

I had an idea for a light-hearted research project that I wanted to submit to an ASTRO meeting. I approached a faculty member about being senior author (because I was a resident at the time), and I distinctly remember him sitting quietly after I pitched the data. Eventually, he goes "sorry, I'm just trying to think about how this could potentially be leveraged/twisted to impact patients once it enters the literature, and I don't think it can be, so sure, I'm on board". That really stuck with me, because I've never seen anyone do that (before or since). However, I've asked myself the same question about everything I have published subsequently.

In his own story, Timmerman massaged the truth to lead RadOnc down the path he considered "right", and leveraged the academic publishing system for the same reason. I would argue these choices have had a significant, positive impact on medicine and society. So, then, all those times when I have seen a similar opportunity and not taken it - was I wrong? There are patients who have died for whom I saw clear points of intervention that might have changed (or at least delayed) their outcomes. Obviously, I can't know that for sure, but it certainly feels that way.

The events of the editorial are well before my time. Perhaps it was widely known that these constraints weren't "real" and everyone involved in RTOG 0236 understood that these numbers were just suggestions (well, except not following them was a major protocol violation, but I digress). By the time I became aware of these constraints, SBRT was widely practiced. My first recollection of these constraints is being pimped on them in clinic, and I have many memories of heated debates in Dosimetry and chart rounds about various treatment plans and meeting these constraints.

Obviously, I am yet again not being succinct with this, but my impression of the decisions, actions, and consequences described in this editorial is tied to many different parts of my own experiences in science and medicine. If I were in his shoes, I don't know if I would have made the same choices. But had he not made these choices, I don't think SBRT would be what it is today.

*insert shrug emoji here*
 
What about “copycat” behavior? That could be dangerous. This is a good story with a good outcome, but I’m completely with @elementaryschooleconomics - this is unsettling.
 
  • Like
Reactions: 1 users
In defense of Timmerman and Ron McGarry, they did the right thing back then.
You have to start somewhere. The "unvalidated" table did not bother me; it was created using educated guesses.
Surgeons do this all the time: they calculate surgical risks with what they know and go from there.
As time goes on, the "unvalidated" table will get "validated" as more papers are published...

Given that we (and thousands of patients) benefit from Timmerman et al., we should be thankful. Thousands of patient lives have been saved by SBRT. We should focus on this...
 
  • Like
  • Care
Reactions: 4 users
How comfortable should we be with painstakingly slow progress? We can positively identify the (few) patients we may unfortunately harm with investigational therapies, but how many unidentified thousands upon thousands are we harming by not developing new therapies faster? Fortunately, there remain some out there willing to be mavericks. One famous radonc trialist (not Timmerman) has suggested we need fewer "mama's boys" in this field, who only know how to dot the i's, cross the t's, and please their admins. You can't be innovative without taking some risk, and we need to take a whole lot more of it. We are already, arguably but obviously, the most risk-averse oncologists. That's not something to be proud of. It reeks of complacency.
 
Last edited:
  • Like
  • Love
Reactions: 4 users
@StIGMA,

I agree...

This is the most risk-averse culture I have seen. Everything these days is V10 or V20 or Vsomething.
It is so sad when, at chart rounds, all I hear is Vsomething, lol. Just sad.
Fletcher, Cox, Brady, and the other pioneers were not as risk-averse as the current generation.
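
For anyone reading along from outside the field: a "Vsomething" metric is just a dose-volume histogram statistic - VX is the percentage of an organ's volume receiving at least X Gy. A minimal sketch, assuming equal-volume voxels and a completely made-up dose distribution (nothing here is from any protocol or paper):

Code:
# Minimal sketch of a VX dose-volume metric: percent of organ volume receiving >= X Gy.
# The dose values below are fabricated for illustration; assumes equal-volume voxels.
import numpy as np

def v_x(voxel_doses_gy: np.ndarray, threshold_gy: float) -> float:
    """Return VX (%) for an organ, given its per-voxel dose in Gy."""
    return 100.0 * np.count_nonzero(voxel_doses_gy >= threshold_gy) / voxel_doses_gy.size

rng = np.random.default_rng(0)
lung_doses = rng.gamma(shape=2.0, scale=4.0, size=100_000)  # toy per-voxel doses, Gy
print(f"V20 = {v_x(lung_doses, 20.0):.1f}%, V10 = {v_x(lung_doses, 10.0):.1f}%")

The metric itself is trivial to compute; the complaint above is about how much authority any particular cutoff deserves.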

Think about the Covid vaccines (Pfizer, Moderna, etc.): had they not been developed so quickly, millions more would have been killed by Covid by now...

In life, you have to take risks, and I have the utmost respect for Timmerman, Ron McGarry, et al.
 
  • Like
Reactions: 5 users
I agree with all that, too..

He's an oncology legend and hero.
I did a rotation there before they started their residency. Incredibly humble and sincere gentleman.

Just saying: if the outcome hadn't been as good, this editorial would have been a confession note.
 
  • Like
Reactions: 1 user
I agree with all that, too..

He's an oncology legend and hero.
I did a rotation there before they started their residency. Incredibly humble and sincere gentleman.

Just saying: if the outcome hadn't been as good, this editorial would have been a confession note.
But the outcome you are most worried about, unexpected fatal toxicity from incorrect constraints in the airway, actually did happen, and incredibly useful data came from it.
 
  • Like
Reactions: 1 user
I think this is sort of an amazing discussion and it shows how remarkable Bob T is at making people think.
That being said, I would like to jump in here because I feel a bit responsible for asking him to do this and going out on a limb to help me in my new role at the RJ, at a time when he is under a lot of pressure as interim chair at UTSW.
Bob is super humble and unassuming and very gentle in nature. So when he says "educated guess" it is not what a normal person's "educated guess" would be. The guy pores over dosimetry day in and day out studying and learning from each of his patients. Educated guess or "fabricated" in Bob T language means it was based on a deeply researched amalgam of the best experimental sources and clinical experiences possible. He is downplaying it because it didn't meet what he thinks are now real standards of evidence which are rigorous dose finding prospective clinical trials. Also remember he had done a clinical trial at Indiana that formed a lot of the basis for moving into the RTOG. He had all the dosimetry from that plus Indiana's fairly extensive clinical experiences. Bob does not believe in any models. He really only believes in real data from clinical trials. I tried to get him to go into that on the podcast but he wisely steered away from that even more controversial discussion. I know for a fact he does not put things into the tables willy nilly but thinks a lot about each constraint. You may not agree and yes, they are all his opinions, but he has a rationale for each one. The “aw shucks” thing belies a lifetime of dedication to this work.
He also wasn't alone. He is telling his story, which kind of casts him automatically as the protagonist, but if you listen to the podcast we did with him, he talks a lot about how his relationship with Dave Larson was very influential. Dave was one of our Gold Medal legends at UCSF. Dave made his career on SRS and from there moved into SBRT. While I would never diminish Bob, this is just to say that SBRT wasn't an idea that came wholesale out of his head alone. People like Dave Larson and the Karolinska Institute/Pia Baumann were doing this too but just hadn't popularized it. They were also using dosimetric principles from the brain tumor/SRS world. Bob brought it to the RTOG and made it "mainstream" instead of it remaining a single-institution, boutique-y thing. He had to extrapolate conformality into constraints, which is what he terms "fabricated." That's why I find his insights on how something becomes "mainstream" so interesting. He really wanted SBRT to become what he calls a "mainstream therapy."
I don't agree with everything Bob says. As a lung and skin cancer person, I love SBRT/hypofractionation. As a head and neck and skull base person, I have a lot of reservations. However what I appreciate about him is that he is constantly thinking and reflective. I also think people choose their own paths in life, and your "style" of research may or may not match his - but it's instructive to listen to how he thinks and gain greater insight into your own perspectives and where, as some of you have said, your level of "risk aversion" lies and whether this has cost some missed opportunities.
I really appreciate the tremendous outpouring of interest, some of which was related to Bob and some of which might have been excitement about the turnover into a new "era" as many people perceive it. I appreciate it all and take it very seriously and hope to keep publishing a few good conversation starters in the RJ every once in a while.
Cheers, Sue

 
  • Like
  • Love
Reactions: 7 users
On a related note, I prefer the Timmerman Table to the Onishi et al. 2007 paper.

Recall how people quote the Onishi paper, with BED100 and all that jazz, all the time?
At last count, the Onishi paper has been cited ~ 1000 times.


Did anyone bother reading the Onishi paper in detail? Well, here you go...

- Retrospective, not randomized, circa 1995 ---> 2004.
- N = 257.
- No respiratory gating.
- This is the interesting part; can you imagine a 0 mm expansion, lol?
The clinical target volume (CTV) marginally exceeded the gross target volume (GTV) by 0 to 5 mm. The planning target volume (PTV) comprised the CTV, a 2- to 5-mm internal margin and a 0–5-mm safety margin.

- # of fractions range = 1 to 14, with single doses of 4.4 to 35 Gy.
- (3-D) treatment planning; irradiation with multiple noncoplanar static ports or dynamic arcs.
- Prescription was at isocenter, not PTV:
A total dose of 30 to 84 Gy at the isocenter was administered with 6- or 4-MV x-rays within 20% heterogeneity in the PTV dose.

I am not saying the Onishi paper is no good; I am just saying that one should read it with a grain of salt...

With modern SBRT, with gating and prescription to the PTV (assuming the standard ITV + 5 mm expansion), one should expect a local control rate of 87% ---> 90% or so, even using a lower dose/fractionation regimen.
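
Side note for anyone who has never actually run the "BED100" arithmetic that gets quoted from that literature: it's just the linear-quadratic biologically effective dose, BED = n x d x (1 + d/(alpha/beta)). A minimal sketch below; the alpha/beta = 10 Gy value and the example schedules are my own illustrative assumptions, not anything pulled from Onishi or from a protocol.

Code:
# Minimal sketch of linear-quadratic BED arithmetic (the "BED10" behind "BED100").
# Assumption: alpha/beta = 10 Gy for tumor; the schedules below are just examples.

def bed(total_dose_gy: float, n_fractions: int, alpha_beta_gy: float = 10.0) -> float:
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    d = total_dose_gy / n_fractions  # dose per fraction, Gy
    return n_fractions * d * (1.0 + d / alpha_beta_gy)

# (total dose in Gy, number of fractions) - illustrative SBRT-style schedules
for total, n in [(60, 3), (54, 3), (48, 4), (50, 5), (34, 1)]:
    print(f"{total} Gy / {n} fx -> BED10 = {bed(total, n):.1f} Gy")

Whether a schedule clears the oft-quoted 100 Gy BED10 bar is very sensitive to dose per fraction and to where the dose is prescribed (isocenter vs. PTV edge), which is exactly why the Onishi numbers need that grain of salt.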
 
Last edited:
In defense of Timmerman and Ron McGarry, they did the right thing back then.
You have to start somewhere. The "unvalidated" table did not bother me; it was created using educated guesses.
Surgeons do this all the time: they calculate surgical risks with what they know and go from there.
As time goes on, the "unvalidated" table will get "validated" as more papers are published...

Given that we (and thousands of patients) benefit from Timmerman et al., we should be thankful. Thousands of patient lives have been saved by SBRT. We should focus on this...
I think everything you're saying is true! But the fact that I think everything you're saying is true is actually what I find problematic.

"They did the right thing back then".
Absolutely. But we only know that now. I have been glibly saying "the ends justify the means", but to be more formal, this is outcome bias. There's an interesting Harvard Business School white paper on this: "...we argue that people judge unethical actions differently based on whether those actions led to positive or negative outcomes. That is, we suggest that outcome information impacts an observer's evaluation of the ethicality of a target's behavior". While I'm not saying any of this was clearly unethical, I am saying that it is hard for us to objectively consider the decisions described in his editorial because we know he changed medicine for the better.

"Surgeons do this all the time".
Yup, agreed. But that is also a logical fallacy. "It's OK for me to do X because I know Group Z does it all the time". We could really get wild with plugging in variables for "X" and "Z" with that statement. If someone justifies any action, let alone a medical intervention, by saying "well everyone is doing it", I don't know how many of us would accept that argument.

"They calculate surgical risks with what they know and go from there".
Agreed, very common. I don't mind that the tables were an educated guess, and assume there's a level of alchemy in any OAR constraints. However, the narrative of the editorial implies these constraints were given to the RTOG because the RTOG required constraints for safety. I am interpreting, based on how the editorial is written, that the RTOG thought there was some "validity" to these numbers. If it was clear to the RTOG and the physicians using these constraints for treatment planning that they were just educated guesses, then I have absolutely no issue with this!

"As time goes on, the unvalidated table will get validated as more papers are published".
Definitely. However, with intent and forethought, the constraints were published in a specific way to circumvent the need for ANY evidence or scrutiny, which was done to have an indexed reference for future publications to cite for the optics of validity. A real world negative example of this exploit: Purdue Pharma found the 1980 NEJM letter to the editor which gave them a citation to say the "rate of addiction for patients who are treated by doctors is much less than 1%". A single citation is all you need sometimes. In the SBRT world, the single citation turned out good. In the opioid world, the single citation turned out bad.

In life, you have to take risks, and I have the utmost respect for Timmerman, Ron McGarry, et al.

Well, the risks Timmerman and McGarry took were to their careers. The risks for the patients we treat are to their lives.

I also have tremendous respect for them and their impact on humanity. I assume there are a ridiculous number of interventions in medicine that, if we knew how the sausage was made, would reveal similar ethical quandaries. We just don't get the chance to hear about them.

I think this is sort of an amazing discussion and it shows how remarkable Bob T is at making people think.
That being said, I would like to jump in here because I feel a bit responsible for asking him to do this and going out on a limb to help me in my new role at the RJ, at a time when he is under a lot of pressure as interim chair at UTSW.
Bob is super humble and unassuming and very gentle in nature. So when he says "educated guess" it is not what a normal person's "educated guess" would be. The guy pores over dosimetry day in and day out studying and learning from each of his patients. Educated guess or "fabricated" in Bob T language means it was based on a deeply researched amalgam of the best experimental sources and clinical experiences possible. He is downplaying it because it didn't meet what he thinks are now real standards of evidence which are rigorous dose finding prospective clinical trials. Also remember he had done a clinical trial at Indiana that formed a lot of the basis for moving into the RTOG. He had all the dosimetry from that plus Indiana's fairly extensive clinical experiences. Bob does not believe in any models. He really only believes in real data from clinical trials. I tried to get him to go into that on the podcast but he wisely steered away from that even more controversial discussion. I know for a fact he does not put things into the tables willy nilly but thinks a lot about each constraint. You may not agree and yes, they are all his opinions, but he has a rationale for each one. The “aw shucks” thing belies a lifetime of dedication to this work.
He also wasn't alone. He is telling his story, which kind of casts him automatically as the protagonist, but if you listen to the podcast we did with him, he talks a lot about how his relationship with Dave Larson was very influential. Dave was one of our Gold Medal legends at UCSF. Dave made his career on SRS and from there moved into SBRT. While I would never diminish Bob, this is just to say that SBRT wasn't an idea that came wholesale out of his head alone. People like Dave Larson and the Karolinska Institute/Pia Baumann were doing this too but just hadn't popularized it. They were also using dosimetric principles from the brain tumor/SRS world. Bob brought it to the RTOG and made it "mainstream" instead of it remaining a single-institution, boutique-y thing. He had to extrapolate conformality into constraints, which is what he terms "fabricated." That's why I find his insights on how something becomes "mainstream" so interesting. He really wanted SBRT to become what he calls a "mainstream therapy."
I don't agree with everything Bob says. As a lung and skin cancer person, I love SBRT/hypofractionation. As a head and neck and skull base person, I have a lot of reservations. However what I appreciate about him is that he is constantly thinking and reflective. I also think people choose their own paths in life, and your "style" of research may or may not match his - but it's instructive to listen to how he thinks and gain greater insight into your own perspectives and where, as some of you have said, your level of "risk aversion" lies and whether this has cost some missed opportunities.
I really appreciate the tremendous outpouring of interest, some of which was related to Bob and some of which might have been excitement about the turnover into a new "era" as many people perceive it. I appreciate it all and take it very seriously and hope to keep publishing a few good conversation starters in the RJ every once in a while.
Cheers, Sue

I was hoping you'd show up! I really appreciate your insight and thoughtfulness here. It's clear that Timmerman is brilliant and is operating only out of good intentions.

He just happened to author this story in such a way that it read very similarly to "classic" ethical scenarios in modern medicine. So...I guess congratulations on your opening Red Journal salvo? I literally can't stop thinking about this paper. I have read it perhaps over a dozen times since 6AM yesterday, read many of his other papers, and the RTOG 0236 protocol. If your aim is to provoke thought, you are setting the bar high (at least for me, personally).

Out of all this I want to be clear: Timmerman is a pivotal figure in modern medicine, and has had an astoundingly positive impact on humanity.
 
  • Like
Reactions: 2 users
ESE - You are thoughtful and exactly who we wanted to reach. I knew Bob enough to know it would be edgy and thought provoking. I wondered, should we put this out there. But I also thought, these are incredible lessons. Some people will define themselves in alliance to Bob and his approaches to novel technology. Some people will define themselves against him - or at least, differently from him. That's all good, as long as people are thinking about it and being self aware of where they "sit." It's the spectrum we need in radonc so that there is balance in the Force. So I appreciate this and the Twitter conversation and hopefully whatever else, letters or what have you, comes out of it, we'll see. I'm so impressed that it resonated with so many people, each having their own different reasons. It really speaks to the continuing controversies of SBRT which persist to this day.
 
  • Like
  • Love
Reactions: 3 users
ESE - You are thoughtful and exactly who we wanted to reach. I knew Bob enough to know it would be edgy and thought provoking. I wondered, should we put this out there. But I also thought, these are incredible lessons. Some people will define themselves in alliance to Bob and his approaches to novel technology. Some people will define themselves against him - or at least, differently from him. That's all good, as long as people are thinking about it and being self aware of where they "sit." It's the spectrum we need in radonc so that there is balance in the Force. So I appreciate this and the Twitter conversation and hopefully whatever else, letters or what have you, comes out of it, we'll see. I'm so impressed that it resonated with so many people, each having their own different reasons. It really speaks to the continuing controversies of SBRT which persist to this day.
Speaking of the Force.
main-qimg-c6625c7738ecc7b5cdd38e2b940a7a59-pjlq.jpeg
 
  • Haha
  • Like
  • Love
Reactions: 2 users
Quite a fun editorial. I'm glad it was shared here.
Absolutely. But we only know that now. I have been glibly saying "the ends justify the means", but to be more formal, this is outcome bias. There's an interesting Harvard Business School white paper on this: "...we argue that people judge unethical actions differently based on whether those actions led to positive or negative outcomes. That is, we suggest that outcome information impacts an observer's evaluation of the ethicality of a target's behavior". While I'm not saying any of this was clearly unethical, I am saying that it is hard for us to objectively consider the decisions described in his editorial because we know he changed medicine for the better.
I don't think the experts who make the sausage are at all surprised by how the sausage is made. After all, the table itself proclaims plainly, "Mostly unvalidated!" And Timmerman points out that where we DO have solid data, it is rarely from humans, and when it is from humans, it is woefully underpowered. In that sense, I would not call the publication and dissemination of these tables unethical, not in any sense. You have to start somewhere. In a multicenter trial, you want uniformity, and you want it written down in a table so that others can learn from it and improve on it. Some smart people got together and made their best guesses - that's #TimmermanTables.

What perhaps IS "unethical" is if these unvalidated tables get undeserved respect and attention from the non-sausage-making experts. This is the error that Timmerman seems to be trying to correct with this tongue-in-cheek editorial. He is saying: back in the day, we gave it our best shot. Since then, we have seen toxicities in some places and not in others, so now is a good time to go for Round 2 and revise the tables. So use the new ones because they are based on more evidence, not the old ones just because they were first.
 
  • Like
Reactions: 3 users