Identifying Sepsis


Janders · Senior Member · 20+ Year Member · Joined May 24, 2002
So the old way of treating the patient and not the numbers still holds?
 
Look guys, we outperform the algorithms and screening tools at 15 minutes!

Not to mention that, in this modern era, the way "sepsis" usually ends up in the discharge diagnoses is that a best-practice alert for "sepsis" fired at some point during the hospital stay: a self-fulfilling prophecy that inflates the apparent performance of these alerts.
 
Wait, so you are telling me that a physician’s gestalt of whether something is considered sepsis is more likely than a scoring tool to result in the same physician coding the patient as having sepsis? I can’t believe it!

This study is fundamentally designed to produce this exact result, and I am surprised it was published in Annals. Additionally, looking at the methods, there are ethnic disparities between those diagnosed with sepsis and those without; given the design, are we underdiagnosing sepsis in certain populations? This is not addressed at all.

I would be more curious to see whether gestalt is more sensitive and/or specific than decision tools at identifying the objective criteria for sepsis, not whether it predicts the same physician adding the very diagnosis that is part of the study outcome. While sepsis is both a real clinical syndrome and a CMS metric, the two are so interconnected now that trying to divorce them in a study leads to too many uncontrolled confounders. The study even discusses how this could improve SEP-1, but did not evaluate patients based on meeting SEP-1 criteria. Screeners like SIRS and qSOFA screen for all CMS sepsis, not just the CMS severe sepsis or septic shock covered by the SEP-1 measure, so I am not sure why apples are being compared to oranges in this study.
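For readers outside the ED, the screeners being debated here are nothing more than point counts over a handful of vitals and labs. A minimal sketch of the published qSOFA and SIRS criteria (the function names and the sample patient are my own illustration, not from the study under discussion; the temperature/bands alternates for SIRS are noted but omitted):

```python
def qsofa_score(rr, sbp, gcs):
    """qSOFA: 1 point each for RR >= 22/min, SBP <= 100 mmHg, GCS < 15.
    A score >= 2 is the conventional screen-positive threshold."""
    return sum([rr >= 22, sbp <= 100, gcs < 15])

def sirs_score(temp_c, hr, rr, wbc_k):
    """SIRS: 1 point each for temp > 38 or < 36 C, HR > 90/min,
    RR > 20/min, WBC > 12 or < 4 (x10^3/uL). Score >= 2 screens positive.
    (PaCO2 < 32 mmHg and > 10% bands are alternate criteria, omitted here.)"""
    return sum([temp_c > 38.0 or temp_c < 36.0,
                hr > 90,
                rr > 20,
                wbc_k > 12.0 or wbc_k < 4.0])

# Illustrative patient: SIRS-positive but qSOFA-negative, showing how
# the two screeners capture different (CMS vs. SEP-1-style) populations.
pt = dict(temp_c=38.5, hr=110, rr=18, sbp=125, gcs=15, wbc_k=13.0)
print(qsofa_score(pt["rr"], pt["sbp"], pt["gcs"]))                 # 0
print(sirs_score(pt["temp_c"], pt["hr"], pt["rr"], pt["wbc_k"]))   # 3
```

The point of the sketch is that neither tool encodes anything a clinician at the bedside doesn't already see at a glance, which is part of why gestalt comparisons like this one are so hard to design cleanly.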

Another thing that impairs its external validity is that the patients studied were brought directly to the resuscitation area; they were not all comers. Most decision tools are designed to flag potentially ill patients before a physician sees them. Using them on patients already identified as sick, who are already in front of a physician, is not what they are primarily designed for. I would be surprised if the authors identified any difference in outcomes for these patients given the reported design of the treatment area studied. They don't report outcomes here, but I would be skeptical of any conclusions from a follow-up study using this design.
 
They are definitely not immune to GIGO. What was the estimate: 80-90% of what's published in journals is trash?
It is extraordinarily difficult to find things that are "non-trash" these days. The big name journals are frequently full of pharma garbage. The professional society journals (Annals, JCC, etc.) frequently have a variety of methodological or design flaws preventing their publication in one of the big name journals.

The net effect is that there are only rare instances in which a single study ought to change practice – it takes reading the totality of the bad papers to parse out which aspects of internal and external validity hold up well enough to incorporate into reasonable practice.

But why do that when a bunch of academics can just design "quality" measures, tie reimbursement to them, and force everyone to go along with a bunch of inappropriate interventions ....
 
Totally agree, especially when so many studies can't be replicated.
 
Reminds me of how hospital policies are made. You can go around spending months getting every department involved, pushing it through the Pharmacy & Therapeutics committee, having it signed by med exec, posting written versions of the policy everywhere, and then spending dozens of hours training staff on the new policy. And after all that, it may be adopted as "how things are done."

Or an RN or CT tech can make up a policy based on absolutely nothing during a phone call (the pt's alk phos is too high to use IV contrast; all patients whose diastolic blood pressure is evenly divisible by 8 have to go to stepdown), and it becomes an unbreakable law that threatens peer review for anyone who dares question it.
 