Stats question

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

Ho0v-man
Have a dumb stats question.

Been reviewing journal articles lately and find that a lot of them report statistically significant odds ratios. If an odds ratio is statistically significant, does that mean related measures like the ARR are statistically significant too? My gut says no, but I can't actually find anything that explicitly clarifies this.

The odds ratio and ARR are just different ways of expressing the same thing, i.e., is there a between-group difference? The between-group difference is either statistically significant or not.
 
Hey, thanks. So I guess a follow-up is: why report one over the other? Just because "treatment X doubled the odds of Y outcome compared to placebo" sounds better than a 7% difference?
 
You ideally would be taking them both in context. For example, the OR may be significant, and even seem very much so, but when the absolute risk is very small, the ARR will be very small as well. You then need to decide whether such a large OR is actually clinically relevant, which it may not be.

For example, the OR for RHD when treating GAS with penicillin is something like 0.5 or lower. But RHD is so rare in the US, with or without antibiotics, that the ARR is very, very small. So small, in fact, that the NNT (which is 1/ARR) is over 2 million, which is way larger than the NNH of around 200.
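As a back-of-the-envelope sketch of how these quantities relate (the risk numbers below are made up to mimic a rare-outcome scenario, not the actual RHD data):

```python
# Hypothetical per-group risks for a very rare outcome (illustrative only,
# not the real RHD/penicillin figures).
risk_control = 0.0000010   # baseline risk in the untreated group
risk_treated = 0.0000005   # risk in the treated group

arr = risk_control - risk_treated        # absolute risk reduction
nnt = 1 / arr                            # number needed to treat = 1/ARR

odds_control = risk_control / (1 - risk_control)
odds_treated = risk_treated / (1 - risk_treated)
odds_ratio = odds_treated / odds_control

print(f"OR  = {odds_ratio:.2f}")   # ~0.50: halves the odds, looks impressive
print(f"ARR = {arr:.7f}")          # 0.0000005: tiny absolute benefit
print(f"NNT = {nnt:,.0f}")         # 2,000,000: treat millions to prevent one case
```

The punchline is that the same data produce a dramatic-sounding relative measure and an unimpressive absolute one, which is exactly why both belong in the interpretation.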
 
Mostly agree with the responses you've already gotten, but the point of potential contention is "other stuff," because "other stuff" implies different kinds of testing, which may involve different assumptions and information.

Separately, it also matters what's in your model and how it was constructed: inclusion/exclusion criteria for your study population, data quality, study design... Just having something be significant is a trap.
 
Oh yeah, no doubt! I've just seen some seemingly well-designed studies end up reporting odds ratios, and after calculating the ARR and NNT, I've found them very unimpressive. It seemed weird (but I guess not surprising) that they'd go through so much detail just to kind of obfuscate the interpretation at the end.
 
It also depends on the type of study. If the paper is reporting the results of a case-control study, cross-sectional study, or cohort study, the result will typically be an odds ratio. If the study is a randomized controlled trial, the results would usually be presented as a relative risk reduction.

Put differently, you can use a risk ratio or an odds ratio in a cohort study, but in a case-control study you can only talk about an odds ratio.
 
The NNT can often be quite high. For example, the NNT for screening CT for lung cancer in high-risk patients is about 300, and that's one of the best screening tests that exists, if I recall. The NNT for ASA for secondary prevention after MI is 40 to 50.

In some cases, "well established" things actually have no proven benefit but have a lot of political inertia.

An interesting website is thennt.com
 
Yeah I get that. That’s why it’s been surprising coming across so many RCTs with OR being the only thing reported.
 
I don't think it's surprising at all. For any given study where either the RR or OR can be reported, the OR will always be farther from 1 than the RR. That's a mathematical consequence of how it's calculated, so it's beneficial for the authors to calculate an OR and report that, because their "effect size" will look psychologically bigger. It also doesn't require much thought: the OR works in all circumstances and is always valid, even if it's not the best measure of association for a particular study. And it has the convenient quality of being the number that gets spit out of a simple logistic regression model, so you can get an easy "adjusted" number (after a simple transformation).

Everybody should report a baseline risk. ORs are useless without having some sense of baseline risk. If the OR is 10 but the baseline risk is 0.001%, would you really care about that clinically? Probably not unless that 0.001% is referring to some particularly devastating condition that is entirely preventable without risk.
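The "OR always looks bigger" point is easy to check numerically. A minimal sketch with invented two-arm trial counts (not from any real study):

```python
# Hypothetical two-arm trial counts (illustrative only).
events_treated, n_treated = 30, 100
events_control, n_control = 15, 100

risk_treated = events_treated / n_treated    # 0.30
risk_control = events_control / n_control    # 0.15

rr = risk_treated / risk_control                          # relative risk
odds_ratio = (risk_treated / (1 - risk_treated)) / \
             (risk_control / (1 - risk_control))          # odds ratio

print(f"RR = {rr:.2f}")          # 2.00
print(f"OR = {odds_ratio:.2f}")  # 2.43 -- farther from 1 than the RR
```

The gap between the two widens as the outcome gets more common; only when the outcome is rare (the "rare disease assumption") does the OR approximate the RR.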
 
You get a big enough population size and you'll find a statistically significant difference. Reporting the OR makes that difference look bigger. Thus, there's a push in the stats community (and somewhat in the medical community) to portray the data more accurately by presenting multiple ways of looking at the data, including the effect size.
 
That's not quite true: false-positive statistically significant results are more likely to occur in small population sizes.
 
I think you and the poster you're responding to are getting at different tradeoffs inherent to big data. More data points = more precision. More precision means smaller confidence intervals, which means you're more likely to detect a statistically significant result. However, the magnitude of that result is likely to be small. In that case, it's a true positive, but not a very meaningful one.
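A quick illustration of that tradeoff, using a hand-rolled two-proportion z-test (the event rates and sample sizes are invented for the example):

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# A tiny absolute difference: 10.5% vs 10.0% event rates (hypothetical).
p1, p2 = 0.105, 0.100
for n in (1_000, 100_000):
    z = two_prop_z(p1, p2, n, n)
    print(f"n={n:>7} per arm: z={z:.2f}, significant={abs(z) > 1.96}")
# The same 0.5-percentage-point difference is nonsignificant at n=1,000
# per arm but comfortably "significant" at n=100,000 per arm.
```

The effect size never changed; only the precision did, which is exactly why a significant p-value alone says little about clinical relevance.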
 