[Resident Study] USMLE and CBSE Correlation for OMFS


duncamunk

Hey guys,

Finally have IRB approval and am trying to obtain data to analyze CBSE scores for OMFS programs. Currently the "soft cap" is 65 at most institutions, but this has only been correlated with medical school data. I want to see what the cutoff should be.

Long story short, I'm having trouble getting data from programs. If you would like to contribute to the research and don't mind sharing scores, I'd greatly appreciate it. All data would be de-identified prior to publishing. The data I would need are as follows:

1) OMFS Program
2) Did you take any med school coursework before Step 1?
3) If so, which years? (1, 2, or both)
4) CBSE Attempts
5) CBSE dates
6) CBSE Scores
7) Step 1 attempts
8) Step 1 dates
9) Step 1 scores
10) Undergrad GPA
11) Dental School
12) Dental Rank
13) Dental GPA (or P/F)
14) Time off to prepare for Step 1

Appreciate any help you can lend. If you wish to participate, shoot me a private message with the data (unless you don't mind sharing here). Thanks!

Oh, great study idea. I look forward to seeing the results. Good luck!
 
Because this is an area of research I'm personally interested in, I'll try to help you along by asking the questions I would ask were this paper to come across my desk for peer review.

1. How do you plan on using these data to determine a "cutoff" score?

2. What are your predetermined outcome measures to identify success and failure?

3. It certainly *appears* that you're excluding 4-year programs (Questions 2, 3, 7-9, and 14 assume a 6-year program). Why did you choose to exclude these programs, or why did you choose to word your survey questions this way?

4. What statistical methods do you plan to use to handle comparisons between multiple variables of disparate types, given that by my count you have: 4 categorical variables, 8 continuous variables, 3 dichotomous variables and 1 ordinal variable?
 

Let me answer these in a different order, as that may help explain the rationale.

4. I am asking for more data than I can probably use; I'm expecting most programs (as I'm directly contacting them) to send me just the two scores. However, this study is designed based on Miloro's original paper assessing Part 1 scores and USMLE Step 1, and these data are practically identical to the ones he reported. The biggest issue with this study, I think, will be power, but if I can get enough responses I would like to look at specifics as well. To be honest, I'm not a great statistician, so once the data are obtained I'll be meeting with our statistics department to determine which tests would be most appropriate (but I'll start looking into them, thanks for the heads up).

1. "Cutoff" I suppose is misleading; I want to know the average (the 65 they quote is based off medical students) of dental students taking this test. One of the other big metrics I want to examine is: is there a difference in CBSE scores between dental programs that have integrated classes with medical schools vs. your "traditional" GP programs that have limited basic science. Depending on that average, then we can stratify scores into a likelihood-of-passing type report, similar to Miloro's paper.

3. I'm excluding 4-year programs because they don't take USMLE Step 1. Conversely, it would be interesting (and another area of research) to look at 4-year programs and the CBSE scores they consider "good." As part of a 6-year program myself, I wanted to start with just those; perhaps I, or someone else, could look at 4-year programs later. The issue is that there isn't a great standardized test for 4-year programs; perhaps the OMSITE (but it is very program-dependent).

2. I'll think more on this; I haven't considered anything past what the correlation between the two tests would be.

Thanks for the questions, I'll continue thinking on these.
 
UPDATE
Thank you to those who have sent me data; I've created an anonymous survey (link follows) for those of you who wish to submit data without any identifiers. The survey will take ~1 minute. I will also be sending it to program directors to distribute among their residents. If you know of co-residents who would be willing to participate, please share the link with them:

 
Reading through this, I came up with more questions about the methodology. Again, I'll take the standpoint of someone reviewing this article for publication.

duncamunk said: "... To be honest, I'm not a great statistician, so once the data are obtained I'll be meeting with our statistics department to determine which tests would be most appropriate ..."

I would highly recommend consulting with your statistics department prior to collecting data. To keep the study design tight, you should know exactly which statistical tests and transformations you plan on performing in advance. Otherwise, given the amount of data you're gathering, you'll run into problems with multiple comparisons and "p-hacking." If you correct properly for the number of variables you're gathering and compare all of them, the effective "significant" p-value would be somewhere around .0005. At an uncorrected p-value of .05, with the number of correlations 15 variables generate, the chance of at least one spurious "significant" finding would be around 99.5% even if nothing were truly related. Basically, given the highest possible n you could get, there is no way you could properly power the study as described.
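To put rough numbers on that, here is the back-of-the-envelope math, assuming every pairwise correlation among 15 variables is tested at alpha = .05:

```python
# Back-of-the-envelope multiple-comparisons math for 15 variables.
from math import comb

n_tests = comb(15, 2)                # 105 pairwise comparisons
bonferroni = 0.05 / n_tests          # ~0.00048 -- the "around .0005" above
fwer = 1 - (1 - 0.05) ** n_tests     # ~0.995 -- chance of >=1 false positive
print(f"tests={n_tests}, corrected alpha={bonferroni:.5f}, FWER={fwer:.3f}")
```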

In terms of repeating Miloro's study, is there a reason to do that? Even using 90+ on the old boards as a cutoff, while sensitive, is inordinately non-specific: residents who failed their first attempt were split nearly 1:1 between >90 and <90. If you move beyond the "first attempt" metric in Miloro's study, even this disappears. Given that the NBDE is a criterion-referenced test, and the "scores" handed out were highly variable (note that higher scores are even more variable in criterion-referenced tests), this is unsurprising. The same problems exist with the CBSE, along with other, new issues (content, construct validity). Basically, the problems with doing this are explained in this comic:
[image: UQOIDNB.png]
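For readers unfamiliar with the sensitive-but-non-specific distinction, here is a toy 2x2 calculation. The counts are invented purely to mirror the ~1:1 failure split described above; they are not Miloro's actual numbers.

```python
# Toy 2x2 illustration of a cutoff that is sensitive but non-specific.
# All counts are invented. "Positive" = NBDE Part 1 score >= 90, used to
# predict passing Step 1 on the first attempt.
passed_hi, passed_lo = 170, 10   # most who passed scored >= 90  -> sensitive
failed_hi, failed_lo = 10, 10    # failures split ~1:1 across 90 -> non-specific

sensitivity = passed_hi / (passed_hi + passed_lo)   # ~0.94
specificity = failed_lo / (failed_lo + failed_hi)   # 0.50 -- a coin flip
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```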


1. "Cutoff" I suppose is misleading; I want to know the average (the 65 they quote is based off medical students) of dental students taking this test. One of the other big metrics I want to examine is: is there a difference in CBSE scores between dental programs that have integrated classes with medical schools vs. your "traditional" GP programs that have limited basic science. Depending on that average, then we can stratify scores into a likelihood-of-passing type report, similar to Miloro's paper.

Seeing whether the test is truly standardized for our unique applicant pool (Does dental school curriculum style influence score? USMLE passage rate?) seems like a better question to ask.
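If someone wanted to test that curriculum-style question, a simple two-sample comparison would do it. A sketch, with invented placeholder scores:

```python
# Sketch: compare CBSE scores between curriculum styles.
# Both score lists are invented placeholders, not collected data.
from scipy import stats

integrated = [68, 72, 61, 70, 66, 74]    # integrated med-school curriculum
traditional = [58, 63, 60, 55, 65, 62]   # "traditional" curriculum

# Mann-Whitney U avoids assuming normally distributed scores
u, p = stats.mannwhitneyu(integrated, traditional, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```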

duncamunk said: "... The issue is that there isn't a great standardized test for 4-year programs; perhaps the OMSITE (but it is very program-dependent)."
There is a great standardized test: the ADAT. It's statistically constructed to differentiate applicants, it tests material specific to dental school curricula (allowing PDs to equate things like ranks at different dental schools, or non-ranked programs with ranked ones), and it is developed specifically for dental residency admissions.

Finding a silver-bullet test to distinguish who will and who will not fail the USMLE for OMS residency is going to be nearly impossible due to the variability of both dental school and OMS residency curricula.

duncamunk said: "2. I'll think more on this; I haven't considered anything past what the correlation between the two tests would be."

You already described them: you want to develop a statistical "cut-off score" to ensure Step 1 passage.
 