Step 1 P/F: Decision

We're going to collectively look back in 10 years and see this as a good move

This is slowthai from the year 2030. I travelled back in time just to tell you that this was the dumbest move ever

I think the best analogy for this situation is other standardized tests. @elfe @Mass Effect Would you have been ok with it if the MCAT were pass/fail? What if the SAT and ACT were pass/fail? I grew up in Canada where college admissions is done based on high school grades (no SAT or ACT) and the lack of standardization was causing real problems. If step 2 also goes pass/fail, I don't see how this situation is any different.
 
The funniest thing to me is the underlying presumption from P/F activists that PDs are too dumb to act rationally. If step 1 is truly such an ineffective measure, then surely our PDs are able to see that and exclude step 1 scores from their ranking criteria.
 
I think the best analogy for this situation is other standardized tests. @elfe @Mass Effect Would you have been ok with it if the MCAT were pass/fail? What if the SAT and ACT were pass/fail? I grew up in Canada where college admissions is done based on high school grades (no SAT or ACT) and the lack of standardization was causing real problems. If step 2 also goes pass/fail, I don't see how this situation is any different.
The MCAT actually tried to push a P/F interpretation at the time of the score change. There was a whole song and dance about how everyone scoring 500+ was highly likely to graduate on time and pass boards on the first attempt.

Everyone ignored it of course, and continued to treat the top 1% of scores as if they were very different from a 510, which in turn was treated very differently from a 501.

Did I benefit a ton from that favoritism? Yes. Do I think the AAMC had the right intentions? Also yes. If they had made it strictly Pass/Fail with a 500 cutoff, my application would have been hurt a great deal, but I can also completely see their justification for why it would be a better practice in the long run.
 
The funniest thing to me is the underlying presumption from P/F activists that PDs are too dumb to act rationally. If step 1 is truly such an ineffective measure, then surely our PDs are able to see that and exclude step 1 scores from their ranking criteria.
PDs were up against a wall because of "application fever" to borrow another term from Dr. Carmody.

Back 15 years ago when application numbers were more appropriate, you're right! Nobody really gave a **** whether you got a 220 or 250. The IQR for Ortho matches included the average at that time. To repeat, you could have a score right at the average, and that wouldn't make you an atypical match at all, you'd be right in the middle of the pile.

Fast forward to modern day, where some people are applying to every single program in their specialty and the typical person is applying to 60 or 80+ programs. Now, PDs need something to quickly trim their stack down to a fraction of the original size. Enter USMLE Step 1 as king, not because it was useful and nobody had noticed before, but because it was the only available and standardized metric they could use to sort the stack.

This is why history is so important to pay attention to. Step 1 didn't take this role because it was smart or effective or deserved to be the determining criteria. It took this role by necessity because we overwhelmed the system.

And, as this year is about to demonstrate to everyone with their head still in the sand, that's why we need application caps ASAP.
 
Right, but that’s actually true, whether people ignored it or not.
Yeah, and a person who scored 200 (the median) in the 1990s being a perfectly capable doctor who we trust to train our fellows, residents and clerks is also true. The whole concept of higher USMLE=better on the wards is a fabrication you'll only find in defensive conversations between med students in the 2010s.
 
I think the best analogy for this situation is other standardized tests. @elfe @Mass Effect Would you have been ok with it if the MCAT were pass/fail? What if the SAT and ACT were pass/fail? I grew up in Canada where college admissions is done based on high school grades (no SAT or ACT) and the lack of standardization was causing real problems. If step 2 also goes pass/fail, I don't see how this situation is any different.
Plenty of schools (very prestigious ones like Yale and UChicago) have gone test optional for the SAT/ACT. Even these places that are looking for the cream of the crop understand they don’t get them by looking at one 2- or 4-digit number. Also, the MCAT is fundamentally different because only about 40% of people who apply to med school get in anywhere; at any particular school, of course, the numbers are a bloodbath. In residency the numbers are much more reasonable, so there’s not much room to say “it’s impossible not to screen with step.” I agree, application caps would have subverted the need for P/F step 1, but one thing I keep repeating because it keeps getting ignored is: Step 2 CK is still scored.... if you want to show you’re a genius even though you go to a low tier school, just crush step 2.
 
Yeah, and a person who scored 200 (the median) in the 1990s being a perfectly capable doctor who we trust to train our fellows, residents and clerks is also true. The whole concept of higher USMLE=better on the wards is a fabrication you'll only find in defensive conversations between med students in the 2010s.

Doesn’t step 2ck correlate better to wards performance anyway? Step 1 correlates pretty well with inservice training exams but that’s it I think.
 
Plenty of schools (very prestigious ones like Yale and UChicago) have gone test optional for the SAT/ACT.
Yeah, the funniest thing to me about Scored Step activists is the underlying presumption that they couldn't have excelled on any other metrics. Y'all are only out here smashing hundreds of thousands of flashcards because PDs told you that's the important selection criteria in 2020. When they're giving something else to maximize for 2030, the students of the future just like yourselves will instead dump thousands of hours into that. There is truly nothing special about Step.
 
Doesn’t step 2ck correlate better to wards performance anyway? Step 1 correlates pretty well with inservice training exams but that’s it I think.
Carmody also had a nice article/blogpost about that myth of Step correlations. The correlation is mostly at the lower end, where scores nearing the fail threshold on Step also predict problems in residency. There hasn't, to my knowledge, ever been anything to show a 230 and 250 meaningfully differ during residency, and AnatomyGrey did pull out a few papers for us to look at over in the megathread. None were able to show me that.
 
PDs were up against a wall because of "application fever" to borrow another term from Dr. Carmody.

Back 15 years ago when application numbers were more appropriate, you're right! Nobody really gave a **** whether you got a 220 or 250. The IQR for Ortho matches included the average at that time. To repeat, you could have a score right at the average, and that wouldn't make you an atypical match at all, you'd be right in the middle of the pile.

Fast forward to modern day, where some people are applying to every single program in their specialty and the typical person is applying to 60 or 80+ programs. Now, PDs need something to quickly trim their stack down to a fraction of the original size. Enter USMLE Step 1 as king, not because it was useful and nobody had noticed before, but because it was the only available and standardized metric they could use to sort the stack.

This is why history is so important to pay attention to. Step 1 didn't take this role because it was smart or effective or deserved to be the determining criteria. It took this role by necessity because we overwhelmed the system.

And, as this year is about to demonstrate to everyone with their head still in the sand, that's why we need application caps ASAP.

Except application caps disproportionately benefit Harvard students since they can afford to dedicate their 15 allotted applications to reaches. Unlike the DO student. Hmm, makes sense why you would support it.

Second, if PDs really think Step 1 is useless, then they can require students submit a step 2 score before the admission deadline.

I don't see the merit in your argument, clearly the USMLE did not have the intention of coupling this change with an application cap.

Yeah, the funniest thing to me about Scored Step activists is the underlying presumption that they couldn't have excelled on any other metrics. Y'all are only out here smashing hundreds of thousands of flashcards because PDs told you that's the important selection criteria in 2020. When they're giving something else to maximize for 2030, the students of the future just like yourselves will instead dump thousands of hours into that. There is truly nothing special about Step.

No one is debating that. If I were an incoming medical student, I would do the minimum possible to pass step 1 and start maturing the Dorian deck as soon as possible. Not because it makes me a better doctor (because there is little I could do in medical school to affect that), but rather because it will put me in the best residency (which will affect my performance as an attending).
 
Yeah, the funniest thing to me about Scored Step activists is the underlying presumption that they couldn't have excelled on any other metrics. Y'all are only out here smashing hundreds of thousands of flashcards because PDs told you that's the important selection criteria in 2020. When they're giving something else to maximize for 2030, the students of the future just like yourselves will instead dump thousands of hours into that. There is truly nothing special about Step.

I don't disagree with you but I think that a large part of the concern has to do with the level of control. A person that would grind hard to get a good step 1 score will do the same thing to get a good step 2 score or good class grades but no amount of grinding or work ethic once you're in med school is going to get someone school name recognition, letters from big names in the field, equal access to research (if they're from a lower ranked school or one without home programs), or subjective clinical grades, because those aren't things that an individual can reliably get themselves, at least not to the same degree as Step 1/2. Not without encouraging obnoxious social gunning anyways.
 
Doesn’t step 2ck correlate better to wards performance anyway? Step 1 correlates pretty well with inservice training exams but that’s it I think.

I don't think Step 2 has much correlation either. Which is why I think it will go P/F too at some point. I agree with @efle that leaving it scored was likely the compromise the NBME left out there for PDs, knowing PDs would completely lose it if they lost all standardized metrics.

Yeah, the funniest thing to me about Scored Step activists is the underlying presumption that they couldn't have excelled on any other metrics.

Eh I don't think there is that presumption..... I'll admit this could be sampling bias because I don't think Hopkins has the same stratification of student potential that a DO school has, but at my school it's the same students that are excelling in every area.
 
Except application caps disproportionately benefit Harvard students since they can afford to dedicate their 15 allotted applications to reaches. Unlike the DO student. Hmm, makes sense why you would support it.

Second, if PDs really think Step 1 is useless, then they can require students submit a step 2 score before the admission deadline.

I don't see the merit in your argument, clearly the USMLE did not have the intention of coupling this change with an application cap.



No one is debating that. If I were an incoming medical student, I would do the minimum possible to pass step 1 and start maturing the Dorian deck as soon as possible. Not because it makes me a better doctor (because there is little I could do in medical school to affect that), but rather because it will put me in the best residency (which will affect my performance as an attending).
I wouldn't cap at 15. The goal here isn't to hamstring anybody. But there is good data that your match odds plateau way, way, way before the current numbers of apps. Nobody needs to be applying to 100+ programs. My advisor told me that if I was a late applicant to Derm or Ophtho I should just apply to every single program in the country. In my book, that is insane behavior.

Caps are coming. They just don't know it yet. Or they do and haven't told us. The runaway application numbers only continue upwards every year. It should be addressed now but it will need to be addressed within 5-10 years at this rate.

It's refreshing, again, to finally have someone defend Step not because it makes them a more knowledgeable student or better clerk, but just because it's a reliable grind. At the risk of being a broken record, Carmody has once again got a wonderful piece about this - memorizing digits of pi is a reliable grind too. But imagining a world where residency placement is determined by how many digits you can memorize is a very effective reductio ad absurdum.

And let me share something that might take some of the villain out of me - I have an extremely tight regional restriction on my match. Odds are slim for a match to the single Top 10 residency that exists there. There is a very good chance I will be matching either a nearby mid-tier or a local academic-affiliated community hospital program. And this is all in a specialty that is already not competitive whatsoever and wide open to any US MD or US DO with average scores. So no, I personally probably will not be benefiting from any of the ideas I espouse here, even if they went into effect immediately.
 
I don't disagree with you but I think that a large part of the concern has to do with the level of control. A person that would grind hard to get a good step 1 score will do the same thing to get a good step 2 score or good class grades but no amount of grinding or work ethic once you're in med school is going to get someone school name recognition, letters from big names in the field, equal access to research (if they're from a lower ranked school or one without home programs), or subjective clinical grades, because those aren't things that an individual can reliably get themselves, at least not to the same degree as Step 1/2. Not without encouraging obnoxious social gunning anyways.
I absolutely agree that there are advantages to attending some places over others. But, that has always been the case, and yet it's also always been the case that NIH Top 40 schools make up about the same number of surgical specialty matches as they do in IM and all other specialties combined. The fact is that matching HSS might be much harder coming from U of State in the year 2030, but matching Ortho in general will not be shutting out mid and low tier schools.
 
Yeah, the funniest thing to me about Scored Step activists is the underlying presumption that they couldn't have excelled on any other metrics.

What metrics could a student from a lower tier school excel in without standardized tests? Not research, since Harvard students have better access. Not clerkship grades, since those are heavily dependent on site and preceptor. Not LORs, since every attending writes favorable LORs and Harvard faculty have more name recognition.

I am curious, what metrics are you referring to? Even better, I wonder how lower tier applicants will compete once step 2 is gone.
 
I don't think Step 2 has much correlation either. Which is why I think it will go P/F too at some point. I agree with @efle that leaving it scored was likely the compromise the NBME left out there for PDs, knowing PDs would completely lose it if they lost all standardized metrics.

The fact that PDs would riot if they had no standardized metrics highlights how important standardized metrics are. Even with application caps, PDs would still need a standard reference point to compare applicants. Why did EM PDs adopt a Standardized LOE? Because standardization is important! I agree that step 1 doesn't really correlate to resident performance, but that's not an argument against having step 1, it's an argument for changing step 1 so that it can serve as a useful, standardized metric for predicting resident success.
 
What metrics could a student from a lower tier school excel in without standardized tests? Not research, since Harvard students have better access. Not clerkship grades, since those are heavily dependent on site and preceptor. Not LORs, since every attending writes favorable LORs and Harvard faculty have more name recognition.

I am curious, what metrics are you referring to? Even better, I wonder how lower tier applicants will compete once step 2 is gone.

The only way I could see them competing if that happens is by doing multiple research years, which is a hefty price to pay. Debt growth, delayed career, all for tremendous uncertainty. The rich get richer.
 
What metrics could a student from a lower tier school excel in without standardized tests? Not research, since Harvard students have better access. Not clerkship grades, since those are heavily dependent on site and preceptor. Not LORs, since every attending writes favorable LORs and Harvard faculty have more name recognition.

I am curious, what metrics are you referring to? Even better, I wonder how lower tier applicants will compete once step 2 is gone.
In 2005, before Step mattered to the degree it does now, the match rate into surgical specialties was similar between Top 40 MD and all others. Those Top 40 comprised only ~1/3 of the pool.

So do what those ~2/3 did. U of State still has retrospectives for you to datamine. Wonky or subjective grading is everywhere, not just state schools, so struggle to lock down Honors like everyone else does. In a non-COVID era, do some aways and knock their socks off. In COVID, do home sub-Is and knock their socks off.

Congratulations, you are now the typical Ortho applicant from before Step 1 Mania. You're not from a top school, like most people aren't. You have Honors and glowing letters and did a bunch of posters like most people do. Apply smart, interview well, and boom you're now training in Ortho.

I have said some variation of this half a dozen times throughout the thread and nobody seems to accept that what I'm describing was the reality before the 2010s.
 
I think the best analogy for this situation is other standardized tests. @elfe @Mass Effect Would you have been ok with it if the MCAT were pass/fail? What if the SAT and ACT were pass/fail? I grew up in Canada where college admissions is done based on high school grades (no SAT or ACT) and the lack of standardization was causing real problems. If step 2 also goes pass/fail, I don't see how this situation is any different.

You're not going to like my answer. I never took the ACT or the SAT. I never had plans to go to college, so I didn't bother.

I took the MCAT twice and scored horribly both times.

So yeah both of those things would have favored me and in my case, none of it held me back. Going by what some say, I was never supposed to have gotten into med school, let alone be an attending at a major academic institution. I'm a case in point on how scores shouldn't make a difference.
 
Except application caps disproportionately benefit Harvard students since they can afford to dedicate their 15 allotted applications to reaches. Unlike the DO student. Hmm, makes sense why you would support it

I'm a DO and I support it.
 
In 2005, before Step mattered to the degree it does now, the match rate into surgical specialties was similar between Top 40 MD and all others. Those Top 40 comprised only ~1/3 of the pool.

So do what those ~2/3 did. U of State still has retrospectives for you to datamine. Wonky or subjective grading is everywhere, not just state schools, so struggle to lock down Honors like everyone else does. In a non-COVID era, do some aways and knock their socks off. In COVID, do home sub-Is and knock their socks off.

Congratulations, you are now the typical Ortho applicant from before Step 1 Mania. You're not from a top school, like most people aren't. You have Honors and glowing letters and did a bunch of posters like most people do. Apply smart, interview well, and boom you're now training in Ortho.

I have said some variation of this half a dozen times throughout the thread and nobody seems to accept that what I'm describing was the reality before the 2010s.

Ortho, derm, PS always had Step 1 scores one standard deviation above average even before 2005. Look at the posts on Orthogate from 2002 if you want to see. I’m not sure how things worked back in the 90’s when they didn’t have Step 1 and didn’t select for research, but you’re making an assumption that it was better than it is now. Likely there was a ton of nepotism or blind luck.
 
Ortho, derm, PS always had Step 1 scores one standard deviation above average even before 2005. Look at the posts on Orthogate from 2002 if you want to see. I’m not sure how things worked back in the 90’s when they didn’t have Step 1 and didn’t select for research, but you’re making an assumption that it was better than it is now. Likely there was a ton of nepotism or blind luck.
Hard to credit a majority of matches to nepotism and blind luck.

The Ortho IQR in the mid 2000s was ~220s-240s when the average was also ~220. And keep in mind this is not a norm-referenced test. It's static. A 220 back then represents the same as a 220 now. It's just wildly, wildly different than what's expected of surgical subspecialty applicants these days.
 
digits of pi != the pathophysiology of disease.
It's my opinion that the additional knowledge needed to climb from a 230 to a >250 is roughly as useful in a clinic as extra digits of pi. I think most people who read a lot about the history and design of the test, and who have experienced making that climb themselves, would agree with me.

Hence earlier, when you defended Step you didn't defend it because it made you better at your specialty. You defended it for being a reliable way to grind out success, just like Pi is. The analogy is just to illustrate that an objective metric is still crap when it's objectively measuring something useless.
 
It's my opinion that the additional knowledge needed to climb from a 230 to a >250 is roughly as useful in a clinic as extra digits of pi. I think most people who read a lot about the history and design of the test, and who have experienced making that climb themselves, would agree with me.

Hence earlier, when you defended Step you didn't defend it because it made you better at your specialty. You defended it for being a reliable way to grind out success, just like Pi is. The analogy is just to illustrate that an objective metric is still crap when it's objectively measuring something useless.

Frankly, I don't think step 2 reflects clinical prowess either. The fact of the matter is that there is no metric in medical school that really predicts clinical competency other than maybe clerkship grades. The problem is that clerkships are not standardized and preceptors grade horrendously.

If I were looking to reform medical school grading, I would stop looking at step 1 and step 2 and application limits. I would instead force all schools to implement standardized grading criteria with SLOE-like evals.
 
Frankly, I don't think step 2 reflects clinical prowess either. The fact of the matter is that there is no metric in medical school that really predicts clinical competency other than maybe clerkship grades. The problem is that clerkships are not standardized and preceptors grade horrendously.

If I were looking to reform medical school grading, I would stop looking at step 1 and step 2 and application limits. I would instead force all schools to implement standardized grading criteria with SLOE-like evals.
I agree. I think if I were a PD, my #1 way to select people would be having them work with one of my teams for an audition month, and then my #2 would be strong recommendations from people I trust. AOA and Step scores are, in my mind, probably the lamest two metrics to use and yet they're the major hurdles for most people. SLOEs would make a better #3 than those would.
 
I agree that step 1 isn't the best metric and that we need better standardized metrics for residency selection, and we need application caps. However, I think step 1 should be made pass/fail AFTER a better system is created, not before. After all, step 1 adds at least some value.
 
Now that I’ve finished 3rd year, I gotta say that I still think that grind for step 1 was totally worth it. It’s extremely applicable to IM and FM, and knowing that stuff well helps in peds.


I will say it’s really dumb that it stratifies surgical and derm applicants to the degree it does considering those topics are so low yield for step and that knowledge wasn’t very helpful on those rotations. Especially OB. (I’m actually very surprised OB doesn’t put more emphasis on step 2).

I’ll be the first to say after 245 you’re really not gaining any useful knowledge.
 
Wrong. Applying knowledge is part of learning.
Now that I’ve finished 3rd year, I gotta say that I still think that grind for step 1 was totally worth it. It’s extremely applicable to IM and FM, and knowing that stuff well helps in peds.


I will say it’s really dumb that it stratifies surgical and derm applicants to the degree it does considering those topics are so low yield for step and that knowledge wasn’t very helpful on those rotations. Especially OB. (I’m actually very surprised OB doesn’t put more emphasis on step 2).

I’ll be the first to say after 245 you’re really not gaining any useful knowledge.
 
Frankly, I don't think step 2 reflects clinical prowess either. The fact of the matter is that there is no metric in medical school that really predicts clinical competency other than maybe clerkship grades. The problem is that clerkships are not standardized and preceptors grade horrendously.

If I were looking to reform medical school grading, I would stop looking at step 1 and step 2 and application limits. I would instead force all schools to implement standardized grading criteria with SLOE-like evals.

Even clerkship grades suck at demonstrating clinical prowess, and don't even reflect patient-centeredness as well as some people claim (they're sometimes lauded as a measure of "soft skills"/sociability). I was complimented several times by patients on my listening and empathy, but most of that wasn't witnessed by evaluators and didn't make it into my evals, so the number arbitrarily generated from the subjective eval sheet was, in my case, fairly average.

Fact is, there really isn't any standard, bias-free measure at all to judge whether someone will be a "good physician." That in and of itself will mean something a little different to every single patient, and to every program director. We all know how normal it is that not every person has chemistry with every other person - personalities are so vastly different that that sort of utopia is impossible. There are basic things like professionalism and honesty (although even the word "professionalism" is unfairly weaponized sometimes), but beyond those there aren't really standard metrics that measure what every physician should have. That's why I think all of this discussion about metrics in the end isn't going to make a difference - the main problem is PDs stratifying by imperfect things to get their lists to manageable sizes, so capping is the most logical treatment for the disease, regardless of these other symptomatic strategies. Caps will make applicants more judicious in choosing where to apply, and allow PDs to find the best fit between the goals of applicants and the goals of the program, which in the end will make both parties happier anyway.
 
Great arguments here.
I don't really support the majority of the arguments against step 1.
The only one I really care about is the first 2 years becoming primarily online. It gives a lot of ammunition for PAs and NPs to also watch Boards and Beyond, Pathoma, and Sketchy and say they are as good as physicians because they are using the same online resources and doing the "same" rotations.
Medical school curriculum should be unique to physicians. I think there is an opportunity to build something great here in our first two years and make people better clinicians than the PowerPoint BS that made us all switch to using online resources.
 
I agree. I think if I were a PD, my #1 way to select people would be having them work with one of my teams for an audition month, and then my #2 would be strong recommendations from people I trust. AOA and Step scores are, in my mind, probably the lamest two metrics to use and yet they're the major hurdles for most people. SLOEs would make a better #3 than those would.
#1 is completely infeasible for many fields. Even in Ortho (picking because you've talked about it a bunch), there's no way that each ortho candidate could audition at enough programs. In IM it would be a nightmare.

#2 trades a standardized exam (with all of its problems) for name recognition / nepotism / connections (with its problems). Students at "unknown" schools would be doomed.

#3 SLOEs require the same H/HP/P breakdown, and carry all the same problems as clerkship grading.

There is no perfect way to measure performance in the clinical setting. Each tool only measures a piece, and imperfectly at that. Some programs/fields disproportionately weight USMLE scores, probably more from application overload than anything else. Removing any standardized national assessment is IMHO a mistake -- if folks are "unhappy" with the USMLE, then let's propose something different.

Application caps are similarly a blunt intervention for a complicated problem, and will create lots of problems for some applicants. Once you start making exceptions for subclasses of applicants, further turmoil will ensue.

No one change can really "fix" the problem, you would need a suite of changes / agreements. To include:

1. Some national assessment of medical knowledge. This could be specialty specific and unrelated to licensing. This would allow people to take it more than once if they wanted (like the MCAT) - whether that's good or bad depends on your viewpoint.

2. Continued improvement in MSPEs. They have gotten much better over the last 10 years, but more improvement is needed. This is especially true for DO schools, where they are near useless.

3. A "SLOE" equivalent for each specialty. This is basically an assessment from a group (rather than just an individual) about a student's performance. In Internal Medicine and other fields, it's called a "Department Letter". These assessments need to actually discriminate student performance -- you can't just say everyone is "outstanding". Each field could decide what attributes are important to assess - perhaps IM cares about ability to communicate with team members and patients, and Ortho care about deadlift weight.

4. Given #1 and #3, programs need to have transparency about their resident classes so that students have some sense of whether they are competitive.

5. ERAS needs to codify the information in #1-3 such that programs can sort applications into categories and can be more time efficient in reviewing applications.

6. I would create an early application process for residency. Applicants would be able to apply to a very limited number of programs, perhaps 3. Programs could interview those candidates, and offer spots before the match. Programs would only be able to fill a subset of their spots in this early process -- perhaps 30%. This would leave the majority of spots for the match itself. Anyone applying early to a program and not getting a spot in the early round (at any of their early apps) would automatically roll into the regular process.

7. Another idea I like is having a list of strengths of residency programs / things that students are interested in. Each program would be able to pick some limited number of items that they think they are best at. Each student could pick a limited number of things that they are interested in. Then, when applying, both would see overlaps that might show which students/programs are best options.

No matter what you do, the solution is never completely fair. At the end of the day, all residency spots fill with someone. If a new system helps someone get a spot that they wouldn't have gotten under the current system, then someone else loses that spot. It's like musical chairs - no matter how you change the rules about when the music stops, someone doesn't get a seat. All of these ideas have problems; I can tell you why each is problematic. The early application process, for example -- it's quite possible that students coming from The Best Medical School will all get an early spot, and those from Podunk University will get none. But maybe a group of interventions might make the process a bit better for most people?
 
A couple clarifications - don't most surgical subspecialty applicants do 2-3 aways as a de facto requirement these days, and don't the match statistics show that a vast majority of applicants match at their top 3-4 ranks? I know that in some of my school's surgical departments, they rotate more than enough auditioners to fill their yearly residency slots. I just cannot imagine a better option for choosing who joins your teeny tiny program for 5-7 years. I'd be less concerned about a giant IM cohort.

Aren't community residencies more familiar with their local surgeons and med schools? If I'm attending Podunk SOM and trying to match Podunk Ortho, am I really disadvantaged having my Podunk Dept Chair make that phone call for me?
 
I am not certain what most students do in the competitive specialties, but I expect that you're correct that most do 1 or 2 away / audition rotations.

The match data doesn't answer your next question - where on their rank lists do candidates in competitive specialties land? In the 2020 Main Match data, Table 15 answers this question for all candidates: 80% get one of their top 3. In Charting Outcomes, where Ortho is broken out individually, this data is not reported. But I doubt it's better than 80%. We can see this by looking at the effect of rank list length on match outcomes. For Ortho:
[Attachment: chart of Ortho match rates by rank-order list length]

A rank list of 3-4 is not a great place to be. But we don't know how many of the people ranking 12+ got one of their top 3. Still, comparing this to Neurology (one of the least competitive fields):

[Attachment: chart of Neurology match rates by rank-order list length]


Here, for sure more than 80% of people must be matching into their top 3. So I expect it is much lower for Ortho, but I can't be sure.
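The top-3 figure and the rank-list-length pattern can be tied together with a back-of-the-envelope sketch. This is purely my toy model, not anything from the NRMP reports: assume each rank on a list is an independent chance p of matching at that program, and back p out of the ~80% top-3 number cited above.

```python
# Toy model (my assumption, not from any NRMP data): treat each rank on
# the list as an independent chance p of matching at that program.

def p_match_within(p: float, k: int) -> float:
    """Probability of matching somewhere in the top k ranks."""
    return 1 - (1 - p) ** k

# Back out the per-rank odds that reproduce the ~80% top-3 figure.
p = 1 - (1 - 0.80) ** (1 / 3)
print(f"implied per-rank odds: {p:.2f}")

for k in (1, 3, 4, 8, 12):
    print(f"P(match within top {k:2d} ranks) = {p_match_within(p, k):.3f}")
```

Of course real ranks aren't independent and p varies wildly by applicant, so treat this as intuition only: under the model, a 3-4 item list leaves a meaningful chance of going unmatched, while 12+ is very nearly a sure thing.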

Regarding your last point, Podunk Univ might not have an Ortho residency at all. And, they might decide that they would rather take someone from The Best Medical School anyway, to improve their stature / name recognition. It also means that once you pick a medical school, you're "locked in" to where you can apply to residency.

All that said, I see I left off my list auditions / away rotations, and that was an oversight. They also probably have a role to play, and perhaps more so in smaller / competitive fields. When a student does a rotation elsewhere, I have more confidence in the evaluation since the evaluator has less bias -- there's less pressure to give a good evaluation to the student to help the school.
 
I don't think these data necessarily mean that match rate is an effect of rank list length. I would guess that it's the other way around: the candidates who are a priori more likely to match (because they have stronger applications) are the ones who get more interviews and therefore have longer rank lists—not because they need longer rank lists to match lower down, but because they can make longer ones.

For that reason, I assume that the people with the longer rank lists actually match higher on their lists.
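This selection story is easy to demonstrate with a quick Monte Carlo sketch. Every number below is invented for illustration and `simulate` is my own toy function: if applicant strength drives both how many interviews you get (hence list length) and how likely each program is to take you, the match rate climbs with list length even though the strong applicants with long lists still match near the top of them.

```python
import random

random.seed(0)

def simulate(n_applicants=10000):
    """Toy model (all parameters invented): applicant 'strength' drives both
    rank-list length and per-program odds of matching."""
    by_length = {}  # list length -> [n matched, n total, sum of matched ranks]
    for _ in range(n_applicants):
        strength = random.random()             # 0 = weak, 1 = strong
        list_len = 1 + int(strength * 11)      # 1..11 ranked programs
        p_offer = 0.1 + 0.4 * strength         # per-program odds of matching
        rec = by_length.setdefault(list_len, [0, 0, 0])
        rec[1] += 1
        for rank in range(1, list_len + 1):    # walk down the rank list
            if random.random() < p_offer:
                rec[0] += 1
                rec[2] += rank
                break
    return by_length

results = simulate()
for length in sorted(results):
    matched, total, rank_sum = results[length]
    rate = matched / total
    avg = rank_sum / matched if matched else float("nan")
    print(f"list length {length:2d}: match rate {rate:.2f}, avg matched rank {avg:.1f}")
```

Running it, the longest lists show match rates near 100% with an average matched rank around 2, while the shortest lists match only a small fraction of the time - the same surface pattern as the Ortho table, produced entirely by confounding.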
 
Purely speculative of course, but how many people believe it's likely they'll delay rolling out P/F because of COVID?
As class of 2023 I hope they do.
 
I hope it’s implemented for class of 2024


 

I've been wondering about this, actually. I imagine that this would be the time that schools would be restructuring their curricula for it because that's what the 2 year lead time was for, but with all the madness going on, I don't know if it'll be implemented in 2022. That said, I still think it's more likely that they keep the same timeline. Hope I'm wrong, though.
 
I've been wondering about this, actually. I imagine that this would be the time that schools would be restructuring their curricula for it because that's what the 2 year lead time was for, but with all the madness going on, I don't know if it'll be implemented in 2021. That said, I still think it's more likely that they keep the same timeline. Hope I'm wrong, though.

It’s supposed to be implemented in 2022 anyway.
 
Plenty of schools (very prestigious ones like Yale and UChicago) have gone test-optional for the SAT/ACT. Even these places that are looking for the cream of the crop understand they don't get them by looking at one 2- or 4-digit number. Also, the MCAT is fundamentally different because only 40% of people who apply to med school get in; at any particular school, of course, the numbers are a bloodbath. In residency the numbers are much more reasonable, so there's not much room to say "it's impossible not to screen with Step." I agree application caps would have subverted the need for P/F Step 1, but one thing I keep repeating because it keeps getting ignored is: Step 2 CK is still scored... if you want to show you're a genius even though you go to a low-tier school, just crush Step 2.

Not having SAT/ACT scores will probably lower the quality of the future workforce, because you won’t be selecting out people who can’t solve hard problems quickly

What's ****ed is that "established" universities don't have a huge incentive to select for the smartest... the burden will fall on future employers, who may get ****ty employees (although that could feed back)
 

Seriously, you think SAT/ACT scores have anything at all to do with the workforce?
 

While I don't feel as strongly as @premed1875, there is no doubt that the SAT is related to outcomes after college: Figure 1 in this paper suggests that higher SAT scores correlate with higher income, a higher likelihood of earning a doctorate, holding a patent, publishing a paper, and more.

Maybe the SAT doesn't measure potential or intelligence; maybe it measures diligence or the availability of opportunity. Whatever the case may be, it is absolutely reasonable to conclude that the SAT/ACT has something to do with the workforce.
 