ABA basic exam score report

whiteorgo
Hi guys,

Just wanted to clarify the score report for the ABA basic exam (for CA-1s). I was told that it is simply PASS/FAIL. Is that correct? My understanding is that unless you score above a certain percentile, PDs and fellowship directors won't know where you stand percentile-wise. Please correct me if I'm wrong. Thanks.

They get notified if you were top 10%....and I heard also if you are bottom 10%. Otherwise just pass or fail
 
All you get is PASS/FAIL and then keywords you missed similar to ITE to help you refine your studying going forward.

It very deliberately does not include a numerical score. The ABA is considering doing away with the top-10% notifications as well, since they could be used to stratify fellowship applicants, which is exactly what the ABA does NOT want.
 
They get notified if you were top 10%....and I heard also if you are bottom 10%. Otherwise just pass or fail
The bottom-10% notification only comes into play if the pass rate is at least 90%. Thus far, the pass rate has not dipped that low, so in practice you could say only the bottom ~5% or so of passing examinees are "notified."

All of these organizations that administer exams (USMLE, COMLEX, ABA, ITE, etc) state that they should not be used for stratification of candidates for residency or fellowship positions, but the reality is that they almost assuredly are used for that purpose. Program directors used to have many tools with which to evaluate candidates including meaningful dean's letters, honest letters of evaluation, and meaningful grading systems. Most of those have gone by the wayside for a variety of reasons:

-dean's letters (MSPE)-these used to give a fair assessment of a student's performance. Now that some medical students graduate without matching into any residency training program, dean's letters have slowly become useless. As the number of grads far exceeds the number of GME positions, deans resort to deceptive ways to lift up their worst students. Residency placement is, after all, how a dean is graded: a dean who consistently has a 10% unmatched rate won't last very long at that med school. Many studies have looked critically at dean's letters and found omissions of potentially harmful comments or findings, and outright deceptive practices, used to soften a negative MSPE.
-grading systems-the majority of schools now use a pass/fail system, so no one can tell who the best and worst students are. Someone somewhere is keeping track, because many of these same schools still hold AOA elections to recognize their top students; it just becomes increasingly difficult to identify the worst students. The schools that still use traditional A-B-C-F grading often rely on grade inflation to make a below-average student look like a top student. At a particular school in my region, the group of students with an A average extends midway through the third quartile of the class.
-lack of a universal grading system-many schools use "code words" in the MSPE that give the reader a clue to the student's approximate class rank relative to their classmates. These code words are often unique to each individual school, and interpreting them requires searching the fine print of the MSPE to find out what each word means. They are always superlative words like "excellent," "superior," etc. A novice reader of MSPEs could easily get the impression that the top three descriptors are interchangeable. In addition, even the worst student gets a superlative-sounding descriptor like "good," so if that is the only MSPE you are reading from that school, you may not recognize the word as having class-rank significance, because you have no context to know the other words exist unless you read the fine print. When you have 700 applications from 100-150 different medical schools across the country, each with its own descriptors, getting accurate information becomes time consuming. This is where years of experience in the role help make sense of the mess. A universal grading system or set of descriptor words would go a long way toward simplifying this process. Honestly, I think the deans like keeping it clouded because it lets them camouflage their worst students and, perhaps, helps those students secure a spot they would not have gotten otherwise.
-letters of recommendation-a letter writer used to be able to be honest in their evaluation, and good letters were easily distinguishable from mediocre ones. Almost all letters of rec look very similar now and have become somewhat meaningless (there are exceptions, but 90% are useless). The standardized letter adopted by Emergency Medicine functions fairly well. It has seen modest adoption in our specialty but has had a few issues that have hampered uptake. This form letter asks you to place the student in a percentile based on your experience and to be forthcoming about how tough a "grader" you are (i.e., what percent of students get top marks). Its efficacy requires honesty on the part of the writer, and I am not certain how honest many are willing to be, knowing it may be found out and knowing it could hamper that student's ability to get a residency position.

So, program directors have a difficult time stratifying 2nd-4th-quartile students based on the information they are provided; it takes far too much work to figure out how a student measures up. The top students will usually have a few pieces of information that make them stand out, but the rest blend together unless you do a lot of sleuthing. PDs are therefore left using standardized tests for a purpose they were never designed for, simply for lack of other ways to discern a top student from a poor one. A single bad exam score may not reflect a candidate's overall ability to perform as a resident, but it can predict who will perform well on certification exams, which is, after all, how program directors are ultimately graded.