There are two types of elitism in my mind. One is trying to get the best candidates by examining more objective measures; the other is a boys' club of sorts, where the people high up on the food chain stay there, etc.
Oi. I get rankled by how people throw around terms like "old boy's club." When it's used haphazardly, it draws attention away from the "old boy's club" that actually exists.
The reason a lot of programs don't use the "more objective measures" is that a lot of us have seen that they don't correlate with performance in residency. I had a colleague who tracked ten years of residents' Step 1 and Step 2 scores against performance, and the result was a scatterplot. Granted, that was for EM, but I have a hunch the same would be true of psychiatry. Step scores successfully evaluate how good students are at taking standardized multiple-choice tests related to medicine. Full stop.
I believe (and I'm far from alone in this) that the best predictor of what kind of resident clinician an applicant will be is what kind of student clinician that applicant was. This comes down to the quality of their clinical experiences and their performance during those experiences.
So how do we determine that? The more familiar I am with an applicant's clinical experiences, the better. This means I have a preference for the better medical schools and for quality medical schools near my program, because those are the places where I have the most experience with the work product. It also means I like to see good performance in the core rotations (I don't care about MS-4 electives). Lots of honors helps, although I don't compare one applicant to another this way, as I know which programs' rotations give honors to 60% of their students (looking at you, Ivy Psych!) and which give it to 15%.
The best assurance is good LORs. The challenge is that a letter's usefulness is directly proportional to how familiar I am with its source. I don't mean that I know their research; I mean that I know them personally or (more importantly) am familiar with their past LORs. I've read what comes across as a pretty ho-hum reference from a certain C/L dinosaur that I know means they thought the applicant was the best thing since sliced bread. I've also read absolutely stunning raves from famous people in the field whose language I recognize as largely templated from year to year, so it doesn't mean as much. Given this familiarity, I can be more confident about how a student will perform as a resident if they come from schools (and faculty) I know, which tends to mean better programs and closer ones (see above).
Now, the "old boy's club" does exist. This club is a group of those in power who seek to attract and promote those like them. Traditionally in medicine, this has been Judeo-Christian white men from great schools (medical, undergraduate, and [ideally] boarding schools) who tried to recruit from the same. This is not the case in my program, and from networking with what I would consider top programs in psychiatry, it is not the norm at most of the great programs. If anything, there is a huge push to appreciate and value diversity, and many programs actively seek it out. Some of the most sought-after applicants at top programs are individuals from immigrant backgrounds, people of color, or individuals from walks of life under-represented in medicine. I'm sure "old boy's clubs" exist in psychiatry residency programs. I wouldn't work at one, and I wouldn't want to train at one, so if you are passed over by one, consider yourself lucky.
I'm not saying it's not OK to take the best students into an elite program, but do the metrics they use to evaluate med students really result in the selection of the best future practitioners? Maybe you are right and medical students from Ivy League schools really would make better practitioners, but somehow I'm skeptical of that.
I do too. Most of us would rather have one of the top students at a good regional medical school than someone who floated through an ivy (though the ivy kid will find a happy home at some program desperate to have that school on their "current residents" letterhead).
Which metrics make the best practitioner will always be argued back and forth in the community, as well as within each program. There is much better data on what isn't effective for predicting residency performance than on what is. Step scores don't seem to correlate. Interview performance doesn't seem to either, though imho residencies tend to over-value it anyway. It fits the narcissistic tendencies of doctors in general and psychiatrists specifically.