Compilation of MSTP Programs with average GPA/MCAT/Ranking/GPP/Stipend

Do the publications of matriculating students factor into rankings at all?
Not to my knowledge. It's largely based on the faculty's merits.

 
Apologies for the bump, but I thought this fit better as a bump to a relevant thread than as a new one. While procrastinating this weekend, I decided to make my own MSTP ranking. The specific criteria are described below, but it breaks down into 35% NIH funding, 30% publications/impact, 30% med school reputation, and 5% for how long a program has held its MSTP grant.

I did not include factors like GPA or MCAT of the entering class because I wanted this ranking to be an estimate of a student's opportunity when starting fresh in a program rather than how difficult it was to get in. An enterprising applicant might cross-reference this list with the original post on GPA/MCAT data to identify programs with high opportunity but a less competitive pool. There are no components relating to outcomes because assembling those data is far more work than I'm willing to do. I did not include non-MSTP programs because MSTP/not was a convenient cutoff point. If someone wants to propose a list of large, non-MSTP programs, I can always plug in the numbers.

Rank Program Points
1 Harvard 95.7
2 Hopkins 71.6
3 UCSF 70.9
4 UWash 70.8
5 Penn 69.3
6 Stanford 65.7
7 UCSD 64.4
8 Michigan 64.2
9 Duke 63.7
10 Columbia 62.7
11 UCLA 62.0
12 WashU 61.4
13 Yale 60.7
14 Pitt 59.4
15 Cornell 58.2
16 Vanderbilt 57.8
17 NW 56.6
18 UNC 56.6
19 Emory 55.4
20 Wisconsin 53.9
21 UChicago 53.4
22 Mayo 51.7
23 Minnesota 51.6
24 MSSM 51.4
25 NYU 51.2
26 Baylor 51.2
27 Case 50.8
28 UTSW 50.2
29 Colorado 48.7
30 Virginia 48.2
31 Iowa 48.1
32 Rochester 48.1
33 Einstein 47.2
34 UAB 46.7
35 Tufts 45.2
36 OSU 45.2
37 UMB 42.8
38 UCI 41.7
39 Indiana 41.4
40 Cincinnati 41.4
41 UIC 40.5
42 UMass 40.0
43 MUSC 39.1
44 Stony Brook 38.4
45 MCW 36.8

NIH Funding: 2013 figures, including major affiliates and/or partner institutions on the MSTP grant. For example, Cornell is Cornell-MSK-Rockefeller, and Colorado is Denver-Boulder-National Jewish. Curved against the highest percentage of the total pie among MSTPs, with a total of 35 points possible. The big skew in overall points comes from this category, since Harvard and its affiliates take in much more total funding than any other group.

Publication Impact: Derived from the 2013 SCImago Institutions Rankings in four categories, each curved against the top score in that subcategory: 10 points possible for normalized impact, 10 for total output, 5 for the percentage of top-quartile publications, and 5 for how often the senior author came from that institution. Combining affiliates was not as easy here as with NIH funding, so these scores come only from the flagship institution, not every affiliate. For example, Tufts and Tufts Medical Center are combined in NIH funding, but the publication scores come only from Tufts Medical Center.

Med School Reputation: Average of the peer assessment and residency program director scores from the US News 2015 rankings. Curved against the highest average, with a total of 30 points possible.

MSTP Grant Duration: Curved against the longest duration in years with 5 points possible. Year of inception taken from the Wikipedia MSTP page. I have no way of knowing about relevant caveats like prior probation.
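
For anyone who wants to tinker with the weights, here is a rough sketch of how the curving and weighting combine. Every number in the example is invented; the real inputs would come from the NIH funding totals, SCImago, US News, and the grant start years described above.

```python
# Minimal sketch of the scoring described above. All example numbers are
# invented placeholders, not real program data.

def curved(value, best, max_points):
    """Scale a raw value against the best value among MSTPs."""
    return max_points * value / best

def program_score(nih_share, best_nih_share,
                  scimago, best_scimago,
                  reputation, best_reputation,
                  grant_years, longest_grant):
    # NIH funding: 35 points, curved against the largest share of the pie
    nih = curved(nih_share, best_nih_share, 35)

    # Publication impact: 10 + 10 + 5 + 5 points across the four SCImago
    # subcategories, each curved against the top score in that subcategory
    weights = (10, 10, 5, 5)  # normalized impact, output, top-quartile %, leadership
    pubs = sum(curved(v, b, w) for v, b, w in zip(scimago, best_scimago, weights))

    # Med school reputation: 30 points, curved average of peer + PD scores
    rep = curved(reputation, best_reputation, 30)

    # MSTP grant duration: 5 points, curved against the longest-running grant
    duration = curved(grant_years, longest_grant, 5)

    return nih + pubs + rep + duration

# Hypothetical program: 60% of the top NIH share, middling publication numbers,
# a 4.2 reputation average against a 4.6 best, and a 30-year-old grant.
print(round(program_score(0.045, 0.075,
                          (1.6, 9000, 28, 60), (2.0, 15000, 32, 75),
                          4.2, 4.6,
                          30, 50), 1))
```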
 
First, good job on the list, looks like it took a lot of work. It's a good way to gauge an institution's powerhouse-ness.

However, I think there are many other metrics that would be more useful for comparing MSTPs, such as the size of the T32 training grant, the size of the cohort (bigger programs tend to be more successful), the average number of first-author publications per student, average program duration, the % of students completing the program, etc. I know you don't have these data, but I'm sure someone does. I bet the guys in the NIGMS section that reviews MSTPs have all the goodies, but they won't spill the beans.

Also, now I get to go on my rant about how I don't trust the total NIH funding spread over all "affiliates," particularly in Harvard's case. I've always thought the way Harvard takes credit for all of its affiliated hospitals' research is... sketchy, to say the least. I mean, does Harvard actually hire/fire and pay the salaries of the people who work at MGH, BWH, or the Deaconess? The total funding in 2013 for all Harvard affiliates is some ridiculous number like $1.3 billion. But if you add up just Harvard College + HMS + the Harvard School of Public Health, you get ~$350M, which is still a lot but nowhere close to Hopkins or UCSF. Besides, what counts as an "affiliate" anyway? UWashington is "affiliated" with the Fred Hutchinson Cancer Center, but I don't think it takes credit for the research there. Same with UCLA and Cedars-Sinai, or Stony Brook and Cold Spring Harbor. An institution like Hopkins puts its name on every one of its hospitals ("Sidney Kimmel Cancer Center at Johns Hopkins"), so there's no confusion there. In the end, it's all very tiring and I like to pretend the numbers don't matter. ¯\_(ツ)_/¯
 

I count affiliates because that, to my mind, is the best measure of a student's opportunity coming from that program. UWash students can do research at the Hutch, and the program is better for it, so I include the Hutch in UWash's totals. Harvard is a special case because of its decentralized structure, but again, the question for me is whether that structure translates into opportunity for students. So far as I know, there's nothing stopping a Harvard student from doing research with someone at MGH, the Brigham, Dana-Farber, etc., and I can't imagine why there would be a problem, since those investigators typically have faculty appointments at Harvard in addition to their hospital appointments. Perhaps someone who knows more about it can correct me.
 
Our culture likes rankings... As I have said before, there are areas of Science that will be much stronger in an overall lower ranked program. Go for the Science, not for the overall rank.

Nevertheless, thanks for the great work, pithecanthropus!

You could make it better by adding:
  • Competitiveness per spot (i.e., # of applications/enrollment) - see table 33: https://www.aamc.org/download/321544/data/2013factstable33.pdf
  • Time to graduation - most often not public information
  • Publications per graduate - not easy, but possible using each graduating class's match list
  • Quality of the residency match - often published, but should be weighed against the competitiveness of the specialty
  • Long-term outcome of the program - unfortunately might mask dips in the quality of the program
  • It would be useful to have an annual satisfaction survey of students in the program
 
I'd be open to a collaborative effort to make a more definitive list if others want to volunteer to help. I can't gather all of these data myself, least of all come July. It would also be a better list if readers of this board could reach a consensus on what counts going in. In addition to what's already in the scoring system, what else would people like to see?

Regarding Underu's and Fencer's specific suggestions (and anyone with an opinion, feel free to take issue):

Size of T32 is okay. I think what we're both getting at (me with duration of the grant) is how durable the institution's MSTP status is. If someone wants to collect the data (as funded spots, not dollar totals), it could be combined with duration.

I would love to have percentage who finish the program, but that's going to be a closely-guarded NIH/program secret.

Applicants per spot is not something I would favor including. It's a big advantage for programs in more desirable/populous locations, it penalizes large but high-quality programs (like Penn and WashU), and it doesn't really have anything to do with what comes out of the program. I'm more interested in what happens from Day 1 forward than in the application process itself. But if someone knows how to get the numbers on how many acceptances a program has to give per spot filled, that might be a less biased stand-in for desirability/competitiveness.

Time to graduation is ok in principle, but it would need to be done carefully. 3-year PhDs in moderation mean hard work, good fortune and good planning. 3-year PhDs as the rule mean lax standards. I would argue that 5 is reasonable as long as, like 3 years, it's not the rule. Even people who go longer than 5, in my experience, do so for compelling reasons more often than incompetence. So how would we measure this, deviation about a mean of 4 years?
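
To put numbers on that idea, something like the sketch below could work. The 4-year target is just my suggestion above, and the durations are invented for illustration.

```python
# One way to score "deviation about a mean of 4 years": penalize a program
# by how far its graduates' PhD lengths spread from 4 years, so that
# rubber-stamp 3-year PhDs and chronic 6-year PhDs both count against it.
# All durations below are made up.
from statistics import mean

def duration_penalty(phd_years, target=4.0):
    return mean(abs(y - target) for y in phd_years)

print(duration_penalty([3.5, 4, 4, 4.5, 5]))  # tight around 4 years -> 0.4
print(duration_penalty([3, 3, 3, 6, 6.5]))    # spread to the extremes -> 1.5
```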

Publications per graduate I'm okay with. So many points for first-author, fewer for others, and cut it off at the publications of the outgoing class at the time of graduation? It might be better to gather this from the RePORTER info on each grant rather than from the program websites, although I don't know whether the updates there would be much more current.
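
As a sketch of what that scoring might look like (the weights and the counts below are placeholders, not anything agreed on):

```python
# Sketch of "so many points for first-author, fewer for others"; counts would
# be cut off at graduation. Weights and example counts are arbitrary.

FIRST_AUTHOR_POINTS = 3
OTHER_AUTHOR_POINTS = 1

def pubs_per_graduate(graduates):
    """graduates: list of (first_author_count, other_author_count) per student."""
    total = sum(FIRST_AUTHOR_POINTS * first + OTHER_AUTHOR_POINTS * other
                for first, other in graduates)
    return total / len(graduates)

# Hypothetical graduating class of three students
print(pubs_per_graduate([(2, 3), (1, 1), (3, 0)]))  # -> ~7.3 weighted pubs per graduate
```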

Quality of residency match I would love to include, but I have no idea how to measure it. Is a mid-tier derm match worth more than a top-tier IM match? And deciding tiers of residency programs is a whole other can of worms that would also be obscured by people who rank according to personal geographic needs. I think the best way to do this would be to get an average of where the graduating class matched on their rank lists, but those data are beyond my ability to collect.

(Another stat I'd like to see would be program Step scores, but that's not going to happen.)

Long-term outcome is also appropriate, for example as the proportion who stay in some plausibly research-oriented job. Some programs will publish this in an alumni section that would be public. I think the issue would be whether such data could be collected for all programs. I know NIH has this, but I'm not NIH.

Satisfaction survey…if we can get two or three of the above, I'm happy. NIH should do that, though; the results would be fascinating.

Does anyone want to volunteer to do some data gathering?
 
3-year PhDs in moderation mean hard work, good fortune and good planning. 3-year PhDs as the rule mean lax standards.

I would take issue with this. The purpose of a PhD is to teach you how to do and think about science, and typically after 3 years this is pretty much accomplished. I feel that after 3 years, you're essentially just a postdoc generating data for your PI, and it's not really serving your scientific training much beyond getting papers. It would probably be more useful career-wise to spend the next 1-3 years you would otherwise be doing the PhD as a postdoc, generating the data for a TT position.
 