The problem with the data is that there are several things that it does not factor in.
First, it doesn't factor in applicants choosing programs based on location. Places that maybe aren't as sexy to live in (e.g., West Virginia) likely don't get the pan-applying that the California, NYC, Boston, etc., programs do. Because of this, the applicants who do apply and get interviews there likely have strong family ties to the area and will rank those programs much higher.
Second, it doesn't factor in that not every program interviews every applicant. For instance, if the top 100 EM applicants all interviewed at the theoretical top 10 programs, then at the end of the day, one of those programs would be the lowest ranked on every list, despite still being the tenth best program. Meanwhile, the EM applicants ranked 900th-1000th will get interviews at, let's say, the programs theoretically ranked 90th-100th, and the 90th ranked program will sit at or near the top of their lists. In data like this, the 90th ranked program can end up looking better than the 10th ranked program.
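To make that interview-overlap effect concrete, here's a toy model (all numbers hypothetical, not from the spreadsheet) showing how the 90th best program can land higher on applicants' rank lists than the 10th best one, purely because of who interviews where:

```python
# Toy model: a program's "true quality" = its true rank (1 = best).
# Applicants can only rank programs where they interviewed, and we
# assume they order those programs by true quality.

def rank_on_list(program, interviews):
    """Position this program lands at on an applicant's rank list."""
    return sorted(interviews).index(program) + 1

top_tier_interviews = list(range(1, 11))    # top applicants interview programs 1-10
low_tier_interviews = list(range(90, 100))  # lower-tier applicants interview 90th-99th

print(rank_on_list(10, top_tier_interviews))  # 10th best program: ranked 10th on every list
print(rank_on_list(90, low_tier_interviews))  # 90th best program: ranked 1st on every list
```

So in average-rank data, the 90th best program looks like everyone's first choice while the 10th best looks like everyone's last, even though every applicant ranked strictly by quality.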
Lastly, the data doesn't factor in program selectivity. Some lesser known programs might shoot for the stars and offer interviews to top candidates who likely won't rank the program highly. Conversely, other lesser known programs have learned that top candidates won't rank them highly, so they interview applicants who are likely to rank them highly.
Anyway, my point is that this spreadsheet should not be taken as the holy grail for future applicants, as there are many things it can't factor in. At the end of the day, EM programs are all more similar than dissimilar, and you as an applicant need to decide which program is your number 1. For one person, living in West Virginia would be miserable and New York City would be ideal. For another, living in New York City would be miserable and West Virginia would be ideal.