Rankings: What do they really mean?
Most students, parents, and educational consultants consider rankings of universities an important factor in deciding where to apply. Here we dispel some of the mythology surrounding rankings.
Ranking universities has become a big business. The US News & World Report ranking of American colleges and universities earns millions of dollars each year for the online-only magazine that constructs and publishes it.
A surprisingly large number of people seem to believe that these are official rankings constructed by the American government, perhaps because of the "US" that appears in the title. That is simply not true: the US government does not rank universities at all, and there is nothing comparable to Project 211 (211工程), initiated in 1995 by the Chinese Ministry of Education. US News & World Report simply makes up its own criteria, decides how important each criterion is, gathers some information, and then publishes its rankings. The criteria change from year to year, sometimes quite significantly.
So, when you look at any ranking, you need to ask yourself three questions. First, what factors go into the ranking, and how much weight is each one given? Second, what data do they use, and does it actually bear any relationship to what they claim to be measuring? And, finally, are these factors, and the relative weights assigned to them, what you personally would consider important for your own purposes?
Let's take a look at the US News & World Report methodology. They assign a weight of 35% to what they call "Outcomes." You might think this measures things like the ability of students to get well-paying jobs after graduation, or how many of them go on to graduate programs. But, instead, "Outcomes" here just measures what percentage of students return for a second year after enrolling in the first year, and what percentage actually complete a four-year degree in six years or less. They also give a little bit of weight to the graduation rates of students from poorer families who received government scholarships. That is it.
Next they assign a weight of 20% to something called "Faculty Resources," which relies on average class size, faculty salaries, student-to-faculty ratio, how many professors have the highest degrees (such as Ph.D.s) in their fields, and the proportion of faculty who are full-time. Surprisingly, this last item accounts for just 1%. Yet it is a major problem at many universities that classes are not taught by full-time Ph.D.s, but rather by part-time faculty or even teaching assistants.
"Reputation" counts for 20%. This is determined by a survey of university officials, and does not take into account what students, parents or the general public think.
"Financial Resources," that is, how much money is spent on teaching, research, and student services, counts for 10%.
Then they weight "Incoming Students" at 10%. This is based on scores in tests such as the SAT or ACT, and on the rank of students in their high school class.
Finally, they count "Alumni Giving" at 5%. The idea is that if many students who graduated from a university donate money to it later, they must have been happy with their experience there.
One obvious question to ask is: where do the weightings come from? No answer is given; we just have to accept that the editors of the magazine pulled these out of thin air, and that "Alumni Giving," for example, is five times more important than whether you are taught by full-time or part-time faculty.
Notice also that things we might consider important are simply ignored. They measure how much money the university spends on research, but there is no mention of the quality of that research or the productivity of the faculty. Similarly, they consider how much money is spent on teaching, but completely ignore how good the teaching is. And if you are interested (as most students are) in getting a decent job after you graduate, they make no attempt to measure how well students do when they enter the workforce: what percentage of students find employment in their chosen fields within, say, a year of graduating, or how well they are doing financially after 10 years, even though some of this information is readily available.
So the idea that we should take these rankings seriously is a little misguided. It makes absolutely no sense to assume that a university ranked 49th is somehow better than one ranked, say, 51st. Just a slight tweak of the arbitrary weights could easily reverse that.
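You can see this for yourself with a little arithmetic. The sketch below uses the six US News category weights described above, applied to two entirely made-up universities (the per-category scores are invented for illustration, not real data). Shifting just five percentage points of weight from "Alumni Giving" to "Faculty Resources" is enough to reverse which one "ranks" higher.

```python
# Illustrative only: the category scores for X and Y below are invented.
# The weights mirror the US News categories discussed in this article.

def weighted_score(scores, weights):
    """Combine per-category scores (0-100) into one number using the given weights."""
    return sum(scores[cat] * w for cat, w in weights.items())

# The published weighting scheme.
weights_original = {"outcomes": 0.35, "faculty": 0.20, "reputation": 0.20,
                    "financial": 0.10, "incoming": 0.10, "alumni": 0.05}

# A slight tweak: move 5 points of weight from alumni giving to faculty.
weights_tweaked = {"outcomes": 0.35, "faculty": 0.25, "reputation": 0.20,
                   "financial": 0.10, "incoming": 0.10, "alumni": 0.00}

# Two hypothetical universities with different strengths.
univ_x = {"outcomes": 92, "faculty": 60, "reputation": 85,
          "financial": 80, "incoming": 88, "alumni": 95}
univ_y = {"outcomes": 85, "faculty": 80, "reputation": 84,
          "financial": 82, "incoming": 86, "alumni": 60}

for label, w in [("original", weights_original), ("tweaked", weights_tweaked)]:
    x, y = weighted_score(univ_x, w), weighted_score(univ_y, w)
    leader = "X" if x > y else "Y"
    print(f"{label} weights: X={x:.2f}, Y={y:.2f} -> {leader} ranks higher")
```

Under the original weights, X edges out Y (82.75 vs. 82.35); under the tweaked weights, Y comes out well ahead (83.35 vs. 81.00). Nothing about either university changed, only the editors' arbitrary numbers did.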
As I said before, the magazine fiddles around with the rankings each year, changing weights and throwing out or adding different categories. This almost ensures that the rankings will change from year to year, and that the publishers can make money peddling "the latest," hot-off-the-press rankings.
In another post, I will explain how you can use some of the information in these rankings (but not the rankings themselves), together with other data sources, to get a better idea of how to evaluate universities and colleges, and how to choose the institution that is right for you.