US News & World Report rankings pose problems for students, colleges
November 6, 2013
Think back to junior year of high school. Your parents and guidance counselors had just begun asking you to consider where you might want to attend college.
But this proved no easy task. The heaps of college brochures, all depicting gray-haired professors pontificating to a handful of students gathered in a circle outside a gothic collegiate hall, only confused matters.
Everything was clarified, however, when you discovered that some good Samaritan had ranked the “best colleges” and published the list online for everyone to evaluate. That savior was most likely U.S. News & World Report, the holy grail of college rankings.
Yet this measurement, which guides the decisions of so many high school students, says nothing about the actual value of the education a university is capable of delivering. And it misleads not only confused high school overachievers, but senior university trustees as well.
U.S. News & World Report became the first organization to compile and publish numeric university rankings in its magazine in 1983. Since then, it has reigned as the definitive measurement of university success, a fact well known to parents, prospective students and administrators. In 2007, when the magazine first published its rankings online, the site received 10 million views within three days, compared with its typical 500,000 per month.
The University of Pittsburgh, for its part, dropped from No. 58 to No. 62 between the 2012 and 2013 rankings.
Curious high schoolers aren’t the only ones who keep an eye on these rankings. It is not uncommon for alumni and donors to follow them as a way of holding their alma mater’s chancellor and administrators accountable. The contract of the president of Arizona State University, for example, stipulates a raise if the university ascends in the rankings.
The inherent flaw in using the U.S. News & World Report rankings as a guide for university choice and policy lies in the methodology behind them. The rankings score universities across seven categories, each weighted according to its importance.
The most heavily weighted category is “undergraduate academic reputation.” U.S. News compiles this score by surveying both senior university faculty at various institutions and high school guidance counselors. Respondents are asked to rate universities from one to five on their quality, or else respond “I don’t know.”
Immediately, the problem becomes clear. Most respondents have direct experience with only a limited range of universities, and because the U.S. News rankings are so widely known, the survey inevitably draws responses from professors whose only knowledge of less prestigious universities comes from the rankings themselves. The previous year’s results thus strongly influence the single most heavily weighted category of the following year. In this way, the rankings perpetuate themselves year after year.
On closer inspection, the same problem runs through each of the other categories, to the point that it defines the methodology as a whole.
Consider, for instance, the “student selectivity” category. It encompasses the overall acceptance rate (total admissions divided by total applicants), along with average SAT scores and high school class rank. A university that attracts many applicants but admits very few is ideally positioned to rise in the rankings.
This criterion suffers from the same problem of self-perpetuation, since students will apply en masse to the universities that top the rankings and, not coincidentally, also admit the fewest students. According to the 2014 rankings, Princeton and Harvard — the top two schools on the list — admitted 7.8 percent and 6 percent of applicants, respectively.
The real flaw in this measurement, however, lies in its skewed understanding of a university’s mission. While the private colleges that occupy the first 20 spots in the rankings need not feel committed to making education widely accessible to their communities, public universities are obligated to do so. Taxpayers across Pennsylvania do not contribute to this University so that it can admit as few students as possible, all in the name of garnering “prestige.”
So how, and more importantly why, should we be compared against universities with such a different institutional purpose? Hopefully, university alumni and donors will remember that accessible education is the hallmark not of a well-ranked university, but of one concerned with social justice.
In fact, if those donors would only be generous enough to write another check or two, the University could skyrocket up the rankings. Almost every measurement is tied, directly or indirectly, to a university’s wealth: “financial resources per student,” “alumni giving” and “faculty compensation,” to name a few. So all Pitt needs to do to compete with Harvard is raise its endowment roughly tenfold to match that school’s monstrous $30 billion.
The policy these rankings suggest, then, is simple: drastically raise tuition and admit only the 6 or 7 percent of students who have already been gifted the greatest educational opportunities.
All we can hope is that the leaders of this University pay far less attention to these rankings than we did when we were in 11th grade.
Write Simon at [email protected].