Editorial: The Princeton Review shouldn’t use data from RateMyProfessors.com
April 7, 2012
Few scholars agree on what qualities make a successful teacher. Some believe charisma and enthusiasm are essential, while others emphasize the importance of intellect. Most, however, would argue that professors should be evaluated holistically and not — as one guidebook believes — on the basis of anonymous online ratings.
The Princeton Review, a company known primarily for its annual college guide, recently released “The Best 300 Professors,” a book that purports to showcase academia’s most effective lecturers. (Sadly, not a single Pitt faculty member made the list.) The text features profiles of outstanding professors alongside tips for how to succeed in the classroom.
Although it seems unlikely that researchers could narrow a list of best professors down to 300 people, we wouldn’t be opposed to such a book if it gathered data from reliable sources. But in addition to the Review’s annual student survey — which asks participants whether their professors are good teachers, whether they’re accessible, whether they encourage class discussion and whether they bring material “to life” — the guidebook incorporated data from RateMyProfessors.com. And that, in itself, should discredit it.
As every undergraduate knows, the popular website serves more as an outlet for students’ petty grievances than a forum for informed criticism. Teachers’ overall scores are tabulated using only two criteria — helpfulness and clarity — with a separate “easiness” rating accompanying them. If they wish, students can also evaluate their professor’s attractiveness; the hottest faculty members earn a chili pepper icon.
The site makes no effort to ensure that its users are actually the professor’s students or that the same user isn’t posting multiple reviews. It also does nothing to counter the abundance of trivial complaints (e.g., “this man gave me a C because I never attended class”) that appear on many rigorous professors’ pages. As a result, the teachers who score the highest are almost invariably lenient graders. Currently, the top-ranked lecturer boasts an “easiness” rating of 4.4 out of 5. The second highest boasts a 4.6.
Treating RateMyProfessors.com as a reliable website doesn’t just skew one’s perspective on teaching — it legitimizes a ludicrous methodology. If a professor’s worth can be determined solely using the site’s criteria, then Pitt might as well abandon its semesterly teaching evaluations and solicit anonymous feedback by email.
Students are an important resource for college rankings, and their propensity to lionize easy teachers shouldn’t discourage researchers from seeking their input. But the Princeton Review, a respected authority on higher education, undermined its reputation by consulting RateMyProfessors.com. “The Best 300 Professors” shouldn’t be considered a reliable guidebook, any more than the website’s “easiness” rating should be considered a valuable barometer of teaching effectiveness.