In the two months since the midterm election, the Obama administration has taken one bold political step after another — from the executive action shielding undocumented immigrants from deportation to the dismantling of Cold War-era restrictions on a thawing Cuba.
In the wake of these actions, the media paid little attention to the Department of Education (ED) when it unveiled the long-awaited standards for the ‘college rating’ program that the president first proposed more than a year ago. While these standards move higher-education policy in the right direction, they suffer from the same shortcomings as Obama’s initial proposal.
The decidedly more measured tones of the announcement ensured minimal publicity for the standards, which aren’t exactly standards at all. They are, rather, a framework for the standards. But they aren’t exactly a “framework,” either. They are a “draft framework.” Even in this climate of partisan hostility, nothing called a draft framework could provoke too much passion.
Nevertheless, the tentative tone of the draft framework speaks to the controversy that the initial proposal and the ED’s subsequent fact-finding sparked among college presidents and policymakers. Obama proposed a rating system that would reliably inform prospective students and their families about college choices and that would guide the proportional distribution of federal funds to each institution. The idea of the ratings themselves provoked the ire of some college administrators worried about their institutions’ reputations. Other administrators welcomed the plan but held reservations about certain measurements.
Rather than committing to controversial measurements, the ED released the entirely non-committal draft framework, which emphasizes the three main metrics on which institutions will be scored: “access, affordability and outcomes” — that is, representation of low-income students, low costs with manageable loan debt, and high graduation and post-graduation employment rates. Most stakeholders agree on these principles, but the quantitative measurements necessary to capture these broad categories prove elusive.
The ED put the predicament best itself, conceding in the announcement that “many of the factors that contribute to a high-quality postsecondary education are intangible, not amenable to simple and readily comparable quantitative measures and not the subject of existing data sources that could be used across all institutions.” It turns out that a good college education isn’t something that anyone can measure as easily as a good baseball player — even if there are about as many rankings.
Administrators have particularly targeted the “outcomes” category as an overly simplistic, one-size-fits-all approach to measuring higher education. Not all colleges strive to produce graduates with impressive long-term median incomes or with jobs paying 200 percent of the federal poverty guidelines immediately upon graduation — both of which the draft framework considers as possible metrics. Colleges that train students for social work or public service, or that encourage short-term opportunities such as the Peace Corps, would not rank well under such scrutiny.
What is more, colleges can easily subvert the well-intentioned metrics through any number of loopholes. If colleges worry about unimpressive graduation rates, they can always reduce academic rigor even further and pass students along a learning-free conveyor belt. If they want to increase their population of first-generation college students, they can always admit more and skimp on the institutional support that many of those students need to overcome their respective challenges.
The standards can circumvent these problems, however, by focusing on practices rather than results. The ED recognizes that students attend different colleges for different reasons — one quantitative definition of a “good outcome” may apply to a liberal arts college attracting idealistic 18-year-olds, but not to a vocational school enrolling employed adults in night classes. Nevertheless, certain measurable pedagogical practices allow for a universal standard for assessing quality.
It might help to think of colleges like restaurants. No government agency could reasonably articulate a standard for a good restaurant experience that pleases everyone. But agencies can — and certainly do — ensure that restaurants adhere to food safety standards, lest they poison their patrons.
In the same way, the ED can ensure that universities conform to best educational practices without making any judgment about what constitutes a good education. Rather than measuring graduates’ median salary, it can measure the proportion of the budget that a university allocates to classroom instruction as opposed to, say, advertising and marketing. Rather than measuring the number of Pell Grant students an institution succeeds in attracting, it can measure the extent to which a university reaches out to local public schools and low-income communities.
The standards themselves — or at least their draft framework — could identify practices that all universities can be expected to follow. Any pronouncements on some ideal Platonic form of higher education, however, belong rightfully in the Ivory Tower, not on Capitol Hill.
Simon Brown primarily writes about education for The Pitt News.
Write to Simon at spb40@pitt.edu.