In many respects, college admissions is a game. Students with multiple ethnic backgrounds might try to choose the most "advantageous" ethnicity (or not choose one at all) in terms of affirmative action. They might list biomedical engineering or philosophy instead of pre-medicine as an academic interest to catch the eyes of admissions officers. There are, after all, many more pre-meds than biomedical engineers or philosophers.

What many don't know, however, is that colleges play the game, too. "Gaming the rankings," as it is sometimes called, is the deliberate use of admissions strategies specifically aimed at raising a school's position in the rankings, especially those of US News and World Report.

In a practice known as "yield protection" or "Tufts syndrome," named after its most famous perpetrator, colleges sometimes waitlist students whom admissions officers see as unlikely to enroll even if admitted (think: safety school applicants). While a school might lose out on a few stellar admits, this practice has the net effect of decreasing the admission rate, a factor in the "student selectivity" category of the US News rankings. Clearly, Tufts does it. Washington University in St. Louis does it. Georgetown does it. A host of top-30 institutions do it.

Other dubious practices, such as shifting class sizes, target other US News categories. For instance, rumors surfaced last year on Boston.com that Clemson's meteoric rise in the US News rankings might be at least partially attributed to class size redistribution. Since the proportion of classes with fewer than 20 students accounts for 30% of the faculty resources category while the proportion of classes with fewer than 50 accounts for only 10%, Clemson allegedly shifted students from classes with a little over 20 students into classes with over 50 students to increase its US News ranking.

All this points toward an important question: if certain colleges can simply game the rankings, how much should we trust them? No single ranking system can comprehensively capture such a subjective judgment, but much can be done to make rankings more meaningful.

One approach entirely discards the traditional notion of measuring a college's intrinsic qualities. Instead, a 2004 study by researchers at Harvard, Boston University and the University of Pennsylvania relied entirely on the decisions of 3,240 high-achieving students as a measure of revealed preference, that is, which college students actually choose when admitted to several. The researchers treated each student's college decision as a tournament in which that student's prospective colleges competed against one another. Using a rating system similar to those used in tennis and chess, they determined a ranking for about 100 colleges and universities.

"Our method produces a ranking that would be very difficult for a college to manipulate. In contrast, colleges can easily manipulate the matriculation rate and the admission rate," the authors state. "If our ranking method were used, the pressure on colleges to practice strategic admissions would be relieved."

Accordingly, well-known top schools that notably game the rankings, including Clemson and Washington University in St. Louis, failed to make the top 100. Even with its advantages, though, the revealed preference ranking has never been repeated since; unfortunately, the data it requires make repetition very difficult. Meanwhile, we can just sit back and take college rankings, whether from US News, Forbes or anyone else, with a rather large chunk of salt.
Gaming the system: college rankings need protection from manipulation
Lawrence Chiou • April 30, 2010