Which, not surprisingly, is the cause of much ill will between the surveyors and the schools. The schools dislike being graded. They dislike the way they're graded. They say the criteria for rankings often change, the rankings emphasize the wrong criteria, and many of the criteria are difficult to measure consistently.
For instance, the Journal survey asks about "chemistry" - the responding recruiter's general like or dislike for the school. The schools say a recruiter who's in a foul mood when he visits the campus could trash a program, just because he had a bad morning.
And is it possible to accurately assess the strength of a school's faculty, whose members don't have test scores or grade point averages? The Financial Times uses the number of PhDs on the faculty, while BusinessWeek tracks the academic papers the faculty produces. Does either measure mean anything in terms of providing a good education?
But what the schools especially dislike is all the work required to be graded. Many of the surveys are labor intensive - the U.S. News form runs 10 to 12 pages, for instance. They require alumni lists, test scores for incoming classes, and reams of other data. Ute Frey, who coordinates responses for Haas, spends as much as one-fifth of her time keeping track of the 16 surveys the school participates in. Employees at other schools tell similar stories. Says Dave Wilson, president and CEO of the Graduate Management Admission Council, an industry trade group, "That's the truly dysfunctional dimension of all this. The time it takes to fill them out just consumes the schools."