University Rankings
Someone (well, many people, deprived of commentary) asked me what I thought of these rankings. Frankly, they are all what I call pseudo-quantitative exercises. Without looking at what the numbers measure, we already know that these are dubious exercises. Why?
Firstly, the numbers you see rely heavily on qualitative inputs converted to numerical form (like the infamous and hugely popular Likert scales). In theory, a large enough collection of inputs should minimise error; in practice, we simply do not know if this is true, considering that the inputs are non-random. How random can the respondents to a survey on universities be?
Other forms of non-linear 'hashing' such as converting raw data to a normalised index have also been used. An example of this is to take the 'best' value and convert it to 100/100, and then pro-rate the other values accordingly. Slightly geeky example follows:
Supposing you have a Physics test, and the best student scores 25/100. We have a few ways to convert this to 100/100: we can 1) multiply by 4, 2) take the square root and multiply by 20, 3) divide the score by 25, take the arcsine of that (in radians) and multiply by 200/π, and so on. A score of 10 on that test would get you 1) 40, 2) 63.2 or 3) 26.2 marks respectively.
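For the geekily inclined, the three rescalings above can be sketched in a few lines of Python (the function names are mine, not standard terminology):

```python
import math

# Three ways to rescale scores so that the best raw score (25/100)
# becomes 100/100. Each maps 25 -> 100, but disagrees everywhere else.

def linear(score, best=25):
    return score * (100 / best)                      # multiply by 4

def square_root(score):
    return math.sqrt(score) * 20                     # sqrt(25) * 20 = 100

def arcsine(score, best=25):
    return math.asin(score / best) * 200 / math.pi   # asin(1) = pi/2 -> 100

for f in (linear, square_root, arcsine):
    print(f.__name__, round(f(10), 1))
# linear 40.0, square_root 63.2, arcsine 26.2
```

The same raw score of 10 lands anywhere between about 26 and 63 depending on which normalisation you happen to pick, which is exactly why a pro-rated index tells you less than it appears to.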
Secondly, a lot of it depends on subjective data. By subjective data, I mean that the data originates from people who have no means of comparison with other institutions. For example, supposing I ask you to rank your experience in Institution A on a scale of 1-10. You give me an '8'. Four years later, you are at Institution B, which you now rank as '8' and you express the thought that perhaps you should have given Institution A a '6' instead because it is not half as good as B. Well, you are entitled to your opinion, but if it weren't half as good as B, perhaps you should have given it at most a '4'?
The 220,000 students surveyed in the Sunday Times version are like that; they are rating their degree of satisfaction without comparison, without objective measures, without reliability. Can we assume that the resulting errors will cancel out? No...
Thirdly, many ranking systems rely on measuring awards. If we were to follow the silly example of this Professor of Economics, we'd be measuring universities in terms of Nobel Prizes. What's so bad about that? Well, the Nobel Prizes are one-off, first-past-the-post awards. There are no prizes for second or third place, so we have no idea how close any given university came to getting one. It is like saying an MP is infinitely better than his defeated opponent even though he got 49.1% of the vote and his opponent got 48.7%. It is also like saying that all MPs are equal.
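The MP analogy makes the information loss concrete. A minimal sketch (the vote shares are the hypothetical figures from the example above; the function name is mine):

```python
# Winner-take-all scoring discards the margin of victory entirely.
shares = {"winner": 49.1, "runner_up": 48.7, "others": 2.2}

def seats(shares):
    # First past the post: only the top candidate gets anything.
    top = max(shares, key=shares.get)
    return {name: (1 if name == top else 0) for name in shares}

print(seats(shares))  # {'winner': 1, 'runner_up': 0, 'others': 0}
```

A 0.4-point margin and a 40-point margin produce exactly the same output, which is the objection to counting Nobel Prizes: the count tells you who crossed the line first, not how close anyone else came.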
Fourthly, methodologies change from year to year and there's nothing that can be done about it. Ideally, they should improve; but in point of fact, the market that wants ranking data will require different measures each year, and hence require different methodologies in terms of data collection, weighting, cross-comparison and so on. Such a state has been passed off as 'good news', but I don't think that's necessarily true.
I could go on and on about how flawed everything is. But my readers are practical people, so what can we do about it? What I would say is that if you can live without brand names, most universities are fine.
A student should really be evaluating the environment that university is in, as well as what that student is going to university for. If university is a disguised career move, then you need a brand name. If university is a preparation for academic life, check out the research statistics and whether leading researchers are there. If university is an excuse to bum around, check out the social statistics. If university is a balance between personal finance and a reasonably sound education, check out the educational cost and its average returns (i.e. how much does a graduate in your chosen course of study earn within 1, 3, 5, or 10 years after graduating, ceteris paribus?)
There's lots to say, really. I try to provide educational advice if anybody wants to ask, but here are my initial caveats.
- Good advice requires good information, and to advise you, I'd need to know you intrusively well;
- I come from an academic family, so I have a bias in certain directions;
- I didn't graduate from a top 10 university, and never felt the need to have done so;
- I believe that history and culture should have some weight;
- I believe that a lot of students are attracted by the dream or ideal of a particular place, and not the hard data.
Labels: Education, Schools, Students, University Life