The Shanghai Ranking

On University Rankings: Hype or Substance?

When university rankings are released every year, university managers hold their breath until they discover that their institutions have either dropped, in which case the rankings are biased, inaccurate and flawed, or climbed, in which case the rankings are a fair, objective and accurate reflection of their performance.

And the strange thing is, these two diverging opinions are somehow both right. What makes one university better than another? Let us first define what a university is. According to Wikipedia, in brief, "A university is an institution of higher education and research, which grants academic degrees in a variety of subjects. A university provides both undergraduate education and postgraduate education. The word university is derived from the Latin universitas magistrorum et scholarium, roughly meaning 'community of teachers and scholars'."[1]

The two well-known missions of universities, higher education and research, are clearly mentioned. To these, one could add the so-called third mission: service in the broad sense.

In line with a globalisation that affects everyone, everywhere, at any time, one cannot fail to notice an increasingly fashionable tendency to rank everything in life: the best car in the world, the best wine, and so on. Higher education has not been spared by this phenomenon, and the temptation to rank universities has led to the establishment of formal, publicised rankings.

The "US News and World Report" published the first public ranking of US universities in 1983; next was was the emergence of three international ranking systems within a short period of time in 2003[2] and 2004[3]. Our aim is to investigate some of these global ranking methods that make abstraction of international boundaries, languages and cultures.

For a university to be deemed better than another, it would have to provide better education, better research and better service, in a subjective order decided by the rankers. The fundamental questions thus become: What criteria can one use to evaluate universities? What weight should each criterion carry? Which formula should be used? By investigating the most publicised ranking methods, I will attempt to answer these questions.

So far, the most publicised ranking system is the Academic Ranking of World Universities[4] (ARWU), known as the "Shanghai ranking", provided by the Institute of Higher Education of the Shanghai Jiao Tong University in Shanghai, China. The second most popular ranking is provided by the "Times Higher Education Supplement", a UK-based magazine, in collaboration with Quacquarelli Symonds Ltd, a career and education network consultancy[5]. Their joint ranking is known as THES-QS. A third, less known method is the Webometrics ranking[6], provided by the Cybermetrics Lab, a research group that is part of the Consejo Superior de Investigaciones Científicas[7] (CSIC), "the largest public research body in Spain"[8].

Each of these institutions had a very specific agenda when it started the cumbersome process of establishing its ranking system. The ARWU's aim was to compare Chinese universities with "world class universities"[9]; the obvious goal was to identify gaps and remedies so that Chinese Higher Education Institutions (HEI) could adapt, improve and eventually join the very select club of "world-class universities". The rapid worldwide public acceptance of the ARWU ranking took everyone, including its authors, by surprise. Shanghai evaluates 3 000 universities and ranks the top 500. A small number of African universities are mentioned in the 2009 results, all of them in South Africa.

The motivation behind the creation of the second ranking system, THES-QS, was mainly a reaction to the perceived flaws of ARWU, but also partly a wish to guide UK-based students in their choice of HEI. It includes up to 200 HEI, and no African university made it into the 2009 ranking.

The Webometrics ranking's aim was, and remains to this day, to encourage open access to information through the World Wide Web. It evaluates more than 18 000 universities and ranks the top 6 000. The two public universities in Namibia, Unam[10] and the Polytechnic, are both included in this top 6 000.

From humble, confidential beginnings, rankings are now taken for granted by the public at large, to the extent that the French President, Nicolas Sarkozy, recently declared his wish to see more French universities in the top 20 and tasked his Minister of Higher Education with drafting a strategy in that direction[11]. It is also rumoured that the remuneration of some university presidents in the USA depends on their institutions' performance in the rankings!

Looking closely at these three ranking methodologies, one cannot fail to notice the importance they attach to research, rather than teaching or service activities, as the major ranking factor. This is probably because research is attractive as a ranking criterion, and is also the one for which data can most easily be gathered. All three methodologies claim transparency in the collection of data, and their authors all maintain that their methods can be replicated.

Research output criteria form 60% of the ARWU score: the number of highly cited researchers in 21 categories (20% of the total score), the total number of papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index in 2008[12] (20%), and articles published in Nature and Science (20%). The remaining 40% is made up of alumni who are Nobel prize laureates or Fields medal recipients (10%), per-capita productivity (10%) and, finally, the number of academic staff members who are Fields medal recipients or Nobel prize laureates, excluding literature and peace (20%). This last criterion carries a decreasing weight the older the award, and its points are awarded to the university at which the laureate was working at the time of the prize announcement.

Anecdotally, Albert Einstein's Nobel prize was fought over, some years back, by two German universities that resulted from the split of the University of Berlin after the Second World War[13]. These universities argued over which one should get the ARWU points for Einstein's prize[14]. Some critics[15] wondered what influence a Nobel prize obtained in 1921 can have on the research performance of a university nearly 90 years later. Proponents of ARWU argue that the criteria forming the ranking are readily available on the Internet and are not subjective.

ARWU offers, like THES-QS, an overall ranking as well as five rankings focused on the following fields: natural sciences and mathematics; engineering, technology and computer science; life and agriculture sciences; clinical medicine and pharmacy; and social sciences. ARWU offers five more, closely related rankings on the following subjects: mathematics, physics, chemistry, computer science, and economics and business. The formula itself, however, has been criticised. Using a multiple-criteria decision model, the same critics[16] demonstrated how a change in the maximum value of a single criterion, with that criterion keeping the same weight in the formula, may completely change the ranking of other universities, thus violating a basic ranking principle: a change in the performance of a top university should not affect the relative positions of the others.
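
To make this concrete, here is a minimal sketch, in Python, of an ARWU-style composite score. The weights are the published ones quoted above, but the three universities and all indicator values are invented purely for illustration; the scale-to-best normalisation (the top performer on each indicator gets 100) is the normalisation ARWU is generally described as using. The sketch also illustrates the critics' objection: when only university A's publication count changes, B and C swap places even though their own figures never move.

```python
# Illustrative sketch of an ARWU-style composite score (all data invented).
ARWU_WEIGHTS = {
    "alumni": 0.10,  # alumni with Nobel prizes or Fields medals
    "award":  0.20,  # staff with Nobel prizes or Fields medals
    "hici":   0.20,  # highly cited researchers
    "ns":     0.20,  # papers in Nature and Science
    "pub":    0.20,  # papers indexed in SCI-Expanded and SSCI
    "pcp":    0.10,  # per-capita productivity
}

def composite(raw):
    """Scale each indicator so the best performer gets 100, then apply the weights."""
    totals = {uni: 0.0 for uni in raw}
    for indicator, weight in ARWU_WEIGHTS.items():
        best = max(values[indicator] for values in raw.values())
        for uni, values in raw.items():
            totals[uni] += weight * 100.0 * values[indicator] / best
    return sorted(totals.items(), key=lambda item: -item[1])

data = {  # three hypothetical universities, invented figures
    "A": {"alumni": 10, "award": 12, "hici": 50, "ns": 60, "pub": 6000, "pcp": 40},
    "B": {"alumni": 1,  "award": 3,  "hici": 25, "ns": 20, "pub": 6000, "pcp": 24},
    "C": {"alumni": 3,  "award": 6,  "hici": 40, "ns": 18, "pub": 1000, "pcp": 24},
}
print(composite(data))  # B finishes ahead of C

# Only A's publication count changes, yet B and C swap places, because every
# other university's "pub" score is rescaled against the new maximum.
data["A"]["pub"] = 12000
print(composite(data))  # C now finishes ahead of B
```

The same mechanism applies whichever indicator moves, which is precisely why the MCDM critics argue that a sound ranking formula should be immune to it.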

As mentioned, the points for a Nobel prize or Fields medal are allocated to the institution hosting the laureate at the time of the prize announcement. Other critics[17] investigated where recent medicine Nobel laureates actually performed the work that led to their prize, and compared it with the institution that received the ARWU credit. Of the 22 medicine laureates in the period 1997-2006, "only seven did their award-winning work at the institution they were affiliated with when they received the award".

Should a researcher be employed by two institutions, which is quite normal in countries such as France or Germany, where research centres do most of the research and universities focus on teaching, the two institutions share the reward, each receiving half a point. A near-absurd result was the one-time inclusion in ARWU of a research institution that did not offer a single qualification[18].

Also disturbing are citations sometimes attributed to universities that do not exist under that name, rather like the famous "Beam me up, Scotty" that Captain Kirk never actually said in the Star Trek series. For example, the University of Paris was known as the "Sorbonne" until it split into 13 smaller universities in 1970; Paris I is now called Panthéon-Sorbonne, and Paris IV is called Paris-Sorbonne. If an academic citation mentions only "Sorbonne", to which university will ARWU allocate the points? Another striking example, mentioned by Anthony van Raan, concerns the Free University of Brussels, which is sometimes referred to as the "Vrije Universiteit Brussel", sometimes as the "Université Libre de Bruxelles", but usually indexed as the "Free University (of) Brussels", depending on whether scholars use its Flemish, French or English name[19]. How can one then trust citation counts when universities go by different names? Van Raan, citing Moed[20], estimates that 7% of all citations end up being lost, a figure that can reach 30% in some disciplines. In their reply to van Raan[21], the ARWU authors argue that the error margin for citations is a lower 2%.
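
The underlying task is one of name disambiguation before counting. Here is a minimal sketch following the Brussels example above; the variant spellings, the canonical label and the affiliations are assumptions chosen for illustration, not actual indexing rules or data.

```python
from collections import Counter

# Hypothetical mapping from name variants to one canonical label; real citation
# indexes need far larger and messier tables than this.
CANONICAL = {
    "vrije universiteit brussel": "Free University Brussels",
    "universite libre de bruxelles": "Free University Brussels",
    "free university brussels": "Free University Brussels",
    "free university of brussels": "Free University Brussels",
}

def normalise(affiliation):
    """Map a raw affiliation string to its canonical institution, if known."""
    return CANONICAL.get(affiliation.strip().lower(), affiliation.strip())

# Affiliations as they might appear on indexed papers (invented examples).
papers = [
    "Vrije Universiteit Brussel",
    "Universite Libre de Bruxelles",
    "Free University of Brussels",
    "Free University Brussels",
]

print(Counter(papers))                        # credit fragmented over four variants
print(Counter(normalise(p) for p in papers))  # credit consolidated under one name
```

Any variant missing from the table simply falls through unchanged, which is how the share of lost citations that Moed and van Raan estimate comes about.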

One could wonder why prestigious prizes and awards other than the Nobel prizes and the Fields medal are not taken into account, and why only publications in the journals Science and Nature count: what about humanities or social science journals? With regard to the publication database, ARWU relies solely on Thomson Reuters, which is known to index mostly papers written in English. Could this be construed as a bias towards the "hard" sciences and towards English as a publication language? To correct that perceived bias towards the "hard" sciences, the ARWU authors recently decided to apply a weight of two to all social science publications. In spite of this effort, some critics still maintain that to belong to the elite group of top universities, an institution must have the following features: generalist yet strongly oriented towards the hard sciences, English-speaking, operating with a large budget, hosting a large contingent of foreign staff and students, technologically well equipped, and spending the bulk of its operational budget on research activities. In other words, small, specialised institutions that excel in teaching, have excellent local students and staff but no foreign ones, and do not use English have no chance of making it to the top.

In their defence, the ARWU authors acknowledged the imperfection of their formulae and criteria, and offered the following disclaimer in a response to van Raan: "People should be cautious about any ranking and should not rely on any ranking either, including the Academic Ranking of World Universities. Instead, people should use rankings simply as one kind of reference and read the ranking methodology carefully before looking at the ranking lists."[22] To their credit, the ARWU authors correct their methodology from time to time. But there again, critics claim the corrections themselves are not made transparently[23].

Moving to THES-QS, the ranking criteria are the number of citations per faculty member (20%), graduate employability (10%), the proportion of foreign staff (5%) and of foreign students (5%), the student/faculty ratio (20%) and peer review (40%)[24]. The latest rankings are interactive, and institutions are evaluated in the following subject areas: arts and humanities, engineering and technology, life sciences and medicine, social sciences and natural sciences. Critics argue that surveys with a completion rate of less than 1% make up more than half of the ranking outcome[25]. The largest criterion is the peer review, in which survey recipients are asked to name their top 30 peers. As one can imagine, a survey with a 1% completion rate is unlikely to be representative or accurate. Furthermore, the THES-QS ranking is perceived as biased towards the English-speaking world. One is thus not surprised that the first non-English-speaking university, the University of Tokyo in Japan, appears only at number 19, while there is a surprisingly high presence of UK and Australian universities in the top 100 compared to other ranking systems. To illustrate this possible bias, the London-based University College and Imperial College were ranked 4th and 5th in the world respectively in 2009, ahead of MIT in the USA, which trailed at 9th. Finally, the process of recruiting survey respondents is not transparent, and some suggest that the lion's share of respondents comes from the Anglo-Saxon world, which again may favour English-speaking countries and institutions. Although QS has opened offices in Singapore, Paris, and Alicante in Spain, could it be "too little, too late"?
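
Because half of the weight rests on the two surveys, small shifts in survey scores move the composite strongly. The sketch below combines the six published weights over invented indicator values for three hypothetical institutions X, Y and Z; the indicator labels and the z-score normalisation are my own assumptions for illustration, not necessarily the procedure QS actually applied.

```python
from statistics import mean, stdev

# Published THES-QS weights (2009); indicator labels are mine, and the z-score
# normalisation below is an illustrative assumption.
WEIGHTS = {
    "peer_review": 0.40,
    "employability": 0.10,
    "citations_per_faculty": 0.20,
    "faculty_per_student": 0.20,   # inverse of the student/faculty ratio: higher is better
    "intl_faculty_share": 0.05,
    "intl_student_share": 0.05,
}

def zscores(values):
    """Standardise a list of indicator values (zero mean, unit spread)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

def rank(universities):
    """Weighted sum of standardised indicators, highest composite first."""
    names = list(universities)
    total = {name: 0.0 for name in names}
    for indicator, weight in WEIGHTS.items():
        zs = zscores([universities[name][indicator] for name in names])
        for name, z in zip(names, zs):
            total[name] += weight * z
    return sorted(names, key=lambda name: -total[name])

unis = {  # invented indicator values for three hypothetical institutions
    "X": {"peer_review": 90, "employability": 80, "citations_per_faculty": 70,
          "faculty_per_student": 0.09, "intl_faculty_share": 0.30, "intl_student_share": 0.25},
    "Y": {"peer_review": 70, "employability": 85, "citations_per_faculty": 95,
          "faculty_per_student": 0.12, "intl_faculty_share": 0.10, "intl_student_share": 0.08},
    "Z": {"peer_review": 60, "employability": 60, "citations_per_faculty": 60,
          "faculty_per_student": 0.07, "intl_faculty_share": 0.05, "intl_student_share": 0.05},
}
print(rank(unis))
```

With 50% of the weight attached to the two survey-based indicators, a noisy or unrepresentative survey sample dominates the outcome, which is exactly the critics' point.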

The Webometrics ranking has not received as much criticism as Shanghai or THES-QS, partly because it is less popular[26]. Its evaluation process is more transparent. Four criteria are used, all based on data extracted from search engines: the size of the institutional web domain (25% of the ranking formula), visibility expressed as the number of external inlinks[27] (50%), the number of files deemed to be academic material (15%), and the Google Scholar site count[28] (15%). Results are collected on the Internet from the large search engines: Google, Yahoo, Bing and Exalead.
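
As a rough sketch of how such web indicators might be combined (the institution names and counts below are invented, and the rank-based aggregation is an assumption for illustration rather than the Cybermetrics Lab's exact procedure), each institution is ranked separately on the four indicators and the ranks are then combined with the published weights:

```python
# Webometrics-style composite built from four web indicators (invented data).
WEBO_WEIGHTS = {"size": 0.25, "visibility": 0.50, "rich_files": 0.15, "scholar": 0.15}

def webometrics_style_rank(counts):
    """Rank institutions on each indicator, then combine the ranks with the weights."""
    names = list(counts)
    score = {name: 0.0 for name in names}
    for indicator, weight in WEBO_WEIGHTS.items():
        ordered = sorted(names, key=lambda n: -counts[n][indicator])  # rank 1 = largest count
        for position, name in enumerate(ordered, start=1):
            score[name] += weight * position
    return sorted(names, key=lambda n: score[n])  # lowest weighted rank comes first

counts = {  # three hypothetical institutions P, Q and R
    "P": {"size": 120_000, "visibility": 35_000, "rich_files": 8_000, "scholar": 4_000},
    "Q": {"size":  90_000, "visibility": 52_000, "rich_files": 6_500, "scholar": 3_200},
    "R": {"size":  40_000, "visibility": 11_000, "rich_files": 9_000, "scholar": 5_100},
}
print(webometrics_style_rank(counts))  # visibility alone carries half the weight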

Detractors will point out that the method search engines use to calculate a domain's size[29] is at best very opaque, at worst a well-guarded secret. I experienced the bias of the search engine Exalead towards American and UK universities myself while collecting search-engine query results in December 2009 and January 2010; it may have to do with the perceived "relevance" of a web domain. It is thus unclear whether the "size" criterion gives an advantage to institutions with "university" in their name, because they may be seen as more relevant. For example, a search for "University in Namibia" will yield more results in favour of Unam than of the Polytechnic, not because of any difference in academic substance between the two institutions[30], but because of the difference in their names, which clearly favours the University of Namibia; the word "University" obviously has better visibility than "Polytechnic" when one uses a search engine to look for a "university"[31].

Critics may also object that the criteria "size", "rich files" and "number of publications" are quantitative rather than qualitative, which leads to the belief that Webometrics rewards quantity of information without regard for quality. Finally, universities with restrictive publication policies are disadvantaged. The Webometrics authors refute these claims by pointing to the importance they attach to the visibility criterion: a page receiving a high number of inlinks is usually an indication of its quality, in much the same way as academic citations, since high-quality publications are likely to be cited more often than lower-quality ones. But citations should not always be interpreted as a positive vote[32].

There are other, less known global ranking systems, such as those from Leiden University, 4 International Colleges and Universities (4ICU) and the École des Mines de Paris (ParisTech). Their methodologies again differ. Leiden performs a bibliometric analysis based on the scientific output of about 1 000 universities. 4ICU is a web popularity ranking whose formula is made up of three web-collected criteria, two of which are also used by Webometrics (Google size and Yahoo external inlinks[33]). ParisTech uses as its sole criterion the number of alumni per university who are, at the time the ranking is compiled, CEOs of Fortune 500 companies; every institution that granted at least one degree to such a CEO gets points. This ranking is seen by its critics as an assessment of teaching quality alone, with a clear bias towards French HEI. But one has to be aware that the bulk of research in France and in Germany is performed not by universities but by specialised research centres, such as the Institut Pasteur and the CNRS in France, or the Max Planck and Fraunhofer Institutes in Germany. Who can then blame French or German ranking systems for not using research outcomes to evaluate HEI, if research activities take place at specialised institutes that do not routinely grant degrees?

Any attempt to rank universities in developing countries can prove a very difficult exercise, because the likelihood of being included in either the ARWU or THES-QS is at best low: firstly, because none has yet produced, or currently employs, a Nobel prize laureate or Fields medal recipient[34]; secondly, because research at institutions such as Namibia's two public universities is hampered by chronic underfunding. To put the different contexts in perspective, Harvard University in the USA is not a public university. It is privately funded and had operating expenses of N$29.4 billion[35] for 20 230 students in 2008[36]; a rapid calculation shows an average expense of N$1 450 000 per year per Harvard student, and that figure includes the research activities for which this institution is well known. Let us compare this juggernaut figure with a Namibian government subsidy, for the same year, of N$259.4 million for 8 361 Unam students[37] and N$106.8 million for 9 410 Polytechnic students[38]. The operational expenditure of Unam that year was N$332.9 million, against N$217.9 million for the Polytechnic. The average cost of a Unam student in 2008 was thus N$39 815, against N$27 566 for a Polytechnic student; as one can see, these figures are no match for the N$1 450 000 spent per Harvard student.

But does Harvard really spend that much money per student? Of course not: Harvard massively funds research activities that do much for its ranking. The basic rule of affordability prevents Namibian institutions from engaging in expensive basic research that would not contribute much to the local economy. It is then perfectly understandable that Namibia's universities focus on teaching and learning to provide the skills the country badly needs. Because of their low research output compared to the rich universities, and since ARWU and THES-QS do not rank beyond positions 500 and 200 respectively, it will be almost impossible for Unam and the Polytechnic to enter these rankings, at least in the short term, unless they get more resources.

How are higher education institutions reacting to these rankings? Do they simply ignore them? Do they pay attention to them? Do they act upon them? It seems that the better an institution's position in the rankings, the higher the interest and the pressure to stay there. An obvious way to keep a good ARWU rank is to hire potential Nobel prize laureates or Fields medal winners, but because these are in short supply and in high demand, one can always fall back on hiring publication Stakhanovites.

One can debate whether the presence of Nobel or Fields laureates at a university necessarily means that the quality of education in all its departments is high. It rather means that the institution enjoys the presence of at least one world-class researcher in one academic field[39]. But, like most cognitively advanced persons, they usually have very difficult characters, making most of them unsuitable for teaching at undergraduate level, the level at which most students exit higher education anyway. Using the presence of a Nobel prize laureate as an argument for excellence is a bit like airline marketing, where the generous legroom and fine food in first class feature in the TV adverts, but, strangely enough, the legroom and plastic food trays in economy class are never shown, except in parodies. To what extent a university benefits from the presence of a Nobel prize laureate remains an excellent question.

A potential danger facing HEI is the pressure applied by university boards to allocate more resources to the academic departments whose work improves ranking positions fastest. Allocating more resources to one department means allocating less to others. Should this logic be applied to the full, we may witness the complete extinction[40] of humanities, law and every other department that does not directly and positively influence the rankings. This would prove catastrophic for entire foundations of human knowledge.

Any attempt to rank universities attracts criticism from some quarters and praise from others: too biased towards research, too biased towards the input quality of students, too biased towards the English-speaking world, or even too "unscientific", not transparent enough, or not comparing apples with apples. To the question "what is the best car in the world?" there is no universal answer; it all depends on what the car will be used for. "What is the best wine?" likewise depends on personal taste and on the food being served. The answer to the question "what is the best university in the world?" seems to be of the same kind. For whom, in which context, and in which field of study? Do we speak of relative or absolute quality?

Rather than discussing these critical questions, I witnessed discussions in an Internet newsgroup, early in 2008, over the poor showing of some institutions in the African section of the Webometrics ranking. Alumni and staff members of about 20 African HEI outside the African top 10 claimed the rankings were incorrect because their institutions or alma maters should have been included in that top 10. Notwithstanding the mathematical impossibility, none of the participants ever bothered to take a close look at the methodology, which would have explained why South African universities were monopolising the first places. They were simply upset about the low status of their institutions, yet nobody did much to ask the simplest question: why the low positions in the first place?

So what can we expect in the years to come? In 2009, THES publicly acknowledged that its ranking was no longer "fit for purpose" and severed ties with its partner of six years, QS Ltd. THES declared that it would rebuild its ranking methodology from scratch and would use the Thomson Reuters publication database; the new ranking system is expected at the end of 2010. QS will produce its own ranking, also at the end of 2010, further adding to the cacophony. In a more positive development, the International Observatory on Academic Rankings and Excellence, an offshoot of UNESCO's International Ranking Expert Group, has established a set of ranking guidelines known as the Berlin Principles on Ranking Higher Education Institutions[41], a voluntary code of conduct to which rankers should ideally subscribe.

The h-index, named after Jorge Hirsch, is also showing promise. It allocates points to scholars, who in turn transfer those points to their employing universities. In short, a scholar has an index h if h of his or her published papers have at least h citations each. The h-index is useful for gauging both the impact and the productivity of scholars. More factors have been proposed over the last three years: the B-index, the H-1 factor, the G-factor, and the list goes on; it seems each proposer hopes to leave his or her name to posterity. Opponents say the h-index is not accurate, because talented researchers who die young never obtain a high one.
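
As a minimal sketch of that definition (the citation counts below are invented for illustration), the h-index can be computed by sorting a scholar's citation counts in descending order and finding the largest h for which the h-th paper still has at least h citations:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position
        else:
            break
    return h

# Invented citation counts: four papers have at least 4 citations each,
# but there are not five papers with at least 5, so the h-index is 4.
print(h_index([25, 8, 5, 4, 3, 0]))
```

An institution-level figure can then be derived by aggregating the indices of its staff, which is the transfer of points to universities described above.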

New ranking initiatives are already mushrooming. Some of them may not be seen as universal, such as the one from the Russian institute RATER, which placed Moscow State University at number 5 in the world[42], whereas in no other ranking does the same university make it into the top 50. But there is no dishonesty in RATER's approach; its authors adopted sets of criteria that, according to Russian educationalists, are relevant to the evaluation of HEI quality. And in their defence, they rank MIT as the number one university in the world.

The most interesting prospect could be the outcome of a tender advertised late last year by the European Commission to develop a ranking system. The tender was won by a consortium led by the German Centre for Higher Education Development (CHE), whose existing ranking systems[43] are already operational. Although these systems are currently restricted to German-speaking countries, CHE provides four different rankings, based not on field of study like ARWU or THES-QS but rather on student employability, research performance, excellence and an overall measure. Rankings are generated interactively after the manual entry of a wide set of variables, such as field of study, availability of sports facilities on campus, size of the town, ICT equipment per student, orientation towards research, study fees, price of accommodation and overall reputation.
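
A minimal sketch of what such an interactive, preference-driven ranking might look like follows; the variable names, the data and the interface are assumptions for illustration, not CHE's actual system. The idea is simply that the user supplies the criteria he or she cares about, and only those criteria determine the ordering.

```python
# Hypothetical interactive ranking: filter and sort on user-chosen criteria only.
institutions = [
    {"name": "P", "field": "engineering", "fees": 1200, "research_orientation": 0.8,
     "sport_facilities": True, "ict_per_student": 0.5},
    {"name": "Q", "field": "engineering", "fees": 400, "research_orientation": 0.4,
     "sport_facilities": False, "ict_per_student": 0.9},
    {"name": "R", "field": "law", "fees": 900, "research_orientation": 0.6,
     "sport_facilities": True, "ict_per_student": 0.3},
]

def personal_ranking(data, field, max_fees, sort_by):
    """Keep institutions matching the user's field and budget, order by one chosen criterion."""
    matches = [inst for inst in data if inst["field"] == field and inst["fees"] <= max_fees]
    return sorted(matches, key=lambda inst: -inst[sort_by])

# An engineering student with a budget of 1500 who cares most about ICT provision.
for inst in personal_ranking(institutions, "engineering", 1500, "ict_per_student"):
    print(inst["name"])
```

The contrast with a league table is that two users with different preferences obtain different orderings from the same data.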

Whatever the ranking methods, and notwithstanding the debate between detractors and proponents, rankings are here to stay, and they are set to multiply in number and in complexity. But when scholars sing the praises of their institutions when they perform well in rankings, fall silent when they do not, and in both cases neither bother to investigate the differences between the rankings nor do the slightest basic research on ranking methodologies, they do so at the expense of the very scientific rigour they are entrusted to uphold.

To illustrate this cautionary statement, one need only observe the discrepancy between two different rankings, 4ICU and Webometrics, in their January 2010 releases[44]. 4ICU ranked the University of Cairo (UC) number one in Africa, followed by the University of Cape Town (UCT), while Webometrics ranked UCT number one and UC at position 9. Which one should we believe?

In Namibia, 4ICU places the Polytechnic first in the country and 14th in Africa, with Unam second in the country and 39th in Africa. The Webometrics ranking shows the opposite: the Polytechnic is ranked 37th in Africa while Unam is 21st. This confirms that the results of any ranking system should always be greeted with the necessary caution, and that a pat on the back is not the right response to a good position in a ranking. A more serious approach would be to investigate how universities perform with regard to their mission, economic context, quality of student intake, production process, quality of graduates, expectations from society, staff productivity and level of funding.

Laurent Evrard is the founding Director: Bureau of Computer Services at the Polytechnic of Namibia, a position he has occupied since 1996. He is investigating university ranking systems as part of his doctoral research. He graduated from EPITA in Paris, France, in 1993, with an MSc in Software Engineering applied to Business Computing. Married with two children, he is also a part-time winemaker.

Works Cited

  • Billaut, Jean-Charles, Denis Bouyssou and Philippe Vincke. "Should You Believe in the Shanghai Ranking? An MCDM View." 2006. <http://hal.archives-ouvertes.fr/docs/00/40/39/93/PDF/Shanghai_JCB_DB_PV.pdf>
  • Ioannidis, John P. A., Nikolaos A. Patsopoulos, Fotini K. Kavvoura, Athina Tatsioni, Evangelos Evangelou, Ioanna Kouri, Despina G. Contopoulos-Ioannidis and George Liberopoulos. "International Ranking Systems for Universities and Institutions: A Critical Appraisal." BMC Medicine 5.30 (2007).
  • Levin, Henry M., Dong Wook Jeong and Dongshu Ou. "What Is a World-Class University?" Paper presented at the conference of the Comparative and International Education Society, Honolulu, Hawaii, 16 March 2006.
  • Liu, Nian Cai, Ying Cheng and Li Liu. "Academic Ranking of World Universities Using Scientometrics: A Comment to the 'Fatal Attraction'." Scientometrics 64.1 (2005): 101-109.
  • Moed, H. F. "The Impact-Factors Debate: The ISI's Uses and Limits." Nature 415 (2002): 731-732.
  • Van Raan, A. F. J. "Fatal Attraction: Conceptual and Methodological Problems in the Ranking of Universities by Bibliometric Methods." Scientometrics 62.1 (2005): 133-143.
