Paper 7: Ranking Methodology of Times Higher Education (Baty)
  • Good morning. I’m delighted to be here in such esteemed company. Apologies to Professor Liu – we were both in Beijing last week and he had to listen to a very similar presentation then! So it is confession time. I have spent the last few months admitting that the world rankings my magazine has been publishing for the last six years are not fit for purpose. So we’ve ripped them up and we are starting again with an entirely new ranking system for 2010. Today I’m going to explain why rankings have become so important and what was wrong with our old rankings. I’ll explain what we’re doing to improve the rankings in our exciting new partnership with Thomson Reuters. I hope to convince you all today that we have learned important lessons, and that the major changes we are making to the rankings will make them much more rigorous, balanced, sophisticated and transparent – giving a more accurate picture of the global higher education landscape and providing a legitimate tool for the sector, rather than an annual headline-grabbing curiosity.
  • So first of all, I’ll briefly introduce you to my magazine. This is Times Higher Education – “THE” for short. Founded in 1971 as a tabloid newspaper – previously known as the Times Higher Education Supplement. Re-launched in January 2008 as a weekly magazine for all professionals in higher education. We dropped the “supplement” from our title. Now just THE.
  • We are also a daily website. The website was launched in 1994 – a first in UK publishing. All news, opinions, book reviews and features from the weekly print magazine are published online every Thursday. There is also a dedicated daily higher education news and opinion section. 100,000 unique visitors every week from all around the world. More than one million visits in 24 hours on 8 October 2009 – rankings day.
  • And this is our parent company – TSL Education Ltd, based in Bloomsbury, London. You’ll see from the slide that all our businesses are in education. The day-to-day mission at Times Higher Education, which we’ve been involved in for almost 40 years, is to be an authoritative and respected source of information for the global higher education community. We are accountable to that community, so it is crucial that our rankings stand up to the close scrutiny of academics and university staff. As experts on higher education working for the people in higher education, we are acutely aware that universities are extraordinarily complex organisations, which do many wonderful, life-changing and paradigm-shifting things that simply cannot be measured. We know that universities cannot really be reduced to a crude set of numbers. So why do we rank at all?
  • Well first of all, higher education is rapidly globalising. SLIDE We believe strongly that rankings, despite their limitations, help us understand this process.
  • And rankings do serve a valid purpose. This slide shows a series of quotes from the US-based Institute for Higher Education Policy on the influence of rankings. QUOTE Love them or hate them, rankings are here to stay. As governments start adjusting to a future economy driven by knowledge and innovation – and make creating world-class universities a key national economic policy priority – the rankings can only get more and more influential.
  • So between 2004 and 2009 all the data and analysis for the rankings were supplied by a company called QS. 40 per cent of the overall score was based on “Peer Review”. I’ve now banned that description. It was simply an opinion survey of university staff, asking them to rate the best universities. 10 per cent was based on “recruiter review” – a survey of graduate recruiters. 20 per cent was based on the SSR – the staff-to-student ratio – which was the best proxy for teaching quality we could come up with at the time, based on the idea that smaller class sizes = better teaching. Crude, I know. 20 per cent was a measure of research excellence – looking at the number of times an academic’s published work was cited by others. 5 per cent was based on the proportion of overseas students at an institution. 5 per cent was based on the proportion of overseas staff at an institution. It was an attempt to get balance and breadth (a minimal sketch of how these weights combined into a single score follows below). But we have torn up the rankings and will start again. We have ended the arrangement with QS. They will have no further involvement in our rankings.
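For illustration only: a minimal sketch – not THE’s or QS’s actual code – of how the six weights listed above could combine into a single composite, assuming each indicator has already been normalised to a 0–100 scale. The indicator names and example scores are hypothetical.

```python
# Hypothetical sketch of the 2004-2009 weighted composite. The weights are
# the ones stated above; the example indicator scores are invented and
# assumed to be pre-normalised to a 0-100 scale.

OLD_WEIGHTS = {
    "academic_opinion_survey": 0.40,   # the so-called "peer review"
    "recruiter_review": 0.10,
    "staff_student_ratio": 0.20,       # proxy for teaching quality
    "citations_per_staff": 0.20,       # research excellence
    "international_students": 0.05,
    "international_staff": 0.05,
}

def composite_score(indicator_scores: dict) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(OLD_WEIGHTS[name] * score for name, score in indicator_scores.items())

# Example: one hypothetical institution
example = {
    "academic_opinion_survey": 92.0,
    "recruiter_review": 85.0,
    "staff_student_ratio": 70.0,
    "citations_per_staff": 88.0,
    "international_students": 60.0,
    "international_staff": 55.0,
}
print(round(composite_score(example), 1))  # 82.7
```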
  • For what it’s worth… Some key findings… The UK does very well – the cause of a great deal of suspicion, coming from a UK-based publication! This damaged the credibility of the tables.
  • The biggest theme to emerge from 2009 was the rise of Asia. Rick Levin: “The rapid economic development of Asia since the Second World War… has altered the balance of power in the global economy and hence in geopolitics.” South Korea’s world-class university project provides about £4 billion in funding to 18 universities. China has been intensively developing its policy to build world-class universities for the last decade, with resources concentrated on its elite institutions. The number of Chinese who enrol in college each year has quintupled – rising from 1 million students in 1997 to more than 5.5 million students in 2007. China’s gross enrolment rate for tertiary education stands at 23 per cent, compared to 58 per cent in Japan, 59 per cent in the UK, and 82 per cent in the United States.
  • So the old rankings do help us get a broad picture of the global landscape. But as I said at the start, we no longer believe that they are fit for purpose, so we have scrapped them. Why? They have attracted a lot of criticism.
  • There’s plenty more where that came from
  • But by far the most stinging attack came from Andrew Oswald. In 2007, he mocked the pecking order of that year’s rankings: Oxford and Cambridge were joint second. Stanford University was 19th – despite having “garnered three times as many Nobel Prizes over the past two decades as the universities of Oxford and Cambridge did combined”. He said we should be unhappy with ourselves. And believe me, we were. A new editor arrived in 2008; she put me in charge of the rankings in 2009. We had a review and did not like what we saw.
  • We convened a meeting of our editorial board, including: Drummond Bone, HE consultant and advisor to the UK government; Bahram Bekhradnia, director of the Higher Education Policy Institute; Malcolm Grant, Provost of UCL; Simon Marginson, Professor of Higher Education, University of Melbourne; and Philip Altbach, director, Center for International Higher Education, Boston College.
  • Research by Michael Bastedo from the University of Michigan found that the single most important factor in achieving a good reputation score was the previous year’s ranking position. The survey is also open to manipulation – the US website Inside Higher Ed reported last year on a senior administrator who revealed that her colleagues were instructed routinely to give low ratings to all programmes other than their own institution’s. But reputation is crucial in a globalised and competitive HE world. It also helps get a sense of some of the important but intangible things. And it is a measure that people want.
  • So it is a good measure to have, but only if the measurement is done properly. And it wasn’t done properly in my view. Look at those figures. Tiny numbers. Shocking and not good enough.
  • The other big problem we had was with the research excellence measure. By simply measuring the volume of citations against staff numbers, the old rankings took no account of the dramatically different citation habits between disciplines. The Med School advantage. The LSE example – could be true of Stanford too. It was 16th in the 2009 rankings.
  • Do SSRs really tell us much about actual teaching quality? Waiters in a restaurant? Should it be worth 20 per cent? Hard to verify. Marny Scully, University of Toronto, demonstrated how a ratio of anywhere from 6:1 to 39:1 can be achieved from the same data (a hypothetical illustration of this follows below). No doubt manipulation goes on. International student score. A university’s ability to attract students from around the world is clearly important but... THE editorial board recently told us tales… scholarships/ busloads/ etc. Bums on seats alone can’t tell us much. Same problems looking at the number of staff attracted from overseas. Awful story. University of Malaya: 89th in 2004, 169th the next year. Data problem.
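To illustrate Scully’s point, here is a purely hypothetical sketch of how definitional choices alone can swing a staff-student ratio between roughly 6:1 and 39:1. All counts are invented; they are not figures for Toronto or any real institution.

```python
# Invented counts for one hypothetical institution. The "ratio" depends
# entirely on who you choose to count on each side of the division.

students_headcount = 39_000   # every registered student
students_fte = 30_000         # full-time-equivalent student load
academic_staff_all = 5_000    # everyone on an academic contract (headcount)
teaching_staff_fte = 1_000    # FTE staff who actually teach

# Most flattering definition: all academic staff vs FTE students
print(students_fte / academic_staff_all)        # 6.0  -> "6:1"

# Least flattering definition: teaching-only FTE staff vs student headcount
print(students_headcount / teaching_staff_fte)  # 39.0 -> "39:1"
```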
  • But despite all these clear weaknesses with the rankings, they became hugely influential. Back to that IHEP report I mentioned earlier…
  • I think that the key point from this is “Refined and improved”.
  • So here we are with a clean slate. This is what the editor, Ann Mroz, has said…
  • She has made a firm and bold commitment, and has set the bar very high indeed. Read…
  • Thomson Reuters is the world’s leading research data specialist – proud to have them as our new data supplier
  • The man behind the rankings on the Thomson Reuters side. Evidence Ltd. Government advisor. Framework 7, Australia, REF etc etc. Couldn’t ask for a more informed partner.
  • Thomson Reuters has invested vast resources into creating its Global Institutional Profiles Project, with full-time staff stationed around the world to build highly sophisticated, data-driven profiles of more than 1,000 universities. That project is not a ranking project, but the database it builds will be the foundation stone of our rankings.
  • Attempt at a serious piece of social science. Key points: SLIDE No scatter gun. No sign-up. Invitation only. Teaching and research. First ever global teaching reputation measure. We have just completed it. Delighted with the results. I can reveal today that we achieved 13,388 responses – more than four times as many as QS achieved in 2009. And high-quality responses.
  • Really pleased with the results: Large numbers. But size is not the only thing that counts. Good mix of experience – teaching and research. Good geographical spread. Particularly pleased to see such a high response rate from Asia and the Middle East. Good balance of subject areas. Further statistical adjustment will be applied to iron out any response biases (one possible approach is sketched below).
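The note does not say which statistical method will be used, so the following is only an assumed illustration of one common approach: reweighting responses so that the achieved regional mix matches a target mix. The regions and proportions are invented.

```python
# Hypothetical post-stratification-style reweighting of survey responses.
# Target mix: the regional spread you want the survey to reflect.
# Achieved mix: the regional spread you actually got. Both are invented.

target_mix   = {"Americas": 0.30, "Europe": 0.35, "Asia-Pacific": 0.30, "Middle East/Africa": 0.05}
achieved_mix = {"Americas": 0.40, "Europe": 0.30, "Asia-Pacific": 0.25, "Middle East/Africa": 0.05}

# Respondents from over-represented regions count a little less,
# respondents from under-represented regions a little more.
weights = {region: target_mix[region] / achieved_mix[region] for region in target_mix}
print(weights)  # e.g. Americas -> 0.75, Europe -> ~1.17, Asia-Pacific -> 1.2
```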
  • Citations data. TR owns the Web of Science – an excellent resource. But it is not just the size and depth of the citations database; it’s the expertise and understanding of the data that comes with working directly with the data owners, Thomson Reuters. Not just buying in a lump of data! They also have the expertise to understand it. It will be normalised (a sketch of one common normalisation approach follows below).
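The note does not spell out the normalisation method, but a widely used approach is to compare each paper’s citations with the world average for its own field (and, in practice, its publication year). A minimal sketch under that assumption, with invented numbers:

```python
# Field-normalised citation impact, sketched with invented data.
# A value of 1.0 means "cited exactly as much as the world average
# for papers in the same field".

papers = [
    # (field, citations received by one paper)
    ("clinical medicine", 40),
    ("clinical medicine", 25),
    ("economics", 6),
    ("economics", 4),
]

# Hypothetical world-average citations per paper in each field
world_average = {"clinical medicine": 30.0, "economics": 4.0}

# Each paper is compared with its own field's baseline, so a medical
# school's raw citation volume no longer swamps everything else.
normalised = [cites / world_average[field] for field, cites in papers]
print(sum(normalised) / len(normalised))  # mean normalised citation impact
```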
  • Previous subject tables were a mere afterthought – just one measure used, the reputation survey. And we’ve seen how weak that was. Now all built from subject level upwards. Reputation survey based on local, subject-specific knowledge. Fine grain: Agriculture, not Life Science. Citations data normalised for discipline etc. The mix and balance of indicators we use can vary between subjects – e.g. arts and humanities, where we have less confidence in the citations data.
  • And here are the data points we are collecting directly from the universities themselves. So the rankings from 2010 will be made up of three key data sets: reputation survey results; bibliometric data already held by Thomson Reuters, and data collected by Thomson Reuters from individual institutions. And now over to the hard part – how do we pull this all together into a final methodology?
  • So, just a week or two ago, we put out a draft methodology for consultation. Here it is in broad terms. 13 data points compared to just six in the old QS rankings. Four broad indicators – research, economic activity and innovation, international diversity, and a broad “institutional indicators” category – designed to capture things like teaching quality. The exact weightings for each of the 13 indicators, and the final weightings for the four broad indicators, are yet to be determined, but you’ll see from the graph that we do think it is appropriate to give a heavier weight to the research indicators, as there is much wider acceptance regarding the validity and suitability of the bibliometric data we have. Reputation measures will be split into two categories – a research reputation measure to be used in the research category, and, for the first time ever, a teaching reputation measure in the “institutional” category. At present, we are planning that both reputation indicators together make up no more than 20-25 per cent of the overall ranking score – compared to 50 per cent in the QS tables. (A sketch of how such a category-weighted composite could work follows below.)
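Because the weightings were still out for consultation, the category weights in this sketch are placeholders, not the published methodology. It simply shows how indicators rolled up into four broad categories could combine into an overall score.

```python
# Placeholder category weights for the 2010 draft methodology; the real
# weightings were undecided at the time of this talk. Each category score
# (0-100) is itself assumed to be an aggregate of its underlying indicators.

CATEGORY_WEIGHTS = {
    "research": 0.55,                        # heavier, as the slide suggests
    "institutional": 0.25,                   # includes the new teaching reputation measure
    "economic_activity_innovation": 0.10,
    "international_diversity": 0.10,
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of 0-100 broad-category scores."""
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

# Hypothetical institution
print(overall_score({
    "research": 80.0,
    "institutional": 70.0,
    "economic_activity_innovation": 60.0,
    "international_diversity": 65.0,
}))  # 74.0
```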
  • So that draft methodology has gone to a group of around 50 expert advisors – the editorial board and a wider Thomson Reuters platform group. The initial feedback has been very positive. Here are two very early responses from Australia. But of course we still have a lot of work to do.
  • But to make sure we’re as accountable as it is possible to be, we need constant criticism and input. Only with the engagement of the higher education sector will we achieve a tool that is as rigorous and as transparent and as useful as the sector needs and deserves. So please use the sites and tools above to make sure you have your say and tell us what you think.
  • ×