Thursday, July 29, 2010

An Epic?

Times Higher Education (THE) has announced the completion of the collection of data for its forthcoming World University Rankings:
An epic effort by our world university rankings data supplier, Thomson Reuters, to collect information from hundreds of universities around the world concluded successfully last week.
I am not sure whether "epic" is the right word. The number of universities in the database does not seem much higher than that for which QS has collected information. The data does apparently include some information that QS has ignored such as institutional income and research income but has not included items counted by QS such as total student numbers or the number of postgraduate students other than doctoral candidates. Meanwhile, the number of respondents to the opinion survey has fallen far short of the original target of 25,000, even with a bit of topping up, like QS, from the Mardev mailing lists.

A proposal to rank universities by disciplines as specific as Agriculture has been dropped. Now, THE will rank universities in six disciplinary clusters, up from five in the THE-QS and QS rankings.

THE also gives some idea of how errors will be detected. That might be an improvement, although I suspect that in many countries third-party sources may not be as reliable as THE thinks.

One thing that is not mentioned is whether any universities have refused to participate in the data collection and what THE will do if there are any abstentions.

Monday, July 26, 2010

Discrimination In Top US Colleges

Russell K. Nieli in Minding the Campus discusses a study by Thomas Espenshade and Alexandria Radford that details the extent and depth of the racial and social discrimination practiced by America's top colleges.

"Consistent with other studies, though in much greater detail, Espenshade and Radford show the substantial admissions boost, particularly at the private colleges in their study, which Hispanic students get over whites, and the enormous advantage over whites given to blacks. They also show how Asians must do substantially better than whites in order to reap the same probabilities of acceptance to these same highly competitive private colleges. On an "other things equal basis," where adjustments are made for a variety of background factors, being Hispanic conferred an admissions boost over being white (for those who applied in 1997) equivalent to 130 SAT points (out of 1600), while being black rather than white conferred a 310 SAT point advantage. Asians, however, suffered an admissions penalty compared to whites equivalent to 140 SAT points.

The box students checked off on the racial question on their application was thus shown to have an extraordinary effect on a student's chances of gaining admission to the highly competitive private schools in the NSCE database. To have the same chances of gaining admission as a black student with an SAT score of 1100, an Hispanic student otherwise equally matched in background characteristics would have to have a 1230, a white student a 1410, and an Asian student a 1550. Here the Espenshade/Radford results are consistent with other studies, including those of William Bowen and Derek Bok in their book The Shape of the River, though they go beyond this influential study in showing both the substantial Hispanic admissions advantage and the huge admissions penalty suffered by Asian applicants. Although all highly competitive colleges and universities will deny that they have racial quotas -- either minimum quotas or ceiling quotas -- the huge boosts they give to the lower-achieving black and Hispanic applicants, and the admissions penalties they extract from their higher-achieving Asian applicants, clearly suggest otherwise."

The advantage accorded to non-Asian minority students, even those whose claim to moral reparation for generations of slavery or dispossession is questionable, is well known. What is surprising about Espenshade and Radford's study is the extent of the discrimination against poor, rural, and working-class whites.

In part, this is a consequence of the indicators used by American ranking organizations. Selective colleges are apparently reluctant to offer places to students who might not take up an offer for financial reasons since this would push down their acceptance rates and yield scores.
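To see why an unlikely-to-enrol admit hurts a college on both metrics, here is a minimal arithmetic sketch. All the numbers are invented for illustration; the two ratios are the standard definitions of acceptance rate and yield used by US ranking organizations.

```python
# Acceptance rate = admits / applicants (lower looks more selective);
# yield = enrolled / admits (higher looks more desirable).
# All figures below are hypothetical.

applicants = 10_000
admits = 1_000
enrolled = 600

acceptance_rate = admits / applicants   # 0.10
yield_rate = enrolled / admits          # 0.60

# Now admit 100 more students who decline for financial reasons:
admits2 = admits + 100
enrolled2 = enrolled                    # none of the extra admits enrol

print(admits2 / applicants)   # 0.11 -- acceptance rate worsens (rises)
print(enrolled2 / admits2)    # ~0.545 -- yield worsens (falls)
```

Both indicators move the wrong way from the college's point of view, which is the incentive the paragraph above describes.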

But there is more. Espenshade and Radford found that less affluent whites were dramatically less likely to be offered a place at a competitive private college even when SAT scores, a reasonable proxy for general intelligence, and high school grades were controlled for. In addition, they found evidence of serious discrimination against students who were involved in "incorrect" activities such as ROTC and Future Farmers of America, especially those holding leadership positions. Apparently "feeding the homeless" will boost one's chances of getting into a top private college if it means doling out soup in between starring in the school play and taking AP English classes, but not if it means showing an interest in growing the stuff that the homeless eat.

As cognitive skills become increasingly irrelevant to admission into America's best schools, it seems almost certain that US higher education will be less and less able to compete with those countries that continue to recruit those students most capable of demanding college-level work.





Wednesday, July 21, 2010

New Webometrics Rankings

The new Webometrics rankings are out.
There are few surprises. Here are the top universities in various categories.

World: Harvard
North America: Harvard
Latin America: Universidad Nacional Autonoma de Mexico
Europe: Cambridge
Central and Eastern Europe: Charles University, Prague
Asia: Tokyo
South East Asia: National University of Singapore
South Asia: Indian Institute of Technology Bombay
Arab World: King Saud University
Oceania: Australian National University
Africa: Cape Town

One interesting feature of the Arab World rankings is that universities in the Palestinian territories do very well in comparison with many in more affluent countries. Would anyone like to suggest an explanation?




Tuesday, July 20, 2010

Global-Rankings Ping Pong

Ben Wildavsky has an article in the Chronicle of Higher Education about the competition between Times Higher Education and QS over this year's university rankings. It is actually called Global-Rankings Smackdown! but the smackdown bit is rather exaggerated and the exclamation mark is unnecessary. There are some well informed and incisive comments on recent developments in international university ranking, including the divorce between THE and QS.

He concludes:

Will a redemption narrative help Times Higher earn credibility for its new rankings? Perhaps. It should certainly be applauded for its openness to criticism, and for all it is doing to inform the public about its next moves in what its editor characterizes, with appropriate caution, as “a decent first step” at improvement. But ultimately, debating tactics notwithstanding, the global league tables will be judged on their merits. As the wars over league tables continue, the next rankings season should be well worth watching.


I am not entirely sure about how much THE, or more accurately their new partners, Thomson Reuters, are doing to inform the public about their methods. At the moment there are some things we know about the QS survey that we do not know about Thomson Reuters' -- the number of forms sent out, the response rate, the number of responses from individual countries. Still, all that could change within a few weeks, and it did take QS a couple of years before they gave out anything beyond the bare minimum about their survey.
The Avalanche

A short article in the Chronicle of Higher Education, by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble, 'We Must Stop the Avalanche of Low-Quality Research', calls for a halt to the seemingly inexorable rise in the production of uncited and unread scholarly and scientific papers.

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.


Unfortunately, now that authorship of an ISI-indexed article has become the qualification for participation in the reputational survey section of the THE World University Rankings, I suspect that universities will go on encouraging their staff to produce more and more articles of questionable quality. Or perhaps we should say more and more email addresses in the ISI database.

Friday, July 16, 2010

An Unwelcome Message

Phil Baty, who is in charge of world university rankings at Times Higher Education, writes about an email that he has received.


I was disturbed by an email that dropped into my in-box late last month.

No, it was not another offer of cheap Viagra, or an announcement that I had won an overseas lottery. It was more unsettling than that.

"Dear academic," it began. The greeting alone was a surprise, given that I am a journalist with little more than a bachelor's degree by way of academic credentials.

But my unease grew with each line of the message. The email was from a major education information company inviting me to take part in an online survey that would be used to create a university ranking.

It said that my role as a leading educationalist combined with my subject focus made my opinion very important. It even offered to enter me into a prize draw if I passed on my great wisdom and spent 10 minutes filling in the form.

It would be amusing if the implications were not so serious. As the email claimed, the audience for the company's annual exercise is in the millions, and it is clear that university league tables in various forms have become a very big business with wide influence.

Any organisation, such as Times Higher Education, that seeks to create rankings must accept its responsibility to conduct thorough research and to employ sound data.

There is a responsibility on companies doing such surveys that academics are selected carefully by discipline, and by country and continent if appropriate. If compilers want universities and students to see their league table as robust the onus is on them to take a rigorous approach. When rankings can make or break a university's reputation, or influence multimillion-pound strategic decisions, anything less will simply not do.

I am sure that anyone reading this blog has received the message by now and knows that the mysterious sender is not Voldemort but QS, who are now producing their own university rankings independently of THE.

The sending of the message and form to Phil Baty actually represents an improvement for the QS survey. Even without a doctorate, he is probably better qualified to evaluate universities than most subscribers to the World Scientific mailing list, of whom nearly 200,000 receive the form every year. Subscription requires nothing more than the ability to click a mouse a few times.

I wonder though whether those who completed the THE survey form sent out by Thomson Reuters to authors who have published in ISI indexed journals are significantly better qualified. I have heard that there are many parts of the world where the granting of co-authorship of research papers is simply a perquisite of seniority within a department and nomination as corresponding author, the one who gets to go to conferences and do a bit of shopping, is decided partly or largely by political pressures.

It may be that the time has come for a greater variety of reputational surveys to be conducted. There is certainly room for a QS-style survey, essentially open to anyone who, for whatever reason, is interested. After all, that is a constituency that deserves some consideration. But equally, perhaps more so, we need a survey of research excellence that targets demonstrably competent researchers. The ability to be nominated as corresponding author -- I assume that is the one whose email address is entered in the ISI archives -- of a paper once in an academic career might not be sufficient evidence of competence to evaluate university research and teaching. There is a case for a survey based on a more rigorous working definition of research competence, such as inclusion in the ISI list of highly cited researchers. Another possibility might be to survey editors of academic journals. Response rates could be boosted by publishing the names of the journals that took part. There is also an obvious niche for a student-based survey of teaching.

Anyway, Phil, you might as well do the survey. There are many people less knowledgeable than you filling out the form and, for that matter, the one for THE. You might even be the one who wins the BlackBerry.

Thursday, July 08, 2010

Presentation by Phil Baty

A presentation by Phil Baty of Times Higher Education at the ISTIC meeting in Beijing reviewed the background of the now defunct THES-QS World University Rankings and the rationale for the development of a new ranking system.

There are some quotations that highlight familiar complaints about the THE-QS rankings:


“Results have been highly volatile. There have been many sharp rises and falls… Fudan in China has oscillated between 72 and 195…” Simon Marginson, University of Melbourne.


“Most people think that the main problem with the rankings is the opaque way it constructs its sample for its reputational rankings”. Alex Usher, vice president of Educational Policy Institute, US.


“The logic behind the selection of the indicators appears obscure”. Christopher Hood, Oxford University.



Baty also indicates several problems with the "peer review", citations, faculty student ratio and internationalisation indicators.



All of this is very sound. But it is not yet certain how much of an improvement the new THE rankings will be.



THE will now obtain citations and publication data from Thomson Reuters rather than Scopus. The Thomson Reuters data are based on the ISI indexes, which are somewhat more selective than the Scopus database. There is, however, a great deal of overlap, and simply using ISI data rather than Scopus will not in itself make very much difference, except perhaps that there will be a somewhat greater bias towards English-using researchers and the research output that is measured may be of somewhat higher quality. We should also remember that between 2004 and 2006 the THE-QS citations data were collected by the very same Jonathon Adams who is now overseeing the development of the new THE rankings.



Some of the "confirmed improvements" noted by Baty are certainly that. Normalising citation scores across disciplinary groups to take account of varying patterns of publication and citation is long overdue. The presentation of information about various types of income will, if the raw data are made publicly available, make it possible to evaluate universities in terms of value for money.
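The idea behind field normalization can be sketched in a few lines: divide each paper's citation count by the world average for papers in the same discipline, then average those ratios. The baselines and paper counts below are invented for illustration (real systems also normalize by publication year and document type).

```python
# Field-normalized citation impact: a biology paper and a mathematics paper
# are each judged against their own field's citation norms, so high-citation
# disciplines do not dominate. All figures are hypothetical.

# World average citations per paper, by discipline (invented values).
world_baselines = {"biology": 12.0, "mathematics": 2.5, "engineering": 5.0}

# One university's papers as (discipline, citation count) -- invented data.
papers = [
    ("biology", 18),      # 18 / 12.0 = 1.5x the world average
    ("mathematics", 5),   # 5 / 2.5  = 2.0x
    ("engineering", 4),   # 4 / 5.0  = 0.8x
]

def normalized_impact(papers, baselines):
    """Mean of per-paper citation ratios against field baselines."""
    ratios = [cites / baselines[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

score = normalized_impact(papers, world_baselines)
print(round(score, 2))  # (1.5 + 2.0 + 0.8) / 3, about 1.43
```

A score above 1.0 means the institution's papers are cited more than the world average for their fields; note that the mathematics paper, with only 5 citations, contributes the highest ratio.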


In some ways the reputational survey may be better than the QS "peer review", but exactly how much better is not yet clear. Baty says that only published researchers were asked to take part, but this apparently could mean no more than being listed as the corresponding author for an article once in a lifetime. No doubt this yields a better qualified group of respondents than one made up of those with the energy to sign up with World Scientific, but is it really significantly better?


Also, there is much that we have not been told about the reputational survey. We know the total number of respondents, which was much lower than the original target, but not the response rate. Nor has there been any indication of the number of responses from individual countries. This is particularly irksome since rumour and subjective impression suggest that many countries have been neglected by the recently closed THE survey.


The methodology still appears in need of refinement. Research income of various kinds appears four times as an indicator or part of an indicator: research income from industry as the sole indicator in the Economic Activity/Innovation category; as part of overall research income and as part of research income from industry and public sources in Research Indicators; and as part of total institutional income in Institutional Indicators. This is a bit messy.

There is still time for THE to produce an improved ranking system. Let's hope they can do it.

Monday, July 05, 2010

Presentation by Jonathon Adams

A summary of progress so far on Thomson Reuters' Global Institutional Profiles Project can be found here.