
"New Ratings of Humanities Journals Do More Than Rank — They Rankle" by Jennifer Howard

The Chronicle of Higher Education (The Faculty), October 10, 2008

Monday, October 13, 2008, posted by Laurence

http://chronicle.com/weekly/v55/i07/07a01001.htm

A large-scale, multinational attempt in Europe to rank humanities journals has set off a revolt. In a protest letter, some journal editors have called it "a dangerous and misguided exercise." The project has also started a drumbeat of alarm in this country, as U.S.-based scholars begin to grasp the implications for their own work and the journals they edit.

The ranking project, known as the European Reference Index for the Humanities, or ERIH, is the brainchild of the European Science Foundation, which brings together research agencies from many countries. It grew from a desire to showcase high-quality research in Europe. Panels of four to six scholars, appointed by a steering committee, compiled initial lists of journals to be classified in 15 fields. Each journal was assigned to a category — A, B, or C — depending on its reputation and international reach. (See box below.)

The denunciation of the project as dangerous appears in an open letter signed by more than 60 editors of journals devoted to the history of science, technology, and medicine. They also ask to have their journals removed from the rankings. The letter will be published in the first 2009 issues of those journals, which include Centaurus, Perspectives on Science, Isis, Annals of Science, and the British Journal for the History of Science.

"We now confront a situation in which our own research work is being subjected to putatively precise accountancy by arbitrary and unaccountable agencies," the editors write. They call the project "entirely defective in conception and execution," and argue that it could unfairly penalize good journals and even affect professors’ tenure applications.


Raising Profiles

The drive to classify has roots in scientific culture but is making inroads in the humanities — or at least among institutions that control purse strings and want evidence that their money is being well spent. "Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking," says the ESF’s Web site.

National agencies like Britain’s Arts and Humanities Research Council, which is roughly the equivalent of the National Endowment for the Humanities in the United States, act as conduits for information and monitor the project to make sure their constituents’ perspectives are adequately represented. But ultimate control over ERIH lies with the European Science Foundation’s Standing Committee for the Humanities, composed of more than 30 scholars from across Europe.

Their initial lists cover journals of anthropology, archaeology, art and art history, classical studies, gender studies, history, history and philosophy of science, linguistics, literature, music and musicology, Oriental studies and African studies, pedagogy and educational research, philosophy, psychology, religious studies, and theology.

ERIH’s guidelines specify that all listed journals be "good scientific journals," which means that they select articles through "an objective review process" and "fulfill basic publishing standards" like providing bibliographic information for citations.

In the initial lists, journals are assigned to A, B, and C categories. Those do not equal grades, says Michael J. Worton, vice provost of University College London and a professor of French literature, who serves on the project’s steering committee.

"What we want to do is establish a classification system and not a grading system," he told The Chronicle. "Category C is, in a sense, the journals we’re most interested in," he said. These are more-regional or localized publications that contain good work but could use a boost, perhaps by bringing their publishing standards more in line with those of A- and B-level journals.

ERIH’s organizers hope "to engage with editorial boards and publishers in order to raise the threshold standards of editorial practices," Mr. Worton said.

Not Fixable

Still, those opposed to rankings argue that a project like ERIH distorts the way humanistic scholarship proceeds. They don’t buy the idea that A, B, and C do not equal grades. They worry that a ranking system can force individual scholars to choose between publishing in a category-A journal — which might earn bureaucratic brownie points for themselves or their institutions — or in a B- or C-level journal that is known to fellow specialists as the best place to go for cutting-edge work.

They also worry that a project like ERIH actually creates the hierarchies it claims only to be describing.

"Ranking journals is an activity that fundamentally misunderstands how our field works," said Simon Schaffer, editor of the British Journal for the History of Science and a professor of the history of science at the University of Cambridge. Mr. Schaffer’s journal received an A classification from ERIH, but that hasn’t won him over. "This is not a project that can be fixed," he said. "This is a project that should be stopped."

Mr. Schaffer, one of the prime movers behind the letter of protest, said he had no trouble finding allies. "What is very striking is that with very, very few exceptions, the people we contacted enthusiastically signed up" to support the letter, he said.

He calls the journal listings "an assault on peer review." Tenure-and-promotion committees or university administrators may assume that if an article appears in a category-A journal, it must be A-level work. As he put it, under such a system, "you don’t need to know about the quality of a particular essay in a particular journal. You can substitute that with the letter grade of the journal in which it appears. And that seems to me nonsense."

Historians of science are well acquainted with data-driven measures of value — and their limits. "Absolutely crucial work often appears in marginal or small-circulation journals," Mr. Schaffer said. Gregor Mendel, the father of modern genetics, "would have done really, really badly" if judged by a rankings system. "He kept on publishing in C-grade journals," Mr. Schaffer said, "and that would have been a bummer for him."

Mr. Worton said that he understands the project has generated a lot of anxiety, but that it is misplaced. Organizers are taking steps, he said, to keep the rankings from being misused or misinterpreted: "We must as far as we can prevent perverse behavior." The ERIH Web site carries a caution, in boldface, "against using the lists as the only basis for assessment of individual candidates for positions or promotions or of applicants for research grants."

Mr. Worton also emphasized that ERIH should be considered "an ongoing and a dynamic project." In the second phase, which has begun, new experts will be brought in to "refresh" the original disciplinary panels and revise the initial lists as needed. Editors may also fill out feedback forms and make the case for their journals if they believe they have been misassigned. (Protesting editors cannot withdraw their journals from ERIH, however, since it does not rely on their permissions.)

"People may not like the results," Mr. Worton said. "They may not like the process. But the process is going to go ahead, because there is so much commitment in Europe to its going ahead."


Penalizing Creativity

That commitment — along with protests — is echoed elsewhere, as humanities scholars confront bureaucratic attempts to take their measure.

Australia has been conducting its own assessment exercise across all disciplines, including the humanities. Called Excellence in Research for Australia, or ERA, it’s run by the Australian Research Council, an arm of the government devoted to promoting — and financing — research. A wide-ranging attempt to gather "indicators of research quality," the Australian project also ranks journals. It assigns them to four categories — A*, A, B, and C — based on information that includes data gathered from scholarly associations and how widely cited the journals are.

Elizabeth McMahon, president of the Association for the Study of Australian Literature, decided reluctantly that her group would participate in the process. A senior lecturer in English, media, and performing arts at the University of New South Wales, Ms. McMahon also edits Southerly, the country’s oldest literary journal.

The major problem, she says, is that all but one of the 20 journals devoted to Australian literature publish creative work alongside peer-reviewed scholarship. And creative work, which is not peer reviewed in the classic sense, appeared to be a liability under ERA. All the journals that feature creative work received lower rankings. Ms. McMahon said that "if we were to take these measures at the letter, we would be better off to get rid of all the creative material and just keep the peer-reviewed material."

But that would fly in the face of the field’s distinctive 30-year history. The journals "were born of this desire to promote Australian literature, which had not been promoted previously," Ms. McMahon said.

Newer fields also suffered in comparison with long-established fields like Renaissance studies, Ms. McMahon told The Chronicle. "Film studies, media studies — they were decimated in the metric because their journals aren’t as old as the literary journals. None of the film journals received a high rating, which is extraordinary."

Ms. McMahon blamed a combination of out-of-date information and an ideological mind-set that favors more-traditional fields. "Do you go to a journal that has the best ranking or to one that actually has more authority?" she asked. "To my mind, there’s no question. You go to the journal that all your peers know is the highest-ranking journal" no matter what the official assessment says.

Coming to America?

So far nobody in the United States has tried to set in motion a large-scale ranking system of humanities journals, but editors here have begun to take note of what’s happening overseas — and to weigh the implications of rankings for homegrown research.

"We haven’t had ‘No Editor Left Behind’ yet," said Craig Howes, director of the Center for Biographical Research at the University of Hawaii-Manoa and a co-editor of Biography: An Interdisciplinary Quarterly. "But it’s happening in all the other countries. I can watch the lights go out."

American scholars, even if they are not aware of it, are already involved, Mr. Howes said. Many of the ERIH-listed journals are published in the United States or have U.S. contributors and editorial-board members. Scholarly work and the journals in which it appears transcend national boundaries. "The rankings systems in these various countries never asked us whether we wanted to be ranked or not," Mr. Howes said. "They’re going to do it anyway."

"It’s now becoming a global phenomenon," said Bonnie Wheeler, president of the Council of Editors of Learned Journals. Ms. Wheeler, a professor of English and medieval studies at Southern Methodist University, edits the journal Arthuriana. The general feeling in her group, she says, is that ranking humanities journals according to external standards of value is "intellectually irresponsible," but "since accreditation agencies like to be able to count things and grade them, I expect we’ll see the phenomenon here soon."

This is, after all, the culture that produced the Spellings Commission’s report on accountability in education and the U.S. News & World Report rankings of colleges. Journal editors would do well to "think these things through," Ms. Wheeler said, "before we get hit in the teeth by a directive from an underling of Margaret Spellings."

Section : The Faculty
Volume 55, Issue 7, Page A10
Copyright © 2008 by The Chronicle of Higher Education