Impact on humanities: Researchers must take a stand now or be judged and rewarded as salesmen - The Times Literary Supplement, 13 November 2009
Thursday 19 November 2009
What follows is, I assure you, neither a satire nor a parody, though I suppose it might seem laughable were it not so serious.
For more than two decades, the distribution of that element in the funding of British universities that supports research has been determined by the outcome of successive “Research Assessment Exercises” (RAEs). These have, roughly speaking, required all university departments to submit evidence of their research over the relevant period (usually five years). The evidence has chiefly consisted of a number of publications per member of staff, plus information about the “research environment” of the department (measures for encouraging and supporting research, including for PhD students) and evidence of “esteem” (forms of scholarly recognition, professional roles and honours). All this material has been assessed by panels of senior academics covering particular disciplines or groups of cognate disciplines, with “scores” awarded on the basis of a fairly simple formula, greatest weight being given to the quality of the submitted publications. The highest-scoring departments then receive a greater share of the funding; inevitably, the scores are also used to generate league tables.
In practice, the exercise has been flawed in various obvious ways as well as hugely time-consuming. In response to cumulative criticism, the government announced a couple of years ago that it was considering discontinuing the exercise or replacing it with something much simpler. In the event, it was decided that no better way to determine the distribution of this funding could be found, and so the exercise would have to carry on, albeit in modified form. To save face, it was renamed the Research Excellence Framework (REF). The guidelines spelling out how it will operate have just been issued by the Higher Education Funding Council for England (HEFCE; the other parts of the UK have matching councils). The document declares that certain aspects of the process have yet to be settled, and so it invites responses from universities (and other interested parties) during a brief “consultation period”.
In many respects, the REF will be quite like the RAE, and will require similar kinds of evidence in the submissions (selected publications, information about research environment, etc). But one very significant new element has been introduced. In this exercise, approximately 25 per cent of the rating (the exact proportion is yet to be confirmed) will be allocated for “impact”. The premiss is that research must “achieve demonstrable benefits to the wider economy and society”. The guidelines make clear that “impact” does not include “intellectual influence” on the work of other scholars and does not include influence on the “content” of teaching. It has to be impact which is “outside” academia, on other “research users” (and assessment panels will now include, alongside senior academics, “a wider range of users”). Moreover, this impact must be the outcome of a university department’s own “efforts to exploit or apply the research findings”: it cannot claim credit for the ways other people may happen to have made use of those “findings”.
As always, the reality behind the abstractions which make up the main guidelines emerges more clearly from the illustrative details. The paragraphs about “impact indicators” give some sense of what is involved. The document specifies that some indicators relate to “outcomes (for example, improved health outcomes or growth in business revenue)”; other indicators show that the research in question “has value to user communities (such as research income)”; while still others provide “clear evidence of progress towards positive outcomes (such as the take-up or application of new products, policy advice, medical interventions, and so on)”. The document offers a “menu” of “impact indicators” that will be accepted: it runs to thirty-seven bullet points. Nearly all of these refer to “creating new businesses”, “commercialising new products or processes”, attracting “R&D investment from global business”, informing “public policy-making” or improving “public services”, improving “patient care or health outcomes”, and improving “social welfare, social cohesion or national security” (a particularly bizarre grouping). Only five of the bullet points are grouped under the heading “Cultural enrichment”. These include such things as “increased levels of public engagement with science and research (for example, as measured by surveys)” and “changes to public attitudes to science (for example, as measured by surveys)”. The final bullet point is headed “Other quality of life benefits”: in this case, uniquely, no examples are provided. The one line under this heading simply says “Please suggest what might also be included in this list”.
The priorities indicated by these phrases recur throughout the document. For example, in explaining how the “impact profile” of each department will be ranked as “four star”, “three star”, and so on, it provides “draft definitions of levels for the impact sub-profiles”. That for “three star” reads: “highly innovative (but not quite ground-breaking) impacts such as new products or processes, relevant to several situations have been demonstrated”. (Sentence-construction is not a forte of the document.) And there is also a rather chilling paragraph which reads: “Concerns have been raised about the indirect route through which research in some fields leads to social or economic impact; that is, by influencing other disciplines that are ‘closer to market’ (for example, research in mathematics could influence engineering research that in turn has an economic impact). We intend to develop an approach that will give due credit for this”.
Clearly, the authors of this document, struggling to give expression to the will of their political masters, are chiefly thinking of economic, medical, and policy “impacts”, and they chiefly have in mind, therefore, those scientific, medical, technological, and social scientific disciplines that are, as the quoted phrase puts it, “closer to market”.
I shall not presume to speak for my colleagues in those disciplines, though I understand that they have the gravest misgivings about the distorting effect of this exercise on research in those fields. But it is a premiss of the exercise that the requirements and the criteria shall be uniform across the whole span of academic disciplines (it is worth questioning why this has to be so, but I shall leave that aside for the moment). What I want to address here is the potentially disastrous impact of the “impact” requirement on the humanities.
As the phrases quoted above make clear, the guidelines explicitly exclude the kinds of impact generally considered of most immediate relevance to work in the humanities – namely, influence on the work of other scholars and influence on the content of teaching. (Those are said to be covered by the assessment of the publications themselves.) For the purposes of this part of the exercise, “impact” means “on research users outside universities”. General readers do not appear to count as such “research users”. So, 25 per cent of the assessment of the “excellence” of research in the humanities in British universities will depend on the evidence provided of “impact” understood in a rather particular way. What will this mean in practice?
Let us take a hypothetical case. Let us assume that I have a colleague at another university (not all colleagues are in one’s own department, despite the league-table competitiveness of these assessment exercises) who is a leading expert on Victorian poetry, and that over a number of years she works on a critical study of what we might call a three-star Victorian poet (“highly innovative but not quite groundbreaking”). The book is hailed by several expert reviewers as the best on the topic: it draws on deep familiarity not just with Victorian poetry, but with other kinds of poetry; it integrates a wealth of historical and biographical learning in ways that illuminate the verse; it is exact and scrupulous in adjudicating various textual complexities; and it clarifies, modifies, and animates the understanding of this poet’s work on the part of other critics and, through their writing and teaching, of future generations of students, as well as of interested general readers. It also, it is worth saying, exemplifies the general values of careful scholarship and reminds its readers of the qualities of responsiveness, judgement, and literary tact called upon by the best criticism. It is a model piece of “excellent” research in the humanities. And its “impact” is zero.
Of course, in any intelligent use of the word, its impact is already evident from my description of its reception, but that, as we have seen, is explicitly excluded for this purpose. Moreover, any other kind of impact is only going to be credited to my colleague’s department if it can be shown to be the direct result of its own efforts. So if, say, the Departmental Impact Committee can be shown to have touted their colleague’s new “findings” to a range of producers in radio and television, and if, say, one of those producers takes an interest in this particular work, and if, say, this leads to a programme which bears some relation to the “findings” of the book (which, if they are interesting, can probably not be summarized as “findings” in the first place), and if, say, there is some measurable indicator of audience response to this programme, then, perhaps, the department’s score will go up slightly. And if not, not.
Let us leave aside for the moment the very considerable expenditure of time and effort any such process involves (often for no result), and let us also leave aside the fact that there is no reason to expect a literary scholar to be good at this kind of hustling and hawking.
There is still the fundamental question of why a department whose research happens to get taken up in this way should be any more highly rated (and rewarded) than one which does not. Not only do a variety of uncontrollable factors determine the chances of such translation to another medium, but there is also no reason to think that the success of such translation bears any relation to the quality of the original work. If anything, meretricious and vulgarizing treatments (which concentrate on, say, the poet’s sex life) will stand a greater chance of success than do nuanced critical readings. And will scholars then be encouraged to work on topics that have such “market” potential? I recall a moment in the 1960s film The Graduate when a well-meaning older friend puts his hand on the young anti-hero Benjamin’s shoulder to give him a word of advice about a future career and whispers: “Plastics”. Should senior scholars similarly be whispering into their junior colleagues’ ears: “Tudor monarchs”?
Not only does this exercise require all academic departments to become accomplished marketing agents; it also requires them to become implausibly penetrating and comprehensive cultural historians. For in their submissions they will have to “identify the research-driven contribution of the submitted unit to the successful exploitation or translation of excellent research”. Has anyone really thought about what this could involve where ideas are concerned? An experienced cultural or social historian, working on the topic for years, might – just might – be able to identify the part played by a particular piece of academic research in long-term changes in certain social practices and attitudes, but it would require a hugely detailed study and could probably only be completed long after the event and with full access to a range of sources of different kinds. Yet every academic department in the land is going to have to attempt something like this if they are to get any credit for the “impact” of their “excellent” research.
Now consider a different example. Three historians of Anglo-Saxon England, scattered across three different university history departments (there are rarely many of them in one place), read each other’s work over a number of years and slowly find they are developing a revisionist view of the significance of, say, weapons found in burial hoards. They publish their findings in a series of articles in the relevant professional journals, and other scholars duly ponder and are persuaded, incorporating the new interpretation in their own writing and teaching.
The curator of a regional museum, himself a recent graduate of one of these history departments who still keeps up with some of the scholarly literature, thinks that this new line would provide an excellent theme for an exhibition. He arranges for the loan of material from other museums, asks his old teacher to check the accompanying information panels, and the exhibition turns out to be very popular. This may appear to be a model case of research affecting the understanding of a wider public, but when the REF submissions are made by each of the history departments, none of this can be mentioned because the exhibition was not the direct result of the departments’ own “efforts to exploit or apply the research findings”. The impact score of the research is zero.
Adequately to capture the impact of the new “impact” requirement on research, we probably have to pursue this example a little further. In the case of the first of the three scholars, his department’s REF Committee is furious about this missed opportunity, and that scholar has to spend a considerable part of the next five years contacting museum curators and TV producers on the off chance that his research (which he now has less time to do) will be taken up for their own purposes. He also has to produce annual reports on his efforts to do this and annual plans for attempts to do it in the future. At the second scholar’s university, a diktat comes round from the Pro-Vice-Chancellor (Research) that no funding or leave will be given to support research unless a “demonstrable impact dimension” is in place beforehand, and staff are urged not to share with colleagues in other universities any information or contacts which might allow those universities to get in first. The second historian becomes fearful for his future: he does less research, ghostwrites the King Alfred Book of Bread and Cakes, and then becomes the university’s Director of Research Strategy (Humanities). At the third university, the historian in question simply cannot stand any more of this idiocy: he takes a post at an American university and goes on to do “highly innovative” and “groundbreaking” (but impact-free) research which changes the way scholars all over the world think about the field.
We can all make half-informed guesses about how such a misconceived policy could ever have come to be imposed on British universities. Although the policy originated before the most recent Cabinet reshuffle, the fact that responsibility for higher education has now been subsumed into Lord Mandelson’s Department for Business is a dispiriting indication of official attitudes. But even if universities had more powerful political champions, the truth is that the “higher education sector” in Britain is now too large and too diverse, both in terms of types of institution and types of discipline, to be sensibly subject to a single uniform mode of assessment. The justification for the research activity of, say, a lecturer at a former polytechnic who is primarily engaged in teaching a refresher course for theatre nurses for a local health authority is bound to be different from the justification of the research activity of, say, a lecturer at a traditional university who is chiefly engaged in supervising doctoral students and teaching final-year undergraduates in Latin literature. The second may be no less valuable than the first, though in different ways; and the relation of both lecturers’ research to their respective publics may also be different. So those differences need to be reflected in different forms of assessment and funding.
Even if the policy represents a deliberate attempt by government to change the character of British universities (and the humanities are, I suspect, simply being flattened by a runaway tank designed for other purposes), its confusions and inadequacies should still be called to public attention. There are, after all, some straightforward conceptual mistakes involved. For example, the exercise conflates the notions of “impact” and of “benefit”. It proposes no way of judging whether an impact is desirable; it assumes that if the research in question can be shown to have affected a number of people who are categorized as “outside”, then it constitutes a social benefit of that research. It also confines the notion of a “benefit” to something that is deliberately aimed at and successfully achieved. Good work which has some wider influence without its authors having taken steps to bring this about is neither more nor less valuable than good work which has that influence as a result of such deliberate efforts, or indeed than good work which does not have that influence at all.
And there is the obvious confusion about what is being assessed. Instead of proposing that “impact” of this kind is a desirable social good over and above the quality of the research, the exercise makes the extent of such impact part of the measurement of the quality of the research. In terms of this exercise, research plus marketing is not just better than research without marketing: it is better research.
Underlying these tactical mistakes are larger confusions which are increasingly prevalent in public discourse. There is, to begin with, the reification of “inside” and “outside”. It is assumed that the only way to justify what goes on “inside” is by demonstrating some benefit that happens “outside”. But we are none of us wholly “inside” or “outside” any of the institutions or identities which partly constitute who we are. Similarly, it is a mistake to assume that if an activity that requires expenditure (as most activities do) can be shown to have the indirect effect of provoking expenditure by other people, then it is somehow more justified than an activity which does not have this indirect consequence. Art is a valuable human activity: showing that it also “generates” several million pounds for the economy in terms of visits, purchases, employment, and so on, does not make it a more valuable human activity.
The OED definition of “impact” points to the central problem: “The act of impinging; the striking of one body against another; collision”. In the proposed exercise, what is being sought is evidence that one body (universities) is striking against another body (not-universities, here referred to as “society”). Nothing more than that: a mechanistic model. But the real ways in which good scholarship may affect the thinking and feeling and therefore the lives of a wide range of people, including other scholars (who are, after all, also citizens, consumers, readers...), are much subtler, more long-term, and more indirect than the clacking of one billiard ball against another.
It is, needless to say, perfectly proper to want specialists in any field to make an effort from time to time to explain the interest and significance of what they do to non-specialists (who may, let us remember, include specialists in other fields). Addressing such non-specialist publics is a commendable activity in itself, and it is sensible for a government, concerned about a perceived lack of public “engagement” with academic scholarship, to wish to encourage it. But that is quite different from what is being asked for here, which is evidence of “uptake” by “external users” of the research itself, with such evidence (or its absence) then helping to determine how highly that research is rated.
Read the rest on the TLS website.