Monday, 29 November 2010

The Conundrums of Assessment

To my chagrin I recently realised that I have been assessing research proposals and grant applications for some twenty years, and have done so for most of the major humanities and social science funders in the UK, Europe and North America.  Over the last ten years I have also sat on numerous grant awarding panels, and helped to design the odd funding programme.  I have form in this particular small area of academic life.  So when I was asked for advice about how to write an assessment by a colleague faced with their first request of this sort, I felt obliged to offer some.  And since there is no independent body of advice about how the system works, or how the panels who arbitrate on the final decision use the reports drawn up by assessors, it seemed worthwhile posting that advice:

In the UK, funding bodies have made great efforts to train assessors, and brief them on what is expected; and to all intents and purposes have created a system that seems transparent and clear, with apparently precise criteria laid out in straightforward prose.   A great deal of effort has also been put into the supporting documentation in an attempt to ensure that assessments can be compared against one another, and that the process of decision making undertaken by the panels is speedy and uncontentious.  A real attempt has been made to eliminate special pleading, conflicts of interest, and the administratively perverse.  But, of course, all bureaucratic systems are also cultural systems, and there remain many unstated realities that effectively determine how a grant application, and the assessments written in response to it, are read by the people charged with eventually sifting the funded from the unfunded.

The single most important factor that every assessor needs to keep in mind is that most funding programmes have a success rate of around 20-25% (some as low as 10%).  From a panel's perspective, four out of five applications must be rejected.  As a result, even small issues and problems in an otherwise exemplary application will be used to make a determination.  After reading perhaps thirty solid projects (and these days few applications are less than solid) and when faced with the need to find just five or six to fund, any panel, however well meaning or intelligent its individual members, will begin to reach for the smallest weakness.

As an assessor you need to be aware of this problem.  This does not mean that you simply laud your favoured project to the skies.  If you do, your assessment will be judged to be insufficiently critical, and therefore worthless.  Instead, it means that if you seriously think a project is likely to be better than 80% of the others, you need to act as a critical advocate, and to place yourself at the heart of the debate that the application you are assessing will inevitably generate.

There will always be one or two applications that sit on the top of the pile, and if you are assessing one of these you can give yourself the freedom to engage with the underlying ideas, and to simply discuss the project's importance for a wider field.  Even in this instance, you might want to suggest where small problems might exist, but have been effectively addressed.

But these few, intellectually exciting and beautifully realised projects are rare and are consistently funded.  Where all the debate will be focussed is over the next tranche of projects.  This usually comprises some 40% of the total.  In a panel meeting (and regardless of how they are organised) the grading schema inevitably breaks down, and this 40% of applications start to bunch around the boundary grades.  Most panels give up on whole number grading of the sort the funders recommend, and end up using some form of 4.257, or 4-+(?) (if they are dominated by older academics from the Russell Group).  If you are asked to assess an application of this sort, the first decision you need to make is whether you think it should be funded; and having made that decision (assuming it is positive) you need to act as an advocate.

In other words, the first thing that any assessor needs to do is make an informed, over-arching judgement about the quality and importance of the application in front of them.  If you think the application you are assessing is compelling, although not so strong as to sit on the top of the pile, then you need to say so, and say why.  Alternatively, if it is in the bottom 50%, for either technical or intellectual reasons (i.e. not exciting, or not practical, or just not well written) there is little point in expending your time struggling to find something good to say.  It will not affect the outcome, and it is likely to be fed back to the applicant in a way that just encourages them to resubmit something similar, instead of something better.

For myself, I start off with a basic question in my mind:  Am I really excited by the project?  Has it caught my imagination, and left me thinking that I would actually want to know the outcome?  Would I want to read the book?  Or in a Knowledge Transfer context, would it make a significant social difference?


A minority of academic projects get past this hurdle.  As a result, the real problem comes with the next stage: while the intellectual case needs assessing (and if you are excited by the project, this is the easy bit), it is essentially all the things around it that will be used to exclude marginal projects.  If the project plan (with methodology and budgeting etc.) looks less than professional and doable, the application will be excluded on these grounds.  But if the fundamental idea is exciting and you decide you want to support it, regardless of its minor faults, you will need to deal with these issues directly and explicitly.  If you don't, the curmudgeon in the corner (and there is always one) will use these small issues to denigrate the project - generally as a stratagem to promote their own discipline or methodology or favoured project.  In this context, if you see a weakness, you need to address it directly and explain why it is not important to the success of the project.

My feeling is that you need to exercise an abstract academic and professional judgement, and in the round (regardless of the hoops and bureaucratic forms the funders want you to jump through) come to a conclusion about the worth of the project.  Once you do this, you are duty bound to do everything you can to ensure that the result is positive, in full recognition that most projects, including innumerable worthy ones, won't be funded.   This can lead to a rather instrumental and manipulative approach (which is a problem), but it at least has the advantage of allowing us to exercise the kind of judgement that is implied in peer review even when the process feels like bureaucratic game-playing.


