I have a confession to make. Even though I am currently employed as an Impact Research Fellow at the University of Stirling, I have somewhat lost track of exactly what scientific “impact” really is, or should be. Although given my job title I guess this constitutes a sackable offence, please hear me out…
A while ago I had an interesting coffee-room discussion with, amongst others, @StuAuld about the definition of scientific impact. Part of the discussion was about “impact statements”, now required for many grant proposals. I seem to remember there was some confusion not only about what these should convey, but also about exactly how they would be assessed – what makes one proposal more likely to “achieve impact” than another? Is it the economic value of the output? Societal impact? Government policy makers, volunteers for conservation NGOs, consultant ecologists, members of NERC or NSF boards, academics – however brilliant and “objective” a grant scoring system, the more I thought about this, the more I realised such assessments will be entirely different depending on who you ask. Which makes writing an “impact statement” a little like cooking tea for fifty three-year-olds using only chips, olives, and Brussels sprouts (admittedly I haven’t tried this, so it might be a rubbish analogy).
Now, I used to think that conservation biologists and applied ecologists need not worry about such trifles. Obviously applied ecology has impact, right? Of course some poor soul working on the genetic basis of behavioural syndromes is going to have to work harder on their impact statement than someone who has just discovered how to cost-effectively protect a critically endangered species. Right?
However, in the end it is just a matter of degree. A grant-giving body funded by a government hell-bent on slashing budgets for nature conservation will be more likely to fund “economically viable” proposals (i.e. build us something that gets to the coal more effectively!) than anything to do with “cuddly animals”, or anything likely to inform sustainable policies which supposedly “burden” business. Even for something as obviously “applied” as conservation biology, the societal or economic value attributed to the work will depend on personal preferences, funding priorities and which way the political wind is blowing.
Analogous issues crop up in the valuation of research outputs. As academics, we all like to tell each other that it’s not just about who has the most Nature or Science papers, but also about “engaging with the public” and “stakeholder working groups”. Call me cynical, but as an early-career researcher I find it hard to believe that focusing solely on the latter will get me a permanent position in academia, or that it will generate many papers suitable for higher-impact (as in Impact Factor) journals. Indeed, even for the more applied journals, there is often a tension between the sort of work that is academically “trendy” and likely to attract citations quickly (i.e. improve the journal’s standing in terms of Impact Factor), and the sort of work that is (at least directly) useful to managers, policy makers or practitioners. Conversely, for ecologists working outside academia, there are often very real constraints (financial or political) on their research output in terms of, say, scope or time to do it – which makes publishing in high-quality peer-reviewed journals even harder than it is for academics.
The resulting “Great Divide” is a double whammy. On the academic side, work that would truly benefit, say, an ecologist working for an NGO or a government policy maker is often not valued highly academically. On the other hand, applied ecologists working for NGOs or consultancies, with practical insights into how science could really make a difference, cannot easily communicate this to their colleagues in academia. This gap between theory and practice is nothing new and has been extensively discussed (see e.g. Nature 450: 135–136, 2007; Conservation Biology 22: 610–617, 2008; Oryx 44: 1–2, 2009; BioScience 60: 835–842, 2010). But the key point is, I don’t believe it can be bridged by “impact statements” as long as impact is not valued in the same way by academics and practitioners (or, as Chapron & Arlettaz point out, as long as academic ecologists are victims of the “publish or perish” selective force).
I guess it is possible to achieve both high academic impact and high practical/societal impact; journals such as the Journal of Applied Ecology have made significant headway in this respect (see e.g. 47: 1–4, 2010; 48: 1–2, 2011; 49: 1–5, 2012).
So perhaps I am too gloomy about this (or, more likely, just not clever enough to do both!). Whatever the case may be, I am still confused about what “impact” really means. And, given my job title, I really feel I should know.