People, not papers: rethinking ‘impact’

I dropped into today’s LSE impact conference. LSE have clearly made a sensible strategic move to grab the impact bull by its horns. But I came away depressed that the discussion about ‘impact’ is still trading in some pretty crappy linear models.

The 2014 Research Excellence Framework is the new Mecca to which academics must pray. It is the tool that will judge, and so inevitably shape, their work. And for the first time the judges will raise up their scorecards for impact as well as excellence. Over the last couple of years, academics have railed against policymakers’ attempts to instrumentalise their work. But the final compromise now seems to misrepresent policy as well as academic research.

Academics’ impact on the world clearly can’t be described by numbers. Hefce have therefore agreed that researchers should construct case studies of impact, explaining how what they did has changed the world. But, crucially, these case studies must centre on particular papers that are judged to be world-class, according to citations, impact factors and all that.

I spent four years in a think tank and two doing policy in a national academy of sciences, where impact was the currency even if we didn’t use the term. My impression is that impact is about people, not papers. Innovation studies (some of which Alan Hughes described this morning) tell us that the economic benefit of research comes from consultancies, problem-solving and networking much more than from patents, spin-outs or breakthrough papers. And anyone who has been involved in policies that pretend to be ‘evidence-based’ knows that it’s about being in the right place at the right time, talking to someone who’s prepared to listen. For academics interested in public engagement, this post tells a similar story.

My colleagues and I spent much of the last year engaged in discussions with government about the Spending Review, and particularly about how much money the science base should receive. We gathered armfuls of evidence and produced a nice report, but we were under no illusion that the really important policy discussions were taking place between people, not on paper. (A story for another day…)

Academics and policymakers seem to be colluding in a dangerous myth that the academic paper is the valuable thing. It isn’t. It’s a signal that some interesting work has been done. Papers are easy to see and they are easy to count but people are what matter. Researchers justifiably argue that it shouldn’t be like this, that good research should speak for itself, that old boys’ networks of policy influence shouldn’t hold sway. But wishing doesn’t make it so.

Wiser academics and policymakers have privately told me that they are confident that the case study idea is broad enough to accommodate stories that reflect the realities of innovation and policymaking, rather than having to adopt a hyper-rational myth. Let’s hope so, otherwise, we are going to see a whole new world of weird in 2014.

About Jack Stilgoe

Jack Stilgoe is a professor of science and technology policy in the department of Science and Technology Studies, University College London.

12 Responses to People, not papers: rethinking ‘impact’

  1. alice says:

    Ha, I suspect I’ll use that people not papers line myself in future… Though, riffing off James Sumner’s point, maybe networks?

  2. Steve Dennis says:

    So would you say that highly publicized but highly controversial research, such as the NASA arsenic-based life debacle, has higher impact than something more specialized (and therefore less widely known) that’s actually being applied in some form?

    • Jack Stilgoe says:

      Hi Steve. Not sure I understand the question. There is a point about negative citations – research that is cited because it is bad, in order to critique it. But apparently that is less than 1% of citations. In terms of impact, I don’t think that anyone would argue that the NASA arsenic stuff will change the world. It would therefore do badly on most assessments of impact. I don’t think that an impact on the public consciousness counts. Otherwise the famous Pons/Fleischmann cold fusion thing would rate as a biggy.

  3. Andy Turner says:

    Thanks for the post. Applied impact of some research might not hit for a long time. The research more favoured will be that which can demonstrate impact in the short term. I don’t think it is just pure mathematical research that will struggle to identify impact in the short term.

    • Jack Stilgoe says:

      Hi Andy. The hope is that each discipline will be able to work these things out for itself, so it won’t lead to a predictable distortion of research funding away from one area and into another. I hope Hefce have at least been persuaded by the argument that in some more upstream areas of science, impacts are felt in the very long term and through complicated networks of interaction. That said, there will be unintended consequences, which we can’t foresee.

  4. Hi Jack – congratulations on your new appointment. Re this post – there’s a very interesting book by Mark Monaghan on UK drugs policy, which modifies Sabatier & Jenkins-Smith’s Advocacy Coalitions Framework to look at how deep core beliefs influence what we individually conceive of as evidence. The implications are that to understand the impact of research on policy we need to break even further away from models (even non-linear ones), and give greater consideration to the politics of evidence. Well worth reading (‘Evidence versus politics: exploiting research in UK drugs policy making?’ published by The Policy Press.)

  5. Pingback: KABOOM: Exploding ‘impact’ « through the looking glass


  7. Incisive post – thanks. There’s a big selection effect going on here of course in academia, certainly in science in my experience: senior academics highlighting paper citations as the dominant measure have been selected to represent their communities by virtue of their particular strengths in that area. It is also something people feel is more objectively measurable, for better or worse. But arguably this has encouraged a generation of leading academics who place much less value on the wider picture including teaching, impact and longer term research itself.

    • Jack Stilgoe says:

      Thanks Jonathan. I quite agree. Academics know that not everything that counts can be counted, but they also know that what is counted will come to count in the future. So control of the metrics becomes the important thing.

  8. Pingback: The cult of personality in science « Testing hypotheses…
