Dear Think Tank Review,
I’m in Istanbul for the Think Tank Initiative Exchange: 43 think tanks from 20 countries across the three continents of Africa, Asia and South America have gathered to discuss research quality and impact. Do conferences get any better? I don’t think so!
This capacity-building programme is now into Phase 2, having completed five years, but it is due to end in 2019. At that point, after a decade of stable core funding, these think tanks could be on their own. Only the third speaker mentions an ‘exit strategy’. Around the globe, people are stunned when I explain that 40 researchers raise £3m a year from 150 funders, but that’s how my home think tank, IPPR, works. People also seem surprised when I suggest that only a third of a researcher’s time should be spent actually doing research.
We discuss research quality for most of the first day. Who would have thought that in a room full of over 100 think tank researchers, we would struggle to pin it down! Relevance, timeliness and outreach are common themes. Some argue for public involvement in peer review. Others say that you can’t have a policy suggestion that doesn’t have implementation built into it. A panel of diverse policy-makers from around the globe, including an MP from Tanzania, a Presidential candidate from Guinea-Bissau, the former Education Minister from Costa Rica and even a British guy from DFID, all agree on one thing: they don’t actually read the full reports.
There is another great moment when a think tank Director from Argentina proclaims, “I might read a long book, but not a long email.” The policy-makers complain about information overload. The politicians among them admit: “we all suffer from attention deficit disorder.”
But the quote of the day comes from the breakout group on Impact Evaluation, when someone advocates “high-quality experimental and quasi-experimental theory-based, mixed-method impact evaluations,” only then to berate the audience for failing to do research that is ‘useful’, which she describes as “relevant, contextual, clear, feasible and timely.” A woman from Sri Lanka explains that she has done research to evaluate the impact of impact evaluations (are you still with me?) and guess what? The perceived credibility of the evaluators, more than the robustness of the methodology, is the key factor. Who would have thought it?
Towards the end of the day, in a wonderfully Jeremy Kyle-style facilitated feedback session, that same woman from Sri Lanka (either she is stalking me or I am stalking her) concludes that “all knowledge generation is a political endeavour.” Can’t argue with that. Roll on day two.
This post first appeared on ThinkTank Review