In this post, we look at why evaluators may need to adjust their usual ways of working when evaluating prizes for development.
What’s different about the role of the M&E team in this situation?
With more standard evaluations, the evaluation team generally has a fairly good idea of the intended results pathway, as well as of the key stakeholders involved in delivering the project or programme. In the context of prizes, both the prize solvers and the nature of the solutions they propose are largely unknown at the evaluation design stage. Consequently, when prizes are used to pursue development outcomes, people from three different disciplines are brought together, often for the first time: innovation prize experts, development specialists, and M&E practitioners. Each brings their own set of concerns, motivations and experience, and we have seen how this combination of domains both enriches and elongates the prize design process.
The evaluators are primarily focused on how success will be measured, and will be checking the assumptions that link the problem, the prize design and the projected impact. A good example of where the different motivations within a prize team can create challenges is the verification stage. Several of the Ideas to Impact prizes incentivise people to achieve specific results, so a key step in awarding the prizes is some form of independent verification. That verification is driven by the need to award a prize, so it is dangerous to assume that the resulting data will also meet the needs of the evaluation. Inevitably, each of the three disciplines will be driven by its own requirements during planning decisions, and it is important that M&E has a strong voice in these discussions.
Identifying “prizeable” problems that can address issues within development is a complicated task, and it takes longer and is harder than anyone involved first anticipates. We found the design process works more as a negotiation between the three parties, and within this negotiation we have seen how evaluation can make an important contribution. Rather than the evaluators joining after the prize design and plan were fixed, in Ideas to Impact members of the M&E team have been able, and encouraged, to ask questions from the start, and especially before the prize is launched. We commonly find ourselves playing a ‘critical friend’ role: constructively challenging the anticipated prize results chain and increasing understanding of the context in which the prize is situated.
When does the M&E team need to get involved?
Our lesson here is that it is not obvious how early, or how often, the evaluators should be involved in innovation prizes for development. As internal evaluators for the programme, we have sometimes found it hard to know which prize discussions will be relevant to the evaluation, and we are still trying to get the balance right between being within, and sufficiently removed from, the prize design and implementation process.
Ideally, the evaluators would be present at every stage: problem definition, initial prize design, revisions to the design, prize launch, and so on. But this would be beyond most evaluation teams’ budgets. The discussions that turned out to be far more important to join than we anticipated were those around the Terms & Conditions that solvers must accept when entering an innovation prize. Unless the needs of the evaluation are understood when they are drafted, the Terms & Conditions can restrict what data is collected, how that data can be used, and whether entrants (including non-winners) can be contacted by the M&E team. Generating awareness and “buzz” is characteristic of innovation prizes for development, so even a discussion about what a prize should be named is one the M&E team should take part in; not so much to make suggestions as to draw attention to the place communication has in the prize’s results chain. In these situations, having an agreed Theory of Change can help make the role of communication in prizes clearer.
In our final post we share our checklist for planning the M&E of innovation prizes and the tools we have found particularly useful so far.
Cheryl Brown is the Evaluation & Learning Coordinator on Ideas to Impact at Itad Ltd, an innovative monitoring and evaluation consultancy.