My Interests & Skill Areas

The following are some of my interests and skill areas: program and strategy development; monitoring, evaluation and research; training, coaching and mentoring; participatory M&E processes; capacity development (logframe, Most Significant Change technique, rights-based programming, rights-based approaches to M&E); participatory planning processes; use of Open Space Technology; indicator development; and documentation and development of case studies.

Tuesday, July 19, 2011

Lessons Learnt – what should they look like? Sam Norgah, Feb 2010

Introduction

This document is based on contributions made by members of the African Evaluation Association (AfrEA) on the question, ‘What should lessons learnt look like?’

One of the key functions of evaluation is to generate a body of knowledge and experience relevant for improving future performance and enhancing quality. This knowledge is usually captured in a ‘lessons learnt’ section of the evaluation report, which commissioners of evaluations turn to in order to draw the key lessons from the program/project that has been evaluated.

One of the challenges faced by commissioners of evaluations with regard to ‘lessons learnt’ is that evaluators often state the ‘obvious’ without reference to context or scalability. In fact, some of the lessons outlined in evaluation reports are ‘common sense’ or ‘general knowledge’ and do not require an evaluation to unveil – for example, ‘community involvement is important for project success’ is a statement of general knowledge rather than a lesson. Considering the scale of resources committed to evaluations and the pressure on development practitioners to ensure accountability, demonstrate results and scale up their interventions, the emphasis on lessons drawn from programs has become critical and, in most cases, non-negotiable.

Extent of the problem

NGOs are not alone when it comes to concerns about the relevance of lessons drawn from evaluations. A 2007 study by the United Nations Environment Programme (UNEP) showed that nearly 50% of evaluations conducted between 1999 and 2006 did not meet its ‘quality criteria’ (Lessons Learned – A Platform for Sharing Knowledge, special study paper). A figure of this size, reported by an organisation like UNEP, raises concerns about the scale and trend of the problem. It is therefore imperative for both commissioners of evaluations and evaluation practitioners to take steps to strengthen the lessons drawn from evaluations and to ensure clarity around those lessons.

According to both commissioners and practitioners, one of the main reasons for this state of affairs is the lack of clarity around what should be captured under lessons learnt; in most cases, the terms of reference (ToR) guiding the evaluation are not explicit on this. Notwithstanding this argument, in my opinion the major responsibility lies with the evaluation consultant, as an expert, to draw the line between general knowledge and the key lessons to be learned from the program/project. This raises the question of capacity, first and foremost among our practitioners and secondly among the commissioners – a sentiment shared by many of the contributors on this issue.

Key criteria for identifying and capturing lessons

Three sets of criteria emerged from the various contributions; they are stated below. The key point is the need for clarity and agreement on what should be captured, and this needs to be explicitly stated in the ToR.

1. According to UNEP, the quality criteria for assessing lessons learnt are:

(i) Lessons should concisely capture the context from which they are derived.

(ii) Lessons should be applicable in different contexts (generic), have a clear ‘application domain’ and identify target users.

(iii) Lessons should suggest a prescription and guide action.

2. There are five levels of lessons that can be drawn from any evaluation (Clarke and Rollo 2001): (i) general information, (ii) explicit knowledge, (iii) tacit knowledge, (iv) insight and (v) wisdom.

It appears, however, that in most evaluations evaluators have focused only on the first level – general information. This was confirmed by the UNEP study and by contributions from other evaluation commissioners. According to one commentator, the levels that matter most are the last two – insight and wisdom – provided they are supported by strong, rigorous evidence from the evaluation methods and findings. The essence of any evaluation process is to draw relevant lessons; there is a need to shift away from gauging what has taken place in the past for its own sake and stating the obvious. The challenge, then, is how to formulate lessons as research questions.

3. According to another contributor, two principles that have helped in articulating lessons are:

(i) Clarify the intended use and intended users – when agreeing and finalising the ToR and the planned M&E processes, it is important to clarify the expected outcomes, the learning points and the key audience.

(ii) Develop a structured set of learning points and recommendations using appreciative inquiry. A staff member of an international NGO alluded to the challenge of clearly articulating lessons and suggested that presenting evaluation feedback through workshops with stakeholders provides a critical forum for reflection, documentation and dissemination.

Conclusion

Drawing relevant lessons from evaluations is a challenge faced by commissioners of evaluations and practitioners alike. It was generally agreed that capacity is partly to blame for this state of affairs, which calls for standardisation of processes and renews the case for accreditation of the profession.

Overall, it has been an interesting and insightful discussion – thanks to all who contributed to this topic and shared their experiences and lessons.
