My Interest & Skill Areas

The following are some of my interest & skill areas: Program & Strategy Development; Monitoring, Evaluation & Research; Training, Coaching & Mentoring; Participatory M&E Processes; Capacity Development (Log-frame, the Most Significant Change technique, Rights-Based Programming, Rights-Based Approaches to M&E); Participatory Planning Processes; use of Open Space Technology; Indicator Development; and Documentation & Development of Case Studies.

Wednesday, January 28, 2009

Creating a Learning Culture

Closely linked to the issue of utilisation of evaluation reports/findings is another pertinent challenge: how can we create a learning culture within our organisations and among our clients? Evaluation findings are supposed to enhance and promote program quality and learning. However, we often find organisations implementing similar projects or interventions even though evaluation findings point otherwise. I have seen NGOs embark on non-sustainable interventions over and over, even though results from their own evaluations show that such interventions do not work. How can we as evaluators influence decisions based on our findings? Does our work end once we've submitted our reports, or could we do more to support our clients in shaping their program direction? I'd welcome your thoughts and experiences.

10 comments:

  1. Thanks Sam for initiating this. I think the challenge is that in many organisations the different departments work separately. The M&E unit is normally seen to come in at the end of program implementation and not at the planning stages; involving it from the planning stage could be a starting point for the learning process. Some development organisations, not being profit-making organisations, do not see this as something that will affect their income later, which is core to any business; but I can imagine that with current trends they will be forced to deliver and will thus need to have these checks and balances in place.

    Thanks

    Anthony

    ReplyDelete
  2. Hi Anthony, that's an interesting starting point: the lack of integration among different departments and, secondly, the fact that M&E (as a function) is not integrated at the planning stages of the program. The point, though, is that senior managers are ultimately responsible for organisational learning and program improvement. One therefore wonders why management doesn't create an environment which promotes cross-learning, especially learning from evaluation findings. By and large, everyone tends to gain from an 'enabling' learning environment. Maybe the role of evaluators should be expanded to include elements of advocacy, so that we can advocate for 'evidence-based' decision making.

    ReplyDelete
  3. I agree with you, snorgah, that M&E is often not integrated in the planning stages of a program. Evidence of that is the many requests to do an evaluation after the program has finished. Hardly ever do I see a request from a project for an M&E practitioner to get involved at the design stage.

    But I also get the feeling that many evaluations are set up to question the project in relation to its objectives without questioning those objectives themselves. Projects, in my opinion, can be seen as vehicles on a road towards the objectives. Yet evaluations often focus only on what can be improved on the vehicle, when you could also ask whether this is the right vehicle and even whether you've chosen the right road or direction. That means questioning your objectives, and the project as a vehicle, at a meta level.

    ReplyDelete
  4. Hi Sam and Anthony; I like this initiative and hope more people will join. Indeed, M&E is an afterthought in many development organizations. Management in most organizations does not realize that M&E is part of all stages of a project cycle, and that if it is not incorporated early enough the project may miss its target and fail to achieve results.

    ReplyDelete
  5. Hi Sam, thanks for facilitating the forum. I concur that there is a critical need to incorporate the M&E function in the analysis and planning stages. The scope of evaluation should be expanded to mandate evaluators to come up with action plans based on their findings. Though adoption could be a problem, managers in charge of policy should be sensitized continuously. Perhaps the decision makers need to make more use of evaluation findings and review decisions periodically.

    ReplyDelete
  6. Reading all the contributions, it appears we as evaluators are probably not assertive enough to get management to understand the importance of incorporating the M&E function right at the planning stage. Is there something we're doing wrong? Do we have to advocate for serious consideration of the M&E function? Maybe this is the way to go - I'm not sure.

    ReplyDelete
  7. snorgah is right on with this thread. Lack of an evaluative culture has been identified in many studies as the key element explaining poor evaluation or RBM practice in an organization.

    Some suggestions I made for building such a culture can be found in Brief #20 at http://www.cgiar-ilac.org/content/ilac-brief.

    ReplyDelete
  8. This is a great initiative! Thanks to the masterminds. I would like to concur that evaluations are often done to look at the program in relation to its objectives. Moreover, they are done at the request of, or to meet specific requirements stipulated by, the funding groups, so their utility is often limited. Going forward, as evaluators we need to be given the platform to ask the questions: What is the purpose of the evaluation? What will be done with its results?

    As the profession begins to be viewed as critical in many development programs, I am sure we are not far from the point where we can insist that organisations indicate how they will use our findings.

    ReplyDelete
  9. Snorgah - First - many thanks for initiating the blog! We certainly need more dialogue & opportunities to learn from one another!

    I'm working with an IFAD-funded regional capacity building programme, SMIP. We're working with an approach that we call "managing for impact", which essentially views M&E as an integral part of management along with three other pillars: i) guiding the strategy/strategic guidance; ii) day-to-day operations; and iii) learning environments.

    So I was very interested to read this post and the responses so far, many of which echo the principles embedded within the Managing for Impact (M4I) approach. There are a few posts on this on our blog: http://mande4mfi.wordpress.com/about/.

    I found the comment that "Senior Managers are ultimately responsible for organizational learning and program improvement" particularly interesting. I actually rarely come across situations where senior managers are held accountable for learning processes, and I believe that this is a core part of the problem. In my experience, managers and implementers are held accountable for "Objectively Verifiable Targets" - delivering the "numbers" - and assessed through a "tick and go" approach (http://mande4mfi.wordpress.com/2008/12/04/ahthe-good-ol-tick-go/). Both of these (along with many other factors in the system) are often disincentives to learning, and they are much stronger than any incentives that may exist.

    Perhaps it would be interesting to collectively write a post on incentives & disincentives to using evaluations for learning?

    Thanks for the stimulating post!
    Mine

    ReplyDelete
  10. The Power of Measuring Results
    • If you do not measure results, you cannot tell success from failure.
    • If you cannot see success, you cannot reward it.
    • If you cannot reward success, you are probably rewarding failure.
    • If you cannot see success, you cannot learn from it.
    • If you cannot recognize failure, you cannot correct it.
    • If you can demonstrate results, you can win public support.

    Source: Adapted from Osborne & Gaebler 1992.


    Key Features of Implementation Monitoring versus Results Monitoring

    Elements of Implementation Monitoring (traditionally used for projects)
    • Description of the problem or situation before the intervention
    • Benchmarks for activities and immediate outputs
    • Data collection on inputs, activities, and immediate outputs
    • Systematic reporting on provision of inputs
    • Systematic reporting on production of outputs
    • Directly linked to a discrete intervention (or series of interventions)
    • Designed to provide information on administrative, implementation, and management issues as opposed to broader development effectiveness issues.

    Elements of Results Monitoring (used for a range of interventions and strategies)
    • Baseline data to describe the problem or situation before the intervention
    • Indicators for outcomes
    • Data collection on outputs and how and whether they contribute toward achievement of outcomes
    • More focus on perceptions of change among stakeholders
    • Systematic reporting with more qualitative and quantitative information on the progress toward outcomes
    • Done in conjunction with strategic partners
    • Captures information on success or failure of partnership strategy in achieving desired outcomes.

    Source: Adapted from Fukuda-Parr, Lopes, and Malik 2002, p. 11.

    ReplyDelete