Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on LinkedIn, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been saying for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The argument within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources are better spent on making sure the program is well planned than on documenting that it will often fail to achieve its outcomes because the planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This emerging integrated approach needs an underlying theory that will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts and so on. The work I’ve been doing on outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said – ‘there’s nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below, which shows how it is a meso-level theory reaching across strategic planning, monitoring, evaluation and related disciplines.

[Diagram: outcomes theory as a meso-level theory spanning strategic planning, monitoring and evaluation]
(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people working out in the field who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach (http://doview.com/plan). Using the approach means they will avoid many of the technical problems highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the new, fully integrated approach to planning, monitoring and evaluation which many are now seeking (see http://doview.com/plan/draw.html).

Does Monitoring and Evaluation (M&E) Planning have to be so cumbersome and painful? Just finished my Bangkok conference presentation

Bangkok Conference

I was invited to give a presentation to the 1st Pan Asia-Africa Monitoring and Evaluation (M&E) Forum: Results-Based Management & Evaluation (RBM&E) and Beyond: Increasing M&E Effectiveness, held in Bangkok. I’ve just finished my presentation, which was called: ‘Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful?’

I’ve had to review many Monitoring and Evaluation Plans in the past and I’ve generally found them to be long and tedious documents. I’ve also had to write them myself and realize that the tedium is not only on the part of the reader! It’s usually really hard to quickly get an overview of what the M&E Plan is going to measure and the evaluation questions that are going to be asked.

Normally, once the plan has been used to get funding for the M&E work, it’s just put in a desk drawer, and other documentation is used to control the implementation of the M&E Plan and to make presentations on it.

In the presentation, I outlined the new DoView Visual M&E Planning approach. This approach takes the pain out of writing (and reading) M&E plans and creates major efficiencies.

Using the approach:

  • It takes half the time to create an M&E plan.
  • It’s entirely visually based, which makes it easy to see what is, and (just as important) what’s not, being measured.
  • The same DoView file can be used to control the implementation of the M&E work.
  • All presentations can be made straight from the DoView M&E Plan (you don’t need to create additional PowerPoints).
  • You can, if you wish, fully integrate project strategic planning into M&E planning (the Holy Grail of putting the ‘P’ – ‘Planning’ – back into ‘M&E’).

The virtual presentation was in the form of three short videos (about 6–7 minutes each) and a Skype question-and-answer session afterwards.

Check out the three short videos of the presentation here. The first video describes the reasons we should move away from the traditional approach, and the second and third videos show you how to use the new DoView paradigm. If you want the resource page on the DoView website which shows you how to build a DoView Visual M&E Plan and gives an example you can download, it’s here.

Paul Duignan, PhD. Blogs at OutcomesBlog.org and is on Twitter at Twitter.com/PaulDuignan. You are welcome to participate in the DoView Community of Practice on LinkedIn. Download a DoView trial at DoView.com.

Cheater’s guide to when to do different types of economic evaluation

I’ve just updated my guide to selecting different types of economic evaluation analysis. Selecting which type of economic evaluation analysis to use can be confusing for people who aren’t economists or trained in doing such analyses. My guide makes four points. The first is that when doing economic analyses you should always clearly communicate which variables are included in, and which are excluded from, the analysis. The most transparent and accessible way to do this is against a visual model of the program being analyzed.

The second point is that discussion of economic analysis often does not focus enough on whether there are robust estimates of the effect-size of a program to feed into such an analysis. (The effect-size is how large an effect an intervention has on outcomes.) I once talked to a senior economist at the World Bank who said that many programs basically make up the effect-sizes which are then plugged into economic analyses, and I have seen the same type of behavior myself. Much of the discussion of economic analysis neglects the effect-size question and focuses instead on the technicalities of how to do the analysis.

The third point is that a great approach for outcomes, policy and evaluation people is to start from what they know about the robustness of the effect-size information available on a program. This information, which they should know, is the starting point for working out which type of economic analysis to use. The guide then takes the reader to one of ten possible types of economic analysis based on whether robust effect-size estimates are, or are not, available.

The fourth point is that, even where there is no information on effect-sizes available, some economic evaluation can still be done. This type is what I call Assumed or Hypothetical Cost-Effectiveness or Cost-Benefit Analysis. Instead of pretending that an effect-size estimate can be struck in the absence of any robust effect-size information, this type of analysis simply ‘assumes’ one or more levels of effect-size and does the analysis. It is totally transparent about what is being done and leaves it up to the reader to work out whether they want to assume a particular level of effect-size. For those familiar with economic evaluation, this approach is, in effect, giving much more prominence to the sensitivity analysis which should be included in any good economic evaluation.
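To make the idea concrete, here is a minimal sketch in Python of how an assumed-effect-size cost-effectiveness calculation might be laid out. It is not taken from the guide itself; all of the figures, names and the simple cost-per-outcome formula are assumptions invented purely for illustration. The point is that the analyst assumes several effect-size levels and reports the cost-effectiveness ratio at each one, rather than pretending a single robust estimate exists.

```python
# Purely illustrative sketch of an 'Assumed Cost-Effectiveness Analysis'.
# All figures are invented for illustration only.

program_cost = 500_000      # total program cost (currency units) - assumed
people_reached = 2_000      # number of participants - assumed

# Instead of claiming one 'known' effect size, assume several plausible levels.
# Here, effect size = additional successful outcomes per participant (0 to 1).
assumed_effect_sizes = [0.02, 0.05, 0.10, 0.20]

def cost_per_outcome(cost, reached, effect_size):
    """Cost per additional successful outcome, given an assumed effect size."""
    additional_outcomes = reached * effect_size
    return cost / additional_outcomes

print("Assumed effect size -> cost per additional outcome")
for es in assumed_effect_sizes:
    ratio = cost_per_outcome(program_cost, people_reached, es)
    print(f"{es:6.0%} -> {ratio:12,.0f}")
```

The reader can then decide which, if any, of the assumed effect-size levels they are prepared to accept – which is, in effect, making the sensitivity analysis the centerpiece of the report rather than an afterthought.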

Anyway, check out the article. You can comment on whether you think it is useful, and how it could be improved, over on the DoView LinkedIn Community of Practice, where you can discuss it if you wish.

 

Paul Duignan, PhD. More info: OutcomesCentral.org. Follow on Twitter.com/paulduignan.

Organizational policies on evaluation

I recently had occasion to revisit a 2003 paper I wrote which included a list of what should be covered in an organizational evaluation policy. My list is as follows:

  • The evaluation models that will be used in the organization
  • Policies regarding, and opportunities for, staff training in evaluation
  • Sources of, and procedures for, obtaining technical evaluation assistance
  • Procedures and stakeholder consultation standards for evaluation planning and sign-off
  • Procedures and consultation processes regarding cultural issues
  • Guidelines on the typical scope and type of evaluation for different sizes and types of programs
  • Guidelines on the use of internal and external evaluators
  • Ethical and other related considerations
  • Policies about disclosure of evaluation information.

Duignan, P. (2003). Mainstreaming evaluation or building evaluation capability? Three key elements. In Barnette, J. J., & Sanders, J. R. (Eds.), The Mainstreaming of Evaluation. New Directions for Evaluation, 99 (Fall 2003), 7–32. That particular list is on p. 18 of the article.

Paul Duignan, PhD. More info: OutcomesCentral.org. Follow on Twitter.com/paulduignan.

Moving past the debate about randomized experiments

A colleague, Bob Williams, recently drew attention to articles in the New Yorker about the use of randomized experiments, and particularly to one by an economist advocating their widespread use in a range of program areas.

I’ve been involved in a number of seemingly endless discussions and presentations about the pros and cons of randomized experiments and the rise of what are being called the Randomistas – those advocating for a much wider use of randomized experiments. In this post I want to talk about how we can move beyond these debates. Continue reading

Theory of Change Versus Theory of Action

What’s the difference between a Theory of Change and a Theory of Action? I’m just clarifying my thoughts on this issue and how it relates to my work on how we conceptualize outcomes models (logic models) within outcomes theory. In summary, at the moment – apart from a Theory of Action just being an outcomes model drawn at a lower level – I can’t see a major difference. However, I’m happy to be contradicted on this and will change my view if there are convincing arguments for making the distinction. My current thinking is set out below. Continue reading

The evolution of the logic model

I’ve just posted an article on the evolution of the logic model within evaluation. Over the last couple of decades, increasing numbers of evaluators have started using logic models. For those not familiar with them, logic models are simply tabular or visual representations of all of the lower-level steps needed to achieve high-level outcomes for a program, organization or other intervention. They go by different names, for instance: program logics, intervention logics, results maps, theories of change, program theories, results hierarchies, strategy maps, end-means diagrams and so on. A traditional way of drawing logic models has evolved (known as the inputs, outputs, intermediate outcomes, final outcomes structured logic model) which often attempts to restrict logic models to a single page; a minimal sketch of this traditional structure is given below. However, many evaluators are now breaking away from the constraints of this traditional format and exploring various alternative ways of representing logic models. Continue reading
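For readers who like to see structure laid out concretely, the traditional structured logic model can be thought of as nothing more than a fixed set of named levels with items listed under each. The level names below follow the traditional format just described; the example program content is invented purely for illustration and is not drawn from the article.

```python
# Purely illustrative sketch of the traditional single-page structured logic model:
# a fixed set of levels, each holding the steps or results placed at that level.
# The example program content is invented for illustration only.

logic_model = {
    "inputs": ["funding", "trained staff", "venue"],
    "outputs": ["workshops delivered", "participants attending"],
    "intermediate outcomes": ["participants' knowledge and skills improve"],
    "final outcomes": ["participants' long-term wellbeing improves"],
}

for level, items in logic_model.items():
    print(f"{level.title()}:")
    for item in items:
        print(f"  - {item}")
```

The alternative approaches discussed in the article typically break out of this fixed structure, for example by allowing more levels, more boxes and explicit links between them rather than forcing everything onto a single page.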

Developing an M&E plan using a visual approach

On various lists I am on, I often see requests from people wanting to develop what is called an M&E plan. This terminology is often used in the international development area; it refers to a Monitoring and Evaluation Plan. The way these requests are made makes me think that the way you should monitor and evaluate different projects varies a great deal. Continue reading

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that, by using a central tool of outcomes work – outcomes models (a particular type of visual model of all of the high-level outcomes the organization is seeking to achieve and the steps it is taking to achieve them) – organizations and programs can do both at the same time. Continue reading

Using an outcomes modeling approach to action research

I will get back to blogging on the Australasian Evaluation Society Conference when I get a moment (it may not be for a few days). In the meantime, I had to prepare an article about using outcomes modeling as a basic tool within an action research approach. Because outcomes modeling – developing visual outcomes models (a type of logic model or theory of change model) according to the outcomes theory set of standards for building such models – is a generic process, such models can be used for a wide range of purposes. They can, for instance, be used within an action research approach. Action research is an approach which attempts to work in cycles of research/action/research. It has the great virtue of ensuring that research is connected to action and action is connected to research.
Continue reading