Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on LinkedIn, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been saying for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The argument within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources are better spent on improving a program’s planning than on measuring the fact that it will often fail to achieve its outcomes because it was poorly planned.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This emerging integrated approach needs an underlying theory that will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts and so on. The work I’ve been doing on outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are going to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the kind of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said – ‘there is nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below, which shows it as a meso-level theory reaching across strategic planning, monitoring, evaluation and related disciplines.

[Diagram: outcomes theory as a meso-level theory spanning strategic planning, monitoring, evaluation and related disciplines]
(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people working out in the field who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach (http://doview.com/plan). Using the approach means that they will avoid many of the technical problems highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the right way, for instance as ‘DoViews’) provide an ideal foundation for the fully integrated approach to planning, monitoring and evaluation which many are now seeking (http://doview.com/plan/draw.html).

Distinguishing Evaluation from Monitoring (and other processes such as Performance Management and Assessment)

I have blogged before about what I see as the wrong way to approach the problem of differentiating evaluation from related activities (e.g. monitoring, performance management, assessment); last time it was about differentiating evaluation from quality assurance. The wrong way is to spend a lot of time attempting to distinguish between processes using high-level terms such as ‘evaluation’, ‘performance management’ and so on. A lot of ink and talk has been spent on that over the years. In the case of monitoring and evaluation, a simple distinction can be drawn: monitoring is about routinely collected information, while evaluation is about more ‘one-off’ studies. This distinction is sometimes useful; however, it is not much help for any more sophisticated discussion of the differences between evaluation, monitoring, performance management and the related processes people use in particular situations.

A more useful approach is set out in an article I have just written on Distinguishing Evaluation from Other Processes. The problems with trying to differentiate high-level terms are: 1) we cannot control the way stakeholders use them in the field; and 2) these high-level terms for processes are not fully mutually exclusive – sometimes the same activities are undertaken within processes even though the processes are thought to be distinct (e.g. evaluation, performance management, etc.).

Standardized visual evaluation plans – quick and effective

[Screenshot: Easy Outcomes evaluation plan web page]
I’ve not had much time to blog recently due to building a number of large outcomes models for public sector organizations; having input into the further development of DoView; and presenting at international evaluation conferences on Easy Outcomes, DoView and related evaluation and outcomes topics. A lot has been happening though: from version 1.14, DoView can create web page versions of its visual outcomes models. I’ll do several postings showing how this new feature can be used. The first point is that, once an outcomes model has been built in DoView, the user can quickly create a web page version of the same model and put it up on an intranet or the internet. You can see (and use) a number of examples at OutcomesModels.org. The second is that you can now produce visual evaluation plans which will save you a great deal of time. I delivered a paper on this at the recent European Evaluation Society Biennial Conference in Lisbon.

A Systematic Outcomes Analysis framework for psychotherapy evaluation

[Screenshot: psychotherapy outcomes model]
In my last blog posting (which you should read before this one) I talked about using Systematic Outcomes Analysis to define the basic tasks one needs to do in quality assurance, monitoring and evaluation, and how this can avoid the need for a protracted theoretical discussion about the difference between quality assurance and program evaluation. I used the example of an illustrative Systematic Outcomes Analysis framework, based on an outcomes logic model for psychotherapy, which I’ve posted on the Outcomes Models site. Here’s the PDF of the DoView file. Using the Systematic Outcomes Analysis approach, indicators and evaluation questions are mapped onto the outcomes logic model (indicators are marked with a yellow icon and evaluation questions with a green circular icon). This blog posting looks in more detail at ways stakeholders can use such a framework once it’s been developed.
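To make that mapping a little more concrete for readers who think in data structures, here is a minimal sketch in Python of how indicators (the ‘yellow icon’ items) and evaluation questions (the ‘green icon’ items) might be attached to the steps of an outcomes model. The step names, indicators and questions are invented for illustration; this is not DoView’s file format or any official Systematic Outcomes Analysis schema.

```python
# Illustrative only: a hypothetical way of attaching indicators and
# evaluation questions to steps in an outcomes model. All names below
# are invented; this is not the DoView file format.
from dataclasses import dataclass, field


@dataclass
class OutcomeStep:
    name: str
    indicators: list[str] = field(default_factory=list)            # 'yellow icon' items
    evaluation_questions: list[str] = field(default_factory=list)  # 'green icon' items


model = [
    OutcomeStep(
        name="Clients engage in therapy sessions",
        indicators=["Number of sessions attended per client"],
        evaluation_questions=["Why do some clients drop out early?"],
    ),
    OutcomeStep(
        name="Clients' symptoms reduce",
        indicators=["Pre/post scores on a standard symptom measure"],
        evaluation_questions=["Can symptom change be attributed to the therapy?"],
    ),
]

# A quick coverage check: flag any step with no indicator or no evaluation question.
for step in model:
    if not step.indicators or not step.evaluation_questions:
        print(f"Gap at step: {step.name}")
```

A gap flagged this way is simply a step with no indicator or no evaluation question attached – exactly the kind of thing the visual version of the framework lets stakeholders spot at a glance.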

Avoiding the question: Defining quality assurance versus program evaluation

[Diagram: OIE basic diagram]
Sometimes it’s more useful to avoid initially answering a question posed in a particular way, because there is a better way of addressing the concern that lies behind it. Such is the case if you’re ever asked to define the difference between quality assurance (or monitoring) and program evaluation.

Seeing the question as a theoretical one and attempting to find a definition which works has some similarities to the situation where you’re building a house and someone keeps wanting you to stop and define, from a theoretical point of view, the difference between the kitchen and the dining room. Now, some people do stuff in the dining room that others do in the kitchen, and some do stuff in the kitchen that others do in the dining room. Still other people don’t really have any theoretical problems because they have a kitchen/dining area where they do both kitchen and dining room stuff.

A more fruitful way of working with the question of the difference between quality assurance (or monitoring) and program evaluation is to identify all of the tasks you would do under each of them. Once you’ve done that, and everybody concerned is clear about which of the underlying tasks are, and are not, being done and by whom, you can decide whether you really need to spend much time defining the difference between the two at all.
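As a rough illustration of working at the level of concrete tasks rather than labels, the sketch below (in Python, with task names and assignments that are entirely hypothetical) lists a task inventory for each process and simply compares the two lists:

```python
# Hypothetical task inventories; the point is to compare concrete tasks
# and who does them, rather than debate the labels 'quality assurance'
# and 'evaluation'.
quality_assurance_tasks = {
    "check sessions are delivered to protocol": "provider staff",
    "audit client records for completeness": "provider staff",
    "collect routine client feedback": "provider staff",
}
evaluation_tasks = {
    "collect routine client feedback": "external evaluator",
    "assess whether outcomes occurred": "external evaluator",
    "assess whether outcomes can be attributed to the program": "external evaluator",
}

shared = quality_assurance_tasks.keys() & evaluation_tasks.keys()
qa_only = quality_assurance_tasks.keys() - evaluation_tasks.keys()
eval_only = evaluation_tasks.keys() - quality_assurance_tasks.keys()

print("Done under both labels:", sorted(shared))
print("Quality assurance only:", sorted(qa_only))
print("Evaluation only:", sorted(eval_only))
```

Once the tasks are laid out like this, the overlap between the two labels becomes an empirical question about who is doing what, rather than a definitional one.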

Castles of sand – cost-benefit modeling

A while ago I came across a report of a cost-benefit analysis on climate change in which the author admitted that his model might come in for some heavy criticism because it didn’t include any cost for sea-level rise. How cruel and heartless of his critics. I don’t know who paid for this particular report or why it was done, so I don’t want to comment on it at all. However, cost-benefit analyses which leave out or minimize important costs are a well-used weapon in the lobbyist’s armory. Policy makers, the media and the public only have time to catch the bottom line – the cost will be this or that much – and then move on.
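A toy calculation, using entirely invented numbers, shows how a single omitted cost category (sea-level rise, say) can flip the headline figure that policy makers and the media take away:

```python
# Invented numbers, purely to illustrate how one omitted cost category
# can flip the headline result of a cost-benefit analysis.
benefits = 120.0          # claimed benefits ($bn)
included_costs = 90.0     # costs the model counts ($bn)
omitted_costs = 50.0      # e.g. sea-level rise, left out of the model ($bn)

net_benefit_reported = benefits - included_costs                # +30
net_benefit_full = benefits - (included_costs + omitted_costs)  # -20

print(f"Bottom line as reported:         {net_benefit_reported:+.0f}")
print(f"Bottom line with omission added: {net_benefit_full:+.0f}")
```

The arithmetic is trivial, which is exactly the problem: a positive bottom line can be manufactured simply by leaving an inconvenient cost out of the model.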

New Systematic Outcomes Analysis case study – evaluating an academic research rating scheme

I’ve recently put up a link to another case study on the Systematic Outcomes Analysis site. This is a report detailing an evaluation and monitoring plan for the Performance-Based Research Fund (PBRF). The PBRF is a national academic research output assessment system, and the monitoring and evaluation framework was produced for the government organization responsible for the tertiary sector – the Tertiary Education Commission.

The report is a good example of a comprehensive use of Systematic Outcomes Analysis (in the report the approach went by one of its earlier names – REMLogic). One of the interesting things about the PBRF is that it’s an evaluation system in its own right, and developing an evaluation framework for an evaluation system presents an interesting technical challenge. I’ll put up a blog post about that challenge some time in the future.

Potential challenges to Systematic Outcomes Analysis

Systematic Outcomes Analysis claims to provide a standardized approach to outcomes, monitoring and evaluation planning [see my previous post, Can outcomes, monitoring and evaluation planning be standardized?]. What challenges are people likely to make to this claim, and can they be answered? I’ve set out some of the major potential challenges below and provided some thoughts on each of them:

1. It is not possible to have a standardized approach to outcomes, monitoring and evaluation planning; every situation is unique.

This is the argument that evaluation is a ‘craft’ requiring a skilled evaluation planner to tailor an evaluation to fit each unique situation. How can a ‘cook-book’ standardized approach do justice to the complexity of real-world programs? This challenge should not be accepted until it has been shown to be true, and the best way to test it is to try out a system like Systematic Outcomes Analysis and see whether it does, in fact, fail when used to plan particular types of evaluations.

Can outcomes, monitoring and evaluation planning be standardized?

I was involved in an interesting discussion recently with a group of evaluators about whether outcomes, monitoring and evaluation planning can be standardized. In my experience, much evaluation planning starts from a blank slate, with evaluators and project staff sitting around wondering how they’re going to evaluate a specific program. Or, in other cases, for budgetary or other reasons, people who are not trained in evaluation have to work their way through basic evaluation texts trying to work out how to do an evaluation. This all takes a great deal of time. Does it have to be like this? I’m not sure that it does.

When an organization sets up an accounting system, you don’t get the feeling that accountants have to build the entire system from scratch. They simply put in place a number of basic building blocks and tailor them to the requirements of the particular organization. Why should monitoring and evaluation be any different? What I’ve been trying to do over a number of years in developing Systematic Outcomes Analysis is to develop just such a standardized approach.
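By way of illustration only, and in the same spirit as a standard chart of accounts, the sketch below shows how a set of reusable monitoring and evaluation ‘building blocks’ might be instantiated and then tailored to a particular program. The block names and the example program are my own shorthand, not a fixed part of Systematic Outcomes Analysis.

```python
# Illustrative sketch of reusable monitoring-and-evaluation 'building blocks',
# analogous to a standard chart of accounts tailored per organization.
# The block names and example program are invented.
STANDARD_BLOCKS = [
    "outcomes model (visual logic of the program)",
    "indicator set mapped onto the model",
    "evaluation questions mapped onto the model",
    "data-collection plan for each indicator",
    "reporting and accountability arrangements",
]


def draft_me_plan(program_name: str, tailoring: dict[str, str]) -> dict[str, str]:
    """Start from the standard blocks and overlay program-specific tailoring."""
    plan = {block: "standard template" for block in STANDARD_BLOCKS}
    plan.update(tailoring)
    return {"program": program_name, **plan}


plan = draft_me_plan(
    "Youth mentoring pilot",
    {"indicator set mapped onto the model": "add a school-attendance indicator"},
)
for block, status in plan.items():
    print(f"{block}: {status}")
```

The tailoring step is where the accountant-style efficiency comes from: most of the plan is standard, and only the genuinely program-specific parts need fresh thinking.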

DoView – outcomes visualization software released

We’ve just released our outcomes visualization software – DoView. We’ve been working on it for two years, out of frustration at the difficulty of finding software that is just right for building outcomes models. We have designed it to be an affordable, simple tool for those who want to build outcomes models (program logics, program theories, results chains, simple cause-effect models, strategy maps) for all sorts of purposes (strategic planning, monitoring, evaluation, evidence-based practice, etc.).

The most exciting part of this for me is finally having a tool I can use in real time during meetings when building outcomes models with stakeholders. We’ve designed it so that you can’t produce a model which can’t be clearly viewed on a data projector in a medium-sized room. Far too often I’ve tried to work out what was happening in an outcomes model presented on a data projector where you couldn’t read what was in the model because the font size was too small or there were causal-link lines all over the place.