Distinguishing Evaluation from Monitoring (and other processes such as Performance Management and Assessment)

I have blogged before about what I see as the wrong way to approach the problem of differentiating evaluation from related activities (e.g. monitoring, performance management, assessment). Last time it was about differentiating evaluation from quality assurance. The wrong way is to spend a lot of time attempting to distinguish between processes using high-level terms such as 'evaluation' and 'performance management'. A lot of ink and talk has been spilled trying to do that over the years. In the case of monitoring and evaluation, a simple distinction can be drawn: monitoring is about routinely collected information, while evaluation is about more 'one-off' studies. This distinction is sometimes useful, but it is not much help for any more sophisticated discussion of the differences between evaluation, monitoring, performance management and the related processes people use in particular situations.

A more useful approach is set out in an article I have just written on Distinguishing Evaluation from Other Processes. The problems with trying to differentiate the high-level terms are: 1) we cannot control the way stakeholders use them in the field; and 2) the high-level terms are not fully mutually exclusive, that is, sometimes the same activities are undertaken within processes even though the processes themselves (e.g. evaluation, performance management) are thought to be distinct. Continue reading

Top to bottom or left to right? Logic model conventions

There are various conventions for visualizing logic models (or outcomes models, as I call them to include the wide range of different models evaluators work with: program logics, logic models, outcomes hierarchies, theories of change, program theories, strategy maps, ends-means diagrams, results chains, etc.).

I have put up a knol article in the Outcomes Theory Knowledge Base which talks about why I think there are advantages to a top-to-bottom rather than a left-to-right approach to drawing logic models. Of course, if you are in an organization where you are told to draw left-to-right logic models, then go for it; some people who use DoView software do use it to draw left-to-right models. The article is here. Continue reading
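If you ever find yourself sketching a model in code rather than in DoView, the convention choice comes down to a single layout setting. Below is a minimal, hypothetical sketch using the Python graphviz package (not DoView, and the box names are invented purely for illustration): switching rankdir between "TB" and "LR" flips the same model between a top-to-bottom and a left-to-right layout.

```python
# A minimal sketch (not DoView) of the top-to-bottom vs left-to-right choice,
# using the Python 'graphviz' package. Box names are invented for illustration.
from graphviz import Digraph

model = Digraph("outcomes_model")
model.attr(rankdir="TB")          # "TB" = top-to-bottom; use "LR" for left-to-right
model.attr("node", shape="box")

# A tiny means-ends chain: each edge reads "contributes to".
model.edge("Deliver training sessions", "Participants gain skills")
model.edge("Participants gain skills", "Participants use skills at work")
model.edge("Participants use skills at work", "Improved organizational performance")

model.render("outcomes_model", format="pdf", cleanup=True)  # writes outcomes_model.pdf
```

(If you want the final outcome at the top with the arrows flowing upward toward it, graphviz also accepts rankdir="BT".)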

Evaluation Advisory Committees – getting it right at the start

Recently I attended the first meeting of an Evaluation Advisory Committee and I was reminded once again of the importance of setting up evaluations properly at the start. I have set out below the issues which need to be dealt with when such committees are set up. This list is drawn from my experience on a number of these types of committees. The issues you need to deal with include: Continue reading

Sharing the pain – crazy indicators, targets and funder reporting systems

This blog post is a follow-up to an earlier posting on my blog. In a comment on that posting, M&Egirl outlined problems she was having with a funder demanding targets that are meaningless, and asked for more comment on this obsession with targets on the part of funders.

The program she is working on is a Gender Related Violence program, which for the sake of this discussion I’ll presume involves women who have been subject to gender-related violence receiving some sort of intervention to help them be safe and move beyond the trauma they have suffered. Because indicators and targets are such an important issue, I thought I’d do a further posting on the topic. If you have a moment, have a quick read of the original posting and M&Egirl’s comment before reading this blog posting. Continue reading

Standardized visual evaluation plans – quick and effective

I’ve not had much time to blog recently due to building a number of large outcomes models for public sector organizations; having input into the further development of DoView; and presenting at international evaluation conferences on Easy Outcomes, DoView and related evaluation and outcomes topics. A lot has been happening, though. From version 1.14, DoView is able to create web page versions of its visual outcomes models. I’ll do several postings showing how this new feature can be used. The first point is that, once an outcomes model has been built in DoView, the user can quickly create a web page version of the same model and have it put up on an intranet or the internet. You can see (and use) a number of examples at OutcomesModels.org. The second great thing is that you can now produce visual evaluation plans that will save you a great deal of time. I delivered a paper on this at the recent European Evaluation Society Biennial Conference in Lisbon. Continue reading

A Systematic Outcomes Analysis framework for psychotherapy evaluation

In my last blog posting (which you should read before this one) I talked about using Systematic Outcomes Analysis to define the basic tasks one needs to do in quality assurance, monitoring and evaluation, and how this can avoid the need for a protracted theoretical discussion about the difference between quality assurance and program evaluation. I was using the example of an illustrative Systematic Outcomes Analysis framework I set up, based on an outcomes logic model for psychotherapy, which I’ve posted on the Outcomes Models site. Here’s the PDF of the DoView file. Using the Systematic Outcomes Analysis approach, indicators and evaluation questions are mapped onto the outcomes logic model (indicators are marked with a yellow icon and evaluation questions with a green circular icon). This blog posting looks in more detail at ways stakeholders can use such a framework once it’s been developed. Continue reading
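To make the mapping a bit more concrete for readers who think in data structures, here is a minimal, hypothetical Python sketch (not the actual psychotherapy framework; every step name, indicator and question below is invented) of what 'indicators and evaluation questions mapped onto the steps of an outcomes model' amounts to.

```python
# A minimal, hypothetical sketch of mapping indicators and evaluation questions
# onto the steps of an outcomes model. All names are invented examples and are
# not taken from the actual psychotherapy framework.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    indicators: list = field(default_factory=list)             # routinely collected measures
    evaluation_questions: list = field(default_factory=list)   # one-off study questions

model = [
    Step("Clients attend therapy sessions",
         indicators=["Number of sessions attended per client"]),
    Step("Clients report reduced distress",
         indicators=["Change in a standard distress score"],
         evaluation_questions=["Is the reduction attributable to the therapy?"]),
    Step("Clients function better in daily life",
         evaluation_questions=["Do gains persist six months after therapy ends?"]),
]

# A simple coverage check: which steps have neither an indicator nor a question?
uncovered = [s.name for s in model if not s.indicators and not s.evaluation_questions]
print("Steps with no indicator or evaluation question:", uncovered or "none")
```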

Avoiding the question: Defining quality assurance versus program evaluation

Sometimes it’s more useful to avoid initially answering a question that’s posed in a particular way, because there’s a better way of addressing the concern that lies behind the question. Such is the case if you’re ever asked to define the difference between quality assurance (or monitoring) and program evaluation.

Seeing the question as a theoretical one and attempting to find a definition which works has some similarities to the situation where you’re building a house and someone keeps wanting you to stop and define, from a theoretical point of view, the difference between the kitchen and the dining room. Now, some people do stuff in the dining room that others do in the kitchen, and some do stuff in the kitchen that others do in the dining room. Still other people don’t really have any theoretical problems because they have a kitchen/dining area where they do both kitchen and dining room stuff.

A more fruitful way of working with the question of the difference between quality assurance (or monitoring) and program evaluation is to attempt to identify all of the stuff (tasks) that you would do under each of them. Once you’ve done that, you may find you don’t need to spend much time defining the difference between the two, provided everybody concerned is clear about which of the underlying tasks are, and are not, being done, and by whom. Continue reading
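As a purely illustrative sketch (the task names and role assignments below are invented, not taken from any real program), the 'list the tasks first' move can be as modest as a table of tasks tagged with who is doing each one, plus a check for anything left unassigned:

```python
# A purely illustrative sketch of "identify the underlying tasks first, then
# check who is doing what". Task names and assignments are invented examples.
tasks = {
    "Check services are delivered to the agreed standard":           "QA team",
    "Collect routine output and outcome indicators":                  "monitoring staff",
    "Investigate whether outcomes can be attributed to the program":  "external evaluator",
    "Feed findings back to front-line staff":                         None,  # not yet assigned
}

unassigned = [task for task, who in tasks.items() if who is None]
if unassigned:
    print("Tasks nobody has picked up yet:")
    for task in unassigned:
        print(" -", task)
```

Whether the unassigned tasks end up labelled 'quality assurance' or 'evaluation' matters far less than someone noticing that they are not being done.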

Avoid being an outcomes model 'Go-Between'

A while ago a colleague recounted to me how they’d ended up pulling out their hair because they found themselves in a ‘Go-Between’ role when drawing an outcomes model (also called program logics, results chains, strategy maps, ends-means diagrams). You need to try to avoid this at all costs, although when dealing with high-level stakeholders it’s often not easy to do so. I found myself in this role on a major project a while ago and I certainly didn’t enjoy it.
Continue reading

What's the Easy Outcomes site all about?

I’ve realized that I’ve not put up a post about the Easy Outcomes site even though it’s been up for a few weeks. Easy Outcomes takes the Systematic Outcomes Analysis approach and applies it using DoView outcomes software. We developed it in response to feedback that the Systematic Outcomes Analysis site is a bit intense for those who just want to work out how to develop a good evaluation plan, but don’t want to get buried in the detail of the approach too soon. Continue reading

New Systematic Outcomes Analysis case study – evaluating an academic research rating scheme

I’ve recently put up the link to another case study on the Systematic Outcomes Analysis site. This is a report detailing an evaluation and monitoring plan for the Performance Based Research Fund (PBRF). The PBRF is a national academic research output assessment system and the monitoring and evaluation framework was produced for the government organization responsible for the tertiary sector – the Tertiary Education Commission.

The report is a good example of a comprehensive use of Systematic Outcomes Analysis (in the report the approach went by one of its earlier names – REMLogic). One of the interesting things about the PBRF is that it’s an evaluation system in its own right and developing an evaluation framework for an evaluation system presents an interesting technical challenge. I’ll put up a blog post about that challenge some time in the future. Continue reading