Category Archives: Doing evaluation more efficiently

On my way to the Australasian Evaluation Society Conference in Canberra, the taxi driver bringing me in from the airport asked me what I do. When I explained that I ‘measure whether programs, often government programs, work or not, so the taxpayer gets value for money’, he was right into the concept, although I think he thought I was overcomplicating things a little. He said: ‘shouldn’t it just be a matter of using statistics to measure whether things about a program are getting better or not?’ What he was talking about was one aspect of monitoring and evaluation – an important piece – but just one of the Five Building Blocks I see lying behind all monitoring and evaluation systems (outcomes systems). Continue reading
Can an exhaustive list of impact evaluation designs be developed, or is my mission on this futile?
I have set out on a mission, as part of outcomes theory, to develop an exhaustive list of impact/outcome evaluation designs – evaluation designs which claim that changes in high-level outcomes can be attributed to a particular intervention. If we could pull off a list that most people are happy with, it would be very powerful. First, it could be used in evaluation planning to work out whether all of the possible impact evaluation designs have been assessed for their appropriateness, feasibility and/or affordability. At the moment, I suspect almost every evaluation planner walks around wondering whether there is some impact evaluation design they have not considered.
Continue reading
Untangling evaluation terms – discussing evaluation 'types' with clients often more useful than evaluation 'approaches'
I have just put up an outcomes theory article, based on a book chapter I wrote some time ago, dividing the terminology used in evaluation into five groups of terms covering five different ‘aspects’ of evaluation. These aspects are: evaluation approaches; evaluation types (based on the purpose of the evaluation); evaluation methods; evaluation information analysis techniques; and evaluation designs. Approaches tend to combine a range of different elements, including general approaches to evaluation, philosophy-of-science views and, for instance, quasi-political perspectives on the relationship between empowered and disempowered groups. Evaluation approaches are often not conceptually mutually exclusive. They include such things as Scriven’s Goal-Free Evaluation, Patton’s Utilization-Focused Evaluation and Fetterman’s Empowerment Evaluation. While I find these very interesting for stimulating my own thinking about evaluation, I often (though not always) do not find them very useful when talking to a client about a specific evaluation.
Continue reading
Formative evaluation versus impact/outcome evaluation
In response to a posting on one of my outcomes theory articles by Marcus Pilgrim, who ran the recent YEN Evaluation Clinic in Damascus, I have worked up an article on the difference between formative, process and impact/outcome evaluation. As Marcus points out in his posting, the term formative (or developmental) evaluation is not widely known in all sectors. Formative evaluation is directed at optimizing program implementation. Process evaluation attempts to describe the course and context of a program. Impact/outcome evaluation looks at the intended and unintended, positive and negative outcomes of a program and whether they can be attributed to the program. Continue reading
Mapping indicators onto a logic model is obvious – but why haven't we always done it?
I was running a workshop today teaching policy analysts the basics of my approach to program evaluation (Easy Outcomes). When I talked about the importance of always mapping indicators back onto a visual model, one of the participants commented that once you do it, it is so obviously the right approach that you cannot understand why we have not been doing it for years.
The contrast is with the way we almost always approach indicator work: eyeballing a list or table of indicators and asking a group of busy people sitting around a table – ‘does this list of indicators look any good?’
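To make the contrast concrete, here is a minimal sketch of the mapping idea – not part of Easy Outcomes itself, and using entirely hypothetical step and indicator names – showing what tying each indicator back to a step in a visual outcomes model buys you: steps with no indicators, and over-measured steps, become visible in a way a flat list never shows.

```python
# Minimal sketch: mapping indicators onto the steps of a simple outcomes/logic model.
# All step and indicator names are hypothetical, for illustration only.

logic_model_steps = [
    "Staff trained",
    "Services delivered",
    "Client skills improved",
    "Clients employed",
]

# Each indicator is mapped back onto the step of the model it measures.
indicator_map = {
    "Number of staff completing training": "Staff trained",
    "Number of client sessions delivered": "Services delivered",
    "Client satisfaction score": "Services delivered",
    "Clients in employment after six months": "Clients employed",
    # Note: nothing here maps to "Client skills improved".
}

# Count indicators per step so gaps and over-measured steps stand out.
coverage = {step: 0 for step in logic_model_steps}
for step in indicator_map.values():
    coverage[step] += 1

for step in logic_model_steps:
    flag = "  <-- no indicator" if coverage[step] == 0 else ""
    print(f"{step}: {coverage[step]} indicator(s){flag}")
```

Eyeballed as a flat list, those four indicators all ‘look good’; mapped onto the model, the unmeasured middle step is immediately obvious.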
Continue reading
Randomistas Rule
I have just read and commented on an interesting article referred to on the 3IE site – a site dedicated to improving evidence about what works in international development. The article, by Martin Ravallion, is about the rise of the Randomistas in international development economics. Randomistas are those who promote much greater use of randomized trials to work out what works in international development. It is a good article, and it points out that randomized trials are not feasible for many important types of development interventions. The same debate is occurring in many sectors at the moment, and it has run on and off in the evaluation field for many years. My take is that we need to develop some underlying principle which we can debate and generally agree on, so that we do not have to keep having this argument endlessly without seeming to make much progress.
Continue reading
Damascus – YEN Evaluation Clinic
Apologies for not blogging for a while – I’ve been doing a lot of travel and lots of other work, but that’s really no excuse. Maybe I just got all blogged out. What with Knolling, Blogging here and Twittering, maybe it all just got too much. Anyway, I’m back in the saddle now, as they say! Last month I was fortunate to be an evaluation expert at the YEN Evaluation Clinic in Damascus. YEN is the Youth Employment Network – an International Labour Organization, World Bank and United Nations collaboration. A site for the evaluation clinic has been set up at yenclinic.groupsite.co.
The Evaluation Clinic took two examples of youth employment programs and worked through designing an impact evaluation for them. It was a fascinating experience. I’ll blog another day about what it was like being the sole psychologist evaluator working with five economist evaluation specialists (from the ILO and the World Bank)! Continue reading
The good old ‘one pager’ constraint again
I’ve been up against the good old ‘one pager’ constraint again in a couple of contexts recently. This is where there is a demand that the outcomes for an organization or program be ‘put on one page’. It comes in various forms and is often a demand from senior management, or a perceived demand from them, to ‘keep things simple’. Now, there’s nothing wrong with the idea of summarizing things, and one-page overviews play a role in that. But such overviews should be seen as one of a range of different summaries and products produced by an underlying outcomes system, not the beginning and end of an outcomes effort. Continue reading
Tracking jobs created under the U.S. Recovery Act – when should the attempt at measurement be abandoned?
The default expectation in at least some sections of the U.S. public sector seems to be that it should always be feasible and affordable both to measure and to attribute the results of interventions. Here, attribution means being able to actually demonstrate that a change in an outcome was caused by a particular intervention rather than by other factors (see here for more on attribution). The recent U.S. Recovery Act is a case in point. While it’s reasonable to start from the position that you should routinely assess the possibility of measuring and attributing changes in the outcomes of particular interventions, you can’t simply assume that this will always be feasible or affordable. Clinging to such an assumption where it does not hold can result in either measuring an outcome with data that is not accurate, or acting as though what you are measuring (even if it is an accurate measurement of a change in an outcome) is demonstrably attributable to a particular program when in fact it may not be. Continue reading
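As a footnote to the attribution point above: here is a toy sketch, with entirely hypothetical numbers (not drawn from any Recovery Act reporting), of the difference between a measured change in an outcome and the part of that change that can plausibly be attributed to an intervention, using a simple comparison-group logic.

```python
# Toy illustration (hypothetical numbers only): a measured change in an outcome
# is not the same thing as a change attributable to the intervention.

# Jobs in a hypothetical region before and after an intervention.
jobs_before = 100_000
jobs_after = 104_000
measured_change = jobs_after - jobs_before                  # +4,000 jobs reported

# A comparison region without the intervention over the same period.
comparison_before = 100_000
comparison_after = 103_000
background_change = comparison_after - comparison_before    # +3,000 from other factors

# A simple difference-in-differences-style estimate of the attributable change.
attributable_change = measured_change - background_change   # +1,000

print(f"Measured change:     {measured_change:+,}")
print(f"Background change:   {background_change:+,}")
print(f"Attributable change: {attributable_change:+,}")
```

Even this sketch assumes a credible, genuinely comparable comparison group exists – exactly the kind of feasibility and affordability question that should be asked before assuming attribution is always possible.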
Impact evaluation on full program roll-out versus just on piloting – two paradigms
I’ve just posted an article on the two paradigms regarding impact/outcome evaluation and full program roll-out. The distinction is between designing an evaluation which can provide impact/outcome information about the full program roll-out, and a paradigm in which you do impact/outcome evaluation only on a pilot and then, for the full roll-out, simply make sure that best practice is implemented. I was once involved in the evaluation of an overall program which had over 900 component programs. The way we went about evaluating it was, in my view, wrong. Continue reading