An evaluation colleague, Patricia Rogers, commented on an earlier blog posting of mine in which I claimed that what I am trying to do is to make outcomes, monitoring and evaluation work ‘easier’. She challenged me on that idea and pointed out that what we often have to deal with is over-simplification in the way people work with outcomes, monitoring and evaluation. Her comment inspired me to work up an article on over-simplification in outcomes and evaluation, and once underway I realized just how many different ways there are in which people take over-simplified approaches to outcomes, monitoring and evaluation, and the problems these cause. Continue reading
Category Archives: Doing evaluation more efficiently
Being sector-centric not program-centric in deciding on program evaluation priorities
I have been blogging less in the last week or so due to being on holiday – posting will still be less frequent for a week or so. I was recently involved in assessing a set of project summaries to help determine which of them should be selected for more intensive evaluation input. This was not being done directly to determine project evaluation priorities; however, the exercise reminded me once again of the general issue of how we determine what types of evaluation should be undertaken for particular projects. My comments below are about the general issue rather than the particular selection exercise I was involved in. The information we typically use to work out what type of evaluation should be undertaken for a project includes the nature of the project, the proposed evaluation questions, and the proposed evaluation budget. I think that we need more than this when determining evaluation priorities. Continue reading
Intense analysis of the U.N. Results-Based Management System
I have just put up an Outcomes Theory Knowledge Base article which is an intense analysis of the United Nations Results-Based Management System. (It’s obscure work, but someone has to do it!) The exciting part is that it has let me road-test my new Outcomes Systems Checklist. This now provides a common framework for analyzing any outcomes system – outcomes systems being any system which attempts to identify, measure, attribute or hold parties to account for outcomes, or for the steps which are thought to lead to them. A 2008 report from the U.N. itself on its Results-Based Management System said that the system was ‘an administrative chore of little value to accountability and decision-making’.
Don't assume that impact evaluation should always be done
Impact evaluation – evaluation which looks at whether changes in high-level outcomes can be attributed to a particular program, organization or other intervention – is a particularly useful type of evaluation when done properly. It clearly tells us what works and what doesn’t, and this information can be used in decision-making about which programs should, and should not, be funded in the future. However, particularly at the present time, with all of the enthusiasm for evidence-based practice, many people mistakenly assume that impact evaluation should always be attempted for any program, organization or other intervention. Assuming this is a serious mistake. I’ve just put up an article in the Outcomes Theory Knowledge Base which sets out in detail why this is so, and how we should go about assessing when impact evaluation should be done. Continue reading
Making outcomes theory more concrete – checklist for assessing outcomes systems
Most normal people would think that it’s very, very obscure, but I’ve just put up a Checklist for Analyzing Outcomes Systems in the Outcomes Theory Knowledge Base and it’s a very exciting development. Up until now the Outcomes Theory Knowledge Base has consisted of a set of articles which outline various aspects of outcomes theory. Outcomes theory is a general theory which covers all types of outcomes systems. Outcomes systems are any type of performance management system, results-based system, monitoring system, evaluation system, outcomes-focused contracting system, or strategic planning system (the term even includes evidence-based practice systems). Such systems have, in the past, been seen as quite different types of things, with no common theory available to analyze them. Outcomes theory is based on the insight that we can theorize them as a common type of system and then use the theory to work out how such systems should best be structured. This approach becomes powerful the moment we can start applying it to actual real-world outcomes systems. This is the role of the checklist I’ve just developed. Continue reading
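To give a flavor of what a common framework for very different systems might look like, here is a minimal sketch in Python. The four criteria are only the functions named in the definition above (identifying, measuring, attributing, and holding parties to account for outcomes) – the actual Outcomes Systems Checklist is more detailed, and the two example systems are invented for illustration.

```python
# A minimal sketch: assessing different outcomes systems against one common
# checklist. The four criteria are only the functions named in the definition
# above; the real Outcomes Systems Checklist is more detailed.

CHECKLIST = [
    "identifies outcomes",
    "measures outcomes",
    "attributes outcomes to an intervention",
    "holds parties to account for outcomes",
]

def assess(system_name, features):
    """Report which checklist criteria a given outcomes system addresses."""
    print(f"Assessment of: {system_name}")
    for criterion in CHECKLIST:
        print(f"  {criterion}: {'yes' if criterion in features else 'NO'}")

# Invented examples: very different systems, one common framework.
assess("basic indicator monitoring system",
       {"identifies outcomes", "measures outcomes"})
assess("impact evaluation system",
       {"identifies outcomes", "measures outcomes",
        "attributes outcomes to an intervention"})
```

The point of the exercise is that a monitoring system and an impact evaluation system, usually discussed in quite separate literatures, can be run through exactly the same set of questions.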
What we are all on about – representing causal models
Whether we know it or not, a lot of us in evaluation, monitoring, social programs, philanthropy etc. spend a lot of time working with ‘causal models’. We call them all sorts of things – program justifications, rationales, program activities and objectives, logic models, logframes, intervention logics, strategy maps etc. – and most people who work with them don’t think of them as causal models. But that’s what they are, if we see causal models as simply an attempt to set out ‘what it is believed causes what in the world’. In the case of a program, the model is a model of the steps which you think the program needs to take in order to cause high-level outcomes to occur. We really should get our heads around the best way of representing such models, because at the moment I think there is a great deal of wasted talk and effort around all of this, and it distracts us from getting on with the job of implementing good programs as fast as possible. Every dollar or hour spent struggling with an inefficient way of representing our program is a dollar or hour wasted. Continue reading
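To show how little machinery is actually involved, here is a minimal sketch in Python of a causal model as a simple directed graph – each step lists the step(s) it is believed to cause. The step names are invented for illustration and are not from any particular program.

```python
# A causal model is, at bottom, a directed graph: each step points to the
# step(s) it is believed to cause. Step names here are purely illustrative.

causal_model = {
    "train staff": ["staff use new practice"],
    "staff use new practice": ["clients get better service"],
    "clients get better service": ["client outcomes improve"],
    "client outcomes improve": [],  # a high-level outcome, no further steps
}

def downstream(model, step, seen=None):
    """Collect every step/outcome a given step is believed to lead to."""
    seen = set() if seen is None else seen
    for effect in model.get(step, []):
        if effect not in seen:
            seen.add(effect)
            downstream(model, effect, seen)
    return seen

print(sorted(downstream(causal_model, "train staff")))
# ['client outcomes improve', 'clients get better service', 'staff use new practice']
```

Whether we draw this as a logframe, a strategy map or a diagram, the underlying content is just this set of ‘believed to cause’ links.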
Why just about every indicator system in the world needs to be fixed!
I’ve just posted a new article in the Outcomes Theory Knowledge Base on why it is essential to map indicators onto an underlying visual outcomes model. I blogged a little while ago about being wary of too-tidy indicator sets, and in the new article I explain why in detail. The vast majority of indicator systems in the world suffer from the problem set out in the article – they are just a straight list of indicators set out in tabular format. They give the user no way of seeing whether a number of important steps and outcomes are going unmeasured, and those using such systems remain blissfully unaware of this. In my view, all of these straight indicator sets need to be fixed. It is not particularly difficult; it just requires some work. How to draw the underlying outcomes models is set out in the outcomes model standards, and how to then use such models for indicator mapping and many other things is described in detail in the applied version of outcomes theory – Easy Outcomes. Continue reading
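As a minimal sketch of the kind of gap-check a straight indicator list can never give you, the Python fragment below maps an (invented) indicator set onto the (invented) steps of an underlying outcomes model and flags every step that no indicator measures. This is only an illustration of the idea, not the mapping method described in Easy Outcomes.

```python
# Mapping indicators onto the steps of an underlying outcomes model makes
# measurement gaps visible. Both the model steps and the indicators below
# are invented for illustration.

model_steps = [
    "training delivered",
    "practice changed",
    "service quality improved",
    "client outcomes improved",
]

# A 'straight list' of indicators, each mapped to the step it measures.
indicators = {
    "number of staff trained": "training delivered",
    "client satisfaction score": "service quality improved",
}

measured = set(indicators.values())
unmeasured = [step for step in model_steps if step not in measured]

print("Steps with no indicator:", unmeasured)
# Steps with no indicator: ['practice changed', 'client outcomes improved']
```

Presented as a bare list, those two indicators look perfectly tidy; it is only against the model that the unmeasured steps become visible.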
Using visual outcomes models to describe and communicate best practice
Yesterday I blogged about what is meant by the term ‘best practice’. As I said then, I think that there is some conceptual confusion out there about it, and I am not sure that I have yet tidied up my own thinking about it in a satisfactory way. However, the great thing is that, regardless of how we define it, identifying the types of things that people currently call best practice and communicating these between programs is a great idea. The most difficult thing in many cases is to get best practice actually applied on the ground after we have identified it. I have put up an Outcomes Theory Knowledge Base article (link to the article is at the bottom of this posting) on the issue, suggesting that an efficient way of describing and communicating best practice may be to use visual outcomes models (a type of logic model). Continue reading
Simplifying outcomes terminology – angels dancing on the head of a pin!
In the Middle Ages, churchmen occupied themselves with long discussions, sitting around the fire drinking fine wine and debating how many angels could dance on the head of a pin. It was tough, but someone had to do it! That was until someone called William of Occam came along. He is famous for spoiling the party with something called ‘Occam’s Razor’ – one way of describing the way he operated is that he did not want people to ‘multiply entities beyond necessity’ (see Wikipedia entry). What he was talking about was trying to keep things as simple as possible. This is the approach which is taken within outcomes theory. Continue reading
Tables versus visual models
Each day this week I am blogging on some of the themes that came up at a national philanthropy conference earlier in the week where I presented on the Easy Outcomes approach. There was some discussion in the conference session about the dynamic nature of what grantees do and how hard it is to capture such dynamism in any rigid system of evaluation. However, as is often the case, at the end of the day some tabular logic models were presented as an example of how to set out an evaluation. I am not criticizing their presentation, because this is currently standard practice in evaluation planning, and the use of some sort of logic model is always much better than not having one. However, in my view, we fall back on the use of tables because we do not realize that there are better ‘technologies of representation’ (i.e. visual outcomes models) that we could use. Continue reading
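To illustrate that this is a question of representation rather than of content, the same rows that would sit in a tabular logic model can be emitted mechanically as a visual model. The Python sketch below writes Graphviz DOT text (the rows are invented examples), which any DOT renderer will turn into a diagram with outcomes at the top.

```python
# The same information that sits in a tabular logic model can be emitted as
# a visual outcomes model. This writes Graphviz DOT source, which any DOT
# renderer can turn into a diagram. The rows are invented examples.

table_rows = [
    ("train staff", "staff use new practice"),
    ("staff use new practice", "clients get better service"),
    ("clients get better service", "client outcomes improve"),
]

def to_dot(rows):
    """Turn (cause, effect) table rows into Graphviz DOT source."""
    lines = ["digraph outcomes_model {", "  rankdir=BT;  // outcomes at the top"]
    for cause, effect in rows:
        lines.append(f'  "{cause}" -> "{effect}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(table_rows))
```

Nothing in the table is lost in the move; the visual form simply makes explicit the causal structure that the table keeps implicit.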