Identifying and communicating best practice is widely recommended in many sectors and disciplines. But I’ve sometimes wondered, as I’ve sagely recommended in a serious voice, ‘I think that we should use an approach based on identifying and implementing best practice here’, exactly what best practice is. I think that doing it is often a good idea, and I can work out how to identify it and share it (I will blog about that tomorrow), but what I’m not clear on is exactly how we define ‘best’ in the term ‘best practice’. It’s not clear whether best practice consists of: 1) practices that practitioners, from their own experience, believe to be feasible and ‘useful’ to implement; or 2) practices which have been proven to improve high-level outcomes (through making a strong outcome/impact evaluation claim of some sort, such as is made using some of the types of designs listed here). Continue reading
Category Archives: Impact evaluation
What can be claimed about whether a program works or not from a logic model?
We sometimes hear things like: “a logic model was used to show that the program works”. I’m interested in tidying up such talk so that we are very clear about exactly what is being claimed in regard to showing whether or not a program works. I’ve put up a new article in Knol which deals with the question of the types of claims we can make in regard to logic models (I call them outcomes models) and the types of arguments we can mount about whether or not programs work based on these claims. The article is here. It is rather intense, but I think that it’s important that, as evaluators, we get on top of this sort of thing. In the article, I set out three claims that can be made in regard to logic models (or sub-parts of logic models). These are: Continue reading
Ease of impact evaluation and bias within 'what works' evidence-based practice systems
Back on deck after Christmas and turning the mind back from eating and having a great time with relatives to outcomes and evaluation again. I’ve just put up an article in the Outcomes Theory Knowledge Base about the issue of evidence-based practice systems not taking into account variations in the ease of undertaking impact evaluation between program types.
There are many examples of evidence-based practice systems being set up (two classic examples are the Cochrane Collaboration in health care and the Campbell Collaboration in the social policy area) which attempt to use information from impact evaluation (evaluation which demonstrates attribution of improvements in high-level outcomes to particular programs). I think that such initiatives are generally a great idea. My article deals with a particular situation which sometimes arises and which needs to be addressed in any well-constructed ‘what works’ system. Continue reading
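To make the issue concrete, here is a minimal sketch (my own illustration, not drawn from the Outcomes Theory Knowledge Base article) of the kind of bias that can creep into a ‘what works’ list when program types differ only in how easy they are to impact-evaluate. All of the program counts, effectiveness rates, and evaluability probabilities below are hypothetical.

```python
# Hypothetical sketch: two program types are equally effective, but one is much
# easier to impact-evaluate. A 'what works' list built only from completed
# impact evaluations ends up dominated by the easy-to-evaluate type.
import random

random.seed(0)

PROGRAM_TYPES = {
    # type: (true effectiveness, probability an impact evaluation is feasible)
    "easy_to_evaluate": (0.5, 0.9),
    "hard_to_evaluate": (0.5, 0.2),  # equally effective, but rarely evaluable
}

what_works_list = []
for program_type, (effectiveness, p_evaluable) in PROGRAM_TYPES.items():
    for _ in range(100):  # 100 hypothetical programs of each type
        evaluation_completed = random.random() < p_evaluable
        shown_effective = random.random() < effectiveness
        if evaluation_completed and shown_effective:
            what_works_list.append(program_type)

for program_type in PROGRAM_TYPES:
    count = what_works_list.count(program_type)
    print(f"{program_type}: {count} programs on the 'what works' list")
```

Because the easy-to-evaluate programs generate far more completed impact evaluations, they dominate the resulting list even though both types are equally effective by construction; that is the sort of distortion a well-constructed ‘what works’ system needs to guard against.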
Attribution and President Bush's approval rating
I’ve been doing a lot of thinking about indicators and attribution these days. I think that a lot of problems (particularly misinterpretations of how well or badly parties are doing within outcomes systems of all types) arise because of confusion between attributable and not-necessarily-attributable indicators. The underlying Five Building Blocks Diagram which is used in outcomes theory makes a distinction between attributable and not-necessarily-attributable indicators. The Washington Post today had an interactive showing President Bush’s approval rating over time. At various points, the graph had labels showing what was happening at the time, presumably to help the reader work out why his rating went up and down at the points it did. The biggest jump was around September 11, when his approval took a massive leap from something like 56% to 92% (reading off the graph). Below is a section from the interactive. Continue reading
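As a rough illustration of the distinction (my own sketch, not the Five Building Blocks Diagram itself), here is how an indicator set might record whether changes in each indicator can reasonably be attributed to a party’s actions. Apart from the approval rating example discussed above, the indicator names are hypothetical.

```python
# Hypothetical sketch of tagging indicators as attributable or
# not-necessarily-attributable when setting up an outcomes/indicator system.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    attributable: bool  # True if changes can reasonably be credited to the party
    note: str = ""

indicators = [
    Indicator("press conferences held", attributable=True,
              note="Directly under the administration's control."),
    Indicator("presidential approval rating", attributable=False,
              note="Moves with external events (e.g. September 11), so changes "
                   "are not necessarily attributable to anything the party did."),
]

for ind in indicators:
    kind = "attributable" if ind.attributable else "not-necessarily-attributable"
    print(f"{ind.name}: {kind}. {ind.note}")
```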
Impact evaluation's day in the sun (Part III)
[Please read Part I and Part II first.] Now that I have got to Part III of this posting on the impact evaluation debate, the time has come for me to front up and tell you how I think things should be done if I were in charge of the world. First, the concept of a shoot-out between impact evaluation and other types of evaluation is about as sensible as trying to debate whether a fork or a knife is the better kitchen utensil. It obviously depends on what you want to do with it. Of course, it might still be reasonable to argue that there are not enough knives in the kitchen at a particular point in time and that we should go out and encourage knife collecting rather than just getting more and more forks. Continue reading
Impact evaluation's day in the sun (Part II)
(See the previous posting, Part I, first.) Progressing the debate around impact evaluation means unpacking some of the different issues involved in it. In doing this, it should not be assumed that those on various sides of the debate will always take the same position on each of the sub-issues. In my doctorate (Duignan, 1997) I did this type of analysis for the positivist versus constructivist debate which was hot for a while within evaluation circles. Drawing on that analysis, we can now look at the sub-issues which underlie the impact evaluation debate, because there are a number of similarities with the earlier debate. The sub-issues which underlie the impact evaluation debate, and two contrasting positions that can be taken on each of them, include the following: Continue reading
Impact evaluation's day in the sun (Part I)
When you’ve been around for a while you see things come and go. Impact evaluation (evaluation looking at the longer-term, or ‘higher-level’, outcomes within a visual outcomes model) is currently enjoying a resurgence (sometimes it’s just called plain outcome evaluation). I’ve been attending a number of international evaluation conferences recently and had the pleasure of going to several workshops from experts on impact evaluation: Gary Henry from the University of North Carolina ran a workshop at the United Kingdom Evaluation conference in Bristol, and Charles Reichardt from the University of Denver ran a workshop on estimating the effects of interventions at the American Evaluation Conference in Denver. I thought both were excellent and showed the level of sophistication some of the experts thinking about impact/outcome evaluation are going to in order to do the job properly. I will blog in more detail about what they were saying a little later. However, these presentations were little oases of calm in a wider and more heated debate in evaluation circles about the resurgence of impact evaluation. Continue reading