If there is one thing that causes a terrible amount of pain in outcomes land, it is the Non-Output Demonstrably Attributable Indicator Paradox. This paradox manifests itself as the demand to find an intermediate outcome which can be used for accountability purposes. The paradox comes into play when this quest is accompanied by a demand that such intermediate outcomes not also be outputs. Sometimes this demand is made explicitly, other times implicitly. Outcomes models which are structured into horizontal layers (outputs, intermediate outcomes, final outcomes) implicitly make this demand by requiring that a step be placed in either the outputs or the intermediate outcomes layer of the model. Many funder contract-management and provider staff spend hours in rooms trying to find such intermediate outcomes, only to walk away frustrated. There is a simple solution to this problem: building a technically sound outcomes system. Continue reading
Category Archives: Indicators
Reliability versus validity – read on, it's important!
Now that Easter is over (and the yard gate has been built to keep in the dog that my wife and the kids have their hearts set on getting), I’m back blogging. Today I want to talk about the difference between reliability and validity. It sounds technical, but read on, it’s really important in a lot of results and outcomes areas. In psychology, where I come from, they spend a lot of time drumming this distinction into you. Reliability is whether measurements taken at different times and by different people will give you the same result. Validity is whether you are measuring the right thing. Continue reading
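A toy sketch can make the distinction concrete. The two "scales" and their readings below are invented for illustration: one is reliable but not valid (its readings agree with each other but are systematically wrong), the other is roughly valid on average but unreliable.

```python
import statistics

# Hypothetical repeated weighings of a person whose true weight is 70 kg.
true_weight = 70.0

# Scale A: reliable but not valid - readings agree closely with each
# other, but are systematically about 5 kg too high.
scale_a = [75.1, 74.9, 75.0, 75.2, 74.8]

# Scale B: valid on average but unreliable - readings scatter widely
# around the true value.
scale_b = [65.0, 76.0, 69.0, 74.0, 66.0]

def reliability(readings):
    """Smaller spread between repeat measurements = more reliable."""
    return statistics.stdev(readings)

def validity(readings, truth):
    """Smaller gap between the average reading and the truth = more valid."""
    return abs(statistics.mean(readings) - truth)

print(reliability(scale_a), validity(scale_a, true_weight))  # tiny spread, 5 kg bias
print(reliability(scale_b), validity(scale_b, true_weight))  # big spread, little bias
```

Scale A would pass any inter-rater or test-retest check while still measuring the wrong thing, which is exactly why reliability alone is not enough.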
Why just about every indicator system in the world needs to be fixed!
I’ve just posted a new article in the Outcomes Theory Knowledge Base on why it is essential to map indicators onto an underlying visual outcomes model. I blogged a little while ago about why we should be wary of too-tidy indicator sets, and in the article I explain why. The vast majority of indicator systems in the world suffer from the problem set out in the article – they are just a straight list of indicators set out in tabular format. They give the user no way of telling whether important steps and outcomes are going unmeasured. Those using such systems remain blissfully unaware of this. In my view, all these straight indicator sets need to be fixed. It’s not particularly difficult, it just requires some work. How to draw the underlying outcomes models is set out in the outcomes model standards, and how to then use such models for indicator mapping and many other things is described in detail in the applied version of outcomes theory – Easy Outcomes. Continue reading
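A minimal sketch of the idea (the step and indicator names are invented): once indicators are mapped onto the steps of an outcomes model rather than kept as a flat list, the unmeasured steps fall out immediately.

```python
# A toy outcomes model: each step or outcome in the model, with whatever
# indicators have been mapped onto it. All names here are hypothetical.
outcomes_model = {
    "Training delivered": ["training sessions run"],
    "Skills improved": [],              # no indicator - a measurement gap
    "Employment gained": ["jobs obtained by participants"],
    "Income increased": [],             # no indicator - a measurement gap
}

# A flat indicator list would show only the two indicators above and
# nothing else; mapping onto the model exposes what is NOT measured.
unmeasured = [step for step, indicators in outcomes_model.items()
              if not indicators]
print(unmeasured)
```

The flat tabular list contains exactly the same indicators, but only the model view reveals that two of the four steps have no measurement at all.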
Beware of suspiciously tidy indicator sets
I’ve just come away from presenting at a national Philanthropy conference as part of a half-day session on evaluation and outcomes. I was presenting on the use of the Easy Outcomes approach as a way for grantees to structure outcomes, indicators and evaluation. I will tidy up the outcomes model I used and post a link to it in a blog in a week or so. A lot of interesting points came up in the discussion and I will blog on several of them over the next few days. The first one is to beware of suspiciously tidy indicator sets. The Easy Outcomes approach gets people to draw an outcomes model (intervention logic) of what they are trying to do without worrying about what they can and can’t measure, and what they can and can’t demonstrate is attributable to their particular project (both of these issues are dealt with later in the process). You draw the models using the guidelines here.
Continue reading
Bonuses’ role in the financial melt-down
President Obama has amplified the attack on bonuses being paid to staff in companies which have been bailed out by the U.S. government (CNN, 16 March 2009). What does outcomes theory have to say about the role of the bonus system in the current financial meltdown? I blogged in 2007 about the problem of bonuses in the financial system and how it was possible the full extent of the melt-down would take time to be revealed. Thinking in terms of an outcomes model, what has happened is that financial institutions should have been aiming (as they do in healthy times) at the overall outcome of Sustainable long-run profitability. Continue reading
14 years for revealing an indicator!
One of the principles of outcomes theory is that if you want groups of stakeholders not to have a complete understanding of what is happening as an outcomes model plays out, you try to suppress information about indicators of outcomes which are heading in the wrong direction. There is currently a story in the media about a UK officer being arrested in Afghanistan for allegedly supplying civilian casualty figures to a human rights campaigner. The CNN story is here.
Developing a comprehensive sector visual outcomes model
In earlier blog postings I’ve talked about the use of large-scale outcomes models for various purposes such as overall policy development, evidence-based practice, and monitoring and evaluation. The next version of DoView outcomes and evaluation software [Disclosure: I am involved in the development of DoView] is going to allow images to be included in DoView files and in the web page models which can be created within DoView and then put up on the web. A mock-up of what parts of a visual sector plan using the upcoming version of DoView could look like is available. This will be a no-cost update to DoView which is planned for release in February 2009.
More on politicians – accountability (praise and blame) and having 'blown it'
Further to my last posting on whether or not the jump in President Bush’s approval rating was attributable to him (i.e. provable that he caused most of it, rather than it just being the fact that he happened to find himself being President on September 11th), I came across a letter to Newsweek which relates to another aspect of the features of steps and outcomes in outcomes models as related to politicians. (The letter is the second letter listed here). This time it is the issue of accountability. Continue reading
Attribution and President Bush's approval rating
I’ve been doing a lot of thinking about indicators and attribution these days. I think that a lot of problems (particularly misinterpretations of how well or badly parties are doing within outcomes systems of all types) arise because of confusion between attributable and not-necessarily-attributable indicators. The underlying Five Building Blocks Diagram which is used in outcomes theory makes a distinction between attributable and not-necessarily-attributable indicators. The Washington Post today had an interactive showing President Bush’s approval rating over time. At various points, the graph had labels showing what was happening at the time, presumably to help the reader work out why his rating went up and down at the points it did. The biggest jump was around September 11, when his approval took a massive leap from something like 56% to 92% (reading off the graph). Below is a section from the interactive. Continue reading
Sharing the pain – crazy indicators, targets and funder reporting systems
This blog post is a follow-up to an earlier posting on my blog. In a comment on that posting, M&Egirl outlined problems she was having with a funder demanding targets that are meaningless and asked for more comment on this obsession with targets on the part of funders.
The program she is working on is a gender-related violence program, which for the sake of this discussion I’ll presume involves women who have been subject to gender-related violence receiving some sort of intervention to help them be safe and move beyond the trauma they have suffered. Because indicators and targets are such an important issue, I thought that I’d do a further posting on it. If you have a moment, have a quick read of the original posting and M&Egirl’s comment before reading this blog posting. Continue reading