Over-simplifications in outcomes, monitoring and evaluation

My evaluation colleague Patricia Rogers commented on an earlier blog posting of mine in which I claimed that what I am trying to do is to make outcomes, monitoring and evaluation work ‘easier’. She challenged me on that idea and pointed out that what we often have to deal with is over-simplification in the way people work with outcomes, monitoring and evaluation. Her comment inspired me to work up an article on over-simplification in outcomes and evaluation, and once underway I realized just how many different ways people approach outcomes, monitoring and evaluation with over-simplified methods, and the problems these cause. Continue reading

Unalterable deliverables and program inflexibility

Back blogging now after a holiday. Recently I ran into the problem of unalterable deliverables in a project I am involved in. The same problem was mentioned in the UN report on its results-based management system that I blogged about a couple of postings ago. The problem arises when a project is set up and its deliverables are fixed, but the program ideally needs some flexibility around those deliverables as it develops over time. Sometimes the problem is simply the result of how difficult it is to change deliverables. Continue reading

An in-depth analysis of the U.N. Results-Based Management System

I have just put up an Outcomes Theory Knowledge Base article which is an in-depth analysis of the United Nations Results-Based Management System. (It’s obscure work, but someone has to do it!) The exciting part is that it has let me road-test my new Outcomes Systems Checklist. This now provides a common framework for analyzing any outcomes system – outcomes systems being any system which attempts to identify, measure, attribute or hold parties to account for outcomes, or for the steps which are thought to lead to them. A 2008 report from the U.N. itself on its Results-Based Management System said that the system was ‘an administrative chore of little value to accountability and decision-making’.

Continue reading

The single list of indicators problem

Many results management, performance management and monitoring systems suffer from what is called the ‘single list of indicators’ problem. I have just put up an article on the Outcomes Theory Knowledge Base about this problem (the URL of the article is at the bottom of this blog posting). It arises where there is a demand that an indicator list be high-level (i.e. not at the output level) while at the same time being usable to hold a program, organization or other intervention to account. Often one list cannot do both of these jobs. There are four things that can happen with single-list approaches; all four create problems and can undermine the credibility of the outcomes system in which they occur. Continue reading
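The tension can be made concrete with a small sketch. All of the indicator names and fields below are invented for illustration, not drawn from the Knowledge Base article: each indicator records how high-level it is and whether the program’s contribution to it can be demonstrated, and the two jobs select different (overlapping) subsets.

```python
# Hypothetical sketch: one pool of indicators, two incompatible jobs.
# Field names and example indicators are my own invention.
indicators = [
    {"name": "workshops delivered",      "level": "output",       "attributable": True},
    {"name": "participant skill gains",  "level": "intermediate", "attributable": True},
    {"name": "regional employment rate", "level": "high",         "attributable": False},
]

def accountability_list(indicators):
    """Indicators a program can fairly be held to account for."""
    return [i["name"] for i in indicators if i["attributable"]]

def high_level_list(indicators):
    """Indicators suitable for high-level (non-output) reporting."""
    return [i["name"] for i in indicators if i["level"] != "output"]

# The two selections overlap but are not identical, which is exactly
# why a single list struggles to serve both purposes at once.
```

In this toy example the accountability list includes an output while the high-level list includes an indicator the program cannot be held to account for; forcing both jobs onto one list means dropping one requirement or the other.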

Tears in outcomes land: the non-output demonstrably attributable indicator paradox

If there is one thing that causes a terrible amount of pain in outcomes land, it is the Non-Output Demonstrably Attributable Indicator Paradox. The paradox manifests itself as the demand to find an intermediate outcome which can be used for accountability purposes. It comes into play when this quest is accompanied by a demand that such intermediate outcomes not also be outputs. Sometimes this demand is made explicitly; at other times it is implicit. Outcomes models which are structured into horizontal layers (outputs, intermediate outcomes, final outcomes) make the demand implicitly, by requiring that each step be placed in either the outputs layer or the intermediate outcomes layer of the model. Many funder contract managers and provider staff spend hours and hours in rooms trying to find such intermediate outcomes, only to walk away frustrated. There is a simple solution to this problem: build a technically sound outcomes system. Continue reading

Making outcomes theory more concrete – checklist for assessing outcomes systems

Most normal people would think it very obscure, but I’ve just put up a Checklist for Analyzing Outcomes Systems in the Outcomes Theory Knowledge Base, and it’s an exciting development. Up until now the Outcomes Theory Knowledge Base has consisted of a set of articles which outline various aspects of outcomes theory. Outcomes theory is a general theory covering all types of outcomes systems. Outcomes systems are any type of performance management system, results-based system, monitoring system, evaluation system, outcomes-focused contracting system, or strategic planning system (the term even includes evidence-based practice systems). Such systems have, in the past, been seen as somewhat different types of things, with no common theory for analyzing them. Outcomes theory is based on the insight that we can theorize them as a common type of system and then use the theory to work out how such systems are best structured. The approach becomes powerful the moment we can start applying it to actual real-world outcomes systems. That is the role of the checklist I’ve just developed. Continue reading
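The general shape of such a checklist-driven assessment can be sketched in a few lines. The criteria below are placeholders I have invented for illustration; they are not the actual items of the Outcomes Systems Checklist.

```python
# Minimal sketch of applying a generic checklist to an outcomes system.
# The checklist items here are invented placeholders.
CHECKLIST = [
    "has an explicit outcomes model",
    "indicators are mapped onto the model",
    "attribution claims are justified",
    "accountability is limited to controllable steps",
]

def assess(system_name, satisfied):
    """Return which checklist items a named system passes and fails."""
    passed = [c for c in CHECKLIST if c in satisfied]
    failed = [c for c in CHECKLIST if c not in satisfied]
    return {"system": system_name, "passed": passed, "failed": failed}

# Example: a system that only satisfies the first criterion.
result = assess("example RBM system", {"has an explicit outcomes model"})
```

The point of the common framework is exactly this: once any performance management, monitoring or evaluation system can be assessed against one shared set of criteria, the systems become comparable.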

Problems in pay for performance systems

Sorry, I stopped blogging for a day or two due to a computer problem, and I will also not be blogging over the Easter break, but I will be back to daily blogging after that. Today I’m looking at problems in a pay-for-performance system. From the point of view of outcomes theory, pay-for-performance systems are just another example of an outcomes system. Outcomes systems are any system which attempts to identify, measure, attribute or hold people, organizations or programs to account for outcomes. A U.S. GAO report [2] on one such system is interesting reading (for those with a taste for obscure government reports). It reviews the National Security Personnel System, which has just been put on hold by the Obama administration and may be axed [1]. The problems identified in the GAO report include: Continue reading

Why just about every indicator system in the world needs to be fixed!

I’ve just posted a new article in the Outcomes Theory Knowledge Base on why it is essential to map indicators onto an underlying visual outcomes model. I blogged a little while ago about why we should be wary of too-tidy indicator sets, and in the article I explain why. The vast majority of indicator systems in the world suffer from the problem set out in the article – they are just a straight list of indicators set out in tabular format. They give the user no way of knowing whether important steps and outcomes are going unmeasured, and those using such systems remain blissfully unaware of it. In my view, all these straight indicator sets need to be fixed. It is not particularly difficult; it just requires some work. How to draw the underlying outcomes models is set out in the outcomes model standards, and how to use such models for indicator mapping and many other things is described in detail in the applied version of outcomes theory – Easy Outcomes. Continue reading
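The core of the argument can be shown in a few lines. The step and indicator names below are invented for illustration; the point is simply that once each indicator is mapped onto a step in the outcomes model, the unmeasured steps become immediately visible, which a flat indicator list can never show.

```python
# Hedged sketch with invented step and indicator names: mapping
# indicators onto outcomes-model steps exposes what is not measured.
model_steps = [
    "deliver training",
    "skills improve",
    "practice changes",
    "client outcomes improve",
]

# Each indicator is mapped to the model step it measures.
indicator_map = {
    "courses run":      "deliver training",
    "test score gains": "skills improve",
}

def unmeasured_steps(model_steps, indicator_map):
    """Steps in the outcomes model that no indicator measures."""
    measured = set(indicator_map.values())
    return [s for s in model_steps if s not in measured]
```

Here the mapping reveals that nothing measures whether practice actually changes or whether client outcomes improve, even though the indicator list itself looks perfectly tidy.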

Using visual outcomes models to describe and communicate best practice

Yesterday I blogged about what is meant by the term ‘best practice’. As I said then, I think there is some conceptual confusion out there about it, and I am not sure I have yet tidied up my own thinking in a satisfactory way. However, the great thing is that regardless of how we define it, identifying the types of things people currently call best practice and communicating them between programs is a great idea. The most difficult thing in many cases is to get best practice actually applied on the ground once we have identified it. I have put up an Outcomes Theory Knowledge Base article (the link is at the bottom of this posting) suggesting that an efficient way of describing and communicating best practice may be to use visual outcomes models (a type of logic model). Continue reading

Simplifying outcomes terminology – angels dancing on the head of a pin!

In the Middle Ages, churchmen occupied themselves by sitting around the fire drinking fine wine and having long discussions about how many angels could dance on the head of a pin. It was tough work, but someone had to do it! That was until someone called William of Occam came along. He is famous for spoiling the party with something called ‘Occam’s Razor’ – one way of describing the way he operated is that he did not want people to ‘multiply entities beyond necessity’ (see the Wikipedia entry). What he was getting at was keeping things as simple as possible. This is the approach taken within outcomes theory. Continue reading