What could the politician at the party claim credit for?

I was at a party the other night talking with a group of people about what I do in the outcomes area. The normal reaction I get when I tell people that I’m a psychologist is straightforward. However, when I tell them that I’m an outcomes strategist I usually get the following reaction – they look at me, gesticulate, roll their eyes and say, ‘Oh, it’s so hard to prove that what you did changed high-level outcomes’. Of course, this is what happens in the capital city where I work, because just about everyone here is either a policy wonk or in a relationship with one. And we all know that the whole international wonkery is obsessed with measuring outcomes.

In the rest of the country I usually get blank stares and people tend to move quickly on to the next guest to talk about something that makes sense. But sometimes I meet people who just don’t perceive that there’s any problem to be solved in measuring outcomes. It’s always a little disturbing to have someone imply that there’s no real basis for a whole area of work you’re involved in. I got this some time ago from a taxi driver on the way to an evaluation conference. And I got it again the other night at the party.

A guest, who I later found out was a local government politician, heard me talking about being an outcomes strategist. He launched into something along the lines of: ‘I would have thought it was very easy, just measure the dollars’. Initially presuming he worked in the private sector, I gave my usual spiel about the private sector and outcomes. In comparison to the public sector, it has the huge advantage that its outcomes are always measured (well, the ones that people mostly focus on) and the measure is a common one (the dollar) which is used right across the entire sector, regardless of the type of work people are involved in. There’s also some more complicated stuff about the sector tending to have a more relaxed attitude towards attribution (proving exactly what caused what) than the public sector. I’ll blog about that second part sometime in the future.

When I introduced the point that non-financial outcomes, rather than financial outcomes, are at the heart of what’s done in the public sector, he then said something like: ‘well, you just measure all that in surveys’. He thought that the whole problem of outcomes was simply solved by tracking outcomes over time. I pointed out that whether things were getting better in the district he was in charge of said nothing about whether this was caused by his work. Things might be getting better in every city in the world because of general positive trends affecting everyone.
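To make the problem concrete, here’s a minimal sketch (my illustration, with made-up numbers – not anything he presented) of the simplest possible check against a general trend: comparing the district’s change with the change in comparable districts, a crude difference-in-differences calculation.

```python
# Toy numbers illustrating why a rising local indicator proves nothing by itself:
# compare the district's change against the change in similar districts.
district_before, district_after = 62.0, 70.0        # e.g. some environmental index
comparison_before, comparison_after = 60.0, 68.0    # average of comparable districts

raw_change = district_after - district_before                # 8.0
general_trend = comparison_after - comparison_before         # 8.0
change_beyond_trend = raw_change - general_trend             # 0.0

print(f"Raw improvement:          {raw_change:.1f}")
print(f"General trend elsewhere:  {general_trend:.1f}")
print(f"Improvement beyond trend: {change_beyond_trend:.1f}")
# The district improved, but no faster than everywhere else – the raw
# indicator alone cannot support a claim of impact.
```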

Up until this point, in my view, he was simply committing the basic outcomes fallacy of thinking that measuring a not-necessarily controllable indicator somehow shows that one has improved it (see Duignan’s Six Types of Evidence That a Program Works diagram).

When I told him, as politely as I could, that I thought he was not actually proving anything about what he was personally making happen, he introduced a more sophisticated argument which cannot be dismissed so easily. This argument was that he ‘hears from the people all the time’ and that he gets feedback from the different encounters he has with the people who live in his district. He also added that ultimately they would tell him if he wasn’t doing a good job.

Our conversation got interrupted about this time so I didn’t get to continue talking to him. However, thinking in formal outcomes theory terms, in this second part of the conversation he could have been making two somewhat different arguments. One is that his immersion in the nitty-gritty of working with the people in his district brought him into direct contact with the lower levels of the outcomes model he was seeking to achieve (the model of the steps needed to achieve high-level outcomes – which can be operationalized in the form of a visual DoView). Being able to directly ‘see’ that the lower-level steps were being put in place (e.g. new environmental regulations), and having a sound logic for the intervention at hand (environmental regulation leading to a better environment), plus a measure showing that environmental issues were improving, it was reasonable for him to claim that he had established he was having an impact. In Duignan’s Types of Impact Evaluation Designs, this is the seventh type of design: Intervention Logic (Program Theory/Theory of Change) Based Designs. It can be accepted as a credible impact design by stakeholders in some situations. Of course, there’s always the question of who the observer is who is making the claim that the lower-level steps have been achieved. But presumably we could get some independent assessment as to whether the lower-level steps were, as he was claiming, happening, so the logic of the design makes theoretical sense as a way of attempting to prove impact.

An alternative argument he could have been mounting, if he wanted to be very pragmatic, is that the fact that he keeps getting re-elected is what ‘hearing from the people all the time’ means in practice. Looking at it this way, he would be defining his outcome not as changing things in his community (which he may well wish to do) but simply as getting re-elected. If this is the case, then the fact that he is regularly re-elected means that, by definition, he is achieving his ‘outcome’. And this outcome could be translated into something like ‘keeping the people satisfied’. The argument then would be that keeping the people satisfied is the best way of achieving outcomes for the community within a democracy. I think this is an example of pulling the ‘outcome’ you’re trying to show you changed back down the outcomes model to some lower level where it’s easier to prove attribution.

So while, in my view, his initial claims about it being easy to figure out what is causing outcomes were weak and did not actually establish anything about him having an effect on outcomes, his second round of argument had more substance to it.

Want to know more? http://About.Me/PaulDuignan

Outcomes theory Unequal Inputs Principle – Op Ed applied to national school league tables

Just written an Op Ed on the question of national school league tables. It tries to move the argument from a political one to a technical one. It’s part of my initiative to show how outcomes theory (outcomestheory.org) can provide a technical lens on debates that are currently seen as largely political.

The outcomes theory principle involved is the Unequal Inputs Principle (the ‘level playing field’ principle – called the Equivalent Input, Equivalent Outcome Principle in the article), which states: where the inputs of the units being compared are equivalent, the raw outcomes from such units can be used to measure the relative performance of the units. Where inputs to units differ, the amount of improvement is a better measure of unit performance than raw outcomes alone.

The Op Ed argues that if you want to improve student performance you should look to see whether you can develop a measure of value-added rather than just raw academic performance. This is because of the differing academic levels of kids entering different schools. Whether you can develop a value-added measure is another question, but the clear problem with a raw-score approach, when it is used to incentivize teachers, is that they will just move to schools whose entering pupils are already functioning at a high academic level.
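As a toy illustration of the principle (made-up numbers, and ‘improvement’ here is the crudest imaginable value-added measure), notice how the two rankings reverse:

```python
# Two hypothetical schools: (mean entry score, mean exit score).
schools = {
    "School A": (80, 85),   # high-performing intake, small gain
    "School B": (50, 70),   # low-performing intake, large gain
}

# Rank by raw exit score (what a raw league table does).
by_raw = sorted(schools, key=lambda s: schools[s][1], reverse=True)

# Rank by improvement, a crude value-added measure.
by_value_added = sorted(schools, key=lambda s: schools[s][1] - schools[s][0],
                        reverse=True)

print("Ranked by raw exit score:", by_raw)          # ['School A', 'School B']
print("Ranked by improvement:   ", by_value_added)  # ['School B', 'School A']
# A raw-score table rewards School A's intake; the improvement-based table
# credits School B's teaching.
```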

Here’s the Op Ed. http://www.stuff.co.nz/dominion-post/comment/7753957/Outcome-theory-and-education

Paul Duignan, PhD. More at OutcomesCentral.org and on Twitter at Twitter.com/paulduignan.

Stop the terminological madness now! ‘Outcomes’, ‘impact’, ‘results’, ‘goals’ and the Buffalo Dung Problem

All I can ask is ‘when will it stop?’ As we speak I’m burning up bandwidth on an EVALTALK (the evaluators’ list) discussion about the distinction between a ‘goal’ and a ‘mission’. On LinkedIn people are arguing about the distinction between a ‘result’ and an ‘outcome’, and I have someone emailing me from Europe preoccupied with why I don’t draw a distinction between an ‘outcome’ and an ‘impact’ in my outcomes theory work.

I think that Karyn Hicks on EVALTALK has come up with the best term for these debates, calling them the Buffalo Dung Problem! This stems from her being in a meeting involving one of these endless debates and her Director hollering, ‘Well #!@, we can just call it buffalo dung for all I care!’ From then on she’s called it the Buffalo Dung Problem.

Most of these Buffalo Dung discussions are a total waste of time, and we can think about why in terms of two underlying issues:

1. These terms are all used in a common-sense way by stakeholders to mean roughly the same thing: ‘the stuff we’re trying to achieve’. It’s ultimately futile to try to force the rest of the world to use them in the very specific ways that suit us for our technical work. If we were physicists, and no one had any common-sense uses for our terms – like bosons and quarks – we could define them how we liked and insist that people use them in a very precise technical way. We simply do not have that power, because we work amongst a wide variety of lay stakeholders who will use terms in whatever way they want to.

2. When we insist on using terms in a particular way, we are usually trying to pack into one term a number of technical distinctions which it is better to tease apart. These distinctions include: 1) where something fits within a causal pathway; 2) whether it’s measurable or not; 3) whether it’s controllable or not; 4) whether it’s going to be used for accountability or not.

For instance, in one of the discussions I’m involved in at the moment, it’s being suggested that maybe the term ‘goal’ should be restricted to: 1) something below a thing called a ‘mission’ within a causal pathway; 2) something that is measurable; and 3) something that is controllable. The problem is that when we ask an unsuspecting lay person to give us their ‘goals’, they have no way of knowing from this word alone that we want a very specific thing from a technical point of view – something with three specific technical characteristics. It’s far clearer to forget the word ‘goal’ and tell them that we want something that is measurable and controllable by them (distinctions 2 and 3 above). We can achieve our first distinction – the position in the causal pathway – much more elegantly by just doing the whole thing in the form of a visual outcomes model.

A fully visual approach gets rid of a lot of the terminological madness which stems from trying to specify a particular location within a causal pathway, e.g. having to insist that a process comes before an immediate outcome, which comes before an intermediate outcome, which comes before an impact. When you try to do it this way you inevitably get people asking you where a result, goal, mission and vision fit into the schema.

You can eliminate this entire debate by simply working in a totally visual way. You can do the whole work of building an outcomes model visually just by talking about boxes within the model and the end-box(es). Being a little less extreme, I normally talk about steps, and at the end of the steps there are final outcomes. But I couldn’t care less what people want to call the boxes at the end of the visual model. The visual approach eliminates the need to use words to describe particular positions within the causal pathway – you can just point at them (or, if you are not physically present, color them, e.g. ‘the green boxes’).

Having eliminated this major cause of terminological stress by working visually, you can then deal with distinction 2, measurement. This is best thought of in terms of a measurement being an object you put onto the visual model next to a box – something that measures that box. I happen to call these indicators, but again I couldn’t really care less what you call them as long as you maintain the idea of measuring things.

Then you need to deal with the third distinction – controllability. This is best done by simply marking up the indicators that are controllable in some way. Make them red, put a letter next to them, whatever you like. But just think of it in terms of a particular type of indicator being controllable.

Lastly, you need to deal with the fourth distinction – whether a party is going to be held accountable for something. This is best dealt with by simply marking up the indicators which a party will be held accountable for. In the public and non-profit sectors, these are usually exactly the same as the controllable indicators you’ve just marked up.
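To show how little terminology this actually requires, here is a minimal sketch in Python (my illustration only – the names are hypothetical and have nothing to do with DoView’s internals) in which each of the four distinctions lives in its own field rather than in a contested word:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    controllable: bool = False   # distinction 3: marked on the indicator
    accountable: bool = False    # distinction 4: also just marked, not renamed

@dataclass
class Box:
    label: str                                      # just a box - no label war
    indicators: list = field(default_factory=list)  # distinction 2: measurement
    leads_to: list = field(default_factory=list)    # distinction 1: position in
                                                    # the causal pathway

regulation = Box("New environmental regulations passed",
                 indicators=[Indicator("Regulations enacted",
                                       controllable=True, accountable=True)])
environment = Box("Better local environment",
                  indicators=[Indicator("Environmental index")])  # not controllable
regulation.leads_to.append(environment)  # position shown by linkage, not by a word
```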

It’s as easy as that: you simply do not need the terminological madness so many people are currently caught up in. I would love someone to work out the sum total of human time, effort and bandwidth (and hence dollars) currently going into these endless terminological debates.

William of Occam was a medieval philosopher who came up with Occam’s Razor – ‘do not multiply entities beyond necessity’. He was trying to stop the type of madness whereby people in his time used to make dozens of distinctions between different types of angels. We have the same problem on our hands at the moment with the Buffalo Dung Problem. I’m an Occam’s Razor fan myself – let’s just stop the madness!

To see how to work practically in this way – as I, and those who use DoView Visual Planning and Management, do all the time – see http://doview.com/plan/draw.html, which shows you the 13 rules for building elegant but accessible visual models that you can use in the way described above. http://doview.com/plan shows how you can use the whole process for accountability, evaluation, reporting, etc.

Want more detail and references for this thinking? The following is a technical article about this issue (read the summary referenced at the start of it if you do not have time to read the whole article): Duignan, P. (2009). Simplifying the use of terms when working with outcomes. Outcomes Theory Knowledge Base Article No. 236. ( http://outcomestheory.wordpress.com/article/simplifying-terms-used-when-working-2m7zd68aaz774-73/ ). The substance of this article formed the basis for Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.

And the following article talks about the different dimensions we get mixed up in our outcomes and evaluation work:

Duignan, P. (2009). Features of steps and outcomes appearing in outcomes models. Outcomes Theory Knowledge Base Article No. 208. ( http://outcomestheory.wordpress.com/article/features-of-steps-and-outcomes-2m7zd68aaz774-20/ ).

Paul Duignan, PhD. Follow me on this OutcomesBlog.org; Twitter.com/PaulDuignan; or via my E-newsletter and resources at OutcomesCentral.org.

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that by using a central tool of outcomes work – the outcomes model (a particular type of visual model of all of the high-level outcomes an organization is seeking to achieve and the steps it is taking to achieve them) – organizations and programs can do both at the same time. Continue reading

Tracking jobs created under the U.S. Recovery Act – when should the attempt at measurement be abandoned?

The default expectation in at least some sections of the U.S. public sector seems to be that it should always be feasible and affordable both to measure and to attribute the results of interventions. Here the term attribution means being able to actually demonstrate that a change in an outcome has been caused by a particular intervention rather than being the result of other factors (see here for more on attribution). The recent U.S. Recovery Act is a case in point. While it’s reasonable to start from the position that you should routinely assess the possibility of measuring and attributing changes in the outcomes of particular interventions, you can’t start by just assuming that it will always be feasible or affordable to do this. Clinging to such an assumption where it is untrue can result in you either measuring an outcome when the data you are collecting is not accurate, or acting as though what you are measuring (even if it is an accurate measurement of a change in an outcome) is demonstrably attributable to a particular program, when in fact it may not be. Continue reading

The error of limiting focus to only the attributable

I am continuing to develop a set of articles outlining various problems that are often built into the outcomes systems I see. The one I have just put up is on the Error of Limiting Focus to Only the Attributable. This is where the whole emphasis of a performance management or other outcomes system is on holding a provider to account for a list of demonstrably attributable indicators (often called outputs, deliverables, or key performance indicators). This often creates problems. Continue reading

Over-simplifications in outcomes, monitoring and evaluation

An evaluation colleague, Patricia Rogers, commented on an earlier blog posting of mine in which I was claiming that what I am trying to do is make outcomes, monitoring and evaluation work ‘easier’. She challenged me on that idea and pointed out that often what we are having to deal with is over-simplification in the way people work with outcomes, monitoring and evaluation. Her comment inspired me to work up an article on over-simplification in outcomes and evaluation, and after getting underway with it I realized just how many different ways people approach outcomes, monitoring and evaluation with over-simplified approaches, and the problems these cause. Continue reading

Unalterable deliverables and program inflexibility

Back blogging now after having been on holiday. Recently I ran into the problem of unalterable deliverables in a project I am involved in. This problem was also mentioned in the UN report on its results-based management system that I blogged about a couple of postings ago. The problem arises where a project is set up and deliverables are set, but where ideally there needs to be some flexibility regarding deliverables as the program develops over time. Sometimes the problem is just a result of the difficulty of changing deliverables. Continue reading

Intense analysis of the U.N. Results-Based Management System

I have just put up an Outcomes Theory Knowledge Base article which is an intense analysis of the United Nations Results-Based Management System. (It’s obscure work, but someone has to do it!) The exciting part is that it has let me road-test my new Outcomes Systems Checklist. This now provides a common framework for analyzing any outcomes system – an outcomes system being any system which attempts to identify, measure, attribute or hold parties to account for outcomes, or for the steps which are thought to lead to them. A 2008 report from the U.N. itself on its Results-Based Management System said that the system was ‘an administrative chore of little value to accountability and decision-making’.

Continue reading

The single list of indicators problem

Many results management, performance management and monitoring systems suffer from what is called the ‘single list of indicators’ problem. I have just put up an article on the Outcomes Theory Knowledge Base regarding this problem (the URL of the article is at the bottom of this blog posting). It arises in situations where there is a demand that an indicator list be high-level (i.e. not at the output level) but, at the same time, that the list be able to be used to hold a program, organization or other intervention to account. Often one list cannot do both of these jobs. There are four things that can happen with single-list approaches; all four create problems and can undermine the credibility of the outcomes system in which they occur. Continue reading