What could the politician at the party claim credit for?

I was at a party the other night talking with a group of people about what I do in the outcomes area. The normal reaction I get when I tell them that I’m a psychologist is straightforward. However, when I tell them that I’m an outcomes strategist I usually get the following reaction – they look at me, gesticulate, roll their eyes and say, ‘Oh, it’s so hard to prove that what you did changed high-level outcomes’. Of course, this is what happens in the capital city where I work, because just about everyone here is either a policy wonk or in a relationship with one. And we all know that the whole international wonkery is obsessed with measuring outcomes.

In the rest of the country I usually get blank stares and people tend to move quickly on to the next guest to talk about something that makes sense. But sometimes I get people who just don’t perceive that there’s any problem to be solved in measuring outcomes. It’s always a little disturbing to have someone imply that there’s no real basis for a whole area of work you’re involved in. I got this some time ago from a taxi driver on the way to an evaluation conference, and I got it again the other night at the party.

A guest, who I later found out was a local government politician, heard me talking about being an outcomes strategist. He launched into something along the lines of: ‘I would have thought it was very easy, just measure the dollars’. Initially presuming he worked in the private sector, I gave my usual spiel about the private sector and outcomes. In comparison to the public sector, it has the huge advantage that its outcomes are always measured (well, the ones that people mostly focus on) and the measure is a common one (the dollar) which is used right across the entire sector, regardless of the type of work people are involved in. There’s also some more complicated stuff about the sector tending to have a more relaxed attitude towards attribution (proving exactly what caused what) than the public sector. I’ll blog about that second part sometime in the future.

When I introduced the point that non-financial outcomes, rather than financial outcomes, are at the heart of what’s done in the public sector, he then said something like: ‘well, you just measure all that in surveys’. He thought that the whole problem of outcomes was simply solved by tracking outcomes over time. I pointed out that the fact that things were getting better in the district where he was in charge said nothing about whether this was caused by his work. Things might be getting better in every city in the world because of general positive trends affecting everyone.

Up until this point, in my view, he was simply committing the basic outcomes fallacy of thinking that measuring a not-necessarily controllable indicator somehow shows that one has improved it (see Duignan’s Six Types of Evidence That a Program Works diagram).

When I told him, as politely as I could, that I thought he was not actually proving anything about what he was personally making happen, he introduced a more sophisticated argument which cannot be dismissed so easily. This argument was that he ‘hears from the people all the time’ and that he gets feedback from the different encounters he has with the people who live in his district. He also added that ultimately they would tell him if he wasn’t doing a good job.

Our conversation got interrupted about this time so I didn’t get to continue talking to him. However, thinking in formal outcomes theory terms, in this second part of the conversation he could have been making two somewhat different arguments. One is that his immersion in the nitty-gritty of working with the people in his district brought him into direct contact with the lower levels of the outcomes model he was seeking to achieve (the model of the steps needed to achieve high-level outcomes – which can be operationalized in the form of a visual DoView). Being able to directly ‘see’ that the lower-level steps were being put in place (e.g. new environmental regulations), having a sound logic for the intervention at hand (environmental regulation leading to a better environment), plus a measure showing that environmental issues were improving, he could reasonably claim to have established that he was having an impact. In Duignan’s Types of Impact Evaluation Designs, this is the seventh type of design: Intervention Logic (Program Theory/Theory of Change) Based Designs. It can be accepted as a credible impact design by stakeholders in some situations. Of course there’s always the question of who the observer is who is making the claim that the lower-level steps have been achieved. But presumably we could get some independent assessment as to whether the lower-level steps were, as he was claiming, happening, so the logic of the design makes theoretical sense as a way of attempting to prove impact.

An alternative argument he could have been mounting, if he wanted to be very pragmatic, is that the fact that he keeps getting re-elected is what ‘hearing from the people all the time’ means in practice. Looking at it this way, he would be defining his outcome not as changing things in his community (which he may well wish to do) but simply as getting re-elected. If this is the case, then the fact that he is regularly re-elected means that, by definition, he is achieving his ‘outcome’. And this outcome could be translated into something like ‘keeping the people satisfied’. The argument then would be that keeping the people satisfied is the best way of achieving outcomes for the community within a democracy. I think that this is an example of pulling the ‘outcome’ you’re trying to show you changed back down the outcomes model to some lower level where it’s easier to prove attribution.

So while, in my view, his initial claims about it being easy to figure out what is causing outcomes were weak and did not actually establish that he was having an effect on outcomes, his second round of argument had more substance to it.

Want to know more? http://About.Me/PaulDuignan

Does avoiding regulatory enforcement represent a success or a failure? A Chameleon Indicator.

There’s no doubt some indicators are a lot more fun than others (although, I must note, people in my trade have a fairly low threshold for ‘fun’). I particularly enjoy ones which can be interpreted in any way you like. They can be called Chameleon Indicators.

When developing outcomes DoViews (visual outcomes models) and performance management frameworks for organizations, I often run into a particularly ambiguous type of indicator – the number of regulatory interventions being undertaken by an organization. At first sight, what better indicator for an organization which includes a regulatory outcome as part of its mandate? But there are problems in interpreting these sorts of indicators, problems reflected in a media exchange I heard this morning.

Our national department of conservation is currently embroiled in media controversy over reductions in staff and budgets. As part of the media spotlight focused on it, I heard a discussion this morning about a reduction in the number of times it involves itself in a regulatory process – the number of times it makes legal representations to Conservation Resource Management Consent Hearings.

A media interviewer, interpreting the reduction in the number of regulatory interventions as a failure of the department to achieve one of its outcomes because staff shortages meant it was not being pursued aggressively, asked the department’s head: ‘…on the face of it, is it a lower priority?’ [the regulatory intervention – the department getting involved in Conservation Resource Management Consent Hearings]. The department’s chief (interpreting the drop in the measure in the opposite way) replied:

‘What you are falling into is the trap of judging and measuring our success by the number of cases we take regardless of the outcome. We see [Conservation Resource Management Hearings] [the regulatory intervention] as a last resort. We would rather sit down without spending money on lawyers and work out issues if we can and confine the [Conservation Resource Management Hearings] issues to ones that we really can’t reach agreement on’.*

This is a classic example of the outcomes theory principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking prevention.

Not having looked into this particular issue, I don’t want to come down on one side or another. I think that both sides are making reasonable ‘face value’ interpretations of the change in the indicator.

How can people setting up performance management systems deal with these regulatory intervention Chameleon Indicators? While people will continue to take different positions in interpreting them in the cut and thrust of media debate, there is a technical approach to the problem suggested by outcomes theory. In order to actually interpret what’s going on with this indicator, we would need further information about other indicators – for instance, whether there has been an increase in departmental activity focused on getting the parties together prior to potential Resource Management Consent Hearings. I’ve DoViewed it (built an outcomes model) below so that we can get a clearer picture of what’s going on. We would need to get information about the indicator in red in order to be able to interpret the regulatory intervention indicator in black. Even then we could not be certain, just from the indicator in red, that the department had been successful in reducing the number of contentious issues going to hearings (which is what this DoView aspires to). So we would really need to answer the evaluation question which also appears within the DoView.

So the technical answer to dealing with Chameleon Regulatory Intervention Indicators is to always interpret them against the underlying outcomes model (e.g. DoView) of the logic of what the organization is trying to do. For the theory on showing whether an organization is achieving its outcomes see Duignan’s Types of Evidence That a Program ‘Works’ Diagram, and for a practical visual approach see here.
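The interpretation logic above can be sketched in code. This is just an illustrative sketch with hypothetical names and made-up figures, not anything from a real performance management system: the point is that the raw hearings count alone returns ‘ambiguous’, and only a companion indicator from the same outcomes model lets you lean one way or the other.

```python
# Illustrative sketch: a regulatory-intervention count is a Chameleon
# Indicator -- a fall can mean either prevention is working or enforcement
# is under-resourced -- so on its own it is uninterpretable. A companion
# indicator (early-engagement activity) from the same outcomes model is
# needed before even a provisional reading is possible.

def interpret_hearings_indicator(hearings_change, engagement_change=None):
    """hearings_change / engagement_change: year-on-year change
    (negative = the indicator fell). Returns a provisional reading only;
    a firm answer would still need the evaluation question answered."""
    if hearings_change >= 0:
        return "no reduction to explain"
    if engagement_change is None:
        return "ambiguous: companion indicator missing"
    if engagement_change > 0:
        return "consistent with prevention working (more early engagement)"
    return "consistent with reduced enforcement capacity"

print(interpret_hearings_indicator(-12))                       # ambiguous alone
print(interpret_hearings_indicator(-12, engagement_change=8))  # prevention reading
print(interpret_hearings_indicator(-12, engagement_change=-3)) # capacity reading
```

Note that even with the companion indicator the function only ever returns ‘consistent with’ readings – which mirrors the point above that the indicator pair narrows the interpretation but cannot, by itself, prove impact.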

So the lesson from all this is that we should never just look at a Chameleon Indicator like the number of regulatory interventions on its own. We should always visualize it in the context of the logic of the intervention and see what surrounding indicators we need to measure and what impact evaluation questions we need to answer in order to really understand whether or not an organization is achieving its outcomes.

*Reference to the interview can be found in the outcomes theory article linked above.

Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s Linkedin page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in doing different aspects of these for a client, from a client’s point of view they’re all just parts of their organization’s work which they somehow need to integrate and align.

When working with clients, it really helps to have an approach which lets you move from doing monitoring and evaluation planning, for instance, back to strategic planning. You can then just track whatever their organizational focus is at any moment. From their point of view, it means that monitoring, evaluation etc are seamlessly aligned with strategic planning and other organizational functions.

For instance, working with a client yesterday, using our approach and software, we were building a DoView Visual M&E plan with them (http://doview.com/plan/evaluation.html). These plans are based on a DoView Visual Outcomes Model (http://doview.com/plan/draw.html). The client then said, ‘it’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy’.

So we immediately and seamlessly moved to doing this task for them within the same meeting. We just took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions. (see http://doview.com/plan).

It’s great to have the flexibility to move in any direction along the: strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using the one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities (http://doview.com/plan/draw.html).

Paul Duignan, PhD OutcomesBlog.org, Twitter.com/paulduignan, OutcomesCentral.org, DoView.com.

Stop the terminological madness now! ‘Outcomes’, ‘impact’, ‘results’, ‘goals’ and the Buffalo Dung Problem

All I can ask is: ‘when will it stop?’ As we speak I’m burning up bandwidth on an EVALTALK (the evaluators list) discussion about the distinction between a ‘goal’ and a ‘mission’. I’m on Linkedin where people are arguing about the distinction between a ‘result’ and an ‘outcome’, and someone is emailing me from Europe preoccupied with why I don’t draw a distinction between an ‘outcome’ and an ‘impact’ in my outcomes theory work.

I think that Karyn Hicks on EVALTALK has come up with the best term for these debates, calling them the Buffalo Dung Problem! This stems from her being in a meeting involving one of these endless debates and her Director hollering ‘Well #!@ we can just call it buffalo dung for all I care’! From then on she’s called it the Buffalo Dung Problem.

Most of these Buffalo Dung discussions are a total waste of time, and we can think about this in terms of two underlying issues:

1. These terms are all used in a common sense way by stakeholders to mean roughly the same thing: ‘the stuff we’re trying to achieve’. It’s ultimately futile to try and force the rest of the world to use them in very specific ways that suit us for our technical work. If we were physicists and no one had any common sense uses for our terms – like bosons and quarks – we could define them how we liked and insist that the people using them use them in a very precise technical way. But we simply do not have that power, because we work amongst a wide variety of lay stakeholders who will use terms in whatever way they want to.

2. When we insist on using terms in a particular way we are usually trying to pack into the one term a number of technical distinctions which it is better to tease apart. These distinctions include things such as: 1) where something fits within a causal pathway; 2) whether it’s measurable or not; 3) whether it’s controllable or not; 4) whether it’s going to be used for accountability or not.

For instance, in one of the discussions I’m involved in at the moment, it’s being suggested that maybe the term ‘goal’ should be restricted to: 1) something below a thing called a ‘mission’ within a causal pathway; 2) something that is measurable; and 3) something that is controllable. The problem is that when we ask an unsuspecting lay person to give us their ‘goals’, they have no way of knowing from just this word that we want something with three specific technical characteristics. It’s far clearer to forget the word goal and tell them that we want something that is measurable and controllable by them (distinctions 2 and 3 above). We can achieve our first distinction – the position in the causal pathway – much more elegantly by just doing the whole thing in the form of a visual outcomes model.

A fully visual approach gets rid of a lot of the terminological madness which stems from trying to specify a particular location within a causal pathway, e.g. having to insist that a process comes before an immediate outcome, which comes before an intermediate outcome, which comes before an impact. When you try to do it this way you inevitably get people asking you where a result, goal, mission and vision fit into the schema.

You can eliminate this entire debate by simply working in a totally visual way. You can do the whole work of building an outcomes model visually just by talking about boxes within the model and the end-box(es). Being a little less extreme, I normally talk about steps, and at the end of the steps there are final outcomes. But I couldn’t care less what people want to call the boxes at the end of the visual model. The visual approach eliminates the need to use words to describe particular positions within the causal pathway – you can just point at them (or, if you are not physically present, color them up, e.g. ‘the green boxes’).

Having eliminated this major cause of terminological stress by working visually, you can then deal with distinction 2, measurement. This is best thought of in terms of a measurement being an object you put onto a visual model next to a box: something that measures that box. I happen to call these indicators but, again, couldn’t really care less what you call them as long as you maintain the idea of measuring things.

Then you need to deal with the third distinction – controllability. This is best done by simply marking up the indicators that are controllable in some way. Make them red, put a letter next to them, whatever you like. The point is just to think in terms of a particular type of indicator being controllable.

Lastly you need to deal with distinction 4 – whether a party is going to be held accountable for something. This is best dealt with by simply marking up the indicators which a party will be held accountable for. In the public and non-profit sectors, these are usually exactly the same as the controllable indicators you’ve just marked up.
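To see how the four distinctions tease apart once you stop packing them into contested words, here is a minimal sketch. All the names are hypothetical (this is not DoView’s actual data format): position in the causal pathway is just the links between boxes, and measurability, controllability and accountability are plain attributes rather than separate terms.

```python
# Illustrative sketch: a box in a visual outcomes model is just a "step".
# Distinction 1 (causal position) lives in the links between steps;
# distinctions 2-4 live as explicit attributes on the step's indicators.
# No arguing needed over "goal" vs "outcome" vs "result".
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    controllable: bool = False   # distinction 3: can the party move it?
    accountable: bool = False    # distinction 4: will they be held to it?

@dataclass
class Step:
    title: str
    leads_to: list = field(default_factory=list)    # distinction 1: causal links
    indicators: list = field(default_factory=list)  # distinction 2: measurement

# A tiny model: one lower-level step feeding an end-box.
regulation = Step("New environmental regulations in force",
                  indicators=[Indicator("regulations enacted",
                                        controllable=True, accountable=True)])
environment = Step("Better local environment",
                   indicators=[Indicator("water quality index")])
regulation.leads_to.append(environment)

# In the public and non-profit sectors, the accountable indicators are
# usually exactly the controllable ones -- easy to check mechanically:
steps = (regulation, environment)
accountable = [i.name for s in steps for i in s.indicators if i.accountable]
controllable = [i.name for s in steps for i in s.indicators if i.controllable]
assert accountable == controllable
```

The end-box is simply whichever step nothing points onward from; you can point at it, or color it, rather than needing a reserved word for it.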

It’s as easy as that, you simply do not need the terminological madness so many people are currently involved in. I would love someone to work out the sum total of human time, effort and bandwidth (and hence dollars) which is currently going into these endless terminological debates.

William of Occam was a medieval philosopher who came up with Occam’s Razor – ‘do not multiply entities beyond necessity’. He was trying to stop the type of madness where people in his time used to make dozens of distinctions between different types of angels. We have the same problem on our hands at the moment with the Buffalo Dung Problem. I’m an Occam’s Razor fan myself – let’s just stop the madness!

To see how to work practically in this way, as I and those who use DoView Visual Planning and Management do all the time, see http://doview.com/plan/draw.html, which shows you the 13 rules for building elegant but accessible visual models that you can use in the way described above. This url, http://doview.com/plan, shows you how you can use the whole process for accountability, evaluation, reporting etc.

Want more detail and references to this thinking? The following is a technical article about this issue (read the summary referenced at the start of it if you do not have time to read the whole article): Duignan, P. (2009). Simplifying the use of terms when working with outcomes. Outcomes Theory Knowledge Base Article No. 236. ( http://outcomestheory.wordpress.com/article/simplifying-terms-used-when-working-2m7zd68aaz774-73/ ). The substance of this article formed the basis for Duignan, P. (2009) Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.)

And the following article talks about the different dimensions we get mixed up in our outcomes and evaluation work:

Duignan, P. (2009). Features of steps and outcomes appearing in outcomes models. Outcomes Theory Knowledge Base Article No. 208. ( http://outcomestheory.wordpress.com/article/features-of-steps-and-outcomes-2m7zd68aaz774-20/ ).

Paul Duignan, PhD. Follow me on this OutcomesBlog.org; Twitter.com/PaulDuignan; or via my E-newsletter and resources at OutcomesCentral.org.

New How-To Guides on DoView Site – What's an outcomes (results) model?

I have not been blogging for a while as I’ve been caught up in preparing multiple resources on outcomes models and also actually developing many outcomes models for clients. I now have many great examples which I want to share with you in the coming months. It’s only now that a number of these projects are coming to a conclusion and clients are becoming willing to share them with others. So watch this space.

In the meantime, on the DoView site some new How-To Guides are starting to be put up. The first is on What’s a DoView Outcomes (Results) Model and Why Should I Use One? This is in response to requests from DoView enthusiasts who want to be able to refer people to a quick article about what an outcomes model is and why people should use one for all of their project and organizational planning.

The evolution of the logic model

I’ve just posted an article on the evolution of the logic model within evaluation. Over the last couple of decades, increasing numbers of evaluators have started using logic models. For those not familiar with what logic models are – they are simply tabular or visual representations of all of the lower-level steps needed to achieve high-level outcomes for a program, organization or other intervention. They go by different names, for instance: program logics, intervention logics, results maps, theories of change, program theories, results hierarchies, strategy maps, end-means diagrams etc. A traditional way of drawing logic models has evolved (known as the inputs, outputs, intermediate outcomes, final outcomes structured logic model) which often attempts to restrict logic models to a single page. However, many evaluators are now breaking away from the constraints of this traditional format and exploring various alternative ways of representing logic models.

How many evaluators does it take to change a light bulb?

In response to a series of ‘How many evaluators does it take to change a light bulb?’ jokes on the evaluators list EVALTALK, I whipped up an outcomes model (logic model) for a Changing Light Bulbs Project (some days one does wonder if this is what evaluators do for fun – it must be some sort of illness!).

Anyway here it is http://www.outcomesmodels.org/models/lightbulb62.html

Paul Duignan, PhD. (Follow me on my Outcomes Blog; Twitter; or via my E-Newsletter).

Christmas is over – back to blogging

I’ve been quiet for some time on the blogging front as I’ve been developing a number of resources on outcomes modeling (logic modeling), evaluation and related topics. Now that Christmas is over and I’m back at work, I’ll be blogging over the next few weeks about these resources. I’ll also be blogging about DoView 2.0, which has just been released. It’s full of new features suggested by users, and I’ve been building a range of really cool outcomes models with clients using the beta version and feeding that back into its development.

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that by using a central tool of outcomes work – outcomes models (a particular type of visual model of all of the high-level outcomes the organization is seeking to achieve and the steps it is taking to do so) – organizations and programs can do both at the same time.

Using an outcomes modeling approach to action research

Will get back to blogging on the Australasian Evaluation Society Conference when I get a moment (may not be for a few days). In the meantime I had to prepare an article about using outcomes modeling as a basic tool within an action research approach. Because outcomes modeling – developing visual outcomes models (like a type of logic model, or theory of change model) according to the outcomes theory set of standards for building such models – is a generic process, such models can be used for a wide range of purposes. They can, for instance, be used within an action research approach. Action research is an approach which attempts to work in cycles of research/action/research. It has the great virtue of ensuring that research is connected to action and action is connected to research.