For a while at least I’m going to experiment with moving my blogging over to the DoView Blog (DoView.com/blog). I’m currently spreading myself too thin across various social networks and am trying to consolidate. Come over and check out my blog posts there.
Using the DoView Visual Planning Approach with climate change adaptation
In a webinar I ran earlier this week at the invitation of Sea Change – the Southeast Asia Community of Practice for Monitoring and Evaluation of Climate Change Interventions – I outlined how large scale visual outcomes models (visual Outcomes DoViews) can be used to make the problem of planning, monitoring and evaluating climate change interventions easier.
What I’m trying to do in my work is to make planning both faster and easier in complex areas like climate change through a radical visual planning approach. We think that the DoView approach to visually modelling programs and proving success has struck the ‘sweet spot’ between the overly complex and the overly simplistic: it leverages the power of visualization without over-complicating it. Click on the image below to explore the webpage version of the Climate Change Water Sector DoView.
Stop the terminological madness now! ‘Outcomes’, ‘impact’, ‘results’, ‘goals’ and the Buffalo Dung Problem
All I can ask is ‘when will it stop’? As we speak I’m burning up bandwidth on an EVALTALK (the evaluators’ list) discussion about the distinction between a ‘goal’ and a ‘mission’. On LinkedIn, people are arguing about the distinction between a ‘result’ and an ‘outcome’, and someone is emailing me from Europe preoccupied with why I don’t draw a distinction between an ‘outcome’ and an ‘impact’ in my outcomes theory work.
I think that Karyn Hicks on EVALTALK has come up with the best term for these debates, calling them the Buffalo Dung Problem! This stems from her being in a meeting involving one of these endless debates and her Director hollering ‘Well #!@ we can just call it buffalo dung for all I care’! From then on she’s called it the Buffalo Dung Problem.
Most of these Buffalo Dung discussions are a total waste of time. There are two underlying issues:
1. These terms are all used in a common-sense way by stakeholders to mean roughly the same thing: ‘the stuff we’re trying to achieve’. It’s ultimately futile to try to force the rest of the world to use them in the very specific ways that suit our technical work. If we were physicists, whose terms – like bosons and quarks – have no common-sense uses, we could define them how we liked and insist that they be used in a very precise technical way. We simply do not have that power, because we work amongst a wide variety of lay stakeholders who will use terms in whatever way they want to.
2. When we insist on using terms in a particular way, we are usually trying to pack into one term a number of technical distinctions which are better teased apart. These distinctions include: 1) where something fits within a causal pathway; 2) whether it’s measurable or not; 3) whether it’s controllable or not; 4) whether it’s going to be used for accountability or not.
For instance, in one of the discussions I’m involved in at the moment, it’s being suggested that maybe the term ‘goal’ should be restricted to: 1) something sitting below a thing called a ‘mission’ within a causal pathway; 2) something that is measurable; and 3) something that is controllable. The problem is that when we ask an unsuspecting lay person to give us their ‘goals’, they have no way of knowing from that word alone that we want something with three specific technical characteristics. It’s far clearer to forget the word ‘goal’ and tell them that we want something that is measurable and controllable by them (distinctions 2 and 3 above). We can achieve the first distinction – position in the causal pathway – much more elegantly by just doing the whole thing in the form of a visual outcomes model.
A fully visual approach gets rid of a lot of the terminological madness which stems from trying to specify a particular location within a causal pathway, e.g. having to insist that a process comes before an immediate outcome, which comes before an intermediate outcome, which comes before an impact. When you do it this way you inevitably get people asking you where a result, goal, mission and vision fit into the schema.
You can eliminate this entire debate by simply working in a totally visual way. You can do the whole work of building an outcomes model visually just by talking about boxes within the model and the end-box(es). Being a little less extreme, I normally talk about steps, with final outcomes at the end of the steps. But I couldn’t care less what people want to call the boxes at the end of the visual model. The visual approach eliminates the need to use words to describe particular positions within the causal pathway – you can just point at them (or, if you are not physically present, color them, e.g. ‘the green boxes’).
Having eliminated this major cause of terminological stress by working visually, you can then deal with distinction 2, measurement. This is best thought of in terms of a measurement being an object you put onto a visual model next to a box: something that measures that box. I happen to call these indicators, but again I couldn’t really care less what you call them as long as you maintain the idea of measuring things.
Then you need to deal with the third distinction – controllability. This is best done by simply marking up the indicators that are controllable in some way. Make them red, put a letter next to them, whatever you like. Just think of it in terms of a particular type of indicator being controllable.
Lastly you need to deal with distinction 4 – whether a party is going to be held accountable for something. This is best dealt with by simply marking up the indicators the party will be held accountable for. In the public and non-profit sectors, these are usually exactly the same as the controllable indicators you’ve just marked up.
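For readers who think in code, the four distinctions can be kept cleanly separate in a small data structure: a box’s position in the causal pathway is implied by its links, and the indicators attached to each box carry the controllability and accountability flags. This is purely my illustrative sketch – the class and attribute names are hypothetical and are not part of DoView or any particular tool.

```python
# Hypothetical sketch of the four distinctions, kept separate rather
# than packed into terms like 'goal' or 'impact'.

class Indicator:
    """A measure attached to a box (distinction 2: measurability)."""
    def __init__(self, name, controllable=False, accountable=False):
        self.name = name
        self.controllable = controllable  # distinction 3
        self.accountable = accountable    # distinction 4

class Box:
    """A step or outcome in the visual model."""
    def __init__(self, label):
        self.label = label
        self.leads_to = []    # distinction 1: position comes from links
        self.indicators = []  # measures for this box

    def link_to(self, other):
        self.leads_to.append(other)

# Build a tiny model: two steps leading to a final outcome.
train = Box("Staff trained")
practice = Box("Evidence-based practice used")
outcome = Box("Better client outcomes")
train.link_to(practice)
practice.link_to(outcome)

# Mark up indicators instead of arguing about terminology.
train.indicators.append(Indicator("% of staff completing training",
                                  controllable=True, accountable=True))
outcome.indicators.append(Indicator("Client wellbeing score"))

# 'Final outcomes' need no special term: they are simply the boxes
# with no outgoing links.
final = [b for b in (train, practice, outcome) if not b.leads_to]
```

Note that nothing here depends on what the boxes are called: ‘goal’, ‘result’ and ‘impact’ all collapse into a box’s position in the link structure plus the flags on its indicators.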
It’s as easy as that: you simply do not need the terminological madness so many people are currently caught up in. I would love someone to work out the sum total of human time, effort and bandwidth (and hence dollars) currently going into these endless terminological debates.
William of Occam was a medieval philosopher who came up with Occam’s Razor – ‘do not multiply entities beyond necessity’. He was trying to stop the type of madness in which people of his time made dozens of distinctions between different types of angels. We have the same problem on our hands at the moment with the Buffalo Dung Problem. I’m an Occam’s Razor fan myself – let’s just stop the madness!
To see how to work practically in this way, as I and those who use DoView Visual Planning and Management do all the time, see http://doview.com/plan/draw.html, which shows you the 13 rules for building elegant but accessible visual models that you can use in the way described above. This url, http://doview.com/plan, shows you how you can use the whole process for accountability, evaluation, reporting etc.
Want more detail and references for this thinking? The following is a technical article about this issue (read the summary referenced at the start of it if you do not have time to read the whole article): Duignan, P. (2009). Simplifying the use of terms when working with outcomes. Outcomes Theory Knowledge Base Article No. 236. ( http://outcomestheory.wordpress.com/article/simplifying-terms-used-when-working-2m7zd68aaz774-73/ ). The substance of this article formed the basis for Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.
And the following article discusses the different dimensions that get mixed up in our outcomes and evaluation work:
Duignan, P. (2009). Features of steps and outcomes appearing in outcomes models. Outcomes Theory Knowledge Base Article No. 208. ( http://outcomestheory.wordpress.com/article/features-of-steps-and-outcomes-2m7zd68aaz774-20/ ).
Paul Duignan, PhD. Follow me on this OutcomesBlog.org; Twitter.com/PaulDuignan; or via my E-newsletter and resources at OutcomesCentral.org.
Currently 'blogging' about my work in a LinkedIn Group rather than here
In order to keep the number of places where I am networking to a manageable level, I’m currently focusing my discussion and comments on outcomes, evaluation etc. within a LinkedIn Group called DoView Visual Outcomes Planning (http://tinyurl.com/doviewplanningln). If you want to catch up with my latest thinking, have a look at that group and post any comments or questions you have there. (It is really easy to join LinkedIn, and it is worthwhile for making professional contacts.)
I’ll see how it goes using LinkedIn in this way as my primary site for commenting at the moment. I may come back to blogging here at OutcomesBlog at some time. You can track me on Twitter (http://twitter.com/paulduignan) to keep up with what I’m posting where.
Op Ed on junk economic estimates
I’m back blogging now. As an outcomes specialist, I’m always irritated when I see people making what I think are junk estimates. There’s a lot of this going on in economics, from the credit rating agencies underestimating the risk of junk bonds through to the various estimates plugged into the cost-benefit and other economic analyses that I see from time to time.
One I got my teeth into recently, in an Op-Ed I wrote, is an estimate of the cost of one of the many aftershocks which struck Christchurch, NZ, in its recent sequence of earthquakes. In this case, the estimate did not seem to take into account the fact that many of the buildings damaged in the aftershock had already been damaged in the earlier earthquakes. As one commentator said, ‘there are only so many times you can write off a building’!
This estimate was shot down by many people when it first appeared, but the problem with junk economic estimates is that they tend to pop up again later like zombies – as this one did – without the original criticism which put them in perspective. Anyway, check out the Op-Ed at http://www.outcomescentral.org/files/duignan299quake.pdf.
Paul Duignan, PhD. (Follow me on my OutcomesBlog.org; Twitter; or via my E-Newsletter; many resources at OutcomesCentral.org.)
DoView use for visual strategic planning growing
My best intentions of blogging on a regular basis have fallen by the wayside as I have been swamped with work from all directions. Just last week I spent the evenings webinaring into Cairo and into Europe, as demand mounts from people wanting to know about advanced ways of using DoView outcomes and evaluation software. Interestingly, my day consulting work at the moment is focused on a number of assignments where DoView is being used for strategic planning rather than evaluation work. I will be putting up some case studies of this work soon, as some of the organizations I’m working for are graciously letting me share the work I’m doing with them. More on these case studies in later blog postings. Continue reading
Developing a comprehensive sector visual outcomes model
In earlier blog postings I’ve talked about the use of large-scale outcomes models for various purposes such as overall policy development, evidence-based practice, and monitoring and evaluation. The next version of DoView outcomes and evaluation software [Disclosure: I am involved in the development of DoView] is going to allow images to be included in DoView files and in the web page models which can be created within DoView and then put up on the web. A mock-up of what parts of a visual sector plan using the upcoming version of DoView could look like is available. This will be a no-cost update to DoView, planned for release in February 2009.
What can be claimed about whether a program works or not from a logic model?
We sometimes hear things like: “a logic model was used to show that the program works”. I’m interested in tidying up such talk so that we are very clear about exactly what is being claimed about whether or not a program works. I’ve put up a new article in Knol which deals with the types of claims we can make in regard to logic models (I call them outcomes models) and the types of arguments we can mount, based on these claims, about whether or not programs work. The article is here. The article is rather intense, but I think it’s important that, as evaluators, we get on top of this sort of thing. In the article, I set out three claims that can be made in regard to logic models (or sub-parts of logic models). These are: Continue reading
Drawing logic models (outcomes models)
I have just finished a substantial article on what I call outcomes models. They go by names such as logic models, program logics, program theories, theories of change, ends-means diagrams, strategy maps etc., and are used widely throughout evaluation.
In the article I attempt to go beyond the traditional received conventions about logic models, which I think usually go unexamined in ‘how-to’ books and in much of the discussion of logic models.
The essence of my argument is that:
- There are six possible purposes for outcomes models – the first being to provide a picture of ‘what it is believed causes what to happen’ – what I call a ‘full’ outcomes model.
- Because people also attempt to use models to indicate measurability and demonstrability (attribution of changes in high-level steps and outcomes to a program), they end up truncating their models so much (limiting them to the measurable or the demonstrable) that they fail to achieve the first purpose of creating a picture of a ‘full’ model. Measurement and demonstrability (attribution) should be dealt with after the model has been built.
- Most ‘technologies of representation’ – tables, single-page diagrams etc. – are hopeless at representing anything like a ‘full’ model of the causality behind a program. Hence they attract attacks as inadequate representations of programs.
- Models should be drawn in logic model software such as DoView so that full models can be represented (as large as they need to be, with anything linked to anything else), and then measurement and demonstrability (attribution) can be mapped back onto them. This means they can achieve the purposes of the traditional approaches to logic models without their limitations.
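The sequence argued for above – build the full causal model first, map measurement and attribution onto it afterwards – can be sketched in a few lines of code. The step names and set names here are my own hypothetical example, not anything from DoView or the article.

```python
# Hypothetical sketch: a 'full' outcomes model as a set of causal
# links, with measurement and attribution mapped on afterwards
# rather than used to truncate the model.

links = {  # each step -> the steps it is believed to cause
    "Run workshops": ["Teachers adopt program"],
    "Teachers adopt program": ["Students change behavior"],
    "Students change behavior": ["Reduced harm"],
}

# Step 1: the full model contains every step believed to cause
# something or be caused by something.
all_steps = set(links) | {s for targets in links.values() for s in targets}

# Step 2: only after the full model exists, mark which steps are
# measurable, and which changes could plausibly be attributed
# (demonstrated) to the program.
measurable = {"Run workshops", "Teachers adopt program"}
demonstrable = {"Run workshops"}

# Truncating the model to the measurable would silently drop the
# high-level steps the program is ultimately about:
lost_if_truncated = all_steps - measurable
```

The point the sketch makes concrete: the measurable and demonstrable steps are subsets of the full model, so limiting the drawing to them discards exactly the high-level outcomes (here, ‘Reduced harm’) that justify the program in the first place.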
If you have a moment, check out the article; any feedback is appreciated. The article is here.
Paul Duignan PhD
Getting evidence implemented in practice – easier said than done – the Rich Dialog Process
Getting evidence implemented in practice is a lot more difficult than just waving around the odd evaluation report and pointing someone at an evidence-based web site like the Cochrane or Campbell collaboration sites. I was involved in a process earlier this year which was an attempt to work with a sector to get evidence-based practice implemented. It used the Rich Dialog Process (RDP) – a simple deliberation and dialog process to get groups of stakeholders who may have different views on an issue to talk those differences through. The focus was on getting evidence-based practice really happening in school drug and alcohol education, an area in which many different types of programs are running. Continue reading
