Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s LinkedIn page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in different aspects of these for a client, from the client’s point of view they are all just parts of the organization’s work which somehow need to be integrated and aligned.

When working with clients, it really helps to have an approach which lets you move from, say, monitoring and evaluation planning back to strategic planning, so you can track whatever the organization’s focus is at any moment. From the client’s point of view, it means that monitoring, evaluation and so on are seamlessly aligned with strategic planning and other organizational functions.

For instance, working with a client yesterday, using our approach and software, we were building a DoView Visual M&E Plan with them (http://doview.com/plan/evaluation.html). These plans are based on a DoView Visual Outcomes Model (http://doview.com/plan/draw.html). The client then said, ‘It’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy.’

So we immediately and seamlessly moved on to that task within the same meeting. We took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions (see http://doview.com/plan).
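To make the ‘line-of-sight’ idea concrete, here is a minimal sketch (in Python, with made-up priority and project names – it is not how DoView works internally) of what such a check amounts to: every priority marked on the model should have at least one planned project mapped onto it, and every planned project should map onto at least one priority.

# Priority boxes marked up on the outcomes model (illustrative names only).
priorities = {"Reduced waiting times", "Improved staff retention", "Better data quality"}

# Planned projects, each mapped onto the model boxes it is meant to contribute to.
projects = {
    "Triage redesign": {"Reduced waiting times"},
    "Exit-interview programme": {"Improved staff retention"},
    "Website refresh": {"Brand awareness"},  # maps onto a box that is not a priority
}

# Boxes that at least one project claims to contribute to.
covered = set().union(*projects.values())

# Priorities with no project behind them, and projects not aimed at any priority.
unsupported_priorities = priorities - covered
unaligned_projects = [name for name, boxes in projects.items() if not boxes & priorities]

print("Priorities with no planned project:", unsupported_priorities)
print("Projects not aligned to any priority:", unaligned_projects)

Run on these illustrative data, the check flags ‘Better data quality’ as a priority with nothing planned against it and ‘Website refresh’ as a project that does not support any priority – exactly the gaps a line-of-sight review is meant to surface.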

It’s great to have the flexibility to move in any direction along the strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities (http://doview.com/plan/draw.html).

Paul Duignan, PhD OutcomesBlog.org, Twitter.com/paulduignan, OutcomesCentral.org, DoView.com.

Does Monitoring and Evaluation (M&E) Planning have to be so cumbersome and painful? Just finished my Bangkok conference presentation

Bangkok Conference

I was invited to give a presentation to the 1st Pan Asia-Africa Monitoring and Evaluation (M&E) Forum: Results-Based Management & Evaluation (RBM&E) and Beyond: Increasing M&E Effectiveness held in Bangkok. I’ve just finished my presentation which was called: ‘Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful?’

I’ve had to review many Monitoring and Evaluation Plans in the past and I’ve generally found them long and tedious documents. I’ve also had to write them myself and realize that the tedium is not only on the part of the reader! It’s usually really hard to quickly overview what the M&E Plan is going to measure and the evaluation questions that are going to be asked.

Normally, once the plan has been used to get funding for the M&E work, it’s just put in a desk drawer, and other documentation is used to control the implementation of the M&E work and to make presentations about it.

In the presentation, I outlined the new DoView Visual M&E Planning approach. This approach takes the pain out of writing (and reading) M&E plans and creates major efficiencies.

It takes half the time to create an M&E plan; it’s entirely visual, which makes it easy to see what is, and (just as importantly) what’s not, being measured; the same DoView file can be used to control the implementation of the M&E work; all presentations can be made using the DoView M&E Plan itself (you don’t need to create additional PowerPoints); and you can, if you wish, fully integrate project strategic planning into M&E planning (the Holy Grail of putting the ‘P’ – ‘Planning’ – back into ‘M&E’).

The virtual presentation was in the form of three short videos (about 6-7 minutes each) and a Skype question-and-answer session afterwards.

Check out the three short videos of the presentation here. The first video describes the reasons we should move on from the traditional approach, and the second and third videos show you how to use the new DoView paradigm. If you want the resource page on the DoView website which shows you how to build a DoView Visual M&E Plan and gives an example you can download, it’s here.

Paul Duignan, PhD. Blogs at OutcomesBlog.org and is at Twitter.com/PaulDuignan. You are welcome to participate in the DoView Community of Practice on LinkedIn. Download a DoView trial at DoView.com.

Stop the terminological madness now! ‘Outcomes’, ‘impact’, ‘results’, ‘goals’ and the Buffalo Dung Problem

All I can ask is ‘When will it stop?’ As we speak I’m burning up bandwidth on an EVALTALK (the evaluators’ list) discussion about the distinction between a ‘goal’ and a ‘mission’. On LinkedIn, people are arguing about the distinction between a ‘result’ and an ‘outcome’, and I have someone emailing me from Europe preoccupied with why I don’t draw a distinction between an ‘outcome’ and an ‘impact’ in my outcomes theory work.

I think that Karyn Hicks on EVALTALK has come up with the best term for these debates, calling them the Buffalo Dung Problem! It stems from her being in a meeting involving one of these endless debates and her Director hollering, ‘Well, #!@, we can just call it buffalo dung for all I care!’ From then on, she’s called it the Buffalo Dung Problem.

Most of these Buffalo Dung discussions are a total waste of time, and we can think about why in terms of two underlying issues:

1. These terms are all used in a common-sense way by stakeholders to mean roughly the same thing: ‘the stuff we’re trying to achieve’. It’s ultimately futile to try to force the rest of the world to use them in the very specific ways that suit us for our technical work. If we were physicists, and no one had any common-sense uses for our terms – like bosons and quarks – we could define them how we liked and insist that people use them in a very precise technical way. We simply do not have that power, because we work amongst a wide variety of lay stakeholders who will use terms in whatever way they want to.

2. When we insist on using terms in a particular way, we are usually trying to pack into one term a number of technical distinctions which it is better to tease apart. These distinctions include: 1) where something fits within a causal pathway; 2) whether it’s measurable or not; 3) whether it’s controllable or not; and 4) whether it’s going to be used for accountability or not.

For instance, in one of the discussions I’m involved in at the moment, it’s being suggested that maybe the term ‘goal’ should be restricted to: 1) something that sits below a ‘mission’ within a causal pathway; 2) something that is measurable; and 3) something that is controllable. The problem is that when we ask an unsuspecting lay person to give us their ‘goals’, they have no way of knowing from that word alone that we want something with three specific technical characteristics. It’s far clearer to forget the word ‘goal’ and tell them that we want something that is measurable and controllable by them (distinctions 2 and 3 above). We can achieve the first distinction – the position in the causal pathway – much more elegantly by just doing the whole thing in the form of a visual outcomes model.

A fully visual approach gets rid of a lot of the terminological madness which stems from trying to specify a particular location within a causal pathway, e.g. having to insist that a process comes before an immediate outcome, which comes before an intermediate outcome, which comes before an impact. When you try to do it this way, you inevitably get people asking you where a result, goal, mission and vision fit into the schema.

You can eliminate this entire debate by simply working in a totally visual way. You can do the whole work of building an outcomes model visually just by talking about boxes within the model and the end-box(es). Being a little less extreme, I normally talk about steps, and at the end of the steps there are final outcomes. But I couldn’t care less what people want to call the boxes at the end of the visual model. The visual approach eliminates the need to use words to describe particular positions within the causal pathway – you can just point at them (or, if you are not physically present, color them, e.g. ‘the green boxes’).

Having eliminated this major cause of terminological stress by working visually, you can then deal with distinction 2, measurement. This is best thought of in terms of a measurement being an object you put onto the visual model next to a box: something that measures that box. I happen to call these indicators, but again I couldn’t really care less what you call them as long as you maintain the idea of measuring things.

Then you need to deal with the third distinction – controllability. This is best done by simply marking up the indicators that are controllable in some way. Make them red, put a letter next to them, whatever you like. Just think of it in terms of a particular type of indicator being controllable.

Lastly, you need to deal with distinction 4 – whether a party is going to be held accountable for something. This is best dealt with by simply marking up the indicators which a party will be held accountable for. In the public and non-profit sectors, these are usually exactly the same as the controllable indicators you’ve just marked up.
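For readers who like to see the distinctions laid out concretely, here is a minimal, hypothetical sketch (in Python, with invented names – this is not how DoView itself stores models) of how the four distinctions stay separate once you work this way: position in the causal pathway is carried by the links between boxes, measurement by indicators attached to boxes, and controllability and accountability by simple markings on individual indicators.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    controllable: bool = False   # distinction 3: can the party directly influence it?
    accountable: bool = False    # distinction 4: will the party be held to account for it?

@dataclass
class Box:
    label: str
    indicators: list[Indicator] = field(default_factory=list)   # distinction 2: what measures this box
    leads_to: list["Box"] = field(default_factory=list)         # distinction 1: position shown by links, not labels

# A tiny illustrative model: one step leading to one final outcome box.
training = Box(
    "Staff trained in new triage process",
    [Indicator("% of staff completing training", controllable=True, accountable=True)],
)
waiting = Box(
    "Reduced waiting times",
    [Indicator("Median wait time (minutes)")],  # measured, but not directly controllable
)
training.leads_to.append(waiting)

for box in (training, waiting):
    for ind in box.indicators:
        print(box.label, "->", ind.name,
              "| controllable:", ind.controllable,
              "| accountable:", ind.accountable)

Nothing in the sketch needs the words ‘goal’, ‘result’, ‘impact’ or ‘mission’: the boxes and their links carry position, and the markings on indicators carry everything else.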

It’s as easy as that: you simply do not need the terminological madness so many people are currently caught up in. I would love someone to work out the sum total of human time, effort and bandwidth (and hence dollars) currently going into these endless terminological debates.

William of Occam was a medieval philosopher who came up with Occam’s Razor – ‘do not multiply entities beyond necessity’. He was trying to stop the type of madness where people in his time would make dozens of distinctions between different types of angels. We have the same problem on our hands at the moment with the Buffalo Dung Problem. I’m an Occam’s Razor fan myself – let’s just stop the madness!

To see how to work practically in this way, as I and those who use DoView Visual Planning and Management do all the time, see http://doview.com/plan/draw.html – that link shows you the 13 rules for building elegant but accessible visual models that you can use in the way described above. This url, http://doview.com/plan, shows you how you can use the whole process for accountability, evaluation, reporting etc.

Want more detail and references for this thinking? The following is a technical article about this issue (read the summary referenced at the start of it if you do not have time to read the whole article): Duignan, P. (2009). Simplifying the use of terms when working with outcomes. Outcomes Theory Knowledge Base Article No. 236. (http://outcomestheory.wordpress.com/article/simplifying-terms-used-when-working-2m7zd68aaz774-73/). The substance of this article formed the basis for Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.

And the following article discusses the different dimensions that get mixed up in our outcomes and evaluation work:

Duignan, P. (2009). Features of steps and outcomes appearing in outcomes models. Outcomes Theory Knowledge Base Article No. 208. (http://outcomestheory.wordpress.com/article/features-of-steps-and-outcomes-2m7zd68aaz774-20/).

Paul Duignan, PhD. Follow me on OutcomesBlog.org, Twitter.com/PaulDuignan, or via my E-newsletter and resources at OutcomesCentral.org.

DoView for Mac and DoView 3.0 out – Phew!

Since my last post on our work-life balance, things have continued at a hectic pace! We’ve been developing resources for use by clients and others, building models, training people in the use of DoView and our approach, and getting DoView for Mac and DoView 3.0 out. They have now been released, so that is fantastic. A quick alert: for the next four days, both of them are available at the old price before the new price kicks in. So if you’ve been waiting, as many people have, for DoView for Mac or DoView 3.0 to appear, get in now (http://www.doview.com). I’ll be blogging about some of the new features over the coming weeks and will also start to get out the new resources and materials we have been working on.

Paul Duignan, PhD. Follow me on OutcomesBlog.org, Twitter, or via my E-Newsletter; many resources at OutcomesCentral.org.

The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think most people would agree with this sentiment. The key issue for me is the most effective way of ‘looking behind the numbers’ when measuring people’s, projects’ or organizations’ performance. Continue reading

New How-To Guides on DoView Site – What’s an outcomes (results) model?

I have not been blogging for a while as I’ve been caught up in preparing multiple resources on outcomes models and also actually developing many outcomes models for clients. I now have many great examples which I want to share with you in the coming months. It’s only now that a number of these projects are coming to a conclusion and clients are becoming willing to share them with others. So watch this space.

In the meantime, some new How-To Guides are starting to go up on the DoView site. The first is on What’s a DoView Outcomes (Results) Model and Why Should I Use One? This is in response to requests from DoView enthusiasts who want to be able to refer people to a quick article about what an outcomes model is and why people should use one for all of their project and organizational planning. Continue reading

The evolution of the logic model

I’ve just posted an article on the evolution of the logic model within evaluation. Over the last couple of decades, increasing numbers of evaluators have started using logic models. For those not familiar with them, logic models are simply tabular or visual representations of all of the lower-level steps needed to achieve high-level outcomes for a program, organization or other intervention. They go by different names, for instance: program logics, intervention logics, results maps, theories of change, program theories, results hierarchies, strategy maps and end-means diagrams. A traditional way of drawing logic models has evolved (known as the inputs, outputs, intermediate outcomes, final outcomes structured logic model) which often attempts to restrict the model to a single page. However, many evaluators are now breaking away from the constraints of this traditional format and exploring alternative ways of representing logic models. Continue reading

Developing an M&E plan using a visual approach

On various lists I am on, I often see requests from people wanting to develop what is called an M&E plan. This terminology is often used in the international development area; it refers to a Monitoring and Evaluation Plan. The way these requests are made makes me think that the way you should monitor and evaluate different projects varies a great deal. Continue reading

Christmas is over – back to blogging

I’ve been quiet for some time on the blogging front as I’ve been developing a number of resources on outcomes modeling (logic modeling), evaluation and related topics. Now that Christmas is over and I’m back at work, I’ll be blogging in the next few weeks about these resources. I’ll also be blogging about DoView 2.0, which has just been released. It’s full of new features suggested by users, and I’ve been building a range of really cool outcomes models with clients using the beta version, feeding that experience into its development. Continue reading

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that by using a central tool of outcomes work – outcomes models (a particular type of visual model of all of the high-level outcomes the organization is seeking to achieve and the steps it is taking to achieve them) – organizations and programs can do both at the same time. Continue reading