More fun with the accountants – this time Integrated Reporting Standards

Today I’ve been at another iteration of my gig on the accountants’ professional development road-show conference, this time in another city (see my last blog post on what my presentation has been about – Key Performance Indicators (KPIs)). I’ll try to blog later in more detail about KPIs for those who, like me, are obsessed with their technical characteristics.

In the meantime, another really engrossing topic (for people like me) – Integrated Reporting Standards. I heard a presentation by Mark Hucklesby on the newly developed Integrated Reporting Standards which are currently out for consultation until July.

A quick aside on standards. One of the great things about accountants is that they’re obsessed with standard-setting. They have standards for everything, and technical committees meeting all the time to figure out new standards.

Standards are great because they bring about consistency. They also get the best minds in the business focused on the technical trade-offs that come up when reporting and how these are best dealt with.

In the broader area of outcomes systems – the way we identify and measure outcomes of any type in any sector – I really wish there were a parallel structure to the various official and unofficial standard-setting that goes on in accounting. Instead of the order the accountants have in their world, our area of broader outcomes reporting is really like the Wild West. Of course the accountants have had about 500 years to get their area sorted while we’ve only been focusing on outcomes in the modern sense of the term for maybe 30 years or so.

The Integrated Reporting Standards are a new initiative which can be seen as a sort of reinvented Triple Bottom Line (economic, social and environmental). More information on the initiative is available at http://www.theiirc.org/.

They have come up with a set of six ‘capitals’:

  • Financial
  • Manufactured
  • Intellectual
  • Human
  • Social and relationship
  • Natural

I think that calling them ‘capitals’ is maybe a bit obscure for the average person. I would see them as ‘outcome areas’ or something similar. However, I can see how they ended up using the term capital. They wanted to convey the idea that companies take aspects of these six capitals and add value to them. The concept is set out in the diagram below from their draft standards document.

I raised two issues with Mark in the discussion time. The first was whether there had been any consideration of distinguishing between controllable and not-necessarily controllable indicators in the integrated reporting framework. This is a crucial distinction I draw in my outcomes theory work – http://outcomescentral.org/outcomestheory.html#6.

The purpose of integrated reporting is to give investors and others a crystal clear picture of the risks and opportunities a company is involved in. Confusing controllable with not-necessarily controllable indicators lies at the heart of many of the problems that arise from misunderstanding the true underlying risk profile one is exposed to, in both the private and public sector. Mark agreed with the importance of the controllability issue. My second point was whether the standards would allow for a range of reporting approaches. He said that the standards did not stipulate any one way of actually presenting an integrated report. This is good news for someone like me who thinks that the only way of reporting these days is to use a visual approach, because of its clear advantages.
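To make the controllability distinction concrete, here is a minimal sketch of what tagging indicators by controllability might look like. This is purely my own illustration – the indicators, values and structure are hypothetical and are not drawn from the draft standards.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    capital: str        # one of the six capitals, e.g. "Natural"
    value: float
    controllable: bool  # True if the reporting entity largely controls it

# Hypothetical indicators, for illustration only
indicators = [
    Indicator("Staff trained in new process", "Human", 420, True),
    Indicator("CO2 emissions per unit produced", "Natural", 1.8, True),
    Indicator("Regional air quality index", "Natural", 63.0, False),
    Indicator("Community trust survey score", "Social and relationship", 7.1, False),
]

# Controllable indicators can fairly be read as performance;
# not-necessarily controllable ones describe context and risk exposure.
performance = [i.name for i in indicators if i.controllable]
context = [i.name for i in indicators if not i.controllable]

print("Report as performance:", performance)
print("Report as context/risk exposure:", context)
```

The point of separating the two lists is simply that a reader can then see which numbers the company can reasonably be held to account for, and which describe the environment it is operating in.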

Anyway, sometime when I’m wanting a little light reading I’ll delve into the standards and report back in this blog on what’s interesting from the point of view of those of us interested in outcomes theory, measurement and strategy.

[Diagram: the six capitals, from the draft Integrated Reporting framework document]

Find out more at http://About.Me/PaulDuignan

What could the politician at the party claim credit for?

I was at a party the other night talking with a group of people about what I do in the outcomes area. The normal reaction I get when I tell them that I’m a psychologist is straightforward. However, when I tell them that I’m an outcomes strategist I usually get the following reaction – they look at me, gesticulate, roll their eyes and say, ‘Oh, it’s so hard to prove that what you did changed high-level outcomes’. Of course, this is what happens in the capital city where I work, because just about everyone here is either a policy wonk or in a relationship with one. And we all know that the whole international wonkery is obsessed with measuring outcomes.

In the rest of the country I usually get blank stares and people tend to quickly move on to the next guest to talk about something that makes sense. But sometimes I get people who just don’t perceive that there’s any problem to be solved in measuring outcomes. It’s always a little disturbing to have someone imply that there’s no real basis for a whole area of work you’re involved in. I got this some time ago from a taxi driver on the way to an evaluation conference, and I got it again the other night at the party.

A guest, who I later found out was a local government politician, heard me talking about being an outcomes strategist. He launched into something along the lines of: ‘I would have thought it was very easy, just measure the dollars’. Initially presuming he worked in the private sector, I gave my usual spiel about the private sector and outcomes. In comparison to the public sector, it has the huge advantage that its outcomes are always measured (well, the ones that people mostly focus on) and the measure is a common one (the dollar) which is used right across the entire sector, regardless of the type of work people are involved in. There’s also some more complicated stuff about the sector tending to have a more relaxed attitude towards attribution (proving exactly what caused what) than the public sector. I’ll blog about that second part sometime in the future.

When I introduced the point that non-financial outcomes, rather than financial outcomes, are at the heart of what’s done in the public sector, he then said something like: ‘well, you just measure all that in surveys’. He thought that the whole problem of outcomes was simply solved by tracking outcomes over time. I pointed out that whether things were getting better in the district he was in charge of said nothing about whether this was caused by his work. Things might be getting better in every city in the world because of general positive trends affecting everyone.
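To see why a rising trend on its own proves nothing about attribution, here is a toy numerical example. The figures are entirely made up for illustration – they compare his district with some comparison districts exposed to the same general positive trend.

```python
# Made-up figures: an outcome indicator (say, a resident satisfaction score)
# measured before and after the politician's term, in his district and in
# three comparison districts subject to the same general trends.
district_before, district_after = 60.0, 66.0
others_before = [58.0, 61.0, 63.0]
others_after = [64.0, 67.0, 69.0]

# The naive reading credits all of this improvement to his work
raw_change = district_after - district_before  # +6.0

# Average change in the comparison districts: the general trend affecting everyone
general_trend = sum(a - b for a, b in zip(others_after, others_before)) / len(others_before)  # +6.0

# Netting out the general trend leaves roughly nothing attributable to his work
net_change = raw_change - general_trend

print(f"Raw improvement in his district: {raw_change:+.1f}")
print(f"Improvement net of the general trend: {net_change:+.1f}")
```

In this made-up case the district improved by six points, but so did everywhere else, so the improvement tells us nothing about his contribution.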

Up until this point, in my view, he was simply committing the basic outcomes fallacy of thinking that measuring a not-necessarily controllable indicator somehow shows that one has improved it (see Duignan’s Six Types of Evidence That a Program Works diagram).

When I told him as politely as I could that I thought he was not actually proving anything about what he was personally making happen, he introduced a more sophisticated argument which cannot be dismissed so easily. This argument was that he ‘hears from the people all the time’ and that he gets feedback from the different encounters he has with the people who live in his district. He also added that ultimately they would tell him if he wasn’t doing a good job.

Our conversation got interrupted about this time so I didn’t get to continue talking to him. However, thinking in formal outcomes theory terms, in this second part of the conversation he could have been making two somewhat different arguments. One is that his immersion in the nitty-gritty of working with the people in his district brought him into direct contact with the lower levels of the outcomes model he was seeking to achieve (the model of the steps needed to achieve high-level outcomes – which can be operationalized in the form of a visual DoView). Being able to directly ‘see’ that the lower-level steps were being put in place (e.g. new environmental regulations), having a sound logic for the intervention at hand (environmental regulation leading to a better environment), and having a measure showing that environmental issues were improving, he could reasonably claim to have established that he was having an impact. In Duignan’s Types of Impact Evaluation Designs, this is the seventh type of design: Intervention Logic (Program Theory/Theory of Change) Based Designs. It can be accepted as a credible impact design by stakeholders in some situations. Of course there’s always the question of who the observer is who is making the claim that the lower-level steps have been achieved. But presumably we could get some independent assessment as to whether the lower-level steps were, as he was claiming, happening, so the logic of the design makes theoretical sense as a way of attempting to prove impact.

An alternative argument he could have been mounting, if he wanted to be very pragmatic, is that the fact that he keeps getting re-elected is what ‘hearing from the people all the time’ means in practice. Looking at it this way, he would be defining his outcome not as changing things in his community (which he may well wish to do) but simply as getting re-elected. If this is the case, then the fact that he is regularly re-elected means that, by definition, he is achieving his ‘outcome’. And this outcome could be translated into something like ‘keeping the people satisfied’. The argument then would be that keeping the people satisfied is the best way of achieving outcomes for the community within a democracy. I think that this is an example of pulling the ‘outcome’ you’re trying to show you changed back down the outcomes model to some lower level where it’s easier to prove attribution.

So while, in my view, his initial claims about it being easy to figure out what is causing outcomes were weak and did not actually establish anything about him having an effect on outcomes, his second round of argument had more substance to it.

Want to know more? http://About.Me/PaulDuignan

Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s Linkedin page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in doing different aspects of these for a client, from a client’s point of view they’re all just parts of their organization’s work which they somehow need to integrate and align.

When working with clients, it really helps to have an approach which lets you move from doing monitoring and evaluation planning, for instance, back to strategic planning. You can then just track whatever their organizational focus is at any moment. From their point of view, it means that monitoring, evaluation etc are seamlessly aligned with strategic planning and other organizational functions.

For instance, working with a client yesterday, using our approach and software, we were building a DoView Visual M&E plan with them (http://doview.com/plan/evaluation.html). These plans are based on a DoView Visual Outcomes Model (http://doview.com/plan/draw.html). The client then said, ‘it’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy’.

So we immediately and seamlessly moved to doing this task for them within the same meeting. We just took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions (see http://doview.com/plan).
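As a rough sketch of what that ‘line-of-sight’ check boils down to – using a hypothetical structure of my own for illustration, not DoView’s actual file format – you can list the priority steps from the outcomes model and the steps each planned project addresses, then flag any priority with no project behind it:

```python
# Hypothetical priority steps marked up on the outcomes model
priority_steps = {
    "New environmental regulations adopted",
    "Community engagement increased",
    "Monitoring data published quarterly",
}

# Hypothetical planned projects, each mapped to the steps it addresses
projects = {
    "Regulation drafting project": {"New environmental regulations adopted"},
    "Open data portal": {"Monitoring data published quarterly"},
}

# 'Line-of-sight' check: every priority step should have at least one project behind it
covered = set().union(*projects.values())
for step in sorted(priority_steps):
    status = "covered" if step in covered else "NO PROJECT - alignment gap"
    print(f"{step}: {status}")
```

In this toy example, “Community engagement increased” would show up as a priority with no planned project behind it – exactly the kind of gap the alignment check is meant to surface.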

It’s great to have the flexibility to move in any direction along the strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using the one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities (http://doview.com/plan/draw.html).

Paul Duignan, PhD OutcomesBlog.org, Twitter.com/paulduignan, OutcomesCentral.org, DoView.com.

Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on Linkedin, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think that it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been talking about for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The point of this approach within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources are better spent on making sure that the program is well planned than on measuring the fact that it will often fail to achieve its outcomes because the planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This new integrated approach which is emerging needs an underlying theory which will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts etc. The work I’ve been doing in outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are going to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said – ‘there’s nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below showing how it’s a meso-level theory reaching across strategic planning, monitoring, evaluation etc.

[Diagram: outcomes theory as a meso-level theory reaching across strategic planning, monitoring, evaluation and related disciplines]
(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people just working out in the field, who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach (http://doview.com/plan). Using the approach means that they will avoid many of the technical problems which are highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the new fully integrated approach to planning, monitoring and evaluation which many are now seeking. http://doview.com/plan/draw.html.

Does Monitoring and Evaluation (M&E) Planning have to be so cumbersome and painful? Just finished Bangkok Conference Presentation

Bangkok Conference

I was invited to give a presentation to the 1st Pan Asia-Africa Monitoring and Evaluation (M&E) Forum: Results-Based Management & Evaluation (RBM&E) and Beyond: Increasing M&E Effectiveness held in Bangkok. I’ve just finished my presentation which was called: ‘Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful?’

I’ve had to review many Monitoring and Evaluation Plans in the past and I’ve generally found them long and tedious documents. I’ve also had to write them myself and realize that the tedium is not only on the part of the reader! It’s usually really hard to quickly overview what the M&E Plan is going to measure and the evaluation questions that are going to be asked.

Normally once the plan has been used to get funding for the M&E work, it’s just put in a desk drawer and other documentation is used to control the implementation of the M&E Plan and make presentations on it.

In the presentation, I outlined the new DoView Visual M&E Planning approach, which takes the pain out of writing (and reading) M&E plans and creates major efficiencies.

With this approach:

  • it takes half the time to create an M&E plan
  • the plan is entirely visual, which makes it easy to see what is, and (just as importantly) what’s not, being measured
  • the same DoView file can be used to control the implementation of the M&E work
  • all presentations can be made straight from the DoView M&E Plan (no need to create additional PowerPoints)
  • you can, if you wish, fully integrate project strategic planning into M&E planning (the Holy Grail of putting the ‘P’ – ‘Planning’ – back into ‘M&E’)

The virtual presentation was in the form of three short videos (about 6-7 minutes each) and a Skype question and answer session afterwards.

Check out the three short videos of the presentation here. The first video describes the reason we should move away from the traditional approach, and the second and third videos show you how to use the new DoView paradigm. If you want the resource page on the DoView website which shows you how to build a DoView Visual M&E Plan and gives an example you can download, it’s here.

Paul Duignan, PhD. Blogs at OutcomesBlog.org and is at Twitter.com/PaulDuignan. You are welcome to participate in the DoView Community of Practice on LinkedIn. Download a DoView trial at DoView.com.

The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think that most people would agree with this sentiment. The key issue for me is the most effective way for us to ‘look behind the numbers’ when measuring the performance of people, projects or organizations.