More fun with the accountants – this time Integrated Reporting Standards

Today I’ve been at another iteration of my road-show gig with the accountants’ professional development conference, this time in another city (see my last blog post for what my presentation has been about – Key Performance Indicators (KPIs)). I’ll try to blog later in more detail about KPIs for those who, like me, are obsessed with their technical characteristics.

In the meantime, another really engrossing topic (for people like me) – Integrated Reporting Standards. I heard a presentation by Mark Hucklesby on the newly developed Integrated Reporting Standards which are currently out for consultation until July.

A quick aside on standards. One of the great things about accountants is that they’re obsessed with standard setting. They have standards for everything and technical committees meeting all the time figuring out new standards.

Standards are great because they bring about consistency. They also get the best minds in the business focused on the technical trade-offs that come up in reporting and how these are best dealt with.

In the broader area of outcomes systems – the way we identify and measure outcomes of any type in any sector – I really wish there were a parallel structure to the various official and unofficial standard setting that goes on in accounting. Instead of the order the accountants have in their world, our area of broader outcomes reporting is really like the Wild West. Of course, the accountants have had about 500 years to get their area sorted, while we’ve only been focusing on outcomes in the modern sense of the term for maybe 30 years or so.

The Integrated Reporting Standards are a new initiative which can be seen as a sort of reinvented Triple Bottom Line (economic, social and environmental). More information on the initiative at http://www.theiirc.org/.

They have come up with a set of six ‘capitals’:

  • Financial
  • Manufactured
  • Intellectual
  • Human
  • Social and relationship
  • Natural

I think that calling them ‘capitals’ is maybe a bit obscure for the average person. I would see them as ‘outcome areas’ or something. However, I can see how they ended up using the term capital. They wanted to have the concept that companies take aspects of these six capitals and add value to them. The concept is set out in the diagram below from their draft standards document.

I raised two issues with Mark in the discussion time. The first was whether there had been any consideration of distinguishing between controllable and not-necessarily controllable indicators in the integrated reporting framework. This is a crucial distinction I draw in my outcomes theory work – http://outcomescentral.org/outcomestheory.html#6.

The purpose of integrated reporting is to give investors and others a crystal clear picture of the risks and opportunities a company is involved in. Confusing controllable with not-necessarily controllable indicators lies at the heart of many of the problems arising from misunderstanding of the true underlying risk profile one is exposed to in both the private and public sector. Mark agreed with the importance of the controllability issue. My second point was whether the standards would allow for a range of reporting approaches. He said that the standards did not stipulate any one way of actually presenting an integrated report. This is good news for someone like myself who thinks that the only way of reporting these days is to use a visual approach because of its clear advantages.
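To make the controllability distinction above a little more concrete, here is a minimal sketch (my own illustration, not anything from the draft standards) of how a reporting framework might keep the two kinds of indicator apart. All the indicator names are hypothetical.

```python
# Illustrative only: splitting a set of indicators into those an organization
# can reasonably be held to account for (controllable) and those it merely
# influences (not-necessarily controllable). Names are made up.

indicators = [
    {"name": "Number of emissions-reduction projects delivered", "controllable": True},
    {"name": "Staff trained in the new environmental procedures", "controllable": True},
    {"name": "Regional air quality index", "controllable": False},   # influenced, not controlled
    {"name": "Commodity price of key inputs", "controllable": False},
]

accountability_set = [i["name"] for i in indicators if i["controllable"]]
context_set = [i["name"] for i in indicators if not i["controllable"]]

print("Hold the organization to account for:", accountability_set)
print("Report as context / shared risk only: ", context_set)
```

Keeping the two sets visibly separate is one simple way of stopping readers of a report from confusing what the organization did with what merely happened around it.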

Anyway, sometime when I’m wanting a little light reading I’ll delve into the standards and report back in this blog what’s interesting from the point of view of those of us interested in outcomes theory, measurement and strategy.


Find out more at http://About.Me/PaulDuignan

Accountants, KPIs and dry topics

I’ve just got back from doing a presentation to an accountants’ professional development conference. I’m on a gig where I do several of the same presentation in different cities. The conference organizers gave the presentation the rather mind-numbing title of Using KPI* Reports to Enhance Organizational Performance.

Someone once told me that the way I get on in life is that I’m prepared to spend my time thinking about things (he was actually referring to analyzing KPI lists at the time) which most normal human beings would find painfully boring.

Now, the great thing about accountants is that they’re a bit like that too – you can’t scare them with a dry little title like the one above, so I had plenty of people turn up to my session.

The fact is that KPI lists (in various forms) are the central mechanism by which we translate our ideas about what should happen in the world into what actually happens on the ground. They’re a major determinant of the way the world turns out in the end. The accountants are right on the money with this one: sparing 50 minutes or so to talk about how to get KPI lists right is time well spent.

I started off my presentation by critiquing two of the most popular sayings in the KPI world – ‘what gets measured is what gets done’ and ‘organizational objectives should always be SMART – Specific, Measurable, Achievable, Relevant and Timebound’.

The problem with the first is that it results in: ‘what doesn’t get measured ends up being absent from strategic discussions’. And the second (SMART) can lead to a nasty organizational problem – PM, Premature Measurement: moving to measurement too fast, before you’ve defined your strategy.

The take away points from my presentation were: 1) we need to identify our strategy before we focus just on measurement; 2) the best way to talk about strategy is to do it visually; and, 3) once we’ve developed a visual version of our strategy, we can then simply map our indicators (KPIs) directly back onto this map. This ensures that we have alignment between what we’re measuring and the priorities we’re trying to achieve.
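As a rough illustration of the third point, here is a minimal sketch of what ‘mapping indicators onto a strategy model’ amounts to in data terms – a check that every priority step in the model has at least one KPI attached. The step names and KPIs are invented for the example; this is just a sketch of the idea, not DoView itself.

```python
# A minimal sketch (hypothetical steps and KPIs) of checking KPI-to-strategy
# alignment: every step in the strategy model should have at least one KPI.

strategy_steps = {
    "Staff trained in new sales process": ["% staff completing training"],
    "Customers offered the new service": ["Number of customer offers made"],
    "Customer uptake of the new service": [],          # priority step with no KPI yet
    "Increased revenue from the new service": ["Quarterly revenue from the service"],
}

def check_alignment(steps: dict[str, list[str]]) -> None:
    """Flag any priority step that has no indicator mapped onto it."""
    for step, indicators in steps.items():
        if indicators:
            print(f"OK   {step}: {', '.join(indicators)}")
        else:
            print(f"GAP  {step}: no KPI mapped – a blind spot in reporting")

if __name__ == "__main__":
    check_alignment(strategy_steps)
```

The point of working this way is that the gaps jump out at you, which they rarely do when you just eyeball a long list of indicators.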

One of the participants asked a key question, which is, ‘what is the best way of working out which indicators, out of a mass of indicators we might have, we should track?’

The simple answer is that the indicators we select should focus on our priorities. Working the way I suggested in my presentation is an ideal way of ensuring this. However, there are some very interesting complexities around the question of indicator selection which I’ll try to get time to blog about in a few days’ time.

I’ll post the KPI presentation after I tweak it and do the next presentation.

*Key Performance Indicators, if any of the uninitiated are reading this blog.

What could the politician at the party claim credit for?

I was at a party the other night talking with a group of people about what I do in the outcomes area. The normal reaction I get when I tell them that I’m a psychologist is straightforward. However, when I tell them that I’m an outcomes strategist I usually get the following reaction – they look at me, gesticulate, roll their eyes and say, ‘Oh, it’s so hard to prove that what you did changed high-level outcomes’. Of course, this is what happens in the capital city where I work, because just about everyone here is either a policy wonk, or in a relationship with one. And we all know that the whole international wonkery is obsessed with measuring outcomes.

In the rest of the country I usually get blank stares and people tend to quickly move on to the next guest to talk about something that makes sense. But sometimes I get people who just don’t perceive that there’s any problem to be solved in measuring outcomes. It’s always a little disturbing to have someone imply that there’s no real basis for a whole area of work you’re involved in. I got this some time ago from a taxi driver on the way to an evaluation conference. I also got it again the other night at the party.

A guest, who I later found out was a local government politician, heard me talking about being an outcomes strategist. He launched into something along the lines of: ‘I would have thought it was very easy, just measure the dollars’. Initially presuming he worked in the private sector, I gave my usual spiel about the private sector and outcomes. In comparison to the public sector, it has the huge advantage that its outcomes are always measured (well, the ones that people mostly focus on) and the measure is a common one (the dollar) which is used right across the entire sector, regardless of the type of work people are involved in. There’s also some more complicated stuff about the sector tending to have a more relaxed attitude towards attribution (proving exactly what caused what) than the public sector. I’ll blog about that second part sometime in the future.

When I introduced the point that non-financial outcomes, rather than financial outcomes, are at the heart of what’s done in the public sector, he then said something like: ‘well you just measure all that in surveys’. He thought that the whole problem of outcomes was simply solved by tracking outcomes over time. I pointed out that whether things were getting better in the district he was responsible for said nothing about whether this was caused by his work. Things might be getting better in every city in the world because of general positive trends affecting everyone.

Up until this point, in my view, he was simply committing the basic outcomes fallacy of thinking that measuring a not-necessarily controllable indicator somehow shows that one has improved it (see Duignan’s Six Types of Evidence That a Program Works diagram).

When I told him as politely as I could that I thought he was not actually proving anything about what he was personally making happen, he introduced a more sophisticated argument which cannot be dismissed so easily. This argument was that he ‘hears from the people all the time’ and that he gets feedback from the different encounters he has with the people who live in his district. He also added that ultimately they would tell him if he wasn’t doing a good job.

Our conversation got interrupted about this time so I didn’t get to continue talking to him. However, thinking in formal outcomes theory terms, in this second part of the conversation he could have been making two somewhat different arguments. One is that his immersion in the nitty-gritty of working with the people in his district brought him into direct contact with the lower levels of the outcomes model he was seeking to achieve (the model of the steps needed to achieve high-level outcomes – which can be operationalized in the form of a visual DoView). Because he could directly ‘see’ that the lower-level steps were being put in place (e.g. new environmental regulations), had a sound logic for the intervention at hand (environmental regulation leading to a better environment), plus a measure showing that environmental issues were improving, it was reasonable for him to claim that he had established he was having an impact. In Duignan’s Types of Impact Evaluation Designs, this is the seventh type of design: Intervention Logic (Program Theory/Theory of Change) Based Designs. It can be accepted as a credible impact design by stakeholders in some situations. Of course there’s always the question of who the observer is who is making the claim that lower-level steps have been achieved. But presumably we could get some independent assessment as to whether the lower-level steps were, as he was claiming, happening, so the logic of the design makes theoretical sense as a way of attempting to prove impact.

An alternative argument he could have been mounting, if he wanted to be very pragmatic, is that the fact that he keeps getting re-elected is what ‘hearing from the people all the time’ means in practice. Looking at it this way, he would be defining his outcome not as changing things in his community (which he may well wish to do) but simply as getting re-elected. If this is the case, then the fact that he is regularly re-elected means that, by definition, he is achieving his ‘outcome’. And this outcome could be translated into something like ‘keeping the people satisfied’. The argument then would be that keeping the people satisfied is the best way of achieving outcomes for the community within a democracy. I think that this is an example of pulling the ‘outcome’ you’re trying to show you changed back down the outcomes model to some lower level where it’s easier to prove attribution.

So while, in my view, his initial claims about it being easy to figure out what is causing outcomes were weak and did not actually establish anything about him having an effect on outcomes, his second round of argument had more substance to it.

Want to know more? http://About.Me/PaulDuignan

Throwing people in jail because they won’t give us the information we want? – The price of indicator collection

As I’m writing this, we have an interviewer from our Statistics Department sitting in the other room asking detailed questions about our income and expenditure. It’s part of a nation-wide Household Economic Survey collecting information on household expenditure and income. It’s her second visit to us and she’s been here this time for just on two hours – the first visit took about the same time. Over the last two weeks I’ve been filling in an expenditure diary where I’ve had to record all my daily expenditure. Fortunately our interviewer knows what she’s doing and she has stepped us efficiently through the complex questionnaire – but it’s still a lot of work.

We actually don’t have any choice but to put several hours of our time to one side and fill in the questionnaire with her – participation is not voluntary. It’s mandatory, required by the same legal provisions that demand we fill in our Census forms (something that we, coincidentally, also had no choice but to do just a few weeks ago!). Presumably you would be talking about a fine rather than being thrown in jail (though if you refused to pay the fine, I presume you could end up in jail in the fullness of time).

This is a little personal example of the cost and infrastructure needed to collect indicator information. Being an outcomes wonk I don’t begrudge putting the time aside because I understand how crucial it is to collect information which can be used for indicator and other types of outcomes work. But the cost is something which is often lost on people who blithely demand that programs and organizations ‘collect comprehensive outcomes indicator information’ without any thought to how much it’s going to cost to do so.

It also illustrates the point that collecting accurate information can require more than just spending money – it can involve having to use the power of the State to make sure that such indicator information is collected from the people it needs to be collected from. One of the most dramatic examples we have of this is in the road safety area, where drivers can be forced to give a sample measuring their blood alcohol level. Again, as an outcomes wonk, I love this sort of data. But there are serious limits to any exercise of State power. A mandatory requirement to collect information needs to be used very carefully to avoid serious push-back from those who have to give their time to fill in the information (for example, discussions like the one here about someone complaining about having to fill in a mandatory survey).

Of course the types of examples that people hold up as providing best practice indicator and outcomes data collection tend to be ones where there’s a large data collection infrastructure and often mandatory data collection (e.g. road safety, recidivism data in the corrections area). They then expect us to come up with similar information about trends in outcomes and causality in areas where we have much less ability to collect information and can’t turn to the backing of the law to force people to provide information.

So the next time someone demands that you collect more indicator information on your program, it’s reasonable to ask two questions: 1) how much are they willing to spend (or do they want you to spend) on collecting this information; and 2) if required, are they prepared to support making the collection of information on the indicators relevant to your program a mandatory legal requirement?

Follow me on Twitter http://twitter.com/paulduignan. Discuss outcomes issues on the DoView LinkedIn Community of Practice.

Does avoiding regulatory enforcement represent a success or failure? A Chameleon Indicator.

There’s no doubt, some indicators are a lot more fun than others (although, I must note that people in my trade have a fairly low threshold for ‘fun’). I particularly enjoy ones which can be interpreted in any way you like. They can be called Chameleon indicators.

When developing outcomes DoViews (visual outcomes models) and performance management frameworks for organizations, I often run into a particularly ambiguous type of indicator – the number of regulatory interventions being undertaken by an organization. At first sight, what better indicator for an organization which includes a regulatory outcome as part of its mandate? But there are problems in interpreting these sorts of indicators reflected in a media exchange I heard this morning.

Our national department of conservation is currently embroiled in media controversy over reductions in staff and budgets. As part of the media spotlight focused on it, I heard a discussion this morning about a reduction in the number of times it involves itself in a regulatory process – the number of times it makes legal representations to Conservation Resource Management Consent Hearings.

A media interviewer, interpreting the reduction in the number of regulatory interventions as a failure of the department to achieve one of its outcomes because staff shortages meant it was not pursuing them aggressively, asked the department’s head: ‘…on the face of it, is it a lower priority?’ [the regulatory intervention – the department getting involved in Conservation Resource Management Consent Hearings]. The department’s chief (interpreting the drop in the measure in the opposite way) replied:

‘What you are falling into is the trap of judging and measuring our success by the number of cases we take regardless of the outcome. We see [Conservation Resource Management Hearings] [the regulatory intervention] as a last resort. We would rather sit down without spending money on lawyers and work out issues if we can and confine the [Conservation Resource Management Hearings] issues to ones that we really can’t reach agreement on’.*

This is a classic example of the outcomes theory principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking prevention.

Not having looked into this particular issue, I don’t want to come down on one side or another. I think that both sides are making reasonable ‘face value’ interpretations of the change in the indicator.

How can people setting up performance management systems deal with these regulatory intervention Chameleon Indicators? While people will continue to take different positions in interpreting them in the cut and thrust of media debate, there is a technical approach to the problem which is suggested by outcomes theory. In order to actually interpret what’s going on with this indicator, we would need further information about other indicators – for instance, whether there has been an increase in departmental activity focused on getting the parties together prior to potential Resource Management Consent Hearings. I’ve DoViewed it (built an outcomes model) below so that we can get a clearer picture of what’s going on. We would need to get information about the indicator in red in order to be able to interpret the regulatory intervention indicator in black. Even then we could not be certain, just from the indicator in red, that the department had been successful in reducing the number of contentious issues going to hearings (which is what this DoView aspires to). So we would really need to answer the evaluation question which also appears within the DoView.
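For what it’s worth, the logic of reading the two indicators together can be sketched in a few lines of code. The indicator names and threshold logic below are purely illustrative assumptions of mine, not part of the actual DoView, and even a ‘consistent with success’ reading still leaves the evaluation question to be answered.

```python
# A toy sketch of the point above: a fall in regulatory interventions can only
# be read alongside a second indicator (pre-hearing engagement activity).
# Thresholds and names are illustrative assumptions only.

def interpret_regulatory_indicator(change_in_hearings: float,
                                   change_in_prehearing_engagement: float) -> str:
    """Offer a provisional reading of a change in the number of hearings
    (negative = fewer hearings) given the change in pre-hearing engagement."""
    if change_in_hearings < 0 and change_in_prehearing_engagement > 0:
        return ("Consistent with success: fewer hearings while pre-hearing "
                "engagement rose (the evaluation question still needs answering).")
    if change_in_hearings < 0 and change_in_prehearing_engagement <= 0:
        return ("Consistent with failure: fewer hearings with no rise in "
                "engagement suggests the function may simply be under-resourced.")
    return "No clear reading – look at the wider outcomes model."

print(interpret_regulatory_indicator(-0.30, +0.20))
print(interpret_regulatory_indicator(-0.30, -0.10))
```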

So the technical answer to dealing with Chameleon Regulatory Intervention Indicators is to always interpret them against the underlying outcomes model (e.g. DoView) of the logic of what the organization is trying to do. For the theory on showing whether an organization is achieving its outcomes see Duignan’s Types of Evidence That a Program ‘Works’ Diagram and for a practical visual approach see here.

So, the lesson from all this is that we should never just look at a Chameleon Indicator like the number of regulatory interventions on its own. We should always visualize it in the context of the logic of what the intervention consists of, and see what surrounding indicators we need to measure and what impact evaluation questions we need to answer in order to really clearly understand whether or not an organization is achieving its outcomes.

*A reference to the interview can be found in the outcomes theory article linked above.

Outcomes theory Unequal Inputs Principle – Op Ed applied to school national league tables

I’ve just written an Op Ed on the question of school national league tables. It tries to move the debate from a political argument to a technical one. It’s part of my initiative to show how outcomes theory (outcomestheory.org) can provide a technical lens on debates that are currently seen as largely political.

The outcomes theory principle is the Unequal Inputs Principle (the ‘level playing field’ principle – in the article called the Equivalent Input, Equivalent Outcome Principle) which states: Where the inputs of units being compared are equivalent, the raw outcomes from such units can be used to measure the relative performance of different units. Where inputs to units differ, the amount of improvement is a better measure of unit performance than just raw outcomes.

The Op Ed argues that if you are wanting to improve student performance you should look to see if you can develop a measure of value-added rather than just raw academic performance. This is because of the differing academic levels of kids entering different schools. Whether you can develop a value-added measure is another question, but the clear problem with a raw score approach, when it is used to incentivize teachers, is that they will just move to schools where the pupils entering are already functioning at a high academic level.
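A toy worked example (with made-up schools and scores) shows why the two rankings can differ. Improvement from entry to exit stands in here, very crudely, for a value-added measure; a real value-added model would be considerably more sophisticated.

```python
# Illustrative only: ranking hypothetical schools on raw exit scores versus
# ranking them on improvement (a crude stand-in for a value-added measure).

schools = {
    "School A": {"entry": 40, "exit": 55},   # low-intake school, big gain (+15)
    "School B": {"entry": 70, "exit": 78},   # high-intake school, small gain (+8)
    "School C": {"entry": 55, "exit": 66},   # mid-intake school (+11)
}

raw_ranking = sorted(schools, key=lambda s: schools[s]["exit"], reverse=True)
improvement_ranking = sorted(
    schools, key=lambda s: schools[s]["exit"] - schools[s]["entry"], reverse=True
)

print("Raw exit-score ranking:", raw_ranking)          # favours high-intake schools
print("Improvement ranking:   ", improvement_ranking)  # closer to a 'level playing field'
```

On raw scores School B looks best simply because its pupils arrived at a higher level; on improvement, School A comes out on top, which is the point of the Unequal Inputs Principle.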

Here’s the Op Ed. http://www.stuff.co.nz/dominion-post/comment/7753957/Outcome-theory-and-education

Paul Duignan, PhD. More at OutcomesCentral.org and on Twitter at Twitter.com/paulduignan.

The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think that most people would agree with this sentiment. The key issue for me is what is the most effective way of ‘looking behind the numbers’ when measuring the performance of people, projects or organizations. Continue reading

The Taxi Driver and 'why don't you just measure outcomes' – on the way to AES conference

On my way to the Australasian Evaluation Society Conference in Canberra my taxi driver in from the airport asked me what I do. When I explained that I ‘measure whether programs, often government programs, work or not so the taxpayer gets value for money’, he was right into the concept, although I think he thought that I was over-complicating things a little. He said: ‘shouldn’t it just be a matter of using statistics to measure whether or not things are getting better with a program?’ What he was talking about was one aspect of monitoring and evaluation – an important piece – but just one of the Five Building Blocks I see lying behind all monitoring and evaluation systems (outcomes systems). Continue reading

Mapping indicators onto a logic model is obvious – but why haven't we always done it?

I was running a workshop today teaching policy analysts the basics of my approach to program evaluation (Easy Outcomes). One of the participants, when I talked about the importance of always mapping indicators back onto a visual model, commented that when you do it, it’s so obviously the right approach that you can’t understand why we’ve not been doing it for years.

The thinking behind this approach is that the way we almost always do indicator work is to eyeball a list or table of indicators and ask a group of busy people sitting around a table – ‘does this list of indicators look any good?’
Continue reading

Flow of causality in outcomes models and feedback loops

A quick technical blog here. Fellow evaluator Rick Davies pointed out in a post on one of my outcomes theory articles (on how best to represent causal models) that strictly visualizing causality as flowing in one direction within an outcomes model (logic model, results map, logframe, theory of change etc.) could be seen as preventing the representation of feedback loops. This is because if you are, as I usually do, representing causality as flowing from bottom to top within a model (others do it left to right), then when you want to represent a feedback loop it will, of necessity, have to flow back down the logic model against the direction in which causality is being represented. Continue reading
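One simple way of handling this in practice – a sketch of my own, not something Rick or I have formally specified – is to store the model as a directed graph and tag the edges that run against the main bottom-to-top flow, so a drawing tool can render them differently (e.g. as curved feedback arrows). The node names below are made up.

```python
# A minimal sketch: keep causality drawn bottom-to-top, but record feedback
# links as tagged edges so they can be rendered differently. Names are made up.

from dataclasses import dataclass, field

@dataclass
class OutcomesModel:
    edges: list[tuple[str, str, bool]] = field(default_factory=list)  # (cause, effect, is_feedback)

    def add_link(self, cause: str, effect: str, feedback: bool = False) -> None:
        self.edges.append((cause, effect, feedback))

    def feedback_loops(self) -> list[tuple[str, str]]:
        return [(c, e) for c, e, fb in self.edges if fb]

model = OutcomesModel()
model.add_link("Community engagement activity", "Improved local environment")
model.add_link("Improved local environment", "Higher community satisfaction")
# A loop drawn 'back down' the model, flagged so it can be drawn differently:
model.add_link("Higher community satisfaction", "Community engagement activity", feedback=True)

print(model.feedback_loops())
```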