More fun with the accountants – this time Integrated Reporting Standards

Today I’ve been at another stop on my road-show gig with the accountants’ professional development conference, this time in another city (see my last blog for what my presentation has been about – Key Performance Indicators (KPIs)). I’ll try to blog later in more detail about KPIs for those who, like me, are obsessed with their technical characteristics.

In the meantime, another really engrossing topic (for people like me) – Integrated Reporting Standards. I heard a presentation by Mark Hucklesby on the newly developed Integrated Reporting Standards which are currently out for consultation until July.

A quick aside on standards. One of the great things about accountants is that they’re obsessed with standard setting. They have standards for everything and technical committees meeting all the time figuring out new standards.

Standards are great because they bring about consistency. They also get the best minds in the business focused on the technical trade-offs which come up in reporting and how these are best dealt with.

In the broader area of outcomes systems – the way we identify and measure outcomes of any type in any sector – I really wish there was a parallel structure to the various official and unofficial standard setting that goes on in accounting. Instead of the order the accountants have in their world, our area of broader outcomes reporting is really like the Wild West. Of course the accountants have had about 500 years to get their area sorted while we’ve only been focusing on outcomes in the modern sense of the term for maybe 30 years or so.

The Integrated Reporting Standards are a new initiative which can be seen as a sort of reinvented Triple Bottom Line (economic, social and environmental). More information on the initiative at http://www.theiirc.org/.

They have come up with a set of six ‘capitals’:

  • Financial
  • Manufactured
  • Intellectual
  • Human
  • Social and relationship
  • Natural

I think that calling them ‘capitals’ is maybe a bit obscure for the average person. I would see them as ‘outcome areas’ or something. However, I can see how they ended up using the term capital. They wanted to have the concept that companies take aspects of these six capitals and add value to them. The concept is set out in the diagram below from their draft standards document.

I raised two issues with Mark in the discussion time. The first was whether there had been any consideration of distinguishing between controllable and not-necessarily controllable indicators in the integrated reporting framework. This is a crucial distinction I draw in my outcomes theory work – http://outcomescentral.org/outcomestheory.html#6.

The purpose of integrated reporting is to give investors and others a crystal clear picture of the risks and opportunities a company is involved in. Confusing controllable with not-necessarily controllable indicators lies at the heart of many of the problems arising from misunderstanding of the true underlying risk profile one is exposed to in both the private and public sector. Mark agreed with the importance of the controllability issue. My second point was whether the standards would allow for a range of reporting approaches. He said that the standards did not stipulate any one way of actually presenting an integrated report. This is good news for someone like myself who thinks that the only way of reporting these days is to use a visual approach because of its clear advantages.
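As a rough sketch of the controllability distinction (all indicator names and values below are invented for illustration and are not from any real reporting framework), the idea is simply to tag each indicator and report the two groups separately:

```python
# Hypothetical sketch: tagging indicators as controllable or
# not-necessarily controllable, then reporting them in separate groups.
indicators = [
    {"name": "inspections completed", "value": 120, "controllable": True},
    {"name": "staff trained", "value": 45, "controllable": True},
    {"name": "market share", "value": 0.18, "controllable": False},
    {"name": "sector-wide emissions", "value": 3200, "controllable": False},
]

controllable = [i["name"] for i in indicators if i["controllable"]]
contextual = [i["name"] for i in indicators if not i["controllable"]]

# Keeping the groups apart stops a reader from mistaking risk exposure
# (not-necessarily controllable) for management performance (controllable).
print("Controllable:", controllable)
print("Not-necessarily controllable:", contextual)
```

The design point is small but important: the separation happens in the data structure itself, so no report built on it can silently mix the two kinds of indicator.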

Anyway, sometime when I’m wanting a little light reading I’ll delve into the standards and report back in this blog what’s interesting from the point of view of those of us interested in outcomes theory, measurement and strategy.


Find out more at http://About.Me/PaulDuignan

Throwing people in jail because they won’t give us the information we want? – The price of indicator collection

As I’m writing this, we have an interviewer from our Statistics Department sitting in the other room asking detailed questions about our income and expenditure. It’s part of a nation-wide Household Economic Survey collecting information on household expenditure and income. It’s her second visit to us and she’s been here this time for just on two hours – the first visit took about the same time. Over the last two weeks I’ve been filling in an expenditure diary recording all my daily expenditure. Fortunately our interviewer knows what she’s doing and has stepped us efficiently through the complex questionnaire – but it’s still a lot of work.

We actually don’t have any choice but to put several hours of our time to one side and fill in the questionnaire with her – participation is not voluntary. It’s mandatory, required by the same legal provisions that demand we fill in our Census forms (something that, coincidentally, we also had no choice but to do just a few weeks ago!). Presumably a refusal would mean a fine rather than being thrown in jail (though if you refused to pay the fine, I presume you could end up in jail in the fullness of time).

This is a little personal example of the cost and infrastructure needed to collect indicator information. Being an outcomes wonk I don’t begrudge putting the time aside because I understand how crucial it is to collect information which can be used for indicator and other types of outcomes work. But the cost is something which is often lost on people who blithely demand that programs and organizations ‘collect comprehensive outcomes indicator information’ without any thought to how much it’s going to cost to do so.

It also illustrates the point that collecting accurate information can require more than just spending money – it can involve having to use the power of the State to make sure that such indicator information is collected from the people it needs to be collected from. One of the most dramatic examples we have of this is in the road safety area, where drivers can be forced to give a sample to measure their blood alcohol level. Again as an outcomes wonk, I love this sort of data. But there are serious limits to any exercise of State power. A mandatory requirement to collect information needs to be used very carefully to avoid serious push-back from those who have to give their time to fill in the information (for example, discussions like the one here about someone complaining about having to fill in a mandatory survey).

Of course the types of examples that people hold up as providing best practice indicator and outcomes data collection tend to be ones where there’s a large data collection infrastructure and often mandatory data collection (e.g. road safety, recidivism data in the corrections area). They then expect us to come up with similar information about trends in outcomes and causality in areas where we have much less ability to collect information and can’t turn to the backing of the law to force people to provide information.

So the next time someone demands that you collect more indicator information on your program, it’s reasonable to ask the question: 1) how much are they willing to spend (or do they want you to spend) on collecting this information; and, 2) if required, are they prepared to support making the collection of information on the indicators which are relevant to your program a mandatory legal requirement?

Follow me on Twitter http://twitter.com/paulduignan. Discuss outcomes issues on the DoView Linkedin Community of Practice.

Does avoiding regulatory enforcement represent a success or failure? A Chameleon Indicator.

There’s no doubt, some indicators are a lot more fun than others (although, I must note that people in my trade have a fairly low threshold for ‘fun’). I particularly enjoy ones which can be interpreted in any way you like. They can be called Chameleon indicators.

When developing outcomes DoViews (visual outcomes models) and performance management frameworks for organizations, I often run into a particularly ambiguous type of indicator – the number of regulatory interventions being undertaken by an organization. At first sight, what better indicator for an organization which includes a regulatory outcome as part of its mandate? But there are problems in interpreting these sorts of indicators, reflected in a media exchange I heard this morning.

Our national department of conservation is currently embroiled in media controversy over reductions in staff and budgets. As part of the media spotlight focused on it, I heard a discussion this morning about a reduction in the number of times it involves itself in a regulatory process – the number of times it makes legal representations to Conservation Resource Management Consent Hearings.

A media interviewer, interpreting the reduction in the number of regulatory interventions as a failure of the department to achieve one of its outcomes because staff shortages meant it was not pursuing it aggressively, asked the department’s head: ‘…on the face of it, is it a lower priority? [the regulatory intervention – the department getting involved in Conservation Resource Management Consent Hearings]. The department’s chief (interpreting the drop in the measure in the opposite way) replied:

‘What you are falling into is the trap of judging and measuring our success by the number of cases we take regardless of the outcome. We see [Conservation Resource Management Hearings] [the regulatory intervention] as a last resort. We would rather sit down without spending money on lawyers and work out issues if we can and confine the [Conservation Resource Management Hearings] issues to ones that we really can’t reach agreement on’.*

This is a classic example of the outcomes theory principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking prevention.

Not having looked into this particular issue, I don’t want to come down on one side or another. I think that both sides are making reasonable ‘face value’ interpretations of the change in the indicator.

How can people setting up performance management systems deal with these regulatory intervention Chameleon Indicators? While people will continue to take different positions in interpreting them in the cut and thrust of media debate, there is a technical approach to the problem suggested by outcomes theory. In order to actually interpret what’s going on with this indicator, we would need further information about other indicators – for instance, whether there has been an increase in departmental activity focused on getting the parties together prior to potential Resource Management Consent Hearings. I’ve DoViewed it (built an outcomes model) below so that we can get a clearer picture of what’s going on.

We would need to get information about the indicator in red in order to be able to interpret the regulatory intervention indicator in black. Even then we could not be certain, just from the indicator in red, that the department had been successful in reducing the number of contentious issues going to hearings (which is what this DoView aspires to). So we would really need to answer the evaluation question which also appears within the DoView.
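The interpretation logic can be sketched in a few lines (a minimal sketch – the function, indicator names and thresholds are all hypothetical, not part of any DoView or official framework). The point is simply that the intervention indicator only becomes readable alongside a supporting indicator:

```python
def interpret_intervention_indicator(interventions_change, engagement_change):
    """Read a change in the regulatory-intervention indicator against a
    supporting indicator (pre-hearing engagement activity) rather than
    on its own. Hypothetical names and logic, for illustration only."""
    if interventions_change < 0 and engagement_change > 0:
        # Fewer hearings plus more early engagement is at least consistent
        # with prevention working - though, as the post notes, only an
        # evaluation question can confirm it.
        return "consistent with prevention working"
    if interventions_change < 0:
        # Fewer hearings with flat or falling early engagement looks more
        # like reduced regulatory effort (e.g. staff shortages).
        return "consistent with reduced regulatory effort"
    return "ambiguous - more surrounding indicators needed"

print(interpret_intervention_indicator(-12, 30))
print(interpret_intervention_indicator(-12, 0))
```

Run with the two invented inputs above, the same drop of 12 interventions reads in opposite ways depending on what the second indicator is doing – which is exactly the Chameleon behavior.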

So the technical answer to dealing with Chameleon Regulatory Intervention Indicators is to always interpret them against the underlying outcomes model (e.g. DoView) of the logic of what the organization is trying to do.  For the theory on showing whether an organization is achieving its outcomes see Duignan’s Types of Evidence That a Program ‘Works’ Diagram and for a practical visual approach see here.

So, the lesson from all this is that we should never just look at a Chameleon Indicator like the number of regulatory interventions on its own. We should always visualize it in the context of the logic of the intervention and see what surrounding indicators we need to measure and what impact evaluation questions we need to answer in order to really clearly understand whether or not an organization is achieving its outcomes.

*Reference to the interview can be found in the outcomes theory article linked above.

Putting the Planning back into M&E – PME or PM&E what’s the acronym going to be?

In a posting on Linkedin, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think that it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been talking about for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The point of this approach within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources would be better spent on making sure that the program is well planned than on documenting that it will often fail to achieve its outcomes because planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This new integrated approach which is emerging needs an underlying theory which will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts etc. The work I’ve been doing in outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are going to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said – ‘there’s nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below showing how it’s a meso-level theory reaching across strategic planning, monitoring, evaluation etc.

(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people working out in the field who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach http://doview.com/plan. Using the approach means that they will avoid many of the technical problems highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the new fully integrated approach to planning, monitoring and evaluation which many are now seeking. http://doview.com/plan/draw.html.

Outcomes theory Unequal Inputs Principle – Op Ed applied to national school league tables

Just written an Op Ed on the question of national school league tables. It tries to move the argument away from a political argument to a technical one. It’s part of my initiative to show how outcomes theory (outcomestheory.org) can provide a technical lens on debates that are currently seen as largely political.

The outcomes theory principle is the Unequal Inputs Principle (the ‘level playing field’ principle – in the article called the Equivalent Input, Equivalent Outcome Principle) which states: Where the inputs of units being compared are equivalent, the raw outcomes from such units can be used to measure the relative performance of different units. Where inputs to units differ, the amount of improvement is a better measure of unit performance than just raw outcomes.

The Op Ed argues that if you want to improve student performance you should look at whether you can develop a measure of value-added rather than just raw academic performance. This is because of the differing academic levels of kids entering different schools. Whether you can develop a value-added measure is another question, but the clear problem with a raw-score approach, when it is used to incentivize teachers, is that they will simply move to schools whose entering pupils are already functioning at a high academic level.
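To illustrate the principle with invented numbers (the schools and scores below are entirely hypothetical), a value-added ranking can order schools quite differently from a raw league table:

```python
# Hypothetical entry and exit test scores for three schools.
schools = {
    "School A": {"entry": 80, "exit": 85},  # strong intake, small gain
    "School B": {"entry": 50, "exit": 70},  # weak intake, large gain
    "School C": {"entry": 65, "exit": 72},
}

# Raw-outcome league table: highest exit score first.
raw_ranking = sorted(schools, key=lambda s: schools[s]["exit"], reverse=True)

# Value-added table: largest improvement (exit - entry) first.
value_added_ranking = sorted(
    schools, key=lambda s: schools[s]["exit"] - schools[s]["entry"], reverse=True
)

print(raw_ranking)          # School A tops the raw league table
print(value_added_ranking)  # School B tops the value-added table
```

Under the Unequal Inputs Principle, the second ranking is the better performance measure here because the schools’ inputs (entry scores) differ; the raw table mostly just re-ranks the intake.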

Here’s the Op Ed. http://www.stuff.co.nz/dominion-post/comment/7753957/Outcome-theory-and-education

Paul Duignan, PhD. More at OutcomesCentral.org and on Twitter at Twitter.com/paulduignan.


The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think that most people would agree with this sentiment. The key issue for me is what the most effective way is of ‘looking behind the numbers’ when measuring people’s, projects’ or organizations’ performance. Continue reading

The Taxi Driver and 'why don't you just measure outcomes' – on the way to AES conference

On my way to the Australasian Evaluation Society Conference in Canberra my taxi driver in from the airport asked me what I do. When I explained that I ‘measure whether programs, often government programs, work or not so the taxpayer gets value for money’, he was right into the concept. Although I think he thought that I was over-complicating things a little. He said: ‘shouldn’t it just be a matter of using statistics to measure if things are getting better or not about a program?’ What he was talking about was one aspect of monitoring and evaluation – an important piece – but just one of the Five Building Blocks I see lying behind all monitoring and evaluation systems (outcomes systems). Continue reading

Mapping indicators onto a logic model is obvious – but why haven't we always done it?

I was running a workshop today teaching policy analysts the basics of my approach to program evaluation (Easy Outcomes). One of the participants, when I talked about the importance of always mapping indicators back onto a visual model, commented that when you do it, it’s so obviously the right approach that you can’t understand why we’ve not been doing it for years.

The idea behind this approach is that the way we almost always approach indicator work is to eye-ball a list or table of indicators and ask the question of a group of busy people sitting around a table – ‘does this list of indicators look any good?’
Continue reading

Tracking jobs created under the U.S. Recovery Act – when should the attempt at measurement be abandoned?

The default expectation in at least some sections of the U.S. public sector seems to be that it should always be feasible and affordable to both measure and attribute the results of interventions. This is using the term attribution to mean being able to actually demonstrate that a change in an outcome has been caused by a particular intervention rather than being the result of other factors (see here for more on attribution). The recent U.S. Recovery Act is a case in point. While it’s reasonable to start from the position that you should routinely assess the possibility of measuring and attributing changes in outcomes of particular interventions, you can’t start by just assuming that it will always be feasible or affordable to do this. Clinging to such an assumption, where it is untrue, can result in you either measuring an outcome when the data you are collecting is not accurate, or acting as though what you are measuring (even if it is an accurate measurement of a change in an outcome) is demonstrably attributable to a particular program, when in fact it may not be. Continue reading

The error of limiting focus to only the attributable

I am continuing to develop a set of articles which outline various problems which are often built into the outcomes systems which I see. The one I have just put up is on the Error of Limiting Focus to Only the Attributable. This is where the whole emphasis of a performance management or other outcomes system is just on holding a provider to account for a list of demonstrably attributable indicators (often called outputs, deliverables, or key performance indicators). This often creates problems. Continue reading