Wednesday, 6 November 2013

HS2: Doing the Maths

As an evaluator, I thought I'd check the latest update on benefits of the proposed HS2 high speed rail links – much in the news lately*. It's an interesting read (for geeks!) and there is logic in its detailed analysis. It also assesses the impact on the business case of varying future scenarios, such as changing rates of economic growth and levels of demand.

The rationale for HS2 is partly an issue of capacity. If rail travel continues to grow, the existing network will be unable to cope. It's also about economic benefits, particularly to “The North”. This is based on detailed economic modelling, which is not my specialist field, but the principle is that (as well as leisure travel) people need to get together to do business. That business creates growth, which delivers the economic benefits. HS2 describes itself as “An engine for growth”.

The capacity issue is demonstrated by the graph, taken from the latest report. It shows rail travel growing much faster than road or domestic air travel, which over the past 10 years have actually declined. This is despite air travel presumably being much faster (OK, maybe not door-to-door, but the same could be said of HS2).

The economic argument shows a positive benefit-to-cost ratio, currently around 2.3:1 for the ‘standard case’ scenario. However, even if you accept this ratio, it doesn't mean that HS2 is necessarily the best way to spend that £50 billion. Many argue that a similar investment in schools, hospitals and other public services would be more valuable still. The counter-argument is that investment in business creates wealth which in turn generates the money for just these wider social projects. Where you stand on this is a question of values and political beliefs (e.g. the extent to which you believe in "the trickle-down fairy"), rather than of HS2 itself.
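The arithmetic behind such a ratio is simple enough to sketch. In the toy calculation below, only the £50 billion cost and the 2.3:1 standard-case ratio come from the report; the benefit figures for the other scenarios are invented for illustration, and the real appraisal of course involves discounting and far more detailed modelling.

```python
def benefit_cost_ratio(benefits_bn: float, costs_bn: float) -> float:
    """Return the simple benefit-to-cost ratio (benefits per £1 of cost)."""
    return benefits_bn / costs_bn

# Hypothetical total monetised benefits, in £bn. The 'standard case'
# figure is chosen so that 115 / 50 = 2.3, matching the quoted ratio;
# the other two are invented to show how the ratio moves with demand.
scenarios = {
    "low growth": 90.0,
    "standard case": 115.0,
    "high growth": 140.0,
}

for name, benefits in scenarios.items():
    print(f"{name}: {benefit_cost_ratio(benefits, 50.0):.1f}:1")
```

The point of laying it out this way is that the headline ratio is entirely at the mercy of the benefits estimate, which is exactly why the demand projections matter so much.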

So am I convinced? Well, not yet, for two reasons. Firstly, I've seen too little cause-and-effect analysis. I want to know why road and domestic air travel have declined whilst rail travel (despite all our complaints about high fares) has increased. Without that understanding it is difficult to have confidence in the projections.

Secondly, should we be thinking communication rather than just travel? I'd also like to see on that graph the growth over the same period in non-face-to-face communication – Internet, teleconferencing, Skype and so forth. Sure, people buy from people, but it seems dangerous to assume that we will forever need to meet face-to-face. I've just run a workshop where one participant joined us by Skype from India. Not as good as being in the room, but it worked OK. I wonder if, within a generation, meetings will consist of holograms sitting round a table whilst their owners participate from remote locations around the globe (I’m sure the technology for this already exists). And what of the demand for rail travel then?

There is no perfect answer, of course. It's a fact of life that the further into the future you predict, the greater your chances of being proved wrong by factors as yet unknown. Some HS2 projections look ahead almost 90 years, and Dr Beeching shut down rail lines that had been running for much less time than that (because the impact of motor transport had been underestimated).

We probably have to accept that HS2 is a gamble whether we build it or whether we don't. Personally, I would like to see the decision based on comparing it with investing in alternative communications technologies, rather than trying to catch up with 20th-century technology.

*The Economic Case for HS2

Wednesday, 2 October 2013

Marriage: Cause and Effect?

David Cameron caused a stir recently by announcing tax breaks for some married couples. This might be just a PR gimmick for the party conference season of course, but is there any evidence that (quoting Jeremy Hunt) "marriage particularly helps to strengthen commitment in our society"?

They might be on stronger ground claiming that marriage is better for the next generation, as I believe there is evidence that children of married couples tend to do better at school. Even here though, I strongly suspect that the nature of the parents’ relationship is the key factor, rather than whether or not they are married (or even living together).

But Mr Cameron chose to promote marriage based on his own experience, and his relationship with his wife. Personally, I believe that every relationship is unique, and that generalising to suggest that marriage in itself creates "responsibility and stability that helps bind families" is speculative at best.

The underlying point is that correlation does not prove cause and effect. Even if married relationships are more stable than unmarried ones (which in itself needs proving in view of current divorce rates), this doesn't prove that it was the marriage that caused that stability. The chances of a relationship being stable and successful are surely far more dependent on the individuals involved than whether they choose to marry.

Misunderstanding cause and effect is a common error, and examples of it are plentiful. For example, a 1980s study in New York showed a strong correlation between ice cream sales and crime levels. So ice cream makes people more likely to commit crime? No, actually both are seasonal and higher in the summer months; they are otherwise completely unrelated.
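The ice cream example is easy to reproduce with a toy simulation. In the sketch below (all numbers invented), both series are driven by a shared seasonal temperature; they come out strongly correlated even though neither has any effect on the other.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(1)
months = range(48)  # four years of monthly data
# The confounder: temperature follows an annual cycle.
temperature = [12 + 8 * math.sin(2 * math.pi * m / 12) + random.gauss(0, 1)
               for m in months]
# Both outcomes depend on temperature plus noise - not on each other.
ice_cream = [50 + 3 * t + random.gauss(0, 5) for t in temperature]
crime = [200 + 4 * t + random.gauss(0, 8) for t in temperature]

print(f"ice cream vs crime: r = {pearson(ice_cream, crime):.2f}")  # strongly positive
```

Nothing in the simulation links ice cream to crime, yet the correlation is striking – which is precisely why correlation on its own proves nothing about cause and effect.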

So if marriage itself doesn't instil those ‘family values’ that Mr Cameron supports, what does? There are plenty of studies on different aspects of this, but overall it seems self-evident that factors such as financial security, opportunity, health and a supportive environment have a stronger impact than marital status.

If Mr Cameron really wants to strengthen commitment and stability in society, then he should look at the government's influence on these wider social factors, rather than trying to offer a cheap bribe.

Wednesday, 4 September 2013

An Obsession with Numbers

"84.5% of statistics are made up on the spot!" I may have the percentage wrong, but I believe the line comes from Vic Reeves. It's witty in its own right, but it also reflects our obsession with reporting numbers rather than words.

Numbers just sound more credible. Anyone can say "it's getting better" and give examples in support, but it takes an Expert (whatever that is) to say performance has improved by 12.4%. Only last month, for example, the Office for National Statistics informed us that happiness in the UK had increased by 1.1% – I hope your smile is 1.1% wider as a result! A more sombre example: the Mid-Staffordshire saga was largely due to generating the right numbers in entirely the wrong way.

So too with Social Return on Investment (SROI). “£5.25 of social return for each £1 invested” has the air of authority, even if qualified or expressed as a range such as £4.50–£6.00. Working in this field, I sometimes have to explain to clients that how this value is created should be more important than the figure itself. (There has even been talk of SROI reports abandoning the ratio figure entirely, but we'd never get away with this. People like the numbers too much.)

I'm not saying that numbers don't matter; the danger is that people fail to look beyond them. They say "if you can't measure it you can't manage it" (an old cliché, but true), but numbers alone won't do that managing for you. Knowing it has improved by 12.4% is useless unless you know why, and how you can sustain that improvement. Those who fail to understand this often become experts in making excuses rather than making changes.

It isn't a question of quantitative versus qualitative measures, but how to combine the two. That's why all good performance/evaluation models (e.g. EFQM, SROI, Balanced Scorecard) have elements of results and ‘enablers’, or theories of change. In various ways, all these methods examine cause and effect – what changes and why.

So next time someone quotes performance data to you, try the "so what?" test. Do they know the story behind those numbers? And more important still, what will they do as a result?

Wednesday, 7 August 2013

Are We Getting Happier?

Yes, according to the Office for National Statistics (ONS). Their 2012/13 survey of Personal Well-being in the UK, reported last week, showed a 1.1% increase in people reporting good overall satisfaction with their lives, and a similar fall in those reporting high levels of anxiety. The survey, of more than 165,000 people, also breaks down this overall figure by population characteristics such as ethnicity, health, marital status and economic activity (although not religion, interestingly).

ONS doesn’t comment on government policies, but does comment on some basic causes. The three biggest factors affecting life satisfaction are health, relationship status and employment. On the last of these, slowly falling unemployment and (perhaps more significantly) more job vacancies at least offer some hope. Oddly, ONS also suggests that one-off 2012 events such as the Queen’s jubilee, the Olympics and Paralympics may also have contributed, although the evidence here is unclear.

Often the devil is in the detail, however, so I looked for any groups which do not follow this general trend. Surprisingly, even the long-term sick, disabled and unemployed, where you might expect welfare reform and other cuts to have an impact, show positive trends – although only slight, and a few groups have higher anxiety. The only breakdown in which 2012/13 life satisfaction appears lower than the year before is ethnicity. Even here most groups are positive, but a few, particularly Arab and ‘other Asian’, show decreased life satisfaction.

Why should this be? Concern over anti-Muslim sentiment? Increased pressure on refugees and asylum seekers perhaps? The survey, like any form of evaluation, means little unless the information is understood and acted on. Statements about there being ‘more to life than GDP’ are just political rhetoric unless something is seen to happen as a result of the survey.

So will the government acknowledge and respond to these aspects of the survey results? Or will they just sit back, hope the overall improvement continues next year, and make excuses if it doesn’t? The answer will show how committed they really are to life satisfaction and personal well-being in this country.

Wednesday, 3 July 2013

SROI and Value for Money

As public funding gets tighter, there is understandable interest in Social Return on Investment (SROI). It's a way for third sector organisations to demonstrate what they achieve, expressed as a ratio of £x of social value per £1 invested. Where x is – hopefully – a high number.

It's important to understand though that this isn't quite the same as value for money (VfM). Both calculations consider the value of the outcomes achieved, but whilst VfM compares this purely with money invested, SROI takes account of all types of investment. This could for instance include the in-kind value of volunteers’ contributions, based on the time they put in or work they do. It means that for an organisation with many volunteers, the SROI ratio may be fairly modest but the actual amount of money needed may be just a small part of the total investment.
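The distinction can be put in a few lines of arithmetic. In this sketch (all figures hypothetical), an organisation generates £300,000 of monetised social value from £60,000 of grant funding plus £40,000 worth of in-kind volunteer time:

```python
def vfm_ratio(social_value: float, cash_invested: float) -> float:
    """Value for money: social value per £1 of *cash* invested."""
    return social_value / cash_invested

def sroi_ratio(social_value: float, cash_invested: float,
               in_kind_value: float) -> float:
    """SROI: social value per £1 of *total* investment, including in-kind."""
    return social_value / (cash_invested + in_kind_value)

social_value = 300_000    # monetised outcomes, £ (hypothetical)
cash = 60_000             # grant funding, £ (hypothetical)
volunteer_time = 40_000   # in-kind value of volunteer hours, £ (hypothetical)

print(f"VfM:  £{vfm_ratio(social_value, cash):.2f} per £1 of cash")
print(f"SROI: £{sroi_ratio(social_value, cash, volunteer_time):.2f} per £1 invested")
```

Same project, two quite different-looking numbers – £5 per £1 of cash against £3 per £1 of total investment – because SROI counts the volunteers' contribution as part of the investment, exactly as described above.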

A couple of points follow from this. Firstly, using volunteers rather than paid staff may have little effect on the SROI ratio but might considerably improve VfM. This could be presented (perhaps to funding organisations) as matching modest funding with a substantial in-kind contribution to deliver a large amount of social value.

Of course, I'm not suggesting that volunteers are appropriate in all situations. In many cases, only paid staff have the skills and experience required, and even where volunteers are used they need to be trained and managed. But skilled volunteers can sometimes make an important contribution – many charity board members do exactly this. And there are organisations that seek to facilitate such skills offerings, such as Reach Volunteering (nationally) and Skill Will (in Yorkshire).

The second, and very important, point is that SROI is not just about the ratio. It's a way to understand how change is achieved, and hence how the organisation's impact might be improved further. So whilst it might not address the ‘economy’ aspect of VfM, SROI could certainly improve efficiency and effectiveness, and hence considerably increase VfM. For example it can help to identify how resources could be prioritised and targeted better, to achieve greater social value.

Underlying all this is a need to understand the purpose of evaluation: what are you trying to show and why? Simply demonstrating VfM, or an SROI ratio, may have a role but should never be the full story. Any evaluation needs a context, one that recognises who the audience is, and where the future potential lies as well as the present position. Without this context and understanding the danger is, as the famous quote goes, "the answer is 42"!

Monday, 10 June 2013

Measuring Outcomes: Good, Bad and Ugly

I've been comparing two recent publications, both from the Department of Health. The differences in their approach to measuring outcomes are interesting and instructive.

The first, actually a set of publications, is the Outcomes Frameworks 2013-14*. There are three: NHS, Adult Social Care, and Public Health (in development). Each defines a range of high-level outcome measures linked to national strategic priorities for these services.

There are a number of things I like about these frameworks:
  • Each contains a manageable number of high-level indicators grouped into ‘domains’, rather like the presentation of a balanced scorecard.
  • The measures are outcome-focused. That is, they address the critical question that defines outcomes: "What changes?" (or if you prefer "What difference have we made?"). They don't just measure activity or benchmarks at a point in time.
  • Many are perception-based, using the experience of patient/clients rather than just clinical indicators.
  • They are continually evolving: recognising that they are far from perfect, each annual version improves on the previous one and anticipates further developments for the future.

The second document is Mental Health Payment by Results Guidance for 2013-14. To an extent this is also ‘work in progress’, but its aim is to rationalise the commissioning of mental health work based on symptoms or type of illness. These are defined as a number of clusters, currently 21, with a few conditions excluded from the system.

Whilst the intention is clear, the practice is more difficult, not least because every mental health patient is an individual, and conditions are much harder to ‘pigeonhole’ than in physical health. Harder still is objectively assessing improvement, and here the paper recommends a number of “quality and outcome measures” without standardising on a particular method.

Even these options are mixed in terms of their relevance as outcomes. Some patient-based assessment scales are sound, though arguably over-generic. But other measures relate to activity, or even completeness of records/planning, and these are not outcomes; they do not measure what changes for the patient or client.

The ugly part though is the title of the paper. Calling it “Payment by Results” is misleading, at least at the current stage of development. It is payment by category, not by results. It worries me that this may give people the impression they are achieving outcomes when they are not.

I believe that making a real difference to people's lives, be it in health and social care or other types of support, relies fundamentally on understanding what we mean by outcomes. It isn't about levels of activity, standard practice or monitoring. It's about what difference we make to the quality of life for users of those services, and this is what we should be measuring.

*The NHS Outcomes Framework 2013/14, November 2012 
The Adult Social Care Outcomes Framework 2013/14, November 2012
Improving Outcomes and Supporting Transparency: Part 1A – A public health outcomes framework for England, 2013-2016, January 2012

Thursday, 11 April 2013

Secret Statistics and Lies

The saying "lies, damned lies and statistics" is attributed to Disraeli in the 19th century. But it seems truer than ever today.

Examples abound, and the NHS is particularly prone. The furore around children's heart surgery at Leeds seems to stem from the quality of statistics rather than the quality of surgery or care. And the Mid-Staffordshire experience shows (if we ever doubted it) that one way to reduce mortality rates is to reclassify patients as ‘palliative care’ – i.e. they were expected to die anyway.

And should anyone challenge this approach, the solution – up to now – has been to dismiss them and pay them off with a gagging order. What a shame that the government simply proposes to ban gagging clauses rather than solving the underlying problem.

A former advisor to the Bank of England made the observation known as Goodhart's Law, summarised as “When a measure becomes a target, it ceases to be a good measure”. This happens because it's generally much easier to manipulate statistics than to actually improve performance!

Hardly surprising. If "what gets measured gets done" (another cliché, but true), it naturally follows that if you set people targets, they will aim to achieve these targets, not necessarily the intention behind the targets.

So should we just give up and abandon all performance indicators? No, I’m a believer in an alternative approach, that of deliberately designing performance measures to influence the way people behave. Every potential measure should be 'destruction tested', to see how it can be fiddled, before implementation. Only those performance measures that pass this test should be used.

For the NHS this surely means greater emphasis on patient (and family) satisfaction, rather than clinical success rates. I'm not just talking about PROMs (patient reported outcome measures), but a much broader patient view of how well the NHS delivers its core function of care. Clinicians and managers should be incentivised to deliver a service that patients acknowledge as excellent.

If you doubt this, just ask yourself which is the better dentist: one whose work is technically brilliant but very painful, or one whose work is perfectly adequate and pain-free? I know which one I would rather visit.

Of course such a system would need careful design to ensure it is truly objective. But this is possible, and it offers the hope of creating a genuinely patient focused service for the future.

Thursday, 14 March 2013

Comparing SROI with CBA

I've been asked recently about the difference between Social Return on Investment (SROI) and Cost Benefit Analysis (CBA). This often stems from a belief that CBA is cheaper, so "will it do" instead of a full SROI analysis? Leaving aside misconceptions about the cost of SROI analyses (there are some fantasy figures floating around), it helps to understand their different purpose as well as methods.

In principle, the two are closely aligned. Indeed, SROI is often viewed as a development of more conventional CBA. Both involve assessing overall value from a particular project or course of action, and comparing it with the cost involved. In both cases, outcomes should be monetised (i.e. measured in pounds and pence), and this should include valuation of 'soft' outcomes such as health, well-being and environment.

The difference in practice arises because CBA is most often used to prove a point or compare different policies. SROI meanwhile takes the organisation or project itself as the starting point. It considers how that organisation/ project generates value and hence how that value can be improved. It generates a cost-benefit comparison, known as the SROI ratio, but this is just part of the process rather than the ultimate goal as in CBA.

As an example, take two reports on the theme of carers – those who care at home for adult relatives with special needs or disabilities. Valuing Carers 2011*, from Carers UK and partners, assesses how much carers save the public purse by calculating the cost of providing an equivalent level of paid homecare. This is a simple form of benefit analysis based solely on the perspective of public expenditure, and some of its assumptions are open to challenge (e.g. that home carers receive no state support at all). Moreover, it serves purely to demonstrate a value figure, not how that value could be enhanced.

In comparison, the Princess Royal Trust for Carers commissioned an SROI-based analysis of five of its centres that provide support for carers**. This gives a figure for value generated, but also shows how that value is produced through different ways of working – something the organisation itself can use. In fact this is not a full SROI analysis because it only considers the perspective of carers themselves, not the benefits that accrue to the state or others.

Neither of these studies considers value for the person being cared for (arguably the most important stakeholder) – the benefits of being cared for at home by family rather than institutional care. A full SROI analysis would consider all of these perspectives and any others where impact is significant. Also, one of SROI's core principles is 'involve stakeholders', and this means engaging people directly involved with that service rather than relying on national averages or other published data. The result is a localised study focusing on that specific project or service, as opposed to more generalised information on the type of service provided.

It comes down to understanding not just what you want to evaluate, but why. If the purpose is simply to demonstrate financial savings to government or the NHS, then CBA may well be appropriate – though it still needs to be done well. However, if the aim is also to understand how this value can be improved, and hence how the project or service can become even more effective, then SROI will give you the additional insight you need.

Monday, 25 February 2013

Lessons from Mid Staffordshire

Stories from Mid Staffordshire NHS Trust make gruesome reading, and it is hard to imagine how such poor care could happen in the 21st century. The public enquiry led by Robert Francis QC has now published its report, and some have criticised this for failing to blame any individuals. Instead it points mainly to "the system" as the cause of this tragic failure.

Whilst it is tempting to seek scapegoats, the 'system' argument deserves closer study. There are many facets to this system of course, and my diagram shows just one of these in highly simplified form. It illustrates different perspectives from which hospital performance can be judged. These comprise governing bodies (including the government itself, finance and regulators), patients and their families, staff, and the wider community.

Underlying the problems at Mid Staffordshire was a failure to recognise these different perspectives. A hospital – indeed, any medical institution – can only be truly and sustainably successful if it is acknowledged as such by all of these stakeholder groups. Pursuing a single focus, such as financial or government driven targets, will succeed only until other viewpoints become so critical they overtake it in importance. In Mid Staffordshire's case, patient care was so badly neglected that - eventually - no amount of statistical massaging could hide the reality.

"What gets measured gets done" is an old cliché, but still true. There is an important corollary though: if you set people targets, they will work to achieve those targets, not the intention behind those targets. So in this case fundamentals such as the NHS Constitution were ignored as the Trust chose to prioritise targets that would help it achieve Foundation status.

This situation cannot be fixed just by tinkering with mortality rates or other 'hard' performance data. It demands a more holistic approach to performance management, encompassing all of the success perspectives shown above and the 'enablers' to achieve these. These enablers include issues of leadership and culture that the Francis Report stresses. Too often these are dismissed as 'intangibles' or 'too hard to measure'. But they can be measured and must be measured for real change to be achieved.

These concepts underpin the EFQM Excellence Model, modern Balanced Scorecards, and indeed any sound performance management approach. Sadly, it remains too easy to pay lip service to these principles without testing whether they are actually working. If there is blame to be apportioned from the Mid Staffordshire experience, it belongs with everyone who failed to apply these core principles of good management. And if we don't get the message this time, it will certainly happen again.

Thursday, 31 January 2013

Counting the Cost...

A couple of reports caught my eye last week. One, from CLG, The Cost of Troubled Families, shows the high cost to public services of those who need repeated interventions from social services, education, NHS, police and other agencies. The report is more about costing methods than solutions, but the inference is clear: joined-up thinking and early intervention is far better than many agencies tackling problems later. In simple terms, prevention is better than cure.

The second was sombre data from the Office for National Statistics (ONS): their bulletin Suicides in the United Kingdom, 2011 reports a significant increase. We're only now getting 2011 figures because of the need to wait for coroners’ decisions. But if this trend is shown to continue in 2012 and beyond, it’s a very worrying one.

ONS don’t speculate on the reasons for this increase, but many others have. Organisations working in mental health report increased referrals from people in crisis, and identify unemployment and the economic downturn as amongst the contributory factors. Certainly the group most at risk – men aged 30–44 – comprises those for whom losing their job, perhaps compounded by debt, isolation and other kinds of deprivation, might precipitate a crisis. And we know that a high proportion of suicides have no previous contact with mental health services, so the scenario is certainly plausible.

Even if this cause-effect link is unproven, should we not be trying to understand it? The cost of a “troubled family” is estimated at between £45,000 and £100,000. Meanwhile the government’s own estimate of the cost of suicide (from No Health Without Mental Health, Dept of Health 2011) is £1.7m per instance.

So where is the joined-up thinking and early intervention on suicide? The government's 2012 paper Preventing Suicide in England is a set of palliative measures for different situations, not a joined-up strategy linked to root causes. The fact that different methods of calculating these costs are used in different situations merely strengthens the argument. It gives the impression that the figures are being produced to make a political point rather than to develop real solutions. (Perish the thought!)

So let’s have some real joined-up thinking, please. Thinking that counts the costs and benefits of policies in social and economic terms, not just through the mantra of GDP. It might even lead to a society with fewer troubled families and fewer suicides in the first place.

Friday, 4 January 2013

CLG: 50 Shades of Desperation

Eric Pickles' Christmas gift to local government was advice on how to save money. His department's 50 Ways to Save (CLG, December 2012) suggests how Councils can trim budgets in order to make the savings demanded of them.

I had a feeling of dread before opening the document, which proved fully justified when I did so. Here we have a random collection of tired old ideas and trivial homilies. Can there really be any Councils in the country who haven't considered options such as improved procurement, electronic payments, tackling absenteeism and cutting refreshments at meetings? I expected to see double-sided copying and reduced paperclip usage in there, but presumably they came in at numbers 51 and 52 on the list.

More depressing than the "ideas" themselves is the lack of serious thinking behind them. Almost without exception, these ideas are one-offs. Once you've done them, that's it - you can't repeat savings such as cutting first class travel and scrapping the Council's newspaper. Is the implication that once Councils have implemented all 50 of these ideas, they will be exempt from future cuts? I suspect not, so where do they go next?

Proof of this barrel-scraping approach lies in the fact that the entire text of this report contains not a single mention of the word 'efficiency'. And only one of the 50 mentions 'innovation'. (There are even a few that look politically motivated, but that's another story.)

What happened to lean, systems thinking, service redesign and other continuous improvement methods well established in both the private and public sectors? Where are the ideas that inspire people, change old-fashioned thinking and deliver real benefits? Not apparently in CLG, whose most recent Capability Review (Cabinet Office, 2006) rated it lower than the great majority of local authorities it oversees.

Needless to say, the '50 ways' include reducing training and consultancy budgets. If this document proves anything, it proves that Eric Pickles and CLG are desperately in need of such training and consultancy advice themselves.