‘What matters and what works’. Why feasibility is important in local government.

Posted by Parkinson · 2300 words


I have a colleague who often uses an analogy that I have always liked without quite knowing why. Recently I discovered that it is called the ‘streetlight effect’ or the ‘drunkard’s search’.

In the analogy, two people are walking down a street to where their car is parked when they realise they have dropped their car keys somewhere in the pitch-black darkness. They must find the keys before they can go home. They are fumbling around blindly in the darkness when one of them sees a light further down the street. They then go and look for the keys under the light, where they can see.

The two people can’t do what they really need to do, so they go off and do something that they can do, even if it will never achieve the outcome they want. My colleague often uses this analogy when talking about matters of council policy. What I have realised is that he is questioning the feasibility of the actions being taken to achieve something important.

I was recently listening to a podcast of the first Cranlana lecture for 2012, on the topic of ‘The Good Society’, given by Professor Dan Russell. In it he talks about the importance of feasibility in public policy. He says that ambitious public policy often fails because there are no feasible actions available to implement it. Professor Russell says that putting feasibility first is the answer to the challenge of how to make hard moral decisions. This seems like good advice.

It is a variation on the streetlight effect: what needs to be done to achieve what matters is known but impossible, so effort goes into something that will never achieve what matters but at least allows something to be done. It is pointless activity, but it makes people feel better than simply waiting for daylight would.

Professor Russell believes that you must first look at what is feasible before setting priorities for action – we must think about both what really matters and what really works. If we do not do both, he believes, it becomes difficult to convince people that the action taken is genuinely directed at what matters.

He uses the example of car emissions, in which there is a choice between an ambitious policy that achieves the largest reduction in emissions but significantly increases new car prices, and an alternative policy that achieves a smaller reduction in emissions but also a smaller increase in new car prices. The second policy is more likely to encourage people to replace their old cars with more environmentally friendly new ones. The first policy could see people drive their old cars for longer; pursuing it may make people feel good, but it is likely to result in more pollution than there needs to be.

How often do you see this type of policy choice in local government? And how often do we commit to the idealistic and unachievable policy to appease interest groups, knowing that it will never happen? Creating the illusion of change and progress is a special skill of some councils.

Professor Russell describes two approaches to public policy making:

  1. Determining priorities and then choosing which ends to pursue before ranking the available means according to feasibility.
  2. Ranking the available means in order of feasibility then prioritising the ends that can be pursued by those means.

Sports coaches take the second approach. They don’t take their favourite play and get the team to do it. Instead they work out the plays that give the team they actually have the best chance of playing its best game. Public policy makers tend to do the opposite when they say what matters and leave it to others to work out how to do it – somehow. He says it is then hardly surprising that so much public policy turns out to be counterproductive.

A recent article in the Melbourne Age newspaper looked at the recruiting strategies of the Western Bulldogs Australian rules football team. This team has struggled on and off the field and has limited resources to recruit players to improve their performance. The author says that the club has successfully used the ‘Moneyball’ strategy made famous in baseball:

First, identify a measurable facet of individual performance that is likely to bring team success. Second, assemble a playing group that excels in that one metric.

The Western Bulldogs have looked for players who amass high tackle numbers and high possession numbers. In doing so, they have found the players overlooked by other clubs, mainly because they are not tall. These players have been the bargains in player trading and recruiting.

“While the rest of the league is obsessed with unearthing the next 190-centimetre midfielder, the Bulldogs seemingly care little about height. While those same clubs grapple with the need for line-breaking speed demons, the Bulldogs care more about how well a player positions himself in the scrum.

The Bulldogs have gone another way. Instead of hoping a gifted specimen will become the perfect package, they have decided to trust the numbers, and recruit those who know how to play footy.

This is a team that has spent little and bought well by valuing the most fundamental skill in footy. The Dogs are finding their way by finding the ball.”

The results of this approach are evident in on-field performance, with the Western Bulldogs in the top group of teams set to play off in the finals. They have looked to their means (i.e. their ability to buy players) and developed a game plan that uses the available value-for-money players effectively to achieve what matters (i.e. a premiership).
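
To make the ‘Moneyball’ logic a little more concrete, here is a minimal sketch in Python of the kind of metric-based screening the article describes: choose one measurable facet of performance and rank players on it relative to price. The player names, statistics and prices are invented for illustration, and the scoring rule is an assumption of mine, not the Bulldogs’ actual recruiting model.

    # Hypothetical player data: per-game averages and an indicative trade price.
    players = [
        {"name": "Player A", "height_cm": 193, "tackles": 3.1, "possessions": 18.0, "price": 900_000},
        {"name": "Player B", "height_cm": 178, "tackles": 6.4, "possessions": 24.5, "price": 350_000},
        {"name": "Player C", "height_cm": 185, "tackles": 5.8, "possessions": 27.0, "price": 420_000},
        {"name": "Player D", "height_cm": 196, "tackles": 2.2, "possessions": 15.5, "price": 800_000},
    ]

    def value_score(player):
        # Ball-winning output (tackles plus possessions per game) per dollar paid.
        return (player["tackles"] + player["possessions"]) / player["price"]

    # Rank purely on the chosen metric relative to cost; height is deliberately ignored.
    for p in sorted(players, key=value_score, reverse=True):
        print(f"{p['name']}: {value_score(p) * 1_000_000:.1f} ball-wins per $1m")

The design point is the one the article makes: because the only inputs are the ball-winning metric and the price, a short, cheap player who wins a lot of the ball rises to the top of the list ahead of the taller, more expensive ‘specimens’.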

Professor Russell asks why we keep making policy and then working out how to implement it, and discusses some of the barriers to putting feasibility first.

The first is that people have difficulty recognising a hard question when they see one. Citing the research of Daniel Kahneman, Professor Russell says that people respond to hard questions by replacing them with easy questions, often unwittingly, and answering those instead. The questions we hear are different to the questions we are asked.

“The road to unintended consequences is paved with easy answers to hard questions.”

Choices are often required between more and less ambitious policies. When this happens, we are usually given a choice between two policies or ends, not two outcomes. This approach fails spectacularly when dealing with the ‘wicked problems’ common in public services – problems that are complex, hard to define, multi-causal and without any definitive solution. It is frequently tempting to put forward easy answers to enable political action.

We know that ends don’t always justify means but we also need to understand how means can justify ends. The difference between an ‘end’ and an outcome is important in the context of Professor Russell’s discussion. I think his use of the term ‘end’ equates to the output or immediate effect of a policy, which may not achieve the intended outcome in addressing what matters. In particular, he says that ends don’t always justify means when resources are diverted away from other more feasible means.

“We can know how high to set any one objective only if we know what we give up in progress toward other objectives.” Steven E. Rhoads, The Economist’s View of the World, 1985.

Because ends don’t always justify means, Professor Russell says that sometimes means have to justify ends – not in spite of some things being precious but because they are precious, priorities can’t always come first and feasibility has to. In practice, doing things the other way around still seems to be the norm.

What is the rationale for priorities first and feasibility second? Maybe when questions of means are left to be worked out after the ends have been set, the decision makers have already realised that the ends are not achievable. The idea that people identify what matters and then commit to infeasible actions to achieve it has been discussed by environment and economics writer David Pannell at his blog ‘Pannell Discussions’.

“Many government actions are tokenistic. They are too small to really make a difference, but they are pursued anyway. Why do governments do this, and how do they get away with it without provoking public anger?”

Pannell quotes from an interview with Professor Hugh White of the Strategic and Defence Studies Centre at the Australian National University, discussing the IS threat in Syria and Iraq.

“If you find yourself, as I think we do today, undertaking military operations without making them big enough to give yourself a reasonable chance of success, you’re just going through the motions and you’re better off not doing it.”

“Going through the motions doesn’t make strategic sense and I don’t think it makes moral sense either.”

Pannell asks why governments do this and proposes two answers.

  1. To be seen to be doing something. At times a government knows that the available funding is inadequate but proceeds anyway, because there is less political risk in being seen to be doing something rather than nothing.
  2. Ignorance, combined with a lack of evidence and analysis in the policy-development phase, and a tendency towards excessive optimism about the effectiveness of a proposed policy.

He discusses in some detail the work of psychologist Daniel Kahneman regarding the biases that occur in thinking, including the bias towards optimistic planning. Citing examples from Kahneman’s book, ‘Thinking, Fast and Slow’, Pannell lists the following reasons why such optimistic thinking might be happening:

  • Planners focus on the most optimistic scenario, rather than using their full experience.
  • Simple wishful thinking.
  • Biased interpretation of poor past results.
  • Underestimating or overlooking the variety of risks that could affect the project.

Pannell says the same thinking occurs when people plan environmental projects; he and his colleagues have observed that those who develop such projects often seem to be overly positive about the following variables:

  • The value of the environmental assets that the project will protect or enhance.
  • The level of cooperation and behaviour change by landholders.
  • The various risks that might cause the project to fail (some of which tend to be ignored completely, not just understated).
  • The cost of the project in the short term.
  • The longer-term costs, beyond the initial project (also commonly ignored).

As Pannell summarises:

“With the combined effects of these biases and omissions, it’s common for the assumptions in the plan for an environment project to make it look dramatically better than it really is. I reckon the implied benefit:cost ratio could be exaggerated by a factor of 10 or more in many cases we’ve seen. So the likelihood that decision making will be messed up, with adverse consequences for the environment, is very high.”

Professor Russell discusses what happens when feasibility is left to someone else. He says that ambitious public policy statements set people up to fail and cites an example from education, where universities were told to enrol more students, graduate more students, and make sure they learn more during their courses. The way he describes it, some ambitious new goals were ‘spoken into existence’ along with new layers of bureaucracy to make them happen. The result was that administration staff numbers were reduced to shift funding to teaching, and the teaching staff became administrators.

He cites other similar examples in the United States – ‘No Child Left Behind’, which implemented a testing regime that drew teachers into teaching to the test, and the US housing crisis and Global Financial Crisis, which began with a political commitment to universal home ownership. In Australia we have similar examples; the prime ministerial commitment that ‘no child will live in poverty’ stands out.

To reach these policy goals, risks had to be taken that ended up defeating the goals themselves.

“Putting feasibility second is to decide in advance not to hear any kind of warning, let alone heed it. Ambition shouldn’t be a euphemism for leaving feasibility behind.”

The idea that people are looking under the streetlight for something is not confined to public policy, sport and environmental projects. According to David Freedman, ‘many, and possibly most, scientists spend their careers looking for answers where the light is better rather than where the truth is more likely to lie’. He feels that they don’t always have much choice.

“It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant. After all, we expect scientists to quantify their observations precisely.”

He quotes Lord Kelvin as saying more than a century ago that, ‘When you can measure what you are speaking about, and express it in numbers, you know something about it.’ Freedman says that the problem is that:

“ … while these surrogate measurements yield clean numbers, they frequently throw off the results, sometimes dramatically so. This “streetlight effect,” as I call it in my new book, Wrong (Little, Brown), turns up in every field of science, filling research journals with experiments and studies that directly contradict previously published work.”

Freedman’s criticism of scientific research is tempered by his closing comment that we should keep in mind what Einstein had to say on the subject:

“If we knew what we were doing, it wouldn’t be called research, would it?”

It is not surprising, given the complexity involved, that councils regularly engage in simplistic decision making on difficult policy matters. We are not well resourced or skilled in dealing with the complexity inherent in what we do. In these circumstances, our preferences – to look for the keys under the light, put priorities ahead of feasibility, pick plays not suited to our capabilities, take tokenistic and highly optimistic actions, and focus on what is measurable instead of what is important – are all predictable.

Predictable, but not good enough to meet the requirements of the communities we serve. They need and deserve better.

Freedman, David 2010. ‘Why Scientific Studies Are So Often Wrong: The Streetlight Effect’, Discover magazine, July–August.

Marshall, Konrad 2015. ‘Western Bulldogs master the mysteries of moneyball strategies’, The Age, 1 August.

Pannell, David 2012. ‘213 – The environmental planning fallacy’, Pannell Discussions blog.

Russell, Daniel C. 2012. ‘The Good Society’, Cranlana lecture series, No. 1.