- This is no longer a remote and uncertain risk. It is an existential threat.
- In ten years, we have made nothing like sufficient progress to mitigate or adapt to the dangers we face.
- There is still a large constituency of political leaders, economists and commentators, that is in complete denial on the subject. (Lawson, Mogg, Trump, Redwood, Phillips …).
Thursday, March 21, 2019
On this day …. Sometimes anniversary events can underline and dramatise a sombre warning.
19 March 2009. BBC
John Beddington, then Chief Scientific Adviser to the UK Government, warned the Sustainable Development UK 09 conference of a global crisis set to strike by 2030: growing world population would cause a "perfect storm" of food, energy and water shortages. (Link)
“There is an intrinsic link between the challenge we face to ensure food security through the 21st century and other global issues, most notably climate change, population growth and the need to sustainably manage the world’s rapidly growing demand for energy and water. It is predicted that by 2030 the world will need to produce 50 per cent more food and energy, together with 30 per cent more available fresh water, whilst mitigating and adapting to climate change. This threatens to create a ‘perfect storm’ of global events.
The backdrop against which these demands must be met is one of rising global temperatures, impacting on water, food and ecosystems in all regions, and with extreme weather events becoming both more severe and more frequent. Rising sea levels and flooding will hit hardest in the mega-deltas, which are important for food production, and will impact too on water quality for many.
Even since the last report of the Intergovernmental Panel on Climate Change (IPCC) in 2007, new evidence suggests that climate change is impacting the real world faster than the models predicted, and global greenhouse gas emissions are continuing to rise at the high end of projections. For example, in 2007 the IPCC concluded that large parts of the Arctic were likely to be ice-free in the summer by the end of the 21st century. Record lows in sea ice extent in 2007 and 2008, combined with other evidence on ice thinning and age, have caused scientists to radically review these estimates, with some analyses now suggesting the Arctic may be near ice-free by 2030 (Figures 5 and 6). This has major implications not just for the Arctic region but for the world as a whole, as strong positive feedback effects are expected to drive climate changes even faster.” Recall that this revision is being discussed in 2009.
19th March 2019, Guardian:
Cyclone Idai, now devastating large areas of South East Africa, 'might be Southern Hemisphere's worst such disaster'.
Dr Friederike Otto, of Oxford University’s Environmental Change Institute, said: “There are three factors with storms like this: rainfall, storm surge and wind. Rainfall levels are on the increase because of climate change, and storm surges are more severe because of sea level rises… Otto said it was important to help communities in the worst-hit areas become more resilient to storms. “The standard of housing, the size of the population and effectiveness of the early warning systems … these are the sorts of things we need to think about as we move into a world where these events become more severe.”
We are now starting to see the real human impact of our collective failures to heed the warnings. The trend line, and the inertia built into our limited responses, suggests our problems may just be beginning.
Climate scientists, far from alarmist, have tended to understate the risks. Even in 2009, projections of Arctic ice melt were being revised upwards.
BRAIN OF BRITAIN QUESTION. What else do those last named have in common?
ANSWER. They have all been enthusiastic advocates of Brexit.
Tuesday, March 19, 2019
And how subsidies, even if to the not so poor, can make a small contribution to the public good and saving the planet.
It is tempting to assume that the pensioner bus pass, or in London the Freedom Pass, is just another item within the host of benefits, tax reliefs, grants and subsidies that make up the complex of arrangements that reflect our welfare state and public spending choices. On this interpretation, it might be viewed as a policy choice based on political priorities. According to your political persuasion and generational perspective, it is then either just another bung to an over-privileged age group who happen to turn out in greater numbers to vote, or, alternatively, a redistributive measure which can be a major help to some low income households. I have to declare a personal interest as a beneficiary; but even among pass holders many will incline to the former, rather cynical, explanation, a view likely to be shared by large numbers of millennials.
The politics matter, but a little more thought and investigation reveals some hidden dimensions for the policy that may actually be just as important. Even if the policy is of benefit mainly to wealthier pensioners, it may still add significantly to the public good.
The Energy and Environment Connection
First there is an inevitable connection with energy use and hence with policies for a low carbon economy. Road transport is a major source of CO2 emissions, so any policy that has a significant impact on traffic volumes will also have a corresponding impact on emissions. We also know that two of the biggest factors influencing a driver’s fuel use for any given journey are, first, cruising speeds and, second, traffic congestion, particularly when it results in stop/start movement.
Of course, CO2 emissions are not the only important factor in terms of environment and the quality of urban life. More traffic can mean poor air quality, especially due to diesel fuel, and longer journey times for drivers. But in this instance, I would argue, all the effects are moving in the same direction. Less traffic means less CO2, better air quality, and shorter travel times.
Enter Market Failures and the Search for Second Best Solutions
Market failures occur when the fairly strict conditions, under which the unfettered operation of competitive markets can be shown to lead to a “best of all possible worlds” social welfare optimum, are simply not met. They often provide classic and compelling arguments for policy interventions. In addition, it will very often be the case that, if the failures are bad enough, then other generally sensible measures, like competition policy, will also start to show serious flaws (see an earlier essay on gas-for-coal substitution in Europe). The “second best”, given that the theoretical “best” is unattainable, can be hard to find.
Failure to comply with these “welfare” conditions is particularly rife in relation to monopolies, networks (as in the Braess paradox), failure to “internalise” social, health, or environmental costs caused by pollution of various kinds, and difficulties in the allocation of fixed costs into (marginal) prices. And unfortunately transport networks and road travel display these characteristics in spades.
· Drivers do not face any penalty when they add to congestion and increase the journey times of all other drivers.
· Fuel costs may not reflect the full environmental and health costs that their use incurs, although UK fuel taxes probably go quite a long way in this direction.
· Most of the costs of operating a bus or rail service are fixed, at least in the sense that the (short run) marginal cost of an additional passenger is usually close to zero, but fares will still need to recover the high fixed costs.
Subsidising pensioner travel. The Bus Pass meets some sensible public policy tests
Particularly in big cities, traffic volume is the major cause of congestion and hence of increased journey times, higher fuel consumption per vehicle journey made, and hence higher emissions. Subsidising pensioner travel on public transport can significantly reduce the number of vehicle journeys and hence traffic volumes. This helps address the first two bullet points above.
But, one might ask, why not make all travellers pay higher charges in the form of road pricing – which is what economists might recommend as a first best solution? The answer is, first, that there is a lot of political resistance to raising travel costs for commuters, some of whom may not have a public transport option and already spend a high percentage of their income on the journey to work. Second, introducing a road pricing scheme can be a complex and costly exercise. In the UK, for example, it is currently confined to central London.
In the absence of effective road pricing, subsidising travel by public transport can be a useful part of a “second best” solution. Pensioners are a group more likely to switch to public transport in response to a financial incentive, partly because they will tend to be less constrained by working hours. Because the marginal cost of taking an extra passenger is mostly close to zero (the third bullet point), this discrimination between categories of traveller does not in this instance lead to any serious distortions in the use of resources.
And, finally, is it fair that only pensioners enjoy free travel? The answer is probably no, but free travel for all could also bring its own problems, influencing fundamental long term decisions on choice of where to live in relation to work, for example. And as a practical matter of public finances, the transport system does need to be paid for, at least in substantial part, by travelling passengers. Given that most of the costs are typically fixed, and that pensioners are the group most likely to revert to personal transport if faced with higher fares, there is again a pragmatic case for offering them lower fares or free travel. This is essentially the same motivation that leads private rail companies to sell tickets at lower prices to groups deemed to be price sensitive, eg old people or students.
Readers are also referred to two much more comprehensive evaluations of the benefits of these particular subsidised travel schemes.
Greener Journeys. The costs and benefits of concessionary bus travel for older and disabled people in Britain. September 2014
UK Department for Transport. Evaluation of Concessionary Bus Travel The impacts of the free bus pass. 2016
Ramsey pricing. This is a well-known economics approach to recovering fixed costs in a monopoly situation. In technical terms it means allocating fixed costs in inverse proportion to the elasticity of demand. It is sometimes seen as unfair because it means that charges fall more heavily on "essential users".
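As an illustration only, with invented numbers: under the inverse elasticity rule the markup over marginal cost, as a share of price, is k divided by the demand elasticity, where k is a constant scaled so that the markups just recover the fixed costs. A minimal sketch:

```python
def ramsey_price(marginal_cost, elasticity, k):
    """Inverse elasticity rule: (p - mc) / p = k / elasticity,
    rearranged to p = mc / (1 - k / elasticity)."""
    return marginal_cost / (1 - k / elasticity)

mc, k = 1.0, 0.4  # illustrative marginal cost and scaling constant

# Inelastic "essential" commuters bear a far higher markup than
# price-sensitive travellers such as pensioners or students.
p_commuter = ramsey_price(mc, elasticity=0.5, k=k)    # about 5.0
p_pensioner = ramsey_price(mc, elasticity=2.0, k=k)   # about 1.25
```

The same logic explains why rail companies offer railcards to students and older people: the revenue lost per price-sensitive passenger is small, while the fixed costs are spread more widely.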
Sunday, March 10, 2019
Neither economics nor physics can always be reduced to simple common sense. The Braess Paradox may not be in the same league as Schrodinger’s Cat, but like other market failures it too may have some important implications for how we manage power systems and other networks.
- My apologies are offered to anyone susceptible to mathematics allergy, intolerance or indigestion, but you can still read this piece. Just ignore the algebra and arithmetic, assume that it’s correct, and move on to the discussion. I am hoping to produce a series of occasional comments that illustrate some of the broader issues of market failure in the energy sector, including those associated with the intriguingly named theory of the second best.
There is a phenomenon, well known to traffic engineers, called the Braess paradox, in which adding an additional link to a traffic network can actually increase journey times for everyone. The example below, deliberately simplified but superficially at least plausible, shows how this can come about.
4000 vehicles travel from X to Y each day and there is a choice of routes, via A or via B. Sections X-B and A-Y are uncongested, with capacity well in excess of any likely volumes and a typical travel time of 45 minutes. Sections X-A and B-Y, in contrast, have shorter travel times: only 20 minutes when there is no congestion, but the travel time rises with the number of cars once the volume of traffic exceeds 2000, by 1 minute for every extra 100 cars. This is represented by the formula t = max(T/100, 20), where t is travel time in minutes, T is traffic volume, and max simply means the higher of the two values in the bracket.
As drivers learn from their experiences, the volumes of traffic quickly reach an equilibrium, in which 2000 drivers use route X-A-Y, and 2000 use route X-B-Y. Whichever route is chosen the journey time is 65 minutes. The situation is stable in the following sense: if there is any significant net shift in the number of drivers changing away from their normal route, then they will face a longer journey time and are likely to revert back to their previous choice.
The paradox arises if we add in the possibility of a new connection A-B, which has a negligibly short journey time, taken for arithmetical convenience and to make the illustration simple, as being zero. Real physical examples might be a new short river bridge, or the removal of some other physical impediment to create a short new road connection. What then happens is that drivers using the X-A-Y route realise that they can reach Y faster using the new connection A-B, travelling along X-A-B-Y and avoiding the slow A-Y link. Initially this cuts their overall journey time substantially, at the cost of slowing the B-Y link for everyone else. Even when all 2000 of them have switched, B-Y carries 4000 cars and takes 40 minutes, so the X-A-B-Y journey still takes only 60 minutes against the original 65.
The next consequence is that the drivers previously using X-B-Y, whose own journey has now lengthened to 45 + 40 = 85 minutes, realise that X-A-B-Y is faster and start to switch to that route, with the result that the X-A travel time rises too. A new equilibrium is reached only when all 2000 of them have also switched, taking X-A to 4000 cars and 40 minutes of travel time, so that the total journey time for everyone is 80 minutes. No-one has any incentive to use the X-B or A-Y links at all.
In this new situation, once again, no individual will gain from changing their route, and everyone has an extra 15 minutes on their journeys each day. But if everyone were to agree not to use the link A-B, journey times would of course revert to 65 minutes. So what has gone wrong?
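The arithmetic of the example can be checked with a short sketch in Python, using the travel-time formula and link names from the text (the toll figure at the end is my own extension of the argument, not part of the original example):

```python
def travel_time(volume):
    """Congested links X-A and B-Y: 20 minutes free-flow, rising by
    1 minute per 100 cars once volume exceeds 2000, i.e. t = max(T/100, 20)."""
    return max(volume / 100, 20)

UNCONGESTED = 45  # fixed travel time on links X-B and A-Y

# Before the A-B link: 2000 cars on each of X-A-Y and X-B-Y
before = travel_time(2000) + UNCONGESTED            # 20 + 45 = 65 minutes

# After the zero-time A-B link opens: all 4000 cars take X-A-B-Y
after = travel_time(4000) + 0 + travel_time(4000)   # 40 + 0 + 40 = 80 minutes

# This is a Nash equilibrium: deviating back to X-A-Y (or X-B-Y)
# would cost 40 + 45 = 85 minutes, so no individual driver gains.
deviation = travel_time(4000) + UNCONGESTED         # 85 minutes

# A congestion charge on A-B worth at least 25 minutes of time would
# restore the old routes: at the original flows, X-A-B-Y would then
# cost 20 + 25 + 20 = 65 minutes, removing the incentive to switch.
minimum_toll = before - 2 * travel_time(2000)       # 25 minutes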
Explanation and Implications.
The underlying explanation of this paradox is what economists call an externality. Every additional driver on a congested route adds a small amount to the travel time of every other driver on that route, but the very considerable extra cost (in terms of time) imposed on other drivers is not something that any individual driver can perceive directly, and it does not enter their decision making. As a result, the individual "selfish" strategies of each driver result in everyone reaching their destination later.
The stable state of traffic flows, in which no-one has anything to gain by changing their individual behaviour is what economists call a Nash equilibrium; and it can be a long way from the ideal outcome. The best solution, at least in theory, would be found by “pricing the externality”. Some form of road pricing, or an easily administered toll, would confront each individual driver with a “congestion charge” on the key routes, and would be set at a level that restored the previous equilibrium.
Power grid upgrades may cause blackouts, warns Braess's paradox.
I had naively assumed that network flows governed by the laws of physics were immune from this paradox, which, it can be argued, stems from the foibles of a world where humans fail to cooperate and markets fail to produce sensible outcomes. It turns out that power networks have their own intrinsic problems, described as a Braess paradox, according to researchers at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPIDS). Whether these deserve the label of paradox is another matter.
In traffic networks, as explained above, Braess's paradox has a clear economic explanation, that of externalities, “selfish” behaviour and a suboptimal Nash equilibrium. In power grids, the issue is purely one of physics, and is due to a phenomenon called "geometric frustration." A stable operation of the power grid corresponds to a synchronous state of generators and motors, which must rotate with exactly the same frequency and fixed phase differences. Adding a new transmission line introduces new pathways for the electric power, reducing losses, but also introduces a new constraint. In certain situations the power grid cannot meet all the constraints and becomes unstable. In my view, despite the analogies with traffic flows, this is best described as physics, not as a paradox.
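The "constraint" point can be illustrated with a deliberately minimal two-node sketch (an assumed toy model of my own, not the researchers' simulation). Steady power transfer across a single AC line obeys power = capacity × sin(phase difference), which has a solution only while the demanded transfer stays within the line's capacity:

```python
def has_synchronous_state(power, capacity):
    """A synchronised steady state of a single two-node link exists only if
    power = capacity * sin(delta) is solvable, i.e. |power| <= capacity."""
    return abs(power) <= capacity

# Within capacity a steady synchronised state exists...
ok = has_synchronous_state(power=0.5, capacity=1.0)     # True
# ...but a transfer demand beyond capacity leaves no steady state at all,
# and the generators and motors fall out of synchrony.
lost = has_synchronous_state(power=1.5, capacity=1.0)   # False
```

In a real meshed grid, adding a line changes which of these constraints bind across the whole network; when the new set of constraints cannot all be met simultaneously, the grid destabilises, which is the Braess-like effect the researchers describe.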
But the decentralisation of decision making in the electricity sector has also raised the possibility of market solutions that will operate to relieve congestion constraints in local networks. In that case there may be numerous participants connected to the grid who are making economic choices in response to incentives that reflect congestion on particular parts of the network. That is likely to mean, in any complex network, that situations can arise that are analogous to the traffic congestion example described above, and the potential for the Braess paradox of economics will come into play.
And Implications for Future Decentralised System Operations for Power Networks
There is some evidence that decentralised networks are less susceptible to the dynamic instabilities of large complex networks caused by synchronisation issues. But, like any transport network, they will be potentially subject to the problems associated with multiple actors attempting measures to resolve congestion, ie the Braess "sub-optimal Nash equilibrium" problem described above.
The development of local congestion markets within power networks is still largely hypothetical, but the possible emergence of the Braess paradox is one factor that supports an important role for a distribution network operator (DNO) in coordinating the operation of the power system. That might include responsibility for some form of congestion pricing. Quite how big that role should be, and whether it extends to making the DNO the sole retail supplier for decentralised local networks, is a much bigger question on which the jury is still undecided.
In the meantime, it will be worth remembering that market failures are not just theoretical inventions. They matter and they can cause serious problems.
Read more at: https://phys.org/news/2012-10-power-grid-blackouts-braess-paradox.html#jCp
And see some interesting comments attached to this blog post.
Witthaut, D. and Timme, M. (2012). Braess's paradox in oscillator networks, desynchronization and power outage. New Journal of Physics, 14 (August).
Friday, March 8, 2019
What if anything does the European experience tell us ?
There is a continuing and long running source of tension in political approaches to the power sector and indeed to network energy utilities as a whole. It is between central direction and state involvement on the one hand and privatisation and liberalised markets on the other. The distinction is of course often full of ambiguities. Today, two very different models for electricity commercialization operate in the different states of the USA. Most popular now is a competitive model, in which power producers can openly access transmission infrastructure and participate in wholesale electricity markets. The other is the traditional regulated monopoly model, in which state commissions regulate privately owned and vertically integrated electricity providers, who are de facto central planners. In Europe state owned companies, such as EdF (still 85% state owned), have participated successfully in liberalising markets across Europe.
Three factors help explain both the liberalisation trend and the fact that it will continue to struggle for wider acceptance. They are “the three D’s”: decarbonisation, digitalisation and decentralisation, all of which are having, and will continue to have, a major impact on the governance of the power sector.
But first it is worth recalling the features of the power sector that explain its historical development as some form of monopoly, with central control (not always at national level) over the operation of the system.
1. The natural monopoly character of networks, the high and medium voltage transmission or distribution lines, transformers and control operations which imply a heavy cost of physical fixed infrastructure.
2. The command and control nature of the real time operation of systems to balance supply and demand.
3. Finally there is the need to secure investment in very expensive immobile plant, eg generating plant, with a very long life. Investors need inducement to put money in a utility in return for a low but secure rate of return – this is critical to the affordability of power for consumers.
The answer historically was always that some form of centrally controlled monopoly, publicly or privately owned, and with a high degree of vertical integration, was both inevitable and demonstrably the most economically efficient model. The liberalisation message, by contrast, was that you could and indeed should unbundle the elements that were natural monopoly – the wires business - from the elements that were in principle at least open to competition – generation and retail sale to consumers. The natural monopoly networks could then be put into private ownership and regulated by statute to maintain quality standards and limit the owners to a fair rate of return. The competitive elements, especially generation, would be subject to market disciplines and economic regulation confined to monitoring by the national competition authorities.
A major factor permitting liberalisation was the extent of advance in sophisticated communication and IT systems (digitalisation), which had the effect of reducing the otherwise huge transactions costs in unbundling an integrated activity like electricity supply. This allowed, inter alia, the penetration of competitive market ideas right down to the level of retail supply to small consumers. Even so the industry retains a substantial, and necessary, element of command and control, over system operations, which is achieved through a complex structure of operating codes and protocols.
So what are the lessons from European experience? The messages are mixed. The UK was a pioneer and pushed the liberalisation process furthest – widely cited as a great achievement. The rest of Europe, pushed by the European Commission, has been moving more slowly and hesitantly along the same track. There is a plethora of different models, with variations on the degree of unbundling, monopoly and competition, and different forms of state involvement and intervention.
But has liberalisation been a great success? One can certainly point to particular plus points, especially in the regulation of the network businesses, but I would suggest that the answer is much more complex and nuanced. The UK has major problems with the power sector, with several indicators that all is not well. Little or no investment in generation now takes place without some form of government subsidy or guarantee; in other words the government has been sucked back into close involvement with, and responsibility for, investment decisions in the sector; at the same time it has lost many of the key levers and policy instruments previously provided by ownership, and which would assist good decision making.
Another area of disquiet has been the operation of retail competition, with substantial allegations of various forms of abuse or unfair trading practices. At the last general election both main parties campaigned on a platform of price controls, anathema to any conception of a properly functioning liberalised market.
And has the more liberalised UK industry been more successful than its rivals within the EU? If we look at the levels of retail prices as an international comparison (which has its own complexities) the evidence is not compelling. The UK is in the middle of the pack and does not do as well as France in particular. France, with its history of state ownership and a dirigiste approach to the economy, has been perhaps the most successful over the period, at least for the power sector.
For the future we need to turn to the other two D’s. Decarbonisation will have the most profound impact on how the sector is managed and governed. There are huge difficulties in relying on a carbon price alone to achieve the emissions reductions we need. That implies more interventionist policies, not fewer. And the low carbon technologies lack the flexibility of fossil generation. This creates both technical and theoretical difficulties for efficient system operation within a conventional market framework, and will inevitably lead to some return towards more command and control within system operations.
Digitalisation has enabled the unbundling and the successful market experiments that we have seen. It now needs to be deployed to enable much more interesting forms of competition and innovation at the retail level, but it won’t resolve the fundamental conundrum of securing large scale investment.
Decentralisation and local control depend partly on what become the dominant low carbon technologies, but much more on the growing involvement of consumers – the demand side. The centre of gravity of decision making may move away from the centre to much more local entities. But many of the same policy dilemmas, securing investment and meeting low carbon targets, will still be there. Paradoxically the loss of scale may predispose to less market-oriented approaches, as the complexities and transaction costs involved in small scale power networks outweigh any theoretical benefit from competition incentives.
In my view we are seeing a slow move back towards greater government involvement in the power sector, for all the reasons above. The challenge will be to develop our existing structures to accommodate the interventions necessary to decarbonise the economy, while at the same time continuing to exploit the innovations, not least in tariffs and metering, that can spring at least in part from domestic supply competition.
 This is an issue discussed in more detail elsewhere on this site, but is part of a family of rather technical issues around the conditions that can create market failure.
Thursday, February 28, 2019
Antti Lipponen of the Finnish Meteorological Institute has created an animated chart that shows the trend in global and national temperatures from 1900 to 2017, ie almost to the present day. Its distinctive feature is that it shows how an overall trend, significant in itself, can mask even more dramatic swings in individual geographies and in the variations between them. Some, though not all, of these are likely to be experienced as episodes of unusual or extreme weather, eg drought or flood. The chart is expressed in terms of temperature anomalies, or deviations from the norm; in this case that simply means deviation from the average over the period 1951-1980.
To interpret what is happening in more detail, the following is a useful key
Year. In the centre of the circle is the year of the observations, starting in 1900.
Global temperature trend: top right of the picture.
Temperature anomalies. Moving out from the centre are five concentric rings, corresponding to the size of the anomaly: from −2.0°C through −1.0°C, 0.0°C and +1.0°C to +2.0°C. Individual country values are expressed as spokes emanating from the centre, with colours indicating the scale of the deviations (negative = blue, positive = red).
Geography. The individual countries should be visible in full screen mode, but for a quick impression of what is happening in global differences, note that the quadrants can be summarised as follows. Moving clockwise from 12.00, we find:
Top right; Asia and the Middle East
Bottom right: Africa
Bottom left: Europe
Top left: The Americas and Oceania
The animation has the virtue of demonstrating in visual form the potential significance of global warming in provoking increasing numbers of extremes, which may nevertheless vary substantially from year to year, country to country, and continent to continent.
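The anomaly calculation behind the chart is simple arithmetic: subtract the 1951-1980 baseline average from each observation. A minimal sketch, with invented values rather than real station data:

```python
# Invented illustrative temperatures (deg C) for baseline years, not real data
baseline_observations = [14.0, 14.1, 13.9, 14.0]   # sampled from 1951-1980
baseline = sum(baseline_observations) / len(baseline_observations)

def anomaly(observed):
    """Deviation of an observed temperature from the 1951-1980 baseline mean."""
    return observed - baseline

recent_anomaly = anomaly(15.0)   # on these numbers, a +1.0 C anomaly
```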
Monday, February 25, 2019
SERIOUS US RECOGNITION OF CLIMATE CHANGE RESPONSIBILITIES. BUT IDEOLOGICAL DIFFERENCES NEED TO BE PUT ASIDE.
A more urgent approach to climate issues is stirring again among US politicians and opinion formers. A recent FT article highlights apparent conflicts of approach, and the need to reconcile them. In part any conflict might be seen as just another illustration of the deep ideological divide between believers in the ability of unfettered markets to resolve all problems, and proponents of a much greater role for state intervention. The reality is that effective policy on mitigation of climate change will always demand a combination of coordinated actions at government level with the powerful incentives that economic instruments can provide. There is no single silver bullet. Markets won't work on their own, and we also need both effective action on infrastructure and focused regulation.
Martin Wolf is always worth reading on economics and this remains true when he turns his attention to climate issues (The US debate on climate change is heating up. FT, 19 February 2019). This recent article suggests that informed opinion in the US, perhaps for the first time since the early days of the Obama administration, is moving towards increasing recognition of the fact that the US, widely seen as a laggard on climate related policies, cannot stand back from its global responsibilities in relation to curbing emissions. The US, as a country, still has the greatest historic responsibility for CO2 (the most significant of the greenhouse gases), even though it has now been overtaken by China on current emissions. It is also still close to the highest for current per capita emissions, where it is matched only by a few countries such as Australia (coal once again at the heart of the problem) and surpassed in the oil states of the Middle East by countries such as Saudi Arabia with a notorious history of wasteful subsidies to energy consumption.
Data source: CDIAC (1751-2013) and BP (2014-2015)
Wolf highlights two recent announcements. The first is an “economists’ statement on carbon dividends”, endorsed by 3,333 US economists, including four former chairs of the Federal Reserve and 27 Nobel laureates. Its main elements are:
· a carbon tax starting at $40 a ton
· combined with a “carbon dividend” approach to return the proceeds to US citizens
· border tax adjustments for the carbon content of imports and exports
· prices to act as a substitute for unnecessary regulations.
This plan is also to be proposed to "other leading greenhouse gas emitting countries".
Source: Union of Concerned Scientists
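The carbon dividend arithmetic is easy to sketch. The figures below are round illustrative assumptions of my own (roughly 5 billion tonnes of annual US CO2 emissions and a population of about 330 million), not numbers from the statement:

```python
TAX_PER_TONNE = 40        # dollars: the statement's starting carbon tax
EMISSIONS_TONNES = 5.0e9  # assumed annual US CO2 emissions
POPULATION = 330e6        # assumed US population

revenue = TAX_PER_TONNE * EMISSIONS_TONNES   # about $200bn a year
dividend_per_person = revenue / POPULATION   # roughly $600 per head
```

On these assumptions every citizen would receive a flat rebate of around $600 a year, which is what makes the package potentially progressive: heavy emitters pay more in tax than they get back as dividend.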
The ideas per se are not new. Most economists view emissions pricing as a necessary if not sufficient part of any solution. Others have discussed carbon wealth funds and tax-and-dividend approaches. (The “Carbon wealth fund” tag links to three earlier articles.) Helm and Hepburn were early proponents of border tax adjustments. But bringing them together as a coherent package, and recognising the seriousness of the issue (where economists have often been slow), is very welcome.

The second announcement is the Green New Deal proposed by a group of House Democrats, led by Alexandria Ocasio-Cortez. It is a proposal to transform the US economy by moving to zero carbon sources for power generation and upgrading all buildings to achieve maximum energy efficiency. The approach is strong on regulatory intervention and infrastructure investment, but ignores the incentive effects of prices and markets. Activists vigorously oppose market-based mechanisms, emissions trading and offsets. With even more Utopian fervour they also dismiss major technology options such as carbon capture and storage, nuclear power, waste-to-energy and biomass energy, although at least some of these options are likely to be helpful, and very possibly necessary, for a low or zero carbon future.
1. The starting point of $40 per tonne is too low, on its own, to incentivise most of the investments and behavioural changes required. Moreover, signalling a profile of carbon prices rising over time runs in the opposite direction to the environmental value of emissions reduction, which ought to prioritise cutting current emissions.
2. Inter alia, a rising price profile leads to the well-known Green Paradox, under which fossil fuel producers have an incentive to bring forward production of the most polluting fuels.
3. Border tax adjustments make a great deal of theoretical sense, but pose huge practical and administrative issues in measuring carbon content. Given the existing complexities of international trade, rules of origin, etc, and in the current global trade environment, they could quickly fall into a tangled web of complex negotiations and disputes.
4. The major investors willing to fund big infrastructure projects at a low or modest cost of capital, eg pension funds and sovereign wealth funds, demand secure revenue streams that markets on their own will not provide. This necessarily implies either government guarantee in some form, or some equivalent in terms of monopoly status and mandated moves towards low carbon. (This argument is explored in more depth on another page on this site).
5. Effective strategy is highly dependent on innovation, and the market failures related to innovation are well documented. Again this implies significant interventions as well as price signals that incentivise investment.
6. There are numerous instances where regulation on its own can achieve significant gains. One of the most dramatic illustrations is the effect of the change in US road vehicle regulation in the late 1980s, which relaxed the Corporate Average Fuel Economy (CAFE) standards for vehicle fuel efficiency. Technical progress in engine efficiency continued, implying falling fuel consumption per tonne of vehicle, but without the CAFE targets the efficiency gain simply translated into heavier (ie more wasteful in energy terms) vehicles, and improvement in vehicle fuel economy came to an end.
Trends in engine efficiency contrasted with trends in vehicle fuel economy, responding to relaxation of US Corporate Average Fuel Economy (CAFE) standards in the late 1980s. Source: Lutsey and Sperling.
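The Green Paradox in point 2 can be illustrated with a minimal two-period sketch. A resource owner with a fixed stock compares the net margin from extracting now against the discounted net margin from extracting later; if the announced tax rises steeply enough, extracting everything early is the rational response. All numbers here are invented for illustration:

```python
# Two-period Green Paradox sketch: a fossil producer chooses when to extract
# a fixed stock. Prices, taxes and the discount rate are all illustrative
# assumptions, not estimates from any actual proposal.
price = 60.0          # market price per tonne of fuel, assumed constant
tax_now = 40.0        # carbon tax in the first period
tax_later = 80.0      # announced, much higher tax in the later period
discount_rate = 0.05  # producer's discount rate

net_now = price - tax_now                              # margin if extracted today
net_later = (price - tax_later) / (1 + discount_rate)  # discounted margin if deferred

# A steeply rising tax makes deferral strictly worse than immediate extraction,
# so production is brought forward -- the Green Paradox.
print(f"net margin now: {net_now:.2f}, discounted margin later: {net_later:.2f}")
```

Here deferral actually yields a negative margin, so the owner extracts everything at once; a flat tax at any level between the two would remove that timing incentive.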
The reality is that effective progress depends on a fusion of the two camps. The propositions are not mutually exclusive but mutually reinforcing. Many of the “other leading greenhouse gas emitting countries”, to whom the proposals are to be put, have long recognised (as has the World Bank) that a trio of policy instruments is required.
· Markets and taxes are the instruments with which economists are most at home, framed to reflect the massive externalities of future environmental damage that are not captured without an intervention of the kind proposed by the US economists.
· Innovation and infrastructure, which for the most part are only partly and insufficiently incentivised by market signals, and usually depend heavily on additional intervention with some form of government support and (typically) financial or regulatory guarantees.
· Regulations and standards which only governments can enforce, to support policy objectives and promote behavioural change where that is part of the solution.
 Helm D, Hepburn C, Ruta G (2012) Trade, climate change, and the political game theory of border carbon adjustments. Oxford Review of Economic Policy, 28: 368-394.
 This is discussed at much greater length in the author’s earlier paper. Cumulative Carbon Emissions And Climate Change: Has The Economics Of Climate Policies Lost Contact With The Physics?, John Rhys, OIES Working Paper EV 57, July 2011.
 The “Green Paradox” is usually attributed to a controversial book by the German economist Hans-Werner Sinn. It describes the observation that policies which become greener with the passage of time act like an expressed intention to expropriate fossil resources from their owners, inducing them to accelerate extraction and hence accentuate the problem.
Tuesday, February 19, 2019
And can we justify ambitious targets?
It is easy, for anyone concerned about the future of the planet and our place on it, to assume that formal economic analysis of the case for mitigating climate change is almost redundant or has only limited value in determining the course of action we should take. However, Simon Dietz’s strongly recommended presentation at the Oxford Martin School last week was a timely reminder both that we cannot in practice escape responsibility for some balancing of costs and benefits, and that, in spite of the limitations, trying to answer that question can still yield valuable recommendations and insights.
Weaknesses in the economic calculus
The weaknesses of conventional applied economics, especially cost benefit analysis, in quantifying issues on the scale of climate change are widely recognised.
Some of the weaknesses are of a technical, conceptual or even philosophical nature. In dealing with risk and uncertainty, there is little or no empirical basis for assessing probability distributions. Non-linearities and non-market effects present technical challenges. The policy choices are a long way away from the world of marginal analysis in which cost benefit analysis (CBA) is typically more comfortable. Distributional and inter-generational inequalities bring in philosophical and ethical questions, and economics still lacks a philosophically sound and universally accepted basis for time discounting.
An even bigger issue perhaps has been the inability of conventional macro-economics to capture the complexities, or indeed the potential scale, of major disruptions caused by climate (or, arguably, by other non-marginal or disruptive changes such as Brexit). So-called integrated assessment models (IAMs) purport to capture complex feedbacks between climate impacts and economic output, but the judgment from academics on IAMs has been damning. Such models come “close to assuming directly that the impacts and costs will be modest, and close to excluding the possibility of catastrophic outcomes”, according to Nicholas Stern. In other words, they largely assume away the problem they are supposed to be analysing. It is for these reasons that CBA arguments have become the new line of defence for climate sceptics whose refusal to accept the science has clearly become untenable.

In plain language, Robin Harding in the FT claimed that
… one standard model only gives damage greater than 50 per cent of output with 20°C of warming. Combine that with the assumption that the economy will be many times bigger in the future and the problem is clear. Your grandchildren might be cooking in their own fat on the London Underground, but rather than regarding them as dead, these economic models would regard them as wealthier than you. …… After the financial crisis, the world did not construct vastly complicated models to estimate the chances of another meltdown and the damage it would cause. Policy makers simply recognised that regulations such as the US Dodd-Frank Act are a small price to pay for preventing a repeat performance. It is time to take a similar risk-based approach to the greater problem of climate change.
But we still need to choose between 2°C and 1.5°C targets
Simon Dietz’s presentation focused very clearly on a current issue – the case for, and practicality of, adopting the aspirational 1.5°C target that arose from the COP 21 Paris Agreement. He pointed out, correctly, that the lower target would remove a very substantial part of the “remaining carbon budget”, and that this required unprecedented rates of reduction in carbon emissions, leading inter alia to an early “zero carbon” date and the necessity of carbon capture. Other calculations implied a high implicit carbon price of $100 per tonne of CO2, or more.
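The “unprecedented rates of reduction” can be made concrete with simple carbon budget arithmetic. The budget and emissions figures below are assumed orders of magnitude (numbers of this kind were widely quoted around the IPCC’s 2018 special report), not Simon Dietz’s own inputs:

```python
# Carbon budget sketch: how long before emissions must reach zero?
# Budget and emissions figures are assumed orders of magnitude, not sourced data.
REMAINING_BUDGET_GT = 420.0   # assumed remaining budget for 1.5C, GtCO2
CURRENT_EMISSIONS_GT = 42.0   # assumed current global emissions, GtCO2/year

# At constant current emissions, the budget is exhausted in:
years_at_constant = REMAINING_BUDGET_GT / CURRENT_EMISSIONS_GT       # 10 years

# With a straight-line decline to zero, cumulative emissions are the area
# of a triangle, so the budget allows twice as long before reaching zero:
years_linear_to_zero = 2 * REMAINING_BUDGET_GT / CURRENT_EMISSIONS_GT  # 20 years

print(years_at_constant, years_linear_to_zero)
```

On these assumptions, even an immediate straight-line path to zero emissions exhausts the 1.5°C budget within about two decades, which is what makes the lower target so demanding.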
Defending cost benefit analysis, at least partly on the basis that the alternatives faced, and similarly failed to resolve, most of the same difficult issues as CBA, he attempted to answer the 1.5°C question. His analysis, which neatly side-stepped some of the methodological issues, ultimately boiled down to the biggest uncertainty of all – estimating the cost of the damage from climate change.
Successfully avoiding the CBA/ modelling trap described above, of assuming away the problem, he argued that the (wide) range of damage estimates made from an empirical rather than an IAM approach could at the very least make a strong case for the lower target, even if this was not conclusive in CBA terms. Subsequent discussion focused on what was or was not included in the various estimates of the costs of climate change, notably in relation to resource conflicts and migration.
The conclusions may have seemed uncontroversial or even too mild to many in the audience, especially those of us with professional exposure to climate issues, but Simon noted that at least one distinguished economist had estimated 3.5°C of warming, often taken to be catastrophic, as an “optimum” amount. Simon also emphasised the importance of the “options” argument for adopting a lower target. More action now increases the room for manoeuvre as time passes, especially if more knowledge increases the estimated risk of “tipping points” or serious downside and catastrophic outcomes.

Reinforcing the case for early action and tighter targets
Related arguments have previously been advanced by the actuarial profession, who of course have long specialised in the analysis of catastrophic risk. In 2014 Oliver Bettis addressed an event at the Grantham Institute and argued for a “risk of ruin” approach similar to that used in the insurance industry. The conclusion of his analysis, using “actuarial” risk parameters, was that:
· the CO2 already released (400 ppm) produces an unacceptable risk of ruin
· emergency decarbonisation may be the correct risk management response
· allowing for slow feedbacks, the right target might be below 350 ppm
· removal of CO2 from the atmosphere should be investigated.
In qualitative terms this could be seen as a similar set of conclusions to Simon’s, but with even more aggressive support for lower targets. Actuaries are possibly more risk averse than economists.
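The force of the actuarial framing is that small annual probabilities of ruin compound over long horizons. As an illustration (the probability below is an invented round number, not one of Bettis’s figures, though insurers commonly design for annual ruin probabilities of this order):

```python
# Compounding of a small annual "ruin" probability over a century.
# The annual probability is an invented illustration, not an estimate.
annual_ruin_prob = 0.005   # assumed 1-in-200 annual probability of catastrophe
horizon_years = 100

prob_survive = (1 - annual_ruin_prob) ** horizon_years
prob_ruin = 1 - prob_survive

# Even a 1-in-200 annual risk accumulates to roughly a 40 per cent
# chance of ruin over a hundred years.
print(f"Cumulative probability of ruin over {horizon_years} years: {prob_ruin:.1%}")
```

This is why an actuarial standard that looks tolerably safe on a one-year view can imply intolerable risk on the multi-generational horizons relevant to climate.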
And assessing the practicality and cost
Perhaps the most important barriers to a lower target are simply those of practicality and cost. A badly managed and extremely disruptive re-engineering of the entire energy sector, and indeed the entire economy, could after all bring with it its own “risks of ruin”. It is often asserted that the costs of action are simply unaffordable.
However, at least in very broad macro-economic terms, the admittedly high costs of decarbonisation appear eminently manageable when compared to other major disruptive impacts on the world economy. At a national level, for example, if we take Simon’s very approximate estimate of $100 per tonne and apply it to UK emissions of about 500 million tonnes of CO2, we get a cost of c. 1.4% of GDP; this is consistent with the 1-2% range common in post-Stern discussions of the cost of action necessary to curtail emissions to a “safe” level. As often pointed out, this might mean reaching a particular standard of living less than 12 months later by 2050.
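The arithmetic behind an estimate of this kind is straightforward. The GDP figure below is an assumed round number for the UK, so the result lands within the 1-2 per cent range rather than on any particular point within it:

```python
# Cost of decarbonisation as a share of GDP. The carbon price and emissions
# follow the text's approximations; the GDP figure is an assumed round number.
CARBON_PRICE = 100.0          # $ per tonne CO2, Simon Dietz's rough estimate
UK_EMISSIONS_TONNES = 500e6   # ~500 million tonnes CO2 a year
UK_GDP = 2.8e12               # assumed UK GDP, ~$2.8 trillion

annual_cost = CARBON_PRICE * UK_EMISSIONS_TONNES   # $50bn a year
share_of_gdp = annual_cost / UK_GDP

print(f"Annual cost: ${annual_cost / 1e9:.0f}bn, {share_of_gdp:.1%} of GDP")
```

On these assumptions the cost is about $50bn a year, comfortably inside the 1-2 per cent of GDP range discussed above.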
While in most contexts 1-2% of GDP is a substantial quantity, it is comparable to the energy and other commodity price shocks that the world economy has absorbed, mostly without ruinous dislocations; notably these include the multiple oil price shocks of the last 45 years. Compared to the impact of the 2008 financial crash and its consequences, estimated to have cut GDP by as much as 15%, or to current expectations of the possible near-term cost to the UK of Brexit, the cost of climate action can be seen as modest and eminently affordable.
Intellectually therefore the case for ambitious climate action is strong. Less well developed are, first, the levels of political support and, second, agreement on, and acceptance of, the actual means to get there. But that, as they say, is a series of stories for another day.
 Pindyck, Robert S. 2013. "Climate Change Policy: What Do the Models Tell Us?" Journal of Economic Literature, 51(3): 860-72. Pindyck is particularly scathing.
A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models' descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
 Nicholas Stern (2013). The Structure of Economic Modeling of the Potential Impacts of Climate Change: Grafting Gross Underestimation of Risk onto Already Narrow Science Models. Journal of Economic Literature, 51(3): 838-859, September 2013.
 Robin Harding. 2014. “A high price for ignoring the risks of catastrophe.” Financial Times. 18 February 2014.
 The maintaining of options is also an important part of the case for attaching a higher value to saving current emissions, rather than assuming a rising real price of carbon, addressed by the author, inter alia in Cumulative Carbon Emissions and Climate Change. Has the Economics of Climate Policies Lost Contact with the Physics? Oxford Institute for Energy Studies Working Paper.
 Risk Management and Climate Change: Risk of Ruin. Oliver Bettis. Event at Grantham Research Institute on Climate Change and the Environment London, 14 January 2014.