Tuesday, February 19, 2019

CAN COST BENEFIT ANALYSIS GRASP THE CLIMATE CHANGE NETTLE?


And can we justify ambitious targets?

It is easy, for anyone concerned about the future of the planet and our place on it, to assume that formal economic analysis of the case for mitigating climate change is almost redundant or has only limited value in determining the course of action we should take. However, Simon Dietz’s strongly recommended presentation at the Oxford Martin School last week[1] was a timely reminder both that we cannot in practice escape responsibility for some balancing of costs and benefits, and that, in spite of the limitations, trying to answer that question can still yield valuable recommendations and insights.

Weaknesses in the economic calculus

The weaknesses of conventional applied economics, especially cost benefit analysis, in quantifying issues on the scale of climate change are widely recognised.

Some of the weaknesses are of a technical, conceptual or even philosophical nature. In dealing with risk and uncertainty, there is little or no empirical basis for assessing probability distributions. Non-linearities and non-market effects present technical challenges. The policy choices are a long way away from the world of marginal analysis in which cost benefit analysis (CBA) is typically more comfortable. Distributional and inter-generational inequalities bring in philosophical and ethical questions, and economics still lacks a philosophically sound and universally accepted basis for time discounting.
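To see how much turns on the discounting point alone, here is a minimal numerical sketch (all figures invented for illustration): the present value of a fixed quantum of climate damage a century away, under discount rates spanning the range used in the post-Stern debate.

```python
# Present value of $1 trillion of climate damage in 100 years, under
# different discount rates (figures purely illustrative).
damage = 1.0e12   # dollars of damage assumed to occur in 100 years
horizon = 100     # years

for rate in (0.01, 0.03, 0.05):
    present_value = damage / (1 + rate) ** horizon
    print(f"discount rate {rate:.0%}: present value ${present_value / 1e9:,.0f}bn")

# 1% -> ~$370bn, 3% -> ~$52bn, 5% -> ~$8bn: a roughly 50-fold spread,
# driven entirely by a parameter for which economics has no settled basis.
```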

An even bigger issue perhaps has been the inability of conventional macro-economics to capture the complexities, or indeed the potential scale, of major disruptions caused by climate (or, arguably, other non-marginal or disruptive changes such as Brexit).  So-called integrated assessment models (IAMs) purport to capture complex feedbacks between climate impacts and economic output, but the judgment from academics on IAMs has been damning[2].  Such models come “close to assuming directly that the impacts and costs will be modest, and close to excluding the possibility of catastrophic outcomes”, according to Nicholas Stern[3].  In other words, they largely assume away the problem they are supposed to be analysing. It is for these reasons that CBA arguments have become the new line of defence for climate sceptics whose refusal to accept the science has clearly become untenable.
In plain language, Robin Harding in the FT[4] claimed that

… one standard model only gives damage greater than 50 per cent of output with 20°C of warming. Combine that with the assumption that the economy will be many times bigger in the future and the problem is clear. Your grandchildren might be cooking in their own fat on the London Underground, but rather than regarding them as dead, these economic models would regard them as wealthier than you. …
… After the financial crisis, the world did not construct vastly complicated models to estimate the chances of another meltdown and the damage it would cause. Policy makers simply recognised that regulations such as the US Dodd-Frank Act are a small price to pay for preventing a repeat performance. It is time to take a similar risk-based approach to the greater problem of climate change.

But we still need to choose between 2°C and 1.5°C targets

Simon Dietz’s presentation focused very clearly on a current issue – the case for, and practicality of, adopting the aspirational 1.5°C target that arose from the COP 21 Paris Agreement. He pointed out, correctly, that the lower target would remove a very substantial part of the “remaining carbon budget”, and would require unprecedented rates of reduction in carbon emissions, leading us inter alia to early “zero carbon” and the necessity of carbon capture. Other calculations implied a high implicit carbon price of $100 per tonne of CO2, or more.
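The arithmetic behind “unprecedented rates of reduction” is easily sketched. The toy calculation below uses round numbers of the kind found in IPCC assessments (roughly 420 GtCO2 of remaining budget for 1.5°C, and current global emissions of roughly 42 GtCO2 a year); these are assumptions for illustration, not figures from the presentation itself.

```python
# Carbon budget arithmetic with assumed round numbers: a remaining
# budget of ~420 GtCO2 consumed at ~42 GtCO2/yr.
remaining_budget = 420.0   # GtCO2, order of IPCC SR1.5 estimates for 1.5C
annual_emissions = 42.0    # GtCO2/yr, approximate current global rate

years_at_current_rate = remaining_budget / annual_emissions
# For a straight-line decline to zero, cumulative emissions are the
# area of a triangle: 0.5 * annual_emissions * years.
years_linear_decline = 2 * remaining_budget / annual_emissions

print(f"Budget exhausted in ~{years_at_current_rate:.0f} years at current rates")
print(f"A linear path must reach zero emissions within ~{years_linear_decline:.0f} years")
```

On these assumptions the budget is gone within a decade at current rates, and even a straight-line path must hit zero within about twenty years, which is what makes early “zero carbon” and carbon capture unavoidable parts of the discussion.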

Defending cost benefit analysis, at least partly on the basis that alternatives faced, and similarly failed to resolve, most of the same difficult issues as CBA, he attempted to answer the 1.5°C question. His analysis, which neatly side-stepped some of the methodological issues, ultimately boiled down to the biggest uncertainty of all – estimating the cost of the damage from climate change.

Successfully avoiding the CBA/modelling trap described above, of assuming away the problem, he argued that the (wide) range of damage estimates made from an empirical rather than an IAM approach could at the very least make a strong case for the lower target, even if this was not conclusive in CBA terms. Subsequent discussion focused on what was or was not included in the various estimates of the costs of climate change, notably in relation to resource conflicts and migration.

The conclusions may have seemed uncontroversial or even too mild to many in the audience, especially those of us with professional exposure to climate issues, but Simon noted that at least one distinguished economist had estimated 3.5°C of warming, often taken to be catastrophic, as an “optimum” amount. Simon also emphasised the importance of the “options” argument[5] for adopting a lower target. More action now increases the room for manoeuvre as time passes, especially if more knowledge increases the estimated risk of “tipping points” or serious downside and catastrophic outcomes.
Reinforcing the case for early action and tighter targets

Related arguments have previously been advanced by the actuarial profession, who of course have long specialised in the analysis of catastrophic risk. In 2014 Oliver Bettis addressed an event at the Grantham Institute and argued[6] for a “risk of ruin” approach similar to that used in the insurance industry. The conclusions of his analysis, using “actuarial” risk parameters, were that:
  
·         the CO2 already released (400ppm) produces unacceptable risk of ruin;

·         emergency decarbonisation may be the correct risk management response;

·         allowing for slow feedbacks, the right target might be below 350ppm;

·         removal of CO2 from the atmosphere should be investigated.

In qualitative terms this could be seen as a similar set of conclusions to Simon’s, but with even more aggressive support for lower targets. Actuaries are possibly more risk averse than economists.
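For readers who want the flavour of the approach, here is a toy “risk of ruin” calculation in the spirit of the insurance analogy. The outcome distribution and threshold below are invented for illustration, and are not taken from Bettis’s analysis.

```python
# Toy risk-of-ruin calculation (all parameters assumed): what matters
# is the probability mass above a ruinous warming threshold, not the
# central estimate.
from statistics import NormalDist

warming = NormalDist(mu=2.0, sigma=1.0)  # hypothetical distribution of outcomes, degrees C
ruin_threshold = 4.0                     # warming level treated as "ruin", degrees C

p_ruin = 1 - warming.cdf(ruin_threshold)
print(f"P(warming > {ruin_threshold} C) = {p_ruin:.1%}")  # ~2.3%

# Insurers typically aim to keep annual ruin probabilities below ~0.5%;
# on these assumed numbers the central estimate looks tolerable while
# the tail probability is several times any such tolerance.
```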

And assessing the practicality and cost

Perhaps the most important barriers to a lower target are simply those of practicality and cost. A badly managed and extremely disruptive re-engineering of the entire energy sector, and indeed the entire economy, could after all bring with it its own “risks of ruin”. It is often asserted that the costs of action are simply unaffordable.

However, at least in very broad macro-economic terms, the admittedly high costs of decarbonisation appear eminently manageable when compared to other major disruptive impacts on the world economy. At a national level, for example, if we take Simon’s very approximate estimate of $100 per tonne and apply it to UK emissions of about 500 million tonnes of CO2, we get a cost of c. 1.4% of GDP; this is consistent with the 1-2% range common in post-Stern discussions of the cost of action necessary to curtail emissions to a “safe” level. As often pointed out, this might mean reaching a given standard of living in 2050 less than 12 months later than would otherwise have been the case.
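A back-of-envelope version of that calculation, with round-number assumptions spelled out (the exact percentage depends on which GDP and emissions estimates are used):

```python
# Rough order-of-magnitude check (round-number assumptions).
carbon_price = 100.0    # $ per tonne CO2, Simon Dietz's approximate figure
uk_emissions = 500e6    # tonnes CO2 per year, as quoted above
uk_gdp = 2.8e12         # $, approximate UK GDP

annual_cost = carbon_price * uk_emissions        # ~$50bn per year
share_of_gdp = annual_cost / uk_gdp
print(f"~${annual_cost / 1e9:.0f}bn/yr, ~{share_of_gdp:.1%} of GDP")

# ~1.8% on these particular assumptions; plausible variations in the
# GDP and emissions figures span the 1-2% post-Stern range.
```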

While in most contexts 1-2% of GDP is a substantial quantity, it is comparable to other energy and commodity price shocks that the world economy has absorbed, mostly without ruinous dislocations. Notably these include the multiple oil price shocks of the last 45 years. Compared to the impact of the 2008 financial crash and its consequences, estimated to have cut GDP by as much as 15%, or to current expectations of the possible near term cost to the UK of Brexit, the cost of climate action can be seen as modest and eminently affordable.

Intellectually therefore the case for ambitious climate action is strong. Less well developed are, first, the levels of political support and, second, agreement on, and acceptance of,  the actual means to get there. But that, as they say, is a series of stories for another day.





[1] This can be viewed online and, together with the immediately preceding presentation by Joeri Rogelj, provides an excellent introduction to the current state of play on carbon budgets and policy dilemmas.
[2] Pindyck, Robert S. 2013. "Climate Change Policy: What Do the Models Tell Us?" Journal of Economic Literature, 51(3): 860-72. Pindyck is particularly scathing.
A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models' descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

[3] Stern, Nicholas. 2013. “The Structure of Economic Modeling of the Potential Impacts of Climate Change: Grafting Gross Underestimation of Risk onto Already Narrow Science Models.” Journal of Economic Literature, 51(3): 838-59.

[4] Robin Harding. 2014.  “A high price for ignoring the risks of catastrophe.”  Financial Times. 18 February 2014.

[5] Maintaining options is also an important part of the case for attaching a higher value to reducing current emissions, rather than assuming a rising real price of carbon, addressed by the author, inter alia, in Cumulative Carbon Emissions and Climate Change. Has the Economics of Climate Policies Lost Contact with the Physics? Oxford Institute for Energy Studies Working Paper.


[6] Risk Management and Climate Change: Risk of Ruin. Oliver Bettis. Event at Grantham Research Institute on Climate Change and the Environment London, 14 January 2014.

Sunday, January 27, 2019

HITACHI PROPOSES NATIONALISATION OF NUCLEAR. HAS PRIVATISATION PROVED COMPATIBLE WITH LOW CARBON POLICIES?

Hitachi’s unwillingness to proceed with the Wylfa project represents not only a crisis for the future of large scale nuclear projects in the UK; it also casts doubt on the competence of our governments to implement low carbon policies, and on the country’s ability to secure the investments it needs within the current market and ownership arrangements. It follows the cancellation of support for carbon capture, after the private sector had sunk hundreds of millions in investment, and more recently for the renewable tidal lagoon programme. For this and other reasons it is time to question many of the basic policy and economic assumptions that underpin the current organisation and governance of the sector.

"Time for a realistic appraisal of our liberalised market experiment.  Is this the end of the road for the wilder free market fantasies of the 1990s?"
……...
In the UK we are currently witnessing a number of testaments to the failings of structures set up in the 1990s and after, as an important and widely acclaimed part of the Thatcherite revolution to reduce the role of the state and re-invigorate the disciplines of the marketplace. This has been most evident in the spheres where permanent contact, and conflict, between public policy and private incentives is almost unavoidable. For the UK the most salient current examples are probably:

·         Transport, where public dissatisfaction with prices and service quality on the railways is reaching new heights, and, some allege, the level of subsidy exceeds that given to the pre-privatisation network.

·         Health, where the notorious Lansley reforms have proved to be either useless or counter-productive in making the NHS “internal market” more effective.

·         Electricity, where Hitachi’s effective withdrawal from UK nuclear power construction threatens a fundamental plank of the UK’s energy policy.

·         Oil, where the failure of the private sector to make adequate provision for North Sea decommissioning places a potential bill on taxpayers estimated at £24 billion, a number not much smaller than the £39 billion the UK will be paying as it leaves the EU (and the latter number is much less likely to escalate).

This is a big subject, so today’s comment is confined to the power industry and a little of its history. We do of course need to recognise that electricity privatisation did indeed achieve some important gains. Most notably, the move from self-regulation to independent regulation, and into private ownership, created the financial incentives for efficiency in electricity distribution, where costs fell substantially. It is also true that the previous governance structure of the industry was far from perfect. As a 1980s Chief Economist at the somewhat dysfunctional Electricity Council, then tasked with the sector’s regulation, I observed that the old CEGB had a degree of professional arrogance, a tendency to gold-plated investment and poor investment decisions, and a disregard for the interests of consumers and the downstream distribution sector. It was in some ways far too independent of political control. On the positive side, it was always highly professional and inter alia contributed substantially to UK science research efforts in ways that have since been badly missed.

The structure set up at privatisation, to which I also contributed, was a success too, at least in the short term. Its most remarkable achievement was to replicate the reliable, efficient and internationally admired operation of the National Grid. This meant replacing centralised command and control of plant dispatch in ascending order of cost, the so-called merit order, with a market system that could, as a matter of both principle and practice, deliver the same performance from the existing collection of power stations. The seamless transfer from public to private, and from command and control to a free market, was achieved without the misfortunes that were to dog the privatised rail network a few years later, where the network management in essence “lost control of the assets”, inter alia leading to some tragic accidents.

But if operational efficiency was maintained, and distribution costs reduced, successful transition to dependence on private sector investment was much less clear. The market arrangements were designed to allow the threat of power shortages to induce price rises that would incentivise enough capacity through very large “price spikes”.  Together with initial animal spirits and the coincidence of radical technology change (combined cycle gas) with cheap gas, along with guaranteed markets for some of the investors, this all worked for a while, but it was not long before the cracks started to appear. Government and the then regulator refused to accept the logic that deficient supply could in principle drive higher prices and that these would be essential, in the absence of any other commitments to investors, to get new capacity built. The original market mechanism designed to act as a signal for new capacity was abandoned with the new trading arrangements in 2000. Since then there has been no substantial new investment in generation that has not relied either on long term contractual guarantees, as with the nuclear programme, or on similar levels of guarantee through feed-in tariffs.

The government has in practice been wholly unable to escape responsibility for sector investment. As the Hitachi episode, among others, shows, this has brought its energy policy, and the pretence of relying on private sector finance, close to collapse. However that pretence has brought its own costs. Complex financial structures, as with private finance initiative (PFI) projects, have been successful mainly in raising the cost of capital, as compared to keeping projects on the government books.

These are far from being the only problems with the new market structures that have been allowed to evolve since 1990 and 2000. Others include the continuing EU wide absence of carbon prices necessary to promote low carbon policies, failure of the sector structure to promote rational tariffs either now or for the future, inappropriate loading of social and other costs into consumer prices, and widespread dissatisfaction with energy supplier profit margins.[1] All of these subjects are extremely important and deserving of more analysis.[2]

Nor is there any evidence, on the basis of international price comparison[3], that the UK, or at least its consumers, have benefited from the path breaking reforms of the Thatcher era. The table below compares UK and French domestic electricity prices over the last three years. The relative position of the UK benefits significantly from the exchange rate decline after the 2016 referendum. France, a near neighbour and similarly sized economy, is chosen as an interesting comparator because it has had a much more centrally controlled system, and, unlike the UK, has benefited from a successful nuclear programme.
The comparison for large industrial consumers, whose prices are a closer measure of generation costs, shows a similar picture.


This shows the small but significant improvement in the UK’s position attributable to the post referendum fall in the exchange rate. But the larger gap, as compared to households, also suggests that UK “competitive failure” has been manifested mainly in power generation.

All of the above leads to the suggestion that all is not well with energy sector governance in the UK. Widely acclaimed as world-leading in the 1990s, the UK’s liberalised market frameworks have simply not delivered within a 21st century environment. Surely this is the time for a comprehensive re-appraisal.



[1] A number of these tariff issues are discussed much more fully in the author’s paper prepared for Energy Systems Catapult, and published by them as
Cost Reflective Pricing in Energy Networks.  The nature of future tariffs, and implications for households and their technology choices. April 2018.
[2] A much fuller discussion of these points is given on another page on this site, Low Carbon Power, and in the author’s “think piece”, published in 2016 by the Energy Technologies Institute, on how to deliver efficient networks for a low carbon future energy system. It aims inter alia to set an agenda for a future power systems architecture.
Enabling Efficient Networks  for Low Carbon Futures: Options for governance and regulation. 2016
[3] Price comparison data is taken from sources published by BEIS and is readily available online.

Saturday, January 19, 2019

BREXIT, HITACHI AND CLIMATE CHANGE. SERIOUS LESSONS TO BE LEARNED.




Michael Mackenzie wrote in yesterday’s FT (18 January) that Brexit was weighing heavily on investment confidence in the UK.

“… one opinion has held sway among professional custodians of money for some time: steer clear of the UK.” Investors are increasingly wary of investing money in the UK economy, partly because of the dangers associated with a disorderly exit from the EU, partly because a UK not in the EU is a far less attractive proposition, and partly because of continuing chaos and uncertainty in government.

This is clearly one of the factors behind Hitachi’s decision to pause construction on the Wylfa nuclear power station in Anglesey. But another is the sheer difficulty and complexity of putting together private sector financing for massive energy projects. “Foreign companies are increasingly leery of British infrastructure projects”, according to Nick Butler, also writing in the FT. And this problem transcends Brexit, although it is certainly amplified by it.

According to FT reports (17 January) “People involved in the Wylfa project said a lack of firm financing commitments made it impossible for Hitachi to keep pumping in its own cash.” Hitachi intimated that their involvement could only continue if the project were kept off their balance sheet, limited further investment was required and there was a prospect of adequate profit. FT reporting commented that “… to meet these criteria is likely to require a significant change in the UK government’s approach to financing nuclear power.”

This is a huge blow to the government’s plans for early decarbonisation of the power sector, and hence for its contribution to internationally agreed targets for combatting climate change. It is widely argued that the costs of other renewable sources such as solar energy or offshore wind are falling sufficiently rapidly that this should not be a concern, and that these technologies are already more than competitive with nuclear power.

However, the government has also effectively killed off the Swansea Bay tidal lagoon project, a renewable energy project with the potential to overcome one of the objections to wind and solar power, namely that their output is insufficiently predictable to provide a complete answer to the issue of supply security and reliability. Once again there is a strong suspicion that the government was unwilling to support financing arrangements that would have given this project a sufficiently low cost of capital to make it viable.

In spite of these nuclear travails, it remains difficult to envisage a low carbon future without some element of nuclear power. “It’s difficult to see a low-carbon energy system in the future which has no new nuclear,” says George Day, the head of policy and regulation at the government-funded Energy Systems Catapult.[1] Moreover the government has previously killed off prospects for early adoption of carbon capture and storage (CCS), another prime candidate for non-intermittent baseload power generation.

In my view there are a number of clear lessons to be learned from this debacle.

First, it is clear that almost any form of long life generation investment in the power sector represents infrastructure that will only be created when investors have a very clear policy and regulatory commitment from government. In many instances, as with the feed-in tariffs to support renewables, this will amount to what is essentially a long term government guarantee. The arguments are explored more fully on another page on this site, but it is currently very difficult to point to any form of generation investment that is not supported either by long term tariff arrangements or explicit guarantees.

Second, there is clear evidence that the government’s insistence on private sector finance, and on keeping its involvement “off the books”, runs the danger of raising the cost of capital and of performance failure. The Hitachi debacle may be another illustration of the weaknesses and very high capital costs exposed both in the private finance initiative and in the blunders associated with the public private partnership for upgrading the London Underground.[2] The risks include an impact on cost (these are all extremely capital intensive projects) and hence on affordability. But, if not delivered, these projects also imperil the other energy trilemma objectives of security and sustainability.

Third, from a public policy perspective, we are reminded once again that projects that may well be essential to address fundamental concerns such as greenhouse gas (GHG) emissions and climate change will almost always appear “uneconomic” when there is no means for them to capture the full human cost of greenhouse gas emissions. Current carbon prices are nowhere near the level required, either to match the future damage those emissions will cause, or the likely cost of capturing carbon from the atmosphere, something that will almost certainly be required to meet temperature targets such as 1.5°C. Contrary to some conventional assumptions, an even higher value attaches to reducing current CO2 emissions than those in twenty years’ time. (Again this issue is explored in more detail on another page.)

Fourth, it seems impossible to escape the pernicious consequences of the Brexit train crash, even though the energy sector might be seen as one of the least affected, at least directly, by question marks over future trade with Europe.

Fifth and finally, and this is a challenge for my own involvement in the Oxford Martin School programme concerned with renewables, we need to focus more attention on defining, and if possible increasing, the extent to which we can meet future requirements from a combination of intermittent or variable output sources combined with storage and the management of consumer loads. Not least this could help to mitigate the failure to bring forward either carbon capture or nuclear investment in sufficiently timely fashion.


[1] Energy Systems Catapult is part of a network of world-leading centres set up by the government to transform the UK’s capability for innovation in specific sectors and help drive future economic growth. Its aim, taking an independent, whole energy systems view, is to work with stakeholders across the energy sector (consumers, industry, academia and government) to identify innovation priorities and gaps in the market, and to overcome barriers to accelerating the decarbonisation of the energy system at least cost.
Catapult modelling has tended to support both nuclear power, including smaller modular nuclear technology, and carbon capture.
[2] The Blunders of our Governments, Ivor Crewe and Anthony King. 2013. Its treatment of the London underground fiasco, and the very high cost of capital incurred, is particularly scathing.

Friday, December 28, 2018

CARBON FEE AND DIVIDEND. AN IDEA WHOSE TIME IS COMING.


One simple idea unites almost all environmental economists. It is the importance of pricing external costs – the social and environmental damage caused by greenhouse gas emissions – into markets, so that users and consumers are forced to recognise and pay for these costs at the point of use, rather than allowing them to fall on the public at large or on future generations.

Unfortunately a simple implementation of such an approach runs head first into some major political realities. The first is that the level of pricing on fossil fuels that would be necessary to reduce emissions quickly or substantially is high, and would be a major economic shock both to many industries and to consumers. The industries affected tend to be extremely vocal. The second is the observation that any resulting price increases are likely to have a disproportionate impact on poorer consumers.

Just to illustrate the point, the European Union’s emissions trading scheme (the ETS) has over long periods produced carbon prices of significantly less than €15 per tonne, in a period when the price at which key technology transformations occur has appeared to be much higher, around €70 or more. Even more significantly, it is now widely recognised that the world is so far from reaching safe climate targets that it will have to contemplate large scale sequestration of CO2, i.e. extraction from the atmosphere, with much higher costs, probably at least €200 per tonne (best available current technology puts this figure at around €600 per tonne).

Failures of the ETS reflect many factors, including industry lobbying and the post crisis recession, but they certainly include weak political will in relation to higher carbon prices.

Carbon Fee and Dividend

Citizens’ Climate Lobby[1] has proposed a carbon fee and dividend approach, in which the whole of the revenue from a “carbon fee” is returned directly to households as a monthly dividend. The idea is that the majority of households will receive more in dividend than they pay in increased energy costs. This protects family budgets and frees households to make independent choices about their energy usage, while the higher fuel prices spur innovation in low-carbon products.
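A stylised sketch of the resulting household flows (every number below is hypothetical, chosen to show the mechanism rather than to reflect any actual CCL proposal):

```python
# Fee-and-dividend arithmetic with hypothetical numbers: revenue is
# returned equally per household, so households with below-average
# carbon footprints come out ahead.
fee = 50.0                 # pounds per tonne CO2 (assumed)
households = 28e6          # approximate number of UK households
covered_emissions = 350e6  # tonnes CO2 subject to the fee (assumed)

dividend = fee * covered_emissions / households  # equal payment per household

for footprint in (5.0, 12.5, 20.0):  # tonnes CO2 per household per year
    net = dividend - fee * footprint
    print(f"footprint {footprint:>4.1f} t: dividend £{dividend:.0f}, "
          f"fee paid £{fee * footprint:.0f}, net £{net:+.0f}/yr")

# The average footprint (12.5 t here) breaks even by construction, and
# every household retains the full price incentive to cut emissions.
```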

The scheme would include border adjustments[2] to protect UK businesses, levying import fees on products from countries without a price on carbon, along with rebates to UK industries exporting to those countries. This would discourage businesses from relocating to where they can emit more CO2, and incentivise other countries to adopt similar carbon pricing policies.

This is in fact one of a family of similar ideas aimed at shifting the burden of taxation towards “green taxes”, which seek to redress the market failures that leave environmental problems unaddressed. The counterpart to these socially beneficial taxes is a reduction in other forms of taxation, and related concepts include “revenue neutral carbon taxation” and the notion of a carbon “wealth fund”, promoted in an earlier blog on this site by Adam Whitmore.

The case for action

I believe there will be numerous points of detail to get right, whatever new approach is adopted, and I have a few concerns on specifics.

·         Border adjustment taxes look fine in principle, but the administrative detail in their application looks formidable, and they would still have to comply with international rules on trade.

·         CCL propose a gradual increase in tax on carbon, starting from quite low numbers. As discussed in another page on this site, the stark message of the science is that the highest cost attaches to early emissions, since they are both cumulative and bring forward climate milestones.

·         Getting both the macroeconomics and the redistributive consequences right will be challenging, both technically and politically.

·         The idea of a wealth fund, as opposed to immediate revenue neutrality, may be the preferable route.

Despite these concerns I am more and more convinced[3] that more effective action on carbon pricing is essential, and that its combination with a redistributive agenda in domestic politics is probably the best available way to break the current logjams on climate policy.





[1] Citizens’ Climate Lobby (CCL) is a non-profit, nonpartisan, grassroots advocacy organisation, started in the United States. There are now CCL groups in 40 countries, including the UK network established in 2015.

[2] An idea also previously promoted by Dieter Helm and Cameron Hepburn.
[3] Arun Majumdar discusses the issue in the FT, 23 December 2018. https://www.ft.com/content/9a6f8e90-0398-11e9-bf0f-53b8511afd73

Thursday, December 6, 2018

ENERGY STORAGE. FINDING THE SILVER BULLET


Energy storage is now widely recognised as a necessary component in most options for achieving a low carbon future. For most of our history, energy storage has taken the form of physical stocks of fossil fuel, to be drawn on as and when required. Matching supply and demand in real time has therefore been comparatively simple. Even in the complex world of large power systems, generation can be turned up or down with comparative ease.

That world is changing. Renewable resources (mostly) provide energy according to their own timetable and the dictates of weather and season, most obviously so for the best developed resources of solar and wind power. Nuclear power output can be used to follow load, at a cost, but is still relatively inflexible. Matching these outputs to highly variable consumer demand, for a variety of energy services from heat to transport, as well as appliances and processes of all kinds, is going to be more and more challenging. This implies the fundamental importance of energy storage.

The key technical and economic requirements will in different applications include minimising weight or volume (car batteries), round trip energy efficiency (minimising losses in conversion processes to and from storage), sufficient scale (for large power system applications), low capital cost per kW (unit of power output), and low capital cost per kWh (unit of energy stored). The application determines what is most feasible and economic in each case, and hence also both the choice of existing technologies and the priorities in looking for new approaches.

It is widely assumed by commentators that the great advances in battery technology mean we are well on the way to solving all these problems. However that is far from being the case. Lithium-ion batteries are probably approaching their inherent technical limits, and although it may be possible to force down production costs further, they still represent a very high capital cost. As a result cost, together with any scalability or resource limitations, is likely to inhibit their use other than in premium and high value applications, even though these extend to some high volume uses such as electric vehicles (EVs) and a few specific power system applications. Currently foreseen battery technology scores well on efficiency and reasonably well on cost per kW, but not on capital cost per kWh.

We need and are going to need a number of very different types of storage, and the requirements differ widely across the spectrum of energy services and other requirements. Here are a few of the key issues and questions.

In practice the really critical distinction is between situations amenable to storage solutions that operate on the basis of a daily cycle or similar, and those that have to meet annual or seasonal cycles.  For the former, high capital costs can be acceptable but conversion efficiencies will matter more. For the latter, the reverse is the case. Low capital costs are essential and conversion efficiency less important.

The Dinorwig pumped storage scheme in North Wales illustrates the issues of scale and cost very well. Dinorwig, the main storage facility accessed by National Grid in the UK, stores the equivalent of about 9 GWh of energy. Construction is estimated to have cost c. £500 million and involved shifting 10 million tonnes of rock. It provides a significant contribution to managing daily load fluctuations, but is still relatively small in relation to the overall fluctuations in the daily load curve. In practice Dinorwig is sometimes used for the provision of ancillary services such as frequency control, rather than in the more obvious role of storing energy against a daily peak demand. Operating on a daily cycle, the all-in cost of storage is of the order of only a few pence per kWh, allowing the facility to make a valuable contribution to the efficiency of the power system.

The amount of storage necessary to flatten the typical current January daily load curve is of the order of 80 GWh, or about 9 Dinorwigs, in principle still a credible level of investment. However to cope with even the UK’s current annual seasonal variations in electricity consumption, the storage need would rise to about 17,000 GWh, or nearly 2,000 Dinorwigs. On conservative estimates of the need if UK space heating loads were to be met through electricity (even using heat pumps rather than resistive heating), the seasonal storage requirement could be up to three times higher, or 6,000 Dinorwigs.

Recovery of the very high capital costs of storage, through revenue or compensating benefits, on the basis of a once a year store/draw-down cycle, is clearly impossible. The silver bullet of cheap seasonal storage has to come through a technology with extremely low capital costs per unit of storage capacity measured in kWh. The prima facie front runners for high volume seasonal storage are chemical or heat based solutions, including hydrogen/ammonia, synthetic fuels, and bulk heat or phase change methods.
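A simple calculation makes the point about cycle frequency, using Dinorwig-like round numbers (the capital cost figure is quoted above; the annual charge rate is an assumption):

```python
# Storage economics dominated by cycling frequency (assumed figures:
# Dinorwig-like 9 GWh for ~£500m, annualised at 6% of capital to
# cover financing and fixed costs).
capital_cost = 500e6           # pounds
energy_capacity_kwh = 9e6      # 9 GWh expressed in kWh
annual_charge = 0.06 * capital_cost  # pounds per year (assumed annuity rate)

for cycles_per_year, label in ((365, "daily cycle"), (1, "seasonal cycle")):
    cost_per_kwh = annual_charge / (energy_capacity_kwh * cycles_per_year)
    print(f"{label}: ~{100 * cost_per_kwh:.1f} p/kWh delivered")

# Daily cycling gives ~0.9 p/kWh, the "few pence" quoted above; once-a-
# year cycling gives ~330 p/kWh.  And the scale of the seasonal need:
print(f"Dinorwig-equivalents needed: ~{17000 / 9:.0f}")  # ~1,900
```

On these assumptions the cost per kWh delivered differs by a factor of 365 between the two regimes, which is the whole argument in one number.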

The nature of heat and chemical storage solutions means that seasonal and indeed all storage choices for the power sector can only be properly evaluated by reference to the totality of the energy system. This is most evident for the two very substantial sectors of heating and transport, whose decarbonisation is an essential part of energy policy, since

·         widespread use of heat networks provides an easier means to tap into low cost seasonal heat storage than attempting to recover the stored energy as electricity to power heat pumps.

·         choices in the transport sector, eg between hydrogen and electric vehicles, have profound consequences for the management of electricity supply.

·         electric vehicles are themselves a potentially substantial source of relatively short term storage, as protection against any intermittency in renewables supply.

But this also emphasises the need for a coordinated approach that treats the energy system as an entity, aims to minimise costs and finds compatible paths forward across the very different elements of power, heat for buildings and transport. Current market structures fall a long way short of that requirement.  




Friday, September 21, 2018

CLIMATE SCIENCE. RESIDUAL UNCERTAINTIES THAT DO NOT CHANGE THE UNDERLYING MESSAGE.


Understanding Time Lags Important for Climate Policies.


“Cumulative carbon”, the fact that human emissions of CO2 are removed only slowly from the atmosphere by the natural carbon cycle, is the essence of the climate change problem. Given the thermal inertia of our planet, it may produce a substantial time lag between effective action to limit emissions and actual stabilisation of temperatures, equivalent to the operation of a domestic heating radiator on a one way ratchet. Some climate scientists believe this effect may have been exaggerated, and will be largely offset by other elements in the natural cycle. Even so there is a consensus that we need to move to a world of net zero emissions and beyond.

So much of the predictive element of climate science has been borne out by observation that it is easy to forget there are still major gaps in our understanding and major uncertainties that are very relevant to understanding what climate policies are necessary to (ultimately) stabilise global temperatures.

The slow but seemingly relentless upward trend in global surface temperatures sits firmly in the middle of past model predictions, and denials that it is actually happening (the famous Lawson “hiatus”, based on cherry-picking outlier El Niño effects), or claims that it has nothing to do with human contributions to greenhouse gas concentrations, look increasingly threadbare and ridiculous. There is no doubt that human-induced climate change is with us, and that it is dangerous.

However, although the science makes it clear that urgent emission reductions are essential, it is much less clear how much leeway we have, and whether the 1.5°C or 2.0°C “targets” are attainable. The uncertainties manifest themselves in discussions over the time lags involved, for example between stabilisation of the atmospheric concentration of CO2 and stabilisation of global temperature. These questions relate in turn to the complexities of the natural carbon cycle and the thermal inertia of the climate system.

The issue of time lags is in some ways politically important. Thomas Stocker, then co-chairman of the IPCC Working Group I (assessing scientific aspects of the climate system and climate change), and addressing the Environmental Change Institute in Oxford in 2014, argued that “committed peak warming rises 3 to 8 times faster than observed warming”. The implication is that there are very substantial time lags. In this case temperatures could continue to rise, perhaps for several decades or even centuries, even if net human emissions were reduced to zero. Similar comments can be found from other climate scientists. It has even been suggested that the thermal inertia effect, considered on its own, could stretch the lag to about 200 years – the time for heat equilibrium adjustment to reach the deep oceans.

The intuitive physical explanation of long time lags is simply the phenomenon of thermal inertia. When we turn up the radiator at home it may take an hour or more for the room to reach a new equilibrium temperature. Global warming, in this analogy, is a radiator heating system on an upward ratchet. The issue for humanity is that by the time temperatures become really uncomfortable, we have already ratcheted up the future temperature to which we are committed. If this is a real danger it dramatically increases the risks associated with climate inaction or “business as usual” trends. It also creates an alarming image of climate change as a kind of doomsday machine in which humanity is trapped through its failure to anticipate and respond.
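The radiator analogy can be made precise with a standard one-box energy balance model, C dT/dt = F − λT: temperature approaches its equilibrium F/λ with an e-folding time C/λ. A minimal sketch, with parameter values assumed purely for illustration (a large effective heat capacity stands in for deep ocean uptake):

```python
# Toy one-box energy balance model: C dT/dt = F - lambda * T.
# Parameter values are illustrative only.
heat_capacity = 100.0  # W yr m^-2 K^-1, assumed effective (deep ocean) value
feedback = 1.2         # lambda, W m^-2 K^-1
forcing = 3.7          # F, W m^-2, held constant (roughly a CO2 doubling)

steps_per_year = 10
dt = 1.0 / steps_per_year
temperature = 0.0
for step in range(1, 200 * steps_per_year + 1):
    temperature += dt * (forcing - feedback * temperature) / heat_capacity
    if step in (10 * steps_per_year, 50 * steps_per_year, 200 * steps_per_year):
        print(f"year {step // steps_per_year:>3}: T = {temperature:.2f} K "
              f"(equilibrium {forcing / feedback:.2f} K)")

# With these assumed values the e-folding time C/lambda is ~80 years:
# forcing is fixed from year zero, yet warming is still far short of
# its ~3 K equilibrium after half a century.
```

The disputed question is not the form of this model but the effective parameter values, and how far carbon cycle feedbacks offset the lag.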

However rather more optimistic views have been presented by other scientists. In 2014 Ricke and Caldeira[1] argued that the lags had been seriously overstated, suggesting a time lag of only ten years as a more appropriate estimate. Oxford-based climate scientists, such as Myles Allen, have also suggested that stabilising atmospheric concentrations could lead to a comparatively early stabilisation of global temperature.

The main reason is the potential offsetting effect of the absorption of incremental carbon through the various elements of the natural carbon cycle, including ocean CO2 uptake and the behaviour of biosphere carbon sinks. However there is no natural law that requires such a balance of effects, and the reality seems to be that we are trying to compare two magnitudes, both of which are of major importance but are also difficult to measure with precision. If the effects broadly cancel out over particular timescales, this is essentially a numerical coincidence within the modelling effort. We can expect that future research, better measurement, and associated modelling will gradually improve our understanding.

But melting ice caps are a separate story!

The above discussion does not however cover all the long time lags involved. One particular concern has to be the polar ice caps. If global temperatures reach the point where these start to melt, then the consequential effects in the form of rising sea levels will go on for centuries. Ice sheets, as opposed to sea ice, can, like the retreating glaciers, only be restored by precipitation. That will be slow and net annual ice gain will probably depend on conditions associated with a fall in global and polar temperatures to or beyond what used to be regarded as normal.

And the policy implications of these uncertainties?

In reality the consequences for policy of differing estimates can be exaggerated, since there is agreement on most fundamentals. Most modelling now recognises that stabilisation of temperature, even at a higher level, requires progress to net zero human emissions. Importantly, this is probably unachievable without substantial measures to remove CO2 (and other gases) from the atmosphere. In practice, processes for carbon sequestration are likely to be very expensive. They represent a form of geo-engineering.


[1] Ricke, Katharine, and Ken Caldeira. 2014. “Maximum warming occurs about one decade after a carbon dioxide emission.” Environmental Research Letters, 9, 124002.








Monday, September 17, 2018

SMART METERS. PRIVACY ISSUES OR A VITAL TOOL FOR OUR FUTURE?




The Guardian two weeks ago featured an anguished reader’s letter concerned about the invasion of privacy involved in the installation of smart meters in UK households. It’s worth reflecting briefly on what the privacy and security issues might be, what the real social value of smart meters might be, and how we should balance these with an effective policy.

Smart meters, and I have just acquired one for my electricity supply, will tell you a number of things that you might previously have found difficult to work out. These include, for example, what your power use (in watts or kilowatts) is at any instant, and what it was over the last few hours, days or months. The privacy concern is that the utility, your supplier, can be assumed to have the capability to collect and keep this data, and could, for example, build up a picture of the minute by minute electricity consumption of every household with a smart meter.

There are many, not particularly sinister, reasons why utilities might want to do this. At a minimum, a better understanding of how consumers use electricity can help in planning future system needs, making sure that local networks have adequate capacity to cope with fluctuations in load, and so on.

On privacy and security issues we should perhaps be far more worried about the amount of sensitive information held on you by your bank and your credit or store card issuer, not to mention Facebook, Google and your telecoms supplier, or, currently in the news, the airlines you use. Between them these have tons of information about your shopping habits, lifestyle, opinions, financial affairs, favourite websites, and so on, all of which, if privacy and security are breached, potentially give rise to much more serious abuses than someone being able to work out what time a household has breakfast or runs the washing machine.

It’s also certainly true, as a number of readers are testifying, that the government has not fully thought through its policy objectives on smart meters, and that the programme is unlikely to deliver many of the promised benefits, at least in the short term. And of course there are as usual a lot of horror stories on installation failures. But none of this should blind us to the fact that there is a huge and essential future for devices which create a much closer connection between the way we use energy, and electricity in particular, and the factors that constrain when and how it is produced and delivered.

Let us take a simple future example. We expect a big future for electric vehicles. This has the potential to create big spikes in load, with everyone switching on together, in a period when we will depend increasingly on renewable energy sources which are much more variable than at present, and harder to match to varying consumer demand. One answer to this is for some EV owners to charge their vehicles overnight, but for the timing of that supply to be at the discretion of the supplier, who can match it to when production is available to meet it. In exchange for this surrender of direct control, consumers will get a much more favourable tariff for their EVs, and the practical issues of managing network overload, when all EV owners try to re-charge their vehicles immediately on return from work, can be avoided. This obviously implies an element of intrusion, in the sense that the utility “knows” the purpose of the load it is required to meet. It also transfers a choice (over timing) from the consumer to the utility. But this is a commercial transaction, willingly entered on both sides, providing benefits to both parties.
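A minimal sketch of what that supplier-side scheduling might look like (tariff and vehicle figures are all invented for illustration):

```python
# Supplier-controlled overnight EV charging (hypothetical figures):
# the supplier places the charge in the cheapest hours before morning.
need_kwh = 40.0    # energy required by the morning
charger_kw = 7.0   # home charger power

# Hypothetical hourly prices, pence per kWh, from 18:00 to 07:00.
prices = {18: 25, 19: 30, 20: 28, 21: 20, 22: 12, 23: 8,
          0: 6, 1: 5, 2: 5, 3: 6, 4: 8, 5: 10, 6: 15, 7: 22}

hours_needed = int(-(-need_kwh // charger_kw))     # round up
cheapest = sorted(prices, key=prices.get)[:hours_needed]
cost = sum(prices[h] * charger_kw for h in cheapest) / 100  # pounds

print(f"charge during hours {sorted(cheapest)} for ~£{cost:.2f}")
# Charging immediately on arrival at 18:00 would instead hit the most
# expensive hours; the tariff saving is the consumer's side of the deal.
```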

Smart meters are just a starting point for more sophisticated and user friendly tariff and conditions of use arrangements which can redefine the ways we use energy. No doubt there will be privacy issues to be managed, but essentially these will be no different in character from the privacy issues we encounter in relation to almost all our transactions with businesses large and small. They will be substantially less than most of those that we face in relation to financial transactions, social media and our day to day use of IT and the internet.

Some of the benefits that can flow from more sophisticated metering and tariffs are highlighted in a report published last week by Energy Systems Catapult, which starts to explore the numerous tariff issues highlighted by progress towards a low carbon economy.

Thursday, August 23, 2018

INFRASTRUCTURE SPENDING. THE GENOA BRIDGE COLLAPSE AND PARALLELS WITH INACTION OVER CLIMATE ISSUES.


A low cost of capital but a high cost of inadequate investment.

Why, when interest rates are so low and there is a global glut of capital, are there not much higher levels of investment in infrastructure?

The Genoa bridge collapse is a reminder of the high human and economic cost of failure to provide the safe infrastructure on which a modern society depends. In the case of the failed bridge the blame game is now in full cry, and the causes to which its sudden and catastrophic collapse can be, will be, or are being attributed are numerous and not necessarily mutually exclusive. They include:

·        questions over the original 1960s design;

·        failure to heed expert professional advice that the bridge was at imminent risk;

·        reluctance on the part of populist politicians to countenance any form of infrastructure spending, combined with a readiness to shout “Project Fear” in response to expert opinion;

·        constant pressures to reduce public spending;

·        failures by those responsible for bridge upkeep in the adequacy of maintenance and monitoring of its structural integrity.

Mostly these factors relate to unwillingness to spend money on infrastructure investment, and the search for excuses to avoid doing so. Tony Barber, writing in the FT (17 August), argues as follows.

“… the case for more government spending on infrastructure is unanswerable. Of course, … controversial questions of taxation, spending priorities and state borrowing. But unless these questions are addressed, the next catastrophic bridge collapse is only a matter of time.”

For “catastrophic bridge collapse” read “global climate crises”. The parallels with action on climate policy are both clear and frightening. Action on climate is as essential to our collective future as bridge maintenance is to road safety. The cost of the investment spend is small in relation to the costs of failure (an economic disaster for Genoa and for Italy). And most spending to deal with climate issues is essentially infrastructure in one form or another.

The parallels extend to the politics. As with the Genoa bridge, populist politicians (eg Redwood, Rees Mogg and Lawson) prefer to decry expert opinion, in this instance the science of climate, as “climate alarmism”. The ultimate costs of failure (to tackle climate issues) will, as with the bridge, be much higher both in absolute terms and relative to the necessary levels of investment. Infrastructure is commonly a public good, with collective benefits, and no doubt many people would prefer to enjoy the benefit without having to pay a share of the cost. For climate policy, reluctance to pay is reinforced by the knowledge that the problem is not even contained regionally or nationally, but is global. Unilateral national action is therefore of limited value. If the world is in reality doomed to endure climate catastrophe because of the indifference or inadequate response of other big players (eg the USA, China, the EU, India), then national investment, however well executed, provides little benefit to the country that undertakes it. It is a classic “free rider” problem.

We consistently overestimate the cost of the necessary investments.

Why are so many politicians able to convince themselves, and indeed much of the press and public, that the cost of essential infrastructure investment is unaffordable? Part of the answer comes down to how such capital projects are financed, and stems from fallacies that surround much of the conventional analysis of the real cost of that capital and of how that investment should be rewarded.

How is it that governments are currently able to borrow at rates of interest that are close to zero, or even negative in real inflation adjusted terms, while EDF’s Hinkley Point project is reported as providing a lifetime return of around 9%?  Needless to say such a rate of return means a high cost to the consumer.
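One line of annuity arithmetic shows what is at stake for consumers. The plant cost and rates below are illustrative round numbers, not Hinkley’s actual figures:

```python
# Annual capital charge on a capital-intensive plant at two costs of
# capital (all figures assumed for illustration).
capex = 20e9   # pounds of construction cost
years = 35     # assumed recovery period

def annual_charge(rate: float) -> float:
    """Standard annuity payment that recovers capex over `years` at `rate`."""
    return capex * rate / (1 - (1 + rate) ** -years)

for rate in (0.09, 0.02):
    print(f"cost of capital {rate:.0%}: ~£{annual_charge(rate) / 1e9:.1f}bn per year")

# ~£1.9bn/yr at 9% versus ~£0.8bn/yr at 2%: the financing assumption,
# not the engineering, dominates what consumers ultimately pay.
```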

The cost of capital reflects risk? But this is a very specific definition of risk.

Conventional capital market theory (the well-known CAPM model beloved of finance theorists and the basis of much financial analysis) does indeed link the cost of capital to perceived market risk. But it is a very specific definition of risk: the degree to which the returns on an individual share move with the stock market (eg the FTSE 100 index) as a whole, the “beta” value. It does not depend on the riskiness of an individual project, since it is assumed that investors will diversify away individual risks, even risks as big as that of a massive overrun on construction costs. Project specific risk, in this context, is irrelevant.

The same conventional theory suggests that the cost of capital, viewed as the return over the whole life of an asset, should equate, loosely at least, to the rate of return achieved by utility investments generally. Utility earnings tend to have a rather low correlation with the general economy and the various share indices, implying a low cost of capital. The costs of climate change, and hence the benefits of limiting those costs, are almost entirely independent of any short term market volatility.
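The beta calculation itself is trivial, which is worth seeing because it makes clear how little of a project’s stand-alone risk it captures. The return series below are invented for illustration:

```python
# CAPM beta from first principles (made-up return series): beta measures
# only co-movement with the market, not stand-alone project risk.
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

market = [0.10, -0.05, 0.08, 0.02, -0.01, 0.12]   # market index returns
utility = [0.04, 0.02, 0.05, 0.03, 0.03, 0.04]    # regulated utility returns

beta = cov(utility, market) / cov(market, market)
risk_free, market_premium = 0.02, 0.05            # assumed parameters
required_return = risk_free + beta * market_premium
print(f"beta = {beta:.2f}, CAPM required return = {required_return:.1%}")
# A low beta (~0.13 here) implies a required return barely above the
# risk-free rate, however large the individual project's own risks.
```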

And the same is true for almost any “essential” investment, a category in which we can place both bridge structural repair and investment in mitigating climate change. In commercial terms these should be seen as low risk investments, and financed on a basis that properly reflects that. Conventional finance theory, therefore, is entirely consistent with the use of very low discount rates for appraisal of essential infrastructure and “public good” investments.

We can manage risk and contain the cost of capital.

Recent UK history abounds in expensive mistakes in relation to major capital investment programmes, and many of these relate to a failure to get the right links between risk and reward (the cost of capital) or to manage risks in the most appropriate way.  These include notorious private finance initiative (PFI) projects in which the public finances (via the NHS) have been forced to bear the burden of high returns on capital, even though most of the real risks have remained with the public sector or been de facto underwritten by government. There are two major principles that should help avoid these mistakes in the future.

The first is to recognise project specific risks and separate them from calculation of appropriate returns over the full life of an asset. Many major capital projects, such as nuclear power stations or tidal barrages, are intrinsically subject to significant uncertainties over construction costs and the risk of cost overruns. These may be shared between the construction business and the utility company that operates the asset, and will be embodied in the cost attributed to the asset at the time of its commissioning. The contractor will need to be properly rewarded for the risk taken during construction. But thereafter, within a well regulated sector, the asset enjoys a safe “low beta” utility return. Investment should therefore be appraised on the basis of a very low cost of capital.

The second is to ensure that economic regulation of utilities (eg the power sector) is appropriate. This means providing a degree of security over the future regulatory framework to protect the investors (in the asset) from expropriation of what is for them a sunk cost, if necessary through revenue guarantees.   If the framework is inadequate in providing this kind of regulatory reassurance, the investment will inevitably carry a much higher “project specific” risk premium. The public sector will in effect be paying very heavily for risks that are within its power to avoid.

Lord Stern attracted a great deal of criticism following the Stern Review for arguing that very low discount rates should apply to the assessment of climate mitigation investment. Recent experience of ultra-low interest rates, and indeed some of the indications of the rates attainable for major projects with appropriate guarantees, suggests that low discount rates may be the most appropriate basis for evaluating and comparing low carbon investments.