Climate Resilience Means Avoiding Sources That Emit Carbon Dioxide
Kathleen Barrón - Exelon Corporation

One of the most controversial questions being asked in the electricity sector these days is whether we have a power grid that is resilient to the natural and manmade threats that it faces. Answering this question requires policymakers to evaluate the threats and determine whether the work of system planners and market designers will reduce the magnitude and duration of a disturbance in grid operations by spurring operational practices and investments in transmission, distribution, and generation assets.

But what if some of those very assets are among the causes of the disturbance? In other words, what if the resources powering the grid itself are contributing to, and exacerbating, the severe weather events that are interrupting electric service to customers for days or weeks on end?

As the effects of a changing climate have become more apparent, electric utilities have engaged in adaptation strategies to ensure that they are better prepared for severe weather events whose frequency and intensity will increase with climate change. For example, sea-level rise, one of the most readily observable climate impacts, is already having significant effects on the infrastructure of coastal utilities. Rising sea levels endanger power plants and other structures that rely on ocean water for cooling.

Furthermore, utility infrastructure, such as distribution lines serving coastal residential and commercial locations, is susceptible both to long-term inundation and to periodic tidal or storm-surge flooding. In response, many coastal utilities — as well as those with infrastructure on the coastline or within tidal basins — are looking to raise structures above current and projected sea levels.

Climate change is also intensifying weather-related extremes — in the summer creating higher overall and peak temperatures as well as longer and more frequent heat waves. Such events increase the need for electricity, yet extreme heat reduces transformer capacity. Conversely, as we on the East Coast saw again this winter, polar vortices can bring extreme cold, which can also stress the system in unexpected ways.

Finally, and most obviously, high-wind events damage transmission and distribution lines and poles, disrupting service until repairs can be made. State commissions are authorizing funds to harden the distribution network, acknowledging that such events are not one-off occurrences but rather an increasingly likely outcome of a changing climate. They also expect utilities to restore service just as quickly as before, despite the increasing intensity and frequency of severe weather.

Investing in system improvements is well worth the cost, given that 2017 was the most expensive year on record for severe weather and climate events, according to the World Meteorological Organization. A recent WMO report described how the very active North Atlantic hurricane season, major monsoon floods in the Indian subcontinent, and continuing severe drought in parts of east Africa contributed to this record-setting economic impact. The report estimates that total disaster losses from weather- and climate-related events in 2017 were $320 billion, the largest annual total on record (after adjustment for inflation).

However, this “react” strategy only goes so far. It makes little sense to focus exclusively on adapting grid infrastructure to this changing reality — instead, infrastructure investments should be designed to ensure that the system that provides electricity from coast to coast contributes less and less of the pollution that is accelerating and intensifying climate impacts.

A realistic and rational approach to identifying the components of a resilient system should incorporate the potential for this system to mitigate the risks it faces, recognizing that generation sources that emit high levels of greenhouse gas pollution are not interchangeable with sources that emit little or no climate pollution. In other words, planning a resilient generation system means planning a system that can both withstand interruptions and avoid contributing to them by exacerbating climate change.

There is a real-world example facing the mid-Atlantic power system right now. Four nuclear power plants in Pennsylvania and Ohio have announced that they will retire in the next three years. Last year, those plants produced 40 terawatt hours of carbon-free power. When they retire, that generation will be replaced almost exclusively by fossil fuel-fired plants, resulting in over 20 million metric tons of additional carbon emissions annually, the equivalent of putting over 4 million cars on the road. Until we factor the emissions impact of our generation sources into our resilience planning, we are destined to stay in this never-ending cycle.
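The arithmetic behind those figures is easy to check. A minimal back-of-the-envelope sketch, assuming (these factors are not from the article) a replacement fossil mix emitting roughly 0.5 metric tons of CO2 per megawatt-hour and a typical passenger car emitting about 4.6 metric tons of CO2 per year:

```python
# Back-of-the-envelope check of the retirement figures above. The 0.5 t/MWh
# fossil emissions factor and the 4.6 t/yr per passenger car are assumed
# (typical grid-average and EPA-style values), not figures from the article.
retired_twh = 40                      # carbon-free generation lost (TWh/yr)
mwh_lost = retired_twh * 1_000_000    # 1 TWh = 1,000,000 MWh
emissions_factor = 0.5                # t CO2 per MWh of replacement fossil power
added_tons = mwh_lost * emissions_factor   # ~20,000,000 t CO2 per year
cars = added_tons / 4.6                    # ~4.3 million cars
print(f"{added_tons / 1e6:.0f} Mt CO2/yr, roughly {cars / 1e6:.1f} million cars")
```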

Climate resilience means avoiding sources that emit carbon dioxide.

Extreme in Temperature — and in the Handling and Use of Science
Craig M. Pease - Vermont Law School Environmental Law Center

Ours is an age of unprecedented extremes — in the climate system and in politics.

Noah Diffenbaugh and colleagues’ carefully worded and nuanced 2017 PNAS paper “Quantifying the influence of global warming on unprecedented extreme climate events” develops methods to attribute heat waves, droughts, and high rainfall to climate change. Similarly, but perhaps counterintuitively, Judah Cohen and colleagues’ Nature Communications article from this year links recent increased snowfall and decreased winter temperatures in the eastern United States to a warming Arctic. The Earth’s climate is a complex system, and one’s simple intuition of cause and effect is not always correct. There are lots of subtleties.

A broad review of the science linking climate change and extremes in temperature, rainfall, and other weather may be found in the IPCC Special Report of Working Groups I and II. As scientists are wont to do, the authors of this technical literature linking increased atmospheric carbon dioxide to climate extremes spend much of their time discussing uncertainties, problems, and alternative analyses and interpretations, sometimes maddeningly so.

Yet, studies that attribute climate extremes to increased atmospheric carbon dioxide are on the cutting edge of scientific knowledge. It is inevitable that there will be uncertainties. Paradoxically, the honest and full disclosure of uncertainties is a great marker for scientific quality. Any study that fails to disclose its material uncertainties and problems is immediately suspect.

Which brings us to Lamar Smith (R-Texas), the chair of the House Committee on Science, Space, and Technology and a favorite of the oil and gas industry. Last year, he was quoted as saying “Likewise, extreme weather events are often falsely linked to increased carbon emissions. Historical data . . . demonstrate no discernible connections.” That assertion is baldly baseless, and it reflects a complete failure to delve into the technical literature and present, point by point, whatever evidence he might conjure up to support his position.

Before that, he attacked Kathryn Sullivan, then administrator of the National Oceanic and Atmospheric Administration, when his office stated that “NOAA needs to come clean about why they altered the data.” Here Smith accused agency scientists of falsifying their data, without a shred of evidence to back up his assertion.

Most recently, Smith introduced the Honest and Open New EPA Science Treatment Act — or HONEST Act. Therein, Smith advocates public release of the data that underlie the scientific papers cited in EPA rulemaking. That bill would “prohibit the Environmental Protection Agency from proposing, finalizing, or disseminating a covered action unless all scientific . . . information relied on to support such action is . . . publicly available in a manner sufficient for independent analysis.” Moreover, EPA Administrator Scott Pruitt is currently pursuing administrative actions in parallel with Smith’s bill.

The wording of the legislation seems rather bland, and under other political circumstances, it might well be innocuous. The ambiguous words in Smith’s bill, together with Chevron deference, could easily result in the legislation having little practical impact — if the EPA administrator were so inclined. But Pruitt is clearly inclined to aggressive action on all fronts to undercut environmental regulations. Pruitt and Smith themselves created this dustup about public disclosure of scientific data, and they are using it to push their political agenda.

No amount of arguing with these gentlemen about climate science will get anywhere useful. I am struck by the contrast between the scientific papers they dispute and the supposed facts they offer for policies that would hamstring rulemaking. Climate science papers contain literally page after page after page of carefully constructed, nuanced sentences, in which the authors lay out the limitations and problems of their own analysis, and in which they oh so carefully explain exactly what the data do and do not show.

Despite offers of a “red team/blue team” debate on climate change, I think scientists would often be better off to simply decline offers to engage in public disputes with those who know nothing about the science. Inevitably, such debate will not concern science. And just as inevitably, scientists will lose.

A better solution is to have real scientists leading key congressional committees that oversee science. Yet of the 535 members in the U.S. Congress, only about 5 percent are doctors, dentists, veterinarians, psychologists, engineers, mathematicians, or scientists while about 40 percent are lawyers.

Extremes in temperature — and in the handling and use of science.

Local Governments and Climate Externalities
David Bookbinder - Niskanen Center

By the time it is consumed or used, every good and service in an industrialized economy has generated environmental externalities, meaning environmental costs that are not included in the price. Large or small, hidden or obvious, those environmental costs instead are imposed on third parties who are strangers to the transactions that injure them.

To the Niskanen Center, the government’s proper role in environmental protection is to serve as the corrective mechanism for these systemic market failures. And today we are witnessing something very unusual: in the face of federal inaction, local governments are not only trying to compel polluters to internalize their externalities, but are doing so in a way that may be more economically efficient than anything the feds could come up with.

A single source, such as a coal-fired power plant, can impose a multiplicity of such externalities. Locally, people downwind are exposed to pollutants that directly injure their health, such as particulate matter and mercury. Regionally, such a plant emits large amounts of nitrogen oxides, a precursor of ground-level ozone. And globally, coal-fired power plants are the largest source of CO2.

There are two ways to get a pollution source to deal with these unpriced externalities: government intervention or legal liability to the people injured by these emissions. In turn, government intervention takes one of two forms: command-and-control regulation, which directly limits the amount of pollution released, or a pricing system, which reduces pollution by imposing a monetary cost for each unit of the pollutant released.

Governments prefer regulation, which provides more certainty both that emissions will be reduced, and about which sources will be doing so. Industry generally prefers pricing systems, as they are more economically efficient at reducing emissions. Either way, government action eliminates the externality and thus protects its citizens.

In the United States, we have largely opted for government intervention over tort liability, and most of that intervention has come in the form of command-and-control regulation, with only limited use of pricing. That pricing has come in the form of cap-and-trade systems (which are actually regulation-pricing hybrids), such as the federal sulfur dioxide program and California’s CO2 market. And, as government intervention has grown, the practice of imposing tort liability has diminished; in some instances, government intervention has eliminated, or greatly reduced, the pollution; in others, by complying with the terms of the permit limits, the polluter may be shielded from third-party liability.

However, sometimes the government does not ensure internalization of environmental externalities, necessitating resort to torts. Interestingly, liability is essentially a pricing system; the chief economic difference between the tort system and a tax or cap-and-trade system is that the former sets prices by the actual cost of the injury, while the latter imposes costs based on the amount of emissions. Thus it may be that such liability is the more accurate mechanism for pricing externalities.
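A minimal sketch of that pricing distinction, using entirely hypothetical numbers: a tax scales with tons emitted, while liability scales with the injury actually inflicted, so the two diverge whenever damages per ton differ from the tax rate.

```python
# Hypothetical illustration: a tax prices the quantity of emissions; tort
# liability prices the harm actually caused. No number here is from the article.
emissions_tons = 100_000
tax_rate = 40.0                 # $ per ton emitted
actual_damages = 5_500_000      # $ of injury inflicted on third parties

tax_cost = tax_rate * emissions_tons     # $4,000,000: priced by quantity
liability_cost = actual_damages          # $5,500,000: priced by harm

print(f"tax: ${tax_cost:,.0f} vs liability: ${liability_cost:,.0f}")
# Damages here work out to $55/ton, so a $40/ton tax under-prices the
# externality; liability tracks the injury itself.
```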

And that brings me to a novel form of government action: the tort cases filed against the fossil fuel industry by New York City, San Francisco, Oakland, and six smaller California cities and counties. The Niskanen Center approves of these local governments’ efforts to get the fossil fuel industry to internalize the costs of climate change, the biggest externality in history.

The local governments’ narrative is that the defendants produced coal, oil, and natural gas, and that at some point they became aware of the potentially catastrophic consequences. To protect their financial interests, the defendants (at best) remained silent about those consequences or (at worst) actively worked to conceal this information. And now, when defendants can no longer plausibly deny their products’ externalities (indeed, some defendants have admitted that fossil fuels are the primary cause of global warming), they plan to produce more, and even more carbon-intensive, fossil fuels.

Given this conduct, say the plaintiffs, in choosing between those defendants and the taxpayers, who should pay for the things that these governments must do in order to protect their and their citizens’ property?

In other words, these local governments are attempting both to price the climate externality and to set the price based on the actual injuries. If successful, defendants, as rational economic actors, presumably would internalize that price going forward. These lawsuits may therefore produce an internal carbon tax set at the level required to compensate for the actual injuries these fuels cause, and thereby become the most economically efficient pricing mechanism for climate externalities.

Some Historical Context to Today’s Debates on the Climate Agreement
Robert N. Stavins - Harvard Kennedy School

The European Union, China, India, Brazil, South Korea, Canada, and other countries are negotiating the details for implementation of the Paris Agreement, and are developing domestic policies to achieve their respective Nationally Determined Contributions under the accord. At the same time, the United States — under the leadership of President Donald Trump — has announced its intention to withdraw from the Paris Agreement as soon as permitted (November 2020), and has taken significant steps to roll back domestic climate change policies. This may be a good time to place this U.S. government behavior into historical context.

Of course, the history of climate change science goes back at least to Svante Arrhenius in the 19th century, but my focus is not on the history of the science but on the policy history: in particular, the history of discussions within the U.S. government regarding climate change and potential policy responses.

Some might think that the starting point would be the 1988 Congressional hearings — led by Senators Timothy Wirth and Albert Gore — which the New York Times covered in a long article. That was during the last year of the Reagan administration, but the story really begins more than two decades earlier.

On November 5, 1965, President Lyndon Johnson released a report authored by the Environmental Pollution Panel of the President’s Science Advisory Committee. The report included a 23-page discussion of the climatic effects of increased concentrations of atmospheric carbon dioxide, due to the combustion of fossil fuels, and — interestingly enough — concluded with a proposal for research on a specific approach to responding, namely with what is now called geoengineering.

In his introduction to the report, Johnson emphasized that “we will need increased basic research in a variety of specific areas,” and then went on to state: “We must give highest priority of all to increasing the numbers and quality of the scientists and engineers working on problems related to the control and management of pollution.”

Four years later, Daniel Patrick Moynihan — one of the leading public intellectuals of the 20th century — was working in the Nixon White House, and sent a memorandum to John Ehrlichman, then a key presidential assistant (who subsequently served 18 months in federal prison for his role in the Watergate conspiracy). In the memo, Moynihan referenced the Johnson administration’s report, focused on “the carbon dioxide problem,” described the basic science of the greenhouse effect, highlighted anticipated impacts including sea-level rise, proposed potential policy responses including “stop burning fossil fuels,” and concluded that “this is a subject that the administration ought to get involved with.” We do not know whether Ehrlichman responded.

From today’s perspective in the second year of the Trump administration, it may — or may not — be comforting to recognize that scientific and even policy attention by the White House to climate change goes back more than five decades. Since the Johnson administration, there have surely been ups and downs — through the administrations of Presidents Nixon, Ford, Carter, Reagan, Bush I, Clinton, Bush II, Obama, and Trump.

This list of presidential administrations illustrates that the White House swings between parties. It should also remind us that whether a single four-year term or even the maximum of eight years, administrations are relatively short-lived when judged in historical context.

All of which reminds me of a personal story. In November 2016, just days after the U.S. election, I was in Marrakech, Morocco, for the annual U.N. climate negotiations. I was speaking on a panel assembled by the government of China in their pavilion. Those who preceded me voiced their dismay about the election and their very low expectations for the climate change policy that would likely be forthcoming from Donald Trump and his administration-to-be.

Our moderator from the Chinese government then introduced me to speak, and as I listened with headphones to the simultaneous translation, I heard him say, “And now Harvard’s Professor Stavins will bring us some good news from the United States.” I was dumbfounded. What could I possibly say? I walked to the lectern, sipped some water, took a deep breath, and said to the audience, “When you get to be my age, you recognize that four years is not a long time!”

That will have to suffice as an optimistic conclusion to this column.

Some historical context to today’s debates on the climate agreement.

Unnatural Disaster
Margaret Peloso - Vinson & Elkins
Kristen Miller - Vinson & Elkins

As a society, we are devoted to the idea of spreading the costs of catastrophic losses. Continuing this commitment in the face of projected increases due to climate change will require ensuring that such programs also create incentives to engage in hazard mitigation.

Margaret Peloso is an environmental and natural resources partner in Vinson & Elkins’s Washington, D.C., office. Kristen Miller is an environmental and natural resources associate in the D.C. office.

Natural disaster response in the United States has long been characterized by the distribution of significant amounts of government funds to aid recovery. These funds are allocated in the form of direct aid in the wake of an event and through subsidized insurance policies before a hurricane, flood, wildfire, or tornado. In fact, our cultural paradigms around sharing the burden of disaster recovery have become so strong that researchers find that individual property owners often choose to live in hazard-prone areas in part because they believe the government will make these areas safe for them or help them rebuild in the wake of an adverse event. This effect is further compounded by the fact that people often misevaluate or even ignore risk, and tend to be under-insured for high-value losses that are rarely experienced.

Against this backdrop, the United States has experienced increasing natural disaster losses from hurricane, flood, and fire events, all of which are attributed at least in part to the impacts of climate change. According to the National Climate Assessment, recent increases in hurricane activity are attributable in part to higher sea surface temperatures. The report notes that floods may intensify in the future as a result of climate change because human-induced warming increases the factors that cause and amplify inundation, such as heavy or prolonged precipitation and storm surges. Wildfires are also impacted, as hotter and drier weather, as well as earlier snow melt, extends the length of the wildfire season and causes fires to burn more acreage. In fact, the impact of extreme weather events can already be seen in the National Flood Insurance Program and the Stafford Act disaster relief program administered by the Federal Emergency Management Agency: in the wake of Hurricane Katrina and Superstorm Sandy, FEMA had to borrow approximately $24 billion from the federal treasury to pay out NFIP claims and provide Stafford disaster relief.

Going forward, climate change will pose two separate but related challenges for the programs that distribute government funds in response to disasters. First, can social insurance be structured such that it remains a financially viable tool to promote resilience and community recovery? Second, how can insurance programs be structured to promote adaptation measures that reduce exposure to extreme events?

An interesting example of the limitations of social insurance arises from the California courts’ attempt to socialize wildfire costs through an expansion of the state’s inverse condemnation law. As this example and others show, the viability of social insurance will be dependent upon the ability of policymakers to proactively structure programs that both engage in risk spreading and promote risk-reduction behaviors.

The role of any insurance product is to spread risk. Insurers remain solvent by collecting more in premiums than they pay out in losses. They accomplish this in two ways: setting premiums that correlate with the risk of loss, and insuring risks that are sufficiently diversified to reduce the likelihood that too many claims will be made at the same time. The latter is particularly challenging when insuring against natural disasters, which often cause losses that are near total and simultaneous. When a hurricane storm surge causes flooding, for example, it does not inundate just one house. If an insurer has too much exposure to flood risk, it will not be able to maintain solvency. While insurers can reduce their exposure by purchasing reinsurance and transferring some of the risk to a third party, their ability to do so will be limited by whether they can increase premiums to cover the costs of such reinsurance programs.
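A minimal Monte Carlo sketch of why correlated losses threaten solvency, with all parameters hypothetical: each policy carries the same expected loss in both scenarios, but when a single event hits every policyholder at once, the insurer fails in essentially every event year.

```python
# Diversified, independent losses rarely exhaust an insurer's resources;
# perfectly correlated losses (one storm surge hitting every policyholder)
# are ruinous whenever they occur. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
YEARS = 100_000
N_POLICIES = 10_000
PREMIUM = 1_500          # $ per policy per year (above the 1,250 expected loss)
RESERVES = 5_000_000     # $ of capital on hand
CLAIM = 250_000          # $ payout on a total loss
P_LOSS = 0.005           # annual probability of that loss per policy
capacity = N_POLICIES * PREMIUM + RESERVES   # $20M available each year

# Independent losses: the number of claims each year is binomial.
indep_claims = rng.binomial(N_POLICIES, P_LOSS, size=YEARS) * CLAIM
# Correlated losses: one event per year either hits everyone or no one.
corr_claims = (rng.random(YEARS) < P_LOSS) * N_POLICIES * CLAIM

print("insolvency rate, independent:", (indep_claims > capacity).mean())  # ~0.0
print("insolvency rate, correlated: ", (corr_claims > capacity).mean())   # ~0.005
```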

The problem of limited market capacity to address correlated disaster losses led to the creation of the National Flood Insurance Program in the United States. Prior to the 1950s, commercial flood insurance was available. However, a series of catastrophic floods along the Mississippi River caused many insurers to face significant losses — leading them to either go out of business or exit the flood insurance market. The federal government therefore stepped in to become an insurer of last resort, creating NFIP by statute in 1968.

The program acts as primary insurance for property owners that is underwritten by the federal government. The purchase of insurance under NFIP is required for all property owners who hold a federally backed mortgage in the 100-year floodplain. In theory, NFIP premiums should be set at levels that reflect the risk of flooding at any given property, but there are several factors that result in premiums being heavily subsidized, including the setting of rates based on outdated maps that do not reflect current flood risks.

The structural issues in NFIP were particularly exposed after Katrina and Sandy, when claims on policies vastly exceeded the amount the program collected in premiums. While FEMA is required by statute to repay resultant borrowing, a 2017 GAO report concluded that NFIP is unlikely to generate sufficient revenues to repay these debts. Last year FEMA used new statutory authority to place reinsurance for NFIP, transferring $1.042 billion of risk to the private market to cover payouts exceeding $4 billion. This policy was triggered after Hurricane Harvey, which caused estimated losses to NFIP between $8.5 and $9.5 billion.
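In stylized form, an excess-of-loss reinsurance placement like FEMA’s pays a fixed share of losses falling between an attachment point and an exhaustion point. The $4 billion attachment and the $1.042 billion recovery come from the figures above; modeling the layer as roughly 26 percent of losses between $4 billion and $8 billion is an assumption used here for illustration.

```python
# Sketch of an excess-of-loss reinsurance layer consistent with the figures
# above. The 26% share and $8B exhaustion point are assumptions, chosen so
# the layer's maximum recovery matches the article's $1.042B.
def layer_recovery(loss, attach=4.0e9, exhaust=8.0e9, share=0.26):
    """Reinsurers pay `share` of the loss falling between attach and exhaust."""
    return share * max(0.0, min(loss, exhaust) - attach)

harvey_low = 8.5e9   # low end of the article's Harvey loss estimate
print(f"recovery on an $8.5B loss: ${layer_recovery(harvey_low) / 1e9:.2f}B")
# -> ~$1.04B: a loss that large exhausts the layer entirely.
```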

The other form of disaster relief at the federal level is the Stafford Disaster Relief Act, which provides aid for the immediate aftermath of presidentially declared disasters as well as for rebuilding. In the case of flood or hurricane events, Stafford works in concert with NFIP to provide funds for rebuilding, but there are limitations on Stafford Relief provided in response to floods. Specifically, the Stafford Act prohibits the use of funds to provide assistance to property owners who were legally required to obtain flood insurance and did not have coverage. There is not a similar statutory provision requiring insurance coverage for recipients of aid after other disaster events, including wildfires.

Prudently, the Stafford Act also includes programs that fund risk reduction, including the Hazard Mitigation Grant Program and the Pre-Disaster Mitigation Program. The first provides post-disaster funds for the deployment of hazard-mitigation measures during rebuilding, although implementation has been uneven. The second provides grant funding to state, local, and tribal governments for mitigation measures.

The upshot from the combination of the NFIP and Stafford programs is that federal taxpayers will bear most of the costs of disaster recovery (and perhaps an even greater share for non-flood disasters such as fires). In this context, there is an argument that the NFIP program, while flawed, is better than nothing; if taxpayers inevitably foot the bill, it is better for all property owners to pay something in insurance premiums, even if the premiums do not fully reflect risks. In addition, once property owners purchase insurance, the pricing of insurance can be used to create incentives for risk mitigation. NFIP, for example, offers to reduce premiums based upon risk-reduction measures taken at the community level, such as restoring natural functions of floodplains. Because Stafford Relief for other disasters is not tied to similar insurance requirements, the ability of federal policymakers to encourage hazard mitigation for natural disaster risks such as fires is limited to attempting to incentivize action through federal grant programs for hazard-risk reduction.

Several states have also intervened to provide social insurance programs for natural disasters. The most notable of these is in Florida, where the state both participates in the primary insurance market and acts as a reinsurer to provide coverage for hurricane damage. Florida participates in the primary insurance market through a company called Citizens Property Insurance Corporation, created by the legislature in 2002 to provide property insurance to Floridians who are unable to find coverage on the private market. While Citizens is financed by policyholder premiums, catastrophic losses that exceed premiums are financed by assessments that Citizens is statutorily required to levy on all holders of insurance policies in the state — including those who do not purchase their insurance from Citizens — until the debt is eliminated.

Florida is also involved in the reinsurance market through the Florida Hurricane Catastrophe Fund, which provides a way for insurers to increase their capacity by transferring some of their assumed risk to a third party. Florida’s reinsurance intervention occurred in the wake of Hurricane Andrew in 1992, when many private insurers informed the state of their intent to exit the market. Because it wanted to ensure the continued availability of reasonably priced coverage, the legislature authorized the state to create its own reinsurance program, the Florida Hurricane Catastrophe Fund. Relying on its authority to raise funds through the issuance of pre- and post-event bonds, the fund accumulated a balance of $14.9 billion between the years 2006 and 2016, when the state experienced minimal storm activity. This changed in 2017, when the fund reported estimated losses of $2.04 billion from Hurricane Irma. While a significant balance remains even after this loss, the fund warns that it might need to resort to emergency assessments or post-event bonding if a storm of sufficient size were to hit.

The examples outlined above highlight not only the precarious financial state of social insurance programs, a condition that will only worsen if climate change increases the frequency or severity of extreme weather events, but also the failure of these programs to incentivize risk-mitigation behavior. If insured losses increase in the future, the ability of heavily subsidized social insurance programs to maintain solvency will be in question. In addition, policymakers’ decision to provide heavily discounted insurance rates — which would not be available in a competitively priced market — has undermined the function that insurance rates play in signaling the extent of hazard exposure to property owners. These social insurance programs thus create the risk of increasing societal exposure to natural hazards. Therefore, to the extent that social insurance programs are to be maintained, they must be reexamined and modified to meet the twin goals of risk spreading and promoting hazard mitigation.

The need to revisit our social insurance mechanisms is well illustrated by the unsustainable approach that some California courts have taken to wildfire risk. Wildfire damage in the state has increased over recent decades, especially from fires during the Santa Ana season, when strong, dry winds cause fires to spread more quickly. The wildfires in Northern California last October were some of the most destructive in state history, killing 44 people and destroying an estimated 8,900 structures.

In response to fires in years past, California property owners have sought to recoup some of their damages by bringing lawsuits against power companies on the grounds that electric transmission and distribution lines and other equipment owned by the utilities played a role in starting the fires at issue. For example, in Barham v. Southern California Edison Company, homeowners argued that power lines ignited the wildfire that destroyed their homes. Finding in favor of such plaintiffs, California courts have held power companies liable for wildfire damages, effectively creating a social insurance regime in which power companies provide additional insurance to homeowners who face losses.

California is unique in this approach. The state constitution requires that the government pay an owner fair compensation whenever property is taken or damaged for public use under the state’s eminent domain power. Normally, this process occurs when a public entity initiates an eminent domain proceeding. But when the public entity fails to do so, property owners may seek compensation by bringing an inverse condemnation action. As the California Supreme Court has explained, “The underlying purpose of . . . inverse . . . condemnation is to distribute throughout the community the loss inflicted upon the individual by . . . public improvements: to socialize the burden . . . that should be assumed by society.”

The application of this doctrine has been expanded to hold privately owned utilities liable when they damage private property while providing a public service. Courts have explained that while inverse condemnation liability applies only to public entities, privately owned utilities may be held liable as public entities because such utilities enjoy a state-protected monopoly to provide a public service and are afforded some eminent domain authority to construct the power lines that serve customers. As the Barham court explained with respect to the wildfire case against SCE, public utilities are therefore “more akin to a governmental entity.” In the court’s view, because utilities provide “services and functions . . . of vital public interest,” the “loss-spreading rationale” that drives inverse condemnation still applies. “The fundamental policy” driving these decisions “is to spread among the benefiting community any burden disproportionately borne by a member of that community.”

In effect, the state courts have grabbed onto the utility model as a way to spread risk, akin to an insurance program for fire damages. Critical to this policy rationale, however, is the assumption by courts that the loss will successfully be redistributed among the public. In inverse condemnation cases against governmental entities, this makes sense because costs incurred by such entities can be socialized through taxes, thereby distributing the costs across the public.

But this assumption breaks down when the entity incurring the costs is a privately owned entity that does not have the ability to tax. While a utility could theoretically spread costs across its customer base by raising charges, those prices are closely regulated by California’s Public Utilities Commission. As noted by a state appellate court in Pacific Bell v. Southern California Edison — another inverse condemnation case involving a privately owned utility — this tight regulatory control is in large part due to the fact that utilities enjoy state-protected monopolies over services that are essential to the public.

In return, the state regulates the prices that utilities charge their customers. Pursuant to the California Public Utilities Code, the Public Utilities Commission therefore ensures that all charges for services provided by a public utility are “just and reasonable.” To do so, the commission hears general rate case proceedings that determine the costs of operating, maintaining, and financing the infrastructure used to run the utility. Using these numbers, it authorizes the total amount of revenue a utility can collect in order to cover costs as well as a pre-approved profit.
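In stylized form, that rate-case arithmetic reduces to a revenue requirement equal to operating costs plus an allowed return on the utility’s invested capital, or “rate base.” A sketch with hypothetical numbers:

```python
# Stylized rate-case arithmetic (all numbers hypothetical). The commission
# authorizes revenue equal to operating costs plus a pre-approved return on
# the utility's rate base of invested capital.
operating_costs = 800_000_000    # $/yr: O&M, fuel, depreciation, taxes
rate_base = 5_000_000_000        # $ of plant and equipment in service
allowed_return = 0.075           # pre-approved rate of return

revenue_requirement = operating_costs + rate_base * allowed_return
print(f"authorized revenue: ${revenue_requirement / 1e9:.3f}B per year")  # $1.175B
# Rates are set to collect this amount; an unexpected wildfire judgment
# enters rates only if the commission later approves recovery.
```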

Because electricity rates are predetermined via rate cases, utilities cannot simply adjust their rates when they experience unexpected costs. Rather, they must apply for approval to recover those costs through formal mechanisms established by the commission. Such applications, and the commission’s decisions to grant or deny them, are highly fact-specific and do not guarantee recovery.

It is here that the rationale adopted by the courts in Barham and Pacific Bell for holding utilities liable under inverse condemnation — and the de facto supplemental fire insurance regime — starts to break down. Those courts argued that utilities should be held liable under inverse condemnation theory precisely because doing so would ensure that the risks posed by the utilities’ services would be “spread among the benefitting community.” But, as SCE argued in Pacific Bell, this loss-sharing rationale does not make sense when applied to investor-owned utilities because utilities cannot reliably spread losses among the broader public without guaranteed cost-recovery.

Notably, the Pacific Bell court rejected that argument, stating that it was unpersuaded by SCE’s “implication that the commission would not allow Edison [rate] adjustments to pass on damages liability” from the case. This holding is striking because it indicates that the courts misunderstand key realities of ratemaking law, which simply does not guarantee recovery of unexpected costs. Indeed, last year the commission denied an application by San Diego Gas & Electric to recover costs incurred from similar wildfire litigation. This decision directly negates the Pacific Bell and Barham courts’ assumption that power companies can reliably spread risks of unexpected losses associated with their services across their customer base and, as a result, undermines the doctrinal rationale for applying inverse condemnation liability to privately owned utilities.

This raises the question of why California would turn to the utilities rather than the insurance markets as a mechanism to provide social insurance for fire losses. California does have a legislatively created insurer of last resort for fires, the California Fair Access to Insurance Requirements (FAIR) Plan. Under the California Insurance Code, all insurers licensed to write property and casualty insurance must participate in the FAIR Plan. The FAIR Plan provides fire insurance policies to homeowners who cannot obtain them elsewhere, and participating insurers bear the losses and expenses of the plan in proportion to their share of the market. However, while insurers are required to provide coverage, homeowners are not legally required to buy the policies. And even homeowners who do have fire insurance may well find themselves under-insured in the event of a catastrophic loss if they have not purchased additional fire coverage. The courts’ approach to inverse condemnation in these cases exacerbates under-insurance concerns and fails to incentivize risk mitigation, such as zoning restrictions for fire-prone areas.

The literature establishes that most property owners will tend to under-insure for natural disaster risks and fail to take mitigation measures to reduce their hazard exposure. It is not clear why the policy solution to these behavioral economic problems should be to shift the cost of losses to utilities — especially if they are unable to socialize the costs of those losses through rate recovery. As noted above, this is particularly problematic because losses from wildfires, as is true with all natural disasters, are widespread and occur at the same time. Further, unlike an insurance underwriter or a government relief program, utilities providing social insurance through inverse condemnation proceedings have no ability to incentivize investments that will mitigate future wildfire risks. As it stands, California’s current approach to wildfire losses is unsustainable and fails to achieve either the risk spreading or hazard mitigation goals of a social insurance program.

As a society, we have been politically committed to the idea of spreading the costs of disaster losses. Continuing this commitment in the face of projected increases in such losses due to climate change will require that social insurance programs be restructured to ensure that while they spread risk, they also create sufficient incentives to engage in hazard-mitigation behavior. This can be accomplished both through rate reductions in subsidized insurance programs and through conditioning receipt of disaster relief for rebuilding on the adoption of hazard-mitigation measures. To be most effective, social insurance programs should be administered in concert with zoning requirements to ensure that taxpayer funds are not used in a manner that will increase future losses. As currently structured, most programs create a significant moral hazard by allowing property owners to rebuild in disaster areas with full knowledge that others will bear the cost should they lose their home again. On this dimension, California’s current use of inverse condemnation looks particularly inadequate, as neither the utilities nor the state have the ability to condition how any awarded compensation can be spent — meaning there is no way to encourage mitigation.

There are several ways that the programs discussed in this article could be modified to encourage hazard mitigation. For federal and state insurance programs, premiums should accurately reflect risks in order to provide stronger signals to policyholders regarding the hazards of building in certain locations. NFIP, for example, should rely on up-to-date flood zone maps when setting rates. Social insurance programs should also encourage hazard mitigation by offering discounted premiums for such measures, as NFIP does. In addition, state programs should be structured to take full advantage of federal disaster mitigation funds. For example, California’s legislature might consider creating a comprehensive social insurance program that takes advantage of the federal Pre-Disaster Mitigation Program, which provides grant funding for wildfire and utility-line mitigation measures.

Social insurance programs also need to ensure that there is a continued source of funds that can be drawn upon in the event of disaster. As noted, this is a major concern for NFIP, which has put FEMA in debt. The obvious solution is to raise premiums to reflect the actual risk of natural disasters. However, to the extent that this is politically unpalatable, legislators will have to come up with some other funding mechanism. Florida, for example, chose to address this problem by giving Citizens the authority to levy assessments on policyholders in order to pay for any losses that exceed premiums.

Finally, programs should distribute costs in a manner that reflects the likelihood of increasing losses in the future. The current federal system seems designed to provide relief for truly once-in-a-lifetime disaster events that communities could neither have foreseen nor prepared for. However, NOAA reports that in 2016, the United States was subject to 16 separate disaster events with damages in excess of $1 billion. Therefore, any reexamination of social insurance programs should ask whether these significant costs are most appropriately spread over the federal tax base as a whole or whether social insurance for certain types of risks should be spread across a smaller subsection of property owners who share in that risk, as the California courts seem to have intended. TEF


Delivering Climate Change Progress
Dan Esty - Yale University

With all the challenges that humanity faces, there are huge opportunities as well.

Which is not to say that the environmental news isn’t bleak. When the world community met in Bonn last November to advance the Paris Agreement on climate change, Washington signaled it would be leaving the 2015 accord and abandoning the Clean Power Plan, the key domestic program for achieving America’s commitment to reduce its greenhouse gas emissions by 26-28 percent over the next dozen years. But despite the new administration’s actions, the momentum behind America’s Paris pledge remains strong — and emissions reductions continue across the world, as 190-plus other nations move to implement the agreement.

On the downside, we face profound challenges not only at the national level with the new administration, where the pullback from environmental regulation has been well documented, but also at the state level, where budget crises are taking a toll. For example, the Connecticut Department of Energy and Environmental Protection (which I led from 2011 to 2014) faces dramatic budget cuts and staff reductions. And the CT Green Bank, which I helped to launch — bringing Republicans and Democrats together to use limited clean energy resources to leverage private capital — faces budget challenges too.

These pressures require us to pursue our environmental agenda in new and better ways. For instance, one of the most profound lessons of ecological science over the last fifty years is that we must take a systems approach to environmental problems. Air, water, waste, and land use are all connected. Issues at the global, national, state, and local levels are all connected. Thus, we need to use the current crisis to shape a 21st century policy strategy that is more integrated and better captures the opportunities of systems thinking.

The logic of connectedness extends to the political domain. Yet, our elected officials appear more deeply divided than ever. Clean energy can move on a bipartisan basis, but it takes hard work, it takes compromise, and it takes doing things in better ways, not simply reiterating the same old arguments that have kept people apart for so very long.

While a systems approach can and should be deployed across the environmental agenda, climate change looms as the central — even existential — challenge of our times, demanding worldwide collaboration and, at the same time, transformative change toward a clean energy future at the local, state, and national levels. As a young EPA official, I helped to negotiate the 1992 Framework Convention on Climate Change. Maurice Strong, the Canadian diplomat and businessman who chaired the 1992 Rio Earth Summit at which the convention was launched, took me aside and said, “Dan, you’ve got to remember that when we gather all these presidents and prime ministers, only two outcomes are possible: success and real success.” Sadly, we have not delivered real success over the ensuing 25 years. Emissions have continued to rise, and we have not transformed the energy foundation for our planet.

The 1992 climate treaty was top-down, reflecting the prevailing wisdom that national governments were the way to deliver transformative change and broad-based outcomes. In contrast, the Paris Agreement shifts toward bottom-up strategies that recognize the reality that presidents and prime ministers don’t actually control most of the decisions that determine the carbon footprints of their societies. Those decisions — about urban development, transportation, housing, and economic activity — fall more directly to mayors, governors, CEOs, university presidents, and the leaders of community organizations. The Paris accord, with its more decentralized structure, reflects the fact that they are the ones who make the actual decisions that will determine whether our society decarbonizes.

The importance of this shift in focus cannot be overstated. Since the 1648 Treaty of Westphalia, national governments have been in charge. But what was the right structure for solving the religious wars of Europe in the 17th century might not be right for solving 21st century environmental problems. We are not one nation with one leader in one place. We have a much richer tapestry of political and societal leadership. California Governor Jerry Brown, for instance, leads a sovereign state with great potential to deliver greenhouse gas emissions reductions. Likewise, dozens of other governors, mayors, and corporate leaders have committed their states, cities, and companies to climate action — thus keeping momentum behind the Paris Agreement.

I argue that this new broader leadership framework should be formally acknowledged and celebrated. In this regard, I would like to see the Paris Agreement opened to signature by mayors, governors, CEOs, and others who are steering society toward a transformed energy future. This same logic would apply, I might add, to all future global agreements where national governments alone cannot deliver successful outcomes.

More generally, the game plan of the 1992 framework convention, reflecting 20th century thinking, centered on targets and timetables for emissions reductions. I call this “the lawyer’s mistake,” since those with legal training often think that if you pass a law, write regulations, sign a treaty, or issue rules, people will follow them. No one in business would have made that mistake. They would regard the treaty as a mission statement or maybe a business plan, but one lacking a serious implementation strategy. The Paris Agreement gets beyond this error, shifting people’s focus from mere goals to incentives to deliver solutions — particularly new strategies for financing investments in energy efficiency and renewable power infrastructure. And it moves away from a command-and-control model that demands conformity to a single path forward, toward an approach that asks each country to say what it can do and how it will do it.

To that end, there is no better incentive to reduce emissions and expand the deployment of wind, solar, and other renewable power sources than to make people pay for the harm they cause — thus steering them toward clean energy options. In this spirit, many countries (and companies — and even universities) have begun to put carbon charges in place. Using price signals stands in contrast with the 20th century strategy of regulatory mandates, which require the government to figure out all the answers — and then tell business what to do. But in the 21st century we face a broad-based problem where everyone’s behavior has to change, not just large businesses but also the myriad small businesses and individuals.

Another contrast with the 20th century is that we now live in the Information Age and have a variety of Big Data and communications tools that did not exist in the past. We can track harms with much greater precision and simultaneously gauge whether our policy interventions are working. Thus, we have the capacity today to measure performance at the national, state, local, and company scales — and to identify leaders, laggards, and best practices. The Paris Agreement reflects this new data opportunity and calls for a “stocktake” every five years to see if the actions being undertaken are delivering at the pace and scale required to mitigate climate change.

With the Paris Agreement, I believe we have turned a corner — and the move toward a decarbonized future is now inevitable. But let me tell you the bad news. The pace of change can be affected by political leadership. President Trump’s push to withdraw the Clean Power Plan will have an impact. Likewise, the administration’s budget cuts and other regulatory changes (including the plan to pull back from using a $40-per-ton “social cost of carbon” in regulatory analyses) will slow the shift toward a clean energy future. 

But it will not stop it. Coal is not coming back. Market forces ensure that fact regardless of regulatory changes. And innovation in support of a transformed energy future continues around the world — with or without the United States. While I disagree with much of what the administration is doing, it must be said that the Clean Air Act isn’t the best vehicle for addressing climate change. Simply put, it doesn’t provide a ready way to put a price on emissions. In this regard, I would prefer a carbon charge that begins at $5 per ton of carbon dioxide or equivalent and escalates by $5 every year until it reaches $100 a ton at year 20. We know that carbon pricing works. In the Northeast, we already pay a $5-per-ton charge through the Regional Greenhouse Gas Initiative — and get top-tier clean energy results.
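The proposed schedule is simple arithmetic: the charge in year t is $5 times t, reaching $100 per ton in year 20. A quick sketch:

```python
# Esty's proposed escalating carbon charge: $5/ton in year 1, rising $5 each
# year, reaching $100/ton in year 20.
for year in (1, 5, 10, 20):
    print(f"year {year:2d}: ${5 * year}/ton CO2-equivalent")
# year  1: $5/ton ... year 10: $50/ton ... year 20: $100/ton
```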

The Clean Power Plan, by contrast, emerged under the old 20th century regulatory model because there was no other possibility available. Congress had signaled that it would not pass comprehensive climate change legislation. So we ended up with a primitive tool. Within the constraints of the Clean Air Act, the CPP offers considerable flexibility. Each state has been given a target for reducing emissions. Not each power plant, each state. And which states got the hardest assignments? Those that had already done the most. As the commissioner of Connecticut’s Department of Energy and Environmental Protection at the time, I was furious about this structure, which assigned more lax targets to the states that had dragged their feet on climate change. I then realized the separate standards were politically shrewd. When the challengers from the foot-dragging states go to court, the judges are going to look at them and say, “Really? When other states are already 90 percent decarbonized, why can’t you take the first easy steps?”

While some might see the current political challenges as dire, I think action on climate change will continue apace — even in the United States. For one thing, President Trump is finding out that he cannot erase the CPP with the stroke of a pen. Our law says that once a regulation has been finalized, it must be taken down through the same notice-and-comment process. American administrative law further requires that an agency act in a manner that is neither arbitrary nor capricious. In re-examining the Clean Power Plan, EPA must make a decision based on science and facts. There have to be hearings, citations of relevant studies, and careful review of the administrative record before a judgment can be made that a different policy would better achieve the statutory goals of avoiding emissions that endanger public health and welfare. In addition, Congress and the courts have roles to play. As the administration has already seen, these co-equal branches will not hesitate to act.

Just as governors and mayors are stepping up to the issue of climate change, so too are corporate leaders. In the wake of Trump’s pullback on climate change, the business community has not, by and large, walked back from its commitments to reduce emissions. To the contrary, nearly 2,000 companies have joined the We Are Still In climate coalition. Citizens will also be critical in delivering a sustainable future. People are putting their environmental values into action as consumers — signaling their interest in sustainability by buying green products, such as electric vehicles. Likewise, an ever-wider swath of investors are saying, “I want the companies in my portfolio to align with my values” and therefore are asking for more information on the environmental, social, and governance performance of companies, including details on corporate climate change action plans. I see this trend as continuing, with more and more of us factoring carbon footprints into all kinds of decisions, including how we do our business, how we lead our lives, how we raise our children, and how we engage with our communities.

We have entered an era of sustainability. Not everyone yet recognizes it, but a growing number of people, institutions, and businesses have come to accept that we face a sustainability imperative. In this regard, we have to gauge progress not just in terms of economic results but also environmental and social outcomes. Multiple goals that entail inevitable tradeoffs make policymaking more difficult. With this broader perspective in mind, we can achieve real success on climate change and other challenges, but it will require transformation of our environmental policies — and our politics. TEF

TESTIMONY ❧ No baseball team picks players in 2018 the way it did in 1978. Nor does any business do marketing today the same way it did in decades past. Environmental protection, however, remains stuck in a top-down 20th century regulatory model. But new tools and strategies, including carbon pricing, could unleash a sustainability revolution that drives innovation — and delivers a transformed energy future.

Revenue Use Matters
Donald Goldberg - Climate Law & Policy Project
Dave Grossman - Green Light Group Consulting

Pricing carbon and using some or all of the proceeds to provide strategic, cost-effective subsidies could achieve deeper, faster emissions cuts than a conventional price alone — without increasing costs to industry or consumers.

Donald Goldberg and Dave Grossman

Donald Goldberg is the executive director of Climate Law & Policy Project. Dave Grossman is principal at Green Light Group Consulting.

We are not reducing greenhouse gas emissions quickly enough. Sure, renewable energy is proliferating, electric vehicles are starting to gain market share, and countless innovations and policies are being pursued all over the world to reduce the amount of carbon released to the atmosphere. Yet we remain far from the needed decarbonization trajectory. After remaining flat for three straight years, worldwide emissions ticked up about 2 percent in 2017 and are predicted to continue rising in 2018, according to a report by the Global Carbon Project. The U.S. Energy Information Administration has projected that world energy-related CO2 emissions will rise 16 percent between 2015 and 2040. The UN Environment Program, in its latest “emissions gap” report, found that the existing national pledges under the Paris Agreement are only a third of what is needed by 2030 to meet internationally agreed temperature targets.

At the same time, climate science seems to paint a bleaker picture with every new study. In November, the U.S. Global Change Research Program released its part of the National Climate Assessment, finding that climate change, driven by human activities, is causing global and U.S. temperatures to rise, heat waves to become more frequent, the incidence of wildfires to increase, the frequency and intensity of heavy rainfall to grow, ocean temperatures to warm, and sea levels to rise. All of that is occurring with only about 1°C of warming. To have a two-thirds chance of limiting warming to 2°C by the end of the century, the Intergovernmental Panel on Climate Change has concluded that global greenhouse gas emissions must be net zero by the latter half of the century — and significant amounts of negative emissions will probably be needed thereafter. Humanity is not even close to being on pace to achieve that.

In the United States, the Trump administration has abdicated leadership on climate change and is attempting to roll back climate-related regulations. Many subnational actors — states, cities, businesses, universities — have stepped up to assert climate leadership, pledging to meet the commitments the United States made under the Paris Agreement. This leadership is most welcome, but achieving our Paris commitments, much less achieving true deep decarbonization, will be a heavy lift. There is a suite of existing policies in U.S. states (and around the world) to address climate change, but given the scale of reductions needed, we need to boost these policies significantly to increase their emission-reducing power.

There is a growing consensus that carbon pricing is one of the key policies needed to achieve meaningful emission reductions. Putting a price on carbon, whether via a tax or a cap-and-trade mechanism, sends an economic signal that the atmosphere is no longer a free dumping ground for greenhouse gas pollution, spurring emission reductions and clean energy deployment. Some of the states leading the way on climate change, such as California and the northeastern and mid-Atlantic states in the Regional Greenhouse Gas Initiative, already have carbon pricing policies in place.

Carbon pricing alone, however, is unlikely to get us to the levels of emission reductions needed. Analyses of carbon prices around the world have found that most are far below estimates of the social cost of carbon (a measure of the cost of the damages caused by emitting one ton of carbon dioxide). There is a way, though, to make carbon pricing policies much more powerful drivers of reductions. Here’s the key: how the revenues are used can matter just as much as the price itself.

The uses of carbon revenues are starting to get more attention. Disagreement over revenue use may have been the primary factor that torpedoed the Washington state carbon tax referendum in 2016. Generally speaking, there are three broad categories of revenue use that are being implemented or at least discussed. The first is to support activities that bear some relation to climate change, such as achieving additional emission reductions and adapting to climate impacts, or that mitigate negative effects of the climate policy, such as offsetting the regressive effects of a carbon price on the poor. Another approach is to promote economic efficiency through a revenue-neutral tax swap that would replace economically undesirable taxes, such as business or payroll taxes. The third route is to provide a “dividend” to all citizens, whether based on the premise that the atmosphere belongs equally to every individual or based on the political calculus of building public support.

While all of these uses of revenue could be socially beneficial, and each has its supporters, there is a strong argument to be made for pursuing the first approach and devoting some meaningful portion of revenues to achieving additional emission reductions. First, as just noted, most carbon pricing policies are not that robust; political constraints create a significant hurdle to implementing carbon pricing policies at levels sufficient to achieve the reductions required. Second, there are some needed reductions that a carbon price will be unable to reach (e.g., some energy efficiency measures), requiring other types of solutions that carbon revenues could help fund. Third, to achieve the global targets of keeping warming well below 2°C (and below 1.5°C if possible), the reduction trajectory has to be so steep that it seems imprudent to give away resources that could be used to help. Finally, even if a cap or tax could be enacted that would achieve some jurisdictions’ share of the 1.5°C or 2°C targets, the fact that we are already experiencing significant adverse impacts at about 1°C of warming suggests that those targets do not necessarily represent what is “safe” — just what would provide a reasonable chance of avoiding the worst impacts of climate change. In addition, emissions in other jurisdictions, especially in the developing world, will not be declining on a trajectory to meet global climate targets, so jurisdictions leading the way will have to go above and beyond.

Using carbon revenues to achieve additional reductions likely would have strong public support. Several polls over the past few years have shown that the preferred use of carbon revenues is to support the development of clean energy. For instance, a 2016 Yale poll found that 81 percent of registered voters support using carbon tax revenues to support the development of clean energy, more than for any other use; the least popular uses of tax revenues were reducing corporate taxes (26 percent), reducing payroll taxes (46 percent), and returning the money as dividends to households (48 percent). Similarly, a 2014 National Surveys on Energy and Environment poll found that a carbon tax with revenues used to fund research and development for renewable energy programs received 60 percent support, including support from majorities of Democrats, Republicans, and independents — and greater support than rebate checks or deficit reduction.

The RGGI states and California already direct most of the revenues from emission allowance auctions toward climate-related purposes, investing both in reducing emissions (through renewable energy and energy efficiency) and in moderating the economic effects of carbon prices on their citizens. At present, the RGGI states and California simply allocate the proceeds from emission allowance auctions into particular programs, many of which seem to be chosen in a rather piecemeal fashion. There does not seem to be a disciplined effort to tailor the spending of auction revenues in order to achieve both the biggest emission-reducing bang for the buck and reductions beyond what their caps alone would achieve. We need our leading states to do even better.

A price-and-subsidy system is one way to do better. This approach not only puts a price on CO2 and possibly other greenhouse gas emissions to create a financial disincentive to emit those gases, but also uses the revenues generated to provide targeted subsidies that cost-effectively encourage investment in additional reductions of emissions — reductions well beyond those that would have been achieved by the carbon tax or cap itself.

The basics of the price-and-subsidy approach are pretty straightforward. The first step, clearly, is having emitters pay for their emissions, whether via a carbon tax or allowance auctions in a cap-and-trade system. Some portion of the proceeds is then pooled in a fund and used to subsidize additional reductions. If revenues are to be directed toward achieving additional cuts, it makes sense to do so cost-effectively, which can be achieved by using mechanisms, such as reverse auctions, that “buy” additional reductions, starting with the cheapest ones beyond what the price signal or cap alone would achieve. It also makes sense to limit subsidies to the difference between the carbon price and the abatement cost of the reduction, to avoid offering excessive subsidies that duplicate the incentive of the price. In addition, reductions should be paid for only as they occur, rather than offering up-front, multi-year payments to projects. Combined, these cost-effective features maximize the amount of additional reductions that can be achieved with the pooled revenues.

Let’s make this even clearer with a simple example. Imagine a jurisdiction enacts a tax of $20 per ton of CO2. Any entity that can reduce emissions for less than that cost will do so, to avoid having to pay the tax. That is the effect of the price signal. If a reduction costs $21 a ton, however, the emitter’s incentive is to pay the tax and save a dollar. If, instead, that emitter is given a subsidy of $1 per ton, and emitters with $22-per-ton reductions are given subsidies of $2 per ton, then those reductions also would get made. The cost to emitters would be the same — $20 per ton — but instead of paying it as a tax, they would spend it, in concert with the subsidy, to achieve reductions. (Giving these emitters subsidies of, say, $5 a ton would be wasteful.) Any emitter or project developer could submit a bid for a way of achieving reductions. The subsidies would go first to the cheapest reductions beyond the price signal, working up the reduction cost curve until all of the designated carbon revenues have been spent.
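To make the mechanics concrete, the following minimal Python sketch implements the reverse-auction allocation just described; the bids, budget, and function name are hypothetical illustrations rather than any real program’s design.

# Minimal sketch of the reverse-auction subsidy allocation described above.
# All bids and the budget are hypothetical illustrations.

CARBON_TAX = 20.0  # $/ton, the prevailing price signal

# Each bid: (abatement cost in $/ton, tons of reduction offered)
bids = [(21.0, 1000), (22.0, 800), (25.0, 500), (30.0, 400)]

def allocate_subsidies(bids, tax, budget):
    """Buy the cheapest reductions beyond the price signal first.

    The per-ton subsidy is capped at (abatement cost - tax), so emitters
    still bear the tax-equivalent cost themselves and no bid is overpaid.
    """
    awards = []
    for cost, tons in sorted(bids):  # cheapest abatement first
        if cost <= tax:
            continue  # the price signal alone already drives these reductions
        rate = cost - tax  # just enough subsidy to tip the emitter's decision
        tons_bought = min(tons, budget / rate)
        if tons_bought <= 0:
            break
        awards.append((cost, tons_bought, rate))
        budget -= tons_bought * rate
    return awards

for cost, tons, rate in allocate_subsidies(bids, CARBON_TAX, budget=5000.0):
    print(f"${cost:.0f}/ton bid: {tons:.0f} tons subsidized at ${rate:.0f}/ton")

Note that the cap on the subsidy rate is also what phases bids out as the tax rises: once the tax exceeds a bid’s abatement cost, the bid falls into the cost <= tax branch and receives nothing, as described next.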

Over time, as the carbon tax rises, some activities that had received subsidies would no longer qualify. For instance, the emitters with the $22-per-ton reductions would no longer receive subsidies once the tax rises higher than that level, as the price signal alone should then drive those reductions. The risk of receiving smaller or no subsidies in later years as the tax level rises — and therefore having to bear more of the reduction costs themselves — should give emitters incentive to use the subsidies to make their reductions early. Reductions made earlier mean less greenhouse gas added to the atmosphere.

None of these policy mechanisms are novel in and of themselves. Carbon prices are being implemented in many jurisdictions. Reverse auctions are already used to purchase renewable energy, energy efficiency, and emissions reductions. Subsidies to support clean energy, energy efficiency, and other ways of reducing greenhouse gas emissions are also common. What the price-and-subsidy approach does is to link up these elements into a single, systematic, turbocharged whole. Carbon prices send out the signal to reduce emissions, and reverse auctions for subsidies amplify that signal, increasing the incentive to abate and, therefore, the scale and rate of emission reductions.

Some simplified modeling — such as using a linear marginal abatement cost curve — can make clear the potential power of using carbon revenues to accelerate reductions under a price-and-subsidy approach. First, let’s assume that all such revenues are directed toward achieving additional reductions. Modeling suggests that a price-and-subsidy approach could boost a conventional carbon pricing policy that would achieve a 20 percent reduction to one that theoretically could achieve a 60 percent reduction — without increasing costs for emitters or consumers. Relatively stringent reduction targets could become even more ambitious: a 40 percent reduction could theoretically become an 80 percent reduction, a 60 percent reduction could become a 92 percent reduction, and so on. These numbers, of course, are purely theoretical. Reality is not a simplified model. Technology, reliability, or other constraints may limit the volume of additional reductions achievable during a given period. Some projects take time to get up and running. Still, the potential of the approach is clear.

Few jurisdictions are likely to devote all the revenues generated by a carbon tax or cap-and-trade program to achieving additional reductions, as there are other political, social, and climate realities that could benefit from carbon revenues. Some percentage probably should go to offset the regressive effects of the carbon price on the poor. Some could go to help coal communities transition. Some may have to go to tax relief, dividends, or other areas needed to garner political support. Some revenues probably should be used to promote adaptation and resilience to climate impacts. The need for urgent climate action, however, suggests that a meaningful portion of the revenues should go toward cost-effectively achieving additional near-term reductions.

Using even a relatively small percentage of the revenues could give a significant boost to reductions. Again, simplified modeling shows the potential power of this approach. For example, given a price that would achieve a 20 percent reduction alone, it is theoretically possible to boost reductions to 27 percent using only a tenth of the revenues, to 35 percent using a quarter of the revenues, or to 45 percent using half of the revenues.
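For readers who want to check the arithmetic, one simple construction that reproduces every figure above is a linear marginal abatement cost curve in which the revenue fund effectively bears the full abatement cost of each additional ton (the subsidy gap plus the tax forgone on subsidized tons). The short Python sketch below is offered only as a plausible reconstruction of that model, not as the authors’ actual calculation.

from math import sqrt

def boosted_reduction(x0: float, f: float = 1.0) -> float:
    """Total reduction x1 when a fraction f of carbon revenue funds subsidies.

    Assumes MAC(x) = a*x with a normalized to 1, so a price p abates
    x0 = p on its own and raises revenue p*(1 - x0). Spending a fraction f
    of that revenue on the full cost of further abatement gives
        f * x0 * (1 - x0) = (x1**2 - x0**2) / 2.
    """
    return sqrt(x0**2 + 2 * f * x0 * (1 - x0))

# Full-revenue figures cited above: 20% -> 60%, 40% -> 80%, 60% -> ~92%
for x0 in (0.2, 0.4, 0.6):
    print(f"{x0:.0%} -> {boosted_reduction(x0):.0%}")

# Partial-revenue figures: ~27%, ~35%, ~45% from a 20% baseline
for f in (0.1, 0.25, 0.5):
    print(f"f = {f:.0%}: {boosted_reduction(0.2, f):.0%}")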

Jurisdictions implementing a price-and-subsidy approach will have to determine which types of additional reductions qualify for the reverse auction subsidies. The price-and-subsidy approach presented here works best when the additional reductions are ones already subject to the price, which enables the subsidy to provide emitters with enough of an incentive to take further action. (Carbon revenues could, of course, also be used to support reductions of emissions not covered by the price, but that is outside the scope of this proposal.)

Price-and-subsidy could be technology-neutral, designed to simply accelerate the next-cheapest reductions available beyond what the cap or tax would achieve. Constraints could also be implemented to support additional objectives. For instance, to address environmental justice concerns, priority could be given to bids to achieve additional reductions in low-income communities or to achieve reductions in local air pollution as well as in greenhouse gases. In addition, there should probably be a constraint to prevent using revenues in ways that achieve cheap, near-term reductions but that lock in technologies or infrastructure incompatible with deep decarbonization pathways.

While price-and-subsidy can work with either a carbon tax or a cap-and-trade system, there is an extra step required for the latter. To ensure the reductions subsidized by the reverse auction are additional to what the cap alone would achieve, an allowance must be retired or otherwise removed from the system for each subsidized ton of reduction. Otherwise, excess allowances could be banked, or other emitters could use them instead of making reductions (which means the subsidized reductions would end up displacing reductions required by the cap instead of being additional). Assuming the universe of bidders for allowances is the same as the universe of bidders for subsidies, a jurisdiction could even have the allowance auction and the reverse auction rely on the same bids and only sell the allowances that are actually needed. Alternatively, and more simply, it probably would be sufficient to reduce the number of allowances sold in subsequent auctions to reflect the number of reductions that, to date, have been achieved by means of subsidies. Reducing allowance sales to account for prior subsidized reductions would allow a jurisdiction to ratchet its cap down further — and then continue to use allowance revenues to drive even more reductions.
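The simpler netting option just described reduces to a single line of bookkeeping: each auction sells the period’s cap minus any subsidized reductions not yet netted out of a prior sale. A minimal sketch, with hypothetical names and figures:

def allowances_to_sell(period_cap: int, subsidized_to_date: int, netted_so_far: int) -> int:
    """Shrink this period's allowance sale by subsidized reductions
    not yet accounted for, so those tons stay additional to the cap."""
    return max(period_cap - (subsidized_to_date - netted_so_far), 0)

# Example: a 1,000,000-ton cap with 80,000 tons of subsidized reductions
# to date, 30,000 of which were already netted out of earlier auctions.
print(allowances_to_sell(1_000_000, 80_000, 30_000))  # prints 950000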

The core idea is to accelerate reductions and to do so cost-effectively. A price-and-subsidy approach would enable governments to use carbon revenues to achieve deeper, faster emission cuts without increasing costs to emitters or consumers. Looked at another way, governments could achieve higher levels of reductions far more cheaply with a price-and-subsidy approach than with a conventional price alone. Even if it utilizes only a portion of revenues, a price-and-subsidy system can help jurisdictions dramatically accelerate their drive toward a zero-carbon future.

Given that the U.S. government is likely to remain actively hostile to efforts to fight climate change for the foreseeable future, leading states trying to ensure the country meets its Paris commitments — and goes even further to achieve deep decarbonization — could use a price-and-subsidy approach to take climate leadership. They should combine carbon prices with cost-effective subsidies to spur much larger, much faster emission reductions. True climate leadership should include using the money collected from a tax or an allowance system to get onto an emissions-reduction trajectory that is more commensurate with the urgency of the climate challenge. Revenue use matters. TEF

CENTERPIECE ❧ Pricing carbon and using some or all of the proceeds to provide strategic, cost-effective subsidies could achieve deeper, faster emissions cuts than a conventional price alone — without increasing costs to industry or consumers.

An Essential Strategy
Author
Alan Miller - Consultant
Consultant
Current Issue
Issue
2
An Essential Strategy

Achieving an energy economy based entirely on renewables quickly enough to meet the Paris goal of 2ºC is a high-risk strategy. Removing carbon dioxide from emissions waste streams and burying it is necessary — as may be extraction of the gas from the atmosphere.

Alan Miller

Alan Miller retired from the Climate Business Department at the International Finance Corporation in December 2013 and is now an independent consultant on climate finance and policy.

The United Nations Environment Program’s “Emissions Gap Report 2017,” an evaluation of current trends and country commitments, estimates that only about a third of the reductions necessary to realize the Paris Agreement’s temperature goals have been pledged. “The gap between the reductions needed and the national pledges made in Paris is alarmingly high,” the report concludes. Reductions must be achieved quickly due to the long atmospheric lifetime of carbon dioxide. Thus, “If the emissions gap is not closed by 2030, it is extremely unlikely that the goal of holding global warming to well below 2°C can still be reached.”

In response to this intimidating challenge, experts have devised many scenarios showing how the Paris goals might be achieved. While continued rapid growth in use of clean energy is central to all such analyses, it is insufficient without a concomitant, radical transition away from current technologies for burning fossil fuels, particularly coal — which generates about 40 percent of the world’s electricity and is a key source of energy for cement manufacturing, steel making, and other industrial processes. Yet dozens of new coal plants are currently under construction or planned around the world, and new coal mines have opened even in Germany. Enormous economic and political costs will be incurred if these plants are to be closed in the near future. So any realistic policy scenario must allow for the role of fossil fuels.

To offset this trend and ensure the Paris goals can be met, carbon removal and sequestration (or storage) will be essential. As the International Energy Agency stated in 2016, “There is no other technology solution that can significantly reduce emissions from the coal and gas power generation capacity that will remain a feature of the electricity mix for the foreseeable future.” Indeed, as we shall see, we may need to go even beyond that.

A discussion of carbon dioxide removal encompasses a wide range of methods, some natural, some sophisticated, expensive technologies. A National Academy of Sciences review of the subject in 2015 provides several helpful definitions and associated acronyms: “Carbon Dioxide Removal (CDR) refers broadly to efforts to remove carbon dioxide from the atmosphere, including land management strategies . . . and direct air capture and sequestration (DACS). CDR techniques complement carbon capture and sequestration (CCS) methods that primarily focus on reducing CO2 emissions from point sources such as fossil fuel power plants.” Where combined with the use of the captured gas, systems are referred to as CCUS.

Thus, carbon removal can refer to natural, biological processes such as afforestation (planting trees) as well as engineering methods for removing carbon from flue gas or even directly from the air. However, only CCS (as defined by the NAS) attempts to reduce emissions from power plants and thus to address the problem created by the widespread use of coal at its source. While CDR and DACS may prove necessary in the long run, the near-term policy focus needs to be on CCS.

Effective CCS requires effective storage as well as carbon capture; leakage into the atmosphere negates the benefits of reduced emissions and could create liability issues deterring investment. Currently, there is substantial opportunity for storage in depleted oil and gas fields. CO2 can also be used for enhanced coal bed methane recovery, the generation of natural gas from deep, unminable coal seams. The effective reduction of CO2 emissions from such methods depends on site conditions and appropriate engineering, but these methods provide both some reduction in emissions and an economic incentive for the further development of carbon removal. How effective is this technology? An IEA assessment concluded that “the volume of the CO2 injected and stored can significantly outweigh the emissions from combusting the oil that is subsequently produced.” In countries like India and China with low quality coal but very little natural gas, the combination may even be marginally economical without carbon taxes or other incentives.

In the longer term, however, the quantities of carbon dioxide captured will require storage in much larger amounts. According to an IPCC review, “Captured CO2 could be deliberately injected into the ocean at great depth, where most of it would remain isolated from the atmosphere for centuries.” The first real test of such storage was put into operation in Norway more than 20 years ago, but the environmental implications of doing so on a large scale require much more study. As the IPCC review states, “CO2 effects on marine organisms will have ecosystem consequences; however, no controlled ecosystem experiments have been performed in the deep ocean. . . . It is expected that ecosystem consequences will increase with increasing CO2 concentration, but no environmental thresholds have been identified. It is also presently unclear, how species and ecosystems would adapt to sustained, elevated CO2 levels.”

There are several flavors of carbon capture and storage. The three most advanced and demonstrated technologies for power plants are post-combustion CO2 capture, currently used at NRG Energy’s Petra Nova plant in Texas; oxy-combustion, demonstrated at some large pilot plants; and pre-combustion CO2 capture in combination with coal gasification. The third category has received the bulk of support to date, with mixed results as discussed below.

Despite its potential for reducing the largest source of carbon dioxide emissions, CCS of any stripe has to date received limited support from Washington or other national capitals. Only 17 projects are in operation worldwide (nine in the United States), with four more under construction. Most of these are linked to industrial facilities in which separation of CO2 is already part of the process. The current global CO2-capture capacity is only about one tenth of one percent of emissions. Rather than growing, the pipeline of new CCS projects has been shrinking, from 77 in 2010 to around 38 today, and, as of a November 2016 IEA report, no projects had progressed to construction since 2014.

The lack of enthusiasm for CCS despite growing evidence of the need is due to several factors. The coal industry has generally preferred to question climate science and the need to do anything. In the absence of carbon taxes or other climate policies, commercial interest in emissions capture has been largely limited to enhanced oil and gas production, in which CO2 is injected into rock formations to force out the fossil fuel. While coal-burning utilities are arguably second only to coal-mining companies in the need for technologies that could allow continued use of coal in energy production, they have had limited incentive to finance the costs of research and demonstration.

A few utilities have shown interest in the potential for combining CO2 capture with coal gasification, the third category above. The Kemper Project in Mississippi, undertaken by one of the country’s largest utilities, the Southern Company, was planned as a commercial-scale demonstration of the technology based on a very small pilot project. Construction was initiated in 2010, and after expenditures of over $7 billion (including a $133 million federal tax credit), the CCS features were abandoned, leaving only the possibility of operating as a natural gas plant. The project was effectively canceled last June by order of the state Public Service Commission, with assignment of financial responsibility still to be resolved.

Some environmentalists were quick to argue that Kemper’s failure illustrates why CCS “is a waste of our tax dollars and a false solution to the climate crisis,” as one put it. However, others pointed to mismanagement unrelated to the technology. This included equipment never tested at commercial scale; inadequate time spent on engineering; a rush to completion to avoid loss of tax credits; and the failure to learn from another gasification facility operating in Indiana. “The Kemper Project failure is not due to any problem with the equipment required to capture CO2,” argue NRDC lawyer David Hawkins and scientist George Peridas. “All of the problems are due to the system components upstream of the capture stage. . . . The conclusion is not that CCS is a flop.”

Indeed, the case for CCS rests on a combination of arguments from different vantage points. The “Emissions Gap Report” points to the availability of clean energy and land use strategies for emissions reductions but also recognizes the reality that coal use is not going away soon. A headline in the New York Times last July makes the point: “As Beijing Joins Climate Fight, Chinese Companies Build Coal Plants.” Even in relatively green Germany, political and economic realities dictated the opening of new coal mines, partly to compensate for the closing of nuclear power plants. Consequently, the UNEP report acknowledges the potential need for carbon capture technologies of any and all flavors despite their limited development to date.

As the Kemper Project illustrates, much of the current image of CCS is associated with capital-intensive systems with large land requirements and long construction timelines. There has also been recent media focus on what might be termed “moon shot” ideas for removing CO2 directly from the atmosphere. A company owned in part by Bill Gates and based in Canada, Carbon Engineering, is attempting to commercialize a process for this feat, described as falling “somewhere between toxic-waste cleanup and alchemy” by the New Yorker writer Elizabeth Kolbert.

Much more attention needs to be given to the existence of the many other promising approaches for CCS, some in development for more than a decade by relatively small companies and entrepreneurs. These innovators are working on ways to capture and store carbon with the potential for low costs, a small footprint, and often additional economic and environmental benefits.

One example is Jupiter Oxygen, a company with more than a decade of experience with carbon capture and a process with multiple environmental benefits. The firm uses oxy-combustion (injecting oxygen in the combustion process to achieve high flame temperatures) in a process that allows very effective removal of CO2 and nitrogen as well as improving energy efficiency and incineration of most conventional pollutants. The company had technical support from the DOE National Energy Technology Laboratory a decade ago, has substantial operating experience, and is currently pursuing partnerships in China and India based on CCUS — enhanced coalbed methane and industrial applications of CO2.

Blue Planet, a company based on pioneering materials science by Stanford scientist Brent Constantz, uses water-based methods to capture CO2 from flue gas and makes cementitious building materials. The company has attracted impressive support, with an advisory board that includes former FDA Commissioner Donald Kennedy, former National Renewable Energy Lab Director Denis Hayes, and actor Leonardo DiCaprio.

The Carbon X-Prize, a competition with a $20 million award for “breakthrough technologies that convert the most carbon dioxide emissions from natural gas and power plant facilities into products with the highest net value,” announced 27 semi-finalists in 2016. One of the most intriguing was originally conceived in a high school chemistry lab by a teenager. The young inventor is now working with a Yale professor and has secured funding to build a pilot plant that will use waste gas from a power plant or chemical factory and capture one metric ton of carbon emissions per day.

Given the magnitude of the effort required and the complexity of the technical challenges, continued support for these and other innovative smaller companies with CCS concepts should be a priority. As a recent “Economist Briefing” observed, “Progress will be needed on many fronts. All the more reason to test lots of technologies. For the time being even researchers with a horse in the race are unwilling to bet on a winner.”

Yet even as current research shows that climate change may be increasingly dangerous and unavoidable, support for CCS remains slow to develop. Perhaps the greatest source of resistance is the belief that alternative approaches are better — and achievable. The proposition that renewable energy can be the solution to climate change has been aggressively advocated and is credible as a mathematical proposition. The rapid rate of advancement in solar, wind, and battery and other energy storage technologies has indeed been, and continues to be, impressive. A recent end-of-year review by Bloomberg New Energy Finance cites plummeting clean energy auction prices, the entry of significant new markets, and record corporate renewable power purchase agreements. On the other hand, the same source points to negative policy developments in several markets including the United States and South Africa — e.g., the recent decision by President Trump to increase tariffs on solar imports — as well as the risk of rising interest rates for renewable technologies with high capital costs.

Assuming political support could be found for an all-out clean energy strategy, and that it could be implemented in every large energy-consuming country, there are still significant technical issues to be resolved before this could be done consistent with the existing electricity grid. Power from wind and solar is variable and cannot be dispatched in the way that the management and operation of centralized power grids require. Reliance on natural gas plants as a backup is not consistent with the aggressive decarbonization required to stay below 2°C. Unless some other alternatives emerge, as clean energy advocate Dave Roberts has noted, CCS will be essential; without it “other dispatchable resources [would] have to dramatically scale up to compensate — we’d need a lot of new transmission, a lot of new storage, a lot of demand management, and a lot of new hydro, biogas, geothermal, and whatever else we can think of.” Thus, while it is theoretically possible the Paris goals can be met based almost entirely on clean energy, the majority of analyses advocate a combination including clean energy and CCS.

Many environmentalists advocate for carbon sequestration through natural means, primarily by planting trees. The CO2 uptake of existing forests is substantial — in the United States offsetting fossil fuel emissions by about 15 percent. Studies suggest about a third of current carbon emissions could be captured this way, potentially even more if conflicts with food production could be managed, with further reductions through environmentally beneficial measures to increase CO2 absorption and retention in soils. Using trees as fuel for power plants could even generate “negative emissions” if combined with CCS technologies — an approach the last IPCC report stated will be “critical in the context of the timing of emissions reductions,” and also dependent on effective technologies for CCS.

Unfortunately, the trend in forestry has been toward more deforestation and forest degradation, collectively estimated to account for 8 to 15 percent of the rise in global CO2 concentrations. While desirable for many environmental and social reasons, reversing this trend has so far proven to be a major challenge. And climate change may make this still more difficult, as reflected in the recent California wildfires, forest dieback due to pests in Colorado, and expectations of more severe drought in some currently forested regions. A recent article in Nature notes that efforts to raise biomass stocks have only been verifiable in temperate forests, where their potential is limited, whereas large uncertainties hinder verification in tropical forests, where the largest potential is located. California is working on a Forest Carbon Plan, expected to be finalized this year, which could serve as a model.

Given the magnitude of the climate challenge, there is an increasing consensus that all options for mitigating emissions need to be deployed as soon as possible. For some, the situation is so bad that it is now necessary to consider options much more worrisome from an environmental perspective — climate intervention (also called geoengineering). This includes measures such as injection of sulfates in the atmosphere to reflect sunlight and cool the earth’s surface. An initial review by a committee of the National Academy of Sciences concluded in 2016 that such measures merit further research given they could be implemented at a relatively low cost despite “an array of environmental, social, legal, economic, ethical, and political risks.”

Given the seriousness of climate risks and the absence of any single fully effective solution, increasing support for CCS thus seems fully justified and increasingly urgent. In the United States, CCS may also have one additional benefit going for it: a surprising measure of bipartisan political support. Proposals for CCS have attracted backing from both coal state Republicans and liberal Democrats. The Western Governors Association, under the leadership of Wyoming’s Republican governor, Matt Mead, and Montana’s Democratic governor, Steve Bullock, convened a working group composed of 14 states to advocate policies that encourage CCS technologies. A broader coalition with similar interests, the National Enhanced Oil Recovery Initiative, includes fossil fuel companies, labor unions, and national environmental organizations.

Reflecting this diverse political support, last July a bipartisan group of 25 senators introduced the FUTURE Act (for Furthering carbon capture, Utilization, Technology, Underground storage, and Reduced Emissions) to extend and expand a federal tax credit, known as Section 45Q, which incentivizes capturing carbon dioxide from power and industrial sources for enhanced oil recovery and other uses. Another bill with bipartisan support, the Carbon Capture Improvement Act, would authorize states to use tax-exempt private activity bonds to help finance carbon capture equipment. Allowance for such bonds was retained in the recent changes in tax law, a change originally contemplated in the version first passed by the House.

The two bills would be a substantial step toward encouraging increased interest and investment in CCS projects, although they are limited in key respects. First, insofar as the captured carbon is to be used primarily for enhanced oil recovery or enhanced methane production, fossil fuels are still being burned. Another concern is that the bills would provide limited support for innovative ideas from high-risk companies. Such early-stage research support should be a federal responsibility and would seem to be consistent with administration support for coal. At a recent IEA summit, DOE Secretary Rick Perry stated, “While we come from different corners of the world, we can all agree that innovation, research, and development for [carbon capture and underground storage] technologies can help us achieve our common economic and environmental goals.” However, the administration has so far given little indication of formal support for CCS and has even proposed significant cuts to the fossil fuel program.

Expanded tax credits for CCS for enhanced oil recovery in the United States also do not promote carbon capture where it is most needed, in China, India, and other rapidly growing developing nations with coal-dependent energy systems. China alone currently produces about four times as much coal as does the United States, and because of their populations and coal reserves the IEA projects that China and India will account for the lion’s share of global growth in coal consumption in coming decades. Whereas in the United States most CCS would be retrofits to existing coal plants, in China and India there will be opportunities for integrating systems with new plants and industrial facilities — particularly if combined with desperately needed control strategies for conventional pollutants like smog precursors, acid rain, and particulates.

There is an established international initiative with the relevant focus, the Carbon Sequestration Leadership Forum, founded in 2003, which now includes ministerial-level participation from 25 countries. There was also some hope for support with the establishment of Mission Innovation, a global initiative announced during the Paris COP21 climate negotiations to encourage clean energy innovation. Seven of the initial 20 sponsors included reference to CCS when the initiative was announced, but U.S. support is now uncertain. A more ambitious, coordinated, and well financed international effort to include all the world’s largest coal producers and consumers is needed. There are multiple institutions and international initiatives for clean energy, including the International Renewable Energy Agency, the Clean Energy Ministerial, and the Climate Technology Centers organized under the UN climate convention. Given the importance of financing, there also needs to be more of a role for the World Bank and other international financial institutions in a position to provide risk capital as well as technical assistance.

Bipartisan political support may be growing in the United States, but it still faces numerous challenges. In recent congressional testimony, a spokesman for the governor of Wyoming focused on the time required to do environmental reviews and permitting of pipelines as equal if not greater obstacles. Broader leadership for long-term CCS development in the United States also remains an issue, as the future of the DOE fossil energy program is in question and the states with the most progressive climate policies have not made it a priority.

Carbon capture and sequestration remains a necessary if less than ideal solution to the challenge of climate change. As time passes and other solutions appear to be inadequate, a growing body of analysis points to CCS as among the only remaining sources of hope for avoiding catastrophic climate change. As the IEA concluded in its 2016 report, “CCS is the potential ‘sleeping giant’ that needs to be awakened to respond to the increased ambition of the Paris Agreement.” TEF

LEAD FEATURE ❧ Achieving an energy economy based entirely on renewables quickly enough to meet the Paris goal of 2ºC is a high-risk strategy. Removing carbon dioxide from waste streams and burying it is necessary — as may be extraction of the gas from the atmosphere.

Environmental Lawyers Pay Close Attention to Trump v. California
Author
Ethan Shenkman - Arnold & Porter Kaye Scholer LLP
Arnold & Porter Kaye Scholer LLP
Current Issue
Issue
2
Ethan Shenkman

Justice Louis Brandeis famously remarked, “It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” That maxim rings as true today as it did in 1932. Nowhere is this more evident than in the tug of war underway between the Trump administration and California. More than ever, environmental practitioners focused on the national stage must stay attuned to developments in the Golden State, and vice versa.

As Trump pursues a deregulatory agenda, California has rushed to fill the void. For example, as EPA reconsiders the Obama-era Clean Power Plan, California announced a suite of state-level measures to achieve its own goal of reducing greenhouse gas emissions to 40 percent below 1990 levels by the year 2030. Among other things, California extended authorization for its cap-and-trade program through 2030, and Governor Jerry Brown has been advocating linking its trading system, and expanding the western electricity grid, to other states. California Air Resources Board Chairwoman Mary Nichols has touted these initiatives as a direct response to the CPP’s withdrawal.

The most notorious example of this phenomenon may be SB 49, the “anti-backsliding” bill, which garnered significant attention, but remains stalled. SB 49 would prohibit a state or local agency from revising its rules to be less stringent than federal environmental “baseline standards,” which are defined as those standards that were in effect before Trump took office. In the event of a federal retreat from that baseline, SB 49 would require California agencies to quickly pass “emergency regulations” without the usual notice and environmental review processes.

California has also sought to compensate for the absence of U.S. leadership internationally. President Trump’s decision to withdraw from the Paris Agreement cannot go into effect until November 2020 at the earliest. But Brown and the governors of 13 other states and Puerto Rico were not inclined to wait, instead forming the U.S. Climate Alliance. The Climate Alliance members, which represent approximately a third of the U.S. population and 22 percent of its aggregate GHG emissions, have each pledged to meet the same reduction goal that the U.S. submitted in Paris. Brown attended the latest climate talks in Bonn, along with key members of the California legislature, and has pledged to pursue climate partnerships with Canada, Mexico, and China.

Practitioners with clients operating in multiple jurisdictions may find themselves navigating a patchwork quilt of state-level regulation. In some cases — as in the field of consumer disclosure and product labeling — “California is not only reacting to national developments, but driving them,” points out Anthony Samson, who heads Arnold & Porter’s government affairs shop in Sacramento. California is the only state with a law requiring disclosure of ingredients for household cleaning products, but “the California consumer products market is so massive that most manufacturers follow its lead for all of its products rather than sell products specifically labeled for California commerce.”

California Attorney General Xavier Becerra has also jumped into the fray. A study by the Sacramento Bee shows that he has filed 24 lawsuits challenging the Trump administration in 17 subject areas, including multiple actions to forestall regulatory rollbacks at EPA, Interior, and Energy.

Practitioners will also be closely following developments in the area of automobile fuel economy standards. EPA and the National Highway Traffic Safety Administration are considering whether to revisit Obama-era GHG emissions and fuel-economy standards that were originally projected to achieve an industry fleet-wide average of 54.5 miles per gallon for cars and light trucks by model year 2025. California, however, has determined to maintain its standards under a special preemption waiver granted by EPA. To the extent state and federal standards diverge, cooperative federalism may be put to the test, as new questions could be raised concerning the waiver and preemption.

Meanwhile, in perhaps the boldest move to date, California legislators have proposed the Clean Cars 2040 Act, which would require all new cars registered in the state after that date to be zero-emission vehicles.

Tanya DeRivi, director of government affairs for the Southern California Public Power Authority, neatly sums it up: “Our organization has long followed federal and California legal and policy developments on environmental, energy, and tax issues,” she says. “But it has become even more important for our organization to understand the intersection between the two so we can better offer guidance to our members amidst the Trump-California legal and policy battles.” Where will those battles lead? Stay tuned.


Environmental lawyers pay close attention to Trump v. California.

Recent Cold Weather Shows Grid’s Reliance on Oil, Upping Emissions
Author
Kathleen Barrón - Exelon Corporation
Exelon Corporation
Current Issue
Issue
2
Kathleen Barrón

The extreme cold weather in the Northeast and Mid-Atlantic this winter severely tested the performance of the power grid, which has come to increasingly rely on natural gas for generation in a way that may have unforeseen consequences on air emissions.

The electric system relies on natural gas to fuel both baseload and peaking power plants. Baseload refers to the minimum level of everyday demand for electricity. Peaking refers to rapid, short-term demand, such as that which occurs in the early evening as people return home.

Baseload plants powered by natural gas are generally highly fuel-efficient combined-cycle plants that emit a fraction of the pollutants of coal. To the extent that combined-cycle natural gas units operate instead of coal-fired units, there is an environmental benefit due to reduced emissions and waste. This shift has occurred intentionally, to reduce emissions, as well as naturally, due to the rapid and sustained drop in natural gas prices over the last decade. The use of natural gas for electricity production in the United States has grown since 2001 from approximately 10 percent to over one-third of total generation.

Natural gas has long been valued as a cleaner peaking fuel for turbines, which provide the ability to ramp up electricity output within minutes. But because natural gas supply can sometimes be constrained or otherwise restricted, many gas units have the ability to operate in dual-fuel mode: they can burn either natural gas or fuel oil. Fuel oil releases more pollution and is generally more expensive than natural gas, and therefore is not used for normal operations. However, it is stored much more easily and serves as a hedge against natural gas delivery interruptions or price challenges.

Since many operators of natural gas plants maintain an ability to burn fuel oil, in some areas, such as New York City, there are requirements that operators burn oil under certain grid or weather conditions to preserve natural gas supply and affordability, particularly for residential customers who may rely on gas for heating. In other instances, an operator may be forced to burn oil because natural gas has been diverted to residential heating or supply was otherwise disrupted, including due to weather-related malfunction.

During cold weather such as we saw in January, natural gas prices may in fact spike high enough that oil becomes the more economical fuel. Indeed, the Energy Information Administration reports that average peak power prices in the Northeast and Mid-Atlantic for January 5 reached over $250 per megawatt-hour, compared with an average of $30 to $50/MWh in the previous six weeks.

As a result, the eastern grid burned a substantial amount of oil. In fact, preliminary data suggest New England burned more oil during this year’s two-week cold snap than the previous two years combined. During the cold weather, 35 percent of power generation in New England was from oil. In the Mid-Atlantic, oil-fired generation hit 10 percent. On a typical winter day, four percent of power generation is oil-fired.

This reliance on fuel oil to fill gaps in natural gas supply brings a staggering environmental cost. With regard to greenhouse gases, fuel oil has approximately 75 to 80 percent of the CO2 emissions of coal, as compared to roughly half for natural gas. Fuel oils also emit toxic metals and other hazardous pollutants. Finally, oil units emit sulfur dioxide at the same rate as coal and nitrogen oxides at three times the rate.
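Using only the ratios cited in this paragraph (coal = 1.0, fuel oil roughly 0.75 to 0.80, natural gas roughly 0.5 in relative CO2 per unit of generation), a back-of-envelope sketch shows the penalty of the fuel switch: each unit of generation that shifts from gas to oil emits roughly 50 to 60 percent more CO2. The midpoint oil factor below is an assumption for illustration only.

# Relative CO2 per unit of generation, using the ratios cited above
# (midpoint assumed for fuel oil). Illustrative only.
REL_CO2 = {"coal": 1.0, "oil": 0.775, "gas": 0.5}

def switch_penalty(fuel_from: str, fuel_to: str) -> float:
    """Multiplier on CO2 emissions when generation switches fuels."""
    return REL_CO2[fuel_to] / REL_CO2[fuel_from]

print(f"gas -> oil: {switch_penalty('gas', 'oil'):.2f}x CO2 per MWh")  # ~1.55x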

According to Massachusetts Energy and Environment Secretary Matthew Beaton, in January’s 15 days of cold weather, what New England “used in oil is the equivalent of approximately five percent of the total emissions reduction we need between 2014 and 2020,” referring to the requirement that the state achieve greenhouse gas emissions reductions of 25 percent below 1990 emissions levels by 2020. “Economically, this is a disaster for us in New England. Equally as important, environmentally the emissions and the profiles of what occurred in this timeframe are nothing but a disaster.”

To address these environmental impacts, many jurisdictions have imposed emission-rate limits or annual run-time limits, or are phasing out the use of these fuels altogether. Following the extreme cold this year, grid operators have become worried about units needed for reliability reaching emissions limits and thus being unavailable if there is another cold snap.

As Senator Lisa Murkowski (R-AK) put it, this experience will serve as an important stress test of the evolving grid. Important vulnerabilities were identified, and their potential solutions have a range of implications that we cannot ignore.

The author is grateful for the assistance of Kathy Robertson in developing this column.

Recent cold weather shows grid’s reliance on oil, upping emissions.