In recent years, Artificial Intelligence (AI) applications have rapidly become more sophisticated and widespread, “even as legal and regulatory frameworks struggle to keep up.” Moreover, AI’s often-overlooked environmental implications are simultaneously “sweeping and quite complicated,” and for all of its promise to help improve the environment, AI could in fact cause environmental harm. With those framing remarks, Andrew Tutt, a Senior Associate with the law firm Arnold & Porter, opened a February 18 webinar on “Environmental Applications & Implications of Artificial Intelligence,” the third in ELI’s GreenTech series running through 2021.
Before introducing the expert panelists, Tutt added that AI’s environmental protection promises and potential pitfalls raise a fundamental question: What governance mechanisms are needed to ensure we harness the opportunity while mitigating the environmental harms? He then introduced Priya Donti, the Co-founder and Chair of Climate Change AI; Emma Strubell, an Assistant Professor at Carnegie Mellon University’s Language Technologies Institute; and Aidan O’Sullivan, an Associate Professor at University College London’s Energy Institute.
In her remarks, Donti, whose work focuses on creating Machine Learning (ML) techniques to reduce electricity-sector greenhouse gases (GHGs), said that AI’s relationship to climate change depends on how it is used. Many AI applications can reduce GHGs, help societies adapt to a changing climate, and bolster existing strategies to address the problem. Donti and colleagues detailed these opportunities in a 2019 paper, Tackling Climate Change With Machine Learning, which described how ML could be applied across sectors, including energy, land use, and climate prediction, for both mitigation and adaptation purposes.
Other benefits include forecasting solar and wind production, transportation demand, and localized extreme events. In real-world systems, AI and ML can increase operational efficiency, she said, offering several examples, such as a DeepMind application that reduced the cooling load of Google’s data centers and others that can optimize residential and commercial heating and cooling, supply chains, and food delivery operations.
In many other ways, however, AI and ML can increase emissions unless society makes concerted efforts to counter such impacts, Donti noted. For example, AI tools are being used to make GHG-intensive industries such as oil and gas or mining more productive, a use with potentially large impacts because it entrenches fossil fuels and slows the transition to clean energy. AI employed for targeted “personalized advertising” could drive up product consumption, with emissions implications. While potentially large, the emissions effects of these AI downsides have been hard to quantify.
Commenting that climate impacts must be a central consideration in AI policy, Donti urged regulatory and reporting requirements that ensure AI’s lifecycle impacts are accounted for in climate policymaking. Additionally, the public sector’s capacity to deal with these complicated issues must be shored up, including through stakeholder engagement and the incorporation of best practices for AI use, she suggested.
Although quantifying AI’s downsides poses difficulties, methods are available to measure ML’s direct energy consumption, and levers exist to lower that energy use, Strubell said. Citing data from a 2019 paper she and colleagues wrote, Energy and Policy Considerations for Deep Learning in NLP, Strubell said she was pleased that the paper received extensive media coverage and brought attention to the potential environmental impact of ML, but she regretted the hyperbole and misunderstanding surrounding the research, including claims that AI’s GHGs could destroy the planet. A key distinction behind the alarmist headlines, Strubell explained, involves the ML trial-and-error “tuning” process, which repeats the training step many times and can consume four times as much energy as training alone. But tuning occurs only a few times over a three-month period and is usually limited to research institutions and big tech companies; it is not a widespread energy hog.
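Strubell’s accounting combines measured hardware power draw with training time, a data-center overhead factor (Power Usage Effectiveness, or PUE), and the carbon intensity of the local grid. A minimal Python sketch of that style of back-of-the-envelope calculation, with all figures invented for illustration rather than taken from the paper:

```python
# Back-of-the-envelope training-footprint estimate in the spirit of
# Strubell et al. (2019). Every number below is an illustrative assumption.

def training_footprint(gpu_watts: float, n_gpus: int, hours: float,
                       pue: float = 1.6, kg_co2_per_kwh: float = 0.45):
    """Return (kWh consumed, kg CO2 emitted) for one training workload.

    pue: data-center overhead multiplier (Power Usage Effectiveness).
    kg_co2_per_kwh: grid carbon intensity, which varies widely by region.
    """
    kwh = gpu_watts * n_gpus * hours * pue / 1000.0
    return kwh, kwh * kg_co2_per_kwh

# A single training run: 8 GPUs drawing ~300 W each for 72 hours...
run_kwh, run_co2 = training_footprint(300, 8, 72)

# ...versus a tuning sweep that repeats that training run 50 times.
sweep_kwh, sweep_co2 = training_footprint(300, 8, 72 * 50)

print(f"single run:   {run_kwh:,.0f} kWh, {run_co2:,.0f} kg CO2")
print(f"50-run sweep: {sweep_kwh:,.0f} kWh, {sweep_co2:,.0f} kg CO2")
```

The sketch makes Strubell’s point concrete: the large multiplier comes from repeating training many times during tuning, not from any single run.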
Precise estimates of AI’s actual GHG emissions are unavailable, Strubell noted, but data centers are estimated to account for less than 1% of global electricity use, and AI is likely responsible for only a small part of that total.
Panelist O’Sullivan presented a “positive story on decarbonization” in the power sector. Digitalization has enabled the energy sector to benefit from AI as the sector moves toward optimization rather than relying on human operators, who mostly repeat their previous actions. AI allows operators to do what is currently done with systems designed decades ago, “but better,” and it allows experts to “reimagine” energy systems with previously inconceivable capabilities, he said. For example, in balancing demand and supply, which grows increasingly volatile as more weather-dependent renewables join the grid, AI can constantly monitor changing weather and grid conditions and adjust electricity flows in ways that are beyond a human operator’s capacity. AI can also curtail electricity theft, and with sufficient information it can give customers personalized recommendations based on factors like the thermal properties of their homes.
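To make the balancing problem concrete, a stylized dispatch optimization might, in each interval, choose the cheapest mix of resources to meet forecast demand. The sketch below uses invented costs, capacities, and a demand figure, and omits the network constraints real grid operation must respect:

```python
# Toy economic dispatch: meet forecast demand at minimum cost.
# Resources, costs, and limits are invented for illustration only.
from scipy.optimize import linprog

cost = [0.0, 30.0, 55.0]          # $/MWh: wind, gas, peaker
capacity = [120.0, 200.0, 100.0]  # MW available from each resource
demand = 250.0                    # MW of forecast load this interval

# Minimize total cost subject to: outputs sum to demand,
# and each resource stays within 0..capacity.
result = linprog(
    c=cost,
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[demand],
    bounds=list(zip([0.0] * 3, capacity)),
)

for name, mw in zip(["wind", "gas", "peaker"], result.x):
    print(f"{name}: {mw:.0f} MW")
```

An AI-driven version of this loop would re-solve continuously as weather and demand forecasts update, the kind of faster-than-human reaction O’Sullivan described.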
However, O’Sullivan cited obstacles, such as getting buy-in for AI deployment from actors with different incentives in a sector that is complex and involves politics, policy, technology, sociology, and law. Such factors create innovation challenges. Other obstacles include the need for stronger research links between power systems and AI, and the tech and finance sectors’ “hoovering up” of AI talent.
Tutt, who cited a paper he wrote, An FDA for Algorithms, about algorithmic governance in the United States, said that AI regulation raises the question of what makes AI different. Because AI is currently “largely unregulated,” it raises issues about which regulatory approaches “could” and “should” be taken and what the implications are for the environment. Even the people who create AI, though they understand the sophisticated math involved, often cannot explain how the AI arrived at the solution it generated. That can be a real problem both for confidence in an AI solution’s reliability and for ensuring “you don’t have any catastrophic outcomes,” he said.
Noting that AI is in its infancy, Tutt said that in the long term, people with real expertise in the technology should be in the driver’s seat across a range of substantive regulatory areas, from the environment, to vehicles, to consumer credit, and other applications. In the short term, as we await more comprehensive approaches, EPA could be at the forefront, setting rules of the road for the use of AI in regulated industries. EPA also could employ AI in its own work to create more sophisticated and nuanced environmental regulation, as well as to test its regulatory assumptions and improve responses. Every agency could do the same. For now, agencies are taking their own uncoordinated steps, but as more students earn Ph.D.s in the field, they are likely to join regulatory agencies, where they can help “to solve some of the world’s biggest problems,” he said.
A related article by Julietta Rose and Henry Gunther, law students at Berkeley Law and Washington University, Governing AI & the Importance of Environmentally Sustainable and Equitable Innovation, was published in the November 2020 issue of the Environmental Law Reporter.
Learn more about the GreenTech Webinar Series at https://www.greentechconference.org/webinar-series. The GreenTech Webinar Series is run by Kasantha Moodley, Manager of the ELI Innovation Lab.