
How Experts Are Tackling Generative AI's Growing Ecological Footprint

The second part of our series on the environmental impact of generative AI looks at what researchers and engineers are doing to shrink the technology's considerable carbon footprint. Generative AI is advancing at an astonishing pace, and its energy demands are keeping up. The International Energy Agency projects that worldwide electricity use by data centers could roughly double by 2030, reaching approximately 945 terawatt-hours, more than the annual electricity consumption of Japan.

The root cause of this surge is our growing need to train and run massive AI models. It comes as no surprise, then, that a recent Goldman Sachs Research analysis suggests about 60% of this new demand will be met by fossil fuels, potentially adding some 220 million tons of carbon dioxide to the atmosphere annually.

Diving Deeper Into The Carbon Costs

When we discuss the environmental impact of AI, the focus falls predominantly on operational carbon, the emissions produced by running GPUs and cooling systems. But there is another side to this coin. According to Vijay Gadepally of MIT Lincoln Laboratory, the discussion tends to overlook "embodied carbon": the emissions generated during the construction and retrofitting of data centers. These massive structures, filled with miles of cabling and high-performance hardware and built from steel and concrete, are significant contributors in their own right.

On the bright side, companies including Meta and Google are now exploring eco-friendly construction materials such as mass timber to bring down this hidden carbon cost. But the fight against emissions doesn't stop there. Sometimes the solution is as straightforward as dimming the lights, or simply capping GPUs at just 30% of their maximum power draw. Surprisingly, this has minimal impact on model performance while significantly easing cooling demands.
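The power-capping idea above can be sketched in a few lines. This is a minimal illustration, not a production tool: it assumes an NVIDIA GPU whose limits are set via the `nvidia-smi -pl` option, and the 300 W rating and 0.3 fraction are made-up example values.

```python
# Sketch: capping a GPU's power draw to a fraction of its rated maximum.
# Assumes an NVIDIA GPU managed with nvidia-smi; the wattage and the
# fraction below are illustrative, not recommendations.

def power_cap_command(max_watts: float, fraction: float, gpu_index: int = 0) -> list[str]:
    """Build the nvidia-smi invocation that caps one GPU's power limit."""
    capped = round(max_watts * fraction)
    return ["nvidia-smi", "-i", str(gpu_index), "-pl", str(capped)]

cmd = power_cap_command(max_watts=300, fraction=0.3)
print(" ".join(cmd))  # nvidia-smi -i 0 -pl 90
```

In practice the command would be run with elevated privileges via `subprocess.run(cmd)`; building the argument list first keeps the cap auditable before it is applied.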

Engineers have room to maneuver here as well. They can opt for less energy-intensive hardware or use lower-precision processors optimized for specific tasks. Furthermore, by employing early stopping in model training, halting the process before chasing the last couple of percentage points of accuracy, they can cut energy usage roughly in half.
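The early-stopping idea can be made concrete with a minimal sketch. The loss values, patience, and threshold below are illustrative assumptions, not taken from any particular training run.

```python
# Sketch: early stopping halts training once validation loss stops
# improving meaningfully, trading a sliver of accuracy for less compute.

def train_with_early_stopping(val_losses, patience=2, min_delta=0.01):
    """Return the epoch index at which training would stop."""
    best = float("inf")
    stale = 0  # consecutive epochs without meaningful improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                return epoch  # stop here instead of training to the end
    return len(val_losses) - 1

# Made-up validation losses: big early gains, then a plateau.
losses = [0.90, 0.60, 0.45, 0.44, 0.44, 0.43, 0.43]
print(train_with_early_stopping(losses))  # 4 — stops 2 epochs early
```

The `patience` and `min_delta` knobs control how aggressively training is cut short; looser settings save more energy at a small cost in final accuracy.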

The Future Of AI And Its Energy Usage

The good news is not limited to hardware. Neil Thompson of MIT's FutureTech Research Project points to the power of algorithmic improvements, which are doubling energy efficiency roughly every eight to nine months. Thompson proposed the term "negaflop" for computational operations avoided through smarter algorithms, much as "negawatt" describes saved electricity. Techniques such as pruning unnecessary neural network components and applying compression drastically cut computational requirements without sacrificing performance.
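Pruning, one of the techniques mentioned above, can be sketched in its simplest form: magnitude pruning, which zeroes out the smallest weights. The weights and threshold here are invented for illustration; real systems prune whole structures and retrain afterward.

```python
# Sketch: magnitude pruning zeroes every weight whose absolute value
# falls below a threshold, creating sparsity that hardware and
# libraries can exploit to skip work ("negaflops").

def prune(weights, threshold=0.1):
    """Return a copy of `weights` with small-magnitude entries zeroed."""
    return [0.0 if abs(w) < threshold else w for w in weights]

w = [0.5, -0.02, 0.3, 0.07, -0.8, 0.01]
pruned = prune(w)
sparsity = pruned.count(0.0) / len(pruned)
print(pruned)    # [0.5, 0.0, 0.3, 0.0, -0.8, 0.0]
print(sparsity)  # 0.5 — half the multiplications can be skipped
```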

While these strategies hold promise, timing matters too. Deepjyoti Deka of the MIT Energy Initiative notes that not all electricity is created equal: the carbon intensity of a kilowatt-hour varies greatly with the time of day and the energy source. By scheduling non-urgent AI workloads during periods of abundant renewable energy, data centers can meaningfully reduce their carbon footprint.
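The scheduling idea reduces to a simple search: given an hourly carbon-intensity forecast, find the contiguous window with the lowest average intensity and defer the job until then. The forecast values below are invented for illustration.

```python
# Sketch: carbon-aware scheduling picks the forecast window with the
# lowest average grid carbon intensity (gCO2/kWh) for a deferrable job.

def greenest_window(forecast, duration):
    """Return the start index of the lowest-average-intensity window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration + 1):
        avg = sum(forecast[start:start + duration]) / duration
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start

# Hypothetical hourly forecast: midday solar makes hours 10-12 cleanest.
intensity = [420, 410, 400, 390, 350, 300, 260, 220,
             180, 150, 120, 110, 115, 140, 200, 280]
print(greenest_window(intensity, duration=3))  # 10
```

Real deployments would pull the forecast from a grid-data provider rather than a hard-coded list, but the selection logic is the same.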

Location also plays a part in cutting environmental impact. Cooler climates, such as northern Sweden's, can drastically reduce the need for energy-intensive cooling systems. Some governments are even considering building data centers on the moon, where operations could potentially run entirely on renewable energy. Though still a futuristic concept, it offers a glimpse of what might lie ahead.

There’s no denying the irony that AI itself could help mitigate its own environmental impact. Jennifer Turliuk, a former MIT Sloan Fellow, points out that AI can expedite the integration of renewable energy into the grid. Generative models could dramatically speed up interconnection studies, which right now take years to finish. AI can also optimize renewable energy generation forecasts, perform predictive maintenance on solar panels, and identify the most efficient locations for new infrastructure. If correctly applied, this could greatly accelerate the deployment of clean energy technologies and inform smarter policy decisions for an eco-friendly future.

Turliuk and her team may also help us quantify these trade-offs precisely. They developed the Net Climate Impact Score, a framework for evaluating the full environmental cost and benefit of AI projects. In her view, collaboration between academia, industry, and regulators is critical to making AI more sustainable. As Turliuk puts it, "Every day counts. We have a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense before the effects of climate change become irreversible."

For a deeper dive into the topic, see the original article on MIT News.

Max Krawiec
