Do the environmental costs of AI data centers justify construction moratoria?

By Eben Macdonald ’26, Durham University

Cover Art Credit: Colin Bridges ’26, Washington and Lee University


The artificial intelligence (AI) revolution has left a deep environmental footprint. High resource intensity and outsized greenhouse gas emissions animate the case for a moratorium on data center construction. Here, I present a framework to scrutinize this proposal, with clear conditions under which it could be justified. To satisfy them, the environmental costs of AI should meet at least one of the following criteria: (i) they must impose rivalrous harm on societal welfare by tangibly exacerbating both resource constraints and climate change; (ii) they must not be amenable to surgical policy reforms, leaving outright construction bans as the only available option; and (iii) they must not be transitory, meaning they fail to diminish over time. Reviewing the evidence, I argue that the current pace of AI’s development does not meet any of these criteria. While prudent intervention should be welcomed to steer the industry towards a greener future, the ethical premises underpinning support for a moratorium could justify similarly restrictive policies towards much of our current technological portfolio. 

In December 2025, US Senator Bernie Sanders publicly endorsed a total pause on AI data center construction. He surprisingly omitted reference to AI’s environmental costs in a video widely shared on social media, instead emphasizing the projected implications for employment opportunities and wealth inequality. Nonetheless, discussion of the technology’s environmental dimensions is ubiquitous within contemporary literature. As Hlabisa (2025) establishes, every facet of the AI industry is voraciously consumptive, demanding 7,000-12,000 liters of water for every megawatt hour of electricity a data center consumes, 1.8-2.5 cubic meters to manufacture a single semiconductor, and 2-5 liters for every 1,000 times a model is run. For this reason, AI’s pollutive output is huge compared to most current technologies; according to Han et al. (2024), training a single large language model emits pollution equivalent to 10,000 round trips between Los Angeles and New York City, degrading air quality and magnifying public health costs across the United States. The yearly electricity usage of AI models like GPT-3 equates to that of 120 U.S. homes (U.S.-Asia Sustainable Development Foundation 2024). Since huge ecological costs are embedded in all the stages of AI’s development which precede the construction of data centers themselves (like manufacturing semiconductors), the technology, at present, exerts a markedly outsized environmental impact (Green AI Institute 2025). 

AI contributes negative externalities, vast costs to society which producers do not internalize, making the introduction of prohibitive regulations, if not outright moratoria, acceptable across all walks of political philosophy. Neo-Marxist conceptions by the likes of Murdock (2025) cast AI as an extractive technology whose environmental costs reflect the amoral pursuit of capital accumulation with total disregard for societal harm. The fact, though, that these costs operate as externalities, or unconsented infringements on private property rights, means that pro-market philosophies similarly authorize government inhibitions on data center construction, in line with Rothbard’s (1982) Non-Aggression Principle of individual rights. While the state might reserve a prima facie right to halt the AI economy, this is thus far insufficient to prove it should exercise that right. In the spirit of Armatas and Borrie (2024), pragmatist environmental ethics acknowledge that tradeoffs with living standards affect policy calculations, meaning environmentally harmful activities, even those of data centers, can be retrospectively justified if they enable a satisfactory elevation of human welfare. Cutting down a small wood to make way for a hospital seems acceptable to most, while mass deforestation to expand cattle production and slightly lower meat prices strikes many as a moral atrocity. Put simply, the tolerable exchange rate between environmental harm and human welfare lies within sensitive boundaries, and this paper contributes to a discussion of whether the current AI economy can be placed inside them.

Of course, the anticipated effects of AI on society are far from universally optimistic. Fears range from mass job displacement to emboldened state authoritarianism (Deranty 2022; Lee et al. 2025), in contrast to the far rosier projections of AI bolstering research efficiency and engineering a productivity boom akin to the Industrial Revolution (Madanchian and Taherdoost 2025; Fang et al. 2025). This paper avoids an empirical discussion of AI’s impacts and instead seeks to navigate a purely ethical tradeoff: assuming the pessimistic objections to AI are mostly unfounded, and that the proposed benefits (which are vast) will materialize, what threshold of environmental harm must be met to justify capping the technology’s progress, and thereby sacrificing this future welfare? Below, three conditions are set out to establish the justificatory parameters for a halt on data center construction, of which at least one must be met. 

I. Are Environmental Costs Rivalrous?

The environmental harm of AI data centers, we have established, is enormous. A single center consumes more water than entire neighborhoods put together, while emitting unfathomable quantities of greenhouse gases. Although the negative externalities are clear, they do not form a sufficient case for a moratorium. This is because those externalities make a high marginal contribution (large when viewed in isolation from other activities) to already-existing environmental harm, but not necessarily a high aggregate one (small compared to the total). Phrased differently, the environmental costs of AI are non-rivalrous. This means that even if data centers make an already bad situation worse by adding to global emissions and resource demands, their impingements are not large enough, in relative terms, to exert zero-sum detractions from human welfare. To illustrate, imagine you are a university student who has returned home after a long day. Famished, you open the communal fridge, excitedly awaiting a box of three doughnuts you left beforehand. To your fury, you see that one has been taken, leaving only two. In another scenario, imagine you had a much larger box of twenty. Upon opening the fridge, you see that only nineteen remain. While perhaps irked that a greedy housemate nicked your snacks, the subjective disutility you experience is understandably far less, as a higher-percentage expropriation intuitively brings greater harm.

This very logic diminishes the social culpability we can reasonably ascribe to AI given its environmental footprint. A single data center may pollute more than almost any other single infrastructure asset, but the aggregate impact of AI is still small compared to the totality of human activity: a mere 2% of global electricity demand originates from data centers and cryptocurrencies combined (International Energy Agency 2024). The same goes for worsening resource constraints, namely water insecurity. Li et al. (2023) argue that AI’s voracious water consumption will magnify global inequities by raising prices in developing countries. To do so, AI’s demand would have to constitute at least a noticeable fraction of the global total, thereby diverting water from the taps of the world’s poorest. In 2021, data centers drank up 840 thousand cubic meters of water (BCG 2024). Ethiopia, deemed the globe’s most water-insecure country, saw 10.55 billion cubic meters of freshwater withdrawals that same year, meaning global data center consumption amounted to less than 0.01% of that single country’s withdrawals alone (Northwestern Institute for Policy Research 2025; Our World in Data 2025). 

For this reason, AI, however extractive its relationship with nature might be, cannot be held responsible for tangibly exacerbating climate change or raising the prices of key commodities (like water). An essential ethical threshold for moratoria is therefore out of reach, especially considering the logical implications of the standard AI is being subjected to. As an example, cattle farming is responsible for 41% of global deforestation, a clear case of rivalrous ecological destruction (Ritchie 2021). A consistent application of the principle that environmental harm of this kind warrants government inhibition would therefore demand restrictions on a variety of other activities, including meat consumption and flights. Of course, this is not necessarily a hard bullet to bite, as support for tight restrictions on certain human activities for environmental reasons appears in the literature (Deckers 2016). The goal of this paper’s argument, however, is to elucidate that AI, despite its novelty, should be held by regulators to the same standards as most other technologies, and that extrapolating the principle favoring precautionary restrictions might demand a wider set of interventions opposed by most.

Nonetheless, when and where justified, moratoria proposals are defended on the grounds that unrestrained consumption (and hence the fulfillment of personal utility) imposes rivalrous harm. This is the case both at the individual and the institutional level. One is more likely to refrain from grabbing the last biscuit off a plate of snacks than from taking an extra generous portion at a plentiful buffet, as the marginal infringement on fellow consumers is much higher in the former case than the latter. Wartime rations, meanwhile, which function as impediments to additional consumption, follow a logic of inhibiting zero-sum overconsumption in the context of major supply constraints. Since food availability becomes limited, above-average consumption by the wealthy actively crowds out supply for the less well-off, forcing the state to intervene with rations. AI data centers deserve a similar charity: while their environmental effects seem outsized relative to the extent of AI infrastructure, they make little difference to the already-existing totality of ecological harm. This significantly weakens the case for such restrictive policies.

II. Are Environmental Costs Amenable? 

While data centers may imprint vast ecological harms, there is a separate burden which proponents of inhibitive regulation must satisfy: that pausing data center construction is the only way to manage that harm, with no other policy options available. This question applies to all stakeholders of the AI revolution, from industry developers themselves to those at the receiving end of AI-imposed resource constraints. Before we enact moratoria, we must first look to simpler policies which mitigate the environmental harms while allowing the supposed benefits of AI to be realized. 

Throughout technological history, regulators have successfully balanced market freedom with environmental interests. Following decades of rising car pollution, US legislation passed in the 1970s required manufacturers to incorporate more efficient technology into engine designs, giving way to the introduction of catalytic converters (He and Jin 2017). Simply put, bans and moratoria are rarely a necessary solution, as policymakers can allow technological development to proceed while surgically minimizing the costs. Admittedly, regulations that internalize pollution might be difficult to construct for AI given its technological novelty; however, Friedman (1980) proposes a simple tax on all pollution, instead of regulation, to guide market incentives. When expanded to cover industrial water consumption, this tax could easily encompass the externalities associated with data centers. That said, the industry’s sheer vastness might mean that tax-based measures fail to rein in AI’s harm as effectively as legal fiat. With a market capitalization of $5 trillion (at the time of writing), Nvidia, the mothership of the global AI economy, could easily absorb higher taxes without being significantly pressured to reform its environmental conduct. Nonetheless, the mere possibility that initiatives other than construction freezes might manage pollution, while retaining the benefits of AI, itself postpones the necessity of moratoria. One does not have to be a classical liberal to subscribe to a prima facie endorsement of individual rights, whereby an ethical delay should exist between identifying a problem and resorting to (forceful) government intervention. If and where they exist, less restrictive substitutes should be pursued first. 

This same standard applies to the stakeholders of AI. The welfarist argument for restraining AI’s development assumes that the environmental harm stakeholders incur (say, hypothetically, higher water prices) is unavoidable. On this view, just as policymakers cannot surgically remove the costs, no underlying institutional changes could pre-emptively mitigate the fragility of the water supply to AI demand, leaving no alternative but to enact freezes on data center construction. 

This is not true. To illustrate, suppose that a town and an AI data center sit on opposite sides of a lake. Both obtain their water from the lake, but supply limitations instigate rivalrous demand, and the townspeople must cope with tangibly higher water prices, all thanks to the data center’s consumption. While they might reasonably petition for the building’s dismantlement, the issues the population faces are reducible to a more fundamental economic problem: a classic tragedy of the commons, in line with what Acemoglu et al. (2018) describe. This is where a failure to assign ownership rights over a common ecological resource, like a body of water, leads to overexploitation and resource depletion. Acemoglu et al. describe overfishing, but the principle is no different with water consumption. Instead of ending the center’s operations, the town can try various solutions to a ubiquitous issue within environmental economics. For example, a clear delineation of property rights could reserve a larger section of the lake for the town’s consumption and a smaller section for the data center’s use, both relieving pressure on water prices and encouraging the center, as we will soon see, to search for cheaper resources elsewhere.

This scenario is extremely charitable to proponents of construction freezes, as it is a direct example of rivalrous consumption. Even here, we can see that moratoria would not be necessary, since less restrictive alternatives remain available to policymakers. The global implication, in a hypothetical where AI takes up a worrying fraction of water demand (say, more than 5%), is not to target AI’s infrastructure, but to extirpate the basic drivers of limited water supply which magnify the burden data centers pose. These drivers are debated, but evidence suggests that water insecurity is symptomatic of a wider failure to engender economic growth. By providing stable investment, growth can satisfy the heavily capital-intensive demands of water infrastructure (Dangui and Jia 2022). Moratoria are therefore an unfocused and morally disproportionate reaction to AI data center construction.

III. Are Environmental Costs Transitory?

The environmental case for freezing AI infrastructure growth does not just demand evidence that the impacts of AI development are large and outsized. To justify state prohibitions which would hold AI to vastly different standards from other technologies, we must show that those impacts are here to stay. Put differently, the case for a moratorium becomes much weaker if the intensity of AI’s environmental harm is transitory.  

The current evidence, as well as economic intuition, suggests that the costs of AI will rapidly diminish into the future. To return to Hlabisa (2025), this is already happening in real time: the energy intensity of a single AI prompt dropped 30-fold between 2023 and 2024. Admittedly, Hlabisa reports projections that aggregate harm from AI will rise into the late 2020s. Given the diminishing footprint of individual AI activities, however, this reflects even more centers being built, not the centers themselves becoming more resource intensive. When the AI boom finally begins to settle down and demand naturally plateaus, so will the industry’s total footprint. This makes perfect sense because resource intensity is an economic burden on the producer, which, regardless of policy interventions, instills an organic incentive to improve efficiency. While it might be affordable for data centers to use gallons of water, it would be cheaper to achieve the same output by using less. This is consistent with standard theory, which says that the market’s desire for economic efficiency often translates into more economical resource use and therefore environmental gains. Meyer and Pac (2013) find that, after privatization, Eastern European energy firms saw a 55% reduction in sulfur dioxide emissions by installing more efficient technology. As capital investment pours into AI, the pressure will grow to minimize input costs through innovation. At some point not too far from the present, a data center might not emit significantly more greenhouse gases or consume more water than a normal factory. 

Although AI’s footprint might converge to the industrial mean, this does not conclusively defeat the argument for heavy-handed intervention. Earlier, we discussed the basic exchange rate between environmental harm and human utility. This exchange rate also possesses a temporal dimension, as there is no guarantee that diminution will happen quickly. Therefore, we must ask: for what length of time are we willing to endure disproportionate environmental harm from AI, and for what amount of future gain in welfare? 

Our previous technological history suggests we are willing to tolerate much longer-lasting environmental harms for a given unit of welfare. Car pollution subsided at a far slower rate than AI’s resource intensity is currently diminishing. Coal production, meanwhile, imposed highly rivalrous environmental costs, as enormous smog pollution reduced life expectancies and birth weights while acid rain wreaked havoc on local ecosystems, for centuries before the pivot towards cleaner alternatives (like natural gas) began (Jha and Muller 2018; Ghaffarpasand and Pope 2025). Nonetheless, proposals to ban cars or outlaw coal production (effectively halting industrialization) would, throughout history, have been viewed as absurd, since the welfare gains were thought to be more than a worthy tradeoff and the pollution was technologically amenable (Goklany 2007; Ridley 2011). The argument for halting data center construction, however, could retrospectively justify both of these hypothetical bans, as well as bans on most other technologies which have displayed high environmental transition costs. 

IV. Conclusion 

If the welfare gains from AI-led modernization were not as large (or were even negative) compared to previous technologies for which we have tolerated major environmental costs, the case for iron-fisted government intervention might be stronger. Although this paper has not considered the economic impacts AI will yield—whether mass unemployment or soaring productivity—the ethical framework it has set out can nonetheless inform policy discussions where an understanding of those impacts is present.  

References

Acemoglu, Daron, David I. Laibson, and John A. List. Economics. Pearson, 2018. 

Armatas, Christopher A., and William T. Borrie. “A Pragmatist Ecological Economics – Normative Foundations and a Framework for Actionable Knowledge.” Ecological Economics 227 (2024): 108422, https://doi.org/10.1016/j.ecolecon.2024.108422. 

U.S.-Asia Sustainable Development Foundation. “White Paper on Global Artificial Intelligence Environmental Impact.” Last modified October 21, 2024. https://www.uasdf.org/greenaiwhitepaper. 

Dangui, Kokou, and Shaofeng Jia. “Water Infrastructure Performance in Sub-Saharan Africa: An Investigation of the Drivers and Impact on Economic Growth.” Water 14, no. 21 (2022): 3522. https://doi.org/10.3390/w14213522.  

Deckers, Jan. Animal (De)Liberation : Should the Consumption of Animal Products Be Banned? Ubiquity Press, 2016. 

Deranty, Jean-Philippe. “Artificial Intelligence and Work: A Critical Review of Recent Research from the Social Sciences.” SSRN Electronic Journal (2022). https://doi.org/10.2139/ssrn.4083455.

Fang, Xinmin, Lingfeng Tao, and Zhengxiong Li. “Closer to Language than Steam: AI as the Cognitive Engine of a New Productivity Revolution.” arXiv (2025). https://doi.org/10.48550/arxiv.2506.10281.  

Friedman, Milton, and Rose D Friedman. Free to Choose: A Personal Statement. Paw Prints, 1980. 

Goklany, Indur M. The Improving State of the World. Cato Institute, 2007. 

Ghaffarpasand, Omid, and Francis D. Pope. “Air Pollution Emissions from Vehicles as a Function of Their Current Real-World Market Price.” Journal of Cleaner Production 534 (2025): 147076. https://doi.org/10.1016/j.jclepro.2025.147076. 

Han, Yuelin, Zhifeng Wu, Pengfei Li, Adam Wierman, and Shaolei Ren. “The Unpaid Toll: Quantifying the Public Health Impact of AI.” arXiv (2024). https://doi.org/10.48550/arxiv.2412.06288.  

He, Hui, and Lingzhi Jin. “A Historical Review of the U.S. Vehicle Emission Compliance Program and Emission Recall Cases.” ICCT White Paper, 2017. https://theicct.org/wp-content/uploads/2021/06/EPA-Compliance-and-Recall_ICCT_White-Paper_12042017_vF.pdf  

“How to Manage AI’s Thirst for Water.” BCG Global, last modified April 22, 2024. https://www.bcg.com/publications/2024/how-to-manage-ai-thirst-for-water. 

“How the World Experiences Water Insecurity: Water Insecurity Experiences (WISE) Scales.” Northwestern University, accessed 2024. https://www.ipr.northwestern.edu/wise-scales/impact/data/. 

“Green AI Institute – White Paper on Global Artificial Intelligence Environmental Impact.” Green AI Institute, 2025. https://www.greenai.institute/whitepaper/white-paper-on-global-artificial-intelligence-environmental-impact.  

Jha, Akshaya, and Nicholas Z. Muller. “The Local Air Pollution Cost of Coal Storage and Handling: Evidence from U.S. Power Plants.” Journal of Environmental Economics and Management 92 (2018): 360–96. https://doi.org/10.1016/j.jeem.2018.09.005.  

Lee, Tsung-Ling, Sharifah Sekalala, and Pedro Villarreal. “AI and Data Surveillance: Embedding a Human Rights-Based Approach.” The Journal of Law, Medicine & Ethics (2025): 1–5. https://doi.org/10.1017/jme.2025.17. 

Madanchian, Mitra, and Hamed Taherdoost. “The Impact of Artificial Intelligence on Research Efficiency.” Results in Engineering 26 (2025): 104743. https://doi.org/10.1016/j.rineng.2025.104743.  

Meyer, Andrew, and Grzegorz Pac. “Environmental Performance of State-Owned and Privatized Eastern European Energy Utilities.” Energy Economics 36 (2013): 205–14. https://doi.org/10.1016/j.eneco.2012.08.019.  

Murdock, Graham. “Artificial Intelligence as Primitive Accumulation: Enclosure, Extraction, Exploitation.” Communication and Change 1, no. 1 (2025). https://doi.org/10.1007/s44382-025-00004-1. 

Rothbard, Murray N. The Ethics of Liberty. 1982. Reprint, New York University Press, 2002. 

Ritchie, Hannah, and Max Roser. “Forests and Deforestation.” Our World in Data, last modified May 2024. https://ourworldindata.org/drivers-of-deforestation#cutting-down-forests-what-are-the-drivers-of-deforestation 

Ridley, Matt. The Rational Optimist : How Prosperity Evolves. Harper Perennial, 2011. 

Hlabisa, Sibongiseni. “The Ecology of Artificial Intelligence: Energy, Water, Materials, and Land Limits of Digital Systems.” Carbon Neutral Systems 1, no. 1 (2025). https://doi.org/10.1007/s44438-025-00018-8.

The views, opinions, and conclusions expressed in student‑authored works published [in this journal / on this website] are those of their respective authors and do not necessarily reflect the official policy, position, or views of Washington and Lee University or the Mudd Center or its administrators, faculty, or staff.