CONVERSATIONS

Benedetta Brevini on the AI sublime bubble – and how to pop it

Benedetta Brevini is Associate Professor of the political economy of communication at the University of Sydney, and Senior Visiting Fellow at the London School of Economics and Political Science. A journalist and media activist as well as an academic, Benedetta studies the relationship between data capitalism, artificial intelligence, the climate crisis, and environmental communication. Her latest book, Is AI Good for the Planet?, explores the true environmental costs of AI.

Despite these costs, AI is being waved around as if it were a magic wand that can save us from the climate crisis. Benedetta dispels this myth. The real power of these discourses is to distract from the fact that we have unleashed capitalism on both society and nature. When AI systems are deployed for “green” causes like monitoring forest biomass or optimizing agriculture, they are reinforcing a tunnel of thought with no light on the other side. Real sustainability, Benedetta shows, requires seeing all the links in the AI supply chain, which are as much ideological as material.

You've written extensively about how technology in general, and AI in particular, has been invoked as a way to fix the climate problem. Can you give us an overview of the dominant strains of this discourse? What kinds of imaginaries are being mobilized?

The idea that is becoming dominant presents AI as a magic wand with which we can sort out the biggest problem that society faces. It's part of a hegemonic techno-imaginary that is taking up a lot of ground. I've recently been studying how the European Commission, for example, has been looking to AI not only for the capacity to deal with the climate crisis, but also to sort out democracy's other big problems, including those generated by capitalism.

An illustrative quote comes from an important report published by the World Economic Forum in 2018. It’s a report on artificial intelligence for the earth, and it's one of the best examples of this idea of AI as a magic wand: “We have a unique opportunity to harness this Fourth Industrial Revolution, and the societal shift it triggers, to help address environmental issues and redesign how we manage our shared global environment... The intelligence and productivity gains that AI will deliver can unlock new solutions to society's most pressing environmental challenges: climate change, biodiversity, ocean health, water management, air pollution, and resilience, among others.”

Discursively, it's significant that they use the phrase Fourth Industrial Revolution. This type of language was used back in the '90s by people like Nicholas Negroponte and George Gilder. I remember Negroponte claiming that a New Athenian age was developing, thanks to the possibilities of the internet. Now we are again talking of a revolution in technology that will help us sort out all the calamities of the world. This is the sublime phase of a new technology, and it’s a very strong techno-imaginary. I use sublime following Vincent Mosco, who has done a great job identifying this typical approach to so-called new technology. 

We know AI is not new; we have been discussing it since before World War II. But precisely because of the acceleration of data capitalism, we also know that there has been an incredible uptake in the last decade. This is why it is becoming a “new” technology, which in turn is why it's so difficult to look at AI from a more critical perspective. That's precisely how you build mythologies and common sense, to borrow from Antonio Gramsci. We’re prevented from challenging these mythologies around technologies the moment they become common sense.

As it happens, we are often subjected to a sort of amnesia about the older technology that gets displaced by the new one. In this case, it would be the web, the internet in general, being replaced. We are forgetting that all of the internet’s promises have not been met, even though they seemed within reach in the ’90s. We are once again experiencing this amnesia with AI.

What are some of the consequences of this sublimation of technology?

An important one is that it obfuscates the materiality of these technologies. It obscures how the development of technology is completely embedded in the social, political, and economic structures of the society that actually develops the technology. To paraphrase Raymond Williams, technology is always in a full sense social. 

We know that we are developing AI under a framework of super-capitalism, a capitalism that is unregulated. We are developing it for profit, in a state where monopolies are governing everything that happens with AI. We're dealing with digital lords, with tech giants – they are the ones who are ruling the development.

The sublimation of AI also obfuscates its literal materiality. Let’s remember that AI is a collection of technologies, a collection of infrastructures. When we talk about the cloud, we are already thinking of something sublime, something we can't touch – a white, fluffy cloud in the sky. We think about AI as having the same kind of immaterial character. But that’s not the case. So, I'm always trying to visualize data centers as full of dust and very noisy. Because the moment we obfuscate the materiality of these technologies, we forget their environmental impact.

What kinds of claims is this literature from Davos mobilizing? That AI will allow us to run the economy in a greener and more sustainable manner? Or is the claim that society will itself be reshaped by AI?

There are both kinds of claims, actually. A frequent and prominent claim is the ineluctability of AI. The mythology is that AI is coming, even if we don't want it to come; and that, when it comes, it will completely disrupt society. The idea of disruption is also typical of the first sublime phase. This literature is presenting AI as ineluctable and disruptive. 

The other thing it promises is that AI will bring us a more efficient society. Here it is clearly borrowing from neoliberal language, without even challenging the concept of efficiency. What does efficiency mean for a society versus the economy versus a business, for example? So, that’s the third of the revolutionary claims that we see. AI is coming whether or not we want it to come, and AI is changing society – from how we deliver social services and education, to how we organize our banking and financial systems.

And how is AI supposedly going to disrupt the climate crisis?

By allowing us to forecast the adverse effects of climate change. One example I have been looking into is an application called Treeswift. It's a spinoff of Penn Engineering, and it's sold as an AI-powered forest monitoring system – an enhancement of environmental management – that uses autonomous drones and machine learning to capture data, to capture images, and to create an inventory to map forest biomass. We are told this is how we become more efficient at managing the environment. 

A second example, which falls under the same umbrella discourse of AI helping with climate crisis mitigation and adaptation, is wildfire control. We know that fires will be among the adverse weather events. Again, the idea is that through drones and machine learning, it would be easier to predict the development of fires, and thus easier to control them.

Another huge application for climate informatics, as this field is now being called, is in the agricultural industry. Of course, agriculture industries will be impacted, and want to minimize the adverse effects on farming. How do we do that? We create real-time inventories through sensors, we do mapping, and then we make informed decisions on that basis. The key word here is “optimize” – optimize crops!

The same story applies to water management. In my book, I study a watershed in northern China that exemplifies these strategies. They used a lot of machine learning analysis to identify the climatological and hydrological relationships that were present, and then they forecasted precipitation and streamflow in relation to their water management efforts. These are very applied examples of how AI can be used for the climate.

These types of efforts have traditionally been referred to as “green technology” or “sustainable AI.” But you’ve noted a discursive contradiction here, too.

We have indeed been calling this type of application sustainable AI. The problem is that so-called sustainable AI is not itself a green technology per se, but rather a technology that is helping us manage climate change issues. 

I've been following discussions about how ChatGPT might help with the climate. Can a chatbot that is producing a lot of misinformation, that is prone to hallucination, actually help us? The argument, again, focuses on collecting data and making predictive forecasts. But there's no way the AI developers actually believe this, given all the documented limitations of the chatbot – it's not even one of the more sophisticated types of neural network-based AI – as well as its carbon footprint. So, I find claims that ChatGPT will be useful for climate management to be really problematic.

But there is also a discussion within environmental justice activist groups, who want to use AI like ChatGPT to help them with campaign elements like press releases. This also really surprised me. I understand that it can be helpful with generating very superficial arguments, or with rephrasing topics and themes. But I thought that the climate activist community would be a bit more aware of the superficiality.

To tie these examples back to the topic of populism, perhaps we can lean on an oversimplified definition of populism – as a discourse that identifies an enemy standing in the way – and ask, what would be the enemy in this case? Are there populist elements present in this talk of sustainable AI?

What's happening here is the creation – and constant celebration – of a particular sort of propaganda, rather than the creation of the Other. A more populist discourse might position humans as the Other, and say that we should delegate our political decision making to AI, because humans are less capable than AI. But that's not what we see in those examples.

I don't see the idea of AI as a magic wand as building the Other. What I see, rather, is that AI is being sold to us as a techno-fix to the contemporary form of capitalism. It is just pushing this type of capitalism further and further, which has led to the emergence of these big global tech monopolies – what I call the Digital Lords – that have enormous power over us. They are the ones who have increasingly taken over decision making. So, AI is aligned with this trend, rather than with creating an antagonist. It has more to do with reaffirming the kind of capitalism that is being developed in the West versus the East, and with reaffirming the dominance of the Digital Lords.

But the Other can be abstract. We could look at neoliberalism as a populist discourse that points to the inefficiency of the public sector as the Other – the antagonist might be personified by corrupt or rent-seeking officials, but ultimately it is an abstract force. Could there be a similar dynamic at play in posing AI and technology as more efficient than humans?

Thinking about the populist slogans around neoliberalism, I agree that the enemies are always clear. But we are in the presence of something slightly different with AI. It's connected more with the idea of preserving the status quo, even as it is pushed to its limits by the planetary crisis we are facing. 

I come back again to David Harvey's notion of a technological fix. The fix is presented as helping us to overcome this crisis, but what it actually does is legitimize the status quo in a way that defeats the possibility of any other imaginaries. So, if we want to think of the Other, it would probably be the image of another type of society.

I don't like the juxtaposition of AI versus humans, because I think there is something more happening. It's really about celebrating the system in which we are presently, and about defeating any chance we might have to imagine something different – something that is necessary for the climate, but which would entail a complete reorganization of the current form of capitalism. And it is that which we don't seem to want to do. 

Instead of thinking about alternative cosmologies – instead of following indigenous thinking, or Latin American activists' arguments about environmental justice and the redistribution of resources – we are just avoiding such changes.

Who are the actors deploying these discourses about AI, and how can we identify their shortcomings? What is the political economy behind them, so to speak?

Looking at these emerging techno-imaginaries, we see that they are actually being put forward by major PR companies working for major AI developers. With AI development in particular, even beyond the matter of green discourses, it's worth remembering that the competition is between two blocs. It’s the US versus China, with the EU far behind. And the AI developers winning the race are all based in the US.

When we're talking about actors, it's worth thinking about who actually makes the decisions. Yochai Benkler, who is not at all a communist, has argued that we are missing the opportunity to define and develop AI, because it's completely in the hands of industry. Those who embrace a pluralistic vision of policy making might say that AI discourses are coming from different stakeholders, but the reality is that, when you look at the academics in Europe writing whitepapers on AI, they are also funded by industry. This is how Google's position papers end up being so similar to the discourse coming out of the academy. It's all about the ethics of AI, and nothing else.

There's great alignment there. If we're interested in who is developing a counter-discourse, it might be the unions. Because the unions are very worried about the conditions of the workers, about the fact that workers are losing jobs because of AI. They would likely be the ones to develop a more promising type of discourse.

If we go beyond just AI and think about discourses around greening infrastructure – which could be digital but also extractivist – are there coalitions within the sectors of capital, between Big Oil and Big Tech for instance?

It's always connected. Industries always reinforce one another. On the one hand, you have Google and Microsoft – though not Amazon, which has been lagging behind on greenwashing – declaring they are carbon-negative, thanks to various carbon credit schemes. But then, on the other hand, they are developing all their AI for Big Oil, helping that industry increase its profits significantly over the last five years.

Big Tech firms claim that they are negative in terms of carbon emissions, even as they help oil and gas be more efficient. What does this mean? If they are helping oil and gas be more efficient, it means that they are drilling more and digging more – which of course goes against what they need to do if they want to go green. AI for Big Oil goes against the very possibility of sustainability.

On the issue of green tech, there’s another important new concept: the “twin transition.” The latest communications coming from the European Commission are very clear that the digital revolution and the Green New Deal have to go hand in hand; they are twins. But if we think that there is not enough critique in the field of AI, there is even less in the field of green technology. I really think we lost the battle here already, because nobody is ever questioning if green tech is actually green.

The uncritical discourse is coming from industry, of course, but also from policymakers. In my view, it's even more populist, even more difficult to challenge. The fact that they're using the green label makes it impossible to challenge. This is what Chantal Mouffe is saying in her latest book. While we need the Green Revolution, we can’t let it become green capitalism. I couldn't agree more.

Can you break down green capitalism as it actually exists now? What are some of the hidden costs of AI? As you have explained, not only is it environmentally costly to train large models, but it also distracts us from tackling problems in more politically efficacious ways.

When we look at the environmental cost of AI, it is important to look at the entire AI supply and production chain, despite the difficulties of doing so. We can't really measure the total environmental impact if we don't start with the beginning.

It starts with how we are extracting our resources, with the metals. Where do we get the lithium that we need for batteries, for example? We need to recognize the violence that countries with these resources have been subjected to over the centuries. Look at Chile; look at what's happening in the Congo and elsewhere in Africa. For the first time, the European Commission is recognizing that we have a problem here: in the next ten years, the demand for lithium will increase by 3,000 percent in Europe alone. What will the environmental costs be?

Then, before we even get to the phase of consumption, we have the training of the algorithms. There is a famous study by researchers at the University of Massachusetts Amherst, which I reference in my book in order to contextualize the carbon emissions. What does it mean to produce a language model that emits about 284,000 kilograms of carbon? Well, consider that a flight between Rome and London emits something like 234 kilos per passenger – so a single training run is equivalent to well over a thousand such flights. Yet we are constantly told that we should be limiting our usage of transport, because transport is somehow the one human activity leading to an unsustainable world. The reality is that just training an algorithm is much worse.
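To make the scale of this comparison concrete, here is a back-of-the-envelope calculation using only the two figures quoted in the interview (the per-passenger flight figure is as cited, not an independently verified estimate):

```python
# Rough flight-equivalent comparison, using the two figures cited above:
# ~284,000 kg of carbon for training the language model in the Amherst
# study, and ~234 kg per passenger for a Rome-London flight.
TRAINING_EMISSIONS_KG = 284_000
FLIGHT_EMISSIONS_KG_PER_PASSENGER = 234

flight_equivalents = TRAINING_EMISSIONS_KG / FLIGHT_EMISSIONS_KG_PER_PASSENGER
print(f"One training run ~= {flight_equivalents:,.0f} Rome-London flights")
```

The exact ratio depends on how per-passenger flight emissions are estimated, but the order of magnitude – over a thousand flights for one training run – is the point being made.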

Next, we need to consider the data centers. Data centers' carbon emissions and environmental cost are hotly debated. These are the emissions that have been most affected by these big corporations’ greenwashing, as they race to show that they are building sustainable data centers. Are they sustainable? I'm not sure. Some are better than others, but let's remember that about two-thirds of the world's electricity grid is still based on fossil fuels. So, if you have a data center that is sucking up the electricity from an average place in the world, chances are it is still relying on fossil fuels.

That’s all part of the production phase. Despite the incommensurability of scales between things like individual flights and training an algorithm, is greater awareness needed around consumption?

We do need to establish that the way we consume matters. Using applications on our mobiles means we are using the cloud, and consuming energy. But more significant is the final phase, which is the discarding of technology.

Unfortunately, again, the biggest e-dumps in the world are all in former colonies, or in places that are presently colonies of China. These are places like Bangladesh and Cambodia and Kenya, where the biggest e-dump for Europe is. These countries have no laws on environmental harms, so that's where we dump all our e-waste. But what are the environmental costs of that transportation? What are the environmental harms generated at the local level? We don't calculate them. At the moment, the best measurements we have are for the training phase and for the carbon emissions of specific data centers.

We still lack an estimate of the entire supply chain. If we don't calculate the costs of all the phases – if we don't look at every part of this complex supply and production chain – it's very hard to make assessments. That’s why we need to connect all of the supply chain calculations. But, just by looking at the data that we do have, we already know that we need to question the sustainability of AI in the context of the climate crisis. It is unacceptable to only speak of “green AI” as a way to manage the crisis.

Taking your point about the danger of looking at parts of the supply chain in isolation, let’s zoom in on the training of models. As you say, it's where we have the most data, and it's also a huge point of interest at present, given how many people are encountering these models for the first time through ChatGPT. What do we know about the energy intensity of training these models as they get more sophisticated?

Honestly, collecting this data is very complicated. Why? Because they are not in the public domain. Despite the fact that we are constantly talking about OpenAI, it's not open. It’s a huge investment by Microsoft. The reality is that we don't exactly know how the black box behind ChatGPT is working. We don't actually know how it generates its results, because there is a lot of opacity.

Nobody knows, right – not even OpenAI? That's the nature of the black box.

Some people know better than others. But the interesting thing is that they already calculated the cost of training GPT-3, the model ChatGPT was originally trained on. It equaled the emissions of 610 flights between New York and Paris. That's a very considerable usage of energy, and we know that GPT-4 is much more complex. All of this is just training.

Then, of course, we need to consider the consumption and then the discard. In any case, the biggest assumption is that, again, it's going to be useful for the same type of management of resources, for the same type of forecasting. But I haven't yet seen anything else in terms of how it can be used for fighting climate change.

Climate Change AI, a group based in Canada, published a position paper on how machine learning can be used for resource management. They haven't addressed ChatGPT yet, but they will likely say that it’s even better for prediction and forecasting. I’m quite skeptical about that, because I'm skeptical about its accuracy. Scientists have proven it to be inaccurate on many levels. And again, we don't know what data it has been trained on. How can we trust the forecasting of a system when we don't know the supply of its data?

Can you tell us more about the future of consumer-facing AI? How will geopolitical conflicts, not to mention ecological limits, affect the rate of innovation in these technologies?

Looking at the trajectory of the world, I don't think we have moved that far away from colonialist capitalism. Now we have data colonialist capitalism, if you like, but the colonial legacies stay – especially when it comes to AI. 

We know perfectly well that 99 percent of these applications can't be used in places where the infrastructures are lacking. The lack of infrastructures in former colonies across the Global South is a huge issue, but we seem to not even acknowledge it in this arms race to conquer AI. The European Commission's talk of twin transitions finally noted that we might have an additional little problem with the environmental impact of these technologies. That, for me, was a great victory. 

We usually only see the problem with resources. But, at the same time, we seem to have no problem demanding more of them. Will we be able to keep producing at the rate needed? We know the Donbas is a region with important lithium deposits. We can unfortunately expect that wars will continue to be generated by the need to accumulate resources. There will be constant conflicts to control and exploit these resources.

I'm following what's happening in Chile with great interest, because, for the first time, there is a movement trying to protect these resources. They are trying to prevent big corporations from gaining control of them. But in the end, I think we will keep up the same trends that have built up this data capitalism, this technocracy, and left out the Global South.

How long can it continue? I hope that the climate crisis, with its incessant weather events, will lead us to some reflections. But so far, I haven't seen anything. We have less than ten years, as the recent IPCC report repeated, to keep the increase of temperature below 1.5°C. If we don't act in the next five years, we're missing the opportunity.

I certainly don't see AI as the solution to this problem. The solution is to reorganize how resources are managed and how capitalism functions. But that will require a political will that we seem to lack in the Global North.

What are the alternatives to the solutionist and capitalist logic? Is a decentralized, community-led effort needed to move technology under a different framework altogether?

I am someone who still believes in public service. The idea of completely getting rid of a capitalist framework in the next seven years seems very daunting, so I try to be more pragmatic. The left has often struggled with how to respond to neoliberalism and to capitalism. If we reimagine these types of technologies in the public interest – with the climate emergency at the center of every decision about technology – we would already be making much progress.

There still is an opportunity to think about technology differently. World War II changed the way we see technology, and the mythologies have only been accelerated by the development of surveillance capitalism and data capitalism – but it’s not working. We need to acknowledge that we are not addressing the climate crisis by adopting AI; we address it by stopping the extraction of fossil fuels. That, for me, is the biggest emergency of the next five years.

Then we can turn to the ideas coming from different cosmologies. One of the most interesting is the idea of custodianship, which I associate with Aboriginal and Maori communities. To be a custodian of nature is to live with it, not to exploit it. We have also seen inspiring developments at the level of local communities and cities. This is where we get much of the thinking influenced by cosmologies of custodianship, as well as the Latin American perspectives I also find very promising. A great coalition of cities trying to address these issues locally would be a good step, too.

But we must first address the biggest and most urgent issue: we can't have an economy that is based on fossil fuels. We're simply not going to meet the Paris Agreement target. And once we are beyond the now famous 1.5°C above pre-industrial levels mark, the risk of extreme weather events causing death, displacement, and poverty for millions of people globally will increase significantly.

To avoid this, we need firm decision making at the global level. We have the technologies to do this. We already have what we need. We can use ocean energy, we can use solar energy. We can get to the issues of discarding the panels, but we have to start by leaving fossil fuels in the ground. We cannot embrace a climate solutionism that we already know is unsustainable.

Interviewed by Evgeny Morozov and Ekaitz Cancela

Edited by Marc Shkurovich

Further Readings
Benedetta Brevini, Is AI Good for the Planet?, Polity Press, 2022.

Benedetta Brevini and Lukasz Swiatek, Amazon: Understanding a Global Communication Giant, Routledge, 2020.

Emma Strubell, Ananya Ganesh, and Andrew McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019.

Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan, “Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models,” ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems, 2020.

Vincent Mosco, The Digital Sublime: Myth, Power, and Cyberspace, The MIT Press, 2005.