A Doomsayers' Contest: Huxley, Orwell, and Others
Author: Oliver Lukitsch, inspired by Markus Peschl with a few proudly admitted AI-generated sprinkles.
Today, the proliferation of information is out of control. A driving force behind this is the emergence of accessible AI tools for content generation. What used to be a costly, resource-intensive exercise, such as generating news that closely resembles credible media content, is now a simple, quickly performed task with tools like ChatGPT. It takes only a few minutes to create a credible piece of content – credible in appearance, though not necessarily true. Such content can range from genuinely useful pieces to outright fake news that can be weaponized in a variety of ways.
Two great thinkers of the 20th century would have had different perspectives on the looming threats of AI: Aldous Huxley and George Orwell. In this short blog post, we revisit the potential pitfalls of the age of hyper-accelerated content creation and ask which of their two dystopian visions is the more pertinent. Rather than offer a simple answer, we will lay out the options and leave it to you, the thinking reader, to do the rest.
A quick disclaimer up front: Dystopian “speculative design” can anticipate real dangers, but it’s up to us to make sure they don’t happen. There is much to be excited, curious, and hopeful about when considering the benefits of generative AI. Don’t take this article as an attempt to side with the doomsayers. Neither naive positivity nor doom and gloom should guide our thinking.
George Orwell and Surveillance Capitalism
Orwell’s dystopias are known for their reflection and critique of hierarchy, power, and political oppression. His fear was that an elitist political class would take away the freedom of its subordinates, against their will and against their choice. Those who suffer such oppression are usually conscious of how the system curtails their freedom of choice, privacy, movement, and speech. It is brute force, exerted physically and psychologically, that keeps the people in check.
Before the advent of hyper-accessible generative AI systems such as ChatGPT or Midjourney, AI was indeed seen primarily as a driver of “surveillance capitalism”, a term famously coined by the social psychologist Shoshana Zuboff.
AI plays a significant role in surveillance capitalism. The vast amounts of data collected by corporations are processed using AI algorithms to derive insights, patterns, and predictions about individuals’ behaviors, preferences, and interests. These insights are then used to target personalized advertising and design user experiences. AI enables the automation and optimization of monitoring practices, allowing companies to extract value from personal data at unprecedented scale and efficiency.
Most importantly, AI technologies are being used to enhance surveillance capabilities. Facial recognition systems, predictive policing algorithms, and automated surveillance systems are examples of AI applications used for surveillance purposes. These technologies enable continuous monitoring and analysis of individuals’ activities, contributing to the expansion and intensification of surveillance in various domains, such as public spaces, workplaces, and online platforms – and they can be used by government agencies (predictive policing) as well as by corporations.
For any Orwellian scenario to unfold according to its internal logic, surveillance capitalism comes in handy, and so does AI: it provides the authorities and the top players in a social hierarchy with the means to predict the behavior of their subordinates. If you can predict people, you can control them. Alleged “crimes” (defined as such by the authorities) can be prevented before they occur, and counterfactual predictions can create conditions that nudge people to act in ways that benefit those in power.
“The Age of Surveillance Capitalism”, Shoshana Zuboff’s groundbreaking account of how the tech giants are amassing almost feudal power against a backdrop of threatened democracy, aligns closely with the Orwellian vision of a dystopian future marked by pervasive surveillance, control, and the erosion of individual autonomy.
However, the emergence of generative AI systems has changed, or at least should change, this narrative.
What Would Huxley Say? Social Cooling and Manufacturing Consent
Huxley’s dystopian fiction depicts subtler means of forcing people into submission. Instead of using brute force and threats to make people obey, a totalitarian regime may simply give people what they want, and give it so generously that they become enslaved to their own pre-existing desires. People are drowned in the apparent benevolence of their leaders. For Huxley, the most important form of defiance is therefore not resistance to top-down oppression, but resistance to one’s own desires: self-liberation.
The emergence of generative AI and the resulting proliferation of content suggests an approaching Huxleyan scenario. It is the sheer amount of content, produced in exactly the way we feel we need it, that could drive us into a form of techno-submission: a submission to AI tools driven by cognitive laziness and our disposition to take the path of least resistance.
Taken at face value, this scenario looks more innocent than the Orwellian one, but it can be more devastating. Fear and material oppression can indeed break us down, yet they necessarily leave the oppressed room to resist – or, more accurately, to think about and imagine resistance. Exercising control Orwell-style means declaring certain actions impermissible. In doing so, the oppressor ironically keeps alive the very blueprint for rebellion: to present ideas as forbidden is to keep those ideas alive.
Likewise, the mere idea that AI-based technologies are used to predict and control our lives is a major driver of resistance against such technologies.
On a Huxleyan reading, however, we must pay attention to how AI systems feed us the content we have been craving: deeply specialized, salient, tailor-made content. We are willing consumers of such systems. And why shouldn’t we be?
Generative AI is, above all, a powerful linguistic tool. It takes natural language as input and, among other things, produces natural language as output. In doing so, it intrudes on one of humanity’s most sacred mental domains: the ability to read and write. And that can be a problem.
The Sacred Skill: Language and Critical Thinking
As Alard von Kittlitz recently pointed out in Die Zeit, the human mind is not naturally predisposed to reading and writing. While we acquire spoken language extremely quickly in childhood, reading and writing are learned slowly and painstakingly over many years before we reach a certain level of proficiency. This is exhausting, hard work for all of us, but it pays off, because this cultural practice is the very backbone of high culture, human self-transcendence, and creative thought.
Generative AI can take a significant weight off our shoulders: it can write for us. Our hard-earned writing skills can be outsourced. But why should we give up this cultural achievement and let machines maintain it for us? Shouldn’t we be deeply interested in, and motivated by, the fact that we can do it ourselves – that we have endowed ourselves with this semiotic skill?
I am not so sure. Even as I write this, I must admit that I am guilty of drawing on AI tools to aid me in my struggle – and they really did help. Besides, we usually do not write and read to preserve our standing as a high culture. Somewhat fittingly, the University of Graz recently recovered the oldest book ever created – its text was about beer. Since most of us do not engage in artistic writing, this finding is telling: we write to meet more basic needs, more primitive desires. I am being paid for typing these lines, too. And cuneiform writing in ancient Mesopotamia was used for bookkeeping.
So the Huxleyan prospect that the flood of AI-created content will be greeted with open arms is very real. The fear is that it will distract us from our deepest, yet hardest-to-cultivate, human needs: our need for growth, our need to challenge ourselves, and our need for cognitive resilience. The consequence would be twofold: first, we might lose our sense of critical engagement with issues we need to address creatively or adaptively; second, we might lose the very means of remaining critical in the face of social power structures. In fact, the emergence of AI is itself exactly such an issue to be reflected on.
AI could be used not only to control us. It could also erode the critical language skills we need for reflection, and with them our ability to engage with content that puts us in dissonance. The problem is that it will feel good at first – and that is the real insidiousness of the “Brave New World”.
A final note: AI utopias are no less important than dystopian visions. This article is not a manifesto siding with the doomsayers. But weighing dystopia against utopia will sharpen our vision of the future ahead. No doubt it will be interesting.