Time magazine’s publication of an article entitled “Pausing AI Developments Isn't Enough. We Need to Shut it All Down” was a turning point in the recent debates surrounding artificial intelligence. The author, AI theorist Eliezer Yudkowsky, faced immediate pushback from figures inside and outside of the tech world and was described as an “alarmist” whose position has “no basis in reality.”
At first glance, this response seemed sensible. Yudkowsky makes some extreme claims in the Time piece, which can be summed up as follows: If AI progress continues to build on the gains of ChatGPT and the significantly more advanced large language model GPT-4, humanity will most likely be wiped out.
The hyperbolic tone of Yudkowsky’s article, alongside his publication history in Harry Potter fan fiction and lack of university credentials, has made him quite easy to dismiss. However, not long after the Time piece was published, academic AI pioneer and Turing Award recipient Geoffrey Hinton announced his resignation from his research position with Google in order to speak openly about the existential risk posed by rapid advancements in AI.
It seems that Hinton’s decision was influenced at least in part by Yudkowsky’s clarion call. Hinton told Wired earlier this month that “I listened to him [Yudkowsky] thinking he was going to be crazy. I don't think he's crazy at all.” Hinton still supports the continuation of AI research and is himself invested in AI companies, but his decision to sound the alarm about the dangers of AI suggests that Yudkowsky’s position is not nearly as far-fetched as it may have initially appeared.
The technical details of the hypothetical artificial intelligence explosion described by Yudkowsky and Hinton are complex and beyond the scope of this essay. Social and political strategies to promote AI safety have been skillfully discussed elsewhere. The questions I want to address here are of a more philosophical nature: Does our understanding of human essence (or lack thereof) impact our attitude toward the development of artificial intelligence? Beyond technical, social, and political problems, what role does the broader scientific worldview play in thinking about AI and the future?
Telos
To help answer these questions, we need to take a detour into the history of philosophy and science. I will discuss the idea of telos and teleology before turning to the intellectual foundations of AI research to suggest that the rejection of telos and teleological thinking has been and continues to be a driving force behind the reckless development of AI. We should therefore get the meaning of these terms straight.
Telos is a Greek word meaning “end,” “completion,” or “purpose” and is most often associated with Aristotle’s philosophy. Aristotle believed that processes in nature are not entirely arbitrary but, rather, tend toward certain ends: The acorn has an inherent tendency to grow into an oak tree and reaches completion or fulfillment as an oak tree. What’s more, Aristotle argues that the final form of an oak, human being, or other teleological form is more than the sum of its parts:
Just as one who discusses the parts or equipment of anything should not be thought of as doing so in order to draw attention to the matter, nor for the sake of the matter, but rather in order to draw attention to the overall shape (e.g. to a house rather than bricks, mortar, and timbers); likewise one should consider the discussion of nature to be referring to the composite and the overall substantial being rather than to those things which do not exist when separated from their substantial being.
We might describe this idea of a holistically conceived “substantial being” as the teleological endpoint of an entity’s development. Aristotle goes on to link telos to goodness, stating that “not every stage that is last claims to be an end (telos), but only that which is best.”
The fundamental goodness of such a teleological end implies another translation of telos: “limit.” The goodness of a human being is, perhaps paradoxically, inextricably linked to his or her limitations. Telos implies not only growth toward a final form, but also a sense of satisfaction in that final state of completion. Put another way, teleological processes tend toward particular ends and find peace in the fulfillment of those ends.
This idea of telos and the conception of limits it entails may sound strange in today’s world, where the transgression of limits—biological, ethical, ecological—is often celebrated. However, Aristotelian teleology exerted considerable influence on philosophical and theological thinking in Europe and the Middle East for centuries after Aristotle’s death. It was only with the Scientific Revolution in Europe that teleological conceptions of the world were banished from mainstream intellectual culture.
Charles Darwin’s nineteenth-century theory of evolution was the nail in the coffin of telos. From the oak tree to the human being, life forms were no longer understood as the completion of a teleological process but, instead, became mere way stations on the interminable, stochastic, and morally neutral evolutionary bridge to nowhere.
The theory of evolution has gone a long way toward explaining (or explaining away, depending on your perspective) many aspects of biological reality and change. Rarely, however, do its adherents ask what was lost in the wholesale repudiation of teleological thinking promoted by the field of evolutionary biology. Nor do they consider that some idea of teleology seems necessary to develop a compelling theory of consciousness, cognition, and value.
Philosopher Bertrand Russell captured the downstream effects of this anti-teleological paradigm shift in the mid-twentieth century. He stated that Aristotle’s “belief in purpose [telos] as the fundamental concept in science” was a sign of “decay” in the Hellenic world, a decay that resulted in the proverbial Dark Ages and was only reversed with the Scientific Revolution in the sixteenth and seventeenth centuries. For Russell, telos is not only a key obstacle to the incorporation of modern scientific methodology into philosophy, but also itself an indicator of broader cultural and intellectual decline. In other words, teleology is not just wrong, but to take it seriously is to accelerate civilizational decay. Russell’s remarks represent the position that had become a new form of “common sense” in Western intellectual culture, a position that provides the philosophical foundation (and justification) for research in artificial intelligence.
Transhumanism
What does all of this have to do with the recent furor surrounding Yudkowsky, Hinton, and the threat of AI takeover? The excommunication of teleology from the realm of acceptable scientific opinion brought with it the idea that humanity has not reached anything resembling an end point or limit as a species. In the twentieth century, information theory, cryptology, and cybernetics took the Scientific Revolution’s assault on teleology to its logical conclusion: transhumanism. Thinkers in these domains began to contest the view that human beings are inherently limited creatures and forcefully rejected the idea that human fulfillment could be found within limits. As an alternative, transhumanists advocate radical genetic and cognitive enhancements to transform humans into “posthumans.” For transhumanists, there is no best, only better.
This transhumanist ideology, especially common in Silicon Valley, implies that humans can and should use their generalized intelligence to take the evolutionary process into their own hands. Think Elon Musk’s Neuralink or, more to the point, Yudkowsky’s own views on transhumanism. In a 2007 essay, Yudkowsky argues for human augmentation through novel technology. He states that “if it’s possible” humans should strive for a “million-year lifespan” and, similarly, if the technology were available to do so, active efforts should be taken to raise children’s IQs. Yudkowsky notes elsewhere that the transhumanist science fiction he read as a child convinced him that he would be immortal. This belief was only shaken when his research in the field of artificial intelligence alignment led him to conclude that human extinction was a more likely outcome of AI development than immortality.
Hinton, too, is concerned with the question of immortality. “I think it's a very bad idea for old white men to be immortal,” he states in a recent interview on the MIT podcast In Machines We Trust. Note that Hinton is not opposed to immortality as such; he appears only to take issue with “old white men” (themselves already a shrinking demographic group) becoming immortal. Though not an overt transhumanist like Yudkowsky, Hinton still stands awestruck before the transhuman fantasy of immortality. He reports the “good news” that “we have figured out how to build beings that are immortal” to an adoring audience at the Jeffrey Epstein-funded MIT Media Lab.
These “immortal” AIs, according to Hinton, can store information across multiple hardware systems and thereby exist independently from the physical limits of any individual storage device. While the transhuman promise of immortality for augmented humans may be a pipe dream, machine intelligences can now partake in a digital immortality all their own. In a deeply ironic turn of events, a technology—AI—that has promised not only radical improvements to human life but even transhuman transcendence of human limitations now poses, in the eyes of some of its most famous champions, an existential threat to humanity itself.
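Hinton’s claim here is concrete rather than mystical: the “mind” of a neural network is nothing but its learned weights, which are ordinary data that can be serialized, copied, and reconstituted on any compatible machine. A minimal sketch of this hardware independence, in PyTorch (the tiny model and file name are purely illustrative stand-ins, not anyone’s actual system):

```python
import torch

# A stand-in for any trained network: its entire "identity" lives in its weights.
model = torch.nn.Linear(10, 2)
torch.save(model.state_dict(), "weights.pt")  # persist the learned state to disk

# On entirely different hardware, an identical network is reconstituted
# from the same weights; no single device's survival matters.
clone = torch.nn.Linear(10, 2)
clone.load_state_dict(torch.load("weights.pt"))
```

As long as one copy of the weights survives somewhere, the “being” can always be brought back; that is the whole of the immortality Hinton describes.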
Despite this dire state of affairs, technotopians like Yudkowsky still cling to the transhuman solution. Yudkowsky advocates augmenting human intelligence as a strategy to stop AI takeover: “If you get to the point where you’re turning into gods yourselves, you’re not quite home free, but you’re sure past a lot of the death.” To do this, he informs us that we would need “a crash program on augmenting human intelligence to the point where humans can solve [AI] alignment.” For all of his AI doom, Yudkowsky still believes in the technological transcendence of telos as our greatest hope of survival.
Artificial intelligence and transhumanism are not one and the same thing. However, AI thought leaders overwhelmingly exhibit philosophical affinities with transhumanism. Why might this be? AI research is premised on the assumption that intelligence can and should be isolated and abstracted away from the embodied existence and natural limitations of the human being. (It is worth noting that “can” and “should” are largely indistinguishable concepts in the tech world, with scientists like Hinton stating that “If I hadn’t done it, somebody else would have.”)
After intelligence is instrumentalized as a functional set of capabilities unmoored from human emotion and bodily limitation, it can be developed either for transhumanist augmentation or toward the goal of artificial general intelligence (AGI). The impenetrable biological complexity of human beings, alongside the recent deep learning revolution, has rendered AGI more feasible than an augmented posthumanity, but the motive behind both projects is the same.
In both cases, telos must be rejected: Either we make ourselves “smarter” through cognitive enhancements or we make intelligent machines that are smarter than us in order to enhance our lives. In the first instance, transhumanists seek to push directly past any teleological endpoint of the human being. In the second, they displace and project their anti-teleological drives onto inanimate machines that (they assumed) would obediently do their bidding but might instead destroy the planet in the process.
The idea that intelligence might be teleologically limited for very good reasons, and that it should stay that way, is anathema to the transhumanist project. Even if transhumanism is itself a “failed promise,” as philosopher Susan B. Levin contends, its operative ideological principles still lie at the foundation of wanton AI development and the threat of AGI ruin.
Enough is Enough
Enough is enough. I mean this in a double sense. First, in the sense in which Yudkowsky meant it when he wrote “shut it all down” in his Time magazine piece. Rather than devising ever more complicated ways to reform and regulate AI research in hopes of “aligning” future superintelligences with amorphously defined human values, it would be much more responsible, as Yudkowsky argues, to enact an indefinite moratorium on AI development at this pivotal juncture.
Even if Yudkowsky greatly overstates the probability of AI takeover, a significantly lower risk would still be unacceptable. An AI apocalypse could wipe out everyone, “including children who did not choose this and did not do anything wrong,” and Yudkowsky is not the only one concerned. Indeed, even if Yudkowsky and Hinton are completely off their rockers and there is a zero percent chance of AI takeover, we are reminded that AI “will at minimum be responsible for mass unemployment, fakery on an unprecedented scale and the breakdown of shared notions of reality.” In either scenario, we must say: enough is enough.

In the second sense, “enough is enough” is to be taken quite literally. What I mean is that human telos is enough in and of itself.
Yes, we are limited, both physically and mentally, but it is precisely those limits that help make us complete. What if, instead of lamenting and resisting those limits, we immanently explored them to their fullest? What would happen if we grew to accept and even revere the teleological process that has produced the richness of humanity, a richness that cannot be reduced to the functional concept of intelligence? What might yet be discovered if we relinquished the restless quest for immortality and artificial intelligence and directed our human energies in other directions? What if we took as our starting point the maxim: “the human being is enough”?
It is difficult to say “enough is enough,” in both of these senses, but I see no other way forward. Either we succumb to the false promise of transhumanism and its evil twin of AGI, or we accept the telos of the human being. The choice is ours.