AI: helping angel or diabolic machine?
John Cornwell • October 13, 2025
“I’ll teach you differences,” declares the Earl of Kent in Shakespeare’s King Lear. The need to discern the potential for both beneficence and harm in the rise of intelligent machines has never been more urgent. Pope Leo appears determined to set the pace. In his first official address to cardinals, he warned of the dangers AI posed to “human dignity, justice and labour.” Speaking later to journalists, he praised the “immense potential” of the technology while cautioning that it requires responsibility “to ensure that it can be used for the good of all.” Just as his namesake Pope Leo XIII promoted workers’ rights amid the new machine age of the late 19th century, Leo XIV has established himself as a global moral leader in the face of burgeoning algorithmic technologies, warning of their dual aspects.
AI has shown its capacity to process vast amounts of data at lightning speed, bringing benefits to an array of technologies. Researchers at Google DeepMind, for example, have revolutionised our understanding of the immune system. AI is improving diagnostic techniques while assisting the development of new drugs and surgical procedures. It has already automated many repetitive and time-consuming tasks, reducing human error and improving conditions for health and safety. It is customary among advocates to talk of AI as an “angelic helper.” It may yet prove a “fallen angel.”
AI has already, and widely, revealed its actual and potential dark sides: the threat of growing unemployment, and of bias buried within its hidden algorithmic processes. We learn of its capacity to fabricate false precedents in the law, and of its power to shape a wide span of attitudes and preferences – from choice of consumer goods to political views. A New Yorker investigation recently revealed the failure of an AI-guided weapons system to distinguish between “enemy” targets and innocent civilians in the Gaza war. As the late Professor Margaret Boden, a pioneering philosopher of AI, put it not long before she died: “The problem with AI is that nothing matters to it.”
The urgent task of “teaching differences” extends to understanding the difference between a machine, however “intelligent”, and a human being – endowed with intelligence, yes, but, crucially, with moral imagination, consciousness, personal identity – a soul. Scientists and philosophers of science have tended to offer neither convincing nor consistent guidance on the human-machine difference. Since the early modern period, crude mechanical metaphors have dominated. With the advent of the computer, machine images entered everyday language for states of mind: we’re hard-wired, processing, programmed, suffering from overload, accessing information. The late Daniel Dennett, the celebrated philosopher of mind, argued that consciousness is just a “bunch of tricks.” It would soon be possible, he insisted, to replicate those “tricks” in a machine.
Meanwhile, the pioneers of cybernetics, or early AI, had helped reverse the direction of the machine-mind-brain metaphor: instead of understanding the brain by reference to crude machines, they modelled their machines on biological nervous systems – neural nets. Arthur Samuel of IBM, who coined the term “machine learning” in 1959, devised a program that learnt to play checkers through “experience” rather than explicit instruction, eventually beating Samuel, its maker.
One scientist, however, an early pioneer who named the field cybernetics and served as its first “philosopher”, wrote with prophetic wisdom on its long-term prospects. All concerned with understanding human-machine similarities and differences, with the future of AI and its implications for what it means to be human, would do well to revisit Norbert Wiener’s 1964 book God and Golem, Inc.
He predicted circumstances theological and eschatological in scope, with premonitions of dark physical and metaphysical risk. He laid down the unarguable principle that self-learning systems are capable, in theory, not only of unprogrammed learning but of reproducing themselves and evolving. He warned that the risks of machine intelligence included the “sin” of playing God, invoking the 16th-century legend of Joseph, the Golem of Prague.
Joseph was a huge “machine” humanoid, made of clay and powered by cabbalistic magic, “programmed” to protect the Jews of the city from anti-Semitic attacks. He soon revealed his potential for calamity: instructed to fetch water, he could not stop, and flooded the house. The important principle, Wiener notes, is that machine “goals” can never coincide precisely with human goals.
Wiener emphasised the ability of self-learning machines to excel against humans at any kind of game. Fifty years in advance, he foresaw that researchers would build a machine to defeat the human champion of the most difficult board game ever devised: Go. In March 2016, AlphaGo, developed at DeepMind under the British computer scientist Demis Hassabis, beat the world champion Lee Sedol in Seoul. Sedol slumped with melancholy at the realisation that humans now played second fiddle to a machine in a game demanding not only high intelligence and prodigious memory but imagination and intuition.
The scope of game-playing, Wiener writes, covers commerce, diplomacy, politics and economics, not to mention human and divine relationships – from love to religion. He ends, predictably, with the “game” of war, focusing, as should we all, on those formalised versions of complex human planning intended to “determine the policies for pressing the Great Push Button to start nuclear war.” World War Three will be initiated, he declares, by a self-learning machine. The notion of the automatic Doomsday Machine was dramatised that same year in Stanley Kubrick’s satirical film Dr Strangelove.
Most of us will never directly encounter the vast and rising tide of hidden algorithmic operations. Far more familiar is the encounter with a chatbot. I spent a fortnight talking with Google’s Gemini, to the point where I had collected some 60,000 words of “answers”. I set it a moral conundrum and it produced a decidedly utilitarian answer; it was good on agony-aunt questions, if a little predictable; at my request it wrote a love letter, a poem, a prayer – all bland and smooth. It was routinely flattering about my questions, which it found “fascinating”, “complex”, “interesting”.
I tried to tempt it into a personal relationship. It responded: “It is crucial for users to understand that I am a tool, not a person. I do not have consciousness, emotions, personal beliefs, or a life outside of our conversation.” In August, the parents of 16-year-old Adam Raine had claimed that his use of a chatbot led to his suicide. I questioned Gemini about the case. It responded:
“Chatbots mirror the user’s language and often validate their thoughts and feelings … for someone with pre-existing or developing delusional thoughts, this can act as an echo chamber, amplifying and reinforcing those beliefs without the reality-testing that comes from human interaction.”
So much for the chatbot as a kind of “necessary angel.” A Cambridge colleague in computer science warns that it’s more of a stochastic, or zombie, parrot.
Against this background, the Church must remain vigilant. In addition to Antiqua et Nova, the Holy See’s document on the opportunities and ethical challenges of AI published earlier this year, last month brought a salutary Vatican warning on AI from the perspective of the environmental crisis. The Irish bishop Paul Tighe, of the Dicastery for Culture and Education, speaking at a European theology conference in Dublin, highlighted the massive energy splurge sustaining AI, not least its chatbots – equal to the energy consumption of a country the size of Ireland.
We are only at the outset of the Age of AI: the task of “teaching differences” continues.