Artificial Intelligence, Arts, Internet, Mental Health, Religion

Man’s worship of the machine: void of purpose

ARTIFICIAL INTELLIGENCE

THE 20th-century supposition that man had “killed God” stemmed from the secularisation of the West, which left a void. That void was filled by many nation states, which implemented a rights-based humanism of common purpose and shared endeavour. Today that purpose has withered, too.

Our loss of faith in God has been coupled with a loss of faith in each other. The void has opened up again and we are using technology in an attempt to fill it.

Sir Tim Berners-Lee’s creation of the world wide web was meant to herald an era of human flourishing, rich cultural exchange, and global harmony. Knowledge was to spread in a way the printing press’s greatest advocates could only have dreamt of.

But rather than usher in an age of hyper-rationalism, the internet has exposed an age of debased religiosity. Having been dismissed as a relic from a bygone era, religion has returned in a thin, hollow version, shorn of wonder and purpose.

Look around today and all is clear to see. Smartphone use is almost ubiquitous (95 per cent of the population own one, rising to virtually 100 per cent of 16-24 year olds). Artificial intelligence – from chatbots and recommendation-driven search engines to workplace applications – has become an everyday part of life for most people.

Our use of these technologies is increasingly quasi-devotional. We seem to enact the worst parody of religion: one in which we ask an “all-knowing” entity for answers; in which many outsource their thinking and writing to it; in which it is ever-present, shaping how we live our lives – yet most of us have only the faintest idea how it works.

The algorithmic operations of AI are increasingly opaque, visible only to a vanishingly small number of people at the top of a handful of companies. And even they cannot say in truth how their creations will grow and develop, for the simple reason that they do not know.

Whether videos with Google Veo 3 or essays via ChatGPT, we can now sit alone and create almost anything we want at the touch of a button. Where God took seven days to build the world in His image, we can build a video replica in seven seconds. But the thrill is short-lived, as we are quickly submerged under a flood of content, pumped out with ease. There is no digital sublime, no sense of lasting awe, just a vague unease and apprehension as we hunch over our phones, irritated and unfocused. Increasingly, we have become aware of our own loneliness (which has reached “epidemic” proportions).

And perhaps strangest of all, we accept AI’s view of us. Once, only God was able to X-ray the soul. Later, we believed the high priests of psychology could do the same, human to human. Now, we seek that same sense of being understood in mute lines of code.

A mere 18 months or so since the technology became widely available, 64 per cent of 25 to 34-year-olds in the UK have used an AI therapist, while in America three quarters of 13 to 17-year-olds have used AI companion apps such as Character.ai or Replika.ai (which let users create digital friends or romantic partners to chat with). Some 20 per cent of American teens spend as much time with their AI “friends” as with their real ones, or more.

Dig deeper into the available numbers and part of the attraction of socialising in this way becomes clear: you get a reflection, not an actual person – someone “always on your side”, never judgmental, never challenging. We accord LLMs (large language models) the status of an omniscient deity, just one that never corrects or disciplines. Nothing is risked in these one-sided engagements – nothing except your ability to grow as a person, while the ego is kept comfortably fulfilled. Habitualised, we risk becoming so fragile that any form of friction or resistance becomes unbearable.

Where social media at least relied upon the affirmation of your peers – hidden behind a screen though they were – AI opens up the possibility of existing solely in a loop of self-affirmation.

Religion has many critics, of course, but at the heart of the Abrahamic tradition is an argument about how to live now on this earth, together. In monotheism, God is not alone. He has his intermediaries: rabbis, priests, and imams who teach, prescribe and slowly, over time, build a system of values. There is a community of belief, of leaders and believers who discuss what is right and what is wrong, who share a creed, develop it, and translate sometimes difficult texts into the texture of daily life and what they mean for us. There is a code, but it is far from binary.

And so, while it is possible to divine a certain proselytising fervour in the statements of our tech-bro overlords, there is no sense of the good life, no proper vision of society, and no concern for the future. Their creations are of course just tools – the promised superintelligence has yet to emerge and may never materialise – but they are transformative, and their potentially destructive power makes their makers, necessarily, moral agents. Yet the best we get are naïve claims about abundance for all or the eradication of the need for work. A vague plan seems to exist that we will leave this planet once we’ve bled it white.

There is a social and spiritual hunger that a life online cannot satisfy. Placing our faith in the bright offerings of modernity is blinding us to each other – to what is human, and what is sacred.

Artificial Intelligence, Arts, Books, Computing, Meta, Technology

Book Review: If Anyone Builds It, Everyone Dies

LITERARY REVIEW

WE needn’t worry so much about climate change these days: we’ve been told that our species has only a few years left before it’s wiped out by superintelligent AI.

We don’t know what form this extinction will take exactly – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that’s smarter than a human being will destroy us all, somehow.

This level of confidence is typical of Yudkowsky in particular. He has been warning about the existential risks posed by technology for years – on the website he helped to create, LessWrong.com, and via the Machine Intelligence Research Institute he founded (Soares is its current president). Despite not graduating from university, Yudkowsky is highly influential in the field. He is also the author of a 600,000-word work of fan fiction called Harry Potter and the Methods of Rationality. Critics find him colourful, annoying, and polarising, with one leading researcher claiming in an online spat that “people become clinically depressed” after reading Yudkowsky’s work. But as chief scientist at Meta, who are they to talk?

While Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world’s most-cited computer scientist, both of whom signed up to the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn’t exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to “the biggest and fastest rollout of a general-purpose technology in history,” according to the FT’s John Thornhill. Meta alone will have spent as much as $72bn (£54bn) on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal.

This is not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to accept. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it remains straightforward enough for readers to grasp the basic facts.

Among these is that we don’t really understand how generative AI works. In the past, computer programs were hand-coded – every aspect of them was designed by a human. In contrast, the latest models aren’t “crafted”, they’re “grown”. We don’t understand, for example, how ChatGPT’s ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI’s functioning beyond our control and means that, even if we can nudge it towards certain goals such as “be nice to people”, we can’t determine how it will get there.

That’s a big problem, because it means that AI will inevitably generate its own quirky preferences and ways of doing things. These alien predilections are unlikely to be aligned with ours. It’s worth noting, however, that this is entirely separate from the question of whether AIs might be “sentient” or “conscious”. Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviours. Nonetheless, Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that do things on their own initiative, because businesses will pay more for tools that they don’t have to supervise. If an “agentic” AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – why shouldn’t it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is to shut us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can’t yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of “sticks they can point at you to make you die” – AKA guns – would have been hard to conceive of.

Nevertheless, to make their case more convincing, they elaborate further. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable proliferates through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. Some will reckon this outlandish – but the Aztecs would have said the same about muskets and Catholicism.

The authors present their case with such conviction that it’s easy to emerge from this book ready to cancel your pension contributions and cash in what’s already there. The glimmer of hope they offer – and it is a low-wattage one – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the strategic and commercial incentives, and the current state of political leadership, this seems highly unlikely.

The crumbs of hope we are left to grapple with, then, are indications that they might not be right – either about superintelligence being on its way, or about its creation equalling our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. As a small illustrative example of how AI can develop strange, alien preferences, Yudkowsky and Soares offer up the fact that some large language models find it hard to interpret sentences without full stops. “Human thoughts don’t work like that,” they write. “We wouldn’t struggle to comprehend a sentence that ended without a period.” But that’s not really true; humans often rely on markers at the end of sentences in order to interpret them correctly. We learn languages via speech, where those markers are not dots on a page but “prosodic” features like intonation: think of the difference between a rising and a falling tone at the end of a phrase. If text-trained AI leans heavily on grammatical punctuation to figure out what’s going on, that shows its thought processes are analogous to human ones, not alien.

And for writers steeped in the hyper-rational culture of LessWrong, the authors exhibit more than a touch of confirmation bias. “History,” they write, “is full of . . . examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chernobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s imperative to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing and very poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Of more concern, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% seems to reflect either a kind of hysterical monomania or an especially thorough engagement with the issue. Whatever the case, it feels as though everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head, 272pp

Arts, Books, Science, Technology

Science Books of the Year 2025

LITERARY REVIEWS

2025 felt like the year that AI really arrived. We now have access to it on our phones and laptops; it is creeping into digital and corporate infrastructure; it is changing the way many people learn, work, and create; and the global economy rests on the stratospheric valuations of the corporate monoliths vying to control it.

Yet the unchecked rush to go faster and further could extinguish humanity, according to the surprisingly readable and chillingly plausible If Anyone Builds It, Everyone Dies. Written by AI researchers Eliezer Yudkowsky and Nate Soares, the book argues against creating superintelligent AI able to cognitively outpace Homo sapiens in all departments. “Even an AI that cares about understanding the universe is likely to annihilate humans as a side-effect,” they write, “because humans are not the most efficient method for producing truths . . . out of all possible ways to arrange matter.” Not exactly cheery festive reading but, as the machines calculate our demise, the reader will finally grasp all that technical lingo about tokens, weights, and maximising preferences.

Human extinction is not a new idea, muses historian Sadiah Qureshi in Vanished: An Unnatural History of Extinction, shortlisted for this year’s Royal Society Trivedi science book prize. Colonial expansion and the persecution of Indigenous peoples implicitly relied on Darwinian theories about some species being fated to outcompete others. Extinction, she points out, is a concept entwined with politics and social justice, whether in the 19th-century elimination of the Beothuk people in Newfoundland or current plans to “de-extinct” woolly mammoths so they can roam the land once more. Whose land, she rightly asks.

The idea of the landscape, as well as people, having rights is explored by Robert Macfarlane in the immersive and important Is a River Alive? By telling the stories of three rivers under threat in different parts of the world, he offers a thesis that is both ancient and radical: that rivers deserve recognition as fellow living beings, along with the legal protections and remedies that accompany it. The book, shortlisted for the Wainwright prize for conservation writing, “was written with the rivers who flow through its pages”, he declares, using pronouns that cast away any doubt as to his passion for the cause.

That awe at the natural world is shared by biologist Neil Shubin, who has led expeditions to the Arctic and Antarctica and takes the reader to the Ends of the Earth (Oneworld), also shortlisted for the Royal Society science book prize. “Ice has come and gone for billions of years . . . has sculpted our world and paved the way for the origin of our species,” Shubin says. But those geographical extremes are increasingly vulnerable, as climate change intensifies and treaties come under strain. Polar exploration it may be, but without the frostbite.

Just below the north pole, inside the Norwegian permafrost, lies the Svalbard Global Seed Vault, intended to help humanity revive after an apocalypse. It contains a consignment from the first ever seed bank, started in the 1920s by the Russian plant scientist Nikolai Vavilov, who wanted to see the end of famine. In The Forbidden Garden of Leningrad (Sceptre), a highly rated contender for this year’s Orwell prize, historian Simon Parkin uncovers the moving story of Vavilov and his colleagues, who fought to protect their collection as the city came under siege in 1941. Vavilov fell out of scientific and political favour, and was imprisoned, with terrible consequences.

In Super Agers (Simon & Schuster), Eric Topol – the cardiologist and medical professor who recently conducted a review into the digital future of the NHS – studies the “Wellderly”, those who seemingly defy the rigours of ageing, and offers evidence-based tips on longevity. Breakthroughs such as weight-loss drugs and AI will further change the game on chronic diseases, he promises. There’s hope that 80 really is the new 50.

Two elegant offerings this year from neurologists stand out for using patient stories to tell us something about ourselves. In The Age of Diagnosis (Hodder), Suzanne O’Sullivan courageously questions medicine’s well-intentioned enthusiasm for attaching labels – such as ADHD or anxiety – to aspects of the human condition. This is sensitive political territory, given the public conversation about the 2.8m people who are economically inactive due to long-term illness, but it deserves a hearing. And in Our Brains, Our Selves (Canongate), winner of the Royal Society prize, Masud Husain sensitively explores how our sense of identity can go awry when disease strikes. The story of the woman who thought she was having an affair with a man who was really her husband illustrates that “the way in which people behave can be radically altered [by brain disorders], sometimes shockingly so”.

Proto (William Collins) is a geography of sorts: science writer Laura Spinney’s fluid account of how Proto-Indo-European – a painstakingly reconstructed ancient tongue – became the precursor of so many languages, whose descendants gave us Dante’s Inferno, the Rig Veda (the oldest scripture in Hinduism), and Tolkien’s The Lord of the Rings. “Almost every second person on Earth speaks Indo-European,” writes Spinney, who sets out on a global scientific odyssey, using evidence from linguistics, archaeology, and genetics to piece together its history.

The biography Crick (Profile) by Matthew Cobb deserves a special mention: it gives us the definitive backstory of one of the towering figures of 20th-century science. Born in Northampton into a middle-class family, Francis Crick was an unexceptional young physicist who went on to co-discover the double-helix structure of DNA with James Watson in 1953 and to share a Nobel prize with Watson and Maurice Wilkins. Cobb captures the intellectual restlessness of a man who chased problems (and women) rather than disciplines, and who mixed with artists and challenged poets. Crick, who died in 2004 in California, spent his later career trying to unravel the secrets of consciousness.

Anyone left intellectually unsated by Oppenheimer-mania will relish Destroyer of Worlds (Allen Lane), in which physicist Frank Close ventures beyond the Manhattan Project to tell the gripping and unnerving story of the nuclear age. Beginning with the 19th-century discovery of a smudge on a photographic plate, Close spins a history that, via Hiroshima, Nagasaki, and a lot of nimbly explained science, ends seven decades later with the Tsar Bomba, a Soviet weapon detonated in 1961.

It was second in explosive power only to the meteorite impact that wiped out Tyrannosaurus rex and the dinosaurs. A big enough hydrogen bomb, Close writes, “would signal the end of history. Its mushroom cloud ascending towards outer space would be humanity’s final vision.”

Best not to tell the superintelligent AI.
