Artificial Intelligence, Arts, Internet, Mental Health, Religion

Man’s worship of the machine: void of purpose

ARTIFICIAL INTELLIGENCE

THE 20th-century supposition that man had “killed God” stemmed from the secularisation of the West, which left a void. That void was filled by many nation states, which implemented a rights-based humanism of common purpose and shared endeavour. Today that purpose has withered, too.

Our loss of faith in God has been coupled with a loss of faith in each other. The void has opened up again and we are using technology in an attempt to fill it.

Sir Tim Berners-Lee’s creation of the world wide web was meant to herald an era of human flourishing, of rich cultural exchange, and global harmony. Knowledge was to spread in a way the printing press’s greatest advocates could only have dreamt of.

But rather than usher in an age of hyper-rationalism, the internet has exposed an age of debased religiosity. Having been dismissed as a relic from a bygone era, religion has returned in a thin, hollow version, shorn of wonder and purpose.

Look around today and it is plain to see. Smartphone use is almost ubiquitous (95 per cent of the population own one, rising to virtually 100 per cent of 16-24-year-olds). Artificial intelligence, whether in chatbots, search engines, or workplace applications, has become an everyday part of life for most people.

Our use of these technologies is increasingly quasi-devotional. We seem to enact the worst parody of religion: one in which we ask an “all-knowing” entity for answers; many outsource their thinking and writing; it is ever-present, shaping how we live our lives – yet most of us have only the faintest idea how it works.

The algorithmic operations of AI are increasingly opaque, intelligible only to a vanishingly small number of people at the top of a handful of companies. And even they cannot say in truth how their creations will grow and develop, for the simple reason that they do not know.

Whether videos with Google Veo 3 or essays via ChatGPT, we can now sit alone and create almost anything we want at the touch of a button. Where God took seven days to build the world in His image, we can build a video replica in seven seconds. But the thrill is short-lived, as we are quickly submerged under a flood of content, pumped out with ease. There is no digital sublime, no sense of lasting awe, just a vague unease and apprehension as we hunch over our phones, irritated and unfocused. Increasingly, we have become aware of our own loneliness (which has reached “epidemic” proportions).

And perhaps strangest of all, we accept AI’s view of us. Once, only God was able to X-ray the soul. Later, we believed the high priests of psychology could do the same, human to human. Now, we seek out that same sense of understanding in mute lines of code.

A mere 18 months or so since the tech became widely available, 64 per cent of 25 to 34-year-olds in the UK have used an AI therapist, while in America, three quarters of 13 to 17-year-olds have used AI companion apps such as Character.ai or Replika.ai (which let users create digital friends or romantic partners they can chat with). Some 20 per cent of American teens spend as much or more time with their AI “friends” as they do with their real ones.

Dig deeper into the numbers and part of the attraction of socialising in this way becomes clear: you get a reflection, not an actual person – someone “always on your side”, never judgmental, never challenging. We treat LLMs (Large Language Models) with the status of an omniscient deity, just one that never corrects or disciplines. Nothing is risked in these one-sided engagements – nothing, that is, apart from your ability to grow as a person; the ego, meanwhile, is amply fulfilled. Habituated to such comfort, we risk becoming so fragile that any form of friction or resistance becomes unbearable.

Where social media at least relied upon the affirmation of your peers – hidden behind a screen though they were – AI is opening up the possibility to exist solely in a loop of self-affirmation.

Religion has many critics of course, but at the heart of the Abrahamic tradition is an argument about how to live now on this earth, together. In monotheism, God is not alone. He has his intermediaries: rabbis, priests, and imams who teach, prescribe and slowly, over time, build a system of values. There is a community of belief – leaders and believers who discuss what is right and what is wrong, who share a creed, develop it, and translate sometimes difficult texts into the texture of daily life. There is a code, but it is far from binary.

And so, while it is possible to divine a certain proselytising fervour in the statements of our tech-bro overlords, there is no sense of the good life, no proper vision of society, and no concern for the future. Their creations are of course just tools – the promised superintelligence is yet to emerge and may never materialise – but they are transformative, and their potentially destructive power makes their makers moral agents whether they acknowledge it or not. Yet the best we get are naïve claims about abundance for all or the eradication of the need for work. A vague plan seems to exist that we will leave this planet once we have bled it white.

There is a social and spiritual hunger that a life online cannot satisfy. Placing our faith in the bright offerings of modernity is blinding us to each other – to what is human, and what is sacred.

Artificial Intelligence, Arts, Books, Computing, Meta, Technology

Book Review: If Anyone Builds It, Everyone Dies

LITERARY REVIEW

WE needn’t worry so much about climate change these days, because we’ve been told that our species has only a few years left before it’s wiped out by superintelligent AI.

We don’t know what form this extinction will take exactly – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that’s smarter than a human being will destroy us all, somehow.

This level of confidence is typical of Yudkowsky in particular. He has been warning about the existential risks posed by technology for years – on LessWrong.com, the website he helped to create, and via the Machine Intelligence Research Institute he founded (Soares is its current president). Despite never graduating from university, Yudkowsky is highly influential in the field. He is also the author of a 600,000-word work of fan fiction called Harry Potter and the Methods of Rationality. Critics have found him colourful, annoying, and polarising, with one leading researcher saying in an online spat that “people become clinically depressed” after reading Yudkowsky’s work. But given that the researcher in question is chief scientist at Meta, who is he to talk?

While Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world’s most-cited computer scientist, both of whom signed up to the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn’t exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to “the biggest and fastest rollout of a general-purpose technology in history,” according to the FT’s John Thornhill. Meta alone will have spent as much as $72bn (£54bn) on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal.

This is not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to accept. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it remains straightforward enough for readers to grasp the basic facts.

Among these is that we don’t really understand how generative AI works. In the past, computer programs were hand coded – every aspect of them was designed by a human. In contrast, the latest models aren’t “crafted”, they’re “grown”. We don’t understand, for example, how ChatGPT’s ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI’s functioning beyond our control and means that, even if we can nudge it towards certain goals such as “be nice to people”, we can’t determine how it will get there.

That’s a big problem, because it means that AI will inevitably generate its own quirky preferences and ways of doing things. These alien predilections are unlikely to be aligned with ours. It’s worth noting, however, that this is entirely separate from the question of whether AIs might be “sentient” or “conscious”. Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviours. Meanwhile, Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that act on their own initiative, because businesses will pay more for tools they don’t have to supervise. If an “agentic” AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – why shouldn’t it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is shutting us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can’t yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of “sticks they can point at you to make you die” – that is, guns – would have been hard to conceive of.

Nevertheless, to make the threat more concrete, they elaborate further. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable proliferates through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. Some will reckon this outlandish – but the Aztecs would have said the same about muskets and Catholicism.

The authors present their case with such conviction that it’s easy to emerge from this book ready to cash in your pension. The glimmer of hope they offer – and it is a low-wattage glimmer – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the strategic and commercial incentives, and the current state of political leadership, this seems highly unlikely.

The crumbs of comfort we are left with, then, are indications that the authors might not be right – either that superintelligence is on its way, or that its creation equals our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. As a small illustrative example of how AI can develop strange, alien preferences, Yudkowsky and Soares offer up the fact that some large language models find it hard to interpret sentences without full stops. “Human thoughts don’t work like that,” they write. “We wouldn’t struggle to comprehend a sentence that ended without a period.” But that’s not really true; humans often rely on markers at the end of sentences in order to interpret them correctly. We learn languages via speech, so those markers are not dots on the page but “prosodic” features like intonation: think of the difference between a rising and a falling tone at the end of a phrase. If text-trained AI leans heavily on grammatical punctuation to figure out what’s going on, that suggests its thought processes are analogous, not alien, to human ones.

And for writers steeped in the hyper-rational culture of LessWrong, the authors exhibit more than a touch of confirmation bias. “History,” they write, “is full of . . . examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chernobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s imperative to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that their warnings don’t represent the scientific consensus, this is a rapidly changing and very poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Of more concern, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% seems to reflect either a kind of hysterical monomania or an especially thorough engagement with the issue. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head, 272pp

Artificial Intelligence, Arts, Britain, Economic, Government, Intellectual Property, Legal, Society, Technology

Press freedom, copyright laws, and AI firms

BRITAIN

AMONG Britain’s greatest contributions to Western culture are press freedom and copyright law. Established side by side more than 300 years ago, they underpinned the Enlightenment, the Industrial Revolution, and much of the social change that followed.

They facilitated the free flow and exchange of ideas, opinions, literature and music, and offered legal safeguards for creators and publishers against having their work stolen or plagiarised.

Today, these sacred principles are at risk as never before.

In their headlong rush to develop all-embracing artificial intelligence systems, big-tech firms seem determined to ride roughshod over the intellectual property rights of those whose material they want to appropriate.

Musicians, authors, film and TV companies, artists and media organisations are already seeing their work lifted and used without permission. As the struggle for AI dominance intensifies, this larceny is becoming increasingly brazen.

Worse still, the UK Government appears to be taking the side of the tech giants over the creatives.

In a consultative document on possible changes to copyright law, it has proposed four options. Of these, its “preferred” option is to give a new exemption to AI firms, allowing them to develop their machine learning with copyrighted material without permission unless the holder actively opts out of the process.

Ministers have claimed such a change would give creators more control, but this is an illusion.

One of the strengths of British copyright is that it’s automatic. Works do not have to be registered to be protected from being stolen.

That means individual artists and the smallest local news sites have the same rights and protections as the largest publishers.

Permitting AI firms to take what they want unless rights have been reserved is like telling burglars they can walk into homes unless there is a note on the door asking them not to. In any case, there is no effective technical means of reserving rights and creatives will often be unaware their material has been “scraped”.

It would be far better to strengthen rather than weaken copyright legislation so it can be enforced quickly and effectively against infringements by AI developers. The onus should surely be on them not to break the law in the first place.

Everyone understands that AI is a vast and growing phenomenon which will be of enormous benefit in fields such as healthcare and business efficiency.

Many people will also appreciate the Government’s desire for Britain to be at the forefront of this technological revolution. But that cannot be used as cover to trample over crucial rights and freedoms.

Ingesting the entire output of the British music industry or mass-market news websites will not contribute anything to medical research.

Neither will it do much for our economy, as most of the profits generated by the tech companies will be taken out of the country.

It is both surprising and troubling that the Government has done no analysis of the economic impact of its proposal.

The UK has the world’s second largest creative sector, generating an estimated £126 billion a year and supporting 2.4 million jobs. Relaxing copyright law would cause it incalculable damage.

We also have a vibrant, free and plural media – for now at least.

Our traditional press is in a state of rapid flux, as print gradually gives way to new digital platforms and revenue streams. But the fundamentals remain the same: to inform and entertain the public with fair, accurate, challenging and well-written journalism.

In this age of conspiracy, disinformation, and fake news, trusted sources of information and commentary are more important than ever. But it costs money to produce them, and if every article can immediately be copied without payment, then generating the revenue needed to sustain reliable journalism becomes impossible.

A free and independent media has long been a cornerstone of our democracy, but it is under very serious threat. We take it for granted at our peril.
