Arts, BBC, Broadcasting, Culture, Government, Media, Society, Technology

For the BBC to survive, some critical questions must be answered

UK MEDIA

WE are now overwhelmed by the number of ways in which we can view content. It can be difficult to know where to begin: Netflix, Apple TV, Amazon, YouTube, TikTok, and Instagram are all just a click away.

This profound transformation of the entertainment and digital media industry has fragmented audiences and altered the future prospects of the UK broadcasters that previously dominated our viewing experience. The BBC in particular faces fundamental questions as it enters discussions with the Government on charter renewal.

The BBC is not alone in facing critical questions about its future. Channel 4, too, is caught up in the complexity as the globalised entertainment industry reorders itself around a handful of gigantic platforms.

Most in Britain still like to talk about “our” broadcasters as if they are permanent fixtures of national life. In reality, they are now islands under threat in an ocean dominated by American tech companies and our addictive relationships with our smartphones.

In the United States, the industry has drawn the obvious conclusion. If you want to survive in this era, you need more scale. That is why we see major studios and platforms circling each other, exploring combinations that would have been unthinkable just a decade ago. When a studio as diverse and storied as Paramount concludes that it needs to combine with a bigger partner like Warner Bros simply to flourish in the streaming age, it tells us something about the brutal economics of global entertainment today.

Yet in the UK, our public service broadcasters risk remaining stuck in old models and old ways of thinking. They are still organised around linear schedules, legacy silos, and institutional pride, rather than around the single hard question that now matters: how do we build something big and compelling enough to matter in a digital world where the viewer is always one click away from bypassing British content altogether? At precisely the moment when courageous transformation is required, we risk clinging to structures designed for a previous century.

For the BBC, this question is existential. The age profile of its audiences keeps creeping upwards. Younger viewers are drifting to platforms whose names barely existed when the last licence fee settlement was negotiated. The corporation has made great efforts to pivot to digital and to find ways of connecting with young audiences, but the time has come to acknowledge that on its own it cannot achieve what it needs to with that demographic. It would benefit immensely from a new, deep and durable relationship with younger audiences at scale.

For Channel 4, the risk is different but just as stark. It has always prided itself on being smaller, nimble, and more disruptive. But in a world of global streaming, “small and nimble” can start to look like under-capitalised and vulnerable. The advertising market is fragmenting. Production costs are rising. The channel’s ability to take creative risks depends on a financial base that is no longer guaranteed. It needs scale – not to become safe and bland, but to ensure it still exists a decade from now.

As charter renewal begins, questions on the BBC’s future are starting to revolve around the possibility of advertising and subscription-based services.

But there is a different solution: a merger between the BBC and Channel 4.

This would address both of their problems at once. The BBC would gain more of the younger, increasingly diverse audiences it desperately needs for a long-term future. Channel 4 would gain the scale and security it needs to keep commissioning the bold, distinctive work that has always been its hallmark.

Together they could build a single, world-class public service media platform that is genuinely capable of competing in a global market.

Needless to say, there would be objections. How would the advertising model work? Would Channel 4’s irreverent tone be smothered by BBC bureaucracy?

Such concerns are real but could be overcome with political and institutional courage. It is far easier for ministers to tinker at the margins than to rethink the entire architecture of public service broadcasting. It is more comfortable for executives to protect their fiefdoms than to imagine themselves as part of something larger. But comfort is not a strategy. In the absence of bold change and reform, both organisations will drift slowly towards irrelevance, with younger audiences slipping further away.

The question, then, is not whether a merger between the BBC and Channel 4 would be complicated. Of course it would. The most pressing question is whether we are prepared to let two British institutions wither on the margins of a global entertainment market, or whether we are willing to give them the scale and strength they need to thrive.

In an age of giants, muddling through as we are is the most dangerous option of all. That can only lead to demise.

TWO

THE terms for the decennial review of the BBC’s Royal Charter have been set. Unsurprisingly, the Government has chosen to avoid asking the difficult question of whether the licence fee continues to make sense. While raising other forms of revenue will be considered, the regressive tax on those consuming live media is going to stay.

This is a missed opportunity. The licence fee has become an embarrassing anachronism. The notion that a licence is required to watch live content produced by broadcasters charging their own independent fees to consumers is a bizarre legacy of early arguments over radio broadcasting. If it has failed to keep pace with the developments in media of the last century, it has certainly failed to keep pace with those in the new millennium.

Yet the BBC is financially reliant upon this structure, and desperate to retain it. This unique and privileged position allows the organisation to charge not only its own customers but also those of its direct competitors. The results, however, are wholly negative.

The BBC is simultaneously desperate to retain public approval and to maintain the line that it produces public services which would otherwise have no home. These objectives are in clear tension: the first drives it to produce the sort of content commercial stations would make anyway; the second, a sort of Reithian public education. In practice, the former objective seems to dominate, and the latter instinct is redirected into nakedly political exercises that promote the views of the organisation’s staff.

It is difficult, if not impossible, to argue that this activity justifies the subsidies given to the BBC through the licence fee – particularly when that activity increasingly drags Britain into disrepute.

President Trump’s lawsuit against the broadcaster for misrepresentation – and the long, shameful list of incidents demonstrating bias on foreign policy issues – illustrate how problems for the state broadcaster can become problems for the Foreign, Commonwealth and Development Office.

Given this, it would have been better to rip the sticking plaster off before the Government confirmed the BBC’s autonomy over the licence fee. It should have made clear to the BBC that it must prepare for a future without it, and begin to separate the state from the broadcaster. This is, after all, the long-term direction of travel. As things stand, the inevitable has been postponed, and the adjustment will be all the harder when it eventually arrives.

Artificial Intelligence, Arts, Books, Computing, Meta, Technology

Book Review: If Anyone Builds It, Everyone Dies

LITERARY REVIEW

WE shouldn’t worry so much these days about climate change because we’ve been told that our species only has a few years before it’s wiped out by superintelligent AI.

We don’t know what form this extinction will take exactly – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that’s smarter than a human being will destroy us all, somehow.

This level of confidence is typical of Yudkowsky, in particular. He has been warning about the existential risks posed by technology for years – on the website he helped to create, LessWrong.com, and via the Machine Intelligence Research Institute he founded (Soares is the current president). Despite not graduating from university, Yudkowsky is highly influential in the field. He is also the author of a 600,000-word work of fan fiction called Harry Potter and the Methods of Rationality. Critics find him colourful, annoying, and polarising, with one leading researcher saying in an online spat that “people become clinically depressed” after reading Yudkowsky’s work. But given that researcher is chief scientist at Meta, who is he to talk?

While Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world’s most-cited computer scientist, both of whom signed up to the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn’t exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to “the biggest and fastest rollout of a general-purpose technology in history,” according to the FT’s John Thornhill. Meta alone will have spent as much as $72bn (£54bn) on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal.

This is not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to accept. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it remains straightforward enough for readers to grasp the basic facts.

Among these is that we don’t really understand how generative AI works. In the past, computer programs were hand coded – every aspect of them was designed by a human. In contrast, the latest models aren’t “crafted”, they’re “grown”. We don’t understand, for example, how ChatGPT’s ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI’s functioning beyond our control and means that, even if we can nudge it towards certain goals such as “be nice to people”, we can’t determine how it will get there.

That’s a big problem, because it means that AI will inevitably generate its own quirky preferences and ways of doing things. These alien predilections are unlikely to be aligned with ours. It’s worth noting, however, that this is entirely separate from the question of whether AIs might be “sentient” or “conscious”. Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviours. Nonetheless, Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that do things on their own initiative, because businesses will pay more for tools that they don’t have to supervise. If an “agentic” AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – why shouldn’t it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is shutting us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can’t yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of “sticks they can point at you to make you die” – AKA guns – would have been hard to conceive of.

Nevertheless, in order to make things more convincing, they elaborate further. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable proliferates through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. Some will reckon this to be outlandish – but the Aztecs would have said the same about muskets and Catholicism.

The authors present their case with such conviction that it’s easy to emerge from this book ready to cancel and cash in your pension contributions. The glimmer of hope they offer – and it is of low wattage – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the strategic and commercial incentives, and the current state of political leadership, this seems highly unlikely.

The crumbs of comfort we are left with, then, are the indications that they might not be right – either that superintelligence really is on its way, or that its creation equals our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. As a small illustrative example of how AI can develop strange, alien preferences, Yudkowsky and Soares offer up the fact that some large language models find it hard to interpret sentences without full stops. “Human thoughts don’t work like that,” they write. “We wouldn’t struggle to comprehend a sentence that ended without a period.” But that’s not really true; humans often rely on markers at the end of sentences in order to interpret them correctly. We learn languages via speech, so those markers aren’t dots on the page but “prosodic” features like intonation: think of the difference between a rising and a falling tone at the end of a phrase. If text-trained AI leans heavily on grammatical punctuation to figure out what’s going on, that shows its thought processes are analogous, not alien, to human ones.

And for writers steeped in the hyper-rational culture of LessWrong, the authors exhibit more than a touch of confirmation bias. “History,” they write, “is full of . . . examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chernobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s imperative to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing and very poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Of more concern, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% seems to reflect either a kind of hysterical monomania or an especially thorough engagement with the issue. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head, 272pp

Arts, Books, Science, Technology

Science Books of the Year 2025

LITERARY REVIEWS

2025 felt like the year that AI really arrived. We now have access to it on our phones and laptops; it is creeping into digital and corporate infrastructure; it is changing the way many people now learn, work, and create; and the global economy rests on the stratospheric valuations of the corporate monoliths vying to control it.

Yet, the unchecked rush to go faster and further could extinguish humanity, according to the surprisingly readable and chillingly plausible If Anyone Builds It, Everyone Dies. Written by computer scientists Eliezer Yudkowsky and Nate Soares, the book argues against creating superintelligent AI able to cognitively outpace Homo sapiens in all departments. “Even an AI that cares about understanding the universe is likely to annihilate humans as a side-effect,” they write, “because humans are not the most efficient method for producing truths . . . out of all possible ways to arrange matter.” Not exactly cheery festive reading but, as the machines literally calculate our demise, the reader will finally grasp all that technical lingo about tokens, weights, and maximising preferences.

Human extinction is not a new idea, muses historian Sadiah Qureshi in Vanished: An Unnatural History of Extinction, shortlisted for this year’s Royal Society Trivedi science book prize. Colonial expansion and the persecution of Indigenous peoples implicitly relied on Darwinian theories about some species being fated to outcompete others. Extinction, she points out, is a concept entwined with politics and social justice, whether in the 19th-century elimination of the Beothuk people in Newfoundland or current plans to “de-extinct” woolly mammoths so they can roam the land once more. Whose land, she rightly asks.

The idea that the landscape, as well as people, might have rights is explored by Robert Macfarlane in the immersive and important Is a River Alive? By telling the stories of three rivers under threat in different parts of the world, he offers a thesis that is both ancient and radical: that rivers deserve recognition as fellow living beings, along with the legal protections and remedies that accompany it. The book, shortlisted for the Wainwright prize for conservation writing, “was written with the rivers who flow through its pages”, he declares, using pronouns that cast away any doubt as to his passion for the cause.

That awe at the natural world is shared by biologist Neil Shubin, who has led expeditions to the Arctic and Antarctica and takes the reader to the Ends of the Earth (Oneworld), also shortlisted for the Royal Society science book prize. “Ice has come and gone for billions of years . . . has sculpted our world and paved the way for the origin of our species,” Shubin says. But those geographical extremes are increasingly vulnerable, as climate change intensifies and treaties come under strain. Polar exploration it may be, but without the frostbite.

Just below the North Pole, inside the Norwegian permafrost, lies the Svalbard Global Seed Vault, intended to help humanity revive after an apocalypse. It contains a consignment from the first ever seed bank, started in the 1920s by Russian plant scientist Nikolai Vavilov, who dreamed of ending famine. In The Forbidden Garden of Leningrad (Sceptre), a highly rated contender for this year’s Orwell prize, historian Simon Parkin uncovers the moving story of Vavilov and his colleagues, who fought to protect their collection as the city came under siege in 1941. Vavilov fell out of scientific and political favour, and was imprisoned, with terrible consequences.

In Super Agers (Simon & Schuster), Eric Topol – the cardiologist and medical professor who recently conducted a review into the digital future of the NHS – studies the “Wellderly”, those who seemingly defy the rigours of ageing, and offers evidence-based tips on longevity. Breakthroughs such as weight-loss drugs and AI will further change the game on chronic diseases, he promises. There’s hope that 80 really is the new 50.

Two elegant offerings this year from neurologists stand out, for using patient stories to tell us something about ourselves. In The Age of Diagnosis (Hodder), Suzanne O’Sullivan courageously questions medicine’s well-intentioned enthusiasm for attaching labels – such as ADHD, or anxiety – to aspects of the human condition. This is sensitive political territory, given the public conversation about the 2.8m people who are economically inactive due to long-term illness, but it deserves a hearing. And in Our Brains, Our Selves (Canongate), winner of the Royal Society prize, Masud Husain sensitively explores how our sense of identity can go awry when disease strikes. The story of the woman who thought she was having an affair with a man who was really her husband illustrates that “the way in which people behave can be radically altered [by brain disorders], sometimes shockingly so”.

Proto (William Collins) is a geography of sorts: science writer Laura Spinney’s fluid account of how Proto-Indo-European – a painstakingly reconstructed ancient tongue – became the precursor of so many languages, whose descendants gave us Dante’s Inferno, the Rig Veda (the oldest scripture in Hinduism), and Tolkien’s The Lord of the Rings. “Almost every second person on Earth speaks Indo-European,” writes Spinney, who sets out on a global scientific odyssey that uses evidence from linguistics, archaeology, and genetics to piece together its history.

The biography Crick (Profile) by Matthew Cobb deserves a special mention: it gives us the definitive backstory of one of the towering figures of 20th-century science. Born in Northampton into a middle-class family, Francis Crick was an unexceptional young physicist who, with James Watson and Maurice Wilkins, went on to codiscover the double-helix structure of DNA in 1953 and win a Nobel prize. Cobb captures the intellectual restlessness of a man who chased problems (and women) rather than disciplines, and who mixed with artists and challenged poets. Crick, who died in 2004 in California, spent his later career trying to unravel the secrets of consciousness.

Anyone left intellectually unsated by Oppenheimer-mania will relish Destroyer of Worlds (Allen Lane), in which physicist Frank Close ventures beyond the Manhattan Project to tell the gripping and unnerving story of the nuclear age. Beginning with the 19th-century discovery of a smudge on a photographic plate, Close spins a history that, via Hiroshima, Nagasaki, and a lot of nimbly explained science, ends seven decades later with the Tsar Bomba, a Soviet weapon detonated in 1961.

It was second in explosive power only to the meteorite impact that wiped out Tyrannosaurus rex and the dinosaurs. A big enough hydrogen bomb, Close writes, “would signal the end of history. Its mushroom cloud ascending towards outer space would be humanity’s final vision.”

Best not to tell the superintelligent AI.
