Computing, Digital Economy, Technology

Quantum computing threatens crypto’s future

CRYPTOCURRENCIES

Intro: Bitcoin et al are becoming increasingly insecure – and not just because of price volatility

In Robert Harris’s The Second Sleep, the novelist imagines a world hundreds of years in the future where humanity has regressed to a medieval standard of living, population, and way of thinking.

Towards the end of the book, we learn what might have caused this calamity: not a pandemic, meteorite strike, or nuclear war, but a complete collapse in the digital economy.

From payment systems to Just-In-Time supply chains and the wealth of machinery that sustains us, the modern economy is almost wholly dependent on digital instruction.

If everything went down all at the same time, a state of anarchy would rapidly establish itself. In the ensuing chaos, it would be every man for himself with likely devastating consequences for lives and civilisation more widely.

It is hard to imagine what combination of circumstances might completely and lastingly disable the digital economy. However, cyber threats are very much a real, present, and fast increasing danger.

You only need to look at what happened to Jaguar Land Rover last year to see the dire consequences when firewalls are breached. A cyber attack shut the UK car maker down for more than a month.

Growing resources and time are being devoted to further securing these systems in more or less every walk of life – a prime example of the diseconomies of technology if ever there was one – and not least in the wild-west world of cryptocurrencies, wholly dependent as they are on complex encryption to safeguard value and assign ownership rights. The threat posed to these assets by advances and developments in quantum computing has long been a main topic of debate in the Bitcoin community. The issue has recently gone viral after being raised in Christopher Wood’s much-followed Greed and Fear investment newsletter.

As the head of equity strategy at the investment bank Jefferies, Wood points out that deriving a public key from a private key is computationally simple. Bitcoin and other forms of cryptocurrency rely for their security on the assumption that the reverse operation would take trillions of years, even for a sophisticated supercomputer. But as Wood says, “This asymmetry collapses with the arrival of cryptographically relevant quantum computers, potentially reducing the time to derive a private key from a public key to mere hours or days.”
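The asymmetry Wood describes can be sketched with a toy one-way function. This is illustrative only – real Bitcoin keys use the secp256k1 elliptic curve, and the numbers below are deliberately tiny so the "attack" finishes at all:

```python
# Toy illustration of public/private key asymmetry (NOT real Bitcoin
# cryptography). Going forward is one fast modular exponentiation;
# going backward means brute-forcing the discrete logarithm, which
# becomes hopeless as the numbers grow.

P = 2_147_483_647          # a small prime modulus (2^31 - 1)
G = 16807                  # a generator base (7^5)

def public_from_private(priv: int) -> int:
    """Fast: a single modular exponentiation."""
    return pow(G, priv, P)

def private_from_public(pub: int) -> int:
    """Slow: try every candidate exponent in turn."""
    acc = 1
    for priv in range(1, P):
        acc = (acc * G) % P
        if acc == pub:
            return priv
    raise ValueError("private key not found")

priv = 123_456
pub = public_from_private(priv)          # instant
assert private_from_public(pub) == priv  # already slow, even at toy scale
```

At real key sizes (256-bit), the brute-force loop would outlast the universe on classical hardware – that is the asymmetry a cryptographically relevant quantum computer, running Shor's algorithm, would collapse.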

The launch by Microsoft of the Majorana One quantum chip may have accelerated so-called “Q-Day” – the date when quantum computers become powerful enough to break most current public-key encryption – by several years. A report published early last summer by Chaincode Labs estimated that up to 50pc of all Bitcoins in circulation (four to 10 million of them) could be vulnerable to theft, with reused addresses and “Satoshi-era” wallets – named after Bitcoin’s anonymous founder – thought to be the most exposed.

They call Bitcoin “digital gold”. A better description might be fool’s gold, for the whole construct depends crucially on a constantly expanding pool of demand.

Once that demand stabilises or falls, the whole store-of-value illusion begins to collapse. Whether or not the threat from quantum computing is real, it is giving plenty of people pause for thought. It also appears to be quite seriously damaging attempts by Donald Trump’s White House to normalise crypto as a respectable asset class.

BlackRock flagged quantum computing as a key risk when launching its iShares Bitcoin Trust ETF last year while El Salvador, the first country to adopt Bitcoin as legal tender, has seen fit to split its reserves of the virtual currency between 14 different addresses as insurance against potential theft.

Wood himself was an early convert to crypto, but he appears to have lost the faith, reallocating the entire 10pc of his synthetic portfolio once occupied by Bitcoin to physical gold and gold-mining stocks.

Not that you need to crack the encryption code to steal Bitcoin. Contrary to the sales pitch, cryptocurrencies are already one of the most insecure forms of money around – and not just because their price is so volatile.

North Korean hackers managed to swipe $1.5bn (£1.1bn) from the crypto exchange Bybit in February last year. For the year as a whole, a total of $3.5bn of Bitcoin is reckoned to have been stolen. Particularly vulnerable are those who brag about their crypto wealth on social media: extortion or kidnap can quickly follow.

And because the whole purpose of crypto is to be free of government oversight and interference, the funds are virtually impossible to recover once a wallet has been opened and drained by someone else.

In any case, some quite extreme solutions to the quantum threat have been proposed, including simply burning the vulnerable coins in an attempt to preserve the currency’s underlying integrity.

Extreme, yes, and also a root-and-branch betrayal of individual property rights – a bit like telling half of all sterling account holders that their money had been cancelled. In such circumstances, the pound would never be trusted again.

It is not just crypto that is at risk from quantum computing. The entire payments system, which is similarly just numbers on a computer screen, would also be exposed.

Harris’s imagined societal collapse in his novel may not be as fanciful as it seems.

From its origins in the Caesar cipher, encryption has been a constantly evolving and improving form of security. Maybe money, both crypto and fiat, can indeed be made quantum resilient.

But there is no compelling answer to the quantum threat yet, and the two underlying forces that have sustained Bitcoin and its mini-mes from the start – worries about debasement of fiat currencies and the appeal of self-custody – lose their value if it turns out your wallet can easily be stolen. There has never been an exact correlation between the price of Bitcoin and that of gold, but until the past several days the two seemed to have completely decoupled, with the gold price surging ahead over the past year while Bitcoin was flat or falling.

Digital gold it is not, and that’s possibly got something to do with the threat posed by quantum computing.

Despite enthusiastic backing from the Trump White House, crypto has so far failed to achieve the credibility among institutional investors that its promoters were hoping for.

For all its faults, fiat currency – backed by the taxpayer and underwritten by the central bank – continues to be a more secure form of money than the snake oil of a decentralised ledger.

Like almost everything else, crypto has become part of the culture wars divide, such that true believers are far more likely to be on the American Right than the Left.

Yet, any hopes Trump might have had of enriching himself, his backers, and his supporters by fully embracing the crypto revolution have so far proved misplaced.

Recent falls have wiped out the entirety of the gains seen under Trump’s swashbuckling, deregulatory agenda. It’s not the end of the world – but nor is it the reinvention of money once promised or hoped for.

Artificial Intelligence, Arts, Books, Computing, Meta, Technology

Book Review: If Anyone Builds It, Everyone Dies

LITERARY REVIEW

WE shouldn’t worry so much these days about climate change because we’ve been told that our species only has a few years before it’s wiped out by superintelligent AI.

We don’t know what form this extinction will take exactly – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that’s smarter than a human being will destroy us all, somehow.

This level of confidence is typical of Yudkowsky, in particular. He has been warning about the existential risks posed by technology for years – on the website he helped to create, LessWrong.com, and via the Machine Intelligence Research Institute he founded (Soares is the current president). Despite not graduating from university, Yudkowsky is highly influential in the field. He is also the author of a 600,000-word work of fanfiction called Harry Potter and the Methods of Rationality. Critics find him colourful, annoying, and polarising, with one leading researcher claiming in an online spat that “people become clinically depressed” after reading Yudkowsky’s work. But as chief scientist at Meta, who are they to talk?

While Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world’s most-cited computer scientist, both of whom signed up to the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn’t exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to “the biggest and fastest rollout of a general-purpose technology in history,” according to the FT’s John Thornhill. Meta alone will have spent as much as $72bn (£54bn) on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal.

This is not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to accept. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it remains straightforward enough for readers to grasp the basic facts.

Among these is that we don’t really understand how generative AI works. In the past, computer programs were hand coded – every aspect of them was designed by a human. In contrast, the latest models aren’t “crafted”, they’re “grown”. We don’t understand, for example, how ChatGPT’s ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI’s functioning beyond our control and means that, even if we can nudge it towards certain goals such as “be nice to people”, we can’t determine how it will get there.
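The “crafted” versus “grown” distinction the authors draw can be shown with a deliberately trivial sketch (the function names and training setup here are invented for illustration): the crafted program states its behaviour in an explicit, inspectable rule, while the grown one arrives at the same behaviour through numbers fitted by trial and error, with no line of code stating the rule at all.

```python
# "Crafted": the behaviour is written down by a human and can be read off.
def crafted_double(x: float) -> float:
    return 2.0 * x

# "Grown": the behaviour emerges from examples via gradient descent.
# Nothing in this code says "double the input" - that knowledge ends up
# encoded in the fitted parameter w.
def grow_double(examples, steps=2000, lr=0.05):
    """Fit y = w*x to (x, y) pairs by minimising squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x   # nudge w to shrink the error
    return w

# "Training data": inputs paired with their doubles.
w = grow_double([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
assert abs(w - 2.0) < 1e-6        # the rule was learned, never written
```

A single learned number is easy to inspect; a frontier model has hundreds of billions of them, which is the sense in which a vital part of its functioning sits beyond our understanding.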

That’s a big problem, because it means that AI will inevitably generate its own quirky preferences and ways of doing things. These alien predilections are unlikely to be aligned with ours. It’s worth noting, however, that this is entirely separate from the question of whether AIs might be “sentient” or “conscious”. Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviours. Nonetheless, Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that do things on their own initiative, because businesses will pay more for tools they don’t have to supervise. If an “agentic” AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – why shouldn’t it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is shutting us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can’t yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of “sticks they can point at you to make you die” – AKA guns – would have been hard to conceive of.

Nevertheless, in order to make things more convincing, they elaborate further. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable proliferates through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. Some will reckon this to be outlandish – but the Aztecs would have said the same about muskets and Catholicism.

The authors present their case with such conviction that it’s easy to emerge from this book ready to cash in your pension. The glimmer of hope they offer – and it is low-wattage – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the strategic and commercial incentives, and the current state of political leadership, this seems highly unlikely.

The crumbs of hope we are left with, then, are indications that they might not be right – either that superintelligence is on its way, or that its creation equals our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. As a small illustrative example of how AI can develop strange, alien preferences, Yudkowsky and Soares offer up the fact that some large language models find it hard to interpret sentences without full stops. “Human thoughts don’t work like that,” they write. “We wouldn’t struggle to comprehend a sentence that ended without a period.” But that’s not really true; humans often rely on markers at the end of sentences in order to interpret them correctly. We learn languages via speech, so those markers are not dots on the page but “prosodic” features like intonation: think of the difference between a rising and a falling tone at the end of a phrase. If text-trained AI leans heavily on grammatical punctuation to figure out what’s going on, that shows its thought processes are analogous, not alien, to human ones.

And for writers steeped in the hyper-rational culture of LessWrong, the authors exhibit more than a touch of confirmation bias. “History,” they write, “is full of . . . examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chernobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s imperative to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, and very poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Of more concern, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% seems to reflect either a kind of hysterical monomania or an especially thorough engagement with the issue. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head, 272pp
