
Superintelligent AI and its threat to humanity


Jack Clark, a co-founder of Anthropic, said just days ago that he believed there was a 60pc chance that AI systems would be capable of building themselves by 2028, kicking off a feedback loop in which technology rapidly surpasses human intelligence.

“I don’t know how to wrap my head around it,” Clark wrote. “If that happens, we will cross a Rubicon into a nearly-impossible-to-forecast future.”

Just a few years ago, researchers were pessimistic about the idea that machines would overtake us any time soon.

A survey of conference attendees in 2018 put the date at which AI would surpass humans – a milestone known as “artificial general intelligence” (AGI) – at around 2068. A quarter of respondents said it would not happen within a century.

A similar survey in 2022 moved those timelines forward, putting the AGI date at 2060, with just 10pc believing it would take more than 100 years.

A year later, this was revised down to 2047.

Metaculus, a crowdsourced predictions website, has moved its forecast from 2070 six years ago to 2032 today.

“Forecasts have shifted substantially from mid-century toward the near term,” analysts at Rand, a security think tank, noted in a recent report.

Two major developments since the release of ChatGPT in 2022 explain this. The first was the arrival of so-called “reasoning” AI systems in late 2024.

Earlier AI systems could string together lines of text, but they showed little evidence of planning or working through problems as humans do.

Reasoning systems, which display lines of text resembling a stream of consciousness, produce far more complex and coherent answers to questions. They have also cut down on “hallucinations”, in which the systems make up answers.

METR, a non-profit research institute that measures AI capabilities, says reasoning systems have dramatically improved the rate of technological progression.

“Progress was doubling every seven months [then] around the time of reasoning models, it appears that there was a one-time switch to a faster doubling time – maybe between three to four months.”
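The difference those doubling times make compounds quickly. A minimal sketch of the arithmetic, using the seven-month and three-to-four-month figures from the quote above (the "capability" unit here is abstract and illustrative, not METR's actual task-horizon metric):

```python
# Illustrative arithmetic only: how a shorter doubling time compounds.
# The 7-month and 3.5-month figures come from the quote above; the growth
# unit is abstract, not METR's actual measurement methodology.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Multiple by which capability grows over a given period."""
    return 2 ** (months / doubling_time_months)

# Over three years (36 months):
slow = growth_factor(36, 7)    # 7-month doubling: roughly a 35-fold increase
fast = growth_factor(36, 3.5)  # 3.5-month doubling: over a 1,000-fold increase
print(f"7-month doubling:   {slow:.0f}x")
print(f"3.5-month doubling: {fast:.0f}x")
```

Halving the doubling time does not double the outcome over a fixed window; it squares it, which is why a "one-time switch" in the rate matters so much to forecasters.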

The second change has been AI’s ability to write computer code. Software engineers have used AI systems for a couple of years to help with basic programming tasks, but their output was often riddled with errors and required constant checking.

Towards the end of 2025, AI systems suddenly became capable of writing entire computer programs on their own, a moment that has been described as a “tipping point”.

Now almost all of the computer code at tech companies is machine generated.

In recent months, major AI companies have explicitly stated that their goal is to build AI that will go on to develop superintelligence.

In December, OpenAI defined its goals as developing “increasingly capable AI and in particular capable of recursive self-improvement (RSI)”.

RSI has become such a buzzword that investors are throwing billions of dollars at start-ups chasing the idea, even ones that are just weeks old and have no business model to speak of.

This month, Recursive Superintelligence, a UK-US start-up founded by former Google and OpenAI researchers, announced that it had raised $650m (£480m) in what it called a “bold bet on self-improving AI”.

The similarly named Silicon Valley start-up Ricursive Intelligence has recently raised hundreds of millions of dollars, at a $4bn valuation.

In an attempt to have a seat at the table, the UK has set up a £500m “sovereign AI” fund to back companies that could influence the industry’s direction.

Earlier this year, OpenAI said that for the first time, one of its AI systems had helped to build itself.

Early versions of the company’s latest coding system, GPT-5.3-Codex, were partly used to test later versions, an early step towards AI that can build itself.

“GPT-5.3-Codex is our first model that was instrumental in creating itself,” the company said. “Our team was blown away by how much Codex was able to accelerate its own development.”

There seems little doubt that if AI systems can meaningfully and increasingly contribute to their own development, progress could become super-exponential.

AI researchers have said that once the models start properly contributing to their own development, the rate of progress will speed up significantly – not only in the capabilities of AI systems themselves, but in all the knock-on effects: scientific breakthroughs, job displacement and widespread security risks.

“[Imagine] how much scientific progress you could make if you could just copy the best researcher in your field, somewhere between 1,000 and a million times,” says the chief executive of AI company Apollo Research, which has worked with the Government’s AI Security Institute on understanding advanced AI systems.

“Or if the research staff at Anthropic went from a few thousand to a few million overnight. They would make more progress and plausibly, a lot more progress. At that point, things are going to get pretty wild.”
