
Superintelligent AI and its threat to humanity

THREE

Not everybody is convinced. AI researchers at Meta told Clark that his forecast was “improbable”, arguing that even if today’s bots can successfully write code, without human supervision they would be likely to end up in “recursive failure loops”.

The idea of a demographic explosion in the number of AI bots may also be at odds with the vast amounts of energy needed to power them, and with a growing public backlash against the data centres where they would reside.

It would also be forgivable to find the idea impossibly abstract.

To date, the hundreds of billions of dollars poured into AI and all the supposed technical progress have meant little to many people, let alone the hazy predictions about what may or may not happen in the future.

AI has dented small pockets of graduate hiring and been blamed for some large-scale corporate layoffs, but few jobs have been fully automated away. Nor has the technology shown much sign of delivering research breakthroughs or a science-fiction catastrophe. If AI were going to be so transformative, wouldn't we be seeing more signs of it by now?

A recently published study from the Centre for British Progress found “no evidence that [AI] has replaced jobs at scale in the UK”. Certain professions, however, have been hit more than most. The Society of Authors says that 37pc of illustrators, 43pc of translators, and 86pc of authors have reported decreased earnings as a result of AI.

This reflects what some AI experts call a "jagged frontier", in which AI appears superhuman at complicated tasks yet incompetent at basic ones. But the gap between these two extremes can be deceptively narrow.

Tech history suggests that once AI starts approaching human capability in a domain, it soon becomes overwhelmingly better.

For decades, chess computers stood no chance against professional players. In 1996, Garry Kasparov, the world chess champion, narrowly defeated IBM's Deep Blue machine. A year later, Deep Blue beat Kasparov, and within a few years grandmasters had no chance at all against the best chess computers.

“Electricity, computing, and the internet each promised to change everything and, in the long run, they did. But for decades following their introduction, the data suggested they had changed almost nothing,” the Centre for British Progress noted.

Some researchers predict that something similar will happen with the building of AI itself. They believe that once AI and humans are contributing equally to the technology's development, it could take just a year for the process to become entirely automated. The assumption is that millions of AI researchers would then be working on advancing the technology, rather than the thousands of humans doing so today. This, they argue, would massively accelerate the pace of AI progress.

Yet, there are signs that some of AI’s more dangerous capabilities are now starting to emerge.

Anthropic recently restricted access to its new Mythos model to a handful of tech companies after the system found thousands of security flaws in computer systems and web browsers.

If made widely available to hackers, Mythos could have unleashed chaos.

While some have brushed this off as exuberant marketing, initial results have suggested that Mythos could be devastating in the wrong hands.

Firefox, the web browser maker that is part of a cohort with access to the system, said this month that it had fixed 423 security flaws in its first four weeks with Mythos – almost twice as many as it fixed last year.

In recent days, Google security researchers said that criminals had attempted to launch a major cyber attack by exploiting a flaw they had discovered with the help of AI. While the culprits were not named, the tech giant said hackers from China and North Korea had shown a "particular interest" in using AI to carry out cyber attacks.

Some forecasters have tried to map out what the path to all-powerful AI might look like.

The AI Futures Project – run by a former OpenAI researcher – has sketched out an unnerving scenario in which AI overtakes human intelligence in around 2027.

By the end of 2028, the project predicts, this could lead to cancer being cured and a fully automated economy in which humans need not work.

Within another three years, humanity could be wiped out or could be on its way to colonising the galaxy, depending on what path governments and AI companies take along the way.
