Art, Artificial Intelligence, Arts, Culture, Society, Technology

AI-generated slop is theft from real artists

CREATIVE ART

Intro: Art generated by online tools is painfully bland and is leading us down the path to cultural stagnation

Pablo Picasso, one of the most influential artists of the 20th century, is famously credited with declaring that “great artists steal”. The Spanish genius assimilated African mask imagery into modern art, and many other greats throughout history have done something similar. Essentially, this is how creativity works. But behind their masterpieces lie struggle, friction, and unique vision. Enter an entirely different beast: the theft perpetrated by proliferating AI engines, which is killing creativity, harming real artists, and fuelling an epidemic of unoriginality.

By feeding prompts to generators such as Midjourney or DALL-E, people can produce images on screen in just a few seconds. Anyone can conjure up a Vincent van Gogh-styled still life or a Leonardo da Vinci-inspired selfie and exhibit it online at once. Social media platforms such as X are filled with fans of this technology who declare: “AI art is art.” But saying so doesn’t make it true.

In fact, AI “art” doesn’t even exist – it is an illusion. AI models work on pattern recognition, not artistic decision-making. An “AI artist” may feed prompts to this technology, but they cannot be considered the author of its output, which has simply been remixed from ready-made imagery without thinking, feeling, intent, or ingenuity. Absent from AI “art” is a creative process, which should take more than a few seconds. This is apparent in the low-quality, generic slop that is produced: lacking any distinctiveness of style or voice, it can offer only a smooth homogeneity.

It bypasses craft, which is what great artists develop – with brushes and paint, pencils and paper – over months, years, and even decades. AI artists celebrate the power of technology to make creativity accessible; this accessibility is the central tenet of their case. True craft, however, takes dedication, consistent practice, and experimentation.

John Constable not only worked tirelessly inside his studio but made countless studies en plein air – as revealed in Tate Britain’s current exhibition, Turner & Constable. Celebrating two of Britain’s greatest painters, it shows what being an artist really takes. On display are watercolours, oils and sketches, as well as paint-covered palettes, paintboxes, and even a sketching chair.

Among Constable’s masterpieces is his 1836 work Hampstead Heath with a Rainbow, in which prismatic hues glide through menacing clouds. His technique looks effortless but is underpinned by genius-level skill. And behind it, unseen by the average enthusiast, are more than 100 cloud studies he created in an attempt to capture the clouds’ transient energy.

Where AI generates pictures in an instant, Constable was committed to an ongoing process; the experience gained through observation and documentation was ultimately of immense benefit to him.

Similarly, JMW Turner made around 37,000 sketches of landscapes he’d seen with his own eyes. Determined to evoke the raw power of nature – from blazing sunsets to howling storms – he pushed realism towards abstraction with an excitement that’s visible in his energetic brushstrokes.

In contrast to Constable and Turner’s radical compositions, AI’s aesthetic is flat, twee, and often old-fashioned. Defined by a saccharine palette of candy colours and hazy tones, automatically generated landscapes are hollow, sanitised, and no match for Britain’s great painters. Working some 200 years ago, they painted emotive, not idealised, places of both personal and historic significance.

What is more, both Constable and Turner began their paintings by looking, really observing the world. This fundamental act is absent from the process of AI’s so-called artists, who resemble a client giving instructions to a graphic designer more than an artist painting at an easel. AI engines are also doing real harm to contemporary artists and their hard work.

Among those who have already experienced its damaging effects is the Australian painter Kim Leutwyler, who says app-generated portraits have copied her distinctive style. “My issue isn’t with AI itself, but with the unethical way it has been trained without artists’ consent,” she said. “The right to opt in or out of having your data scraped for AI training should be fundamental, not optional.” This view is widely held across the creative industries.

AI, then, is pilfering from artists, the very people it relies on. It harms us all with its blandness. Rather than moving art forward, as Turner and Constable did in their day, it contributes to what has been termed “cultural stagnation”.

Anyone infuriated by Hollywood’s endless remakes of viewer favourites will recognise the effect: AI slop has a similar impact, threatening both originality and individual thinking. And because future AI models will draw only on more of this generated material, they will continue to produce typical rather than unique visions.

AI art isn’t art; it’s a mirage, and it won’t be looked at for longer than a doom-scrolling second. In our world of efficiency and productivity, creative pursuits are one of the very few remaining places where human endeavour is vital. Behind the brushstrokes of Turner and Constable are years of looking, thinking, making and struggle, and that’s what creative art is.

Artificial Intelligence, Arts, Internet, Mental Health, Religion

Man’s worship of the machine: void of purpose

ARTIFICIAL INTELLIGENCE

THE 20th-century supposition that man had “killed God” stemmed from the secularisation of the West, which left a void. That void was filled by nation states which implemented a rights-based humanism of common purpose and shared endeavour. Today that purpose has withered, too.

Our loss of faith in God has been coupled with a loss of faith in each other. The void has opened up again and we are using technology in an attempt to fill it.

Sir Tim Berners-Lee’s creation of the world wide web was meant to herald an era of human flourishing, rich cultural exchange, and global harmony. Knowledge was to spread in a way the printing press’s greatest advocates could only have dreamt of.

But rather than usher in an age of hyper-rationalism, the internet has exposed an age of debased religiosity. Having been dismissed as a relic from a bygone era, religion has returned in a thin, hollow version, shorn of wonder and purpose.

Look around today and it is plain to see. Smartphone use is almost ubiquitous: 95 per cent of the population own one, including close to 100 per cent of 16-24-year-olds. Artificial intelligence, from chatbots to recommendation engines and workplace applications, has become an everyday part of life for most people.

Our use of these technologies is increasingly quasi-devotional. We seem to enact the worst parody of religion: one in which we ask an “all-knowing” entity for answers; many outsource their thinking and writing; it is ever-present, shaping how we live our lives – yet most of us have only the faintest idea how it works.

The algorithmic operations of AI are increasingly opaque, observable to a vanishingly small number of people at the top end of a handful of companies. And even then, those people cannot honestly say how their creations will grow and develop, for the simple reason that they do not know.

Whether videos with Google Veo 3 or essays via ChatGPT, we can now sit alone and create almost anything we want at the touch of a button. Where God took seven days to build the world in His image, we can build a video replica in seven seconds. But the thrill is short-lived, as we are quickly submerged under a flood of content, pumped out with ease. There is no digital sublime, no sense of lasting awe, just a vague unease and apprehension as we hunch over our phones, irritated and unfocused. Increasingly, we have become aware of our own loneliness (which has reached “epidemic” proportions).

And, perhaps strangest of all, we accept AI’s view of us. Once, only God was able to X-ray the soul. Later, we believed the high priests of psychology could do the same, human to human. Now, we seek that same sense of being understood in mute lines of code.

A mere 18 months or so since the technology became widely available, 64 per cent of 25 to 34-year-olds in the UK have used an AI therapist, while in America three quarters of 13 to 17-year-olds have used AI companion apps such as Character.ai or Replika.ai (which let users create digital friends or romantic partners to chat with). Some 20 per cent of American teens spend as much or more time with their AI “friends” as with their real ones.

Dig deeper into the available numbers and part of the attraction of socialising in this way becomes clear: you get a reflection, not an actual person – someone “always on your side”, never judgmental, never challenging. We grant LLMs (large language models) the status of an omniscient deity, albeit one that never corrects or disciplines. Nothing is risked in these one-sided engagements, apart from your ability to grow as a person; the ego is always flattered. Habitualised, we risk becoming so fragile that any form of friction or resistance becomes unbearable.

Where social media at least relied upon the affirmation of your peers – hidden behind a screen though they were – AI opens up the possibility of existing solely in a loop of self-affirmation.

Religion has many critics, of course, but at the heart of the Abrahamic tradition is an argument about how to live now, on this earth, together. In monotheism, God is not alone. He has his intermediaries: rabbis, priests, and imams who teach, prescribe and slowly, over time, build a system of values. There is a community of belief, of leaders and believers who discuss what is right and what is wrong, who share a creed, develop it, and translate sometimes difficult texts into the texture of daily life and what it means for us. There is a code, but it is far from binary.

And so, while it is possible to divine a certain proselytising fervour in the statements of our tech-bro overlords, there is no sense of the good life, no proper vision of society, and no concern for the future. Their creations are of course just tools – the promised superintelligence is yet to emerge and may never materialise – but they are transformative, and their potentially destructive power means their makers are necessarily moral agents. Yet the best we get are naïve claims about abundance for all or the eradication of the need for work. A vague plan seems to exist that we will leave this planet once we’ve bled it white.

There is a social and spiritual hunger that a life online cannot satisfy. Placing our faith in the bright offerings of modernity is blinding us to each other – to what is human, and what is sacred.

Artificial Intelligence, Arts, Books, Computing, Meta, Technology

Book Review: If Anyone Builds It, Everyone Dies

LITERARY REVIEW

WE shouldn’t worry so much these days about climate change, because we’ve been told that our species has only a few years before it’s wiped out by superintelligent AI.

We don’t know what form this extinction will take exactly – perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that’s smarter than a human being will destroy us all, somehow.

This level of confidence is typical of Yudkowsky in particular. He has been warning about the existential risks posed by technology for years – on the website he helped to create, LessWrong.com, and via the Machine Intelligence Research Institute he founded (Soares is its current president). Despite not graduating from university, Yudkowsky is highly influential in the field. He is also the author of a 600,000-word work of fan fiction called Harry Potter and the Methods of Rationality. Critics have found him colourful, annoying, and polarising, with one leading researcher saying in an online spat that “people become clinically depressed” after reading Yudkowsky’s work. But, as chief scientist at Meta, who are they to talk?

While Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world’s most-cited computer scientist, both of whom signed up to the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn’t exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to “the biggest and fastest rollout of a general-purpose technology in history,” according to the FT’s John Thornhill. Meta will have spent as much as $72bn (£54bn) on AI infrastructure this year alone, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal.

This is not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to accept. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it remains straightforward enough for readers to grasp the basic facts.

Among these is that we don’t really understand how generative AI works. In the past, computer programs were hand coded – every aspect of them was designed by a human. In contrast, the latest models aren’t “crafted”, they’re “grown”. We don’t understand, for example, how ChatGPT’s ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI’s functioning beyond our control and means that, even if we can nudge it towards certain goals such as “be nice to people”, we can’t determine how it will get there.

That’s a big problem, because it means that AI will inevitably develop its own quirky preferences and ways of doing things. These alien predilections are unlikely to be aligned with ours. It’s worth noting, however, that this is entirely separate from the question of whether AIs might be “sentient” or “conscious”. Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviours. Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that do things on their own initiative, because businesses will pay more for tools that they don’t have to supervise. If an “agentic” AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – and why shouldn’t it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is shutting us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can’t yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of “sticks they can point at you to make you die” – AKA guns – would have been hard to conceive of.

Nevertheless, to make the threat more concrete, they elaborate. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable proliferates through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. Some will reckon this outlandish – but the Aztecs would have said the same about muskets and Catholicism.

The authors present their case with such conviction that it’s easy to emerge from this book ready to cash in your pension. The glimmer of hope they offer – and it is a low-wattage one – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the strategic and commercial incentives, and the current state of political leadership, this seems highly unlikely.

The crumbs of comfort we are left with, then, are indications that the authors might not be right, either that superintelligence is on its way, or that its creation equals our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. As a small illustrative example of how AI can develop strange, alien preferences, Yudkowsky and Soares offer up the fact that some large language models find it hard to interpret sentences without full stops. “Human thoughts don’t work like that,” they write. “We wouldn’t struggle to comprehend a sentence that ended without a period.” But that’s not really true: humans often rely on markers at the end of sentences to interpret them correctly. Because we learn languages via speech, those markers are not dots on a page but “prosodic” features like intonation: think of the difference between a rising and a falling tone at the end of a phrase. If text-trained AI leans heavily on punctuation to figure out what’s going on, that shows its thought processes are analogous to human ones, not alien.

And for writers steeped in the hyper-rational culture of LessWrong, the authors exhibit more than a touch of confirmation bias. “History,” they write, “is full of . . . examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chernobyl. But what about predictions of catastrophic risk that were proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s imperative to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that their warnings don’t represent the scientific consensus, this is a rapidly changing and very poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. More concerning still, respondents who had thought “a lot” or “a great deal” about the question gave a median of 9%, while those who had thought “little” or “very little” gave a median of 5%.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% seems to reflect a kind of hysterical monomania, or an especially thorough engagement with the issue. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head, 272pp
