Britain, Government, Internet, National Security, Politics, Society, Technology

Put social media bosses in the dock

INTERNET AND SOCIAL MEDIA

Intro: Lies and disinformation on social media are fuelling violence and the breakdown of society

The violent thugs and bigots rampaging through the streets of UK towns and cities in the dreadful days since the Southport killing of three young children deserve severe punishment for their appalling crimes.

The giant businesses that enable the lies and exaggerations that fuel the riots should also be in the dock – as should the people who own them.

For the online anonymity they facilitate allows anyone in the world to say anything they want, however incendiary, and to escape responsibility.

Built into the internet from its inception decades ago, anonymity is hugely profitable for tech billionaires, but the horrendous price for this free-for-all is paid by the rest of us: mostly law-abiding, peaceful people who respect the truth. Internet anonymity is the default setting when you set up an email address or a social media account. You can pretend to be anyone, anywhere.

The anarchy and chaos unleashed after Southport highlight the danger. An anonymous account on X (formerly Twitter) called Europe Invasion first spread the incendiary lie that the suspect in the stabbing case was a Muslim immigrant. That post – completely invented – was viewed a staggering six million times.

We have no idea who is behind Europe Invasion, with its relentless and misleading reports of crime and its doom-laden commentary about ethnic strife. It gives no contact details or any other explicit clues about its funding, staff, location, or aims.

For those who have spent decades dealing with Russian disinformation, it may well look and smell like a Kremlin propaganda outlet, one attempting to sow dissension and mistrust in Western societies – a Russian tactic of many years' standing.

Moscow has unwitting accomplices. Look at the man in charge of X, Elon Musk. A self-declared “free speech absolutist”, Musk closed the departments responsible for dealing with disinformation when he first acquired Twitter. And he has made it far harder to report abuse. The result has been to intensify the toxic mischief coursing through the veins of our democracy.

When Musk took over the ailing Twitter platform two years ago, accounts with verifiable owners still benefited from a “blue tick” – a badge which prevented pranksters and fraudsters from impersonating public figures, mainstream media outlets, and businesses. Not any more.

One of Musk’s first moves was to offer blue ticks to anyone willing to pay for them.

That’s why, at a cursory glance, Europe Invasion looks like a regular media outlet – with the “blue tick” stamp of authenticity for which someone, somewhere, has presumably paid. Musk has also lifted the ban Twitter had imposed on such divisive figures as the far-Right firebrand Tommy Robinson, who has been blamed for helping to fuel violent disorder with his social media posts.

Musk contributes directly to the toxic atmosphere he has helped create. Adding insult to injury, he is now embroiled in a war of words with Sir Keir Starmer, claiming that “civil war is inevitable” in Britain.

The sensible citizens of our land will conclude Musk is not just the wealthiest man in the world, but also the silliest. He knows nothing about this country – and is not ashamed to show it. But among his 200 million followers there will be many who believe him, with untold consequences for this country’s image abroad, and stability at home.

There is an even greater danger to our national security. The internet is the central nervous system of our civilisation, used in everything from finance to health care and transport.

It is horribly vulnerable to carelessness (as we saw recently in the massive global disruption caused by a faulty software update), and it is under attack from malevolent state actors such as Russia and China.

The reason for our plight is simple: greed. Checking identities costs money. So too does nailing lies, running a proper complaints system, and installing proper security.

For the tech giants, it is far simpler to let chaos rip, and watch the profits roll in.

Yet the answer lies in our own hands – and those of our elected politicians in parliament.

As a first step, our regulators and lawmakers should demand that tech bosses immediately remove material that constitutes incitement to riot. Unless they do that, they are aiding and abetting serious crimes.

The tech giants’ titanic lobbying efforts have cowed politicians for years. Curb the internet and you hamper innovation, the argument goes.

But the price now is too high. An American court has just handed down a landmark ruling that the online search giant Google is a monopoly that systematically crushes its rivals.

We need the same spirit here in the UK, with the media regulator OFCOM and the Competition and Markets Authority (CMA) working together to curb the power of these monstrous companies.

They behave like medieval monarchs, treating us as their digital serfs. It is high time to remove their neo-feudal protections and privileges and make them legally liable for the extraordinary harm they do.

Britain, Government, Internet, Legal, Society, Technology

New enforceable code for web giants

INFORMATION COMMISSIONER

FACEBOOK, Google and other social media platforms will be forced to introduce strict age checks on their websites or assume all their users are children.

Web firms that hoover up people’s personal information will have to guarantee they know the age of their users before allowing them to set up an account.

Companies that refuse will face fines of up to 4 per cent of their global turnover – £1.67 billion in the case of Facebook.

The age checks are part of a tough new code being drawn up by the Information Commissioner’s Office (ICO), which is backed by existing laws and will come into force as early as the autumn.

. See also: Internet safety: The era of tech self-regulation is ending

Experts claim it will have a “transformative” effect on social media sites, which have been accused of exposing young people to dangerous and illicit material, bullying and predators. It includes rules to help protect children from paedophiles online.

The code also aims to stop web firms bombarding children with harmful content, a problem highlighted by the case of Molly Russell, 14, who killed herself after Instagram allowed her to view self-harm images. Under the new code:

. Tech firms will be banned from building up a “profile” of children based on their search history, and then using it to send them suggestions for material such as pornography, hate speech and self-harm.

. Children’s privacy settings must automatically be set to the highest level.

. Geolocation services must be switched off by default, making it harder for trolls and paedophiles to target children based on their whereabouts.

. Tech firms will not be allowed to include features on children’s accounts designed to fuel addictive behaviour, including online videos that automatically start one after the other, notifications that arrive through the night, and prompts nudging children to lower their privacy settings.

Once the new rules are implemented, children should be asked to prove their age by uploading their passport or birth certificate to an independent verification firm. This would then give them a digital “fingerprint” which they could use to demonstrate their age on other websites.

Alternatively, the tech firms could ask children to get their parents’ consent, and have the parents prove their identity with a credit card.

If the web giants cannot guarantee the age of their users, they will have to assume they are all children – and dramatically limit the amount of information they collect on them, as set out in the code.

At present, a third of British children aged 11 and nearly half of those aged 12 have an account on Facebook, Twitter or another social network, OFCOM figures show.

Many youngsters are exposed to material or conversations they are too young to cope with as a result.

The Deputy Commissioner at the ICO said: “We are going to be making it quite clear that there is a reasonable expectation that companies stick to their own published terms and policies, including what they say about age restrictions.”

Baroness Beeban Kidron, who tabled the House of Lords amendment that ensures the new code will be drawn up and put into law, said: “I expect the code to say: ‘You may not, as a company, help children find things that are detrimental to their health and well-being.’ That is transformative. This is so radical because it goes into the engine room, into the mechanics of how businesses work and says you cannot exploit children.”

The rules will come into force by the end of the year, and will be policed by the ICO, which has the power to hand out huge fines.

It will also use its powers to crack down on any web firm that does not have controls in place to enforce its own terms and conditions. Companies that say they ban pornography and hate speech online will have to show the watchdog they have reporting mechanisms in place, and that they quickly remove problem material.

Firms that demand children are aged 13 or above – as most web giants do – will also have to demonstrate that they strictly enforce this policy.

At the moment, web giants such as Facebook simply ask children to confirm their age by entering their date of birth, without demanding proof.

 

FOR far too long, social media giants have arrogantly refused to take responsibility for the filth swilling across their sites.

Many of these firms, cloistered in Silicon Valley ivory towers, are owned by tax-avoiding billionaires who are indifferent to the trauma inflicted on children using websites such as Facebook and Instagram.

At the click of a mouse, young children are at risk of exposure to paedophiles, self-harm images, online pornography and extremist propaganda.

Finally, however, these behemoths are being brought to heel by the Information Commissioner’s Office. They must ensure strict age checks and stop bombarding children with damaging content – or face multi-million-pound fines.

Such enforced regulation is very welcome and well overdue.

Britain, Government, Internet, Politics, Society

Internet safety: The era of tech self-regulation is ending

SOCIAL MEDIA

THE safety of the internet has been at the forefront of people’s minds in recent weeks. We have all heard the tragic stories of young and vulnerable people being negatively influenced by social media. Whilst the technology has the power to do good, it is clear that things need to change. With power comes responsibility and the time has certainly come for the tech companies to be held properly accountable.

. See also: Probe launched into online giants

The UK Government is serious in wishing to tackle many of the negative aspects associated with social media, and the forthcoming White Paper on online harms is indicative of its concern.

The world’s biggest technology firms, including Facebook, Twitter, Google and Apple, are coming under increasing pressure from ministers, who have made clear to them that they will not stand by and see people unreasonably and unnecessarily exposed to harm. They insist that if it would not be acceptable offline, then it should not be acceptable online.

Safety is at the forefront of almost every other industry. The online world should be no different. Make no mistake, these firms are here to stay, and, as a result, they have a big role to play as part of the solution. It’s vital that they use their technology to protect the people – their customers – who use it every day.

It’s important not to lose sight of what online harms actually are. Yes, they include things like cyberbullying, images of self-harm, terrorism and grooming. But disinformation – which challenges our ideas of democracy and truth – must be tackled head on, too.

Disinformation isn’t new. But the rise of tech platforms has meant that it is arguably more prevalent than ever before. It is now possible for a range of players to reach large parts of the population with false information. Tackling harms like disinformation is to be included in the Government’s White Paper. That will set out a new framework for making sure disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

In the UK, most people who read the news now do so online. When it is read across platforms like Facebook, Google and Twitter and then shared thousands of times, the reach is immense. False information on these platforms has the potential to threaten public safety, harm national security, reduce trust in the media, damage the UK’s global influence and undermine our democratic processes.

To date, we have yet to see any evidence of disinformation affecting democratic processes in the UK. However, that is something the Government continues to keep a very close eye on.

Tools exist to enable action to be taken, particularly through the use of Artificial Intelligence (AI). We have already seen welcome moves from platforms such as Facebook and Twitter, which have developed initiatives to help users identify the trustworthiness of sources and have shut down thousands of fake sites. But voluntary measures have not been enough. The UK Government wants trustworthy information to flourish online, and for there to be transparency so that the public are not duped. Parliament cares deeply about this too, as a recent report from the Select Committee into disinformation shows.

But more needs to be done. One of the main recommendations in the Cairncross report on the future of journalism was to put a “news quality obligation” on the larger online platforms – placing their efforts to improve people’s understanding of the trustworthiness of news articles under regulatory supervision.

Online firms rely on the masses spending time online, and people will only keep doing that if they feel safe there. A safer internet is surely good for business too.

It seems apparent that we can no longer rely on the industry’s goodwill. Around the world, governments are facing the challenge of how to keep citizens safe online. As the era of self-regulation comes to an end, it now seems the UK can and should lead the way.

 

THE internet is a liberating force, but also potentially a malign one. MPs and ministers have been all too happy to expound upon the undoubted benefits brought by the rapid growth of the digital economy. Yet they have struggled to come up with measures that would address the damage that it can cause – from social media addiction and the abuse of online platforms by child groomers and terrorists, to the links between internet use and poor mental health among children.

There are promising signs that action may be imminent, however. A new report recently released by the House of Commons Digital, Culture, Media and Sport Committee calls for technology companies to be required to adhere to a Code of Ethics overseen by an independent regulator. The code would set down in writing what is and is not acceptable on social media, and the regulator, crucially, would have teeth: the power to launch legal action against firms that breach the code.

This is, undoubtedly, a welcome proposal. Much of the trouble that children and their parents have experienced online in recent years has been a consequence of a failure by the technology companies to take responsibility for the damage that their products and services can cause. They have continued to host harmful and sometimes illegal material, for example, and it is still too easy for young children to access their sites despite age limits.

We can no longer rely on the industry’s goodwill: self-regulation has evidently failed. The photo sharing site Instagram, for instance, committed recently to banning all images of self-harm on its platform, but only after the outcry following the tragic death of a young and vulnerable person. Without legally enforceable penalties, such companies – with their ‘move fast and break things’ cultures – face little incentive to prioritise the safety of their users, particularly young people and the vulnerable.

The Committee’s proposal currently remains just that, but the Government has pledged to produce a White Paper setting out how it intends to take the regulation of social media forward.

Half-measures will not be enough. Ministers must impose a statutory duty of care on the social media giants.
