
Internet safety: The era of tech self-regulation is ending

SOCIAL MEDIA

THE safety of the internet has been at the forefront of people’s minds in recent weeks. We have all heard the tragic stories of young and vulnerable people being negatively influenced by social media. Whilst the technology has the power to do good, it is clear that things need to change. With power comes responsibility and the time has certainly come for the tech companies to be held properly accountable.


The UK Government is serious about tackling many of the negative aspects associated with social media, and the forthcoming White Paper on online harms is indicative of its concern.

The world’s biggest technology firms, including Facebook, Twitter, Google and Apple, are coming under increasing pressure from ministers, who have made clear to them that they will not stand by and see people unreasonably and unnecessarily exposed to harm. They insist that if something would not be acceptable offline, then it should not be acceptable online.

Safety is at the forefront of almost every other industry. The online world should be no different. Make no mistake, these firms are here to stay, and, as a result, they have a big role to play as part of the solution. It’s vital that they use their technology to protect the people – their customers – who use it every day.

It’s important not to lose sight of what online harms actually are. Yes, they include things like cyberbullying, images of self-harm, terrorism and grooming. But disinformation – which challenges our ideas of democracy and truth – must be tackled head on, too.

Disinformation isn’t new. But the rise of tech platforms has meant that it is arguably more prevalent than ever before. It is now possible for a range of players to reach large parts of the population with false information. Tackling harms like disinformation will be covered in the Government’s White Paper, which will set out a new framework for making sure disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

In the UK, most people who read the news now do so online. When it is read across platforms like Facebook, Google and Twitter and then shared thousands of times, the reach is immense. False information on these platforms has the potential to threaten public safety, harm national security, reduce trust in the media, damage the UK’s global influence and undermine our democratic processes.

To date, we have yet to see any evidence of disinformation affecting democratic processes in the UK. However, that is something that the Government is continuing to keep a very close eye on.

Tools exist to enable action to be taken, particularly through the use of Artificial Intelligence (AI). We’ve already seen welcome moves from platforms such as Facebook and Twitter, which have developed initiatives to help users identify the trustworthiness of sources and have shut down thousands of fake sites. But voluntary measures have not been enough. The UK Government wants trustworthy information to flourish online, and for there to be transparency so that the public are not duped. Parliament cares deeply about this too, as a recent Select Committee report into disinformation shows.

But more needs to be done. One of the main recommendations in the Cairncross report on the future of journalism was to put a “news quality obligation” on the larger online platforms – placing their efforts to improve people’s understanding of the trustworthiness of news articles under regulatory supervision.

Online firms rely on the masses spending time online, and individuals will only do that if they feel safe there. A safer internet is surely good for business too.

It seems apparent that we can no longer rely on the industry’s goodwill. Around the world governments are facing the challenge of how to keep citizens safe online. As the era of self-regulation comes to an end, it would now seem that the UK can and should lead the way.

 

THE internet is a liberating force, but also potentially a malign one. MPs and ministers have been all too happy to expound upon the undoubted benefits brought by the rapid growth of the digital economy. Yet they have struggled to come up with measures that would address the damage that it can cause – from social media addiction and the abuse of online platforms by child groomers and terrorists, to the links between internet use and poor mental health among children.

There are promising signs that action may be imminent, however. A new report recently released by the House of Commons Digital, Culture, Media and Sport Committee calls for technology companies to be required to adhere to a Code of Ethics overseen by an independent regulator. The code would set down in writing what is and is not acceptable on social media, and the regulator, crucially, would have teeth: the power to launch legal action against firms that breach the code.

This is, undoubtedly, a welcome proposal. Much of the trouble that children and their parents have experienced online in recent years has been a consequence of a failure by the technology companies to take responsibility for the damage that their products and services can cause. They have continued to host harmful and sometimes illegal material, for example, and it is still too easy for young children to access their sites despite age limits.

Self-regulation has evidently failed, and we can no longer rely on the industry’s goodwill. The photo sharing site Instagram, for instance, committed recently to banning all images of self-harm on its platform, but only after the outcry following the tragic death of a young and vulnerable person. Without legally enforceable penalties, such companies – with their ‘move fast and break things’ cultures – face little incentive to prioritise the safety of their users, particularly young people and the vulnerable.

The Committee’s proposal currently remains just that: a proposal. But the Government has pledged to produce a White Paper setting out how it intends to take the regulation of social media forward.

Half-measures will not be enough. Ministers must impose a statutory duty of care on the social media giants.

Standard