Britain, Government, Internet, Politics, Society

Internet safety: The era of tech self-regulation is ending

SOCIAL MEDIA

THE safety of the internet has been at the forefront of people’s minds in recent weeks. We have all heard the tragic stories of young and vulnerable people being negatively influenced by social media. Whilst the technology has the power to do good, it is clear that things need to change. With power comes responsibility and the time has certainly come for the tech companies to be held properly accountable.

See also: Probe launched into online giants

The UK Government is serious about tackling many of the negative aspects associated with social media, and the forthcoming White Paper on online harms is indicative of its concern.

The world’s biggest technology firms, including Facebook, Twitter, Google and Apple, are coming under increasing pressure from ministers, who have made clear that they will not stand by and see people unreasonably and unnecessarily exposed to harm. Ministers insist that if something would not be acceptable offline, it should not be acceptable online.

Safety is at the forefront of almost every other industry. The online world should be no different. Make no mistake, these firms are here to stay, and, as a result, they have a big role to play as part of the solution. It’s vital that they use their technology to protect the people – their customers – who use it every day.

It’s important not to lose sight of what online harms actually are. Yes, they include things like cyberbullying, images of self-harm, terrorism and grooming. But disinformation – which challenges our ideas of democracy and truth – must be tackled head on, too.

Disinformation isn’t new. But the rise of tech platforms has meant that it is arguably more prevalent than ever before. It is now possible for a range of players to reach large parts of the population with false information. Tackling harms like disinformation is to be included in the Government’s White Paper. That will set out a new framework for making sure disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

In the UK, most people who read the news now do so online. When it is read across platforms like Facebook, Google and Twitter and then shared thousands of times, the reach is immense. False information on these platforms has the potential to threaten public safety, harm national security, reduce trust in the media, damage the UK’s global influence and undermine our democratic processes.

To date, we’re yet to see any evidence of disinformation affecting democratic processes in the UK. However, that is something that the Government is continuing to keep a very close eye on.

Tools exist to enable action to be taken, particularly through the use of Artificial Intelligence (AI). We’ve already seen welcome moves from platforms such as Facebook and Twitter, which have developed initiatives to help users identify the trustworthiness of sources and have shut down thousands of fake sites. But voluntary measures have not been enough. The UK Government wants trustworthy information to flourish online, and for there to be transparency so that the public are not duped. Parliament cares deeply about this too, as a recent Select Committee report into disinformation shows.

But more needs to be done. One of the main recommendations in the Cairncross report on the future of journalism was to put a “news quality obligation” on the larger online platforms – placing their efforts to improve people’s understanding of the trustworthiness of news articles under regulatory supervision.

Online firms rely on the masses spending time online – and individuals will only really do that if they feel safe there. A safer internet is surely good for business too.

It seems apparent that we can no longer rely on the industry’s goodwill. Around the world governments are facing the challenge of how to keep citizens safe online. As the era of self-regulation comes to an end, it would now seem that the UK can and should lead the way.

 

THE internet is a liberating force, but also potentially a malign one. MPs and ministers have been all too happy to expound upon the undoubted benefits brought by the rapid growth of the digital economy. Yet they have struggled to come up with measures that would address the damage that it can cause – from social media addiction and the abuse of online platforms by child groomers and terrorists, to the links between internet use and poor mental health among children.

There are promising signs that action may be imminent, however. A new report recently released by the House of Commons Digital, Culture, Media and Sport Committee calls for technology companies to be required to adhere to a Code of Ethics overseen by an independent regulator. The code would set down in writing what is and is not acceptable on social media, and the regulator, crucially, would have teeth: the power to launch legal action against firms that breach the code.

This is, undoubtedly, a welcome proposal. Much of the trouble that children and their parents have experienced online in recent years has been a consequence of a failure by the technology companies to take responsibility for the damage that their products and services can cause. They have continued to host harmful and sometimes illegal material, for example, and it is still too easy for young children to access their sites despite age limits.

Self-regulation has evidently failed; we can no longer rely on the industry’s goodwill. The photo-sharing site Instagram, for instance, committed recently to banning all images of self-harm on its platform, but only after the outcry following the tragic death of a young and vulnerable person. Without legally enforceable penalties, such companies – with their ‘move fast and break things’ cultures – have little incentive to prioritise the safety of their users, particularly young people and the vulnerable.

The Committee’s proposal currently remains just that – a proposal – but the Government has pledged to produce a White Paper setting out how it intends to take the regulation of social media forward.

Half-measures will not be enough. Ministers must impose a statutory duty of care on the social media giants.

Britain, Government, Internet, Society, Technology

The Home Office unveils new technology that detects hate content

INTERNET & ONLINE ACTIVITY

Home Office steps up fight against terror content with new technology.

INTERNET giants will have little excuse for allowing extremist propaganda on their websites after the Home Office unveiled new technology to detect hate content.

Web firms have been told to increase efforts to remove terror-related posts after the UK was hit by attacks in London and Manchester last year. All had an “online component”, the Home Office said.

Now the UK Government has revealed advanced technology that aims to automatically detect extremist and hateful videos and content before they become publicly available online.

Tests have shown the £600,000 tool can identify 94 per cent of Islamic State propaganda video content. The breakthrough came as a Home Office analysis revealed IS supporters used more than 400 separate online platforms to pump out propaganda last year.

The Home Office said it would share the technology with firms to combat the abuse of their platforms. Home Secretary Amber Rudd welcomed the development as she visited San Francisco for talks with technology giants. She said: “Those who commit terror attacks on our streets are increasingly influenced by what they see online. I hope this new technology the Home Office has helped develop can support others to go further and faster.”

Using ‘advanced machine learning’, the technology analyses videos to pick out ‘subtle signals’ and determine whether a clip is IS-related propaganda or something else, such as a news report. The system can be adapted to look for other violent extremist content.

The chief executive of ASI Data Science, Marc Warner, whose firm developed the new model, said major organisations such as Google and Facebook could not “solve this problem alone”.

 

YET, we all know that no amount of moral pressure has so far made Facebook, Google and Twitter remove the deluge of hate-filled extremism, sick trolling and other disturbing extreme content that pollutes their sites. Hit them in their pockets and they might just begin to change their ways.

It is promising that multinational companies such as the Anglo-Dutch conglomerate Unilever are threatening to pull all advertising from the three internet giants if they do not clamp down on this filth. Unilever – which has a £6.8billion-a-year marketing budget – is thoroughly sick of seeing its products placed next to terrorist propaganda or sexualised images of children and has decided enough is enough.

Other big firms – notably Procter & Gamble – are making similar threats. We should sincerely hope many more will follow.

All we ever hear from the tech giants are weasel words. They say they take down extremist or illegal material as soon as they are alerted to it, but this is demonstrably untrue. And why should they have to be alerted, rather than policing this kind of content themselves?

The Home Office has now unveiled a new system that can automatically detect 94 per cent of Islamic State propaganda on the web. Is it really beyond multibillion pound corporations that specialise in technology to do the same – or even better? They have run out of excuses.

Britain, Government, Internet, Society

UK Justice Secretary says online trolling could be a criminal offence

INTERNET TROLLING

Online trolling could soon be made a criminal offence in the UK

ONLINE TROLLING could be made a criminal offence, the UK Justice Secretary has said.

David Gauke suggested he was prepared to act after Katie Price launched a campaign and petition for tougher penalties for web abuse.

The TV personality’s son Harvey, 15 – who is partially blind, autistic and has a range of other health problems – suffers constant abuse on social media, but last year a 19-year-old who targeted him on Twitter received only a police caution. Miss Price then set up a campaign demanding a new criminal offence to make online trolling a specific crime. So far it has received 220,000 signatures – and led to an appearance before the Commons petitions committee.

Miss Price told the committee that a line should be drawn between ‘banter’ and criminal abuse – and said the law had failed to keep up with the changing use of technology.

Asked about the concerns Miss Price had, Mr Gauke acknowledged that we often see some appalling behaviour on social media.

The intervention comes just days after Theresa May warned social media giants they were undermining British democracy by allowing ‘intimidation and aggression’ to run riot online.

Firms such as Facebook and Twitter will face an official assessment of whether they are cracking down on abuse. There will also be an annual transparency report to expose the worst companies that fail to tackle the scourge of web hatred.

Officials will publish data on the scale of harmful content reported to different internet firms, how much is removed and how quickly.

Speaking to MPs, Miss Price said police were powerless to act in many cases of online abuse. She also said she wanted to see the creation of a register of offenders.

Speaking about her son’s case, Miss Price said: ‘Even the police were really embarrassed because it got to the point where they couldn’t take it any further because they couldn’t charge them with anything because there is nothing in place… since then it has just continued.

‘If it was a criminal offence I do not believe there would be so much of it . . . it would stop so many deaths, harassment and abuse. Some of you MPs have even had it as well. It happens to everyone – so it’s a no-brainer really.’
