Priorities and Challenges for the Online Harms Legislation
I recently attended the Westminster eForum policy conference, Next steps for online regulation in the UK (23rd June 2020), where industry experts and government representatives discussed the key priorities and challenges for the upcoming Online Harms legislation.
Last year, the government released the Online Harms White Paper, which set out measures to keep UK users safe online; the consultation ran from April to July 2019. The government intends to establish in law a new duty of care towards internet users, overseen by an independent regulator, though it has not yet been confirmed who that regulator will be. Companies operating in the online space will be held to account for tackling online harms, including illegal activity and content, and for managing behaviours that are harmful but not illegal.
For more on online harms, read our blog Arbitrating Truth: Tackling Misinformation and Disinformation Online, June 2019.
Mark Bunting, Director, Content Policy, Ofcom, highlighted that 62% of adults and 81% of children surveyed say they have experienced potential harm online (Ofcom/ICO, Jigsaw Research, "Potential online harms", February 2020). 74% of children attribute that harm to either an interaction with another person or harmful content. In fact, more than one in four 12-15 year olds have experienced offensive language, unwelcome contact from strangers, "fake news" and bullying in the online environment.
So why is it proving so difficult to protect children online? Online access has become a utility for children: it's how they communicate, learn and play. Children have a right to participate, to self-educate and to share content. But the measures introduced so far by tech companies as part of self-regulation initiatives have failed. Moderating content is key to child safety, and automated moderation is not yet sophisticated enough to work in a meaningful way.
Susie Hargreaves, Chief Executive, Internet Watch Foundation, emphasised that it is essential to have trained human moderators, citing text, subjectivity and interpretation as the areas that pose the biggest challenges for moderation. Automated systems operate on statistics, meaning that they do not understand the content they are checking. Professor Victoria Nash, Deputy Director, Associate Professor and Senior Policy Fellow, Oxford Internet Institute, referred to Oxford Internet Institute research showing a rise in different types of hateful content, which poses challenges for the machine learning solutions put in place.
"The way that misinformation morphs and develops over time, machine learning isn't equipped to cope with this."
Ben Bradley, Head of Digital Regulation, techUK
Automated moderation is an efficient and cost-effective solution for big tech firms like Facebook and YouTube, but it seems there is a long way to go before it can be trusted as an effective alternative to human moderation. Andy Burrows, Head, Child Safety Online Policy, NSPCC, feels there is a strong case for regulation to be child-centred. 70% of 12-15 year olds have at least one social media account, and 45% of 8-11 year olds who have a phone take it to bed with them (Ofcom, Children's Media Use data).
On the main social networks, 200,000 children (for example, 1 in 25 children on Facebook) have sent, received, or been asked to send sexual content to an adult. There have been over 10,000 sexual communication with a child offences in two and a half years, with 70% of these offences taking place on Facebook, Snapchat and Instagram.
"Self-regulation has failed to address inherent risks to children, who by definition are a vulnerable group."
Andy Burrows, Head, Child Safety Online Policy, NSPCC
Mr Burrows argued that children should have equal protection online and offline, noting that platforms are not "neutral actors": certain design choices can exacerbate risks, but it is also at the systemic design level where those risks can best be addressed. Age-verification will feature as part of the upcoming Online Harms Bill. Sarah Connolly, Director, Security and Online Harms, DCMS, said that the government is attempting to "balance a range of conflicting interests". Freedom of expression is at the heart of the government's approach, but protecting children online remains a key pillar. She confirmed that the Online Harms Bill will be the legislative vehicle to roll out online age-verification.
It is not yet clear how age-verification will work, and internet users may be reluctant to share personal data, such as passport or credit card details, in order to access services. Speaking about the implementation of the Online Harms Bill, Sarah Connolly said:
"It is an immensely complex issue and it is really important we get it right."
Sarah Connolly, Director, Security and Online Harms, DCMS
In a survey by Catch22, a charity working across children's social care and education, 38% of professionals said they do not feel sufficiently trained to deal with the issues young people face online, and research evidencing the impact of online harms on children has yet to be done. Melissa Milner, Director, Communications and Engagement, Catch22, said that the youth workers they speak to regard age verification as essential for children's safety, as they are seeing children as young as 10 years old being groomed online.
When engaging with young people, Catch22 asked what they felt would improve their online experience. They proposed improved moderation, the removal of negative or harmful comments on pictures, and improved privacy and security, which indicates that children do not feel protected by the current system. Melissa Milner said the focus should remain on prevention, stopping online harm before it happens. So, age-verification and addressing risk at the systemic design level may prove positive steps.
Post Contributor:
Caitriona Fitzsimons, Digital Reporter