Online Harm Safety Measures - What Do They Achieve For Safeguarding?

26 Apr 2019

Regulations and national standards exist across the education, criminal justice, health and social care sectors. Have you ever considered what this really means for the child or adult receiving a service? Last week the UK Government published wide-ranging proposals for the regulation of social media and other social networks. Is regulating the internet just another instrument of control, or will it actually help prevent abuse? Will it enable users to speak up and seek support, or even stop people abusing others? We would all like to think we can eradicate abuse, but the issue is of such complexity, driven deep and dark within the networks of society, in both the online and offline worlds, that this may only be a dream, a vision that all of us working in this arena can hope for.

Since the rise of the internet and the ‘internet of things’, there has been ongoing and increasing concern about what is being posted in all formats (words, photos, videos and pictures), including content that may be deemed offensive or that depicts or promotes child abuse, terrorism, exploitation and self-harm.

There are too many unanswered questions about whether regulation would work. For example, how will regulation be evaluated? How will its success be interpreted? What will make it successful? How do we know regulation will have the impact it sets out to achieve? Will it act as a deterrent? The Government’s focus on online safety is a good and much-needed debate, one that is close to my heart. It is the tone and dictatorial nature of the proposals that worry me, and whether they will really reduce harm and abuse.

Historically, proposals for an online safety bill focused on protecting children, and the current bill seems to have extended that scope. I would argue that children’s views should inform and be included in the debate; it is not the technology or the tech giants that are the issue here. Apportioning blame and responsibility onto these companies is not a genuine attempt to deal with the issues. A narrowly focused proposal that covers only private-sector companies, with fines and blocked access to sites, is not the solution to social issues that affect children or the vulnerable. Agreed, there should be a mandatory duty of care to protect all users; however, under these newly proposed laws, is anything really going to change?

I am struggling to comprehend how these laws would reduce online grooming. Have children and young people been consulted? Do they think this will protect them, or will the ‘players’, be they the social media companies or the users, find new ways around the regulations and drive misuse further underground? How does a regulator know what is deemed ‘safe’ for a young person who is today called a ‘digital native’ and ‘digitally savvy’? I fully advocate that children and young people should be included in these conversations.

If this is just another form of control and diversion, rather than tackling online activity from a child-centred, bottom-up approach, then it is simply about taking money from companies through large financial payouts. To me this makes the issue a moral and ethical one: will it change culture and online practices? Can the Government really force ‘users’ and ‘deliverers’ to behave differently online? Surely we protect young people best by partnering with children and young people and listening to, and acting upon, what they say? Yes, we all have a moral duty to protect children and young people, but there are more innovative approaches than one of control. I wait to see whether this is the Government doing as we have always done, and ultimately getting the same outcomes…

“The Online Harms White Paper sets out the government’s plans for a world-leading package of online safety measures that also supports innovation and a thriving digital economy. This package comprises legislative and non-legislative measures and will make companies more responsible for their users’ safety online, especially children and other vulnerable groups.
The White Paper proposes establishing in law a new duty of care towards users, which will be overseen by an independent regulator. Companies will be held to account for tackling a comprehensive set of online harms, ranging from illegal activity and content to behaviours which are harmful but not necessarily illegal” (Home Office, 2019).
