I recently joined the Open Rights Group as a member and was asked to respond to the Government's Online Harms regulation plans through its open consultation, which ends on 1 July - Online Harms White Paper - Government Consultation.
Below is my response. Please do the same and keep the UK free from unelected regulators that could harm freedom of speech.
While we can all agree there is a need to tackle both illegal and harmful content online, this must be balanced against our fundamental human right of freedom of expression.
Users of social media, blogging sites and sites that encourage this freedom of expression need to be able to defend their right to publish legal content.
The issue with a regulatory framework like the one you are suggesting is that online companies will take the easy route to compliance: they will create automated, indiscriminate algorithms and technology that take down any content that 'might' fall under the online harms act.
Whilst this might be acceptable for illegal content, the problem is content that sits in the grey area of being legal but possibly harmful. There is too much ambiguity here, and with that comes room for massive error and the closing down of legitimate content.
Digital companies will take the easy route out, as we see YouTube and others doing now with copyright claims. They do not contact the person being claimed against; they do not check the validity of the request; they simply delete the content, or de-monetise or de-platform the user.
This can become a form of censorship in which individual users are de-platformed for their point of view.
Let's just look at the term 'disinformation': who defines what this is? Is there a clear definition, and who decides where the boundary lies? We could equally say that political propaganda, and the PR business around it, is 'disinformation'.
We also have to ask: where is the task force to tackle those perpetrating the 'illegal harms'?
This legislation seems tailored to tackling the platforms and the removal of content. But where are the plans to tackle those that are clearly committing illegal acts of harm?
My fear is also for those who unwittingly promote or forward 'disinformation'. The Demos report "Warring Songs: Information Operations in the Digital Age" shows that a) fact-checking alone is not enough, as it deals with only a small percentage of the issue, and b) people can unwittingly be drawn into promoting content from these information operations. These people could be de-platformed, silenced or censored, which is an infringement of their human rights.
A quote from the New Statesman summarises it well: "Disinformation is particularly blurry, and will likely prove contentious for a future regulator. The DCMS spells out some expectations, including "promoting authoritative news sources", and suppressing content "disputed by reputable fact-checking services".
How far should the government go in determining what fictions are acceptable? When does a legitimate hunch become a conspiracy theory? These are political questions that require considered – and democratic – responses. While the government's move towards regulating the internet is a welcome step, there will be many who look at these proposals with justified suspicion."
The scheme pushes towards automated takedowns and speed, both of which are likely to come at the cost of accuracy. Accuracy is as important as removal for a scheme to be legitimate.
The process of using an independent regulator is also very questionable. It will not be elected, yet it will have great power to decide what free expression it protects and what it restricts. This does not promote fair and accurate decisions.
The UK does not, and should not, allow state regulation of the press, leaving it free to challenge and comment on issues. Why would we not allow the same freedom to millions of citizens and their lawful online speech?
There is also no explanation of how 'harms' and 'risk' are being defined, which is a central question for the duty of care. The definitions could be drawn too widely and impact legitimate online speech.
The proposals are also unrealistically vast and broad, which will make them almost impossible to implement without a scattergun approach that forces companies to take down legitimate content with no route for the user to complain. A more focused, rights-based approach to social media, as promoted by the Open Rights Group, makes more sense.
Before going ahead with such a scheme, there needs to be more public and industry consultation to facilitate discussion of a better way forward.