Many European (and indeed other) governments are proposing specific regulation to counter the many harms which online technology has brought in its wake. But what will be the relationship between such regulation and existing data protection controls? Focusing on the UK White Paper, these slides argue that for at least three reasons the interface will be much greater than has been presumed. Firstly, harms directed at individuals almost always involve personal data processing and, not least through the ʻright to be forgottenʼ, data protection lies at the heart of the response to this. Secondly, even broader harms to groups and society at large are arguably being driven by unbridled profiling and tracking, which is also a core data protection issue. Finally, successful online harms regulation will often depend on robust systems to monitor and manage online content and activity. Although data protection is often seen as being in tension with this, its stance is actually more complex, as the success of this regime itself depends on robust control mechanisms. Nevertheless, there will clearly be an interaction here also. All these interactions point to the need for far-reaching engagement between online harms and data protection regulators. Whilst there will be the potential need to manage tricky issues even in areas of synergy (not least due to the GDPR’s one-stop shop mechanism), there is also huge scope for combined action to help ensure the democratic control of, and secure enjoyment of fundamental rights within, the online environment.
2. Online Harms Regulation: Proposals
Growing concern within European public debate.
New initiatives in Germany, France, Ireland and UK.
Focus here is UK White Paper (April 2019):
White Paper sets out a programme of action to tackle content or activity
that harms individual users, particularly children, or threatens our way
of life in the UK, either by undermining national security, or by
undermining our shared rights, responsibilities and opportunities to
foster integration …
The government will establish a new statutory duty of care to make
companies take more responsibility for the safety of their users and
tackle harm caused by content or activity on their services.
Compliance with this duty of care will be overseen and enforced by an
independent regulator.
3. What about the Interface with the GDPR?
White Paper seems to argue (i) DP working fairly well &
(ii) interface with new regulation will be pretty limited:
There is already an effective response to some categories of harmful
content or activity online. These will be excluded from the scope of the
new regulatory framework to avoid duplication of existing government
activity.
The following harms will be excluded from scope: …
• All harms suffered by individuals that result directly from a breach of
the data protection legislation, including distress arising from
intrusion, harm from unfair processing, and any financial losses.
4. Direct Individual Harms & Personal Data
But White Paper flags range of content/activity which is
directly & negatively targeted at individuals:
harassment,
cyberbullying,
trolling, and
revenge pornography.
Strong arguments made for including privacy invasion.
Almost all of this involves processing – indeed
publication – of personal data (photos, text, videos…)
5. Direct Individual Harms & GDPR
Even individual publication may trigger DP:
Social Platforms have controller responsibilities:
Further platform dissemination of lawful content
may itself be unlawful: Google Spain (RtbF) (2014)
“[P]ublishing the video in question on a video website [YouTube] …
without restricting access to the video … not within the context of
purely personal or household activities.” (Buivids (2019))
[P]olicies in place to deal with: - complaints from people who
believe that their personal data may have been processed unfairly or
unlawfully because they have been the subject of derogatory,
threatening or abusive online postings by third parties; - disputes
between individuals about the factual accuracy of posts. (ICO, 2013)
6. Broader Harms in White Paper
Harms:
Children Accessing Inappropriate Content
Terrorism & Serious Violence
Hate Crime
Disinformation
Advocacy of Self-Harm
7. Online Harms & Private Back-End Processing
ICO response argues that strong connection here:
and that this leads back to data protection:
The use of personal data is an integral part of many of the harms
outlined in the White Paper. For example, in the case of self-harm
content, children and young people are being directed to these sites
through nudges built on information drawn from personal data
relating to previous behaviour online. Profiling and cross device
tracking are now fundamental to the internet platforms’ business
model.
[I]t is the GDPR and the DPA18 that governs the use of personal data
and algorithms in the delivery of content online, which includes the
UK’s world leading ʻAge Appropriate Design Codeʼ.
8. Online Harms & Public Back-End Processing
White Paper itself flags e.g. of possible DP tension here:
Which echoes Netlog (2012), where it was held that far-
reaching monitoring of a social network to target IP
violations did not strike a fair balance with, inter alia, DP.
The [new] regulator will not compel companies to undertake general
monitoring of all communications on their online services, as this
would be a disproportionate burden on companies and would raise
concerns about user privacy. The government believes that there is
however, a strong case for mandating specific monitoring that
targets where there is a threat to national security or the physical
safety of children, such as CSEA [Child Sexual Exploitation and
Abuse] and terrorism.
9. DP & Public Back-End Processing: Complexities!
DP in many ways represents strong prescriptive regulation –
incompatible with practical anarchy.
Authentication: ICO Draft Age-Appropriate Design Code
stresses that, to avoid its application, services may need to
take “specific measures … to limit access by children”.
Identification: Full anonymity may enable illegal data
publication and processing (cf. French DPA focus on this).
Blocking: European DP may require use of PhotoDNA to
block illicit images (AY v Facebook Ireland (2016))
10. Online Harm Regulation & DP: Conclusions
Two regimes have large overlap & potential for
coordination to help secure democratic control & human
rights within online space.
But apparent synergy raises tricky issues not just re:
regulatory consistency but also DP one-stop shop:
In July 2019 it was agreed that Google’s ʻback-endʼ processing
must be managed solely through the Irish DP regulator.
Currently RtbF appears exempt but in future?
How will/would this affect online harms regulation?
Where potential inconsistency is present, even more
tricky issues will require careful management.