Underage in the Digital Age

Posted on: June 29th, 2017

The Queen’s Speech announced a new data protection law to ensure the UK retains its world-class regime protecting personal data along with a new Digital Charter to ensure that the UK is “the safest place to be online.”

Whilst this is primarily aimed at replicating the General Data Protection Regulation (GDPR), which will take effect on 25 May 2018 in EU Member States, it’s unlikely to resolve the safeguarding issue of most concern to parents: age verification.

The GDPR states that children aged 16 and over can give consent themselves, while those under 13 can never give consent. For children between 13 and 15, parental consent must be obtained, and that consent must be verifiable. In theory this looks good, but the challenge will be enforcement.
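Read literally, the consent rules summarised above reduce to a simple decision table. A minimal sketch in Python (this encodes only the simplified summary above, not the full GDPR text, and the function name is illustrative, not a real API — and certainly not legal advice):

```python
def consent_basis(age: int, verified_parental_consent: bool) -> str:
    """Who, if anyone, can give valid consent under the summary above.

    Encodes only the simplified rules as stated in this article:
    16+ can consent themselves; 13-15 need verifiable parental
    consent; under-13s cannot give consent at all.
    """
    if age >= 16:
        return "child"        # the child can consent themselves
    if 13 <= age <= 15:
        return "parent" if verified_parental_consent else "none"
    return "none"             # under 13: no valid consent route described

print(consent_basis(17, False))   # child
print(consent_basis(14, True))    # parent
print(consent_basis(10, True))    # none
```

The enforcement problem discussed below is exactly that the `age` input is self-reported.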

Some of the big social media platforms (Facebook, Instagram and Twitter) currently specify a minimum age of 13 to open an account. YouTube, on the other hand, requires users to be 18 or over to open an account (and to view any content tagged as age-restricted). Even so, recent figures estimate that 78% of children aged 9-13 have a social media account, and one study, the ‘Power of Image’ conducted in December 2016, found that 70% of 8-17 year olds had seen age-inappropriate images within the last year. That’s where the problem lies: if all it takes is for a child to claim to be 13 to open an account, it’s easy enough for them to claim to be 16 to avoid triggering the requirement for parental consent.

The US experience suggests this will happen. An early adopter of safeguarding rules, the US enacted its Children’s Online Privacy Protection Act in 1998. The Act requires website operators to seek parental consent before collecting personal information from anyone under 13 years of age. But if a child under 13 lies about their age to create a social media account, they lose any protection they had under the Act. It’s hard to imagine a more half-hearted protection.

Despite being digital natives with a better grasp of technology than many adults, today’s children have yet to develop the cognitive, critical and social skills needed to be media literate and to navigate the virtual world safely. So, if the law is struggling, we may need to rely on social media companies having a strong enough sense of corporate and social responsibility to create a genuinely safe environment for children.

One solution, for example, might be a voluntary register, with parental consent, that provides a two-step verification – let’s call it “ChildCheck.co.uk”. A parent opens an account, with their identity verified in the usual way via a credit card and email address. The parent then creates an account for their child, and parent and child are each given a user code.

Social media companies could then choose to require anyone opening an account either to verify themselves as an adult or to enter their ChildCheck code, which would trigger an email asking the parent to authorise their child opening the account. No doubt there are better ideas out there, but the point is that child age verification can certainly be improved beyond the insufficient legal requirements.
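To make the idea concrete, the two-step flow could be sketched roughly as follows. Everything here – the ChildCheck class, its method names, the opaque codes and the platform name – is an illustrative assumption, not a real service or API:

```python
import secrets

class ChildCheck:
    """Minimal sketch of the hypothetical ChildCheck register.

    A parent registers (identity checks are stubbed out), registers
    their child, and later approves or ignores sign-up requests that
    platforms submit using the child's code.
    """

    def __init__(self):
        self.parents = {}      # parent code -> parent email
        self.children = {}     # child code -> parent code
        self.pending = {}      # (child code, platform) -> parent email
        self.approved = set()  # (child code, platform) pairs authorised

    def register_parent(self, email):
        # In practice identity would be verified via credit card and
        # email; here we simply issue an opaque code.
        code = secrets.token_hex(4)
        self.parents[code] = email
        return code

    def register_child(self, parent_code):
        if parent_code not in self.parents:
            raise ValueError("unknown parent")
        code = secrets.token_hex(4)
        self.children[code] = parent_code
        return code

    def request_signup(self, child_code, platform):
        # Called by a platform when a child enters their code; a real
        # service would email the parent at this point.
        parent_code = self.children[child_code]
        self.pending[(child_code, platform)] = self.parents[parent_code]
        return "pending parental approval"

    def parent_approve(self, parent_code, child_code, platform):
        if self.children.get(child_code) != parent_code:
            raise ValueError("code mismatch")
        self.pending.pop((child_code, platform), None)
        self.approved.add((child_code, platform))

    def is_authorised(self, child_code, platform):
        return (child_code, platform) in self.approved

# Walk through the two-step flow
cc = ChildCheck()
p = cc.register_parent("parent@example.com")
c = cc.register_child(p)
cc.request_signup(c, "ExampleSocial")
print(cc.is_authorised(c, "ExampleSocial"))   # False: not yet approved
cc.parent_approve(p, c, "ExampleSocial")
print(cc.is_authorised(c, "ExampleSocial"))   # True: parent authorised
```

The key design point is that the platform never sees the parent’s details – only an opaque code and a yes/no answer – which is what keeps such a register privacy-preserving.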

Two changes under the GDPR and the proposed new UK law can be applauded. First, companies providing services to children must have a clear privacy notice written in a way children will understand. Second, young people will have the right to demand that social networks delete any personal data they shared before turning 18.

So, a good effort at refining how online companies communicate with children and handle their data, but improvement is needed in verifying whether a child should be on a site in the first place.