Advertisers Must Avoid Targeting Children, but is Age Verification Tech Up to the Task?

Tim Cross, 05 May 2022

Many of us will have encountered some sort of age verification system on the internet when accessing age-restricted content. A lot of the time, it’s as simple as clicking a button to confirm you’re over 18, or entering a date of birth. Sometimes these checks go further, for example requiring the individual to enter credit card details.

While the rules vary by country, historically these checks have been fairly lax, and reserved for a limited selection of websites – gambling, pornography, alcohol sales, and the like.

But new regulations will see a wider range of websites and apps required by law to verify the age of their users. For example, the EU’s Audiovisual Media Services Directive, which seeks to level the playing field between digital and traditional media companies, will tighten rules for user-generated content platforms. And this will require more reliable methods than simply asking the user for their date of birth.

It’s increasingly becoming an issue for digital advertising too. It’s already law in a number of countries that gambling and alcohol brands must not target their ads towards children, or content which children are likely to engage with. And provisions within the EU’s proposed Digital Services Act will make it illegal to show targeted ads to minors. But as VideoWeek discussed earlier this year, that’s only possible if those involved in the digital advertising supply chain know that the impression is being served to a child. If they’re unsure, they have to choose either to target the ad and risk breaking the law, or just avoid targeted advertising altogether.

And on the open web, where users often haven’t given the website they’re visiting any information about themselves, age verification becomes even more difficult.

“In my view, I don’t have the slightest idea [how you’d verify age],” said Fernando Parreira, business director at Portuguese media company SAPO in an IAB Europe roundtable discussion in January. “And I hope the regulators bring us a solution! My first thought was that you’d have to have a login platform in place to have a provable ID of the user. But still, there are global platforms with login systems in place and they’re not 100 percent bulletproof.”

No Catch-22

As hinted at above, age verification is a substantially different problem depending on whether the publisher involved is able to directly ask for information from the user or not.

For websites and apps which are able to collect information from users, for example when they first register for an account, there are a variety of techniques available. These include passport scans, facial recognition, voice analysis, and consulting with credit reference agencies.

These techniques can be very reliable. On the more stringent end of the spectrum, a website could ask for a recognised form of ID. If the user uploads a picture of the ID, AI can scan for evidence of tampering. If the ID contains a chip, that can be scanned by a user’s smartphone, providing a fairly bulletproof way to prove its authenticity.

The user could then be asked to turn on their webcam or phone camera and take a picture of themselves, which would then be compared to the ID – again, AI would play a role in judging whether the webcam photo matches the ID.
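A minimal sketch of what that matching step might look like, assuming a face-embedding model is available. The embed_face helper and the similarity threshold below are hypothetical placeholders, not any specific vendor’s implementation:

```python
# Illustrative only: embed_face() stands in for whatever face-recognition
# model a provider actually uses, and the 0.8 threshold is an assumption.
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    """Placeholder for a real face-recognition model that returns a
    fixed-length embedding vector for the face found in the image."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def selfie_matches_id(id_photo: bytes, selfie: bytes, threshold: float = 0.8) -> bool:
    """Compare the face printed on the scanned ID with the live selfie."""
    return cosine_similarity(embed_face(id_photo), embed_face(selfie)) >= threshold
```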

Obviously these more reliable techniques have some potential drawbacks. But Iain Corby, executive director of the Age Verification Providers Association, says these have largely been overcome.

From the consumer’s perspective, these checks are quite a lot of work to simply sign up to an app or website. But a number of providers operate across different websites, reducing the input required from the user. These providers may only require the user to scan their passport the first time they visit one of the websites that provider works with; for all subsequent sites, they’ll just need to take a quick picture to prove it’s them.

Privacy is another commonly cited concern. It’s a particular issue for age verification – it’s been suggested that there’s something of a catch-22, whereby websites aren’t allowed to collect personal data from minors, but have to collect their personal data in order to know whether they’re a minor or not.

But Corby says these technologies can operate in ways whereby data is never stored and processing takes place on the user’s device. Alternatively, any data which is sent off-device can be encrypted. For example, if the user is required to submit a photo, what reaches the age verification provider’s servers isn’t the photo itself, but rather a mathematical interpretation of that photo (which can’t be translated back into an image).
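As a rough illustration of that idea, a check running on the user’s device might transmit only the embedding vector, never the photo. The payload shape below is an assumption for illustration, not any provider’s actual API:

```python
# Sketch of on-device processing: the raw selfie never leaves the phone;
# only a fixed-length numeric representation of the face is transmitted.
# The payload fields are hypothetical.
import json

def build_verification_payload(selfie: bytes, embed_face) -> str:
    embedding = embed_face(selfie)  # computed locally, on the user's device
    payload = {
        "embedding": embedding.round(4).tolist(),  # one-way representation of the face
        "embedding_version": "v1",                 # tells the server which model was used
    }
    # Note: the selfie bytes are deliberately absent from the payload.
    return json.dumps(payload)
```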

And these techniques have received the seal of approval from the UK’s data regulator, the ICO. “Life got a whole lot easier last year when the ICO issued a formal statutory legal opinion stating that they are comfortable that you can use personal data, including sensitive personal data such as biometrics, for the purposes of age assurance under the public interest basis for data processing,” said Corby.

And this can serve advertisers too. Wherever a user has verified their age, this signal could be passed on to advertisers, allowing them to judge what kind of user they’re targeting.
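In practice, that signal would travel with the ad request. The snippet below is purely illustrative: the field names are assumptions, not taken from any particular spec.

```python
# Hypothetical shape of an ad request carrying an age-assurance signal.
# Field names here are illustrative assumptions, not a real spec.
ad_request = {
    "placement_id": "homepage-banner-1",
    "user": {
        # Set by the publisher or its age-verification provider; no raw
        # identity data is included, just the outcome of the check.
        "age_verified": True,
        "age_range": "18+",
        "verification_method": "id_scan",  # e.g. id_scan, credit_check, facial_estimation
    },
}

# A buyer could then decline to serve age-restricted campaigns unless the
# signal is present and positive.
def eligible_for_age_restricted_ads(request: dict) -> bool:
    user = request.get("user", {})
    return bool(user.get("age_verified")) and user.get("age_range") == "18+"
```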

A Matter of Judgement

It’s much harder to verify age as reliably when there’s no direct opportunity to ask the user for information. And this will be the case a lot of the time for advertisers trying to judge whether they’re engaging with a minor or not, especially on the open web.

Usually in these cases, the best solution is to judge the likely audience of the content itself. For example, a specialist website for children’s video content can fairly claim that its audience will mostly be children, while a financial advice blog could similarly claim that its audience will mostly be adults.
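A crude sketch of that kind of contextual judgement, using a keyword heuristic; real systems rely on far more sophisticated classifiers, and the keyword lists and logic here are invented purely for illustration:

```python
# Toy heuristic for guessing whether a page's likely audience is children or
# adults, based on its text. The keyword lists are assumptions.
CHILD_SIGNALS = {"cartoon", "colouring", "nursery", "playtime", "kids"}
ADULT_SIGNALS = {"mortgage", "pension", "investment", "isa", "dividend"}

def likely_audience(page_text: str) -> str:
    words = set(page_text.lower().split())
    child_hits = len(words & CHILD_SIGNALS)
    adult_hits = len(words & ADULT_SIGNALS)
    if child_hits > adult_hits:
        return "children"
    if adult_hits > child_hits:
        return "adults"
    return "unknown"

# Example: a financial advice article would be flagged as adult-leaning.
print(likely_audience("Our guide to pension and investment strategies"))  # "adults"
```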

Obviously these techniques aren’t massively reliable. After all, a particularly forward-thinking 15-year-old might want to read up on investment strategies. So inevitably, advertisers which target ads to these types of websites will end up reaching at least a few children.

The key question for the ad industry, in particular in relation to the DSA’s new rules, is what level of certainty will be required by law.

In the UK, this contextual approach is accepted under existing gambling ad rules. The Advertising Standards Authority requires that gambling brands avoid targeting children through their selection of media, or through the content of the ad itself. So they simply have to ensure they’re not advertising next to content which is geared towards children, and that the ads themselves don’t feature celebrities or characters which appeal to children.

This same approach could be taken by the DSA. But the AVPA’s Iain Corby suspects that given the current mood music from regulators, we’ll see tighter requirements for websites and apps to be certain of the audiences they’re offering advertisers.

“I think we’re reaching a tipping point where it’s generally going to be necessary to know the age, or at least the age range, of an online user for almost every service, except perhaps the most innocent and harmless,” he said. “I think advertisers are going to have to ask publishers and platforms to provide age-verified audiences, so that they can specifically target adults. There’s a lot of disdain for targeted ads towards children right now, it’s quite hard to see how advertisers could continue operating the way they have been so far.”



About the Author:

Tim Cross is Assistant Editor at VideoWeek.