Toxic Comments and Content Farms: The Challenges of Building a Child-Safe Video Platform

Tim Cross, 19 August 2019

Earlier this year, kidtech company SuperAwesome announced Rukkaz, a video platform designed specifically for children (focused on ages six to twelve) and family-oriented content. Whilst much of the content is user-generated content (UGC), all of it is vetted by humans to ensure it’s appropriate for children. In some ways the timing couldn’t have been better, as social platforms like YouTube and TikTok are currently facing well-documented legal challenges around their handling of child safety, the very problem Rukkaz is trying to solve. But from content to comments sections to advertising, UGC platforms have several fronts where children’s safety and privacy can be put at risk. VAN spoke with SuperAwesome co-founder and CEO Dylan Collins to hear how Rukkaz hopes to create a safe and trustworthy environment for younger viewers.

Why do you think there’s a need for a video platform specifically for kids?
Over the years we’ve seen more and more requests from all parts of the kids’ ecosystem to create more infrastructure for video, and for content creators in particular. As has been made fairly public, kids and family content creators and influencers have only ever really had YouTube as a platform to engage with their community and distribute video content. But YouTube was never designed for kids; it’s an adult platform, and its issues with the kids’ audience have been well documented absolutely everywhere. If you talk to creators in the category, you’ll hear their revenues are declining and their community features and engagement options are being turned off.

But to build a kid-safe platform you need global scale, to deliver on the promise of monetisation and to bring the right quality and quantity of content creators to the table. And because of our size and the infrastructure we already have in place, it made sense for us to do it.

How do you decide which type of content is appropriate for the six to twelve age group?
Ultimately you want a place which is actually going to be entertaining and useful for that audience, so you’ve got to allow edgy content to be in there while keeping it safe. If you think about Nickelodeon in the mid-nineties, that’s our sort of ethos around content. It’s edgy, it’s genuinely fun and interesting, and deep down it’s appropriate, but you’re not screaming about the fact that it’s appropriate.

Rukkaz is built on our community and social tech, so it’s allowing for moderated comments and the right kind of content, but without being overtly patronising about it. There’s a reason it’s called ‘Rukkaz’ and not ‘Kid Safe Videos’!

And the intention here is not to try and displace YouTube, or have creators completely leave YouTube. What we’re seeing is that as new video platforms emerge, they all provide different types of functionality and different bridges between audiences and their communities. So we’re doing the same, specifically focusing on the six-to-twelve-year-old segment.

A lot of the problems we’ve seen on YouTube come from the difficulty of moderating huge volumes of content. How will Rukkaz handle moderation?
About a year and a half ago we created a kid-safe influencer division, which in turn created a content certification programme for YouTubers called SafeFam. And that was because there were all these creators who were looking for content standards which they could publicly adhere to, to show they were trying to create safe environments. We talk a lot about brand safety, but in the kids and family space that’s really not well defined at all, certainly not within YouTube. So in Rukkaz, all creators go through the SafeFam certification system, and everything that goes up is approved; all content passes through our review queue.

Over time we’ll probably start to allow a concept of a verified user, where ‘super-users’ who were previously just viewers would be able to upload videos themselves. But it’s not an open user-generated content platform in the way that YouTube is. It will evolve over time, but step one is about making it the right kind of community and engagement platform for creators.
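As a rough illustration of the pipeline Collins describes – certified creators only, with nothing published until a human approves it – here is a minimal sketch in Python. All of the names (Video, ReviewQueue, the certified-creator set) are our own illustrative assumptions, not Rukkaz’s actual system.

```python
# A minimal sketch of a pre-publication review queue: only certified
# creators may submit, and nothing is visible until a human signs off.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"    # awaiting human review
    APPROVED = "approved"  # visible to kids
    REJECTED = "rejected"  # never published


@dataclass
class Video:
    creator: str
    title: str
    status: Status = Status.PENDING


class ReviewQueue:
    def __init__(self, certified_creators: set[str]):
        self.certified = certified_creators
        self.pending: list[Video] = []

    def submit(self, video: Video) -> None:
        # Only SafeFam-certified creators can even enter the queue.
        if video.creator not in self.certified:
            raise PermissionError(f"{video.creator} is not certified")
        self.pending.append(video)

    def review(self, video: Video, approved: bool) -> None:
        # A human moderator makes the final publish/reject call.
        video.status = Status.APPROVED if approved else Status.REJECTED
        self.pending.remove(video)


queue = ReviewQueue(certified_creators={"alice_gaming"})
clip = Video(creator="alice_gaming", title="Minecraft build challenge")
queue.submit(clip)
queue.review(clip, approved=True)  # human sign-off
print(clip.status)  # Status.APPROVED
```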

Rukkaz says it will use human-supervised algorithms. How will those work, and why are they necessary?
One of the challenges with YouTube’s discovery algorithms is that they’ve led to artificial demand for certain types of content. You see this with nursery rhyme content on YouTube: there’s such a proliferation of nursery rhymes, and some weird nursery rhyme content is emerging too. Partially that’s driven by an algorithm delivering content based on what you last watched, and by creators realising this is some of the most valuable content in terms of monetisation. So you get these content factories just producing derivations of nursery rhyme content.

We will have a discovery algorithm, but we’re including editors, so you’ve got human-driven suggestions of what you might like. There’s this mania around removing humans from algorithms, but in the world of kids you can’t do that; we think humans can enhance these algorithms.
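To make the idea concrete, here is one minimal way ‘human-supervised’ discovery could work, sketched in Python: algorithmic suggestions are filtered against an editorial blocklist and interleaved with editor picks. The interview doesn’t detail Rukkaz’s actual ranking internals, so the names and the blending rule here are assumptions.

```python
# A sketch of human-in-the-loop discovery: editors can veto algorithmic
# suggestions, and their own picks are interleaved into the feed.
def recommend(algorithmic: list[str], editor_picks: list[str],
              editor_blocked: set[str], k: int = 6) -> list[str]:
    # Humans veto first: anything an editor flagged never surfaces,
    # breaking the "watched one nursery rhyme, get a hundred" loop.
    candidates = [v for v in algorithmic if v not in editor_blocked]

    # Interleave editor-curated picks so discovery isn't driven
    # purely by what the child last watched.
    blended: list[str] = []
    while (candidates or editor_picks) and len(blended) < k:
        if editor_picks:
            blended.append(editor_picks.pop(0))
        if candidates and len(blended) < k:
            blended.append(candidates.pop(0))
    return blended


print(recommend(
    algorithmic=["nursery_rhymes_47", "nursery_rhymes_48", "science_show"],
    editor_picks=["craft_tutorial", "animal_facts"],
    editor_blocked={"nursery_rhymes_48"},
))
# ['craft_tutorial', 'nursery_rhymes_47', 'animal_facts', 'science_show']
```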

How will advertising on Rukkaz work, and how do you keep it child-friendly and respectful of their privacy?
It’s all built on our kidtech infrastructure; all our kid-safe ads are served by our kid-safe ad platform. That means there’s never any personal data collected whatsoever, and all of the ads are human-reviewed to make sure they’re appropriate for that audience in that part of the world. Instead of targeting individuals, we contextually match relevant ads to the right type of content.
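The distinction Collins is drawing is between behavioural and contextual targeting. Below is a minimal sketch of the contextual approach, assuming ads and videos are both tagged with content categories; the tags and function names are illustrative, not SuperAwesome’s actual ad-platform API.

```python
# Contextual (rather than behavioural) ad matching: targeting is a
# property of the content, so the matcher takes no user identifier
# at all and no personal data is needed.
ADS = [
    {"ad_id": "toy_blocks", "categories": {"crafts", "building"}},
    {"ad_id": "board_game", "categories": {"family", "games"}},
]


def match_ads(video_categories: set[str]) -> list[str]:
    # Rank ads by how many categories they share with the video.
    # Never consulted: who is watching, or what they watched before.
    scored = [(len(ad["categories"] & video_categories), ad["ad_id"])
              for ad in ADS]
    return [ad_id for score, ad_id in sorted(scored, reverse=True)
            if score > 0]


print(match_ads({"building", "games"}))  # ['toy_blocks', 'board_game']
```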

And what’s not really known is that the kids ad market is the biggest privacy-based ad market in the world, so it’s the healthiest, safest type of advertising you’re going to see in that respect.

So many platforms seem to struggle with their comments sections becoming toxic, hostile environments. How will Rukkaz avoid this problem?
We already have kid-safe social tools which are used to power kid-safe communities, and they combine a few things. Firstly, all commenting is done within a moderated environment, so every comment has passed through a moderation layer which has a human review component. There’s also the notion of an invisible safety score for users: if you’re a registered user, the system monitors the positive things you’ve done and ascribes you a different level of community visibility.
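Here is a minimal sketch of those two mechanisms – a moderation gate in front of every comment, plus an invisible per-user safety score that controls visibility. The thresholds, score values and blocklist are illustrative assumptions; the interview doesn’t specify the real rules.

```python
# Two-layer comment safety: an automated gate escalates suspicious
# comments to humans, and good behaviour grows an invisible score
# that widens how far a user's comments are shown.
BLOCKLIST = {"idiot", "stupid"}  # stand-in for a real moderation layer


def needs_human_review(text: str) -> bool:
    # Automated first pass; anything suspicious goes to a moderator.
    return any(word in text.lower() for word in BLOCKLIST)


class User:
    def __init__(self, name: str):
        self.name = name
        self.safety_score = 0  # invisible; grows with positive behaviour

    def visibility(self) -> str:
        # Higher-trust users get wider community visibility.
        if self.safety_score >= 10:
            return "community-wide"
        if self.safety_score >= 3:
            return "followers-only"
        return "own-profile-only"


def post_comment(user: User, text: str) -> str:
    if needs_human_review(text):
        return "held for human review"
    user.safety_score += 1  # positive behaviour nudges the score up
    return f"published ({user.visibility()})"


kid = User("gamer_sam")
print(post_comment(kid, "Great video!"))    # published (own-profile-only)
print(post_comment(kid, "you are stupid"))  # held for human review
```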

It seems to be a fairly universal rule that commenting brings out the douchebag in everybody! And kids will be kids. But everything we’ve built is designed to allow for that and to keep communities kid-safe. One of the challenges YouTube has had is it’s only had fairly binary options, like completely turning commenting off. But ultimately you have to facilitate community.

We’ve seen child protection problems arise when adults and children mix on the same platform. Do all UGC platforms need to have separate versions for children and adults in order to be safe for children?
Currently, pretty much every general-audience platform is fundamentally designed for adults – if you want to create an account, you have to be over thirteen. They assume there’s one type of user, and that is an adult. You can age-gate, but typically kids are going to follow the creators they want to follow regardless. So that’s why we think a separate platform for children is appropriate.

But I think in the future what we’ll have is an underlying AI that sits in every content platform and automatically detects whether a user is most likely a child or an adult, without asking. Then you can switch the underlying delivery technology from privacy and safety mode into adult mode, or vice versa. And given the progress we’ve seen on kids’ privacy laws and the like, I think within the next decade we’ll see legal requirements for this sort of AI to be operating behind the scenes. It’s the only logical way to ensure kids are being put into a safe environment regardless of which app or platform they’re on.
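This is speculative on Collins’ part, but the switching logic itself is simple to sketch. Assuming some upstream model already yields a probability that the current user is a child (the model, the threshold and the config fields below are all hypothetical), the delivery layer flips wholesale between profiles:

```python
# A sketch of the mode switch: given an estimated probability that
# the user is a child, pick an entire delivery profile at once.
def delivery_config(p_child: float) -> dict:
    if p_child >= 0.5:  # err on the side of treating the user as a child
        return {
            "mode": "privacy-and-safety",
            "collect_personal_data": False,
            "ads": "contextual-only",
            "comments": "moderated",
        }
    return {
        "mode": "adult",
        "collect_personal_data": True,  # subject to ordinary consent rules
        "ads": "personalised-allowed",
        "comments": "standard",
    }


print(delivery_config(p_child=0.92)["mode"])  # privacy-and-safety
print(delivery_config(p_child=0.12)["mode"])  # adult
```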

Do you think governments around the world are doing enough to protect children online?
We’re seeing more activity in this space than we ever have. Kids’ privacy and digital well-being is probably one of the most progressive and proactive areas of government involvement we’re seeing around the world. When we started SuperAwesome five years ago, the US had set the gold standard in terms of kids’ digital privacy, and other countries have now followed – the UK, Europe, India, Brazil and China have all made progress. Frankly, it probably represents an exception to the rule in terms of government having a positive impact! We’ve gone from kids’ digital privacy laws covering about 50 or 60 million kids five years ago to, in about two years’ time, around one billion kids covered by digital privacy law. It’s the first time we’re seeing the internet being globally regulated for a user type; we’re not seeing that anywhere else.


About the Author:

Tim Cross is Assistant Editor at VideoWeek.