The jury is still out on the extent to which contextual data can fill in gaps left by the loss of third-party cookies and other identifiers.
But context isn’t just a replacement for behavioural or personal data. Contextual data has been a mainstay of online advertising for pretty much as long as the internet has existed – just perhaps not the most glamorous one.
And over the past five or so years, contextual advertising has become more sophisticated – and more complicated. A range of businesses now handle contextual targeting in different ways, specialising in different platforms and filling different roles.
Below is our guide to the different categories of contextual video ad businesses, and how they differ from each other.
Metadata analysis

The most common, and perhaps simplest and most easily scalable, method of analysing context in video content is to derive it from metadata around a video.
This metadata will usually include the URL, as well as text included on the page. The idea is that if a video appears on a page where the word ‘football’ appears ten times, there’s a good chance the video is related to football.
Metadata might also include tags, categories, and other information added by the video creator themselves, though this isn’t always 100 percent reliable, especially on user-generated content platforms.
Many companies working with metadata use artificial intelligence models to hone their targeting, sometimes with an element of human review. This can, for example, help give heavier weighting to some metadata. If the aforementioned video with the word ‘football’ repeated ten times also includes the words ‘Peppa Pig’ three times, this might indicate it’s actually a piece of kids’ content – and thus not appropriate for a Sky Sports subscription ad.
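As a rough illustration, the keyword-weighting approach can be sketched as a simple scoring function. The category names, cue keywords, and weights below are purely hypothetical – real vendors use far more sophisticated models – but the basic mechanic of counting and weighting page-text signals looks something like this:

```python
import re

# Hypothetical keyword weights per contextual category. A strong
# kids-content signal (e.g. "peppa pig") outweighs generic sports terms.
CATEGORY_KEYWORDS = {
    "sport": {"football": 1.0, "goal": 0.5, "league": 0.5},
    "kids": {"peppa pig": 2.0, "cartoon": 1.0, "nursery": 1.0},
}

def score_categories(page_text: str) -> dict:
    """Score each contextual category from the text around a video."""
    text = page_text.lower()
    scores = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = 0.0
        for keyword, weight in keywords.items():
            # Count occurrences of the keyword and apply its weight.
            count = len(re.findall(re.escape(keyword), text))
            score += count * weight
        scores[category] = score
    return scores

page = "Football highlights: great goals. Also trending: Peppa Pig."
print(score_categories(page))
```

Here the single ‘Peppa Pig’ mention outscores the football terms, flagging the page as likely kids’ content despite the sports vocabulary.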
Companies: Zefr, Vibrant, Pixability, Channel Factory, Precise TV, video intelligence, Peer39, OpenSlate
Content analysis

Another method is to look at the video content itself in order to work out relevant contextual categories. The reasoning is that metadata is really just a proxy for the context inside the video content itself, and it might miss some aspects of that content.
This also allows longer content (for example, a TV show or film) to be broken down into smaller sections. A film might not feature cars overall, but might have one scene involving a lot of cars, creating a contextual opportunity.
This can be split into two techniques – which might be combined to create an even more accurate picture.
The first is to transcribe audio into text, and then analyse this text for contextual clues. Audio recognition can be very accurate, though it won’t always give a full picture of what’s happening on screen – for example, in a video with little or no dialogue.
The second is to use computer vision and image recognition technology, which analyse the frames within a video to build an overall picture of what it’s about. When accurate, this tech can create very granular data on the people, locations, items and brands contained within a video. But its complexity has made it slower to scale and gain adoption.
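The two signals can be merged per scene, which is also how longer content gets broken into smaller contextual opportunities. The sketch below assumes a transcript and a set of vision labels already exist for each scene; the taxonomy and cue words are illustrative, not any vendor’s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    start_s: int        # scene start time in seconds
    transcript: str     # speech-to-text output for this scene
    vision_labels: set  # object/location labels from image recognition

def scene_categories(scene: Scene, taxonomy: dict) -> set:
    """Tag a scene with every category whose cue words appear in either signal."""
    words = set(scene.transcript.lower().split())
    words |= {label.lower() for label in scene.vision_labels}
    return {cat for cat, cues in taxonomy.items() if words & cues}

# Illustrative taxonomy mapping categories to cue words.
TAXONOMY = {
    "automotive": {"car", "engine", "race"},
    "nature": {"forest", "river"},
}

# A single car-heavy scene inside a longer film becomes a contextual opportunity.
scene = Scene(start_s=420, transcript="Start the engine", vision_labels={"car", "road"})
print(scene_categories(scene, TAXONOMY))
```

This prints `{'automotive'}` – the scene qualifies for automotive targeting even if the film as a whole wouldn’t.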
Companies: Silverpush, GumGum, 7th Minute, TappX, Grapeshot
Performance-based context

This technique essentially combines standard contextual targeting with behavioural and performance data, picking out less obvious contextual opportunities for brands.
Analysis of performance data, for example, might find that ads for coffee companies have strong performance in terms of direct sales when run on content relating to fishing. So fishing can essentially be seen as a relevant context for coffee brands, and coffee brands looking to drive direct sales can use that information to contextually target fishing content.
Or similarly, analysis of behavioural data might show that consumers in the market for cosmetics tend to consume a lot of content relating to nature. So cosmetics brands might choose to target nature content, as well as content explicitly related to cosmetics.
Companies: Illuma, Seedtag, Pixability, Precise TV
Real world context
Context can be thought of as not just the content which someone is watching, but their “real world” context as well. This might be things like time of day, location, and weather.
These data points are fairly common in digital advertising, and wouldn’t always be thought of as contextual data.
But contextual companies using these data points combine them to create real world contextual categories, which say more about a viewer’s real-world status than just ‘it’s night time’.
If a user is watching a video from a CTV device at 8.00pm from a consistent location, this might be categorised as ‘evening viewing’. Or if they’re watching a video from a mobile device while on the move at 7.30am, this could be categorised as ‘morning commute’.
These real world contextual categories won’t always be accurate, but they can give an interesting picture of a viewer’s setting, and enable some creative targeting or optimisation opportunities.
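Categorisation along these lines can be thought of as rule-based: combining device type, local time, and movement into a single label. The rules and category names below are illustrative assumptions, not any vendor’s actual logic:

```python
from datetime import time

def real_world_context(device: str, local_time: time, moving: bool) -> str:
    """Combine device, time of day, and movement into a hypothetical
    real-world context label."""
    if device == "ctv" and local_time >= time(18, 0) and not moving:
        return "evening viewing"
    if device == "mobile" and time(6, 0) <= local_time < time(10, 0) and moving:
        return "morning commute"
    return "general"

print(real_world_context("ctv", time(20, 0), moving=False))    # evening viewing
print(real_world_context("mobile", time(7, 30), moving=True))  # morning commute
```

In practice these signals are probabilistic rather than hard rules, which is why such categories won’t always be accurate.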
Companies: Beemray, Groundtruth
Contextual data marketplaces

Sitting alongside these contextual targeting specialists, we’ve also seen a few contextual data marketplaces emerge.
These businesses essentially aggregate contextual data from a number of different providers, enabling advertisers to pull from a variety of sources to target and optimise their campaign, across a range of platforms.
This serves a number of purposes. Firstly, it allows buyers to combine all the different methods of deriving context from video described above, giving a fuller picture of context.
And it makes it easier to target based on shared contextual categories across formats. Many of the companies listed above specialise in social video, outstream video, or OTT and CTV. Contextual data marketplaces simplify the process of buying campaigns with shared contextual parameters across all of these formats.
Companies: IRIS.TV, Peer39