How Linius Uses ‘Video Virtualisation’ for Hyper-Personalisation

Tim Cross 25 March, 2019 

As artificial intelligence and machine learning technologies evolve, media companies are increasingly using personalisation as a point of differentiation, both in their advertising and in surfacing the content their audiences are most likely to find relevant and interesting.

Personalisation has now progressed so far that some believe a new term is required: ‘hyper-personalisation’. While there’s no set definition of hyper-personalisation, it’s generally used to describe personalisation that goes beyond the basics of name, location and browsing habits, using a much broader range of data points to personalise ads and media on the fly.

One of the companies pushing hyper-personalisation forward is Australian tech company Linius, whose personalisation capabilities are enhanced by the company’s ‘video virtualisation’ tech.

Freeing video from its container

Linius CEO Chris Richardson explained to VAN how it works. “All the video on the internet is encoded data wrapped in a container,” he said. “You can picture a glass of water, where the glass is the container and the water on the inside is the encoded video data itself. When we talk about virtualising video, what we do is remove the glass and just leave the water behind.”

“Now that’s no longer a playable video, that’s just raw data, and that’s what we index and what enables all the magic behind the scenes,” continued Richardson. “But to make it play on your device, you still need a container. So we create a new container, a new ‘glass’, but we leave it empty. So it’s a video file but with none of the data inside it – instead it points back to the internet where the data is sitting. And that enables us to modify that data on the fly between the source back on the internet and your actual player, and then on the player the data gets put back into the glass and reassembled.”
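
To make the “empty glass” idea concrete, here is a minimal sketch of what a virtual video might look like as data. Linius has not published its internal format, so the structure and field names below are hypothetical illustrations, not the company’s actual schema.

```python
# A hypothetical illustration of the "empty glass": a container that carries
# playable metadata but no media data, only pointers to where the encoded
# video lives on the internet.

virtual_video = {
    "container": {                      # the new, empty "glass"
        "format": "mp4",
        "codec": "h264",
        "duration_seconds": 45.0,
    },
    "sources": [                        # the "water" stays at its origin
        {
            "url": "https://example.com/match-highlights.mp4",  # hypothetical URL
            "start_seconds": 88.0,      # where the wanted data begins
            "duration_seconds": 45.0,   # how much of it to stream
        }
    ],
}

# At playback time the player (or an edge service) resolves each source
# reference, fetches just that slice of encoded data, and pours it back
# into the container so the device sees an ordinary, playable video.
```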

This virtualisation of video enables media companies to pull video data from a range of different clips and create one new piece of video content. “Today, if you search for a video on YouTube or wherever and say ‘show me the last ten Man City goals’, you’ll get however many thousand videos,” said Richardson. “The last ten Man City goals will be in there somewhere, but unless someone has gone and manually created a video of those last ten goals, then one video showing you what you want doesn’t exist.”

“With virtualisation, since that video is empty, it can pull the data from anywhere. So you can start saying things like ‘Goal number one is in the fifth video starting at 1.28, and runs for 45 seconds – stream those 45 seconds. Goals number two and three are in the third video at 4.28 and 7.40, stream those two videos.’”
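
Richardson’s Man City example can be sketched as a simple composition step: take a list of timed segments scattered across several source clips and lay them end to end as one virtual video. The Segment type and compose_stream() function below are illustrative assumptions, not Linius’ actual API.

```python
# A hedged sketch of stitching one virtual stream out of segments drawn
# from several source clips, mirroring Richardson's example.

from dataclasses import dataclass

@dataclass
class Segment:
    source_url: str        # clip that contains the moment we want
    start_seconds: float   # offset of that moment within the clip
    duration_seconds: float

def compose_stream(segments):
    """Return a playlist-style description of one continuous virtual video."""
    playlist, position = [], 0.0
    for seg in segments:
        playlist.append({
            "play_at": position,            # where the segment sits in the new video
            "fetch_from": seg.source_url,
            "source_offset": seg.start_seconds,
            "length": seg.duration_seconds,
        })
        position += seg.duration_seconds
    return {"total_duration": position, "entries": playlist}

# "Goal number one is in the fifth video starting at 1.28, and runs for 45 seconds..."
goals = [
    Segment("https://example.com/clip5.mp4", 88.0, 45.0),   # 1m28s into clip 5
    Segment("https://example.com/clip3.mp4", 268.0, 30.0),  # 4m28s into clip 3
    Segment("https://example.com/clip3.mp4", 460.0, 30.0),  # 7m40s into clip 3
]
stream = compose_stream(goals)
```

Nothing is re-encoded or copied in this model; the “new” video is just a set of pointers that the player resolves as it goes.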

Richardson says this in itself is one of the more promising areas where the tech can be applied for hyper-personalisation. Media companies can use the tech to create personalised streams of content based on a user’s search request, or past preferences and behaviour.

One company already trialling the tech is Swedish news platform Newstag. Newstag compiles clips from a number of global news media companies and allows users to create personalised feeds of news content based on the topics they’re most interested in.

Newstag CEO and founder Henrik Eklund says that the key draw of Linius is its ability to sort through a huge library of content “in a heartbeat”. “We handle thousands of clips per day, and machine learning is just not enough because we can sort it but we can’t access it, because there’s just too much content,” he said. Newstag is now experimenting with Linius’ tech to sift through content, enabling users to search for specific topics and be served a personalised video stream based on that topic (a beta version is available).
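
The search step Eklund describes could work along these lines: filter the clip library’s metadata by topic, then hand the matches to a stream composer like the sketch above. The metadata fields and tagging scheme here are assumptions for illustration, not Newstag’s or Linius’ actual schema.

```python
# A speculative sketch of topic search over indexed clip metadata.

clip_library = [
    {"url": "https://example.com/news-001.mp4", "topics": ["climate", "policy"],
     "start_seconds": 0.0, "duration_seconds": 62.0},
    {"url": "https://example.com/news-002.mp4", "topics": ["sport"],
     "start_seconds": 12.0, "duration_seconds": 40.0},
    {"url": "https://example.com/news-003.mp4", "topics": ["climate"],
     "start_seconds": 30.0, "duration_seconds": 55.0},
]

def search_topic(library, topic):
    """Pick out every indexed segment tagged with the requested topic."""
    return [clip for clip in library if topic in clip["topics"]]

matches = search_topic(clip_library, "climate")
# Each match can then become one segment in a single personalised stream,
# rather than a list of whole videos the viewer has to wade through.
```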

Eklund says that media companies with a lot of archive video content would likely benefit most, as relevant parts of old video content could be resurfaced and served up as part of a personalised video stream, based on a search request.

All of these capabilities rely on content recognition, and Linius isn’t in the content recognition game itself. But Richardson said that many premium video content owners already have a lot of metadata describing what’s happening in their video content. Where this isn’t the case, Linius works with third party AI specialists, which he says are “leapfrogging each other on a daily basis in terms of their skillsets”.

Personalised creative and ad insertion

Linius is trialling the tech for hyper-personalised advertising too. The company recently partnered with video delivery solutions company Hemisphere to build a hyper-personalised video advertising prototype.

This ad personalisation has two strands. One is the creation of personalised video ads based on user data, which could for example include the viewer’s name, their face, or a personalised offer as part of the creative (though similar things have been managed elsewhere).

But the ads themselves can also be inserted at any point within the video, meaning they can be placed at contextually relevant times within a film or TV show.
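Because the virtual video is just a list of pointers, splicing an ad in mid-content amounts to splitting the relevant content entry around the chosen timestamp. The insert_ad() function below is a hedged illustration of that idea, not Linius’ or Hemisphere’s implementation.

```python
# A hedged illustration of mid-content ad insertion into a virtual timeline.

def insert_ad(timeline, ad, at_seconds):
    """Split the content at `at_seconds` and place the ad in the gap.

    `timeline` is a list of {"fetch_from", "source_offset", "length"} entries
    (as in the compose_stream() sketch); `ad` is an entry of the same shape.
    """
    new_timeline, elapsed = [], 0.0
    for entry in timeline:
        end = elapsed + entry["length"]
        if elapsed <= at_seconds < end:
            head = at_seconds - elapsed
            # first part of the content entry, then the ad, then the remainder
            new_timeline.append({**entry, "length": head})
            new_timeline.append(ad)
            new_timeline.append({**entry,
                                 "source_offset": entry["source_offset"] + head,
                                 "length": entry["length"] - head})
        else:
            new_timeline.append(entry)
        elapsed = end
    return new_timeline
```

The contextually relevant moment itself would come from the same metadata that powers search, e.g. a tag marking a scene change or a subject that matches the campaign.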

Richardson believes there is potential for deep analytics too. From a technical perspective the insertion looks like server-side ad insertion, but Richardson says Linius can see when an ad has been fast-forwarded, paused or muted, and that information can be fed back to the advertiser to show what is and isn’t working, and give them a better idea of how their ads are being watched.
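
The kind of telemetry Richardson describes might look something like the records below. The event names and fields are assumptions for illustration; the point is simply that interactions with an inserted ad can be captured and reported back.

```python
# A speculative sketch of playback telemetry for an inserted ad.

import json
import time

def record_ad_event(ad_id, event, position_seconds):
    """Build one analytics record for an interaction with an inserted ad."""
    return {
        "ad_id": ad_id,
        "event": event,              # e.g. "fast_forward", "pause", "mute"
        "position_seconds": position_seconds,
        "timestamp": time.time(),
    }

events = [
    record_ad_event("campaign-42", "pause", 3.5),
    record_ad_event("campaign-42", "fast_forward", 5.0),
]
print(json.dumps(events, indent=2))  # what might be sent back to the advertiser
```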

Newstag’s Eklund said his company is not using Linius’ advertising capabilities yet, but said it would be “a very natural next step”. And he sees possibilities for how sponsorship could work in products like Newstag’s AI search. “If you search for something related to new cars for example, you can have a car manufacturer sponsoring that search or being part of that experience.” That sponsorship would be inserted into the stream itself rather than running as a pre-roll ad.

The potential use cases for video virtualisation go far beyond how we think about video search today, and could quite genuinely revolutionise how we consume video. For example, a user could request all comedy clips on a certain theme from a comedy festival and view them as one continuous stream. If someone wanted to watch all the reviews of a particular car model in one video, it would simply be a question of requesting it. And if a researcher wanted to view all news clips on a certain issue from multiple sources, the compilation could be produced instantly and without duplication.


About the Author:

Tim Cross is Assistant Editor at VideoWeek.