Different Attention Measurement Tools Produce Significantly Different Results, Finds ARF Study

Tim Cross-Kovoor 28 May, 2024 

Attention measurement has grown in popularity in recent years, thanks in part to its strong intuitive pull. Attention is table stakes for any ad to be effective – if audiences don’t pay any attention to an ad, it’s unlikely to have made an impression. Meanwhile, high levels of attention suggest audiences are engaging with an ad, which seems likely to lead to positive outcomes for the marketer.

But ‘attention’ isn’t a particularly well-defined word (at least in the ad industry). Different vendors sharing the ‘attention’ label use very different techniques for measuring it. In some cases these vendors’ products are designed to work in live campaigns on the open web, meaning they have access to far fewer data points than tools which run in lab conditions.

The Advertising Research Foundation, an influential ad research body founded by the ANA and 4As, set out just under two years ago to study the validity of these emerging measurement tools, essentially asking whether they do what they say on the tin.

Following initial groundwork which profiled and categorised the various vendors and their approaches to attention measurement, the second phase looked at the role of attention in creative testing, while the third phase will focus on media. Results from this second phase are out now, and those results are… mixed.

What are we measuring?

For its study, the ARF asked attention measurement companies to provide attention data for 32 pieces of creative, covering different brand categories, campaign objectives, media types, and creative approaches. Of the vendors approached by the ARF, 12 contributed their data. Methodologies used by these companies included post-ad surveys, eye tracking, facial coding, and neurological monitoring.

The ARF found that these companies’ definitions of attention generally differed from those used by industry bodies, and tended to emphasise aspects which align best with their method of measurement. And this was reflected in the data itself.

Across these vendors, the ARF looked at the congruity of their scores. If one vendor ranks a piece of creative well for attention, do others agree? And are they aligned on which pieces of creative perform poorly?

The answer, across different methodologies, was: not really. The ARF took the companies’ data and essentially standardised it into 1-5 ratings, based on how each piece of creative’s score compared with the others. And there was significant variation. In the data presented in the ARF’s report, each ad received ratings of 1, 2, 3, and 4 from different vendors, while two (of the four ads shown) received the full range of 1-5.
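To make the idea concrete, below is a minimal sketch (in Python) of how raw scores from several vendors might be standardised into relative 1-5 ratings and then checked for agreement using a rank correlation. The scores, the quintile-style mapping, and the use of Spearman correlation are illustrative assumptions, not the ARF’s actual procedure or data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical raw attention scores: rows are ads, columns are vendors.
# Illustrative numbers only, not data from the ARF study.
raw_scores = np.array([
    [0.62, 71.0, 3.4],
    [0.48, 55.0, 4.1],
    [0.91, 60.0, 2.2],
    [0.30, 82.0, 3.9],
    [0.75, 49.0, 2.8],
])

def to_relative_rating(scores):
    """Convert one vendor's raw scores into 1-5 ratings based on how
    each ad's score compares with the others (quintile-style ranks)."""
    ranks = scores.argsort().argsort()        # 0 = lowest-scoring ad
    return 1 + (ranks * 5) // len(scores)     # map ranks onto 1..5

ratings = np.column_stack(
    [to_relative_rating(raw_scores[:, v]) for v in range(raw_scores.shape[1])]
)

# Agreement check: pairwise Spearman rank correlation between vendors.
for a in range(ratings.shape[1]):
    for b in range(a + 1, ratings.shape[1]):
        rho, _ = spearmanr(ratings[:, a], ratings[:, b])
        print(f"vendor {a} vs vendor {b}: rho = {rho:.2f}")
```

In a setup like this, high pairwise correlations would mean the vendors broadly agree on which creative attracts attention; the ARF’s finding was that, across methodologies, they largely did not.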

Looking within methodologies, there was more consistency. The survey-based methods ranked each ad quite similarly, as did those which used a mix of facial coding and eye tracking. However, for companies which used eye tracking alone, there wasn’t much correlation between results.

The ARF’s study also looked at whether attention metrics for ads correlated with advertisers’ own reports of how well that ad delivered. In other words, if an ad scores highly on an attention measure, does that mean it’s more likely to have achieved the advertiser’s goals?

The answer again was no. “Findings show that there is little to no correlation between the advertisers’ reported success level of the ad and that of the attention measurement companies, and there is no clear trend amongst measurement strategies,” said the ARF’s report.

Attention to detail

The ARF’s report goes into detail about possible technical reasons for these discrepancies, including limitations of the study itself. Nonetheless, the study raises some serious questions for those measuring attention.

While attention vendors all claim to measure the same thing, the ARF’s data suggests that’s not the case. “In the final analysis, even though these results can all be labeled attention, they very often measure different things,” says the ARF report. “While scale and cost are important, buyers of attention metrics need to understand the nuances of each method.”

And these attention metrics don’t appear to indicate campaign success.

That’s not to say these metrics aren’t valid. On the inability to predict campaign success, the ARF emphasised that advertisers’ own evaluations of campaign success aren’t perfect either. Plus, it’s generally acknowledged that a baseline of attention is required for an ad to be effective at all. So it’s still important to know whether this baseline has been crossed.

On the lack of correlation between attention vendors’ metrics, the ARF said this highlights the importance of understanding exactly what each company is measuring, rather than assuming each metric is an accurate measure of attention overall. An eye tracking vendor, for example, is ultimately just that – a company which tracks whether an individual looked at an ad.

Finally, while the ARF found that those in the industry tend to assume that creative is more important than media for driving attention, it also cited research suggesting this isn’t the case. The next part of the ARF’s initiative will look at vendors’ ability to measure how media affects attention – it may be there’s more correlation here.



About the Author:

Tim Cross is Assistant Editor at VideoWeek.