Your guide to evaluating advertising effectiveness benchmarks

As outlined in our larger report on advertising effectiveness normative benchmarks, as well as our recent deep-dive on how best to leverage those benchmarks, norms are essential to good advertising research. Benchmarks give researchers a baseline for ad performance, and high-quality ones enable increasingly confident assertions about campaign impacts. Unfortunately, if your norms are built on ancient data, outdated platforms, or invalidated assumptions, their value to the organization is effectively null, if not negative. This article is your guide to evaluating advertising effectiveness benchmarks.

 

What does ‘high quality’ advertising effectiveness normative benchmark data look like?


The old adage 'garbage in, garbage out' holds in any line of reputable research, and it's even more apt when leveraging normative data. Because norms are aggregations of large, diverse datasets, any data-quality issue is amplified by the scale and longitudinal nature of those datasets. Advertisers and their research partners should establish core quality standards for their benchmarks, and should remediate any issues before applying normative data to a specific advertising effectiveness evaluation. If those standards aren't met, using mediocre data for benchmarking is wishful (and suboptimal) thinking.


But how does one determine the ‘quality’ of a given normative dataset? Marketers should focus on three key factors:

A. Breadth of attitudinal and behavioral metrics: is this data wide enough?
B. Coverage across platforms and channels: can this data capture my audiences?
C. Temporal recency of data: is this data relevant today?

Below, we unpack each of these factors and explain why it is critical to building a high-quality benchmark dataset and, ultimately, to creating effective marketing campaigns.

 


Breadth of attitudinal and behavioral metrics


In most cases, advertising benchmarks are constrained by their overall breadth. They may be able to tell you, with some accuracy, what a 'good' or 'bad' campaign looks like on a single metric (e.g., brand awareness, brand favorability, or purchase intent), but they can rarely tell you this across a multitude of metrics. It follows that if you're interested in how your campaigns impact multiple metrics (and you should be), you're left relying on different sources for Metric A's benchmarks vs. Metric B's.


This breadth problem is particularly troublesome because successful advertising measurement should assess both attitudinal and behavioral metrics. Almost all legacy research vendors rely solely on top-of-funnel attitudinal metrics to assess impact, while bottom-of-funnel information on conversions and sales is only available from different vendors. For example, you can find site conversion rate norms from a variety of sources, but they all depend on outdated cookie tracking. Some e-commerce sites have norms, but only if you are advertising on their sites. Unfortunately, these one-off normative metrics from a hodgepodge of sources offer only a partial view of the complete path-to-purchase funnel. Attitudinal lift is critical, but a lack of unified visibility into behavioral impacts (e.g., search, site visitation, add-to-cart) limits confidence in overall ROI.


So even if a provider can give you solid benchmarks on a few brand metrics - let's say unaided awareness and brand favorability - they may miss how the same ads are influencing digital behavior like search and site visitation. For that digital lens, you'll have to go to another provider, run another set of studies, and leverage a totally different set of benchmarks. Sounds like just what your limited-bandwidth team would enjoy doing, doesn't it?


Let's say your newest campaign drove a 5-point lift in purchase intent. That's a fantastic outcome, and one your organization presumes will influence down-funnel behaviors like search and sales. But if you need two research providers to assess lift - one for attitudes and another for digital behavior - it's very difficult to determine whether that lift in purchase intent truly drives downstream consumer actions. That's why it's so critical to work with partners who assess both attitudinal and behavioral lift in parallel. They can enable analysis of whether specific brand metrics drive specific outcome metrics, both at the campaign level (for your ads) and at the benchmark level (for your industry).
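To make that concrete, here is a minimal, purely illustrative sketch in Python. The campaign names and lift figures are invented, and the analysis is deliberately simple: with attitudinal and behavioral lift measured in parallel by a single provider, you can ask whether the campaigns that move purchase intent are also the ones that move site visitation.

```python
# Hypothetical per-campaign results from a provider that measures attitudinal
# and behavioral lift in parallel. All values are invented percentage-point lifts.
campaigns = [
    {"name": "Spring launch", "purchase_intent_lift": 5.0, "site_visit_lift": 3.2},
    {"name": "Summer promo",  "purchase_intent_lift": 1.5, "site_visit_lift": 0.4},
    {"name": "Holiday push",  "purchase_intent_lift": 3.8, "site_visit_lift": 2.1},
    {"name": "Brand refresh", "purchase_intent_lift": 0.7, "site_visit_lift": 0.9},
]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

intent = [c["purchase_intent_lift"] for c in campaigns]
visits = [c["site_visit_lift"] for c in campaigns]
print(f"Intent lift vs. site-visit lift correlation: {pearson_r(intent, visits):.2f}")
```

With only a handful of campaigns this is directional at best, but it is exactly the kind of brand-to-behavior analysis that becomes impossible when the two lifts come from different vendors, different samples, and different benchmarks.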




It follows that marketers should look for providers who can cover the vast majority of the funnel through one program. Standardized research studies that assess brand and behavioral lift within the same overarching methodology help you (1) cover all important KPIs in one fell swoop, (2) identify campaigns that move distinct KPIs differently, and (3) compare every campaign you run against its own internal competitive set. From that single data source, you can also leverage normative benchmarks from your own and other industries to see which KPIs you're having more or less success with.
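As a rough illustration of point (3), the sketch below compares one campaign's lift on each KPI against a category norm for the same KPI from the same source. The numbers and field names are hypothetical, not any provider's actual schema.

```python
# Hypothetical percentage-point lifts for one campaign, alongside the
# corresponding category norms from a single benchmark source.
campaign_lift = {"awareness": 4.0, "favorability": 2.5, "purchase_intent": 5.0, "site_visits": 1.0}
category_norms = {"awareness": 3.0, "favorability": 2.0, "purchase_intent": 2.5, "site_visits": 2.0}

for kpi, lift in campaign_lift.items():
    norm = category_norms[kpi]
    index = 100 * lift / norm  # 100 = at norm; above 100 = beating the norm
    print(f"{kpi:>16}: lift {lift:+.1f} pts vs. norm {norm:+.1f} pts (index {index:.0f})")
```

The arithmetic is trivial; the point is that one consistent source supplies both sides of every comparison, so an index of 50 on site visits means your campaign genuinely underperformed, rather than reflecting a mismatch between vendors.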


Consistency of source helps ensure each study is conducted on a similar audience, eliminating the risk of comparing 'apples' in one study to 'oranges' in another. Not only does this make for much stronger research and conclusions, it also helps marketers communicate their findings to stakeholders and advocate for marketing effectiveness. Explaining the nuances of two, three, or even four vendor datasets to an internal audience is a near-impossible task, while describing a single, elegant solution makes for a highly compelling internal narrative.



Coverage across platforms and channels


Most advertising effectiveness benchmarks are hamstrung by an inability to see across all channels and platforms in the digital landscape. The slow death of third-party cookies, the limited accuracy of IP addresses, and the walling off of platform-specific user data prevent marketers from effectively tracking user behavior. Meanwhile, changes in the TV landscape (the shift to CTV and OTT) are making viewing behavior increasingly fragmented, creating more blind spots for ad exposure. Together, these issues leave huge gaps in measurement validity, as exposed groups are lost in an immense digital ocean.


When working with research partners on normative benchmarks, you need to have a keen understanding of where your consumer coverage starts and ends. In other words, you need to know where providers can measure exposure and subsequent digital behavior. Start by asking yourself - or your provider - some of the following questions:


Exposure visibility: Do these benchmarks assess ad exposure impacts on all of the platforms you run campaigns through? Or are they limited to a few platforms where only a small subset of your campaigns runs?


If a given provider can only see ad exposure on certain platforms, their benchmarks are likely heavily biased by channel. Most brands and campaigns take a cross-channel approach: place ads across the digital, TV, and OOH universe, and assume that this coverage connects with a multitude of distinct audiences. But if your research provider cannot 'detect' ad exposure on certain platforms due to technology constraints, their benchmarks will skew toward the platforms they can access. That leads to benchmarks with very low generalizability, on top of creating large blind spots in your understanding of typical campaign performance in your preferred channels.


Longitudinal identifiers: Are the benchmarks based on clearly traced consumer behavior patterns that span critical platforms and channels? Or are you relying on outdated, risk-heavy technology?


Keeping tabs on exposed and unexposed consumers in a highly fragmented digital environment is a huge challenge. In most cases, vendors rely on nearly-deprecated methodologies that face substantial hurdles in a privacy-focused present and future. Apple, Google, and other titans of the tech industry continue to scale back access to the cookie-like tags that enable legacy ad effectiveness methodologies. Without those tags, research providers are scrambling to find strong, consistent ways to view longitudinal consumer behavior across the digital ecosystem.


If you're working with vendors who use these legacy technologies, benchmarking advertising effectiveness is fraught with bad assumptions. As privacy regulations continue tightening, what's the value of a norm based on technology that won't be viable in the near term? Even if those antiquated norms are valuable right now, as privacy regulations evolve further it will become impossible to compare new research to those old databases. If digital behavior is a critical KPI for your organization's ad effectiveness efforts, find a partner who can reliably assess and benchmark digital behavior without relying on rapidly deprecating cookies and their proxies, such as mobile IDs and IP addresses.


 

Temporal recency of data


Many benchmarks and normative datasets are rapidly losing value because they rely on distant historical campaigns for scale. Campaigns tested 5-10 years ago may be considered ‘valid’ for inclusion in a typical database, even though they launched on antiquated platforms and at a time when consumer behavior differed notably from where it sits today. Generational changes, COVID-19 impacts, and geopolitics are transforming advertising and corresponding consumer behavior in ways that make recency a critical component of reliable benchmarks.




When working with research partners, probe into the recency and relevance of the campaigns included in normative benchmarks. Do you really want to evaluate your campaign performance against a set of heavily dated ads that ran a digital generation ago? If not, partner with a vendor who can leverage a high volume of recent studies, and one you trust will continue to grow as the advertising industry evolves. Being stuck with a given provider due to historical circumstances is challenging, but moving to one who is more relevant in today's environment can help future-proof your research efforts and normative comparisons.
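If you want to pressure-test a provider's claims, a simple recency audit goes a long way. The sketch below assumes you can obtain the fielding dates of the campaigns behind a norm (the dates shown are invented) and summarizes how old that evidence really is.

```python
from datetime import date

# Hypothetical fielding dates for the campaigns included in a normative benchmark.
campaign_dates = [
    date(2016, 5, 1), date(2019, 9, 15), date(2022, 3, 10),
    date(2023, 1, 20), date(2023, 11, 5),
]

as_of = date(2024, 6, 1)  # assumed evaluation date
ages = sorted((as_of - d).days / 365.25 for d in campaign_dates)

median_age = ages[len(ages) // 2]
recent_share = sum(age <= 2 for age in ages) / len(ages)

print(f"Median campaign age: {median_age:.1f} years")
print(f"Share fielded in the last 24 months: {recent_share:.0%}")
```

If the median age lands in the 'digital generation ago' range, the norm is describing a market that no longer exists.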

 


 

Checklist for robust ad effectiveness benchmark data


Do your benchmarks provide what you need to drive organizational value?

  • Seeing 'full-funnel' across attitudinal and behavioral metrics
    I understand where ads are/aren't working well across the entire path-to-purchase

  • Using consistent methodology across all campaigns, sites and channels
    My benchmarks are not unduly influenced by specific publishers or data partners

  • Enabling cross-channel coverage to span otherwise 'gated' domains
    I am actively minimizing blind spots in my measurement practices

  • Referencing highly recent and relevant data
    My benchmarks are generalizable for current and future digital-first consumers

  • Benchmarks give me all the qualities needed for confident attribution


Missing something from the checklist? Not to worry! DISQO offers quarterly ad effectiveness benchmarks that can replace your suboptimal legacy solutions. With full-funnel measurement, cross-channel visibility, and up-to-date data, these benchmarks are designed to help marketers discover what 'good' looks like in a modern ad campaign, and to test their own campaigns against that standard. On top of that, we publish regular reporting on our benchmark data, helping clients think through new and unique applications of our industry-leading product.

Instead of relying on antiquated solutions, come learn more about how DISQO's advertising benchmarks are setting a new standard for campaign evaluation. Contact us for a walkthrough today, and download our most recent report to see what you might be missing.

