Underperforming Advertisements?
Retail media networks have become a popular model in digital advertising, pioneered by giants such as Amazon and Walmart. These platforms offer brands the opportunity to place their pitches directly on the "digital shelf," catching users who are already in a buying mood.
Marketers judge the impact of their campaigns with return on ad spend (ROAS): the revenue attributed to a campaign divided by the amount spent on it. The calculation, however, is not as straightforward as it seems; many decisions and details go into producing that single number. Retail media networks can measure campaign effects with unusual clarity because shoppers are often logged into an account, leaving a data trail from ad exposure to purchase.
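The arithmetic itself is simple. A minimal sketch, using hypothetical figures:

```python
def roas(attributed_revenue, total_cost):
    """Return on ad spend: attributed revenue divided by campaign cost."""
    if total_cost <= 0:
        raise ValueError("campaign cost must be positive")
    return attributed_revenue / total_cost

# A campaign that drove $50,000 in attributed sales on $10,000 of spend:
print(roas(50_000, 10_000))  # 5.0, i.e. $5 returned per $1 spent
```

Everything difficult about ROAS lives in how those two inputs are defined, not in the division.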
However, ROAS rests on significant assumptions and methodological differences that undermine cross-platform comparisons. The most common are the attribution window and conversion lag, the revenue attribution model, which costs are included, the accounting framework, and whether the calculation is consistent over time versus across platforms.
The attribution window and conversion lag define the period after an ad exposure or click during which revenue is tracked. Some platforms calculate ROAS on a cohort basis, dynamically updating attributed revenue as conversions continue to arrive after the initial campaign spend. Revenue attribution models also vary: some networks use last-click attribution while others use multi-touch, which changes which sales count toward ROAS.
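How much the window choice matters can be seen in a small sketch; the post-click purchase data here is hypothetical:

```python
# Hypothetical post-click purchases: (days after click, order value)
purchases = [(1, 40.0), (3, 25.0), (10, 60.0), (21, 35.0)]

def attributed_revenue(purchases, window_days):
    """Sum the purchases that fall inside the attribution window."""
    return sum(value for days, value in purchases if days <= window_days)

print(attributed_revenue(purchases, 7))   # 65.0 under a 7-day window
print(attributed_revenue(purchases, 30))  # 160.0 under a 30-day window
```

The same shopper behavior yields very different numerators depending on how long the platform keeps counting.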
Beyond direct media spend, calculations may or may not include additional costs such as creative production or management fees, altering the denominator of the ROAS formula. Some networks align ROAS with accrual accounting while others use cash accounting, which affects both the timing and the amount of attributed revenue.
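To illustrate how the denominator choice moves the number, here is a sketch with hypothetical costs:

```python
attributed_revenue = 50_000.0
media_spend = 10_000.0
creative_production = 2_000.0   # hypothetical non-media costs
management_fees = 1_500.0

# Media-only denominator vs fully loaded denominator
media_only_roas = attributed_revenue / media_spend
fully_loaded_roas = attributed_revenue / (media_spend + creative_production + management_fees)

print(round(media_only_roas, 2))    # 5.0
print(round(fully_loaded_roas, 2))  # 3.7
```

The same campaign reports a 5.0x or a 3.7x return depending solely on which costs make it into the denominator.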
A single retail media network may keep its ROAS calculation consistent over time, allowing for tracking performance trends within that platform. However, differing methodologies across platforms mean that ROAS is not inherently comparable from one network to another.
These differences mean advertisers must carefully align methodologies and assumptions when comparing ROAS across platforms; otherwise, disparate calculations can lead marketers to falsely conclude that one platform outperforms another, producing misleading conclusions and inefficient budget allocation.
To make meaningful comparisons, advertisers must align assumptions around the ROAS calculation method, such as matching attribution windows and cost inclusions, or create customized, unified metrics with partners. Some ROAS metrics account continuously for conversion lag, while others provide static snapshots, affecting cross-platform performance assessment.
ROAS typically measures campaign-level efficiency, but platforms vary in granularity and scope, which complicates cross-platform benchmarking. Advertisers should ask questions about the granularity of sales attribution, who gets credit for a purchase, extrapolation of untraceable sales, and multiple sales attribution to multiple campaigns or channels.
Marketing leaders need a firm understanding of the components that make up the ROAS numbers they're using to ensure consistency, accuracy, and transparency in their data. Through conversations with advertising partners, advertisers should be able to design ROAS measures that meet their needs.
Researchers analysed nearly 600 advertising campaigns run with Albertsons in 2023 and 2024 to test the impact of common assumptions behind ROAS calculations. They found that seemingly ordinary, defensible choices can move ROAS by 63 percent, enough to flip a go decision to a no-go. Taking a proactive role in how advertising partners gather and report ROAS data makes a real difference in assessing your campaigns' effectiveness.
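With hypothetical numbers, a swing of that size can reverse a decision outright:

```python
threshold = 3.0          # hypothetical go/no-go hurdle for this advertiser
reported_roas = 4.0      # ROAS under one set of defensible assumptions
swing = 0.63             # the 63 percent swing the researchers observed
alternative_roas = reported_roas * (1 - swing)  # same campaign, different assumptions

print(reported_roas >= threshold)     # True: go
print(alternative_roas >= threshold)  # False: no-go
```

Neither number is wrong; they simply answer differently framed questions, which is exactly why the framing must be agreed on before budgets move.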