
Your Marketing Metrics Look Great. Here's Why They Might Be Wrong.

  • Feb 18
  • 12 min read

Your dashboard looks healthy. But does it tell you what's actually driving your revenue?


I want to start with something that makes most performance marketers uncomfortable.

 

Your ROAS is strong. CAC is stable. Conversion rates are trending up. Revenue is growing. And yet: how much of that growth would have happened without your marketing?

 


That is not a philosophical question. It is a budget allocation question. And most teams never answer it honestly, because the platforms they depend on were never designed to help them answer it.

 

In my previous blog, "If Your Biggest Platform Crashed Tomorrow, What Happens to Your Revenue?", we measured structural dependency - how exposed your revenue is to a single platform. This blog goes one level deeper: why that dependency is so hard to see, and why your measurement system is actively making it worse.


The Problem with Attribution


Consider a typical consumer journey.

 

Someone sees your YouTube ad while watching a product review. They search your brand three days later. They click your paid search result. They buy.

 

Search gets the credit. Or Meta. Or Performance Max, depending on your attribution model. But here is the question nobody asks: would they have bought anyway?

 

Attribution tells you where the conversion happened. It does not tell you what caused it.

 

That gap between where a conversion occurred and what actually drove it is where most marketing budgets quietly leak.

 

Why Platforms Make This Worse


This is not random measurement error. It has a direction.

 

Platforms are optimized to appear in high-credit positions, particularly last-touch. Their reporting systems are built to show contribution, not causation. And when you browse a product, that signal immediately feeds back into targeting. Retargeting appears. Abandoned cart reminders appear. When you convert, the platform claims credit for a decision that was already in motion.

 

You are not only measuring your marketing. You are measuring the platform's algorithm performing on top of existing demand.

 

This is not malicious. It is by design. But the result is the same: your dashboards overstate what is working.


What Measurement Distortion Actually Costs You


The issue is not just accuracy. It is what happens downstream when you act on inaccurate data.

 

When a platform consistently over-attributes conversions, your capital follows the signal. Budgets flow toward it. Dependency deepens. Diversification shrinks. The platform gains pricing power. CAC inflates slowly, then suddenly.

 

Here is what that loop looks like in practice:

 

Attribution over-credits demand capture →

You allocate more budget to "high-performing" channels →

Dependency concentration increases →

Switching costs rise →

Platforms gain pricing power →

CAC inflates →

Repeat.

 

Notice what is happening here. What looks like optimization is quietly building structural fragility. Your dashboards improve. Your resilience erodes.

 

This is the measurement-dependency connection. And it is the reason why reducing platform dependency is so much harder than it looks. The measurement system keeps pointing you back toward the very channels you are trying to de-risk.


Three Types of Growth


Not all conversions are equal. But attribution treats them as if they are.

 

When you run incrementality testing, which we will get to, you start to see three distinct categories:

 

  • Real Growth - Conversions that would not have happened without your marketing. This is the only category that represents genuine demand creation.

 

  • Captured Demand - Conversions that were already likely. The consumer was already in market. Your marketing showed up at the right moment, but did not create the intent. You captured it.

 

  • Platform Noise - Variance from seasonality, category trends, or external factors that would have driven revenue regardless.

 

In many businesses, a significant portion of what appears as growth is actually captured demand or noise. The platform is surfacing purchase intent that already existed, crediting itself for the conversion, and reinforcing your belief that you are the one creating it.

 

When you optimize budget allocation based on that signal, you get very good at capturing demand. You get progressively worse at creating it.

 

That is a problem because captured demand does not scale. It is finite. Real growth is what compounds.
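To make the split concrete, here is a toy decomposition in Python. Every number is invented, and the simple baseline subtraction stands in for a proper holdout analysis (see the 90-day plan below); treat it as a sketch of the idea, not a method.

```python
# Toy decomposition of dashboard-attributed conversions into the three
# categories. All numbers are hypothetical, and the baseline subtraction
# is a stand-in for a real holdout design.

attributed_conversions = 1_000  # what the dashboard credits to the channel
holdout_baseline = 650          # scaled conversions in a matched, unexposed group
seasonal_component = 120        # share of the baseline explained by seasonality

real_growth = attributed_conversions - holdout_baseline  # 350: demand actually created
captured_demand = holdout_baseline - seasonal_component  # 530: intent that already existed
platform_noise = seasonal_component                      # 120: would have happened regardless

print(f"Real growth:     {real_growth}")
print(f"Captured demand: {captured_demand}")
print(f"Platform noise:  {platform_noise}")
```

In this made-up example, the dashboard claims 1,000 conversions, but only 350 are growth the marketing created.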


Why Consumer Behaviour Makes This Harder


Consumers do not buy in straight lines.

 

They scroll, research, compare, ask a trusted friend, forget, re-encounter your brand six weeks later, and then decide. Brand memory, peer influence, timing, and context all interact across a period that attribution models were never designed to capture.

 

Attribution compresses that complexity into a single credit event. The last touchpoint wins. The rest of the journey disappears.

 

The practical consequence: capital consistently shifts toward short-term demand capture and away from the brand-building that created the demand in the first place. You withdraw the support that makes the whole system work, and you credit the thing that simply shows up at the end.

 

This is one reason why brand investment tends to decay quietly in performance-heavy organizations. Not because anyone decides to stop building the brand but because measurement never gives it credit, so it never wins the budget conversation.


The Incrementality Gap: A 4-Question Causality Check


You do not need advanced econometrics to start closing this gap. You need four honest questions, asked in a room with your media and analytics leads.

 

  1. Have we run a true incrementality test (geo holdout, audience holdout, or a platform lift study) in the last 12 months?

  2. Do we know the actual gap between our attributed ROAS and our incremental ROAS?

  3. Do we adjust budget allocation based on incrementality results or just dashboard performance?

  4. Have we ever tested the downstream effect of brand investment on performance CAC?


Score your answers:

 

  • 4 Yes → Causality-driven allocation. You are in a small minority.

  • 2–3 Yes → Partial clarity. You have the foundations. Keep building.

  • 0–1 Yes → Correlation-dependent. Your capital allocation is based on a story the platform is telling you about itself.

 

Most teams land at 0–1. That is not a failure of intelligence. It is a failure of process. Incrementality testing is not yet standard practice, but it should be.
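If it helps to formalize the check, here is a trivial sketch of the scoring rubric. The question wording and tier labels come straight from the lists above; the answers are whatever your team honestly says in the room.

```python
# The 4-question causality check as a checklist. Replace the answers
# with your team's honest yes/no responses.

questions = [
    "Ran a true incrementality test in the last 12 months?",
    "Know the gap between attributed ROAS and incremental ROAS?",
    "Adjust budgets on incrementality results, not just dashboards?",
    "Tested the downstream effect of brand spend on performance CAC?",
]

answers = [False, True, False, False]  # replace with your own
score = sum(answers)

if score == 4:
    tier = "Causality-driven allocation"
elif score >= 2:
    tier = "Partial clarity"
else:
    tier = "Correlation-dependent"

print(f"{score}/4 -> {tier}")
```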

  

What to Do in the Next 90 Days


Run this as an operational sequence, not a one-off project.


Step 1: Identify your highest-spend channel

This is where measurement distortion is most likely to be hiding. High spend + high attribution claims = the most important place to test.


Step 2: Design a 4–6 week incrementality test

You have three practical options depending on your setup:

 

  • Geo holdout - Pause or reduce spend in a set of matched geographic markets. Measure revenue difference vs. control markets.

  • Audience holdout - Exclude a randomized audience segment from targeting. Compare conversion rates vs. exposed group.

  • Platform lift study - Most major platforms offer these natively. They are not perfect (the platform grades its own homework) but they are a useful starting point.

 

A note on scale: if your monthly spend on this channel is below a threshold that makes geo splits unviable, start with an audience holdout instead. The methodology matters less than the discipline of testing at all.
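As a rough illustration, here is how a geo holdout readout might look. The market names and revenue figures are made up, and a production analysis would use matched-market selection and a significance test rather than a raw average difference.

```python
# A rough geo holdout readout over a 4-6 week test window. Markets and
# revenue figures are hypothetical.

holdout = {"Leeds": 41_000, "Bristol": 38_500, "Cardiff": 35_200}           # spend paused
control = {"Manchester": 52_000, "Sheffield": 47_800, "Newcastle": 44_100}  # spend as usual

avg_holdout = sum(holdout.values()) / len(holdout)
avg_control = sum(control.values()) / len(control)

# Revenue per market the channel appears to cause over the test window
incremental_revenue = avg_control - avg_holdout
lift_pct = incremental_revenue / avg_holdout * 100

print(f"Avg control revenue: {avg_control:,.0f}")
print(f"Avg holdout revenue: {avg_holdout:,.0f}")
print(f"Incremental lift:    {lift_pct:.1f}% per market")
```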

 

Step 3: Calculate your Incrementality Gap

Compare attributed ROAS against incremental ROAS from the test results. Use this as your benchmark:

 

  • Gap under 10% → Stable. Attribution is reasonably accurate for this channel.

  • Gap of 10–25% → Monitor. There is likely over-attribution, but not yet at crisis level.

  • Gap over 25% → Reallocation candidate. A major chunk of budget is chasing demand that was never yours to create.
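A sketch of the calculation, assuming the gap is expressed as a share of attributed ROAS (the benchmark above does not pin down an exact formula):

```python
# Computing the Incrementality Gap from test results. Expressing the gap
# as a share of attributed ROAS is an assumption, not a fixed definition.

def incrementality_gap(attributed_roas: float, incremental_roas: float) -> str:
    gap_pct = (attributed_roas - incremental_roas) / attributed_roas * 100
    if gap_pct < 10:
        verdict = "Stable: attribution reasonably accurate"
    elif gap_pct <= 25:
        verdict = "Monitor: likely over-attribution"
    else:
        verdict = "Reallocation candidate"
    return f"Gap: {gap_pct:.1f}% -> {verdict}"

# Example: the dashboard claims 4.0x ROAS, but the holdout test supports 2.8x
print(incrementality_gap(attributed_roas=4.0, incremental_roas=2.8))
# Gap: 30.0% -> Reallocation candidate
```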


Step 4: Test the brand-performance interaction

Incrementality testing on performance channels is only half the picture. The other half is understanding whether brand investment is quietly doing the heavy lifting that performance gets credit for.

 

Run a simple test: increase brand spend in two or three selected regions over 8–10 weeks. Track downstream CPA changes in those regions vs. control. If brand investment is generating real demand, you will see it in lower acquisition costs on the performance side.
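Here is a minimal difference-in-differences readout for that test, with hypothetical CPA figures standing in for your real regional data:

```python
# Difference-in-differences readout for the brand-performance test.
# CPA figures are hypothetical.

test_regions = {"before_cpa": 48.00, "after_cpa": 41.50}     # brand spend increased
control_regions = {"before_cpa": 47.20, "after_cpa": 46.80}  # brand spend unchanged

test_change = (test_regions["after_cpa"] - test_regions["before_cpa"]) / test_regions["before_cpa"]
control_change = (control_regions["after_cpa"] - control_regions["before_cpa"]) / control_regions["before_cpa"]

# The CPA movement attributable to brand, net of whatever moved everywhere
brand_effect_pct = (test_change - control_change) * 100
print(f"CPA shift attributable to brand: {brand_effect_pct:+.1f}%")
# Negative means brand investment is lowering performance acquisition costs
```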

 

This is one of the most underdone tests in marketing. It is also one of the most revealing.


What Changes When You Measure This Way

Short term, you get more accurate capital allocation. Budgets stop flowing toward channels on the basis of inflated signals.

 

Medium term, blended CAC starts to fall. Not because you are spending less but because you are spending it where it actually creates demand, rather than where it gets credit for demand that was already there.

 

Long term, you build what the Demand Control Score was really measuring: structural resilience. The ability to shift capital confidently because you understand what is actually working.

 

The teams that get good at this become genuinely harder to compete against. They compound. Their competitors optimize.


A Harder Question to Close With

If you paused all paid media tomorrow - Meta, Google - for 60 days:

 

  • Would consumers still remember you?

  • Would they still find you?

  • Would revenue hold, or would it collapse?

 

The honest answer to those questions tells you more about your real growth than any dashboard.

 

Measurement distortion is not just a data problem. It is a structural one. It shapes where capital goes, which shapes what the business becomes. Getting measurement right is not a technical exercise; it is how you design growth that survives.

 

Your Action This Week

Block 60 minutes with your media and analytics leads. Run through the four questions in the Incrementality Gap check. Be honest.

 

If you land at 0–1 Yes, that is your starting point. Pick your highest-spend channel. Design the test. Run it over the next 4–6 weeks. Check the gap.

 

That is where structural measurement discipline begins.



Definitions & Core Concepts


What is incrementality in marketing?

Incrementality in marketing measures the true causal impact of a marketing activity - specifically, how many conversions would not have happened without that activity. It is distinct from attribution, which simply assigns credit for conversions to the touchpoints that appeared in the consumer journey. Incrementality testing answers the question: if this campaign had not run, would this purchase still have occurred? The answer is often different from what attribution models suggest.


What is the difference between attribution and incrementality?

Attribution assigns credit for a conversion to one or more marketing touchpoints that appeared before the purchase. Incrementality measures whether those touchpoints actually caused the purchase. A consumer who was already in-market and ready to buy may encounter a retargeting ad, convert, and be attributed to that ad, but the conversion would have happened anyway. Attribution captures that conversion; incrementality reveals it was not caused by the marketing.


What are the three types of marketing growth?

The three types of marketing growth are:

Real Growth - conversions that would not have occurred without marketing, representing genuine demand creation;

Captured Demand - conversions from consumers who were already likely to buy, where marketing simply appeared at the right moment but did not create the intent; and

Platform Noise - revenue variance driven by seasonality, category trends, or external factors that would have occurred regardless of marketing activity. Most attribution models count all three as marketing-driven results.


What is the Incrementality Gap?

The Incrementality Gap is the difference between a channel's attributed ROAS and its incremental ROAS - the return measured only on conversions that would not have happened without the marketing. A gap below 10% suggests attribution is reasonably accurate. A gap of 10–25% indicates over-attribution that should be monitored. A gap above 25% signals that a material portion of budget is being allocated to a channel based on inflated performance signals, making it a reallocation candidate.


What is platform noise in digital marketing?

Platform noise refers to revenue or conversion activity that appears in marketing dashboards but is driven by external factors rather than marketing effort, including seasonal demand patterns, category-wide trends, macroeconomic conditions, or competitive shifts. Because platforms attribute conversions to themselves regardless of the underlying cause, platform noise gets credited as marketing performance. This makes dashboards look stronger than the underlying marketing is, and distorts capital allocation toward channels that are simply present when demand naturally rises.

  

How-To & Practical Application


How do I run an incrementality test for my marketing campaigns?

There are three practical methods. A geo holdout test pauses or reduces spend in a set of matched geographic markets and compares revenue against control markets where spending continues normally. An audience holdout test excludes a randomized segment of your audience from targeting and compares their conversion rate against the exposed group. A platform lift study is offered natively by most major platforms; it is useful as a starting point, though the platform is evaluating its own impact. Run the test for 4–6 weeks and compare attributed ROAS against the incremental ROAS calculated from holdout results.


How do I know if my marketing budget is being wasted on demand that would have happened anyway?

Run an incrementality test on your highest-spend channel. Calculate the Incrementality Gap - the difference between your attributed ROAS and your incremental ROAS from the test. If the gap exceeds 25%, a significant portion of your budget is capturing demand that already existed rather than creating new demand. As a starting diagnostic, ask your team four questions: Have you run an incrementality test in the last 12 months? Do you know your attributed vs incremental ROAS gap? Do you adjust budgets based on that gap? Have you tested whether brand investment lowers performance CAC?


How do I test whether brand investment is affecting my performance marketing results?

Increase brand spend in two or three selected regions over 8–10 weeks while holding performance spend constant. Track CPA changes in those regions against control regions where brand spend did not increase. If brand investment is creating real demand, downstream performance CAC in the test regions will fall because the brand is generating intent that performance channels then convert more efficiently. This test reveals whether performance channels are creating demand or simply harvesting demand that brand activity already built.


What is a geo holdout test in marketing?

A geo holdout test is an incrementality testing method where a marketer pauses or significantly reduces advertising spend in a defined set of geographic markets (the holdout group) while maintaining normal spend in comparable control markets. After 4–6 weeks, revenue performance in both groups is compared. The difference in conversion rates or revenue between the holdout and control markets represents the incremental impact of the advertising. It is one of the most reliable methods for measuring true marketing causation because it isolates real-world behaviour rather than modelling it.

 

 Thresholds & Benchmarks


What is a good ROAS gap between attributed and incremental results?

An Incrementality Gap — the difference between attributed ROAS and incremental ROAS — below 10% suggests that attribution is reasonably accurate for that channel and the spend is defensible. A gap of 10–25% indicates meaningful over-attribution that should be monitored and partially discounted in budget planning. A gap above 25% is a strong signal for budget reallocation: a material portion of that channel's reported performance reflects demand capture or platform noise rather than real demand creation.


How often should a marketing team run incrementality tests?

Marketing teams should run at least one incrementality test per year on their highest-spend channel. Teams spending significantly on two or more major platforms should test each channel on a rotating basis. Incrementality results have a shelf life — platform algorithms, audience compositions, and competitive conditions change — so a test result from 18 months ago should not be the basis for current budget decisions. The discipline of regular testing is more important than the precision of any single test.

  

Why It Happens & What Causes It


Why do marketing dashboards overstate channel performance?

Marketing dashboards overstate channel performance primarily because they rely on attribution models that assign credit based on touchpoint presence, not causal impact. Platforms are structurally incentivized to appear in high-credit positions — particularly last-touch — so retargeting and conversion-stage ads claim credit for purchases that were already in progress. Additionally, natural demand drivers such as seasonality and brand memory are not visible to attribution models, so their contribution gets credited to whatever paid channel appeared nearest to the conversion.


Why does marketing measurement distortion lead to platform dependency?

When attribution models over-credit a platform's contribution to revenue, budgets flow toward that platform because it appears to be the highest performer. As budget concentration increases, switching costs rise: teams invest in platform-specific expertise, creative formats, and audience data that do not transfer. The platform gains pricing power as dependency deepens. CAC inflates. And because the measurement system keeps reporting strong attributed results, the dependency is reinforced rather than questioned. Measurement distortion and platform dependency form a self-reinforcing loop.


Why does performance marketing consistently underinvest in brand building?

Performance marketing systems are optimized to allocate budget toward channels that show measurable, attributed conversions. Brand investment — which builds memory structures and consumer intent over weeks or months — rarely gets direct attribution credit because the conversion happens long after and through a different touchpoint. As a result, brand investment consistently loses the budget conversation in performance-heavy organizations, not because it is not working, but because measurement systems were not designed to capture its contribution. Over time, this defunds the demand creation that makes performance marketing efficient.


What is captured demand versus created demand in marketing?

Captured demand refers to conversions from consumers who were already in the market and likely to purchase — the marketing simply appeared at the right moment and claimed credit for an intent that already existed. Created demand refers to conversions that resulted from marketing genuinely building awareness, consideration, or intent that would not have developed otherwise. The distinction matters because captured demand is finite — it is bounded by existing market size — while created demand compounds. Optimizing exclusively for captured demand produces diminishing returns as the available in-market audience shrinks.
