What is Incrementality Testing?
Incrementality testing measures the true causal impact of a marketing campaign by comparing a test group that sees the campaign against a control group that doesn't. It separates real lift from conversions that would've happened anyway.
What is Incrementality Testing?
Incrementality testing is a controlled experiment that isolates the true lift a marketing activity produces by comparing outcomes between an exposed group and a holdout group that never sees the campaign.
Think of it as the gold standard for answering one question: “Would this conversion have happened without my ad?” Standard attribution models try to assign credit after the fact. Incrementality testing removes the guesswork by running an actual experiment before drawing conclusions.
A 2023 Nielsen study found that 60% of marketers couldn’t confidently measure whether their paid media drove real results or just captured people who were already going to convert. That gap is exactly what incrementality testing closes.
Why Does Incrementality Testing Matter?
Without incrementality data, you’re likely overspending on channels that get credit for conversions they didn’t actually cause.
- Reveals true ROAS — Most attribution models inflate the value of retargeting and branded search because those channels catch users late in the funnel
- Kills wasted spend — Brands that run incrementality tests typically find 10-30% of their ad budget generates zero incremental lift
- Proves value to leadership — Finance teams trust controlled experiments more than attribution dashboards full of modeled data
- Improves budget allocation — Shifting dollars from low-lift channels to high-lift ones compounds over quarters
Any marketing team spending $10K+ per month on paid media needs incrementality testing. The cost of not knowing is almost always higher than the cost of the test itself.
How Incrementality Testing Works
The setup mirrors a scientific experiment. You split your audience, change one variable, and measure the difference.
Design the Experiment
Pick the channel, campaign, or tactic you want to test. Define your success metric — purchases, signups, demo requests. Then split your target audience into two groups: a test group (exposed to the campaign) and a control group (held out entirely). Random assignment is critical. Biased groups produce meaningless results.
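The random split described above can be sketched in a few lines. This is a minimal illustration, not a production assignment system; the 20% holdout share and the `assign_groups` helper are assumptions for the example.

```python
import random

def assign_groups(user_ids, holdout_share=0.2, seed=42):
    """Randomly split an audience into test and control groups.

    holdout_share is the fraction held out as control; 10-20% is a
    common starting point, adjusted for your conversion volume.
    """
    rng = random.Random(seed)      # fixed seed makes the split reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * holdout_share)
    control, test = shuffled[:cut], shuffled[cut:]
    return test, control

test, control = assign_groups(range(10_000))
```

Because assignment is random (not based on behavior, geography, or recency), any later difference between the groups can be attributed to the campaign rather than to pre-existing bias.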
Run the Holdout
The test group sees your ads normally. The control group sees nothing — or a placeholder ad (called a “ghost ad” or PSA). The holdout period usually lasts 2-4 weeks, depending on your conversion cycle length.
Measure the Lift
Compare conversion rates between the two groups. If the test group converts at 4.2% and the control converts at 3.1%, your incremental lift is 1.1 percentage points, a relative lift of roughly 35% over the control baseline. Put another way, about a quarter of the test group's conversions (1.1 / 4.2 ≈ 26%) were truly driven by the campaign; the rest would've happened regardless.
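The arithmetic above can be wrapped in a small helper. The sample sizes of 10,000 per group are hypothetical; only the 4.2% and 3.1% rates come from the example.

```python
def measure_lift(test_conversions, test_size, control_conversions, control_size):
    """Compute absolute lift, relative lift, and the incremental share."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute_lift = test_rate - control_rate           # in percentage points
    relative_lift = absolute_lift / control_rate       # lift over the control baseline
    incremental_share = absolute_lift / test_rate      # share of test conversions that are incremental
    return test_rate, control_rate, absolute_lift, relative_lift, incremental_share

# 4.2% test vs 3.1% control, 10,000 users per group (illustrative sizes)
rates = measure_lift(420, 10_000, 310, 10_000)
```

Note the two different ratios: relative lift divides by the control rate (how much the campaign raised the baseline), while incremental share divides by the test rate (what fraction of observed conversions the campaign actually caused).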
Incrementality Testing Examples
Example 1: Facebook retargeting audit. An ecommerce brand spends $25K/month on Facebook retargeting. Their attribution dashboard says it drives a 6x ROAS. They run an incrementality test and discover the true lift is only 1.8x — most of those “attributed” buyers were already mid-checkout. They cut the budget by 40% with no revenue impact.
Example 2: Local service business testing Google Ads. A plumbing company runs a geo-based incrementality test — ads in Denver, no ads in similar-sized Colorado Springs. After 30 days, Denver leads are 22% higher. That’s clean proof the ads work, beyond what click-through rate alone could show.
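A geo test like Example 2 is usually evaluated with a difference-in-differences comparison: each market's growth is measured against its own baseline, which controls for seasonality that hits both cities. The lead counts below are illustrative, not from a real campaign.

```python
def geo_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate of lift from a geo test.

    Comparing each market's growth to its own pre-period baseline
    removes seasonal effects shared by both markets.
    """
    test_growth = (test_post - test_pre) / test_pre
    ctrl_growth = (ctrl_post - ctrl_pre) / ctrl_pre
    return test_growth - ctrl_growth   # lift attributable to the ads

# Hypothetical leads: 30 days before vs 30 days during the test
lift = geo_lift(test_pre=500, test_post=640, ctrl_pre=480, ctrl_post=510)
```

With these made-up numbers, the test market grew 28% while the control grew 6.25%, leaving roughly 22 points of lift attributable to the ads, in line with the Denver example.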
Example 3: Content marketing lift. A B2B SaaS company uses a holdout group to test whether their blog content accelerates deals. Leads who read 3+ articles before a demo convert to paid at 2.4x the rate of the holdout group that only received email nurtures.
Common Mistakes to Avoid
Most businesses make the same handful of errors. Recognizing them saves months of wasted effort.
Chasing tactics without strategy. Jumping on every new channel or trend without a clear plan. TikTok one month, LinkedIn the next, podcasts after that — none done well enough to produce results. Pick your channels based on where your audience actually spends time, not what’s trending on marketing Twitter.
Measuring the wrong things. Tracking impressions and likes instead of conversion rate and revenue. Vanity metrics feel good in reports. They don’t pay the bills.
Ignoring existing customers. Most marketing teams focus 90% of their energy on acquisition and 10% on retention. The math says that’s backwards — acquiring a new customer costs 5-7x more than keeping one.
Key Metrics to Track
| Metric | What It Measures | Good Benchmark |
|---|---|---|
| Customer Acquisition Cost (CAC) | Total cost to acquire one customer | Varies by industry — lower is better |
| Customer Lifetime Value (CLV) | Revenue from a customer over time | Should be 3x+ your CAC |
| Conversion Rate | % of visitors who take desired action | 2-5% for websites, 15-25% for email |
| Return on Investment (ROI) | Revenue generated vs money spent | 5:1 is a common benchmark |
| Click-Through Rate (CTR) | % of people who click after seeing | 2-5% for ads, 3-10% for email |
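The CAC and CLV rows in the table reduce to simple division. A back-of-envelope sketch, with entirely illustrative inputs (the function name and every figure are assumptions, not benchmarks):

```python
def marketing_metrics(spend, new_customers, avg_order_value, orders_per_year, years_retained):
    """Back-of-envelope CAC, revenue-based CLV, and their ratio."""
    cac = spend / new_customers
    clv = avg_order_value * orders_per_year * years_retained  # ignores margin and discounting
    return cac, clv, clv / cac

cac, clv, ratio = marketing_metrics(
    spend=50_000, new_customers=400,
    avg_order_value=80, orders_per_year=4, years_retained=2,
)
```

With these inputs, CAC is $125, CLV is $640, and the ratio is about 5.1, comfortably above the 3x benchmark in the table. A margin-based CLV (multiplying by gross margin) is stricter and often more honest.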
Quick Comparison
| Aspect | Basic Approach | Advanced Approach |
|---|---|---|
| Strategy | Ad hoc, reactive | Planned, data-driven |
| Measurement | Vanity metrics (likes, views) | Business metrics (revenue, CAC, LTV) |
| Tools | Spreadsheets, manual tracking | Marketing automation, CRM integration |
| Timeline | Short-term campaigns | Long-term compounding strategy |
| Team | One person does everything | Specialized roles or automated workflows |
Frequently Asked Questions
How is incrementality testing different from A/B testing?
A/B testing compares two variations of something (ad creative, landing page) to see which performs better. Incrementality testing compares “something vs. nothing” to determine whether the entire campaign produces real lift above baseline.
How long should an incrementality test run?
Most tests need 2-4 weeks minimum. The duration depends on your conversion volume and sales cycle length. Low-traffic campaigns need longer windows to reach statistical significance.
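Whether a test has run long enough comes down to whether the observed difference is statistically significant. A stdlib-only sketch of a two-proportion z-test follows; in practice a stats library (e.g. statsmodels) is the usual choice, and the counts below are illustrative.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 4.2% vs 3.1% with 10,000 users per group (hypothetical volumes)
z, p = two_proportion_z(420, 10_000, 310, 10_000)
```

At these volumes the difference is clearly significant (z above 4, p well under 0.001). Shrink the groups to a few hundred users each and the same rates fail to reach significance, which is why low-traffic campaigns need longer windows.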
Can small businesses run incrementality tests?
Yes, but you need enough conversion volume for the results to be statistically valid. Geo-based tests (ads in one city, none in another) work well for local businesses with multiple service areas.
Want to make sure your marketing spend actually drives results? theStacc publishes 30 SEO-optimized articles to your site every month — building an organic channel with provable incremental value. Start for $1 →
Sources
- Nielsen: Marketing Attribution and Incrementality
- Google Ads Help: About conversion lift studies
- Meta Business Help: Conversion lift tests
- Measured: The State of Incrementality 2024
Related Terms
A/B Testing: A/B testing is a controlled experiment that compares two versions of a webpage, email, or ad to see which one drives more conversions. It removes guesswork from marketing decisions by letting real user behavior pick the winner.
Attribution: Marketing attribution is the process of identifying which touchpoints contribute to conversions. Learn about attribution models, tools, and how to measure marketing ROI.
Conversion Rate: Conversion rate is the percentage of visitors who complete a desired action. Learn the formula, industry benchmarks, and proven tactics to improve your conversion rate.
Marketing Mix Modeling (MMM): Statistical analysis measuring each marketing channel's contribution to revenue.
Return on Ad Spend (ROAS): ROAS (return on ad spend) measures revenue generated for every dollar spent on advertising. Learn the formula, benchmarks, and how to improve your ROAS.