Incrementality Testing Guide for Marketers | MarketingAgency.sg


Incrementality Testing: Measuring the True Impact of Your Marketing Campaigns

Every Singapore business running digital campaigns faces the same nagging question: did that ad actually cause the conversion, or would the customer have purchased anyway? Attribution models tell you which touchpoint gets credit, but they cannot tell you whether the marketing itself made a difference. That distinction is exactly what incrementality testing is designed to answer.

In 2026, with rising media costs and tighter budgets across Singapore’s competitive landscape, understanding true incremental lift has become essential. Businesses that rely solely on last-click attribution or even multi-touch models risk over-investing in channels that merely capture existing demand rather than creating new conversions. Incrementality testing strips away the noise and reveals the causal impact of your spend.

This guide walks you through the core concepts of incrementality testing, from simple holdout experiments to advanced geo-lift studies and platform-native lift tools. Whether you manage Google Ads campaigns or run paid social through social media marketing, you will learn how to design tests that prove whether your marketing truly moves the needle.

What Is Incrementality in Marketing?

Incrementality measures the additional conversions generated by a marketing activity that would not have occurred without it. The core formula is straightforward:

Incremental Lift = (Test Group Conversions − Control Group Conversions) ÷ Control Group Conversions × 100

If your test group (exposed to ads) converts at 4.2% and your control group (not exposed) converts at 3.1%, your incremental lift is approximately 35.5%. Put differently, about a quarter of the exposed group's conversions (1.1 of every 4.2 percentage points) were genuinely caused by the advertising.
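The arithmetic above can be sketched in a few lines of Python (the function names are illustrative, not from any library):

```python
def incremental_lift(test_rate, control_rate):
    """Relative lift of the exposed group over the unexposed baseline, as a %."""
    return (test_rate - control_rate) / control_rate * 100

def incremental_share(test_rate, control_rate):
    """Share of the exposed group's conversions that are truly incremental."""
    return (test_rate - control_rate) / test_rate

lift = incremental_lift(0.042, 0.031)    # ~35.5% lift over the control baseline
share = incremental_share(0.042, 0.031)  # ~0.26: about a quarter are incremental
```

Note the two numbers answer different questions: lift compares against the control baseline, while the incremental share tells you what fraction of observed conversions the ads actually caused.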

The concept matters because standard attribution conflates correlation with causation. A user who sees your retargeting ad and then purchases might have bought regardless. Incrementality testing isolates the causal effect by comparing outcomes between equivalent groups where the only difference is exposure to your marketing.

For Singapore businesses spending across multiple channels, understanding incrementality prevents budget misallocation. A channel showing strong return on ad spend in your attribution dashboard might deliver near-zero incremental lift, meaning those conversions would have happened organically through your SEO efforts or direct traffic.

Holdout Experiments: The Foundation

The holdout experiment is the simplest and most rigorous form of incrementality testing. You randomly split your audience into two groups: the test group sees your ads as normal, while the holdout (control) group is deliberately excluded from seeing them. After a set period, you compare conversion rates between the two groups.

The key steps for running a holdout experiment are:

  • Define your hypothesis — State clearly what you expect the campaign to achieve incrementally
  • Determine sample size — Use a statistical power calculator to ensure your test can detect meaningful differences (typically 80% power at 95% confidence)
  • Randomise assignment — Ensure test and control groups are statistically equivalent across demographics, purchase history, and behaviour
  • Set the holdout percentage — Most tests use a 10-20% holdout; larger holdouts improve statistical power but sacrifice potential revenue
  • Run for sufficient duration — Allow at least one to two full purchase cycles (typically two to four weeks for most Singapore e-commerce businesses)
  • Measure and compare — Calculate conversion rates, revenue per user, and incremental cost per acquisition
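The final measure-and-compare step can be sketched with a standard two-proportion z-test using only the Python standard library (the function name and inputs are illustrative):

```python
import math

def holdout_results(test_conv, test_n, control_conv, control_n):
    """Compare conversion rates between test and holdout groups using a
    two-proportion z-test (normal approximation)."""
    p_test = test_conv / test_n
    p_ctrl = control_conv / control_n
    lift = (p_test - p_ctrl) / p_ctrl * 100

    # Pooled standard error under the null hypothesis of equal rates
    p_pool = (test_conv + control_conv) / (test_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
    z = (p_test - p_ctrl) / se

    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, z, p_value

lift, z, p = holdout_results(test_conv=420, test_n=10_000,
                             control_conv=310, control_n=10_000)
```

With these inputs (the 4.2% vs 3.1% rates from earlier, at 10,000 users per group), the lift is about 35.5% and the p-value is well below 0.05, so the result would be statistically significant.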

The primary limitation of holdout experiments is the opportunity cost. Withholding ads from a portion of your audience means potentially lost conversions during the test period. However, the insight gained typically far outweighs this short-term cost.

Geo-Lift Testing for Regional Campaigns

Geo-lift testing uses geographic regions as test and control groups rather than individual users. This approach is particularly useful when user-level randomisation is impractical, such as with television, out-of-home, or broad-reach digital campaigns.

In Singapore’s context, geo-lift testing might compare performance across different planning areas or postal districts. For businesses operating across Southeast Asia, country-level or city-level comparisons provide cleaner test environments.

The framework for geo-lift testing follows this structure:

  1. Select matched regions — Identify geographic areas with similar baseline conversion rates, demographics, and seasonal patterns
  2. Establish a baseline period — Collect four to eight weeks of pre-test data to confirm region equivalence
  3. Activate the treatment — Run the campaign in test regions while keeping control regions unexposed
  4. Measure the lift — Compare post-intervention performance, adjusting for any pre-existing trends
  5. Calculate statistical significance — Use methods such as synthetic control or Bayesian structural time series to validate results

Google’s open-source CausalImpact package and Meta’s GeoLift library are both free tools that Singapore marketers can use to analyse geo-lift experiments with proper statistical rigour.
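Those packages implement far more sophisticated Bayesian models; the underlying logic can be illustrated with a minimal difference-in-differences sketch in plain Python (toy data and function name are ours, not from either library):

```python
def geo_lift_did(test_pre, test_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate of geo-lift.

    Each argument is a list of daily conversions for the test or control
    region, before (pre) or after (post) campaign launch. The control
    region's trend estimates what the test region would have done anyway."""
    mean = lambda xs: sum(xs) / len(xs)
    # How much the control region moved between periods (the organic trend)
    ctrl_trend = mean(ctrl_post) - mean(ctrl_pre)
    # Counterfactual: test region's baseline shifted by that organic trend
    counterfactual = mean(test_pre) + ctrl_trend
    incremental_per_day = mean(test_post) - counterfactual
    lift_pct = incremental_per_day / counterfactual * 100
    return incremental_per_day, lift_pct

# Toy data: both regions average ~100 conversions/day pre-launch; the control
# drifts to ~105 organically, while the test region jumps to ~120.
inc, lift = geo_lift_did([98, 102, 100], [118, 122, 120],
                         [99, 101, 100], [104, 106, 105])
```

Here the campaign is credited with 15 incremental conversions per day (a ~14% lift), not the naive 20, because the control region shows that 5 of those extra conversions would have arrived organically.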

Ghost Ads and PSA Methodology

Ghost ads and the related public service announcement (PSA) methodology solve a specific problem with standard holdout tests: the control group in a typical holdout simply does not see your ad, but it also does not see a replacement. Any measured difference might therefore partly reflect the attention-capturing effect of advertising in general rather than your specific creative.

In the PSA methodology, the control group sees a placebo ad — often a public service announcement or a charity advertisement — in the exact same placement where your ad would have appeared, so both groups have identical browsing experiences except for the specific creative content. Ghost ads refine this further: instead of buying placebo impressions, the platform logs which control-group users would have been shown your ad, achieving the same comparison at lower cost.
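Whichever variant you run, group assignment must be stable and unbiased. One common pattern (sketched here as a general technique, not any platform's actual mechanism) is deterministic hash-based bucketing:

```python
import hashlib

def assign_group(user_id: str, experiment: str, holdout_pct: int = 10) -> str:
    """Deterministically assign a user to the placebo or real-ad group.

    Hashing the experiment name plus user ID yields a stable, effectively
    random bucket from 0-99, so re-running never reshuffles users, and each
    experiment draws its own independent split."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "placebo" if bucket < holdout_pct else "real_ad"

groups = [assign_group(f"user-{i}", "sg-retargeting-q1") for i in range(10_000)]
```

Because the same user always lands in the same group for a given test, exposure stays consistent across sessions and devices tied to that ID, which is essential for a clean contrast between groups.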

The ghost ad approach provides cleaner measurement because it controls for:

  • Selection bias — Both groups are equally likely to be in ad-receptive contexts
  • Attention effects — Both groups see an advertisement, isolating the effect of your specific message
  • Platform algorithm bias — Both groups are served ads through the same auction and targeting mechanisms

While more complex to implement than simple holdouts, ghost ad studies are considered the gold standard for measuring true creative and campaign incrementality. They are especially valuable when optimising your content marketing and paid media creative simultaneously.

Facebook Lift Studies and Google Conversion Lift

Major advertising platforms offer built-in incrementality measurement tools that simplify the testing process considerably.

Meta (Facebook) Conversion Lift Studies

Meta’s conversion lift tool automatically creates randomised test and control groups within your target audience. The control group is prevented from seeing your ads, and Meta measures the difference in conversions between groups. Key requirements in 2026 include a minimum daily budget (typically SGD 500 or more), the Conversions API properly configured, and a test duration of at least two weeks.

Results from Meta lift studies show you the incremental conversions, incremental cost per result, and the confidence interval. These studies work across Facebook, Instagram, Messenger, and the Audience Network.

Google Conversion Lift

Google offers conversion lift measurement for YouTube and Display campaigns. The tool uses Google’s identity graph to create matched test and control groups, measuring lift in conversions, store visits, or brand metrics. For Singapore advertisers, Google conversion lift requires working with a Google representative or agency partner and meeting minimum spend thresholds.

Both platform tools have limitations. They only measure incrementality within their own ecosystem and cannot account for cross-platform effects. A holistic view requires combining platform lift studies with broader geo-lift or holdout experiments across your entire digital marketing mix.

Designing Your First Incrementality Test

For Singapore businesses new to incrementality testing, start with a structured approach using this framework:

The IMPACT Framework for Incrementality Testing:

  • I — Identify the question — What specific channel, campaign, or tactic do you want to evaluate?
  • M — Method selection — Choose between holdout, geo-lift, ghost ads, or platform lift based on your constraints
  • P — Power calculation — Determine the minimum sample size needed to detect a meaningful lift (use at least 80% power)
  • A — Assign groups — Ensure randomisation is truly random and groups are balanced
  • C — Control variables — Hold all other marketing activities constant during the test period
  • T — Timeline and triggers — Set clear start dates, end dates, and decision criteria before running the test

A common mistake is running tests that are too short or with insufficient sample sizes. For a Singapore e-commerce business with 10,000 monthly visitors and a 2% baseline conversion rate, detecting a 20% incremental lift at 95% confidence requires on the order of 15,000 to 20,000 users per group (depending on whether you test one- or two-sided), which is well over a month of traffic per group at that volume. Plan your test duration accordingly.
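You can reproduce this kind of estimate with the standard normal-approximation sample-size formula for comparing two proportions. This stdlib-only sketch hardcodes the z-quantiles for the common 95% confidence / 80% power settings:

```python
import math

def sample_size_per_group(baseline_rate, relative_lift,
                          one_sided=True):
    """Minimum users per group for a two-proportion test at 95% confidence
    and 80% power, via the standard normal-approximation formula."""
    z_alpha = 1.645 if one_sided else 1.960  # 95% confidence quantile
    z_beta = 0.842                           # 80% power quantile
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_group(0.02, 0.20)  # 2% baseline, 20% relative lift
```

With these inputs the formula gives roughly 16,600 users per group for a one-sided test, and around 21,000 for a two-sided test, which is why lower-traffic sites need multi-week (or multi-month) test windows.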

When to Use Incrementality Testing

Incrementality testing is not necessary for every campaign or channel. It is most valuable in specific scenarios:

High-value decisions — Before committing a large portion of your budget to a single channel, an incrementality test validates whether the investment truly drives additional results. If you are considering scaling your Google Ads spend by 50%, test incrementality first.

Retargeting evaluation — Retargeting campaigns are notorious for inflated attribution because they target users already showing purchase intent. Incrementality testing frequently reveals that retargeting delivers lower true lift than attribution reports suggest.

Brand campaign justification — Upper-funnel activities like awareness campaigns on social media or display are difficult to attribute through standard models. Lift studies provide the evidence needed to justify continued investment.

Channel overlap resolution — When multiple channels claim credit for the same conversions, incrementality testing on individual channels reveals which ones genuinely contribute and which are free-riding on organic demand.

New market entry — When expanding into new Singapore market segments or Southeast Asian markets, incrementality tests help you understand baseline demand versus marketing-driven demand from the outset.

As a general rule, run incrementality tests quarterly on your largest spending channels and whenever you are considering a significant budget reallocation. The insights feed directly into your marketing mix modelling and overall strategy.

Frequently Asked Questions

How long should an incrementality test run?

Most incrementality tests require two to four weeks to accumulate sufficient data for statistical significance. The exact duration depends on your traffic volume, baseline conversion rate, and the minimum detectable effect you need. Higher-traffic Singapore businesses may achieve significance in as little as one week, while lower-volume businesses may need six weeks or more.

What is a good incremental lift percentage?

Incremental lift varies significantly by channel and campaign type. Prospecting campaigns typically show 20-60% incremental lift, while retargeting campaigns often show 5-20%. Brand awareness campaigns might show 10-30% lift on downstream conversions. Any positive, statistically significant lift indicates the campaign is generating real value.

Can small businesses in Singapore afford incrementality testing?

Yes. Platform-native lift studies from Meta and Google are free to run, though they require minimum spend levels. For businesses spending at least SGD 3,000 per month on a single platform, conversion lift studies are accessible and worthwhile. Smaller spenders can use simpler before-and-after tests or geo-based comparisons.

How does incrementality testing differ from A/B testing?

A/B testing compares two variations of a marketing element (such as ad creative or landing page design) to determine which performs better. Incrementality testing compares exposure versus non-exposure to determine whether the marketing activity itself drives additional results. A/B testing optimises execution; incrementality testing validates the investment.

What happens if my incrementality test shows zero lift?

A zero-lift result is still a valuable finding. It indicates that the tested channel or campaign is not generating additional conversions beyond what would occur organically. This insight allows you to reallocate budget to channels with proven incremental impact, ultimately improving your overall marketing efficiency.

Should I stop all other marketing during an incrementality test?

You do not need to stop other marketing, but you should keep other activities as stable as possible during the test period. Avoid launching new campaigns, changing budgets significantly, or running major promotions that could confound results. The goal is to isolate the variable you are testing from other changes in your marketing mix.