How to Run A/B Tests with Google Ads Experiments
Making changes to a high-performing Google Ads campaign can feel risky. What if the new bidding strategy tanks your conversion rate? What if the updated ad copy drives fewer clicks? Google Ads Experiments solves this problem by letting you test changes on a portion of your traffic before committing to them fully, giving you statistically validated data to make confident decisions.
For Singapore businesses where Google Ads budgets are tightly managed and every dollar needs to deliver measurable results, experiments are an essential tool. Rather than guessing whether a new landing page, bid strategy, or audience setting will improve performance, you can test it with real data from your actual campaigns.
This Google Ads Experiments tutorial walks you through every aspect of running A/B tests in Google Ads, from choosing what to test and configuring your experiment to reading results and implementing winning changes. By the end, you will have a repeatable testing framework that continuously improves your campaign performance.
Understanding Experiment Types in Google Ads
Google Ads offers several types of experiments, each designed for different testing scenarios. Understanding which type to use is the first step in running effective tests.
Campaign experiments are the most versatile option. They create a copy of an existing campaign (the “trial” or “experiment arm”) where you can modify settings, and then split traffic between the original and the experiment. This is the standard approach for testing bidding strategies, ad copy, landing pages, audience targeting, and most other campaign-level changes.
Ad variations let you test changes to your ad text at scale across multiple campaigns without creating full campaign experiments. You can swap headlines, descriptions, URLs, or ad extensions across all ads that meet certain criteria. This is ideal for testing messaging changes like adding a promotional offer or changing your call to action.
Performance Max experiments allow you to test specific changes within Performance Max campaigns, such as comparing different asset groups or final URL expansion settings. Given that Performance Max is increasingly important for Singapore advertisers running Shopping and multi-channel campaigns, these experiments help you optimise without disrupting proven performance.
Video experiments are designed for YouTube advertising, letting you compare different video creatives, targeting options, or bidding approaches. For brands investing in video marketing across Singapore and Southeast Asia, these experiments ensure your video ad budget is spent effectively.
Deciding What to Test
Not all tests are created equal. The most valuable experiments test changes that could meaningfully impact your key metrics. Here are the highest-impact areas to test for Singapore campaigns.
Bidding strategies: This is one of the most impactful tests you can run. Compare manual CPC against automated strategies like Target CPA, Target ROAS, or Maximise Conversions. Many Singapore advertisers find that switching from manual bidding to Target CPA reduces their cost per acquisition by 15 to 30 per cent, but the only way to confirm this for your specific account is to test it.
Landing pages: Test sending traffic to different landing pages or page variations. Compare your current product page against a simplified version, or test a dedicated landing page against a general category page. Landing page quality directly impacts Quality Score, which affects both ad position and cost per click.
Ad copy: Test different value propositions, calls to action, and messaging angles. For a Singapore tuition centre, you might test “Top Results Since 2010” against “Free Trial Class Available” to see which headline drives more enrolments. Use ad variations for broad copy tests and campaign experiments for more targeted tests.
Audience targeting: Test adding audience segments as observation or targeting layers. Compare performance when targeting in-market audiences for your product category against your standard keyword-only approach. Test whether adding remarketing audiences as bid adjustments improves conversion rates.
Keyword match types: Test broad match with smart bidding against exact match or phrase match campaigns. Google has been expanding broad match capabilities significantly, and many Singapore advertisers are seeing strong results with broad match paired with Target CPA or Target ROAS bidding.
Ad extensions: Test different sitelinks, callout extensions, structured snippets, and promotion extensions. While these tests can be run informally, a structured experiment gives you cleaner data on which extension combinations drive the best click-through and conversion rates.
Setting Up a Campaign Experiment Step by Step
Here is how to create a campaign experiment in Google Ads, using a bidding strategy test as our example.
Step 1: Navigate to Experiments. In your Google Ads account, click Campaigns in the left menu, then select Experiments. Click the blue plus button to create a new experiment.
Step 2: Choose your experiment type. Select Campaign experiment from the available options. This will create a trial campaign based on an existing campaign.
Step 3: Select the base campaign. Choose the campaign you want to test changes on. Pick a campaign with sufficient traffic and conversion volume to generate statistically significant results. Campaigns with fewer than 100 conversions per month may require longer experiment durations to reach significance.
Step 4: Name your experiment. Give it a clear, descriptive name that indicates what is being tested. For example, “Search — SG — Target CPA Test — March 2026”. This naming convention helps you track multiple experiments over time and understand the purpose of each at a glance.
Step 5: Make your changes. Google Ads creates a draft copy of your base campaign. Open the draft and make only the changes you want to test. If testing a bidding strategy, navigate to Settings > Bidding and change the strategy from, say, Manual CPC to Target CPA. Set your target CPA based on your historical cost per conversion data. Crucially, change only one variable at a time. If you change both the bidding strategy and the ad copy, you will not know which change caused any difference in performance.
Step 6: Review and create. Double-check that only your intended changes are reflected in the experiment draft. Click Schedule to proceed to the traffic split and timing configuration.
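If you manage multiple accounts, the same setup can be scripted through the Google Ads API. Below is a minimal sketch using the official google-ads Python client library; the customer ID and campaign resource name are hypothetical placeholders, and field and enum names follow recent API versions, so verify them against the current ExperimentService reference before using this in production.

```python
# Minimal sketch: create and schedule a campaign experiment via the
# official google-ads Python client (pip install google-ads).
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"                                 # hypothetical
BASE_CAMPAIGN = f"customers/{CUSTOMER_ID}/campaigns/111"   # hypothetical

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
experiment_service = client.get_service("ExperimentService")

# Step 1: create the experiment container with a descriptive name.
exp_op = client.get_type("ExperimentOperation")
experiment = exp_op.create
experiment.name = "Search - SG - Target CPA Test - March 2026"
experiment.type_ = client.enums.ExperimentTypeEnum.SEARCH_CUSTOM
experiment.suffix = "[experiment]"
experiment.status = client.enums.ExperimentStatusEnum.SETUP
response = experiment_service.mutate_experiments(
    customer_id=CUSTOMER_ID, operations=[exp_op]
)
experiment_resource = response.results[0].resource_name

# Step 2: add a control arm (the existing campaign) and a treatment arm
# with a 50/50 traffic split. The treatment arm generates the draft
# campaign that you then edit with the change you want to test.
arm_service = client.get_service("ExperimentArmService")
arm_ops = []
for name, is_control in (("control", True), ("treatment", False)):
    op = client.get_type("ExperimentArmOperation")
    arm = op.create
    arm.experiment = experiment_resource
    arm.name = name
    arm.control = is_control
    arm.traffic_split = 50
    if is_control:
        arm.campaigns.append(BASE_CAMPAIGN)
    arm_ops.append(op)
arm_service.mutate_experiment_arms(customer_id=CUSTOMER_ID, operations=arm_ops)

# Step 3: once the draft campaign has been edited (for example, switched
# to Target CPA), schedule the experiment to start serving.
schedule_request = client.get_type("ScheduleExperimentRequest")
schedule_request.resource_name = experiment_resource
experiment_service.schedule_experiment(request=schedule_request)
```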
Configuring Traffic Split and Duration
Traffic split and experiment duration are critical settings that determine whether your test results are reliable and actionable.
Traffic split: This determines what percentage of your campaign’s traffic goes to the experiment versus the original. A 50/50 split gives you the fastest results because both arms receive equal traffic, reducing the time needed to reach statistical significance. However, if you are nervous about a radical change, a 30/70 split (30 per cent to the experiment, 70 per cent to the original) limits your risk while still generating usable data. Avoid going below 20 per cent for the experiment arm, as smaller shares take too long to gather meaningful results.
Split method: Google Ads offers two split methods. Cookie-based ensures that individual users consistently see either the original or experiment version, which prevents the same person from experiencing both variations. Search-based randomly assigns each search to one arm, meaning the same user might see different versions on different searches. Cookie-based splitting is generally more reliable for measuring conversion impact because it keeps each user’s whole journey within a single arm, avoiding cross-contamination between the variations.
Experiment duration: Run your experiment for a minimum of two weeks and ideally four to six weeks. Shorter experiments are susceptible to day-of-week effects and random fluctuations. The ideal duration depends on your traffic volume and conversion rate. Campaigns with 50+ conversions per week can often reach significance in two to three weeks. Lower-volume campaigns targeting niche Singapore audiences may need six to eight weeks.
Start and end dates: Avoid starting experiments during major promotional periods like Great Singapore Sale, 11.11, or Chinese New Year unless you are specifically testing promotional strategies. Promotional periods create unusual traffic and conversion patterns that may not represent normal performance. Set your start date for a stable period and let the experiment run through its full duration without interruption.
Budget considerations: When you split traffic, your budget is also split proportionally. A campaign spending SGD 100 per day with a 50/50 split allocates approximately SGD 50 per day to each arm. Ensure this reduced budget does not limit either arm so severely that it affects delivery. If necessary, increase the overall campaign budget during the experiment period to maintain adequate spend across both arms.
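The split arithmetic is worth checking explicitly before you launch. Here is a quick sketch in Python; the SGD 40 floor is an illustrative assumption, so substitute whatever minimum viable daily spend applies to your account.

```python
def arm_budgets(daily_budget_sgd: float, experiment_share: float, floor_sgd: float) -> None:
    """Print the per-arm daily budgets under a proportional split and flag
    any arm that falls below the minimum viable daily spend."""
    experiment = daily_budget_sgd * experiment_share
    original = daily_budget_sgd - experiment
    for name, amount in (("original", original), ("experiment", experiment)):
        status = "OK" if amount >= floor_sgd else "below floor, raise the campaign budget"
        print(f"{name}: SGD {amount:.2f}/day ({status})")

# A SGD 100/day campaign sending 30 per cent of traffic to the experiment arm.
arm_budgets(daily_budget_sgd=100, experiment_share=0.30, floor_sgd=40)
```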
Reading and Interpreting Results
Once your experiment has been running for the planned duration, it is time to analyse the results. Google Ads provides a built-in comparison view, but understanding what the numbers actually mean is where many marketers struggle.
Accessing results: Go to Campaigns > Experiments and click on your active experiment. Google Ads displays a side-by-side comparison of key metrics between the original campaign and the experiment arm, including clicks, impressions, CTR, average CPC, conversions, conversion rate, and cost per conversion.
Statistical significance: The most important element in the results view is the confidence level indicator. Google Ads shows a blue star icon when a metric difference is statistically significant, typically at 95 per cent confidence or higher. In practical terms, this means that if the change truly made no difference, a gap this large would appear by random chance less than 5 per cent of the time. Do not make decisions based on results that have not reached significance, no matter how promising they look.
Direction and magnitude: Look at both the direction (improvement or decline) and magnitude (how much) of each metric. An experiment that improves conversion rate by 2 per cent at 95 per cent confidence is useful but modest. An experiment that improves conversion rate by 25 per cent at 95 per cent confidence is transformative. Conversely, a 2 per cent improvement that is not statistically significant tells you the change probably makes no meaningful difference.
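To make the significance call and the lift figure concrete, here is a self-contained two-proportion z-test in Python. Google does not publish the exact method behind the blue star, so treat this as an illustration of the underlying statistics rather than a reproduction of it; the click and conversion counts are made up.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return p_a, p_b, z, p_value

# Original arm: 120 conversions on 4,000 clicks.
# Experiment arm: 150 conversions on 4,000 clicks.
p_a, p_b, z, p = two_proportion_z_test(120, 4000, 150, 4000)
lift = (p_b - p_a) / p_a
print(f"control {p_a:.2%}, experiment {p_b:.2%}, lift {lift:+.1%}")
print(f"z = {z:.2f}, p = {p:.3f}, significant at 95%: {p < 0.05}")
# Output: a +25.0% lift with p of roughly 0.06; promising, but not yet
# significant at this volume, which is why waiting for the star matters.
```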
Multiple metrics matter: Do not evaluate experiments on a single metric alone. A bidding strategy change might improve conversion rate but increase cost per conversion, or improve conversions but reduce conversion value. Review all relevant metrics together to form a complete picture. For Singapore e-commerce campaigns, ROAS is often the deciding metric. For lead generation campaigns, cost per qualified lead is usually the priority.
Segment by device: Check whether the experiment results differ between mobile and desktop. A change that improves mobile performance but hurts desktop performance (or vice versa) requires a more nuanced implementation strategy. Given Singapore’s high mobile usage rate, mobile performance should be weighted heavily in your analysis.
Implementing Winning Variations
When your experiment produces a clear winner, implementing the change correctly ensures you capture the full benefit without disruption.
Applying the experiment: If the experiment arm outperformed the original with statistical significance, click Apply in the experiment view. Google Ads gives you two options: Apply to original campaign modifies your base campaign with the experiment settings, while Convert to new campaign creates a separate campaign with the experiment settings and pauses the original. For most tests, applying to the original campaign is cleaner because it preserves the campaign’s historical data and Quality Score.
Ending inconclusive experiments: If the experiment did not reach statistical significance after the planned duration, end the experiment and revert to the original settings. An inconclusive result is still valuable information — it tells you that the tested change does not make a meaningful difference, freeing you to test other variables instead.
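For accounts managed through the API, these two outcomes map onto ExperimentService calls. A hedged sketch, assuming the google-ads Python client; the resource name is a hypothetical placeholder, and method and field names follow recent API versions, so confirm them against your client’s reference before relying on them.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
experiment_service = client.get_service("ExperimentService")
experiment_resource = "customers/1234567890/experiments/222"  # hypothetical
winner_is_significant = True  # the outcome of your own significance check

if winner_is_significant:
    # Apply the treatment arm's changes to the base campaign, matching
    # the "Apply to original campaign" option in the UI.
    promote_request = client.get_type("PromoteExperimentRequest")
    promote_request.resource_name = experiment_resource
    experiment_service.promote_experiment(request=promote_request)
else:
    # End the experiment and revert all traffic to the original settings.
    end_request = client.get_type("EndExperimentRequest")
    end_request.experiment = experiment_resource
    experiment_service.end_experiment(request=end_request)
```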
Post-implementation monitoring: After applying a winning experiment, monitor the campaign closely for two weeks. Occasionally, changes that performed well during the experiment period behave differently at full traffic volume. Set up automated rules or campaign monitoring alerts to flag significant performance deviations.
Document your findings: Maintain a testing log that records what was tested, the hypothesis, results, and whether the change was implemented. Over time, this log builds institutional knowledge about what works for your Singapore campaigns. Share insights with your team through regular reviews, and use findings to inform strategy across other campaigns or content channels.
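A testing log needs no special tooling. Here is a minimal sketch that appends records to a shared CSV file; the file name and field list are illustrative, so adapt them to your team’s process.

```python
import csv
import os
from datetime import date

LOG_FIELDS = ["date", "campaign", "hypothesis", "variable", "result", "applied"]

def log_experiment(path, **entry):
    """Append one experiment record to the shared testing log CSV."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

log_experiment(
    "testing_log.csv",
    date=date.today().isoformat(),
    campaign="Search - SG - Generic",
    hypothesis="Target CPA at SGD 45 cuts cost per conversion by 15%+",
    variable="bidding strategy",
    result="cost per conversion -22% at 95% confidence",
    applied="yes",
)
```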
Building a Continuous Testing Framework
One-off experiments deliver one-off improvements. A continuous testing framework compounds gains over time, systematically improving campaign performance month after month.
Create a testing roadmap. Map out the next three to six months of experiments based on potential impact and effort. Start with high-impact, low-effort tests like bidding strategy changes and ad copy variations. Save complex tests like full landing page redesigns for later when you have established your testing process.
Test one thing at a time. Never run multiple experiments on the same campaign simultaneously. Overlapping experiments contaminate results because you cannot attribute performance changes to a specific variable. If you have multiple campaigns, run different experiments across different campaigns to test more ideas in parallel.
Build a hypothesis for every test. Before creating an experiment, write a clear hypothesis: “Changing from manual CPC to Target CPA at SGD 45 will reduce cost per conversion by at least 15 per cent while maintaining conversion volume.” This focuses your experiment design and makes results interpretation clearer.
Establish minimum detectable effect. Decide in advance what magnitude of change would be meaningful for your business. If a 5 per cent improvement in conversion rate would not materially affect your bottom line, set your minimum detectable effect at 10 per cent and size your experiment duration accordingly. This prevents you from chasing trivial improvements.
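The minimum detectable effect translates directly into required traffic. Here is a standard two-proportion sample size calculation in Python (statistics.NormalDist needs Python 3.8 or later; the 3 per cent base conversion rate and the two MDE values are illustrative).

```python
import math
from statistics import NormalDist

def clicks_per_arm(base_rate, relative_mde, alpha=0.05, power=0.80):
    """Clicks needed in each arm to detect a relative conversion-rate lift
    of `relative_mde` at significance level `alpha` with the given power."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    pooled = (p1 + p2) / 2
    numerator = (
        z_alpha * math.sqrt(2 * pooled * (1 - pooled))
        + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    ) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# At a 3% base conversion rate, smaller effects need far more traffic:
print(clicks_per_arm(0.03, 0.10))  # ~53,000 clicks per arm for a 10% lift
print(clicks_per_arm(0.03, 0.25))  # ~9,100 clicks per arm for a 25% lift
```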
Share results across channels. Insights from Google Ads experiments often apply to other marketing channels. If you discover that urgency-based ad copy (“Limited Time Offer”) outperforms feature-based copy (“Award-Winning Quality”) in Google Ads, test the same messaging angle in your email marketing campaigns and social media advertising.
Iterate on winners. When an experiment produces a significant improvement, build on it. If Target CPA bidding beat manual CPC, your next experiment might test different target CPA values to find the optimal setting. Each iteration moves you closer to the best possible configuration for your Singapore market campaigns in 2026.
Frequently Asked Questions
How much traffic do I need to run a Google Ads experiment?
There is no strict minimum, but campaigns with at least 100 conversions per month will typically reach statistical significance within a reasonable timeframe. Lower-volume campaigns can still run experiments but may need four to eight weeks to generate reliable results. For niche Singapore campaigns with limited traffic, consider running experiments on broader campaign segments.
Can I run experiments on Performance Max campaigns?
Yes, Google Ads supports experiments for Performance Max campaigns. You can test changes like final URL expansion, asset groups, and bidding targets. However, Performance Max experiments have some limitations compared to Search campaign experiments, so review the available options before setting up your test.
What happens to my budget during an experiment?
Your daily budget is split proportionally between the original campaign and the experiment arm based on your traffic split percentage. A 50/50 split divides the budget evenly. You may want to increase your overall budget during the experiment period to ensure neither arm is budget-limited, which could skew results.
How do I know when results are statistically significant?
Google Ads displays a blue star icon next to metrics that have reached statistical significance, typically at 95 per cent confidence. You can also see the confidence percentage in the experiment results view. Wait until key metrics reach significance before making decisions, even if early trends look promising.
Can I stop an experiment early if results look clearly positive?
It is generally best to let experiments run for their planned duration. Early results can be misleading due to small sample sizes, day-of-week effects, or the learning period of automated bidding strategies. Stopping early increases the risk of implementing a change based on incomplete data. Only stop early if the experiment is causing severe negative performance that threatens your business.
What should I do if my experiment results are inconclusive?
An inconclusive result means the tested change does not produce a statistically significant difference. End the experiment and revert to your original settings. Review whether the experiment ran long enough and had sufficient traffic. If the test duration and volume were adequate, the result tells you the change does not meaningfully affect performance, which is valuable knowledge in itself.