Running A/B tests in a Shopify store should feel simple, not technical. The CausalFunnel app gives you a clean way to test ideas, compare versions of a page, and see what your shoppers actually respond to. This guide shows you how to use the CausalFunnel A/B testing tool inside Shopify in a clear, step-by-step flow, so you can follow along with your own dashboard as you read.
You will learn how to set up your first test, select what you want to change, send traffic to both versions, and interpret the results with clarity. Each section mirrors what you see inside the plugin, so you never lose track of where to click or what to adjust.
If this is your first time running an experiment or you want a smoother method for trials you already do, this walkthrough will help you use the tool with confidence. Once the basic setup is covered, we move deeper into test conditions, triggers, interpretation, and publishing the winning version.

Log in to your Shopify account, search for the app named CausalFunnel A/B Test, open the app, and click the Launch A/B Test or Create Test button. You will then see a set of test type options. Below is a clear, practical breakdown of each option and how to choose between them.
Available test types and when to use each

Element Test
Use this when you want to adjust small parts of a page that guide user actions, such as buttons, images, and other page elements. This is ideal for changes that help a shopper notice, read, or respond faster.
Pricing Test
Use this when price presentation or discount levels may affect conversion or average order value. This is for experiments that change the price itself or how price is shown.
Examples include comparing rounded and charm pricing (such as $29.99 versus $30), trying a new discount style, or changing how the currency appears.
Cart and Discount Test
Designed for experiments that change the cart experience or how discounts are applied and displayed. Use this when you want to reduce cart abandonment or increase average order value.
Examples include discount triggers, cart messages, and how discount codes are displayed in the cart.
Product Test
When the goal is improving product pages, recommendations, or visual presentation, use a product test.
Examples include comparing product recommendations, product descriptions, and page layouts.
Shipping Test
Use this to test different shipping options and messaging with the goal of improving checkout conversion and customer satisfaction.
Examples include trialing delivery messages, shipping options, and free shipping prompts.
Note: This option is shown as coming soon in the dashboard.
Checkout Test
Reserved for experiments that affect checkout flow, payment options, and trust signals. These tests require care because they touch the purchase path.
Examples include experiments on payment methods, form fields, and trust indicators.
Note: This option is shown as coming soon in the dashboard.
Click Get Expert Recommendations to receive AI-backed suggestions from CausalFunnel conversion experts. The tool analyzes your store and recommends the tests most likely to move revenue. You can also view success stories to see real-world examples of lifts other stores achieved.
Before we move into the later test settings, let's first get a full run-through of how the Element Test works inside the CausalFunnel Shopify app. This is the most flexible testing option for changing buttons, images, and other page elements, so understanding its setup will help you run precise, meaningful experiments.
Once the user clicks Element Test, the setup begins. The flow is broken into four parts: basic details, variant setup, element configuration, and test settings. Each part affects how the experiment behaves, so this section explains every option before moving forward.

The first step is to define the purpose of the test.
Test Name
Enter a clear name that reflects what you're changing. This helps you identify tests later, especially when you run several experiments.
Description
Add a short note about what you're testing and why. This section is useful for team alignment and record keeping.
Test Goal
Describe the outcome you want to improve. For element tests, this is often an increase in conversion rate or clicks on the element you are changing.
Tips for Element Tests
The dashboard gives practical reminders to guide your setup at this stage.
After entering these details, click Continue to Configuration.
This section defines the type of element experiment you want to run and sets the foundation for your control and variant.

The dashboard displays:
Configure Button Position Test
This lets you move a button to a different location and measure which placement performs better.
Element Type
Choose the type of element you want to modify. In this case select Button.
Position
Choose what kind of test you want to perform. The two options are Button Position and Image Replacement.
If you choose Image Replacement, you will configure an image instead of a button, but the flow remains similar.
If any required details are skipped, the dashboard will ask you to complete the configuration.
Click Continue to Test Setting when ready.

Button Configuration (when testing a button)
This screen captures information about the specific button you want to run the experiment on.
Button Name
A descriptive label such as "Add to Cart Button."
Page URL
Paste the page link where the button appears. The app uses this for previews and for CSS selector detection.
CSS Selector
The selector that targets the exact button element on the page. Once filled in, you can proceed.
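If you want to double-check a selector before pasting it in, you can try it in your browser's developer console on the page you entered above. A minimal sketch, using a hypothetical selector; your store's markup will differ:

```typescript
// ".product-form__submit" is an illustrative example selector, not a
// value the app requires; use the selector for your own button.
const matches = document.querySelectorAll(".product-form__submit");
console.log(matches.length); // exactly 1 means the selector targets one element
```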
Image Configuration (if Image Replacement was chosen)
The fields mirror the button flow.
Image Name
Name the image you are modifying, such as "Hero Banner Image."
Page URL
URL where the image appears.
CSS Selector
Selector targeting the exact image element.
"Find Image" helps gather it directly from the preview.
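For context, an image-replacement variant effectively swaps the source of the targeted element. A minimal sketch of that idea, with a hypothetical selector and image URL; this is not the app's internal code:

```typescript
// Swap the targeted image for the variant version.
// Selector and URL are illustrative placeholders.
const img = document.querySelector<HTMLImageElement>(".hero-banner img");
if (img) {
  img.src = "https://cdn.example.com/hero-variant.jpg";
  img.alt = "Variant hero banner"; // keep alt text meaningful for shoppers
}
```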

This part tells the system where you want the button moved.
The dashboard displays:
Button to Move
Shows the button name and the CSS selector you previously entered.
Target Container Name
Enter a descriptive name like "Product Form Buttons Area."
Target Container CSS Selector
Paste or pick the selector for the container where the button should be placed.
Using Find Container simplifies this by letting you point and capture the selector.
Position Method
Choose the action that determines how the button is placed within the container; the available placement options are listed in the dashboard.
The preview updates to show a Position Summary covering the button being moved, the target container, and the chosen placement.
You can preview the change or apply it.
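Conceptually, applying a position variant just re-inserts the button into the target container. Here is a rough sketch of that idea, using hypothetical selectors rather than the app's actual implementation:

```typescript
// Move the chosen button into the target container.
// Both selectors are illustrative examples.
const button = document.querySelector<HTMLElement>(".add-to-cart");
const container = document.querySelector<HTMLElement>(".product-form__buttons");

if (button && container) {
  // Appending places the button as the container's last child; other
  // position methods would use prepend() or insertBefore() instead.
  container.appendChild(button);
}
```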
Once satisfied, click Continue to Test Settings.
This section defines how the test will run, who will see it, and how results will be measured.

Test Split
Choose how traffic is divided between control and variant. The default is a 50/50 split.
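For context, a fixed split like this is usually applied deterministically so a returning visitor always sees the same version. A minimal sketch of the idea, assuming a stable visitor ID such as a first-party cookie; this is not CausalFunnel's actual assignment logic:

```typescript
// Deterministic bucketing: the same visitor ID always lands in the
// same group, keeping the experience consistent across sessions.
function assignVariant(visitorId: string, variantPercent = 50): "control" | "variant" {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % 100 < variantPercent ? "variant" : "control";
}

console.log(assignVariant("visitor-123")); // stable result for this ID
```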
Test Duration
Set the number of days the test should run. The recommended window is between fourteen and thirty days depending on traffic.
Minimal Sample Size
Set the number of visitors needed per variant before a result can be considered reliable. The default is one thousand.
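Duration and sample size work together: the test must run long enough for each group to reach the minimum. A back-of-envelope check, using the default of one thousand visitors per variant and an assumed daily traffic figure:

```typescript
// Days needed to reach the minimum sample across both groups.
// dailyVisitors is a hypothetical figure for your store.
const minPerVariant = 1000;
const dailyVisitors = 400;
const daysNeeded = Math.ceil((minPerVariant * 2) / dailyVisitors);
console.log(`${daysNeeded} days`); // 5 here, but the 14-day window still applies
```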
Confidence Level
Defines how statistically strong the result should be. The default is ninety-five percent.
Primary Success Metric
Choose what the test is optimizing for. For element tests this is usually the conversion rate.
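To see what the confidence level means in practice, here is a rough sketch of a standard two-proportion z-test, the kind of significance check behind these defaults. The numbers are illustrative, not the app's exact statistics engine:

```typescript
// Two-proportion z-test: compares conversion rates of control vs variant.
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Example: 1,000 visitors per variant (the default minimum sample size).
const z = zScore(50, 1000, 65, 1000);
// |z| >= 1.96 corresponds to the default ninety-five percent confidence.
console.log(z.toFixed(2), Math.abs(z) >= 1.96 ? "significant" : "keep running");
```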
This is where you decide who will see the test.
Options include:
Target Devices
All devices or device specific targeting.
User Type
All users or specific segments.
Traffic Source
Test for all traffic or limit it to certain sources.
Target Countries
Include all or select only certain regions.
Time Targeting
Run the test continuously or during specific time windows.
Browser Targeting
Test across all browsers or restrict to selected ones.
Referrer Sites
Choose visitors arriving from specific websites like google.com or facebook.com.
UTM Parameters
Enter UTM tags to run experiments only on certain campaigns.
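Conceptually, these filters act as an eligibility gate that a visitor must pass before being bucketed into the test. A small sketch of that idea with hypothetical field names, not the app's API:

```typescript
// Illustrative targeting check: every configured rule must match for a
// visitor to enter the test; rules left empty match everyone.
interface TargetingRules {
  devices?: string[];
  countries?: string[];
  utmCampaigns?: string[];
}

interface Visitor {
  device: string;
  country: string;
  utmCampaign?: string;
}

function isEligible(v: Visitor, rules: TargetingRules): boolean {
  if (rules.devices && !rules.devices.includes(v.device)) return false;
  if (rules.countries && !rules.countries.includes(v.country)) return false;
  if (rules.utmCampaigns && (!v.utmCampaign || !rules.utmCampaigns.includes(v.utmCampaign))) return false;
  return true;
}

console.log(isEligible({ device: "mobile", country: "US" }, { devices: ["mobile"] })); // true
```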
Test Management
Options include auto-stopping the test once significance is reached and email notifications when major milestones are hit.
Click Save and Next.
The summary screen shows all test details in one place, including the test name, variant configuration, settings, and targeting rules.
There are preview buttons for both the original version and the variant, so you can confirm everything looks correct.
Once reviewed, click Launch Test.
Once your test goes live, the app sends you straight to the A/B Testing Dashboard. This is the command center where every running experiment sits, and the layout makes it easy to track what's happening at a glance.
At the top, you'll notice quick filters showing how many tests are active, paused, stopped, or completed. Right below that sits the table where each experiment appears with its essential details.
You'll see columns for the test name, the type of test you created, its current status, the traffic split you assigned, the date it was created, and the available actions. A fresh Element Test appears here as soon as it launches.
This table becomes your ongoing workspace. Every test you create appears here, and every action you take begins here.

The Pricing Test helps you understand how different price displays influence buying decisions. When someone selects this option from the dashboard, the first screen they see is the basic information stage. This part sets the intention of the experiment, so the details entered here matter.
What you need to fill in

The first field is the test name. Keep it simple and specific. Something like "29.99 vs 30 on Hoodie" works far better than a vague label because you can recognise it instantly on the dashboard later.
There's also a short description box. This is simply a place to record what you're comparing and the reason behind it. A line or two is enough, but it helps when you look back to understand why this test was started.
Next, you choose the test goal. Pricing experiments usually aim to lift conversions by changing how prices are shown. It could be a comparison between rounded and charm pricing, a new discount style, or a change in how the currency appears. The goal keeps the experiment focused.
Small reminders that help shape a good test
The tool then highlights a few suggestions worth paying attention to. Testing two or three price points is usually ideal because it gives you clarity without overwhelming the results. Trying small format changes, such as "$29.99" versus "$30", often reveals patterns that are easy to miss. And finally, running the test for a minimum of two weeks helps the results settle across regular shopping patterns.
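To make those format changes concrete, the snippet below renders the same price point three ways; the variable names and formatting are illustrative, not part of the app:

```typescript
// Three common ways to display the same price point.
const basePrice = 30;
const displays = {
  rounded: `$${basePrice}`,                     // "$30"
  exact:   `$${basePrice.toFixed(2)}`,          // "$30.00"
  charm:   `$${(basePrice - 0.01).toFixed(2)}`, // "$29.99"
};
console.log(displays);
```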
Once this first stage is complete, the user moves to the variants section where the actual price versions are created.

After the basic information is saved, the next screen asks you to choose which product you want to run the pricing experiment on. A search bar makes it simple to find the product by name or ID, and you can scroll through your full list if you prefer. Once the product is selected, the tool loads its existing variants so you can start setting up the comparison.
What happens on the configuration screen
The page is divided into two sides. On the left is the control group. This shows the product exactly as it appears in your store right now. Every detail is locked except the price display, since pricing tests only evaluate how price changes influence buying behavior.
On the right is the test variant. This is where you enter the alternate price you want to compare. The tool reminds you that the experimental price must be the same or lower than the original. This keeps the test fair and prevents accidental increases that can affect shopper trust.
If you are testing only one version, you simply adjust the price for Group B. Once the new value is entered, the tool confirms that a valid change has been detected. If nothing is edited, it won't allow you to proceed, since there must be a meaningful difference between the control and the test.
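The validation described above is easy to picture in code. A minimal sketch of the rule, assuming the app rejects both higher and unchanged prices; the function name is hypothetical:

```typescript
// Validate a test price against the control price, per the rules above:
// it must not be higher, and it must actually differ.
function validateTestPrice(control: number, test: number): string | null {
  if (test > control) return "Test price must be the same or lower than the original";
  if (test === control) return "No change detected: enter a different price";
  return null; // a valid change was detected
}

console.log(validateTestPrice(30, 29.99)); // null, so you can proceed
```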
By the time this step is done, you have two clear versions of the same product: the original and the price you want to put to the test. The next part moves into the test settings, where the timing and traffic split are configured.
Once the product and the new price are confirmed, the next screen controls how the test runs. Every setting here influences data quality, so this step walks through each option in a simple sequence.

The page starts with the test split. By default, half of your visitors see the original price and half see the test price. You can adjust the percentages if needed, but most merchants keep it balanced so both versions get equal exposure.
Next is the time window. The default is fourteen days, and the tool highlights that running the experiment for at least two weeks helps avoid misleading results caused by short-term spikes in traffic. Merchants testing higher-value products often extend this to thirty days for added reliability.
Then comes the minimum number of visitors each group needs before the test is considered valid. The tool sets this at one thousand per variant. This baseline ensures that the final outcome is based on real buying behavior, not a handful of early clicks.
The confidence level is automatically set to ninety-five percent. This indicates how certain the system needs to be before declaring a winner. Keeping it high protects the test from random swings and produces cleaner, more dependable insights.
Here you choose what result matters most for the experiment. For pricing tests, the most common metric is conversion rate, since the goal is to see how price changes influence purchasing behavior. Some merchants may focus on add-to-cart actions for products with longer decision cycles.
Below the basic settings, a list of optional refinements lets you control who enters the test. You can leave everything open for a broad sample or define specific audiences.
The available filters include target devices, user type, traffic source, countries, time windows, browsers, referrer sites, and UTM parameters.
Each filter narrows the test to the shoppers you want to study. Merchants running global stores often keep this section untouched unless the test is meant for a specific group.
Test management
At the bottom of the screen are two quality controls. Auto-stop ends the test automatically once significance is reached, preventing wasted traffic. Email notifications keep you updated when major milestones are hit so you don't need to check the dashboard constantly.
Summary panel
A small summary panel on the right collects every setting you've selected. It shows the expected duration, minimum sample requirement, confidence level, primary metric, and any targeting rules you added.
Once everything looks correct, the button at the bottom lets you save and move to the final overview.
The final screen brings everything together so you can verify the setup before the experiment goes live. This page is designed to give a complete picture of your test: what you're changing, who will see it, and how long it will run.

Expected results panel
At the top, the tool summarizes the projected scope of the test. You'll see the planned duration, the expected number of visitors, the confidence level, and the improvement target you're aiming for. This quick snapshot helps confirm that the experiment aligns with your goals.
Discount setup
Below the overview is the discount code configuration. This is where you specify the code that shoppers will receive when they complete a purchase under the test variant. Once entered, the system automatically creates this code inside your Shopify admin when the test launches. This saves time and keeps tracking accurate.
Variant comparison
Next comes a side by side comparison of the control and the test version. Each block displays the product name, product ID, and the pricing structure being tested. The control group shows the original price, while the variant group displays the new price along with the discount percentage calculated from the difference. Preview buttons let you see what each version will look like on the storefront before proceeding.
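The discount percentage shown next to the variant price is derived directly from the difference between the two prices, for example:

```typescript
// Discount percentage displayed alongside the variant price.
const originalPrice = 30;
const testPrice = 24;
const discountPercent = ((originalPrice - testPrice) / originalPrice) * 100;
console.log(`${discountPercent.toFixed(0)}% off`); // "20% off"
```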
Basic settings recap
Further down the page, all test settings appear in a clean list. This includes the traffic split, runtime, minimum sample size, confidence level, and the primary success metric. It also reflects every targeting rule you selected earlier, such as device type, region, or traffic source. Auto-stop and email notifications are also shown here so nothing is overlooked.
Final actions
At the bottom, two options are available. You can save the test as a draft if you want to revisit it later, or you can start the experiment immediately. Once the launch button is pressed, the pricing test becomes active in your Shopify store and begins collecting real time data.
This completes the setup of the Pricing Test.
The Cart and Discount Test follows the same structured flow and ends with its own pre-launch review. Walk through each panel before starting it.
Expected results panel
This top area shows the planned runtime, the estimated visitor pool, the confidence target, and the improvement target you entered. Confirm the duration and minimum visitors match your plan because they set how long the test will collect usable data.
Discount code configuration
Here you enter the discount code that will be offered to shoppers in the variant. When you launch the test, the app automatically creates this code in your Shopify admin, so check the code carefully now, before it goes live.
Cart experience comparison
This area shows the control and the variant side by side so you can confirm the customer experience.
Control group summary
Variant group summary
Basic settings recap
Targeting and management recap
Preview and verification steps before launch
Final actions
You have three choices at the bottom of the screen.
After-launch quick checklist
Once you have verified every item, press Launch Test to start collecting data.
A few advanced testing modules are already planned and will be added to the dashboard soon. These additions will expand your control over on site behavior and help you run broader experiments across the full purchase path. Product Test will allow you to compare recommendations, product descriptions, and page layouts. Shipping Test will bring tools to trial delivery messages, shipping options, and free shipping prompts. Checkout Test will enable experiments on payment methods, form fields, trust indicators, and overall checkout experience. Each of these modules will follow the same structured workflow you used in the cart and discount setup.
With the Cart and Discount Test fully configured, launched, and tracked through the A/B Testing Dashboard, you now have a complete framework to study how pricing cues, discount triggers, and cart messages influence conversions. The upcoming modules will add even more flexibility, allowing every stage of the customer journey to be tested with the same clarity. For now, continue monitoring performance, review the data once significance is reached, and apply the winning setup directly to improve your store's results.
Start using our A/B test platform now and unlock the hidden potential of your website traffic. Your success begins with giving users the personalized experiences they want.