A/B Testing for Ecommerce: Split Test Your Way to More Sales

When you’re perfecting a product page or crafting copy for an email, it’s difficult to know what will resonate with your audience. After all, you are not Miss Cleo, and you don’t have a crystal ball into the future. 

When in doubt, our motto is always “Don’t guess. Test.” Let the data do the talking.

 Your test could be as simple as changing the color of a button and seeing what gets more clicks. But if you really want to boost sales, there’s so much more you can and should experiment with.

Read on to see how A/B testing can help you achieve ecommerce greatness. 

What is A/B testing?

A/B testing is the process of showing two variations of the same asset, whether it’s a product page, your checkout process, or an email, to two equal audiences to see which one customers prefer. This process is also known as split testing.
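
To make “two equal audiences” a little more concrete, here’s a minimal sketch of how a testing tool might split visitors: hash a stable visitor ID so each person always sees the same variation. The function and test names here are purely illustrative, not any particular tool’s API.

```python
import hashlib

def assign_variation(visitor_id: str, test_name: str = "homepage-test") -> str:
    """Bucket a visitor into variation A or B.

    Hashing a stable visitor ID (plus the test name) gives a roughly 50/50
    split, and the same visitor always lands in the same variation.
    """
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: assign a few visitors
for vid in ["visitor-101", "visitor-102", "visitor-103"]:
    print(vid, "->", assign_variation(vid))
```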

Why A/B test?

A/B testing offers a data-backed way to boost your conversion rate and improve the customer experience.

A memorable experience will have shoppers buying from you not just once, but again and again.

With A/B testing, a successful conversion can be defined as:

  • A sale (the most common metric)

  • A subscription to a service or a newsletter

  • A survey response

A/B Testing Examples

You can test all sorts of things in your ecommerce store to create a seamless and memorable customer journey. Here are some of the most common elements that we see in an A/B test:

  • Calls to action: button placement, color, and text

  • Images and video

  • Social proof: badges, customer testimonials, and reviews

  • Site navigation, structure, and product page layout

  • Checkout process: from cart structures to buttons and colors, almost every element listed above can also be tested in your cart experience

How to set up a test

Ready to start testing your site? Great! Here are the steps to creating a split test.

1. Create a hypothesis.

Remember all of those times you’ve thought to yourself, “I wonder what would happen if I changed x on my website?” Those are hypotheses. Make a list of all of the things you think could improve conversion. From there, articulate the why behind each one. 

Here are a few real-life examples we’ve used for A/B tests at Drip. 

“If we change the homepage navigation, customers will have an easier time finding the information about Drip they are looking for.”

“If we change the language on the CTA button, we will see a higher clickthrough rate.”

These if/then statements are the basis upon which you build your test.

2. Determine your variable.

The variable is the element of the test that will be different between audiences. Here are the variables from the examples we already talked about.

“If we change the homepage navigation, customers will have an easier time finding the information about Drip they are looking for.”

In this example, the homepage navigation is the variable.

“If we change the language on the CTA button to be more actionable, we will see a higher clickthrough rate.”

Here we’re changing the language on the CTA button, making it the variable. 

3. Determine parameters.

You can’t just launch a test and decide when you’re done based on a whim. I mean, you could, but wouldn’t you rather have a plan? Parameters will give you that plan, telling you how long your test will need to run to achieve statistical significance. 

What is statistical significance, you might ask? Well, it’s how you know whether a result reflects a real difference or was just a fluke.

Here’s how it works. First, you’ll need to know how many visitors and conversions the page you’re testing averages. The more visitors and conversions, the faster you’ll reach statistical significance. Next, you’ll need a statistical significance calculator. I love this one from Neil Patel. 

Set up your test, and then input the site visits and conversions from each variation. The calculator will tell you which variation is winning, as well as whether or not your test is statistically significant. 
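
If you’re curious what the calculator is doing under the hood, here’s a minimal sketch of the standard two-proportion z-test that most significance calculators run. The visitor and conversion counts below are made up purely for illustration.

```python
from math import sqrt, erf

def ab_test_result(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: is the difference between A and B a real effect?"""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pool both variations to estimate the overall conversion rate
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the normal distribution
    p_value = 1 - erf(abs(z) / sqrt(2))
    return rate_a, rate_b, p_value

# Example with made-up numbers: 2,000 visitors per variation
rate_a, rate_b, p = ab_test_result(2000, 80, 2000, 110)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}")
print("Statistically significant" if p < 0.05 else "Keep the test running")
```

A p-value below 0.05 is the usual bar for calling a result significant at the 95% confidence level.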

Best Practices for A/B Testing

Avoid cloaking.

You might be tempted to set up your site so that one version is shown to human visitors and a different version is served to search engine crawlers. We don’t recommend this approach, as it can damage your site’s SEO. 

Tools like Optimizely are great for automatically setting up your tests, and they’ll even calculate statistical significance for you!

Use a temporary 302 redirect.

If you’re testing two versions of a homepage, use a 302 (temporary) redirect rather than a permanent 301, as it signals to Google that your change is only temporary. 
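
As a rough illustration, here’s what that might look like if your site happens to run on something like Flask; the route names and variant URL are hypothetical.

```python
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/")
def homepage():
    # 302 tells crawlers the move is temporary, so the original URL keeps its ranking.
    # A 301 would signal a permanent move, which you don't want for a short-lived test.
    # In a real test you'd only redirect the visitors bucketed into variation B.
    return redirect("/home-b", code=302)

@app.route("/home-b")
def homepage_variant():
    return "Variant B of the homepage"
```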

Stop the test once you reach statistical significance.

Once you know which version won the test, end the test to avoid issues with SEO or confused audiences. 

Don’t test multiple things on the same page at once.

Running multiple tests on the same page at the same time makes it nearly impossible to know which change is driving the outcome you were looking for.