How To A/B Test Your Emails For Ecomm Success

Do you wish you could know what kind of email content your audience is really interested in, and what drives them to open, click, and convert? Well, wish granted: this blog is all about A/B testing and how you can use it to get a peek inside your customers' brains.

We’ve tapped ecomm marketing expert Kasey Luck, founder and CEO of Luck & Co Agency, to share her expertise and insights on the topic.

Whether you are an A/B test newbie or looking for a refresher, this blog breaks down everything you need to know to get started now.

So, what is A/B testing?

Good question! A/B testing is a technical term for sending two (or more) variations of the same email, each to a different portion of your email subscriber list, then looking at the engagement data to get insights about your audience. Through this, you learn which variation your subscribers respond to best and can use that info to increase the efficacy of your email marketing.
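At its simplest, running a test means randomly slicing off a portion of your list, sending each half of that slice a different variation, and holding everyone else back for the winner. Here is a minimal Python sketch of that split, assuming your list is just a list of email addresses (the function name and parameters are hypothetical; in practice, your email platform performs this split for you):

```python
import random

def split_test_groups(subscribers, test_fraction=0.2, seed=42):
    """Split a random slice of the list into two equal test groups.

    Everyone outside the test slice is held back to receive the
    winning variation later. (A hypothetical helper -- in practice
    your email platform handles this for you.)
    """
    random.seed(seed)               # fixed seed so the split is reproducible
    pool = list(subscribers)
    random.shuffle(pool)

    test_size = int(len(pool) * test_fraction)
    half = test_size // 2
    group_a = pool[:half]           # receives variation A
    group_b = pool[half:test_size]  # receives variation B
    holdout = pool[test_size:]      # receives the winner afterwards
    return group_a, group_b, holdout

# Example: test on 20% of a 5,000-subscriber list (500 per variation).
group_a, group_b, holdout = split_test_groups(
    [f"user{i}@example.com" for i in range(5000)]
)
```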

The more your customers like your emails, the more likely they are to purchase from you (again and again). What's more, when testing offers, copy, or imagery, the learnings you pick up in A/B email tests can often be applied to other marketing channels and lead to optimizations across your ecommerce marketing. That means engaged customers and more revenue for your business - yay!

What metrics should I test?

When A/B testing, you are looking at Open Rate, Click Rate, and Conversion Rate. You’ll choose one metric for each test depending on the information you are seeking. So how do you choose? 

Open Rate

Do you want to know what subject lines, preheader copy, or send times work best for reaching your audience? Run a split test where OPEN RATE is the winning criterion.

The fall 2021 launch of Apple's iOS 15 somewhat skews this metric, since it allows customers to opt out of sharing their open, location, and device data. Because Apple Mail then reports anything received through the app as "opened," open rates may be somewhat inflated as iOS 15 adoption increases through the end of 2021. Keep this in mind when looking at data over the next few months.

Click Rate 

Do you want to know what image to use in an email, which line of copy catches your reader's attention, or which design gets the most readers to click through? Run an A/B test measuring click rate. Since your goal, once an email is opened, is to drive customers to your website, anything in the email body should be tested with the goal of optimizing for click rate. Other elements to test with click rate as the metric include:

  • Call To Action (CTA) button copy

  • Location of a CTA button

  • Location or variations of a header or featured image 

  • Copy, including variations on headers, titles, and CTAs

  • Significantly different design templates that communicate the same message

Conversion Rate 

Lastly, let's talk A/B tests that focus on Conversion Rate. If you want to know which offer type (discount or free gift?), campaign storytelling (positioning), or urgency tactic drives the most purchases, then you want to run an A/B test measuring conversion rate. Keep in mind that the factors that contribute to open rate and click rate also ultimately influence conversion rate, since the customer journey begins the moment your email lands in your customer's inbox. Nonetheless, for variations aimed squarely at getting your customer through the checkout flow, sales are the measure of success.
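For reference, here is how the three metrics fit together, computed from hypothetical campaign numbers. Note that all three are calculated per delivered email here; some platforms instead report click rate per open (click-to-open rate), so check your tool's definitions:

```python
def campaign_metrics(sent, opened, clicked, purchased):
    """The three A/B-test metrics for one email variation,
    each expressed per delivered email."""
    return {
        "open_rate": opened / sent,
        "click_rate": clicked / sent,
        "conversion_rate": purchased / sent,
    }

# Hypothetical results for two variations of the same campaign:
a = campaign_metrics(sent=1500, opened=420, clicked=63, purchased=12)
b = campaign_metrics(sent=1500, opened=435, clicked=88, purchased=19)
print(a["click_rate"], b["click_rate"])  # 0.042 vs. ~0.0587
```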

Now that you know what you want to test for, let’s get it set up for success.

How do I run a reliable A/B test?

First things first: 

  1. Test only one variation of one element at a time. If you are testing an image, don't change any other factors (such as location, size, copy, etc.). Multiple changes will muddy your metrics.

  2. Have a goal in mind and a specific question to answer. When you start with an intention, your testing and results will be clearer. Don't just test for the sake of testing.

  3. Use the same types of subscribers. If you are testing different types (say, subscribers who have purchased with you vs. those who have not), then you can expect different results - that is a test in itself and would be the variation. However, if you are testing variations in the email itself, then the lists you send each variation to should be as similar as possible, to the best of your knowledge.

  4. Bigger test group, more reliable results. For reliable results, each variation you send should go to a big enough sample of recipients. This will depend on your overall list size, of course. That said, aim for a minimum of 1,000-3,000 recipients per variation (see the sample-size sketch after this list). Don't have that many to test? You might need to split your total list with each test, and make sure to work on building up your subscribers!

  5. Record (and use) test results. Once you have results from your A/B test, it's time to apply them to your overall email strategy. Create a spreadsheet and keep track of your learnings; building on these insights will help refine your email strategy and make your emails increasingly effective over time.
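If you want something firmer than a rule of thumb for "how many recipients is enough?", the standard two-proportion power calculation gives an estimate. Here's a small Python sketch at 95% confidence and 80% power; the baseline rate and the lift you hope to detect are assumptions you supply:

```python
import math

def sample_size_per_variation(baseline_rate, expected_lift,
                              z_alpha=1.96, z_beta=0.84):
    """Rough recipients needed per variation to detect a given lift.

    Standard two-proportion power calculation at 95% confidence
    (z_alpha) and 80% power (z_beta). A sketch, not a substitute
    for your platform's reporting.
    """
    p1 = baseline_rate
    p2 = baseline_rate + expected_lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: detect a click rate jump from 4% to 6%.
print(sample_size_per_variation(0.04, 0.02))  # 1861 per variation
```

Notice how that example lands right inside the 1,000-3,000 range above. Smaller expected lifts need much bigger groups, so if the math asks for more recipients than you have, either test bolder variations or keep growing your list first.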

Alright, ready to see these tips in action? Here we go!

Examples say it best... a real A/B test.

The following is an example of a real A/B test with a Luck & Co Agency partner, and it illustrates well the valuable insights that can be gained.

First off, you should know that roughly 70% of shoppers abandon their cart during the checkout process. That's why abandoned cart emails are key in helping shoppers overcome hesitations and finalize their purchases. That's also why it is so important to run tests to find out what makes your abandoned cart emails most effective.

All three variations you’ll see below feature the abandoned product, with a key element changed:

Variation A includes product reviews, leveraging the power of social proof to help motivate your customer to complete their purchase: 

Variation B features defining brand values that the company thinks are a draw for its customers - specifically, the material used and how it is different and better:

Finally, Variation C shows only the abandoned product, with neither of the other blocks. Here, we wanted to see if getting right to the point with a simple reminder would be best:

The test metric here is Click Rate, because when testing variations within the body of the email (its content), you are making changes in hopes of increasing the number of readers who are motivated to click and land on your website.
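How do you know the winning click rate isn't just noise? A two-proportion z-test is one common way to check, comparing two variations at a time. This is a hand-rolled sketch with hypothetical numbers for two of the variations above; most email platforms surface a winner for you:

```python
import math

def click_rate_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-proportion z-test: is the click-rate difference real?

    Returns the z statistic; |z| > 1.96 means the difference is
    significant at the 95% confidence level.
    """
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)  # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical results for variations A and C of the abandoned cart email:
z = click_rate_z_test(clicks_a=63, sent_a=1500, clicks_b=92, sent_b=1500)
print(f"z = {z:.2f}")  # z = 2.39, above 1.96 -> treat C as the winner
```

If |z| comes out below 1.96, the difference could easily be chance; let the test run longer or send to a bigger pool before declaring a winner.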

Ultimately, the client found a winner and utilized that approach within their abandoned cart automation. So which approach is best for you? Remember, with A/B testing you are looking at how YOUR subscriber list responds, meaning the best result for your emails could turn out very different from another business's. That's why tests are so helpful at getting unique insights that reflect the preferences of your subscriber list.

Now that you know some A/B testing best practices, you’re ready to dive into the minds of your customers to learn what makes them tick… 

Your turn: set up a Drip single email A/B test 

With Drip single email split (A/B) tests you can:

  • Choose your winning criterion to test (open rate or click-through rate).

  • Test up to four subject line or email content variations in each campaign.

  • Set a test pool from your Segment before you declare a winner.

  • Select the time duration to run the test and then automatically send the winning variation to the rest of your segment. 

  • See opens, clicks, and revenue for each variation on the Dashboard to optimize future sends.

Now it's your turn. First, decide which metric you want to test: open rate, click rate, or conversion rate. This will determine which elements of the email you vary, which is the next decision (subject lines, images, copy, etc.). From here, it's off to the races, so check out this step-by-step guide and you'll be up and running in no time.

That’s it! You are fully equipped to run reliable and valuable email A/B tests.