One thing I love about email is data. Because it’s so readily available, it’s easy to run split tests and see email performance in real time. You’ll often find me poring over spreadsheets and constantly refreshing data as if I were watching a close race.

But none of that matters if A/B testing isn’t set up correctly.

Without a solid foundation, your A/B test results are unreliable and can lead you in the wrong direction. And that can cost you in engagement, conversions, and ultimately, in subscribers and customers. So before you think about your next test, make sure you’re set up for success to get the insights you need to drive your marketing strategy.

And who better to talk about A/B testing than our resident testing expert and senior growth manager, John Kim? John runs most of the conversion testing on our website and has taught me quite a bit to sharpen my own skills. And now you’ll get to learn from him, too.

What are the key things you need to do to run a successful A/B test?

No matter where you’re testing (e.g., email, website, in-app, or paid ads), the basics remain the same. Get them right, and you’re well on your way to results you can trust and take action on.

Curious about this year’s email trends?

Add your voice to our 2021 State of Email survey to help us discover and share the latest insights, like what your peers are A/B testing and how that’s impacting email marketing success.

Take our survey →

Know what you’re testing

Before executing your A/B test, it’s critical to understand exactly what you plan to test. At Litmus, we document a number of criteria for each A/B test to maximize our chances of success and learnings.

Hypothesis

A good hypothesis is perhaps the most vital element of your A/B test: it’s a proposed answer to a problem you’re trying to solve.

Your hypothesis should be clear, focused, and grounded in at least some underlying evidence. Put simply, it’s an educated guess at how you might solve a complex business problem. It’s important that your hypothesis is clearly defined because your experiment will be designed to test it.

Start writing your hypothesis! In our case, hypotheses are often written as if-then statements.

Example: If we change our standard button color to orange instead of green, we will see an increase in click-throughs.

Goal

The next element we like to document before running any experiment is the goal of the experiment. Ultimately, what are you trying to accomplish for your business?

Be clear about what success means to you.

Example: Our goal is to increase click-throughs on the button to, in turn, increase conversions on the next page, resulting in either higher trial sign-ups or activations overall.

Metrics

Prior to running your experiment, it’s important to know which primary metrics you will monitor. Given your hypothesis and goal, be clear about the one or two metrics you will use to determine success against your previously stated goals.

This step is important because you will want to make sure you:

  1. Know which metrics are important to you.
  2. Have the ability to monitor and attribute that activity back to a given user and cohort (your test and control audiences taken from your overall audience).
  3. Understand your secondary metrics. In addition to your primary metrics, it’s important to monitor how users interact with the rest of your experience.

Guardrails

Any given test can affect your business in ways that come as a surprise.

What we do in this step is document all the metrics and channels that the upcoming A/B test could positively or negatively impact.

It’s important to go through this exercise so we can:

  1. Minimize surprises for any given test.
  2. Weigh (as best as possible) the potential benefits against the risks.

Our team puts significant effort into preparing for tests. We enter each test with realistic expectations and thresholds for success and failure, and we’re prepared for a multitude of outcomes.

Access not-so-typical email metrics in Litmus Analytics

Standard email metrics like open rate, click-through rate, unsubscribe rate, and more can only tell you so much. Understand how your audience interacts with your emails through details such as email client, read rate, and more.

Get more email data →

Split test and track

A/B testing, or split testing, is a widely available feature offered by most email service providers (ESPs) and marketing automation platforms. If you want to run tests on your marketing site or your app, tools like VWO or Optimizely offer solutions, too.

When it comes to selecting your audience, determine how many people need to be in your overall test to establish statistical significance, or the likelihood that the difference in conversion rates between group A and group B is not due to random chance. If your audience is large enough, carve out a portion of it to split 50/50 into these groups. Here at Litmus, we’ve come across various tools over the years to help; one of our faves is Neil Patel’s A/B testing calculator.
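If you’d rather see the math those calculators run, here’s a minimal sketch in Python of the standard two-proportion sample-size formula. The 3% baseline and 4% expected click-through rates are hypothetical numbers for illustration, not Litmus benchmarks:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_cohort(baseline_rate, expected_rate, alpha=0.05, power=0.8):
    """Estimate the people needed in EACH cohort for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (baseline_rate + expected_rate) / 2    # pooled rate under the null
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(baseline_rate * (1 - baseline_rate)
                                 + expected_rate * (1 - expected_rate))) ** 2
    return int(numerator / (baseline_rate - expected_rate) ** 2) + 1

# Hypothetical: a 3% click-through baseline and a hoped-for lift to 4%
print(sample_size_per_cohort(0.03, 0.04))  # ~5,300 people per cohort
```

Note how the required size balloons as the lift you want to detect shrinks: detecting a small difference reliably takes a much larger audience.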

Once you determine how many people need to be in your test audience, half of them should have no changes applied to their experience. This group is your control group; as best as possible, their experience should closely resemble what you consider your baseline or typical experience. The other half of your audience is your variant cohort. For the users in this group, apply the test treatment.

A/B tests are usually analyzed at a cohort level. That is, we assess whether the cohort that received the treatment converted at a significantly different rate than the control cohort.
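As a sketch of what that cohort-level comparison looks like, here’s a two-proportion z-test in Python; the conversion counts below are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_control, n_control, conv_variant, n_variant):
    """Test whether the variant cohort converted at a different rate than control."""
    p_control = conv_control / n_control
    p_variant = conv_variant / n_variant
    p_pool = (conv_control + conv_variant) / (n_control + n_variant)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_variant))  # std. error
    z = (p_variant - p_control) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Hypothetical counts: 160 of 5,300 control clicks vs. 212 of 5,300 variant clicks
z, p = two_proportion_z_test(160, 5300, 212, 5300)
print(f"z = {z:.2f}, p = {p:.4f}")  # treat p < 0.05 as significant at alpha = 0.05
```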

It’s vital that audience members are placed into cohorts at random and that each member receives only a single treatment. Considering the makeup of each cohort (test and control), we want to ensure we don’t bias a single cohort toward a particular demographic, firmographic, or any other user characteristic. Randomizing your cohorts and keeping the number of variants low better ensures that each cohort represents a random selection of your audience.
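One common way to get both properties, random assignment and a single treatment per member, is deterministic hashing: bucket each user by hashing their id together with the experiment name. This is a minimal sketch; the user id format and experiment name are hypothetical:

```python
import hashlib

def assign_cohort(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into control or variant.

    Hashing the user id together with the experiment name yields a stable,
    effectively random 50/50 split, a fresh split for every experiment, and
    a guarantee that each member only ever sees one treatment per test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

# Hypothetical usage: the same subscriber always lands in the same cohort
print(assign_cohort("subscriber-42", "button-color-test"))
```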

Wrapping up

A/B testing doesn’t have to be hard, but if you don’t set your tests up correctly, the insights you glean from them won’t mean much. Understanding the fundamentals we’ve explored here will set you up for success, so you can apply your learnings to your entire marketing strategy. Remember: take a step back to think through each element, and you’ll be well on your way. Stay tuned for our blog on A/B testing your email marketing, where we’ll dive more deeply into testing our favorite channel.


More data. More insights.

Get more email data—more insights—when you go beyond standard email metrics. Access read rate, forward rate, and more with the power of Litmus Email Analytics, built right into Litmus Plus.

Try Litmus Plus for free →
