Intro to A/B Split Testing

Courtney Whiting, Conversion Lead / Information Architect
As the Conversion Lead at Experts Exchange, I spend the largest chunk of my time working out A/B split tests to see what will help us reach our conversion goals. A/B testing is a valuable practice, yet far too few companies and consultants trying to sell their services use it to their advantage. Without a scientific way of testing versions of your webpage, app, etc., you’re just guessing. Chances are you’re bleeding valuable conversions and money.

A/B testing gives you the ability to discover exactly what is most effective for your audience without opinions, false assumptions or speculations getting in the way. The numbers and results of your tests don’t lie. In this article, I will give an overview of A/B testing, show how to define what you should test, and tell you how to declare a test a winner or loser.

What is A/B testing?
A/B testing, put simply, is testing one version of a webpage, mobile app, etc. against another. The first thing you must do when setting up an A/B test is determine your success metric, also known as your conversion. This could be an app download, a free trial sign up, a prospect contacting you for your consulting services, etc. Then, you will examine the page that leads to those successes and come up with a hypothesis for what changes to that page will help improve your conversion metric.

Based on that hypothesis you will create a new version of your page with those changes implemented. Once you have the alternate version created, you will send a portion of your traffic to the original version and the other portion to your new page. You will track and record visits to the pages versus the successful conversions to determine which version is the winner. The conversion rate for each version of the page is calculated as (conversions from that test) divided by (unique visitors to that test).
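To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The visitor and conversion counts are hypothetical placeholders, not numbers from a real test.

# Conversion rate = conversions / unique visitors, per test version.
def conversion_rate(conversions, unique_visitors):
    return conversions / unique_visitors

# Hypothetical tallies for the original page (A) and the new version (B)
rate_a = conversion_rate(conversions=120, unique_visitors=4800)  # 2.50%
rate_b = conversion_rate(conversions=150, unique_visitors=4750)  # ~3.16%
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}")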

Because of the complexity of our site, at Experts Exchange we have an internal tool for distributing traffic to different tests, and we use Omniture, a.k.a. Adobe SiteCatalyst (http://www.adobe.com/solutions/digital-marketing.html), to track results. For those just starting out with testing, an excellent tool is Optimizely (https://www.optimizely.com/). It not only allows you to create the versions of the pages and send traffic to rotating pages within the same URL, but it also tracks your visits and goal metrics for easy analysis.
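If you’re curious what the traffic split looks like under the hood, one common approach is to bucket visitors by hashing a stable visitor ID, so the same visitor always lands on the same version. The sketch below illustrates the idea in Python; it is not how our internal tool or Optimizely is actually implemented, and the 50/50 split is just an example.

import hashlib

def assign_version(visitor_id, split=0.5):
    # Hash the visitor ID, map it to [0, 1), and bucket by the split fraction.
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = (int(digest, 16) % 10000) / 10000.0
    return "A" if bucket < split else "B"

print(assign_version("visitor-12345"))  # same visitor, same version every time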

Deciding what to test: Big Change vs Incremental Change Tests
One conversation our team has constantly while deciding on the next version of a page to test is whether to do a big overhaul of the page or to test small tweaks within the format of the existing page. There are pros and cons to both testing paths.

Those in favor of big overhaul tests make the point that big tests have the most potential for a big conversion increase. While this is a valid point, the test also has the potential to be a big loser. The concern with focusing only on big overhaul changes is that, especially in the case of a loss, you learn very little about your customer. Which aspects of the change caused it to win? Which caused it to lose? There’s a chance that one of the changes you introduced would have increased your conversion drastically on its own, but another change that hurt conversion caused the whole test to fail, so you move forward leaving behind an element that could have improved conversion.

Those in favor of slow and steady incremental change tests make the point that you can gain a lot more learning over time with single isolated changes, even when the learning comes from a loss. For example, at Experts Exchange we introduced a test that simply changed the text on a sign up button from ‘view this solution’ to ‘subscribe to view this solution’. It was a very small change to the page, but we saw a bump in conversions as a result of the change. We were able to learn that by being up front about what was going to be required of the customer, we were more likely to get them through the entire sign up process. We have since been able to apply this learning to a wide number of follow up tests.

The learning went beyond just that one-time bump in conversions and contributed to the success of many other tests. In the case of a loss, we discover what does not help our conversion and can steer future tests away from that sort of change. The drawback to incremental changes is that you can only take a single page style so far. You aren’t likely to see the big leaps in conversion rate you can see in the big change tests.

The best approach is a combination of big and small changes in your testing plan pipeline. Keep small changes going through so you can get the slow and steady increase to conversion, while testing a few big change tests to see if you can uncover a big win and a new direction to take the page. When you find a winning large change test, move your incremental changes to be based off that new page and continue the process.

Requirements to declare a winning page
There are three requirements that a test must satisfy before being declared a winner:
  1. First, it must receive at least two weeks of full testing traffic. Every day of the week behaves differently. You need to make sure that you have allowed the test to see every day of the week at least twice before being sure that the test results are stable. Tests will often still take longer than two weeks to satisfy the other winning requirements.
  2. Second, both your test page and your original page must receive at least 50 conversions. Without this many sign ups, you cannot be confident in the test’s ability to convert.
  3. Third, your test must beat the original page with a statistical significance of at least 95%. You can find your statistical significance quickly with this online tool: http://www.splittestcalculator.com/ (a rough sketch of the underlying calculation follows below).
If you try to declare a test a winner before it meets all the requirements, you run the risk of choosing a page that will actually convert worse than your original page. There have been many occasions where a test started out doing well and looked as though it would be a winner, but after giving it the time to satisfy these three requirements, it dropped off and lost to the original. If we had made a call based on these tests hitting even just one of the requirements without waiting to see if they met the other two, it would have cost us in the long run.
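For reference, here is a rough Python sketch of checking all three requirements at once. The significance check is a standard two-proportion z-test, which is the same kind of calculation the split test calculator linked above performs; all counts here are hypothetical.

import math

def significance(conv_a, visitors_a, conv_b, visitors_b):
    # One-sided confidence that the test page (B) truly beats the original (A).
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF of z

def is_winner(conv_a, visitors_a, conv_b, visitors_b, days_running):
    enough_time = days_running >= 14                    # at least two full weeks
    enough_conversions = conv_a >= 50 and conv_b >= 50  # at least 50 conversions each
    significant = significance(conv_a, visitors_a, conv_b, visitors_b) >= 0.95
    return enough_time and enough_conversions and significant

print(is_winner(conv_a=60, visitors_a=2400, conv_b=85, visitors_b=2400, days_running=16))  # True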

What to do after you find a winner
First of all, way to go! You found a winner and increased your conversion! Get someone to give you a high five, and then update the distribution so all your traffic is going to the higher converting version of the page. What to do after you’ve made those updates? Start over! A/B testing never stops. There’s always room to grow, more conversions to discover, and changing behavior in the marketplace to respond to. Your next goal will always be to beat your last winning test.
 