How to test two versions of a concept to understand which one performs better.
What is it?
In the context of web analytics, A/B testing is a method used to test two versions of a single variable to discover which version best achieves the intended goal.
For example, you may want to test whether a website button labelled “Register Now” (Version A) leads to higher or lower registration rates for an upcoming webinar than a button labelled “Sign Up Now” (Version B). Users are randomly assigned to view Version A or Version B of your website, allowing you to compare response rates for each version of the site.
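The mechanics of the example above can be sketched in a few lines: assign each visitor at random to one version, record whether they register, and compare registration rates. This is a simulation for illustration only; the button labels and the underlying registration rates are hypothetical.

```python
import random

# Hypothetical underlying registration rates, for illustration only:
# "Register Now" (A) vs "Sign Up Now" (B).
TRUE_RATE = {"A": 0.10, "B": 0.12}

random.seed(42)  # fixed seed so the simulation is reproducible

counts = {"A": 0, "B": 0}
registrations = {"A": 0, "B": 0}

for _ in range(10_000):
    version = random.choice(["A", "B"])       # random assignment
    counts[version] += 1
    if random.random() < TRUE_RATE[version]:  # simulated registration
        registrations[version] += 1

for v in ("A", "B"):
    rate = registrations[v] / counts[v]
    print(f"Version {v}: {counts[v]} users, {rate:.1%} registered")
```

In a real test the assignment happens on your website and the registrations come from your analytics tool, but the comparison at the end is the same.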
A/B testing a webpage requires measurable, observable outcomes that relate to your users’ goals, so that you can compare success rates across versions. If users arrive on a page with no indication of where to go next, there is no way to measure success. In that case, consider a preference test instead.
Purpose of A/B testing
The purpose of A/B testing is to let individuals, teams and companies make small, iterative changes that help users achieve their goals. A/B testing allows us to form hypotheses and then test them by observing how users actually behave.
A/B testing will give you great insight into how users behave throughout your website, but it will not tell you why. To understand why users behave the way they do, pair A/B tests with complementary qualitative studies.
Setting up A/B tests
There are many online tools, such as Optimizely or Unbounce, to help you set up A/B tests. The setup will vary depending on what you want to test and the tool you use.
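Whatever tool you use, one detail matters in any setup: a returning user should keep seeing the same version, or your measurements will mix the two experiences. A common way to achieve this is to bucket users deterministically by hashing their ID. This is a minimal sketch, not the implementation of any particular tool; the experiment name and user IDs are hypothetical.

```python
import hashlib

def assign_version(user_id: str, experiment: str = "signup-button") -> str:
    """Deterministically bucket a user into version A or B.

    Hashing the user ID together with an experiment name means the same
    user always sees the same version, and different experiments get
    independent 50/50 splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_version("user-123"))  # same result on every call
```

Because the split is derived from a hash rather than stored state, no database lookup is needed to keep assignments consistent.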
A/B testing and preference testing
A/B testing and preference testing are distinct user research techniques, but they are often confused.
With A/B testing, the researcher analyzes behavioural patterns from a large sample of people to determine which version of a site best accomplishes a goal. Each user is exposed to only one variation and is not aware of the others. In preference testing, the user is shown multiple concepts and asked which they prefer and why. A/B testing lets researchers rely on behavioural data to determine outcomes, while preference testing yields stated preferences from users.
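“Relying on behavioural data to determine outcomes” usually means checking whether the observed difference in rates is larger than chance alone would explain. A two-proportion z-test is one standard way to do this; the counts below are hypothetical, chosen only to show the calculation.

```python
import math

def two_proportion_z_test(success_a: int, n_a: int,
                          success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the rates under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical results: 120/1000 registered with A, 152/1000 with B.
z, p = two_proportion_z_test(120, 1000, 152, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the versions is unlikely to be random noise; most A/B testing tools run an equivalent calculation for you.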