How to Do A/B Testing Brilliantly
Instead of relying on guesswork, it makes far more sense to perform A/B testing (sometimes called split testing) on your content to see what works best.
You won’t be surprised to hear that different audiences behave differently. Older people will have different interests to younger people, people looking for cheap products will behave differently to those looking for high-end products, and regular customers will behave differently to brand-new customers.
And what works for one business or industry won’t necessarily work for another. When it comes to the content of your landing pages and emails, there really is no such thing as ‘best practice’, because someone else’s best practice won’t be the same as yours.
A/B testing will help you work out how your audience reacts, so you can make sure you are giving them what they really want, to increase conversions and sales.
A/B testing – the basics
A/B testing is a marketing experiment (eg on your landing pages or emails) in which you split your audience in two to see which of two versions of your content performs better. You show version A to half of your audience and version B to the other half.
To run an effective A/B test you need to create two versions of a piece of content, with changes to just one thing – this could be something like the headline, the image, the length of the copy or the background colour. You will show one version to one audience and the other version to another audience of the same size. Set a time period which is long enough to draw accurate conclusions about the results, then analyse which version worked better.
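If you ever build the split yourself rather than relying on a testing tool, one common approach is to hash a visitor ID so each person is always bucketed into the same version. A minimal Python sketch (the assign_variant helper and the visitor IDs are hypothetical):

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to version A or B.

    Hashing the visitor ID (rather than choosing at random on every
    visit) means a returning visitor always sees the same version.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Each visitor is consistently bucketed, roughly 50/50 across the audience
for vid in ["visitor-001", "visitor-002", "visitor-003"]:
    print(vid, "->", assign_variant(vid))
```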
Two common types of A/B test are the user experience test and the design test.
The user experience test involves moving something, such as a call-to-action button, on your page to see if that increases clicks. The existing version is A, while the new version is the challenger, or B. So in version B, you could move the button from the bottom of the page to the top, or from the sidebar to the middle of the page.
In a design test, you would leave your call-to-action button in its usual place, but perhaps experiment with the colour. It’s important that the B version of your button still leads to the same landing page. If you usually use a green button, but the blue button receives more clicks after your A/B test, it could be worth switching to a blue button for all of your existing landing pages and future marketing campaigns.
What do you want to achieve from your A/B testing?
Every business is different and there are many reasons why you may want to conduct split testing. Some of the most common reasons to carry out A/B testing are:
- Increased conversions – making changes to text, colours or the layout of your page can increase the number of people who click on a link to get to a landing page. This can increase the number of people who convert into a lead – whether they are filling out a contact form on your site, signing up for a subscription or making a purchase.
- Increased web traffic – testing different headlines and titles can increase the number of people who click on a link to get to your website. More traffic means more visibility for your products and services and ultimately should lead to more sales.
- Reduced bounce rate – if your visitors leave, or bounce, quickly after visiting your site, testing different fonts, images or introductory text can help cut down your bounce rate and encourage your visitors to stick around for longer.
- Reduced abandoned baskets – ecommerce sites will be familiar with abandoned baskets; abandonment rates can run as high as 40 to 75% for some companies. Testing checkout page design, product images and shipping cost information can help reduce abandonment.
A/B testing – sound economic sense
A/B testing will always make good economic sense. Testing is low in cost but, when you get the answers you are looking for, high in reward.
If you employ someone for a salary of £30,000 to write five pieces of content a week, each piece of content would cost on average just over £115 (based on 260 posts a year). If each post generates around 10 leads, that means it costs your business £115 for 10 leads.
Rather than producing two separate pieces of content, your content creator could spend two days producing two versions of the same post (A/B testing). If the new version (version B) is a success and doubles your conversions to 20, you have spent £115 to potentially double the number of customers you get through your site.
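As a quick sketch, the arithmetic above looks like this (using the example figures, not real data):

```python
salary = 30_000          # annual content-writer salary (£)
posts_per_year = 5 * 52  # five pieces of content a week = 260 posts

cost_per_post = salary / posts_per_year
print(f"Cost per post: £{cost_per_post:.2f}")  # ~£115.38

leads_a, leads_b = 10, 20  # leads from the control vs the winning variant
print(f"Cost per lead, version A: £{cost_per_post / leads_a:.2f}")  # ~£11.54
print(f"Cost per lead, version B: £{cost_per_post / leads_b:.2f}")  # ~£5.77
```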
If the test fails, you have lost £115 and haven’t increased your clicks, conversions or sales, but you know what not to use next time! You are one step closer to getting it right for your business. It doesn’t matter how many times your tests ‘fail’, because each failure is giving you answers and the eventual success will almost always outweigh the cost of conducting the testing.
A step-by-step guide to A/B testing
- Test one variable at a time – When you work on optimising your website and emails, it is likely that you will want to test a number of variables. But the only way to be sure of the effectiveness of a change is to test one factor at a time, otherwise you can’t be sure which variable led to the change in performance.
Consider the various elements of your landing pages, emails or marketing resources and possible alternatives for wording, layout and design. Easy things to test include your email subject lines or the way you personalise your emails.
Small changes, like different images in your email or different wording on your call-to-action button, can produce big results.
- What is your goal? – Think about what you want to achieve before you conduct your test. It can help to state in advance what you expect to happen and analyse your results against that prediction, eg ‘I think moving the call-to-action button to the top of the page will increase conversions’.
If you wait until afterwards to consider why the data is important to you and what your goals are, you might realise that you haven’t set up the test in the right way to get the answers you were looking for.
- Set your control and your challenger – Once you know your goal, use the unaltered (existing) version as your control eg the landing page design and copy you normally use.
From there, build a different version, or your challenger. So if you are wondering whether having less text on your landing page would increase conversions, set up your challenger with less text than your control version.
- Split your sample groups equally – For your testing to be valid, your two audiences need to be the same size. If you have more knowledge of your audience (eg for emails), it should also be split equally in terms of things like gender, age and geographical location. The more similar your audiences are, the more accurate your results will be – they won’t be skewed by something like a predominantly older or predominantly female audience.
- How big is your sample? – If you are testing an email, it is easy enough to determine your sample size. You could either send one version to half of your list and the other version to the other half, or run the test on just part of your list. Either way, you will know exactly how many people received each version and will be able to analyse your results accurately – a simple way to build the two groups is sketched below.
A web page is slightly trickier, as it doesn’t have a finite audience. In this case, you need to leave your test running for long enough to get sufficient views, so that you can tell whether there is a significant difference between the two versions.
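Returning to the email example, a simple way to build two equal, randomly assigned groups is to shuffle your list and split it down the middle. A minimal sketch (the subscriber list here is made up):

```python
import random

# Hypothetical mailing list of 5,000 subscribers
subscribers = [f"user{i}@example.com" for i in range(5000)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(subscribers)

half = len(subscribers) // 2
group_a, group_b = subscribers[:half], subscribers[half:]

print(len(group_a), len(group_b))  # 2500 2500 – two equal-sized samples
```

Shuffling before splitting matters: taking the first half of an unshuffled list could put all your oldest subscribers in one group, which would skew the results.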
- Only run one test at a time – For your results to be accurate, you need to be sure you are only running one test at a time. So make sure nobody else in your team is running tests at the same time as you!
For example, if you are running an A/B test on an email that leads to a landing page, at the same time as your colleague is split testing that landing page, you can’t be sure which change caused an increase in the leads.
- Use the right tool for the job – To run an A/B test on an email or on your website, you will need the right tool. There are a number of A/B testing tools available; a popular choice is Google Analytics’ Content Experiments, which lets you test up to 10 different versions of a web page and compare their performance using a random sample of visitors.
- Carry out your tests simultaneously – To make sure your testing is valid, you need to test both versions at exactly the same time. Whether it is the time of day, day of the week or month of the year, timing always has an effect on visitor behaviour. So if you ran test A on Wednesday and test B on Friday, you couldn’t be sure whether the differences were down to your design changes or the day of the week.
The only exception to the rule is, of course, if you are testing the timing itself. In that case, you would put out two identical emails, but on different days of the week or at different times of the day. The right time to send out emails can vary significantly between different industries and product types, so it always makes sense to find the optimal time for your business.
- Give the test time – You are running an A/B test because you want meaningful data that will tell you whether to make changes to your website or the emails you send. To be confident in the results, you need to run the test for long enough to gather a sufficiently large sample of your audience, which means you shouldn’t analyse the data too early.
How long is long enough varies for every business and every type of test. The results from testing an email should come through in days, if not hours, but testing a web page can take much longer. A key point to remember is that the more traffic you get, the quicker you will reach statistically significant results; if your business doesn’t get much traffic to your website, the results of your A/B testing will take much longer.
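As a rough guide to ‘long enough’, you can estimate how many visitors each version needs before a given uplift would register as statistically significant. A sketch using the standard two-proportion sample size formula (the 4% baseline and 5% target conversion rates are illustrative assumptions, and the scipy library is assumed to be available):

```python
from scipy.stats import norm

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a change from rate p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Illustrative: detecting a lift from a 4% to a 5% conversion rate
print(sample_size_per_variant(0.04, 0.05))  # roughly 6,700 visitors per version
```

At 500 visitors a day, that test would need around a month; at 50 a day, closer to a year – which is why low-traffic sites are often better off testing bigger, bolder changes.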
- Ask for feedback – A/B testing can show you whether a green button works better than a blue one, or whether a short headline is better than a long one, but it can’t tell you why users respond this way. To make your results even better, you could ask for feedback while you are running your testing to give you some qualitative data.
One way to do this is through a pop-up survey on your site, which comes up as a visitor is about to leave or appears on your thank you pages. Asking why they did or didn’t click on a button or fill out a form will give you more understanding of your visitors and the way they behave.
- Don’t lose sight of your goal – When testing is complete and it is time to do your analysis, you need to remember the goal you set right at the start of the process. So if your goal was to increase conversions, you shouldn’t get too bogged down in how many people opened your emails or how many people clicked on a link. In fact, your results may show that one version has lower clicks, but higher conversions.
- How significant are the results? – Once you have your results, you need to decide whether or not to act on them. To do that, decide how big the difference needs to be to justify choosing one version (and potentially having to make changes) over the other.
To make it easier, you might want to put your results in a simple table. There are only two possible outcomes (converted or did not convert) for each of your two versions, A and B.
If 5000 people received a version of your email – 2500 received each email – and 200 converted from version A, while 800 converted from version B, you would say that was significant. But if 460 converted from version A and 480 converted from version B, you would probably argue that the difference wasn’t significant enough to warrant making a change.
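To put numbers on ‘significant’, one option is a standard chi-square test on that 2×2 table of converted vs did-not-convert counts. A sketch using the example figures above (this assumes the scipy library; a two-proportion z-test would work just as well):

```python
from scipy.stats import chi2_contingency

def significance(conv_a, total_a, conv_b, total_b, alpha=0.05):
    """Chi-square test on a 2x2 table: converted vs did not convert."""
    table = [[conv_a, total_a - conv_a],
             [conv_b, total_b - conv_b]]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value, p_value < alpha

# The examples above: 2,500 recipients per version
print(significance(200, 2500, 800, 2500))  # tiny p-value – clearly significant
print(significance(460, 2500, 480, 2500))  # p around 0.5 – not significant
```

This matches the intuition above: 200 vs 800 conversions is unmistakable, while 460 vs 480 is well within the range you would expect from chance alone.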
- Take action based on the results – If one version is significantly better than the other, you have a winner. The conclusion to your testing should be to retire the losing variation and roll out the winner.
If neither version is statistically better, you have learned that the variable you tested doesn’t have a significant impact on your business and you can stick with your original version.
- Plan your next test – Your A/B test has given you results, but it is only one test on one aspect of your marketing. The next step is to start planning your next test, so you can keep optimising your web pages and emails.
If you tested your headline, you could move on to testing your images, colour scheme or copy. The more you test and amend, the more chances you have to increase your conversion rates, leads and sales.