Why A/B tests suck and what you can do instead (Insights > statistics)

The most common advice when evaluating a new marketing idea (a new page variant or a new pricing table) is: “you should test that”.


Sounds good… but how?


Most people think of running an A/B test.


In this article, you’ll learn why A/B testing is a terrible idea for most businesses (OK, usually not terrible, but far from optimal).


I’ll also tell you what tests you can do instead that will work 10x or 100x better than A/B testing.

1. Why A/B testing sucks

A/B tests can be great (sometimes).

As a Maths and Statistics major, I love it when you can support a hypothesis with beautiful, cold, hard data (for example, being 97.9% sure that variant B is the winner).
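A quick aside: a figure like “97.9% sure that B wins” usually comes from a Bayesian comparison of the two variants’ conversion rates. Here’s a minimal sketch of that calculation, with made-up conversion counts and uniform priors:

```python
# Minimal sketch (hypothetical numbers): estimating "how sure can we be
# that variant B beats variant A?" With uniform Beta(1, 1) priors, each
# variant's conversion rate has a Beta(1 + conversions, 1 + misses)
# posterior; we sample both and count how often B comes out on top.
import numpy as np

rng = np.random.default_rng(42)

visitors_a, conversions_a = 1000, 100   # variant A: 10.0% observed
visitors_b, conversions_b = 1000, 130   # variant B: 13.0% observed

samples_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, 100_000)
samples_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, 100_000)

print(f"P(variant B beats variant A) ≈ {(samples_b > samples_a).mean():.1%}")
```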


However…

a) It’s hard to run an A/B test properly

You need strict conditions to make sure your A/B test is reliable:

• At least 100 conversions per variant (sometimes fewer; you can check how sure you can be with a statistical-significance calculator)
• Homogeneity of traffic (the traffic has to come from the same source)

What does this mean for you?

If you’re running traffic to a product or service page, you’ll need at least 200 conversions* to do one simple test. (*the exact number varies; use a statistical-significance calculator to check)
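If you want to sanity-check significance yourself rather than trust a black-box calculator, a two-proportion z-test is one common way those calculators work under the hood. A minimal sketch with hypothetical conversion counts:

```python
# Minimal sketch: two-proportion z-test on hypothetical A/B conversion
# counts. A small p-value (conventionally < 0.05) suggests the difference
# between the variants is unlikely to be pure chance.
from math import sqrt
from scipy.stats import norm

visitors_a, conversions_a = 1000, 100   # control
visitors_b, conversions_b = 1000, 130   # variant

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))           # two-sided

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```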


You also need the same “type” of traffic. If you’re using a tool that splits traffic between two variants, you need to make sure visitors are coming from the same source.

Can you use the same ad sets in the same (ideally narrow) time period, with the same targeting and retargeting switched off? Usually, it’s difficult to get enough “clean” conversions.


Also, you cannot rely on historical data and compare sales from two different periods: your conversion percentage could be affected by weather, a new competitor on the market, or a pandemic (!) (or one of a million other reasons).


In an ideal world, you should be taking into account only the end goal (most often sales), not partial conversions like sign-ups or inbound calls.


Principle: “Messing around with the parts of your funnel won’t necessarily improve the system as a whole (and may make it worse)”. This is the classic lesson of systems thinking vs. linear cause-effect thinking.


When do A/B tests make sense?

A/B tests are PHENOMENAL for any type of business where you can get enough end-conversions (sales, not sign-ups or leads):

- e-commerce
- some affiliate sites
- low-ticket info-products


Still, qualitative tests (insights) will get you a better bang for the buck than A/B tests (statistics), more on that later.

b) An A/B test reveals only 1 piece of data

Let’s assume you can perform a precise A/B test.


Then let’s assume your result is statistically significant and the variant beat the control (which most often is not the case).


This experiment gives you only one piece of binary information: you know your new version works better, but you don’t know why.

Your A/B test will never tell you if:

- specific parts of your copy don’t resonate with people
- there are usability bugs on some devices
- you’re not answering an objection that most people have
- your guarantee is not strong enough
- a pop-up shows up in the wrong place


Statistical data don’t hold any of the qualitative (“why”) information that you need - only a yes/no answer to a narrow experiment.

2. What you can do instead: qualitative testing (getting insights)

Source: https://uxdesign.cc/a-crash-course-in-ux-design-research

The best you’ll get from an A/B test is an answer to a single “yes/no” question.


With qualitative tests, you can uncover many “whys”, which will give you 100x more insight than an A/B test.


Quoting Nielsen Norman Group (a leading UX consultancy):

“There are 2 main types of user research: quantitative (statistics) and qualitative (insights). Quant has quaint advantages, but qualitative delivers the best results for the least money. Furthermore, quantitative studies are often too narrow to be useful and are sometimes directly misleading.”

You get insights into your customers’ objections, problems, and aspirations. You’ll find out which elements on your page help (or cripple) your conversions, and why. You’ll find out what’s broken, missing, or redundant.


With qualitative testing, you can optimize for a much greater maximum than with A/B testing alone (see the example in section 4).

3. How do you conduct qualitative tests?

Qualitative testing sounds complicated but can be easy.


Be mindful that visitors to your page are at different stages of awareness (of their problem, of possible solutions, and of your product).


That’s why it makes sense to involve both users who don’t know your product and customers who know you well. Each group will give you a different kind of insight.


Qualitative testing can be as simple as showing your page to people and getting their feedback.


My trick is going to a coworking space in the morning and getting random people to comment on my landing page in exchange for a free coffee. You explain a bit of context, show them the offer, and then don’t say anything.


Let them speak out loud, scroll around the page, see where they stop and get engaged, see where they would drop out.


Five coffees and 90 minutes later, you’ll have identified the strong and weak parts of your copy, the impression your design makes, and the causes of 80% of your website’s usability issues!

Another trick: getting my flatmates (who are also marketers) to rip apart a landing page for one of my own projects over lunch. By the end of the meal, I’ll have identified the 20% of elements that cause 80% of the confusion.

4. Still want to run an A/B test? Combine both methods for maximum lifts in conversions

Qualitative and quantitative methods are not opposites; they cover different parts of the spectrum. Use both to get insights plus scientific proof that your new variant actually wins.

Example: a dog food e-commerce company invested in CRO (Conversion Rate Optimization) to improve conversions. Instead of testing random ideas like a different button color, they started with qualitative testing to gather insights.


Turns out, dog owners really don’t want to run out of food for their beloved pets, so timing is crucial to the purchasing decision. The e-commerce company added information about exactly when the package would arrive (“at your door before Wednesday, January 11”), so that buyers don’t need to worry about getting the food on time.


This minor change lifted conversions by 29% (proven by an A/B test).
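As a sense check on the traffic this kind of proof requires, here’s a minimal sketch (with an assumed, purely hypothetical 3% baseline conversion rate) of the standard two-proportion sample-size formula for detecting a 29% relative lift at the usual 5% significance level and 80% power:

```python
# Minimal sketch: visitors needed per variant to detect a 29% relative
# lift from a hypothetical 3% baseline, using the standard two-proportion
# sample-size formula (5% two-sided significance, 80% power).
from scipy.stats import norm

p1 = 0.03                 # assumed baseline conversion rate (hypothetical)
p2 = p1 * 1.29            # 29% relative lift
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
print(f"≈ {round(n):,} visitors per variant")   # roughly 6,900 here
```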

5. Summary

Most marketers have an analytical approach, so they think of A/B tests as their default method of testing.


Qualitative testing will get you more bang for the buck because you’ll get INSIGHTS instead of just a single “yes/no” answer.
