If you are trying to improve your ROI by fine-tuning conversion rates, be very cautious about your A/B testing strategy; that caution will pay dividends.
If you succumb to reading blog posts about how to … almost anything (and don't we all, from time to time), you may be led to believe that tinkering with low-grade A/B testing on small samples will transform your performance. Let me tell you from long experience: not only is this untrue, you are far more likely to do minor damage over the long term.
A/B testing is the most basic form of UX testing, though still very powerful in the right hands. Its benefit is that we learn what the customer actually did, as opposed to an opinion about what they would do. The latter is what we got when we relied on customer visits and online focus groups. Don't rule out those techniques either; they too are powerful in the right hands, but I have had users tell me a story of their experience that directly contradicted what my black-box tests had recorded just minutes earlier.
The biggest problem with A/B testing is sample size, followed closely by segmentation and sample make-up. Random A/B tests on unknown users tell you very little about the why, but the worst problem is the "flaw of averages", by which I mean the natural variation over time that occurs in any data set. Unless and until you know from experience what a safe sample size is for this purpose, the same thing will happen over and over: you initially see a very convincing anomaly, e.g. visitor after visitor lands on variant 6b and abandons the journey. You then spend time poring over heat maps and star-gazing, make a change, and bingo, they begin to convert. You pat yourself on the back, but three months later you find that the effect fizzled out within a week and things got even worse. Remarkably, the lines on your chart have a habit of converging if you wait a little while before acting.
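That convergence is just sampling noise shrinking as the sample grows. A rough sketch of why small samples mislead, assuming (hypothetically) that both variants truly convert at the same 5% rate:

```python
import math

def diff_std_error(p: float, n: int) -> float:
    """Standard error of the difference between two observed
    conversion rates when both arms truly convert at rate p
    and each arm has n visitors."""
    return math.sqrt(2 * p * (1 - p) / n)

p = 0.05  # hypothetical: both variants genuinely convert at 5%
for n in (100, 1_000, 10_000, 100_000):
    # roughly 95% of observed gaps fall within +/- 1.96 standard errors,
    # so anything inside this band is indistinguishable from pure noise
    print(f"n={n:>7,} per arm: gaps up to +/- {1.96 * diff_std_error(p, n):.2%} are noise")
```

At 100 visitors per arm, a gap of six percentage points (more than the entire baseline rate) can be nothing but chance; at 100,000 per arm, the noise band shrinks to a fraction of a point. The "convincing anomaly" often lives entirely inside that early band.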
Many marketers and UX specialists waste an enormous amount of time on such pursuits before they learn to wait for a bigger sample and a stronger signal, and to search for evidence that backs the hypothesis.
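One way to know in advance what "a bigger sample" actually means is a standard two-proportion power calculation. A minimal sketch, with hypothetical conversion figures (5% baseline, hoping to detect a lift to 6%):

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base: float, p_target: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in EACH arm of a two-proportion A/B test to
    detect a lift from p_base to p_target at the given two-sided
    significance level and statistical power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power=0.80
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_target - p_base) ** 2)

# hypothetical: detect a lift from 5% to 6% conversion
print(sample_size_per_arm(0.05, 0.06))  # → 8155 visitors per arm
```

Over 8,000 visitors per arm just to reliably spot a one-point lift is far more traffic than most "convincing anomalies" are ever built on, which is exactly the point.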
A much better approach, usable in many scenarios, is to consult your personas and your latest user-journey notes again, hazard a guess at which needs may be going unmet, and offer users three or more CTAs or landing pages, each addressing a slightly different scenario. Not only will you usually see some improvement in results, but you will learn a great deal from their choices, and you will alleviate the small-sample issue.