
Wednesday 14 May 2014

Testing - which recipe got 197% uplift in conversion?

We've all seen them.  Analytics agencies and testing software providers alike use them: the headline that says, 'our customer achieved 197% conversion lift with our product'.  And with good reason.  After all, if your product can give a triple-digit lift in conversion, revenue or sales, then it's something to shout about and is a great place to start a marketing campaign.

Here are just a few quick examples:

Hyundai achieved a 62% lift in conversions by using multivariate testing with Visual Website Optimizer.

Maxymiser show how a client achieved a 23% increase in orders.

100 case studies, all showing great performance uplift.




It's great.  Yes, A/B testing can revolutionise your online performance and you can see amazing results.  There are only really two questions left to ask:  why and how?

Why did recipe B achieve a 197% lift in conversions compared to recipe A?  How much effort, thought and planning went into the test? How did you achieve the uplift?  Why did you measure that particular metric?  Why did you test on this page?  How did you choose which part of the page to test?  How many hours went into the planning for the test?

There is no denying that the final results make for great headlines, and we all like to read the case studies and play spot-the-difference between the winning recipe and the defeated control recipe, but it really isn't all about the new design.  It's about the behind-the-scenes work that went into the test: which page should be tested, how the design was put together, why those elements of the page were selected, and why the decision was taken to run the test at all.  Hours of planning, data analysis and hypothesis writing sit behind the good tests.  Or perhaps the testing team just got lucky?

How much of this amazing uplift was down to the tool, and how much of it was due to the planning that went into using the tool?  If your testing program isn't doing well, and your tests aren't showing positive results, then probably the last thing you need to look at is the tool you're using.  There are a number of other things to look at first (quality of hypothesis and quality of analysis come to mind as starting points).
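As a rough illustration of what 'quality of analysis' can mean in practice, here is a minimal sketch (my own, with invented visitor and conversion numbers) of a two-proportion z-test - the kind of sanity check worth running before crediting any tool with an uplift:

    # Sketch of a pooled two-proportion z-test: is recipe B's conversion
    # rate genuinely better than recipe A's, or could the uplift be noise?
    # All figures below are made up purely for illustration.
    from math import sqrt, erf

    def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
        """Return (relative uplift, z statistic, one-sided p-value)."""
        rate_a = conv_a / visitors_a
        rate_b = conv_b / visitors_b
        pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / se
        p_one_sided = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # P(Z > z) under H0
        uplift = (rate_b - rate_a) / rate_a
        return uplift, z, p_one_sided

    # Hypothetical example: 10,000 visitors per recipe
    uplift, z, p = two_proportion_z_test(conv_a=200, visitors_a=10_000,
                                         conv_b=240, visitors_b=10_000)
    print(f"Uplift: {uplift:.0%}, z = {z:.2f}, one-sided p = {p:.3f}")

In that made-up example the uplift is 20% with a one-sided p-value of roughly 0.03 - worth reporting, but a long way from a triple-digit headline, and meaningless without the planning and hypothesis that led to the test in the first place.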

Let me share a story from a different situation which has some interesting parallels.  There was considerable controversy around the Team GB Olympic cycling team's performance in 2012.  The GB cyclists achieved remarkable success, winning medals in almost all the events they entered.  This led to questions about the equipment they were using - the British press reported that other teams thought they were using 'magic' wheels.  Dave Brailsford, the GB cycling coach during the Olympics, even joked that some of the competitors were complaining about the British team's wheels being more round.

Image: BBC

However, Dave Brailsford had previously said (when reviewing the team's performance at the 2008 Olympics, four years earlier) that the team's success there was due to the "aggregation of marginal gains" in the design of the bikes and equipment - which is perhaps the most concise description of the role of the online testing manager.  To quote from the Team Sky website:


"The skinsuit did not win Cooke [GB cyclist] the gold medal. The tyres did not win her the gold medal. Nor did her cautious negotiation of the final corner. But taken together, alongside her training and racing programme, the support from her team-mates, and her attention to many other small details, it all added up to a significant advantage - a winning advantage."

It's not about wild new designs that are going to single-handedly produce 197% uplifts in performance; it's about the steady, methodical work of improving performance step by step by step, understanding what's working and what isn't, and then going on to build on those lessons.  As an aside, was the original design really so bad that it could be improved by 197% - and who approved it in the first place?
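To put that aside in concrete terms, a 197% uplift is a relative figure.  A quick back-of-the-envelope calculation (using a purely hypothetical 1% baseline) shows that the new conversion rate would have to be almost three times the old one:

    # A 197% "uplift" means the new rate is (1 + 1.97) times the old rate.
    # The baseline here is invented, purely to show the arithmetic.
    baseline_rate = 0.01          # hypothetical control: 1% conversion
    uplift = 1.97                 # the headline figure
    new_rate = baseline_rate * (1 + uplift)
    print(f"{baseline_rate:.2%} -> {new_rate:.2%}")   # 1.00% -> 2.97%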

It's certainly not about the testing tool that you're using, whether it's Maxymiser, Adobe's Test and Target, or Visual Website Optimizer, or even your own in-house solution.  I would be very wary of changing to a new tool just because the marketing blurb says you'll see a 197% lift in conversion simply by using it.

In conclusion, I can only point to this cartoon as a summary of what I've been saying.


