
Wednesday 16 July 2014

When to Quit Iterative Testing: Snakes and Ladders



I have blogged a few times about iterative testing, the process of using one test result to design a better test and then repeating the cycle of reviewing test data and improving the next test.  But there are instances when it's time to abandon iterative testing, and play analytical snakes and ladders instead.  Surely not?  Well, there are some situations where iterative testing is not the best tool (or not a suitable tool) to use in online optimisation, and it's time to look at other options.  I have identified three examples where iterative testing is totally unsuitable:

1.  You have optimised an area of the page so well that you're now seeing the law of diminishing returns - your online testing is showing smaller and smaller gains with each test, and you're reaching the top of the ladder.
2.  The business teams have identified another part of the page or site that is a higher priority than the area you're testing on.
3.  The design teams want to test something game-changing, which is completely new and innovative.

This is no bad thing.

After all, iterative testing is not the be-all and end-all of online optimisation.  There are other avenues that you need to explore, and I've mentioned previously the difference between iterative testing and creative testing.  I've also commented that fresh ideas from outside the testing program (typically from site managers who have sales targets to hit) are extremely valuable.  All you need to work out is how to integrate these new ideas into your overall testing strategy - unless, perhaps, your testing strategy is entirely focused on the future state (unlikely, but not impossible).  Sometimes, it seems, iterative testing is less about science and hypotheses, and more like a game of snakes and ladders.

Let's take a look at the three reasons I've identified for stopping iterative testing.

1.  It's quite possible that you reach the optimal size, colour or design for a component of the page.  You've followed your analysis step by step, as you would follow a trail of clues or footsteps, and it's led you to the top of a ladder (or a dead end), and you really can't imagine any way in which the page component could be any better.  You've tested banners, and you know that a picture of a man performs better than one of a woman, that the text should be green, that the call-to-action button should be orange and that the best wording is "Find out more."  But perhaps you've only tested having people in your banner - you've never tried showing just your product - and it's time to abandon iterative testing and leap into the unknown.  It's time to try a different ladder, even if it means sliding down a few snakes first.
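To make the "top of the ladder" point concrete, here's a minimal sketch in Python of how you might review the relative lift from each successive test and spot where the gains flatten out.  The conversion rates and the 2% threshold are entirely made up for illustration.

# Conversion rates recorded after each iterative test on the same banner
# (hypothetical figures), and the marginal lift each new winner delivered.
conversion_rates = [0.020, 0.026, 0.029, 0.0305, 0.0308]

for i in range(1, len(conversion_rates)):
    previous, current = conversion_rates[i - 1], conversion_rates[i]
    lift = (current - previous) / previous * 100
    flag = "  <- time to try a different ladder?" if lift < 2 else ""
    print(f"Test {i}: {previous:.2%} -> {current:.2%}  (+{lift:.1f}% relative lift){flag}")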

2.  The business want to change focus.  They have sales performance data, or sales targets, which focus on a particular part of the catalogue: men's running shoes, ladies' evening shoes or high-performance digital cameras.  Business requests can change far more quickly than test strategies, and you may find yourself playing catch-up if there's a new priority for the business.  Don't forget that it's the sales team who have to maintain the site, meet the targets and maximise their performance on a daily basis, and they will be looking for you to support them as much as to plan for the future state.  Where possible, transfer the lessons and general principles you've learned from previous tests to give yourself a head start in this new direction - it would be tragic to slide all the way down a snake and start right at the bottom of a new ladder.

3.  On occasion, the UX and design teams will want to try something futuristic that exploits the capabilities of new technology (such as Scene 7 integration, AJAX, a new API, XHTML... whatever).  If the executive in charge of online sales, design or marketing has identified or sponsored a brand new online technology that will probably revolutionise your site's performance, and he or she wants to test it, then it'll probably get fast-tracked through the testing process.  However, it's still essential to carry out due diligence in the testing process, to make sure you have a proper hypothesis and not a HIPPOthesis.  When you test the new functionality, you'll want to be able to demonstrate whether or not it's helped your website, and how and why.  You'll need a good hypothesis and the right KPIs in place.  Most importantly, if it doesn't do well, then everybody will want to know why, and they'll be looking to you for the answers.  If you're tracking the wrong metrics, you won't be able to answer the difficult questions.
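To show what "a good hypothesis and the right KPIs" might look like in practice, here is a small, self-contained sketch in Python of a two-proportion z-test on the primary KPI (say, completed purchases) at the end of the test.  The visitor and conversion counts are hypothetical, and in reality you'd lean on your testing tool or a statistics library rather than hand-rolling the maths.

from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rate of control (A) against challenger (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return p_a, p_b, z, p_value

# Hypothetical figures: 20,000 visitors per variation.
p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=20000, conv_b=535, n_b=20000)
print(f"Control {p_a:.2%} vs challenger {p_b:.2%}, z = {z:.2f}, p = {p:.3f}")

If the p-value comes out above your chosen threshold, the honest answer to "did it help?" is "we can't tell yet" - which is exactly the question you'll be asked when a fast-tracked, executive-sponsored test doesn't deliver.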

As an example, Nike have an online sports shoe customisation option - you can choose the colour and design for your sports shoes, using an online palette and so on.  I'm guessing that it went through various forms of testing (possibly even A/B testing) and that it was approved before launch.  But which metrics would they have monitored?  The number of visitors who tried it?  The number of shoes configured?  Or possibly the most important one - how many shoes were purchased?  Is it reasonable to assume that because it's worked for Nike, it will work for you, when you're looking to encourage users to select car trim colours, wheel styles, interior materials and so on?  Or are you creating something that adds to a user's workload and makes it less likely that they will actually complete the purchase?
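One way to keep that question honest is to treat those metrics as steps in a funnel rather than looking at any one of them in isolation.  Here is a small sketch, with entirely invented figures for a hypothetical configurator, showing how different the story looks depending on which number you report.

# Hypothetical funnel for a product configurator.
funnel = {
    "visited configurator": 10000,
    "configured a product": 3200,
    "added to basket": 900,
    "completed purchase": 410,
}

steps = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
    print(f"{name:22s} {count:6d}  step {count / prev_count:6.1%}  of visitors {count / steps[0][1]:6.1%}")

Thirty-two per cent of visitors "engaging" with the configurator sounds like a success; four per cent going on to buy may tell a very different story.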

So, be aware:  there are times when you're climbing the ladder of iterative testing that it may be more profitable to stop climbing, and try something completely different - even if it means landing on a snake!
