
Thursday 6 November 2014

Building Momentum in Online Testing - Key Takeaways

As I mentioned in my previous post, I was recently invited to speak at the eMetrics Summit in London, and judging by the discussions afterwards, the content was really useful to the attendees.  I'm glad people found it valuable, and here I'd like to share some of the key points that I raised (and some that I forgot to mention).
Image Credit: eMetrics Summit official photography
There are a large number of obstacles to building momentum with an optimisation program, but most of them can be grouped into one of these categories:

A.  Lack of development resource (HTML and JavaScript developers)
B.  Lack of management buy-in and access to resource
C.  Tests take too long to develop, run, or call a winner
D.  Tests keep losing (or, perversely, tests keep winning and the view is that "testing is completed")
E.  Lack of design resource (UXers or designers)

These issues can be addressed in a number of ways, and the general ideas I outlined were:

1.  If you need to improve your win rate, or if you don't have much development resource, re-use your existing mboxes and iterate.  You won't need to wait for IT deployments or for a developer to code new mboxes; you can use the existing ones again - test, learn, and test again.

2.  If you need to improve the impact of your tests (i.e. your tests are producing flat results, or the wins are very small) then make more dramatic changes to your test recipes and be more creative.  I commented that, generally speaking, the more differences there are between control and the test recipe, the greater the difference in performance (which may be positive or negative).  If you keep iterating and making small changes, you'll probably see smaller lifts or falls; if you take a leap into the unknown, you'll either fly or crash.

Remember not to throw out your analytics just because you're being creative - you'll still need to look at the analytics carefully, as always, along with any and all voice-of-customer (VOC) data you have.  The key difference is that you're testing bigger changes, more changes, or both - you shouldn't be trying new ideas just because they seem good (you'll still need a rationale for each recipe).

3.  If you need to get tests moving more quickly, then reduce the number of recipes per test.  More recipes mean more time to develop, more time to run (less traffic per recipe per day) and more time to analyse the results afterwards; the rough arithmetic sketched below shows how quickly the run time stretches.  Be selective - each recipe should address the original test hypothesis in a different way; you shouldn't keep adding recipe after recipe just because they look like good ideas.  Also, only test on pages where there's plenty of traffic or which are mission-critical (for example, cart pages or key landing pages).  As a bonus, if you work on optimising conversion or bounce rate for your PPC or display marketing traffic, you'll have an automatic champion in your online marketing department.
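To put some rough numbers on that, here's a back-of-the-envelope sketch of the traffic arithmetic.  The daily traffic, baseline conversion rate and detectable lift are illustrative assumptions, not figures from the talk - swap in your own numbers:

```python
# Rough illustration: how splitting the same traffic across more recipes
# stretches the run time of a test. All figures below are assumptions.
from math import ceil

daily_visitors = 20_000   # visitors entering the test page per day (assumed)
baseline_cr = 0.04        # control conversion rate (assumed)
min_lift = 0.10           # smallest relative lift worth detecting (assumed)

# Standard two-proportion sample-size approximation (95% confidence, 80% power).
z_alpha, z_beta = 1.96, 0.84
p1 = baseline_cr
p2 = baseline_cr * (1 + min_lift)
p_bar = (p1 + p2) / 2
n_per_recipe = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
                / (p2 - p1) ** 2)

for recipes in (2, 4, 8):  # total recipes, including control
    visitors_per_recipe_per_day = daily_visitors / recipes
    days = ceil(n_per_recipe / visitors_per_recipe_per_day)
    print(f"{recipes} recipes: ~{days} days to reach "
          f"{ceil(n_per_recipe):,} visitors per recipe")
```

With these assumed numbers, doubling the recipe count roughly doubles the run time - which is exactly the slow-down that kills momentum.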

Extra:  If you do decide to run with a large number of recipes, then monitor the recipes' performance more frequently.  As soon as you can identify a recipe which is clearly and significantly underperforming against control, switch it off; a sketch of one way to run that check follows below.  This has two benefits:  a) you drive a larger share of traffic through the remaining recipes, and b) you stop sending traffic through a low-converting (or low-performing) recipe that was costing the business money.
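For anyone who wants to formalise "clearly and significantly underperforming", here's a minimal sketch of one way to do it - a pooled two-proportion z-test with a deliberately conservative threshold, since this is an interim look rather than the final read-out.  The visitor and conversion figures are made up for illustration; in practice you'd pull them from your testing tool's report:

```python
# Minimal sketch: flag a recipe that is clearly underperforming control so it
# can be switched off early. All traffic/conversion numbers are illustrative.
from math import sqrt

def z_score(conv_c, n_c, conv_r, n_r):
    """Pooled two-proportion z-test: a negative z means the recipe trails control."""
    p_c, p_r = conv_c / n_c, conv_r / n_r
    p_pool = (conv_c + conv_r) / (n_c + n_r)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_r))
    return (p_r - p_c) / se

# Control and three recipes as (visitors, conversions) - assumed interim figures.
control = (12_000, 480)
recipes = {"recipe_A": (12_000, 505),
           "recipe_B": (12_000, 470),
           "recipe_C": (12_000, 380)}

for name, (n_r, conv_r) in recipes.items():
    z = z_score(control[1], control[0], conv_r, n_r)
    # -2.33 is roughly 99% one-sided confidence; deliberately conservative
    # because repeated interim checks make false alarms more likely.
    verdict = "switch off" if z < -2.33 else "keep running"
    print(f"{name}: z = {z:+.2f} -> {verdict}")
```

In this made-up example only recipe_C falls far enough behind control to justify switching it off early; the others keep running until the test reaches its planned sample size.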

4.  Getting management buy-in and support on an ongoing basis:  this is not easy, especially when analysts are, stereotypically, numbers-people rather than people-people.  We find it easier to work with numbers than with people, since numbers are clear-cut and well-defined, and people can be... well... messy and unpredictable.  Brooks Bell have recently published a blog post about five ways to manage up, which I recommend.  The main recommendation is to get out there and share.  Share your winners (pleasant) and your losers (unpleasant), but also explain why you think a test is winning or losing.  This kind of discussion leads naturally on to, "Well, it lost because this component was too big/too small/in the wrong place," and starts to inform your next test.

I also talked through my ideas on what makes a good test idea, and what makes for a bad test idea; here's the diagram I shared on 'good test ideas'.

In this diagram, the top circle defines what your customers want, based on your analysis; the lower left circle defines your coding capabilities and the lower right defines ideas that are aligned with your company brand and which are supported by your management team.

So where are the good test ideas?  You might think that they are in segment D.  In fact, these are recommendations for immediate action.  The best test ideas are close to segment D, but not actually in it; the areas around segment D - where two of the three circles intersect and the third is nearly aligned too - are the best places to look.  For example, in segment F, we have ideas that the developers can produce, and which management are aligned with, but where there is some doubt about whether they will help the customer experience.  Here, the idea may be a new way of customising or personalising your product in your order process - upgrading the warranty or guarantee, adding a larger battery or a special waterproof coating (whatever your product may be).  This may work well on your site, but it may also be too complex.  Your customer experience data may show that users want more options for customising and configuring their purchase - but is this the best way to do it?  Let's test!


I also briefly covered bad test ideas - things that should not be tested.  There's a short list:

Don't test obvious improvements such as bug fixes, broken links, broken image links, or spelling and grammar corrections.  There's no point - it's a clear winner.

Don't test fixes for historic bugs in your page templates - for example, where you're integrating newer designs or elements (such as product videos) that weren't catered for when the layout was originally built.  The alignment of the elements on the page is a little off; things don't fit or line up vertically or horizontally.  These can be improved with a test, but really, this isn't addressing the main issue, which is that the page needs fixing.  The test will show the financial upside of making the fix (and this would be the only valid case for running the test), but the bottom line is that a test will only prove what you already know.

I wrapped up my keynote by mentioning the need to select your KPIs for the test, and for that, I have to confess that I borrowed from a blog post I wrote earlier this year, which used a sporting example to explain metrics.
Presenting the "metrics in sport" slide, Image Credit: Aurelie Pols
I'm already looking forward to the next conference, which will probably be in 2015!
