When I think about what we do here at Analytics Pros, narrowing the distance between our clients and their customers, I get especially excited by the topic of A/B and Multivariate testing. Why? A robust testing program closes the loop between site design and user experience, allowing us to gain insight into the gears that are turning within the user’s mind as they find their way to a conversion.
A well-measured website is a massive source of information about real people doing real things while they interact with some awesome brands. Testing provides a uniquely powerful platform for gaining insight about those interactions. As powerful as testing can be, the process of finding those insights can be equally daunting.
Here are some tips to help you along the way…
Quality By Design
When setting up a testing program it’s important to have a clear understanding of what your goals are. Start by defining what it means for a user to convert into a customer on your site. This can be as simple to define as users who’ve made a purchase on an ecommerce site, or as difficult to pin down as users who’ve crossed a threshold of engagement on a community site. Ask yourself:
- How do I define conversion?
- What actions must my users take to perform a conversion?
- Are there any required steps common to all users that must happen before conversion takes place?
- Is the conversion path rigid (each step must happen in a particular order) or fluid (steps can happen in any order)?
I love to map these things out in a flowchart. It’s a great way to represent process data and capture the relationships between lots of moving pieces.
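If it helps to make the flowchart concrete, the rigid-versus-fluid distinction can also be sketched in a few lines of code. The step names below are hypothetical examples: a rigid path is an in-order subsequence check against a user’s event stream, while a fluid path only asks whether every required step happened at some point.

```python
# Hypothetical required steps for an ecommerce conversion path.
REQUIRED_STEPS = ["view_product", "add_to_cart", "checkout", "purchase"]

def converted_rigid(events):
    """True if the required steps appear in order (other events may interleave)."""
    it = iter(events)
    # Each `in` check advances the iterator, so order is enforced.
    return all(step in it for step in REQUIRED_STEPS)

def converted_fluid(events):
    """True if every required step occurred, in any order."""
    return set(REQUIRED_STEPS) <= set(events)
```

For example, a user whose events arrive as `["add_to_cart", "view_product", "checkout", "purchase"]` would count as converted on a fluid path but not on a rigid one.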
What Is In A Test
Now that you’ve mapped out your conversion paths, the real fun begins (though, if you are a geek like me, you got a real kick out of that flowchart). Leveraging your existing knowledge of the site and your users, begin to formulate a plan for how you will test each step in the conversion process. Are there any bottlenecks or pain points that, if optimized, would lift the overall conversion rate? Test those steps first.
When you are developing your experimental design, think about the inputs and outputs of your experiment (known as factors and responses in formal experimental design). The inputs to your design (factors) are the changes or variations you will be making in the test. Factors can be small in scope (the color of a button, the text of a message) or extremely large in scope (like when you’re testing an entirely new site design).
The outputs of your test (responses) are the results you will be measuring across all test variations. In many cases this will be the final conversion you are attempting to optimize, but don’t narrow your focus too much. Think about what other parallel interactions your test variation could impact and include those in your goals as well.
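One way to see how factors translate into test variations is to enumerate them. The sketch below (with made-up factor names and levels) builds every combination of factor levels, which is what a full-factorial multivariate test would serve:

```python
from itertools import product

# Hypothetical factors: each key is a factor, each list its levels.
factors = {
    "button_color": ["green", "orange"],
    "headline": ["Save now", "Free shipping"],
}

# Every combination of factor levels is one test cell (a full factorial design).
cells = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Two factors with two levels each yield four cells; adding a third two-level factor doubles that to eight, which is why multivariate tests need substantially more traffic than a simple A/B test.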
Make It Manageable
The more effort you spend on testing, the more complex the whole program can become. Simplify things a bit by creating a rhythm to your testing efforts.
- Design: While one test is up and running, use that time to start building out the next test in your plan.
- Launch: Once your previous test is complete, start up the test you just finished building.
- Analyze: Run through your analysis of the first test while your new test is running. Tip: Go beyond percent increase and look at the behavior of users exposed to test variations across all of the dimensions/metrics you are measuring.
- Repeat: Now that your new test is running and you’ve gleaned insight from your last test – begin the whole process over again.
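As a starting point for going beyond raw percent increase, a two-proportion z-test tells you whether an observed lift is likely to be more than noise. This is a minimal standard-library sketch, not tied to any particular testing tool, and the counts in the example are invented:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Compare variant B against control A.

    conv_* = number of conversions, n_* = number of users exposed.
    Returns (relative lift of B over A, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf, so no external stats library is needed.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# e.g. 100/1000 conversions on control vs. 130/1000 on the variant:
lift, p = two_proportion_z(100, 1000, 130, 1000)
```

A 30% lift on these (made-up) numbers comes out statistically significant at the usual 0.05 level; the same lift on a tenth of the traffic would not, which is exactly why percent increase alone can mislead.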
We’ve only scratched the surface of the points to consider when developing a robust testing program, but hopefully these steps can help you go from drowning in the complexity of A/B and Multivariate testing to swimming in insights.