Introduction to CRO: The Only Testing Process You Need To Know

 


Fun fact: You could spend a lifetime running a CRO program before even touching your website.

Conversion rate optimisation (CRO) is the act of measuring a person’s journey through a system and finding speed bumps or areas for optimisation in order to improve that system. A healthy CRO program should improve usability and reduce friction for your audience. 

Making improvements can lower costs, grow your customer base and increase revenue. Seems pretty simple, right? It doesn't stop there: CRO can and should be conducted at every stage of the customer journey. You can map your customer's journey with our free course here. 

Broadly speaking, conversion rate optimisation can be broken down into a few key phases: Discovery, Analyse, Experiment, and Review & Reset. 

 


Discovery 

Every good project starts with a discovery phase. CRO is no exception! 

This is your opportunity to gather as much data as you can. Qualitative or quantitative, the more the merrier. It's essential to gather as much information as possible about the systems you are trying to improve in order to find measurable ways to improve them. You can trawl back through your existing data, or put the wheels in motion to collect new data. What's crucial is that the data you collect is clean, accessible, trustworthy and easy to share. 

In this phase you will need to: 

  • Get a shared understanding of the goals of the business. 
  • Ensure the data you collect will be in line with these goals. 
  • Consider the types of data you will need. Should you run a survey? Are you collecting behaviour analytics? Can you gather a focus group? 
  • Consider the types of tests you are capable of conducting (more on this below). 
  • Agree on a timeframe for the project. 
  • Ensure there is leadership buy-in and transparency with key stakeholders. 
  • Understand which metrics you are trying to influence. Are you measuring them correctly? 
  • Identify 1-3 Key Performance Indicators (KPIs) to focus on as outcome measures. 

 


Analyse 

Now that you've gathered enough data for a near-forensic view of your system, it's time to weave that data together with your business goals and, most importantly of all, some critical thinking. 

The first step in the analysis phase is to understand the viability of a project. Asking questions about the priority, importance and ease of executing the project will strengthen your business case and any justification for taking it on. 

You should ask yourself: 

  • Is this a problem worth solving? 
  • Why is it worth solving? 
  • What impact will solving this have on the business? Read more about the downfalls of A/B testing here.
  • How will I know when the problem is solved? 
  • What resources will be required to do this correctly? Do we have them? 

As an example, let’s say you’re tasked with reducing the number of pages in your checkout. You may have a checkout flow similar to this: 

Cart > Cart confirmation with ‘Proceed to Payment’ CTA > Delivery details page > Payment details page > Account creation page with ‘Place Order’ CTA > Order confirmation page 

Obviously, that’s way too many steps for a pleasant customer experience – but are you sure? What data have you gathered in order to prove that? Can you justify changing or testing a change? Are you able to predict a value for the business if the problem is solved? 

Now that we've gained some visibility, analysed our data and had a chance to critically unpack the issues in front of us, it's time to set a hypothesis. Your hypothesis is vital to the success of your CRO project; it's the glue that will hold the project together. A well-constructed hypothesis will allow you to understand definitively whether or not your experiment has worked. 

It can be as simple as: 

 

💡 I believe that [a change], will result in [an effect] because [rationale] 

We’ll use the scenario above as an example: I believe that [consolidating the delivery details, payment details and account creation pages in our checkout] will result in [a minimum 7.5% uptick in total conversions over 8 months] because [our data shows that by the time visitors hit the account creation page, on average 24% have bounced.] 

For some extra brownie points, you may also want to take the rationale a bit further to include your business case. Such as: [Out of 1000 transactions over the last 8 months with an AOV of $90.00, this has the potential to provide an additional $6,750 in revenue to the business.] 
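If you'd like to sanity-check that arithmetic before it goes into a slide deck, the calculation is easy to express in a few lines. The sketch below simply reproduces the example figures above (1,000 transactions, $90 AOV, 7.5% uplift); the function name and inputs are ours for illustration, not part of any tool.

```typescript
// Sketch only: reproduces the back-of-envelope numbers from the example above.
// The figures (1,000 transactions, $90 AOV, 7.5% uplift) come from this
// article's scenario, not from any real dataset.

function estimateExtraRevenue(
  baselineTransactions: number, // transactions observed over the test window
  averageOrderValue: number,    // AOV in dollars
  expectedUplift: number        // e.g. 0.075 for a 7.5% uplift in conversions
): number {
  const extraTransactions = baselineTransactions * expectedUplift;
  return extraTransactions * averageOrderValue;
}

// 1,000 transactions * 7.5% * $90 AOV = $6,750 of potential additional revenue
console.log(estimateExtraRevenue(1000, 90, 0.075)); // 6750
```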

Remember, your hypothesis should be time-bound and as specific as you can make it! 

 


Experiment 

The experiment phase is where we put the wheels of our hypothesis in motion. It's also important to understand the type of test we will be running. Best practice is to start with a split A/B test, especially if you're just getting started with CRO. Once you have a good grip on the process, you can explore other types of testing (a minimal variant-assignment sketch follows the list below). Some of these may include: 

  • A/B testing | The act of creating a split test between (typically) 2 different variations of your system in order to measure which works best. Read our complete guide on A/B testing here. 
  • A/B/n testing | The act of creating 3 or more variations of a system while keeping one variation as a ‘control’. The ‘n’ refers to any number of variations to the test. 
  • Multivariate testing | Think A/B testing on steroids! It's the act of testing multiple variations against each other in order to understand how they interact. Note: most people are not ready for this; you need a lot of data, skills and time for successful multivariate testing. 
  • User testing | The act of gathering a group of real-world humans (whaaaat!? I know!) to test a system and report back, or to answer a survey in line with the data you wish to collect.
  • Common sense testing | Just had to throw this one in for a stab. A fantastic experiment to conduct is to break your bias and assess things critically. Test your common sense… Is the current way really the best way? Is the system best practice? What is the system missing? 
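To make the difference between A/B and A/B/n a little more concrete, here's the minimal variant-assignment sketch promised above. In practice your testing tool handles the bucketing for you; the visitor ID, variant names and hash function below are purely illustrative.

```typescript
// Sketch of deterministic variant assignment for an A/B/n test.
// Most testing tools handle this for you; this just illustrates the idea that
// the same visitor should always land in the same variation.

function assignVariant(visitorId: string, variants: string[]): string {
  // Simple string hash (djb2 variant). Good enough for illustration only.
  let hash = 5381;
  for (const char of visitorId) {
    hash = ((hash * 33) ^ char.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

// "control" is the unchanged experience; "B" and "C" are the extra variations.
const variant = assignVariant("visitor-12345", ["control", "B", "C"]);
console.log(variant); // always the same bucket for visitor-12345
```

The property that matters is consistency: a visitor who returns should see the same variation every time, so your measurements aren't muddied by people bouncing between experiences.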

Essential testing tools 

Whichever testing method you choose, you’ll need a way to conduct and monitor it. There is an ever-growing number of tools out there for conversion rate optimisation, but here are just a few we find essential: 

  • Google Analytics (or Adobe Analytics) | You may have already used it in your initial data-gathering phase; your analytics platform will be essential to measuring the outcomes of your test. 
  • Hotjar | An exceptional product. Hotjar is a free behavioural analytics tool. It provides click maps, heat maps and session recordings, giving you insight into how people are interacting with your web property. 
  • Google Optimize | Conduct tests across your site without even touching your CMS! Google Optimize allows you to split your existing properties and make changes in its platform – don't forget to set it up properly; the anti-flicker snippet is essential. 
  • Miro | The ultimate virtual whiteboard. For wireframing your test or simply mapping the steps or features you want to change. Great for planning! 
  • Google Tag Manager | Having a good grasp of Google Tag Manager will really help if you feel like certain pieces of data are missing. You can set up custom events to feed your analytics platform in order to track things like button clicks, time on page, scroll depth and so much more (see the sketch after this list). Your options are nearly endless! 
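To illustrate the custom-event idea in the Google Tag Manager point above, here's a small sketch of a button click being pushed into the data layer. The element ID, event name and parameter are invented for the example; the trigger and tag that pick the event up still need to be configured inside Tag Manager and your analytics platform.

```typescript
// Sketch: pushing a custom event into the Google Tag Manager data layer so a
// button click can be used as a trigger and forwarded to your analytics tool.
// The element ID, event name and parameter below are examples, not a standard.

declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

document.querySelector("#place-order")?.addEventListener("click", () => {
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: "checkout_cta_click", // trigger name you would configure in GTM
    ctaLabel: "Place Order",     // extra context for your analytics platform
  });
});

export {};
```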

So, you've gathered your data, you understand the problem, you've constructed a hypothesis, you've put forward a business case, you've picked the type of test you'll be running and you know the tools you'll be using to monitor the results… Is there anything we've missed? No, seriously, if there's a plot hole here we want to know, so contact us… No? Time to launch and monitor. 

 


Review & Reset 

Now that the test is complete, we have the ability to look back at it retrospectively. 

  • Was the test successful? 
  • Was it the right test for our hypothesis? 
  • How did it impact the KPIs we focused on throughout the planning and execution phases?
  • How accurate were the predictions we made in our business case? 
  • Are the results of the test statistically significant? 

It's important that the results of your test show statistical significance. I'm not a statistician, and I'm sure you aren't either, but in order to truly understand whether your test was successful (and not a random result) you need to understand what statistical significance is. Simply put, it's a measure of confidence that the difference between two or more results did not occur by chance. So, looking at the outcomes of your test: how do your baseline or control metrics look? Were your test metrics higher or lower? By how much? Are you comfortable saying the difference did not happen by chance? 
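If you'd like to see what that confidence check can look like in practice, below is a minimal sketch of a two-proportion z-test, one common way of putting a p-value on a split test. Your testing platform or a dedicated calculator will usually do this for you, and the traffic and conversion numbers in the usage example are invented.

```typescript
// Sketch of a two-proportion z-test: did the variation's conversion rate
// differ from the control's by more than chance alone would explain?
// Sample sizes and conversion counts below are invented for illustration.

function normalCdf(z: number): number {
  // Abramowitz & Stegun approximation of the standard normal CDF.
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989422804014327 * Math.exp((-z * z) / 2);
  const tail =
    d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return z > 0 ? 1 - tail : tail;
}

function twoProportionPValue(
  controlConversions: number, controlVisitors: number,
  variantConversions: number, variantVisitors: number
): number {
  const p1 = controlConversions / controlVisitors;
  const p2 = variantConversions / variantVisitors;
  const pooled =
    (controlConversions + variantConversions) / (controlVisitors + variantVisitors);
  const standardError =
    Math.sqrt(pooled * (1 - pooled) * (1 / controlVisitors + 1 / variantVisitors));
  const z = (p2 - p1) / standardError;
  return 2 * (1 - normalCdf(Math.abs(z))); // two-sided p-value
}

// e.g. control: 180 of 4,000 visitors converted; variant: 230 of 4,000
const pValue = twoProportionPValue(180, 4000, 230, 4000);
console.log(pValue < 0.05 ? "likely not chance" : "could be chance", pValue.toFixed(4));
```

As a rule of thumb, a p-value below 0.05 is commonly treated as statistically significant, but agree the threshold with your stakeholders before you run the test, not after.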

Now that you have the results of your test, have properly analysed them for statistical significance and have had a chance to grab a cuppa, it's time to either deploy the changes you made as part of the test conditions or go back to the top of this article and start again. 

And, now that your test is complete, you get the best part of the job – sharing the results! Yaaay! While you're at it, share the results with us. Honestly, we're always keen to see a test result that isn't COVID-related. 
