In this guide, I will walk you through the conversion optimization process I have used to get results at the company I work for, and how you can use it yourself. I will describe all the steps and the tools I use to execute them. Finally, I will share with you the spreadsheet I use to facilitate the whole process so you can get started right away.
Increasing the performance of your website can be extremely daunting because of the number of aspects involved. Plenty of tools claim you can easily increase your website's revenue just by using them.
Articles online tell you best practices but rarely touch upon the most important aspect. You need to have a process to be able to grow your business continually. The process will keep you heading in the right direction and give you the highest chance of reaching significant results.
Step #1: Collecting issues and insights
The first step is collecting issues and insights with research methods. We want to be able to choose one of these issues or insights to base our whole process on. This is important because this will keep you focused on solving actual problems instead of just following best practices.
If you want to collect issues and insights, you can follow my guide about 8 different UX research methods to find them. It also describes what kinds of research methods exist and which one you should execute first.
Before you start optimizing make sure you get a big list of issues and insights to plan your tests on. It’s also a good idea to collect the things you already know from your research in a single spreadsheet so you can prioritize them.
I log these issues and insights in my conversion optimization process spreadsheet:
The way I log these issues and insights is based on CXL's guide. First, I describe the issue or insight with a title. Then I put every issue and insight into its own bucket.
These buckets are pre-defined in the spreadsheet, and their definitions are also described in CXL's guide. They give every issue and insight a type to act on. Then you describe the page type where the problem exists so you can use that for pre-testing at a later stage.
Next, you describe the issue or insight in a bit more detail, along with what the next action should be. The rating I use to prioritize them is based on potential impact, ease of development, and expert opinion. Keeping these factors in mind, I rate the issues and insights from 1 to 5 stars to surface the best ones.
I don't work for the biggest company, so I have the privilege of choosing everything on my own. If you work with a team, you can add the person responsible for testing solutions for a specific issue or insight. Also, make sure you describe the research method you used for collecting the issue or insight and link to the source. I use the number field to reference the comments that surfaced that specific problem.
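The star rating doesn't need to be fancy. As an illustration (not the spreadsheet's actual formula), you could simply average the three 1-to-5 scores:

```python
def star_rating(impact, ease, opinion):
    """Illustrative prioritization: average three 1-5 scores
    (potential impact, ease of development, expert opinion)
    and round to whole stars."""
    return round((impact + ease + opinion) / 3)

# A high-impact, easy-to-build issue floats to the top of the list.
star_rating(5, 4, 3)  # 4 stars
```

Whatever scheme you choose, the point is that the rating is applied consistently across all issues, so sorting by stars gives you a defensible order of work.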
Step #2: Choosing the issue or insight
Now that you have a big list of insights and issues you can choose the one that you are going to solve. Choose the one with the most stars and that you think will have the most impact at this moment in time.
Step #3: Writing a hypothesis
The issue or insight you have chosen will need a testing hypothesis to be “solved”. I use a standard layout for writing them that I learned from CXL.
We believe that doing [A] for people [B] will make the outcome [C] happen.
The hypothesis is a potential solution to your problem and will turn out true or false depending on the test results. I log and prioritize these hypotheses in a separate spreadsheet tab. You can write multiple hypotheses for the same problem, or add more based on different issues and insights. The one with the highest score will be the hypothesis you take to the next step.
I score these hypotheses based on the PXL framework to increase the effectiveness of what I am going to test first.
Step #4: Pre-testing your hypothesis
Now that you have a list of prioritized hypotheses you know exactly what the next thing is you are going to test. Before you start testing it’s important to pre-test your hypothesis. Pre-testing is just running the numbers to find out what kind of effect the test needs to have to reach a significant result and how long you will probably need to run your test. This all depends on the sample size and the risk factors you are willing to take.
This process helps you assess whether the test is even worth developing, or whether you should move on to a new test.
To run a pre-test you can use the AB Test calculator from CXL which is designed to answer all the questions you need before pre-testing. Go to the calculator and click the pre-test analysis tab.
You will need to add in your weekly traffic, weekly conversions, and the number of variants you are going to test. This allows the calculator to show the minimal detectable effect for every week of testing with your traffic.
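To build intuition for what such a calculator is doing under the hood, here is a rough, stdlib-only sketch that scans for the smallest relative lift detectable with your traffic, using the standard two-proportion sample-size formula at 95% confidence and 80% power. The CXL calculator's exact method may differ, and the traffic numbers below are made up:

```python
import math

def required_n_per_variant(p1, lift, z_alpha=1.96, z_beta=0.84):
    """Sample size per variant for a two-proportion z-test
    (normal approximation), given baseline conversion rate p1
    and a relative lift. z values: 95% two-sided, 80% power."""
    p2 = p1 * (1 + lift)  # assumes p2 stays well below 1
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

def minimal_detectable_lift(weekly_traffic, weekly_conversions,
                            weeks, variants=2):
    """Smallest relative lift detectable with the available traffic,
    found by scanning candidate lifts in 0.1% steps."""
    p1 = weekly_conversions / weekly_traffic
    n_available = weekly_traffic * weeks / variants
    lift = 0.001
    while required_n_per_variant(p1, lift) > n_available:
        lift += 0.001
    return lift

# Hypothetical page: 10,000 sessions and 300 conversions per week.
mde_4_weeks = minimal_detectable_lift(10_000, 300, weeks=4)
```

Running the sketch with these dummy numbers lands in the 15–20% range, which also shows why more weeks of traffic shrink the effect you need to detect.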
Make sure the traffic numbers you insert into the calculator match the type of page you are going to analyze, otherwise the pre-test won’t be correct. Let’s say you are testing the shopping basket for example. You need to know how many sessions or users you have every week that enter the page. There are many ways of doing this and it doesn’t have to be perfect. As long as it gets you a rough estimate of what you can expect from your test.
I like to use Google Analytics' goal funnels and count the sessions that way. Goal funnels cannot be applied to historical data, so if you don't have them set up yet, I recommend doing so now so you can pre-test your next test. You can use any other funnel tool that counts the number of sessions entering the page. Learn how to set up goal funnels here.
An alternative would be looking at the number of unique pageviews on the page and entering that as the weekly traffic. These might not be sessions but you will still get a rough estimate of the traffic that is needed for your test.
Going to the All Pages report and filtering the page based on its URL can give you this data.
Another alternative would be creating a segment that you can use in the channel report where you can view sessions and other data.
You can create segments on the report tab or in the Google Analytics admin menu.
Remember to create your rules based on your website's URL structure; otherwise, it will not work. Sometimes you might be testing pages where multiple page URLs need to be considered. This is a little more advanced and can usually be solved by filtering URLs with the "contains" filter.
Otherwise, you might need to write a regular expression to target the URLs you need. A front-end developer might be able to help you with that. I learned to write regular expressions by reading this book.
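As a small illustration with made-up URLs: a "contains" filter can be too broad, while a regular expression lets you match exactly the pages that belong to your funnel step:

```python
import re

# Hypothetical page paths on a webshop.
urls = [
    "/basket",
    "/basket?promo=summer",
    "/checkout/address",
    "/checkout/payment",
    "/blog/checkout-tips",
]

# "Contains" filter: anything with /checkout in it.
# Too broad — it also matches the blog article.
contains = [u for u in urls if "/checkout" in u]

# Regular expression anchored to the start and end of the path,
# so only the real checkout steps match.
pattern = re.compile(r"^/checkout/(address|payment)$")
checkout_steps = [u for u in urls if pattern.match(u)]
```

The same anchoring idea applies to the regex filters in Google Analytics segments: anchor the pattern so unrelated pages that happen to share a keyword don't leak into your traffic numbers.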
The number of conversions depends on what you are testing. You could use e-commerce transactions when you are testing a webshop, or look at the pageviews of the next step in the funnel; when you are testing the basket page, this is probably the checkout form. Filter the data using one of the techniques described above. Input the conversions and the number of variants you are testing, including the control, to get the data you are looking for.
Usually, optimizers run their tests for about 2 to 4 weeks, and the calculator shows you what effect you would need for that duration. This is because most browsers store cookie data for about 4 weeks. Let's say we are gunning for a 4-week test, which means you would need an increase of at least 10.96% to get a statistically valid result. If you think your change can reach that effect, you can decide to execute your test.
I usually use this data to calculate the monetary value of the effect to get people excited and on board. Just fill in 4 weeks of data in the calculator and see that a significant test with this dummy data would give you 6200 in extra revenue a month! Note that the evaluation needs a slightly bigger effect to reach significance, so keep that in mind.
Looking at the potential monetary value also helps you see whether the test is even worth it when compared to the costs of running the whole process. Keep in mind that roughly 85% of tests will not be significant and will only cost you money; on average, only 10 to 15% of all tests reach statistical significance, according to Invespcro. The goal is to make the winning tests have a big enough effect that the costs are justified.
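One way to frame that trade-off is as a simple expected-value calculation. The numbers below are illustrative, and the 12% win rate is just a point inside the 10–15% range cited above:

```python
def expected_test_value(monthly_revenue, expected_lift,
                        cost_of_test, win_rate=0.12):
    """Rough expected value of running one test: the extra monthly
    revenue a winning variant would add, discounted by the share of
    tests that actually reach significance, minus the test's cost."""
    upside = monthly_revenue * expected_lift  # extra revenue if it wins
    return upside * win_rate - cost_of_test

# E.g. 50,000 monthly revenue, an 11% expected lift, 500 in test costs:
expected_test_value(50_000, 0.11, 500)  # ≈ 160: marginally positive
```

A negative result doesn't automatically mean "don't test" (winning tests keep paying out beyond one month, and every test produces learnings), but it's a quick sanity check before committing development time.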
Step #5: Pretotyping your test (optional)
If the return on investment is worth it, you can choose to pretotype your hypothesis before developing the test for it. Pretotyping is the act of testing a concept in a smaller and simpler setting to see if your hypothesis holds any merit.
How you pretotype your test really depends on the concept, your test, and your own imagination. Let's say, for example, you are testing whether showing currency symbols has an effect on e-commerce transactions and revenue. Before testing it on the entire website, you could run an offline A/B test with your colleagues and see how currency symbols affect their decision making.
If you want to learn more about pretotyping, you can read more about it here.
Step #6: Pitching your case
Before the test can be developed, it needs to be presented to the person who is in charge of the developers' schedule. Smaller tests can sometimes be done on your own, but I've found it's usually a good idea to at least run your code by a senior developer to see if it might create problems you are not seeing. I know from experience: a test of mine once created problems in the logistics department that were only spotted after the test was finished.
I don’t like to spend too much time on making presentations so I have a predefined template for all my testing pitches.
Slide 1: Describing the issue or insight
Since you have a list of all your issues and insights and you are basing your tests on them, you can simply copy them from your spreadsheet to the first slide. This also helps you catch tests that are based on nothing but best practices and gut feeling. I usually like to show images or videos of the source the issue or insight is based upon. We're not just making up issues and insights because it's convenient, right? Prove it.
Slide 2: Describing your hypothesis
Copy the hypothesis from your spreadsheet and explain why you think this should be verified.
Slide 3: Showing your testing variants
Show screenshots and information about the control version and the variants of your test. It's fine to say you are using best practices to solve an actual problem; that actually increases the chance of a significant result, because the solution is based on research. Just make sure you are not trying to solve a problem that isn't there. Explain the differences between the control and variant, and why you made certain design choices in your variant.
Slide 4: A glimpse of the gold at the end of the rainbow
Because you have pre-tested your hypothesis you will know exactly what the impact on the website revenue can be if your test is significant. Show this in your presentation and you are bound to make people excited about developing this test.
Good luck with your pitch! Let’s make it happen.
Step #7: Developing your test
If you need a developer, this step will take more time because you need to prepare the work for him or her. I usually create a prototype in Adobe XD with screens for all the website breakpoints. This way the developers know exactly what to change at each breakpoint and can understand the functionality by clicking through your prototype. You can share it using the share-for-developers feature in Adobe XD, and they can leave questions if needed; you will be notified of those questions by email. One benefit of having your test built by a developer is that it's faster to implement once your test has finished and yielded good results.
Adobe XD is free, by the way, so that shouldn't cost you anything but time either. It's actually really easy to learn, and those of you who have used other Adobe products in the past should grasp it in no time. One downside is that you cannot save many shared designs, so you might need to upgrade to a paid account in the future. That way the developers can always access old designs.
When a developer has finished creating your test you should check it rigorously for any problems on different screen sizes and browsers. You are responsible if anything goes wrong so make sure your test doesn’t create any problems.
Step #8: Scheduling & launching your test
When everything works as it should you are able to schedule your test with your developers. I recommend starting your tests on a Monday so you are able to fix any problems that arise without ruining your weekend.
Run the test for at least 2 weeks and longer if you have the time and resources. Running a test for longer isn’t free so make sure you don’t overdo it.
You can use the AB test calculator to determine whether your sample size is big enough after two weeks. If it says you need a few more days, go ahead and keep your test running. If it needs more than another 14 days, it's time to move on to the next test.
Step #9: Informing your colleagues
Make sure your organization knows what you are testing and when, because if people expect the change that is coming, you will get far fewer questions. It also helps because people can notify you of circumstances you weren't aware of that could affect your test results.
Maybe there is a server migration for example. If the website goes down in the middle of your test then you might as well start over because this would invalidate your test results.
You can notify your colleagues by sending an email with the summarized details of your test, or just send the presentation you used for the pitch. I've found that many people find this very interesting even if they have nothing to do with it; it helps you connect with your colleagues and generate feedback. The amount of feedback you can receive just by sharing your process can be more than you think. Also, remind them of the launch day, because some people will forget if you send the details two weeks in advance.
Step #10: Monitoring your test
Now that your test is running, you might ask yourself what to do in the meantime. Obviously, you want to continue researching and developing your next tests. But you also want to make sure everything is running smoothly while the test is online, so you can make adjustments if you find any problems.
What I usually do is check the results every morning to make sure the test isn't going horribly wrong. Just look at the results quickly and move on. Don't dive in too deeply; you'll want to do that when your test is finished. Otherwise, you might end up spending a lot of precious time on deep analysis when you don't even have enough data to draw valid conclusions.
Another thing I do is watch people navigate through my tests, to spot any problems that need fixing or to collect insights that might be valuable for the next test.
I like to use Hotjar and enable user recordings so I have new recordings to watch every morning. It’s fairly easy to set up as you can use the URL targeting you used with your test. This way the recording starts when a visitor sees either one of the variants. I like to do this because then you will not have to watch a bunch of sessions that never reach your experiment.
You can also connect Hotjar to Google Optimize so you can filter out different variants in the user recordings you want to look at. The control is less interesting to look at because it doesn’t have any changes.
Using this method, I have found more bugs than I could ever have found by just testing on my own device. The cost of the program is fairly low, and you can use it for a bunch of other useful research methods.
Step #11: Evaluating your test
When your test has finished, it's time to evaluate it. Use the AB test calculator to find out whether your test was significant and what kind of effect it will have on the company's revenue.
I wrote a whole guide on evaluating your test with Google Optimize and Google Analytics, so definitely check that out if you want to learn more.
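If you'd like to sanity-check the calculator's verdict yourself, the core of this kind of evaluation is typically a two-proportion z-test. A stdlib-only sketch, with made-up conversion numbers:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    control (a) and variant (b). Returns the z statistic and an
    approximate p-value via the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value from the standard normal CDF, written with erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 300/10,000 control vs 360/10,000 variant.
z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
significant = p < 0.05
```

With these dummy numbers the lift clears the 5% significance threshold; with identical rates in both arms, the p-value comes out near 1, which is the "accept it and move on" case described below.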
If you don’t find that you have significant results you can try to segment your data to see if you do have these results with a different target group or device.
The results are what they are; if they're not significant, you just have to accept that and move on to the next test.
Step #12: Sharing the results
Share your results with your colleagues even if they are not significant, because every test has value through what you learn. You want tested concepts to be well known so they are not repeated when the results show no significant impact, as that would be a waste of time and money.
You can reuse the slides from your pitch and email them to share the results. Just add a few more slides with all the data and your conclusions. This way your colleagues are visually informed of the results and far more likely to understand what you tested.
Make sure to ask them for feedback and opinions on the results because those could yield valuable insights for the future.
Step #13: Logging your results
Now it’s time to open up your spreadsheet again and put in all the information related to the test so you can look back at it in the future.
This way you are logging exactly what tests have been completed and the results they yielded. You will want to prevent repeating the same tests obviously.
Step #14: Updating your hypothesis
Now go back to the hypothesis tab and mark your hypothesis as true or false depending on the test result. I like to hide the hypothesis row once the result comes in so it doesn't get in the way of the PXL sorting table; right-click on the row number and hide it if you want to do the same.
Step #15: Updating your issue or insight
Going back to your issues and insights, you can change the action value to complete, or delete solved issues altogether if they hold no value for the future (some insights, for example, may no longer be relevant). You can also hide them, as in the previous step, so you can still look them up later if you want to.
Step #16: Implementing the changes
If your hypothesis has been validated successfully, you have one final step to complete: implementing the change you tested. If you developed the test yourself with Google Optimize, you might need to make a design in Adobe XD so your developers can recreate it. You can also share the variant's code, but usually it won't matter much to a developer, because building the change in the backend will be very different.
Discuss with your developers what they need to build your winning change and make sure it gets created. Don't let it get lost in their backlog: keep tabs on their sprint and ask them when it's finished. Only when it is implemented and tested should you let go of this winning change.
Link to the spreadsheet:
Subscribe to my newsletter to receive a link to the conversion optimization spreadsheet that you need to follow this process. You will also receive bi-weekly emails when new articles are posted on GrowthPenguin.
You won’t be able to change the information in this spreadsheet. You need to make a copy so you can use it for yourself.
I hope this article helps you get started with consistently optimizing for your own company, or the company you work for, using a process. I'm not saying this is the perfect process, but it has worked for me in the past and still does today. I gradually improved it by learning from my mistakes and reading a lot from well-known experts in the field. If you have any comments or questions regarding my process, leave them in the comments below. Good luck with making a better user experience!