
Chapter #4: How to interpret your A/B testing results when stopping your test?

You have input your observation data into the test design and noticed that it’s time to end your test. Let’s finish the test the right way. This chapter will show you how to interpret your A/B testing results and get the most out of your tests.

How to interpret your A/B testing results?

Test design in Analytics toolkit
(Image source)

Congratulations if your test landed in that excellent green area. You can now stop your test and declare a winner. If your test ended in the red area, you don’t have a winner. Better luck next time! Either way, it’s now time to interpret your A/B testing results.

Scrolling down in our test design in Analytics Toolkit will reveal our final observation data.

How to interpret your final observation results?
(Image source)

We are interested in the confidence metric and estimated lift.

The confidence metric tells us how confident we can be that the variant genuinely beats the control. In the above example, the confidence is 99.34%, which is more than the 95% we were looking for when creating the test design.

The estimated lift signals the magnitude of the change. In the above example, we reached a 5% lift for this target group, which is above the minimum effect of interest that we calculated in our test design.
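
If you want to double-check the lift outside the tool, the relative lift is simply the difference between the variant and control conversion rates divided by the control conversion rate. A minimal Python sketch, using illustrative conversion rates rather than the exact values from the screenshot:

    # Illustrative values; replace with your own observed conversion rates.
    control_cr = 0.020   # control conversion rate (2.0%)
    variant_cr = 0.021   # variant conversion rate (2.1%)

    relative_lift = (variant_cr - control_cr) / control_cr
    print(f"Relative lift: {relative_lift:.1%}")   # ~5.0%, compare against your minimum effect of interest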

How to calculate the monetary contribution?

Let’s start with a simple, flat calculation using the CXL calculator.

Input the test duration:

Test duration in CXL calculator
(Image source)

Then your test results:

Inputting conversion data in CXL calculator
(Image source)

You can already see the extra transactions per month in this example. Five hundred thirty-five additional transactions for a 5% lift are not too bad!

Extra transactions in CXL calculator
(Image source)

But how much money would that be? We can calculate that by inputting the average order value of the variant.

Monthly monetary contribution in CXL calculator
(Image source)

You can get the average order value of the variant by adding an extra dimension to your Google Analytics experiment table.

Flat table in Google Analytics
(Image source)

Inputting the average order value of the variant into the CXL calculator will give us the monthly monetary contribution for the test implementation.

Monthly monetary contribution in CXL calculator
(Image source)
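
If you want to reproduce the CXL numbers by hand, the underlying logic is a simple flat projection: multiply your monthly traffic by the absolute lift in conversion rate to get the extra transactions, then multiply by the average order value. A rough Python sketch with illustrative figures, not the calculator’s exact implementation:

    # Illustrative figures; plug in your own traffic, conversion rate and AOV.
    monthly_users = 535_000       # users per month exposed to the change
    baseline_cr = 0.02            # control conversion rate (2%)
    relative_lift = 0.05          # 5% relative lift measured in the test

    absolute_lift = baseline_cr * relative_lift             # 0.1 percentage point
    extra_transactions = monthly_users * absolute_lift
    print(f"Extra transactions per month: {extra_transactions:.0f}")      # ~535

    average_order_value = 50
    monthly_contribution = extra_transactions * average_order_value
    print(f"Monthly monetary contribution: {monthly_contribution:,.0f}")  # ~26,750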

Now that’s something you can discuss with your manager to convince them of the value of conversion optimization. I bet they will ask you when you’re running your next test. You have another one running already, right?

Some of the more skeptical clients and managers will need more than a flat calculation. What about the costs? And there is always that small risk that your result was a false positive.

Analytics Toolkit has a great ROI calculator in which you calculate the return on investment.

ROI calculator in Analytics Toolkit
(Image source)

It’s similar to our test design but has costs & benefits added to it. Let’s fill in the form.

Choose “Testing for Superiority.”

 Test type in Analytics Toolkit
(Image source)

Switch the analysis type to “AGILE Sequential Testing.”

Analysis type in Analytics Toolkit
(Image source)

Input the “Baseline Conversion Rate” of the control version.

Baseline conversion rate in Analytics Toolkit
(Image source)

Choose one variant versus the control.

Variants in Analytics Toolkit
(Image source)

Input the users per week of both variants.

Users per week in Analytics Toolkit
(Image source)

Set the desired duration of testing to match your test design.

Desired duration in Analytics Toolkit
(Image source)

Leave the significance threshold at 95%.

Significance Threshold in Analytics Toolkit
(Image source)

Input the relative lift of your winning variant.

Expected relative lift in Analytics Toolkit
(Image source)

Leave the standard deviation at 3%.

Standard deviation in Analytics Toolkit
(Image source)

Leave the scaling parameter at 1.00.

Scaling Parameter in Analytics Toolkit
(Image source)

Input 1 in the “Years of effect” field. You can increase this to your liking.

Years of effect in Analytics Toolkit
(Image source)

The revenue per week can be taken directly from Google Analytics or calculated using the Baseline Conversion Rate. In our example, 2% of 125,000 users is 2,500 transactions. With an average order value of 50, that gives us a revenue per week of 125,000.

Revenue per week in Analytics Toolkit
(Image source)
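
As a quick sanity check, the weekly revenue is just traffic times conversion rate times average order value. A tiny Python version using the example numbers above (swap in your own):

    users_per_week = 125_000
    baseline_cr = 0.02            # 2% baseline conversion rate
    average_order_value = 50

    revenue_per_week = users_per_week * baseline_cr * average_order_value
    print(f"Revenue per week: {revenue_per_week:,.0f}")   # 125,000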

Input your “cost to test.”

Cost to test in Analytics Toolkit
(Image source)

We did not spend anything on tools in this guide, so the only cost will be your time.

Because we built the test ourselves, we will need a developer to develop and implement the winning change. Calculate the cost of that developer’s time working on your change.

Cost to implement in Analytics Toolkit
(Image source)

When testing a new tool, you might have a monthly cost for the software. Input that under “Monthly maintenance.”

Monthly maintenance in Analytics Toolkit
(Image source)

Sometimes big decisions are made based on test results, and reversing such a decision can cost a lot of money. Input the cost of reversing that decision here.

Cost to reverse in Analytics Toolkit
(Image source)

If you are using a tool and the variant hides it, you might save money by removing that tool from your website. Input that amount under “Monthly savings.”

Monthly savings in Analytics Toolkit
(Image source)

When you have everything set up, you can click “Estimate Risk / Reward.”

ROI calculator in Analytics Toolkit
(Image source)

The calculator will give you many tables and graphs from which you can extract the monetary contribution.

Looking at the “Probability Adjusted Gain” shows you the range of probability-adjusted revenue over one year.

Probability adjusted gain in Analytics Toolkit
(Image source)

In this example, we can see that the worst case would lose you 4,194 dollars and the best case would gain you 137,258 dollars over 12 months. It seems like this implementation is definitely worth the risk.

Scrolling down to the “Flat Prior Risk / Reward Analysis” will show you results similar to the CXL calculation.

Flat prior risk reward analysis in Analytics Toolkit
(Image source)
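
The model behind Analytics Toolkit’s risk/reward analysis is more sophisticated than this, but you can approximate the flat version yourself: project the yearly gain from the lift and subtract the costs you entered. A simplified Python sketch under those assumptions, with hypothetical cost figures:

    # Simplified flat calculation; not Analytics Toolkit's exact model.
    users_per_week = 125_000
    baseline_cr = 0.02
    relative_lift = 0.05
    average_order_value = 50
    years_of_effect = 1

    weekly_extra_revenue = users_per_week * baseline_cr * relative_lift * average_order_value
    flat_benefit = weekly_extra_revenue * 52 * years_of_effect          # 325,000 per year

    # Costs as entered in the calculator (hypothetical amounts).
    cost_to_test = 1_000          # your own time designing and analysing the test
    cost_to_implement = 2_000     # developer time to build the winning change
    monthly_maintenance = 0       # e.g. a tool subscription, if any
    monthly_savings = 0           # e.g. a tool the variant removes

    total_costs = cost_to_test + cost_to_implement + 12 * (monthly_maintenance - monthly_savings)
    net_gain = flat_benefit - total_costs
    print(f"Flat net gain over {years_of_effect} year(s): {net_gain:,.0f}")   # 322,000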

How to log your results?

Now you know how to interpret A/B testing results, but you also want to log your results for future reference. You don’t just want to throw your test data away.

In the spreadsheet from this series that we use for user research, we also have a tab called “testing list.”

Here is the link again (no email required):

Conversion optimization – GrowthPenguin 

You can edit the spreadsheet by creating a copy.

You want to fill in a row for every test you finish. Make sure you segment your tests the same way you evaluated them, separating Desktop & Tablet from Mobile.

Give your test a number.

Test number in Google Spreadsheets
(Image source)

Input the name of your test like you did when creating your test design:

Test (number) – (description)

Test name in Google Spreadsheets
(Image source)

Enter the start and end date of your test.

Date in Google Spreadsheets
(Image source)

Enter the device for that specific test.

Device in Google Spreadsheets
(Image source)

Then briefly describe your test for future reference. Try to explain it in a way that someone else would understand.

Description in Google Spreadsheets
(Image source)

Then input whether your test was a significant winner and by how much.

Beating the test in Google Spreadsheets
(Image source)

Input whether your sample size was big enough.

Sample size in Google Spreadsheets
(Image source)

If your test was a winner, you also want to input the monetary ROI of your test. I usually put the yearly ROI from Analytics Toolkit here.

Monetary ROI in Google Spreadsheets
(Image source)

Input whether you want to implement the test or not.

Implementation in Google Sheets
(Image source)

Input your final data in the spreadsheet for future reference.

AB data in Google Sheets
(Image source)

If the process or results of this test have taught you something, write it down so you never forget it.

Learnings in Google Sheets
(Image source)

You can use this process of logging test results to show your manager how much work and profit you are generating for the company. 

You will find that as your team grows, it’s essential that everyone knows which tests have finished and what their results were. When your team knows this, you will prevent tests from overlapping or being repeated.
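
If your testing program ever outgrows the spreadsheet, the same log translates directly into a small data structure or database table. Here is a minimal Python sketch of one “testing list” entry; the field names and example values are my own, not the spreadsheet’s exact column headers:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TestLogEntry:
        number: int
        name: str                  # "Test (number) - (description)"
        start_date: date
        end_date: date
        device: str                # "Desktop & Tablet" or "Mobile"
        description: str
        significant_winner: bool
        lift: float                # relative lift, e.g. 0.05 for 5%
        sample_size_reached: bool
        monetary_roi: float        # yearly ROI from Analytics Toolkit
        implement: bool
        learnings: str

    # Hypothetical example entry.
    entry = TestLogEntry(
        number=1,
        name="Test 1 - Changing stuff on Mobile",
        start_date=date(2024, 1, 1),
        end_date=date(2024, 1, 29),
        device="Mobile",
        description="Changed the product page layout for mobile visitors.",
        significant_winner=True,
        lift=0.05,
        sample_size_reached=True,
        monetary_roi=100_000,
        implement=True,
        learnings="Mobile visitors respond well to a simpler layout.",
    )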

How to share your learnings?

You could prepare a whole presentation, but that takes so much time! We want to get on to the next test. Just send an e-mail or chat message like this for the same result:

Hello everyone,

Our test “Test 1: Changing stuff on Mobile” has statistically significant results! We achieved a 10% lift in conversion rate, which equals a monthly monetary contribution of €100,000.

Control:

Amazon before change
(Image source)

Variant:

Amazon after change
(Image source)

If there are any questions about the process or the change, let me know!

How to decide on your next test?

Just go back to the hypothesis priority list in your spreadsheet and select the one at the top. You should have the next test ready as soon as the previous one finishes if you want to maintain testing velocity.

You can mark all the hypotheses that you have proven in green to signal that they were true.

Hypothesis in Google Sheets
(Image source)

Also, hide the issue or insight in the spreadsheet if it is no longer relevant after your test:

Hide row in Google Sheets
(Image source)

This way, you can always find it if you need it, and you keep your issue list nice and tidy.

How to increase your budget after your first win?

This whole series was about getting your first win on a zero budget. Congratulations if you got the result you desired! You have proven the value of conversion optimization to your boss and colleagues, and it’s time to take the next step.

If you want this process to be more impactful, easier, and faster, you will want to invest resources into improving it. There is only so much you can do on a zero budget. I recommend taking a course at CXL.

CXL has the most comprehensive conversion optimization training program in the world. Its CRO Minidegree program is designed for optimizers who want to build and run world-class programs.

If you’re looking to build a career in conversion optimization, there’s nothing like it out there. You’re going to get advanced level skills – from conducting conversion research to running a testing program. 

The whole thing is self-paced and teaches you a systematic, repeatable process for getting wins. The best part – the content is industry agnostic, so you can use it to get uplifts in any industry.

And, if you want your company to pay for it, they provide a PDF to help you pitch it to your boss.

Check out the CRO Minidegree program here.

If you use my link to purchase your mini-degree, I will get a contribution from CXL at no additional cost to you! This contribution helps me keep creating content for free!

I wish you the best of luck with your conversion optimization efforts in the future!

Get the full guide below!

How to win your first A/B test on a zero budget?

If you read this 79-page guide, you will learn:

  • If your company is ready for conversion optimization
  • How to conduct user research on a zero budget
  • How to prepare and run a test on a zero budget
  • How to interpret your A/B testing results when stopping your test

Are your tests statistically valid?

This statistics course for A/B testing will help you avoid costly testing mistakes.