Have you ever wondered how to do user research when money is tight? Sometimes you simply don’t have the budget to buy all the tools you need.
This chapter will show you how to conduct user research on a zero budget!
I will describe what user research is and how you can plan a zero-budget research strategy.
Then I will list many free tools and methods that you can use to fill in your strategy and collect the insights you need for your first A/B testing win.
Good luck with setting it up! But first, the basics.
Sidenote: In this article, I will be using the word “issues” a lot. An issue can also be interpreted as an insight.
What is user research on a zero budget?
User research is the activity of gathering issues with research methods. Issues help us understand the user better and adapt our product or service to fit their needs.
Arin Bhowmick gives the following definition of user research:
“User research focuses on understanding user expectations, behaviors, needs, and motivations through methodical, investigative approaches. Insights (issues) are then used to ensure that all product design decisions do benefit the user.”
In other words: we research users for the users’ benefit, and we benefit in turn when they use our product.
User research can involve a range of activities depending on the insights needed. A few examples of user research are A/B testing, focus groups, and surveys. Here is a chart of user research methods from NNG containing many more examples:
This graph is from 2014; the user research industry has evolved a lot since then.
You might ask yourself:
“When am I done researching?”
To that, I answer the following:
“When you have enough issues to act upon.”
So keep an eye on your extensive list of issues and make sure you have enough to go on before you start slacking on user research.
I am guessing your list is empty if you’re reading this. So, where to begin? You need to start planning your user research strategy.
How to plan out your user research strategy on a zero budget?
Most research methods can be done within a week, so let’s make a six-week strategy where we execute one method at a time. You might get some overlap if you get stuck on specific methods, but that’s okay; just shift your planning a bit. We are not trying to win a race. We are just trying to reach the end with a list full of quality issues.
You can set this up in a project management tool of your choosing, but you can also do it for free in my Google Spreadsheet. The sheet contains all the templates you need for getting your first winning test on a budget, so be sure to check the other tabs:
Download it here (no email required):
You can edit the spreadsheet by making a copy.
I encourage you to do the methods you think have the most impact first to keep you motivated in the long run.
This spreadsheet will be your project planning for collecting issues over the coming weeks.
But where should you collect them?
How to collect your issues?
Peep Laja developed the process I use. It places all your research into a single spreadsheet. Here is a link to the original post from CXL.
One of the tabs in the spreadsheet you downloaded earlier is already called “Issues”. The sheet looks like this:
The main idea of this spreadsheet is to put issues into buckets that decide how to act upon them. Here is a quick summary of which bucket your issue should go in.

Hypothesize is for issues where there are multiple unclear solutions to a problem. You will need to hypothesize different ideas and test them to optimize your product.

Test is for issues where you can make hypotheses that need to be tested.

Investigate is for issues that require more user research.

Instrument is for issues you should measure, for example by adding or fixing tracking.

Just Do It is for the stuff that you should only implement and not test. Examples would be bugs and other technical issues.
Give the issues that have a high impact and are easy to implement the highest priority rating.
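To make that priority rating concrete, here is a minimal sketch (all issue names and scores are hypothetical) that ranks issues by a simple impact-times-ease score:

```python
# Hypothetical issues scored 1-5 on impact and ease of implementation
issues = [
    {"issue": "Checkout button invisible in Opera", "impact": 5, "ease": 4},
    {"issue": "Shipping costs unclear on product page", "impact": 4, "ease": 2},
    {"issue": "Typo in footer", "impact": 1, "ease": 5},
]

# Higher impact * ease means a higher priority rating
for item in issues:
    item["priority"] = item["impact"] * item["ease"]

ranked = sorted(issues, key=lambda i: i["priority"], reverse=True)
for item in ranked:
    print(item["priority"], item["issue"])
```

You can use any scoring scheme you like; the point is to make the ranking explicit instead of keeping it in your head.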
What do you do with those issues?
The main goal of collecting these issues is to generate testing hypotheses from them. We are then going to prioritize these hypotheses with a framework of our choosing: a checklist that scores your testing hypotheses and tells you what to test first.
I also added this tab to the spreadsheet for your convenience. Feel free to use a different framework if you prefer it over this one.
Once you have many hypotheses, the prioritization will show you which ones have the best chance of succeeding and the most significant impact. It also takes implementation effort into account.
How to translate your issues to testing hypotheses?
A testing hypothesis is a statement that you are trying to validate with your tests.
It can contain the issue, change, population, metric data change, and business cycles.
Craig Sullivan shared a great hypothesis kit for us to use. For my first example, I will use a simple kit.
Here is the kit:
“Because we saw (issue) we expect that (change) will cause (impact). We’ll measure this using (data metric).”
You will be using this kit on the issues that are in the bucket test and hypothesize.
Let’s imagine we have the following test issue:
“In our survey, we found that users would like to be able to add multiple products at a time.”
We can translate this into a testing hypothesis:
“Because we saw that users would like to be able to add multiple products at a time we expect that adding a product amount dropdown on the product page will cause more people to buy our products. We’ll measure this using conversion rate.”
This hypothesis can be put into our spreadsheet to be prioritized.
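If you end up writing many hypotheses, a tiny helper can keep the wording consistent. Here is a sketch of filling in the simple kit (the function name is my own):

```python
def simple_hypothesis(issue, change, impact, metric):
    """Fill in Craig Sullivan's simple hypothesis kit."""
    return (f"Because we saw {issue} we expect that {change} will cause "
            f"{impact}. We'll measure this using {metric}.")

print(simple_hypothesis(
    "that users would like to be able to add multiple products at a time",
    "adding a product amount dropdown on the product page",
    "more people to buy our products",
    "conversion rate",
))
```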
I like to take this a step further by using the advanced kit. This kit will require you to dig in Google Analytics, so if you are not comfortable with that yet, you can stick to the simple kit.
Here is the advanced kit:
“Because we saw (issue) we expect that (change) for (population) will cause (impact). We expect to see (data metric change) over a period of (x business cycles).”
We can use the same issue from before to translate this into a testing hypothesis:
“Because we saw that users would like to be able to add multiple products at a time we expect that adding a product amount dropdown on the product page for users on mobile devices will cause more people to buy our products. We expect to see a 5% conversion rate increase over a period of 4 weeks.”
When testing hypotheses, we recommend splitting your tests into two segments:
- Desktop & tablet
- Mobile
We split to prevent sample pollution when analyzing your test results.

The data metric change is the minimal detectable effect needed to get a significant change within the period of testing. It’s an industry standard to run tests for at least four business cycles (weeks). You will need to calculate the data metric change using a pre-test analysis.
How to get the minimal detectable effect?
You can get the minimal detectable effect using the CXL calculator:
You will need the following data for the pages you are going to run the test on:
- Weekly traffic in users
- Weekly transactions
- Number of variants
All this data can be extracted from Google Analytics.
Make sure to split your data per device to prevent sample pollution.
Here is what we can see when we input the data in the pre-test analysis tool:
This table is where we extract the minimal detectable effect. Take the percentage from four weeks and use that in your hypothesis.
You can also get the minimal detectable effect from Analytics Toolkit. You will have to sign up for the free trial to use the tool, which requires no credit card.
I find this tool more accurate than CXL’s. Just input your sample size for four weeks of testing and the baseline conversion rate to get the effect you need.
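If you want to sanity-check the calculators, here is a rough sketch of the same calculation using the standard normal approximation for two proportions (95% confidence and 80% power by default; the traffic numbers are hypothetical):

```python
from math import sqrt

def minimal_detectable_effect(weekly_users, weekly_transactions,
                              weeks=4, variants=2,
                              z_alpha=1.96, z_beta=0.84):
    """Relative MDE via the normal approximation for two proportions."""
    baseline_cr = weekly_transactions / weekly_users
    n_per_variant = weekly_users * weeks / variants
    absolute_mde = (z_alpha + z_beta) * sqrt(
        2 * baseline_cr * (1 - baseline_cr) / n_per_variant
    )
    return absolute_mde / baseline_cr  # relative lift you can reliably detect

# Hypothetical shop: 10,000 users and 200 transactions per week, A/B test (2 variants)
print(round(minimal_detectable_effect(10_000, 200), 3))
```

A result of roughly 0.2 would mean you need about a 20% relative lift in conversion rate to detect a significant change within four weeks; dedicated tools apply more precise corrections, so treat this as a ballpark.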
Now that you know what to do with an issue, it’s time to collect them, with our zero budget in mind.
What research methods can be conducted on a zero budget?
Below I describe how each user research method works and which free tools you can use to execute it. We will use the ResearchXL framework as our starting point and choose six methods from it that we can do with trials and free tools.
Method 1: Technical analysis
- Google Analytics
- Google Spreadsheets
- A trial version of crossbrowsertesting.com or a similar software package
How does the method work?
We are going to check for issues in different browsers, devices, and pages. Using Google Analytics allows us to pinpoint potential problem areas that we can investigate further with our testing software trial.
At the time of writing, you get about 100 minutes of free testing time, so it’s essential to check only the browsers, devices, and pages with a significant chance of having a problem.
Step 1: Gathering the problem browser versions
We will start by going into Google Analytics in the Browser & OS report.
Then we are going to add a desktop segment to our report.
Click on the first browser with the most traffic.
Now click advanced.
Now create a filter to include all versions with more than 100 users.
Set the date range to the last 30 days.
Now we can look at our report to find desktop browser versions with lower than average conversion rates.
Write down the browser versions that have a lower conversion rate compared to the other browser versions. Here we can see that Opera versions 67, 68, and 73 might have problems.
You will do this for every browser and device to gather an extensive list to check with our cross-browser tool.
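If you export the report to a sheet or CSV, you can flag suspicious versions automatically. Here is a sketch with made-up numbers, flagging versions that convert at less than half the overall average:

```python
# Hypothetical Google Analytics export: (browser version, users, transactions)
rows = [
    ("Opera 67", 220, 1),
    ("Opera 68", 310, 2),
    ("Opera 73", 150, 0),
    ("Opera 74", 900, 27),
]

total_users = sum(users for _, users, _ in rows)
total_tx = sum(tx for _, _, tx in rows)
avg_cr = total_tx / total_users  # site-wide average conversion rate

# Keep versions with enough traffic (>= 100 users) converting below half the average
suspects = [
    version for version, users, tx in rows
    if users >= 100 and (tx / users) < avg_cr / 2
]
print(suspects)
```

The "half the average" threshold is my own arbitrary cutoff; tune it to how aggressively you want to investigate.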
Step 2: Checking your browser list for issues
Now we are going to open crossbrowsertesting.com and sign up for a free trial.
Now click on live testing and start a live test.
Input the URL of the start of your customer journey.
Choose the first version on your list and click run test.
Now you will complete your customer journey within 5 minutes (a trial restriction) and see if you come across problems. You can add these problems to your issue list.
On an e-commerce website, this would be completing a transaction, for example. Set up a test account, or just cancel your orders afterward.
Again, you will be able to do this for 100 minutes, so make them count!
I usually just put these issues into the “Just Do It” bucket because these bugs require no testing. You can estimate the revenue gained by fixing a bug by comparing the broken browser version with the functional versions of the same browser.
Imagine that the browser you are looking at has a conversion rate of 1%, and the better versions have 4%. This observation means you will probably quadruple the revenue for the users with that browser version! Not very scientific, but it will give you a ballpark number.
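The ballpark math from that example looks like this (all numbers are hypothetical):

```python
# Ballpark revenue estimate for fixing a buggy browser version
users_per_month = 500       # visitors on the broken browser version
broken_cr = 0.01            # its conversion rate
healthy_cr = 0.04           # conversion rate of working versions of the same browser
avg_order_value = 50.0      # average order value in your shop

extra_orders = users_per_month * (healthy_cr - broken_cr)
extra_revenue = extra_orders * avg_order_value
print(extra_revenue)
```

Again, not very scientific, but it helps you argue for the fix.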
Keep going through this process until you have checked all browser versions in your list or your trial runs out. You can always use similar software trials to get more minutes!
Method 2: Heuristic analysis
- Google Slides
- CXL heuristic checklist
How does the method work?
In this method, you and (preferably) some colleagues are going to judge screenshots of your website using this checklist from CXL:
Start by opening up Google Slides and creating a new presentation.
You are going to fill this presentation with screenshots of all your essential pages for all devices.
Go to your website and right-click the page to inspect it.
In the top right of the browser, we can click on the device icon.
Set it to a smaller mobile device such as an iPhone 5/SE.
Then click on the three dots next to it and capture a screenshot of your selected device.
Then drag the screenshot from your download bar into Google Slides.
Do this for every significant page in your customer journey.
Put the checklist points next to it and start pointing out problems with that page.
Do this for all your screenshots and add all issues to your issue log.
Method 3: Google Analytics audit
- Google Analytics
- CXL Google Analytics checklist
How does the method work?
For this method, we are going to use this Google Analytics checklist from CXL to find problems on your website in Google Analytics.
If you don’t know how to check these reports for issues, you can check this article. Check all the reports and log all your issues into the spreadsheet.
Method 4: User recordings
How does the method work?
We are going to use Clarity to watch user recordings of visitors on our website. Check out the demo yourself.
You will have to add the Clarity tracking code to your website.
Just look at many recordings of the most important pages in your customer journey and write down all the issues you come across.
The Clarity dashboard also shows you recordings with dead clicks:
Look at those to find places where people think they should be able to click.
A few more filters worth checking:

Rage clicks could point out user frustration.

Excessive scrolling might indicate a problem with your page layout.

Quick backs can reveal accidental clicks or content that visitors did not expect.
Just take a full day to watch videos until you can’t take it anymore. I guarantee that this will give you insights. Doing this with a couple of colleagues makes it a lot more enjoyable.
Method 5: Heatmaps
- Clarity (free alternative to Hotjar)
How does the method work?
In the same tool, we are going to look at scroll maps and click/tap maps. I already wrote a guide on setting up heatmap testing with Hotjar and analyzing the results. The same principles apply in Clarity, but this software is free. Here is a summary of the questions you can ask about the two map types:
- Is the most important information above the average fold?
- Are visitors leaving because of a fake bottom?
- When have most people found what they are looking for?
- Are there any prominent dead zones where users expect a link?
- Are users clicking on the crucial elements on your website?
You know the drill by now. Do this for all your important pages and devices, and write down your issues in the log.
Method 6: On-site surveys
- A trial version of Hotjar
How does the method work?
On-site surveys are one of the best methods you can use for qualitative insights. You can use them to ask specific questions on pages to find problems or to resolve issues in the investigate bucket.
Here is how you can set it up:
You need to set up your surveys on all the essential pages of your customer journey (not all at the same time).
Imagine you have the following issue collected:
“We found in Google Analytics that a lot of users are leaving on the favorites page.”
This issue will be in the investigate bucket because you want to know how this page is used and why people are leaving.
You could ask the question:
“Do you like adding products to your favorites list?”
When they respond with no, you can ask them why!
If you can’t think of any good questions, you can check out this article.
Get about a hundred or more responses and group similar answers together. This way, you will know if specific issues are a common problem on your website. Write every issue you find in the issue log.
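Grouping a hundred free-text answers by hand gets tedious. A rough keyword-based sketch (the answers and themes are made up) can do a first pass for you:

```python
from collections import Counter

# Hypothetical free-text survey answers
answers = [
    "shipping is too expensive",
    "i couldn't find the size chart",
    "shipping costs surprised me",
    "no size chart anywhere",
    "shipping fee too high",
]

# Map a keyword you expect in the answer to the theme it belongs to
themes = {"shipping": "shipping costs", "size": "missing size chart"}

counts = Counter()
for answer in answers:
    for keyword, theme in themes.items():
        if keyword in answer:
            counts[theme] += 1

print(counts.most_common())
```

Simple substring matching will miss synonyms and typos, so still skim the ungrouped answers by hand.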
User research on a zero budget is easy once you have a process to back it up. You can use any research method available on the internet, as long as you systematically add the issues to your log. Now, what are we going to do with all these hypotheses?
Did you know it’s possible to prepare and run tests on a zero budget?
This chapter will tell you how.
It contains all the information you need without spending a dime!
This chapter contains:
- Step 1: How do you prepare your testing tools?
- Step 2: How to run tests?
Get the full guide below!
If you read this 79-page guide, you will learn:
- If your company is ready for conversion optimization
- How to conduct user research on a zero budget
- How to prepare and run a test on a zero budget
- How to interpret your A/B testing results when stopping your test
Are your tests statistically valid?
This statistics course for A/B testing will help you avoid costly testing mistakes.