Step-by-step guide for designing Lean Experiments
Designing sound experiments is critical to creating valid and reusable knowledge. This guide will help you achieve that.

Why should I care about experimentation?
For entrepreneurs it is about measuring progress through the amount of validated learning created to date.
For those working with innovation in established organisations it is about creating reusable knowledge and organisational value.
Designing sound experiments is critical to creating valid and reusable knowledge.
If you search online for design of experiments you will most likely find resources from the fields of applied statistics or experimental psychology.
Although both are insanely interesting, they require a substantial investment of time.
But you want to learn now, and you want to run an experiment tomorrow.
Well, then this is the guide for you.
This step-by-step guide is the product of a decade of experience, cut down to the very minimum you need to design an experiment to learn about almost anything within a business context.
I've been designing experiments since my university days—I've even written an algorithm for specific experimental design involving mixtures.
Since then I've deployed various experimental designs, from quantitative, to qualitative, to mixed models.
What I've come to realise is that having a structured way to probe how we see the world and maximise learning is the most important part, not the technical details or slavish adherence to them.
That realisation was a pivotal moment in how I coach others to design and conduct experiments efficiently.
This guide consists of three phases and ten steps:
- Phase I: Design the experiment.
Step 1. Define learning goal.
Step 2. Describe who you will learn it from.
Step 3. Detail the experiment you will conduct.
Step 4. Define fail or success criteria.
Step 5. Define time boundary.
- Phase II: Conduct the experiment.
Step 6. Test the experiment.
Step 7. Run the experiment.
- Phase III: Learn from the experiment.
Step 8. Capture and document results.
Step 9. Analyse and interpret the results.
Step 10. Decide about next steps.
Since this guide focuses specifically on designing experiments for maximising learning, you can couple it with any other tool you are using.
Phase I: Design the experiment

Step 1. Define learning goal.
What do we want to learn?
What are we assuming?
What is the research objective or falsifiable hypothesis?
Good experiments begin with a clear learning goal.
Assumptions are questioned, hypotheses are tested.
You can translate your assumption into a falsifiable hypothesis by restating it with numbers.
I suggest using one of the following two formats:
- Hypothesis format from The Real Startup Book:
The change - the metric - the impact - the timeframe.
Example:
If we add a lock icon next to the credit card information, the completion of the checkout process will increase by 15% in 3 months.
- XYZ format by Alberto Savoia:
At least X% of Y will Z.
Example:
At least 15% of dog owners will add a six-pack of beer for dogs for $4 when they buy dog food.
If you find it difficult to write a falsifiable hypothesis, that might be a signal that you should do more thinking about your learning goal and underlying assumptions.
If you have multiple falsifiable hypotheses for a single assumption, then I suggest designing an experiment for each hypothesis. That is usually cheaper and faster than designing a single experiment to test multiple hypotheses.
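To make the format concrete, here is a minimal sketch in Python of capturing a hypothesis in the XYZ format so that the numeric threshold is explicit and can be checked against the result later. The class, names, and numbers are illustrative additions, not part of the original formats.

```python
from dataclasses import dataclass

@dataclass
class XYZHypothesis:
    """Falsifiable hypothesis in the XYZ format: at least X% of Y will Z."""
    x_percent: float   # X: minimum share of the group, in percent
    y_group: str       # Y: the group you will learn from
    z_behaviour: str   # Z: the observable behaviour you expect

    def statement(self) -> str:
        return f"At least {self.x_percent}% of {self.y_group} will {self.z_behaviour}."

    def is_supported(self, responders: int, sample_size: int) -> bool:
        """Compare the observed share against the stated threshold."""
        return 100 * responders / sample_size >= self.x_percent

# Illustrative example, mirroring the beer-for-dogs hypothesis above.
h = XYZHypothesis(15, "dog owners",
                  "add a six-pack of beer for dogs for $4 when they buy dog food")
print(h.statement())
print(h.is_supported(responders=21, sample_size=120))  # 17.5% >= 15% -> True
```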
Step 2. Describe who you will learn it from.
Who are you going to learn it from?
Describe in detail the customer group you will run this experiment on.
The traditional way of describing them is with demographic and psychographic data. Examples of the former are age, gender, occupation, geolocation, income, education, and so on. Examples of the latter are personality, values, opinions, attitudes, interests, and lifestyle.
If you can access both cheaply and quickly, that's great. If not, do not despair; build your understanding of your customer over time, with relentless research and experimentation.
In lean experimentation, at least one of the following descriptors must be present:
- what is the problem this group has,
- what is the underserved need this group has, or
- what are critical jobs-to-be-done this group has.
Focus on these three above, and collect demographic and psychographic data along the way.
Notice that I wrote customer group. That is the most likely group you will be learning from, but not the only one. Depending on the maturity of your business model, other groups of interest might be partners, employees, shareholders, ecosystem participants, and local population.
All the people that match your description form the theoretical population.
Those of them that you can get access to form the study population.
Finally, those that you actually reach with your experiment or other form of research form your sample.
The technical term for the above is sampling.
You don't need to have a PhD in statistics to master it, but it does take some study and practice. Some considerations to be aware of:
- Random selection of sample participants (individuals) from your study population will give you more generalisable results.
- Selecting "first ten" on your list is not random selection.
- Random selection is more important for mature ideas, and when you have sample sizes above 500.
- If you cannot randomise your sample when testing an idea in early stages (before product market fit for example), make a note of it and proceed. Once you come to analysis and interpretation be sure to mention it. Your learning will still be valuable, but it might not be representative of the whole study population.
Doing a lot of experiments or large sample experiments with consistently poor sampling practices can be harmful, since you might be making decisions based on poor—or even worse, misleading—data and interpretations.
In organisations—including startups—it is beneficial if everybody involved with interpreting experiment results is acquainted with sampling basics. Decision makers included.
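To make the difference between random selection and picking the "first ten" tangible, here is a minimal Python sketch; the study population and its size are made up for the example.

```python
import random

# Hypothetical study population: everyone on your list you can actually reach.
study_population = [f"customer_{i}" for i in range(1, 501)]

# Convenience pick: the first ten on the list. Easy, but not random,
# since it inherits whatever ordering bias the list has (sign-up date, alphabet, ...).
first_ten = study_population[:10]

# Random selection: every individual has the same chance of being picked,
# which makes results easier to generalise to the study population.
random.seed(42)  # fixed seed only so the example is reproducible
random_ten = random.sample(study_population, k=10)

print("Convenience sample:", first_ten)
print("Random sample:     ", random_ten)
```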
Step 3. Detail the experiment you will conduct.
How will you learn it?
How will the experiment be executed?
Write a step-by-step, cookbook-style script of how you will do the experiment.
It is easy to bungle an experiment through our own interference, and that's why it is important to have the steps clearly written down.
Easiest way to start is to look at existing experiment designs and use them as inspiration. Here are guides for some common designs:
- Customer discovery interviews (most commonly problem interview).
- Solution interviews.
- Comprehension testing.
- Smoke tests (most commonly with landing page or video).
- Concierge test.
- Wizard of Oz test.
- Surveys (How to Design and Analyze a Survey by Christopher Peters).
- Split or A/B testing (A/B Sensei by Rik Higham).
Reference materials you should have bookmarked:
- The Real Startup Book is a catalogue of various Lean Startup experiments.
- Experimentation Hub is a collection of free online tools for quantitative experimentation.
- Research Methods Knowledge Base covers the entire research process. A good time to consult it is when you are asking yourself "Is my experiment good enough?" or "Can I trust my experiment?"
Step 4. Define fail or success criteria.
What is the smallest response or result that would justify spending more time on this?
Dan Toma and I teach innovators three ways of defining fail or success criteria:
- Extrapolated from the business case. You can use elements of your revenue or profit formula. Alternatively, you can use TAM-SAM-SOM analysis.
- Based on an industry benchmark. You can use data like the industry average, market leaders' performance, and the performance of your competitors and existing alternatives.
- Hippocratic oath. You can use your current performance as the minimum success criterion. In other words, the change you are experimenting with should produce better results (response) than the current solution.
If you have been working on your idea for more than eight weeks and cannot use any of the above, that is a signal that something is wrong.
For nascent and early stage ideas, a fallback criterion you can use is:
What is the smallest response or result that would justify spending more time on this?
Defining fail or success criteria before running the experiment allows you to make a less biased decision about whether to persevere, pivot, or stop.
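As an illustration of the first approach, extrapolating from the business case, here is a back-of-the-envelope sketch in Python. Every figure in it is invented for the example.

```python
# Hypothetical business case: the idea needs 50,000 EUR of annual revenue to be worth pursuing.
required_annual_revenue = 50_000        # EUR, invented figure
price_per_customer_per_year = 100       # EUR, invented figure
required_customers = required_annual_revenue / price_per_customer_per_year  # 500 customers

# Serviceable obtainable market (SOM), assumed for the example.
som_customers = 20_000

# Minimum conversion the idea must eventually achieve.
required_conversion = required_customers / som_customers  # 0.025 -> 2.5%

# Translate that into a success criterion for a smoke test shown to 1,000 visitors.
smoke_test_sample = 1_000
success_threshold = round(required_conversion * smoke_test_sample)  # 25 sign-ups

print(f"Success criterion: at least {success_threshold} of {smoke_test_sample} visitors sign up "
      f"({required_conversion:.1%}).")
```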
Step 5. Define time boundary.
How long will this take?
Write down how long the experiment is going to run.
The length should be based on the sample size and experiment type.
If your experiment will take more than two weeks, then challenge yourself and see if you can design an experiment with the same learning goal that would take less time.
An alternative boundary condition can be reaching the desired sample size.
For example, if you defined that your experiment will be administered to 100 participants, then you can stop once that number has been reached.
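To get a rough feel for whether a sample-size boundary fits inside a two-week time boundary, a quick estimate like the following Python sketch helps; the traffic and participation figures are assumptions for the example.

```python
import math

target_sample_size = 100      # participants needed, as in the example above
daily_visitors = 80           # assumed daily reach of the experiment
participation_rate = 0.10     # assumed share of visitors who actually take part

participants_per_day = daily_visitors * participation_rate
days_needed = math.ceil(target_sample_size / participants_per_day)

print(f"Roughly {days_needed} days to reach {target_sample_size} participants.")
if days_needed > 14:
    print("Longer than two weeks: look for a cheaper design with the same learning goal.")
```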
Phase II: Conduct the experiment

Step 6. Test the experiment.
Are there any showstoppers?
Do questions make sense?
Is it possible to do it within the time frame you allotted?
If there are others doing the experiment with you, make sure they understand the experiment script and the learning goal.
This step is about testing the technical side of your experiment.
For example, if you are sending out an email with a landing page and call to action make sure that all the links work, that tracking cookies and codes work, and so on.
This step is not about getting an early dip into potential outcome.
Don't bias yourself unnecessarily.
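For experiments with a technical component, even a tiny automated check is worth the few minutes it takes. Here is a sketch using only the Python standard library; the URLs are placeholders, not real addresses.

```python
import urllib.request

# Placeholder URLs for the landing page and the links it contains.
urls_to_check = [
    "https://example.com/landing-page",
    "https://example.com/landing-page/sign-up",
    "https://example.com/thank-you",
]

for url in urls_to_check:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except Exception as error:  # broken link, DNS failure, timeout, ...
        print(f"FAIL {url}: {error}")
    else:
        print(f"{'OK  ' if status == 200 else 'WARN'} {url}: HTTP {status}")
```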
Step 7. Run the experiment.
Go and do it!
Resist the urge to interfere and deviate from your experiment script too much.
Document any deviations, so you are aware of them during analysis and interpretation of results.
Discipline pays off in the long run.
Phase III: Learn from the experiment

Step 8. Capture and document results.
Write the numerical value(s) of the result and objective observations you made during the experiment.
No interpretations, no fluff, just write them down as they are.
Here are some examples:
7/11 interviewees ranked problem #2 as the top issue.
7% (432/6177) of website visitors left their email.
21% (23/111) of commuters purchased the £17 milkshake with thin straw.
Some examples of objective observations:
On question 7 customer changed posture, crossing arms, leaning backwards, and gazing to the right for two seconds before answering.
During the interview interviewee's boss was walking in the background. Interviewee could see him through the windowed wall of the meeting room. Although meeting rooms are claimed to be soundproof, it is possible to hear muffled voices if you are by the doors.
When we laid down six cards with problem statements customer reached out for card #3 the moment we placed it on the desk.
Resist the urge to include your interpretations in this step.
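If you run many experiments, capturing results in a consistent, structured form makes them much easier to revisit later. Here is a minimal Python sketch; the field names, file name, and values are illustrative, not a prescribed schema.

```python
import csv
from datetime import date

# One row per experiment, raw results only; interpretations come in Step 9.
result_record = {
    "experiment_id": "EXP-007",                      # illustrative identifier
    "date_completed": date.today().isoformat(),
    "hypothesis": "At least 15% of website visitors will leave their email.",
    "sample_size": 6177,                             # illustrative counts
    "responders": 432,
    "observed_rate": round(432 / 6177, 4),           # roughly 7%
    "success_criterion": 0.15,
    "observations": "Traffic spike on day 3 from an unplanned newsletter mention.",
}

with open("experiment_results.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=result_record.keys())
    if f.tell() == 0:  # write the header only if the file is new or empty
        writer.writeheader()
    writer.writerow(result_record)
```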
Step 9. Analyse and interpret the results.
Reflect on what you learned, and write down your interpretation of the experiment result.
This is the moment you are creating validated learning.
Transform into Dora the Explorer or The Grand Inquisitor, whichever imagery you prefer, and ask the following:
- What do the numbers and observations mean?
- If they are below what you expected, why?
- If they are above what you expected, why?
- How do the findings relate to the learning goal?
- What was clarified?
- What was not clarified?
- What new facts or uncertainties have surfaced?
- Was there anything you didn't expect at all?
- How does this new knowledge relate to what you currently know?
- What implications does this new knowledge have for the future?
Be specific and write in full sentences.
That will make it easy to reuse the knowledge gained from this experiment.
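When the result is a proportion, a quick check of how much the sample size lets you conclude can sharpen the interpretation. Below is a sketch that computes a 95% Wilson confidence interval using only the Python standard library; the counts reuse the email example from Step 8, and the 15% threshold is an assumed success criterion for illustration.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width

# Illustrative figures: 432 of 6,177 visitors left their email; the criterion was 15%.
low, high = wilson_interval(432, 6177)
print(f"Observed {432 / 6177:.1%}, 95% interval {low:.1%} to {high:.1%}")
print("Meets the 15% criterion" if low >= 0.15 else "Falls short of the 15% criterion")
```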
Step 10. Decide about next steps.
Write down the decision you made after this experiment: a specific, actionable next step.
Pivot? Where to, why, and how?
Persevere? More experiments? If so, which one and what for? If not, what activities, by whom, and by when?
Perish, perhaps? There is no shame in deciding to stop exploring certain opportunities or leads, if it is backed by your findings.
Once you have written down your next action, take a moment to reflect upon the whole experiment.
Read it in reverse order, from the last step to the first.
Then read it in the proper order.
Does the decision you are proposing make sense? Is everything logically connected?
Have you noticed any logical fallacies or cognitive biases?
Give it to a colleague, and ask them to do the same.
If anything pops up then revisit steps 9 and 10.
Where to next?
Use the guide to design several experiments. After seven to nine experiments you won't need the guide any more, except when you get stuck or need some inspiration.
Suggested for your further development:
- Creating reusable knowledge: how to design effective experiments. Innovation teams can generate value in several ways, and creating reusable knowledge is one easy way to do so. In this webinar I go through all the steps outlined above.
- Visual tools for experimentation and innovation accounting. In this post I explain three proven tools for visual overview of experiments, understanding progress, and innovation accounting.
If all of the above are not enough, then you might have to take on some more challenging reading or consider getting professional help. The following two books should be available at your local university library:
- The SAGE Handbook of Qualitative Research. The 2nd edition can be borrowed online for free.
- Design and Analysis of Experiments by Douglas C. Montgomery. You can borrow an old and vandalised copy online for free.
In the future I will update this section with more resources, including templates and checklists. Let me know what's most challenging for you and your team so I can prioritise better.
Step-by-step guide for designing Lean Experiments by Bruno Pešec is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Based on a work at https://www.pesec.no/step-by-step-guide-for-designing-lean-experiments.
In other words, you can republish and link to this guide, even commercially, as long as you provide attribution, and don't change it. If in doubt send me an email and we'll work it out.
