How to Develop an Activation Thesis and Prove It
Ideally, all it would take to get activated users would be some hand-stacking and shouting “It’s morphin’ time.”
Annoyingly, it’s a little more difficult. Getting users to an activation milestone—performing some key step(s) that increases their likelihood of retention—is hard enough, but that’s only half the story. Activation milestones aren’t obvious. You have to:
- Develop an initial thesis around what “activated” is for your product
- Define this concept explicitly so it can be tested
- Test and measure your activation thesis
- Either prove or dismiss your thesis according to a predefined goal, then
- If proven, iterate to continually improve your activation thesis (and retention), go back to step 1, and redo this cycle
- If dismissed, pivot to a new thesis, go back to step 1, and redo this cycle
As a PM, your life is forever in this loop, cycling through improvements or pivots to improve the experience for your users and retention for your company.
Here we want to take you through how to think about this process, the pitfalls you are going to face, and how to overcome these issues to get what you need for this to work—data—quickly.
The roadmap to the roadmap to activation
You are building a roadmap to activation for your users. Something they follow, through value and nudges, from A to B to C to “Aha” to “I’m using this every day.”
But before that, we need a roadmap to that roadmap. Lewis and Clark might not have known exactly what they would find, but they knew how they were going to find it. For product managers and their teams, that means using an experimental process to find what successful users do in your product:
The idea of this cycle is that a product manager can quickly experiment and iterate to find optimal solutions. Now, every PM who has ever tried to do this will see a critical flaw in this diagram. It is missing a key component—data.
This entire loop is dependent on data. You even need data before you get into this loop, to know where to start with your initial thesis. But to get data you need to instrument your product, and to instrument your product you need your engineering team.
Stop us if this scenario sounds familiar:
- You don’t have any data to know what events are best to track, so you take an (educated) guess.
- You submit a ticket to the engineering team to instrument those events.
- You’re lucky, and it’s picked up for the next sprint.
- You wait two weeks to get event tracking pushed to production.
- You wait a couple more weeks to get enough data to run the analysis.
The reality is that every arrow in that diagram represents days or weeks of waiting for data or resources. That pace is anathema to the speed PMs need to move at in these scenarios. If you can't get users to stay with your product, things will go downhill quickly. So you need to cycle through the loop quickly, iterating and testing different ideas as you move towards the product equivalent of morphin' time.
To move fast as a product manager, you need two things:
- The ability to instrument the product yourself, so you aren't dependent on an overstretched engineering team.
- The ability to track anything from the get-go, so that no matter how many times the thesis changes, you always have data to test your hypotheses.
These two ideas underpin how Freshpaint works. It can be set up by PMs, and it tracks everything out of the box.
Let’s work through an example of how this might work for developing and proving an activation thesis.
How to run an activation experiment
Let’s say we’re building a video messaging product, a la Loom or Wistia’s Soapbox. What gets people to continue to use the product? This is what we want to figure out.
Developing an initial activation thesis
Your thesis is the concept you believe to be true and that you want to prove.
Your activation thesis has to eventually be actionable, as you have to tie it to events users take in your product. But you don’t need to get bogged down in the specifics immediately. What’s important for your initial thesis is to start to get an understanding of how your users interact with the product, what’s important for them, and how that tracks with your understanding of your product. Though data is critical, don’t discount your knowledge:
Thesis = data + intuition
So even at this point we can instrument our product and start tracking. Since we don't yet have specifics of what we want to track, we can use autotrack to capture every action users take in the product, including the ones we perceive as critical. You can then immediately start sharing this data out to an analytics tool such as Mixpanel. If you are using Mixpanel, bringing up the Insights dashboard after a couple of days will give you a peek into what your users are doing.
To understand activation milestones, we'll probably have to filter our data a little. We might want to a) look at events that happen immediately after a user completes onboarding, and/or b) restrict to events within a specific time frame after onboarding. Time-bound activation milestones, such as Facebook's famous "7 friends in 10 days", are a good option as they add a constraint to both your analysis and your eventual solution.
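To make that filter concrete, here's a minimal sketch in Python, assuming you've exported the autotracked events into a pandas DataFrame with hypothetical user_id, event, and timestamp columns and an assumed "Completed Onboarding" event name (not a Freshpaint-specific API, just one way to slice an export):

```python
# A minimal sketch of the time-window filter described above. Column names and
# the "Completed Onboarding" event are assumptions for illustration.
import pandas as pd

def events_after_onboarding(events: pd.DataFrame, window_days: int = 10) -> pd.DataFrame:
    """Keep only events that happen within `window_days` of a user finishing onboarding."""
    onboarded_at = (
        events.loc[events["event"] == "Completed Onboarding"]
        .groupby("user_id")["timestamp"]
        .min()
        .rename("onboarded_at")
        .reset_index()
    )
    joined = events.merge(onboarded_at, on="user_id", how="inner")
    in_window = (joined["timestamp"] >= joined["onboarded_at"]) & (
        joined["timestamp"] <= joined["onboarded_at"] + pd.Timedelta(days=window_days)
    )
    return joined[in_window]

# events_after_onboarding(events)["event"].value_counts().head(10)
# surfaces the most common post-onboarding actions as candidate milestones.
```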
In this scenario, we see new users recording a video straight after they complete onboarding, which leads us to this initial thesis:
Recording videos is an important step to retention and a potential activation milestone.
What are we missing at this point?
- Any context for why they are recording videos. Great, you have a video recording app and users are recording videos. Problem solved. But why are they doing that?
- What happens next? This is a critical part that will be missing from the initial thesis and ultimately part of what you want an answer to. If a user records a video, what do they do after that? And does that recording directly lead to retention?
Now that we have that statement, we can start to turn it into something concrete.
<aside>Good PMs borrow, great PMs steal, as Pablo Picasso (almost) said. It's totally legitimate not to start from zero with an activation thesis. Here's a list of working activation milestones for different types of companies that you can steal as a starting point for your investigation. What we'll see, however, is that the nuance is in how you iterate on this initial thesis over time as you start to understand your users and product better.</aside>
Defining your thesis so you can track it
Your thesis is just a statement, a hunch. But hunches can't be instrumented. PMs often don't think through how actionable their activation thesis is, and then struggle to test it. Two main pitfalls here are:
- Making the time-bound too long. If your analytics now let you move fast, your thesis shouldn't slow you down. An activated user is someone doing something in the first week or two after signing up, not a month later. A user who takes a month to get there isn't really activated, and you're wasting time waiting to collect your data.
- Not making the thesis user-centric. Products don’t activate, users do. You don’t care about what is happening in your product, you care about what your users are doing in your product.
You need an exact definition that you can test. A testable version of the thesis statement above is:
If a user records a video they are 2X as likely to retain.
That is an activation milestone you can test. Do you want to test it? Hmm.
It has one thing going for it: it's not complicated. Most of your users are going to do this, so you'll have a good test population.
But it also has a downside: it's not complicated. It's too easy for your users to complete, so it's going to have little predictive power.
For now, we're going to run with it. There can be a habit of trying to be too clever at this point. This one probably errs too far toward simple, but we want to start collecting data and iterating. That is what this is all about!
In Freshpaint, a PM or product team can instrument this themselves. Autotrack is collecting data on all events, but to single out specific events to share with a product analytics tool like Mixpanel, they can use the Visual Tagger.
All the team would have to do is select and label the ‘Record Video’ button. That’s it. That event would then be logged and sent through to analytics.
<aside>Another mistake PMs make when setting out activation theses is setting them at the wrong point in time: too early. Your onboarding isn’t part of your activation. The user has to be experiencing the value of your product to be activated. Have we made that mistake here? Let’s see.</aside>
Testing your thesis
We’re now collecting data and in a position to test our activation thesis. There is a key component missing from the definition above: what does ‘retain’ mean?
This is going to be different depending on the type of product. For SaaS, it's going to be some kind of continual use, but on the timescale of months (e.g. they use the product 1-2 times per month). For a consumer app, it might be everyday use or a few times a week. For ecommerce, it's going to be X number of purchases in Y number of months.
We’re not building TikTok, we’re building Loom, so if a user comes back and uses the product again, say, 3 times this month, we’re going to say they are retained.
<aside>We've done a little bait-and-switch here. Retention is about long-term use, but as an outcome for our activation test we can't let it stretch out for eternity, so we'll say that as long as they come back a few times this month, we're successful.</aside>
To test effectively, we want to set out our sample size in advance (you can use this calculator here). Working from there and knowing our onboarding rate, we can estimate how long we need to run the test. As we need to measure retention, the test will run longer than a month no matter what the sample size is.
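If you'd rather sanity-check that calculator in code, here's a rough sketch of the same math, assuming a standard two-proportion power calculation with illustrative numbers: a 20% baseline retention rate and the 2X lift from our thesis.

```python
# A rough sketch of the up-front sample-size math. The retention rates below are
# illustrative assumptions, not real numbers from the example product.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_retention = 0.20   # assumed retention for users who miss the milestone
target_retention = 0.40     # the 2X lift the thesis predicts

effect_size = proportion_effectsize(target_retention, baseline_retention)
n_per_group = NormalIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)

print(f"Need roughly {n_per_group:.0f} users per group")
# Divide that by your weekly onboarding volume to estimate how long enrollment
# takes, then add the month you have to wait to observe retention itself.
```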
Here, you’re in a quandary. You want to move fast, but with great time comes great statistical power. So longer experiments are better. While you’re testing, there isn’t a ton you can do as you don’t want to mess with the experiment, so sit back and relax (actually if you’re using Freshpaint, there is a ton you can be doing right now, but we’ll save that for the next section).
The decision phase
Now that we have our data, we can decide whether our initial activation thesis was successful or not.
Reader, it was not. Our analytics showed that most users did record an initial video, but that wasn’t predictive of retention. This is good and bad: bad because we were wrong, but good because we moved in the right direction.
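For reference, here's a minimal sketch of the kind of lift check behind that call, assuming you've pulled a per-user summary out of your analytics tool with hypothetical boolean columns recorded_video and retained:

```python
# A minimal sketch of the lift check. The column names are assumptions about
# how you've summarized the event data per user.
import pandas as pd

def retention_lift(users: pd.DataFrame, milestone: str = "recorded_video") -> float:
    """Retention rate of users who hit the milestone divided by the rate of those who didn't."""
    hit = users.loc[users[milestone], "retained"].mean()
    missed = users.loc[~users[milestone], "retained"].mean()
    return hit / missed

# The thesis predicts retention_lift(users) of about 2 or more.
# "Not predictive" means the ratio came out close to 1 instead.
```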
So we need to iterate on our thesis a little, add some more event tracking, and try again.
This is where a ton of activation experiments fall apart. You can get the resources for any initial instrumentation, but when you have to continually go back to the engineering team to ask for more, it becomes problematic. The scenario that we set out at the beginning continues like this:
- Analysis shows your guess was wrong.
- You submit a ticket to the engineering team to instrument new events.
- It’s not picked up for the next sprint because engineering thinks you're an idiot for needing them again.
- You wait four weeks to get event tracking pushed to production.
- You wait a couple more weeks to get enough data to run analysis.
- Data looks interesting, but you want to tweak the experiment design a little.
- You submit a ticket to the engineering team to instrument new events.
- They torch your car.
- And so on.
So it either takes more time or gets stalled completely and you have to work with what you’ve got.
But… we have Freshpaint. That means every action is already being tracked in the background; we're just not tagging those events and sending them to Mixpanel yet. We can go into Visual Tagger and tag new events ourselves. When we do, Freshpaint uses Time Machine to backfill up to a year's worth of historical data, so you can do your further analysis right now.
In fact, we can be doing this while the experiment is running. As we learn more about our users, we can continue to think of important actions, tag them, and get all the historical data Freshpaint's been capturing into Mixpanel. Let's say we learn from some customer interviews: "Huh, it's not just recording videos that users are doing, some are also sharing them. I wonder if that correlates with retention?" We already have that data to analyze; we just didn't know it was important until now.
We can tag the "Share Video" button and send that data to Mixpanel to immediately see which of the users who recorded videos over the last month were also sharing them.
We can re-run our analysis to see how this "Record + Share" cohort correlates with retention compared to our "Record Only" and "No Record" cohorts. Lo and behold, that is how to get activated users in a video recording product: get them to share their recordings.
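Here's a sketch of what that cohort comparison might look like, using a toy stand-in for the per-user table (in practice you'd build it from the backfilled "Record Video" and "Share Video" events rather than hard-coding it):

```python
# A sketch of the three-cohort retention comparison. The DataFrame below is a
# toy placeholder; the real one would be derived from your tagged event data.
import pandas as pd

users = pd.DataFrame({
    "user_id":        [1, 2, 3, 4, 5, 6],
    "recorded_video": [True, True, True, True, False, False],
    "shared_video":   [True, True, False, False, False, False],
    "retained":       [True, True, False, True, False, False],
})

def cohort(row: pd.Series) -> str:
    if row["recorded_video"] and row["shared_video"]:
        return "Record + Share"
    if row["recorded_video"]:
        return "Record Only"
    return "No Record"

users["cohort"] = users.apply(cohort, axis=1)
print(users.groupby("cohort")["retained"].agg(["mean", "count"]))
# If "Record + Share" retains at a meaningfully higher rate than the other two
# cohorts, sharing is the stronger candidate for the activation milestone.
```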
Now we can iterate on that new thesis and go back and start our loop fresh with new ideas.
This is because we've effectively compressed the thesis->definition->experimentation->decision loop in two ways:
- Those arrows no longer correspond to weeks or months of engineering time.
- The experimentation phase no longer runs in serial; it now runs in parallel.
Doing it this way is how Bitrise derisked their roadmap by getting data 14x faster. “With Freshpaint we can answer questions in a few days compared to waiting weeks for the engineering team,” says analytics manager Andras Lendvay.
Hopefully, you can also get double-digit increases in experimentation velocity and activation.
It doesn't stop there. Now that we have our milestone to work towards, we can also use that information to get users to that milestone. We can work on the other half of the story.
We might do that by:
- Connecting our data to Iterable so we can message users who have recorded a video but not yet shared one (see the sketch after this list).
- Testing out new UI designs to highlight the Share button.
- Testing out different onboarding experiences to move people towards sharing immediately.
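For the first of those, here's a minimal sketch of pulling the "recorded but not yet shared" segment, reusing the hypothetical per-user table from the earlier sketches:

```python
# A sketch of building the nudge segment to sync to a messaging tool like
# Iterable. Column names are the same assumed ones used in the sketches above.
import pandas as pd

def nudge_segment(users: pd.DataFrame) -> pd.DataFrame:
    """Users who hit 'Record Video' but never 'Share Video': candidates for a nudge."""
    mask = users["recorded_video"] & ~users["shared_video"]
    return users.loc[mask, ["user_id"]]

# nudge_segment(users) gives the list of user IDs to export to your messaging tool.
```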
As you now have all the data and the ability to pump it into any tool you want, you can experiment quickly and to your heart's content.
Direction over perfection
There is a pit of anxiety that sits within every PM when they are asked to run an experiment like this. What if I’m wrong? What if I’ve forgotten something? What if I have to start again?
Everyone makes mistakes. You’re never going to set up the perfect experiment the first time. That’s what experimentation is–iterating over ideas and methods until you get a helpful answer.
At Freshpaint, we just want to get you to this answer faster by making experimentation easier. Hit us up if you want to see how it would work for your company.