North Star Metrics, The Myth of Active Users, and Building with Product Led Growth
Note: This is a repost of an interview I did with a former colleague, now friend, and generally awesome guy CJ Gustafson for his newsletter Mostly Metrics. If you haven't subscribed to it yet, you absolutely should - his writing is brilliant and his meme game is A+.
I had the privilege to sit down with Ben Williams, VP of Product at Snyk. One of his areas of responsibility is leading the Snyk product growth organization. He’s a brilliant product strategist, and all around really cool British guy. For the last couple of years I’ve always thought of him as a James Bond type who happens to work in Product - equal parts mystery and intelligence (don’t tell him I said that though).
Ben’s led product teams at Telelogic, IBM, Temenos, and, prior to Snyk, CloudBees. He’s been an early practitioner of Product Led Growth while working in DevSecOps for the last decade.
A little bit on where Ben works. Snyk is a cybersecurity company pioneering a new category - developer security. The company embeds security into the software development lifecycle to secure applications from code to cloud and back to code.
The Snyk developer security platform identifies, prioritizes, and automatically remediates vulnerabilities found in open source code, proprietary code, containers, and within cloud and infrastructure configurations.
From a GTM standpoint, what sets the company apart is its highly effective Product Led Growth (PLG) sales motion. Developers can try the software before they buy it, and many times purchase without even talking to someone.
Disclosure: These are Ben’s and my personal opinions and do not necessarily reflect the views of Snyk.
Ben and I touched on a multitude of metrics he uses to gauge company success from a product lens. Enjoy.
North Star Metrics: Deconstruct Revenue into specific, actionable levers
User Experience: Marry Qualitative & Quantitative frameworks for the full view
Habit Moments: Ensure users embed your product into their daily workflows
Free to Paid: Align pricing and packaging to personas, value props, and budget
Efficient Growth: Optimize for LTV to CAC and Product Driven Revenue
Product Led Growth: Realize PLG is a mindset, not something you ‘do’
Active Users: Beware some common traps
Does the Product team have a North Star metric?
Revenue for most companies is the ultimate North Star metric. But the art is in decomposing that North Star and understanding the levers you have at your disposal to drive improvements.
Something we’re increasingly focused on from a product standpoint is the number of Weekly Fixing Orgs. This is defined as the number of teams who use Snyk to fix vulnerabilities in their code each week.
Aligning our North Star to org-based (or team-based) metrics rather than user-based metrics is important for Snyk, since we are ultimately enabling teams to secure the applications they create. That doesn’t mean we don’t obsessively think about developers and their journeys with Snyk; it just means we do so in the context of them most often working in a team.
When I was at CloudBees, a DevOps company, our North Star metric was CI/CD Pipeline Runs. Execution of automated jobs in pipelines was effectively the simplest and most demonstrative measurement of engagement.
What metrics gauge a successful user experience for developers, the ones using your product day to day?
One thing that’s important to acknowledge is that developers aren’t our only users. We also serve the needs of other personas; devs are not typically working in a vacuum. On the whole, we have to be intentional in designing for the development lifecycle in the context of teams and organizations.
AppSec (application security) is a common persona we support. And a core product principle for us is being Developer First: we help AppSec empower their developers to create secure software. That principle, and how it manifests in the Snyk platform, is an important differentiator for us in the market.
I’m also big on pairing quantitative with qualitative insights. When it comes to the actual developer experience there is often no substitute for good qualitative research - interviews, observational studies and the like.
You’re starting to get a good mix between quantitative (usually helps tell you the ‘what’) and qualitative (usually helps explain the ‘why’) with tools like FullStory, which allow you to replay user sessions and effectively observe users interacting with your product without needing to be there with them.
With that being said, the most common quant view to evaluate the developer experience is looking at the experience funnel. This gives us the ability to see the volume of users who successfully complete their workflows. And for those who don’t, we can see where they dropped off. Then the qualitative piece may come in when you know where, but not why, they dropped off.
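To make the funnel idea concrete, here’s a minimal sketch of the kind of analysis described above. The stage names and counts are entirely hypothetical illustrations, not Snyk data:

```python
# Hypothetical funnel stages with the number of users reaching each one
funnel = [
    ("signed_up", 1000),
    ("connected_repo", 620),
    ("ran_first_test", 480),
    ("fixed_a_vuln", 210),
]

def funnel_report(stages):
    """Compute step-over-step conversion rates to spot the biggest drop-off."""
    report = []
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        report.append((f"{prev_name} -> {name}", round(n / prev_n, 3)))
    return report

for step, rate in funnel_report(funnel):
    print(step, rate)
```

A report like this tells you *where* users drop off (here, the weakest step is the last one); the qualitative research then explains *why*.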
Is there a point or a trigger when you know, based on user behavior, that a customer won’t churn?
We have a number of health related indicators. Non-recurring tests are tests that are explicitly initiated by developers to scan their code for vulnerabilities as a part of their workflows. We look at not only the volume but the frequency of non-recurring tests.
We also have an activation and engagement framework where we break things down into what we call setup moments - the things they need to set up, like installing integrations and importing projects - and ‘aha’ moments, where there is demonstrable value.
The key thing we look for here is going beyond the aha moments and into what we call habit moments - behaviors that demonstrate habits being formed around the use of the product - giving us a defined notion of someone being activated.
For us, that’s something we refer to as F30D - fixing within 30 days. Organizations that reach that level are strongly, statistically correlated with long-term retention and increased downstream monetization as well.
And we can then bucket different levels of engagement across organizations, and work to shift users and their teams into higher levels of engagement - ultimately helping them get more value from the Snyk platform.
When you think about moving a user from free to paid how do you decide which features to gate? Is it based on where the budget sits? Is it based on experimentation?
It’s a bit of both. Having a pricing and packaging philosophy is really important as an underlying foundation. We have our Free plan which is intended for individual developers and small teams working on open source projects. We have our Team plan which is intended for scaling teams. We have our Business plan which is teams of teams, perhaps in a business unit. And our Enterprise plan which is typically addressing the types of needs that only become important at a certain level of scale, across an entire company.
There are some things you try to do in the context of a product led strategy, principles that are not hard and fast but general things to consider.
Security is a team sport, and we want developers to work with their teams, so it would be counterproductive to say only an individual developer can use this.
As we move up through levels of organizational scale, the demands on governance and compliance increase, and so features that meet those needs are typically reserved for our Business and Enterprise plans. Of course, features are just one way to differentiate plans; there are typically other variables that can be employed. Overall, I’m a fan of applying the Jobs To Be Done framework to support a value-based approach to pricing and packaging.
When you report the Product team’s progress to company execs, what metrics do you use? Are they the same metrics your team uses to communicate internally day to day?
The short answer is no. I don’t think it helps to present a different picture or a different set of data to execs than the team sees - I’m a big believer in transparency. Day to day, though, you might typically use a different lens.
Execs most often want a macro view, and want to understand how the key metrics relate to topline revenue. To align with that priority, we report back to them on volume of free users, volume of activated teams, PQL pipeline volume, and self-serve revenue.
These things are not necessarily actionable on a day to day basis though. So at the team level, day to day, you have to break things down into pieces that are directly influenceable through experimentation and feature work.
How does your team define “efficient growth”?
LTV to CAC is often useful, particularly from the perspective of how a self-serve business can improve overall efficiency. I think something that’s particularly interesting for us is how efficient the product is at generating monetization opportunities and, ultimately, revenue. We have a metric called the Product Driven Revenue Rate. It describes the percentage of our overall ARR where we saw significant and meaningful activity in the product before a Salesforce opportunity was created.
So when that’s really healthy it means we are able to efficiently rely on the product itself as a primary source of leads. And if that’s the case, even without any outbound sales activity whatsoever, you are able to generate a certain amount of revenue. That’s really powerful for efficiency in a business.
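As a rough illustration of how such a rate could be computed (the opportunity data and field names here are hypothetical, not how Snyk actually instruments this):

```python
# Hypothetical closed-won opportunities: ARR amount plus a flag for whether
# meaningful product activity preceded the opportunity's creation.
opportunities = [
    {"arr": 50_000, "product_activity_before_opp": True},
    {"arr": 120_000, "product_activity_before_opp": False},
    {"arr": 30_000, "product_activity_before_opp": True},
]

def product_driven_revenue_rate(opps):
    """Share of total ARR where product usage preceded the sales opportunity."""
    total = sum(o["arr"] for o in opps)
    product_driven = sum(o["arr"] for o in opps if o["product_activity_before_opp"])
    return product_driven / total if total else 0.0

print(f"{product_driven_revenue_rate(opportunities):.0%}")  # prints "40%"
```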
Do the metrics for product teams at PLG companies drastically differ for those at Top Down Enterprise sales companies?
I think it depends more on culture than on sales structure.
PLG (Product Led Growth) companies will maniacally focus on metrics that quantify user value and user experience, and will optimize for those in the knowledge that they will be leading indicators for future monetization.
There’s nothing that prohibits companies with a strict enterprise sales model from adopting that same mindset and focus.
When I’ve done budgeting in the past, it seems like the ratio between engineering and product employees ends up being 5:1 or 7:1. Something in that ballpark.
In your career have you found there to be an optimal staffing ratio between product and engineering?
Generally accepted wisdom suggests 1 PM to 7 (plus or minus 2) engineers. But it’s definitely something that’s dependent on the specific product or business. API-centric products, or products with a relatively larger back-end surface, might have many more engineers per PM. There’s often a cultural aspect at play too. At the end of the day - like with most things - context is important, and taking a one-size-fits-all approach is rarely appropriate.
Hot takes time - What’s a metric or leading indicator you think product teams over index on?
Something I do see a lot is a focus on active users. And it’s not that it’s inherently wrong, but more the potential it brings for misguided application. One problem I see is an arbitrary definition of ‘active’ - most commonly just the lowest common denominator of any activity in the product, like logging in.
Whereas, a definition that’s centered around users experiencing the core product value is a better proxy.
Another problem is focusing on active users without focusing on the growth accounting behind it: how many of those active users are new… how many are retained from the previous day vs. the previous month… how many have been resurrected after going dormant or churning? Those details matter.
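The growth-accounting decomposition mentioned above can be sketched with simple set arithmetic. This is a minimal illustration with made-up user IDs, assuming you can produce the set of users active in each period:

```python
def growth_accounting(active_now, active_prev, seen_before):
    """Decompose this period's active users relative to the previous period.

    active_now / active_prev: sets of user ids active in each period.
    seen_before: ids ever active *before* the previous period.
    """
    return {
        "new": active_now - active_prev - seen_before,        # first-ever activity
        "retained": active_now & active_prev,                 # active both periods
        "resurrected": (active_now & seen_before) - active_prev,  # back from dormancy
        "churned": active_prev - active_now,                  # dropped out this period
    }

buckets = growth_accounting(
    active_now={"ana", "bo", "cy", "dee"},
    active_prev={"bo", "cy", "eli"},
    seen_before={"dee", "eli"},
)
print({k: sorted(v) for k, v in buckets.items()})
```

The same headline active-user count can hide very different mixes of new, retained, and resurrected users, which is exactly why the decomposition matters.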
Stepping back from that, a common trap is when product teams focus on what you can think of as growth metrics. Growth metrics are basically metrics that describe the overall state of the business, as opposed to product metrics, which characterize how the product behaves.
You can think of growth metrics as a function of product metrics and the number of new users. Active users is a growth metric, as is the total number of tests being run, in a Snyk context. Product metrics include things like activation rates, retention rates, or, to give a Snyk example, the average number of non-recurring tests run by a developer in their first 30 days.
Even more useful is to understand and quantify the growth loops - the acquisition and engagement loops - that fuel your product growth. It’s the metrics around your growth loops that provide much better leading indicators of product health.
Does company scale change the way you think about metrics? Have the metrics you use changed from $25M to $100M in ARR?
Your mindset absolutely changes, but from a metrics perspective I don’t think anything changes philosophically. As you think about drivers for growth, your growth strategy must evolve as you scale revenue. That’s going to require attention and focus on different sets of metrics. An example: at some point in your journey you introduce a new complementary product. Until that point, attach rate is meaningless, but from that point onward it’s very important.
It’s not about a fundamentally different approach, but absolutely, you’re going to be looking at different and new things, and looking at them from different perspectives.
To get a little philosophical on you - if you could put one business related message on a billboard in Silicon Valley or downtown New York what would it be?
I do have one that immediately comes to mind and it’s not from your traditional product leader or business leader. It’s from Oprah Winfrey:
Luck is preparation meeting opportunity.
A big thanks to Ben. If you or someone you know works in a field with fascinating metrics, give me a shout.