ICE and a portfolio of bets
ICE is a firm favourite among prioritisation frameworks in growth.
In this post, I’ll share a method that augments ICE data so that teams can better balance their portfolio of growth bets.
This post is presented by Sprig - build a product people love.
Sprig is an AI-powered platform that enables you to collect relevant product experience insights from the right users, so you can make product decisions quickly and confidently. Next-gen product teams like Figma and Notion rely on Sprig to build the user-centric products people love.
At Snyk, Sprig was an indispensable part of our product stack, and now, readers of The Product-Led Geek can get 10% off!
With the ICE prioritisation framework, growth teams rank experiment and innovation work in three dimensions:
Impact (I): How much impact we hypothesise the work will create.
Confidence (C): How confident we are that the work will deliver the hypothesised impact.
Ease (E): How easy we estimate the work will be to deliver.
It’s typical for each to be individually scored on a scale of 1-10.
A simple multiplication leads to an overall ICE score for each idea in the growth backlog.
When you rank the backlog based on the ICE score, you get a prioritised list of work for the growth team to tackle.
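As a sketch, scoring and ranking a backlog this way takes only a few lines. The idea names and scores below are illustrative placeholders, not from any real backlog:

```python
# Minimal ICE scoring sketch: each dimension is scored 1-10 and the
# three scores are multiplied to produce an overall ICE score.
backlog = [
    {"idea": "Improve onboarding checklist", "impact": 6, "confidence": 8, "ease": 7},
    {"idea": "New referral loop",            "impact": 9, "confidence": 4, "ease": 3},
    {"idea": "Tweak CTA copy",               "impact": 3, "confidence": 7, "ease": 9},
]

for item in backlog:
    item["ice"] = item["impact"] * item["confidence"] * item["ease"]

# Rank the backlog by ICE score, highest first.
ranked = sorted(backlog, key=lambda item: item["ice"], reverse=True)

for item in ranked:
    print(f"{item['idea']}: {item['ice']}")
```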
It’s very simple, and in our growth context, simple is most often a good thing.
ICE is a great place for new growth teams to get started when it comes to managing their backlog.
The ICE problem
But ICE also has a problem with bias.
In fairness, ICE was never intended to provide absolute prioritisation (despite some teams using it that way) but rather to support relative prioritisation between competing ideas.
Other rubrics are important to bring needed context to ICE scoring.
We can plot out a 2x2 of Impact vs Risk, where we define Risk as:
Risk = 1 / (Confidence × Ease)
ICE correctly discourages product and growth teams from scheduling work that is low impact, low confidence and low ease (Avoid).
ICE also correctly encourages teams to schedule work with high impact, confidence and ease (Quick Wins).
The challenge with ICE is in determining the relative prioritisation of high risk, high impact work (Big Bets) and low impact, low risk work (Caution).
I name that bottom quadrant ‘Caution’ because many teams default to doing lots of work in this area naturally (and ICE can reinforce that), yet it’s often work that barely moves the needle on our growth objectives.
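The quadrants above can be sketched in code. The midpoint thresholds used to split each axis are my own assumption; teams would pick their own cut-offs:

```python
# Sketch of the Impact vs Risk 2x2, assuming 1-10 ICE scores and a
# midpoint threshold on each axis (thresholds are an assumption, not
# from the post).

def risk(confidence: float, ease: float) -> float:
    """Risk as defined above: the inverse of confidence times ease."""
    return 1 / (confidence * ease)

def quadrant(impact: float, confidence: float, ease: float) -> str:
    high_impact = impact > 5
    # With 1-10 scores, confidence * ease spans 1..100; treat risk above
    # 1/25 (confidence * ease below 25) as "high risk".
    high_risk = risk(confidence, ease) > 1 / 25
    if high_impact and high_risk:
        return "Big Bet"
    if high_impact:
        return "Quick Win"
    if high_risk:
        return "Avoid"
    return "Caution"

print(quadrant(impact=9, confidence=4, ease=3))  # a risky, high-impact idea
print(quadrant(impact=3, confidence=7, ease=9))  # safe but low impact
```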
I am not advocating that no work happen here in the Caution quadrant. In fact, being intentional here is where teams can get a fast rate of learning and amass compound gains (optimisation, doubling down).
But, once the quick wins are out of the way, focusing only here is a trap because the bets that can meaningfully change your growth trajectory get ignored.
So, I encourage teams to exercise caution here, because ICE actively discourages teams from taking big bets.
Utilising techniques such as weighted confidence scores is a good idea for many reasons, but it doesn’t actually help with this problem. In most situations, it amplifies it.
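To see why, here's a toy comparison with my own numbers. Squaring confidence is just one example weighting scheme, but any scheme that weights confidence more heavily pushes low-confidence big bets even further down the ranking:

```python
# Toy illustration (illustrative numbers): weighting confidence more
# heavily, here by squaring it, widens the gap between a low-confidence
# big bet and a high-confidence safe win.

def ice(impact, confidence, ease):
    return impact * confidence * ease

def weighted_ice(impact, confidence, ease):
    # Example weighting only: square confidence so low confidence bites harder.
    return impact * confidence**2 * ease

big_bet = (9, 3, 4)   # high impact, low confidence
safe_win = (4, 8, 8)  # modest impact, high confidence

print(ice(*big_bet), ice(*safe_win))                    # plain ICE scores
print(weighted_ice(*big_bet), weighted_ice(*safe_win))  # weighted scores
```

With plain ICE the big bet scores 108 against the safe win's 256; with squared confidence it scores 324 against 2048, so the relative gap widens rather than closes.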
A portfolio-based solution
So, what should growth teams do here?
My favoured approach is to take a step back and use this 2x2 periodically to:
Assess ongoing and planned bets in the current period
Plan bets for the upcoming period
As a growth leader, what I’m looking for is an intentionally balanced portfolio of bets.
Just like we typically approach investment strategy.
In a great growth process, the planned work aligns with company objectives and company and product strategy.
So, depending on the company and market context, leaning into bigger bets can be more (or less) appropriate.
The same holds true with investment strategy, of course.
At different points in an investor's life, their goals will lead to a different balance of risk across their investments.
If, when planning for an upcoming period, we map out every item on the growth backlog, it might look something like this:
And (beyond other constraints and dependencies), we’ll be intentional in not letting our ICE scoring be the sole determinant of our prioritisation.
Instead, we’ll ask and debate an important question:
What balance of bets best aligns with our overall objectives?
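One way to make that debate concrete is to compare the planned mix of bets against a target allocation. A minimal sketch, where the quadrant labels follow the 2x2 above and the target percentages and tolerance are placeholders for a team to debate, not a recommendation:

```python
# Compare a planned portfolio of bets against a target mix, flagging
# quadrants that drift beyond a tolerance (all numbers illustrative).
from collections import Counter

planned = ["Quick Win", "Quick Win", "Caution", "Big Bet", "Caution", "Caution"]

target_mix = {"Quick Win": 0.3, "Big Bet": 0.3, "Caution": 0.4}
tolerance = 0.1

counts = Counter(planned)
total = len(planned)
for name, target in target_mix.items():
    actual = counts[name] / total
    flag = "OK" if abs(actual - target) <= tolerance else "rebalance"
    print(f"{name}: {actual:.0%} planned vs {target:.0%} target ({flag})")
```

In this toy plan, big bets are under-represented against the target and get flagged, which is exactly the drift the 2x2 review is meant to catch.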
It’s also helpful to consider confidence through the lens of the type of experiments you’ll be running. With discovery tests, you’re starting from a place of low signal. A key goal of testing here is to find signals and learn so that you can increase confidence. As you establish signals, your tests typically become more incremental, with objectively higher levels of confidence. These are optimisation tests.
Discovery tests are where you’ll unlock potential new growth opportunities, and where appropriate (for example, where our growth model predicts that existing growth loops are approaching a plateau) we want to ensure we’re investing enough in these ideas.
We’ll also filter ideas that aren’t aligned with our current objectives or area(s) of focus.
And we might end up with something like this:
A balanced portfolio of bets in our focus area(s), where the balance of risk and impact is aligned with our overall objectives.
Note: A great growth process supports filtering and contextualising ICE scoring. I have developed a preferred growth process that I use as a blueprint for helping my advising clients get started with growth, and I’ll be sharing that in an upcoming newsletter post - stay tuned!
Simplicity is one of the things that makes ICE great.
Just be aware that the same simplicity leads to potential biases, and be proactive in countering them.
This post was presented by Sprig.
Until next time!