

Product Qualified Accounts at Risk of Churn (aka the inverse PQA)
Flipping PQAs on their head to predict and prevent churn or downsell
I’m currently running a giveaway - if you’d like to have a chance to win a FREE 1-year membership to Reforge, check out this post.
PQA Recap
I’ve been writing recently about Product Qualified Accounts (PQAs) and their central role in the Product-Led Sales (PLS) process. If you haven’t read that post yet, start there.
PQAs provide the account-level signals necessary to drive efficient GTM team engagement with potential revenue opportunities.
But one thing I haven’t seen discussed is how the same methodology can be applied in reverse to existing paying customers to help predict voluntary churn (or down-sell) and intervene to prevent it.
As a reminder, here’s how I define a PQA:
PQA: Product Qualified Account; an existing or prospective customer within your ICP that meets some objective scoring criteria, signalling that GTM team involvement would be beneficial to help monetise.
Now let’s add a new acronym and definition to our lexicon:
PQARC: Product Qualified Account at Risk of Churn; an existing customer that fails to meet some objective scoring criteria, signalling that GTM team involvement would be beneficial to help retain.
Note that unlike PQA, ICP inclusion is not part of the definition. This is because the same approach can be used to intervene in both ICP and non-ICP churn risk accounts. However, depending on your risk tolerance, intervention priority should be given to accounts in the segment of the market that best aligns with your product strategy.
Isn’t this just a customer health score?
CS teams have employed customer health scores for years to help monitor and diagnose account health and proactively manage existing customer relationships - but most often using processes completely separate from what happens in the typical sales cycle.
Here we apply the same principles in a product-led context.
This inverse PQA approach means the GTM teams at large (across all facets of the customer lifecycle) can be aligned on the quantitative process, and the data here can still feed into a broader account 360 that factors in qualitative signals from the account.
You apply the same 4-step flow (Input, Scoring, Action, Review) in this churn prediction and prevention scenario.
PQA inversion
Let’s take a look at the 4 types of inputs from the PQA process, and see how they can be inverted and applied for PQARC:
1. Behavioural inputs
❓Answers:
What are users/teams NOT doing across the product that they previously were, or that we would expect a healthy account of their shape and tenure to be doing?
📍Source:
Behavioural analytics data (instrumented product surfaces)
💡 Tips:
Like with PQAs, include ALL surfaces both in and out of the core product (e.g. visiting support policy pages).
Include key actions/events (e.g. invitations, team signups, core value events); consider both action/event quantity and rate. Has the volume dipped below an expected healthy threshold? Has the rate decelerated below a level appropriate to the team/company? (See the sketch after this list.)
What counts as appropriate will often need to be determined by the other signals. For example, once a product is well adopted across the expected levels in a company, it would be natural for new invitations and signups to drop to close to zero. Firmographic data on company size, along with self-declared data on team size and use case, will help you define what is appropriate in the scoring model.
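To make the volume and rate checks concrete, here’s a minimal sketch in Python. It assumes weekly event counts per account are already aggregated; the function names, thresholds and sample data are hypothetical illustrations, not a reference implementation.

```python
def volume_dip(weekly_counts: list[int], healthy_floor: float) -> bool:
    """Flag if recent volume has fallen below the expected healthy threshold."""
    recent_avg = sum(weekly_counts[-4:]) / 4  # average of the last 4 weeks
    return recent_avg < healthy_floor

def rate_deceleration(weekly_counts: list[int]) -> float:
    """Ratio of the recent 4-week average to the prior 4-week average.
    Values well below 1.0 indicate a decelerating usage rate."""
    prior_avg = sum(weekly_counts[-8:-4]) / 4
    recent_avg = sum(weekly_counts[-4:]) / 4
    return recent_avg / prior_avg if prior_avg else 1.0

# Example: invitations per week for one account, most recent week last.
invites = [12, 10, 11, 9, 5, 4, 3, 2]
print(volume_dip(invites, healthy_floor=8))  # True: volume has dipped
print(round(rate_deceleration(invites), 2))  # 0.33: sharp deceleration
```

In practice the healthy floor would come from the firmographic and self-declared context discussed below, not a hard-coded constant.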
2. Use-Case inputs
❓Answers:
Has the reason for users/teams being here shifted over time?
📍Source:
Self-declared (e.g. via onboarding questions)
💡 Tips:
The use cases that early adopters within an account specify in your surveys may differ from those specified by users who join later. Comparing that shift (both self-declared and observed in behavioural data) with what you expect healthy use case evolution to look like is an important input to the scoring.
Being able to differentiate individual and team use cases is still important - healthy B2B accounts will see a bias toward team use cases as adoption grows within an account (see the sketch after this list).
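A hedged sketch of that comparison, assuming use cases are self-declared as either “individual” or “team” in onboarding surveys; the labels, cohorts and data are made up for illustration.

```python
from collections import Counter

def team_share(use_cases: list[str]) -> float:
    """Fraction of self-declared use cases that are team-oriented."""
    if not use_cases:
        return 0.0
    return Counter(use_cases)["team"] / len(use_cases)

early_cohort = ["individual", "team", "team", "individual"]        # early adopters
later_cohort = ["individual", "individual", "individual", "team"]  # recent joiners

# Healthy B2B accounts should bias toward team use cases as adoption grows;
# a falling team share is a candidate input to the PQARC scoring model.
if team_share(later_cohort) < team_share(early_cohort):
    print("Use-case mix shifting toward individual use - potential risk signal")
```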
3. Firmographic
❓Answers:
Where do users and their teams work? Is the customer growing (if so, we might expect to see adoption and engagement evolving in line with that growth)?
📍Source:
Enriched data (e.g. via Clearbit, ZoomInfo etc.).
Self-declared (e.g. via onboarding questions).
💡 Tips:
With PQARC, firmographic data provides segmentation to prioritise intervention on ICP accounts.
Additionally, it contextualises behavioural data so that scoring is appropriate - a healthy signal for one account might signal significant risk for another, based on what we know about the respective companies (illustrated in the sketch after this list).
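Here’s one illustrative way to encode that context: the same behavioural value is judged against a threshold appropriate to the account’s segment. The segment names and thresholds are assumptions for the sake of the example.

```python
# Hypothetical healthy floors for weekly active users, by segment.
HEALTHY_WAU_FLOOR = {
    "smb": 5,           # 5 weekly actives in a 5-person company is healthy
    "mid_market": 25,
    "enterprise": 100,  # 5 weekly actives in a 5,000-person org is a risk
}

def engagement_risky(weekly_active_users: int, segment: str) -> bool:
    """The same raw value scores differently depending on firmographics."""
    return weekly_active_users < HEALTHY_WAU_FLOOR[segment]

print(engagement_risky(8, "smb"))         # False: healthy for an SMB
print(engagement_risky(8, "enterprise"))  # True: same value, significant risk
```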
4. Ecosystem
❓Answers:
What platform ecosystems do users/teams/accounts exist in, and have those changed?
📍Source:
Connected integrations
💡 Tips:
Like with PQAs, this signal is not relevant to all products, but where it is, ecosystem data can be an important signal of how well (or otherwise) your product is embedded in your customers’ processes (see the sketch after this list).
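As a simple illustration, a drop in connected integrations can feed the score; the integration names here are hypothetical.

```python
# Integrations connected at the last review vs. now (hypothetical names).
previously_connected = {"slack", "jira", "github"}
currently_connected = {"slack"}

removed = previously_connected - currently_connected
if removed:
    # Disconnected integrations suggest the product is being unwound from
    # the customer's processes - a candidate input to the PQARC score.
    print(f"Integrations disconnected: {sorted(removed)}")
```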
Scoring and Action
How you leverage the input data for PQARC scoring and action ultimately follows the same workflow as for PQAs, with signals to engage driving churn prevention playbooks.
In the above graphic I depict this as something distinct from the PQA scoring and action flow. In reality it’s all the same model, but different parameters and scoring kick off different workflows (playbooks).
Like with PQAs, the timing of engagement with PQARCs is important - but in this case, proactive intervention is a good thing, so scoring should reflect signals of trending toward thresholds rather than waiting for the thresholds to be reached. Acknowledge that the plays you might need to run (and the effort you need to invest) to get a customer back on track will differ by how far off track they are (see the sketch below).
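A minimal sketch of “trending toward a threshold” scoring: fit a simple linear trend over recent weeks, estimate when the metric will cross the risk threshold, and route the account to a playbook sized to how far off track it is. The linear extrapolation and playbook names are simplifying assumptions.

```python
def weeks_until_threshold(values: list[float], threshold: float) -> float | None:
    """Extrapolate a linear trend over recent weekly values and estimate the
    number of weeks until the metric crosses the threshold.
    Returns 0.0 if already at/below it, None if flat or improving."""
    slope = (values[-1] - values[0]) / (len(values) - 1)
    if values[-1] <= threshold:
        return 0.0
    if slope >= 0:
        return None
    return (threshold - values[-1]) / slope

def choose_playbook(weeks: float | None) -> str:
    if weeks is None:
        return "no action"
    if weeks == 0:
        return "high-touch save play"    # already past the threshold
    if weeks <= 4:
        return "proactive CSM outreach"  # trending toward the threshold
    return "automated in-product nudge"

weekly_core_value_events = [40.0, 36.0, 31.0, 27.0]  # most recent week last
eta = weeks_until_threshold(weekly_core_value_events, threshold=15.0)
print(eta, "->", choose_playbook(eta))  # ~2.77 weeks -> proactive CSM outreach
```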
Review
Like with the core PQA flows, the feedback loops to improve the model are critical. Periodic reviews of the accounts you intervened with, to understand patterns around save (retain), churn and down-sell, are needed to create feedback that ensures:
Input data is sufficient to support the model
The scoring model itself is effectively predictive of when intervention is beneficial
The playbooks are optimally resulting in retained accounts
Note: Look out for accounts that the model doesn’t flag and route but that still end up churning. These false negatives need investigation - is the root cause largely unavoidable and unpreventable (e.g. the customer gets acquired and the parent company has standardised on an alternative solution), or have you learnt something that should be reflected in the scoring model? (A sketch of this audit follows.)
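A sketch of that audit, assuming you can join churned accounts against whether the model flagged them; the account records and churn reasons are hypothetical.

```python
# Churned accounts joined with whether the PQARC model flagged them.
churned_accounts = [
    {"name": "acme",    "flagged_as_pqarc": True,  "churn_reason": "budget cut"},
    {"name": "initech", "flagged_as_pqarc": False, "churn_reason": "acquired"},
    {"name": "globex",  "flagged_as_pqarc": False, "churn_reason": "low adoption"},
]

for acct in churned_accounts:
    if not acct["flagged_as_pqarc"]:
        # Unavoidable causes (e.g. acquisition) may be acceptable misses;
        # preventable ones point to a gap in the inputs or scoring model.
        print(f"False negative: {acct['name']} ({acct['churn_reason']})")
```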
I’d love to hear from anyone that has experimented in this area and potentially feature you in a future Product-Led Geek newsletter post. Just drop me a line at ben@plgeek.com.
Today’s listen:
Ben Chestnut on the Mailchimp story on Guy Raz’s How I Built This podcast
3 interesting reads:
First Round Review on Fear within the executive team
Kevan Lee on Network Effects
Jenn Dearing Davis on Category creation on Dear Stage 2