👓 The PLGeek Guide to Product Qualified Accounts (PQAs)

How to use PQAs to drive revenue processes in B2B SaaS

You’ve almost certainly heard or read about Product Qualified Leads (PQLs).

But here’s what you might not have been told:

PQLs alone are insufficient in B2B contexts beyond the prosumer segment.

PQLs have been misunderstood and misused over time.

They’ve defaulted to a definition of “any user of your product who’s crossed some usage/value threshold”.

The problem is that in B2B this signal is noisy: most of these users won’t have any buying authority, and following up with them individually ignores the aggregate usage patterns of individuals across a team/company - which really should be the primary assessment of readiness for sales engagement.

Focusing on PQLs defined like this, without that macro context, leads to poor-quality leads, premature outreach, inefficient sales motions, and user frustration.

This is why Product Qualified Accounts (PQAs) are so important in B2B.

❝

Product Qualified Accounts (PQAs) are existing or prospective customers within your ICP who meet objective scoring criteria that signal sales team involvement would be beneficial in monetising the account.

And with that definition, we can better define PQLs as individual users within an ICP account who signal buying intent and authority.

The ideal scenario is having a PQA with an identified PQL.

PQLs can often be identified within a PQA by the nature of the things they are doing in the product (sending larger volumes of invitations for example).

You can have a PQA with no identified PQLs, but then you’ll need to join the dots with exploratory sales outreach and/or ABM (Account-Based Marketing) to find a buyer within the account.

Either way, PQAs must be at the centre of the model that drives the Product-Led Sales (PLS) process.

So what does that look like? Let’s dig in.

The product-led sales process

Looking at the product-led sales process from 50,000 ft, we can break it down into 4 key steps (a code sketch of the loop follows the list):

  1. Collect data about accounts

  2. Objectively score each account

  3. Act on the subset of accounts that meet scoring criteria

  4. Periodically review performance to improve each of the first 3 steps.
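
Before digging into the details, here’s a minimal sketch of how those four steps might hang together as a simple batch loop. Everything here - names, data shapes, and the threshold - is invented for illustration, not taken from any specific PLS platform.

```python
PQA_THRESHOLD = 70  # hypothetical score cut-off

def collect_account_data(account_id: str) -> dict:
    # 1. Collect: pull behavioural, use-case, firmographic and ecosystem inputs.
    return {"account_id": account_id, "weekly_active_users": 12, "icp_match": True}

def score_account(inputs: dict) -> int:
    # 2. Score: combine inputs into a single PQA score (see Scoring below).
    return 80 if inputs["icp_match"] and inputs["weekly_active_users"] >= 10 else 20

def act_on_pqa(inputs: dict, score: int) -> None:
    # 3. Act: route qualifying accounts to the relevant playbook.
    print(f"Routing {inputs['account_id']} (score {score}) to a playbook")

def run_cycle(account_ids: list[str]) -> None:
    for account_id in account_ids:
        inputs = collect_account_data(account_id)
        score = score_account(inputs)
        if score >= PQA_THRESHOLD:
            act_on_pqa(inputs, score)
    # 4. Review happens on a slower cadence, outside this loop (see Review below).

run_cycle(["acme-inc"])
```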

Let’s take a look at each of these in detail:

Inputs

Inputs are the raw data used to inform the scoring model, and can be categorised into 4 high-level buckets:

1. Behavioural inputs

❓Answers:

  • What are users/teams doing across the product?

📍Source:

  • Behavioural analytics data (instrumented product surfaces)

💡 Tips:

  • Be sure to include ALL surfaces both in and out of the core product (e.g. visiting support policy pages).

  • Include key actions/events (e.g. self-serve purchase, invitations, signups); consider both action/event quantity and rate.

  • Include aggregate signals (e.g. Team activated); see the sketch after these tips.

  • Identify distinct pockets of usage within a single company.
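
To make the aggregate-signal tip concrete, here’s a minimal sketch of rolling user-level events up to the account level - the view PQA scoring needs, rather than any individual user’s activity. Event and account names are hypothetical.

```python
from collections import defaultdict

# Illustrative raw user-level events from your behavioural analytics pipeline.
events = [
    {"account": "acme-inc", "user": "dana@acme.com", "event": "invite_sent"},
    {"account": "acme-inc", "user": "sam@acme.com", "event": "invite_sent"},
    {"account": "acme-inc", "user": "dana@acme.com", "event": "project_created"},
]

# Roll user events up to account level.
account_signals = defaultdict(lambda: defaultdict(int))
for e in events:
    account_signals[e["account"]][e["event"]] += 1

print(dict(account_signals["acme-inc"]))  # {'invite_sent': 2, 'project_created': 1}
```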

2. Use-Case inputs

❓Answers:

  • Why are users within the account here?

📍Source:

  • Self-declared (e.g. via onboarding questions)

💡 Tips:

  • Focus on a small set of use cases you are targeting, but add an ‘other’ option to help identify emerging use cases.

  • If your product supports both individual use cases and team use cases, ensure you can differentiate between them via this data.

  • This is a critical piece of data and important to collect during the new user acquisition/onboarding flow. There’s often a reluctance to collect it for fear of introducing friction - in most cases, consider this necessary friction. Beyond its importance to the PQA process, it’s essential for personalisation.

  • Seek also to discover their use-case context: are they ‘hiring’ your product for work, for personal use, to learn as a student, or something else? (A sketch of capturing this follows.)
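
Here’s a minimal sketch of capturing use-case data at onboarding, assuming a hypothetical set of target use cases plus the ‘other’ escape hatch:

```python
from enum import Enum

# Hypothetical onboarding options; 'other' (with free text) surfaces
# emerging use cases you aren't yet targeting.
class UseCase(Enum):
    TEAM_COLLABORATION = "team_collaboration"  # a team use case
    PERSONAL_PROJECTS = "personal_projects"    # an individual use case
    STUDENT_LEARNING = "student_learning"      # use-case context: learning
    OTHER = "other"

def record_use_case(account_id: str, choice: UseCase, other_text: str = "") -> dict:
    # Persist alongside the account so scoring can differentiate team vs individual use.
    return {"account_id": account_id, "use_case": choice.value, "detail": other_text}

print(record_use_case("acme-inc", UseCase.TEAM_COLLABORATION))
```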

3. Firmographic inputs

❓Answers:

  • Where do users and their teams work?

📍Source:

  • Enriched data (e.g. via Clearbit, ZoomInfo, etc.).

  • Self-declared (e.g. via onboarding questions).

💡 Tips:

  • This is a vital piece of data since it enables ICP matching (sketched below).
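
A minimal sketch of ICP matching on enriched firmographic data - the size band and industries below are invented purely for illustration:

```python
# Invented ICP criteria: a target size band and industries.
def matches_icp(firmographics: dict) -> bool:
    return (
        50 <= firmographics.get("employee_count", 0) <= 5000
        and firmographics.get("industry") in {"software", "fintech"}
    )

enriched = {"domain": "acme.com", "employee_count": 240, "industry": "software"}
print(matches_icp(enriched))  # True
```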

4. Ecosystem inputs

❓Answers:

  • What platform ecosystems do users/teams/accounts exist in?

📍Source:

  • Connected integrations

💡 Tips:

  • Not relevant to all products, but even where it is, it’s often overlooked. Knowing this can enrich sales conversations and provide another important signal for ICP matching (e.g. does this account use GitHub, Bitbucket or GitLab to manage their source code? We support all three SCMs, but our current ICP focus is narrowed further to companies that use GitHub). A sketch of turning integrations into features follows.
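
Extending the SCM example, here’s a sketch of turning connected integrations into features that can feed both the scoring model and a narrowed ICP match. Integration names are illustrative.

```python
def ecosystem_features(connected_integrations: set[str]) -> dict:
    # Boolean features derived from the account's connected integrations.
    return {
        "uses_github": "github" in connected_integrations,
        "uses_bitbucket": "bitbucket" in connected_integrations,
        "uses_gitlab": "gitlab" in connected_integrations,
    }

features = ecosystem_features({"github", "slack"})
print(features["uses_github"])  # True - fits the narrowed ICP in the example above
```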

Scoring

This is the application of a model where all input data is combined and processed in near real-time to determine a PQA score.

It’s common to start with a regression analysis on past monetisation events, but expect a lot of iteration here - it takes time (more on that below) to reach a place of high confidence and repeatability.

If you don’t have enough historical data for robust analysis, start by focusing on the signals that you know (from speaking with existing customers) are demonstrative of significant value being realised.
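
Here’s a minimal sketch of both starting points: a logistic regression fitted on past monetisation outcomes, plus a hand-weighted heuristic fallback for when history is thin. The features, weights, and data are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per account: [weekly_active_users, invites_sent, icp_match]
X_history = np.array([[2, 0, 0], [15, 6, 1], [40, 12, 1], [5, 1, 0]])
y_history = np.array([0, 1, 1, 0])  # past monetisation outcomes (converted or not)

# Path 1: regression on past monetisation events.
model = LogisticRegression().fit(X_history, y_history)

def score_with_regression(features: list[float]) -> float:
    # Probability of monetisation, scaled to a 0-100 PQA score.
    return float(model.predict_proba([features])[0, 1]) * 100

# Path 2: heuristic fallback when history is thin - weight the signals
# existing customers tell you correlate with significant value realisation.
def score_with_heuristic(features: list[float]) -> float:
    wau, invites, icp = features
    return min(100.0, 2.0 * wau + 5.0 * invites + 30.0 * icp)

print(round(score_with_regression([20, 8, 1]), 1))
print(score_with_heuristic([20, 8, 1]))  # 100.0 (capped)
```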

Scoring will never be perfect - expect a continuous effort to improve.

Part of the scoring model will be ICP matching. I find it useful to visualise this in the context of the broader market:

Note: I use the word Accounts here to mean clustered usage of a team/company - that could be amongst already paying customers or those on a free plan.

PLS platforms such as Pocus, Correlated, Groundswell, and Endgame can help score and segment PQAs.

Companies with mature data science functions might instead take a homegrown approach and build rather than buy, but it can be a big lift, so budget for ongoing resources to build and maintain it. Most times, I’d not recommend the build path.

This step is also where we handle the segmentation and routing of PQAs. More on that next…

Action

The action you take on the subset of accounts that meet specified scoring criteria should be codified in Playbooks.

Playbooks document an end-to-end flow for how to engage with a category of PQAs and are how the scoring is operationalised by sales teams.

So playbooks might target specific scenarios, such as:

  • Consolidating multiple pockets of usage within a company.

  • Driving free to paid conversion in accounts signalling a specific use case.

  • Plan Expansion - driving upsell of accounts from one plan to another given observation of high-propensity behaviour or likelihood of exceeding a usage threshold.

  • Product Expansion - driving cross-sell of products given signals of use case relevance.

Different playbook variations will exist for cases where PQLs (users identified as buyers) exist vs cases where they don't. You might also have different variations based on opportunity size, geolocation, and so on.

Playbooks have multiple steps. Some steps will be automated (e.g. send a message to a Slack channel to notify the sales team of a new PQA or notify sales management when PQAs have no assigned owner). Some steps will be semi-automated (e.g. BDR initiates the send of a templated Salesloft email to one or more PQLs within the account). Other steps will be manual (e.g. calling a PQL within the PQA).

Some steps may also have SLOs associated with them (e.g. time to initial outreach after a user hand raise).
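
A minimal sketch of codifying a playbook as ordered steps with an automation level and an optional SLO - the step names, channel, and SLO value are all illustrative:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    name: str
    automation: str                  # "automated" | "semi_automated" | "manual"
    slo_hours: Optional[int] = None  # e.g. time to initial outreach

@dataclass
class Playbook:
    name: str
    steps: list[Step] = field(default_factory=list)

free_to_paid = Playbook(
    name="Free-to-paid conversion",
    steps=[
        Step("Notify #pls-alerts Slack channel of new PQA", "automated"),
        Step("BDR sends templated email to identified PQLs", "semi_automated", slo_hours=24),
        Step("Call an identified PQL within the account", "manual"),
    ],
)

for step in free_to_paid.steps:
    slo = f" (SLO: {step.slo_hours}h)" if step.slo_hours else ""
    print(f"[{step.automation}] {step.name}{slo}")
```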

I segment PQAs as a subset of ICP accounts in the previous graphic. This is important.

Just because we identify an account in our ICP doesn’t mean it’s the optimal time to engage.

It’s critical to wait for the right behavioural signals (could be an in-product hand raise, some aggregate usage threshold, specific events, etc.) and use-case data to engage.

Engaging too soon can kill any chance of success for your product-led sales process.

It's tempting for companies transitioning from a sales-led model - who suddenly have access to what looks like a rich and voluminous source of leads from product usage - to drop everything and rush to pursue those leads relentlessly.

You're guaranteed to alienate users and accounts by taking that approach.

It's also highly inefficient.

Bonus tip: Maintain appropriate PQA fidelity. Different accounts will qualify as a PQA for different reasons - ensuring this rich information is maintained and made available to revenue teams is critical in helping them contextualise any and all communication.
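
A sketch of maintaining that fidelity: return the reasons an account qualified alongside its score, so revenue teams can contextualise outreach. The thresholds and reason strings are invented for illustration.

```python
def score_with_reasons(signals: dict) -> tuple[int, list[str]]:
    # Accumulate a score while recording *why* each point was awarded.
    score, reasons = 0, []
    if signals.get("invites_sent", 0) >= 5:
        score += 40
        reasons.append("High invitation volume (possible PQL among the senders)")
    if signals.get("hand_raise"):
        score += 60
        reasons.append("In-product hand raise")
    return score, reasons

score, reasons = score_with_reasons({"invites_sent": 8, "hand_raise": True})
print(score, reasons)  # 100 ['High invitation volume ...', 'In-product hand raise']
```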

Review

The highest-performing revenue teams treat the entire PQA process as an ongoing series of experiments used to optimise revenue.

On a regular cadence, learnings from sales engagements (closed-won or closed-lost, time-to-close, and so on) should be used to improve the process via a review and feedback loop. The ideal cadence aligns with your typical sales cycle length, but reviews should happen at least every 6 months.

Key areas of feedback will be around:

  • Inputs: Is all the data we’re collecting useful for the model? Can we improve the signal-to-noise ratio by removing data that isn’t meaningful? Is there data we should have in the model that we don’t have today? What would be the best way to get that data?

  • Scoring: Are we utilising the data in the right way? Are our thresholds appropriate? Does the scoring effectively support the playbooks we have or that are emerging?

  • Action: What opportunities do we have to improve how we act on a given PQA? Can some of the manual or semi-automated steps now be fully automated?

In a complex system like this, unless you have strong conviction that changes to one part won't have knock-on effects elsewhere, try not to change too much at once - that way you can isolate the impact of each improvement.

For improvement around inputs and scoring, the internal models of PLS platforms can support the feedback loop and enable easier testing of changes.

Note: There will be accounts that never reach the PQA scoring threshold that routes them to GTM teams, but still convert. This will often be because they're not in the ICP (maybe smaller accounts). However, ICP customers will sometimes self-serve monetise without reaching the PQA threshold. Another feedback loop is important here: learn whether there are patterns of usage that should be scored higher in the model, so GTM teams can get involved to drive higher-value contracts.
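
A sketch of that feedback loop: flag ICP accounts that monetised without ever crossing the PQA threshold, as candidates for model review. Field names and the threshold are illustrative.

```python
# Illustrative account records from your warehouse.
accounts = [
    {"id": "a1", "icp": True, "peak_score": 45, "converted": True},
    {"id": "a2", "icp": True, "peak_score": 85, "converted": True},
    {"id": "a3", "icp": False, "peak_score": 30, "converted": True},
]

PQA_THRESHOLD = 70
under_scored = [
    a for a in accounts
    if a["icp"] and a["converted"] and a["peak_score"] < PQA_THRESHOLD
]
print([a["id"] for a in under_scored])  # ['a1'] - candidates for model review
```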

Bringing it all together

Mapping the four steps out in more detail gives us this diagram representing the end-to-end flow for product-led sales in B2B SaaS.

If you’ve adopted (or are in the process of adopting) a product-led sales process, and are utilising PQAs (or have a contrary opinion!), I’d love to hear from you and potentially feature you in a future Product-Led Geek newsletter post. Just drop me a line at [email protected].

Thank you for reading The Product-Led Geek. This post is public, so feel free to share it.

Today's listen:

Hila Qu with an ultimate guide to adding a PLG motion on Lenny’s podcast
