AI is Redefining Dev Tool Adoption (And What To Do About It)

Welcome folks! 👋

This edition of The Product-Led Geek will take 10 minutes to read and you’ll learn:

  • Why dev tools must now serve three distinct user types including coding agents

  • The difference between Augmented DevEx and LLMEx and when each matters most

  • How AI optimisation creates a self-reinforcing loop that determines market winners

And a big thanks to Laura Tacho (CTO @ DX) for review and feedback as I pulled this post together.

Let’s go!

TOGETHER WITH YOUR BRAND

I’m now accepting newsletter sponsorship bookings for H2 2025.

Reach over 7700 founders, leaders and operators working in product and growth at some of the world’s best tech companies including Vercel, PayPal, Adobe, Lovable, Clerk, Canva, Miro, Amplitude, Google, Meta, Tailscale, Twilio and Salesforce.

Please support our sponsors!

GEEK OUT

AI is Redefining Dev Tool Adoption (And What To Do About It)

Something interesting happened to developer tools over the last decade. We stopped pretending that making developers productive was a nice-to-have. It became the whole game.

This shift wasn't subtle. In 2014, if you told a startup founder they should obsess over their API documentation, they'd nod politely and go back to building features. By 2020, companies were hiring entire teams just to make their developer docs better.

Stripe became a $95 billion company partly by making payments integration not suck. Vercel built a business on making deployment one command. The message was clear: great developer experience wins.

But now something new is happening.

Dev tool companies need to serve three distinct types of users:

  1. Developers who have coding expertise

  2. Developers who do not have coding expertise

  3. Coding agents

Note: I didn’t call the second group ‘vibe coders’ because I see vibe coding as a modality that developers of any level of experience and expertise can apply.

And this has some big implications.

Most notably, dev tool companies need to build and optimise for three types of user experience:

The first is what I’ll call Legacy DevEx - optimising for human developers working independently. This is becoming less relevant over time.

The second I’ll refer to as Augmented DevEx - optimising for human developers working with AI assistants.

The third I call LLMEx - making your tools directly consumable by LLMs.

In this post I’m going to focus on Augmented DevEx and LLMEx.

Understanding both is crucial for anyone building or using developer tools today.

How We Got Here

To understand where we're going, we need to understand how we got here.

The story of developer tools has three acts.

Act One was the age of documentation.

In the 1990s and early 2000s, if you wanted to use a database you'd spend days reading manuals, configuring drivers, and debugging connection strings. The assumption was that developers had time to figure things out. They were expensive, sure, but there weren't that many of them, and software projects took years anyway.

Then a few things happened.

  • The internet made distribution free. Suddenly you could build something on Monday and have thousands of users by Friday.

  • Open source ate everything, with tools like React and PostgreSQL becoming industry standards.

  • Third-party APIs made complex services plug-and-play - why spend six months building a payment system when Stripe exists?

  • Venture capital discovered software. When you're burning $500k a month, every day a developer spends fighting with bad tools is money on fire.

The combination was explosive. Developer productivity became a bottleneck for company growth. And when something becomes a bottleneck, money flows to fix it.

So Act Two was the age of experience.

Companies realised that developer time was precious.

This is why Stripe can charge 2.9% + 30¢ per transaction (2.5% + 20p in the UK 🇬🇧) - a premium over bare-metal payment processing - and developers would thank them for it. Seven lines of code to accept payments? Shut up and take my money.
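Those “seven lines” are only a mild exaggeration even today. Here’s a hedged sketch of the modern equivalent using Stripe’s Node SDK and hosted Checkout - the price ID and URLs are placeholders, not anything lifted from Stripe’s docs.

```typescript
// Sketch (not Stripe's official snippet) of accepting a payment via hosted Checkout.
// Assumes STRIPE_SECRET_KEY is set; the price ID and URLs are placeholders.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

const session = await stripe.checkout.sessions.create({
  mode: "payment",
  line_items: [{ price: "price_123", quantity: 1 }],
  success_url: "https://example.com/thanks",
  cancel_url: "https://example.com/cart",
});

// Redirect the customer to session.url to complete payment.
console.log(session.url);
```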

It's why GitHub can charge $21/user/month for features you could technically cobble together with GitLab CE and a Linux server. The premium isn't for storage - it's for the seamless experience.

It's why MongoDB can charge 3-4x for Atlas over what you'd pay to run MongoDB yourself on EC2. Teams gladly pay more to never think about database maintenance.

The formula was simple:

Respect developers' time, and they'll love you for it.

Make their lives easier, and they'll convince their companies to pay you.

By 2020, this wasn't a secret anymore. Every developer tool company tried to be "the Stripe of X." Good documentation became table stakes. CLI tools got beautiful. Error messages started making sense. If you shipped a dev tool with a bad onboarding experience, you'd be roasted on HN, Reddit, or Twitter.

Act Three began with ChatGPT.

Initially, developers used AI as a smarter Stack Overflow. Stuck on a regex? Ask ChatGPT. Can't remember the syntax for a React hook? ChatGPT. It was nice but not revolutionary.

But quickly, distinct patterns emerged that mapped to our three user types:

  1. Experienced developers began using AI as an implementation partner for tools they carefully chose, and delegating some tool choices and technical decisions to AI.

  2. Less experienced developers started delegating most tool choices to AI, focusing on outcomes over technical decisions.

  3. Coding agents became direct consumers of developer tools, requiring their own optimisation patterns.

This split reflected a fundamental shift in how different types of users approach building software.

Experienced developers still want to make careful tool choices, but are increasingly leaning on AI to augment their expertise with implementation support.

Less experienced developers (most that are vibe coding) typically prefer to focus on product outcomes, letting AI handle most of the technical decisions.

With the assumption that Legacy DevEx is becoming less and less relevant (or, put another way, is rapidly evolving with the new realities of AI), this split is creating two parallel tracks in developer experience, each serving different needs and different types of developers.

It’s something I’ve been thinking about a lot, and it’s been reshaping some of my perspectives on dev tool growth.

Augmented DevEx and LLMEx

It’s mid 2025.

You're building a new app.

You need authentication.

You've decided to use Clerk so you check their docs. They have documentation specifically for AI-assisted implementation - prompt templates you can copy into Cursor or ChatGPT.

That's Augmented DevEx.

You're making the architectural decision (use Clerk), and the AI helps you implement it faster by understanding Clerk's AI-optimised documentation.

You need to add charts to a dashboard. This time, you just tell your AI assistant 'add a line chart showing growth of a metric over time.' Because Recharts is easily consumable by LLMs - structured docs, clear use-case descriptions, and a large volume of publicly accessible code examples - its usage is well represented in training data, and models can use tools (web search, context7, etc.) to fetch AI-optimised docs and references in real time. As a result, they can recommend, understand and implement Recharts directly, without you ever visiting the docs.

That's LLMEx - the tool is optimised for AI consumption and implementation.

Both are critical. Neither is replacing the other.

Augmented DevEx and LLMEx aren't competing futures.

They're complementary modes that support developers in their day-to-day workflows.

Augmented DevEx: Amplifying Human Judgment

Augmented DevEx is about making human developers more effective when working with AI assistants. It recognises that humans are still making the important decisions - which tools to use, how to architect systems, what tradeoffs to accept. The AI is an implementation accelerator, not a decision maker.

The best example is what the likes of Clerk, Prisma, Vercel and others are doing with their documentation. They're not just writing for humans anymore. They're creating prompt libraries, code templates formatted for AI consumption, and examples specifically designed to be fed by developers into their coding assistants.

This is smart because it preserves what humans are good at - taste, judgment, understanding business context - while offloading what LLM-powered assistants are good at - syntax, boilerplate, implementation details.

When you choose to use Stripe for payments, you're making a business decision based on reliability, cost, features, and trust. When you copy Stripe's AI-optimised docs into Cursor to implement the integration, you're using Augmented DevEx to execute that decision faster.

Think about the example earlier in this post.

  1. You choose Clerk for auth.

  2. You paste Clerk’s AI-ready prompt into your assistant.

  3. You get a complete AI-generated integration (see the sketch after this list), including:

    • Error handling

    • Type safety

    • Best practices

  4. You review and tailor the implementation as needed.
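To make step 4 concrete, here’s a hedged sketch of the sort of output you’d be reviewing - the middleware slice of a Clerk + Next.js integration, based on Clerk’s documented clerkMiddleware API. The protected routes and matcher are illustrative, not recommendations.

```typescript
// middleware.ts - sketch of an AI-generated Clerk integration for Next.js.
// Route patterns below are examples only.
import { clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";

const isProtectedRoute = createRouteMatcher(["/dashboard(.*)", "/settings(.*)"]);

export default clerkMiddleware(async (auth, req) => {
  // Send unauthenticated users to sign-in before they reach protected pages.
  if (isProtectedRoute(req)) {
    await auth.protect();
  }
});

export const config = {
  // Run on application routes, skipping Next.js internals and static files.
  matcher: ["/((?!_next|.*\\..*).*)"],
};
```

The point isn’t that this code is hard to write - it’s that the assistant gets it right first time because the docs were written with that assistant in mind.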

Some dev tool companies are going even further.

21st.dev’s Magic MCP gives you options from your prompt.

"Here are 7 different variants of this component."

The dev chooses based on taste and requirements. The AI handles the implementation details.

Augmented DevEx is about acknowledging the reality and inevitability of AI-assisted development, and finding ways to make your tools easier for devs to use in that context.

LLMEx: When AI Makes the Choices

LLMEx serves a different need: making your tools the default choice when AI assistants are given autonomy.

Think about some common scenarios where a dev might delegate tool choices to AI:

  • "Add authentication to this React app"

  • "Add storage for media uploads"

  • "Add error tracking"

In each case, the AI chooses between competing tools - Clerk vs Auth0, UploadThing vs S3, Sentry vs Rollbar. These choices aren't random. They're influenced by:

  1. Documentation Quality: How well can LLMs understand your docs?

  2. Implementation Examples: How many successful uses exist in training data?

  3. Context Awareness: Does your tool provide clear use-case guidance?

  4. AI-Ready Integration: How easily can LLMs implement your tool?

The LLM makes that choice based on what it knows - training data, web search results, what's already in the project, what documentation it can understand.

The stakes here are higher than most realise.

Consider what happens when you type into your assistant: "Add authentication to my app"

If your auth tool isn't easily discoverable and implementable by AI:

  • You don't even show up as an option

  • The AI defaults to competitors with better LLM optimisation.

This dynamic is creating a new competitive frontier.

Tools that optimise for AI discovery become the default choice for an entire generation of AI-assisted developers.

Strong LLM optimisation leads to:

  • Higher adoption rates in new projects

  • More mentions in AI-generated code

  • Increased market share in AI-first development platforms

And it’s a self-reinforcing loop:

  1. AI recommends tools with good LLM optimisation.

  2. More developers use those recommended tools.

  3. More code examples and usage data enter the training data.

  4. LLMs become even better at implementing those tools.

  5. This leads to even more recommendations and adoptions.

The cycle continues, making popular tools more popular.

This creates a ‘rich get richer’ effect where tools that invest early in LLM optimisation gain an increasing advantage over time.

This is why we're seeing the emergence of llms.txt files and machine-optimised documentation.

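As an illustration (not any specific vendor’s file), here’s a hedged sketch of the shape an llms.txt typically takes - a title, a one-line summary, then curated links with short descriptions. Every name and URL below is invented.

```markdown
# ExampleTracker

> ExampleTracker is an error-tracking SDK for web and mobile apps. Install the SDK, set a DSN, and exceptions are captured automatically.

## Docs

- [Quickstart (Next.js)](https://docs.exampletracker.dev/quickstart-nextjs.md): install, set the DSN, verify the first event
- [Configuration](https://docs.exampletracker.dev/configuration.md): sampling, environments, release tagging
- [Source maps](https://docs.exampletracker.dev/source-maps.md): upload source maps for readable stack traces

## Optional

- [Self-hosting](https://docs.exampletracker.dev/self-hosting.md): running the backend yourself
```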

These structured, high signal-to-noise formats make it easier for LLMs to:

  1. Understand when to recommend the tool.

  2. Match features to user requirements.

  3. Generate accurate implementation code.

If an LLM can easily understand how to use Sentry but struggles with Rollbar's docs, guess which one it will choose?

Beyond llms.txt, tools are implementing other AI-optimisation strategies:

  1. Semantic Versioning for AI

    • Version-specific implementation guides

    • Breaking change detection

    • Compatibility matrices

  2. Context-Aware Examples

    • Framework-specific implementations

    • Stack-aware code samples

    • Performance implications

  3. Integration Patterns

    • Common use case templates

    • Error handling patterns

    • Security best practices

LLMEx is about influencing the decisions LLMs make when humans delegate specific tasks to them.

The human is still deciding to add error tracking. They're just letting the AI handle the details of which tool and how to implement it.

The goal is becoming the default choice in the AI-assisted dev workflow.

The Interplay - A Tale of Two Developers

The interaction between Augmented DevEx and LLMEx looks different depending on who's doing the development. Take our example of adding auth to an app:

The Professional Developer's Approach:

  1. Frames the problem

    • Confirms auth requirements (email, social, MFA)

    • Evaluates compliance needs (GDPR, SOC-2)

    • Short-lists Clerk, Auth0, and Supabase Auth

  2. Chooses Clerk for

    • First-class webhook support

    • Clear tenancy model

    • Scalable pricing

  3. Designs integration

    • Creates an AuthProvider abstraction (sketched after this list)

    • Sets up webhook handling

    • Plans route protection

  4. Uses AI for implementation

    • Uses the provided prompt from the Clerk docs

    • Generates code

    • Creates test cases

    • Handles edge cases

  5. Reviews and hardens
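To illustrate step 3, here’s a hedged sketch of the kind of AuthProvider abstraction a professional dev might define before handing implementation to the AI - the interface and names are mine for illustration, not from Clerk’s SDK.

```typescript
// Sketch of a thin auth abstraction so application code never imports the
// vendor SDK directly. Names and shapes here are illustrative only.
export interface AuthUser {
  id: string;
  email: string;
}

export interface AuthProvider {
  getCurrentUser(): Promise<AuthUser | null>;
  requireUser(): Promise<AuthUser>; // throws (or redirects) if signed out
  verifyWebhook(payload: string, signature: string): Promise<boolean>;
}

// The AI-generated, Clerk-specific code lives behind this interface, so a
// later provider swap only touches one module.
export class ClerkAuthProvider implements AuthProvider {
  async getCurrentUser(): Promise<AuthUser | null> {
    throw new Error("left to the AI-generated implementation in this sketch");
  }
  async requireUser(): Promise<AuthUser> {
    const user = await this.getCurrentUser();
    if (!user) throw new Error("unauthenticated");
    return user;
  }
  async verifyWebhook(_payload: string, _signature: string): Promise<boolean> {
    throw new Error("left to the AI-generated implementation in this sketch");
  }
}
```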

The Vibe Coder's Journey:

  1. Types one line to their AI assistant and hits enter: “Add a login to my app and make it secure.”

  2. Lets the AI pick the provider, wire everything up, maybe suggest 2FA or social logins, and trusts the default setup.

  3. Hits deploy.

Both approaches are valid in different situations.

The professional developer brings context, standards, and structure, using the AI as a power tool.

The vibe coder leans fully on the AI as a co-pilot and shortcut.

Modern DevEx needs to enable both: precision and abstraction, depth and speed.

And increasingly, it needs to support real-time collaboration between human and AI. I'm particularly excited about the more interactive conversational modes that the likes of Claude Code are providing, where mid-processing you can feed new information to the LLM or tweak the direction it's going.

Which is more important?

For experienced devs, infrastructure tools (databases, hosting, authentication) currently benefit more from Augmented DevEx. These are high-stakes decisions that humans want to make deliberately. Developers choose Postgres or MySQL based on deep requirements. They choose AWS or Vercel based on team expertise and scale needs. AI helps implement these choices but shouldn't make them.

Utility libraries (charting, date handling, form validation) benefit more from LLMEx. These are often commodity choices where "good enough" is fine. If an AI chooses Recharts over Chart.js for a basic bar chart, who cares? The dev probably doesn't want to spend time evaluating charting libraries for simple use cases anyway.

And for those who are vibe coding, the devs in the driving seat approach infrastructure decisions very differently from traditional professional development.

Devs working like this delegate critical infrastructure decisions to AI-powered builder platforms like Lovable, v0, Replit, or Bolt. They'd rather trust the default stack choices and recommendations of these platforms than make potentially costly mistakes.

For them, LLMEx isn't just for utility libraries - it extends to every decision too.

And vibe-coded commercial SaaS apps are being launched like this every day.

This isn't necessarily bad - these platforms are increasingly sophisticated at making sensible, scalable architectural choices. And there's a crucial feedback loop here: even for professional developers making careful tool choices, the quality of AI-generated implementation code depends heavily on having abundant, high-quality examples in the training data. When tools optimise for LLMEx, they create better training examples, which leads to better AI assistance for everyone - including devs who are making deliberate architectural choices.

This represents an interesting dynamic in how different types of developers approach infrastructure decisions: traditional pros who want Augmented DevEx to implement their careful choices faster, versus vibe coders who trust the platforms to choose wisely on their behalf.

Dev tool companies need to invest in both.

Make it easy for humans to implement when they've made a deliberate choice.

Make it easy for AIs to choose you when the human has delegated the decision.

This is critical for dev tool growth in this AI-assisted era, and highlights a key decision for dev tool companies as they think about who they are serving and where they invest in growth.

What This Means for Dev Tool Companies

If you're building developer tools, you need great Augmented DevEx and great LLMEx, but weighted based on your tool's nature and your target developer audience.

For Augmented DevEx:

  • Create AI-ready documentation with complete, copyable examples.

  • Build prompt templates for common implementation patterns (sketched after this list).

  • Structure your docs so AI assistants can easily extract code patterns.

  • Make your getting-started guides AI-assistant friendly.
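For the prompt-template point, here’s a hedged sketch of what such a template might look like for an imaginary auth tool - the product name, package, and steps are all invented, purely to show the shape vendors are publishing.

```text
You are helping me add ExampleAuth to a Next.js App Router project.

1. Install @exampleauth/nextjs and add EXAMPLEAUTH_SECRET_KEY to .env.local.
2. Wrap the root layout in <ExampleAuthProvider>.
3. Protect every route under /dashboard using middleware.ts.
4. Use TypeScript, follow my existing project conventions, and show me the
   full diff before applying any changes.
```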

For LLMEx:

  • Create machine-readable API descriptions (OpenAPI, llms.txt).

  • Ensure your documentation appears in training data and web searches.

  • Make your tool easy to identify and differentiate (clear use cases, unique naming).

  • Provide structured data about capabilities, limitations, and pricing (one possible shape is sketched below).
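There’s no settled standard for that last point yet, so treat this as a hedged sketch of one possible shape - a typed capability manifest a vendor might publish alongside its docs. Every field and value is invented.

```typescript
// Sketch of a machine-readable capability manifest. Field names and values
// are illustrative; no tool publishes exactly this today.
interface ToolManifest {
  name: string;
  category: "auth" | "error-tracking" | "storage" | "charts";
  frameworks: string[];              // stacks with first-class SDKs
  limitations: string[];             // things the tool explicitly does not do
  pricing: { freeTier: boolean; unit: string };
  docs: { llmsTxt: string; openapi?: string };
}

const exampleTracker: ToolManifest = {
  name: "ExampleTracker",
  category: "error-tracking",
  frameworks: ["nextjs", "express", "react-native"],
  limitations: ["no on-prem deployment", "no PHP SDK"],
  pricing: { freeTier: true, unit: "per 10k events" },
  docs: {
    llmsTxt: "https://docs.exampletracker.dev/llms.txt",
    openapi: "https://docs.exampletracker.dev/openapi.json",
  },
};

export { exampleTracker };
```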

Most importantly, understand that these serve different moments in the development process for different types of devs.

The Opportunity

The split between Augmented DevEx and LLMEx represents an opportunity to serve different types of developers better at different moments in their flow.

Sometimes developers need help implementing a careful choice. Sometimes they want to delegate a routine decision. Sometimes they're exploring options. Sometimes they know exactly what they want. Good developer tools will support all these modes.

Understand when each matters and optimise accordingly.

I’m still optimistic that LLMs won’t replace developers. We’re heading toward a future where developers orchestrate LLMs to build more ambitious things faster. The tools that support this collaboration - both when humans lead and when they delegate - will define the next era of developer experience.

DevEx has always been about respecting developers' time and intelligence. That hasn't changed. What's changed is that developers now work with AI partners, and those partnerships take different forms moment to moment.

The future of DevEx is human and AI collaboration, a dance with each leading when they’re strongest. It’s on dev tool companies to build for that dance.

Enjoying this content? Subscribe to get every post direct to your inbox!

THAT’S A WRAP

Before you go, here are 3 ways I can help:

Take the FREE Learning Velocity Index assessment - Discover how your team's ability to learn and leverage learnings stacks up in the product-led world. Takes 2 minutes and you get free advice.

Book a free 1:1 consultation call with me - I keep a handful of slots open each week for founders and product growth leaders to explore working together and get some free advice along the way. Book a call.

Sponsor this newsletter - Reach over 7700 founders, leaders and operators working in product and growth at some of the world’s best tech companies including Vercel, PayPal, Adobe, Lovable, Clerk, Canva, Miro, Amplitude, Google, Meta, Tailscale, Twilio and Salesforce.

That’s all for today,

If there are any product, growth or leadership topics that you’d like me to write about, just hit reply to this email or leave a comment and let me know!

And if you enjoyed this post, consider upgrading to a VIG Membership to get the full Product-Led Geek experience and access to every post in the archive including all guides.

Until next time!

— Ben

RATE THIS POST (1 CLICK - DON'T BE SHY!)

Your feedback helps me improve my content
