cloud-engineering
ai-coding
claude-code
aws
platform-engineering
devops
vendor-lock-in

AI Agent Lock-In Is Here

Claude Code Hit $2.5B. Amazon Engineers Can’t Use It. This Is the New Vendor Lock-In.

Claude Code just hit a $2.5 billion run-rate — doubled since January 1st. Yet 1,500 Amazon engineers are fighting for permission to use it, steered toward AWS Kiro instead. This is vendor lock-in repackaged for the AI agent era. Platform-native vs platform-agnostic is the new architectural fault line. Here's what every cloud team needs to decide.

The $2.5 Billion Signal

Claude Code just hit a $2.5 billion annualized run-rate. That number doubled since January 1st. Business subscriptions quadrupled in the same period. Enterprise customers now account for more than half of all Claude Code revenue.

$2.5B run-rate ARR

4% of public GitHub commits

2x weekly users since January

9 months from zero to $2.5B

According to a SemiAnalysis report published February 5, 2026, Claude Code now authors 4% of all public commits on GitHub worldwide, double the share from just one month earlier. On that trajectory, SemiAnalysis projects Claude Code will author more than 20% of all daily commits by the end of 2026.

Claude Code went from zero revenue to $2.5 billion in approximately nine months. It’s part of Anthropic’s broader $14 billion ARR — up from $1 billion just 14 months ago. Eight of the Fortune 10 are now Anthropic customers. Over 500 customers spend more than $1 million annually, up from roughly a dozen two years ago.

The Amazon Paradox

Amazon has invested $8 billion in Anthropic. They sell Claude Code to customers through AWS Bedrock. Yet roughly 1,500 Amazon engineers are fighting internally for permission to use Claude Code in production work.

Instead, Amazon is steering teams toward Kiro — their in-house AI coding agent built on top of Anthropic’s Claude models combined with Amazon’s proprietary tooling and infrastructure. Internal guidance limits the use of third-party AI coding tools for production code without formal approval.

Sales engineers who sell Claude Code to customers through Bedrock have expressed concerns that it’s hard to convince customers to adopt a product Amazon hasn’t officially approved for its own engineers to use in production.

The irony is thick. But it’s not really about Amazon. This is a preview of tensions every major cloud vendor will face. Your dev team wants the best tool. Your procurement team wants the integrated one. When those two forces diverge, you get a new flavor of lock-in.

Meanwhile at Microsoft

Microsoft has taken the opposite approach — actively encouraging engineers to test Claude Code alongside GitHub Copilot and share performance feedback. The data flows back into improving Copilot itself.

Two tech giants. Both investors in Anthropic. Completely opposite internal strategies. The contrast tells you everything about where the AI agent wars are heading.

Two Camps Are Forming

Platform-Native Agents

Kiro, GitHub Copilot

Deep integration with specific cloud ecosystems

Access to proprietary APIs and internal services

Optimized for that vendor's workflow and toolchain

But tied to a single platform

The trade-off: Maximum ecosystem integration at the cost of portability. When you build your dev workflow around Kiro, you’re deepening your commitment to AWS — not just for infrastructure, but for how your team writes code.

Platform-Agnostic Agents

Claude Code, OpenAI Codex

Work across any cloud, any stack, any language

Stronger general reasoning capabilities

Portable between projects and employers

But less deeply integrated with specific platforms

The trade-off: Flexibility and best-in-class reasoning at the cost of native ecosystem depth. Your workflows are portable, but you may need to bridge gaps that platform-native tools cover out of the box.

Why This Matters Now

AI coding agents aren’t just writing code anymore. They’re becoming infrastructure.

CI/CD Pipelines

Agents triaging failures, generating fixes, and auto-merging patches

Issue Triage

Agents classifying, prioritizing, and drafting responses to production incidents

IaC Generation

Agents writing Terraform, Pulumi, and CloudFormation from architectural intent

Code Review

Agents reviewing PRs, suggesting improvements, and enforcing standards at scale

Once you build your development workflow around a specific agent, switching costs are real. Custom prompts, AGENTS.md files, tool integrations, CI/CD hooks, team muscle memory — all of it creates inertia. This is cloud lock-in, but at the developer tooling layer.
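
To make the switching cost concrete, below is a rough sketch of the kind of CI hook that quietly accumulates. Nothing in it comes from a real pipeline; the binary name, the flag, the credential variable, and the prompt are illustrative assumptions, which is exactly the point: every one of them is agent-specific.

```python
# review_gate.py -- a sketch of the kind of CI step that quietly creates lock-in.
# The binary, the -p flag, the credential variable, and the prompt phrasing are
# all illustrative assumptions baked into this one pipeline step.
import os
import subprocess
import sys


def agent_review(diff: str) -> str:
    """Ask one specific agent CLI to review a diff (illustrative invocation)."""
    if "ANTHROPIC_API_KEY" not in os.environ:
        raise RuntimeError("this CI job is hard-wired to one vendor's credentials")
    result = subprocess.run(
        ["claude", "-p", f"Review this diff for bugs and risky changes:\n{diff}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # the pipeline pipes in something like `git diff origin/main...HEAD`
    print(agent_review(sys.stdin.read()))
```

Multiply that by every workflow above and "just switch agents" stops being a quick conversation.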

Three Questions Every Cloud Team Should Ask

1. Does your AI agent choice lock you into a cloud vendor?

If your AI coding assistant only works well with AWS services, you’ve added another lock-in vector you didn’t have before. Your infrastructure was already locked in. Now your development process is too. That’s a compounding effect that makes migration exponentially harder.

2. What happens when your vendor’s agent competes with the best agent?

AWS pushing Kiro over Claude Code is only the first instance of a tension every major cloud vendor will face: your developers gravitate toward the best tool while procurement favors the integrated one. When the vendor-preferred tool and the developer-preferred tool diverge, someone has to make a deliberate choice rather than let the default decide.

3. Are you building agent-portable workflows?

The teams that abstract their AI tooling will have the same advantage as teams that went multi-cloud early — flexibility when the landscape shifts. Use open standards like MCP and AGENTS.md (both now under the Linux Foundation’s Agentic AI Foundation). Keep your agent orchestration layer swappable. Treat AI agents like you treat cloud services: with an exit strategy.

The Numbers Behind the Shift

Anthropic (Feb 2026)

$14B annualized revenue (up from $1B in Dec 2024)

$380B valuation after $30B Series G

500+ customers spending $1M+ annually

8 of the Fortune 10 are customers

30,000 Accenture professionals being trained on Claude

Claude Code Specifically

$2.5B annualized run-rate (18% of total revenue)

4% of all public GitHub commits worldwide

135K+ commits per day on GitHub

4x business subscription growth since Jan 1

20%+ projected daily commit share by end of 2026

Meanwhile, Amazon claims 70% of its software engineers have used Kiro at least once as of January 2026. But “used at least once” and “prefer to use daily” are very different metrics. The 1,500-engineer pushback suggests the gap between mandated adoption and organic preference is wide.

The Platform Engineering Playbook

The $2.5B run-rate tells you where developer preference is heading. The Kiro push tells you where vendor incentives are heading. The gap between those two forces is where platform engineering teams need to make a deliberate choice.

Audit your agent dependencies

Map every place an AI agent touches your workflow: IDE, CI/CD, code review, IaC, incident response. Identify which are vendor-locked and which are portable.
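
One low-ceremony way to start, sketched below with invented entries, is to keep the inventory in the repo itself so it gets reviewed and updated like any other artifact.

```python
# agent_inventory.py -- a minimal sketch for tracking where agents touch the workflow.
# Every entry below is a hypothetical example; replace them with your own touchpoints.
from dataclasses import dataclass


@dataclass
class AgentTouchpoint:
    area: str        # IDE, CI/CD, code review, IaC, incident response
    tool: str        # the agent or agent-backed product in that spot
    portable: bool   # could another capable agent take over without rework?
    notes: str = ""


INVENTORY = [
    AgentTouchpoint("CI/CD", "agent-driven test triage step", portable=False,
                    notes="shells out to one vendor CLI with custom flags"),
    AgentTouchpoint("Code review", "PR review bot", portable=True,
                    notes="prompts live in AGENTS.md, backend is configurable"),
    AgentTouchpoint("IaC", "Terraform generation prompts", portable=True),
]

if __name__ == "__main__":
    locked = [t for t in INVENTORY if not t.portable]
    print(f"{len(locked)}/{len(INVENTORY)} agent touchpoints are vendor-locked:")
    for t in locked:
        print(f"  - {t.area}: {t.tool} ({t.notes})")
```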

Adopt open agent standards

MCP (Model Context Protocol) and AGENTS.md are now under the Linux Foundation. Build your agent integrations on open standards, not proprietary SDKs.
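
To show what building on the open standard looks like, here is a minimal MCP server sketch using the MCP Python SDK's FastMCP helper. The server name and tool are invented, and the SDK surface may differ between versions, but the principle holds: a tool exposed over MCP is usable by any MCP-capable agent, not just one vendor's.

```python
# deploy_tools.py -- a hypothetical internal tool exposed over MCP instead of a
# vendor-specific plugin API. Requires the MCP Python SDK (pip install mcp);
# the exact API may vary by SDK version.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-deploy-tools")


@mcp.tool()
def service_status(service: str) -> str:
    """Return the deploy status of an internal service (stubbed for this sketch)."""
    # a real server would query your deployment system of record here
    return f"{service}: last deploy green, no active rollbacks"


if __name__ == "__main__":
    mcp.run()  # stdio transport by default, the common case for locally attached agents
```

The same integration written against a proprietary plugin SDK does the same job for exactly one agent.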

Keep the orchestration layer swappable

Your AGENTS.md files, prompt libraries, and workflow definitions should work with any capable agent. Don't hardcode agent-specific behaviors into your pipelines.
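
A thin adapter is usually enough, as in the sketch below. The Claude Code call reflects its non-interactive print mode and the second backend is a pure placeholder; treat both as illustrative rather than definitive.

```python
# agent_adapter.py -- a sketch of a swappable orchestration layer.
# Pipelines call run_task(); which agent answers is a configuration detail.
# The CLI invocations are illustrative, not authoritative.
import os
import subprocess
from typing import Protocol


class CodingAgent(Protocol):
    def run(self, prompt: str) -> str: ...


class ClaudeCodeAgent:
    def run(self, prompt: str) -> str:
        # Claude Code's non-interactive print mode (verify flags against your version)
        out = subprocess.run(["claude", "-p", prompt],
                             capture_output=True, text=True, check=True)
        return out.stdout


class OtherCliAgent:
    """Placeholder for any other agent that offers a non-interactive CLI."""

    def __init__(self, binary: str):
        self.binary = binary

    def run(self, prompt: str) -> str:
        out = subprocess.run([self.binary, prompt],
                             capture_output=True, text=True, check=True)
        return out.stdout


def get_agent() -> CodingAgent:
    # the backend is one config value, not something baked into every pipeline step
    backend = os.environ.get("AGENT_BACKEND", "claude-code")
    if backend == "claude-code":
        return ClaudeCodeAgent()
    return OtherCliAgent(binary=backend)


def run_task(prompt: str) -> str:
    return get_agent().run(prompt)
```

Your AGENTS.md files and prompt libraries stay exactly where they are; only get_agent() knows which binary is answering, so changing vendors becomes a configuration change instead of a pipeline rewrite.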

Measure what your team actually uses

Track which agents your engineers choose when given freedom vs which they're told to use. The gap between those two metrics is your lock-in risk.
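
One cheap proxy, assuming the agents in your org leave Co-Authored-By trailers on their commits (Claude Code adds one by default unless it's configured off; other tools and individual engineers vary), is to count attributed commits straight from git history. The marker strings below are guesses to adapt to your own tooling, and the result is directional at best.

```python
# agent_usage.py -- rough count of agent-attributed commits over the last 90 days.
# Relies on Co-Authored-By trailers, which not every agent or engineer leaves in place.
import subprocess
from collections import Counter

# hypothetical marker substrings; adjust to the tools your teams actually run
AGENT_MARKERS = {"Claude Code": "claude", "Copilot": "copilot", "Kiro": "kiro"}

log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--pretty=%B%x00"],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter()
for message in log.split("\x00"):
    lowered = message.lower()
    if "co-authored-by:" not in lowered:
        continue
    for label, marker in AGENT_MARKERS.items():
        if marker in lowered:
            counts[label] += 1

for label, n in counts.most_common():
    print(f"{label}: {n} attributed commits in the last 90 days")
```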

Negotiate agent flexibility into contracts

When signing cloud or enterprise AI agreements, ensure your team retains the right to use best-of-breed agent tooling — not just the vendor's preferred agent.

The Bottom Line

Don’t sleepwalk into AI agent lock-in the way many teams sleepwalked into cloud lock-in a decade ago.

The vendor lock-in debate has a new front: the tools your engineers use to write, review, and ship code. Make it a deliberate architectural decision, not an accidental procurement outcome.