Ingress NGINX Is Dying
Here's Your No-Panic Migration Plan
The Wake-Up Call
The most popular Kubernetes ingress controller—used by roughly half of all cloud-native environments—gets archived in March 2026. No more security patches. No more bug fixes. The project, maintained by a tiny team of volunteers (often just one or two people), collapsed under burnout.
If that doesn't make you audit your clusters this weekend, I don't know what will.
Why This Happened
Burnout Killed the Project
Ingress NGINX powered roughly half of all Kubernetes clusters in production. But behind that massive footprint? A maintainer team that could fit in a phone booth. Most of the heavy lifting fell on one or two people—unpaid volunteers carrying a project that Fortune 500 companies depended on daily.
This is the open-source sustainability crisis in its purest form: critical infrastructure maintained by exhausted volunteers. When they stepped back, the project had no succession plan. The CNCF made the call to archive rather than let it rot in a false state of “maintained.”
“We didn't lose Ingress NGINX to a technical failure. We lost it to a people failure—the failure of an industry to fund and sustain the projects it depends on.”
3 Things Every Platform Team Needs to Know
The Ingress API Was Always a Workaround
Design Debt From Day One
The Kubernetes Ingress resource was never designed to handle the complexity of modern traffic routing. It was a v1beta1 idea that got promoted to GA more out of inertia than conviction. The API was too limited by design—no support for header-based routing, traffic splitting, TCP/UDP, or request mirroring.
So what did every vendor do? They bolted on custom annotations to make it useful. NGINX added nginx.ingress.kubernetes.io/*. Traefik added its own. HAProxy, Contour, Ambassador—all different, all incompatible.
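To see what that patchwork looks like in practice, here is an illustrative Ingress leaning on NGINX-specific annotations for rewrites and canarying. The annotation keys are real nginx.ingress.kubernetes.io extensions; the hostnames and service names are made up for the example.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    # Rewrite /v1/foo to /foo before proxying — an NGINX-only behavior
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    # Canarying exists only as an annotation bolt-on, not in the Ingress spec
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-v2
                port:
                  number: 80
```

Nothing in this manifest's annotation block is portable: point the same YAML at Traefik or Contour and the rewrite and canary behavior silently disappear.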
The result? A fragile patchwork nobody could standardize. Your “portable” Kubernetes manifests were anything but portable.
Gateway API Is the Real Successor
Not a Quick Hack — The Actual Fix
Gateway API isn't just “Ingress v2.” It's a ground-up redesign by the same SIG-Network community that built the original Ingress spec. The key difference: they learned from a decade of mistakes.
Expressive
Header-based routing, traffic splitting, and request mirroring are first-class spec fields, not annotation hacks; TCP and UDP routing get their own TCPRoute and UDPRoute resources (currently in the experimental channel).
Portable
One spec, multiple implementations. Your HTTPRoute manifest works across Cilium, Istio, Envoy Gateway, Contour, and more.
Role-Oriented
Separate GatewayClass, Gateway, and HTTPRoute resources let infra teams and app teams operate independently with clear boundaries.
Extensible
Vendor-specific features use typed extension points instead of unstructured annotation strings. Type safety meets traffic routing.
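The capabilities above live directly in the spec. A minimal sketch of header-based routing as an HTTPRoute — the Gateway name, hostname, and services are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
spec:
  parentRefs:
    - name: shared-gateway   # a Gateway owned by the platform team
  hostnames:
    - api.example.com
  rules:
    # Requests carrying x-canary: "true" go to the new backend —
    # a typed spec field, not a vendor annotation
    - matches:
        - headers:
            - name: x-canary
              value: "true"
      backendRefs:
        - name: api-v2
          port: 80
    # Everything else falls through to the stable backend
    - backendRefs:
        - name: api-v1
          port: 80
```

Because the routing logic is in the spec rather than in annotations, the same manifest behaves identically on any conformant Gateway API implementation.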
Gateway API graduated to GA (v1.0) in October 2023. It's production-ready today.
You Have Two Migration Paths
Choose Your Adventure
Path A: Cilium Ingress
Near Drop-In Today
Cilium's Ingress controller is the closest drop-in replacement for Ingress NGINX. It understands the existing Ingress resource spec, so many of your manifests work with minimal changes.
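In the simple case, the switch can be as small as one field. A sketch with hypothetical names — Cilium registers its controller under the `cilium` IngressClass:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: cilium   # was: nginx — often the only change needed
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

The caveat: any nginx.ingress.kubernetes.io/* annotations in the original manifest are ignored by Cilium, so audit for those before assuming drop-in parity.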
Path B: Cilium Gateway API
The Long-Term Bet
Full Gateway API adoption with Cilium as the implementation. This is the strategic play—you get the modern API, full portability, and future-proof routing.
About ingress2gateway
The ingress2gateway tool helps convert Ingress manifests to Gateway API resources automatically. It handles the common cases well, but won't handle every edge case—especially custom NGINX annotations. Plan for manual work on complex routing rules.
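A typical invocation looks like the sketch below. Flag names follow the project's documentation at the time of writing; verify against `ingress2gateway --help` for your version, and always review the generated YAML before applying it.

```shell
# Translate Ingress resources (NGINX annotation dialect) into
# Gateway API manifests, writing them to a file for review.
ingress2gateway print \
  --namespace default \
  --providers ingress-nginx \
  > gateway-api-manifests.yaml
```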
The Migration Playbook
Step-by-Step: From Ingress NGINX to Gateway API
Audit Your Clusters
Run kubectl get ingress --all-namespaces and catalog every Ingress resource. Flag any using NGINX-specific annotations — those need manual attention.
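A sketch of that audit, assuming jq is available on your workstation:

```shell
# Step 1: inventory every Ingress resource across all namespaces.
kubectl get ingress --all-namespaces

# Step 2: flag resources carrying NGINX-specific annotations.
# Output format is namespace/name, deduplicated.
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[]
      | select((.metadata.annotations // {}) | keys[]
          | startswith("nginx.ingress.kubernetes.io"))
      | "\(.metadata.namespace)/\(.metadata.name)"' \
  | sort -u
```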
Classify by Complexity
Simple host/path routing? ingress2gateway handles it. Custom annotations for rate limiting, auth, or rewrites? Those go in the manual migration pile.
Deploy Cilium in Parallel
Install Cilium alongside your existing NGINX controller. Both can coexist using different IngressClass values. Zero downtime during transition.
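If you already run Cilium as your CNI, enabling its ingress and Gateway API support is a Helm values change. The flag names below follow the Cilium Helm chart; check the values reference for your chart version before applying.

```shell
# Enable Cilium's Ingress controller and Gateway API support
# without making it the default IngressClass, so the existing
# NGINX controller keeps serving current traffic.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set ingressController.enabled=true \
  --set ingressController.default=false \
  --set gatewayAPI.enabled=true
```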
Convert a Non-Critical Workload First
Pick a staging service or internal tool. Convert its Ingress to an HTTPRoute, validate routing end-to-end, and document any gotchas your team finds.
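For a simple host/path Ingress, the conversion is mechanical. A before/after sketch with hypothetical names (the Gateway itself is assumed to already exist, owned by the platform team):

```yaml
# Before: a simple NGINX-backed Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-app
spec:
  ingressClassName: nginx
  rules:
    - host: staging.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-app
                port:
                  number: 80
---
# After: the equivalent HTTPRoute attached to a shared Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: staging-app
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - staging.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: staging-app
          port: 80
```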
Roll Out Progressively
Migrate services one at a time, starting with lowest-risk. Use traffic splitting to canary new routes alongside old ones before cutting over.
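Traffic splitting during cutover is a native weight field on the route, so no annotation tricks are needed. A sketch, assuming old and new backends are exposed as separate Services with hypothetical names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payments-canary
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - payments.example.com
  rules:
    - backendRefs:
        # Ratchet these weights toward 0/100 as confidence grows
        - name: payments-old
          port: 80
          weight: 90
        - name: payments-new
          port: 80
          weight: 10
```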
Decommission NGINX
Once all traffic flows through Cilium Gateway API, remove the NGINX controller entirely. Update your Helm charts, CI/CD pipelines, and monitoring dashboards.
What NOT to Do
Don't Wait for March
Once the archive happens, you're running unpatched infrastructure. Every CVE that drops is your problem with no fix upstream.
Don't Fork Ingress NGINX
You don't want to maintain a fork of a project that a dedicated team couldn't sustain. That's inheriting someone else's burnout.
Don't Assume ingress2gateway Handles Everything
The tool covers standard Ingress resources well, but custom NGINX annotations, Lua snippets, and ModSecurity rules need manual migration.
Don't Big-Bang the Migration
Migrating all services at once is how you turn a planned migration into an incident. Progressive rollout, service by service.
Ingress vs Gateway API
| Feature | Ingress | Gateway API |
|---|---|---|
| Header routing | Annotation hacks | Native support |
| Traffic splitting | Not supported | Built-in weights |
| TCP/UDP | Not supported | TCPRoute / UDPRoute (experimental channel) |
| Role separation | Single resource | GatewayClass / Gateway / Route |
| Portability | Vendor-locked annotations | Cross-implementation |
| Status | Archived March 2026 | GA since October 2023 |
The Bigger Picture: Open Source Sustainability
Ingress NGINX isn't the first critical project to collapse under maintainer burnout, and it won't be the last. OpenSSL, Log4j, core-js—the pattern repeats. Projects that underpin billions in infrastructure get maintained by people who can't pay rent with GitHub stars.
If your organization depends on open-source infrastructure, this is your reminder to:
Fund the projects you depend on
Contribute engineering time upstream
Have migration plans before the crisis hits
Pro Tip
Don't wait for March. Start this week. Pick a non-critical workload, convert its Ingress resources to Gateway API, and validate routing in staging. By the archive date, your team will already know the rough spots—and you'll sleep better knowing your clusters aren't running on borrowed time.
Your Weekend Checklist
Run kubectl get ingress --all-namespaces and export the list
Identify NGINX-specific annotations in each Ingress resource
Install ingress2gateway and do a dry-run conversion
Pick one low-risk service for a pilot migration
Schedule a team review for Monday to plan the rollout
Key Takeaways
Ingress NGINX is getting archived in March 2026 — no more security patches or bug fixes
The Ingress API was a v1beta1 workaround that overstayed its welcome — vendor annotations made it a fragile, non-portable patchwork
Gateway API is the successor — expressive, portable, role-oriented, and GA since October 2023
Two migration paths: Cilium Ingress for quick wins, Cilium Gateway API for the long-term bet
The ingress2gateway tool helps with common cases but plan for manual work on custom annotations
Start with a non-critical workload this week — don't wait for March to discover the rough spots
This is also an open-source sustainability wake-up call: fund and contribute to the projects you depend on
Ingress Was a v1beta1 Idea That Overstayed Its Welcome
Time to move on. Your clusters will thank you. Your SREs will thank you. And the maintainers who gave everything they had? They deserve better than a fork.