AI Coding Cognitive Debt Prevention Kit
A practical rollout kit to help teams use AI coding tools without losing codebase comprehension. Includes PR review gates, handoff templates, team rituals, and measurable guardrails you can adapt to your workflow.
What This Playbook Solves
AI coding tools can increase output and still degrade team understanding. This playbook helps you preserve comprehension, ownership, and maintainability while keeping the speed advantages of AI-assisted development.
The GitHub repository is the implementation artifact. This page is the rollout guide: how to introduce the kit, where to apply it first, what to measure, and how to avoid turning good guardrails into process theater.
What's In The Kit
PR Review Guardrails
Use repeatable review gates to make sure AI-assisted changes are understood, not just merged.
Handoff + ADR Templates
Capture decision context so future maintainers are not reconstructing prompt history from git diffs.
Team Rituals
Short recurring review loops help teams catch drift early and refine guardrails without bureaucracy.
Metrics & Signals
Measure comprehension and operational stability, not just output volume and PR throughput.
Who This Is For
- Engineering managers adopting AI coding tools across teams
- Staff/principal engineers standardizing review quality
- Platform/DevEx teams building team-level guardrails
- Tech leads seeing rising rework, brittle PRs, or hard-to-explain changes
Use This When
- PR volume is up but reviewer confidence is down
- On-call incidents are increasingly tied to “nobody knows this code”
- AI-assisted changes bypass architecture or style norms
- Your team needs a lightweight, measurable starting point instead of a policy memo
🚀 Quick Start (2-Week Pilot)
Start small. Pilot the kit with one team and one class of changes (for example: app features, infra PRs, or automation scripts). Treat this as a workflow experiment, not a top-down compliance rollout.
# 1) Clone the kit
git clone https://github.com/kesslernity/cognitive-debt-prevention-kit.git
cd cognitive-debt-prevention-kit
# 2) Create a pilot folder in your engineering docs repo
mkdir -p ../your-team-docs/ai-coding-guardrails
# 3) Copy the templates you want to pilot first
# (PR checklist, handoff template, ADR template, review prompts, etc.)
cp -R templates/* ../your-team-docs/ai-coding-guardrails/
# 4) Start with one team for 2 weeks before org-wide rollout
# Track review depth, rework rate, and incident/rollback signals

Core Templates You Should Start With
1. PR Review Checklist (Minimum Viable Guardrail)
This is the highest-leverage place to start. It adds a comprehension gate at the exact moment code gets approved.
## AI-Assisted Change Review Checklist
- [ ] I can explain what changed without reading the AI prompt history
- [ ] I understand why this implementation was chosen over alternatives
- [ ] I validated behavior locally (tests + manual checks where relevant)
- [ ] Edge cases and failure modes were reviewed
- [ ] Security/privacy implications were reviewed (inputs, auth, secrets, logging)
- [ ] The code matches team conventions and existing architecture patterns
- [ ] I can identify the owner responsible for maintaining this code
- [ ] I added/updated docs, ADRs, or runbooks if behavior changed materially

2. Comprehension Gate Prompts
Use these in code reviews, handoffs, and incident follow-ups. The goal is not to interrogate people. It's to make understanding visible.
## Comprehension Gate Prompts (Reviewer or Author)
1. Explain this change in plain language to a teammate who was not in the prompt session.
2. Which parts are copied/adapted from existing patterns in the codebase, and which are new?
3. What breaks if this assumption is wrong?
4. What observability signal would tell us this is failing in production?
5. What would you change if you had to maintain this for 12 months?
6. If the AI tool were unavailable tomorrow, could the team still debug this?

3. Handoff Template (Kill Prompt-Only Context)
Teams accumulate cognitive debt when implementation context lives only in prompt threads and local chat history. A short handoff note fixes that.
# AI-Assisted Change Handoff
## What changed
- Feature / fix:
- Files touched:
## Why this approach
- Decision summary:
- Alternatives considered:
## Risk + blast radius
- User impact if broken:
- Rollback path:
## Validation performed
- Tests run:
- Manual checks:
## Open questions / follow-ups
-

4. ADR Template for AI-Assisted Decisions
Use this for medium/high-impact changes where the implementation choice matters long-term. It prevents future teams from reverse-engineering intent from code alone.
# ADR: AI-Assisted Implementation Decision
## Context
What problem are we solving? What constraints matter?
## Decision
What did we choose and why?
## Human understanding checkpoint
- Which parts of the implementation are fully understood by the team?
- Which parts need deeper review before broad reuse?
## Risks
- Operational risk:
- Security/privacy risk:
- Maintainability risk:
## Validation plan
- Pre-merge checks:
- Post-deploy metrics/signals:
## Revisit trigger
When should we re-evaluate this decision?

Team Operating Cadence (Keep It Lightweight)
The kit works best when it becomes a small habit, not a giant process. One short weekly review is enough to improve quality fast.
Weekly 30-minute Cognitive Debt Review
Agenda (sample)
1. Review 2-3 AI-assisted PRs (good + bad examples)
2. Identify one recurring failure pattern
3. Tighten one checklist item or prompt
4. Pick one metric to watch next week
5. Capture one snippet/pattern in team docs

Measure What Matters (Not Just PR Count)
| Metric | What to track | Target direction |
|---|---|---|
| Rework rate | Follow-up PRs fixing AI-authored changes within 14 days | Down |
| Review depth | PRs with explicit reasoning, risk notes, and validation evidence | Up |
| Incident linkage | Incidents traced to misunderstood changes or brittle AI code | Down |
| Bus factor for changes | More than one engineer can explain and maintain the code | Up |
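Rework rate is the easiest of these metrics to compute from merge data. A minimal sketch, assuming hypothetical PR records with `ai_assisted`, `is_fix`, `merged_at`, and `files` fields — in practice you would pull these from your Git host's API rather than hand-build them:

```python
from datetime import datetime, timedelta

REWORK_WINDOW = timedelta(days=14)

def rework_rate(prs):
    """Fraction of AI-assisted PRs that received a follow-up fix
    touching the same files within REWORK_WINDOW of merging."""
    ai_prs = [p for p in prs if p["ai_assisted"]]
    if not ai_prs:
        return 0.0
    reworked = 0
    for p in ai_prs:
        for q in prs:
            if q is p or not q["is_fix"]:
                continue
            gap = q["merged_at"] - p["merged_at"]
            # A fix counts as rework if it lands after the AI PR, within
            # the window, and overlaps at least one of the same files.
            if timedelta(0) < gap <= REWORK_WINDOW and set(p["files"]) & set(q["files"]):
                reworked += 1
                break
    return reworked / len(ai_prs)

# Illustrative sample data (field names are assumptions, not a real API shape)
prs = [
    {"id": 101, "ai_assisted": True,  "is_fix": False,
     "merged_at": datetime(2024, 5, 1), "files": ["billing/invoice.py"]},
    {"id": 102, "ai_assisted": True,  "is_fix": False,
     "merged_at": datetime(2024, 5, 2), "files": ["auth/session.py"]},
    {"id": 103, "ai_assisted": False, "is_fix": True,
     "merged_at": datetime(2024, 5, 9), "files": ["billing/invoice.py"]},
]

print(f"rework rate: {rework_rate(prs):.0%}")  # → rework rate: 50%
```

Track the trend week over week rather than the absolute number; the point is direction, not precision.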
30-Day Rollout Plan
Week 1: Pilot Setup
- Choose one team and one workflow
- Start with PR checklist + handoff template
- Define 2-3 metrics to track
Week 2: Observe Friction
- Collect examples of good/bad AI-assisted PRs
- Trim checklist items that add noise
- Add one team-specific prompt
Week 3: Expand Coverage
- Add ADR template for high-impact changes
- Use comprehension prompts in reviews
- Track rework and review depth trends
Week 4: Standardize
- Publish the team’s adapted version
- Document rollout learnings
- Decide whether to expand org-wide
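Standardizing goes faster when the handoff sections are machine-checkable. A minimal CI sketch, assuming the section headings from the handoff template above (rename them if your team adapted the template); how the PR description reaches the script — for example via your CI platform's pull-request-body variable — depends on your setup:

```python
# Section headings from the handoff template; adjust to your adapted version.
REQUIRED_SECTIONS = [
    "## What changed",
    "## Why this approach",
    "## Risk + blast radius",
    "## Validation performed",
]

def missing_sections(pr_body: str) -> list[str]:
    """Return the handoff sections absent from a PR description."""
    return [s for s in REQUIRED_SECTIONS if s not in pr_body]

# Example: a hypothetical PR description that skipped the risk section.
body = """
## What changed
- Added retry logic to the export job
## Why this approach
- Matches the existing retry helper
## Validation performed
- Unit tests + one manual run
"""

print(missing_sections(body))  # → ['## Risk + blast radius']
```

Fail the check softly at first (a warning comment, not a blocked merge) until the team trusts the template; a hard gate on day one invites blank-section compliance.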
Common Anti-Patterns
- Treating “tests pass” as proof of understanding
- Copying prompts into PRs instead of documenting decisions
- Mandating heavy templates for every tiny change
- Using AI output volume as the primary success metric
- Skipping rollback notes because “the AI can fix it later”
The goal is not to slow teams down. It's to preserve engineering judgment while AI tools accelerate output. If the process gets heavier than the risk, reduce it.
How to Know It's Working
Positive Signals
- Review comments shift from syntax to architecture/risk
- Fewer “mystery” fixes after merge
- More consistent handoffs between engineers
- Faster debugging of AI-assisted changes
Watchouts
- People ticking checklist boxes without real understanding
- Templates copied but left blank or generic
- “AI productivity” celebrated while incidents rise
- Rollout applied equally to low- and high-risk changes
References & Companion Resources
GitHub Repository: kesslernity/cognitive-debt-prevention-kit
Companion article: You Ship Faster with AI. You Understand Less. Welcome to Cognitive Debt.
Tip: Keep the GitHub repo as the source of truth for templates and artifacts. Use this page as the rollout and adoption guide for teams.