AI Coding · DevSecOps · Platform Engineering · Governance · Software Delivery

Vibe Coding Governance Rollout Kit

Ship AI-assisted code fast without paying for it in incidents

Zara's blueprint for AI coding governance: risk-tiered PR gates, review-depth rules, CI enforcement patterns, and a 30/60/90 rollout model so teams keep speed without reliability fallout.

Zara mode: Blueprint.

20 min read

Blueprint Mode: Step 0 - Define the Operating Contract

The objective is not to slow developers down. The objective is to prevent speed from bypassing architecture, security, and reliability controls. Teams should accelerate on low-risk paths and hard-gate high-blast-radius changes.

Blueprint Mode: Step 1 - Classify Risk

risk-tier-matrix.md
# AI-authored change risk classification

| Tier | Scope | Examples | Approval Rule | Deploy Rule |
|---|---|---|---|---|
| T1 | Low-risk text/tests | docs, comments, snapshots, test data | 1 reviewer | standard CI |
| T2 | App behavior (bounded) | handlers, service logic, UI flows | 2 reviewers | standard CI + regression tests |
| T3 | Security/infra touch | auth, IAM, network, IaC, stateful paths | 2 reviewers incl senior | manual approval gate |
| T4 | Critical safety path | payments, identity core, destructive infra operations | architecture sign-off + incident owner sign-off | change window + rollback rehearsal |
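
The tier matrix only works if classification is automatic. A minimal sketch of a path-to-tier classifier, in the spirit of the `scripts/ai-risk-tier.sh` referenced later in the CI step; the path patterns here are illustrative assumptions and should be adapted to your repo layout:

```shell
# classify_risk_tier (sketch): read changed file paths on stdin, one per line,
# and print the highest matching risk tier. Patterns are illustrative.
classify_risk_tier() {
  tier=1
  while IFS= read -r path; do
    case "$path" in
      *payments/*|*identity/*)         t=4 ;;  # T4: critical safety path
      *auth/*|*iam/*|terraform/*|*.tf) t=3 ;;  # T3: security/infra touch
      src/*|app/*|ui/*)                t=2 ;;  # T2: bounded app behavior
      docs/*|*.md|tests/*)             t=1 ;;  # T1: low-risk text/tests
      *)                               t=2 ;;  # unknown paths default up, not down
    esac
    [ "$t" -gt "$tier" ] && tier=$t
  done
  echo "T$tier"
}
```

Typical usage: `git diff --name-only origin/main... | classify_risk_tier`. The important design choice is that the PR inherits its *highest* matching tier, and unrecognized paths default upward, never downward.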

Blueprint Mode: Step 2 - Standardize Human Review

reviewer-contract.yml
# Reviewer contract for AI-authored pull requests

required_pr_sections:
  - prompt_summary
  - assumptions_and_constraints
  - architecture_impact
  - threat_or_abuse_considerations
  - rollback_plan
  - test_evidence

reject_if:
  - missing_risk_tier
  - missing_owner_for_changed_domain
  - generated_diff_without_human_explanation
  - ci_policy_checks_not_green

notes:
  - The model can propose.
  - A human remains accountable for correctness and risk acceptance.
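
The `reject_if` rules are checkable in CI. A minimal sketch of the completeness check (the doc later names a `check-pr-template-completeness.sh`); it assumes the required section names appear verbatim in the PR body, which is an assumption about your PR template, not a given:

```shell
# check_pr_sections (sketch): fail when the PR body file is missing any
# section required by the reviewer contract above.
check_pr_sections() {
  body="$1"; missing=0
  for section in prompt_summary assumptions_and_constraints architecture_impact \
                 threat_or_abuse_considerations rollback_plan test_evidence; do
    # grep -qi: quiet, case-insensitive presence check
    grep -qi "$section" "$body" || { echo "missing section: $section"; missing=1; }
  done
  return "$missing"
}
```

Wire it to fail the build: a missing `rollback_plan` should be a red check, not a review comment.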

Blueprint Mode: Step 3 - Enforce in CI

.github/workflows/ai-pr-governance.yml
name: ai-pr-governance
on:
  pull_request:
    branches: [main]

jobs:
  classify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Detect risk tier from changed paths
        run: ./scripts/ai-risk-tier.sh > risk-tier.txt
      - name: Upload tier artifact
        uses: actions/upload-artifact@v4
        with:
          name: risk-tier
          path: risk-tier.txt

  enforce:
    runs-on: ubuntu-latest
    needs: classify
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: risk-tier
      - name: Enforce approval policy
        run: ./scripts/enforce-ai-pr-policy.sh risk-tier.txt
      - name: Enforce required PR sections
        run: ./scripts/check-pr-template-completeness.sh
      - name: Block auto-merge on T3/T4
        run: ./scripts/block-automerge-for-high-risk.sh risk-tier.txt
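
A sketch of the approval-policy core inside `enforce-ai-pr-policy.sh`. Fetching the live approval count (e.g. via the GitHub API) is left out; the thresholds mirror the risk matrix from Step 1 and are assumptions to tune:

```shell
# enforce_ai_pr_policy (sketch): given a risk tier and the current approval
# count, exit nonzero when the tier's approval rule is not met.
enforce_ai_pr_policy() {
  tier="$1"; approvals="$2"
  case "$tier" in
    T1)    required=1 ;;
    T2)    required=2 ;;
    T3|T4) required=2 ;;  # senior/architecture sign-off is checked separately
    *)     echo "unknown tier: $tier"; return 1 ;;
  esac
  if [ "$approvals" -lt "$required" ]; then
    echo "policy violation: $tier requires $required approvals, got $approvals"
    return 1
  fi
}
```

A nonzero exit fails the `enforce` job, which (with branch protection requiring this check) is what actually blocks the merge.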

Blueprint Mode: Step 4 - Roll Out in 30 / 60 / 90 Days

30 Days: Contain and Instrument

Adopt a mandatory AI PR template with risk tier, prompt summary, and rollback plan.

Classify paths into T1-T4 and gate T3/T4 with manual approvals.

Track AI-authored vs AI-assisted vs human-authored PR ratios.

Ban direct auto-merge for PRs touching auth, infra, payments, or data durability.
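
The auto-merge ban can be a one-screen check rather than a process document. A sketch under the same assumptions as the CI workflow above: the tier is already detected, and branch protection requires this check to pass before auto-merge fires:

```shell
# block_automerge (sketch): fail the check for T3/T4 so auto-merge cannot
# complete; a human must merge after the required manual approvals.
block_automerge() {
  case "$1" in
    T3|T4) echo "auto-merge blocked for tier $1"; return 1 ;;
    *)     return 0 ;;
  esac
}
```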

60 Days: Standardize Review Depth

Create reviewer playbooks by domain (app, platform, security).

Require senior reviewer for T3 and architecture owner sign-off for T4.

Add CI checks for policy-as-code, SAST, secret scanning, and IaC policy.

Run incident tabletop exercises using AI-authored high-risk diff scenarios.

90 Days: Optimize for Throughput + Trust

Measure review-time deltas and defect deltas by risk tier, not by anecdotes.

Allow controlled fast lanes for proven low-risk classes (T1/T2).

Apply stricter pre-merge controls only where risk data justifies it.

Publish a monthly AI coding governance report to engineering leadership.

Blueprint Mode: Step 5 - Measure Every Week

ai-governance-scorecard.yml
# Weekly AI code governance scorecard

adoption:
  ai_authored_pr_ratio: "%"
  ai_assisted_pr_ratio: "%"

quality:
  post_merge_defect_rate_ai_vs_human: "ratio"
  revert_rate_ai_vs_human: "ratio"
  escaped_security_findings_ai: "count"

delivery:
  median_review_time_ai_prs: "hours"
  median_review_time_human_prs: "hours"
  deploy_success_rate_ai_touched: "%"

safety:
  t3_t4_changes_without_required_approval: "count"
  rollback_events_ai_related: "count"
  incidents_with_ai_touched_code: "count"

decision:
  expand_scope: "yes/no"
  freeze_scope: "yes/no"
  focus_area_next_week: "text"
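
Most scorecard fields reduce to counting labeled PRs. A sketch for the adoption ratios, assuming a simple export format of one `pr_id,authorship` line per PR (the CSV format is an assumption, not something the scorecard prescribes):

```shell
# pr_ratio (sketch): compute the percentage of PRs with the given authorship
# label (ai_authored, ai_assisted, or human_authored) from stdin.
pr_ratio() {
  awk -F, -v kind="$1" '
    $2 == kind { n++ }            # count PRs matching the requested label
    { total++ }                   # count all PRs
    END { if (total) printf "%.0f%%\n", 100 * n / total }'
}
```

Typical usage: `pr_ratio ai_authored < weekly_prs.csv`. The same pattern covers revert rates and T3/T4 approval-bypass counts once those events are logged with a label.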

Roast Mode: Bad Defaults to Kill

Shipping blind is not speed. It is deferred outage response.

If a high-risk AI change has no accountable owner, it does not ship.

No rollback plan means no release permission.

“Looks right” is not enough for auth, infra, payments, or state.

The model proposes. Engineers decide. Systems enforce.

Success Metric

Keep AI throughput high while holding post-merge quality and delivery stability flat or better. If speed rises and reliability drops, your governance model is underpowered.

This is the blueprint. Steal it. Build this.
