
By David Nielsen · March 1, 2026 · 6 min read

AI-Powered Spec Linting for Coding Agents

Your AI coding agent is only as good as the issue it's reading. Speclint analyzes every GitHub issue before your agent touches it — scoring completeness, flagging ambiguity, and blocking low-quality specs from wasting compute.

The Core Problem

Cursor, Codex, and Claude Code don't push back on bad specs. They hallucinate confidently and ship the wrong thing. Speclint is the quality gate that runs before your agent does.

The agent doesn't know your spec is broken

When you assign a GitHub issue to Codex or kick it off in Cursor, the agent reads the issue title, description, and acceptance criteria — then starts writing code. If the spec is vague, the agent fills in the blanks with assumptions. Those assumptions are often wrong.

This isn't a model problem. GPT-4, Claude 3.5, and Gemini all do the same thing: they're trained to be helpful and complete, so they complete the task even when the task is underspecified. The result is code that compiles, passes tests you wrote for the wrong behavior, and ships the wrong feature.

The fix isn't a better model. It's better specs.

What spec linting actually checks

Speclint analyzes each GitHub issue across five dimensions that predict whether an AI coding agent will ship the right thing:

  • Problem statement clarity — Is there a specific, observable problem described? Or just a vague feature request?
  • Acceptance criteria testability — Can the ACs be verified by running tests or inspecting the UI? Or are they subjective ("should feel fast")?
  • Scope boundedness — Is the issue small enough for a single agent pass? Or does it hide three features behind one title?
  • Codebase context — Are the relevant files, components, or API endpoints mentioned? Or does the agent have to guess where to start?
  • Edge case coverage — Are the failure modes described? What should happen when things go wrong?

Each dimension contributes to a completeness_score from 0–100. Issues scoring 80+ get the agent_ready label and are safe to assign to your coding agent. Issues below 80 get a comment with specific remediation.
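To make the scoring concrete, here is a minimal sketch of what a lint result might look like and how the 80-point threshold gates the label. The completeness_score, 0–100 range, 80+ cutoff, and agent_ready label come from the description above; the per-dimension field names and the unweighted average are illustrative assumptions, not Speclint's documented scoring formula.

```python
# Hypothetical shape of a Speclint result. Only completeness_score and the
# agent_ready label are documented; the rest is an illustrative assumption.
AGENT_READY_THRESHOLD = 80

def overall_score(dimensions: dict[str, int]) -> int:
    """Collapse the five 0-100 dimension scores into one completeness_score.
    Assumes an unweighted average, which the real product may not use."""
    return round(sum(dimensions.values()) / len(dimensions))

result = {
    "dimensions": {
        "problem_statement_clarity": 90,
        "acceptance_criteria_testability": 85,
        "scope_boundedness": 70,
        "codebase_context": 95,
        "edge_case_coverage": 80,
    },
}
result["completeness_score"] = overall_score(result["dimensions"])
result["label"] = (
    "agent_ready"
    if result["completeness_score"] >= AGENT_READY_THRESHOLD
    else None
)
```

Here the five dimensions average to 84, which clears the threshold, so the issue would be labeled agent_ready.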

How the GitHub Action works

Speclint plugs directly into your GitHub workflow. Add the action to your repo and it runs every time an issue is opened, edited, or labeled:

# .github/workflows/speclint.yml
name: speclint

on:
  issues:
    types: [opened, edited, labeled]

# The action comments on and labels issues, so the workflow token needs
# write access to issues (the default GITHUB_TOKEN may be read-only).
permissions:
  issues: write

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: speclint/lint-issues@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          speclint-api-key: ${{ secrets.SPECLINT_API_KEY }}

The action posts a structured comment with the completeness_score, the failing dimensions, and suggested improvements. If the issue passes, it gets labeled agent_ready. No manual review required.
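As a rough illustration, the posted comment could be rendered along these lines. The score, the 80-point threshold, the failing-dimensions list, and the agent_ready label are described above; the exact layout, wording, and the format_lint_comment helper are assumptions for the sketch.

```python
def format_lint_comment(score: int, failing: dict[str, str]) -> str:
    """Render a comment body resembling what the action posts.
    The layout is hypothetical; only the score semantics, the 80-point
    threshold, and the agent_ready label come from Speclint's docs."""
    lines = [f"**Speclint** completeness_score: {score}/100"]
    if score >= 80:
        lines.append("Labeled `agent_ready` - safe to assign to your coding agent.")
    else:
        lines.append("Below the 80-point threshold. Failing dimensions:")
        for dimension, suggestion in failing.items():
            lines.append(f"- **{dimension}**: {suggestion}")
    return "\n".join(lines)
```

A passing issue gets the one-line agent_ready note; a failing one gets a bullet per failing dimension with the suggested fix.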

Why this matters for Cursor and Codex workflows

If you're running a small AI dev shop — 2–5 engineers using Cursor or Codex for the heavy lifting — the bottleneck isn't the agent's coding ability. It's the quality of the issues going into the pipeline. A good agent with a bad spec wastes 30–60 minutes of compute and human review time per iteration.

Multiply that by 10–20 issues per week and you're looking at roughly 5–20 hours of lost throughput weekly. Speclint catches the bad specs before the agent starts — the same way ESLint catches bugs and style problems before your code ships.

Getting started

Get your API key at speclint.ai/get-key, add it as a repository secret (SPECLINT_API_KEY), and install the GitHub Action. Your first 100 issue lints are free.

Stop letting bad specs reach your agent

Get your API key and install the GitHub Action in under 5 minutes.
