By David Nielsen · February 28, 2026 · 8 min read
The 15-Item Definition of Ready Checklist Every Scrum Team Needs
Most Definition of Ready checklists have 5 items. Most sprints still blow up from the same root causes: unknown dependencies, missing designs, undefined NFRs, no PO sign-off. Here's a complete 15-item DoR checklist that covers the gaps, including 5 items you can draft automatically with AI.
Key Takeaway
A rigorous Definition of Ready eliminates the two most common sprint failures: mid-sprint clarification loops and post-sprint rework from misunderstood scope. Teams that enforce DoR consistently report 30-40% fewer sprint commitment misses. Five of the 15 items can be drafted automatically with AI, leaving only judgment calls for human review.
Why 5-item checklists leave gaps
The typical DoR covers: user story format, acceptance criteria, an estimate, no blockers, and PO approval. That's a good starting point, but it doesn't cover design artifacts, API contracts, security requirements, NFRs, or QA scenarios — all of which, when undefined, cause mid-sprint slowdowns or post-sprint rework.
The expanded 15-item checklist below is designed for teams building software products. Not every item applies to every story — use judgment. But every item on this list has caused real sprint failures when skipped. For foundational DoR concepts, see our core Definition of Ready guide.
The complete 15-item checklist
User story follows standard format
(AI-automatable) "As a [type of user], I want [goal], so that [benefit]." This format forces the team to articulate who benefits and why — not just what to build. If you can't fill in all three blanks, the story isn't understood yet.
Problem statement is clearly articulated
(AI-automatable) Not a solution, not a feature request — a problem. "Users can't find their order history on mobile" is a problem. "Add an order history page" is a solution. Stories built on solutions skip the why and frequently deliver the wrong thing.
Acceptance criteria written (minimum 3 testable conditions)
(AI-automatable) Each AC must be binary: either it passes or it doesn't. "Improve the loading experience" fails this test. "Dashboard loads in under 2 seconds on 4G connections" passes it. Minimum 3 conditions covers the happy path, at least one edge case, and one failure/error state.
Gherkin scenarios drafted for complex flows
(AI-automatable) Given/When/Then format for any story that touches a multi-step user flow, state change, or branching logic. Not every story needs Gherkin — save it for flows where ambiguity about sequence could cause rework. For simpler acceptance criteria, see our guide to writing acceptance criteria that work.
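For illustration, here's what a drafted Gherkin scenario might look like for a hypothetical password-reset flow (the feature, emails, and steps are invented for this example, not part of the checklist):

```gherkin
Feature: Password reset
  Scenario: Registered email requests a reset link
    Given a registered user with the email "dana@example.com"
    When the user submits "dana@example.com" on the reset form
    Then a reset link is emailed to "dana@example.com"
    And the link expires after 30 minutes

  Scenario: Unknown email requests a reset link
    Given no account exists for "nobody@example.com"
    When the user submits "nobody@example.com" on the reset form
    Then the form shows a generic confirmation message
    And no email is sent
```

Note how the second scenario pins down a branching decision (generic message, no email) that would otherwise be invented mid-sprint.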
Dependencies identified and documented
List every upstream dependency: APIs that must exist, data that must be migrated, other stories that must ship first. Unknown dependencies are the #1 cause of sprint spillover. If you discover them mid-sprint, the estimate is invalidated.
Blockers are removed or have a clear resolution path
A blocker is anything that prevents the team from starting work on day 1 of the sprint. Either resolve it before planning or document a specific owner and date for resolution. "We'll figure it out" is not a resolution path.
Story is independently deliverable
The "I" in INVEST. Can this story be shipped and provide value without requiring another story to be complete? If story B can't ship without story A, consider merging them or restructuring the slice. Stories with hard dependencies create convoy effects in the sprint.
Effort estimate assigned (story points or t-shirt size)
(AI-automatable) Not a precise prediction — a sizing agreement. The team has enough information to say whether this is an S, M, L, or XL effort. If nobody can estimate it, the story isn't refined enough. Inability to estimate is a DoR failure signal.
Team agrees estimate is achievable in one sprint
If the estimate implies more than one sprint of work, break it down. A story that spans sprint boundaries is an epic in disguise. The split point is almost always obvious once you ask: "What's the smallest version of this that delivers value?"
Design mockups or wireframes attached (if UI work)
Any story that changes the user interface needs a visual reference before it enters the sprint. Low-fidelity is fine — even a sketch. Engineers shouldn't be making visual design decisions during implementation; that decision-making time isn't in the estimate.
API contracts defined (if backend work)
For stories touching an API — new endpoint, changed schema, auth flow — the contract must be agreed on before work starts. This includes: method, path, request/response shape, error codes, and auth requirements. Front-end and back-end teams can then work in parallel against the contract.
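As a sketch of what "agreed contract" can mean in practice, here is a hypothetical create-order endpoint expressed as TypeScript types — the endpoint, field names, and error codes are invented for illustration, not a prescribed format:

```typescript
// Hypothetical contract for POST /api/v1/orders, agreed before sprint start.
interface CreateOrderRequest {
  customerId: string;
  items: { sku: string; quantity: number }[];
}

interface CreateOrderResponse {
  orderId: string;
  status: "pending" | "confirmed";
}

// Error codes the back end commits to, so the front end handles them up front.
type OrderErrorCode = 400 | 401 | 409 | 422;

// Front-end work can proceed against a stub that satisfies the contract
// while the real endpoint is still being built.
const stub = (req: CreateOrderRequest): CreateOrderResponse => ({
  orderId: "ord_0001",
  status: "pending",
});
```

Whether the team writes this as types, an OpenAPI document, or a shared wiki page matters less than agreeing on it before the sprint starts.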
Security and compliance requirements noted
Does this story handle PII? Does it touch authentication or authorization? Does it affect audit logs or data retention? These requirements change the implementation significantly and need to be in scope before the estimate. Discovering them mid-sprint forces rework.
Non-functional requirements documented
Performance targets, accessibility requirements (WCAG level), browser/device compatibility, localization needs. "It should be fast" is not an NFR. "The API response must be under 300ms at the 95th percentile" is. NFRs left undefined become scope arguments after the fact.
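A well-written NFR like the p95 latency target above is mechanically checkable. A minimal sketch (the latency samples are fabricated, and this uses the simple nearest-rank percentile definition):

```typescript
// Nearest-rank p95: the value at rank ceil(0.95 * n) in the sorted samples.
function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[idx];
}

// Fabricated latency samples in milliseconds.
const latenciesMs = [120, 180, 210, 250, 260, 270, 280, 290, 310, 900];
console.log(p95(latenciesMs) <= 300 ? "NFR met" : "NFR missed");
// prints "NFR missed" — the 900 ms outlier is the 95th-percentile sample here
```

The point is that "under 300ms at the 95th percentile" leaves nothing to argue about after the sprint, whereas "it should be fast" does.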
Test scenarios identified (not just happy path)
QA shouldn't be writing test scenarios for the first time during the sprint. The team should agree on: happy path, key edge cases, and at least one negative test (what happens when input is invalid or the system is unavailable). This is different from acceptance criteria — it's the QA execution plan.
Product owner has signed off on scope
The PO confirms: this story solves the right problem, the AC covers the right conditions, and the scope is appropriate for the sprint. Without this, engineers might implement something technically correct but misaligned with business intent. PO sign-off is the final gate before planning.
How to use this checklist in your refinement meeting
Don't run every story through all 15 items in the meeting — that's how refinement becomes a 3-hour death march. Instead:
- Before the meeting: Run stories through an AI tool to auto-generate items 1-4 and item 8 (story format, problem statement, AC, Gherkin if needed, effort estimate). This gives you a draft to react to instead of a blank page.
- In the meeting: Review only the judgment items — dependencies, blockers, and independence (5-7), sprint fit (9), design artifacts and API contracts (10-11), security and NFRs (12-13), QA scenarios (14), and PO sign-off (15). These can't be automated and require team discussion.
- Gate at planning: Any story missing a checklist item during sprint planning goes back. Don't pull it in with the intent to resolve it later — "later" is always mid-sprint.
Which items to automate vs. which need human judgment
✦ AI can draft these
- 1. User story format
- 2. Problem statement
- 3. Acceptance criteria
- 4. Gherkin scenarios
- 8. Effort estimate
⚑ Humans must decide these
- 5-7. Dependencies, blockers, independence
- 9. Achievable in one sprint
- 10-11. Design artifacts, API contracts
- 12-13. Security, NFRs
- 14. QA scenarios
- 15. PO sign-off
The goal is to enter every refinement meeting with items 1-4 and 8 already drafted, so the team spends its time on judgment, not transcription.
How to handle stories that don't meet DoR
The answer is simple and uncomfortable: don't pull them into the sprint.
In practice, teams feel pressure to fill the sprint capacity and pull in "mostly ready" stories with open items. This is where sprint failures originate. A team that consistently misses commitments by 20-30% almost always has a DoR enforcement problem, not an estimation problem.
When a story fails DoR at planning: flag the specific items that aren't met, assign an owner for each gap, and set a date. If the gaps can be resolved in 1-2 days, consider pulling the story in with a commitment to resolve before anyone starts work. If resolution requires more than a couple of days — the story waits for the next sprint.
For a deeper dive on the refinement process itself, see our guide to backlog refinement best practices.
Auto-generate items 1–4 and item 8 for your next refinement
Paste a raw backlog item. Get a structured user story, problem statement, acceptance criteria, Gherkin scenarios, and effort estimate in seconds — free, no signup.
Refine My Backlog