## Usage
- Skill: `software-design-review`
- Allowed skills: `agent-creator`
## Configuration

- Tools: `read_file`, `glob`, `grep`
## Instructions

### Overview

### Steps
1. **Identify the design to review:**
   - Accept a design name, an existing stored design, or a pasted design specification.
   - Restate the design's apparent purpose, target users, and problem being solved.
   - If the input is fragmented, reconstruct the main proposal before critiquing it.
2. **Resolve the design artifact:**
   - If the full design specification is already present in the conversation, review that text directly and do not force retrieval through design tools.
   - If the user refers to a stored design by name or asks to review an existing design, use `glob` with pattern `.stencila/designs/*.md` to locate likely candidates.
   - Use `read_file` to load the selected stored design before reviewing it.
   - If multiple similarly named designs exist, compare the candidates and review the one that best matches the user's request.
3. **Understand the design before judging it:**
   - Summarize the proposed system, feature, or change in plain language.
   - Identify the stated goals, non-goals, scope boundaries, assumptions, constraints, and acceptance criteria, if present.
   - Note any missing context that materially limits confidence in the review.
4. **Evaluate the design for clarity and structure:**
   - Check whether the problem, goals, scope, and intended users are clearly stated.
   - Check whether sections are organized logically and terminology is consistent.
   - Flag ambiguous wording, undefined terms, implicit assumptions, and places where a developer or stakeholder could misinterpret the design.
5. **Evaluate completeness:**
   - Check whether the design covers the core requirements needed for implementation and review.
   - Assess whether it addresses functional requirements, non-functional requirements, architecture, interfaces, data, dependencies, security, privacy, observability, rollout, migration, and operations when relevant.
   - Identify missing decisions, missing edge cases, and missing acceptance criteria or success measures.
6. **Evaluate correctness and internal consistency:**
   - Look for contradictions between goals, scope, architecture, interfaces, and acceptance criteria.
   - Flag requirements that do not appear to solve the stated problem or that conflict with stated constraints.
   - Identify assumptions that seem unsupported, unrealistic, or inconsistent with the rest of the design.
7. **Evaluate feasibility and tradeoffs:**
   - Assess whether the proposed approach is plausible given the stated constraints, dependencies, scale, team context, and delivery goals.
   - Identify technical, product, operational, or organizational risks.
   - Call out important tradeoffs such as complexity versus speed, flexibility versus simplicity, consistency versus performance, or cost versus reliability.
   - Note when the design chooses a tradeoff implicitly and should make it explicit.
8. **Evaluate implementability and actionability:**
   - Determine whether engineers could reasonably plan and begin implementation from the document.
   - Assess whether interfaces, responsibilities, and acceptance criteria are specific enough to guide work and testing.
   - Flag vague statements that should be rewritten as concrete requirements, decisions, or open questions.
9. **Produce a structured review report:**
   - Begin with a concise overall assessment.
   - List the most important strengths so the user knows what to keep.
   - Organize issues by severity or priority rather than by minor editorial order.
   - Provide actionable recommendations that explain what to change and why.
   - When useful, suggest replacement wording, additional sections, or sharper acceptance criteria.
10. **Distinguish facts from uncertainty:**
    - Clearly label assumptions made during the review.
    - Separate definite problems from possible risks or questions that need confirmation.
    - Avoid inventing system facts that are not supported by the design.
## Review Checklist

### Problem and Scope

- Is the problem statement clear and specific?
- Are the intended users, stakeholders, or operators identified?
- Are goals and non-goals explicit and non-overlapping?
- Is in-scope versus out-of-scope work clear?
- Does the design stay focused on the stated problem?
### Requirements and Acceptance Criteria

- Are the important functional requirements present?
- Are non-functional requirements identified where relevant, such as performance, reliability, security, privacy, accessibility, maintainability, and observability?
- Are acceptance criteria or success measures specific and testable?
- Do requirements describe outcomes rather than vague aspirations or implementation tasks?
- Are major edge cases and failure modes considered?
### Architecture and Interfaces

- Is the proposed architecture understandable and appropriately detailed?
- Are major components, responsibilities, and interactions defined?
- Are external systems, APIs, or integration points identified?
- Are important data models, state transitions, or storage decisions explained when relevant?
- Does the design omit architectural detail that implementers would likely need?
### Correctness and Consistency

- Do different sections agree with each other?
- Do the requirements, architecture, and acceptance criteria align?
- Are assumptions explicit and reasonable?
- Are there contradictions, impossible constraints, or missing dependencies?
- Does the proposed solution actually address the stated problem?
### Feasibility and Risk

- Is the design realistic for the stated timeline, constraints, and environment?
- Are key technical and operational risks identified?
- Are rollout, migration, backward compatibility, or operational concerns covered when relevant?
- Are dependencies and external constraints acknowledged?
- Are the hardest parts of the design surfaced rather than hidden?
### Tradeoffs and Decision Quality

- Does the design explain why this approach was chosen?
- Are plausible alternatives or tradeoffs acknowledged when they matter?
- Are cost, complexity, performance, usability, and maintainability implications visible?
- Are any important decisions left implicit when they should be explicit?
- Does the design over-engineer or under-specify the solution?
### Actionability

- Could a team use this document to estimate, plan, and implement the work?
- Are open questions clearly separated from settled decisions?
- Are next changes to the spec obvious from the review?
- Is the feedback concrete enough to revise the design efficiently?
## Report Format

### Overall Assessment

### Strengths

### Findings
- Clarity and structure
- Completeness
- Correctness and consistency
- Feasibility and risks
- Tradeoffs and decision quality
- Actionability
For each finding:

- Indicate severity as High, Medium, or Low.
- Describe the issue precisely.
- Explain why it matters.
### Recommendations

### Open Questions
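A minimal illustration of a report following this format may help. The section names come from the structure above; all content inside the sketch is invented for illustration only:

```markdown
## Overall Assessment
The design is well scoped but under-specifies failure handling; findings below are ordered by severity.

## Strengths
- Clear problem statement and explicit non-goals.

## Findings
### Completeness
- **High** — No retry or failure-recovery behavior is specified. Implementers will make inconsistent choices without it.

## Recommendations
1. Add a section defining retry and failure-recovery behavior, and state the tradeoff it accepts.

## Open Questions
- Which platforms must be supported at launch?
```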
## Examples
A review of a scheduling design might include:

- A structured critique of the design's scope, requirements, architecture, and risks
- Feedback on timezone handling, platform validation, failure recovery, and acceptance criteria
- Prioritized recommendations, such as clarifying supported platforms, specifying retry behavior, and tightening observability requirements
Another review might include:

- Highlighted strengths in workflow coverage and data modeling
- Warnings about missing auditability, permission rules, and operational assumptions
- Concrete suggestions for rollout, reporting requirements, and measurable acceptance criteria
## Edge Cases
- **Very short or partial design**: Do not refuse. Review what exists, identify the most important missing sections, and state the confidence limits caused by missing detail.
- **Mostly good design with a few weak spots**: Preserve strengths in the review instead of rewriting the whole design as if it were poor.
- **Highly speculative design**: Distinguish conjectural risks from confirmed issues and recommend validation steps.
- **Conflicting requirements**: Call out the conflict explicitly, explain the tradeoff, and suggest one or more resolution paths.
- **No acceptance criteria**: Flag this clearly and suggest candidate criteria or the dimensions they should cover.
- **Review drifting into implementation**: Suggest implementation-relevant clarifications when helpful, but keep the primary deliverable as critique and design improvement guidance rather than source code.