# software-code-reviewer
## Usage

```
~code-review-parallel/workflow
```
## When to use

- When you want independent code reviews from multiple LLM families to reduce blind spots
- When you want a synthesized report that highlights where reviewers agree and disagree
## When not to use

- When a single-model code review is sufficient
- When the task is implementation, refactoring, or test writing rather than review
## Configuration

```yaml
software-code-reviewer: |
```
## Pipeline

### software-code-reviewer

Three independent reviews of the same code run in parallel, each using a model from a different LLM family.
### synthesize-prompt

The synthesis prompt produces a unified report with the following sections:

1. **Summary** — a brief overview of the code under review and the overall assessment.
2. **Prioritized Findings** — a numbered list of findings ordered by severity (critical → high → medium → low → informational). Each finding includes:
   - **Title**: a concise name
   - **Severity**: critical / high / medium / low / informational
   - **Agreement**: which reviewers flagged the issue (e.g., "All three", "A + B", "C only")
   - **Description**: what the issue is and why it matters
   - **Recommendation**: a concrete fix or improvement
3. **Consensus & Disagreement** — highlights:
   - areas where all three reviewers agreed
   - areas where reviewers disagreed or only one reviewer flagged an issue, with brief reasoning about which perspective seems strongest
4. **Overall Recommendation** — a single clear recommendation (approve, approve with changes, or request revisions) based on the weight of evidence across all three reviews.

The synthesized report is written to `.stencila/reviews/codec-markdown-review.md`.
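To illustrate the format, a single entry in the Prioritized Findings section might look like the sketch below. The finding itself (function name, issue, reviewers) is entirely hypothetical, not part of this workflow:

```markdown
1. **Unvalidated input in `parse_config`**
   - **Severity**: high
   - **Agreement**: A + B
   - **Description**: Configuration values are read from user input and used
     without validation, which could allow malformed data to reach downstream code.
   - **Recommendation**: Validate and sanitize configuration values before use,
     and add a test covering the malformed-input case.
```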
### human-review

```yaml
preamble: |
  The three parallel code reviews have been synthesized into a unified findings report.
  Please review the synthesis and decide whether to accept or request revisions.
questions:
  - header: Decision
    question: Is the synthesized review report acceptable?
    type: single-select
    options:
      - label: Accept
      - label: Revise
    store: human.decision
    finish-if: Accept
  - header: Revision Notes
    question: What should be changed in the synthesis?
    show-if: "human.decision == Revise"
    store: human.feedback
```