Quick Start
```sh
stencila workflows create my-workflow "A multi-stage data pipeline"
```

This creates `.stencila/workflows/my-workflow/WORKFLOW.md`.
Permanent vs Ephemeral Workflows
Permanent workflows live in `.stencila/workflows/` and are committed with the repository. An ephemeral workflow additionally contains a `.gitignore` file containing `*`, so it is excluded from version control by default. Use an ephemeral workflow when:

- an agent creates a workflow on your behalf
- you want to try a short-lived workflow before deciding to keep it
- you want a workflow that should not be committed or retained by default
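A permanent workflow can also be made ephemeral by hand. A minimal sketch, assuming a workflow directory named `scratch-review` (the name is illustrative):

```shell
# Mark the scratch-review workflow as ephemeral by git-ignoring
# everything inside its directory (the directory name is illustrative).
mkdir -p .stencila/workflows/scratch-review
printf '*\n' > .stencila/workflows/scratch-review/.gitignore
```

Git then skips the whole directory, so the workflow stays local unless the `.gitignore` is removed.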
Workflow Names
Workflow names must be:

- 1–64 characters
- only lowercase alphanumeric characters and hyphens
- no leading, trailing, or consecutive hyphens
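These rules can be checked mechanically. A sketch using a POSIX shell function (not the validator Stencila itself uses):

```shell
# valid_name: accept 1-64 characters of lowercase alphanumerics and
# hyphens, with no leading, trailing, or consecutive hyphens.
valid_name() {
  [ "${#1}" -ge 1 ] && [ "${#1}" -le 64 ] &&
    printf '%s' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

valid_name code-review && echo valid      # prints "valid"
valid_name Code-Review || echo rejected   # prints "rejected" (uppercase)
valid_name code--review || echo rejected  # prints "rejected" (double hyphen)
```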
Prefer *thing-activity* names like `code-review` over *thing-role* names like `code-reviewer`.
Use `thing-process` for the default case, and `thing-process-approach` when you need to distinguish multiple workflows for the same broad process, where:

- *thing* is the artifact or domain the workflow acts on, such as code, blog, agent, or schema
- *process* is the broad lifecycle stage or end-to-end goal, such as generation, refinement, publication, or review
- *approach* is an optional qualifier for the workflow's strategy, cost, or tradeoffs, such as quick, iterative, consensus, thorough, or guided
Lead with the *thing* so related workflows group together when listed.
Prefer purpose over pipeline shape
Avoid names that enumerate pipeline stages, such as `create-review-refine-test-deploy` or `draft-review-edit-publish`; name the workflow for what it achieves instead.
Recommended patterns
| Pattern | Examples |
| --- | --- |
| `thing-process` | `code-review`, `documentation-generation`, `agent-refinement`, `schema-publication` |
| `thing-process-approach` | `documentation-generation-quick`, `code-generation-iterative`, `architecture-design-consensus`, `agent-creation-guided` |
Common approach modifiers
- `quick` or `linear` — a simple, low-cost, usually single-pass workflow
- `iterative` or `agile` — a workflow with review and refinement loops
- `consensus` or `ensemble` — multiple parallel branches whose outputs are compared or combined
- `thorough` or `exhaustive` — a deeper, more expensive workflow with extra checks or specialist stages
- `guided` or `interactive` — a workflow that pauses for user input at important decision points
Examples
- `code-review`
- `code-generation-iterative`
- `documentation-generation-quick`
- `architecture-design-consensus`
- `agent-creation-guided`
A workflow's `name` must match the name of the directory that contains it.
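Because the frontmatter `name` must match the containing directory, a mismatch can be caught before running the validator. A sketch using standard tools (the paths are illustrative):

```shell
# check_name: compare a WORKFLOW.md's frontmatter name against the
# name of the directory containing it.
check_name() {
  dir=$(basename "$(dirname "$1")")
  name=$(sed -n 's/^name: *//p' "$1" | head -n 1)
  [ "$dir" = "$name" ]
}

# demo against a throwaway workflow directory
mkdir -p /tmp/wfdemo/.stencila/workflows/code-review
printf '%s\n' '---' 'name: code-review' 'description: demo' '---' \
  > /tmp/wfdemo/.stencila/workflows/code-review/WORKFLOW.md
check_name /tmp/wfdemo/.stencila/workflows/code-review/WORKFLOW.md && echo match  # prints "match"
```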
Directory Structure
Workflows are stored under `.stencila/workflows/`, one directory per workflow:

```text
.stencila/
  workflows/
    code-review/
      WORKFLOW.md
    test-and-deploy/
      WORKFLOW.md
    lit-review/
      WORKFLOW.md
```

An ephemeral workflow additionally contains a `.gitignore`:

```text
.stencila/
  workflows/
    draft-review/
      .gitignore   # contains: *
      WORKFLOW.md
```

The WORKFLOW.md File
A `WORKFLOW.md` file has two parts:

- YAML frontmatter — metadata (`name`, `description`, `goal`)
- Markdown body — a short human-readable explanation of the workflow, then a DOT pipeline in a fenced `dot` code block, then optional additional documentation and referenced content blocks
---
name: lit-review
description: Search and summarize recent literature
---
This workflow searches for recent papers, summarizes the key findings, and drafts a literature review.
```dot
digraph lit_review {
Start -> Search
Search [prompt="Search for recent papers on: $goal"]
Search -> Summarize
Summarize [prompt="Summarize the key findings across the papers"]
Summarize -> Draft
Draft [prompt="Draft a literature review from the summaries"]
Draft -> End
}
```

A more complex example with conditional routing and a human approval gate:

---
name: code-review
description: Automated code review with human approval gate
---
This workflow implements, tests, and reviews code changes.
```dot
digraph code_review {
Start -> Design
Design [agent="code-planner", prompt="Design the solution for: $goal"]
Design -> Build
Build [agent="code-engineer", prompt="Implement the design"]
Build -> Test
Test [agent="code-tester", prompt="Run tests and validate"]
Test -> Review [label="Pass", condition="outcome=success"]
Test -> Build [label="Fail", condition="outcome!=success"]
Review [ask="Review the code changes"]
Review -> End [label="[A] Approve"]
Review -> Design [label="[R] Revise"]
}
```

The pipeline is defined by the body's fenced `dot` code block; keep a single `dot` block per `WORKFLOW.md`.
Recommended DOT organization
Within the `dot` block, put:

- any graph-level attributes first
- the entry edge (`Start -> FirstNode`) near the top
- then, for each node, the node definition followed immediately by that node's outgoing edge or edges
Reusing multiline prompts, shell scripts, and questions
Long prompts, shell scripts, and questions do not have to be inlined in node attributes. Use `prompt-ref`, `shell-ref`, `ask-ref`, or `interview-ref` to reference a fenced content block elsewhere in the file by its `#id`:
---
name: thing-creation
description: Create and review a thing using referenced multiline content
---
This workflow creates a thing using a creator agent, runs validation checks, and collects human feedback.
```dot
digraph thing_creation {
Start -> Create
Create [agent="thing-creator", prompt-ref="#creator-prompt"]
Create -> Check
Check [shell-ref="#run-checks"]
Check -> HumanFeedback
HumanFeedback [ask-ref="#human-question", question-type="freeform"]
HumanFeedback -> End
}
```
```text #creator-prompt
Create or update a Stencila thing for this goal: $goal
Before starting, check for reviewer feedback from a previous iteration.
If feedback is present, use it to revise the existing draft instead of starting over.
```
```sh #run-checks
make lint
uv test
./integration-tests.sh
```
```text #human-question
What should be improved before the next revision?
```

Referenced content blocks can be placed anywhere in the body of `WORKFLOW.md`.
Multi-question interviews
An `interview-ref` node presents several related questions in a single human pause. Where `ask` and `ask-ref` define a single question, `interview-ref` points to a YAML interview spec containing multiple questions.
Interview spec format
An interview spec is a YAML document with an optional `preamble` and a list of `questions`. Each question supports these fields:

| Field | Values / example |
| --- | --- |
| `question` | the question text |
| `type` | `freeform`, `yes-no`, `confirm`, `single-select`, `multi-select` |
| `header` | a short heading shown above the question |
| `options` | list of `{label, description?}` |
| `default` | the default answer |
| `store` | e.g. `review.feedback` |
| `show-if` | e.g. `"decision == Revise"` |
| `finish-if` | e.g. `"no"` |
Routing
When an interview node has multiple outgoing edges, the first `single-select` question determines routing: its option labels are matched against the outgoing edge labels. Other question types, like an ask node's `question-type="freeform"`, collect answers without driving routing.
Storing answers
Use `store` to save an answer under a key; the stored value is available to later prompts and conditions as `$KEY`. For example, an answer stored under `review.feedback` can be referenced as `$review.feedback`.
Conditional questions with show-if
Use `show-if` to ask a question only when an earlier answer matches. A `show-if` expression has the form `"store_key == value"` or `"store_key != value"`, where `store_key` is the `store` key of an earlier question.
```yaml
questions:
  - question: "Is the implementation acceptable?"
    type: single-select
    store: decision
    options:
      - label: Accepted
      - label: Revise
  - question: "What specific changes are needed?"
    store: revision_notes
    show-if: "decision == Revise"
  - question: "Any final comments for the changelog?"
    store: changelog_notes
    show-if: "decision == Accepted"
```

A `show-if` expression references the `store` key of an earlier question and supports only the `==` and `!=` operators.
Early exit with finish-if
Use `finish-if` to end an interview early: when the answer to a question matches its `finish-if` value, the remaining questions are skipped. `finish-if` suits questions with a fixed set of answers (`yes-no`, `confirm`, `single-select`) rather than `freeform` or `multi-select` questions.
```yaml
questions:
  - question: "Would you like to provide detailed feedback?"
    type: yes-no
    store: wants_feedback
    finish-if: "no"
  - question: "What went well?"
    store: feedback.positive
  - question: "What could be improved?"
    store: feedback.negative
```

If the answer to the first question is "no", `wants_feedback` is stored and the remaining questions are skipped.
`show-if` and `finish-if` can be combined in one interview: `finish-if` ends the whole interview early, while `show-if` skips individual questions.
```yaml
questions:
  - question: "Do you want to proceed with the review?"
    type: yes-no
    store: proceed
    finish-if: "no"
  - question: "What type of review?"
    type: single-select
    store: review_type
    options:
      - label: Code
      - label: Design
  - question: "Which code areas need attention?"
    store: code_areas
    show-if: "review_type == Code"
  - question: "Which design aspects need attention?"
    store: design_areas
    show-if: "review_type == Design"
```

Example: review with decision and feedback
---
name: code-review-guided
description: Implement and review with structured human feedback
goal: Implement the feature and get approval
---
This workflow builds the requested feature and then pauses for a structured human review interview that collects both a routing decision and detailed feedback. If the reviewer selects Revise, the feedback is stored and the pipeline loops back to rebuild.
```dot
digraph code_review_guided {
Start -> Build
Build [agent="code-engineer", prompt="Implement: $goal"]
Build -> Review
Review [interview-ref="#review-interview"]
Review -> End [label="Approve"]
Review -> Build [label="Revise"]
}
```
```yaml #review-interview
preamble: |
Please review the implementation and provide structured feedback.
questions:
- question: "Is the implementation ready to merge?"
header: Decision
type: single-select
options:
- label: Approve
- label: Revise
store: review.decision
- question: "What specific changes should be made?"
header: Feedback
store: review.feedback
```

- The `Build` node implements the feature
- The `Review` node pauses and presents both questions as a single form
- The first `single-select` question ("Decision") determines routing — its option labels match the outgoing edge labels `Approve` and `Revise`
- The freeform question ("Feedback") stores the human's detailed feedback as `review.feedback`
- If the human selects "Revise", the pipeline loops back to `Build`, where the prompt can reference `$review.feedback`
Example: terminal feedback collection
```yaml
preamble: |
  The report has been generated. Please provide your assessment.
questions:
  - question: "How would you rate the quality?"
    type: single-select
    options:
      - label: Excellent
      - label: Good
      - label: Needs improvement
    store: survey.quality
  - question: "Any additional comments?"
    store: survey.comments
```

Because this interview is attached to a terminal node (for example, a `Collect` node leading only to `End`), the `single-select` answer is stored rather than used for routing.
When to use interviews vs separate human nodes
Use a single `interview-ref` node when:

- a review step naturally combines a routing decision with structured feedback
- you want to collect multiple related answers in a single human pause
- reducing the number of separate human pauses improves the reviewer experience
Use separate `ask` or `ask-ref` nodes when:

- the questions are independent and belong to different stages of the pipeline
- the answers drive different routing decisions at different points in the graph
- simpler single-question nodes make the graph easier to read
Workflow composition and nesting
Split a process into a parent and child workflow when:

- the child process is useful in more than one parent workflow
- a parent workflow is easier to read if a complex stage is collapsed into a single node
- you want to standardize a repeated process behind a reusable workflow boundary
Authoring a child workflow node
Use the `workflow` node attribute to call another workflow. For example, an `Implement` node with `workflow="code-implementation"` runs the `code-implementation` child workflow at that stage.
Example: parent and child workflows
---
name: code-review-composed
description: Orchestrate implementation through a reusable child workflow and final human review
goal: Implement and approve the requested change
---
This workflow delegates implementation to the `code-implementation` child workflow, then pauses for human review. The reviewer can approve or loop back for another implementation pass.
```dot
digraph code_review_composed {
Start -> Implement
Implement [workflow="code-implementation"]
Implement -> Review
Review [ask="Review the implementation"]
Review -> End [label="Approve"]
Review -> Implement [label="Revise"]
}
```

The reusable child workflow:

---
name: code-implementation
description: Design, implement, and test a requested software change
---
This workflow designs an implementation plan, builds it, and runs tests. It is intended as a reusable child workflow for parent workflows that handle review and approval.
```dot
digraph code_implementation {
Start -> Design
Design [agent="code-planner", prompt="Design an implementation for: $goal"]
Design -> Build
Build [agent="code-engineer", prompt="Implement the approved design"]
Build -> Test
Test [agent="code-tester", prompt="Run tests and report any failures"]
Test -> End
}
```

What happens when a workflow calls another workflow
When a parent node runs a child workflow:

- the child still gets the normal workflow context, including values such as `$goal`
- it knows which parent workflow and parent node called it
- when it finishes, its final output becomes the output of that parent node
Design guidance
- keep child workflows independently meaningful and reusable
- give child workflows names that describe the subprocess they perform, not just that they are children
- avoid splitting a simple two-step flow into separate workflows unless reuse or clarity genuinely benefits
- let the parent workflow show the overall process, and let the child workflow hold the repeated or detailed steps
Avoid cyclic composition
Workflows must not call each other in a cycle, such as workflow `A` calling `B` while `B` calls `A` (`A -> B -> A`).
Improving Discoverability and Delegation
Frontmatter fields such as `description`, `keywords`, `when-to-use`, and `when-not-to-use` help agents decide when to delegate to a workflow:
---
name: code-review
description: Automated code review with human approval gate
keywords:
- code
- review
- testing
- approval
when-to-use:
- when the user wants an automated code review pipeline
- when changes need testing and human approval before merging
when-not-to-use:
- when the user wants a quick one-shot code review without a pipeline
- when the task is about writing new code rather than reviewing it
---

The `when-to-use` and `when-not-to-use` lists give agents explicit, contrasting guidance on when delegating to this workflow is and is not appropriate.
Referencing Agents
The `agent` node attribute references an agent by name rather than embedding model configuration in the workflow. This separation means:

- Shared workflows can be committed to a repository and used by the whole team
- Personal agents (in `~/.config/stencila/agents/`) let each user configure their preferred model, provider, and API keys
- The same `code-engineer` node runs with different backing models depending on who runs the workflow
Nodes can also override agent settings directly with attributes such as `agent.model` and `agent.provider`. Prefer referencing agents by name and avoid `agent.*` overrides in shared workflows, since they pin every user to one configuration.
Setting a Goal
The optional `goal` frontmatter field sets a default goal for the workflow, available in prompts as `$goal`:
---
name: data-analysis
description: Analyze and report on experimental data
goal: Analyze climate data from 2020-2024
---

A goal provided at run time (for example via a `--goal` option) overrides the frontmatter `goal`.
Validation
```sh
# Validate by name
stencila workflows validate code-review

# Validate by path
stencila workflows validate .stencila/workflows/code-review/

# Validate a WORKFLOW.md file directly
stencila workflows validate .stencila/workflows/code-review/WORKFLOW.md
```

Validation checks:

- Name format (kebab-case, 1–64 characters)
- Name matches directory name
- Description is non-empty
- Pipeline DOT syntax is valid (if present)