## Usage

Skill: `software-test-execution` (allowed skills: `agent-creator`)

## Configuration

Tools: `read_file`, `glob`, `grep`, `shell`

## Instructions
### Overview

#### Required Inputs

| Input | Required | Description |
| --- | --- | --- |
| Test command | No | Command to run the scoped tests (discovered if not provided) |
| Test files | No | List of test file paths (used to scope the command if needed) |
| Slice scope | No | Description of what the slice covers (for context in the report) |
| Target packages | No | Packages, crates, or directories involved (for command discovery) |
| Slice name | No | Name or identifier of the current slice (for the report header) |
#### Outputs

| Output | Description |
| --- | --- |
| Result | Whether the tests passed or failed (Pass / Fail) |
| Report | A structured report with counts, failure details, and notes |
### Steps

#### 1. Gather test metadata

- The test command is the primary input: if present, use it directly.
- Test files and target packages are fallbacks for constructing a test command when none is provided.
- Slice name and slice scope provide context for the report.
#### 2. Determine the test command

**a. Identify the build system**

- Use `glob` to search the target directories for build files: `Cargo.toml`, `go.mod`, `package.json`, `pyproject.toml`, `setup.py`, `pom.xml`, `build.gradle*`, `Gemfile`, `mix.exs`, `Makefile`.
- Read the relevant config files to understand the test configuration.

**b. Check for canonical test commands**

- Look in the `Makefile` for `test:` targets.
- Check `package.json` `scripts.test` for JS/TS projects.
- Check CI config files (`.github/workflows/*.yml`) for test commands.
- Consult `references/framework-detection.md` for the mapping from build file to test command.
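The build-file-to-command lookup described above can be sketched as a simple table. This is a hypothetical illustration: the file names come from this skill, but the commands are common defaults that may differ per project, which is why `references/framework-detection.md` remains the authority.

```python
# Common default test commands per build file (assumptions, not guarantees;
# always prefer a Makefile target, package.json script, or CI config if found).
BUILD_FILE_TEST_COMMANDS = {
    "Cargo.toml": "cargo test",
    "go.mod": "go test ./...",
    "package.json": "npm test",
    "pyproject.toml": "pytest",
    "setup.py": "pytest",
    "pom.xml": "mvn test",
    "build.gradle": "gradle test",
    "Gemfile": "bundle exec rspec",
    "mix.exs": "mix test",
}

def guess_test_command(build_files):
    """Return the first known default test command for the discovered build files."""
    for name in build_files:
        if name in BUILD_FILE_TEST_COMMANDS:
            return BUILD_FILE_TEST_COMMANDS[name]
    return None  # no recognized build system; fall back to Makefile/CI search
```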
**c. Construct a scoped command**

- If the test files input lists specific files, scope the command to those files.
- If the target packages input names specific packages, scope to those packages.
- Never run the full project test suite when scoping is possible.
- See `references/framework-detection.md` for scoping patterns per framework.
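A minimal sketch of the scoping step, assuming pytest-style file arguments and cargo-style `-p` package flags; real frameworks use different scoping syntax, so treat this as illustrative only:

```python
def scope_command(base_command, test_files=None, packages=None):
    """Narrow a test command to specific files or packages instead of
    running the whole suite. The file-argument and -p flag syntax here
    are assumptions (pytest / cargo style), not universal."""
    if test_files:
        return base_command + " " + " ".join(test_files)
    if packages:
        return base_command + " " + " ".join(f"-p {p}" for p in packages)
    return base_command  # no scoping information available
```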
#### 3. Execute the test command

- Run the test command via `shell` with an appropriate timeout (default 120 seconds; increase to 300 seconds for compiled languages such as Rust, Go, or C++, where compilation may be slow).
- Capture both stdout and stderr: test output may appear on either stream.
- Record the exit code.
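The execution step above, including the timeout behavior, can be sketched with Python's `subprocess` module. The `run_tests` helper is hypothetical (the skill itself uses the `shell` tool); it shows the key requirements: capture both streams, record the exit code, and treat a timeout as a reportable failure rather than a crash.

```python
import subprocess

def run_tests(command, timeout=120):
    """Run a test command, capturing stdout and stderr separately and
    recording the exit code. A timeout returns a result dict instead
    of raising, so it can be reported as a failure upstream."""
    try:
        proc = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return {
            "exit_code": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr,
            "timed_out": False,
        }
    except subprocess.TimeoutExpired:
        # Report as a failure: the suite exceeded the time limit.
        return {"exit_code": None, "stdout": "", "stderr": "", "timed_out": True}
```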
**Timeout handling**

- If the command times out, report it as a failure and explain that the test suite exceeded the time limit.
- Suggest possible causes: infinite loops, network-dependent tests, excessive test scope.
**Compilation or collection errors**

- If the output contains compilation errors (e.g., `error[E...]` in Rust, `SyntaxError` in Python, `Cannot find module` in JS), report this as a failure.
- Include the compilation error details in the structured output so the upstream agent can fix them.
- Do not attempt to fix the errors: this skill only executes and reports.
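Detection of these blocking errors amounts to pattern matching on the captured output. A sketch, where the pattern list is an assumption drawn from the examples in this skill and is deliberately not exhaustive:

```python
import re

# Patterns for compilation/collection/dependency errors named in this skill.
ERROR_PATTERNS = [
    r"error\[E\d+\]",        # Rust compiler error codes
    r"\bSyntaxError\b",      # Python syntax errors
    r"Cannot find module",   # JS/TS module resolution failures
    r"ModuleNotFoundError",  # missing Python dependency
    r"command not found",    # missing test runner binary
]

def find_blocking_errors(output):
    """Return the patterns that matched, so the specific error can be
    surfaced in the report for the upstream agent to fix."""
    return [p for p in ERROR_PATTERNS if re.search(p, output)]
```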
**Missing dependencies**

- If tests fail because a test framework or dependency is not installed (e.g., `ModuleNotFoundError: No module named 'pytest'`, `error: no matching package named`), report this as a failure.
- Include the missing dependency in the output so the upstream agent can address it.
#### 4. Parse the test output

Extract the following from the captured output:

- Number of tests passed
- Number of tests failed
- Number of tests skipped or ignored
- Names and failure details for each failing test (assertion messages, expected vs. actual values, stack traces)
- Names of passing tests (when available in the output)

See `references/output-parsing.md` for output formats per framework.
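As a best-effort sketch of count extraction, the summary lines shown in the examples below (pytest's `N passed, N failed` and cargo's `N passed; N failed; N ignored`) can be matched with simple regexes. The formats are assumptions for two common frameworks; `references/output-parsing.md` covers the rest.

```python
import re

def parse_summary(output):
    """Extract pass/fail/skip counts from pytest- or cargo-style summary
    lines. Best-effort only: formats here are assumptions, not exhaustive."""
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    # pytest: "====== 0 passed, 2 failed in 0.12s ======"
    for key in counts:
        m = re.search(rf"(\d+) {key}", output)
        if m:
            counts[key] = int(m.group(1))
    # cargo: "test result: ok. 4 passed; 0 failed; 0 ignored; ..."
    m = re.search(r"(\d+) ignored", output)
    if m:
        counts["skipped"] += int(m.group(1))
    return counts
```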
#### 5. Determine the overall result

Report **Pass** when all of the following hold:

- The exit code is 0
- No test failures are reported in the output
- At least one test actually ran (zero tests executed is suspicious and should be flagged as a potential issue, but still reported as Pass if the exit code is 0)

Report **Fail** when any of the following hold:

- The exit code is non-zero, OR
- Any test failures appear in the output, OR
- A compilation or collection error prevented tests from running
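The decision rules above reduce to a short function. This sketch assumes the counts dict and error list produced by earlier steps; note that zero tests executed is flagged in the notes but still reported as Pass when the exit code is 0, exactly as the rules state.

```python
def overall_result(exit_code, counts, blocking_errors):
    """Apply the Pass/Fail rules: fail on non-zero exit, reported test
    failures, or blocking errors; otherwise pass, flagging zero-test runs."""
    if exit_code != 0 or counts["failed"] > 0 or blocking_errors:
        return "FAIL", []
    notes = []
    if counts["passed"] == 0:
        notes.append("0 tests executed; the test filter may have matched nothing")
    return "PASS", notes
```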
#### 6. Report structured results

Produce a report in this format:

````markdown
## Test Results: [PASS | FAIL]

**Slice**: <slice name>
**Command**: `<the command that was run>`
**Exit code**: <code>
**Duration**: <time if available>

### Summary

- Passed: N
- Failed: N
- Skipped: N

### Failed Tests

1. `<test_name>` — <brief failure reason>

   ```
   <assertion message or error detail>
   ```

### Passed Tests

1. `<test_name>`

### Notes

- <any observations: warnings, slow tests, suspicious patterns>
````

## Examples
### Example 1: Rust — tests pass

Command: `cargo test -p my-auth`

```
$ cargo test -p my-auth
   Compiling my-auth v0.1.0
    Finished test target(s)
     Running unittests src/lib.rs

running 4 tests
test token::tests::test_valid_token_parses ... ok
test token::tests::test_expired_token_rejected ... ok
test token::tests::test_malformed_token_returns_error ... ok
test token::tests::test_missing_roles_uses_empty_vec ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```

### Example 2: Python — tests fail
Command: `pytest tests/test_parser.py`

```
$ pytest tests/test_parser.py
FAILED tests/test_parser.py::test_empty_file_raises_error - ImportError: cannot import name 'parse_csv'
FAILED tests/test_parser.py::test_malformed_row_skipped - ImportError: cannot import name 'parse_csv'
====== 0 passed, 2 failed in 0.12s ======
```

### Example 3: No test command provided — discovery needed
1. `glob` finds `frontend/package.json`
2. Read `package.json` → `"scripts": { "test": "vitest run" }`
3. Construct scoped command: `cd frontend && npx vitest run src/components/__tests__/Button.test.tsx`
### Example 4: Compilation error

Command: `cargo test -p my-parser`

```
$ cargo test -p my-parser
error[E0433]: failed to resolve: could not find `Parser` in `parser`
  --> src/lib.rs:15:22
   |
15 |     use crate::parser::Parser;
   |                        ^^^^^^^ not found in `parser`
```

## Edge Cases
- **No test command provided and no build system detected**: Search for a `Makefile` with a test target or a CI config with a test step. If nothing is found, report the failure clearly ("Could not determine how to run tests for this project") and report Fail.
- **Test command runs but produces no output**: Some frameworks produce no output on success. If the exit code is 0 and no failures are detected, report Pass with a note that no detailed output was available. If the exit code is non-zero with no output, report Fail and suggest checking stderr or increasing verbosity.
- **Flaky tests**: If a test sometimes passes and sometimes fails, report the observed result and note in the report that the test may be flaky (e.g., if the failure is timing-related or network-dependent). Do not re-run automatically; let the upstream agent decide.
- **Very large test output**: If the output exceeds what can reasonably be included in a report (thousands of lines), summarize: include the summary line, all failure details, and the first few passing tests. Truncate with a note about the total count.
- **Multiple test commands needed**: If test files span multiple packages that require different test commands (e.g., both a Rust crate and a Python package), run each command separately and combine the results. The overall result is Pass only if all commands pass.
- **Tests pass but with warnings**: Warnings do not affect the Pass/Fail result. Include notable warnings in the Notes section of the report (e.g., deprecation warnings, compiler warnings about unused code).
- **Zero tests executed**: If the test runner reports 0 tests run and exits with code 0, report Pass but flag this prominently; it usually means the test filter matched nothing, which may indicate stale test file paths or an incorrect scoping pattern.
- **Test command not found**: If the `shell` execution fails because the test runner binary is not installed (e.g., `command not found: pytest`), report Fail with the specific error so the upstream agent can install the dependency.
- **Workspace-level vs. package-level test commands**: Some projects run tests from the workspace root (e.g., `cargo test -p <crate>` from the repo root) while others require `cd <dir> && <command>`. Check whether the test command includes a directory change or package selector, and if it fails with a "not found" error, try running from the workspace root with a package flag.
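For the multiple-commands edge case, combining per-command results is a strict conjunction; a minimal sketch, assuming each command's result has already been reduced to a "PASS"/"FAIL" string:

```python
def combine_results(results):
    """Overall result across multiple test commands: Pass only if every
    command passed; an empty result list is treated as Fail."""
    return "PASS" if results and all(r == "PASS" for r in results) else "FAIL"
```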