
feat: add workflow plugin for repeatable multi-step processes#52

Open
jmbeach wants to merge 1 commit into wedow:master from jmbeach:feat/workflow-plugin

Conversation

jmbeach commented Mar 8, 2026

Implements a tk equivalent of beads' 'bd mol' and 'bd formula'. I'm not sure how many people are using it, but beads has support for reusable workflows: https://steveyegge.github.io/beads/workflows.
It's got a really weird chemistry analogy 🤷 .

Anyway, I've found it super useful, so it would be cool to get this integrated into the project.


Here's what you get if you run tk workflow:

Usage: tk workflow <command> [args]

Commands:
  list                     List available workflow templates
  run <name> [options]     Instantiate a workflow as tickets
    --var key=value        Set a variable (repeatable)
    --dry-run              Preview without creating tickets

Workflow files are TOML, stored in:
  .tickets/workflows/      (project-level, checked into git)
  ~/.config/ticket/workflows/  (user-level, personal)

Example (.tickets/workflows/release.toml):

  workflow = "release"
  description = "Standard release workflow"
  version = 1

  [vars.version]
  description = "Release version"
  required = true
  pattern = '^\d+\.\d+\.\d+$'

  [vars.env]
  description = "Target environment"
  default = "staging"
  enum = ["staging", "production"]

  [[steps]]
  id = "bump-version"
  title = "Bump version to {{version}}"

  [[steps]]
  id = "changelog"
  title = "Update CHANGELOG"
  needs = ["bump-version"]

  [[steps]]
  id = "test"
  title = "Run full test suite"
  needs = ["changelog"]

  [[steps]]
  id = "publish"
  title = "Publish {{version}} to {{env}}"
  needs = ["test"]
  type = "human"

Variable fields:
  required = true          Error if not provided via --var
  default = "value"        Used when --var not provided
  pattern = "regex"        Validate value against regex
  enum = ["a", "b"]        Validate value is in list

Step fields:
  id = "name"              Unique step identifier (required)
  title = "text"           Step title, supports {{var}} (required)
  description = "text"     Step description, supports {{var}}
  needs = ["step-id"]      Dependencies (wait for these steps)
  type = "human"           Informational step type
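
The variable handling described above (pattern/enum validation plus {{var}} substitution) can be sketched in Bash 3.2-compatible shell. This is purely illustrative, not the plugin's actual code; `validate_var` and `render` are hypothetical helper names, and the regex uses `[0-9]` rather than `\d` so it works with POSIX `grep -E`:

```shell
# Hypothetical sketch of pattern/enum validation and {{var}} substitution.
# Function names (validate_var, render) are illustrative, not the plugin's API.
# Bash 3.2 friendly: no associative arrays, values passed as plain arguments.

validate_var() {
  name=$1 value=$2 pattern=$3 enum=$4   # enum: space-separated allowed values
  if [ -n "$pattern" ] && ! printf '%s\n' "$value" | grep -Eq "$pattern"; then
    echo "error: $name='$value' does not match pattern: $pattern" >&2
    return 1
  fi
  if [ -n "$enum" ]; then
    for allowed in $enum; do
      [ "$value" = "$allowed" ] && return 0
    done
    echo "error: $name='$value' not one of: $enum" >&2
    return 1
  fi
  return 0
}

# Expand {{key}} placeholders in a template from key=value arguments.
# (Naive: assumes values contain no sed-special characters such as '/'.)
render() {
  template=$1; shift
  out=$template
  for pair in "$@"; do
    key=${pair%%=*}; val=${pair#*=}
    out=$(printf '%s\n' "$out" | sed "s/{{$key}}/$val/g")
  done
  printf '%s\n' "$out"
}

validate_var version "1.2.0" '^[0-9]+\.[0-9]+\.[0-9]+$' ""
render "Publish {{version}} to {{env}}" version=1.2.0 env=staging
# -> Publish 1.2.0 to staging
```

A failed validation (say, `--var version=abc`) would print the error and return nonzero, which is where the CLI could abort before creating any tickets.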

I've tested it locally and it's working. Here's the workflow definition I'm using:

workflow = "plan-and-implement-tdd"
description = "{{feature_name}} feature implementation, using openspec to plan and tk / agent teams to implement"
version = 1

[vars.feature_name]
description = "Name of the feature"
required = true

[vars.plan_file]
description = "Plan file path"
required = true

[[steps]]
id = "plan"
title = "[{{feature_name}}] Plan the feature and create artifacts using /opsx:new"
type = "human"

[[steps]]
id = "cbcp"
title = "[{{feature_name}}] Create a branch, commit, and push (use /cbcp)"
needs = ["plan"]

[[steps]]
id = "create-test-tickets"
title = "[{{feature_name}}] Create Test Tickets"
description = """
Looking at the openspec artifacts that were created for this branch (Plan file: {{plan_file}}),
create tickets (tk) for TDD (one ticket per test we need) under this ticket.
Each ticket's Acceptance Criteria should state that the new tests fail!
That's how TDD works.
DO NOT WRITE THE IMPLEMENTATION TO PASS THE TESTS: We'll do that later.
"""
needs = ["cbcp"]

[[steps]]
id = "teams-create-tests"
title = "[{{feature_name}}] Run agent team to create the tests"
description = """
Use tk on the test epic id to see which tests need to be created.

Create an agent team to write failing tests in parallel. Follow these rules:

TEAM SIZING:
- Count the number of test tickets (tk) under the epic.
- Spawn ceil(ticket_count / 5) teammates, minimum 2, maximum 5.
- Start each teammate with model "opus".

WORK PARTITIONING (file ownership):
- Assign each teammate a disjoint set of test tickets. No two teammates share a ticket.
- Each teammate OWNS the test files for their assigned tickets. No other teammate may edit those files.
- The lead must state the file ownership mapping in each teammate's spawn prompt.

SPAWN PROMPT (include all of this for every teammate):
- The openspec artifact paths for this branch (list them explicitly - the ones with plan file: {{plan_file}}).
- The test framework and conventions used in this project (detect from existing tests).
- The specific ticket IDs and descriptions assigned to this teammate.
- The exact file paths this teammate is responsible for creating.
- The instruction: "Write tests that FAIL. Do not write any implementation code."

WORKFLOW:
1. The lead partitions tickets across teammates and spawns them with detailed prompts.
2. Teammates work in parallel, each writing tests only for their assigned tickets/files.
3. After all teammates finish, the lead does a consistency review across all test files
   to check for duplicated setup, inconsistent naming, or missing coverage.
4. The lead marks each ticket as done only after verifying the tests exist and fail.

DO NOT WRITE THE IMPLEMENTATION TO PASS THE TESTS: We'll do that later.
"""
needs = ["create-test-tickets"]

[[steps]]
id = "create-implementation-tickets"
title = "[{{feature_name}}] Create Implementation tickets"
description = """
Now that all the tests are created, create an epic ticket (using /beads)
for implementation called "impl-{{feature_name}}".
Create implementation tickets under the new epic.
The descriptions / acceptance criteria for the new implementation tickets should
be based on the openspec artifacts created for this branch (the ones with plan file {{plan_file}}) and the new tests.
"""
needs = ["teams-create-tests"]

[[steps]]
id = "teams-implement"
title = "[{{feature_name}}] Run an agent team to implement the implementation tickets."
description = """
Use tk to find children of <impl-epic-id> to see the tasks that need to be implemented.

Create an agent team to implement the feature in parallel. Follow these rules:

TEAM SIZING:
- Count the number of implementation tickets under the epic.
- Spawn ceil(ticket_count / 5) teammates, minimum 2, maximum 5.
- Start each teammate with model "opus".

PLAN APPROVAL (required):
- Spawn teammates with plan approval required.
- Each teammate must submit a plan before writing any code.
- The lead reviews each plan and rejects any that:
  - Modify files owned by another teammate.
  - Don't reference the relevant failing tests.
  - Introduce unnecessary abstractions beyond what the tests require.
- Only after the lead approves a plan does the teammate begin implementation.

WORK PARTITIONING (file ownership):
- Assign each teammate a disjoint set of implementation tickets.
- Each teammate OWNS the source files for their assigned tickets. No two teammates edit the same file.
- If two tickets require changes to the same file, assign them to the same teammate.
- The lead must state the file ownership mapping in each teammate's spawn prompt.

SPAWN PROMPT (include all of this for every teammate):
- The openspec artifact paths for this branch (list them explicitly - the ones with plan file {{plan_file}}).
- The paths to the failing test files relevant to this teammate's tickets.
- The specific ticket IDs and descriptions assigned to this teammate.
- The exact source file paths this teammate is responsible for.
- The instruction: "Make the failing tests pass. Do not modify test files."

WORKFLOW:
1. The lead partitions tickets, maps file ownership, and spawns teammates with detailed prompts.
2. Each teammate submits a plan. The lead approves or rejects with feedback.
3. Approved teammates implement in parallel, each only touching their own files.
4. After all teammates finish, the lead runs the full test suite.
5. If tests fail, the lead identifies which teammate's files are involved and
   sends them a message with the failure output to fix.
6. The lead marks each ticket as done only after its tests pass.
"""
needs = ["create-implementation-tickets"]

[[steps]]
id = "codereview"
title = "[{{feature_name}}] Commit any unstaged changes then run the /codereview skill"
needs = ["teams-implement"]

[[steps]]
id = "fix-review"
title = "[{{feature_name}}] Address code review feedback"
description = """
Scan the codebase for comments containing "AI_REVIEW".
If there are no findings, skip this step.

If there are findings that need to be addressed:

Use subagents (Task tool) — NOT an agent team — to fix findings in parallel.

GROUPING:
- Parse AI_REVIEW findings and group them by file (or group of closely related files).
- Spawn one subagent per group. No two subagents should touch the same file.

SUBAGENT PROMPT (include all of this for every subagent):
- The specific AI_REVIEW comments assigned to this subagent (paste the full text).
- The file paths this subagent is responsible for.
- The instruction: "Address these review findings. Do not modify files outside your assignment."

WORKFLOW:
1. Group findings by file.
2. Spawn all subagents in parallel (multiple Task tool calls in a single message).
3. When all subagents return, review the changes for correctness.
"""
needs = ["codereview"]

[[steps]]
id = "test"
title = "[{{feature_name}}] Make sure tests still pass."
needs = ["fix-review"]

[[steps]]
id = "opsx-verify"
title = "[{{feature_name}}] Verify the implementation with opsx:verify"
needs = ["test"]

[[steps]]
id = "manual-verify"
title = "[{{feature_name}}] Optionally manually verify that the implementation actually works"
type = "human"
needs = ["opsx-verify"]

[[steps]]
id = "opsx-archive"
title = "[{{feature_name}}] Archive the artifacts with opsx:archive"
needs = ["manual-verify"]

[[steps]]
id = "push"
title = "[{{feature_name}}] Commit (if there are any unstaged changes), push, open a PR, merge"
needs = ["opsx-archive"]
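
As a side note, the team-sizing rule used in both agent-team steps above (ceil(ticket_count / 5), clamped to a minimum of 2 and a maximum of 5) comes out to a one-liner in shell arithmetic. The `team_size` helper is purely illustrative, not part of the plugin or the workflow:

```shell
# Illustrative only: the sizing rule from the workflow above,
# ceil(ticket_count / 5), clamped to a minimum of 2 and a maximum of 5.
team_size() {
  n=$1
  size=$(( (n + 4) / 5 ))   # integer ceiling of n/5
  [ "$size" -lt 2 ] && size=2
  [ "$size" -gt 5 ] && size=5
  echo "$size"
}

team_size 3    # -> 2 (ceil(3/5) = 1, raised to the minimum)
team_size 12   # -> 3
team_size 40   # -> 5 (ceil(40/5) = 8, capped at the maximum)
```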

Implements tk workflow list/run commands as a plugin. Supports TOML
workflow definitions with variable substitution, pattern/enum validation,
step dependencies, and dry-run mode. Bash 3.2 compatible.
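
For anyone curious how `needs` resolution can work without bash 4 features, here is a hypothetical sketch (not the plugin's actual code): repeatedly emit any step whose dependencies have already been emitted, and treat a pass that makes no progress as a dependency cycle. It stays Bash 3.2 / POSIX friendly by encoding each step as an `id:dep1,dep2` record instead of using associative arrays:

```shell
# Hypothetical sketch of `needs` ordering. Each record is "id:dep1,dep2".
# Repeatedly emit steps whose deps were already emitted; if a full pass
# makes no progress, the remaining records form a dependency cycle.
order_steps() {
  emitted=""
  remaining=$*
  while [ -n "$remaining" ]; do
    progress=0
    next=""
    for rec in $remaining; do
      step=${rec%%:*}
      deps=$(printf '%s' "${rec#*:}" | tr ',' ' ')
      ok=1
      for dep in $deps; do
        case " $emitted " in *" $dep "*) ;; *) ok=0 ;; esac
      done
      if [ "$ok" = 1 ]; then
        echo "$step"
        emitted="$emitted $step"
        progress=1
      else
        next="$next $rec"
      fi
    done
    [ "$progress" = 1 ] || { echo "error: dependency cycle among:$next" >&2; return 1; }
    remaining=$next
  done
}

# The release example's chain comes out in dependency order:
order_steps "bump-version:" "changelog:bump-version" "test:changelog" "publish:test"
```

The same loop doubles as validation for `--dry-run`: if it returns nonzero, the workflow has a cycle and no tickets should be created.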

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
