DAK AI Skill Library — Requirements
Status: DRAFT v0.8 (2026-03-05)
Repo: WorldHealthOrganization/smart-base
Inspiration: jtlicardo/bpmn-assistant (MIT) — LLMFacade copy-lifted with gratitude and attribution
BPMN authoring is one skill within SMART Guidelines and IG authoring.
IG authoring is one category of skills; more (e.g. decision support execution, FHIR resource authoring) will come in future versions.
1. Goals
- Generate and edit standard BPMN 2.0 XML natively — no Zeebe/Camunda extensions
- Enforce DAK constraints: innermost swimlanes = DAK personas = ActorDefinition
- Import BPMN from input/business-processes/ via bpmn_extractor.py + bpmn2fhirfsh.xsl; validate FSH output
- Provide a common BPMN include generated from compiled FHIR instances
- IG Publisher validation as GitHub Actions and PR slash commands
- Issue classification → content-layer labels → specialized skill triggers
- Hard constraint: Zero WHO hosting cost. Zero WHO AI cost.
Skills run as GitHub Actions (repo-owner secret) or locally via Docker (user's .env)
2. Key Conventions
2.1 BPMN Storage Path
input/business-processes/*.bpmn
Confirmed from bpmn_extractor.py: glob.glob("input/business-processes/*bpmn").
SVG files from the same directory are handled by svg_extractor.py.
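The discovery step can be sketched in a few lines (the helper name is ours; the glob pattern is the one confirmed above):

```python
import glob
import os

def find_bpmn_files(repo_root: str = ".") -> list[str]:
    """List BPMN sources the same way bpmn_extractor.py does.
    Note the confirmed pattern is "*bpmn" (no dot), so any filename
    ending in "bpmn" matches, not only "*.bpmn"."""
    pattern = os.path.join(repo_root, "input", "business-processes", "*bpmn")
    return sorted(glob.glob(pattern))
```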
2.2 Lane ID → ActorDefinition Mapping
BPMN: <lane id="X" name="Some Name">
FSH: Instance: X
InstanceOf: $SGActor
Title: "Some Name"
The lane @id is the bare FSH instance ID — no DAK. prefix on the lane itself.
bpmn2fhirfsh.xsl generates the file as ActorDefinition-DAK.{@id}.fsh and
sets * id = "DAK.{@id}" inside the FSH, but the lane @id in BPMN = the bare instance name.
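A tiny helper makes the convention above mechanical (the function is illustrative only, not part of bpmn2fhirfsh.xsl):

```python
def actor_artifacts_for_lane(lane_id: str, lane_name: str) -> dict:
    """Derive the FSH artifact names implied by a BPMN lane, per the
    convention above: bare lane @id in BPMN, DAK.-prefixed id in FSH."""
    return {
        "fsh_instance": lane_id,                        # Instance: X
        "fsh_title": lane_name,                         # Title: "Some Name"
        "fsh_id": f"DAK.{lane_id}",                     # * id = "DAK.X"
        "fsh_file": f"ActorDefinition-DAK.{lane_id}.fsh",
    }
```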
3. Security Model for API Keys
Hard constraints:
- API keys MUST NOT appear in dispatch inputs, issue comments, PR comments, or any user-visible UI
- Exactly two legitimate locations: repo secret (CI) or local .env file (Docker/local)
- LLM steps skip gracefully when no key present — non-LLM validation always runs
- Zero WHO infrastructure cost; zero WHO AI cost
3.1 Two Execution Contexts
| | GitHub Actions | Local Docker |
|---|---|---|
| Key source | ${{ secrets.DAK_LLM_API_KEY }} (repo owner sets once) | .env file (gitignored) |
| Billed to | Whoever owns the key (repo owner's choice) | Developer's own LLM account |
| WHO cost | $0 | $0 |
| Triggered by | push / PR / label / slash command / scheduled | Developer CLI |
3.2 Graceful Degradation
# Every LLM-powered skill action starts with this check:
import os
import sys

api_key = os.environ.get("DAK_LLM_API_KEY", "")
if not api_key:
    print("⚠️ DAK_LLM_API_KEY not set — LLM step skipped (structural validation still runs)")
    sys.exit(0)  # exit 0 = skip, not failure

| Skill | No key | With key |
|---|---|---|
| BPMN structure validation | ✅ runs | ✅ runs |
| Swimlane ↔ ActorDef validation | ✅ runs | ✅ runs |
| IG Publisher build/validate | ✅ runs | ✅ runs |
| Issue classification | keyword fallback | LLM classification |
| LLM BPMN authoring | ⏭️ skipped | ✅ runs |
| LLM error interpretation | ⏭️ skipped | ✅ runs |
3.3 Repo Secret Setup (one-time, by repo owner)
Repository → Settings → Secrets and variables → Actions
→ New repository secret: DAK_LLM_API_KEY = sk-...
Repository → Settings → Secrets and variables → Variables
→ New variable: DAK_LLM_MODEL = gpt-4o
4. Environment Parity: Local = CI
4.1 CI Environment (authoritative)
From ghbuild.yml — the IG build uses hl7fhir/ig-publisher-base:latest with these additions:
apt-get install python3 python3-pip python3-venv
pip3 install GitPython>=3.1.40 PyYAML>=6.0 requests>=2.28.0 lxml
npm install -g fsh-sushi
curl .../publisher.jar -o ./input-cache/publisher.jar
4.2 Local Docker Image
# DAK Skill Library — Local Development Image
# Mirrors ghbuild.yml CI environment exactly.
# Base: hl7fhir/ig-publisher-base (Jekyll, Ruby, Java 17, Node.js)
FROM hl7fhir/ig-publisher-base:latest
LABEL org.opencontainers.image.title="DAK Skill Library"
LABEL org.opencontainers.image.source="https://github.com/WorldHealthOrganization/smart-base"
# Python packages — identical to ghbuild.yml
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 python3-pip python3-venv \
&& ln -sf /usr/bin/python3 /usr/bin/python \
&& pip3 install --break-system-packages \
"GitPython>=3.1.40" \
"PyYAML>=6.0" \
"requests>=2.28.0" \
"lxml" \
"litellm>=1.0.0" \
"pdfplumber" \
"pandas" \
&& rm -rf /var/lib/apt/lists/*
# SUSHI — identical to ghbuild.yml
RUN npm install -g fsh-sushi
# IG Publisher jar — pre-baked so local runs don't need network
# Override: -v /local/publisher.jar:/app/publisher.jar
RUN mkdir -p /app/input-cache \
&& curl -L \
https://github.com/HL7/fhir-ig-publisher/releases/latest/download/publisher.jar \
-o /app/input-cache/publisher.jar
# DAK skill library
COPY . /app/skills/
RUN pip3 install --break-system-packages -e /app/skills/
# Workspace — mount IG repo here: -v $(pwd):/workspace
WORKDIR /workspace
ENV PUBLISHER_JAR=/app/input-cache/publisher.jar
ENV DAK_IG_ROOT=/workspace
ENTRYPOINT ["python3", "/app/skills/cli/dak_skill.py"]
CMD ["--help"]
4.3 docker-compose.yml
# DAK Skills — local compose
# Alias: alias dak='docker compose -f .github/skills/docker-compose.yml run --rm'
# Usage: dak validate | dak validate-ig | dak import-bpmn | dak author "..." | dak shell
version: "3.9"
x-dak: &dak
image: dak-skill:latest
build:
context: .github/skills
dockerfile: Dockerfile
volumes:
- .:/workspace
- dak-pkg:/var/lib/.fhir
- dak-igcache:/workspace/fhir-package-cache
env_file: [.env]
working_dir: /workspace
services:
validate: { <<: *dak, command: [validate] }
validate-ig: { <<: *dak, command: [validate-ig] }
import-bpmn: { <<: *dak, command: [import-bpmn] }
build-ig: { <<: *dak, command: [build-ig] }
author: { <<: *dak, command: [author] }
shell: { <<: *dak, entrypoint: /bin/bash }
volumes:
dak-pkg:
  dak-igcache:
4.4 .env.example (committed; .env gitignored)
# DAK Skill Library — local config. Copy to .env (never commit .env).
#
# LLM features (authoring, error interpretation, classification):
# Leave blank → LLM steps skipped, structural validation still runs.
# Billed to YOUR account, not WHO.
#
DAK_LLM_API_KEY= # sk-... (OpenAI) | sk-ant-... (Anthropic) | leave blank
DAK_LLM_MODEL=gpt-4o # gpt-4o | gpt-4o-mini | claude-3-5-sonnet-20241022 | gemini-2.0-flash
# IG Publisher (usually defaults are fine)
DAK_TX_SERVER= # optional custom terminology server
5. Issue Classification → Labels
5.1 How GitHub Labels Work (for reference)
GitHub labels are predefined per repository. When a user creates or edits an issue, the right sidebar shows a Labels picker — a dropdown of all labels defined in the repo. Users select from this list; they cannot free-type new labels. The repo owner creates the label set once under Issues → Labels or via the API.
This means:
- The four content:* labels must be created in the repo first (one-time setup)
- After that, any contributor can apply them from the issue sidebar UI
- The classifier workflow can also apply them automatically via GITHUB_TOKEN
- Both paths trigger the same on: issues: types: [labeled] workflows
5.2 Label Taxonomy
| Label | Color | Description |
|---|---|---|
| content:L1 | #0075ca (blue) | WHO source guideline: recommendations, evidence, narrative |
| content:L2 | #e4e669 (yellow) | DAK FHIR assets: BPMN, actors, questionnaires, CQL, data elements |
| content:L3 | #d73a4a (red) | Implementation adaptations: national/program-level customizations |
| content:translation | #0e8a16 (green) | Translations: any layer, any UN language |
Labels are not mutually exclusive — one issue can carry multiple.
5.3 Auto-Classification Workflow
name: Classify Issue
on:
issues:
types: [opened, edited]
jobs:
classify:
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Check DAK enabled
id: dak
run: |
[ -f dak.json ] \
&& echo "enabled=true" >> $GITHUB_OUTPUT \
|| echo "enabled=false" >> $GITHUB_OUTPUT
- name: Classify and label
if: steps.dak.outputs.enabled == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DAK_LLM_API_KEY: ${{ secrets.DAK_LLM_API_KEY }}
DAK_LLM_MODEL: ${{ vars.DAK_LLM_MODEL || 'gpt-4o-mini' }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
ISSUE_TITLE: ${{ github.event.issue.title }}
ISSUE_BODY: ${{ github.event.issue.body }}
run: python3 .github/skills/dak_authoring/actions/classify_issue_action.py
5.4 Keyword Classification (fallback, no LLM needed)
Expanded keyword lists covering realistic issue language across all DAK content types:
"""
Issue classifier — applies content:L1/L2/L3/translation labels.
Uses LLM when DAK_LLM_API_KEY is set; falls back to keyword matching.
Both paths use the same label application logic.
"""
# ── Keyword lists ──────────────────────────────────────────────────────────
L1_KEYWORDS = [
# WHO guideline source
"recommendation", "who recommendation", "guideline", "who guideline",
"clinical guideline", "evidence", "evidence base", "evidence-based",
"narrative", "who narrative", "source content", "who document",
# Sections of WHO guideline documents
"section 2", "section 3", "section 4", "annex", "appendix",
"executive summary", "background", "scope", "target population",
# Clinical content
"clinical", "intervention", "outcome", "efficacy", "safety",
"contraindication", "dosage", "dose", "regimen", "protocol",
"screening", "diagnosis", "treatment", "management", "referral",
"counselling", "counseling", "antenatal", "postnatal", "maternal",
"newborn", "child", "adolescent", "immunization", "vaccination",
# Process
"new recommendation", "update recommendation", "change guideline",
"outdated", "superseded", "retracted",
]
L2_KEYWORDS = [
# BPMN / process
"bpmn", "business process", "swimlane", "swim lane", "workflow",
"process diagram", "process model", "process flow", "flow diagram",
"lane", "pool", "gateway", "sequence flow", "start event", "end event",
"user task", "service task", "business rule", "send task", "receive task",
# Personas / actors
"persona", "actor", "actordefinition", "actor definition",
"health worker", "healthcare worker", "community health worker", "chw",
"clinician", "nurse", "midwife", "physician", "doctor", "pharmacist",
"supervisor", "facility", "patient", "client", "caregiver",
# FHIR resources / FSH
"fhir", "fsh", "sushi", "profile", "instance", "extension",
"codesystem", "code system", "valueset", "value set", "conceptmap",
"structuredefinition", "logical model", "implementation guide", "ig",
# DAK components
"questionnaire", "data element", "data dictionary", "decision table",
"decision logic", "cql", "clinical quality language", "library",
"plandefinition", "activitydefinition", "measure",
"requirement", "non-functional", "functional requirement",
# DAK L2 editorial
"dak", "digital adaptation kit", "l2", "component 2", "component 3",
"component 4", "component 5", "component 6", "component 7", "component 8",
"business process", "generic persona", "related persona",
"core data element", "decision support", "scheduling logic",
"indicator", "performance indicator",
]
L3_KEYWORDS = [
# Geographic / organizational scope
"national", "country", "country-specific", "country adaptation",
"local", "regional", "district", "sub-national",
"program", "programme", "program-level", "programme-level",
# Adaptation process
"adaptation", "adapt", "localize", "localise", "contextualize",
"contextualise", "customise", "customize", "context-specific",
"l3", "layer 3", "implementation guide", "conformance",
# System / interoperability
"system", "ehr", "emr", "electronic health record",
"health information system", "his", "dhis2", "openemr", "openmrs",
"mapping", "terminology mapping", "code mapping",
"interoperability", "integration", "api", "openapi",
"capability statement",
]
TRANSLATION_KEYWORDS = [
# Languages
"translation", "translate", "translated", "translating",
"arabic", "عربي", "ar",
"chinese", "mandarin", "中文", "zh",
"french", "français", "francais", "fr",
"russian", "русский", "ru",
"spanish", "español", "espanol", "es",
"portuguese", "português", "pt",
# Translation tooling
"weblate", "po file", ".po", "pot file", ".pot", "gettext",
"msgstr", "msgid", "locale", "localization", "localisation",
"i18n", "l10n", "internationalization",
# Translation issues
"mistranslation", "mistranslated", "wrong translation",
"translation error", "translation review", "translation update",
"string", "untranslated", "missing translation",
]
def classify_by_keywords(title: str, body: str) -> list[str]:
"""Keyword-based fallback classifier. Case-insensitive. No LLM needed."""
text = (title + " " + (body or "")).lower()
labels = []
if any(k in text for k in L1_KEYWORDS): labels.append("content:L1")
if any(k in text for k in L2_KEYWORDS): labels.append("content:L2")
if any(k in text for k in L3_KEYWORDS): labels.append("content:L3")
if any(k in text for k in TRANSLATION_KEYWORDS): labels.append("content:translation")
return labels
def apply_labels(issue_number: int, labels: list[str]) -> None:
"""Apply labels to issue via GitHub REST API using GITHUB_TOKEN."""
import requests, os
token = os.environ["GITHUB_TOKEN"]
repo = os.environ["GITHUB_REPOSITORY"]
if not labels:
return
r = requests.post(
f"https://api.github.com/repos/{repo}/issues/{issue_number}/labels",
headers={"Authorization": f"Bearer {token}",
"Accept": "application/vnd.github+json"},
json={"labels": labels},
timeout=10,
)
r.raise_for_status()
print(f"✅ Applied labels: {labels}")
def main():
import os, sys
from common.prompts import load_prompt
from common.smart_llm_facade import SmartLLMFacade
issue_number = int(os.environ["ISSUE_NUMBER"])
title = os.environ.get("ISSUE_TITLE", "")
body = os.environ.get("ISSUE_BODY", "")
api_key = os.environ.get("DAK_LLM_API_KEY", "")
if api_key:
# LLM path
prompt = load_prompt("dak_authoring", "classify_issue",
issue_title=title, issue_body=body[:4000])
llm = SmartLLMFacade(api_key=api_key,
model=os.environ.get("DAK_LLM_MODEL", "gpt-4o-mini"))
result = llm.call(prompt, structured_output=True)
labels = result.get("labels", [])
print(f"LLM classification: {result.get('reasoning')}")
else:
# Keyword fallback — no LLM cost
labels = classify_by_keywords(title, body)
print(f"⚠️ No LLM key — keyword fallback used. Labels: {labels}")
apply_labels(issue_number, labels)
if __name__ == "__main__":
    main()
5.5 Label-Triggered Skill Workflows
Each label triggers a dedicated workflow. Pattern is identical for all four; shown for L2:
name: L2 DAK Content Skill
on:
issues:
types: [labeled]
jobs:
dak-authoring:
if: github.event.label.name == 'content:L2'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
contents: write
steps:
- uses: actions/checkout@v4
- name: Run L2 DAK authoring skill
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DAK_LLM_API_KEY: ${{ secrets.DAK_LLM_API_KEY }}
DAK_LLM_MODEL: ${{ vars.DAK_LLM_MODEL || 'gpt-4o' }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
ISSUE_TITLE: ${{ github.event.issue.title }}
ISSUE_BODY: ${{ github.event.issue.body }}
run: python3 .github/skills/dak_authoring/actions/dak_authoring_action.py
| Label | Workflow | Entry point |
|---|---|---|
| content:L1 | skill-l1-review.yml | l1_review/actions/l1_review_action.py |
| content:L2 | skill-l2-dak.yml | dak_authoring/actions/dak_authoring_action.py |
| content:L3 | skill-l3-review.yml | l3_review/actions/l3_review_action.py |
| content:translation | skill-translation.yml | translation/actions/translation_action.py |
6. PR Slash Commands
6.1 /validate (no key needed for structural check)
Modeled on existing pr-deploy-slash.yml. Triggers on PR comment starting with /validate.
Uses ${{ secrets.DAK_LLM_API_KEY }} for optional LLM error interpretation — silently skipped if absent.
name: PR Slash-Command Validate
on:
issue_comment:
types: [created]
jobs:
validate:
if: >
github.event.issue.pull_request != null &&
startsWith(github.event.comment.body, '/validate')
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: read
contents: read
steps:
- name: Acknowledge
uses: actions/github-script@v7
with:
script: |
await github.rest.reactions.createForIssueComment({
owner: context.repo.owner, repo: context.repo.repo,
comment_id: context.payload.comment.id, content: 'eyes',
});
- name: Get PR branch
id: pr
uses: actions/github-script@v7
with:
script: |
const pr = await github.rest.pulls.get({
owner: context.repo.owner, repo: context.repo.repo,
pull_number: context.issue.number,
});
core.setOutput('branch', pr.data.head.ref);
- uses: actions/checkout@v4
with:
ref: ${{ steps.pr.outputs.branch }}
- name: Run DAK structural validation (always runs, no key needed)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.issue.number }}
run: python3 .github/skills/ig_publisher/actions/validate_dak_action.py
- name: Run LLM error interpretation (skipped if no key)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DAK_LLM_API_KEY: ${{ secrets.DAK_LLM_API_KEY }}
DAK_LLM_MODEL: ${{ vars.DAK_LLM_MODEL || 'gpt-4o-mini' }}
PR_NUMBER: ${{ github.event.issue.number }}
run: python3 .github/skills/ig_publisher/actions/interpret_errors_action.py
7. Repository Layout
smart-base/
├── .env.example # Committed; gitignored: .env
├── input/
│ └── business-processes/ # BPMN source files (confirmed path)
│ └── *.bpmn
├── .github/
│ ├── workflows/
│ │ ├── ci.yml # Existing
│ │ ├── ghbuild.yml # Existing (hl7fhir/ig-publisher-base Docker)
│ │ ├── pr-deploy-slash.yml # Existing: /deploy
│ │ ├── pr-validate-slash.yml # NEW: /validate
│ │ ├── classify-issue.yml # NEW: auto-label on issue open/edit
│ │ ├── skill-l1-review.yml # NEW: content:L1 label trigger
│ │ ├── skill-l2-dak.yml # NEW: content:L2 label trigger
│ │ ├── skill-l3-review.yml # NEW: content:L3 label trigger
│ │ └── skill-translation.yml # NEW: content:translation label trigger
│ └── skills/
│ ├── Dockerfile # FROM hl7fhir/ig-publisher-base — mirrors CI
│ ├── docker-compose.yml # Service aliases: validate, author, import, shell
│ ├── README.md
│ ├── skills_registry.yaml
│ ├── cli/
│ │ └── dak_skill.py # dak-skill CLI entry point
│ ├── common/
│ │ ├── smart_llm_facade.py # Copy-lifted (bpmn-assistant, attributed)
│ │ ├── prompts.py # load_prompt() — plain Python str.format_map
│ │ ├── ig_errors.py # FATAL/ERROR/WARNING/INFORMATION
│ │ ├── fsh_utils.py
│ │ ├── ig_publisher_iface.py # Wraps run_ig_publisher.py
│ │ └── prompts/
│ │ ├── dak_bpmn_constraints.md
│ │ ├── bpmn_xml_schema.md
│ │ └── actor_context.md
│ ├── bpmn_author/
│ │ ├── skills.yaml
│ │ ├── prompts/
│ │ │ ├── create_or_edit_bpmn.md # includes dak_bpmn_constraints, bpmn_xml_schema
│ │ │ └── validate_bpmn.md
│ │ ├── validators/
│ │ │ ├── bpmn_xml_validator.py # lxml: structure, no Zeebe, wellformedness
│ │ │ └── swimlane_validator.py # lanes present, no orphan tasks, no dup IDs
│ │ └── actions/
│ │ └── bpmn_author_action.py
│ ├── bpmn_import/
│ │ ├── skills.yaml
│ │ ├── prompts/
│ │ │ └── interpret_import_errors.md
│ │ ├── validators/
│ │ │ └── swimlane_actor_validator.py # lane id X → Instance: X exists?
│ │ └── actions/
│ │ └── bpmn_import_action.py # wraps bpmn_extractor.py
│ ├── ig_publisher/
│ │ ├── skills.yaml
│ │ ├── prompts/
│ │ │ ├── interpret_ig_errors.md
│ │ │ └── validate_dak.md
│ │ └── actions/
│ │ ├── validate_dak_action.py
│ │ ├── validate_ig_action.py
│ │ ├── interpret_errors_action.py
│ │ └── build_ig_action.py
│ ├── dak_authoring/
│ │ ├── skills.yaml
│ │ ├── prompts/
│ │ │ ├── classify_issue.md
│ │ │ ├── l2_authoring.md
│ │ │ └── change_proposal.md
│ │ └── actions/
│ │ ├── classify_issue_action.py # LLM + keyword fallback
│ │ └── dak_authoring_action.py
│ ├── l1_review/ # v0.2
│ ├── l3_review/ # v0.3
│ └── translation/ # v0.3
8. BPMN Skill Groups
8.1 bpmn_author — Author / Edit BPMN
Scope: Create or edit input/business-processes/*.bpmn. Knows BPMN 2.0 structure and DAK swimlane rules only. Zero knowledge of FHIR extraction.
DAK Lane Rules:
- <lane id="X"> = bare ID — X must be a valid FSH instance identifier
- <lane name="Y"> = human name — will become Title: "Y" in generated FSH
- Innermost lanes only (no <childLaneSet> child) map to personas
- No Zeebe/Camunda namespaces in any generated BPMN
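A minimal structural check for two of these rules might look like the following sketch (the function name is ours, not the actual bpmn_xml_validator.py API; namespace declarations are scanned textually for simplicity):

```python
import re
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def check_dak_bpmn(xml_text: str) -> list[str]:
    """Return rule violations: vendor (Zeebe/Camunda) namespaces,
    and lanes that are not innermost (they contain a childLaneSet)."""
    errors = []
    # Rule: no Zeebe/Camunda namespaces — scan xmlns declarations
    for uri in re.findall(r'xmlns(?::\w+)?="([^"]+)"', xml_text):
        if "zeebe" in uri.lower() or "camunda" in uri.lower():
            errors.append(f"vendor namespace not allowed: {uri}")
    # Rule: only innermost lanes map to personas
    root = ET.fromstring(xml_text)
    for lane in root.iter(f"{{{BPMN_NS}}}lane"):
        if lane.find(f"{{{BPMN_NS}}}childLaneSet") is not None:
            errors.append(f"lane '{lane.get('id')}' is not innermost")
    return errors
```

The real validator would likely use lxml (already a CI dependency) and report through ig_errors.py; this standalone version only illustrates the two checks.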
8.2 bpmn_import — Import → FSH
Scope: Import input/business-processes/*.bpmn files via existing bpmn_extractor.py pipeline.
- Invokes bpmn_extractor.py → applies bpmn2fhirfsh.xsl
- Validates: every innermost <lane id="X"> → input/fsh/actors/ActorDefinition-DAK.X.fsh exists
- Reports in IG Publisher error format via ig_errors.py
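The lane-to-actor check can be sketched as follows (names are ours, not the actual swimlane_actor_validator.py API; messages use the IG Publisher SEVERITY prefix convention):

```python
import os
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

def validate_lane_actors(bpmn_path: str,
                         fsh_dir: str = "input/fsh/actors") -> list[str]:
    """For every innermost lane id X, require ActorDefinition-DAK.X.fsh."""
    findings = []
    root = ET.parse(bpmn_path).getroot()
    for lane in root.iter(BPMN_NS + "lane"):
        if lane.find(BPMN_NS + "childLaneSet") is not None:
            continue  # only innermost lanes map to personas
        lane_id = lane.get("id")
        expected = os.path.join(fsh_dir, f"ActorDefinition-DAK.{lane_id}.fsh")
        if not os.path.isfile(expected):
            findings.append(f"ERROR: lane '{lane_id}' has no {expected}")
    return findings
```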
8.3 Common BPMN Include
Generated by extending generate_smart_liquid.py with generate_bpmn_common_liquid():
- Source: output/ActorDefinition-*.json — compiled FHIR instances (authoritative)
- Output: input/includes/bpmn-common.liquid
- Same post-build pattern as existing smart.liquid generation
9. All Skills MUST
- Read DAK_LLM_API_KEY + DAK_LLM_MODEL from env only — never from any user input
- Skip LLM steps gracefully (exit 0, log warning) when key absent
- Run structural validation regardless of key presence
- Use ig_errors.py FATAL/ERROR/WARNING/INFORMATION format for all output
- Be invocable as: dak-skill <cmd> (Docker CLI), or as a GitHub Actions step
- Use .md prompt files with {variable} placeholders — load_prompt() helper
- Include skills.yaml manifest
- Reference originating GitHub issue number in all generated artifacts
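The prompt convention (.md files, {variable} placeholders, str.format_map) could be implemented roughly like this — a sketch only; the real common/prompts.py may differ, and the root parameter is added here for testability:

```python
from pathlib import Path

def load_prompt(skill: str, name: str,
                root: Path = Path(".github/skills"), **variables) -> str:
    """Load <root>/<skill>/prompts/<name>.md and fill {variable}
    placeholders via str.format_map; unknown placeholders stay literal."""
    class _Keep(dict):
        def __missing__(self, key):
            return "{" + key + "}"  # leave unfilled placeholders intact
    text = (root / skill / "prompts" / f"{name}.md").read_text(encoding="utf-8")
    return text.format_map(_Keep(variables))
```

This matches the call shape used in §5.4, e.g. load_prompt("dak_authoring", "classify_issue", issue_title=..., issue_body=...).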
10. One-Time Repository Setup
Steps the repo owner performs once, documented in .github/skills/README.md:
1. Create labels (Issues → Labels → New label):
content:L1 #0075ca "WHO source guideline content"
content:L2 #e4e669 "DAK FHIR assets"
content:L3 #d73a4a "Implementation adaptations"
content:translation #0e8a16 "Translation of any content layer"
2. Add secret (Settings → Secrets and variables → Actions → New repository secret):
DAK_LLM_API_KEY = sk-...
3. Add variable (Settings → Secrets and variables → Variables → New variable):
DAK_LLM_MODEL = gpt-4o (or gpt-4o-mini to reduce cost)
4. Build local Docker image (optional, for local development):
docker build -t dak-skill .github/skills/
11. Open Questions (v0.1 scope)
- Lane ID characters: Any restrictions on valid BPMN <lane id> characters that map cleanly to FSH instance IDs? (FSH IDs allow [A-Za-z0-9\-\.])
- /validate access control: Any PR commenter can trigger, or restrict to collaborators with triage access?
- Classification for edited issues: Re-run classifier when issue body is edited — add/replace labels or preserve manually-applied ones?
- content:L2 auto-trigger: When the classifier applies content:L2, the skill-l2-dak.yml workflow immediately triggers. Is this desired, or should there be a deliberate human opt-in step (e.g., a separate dak:process label)?
12. Next Steps
- Resolve open questions (§11)
- Create GitHub issue in smart-base tracking this work
- Create four content:* labels in repo
- Write .env.example + update .gitignore
- Write Dockerfile + docker-compose.yml
- Scaffold .github/skills/ directory structure
- Copy-lift LLMFacade → common/smart_llm_facade.py (attributed)
- Implement common/prompts.py, common/ig_errors.py, ig_publisher_iface.py
- Extend generate_smart_liquid.py → generate_bpmn_common_liquid()
- Write classify_issue_action.py with expanded keyword lists
- Draft classify-issue.yml + four skill-*.yml label-triggered workflows
- Draft pr-validate-slash.yml
- bpmn_author first: validators, prompts, action
- bpmn_import second: swimlane validator (lane id X → Instance: X), import action
- ig_publisher third: validate + interpret actions
- Write cli/dak_skill.py + skills_registry.yaml
- Write README.md with one-time setup instructions