From 945021988eb9c1819609ed852b08d179c77e3f1a Mon Sep 17 00:00:00 2001 From: Don Syme Date: Sat, 28 Feb 2026 03:01:15 +0000 Subject: [PATCH 1/7] analysis --- DEADCODE.md | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 212 insertions(+) create mode 100644 DEADCODE.md diff --git a/DEADCODE.md b/DEADCODE.md new file mode 100644 index 0000000000..0289482622 --- /dev/null +++ b/DEADCODE.md @@ -0,0 +1,212 @@ +# Dead Code Removal Plan + +## ⚠️ Critical Lesson Learned (Session 1 failure) + +Running `deadcode ./cmd/...` only analyses the main binary entry points in `cmd/`. +It is **blind to `internal/tools/` programs**, which are separate binaries with their own `main()` functions, called by the Makefile and CI. + +In Session 1 we deleted `pkg/cli/actions_build_command.go` and `internal/tools/actions-build/` because `deadcode ./cmd/...` reported them as unreachable — but they are actively used by `make actions-build` / `make actions-validate` in CI. + +**Correct command:** +```bash +deadcode ./cmd/... ./internal/tools/... 2>/dev/null +``` + +This covers all entry points. The 381-entry list from Session 1 is **invalid** — regenerate it. + +--- + +## Methodology + +Dead code is identified using: +```bash +deadcode ./cmd/... ./internal/tools/... 2>/dev/null +``` + +The tool reports unreachable functions/methods from ALL entry points (`cmd/` + `internal/tools/`). +It does NOT report unreachable constants, variables, or types — only functions. 
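
The "(N/M dead)" tallies used throughout this plan can be derived mechanically from a captured report. A minimal sketch, assuming deadcode's usual `path/file.go:line:col: unreachable func: Name` line format (verify against your version's actual output; the sample paths are illustrative):

```bash
#!/bin/sh
# Tally dead functions per file from a captured deadcode report.
count_dead_per_file() {
  awk -F: '/unreachable func/ { n[$1]++ } END { for (f in n) printf "%d\t%s\n", n[f], f }' "$1" |
    sort -rn
}

# Example against a small captured report:
cat > /tmp/deadcode.txt <<'EOF'
pkg/workflow/bundler.go:10:1: unreachable func: Bundle
pkg/workflow/bundler.go:42:1: unreachable func: bundleOne
pkg/console/form.go:7:1: unreachable func: RunForm
EOF
count_dead_per_file /tmp/deadcode.txt
```

Files whose tally equals their total function count are the "fully dead" candidates in Phase 1; the denominator still has to come from the source itself (e.g. `grep -c '^func ' file.go` as a rough cut).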
+ +**Important rules:** +- **Always include `./internal/tools/...` in the deadcode command** +- Run `go build ./...` after every batch +- Run `go vet ./...` to catch test compilation errors (cheaper than `go test`) +- Run `go test -tags=integration ./pkg/affected/...` to spot-check +- Always check if a "fully dead" file contains live constants/vars before deleting +- The deadcode list was generated before any deletions; re-run after major batches + +--- + +## ⚠️ Status: Plan needs regeneration + +The phases and batches below were based on the **incorrect** `./cmd/...`-only scan. +Before proceeding, reset to main and regenerate the list with: + +```bash +deadcode ./cmd/... ./internal/tools/... 2>/dev/null | tee /tmp/deadcode-correct.txt | wc -l +``` + +The groups below are a rough guide but individual entries may differ. + +--- + +## Phase 1: Fully Dead Files + +These files have ALL their functions dead. Each must be checked for: +- [ ] Live constants, variables, or types used elsewhere +- [ ] Test files that reference the deleted functions +- [ ] `internal/tools/` dependencies + +### Group 1A: CLI fully dead files (re-verify after fix) +- [ ] `pkg/cli/actions_build_command.go` — **⚠️ NOT dead: used by `internal/tools/actions-build/`** +- [ ] `pkg/cli/exec.go` — re-verify with corrected command +- [ ] `pkg/cli/generate_action_metadata_command.go` — **⚠️ NOT dead: used by `internal/tools/generate-action-metadata/`** +- [ ] `pkg/cli/logs_display.go` (1/1 dead) → surgery on `logs_overview_test.go` +- [ ] `pkg/cli/mcp_inspect_safe_inputs_inspector.go` (1/1 dead) → delete `mcp_inspect_safe_inputs_test.go` +- [ ] `pkg/cli/validation_output.go` (2/2 dead) + +### Group 1B: Console fully dead files (3 files) +- [ ] `pkg/console/form.go` (1/1 dead) → delete `form_test.go` +- [ ] `pkg/console/layout.go` (4/4 dead) → surgery on `golden_test.go` +- [ ] `pkg/console/select.go` (2/2 dead) + +### Group 1C: Misc utility fully dead files (4 files) +- [ ] 
`pkg/logger/error_formatting.go` (1/1 dead) +- [ ] `pkg/parser/ansi_strip.go` (1/1 dead) → surgery on frontmatter tests +- [ ] `pkg/parser/virtual_fs_test_helpers.go` (1/1 dead, test helper only) +- [ ] `pkg/stringutil/paths.go` (1/1 dead) → delete `paths_test.go` + +### Group 1D: Workflow bundler fully dead files (5 files) +These are the JS bundler subsystem — entirely unused. +- [ ] `pkg/workflow/bundler.go` (6/6 dead) → delete 14+ bundler test files +- [ ] `pkg/workflow/bundler_file_mode.go` (12/12 dead) — **CAUTION: contains live const `SetupActionDestination`** +- [ ] `pkg/workflow/bundler_runtime_validation.go` (3/3 dead) +- [ ] `pkg/workflow/bundler_safety_validation.go` (3/3 dead) +- [ ] `pkg/workflow/bundler_script_validation.go` (2/2 dead) + +### Group 1E: Workflow other fully dead files (9 files) +- [ ] `pkg/workflow/compiler_string_api.go` (2/2 dead) → delete `compiler_string_api_test.go` +- [ ] `pkg/workflow/compiler_test_helpers.go` (3/3 dead) — test helper, check usage +- [ ] `pkg/workflow/copilot_participant_steps.go` (3/3 dead) +- [ ] `pkg/workflow/dependency_tracker.go` (2/2 dead) +- [ ] `pkg/workflow/env_mirror.go` (2/2 dead) +- [ ] `pkg/workflow/markdown_unfencing.go` (1/1 dead) +- [ ] `pkg/workflow/prompt_step.go` (2/2 dead) — **CAUTION: may be referenced by tests** +- [ ] `pkg/workflow/safe_output_builder.go` (10/10 dead) — **CAUTION: contains live type `ListJobBuilderConfig`** +- [ ] `pkg/workflow/sh.go` (5/5 dead) — **CAUTION: contains live constants (prompts dir, file names) and embed directive** + +--- + +## Phase 2: Near-Fully Dead Files (high value, some surgery) + +These files are mostly dead and worth cleaning next: + +- [ ] `pkg/workflow/script_registry.go` (11/13 dead) — keep only `GetActionPath`, `DefaultScriptRegistry` +- [ ] `pkg/workflow/artifact_manager.go` (14/16 dead) — remove 14 functions +- [ ] `pkg/constants/constants.go` (13/27 dead) — remove 13 constants +- [ ] `pkg/workflow/map_helpers.go` (5/7 dead) — remove 5 functions 
+- [ ] `pkg/workflow/js.go` (17/47 dead) — remove 17 JS bundle functions +- [ ] `pkg/workflow/compiler_types.go` (17/45 dead) — remove 17 types/methods + +--- + +## Phase 3: Partially Dead Files (1–6 dead per file) + +Individual function removals across ~100 files. To be tackled after Phase 1 and 2. + +High-count files to prioritize: +- `pkg/workflow/expression_builder.go` (9/27 dead) +- `pkg/workflow/validation_helpers.go` (6/10 dead) +- `pkg/cli/docker_images.go` (6/11 dead) +- `pkg/workflow/domains.go` (10/27 dead) + +--- + +## Batch Execution Log + +### Session 1 — ABORTED (incorrect deadcode command) + +Used `deadcode ./cmd/...` — missed `internal/tools/` entry points. Deleted: +- `pkg/cli/actions_build_command.go` — **WRONG: used by `make actions-build` via `internal/tools/actions-build/`** +- `pkg/cli/exec.go`, `pkg/cli/generate_action_metadata_command.go`, etc. +- `internal/tools/actions-build/`, `internal/tools/generate-action-metadata/` +- CI job `actions-build` from `.github/workflows/ci.yml` + +PR #18782 failed CI with `make: *** No rule to make target 'actions-build'`. + +**Reset to main required before Session 2.** + +### Session 2 — TODO (use corrected command: `./cmd/... ./internal/tools/...`) + +--- + +## Key Constant/Var Dependencies (must rescue before deleting) + +These live values are defined in files that are otherwise fully dead: + +| Const/Var | Used by live code | Currently in | +|-----------|-------------------|--------------| +| `SetupActionDestination` | `safe_outputs_steps.go` etc. 
| `bundler_file_mode.go` | +| `cacheMemoryPromptFile` | `cache.go` | `sh.go` | +| `cacheMemoryPromptMultiFile` | `cache.go` | `sh.go` | +| `promptsDir` | `unified_prompt_step.go`, `repo_memory_prompt.go` | `sh.go` | +| `prContextPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `tempFolderPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `playwrightPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `markdownPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `xpiaPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `repoMemoryPromptFile` | `repo_memory_prompt.go` | `sh.go` | +| `repoMemoryPromptMultiFile` | `repo_memory_prompt.go` | `sh.go` | +| `safeOutputsPromptFile` | `unified_prompt_step.go` | `sh.go` | +| `safeOutputsCreatePRFile` | `unified_prompt_step.go` | `sh.go` | +| `safeOutputsPushToBranchFile` | `unified_prompt_step.go` | `sh.go` | +| `safeOutputsAutoCreateIssueFile` | `unified_prompt_step.go` | `sh.go` | +| `githubContextPromptText` (embed) | `unified_prompt_step.go` | `sh.go` | +| `ListJobBuilderConfig` type | `add_labels.go` (dead), `safe_output_builder.go` (dead) | `safe_output_builder.go` | + +**Strategy:** Create `pkg/workflow/workflow_constants.go` to hold rescued constants and embed. +`ListJobBuilderConfig` is only used by dead code, so needs no rescue. 
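
The rescue check above can be mechanized before each deletion with a grep sweep for references outside the defining file. A minimal sketch (the identifier, file names, and search root are illustrative):

```bash
#!/bin/sh
# List references to an identifier outside its defining file.
# Any hit means the identifier must be rescued before the file is deleted.
live_refs() {
  ident="$1"; defining_file="$2"; root="${3:-.}"
  grep -rn --include='*.go' -w "$ident" "$root" 2>/dev/null | grep -v "$defining_file"
}

# Usage, e.g. before deleting bundler_file_mode.go:
#   live_refs SetupActionDestination bundler_file_mode.go pkg/workflow && echo "rescue needed"
```

A word-boundary grep is deliberately conservative: it can report false positives in comments, but it cannot silently miss a live use the way a function-only deadcode scan does.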
+ +--- + +## Test Files to Delete (when their entire subject is deleted) + +| Test file | Reason to delete | +|-----------|-----------------| +| `pkg/cli/actions_build_command_test.go` | Tests deleted CLI commands | +| `pkg/cli/exec_test.go` | Tests deleted exec functions | +| `pkg/cli/generate_action_metadata_command_test.go` | Tests deleted command | +| `pkg/cli/validation_output_test.go` | Tests deleted functions | +| `pkg/cli/mcp_inspect_safe_inputs_test.go` | References `spawnSafeInputsInspector` (deleted) | +| `pkg/console/form_test.go` | Tests deleted `RunForm` | +| `pkg/stringutil/paths_test.go` | Tests deleted `NormalizePath` | +| `pkg/workflow/compiler_string_api_test.go` | Tests deleted `ParseWorkflowString` | +| `pkg/workflow/script_registry_test.go` | Tests dead registry methods | +| All `pkg/workflow/bundler_*_test.go` | Tests deleted bundler | + +## Test Files Needing Surgery + +| Test file | What to remove | +|-----------|---------------| +| `pkg/cli/logs_overview_test.go` | Remove 4 tests using deleted `DisplayLogsOverview` | +| `pkg/console/golden_test.go` | Remove tests using deleted `LayoutTitleBox` | +| `pkg/parser/frontmatter_utils_test.go` | Remove `TestStripANSI`, `BenchmarkStripANSI` | +| `pkg/parser/frontmatter_merge_test.go` | Remove stray comment | +| `pkg/workflow/compiler_custom_actions_test.go` | Remove tests using dead registry methods | +| `pkg/workflow/compiler_action_mode_test.go` | Remove tests using dead registry methods | +| `pkg/workflow/custom_action_copilot_token_test.go` | Remove test using `RegisterWithAction` | + +--- + +## PR Strategy + +**PR 1:** Phase 1 Groups 1A + 1B + 1C (CLI, console, misc utilities — no workflow risk) +- 13 files deleted +- Clean, low-risk, easy to review + +**PR 2:** Phase 1 Groups 1D + 1E (bundler + workflow dead files) +- 14 files deleted +- More complex due to constant rescue and test surgery + +**PR 3:** Phase 2 (near-fully dead) + +**PR 4:** Phase 3 (individual function removals, many files) 
From 87d84c96f0d80dfb78b185342230995f224c9893 Mon Sep 17 00:00:00 2001 From: Don Syme Date: Sat, 28 Feb 2026 03:08:00 +0000 Subject: [PATCH 2/7] Remove dead code batch 1: CLI, console, misc utility (17 files deleted) --- DEADCODE.md | 33 +- pkg/cli/copilot_agent_test.go | 45 --- pkg/cli/exec.go | 139 ------- pkg/cli/exec_test.go | 229 ----------- pkg/cli/logs_display.go | 220 ----------- pkg/cli/logs_overview_test.go | 239 ------------ pkg/cli/mcp_inspect_safe_inputs_inspector.go | 134 ------- pkg/cli/mcp_inspect_safe_inputs_test.go | 264 ------------- pkg/cli/validation_output.go | 54 --- pkg/cli/validation_output_test.go | 234 ----------- pkg/console/form.go | 122 ------ pkg/console/form_test.go | 169 -------- pkg/console/golden_test.go | 118 ------ pkg/console/layout.go | 162 -------- pkg/console/layout_test.go | 383 ------------------- pkg/console/select.go | 91 ----- pkg/console/select_test.go | 87 ----- pkg/logger/error_formatting.go | 47 --- pkg/logger/error_formatting_test.go | 177 --------- pkg/parser/ansi_strip.go | 12 - pkg/parser/frontmatter_merge_test.go | 2 - pkg/parser/frontmatter_utils_test.go | 213 ----------- pkg/parser/virtual_fs_test_helpers.go | 12 - pkg/workflow/metrics_test.go | 90 ----- 24 files changed, 24 insertions(+), 3252 deletions(-) delete mode 100644 pkg/cli/exec.go delete mode 100644 pkg/cli/exec_test.go delete mode 100644 pkg/cli/logs_display.go delete mode 100644 pkg/cli/mcp_inspect_safe_inputs_inspector.go delete mode 100644 pkg/cli/mcp_inspect_safe_inputs_test.go delete mode 100644 pkg/cli/validation_output.go delete mode 100644 pkg/cli/validation_output_test.go delete mode 100644 pkg/console/form.go delete mode 100644 pkg/console/form_test.go delete mode 100644 pkg/console/layout.go delete mode 100644 pkg/console/layout_test.go delete mode 100644 pkg/console/select.go delete mode 100644 pkg/console/select_test.go delete mode 100644 pkg/logger/error_formatting.go delete mode 100644 pkg/logger/error_formatting_test.go delete mode 
100644 pkg/parser/ansi_strip.go delete mode 100644 pkg/parser/virtual_fs_test_helpers.go diff --git a/DEADCODE.md b/DEADCODE.md index 0289482622..4876555fc5 100644 --- a/DEADCODE.md +++ b/DEADCODE.md @@ -49,6 +49,17 @@ The groups below are a rough guide but individual entries may differ. --- +## Session 2 Analysis (2026-02-28) + +**Command:** `deadcode ./cmd/... ./internal/tools/... 2>/dev/null` +**Total dead entries:** 362 +**Fully dead files:** 25 +**Partially dead files:** 117 + +Confirmed NOT dead (correctly excluded now): `pkg/cli/actions_build_command.go`, `pkg/cli/generate_action_metadata_command.go` + +--- + ## Phase 1: Fully Dead Files These files have ALL their functions dead. Each must be checked for: @@ -56,10 +67,8 @@ These files have ALL their functions dead. Each must be checked for: - [ ] Test files that reference the deleted functions - [ ] `internal/tools/` dependencies -### Group 1A: CLI fully dead files (re-verify after fix) -- [ ] `pkg/cli/actions_build_command.go` — **⚠️ NOT dead: used by `internal/tools/actions-build/`** -- [ ] `pkg/cli/exec.go` — re-verify with corrected command -- [ ] `pkg/cli/generate_action_metadata_command.go` — **⚠️ NOT dead: used by `internal/tools/generate-action-metadata/`** +### Group 1A: CLI fully dead files (4 files) +- [ ] `pkg/cli/exec.go` (4/4 dead) - [ ] `pkg/cli/logs_display.go` (1/1 dead) → surgery on `logs_overview_test.go` - [ ] `pkg/cli/mcp_inspect_safe_inputs_inspector.go` (1/1 dead) → delete `mcp_inspect_safe_inputs_test.go` - [ ] `pkg/cli/validation_output.go` (2/2 dead) @@ -131,11 +140,19 @@ Used `deadcode ./cmd/...` — missed `internal/tools/` entry points. Deleted: - `internal/tools/actions-build/`, `internal/tools/generate-action-metadata/` - CI job `actions-build` from `.github/workflows/ci.yml` -PR #18782 failed CI with `make: *** No rule to make target 'actions-build'`. +PR #18782 failed CI with `make: *** No rule to make target 'actions-build'`. Reset to main. 
+ +### Session 2 — In Progress + +#### Batch 1: Groups 1A (CLI) + 1B (Console) + 1C (Misc) — COMPLETE ✅ + +Deleted 17 files, surgery on 6 test files. `go build ./...` + `go vet ./...` + `make fmt` all clean. -**Reset to main required before Session 2.** +Deferred `pkg/stringutil/paths.go` to Batch 2 — callers in bundler files still present. -### Session 2 — TODO (use corrected command: `./cmd/... ./internal/tools/...`) +#### Batch 2: Groups 1D + 1E (Workflow fully dead) — TODO +#### Batch 3: Phase 2 (Near-fully dead, high-value partial files) — TODO +#### Batch 4: Phase 3 (Individual function removals) — TODO --- @@ -172,9 +189,7 @@ These live values are defined in files that are otherwise fully dead: | Test file | Reason to delete | |-----------|-----------------| -| `pkg/cli/actions_build_command_test.go` | Tests deleted CLI commands | | `pkg/cli/exec_test.go` | Tests deleted exec functions | -| `pkg/cli/generate_action_metadata_command_test.go` | Tests deleted command | | `pkg/cli/validation_output_test.go` | Tests deleted functions | | `pkg/cli/mcp_inspect_safe_inputs_test.go` | References `spawnSafeInputsInspector` (deleted) | | `pkg/console/form_test.go` | Tests deleted `RunForm` | diff --git a/pkg/cli/copilot_agent_test.go b/pkg/cli/copilot_agent_test.go index 0ed4ebfde5..2829182d6e 100644 --- a/pkg/cli/copilot_agent_test.go +++ b/pkg/cli/copilot_agent_test.go @@ -7,8 +7,6 @@ import ( "path/filepath" "strings" "testing" - - "github.com/github/gh-aw/pkg/logger" ) func TestCopilotCodingAgentDetector_IsGitHubCopilotCodingAgent(t *testing.T) { @@ -245,49 +243,6 @@ func TestExtractToolName(t *testing.T) { } } -func TestExtractErrorMessage(t *testing.T) { - tests := []struct { - name string - line string - expected string - }{ - { - name: "removes ISO timestamp", - line: "2024-01-15T10:00:00.123Z ERROR: Connection failed", - expected: "Connection failed", - }, - { - name: "removes bracketed timestamp", - line: "[2024-01-15 10:00:00] ERROR: File not found", - 
expected: "File not found", - }, - { - name: "removes log level prefix", - line: "ERROR: Invalid input", - expected: "Invalid input", - }, - { - name: "handles warning prefix", - line: "WARNING: Deprecated API", - expected: "Deprecated API", - }, - { - name: "handles plain message", - line: " Simple error message ", - expected: "Simple error message", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := logger.ExtractErrorMessage(tt.line) - if result != tt.expected { - t.Errorf("Expected '%s', got '%s'", tt.expected, result) - } - }) - } -} - func TestIntegration_CopilotCodingAgentWithAudit(t *testing.T) { // Create a temporary directory that simulates a GitHub Copilot coding agent run // NOTE: GitHub Copilot coding agent runs do NOT have aw_info.json (that's for agentic workflows) diff --git a/pkg/cli/exec.go b/pkg/cli/exec.go deleted file mode 100644 index 4d50af16ce..0000000000 --- a/pkg/cli/exec.go +++ /dev/null @@ -1,139 +0,0 @@ -package cli - -import ( - "bytes" - "os" - "os/exec" - "strings" - - "github.com/cli/go-gh/v2" - "github.com/github/gh-aw/pkg/logger" -) - -var execLog = logger.New("cli:exec") - -// ghExecOrFallback executes a gh CLI command if GH_TOKEN is available, -// otherwise falls back to an alternative command. -// The gh CLI arguments are inferred from the fallback command arguments. -// Returns the stdout, stderr, and error from whichever command was executed. -func ghExecOrFallback(fallbackCmd string, fallbackArgs []string, fallbackEnv []string) (string, string, error) { - ghToken := os.Getenv("GH_TOKEN") - - if ghToken != "" { - // Use gh CLI when GH_TOKEN is available - // Infer gh args from fallback args - ghArgs := inferGhArgs(fallbackCmd, fallbackArgs) - execLog.Printf("Using gh CLI: gh %s", strings.Join(ghArgs, " ")) - stdout, stderr, err := gh.Exec(ghArgs...) 
- return stdout.String(), stderr.String(), err - } - - // Fall back to alternative command when GH_TOKEN is not available - execLog.Printf("Using fallback command: %s %s", fallbackCmd, strings.Join(fallbackArgs, " ")) - cmd := exec.Command(fallbackCmd, fallbackArgs...) - - // Add custom environment variables if provided - if len(fallbackEnv) > 0 { - cmd.Env = append(os.Environ(), fallbackEnv...) - } - - // Capture stdout and stderr separately like gh.Exec - var stdout, stderr bytes.Buffer - cmd.Stdout = &stdout - cmd.Stderr = &stderr - - err := cmd.Run() - return stdout.String(), stderr.String(), err -} - -// inferGhArgs infers gh CLI arguments from fallback command arguments -func inferGhArgs(fallbackCmd string, fallbackArgs []string) []string { - if fallbackCmd != "git" || len(fallbackArgs) == 0 { - // For non-git commands, use gh exec - return append([]string{"exec", "--", fallbackCmd}, fallbackArgs...) - } - - // Handle git commands - gitCmd := fallbackArgs[0] - - switch gitCmd { - case "clone": - // git clone [options] - // -> gh repo clone [options] - return buildGhCloneArgs(fallbackArgs[1:]) - default: - // For other git commands, use gh exec - return append([]string{"exec", "--", "git"}, fallbackArgs...) 
- } -} - -// buildGhCloneArgs builds gh repo clone arguments from git clone arguments -func buildGhCloneArgs(gitArgs []string) []string { - ghArgs := []string{"repo", "clone"} - - var repoURL string - var targetDir string - var otherArgs []string - - // Options that take a value - optsWithValue := map[string]bool{ - "--branch": true, - "--depth": true, - "--origin": true, - "--template": true, - "--config": true, - "--server-option": true, - "--upload-pack": true, - "--reference": true, - "--reference-if-able": true, - "--separate-git-dir": true, - } - - // Parse git clone arguments - for i := 0; i < len(gitArgs); i++ { - arg := gitArgs[i] - if strings.HasPrefix(arg, "https://") || strings.HasPrefix(arg, "git@") { - repoURL = arg - } else if strings.HasPrefix(arg, "-") { - // It's an option - otherArgs = append(otherArgs, arg) - // Check if this option takes a value - if optsWithValue[arg] && i+1 < len(gitArgs) { - i++ // Move to next arg - otherArgs = append(otherArgs, gitArgs[i]) - } - } else if repoURL != "" && targetDir == "" { - // This is the target directory - targetDir = arg - } - } - - // Extract repo slug from URL (remove https://github.com/ or enterprise domain) - repoSlug := extractRepoSlug(repoURL) - - // Build gh args: gh repo clone -- [git options] - ghArgs = append(ghArgs, repoSlug) - if targetDir != "" { - ghArgs = append(ghArgs, targetDir) - } - - if len(otherArgs) > 0 { - ghArgs = append(ghArgs, "--") - ghArgs = append(ghArgs, otherArgs...) 
- } - - return ghArgs -} - -// extractRepoSlug extracts the owner/repo slug from a GitHub URL -func extractRepoSlug(repoURL string) string { - githubHost := getGitHubHost() - - // Remove the GitHub host from the URL - slug := strings.TrimPrefix(repoURL, githubHost+"/") - - // Remove .git suffix if present - slug = strings.TrimSuffix(slug, ".git") - - return slug -} diff --git a/pkg/cli/exec_test.go b/pkg/cli/exec_test.go deleted file mode 100644 index 8e9137d146..0000000000 --- a/pkg/cli/exec_test.go +++ /dev/null @@ -1,229 +0,0 @@ -//go:build !integration - -package cli - -import ( - "strings" - "testing" -) - -func TestGhExecOrFallback(t *testing.T) { - tests := []struct { - name string - ghToken string - fallbackCmd string - fallbackArgs []string - fallbackEnv []string - expectError bool - description string - }{ - { - name: "uses git when GH_TOKEN not set", - ghToken: "", - fallbackCmd: "echo", - fallbackArgs: []string{"fallback executed"}, - fallbackEnv: nil, - expectError: false, - description: "should use fallback command when GH_TOKEN is not set", - }, - { - name: "uses fallback with custom env", - ghToken: "", - fallbackCmd: "sh", - fallbackArgs: []string{"-c", "echo $TEST_VAR"}, - fallbackEnv: []string{"TEST_VAR=test_value"}, - expectError: false, - description: "should pass custom environment variables to fallback command", - }, - { - name: "fallback command failure", - ghToken: "", - fallbackCmd: "false", // command that always fails - fallbackArgs: []string{}, - fallbackEnv: nil, - expectError: true, - description: "should return error when fallback command fails", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - // Set or unset GH_TOKEN based on test case - if tt.ghToken != "" { - t.Setenv("GH_TOKEN", tt.ghToken) - } - - stdout, _, err := ghExecOrFallback(tt.fallbackCmd, tt.fallbackArgs, tt.fallbackEnv) - - if tt.expectError && err == nil { - t.Errorf("Expected error for test '%s', got nil", tt.description) - } else if 
!tt.expectError && err != nil { - t.Errorf("Unexpected error for test '%s': %v", tt.description, err) - } - - // For successful fallback tests, verify output - if !tt.expectError && tt.fallbackCmd == "echo" { - if !strings.Contains(stdout, "fallback executed") { - t.Errorf("Expected stdout to contain 'fallback executed', got: %s", stdout) - } - } - - // For env test, verify environment variable was passed - if !tt.expectError && tt.fallbackCmd == "sh" && len(tt.fallbackEnv) > 0 { - if !strings.Contains(stdout, "test_value") { - t.Errorf("Expected stdout to contain 'test_value', got: %s", stdout) - } - } - - // With separated stdout/stderr, we don't expect both to be populated - // This is a change from the previous CombinedOutput behavior - }) - } -} - -func TestGhExecOrFallbackWithGHToken(t *testing.T) { - // This test verifies behavior when GH_TOKEN is set - // Note: We can't easily test actual gh.Exec without a real token, - // so we test that the function attempts to use gh CLI - - // Set a placeholder token - t.Setenv("GH_TOKEN", "placeholder_token_for_test") - - // This will likely fail since we don't have a valid token, - // but we're testing that it attempts gh.Exec path - _, _, err := ghExecOrFallback( - "echo", - []string{"fallback"}, - nil, - ) - - // We expect an error because gh.Exec will fail with invalid token/nonexistent repo - // The important part is that it tried the gh.Exec path - if err == nil { - // If it succeeded, it means it used the fallback, which is wrong - t.Error("Expected function to attempt gh.Exec with GH_TOKEN set") - } -} - -func TestGhExecOrFallbackIntegration(t *testing.T) { - // Integration test: verify the function works end-to-end without GH_TOKEN - // (GH_TOKEN is not set by default in this test) - - // Use a simple command that we know will work - stdout, _, err := ghExecOrFallback( - "echo", - []string{"integration test output"}, - nil, - ) - - if err != nil { - t.Errorf("Unexpected error in integration test: %v", err) - } 
- - if !strings.Contains(stdout, "integration test output") { - t.Errorf("Expected output to contain 'integration test output', got: %s", stdout) - } -} - -func TestExtractRepoSlug(t *testing.T) { - tests := []struct { - name string - repoURL string - githubHost string - expectedSlug string - }{ - { - name: "standard GitHub URL", - repoURL: "https://github.com/owner/repo", - githubHost: "", - expectedSlug: "owner/repo", - }, - { - name: "GitHub URL with .git suffix", - repoURL: "https://github.com/owner/repo.git", - githubHost: "", - expectedSlug: "owner/repo", - }, - { - name: "enterprise GitHub URL", - repoURL: "https://github.enterprise.com/owner/repo", - githubHost: "https://github.enterprise.com", - expectedSlug: "owner/repo", - }, - { - name: "enterprise GitHub URL with .git", - repoURL: "https://github.enterprise.com/owner/repo.git", - githubHost: "https://github.enterprise.com", - expectedSlug: "owner/repo", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - // Set environment - if tt.githubHost != "" { - t.Setenv("GITHUB_SERVER_URL", tt.githubHost) - } - - slug := extractRepoSlug(tt.repoURL) - if slug != tt.expectedSlug { - t.Errorf("Expected slug '%s', got '%s'", tt.expectedSlug, slug) - } - }) - } -} - -func TestInferGhArgs(t *testing.T) { - tests := []struct { - name string - fallbackCmd string - fallbackArgs []string - expectedGhArgs []string - }{ - { - name: "git clone simple", - fallbackCmd: "git", - fallbackArgs: []string{"clone", "https://github.com/owner/repo", "/tmp/dir"}, - expectedGhArgs: []string{"repo", "clone", "owner/repo", "/tmp/dir"}, - }, - { - name: "git clone with depth", - fallbackCmd: "git", - fallbackArgs: []string{"clone", "--depth", "1", "https://github.com/owner/repo", "/tmp/dir"}, - expectedGhArgs: []string{"repo", "clone", "owner/repo", "/tmp/dir", "--", "--depth", "1"}, - }, - { - name: "git clone with branch", - fallbackCmd: "git", - fallbackArgs: []string{"clone", "--depth", "1", 
"https://github.com/owner/repo", "/tmp/dir", "--branch", "main"}, - expectedGhArgs: []string{"repo", "clone", "owner/repo", "/tmp/dir", "--", "--depth", "1", "--branch", "main"}, - }, - { - name: "git checkout", - fallbackCmd: "git", - fallbackArgs: []string{"-C", "/tmp/dir", "checkout", "abc123"}, - expectedGhArgs: []string{"exec", "--", "git", "-C", "/tmp/dir", "checkout", "abc123"}, - }, - { - name: "non-git command", - fallbackCmd: "echo", - fallbackArgs: []string{"hello"}, - expectedGhArgs: []string{"exec", "--", "echo", "hello"}, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ghArgs := inferGhArgs(tt.fallbackCmd, tt.fallbackArgs) - if len(ghArgs) != len(tt.expectedGhArgs) { - t.Errorf("Expected %d args, got %d: %v", len(tt.expectedGhArgs), len(ghArgs), ghArgs) - return - } - for i, arg := range ghArgs { - if arg != tt.expectedGhArgs[i] { - t.Errorf("Arg %d: expected '%s', got '%s'", i, tt.expectedGhArgs[i], arg) - } - } - }) - } -} diff --git a/pkg/cli/logs_display.go b/pkg/cli/logs_display.go deleted file mode 100644 index 06df28487d..0000000000 --- a/pkg/cli/logs_display.go +++ /dev/null @@ -1,220 +0,0 @@ -// This file provides command-line interface functionality for gh-aw. -// This file (logs_display.go) contains functions for displaying workflow logs information -// to the console, including summary tables and metrics. 
-// -// Key responsibilities: -// - Rendering workflow logs overview tables -// - Formatting metrics for display (duration, tokens, cost) -// - Aggregating totals across multiple runs - -package cli - -import ( - "fmt" - "os" - "path/filepath" - "strconv" - "strings" - "time" - - "github.com/github/gh-aw/pkg/console" - "github.com/github/gh-aw/pkg/logger" - "github.com/github/gh-aw/pkg/timeutil" -) - -var logsDisplayLog = logger.New("cli:logs_display") - -// displayLogsOverview displays a summary table of workflow runs and metrics -func displayLogsOverview(processedRuns []ProcessedRun, verbose bool) { - if len(processedRuns) == 0 { - logsDisplayLog.Print("No processed runs to display") - return - } - - logsDisplayLog.Printf("Displaying logs overview: runs=%d, verbose=%v", len(processedRuns), verbose) - - // Prepare table data - headers := []string{"Run ID", "Workflow", "Status", "Duration", "Tokens", "Cost ($)", "Turns", "Errors", "Warnings", "Missing Tools", "Missing Data", "Noops", "Safe Items", "Created", "Logs Path"} - var rows [][]string - - var totalTokens int - var totalCost float64 - var totalDuration time.Duration - var totalTurns int - var totalErrors int - var totalWarnings int - var totalMissingTools int - var totalMissingData int - var totalNoops int - var totalSafeItems int - - for _, pr := range processedRuns { - run := pr.Run - // Format duration - durationStr := "" - if run.Duration > 0 { - durationStr = timeutil.FormatDuration(run.Duration) - totalDuration += run.Duration - } - - // Format cost - costStr := "" - if run.EstimatedCost > 0 { - costStr = fmt.Sprintf("%.3f", run.EstimatedCost) - totalCost += run.EstimatedCost - } - - // Format tokens - tokensStr := "" - if run.TokenUsage > 0 { - tokensStr = console.FormatNumber(run.TokenUsage) - totalTokens += run.TokenUsage - } - - // Format turns - turnsStr := "" - if run.Turns > 0 { - turnsStr = strconv.Itoa(run.Turns) - totalTurns += run.Turns - } - - // Format errors - errorsStr := 
strconv.Itoa(run.ErrorCount) - totalErrors += run.ErrorCount - - // Format warnings - warningsStr := strconv.Itoa(run.WarningCount) - totalWarnings += run.WarningCount - - // Format missing tools - var missingToolsStr string - if verbose && len(pr.MissingTools) > 0 { - // In verbose mode, show actual tool names - toolNames := make([]string, len(pr.MissingTools)) - for i, tool := range pr.MissingTools { - toolNames[i] = tool.Tool - } - missingToolsStr = strings.Join(toolNames, ", ") - // Truncate if too long - if len(missingToolsStr) > 30 { - missingToolsStr = missingToolsStr[:27] + "..." - } - } else { - // In normal mode, just show the count - missingToolsStr = strconv.Itoa(run.MissingToolCount) - } - totalMissingTools += run.MissingToolCount - - // Format missing data - var missingDataStr string - if verbose && len(pr.MissingData) > 0 { - // In verbose mode, show actual data types - dataTypes := make([]string, len(pr.MissingData)) - for i, data := range pr.MissingData { - dataTypes[i] = data.DataType - } - missingDataStr = strings.Join(dataTypes, ", ") - // Truncate if too long - if len(missingDataStr) > 30 { - missingDataStr = missingDataStr[:27] + "..." - } - } else { - // In normal mode, just show the count - missingDataStr = strconv.Itoa(run.MissingDataCount) - } - totalMissingData += run.MissingDataCount - - // Format noops - var noopsStr string - if verbose && len(pr.Noops) > 0 { - // In verbose mode, show truncated message preview - messages := make([]string, len(pr.Noops)) - for i, noop := range pr.Noops { - msg := noop.Message - if len(msg) > 30 { - msg = msg[:27] + "..." - } - messages[i] = msg - } - noopsStr = strings.Join(messages, ", ") - // Truncate if too long - if len(noopsStr) > 30 { - noopsStr = noopsStr[:27] + "..." 
- } - } else { - // In normal mode, just show the count - noopsStr = strconv.Itoa(run.NoopCount) - } - totalNoops += run.NoopCount - - // Format safe items count - safeItemsStr := strconv.Itoa(run.SafeItemsCount) - totalSafeItems += run.SafeItemsCount - - // Truncate workflow name if too long - workflowName := run.WorkflowName - if len(workflowName) > 20 { - workflowName = workflowName[:17] + "..." - } - - // Format relative path - relPath, _ := filepath.Rel(".", run.LogsPath) - - // Format status - show conclusion directly for completed runs - statusStr := run.Status - if run.Status == "completed" && run.Conclusion != "" { - statusStr = run.Conclusion - } - - row := []string{ - strconv.FormatInt(run.DatabaseID, 10), - workflowName, - statusStr, - durationStr, - tokensStr, - costStr, - turnsStr, - errorsStr, - warningsStr, - missingToolsStr, - missingDataStr, - noopsStr, - safeItemsStr, - run.CreatedAt.Format("2006-01-02"), - relPath, - } - rows = append(rows, row) - } - - // Prepare total row - totalRow := []string{ - fmt.Sprintf("TOTAL (%d runs)", len(processedRuns)), - "", - "", - timeutil.FormatDuration(totalDuration), - console.FormatNumber(totalTokens), - fmt.Sprintf("%.3f", totalCost), - strconv.Itoa(totalTurns), - strconv.Itoa(totalErrors), - strconv.Itoa(totalWarnings), - strconv.Itoa(totalMissingTools), - strconv.Itoa(totalMissingData), - strconv.Itoa(totalNoops), - strconv.Itoa(totalSafeItems), - "", - "", - } - - // Render table using console helper - tableConfig := console.TableConfig{ - Title: "Workflow Logs Overview", - Headers: headers, - Rows: rows, - ShowTotal: true, - TotalRow: totalRow, - } - - logsDisplayLog.Printf("Rendering table: total_tokens=%d, total_cost=%.3f, total_duration=%s", totalTokens, totalCost, totalDuration) - - fmt.Fprint(os.Stderr, console.RenderTable(tableConfig)) -} diff --git a/pkg/cli/logs_overview_test.go b/pkg/cli/logs_overview_test.go index 7324be2a4e..e78c4f19f7 100644 --- a/pkg/cli/logs_overview_test.go +++ 
b/pkg/cli/logs_overview_test.go @@ -4,61 +4,9 @@ package cli import ( "testing" - "time" ) // TestLogsOverviewIncludesMissingTools verifies that the overview table includes missing tools count -func TestLogsOverviewIncludesMissingTools(t *testing.T) { - processedRuns := []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 12345, - WorkflowName: "Test Workflow A", - Status: "completed", - Conclusion: "success", - CreatedAt: time.Now(), - Duration: 5 * time.Minute, - TokenUsage: 1000, - EstimatedCost: 0.01, - Turns: 3, - ErrorCount: 0, - WarningCount: 2, - MissingToolCount: 1, - LogsPath: "/tmp/gh-aw/run-12345", - }, - MissingTools: []MissingToolReport{ - {Tool: "terraform", Reason: "Infrastructure automation needed"}, - }, - }, - { - Run: WorkflowRun{ - DatabaseID: 67890, - WorkflowName: "Test Workflow B", - Status: "completed", - Conclusion: "failure", - CreatedAt: time.Now(), - Duration: 3 * time.Minute, - TokenUsage: 500, - EstimatedCost: 0.005, - Turns: 2, - ErrorCount: 1, - WarningCount: 0, - MissingToolCount: 3, - LogsPath: "/tmp/gh-aw/run-67890", - }, - MissingTools: []MissingToolReport{ - {Tool: "kubectl", Reason: "K8s management"}, - {Tool: "docker", Reason: "Container runtime"}, - {Tool: "helm", Reason: "K8s package manager"}, - }, - }, - } - - // Capture output by redirecting - this is a smoke test to ensure displayLogsOverview doesn't panic - // and that it processes the MissingToolCount field - displayLogsOverview(processedRuns, false) - displayLogsOverview(processedRuns, true) -} // TestWorkflowRunStructHasMissingToolCount verifies that WorkflowRun has the MissingToolCount field func TestWorkflowRunStructHasMissingToolCount(t *testing.T) { @@ -116,118 +64,6 @@ func TestLogsOverviewHeaderIncludesMissing(t *testing.T) { } // TestDisplayLogsOverviewWithVariousMissingToolCounts tests different scenarios -func TestDisplayLogsOverviewWithVariousMissingToolCounts(t *testing.T) { - testCases := []struct { - name string - processedRuns []ProcessedRun - 
expectedNonPanic bool - }{ - { - name: "no missing tools", - processedRuns: []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 1, - WorkflowName: "Clean Workflow", - MissingToolCount: 0, - LogsPath: "/tmp/gh-aw/run-1", - }, - MissingTools: []MissingToolReport{}, - }, - }, - expectedNonPanic: true, - }, - { - name: "single missing tool", - processedRuns: []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 2, - WorkflowName: "Workflow with One Missing", - MissingToolCount: 1, - LogsPath: "/tmp/gh-aw/run-2", - }, - MissingTools: []MissingToolReport{ - {Tool: "terraform", Reason: "Need IaC"}, - }, - }, - }, - expectedNonPanic: true, - }, - { - name: "multiple missing tools", - processedRuns: []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 3, - WorkflowName: "Workflow with Multiple Missing", - MissingToolCount: 5, - LogsPath: "/tmp/gh-aw/run-3", - }, - MissingTools: []MissingToolReport{ - {Tool: "terraform", Reason: "IaC"}, - {Tool: "kubectl", Reason: "K8s"}, - {Tool: "docker", Reason: "Containers"}, - {Tool: "helm", Reason: "Packages"}, - {Tool: "argocd", Reason: "GitOps"}, - }, - }, - }, - expectedNonPanic: true, - }, - { - name: "mixed missing tool counts", - processedRuns: []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 4, - WorkflowName: "Workflow A", - MissingToolCount: 0, - LogsPath: "/tmp/gh-aw/run-4", - }, - MissingTools: []MissingToolReport{}, - }, - { - Run: WorkflowRun{ - DatabaseID: 5, - WorkflowName: "Workflow B", - MissingToolCount: 2, - LogsPath: "/tmp/gh-aw/run-5", - }, - MissingTools: []MissingToolReport{ - {Tool: "kubectl", Reason: "K8s"}, - {Tool: "docker", Reason: "Containers"}, - }, - }, - { - Run: WorkflowRun{ - DatabaseID: 6, - WorkflowName: "Workflow C", - MissingToolCount: 1, - LogsPath: "/tmp/gh-aw/run-6", - }, - MissingTools: []MissingToolReport{ - {Tool: "helm", Reason: "Packages"}, - }, - }, - }, - expectedNonPanic: true, - }, - } - - for _, tc := range testCases { - t.Run(tc.name, func(t *testing.T) { - // This test 
ensures displayLogsOverview doesn't panic with various missing tool counts - defer func() { - if r := recover(); r != nil && tc.expectedNonPanic { - t.Errorf("displayLogsOverview panicked with: %v", r) - } - }() - displayLogsOverview(tc.processedRuns, false) - displayLogsOverview(tc.processedRuns, true) - }) - } -} // TestTotalMissingToolsCalculation verifies totals are calculated correctly func TestTotalMissingToolsCalculation(t *testing.T) { @@ -252,83 +88,8 @@ func TestTotalMissingToolsCalculation(t *testing.T) { } // TestOverviewDisplayConsistency verifies that the overview function is consistent -func TestOverviewDisplayConsistency(t *testing.T) { - // Create a run with known values - processedRuns := []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 99999, - WorkflowName: "Consistency Test", - Status: "completed", - Conclusion: "success", - Duration: 10 * time.Minute, - TokenUsage: 2000, - EstimatedCost: 0.02, - Turns: 5, - ErrorCount: 1, - WarningCount: 3, - MissingToolCount: 2, - CreatedAt: time.Date(2024, 1, 15, 10, 30, 0, 0, time.UTC), - LogsPath: "/tmp/gh-aw/run-99999", - }, - MissingTools: []MissingToolReport{ - {Tool: "terraform", Reason: "IaC"}, - {Tool: "kubectl", Reason: "K8s"}, - }, - }, - } - - // Call displayLogsOverview - it should not panic and should handle all fields - defer func() { - if r := recover(); r != nil { - t.Errorf("displayLogsOverview panicked: %v", r) - } - }() - - displayLogsOverview(processedRuns, false) - displayLogsOverview(processedRuns, true) -} // TestMissingToolsIntegration tests the full flow from ProcessedRun to display -func TestMissingToolsIntegration(t *testing.T) { - // Create a ProcessedRun with missing tools - processedRuns := []ProcessedRun{ - { - Run: WorkflowRun{ - DatabaseID: 11111, - WorkflowName: "Integration Test Workflow", - Status: "completed", - Conclusion: "success", - MissingToolCount: 2, - }, - MissingTools: []MissingToolReport{ - { - Tool: "terraform", - Reason: "Infrastructure automation 
needed", - Alternatives: "Manual AWS console", - Timestamp: "2024-01-15T10:30:00Z", - WorkflowName: "Integration Test Workflow", - RunID: 11111, - }, - { - Tool: "kubectl", - Reason: "Kubernetes cluster management", - WorkflowName: "Integration Test Workflow", - RunID: 11111, - }, - }, - }, - } - - // Verify count is correct - if processedRuns[0].Run.MissingToolCount != 2 { - t.Errorf("Expected MissingToolCount to be 2, got %d", processedRuns[0].Run.MissingToolCount) - } - - // Display should work without panicking - displayLogsOverview(processedRuns, false) - displayLogsOverview(processedRuns, true) -} // TestMissingToolCountFieldAccessibility verifies field is accessible func TestMissingToolCountFieldAccessibility(t *testing.T) { diff --git a/pkg/cli/mcp_inspect_safe_inputs_inspector.go b/pkg/cli/mcp_inspect_safe_inputs_inspector.go deleted file mode 100644 index 386c513579..0000000000 --- a/pkg/cli/mcp_inspect_safe_inputs_inspector.go +++ /dev/null @@ -1,134 +0,0 @@ -package cli - -import ( - "errors" - "fmt" - "os" - "os/exec" - "path/filepath" - "time" - - "github.com/github/gh-aw/pkg/console" - "github.com/github/gh-aw/pkg/parser" - "github.com/github/gh-aw/pkg/types" - "github.com/github/gh-aw/pkg/workflow" -) - -// spawnSafeInputsInspector generates safe-inputs MCP server files, starts the HTTP server, -// and launches the inspector to inspect it -func spawnSafeInputsInspector(workflowFile string, verbose bool) error { - mcpInspectLog.Printf("Spawning safe-inputs inspector for workflow: %s", workflowFile) - - // Check if node is available - if _, err := exec.LookPath("node"); err != nil { - return fmt.Errorf("node not found. 
Please install Node.js to run the safe-inputs MCP server: %w", err) - } - - // Resolve the workflow file path - workflowPath, err := ResolveWorkflowPath(workflowFile) - if err != nil { - return err - } - - // Convert to absolute path if needed - if !filepath.IsAbs(workflowPath) { - cwd, err := os.Getwd() - if err != nil { - return fmt.Errorf("failed to get current directory: %w", err) - } - workflowPath = filepath.Join(cwd, workflowPath) - } - - if verbose { - fmt.Fprintln(os.Stderr, console.FormatInfoMessage("Inspecting safe-inputs from: "+workflowPath)) - } - - // Use the workflow compiler to parse the file and resolve imports - // This ensures that imported safe-inputs are properly merged - compiler := workflow.NewCompiler( - workflow.WithVerbose(verbose), - ) - workflowData, err := compiler.ParseWorkflowFile(workflowPath) - if err != nil { - return fmt.Errorf("failed to parse workflow file: %w", err) - } - - // Get safe-inputs configuration from the parsed WorkflowData - // This includes both direct and imported safe-inputs configurations - safeInputsConfig := workflowData.SafeInputs - if safeInputsConfig == nil || len(safeInputsConfig.Tools) == 0 { - return errors.New("no safe-inputs configuration found in workflow") - } - - fmt.Fprintln(os.Stderr, console.FormatInfoMessage(fmt.Sprintf("Found %d safe-input tool(s) to configure", len(safeInputsConfig.Tools)))) - - // Create temporary directory for safe-inputs files - tmpDir, err := os.MkdirTemp("", "gh-aw-safe-inputs-*") - if err != nil { - return fmt.Errorf("failed to create temporary directory: %w", err) - } - defer func() { - if err := os.RemoveAll(tmpDir); err != nil && verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to cleanup temporary directory: %v", err))) - } - }() - - if verbose { - fmt.Fprintln(os.Stderr, console.FormatInfoMessage("Created temporary directory: "+tmpDir)) - } - - // Write safe-inputs files to temporary directory - if err := 
writeSafeInputsFiles(tmpDir, safeInputsConfig, verbose); err != nil { - return fmt.Errorf("failed to write safe-inputs files: %w", err) - } - - // Find an available port for the HTTP server - port := findAvailablePort(safeInputsStartPort, verbose) - if port == 0 { - return errors.New("failed to find an available port for the HTTP server") - } - - if verbose { - fmt.Fprintln(os.Stderr, console.FormatInfoMessage(fmt.Sprintf("Using port %d for safe-inputs HTTP server", port))) - } - - // Start the HTTP server - serverCmd, err := startSafeInputsHTTPServer(tmpDir, port, verbose) - if err != nil { - return fmt.Errorf("failed to start safe-inputs HTTP server: %w", err) - } - defer func() { - if serverCmd.Process != nil { - // Try graceful shutdown first - if err := serverCmd.Process.Signal(os.Interrupt); err != nil && verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to send interrupt signal: %v", err))) - } - // Wait a moment for graceful shutdown - time.Sleep(500 * time.Millisecond) - // Attempt force kill (may fail if process already exited gracefully, which is fine) - _ = serverCmd.Process.Kill() - } - }() - - // Wait for the server to start up - if !waitForServerReady(port, 5*time.Second, verbose) { - return errors.New("safe-inputs HTTP server failed to start within timeout") - } - - fmt.Fprintln(os.Stderr, console.FormatSuccessMessage("Safe-inputs HTTP server started successfully")) - fmt.Fprintln(os.Stderr, console.FormatInfoMessage(fmt.Sprintf("Server running on: http://localhost:%d", port))) - fmt.Fprintln(os.Stderr) - - // Create MCP server config for the safe-inputs server - safeInputsMCPConfig := parser.MCPServerConfig{ - BaseMCPServerConfig: types.BaseMCPServerConfig{ - Type: "http", - URL: fmt.Sprintf("http://localhost:%d", port), - Env: make(map[string]string), - }, - Name: "safeinputs", - } - - // Inspect the safe-inputs MCP server using the Go SDK (like other MCP servers) - return inspectMCPServer(safeInputsMCPConfig, 
"", verbose, false) -} diff --git a/pkg/cli/mcp_inspect_safe_inputs_test.go b/pkg/cli/mcp_inspect_safe_inputs_test.go deleted file mode 100644 index 60d89e2e8f..0000000000 --- a/pkg/cli/mcp_inspect_safe_inputs_test.go +++ /dev/null @@ -1,264 +0,0 @@ -//go:build !integration - -package cli - -import ( - "os" - "path/filepath" - "strings" - "testing" - - "github.com/github/gh-aw/pkg/workflow" -) - -// TestSpawnSafeInputsInspector_NoSafeInputs tests the error case when workflow has no safe-inputs -func TestSpawnSafeInputsInspector_NoSafeInputs(t *testing.T) { - // Create temporary directory with a workflow file - tmpDir := t.TempDir() - workflowsDir := filepath.Join(tmpDir, ".github", "workflows") - if err := os.MkdirAll(workflowsDir, 0755); err != nil { - t.Fatalf("Failed to create workflows directory: %v", err) - } - - // Create a test workflow file WITHOUT safe-inputs - workflowContent := `--- -on: push -engine: copilot ---- -# Test Workflow - -This workflow has no safe-inputs configuration. 
-` - workflowPath := filepath.Join(workflowsDir, "test.md") - if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil { - t.Fatalf("Failed to write workflow file: %v", err) - } - - // Change to the temporary directory - originalDir, _ := os.Getwd() - defer os.Chdir(originalDir) - os.Chdir(tmpDir) - - // Try to spawn safe-inputs inspector - should fail - err := spawnSafeInputsInspector("test", false) - if err == nil { - t.Error("Expected error when workflow has no safe-inputs, got nil") - } - - // Verify error message mentions "no safe-inputs" - if err != nil && err.Error() != "no safe-inputs configuration found in workflow" { - t.Errorf("Expected specific error message, got: %v", err) - } -} - -// TestSpawnSafeInputsInspector_WithSafeInputs tests file generation with a real workflow -func TestSpawnSafeInputsInspector_WithSafeInputs(t *testing.T) { - // This test verifies that the function correctly parses a workflow and generates files - // We can't actually start the server or inspector in a test, but we can verify file generation - - // Create temporary directory with a workflow file - tmpDir := t.TempDir() - workflowsDir := filepath.Join(tmpDir, ".github", "workflows") - if err := os.MkdirAll(workflowsDir, 0755); err != nil { - t.Fatalf("Failed to create workflows directory: %v", err) - } - - // Create a test workflow file with safe-inputs - workflowContent := `--- -on: push -engine: copilot -safe-inputs: - echo-tool: - description: "Echo a message" - inputs: - message: - type: string - description: "Message to echo" - required: true - run: | - echo "$message" ---- -# Test Workflow - -This workflow has safe-inputs configuration. 
-` - workflowPath := filepath.Join(workflowsDir, "test.md") - if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil { - t.Fatalf("Failed to write workflow file: %v", err) - } - - // Change to the temporary directory - originalDir, _ := os.Getwd() - defer os.Chdir(originalDir) - os.Chdir(tmpDir) - - // We can't fully test spawnSafeInputsInspector because it tries to start a server - // and launch the inspector, but we can test the file generation part separately - // by calling writeSafeInputsFiles directly - - // Parse the workflow using the compiler to get safe-inputs config - // (including any imported safe-inputs) - compiler := workflow.NewCompiler() - workflowData, err := compiler.ParseWorkflowFile(workflowPath) - if err != nil { - t.Fatalf("Failed to parse workflow: %v", err) - } - - safeInputsConfig := workflowData.SafeInputs - if safeInputsConfig == nil { - t.Fatal("Expected safe-inputs config to be parsed") - } - - // Create a temp directory for files - filesDir := t.TempDir() - - // Write files - err = writeSafeInputsFiles(filesDir, safeInputsConfig, false) - if err != nil { - t.Fatalf("writeSafeInputsFiles failed: %v", err) - } - - // Verify the echo-tool.sh file was created - toolPath := filepath.Join(filesDir, "echo-tool.sh") - if _, err := os.Stat(toolPath); os.IsNotExist(err) { - t.Error("echo-tool.sh not found") - } - - // Verify tools.json contains the echo-tool - toolsPath := filepath.Join(filesDir, "tools.json") - toolsContent, err := os.ReadFile(toolsPath) - if err != nil { - t.Fatalf("Failed to read tools.json: %v", err) - } - - // Simple check that the tool name is in the JSON - if len(toolsContent) < 50 { - t.Error("tools.json seems too short") - } -} - -// TestSpawnSafeInputsInspector_WithImportedSafeInputs tests that imported safe-inputs are resolved -func TestSpawnSafeInputsInspector_WithImportedSafeInputs(t *testing.T) { - // Create temporary directory with workflow and shared files - tmpDir := t.TempDir() - 
workflowsDir := filepath.Join(tmpDir, ".github", "workflows") - sharedDir := filepath.Join(workflowsDir, "shared") - if err := os.MkdirAll(sharedDir, 0755); err != nil { - t.Fatalf("Failed to create workflows directory: %v", err) - } - - // Create a shared workflow file with safe-inputs - sharedContent := `--- -safe-inputs: - shared-tool: - description: "Shared tool from import" - inputs: - param: - type: string - description: "A parameter" - required: true - run: | - echo "Shared: $param" ---- -# Shared Workflow -` - sharedPath := filepath.Join(sharedDir, "shared.md") - if err := os.WriteFile(sharedPath, []byte(sharedContent), 0644); err != nil { - t.Fatalf("Failed to write shared workflow file: %v", err) - } - - // Create a test workflow file that imports the shared workflow - workflowContent := `--- -on: push -engine: copilot -imports: - - shared/shared.md -safe-inputs: - local-tool: - description: "Local tool" - inputs: - message: - type: string - description: "Message to echo" - required: true - run: | - echo "$message" ---- -# Test Workflow - -This workflow imports safe-inputs from shared/shared.md. 
-` - workflowPath := filepath.Join(workflowsDir, "test.md") - if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil { - t.Fatalf("Failed to write workflow file: %v", err) - } - - // Change to the temporary directory - originalDir, _ := os.Getwd() - defer os.Chdir(originalDir) - os.Chdir(tmpDir) - - // Parse the workflow using the compiler to get safe-inputs config - // This should include both local and imported safe-inputs - compiler := workflow.NewCompiler() - workflowData, err := compiler.ParseWorkflowFile(workflowPath) - if err != nil { - t.Fatalf("Failed to parse workflow: %v", err) - } - - safeInputsConfig := workflowData.SafeInputs - if safeInputsConfig == nil { - t.Fatal("Expected safe-inputs config to be parsed") - } - - // Verify both local and imported tools are present - if len(safeInputsConfig.Tools) != 2 { - t.Errorf("Expected 2 tools (local + imported), got %d", len(safeInputsConfig.Tools)) - } - - // Verify local tool exists - if _, exists := safeInputsConfig.Tools["local-tool"]; !exists { - t.Error("Expected local-tool to be present") - } - - // Verify imported tool exists - if _, exists := safeInputsConfig.Tools["shared-tool"]; !exists { - t.Error("Expected shared-tool (from import) to be present") - } - - // Create a temp directory for files - filesDir := t.TempDir() - - // Write files - err = writeSafeInputsFiles(filesDir, safeInputsConfig, false) - if err != nil { - t.Fatalf("writeSafeInputsFiles failed: %v", err) - } - - // Verify both tool handler files were created - localToolPath := filepath.Join(filesDir, "local-tool.sh") - if _, err := os.Stat(localToolPath); os.IsNotExist(err) { - t.Error("local-tool.sh not found") - } - - sharedToolPath := filepath.Join(filesDir, "shared-tool.sh") - if _, err := os.Stat(sharedToolPath); os.IsNotExist(err) { - t.Error("shared-tool.sh not found") - } - - // Verify tools.json contains both tools - toolsPath := filepath.Join(filesDir, "tools.json") - toolsContent, err := 
os.ReadFile(toolsPath) - if err != nil { - t.Fatalf("Failed to read tools.json: %v", err) - } - - // Check that both tool names are in the JSON - toolsJSON := string(toolsContent) - if !strings.Contains(toolsJSON, "local-tool") { - t.Error("tools.json should contain 'local-tool'") - } - if !strings.Contains(toolsJSON, "shared-tool") { - t.Error("tools.json should contain 'shared-tool'") - } -} diff --git a/pkg/cli/validation_output.go b/pkg/cli/validation_output.go deleted file mode 100644 index f27ad3663b..0000000000 --- a/pkg/cli/validation_output.go +++ /dev/null @@ -1,54 +0,0 @@ -package cli - -import ( - "fmt" - "os" - - "github.com/github/gh-aw/pkg/console" - "github.com/github/gh-aw/pkg/logger" -) - -var validationOutputLog = logger.New("cli:validation_output") - -// FormatValidationError formats validation errors for console output -// Preserves structured error content while applying console styling -// -// This function bridges the gap between pure validation logic (plain text errors) -// and CLI presentation layer (styled console output). By keeping validation errors -// as plain text at the validation layer, we maintain testability and reusability -// while providing consistent styled output in CLI contexts. -// -// The function handles both simple single-line errors and complex multi-line -// structured errors (like GitHubToolsetValidationError) by applying console -// formatting to preserve the error structure and readability. 
-func FormatValidationError(err error) string {
-	if err == nil {
-		return ""
-	}
-
-	errMsg := err.Error()
-	validationOutputLog.Printf("Formatting validation error: %s", errMsg)
-
-	// Apply console formatting to the entire error message
-	// This preserves structured multi-line errors while adding visual styling
-	return console.FormatErrorMessage(errMsg)
-}
-
-// PrintValidationError prints a validation error to stderr with console formatting
-//
-// This is a convenience helper that combines formatting and printing in one call.
-// All validation errors should be printed using this function to ensure consistent
-// styling across the CLI.
-//
-// Example usage:
-//
-//	if err := ValidateWorkflow(config); err != nil {
-//		PrintValidationError(err)
-//		return err
-//	}
-func PrintValidationError(err error) {
-	if err == nil {
-		return
-	}
-	fmt.Fprintln(os.Stderr, FormatValidationError(err))
-}
diff --git a/pkg/cli/validation_output_test.go b/pkg/cli/validation_output_test.go
deleted file mode 100644
index 167e36bf0e..0000000000
--- a/pkg/cli/validation_output_test.go
+++ /dev/null
@@ -1,234 +0,0 @@
-//go:build !integration
-
-package cli
-
-import (
-	"errors"
-	"strings"
-	"testing"
-
-	"github.com/github/gh-aw/pkg/workflow"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-// TestFormatValidationError verifies that validation errors are formatted with console styling
-func TestFormatValidationError(t *testing.T) {
-	tests := []struct {
-		name          string
-		err           error
-		expectEmpty   bool
-		mustContain   []string
-		mustNotChange string // Content that must be preserved
-	}{
-		{
-			name:        "nil error returns empty string",
-			err:         nil,
-			expectEmpty: true,
-		},
-		{
-			name:        "simple single-line error",
-			err:         errors.New("missing required field 'engine'"),
-			expectEmpty: false,
-			mustContain: []string{
-				"missing required field 'engine'",
-			},
-			mustNotChange: "missing required field 'engine'",
-		},
-		{
-			name: "error with example",
-			err: 
errors.New("invalid engine: unknown. Valid engines are: copilot, claude, codex, custom. Example: engine: copilot"), - expectEmpty: false, - mustContain: []string{ - "invalid engine", - "Valid engines are", - "Example:", - }, - mustNotChange: "invalid engine: unknown. Valid engines are: copilot, claude, codex, custom. Example: engine: copilot", - }, - { - name: "multi-line error", - err: errors.New(`invalid configuration: - field 'engine' is required - field 'on' is missing`), - expectEmpty: false, - mustContain: []string{ - "invalid configuration", - "field 'engine' is required", - "field 'on' is missing", - }, - }, - { - name: "structured validation error (GitHubToolsetValidationError)", - err: workflow.NewGitHubToolsetValidationError(map[string][]string{ - "issues": {"list_issues", "create_issue"}, - }), - expectEmpty: false, - mustContain: []string{ - "ERROR", - "issues", - "list_issues", - "create_issue", - "Suggested fix", - }, - }, - { - name: "error with formatting characters", - err: errors.New("path must be relative, got: /absolute/path"), - mustContain: []string{ - "path must be relative", - "/absolute/path", - }, - mustNotChange: "path must be relative, got: /absolute/path", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := FormatValidationError(tt.err) - - if tt.expectEmpty { - assert.Empty(t, result, "Expected empty string for nil error") - return - } - - // Verify content is preserved - if tt.mustNotChange != "" { - assert.Contains(t, result, tt.mustNotChange, - "Formatted error must contain original error message") - } - - // Verify all required content is present - for _, expected := range tt.mustContain { - assert.Contains(t, result, expected, - "Formatted error must contain: %s", expected) - } - - // Verify formatting is applied (should not be identical to plain error) - if tt.err != nil && !tt.expectEmpty { - plainMsg := tt.err.Error() - // The formatted message should be longer (due to ANSI codes or 
prefix) - // or at minimum have the error symbol prefix - if result == plainMsg { - t.Errorf("Expected formatting to be applied, but result matches plain error.\nPlain: %s\nFormatted: %s", - plainMsg, result) - } - } - }) - } -} - -// TestPrintValidationError verifies that PrintValidationError outputs to stderr -// Note: This is a smoke test to ensure the function doesn't panic -func TestPrintValidationError(t *testing.T) { - tests := []struct { - name string - err error - }{ - { - name: "nil error does not panic", - err: nil, - }, - { - name: "simple error does not panic", - err: errors.New("test error"), - }, - { - name: "complex structured error does not panic", - err: workflow.NewGitHubToolsetValidationError(map[string][]string{ - "repos": {"get_repository"}, - }), - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - // This test ensures PrintValidationError doesn't panic - // Actual output testing would require capturing stderr - require.NotPanics(t, func() { - PrintValidationError(tt.err) - }, "PrintValidationError should not panic") - }) - } -} - -// TestFormatValidationErrorPreservesStructure verifies that multi-line errors maintain their structure -func TestFormatValidationErrorPreservesStructure(t *testing.T) { - // Create a structured error with multiple lines and sections - structuredErr := workflow.NewGitHubToolsetValidationError(map[string][]string{ - "issues": {"list_issues", "create_issue"}, - "actions": {"list_workflows"}, - }) - - result := FormatValidationError(structuredErr) - - // Verify structure is preserved - require.NotEmpty(t, result, "Result should not be empty") - - // Verify line breaks are maintained (multi-line error) - assert.Contains(t, result, "\n", "Multi-line structure should be preserved") - - // Verify all sections are present - sections := []string{ - "ERROR", - "actions", - "issues", - "list_workflows", - "list_issues", - "create_issue", - "Suggested fix", - "toolsets:", - } - - for _, section := 
range sections { - assert.Contains(t, result, section, - "Structured error should contain section: %s", section) - } - - // Verify the error message contains the original structured content - originalMsg := structuredErr.Error() - lines := strings.SplitSeq(originalMsg, "\n") - for line := range lines { - if strings.TrimSpace(line) != "" { - assert.Contains(t, result, strings.TrimSpace(line), - "Structured error should preserve line: %s", line) - } - } -} - -// TestFormatValidationErrorContentIntegrity verifies that formatting doesn't alter error content -func TestFormatValidationErrorContentIntegrity(t *testing.T) { - errorMessages := []string{ - "simple error", - "error with special chars: @#$%^&*()", - "error with path: /home/user/file.txt", - "error with URL: https://example.com", - "error with code snippet: engine: copilot", - "multi\nline\nerror\nwith\nbreaks", - "error with numbers: 123 456 789", - "error with quotes: 'single' and \"double\"", - } - - for _, msg := range errorMessages { - t.Run("content_integrity_"+strings.ReplaceAll(msg, "\n", "_"), func(t *testing.T) { - err := errors.New(msg) - result := FormatValidationError(err) - - // Verify the original message content is present in the result - assert.Contains(t, result, msg, - "Formatted error must preserve original content") - - // Verify no content is lost or corrupted - // The formatted version should contain at least as many meaningful characters - originalLength := len(strings.TrimSpace(msg)) - // Remove common ANSI codes to get actual content length - cleanResult := strings.ReplaceAll(result, "\033[", "") - cleanResult = strings.ReplaceAll(cleanResult, "\x1b[", "") - - if len(cleanResult) < originalLength { - t.Errorf("Formatting appears to have removed content. 
Original: %d chars, Result: %d chars", - originalLength, len(cleanResult)) - } - }) - } -} diff --git a/pkg/console/form.go b/pkg/console/form.go deleted file mode 100644 index 91078e0e0e..0000000000 --- a/pkg/console/form.go +++ /dev/null @@ -1,122 +0,0 @@ -//go:build !js && !wasm - -package console - -import ( - "errors" - "fmt" - - "github.com/charmbracelet/huh" - "github.com/github/gh-aw/pkg/tty" -) - -// RunForm executes a multi-field form with validation -// This is a higher-level helper that creates a form with multiple fields -func RunForm(fields []FormField) error { - // Validate inputs first before checking TTY - if len(fields) == 0 { - return errors.New("no form fields provided") - } - - // Validate field configurations before checking TTY - for _, field := range fields { - if field.Type == "select" && len(field.Options) == 0 { - return fmt.Errorf("select field '%s' requires options", field.Title) - } - if field.Type != "input" && field.Type != "password" && field.Type != "confirm" && field.Type != "select" { - return fmt.Errorf("unknown field type: %s", field.Type) - } - } - - // Check if stdin is a TTY - if not, we can't show interactive forms - if !tty.IsStderrTerminal() { - return errors.New("interactive forms not available (not a TTY)") - } - - // Build form fields - var huhFields []huh.Field - for _, field := range fields { - switch field.Type { - case "input": - inputField := huh.NewInput(). - Title(field.Title). - Description(field.Description). - Placeholder(field.Placeholder) - - if field.Validate != nil { - inputField.Validate(field.Validate) - } - - // Type assert to *string - if strPtr, ok := field.Value.(*string); ok { - inputField.Value(strPtr) - } else { - return fmt.Errorf("input field '%s' requires *string value", field.Title) - } - - huhFields = append(huhFields, inputField) - - case "password": - passwordField := huh.NewInput(). - Title(field.Title). - Description(field.Description). 
- EchoMode(huh.EchoModePassword) - - if field.Validate != nil { - passwordField.Validate(field.Validate) - } - - // Type assert to *string - if strPtr, ok := field.Value.(*string); ok { - passwordField.Value(strPtr) - } else { - return fmt.Errorf("password field '%s' requires *string value", field.Title) - } - - huhFields = append(huhFields, passwordField) - - case "confirm": - confirmField := huh.NewConfirm(). - Title(field.Title) - - // Type assert to *bool - if boolPtr, ok := field.Value.(*bool); ok { - confirmField.Value(boolPtr) - } else { - return fmt.Errorf("confirm field '%s' requires *bool value", field.Title) - } - - huhFields = append(huhFields, confirmField) - - case "select": - selectField := huh.NewSelect[string](). - Title(field.Title). - Description(field.Description) - - // Convert options to huh.Option format - huhOptions := make([]huh.Option[string], len(field.Options)) - for i, opt := range field.Options { - huhOptions[i] = huh.NewOption(opt.Label, opt.Value) - } - selectField.Options(huhOptions...) 
- - // Type assert to *string - if strPtr, ok := field.Value.(*string); ok { - selectField.Value(strPtr) - } else { - return fmt.Errorf("select field '%s' requires *string value", field.Title) - } - - huhFields = append(huhFields, selectField) - - default: - } - } - - // Create and run the form - form := huh.NewForm( - huh.NewGroup(huhFields...), - ).WithAccessible(IsAccessibleMode()) - - return form.Run() -} diff --git a/pkg/console/form_test.go b/pkg/console/form_test.go deleted file mode 100644 index 64efed30b9..0000000000 --- a/pkg/console/form_test.go +++ /dev/null @@ -1,169 +0,0 @@ -//go:build !integration - -package console - -import ( - "errors" - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestRunForm(t *testing.T) { - t.Run("function signature", func(t *testing.T) { - // Verify the function exists and has the right signature - _ = RunForm - }) - - t.Run("requires fields", func(t *testing.T) { - fields := []FormField{} - - err := RunForm(fields) - require.Error(t, err, "Should error with no fields") - assert.Contains(t, err.Error(), "no form fields", "Error should mention missing fields") - }) - - t.Run("validates input field", func(t *testing.T) { - var name string - fields := []FormField{ - { - Type: "input", - Title: "Name", - Description: "Enter your name", - Value: &name, - }, - } - - err := RunForm(fields) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) - - t.Run("validates password field", func(t *testing.T) { - var password string - fields := []FormField{ - { - Type: "password", - Title: "Password", - Description: "Enter password", - Value: &password, - }, - } - - err := RunForm(fields) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), 
"not a TTY", "Error should mention TTY") - }) - - t.Run("validates confirm field", func(t *testing.T) { - var confirmed bool - fields := []FormField{ - { - Type: "confirm", - Title: "Confirm action", - Value: &confirmed, - }, - } - - err := RunForm(fields) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) - - t.Run("validates select field with options", func(t *testing.T) { - var selected string - fields := []FormField{ - { - Type: "select", - Title: "Choose option", - Description: "Select one", - Value: &selected, - Options: []SelectOption{ - {Label: "Option 1", Value: "opt1"}, - {Label: "Option 2", Value: "opt2"}, - }, - }, - } - - err := RunForm(fields) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) - - t.Run("rejects select field without options", func(t *testing.T) { - var selected string - fields := []FormField{ - { - Type: "select", - Title: "Choose option", - Value: &selected, - Options: []SelectOption{}, - }, - } - - err := RunForm(fields) - require.Error(t, err, "Should error with no options") - assert.Contains(t, err.Error(), "requires options", "Error should mention missing options") - }) - - t.Run("rejects unknown field type", func(t *testing.T) { - var value string - fields := []FormField{ - { - Type: "unknown", - Title: "Test", - Value: &value, - }, - } - - err := RunForm(fields) - require.Error(t, err, "Should error with unknown field type") - assert.Contains(t, err.Error(), "unknown field type", "Error should mention unknown type") - }) - - t.Run("validates input field with custom validator", func(t *testing.T) { - var name string - fields := []FormField{ - { - Type: "input", - Title: "Name", - Description: "Enter your name", - Value: &name, - 
Validate: func(s string) error { - if len(s) < 3 { - return errors.New("must be at least 3 characters") - } - return nil - }, - }, - } - - err := RunForm(fields) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) -} - -func TestFormField(t *testing.T) { - t.Run("struct creation", func(t *testing.T) { - var value string - field := FormField{ - Type: "input", - Title: "Test Field", - Description: "Test Description", - Placeholder: "Enter value", - Value: &value, - } - - assert.Equal(t, "input", field.Type, "Type should match") - assert.Equal(t, "Test Field", field.Title, "Title should match") - assert.Equal(t, "Test Description", field.Description, "Description should match") - assert.Equal(t, "Enter value", field.Placeholder, "Placeholder should match") - }) -} diff --git a/pkg/console/golden_test.go b/pkg/console/golden_test.go index 0c2acc0413..648da3cf65 100644 --- a/pkg/console/golden_test.go +++ b/pkg/console/golden_test.go @@ -7,9 +7,7 @@ import ( "strings" "testing" - "github.com/charmbracelet/lipgloss" "github.com/charmbracelet/x/exp/golden" - "github.com/github/gh-aw/pkg/styles" ) // TestGolden_TableRendering tests table rendering with different configurations @@ -132,36 +130,6 @@ func TestGolden_BoxRendering(t *testing.T) { } // TestGolden_LayoutBoxRendering tests layout box rendering (returns string) -func TestGolden_LayoutBoxRendering(t *testing.T) { - tests := []struct { - name string - title string - width int - }{ - { - name: "layout_narrow", - title: "Test", - width: 30, - }, - { - name: "layout_medium", - title: "Trial Execution Plan", - width: 60, - }, - { - name: "layout_wide", - title: "GitHub Agentic Workflows Compilation Report", - width: 100, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutTitleBox(tt.title, tt.width) - golden.RequireEqual(t, 
[]byte(output)) - }) - } -} // TestGolden_TreeRendering tests tree rendering with different hierarchies func TestGolden_TreeRendering(t *testing.T) { @@ -467,94 +435,8 @@ func TestGolden_MessageFormatting(t *testing.T) { } // TestGolden_LayoutComposition tests composing multiple layout elements -func TestGolden_LayoutComposition(t *testing.T) { - tests := []struct { - name string - sections func() []string - }{ - { - name: "title_and_info", - sections: func() []string { - return []string{ - LayoutTitleBox("Trial Execution Plan", 60), - "", - LayoutInfoSection("Workflow", "test-workflow"), - LayoutInfoSection("Status", "Ready"), - } - }, - }, - { - name: "complete_composition", - sections: func() []string { - return []string{ - LayoutTitleBox("Trial Execution Plan", 60), - "", - LayoutInfoSection("Workflow", "test-workflow"), - LayoutInfoSection("Status", "Ready"), - "", - LayoutEmphasisBox("⚠️ WARNING: Large workflow file", styles.ColorWarning), - } - }, - }, - { - name: "multiple_emphasis_boxes", - sections: func() []string { - return []string{ - LayoutEmphasisBox("✓ Success", styles.ColorSuccess), - "", - LayoutEmphasisBox("⚠️ Warning", styles.ColorWarning), - "", - LayoutEmphasisBox("✗ Error", styles.ColorError), - } - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - sections := tt.sections() - output := LayoutJoinVertical(sections...) 
- golden.RequireEqual(t, []byte(output)) - }) - } -} // TestGolden_LayoutEmphasisBox tests emphasis boxes with different colors -func TestGolden_LayoutEmphasisBox(t *testing.T) { - tests := []struct { - name string - content string - color lipgloss.AdaptiveColor - }{ - { - name: "error_box", - content: "✗ ERROR: Compilation failed", - color: styles.ColorError, - }, - { - name: "warning_box", - content: "⚠️ WARNING: Deprecated syntax", - color: styles.ColorWarning, - }, - { - name: "success_box", - content: "✓ SUCCESS: All tests passed", - color: styles.ColorSuccess, - }, - { - name: "info_box", - content: "ℹ INFO: Processing workflow", - color: styles.ColorInfo, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutEmphasisBox(tt.content, tt.color) - golden.RequireEqual(t, []byte(output)) - }) - } -} // TestGolden_InfoSection tests info section rendering func TestGolden_InfoSection(t *testing.T) { diff --git a/pkg/console/layout.go b/pkg/console/layout.go deleted file mode 100644 index 3ea85a5597..0000000000 --- a/pkg/console/layout.go +++ /dev/null @@ -1,162 +0,0 @@ -//go:build !js && !wasm - -// Package console provides layout composition helpers for creating styled CLI output with Lipgloss. -// -// # Layout Composition Helpers -// -// The layout package provides reusable helper functions for common Lipgloss layout patterns. -// These helpers automatically respect TTY detection and provide both styled (TTY) and plain text -// (non-TTY) output modes. 
-// -// # Usage Example -// -// Here's a complete example showing how to compose a styled CLI output: -// -// import ( -// "fmt" -// "os" -// "github.com/github/gh-aw/pkg/console" -// "github.com/github/gh-aw/pkg/styles" -// ) -// -// // Create layout elements -// title := console.LayoutTitleBox("Trial Execution Plan", 60) -// info1 := console.LayoutInfoSection("Workflow", "test-workflow") -// info2 := console.LayoutInfoSection("Status", "Ready") -// warning := console.LayoutEmphasisBox("⚠️ WARNING: Large workflow file", styles.ColorWarning) -// -// // Compose sections vertically with spacing -// output := console.LayoutJoinVertical(title, "", info1, info2, "", warning) -// fmt.Fprintln(os.Stderr, output) -// -// # TTY Detection -// -// All layout helpers automatically detect whether output is going to a terminal (TTY) or being -// piped/redirected. In TTY mode, they use Lipgloss styling with colors and borders. In non-TTY -// mode, they output plain text suitable for parsing or logging. -// -// # Available Helpers -// -// - LayoutTitleBox: Centered title with double border -// - LayoutInfoSection: Info section with left border emphasis -// - LayoutEmphasisBox: Thick-bordered box with custom color -// - LayoutJoinVertical: Composes sections with automatic spacing -// -// # Comparison with Existing Functions -// -// These helpers complement the existing RenderTitleBox, RenderInfoSection, and -// RenderComposedSections functions in console.go. 
The key differences: -// -// - Layout helpers return strings instead of []string for simpler composition -// - LayoutInfoSection takes separate label and value parameters -// - LayoutEmphasisBox provides custom color support with thick borders -// - Layout helpers are designed for inline composition and chaining -package console - -import ( - "strings" - - "github.com/charmbracelet/lipgloss" - "github.com/github/gh-aw/pkg/styles" - "github.com/github/gh-aw/pkg/tty" -) - -// LayoutTitleBox renders a title with a double border box as a single string. -// In TTY mode, uses Lipgloss styled box centered with the Info color scheme. -// In non-TTY mode, renders plain text with separator lines. -// This is a simpler alternative to RenderTitleBox that returns a string instead of []string. -// -// Example: -// -// title := console.LayoutTitleBox("Trial Execution Plan", 60) -// fmt.Fprintln(os.Stderr, title) -func LayoutTitleBox(title string, width int) string { - if tty.IsStderrTerminal() { - // TTY mode: Use Lipgloss styled box - box := lipgloss.NewStyle(). - Bold(true). - Foreground(styles.ColorInfo). - Border(lipgloss.DoubleBorder(), true, false). - Padding(0, 2). - Width(width). - Align(lipgloss.Center). - Render(title) - return box - } - - // Non-TTY mode: Plain text with separators - separator := strings.Repeat("=", width) - return separator + "\n " + title + "\n" + separator -} - -// LayoutInfoSection renders an info section with left border emphasis as a single string. -// In TTY mode, uses Lipgloss styled section with left border and padding. -// In non-TTY mode, adds manual indentation. -// This is a simpler alternative to RenderInfoSection that returns a string and takes label/value. 
-// -// Example: -// -// info := console.LayoutInfoSection("Workflow", "test-workflow") -// fmt.Fprintln(os.Stderr, info) -func LayoutInfoSection(label, value string) string { - content := label + ": " + value - - if tty.IsStderrTerminal() { - // TTY mode: Use Lipgloss styled section with left border and padding - section := lipgloss.NewStyle(). - Border(lipgloss.NormalBorder(), false, false, false, true). - BorderForeground(styles.ColorInfo). - PaddingLeft(2). - Render(content) - return section - } - - // Non-TTY mode: Add manual indentation - return " " + content -} - -// LayoutEmphasisBox renders content in a rounded-bordered box with custom color. -// In TTY mode, uses Lipgloss styled box with rounded border for a polished appearance. -// In non-TTY mode, renders content with surrounding marker lines. -// -// Example: -// -// warning := console.LayoutEmphasisBox("⚠️ WARNING: Large workflow", styles.ColorWarning) -// fmt.Fprintln(os.Stderr, warning) -func LayoutEmphasisBox(content string, color lipgloss.AdaptiveColor) string { - if tty.IsStderrTerminal() { - // TTY mode: Use Lipgloss styled box with rounded border for a softer appearance - box := lipgloss.NewStyle(). - Bold(true). - Foreground(color). - Border(styles.RoundedBorder). - BorderForeground(color). - Padding(0, 2). - Render(content) - return box - } - - // Non-TTY mode: Content with marker lines - marker := strings.Repeat("!", len(content)+4) - return marker + "\n " + content + "\n" + marker -} - -// LayoutJoinVertical composes sections vertically with automatic spacing. -// In TTY mode, uses lipgloss.JoinVertical for proper composition. -// In non-TTY mode, joins sections with newlines. 
-// -// Example: -// -// title := console.LayoutTitleBox("Plan", 60) -// info := console.LayoutInfoSection("Status", "Ready") -// output := console.LayoutJoinVertical(title, info) -// fmt.Fprintln(os.Stderr, output) -func LayoutJoinVertical(sections ...string) string { - if tty.IsStderrTerminal() { - // TTY mode: Use Lipgloss to compose sections vertically - return lipgloss.JoinVertical(lipgloss.Left, sections...) - } - - // Non-TTY mode: Join with newlines - return strings.Join(sections, "\n") -} diff --git a/pkg/console/layout_test.go b/pkg/console/layout_test.go deleted file mode 100644 index 6360c99d06..0000000000 --- a/pkg/console/layout_test.go +++ /dev/null @@ -1,383 +0,0 @@ -//go:build !integration - -package console - -import ( - "strings" - "testing" - - "github.com/charmbracelet/lipgloss" - "github.com/github/gh-aw/pkg/styles" -) - -func TestLayoutTitleBox(t *testing.T) { - tests := []struct { - name string - title string - width int - expected []string // Substrings that should be present in output - }{ - { - name: "basic title", - title: "Test Title", - width: 40, - expected: []string{ - "Test Title", - }, - }, - { - name: "longer title", - title: "Trial Execution Plan", - width: 80, - expected: []string{ - "Trial Execution Plan", - }, - }, - { - name: "title with special characters", - title: "⚠️ Important Notice", - width: 60, - expected: []string{ - "⚠️ Important Notice", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutTitleBox(tt.title, tt.width) - - // Check that output is not empty - if output == "" { - t.Error("LayoutTitleBox() returned empty string") - } - - // Check that title appears in output - for _, expected := range tt.expected { - if !strings.Contains(output, expected) { - t.Errorf("LayoutTitleBox() output missing expected string '%s'\nGot:\n%s", expected, output) - } - } - }) - } -} - -func TestLayoutInfoSection(t *testing.T) { - tests := []struct { - name string - label string - 
value string - expected []string // Substrings that should be present in output - }{ - { - name: "simple label and value", - label: "Workflow", - value: "test-workflow", - expected: []string{ - "Workflow", - "test-workflow", - }, - }, - { - name: "status label", - label: "Status", - value: "Active", - expected: []string{ - "Status", - "Active", - }, - }, - { - name: "file path value", - label: "Location", - value: "/path/to/file", - expected: []string{ - "Location", - "/path/to/file", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutInfoSection(tt.label, tt.value) - - // Check that output is not empty - if output == "" { - t.Error("LayoutInfoSection() returned empty string") - } - - // Check that expected strings appear in output - for _, expected := range tt.expected { - if !strings.Contains(output, expected) { - t.Errorf("LayoutInfoSection() output missing expected string '%s'\nGot:\n%s", expected, output) - } - } - }) - } -} - -func TestLayoutEmphasisBox(t *testing.T) { - tests := []struct { - name string - content string - color lipgloss.AdaptiveColor - expected []string // Substrings that should be present in output - }{ - { - name: "warning message", - content: "⚠️ WARNING", - color: styles.ColorWarning, - expected: []string{ - "⚠️ WARNING", - }, - }, - { - name: "error message", - content: "✗ ERROR: Failed", - color: styles.ColorError, - expected: []string{ - "✗ ERROR: Failed", - }, - }, - { - name: "success message", - content: "✓ Success", - color: styles.ColorSuccess, - expected: []string{ - "✓ Success", - }, - }, - { - name: "info message", - content: "ℹ Information", - color: styles.ColorInfo, - expected: []string{ - "ℹ Information", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutEmphasisBox(tt.content, tt.color) - - // Check that output is not empty - if output == "" { - t.Error("LayoutEmphasisBox() returned empty string") - } - - // Check that 
content appears in output - for _, expected := range tt.expected { - if !strings.Contains(output, expected) { - t.Errorf("LayoutEmphasisBox() output missing expected string '%s'\nGot:\n%s", expected, output) - } - } - }) - } -} - -func TestLayoutJoinVertical(t *testing.T) { - tests := []struct { - name string - sections []string - expected []string // Substrings that should be present in output - }{ - { - name: "single section", - sections: []string{"Section 1"}, - expected: []string{"Section 1"}, - }, - { - name: "multiple sections", - sections: []string{"Section 1", "Section 2", "Section 3"}, - expected: []string{ - "Section 1", - "Section 2", - "Section 3", - }, - }, - { - name: "sections with empty strings", - sections: []string{"Section 1", "", "Section 2"}, - expected: []string{ - "Section 1", - "Section 2", - }, - }, - { - name: "empty sections", - sections: []string{}, - expected: []string{}, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutJoinVertical(tt.sections...) 
- - // For empty sections, output should be empty - if len(tt.sections) == 0 { - if output != "" { - t.Errorf("LayoutJoinVertical() expected empty string, got: %s", output) - } - return - } - - // Check that expected strings appear in output - for _, expected := range tt.expected { - if expected == "" { - continue - } - if !strings.Contains(output, expected) { - t.Errorf("LayoutJoinVertical() output missing expected string '%s'\nGot:\n%s", expected, output) - } - } - }) - } -} - -func TestLayoutCompositionAPI(t *testing.T) { - t.Run("compose multiple layout elements", func(t *testing.T) { - // Test the API example from the documentation - title := LayoutTitleBox("Trial Execution Plan", 60) - info := LayoutInfoSection("Workflow", "test-workflow") - warning := LayoutEmphasisBox("⚠️ WARNING", styles.ColorWarning) - - // Compose sections vertically with spacing - output := LayoutJoinVertical(title, "", info, "", warning) - - // Verify all elements are present in output - expected := []string{ - "Trial Execution Plan", - "Workflow", - "test-workflow", - "⚠️ WARNING", - } - - for _, exp := range expected { - if !strings.Contains(output, exp) { - t.Errorf("Composed output missing expected string '%s'\nGot:\n%s", exp, output) - } - } - }) -} - -func TestLayoutWidthConstraints(t *testing.T) { - tests := []struct { - name string - width int - }{ - {"narrow width", 40}, - {"medium width", 60}, - {"wide width", 80}, - {"very wide width", 120}, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - output := LayoutTitleBox("Test", tt.width) - - // Output should not be empty - if output == "" { - t.Error("LayoutTitleBox() returned empty string") - } - - // In non-TTY mode, separator length should match width - // We can't test TTY mode easily, but we can check non-TTY - lines := strings.Split(output, "\n") - if len(lines) > 0 { - // First line should contain separators or styled content - if len(lines[0]) == 0 { - t.Error("LayoutTitleBox() first line is 
empty") - } - } - }) - } -} - -func TestLayoutWithDifferentColors(t *testing.T) { - colors := []struct { - name string - color lipgloss.AdaptiveColor - }{ - {"error color", styles.ColorError}, - {"warning color", styles.ColorWarning}, - {"success color", styles.ColorSuccess}, - {"info color", styles.ColorInfo}, - {"purple color", styles.ColorPurple}, - {"yellow color", styles.ColorYellow}, - } - - for _, c := range colors { - t.Run(c.name, func(t *testing.T) { - output := LayoutEmphasisBox("Test Content", c.color) - - // Output should not be empty - if output == "" { - t.Error("LayoutEmphasisBox() returned empty string") - } - - // Content should be present - if !strings.Contains(output, "Test Content") { - t.Errorf("LayoutEmphasisBox() missing content, got: %s", output) - } - }) - } -} - -func TestLayoutNonTTYOutput(t *testing.T) { - // These tests verify that non-TTY output is plain text - // In actual non-TTY environment, output should be plain without ANSI codes - - t.Run("title box non-tty format", func(t *testing.T) { - output := LayoutTitleBox("Test", 40) - // Should contain the title - if !strings.Contains(output, "Test") { - t.Errorf("Expected title in output, got: %s", output) - } - }) - - t.Run("info section non-tty format", func(t *testing.T) { - output := LayoutInfoSection("Label", "Value") - // Should contain label and value - if !strings.Contains(output, "Label") || !strings.Contains(output, "Value") { - t.Errorf("Expected label and value in output, got: %s", output) - } - }) - - t.Run("emphasis box non-tty format", func(t *testing.T) { - output := LayoutEmphasisBox("Content", styles.ColorWarning) - // Should contain content - if !strings.Contains(output, "Content") { - t.Errorf("Expected content in output, got: %s", output) - } - }) -} - -// Example demonstrates how to compose a styled CLI output -// using the layout helper functions. 
-func Example() { - // Create layout elements - title := LayoutTitleBox("Trial Execution Plan", 60) - info1 := LayoutInfoSection("Workflow", "test-workflow") - info2 := LayoutInfoSection("Status", "Ready") - warning := LayoutEmphasisBox("⚠️ WARNING: Large workflow file", styles.ColorWarning) - - // Compose sections vertically with spacing - output := LayoutJoinVertical(title, "", info1, info2, "", warning) - - // In a real application, you would output to stderr: - // fmt.Fprintln(os.Stderr, output) - - // For test purposes, just verify the output contains expected content - if !strings.Contains(output, "Trial Execution Plan") { - panic("missing title") - } - if !strings.Contains(output, "test-workflow") { - panic("missing workflow name") - } - if !strings.Contains(output, "WARNING") { - panic("missing warning") - } -} diff --git a/pkg/console/select.go b/pkg/console/select.go deleted file mode 100644 index 0d2a94a0ba..0000000000 --- a/pkg/console/select.go +++ /dev/null @@ -1,91 +0,0 @@ -//go:build !js && !wasm - -package console - -import ( - "errors" - - "github.com/charmbracelet/huh" - "github.com/github/gh-aw/pkg/tty" -) - -// PromptSelect shows an interactive single-select menu -// Returns the selected value or an error -func PromptSelect(title, description string, options []SelectOption) (string, error) { - // Validate inputs first - if len(options) == 0 { - return "", errors.New("no options provided") - } - - // Check if stdin is a TTY - if not, we can't show interactive forms - if !tty.IsStderrTerminal() { - return "", errors.New("interactive selection not available (not a TTY)") - } - - var selected string - - // Convert options to huh.Option format - huhOptions := make([]huh.Option[string], len(options)) - for i, opt := range options { - huhOptions[i] = huh.NewOption(opt.Label, opt.Value) - } - - form := huh.NewForm( - huh.NewGroup( - huh.NewSelect[string](). - Title(title). - Description(description). - Options(huhOptions...). 
- Value(&selected), - ), - ).WithAccessible(IsAccessibleMode()) - - if err := form.Run(); err != nil { - return "", err - } - - return selected, nil -} - -// PromptMultiSelect shows an interactive multi-select menu -// Returns the selected values or an error -func PromptMultiSelect(title, description string, options []SelectOption, limit int) ([]string, error) { - // Validate inputs first - if len(options) == 0 { - return nil, errors.New("no options provided") - } - - // Check if stdin is a TTY - if not, we can't show interactive forms - if !tty.IsStderrTerminal() { - return nil, errors.New("interactive selection not available (not a TTY)") - } - - var selected []string - - // Convert options to huh.Option format - huhOptions := make([]huh.Option[string], len(options)) - for i, opt := range options { - huhOptions[i] = huh.NewOption(opt.Label, opt.Value) - } - - multiSelect := huh.NewMultiSelect[string](). - Title(title). - Description(description). - Options(huhOptions...). - Value(&selected) - - // Set limit if specified (0 means no limit) - if limit > 0 { - multiSelect.Limit(limit) - } - - form := huh.NewForm( - huh.NewGroup(multiSelect), - ).WithAccessible(IsAccessibleMode()) - - if err := form.Run(); err != nil { - return nil, err - } - - return selected, nil -} diff --git a/pkg/console/select_test.go b/pkg/console/select_test.go deleted file mode 100644 index 9e513e8ac3..0000000000 --- a/pkg/console/select_test.go +++ /dev/null @@ -1,87 +0,0 @@ -//go:build !integration - -package console - -import ( - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestPromptSelect(t *testing.T) { - t.Run("function signature", func(t *testing.T) { - // Verify the function exists and has the right signature - _ = PromptSelect - }) - - t.Run("requires options", func(t *testing.T) { - title := "Select an option" - description := "Choose one" - options := []SelectOption{} - - _, err := PromptSelect(title, description, options) 
- require.Error(t, err, "Should error with no options") - assert.Contains(t, err.Error(), "no options", "Error should mention missing options") - }) - - t.Run("validates parameters with options", func(t *testing.T) { - title := "Select an option" - description := "Choose one" - options := []SelectOption{ - {Label: "Option 1", Value: "opt1"}, - {Label: "Option 2", Value: "opt2"}, - } - - _, err := PromptSelect(title, description, options) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) -} - -func TestPromptMultiSelect(t *testing.T) { - t.Run("function signature", func(t *testing.T) { - // Verify the function exists and has the right signature - _ = PromptMultiSelect - }) - - t.Run("requires options", func(t *testing.T) { - title := "Select options" - description := "Choose multiple" - options := []SelectOption{} - limit := 0 - - _, err := PromptMultiSelect(title, description, options, limit) - require.Error(t, err, "Should error with no options") - assert.Contains(t, err.Error(), "no options", "Error should mention missing options") - }) - - t.Run("validates parameters with options", func(t *testing.T) { - title := "Select options" - description := "Choose multiple" - options := []SelectOption{ - {Label: "Option 1", Value: "opt1"}, - {Label: "Option 2", Value: "opt2"}, - {Label: "Option 3", Value: "opt3"}, - } - limit := 10 - - _, err := PromptMultiSelect(title, description, options, limit) - // Will error in test environment (no TTY), but that's expected - require.Error(t, err, "Should error when not in TTY") - assert.Contains(t, err.Error(), "not a TTY", "Error should mention TTY") - }) -} - -func TestSelectOption(t *testing.T) { - t.Run("struct creation", func(t *testing.T) { - opt := SelectOption{ - Label: "Test Label", - Value: "test-value", - } - - assert.Equal(t, "Test Label", opt.Label, "Label should 
match") - assert.Equal(t, "test-value", opt.Value, "Value should match") - }) -} diff --git a/pkg/logger/error_formatting.go b/pkg/logger/error_formatting.go deleted file mode 100644 index 6ba2d55086..0000000000 --- a/pkg/logger/error_formatting.go +++ /dev/null @@ -1,47 +0,0 @@ -package logger - -import ( - "regexp" - "strings" -) - -// Pre-compiled regexes for performance (avoid recompiling in hot paths). -var ( - // Timestamp patterns for log cleanup - // Pattern 1: ISO 8601 with T or space separator (e.g., "2024-01-01T12:00:00.123Z " or "2024-01-01 12:00:00 "). - timestampPattern1 = regexp.MustCompile(`^\d{4}-\d{2}-\d{2}[T\s]\d{2}:\d{2}:\d{2}(\.\d+)?([+-]\d{2}:\d{2}|Z)?\s*`) - // Pattern 2: Bracketed date-time (e.g., "[2024-01-01 12:00:00] "). - timestampPattern2 = regexp.MustCompile(`^\[\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\]\s*`) - // Pattern 3: Bracketed time only (e.g., "[12:00:00] "). - timestampPattern3 = regexp.MustCompile(`^\[\d{2}:\d{2}:\d{2}\]\s+`) - // Pattern 4: Time only with optional milliseconds (e.g., "12:00:00.123 "). - timestampPattern4 = regexp.MustCompile(`^\d{2}:\d{2}:\d{2}(\.\d+)?\s+`) - - // Log level pattern for message cleanup (case-insensitive). - logLevelPattern = regexp.MustCompile(`(?i)^\[?(ERROR|WARNING|WARN|INFO|DEBUG)\]?\s*[:-]?\s*`) -) - -// ExtractErrorMessage extracts a clean error message from a log line. -// It removes timestamps, log level prefixes, and other common noise. -// If the message is longer than 200 characters, it will be truncated. 
-func ExtractErrorMessage(line string) string { - // Remove common timestamp patterns using pre-compiled regexes - cleanedLine := line - cleanedLine = timestampPattern1.ReplaceAllString(cleanedLine, "") - cleanedLine = timestampPattern2.ReplaceAllString(cleanedLine, "") - cleanedLine = timestampPattern3.ReplaceAllString(cleanedLine, "") - cleanedLine = timestampPattern4.ReplaceAllString(cleanedLine, "") - - // Remove common log level prefixes using pre-compiled regex - cleanedLine = logLevelPattern.ReplaceAllString(cleanedLine, "") - - // Trim whitespace - cleanedLine = strings.TrimSpace(cleanedLine) - - // If the line is too long (>200 chars), truncate it - if len(cleanedLine) > 200 { - cleanedLine = cleanedLine[:197] + "..." - } - - return cleanedLine -} diff --git a/pkg/logger/error_formatting_test.go b/pkg/logger/error_formatting_test.go deleted file mode 100644 index c07856146a..0000000000 --- a/pkg/logger/error_formatting_test.go +++ /dev/null @@ -1,177 +0,0 @@ -//go:build !integration - -package logger - -import ( - "strings" - "testing" -) - -func TestExtractErrorMessage(t *testing.T) { - tests := []struct { - name string - input string - expected string - }{ - { - name: "ISO 8601 timestamp with T separator and Z", - input: "2024-01-01T12:00:00.123Z Error: connection failed", - expected: "connection failed", - }, - { - name: "ISO 8601 timestamp with T separator and timezone offset", - input: "2024-01-01T12:00:00.123+00:00 Error: connection failed", - expected: "connection failed", - }, - { - name: "Date-time with space separator", - input: "2024-01-01 12:00:00 Error: connection failed", - expected: "connection failed", - }, - { - name: "Date-time with space separator and milliseconds", - input: "2024-01-01 12:00:00.456 Error: connection failed", - expected: "connection failed", - }, - { - name: "Bracketed date-time", - input: "[2024-01-01 12:00:00] Error: connection failed", - expected: "connection failed", - }, - { - name: "Bracketed time only", - input: 
"[12:00:00] Error: connection failed", - expected: "connection failed", - }, - { - name: "Time only with milliseconds", - input: "12:00:00.123 Error: connection failed", - expected: "connection failed", - }, - { - name: "Time only without milliseconds", - input: "12:00:00 Error: connection failed", - expected: "connection failed", - }, - { - name: "ERROR prefix with colon", - input: "ERROR: connection failed", - expected: "connection failed", - }, - { - name: "ERROR prefix without colon", - input: "ERROR connection failed", - expected: "connection failed", - }, - { - name: "Bracketed ERROR prefix", - input: "[ERROR] connection failed", - expected: "connection failed", - }, - { - name: "Bracketed ERROR prefix with colon", - input: "[ERROR]: connection failed", - expected: "connection failed", - }, - { - name: "WARNING prefix", - input: "WARNING: disk space low", - expected: "disk space low", - }, - { - name: "WARN prefix", - input: "WARN: deprecated API used", - expected: "deprecated API used", - }, - { - name: "INFO prefix", - input: "INFO: service started", - expected: "service started", - }, - { - name: "DEBUG prefix", - input: "DEBUG: processing request", - expected: "processing request", - }, - { - name: "Case insensitive log level", - input: "error: connection failed", - expected: "connection failed", - }, - { - name: "Combined timestamp and log level", - input: "2024-01-01 12:00:00 ERROR: connection failed", - expected: "connection failed", - }, - { - name: "Combined ISO timestamp with Z and log level", - input: "2024-01-01T12:00:00Z ERROR: connection failed", - expected: "connection failed", - }, - { - name: "Multiple timestamps - only first is removed", - input: "[12:00:00] 2024-01-01 12:00:00 ERROR: connection failed", - expected: "2024-01-01 12:00:00 ERROR: connection failed", - }, - { - name: "No timestamp or log level", - input: "connection failed", - expected: "connection failed", - }, - { - name: "Empty string", - input: "", - expected: "", - }, - { - 
name: "Only whitespace", - input: " ", - expected: "", - }, - { - name: "Truncation at 200 chars", - input: "ERROR: " + strings.Repeat("a", 250), - expected: strings.Repeat("a", 197) + "...", - }, - { - name: "Exactly 200 chars - no truncation", - input: "ERROR: " + strings.Repeat("a", 193), - expected: strings.Repeat("a", 193), - }, - { - name: "Real world example from metrics.go", - input: "2024-01-15 14:30:22 ERROR: Failed to connect to database", - expected: "Failed to connect to database", - }, - { - name: "Real world example from copilot_agent.go", - input: "2024-01-15T14:30:22.123Z ERROR: API request failed", - expected: "API request failed", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := ExtractErrorMessage(tt.input) - if result != tt.expected { - t.Errorf("ExtractErrorMessage(%q) = %q, want %q", tt.input, result, tt.expected) - } - }) - } -} - -func BenchmarkExtractErrorMessage(b *testing.B) { - testLine := "2024-01-01T12:00:00.123Z ERROR: connection failed to remote server" - - for b.Loop() { - ExtractErrorMessage(testLine) - } -} - -func BenchmarkExtractErrorMessageLong(b *testing.B) { - testLine := "2024-01-01T12:00:00.123Z ERROR: " + strings.Repeat("very long error message ", 20) - - for b.Loop() { - ExtractErrorMessage(testLine) - } -} diff --git a/pkg/parser/ansi_strip.go b/pkg/parser/ansi_strip.go deleted file mode 100644 index a8d911ca9f..0000000000 --- a/pkg/parser/ansi_strip.go +++ /dev/null @@ -1,12 +0,0 @@ -package parser - -import ( - "github.com/github/gh-aw/pkg/stringutil" -) - -// StripANSI removes ANSI escape codes from a string. -// This is a thin wrapper around stringutil.StripANSI for backward compatibility. -// The comprehensive implementation lives in pkg/stringutil/ansi.go. 
-func StripANSI(s string) string { - return stringutil.StripANSI(s) -} diff --git a/pkg/parser/frontmatter_merge_test.go b/pkg/parser/frontmatter_merge_test.go index e0c23c3ef5..7af8849902 100644 --- a/pkg/parser/frontmatter_merge_test.go +++ b/pkg/parser/frontmatter_merge_test.go @@ -259,5 +259,3 @@ func TestMergeToolsFromJSON(t *testing.T) { }) } } - -// Test StripANSI function diff --git a/pkg/parser/frontmatter_utils_test.go b/pkg/parser/frontmatter_utils_test.go index c0142145d9..b9dba2eb71 100644 --- a/pkg/parser/frontmatter_utils_test.go +++ b/pkg/parser/frontmatter_utils_test.go @@ -6,7 +6,6 @@ import ( "encoding/json" "os" "path/filepath" - "strings" "testing" "github.com/github/gh-aw/pkg/testutil" @@ -385,220 +384,8 @@ name: Test } // Test mergeToolsFromJSON function -func TestStripANSI(t *testing.T) { - tests := []struct { - name string - input string - expected string - }{ - { - name: "empty string", - input: "", - expected: "", - }, - { - name: "plain text without ANSI", - input: "Hello World", - expected: "Hello World", - }, - { - name: "simple CSI color sequence", - input: "\x1b[31mRed Text\x1b[0m", - expected: "Red Text", - }, - { - name: "multiple CSI sequences", - input: "\x1b[1m\x1b[31mBold Red\x1b[0m\x1b[32mGreen\x1b[0m", - expected: "Bold RedGreen", - }, - { - name: "CSI cursor movement", - input: "Line 1\x1b[2;1HLine 2", - expected: "Line 1Line 2", - }, - { - name: "CSI erase sequences", - input: "Text\x1b[2JCleared\x1b[K", - expected: "TextCleared", - }, - { - name: "OSC sequence with BEL terminator", - input: "\x1b]0;Window Title\x07Content", - expected: "Content", - }, - { - name: "OSC sequence with ST terminator", - input: "\x1b]2;Terminal Title\x1b\\More content", - expected: "More content", - }, - { - name: "character set selection G0", - input: "\x1b(0Hello\x1b(B", - expected: "Hello", - }, - { - name: "character set selection G1", - input: "\x1b)0World\x1b)B", - expected: "World", - }, - { - name: "keypad mode sequences", - input: 
"\x1b=Keypad\x1b>Normal", - expected: "KeypadNormal", - }, - { - name: "reset sequence", - input: "Before\x1bcAfter", - expected: "BeforeAfter", - }, - { - name: "save and restore cursor", - input: "Start\x1b7Middle\x1b8End", - expected: "StartMiddleEnd", - }, - { - name: "index and reverse index", - input: "Text\x1bDDown\x1bMUp", - expected: "TextDownUp", - }, - { - name: "next line and horizontal tab set", - input: "Line\x1bENext\x1bHTab", - expected: "LineNextTab", - }, - { - name: "complex CSI with parameters", - input: "\x1b[38;5;196mBright Red\x1b[48;5;21mBlue BG\x1b[0m", - expected: "Bright RedBlue BG", - }, - { - name: "CSI with semicolon parameters", - input: "\x1b[1;31;42mBold red on green\x1b[0m", - expected: "Bold red on green", - }, - { - name: "malformed escape at end", - input: "Text\x1b", - expected: "Text", - }, - { - name: "malformed CSI at end", - input: "Text\x1b[31", - expected: "Text", - }, - { - name: "malformed OSC at end", - input: "Text\x1b]0;Title", - expected: "Text", - }, - { - name: "escape followed by invalid character", - input: "Text\x1bXInvalid", - expected: "TextInvalid", - }, - { - name: "consecutive escapes", - input: "\x1b[31m\x1b[1m\x1b[4mText\x1b[0m", - expected: "Text", - }, - { - name: "mixed content with newlines", - input: "Line 1\n\x1b[31mRed Line 2\x1b[0m\nLine 3", - expected: "Line 1\nRed Line 2\nLine 3", - }, - { - name: "common terminal output", - input: "\x1b[?25l\x1b[2J\x1b[H\x1b[32m✓\x1b[0m Success", - expected: "✓ Success", - }, - { - name: "git diff style colors", - input: "\x1b[32m+Added line\x1b[0m\n\x1b[31m-Removed line\x1b[0m", - expected: "+Added line\n-Removed line", - }, - { - name: "unicode content with ANSI", - input: "\x1b[33m🎉 Success! 测试\x1b[0m", - expected: "🎉 Success! 
测试", - }, - { - name: "very long CSI sequence", - input: "\x1b[1;2;3;4;5;6;7;8;9;10;11;12;13;14;15mLong params\x1b[0m", - expected: "Long params", - }, - { - name: "CSI with question mark private parameter", - input: "\x1b[?25hCursor visible\x1b[?25l", - expected: "Cursor visible", - }, - { - name: "CSI with greater than private parameter", - input: "\x1b[>0cDevice attributes\x1b[>1c", - expected: "Device attributes", - }, - { - name: "all final CSI characters test", - input: "\x1b[@\x1b[A\x1b[B\x1b[C\x1b[D\x1b[E\x1b[F\x1b[G\x1b[H\x1b[I\x1b[J\x1b[K\x1b[L\x1b[M\x1b[N\x1b[O\x1b[P\x1b[Q\x1b[R\x1b[S\x1b[T\x1b[U\x1b[V\x1b[W\x1b[X\x1b[Y\x1b[Z\x1b[[\x1b[\\\x1b[]\x1b[^\x1b[_\x1b[`\x1b[a\x1b[b\x1b[c\x1b[d\x1b[e\x1b[f\x1b[g\x1b[h\x1b[i\x1b[j\x1b[k\x1b[l\x1b[m\x1b[n\x1b[o\x1b[p\x1b[q\x1b[r\x1b[s\x1b[t\x1b[u\x1b[v\x1b[w\x1b[x\x1b[y\x1b[z\x1b[{\x1b[|\x1b[}\x1b[~Text", - expected: "Text", - }, - { - name: "CSI with invalid final character", - input: "Before\x1b[31Text after", - expected: "Beforeext after", - }, - { - name: "real world lipgloss output", - input: "\x1b[1;38;2;80;250;123m✓\x1b[0;38;2;248;248;242m Success message\x1b[0m", - expected: "✓ Success message", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := StripANSI(tt.input) - if result != tt.expected { - t.Errorf("StripANSI(%q) = %q, want %q", tt.input, result, tt.expected) - } - }) - } -} // Benchmark StripANSI function for performance -func BenchmarkStripANSI(b *testing.B) { - testCases := []struct { - name string - input string - }{ - { - name: "plain text", - input: "This is plain text without any ANSI codes", - }, - { - name: "simple color", - input: "\x1b[31mRed text\x1b[0m", - }, - { - name: "complex formatting", - input: "\x1b[1;38;2;255;0;0m\x1b[48;2;0;255;0mComplex formatting\x1b[0m", - }, - { - name: "mixed content", - input: "Normal \x1b[31mred\x1b[0m normal \x1b[32mgreen\x1b[0m normal \x1b[34mblue\x1b[0m text", - }, - { - name: "long text with ANSI", - input: 
strings.Repeat("\x1b[31mRed \x1b[32mGreen \x1b[34mBlue\x1b[0m ", 100), - }, - } - - for _, tc := range testCases { - b.Run(tc.name, func(b *testing.B) { - for range b.N { - StripANSI(tc.input) - } - }) - } -} func TestIsWorkflowSpec(t *testing.T) { tests := []struct { diff --git a/pkg/parser/virtual_fs_test_helpers.go b/pkg/parser/virtual_fs_test_helpers.go deleted file mode 100644 index 72e1095b67..0000000000 --- a/pkg/parser/virtual_fs_test_helpers.go +++ /dev/null @@ -1,12 +0,0 @@ -package parser - -// SetReadFileFuncForTest overrides the file reading function for testing. -// This enables testing virtual filesystem behavior in native (non-wasm) builds. -// Returns a cleanup function that restores the original. -func SetReadFileFuncForTest(fn func(string) ([]byte, error)) func() { - original := readFileFunc - readFileFunc = fn - return func() { - readFileFunc = original - } -} diff --git a/pkg/workflow/metrics_test.go b/pkg/workflow/metrics_test.go index 8ad1a4ad57..72d384997a 100644 --- a/pkg/workflow/metrics_test.go +++ b/pkg/workflow/metrics_test.go @@ -5,8 +5,6 @@ package workflow import ( "encoding/json" "testing" - - "github.com/github/gh-aw/pkg/logger" ) func TestExtractFirstMatch(t *testing.T) { @@ -668,94 +666,6 @@ func TestPrettifyToolName(t *testing.T) { } } -func TestExtractErrorMessage(t *testing.T) { - tests := []struct { - name string - input string - expected string - }{ - { - name: "Simple error message", - input: "Failed to connect to server", - expected: "Failed to connect to server", - }, - { - name: "Error with timestamp prefix", - input: "2024-01-01 12:00:00 Connection timeout", - expected: "Connection timeout", - }, - { - name: "Error with timestamp and milliseconds", - input: "2024-01-01 12:00:00.123 Connection refused", - expected: "Connection refused", - }, - { - name: "Error with bracket timestamp", - input: "[12:00:00] Permission denied", - expected: "Permission denied", - }, - { - name: "Error with ERROR prefix", - input: "ERROR: 
File not found", - expected: "File not found", - }, - { - name: "Error with [ERROR] prefix", - input: "[ERROR] Invalid configuration", - expected: "Invalid configuration", - }, - { - name: "Warning with WARN prefix", - input: "WARN - Deprecated API usage", - expected: "Deprecated API usage", - }, - { - name: "Error with WARNING prefix", - input: "WARNING: Resource limit reached", - expected: "Resource limit reached", - }, - { - name: "Timestamp and log level combined", - input: "2024-01-01 12:00:00 ERROR: Failed to initialize", - expected: "Failed to initialize", - }, - { - name: "Very long message truncation", - input: "This is a very long error message that exceeds the maximum character limit and should be truncated to prevent overly verbose output in the audit report which could make it harder to read and understand the key issues", - expected: "This is a very long error message that exceeds the maximum character limit and should be truncated to prevent overly verbose output in the audit report which could make it harder to read and unders...", - }, - { - name: "Empty string", - input: "", - expected: "", - }, - { - name: "Only whitespace", - input: " \t ", - expected: "", - }, - { - name: "Case insensitive ERROR prefix", - input: "error: Connection failed", - expected: "Connection failed", - }, - { - name: "Mixed case WARNING prefix", - input: "Warning: Low memory", - expected: "Low memory", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := logger.ExtractErrorMessage(tt.input) - if result != tt.expected { - t.Errorf("logger.ExtractErrorMessage(%q) = %q, want %q", tt.input, result, tt.expected) - } - }) - } -} - func TestFinalizeToolMetrics(t *testing.T) { tests := []struct { name string From e80b9a1761a857267b9cd41ed16293ca50adaacd Mon Sep 17 00:00:00 2001 From: Don Syme Date: Sat, 28 Feb 2026 03:27:34 +0000 Subject: [PATCH 3/7] Remove dead code batch 2: workflow bundler + dead files (~7850 lines deleted, 53 files) - 
Delete entire bundler subsystem (5 source + 14 test files) - Delete compiler_string_api.go OOPS - restored after discovering WASM uses it - Delete other dead workflow files: copilot_participant_steps, dependency_tracker, env_mirror, markdown_unfencing, prompt_step, safe_output_builder, sh.go - Delete stringutil/paths.go (callers were in bundler) - Rescue live constants/embeds from sh.go to prompt_constants.go - Rescue SetupActionDestination const to setup_action_paths.go - Rewrite script_registry.go to minimal 3-function form (removes RuntimeMode) - Remove dead functions from add_labels.go, create_issue.go, create_pull_request.go - Remove generateStaticPromptStep from unified_prompt_step.go - Fix all test compilation errors (remove tests for deleted functionality) - Restore compiler_string_api.go: used by WASM binary (build constraint hides from deadcode) - Update DEADCODE.md: add WASM build constraint warning --- DEADCODE.md | 5 +- pkg/stringutil/paths.go | 42 -- pkg/stringutil/paths_test.go | 129 ---- pkg/workflow/add_labels.go | 32 - pkg/workflow/bundler.go | 589 ------------------ pkg/workflow/bundler_deduplicate_test.go | 44 -- .../bundler_duplicate_modules_test.go | 65 -- pkg/workflow/bundler_file_mode.go | 529 ---------------- pkg/workflow/bundler_file_mode_test.go | 255 -------- pkg/workflow/bundler_fs_undefined_test.go | 13 - pkg/workflow/bundler_function_scope_test.go | 13 - pkg/workflow/bundler_indentation_test.go | 58 -- pkg/workflow/bundler_inline_test.go | 59 -- pkg/workflow/bundler_integration_test.go | 55 -- pkg/workflow/bundler_quotes_test.go | 103 --- pkg/workflow/bundler_runtime_mode_test.go | 79 --- pkg/workflow/bundler_runtime_validation.go | 176 ------ pkg/workflow/bundler_safety_validation.go | 223 ------- pkg/workflow/bundler_scope_mixing_test.go | 13 - pkg/workflow/bundler_scope_narrowing_test.go | 13 - pkg/workflow/bundler_script_validation.go | 149 ----- .../bundler_script_validation_test.go | 244 -------- pkg/workflow/bundler_test.go | 
79 --- pkg/workflow/compiler_custom_actions_test.go | 192 ------ pkg/workflow/copilot_participant_steps.go | 153 ----- .../copilot_participant_steps_test.go | 49 -- pkg/workflow/create_issue.go | 149 ----- pkg/workflow/create_pull_request.go | 207 ------ .../custom_action_copilot_token_test.go | 51 -- pkg/workflow/dependency_tracker.go | 121 ---- pkg/workflow/dependency_tracker_test.go | 185 ------ pkg/workflow/env_mirror.go | 137 ---- pkg/workflow/env_mirror_test.go | 221 ------- pkg/workflow/inline_imports_test.go | 254 -------- pkg/workflow/markdown_unfencing.go | 141 ----- pkg/workflow/markdown_unfencing_test.go | 277 -------- pkg/workflow/prompt_constants.go | 28 + pkg/workflow/prompt_step.go | 64 -- pkg/workflow/prompt_step_helper_test.go | 138 ---- pkg/workflow/prompt_step_test.go | 146 ----- pkg/workflow/safe_output_builder.go | 202 ------ pkg/workflow/safe_outputs_app_import_test.go | 70 --- pkg/workflow/safe_outputs_app_test.go | 149 ----- pkg/workflow/safe_outputs_env_test.go | 196 ------ pkg/workflow/safe_outputs_messages_test.go | 52 -- pkg/workflow/script_registry.go | 323 +--------- pkg/workflow/script_registry_test.go | 298 --------- pkg/workflow/setup_action_paths.go | 5 + pkg/workflow/sh.go | 152 ----- pkg/workflow/sh_integration_test.go | 371 ----------- pkg/workflow/sh_test.go | 309 --------- pkg/workflow/staged_add_issue_labels_test.go | 73 --- pkg/workflow/staged_create_issue_test.go | 88 --- pkg/workflow/staged_pull_request_test.go | 88 --- pkg/workflow/unified_prompt_step.go | 39 -- 55 files changed, 39 insertions(+), 7856 deletions(-) delete mode 100644 pkg/stringutil/paths.go delete mode 100644 pkg/stringutil/paths_test.go delete mode 100644 pkg/workflow/bundler.go delete mode 100644 pkg/workflow/bundler_deduplicate_test.go delete mode 100644 pkg/workflow/bundler_duplicate_modules_test.go delete mode 100644 pkg/workflow/bundler_file_mode.go delete mode 100644 pkg/workflow/bundler_file_mode_test.go delete mode 100644 
pkg/workflow/bundler_fs_undefined_test.go delete mode 100644 pkg/workflow/bundler_function_scope_test.go delete mode 100644 pkg/workflow/bundler_indentation_test.go delete mode 100644 pkg/workflow/bundler_inline_test.go delete mode 100644 pkg/workflow/bundler_integration_test.go delete mode 100644 pkg/workflow/bundler_quotes_test.go delete mode 100644 pkg/workflow/bundler_runtime_mode_test.go delete mode 100644 pkg/workflow/bundler_runtime_validation.go delete mode 100644 pkg/workflow/bundler_safety_validation.go delete mode 100644 pkg/workflow/bundler_scope_mixing_test.go delete mode 100644 pkg/workflow/bundler_scope_narrowing_test.go delete mode 100644 pkg/workflow/bundler_script_validation.go delete mode 100644 pkg/workflow/bundler_script_validation_test.go delete mode 100644 pkg/workflow/bundler_test.go delete mode 100644 pkg/workflow/copilot_participant_steps.go delete mode 100644 pkg/workflow/copilot_participant_steps_test.go delete mode 100644 pkg/workflow/custom_action_copilot_token_test.go delete mode 100644 pkg/workflow/dependency_tracker.go delete mode 100644 pkg/workflow/dependency_tracker_test.go delete mode 100644 pkg/workflow/env_mirror.go delete mode 100644 pkg/workflow/env_mirror_test.go delete mode 100644 pkg/workflow/markdown_unfencing.go delete mode 100644 pkg/workflow/markdown_unfencing_test.go create mode 100644 pkg/workflow/prompt_constants.go delete mode 100644 pkg/workflow/prompt_step.go delete mode 100644 pkg/workflow/prompt_step_helper_test.go delete mode 100644 pkg/workflow/prompt_step_test.go delete mode 100644 pkg/workflow/safe_output_builder.go delete mode 100644 pkg/workflow/safe_outputs_env_test.go delete mode 100644 pkg/workflow/script_registry_test.go create mode 100644 pkg/workflow/setup_action_paths.go delete mode 100644 pkg/workflow/sh.go delete mode 100644 pkg/workflow/sh_integration_test.go delete mode 100644 pkg/workflow/sh_test.go delete mode 100644 pkg/workflow/staged_add_issue_labels_test.go delete mode 100644 
pkg/workflow/staged_create_issue_test.go delete mode 100644 pkg/workflow/staged_pull_request_test.go diff --git a/DEADCODE.md b/DEADCODE.md index 4876555fc5..d313889ad7 100644 --- a/DEADCODE.md +++ b/DEADCODE.md @@ -28,6 +28,7 @@ It does NOT report unreachable constants, variables, or types — only functions **Important rules:** - **Always include `./internal/tools/...` in the deadcode command** +- **Beware `//go:build js && wasm` files** — `cmd/gh-aw-wasm/` uses functions like `ParseWorkflowString` and `CompileToYAML` that deadcode can't see because the WASM binary can't be compiled without `GOOS=js GOARCH=wasm`. Always check `cmd/gh-aw-wasm/main.go` before deleting functions from `pkg/workflow/`. - Run `go build ./...` after every batch - Run `go vet ./...` to catch test compilation errors (cheaper than `go test`) - Run `go test -tags=integration ./pkg/affected/...` to spot-check @@ -93,8 +94,8 @@ These are the JS bundler subsystem — entirely unused. - [ ] `pkg/workflow/bundler_script_validation.go` (2/2 dead) ### Group 1E: Workflow other fully dead files (9 files) -- [ ] `pkg/workflow/compiler_string_api.go` (2/2 dead) → delete `compiler_string_api_test.go` -- [ ] `pkg/workflow/compiler_test_helpers.go` (3/3 dead) — test helper, check usage +- [x] `pkg/workflow/compiler_string_api.go` ~~(2/2 dead) → delete~~ **⚠️ DO NOT DELETE — used by `cmd/gh-aw-wasm/` (WASM binary has `//go:build js && wasm` constraint invisible to deadcode)** +- [x] `pkg/workflow/compiler_test_helpers.go` (3/3 dead) — test helper, **DO NOT DELETE** (used by 15 test files) - [ ] `pkg/workflow/copilot_participant_steps.go` (3/3 dead) - [ ] `pkg/workflow/dependency_tracker.go` (2/2 dead) - [ ] `pkg/workflow/env_mirror.go` (2/2 dead) diff --git a/pkg/stringutil/paths.go b/pkg/stringutil/paths.go deleted file mode 100644 index e63f0bc176..0000000000 --- a/pkg/stringutil/paths.go +++ /dev/null @@ -1,42 +0,0 @@ -package stringutil - -import "strings" - -// NormalizePath normalizes a file path by 
resolving . and .. components. -// It splits the path on "/" and processes each component: -// - Empty parts and "." are skipped -// - ".." moves up one directory (if possible) -// - Other parts are added to the result -// -// This is useful for resolving relative paths in bundler operations and -// other file path manipulations where . and .. components need to be resolved. -// -// Examples: -// -// NormalizePath("a/b/../c") // returns "a/c" -// NormalizePath("./a/./b") // returns "a/b" -// NormalizePath("a/b/../../c") // returns "c" -// NormalizePath("../a/b") // returns "a/b" (leading .. is ignored) -// NormalizePath("a//b") // returns "a/b" (empty parts removed) -func NormalizePath(path string) string { - // Split path into parts - parts := strings.Split(path, "/") - var result []string - - for _, part := range parts { - if part == "" || part == "." { - // Skip empty parts and current directory references - continue - } - if part == ".." { - // Go up one directory - if len(result) > 0 { - result = result[:len(result)-1] - } - } else { - result = append(result, part) - } - } - - return strings.Join(result, "/") -} diff --git a/pkg/stringutil/paths_test.go b/pkg/stringutil/paths_test.go deleted file mode 100644 index caf718d464..0000000000 --- a/pkg/stringutil/paths_test.go +++ /dev/null @@ -1,129 +0,0 @@ -//go:build !integration - -package stringutil - -import "testing" - -func TestNormalizePath(t *testing.T) { - tests := []struct { - name string - path string - expected string - }{ - { - name: "simple path", - path: "a/b/c", - expected: "a/b/c", - }, - { - name: "path with single dot", - path: "a/./b", - expected: "a/b", - }, - { - name: "path with multiple dots", - path: "./a/./b/./c", - expected: "a/b/c", - }, - { - name: "path with double dot", - path: "a/b/../c", - expected: "a/c", - }, - { - name: "path with multiple double dots", - path: "a/b/../../c", - expected: "c", - }, - { - name: "path with leading double dot", - path: "../a/b", - expected: "a/b", - 
}, - { - name: "path with trailing double dot", - path: "a/b/..", - expected: "a", - }, - { - name: "path with empty parts", - path: "a//b///c", - expected: "a/b/c", - }, - { - name: "complex path", - path: "a/./b/../c/d/../../e", - expected: "a/e", - }, - { - name: "empty path", - path: "", - expected: "", - }, - { - name: "single dot", - path: ".", - expected: "", - }, - { - name: "double dot only", - path: "..", - expected: "", - }, - { - name: "multiple double dots beyond root", - path: "../../a", - expected: "a", - }, - { - name: "mixed slashes and dots", - path: "a/b/./c/../d", - expected: "a/b/d", - }, - { - name: "path with only dots and slashes", - path: "./../.", - expected: "", - }, - { - name: "real-world bundler path", - path: "./lib/utils/../../helpers/common", - expected: "helpers/common", - }, - { - name: "deeply nested path with parent refs", - path: "a/b/c/d/../../../e/f", - expected: "a/e/f", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := NormalizePath(tt.path) - if result != tt.expected { - t.Errorf("NormalizePath(%q) = %q; want %q", tt.path, result, tt.expected) - } - }) - } -} - -func BenchmarkNormalizePath(b *testing.B) { - path := "a/b/c/./d/../e/f/../../g" - for b.Loop() { - NormalizePath(path) - } -} - -func BenchmarkNormalizePath_Simple(b *testing.B) { - path := "a/b/c/d/e" - for b.Loop() { - NormalizePath(path) - } -} - -func BenchmarkNormalizePath_Complex(b *testing.B) { - path := "./a/./b/../c/d/../../e/f/g/h/../../../i" - for b.Loop() { - NormalizePath(path) - } -} diff --git a/pkg/workflow/add_labels.go b/pkg/workflow/add_labels.go index 113a261b43..cb2adcb654 100644 --- a/pkg/workflow/add_labels.go +++ b/pkg/workflow/add_labels.go @@ -1,8 +1,6 @@ package workflow import ( - "errors" - "github.com/github/gh-aw/pkg/logger" ) @@ -38,33 +36,3 @@ func (c *Compiler) parseAddLabelsConfig(outputMap map[string]any) *AddLabelsConf return &config } - -// buildAddLabelsJob creates the add_labels job 
-func (c *Compiler) buildAddLabelsJob(data *WorkflowData, mainJobName string) (*Job, error) { - addLabelsLog.Printf("Building add_labels job for workflow: %s, main_job: %s", data.Name, mainJobName) - - if data.SafeOutputs == nil || data.SafeOutputs.AddLabels == nil { - return nil, errors.New("safe-outputs configuration is required") - } - - cfg := data.SafeOutputs.AddLabels - - // Build list job config - listJobConfig := ListJobConfig{ - SafeOutputTargetConfig: cfg.SafeOutputTargetConfig, - Allowed: cfg.Allowed, - Blocked: cfg.Blocked, - } - - // Use shared builder for list-based safe-output jobs - return c.BuildListSafeOutputJob(data, mainJobName, listJobConfig, cfg.BaseSafeOutputConfig, ListJobBuilderConfig{ - JobName: "add_labels", - StepName: "Add Labels", - StepID: "add_labels", - EnvPrefix: "GH_AW_LABELS", - OutputName: "labels_added", - Script: getAddLabelsScript(), - Permissions: NewPermissionsContentsReadIssuesWritePRWrite(), - DefaultMax: 3, - }) -} diff --git a/pkg/workflow/bundler.go b/pkg/workflow/bundler.go deleted file mode 100644 index 8c2f9c84f3..0000000000 --- a/pkg/workflow/bundler.go +++ /dev/null @@ -1,589 +0,0 @@ -// This file provides JavaScript bundling for agentic workflows. -// -// # JavaScript Bundler with Runtime Mode Support -// -// The bundler supports two runtime environments: -// -// 1. GitHub Script Mode (RuntimeModeGitHubScript) -// - Used for JavaScript embedded in GitHub Actions YAML via actions/github-script -// - No module system available (no require() or module.exports at runtime) -// - All local requires must be bundled inline -// - All module.exports statements are removed -// - Validation ensures no local requires or module references remain -// -// 2. 
Node.js Mode (RuntimeModeNodeJS) -// - Used for standalone Node.js scripts that run on filesystem -// - Full CommonJS module system available -// - module.exports statements are preserved -// - Local requires can remain if modules are available on filesystem -// - Less aggressive bundling and validation -// -// # Usage -// -// For GitHub Script mode (default for backward compatibility): -// -// bundled, err := BundleJavaScriptFromSources(mainContent, sources, "") -// // or explicitly: -// bundled, err := BundleJavaScriptWithMode(mainContent, sources, "", RuntimeModeGitHubScript) -// -// For Node.js mode: -// -// bundled, err := BundleJavaScriptWithMode(mainContent, sources, "", RuntimeModeNodeJS) -// -// # Guardrails and Validation -// -// The bundler includes several guardrails based on runtime mode: -// -// - validateNoLocalRequires: Ensures all local requires (./... or ../...) are bundled (GitHub Script mode only) -// - validateNoModuleReferences: Ensures no module.exports or exports.* remain (GitHub Script mode only) -// - removeExports: Strips module.exports from bundled code (GitHub Script mode only) -// -// These validations prevent runtime errors when JavaScript is executed in environments -// without a module system. 
- -package workflow - -import ( - "fmt" - "path/filepath" - "regexp" - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var bundlerLog = logger.New("workflow:bundler") - -// RuntimeMode represents the JavaScript runtime environment -type RuntimeMode int - -const ( - // RuntimeModeGitHubScript indicates JavaScript running in actions/github-script - // In this mode: - // - All local requires must be bundled (no module system) - // - module.exports statements must be removed - // - No module object references allowed - RuntimeModeGitHubScript RuntimeMode = iota - - // RuntimeModeNodeJS indicates JavaScript running as a Node.js script - // In this mode: - // - module.exports can be preserved - // - Local requires can be kept if modules are available on filesystem - // - Full Node.js module system is available - RuntimeModeNodeJS -) - -// String returns a string representation of the RuntimeMode -func (r RuntimeMode) String() string { - switch r { - case RuntimeModeGitHubScript: - return "github-script" - case RuntimeModeNodeJS: - return "nodejs" - default: - return "unknown" - } -} - -// BundleJavaScriptFromSources bundles JavaScript from in-memory sources -// sources is a map where keys are file paths (e.g., "sanitize.cjs") and values are the content -// mainContent is the main JavaScript content that may contain require() calls -// basePath is the base directory path for resolving relative imports (e.g., "js") -// -// DEPRECATED: Use BundleJavaScriptWithMode instead to specify runtime mode explicitly. -// This function defaults to RuntimeModeGitHubScript for backward compatibility. 
-// -// Migration guide: -// - For GitHub Script action (inline in YAML): use BundleJavaScriptWithMode(content, sources, basePath, RuntimeModeGitHubScript) -// - For Node.js scripts (filesystem-based): use BundleJavaScriptWithMode(content, sources, basePath, RuntimeModeNodeJS) -// -// This function will be maintained for backward compatibility but new code should use BundleJavaScriptWithMode. -func BundleJavaScriptFromSources(mainContent string, sources map[string]string, basePath string) (string, error) { - return BundleJavaScriptWithMode(mainContent, sources, basePath, RuntimeModeGitHubScript) -} - -// BundleJavaScriptWithMode bundles JavaScript from in-memory sources with specified runtime mode -// sources is a map where keys are file paths (e.g., "sanitize.cjs") and values are the content -// mainContent is the main JavaScript content that may contain require() calls -// basePath is the base directory path for resolving relative imports (e.g., "js") -// mode specifies the target runtime environment (GitHub script action vs Node.js) -func BundleJavaScriptWithMode(mainContent string, sources map[string]string, basePath string, mode RuntimeMode) (string, error) { - bundlerLog.Printf("Bundling JavaScript: source_count=%d, base_path=%s, main_content_size=%d bytes, runtime_mode=%s", - len(sources), basePath, len(mainContent), mode) - - // Validate that no runtime mode mixing occurs - if err := validateNoRuntimeMixing(mainContent, sources, mode); err != nil { - bundlerLog.Printf("Runtime mode validation failed: %v", err) - return "", err - } - - // Track already processed files to avoid circular dependencies - processed := make(map[string]bool) - - // Bundle the main content recursively - bundled, err := bundleFromSources(mainContent, basePath, sources, processed, mode) - if err != nil { - bundlerLog.Printf("Bundling failed: %v", err) - return "", err - } - - // Deduplicate require statements (keep only the first occurrence) - bundled = deduplicateRequires(bundled) - 
- // Mode-specific processing and validations - switch mode { - case RuntimeModeGitHubScript: - // GitHub Script mode: remove module.exports from final output - bundled = removeExports(bundled) - - // Inject await main() call for inline execution - // This allows scripts to export main when used with require(), but still execute - // when inlined directly in github-script action - if strings.Contains(bundled, "async function main()") || strings.Contains(bundled, "async function main ()") { - bundled = bundled + "\nawait main();\n" - bundlerLog.Print("Injected 'await main()' call for GitHub Script inline execution") - } - - // Validate all local requires are bundled and module references removed - if err := validateNoLocalRequires(bundled); err != nil { - bundlerLog.Printf("Validation failed: %v", err) - return "", err - } - if err := validateNoModuleReferences(bundled); err != nil { - bundlerLog.Printf("Module reference validation failed: %v", err) - return "", err - } - - case RuntimeModeNodeJS: - // Node.js mode: more permissive, allows module.exports and may allow local requires - // Local requires are OK if modules will be available on filesystem - bundlerLog.Print("Node.js mode: module.exports preserved, local requires allowed") - // Note: We still bundle what we can, but don't fail on remaining requires - } - - // Log size information about the bundled output - lines := strings.Split(bundled, "\n") - var maxLineLength int - for _, line := range lines { - if len(line) > maxLineLength { - maxLineLength = len(line) - } - } - - bundlerLog.Printf("Bundling completed: processed_files=%d, output_size=%d bytes, output_lines=%d, max_line_length=%d chars", - len(processed), len(bundled), len(lines), maxLineLength) - return bundled, nil -} - -// bundleFromSources processes content and recursively bundles its dependencies from the sources map -// The mode parameter controls how module.exports statements are handled -func bundleFromSources(content string, currentPath 
string, sources map[string]string, processed map[string]bool, mode RuntimeMode) (string, error) { - bundlerLog.Printf("Processing file for bundling: current_path=%s, content_size=%d bytes, runtime_mode=%s", currentPath, len(content), mode) - - // Regular expression to match require('./...') or require("./...") - // This matches both single-line and multi-line destructuring: - // const { x } = require("./file.cjs"); - // const { - // x, - // y - // } = require("./file.cjs"); - // Captures the require path where it starts with ./ or ../ - requireRegex := regexp.MustCompile(`(?s)(?:const|let|var)\s+(?:\{[^}]*\}|\w+)\s*=\s*require\(['"](\.\.?/[^'"]+)['"]\);?`) - - // Find all requires and their positions - matches := requireRegex.FindAllStringSubmatchIndex(content, -1) - - if len(matches) == 0 { - bundlerLog.Print("No requires found in content") - // No requires found, return content as-is - return content, nil - } - - bundlerLog.Printf("Found %d require statements to process", len(matches)) - - var result strings.Builder - lastEnd := 0 - - for _, match := range matches { - // match[0], match[1] are the start and end of the full match - // match[2], match[3] are the start and end of the captured group (the path) - matchStart := match[0] - matchEnd := match[1] - pathStart := match[2] - pathEnd := match[3] - - // Write content before this require - result.WriteString(content[lastEnd:matchStart]) - - // Extract the require path - requirePath := content[pathStart:pathEnd] - - // Resolve the full path relative to current path - var fullPath string - if currentPath == "" { - fullPath = requirePath - } else { - fullPath = filepath.Join(currentPath, requirePath) - } - - // Ensure .cjs extension - if !strings.HasSuffix(fullPath, ".cjs") && !strings.HasSuffix(fullPath, ".js") { - fullPath += ".cjs" - } - - // Normalize the path (clean up ./ and ../) - fullPath = filepath.Clean(fullPath) - - // Convert Windows path separators to forward slashes for consistency - fullPath = 
filepath.ToSlash(fullPath) - - // Check if we've already processed this file - if processed[fullPath] { - bundlerLog.Printf("Skipping already processed file: %s", fullPath) - // Skip - already inlined - result.WriteString("// Already inlined: " + requirePath + "\n") - } else { - // Mark as processed - processed[fullPath] = true - - // Look up the required file in sources - requiredContent, ok := sources[fullPath] - if !ok { - bundlerLog.Printf("Required file not found in sources: %s", fullPath) - return "", fmt.Errorf("required file not found in sources: %s", fullPath) - } - - bundlerLog.Printf("Inlining file: %s (size: %d bytes)", fullPath, len(requiredContent)) - - // Recursively bundle the required file - requiredDir := filepath.Dir(fullPath) - bundledRequired, err := bundleFromSources(requiredContent, requiredDir, sources, processed, mode) - if err != nil { - return "", err - } - - // Remove exports from the bundled content based on runtime mode - var cleanedRequired string - if mode == RuntimeModeGitHubScript { - // GitHub Script mode: remove all module.exports - cleanedRequired = removeExports(bundledRequired) - bundlerLog.Printf("Processed %s (github-script mode): original_size=%d, after_export_removal=%d", - fullPath, len(bundledRequired), len(cleanedRequired)) - } else { - // Node.js mode: preserve module.exports - cleanedRequired = bundledRequired - bundlerLog.Printf("Processed %s (nodejs mode): size=%d, module.exports preserved", - fullPath, len(bundledRequired)) - } - - // Add a comment indicating the inlined file - fmt.Fprintf(&result, "// === Inlined from %s ===\n", requirePath) - result.WriteString(cleanedRequired) - fmt.Fprintf(&result, "// === End of %s ===\n", requirePath) - } - - lastEnd = matchEnd - } - - // Write any remaining content after the last require - result.WriteString(content[lastEnd:]) - - return result.String(), nil -} - -// removeExports removes module.exports and exports statements from JavaScript code -// This function removes 
ALL exports, including conditional ones, because GitHub Script -// mode does not support any form of module.exports -func removeExports(content string) string { - lines := strings.Split(content, "\n") - var result strings.Builder - - // Regular expressions for export patterns - moduleExportsRegex := regexp.MustCompile(`^\s*module\.exports\s*=`) - exportsRegex := regexp.MustCompile(`^\s*exports\.\w+\s*=`) - - // Pattern for inline conditional exports like: - // ("undefined" != typeof module && module.exports && (module.exports = {...}), - // This pattern is used by minified code - inlineConditionalExportRegex := regexp.MustCompile(`\(\s*["']undefined["']\s*!=\s*typeof\s+module\s*&&\s*module\.exports`) - - // Track if we're inside a conditional export block that should be removed - inConditionalExport := false - conditionalDepth := 0 - - // Track if we're inside an unconditional module.exports block - inModuleExports := false - moduleExportsDepth := 0 - - for i, line := range lines { - trimmed := strings.TrimSpace(line) - - // Check for inline conditional export pattern (minified style) - // These lines should be entirely removed as they only contain the conditional export - if inlineConditionalExportRegex.MatchString(trimmed) { - // Skip the entire line - it's an inline conditional export - continue - } - - // Check if this starts a conditional export block - // Pattern: if (typeof module !== "undefined" && module.exports) { - // These need to be REMOVED for GitHub Script mode - if strings.Contains(trimmed, "if") && - strings.Contains(trimmed, "module") && - strings.Contains(trimmed, "exports") && - strings.Contains(trimmed, "{") { - inConditionalExport = true - conditionalDepth = 1 - // Skip this line - we're removing conditional exports for GitHub Script mode - continue - } - - // Track braces if we're in a conditional export - skip all lines until it closes - if inConditionalExport { - for _, ch := range trimmed { - if ch == '{' { - conditionalDepth++ - } else if 
ch == '}' { - conditionalDepth-- - if conditionalDepth == 0 { - inConditionalExport = false - // Skip this closing line and continue - continue - } - } - } - // Skip all lines inside the conditional export block - continue - } - - // Check if this line starts an unconditional module.exports assignment - if moduleExportsRegex.MatchString(line) { - // Check if it's a multi-line object export (ends with {) - if strings.Contains(trimmed, "{") && !strings.Contains(trimmed, "}") { - // This is a multi-line module.exports = { ... } - inModuleExports = true - moduleExportsDepth = 1 - // Skip this line and start tracking the export block - continue - } else { - // Single-line export, skip just this line - continue - } - } - - // Track braces if we're in an unconditional module.exports block - if inModuleExports { - // Count braces to track when the export block ends - for _, ch := range trimmed { - if ch == '{' { - moduleExportsDepth++ - } else if ch == '}' { - moduleExportsDepth-- - if moduleExportsDepth == 0 { - inModuleExports = false - // Skip this closing line and continue - continue - } - } - } - // Skip all lines inside the export block - continue - } - - // Skip lines that are unconditional exports.* assignments - if exportsRegex.MatchString(line) { - // Skip this line - it's an unconditional export - continue - } - - result.WriteString(line) - if i < len(lines)-1 { - result.WriteString("\n") - } - } - - return result.String() -} - -// deduplicateRequires removes duplicate require() statements from bundled JavaScript -// For destructured imports from the same module, it merges them into a single require statement -// keeping only the first occurrence of each unique require for non-destructured imports. 
-// IMPORTANT: Only merges requires that have the same indentation level to avoid moving -// requires across scope boundaries (which would cause "X is not defined" errors) -func deduplicateRequires(content string) string { - lines := strings.Split(content, "\n") - - // Helper to get indentation level of a line - getIndentation := func(line string) int { - count := 0 - for _, ch := range line { - //nolint:staticcheck // switch would require label for break; if-else is clearer here - if ch == ' ' { - count++ - } else if ch == '\t' { - count += 2 // Treat tab as 2 spaces for comparison - } else { - break - } - } - return count - } - - // Track module imports per indentation level: map[indent]map[moduleName][]names - moduleImportsByIndent := make(map[int]map[string][]string) - // Track which lines are require statements to skip during first pass - requireLines := make(map[int]bool) - // Track order of first appearance of each module per indentation: map[indent][]moduleName - moduleOrderByIndent := make(map[int][]string) - // Track the first line number where we see a require at each indentation - firstRequireLineByIndent := make(map[int]int) - - // Regular expression to match destructured require statements - // Matches: const/let/var { name1, name2 } = require('module'); - destructuredRequireRegex := regexp.MustCompile(`^\s*(?:const|let|var)\s+\{\s*([^}]+)\s*\}\s*=\s*require\(['"]([^'"]+)['"]\);?\s*$`) - // Regular expression to match non-destructured require statements - // Matches: const/let/var name = require('module'); - simpleRequireRegex := regexp.MustCompile(`^\s*(?:const|let|var)\s+(\w+)\s*=\s*require\(['"]([^'"]+)['"]\);?\s*$`) - - // First pass: collect all require statements grouped by indentation level - for i, line := range lines { - indent := getIndentation(line) - - // Try destructured require first - destructuredMatches := destructuredRequireRegex.FindStringSubmatch(line) - if len(destructuredMatches) > 2 { - moduleName := destructuredMatches[2] - 
destructuredNames := destructuredMatches[1] - - requireLines[i] = true - - // Initialize map for this indentation level if needed - if moduleImportsByIndent[indent] == nil { - moduleImportsByIndent[indent] = make(map[string][]string) - firstRequireLineByIndent[indent] = i - } - - // Parse the destructured names (split by comma and trim whitespace) - names := strings.Split(destructuredNames, ",") - for _, name := range names { - name = strings.TrimSpace(name) - if name != "" { - moduleImportsByIndent[indent][moduleName] = append(moduleImportsByIndent[indent][moduleName], name) - } - } - - // Track order of first appearance at this indentation - if len(moduleImportsByIndent[indent][moduleName]) == len(names) { - moduleOrderByIndent[indent] = append(moduleOrderByIndent[indent], moduleName) - } - continue - } - - // Try simple require - simpleMatches := simpleRequireRegex.FindStringSubmatch(line) - if len(simpleMatches) > 2 { - moduleName := simpleMatches[2] - varName := simpleMatches[1] - - requireLines[i] = true - - // Initialize map for this indentation level if needed - if moduleImportsByIndent[indent] == nil { - moduleImportsByIndent[indent] = make(map[string][]string) - firstRequireLineByIndent[indent] = i - } - - // For simple requires, store the variable name with a marker - if _, exists := moduleImportsByIndent[indent][moduleName]; !exists { - moduleOrderByIndent[indent] = append(moduleOrderByIndent[indent], moduleName) - } - moduleImportsByIndent[indent][moduleName] = append(moduleImportsByIndent[indent][moduleName], "VAR:"+varName) - } - } - - // Second pass: write output - var result strings.Builder - // Track which indentation levels have had their merged requires written - wroteRequiresByIndent := make(map[int]bool) - - for i, line := range lines { - indent := getIndentation(line) - - // Skip original require lines, we'll write merged ones at the first require position for each indent level - if requireLines[i] { - // Check if this is the first require at 
this indentation level - if firstRequireLineByIndent[indent] == i && !wroteRequiresByIndent[indent] { - // Write all merged require statements for this indentation level - moduleImports := moduleImportsByIndent[indent] - moduleOrder := moduleOrderByIndent[indent] - - indentStr := strings.Repeat(" ", indent) - - for _, moduleName := range moduleOrder { - imports := moduleImports[moduleName] - if len(imports) == 0 { - continue - } - - // Separate VAR: prefixed (simple requires) from destructured imports - var varNames []string - var destructuredNames []string - for _, imp := range imports { - if after, ok := strings.CutPrefix(imp, "VAR:"); ok { - varNames = append(varNames, after) - } else { - destructuredNames = append(destructuredNames, imp) - } - } - - // Deduplicate variable names for simple requires - if len(varNames) > 0 { - seen := make(map[string]bool) - var uniqueVarNames []string - for _, varName := range varNames { - if !seen[varName] { - seen[varName] = true - uniqueVarNames = append(uniqueVarNames, varName) - } - } - - // Write simple require(s) - use the first unique variable name - if len(uniqueVarNames) > 0 { - varName := uniqueVarNames[0] - fmt.Fprintf(&result, "%sconst %s = require(\"%s\");\n", indentStr, varName, moduleName) - bundlerLog.Printf("Keeping simple require: %s at indent %d", moduleName, indent) - } - } - - // Handle destructured imports - if len(destructuredNames) > 0 { - // Remove duplicates while preserving order - seen := make(map[string]bool) - var uniqueImports []string - for _, imp := range destructuredNames { - if !seen[imp] { - seen[imp] = true - uniqueImports = append(uniqueImports, imp) - } - } - - fmt.Fprintf(&result, "%sconst { %s } = require(\"%s\");\n", - indentStr, strings.Join(uniqueImports, ", "), moduleName) - bundlerLog.Printf("Merged destructured require for %s at indent %d: %v", moduleName, indent, uniqueImports) - } - } - wroteRequiresByIndent[indent] = true - } - // Skip this require line (it's been merged or will 
be merged) - continue - } - - // Keep non-require lines - result.WriteString(line) - if i < len(lines)-1 { - result.WriteString("\n") - } - } - - return result.String() -} diff --git a/pkg/workflow/bundler_deduplicate_test.go b/pkg/workflow/bundler_deduplicate_test.go deleted file mode 100644 index cecf32640b..0000000000 --- a/pkg/workflow/bundler_deduplicate_test.go +++ /dev/null @@ -1,44 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -// TestDeduplicateRequiresPreservesIndentation tests that deduplicateRequires -// preserves the indentation level of requires -func TestDeduplicateRequiresPreservesIndentation(t *testing.T) { - input := `async function main() { - const fs = require("fs"); - - if (fs.existsSync("/tmp/test.txt")) { - console.log("exists"); - } -} - -const path = require("path"); -console.log(path.basename("/tmp/file.txt")); -` - - output := deduplicateRequires(input) - - t.Logf("Input:\n%s", input) - t.Logf("Output:\n%s", output) - - // Check that fs require is at indent 2 - if !strings.Contains(output, " const fs = require(\"fs\");") { - t.Error("fs require should have 2 spaces of indentation") - } - - // Check that path require is at indent 0 - if !strings.Contains(output, "const path = require(\"path\");") { - t.Error("path require should have 0 spaces of indentation") - - // Check if it was incorrectly indented - if strings.Contains(output, " const path = require(\"path\");") { - t.Error("path require was incorrectly indented with 2 spaces") - } - } -} diff --git a/pkg/workflow/bundler_duplicate_modules_test.go b/pkg/workflow/bundler_duplicate_modules_test.go deleted file mode 100644 index 9d3387606e..0000000000 --- a/pkg/workflow/bundler_duplicate_modules_test.go +++ /dev/null @@ -1,65 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -// TestDeduplicateRequiresDuplicateModules tests that when multiple files require -// the same module with the same 
variable name, only one require statement is kept -func TestDeduplicateRequiresDuplicateModules(t *testing.T) { - // Simulates what happens when multiple inlined files all require "fs" - input := `const fs = require("fs"); -const path = require("path"); -// Inlined from file1.cjs -const fs = require("fs"); -// Inlined from file2.cjs -const fs = require("fs"); -const path = require("path"); -// Inlined from file3.cjs -const fs = require("fs"); - -function useModules() { - fs.existsSync("/tmp"); - path.join("/tmp", "test"); -} -` - - output := deduplicateRequires(input) - - t.Logf("Input:\n%s", input) - t.Logf("Output:\n%s", output) - - // Should have exactly 1 fs require - fsCount := strings.Count(output, `const fs = require`) - if fsCount != 1 { - t.Errorf("Expected 1 fs require, got %d", fsCount) - } - - // Should have exactly 1 path require - pathCount := strings.Count(output, `const path = require`) - if pathCount != 1 { - t.Errorf("Expected 1 path require, got %d", pathCount) - } - - // Both requires should come before their usage - fsRequireIndex := strings.Index(output, `require("fs")`) - fsUsageIndex := strings.Index(output, "fs.existsSync") - pathRequireIndex := strings.Index(output, `require("path")`) - pathUsageIndex := strings.Index(output, "path.join") - - if fsRequireIndex == -1 { - t.Error("fs require not found") - } - if pathRequireIndex == -1 { - t.Error("path require not found") - } - if fsUsageIndex != -1 && fsRequireIndex > fsUsageIndex { - t.Errorf("fs require should come before fs.existsSync usage") - } - if pathUsageIndex != -1 && pathRequireIndex > pathUsageIndex { - t.Errorf("path require should come before path.join usage") - } -} diff --git a/pkg/workflow/bundler_file_mode.go b/pkg/workflow/bundler_file_mode.go deleted file mode 100644 index 9c96e5a5cb..0000000000 --- a/pkg/workflow/bundler_file_mode.go +++ /dev/null @@ -1,529 +0,0 @@ -// This file provides JavaScript bundling for agentic workflows. 
-// -// # File Mode Bundler -// -// This file implements a file-based bundling mode for GitHub Script actions that writes -// JavaScript files to disk instead of inlining them in YAML. This approach maximizes -// reuse of helper modules within the same job. -// -// # How it works -// -// 1. CollectScriptFiles - Recursively collects all JavaScript files used by a script -// 2. GenerateWriteScriptsStep - Creates a step that writes all files to /opt/gh-aw/scripts/ -// 3. GenerateRequireScript - Converts a script to require from the local filesystem -// -// # Benefits -// -// - Reduces YAML size by avoiding duplicate inlined code -// - Maximizes reuse of helper modules within the same job -// - Makes debugging easier (files exist on disk during execution) -// - Reduces memory pressure from large bundled strings - -package workflow - -import ( - "crypto/sha256" - "encoding/hex" - "fmt" - "path/filepath" - "regexp" - "sort" - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var fileModeLog = logger.New("workflow:bundler_file_mode") - -// ScriptsBasePath is the directory where JavaScript files are written at runtime -// This must match SetupActionDestination since files are copied there by the setup action -const ScriptsBasePath = "/opt/gh-aw/actions" - -// SetupActionDestination is the directory where the setup action writes activation scripts -const SetupActionDestination = "/opt/gh-aw/actions" - -// ScriptFile represents a JavaScript file to be written to disk -type ScriptFile struct { - // Path is the relative path within ScriptsBasePath (e.g., "create_issue.cjs") - Path string - // Content is the JavaScript content to write - Content string - // Hash is a short hash of the content for cache invalidation - Hash string -} - -// ScriptFilesResult contains the collected script files and metadata -type ScriptFilesResult struct { - // Files is the list of files to write, deduplicated and sorted - Files []ScriptFile - // MainScriptPath is the path to the main entry 
point script - MainScriptPath string - // TotalSize is the total size of all files in bytes - TotalSize int -} - -// CollectScriptFiles recursively collects all JavaScript files used by a script. -// It starts from the main script and follows all local require() statements. -// Top-level await patterns (like `await main();`) are patched to work in CommonJS. -// -// Parameters: -// - scriptName: Name of the main script (e.g., "create_issue") -// - mainContent: The main script content -// - sources: Map of all available JavaScript sources (from GetJavaScriptSources()) -// -// Returns a ScriptFilesResult with all files needed, or an error if a required file is missing. -// -// Note: This includes the main script in the output. Use CollectScriptDependencies if you -// only want the dependencies (for when the main script is inlined in github-script). -func CollectScriptFiles(scriptName string, mainContent string, sources map[string]string) (*ScriptFilesResult, error) { - fileModeLog.Printf("Collecting script files for: %s (%d bytes)", scriptName, len(mainContent)) - - // Track collected files and avoid duplicates - collected := make(map[string]*ScriptFile) - processed := make(map[string]bool) - - // The main script path - mainPath := scriptName + ".cjs" - - // Patch top-level await patterns to work in CommonJS - patchedContent := patchTopLevelAwaitForFileMode(mainContent) - - // Add the main script first - hash := computeShortHash(patchedContent) - collected[mainPath] = &ScriptFile{ - Path: mainPath, - Content: patchedContent, - Hash: hash, - } - processed[mainPath] = true - - // Recursively collect dependencies - if err := collectDependencies(mainContent, "", sources, collected, processed); err != nil { - return nil, err - } - - // Convert to sorted slice for deterministic output - var files []ScriptFile - totalSize := 0 - for _, file := range collected { - files = append(files, *file) - totalSize += len(file.Content) - } - - // Sort by path for consistent output - 
sort.Slice(files, func(i, j int) bool { - return files[i].Path < files[j].Path - }) - - fileModeLog.Printf("Collected %d files, total size: %d bytes", len(files), totalSize) - - return &ScriptFilesResult{ - Files: files, - MainScriptPath: mainPath, - TotalSize: totalSize, - }, nil -} - -// CollectScriptDependencies collects only the dependencies of a script (not the main script itself). -// This is used when the main script is inlined in github-script but its dependencies -// need to be written to disk. -// -// Parameters: -// - scriptName: Name of the main script (e.g., "create_issue") -// - mainContent: The main script content -// - sources: Map of all available JavaScript sources (from GetJavaScriptSources()) -// -// Returns a ScriptFilesResult with only the dependency files, or an error if a required file is missing. -func CollectScriptDependencies(scriptName string, mainContent string, sources map[string]string) (*ScriptFilesResult, error) { - fileModeLog.Printf("Collecting dependencies for: %s (%d bytes)", scriptName, len(mainContent)) - - // Track collected files and avoid duplicates - collected := make(map[string]*ScriptFile) - processed := make(map[string]bool) - - // Mark the main script as processed so we don't include it - mainPath := scriptName + ".cjs" - processed[mainPath] = true - - // Recursively collect dependencies (but not the main script) - if err := collectDependencies(mainContent, "", sources, collected, processed); err != nil { - return nil, err - } - - // Convert to sorted slice for deterministic output - var files []ScriptFile - totalSize := 0 - for _, file := range collected { - files = append(files, *file) - totalSize += len(file.Content) - } - - // Sort by path for consistent output - sort.Slice(files, func(i, j int) bool { - return files[i].Path < files[j].Path - }) - - fileModeLog.Printf("Collected %d dependency files, total size: %d bytes", len(files), totalSize) - - return &ScriptFilesResult{ - Files: files, - MainScriptPath: 
mainPath, - TotalSize: totalSize, - }, nil -} - -// collectDependencies recursively collects all files required by the given content -func collectDependencies(content string, currentDir string, sources map[string]string, collected map[string]*ScriptFile, processed map[string]bool) error { - // Regular expression to match require('./...') or require("./...") - requireRegex := regexp.MustCompile(`require\(['"](\.\.?/[^'"]+)['"]\)`) - - matches := requireRegex.FindAllStringSubmatch(content, -1) - for _, match := range matches { - if len(match) <= 1 { - continue - } - - requirePath := match[1] - - // Resolve the full path - var fullPath string - if currentDir == "" { - fullPath = requirePath - } else { - fullPath = filepath.Join(currentDir, requirePath) - } - - // Ensure .cjs extension - if !strings.HasSuffix(fullPath, ".cjs") && !strings.HasSuffix(fullPath, ".js") { - fullPath += ".cjs" - } - - // Normalize the path - fullPath = filepath.Clean(fullPath) - fullPath = filepath.ToSlash(fullPath) - - // Skip if already processed - if processed[fullPath] { - continue - } - processed[fullPath] = true - - // Look up in sources - requiredContent, ok := sources[fullPath] - if !ok { - return fmt.Errorf("required file not found in sources: %s", fullPath) - } - - // Add to collected - hash := computeShortHash(requiredContent) - collected[fullPath] = &ScriptFile{ - Path: fullPath, - Content: requiredContent, - Hash: hash, - } - - fileModeLog.Printf("Collected dependency: %s (%d bytes)", fullPath, len(requiredContent)) - - // Recursively process this file's dependencies - requiredDir := filepath.Dir(fullPath) - if err := collectDependencies(requiredContent, requiredDir, sources, collected, processed); err != nil { - return err - } - } - - return nil -} - -// computeShortHash computes a short SHA256 hash of the content (first 8 characters) -func computeShortHash(content string) string { - hash := sha256.Sum256([]byte(content)) - return hex.EncodeToString(hash[:])[:8] -} - -// 
patchTopLevelAwaitForFileMode wraps top-level `await main();` calls in an async IIFE. -// CommonJS modules don't support top-level await, so we need to wrap it. -// -// This transforms: -// -// await main(); -// -// Into: -// -// (async () => { await main(); })(); -func patchTopLevelAwaitForFileMode(content string) string { - // Match `await main();` at the end of the file (with optional whitespace/newlines) - // This pattern is used in safe output scripts as the entry point - awaitMainRegex := regexp.MustCompile(`(?m)^await\s+main\s*\(\s*\)\s*;?\s*$`) - - return awaitMainRegex.ReplaceAllString(content, "(async () => { await main(); })();") -} - -// GenerateWriteScriptsStep generates the YAML for a step that writes all collected -// JavaScript files to /opt/gh-aw/scripts/. This step should be added once at the -// beginning of the safe_outputs job. -// -// The generated step uses a heredoc to write each file efficiently. -func GenerateWriteScriptsStep(files []ScriptFile) []string { - if len(files) == 0 { - return nil - } - - fileModeLog.Printf("Generating write scripts step for %d files", len(files)) - - var steps []string - - steps = append(steps, " - name: Setup JavaScript files\n") - steps = append(steps, " id: setup_scripts\n") - steps = append(steps, " shell: bash\n") - steps = append(steps, " run: |\n") - steps = append(steps, fmt.Sprintf(" mkdir -p %s\n", ScriptsBasePath)) - - // Write each file using cat with heredoc - for _, file := range files { - filePath := fmt.Sprintf("%s/%s", ScriptsBasePath, file.Path) - - // Ensure parent directory exists - dir := filepath.Dir(filePath) - if dir != ScriptsBasePath { - steps = append(steps, fmt.Sprintf(" mkdir -p %s\n", dir)) - } - - // Use heredoc to write file content safely - // Generate unique delimiter using file hash to avoid conflicts - delimiter := GenerateHeredocDelimiter("FILE_" + file.Hash) - steps = append(steps, fmt.Sprintf(" cat > %s << '%s'\n", filePath, delimiter)) - - // Write content line by line - 
lines := strings.SplitSeq(file.Content, "\n") - for line := range lines { - steps = append(steps, fmt.Sprintf(" %s\n", line)) - } - - steps = append(steps, fmt.Sprintf(" %s\n", delimiter)) - } - - return steps -} - -// GenerateRequireScript generates the JavaScript code that requires the main script -// from the filesystem instead of inlining the bundled code. -// -// For GitHub Script mode, the script is wrapped in an async IIFE to support -// top-level await patterns used in the JavaScript files (e.g., `await main();`). -// The globals (github, context, core, exec, io) are automatically available -// in the GitHub Script execution context. -func GenerateRequireScript(mainScriptPath string) string { - fullPath := fmt.Sprintf("%s/%s", ScriptsBasePath, mainScriptPath) - // Wrap in async IIFE to support top-level await in the required module - return fmt.Sprintf(`(async () => { await require('%s'); })();`, fullPath) -} - -// GitHubScriptGlobalsPreamble is JavaScript code that exposes the github-script -// built-in objects (github, context, core, exec, io) on the global JavaScript object. -// This allows required modules to access these globals via globalThis. -const GitHubScriptGlobalsPreamble = `// Expose github-script globals to required modules -globalThis.github = github; -globalThis.context = context; -globalThis.core = core; -globalThis.exec = exec; -globalThis.io = io; - -` - -// GetInlinedScriptForFileMode gets the main script content and transforms it for inlining -// in the github-script action while using file mode for dependencies. -// -// This function: -// 1. Adds a preamble to expose github-script globals (github, context, core, exec, io) on globalThis -// 2. Gets the script content from the registry -// 3. Transforms relative require() calls to absolute paths (e.g., './helper.cjs' -> '/opt/gh-aw/scripts/helper.cjs') -// 4. 
Patches top-level await patterns to work in the execution context
-//
-// This is different from GenerateRequireScript which just generates a require() call.
-// Inlining the main script is necessary because:
-// - require() runs in a separate module context without the GitHub Script globals
-// - The main script needs access to github, context, core, etc. in its top-level scope
-//
-// Dependencies are still loaded from files using require() and can access the globals
-// via globalThis (e.g., globalThis.github, globalThis.core).
-func GetInlinedScriptForFileMode(scriptName string) (string, error) {
-	// Get script content from registry
-	content := DefaultScriptRegistry.GetSource(scriptName)
-	if content == "" {
-		return "", fmt.Errorf("script not found in registry: %s", scriptName)
-	}
-
-	// Transform relative requires to absolute paths pointing to /opt/gh-aw/scripts/
-	transformed := TransformRequiresToAbsolutePath(content, ScriptsBasePath)
-
-	// Patch top-level await patterns
-	patched := patchTopLevelAwaitForFileMode(transformed)
-
-	// Add preamble to expose globals to required modules
-	result := GitHubScriptGlobalsPreamble + patched
-
-	fileModeLog.Printf("Inlined script %s: %d bytes (transformed from %d)", scriptName, len(result), len(content))
-
-	return result, nil
-}
-
-// RewriteScriptForFileMode rewrites a script's require statements to use absolute
-// paths from /opt/gh-aw/scripts/ instead of relative paths.
-// -// This transforms: -// -// const { helper } = require('./helper.cjs'); -// -// Into: -// -// const { helper } = require('/opt/gh-aw/scripts/helper.cjs'); -func RewriteScriptForFileMode(content string, currentPath string) string { - // Regular expression to match local require statements - requireRegex := regexp.MustCompile(`require\(['"](\.\.?/)([^'"]+)['"]\)`) - - return requireRegex.ReplaceAllStringFunc(content, func(match string) string { - // Extract the path - submatches := requireRegex.FindStringSubmatch(match) - if len(submatches) < 3 { - return match - } - - relativePrefix := submatches[1] - requirePath := submatches[2] - - // Resolve the full path - var fullPath string - currentDir := filepath.Dir(currentPath) - switch relativePrefix { - case "./": - if currentDir == "." || currentDir == "" { - fullPath = requirePath - } else { - fullPath = filepath.Join(currentDir, requirePath) - } - case "../": - parentDir := filepath.Dir(currentDir) - fullPath = filepath.Join(parentDir, requirePath) - } - - // Normalize - fullPath = filepath.Clean(fullPath) - fullPath = filepath.ToSlash(fullPath) - - // Return the rewritten require - return fmt.Sprintf("require('%s/%s')", ScriptsBasePath, fullPath) - }) -} - -// TransformRequiresToAbsolutePath rewrites all relative require statements in content -// to use the specified absolute base path. 
-//
-// This transforms:
-//
-//	const { helper } = require('./helper.cjs');
-//
-// Into:
-//
-//	const { helper } = require('/base/path/helper.cjs');
-//
-// Parameters:
-// - content: The JavaScript content to transform
-// - basePath: The absolute path to use for requires (e.g., "/opt/gh-aw/safeoutputs")
-func TransformRequiresToAbsolutePath(content string, basePath string) string {
-	// Regular expression to match local require statements
-	requireRegex := regexp.MustCompile(`require\(['"](\.\.?/)([^'"]+)['"]\)`)
-
-	return requireRegex.ReplaceAllStringFunc(content, func(match string) string {
-		// Extract the path
-		submatches := requireRegex.FindStringSubmatch(match)
-		if len(submatches) < 3 {
-			return match
-		}
-
-		requirePath := submatches[2]
-
-		// Return the rewritten require with the base path
-		return fmt.Sprintf("require('%s/%s')", basePath, requirePath)
-	})
-}
-
-// PrepareFilesForFileMode prepares all collected files for file mode by rewriting
-// their require statements to use absolute paths.
-func PrepareFilesForFileMode(files []ScriptFile) []ScriptFile {
-	result := make([]ScriptFile, len(files))
-	for i, file := range files {
-		rewritten := RewriteScriptForFileMode(file.Content, file.Path)
-		result[i] = ScriptFile{
-			Path:    file.Path,
-			Content: rewritten,
-			Hash:    computeShortHash(rewritten),
-		}
-	}
-	return result
-}
-
-// CollectAllJobScriptFiles collects all JavaScript files needed by multiple scripts
-// in a single job. This deduplicates common helper files across different safe output types.
-//
-// Parameters:
-// - scriptNames: List of script names to collect (e.g., ["create_issue", "add_comment"])
-// - sources: Map of all available JavaScript sources
-//
-// Returns a combined ScriptFilesResult with all deduplicated files.
-func CollectAllJobScriptFiles(scriptNames []string, sources map[string]string) (*ScriptFilesResult, error) {
-	fileModeLog.Printf("Collecting files for %d scripts: %v", len(scriptNames), scriptNames)
-
-	// Track all collected files across all scripts
-	allFiles := make(map[string]*ScriptFile)
-
-	for _, name := range scriptNames {
-		// Get the script content from the registry
-		content := DefaultScriptRegistry.GetSource(name)
-		if content == "" {
-			fileModeLog.Printf("Script not found in registry: %s, skipping", name)
-			continue
-		}
-
-		// Collect only this script's dependencies (not the main script itself)
-		// The main script is inlined in the github-script action
-		result, err := CollectScriptDependencies(name, content, sources)
-		if err != nil {
-			return nil, fmt.Errorf("failed to collect dependencies for script %s: %w", name, err)
-		}
-
-		// Merge into allFiles
-		for _, file := range result.Files {
-			if existing, ok := allFiles[file.Path]; ok {
-				// Already have this file - verify content matches
-				if existing.Hash != file.Hash {
-					fileModeLog.Printf("WARNING: File %s has different content from different scripts", file.Path)
-				}
-			} else {
-				allFiles[file.Path] = &ScriptFile{
-					Path:    file.Path,
-					Content: file.Content,
-					Hash:    file.Hash,
-				}
-			}
-		}
-	}
-
-	// Convert to sorted slice
-	var files []ScriptFile
-	totalSize := 0
-	for _, file := range allFiles {
-		files = append(files, *file)
-		totalSize += len(file.Content)
-	}
-
-	sort.Slice(files, func(i, j int) bool {
-		return files[i].Path < files[j].Path
-	})
-
-	fileModeLog.Printf("Total collected: %d unique dependency files, %d bytes", len(files), totalSize)
-
-	return &ScriptFilesResult{
-		Files:     files,
-		TotalSize: totalSize,
-	}, nil
-}
diff --git a/pkg/workflow/bundler_file_mode_test.go b/pkg/workflow/bundler_file_mode_test.go
deleted file mode 100644
index 12df187178..0000000000
--- a/pkg/workflow/bundler_file_mode_test.go
+++ /dev/null
@@ -1,255 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-func TestCollectScriptFiles(t *testing.T) {
-	// Create mock sources with dependencies
-	sources := map[string]string{
-		"main.cjs": `
-const { helper } = require('./helper.cjs');
-const { util } = require('./utils/util.cjs');
-helper();
-util();
-`,
-		"helper.cjs": `
-const { shared } = require('./shared.cjs');
-function helper() {
-  shared();
-  console.log("helper");
-}
-module.exports = { helper };
-`,
-		"shared.cjs": `
-function shared() {
-  console.log("shared");
-}
-module.exports = { shared };
-`,
-		"utils/util.cjs": `
-function util() {
-  console.log("util");
-}
-module.exports = { util };
-`,
-	}
-
-	result, err := CollectScriptFiles("main", sources["main.cjs"], sources)
-	if err != nil {
-		t.Fatalf("CollectScriptFiles failed: %v", err)
-	}
-
-	// Should collect all 4 files
-	if len(result.Files) != 4 {
-		t.Errorf("Expected 4 files, got %d", len(result.Files))
-		for _, f := range result.Files {
-			t.Logf("  - %s", f.Path)
-		}
-	}
-
-	// Check that main script path is set
-	if result.MainScriptPath != "main.cjs" {
-		t.Errorf("Expected MainScriptPath to be 'main.cjs', got '%s'", result.MainScriptPath)
-	}
-
-	// Check total size is > 0
-	if result.TotalSize == 0 {
-		t.Error("Expected TotalSize > 0")
-	}
-}
-
-func TestCollectScriptFiles_MissingDependency(t *testing.T) {
-	sources := map[string]string{
-		"main.cjs": `
-const { missing } = require('./missing.cjs');
-missing();
-`,
-	}
-
-	_, err := CollectScriptFiles("main", sources["main.cjs"], sources)
-	if err == nil {
-		t.Fatal("Expected error for missing dependency, got nil")
-	}
-	if !strings.Contains(err.Error(), "missing.cjs") {
-		t.Errorf("Expected error to mention 'missing.cjs', got: %v", err)
-	}
-}
-
-func TestCollectScriptFiles_CircularDependency(t *testing.T) {
-	// Circular dependencies should be handled (file only processed once)
-	sources := map[string]string{
-		"a.cjs": `
-const { b } = require('./b.cjs');
-module.exports = { a: () => b() };
-`,
-		"b.cjs": `
-const { a } = require('./a.cjs');
-module.exports = { b: () => console.log("b") };
-`,
-	}
-
-	result, err := CollectScriptFiles("a", sources["a.cjs"], sources)
-	if err != nil {
-		t.Fatalf("CollectScriptFiles failed with circular dependency: %v", err)
-	}
-
-	// Should collect both files without infinite loop
-	if len(result.Files) != 2 {
-		t.Errorf("Expected 2 files, got %d", len(result.Files))
-	}
-}
-
-func TestGenerateWriteScriptsStep(t *testing.T) {
-	files := []ScriptFile{
-		{
-			Path:    "test.cjs",
-			Content: "console.log('hello');",
-			Hash:    "abc12345",
-		},
-	}
-
-	steps := GenerateWriteScriptsStep(files)
-	if len(steps) == 0 {
-		t.Fatal("Expected steps to be generated")
-	}
-
-	// Check that the step includes the mkdir command
-	stepsStr := strings.Join(steps, "")
-	if !strings.Contains(stepsStr, "mkdir -p /opt/gh-aw/actions") {
-		t.Error("Expected mkdir command for actions directory")
-	}
-
-	// Check that the file is written
-	if !strings.Contains(stepsStr, "cat > /opt/gh-aw/actions/test.cjs") {
-		t.Error("Expected cat command for writing file")
-	}
-
-	// Check that content is included
-	if !strings.Contains(stepsStr, "console.log") {
-		t.Error("Expected file content to be included")
-	}
-}
-
-func TestGenerateRequireScript(t *testing.T) {
-	script := GenerateRequireScript("create_issue.cjs")
-
-	if !strings.Contains(script, "/opt/gh-aw/actions/create_issue.cjs") {
-		t.Errorf("Expected script to require from /opt/gh-aw/actions/, got: %s", script)
-	}
-
-	if !strings.Contains(script, "require(") {
-		t.Error("Expected script to contain require()")
-	}
-
-	// Should be wrapped in async IIFE to support top-level await
-	if !strings.Contains(script, "(async () =>") {
-		t.Error("Should be wrapped in async IIFE to support top-level await")
-	}
-
-	// Should have the closing IIFE parentheses
-	if !strings.Contains(script, ")()") {
-		t.Error("Should have IIFE invocation")
-	}
-}
-
-func TestRewriteScriptForFileMode(t *testing.T) {
-	tests := []struct {
-		name        string
-		content     string
-		currentPath string
-		wantContain string
-	}{
-		{
-			name:        "simple relative require",
-			content:     "const { helper } = require('./helper.cjs');",
-			currentPath: "main.cjs",
-			wantContain: "/opt/gh-aw/actions/helper.cjs",
-		},
-		{
-			name:        "nested relative require",
-			content:     "const { util } = require('./utils/util.cjs');",
-			currentPath: "main.cjs",
-			wantContain: "/opt/gh-aw/actions/utils/util.cjs",
-		},
-		{
-			name:        "parent directory require",
-			content:     "const { shared } = require('../shared.cjs');",
-			currentPath: "utils/util.cjs",
-			wantContain: "/opt/gh-aw/actions/shared.cjs",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			result := RewriteScriptForFileMode(tt.content, tt.currentPath)
-			if !strings.Contains(result, tt.wantContain) {
-				t.Errorf("Expected result to contain %q, got: %s", tt.wantContain, result)
-			}
-		})
-	}
-}
-
-func TestPrepareFilesForFileMode(t *testing.T) {
-	files := []ScriptFile{
-		{
-			Path:    "main.cjs",
-			Content: "const { helper } = require('./helper.cjs'); helper();",
-			Hash:    "abc123",
-		},
-		{
-			Path:    "helper.cjs",
-			Content: "module.exports = { helper: () => {} };",
-			Hash:    "def456",
-		},
-	}
-
-	prepared := PrepareFilesForFileMode(files)
-	if len(prepared) != 2 {
-		t.Fatalf("Expected 2 prepared files, got %d", len(prepared))
-	}
-
-	// Check that require paths are rewritten
-	mainFile := prepared[0]
-	if !strings.Contains(mainFile.Content, "/opt/gh-aw/actions/helper.cjs") {
-		t.Errorf("Expected main file to have rewritten require path, got: %s", mainFile.Content)
-	}
-
-	// Check that hash is updated
-	if mainFile.Hash == files[0].Hash {
-		t.Error("Expected hash to be updated after rewriting")
-	}
-}
-
-func TestCollectAllJobScriptFiles(t *testing.T) {
-	// This test uses the actual script registry
-	// Skip if registry is empty (shouldn't happen in normal runs)
-	if !DefaultScriptRegistry.Has("create_issue") {
-		t.Skip("Script registry not populated")
-	}
-
-	scriptNames := []string{"create_issue", "add_comment"}
-	sources := GetJavaScriptSources()
-
-	result, err := CollectAllJobScriptFiles(scriptNames, sources)
-	if err != nil {
-		t.Fatalf("CollectAllJobScriptFiles failed: %v", err)
-	}
-
-	// Should collect at least the 2 main scripts plus shared dependencies
-	if len(result.Files) < 2 {
-		t.Errorf("Expected at least 2 files, got %d", len(result.Files))
-	}
-
-	// Check that helpers are deduplicated (shared files should appear only once)
-	pathCounts := make(map[string]int)
-	for _, f := range result.Files {
-		pathCounts[f.Path]++
-		if pathCounts[f.Path] > 1 {
-			t.Errorf("File %s appears multiple times", f.Path)
-		}
-	}
-}
diff --git a/pkg/workflow/bundler_fs_undefined_test.go b/pkg/workflow/bundler_fs_undefined_test.go
deleted file mode 100644
index 991b24f944..0000000000
--- a/pkg/workflow/bundler_fs_undefined_test.go
+++ /dev/null
@@ -1,13 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"testing"
-)
-
-// TestBundleJavaScriptFsInsideFunctionWithMultilineDestructure tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptFsInsideFunctionWithMultilineDestructure(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
diff --git a/pkg/workflow/bundler_function_scope_test.go b/pkg/workflow/bundler_function_scope_test.go
deleted file mode 100644
index c00a186c8d..0000000000
--- a/pkg/workflow/bundler_function_scope_test.go
+++ /dev/null
@@ -1,13 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"testing"
-)
-
-// TestBundleJavaScriptWithRequireInsideFunction tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptWithRequireInsideFunction(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
diff --git a/pkg/workflow/bundler_indentation_test.go b/pkg/workflow/bundler_indentation_test.go
deleted file mode 100644
index 58a0d4e3c8..0000000000
--- a/pkg/workflow/bundler_indentation_test.go
+++ /dev/null
@@ -1,58 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-// TestDeduplicateRequiresWithMixedIndentation tests what happens when requires have different indentation
-func TestDeduplicateRequiresWithMixedIndentation(t *testing.T) {
-	// This simulates the real scenario where some code has no indentation
-	// but other inlined code has indentation
-	input := `const { execFile } = require("child_process");
-const os = require("os");
-
-function someFunction() {
-  const fs = require("fs");
-  const path = require("path");
-
-  fs.existsSync("/tmp");
-  path.join("/tmp", "test");
-}
-`
-
-	output := deduplicateRequires(input)
-
-	t.Logf("Input:\n%s", input)
-	t.Logf("Output:\n%s", output)
-
-	// Count requires at each indentation level
-	lines := strings.Split(output, "\n")
-	indent0Requires := 0
-	indent2Requires := 0
-
-	for _, line := range lines {
-		if strings.Contains(line, "require(") {
-			// Count leading spaces
-			spaces := len(line) - len(strings.TrimLeft(line, " "))
-			switch spaces {
-			case 0:
-				indent0Requires++
-				t.Logf("Indent 0: %s", line)
-			case 2:
-				indent2Requires++
-				t.Logf("Indent 2: %s", line)
-			}
-		}
-	}
-
-	t.Logf("Requires at indent 0: %d", indent0Requires)
-	t.Logf("Requires at indent 2: %d", indent2Requires)
-
-	// fs and path should stay at indent 2 (inside the function scope)
-	if indent2Requires != 2 {
-		t.Errorf("Expected 2 requires at indent 2 (fs and path inside function), got %d", indent2Requires)
-	}
-}
diff --git a/pkg/workflow/bundler_inline_test.go b/pkg/workflow/bundler_inline_test.go
deleted file mode 100644
index fd243ba7a8..0000000000
--- a/pkg/workflow/bundler_inline_test.go
+++ /dev/null
@@ -1,59 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-// TestDeduplicateRequiresWithInlinedContent tests deduplication with comment markers
-func TestDeduplicateRequiresWithInlinedContent(t *testing.T) {
-	input := `// === Inlined from ./safe_outputs_mcp_server.cjs ===
-const { execFile, execSync } = require("child_process");
-const os = require("os");
-// === Inlined from ./read_buffer.cjs ===
-class ReadBuffer {
-}
-// === End of ./read_buffer.cjs ===
-// === Inlined from ./mcp_server_core.cjs ===
-const fs = require("fs");
-const path = require("path");
-function initLogFile(server) {
-  if (!fs.existsSync(server.logDir)) {
-    fs.mkdirSync(server.logDir, { recursive: true });
-  }
-}
-// === End of ./mcp_server_core.cjs ===
-// === End of ./safe_outputs_mcp_server.cjs ===
-`
-
-	output := deduplicateRequires(input)
-
-	t.Logf("Input:\n%s", input)
-	t.Logf("Output:\n%s", output)
-
-	// Check that fs and path requires are present
-	if !strings.Contains(output, `require("fs")`) {
-		t.Error("fs require should be present in output")
-	}
-
-	if !strings.Contains(output, `require("path")`) {
-		t.Error("path require should be present in output")
-	}
-
-	// Check that they come before fs.existsSync usage
-	fsRequireIndex := strings.Index(output, `require("fs")`)
-	fsUsageIndex := strings.Index(output, "fs.existsSync")
-	found := strings.Contains(output, `require("path")`)
-
-	if fsRequireIndex == -1 {
-		t.Error("fs require not found")
-	}
-	if !found {
-		t.Error("path require not found")
-	}
-	if fsUsageIndex != -1 && fsRequireIndex > fsUsageIndex {
-		t.Errorf("fs require should come before fs.existsSync usage (require at %d, usage at %d)", fsRequireIndex, fsUsageIndex)
-	}
-}
diff --git a/pkg/workflow/bundler_integration_test.go b/pkg/workflow/bundler_integration_test.go
deleted file mode 100644
index a3419fe355..0000000000
--- a/pkg/workflow/bundler_integration_test.go
+++ /dev/null
@@ -1,55 +0,0 @@
-//go:build integration
-
-package workflow
-
-import (
-	"testing"
-)
-
-// TestBundlerIntegration tests the integration of bundler with embedded scripts
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundlerIntegration(t *testing.T) {
-	t.Skip("Bundler integration tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundlerCaching tests that bundling is cached and only happens once
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundlerCaching(t *testing.T) {
-	t.Skip("Bundler caching tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundlerConcurrency tests that the bundler works correctly under concurrent access
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundlerConcurrency(t *testing.T) {
-	t.Skip("Bundler concurrency tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundledScriptsContainHelperFunctions verifies that helper functions are properly bundled
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundledScriptsContainHelperFunctions(t *testing.T) {
-	t.Skip("Bundled scripts helper function tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundledScriptsDoNotContainExports verifies that exports are removed from bundled scripts
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundledScriptsDoNotContainExports(t *testing.T) {
-	t.Skip("Bundled scripts exports tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundledScriptsHaveCorrectStructure verifies the structure of bundled scripts
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundledScriptsHaveCorrectStructure(t *testing.T) {
-	t.Skip("Bundled scripts structure tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestSourceFilesAreSmaller verifies that source files are smaller than bundled scripts
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestSourceFilesAreSmaller(t *testing.T) {
-	t.Skip("Source file size comparison tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestGetJavaScriptSources verifies that GetJavaScriptSources returns all embedded sources
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestGetJavaScriptSources(t *testing.T) {
-	t.Skip("JavaScript sources tests skipped - scripts now use require() pattern to load external files at runtime")
-}
diff --git a/pkg/workflow/bundler_quotes_test.go b/pkg/workflow/bundler_quotes_test.go
deleted file mode 100644
index 4ecce1e775..0000000000
--- a/pkg/workflow/bundler_quotes_test.go
+++ /dev/null
@@ -1,103 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-// TestDeduplicateRequiresWithSingleAndDoubleQuotes tests that deduplicateRequires
-// handles both single and double quoted require statements correctly
-func TestDeduplicateRequiresWithSingleAndDoubleQuotes(t *testing.T) {
-	input := `const fs = require("fs");
-const path = require('path');
-
-function test() {
-  const result = path.join("/tmp", "test");
-  return fs.readFileSync(result);
-}
-`
-
-	output := deduplicateRequires(input)
-
-	t.Logf("Input:\n%s", input)
-	t.Logf("Output:\n%s", output)
-
-	// Check that both requires are present
-	if !strings.Contains(output, `const fs = require("fs");`) {
-		t.Error("fs require with double quotes should be present")
-	}
-
-	if !strings.Contains(output, `const path = require('path');`) &&
-		!strings.Contains(output, `const path = require("path");`) {
-		t.Error("path require should be present (with single or double quotes)")
-	}
-
-	// Check that path is defined before its use
-	found := strings.Contains(output, "const fs")
-	pathIndex := strings.Index(output, "const path")
-	joinIndex := strings.Index(output, "path.join")
-
-	if pathIndex == -1 {
-		t.Error("path require is missing")
-	}
-	if joinIndex == -1 {
-		t.Error("path.join usage is missing")
-	}
-	if pathIndex > joinIndex {
-		t.Errorf("path require appears after path.join usage (path at %d, join at %d)", pathIndex, joinIndex)
-	}
-	if !found {
-		t.Error("fs require is missing")
-	}
-}
-
-// TestDeduplicateRequiresMixedQuotesMultiple tests that the regex correctly
-// handles multiple requires with mixed quote styles
-func TestDeduplicateRequiresMixedQuotesMultiple(t *testing.T) {
-	input := `const fs = require("fs");
-const path = require('path');
-const os = require("os");
-
-function useModules() {
-  console.log(fs.readFileSync("/tmp/test"));
-  console.log(path.join("/tmp", "test"));
-  console.log(os.tmpdir());
-}
-`
-
-	output := deduplicateRequires(input)
-
-	t.Logf("Input:\n%s", input)
-	t.Logf("Output:\n%s", output)
-
-	// Should have exactly one fs require
-	fsCount := strings.Count(output, `const fs = require`)
-	if fsCount != 1 {
-		t.Errorf("Expected 1 fs require, got %d", fsCount)
-	}
-
-	// Should have exactly one path require
-	pathCount := strings.Count(output, `const path = require`)
-	if pathCount != 1 {
-		t.Errorf("Expected 1 path require, got %d", pathCount)
-	}
-
-	// Should have exactly one os require
-	osCount := strings.Count(output, `const os = require`)
-	if osCount != 1 {
-		t.Errorf("Expected 1 os require, got %d", osCount)
-	}
-
-	// All three modules should be present
-	if !strings.Contains(output, `require("fs")`) && !strings.Contains(output, `require('fs')`) {
-		t.Error("fs module should be required")
-	}
-	if !strings.Contains(output, `require("path")`) && !strings.Contains(output, `require('path')`) {
-		t.Error("path module should be required")
-	}
-	if !strings.Contains(output, `require("os")`) && !strings.Contains(output, `require('os')`) {
-		t.Error("os module should be required")
-	}
-}
diff --git a/pkg/workflow/bundler_runtime_mode_test.go b/pkg/workflow/bundler_runtime_mode_test.go
deleted file mode 100644
index 43c9e8f9d9..0000000000
--- a/pkg/workflow/bundler_runtime_mode_test.go
+++ /dev/null
@@ -1,79 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"testing"
-)
-
-// TestRuntimeModeString tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestRuntimeModeString(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundleJavaScriptWithMode_GitHubScript tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptWithMode_GitHubScript(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundleJavaScriptWithMode_NodeJS tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptWithMode_NodeJS(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundleJavaScriptWithMode_GitHubScriptValidation tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptWithMode_GitHubScriptValidation(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoModuleReferences tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoModuleReferences(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundleJavaScriptFromSources_BackwardCompatibility tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptFromSources_BackwardCompatibility(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBundleJavaScriptWithMode_MultipleFiles_NodeJS tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBundleJavaScriptWithMode_MultipleFiles_NodeJS(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoRuntimeMixing_GitHubScriptWithNodeJsHelper tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoRuntimeMixing_GitHubScriptWithNodeJsHelper(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoRuntimeMixing_NodeJsWithNodeJsHelper tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoRuntimeMixing_NodeJsWithNodeJsHelper(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoRuntimeMixing_GitHubScriptWithCompatibleHelper tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoRuntimeMixing_GitHubScriptWithCompatibleHelper(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoRuntimeMixing_GitHubScriptWithGitHubScriptAPIs tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoRuntimeMixing_GitHubScriptWithGitHubScriptAPIs(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestValidateNoRuntimeMixing_TransitiveDependency tests bundler functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestValidateNoRuntimeMixing_TransitiveDependency(t *testing.T) {
-	t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime")
-}
diff --git a/pkg/workflow/bundler_runtime_validation.go b/pkg/workflow/bundler_runtime_validation.go
deleted file mode 100644
index bad1ac0d05..0000000000
--- a/pkg/workflow/bundler_runtime_validation.go
+++ /dev/null
@@ -1,176 +0,0 @@
-// This file provides JavaScript runtime mode validation for agentic workflows.
-//
-// # Runtime Mode Validation
-//
-// This file validates that JavaScript scripts are compatible with their target runtime mode
-// and that different runtime modes are not mixed in a bundling operation. This prevents
-// runtime errors from incompatible API usage.
-//
-// # Runtime Modes
-//
-// GitHub Script Mode:
-// - Used for JavaScript embedded in GitHub Actions YAML via actions/github-script
-// - No module system available (no require() or module.exports at runtime)
-// - GitHub Actions globals available (core.*, exec.*, github.*)
-//
-// Node.js Mode:
-// - Used for standalone Node.js scripts that run on filesystem
-// - Full CommonJS module system available
-// - Standard Node.js APIs available (child_process, fs, etc.)
-// - No GitHub Actions globals
-//
-// # Validation Functions
-//
-// - validateNoRuntimeMixing() - Ensures all files being bundled are compatible with target mode
-// - validateRuntimeModeRecursive() - Recursively validates runtime compatibility
-// - detectRuntimeMode() - Detects the intended runtime mode of a JavaScript file
-//
-// # When to Add Validation Here
-//
-// Add validation to this file when:
-// - It validates runtime mode compatibility
-// - It checks for mixing of incompatible scripts
-// - It detects runtime-specific APIs
-//
-// For bundling functions, see bundler.go.
-// For bundle safety validation, see bundler_safety_validation.go.
-// For script content validation, see bundler_script_validation.go.
-// For general validation, see validation.go.
-// For detailed documentation, see scratchpad/validation-architecture.md
-
-package workflow
-
-import (
-	"fmt"
-	"regexp"
-	"strings"
-
-	"github.com/github/gh-aw/pkg/logger"
-	"github.com/github/gh-aw/pkg/stringutil"
-)
-
-var bundlerRuntimeLog = logger.New("workflow:bundler_runtime_validation")
-
-// validateNoRuntimeMixing checks that all files being bundled are compatible with the target runtime mode
-// This prevents mixing nodejs-only scripts (that use child_process) with github-script scripts
-// Returns an error if incompatible runtime modes are detected
-// Note: This function uses fail-fast error handling because runtime mode conflicts in dependencies
-// need to be resolved one at a time, and showing multiple conflicting dependency chains would be confusing
-func validateNoRuntimeMixing(mainScript string, sources map[string]string, targetMode RuntimeMode) error {
-	bundlerRuntimeLog.Printf("Validating runtime mode compatibility: target_mode=%s", targetMode)
-
-	// Track which files have been checked to avoid redundant checks
-	checked := make(map[string]bool)
-
-	// Recursively validate the main script and its dependencies
-	// This uses fail-fast error handling because runtime conflicts need sequential resolution
-	return validateRuntimeModeRecursive(mainScript, "", sources, targetMode, checked)
-}
-
-// validateRuntimeModeRecursive recursively validates that all required files are compatible with the target runtime mode
-func validateRuntimeModeRecursive(content string, currentPath string, sources map[string]string, targetMode RuntimeMode, checked map[string]bool) error {
-	// Extract all local require statements
-	requireRegex := regexp.MustCompile(`require\(['"](\.\.?/[^'"]+)['"]\)`)
-	matches := requireRegex.FindAllStringSubmatch(content, -1)
-
-	for _, match := range matches {
-		if len(match) <= 1 {
-			continue
-		}
-
-		requirePath := match[1]
-
-		// Resolve the full path
-		var fullPath string
-		if currentPath == "" {
-			fullPath = requirePath
-		} else {
-			fullPath = currentPath + "/" + requirePath
-		}
-
-		// Ensure .cjs extension
-		if !strings.HasSuffix(fullPath, ".cjs") && !strings.HasSuffix(fullPath, ".js") {
-			fullPath += ".cjs"
-		}
-
-		// Normalize the path
-		fullPath = stringutil.NormalizePath(fullPath)
-
-		// Skip if already checked
-		if checked[fullPath] {
-			continue
-		}
-		checked[fullPath] = true
-
-		// Get the required file content
-		requiredContent, ok := sources[fullPath]
-		if !ok {
-			// File not found - this will be caught by other validation
-			continue
-		}
-
-		// Detect the runtime mode of the required file
-		detectedMode := detectRuntimeMode(requiredContent)
-
-		// Check for incompatibility
-		if detectedMode != RuntimeModeGitHubScript && targetMode != detectedMode {
-			return fmt.Errorf("runtime mode conflict: script requires '%s' which is a %s script, but the main script is compiled for %s mode.\n\nNode.js scripts cannot be bundled with GitHub Script mode scripts because they use incompatible APIs (e.g., child_process, fs).\n\nTo fix this:\n- Use only GitHub Script compatible scripts (core.*, exec.*, github.*) for GitHub Script mode\n- Or change the main script to Node.js mode if it needs Node.js APIs",
-				fullPath, detectedMode, targetMode)
-		}
-
-		// Recursively check the required file's dependencies
-		requiredDir := ""
-		if strings.Contains(fullPath, "/") {
-			parts := strings.Split(fullPath, "/")
-			requiredDir = strings.Join(parts[:len(parts)-1], "/")
-		}
-
-		if err := validateRuntimeModeRecursive(requiredContent, requiredDir, sources, targetMode, checked); err != nil {
-			return err
-		}
-	}
-
-	return nil
-}
-
-// detectRuntimeMode attempts to detect the intended runtime mode of a JavaScript file
-// by analyzing its content for runtime-specific patterns.
-// This is used to detect if a LOCAL file being bundled is incompatible with the target mode.
-func detectRuntimeMode(content string) RuntimeMode {
-	// Check for Node.js-specific APIs that are CALLED in the code
-	// These indicate the script uses Node.js-only functionality
-	// Note: We only check for APIs that are fundamentally incompatible with github-script,
-	// specifically child_process APIs like execSync/spawnSync
-	nodeOnlyPatterns := []string{
-		`\bexecSync\s*\(`,  // execSync function call
-		`\bspawnSync\s*\(`, // spawnSync function call
-	}
-
-	for _, pattern := range nodeOnlyPatterns {
-		matched, _ := regexp.MatchString(pattern, content)
-		if matched {
-			bundlerRuntimeLog.Printf("Detected Node.js mode: pattern '%s' found", pattern)
-			return RuntimeModeNodeJS
-		}
-	}
-
-	// Check for github-script specific APIs
-	// These indicate the script is intended for GitHub Script mode
-	githubScriptPatterns := []string{
-		`\bcore\.\w+`,   // @actions/core
-		`\bgithub\.\w+`, // github context
-	}
-
-	for _, pattern := range githubScriptPatterns {
-		matched, _ := regexp.MatchString(pattern, content)
-		if matched {
-			bundlerRuntimeLog.Printf("Detected GitHub Script mode: pattern '%s' found", pattern)
-			return RuntimeModeGitHubScript
-		}
-	}
-
-	// If no specific patterns found, assume it's compatible with both (utility/helper functions)
-	// and return GitHub Script mode as the default/most restrictive
-	bundlerRuntimeLog.Print("No runtime-specific patterns found, assuming GitHub Script compatible")
-	return RuntimeModeGitHubScript
-}
diff --git a/pkg/workflow/bundler_safety_validation.go b/pkg/workflow/bundler_safety_validation.go
deleted file mode 100644
index 1c235ddd99..0000000000
--- a/pkg/workflow/bundler_safety_validation.go
+++ /dev/null
@@ -1,223 +0,0 @@
-// This file provides JavaScript bundler safety validation for agentic workflows.
-//
-// # Bundle Safety Validation
-//
-// This file validates bundled JavaScript to ensure safe module dependencies and prevent
-// runtime errors from missing modules. Validation ensures compatibility with target runtime mode.
-//
-// # Validation Functions
-//
-// - validateNoLocalRequires() - Validates bundled JavaScript has no local require() statements
-// - validateNoModuleReferences() - Validates no module.exports or exports references remain
-// - ValidateEmbeddedResourceRequires() - Validates embedded JavaScript dependencies exist
-//
-// # Validation Pattern: Bundling Verification
-//
-// Bundle safety validation ensures that local require() statements are inlined and
-// module references are removed when required:
-// - Scans bundled JavaScript for require('./...') or require('../...') patterns
-// - Ignores require statements inside string literals
-// - Returns hard errors if local requires are found (indicates bundling failure)
-// - Helps prevent runtime module-not-found errors
-//
-// # When to Add Validation Here
-//
-// Add validation to this file when:
-// - It validates JavaScript bundling correctness
-// - It checks for missing module dependencies
-// - It validates CommonJS require() statement resolution
-//
-// For bundling functions, see bundler.go.
-// For runtime mode validation, see bundler_runtime_validation.go.
-// For script content validation, see bundler_script_validation.go.
-// For general validation, see validation.go.
-// For detailed documentation, see scratchpad/validation-architecture.md - -package workflow - -import ( - "fmt" - "regexp" - "strings" - - "github.com/github/gh-aw/pkg/logger" - "github.com/github/gh-aw/pkg/stringutil" -) - -var bundlerSafetyLog = logger.New("workflow:bundler_safety_validation") - -// Pre-compiled regular expressions for validation (compiled once at package initialization for performance) -var ( - // moduleExportsRegex matches module.exports references - moduleExportsRegex = regexp.MustCompile(`\bmodule\.exports\b`) - // exportsRegex matches exports.property references - exportsRegex = regexp.MustCompile(`\bexports\.\w+`) -) - -// validateNoLocalRequires checks that the bundled JavaScript contains no local require() statements -// that weren't inlined during bundling. This prevents runtime errors from missing local modules. -// Returns an error if any local requires are found, otherwise returns nil -func validateNoLocalRequires(bundledContent string) error { - bundlerSafetyLog.Printf("Validating bundled JavaScript: %d bytes, %d lines", len(bundledContent), strings.Count(bundledContent, "\n")+1) - - // Regular expression to match local require statements - // Matches: require('./...') or require("../...") - localRequireRegex := regexp.MustCompile(`require\(['"](\.\.?/[^'"]+)['"]\)`) - - lines := strings.Split(bundledContent, "\n") - var foundRequires []string - - for lineNum, line := range lines { - // Check for local requires - matches := localRequireRegex.FindAllStringSubmatch(line, -1) - for _, match := range matches { - if len(match) > 1 { - requirePath := match[1] - foundRequires = append(foundRequires, fmt.Sprintf("line %d: require('%s')", lineNum+1, requirePath)) - } - } - } - - if len(foundRequires) > 0 { - bundlerSafetyLog.Printf("Validation failed: found %d un-inlined local require statements", len(foundRequires)) - return NewValidationError( - "bundled-javascript", - fmt.Sprintf("%d un-inlined requires", len(foundRequires)), - "bundled 
JavaScript contains local require() statements that were not inlined during bundling", - fmt.Sprintf("Found un-inlined requires:\n\n%s\n\nThis indicates a bundling failure. Check:\n1. All required files are in actions/setup/js/\n2. Bundler configuration includes all dependencies\n3. No circular dependencies exist\n\nRun 'make build' to regenerate bundles", strings.Join(foundRequires, "\n")), - ) - } - - bundlerSafetyLog.Print("Validation successful: no local require statements found") - return nil -} - -// validateNoModuleReferences checks that the bundled JavaScript contains no module.exports or exports references -// This is required for GitHub Script mode where no module system exists. -// Returns an error if any module references are found, otherwise returns nil -func validateNoModuleReferences(bundledContent string) error { - bundlerSafetyLog.Printf("Validating no module references: %d bytes", len(bundledContent)) - - lines := strings.Split(bundledContent, "\n") - var foundReferences []string - - for lineNum, line := range lines { - trimmed := strings.TrimSpace(line) - - // Skip comment lines - if strings.HasPrefix(trimmed, "//") || strings.HasPrefix(trimmed, "/*") || strings.HasPrefix(trimmed, "*") { - continue - } - - // Check for module.exports - if moduleExportsRegex.MatchString(line) { - foundReferences = append(foundReferences, fmt.Sprintf("line %d: module.exports reference", lineNum+1)) - } - - // Check for exports. 
- if exportsRegex.MatchString(line) { - foundReferences = append(foundReferences, fmt.Sprintf("line %d: exports reference", lineNum+1)) - } - } - - if len(foundReferences) > 0 { - bundlerSafetyLog.Printf("Validation failed: found %d module references", len(foundReferences)) - return NewValidationError( - "bundled-javascript", - fmt.Sprintf("%d module references", len(foundReferences)), - "bundled JavaScript for GitHub Script mode contains module.exports or exports references", - fmt.Sprintf("Found module references:\n\n%s\n\nGitHub Script mode does not support CommonJS module system. Check:\n1. Bundle configuration removes module references\n2. Code doesn't use module.exports or exports\n3. Using appropriate runtime mode (consider 'nodejs' mode if module system is needed)\n\nRun 'make build' to regenerate bundles", strings.Join(foundReferences, "\n")), - ) - } - - bundlerSafetyLog.Print("Validation successful: no module references found") - return nil -} - -// ValidateEmbeddedResourceRequires checks that all embedded JavaScript files in the sources map -// have their local require() dependencies available in the sources map. This prevents bundling failures -// when a file requires a local module that isn't embedded. -// -// This validation helps catch missing files in GetJavaScriptSources() at build/test time rather than -// at runtime when bundling fails. 
-// -// Parameters: -// - sources: map of file paths to their content (from GetJavaScriptSources()) -// -// Returns an error if any embedded file has local requires that reference files not in sources -func ValidateEmbeddedResourceRequires(sources map[string]string) error { - bundlerSafetyLog.Printf("Validating embedded resources: checking %d files for missing local requires", len(sources)) - - // Regular expression to match local require statements - // Matches: require('./...') or require("../...") - localRequireRegex := regexp.MustCompile(`require\(['"](\.\.?/[^'"]+)['"]\)`) - - var missingDeps []string - - // Check each file in sources - for filePath, content := range sources { - bundlerSafetyLog.Printf("Checking file: %s (%d bytes)", filePath, len(content)) - - // Find all local requires in this file - matches := localRequireRegex.FindAllStringSubmatch(content, -1) - if len(matches) == 0 { - continue - } - - bundlerSafetyLog.Printf("Found %d require statements in %s", len(matches), filePath) - - // Check each require - for _, match := range matches { - if len(match) <= 1 { - continue - } - - requirePath := match[1] - - // Resolve the required file path relative to the current file - currentDir := "" - if strings.Contains(filePath, "/") { - parts := strings.Split(filePath, "/") - currentDir = strings.Join(parts[:len(parts)-1], "/") - } - - var resolvedPath string - if currentDir == "" { - resolvedPath = requirePath - } else { - resolvedPath = currentDir + "/" + requirePath - } - - // Ensure .cjs extension - if !strings.HasSuffix(resolvedPath, ".cjs") && !strings.HasSuffix(resolvedPath, ".js") { - resolvedPath += ".cjs" - } - - // Normalize the path (remove ./ and ../) - resolvedPath = stringutil.NormalizePath(resolvedPath) - - // Check if the required file exists in sources - if _, ok := sources[resolvedPath]; !ok { - missingDep := fmt.Sprintf("%s requires '%s' (resolved to '%s') but it's not in sources map", - filePath, requirePath, resolvedPath) - missingDeps 
= append(missingDeps, missingDep) - bundlerSafetyLog.Printf("Missing dependency: %s", missingDep) - } else { - bundlerSafetyLog.Printf("Dependency OK: %s -> %s", filePath, resolvedPath) - } - } - } - - if len(missingDeps) > 0 { - bundlerSafetyLog.Printf("Validation failed: found %d missing dependencies", len(missingDeps)) - return NewValidationError( - "embedded-javascript", - fmt.Sprintf("%d missing dependencies", len(missingDeps)), - "embedded JavaScript files have missing local require() dependencies", - fmt.Sprintf("Missing dependencies:\n\n%s\n\nTo fix:\n1. Add missing .cjs files to actions/setup/js/\n2. Update GetJavaScriptSources() in pkg/workflow/js.go to include them\n3. Ensure file paths match require() statements\n4. Run 'make build' to regenerate bundles\n\nExample:\n//go:embed actions/setup/js/missing-file.cjs\nvar missingFileSource string", strings.Join(missingDeps, "\n")), - ) - } - - bundlerSafetyLog.Printf("Validation successful: all local requires are available in sources") - return nil -} diff --git a/pkg/workflow/bundler_scope_mixing_test.go b/pkg/workflow/bundler_scope_mixing_test.go deleted file mode 100644 index bc404d7aee..0000000000 --- a/pkg/workflow/bundler_scope_mixing_test.go +++ /dev/null @@ -1,13 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" -) - -// TestBundleJavaScriptWithMixedScopeRequires tests bundler functionality -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptWithMixedScopeRequires(t *testing.T) { - t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime") -} diff --git a/pkg/workflow/bundler_scope_narrowing_test.go b/pkg/workflow/bundler_scope_narrowing_test.go deleted file mode 100644 index e03b010ea1..0000000000 --- a/pkg/workflow/bundler_scope_narrowing_test.go +++ /dev/null @@ -1,13 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" -) - -// 
TestBundleJavaScriptScopeNarrowing tests bundler functionality -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptScopeNarrowing(t *testing.T) { - t.Skip("Bundler tests skipped - scripts now use require() pattern to load external files at runtime") -} diff --git a/pkg/workflow/bundler_script_validation.go b/pkg/workflow/bundler_script_validation.go deleted file mode 100644 index 17211e686b..0000000000 --- a/pkg/workflow/bundler_script_validation.go +++ /dev/null @@ -1,149 +0,0 @@ -// This file provides JavaScript script content validation for agentic workflows. -// -// # Script Content Validation -// -// This file validates JavaScript script content to ensure compatibility with runtime modes -// and adherence to platform conventions. Validation enforces proper API usage patterns -// for GitHub Script mode vs Node.js mode. -// -// # Validation Functions -// -// - validateNoExecSync() - Ensures GitHub Script mode scripts use exec instead of execSync -// - validateNoGitHubScriptGlobals() - Ensures Node.js scripts don't use GitHub Actions globals -// -// # Design Rationale -// -// The script content validation enforces two key constraints: -// 1. GitHub Script mode: Should not use execSync (use async exec from @actions/exec instead) -// 2. Node.js mode: Should not use GitHub Actions globals (core.*, exec.*, github.*) -// -// These rules ensure that scripts follow platform conventions: -// - GitHub Script mode runs inline in GitHub Actions YAML with GitHub-specific globals available -// - Node.js mode runs as standalone scripts with standard Node.js APIs only -// -// Validation happens at registration time (via panic) to catch errors during development/testing -// rather than at runtime. 
-// -// # When to Add Validation Here -// -// Add validation to this file when: -// - It validates JavaScript code content based on runtime mode -// - It checks for API usage patterns (execSync, GitHub Actions globals) -// - It validates script content for compatibility with execution environment -// -// For bundling functions, see bundler.go. -// For bundle safety validation, see bundler_safety_validation.go. -// For runtime mode validation, see bundler_runtime_validation.go. -// For general validation, see validation.go. -// For detailed documentation, see scratchpad/validation-architecture.md - -package workflow - -import ( - "fmt" - "regexp" - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var bundlerScriptLog = logger.New("workflow:bundler_script_validation") - -// validateNoExecSync checks that GitHub Script mode scripts do not use execSync -// GitHub Script mode should use exec instead for better async/await handling -// Returns an error if execSync is found, otherwise returns nil -func validateNoExecSync(scriptName string, content string, mode RuntimeMode) error { - // Only validate GitHub Script mode - if mode != RuntimeModeGitHubScript { - return nil - } - - bundlerScriptLog.Printf("Validating no execSync in GitHub Script: %s (%d bytes)", scriptName, len(content)) - - // Regular expression to match execSync usage - // Matches: execSync(...) 
with various patterns - execSyncRegex := regexp.MustCompile(`\bexecSync\s*\(`) - - lines := strings.Split(content, "\n") - var foundUsages []string - - for lineNum, line := range lines { - trimmed := strings.TrimSpace(line) - - // Skip comment lines - if strings.HasPrefix(trimmed, "//") || strings.HasPrefix(trimmed, "/*") || strings.HasPrefix(trimmed, "*") { - continue - } - - // Check for execSync usage - if execSyncRegex.MatchString(line) { - foundUsages = append(foundUsages, fmt.Sprintf("line %d: %s", lineNum+1, strings.TrimSpace(line))) - } - } - - if len(foundUsages) > 0 { - bundlerScriptLog.Printf("Validation failed: found %d execSync usage(s) in %s", len(foundUsages), scriptName) - return fmt.Errorf("GitHub Script mode script '%s' contains %d execSync usage(s):\n %s\n\nGitHub Script mode should use exec instead of execSync for better async/await handling", - scriptName, len(foundUsages), strings.Join(foundUsages, "\n ")) - } - - bundlerScriptLog.Printf("Validation successful: no execSync usage found in %s", scriptName) - return nil -} - -// validateNoGitHubScriptGlobals checks that Node.js mode scripts do not use GitHub Actions globals -// Node.js scripts should not rely on actions/github-script globals like core.*, exec.*, or github.* -// Returns an error if GitHub Actions globals are found, otherwise returns nil -func validateNoGitHubScriptGlobals(scriptName string, content string, mode RuntimeMode) error { - // Only validate Node.js mode - if mode != RuntimeModeNodeJS { - return nil - } - - bundlerScriptLog.Printf("Validating no GitHub Actions globals in Node.js script: %s (%d bytes)", scriptName, len(content)) - - // Regular expressions to match GitHub Actions globals - // Matches: core.method, exec.method, github.property - coreGlobalRegex := regexp.MustCompile(`\bcore\.\w+`) - execGlobalRegex := regexp.MustCompile(`\bexec\.\w+`) - githubGlobalRegex := regexp.MustCompile(`\bgithub\.\w+`) - - lines := strings.Split(content, "\n") - var foundUsages 
[]string - - for lineNum, line := range lines { - trimmed := strings.TrimSpace(line) - - // Skip comment lines and type references - if strings.HasPrefix(trimmed, "//") || strings.HasPrefix(trimmed, "/*") || strings.HasPrefix(trimmed, "*") { - continue - } - if strings.Contains(trimmed, "/// <reference") { - continue - } - - // Check for GitHub Actions global usage - if coreGlobalRegex.MatchString(line) { - foundUsages = append(foundUsages, fmt.Sprintf("line %d: %s", lineNum+1, trimmed)) - } - if execGlobalRegex.MatchString(line) { - foundUsages = append(foundUsages, fmt.Sprintf("line %d: %s", lineNum+1, trimmed)) - } - if githubGlobalRegex.MatchString(line) { - foundUsages = append(foundUsages, fmt.Sprintf("line %d: %s", lineNum+1, trimmed)) - } - } - - if len(foundUsages) > 0 { - bundlerScriptLog.Printf("Validation failed: found %d GitHub Actions global usage(s) in %s", len(foundUsages), scriptName) - return fmt.Errorf("node.js mode script '%s' contains %d GitHub Actions global usage(s):\n %s\n\nNode.js scripts should not use GitHub Actions globals (core.*, exec.*, github.*)", - scriptName, len(foundUsages), strings.Join(foundUsages, "\n ")) - } - - bundlerScriptLog.Printf("Validation successful: no GitHub Actions globals found in %s", scriptName) - return nil -} diff --git a/pkg/workflow/bundler_script_validation_test.go b/pkg/workflow/bundler_script_validation_test.go deleted file mode 100644 index e9f767f315..0000000000 --- a/pkg/workflow/bundler_script_validation_test.go +++ /dev/null @@ -1,244 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestValidateNoExecSync_GitHubScriptMode(t *testing.T) { - tests := []struct { - name string - scriptName string - content string - mode RuntimeMode - expectError bool - }{ - { - name: "GitHub Script mode with execSync should fail", - scriptName: "test_script", - content: ` -const { execSync } = require("child_process"); -const result = execSync("ls -la"); -`, - mode: RuntimeModeGitHubScript, - expectError: true, - }, - { - name: "GitHub Script mode with exec should pass", - scriptName: "test_script", - content: ` -const { exec } = require("@actions/exec"); -await exec.exec("ls -la"); -`, - mode: RuntimeModeGitHubScript, - expectError: false, - }, - { - name: "GitHub Script mode without exec should pass", - scriptName: "test_script", - content: ` -const fs = require("fs"); -const data = 
fs.readFileSync("file.txt"); -`, - mode: RuntimeModeGitHubScript, - expectError: false, - }, - { - name: "Node.js mode with execSync should pass (not checked)", - scriptName: "test_script", - content: ` -const { execSync } = require("child_process"); -const result = execSync("ls -la"); -`, - mode: RuntimeModeNodeJS, - expectError: false, - }, - { - name: "GitHub Script mode with execSync in comment should pass", - scriptName: "test_script", - content: ` -// Don't use execSync, use exec instead -const { exec } = require("@actions/exec"); -`, - mode: RuntimeModeGitHubScript, - expectError: false, - }, - { - name: "GitHub Script mode with multiple execSync calls should fail", - scriptName: "test_script", - content: ` -const { execSync } = require("child_process"); -execSync("git status"); -const output = execSync("git diff"); -`, - mode: RuntimeModeGitHubScript, - expectError: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - err := validateNoExecSync(tt.scriptName, tt.content, tt.mode) - if tt.expectError { - require.Error(t, err, "Expected validation to fail") - assert.Contains(t, err.Error(), "execSync", "Error should mention execSync") - } else { - assert.NoError(t, err, "Expected validation to pass") - } - }) - } -} - -func TestValidateNoGitHubScriptGlobals_NodeJSMode(t *testing.T) { - tests := []struct { - name string - scriptName string - content string - mode RuntimeMode - expectError bool - }{ - { - name: "Node.js mode with core.* should fail", - scriptName: "test_script", - content: ` -const fs = require("fs"); -core.info("This is a message"); -`, - mode: RuntimeModeNodeJS, - expectError: true, - }, - { - name: "Node.js mode with exec.* should fail", - scriptName: "test_script", - content: ` -const fs = require("fs"); -await exec.exec("ls -la"); -`, - mode: RuntimeModeNodeJS, - expectError: true, - }, - { - name: "Node.js mode with github.* should fail", - scriptName: "test_script", - content: ` -const fs = require("fs"); 
-const repo = github.context.repo; -`, - mode: RuntimeModeNodeJS, - expectError: true, - }, - { - name: "Node.js mode without GitHub Actions globals should pass", - scriptName: "test_script", - content: ` -const fs = require("fs"); -const data = fs.readFileSync("file.txt"); -console.log("Processing data"); -`, - mode: RuntimeModeNodeJS, - expectError: false, - }, - { - name: "GitHub Script mode with core.* should pass (not checked)", - scriptName: "test_script", - content: ` -core.info("This is a message"); -core.setOutput("result", "value"); -`, - mode: RuntimeModeGitHubScript, - expectError: false, - }, - { - name: "Node.js mode with GitHub Actions globals in comment should pass", - scriptName: "test_script", - content: ` -// Don't use core.info in Node.js scripts -console.log("Use console.log instead"); -`, - mode: RuntimeModeNodeJS, - expectError: false, - }, - { - name: "Node.js mode with type reference should pass", - scriptName: "test_script", - content: ` -/// <reference types="node" /> -const fs = require("fs"); -`, - mode: RuntimeModeNodeJS, - expectError: false, - }, - { - name: "Node.js mode with multiple GitHub Actions globals should fail", - scriptName: "test_script", - content: ` -const fs = require("fs"); -core.info("Message"); -exec.exec("ls"); -const repo = github.context.repo; -`, - mode: RuntimeModeNodeJS, - expectError: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - err := validateNoGitHubScriptGlobals(tt.scriptName, tt.content, tt.mode) - if tt.expectError { - assert.Error(t, err, "Expected validation to fail") - } else { - assert.NoError(t, err, "Expected validation to pass") - } - }) - } -} - -func TestScriptRegistry_RegisterWithMode_Validation(t *testing.T) { - t.Run("GitHub Script mode with execSync should return error", func(t *testing.T) { - registry := NewScriptRegistry() - invalidScript := ` -const { execSync } = require("child_process"); -execSync("ls -la"); -` - err := registry.RegisterWithMode("invalid_script", 
invalidScript, RuntimeModeGitHubScript) - require.Error(t, err, "Should return error when registering GitHub Script with execSync") - assert.Contains(t, err.Error(), "execSync", "Error should mention execSync") - }) - - t.Run("Node.js mode with GitHub Actions globals should return error", func(t *testing.T) { - registry := NewScriptRegistry() - invalidScript := ` -const fs = require("fs"); -core.info("This should not be here"); -` - err := registry.RegisterWithMode("invalid_script", invalidScript, RuntimeModeNodeJS) - require.Error(t, err, "Should return error when registering Node.js script with GitHub Actions globals") - assert.Contains(t, err.Error(), "GitHub Actions global", "Error should mention GitHub Actions globals") - }) - - t.Run("Valid GitHub Script mode should not return error", func(t *testing.T) { - registry := NewScriptRegistry() - validScript := ` -const { exec } = require("@actions/exec"); -core.info("This is valid for GitHub Script mode"); -` - err := registry.RegisterWithMode("valid_script", validScript, RuntimeModeGitHubScript) - assert.NoError(t, err, "Should not return error with valid GitHub Script") - }) - - t.Run("Valid Node.js mode should not return error", func(t *testing.T) { - registry := NewScriptRegistry() - validScript := ` -const fs = require("fs"); -const { execSync } = require("child_process"); -console.log("This is valid for Node.js mode"); -` - err := registry.RegisterWithMode("valid_script", validScript, RuntimeModeNodeJS) - assert.NoError(t, err, "Should not return error with valid Node.js script") - }) -} diff --git a/pkg/workflow/bundler_test.go b/pkg/workflow/bundler_test.go deleted file mode 100644 index 7f11094316..0000000000 --- a/pkg/workflow/bundler_test.go +++ /dev/null @@ -1,79 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" -) - -// TestBundleJavaScriptFromSources tests bundling JavaScript from source map -// SKIPPED: Scripts are now loaded from external files at runtime using require() 
pattern -func TestBundleJavaScriptFromSources(t *testing.T) { - t.Skip("JavaScript bundling tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptFromSourcesWithoutRequires tests bundling without requires -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptFromSourcesWithoutRequires(t *testing.T) { - t.Skip("JavaScript bundling without requires tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestRemoveExports tests removing exports from JavaScript -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestRemoveExports(t *testing.T) { - t.Skip("Remove exports tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptFromSourcesWithMultipleRequires tests bundling with multiple requires -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptFromSourcesWithMultipleRequires(t *testing.T) { - t.Skip("JavaScript bundling with multiple requires tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptFromSourcesWithNestedPath tests bundling with nested paths -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptFromSourcesWithNestedPath(t *testing.T) { - t.Skip("JavaScript bundling with nested paths tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestValidateNoLocalRequires tests validation that no local requires remain -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestValidateNoLocalRequires(t *testing.T) { - t.Skip("Validate no local requires tests skipped - scripts now use require() pattern to load external files at runtime") 
-} - -// TestBundleJavaScriptValidationSuccess tests successful validation -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptValidationSuccess(t *testing.T) { - t.Skip("JavaScript bundling validation success tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptValidationFailure tests validation failure handling -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptValidationFailure(t *testing.T) { - t.Skip("JavaScript bundling validation failure tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptWithNpmPackages tests bundling with npm packages -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptWithNpmPackages(t *testing.T) { - t.Skip("JavaScript bundling with npm packages tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestRemoveExportsMultiLine tests removing multi-line exports -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestRemoveExportsMultiLine(t *testing.T) { - t.Skip("Remove multi-line exports tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestRemoveExportsConditional tests removing conditional exports -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestRemoveExportsConditional(t *testing.T) { - t.Skip("Remove conditional exports tests skipped - scripts now use require() pattern to load external files at runtime") -} - -// TestBundleJavaScriptMergesDestructuredImports tests merging destructured imports -// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern -func TestBundleJavaScriptMergesDestructuredImports(t 
*testing.T) { - t.Skip("JavaScript bundling destructured imports tests skipped - scripts now use require() pattern to load external files at runtime") -} diff --git a/pkg/workflow/compiler_custom_actions_test.go b/pkg/workflow/compiler_custom_actions_test.go index 33c61a5c73..b09c327e02 100644 --- a/pkg/workflow/compiler_custom_actions_test.go +++ b/pkg/workflow/compiler_custom_actions_test.go @@ -8,7 +8,6 @@ import ( "testing" "github.com/github/gh-aw/pkg/stringutil" - "github.com/stretchr/testify/require" ) // TestActionModeValidation tests the ActionMode type validation @@ -101,130 +100,6 @@ func TestActionModeIsScript(t *testing.T) { } } -// TestScriptRegistryWithAction tests registering scripts with action paths -func TestScriptRegistryWithAction(t *testing.T) { - registry := NewScriptRegistry() - - testScript := `console.log('test');` - actionPath := "./actions/test-action" - - err := registry.RegisterWithAction("test_script", testScript, RuntimeModeGitHubScript, actionPath) - require.NoError(t, err) - - if !registry.Has("test_script") { - t.Error("Script should be registered") - } - - if got := registry.GetActionPath("test_script"); got != actionPath { - t.Errorf("Expected action path %q, got %q", actionPath, got) - } - - if got := registry.GetSource("test_script"); got != testScript { - t.Errorf("Expected source %q, got %q", testScript, got) - } -} - -// TestScriptRegistryActionPathEmpty tests that scripts without action paths return empty string -func TestScriptRegistryActionPathEmpty(t *testing.T) { - registry := NewScriptRegistry() - - testScript := `console.log('test');` - registry.Register("test_script", testScript) - - if got := registry.GetActionPath("test_script"); got != "" { - t.Errorf("Expected empty action path, got %q", got) - } -} - -// TestCustomActionModeCompilation tests workflow compilation with custom action mode -func TestCustomActionModeCompilation(t *testing.T) { - // Create a temporary directory for the test - tempDir := t.TempDir() - 
-	// Create a test workflow file
-	workflowContent := `---
-name: Test Custom Actions
-on: issues
-safe-outputs:
-  create-issue:
-    max: 1
----
-
-Test workflow with safe-outputs.
-`
-
-	workflowPath := tempDir + "/test-workflow.md"
-	if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil {
-		t.Fatalf("Failed to write test workflow: %v", err)
-	}
-
-	// Register a test script with an action path
-	// Save original state first
-	origSource := DefaultScriptRegistry.GetSource("create_issue")
-	origActionPath := DefaultScriptRegistry.GetActionPath("create_issue")
-
-	testScript := `
-const { core } = require('@actions/core');
-core.info('Creating issue');
-`
-	err := DefaultScriptRegistry.RegisterWithAction(
-		"create_issue",
-		testScript,
-		RuntimeModeGitHubScript,
-		"./actions/create-issue",
-	)
-	require.NoError(t, err)
-
-	// Restore after test
-	defer func() {
-		if origSource != "" {
-			if origActionPath != "" {
-				_ = DefaultScriptRegistry.RegisterWithAction("create_issue", origSource, RuntimeModeGitHubScript, origActionPath)
-			} else {
-				_ = DefaultScriptRegistry.RegisterWithMode("create_issue", origSource, RuntimeModeGitHubScript)
-			}
-		}
-	}()
-
-	// Compile with dev action mode
-	compiler := NewCompilerWithVersion("1.0.0")
-	compiler.SetActionMode(ActionModeDev)
-	compiler.SetNoEmit(false)
-
-	if err := compiler.CompileWorkflow(workflowPath); err != nil {
-		t.Fatalf("Compilation failed: %v", err)
-	}
-
-	// Read the generated lock file
-	lockPath := stringutil.MarkdownToLockFile(workflowPath)
-	lockContent, err := os.ReadFile(lockPath)
-	if err != nil {
-		t.Fatalf("Failed to read lock file: %v", err)
-	}
-
-	lockStr := string(lockContent)
-
-	// Verify safe_outputs job exists (consolidated mode)
-	found := strings.Contains(lockStr, "safe_outputs:")
-	if !found {
-		t.Fatal("safe_outputs job not found in lock file")
-	}
-
-	// Verify handler manager step is present (create_issue is now handled by handler manager)
-	if !strings.Contains(lockStr, "id: process_safe_outputs") {
-		t.Error("Expected process_safe_outputs step in compiled workflow (create-issue is now handled by handler manager)")
-	}
-	// Verify handler config contains create_issue
-	if !strings.Contains(lockStr, "create_issue") {
-		t.Error("Expected create_issue in handler config")
-	}
-
-	// Verify the workflow compiles successfully with custom action mode
-	if !strings.Contains(lockStr, "actions/github-script") {
-		t.Error("Expected github-script action in compiled workflow")
-	}
-}
-
 // TestInlineActionModeCompilation tests workflow compilation with inline mode (default)
 func TestInlineActionModeCompilation(t *testing.T) {
 	// Create a temporary directory for the test
@@ -281,73 +156,6 @@ Test workflow with dev mode.
 	}
 }
 
-// TestCustomActionModeFallback tests that compilation falls back to inline mode
-// when action path is not registered
-func TestCustomActionModeFallback(t *testing.T) {
-	// Create a temporary directory for the test
-	tempDir := t.TempDir()
-
-	// Create a test workflow file
-	workflowContent := `---
-name: Test Fallback
-on: issues
-safe-outputs:
-  create-issue:
-    max: 1
----
-
-Test fallback to inline mode.
-`
-
-	workflowPath := tempDir + "/test-workflow.md"
-	if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil {
-		t.Fatalf("Failed to write test workflow: %v", err)
-	}
-
-	// Ensure create_issue is registered without an action path
-	// Save original state first
-	origSource := DefaultScriptRegistry.GetSource("create_issue")
-	origActionPath := DefaultScriptRegistry.GetActionPath("create_issue")
-
-	testScript := `console.log('test');`
-	err := DefaultScriptRegistry.RegisterWithMode("create_issue", testScript, RuntimeModeGitHubScript)
-	require.NoError(t, err)
-
-	// Restore after test
-	defer func() {
-		if origSource != "" {
-			if origActionPath != "" {
-				_ = DefaultScriptRegistry.RegisterWithAction("create_issue", origSource, RuntimeModeGitHubScript, origActionPath)
-			} else {
-				_ = DefaultScriptRegistry.RegisterWithMode("create_issue", origSource, RuntimeModeGitHubScript)
-			}
-		}
-	}()
-
-	// Compile with dev action mode
-	compiler := NewCompilerWithVersion("1.0.0")
-	compiler.SetActionMode(ActionModeDev)
-	compiler.SetNoEmit(false)
-
-	if err := compiler.CompileWorkflow(workflowPath); err != nil {
-		t.Fatalf("Compilation failed: %v", err)
-	}
-
-	// Read the generated lock file
-	lockPath := stringutil.MarkdownToLockFile(workflowPath)
-	lockContent, err := os.ReadFile(lockPath)
-	if err != nil {
-		t.Fatalf("Failed to read lock file: %v", err)
-	}
-
-	lockStr := string(lockContent)
-
-	// Verify it falls back to actions/github-script when action path is not found
-	if !strings.Contains(lockStr, "actions/github-script@") {
-		t.Error("Expected fallback to 'actions/github-script@' when action path not found")
-	}
-}
-
 // TestScriptActionModeCompilation tests workflow compilation with script mode
 func TestScriptActionModeCompilation(t *testing.T) {
 	// Create a temporary directory for the test
diff --git a/pkg/workflow/copilot_participant_steps.go b/pkg/workflow/copilot_participant_steps.go
deleted file mode 100644
index d7fa717f81..0000000000
--- a/pkg/workflow/copilot_participant_steps.go
+++ /dev/null
@@ -1,153 +0,0 @@
-package workflow
-
-import (
-	"fmt"
-	"slices"
-
-	"github.com/github/gh-aw/pkg/logger"
-)
-
-var copilotParticipantLog = logger.New("workflow:copilot_participant_steps")
-
-// CopilotParticipantConfig holds configuration for generating Copilot participant steps
-type CopilotParticipantConfig struct {
-	// Participants is the list of users/bots to assign/review
-	Participants []string
-	// ParticipantType is either "assignee" or "reviewer"
-	ParticipantType string
-	// CustomToken is the custom GitHub token from the safe output config
-	CustomToken string
-	// SafeOutputsToken is the GitHub token from the safe-outputs config
-	SafeOutputsToken string
-	// ConditionStepID is the step ID to check for output (e.g., "create_issue", "create_pull_request")
-	ConditionStepID string
-	// ConditionOutputKey is the output key to check (e.g., "issue_number", "pull_request_url")
-	ConditionOutputKey string
-}
-
-// buildCopilotParticipantSteps generates steps for adding Copilot participants (assignees or reviewers)
-// This function extracts the common logic between issue assignees and PR reviewers
-func buildCopilotParticipantSteps(config CopilotParticipantConfig) []string {
-	copilotParticipantLog.Printf("Building Copilot participant steps: type=%s, count=%d", config.ParticipantType, len(config.Participants))
-
-	if len(config.Participants) == 0 {
-		copilotParticipantLog.Print("No participants to add, returning empty steps")
-		return nil
-	}
-
-	var steps []string
-
-	// Add checkout step for gh CLI to work
-	steps = append(steps, "      - name: Checkout repository for gh CLI\n")
-	steps = append(steps, fmt.Sprintf("        if: steps.%s.outputs.%s != ''\n", config.ConditionStepID, config.ConditionOutputKey))
-	steps = append(steps, fmt.Sprintf("        uses: %s\n", GetActionPin("actions/checkout")))
-	steps = append(steps, "        with:\n")
-	steps = append(steps, "          persist-credentials: false\n")
-
-	// Check if any participant is "copilot" to determine token preference
-	hasCopilotParticipant := slices.Contains(config.Participants, "copilot")
-
-	// Choose the first non-empty custom token for precedence
-	effectiveCustomToken := config.CustomToken
-	if effectiveCustomToken == "" {
-		effectiveCustomToken = config.SafeOutputsToken
-	}
-
-	// Use agent token preference if adding copilot as participant, otherwise use regular token
-	var effectiveToken string
-	if hasCopilotParticipant {
-		copilotParticipantLog.Print("Using Copilot coding agent token preference")
-		effectiveToken = getEffectiveCopilotCodingAgentGitHubToken(effectiveCustomToken)
-	} else {
-		copilotParticipantLog.Print("Using regular GitHub token")
-		effectiveToken = getEffectiveGitHubToken(effectiveCustomToken)
-	}
-
-	// Generate participant-specific steps
-	switch config.ParticipantType {
-	case "assignee":
-		copilotParticipantLog.Printf("Generating issue assignee steps for %d participants", len(config.Participants))
-		steps = append(steps, buildIssueAssigneeSteps(config, effectiveToken)...)
-	case "reviewer":
-		copilotParticipantLog.Printf("Generating PR reviewer steps for %d participants", len(config.Participants))
-		steps = append(steps, buildPRReviewerSteps(config, effectiveToken)...)
-	}
-
-	return steps
-}
-
-// buildIssueAssigneeSteps generates steps for assigning issues
-func buildIssueAssigneeSteps(config CopilotParticipantConfig, effectiveToken string) []string {
-	var steps []string
-
-	for i, assignee := range config.Participants {
-		// Special handling: "copilot" should be passed as "@copilot" to gh CLI
-		actualAssignee := assignee
-		if assignee == "copilot" {
-			actualAssignee = "@copilot"
-		}
-
-		steps = append(steps, fmt.Sprintf("      - name: Assign issue to %s\n", assignee))
-		steps = append(steps, fmt.Sprintf("        if: steps.%s.outputs.%s != ''\n", config.ConditionStepID, config.ConditionOutputKey))
-		steps = append(steps, fmt.Sprintf("        uses: %s\n", GetActionPin("actions/github-script")))
-		steps = append(steps, "        env:\n")
-		steps = append(steps, fmt.Sprintf("          GH_TOKEN: %s\n", effectiveToken))
-		steps = append(steps, fmt.Sprintf("          ASSIGNEE: %q\n", actualAssignee))
-		steps = append(steps, fmt.Sprintf("          ISSUE_NUMBER: ${{ steps.%s.outputs.%s }}\n", config.ConditionStepID, config.ConditionOutputKey))
-		steps = append(steps, "        with:\n")
-		steps = append(steps, "          script: |\n")
-		steps = append(steps, "            const { setupGlobals } = require('"+SetupActionDestination+"/setup_globals.cjs');\n")
-		steps = append(steps, "            setupGlobals(core, github, context, exec, io);\n")
-		// Load script from external file using require()
-		steps = append(steps, "            const { main } = require('/opt/gh-aw/actions/assign_issue.cjs');\n")
-		steps = append(steps, "            await main({ github, context, core, exec, io });\n")
-
-		// Add a comment after each assignee step except the last
-		if i < len(config.Participants)-1 {
-			steps = append(steps, "\n")
-		}
-	}
-
-	return steps
-}
-
-// buildPRReviewerSteps generates steps for adding PR reviewers
-func buildPRReviewerSteps(config CopilotParticipantConfig, effectiveToken string) []string {
-	var steps []string
-
-	for i, reviewer := range config.Participants {
-		// Special handling: "copilot" uses the GitHub API with "copilot-pull-request-reviewer[bot]"
-		// because gh pr edit --add-reviewer does not support @copilot
-		if reviewer == "copilot" {
-			steps = append(steps, fmt.Sprintf("      - name: Add %s as reviewer\n", reviewer))
-			steps = append(steps, "        if: steps.create_pull_request.outputs.pull_request_number != ''\n")
-			steps = append(steps, fmt.Sprintf("        uses: %s\n", GetActionPin("actions/github-script")))
-			steps = append(steps, "        env:\n")
-			steps = append(steps, "          PR_NUMBER: ${{ steps.create_pull_request.outputs.pull_request_number }}\n")
-			steps = append(steps, "        with:\n")
-			steps = append(steps, fmt.Sprintf("          github-token: %s\n", effectiveToken))
-			steps = append(steps, "          script: |\n")
-			steps = append(steps, "            const { setupGlobals } = require('"+SetupActionDestination+"/setup_globals.cjs');\n")
-			steps = append(steps, "            setupGlobals(core, github, context, exec, io);\n")
-			// Load script from external file using require()
-			steps = append(steps, "            const { main } = require('/opt/gh-aw/actions/add_copilot_reviewer.cjs');\n")
-			steps = append(steps, "            await main({ github, context, core, exec, io });\n")
-		} else {
-			steps = append(steps, fmt.Sprintf("      - name: Add %s as reviewer\n", reviewer))
-			steps = append(steps, "        if: steps.create_pull_request.outputs.pull_request_url != ''\n")
-			steps = append(steps, "        env:\n")
-			steps = append(steps, fmt.Sprintf("          GH_TOKEN: %s\n", effectiveToken))
-			steps = append(steps, fmt.Sprintf("          REVIEWER: %q\n", reviewer))
-			steps = append(steps, "          PR_URL: ${{ steps.create_pull_request.outputs.pull_request_url }}\n")
-			steps = append(steps, "        run: |\n")
-			steps = append(steps, "          gh pr edit \"$PR_URL\" --add-reviewer \"$REVIEWER\"\n")
-		}
-
-		// Add a comment after each reviewer step except the last
-		if i < len(config.Participants)-1 {
-			steps = append(steps, "\n")
-		}
-	}
-
-	return steps
-}
diff --git a/pkg/workflow/copilot_participant_steps_test.go b/pkg/workflow/copilot_participant_steps_test.go
deleted file mode 100644
index 5cc8925d86..0000000000
--- a/pkg/workflow/copilot_participant_steps_test.go
+++ /dev/null
@@ -1,49 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"testing"
-)
-
-// TestBuildCopilotParticipantSteps_EmptyParticipants tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_EmptyParticipants(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_IssueAssignee tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_IssueAssignee(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_CopilotAssignee tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_CopilotAssignee(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_PRReviewer tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_PRReviewer(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_CopilotReviewer tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_CopilotReviewer(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_CustomToken tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_CustomToken(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
-
-// TestBuildCopilotParticipantSteps_MixedParticipants tests workflow functionality
-// SKIPPED: Scripts are now loaded from external files at runtime using require() pattern
-func TestBuildCopilotParticipantSteps_MixedParticipants(t *testing.T) {
-	t.Skip("Workflow tests skipped - scripts now use require() pattern to load external files at runtime")
-}
diff --git a/pkg/workflow/create_issue.go b/pkg/workflow/create_issue.go
index aaccca04ef..6e94d40aff 100644
--- a/pkg/workflow/create_issue.go
+++ b/pkg/workflow/create_issue.go
@@ -1,8 +1,6 @@
 package workflow
 
 import (
-	"errors"
-	"fmt"
 	"slices"
 
 	"github.com/github/gh-aw/pkg/logger"
@@ -97,150 +95,3 @@ func (c *Compiler) parseIssuesConfig(outputMap map[string]any) *CreateIssuesConf
 func hasCopilotAssignee(assignees []string) bool {
 	return slices.Contains(assignees, "copilot")
 }
-
-// filterNonCopilotAssignees returns assignees excluding "copilot"
-func filterNonCopilotAssignees(assignees []string) []string {
-	var result []string
-	for _, a := range assignees {
-		if a != "copilot" {
-			result = append(result, a)
-		}
-	}
-	return result
-}
-
-// buildCopilotCodingAgentAssignmentStep generates a post-step for assigning Copilot coding agent to created issues
-// This step uses the agent token with full precedence chain
-func buildCopilotCodingAgentAssignmentStep(configToken, safeOutputsToken string) []string {
-	var steps []string
-
-	// Choose the first non-empty custom token for precedence
-	effectiveCustomToken := configToken
-	if effectiveCustomToken == "" {
-		effectiveCustomToken = safeOutputsToken
-	}
-
-	// Get the effective agent token with full precedence chain
-	effectiveToken := getEffectiveCopilotCodingAgentGitHubToken(effectiveCustomToken)
-
-	steps = append(steps, "      - name: Assign Copilot to created issues\n")
-	steps = append(steps, "        if: steps.create_issue.outputs.issues_to_assign_copilot != ''\n")
-	steps = append(steps, fmt.Sprintf("        uses: %s\n", GetActionPin("actions/github-script")))
-	steps = append(steps, "        with:\n")
-	steps = append(steps, fmt.Sprintf("          github-token: %s\n", effectiveToken))
-	steps = append(steps, "          script: |\n")
-	steps = append(steps, "            const { setupGlobals } = require('"+SetupActionDestination+"/setup_globals.cjs');\n")
-	steps = append(steps, "            setupGlobals(core, github, context, exec, io);\n")
-	// Load script from external file using require()
-	steps = append(steps, "            const { main } = require('/opt/gh-aw/actions/assign_copilot_to_created_issues.cjs');\n")
-	steps = append(steps, "            await main({ github, context, core, exec, io });\n")
-
-	return steps
-}
-
-// buildCreateOutputIssueJob creates the create_issue job
-func (c *Compiler) buildCreateOutputIssueJob(data *WorkflowData, mainJobName string) (*Job, error) {
-	if data.SafeOutputs == nil || data.SafeOutputs.CreateIssues == nil {
-		return nil, errors.New("safe-outputs.create-issue configuration is required")
-	}
-
-	if createIssueLog.Enabled() {
-		createIssueLog.Printf("Building create-issue job: workflow=%s, main_job=%s, assignees=%d, labels=%d",
-			data.Name, mainJobName, len(data.SafeOutputs.CreateIssues.Assignees), len(data.SafeOutputs.CreateIssues.Labels))
-	}
-
-	// Build custom environment variables specific to create-issue using shared helpers
-	var customEnvVars []string
-	customEnvVars = append(customEnvVars, buildTitlePrefixEnvVar("GH_AW_ISSUE_TITLE_PREFIX", data.SafeOutputs.CreateIssues.TitlePrefix)...)
-	customEnvVars = append(customEnvVars, buildLabelsEnvVar("GH_AW_ISSUE_LABELS", data.SafeOutputs.CreateIssues.Labels)...)
-	customEnvVars = append(customEnvVars, buildLabelsEnvVar("GH_AW_ISSUE_ALLOWED_LABELS", data.SafeOutputs.CreateIssues.AllowedLabels)...)
-	customEnvVars = append(customEnvVars, buildAllowedReposEnvVar("GH_AW_ALLOWED_REPOS", data.SafeOutputs.CreateIssues.AllowedRepos)...)
-
-	// Add expires value if set
-	if data.SafeOutputs.CreateIssues.Expires > 0 {
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_ISSUE_EXPIRES: \"%d\"\n", data.SafeOutputs.CreateIssues.Expires))
-	}
-
-	// Add group flag if set
-	customEnvVars = append(customEnvVars, buildTemplatableBoolEnvVar("GH_AW_ISSUE_GROUP", data.SafeOutputs.CreateIssues.Group)...)
-	if data.SafeOutputs.CreateIssues.Group != nil {
-		createIssueLog.Print("Issue grouping flag set")
-	}
-
-	// Add close-older-issues flag if enabled
-	customEnvVars = append(customEnvVars, buildTemplatableBoolEnvVar("GH_AW_CLOSE_OLDER_ISSUES", data.SafeOutputs.CreateIssues.CloseOlderIssues)...)
-	if data.SafeOutputs.CreateIssues.CloseOlderIssues != nil {
-		createIssueLog.Print("Close older issues flag set")
-	}
-
-	// Add footer flag if explicitly set to false
-	if data.SafeOutputs.CreateIssues.Footer != nil && *data.SafeOutputs.CreateIssues.Footer == "false" {
-		customEnvVars = append(customEnvVars, "          GH_AW_FOOTER: \"false\"\n")
-		createIssueLog.Print("Footer disabled - XML markers will be included but visible footer content will be omitted")
-	}
-
-	// Add standard environment variables (metadata + staged/target repo)
-	customEnvVars = append(customEnvVars, c.buildStandardSafeOutputEnvVars(data, data.SafeOutputs.CreateIssues.TargetRepoSlug)...)
-
-	// Check if copilot is in assignees - if so, we'll output issues for assign_to_agent job
-	assignCopilot := hasCopilotAssignee(data.SafeOutputs.CreateIssues.Assignees)
-	if assignCopilot {
-		customEnvVars = append(customEnvVars, "          GH_AW_ASSIGN_COPILOT: \"true\"\n")
-		createIssueLog.Print("Copilot assignment requested - will output issues_to_assign_copilot for assign_to_agent job")
-	}
-
-	// Build post-steps for non-copilot assignees only
-	// Copilot assignment must be done in a separate step with the agent token
-	var postSteps []string
-
-	// Get the effective GitHub token to use for gh CLI
-	var safeOutputsToken string
-	if data.SafeOutputs != nil {
-		safeOutputsToken = data.SafeOutputs.GitHubToken
-	}
-
-	nonCopilotAssignees := filterNonCopilotAssignees(data.SafeOutputs.CreateIssues.Assignees)
-	if len(nonCopilotAssignees) > 0 {
-		postSteps = buildCopilotParticipantSteps(CopilotParticipantConfig{
-			Participants:       nonCopilotAssignees,
-			ParticipantType:    "assignee",
-			CustomToken:        data.SafeOutputs.CreateIssues.GitHubToken,
-			SafeOutputsToken:   safeOutputsToken,
-			ConditionStepID:    "create_issue",
-			ConditionOutputKey: "issue_number",
-		})
-	}
-
-	// Add post-step for copilot assignment using agent token
-	if assignCopilot {
-		postSteps = append(postSteps, buildCopilotCodingAgentAssignmentStep(data.SafeOutputs.CreateIssues.GitHubToken, safeOutputsToken)...)
-	}
-
-	// Create outputs for the job
-	outputs := map[string]string{
-		"issue_number":     "${{ steps.create_issue.outputs.issue_number }}",
-		"issue_url":        "${{ steps.create_issue.outputs.issue_url }}",
-		"temporary_id_map": "${{ steps.create_issue.outputs.temporary_id_map }}",
-	}
-
-	// Add issues_to_assign_copilot output if copilot assignment is requested
-	if assignCopilot {
-		outputs["issues_to_assign_copilot"] = "${{ steps.create_issue.outputs.issues_to_assign_copilot }}"
-	}
-
-	// Use the shared builder function to create the job
-	return c.buildSafeOutputJob(data, SafeOutputJobConfig{
-		JobName:        "create_issue",
-		StepName:       "Create Output Issue",
-		StepID:         "create_issue",
-		MainJobName:    mainJobName,
-		CustomEnvVars:  customEnvVars,
-		Script:         getCreateIssueScript(),
-		ScriptName:     "create_issue", // For custom action mode
-		Permissions:    NewPermissionsContentsReadIssuesWrite(),
-		Outputs:        outputs,
-		PostSteps:      postSteps,
-		Token:          data.SafeOutputs.CreateIssues.GitHubToken,
-		TargetRepoSlug: data.SafeOutputs.CreateIssues.TargetRepoSlug,
-	})
-}
diff --git a/pkg/workflow/create_pull_request.go b/pkg/workflow/create_pull_request.go
index b14c6f24c3..c38d11f898 100644
--- a/pkg/workflow/create_pull_request.go
+++ b/pkg/workflow/create_pull_request.go
@@ -1,10 +1,6 @@
 package workflow
 
 import (
-	"errors"
-	"fmt"
-
-	"github.com/github/gh-aw/pkg/constants"
 	"github.com/github/gh-aw/pkg/logger"
 )
 
@@ -38,209 +34,6 @@ type CreatePullRequestsConfig struct {
 	GithubTokenForExtraEmptyCommit string `yaml:"github-token-for-extra-empty-commit,omitempty"` // Token used to push an empty commit to trigger CI events. Use a PAT or "app" for GitHub App auth.
 }
 
-// buildCreateOutputPullRequestJob creates the create_pull_request job
-func (c *Compiler) buildCreateOutputPullRequestJob(data *WorkflowData, mainJobName string) (*Job, error) {
-	if data.SafeOutputs == nil || data.SafeOutputs.CreatePullRequests == nil {
-		return nil, errors.New("safe-outputs.create-pull-request configuration is required")
-	}
-
-	if createPRLog.Enabled() {
-		draftValue := "true" // Default
-		if data.SafeOutputs.CreatePullRequests.Draft != nil {
-			draftValue = *data.SafeOutputs.CreatePullRequests.Draft
-		}
-		fallbackAsIssue := getFallbackAsIssue(data.SafeOutputs.CreatePullRequests)
-		createPRLog.Printf("Building create-pull-request job: workflow=%s, main_job=%s, draft=%v, reviewers=%d, fallback_as_issue=%v",
-			data.Name, mainJobName, draftValue, len(data.SafeOutputs.CreatePullRequests.Reviewers), fallbackAsIssue)
-	}
-
-	// Build pre-steps for patch download, checkout, and git config
-	var preSteps []string
-
-	// Step 1: Download patch artifact from unified agent-artifacts
-	preSteps = append(preSteps, "      - name: Download patch artifact\n")
-	preSteps = append(preSteps, "        continue-on-error: true\n")
-	preSteps = append(preSteps, fmt.Sprintf("        uses: %s\n", GetActionPin("actions/download-artifact")))
-	preSteps = append(preSteps, "        with:\n")
-	preSteps = append(preSteps, "          name: agent-artifacts\n")
-	preSteps = append(preSteps, "          path: /tmp/gh-aw/\n")
-
-	// Step 2: Checkout repository
-	// Step 3: Configure Git credentials
-	// Pass the target repo to configure git remote correctly for cross-repo operations
-	// Use token precedence chain instead of hardcoded github.token
-	// Precedence: create-pull-request config token > safe-outputs token > GH_AW_GITHUB_TOKEN || GITHUB_TOKEN
-	var configToken string
-	if data.SafeOutputs.CreatePullRequests != nil {
-		configToken = data.SafeOutputs.CreatePullRequests.GitHubToken
-	}
-	var safeOutputsToken string
-	if data.SafeOutputs != nil {
-		safeOutputsToken = data.SafeOutputs.GitHubToken
-	}
-	// Choose the first non-empty custom token for precedence
-	effectiveCustomToken := configToken
-	if effectiveCustomToken == "" {
-		effectiveCustomToken = safeOutputsToken
-	}
-	// Get effective token (handles fallback to GH_AW_GITHUB_TOKEN || GITHUB_TOKEN)
-	gitToken := getEffectiveSafeOutputGitHubToken(effectiveCustomToken)
-
-	// Use the resolved token for checkout
-	preSteps = buildCheckoutRepository(preSteps, c, data.SafeOutputs.CreatePullRequests.TargetRepoSlug, gitToken)
-
-	preSteps = append(preSteps, c.generateGitConfigurationStepsWithToken(gitToken, data.SafeOutputs.CreatePullRequests.TargetRepoSlug)...)
-
-	// Build custom environment variables specific to create-pull-request
-	var customEnvVars []string
-	// Pass the workflow ID for branch naming
-	customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_WORKFLOW_ID: %q\n", mainJobName))
-	// Pass custom base branch only if explicitly configured; JS will resolve dynamically otherwise
-	if data.SafeOutputs.CreatePullRequests.BaseBranch != "" {
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_CUSTOM_BASE_BRANCH: %q\n", data.SafeOutputs.CreatePullRequests.BaseBranch))
-	}
-	customEnvVars = append(customEnvVars, buildTitlePrefixEnvVar("GH_AW_PR_TITLE_PREFIX", data.SafeOutputs.CreatePullRequests.TitlePrefix)...)
-	customEnvVars = append(customEnvVars, buildLabelsEnvVar("GH_AW_PR_LABELS", data.SafeOutputs.CreatePullRequests.Labels)...)
-	customEnvVars = append(customEnvVars, buildLabelsEnvVar("GH_AW_PR_ALLOWED_LABELS", data.SafeOutputs.CreatePullRequests.AllowedLabels)...)
-	// Pass draft setting - default to true for backwards compatibility
-	if data.SafeOutputs.CreatePullRequests.Draft != nil {
-		customEnvVars = append(customEnvVars, buildTemplatableBoolEnvVar("GH_AW_PR_DRAFT", data.SafeOutputs.CreatePullRequests.Draft)...)
-	} else {
-		customEnvVars = append(customEnvVars, "          GH_AW_PR_DRAFT: \"true\"\n")
-	}
-
-	// Pass the if-no-changes configuration
-	ifNoChanges := data.SafeOutputs.CreatePullRequests.IfNoChanges
-	if ifNoChanges == "" {
-		ifNoChanges = "warn" // Default value
-	}
-	customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_PR_IF_NO_CHANGES: %q\n", ifNoChanges))
-
-	// Pass the allow-empty configuration
-	if data.SafeOutputs.CreatePullRequests.AllowEmpty != nil {
-		customEnvVars = append(customEnvVars, buildTemplatableBoolEnvVar("GH_AW_PR_ALLOW_EMPTY", data.SafeOutputs.CreatePullRequests.AllowEmpty)...)
-	} else {
-		customEnvVars = append(customEnvVars, "          GH_AW_PR_ALLOW_EMPTY: \"false\"\n")
-	}
-
-	// Pass the auto-merge configuration
-	if data.SafeOutputs.CreatePullRequests.AutoMerge != nil {
-		customEnvVars = append(customEnvVars, buildTemplatableBoolEnvVar("GH_AW_PR_AUTO_MERGE", data.SafeOutputs.CreatePullRequests.AutoMerge)...)
-	} else {
-		customEnvVars = append(customEnvVars, "          GH_AW_PR_AUTO_MERGE: \"false\"\n")
-	}
-
-	// Pass the fallback-as-issue configuration - default to true for backwards compatibility
-	if data.SafeOutputs.CreatePullRequests.FallbackAsIssue != nil {
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_PR_FALLBACK_AS_ISSUE: \"%t\"\n", *data.SafeOutputs.CreatePullRequests.FallbackAsIssue))
-	} else {
-		customEnvVars = append(customEnvVars, "          GH_AW_PR_FALLBACK_AS_ISSUE: \"true\"\n")
-	}
-
-	// Pass the maximum patch size configuration
-	maxPatchSize := 1024 // Default value
-	if data.SafeOutputs != nil && data.SafeOutputs.MaximumPatchSize > 0 {
-		maxPatchSize = data.SafeOutputs.MaximumPatchSize
-	}
-	customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_MAX_PATCH_SIZE: %d\n", maxPatchSize))
-
-	// Pass activation comment information if available (for updating the comment with PR link)
-	// These outputs are only available when reaction is configured in the workflow
-	if data.AIReaction != "" && data.AIReaction != "none" {
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_COMMENT_ID: ${{ needs.%s.outputs.comment_id }}\n", constants.ActivationJobName))
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_COMMENT_REPO: ${{ needs.%s.outputs.comment_repo }}\n", constants.ActivationJobName))
-	}
-
-	// Add expires value if set (only for same-repo PRs - when target-repo is not set)
-	if data.SafeOutputs.CreatePullRequests.Expires > 0 && data.SafeOutputs.CreatePullRequests.TargetRepoSlug == "" {
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_PR_EXPIRES: \"%d\"\n", data.SafeOutputs.CreatePullRequests.Expires))
-	}
-
-	// Add footer flag if explicitly set to false
-	if data.SafeOutputs.CreatePullRequests.Footer != nil && *data.SafeOutputs.CreatePullRequests.Footer == "false" {
-		customEnvVars = append(customEnvVars, "          GH_AW_FOOTER: \"false\"\n")
-		createPRLog.Print("Footer disabled - XML markers will be included but visible footer content will be omitted")
-	}
-
-	// Add extra empty commit token (for pushing an empty commit to trigger CI)
-	// Defaults to GH_AW_CI_TRIGGER_TOKEN when not explicitly configured
-	ciTriggerToken := data.SafeOutputs.CreatePullRequests.GithubTokenForExtraEmptyCommit
-	switch ciTriggerToken {
-	case "app":
-		customEnvVars = append(customEnvVars, "          GH_AW_CI_TRIGGER_TOKEN: ${{ steps.safe-outputs-app-token.outputs.token || '' }}\n")
-		createPRLog.Print("Extra empty commit using GitHub App token")
-	case "default", "":
-		// Use the magic GH_AW_CI_TRIGGER_TOKEN secret (default behavior when not explicitly configured)
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_CI_TRIGGER_TOKEN: %s\n", getEffectiveCITriggerGitHubToken("")))
-		createPRLog.Print("Extra empty commit using GH_AW_CI_TRIGGER_TOKEN")
-	default:
-		customEnvVars = append(customEnvVars, fmt.Sprintf("          GH_AW_CI_TRIGGER_TOKEN: %s\n", ciTriggerToken))
-		createPRLog.Printf("Extra empty commit using explicit token")
-	}
-
-	// Add standard environment variables (metadata + staged/target repo)
-	customEnvVars = append(customEnvVars, c.buildStandardSafeOutputEnvVars(data, data.SafeOutputs.CreatePullRequests.TargetRepoSlug)...)
-
-	// Build post-steps for reviewers if configured
-	var postSteps []string
-	if len(data.SafeOutputs.CreatePullRequests.Reviewers) > 0 {
-		// Get the effective GitHub token to use for gh CLI
-		var safeOutputsToken string
-		if data.SafeOutputs != nil {
-			safeOutputsToken = data.SafeOutputs.GitHubToken
-		}
-
-		postSteps = buildCopilotParticipantSteps(CopilotParticipantConfig{
-			Participants:       data.SafeOutputs.CreatePullRequests.Reviewers,
-			ParticipantType:    "reviewer",
-			CustomToken:        data.SafeOutputs.CreatePullRequests.GitHubToken,
-			SafeOutputsToken:   safeOutputsToken,
-			ConditionStepID:    "create_pull_request",
-			ConditionOutputKey: "pull_request_url",
-		})
-	}
-
-	// Create outputs for the job
-	outputs := map[string]string{
-		"pull_request_number": "${{ steps.create_pull_request.outputs.pull_request_number }}",
-		"pull_request_url":    "${{ steps.create_pull_request.outputs.pull_request_url }}",
-		"issue_number":        "${{ steps.create_pull_request.outputs.issue_number }}",
-		"issue_url":           "${{ steps.create_pull_request.outputs.issue_url }}",
-		"branch_name":         "${{ steps.create_pull_request.outputs.branch_name }}",
-		"fallback_used":       "${{ steps.create_pull_request.outputs.fallback_used }}",
-		"error_message":       "${{ steps.create_pull_request.outputs.error_message }}",
-	}
-
-	// Choose permissions based on fallback-as-issue setting
-	fallbackAsIssue := getFallbackAsIssue(data.SafeOutputs.CreatePullRequests)
-	var permissions *Permissions
-	if fallbackAsIssue {
-		// Default: include issues: write for fallback behavior
-		permissions = NewPermissionsContentsWriteIssuesWritePRWrite()
-		createPRLog.Print("Using permissions with issues:write (fallback-as-issue enabled)")
-	} else {
-		// Fallback disabled: only need contents: write and pull-requests: write
-		permissions = NewPermissionsContentsWritePRWrite()
-		createPRLog.Print("Using permissions without issues:write (fallback-as-issue disabled)")
-	}
-
-	// Use the shared builder function to create the job
-	return c.buildSafeOutputJob(data, SafeOutputJobConfig{
-		JobName:        "create_pull_request",
-		StepName:       "Create Pull Request",
-		StepID:         "create_pull_request",
-		MainJobName:    mainJobName,
-		CustomEnvVars:  customEnvVars,
-		Script:         "", // Legacy - handler manager uses require() to load handler from /tmp/gh-aw/actions
-		Permissions:    permissions,
-		Outputs:        outputs,
-		PreSteps:       preSteps,
-		PostSteps:      postSteps,
-		Token:          data.SafeOutputs.CreatePullRequests.GitHubToken,
-		TargetRepoSlug: data.SafeOutputs.CreatePullRequests.TargetRepoSlug,
-	})
-}
-
 // parsePullRequestsConfig handles only create-pull-request (singular) configuration
 func (c *Compiler) parsePullRequestsConfig(outputMap map[string]any) *CreatePullRequestsConfig {
 	// Check for singular form only
diff --git a/pkg/workflow/custom_action_copilot_token_test.go b/pkg/workflow/custom_action_copilot_token_test.go
deleted file mode 100644
index 91772a204b..0000000000
--- a/pkg/workflow/custom_action_copilot_token_test.go
+++ /dev/null
@@ -1,51 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-// TestCustomActionCopilotTokenFallback tests that custom actions use the correct
-// Copilot token fallback when no custom token is provided
-func TestCustomActionCopilotTokenFallback(t *testing.T) {
-	compiler := NewCompiler()
-
-	// Register a test custom action
-	testScript := `console.log('test');`
-	actionPath := "./actions/test-action"
-	err := DefaultScriptRegistry.RegisterWithAction("test_handler", testScript, RuntimeModeGitHubScript, actionPath)
-	require.NoError(t, err)
-
-	workflowData := &WorkflowData{
-		Name:        "Test Workflow",
-		SafeOutputs: &SafeOutputsConfig{},
-	}
-
-	// Test with UseCopilotRequestsToken=true and no custom token
-	config := GitHubScriptStepConfig{
-		StepName:                "Test Custom Action",
-		StepID:                  "test",
-		CustomToken:             "", // No custom token
-		UseCopilotRequestsToken: true,
-	}
-
-	steps := compiler.buildCustomActionStep(workflowData, config, "test_handler")
-	stepsContent := strings.Join(steps, "")
-
-	t.Logf("Generated steps:\n%s", stepsContent)
-
-	// Should use COPILOT_GITHUB_TOKEN directly (no fallback chain)
-	// Note: COPILOT_GITHUB_TOKEN is the recommended token for Copilot operations
-	// and does NOT have a fallback to GITHUB_TOKEN because GITHUB_TOKEN lacks
-	// permissions for agent sessions and bot assignments
-	assert.Contains(t, stepsContent, "secrets.COPILOT_GITHUB_TOKEN", "Should use COPILOT_GITHUB_TOKEN")
-	assert.NotContains(t, stepsContent, "COPILOT_TOKEN ||", "Should not use deprecated COPILOT_TOKEN")
-
-	// Verify no fallback chain (COPILOT_GITHUB_TOKEN is used directly)
-	assert.NotContains(t, stepsContent, "||", "Should not have fallback chain for Copilot token")
-}
diff --git a/pkg/workflow/dependency_tracker.go b/pkg/workflow/dependency_tracker.go
deleted file mode 100644
index 0d517468d6..0000000000
--- a/pkg/workflow/dependency_tracker.go
+++ /dev/null
@@ -1,121 +0,0 @@
-package workflow
-
-import (
-	"fmt"
-	"path/filepath"
-	"regexp"
-	"strings"
-
-	"github.com/github/gh-aw/pkg/logger"
-)
-
-var dependencyTrackerLog = logger.New("workflow:dependency_tracker")
-
-// FindJavaScriptDependencies analyzes a JavaScript file and recursively finds all its dependencies
-// without actually bundling the code. Returns a map of file paths that are required.
-//
-// Parameters:
-//   - mainContent: The JavaScript content to analyze
-//   - sources: Map of file paths to their content
-//   - basePath: Base directory path for resolving relative imports (e.g., "js")
-//
-// Returns:
-//   - Map of file paths (relative to basePath) that are dependencies
-//   - Error if a required file is not found in sources
-func FindJavaScriptDependencies(mainContent string, sources map[string]string, basePath string) (map[string]bool, error) {
-	dependencyTrackerLog.Printf("Finding JavaScript dependencies: source_count=%d, base_path=%s", len(sources), basePath)
-
-	// Track discovered dependencies
-	dependencies := make(map[string]bool)
-
-	// Track files we've already processed to avoid circular dependencies
-	processed := make(map[string]bool)
-
-	// Recursively find dependencies starting from the main content
-	if err := findDependenciesRecursive(mainContent, basePath, sources, dependencies, processed); err != nil {
-		dependencyTrackerLog.Printf("Dependency tracking failed: %v", err)
-		return nil, err
-	}
-
-	dependencyTrackerLog.Printf("Dependency tracking completed: found %d dependencies", len(dependencies))
-	return dependencies, nil
-}
-
-// findDependenciesRecursive processes content and recursively tracks its dependencies
-func findDependenciesRecursive(content string, currentPath string, sources map[string]string, dependencies map[string]bool, processed map[string]bool) error {
-	// Regular expression to match require('./...') or require("./...")
-	// This matches both single-line and multi-line destructuring:
-	//   const { x } = require("./file.cjs");
-	//   const {
-	//     x,
-	//     y
-	//   } = require("./file.cjs");
-	// Captures the require path where it starts with ./ or ../
-	requireRegex := regexp.MustCompile(`(?s)(?:const|let|var)\s+(?:\{[^}]*\}|\w+)\s*=\s*require\(['"](\.\.?/[^'"]+)['"]\);?`)
-
-	// Find all requires
-	matches := requireRegex.FindAllStringSubmatch(content, -1)
-
-	if len(matches) == 0 {
-		// No requires found, nothing
to track - return nil - } - - dependencyTrackerLog.Printf("Found %d require statements in current file", len(matches)) - - for _, match := range matches { - if len(match) < 2 { - continue - } - - // Extract the require path - requirePath := match[1] - - // Resolve the full path relative to current path - var fullPath string - if currentPath == "" { - fullPath = requirePath - } else { - fullPath = filepath.Join(currentPath, requirePath) - } - - // Ensure .cjs extension - if !strings.HasSuffix(fullPath, ".cjs") && !strings.HasSuffix(fullPath, ".js") { - fullPath += ".cjs" - } - - // Normalize the path (clean up ./ and ../) - fullPath = filepath.Clean(fullPath) - - // Convert Windows path separators to forward slashes for consistency - fullPath = filepath.ToSlash(fullPath) - - // Check if we've already processed this file - if processed[fullPath] { - dependencyTrackerLog.Printf("Skipping already processed dependency: %s", fullPath) - continue - } - - // Mark as processed - processed[fullPath] = true - - // Add to dependencies - dependencies[fullPath] = true - dependencyTrackerLog.Printf("Added dependency: %s", fullPath) - - // Look up the required file in sources - requiredContent, ok := sources[fullPath] - if !ok { - dependencyTrackerLog.Printf("Required file not found in sources: %s", fullPath) - return fmt.Errorf("required file not found in sources: %s", fullPath) - } - - // Recursively find dependencies of this file - requiredDir := filepath.Dir(fullPath) - if err := findDependenciesRecursive(requiredContent, requiredDir, sources, dependencies, processed); err != nil { - return err - } - } - - return nil -} diff --git a/pkg/workflow/dependency_tracker_test.go b/pkg/workflow/dependency_tracker_test.go deleted file mode 100644 index c13cc9bd2f..0000000000 --- a/pkg/workflow/dependency_tracker_test.go +++ /dev/null @@ -1,185 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -func TestFindJavaScriptDependencies(t *testing.T) 
{ - tests := []struct { - name string - mainContent string - sources map[string]string - basePath string - wantDeps map[string]bool - wantErr bool - errorMessage string - }{ - { - name: "simple single dependency", - mainContent: `const { foo } = require("./helper.cjs"); -console.log(foo());`, - sources: map[string]string{ - "js/helper.cjs": `function foo() { return "bar"; } -module.exports = { foo };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/helper.cjs": true, - }, - wantErr: false, - }, - { - name: "chained dependencies", - mainContent: `const { a } = require("./module-a.cjs"); -console.log(a);`, - sources: map[string]string{ - "js/module-a.cjs": `const { b } = require("./module-b.cjs"); -module.exports = { a: b };`, - "js/module-b.cjs": `module.exports = { b: "value" };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/module-a.cjs": true, - "js/module-b.cjs": true, - }, - wantErr: false, - }, - { - name: "circular dependencies handled", - mainContent: `const { x } = require("./a.cjs");`, - sources: map[string]string{ - "js/a.cjs": `const { y } = require("./b.cjs"); -module.exports = { x: y };`, - "js/b.cjs": `const { x } = require("./a.cjs"); -module.exports = { y: "val" };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/a.cjs": true, - "js/b.cjs": true, - }, - wantErr: false, - }, - { - name: "no dependencies", - mainContent: `console.log("no requires here"); -const x = 42;`, - sources: map[string]string{}, - basePath: "js", - wantDeps: map[string]bool{}, - wantErr: false, - }, - { - name: "missing dependency error", - mainContent: `const { missing } = require("./not-found.cjs");`, - sources: map[string]string{}, - basePath: "js", - wantDeps: nil, - wantErr: true, - errorMessage: "required file not found in sources", - }, - { - name: "multiple dependencies", - mainContent: `const { a } = require("./a.cjs"); -const { b } = require("./b.cjs"); -const { c } = require("./c.cjs");`, - sources: map[string]string{ - "js/a.cjs": 
`module.exports = { a: 1 };`, - "js/b.cjs": `module.exports = { b: 2 };`, - "js/c.cjs": `module.exports = { c: 3 };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/a.cjs": true, - "js/b.cjs": true, - "js/c.cjs": true, - }, - wantErr: false, - }, - { - name: "multi-line destructuring", - mainContent: `const { - foo, - bar, - baz -} = require("./utils.cjs");`, - sources: map[string]string{ - "js/utils.cjs": `module.exports = { foo: 1, bar: 2, baz: 3 };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/utils.cjs": true, - }, - wantErr: false, - }, - { - name: "safe-outputs MCP server dependencies", - mainContent: `const { createServer, registerTool, normalizeTool, start } = require("./mcp_server_core.cjs"); -const { loadConfig } = require("./safe_outputs_config.cjs"); -const { createAppendFunction } = require("./safe_outputs_append.cjs"); -const { createHandlers } = require("./safe_outputs_handlers.cjs");`, - sources: map[string]string{ - "js/mcp_server_core.cjs": `const { readBuffer } = require("./read_buffer.cjs"); -module.exports = { createServer, registerTool, normalizeTool, start };`, - "js/read_buffer.cjs": `module.exports = { readBuffer };`, - "js/safe_outputs_config.cjs": `module.exports = { loadConfig };`, - "js/safe_outputs_append.cjs": `module.exports = { createAppendFunction };`, - "js/safe_outputs_handlers.cjs": `const { normalize } = require("./normalize_branch_name.cjs"); -module.exports = { createHandlers };`, - "js/normalize_branch_name.cjs": `module.exports = { normalize };`, - }, - basePath: "js", - wantDeps: map[string]bool{ - "js/mcp_server_core.cjs": true, - "js/read_buffer.cjs": true, - "js/safe_outputs_config.cjs": true, - "js/safe_outputs_append.cjs": true, - "js/safe_outputs_handlers.cjs": true, - "js/normalize_branch_name.cjs": true, - }, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - gotDeps, err := FindJavaScriptDependencies(tt.mainContent, tt.sources, tt.basePath) - 
- if (err != nil) != tt.wantErr { - t.Errorf("FindJavaScriptDependencies() error = %v, wantErr %v", err, tt.wantErr) - return - } - - if tt.wantErr { - if err == nil { - t.Errorf("FindJavaScriptDependencies() expected error containing %q but got no error", tt.errorMessage) - } else if tt.errorMessage != "" && !strings.Contains(err.Error(), tt.errorMessage) { - t.Errorf("FindJavaScriptDependencies() error = %q, expected to contain %q", err.Error(), tt.errorMessage) - } - return - } - - // Check that all wanted dependencies are present - for dep := range tt.wantDeps { - if !gotDeps[dep] { - t.Errorf("FindJavaScriptDependencies() missing expected dependency: %q", dep) - } - } - - // Check that no unexpected dependencies are present - for dep := range gotDeps { - if !tt.wantDeps[dep] { - t.Errorf("FindJavaScriptDependencies() unexpected dependency: %q", dep) - } - } - - // Check count - if len(gotDeps) != len(tt.wantDeps) { - t.Errorf("FindJavaScriptDependencies() got %d dependencies, want %d", len(gotDeps), len(tt.wantDeps)) - } - }) - } -} diff --git a/pkg/workflow/env_mirror.go b/pkg/workflow/env_mirror.go deleted file mode 100644 index ae31020082..0000000000 --- a/pkg/workflow/env_mirror.go +++ /dev/null @@ -1,137 +0,0 @@ -// This file provides environment variable mirroring for agent containers. -// -// This file contains logic for mirroring essential GitHub Actions runner environment -// variables into the agent container. The Ubuntu runner image provides many environment -// variables that workflows and actions depend on (e.g., JAVA_HOME, ANDROID_HOME, -// CHROMEWEBDRIVER, CONDA, etc.). This module ensures these are available inside -// the AWF (Agent Workflow Firewall) container. -// -// Environment variables are passed through using AWF's --env flag, which sets -// environment variables only if they exist on the host. This ensures graceful -// handling of missing variables. 
-// -// Reference: scratchpad/ubuntulatest.md section "Environment Variables" - -package workflow - -import ( - "sort" - - "github.com/github/gh-aw/pkg/logger" -) - -var envMirrorLog = logger.New("workflow:env_mirror") - -// MirroredEnvVars is the list of environment variables from the GitHub Actions -// Ubuntu runner that should be mirrored into the agent container. -// -// These are grouped by category: -// - Java JDK homes (for multiple Java versions) -// - Android SDK paths -// - Browser WebDriver paths -// - Package manager paths -// - Go workspace path -// -// Variables are only passed through if they exist on the host runner. -// Reference: scratchpad/ubuntulatest.md -var MirroredEnvVars = []string{ - // Java JDK homes (multiple versions available on Ubuntu runner) - "JAVA_HOME", - "JAVA_HOME_8_X64", - "JAVA_HOME_11_X64", - "JAVA_HOME_17_X64", - "JAVA_HOME_21_X64", - "JAVA_HOME_25_X64", - - // Android SDK paths - "ANDROID_HOME", - "ANDROID_SDK_ROOT", - "ANDROID_NDK", - "ANDROID_NDK_HOME", - "ANDROID_NDK_ROOT", - "ANDROID_NDK_LATEST_HOME", - - // Browser WebDriver paths (for Selenium/browser automation) - "CHROMEWEBDRIVER", - "EDGEWEBDRIVER", - "GECKOWEBDRIVER", - "SELENIUM_JAR_PATH", - - // Package manager paths - "CONDA", - "VCPKG_INSTALLATION_ROOT", - - // Go workspace path - "GOPATH", - - // .NET environment - "DOTNET_ROOT", - - // Python environment - "PIPX_HOME", - "PIPX_BIN_DIR", - - // Ruby environment - "GEM_HOME", - "GEM_PATH", - - // Rust environment - "CARGO_HOME", - "RUSTUP_HOME", - - // Homebrew (Linux) - "HOMEBREW_PREFIX", - "HOMEBREW_CELLAR", - "HOMEBREW_REPOSITORY", - - // Swift - "SWIFT_PATH", - - // Common tool homes - "GOROOT", - "NVM_DIR", - - // Azure environment - "AZURE_EXTENSION_DIR", -} - -// GetMirroredEnvArgs returns the AWF command-line arguments for mirroring -// environment variables from the runner into the agent container. -// -// AWF uses the --env flag to pass environment variables in KEY=VALUE format. 
-// The output uses shell variable expansion syntax (e.g., JAVA_HOME=${JAVA_HOME}) -// so that the actual value is resolved at runtime from the host environment. -// -// Example output: ["--env", "JAVA_HOME=${JAVA_HOME}", "--env", "ANDROID_HOME=${ANDROID_HOME}", ...] -// -// This function always returns the same list of environment variables to mirror. -// Variables that don't exist on the host will expand to empty strings at runtime. -func GetMirroredEnvArgs() []string { - envMirrorLog.Print("Generating mirrored environment variable arguments") - - // Sort for consistent output - sortedVars := make([]string, len(MirroredEnvVars)) - copy(sortedVars, MirroredEnvVars) - sort.Strings(sortedVars) - - var args []string - for _, envVar := range sortedVars { - // Use shell variable expansion syntax so the value is resolved at runtime - // Pre-wrap in double quotes so shellEscapeArg preserves them (allowing shell expansion) - args = append(args, "--env", "\""+envVar+"=${"+envVar+"}\"") - } - - envMirrorLog.Printf("Generated %d environment variable mirror arguments", len(sortedVars)) - return args -} - -// GetMirroredEnvVarsList returns the list of environment variables that -// are mirrored from the runner to the agent container. -// -// This is useful for documentation and debugging purposes. 
-func GetMirroredEnvVarsList() []string { - result := make([]string, len(MirroredEnvVars)) - copy(result, MirroredEnvVars) - sort.Strings(result) - return result -} diff --git a/pkg/workflow/env_mirror_test.go b/pkg/workflow/env_mirror_test.go deleted file mode 100644 index e3a7a57286..0000000000 --- a/pkg/workflow/env_mirror_test.go +++ /dev/null @@ -1,221 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestGetMirroredEnvArgs(t *testing.T) { - args := GetMirroredEnvArgs() - - // Should return pairs of --env and KEY=${KEY} format - require.NotEmpty(t, args, "Should return environment variable arguments") - require.Equal(t, 0, len(args)%2, "Arguments should come in pairs (--env, KEY=${KEY})") - - // Verify the structure of arguments - for i := 0; i < len(args); i += 2 { - assert.Equal(t, "--env", args[i], "Even indices should be --env flag") - assert.NotEmpty(t, args[i+1], "Odd indices should be environment variable assignments") - // Verify the "KEY=${KEY}" format with outer double quotes - assert.True(t, len(args[i+1]) >= 2 && args[i+1][0] == '"' && args[i+1][len(args[i+1])-1] == '"', - "Should be wrapped in double quotes for shell expansion, got: %s", args[i+1]) - assert.Contains(t, args[i+1], "=", "Should contain = for KEY=VALUE format") - assert.Contains(t, args[i+1], "=${", "Should contain =${ for shell expansion") - assert.Contains(t, args[i+1], "}", "Should contain } for shell expansion") - } -} - -func TestGetMirroredEnvArgs_ContainsExpectedVariables(t *testing.T) { - args := GetMirroredEnvArgs() - - // Convert to a set for easy lookup (extract variable name from "KEY=${KEY}" format) - varSet := make(map[string]bool) - for i := 1; i < len(args); i += 2 { - // Extract the variable name from "KEY=${KEY}" format - envAssignment := args[i] - // Skip the leading quote and get the part before the '=' - if len(envAssignment) > 1 && 
envAssignment[0] == '"' { - for j := 1; j < len(envAssignment); j++ { - if envAssignment[j] == '=' { - varSet[envAssignment[1:j]] = true - break - } - } - } - } - - // Test that critical environment variables are included - expectedVars := []string{ - "JAVA_HOME", - "JAVA_HOME_17_X64", - "ANDROID_HOME", - "CHROMEWEBDRIVER", - "GECKOWEBDRIVER", - "CONDA", - "VCPKG_INSTALLATION_ROOT", - "GOPATH", - } - - for _, expected := range expectedVars { - assert.True(t, varSet[expected], "Should include %s in mirrored environment variables", expected) - } -} - -func TestGetMirroredEnvArgs_IsSorted(t *testing.T) { - args := GetMirroredEnvArgs() - - // Extract just the variable names from "KEY=${KEY}" format (odd indices) - var varNames []string - for i := 1; i < len(args); i += 2 { - envAssignment := args[i] - // Skip the leading quote and get the part before the '=' - if len(envAssignment) > 1 && envAssignment[0] == '"' { - for j := 1; j < len(envAssignment); j++ { - if envAssignment[j] == '=' { - varNames = append(varNames, envAssignment[1:j]) - break - } - } - } - } - - // Verify they are sorted - for i := 1; i < len(varNames); i++ { - assert.LessOrEqual(t, varNames[i-1], varNames[i], - "Environment variables should be sorted, but %s comes after %s", - varNames[i-1], varNames[i]) - } -} - -func TestGetMirroredEnvVarsList(t *testing.T) { - vars := GetMirroredEnvVarsList() - - require.NotEmpty(t, vars, "Should return a list of environment variables") - - // Verify the list contains expected variables - varSet := make(map[string]bool) - for _, v := range vars { - varSet[v] = true - } - - assert.True(t, varSet["JAVA_HOME"], "Should include JAVA_HOME") - assert.True(t, varSet["ANDROID_HOME"], "Should include ANDROID_HOME") - assert.True(t, varSet["CHROMEWEBDRIVER"], "Should include CHROMEWEBDRIVER") -} - -func TestGetMirroredEnvVarsList_IsSorted(t *testing.T) { - vars := GetMirroredEnvVarsList() - - // Verify they are sorted - for i := 1; i < len(vars); i++ { - 
assert.LessOrEqual(t, vars[i-1], vars[i], - "Environment variables should be sorted, but %s comes after %s", - vars[i-1], vars[i]) - } -} - -func TestMirroredEnvVars_NoDuplicates(t *testing.T) { - vars := GetMirroredEnvVarsList() - - seen := make(map[string]bool) - for _, v := range vars { - assert.False(t, seen[v], "Duplicate environment variable found: %s", v) - seen[v] = true - } -} - -func TestMirroredEnvVars_IncludesJavaVersions(t *testing.T) { - vars := GetMirroredEnvVarsList() - - varSet := make(map[string]bool) - for _, v := range vars { - varSet[v] = true - } - - // Java versions commonly available on GitHub Actions runners - javaVersions := []string{ - "JAVA_HOME_8_X64", - "JAVA_HOME_11_X64", - "JAVA_HOME_17_X64", - "JAVA_HOME_21_X64", - } - - for _, javaVar := range javaVersions { - assert.True(t, varSet[javaVar], "Should include %s for Java version support", javaVar) - } -} - -func TestMirroredEnvVars_IncludesAndroidVars(t *testing.T) { - vars := GetMirroredEnvVarsList() - - varSet := make(map[string]bool) - for _, v := range vars { - varSet[v] = true - } - - // Android environment variables from the runner - androidVars := []string{ - "ANDROID_HOME", - "ANDROID_SDK_ROOT", - "ANDROID_NDK", - "ANDROID_NDK_HOME", - } - - for _, androidVar := range androidVars { - assert.True(t, varSet[androidVar], "Should include %s for Android development support", androidVar) - } -} - -func TestMirroredEnvVars_IncludesBrowserVars(t *testing.T) { - vars := GetMirroredEnvVarsList() - - varSet := make(map[string]bool) - for _, v := range vars { - varSet[v] = true - } - - // Browser/WebDriver environment variables from the runner - browserVars := []string{ - "CHROMEWEBDRIVER", - "EDGEWEBDRIVER", - "GECKOWEBDRIVER", - "SELENIUM_JAR_PATH", - } - - for _, browserVar := range browserVars { - assert.True(t, varSet[browserVar], "Should include %s for browser automation support", browserVar) - } -} - -func TestGetMirroredEnvArgs_CorrectFormat(t *testing.T) { - args := 
GetMirroredEnvArgs() - - // Find ANDROID_HOME in the args and verify its format - found := false - for i := 0; i < len(args); i += 2 { - if args[i] == "--env" && i+1 < len(args) { - // Check for the specific format: "KEY=${KEY}" with outer double quotes - if args[i+1] == "\"ANDROID_HOME=${ANDROID_HOME}\"" { - found = true - break - } - } - } - assert.True(t, found, "Should include \"ANDROID_HOME=${ANDROID_HOME}\" in correct format with outer double quotes") - - // Also verify JAVA_HOME format - foundJava := false - for i := 0; i < len(args); i += 2 { - if args[i] == "--env" && i+1 < len(args) { - if args[i+1] == "\"JAVA_HOME=${JAVA_HOME}\"" { - foundJava = true - break - } - } - } - assert.True(t, foundJava, "Should include \"JAVA_HOME=${JAVA_HOME}\" in correct format with outer double quotes") -} diff --git a/pkg/workflow/inline_imports_test.go b/pkg/workflow/inline_imports_test.go index 70e7ac7fd9..96364357ca 100644 --- a/pkg/workflow/inline_imports_test.go +++ b/pkg/workflow/inline_imports_test.go @@ -12,219 +12,6 @@ import ( "github.com/stretchr/testify/require" ) -// TestInlinedImports_FrontmatterField verifies that inlined-imports: true activates -// compile-time inlining of imports (without inputs) and the main workflow markdown. -func TestInlinedImports_FrontmatterField(t *testing.T) { - tmpDir := t.TempDir() - - // Create a shared import file with markdown content - sharedDir := filepath.Join(tmpDir, ".github", "workflows", "shared") - require.NoError(t, os.MkdirAll(sharedDir, 0o755)) - sharedFile := filepath.Join(sharedDir, "common.md") - sharedContent := `--- -tools: - bash: true ---- - -# Shared Instructions - -Always follow best practices. 
-` - require.NoError(t, os.WriteFile(sharedFile, []byte(sharedContent), 0o644)) - - // Create the main workflow file with inlined-imports: true - workflowDir := filepath.Join(tmpDir, ".github", "workflows") - workflowFile := filepath.Join(workflowDir, "test-workflow.md") - workflowContent := `--- -name: inlined-imports-test -on: - workflow_dispatch: -permissions: - contents: read -engine: copilot -inlined-imports: true -imports: - - shared/common.md ---- - -# Main Workflow - -This is the main workflow content. -` - require.NoError(t, os.WriteFile(workflowFile, []byte(workflowContent), 0o644)) - - compiler := NewCompiler( - WithNoEmit(true), - WithSkipValidation(true), - ) - - wd, err := compiler.ParseWorkflowFile(workflowFile) - require.NoError(t, err, "should parse workflow file") - require.NotNil(t, wd) - - // WorkflowData.InlinedImports should be true (parsed into the workspace data) - assert.True(t, wd.InlinedImports, "WorkflowData.InlinedImports should be true") - - // ParsedFrontmatter should also have InlinedImports = true - require.NotNil(t, wd.ParsedFrontmatter, "ParsedFrontmatter should not be nil") - assert.True(t, wd.ParsedFrontmatter.InlinedImports, "InlinedImports should be true") - - // Compile and get YAML - yamlContent, err := compiler.CompileToYAML(wd, workflowFile) - require.NoError(t, err, "should compile workflow") - require.NotEmpty(t, yamlContent, "YAML should not be empty") - - // With inlined-imports: true, the import should be inlined (no runtime-import macros) - assert.NotContains(t, yamlContent, "{{#runtime-import", "should not generate any runtime-import macros") - - // The shared content should be inlined in the prompt - assert.Contains(t, yamlContent, "Shared Instructions", "shared import content should be inlined") - assert.Contains(t, yamlContent, "Always follow best practices", "shared import content should be inlined") - - // The main workflow content should also be inlined (no runtime-import for main file) - assert.Contains(t, 
yamlContent, "Main Workflow", "main workflow content should be inlined") - assert.Contains(t, yamlContent, "This is the main workflow content", "main workflow content should be inlined") -} - -// TestInlinedImports_Disabled verifies that without inlined-imports, runtime-import macros are used. -func TestInlinedImports_Disabled(t *testing.T) { - tmpDir := t.TempDir() - - sharedDir := filepath.Join(tmpDir, ".github", "workflows", "shared") - require.NoError(t, os.MkdirAll(sharedDir, 0o755)) - sharedFile := filepath.Join(sharedDir, "common.md") - sharedContent := `--- -tools: - bash: true ---- - -# Shared Instructions - -Always follow best practices. -` - require.NoError(t, os.WriteFile(sharedFile, []byte(sharedContent), 0o644)) - - workflowDir := filepath.Join(tmpDir, ".github", "workflows") - workflowFile := filepath.Join(workflowDir, "test-workflow.md") - workflowContent := `--- -name: no-inlined-imports-test -on: - workflow_dispatch: -permissions: - contents: read -engine: copilot -imports: - - shared/common.md ---- - -# Main Workflow - -This is the main workflow content. 
-` - require.NoError(t, os.WriteFile(workflowFile, []byte(workflowContent), 0o644)) - - compiler := NewCompiler( - WithNoEmit(true), - WithSkipValidation(true), - ) - - wd, err := compiler.ParseWorkflowFile(workflowFile) - require.NoError(t, err, "should parse workflow file") - require.NotNil(t, wd) - - require.NotNil(t, wd.ParsedFrontmatter, "ParsedFrontmatter should be populated") - assert.False(t, wd.ParsedFrontmatter.InlinedImports, "InlinedImports should be false by default") - - yamlContent, err := compiler.CompileToYAML(wd, workflowFile) - require.NoError(t, err, "should compile workflow") - - // Without inlined-imports, the import should use runtime-import macro (with full path from workspace root) - assert.Contains(t, yamlContent, "{{#runtime-import .github/workflows/shared/common.md}}", "should generate runtime-import macro for import") - - // The main workflow markdown should also use a runtime-import macro - assert.Contains(t, yamlContent, "{{#runtime-import .github/workflows/test-workflow.md}}", "should generate runtime-import macro for main workflow") -} - -// TestInlinedImports_HashChangesWithBody verifies that the frontmatter hash includes -// the entire markdown body when inlined-imports: true. 
-func TestInlinedImports_HashChangesWithBody(t *testing.T) { - tmpDir := t.TempDir() - - content1 := `--- -name: test -on: - workflow_dispatch: -inlined-imports: true -engine: copilot ---- - -# Original body -` - content2 := `--- -name: test -on: - workflow_dispatch: -inlined-imports: true -engine: copilot ---- - -# Modified body - different -` - // Normal mode (no inlined-imports) - body changes should not affect hash - contentNormal1 := `--- -name: test -on: - workflow_dispatch: -engine: copilot ---- - -# Body variant A -` - contentNormal2 := `--- -name: test -on: - workflow_dispatch: -engine: copilot ---- - -# Body variant B - same hash expected -` - - file1 := filepath.Join(tmpDir, "test1.md") - file2 := filepath.Join(tmpDir, "test2.md") - fileN1 := filepath.Join(tmpDir, "normal1.md") - fileN2 := filepath.Join(tmpDir, "normal2.md") - require.NoError(t, os.WriteFile(file1, []byte(content1), 0o644)) - require.NoError(t, os.WriteFile(file2, []byte(content2), 0o644)) - require.NoError(t, os.WriteFile(fileN1, []byte(contentNormal1), 0o644)) - require.NoError(t, os.WriteFile(fileN2, []byte(contentNormal2), 0o644)) - - cache := parser.NewImportCache(tmpDir) - - hash1, err := parser.ComputeFrontmatterHashFromFile(file1, cache) - require.NoError(t, err) - hash2, err := parser.ComputeFrontmatterHashFromFile(file2, cache) - require.NoError(t, err) - hashN1, err := parser.ComputeFrontmatterHashFromFile(fileN1, cache) - require.NoError(t, err) - hashN2, err := parser.ComputeFrontmatterHashFromFile(fileN2, cache) - require.NoError(t, err) - - // With inlined-imports: true, different body content should produce different hashes - assert.NotEqual(t, hash1, hash2, - "with inlined-imports: true, different body content should produce different hashes") - - // Without inlined-imports, body-only changes produce the same hash - // (only env./vars. 
expressions from body are included) - assert.Equal(t, hashN1, hashN2, - "without inlined-imports, body-only changes should not affect hash") - - // inlined-imports mode should also produce a different hash than normal mode - // (frontmatter text differs, so hash differs regardless of body treatment) - assert.NotEqual(t, hash1, hashN1, - "inlined-imports and normal mode should produce different hashes (different frontmatter)") -} - // TestInlinedImports_FrontmatterHashInline_SameBodySameHash verifies determinism. func TestInlinedImports_FrontmatterHashInline_SameBodySameHash(t *testing.T) { tmpDir := t.TempDir() @@ -252,47 +39,6 @@ engine: copilot assert.Equal(t, hash1, hash2, "same content should produce the same hash") } -// TestInlinedImports_InlinePromptActivated verifies that inlined-imports also activates inline prompt mode. -func TestInlinedImports_InlinePromptActivated(t *testing.T) { - tmpDir := t.TempDir() - - workflowDir := filepath.Join(tmpDir, ".github", "workflows") - require.NoError(t, os.MkdirAll(workflowDir, 0o755)) - workflowFile := filepath.Join(workflowDir, "inline-test.md") - workflowContent := `--- -name: inline-test -on: - workflow_dispatch: -permissions: - contents: read -engine: copilot -inlined-imports: true ---- - -# My Workflow - -Do something useful. 
-` - require.NoError(t, os.WriteFile(workflowFile, []byte(workflowContent), 0o644)) - - compiler := NewCompiler( - WithNoEmit(true), - WithSkipValidation(true), - ) - - wd, err := compiler.ParseWorkflowFile(workflowFile) - require.NoError(t, err) - - yamlContent, err := compiler.CompileToYAML(wd, workflowFile) - require.NoError(t, err) - - // When inlined-imports is true, the main markdown body is also inlined (no runtime-import for main file) - assert.NotContains(t, yamlContent, "{{#runtime-import", "should not generate any runtime-import macros") - // Main workflow content should be inlined - assert.Contains(t, yamlContent, "My Workflow", "main workflow content should be inlined") - assert.Contains(t, yamlContent, "Do something useful", "main workflow body should be inlined") -} - // TestInlinedImports_AgentFileError verifies that when inlined-imports: true and a custom agent // file is imported, ParseWorkflowFile returns a compilation error. // Agent files require runtime access and will not be resolved without sources. diff --git a/pkg/workflow/markdown_unfencing.go b/pkg/workflow/markdown_unfencing.go deleted file mode 100644 index 12e1c2746f..0000000000 --- a/pkg/workflow/markdown_unfencing.go +++ /dev/null @@ -1,141 +0,0 @@ -package workflow - -import ( - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var markdownUnfencingLog = logger.New("workflow:markdown_unfencing") - -// UnfenceMarkdown removes an outer code fence from markdown content if the entire -// content is wrapped in a markdown/md code fence. This handles cases where agents -// accidentally wrap the entire markdown body in a code fence. 
-// -// The function detects: -// - Content starting with ```markdown, ```md, ~~~markdown, or ~~~md (case insensitive) -// - Content ending with ``` or ~~~ -// - The closing fence must match the opening fence type (backticks or tildes) -// -// Returns the unfenced content if a wrapping fence is detected, otherwise returns -// the original content unchanged. -func UnfenceMarkdown(content string) string { - if content == "" { - return content - } - - markdownUnfencingLog.Printf("Checking content for outer markdown fence (%d bytes)", len(content)) - - // Trim leading/trailing whitespace for analysis - trimmed := strings.TrimSpace(content) - - // Check for opening fence: ```markdown, ```md, ~~~markdown, or ~~~md - // Must be at the start of the content (after trimming) - lines := strings.Split(trimmed, "\n") - if len(lines) < 2 { - // Need at least opening fence and closing fence - return content - } - - firstLine := strings.TrimSpace(lines[0]) - lastLine := strings.TrimSpace(lines[len(lines)-1]) - - // Check if first line is a markdown code fence - var fenceChar string - var fenceLength int - var isMarkdownFence bool - - // Check for backtick fences (3 or more backticks) - if strings.HasPrefix(firstLine, "```") { - fenceChar = "`" - // Count the number of consecutive backticks - fenceLength = 0 - for _, ch := range firstLine { - if ch == '`' { - fenceLength++ - } else { - break - } - } - remainder := strings.TrimSpace(firstLine[fenceLength:]) - // Check if it's markdown or md language tag or empty - if remainder == "" || strings.EqualFold(remainder, "markdown") || strings.EqualFold(remainder, "md") { - isMarkdownFence = true - } - } else if strings.HasPrefix(firstLine, "~~~") { - // Check for tilde fences (3 or more tildes) - fenceChar = "~" - // Count the number of consecutive tildes - fenceLength = 0 - for _, ch := range firstLine { - if ch == '~' { - fenceLength++ - } else { - break - } - } - remainder := strings.TrimSpace(firstLine[fenceLength:]) - // Check if 
it's markdown or md language tag or empty - if remainder == "" || strings.EqualFold(remainder, "markdown") || strings.EqualFold(remainder, "md") { - isMarkdownFence = true - } - } - - if !isMarkdownFence { - // Not a markdown fence, return original content - markdownUnfencingLog.Print("No outer markdown fence detected, returning content unchanged") - return content - } - - markdownUnfencingLog.Printf("Detected opening markdown fence: char=%q, length=%d", fenceChar, fenceLength) - - // Check if last line is a matching closing fence - // Must have at least as many fence characters as the opening fence - var isClosingFence bool - if fenceChar == "`" { - // Count backticks in last line - closingFenceLength := 0 - for _, ch := range lastLine { - if ch == '`' { - closingFenceLength++ - } else { - break - } - } - // Must have at least as many backticks as opening fence - if closingFenceLength >= fenceLength && strings.TrimSpace(lastLine[closingFenceLength:]) == "" { - isClosingFence = true - } - } else if fenceChar == "~" { - // Count tildes in last line - closingFenceLength := 0 - for _, ch := range lastLine { - if ch == '~' { - closingFenceLength++ - } else { - break - } - } - // Must have at least as many tildes as opening fence - if closingFenceLength >= fenceLength && strings.TrimSpace(lastLine[closingFenceLength:]) == "" { - isClosingFence = true - } - } - - if !isClosingFence { - // No matching closing fence, return original content - markdownUnfencingLog.Print("No matching closing fence found, returning content unchanged") - return content - } - - // Extract the content between the fences - // Remove first and last lines - innerLines := lines[1 : len(lines)-1] - innerContent := strings.Join(innerLines, "\n") - - markdownUnfencingLog.Printf("Unfenced markdown content: removed outer %s fence", fenceChar) - - // Return the inner content with original leading/trailing whitespace style preserved - // We preserve the trimming behavior that was applied - return 
strings.TrimSpace(innerContent) -} diff --git a/pkg/workflow/markdown_unfencing_test.go b/pkg/workflow/markdown_unfencing_test.go deleted file mode 100644 index cd2e182a64..0000000000 --- a/pkg/workflow/markdown_unfencing_test.go +++ /dev/null @@ -1,277 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestUnfenceMarkdown(t *testing.T) { - tests := []struct { - name string - input string - expected string - }{ - { - name: "basic markdown fence with backticks", - input: "```markdown\nThis is the content\n```", - expected: "This is the content", - }, - { - name: "markdown fence with md language tag", - input: "```md\nThis is the content\n```", - expected: "This is the content", - }, - { - name: "markdown fence with tildes", - input: "~~~markdown\nThis is the content\n~~~", - expected: "This is the content", - }, - { - name: "markdown fence with md and tildes", - input: "~~~md\nThis is the content\n~~~", - expected: "This is the content", - }, - { - name: "markdown fence with no language tag", - input: "```\nThis is the content\n```", - expected: "This is the content", - }, - { - name: "markdown fence with multiline content", - input: "```markdown\nLine 1\nLine 2\nLine 3\n```", - expected: "Line 1\nLine 2\nLine 3", - }, - { - name: "markdown fence with nested code blocks", - input: "```markdown\nHere is some code:\n```javascript\nconsole.log(\"hello\");\n```\n```", - expected: "Here is some code:\n```javascript\nconsole.log(\"hello\");\n```", - }, - { - name: "markdown fence with leading and trailing whitespace", - input: " ```markdown\nContent here\n``` ", - expected: "Content here", - }, - { - name: "markdown fence case insensitive", - input: "```MARKDOWN\nContent\n```", - expected: "Content", - }, - { - name: "markdown fence with MD uppercase", - input: "```MD\nContent\n```", - expected: "Content", - }, - { - name: "not a markdown fence - different language", - input: 
"```javascript\nconsole.log(\"test\");\n```", - expected: "```javascript\nconsole.log(\"test\");\n```", - }, - { - name: "not fenced - no closing fence", - input: "```markdown\nThis has no closing fence", - expected: "```markdown\nThis has no closing fence", - }, - { - name: "not fenced - mismatched fence types", - input: "```markdown\nContent\n~~~", - expected: "```markdown\nContent\n~~~", - }, - { - name: "not fenced - content before opening fence", - input: "Some text before\n```markdown\nContent\n```", - expected: "Some text before\n```markdown\nContent\n```", - }, - { - name: "not fenced - content after closing fence", - input: "```markdown\nContent\n```\nSome text after", - expected: "```markdown\nContent\n```\nSome text after", - }, - { - name: "empty string", - input: "", - expected: "", - }, - { - name: "only whitespace", - input: " \n\t\t\t\n\t\t\t", - expected: " \n\t\t\t\n\t\t\t", - }, - { - name: "single line", - input: "```markdown", - expected: "```markdown", - }, - { - name: "markdown fence with empty content", - input: "```markdown\n```", - expected: "", - }, - { - name: "markdown fence with only whitespace content", - input: "```markdown\n \n```", - expected: "", - }, - { - name: "markdown fence with complex nested structures", - input: "```markdown\n# Heading\n\nSome text with **bold** and *italic*.\n\n```python\ndef hello():\n print(\"world\")\n```\n\nMore text here.\n```", - expected: "# Heading\n\nSome text with **bold** and *italic*.\n\n```python\ndef hello():\n print(\"world\")\n```\n\nMore text here.", - }, - { - name: "markdown fence with special characters", - input: "```markdown\nContent with ${{ github.actor }} and @mentions\n```", - expected: "Content with ${{ github.actor }} and @mentions", - }, - { - name: "longer backtick fence", - input: "````markdown\nContent\n````", - expected: "Content", - }, - { - name: "longer tilde fence", - input: "~~~~markdown\nContent\n~~~~", - expected: "Content", - }, - { - name: "markdown fence with 
extra spaces in language tag", - input: "``` markdown \nContent\n```", - expected: "Content", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := UnfenceMarkdown(tt.input) - assert.Equal(t, tt.expected, result, "Unfenced content should match expected") - }) - } -} - -func TestUnfenceMarkdownPreservesNonWrappedContent(t *testing.T) { - // Test that normal markdown content is not modified - tests := []struct { - name string - input string - }{ - { - name: "normal markdown with headers", - input: "# Title\n\nSome content here.\n\n## Subtitle\n\nMore content.", - }, - { - name: "markdown with multiple code blocks", - input: "Some text\n\n```javascript\ncode1();\n```\n\nMore text\n\n```python\ncode2()\n```", - }, - { - name: "markdown with inline code", - input: "Use `code` for inline code snippets.", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := UnfenceMarkdown(tt.input) - assert.Equal(t, tt.input, result, "Non-wrapped content should remain unchanged") - }) - } -} - -func TestUnfenceMarkdownFenceLengthMatching(t *testing.T) { - // Test that fence lengths must match (closing must be >= opening) - tests := []struct { - name string - input string - expected string - }{ - { - name: "4 backticks opening, 4 backticks closing", - input: "````markdown\nContent\n````", - expected: "Content", - }, - { - name: "4 backticks opening, 5 backticks closing", - input: "````markdown\nContent\n`````", - expected: "Content", - }, - { - name: "5 backticks opening, 5 backticks closing", - input: "`````markdown\nContent\n`````", - expected: "Content", - }, - { - name: "3 backticks opening, 4 backticks closing", - input: "```markdown\nContent\n````", - expected: "Content", - }, - { - name: "4 backticks opening, 3 backticks closing - should not unfence", - input: "````markdown\nContent\n```", - expected: "````markdown\nContent\n```", - }, - { - name: "10 backticks opening, 10 backticks closing", - input: 
"``````````markdown\nContent\n``````````", - expected: "Content", - }, - { - name: "4 tildes opening, 4 tildes closing", - input: "~~~~markdown\nContent\n~~~~", - expected: "Content", - }, - { - name: "5 tildes opening, 6 tildes closing", - input: "~~~~~markdown\nContent\n~~~~~~", - expected: "Content", - }, - { - name: "4 tildes opening, 3 tildes closing - should not unfence", - input: "~~~~markdown\nContent\n~~~", - expected: "~~~~markdown\nContent\n~~~", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := UnfenceMarkdown(tt.input) - assert.Equal(t, tt.expected, result, "Fence length matching should work correctly") - }) - } -} - -func TestUnfenceMarkdownRealWorldExamples(t *testing.T) { - // Test real-world examples that might come from agents - tests := []struct { - name string - input string - expected string - }{ - { - name: "agent response with issue update", - input: "```markdown\n# Issue Analysis\n\nI've reviewed the code and found the following:\n\n- Bug in line 42\n- Missing validation\n```", - expected: "# Issue Analysis\n\nI've reviewed the code and found the following:\n\n- Bug in line 42\n- Missing validation", - }, - { - name: "agent response with code examples", - input: "```markdown\nHere's the fix:\n\n```go\nfunc Fix() {\n // Fixed code\n}\n```\n\nThis should resolve the issue.\n```", - expected: "Here's the fix:\n\n```go\nfunc Fix() {\n // Fixed code\n}\n```\n\nThis should resolve the issue.", - }, - { - name: "agent response with multiple sections", - input: "```md\n## Summary\n\nCompleted the task.\n\n## Changes\n\n- Updated file A\n- Fixed bug in B\n\n## Testing\n\nAll tests pass.\n```", - expected: "## Summary\n\nCompleted the task.\n\n## Changes\n\n- Updated file A\n- Fixed bug in B\n\n## Testing\n\nAll tests pass.", - }, - { - name: "plain markdown without fence - no change", - input: "## Summary\n\nTask completed successfully.", - expected: "## Summary\n\nTask completed successfully.", - }, - } - - 
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			result := UnfenceMarkdown(tt.input)
-			assert.Equal(t, tt.expected, result, "Real-world examples should unfence correctly")
-		})
-	}
-}
diff --git a/pkg/workflow/prompt_constants.go b/pkg/workflow/prompt_constants.go
new file mode 100644
index 0000000000..2f9d9c39be
--- /dev/null
+++ b/pkg/workflow/prompt_constants.go
@@ -0,0 +1,28 @@
+package workflow
+
+import _ "embed"
+
+// Prompt file paths at runtime (copied by setup action to /opt/gh-aw/prompts)
+const (
+	promptsDir = "/opt/gh-aw/prompts"
+	prContextPromptFile = "pr_context_prompt.md"
+	tempFolderPromptFile = "temp_folder_prompt.md"
+	playwrightPromptFile = "playwright_prompt.md"
+	markdownPromptFile = "markdown.md"
+	xpiaPromptFile = "xpia.md"
+	cacheMemoryPromptFile = "cache_memory_prompt.md"
+	cacheMemoryPromptMultiFile = "cache_memory_prompt_multi.md"
+	repoMemoryPromptFile = "repo_memory_prompt.md"
+	repoMemoryPromptMultiFile = "repo_memory_prompt_multi.md"
+	safeOutputsPromptFile = "safe_outputs_prompt.md"
+	safeOutputsCreatePRFile = "safe_outputs_create_pull_request.md"
+	safeOutputsPushToBranchFile = "safe_outputs_push_to_pr_branch.md"
+	safeOutputsAutoCreateIssueFile = "safe_outputs_auto_create_issue.md"
+)
+
+// GitHub context prompt is kept embedded because it contains GitHub Actions expressions
+// that need to be extracted at compile time. Moving this to a runtime file would require
+// reading and parsing the file during compilation, which is more complex.
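For reference, the core of the deleted `UnfenceMarkdown` — detect a matching outer ```` ```markdown ````/`~~~md` fence and strip it — condenses to the sketch below. It keeps the original rules (same fence character, `markdown`/`md` or empty language tag, closing run at least as long as the opening run) but omits the debug logging:

```go
package main

import (
	"fmt"
	"strings"
)

// unfenceMarkdown is a condensed sketch of the deleted UnfenceMarkdown:
// strip one wrapping markdown fence, or return the content unchanged.
func unfenceMarkdown(content string) string {
	lines := strings.Split(strings.TrimSpace(content), "\n")
	if len(lines) < 2 { // need at least an opening and a closing fence
		return content
	}
	first := strings.TrimSpace(lines[0])
	last := strings.TrimSpace(lines[len(lines)-1])

	// countRun counts leading occurrences of ch.
	countRun := func(s string, ch byte) int {
		n := 0
		for n < len(s) && s[n] == ch {
			n++
		}
		return n
	}

	var ch byte
	switch {
	case strings.HasPrefix(first, "```"):
		ch = '`'
	case strings.HasPrefix(first, "~~~"):
		ch = '~'
	default:
		return content
	}
	open := countRun(first, ch)
	lang := strings.TrimSpace(first[open:])
	if lang != "" && !strings.EqualFold(lang, "markdown") && !strings.EqualFold(lang, "md") {
		return content // some other language fence: leave it alone
	}
	// Closing fence must use the same character, be at least as long,
	// and carry nothing after the run.
	if n := countRun(last, ch); n < open || strings.TrimSpace(last[n:]) != "" {
		return content
	}
	return strings.TrimSpace(strings.Join(lines[1:len(lines)-1], "\n"))
}

func main() {
	fmt.Println(unfenceMarkdown("```markdown\nHello\n```")) // Hello
	fmt.Println(unfenceMarkdown("```go\ncode\n```"))        // unchanged: not a markdown fence
}
```

This mirrors the fence-length test cases: a longer closing run still unfences, a shorter one does not.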
+//
+//go:embed prompts/github_context_prompt.md
+var githubContextPromptText string
diff --git a/pkg/workflow/prompt_step.go b/pkg/workflow/prompt_step.go
deleted file mode 100644
index 1e2341b3e8..0000000000
--- a/pkg/workflow/prompt_step.go
+++ /dev/null
@@ -1,64 +0,0 @@
-package workflow
-
-import (
-	"strings"
-
-	"github.com/github/gh-aw/pkg/logger"
-)
-
-var promptStepLog = logger.New("workflow:prompt_step")
-
-// appendPromptStep generates a workflow step that appends content to the prompt file.
-// It encapsulates the common YAML scaffolding for prompt-related steps, reducing duplication
-// across multiple prompt generation helpers.
-//
-// Parameters:
-// - yaml: The string builder to write the YAML to
-// - stepName: The name of the workflow step (e.g., "Append XPIA security instructions to prompt")
-// - renderer: A function that writes the actual prompt content to the YAML
-// - condition: Optional condition string to add an 'if:' clause (empty string means no condition)
-// - indent: The indentation to use for nested content (typically " ")
-func appendPromptStep(yaml *strings.Builder, stepName string, renderer func(*strings.Builder, string), condition string, indent string) {
-	promptStepLog.Printf("Appending prompt step: name=%s, hasCondition=%v", stepName, condition != "")
-
-	yaml.WriteString(" - name: " + stepName + "\n")
-
-	// Add conditional if provided
-	if condition != "" {
-		promptStepLog.Printf("Adding condition: %s", condition)
-		yaml.WriteString(" if: " + condition + "\n")
-	}
-
-	yaml.WriteString(" env:\n")
-	yaml.WriteString(" GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n")
-	yaml.WriteString(" run: |\n")
-
-	// Call the renderer to write the actual content
-	renderer(yaml, indent)
-	promptStepLog.Print("Prompt step appended successfully")
-}
-
-// appendPromptStepWithHeredoc generates a workflow step that appends content to the prompt file
-// using a heredoc (cat << 'PROMPT_EOF' >> "$GH_AW_PROMPT" pattern).
-// This is used by compiler functions that need to embed static structured content without variable substitution. -// -// Parameters: -// - yaml: The string builder to write the YAML to -// - stepName: The name of the workflow step -// - renderer: A function that writes the content between the heredoc markers -func appendPromptStepWithHeredoc(yaml *strings.Builder, stepName string, renderer func(*strings.Builder)) { - promptStepLog.Printf("Appending prompt step with heredoc: name=%s", stepName) - - delimiter := GenerateHeredocDelimiter("PROMPT") - yaml.WriteString(" - name: " + stepName + "\n") - yaml.WriteString(" env:\n") - yaml.WriteString(" GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n") - yaml.WriteString(" run: |\n") - yaml.WriteString(" cat << '" + delimiter + "' >> \"$GH_AW_PROMPT\"\n") - - // Call the renderer to write the content - renderer(yaml) - - yaml.WriteString(" " + delimiter + "\n") - promptStepLog.Print("Heredoc prompt step appended successfully") -} diff --git a/pkg/workflow/prompt_step_helper_test.go b/pkg/workflow/prompt_step_helper_test.go deleted file mode 100644 index e2f0dc6145..0000000000 --- a/pkg/workflow/prompt_step_helper_test.go +++ /dev/null @@ -1,138 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -func TestGenerateStaticPromptStep(t *testing.T) { - tests := []struct { - name string - description string - promptText string - shouldInclude bool - wantOutput bool - wantInOutput []string - }{ - { - name: "generates step when shouldInclude is true", - description: "Append test instructions to prompt", - promptText: "Test prompt content\nLine 2", - shouldInclude: true, - wantOutput: true, - wantInOutput: []string{ - "- name: Append test instructions to prompt", - "GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt", - `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`, - "Test prompt content", - "Line 2", - "EOF", - }, - }, - { - name: "skips generation when shouldInclude is false", - 
description: "Append skipped instructions to prompt", - promptText: "This should not appear", - shouldInclude: false, - wantOutput: false, - wantInOutput: []string{}, - }, - { - name: "handles multiline prompt text correctly", - description: "Append multiline instructions to prompt", - promptText: "Line 1\nLine 2\nLine 3\nLine 4", - shouldInclude: true, - wantOutput: true, - wantInOutput: []string{ - "Line 1", - "Line 2", - "Line 3", - "Line 4", - }, - }, - { - name: "handles empty prompt text", - description: "Append empty instructions to prompt", - promptText: "", - shouldInclude: true, - wantOutput: true, - wantInOutput: []string{ - "- name: Append empty instructions to prompt", - `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`, - "EOF", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - var yaml strings.Builder - - generateStaticPromptStep(&yaml, tt.description, tt.promptText, tt.shouldInclude) - output := yaml.String() - - if tt.wantOutput { - if output == "" { - t.Error("Expected output to be generated, but got empty string") - } - - // Check that all expected strings are present - for _, want := range tt.wantInOutput { - if !strings.Contains(output, want) { - t.Errorf("Expected output to contain %q, but it didn't.\nGot:\n%s", want, output) - } - } - } else { - if output != "" { - t.Errorf("Expected no output when shouldInclude is false, but got:\n%s", output) - } - } - }) - } -} - -func TestGenerateStaticPromptStepConsistencyWithOriginal(t *testing.T) { - // Test that the new helper produces the same output as the original implementation - // by comparing with a known-good expected structure from appendPromptStep - - tests := []struct { - name string - description string - promptText string - }{ - { - name: "temp folder style prompt", - description: "Append temporary folder instructions to prompt", - promptText: "Use /tmp/gh-aw/agent/ directory", - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - 
// Generate using new helper - var helperYaml strings.Builder - generateStaticPromptStep(&helperYaml, tt.description, tt.promptText, true) - - // Generate using original pattern - var originalYaml strings.Builder - appendPromptStep(&originalYaml, - tt.description, - func(y *strings.Builder, indent string) { - WritePromptTextToYAML(y, tt.promptText, indent) - }, - "", // no condition - " ") - - helperOutput := helperYaml.String() - originalOutput := originalYaml.String() - - // Compare outputs - if helperOutput != originalOutput { - t.Errorf("Helper output does not match original.\nHelper:\n%s\nOriginal:\n%s", - helperOutput, originalOutput) - } - }) - } -} diff --git a/pkg/workflow/prompt_step_test.go b/pkg/workflow/prompt_step_test.go deleted file mode 100644 index 2dd62173a6..0000000000 --- a/pkg/workflow/prompt_step_test.go +++ /dev/null @@ -1,146 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -func TestAppendPromptStep(t *testing.T) { - tests := []struct { - name string - stepName string - condition string - wantSteps []string - }{ - { - name: "basic step without condition", - stepName: "Append test instructions to prompt", - condition: "", - wantSteps: []string{ - "- name: Append test instructions to prompt", - "env:", - "GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt", - "run: |", - `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`, - "Test prompt content", - "GH_AW_PROMPT_EOF", - }, - }, - { - name: "step with condition", - stepName: "Append conditional instructions to prompt", - condition: "github.event.issue != null", - wantSteps: []string{ - "- name: Append conditional instructions to prompt", - "if: github.event.issue != null", - "env:", - "GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt", - "run: |", - `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`, - "Conditional prompt content", - "GH_AW_PROMPT_EOF", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - var yaml strings.Builder 
- - // Call the helper with a simple renderer - var promptContent string - if tt.condition == "" { - promptContent = "Test prompt content" - } else { - promptContent = "Conditional prompt content" - } - - appendPromptStep(&yaml, tt.stepName, func(y *strings.Builder, indent string) { - WritePromptTextToYAML(y, promptContent, indent) - }, tt.condition, " ") - - result := yaml.String() - - // Check that all expected strings are present - for _, want := range tt.wantSteps { - if !strings.Contains(result, want) { - t.Errorf("Expected output to contain %q, but it didn't.\nGot:\n%s", want, result) - } - } - }) - } -} - -func TestAppendPromptStepWithHeredoc(t *testing.T) { - tests := []struct { - name string - stepName string - content string - wantSteps []string - }{ - { - name: "basic heredoc step", - stepName: "Append structured data to prompt", - content: "Structured content line 1\nStructured content line 2", - wantSteps: []string{ - "- name: Append structured data to prompt", - "env:", - "GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt", - "run: |", - `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`, - "Structured content line 1", - "Structured content line 2", - "GH_AW_PROMPT_EOF", - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - var yaml strings.Builder - - appendPromptStepWithHeredoc(&yaml, tt.stepName, func(y *strings.Builder) { - y.WriteString(tt.content) - }) - - result := yaml.String() - - // Check that all expected strings are present - for _, want := range tt.wantSteps { - if !strings.Contains(result, want) { - t.Errorf("Expected output to contain %q, but it didn't.\nGot:\n%s", want, result) - } - } - }) - } -} - -func TestPromptStepRefactoringConsistency(t *testing.T) { - // Test that the unified prompt step includes temp folder instructions - // (Previously tested individual prompt steps, now tests unified approach) - - t.Run("unified_prompt_step includes temp_folder", func(t *testing.T) { - var yaml strings.Builder - 
compiler := &Compiler{} - data := &WorkflowData{ - ParsedTools: NewTools(map[string]any{}), - } - compiler.generateUnifiedPromptStep(&yaml, data) - - result := yaml.String() - - // Verify key elements are present - if !strings.Contains(result, "Create prompt with built-in context") { - t.Error("Expected unified step name not found") - } - if !strings.Contains(result, "GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt") { - t.Error("Expected GH_AW_PROMPT env variable not found") - } - // Verify temp folder instructions are included - if !strings.Contains(result, `cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"`) { - t.Error("Expected cat command for temp folder prompt file not found") - } - }) -} diff --git a/pkg/workflow/safe_output_builder.go b/pkg/workflow/safe_output_builder.go deleted file mode 100644 index e03d459ddc..0000000000 --- a/pkg/workflow/safe_output_builder.go +++ /dev/null @@ -1,202 +0,0 @@ -package workflow - -import ( - "fmt" - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var safeOutputBuilderLog = logger.New("workflow:safe_output_builder") - -// ====================================== -// Generic Env Var Builders -// ====================================== - -// BuildTargetEnvVar builds a target environment variable line for safe-output jobs. -// envVarName should be the full env var name like "GH_AW_CLOSE_ISSUE_TARGET". -// Returns an empty slice if target is empty. -func BuildTargetEnvVar(envVarName string, target string) []string { - if target == "" { - return nil - } - return []string{fmt.Sprintf(" %s: %q\n", envVarName, target)} -} - -// BuildRequiredLabelsEnvVar builds a required-labels environment variable line for safe-output jobs. -// envVarName should be the full env var name like "GH_AW_CLOSE_ISSUE_REQUIRED_LABELS". -// Returns an empty slice if requiredLabels is empty. 
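The env-var builder helpers in `safe_output_builder.go` all follow one shape: return no lines when the option is unset, otherwise one pre-formatted YAML env line, so callers can append the result unconditionally. A simplified sketch (the 10-space indentation is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildTargetEnvVar mirrors BuildTargetEnvVar: nothing when target is empty,
// otherwise a single quoted env line.
func buildTargetEnvVar(name, target string) []string {
	if target == "" {
		return nil
	}
	return []string{fmt.Sprintf("          %s: %q\n", name, target)}
}

// buildRequiredLabelsEnvVar mirrors BuildRequiredLabelsEnvVar: labels are
// joined with commas into one quoted value.
func buildRequiredLabelsEnvVar(name string, labels []string) []string {
	if len(labels) == 0 {
		return nil
	}
	return []string{fmt.Sprintf("          %s: %q\n", name, strings.Join(labels, ","))}
}

func main() {
	var env []string
	env = append(env, buildTargetEnvVar("GH_AW_CLOSE_ISSUE_TARGET", "triage")...)
	env = append(env, buildRequiredLabelsEnvVar("GH_AW_CLOSE_ISSUE_REQUIRED_LABELS", []string{"bug", "p1"})...)
	env = append(env, buildTargetEnvVar("GH_AW_CLOSE_ISSUE_REQUIRED_TITLE_PREFIX", "")...) // unset: contributes nothing
	fmt.Print(strings.Join(env, ""))
}
```

The nil-for-unset convention is what lets `BuildCloseJobEnvVars` chain `append(..., helper(...)...)` without any conditionals.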
-func BuildRequiredLabelsEnvVar(envVarName string, requiredLabels []string) []string {
-	if len(requiredLabels) == 0 {
-		return nil
-	}
-	labelsStr := strings.Join(requiredLabels, ",")
-	return []string{fmt.Sprintf(" %s: %q\n", envVarName, labelsStr)}
-}
-
-// BuildRequiredTitlePrefixEnvVar builds a required-title-prefix environment variable line for safe-output jobs.
-// envVarName should be the full env var name like "GH_AW_CLOSE_ISSUE_REQUIRED_TITLE_PREFIX".
-// Returns an empty slice if requiredTitlePrefix is empty.
-func BuildRequiredTitlePrefixEnvVar(envVarName string, requiredTitlePrefix string) []string {
-	if requiredTitlePrefix == "" {
-		return nil
-	}
-	return []string{fmt.Sprintf(" %s: %q\n", envVarName, requiredTitlePrefix)}
-}
-
-// BuildRequiredCategoryEnvVar builds a required-category environment variable line for discussion safe-output jobs.
-// envVarName should be the full env var name like "GH_AW_CLOSE_DISCUSSION_REQUIRED_CATEGORY".
-// Returns an empty slice if requiredCategory is empty.
-func BuildRequiredCategoryEnvVar(envVarName string, requiredCategory string) []string {
-	if requiredCategory == "" {
-		return nil
-	}
-	return []string{fmt.Sprintf(" %s: %q\n", envVarName, requiredCategory)}
-}
-
-// BuildMaxCountEnvVar builds a max count environment variable line for safe-output jobs.
-// envVarName should be the full env var name like "GH_AW_CLOSE_ISSUE_MAX_COUNT".
-func BuildMaxCountEnvVar(envVarName string, maxCount int) []string {
-	return []string{fmt.Sprintf(" %s: %d\n", envVarName, maxCount)}
-}
-
-// overrideEnvVarLine replaces the first env var line in lines that starts with keyPrefix
-// with newLine. If no match is found, newLine is appended.
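The replace-or-append behaviour documented for `overrideEnvVarLine` can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
)

// overrideEnvVarLine replaces the first line whose trimmed text starts with
// keyPrefix; when nothing matches, newLine is appended instead.
func overrideEnvVarLine(lines []string, keyPrefix, newLine string) []string {
	for i, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), keyPrefix) {
			lines[i] = newLine
			return lines
		}
	}
	return append(lines, newLine)
}

func main() {
	env := []string{
		"          GH_AW_LABELS_ALLOWED: \"bug,docs\"\n",
		"          GH_AW_LABELS_MAX_COUNT: 3\n",
	}
	// Swap the literal max count for an expression-valued line.
	env = overrideEnvVarLine(env, "GH_AW_LABELS_MAX_COUNT:",
		"          GH_AW_LABELS_MAX_COUNT: ${{ inputs.max }}\n")
	fmt.Print(strings.Join(env, ""))
}
```

Matching on the `KEY:` prefix of the trimmed line makes the override insensitive to the YAML indentation the builders emit.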
-func overrideEnvVarLine(lines []string, keyPrefix string, newLine string) []string { - for i, line := range lines { - trimmed := strings.TrimSpace(line) - if strings.HasPrefix(trimmed, keyPrefix) { - lines[i] = newLine - return lines - } - } - return append(lines, newLine) -} - -// BuildAllowedListEnvVar builds an allowed list environment variable line for safe-output jobs. -// envVarName should be the full env var name like "GH_AW_LABELS_ALLOWED". -// Always outputs the env var, even when empty (empty string means "allow all"). -func BuildAllowedListEnvVar(envVarName string, allowed []string) []string { - allowedStr := strings.Join(allowed, ",") - return []string{fmt.Sprintf(" %s: %q\n", envVarName, allowedStr)} -} - -// ====================================== -// Close Job Env Var Builders -// ====================================== - -// BuildCloseJobEnvVars builds common environment variables for close operations. -// prefix should be like "GH_AW_CLOSE_ISSUE" or "GH_AW_CLOSE_PR". -// Returns a slice of environment variable lines. -func BuildCloseJobEnvVars(prefix string, config CloseJobConfig) []string { - var envVars []string - - // Add target - envVars = append(envVars, BuildTargetEnvVar(prefix+"_TARGET", config.Target)...) - - // Add required labels - envVars = append(envVars, BuildRequiredLabelsEnvVar(prefix+"_REQUIRED_LABELS", config.RequiredLabels)...) - - // Add required title prefix - envVars = append(envVars, BuildRequiredTitlePrefixEnvVar(prefix+"_REQUIRED_TITLE_PREFIX", config.RequiredTitlePrefix)...) - - return envVars -} - -// ====================================== -// List Job Env Var Builders -// ====================================== - -// BuildListJobEnvVars builds common environment variables for list-based operations. -// prefix should be like "GH_AW_LABELS" or "GH_AW_REVIEWERS". -// Returns a slice of environment variable lines. 
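`BuildListJobEnvVars` composes the helpers; note that the allowed/blocked lists are always emitted (an empty string means "allow all") while target is optional. A reduced sketch with plain slices in place of `ListJobConfig` (that signature change is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// buildAllowedListEnvVar mirrors BuildAllowedListEnvVar: always one line,
// even for an empty list, because "" is the allow-all sentinel.
func buildAllowedListEnvVar(name string, allowed []string) []string {
	return []string{fmt.Sprintf("          %s: %q\n", name, strings.Join(allowed, ","))}
}

// buildListJobEnvVars sketches BuildListJobEnvVars: allowed, blocked, then
// max count, each under a shared env-var prefix.
func buildListJobEnvVars(prefix string, allowed, blocked []string, maxCount int) []string {
	var env []string
	env = append(env, buildAllowedListEnvVar(prefix+"_ALLOWED", allowed)...)
	env = append(env, buildAllowedListEnvVar(prefix+"_BLOCKED", blocked)...)
	env = append(env, fmt.Sprintf("          %s_MAX_COUNT: %d\n", prefix, maxCount))
	return env
}

func main() {
	fmt.Print(strings.Join(buildListJobEnvVars("GH_AW_LABELS", []string{"bug"}, nil, 5), ""))
}
```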
-func BuildListJobEnvVars(prefix string, config ListJobConfig, maxCount int) []string { - var envVars []string - - // Add allowed list - envVars = append(envVars, BuildAllowedListEnvVar(prefix+"_ALLOWED", config.Allowed)...) - - // Add blocked list - envVars = append(envVars, BuildAllowedListEnvVar(prefix+"_BLOCKED", config.Blocked)...) - - // Add max count - envVars = append(envVars, BuildMaxCountEnvVar(prefix+"_MAX_COUNT", maxCount)...) - - // Add target - envVars = append(envVars, BuildTargetEnvVar(prefix+"_TARGET", config.Target)...) - - return envVars -} - -// ====================================== -// List Job Builder Helpers -// ====================================== - -// ListJobBuilderConfig contains parameters for building list-based safe-output jobs -type ListJobBuilderConfig struct { - JobName string // e.g., "add_labels", "assign_milestone" - StepName string // e.g., "Add Labels", "Assign Milestone" - StepID string // e.g., "add_labels", "assign_milestone" - EnvPrefix string // e.g., "GH_AW_LABELS", "GH_AW_MILESTONE" - OutputName string // e.g., "labels_added", "assigned_milestones" - Script string // JavaScript script for the operation - Permissions *Permissions // Job permissions - DefaultMax int // Default max count if not specified in config - ExtraCondition ConditionNode // Additional condition to append (optional) -} - -// BuildListSafeOutputJob builds a list-based safe-output job using shared logic. -// This consolidates the common builder pattern used by add-labels, assign-milestone, and assign-to-user. 
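The max-count handling inside `BuildListSafeOutputJob` — a configured literal wins, otherwise fall back to the job's `DefaultMax`, with expression-valued maxes patched in afterwards via `overrideEnvVarLine` — reduces to (function name hypothetical):

```go
package main

import "fmt"

// resolveMaxCount sketches the default handling: templatableIntValue returns
// 0 when max is unset or is a ${{ ... }} expression rather than a literal,
// and that zero falls through to the per-job default.
func resolveMaxCount(configured, defaultMax int) int {
	if configured > 0 {
		return configured
	}
	return defaultMax
}

func main() {
	fmt.Println(resolveMaxCount(0, 3)) // no literal configured → default 3
	fmt.Println(resolveMaxCount(7, 3)) // explicit literal wins → 7
}
```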
-func (c *Compiler) BuildListSafeOutputJob(data *WorkflowData, mainJobName string, listJobConfig ListJobConfig, baseSafeOutputConfig BaseSafeOutputConfig, builderConfig ListJobBuilderConfig) (*Job, error) { - safeOutputBuilderLog.Printf("Building list safe-output job: %s", builderConfig.JobName) - - // Handle max count with default – use literal integer if set, else fall back to DefaultMax - maxCount := builderConfig.DefaultMax - if n := templatableIntValue(baseSafeOutputConfig.Max); n > 0 { - maxCount = n - } - safeOutputBuilderLog.Printf("Max count set to: %d", maxCount) - - // Build custom environment variables using shared helpers - customEnvVars := BuildListJobEnvVars(builderConfig.EnvPrefix, listJobConfig, maxCount) - - // If max is a GitHub Actions expression, override with the expression value - if baseSafeOutputConfig.Max != nil && templatableIntValue(baseSafeOutputConfig.Max) == 0 { - exprLine := buildTemplatableIntEnvVar(builderConfig.EnvPrefix+"_MAX_COUNT", baseSafeOutputConfig.Max) - if len(exprLine) > 0 { - prefix := builderConfig.EnvPrefix + "_MAX_COUNT:" - customEnvVars = overrideEnvVarLine(customEnvVars, prefix, exprLine[0]) - } - } - - // Add standard environment variables (metadata + staged/target repo) - customEnvVars = append(customEnvVars, c.buildStandardSafeOutputEnvVars(data, listJobConfig.TargetRepoSlug)...) 
- - // Create outputs for the job - outputs := map[string]string{ - builderConfig.OutputName: fmt.Sprintf("${{ steps.%s.outputs.%s }}", builderConfig.StepID, builderConfig.OutputName), - } - - // Build base job condition - jobCondition := BuildSafeOutputType(builderConfig.JobName) - - // Add extra condition if provided - if builderConfig.ExtraCondition != nil { - jobCondition = BuildAnd(jobCondition, builderConfig.ExtraCondition) - } - - // Use the shared builder function to create the job - return c.buildSafeOutputJob(data, SafeOutputJobConfig{ - JobName: builderConfig.JobName, - StepName: builderConfig.StepName, - StepID: builderConfig.StepID, - MainJobName: mainJobName, - CustomEnvVars: customEnvVars, - Script: builderConfig.Script, - Permissions: builderConfig.Permissions, - Outputs: outputs, - Condition: jobCondition, - Token: baseSafeOutputConfig.GitHubToken, - TargetRepoSlug: listJobConfig.TargetRepoSlug, - }) -} diff --git a/pkg/workflow/safe_outputs_app_import_test.go b/pkg/workflow/safe_outputs_app_import_test.go index fc535f33cb..ce97525831 100644 --- a/pkg/workflow/safe_outputs_app_import_test.go +++ b/pkg/workflow/safe_outputs_app_import_test.go @@ -5,7 +5,6 @@ package workflow import ( "os" "path/filepath" - "strings" "testing" "github.com/stretchr/testify/assert" @@ -148,72 +147,3 @@ This workflow overrides the imported app configuration. 
assert.Equal(t, "${{ secrets.LOCAL_APP_SECRET }}", workflowData.SafeOutputs.App.PrivateKey) assert.Equal(t, []string{"repo2"}, workflowData.SafeOutputs.App.Repositories) } - -// TestSafeOutputsAppImportStepGeneration tests that imported app config generates correct steps -func TestSafeOutputsAppImportStepGeneration(t *testing.T) { - compiler := NewCompilerWithVersion("1.0.0") - - // Create a temporary directory for test files - tmpDir := t.TempDir() - workflowsDir := filepath.Join(tmpDir, ".github", "workflows") - err := os.MkdirAll(workflowsDir, 0755) - require.NoError(t, err, "Failed to create workflows directory") - - // Create a shared workflow with app configuration - sharedWorkflow := `--- -safe-outputs: - app: - app-id: ${{ vars.SHARED_APP_ID }} - private-key: ${{ secrets.SHARED_APP_SECRET }} ---- - -# Shared App Configuration -` - - sharedFile := filepath.Join(workflowsDir, "shared-app.md") - err = os.WriteFile(sharedFile, []byte(sharedWorkflow), 0644) - require.NoError(t, err, "Failed to write shared file") - - // Create main workflow that imports the app configuration - mainWorkflow := `--- -on: issues -permissions: - contents: read -imports: - - ./shared-app.md -safe-outputs: - create-issue: ---- - -# Main Workflow -` - - mainFile := filepath.Join(workflowsDir, "main.md") - err = os.WriteFile(mainFile, []byte(mainWorkflow), 0644) - require.NoError(t, err, "Failed to write main file") - - // Change to the workflows directory for relative path resolution - oldDir, err := os.Getwd() - require.NoError(t, err, "Failed to get current directory") - err = os.Chdir(workflowsDir) - require.NoError(t, err, "Failed to change directory") - defer os.Chdir(oldDir) - - // Parse the workflow - workflowData, err := compiler.ParseWorkflowFile("main.md") - require.NoError(t, err, "Failed to parse workflow") - - // Build the safe_outputs job - job, err := compiler.buildCreateOutputIssueJob(workflowData, "main") - require.NoError(t, err, "Failed to build safe_outputs job") - 
require.NotNil(t, job, "Job should not be nil") - - // Convert steps to string - stepsStr := strings.Join(job.Steps, "") - - // Verify token minting and invalidation steps are present - assert.Contains(t, stepsStr, "Generate GitHub App token", "Token minting step should be present") - assert.Contains(t, stepsStr, "Invalidate GitHub App token", "Token invalidation step should be present") - assert.Contains(t, stepsStr, "${{ vars.SHARED_APP_ID }}", "Should use imported app ID") - assert.Contains(t, stepsStr, "${{ secrets.SHARED_APP_SECRET }}", "Should use imported secret") -} diff --git a/pkg/workflow/safe_outputs_app_test.go b/pkg/workflow/safe_outputs_app_test.go index ef79165c9d..698091997e 100644 --- a/pkg/workflow/safe_outputs_app_test.go +++ b/pkg/workflow/safe_outputs_app_test.go @@ -85,104 +85,6 @@ Test workflow with minimal app configuration. assert.Empty(t, workflowData.SafeOutputs.App.Repositories) } -// TestSafeOutputsAppTokenMintingStep tests that token minting step is generated -func TestSafeOutputsAppTokenMintingStep(t *testing.T) { - compiler := NewCompilerWithVersion("1.0.0") - - markdown := `--- -on: issues -permissions: - contents: read -safe-outputs: - create-issue: - app: - app-id: ${{ vars.APP_ID }} - private-key: ${{ secrets.APP_PRIVATE_KEY }} ---- - -# Test Workflow - -Test workflow with app token minting. 
-` - - // Create a temporary test file - tmpDir := t.TempDir() - testFile := filepath.Join(tmpDir, "test.md") - err := os.WriteFile(testFile, []byte(markdown), 0644) - require.NoError(t, err, "Failed to write test file") - - workflowData, err := compiler.ParseWorkflowFile(testFile) - require.NoError(t, err, "Failed to parse markdown content") - - // Build the safe_outputs job - job, err := compiler.buildCreateOutputIssueJob(workflowData, "main") - require.NoError(t, err, "Failed to build safe_outputs job") - require.NotNil(t, job, "Job should not be nil") - - // Convert steps to string for easier assertion - stepsStr := strings.Join(job.Steps, "") - - // Verify token minting step is present - assert.Contains(t, stepsStr, "Generate GitHub App token", "Token minting step should be present") - assert.Contains(t, stepsStr, "actions/create-github-app-token", "Should use create-github-app-token action") - assert.Contains(t, stepsStr, "app-id: ${{ vars.APP_ID }}", "Should use configured app ID") - assert.Contains(t, stepsStr, "private-key: ${{ secrets.APP_PRIVATE_KEY }}", "Should use configured private key") - - // Verify token invalidation step is present - assert.Contains(t, stepsStr, "Invalidate GitHub App token", "Token invalidation step should be present") - assert.Contains(t, stepsStr, "if: always()", "Invalidation step should always run") - assert.Contains(t, stepsStr, "/installation/token", "Should call token invalidation endpoint") - - // Verify token is used in github-script step - assert.Contains(t, stepsStr, "${{ steps.safe-outputs-app-token.outputs.token }}", "Should use app token in github-script") -} - -// TestSafeOutputsAppTokenMintingStepWithRepositories tests token minting with repositories -func TestSafeOutputsAppTokenMintingStepWithRepositories(t *testing.T) { - compiler := NewCompilerWithVersion("1.0.0") - - markdown := `--- -on: issues -permissions: - contents: read -safe-outputs: - create-issue: - app: - app-id: ${{ vars.APP_ID }} - private-key: ${{ 
secrets.APP_PRIVATE_KEY }} - repositories: - - "repo1" - - "repo2" ---- - -# Test Workflow - -Test workflow with app token minting and repository restrictions. -` - - // Create a temporary test file - tmpDir := t.TempDir() - testFile := filepath.Join(tmpDir, "test.md") - err := os.WriteFile(testFile, []byte(markdown), 0644) - require.NoError(t, err, "Failed to write test file") - - workflowData, err := compiler.ParseWorkflowFile(testFile) - require.NoError(t, err, "Failed to parse markdown content") - - // Build the safe_outputs job - job, err := compiler.buildCreateOutputIssueJob(workflowData, "main") - require.NoError(t, err, "Failed to build safe_outputs job") - require.NotNil(t, job, "Job should not be nil") - - // Convert steps to string for easier assertion - stepsStr := strings.Join(job.Steps, "") - - // Verify repositories are included in the minting step using block scalar format - assert.Contains(t, stepsStr, "repositories: |-", "Should use block scalar format for multiple repositories") - assert.Contains(t, stepsStr, "repo1", "Should include first repository") - assert.Contains(t, stepsStr, "repo2", "Should include second repository") -} - // TestSafeOutputsAppWithoutSafeOutputs tests that app without safe outputs doesn't break func TestSafeOutputsAppWithoutSafeOutputs(t *testing.T) { compiler := NewCompilerWithVersion("1.0.0") @@ -209,57 +111,6 @@ Test workflow without safe outputs. assert.Nil(t, workflowData.SafeOutputs, "SafeOutputs should be nil") } -// TestSafeOutputsAppTokenOrgWide tests org-wide GitHub App token with wildcard -func TestSafeOutputsAppTokenOrgWide(t *testing.T) { - compiler := NewCompilerWithVersion("1.0.0") - - markdown := `--- -on: issues -permissions: - contents: read -safe-outputs: - create-issue: - app: - app-id: ${{ vars.APP_ID }} - private-key: ${{ secrets.APP_PRIVATE_KEY }} - repositories: - - "*" ---- - -# Test Workflow - -Test workflow with org-wide app token. 
-` - - // Create a temporary test file - tmpDir := t.TempDir() - testFile := filepath.Join(tmpDir, "test.md") - err := os.WriteFile(testFile, []byte(markdown), 0644) - require.NoError(t, err, "Failed to write test file") - - workflowData, err := compiler.ParseWorkflowFile(testFile) - require.NoError(t, err, "Failed to parse markdown content") - - // Build the safe_outputs job - job, err := compiler.buildCreateOutputIssueJob(workflowData, "main") - require.NoError(t, err, "Failed to build safe_outputs job") - require.NotNil(t, job, "Job should not be nil") - - // Convert steps to string for easier assertion - stepsStr := strings.Join(job.Steps, "") - - // Verify token minting step is present - assert.Contains(t, stepsStr, "Generate GitHub App token", "Token minting step should be present") - assert.Contains(t, stepsStr, "actions/create-github-app-token", "Should use create-github-app-token action") - - // Verify repositories field is NOT present (org-wide access) - assert.NotContains(t, stepsStr, "repositories:", "Should not include repositories field for org-wide access") - - // Verify other fields are still present - assert.Contains(t, stepsStr, "owner:", "Should include owner field") - assert.Contains(t, stepsStr, "app-id:", "Should include app-id field") -} - // TestSafeOutputsAppTokenDiscussionsPermission tests that discussions permission is included func TestSafeOutputsAppTokenDiscussionsPermission(t *testing.T) { compiler := NewCompilerWithVersion("1.0.0") diff --git a/pkg/workflow/safe_outputs_env_test.go b/pkg/workflow/safe_outputs_env_test.go deleted file mode 100644 index b16926aca3..0000000000 --- a/pkg/workflow/safe_outputs_env_test.go +++ /dev/null @@ -1,196 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "testing" -) - -func TestSafeOutputsEnvConfiguration(t *testing.T) { - compiler := NewCompiler() - - t.Run("Should parse env configuration in safe-outputs", func(t *testing.T) { - frontmatter := map[string]any{ - "name": "Test 
Workflow", - "safe-outputs": map[string]any{ - "create-issue": nil, - "env": map[string]any{ - "GITHUB_TOKEN": "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "CUSTOM_API_KEY": "${{ secrets.CUSTOM_API_KEY }}", - "DEBUG_MODE": "true", - }, - }, - } - - config := compiler.extractSafeOutputsConfig(frontmatter) - if config == nil { - t.Fatal("Expected SafeOutputsConfig to be parsed") - } - - if config.Env == nil { - t.Fatal("Expected Env to be parsed") - } - - expected := map[string]string{ - "GITHUB_TOKEN": "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "CUSTOM_API_KEY": "${{ secrets.CUSTOM_API_KEY }}", - "DEBUG_MODE": "true", - } - - for key, expectedValue := range expected { - if actualValue, exists := config.Env[key]; !exists { - t.Errorf("Expected env key %s to exist", key) - } else if actualValue != expectedValue { - t.Errorf("Expected env[%s] to be %q, got %q", key, expectedValue, actualValue) - } - } - }) - - t.Run("Should include custom env vars in create-issue job", func(t *testing.T) { - data := &WorkflowData{ - Name: "Test", - FrontmatterName: "Test Workflow", - SafeOutputs: &SafeOutputsConfig{ - CreateIssues: &CreateIssuesConfig{BaseSafeOutputConfig: BaseSafeOutputConfig{Max: strPtr("1")}}, - Env: map[string]string{ - "GITHUB_TOKEN": "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "DEBUG_MODE": "true", - }, - }, - } - - job, err := compiler.buildCreateOutputIssueJob(data, "main_job") - if err != nil { - t.Fatalf("Failed to build create issue job: %v", err) - } - - expectedEnvVars := []string{ - "GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "DEBUG_MODE: true", - } - assertEnvVarsInSteps(t, job.Steps, expectedEnvVars) - }) - - t.Run("Should include custom env vars in create-pull-request job", func(t *testing.T) { - data := &WorkflowData{ - Name: "Test", - FrontmatterName: "Test Workflow", - SafeOutputs: &SafeOutputsConfig{ - CreatePullRequests: &CreatePullRequestsConfig{BaseSafeOutputConfig: BaseSafeOutputConfig{Max: 
strPtr("1")}}, - Env: map[string]string{ - "GITHUB_TOKEN": "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "API_ENDPOINT": "https://api.example.com", - }, - }, - } - - job, err := compiler.buildCreateOutputPullRequestJob(data, "main_job") - if err != nil { - t.Fatalf("Failed to build create pull request job: %v", err) - } - - expectedEnvVars := []string{ - "GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}", - "API_ENDPOINT: https://api.example.com", - } - assertEnvVarsInSteps(t, job.Steps, expectedEnvVars) - }) - - t.Run("Should work without env configuration", func(t *testing.T) { - frontmatter := map[string]any{ - "name": "Test Workflow", - "safe-outputs": map[string]any{ - "create-issue": nil, - }, - } - - config := compiler.extractSafeOutputsConfig(frontmatter) - if config == nil { - t.Fatal("Expected SafeOutputsConfig to be parsed") - } - - // Env should be nil when not specified - if config.Env != nil { - t.Error("Expected Env to be nil when not configured") - } - - // Job creation should still work - data := &WorkflowData{ - Name: "Test", - FrontmatterName: "Test Workflow", - SafeOutputs: config, - } - - _, err := compiler.buildCreateOutputIssueJob(data, "main_job") - if err != nil { - t.Errorf("Job creation should work without env configuration: %v", err) - } - }) - - t.Run("Should handle empty env configuration", func(t *testing.T) { - frontmatter := map[string]any{ - "name": "Test Workflow", - "safe-outputs": map[string]any{ - "create-issue": nil, - "env": map[string]any{}, - }, - } - - config := compiler.extractSafeOutputsConfig(frontmatter) - if config == nil { - t.Fatal("Expected SafeOutputsConfig to be parsed") - } - - if config.Env == nil { - t.Error("Expected Env to be empty map, not nil") - } - - if len(config.Env) != 0 { - t.Errorf("Expected Env to be empty, got %d entries", len(config.Env)) - } - }) - - t.Run("Should handle non-string env values gracefully", func(t *testing.T) { - frontmatter := map[string]any{ - "name": "Test 
Workflow", - "safe-outputs": map[string]any{ - "create-issue": nil, - "env": map[string]any{ - "STRING_VALUE": "valid", - "INT_VALUE": 123, // should be ignored - "BOOL_VALUE": true, // should be ignored - "NULL_VALUE": nil, // should be ignored - }, - }, - } - - config := compiler.extractSafeOutputsConfig(frontmatter) - if config == nil { - t.Fatal("Expected SafeOutputsConfig to be parsed") - } - - if config.Env == nil { - t.Fatal("Expected Env to be parsed") - } - - // Only string values should be included - if len(config.Env) != 1 { - t.Errorf("Expected only 1 env var (string values only), got %d", len(config.Env)) - } - - if config.Env["STRING_VALUE"] != "valid" { - t.Error("Expected STRING_VALUE to be preserved") - } - - // Non-string values should be ignored - if _, exists := config.Env["INT_VALUE"]; exists { - t.Error("Expected INT_VALUE to be ignored") - } - if _, exists := config.Env["BOOL_VALUE"]; exists { - t.Error("Expected BOOL_VALUE to be ignored") - } - if _, exists := config.Env["NULL_VALUE"]; exists { - t.Error("Expected NULL_VALUE to be ignored") - } - }) -} diff --git a/pkg/workflow/safe_outputs_messages_test.go b/pkg/workflow/safe_outputs_messages_test.go index 73a4e1472c..8f3754077f 100644 --- a/pkg/workflow/safe_outputs_messages_test.go +++ b/pkg/workflow/safe_outputs_messages_test.go @@ -193,55 +193,3 @@ func TestSerializeMessagesConfig(t *testing.T) { } }) } - -func TestMessagesEnvVarInSafeOutputJobs(t *testing.T) { - compiler := NewCompiler() - - t.Run("Should include GH_AW_SAFE_OUTPUT_MESSAGES env var when messages configured", func(t *testing.T) { - data := &WorkflowData{ - Name: "Test", - FrontmatterName: "Test Workflow", - SafeOutputs: &SafeOutputsConfig{ - CreateIssues: &CreateIssuesConfig{BaseSafeOutputConfig: BaseSafeOutputConfig{Max: strPtr("1")}}, - Messages: &SafeOutputMessagesConfig{ - Footer: "> Custom footer [{workflow_name}]({run_url})", - }, - }, - } - - job, err := compiler.buildCreateOutputIssueJob(data, "main_job") - if 
err != nil { - t.Fatalf("Failed to build create issue job: %v", err) - } - - stepsStr := strings.Join(job.Steps, "") - if !strings.Contains(stepsStr, "GH_AW_SAFE_OUTPUT_MESSAGES:") { - t.Error("Expected GH_AW_SAFE_OUTPUT_MESSAGES to be included in job steps") - } - - // Verify it contains the serialized footer - if !strings.Contains(stepsStr, "Custom footer") { - t.Error("Expected serialized messages to contain the custom footer text") - } - }) - - t.Run("Should not include GH_AW_SAFE_OUTPUT_MESSAGES when messages not configured", func(t *testing.T) { - data := &WorkflowData{ - Name: "Test", - FrontmatterName: "Test Workflow", - SafeOutputs: &SafeOutputsConfig{ - CreateIssues: &CreateIssuesConfig{BaseSafeOutputConfig: BaseSafeOutputConfig{Max: strPtr("1")}}, - }, - } - - job, err := compiler.buildCreateOutputIssueJob(data, "main_job") - if err != nil { - t.Fatalf("Failed to build create issue job: %v", err) - } - - stepsStr := strings.Join(job.Steps, "") - if strings.Contains(stepsStr, "GH_AW_SAFE_OUTPUT_MESSAGES:") { - t.Error("Expected GH_AW_SAFE_OUTPUT_MESSAGES to NOT be included when messages not configured") - } - }) -} diff --git a/pkg/workflow/script_registry.go b/pkg/workflow/script_registry.go index 602d9a813a..bb0c6cc026 100644 --- a/pkg/workflow/script_registry.go +++ b/pkg/workflow/script_registry.go @@ -1,60 +1,6 @@ -// This file provides a ScriptRegistry for managing JavaScript script bundling. -// -// # Script Registry Pattern -// -// The ScriptRegistry eliminates the repetitive sync.Once pattern found throughout -// the codebase for lazy script bundling. Instead of declaring separate variables -// and getter functions for each script, register scripts once and retrieve them -// by name with runtime mode verification. 
-// -// # Before (repetitive pattern): -// -// var ( -// createIssueScript string -// createIssueScriptOnce sync.Once -// ) -// -// func getCreateIssueScript() string { -// createIssueScriptOnce.Do(func() { -// sources := GetJavaScriptSources() -// bundled, err := BundleJavaScriptFromSources(createIssueScriptSource, sources, "") -// if err != nil { -// createIssueScript = createIssueScriptSource -// } else { -// createIssueScript = bundled -// } -// }) -// return createIssueScript -// } -// -// # After (using registry with runtime mode verification): -// -// // Registration at package init -// DefaultScriptRegistry.RegisterWithMode("create_issue", createIssueScriptSource, RuntimeModeGitHubScript) -// -// // Usage anywhere with mode verification -// script := DefaultScriptRegistry.GetWithMode("create_issue", RuntimeModeGitHubScript) -// -// # Benefits -// -// - Eliminates ~15 lines of boilerplate per script (variable pair + getter function) -// - Centralizes bundling logic -// - Consistent error handling -// - Thread-safe lazy initialization -// - Easy to add new scripts -// - Runtime mode verification prevents mismatches between registration and usage -// -// # Runtime Mode Verification -// -// The GetWithMode() method verifies that the requested runtime mode matches the mode -// the script was registered with. This catches configuration errors at compile time -// rather than at runtime. If there's a mismatch, a warning is logged but the script -// is still returned to avoid breaking workflows. - package workflow import ( - "fmt" "strings" "sync" @@ -63,26 +9,12 @@ import ( var registryLog = logger.New("workflow:script_registry") -// scriptEntry holds the source and bundled versions of a script +// scriptEntry holds metadata about a registered script. 
type scriptEntry struct { - source string - bundled string - mode RuntimeMode // Runtime mode for bundling - actionPath string // Optional path to custom action (e.g., "./actions/create-issue") - once sync.Once + actionPath string // Optional path to custom action (e.g., "./actions/create-issue") } -// ScriptRegistry manages lazy bundling of JavaScript scripts. -// It provides a centralized place to register source scripts and retrieve -// bundled versions on-demand with caching. -// -// Thread-safe: All operations use internal synchronization. -// -// Usage: -// -// registry := NewScriptRegistry() -// registry.Register("my_script", myScriptSource) -// bundled := registry.Get("my_script") +// ScriptRegistry manages script metadata and custom action paths. type ScriptRegistry struct { mu sync.RWMutex scripts map[string]*scriptEntry @@ -96,109 +28,6 @@ func NewScriptRegistry() *ScriptRegistry { } } -// Register adds a script source to the registry. -// The script will be bundled lazily on first access via Get(). -// Scripts registered this way default to RuntimeModeGitHubScript. -// -// Parameters: -// - name: Unique identifier for the script (e.g., "create_issue", "add_comment") -// - source: The raw JavaScript source code (typically from go:embed) -// -// If a script with the same name already exists, it will be overwritten. -// This is useful for testing but should be avoided in production. -// -// Returns an error if validation fails. -func (r *ScriptRegistry) Register(name string, source string) error { - return r.RegisterWithMode(name, source, RuntimeModeGitHubScript) -} - -// RegisterWithMode adds a script source to the registry with a specific runtime mode. -// The script will be bundled lazily on first access via Get(). -// Performs compile-time validation to ensure the script follows runtime mode conventions. 
-// -// Parameters: -// - name: Unique identifier for the script (e.g., "create_issue", "add_comment") -// - source: The raw JavaScript source code (typically from go:embed) -// - mode: Runtime mode for bundling (GitHub Script or Node.js) -// -// If a script with the same name already exists, it will be overwritten. -// This is useful for testing but should be avoided in production. -// -// Compile-time validations: -// - GitHub Script mode: validates no execSync usage (should use exec instead) -// - Node.js mode: validates no GitHub Actions globals (core.*, exec.*, github.*) -// -// Returns an error if validation fails, allowing the caller to handle gracefully -// instead of crashing the process. -func (r *ScriptRegistry) RegisterWithMode(name string, source string, mode RuntimeMode) error { - r.mu.Lock() - defer r.mu.Unlock() - - if registryLog.Enabled() { - registryLog.Printf("Registering script: %s (%d bytes, mode: %s)", name, len(source), mode) - } - - // Perform compile-time validation based on runtime mode - if err := validateNoExecSync(name, source, mode); err != nil { - return fmt.Errorf("script registration validation failed for %q: %w", name, err) - } - - if err := validateNoGitHubScriptGlobals(name, source, mode); err != nil { - return fmt.Errorf("script registration validation failed for %q: %w", name, err) - } - - r.scripts[name] = &scriptEntry{ - source: source, - mode: mode, - actionPath: "", // No custom action by default - } - - return nil -} - -// RegisterWithAction registers a script with both inline code and a custom action path. -// This allows the compiler to choose between inline mode (using actions/github-script) -// or custom action mode (using the provided action path). 
-// -// Parameters: -// - name: Unique identifier for the script (e.g., "create_issue") -// - source: The raw JavaScript source code (for inline mode) -// - mode: Runtime mode for bundling (GitHub Script or Node.js) -// - actionPath: Path to custom action (e.g., "./actions/create-issue" for development) -// -// The actionPath should be a relative path from the repository root for development mode. -// In the future, this can be extended to support versioned references like -// "github/gh-aw/.github/actions/create-issue@SHA" for release mode. -// -// Returns an error if validation fails, allowing the caller to handle gracefully -// instead of crashing the process. -func (r *ScriptRegistry) RegisterWithAction(name string, source string, mode RuntimeMode, actionPath string) error { - r.mu.Lock() - defer r.mu.Unlock() - - if registryLog.Enabled() { - registryLog.Printf("Registering script with action: %s (%d bytes, mode: %s, action: %s)", - name, len(source), mode, actionPath) - } - - // Perform compile-time validation based on runtime mode - if err := validateNoExecSync(name, source, mode); err != nil { - return fmt.Errorf("script registration validation failed for %q: %w", name, err) - } - - if err := validateNoGitHubScriptGlobals(name, source, mode); err != nil { - return fmt.Errorf("script registration validation failed for %q: %w", name, err) - } - - r.scripts[name] = &scriptEntry{ - source: source, - mode: mode, - actionPath: actionPath, - } - - return nil -} - // GetActionPath retrieves the custom action path for a script, if registered. // Returns an empty string if the script doesn't have a custom action path. func (r *ScriptRegistry) GetActionPath(name string) string { @@ -218,162 +47,18 @@ func (r *ScriptRegistry) GetActionPath(name string) string { return entry.actionPath } -// Get retrieves a bundled script by name. -// Bundling is performed lazily on first access and cached for subsequent calls. 
-// -// If bundling fails, the original source is returned as a fallback. -// If the script is not registered, an empty string is returned. -// -// Thread-safe: Multiple goroutines can call Get concurrently. -// -// DEPRECATED: Use GetWithMode instead to specify the expected runtime mode. -// This allows the compiler to verify the runtime mode matches the registered mode. -func (r *ScriptRegistry) Get(name string) string { - r.mu.RLock() - entry, exists := r.scripts[name] - r.mu.RUnlock() - - if !exists { - if registryLog.Enabled() { - registryLog.Printf("Script not found: %s", name) - } - return "" - } - - entry.once.Do(func() { - if registryLog.Enabled() { - registryLog.Printf("Bundling script: %s (mode: %s)", name, entry.mode) - } - - sources := GetJavaScriptSources() - bundled, err := BundleJavaScriptWithMode(entry.source, sources, "", entry.mode) - if err != nil { - registryLog.Printf("Bundling failed for %s, using source as-is: %v", name, err) - entry.bundled = entry.source - } else { - if registryLog.Enabled() { - registryLog.Printf("Successfully bundled %s: %d bytes", name, len(bundled)) - } - entry.bundled = bundled - } - }) - - return entry.bundled -} - -// GetWithMode retrieves a bundled script by name with runtime mode verification. -// Bundling is performed lazily on first access and cached for subsequent calls. -// -// The expectedMode parameter allows the compiler to verify that the registered runtime mode -// matches what the caller expects. If there's a mismatch, a warning is logged but the script -// is still returned to avoid breaking existing workflows. -// -// If bundling fails, the original source is returned as a fallback. -// If the script is not registered, an empty string is returned. -// -// Thread-safe: Multiple goroutines can call GetWithMode concurrently. 
-func (r *ScriptRegistry) GetWithMode(name string, expectedMode RuntimeMode) string { - r.mu.RLock() - entry, exists := r.scripts[name] - r.mu.RUnlock() - - if !exists { - if registryLog.Enabled() { - registryLog.Printf("Script not found: %s", name) - } - return "" - } - - // Verify the runtime mode matches what the caller expects - if entry.mode != expectedMode { - registryLog.Printf("WARNING: Runtime mode mismatch for script %s: registered as %s but requested as %s", - name, entry.mode, expectedMode) - } - - entry.once.Do(func() { - if registryLog.Enabled() { - registryLog.Printf("Bundling script: %s (mode: %s)", name, entry.mode) - } - - sources := GetJavaScriptSources() - bundled, err := BundleJavaScriptWithMode(entry.source, sources, "", entry.mode) - if err != nil { - registryLog.Printf("Bundling failed for %s, using source as-is: %v", name, err) - entry.bundled = entry.source - } else { - if registryLog.Enabled() { - registryLog.Printf("Successfully bundled %s: %d bytes", name, len(bundled)) - } - entry.bundled = bundled - } - }) - - return entry.bundled -} - -// GetSource retrieves the original (unbundled) source for a script. -// Useful for testing or when bundling is not needed. -func (r *ScriptRegistry) GetSource(name string) string { - r.mu.RLock() - defer r.mu.RUnlock() - - entry, exists := r.scripts[name] - if !exists { - return "" - } - return entry.source -} - -// Has checks if a script is registered in the registry. -func (r *ScriptRegistry) Has(name string) bool { - r.mu.RLock() - defer r.mu.RUnlock() - - _, exists := r.scripts[name] - return exists -} - -// Names returns a list of all registered script names. -// Useful for debugging and testing. -func (r *ScriptRegistry) Names() []string { - r.mu.RLock() - defer r.mu.RUnlock() - - names := make([]string, 0, len(r.scripts)) - for name := range r.scripts { - names = append(names, name) - } - return names -} - // DefaultScriptRegistry is the global script registry used by the workflow package. 
// Scripts are registered during package initialization via init() functions. var DefaultScriptRegistry = NewScriptRegistry() -// GetScript retrieves a bundled script from the default registry. -// This is a convenience function equivalent to DefaultScriptRegistry.Get(name). -// -// DEPRECATED: Use GetScriptWithMode to specify the expected runtime mode. -func GetScript(name string) string { - return DefaultScriptRegistry.Get(name) -} - -// GetScriptWithMode retrieves a bundled script from the default registry with mode verification. -// This is a convenience function equivalent to DefaultScriptRegistry.GetWithMode(name, mode). -func GetScriptWithMode(name string, mode RuntimeMode) string { - return DefaultScriptRegistry.GetWithMode(name, mode) -} - // GetAllScriptFilenames returns a sorted list of all .cjs filenames from the JavaScript sources. // This is used by the build system to discover which files need to be embedded in custom actions. -// The returned list includes all .cjs files found in pkg/workflow/js/, including dependencies. 
func GetAllScriptFilenames() []string { registryLog.Print("Getting all script filenames from JavaScript sources") sources := GetJavaScriptSources() filenames := make([]string, 0, len(sources)) for filename := range sources { - // Only include .cjs files (exclude .json and other files) if strings.HasSuffix(filename, ".cjs") { filenames = append(filenames, filename) } @@ -381,10 +66,8 @@ func GetAllScriptFilenames() []string { registryLog.Printf("Found %d .cjs files in JavaScript sources", len(filenames)) - // Sort for consistency sortedFilenames := make([]string, len(filenames)) copy(sortedFilenames, filenames) - // Using a simple sort to avoid importing sort package issues for i := range sortedFilenames { for j := i + 1; j < len(sortedFilenames); j++ { if sortedFilenames[i] > sortedFilenames[j] { diff --git a/pkg/workflow/script_registry_test.go b/pkg/workflow/script_registry_test.go deleted file mode 100644 index 3962d822b5..0000000000 --- a/pkg/workflow/script_registry_test.go +++ /dev/null @@ -1,298 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "sync" - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestScriptRegistry_Register(t *testing.T) { - registry := NewScriptRegistry() - - err := registry.Register("test_script", "console.log('hello');") - require.NoError(t, err) - - assert.True(t, registry.Has("test_script"), "registry should have test_script after registration") - assert.False(t, registry.Has("nonexistent"), "registry should not have nonexistent script") -} - -func TestScriptRegistry_Get_NotFound(t *testing.T) { - registry := NewScriptRegistry() - - result := registry.Get("nonexistent") - - assert.Empty(t, result) -} - -func TestScriptRegistry_Get_BundlesOnce(t *testing.T) { - registry := NewScriptRegistry() - - // Register a simple script that doesn't require bundling - source := "console.log('hello');" - err := registry.Register("simple", source) - require.NoError(t, err) - - 
// Get should bundle and return result - result1 := registry.Get("simple") - result2 := registry.Get("simple") - - // Both calls should return the same result (cached) - assert.Equal(t, result1, result2) - assert.NotEmpty(t, result1) -} - -func TestScriptRegistry_GetSource(t *testing.T) { - registry := NewScriptRegistry() - - source := "const x = 1;" - err := registry.Register("test", source) - require.NoError(t, err) - - // GetSource should return original source - assert.Equal(t, source, registry.GetSource("test")) -} - -func TestScriptRegistry_GetSource_NotFound(t *testing.T) { - registry := NewScriptRegistry() - - result := registry.GetSource("nonexistent") - - assert.Empty(t, result) -} - -func TestScriptRegistry_Names(t *testing.T) { - registry := NewScriptRegistry() - - require.NoError(t, registry.Register("script_a", "a")) - require.NoError(t, registry.Register("script_b", "b")) - require.NoError(t, registry.Register("script_c", "c")) - - names := registry.Names() - - assert.Len(t, names, 3) - assert.Contains(t, names, "script_a") - assert.Contains(t, names, "script_b") - assert.Contains(t, names, "script_c") -} - -func TestScriptRegistry_ConcurrentAccess(t *testing.T) { - registry := NewScriptRegistry() - source := "console.log('concurrent test');" - err := registry.Register("concurrent", source) - require.NoError(t, err) - - // Test concurrent Get calls - var wg sync.WaitGroup - results := make([]string, 10) - - for i := range 10 { - wg.Add(1) - go func(idx int) { - defer wg.Done() - results[idx] = registry.Get("concurrent") - }(i) - } - - wg.Wait() - - // All results should be the same (due to Once semantics) - for i := 1; i < 10; i++ { - assert.Equal(t, results[0], results[i], "concurrent access should return consistent results") - } -} - -func TestScriptRegistry_Overwrite(t *testing.T) { - registry := NewScriptRegistry() - - err := registry.Register("test", "original") - require.NoError(t, err) - assert.Equal(t, "original", registry.GetSource("test")) 
- - err = registry.Register("test", "updated") - require.NoError(t, err) - assert.Equal(t, "updated", registry.GetSource("test")) -} - -func TestScriptRegistry_Overwrite_AfterGet(t *testing.T) { - registry := NewScriptRegistry() - - // Register initial script - err := registry.Register("test", "console.log('original');") - require.NoError(t, err) - - // Trigger bundling by calling Get() - firstResult := registry.Get("test") - assert.NotEmpty(t, firstResult) - assert.Contains(t, firstResult, "original") - - // Overwrite with new source - err = registry.Register("test", "console.log('updated');") - require.NoError(t, err) - - // Verify GetSource returns new source - assert.Equal(t, "console.log('updated');", registry.GetSource("test")) - - // Verify Get() returns bundled version of new source - secondResult := registry.Get("test") - assert.NotEmpty(t, secondResult) - assert.Contains(t, secondResult, "updated") - assert.NotContains(t, secondResult, "original") -} - -func TestDefaultScriptRegistry_GetScript(t *testing.T) { - // Create a fresh registry for this test to avoid interference - oldRegistry := DefaultScriptRegistry - DefaultScriptRegistry = NewScriptRegistry() - defer func() { DefaultScriptRegistry = oldRegistry }() - - // Register a test script - err := DefaultScriptRegistry.Register("test_global", "global test") - require.NoError(t, err) - - // GetScript should use DefaultScriptRegistry - result := GetScript("test_global") - require.NotEmpty(t, result) -} - -func TestScriptRegistry_Has(t *testing.T) { - registry := NewScriptRegistry() - - assert.False(t, registry.Has("missing"), "registry should not have missing script") - - err := registry.Register("present", "code") - require.NoError(t, err) - - assert.True(t, registry.Has("present"), "registry should have present script after registration") - assert.False(t, registry.Has("still_missing"), "registry should not have still_missing script") -} - -func TestScriptRegistry_RegisterWithMode(t *testing.T) { - // 
Create a custom registry for testing to avoid side effects - registry := NewScriptRegistry() - - // Test that bundling respects runtime mode - // In GitHub Script mode: module.exports should be removed - // In Node.js mode: module.exports should be preserved - - scriptWithExports := `function test() { - return 42; -} - -module.exports = { test }; -` - - // Register with GitHub Script mode (default) - err := registry.Register("github_mode", scriptWithExports) - require.NoError(t, err) - githubResult := registry.Get("github_mode") - - // Should not contain module.exports in GitHub Script mode - assert.NotContains(t, githubResult, "module.exports", - "GitHub Script mode should remove module.exports") - assert.Contains(t, githubResult, "function test()", - "Should still contain the function") - - // Register with Node.js mode - err = registry.RegisterWithMode("nodejs_mode", scriptWithExports, RuntimeModeNodeJS) - require.NoError(t, err) - nodejsResult := registry.Get("nodejs_mode") - - // Should contain module.exports in Node.js mode - assert.Contains(t, nodejsResult, "module.exports", - "Node.js mode should preserve module.exports") - assert.Contains(t, nodejsResult, "function test()", - "Should still contain the function") -} - -func TestScriptRegistry_RegisterWithMode_PreservesDifference(t *testing.T) { - registry := NewScriptRegistry() - - source := `function helper() { - return "value"; -} - -module.exports = { helper };` - - // Register same source with different modes - err := registry.RegisterWithMode("github_mode", source, RuntimeModeGitHubScript) - require.NoError(t, err) - err = registry.RegisterWithMode("nodejs_mode", source, RuntimeModeNodeJS) - require.NoError(t, err) - - githubResult := registry.Get("github_mode") - nodejsResult := registry.Get("nodejs_mode") - - // GitHub Script mode should remove module.exports - assert.NotContains(t, githubResult, "module.exports", - "GitHub Script mode should remove module.exports") - assert.Contains(t, githubResult, 
"function helper()", - "Should contain the function in GitHub mode") - - // Node.js mode should preserve module.exports - assert.Contains(t, nodejsResult, "module.exports", - "Node.js mode should preserve module.exports") - assert.Contains(t, nodejsResult, "function helper()", - "Should contain the function in Node.js mode") -} - -func TestScriptRegistry_GetWithMode(t *testing.T) { - registry := NewScriptRegistry() - - source := `function helper() { - return "value"; -} - -module.exports = { helper };` - - // Register with GitHub Script mode - err := registry.RegisterWithMode("test_script", source, RuntimeModeGitHubScript) - require.NoError(t, err) - - // Test GetWithMode with matching mode - should work without warning - result := registry.GetWithMode("test_script", RuntimeModeGitHubScript) - assert.NotEmpty(t, result, "Should return bundled script") - assert.NotContains(t, result, "module.exports", "GitHub Script mode should remove module.exports") - - // Test GetWithMode with mismatched mode - should log warning but still work - result2 := registry.GetWithMode("test_script", RuntimeModeNodeJS) - assert.NotEmpty(t, result2, "Should return bundled script even with mode mismatch") - // The script was bundled with GitHub Script mode, so module.exports should still be removed - assert.NotContains(t, result2, "module.exports", "Script was bundled with GitHub Script mode") -} - -func TestScriptRegistry_GetWithMode_ModeMismatch(t *testing.T) { - registry := NewScriptRegistry() - - source := `function test() { return 42; } -module.exports = { test };` - - // Register with Node.js mode - err := registry.RegisterWithMode("nodejs_script", source, RuntimeModeNodeJS) - require.NoError(t, err) - - // Request with GitHub Script mode - should log warning - result := registry.GetWithMode("nodejs_script", RuntimeModeGitHubScript) - - // Script was bundled with Node.js mode, so module.exports should be preserved - assert.Contains(t, result, "module.exports", "Node.js mode should 
preserve module.exports") -} - -func TestGetScriptWithMode(t *testing.T) { - // Create a fresh registry for this test - oldRegistry := DefaultScriptRegistry - DefaultScriptRegistry = NewScriptRegistry() - defer func() { DefaultScriptRegistry = oldRegistry }() - - // Register a test script - err := DefaultScriptRegistry.RegisterWithMode("test_helper", "function test() { return 1; }", RuntimeModeGitHubScript) - require.NoError(t, err) - - // Test GetScriptWithMode - result := GetScriptWithMode("test_helper", RuntimeModeGitHubScript) - require.NotEmpty(t, result) - assert.Contains(t, result, "function test()") -} diff --git a/pkg/workflow/setup_action_paths.go b/pkg/workflow/setup_action_paths.go new file mode 100644 index 0000000000..16736ac89d --- /dev/null +++ b/pkg/workflow/setup_action_paths.go @@ -0,0 +1,5 @@ +package workflow + +// SetupActionDestination is the path where the setup action copies script files +// on the agent runner (e.g. /opt/gh-aw/actions). +const SetupActionDestination = "/opt/gh-aw/actions" diff --git a/pkg/workflow/sh.go b/pkg/workflow/sh.go deleted file mode 100644 index b79dd0ce7b..0000000000 --- a/pkg/workflow/sh.go +++ /dev/null @@ -1,152 +0,0 @@ -package workflow - -import ( - _ "embed" - "fmt" - "strings" - - "github.com/github/gh-aw/pkg/logger" -) - -var shLog = logger.New("workflow:sh") - -// Prompt file paths at runtime (copied by setup action) -const ( - promptsDir = "/opt/gh-aw/prompts" - prContextPromptFile = "pr_context_prompt.md" - tempFolderPromptFile = "temp_folder_prompt.md" - playwrightPromptFile = "playwright_prompt.md" - markdownPromptFile = "markdown.md" - xpiaPromptFile = "xpia.md" - cacheMemoryPromptFile = "cache_memory_prompt.md" - cacheMemoryPromptMultiFile = "cache_memory_prompt_multi.md" - repoMemoryPromptFile = "repo_memory_prompt.md" - repoMemoryPromptMultiFile = "repo_memory_prompt_multi.md" - safeOutputsPromptFile = "safe_outputs_prompt.md" - safeOutputsCreatePRFile = "safe_outputs_create_pull_request.md" - 
safeOutputsPushToBranchFile = "safe_outputs_push_to_pr_branch.md" - safeOutputsAutoCreateIssueFile = "safe_outputs_auto_create_issue.md" -) - -// GitHub context prompt is kept embedded because it contains GitHub Actions expressions -// that need to be extracted at compile time. Moving this to a runtime file would require -// reading and parsing the file during compilation, which is more complex. -// -//go:embed prompts/github_context_prompt.md -var githubContextPromptText string - -// WritePromptFileToYAML writes a shell command to cat a prompt file from /opt/gh-aw/prompts/ -// This replaces the previous approach of embedding prompt text in the binary. -func WritePromptFileToYAML(yaml *strings.Builder, filename string, indent string) { - shLog.Printf("Writing prompt file reference to YAML: file=%s", filename) - promptPath := fmt.Sprintf("%s/%s", promptsDir, filename) - yaml.WriteString(indent + fmt.Sprintf("cat \"%s\" >> \"$GH_AW_PROMPT\"\n", promptPath)) -} - -// WriteShellScriptToYAML writes a shell script with proper indentation to a strings.Builder -func WriteShellScriptToYAML(yaml *strings.Builder, script string, indent string) { - scriptLines := strings.SplitSeq(script, "\n") - for line := range scriptLines { - // Skip empty lines at the beginning or end - if strings.TrimSpace(line) != "" { - fmt.Fprintf(yaml, "%s%s\n", indent, line) - } - } -} - -// WritePromptTextToYAML writes static prompt text to a YAML heredoc with proper indentation. -// Use this function for prompt text that contains NO variable placeholders or expressions. -// It chunks the text into groups of lines of less than MaxPromptChunkSize characters, with a maximum of MaxPromptChunks chunks. -// Each chunk is written as a separate heredoc to avoid GitHub Actions step size limits (21KB). -// -// For prompt text with variable placeholders that need substitution, use WritePromptTextToYAMLWithPlaceholders instead. 
-func WritePromptTextToYAML(yaml *strings.Builder, text string, indent string) { - shLog.Printf("Writing prompt text to YAML: text_size=%d bytes, chunks=%d", len(text), len(strings.Split(text, "\n"))) - textLines := strings.Split(text, "\n") - chunks := chunkLines(textLines, indent, MaxPromptChunkSize, MaxPromptChunks) - shLog.Printf("Created %d chunks for prompt text", len(chunks)) - - delimiter := GenerateHeredocDelimiter("PROMPT") - // Write each chunk as a separate heredoc - // For static prompt text without variables, use direct cat to file - for _, chunk := range chunks { - yaml.WriteString(indent + "cat << '" + delimiter + "' >> \"$GH_AW_PROMPT\"\n") - for _, line := range chunk { - fmt.Fprintf(yaml, "%s%s\n", indent, line) - } - yaml.WriteString(indent + delimiter + "\n") - } -} - -// WritePromptTextToYAMLWithPlaceholders writes prompt text with variable placeholders to a YAML heredoc with proper indentation. -// Use this function for prompt text containing __VAR__ placeholders that will be substituted with sed commands. -// The caller is responsible for adding the sed substitution commands after calling this function. -// It uses placeholder format (__VAR__) instead of shell variable expansion, to prevent template injection. -// -// For static prompt text without variables, use WritePromptTextToYAML instead. 
-func WritePromptTextToYAMLWithPlaceholders(yaml *strings.Builder, text string, indent string) { - textLines := strings.Split(text, "\n") - chunks := chunkLines(textLines, indent, MaxPromptChunkSize, MaxPromptChunks) - - delimiter := GenerateHeredocDelimiter("PROMPT") - // Write each chunk as a separate heredoc - // Use direct cat to file (append mode) - placeholders will be substituted with sed - for _, chunk := range chunks { - yaml.WriteString(indent + "cat << '" + delimiter + "' >> \"$GH_AW_PROMPT\"\n") - for _, line := range chunk { - fmt.Fprintf(yaml, "%s%s\n", indent, line) - } - yaml.WriteString(indent + delimiter + "\n") - } -} - -// chunkLines splits lines into chunks where each chunk's total size (including indent) is less than maxSize. -// Returns at most maxChunks chunks. If content exceeds the limit, it truncates at the last chunk. -func chunkLines(lines []string, indent string, maxSize int, maxChunks int) [][]string { - shLog.Printf("Chunking lines: total_lines=%d, max_size=%d, max_chunks=%d", len(lines), maxSize, maxChunks) - if len(lines) == 0 { - return [][]string{{}} - } - - var chunks [][]string - var currentChunk []string - currentSize := 0 - - for _, line := range lines { - // Calculate size including indent and newline - lineSize := len(indent) + len(line) + 1 - - // If adding this line would exceed the limit, start a new chunk - if currentSize+lineSize > maxSize && len(currentChunk) > 0 { - // Check if we've reached the maximum number of chunks - if len(chunks) >= maxChunks-1 { - // We're at the last allowed chunk, so add remaining lines to current chunk - currentChunk = append(currentChunk, line) - currentSize += lineSize - continue - } - - // Start a new chunk - shLog.Printf("Starting new chunk: previous_chunk_size=%d, chunks_so_far=%d", currentSize, len(chunks)) - chunks = append(chunks, currentChunk) - currentChunk = []string{line} - currentSize = lineSize - } else { - currentChunk = append(currentChunk, line) - currentSize += lineSize - 
} - } - - // Add the last chunk if there's content - if len(currentChunk) > 0 { - chunks = append(chunks, currentChunk) - } - - // If we still have no chunks, return an empty chunk - if len(chunks) == 0 { - return [][]string{{}} - } - - shLog.Printf("Chunking complete: created %d chunks", len(chunks)) - return chunks -} diff --git a/pkg/workflow/sh_integration_test.go b/pkg/workflow/sh_integration_test.go deleted file mode 100644 index b84bb42f11..0000000000 --- a/pkg/workflow/sh_integration_test.go +++ /dev/null @@ -1,371 +0,0 @@ -//go:build integration - -package workflow - -import ( - "strings" - "testing" -) - -// TestWritePromptTextToYAML_IntegrationWithCompiler verifies that WritePromptTextToYAML -// correctly handles large prompt text that would be used in actual workflow compilation. -// This test simulates what would happen if an embedded prompt file was very large. -func TestWritePromptTextToYAML_IntegrationWithCompiler(t *testing.T) { - // Create a realistic scenario: a very long help text or documentation - // that might be included as prompt instructions - section := strings.Repeat("This is an important instruction line that provides guidance to the AI agent on how to perform its task correctly. 
", 10) - - // Create 200 lines to ensure we exceed 20KB - lines := make([]string, 200) - for i := range lines { - lines[i] = section - } - largePromptText := strings.Join(lines, "\n") - - // Calculate total size - totalSize := len(largePromptText) - if totalSize < 20000 { - t.Fatalf("Test setup error: prompt text should be at least 20000 bytes, got %d", totalSize) - } - - var yaml strings.Builder - indent := " " // Standard indent used in workflow generation - - // Call the function as it would be called in real compilation - WritePromptTextToYAML(&yaml, largePromptText, indent) - - result := yaml.String() - - // Verify multiple heredoc blocks were created - heredocCount := strings.Count(result, `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`) - if heredocCount < 2 { - t.Errorf("Expected multiple heredoc blocks for large text (%d bytes), got %d", totalSize, heredocCount) - } - - // Verify we didn't exceed 5 chunks - if heredocCount > 5 { - t.Errorf("Expected at most 5 heredoc blocks (max limit), got %d", heredocCount) - } - - // Verify each heredoc is closed - eofCount := strings.Count(result, indent+"GH_AW_PROMPT_EOF") - if eofCount != heredocCount { - t.Errorf("Expected %d EOF markers to match %d heredoc blocks, got %d", heredocCount, heredocCount, eofCount) - } - - // Verify the content is preserved (check first and last sections) - firstSection := section[:100] - lastSection := section[len(section)-100:] - if !strings.Contains(result, firstSection) { - t.Error("Expected to find beginning of original text in output") - } - if !strings.Contains(result, lastSection) { - t.Error("Expected to find end of original text in output") - } - - // Verify the YAML structure is valid (basic check) - if !strings.Contains(result, `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`) { - t.Error("Expected proper heredoc syntax in output") - } - - t.Logf("Successfully chunked %d bytes into %d heredoc blocks", totalSize, heredocCount) - - // Verify no lines are lost - extract content 
from heredoc blocks and compare - extractedLines := extractLinesFromYAML(result, indent) - originalLines := strings.Split(largePromptText, "\n") - - if len(extractedLines) != len(originalLines) { - t.Errorf("Line count mismatch: expected %d lines, got %d lines", len(originalLines), len(extractedLines)) - } - - // Verify content integrity by checking line-by-line - mismatchCount := 0 - for i := 0; i < len(originalLines) && i < len(extractedLines); i++ { - if originalLines[i] != extractedLines[i] { - mismatchCount++ - if mismatchCount <= 3 { // Only report first 3 mismatches - t.Errorf("Line %d mismatch:\nExpected: %q\nGot: %q", i+1, originalLines[i], extractedLines[i]) - } - } - } - - if mismatchCount > 0 { - t.Errorf("Total line mismatches: %d", mismatchCount) - } -} - -// TestWritePromptTextToYAML_RealWorldSizeSimulation simulates various real-world scenarios -// to ensure chunking works correctly across different text sizes. -func TestWritePromptTextToYAML_RealWorldSizeSimulation(t *testing.T) { - tests := []struct { - name string - textSize int // approximate size in bytes - linesCount int // number of lines - expectedChunks int // expected number of chunks - maxChunks int // should not exceed this - }{ - { - name: "small prompt (< 1KB)", - textSize: 500, - linesCount: 10, - expectedChunks: 1, - maxChunks: 1, - }, - { - name: "medium prompt (~10KB)", - textSize: 10000, - linesCount: 100, - expectedChunks: 1, - maxChunks: 1, - }, - { - name: "large prompt (~25KB)", - textSize: 25000, - linesCount: 250, - expectedChunks: 2, - maxChunks: 2, - }, - { - name: "very large prompt (~50KB)", - textSize: 50000, - linesCount: 500, - expectedChunks: 3, - maxChunks: 3, - }, - { - name: "extremely large prompt (~120KB)", - textSize: 120000, - linesCount: 1200, - expectedChunks: 5, - maxChunks: 5, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - // Create text of approximately the desired size - // Account for newlines: total size = linesCount * 
(lineSize + 1) - 1 (no trailing newline) - lineSize := (tt.textSize + 1) / tt.linesCount // Adjust for newlines - if lineSize < 1 { - lineSize = 1 - } - line := strings.Repeat("x", lineSize) - lines := make([]string, tt.linesCount) - for i := range lines { - lines[i] = line - } - text := strings.Join(lines, "\n") - - var yaml strings.Builder - indent := " " - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - heredocCount := strings.Count(result, `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`) - - if heredocCount < tt.expectedChunks { - t.Errorf("Expected at least %d chunks for %s, got %d", tt.expectedChunks, tt.name, heredocCount) - } - - if heredocCount > tt.maxChunks { - t.Errorf("Expected at most %d chunks for %s, got %d", tt.maxChunks, tt.name, heredocCount) - } - - eofCount := strings.Count(result, indent+"GH_AW_PROMPT_EOF") - if eofCount != heredocCount { - t.Errorf("EOF count (%d) doesn't match heredoc count (%d) for %s", eofCount, heredocCount, tt.name) - } - - t.Logf("%s: %d bytes chunked into %d blocks", tt.name, len(text), heredocCount) - - // Verify no lines are lost - extractedLines := extractLinesFromYAML(result, indent) - originalLines := strings.Split(text, "\n") - - if len(extractedLines) != len(originalLines) { - t.Errorf("%s: Line count mismatch - expected %d lines, got %d lines", tt.name, len(originalLines), len(extractedLines)) - } - }) - } -} - -// extractLinesFromYAML extracts the actual content lines from a YAML heredoc output -// by parsing the heredoc blocks and removing the indent -func extractLinesFromYAML(yamlOutput string, indent string) []string { - var lines []string - inHeredoc := false - - for _, line := range strings.Split(yamlOutput, "\n") { - // Check if we're starting a heredoc block - if strings.Contains(line, `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`) { - inHeredoc = true - continue - } - - // Check if we're ending a heredoc block - if strings.TrimSpace(line) == "GH_AW_PROMPT_EOF" { - inHeredoc = 
false - continue - } - - // If we're in a heredoc block, extract the content line - if inHeredoc { - // Remove the indent from the line - if strings.HasPrefix(line, indent) { - contentLine := strings.TrimPrefix(line, indent) - lines = append(lines, contentLine) - } - } - } - - return lines -} - -// TestWritePromptTextToYAML_NoDataLoss verifies that no lines or chunks are lost -// during the chunking process, even with edge cases. -func TestWritePromptTextToYAML_NoDataLoss(t *testing.T) { - tests := []struct { - name string - lines []string - expectLoss bool - }{ - { - name: "single line", - lines: []string{"Single line of text"}, - expectLoss: false, - }, - { - name: "multiple short lines", - lines: []string{"Line 1", "Line 2", "Line 3", "Line 4", "Line 5"}, - expectLoss: false, - }, - { - name: "empty lines", - lines: []string{"Line 1", "", "Line 3", "", "Line 5"}, - expectLoss: false, - }, - { - name: "very long single line", - lines: []string{strings.Repeat("x", 25000)}, - expectLoss: false, - }, - { - name: "exactly at chunk boundary", - lines: func() []string { - // Create lines that total exactly 20000 bytes with indent - line := strings.Repeat("x", 100) - lines := make([]string, 180) - for i := range lines { - lines[i] = line - } - return lines - }(), - expectLoss: false, - }, - { - name: "large number of lines requiring max chunks", - lines: func() []string { - line := strings.Repeat("y", 1000) - lines := make([]string, 600) - for i := range lines { - lines[i] = line - } - return lines - }(), - expectLoss: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - text := strings.Join(tt.lines, "\n") - var yaml strings.Builder - indent := " " - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Extract lines from the YAML output - extractedLines := extractLinesFromYAML(result, indent) - - // Verify line count - if len(extractedLines) != len(tt.lines) { - t.Errorf("Line count mismatch: expected %d 
lines, got %d lines", len(tt.lines), len(extractedLines)) - t.Logf("Original lines: %d", len(tt.lines)) - t.Logf("Extracted lines: %d", len(extractedLines)) - } - - // Verify content integrity - mismatchCount := 0 - for i := 0; i < len(tt.lines) && i < len(extractedLines); i++ { - if tt.lines[i] != extractedLines[i] { - mismatchCount++ - if mismatchCount <= 3 { - t.Errorf("Line %d mismatch:\nExpected: %q\nGot: %q", i+1, tt.lines[i], extractedLines[i]) - } - } - } - - if mismatchCount > 0 { - t.Errorf("Total line mismatches: %d", mismatchCount) - } - }) - } -} - -// TestWritePromptTextToYAML_ChunkIntegrity verifies that chunks are properly formed -// and that the chunking process maintains data integrity. -func TestWritePromptTextToYAML_ChunkIntegrity(t *testing.T) { - // Create a large text that will require multiple chunks - line := strings.Repeat("Test line with some content. ", 50) - lines := make([]string, 300) - for i := range lines { - lines[i] = line - } - text := strings.Join(lines, "\n") - - var yaml strings.Builder - indent := " " - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Count heredoc blocks - heredocCount := strings.Count(result, `cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"`) - - t.Logf("Created %d heredoc blocks for %d lines (%d bytes)", heredocCount, len(lines), len(text)) - - // Verify we have multiple chunks but not exceeding max - if heredocCount < 2 { - t.Errorf("Expected multiple chunks for large text, got %d", heredocCount) - } - - if heredocCount > MaxPromptChunks { - t.Errorf("Expected at most %d chunks, got %d", MaxPromptChunks, heredocCount) - } - - // Verify all heredocs are properly closed - eofCount := strings.Count(result, indent+"GH_AW_PROMPT_EOF") - if eofCount != heredocCount { - t.Errorf("Heredoc closure mismatch: %d opens, %d closes", heredocCount, eofCount) - } - - // Verify no data loss - extractedLines := extractLinesFromYAML(result, indent) - if len(extractedLines) != len(lines) { - 
t.Errorf("Line count mismatch: expected %d, got %d", len(lines), len(extractedLines)) - } - - // Verify content integrity by checking a few random samples - sampleIndices := []int{0, len(lines) / 4, len(lines) / 2, len(lines) * 3 / 4, len(lines) - 1} - for _, idx := range sampleIndices { - if idx < len(lines) && idx < len(extractedLines) { - if lines[idx] != extractedLines[idx] { - t.Errorf("Content mismatch at line %d:\nExpected: %q\nGot: %q", idx+1, lines[idx], extractedLines[idx]) - } - } - } -} diff --git a/pkg/workflow/sh_test.go b/pkg/workflow/sh_test.go deleted file mode 100644 index f527d02916..0000000000 --- a/pkg/workflow/sh_test.go +++ /dev/null @@ -1,309 +0,0 @@ -//go:build !integration - -package workflow - -import ( - "strings" - "testing" -) - -func TestWritePromptTextToYAML_SmallText(t *testing.T) { - var yaml strings.Builder - text := "This is a small text\nWith a few lines\nThat doesn't need chunking" - indent := " " - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Get the expected delimiter - delimiter := GenerateHeredocDelimiter("PROMPT") - expectedHeredoc := `cat << '` + delimiter + `' >> "$GH_AW_PROMPT"` - - // Should have exactly one heredoc block - if strings.Count(result, expectedHeredoc) != 1 { - t.Errorf("Expected 1 heredoc block for small text, got %d", strings.Count(result, expectedHeredoc)) - } - - // Should contain all original lines - if !strings.Contains(result, "This is a small text") { - t.Error("Expected to find original text in output") - } - if !strings.Contains(result, "With a few lines") { - t.Error("Expected to find original text in output") - } - if !strings.Contains(result, "That doesn't need chunking") { - t.Error("Expected to find original text in output") - } - - // Should have proper EOF markers - if strings.Count(result, indent+delimiter) != 1 { - t.Errorf("Expected 1 EOF marker, got %d", strings.Count(result, indent+delimiter)) - } -} - -func TestWritePromptTextToYAML_LargeText(t 
*testing.T) { - var yaml strings.Builder - // Create text that exceeds 20000 characters - longLine := strings.Repeat("This is a very long line of content that will be repeated many times to exceed the character limit. ", 10) - lines := make([]string, 50) - for i := range lines { - lines[i] = longLine - } - text := strings.Join(lines, "\n") - indent := " " - - // Calculate expected size - totalSize := 0 - for _, line := range lines { - totalSize += len(indent) + len(line) + 1 - } - - // This should create multiple chunks since each line is ~1000 chars and we have 50 lines - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Get the expected delimiter - delimiter := GenerateHeredocDelimiter("PROMPT") - expectedHeredoc := `cat << '` + delimiter + `' >> "$GH_AW_PROMPT"` - - // Should have multiple heredoc blocks - heredocCount := strings.Count(result, expectedHeredoc) - if heredocCount < 2 { - t.Errorf("Expected at least 2 heredoc blocks for large text (total size ~%d bytes), got %d", totalSize, heredocCount) - } - - // Should not exceed 5 chunks (max limit) - if heredocCount > 5 { - t.Errorf("Expected at most 5 heredoc blocks, got %d", heredocCount) - } - - // Should have matching EOF markers - eofCount := strings.Count(result, indent+delimiter) - if eofCount != heredocCount { - t.Errorf("Expected %d EOF markers to match %d heredoc blocks, got %d", heredocCount, heredocCount, eofCount) - } - - // Should contain original content (or at least the beginning if truncated) - firstLine := strings.Split(text, "\n")[0] - if !strings.Contains(result, firstLine[:50]) { - t.Error("Expected to find beginning of original text in output") - } -} - -func TestWritePromptTextToYAML_ExactChunkBoundary(t *testing.T) { - var yaml strings.Builder - indent := " " - - // Create text that's exactly at the 20000 character boundary - // Each line: indent (10) + line (100) + newline (1) = 111 bytes - // 180 lines = 19,980 bytes (just under 20000) - line := 
strings.Repeat("x", 100) - lines := make([]string, 180) - for i := range lines { - lines[i] = line - } - text := strings.Join(lines, "\n") - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Get the expected delimiter - delimiter := GenerateHeredocDelimiter("PROMPT") - expectedHeredoc := `cat << '` + delimiter + `' >> "$GH_AW_PROMPT"` - - // Should have exactly 1 heredoc block since we're just under the limit - heredocCount := strings.Count(result, expectedHeredoc) - if heredocCount != 1 { - t.Errorf("Expected 1 heredoc block for text just under limit, got %d", heredocCount) - } -} - -func TestWritePromptTextToYAML_MaxChunksLimit(t *testing.T) { - var yaml strings.Builder - indent := " " - - // Create text that would need more than 5 chunks (if we allowed it) - // Each line: indent (10) + line (1000) + newline (1) = 1011 bytes - // 600 lines = ~606,600 bytes - // At 20000 bytes per chunk, this would need ~31 chunks, but we limit to 5 - line := strings.Repeat("y", 1000) - lines := make([]string, 600) - for i := range lines { - lines[i] = line - } - text := strings.Join(lines, "\n") - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Get the expected delimiter - delimiter := GenerateHeredocDelimiter("PROMPT") - expectedHeredoc := `cat << '` + delimiter + `' >> "$GH_AW_PROMPT"` - - // Should have exactly 5 heredoc blocks (the maximum) - heredocCount := strings.Count(result, expectedHeredoc) - if heredocCount != 5 { - t.Errorf("Expected exactly 5 heredoc blocks (max limit), got %d", heredocCount) - } - - // Should have matching EOF markers - eofCount := strings.Count(result, indent+delimiter) - if eofCount != 5 { - t.Errorf("Expected 5 EOF markers, got %d", eofCount) - } -} - -func TestWritePromptTextToYAML_EmptyText(t *testing.T) { - var yaml strings.Builder - text := "" - indent := " " - - WritePromptTextToYAML(&yaml, text, indent) - - result := yaml.String() - - // Get the expected delimiter - delimiter := 
GenerateHeredocDelimiter("PROMPT") - expectedHeredoc := `cat << '` + delimiter + `' >> "$GH_AW_PROMPT"` - - // Should have at least one heredoc block (even for empty text) - if strings.Count(result, expectedHeredoc) < 1 { - t.Error("Expected at least 1 heredoc block even for empty text") - } - - // Should have matching EOF markers - if strings.Count(result, indent+delimiter) < 1 { - t.Error("Expected at least 1 EOF marker") - } -} - -func TestChunkLines_SmallInput(t *testing.T) { - lines := []string{"line1", "line2", "line3"} - indent := " " - maxSize := 20000 - maxChunks := 5 - - chunks := chunkLines(lines, indent, maxSize, maxChunks) - - if len(chunks) != 1 { - t.Errorf("Expected 1 chunk for small input, got %d", len(chunks)) - } - - if len(chunks[0]) != 3 { - t.Errorf("Expected chunk to contain 3 lines, got %d", len(chunks[0])) - } -} - -func TestChunkLines_ExceedsSize(t *testing.T) { - // Create lines that will exceed maxSize - line := strings.Repeat("x", 1000) - lines := make([]string, 50) - for i := range lines { - lines[i] = line - } - - indent := " " - maxSize := 20000 - maxChunks := 5 - - chunks := chunkLines(lines, indent, maxSize, maxChunks) - - // Should have multiple chunks - if len(chunks) < 2 { - t.Errorf("Expected at least 2 chunks, got %d", len(chunks)) - } - - // Verify each chunk (except possibly the last) stays within size limit - for i, chunk := range chunks { - size := 0 - for _, line := range chunk { - size += len(indent) + len(line) + 1 - } - - // Last chunk might exceed if we hit maxChunks limit - if i < len(chunks)-1 && size > maxSize { - t.Errorf("Chunk %d exceeds size limit: %d > %d", i, size, maxSize) - } - } - - // Verify total lines are preserved - totalLines := 0 - for _, chunk := range chunks { - totalLines += len(chunk) - } - if totalLines != len(lines) { - t.Errorf("Expected %d total lines, got %d", len(lines), totalLines) - } -} - -func TestChunkLines_MaxChunksEnforced(t *testing.T) { - // Create many lines that would need more 
than maxChunks
-	line := strings.Repeat("x", 1000)
-	lines := make([]string, 600)
-	for i := range lines {
-		lines[i] = line
-	}
-
-	indent := " "
-	maxSize := 20000
-	maxChunks := 5
-
-	chunks := chunkLines(lines, indent, maxSize, maxChunks)
-
-	// Should have exactly maxChunks
-	if len(chunks) != maxChunks {
-		t.Errorf("Expected exactly %d chunks (max limit), got %d", maxChunks, len(chunks))
-	}
-
-	// Verify all lines are included (even if last chunk is large)
-	totalLines := 0
-	for _, chunk := range chunks {
-		totalLines += len(chunk)
-	}
-	if totalLines != len(lines) {
-		t.Errorf("Expected %d total lines, got %d", len(lines), totalLines)
-	}
-}
-
-func TestChunkLines_EmptyInput(t *testing.T) {
-	lines := []string{}
-	indent := " "
-	maxSize := 20000
-	maxChunks := 5
-
-	chunks := chunkLines(lines, indent, maxSize, maxChunks)
-
-	// Should return at least one empty chunk
-	if len(chunks) != 1 {
-		t.Errorf("Expected 1 chunk for empty input, got %d", len(chunks))
-	}
-
-	if len(chunks[0]) != 0 {
-		t.Errorf("Expected empty chunk, got %d lines", len(chunks[0]))
-	}
-}
-
-func TestChunkLines_SingleLineExceedsLimit(t *testing.T) {
-	// Single line that exceeds maxSize
-	line := strings.Repeat("x", 25000)
-	lines := []string{line}
-
-	indent := " "
-	maxSize := 20000
-	maxChunks := 5
-
-	chunks := chunkLines(lines, indent, maxSize, maxChunks)
-
-	// Should still have one chunk with that single line
-	if len(chunks) != 1 {
-		t.Errorf("Expected 1 chunk, got %d", len(chunks))
-	}
-
-	if len(chunks[0]) != 1 {
-		t.Errorf("Expected 1 line in chunk, got %d", len(chunks[0]))
-	}
-}
diff --git a/pkg/workflow/staged_add_issue_labels_test.go b/pkg/workflow/staged_add_issue_labels_test.go
deleted file mode 100644
index c20aa5bd76..0000000000
--- a/pkg/workflow/staged_add_issue_labels_test.go
+++ /dev/null
@@ -1,73 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-func TestAddLabelsJobWithStagedFlag(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with staged: true
-	workflowData := &WorkflowData{
-		Name: "test-workflow",
-		SafeOutputs: &SafeOutputsConfig{
-			AddLabels: &AddLabelsConfig{},
-			Staged:    true,
-		},
-	}
-
-	job, err := c.buildAddLabelsJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building add labels job: %v", err)
-	}
-
-	// Convert steps to a single string for testing
-	stepsContent := strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is included in the env section
-	if !strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED: \"true\"\n") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable to be set to true in add-labels job")
-	}
-
-	// Test with staged: false
-	workflowData.SafeOutputs.Staged = false
-
-	job, err = c.buildAddLabelsJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building add labels job: %v", err)
-	}
-
-	stepsContent = strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is not included in the env section when false
-	// We need to be specific to avoid matching the JavaScript code that references the variable
-	if strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED:") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable not to be set when staged is false")
-	}
-
-}
-
-func TestAddLabelsJobWithNilSafeOutputs(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with no SafeOutputs config - this should fail
-	workflowData := &WorkflowData{
-		Name:        "test-workflow",
-		SafeOutputs: nil,
-	}
-
-	_, err := c.buildAddLabelsJob(workflowData, "main_job")
-	if err == nil {
-		t.Error("Expected error when SafeOutputs is nil")
-	}
-
-	expectedError := "safe-outputs configuration is required"
-	if !strings.Contains(err.Error(), expectedError) {
-		t.Errorf("Expected error message to contain '%s', got: %v", expectedError, err)
-	}
-}
diff --git a/pkg/workflow/staged_create_issue_test.go b/pkg/workflow/staged_create_issue_test.go
deleted file mode 100644
index a332d87785..0000000000
--- a/pkg/workflow/staged_create_issue_test.go
+++ /dev/null
@@ -1,88 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-func TestCreateIssueJobWithStagedFlag(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with staged: true
-	workflowData := &WorkflowData{
-		Name: "test-workflow",
-		SafeOutputs: &SafeOutputsConfig{
-			CreateIssues: &CreateIssuesConfig{},
-			Staged:       true, // pointer to true
-		},
-	}
-
-	job, err := c.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building create issue job: %v", err)
-	}
-
-	// Convert steps to a single string for testing
-	stepsContent := strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is included in the env section
-	if !strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED: \"true\"\n") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable to be set to true in create-issue job")
-	}
-
-	// Test with staged: false
-	workflowData.SafeOutputs.Staged = false // pointer to false
-
-	job, err = c.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building create issue job: %v", err)
-	}
-
-	stepsContent = strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is not included in the env section when false
-	// We need to be specific to avoid matching the JavaScript code that references the variable
-	if strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED:") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable not to be set when staged is false")
-	}
-
-}
-
-func TestCreateIssueJobWithoutSafeOutputs(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with no SafeOutputs config - this should fail
-	workflowData := &WorkflowData{
-		Name:        "test-workflow",
-		SafeOutputs: nil,
-	}
-
-	_, err := c.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err == nil {
-		t.Error("Expected error when SafeOutputs is nil")
-	}
-
-	expectedError := "safe-outputs.create-issue configuration is required"
-	if !strings.Contains(err.Error(), expectedError) {
-		t.Errorf("Expected error message to contain '%s', got: %v", expectedError, err)
-	}
-
-	// Test with SafeOutputs but no CreateIssues config - this should also fail
-	workflowData.SafeOutputs = &SafeOutputsConfig{
-		CreatePullRequests: &CreatePullRequestsConfig{},
-		Staged:             true,
-	}
-
-	_, err = c.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err == nil {
-		t.Error("Expected error when CreateIssues is nil")
-	}
-
-	if !strings.Contains(err.Error(), expectedError) {
-		t.Errorf("Expected error message to contain '%s', got: %v", expectedError, err)
-	}
-}
diff --git a/pkg/workflow/staged_pull_request_test.go b/pkg/workflow/staged_pull_request_test.go
deleted file mode 100644
index 6ca6880cd3..0000000000
--- a/pkg/workflow/staged_pull_request_test.go
+++ /dev/null
@@ -1,88 +0,0 @@
-//go:build !integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-)
-
-func TestCreatePullRequestJobWithStagedFlag(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with staged: true
-	workflowData := &WorkflowData{
-		Name: "test-workflow",
-		SafeOutputs: &SafeOutputsConfig{
-			CreatePullRequests: &CreatePullRequestsConfig{},
-			Staged:             true,
-		},
-	}
-
-	job, err := c.buildCreateOutputPullRequestJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building create pull request job: %v", err)
-	}
-
-	// Convert steps to a single string for testing
-	stepsContent := strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is included in the env section
-	if !strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED: \"true\"\n") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable to be set to true in create-pull-request job")
-	}
-
-	// Test with staged: false
-	workflowData.SafeOutputs.Staged = false // pointer to false
-
-	job, err = c.buildCreateOutputPullRequestJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Unexpected error building create pull request job: %v", err)
-	}
-
-	stepsContent = strings.Join(job.Steps, "")
-
-	// Check that GH_AW_SAFE_OUTPUTS_STAGED is not included in the env section when false
-	// We need to be specific to avoid matching the JavaScript code that references the variable
-	if strings.Contains(stepsContent, " GH_AW_SAFE_OUTPUTS_STAGED:") {
-		t.Error("Expected GH_AW_SAFE_OUTPUTS_STAGED environment variable not to be set when staged is false")
-	}
-
-}
-
-func TestCreatePullRequestJobWithoutSafeOutputs(t *testing.T) {
-	// Create a compiler instance
-	c := NewCompiler()
-
-	// Test with no SafeOutputs config - this should fail
-	workflowData := &WorkflowData{
-		Name:        "test-workflow",
-		SafeOutputs: nil,
-	}
-
-	_, err := c.buildCreateOutputPullRequestJob(workflowData, "main_job")
-	if err == nil {
-		t.Error("Expected error when SafeOutputs is nil")
-	}
-
-	expectedError := "safe-outputs.create-pull-request configuration is required"
-	if !strings.Contains(err.Error(), expectedError) {
-		t.Errorf("Expected error message to contain '%s', got: %v", expectedError, err)
-	}
-
-	// Test with SafeOutputs but no CreatePullRequests config - this should also fail
-	workflowData.SafeOutputs = &SafeOutputsConfig{
-		CreateIssues: &CreateIssuesConfig{},
-		Staged:       true,
-	}
-
-	_, err = c.buildCreateOutputPullRequestJob(workflowData, "main_job")
-	if err == nil {
-		t.Error("Expected error when CreatePullRequests is nil")
-	}
-
-	if !strings.Contains(err.Error(), expectedError) {
-		t.Errorf("Expected error message to contain '%s', got: %v", expectedError, err)
-	}
-}
diff --git a/pkg/workflow/unified_prompt_step.go b/pkg/workflow/unified_prompt_step.go
index 13c3f32f64..459f28eda0 100644
--- a/pkg/workflow/unified_prompt_step.go
+++ b/pkg/workflow/unified_prompt_step.go
@@ -749,42 +749,3 @@ func buildSafeOutputsSections(safeOutputs *SafeOutputsConfig) []PromptSection {
 	return sections
 }
-
-var promptStepHelperLog = logger.New("workflow:prompt_step_helper")
-
-// generateStaticPromptStep is a helper function that generates a workflow step
-// for appending static prompt text to the prompt file. It encapsulates the common
-// pattern used across multiple prompt generators (XPIA, temp folder, playwright, edit tool, etc.)
-// to reduce code duplication and ensure consistency.
-//
-// Parameters:
-//   - yaml: The string builder to write the YAML to
-//   - description: The name of the workflow step (e.g., "Append XPIA security instructions to prompt")
-//   - promptText: The static text content to append to the prompt (used for backward compatibility)
-//   - shouldInclude: Whether to generate the step (false means skip generation entirely)
-//
-// Example usage:
-//
-//	generateStaticPromptStep(yaml,
-//		"Append XPIA security instructions to prompt",
-//		xpiaPromptText,
-//		data.SafetyPrompt)
-//
-// Deprecated: This function is kept for backward compatibility with inline prompts.
-// Use generateStaticPromptStepFromFile for new code.
-func generateStaticPromptStep(yaml *strings.Builder, description string, promptText string, shouldInclude bool) {
-	promptStepHelperLog.Printf("Generating static prompt step: description=%s, shouldInclude=%t", description, shouldInclude)
-	// Skip generation if guard condition is false
-	if !shouldInclude {
-		return
-	}
-
-	// Use the existing appendPromptStep helper with a renderer that writes the prompt text
-	appendPromptStep(yaml,
-		description,
-		func(y *strings.Builder, indent string) {
-			WritePromptTextToYAML(y, promptText, indent)
-		},
-		"", // no condition
-		" ")
-}

From 3ee8e5ab0ed32e27fe67a3e12a02e5001f9afff9 Mon Sep 17 00:00:00 2001
From: Don Syme
Date: Sat, 28 Feb 2026 03:31:10 +0000
Subject: [PATCH 4/7] Fix CI: remove dead test functions from
 compiler_action_mode_test.go

TestReleaseModeCompilation and TestDevModeCompilation used deleted registry
methods (RegisterWithAction, RegisterWithMode, RuntimeModeGitHubScript, Get)
that were removed when script_registry.go was rewritten in batch 2.
---
 pkg/workflow/compiler_action_mode_test.go | 183 ----------------------
 1 file changed, 183 deletions(-)

diff --git a/pkg/workflow/compiler_action_mode_test.go b/pkg/workflow/compiler_action_mode_test.go
index fb0e3c7a77..1e0e0505ed 100644
--- a/pkg/workflow/compiler_action_mode_test.go
+++ b/pkg/workflow/compiler_action_mode_test.go
@@ -274,186 +274,3 @@ func TestActionModeDetectionWithReleaseFlag(t *testing.T) {
 		})
 	}
 }
-
-// TestReleaseModeCompilation tests workflow compilation in release mode
-// Note: This test uses create_issue which already has ScriptName set.
-// Other safe outputs (add_labels, etc.) don't have ScriptName yet and will use inline mode.
-func TestReleaseModeCompilation(t *testing.T) {
-	// Create a temporary directory for the test
-	tempDir := t.TempDir()
-
-	// Save original environment
-	origSHA := os.Getenv("GITHUB_SHA")
-	origRef := os.Getenv("GITHUB_REF")
-	defer func() {
-		if origSHA != "" {
-			os.Setenv("GITHUB_SHA", origSHA)
-		} else {
-			os.Unsetenv("GITHUB_SHA")
-		}
-		if origRef != "" {
-			os.Setenv("GITHUB_REF", origRef)
-		} else {
-			os.Unsetenv("GITHUB_REF")
-		}
-	}()
-
-	// Set release tag for testing
-	os.Setenv("GITHUB_REF", "refs/tags/v1.0.0") // Simulate release tag for auto-detection
-
-	// Create a test workflow file
-	workflowContent := `---
-name: Test Release Mode
-on: issues
-safe-outputs:
-  create-issue:
-    max: 1
----
-
-Test workflow with release mode.
-`
-
-	workflowPath := tempDir + "/test-workflow.md"
-	if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil {
-		t.Fatalf("Failed to write test workflow: %v", err)
-	}
-
-	// Save the original script to restore after test
-	origScript := DefaultScriptRegistry.Get("create_issue")
-	origActionPath := DefaultScriptRegistry.GetActionPath("create_issue")
-
-	// Register test script with action path
-	testScript := `const { core } = require('@actions/core'); core.info('test');`
-	err := DefaultScriptRegistry.RegisterWithAction(
-		"create_issue",
-		testScript,
-		RuntimeModeGitHubScript,
-		"./actions/create-issue",
-	)
-	require.NoError(t, err)
-
-	// Restore after test
-	defer func() {
-		if origActionPath != "" {
-			_ = DefaultScriptRegistry.RegisterWithAction("create_issue", origScript, RuntimeModeGitHubScript, origActionPath)
-		} else {
-			_ = DefaultScriptRegistry.RegisterWithMode("create_issue", origScript, RuntimeModeGitHubScript)
-		}
-	}()
-
-	// Compile - should auto-detect release mode from GITHUB_REF
-	compiler := NewCompilerWithVersion("1.0.0")
-	// Don't set action mode explicitly - let it auto-detect
-	compiler.SetActionMode(DetectActionMode("1.0.0"))
-	compiler.SetNoEmit(false)
-
-	if compiler.GetActionMode() != ActionModeRelease {
-		t.Fatalf("Expected auto-detected release mode, got %s", compiler.GetActionMode())
-	}
-
-	if err := compiler.CompileWorkflow(workflowPath); err != nil {
-		t.Fatalf("Compilation failed: %v", err)
-	}
-
-	// Read lock file
-	lockPath := stringutil.MarkdownToLockFile(workflowPath)
-	lockContent, err := os.ReadFile(lockPath)
-	if err != nil {
-		t.Fatalf("Failed to read lock file: %v", err)
-	}
-
-	lockStr := string(lockContent)
-
-	// Verify safe_outputs job exists (consolidated mode)
-	if !strings.Contains(lockStr, "safe_outputs:") {
-		t.Error("Expected safe_outputs job in compiled workflow")
-	}
-
-	// Verify handler manager step is present (create_issue is now handled by handler manager)
-	if !strings.Contains(lockStr, "id: process_safe_outputs") {
-		t.Error("Expected process_safe_outputs step in compiled workflow (create-issue is now handled by handler manager)")
-	}
-	// Verify handler config contains create_issue
-	if !strings.Contains(lockStr, "create_issue") {
-		t.Error("Expected create_issue in handler config")
-	}
-}
-
-// TestDevModeCompilation tests workflow compilation in dev mode
-// Note: This test uses create_issue which already has ScriptName set.
-func TestDevModeCompilation(t *testing.T) {
-	tempDir := t.TempDir()
-
-	// Save original environment
-	origRef := os.Getenv("GITHUB_REF")
-	defer os.Setenv("GITHUB_REF", origRef)
-
-	// Set environment for dev mode
-	os.Setenv("GITHUB_REF", "") // Local development (no GITHUB_REF)
-
-	workflowContent := `---
-name: Test Dev Mode
-on: issues
-safe-outputs:
-  create-issue:
-    max: 1
----
-
-Test
-`
-
-	workflowPath := tempDir + "/test-workflow.md"
-	if err := os.WriteFile(workflowPath, []byte(workflowContent), 0644); err != nil {
-		t.Fatalf("Failed to write workflow: %v", err)
-	}
-
-	// Save original script
-	origScript := DefaultScriptRegistry.Get("create_issue")
-	origActionPath := DefaultScriptRegistry.GetActionPath("create_issue")
-
-	testScript := `const { core } = require('@actions/core'); core.info('test');`
-	err := DefaultScriptRegistry.RegisterWithAction("create_issue", testScript, RuntimeModeGitHubScript, "./actions/create-issue")
-	require.NoError(t, err)
-
-	defer func() {
-		if origActionPath != "" {
-			_ = DefaultScriptRegistry.RegisterWithAction("create_issue", origScript, RuntimeModeGitHubScript, origActionPath)
-		} else {
-			_ = DefaultScriptRegistry.RegisterWithMode("create_issue", origScript, RuntimeModeGitHubScript)
-		}
-	}()
-
-	compiler := NewCompilerWithVersion("1.0.0")
-	compiler.SetActionMode(DetectActionMode("dev"))
-	compiler.SetNoEmit(false)
-
-	if compiler.GetActionMode() != ActionModeDev {
-		t.Fatalf("Expected auto-detected dev mode, got %s", compiler.GetActionMode())
-	}
-
-	if err := compiler.CompileWorkflow(workflowPath); err != nil {
-		t.Fatalf("Compilation failed: %v", err)
-	}
-
-	lockPath := stringutil.MarkdownToLockFile(workflowPath)
-	lockContent, err := os.ReadFile(lockPath)
-	if err != nil {
-		t.Fatalf("Failed to read lock file: %v", err)
-	}
-
-	lockStr := string(lockContent)
-
-	// Verify safe_outputs job exists (consolidated mode)
-	if !strings.Contains(lockStr, "safe_outputs:") {
-		t.Error("Expected safe_outputs job in compiled workflow")
-	}
-
-	// Verify handler manager step is present (create_issue is now handled by handler manager)
-	if !strings.Contains(lockStr, "id: process_safe_outputs") {
-		t.Error("Expected process_safe_outputs step in compiled workflow (create-issue is now handled by handler manager)")
-	}
-	// Verify handler config contains create_issue
-	if !strings.Contains(lockStr, "create_issue") {
-		t.Error("Expected create_issue in handler config")
-	}
-}

From a4eedeb7c5750e120a58389bc05d59705f8dd2a0 Mon Sep 17 00:00:00 2001
From: Don Syme
Date: Sat, 28 Feb 2026 03:34:47 +0000
Subject: [PATCH 5/7] Fix CI: remove integration tests referencing deleted
 methods

- Delete safe_outputs_env_integration_test.go (all 3 tests used deleted job builders)
- Remove TestSafeOutputJobsIntegration, TestSafeOutputJobsWithCustomEnvVars,
  TestSafeOutputJobsMissingConfig from safe_outputs_integration_test.go
- Fix stale imports in compiler_action_mode_test.go
- Update DEADCODE.md: always vet with -tags=integration to catch integration tests
---
 DEADCODE.md                                   |   2 +-
 pkg/workflow/compiler_action_mode_test.go     |   4 -
 .../safe_outputs_env_integration_test.go      | 296 ------------
 pkg/workflow/safe_outputs_integration_test.go | 423 ------------------
 4 files changed, 1 insertion(+), 724 deletions(-)
 delete mode 100644 pkg/workflow/safe_outputs_env_integration_test.go

diff --git a/DEADCODE.md b/DEADCODE.md
index d313889ad7..3380615b64 100644
--- a/DEADCODE.md
+++ b/DEADCODE.md
@@ -30,7 +30,7 @@ It does NOT report unreachable constants, variables, or types — only functions
 - **Always include `./internal/tools/...` in the deadcode command**
 - **Beware `//go:build js && wasm` files** — `cmd/gh-aw-wasm/` uses functions like `ParseWorkflowString` and `CompileToYAML` that deadcode can't see because the WASM binary can't be compiled without `GOOS=js GOARCH=wasm`. Always check `cmd/gh-aw-wasm/main.go` before deleting functions from `pkg/workflow/`.
 - Run `go build ./...` after every batch
-- Run `go vet ./...` to catch test compilation errors (cheaper than `go test`)
+- Run `go vet ./...` **AND** `go vet -tags=integration ./...` to catch unit AND integration test errors
 - Run `go test -tags=integration ./pkg/affected/...` to spot-check
 - Always check if a "fully dead" file contains live constants/vars before deleting
 - The deadcode list was generated before any deletions; re-run after major batches
diff --git a/pkg/workflow/compiler_action_mode_test.go b/pkg/workflow/compiler_action_mode_test.go
index 1e0e0505ed..5ebcf5fbf4 100644
--- a/pkg/workflow/compiler_action_mode_test.go
+++ b/pkg/workflow/compiler_action_mode_test.go
@@ -4,11 +4,7 @@ package workflow
 
 import (
 	"os"
-	"strings"
 	"testing"
-
-	"github.com/github/gh-aw/pkg/stringutil"
-	"github.com/stretchr/testify/require"
 )
 
 // TestActionModeDetection tests the DetectActionMode function
diff --git a/pkg/workflow/safe_outputs_env_integration_test.go b/pkg/workflow/safe_outputs_env_integration_test.go
deleted file mode 100644
index da3ee53864..0000000000
--- a/pkg/workflow/safe_outputs_env_integration_test.go
+++ /dev/null
@@ -1,296 +0,0 @@
-//go:build integration
-
-package workflow
-
-import (
-	"strings"
-	"testing"
-
-	"github.com/github/gh-aw/pkg/parser"
-)
-
-// parseWorkflowFromContent is a helper function to parse workflow content for testing
-func parseWorkflowFromContent(t *testing.T, content string, filename string) *WorkflowData {
-	t.Helper()
-
-	result, err := parser.ExtractFrontmatterFromContent(content)
-	if err != nil {
-		t.Fatalf("Failed to extract frontmatter: %v", err)
-	}
-
-	compiler := NewCompiler()
-	safeOutputs := compiler.extractSafeOutputsConfig(result.Frontmatter)
-	topTools := extractToolsFromFrontmatter(result.Frontmatter)
-
-	workflowData := &WorkflowData{
-		Name:            filename,
-		FrontmatterName: extractStringFromMap(result.Frontmatter, "name", nil),
-		SafeOutputs:     safeOutputs,
-		Tools:           topTools,
-	}
-
-	return workflowData
-}
-
-func TestSafeOutputsEnvIntegration(t *testing.T) {
-	tests := []struct {
-		name               string
-		frontmatter        map[string]any
-		expectedEnvVars    []string
-		expectedSafeOutput string
-	}{
-		{
-			name: "Create issue job with custom env vars",
-			frontmatter: map[string]any{
-				"name": "Test Workflow",
-				"on":   "push",
-				"safe-outputs": map[string]any{
-					"create-issue": nil,
-					"env": map[string]any{
-						"GITHUB_TOKEN": "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}",
-						"DEBUG_MODE":   "true",
-					},
-				},
-			},
-			expectedEnvVars: []string{
-				"GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}",
-				"DEBUG_MODE: true",
-			},
-			expectedSafeOutput: "create-issue",
-		},
-		{
-			name: "Create pull request job with custom env vars",
-			frontmatter: map[string]any{
-				"name": "Test Workflow",
-				"on":   "push",
-				"safe-outputs": map[string]any{
-					"create-pull-request": nil,
-					"env": map[string]any{
-						"CUSTOM_API_KEY": "${{ secrets.CUSTOM_API_KEY }}",
-						"ENVIRONMENT":    "production",
-					},
-				},
-			},
-			expectedEnvVars: []string{
-				"CUSTOM_API_KEY: ${{ secrets.CUSTOM_API_KEY }}",
-				"ENVIRONMENT: production",
-			},
-			expectedSafeOutput: "create-pull-request",
-		},
-		{
-			name: "Add issue comment job with custom env vars",
-			frontmatter: map[string]any{
-				"name": "Test Workflow",
-				"on":   "issues",
-				"safe-outputs": map[string]any{
-					"add-comment": nil,
-					"env": map[string]any{
-						"NOTIFICATION_URL": "${{ secrets.WEBHOOK_URL }}",
-						"COMMENT_TEMPLATE": "template-v2",
-					},
-				},
-			},
-			expectedEnvVars: []string{
-				"NOTIFICATION_URL: ${{ secrets.WEBHOOK_URL }}",
-				"COMMENT_TEMPLATE: template-v2",
-			},
-			expectedSafeOutput: "add-comment",
-		},
-		{
-			name: "Multiple safe outputs with shared env vars",
-			frontmatter: map[string]any{
-				"name": "Test Workflow",
-				"on":   "push",
-				"safe-outputs": map[string]any{
-					"create-issue":        nil,
-					"create-pull-request": nil,
-					"env": map[string]any{
-						"SHARED_TOKEN": "${{ secrets.SHARED_TOKEN }}",
-						"WORKFLOW_ID":  "multi-output-test",
-					},
-				},
-			},
-			expectedEnvVars: []string{
-				"SHARED_TOKEN: ${{ secrets.SHARED_TOKEN }}",
-				"WORKFLOW_ID: multi-output-test",
-			},
-			expectedSafeOutput: "create-issue,create-pull-request",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			compiler := NewCompiler()
-
-			// Extract the safe outputs configuration
-			config := compiler.extractSafeOutputsConfig(tt.frontmatter)
-			if config == nil {
-				t.Fatal("Expected SafeOutputsConfig to be parsed")
-			}
-
-			// Verify env configuration is parsed correctly
-			if config.Env == nil {
-				t.Fatal("Expected Env to be parsed")
-			}
-
-			// Build workflow data
-			data := &WorkflowData{
-				Name:            "Test",
-				FrontmatterName: "Test Workflow",
-				SafeOutputs:     config,
-			}
-
-			// Test job generation for each safe output type
-			if strings.Contains(tt.expectedSafeOutput, "create-issue") {
-				job, err := compiler.buildCreateOutputIssueJob(data, "main_job")
-				if err != nil {
-					t.Errorf("Error building create issue job: %v", err)
-				}
-
-				assertEnvVarsInSteps(t, job.Steps, tt.expectedEnvVars)
-			}
-
-			if strings.Contains(tt.expectedSafeOutput, "create-pull-request") {
-				job, err := compiler.buildCreateOutputPullRequestJob(data, "main_job")
-				if err != nil {
-					t.Errorf("Error building create pull request job: %v", err)
-				}
-
-				assertEnvVarsInSteps(t, job.Steps, tt.expectedEnvVars)
-			}
-
-			if strings.Contains(tt.expectedSafeOutput, "add-comment") {
-				job, err := compiler.buildCreateOutputAddCommentJob(data, "main_job", "", "", "")
-				if err != nil {
-					t.Errorf("Error building add issue comment job: %v", err)
-				}
-
-				assertEnvVarsInSteps(t, job.Steps, tt.expectedEnvVars)
-			}
-		})
-	}
-}
-
-func TestSafeOutputsEnvFullWorkflowCompilation(t *testing.T) {
-	workflowContent := `---
-name: Test Environment Variables
-on: push
-safe-outputs:
-  create-issue:
-    title-prefix: "[env-test] "
-    labels: ["automated", "env-test"]
-  env:
-    GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}
-    DEBUG_MODE: "true"
-    CUSTOM_API_KEY: ${{ secrets.CUSTOM_API_KEY }}
----
-
-# Environment Variables Test Workflow
-
-This workflow tests that custom environment variables are properly passed through
-to safe output jobs.
-
-Create an issue with test results.
-`
-
-	workflowData := parseWorkflowFromContent(t, workflowContent, "test-env-workflow.md")
-
-	// Verify the SafeOutputs configuration includes our environment variables
-	if workflowData.SafeOutputs == nil {
-		t.Fatal("Expected SafeOutputs to be parsed")
-	}
-
-	if workflowData.SafeOutputs.Env == nil {
-		t.Fatal("Expected Env to be parsed")
-	}
-
-	expectedEnvVars := map[string]string{
-		"GITHUB_TOKEN":   "${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}",
-		"DEBUG_MODE":     "true",
-		"CUSTOM_API_KEY": "${{ secrets.CUSTOM_API_KEY }}",
-	}
-
-	for key, expectedValue := range expectedEnvVars {
-		if actualValue, exists := workflowData.SafeOutputs.Env[key]; !exists {
-			t.Errorf("Expected env key %s to exist", key)
-		} else if actualValue != expectedValue {
-			t.Errorf("Expected env[%s] to be %q, got %q", key, expectedValue, actualValue)
-		}
-	}
-
-	// Build the create issue job and verify it includes our environment variables
-	compiler := NewCompiler()
-	job, err := compiler.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Failed to build create issue job: %v", err)
-	}
-
-	jobYAML := strings.Join(job.Steps, "")
-
-	expectedEnvLines := []string{
-		"GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}",
-		"DEBUG_MODE: true",
-		"CUSTOM_API_KEY: ${{ secrets.CUSTOM_API_KEY }}",
-	}
-
-	for _, expectedEnvLine := range expectedEnvLines {
-		if !strings.Contains(jobYAML, expectedEnvLine) {
-			t.Errorf("Expected environment variable %q not found in job YAML", expectedEnvLine)
-		}
-	}
-
-	// Verify issue configuration is present
-	if !strings.Contains(jobYAML, "GH_AW_ISSUE_TITLE_PREFIX: \"[env-test] \"") {
-		t.Error("Expected issue title prefix not found in job YAML")
-	}
-
-	if !strings.Contains(jobYAML, "GH_AW_ISSUE_LABELS: \"automated,env-test\"") {
-		t.Error("Expected issue labels not found in job YAML")
-	}
-}
-
-func TestSafeOutputsEnvWithStagedMode(t *testing.T) {
-	workflowContent := `---
-name: Test Environment Variables with Staged Mode
-on: push
-safe-outputs:
-  create-issue:
-  env:
-    GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}
-    DEBUG_MODE: "true"
-  staged: true
----
-
-# Environment Variables with Staged Mode Test
-
-This workflow tests that custom environment variables work with staged mode.
-`
-
-	workflowData := parseWorkflowFromContent(t, workflowContent, "test-env-staged-workflow.md")
-
-	// Verify staged mode is enabled
-	if !workflowData.SafeOutputs.Staged {
-		t.Error("Expected staged mode to be enabled")
-	}
-
-	// Build the create issue job and verify it includes our environment variables and staged flag
-	compiler := NewCompiler()
-	job, err := compiler.buildCreateOutputIssueJob(workflowData, "main_job")
-	if err != nil {
-		t.Fatalf("Failed to build create issue job: %v", err)
-	}
-
-	jobYAML := strings.Join(job.Steps, "")
-
-	expectedEnvVars := []string{
-		"GITHUB_TOKEN: ${{ secrets.SOME_PAT_FOR_AGENTIC_WORKFLOWS }}",
-		"DEBUG_MODE: true",
-	}
-
-	assertEnvVarsInSteps(t, job.Steps, expectedEnvVars)
-
-	// Verify staged mode is enabled
-	if !strings.Contains(jobYAML, "GH_AW_SAFE_OUTPUTS_STAGED: \"true\"") {
-		t.Error("Expected staged mode flag not found in job YAML")
-	}
-}
diff --git a/pkg/workflow/safe_outputs_integration_test.go b/pkg/workflow/safe_outputs_integration_test.go
index e672d9b069..ac15d85231 100644
--- a/pkg/workflow/safe_outputs_integration_test.go
+++ b/pkg/workflow/safe_outputs_integration_test.go
@@ -7,252 +7,6 @@ import (
 	"testing"
 )
 
-// TestSafeOutputJobsIntegration tests that all safe output job types that have individual
-// job builders can be built with proper environment configuration, including the critical
-// GH_AW_WORKFLOW_ID variable. This prevents regressions where required environment variables
-// are missing from compiled workflows.
-func TestSafeOutputJobsIntegration(t *testing.T) {
-	tests := []struct {
-		name           string
-		safeOutputType string
-		configBuilder  func() *SafeOutputsConfig
-		requiredEnvVar string // The critical env var to check (usually GH_AW_WORKFLOW_ID)
-		jobBuilder     func(*Compiler, *WorkflowData, string) (*Job, error)
-	}{
-		{
-			name:           "create_pull_request",
-			safeOutputType: "create-pull-request",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreatePullRequests: &CreatePullRequestsConfig{
-						TitlePrefix: "[Test] ",
-						Labels:      []string{"test"},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputPullRequestJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "create_issue",
-			safeOutputType: "create-issue",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreateIssues: &CreateIssuesConfig{
-						TitlePrefix: "[Test] ",
-						Labels:      []string{"test"},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputIssueJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "create_discussion",
-			safeOutputType: "create-discussion",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreateDiscussions: &CreateDiscussionsConfig{
-						TitlePrefix: "[Test] ",
-						Category:    "general",
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputDiscussionJob(data, mainJobName, "")
-			},
-		},
-		{
-			name:           "add_comment",
-			safeOutputType: "add-comment",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					AddComments: &AddCommentsConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("5"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputAddCommentJob(data, mainJobName, "", "", "")
-			},
-		},
-		{
-			name:           "add_labels",
-			safeOutputType: "add-labels",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					AddLabels: &AddLabelsConfig{
-						Allowed: []string{"test", "automated"},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildAddLabelsJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "missing_tool",
-			safeOutputType: "missing-tool",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					MissingTool: &MissingToolConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("10"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_MISSING_TOOL_MAX",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputMissingToolJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "create_pr_review_comment",
-			safeOutputType: "create-pr-review-comment",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreatePullRequestReviewComments: &CreatePullRequestReviewCommentsConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("10"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputPullRequestReviewCommentJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "create_code_scanning_alert",
-			safeOutputType: "create-code-scanning-alert",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreateCodeScanningAlerts: &CreateCodeScanningAlertsConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("10"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputCodeScanningAlertJob(data, mainJobName, "test-workflow.md")
-			},
-		},
-		{
-			name:           "create_agent_session",
-			safeOutputType: "create-agent-session",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreateAgentSessions: &CreateAgentSessionConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("5"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputAgentSessionJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "upload_assets",
-			safeOutputType: "upload-assets",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					UploadAssets: &UploadAssetsConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("10"),
-						},
-					},
-				}
-			},
-			requiredEnvVar: "GH_AW_WORKFLOW_ID",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildUploadAssetsJob(data, mainJobName, false)
-			},
-		},
-	}
-
-	// Known issue: Individual job builders are missing GH_AW_WORKFLOW_ID
-	// These job builders need to be fixed to include the environment variable
-	// Tracked in: https://github.com/github/gh-aw/issues/7023
-	knownMissingEnvVar := map[string]bool{
-		"create_issue":               true,
-		"create_discussion":          true,
-		"add_comment":                true,
-		"add_labels":                 true,
-		"create_pr_review_comment":   true,
-		"create_code_scanning_alert": true,
-		"create_agent_session":       true,
-		"upload_assets":              true,
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			// Skip tests for job builders with known missing GH_AW_WORKFLOW_ID
-			if knownMissingEnvVar[tt.name] && tt.requiredEnvVar == "GH_AW_WORKFLOW_ID" {
-				t.Skip("Known issue: GH_AW_WORKFLOW_ID missing from this job builder. Remove this skip when fixed.")
-			}
-
-			// Create compiler instance
-			c := NewCompiler()
-
-			// Build workflow data with the specific safe output configuration
-			workflowData := &WorkflowData{
-				Name:        "test-workflow",
-				Source:      "test-source",
-				SafeOutputs: tt.configBuilder(),
-			}
-
-			// Build the job
-			job, err := tt.jobBuilder(c, workflowData, "main_job")
-			if err != nil {
-				t.Fatalf("Failed to build %s job: %v", tt.name, err)
-			}
-
-			if job == nil {
-				t.Fatalf("Job should not be nil for %s", tt.name)
-			}
-
-			// Verify the job has steps
-			if len(job.Steps) == 0 {
-				t.Fatalf("Job should have at least one step for %s", tt.name)
-			}
-
-			// Convert steps to string for checking environment variables
-			stepsContent := strings.Join(job.Steps, "")
-
-			// Verify the required environment variable is present
-			if !strings.Contains(stepsContent, tt.requiredEnvVar) {
-				t.Errorf("Required environment variable %s not found in %s job steps.\nJob steps:\n%s",
-					tt.requiredEnvVar, tt.name, stepsContent)
-			}
-
-			// Log success for debugging
-			t.Logf("✓ %s job built successfully with required env var %s", tt.name, tt.requiredEnvVar)
-		})
-	}
-}
-
-// TestConsolidatedSafeOutputsJobIntegration tests the consolidated safe outputs job
-// which combines multiple safe output operations into a single job with multiple steps.
-// Many safe output types (noop, push_to_pull_request_branch, update_issue, update_pull_request,
-// update_discussion, close_issue, close_pull_request, close_discussion, add_reviewer, assign_milestone,
-// assign_to_agent, assign_to_user, hide_comment, update_release) are built as steps within
-// the consolidated job rather than as individual jobs.
 func TestConsolidatedSafeOutputsJobIntegration(t *testing.T) {
 	tests := []struct {
 		name string
@@ -593,183 +347,6 @@ func TestConsolidatedSafeOutputsJobIntegration(t *testing.T) {
 	}
 }
 
-// TestSafeOutputJobsWithCustomEnvVars tests that custom environment variables
-// from safe-outputs.env are properly propagated to all safe output job types.
-func TestSafeOutputJobsWithCustomEnvVars(t *testing.T) {
-	tests := []struct {
-		name           string
-		safeOutputType string
-		configBuilder  func() *SafeOutputsConfig
-		customEnvVars  map[string]string
-		jobBuilder     func(*Compiler, *WorkflowData, string) (*Job, error)
-	}{
-		{
-			name:           "create_issue_with_custom_env",
-			safeOutputType: "create-issue",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreateIssues: &CreateIssuesConfig{
-						TitlePrefix: "[Test] ",
-					},
-					Env: map[string]string{
-						"CUSTOM_VAR":   "custom_value",
-						"GITHUB_TOKEN": "${{ secrets.CUSTOM_PAT }}",
-					},
-				}
-			},
-			customEnvVars: map[string]string{
-				"CUSTOM_VAR":   "CUSTOM_VAR: custom_value",
-				"GITHUB_TOKEN": "GITHUB_TOKEN: ${{ secrets.CUSTOM_PAT }}",
-			},
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputIssueJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "create_pull_request_with_custom_env",
-			safeOutputType: "create-pull-request",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					CreatePullRequests: &CreatePullRequestsConfig{
-						TitlePrefix: "[Test] ",
-					},
-					Env: map[string]string{
-						"DEBUG_MODE": "true",
-						"API_KEY":    "${{ secrets.API_KEY }}",
-					},
-				}
-			},
-			customEnvVars: map[string]string{
-				"DEBUG_MODE": "DEBUG_MODE: true",
-				"API_KEY":    "API_KEY: ${{ secrets.API_KEY }}",
-			},
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputPullRequestJob(data, mainJobName)
-			},
-		},
-		{
-			name:           "add_comment_with_custom_env",
-			safeOutputType: "add-comment",
-			configBuilder: func() *SafeOutputsConfig {
-				return &SafeOutputsConfig{
-					AddComments: &AddCommentsConfig{
-						BaseSafeOutputConfig: BaseSafeOutputConfig{
-							Max: strPtr("5"),
-						},
-					},
-					Env: map[string]string{
-						"NOTIFICATION_URL": "${{ secrets.WEBHOOK_URL }}",
-						"ENVIRONMENT":      "production",
-					},
-				}
-			},
-			customEnvVars: map[string]string{
-				"NOTIFICATION_URL": "NOTIFICATION_URL: ${{ secrets.WEBHOOK_URL }}",
-				"ENVIRONMENT":      "ENVIRONMENT: production",
-			},
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) {
-				return c.buildCreateOutputAddCommentJob(data, mainJobName, "", "", "")
-			},
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			// Create compiler instance
-			c := NewCompiler()
-
-			// Build workflow data with custom env vars
-			workflowData := &WorkflowData{
-				Name:        "test-workflow",
-				Source:      "test-source",
-				SafeOutputs: tt.configBuilder(),
-			}
-
-			// Build the job
-			job, err := tt.jobBuilder(c, workflowData, "main_job")
-			if err != nil {
-				t.Fatalf("Failed to build %s job: %v", tt.name, err)
-			}
-
-			// Convert steps to string for checking environment variables
-			stepsContent := strings.Join(job.Steps, "")
-
-			// Verify all custom environment variables are present
-			for envVarName, expectedContent := range tt.customEnvVars {
-				if !strings.Contains(stepsContent, expectedContent) {
-					t.Errorf("Custom environment variable %s not found in %s job.\nExpected: %s\nJob steps:\n%s",
-						envVarName, tt.name, expectedContent, stepsContent)
-				}
-			}
-
-			t.Logf("✓ %s job includes all custom environment variables", tt.name)
-		})
-	}
-}
-
-// TestSafeOutputJobsMissingConfig tests that jobs fail gracefully when required configuration is missing
-func TestSafeOutputJobsMissingConfig(t *testing.T) {
-	tests := []struct {
-		name       string
-		jobBuilder func(*Compiler, *WorkflowData, string) (*Job, error)
-		shouldFail bool
-	}{
-		{
-			name: "missing_tool_without_config",
-			jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string)
(*Job, error) { - // Set SafeOutputs to nil to trigger validation error - data.SafeOutputs = nil - return c.buildCreateOutputMissingToolJob(data, mainJobName) - }, - shouldFail: true, - }, - { - name: "create_issue_without_config", - jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) { - // Set SafeOutputs to nil - data.SafeOutputs = nil - return c.buildCreateOutputIssueJob(data, mainJobName) - }, - shouldFail: true, - }, - { - name: "add_labels_without_config", - jobBuilder: func(c *Compiler, data *WorkflowData, mainJobName string) (*Job, error) { - // Set SafeOutputs to nil - data.SafeOutputs = nil - return c.buildAddLabelsJob(data, mainJobName) - }, - shouldFail: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - c := NewCompiler() - workflowData := &WorkflowData{ - Name: "test-workflow", - Source: "test-source", - } - - job, err := tt.jobBuilder(c, workflowData, "main_job") - - if tt.shouldFail { - if err == nil { - t.Errorf("Expected error for %s, but got none. Job: %v", tt.name, job) - } else { - t.Logf("✓ %s correctly failed with error: %v", tt.name, err) - } - } else { - if err != nil { - t.Errorf("Expected no error for %s, but got: %v", tt.name, err) - } - } - }) - } -} - -// TestConsolidatedSafeOutputsJobWithCustomEnv tests that custom environment variables -// are properly included in the consolidated safe outputs job. 
 func TestConsolidatedSafeOutputsJobWithCustomEnv(t *testing.T) {
 	c := NewCompiler()

From f5db94cfdae930a6c244c38ddcd91e14fabef9f4 Mon Sep 17 00:00:00 2001
From: Don Syme
Date: Sat, 28 Feb 2026 03:49:11 +0000
Subject: [PATCH 6/7] Remove dead code batch 3: compiler_types + js.go (-133 lines)

Remove 7 zero-caller functions from compiler_types.go:
- WithActionMode, WithRepositorySlug, WithGitRoot, WithInlinePrompt
- GetDefaultVersion
- (Compiler).GetWorkflowIdentifier, (Compiler).GetRepositorySlug

Remove 10 zero-caller functions from js.go:
- GetSafeOutputsMCPServerScript, GetSafeInputsMCPServerScript
- GetSafeInputsToolFactoryScript, GetSafeInputsBootstrapScript
- GetSafeOutputsConfigScript, GetSafeOutputsAppendScript
- GetSafeOutputsHandlersScript, GetSafeOutputsToolsLoaderScript
- GetSafeOutputsBootstrapScript
- WriteJavaScriptToYAMLPreservingComments
---
 pkg/workflow/compiler_types.go | 37 -------------
 pkg/workflow/js.go             | 96 ----------------------------------
 2 files changed, 133 deletions(-)

diff --git a/pkg/workflow/compiler_types.go b/pkg/workflow/compiler_types.go
index 7857b5245d..f96fae0b86 100644
--- a/pkg/workflow/compiler_types.go
+++ b/pkg/workflow/compiler_types.go
@@ -32,11 +32,6 @@ func WithVersion(version string) CompilerOption {
 	return func(c *Compiler) { c.version = version }
 }
 
-// WithActionMode overrides the auto-detected action mode
-func WithActionMode(mode ActionMode) CompilerOption {
-	return func(c *Compiler) { c.actionMode = mode }
-}
-
 // WithSkipValidation configures whether to skip schema validation
 func WithSkipValidation(skip bool) CompilerOption {
 	return func(c *Compiler) { c.skipValidation = skip }
@@ -67,23 +62,6 @@ func WithWorkflowIdentifier(identifier string) CompilerOption {
 	return func(c *Compiler) { c.workflowIdentifier = identifier }
 }
 
-// WithRepositorySlug sets the repository slug for schedule scattering
-func WithRepositorySlug(slug string) CompilerOption {
-	return func(c *Compiler) { c.repositorySlug = slug }
-}
-
-// WithGitRoot sets the git repository root directory for action cache path
-func WithGitRoot(gitRoot string) CompilerOption {
-	return func(c *Compiler) { c.gitRoot = gitRoot }
-}
-
-// WithInlinePrompt configures whether to inline markdown content directly in the compiled YAML
-// instead of using runtime-import macros. This is required for Wasm/browser builds where
-// the filesystem is unavailable at runtime.
-func WithInlinePrompt(inline bool) CompilerOption {
-	return func(c *Compiler) { c.inlinePrompt = inline }
-}
-
 // FileTracker interface for tracking files created during compilation
 type FileTracker interface {
 	TrackCreated(filePath string)
@@ -99,11 +77,6 @@ func SetDefaultVersion(version string) {
 	defaultVersion = version
 }
 
-// GetDefaultVersion returns the default version
-func GetDefaultVersion() string {
-	return defaultVersion
-}
-
 // Compiler handles converting markdown workflows to GitHub Actions YAML
 type Compiler struct {
 	verbose bool
@@ -281,21 +254,11 @@ func (c *Compiler) SetWorkflowIdentifier(identifier string) {
 	c.workflowIdentifier = identifier
 }
 
-// GetWorkflowIdentifier returns the current workflow identifier
-func (c *Compiler) GetWorkflowIdentifier() string {
-	return c.workflowIdentifier
-}
-
 // SetRepositorySlug sets the repository slug for schedule scattering
 func (c *Compiler) SetRepositorySlug(slug string) {
 	c.repositorySlug = slug
 }
 
-// GetRepositorySlug returns the repository slug
-func (c *Compiler) GetRepositorySlug() string {
-	return c.repositorySlug
-}
-
 // GetScheduleWarnings returns all accumulated schedule warnings for this compiler instance
 func (c *Compiler) GetScheduleWarnings() []string {
 	return c.scheduleWarnings
diff --git a/pkg/workflow/js.go b/pkg/workflow/js.go
index 5d14936e91..737f4c70d5 100644
--- a/pkg/workflow/js.go
+++ b/pkg/workflow/js.go
@@ -97,10 +97,6 @@ func GetLogParserBootstrap() string {
 	return ""
 }
 
-func GetSafeOutputsMCPServerScript() string {
-	return ""
-}
-
 func GetSafeOutputsToolsJSON() string {
 	return safeOutputsToolsJSONContent
 }
@@ -121,10 +117,6 @@ func GetMCPLoggerScript() string {
 	return ""
 }
 
-func GetSafeInputsMCPServerScript() string {
-	return ""
-}
-
 func GetSafeInputsMCPServerHTTPScript() string {
 	return ""
 }
@@ -133,14 +125,6 @@ func GetSafeInputsConfigLoaderScript() string {
 	return ""
 }
 
-func GetSafeInputsToolFactoryScript() string {
-	return ""
-}
-
-func GetSafeInputsBootstrapScript() string {
-	return ""
-}
-
 func GetSafeInputsValidationScript() string {
 	return ""
 }
@@ -153,26 +137,6 @@ func GetMCPHandlerPythonScript() string {
 	return ""
 }
 
-func GetSafeOutputsConfigScript() string {
-	return ""
-}
-
-func GetSafeOutputsAppendScript() string {
-	return ""
-}
-
-func GetSafeOutputsHandlersScript() string {
-	return ""
-}
-
-func GetSafeOutputsToolsLoaderScript() string {
-	return ""
-}
-
-func GetSafeOutputsBootstrapScript() string {
-	return ""
-}
-
 // Helper functions for formatting JavaScript in YAML
 
 func removeJavaScriptComments(code string) string {
@@ -543,64 +507,4 @@ func WriteJavaScriptToYAML(yaml *strings.Builder, script string) {
 	}
 }
 
-// WriteJavaScriptToYAMLPreservingComments writes a JavaScript script with proper indentation to a strings.Builder
-// while preserving JSDoc and inline comments, but removing TypeScript-specific comments.
-// Used for security-sensitive scripts like redact_secrets.
-func WriteJavaScriptToYAMLPreservingComments(yaml *strings.Builder, script string) {
-	// Validate that script is not empty - this helps catch errors where getter functions
-	// return empty strings after embedded scripts were removed
-	if strings.TrimSpace(script) == "" {
-		jsLog.Print("WARNING: Attempted to write empty JavaScript script to YAML (preserving comments)")
-		return
-	}
-
-	scriptLines := strings.Split(script, "\n")
-	previousLineWasEmpty := false
-	hasWrittenContent := false // Track if we've written any content yet
-
-	for i, line := range scriptLines {
-		trimmed := strings.TrimSpace(line)
-
-		// Skip TypeScript-specific comments
-		if strings.HasPrefix(trimmed, "// @ts-") || strings.HasPrefix(trimmed, "/// <reference")

From: Don Syme
Date: Sat, 28 Feb 2026 03:56:46 +0000
Subject: [PATCH 7/7] Update DEADCODE.md: batch 2+3 complete, revised batch 4 targets (259 dead remain)

---
 DEADCODE.md | 67 +++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 47 insertions(+), 20 deletions(-)

diff --git a/DEADCODE.md b/DEADCODE.md
index 3380615b64..0ccc0c8a4d 100644
--- a/DEADCODE.md
+++ b/DEADCODE.md
@@ -108,26 +108,37 @@ These are the JS bundler subsystem — entirely unused.
 ## Phase 2: Near-Fully Dead Files (high value, some surgery)
 
-These files are mostly dead and worth cleaning next:
-
-- [ ] `pkg/workflow/script_registry.go` (11/13 dead) — keep only `GetActionPath`, `DefaultScriptRegistry`
-- [ ] `pkg/workflow/artifact_manager.go` (14/16 dead) — remove 14 functions
-- [ ] `pkg/constants/constants.go` (13/27 dead) — remove 13 constants
-- [ ] `pkg/workflow/map_helpers.go` (5/7 dead) — remove 5 functions
-- [ ] `pkg/workflow/js.go` (17/47 dead) — remove 17 JS bundle functions
-- [ ] `pkg/workflow/compiler_types.go` (17/45 dead) — remove 17 types/methods
+- [x] `pkg/workflow/script_registry.go` — rewritten minimal in batch 2 ✅
+- [x] `pkg/workflow/compiler_types.go` — 7 dead `With*` option funcs + 3 getters removed in batch 3; **10 dead remain** (see batch 4)
+- [x] `pkg/workflow/js.go` — 10 dead bundle/Get* funcs removed in batch 3; **7 dead remain** (see batch 4)
+- [ ] `pkg/workflow/artifact_manager.go` — **14 dead** — but tests call many of these; skip or do last
+- [ ] `pkg/constants/constants.go` — **13 dead** (all `String()`/`IsValid()` methods on type aliases) — safe to remove
+- [ ] `pkg/workflow/map_helpers.go` — **5 dead** — check test callers before removing
 
 ---
 
-## Phase 3: Partially Dead Files (1–6 dead per file)
-
-Individual function removals across ~100 files. To be tackled after Phase 1 and 2.
-
-High-count files to prioritize:
-- `pkg/workflow/expression_builder.go` (9/27 dead)
-- `pkg/workflow/validation_helpers.go` (6/10 dead)
-- `pkg/cli/docker_images.go` (6/11 dead)
-- `pkg/workflow/domains.go` (10/27 dead)
+## Phase 3 / Batch 4 Targets (current dead count: 259)
+
+Remaining high-value clusters from `deadcode ./cmd/... ./internal/tools/...`:
+
+| File | Dead | Notes |
+|------|------|-------|
+| `pkg/workflow/artifact_manager.go` | 14 | Many test callers; do last |
+| `pkg/constants/constants.go` | 13 | All `String()`/`IsValid()` on semantic types; safe |
+| `pkg/workflow/domains.go` | 10 | Check callers |
+| `pkg/workflow/compiler_types.go` | 10 | Remaining With*/Get* |
+| `pkg/workflow/expression_builder.go` | 9 | Check callers |
+| `pkg/workflow/js.go` | 7 | Remaining Get* stubs |
+| `pkg/workflow/validation_helpers.go` | 6 | Check callers |
+| `pkg/cli/docker_images.go` | 6 | Check callers |
+| `pkg/workflow/permissions_factory.go` | 5 | Check callers |
+| `pkg/workflow/map_helpers.go` | 5 | Check callers |
+| `pkg/workflow/engine_helpers.go` | 5 | Check callers |
+| `pkg/console/console.go` | 5 | Check callers |
+| `pkg/workflow/safe_outputs_env.go` | 4 | Check callers |
+| `pkg/workflow/expression_nodes.go` | 4 | Check callers |
+
+**Long tail:** ~80 remaining files with 1–3 dead functions each.
 
 ---
 
@@ -151,9 +162,25 @@ Deleted 17 files, surgery on 6 test files. `go build ./...` + `go vet ./...` + `
 
 Deferred `pkg/stringutil/paths.go` to Batch 2 — callers in bundler files still present.
 
-#### Batch 2: Groups 1D + 1E (Workflow fully dead) — TODO
-#### Batch 3: Phase 2 (Near-fully dead, high-value partial files) — TODO
-#### Batch 4: Phase 3 (Individual function removals) — TODO
+#### Batch 2: Groups 1D + 1E (Workflow fully dead) — COMPLETE ✅
+
+Deleted 35 files (bundler subsystem + env_mirror, copilot_participant_steps, dependency_tracker,
+markdown_unfencing, prompt_step, safe_output_builder, sh.go, stringutil/paths.go).
+Rescued: `prompt_constants.go`, `setup_action_paths.go`. Rewrote `script_registry.go` minimal.
+Surgery on 12 test files. ~7,856 lines deleted.
+
+⚠️ **Lessons learned in batch 2:**
+- `go vet ./...` misses integration tests — MUST also run `go vet -tags=integration ./...`
+- `cmd/gh-aw-wasm/` has `//go:build js && wasm` — deadcode can't see it; `compiler_string_api.go` was wrongly deleted and restored
+- Always check `cmd/gh-aw-wasm/main.go` before deleting `pkg/workflow` functions
+
+#### Batch 3: Phase 2 partial (compiler_types + js.go) — COMPLETE ✅
+
+Removed 7 dead `With*` option funcs + 3 dead getters from `compiler_types.go`.
+Removed 10 dead Get*/bundle funcs from `js.go`.
+~133 lines deleted. Dead count: 362 → 259.
+
+#### Batch 4: Remaining Phase 2 + Phase 3 (individual removals) — TODO
 
 ---