feat: add system prompt support and debug logging#5

Open
alexrudloffBot wants to merge 4 commits into atalovesyou:main from alexrudloff:feat/system-prompt-support
Conversation

@alexrudloffBot

Summary

  • Add system prompt support for OpenClaw integration via --append-system-prompt flag
  • Add --dangerously-skip-permissions flag for service mode operations
  • Add optional debug logging controlled by DEBUG_SUBPROCESS environment variable

Changes

  1. System prompt support - Enables passing backstory/memories from OpenClaw through the --append-system-prompt CLI flag
  2. Service mode - Adds --dangerously-skip-permissions to allow file operations when running as a service
  3. Debug logging - All debug logs are now behind a feature flag (DEBUG_SUBPROCESS=true) to keep production logs clean
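The first two changes above come together where the proxy spawns the Claude Code CLI. As a hedged sketch (the flag names come from this PR; the helper names, the use of `--print`, and the stdin wiring are assumptions about the proxy's internals):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Build the CLI argument list: always skip the permission prompts in
// service mode, and attach the system prompt only when one is present.
function buildClaudeArgs(systemPrompt?: string): string[] {
  const args = ["--print", "--dangerously-skip-permissions"];
  if (systemPrompt) {
    args.push("--append-system-prompt", systemPrompt);
  }
  return args;
}

// Spawn the CLI; the main user prompt travels over stdin, not argv.
function spawnClaude(prompt: string, systemPrompt?: string): ChildProcess {
  const child = spawn("claude", buildClaudeArgs(systemPrompt));
  child.stdin?.write(prompt);
  child.stdin?.end();
  return child;
}
```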

Testing

  • Tested with OpenClaw integration
  • Debug logging can be enabled via environment variable

🤖 Created with Claude Code

grassX1998 and others added 4 commits February 1, 2026 11:30
The OpenAI Chat Completions API allows message content to be either a
plain string or an array of content parts (e.g.
[{type: "text", text: "..."}]). When an upstream client sends array
content, the previous code passed it directly to template literals,
resulting in "[object Object]" being sent to Claude CLI.

Add extractContent() helper that normalises both string and array
formats into a single string, and update the OpenAIChatMessage type to
reflect the actual API spec.
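A minimal sketch of the `extractContent()` helper this commit describes; the type names are assumptions, not the PR's exact code:

```typescript
// Content parts per the OpenAI Chat Completions spec: text parts carry a
// `text` field; other part types (e.g. image_url) are ignored here.
type ContentPart =
  | { type: "text"; text: string }
  | { type: string; [key: string]: unknown };

interface OpenAIChatMessage {
  role: "system" | "developer" | "user" | "assistant";
  content: string | ContentPart[];
}

// Normalise string-or-array content into one string, so template literals
// never stringify an array as "[object Object]".
function extractContent(content: string | ContentPart[]): string {
  if (typeof content === "string") return content;
  return content
    .filter((part) => part.type === "text")
    .map((part) => (part as { text: string }).text)
    .join("\n");
}
```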
Adds support for passing system messages to Claude Code CLI via the
--append-system-prompt flag. This enables OpenClaw agents to properly
send their identity, backstory, and instructions through the proxy.

Changes:
- Support both "system" and "developer" message roles (OpenAI spec)
- Extract system messages separately from conversation messages
- Pass system prompt via --append-system-prompt CLI flag
- Add optional tools parameter for future tool restriction support

This fixes the issue where OpenClaw system messages (containing agent
identity and configuration) were being ignored, causing Claude to
respond with its default identity instead of the configured persona.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
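The role handling this commit describes might look like the following sketch: both "system" and "developer" messages (the OpenAI spec treats "developer" as the newer name for instruction messages) are pulled out of the conversation and joined into one system prompt. The function name and shapes are assumptions.

```typescript
interface ChatMessage {
  role: "system" | "developer" | "user" | "assistant";
  content: string;
}

// Separate instruction messages from the conversation so the former can be
// passed via --append-system-prompt and the latter as the actual prompt.
function splitSystemMessages(messages: ChatMessage[]): {
  systemPrompt: string | null;
  conversation: ChatMessage[];
} {
  const isSystem = (m: ChatMessage) =>
    m.role === "system" || m.role === "developer";
  const systemParts = messages.filter(isSystem).map((m) => m.content);
  return {
    systemPrompt: systemParts.length ? systemParts.join("\n\n") : null,
    conversation: messages.filter((m) => !isSystem(m)),
  };
}
```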
- Add console.error logging for system prompt tracking
- Add --dangerously-skip-permissions flag for service operations
- Log assistant messages and results for debugging

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Replace all console.error calls with conditional debug() method
- Debug logging disabled by default, enabled via DEBUG_SUBPROCESS=true
- Add documentation in README troubleshooting section
- Keeps production logs clean while allowing detailed debugging when needed

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
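The conditional `debug()` gate described here can be sketched as follows; the `debug` name matches the commit message, but the surrounding structure is an assumption:

```typescript
// Debug logging is off unless DEBUG_SUBPROCESS=true is set, keeping
// production logs clean.
const DEBUG = process.env.DEBUG_SUBPROCESS === "true";

function debug(...args: unknown[]): void {
  if (DEBUG) {
    // stderr, so debug output never mixes into the JSON/SSE response stream.
    console.error("[subprocess]", ...args);
  }
}
```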
jamshehan added a commit to jamshehan/claude-max-api-proxy that referenced this pull request Feb 8, 2026
Incorporates fixes from PR atalovesyou#5 (coraAIbot) and PR atalovesyou#7 (wende):

- Add --dangerously-skip-permissions flag for service/proxy usage
  Enables full system access when running as API proxy (file I/O, network, PM2, etc.)

- Add system prompt support via --append-system-prompt
  Handles both short system prompts (CLI flag) and long prompts (stdin)
  Prevents ENAMETOOLONG on Windows by using 8000 char threshold

- Fix content array extraction (PR atalovesyou#5 + our implementation)
  Handle OpenAI content format: string | ContentPart[]
  Extract text from content blocks correctly

- Fix rate limit crashes (PR atalovesyou#7)
  Guard against undefined model names when rate limits are hit

- Add debug logging with DEBUG_SUBPROCESS env var
  Helps diagnose prompt issues and subprocess behavior

- Add support for 'developer' role in OpenAI spec

- Keep stdin approach for main prompts (avoid ENAMETOOLONG)

All changes tested and verified with OpenClaw integration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
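The short-vs-long system-prompt split described in this commit can be sketched like so. The 8000-character cutoff comes from the commit message; the function and field names are assumptions:

```typescript
// Above this length, a system prompt no longer fits safely in an argv
// entry (ENAMETOOLONG on Windows, E2BIG elsewhere), so it falls back
// to stdin delivery.
const MAX_ARG_PROMPT_LENGTH = 8000;

interface PromptPlan {
  args: string[];
  stdinPayload?: string;
}

function planSystemPrompt(systemPrompt: string): PromptPlan {
  if (systemPrompt.length <= MAX_ARG_PROMPT_LENGTH) {
    return { args: ["--append-system-prompt", systemPrompt] };
  }
  return { args: [], stdinPayload: systemPrompt };
}
```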
bwiedmann added a commit to bwiedmann/claude-max-api-proxy that referenced this pull request Feb 15, 2026
wende added a commit to wende/claude-max-api-proxy that referenced this pull request Feb 18, 2026
Triaged all 14 open PRs from atalovesyou/claude-max-api-proxy,
implemented the valuable fixes, and added end-to-end test coverage.

Changes:
- Fix normalizeModelName crash on undefined model (atalovesyou#7 regression)
- Pass prompt via stdin instead of CLI arg to avoid E2BIG (atalovesyou#12)
- Increase subprocess timeout from 5 to 15 minutes (atalovesyou#20)
- Add Claude 4.5/4.6 model IDs and claude-max/ prefix (atalovesyou#10, atalovesyou#20)
- Include usage data in final streaming SSE chunk (atalovesyou#16)
- Wrap subprocess logging with DEBUG_SUBPROCESS env check (atalovesyou#5, atalovesyou#16)
- Strip CLAUDECODE env var from subprocesses (own fix)
- Add e2e test suite (7 tests covering health, models, completions)

Co-Authored-By: kevinfealey <10552286+kevinfealey@users.noreply.github.com>
Co-Authored-By: Max <257223904+Max-shipper@users.noreply.github.com>
Co-Authored-By: James Hansen <1359077+jamshehan@users.noreply.github.com>
Co-Authored-By: bitking <213560776+smartchainark@users.noreply.github.com>
Co-Authored-By: Alex Rudloff's AI Agents <258647843+alexrudloffBot@users.noreply.github.com>
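The `normalizeModelName` crash fix mentioned in the triage commit might look like this sketch; the function name and the `claude-max/` prefix come from the commit, but the body and fallback model are assumptions:

```typescript
// Assumed fallback when the client sends no model (e.g. on rate-limit
// error paths, where the field can arrive undefined).
const DEFAULT_MODEL = "claude-sonnet-4-5";

function normalizeModelName(model: string | undefined): string {
  // Guard first: calling .replace()/.toLowerCase() on undefined is the
  // crash the atalovesyou#7 regression fix addresses.
  if (!model) return DEFAULT_MODEL;
  return model.replace(/^claude-max\//, "").toLowerCase();
}
```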