
Conversation

@henryxwong

Add Cline CLI support and update documentation.

@vercel

vercel bot commented Feb 11, 2026

@henryxwong is attempting to deploy a commit to the Goshen Labs Team on Vercel.

A member of the Team first needs to authorize it.

@greptile-apps
Contributor

greptile-apps bot commented Feb 11, 2026

Greptile Overview

Greptile Summary

This PR adds comprehensive Cline CLI support to Ralphy, following the established engine integration patterns. The implementation includes a new ClineEngine class that uses non-interactive JSON output (cline -y --json), proper Windows stdin handling for multi-line prompts, and streaming support with progress tracking. The changes are well-tested with 6 passing tests and consistently integrated across all documentation, the CLI, the shell script, and the landing page components.

  • New ClineEngine implementation in cli/src/engines/cline.ts with proper JSON parsing for say:text and say:tool messages
  • Comprehensive test coverage in cli/src/engines/cline.test.ts (6 tests, all passing)
  • Updated detectStepFromOutput in base.ts to recognize Cline JSON format for progress tracking
  • Full integration in ralphy.sh for brownfield mode, parallel execution, conflict resolution, and monitoring
  • Documentation updated across README files with usage examples and engine comparison table
  • Landing page components updated to showcase Cline support
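
The JSON-lines parsing the summary describes can be sketched as follows. The message shape (`{"type":"say","say":"text"|"tool","text":...}`) is taken from this summary only; the function and field handling here are illustrative, not the actual `cli/src/engines/cline.ts` API:

```typescript
// Illustrative sketch: extract the final reply from `cline -y --json`
// output, which emits one JSON object per line. Shape assumed from the
// PR summary; the real ClineEngine implementation may differ.
interface ClineMessage {
  type: string;
  say?: string;
  text?: string;
}

// Returns the text of the last `say:text` message, or null if none.
function parseFinalText(output: string): string | null {
  let final: string | null = null;
  for (const line of output.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "") continue;
    try {
      const msg = JSON.parse(trimmed) as ClineMessage;
      if (msg.type === "say" && msg.say === "text" && msg.text) {
        final = msg.text; // keep overwriting so the last say:text wins
      }
    } catch {
      // non-JSON noise interleaved in the stream is ignored
    }
  }
  return final;
}
```

Taking the last `say:text` line (rather than the first) matches the flow in the sequence diagram, where intermediate tool messages precede the final answer.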

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The implementation follows established patterns from other engines (Gemini, Copilot), includes comprehensive test coverage (all tests passing), and maintains consistency across the entire codebase. No breaking changes or security concerns identified.
  • No files require special attention

Important Files Changed

| Filename | Overview |
|----------|----------|
| `cli/src/engines/cline.ts` | New Cline engine implementation following established patterns, with proper JSON parsing, Windows stdin support, and comprehensive test coverage |
| `cli/src/engines/cline.test.ts` | Complete test suite covering args building, model override, engine args passthrough, response parsing, and streaming functionality |
| `cli/src/engines/base.ts` | Added Cline JSON detection to `detectStepFromOutput` for progress tracking, with corresponding test coverage |
| `cli/src/engines/index.ts` | Added Cline import and case to the `createEngine` factory function, properly integrated with existing engines |
| `cli/src/cli/args.ts` | Added `--cline` flag, updated description, and integrated Cline into engine selection logic |
| `ralphy.sh` | Comprehensive Cline integration across brownfield tasks, parallel execution, conflict resolution, and monitoring with proper JSON parsing |
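
The Cline branch added to `detectStepFromOutput` can be sketched like this; the `"Implementing"` label and field names follow this review's summary and sequence diagram, not the actual `base.ts` source:

```typescript
// Illustrative sketch of Cline-aware step detection: a `say:tool` JSON
// line is treated as an implementation step for progress tracking.
// Returns a step label, or null so callers can fall back to the
// text-based detection used by other engines.
function detectClineStep(line: string): string | null {
  try {
    const msg = JSON.parse(line) as { type?: string; say?: string };
    if (msg.type === "say" && msg.say === "tool") {
      return "Implementing";
    }
  } catch {
    // not JSON: not a Cline stream line
  }
  return null;
}
```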

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as Ralphy CLI
    participant Engine as ClineEngine
    participant Cline as Cline CLI

    User->>CLI: ralphy --cline "fix bug"
    CLI->>CLI: parseArgs() - detect --cline flag
    CLI->>Engine: createEngine("cline")
    CLI->>Engine: execute(prompt, workDir, options)

    alt Windows Platform
        Engine->>Engine: buildArgs() - pass prompt via stdin
        Engine->>Cline: cline -y --json (stdin: prompt)
    else Unix/Linux
        Engine->>Cline: cline -y --json "prompt"
    end

    loop JSON Output Lines
        Cline-->>Engine: {"type":"say","say":"tool","text":"..."}
        Engine->>Engine: detectStepFromOutput() -> "Implementing"
        Engine-->>CLI: onProgress("Implementing")
        Cline-->>Engine: {"type":"say","say":"text","text":"Final answer"}
    end

    Engine->>Engine: parseJsonLines() - extract last say:text
    Engine-->>CLI: AIResult{success, response, duration}
    CLI-->>User: Display result
```
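
The platform branch in the diagram (prompt via stdin on Windows, via argv elsewhere) can be sketched as below. Only the `-y --json` flags come from this review; the helper name and return shape are hypothetical:

```typescript
// Illustrative sketch of the Windows stdin handling shown in the
// diagram: multi-line prompts are piped via stdin on Windows to avoid
// shell quoting and line-ending issues, and passed as an argv entry
// on other platforms.
function buildClineInvocation(
  prompt: string,
  platform: string = process.platform,
): { args: string[]; stdin: string | null } {
  if (platform === "win32") {
    // prompt travels over stdin; argv carries only the flags
    return { args: ["-y", "--json"], stdin: prompt };
  }
  return { args: ["-y", "--json", prompt], stdin: null };
}
```

A caller would then spawn `cline` with `args` and, when `stdin` is non-null, write it to the child process and close the pipe.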

@dosubot

dosubot bot commented Feb 11, 2026

Related Documentation

4 documents may need updating based on the files changed in this PR:

Goshen Labs's Space

AI Engine Integration
View Suggested Changes
@@ -6,7 +6,7 @@
 **Engine-Specific Arguments:** You can pass arbitrary arguments to any engine using the `--` separator. Everything after `--` is forwarded directly to the engine CLI. See the 'Engine-Specific Arguments' section for details and examples.
 
 **Requirements:**
-- AI CLI: [Claude Code](https://github.com/anthropics/claude-code), [OpenCode](https://opencode.ai/docs/), [Cursor](https://cursor.com), Codex, Qwen-Code, [Factory Droid](https://docs.factory.ai/cli/getting-started/quickstart), [GitHub Copilot](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/use-copilot-cli), [Gemini CLI](https://github.com/google-gemini/gemini-cli), or [Ollama](https://ollama.com) (requires Claude Code CLI)
+- AI CLI: [Claude Code](https://github.com/anthropics/claude-code), [OpenCode](https://opencode.ai/docs/), [Cursor](https://cursor.com), Codex, Qwen-Code, [Factory Droid](https://docs.factory.ai/cli/getting-started/quickstart), [GitHub Copilot](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/use-copilot-cli), [Gemini CLI](https://github.com/google-gemini/gemini-cli), [Kimi Code CLI](https://github.com/MoonshotAI/kimi-cli), or [Ollama](https://ollama.com) (requires Claude Code CLI)
 - **npm version (`ralphy-cli`)**: Node.js 18+ or Bun
 
 Each engine requires its CLI tool installed and available in the system PATH.
@@ -14,16 +14,17 @@
 ### Engine Overview
 | Engine        | CLI Command         | Selection Flag      | Permissions/Flags                  | Output Format      | Token/Cost Reporting | Notable Behaviors & Limitations |
 |---------------|--------------------|---------------------|------------------------------------|--------------------|----------------------|----------------------------------|
-| Claude Code   | `claude`           | `--claude` (default)| `--dangerously-skip-permissions`   | `stream-json`      | tokens + cost        | Auth issues may be environment-specific. Windows: .cmd wrappers now supported. Failed commands return error messages with exit codes. |
-| OpenCode      | `opencode`         | `--opencode`        | `run --format json`                | JSON lines         | tokens + cost        | CLI must be in PATH. Windows: .cmd wrappers now supported. Failed commands return error messages with exit codes. |
-| Codex         | `codex`            | `--codex`           | `exec --full-auto --json`          | File + JSON        | tokens (no cost)     | Uses temp file for output; no token reporting. Failed commands return error messages with exit codes. |
-| Cursor        | `agent`            | `--cursor`          | `--print --force --output-format stream-json` | stream-json | duration (ms)         | No token reporting. Failed commands return error messages with exit codes. |
-| Qwen-Code     | `qwen`             | `--qwen`            | `--output-format stream-json --approval-mode yolo` | stream-json | tokens                | Model override via `--model`. Failed commands return error messages with exit codes. |
-| Factory Droid | `droid exec`       | `--droid`           | `--output-format stream-json --auto medium` | stream-json | duration (ms)         | Model override via `--model`. Failed commands return error messages with exit codes. |
-| Trae Agent    | `trae`             | `--trae`            | `--print --force --output-format stream-json` | stream-json | tokens, duration (ms) | Model override via `--model`. Trae CLI must be in PATH. |
-| GitHub Copilot| `copilot`          | `--copilot`         | `--yolo`                            | plain text         | tokens (parsed from output) | Requires Copilot CLI. **Prompts are passed via temp files to preserve markdown. --yolo is always used for non-interactive mode. Only authentication errors with known output formats are detected and surfaced. Token usage is parsed and reported. Silent failures and infinite retry loops are fixed.** |
-| Gemini        | `gemini`           | `--gemini`          | `--output-format stream-json --yolo` | stream-json        | tokens + cost        | Model override via `--model`. Failed commands return error messages with exit codes. |
-| Ollama        | `claude`           | `--ollama`          | `--dangerously-skip-permissions` (Ollama env vars) | stream-json        | tokens + cost        | Runs local models via Claude Code CLI. Requires Ollama running locally and Claude Code CLI installed. Recommended models: `qwen3-coder`, `glm-4.7`, `gpt-oss:20b`, `gpt-oss:120b`. |
+| Claude Code   | `claude`           | `--claude` (default)| `--dangerously-skip-permissions`   | `stream-json`      | tokens + cost        | Auth issues may be environment-specific. Windows: .cmd wrappers supported. |
+| OpenCode      | `opencode`         | `--opencode`        | `run --format json`                | JSON lines         | tokens + cost        | CLI must be in PATH. Windows: .cmd wrappers supported. |
+| Codex         | `codex`            | `--codex`           | `exec --full-auto --json`          | File + JSON        | tokens (no cost)     | Uses temp file for output; no token reporting. |
+| Cursor        | `agent`            | `--cursor`          | `--print --force --output-format stream-json` | stream-json | duration (ms)         | No token reporting. |
+| Qwen-Code     | `qwen`             | `--qwen`            | `--output-format stream-json --approval-mode yolo` | stream-json | tokens                | Model override via `--model`. |
+| Factory Droid | `droid exec`       | `--droid`           | `--output-format stream-json --auto medium` | stream-json | duration (ms)         | Model override via `--model`. |
+| Trae Agent    | `trae`             | `--trae`            | `--print --force --output-format stream-json` | stream-json | tokens, duration (ms) | Model override via `--model`. |
+| GitHub Copilot| `copilot`          | `--copilot`         | `--acp --stdio --yolo`              | NDJSON (ACP)       | none                 | Uses Agent Client Protocol (ACP) for structured communication and streaming. Token counts not available. Legacy engine deprecated. |
+| Gemini        | `gemini`           | `--gemini`          | `--output-format stream-json --yolo` | stream-json        | tokens + cost        | Model override via `--model`. |
+| Kimi Code     | `kimi`             | `--kimi`            | `--yolo --output-format stream-json` | stream-json        | tokens               | Model override via `--model`. CLI must be installed and in PATH. |
+| Ollama        | `claude`           | `--ollama`          | `--dangerously-skip-permissions` (Ollama env vars) | stream-json        | tokens + cost        | Runs local models via Claude Code CLI. |
 
 ## Engine Integration Details
 
@@ -91,25 +92,42 @@
 ralphy --gemini --model gemini-pro "implement feature"
 ```
 
+### Kimi Code
+Integrated as `KimiEngine`. Uses the `kimi` CLI with streaming JSON output. Model override via `--model <name>`. Supports `--yolo` auto-approve mode and accepts additional engine-specific arguments. Token counts are parsed from output. Failed commands return error messages with exit codes. CLI must be installed and available in PATH.
+
+Example:
+```bash
+ralphy --kimi "your task"
+ralphy --kimi --model kimi-k2.5 "your task"
+```
+
 ### GitHub Copilot
-Integrated as `CopilotEngine`. Uses the `copilot` CLI. **Prompts are always written to temporary markdown files (with unique UUID filenames) and passed via the `--yolo` flag for non-interactive mode, enabling autonomous operation and allowing all tools and paths without prompting.** The `--yolo` flag is always used for non-interactive mode. Streaming mode is not supported due to reliability issues on Windows; non-streaming execution is used for all platforms.
+Integrated as `CopilotAcpEngine`. Uses the `copilot` CLI with the Agent Client Protocol (ACP) for structured NDJSON communication and real streaming support. The ACP engine is now the default for the `--copilot` flag. Legacy CopilotEngine (text parsing) is deprecated and retained only for reference.
+
+**Key Features:**
+- Structured NDJSON protocol communication via ACP
+- Real streaming support with `executeStreaming()` and `agent_message_chunk` events
+- Protocol-based error handling (stop reasons)
+- Auto-approval for tool/permission requests in yolo mode
+- Cross-platform compatibility (Windows, macOS, Linux)
+- No fragile text parsing or temp file prompts
 
 **Error Handling:**
-- Only authentication errors with known output formats are detected and surfaced. Specifically, if the Copilot CLI output starts with phrases like "no authentication", "not authenticated", "authentication required", or "please authenticate", a clear error message is shown with instructions to run `copilot /login` or set the `COPILOT_GITHUB_TOKEN` environment variable.
-- Other errors, such as rate limiting, network issues, or generic error messages, are not detected or surfaced as CLI errors unless they match the known authentication error patterns. This conservative approach avoids false positives, since Copilot CLI output may include phrases like "rate limit", "network error", or "error:" as part of valid responses (e.g., test results or code feedback).
-- Non-zero exit codes are not relied upon for error detection, as Copilot CLI may not use them consistently.
-- Fatal errors (such as authentication failure or missing CLI tools) abort all remaining tasks immediately and prevent infinite retry loops.
-
-**Token Usage Parsing:**
-- Input and output token counts are parsed from Copilot CLI output, supporting `k` and `m` suffixes (e.g., "17.5k in, 73 out").
-- Token usage is reported in the result when available.
-
-**Output Filtering:**
-- CLI artifacts, status messages, and stats sections are filtered out, returning only the meaningful response.
-- Markdown formatting is preserved in the prompt and response.
+- Errors are surfaced via ACP protocol stop reasons. Authentication errors are detected and reported with instructions to run `copilot /login` or set the `COPILOT_GITHUB_TOKEN` environment variable.
+- Other errors (rate limiting, network issues) are surfaced as protocol errors when available.
+- Fatal errors abort all remaining tasks and prevent infinite retry loops.
+
+**Token Usage Reporting:**
+- **Token counts are not available**: Copilot CLI's ACP implementation does not expose usage/token metadata. The result only contains `stopReason`. Telemetry loses token tracking for Copilot. This is a Copilot CLI limitation.
+
+**Streaming:**
+- Streaming is fully supported via ACP protocol. Progress updates are provided as response chunks arrive.
 
 **Automatic Cleanup:**
-- Temporary prompt files are always cleaned up after execution, even on errors.
+- ACP connections and Copilot CLI processes are cleaned up after execution.
+
+**Legacy Engine:**
+- The old CopilotEngine (text parsing) is deprecated. Use CopilotAcpEngine for all new integrations.
 
 Example:
 ```bash
@@ -128,6 +146,7 @@
 ralphy --qwen --model qwen-max "build api"
 ralphy --trae --model trae-pro "implement feature"
 ralphy --ollama --model glm-4.7 "add feature" # Ollama with specific model
+ralphy --kimi --model kimi-k2.5 "your task"   # Kimi Code with specific model
 ```
 
 # Engine-Specific Arguments
@@ -140,6 +159,9 @@
 
 # Pass claude-specific arguments
 ralphy --claude "add feature" -- --no-permissions-prompt
+
+# Pass kimi-specific arguments
+ralphy --kimi "your task" -- --custom-arg value
 
 # Works with any engine
 ralphy --cursor "fix bug" -- --custom-arg value
@@ -150,8 +172,8 @@
 ## Adding a New Engine
 To add a new AI engine:
 1. Implement a new class extending `BaseAIEngine`, specifying the engine's name and CLI command.
-2. Parse output according to the engine's format (e.g., stream-json, plain text, file).
-3. Handle errors using the shared error detection utilities.
+2. Integrate using the engine's protocol or output format (e.g., ACP, stream-json, file, NDJSON).
+3. Handle errors using the shared error detection utilities and protocol stop reasons.
 4. Support model overrides by appending the `--model <name>` flag if provided.
 5. Register the engine in the CLI flag parser and engine registry.
 6. Ensure CLI availability by checking the command exists in the system PATH.
@@ -165,8 +187,9 @@
 - **OpenCode**: CLI must be in PATH. Windows is supported for npm global CLIs; .cmd wrappers are handled automatically.
 - **Claude Code**: Authentication/connectivity issues may occur depending on environment; not considered a bug in Ralphy.
 - **Codex**: No token reporting; uses temp files for output.
-- **GitHub Copilot**: Requires Copilot CLI installed and available in PATH. Prompts are passed via temp files. --yolo is always used for non-interactive mode. Only authentication errors with known output formats are detected and surfaced. Token usage is parsed and reported. Infinite retry loops on fatal errors are fixed.
+- **GitHub Copilot**: Requires Copilot CLI installed and available in PATH. Uses ACP protocol for structured communication and streaming. Token counts are not available. Legacy engine deprecated.
 - **Ollama**: Requires [Ollama](https://ollama.com) running locally and Claude Code CLI installed and available in PATH. Only models with at least 64k context window are supported. If either dependency is missing, tasks will fail with a clear error message.
+- **Kimi Code**: Requires [Kimi Code CLI](https://github.com/MoonshotAI/kimi-cli) installed and available in PATH. Token counts are available. Model override via `--model`. If CLI is missing, tasks will fail with a clear error message.
 - **General**: Each engine requires its CLI tool installed and available in the system PATH.
 
 ## Example Usage
@@ -178,8 +201,9 @@
 ralphy --qwen "generate API"                   # Qwen-Code
 ralphy --droid "create test suite"             # Factory Droid
 ralphy --trae "implement feature"              # Trae Agent
-ralphy --copilot "add feature"                 # GitHub Copilot
+ralphy --copilot "add feature"                 # GitHub Copilot (ACP)
 ralphy --gemini "summarize document"           # Gemini CLI
+ralphy --kimi "your task"                      # Kimi Code CLI
 ralphy --ollama "add feature"                  # Ollama (local models via Claude Code CLI)
 
 ralphy --opencode --model opencode/glm-4.7-free "custom model"
@@ -187,6 +211,7 @@
 ralphy --trae --model trae-pro "implement feature with Trae"
 ralphy --copilot "add feature" -- --allow-all-tools --stream on
 ralphy --gemini --model gemini-pro "generate code with Gemini"
+ralphy --kimi --model kimi-k2.5 "your task"
 ralphy --ollama --model glm-4.7 "add feature"  # Ollama with specific model
 ```
 
@@ -194,10 +219,23 @@
 
 ## Project Standards
 ### v4.7.1
-- **Copilot engine improvements**: non-interactive mode (`--yolo`), conservative error detection (only authentication errors with known output formats), token usage parsing, temp file-based prompts for markdown preservation
+- **Copilot engine refactored to ACP protocol**: Structured NDJSON communication, streaming support, protocol-based error handling, auto-approval for yolo mode, Windows compatibility
+- **Token reporting limitation**: Copilot CLI ACP does not provide token counts; telemetry loses token tracking for Copilot
 - **Fixed infinite retry loop**: tasks now properly abort on fatal configuration/authentication errors
 - **Project standards**: added `.editorconfig` and `.gitattributes` for consistent coding styles
 
 
 ## Project Standards
-Ralphy now enforces consistent coding styles and line endings across all files using `.editorconfig` and `.gitattributes`. All text files use UTF-8 and LF line endings. Indentation is standardized (tabs for code, spaces for YAML/JSON). This ensures reliable cross-platform development and clean diffs.
+Ralphy enforces consistent coding styles and line endings across all files using `.editorconfig` and `.gitattributes`. All text files use UTF-8 and LF line endings. Indentation is standardized (tabs for code, spaces for YAML/JSON). This ensures reliable cross-platform development and clean diffs.
+
+---
+
+**References:**
+- [GitHub Copilot ACP Documentation](https://docs.github.com/en/copilot/reference/acp-server)
+- [Agent Client Protocol Spec](https://agentclientprotocol.com/protocol/overview)
+- [TypeScript SDK](https://agentclientprotocol.com/libraries/typescript)
+- [Kimi Code CLI](https://github.com/MoonshotAI/kimi-cli)
+
+---
+
+**Note:** The Copilot engine now uses ACP protocol for all operations. The legacy engine is deprecated and retained only for reference.


Linux Compatibility and Dependency Checks
View Suggested Changes
@@ -42,7 +42,6 @@
 
 [Source](https://github.com/michaelshimeles/ralphy/pull/80)
 
-
 ### Bash Version
 Ralphy requires Bash version 4.0 or higher due to its use of features like associative arrays. To check your Bash version:
 ```sh
@@ -73,10 +72,15 @@
 ## Improved Error Messaging and Setup Hints
 Ralphy provides actionable error messages and setup hints for missing dependencies and configuration issues. Examples include:
 
-- If a required AI engine CLI (such as `opencode`, `codex`, `cursor`, `qwen`, or `claude`) is missing, ralphy will display a clear error and a URL or setup hint, e.g.:
+- If a required AI engine CLI (such as `opencode`, `codex`, `cursor`, `qwen`, `claude`, `copilot`, `droid`, or `cline`) is missing, ralphy will display a clear error and a URL or setup hint, e.g.:
   ```
   OpenCode CLI not found.
   Install from: https://opencode.ai/docs/
+  ```
+  or
+  ```
+  Cline CLI not found.
+  Install from: https://github.com/cline/cline
   ```
 - If the required PRD file (Markdown or YAML) is missing, ralphy will suggest how to create it and how to specify the file type:
   ```
@@ -95,5 +99,6 @@
 | git                 | `git --version`<br>Install via package manager           | `git is required but not installed. Install git before running Ralphy.`              |
 | Inside git repo     | `git rev-parse --is-inside-work-tree`                    | `Not a git repository. Ralphy requires a git repository to track changes.`           |
 | Bash ≥ 4.0          | `bash --version`<br>Install: `apt-get install bash` or `yum install bash` | `ERROR: Ralphy requires bash 4.0 or later. Current version: ...`                     |
+| Cline CLI           | `cline --version`<br>Install: [Cline CLI](https://github.com/cline/cline) | `Cline CLI not found. Install from: https://github.com/cline/cline`                  |
 
 Ralphy’s robust pre-flight checks and improved error messages help ensure all requirements are met before execution, reducing troubleshooting time and improving reliability on Linux systems.


Model Override and Selection
View Suggested Changes
@@ -1,8 +1,46 @@
 ### Purpose and Usage
-By default, each engine (e.g., Claude, OpenCode, Qwen, Trae, Gemini, Ollama) uses its standard model. The `--model` flag lets you specify an alternative model for the selected engine. For convenience, shortcut flags like `--sonnet` are provided, which combine engine selection and model override in a single flag.
+By default, each engine (e.g., Claude, OpenCode, Qwen, Trae, Gemini, Ollama, Kimi) uses its standard model. The `--model` flag lets you specify an alternative model for the selected engine. For convenience, shortcut flags like `--sonnet` are provided, which combine engine selection and model override in a single flag.
+
+#### Per-Task Model Selection in YAML
+You can now specify a model for each individual task in your YAML task list using the `model:` property. When present, this per-task model takes precedence over any global model override specified via CLI flags. This allows you to mix and match models for different tasks within the same run, optimizing for cost, speed, or capability as needed.
+
+**Example (YAML):**
+```yaml
+tasks:
+  - title: Create user model
+    completed: false
+    model: claude
+  - title: Set up database
+    completed: false
+    model: opencode/kimi-k2.5-free
+```
+
+#### Organizing Tasks by Subsections (Categories)
+YAML task lists now support organizing tasks into subsections (categories) for better structure. Each subsection is a key under `tasks:`, and contains an array of tasks. You can use both per-task `model:` and `parallel_group:` properties within subsections.
+
+**Example (YAML with subsections):**
+```yaml
+tasks:
+  setup:
+    - title: Initialize project
+      completed: false
+      model: claude
+  features:
+    - title: Build dashboard
+      completed: false
+      model: opencode/kimi-k2.5-free
+      parallel_group: 1
+```
+
+Both flat array and subsection formats are supported. See the updated `example-prd.yaml` for a comprehensive example.
+
+#### Model Precedence
+- If a task has a `model:` property in YAML, it overrides any global model specified via CLI flags for that task.
+- If no per-task model is set, the global model override (if any) is used.
+- If neither is set, the engine's default model is used.
 
 **Planning Model Override:**
-You can now use the `--planning-model` flag to specify a separate model for the planning phase (file prediction) while keeping a different model for code generation/execution. This allows you to use a cheaper or faster model for planning and a more capable model for implementation, optimizing both cost and performance.
+You can use the `--planning-model` flag to specify a separate model for the planning phase (file prediction) while keeping a different model for code generation/execution. This allows you to use a cheaper or faster model for planning and a more capable model for implementation, optimizing both cost and performance.
 
 For example:
 ```bash
@@ -18,6 +56,7 @@
 
 [Reference: README.md](https://github.com/michaelshimeles/ralphy/blob/main/README.md)
 
+For more details on model override flow and engine selection, see the sections below.
 
 ### Model Override Flow in the Execution Pipeline
 When you invoke `ralphy` with `--model`, `--planning-model`, or a shortcut flag, the CLI argument parser determines the engine and both model overrides. For example, `--sonnet` is equivalent to `--claude --model sonnet`. The selected engine, model override, and planning model override are stored in the runtime options and passed through all phases of execution, including sequential runs, parallel execution, and conflict resolution.
@@ -32,7 +71,7 @@
 [Reference: cli/src/cli/args.ts, cli/src/execution/planning.ts, cli/src/execution/parallel.ts, cli/src/execution/sequential.ts]
 
 ### Specifying Models for Different Engines
-You can specify a model for any supported engine by combining the engine flag with `--model`. For example, to use a specific model with OpenCode, use `--opencode --model <model-name>`. The same pattern applies to other engines, including Trae, Gemini, and Ollama:
+You can specify a model for any supported engine by combining the engine flag with `--model`. For example, to use a specific model with OpenCode, use `--opencode --model <model-name>`. The same pattern applies to other engines, including Trae, Gemini, Ollama, and Kimi:
 
 ```bash
 ralphy --opencode --model opencode/glm-4.7-free "task"
@@ -40,6 +79,7 @@
 ralphy --trae --model trae-v1 "do something"
 ralphy --gemini --model gemini-1.0-pro "summarize"
 ralphy --ollama --model glm-4.7 "add feature"
+ralphy --kimi --model kimi-k2.5 "your task"
 ```
 
 You can also specify a separate planning model for any engine using `--planning-model <model-name>`. This model will be used only for the planning phase (file prediction), while the main model (from `--model`) is used for code generation and execution:
@@ -50,12 +90,12 @@
 ralphy --trae --model trae-v1 --planning-model trae-lite "task"
 ralphy --gemini --model gemini-1.0-pro --planning-model gemini-1.0 "summarize"
 ralphy --ollama --model glm-4.7 --planning-model glm-3.5 "add feature"
+ralphy --kimi --model kimi-k2.5 --planning-model kimi-k2.5 "your task"
 ```
 
 Only one engine/model combination can be specified per command invocation. There is no built-in support for specifying different models for multiple engines in a single command; you must run separate commands for each engine/model pair.
 
 [Reference: cli/src/cli/args.ts, cli/src/execution/planning.ts]
-
 
 ### Examples
 - Use the default model for Claude:
@@ -102,6 +142,14 @@
   ```bash
   ralphy --ollama --model glm-4.7 "add feature"
   ```
+- Use Kimi Code CLI with its default model:
+  ```bash
+  ralphy --kimi "your task"
+  ```
+- Use a custom model with Kimi Code CLI:
+  ```bash
+  ralphy --kimi --model kimi-k2.5 "your task"
+  ```
 - Use a separate planning model (e.g., for cost savings):
   ```bash
   ralphy --model opus --planning-model haiku "implement feature"
@@ -109,13 +157,13 @@
   ralphy --trae --model trae-v1 --planning-model trae-lite "task"
   ralphy --gemini --model gemini-1.0-pro --planning-model gemini-1.0 "summarize document"
   ralphy --ollama --model glm-4.7 --planning-model glm-3.5 "add feature"
+  ralphy --kimi --model kimi-k2.5 --planning-model kimi-k2.5 "your task"
   ```
-
 
 ### Best Practices
 - Use shortcut flags (like `--sonnet`) for common engine/model combinations to reduce typing and avoid mistakes.
-- Use `--model` with the appropriate engine flag for custom or less common model selections.
+- Use `--model` with the appropriate engine flag for custom or less common model selections, including `--kimi` for Kimi Code CLI.
 - Use `--planning-model` to select a cheaper or faster model for the planning phase, especially in parallel or no-git modes where planning is a distinct step. This can reduce cost and speed up planning without affecting code quality.
 - Ensure you specify only one engine/model pair per command.
 - Model overrides are consistently applied throughout the execution pipeline, including parallel runs and conflict resolution, so you can rely on the selected model being used for all phases of the operation.
-- Refer to the engine documentation or `ralphy --help` for the list of supported models for each engine.
+- Refer to the engine documentation or `ralphy --help` for the list of supported models for each engine, including Kimi Code CLI.


Root User Detection and Restrictions
View Suggested Changes
@@ -2,6 +2,8 @@
 Ralphy includes logic to detect when it is being run as the root user. This detection is performed by checking if the effective user ID (EUID) or the output of `id -u` equals 0. If Ralphy determines it is running as root, it applies engine-specific restrictions and messaging to protect against unsafe or unsupported operations [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056).
 
 **Note:** Ollama support is provided via the Claude Code CLI. When using Ollama (`--ollama`), all root user restrictions and behaviors are identical to those for Claude Code.
+
+**Kimi Code CLI** is supported as an AI engine (`--kimi` flag). Its root user behavior matches that of other engines that are not blocked as root (see below for details).
 
 ### Restrictions for Claude Code and Cursor Engines
 When running as root, Ralphy does not allow the use of the Claude Code, Ollama (via Claude Code CLI), or Cursor engines. Attempting to use any of these engines as root results in an immediate error and process exit. This restriction is enforced regardless of any permission override flags.
@@ -18,13 +23,13 @@
 After displaying these messages, Ralphy exits with a non-zero status [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056).
 
 ### Behavior for Other Engines
-For other supported engines—OpenCode, Codex, Qwen-Code, Factory Droid, Trae Agent, GitHub Copilot, and Gemini CLI—Ralphy does not enforce a hard restriction when running as root. Instead, it issues a warning:
+For other supported engines—OpenCode, Codex, Qwen-Code, Factory Droid, Trae Agent, GitHub Copilot, Gemini CLI, and Kimi Code CLI—Ralphy does not enforce a hard restriction when running as root. Instead, it issues a warning:
 
 ```
 WARNING: Running as root user. Some AI engines may have limited functionality.
 ```
 
-Execution continues, but some features may not work as expected due to permission or environment limitations [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056). Trae Agent and Gemini CLI follow this behavior.
+Execution continues, but some features may not work as expected due to permission or environment limitations [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056). Trae Agent, Gemini CLI, and Kimi Code CLI follow this behavior.
 
 **Note:** Ollama (via Claude Code CLI) is not included here, as it is blocked when running as root.
 
@@ -32,7 +37,7 @@
 If you encounter the root restriction, you have two main options:
 
 - Run Ralphy as a non-root user. This is the recommended and most secure approach for all engines.
-- If you must run as root, use an engine that does not enforce the root restriction: OpenCode (`--opencode`), Codex (`--codex`), Qwen-Code (`--qwen`), Factory Droid (`--droid`), Trae Agent (`--trae`), GitHub Copilot (`--copilot`), or Gemini CLI (`--gemini`) [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056).
+- If you must run as root, use an engine that does not enforce the root restriction: OpenCode (`--opencode`), Codex (`--codex`), Qwen-Code (`--qwen`), Factory Droid (`--droid`), Trae Agent (`--trae`), GitHub Copilot (`--copilot`), Gemini CLI (`--gemini`), or Kimi Code CLI (`--kimi`) [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056).
 
 **Note:** Ollama (via Claude Code CLI) is not available as an alternative when running as root.
 
@@ -45,7 +50,7 @@
     claude|ollama|cursor)
       log_error "Running as root is not supported with $AI_ENGINE."
       log_info "The --dangerously-skip-permissions flag cannot be used as root for security reasons."
-      log_info "Please run Ralphy as a non-root user, or use a different AI engine (--opencode, --codex, --qwen, --droid, --trae, --copilot, --gemini)."
+      log_info "Please run Ralphy as a non-root user, or use a different AI engine (--opencode, --codex, --qwen, --droid, --trae, --copilot, --gemini, --kimi)."
       exit 1
       ;;
     *)
@@ -56,7 +61,7 @@
 ```
 [source](https://github.com/michaelshimeles/ralphy/blob/fc2df589969b5fe16d31eccb4e7ff91314e31776/ralphy.sh#L622-L1056)
 
-Note: Trae Agent (`--trae`) and Gemini CLI (`--gemini`) are included among the engines that are not blocked as root. Ollama (via Claude Code CLI) is now blocked as root.
+Note: Trae Agent (`--trae`), Gemini CLI (`--gemini`), and Kimi Code CLI (`--kimi`) are included among the engines that are not blocked as root. Ollama (via Claude Code CLI) is now blocked as root.
 
 ### Summary Table
 | Engine         | Root Behavior         | User Message / Action                                                                 |
@@ -71,3 +76,4 @@
 | Trae Agent     | Warning, continue    | Warns about limited functionality, continues.                                        |
 | GitHub Copilot | Warning, continue    | Warns about limited functionality, continues.                                        |
 | Gemini CLI     | Warning, continue    | Warns about limited functionality, continues.                                        |
+| Kimi Code CLI  | Warning, continue    | Warns about limited functionality, continues.                                        |
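The root check described above can be sketched as a small shell snippet. This is a hypothetical illustration, not the actual `ralphy.sh` implementation (see the linked source for the real logic); the `is_root` helper name is invented here:

```shell
# Hypothetical sketch of Ralphy-style root detection (not the actual
# ralphy.sh code). A user counts as root when the effective user ID --
# $EUID in bash, falling back to `id -u` in plain sh -- equals 0.
is_root() {
  [ "${EUID:-$(id -u)}" -eq 0 ]
}

if is_root; then
  # Engines not blocked as root get this warning and execution continues.
  echo "WARNING: Running as root user. Some AI engines may have limited functionality."
else
  echo "Running as uid $(id -u); no root restrictions apply."
fi
```

In the real script, the blocked engines (Claude Code, Ollama via Claude Code CLI, Cursor) would additionally hit the `exit 1` branch shown in the case statement above instead of merely warning.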
