diff --git a/.augment/rules/architecture.md b/.augment/rules/architecture.md new file mode 100644 index 0000000..9a771b7 --- /dev/null +++ b/.augment/rules/architecture.md @@ -0,0 +1,259 @@ +--- +type: "always_apply" +--- + +# Architecture and Design Patterns for MCPShell + +## Project Structure + +### Directory Organization +``` +mcpshell/ +├── cmd/ # Command-line interface implementation +│ ├── root.go # Root command and global flags +│ ├── mcp.go # MCP server command +│ ├── exe.go # Direct tool execution command +│ ├── validate.go # Configuration validation command +│ └── daemon.go # Daemon mode command +├── pkg/ # Core application packages +│ ├── server/ # MCP server implementation +│ ├── command/ # Command execution and runners +│ ├── config/ # Configuration loading and validation +│ ├── common/ # Shared utilities and types +│ └── utils/ # Helper functions +├── docs/ # Documentation +├── examples/ # Example configurations +├── tests/ # Integration and E2E tests +├── build/ # Build output directory +└── main.go # Application entry point +``` + +### Package Responsibilities + +#### `cmd/` Package +- Command-line interface implementation using Cobra +- Command definitions and flag parsing +- User interaction and output formatting +- Delegates business logic to `pkg/` packages + +#### `pkg/server/` Package +- MCP server lifecycle management +- Tool registration and discovery +- Request handling and routing +- Integration with MCP protocol library + +#### `pkg/command/` Package +- Command handler creation and execution +- Runner implementations (exec, firejail, sandbox-exec, docker) +- Template processing and parameter substitution +- Constraint evaluation and validation + +#### `pkg/config/` Package +- YAML configuration loading and parsing +- Configuration validation +- Tool definition structures +- Configuration merging for multiple files + +#### `pkg/common/` Package +- Shared types and interfaces +- Logging infrastructure +- Constraint compilation and evaluation (CEL) +- Template utilities +- Panic recovery +- Prerequisite checking + +#### `pkg/utils/` Package +- Helper functions for file operations +- Path resolution and normalization +- Home directory detection +- Tool file discovery + +## Design Patterns + +### Dependency Injection +- Pass dependencies (logger, config) as parameters to constructors +- Use constructor functions (New*) for complex types +- Avoid global state except for the global logger +- Example: + ```go + func New(cfg Config, logger *common.Logger) *Server { + return &Server{ + config: cfg, + logger: logger, + } + } + ``` + +### Interface-Based Design +- Define interfaces for pluggable components (Runner, ModelProvider) +- Use interfaces to enable testing with mocks +- Keep interfaces small and focused (Interface Segregation Principle) +- Example: + ```go + type Runner interface { + Run(ctx context.Context, shell string, command string, env []string, params map[string]interface{}, tmpfile bool) (string, error) + CheckImplicitRequirements() error + } + ``` + +### Factory Pattern +- Use factory functions for creating handlers and runners +- Factory functions handle initialization and validation +- Example: + ```go + func NewCommandHandler(tool config.Tool, shell string, logger *common.Logger) (*CommandHandler, error) + ``` + +### Strategy Pattern +- Multiple runner implementations (ExecRunner, FirejailRunner, SandboxRunner, DockerRunner) +- Runner selection based on requirements and availability +- Fallback to default runner when specific runner unavailable + +### 
Builder Pattern +- Configuration structs with optional fields +- Use functional options for complex initialization when needed +- Example: + ```go + type Config struct { + ConfigFile string + Shell string + Logger *common.Logger + Version string + Descriptions []string + DescriptionFiles []string + DescriptionOverride bool + } + ``` + +## Architectural Principles + +### Separation of Concerns +- Clear separation between CLI, business logic, and infrastructure +- Each package has a single, well-defined responsibility +- Avoid circular dependencies between packages + +### Error Handling +- Errors are wrapped with context at each layer +- Use `fmt.Errorf` with `%w` for error wrapping +- Log errors at the point where they can be handled +- Return errors to callers for decision-making + +### Logging Strategy +- Structured logging with levels (Debug, Info, Warn, Error) +- Logger passed as dependency, not accessed globally (except via GetLogger) +- Debug logging for detailed diagnostics +- Info logging for important events +- Error logging for failures + +### Context Propagation +- Pass `context.Context` as first parameter for I/O operations +- Use context for cancellation and timeouts +- Respect context cancellation in long-running operations + +### Configuration Management +- YAML-based configuration files +- Support for multiple configuration files with merging +- Validation at load time +- Default values for optional settings + +## Security Architecture + +### Defense in Depth +- Multiple layers of security (constraints, runners, validation) +- Fail-safe defaults (deny by default) +- Explicit whitelisting over blacklisting + +### Constraint System +- CEL-based constraint evaluation +- Constraints compiled at startup for early error detection +- Constraint failures block command execution +- Detailed logging of constraint evaluation + +### Runner Isolation +- Sandboxed execution environments (firejail, sandbox-exec, docker) +- Minimal permissions by default +- Network isolation when possible +- Filesystem restrictions + +### Input Validation +- Type checking for all parameters +- Constraint validation before execution +- Template validation at load time +- Path normalization and validation + +## Testing Architecture + +### Test Organization +- Unit tests in same package as source code (`*_test.go`) +- Integration tests in `tests/` directory +- Shell scripts for E2E testing +- Test utilities in `tests/common/` + +### Test Patterns +- Table-driven tests for multiple scenarios +- Test logger that discards output +- Mock implementations of interfaces +- Separate test fixtures and data + +### Test Coverage +- Unit tests for business logic +- Integration tests for command execution +- E2E tests for full workflows +- Security tests for constraint validation + +## Extension Points + +### Adding New Runners +1. Implement the `Runner` interface +2. Add runner-specific options and requirements +3. Register runner in runner factory +4. Add tests for new runner +5. Document runner capabilities and limitations + +### Adding New Commands +1. Create command file in `cmd/` package +2. Define command structure with Cobra +3. Implement command logic +4. Add command to root command in `init()` +5. Add tests and documentation + +### Adding New Model Providers +1. Implement the `ModelProvider` interface +2. Add provider-specific configuration +3. Register provider in model factory +4. Add tests for provider integration +5. 
Document provider setup and usage + +## Performance Considerations + +### Constraint Compilation +- Constraints compiled once at startup +- Compiled constraints reused for all executions +- Reduces overhead for repeated tool calls + +### Template Caching +- Templates parsed once during handler creation +- Reused for all executions of the same tool +- Reduces parsing overhead + +### Concurrent Execution +- Tools can be executed concurrently +- Context-based cancellation for timeouts +- Proper cleanup of resources + +## Scalability Considerations + +### Multiple Configuration Files +- Support for loading multiple configuration files +- Configuration merging for combining tool sets +- Efficient tool registration and lookup + +### Large Tool Sets +- Efficient tool registration +- Fast tool lookup by name +- Minimal memory overhead per tool + +### Long-Running Operations +- Context-based timeouts +- Graceful cancellation +- Resource cleanup on timeout or cancellation diff --git a/.augment/rules/configuration.md b/.augment/rules/configuration.md new file mode 100644 index 0000000..33d5f85 --- /dev/null +++ b/.augment/rules/configuration.md @@ -0,0 +1,66 @@ +# Configuration Standards for MCPShell + +## YAML Structure +```yaml +mcp: + description: "What this tool collection does" + run: + shell: bash + tools: + - name: "tool_name" + description: "What the tool does" + run: + command: "echo {{ .param }}" + params: + param: + type: string + description: "Parameter description" + required: true +``` + +## Required Fields +- MCP server: `description` +- Each tool: `name`, `description`, `run.command` +- Each parameter: `description` + +## Tool Naming +- Lowercase with underscores: `disk_usage`, `file_reader` +- Descriptive and concise + +## Parameters +- Types: `string`, `number`, `integer`, `boolean` +- Mark as `required: true` or provide `default` values +- Write detailed descriptions for LLM understanding + +## Constraints +- **ALWAYS** include constraints for user input +- Add inline comments explaining each constraint +- Common patterns: command injection prevention, path traversal, length limits, whitelisting + +## Templates +- Go template syntax: `{{ .param_name }}` +- Quote variables: `"{{ .param }}"` +- Supports Sprig functions + +## Runners +- Order by preference (most restrictive first) +- Include fallback (usually `exec`) +- Disable networking when not needed: `allow_networking: false` +- Specify OS requirements for platform-specific runners + +## Environment Variables +- **ONLY** pass explicitly whitelisted variables +- Document why each is needed + +## Timeouts +- **ALWAYS** specify timeout for commands that may hang +- Format: `"30s"`, `"5m"`, `"1h30m"` + +## Validation +- Use `mcpshell validate --tools ` +- Run `make validate-examples` in CI/CD + +## Agent Mode + +For AI agent functionality (LLM connectivity, RAG support), see the +[Don](https://github.com/inercia/don) project which uses MCPShell's tool configuration. 
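As a concrete illustration of the configuration rules above, here is a minimal sketch of a tool definition that combines parameter typing, CEL constraints, and a timeout. The overall layout follows the YAML structure shown at the top of this file; the tool name, command, paths, and parameter names are illustrative, and the exact placement of the `constraints` and `timeout` keys within the tool entry is an assumption here — confirm field names against `docs/config.md` before relying on them.

```yaml
mcp:
  description: "Read-only file inspection tools"
  tools:
    - name: "file_head"
      description: "Show the first lines of a text file under /var/log"
      params:
        path:
          type: string
          description: "Path to the file to inspect"
          required: true
        lines:
          type: integer
          description: "Number of lines to show"
          default: 10
      # Placement of 'constraints' inside the tool entry is assumed; the CEL
      # expressions follow the patterns documented in the security rules.
      constraints:
        - "path.startsWith('/var/log/')"           # restrict to an allowed directory
        - "!path.contains('../')"                  # prevent path traversal
        - "path.size() > 0 && path.size() <= 256"  # length limit on the path
        - "lines >= 1 && lines <= 1000"            # sane bound on line count
      run:
        command: "head -n {{ .lines }} \"{{ .path }}\""
        # Key name assumed; the value format matches the documented "30s"/"5m" style.
        timeout: "30s"
```

In line with the runner guidance above, a read-only tool like this would typically also list its most restrictive runner first (with `allow_networking: false`) and a plain `exec` fallback; the runner key names are intentionally omitted here since they are defined in `docs/config.md`.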
diff --git a/.augment/rules/documentation.md b/.augment/rules/documentation.md new file mode 100644 index 0000000..7857aef --- /dev/null +++ b/.augment/rules/documentation.md @@ -0,0 +1,27 @@ +# Documentation Standards for MCPShell + +## Code Documentation +- **ALWAYS** include package-level documentation: `// Package .` +- Document all exported functions, types, and important fields +- Start comments with the name of what's being documented +- Use complete sentences with proper punctuation + +## Configuration Documentation +- Document all options in `docs/config.md` +- Document environment variables in `docs/config-env.md` +- Provide well-commented examples in `examples/` +- Include security rationale for constraints +- Show both simple and advanced patterns +- Cross-link related documentation (config, env vars, usage guides) + +## Security Documentation +- Maintain comprehensive `docs/security.md` +- Include prominent security warnings +- Explain risks of LLM command execution +- Provide secure configuration examples + +## Markdown Standards +- Use ATX-style headers (`#`, `##`, `###`) +- Specify language for code blocks (`yaml`, `go`, `bash`) +- Use descriptive link text +- Use relative links for internal docs diff --git a/.augment/rules/go.md b/.augment/rules/go.md new file mode 100644 index 0000000..8f84781 --- /dev/null +++ b/.augment/rules/go.md @@ -0,0 +1,28 @@ +# Go Coding Standards for MCPShell + +## Package Documentation +- **ALWAYS** include package-level documentation: `// Package .` +- Explain package purpose and responsibilities + +## Error Handling & Logging +- Wrap errors with `fmt.Errorf` and `%w`: `fmt.Errorf("failed to compile constraint '%s': %w", expr, err)` +- Error messages: lowercase, no punctuation +- **ALWAYS** use `common.Logger` (never `fmt.Println` or `log.Println`) +- Logger passed as parameter to functions +- Levels: Debug (diagnostics), Info (events), Warn (non-critical), Error (failures) + +## Panic Recovery +- Use `defer common.RecoverPanic()` at entry points and goroutines + +## Project-Specific Patterns +- YAML config with tags: `yaml:"field_name,omitempty"` +- Templates: Go `text/template` + Sprig functions, variables as `{{ .param_name }}` +- Context: First parameter for I/O operations, use `context.WithTimeout` +- Constructors: Provide `New*` functions for complex types +- Type assertions: Check success, handle failures gracefully + +## Code Quality +- Run `make format` before commits (runs `go fmt ./...` and `go mod tidy`) +- Pass `golangci-lint` checks +- Godoc-style comments for exported APIs +- Never manually edit `go.mod` diff --git a/.augment/rules/mcp-protocol.md b/.augment/rules/mcp-protocol.md new file mode 100644 index 0000000..d311ea2 --- /dev/null +++ b/.augment/rules/mcp-protocol.md @@ -0,0 +1,256 @@ +--- +type: "agent_requested" +description: "MCP: What is MCP? Protocol Communication, Server Implementation, Tool Registration, Request Handling" +--- + +# MCP Protocol Implementation Guidelines for MCPShell + +## MCP Protocol Overview + +### What is MCP? 
+- Model Context Protocol (MCP) is a standard protocol for connecting LLMs to external tools and data sources +- MCPShell implements the MCP server side, exposing command-line tools as MCP tools +- MCP clients (Cursor, VSCode, Claude Desktop) connect to MCPShell to access these tools + +### Protocol Communication +- MCPShell uses the `github.com/mark3labs/mcp-go` library for MCP protocol implementation +- Supports stdio transport (standard input/output) for communication +- JSON-RPC 2.0 message format for requests and responses + +## Server Implementation + +### Server Lifecycle +1. **Initialization**: Load configuration and create server instance +2. **Tool Registration**: Register all tools from configuration +3. **Server Start**: Begin listening for MCP requests +4. **Request Processing**: Handle tool calls and return results +5. **Shutdown**: Clean up resources and close connections + +### Server Creation +- Use `server.New()` to create a server instance with configuration +- Call `CreateServer()` to initialize the MCP server and register tools +- Call `Start()` to begin processing requests +- Example: + ```go + srv := server.New(server.Config{ + ConfigFile: configPath, + Logger: logger, + Version: version, + }) + + if err := srv.CreateServer(); err != nil { + return fmt.Errorf("failed to create server: %w", err) + } + + if err := srv.Start(); err != nil { + return fmt.Errorf("failed to start server: %w", err) + } + ``` + +## Tool Registration + +### Tool Definition +- Each tool is defined in YAML configuration +- Tools are converted to MCP tool format during registration +- Tool schema is generated from parameter definitions +- Example MCP tool structure: + ```go + mcp.Tool{ + Name: "tool_name", + Description: "Tool description", + InputSchema: mcp.ToolInputSchema{ + Type: "object", + Properties: map[string]interface{}{ + "param_name": map[string]interface{}{ + "type": "string", + "description": "Parameter description", + }, + }, + Required: []string{"param_name"}, + }, + } + ``` + +### Handler Registration +- Each tool has an associated handler function +- Handlers implement the `mcpserver.ToolHandlerFunc` signature +- Handlers are wrapped with panic recovery +- Example: + ```go + type ToolHandlerFunc func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) + ``` + +### Tool Validation +- Tools are validated during registration +- Constraint compilation happens at registration time +- Invalid tools are rejected with clear error messages +- Prerequisites (OS, executables) are checked before registration + +## Request Handling + +### Request Flow +1. MCP client sends `tools/call` request +2. Server routes request to appropriate tool handler +3. Handler validates parameters and constraints +4. Handler executes command via runner +5. Handler formats output and returns result +6. Server sends response back to client + +### Parameter Handling +- Parameters are extracted from `request.Params.Arguments` +- Type assertions are performed to ensure correct types +- Default values are applied for optional parameters +- Parameters are validated against constraints before execution + +### Error Handling +- Errors are returned as `mcp.CallToolResult` with error content +- Use `mcp.NewToolResultError()` for error results +- Use `mcp.NewToolResultText()` for success results +- Example: + ```go + if err != nil { + return mcp.NewToolResultError(err.Error()), nil + } + return mcp.NewToolResultText(output), nil + ``` + +## Tool Execution + +### Execution Flow +1. 
Extract and validate parameters +2. Apply default values for optional parameters +3. Evaluate constraints +4. Render command template with parameters +5. Select and configure runner +6. Execute command via runner +7. Format output with prefix if configured +8. Return result to client + +### Constraint Evaluation +- Constraints are evaluated before command execution +- All constraints must pass for execution to proceed +- Failed constraints block execution and return error +- Constraint failures are logged with details + +### Command Execution +- Commands are executed via runner implementations +- Runners provide isolation and security +- Timeouts are enforced via context +- Output is captured and returned to client + +## Response Formatting + +### Success Responses +- Return command output as text content +- Apply output prefix if configured +- Trim whitespace from output +- Example: + ```go + return mcp.NewToolResultText(output), nil + ``` + +### Error Responses +- Return error message as error content +- Include context about what failed +- Don't leak sensitive information in errors +- Example: + ```go + return mcp.NewToolResultError("command execution failed: invalid parameter"), nil + ``` + +## Agent Mode Integration + +For AI agent functionality that uses MCPShell tools, see the +[Don](https://github.com/inercia/don) project. Don spawns MCPShell as a +subprocess to execute MCP tools while handling LLM connectivity and +conversation management. + +## Protocol Extensions + +### Custom Descriptions +- Support for custom server descriptions via flags +- Descriptions can be loaded from files or URLs +- Multiple descriptions can be concatenated +- Descriptions help LLMs understand tool capabilities + +### Prompts Configuration +- Support for custom prompts in configuration +- Prompts provide additional context to LLMs +- Prompts are exposed via MCP protocol +- Example: + ```yaml + prompts: + - name: "example_prompt" + description: "Example prompt description" + arguments: + - name: "arg1" + description: "Argument description" + required: true + ``` + +## Best Practices + +### Tool Design +- Keep tools focused on single tasks +- Provide clear, descriptive tool names +- Write comprehensive tool descriptions +- Include examples in descriptions when helpful + +### Parameter Design +- Use descriptive parameter names +- Provide detailed parameter descriptions +- Set appropriate default values +- Mark required parameters explicitly + +### Error Messages +- Provide actionable error messages +- Include context about what failed +- Suggest how to fix the problem +- Don't leak sensitive information + +### Performance +- Compile constraints at registration time +- Parse templates once during handler creation +- Use context for timeouts and cancellation +- Clean up resources properly + +## Testing MCP Integration + +### Unit Testing +- Test tool registration logic +- Test parameter extraction and validation +- Test constraint evaluation +- Test error handling + +### Integration Testing +- Test full request/response cycle +- Test with actual MCP clients when possible +- Test error scenarios +- Test timeout handling + +### Manual Testing +- Use `mcpshell exe` command for direct tool testing +- Test with MCP clients (Cursor, VSCode) +- Verify tool descriptions are clear +- Test with various parameter combinations + +## Debugging MCP Issues + +### Logging +- Enable debug logging with `--log-level debug` +- Log all tool registrations +- Log all tool executions +- Log constraint evaluations + +### Common 
Issues +- **Tool not appearing in client**: Check tool registration logs +- **Parameter validation failing**: Check constraint definitions +- **Command execution failing**: Check runner configuration +- **Timeout errors**: Adjust timeout values in configuration + +### Troubleshooting Steps +1. Check server logs for errors +2. Verify configuration file syntax +3. Test tool directly with `mcpshell exe` +4. Verify MCP client configuration +5. Check network connectivity (if using HTTP transport) diff --git a/.augment/rules/security.md b/.augment/rules/security.md new file mode 100644 index 0000000..246d782 --- /dev/null +++ b/.augment/rules/security.md @@ -0,0 +1,56 @@ +# Security Rules for MCPShell + +## Core Principles +- **NEVER** allow arbitrary command execution without strict constraints +- **ALWAYS** prefer read-only operations +- **ALWAYS** validate inputs with CEL constraints +- **ALWAYS** use sandboxed runners when possible + +## Common Constraints + +### Command Injection Prevention +```yaml +constraints: + - "!param.contains(';')" # Prevent command chaining + - "!param.contains('&&')" # Prevent command chaining + - "!param.contains('|')" # Prevent piping + - "!param.contains('`')" # Prevent command substitution + - "!param.contains('$(')" # Prevent command substitution +``` + +### Path Traversal Prevention +```yaml +constraints: + - "!path.contains('../')" # Prevent directory traversal + - "path.startsWith('/allowed/directory/')" # Restrict to specific directory + - "path.matches('^[a-zA-Z0-9_\\-./]+$')" # Only safe characters +``` + +### Input Validation +```yaml +constraints: + - "param.size() > 0 && param.size() <= 1000" # Length limits + - "['ls', 'cat', 'echo'].exists(cmd, cmd == command)" # Command whitelist + - "['.txt', '.log', '.md'].exists(ext, filepath.endsWith(ext))" # File extensions +``` + +## Runner Security +- Use most restrictive runner available +- Disable networking: `allow_networking: false` +- Restrict filesystem access +- Specify OS requirements + +## Environment Variables +- **ONLY** pass explicitly whitelisted variables +- **NEVER** log sensitive data +- Document why each variable is needed + +## Security Checklist for New Tools +- [ ] All parameters have constraints +- [ ] Command injection blocked +- [ ] Path traversal prevented +- [ ] Input length limits enforced +- [ ] Appropriate runner selected +- [ ] Environment variables whitelisted +- [ ] Tool is read-only or justified +- [ ] Tested with malicious inputs diff --git a/.augment/rules/testing.md b/.augment/rules/testing.md new file mode 100644 index 0000000..5f6a2fd --- /dev/null +++ b/.augment/rules/testing.md @@ -0,0 +1,33 @@ +# Testing Standards for MCPShell + +## Test Organization +- Test files: `*_test.go` in same package as source +- Integration tests: `tests/` directory +- E2E tests: `tests/test_*.sh` shell scripts + +## Unit Testing +- Use table-driven tests for multiple scenarios +- Test logger: `var testLogger, _ = common.NewLogger("", "", common.LogLevelNone, false)` +- Test both success and failure cases +- **ALWAYS** test constraint validation logic +- **ALWAYS** test error handling paths +- **ALWAYS** test parameter type conversions +- **ALWAYS** test template rendering +- **ALWAYS** test runner selection + +## Integration Testing +- Shell scripts in `tests/` directory +- Use utilities from `tests/common/common.sh`: `info()`, `success()`, `fail()`, `skip()` +- Run with `make test-e2e` + +## Constraint Testing +- Test constraint compilation (valid/invalid expressions) +- Test constraint 
evaluation with various values +- **ALWAYS** test security constraints block malicious inputs +- Test command injection, path traversal, input limits + +## Test Execution +- Unit tests: `make test` +- Integration tests: `make test-e2e` +- Race detection: `go test -race ./...` +- Coverage: `go test -cover ./...` diff --git a/.gitignore b/.gitignore index 0cc43c1..1514b1e 100644 --- a/.gitignore +++ b/.gitignore @@ -31,6 +31,7 @@ coverage.txt *.swp *.swo *~ +*.code-workspace # Binary distribution files mcpshell-* diff --git a/LICENSE b/LICENSE index 0725f55..a7dad8e 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright (c) 2024 Alvaro +Copyright (c) 2025 Alvaro Saurin Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index 2607ddd..1b68891 100644 --- a/README.md +++ b/README.md @@ -121,20 +121,17 @@ Take a look at all the command in [this document](docs/usage.md). Configuration files use a YAML format defined [here](docs/config.md). See the [this directory](examples) for some examples. -For deploying MCPShell in containers and Kubernetes, see the [Container Deployment Guide](docs/mcp-containers.md). +For deploying MCPShell in containers and Kubernetes, see the [Container Deployment Guide](docs/usage-containers.md). ## Agent Mode -MCPShell can also be run in agent mode, providing direct connectivity between Large Language Models -(LLMs) and your command-line tools without requiring a separate MCP client. In this mode, -MCPShell connects to an OpenAI-compatible API (including local LLMs like Ollama), makes your -tools available to the model, executes requested tool operations, and manages the conversation flow. -This enables the creation of specialized AI assistants that can autonomously perform system tasks -using the tools you define in your configuration. The agent mode supports both interactive -conversations and one-shot executions, and allows you to define system and user prompts directly -in your configuration files. +For AI agent functionality that connects LLMs directly to tools, see the +[**Don**](https://github.com/inercia/don) project. Don provides: -For detailed information on using agent mode, see the [Agent Mode documentation](docs/usage-agent.md). 
+- Direct LLM connectivity without requiring a separate MCP client +- RAG (Retrieval-Augmented Generation) support +- Multi-agent architecture +- Uses MCPShell's tool configuration format ## Security Considerations diff --git a/cmd/agent.go b/cmd/agent.go deleted file mode 100644 index 0480262..0000000 --- a/cmd/agent.go +++ /dev/null @@ -1,362 +0,0 @@ -package root - -import ( - "bufio" - "context" - "fmt" - "io" - "os" - "os/signal" - "strings" - "sync" - "syscall" - "time" - - "github.com/fatih/color" - "github.com/spf13/cobra" - - "github.com/inercia/MCPShell/pkg/agent" - "github.com/inercia/MCPShell/pkg/common" - toolsConfig "github.com/inercia/MCPShell/pkg/config" -) - -// Cache the agent configuration to avoid duplicate resolution -var cachedAgentConfig agent.AgentConfig - -// processArgsWithStdin processes positional arguments and replaces "-" with STDIN content -// Returns the processed prompt and a boolean indicating if STDIN was used -func processArgsWithStdin(args []string) (string, bool, error) { - if len(args) == 0 { - return "", false, nil - } - - // Check if any argument is "-" (STDIN placeholder) - hasStdin := false - for _, arg := range args { - if arg == "-" { - hasStdin = true - break - } - } - - // If no STDIN placeholder, just join the arguments - if !hasStdin { - return strings.Join(args, " "), false, nil - } - - // Read STDIN content - stdinContent, err := io.ReadAll(os.Stdin) - if err != nil { - return "", false, fmt.Errorf("failed to read STDIN: %w", err) - } - - // Replace "-" with STDIN content in the arguments - processedArgs := make([]string, 0, len(args)) - for _, arg := range args { - if arg == "-" { - processedArgs = append(processedArgs, string(stdinContent)) - } else { - processedArgs = append(processedArgs, arg) - } - } - - return strings.Join(processedArgs, " "), true, nil -} - -// buildAgentConfig creates an AgentConfig by merging command-line flags with configuration file -func buildAgentConfig() (agent.AgentConfig, error) { - // Load configuration from file - config, err := agent.GetConfig() - if err != nil { - return agent.AgentConfig{}, fmt.Errorf("failed to load config: %w", err) - } - - // Start with default model from config file - var modelConfig agent.ModelConfig - if defaultModel := config.GetDefaultModel(); defaultModel != nil { - modelConfig = *defaultModel - } - - logger := common.GetLogger() - - // If --model flag not provided, check for environment variable - if agentModel == "" { - if envModel := os.Getenv("MCPSHELL_AGENT_MODEL"); envModel != "" { - agentModel = envModel - logger.Debug("Using model from MCPSHELL_AGENT_MODEL environment variable: %s", agentModel) - } - } - - // Override with command-line flags if provided - if agentModel != "" { - logger.Debug("Looking for model '%s' in agent config", agentModel) - - // Check if the specified model exists in config - if configModel := config.GetModelByName(agentModel); configModel != nil { - modelConfig = *configModel - logger.Info("Found model '%s' in config: model=%s, class=%s, name=%s", - agentModel, configModel.Model, configModel.Class, configModel.Name) - } else { - // Use command-line model name if not found in config - logger.Info("Model '%s' not found in config, using as direct model name", agentModel) - modelConfig.Model = agentModel - } - } - - // Merge system prompts from config file and command-line - if agentSystemPrompt != "" { - // Join system prompts from config with command-line system prompt - var allSystemPrompts []string - - // Add existing system prompts from config 
- if modelConfig.Prompts.HasSystemPrompts() { - allSystemPrompts = append(allSystemPrompts, modelConfig.Prompts.System...) - } - - // Add command-line system prompt - allSystemPrompts = append(allSystemPrompts, agentSystemPrompt) - - // Update the model config with merged prompts - modelConfig.Prompts.System = allSystemPrompts - } - - // Override API key and URL if provided - if agentOpenAIApiKey != "" { - modelConfig.APIKey = agentOpenAIApiKey - } - if agentOpenAIApiURL != "" { - modelConfig.APIURL = agentOpenAIApiURL - } - - // Handle environment variable substitution for API key - if strings.HasPrefix(modelConfig.APIKey, "${") && strings.HasSuffix(modelConfig.APIKey, "}") { - envVar := strings.TrimSuffix(strings.TrimPrefix(modelConfig.APIKey, "${"), "}") - modelConfig.APIKey = os.Getenv(envVar) - logger.Debug("Substituted API key from environment variable: %s", envVar) - } - - // Handle environment variable substitution for API URL - if strings.HasPrefix(modelConfig.APIURL, "${") && strings.HasSuffix(modelConfig.APIURL, "}") { - envVar := strings.TrimSuffix(strings.TrimPrefix(modelConfig.APIURL, "${"), "}") - modelConfig.APIURL = os.Getenv(envVar) - logger.Debug("Substituted API URL from environment variable: %s = %s", envVar, modelConfig.APIURL) - } - - // Resolve multiple config files into a single merged config file - if len(toolsFiles) == 0 { - return agent.AgentConfig{}, fmt.Errorf("tools configuration file(s) are required") - } - - localConfigPath, _, err := toolsConfig.ResolveMultipleConfigPaths(toolsFiles, logger) - if err != nil { - return agent.AgentConfig{}, fmt.Errorf("failed to resolve config paths: %w", err) - } - - return agent.AgentConfig{ - ToolsFile: localConfigPath, - UserPrompt: agentUserPrompt, - Once: agentOnce, - Version: version, - ModelConfig: modelConfig, - }, nil -} - -// agentCommand is a command that executes the MCPShell as an agent -var agentCommand = &cobra.Command{ - Use: "agent", - Short: "Execute the MCPShell as an agent", - Long: ` - -The agent command will execute the MCPShell as an agent, connecting to a remote LLM. - -Configuration is loaded from ~/.mcpshell/agent.yaml and can be overridden with command-line flags. -The configuration file should contain model definitions with their API keys and prompts. - -For example, you can do: - -$ mcpshell agent --tools=examples/config.yaml \ - --model "gpt-4o" \ - --system-prompt "You are a helpful assistant that debugs performance issues" \ - --user-prompt "I am having trouble with my computer. It is slow and I think it is due to the CPU usage." - -If a model is configured as default in the agent configuration file, you can omit the --model flag: - -You can provide initial user prompt as positional arguments: - -$ mcpshell agent I am having trouble with my computer. It is slow and I think it is due to the CPU usage. - -You can also use STDIN as part of the prompt by using '-' to represent it: - -$ cat failure.log | mcpshell agent --tools kubectl-ro.yaml \ - "I'm seeing this error in the Kubernetes logs" - "Please help me to debug this problem." - -When STDIN is used, the agent automatically runs in --once mode since STDIN is no longer available for interactive input. - -The agent will try to debug the issue with the given tools. 
-`, - Args: cobra.ArbitraryArgs, - PreRunE: func(cmd *cobra.Command, args []string) error { - // If --user-prompt is not provided but positional args exist, process them (including STDIN if "-" is present) - if agentUserPrompt == "" && len(args) > 0 { - processedPrompt, usedStdin, err := processArgsWithStdin(args) - if err != nil { - return fmt.Errorf("failed to process arguments: %w", err) - } - agentUserPrompt = processedPrompt - - // If STDIN was used, automatically enable --once mode since STDIN is no longer available for interactive input - if usedStdin && !agentOnce { - agentOnce = true - } - } - - // Initialize logger - logger, err := initLogger() - if err != nil { - return err - } - - // Build agent configuration (this will be cached for RunE) - cachedAgentConfig, err = buildAgentConfig() - if err != nil { - return err - } - - // Validate agent configuration - agentInstance := agent.New(cachedAgentConfig, logger) - if err := agentInstance.Validate(); err != nil { - return err - } - - return nil - }, - RunE: func(cmd *cobra.Command, args []string) error { - // Use the agentUserPrompt that was already set in PreRunE - // No need to process args again since PreRunE already handled STDIN if needed - - // Initialize logger - logger, err := initLogger() - if err != nil { - return err - } - - // Use cached agent configuration (built in PreRunE) - agentConfig := cachedAgentConfig - - // Create agent instance - agentInstance := agent.New(agentConfig, logger) - - // Create channels for user input and agent output - userInput := make(chan string) - agentOutput := make(chan string) - - ctx, cancel := context.WithCancel(context.Background()) - defer cancel() - - // Setup signal handling for graceful shutdown - signalChan := make(chan os.Signal, 1) - signal.Notify(signalChan, os.Interrupt, syscall.SIGTERM) - - var wg sync.WaitGroup - wg.Add(1) - go func() { - defer wg.Done() - select { - case <-signalChan: - logger.Info("Received interrupt signal, shutting down...") - cancel() - case <-ctx.Done(): - } - }() - - // Start a goroutine to read user input only when not in --once mode - if !agentConfig.Once { - wg.Add(1) - go func() { - defer wg.Done() - defer close(userInput) // Always close userInput when this goroutine exits - - scanner := bufio.NewScanner(os.Stdin) - inputChan := make(chan string) - - // Start a separate goroutine to read from stdin - go func() { - for scanner.Scan() { - inputChan <- scanner.Text() - } - close(inputChan) - }() - - for { - select { - case <-ctx.Done(): - return - case input, ok := <-inputChan: - if !ok { - return - } - select { - case userInput <- input: - case <-ctx.Done(): - return - } - } - } - }() - } - - // Start the agent - wg.Add(1) - go func() { - defer wg.Done() - if err := agentInstance.Run(ctx, userInput, agentOutput); err != nil { - // Don't log context cancellation as an error - it's an expected exit condition - if err != context.Canceled && err != context.DeadlineExceeded { - logger.Error(color.HiRedString("Agent encountered an error: %v", err)) - } - // Cancel context to abort all goroutines on fatal errors - cancel() - } - }() - - // Print agent output (using Print not Println to respect formatting from event handler) - for output := range agentOutput { - fmt.Print(output) - } - - // Wait for all goroutines with a timeout to prevent hanging - done := make(chan struct{}) - go func() { - wg.Wait() - close(done) - }() - - select { - case <-done: - // All goroutines finished normally - logger.Debug("All goroutines completed successfully") - case 
<-time.After(5 * time.Second): - // Force exit after timeout (agent already completed, this is just cleanup) - logger.Debug("Cleanup timeout reached, forcing shutdown (agent task already completed)") - } - - return nil - }, -} - -// init adds the agent command to the root command -func init() { - // Add agent command to root - rootCmd.AddCommand(agentCommand) - - // Add agent-specific flags as persistent flags so subcommands can use them - agentCommand.PersistentFlags().StringVarP(&agentModel, "model", "m", "", "LLM model to use (can also set MCPSHELL_AGENT_MODEL env var)") - agentCommand.PersistentFlags().StringVarP(&agentSystemPrompt, "system-prompt", "s", "", "System prompt for the LLM (optional, uses model-specific defaults if not provided)") - agentCommand.PersistentFlags().StringVarP(&agentUserPrompt, "user-prompt", "u", "", "Initial user prompt for the LLM") - agentCommand.PersistentFlags().StringVarP(&agentOpenAIApiKey, "openai-api-key", "k", "", "OpenAI API key (or set OPENAI_API_KEY environment variable)") - agentCommand.PersistentFlags().StringVarP(&agentOpenAIApiURL, "openai-api-url", "b", "", "Base URL for the OpenAI API (optional)") - agentCommand.PersistentFlags().BoolVarP(&agentOnce, "once", "o", false, "Exit after receiving a final response from the LLM (one-shot mode)") - - // Add config subcommand - agentCommand.AddCommand(agentConfigCommand) -} diff --git a/cmd/agent_config.go b/cmd/agent_config.go deleted file mode 100644 index 711d567..0000000 --- a/cmd/agent_config.go +++ /dev/null @@ -1,310 +0,0 @@ -package root - -import ( - "encoding/json" - "fmt" - "os" - "path/filepath" - - "github.com/spf13/cobra" - - "github.com/inercia/MCPShell/pkg/agent" - "github.com/inercia/MCPShell/pkg/utils" -) - -var ( - agentConfigShowJSON bool -) - -// agentConfigCommand is the parent command for agent configuration subcommands -var agentConfigCommand = &cobra.Command{ - Use: "config", - Short: "Manage agent configuration", - Long: ` - -The config command provides subcommands to manage agent configuration files. - -Available subcommands: -- create: Create a default agent configuration file -- show: Display the current agent configuration -`, -} - -// agentConfigCreateCommand creates a default agent configuration file -var agentConfigCreateCommand = &cobra.Command{ - Use: "create", - Short: "Create a default agent configuration file", - Long: ` - -Creates a default agent configuration file at ~/.mcpshell/agent.yaml. - -If the file already exists, it will be overwritten with the default configuration. -The default configuration includes sample models and prompts that you can customize. 
- -Example: -$ mcpshell agent config create -`, - Args: cobra.NoArgs, - RunE: func(cmd *cobra.Command, args []string) error { - logger, err := initLogger() - if err != nil { - return err - } - - // Create the default config file - if err := agent.CreateDefaultConfigForce(); err != nil { - logger.Error("Failed to create default config: %v", err) - return fmt.Errorf("failed to create default config: %w", err) - } - - mcpShellHome, err := utils.GetMCPShellHome() - if err != nil { - return fmt.Errorf("failed to get MCPShell home directory: %w", err) - } - - configPath := filepath.Join(mcpShellHome, "agent.yaml") - fmt.Printf("Default agent configuration created at: %s\n", configPath) - fmt.Println("You can now edit this file to customize your agent settings.") - - return nil - }, -} - -// ConfigShowOutput holds the JSON output structure for config show -type ConfigShowOutput struct { - ConfigurationFile string `json:"configuration_file"` - Models []ConfigShowModelInfo `json:"models"` - DefaultModel *ConfigShowModelInfo `json:"default_model,omitempty"` - Orchestrator *ConfigShowModelInfo `json:"orchestrator,omitempty"` - ToolRunner *ConfigShowModelInfo `json:"tool_runner,omitempty"` -} - -// ConfigShowModelInfo holds model info for JSON output -type ConfigShowModelInfo struct { - Name string `json:"name"` - Model string `json:"model"` - Class string `json:"class"` - Default bool `json:"default"` - APIKey string `json:"api_key_masked,omitempty"` - APIURL string `json:"api_url,omitempty"` - SystemPrompts []string `json:"system_prompts,omitempty"` -} - -// agentConfigShowCommand displays the current agent configuration -var agentConfigShowCommand = &cobra.Command{ - Use: "show", - Short: "Display the current agent configuration", - Long: ` - -Displays the current agent configuration in a pretty-printed format. - -The configuration is loaded from ~/.mcpshell/agent.yaml and parsed to show -the available models, their settings, and which model is set as default. - -Use --json flag to output in JSON format for easy parsing by other tools. 
- -Examples: -$ mcpshell agent config show -$ mcpshell agent config show --json -`, - Args: cobra.NoArgs, - RunE: func(cmd *cobra.Command, args []string) error { - logger, err := initLogger() - if err != nil { - return err - } - - // Get the config file path - mcpShellHome, err := utils.GetMCPShellHome() - if err != nil { - return fmt.Errorf("failed to get MCPShell home directory: %w", err) - } - configPath := filepath.Join(mcpShellHome, "agent.yaml") - - // Load the current configuration - config, err := agent.GetConfig() - if err != nil { - logger.Error("Failed to load config: %v", err) - return fmt.Errorf("failed to load config: %w", err) - } - - // Check if config is empty - if len(config.Agent.Models) == 0 { - if agentConfigShowJSON { - output := ConfigShowOutput{ - ConfigurationFile: configPath, - Models: []ConfigShowModelInfo{}, - } - encoder := json.NewEncoder(os.Stdout) - encoder.SetIndent("", " ") - return encoder.Encode(output) - } - - fmt.Printf("Configuration file: %s\n", configPath) - fmt.Println() - fmt.Println("No agent configuration found.") - fmt.Println("Run 'mcpshell agent config create' to create a default configuration.") - return nil - } - - // Output in JSON format if requested - if agentConfigShowJSON { - return outputConfigShowJSON(configPath, config) - } - - // Pretty print the configuration - fmt.Printf("Configuration file: %s\n", configPath) - fmt.Println() - fmt.Println("Agent Configuration:") - fmt.Println("===================") - fmt.Println() - - for i, model := range config.Agent.Models { - fmt.Printf("Model %d:\n", i+1) - fmt.Printf(" Name: %s\n", model.Name) - fmt.Printf(" Model: %s\n", model.Model) - fmt.Printf(" Class: %s\n", model.Class) - fmt.Printf(" Default: %t\n", model.Default) - - if model.APIKey != "" { - if model.APIKey == "${OPENAI_API_KEY}" { - fmt.Printf(" API Key: %s (from environment)\n", model.APIKey) - } else { - fmt.Printf(" API Key: %s\n", maskAPIKey(model.APIKey)) - } - } - - if model.APIURL != "" { - fmt.Printf(" API URL: %s\n", model.APIURL) - } - - // Display prompts information - if model.Prompts.HasSystemPrompts() { - systemPrompts := model.Prompts.GetSystemPrompts() - fmt.Printf(" System Prompts: %s\n", truncateString(systemPrompts, 80)) - } - - fmt.Println() - } - - // Show which model is default - defaultModel := config.GetDefaultModel() - if defaultModel != nil { - fmt.Printf("Default Model: %s (%s)\n", defaultModel.Name, defaultModel.Model) - } else { - fmt.Println("No default model configured.") - } - - return nil - }, -} - -// outputConfigShowJSON outputs the configuration in JSON format -func outputConfigShowJSON(configPath string, config *agent.Config) error { - output := ConfigShowOutput{ - ConfigurationFile: configPath, - Models: make([]ConfigShowModelInfo, 0, len(config.Agent.Models)), - } - - // Add all models - for _, model := range config.Agent.Models { - modelInfo := ConfigShowModelInfo{ - Name: model.Name, - Model: model.Model, - Class: model.Class, - Default: model.Default, - APIURL: model.APIURL, - } - - if model.APIKey != "" { - modelInfo.APIKey = maskAPIKey(model.APIKey) - } - - if model.Prompts.HasSystemPrompts() { - modelInfo.SystemPrompts = model.Prompts.System - } - - output.Models = append(output.Models, modelInfo) - } - - // Add default model - if defaultModel := config.GetDefaultModel(); defaultModel != nil { - modelInfo := ConfigShowModelInfo{ - Name: defaultModel.Name, - Model: defaultModel.Model, - Class: defaultModel.Class, - Default: defaultModel.Default, - APIURL: defaultModel.APIURL, - } - if 
defaultModel.APIKey != "" { - modelInfo.APIKey = maskAPIKey(defaultModel.APIKey) - } - if defaultModel.Prompts.HasSystemPrompts() { - modelInfo.SystemPrompts = defaultModel.Prompts.System - } - output.DefaultModel = &modelInfo - } - - // Add orchestrator model if defined - if orchestrator := config.GetOrchestratorModel(); orchestrator != nil { - modelInfo := ConfigShowModelInfo{ - Name: orchestrator.Name, - Model: orchestrator.Model, - Class: orchestrator.Class, - APIURL: orchestrator.APIURL, - } - if orchestrator.APIKey != "" { - modelInfo.APIKey = maskAPIKey(orchestrator.APIKey) - } - if orchestrator.Prompts.HasSystemPrompts() { - modelInfo.SystemPrompts = orchestrator.Prompts.System - } - output.Orchestrator = &modelInfo - } - - // Add tool runner model if defined - if toolRunner := config.GetToolRunnerModel(); toolRunner != nil { - modelInfo := ConfigShowModelInfo{ - Name: toolRunner.Name, - Model: toolRunner.Model, - Class: toolRunner.Class, - APIURL: toolRunner.APIURL, - } - if toolRunner.APIKey != "" { - modelInfo.APIKey = maskAPIKey(toolRunner.APIKey) - } - if toolRunner.Prompts.HasSystemPrompts() { - modelInfo.SystemPrompts = toolRunner.Prompts.System - } - output.ToolRunner = &modelInfo - } - - encoder := json.NewEncoder(os.Stdout) - encoder.SetIndent("", " ") - return encoder.Encode(output) -} - -// Helper function to mask API keys for security -func maskAPIKey(key string) string { - if len(key) <= 8 { - return "****" - } - return key[:4] + "****" + key[len(key)-4:] -} - -// Helper function to truncate long strings -func truncateString(s string, maxLen int) string { - if len(s) <= maxLen { - return s - } - return s[:maxLen-3] + "..." -} - -func init() { - // Add create and show subcommands to agent config - agentConfigCommand.AddCommand(agentConfigCreateCommand) - agentConfigCommand.AddCommand(agentConfigShowCommand) - - // Add flags to show command - agentConfigShowCommand.Flags().BoolVar(&agentConfigShowJSON, "json", false, "Output in JSON format") -} diff --git a/cmd/agent_info.go b/cmd/agent_info.go deleted file mode 100644 index b5164dc..0000000 --- a/cmd/agent_info.go +++ /dev/null @@ -1,426 +0,0 @@ -package root - -import ( - "context" - "encoding/json" - "fmt" - "os" - "path/filepath" - "strings" - "time" - - "github.com/fatih/color" - "github.com/sashabaranov/go-openai" - "github.com/spf13/cobra" - - "github.com/inercia/MCPShell/pkg/agent" - "github.com/inercia/MCPShell/pkg/common" - toolsConfig "github.com/inercia/MCPShell/pkg/config" - "github.com/inercia/MCPShell/pkg/utils" -) - -var ( - agentInfoJSON bool - agentInfoIncludePrompts bool - agentInfoCheck bool -) - -// agentInfoCommand displays information about the agent configuration -var agentInfoCommand = &cobra.Command{ - Use: "info", - Short: "Display agent configuration information", - Long: ` -Display information about the agent configuration including: -- LLM model details -- API configuration -- System prompts (with --include-prompts) -- LLM connectivity status (with --check) - -The configuration is loaded from ~/.mcpshell/agent.yaml and merged with -command-line flags (if provided). - -The --tools flag is optional for this command. It's only needed if you want -to verify the full agent configuration including tools setup. 
- -Examples: -$ mcpshell agent info -$ mcpshell agent info --json -$ mcpshell agent info --include-prompts -$ mcpshell agent info --check -$ mcpshell agent info --model gpt-4o --json -$ mcpshell agent info --tools examples/config.yaml -`, - Args: cobra.NoArgs, - PreRunE: func(cmd *cobra.Command, args []string) error { - // Initialize logger - logger, err := initLogger() - if err != nil { - return err - } - - // Tools are optional for agent info - we only need them for actual agent execution - logger.Debug("Agent info command initialized") - return nil - }, - RunE: func(cmd *cobra.Command, args []string) error { - logger, err := initLogger() - if err != nil { - return err - } - - // Build agent configuration (tools are optional for info command) - agentConfig, err := buildAgentConfigForInfo() - if err != nil { - return fmt.Errorf("failed to build agent config: %w", err) - } - - // Use the model config that was built - it already has the correct model - // based on: --model flag > MCPSHELL_AGENT_MODEL env var > default from config - orchestratorConfig := agentConfig.ModelConfig - toolRunnerConfig := agentConfig.ModelConfig - - // Check LLM connectivity if requested - var checkResult *CheckResult - if agentInfoCheck { - checkResult = checkLLMConnectivity(orchestratorConfig, logger) - } - - // Output in JSON format if requested - if agentInfoJSON { - err := outputJSON(agentConfig, orchestratorConfig, toolRunnerConfig, checkResult) - if err != nil { - return err - } - // If check was performed and failed, exit with error - if checkResult != nil && !checkResult.Success { - return fmt.Errorf("LLM connectivity check failed: %s", checkResult.Error) - } - return nil - } - - // Output in human-readable format - return outputHumanReadable(agentConfig, orchestratorConfig, toolRunnerConfig, checkResult) - }, -} - -// CheckResult holds the result of an LLM connectivity check -type CheckResult struct { - Success bool `json:"success"` - ResponseTime float64 `json:"response_time_ms"` - Error string `json:"error,omitempty"` - Model string `json:"model"` -} - -// InfoOutput holds the complete info output structure for JSON -type InfoOutput struct { - ConfigFile string `json:"config_file,omitempty"` - ToolsFile string `json:"tools_file,omitempty"` - Once bool `json:"once_mode"` - Orchestrator ModelInfo `json:"orchestrator"` - ToolRunner ModelInfo `json:"tool_runner"` - Check *CheckResult `json:"check,omitempty"` - Prompts *PromptsInfo `json:"prompts,omitempty"` -} - -// ModelInfo holds model configuration details for JSON output -type ModelInfo struct { - Model string `json:"model"` - Class string `json:"class"` - Name string `json:"name,omitempty"` - APIURL string `json:"api_url,omitempty"` - APIKey string `json:"api_key_masked,omitempty"` -} - -// PromptsInfo holds prompt information for JSON output -type PromptsInfo struct { - System []string `json:"system,omitempty"` - User string `json:"user,omitempty"` -} - -// buildAgentConfigForInfo creates an AgentConfig for the info command -// Unlike buildAgentConfig, this doesn't require tools files -func buildAgentConfigForInfo() (agent.AgentConfig, error) { - // Load configuration from file - config, err := agent.GetConfig() - if err != nil { - return agent.AgentConfig{}, fmt.Errorf("failed to load config: %w", err) - } - - // Start with default model from config file - var modelConfig agent.ModelConfig - if defaultModel := config.GetDefaultModel(); defaultModel != nil { - modelConfig = *defaultModel - } - - logger := common.GetLogger() - - // If --model flag not 
provided, check for environment variable
-	if agentModel == "" {
-		if envModel := os.Getenv("MCPSHELL_AGENT_MODEL"); envModel != "" {
-			agentModel = envModel
-			logger.Debug("Using model from MCPSHELL_AGENT_MODEL environment variable: %s", agentModel)
-		}
-	}
-
-	// Override with command-line flags if provided
-	if agentModel != "" {
-		logger.Debug("Looking for model '%s' in agent config", agentModel)
-
-		// Check if the specified model exists in config
-		if configModel := config.GetModelByName(agentModel); configModel != nil {
-			modelConfig = *configModel
-			logger.Info("Found model '%s' in config: model=%s, class=%s, name=%s",
-				agentModel, configModel.Model, configModel.Class, configModel.Name)
-		} else {
-			// Use command-line model name if not found in config
-			logger.Info("Model '%s' not found in config, using as direct model name", agentModel)
-			modelConfig.Model = agentModel
-		}
-	}
-
-	// Merge system prompts from config file and command-line
-	if agentSystemPrompt != "" {
-		// Join system prompts from config with command-line system prompt
-		var allSystemPrompts []string
-
-		// Add existing system prompts from config
-		if modelConfig.Prompts.HasSystemPrompts() {
-			allSystemPrompts = append(allSystemPrompts, modelConfig.Prompts.System...)
-		}
-
-		// Add command-line system prompt
-		allSystemPrompts = append(allSystemPrompts, agentSystemPrompt)
-
-		// Update the model config with merged prompts
-		modelConfig.Prompts.System = allSystemPrompts
-	}
-
-	// Override API key and URL if provided
-	if agentOpenAIApiKey != "" {
-		modelConfig.APIKey = agentOpenAIApiKey
-	}
-	if agentOpenAIApiURL != "" {
-		modelConfig.APIURL = agentOpenAIApiURL
-	}
-
-	// Handle environment variable substitution for API key
-	if strings.HasPrefix(modelConfig.APIKey, "${") && strings.HasSuffix(modelConfig.APIKey, "}") {
-		envVar := strings.TrimSuffix(strings.TrimPrefix(modelConfig.APIKey, "${"), "}")
-		modelConfig.APIKey = os.Getenv(envVar)
-		logger.Debug("Substituted API key from environment variable: %s", envVar)
-	}
-
-	// Handle environment variable substitution for API URL
-	if strings.HasPrefix(modelConfig.APIURL, "${") && strings.HasSuffix(modelConfig.APIURL, "}") {
-		envVar := strings.TrimSuffix(strings.TrimPrefix(modelConfig.APIURL, "${"), "}")
-		modelConfig.APIURL = os.Getenv(envVar)
-		logger.Debug("Substituted API URL from environment variable: %s = %s", envVar, modelConfig.APIURL)
-	}
-
-	// Tools file is optional for info command
-	toolsFile := ""
-	if len(toolsFiles) > 0 {
-		// Resolve tools configuration if provided
-		localConfigPath, _, err := toolsConfig.ResolveMultipleConfigPaths(toolsFiles, logger)
-		if err != nil {
-			return agent.AgentConfig{}, fmt.Errorf("failed to resolve config paths: %w", err)
-		}
-		toolsFile = localConfigPath
-	}
-
-	return agent.AgentConfig{
-		ToolsFile:   toolsFile,
-		UserPrompt:  agentUserPrompt,
-		Once:        agentOnce,
-		Version:     version,
-		ModelConfig: modelConfig,
-	}, nil
-}
-
-// checkLLMConnectivity tests if the LLM is responding
-func checkLLMConnectivity(modelConfig agent.ModelConfig, logger *common.Logger) *CheckResult {
-	result := &CheckResult{
-		Model: modelConfig.Model,
-	}
-
-	logger.Info("Testing LLM connectivity for model: %s", modelConfig.Model)
-
-	// Initialize the model client
-	client, err := agent.InitializeModelClient(modelConfig, logger)
-	if err != nil {
-		result.Success = false
-		result.Error = fmt.Sprintf("Failed to initialize client: %v", err)
-		return result
-	}
-
-	// Make a simple test request
-	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
-	defer cancel()
-
-	startTime := time.Now()
-
-	req := openai.ChatCompletionRequest{
-		Model: modelConfig.Model,
-		Messages: []openai.ChatCompletionMessage{
-			{
-				Role:    openai.ChatMessageRoleUser,
-				Content: "Respond with just the word 'OK'",
-			},
-		},
-		MaxTokens: 10,
-	}
-
-	_, err = client.CreateChatCompletion(ctx, req)
-	elapsed := time.Since(startTime)
-
-	if err != nil {
-		result.Success = false
-		result.Error = fmt.Sprintf("LLM request failed: %v", err)
-		logger.Error("LLM connectivity check failed: %v", err)
-		return result
-	}
-
-	result.Success = true
-	result.ResponseTime = float64(elapsed.Milliseconds())
-	logger.Info("LLM connectivity check successful (%.0fms)", result.ResponseTime)
-
-	return result
-}
-
-// outputJSON outputs the configuration in JSON format
-func outputJSON(agentConfig agent.AgentConfig, orchestrator, toolRunner agent.ModelConfig, check *CheckResult) error {
-	// Get agent config file path
-	var configFile string
-	if mcpShellHome, err := utils.GetMCPShellHome(); err == nil {
-		configFile = filepath.Join(mcpShellHome, "agent.yaml")
-	}
-
-	output := InfoOutput{
-		ConfigFile: configFile,
-		ToolsFile:  agentConfig.ToolsFile,
-		Once:       agentConfig.Once,
-		Orchestrator: ModelInfo{
-			Model:  orchestrator.Model,
-			Class:  orchestrator.Class,
-			Name:   orchestrator.Name,
-			APIURL: orchestrator.APIURL,
-			APIKey: maskAPIKey(orchestrator.APIKey),
-		},
-		ToolRunner: ModelInfo{
-			Model:  toolRunner.Model,
-			Class:  toolRunner.Class,
-			Name:   toolRunner.Name,
-			APIURL: toolRunner.APIURL,
-			APIKey: maskAPIKey(toolRunner.APIKey),
-		},
-		Check: check,
-	}
-
-	// Include prompts if requested
-	if agentInfoIncludePrompts {
-		output.Prompts = &PromptsInfo{
-			System: orchestrator.Prompts.System,
-			User:   agentConfig.UserPrompt,
-		}
-	}
-
-	encoder := json.NewEncoder(os.Stdout)
-	encoder.SetIndent("", "  ")
-	return encoder.Encode(output)
-}
-
-// outputHumanReadable outputs the configuration in human-readable format
-func outputHumanReadable(agentConfig agent.AgentConfig, orchestrator, toolRunner agent.ModelConfig, check *CheckResult) error {
-	fmt.Println(color.HiCyanString("Agent Configuration"))
-	fmt.Println(strings.Repeat("=", 50))
-	fmt.Println()
-
-	// Show agent config file location
-	mcpShellHome, err := utils.GetMCPShellHome()
-	if err == nil {
-		agentConfigPath := filepath.Join(mcpShellHome, "agent.yaml")
-		fmt.Printf("Config File: %s\n", agentConfigPath)
-	}
-
-	// General settings
-	if agentConfig.ToolsFile != "" {
-		fmt.Printf("Tools File: %s\n", agentConfig.ToolsFile)
-	}
-	fmt.Printf("Once Mode: %t\n", agentConfig.Once)
-	fmt.Println()
-
-	// Orchestrator model
-	fmt.Println(color.HiYellowString("Orchestrator Model:"))
-	fmt.Printf("  Model: %s\n", orchestrator.Model)
-	if orchestrator.Name != "" {
-		fmt.Printf("  Name: %s\n", orchestrator.Name)
-	}
-	fmt.Printf("  Class: %s\n", orchestrator.Class)
-	if orchestrator.APIURL != "" {
-		fmt.Printf("  API URL: %s\n", orchestrator.APIURL)
-	}
-	if orchestrator.APIKey != "" {
-		fmt.Printf("  API Key: %s\n", maskAPIKey(orchestrator.APIKey))
-	}
-	fmt.Println()
-
-	// Tool-runner model (only if different from orchestrator)
-	if toolRunner.Model != orchestrator.Model || toolRunner.Class != orchestrator.Class {
-		fmt.Println(color.HiYellowString("Tool-Runner Model:"))
-		fmt.Printf("  Model: %s\n", toolRunner.Model)
-		if toolRunner.Name != "" {
-			fmt.Printf("  Name: %s\n", toolRunner.Name)
-		}
-		fmt.Printf("  Class: %s\n", toolRunner.Class)
-		if toolRunner.APIURL != "" {
-			fmt.Printf("  API URL: %s\n", toolRunner.APIURL)
-		}
-		if toolRunner.APIKey != "" {
-			fmt.Printf("  API Key: %s\n", maskAPIKey(toolRunner.APIKey))
-		}
-		fmt.Println()
-	}
-
-	// Prompts (if requested)
-	if agentInfoIncludePrompts {
-		fmt.Println(color.HiYellowString("Prompts:"))
-		if orchestrator.Prompts.HasSystemPrompts() {
-			fmt.Println(color.CyanString("  System Prompts:"))
-			for i, prompt := range orchestrator.Prompts.System {
-				fmt.Printf("    %d. %s\n", i+1, truncateString(prompt, 120))
-			}
-		} else {
-			fmt.Println("  System Prompts: (none)")
-		}
-		if agentConfig.UserPrompt != "" {
-			fmt.Printf("  User Prompt: %s\n", truncateString(agentConfig.UserPrompt, 120))
-		}
-		fmt.Println()
-	}
-
-	// Check result (if performed)
-	if check != nil {
-		fmt.Println(color.HiYellowString("LLM Connectivity Check:"))
-		if check.Success {
-			fmt.Printf("  Status: %s\n", color.HiGreenString("✓ Connected"))
-			fmt.Printf("  Response: %.0fms\n", check.ResponseTime)
-		} else {
-			fmt.Printf("  Status: %s\n", color.HiRedString("✗ Failed"))
-			fmt.Printf("  Error: %s\n", check.Error)
-			return fmt.Errorf("LLM connectivity check failed: %s", check.Error)
-		}
-		fmt.Println()
-	}
-
-	return nil
-}
-
-func init() {
-	// Add info subcommand to agent command
-	agentCommand.AddCommand(agentInfoCommand)
-
-	// Add info-specific flags
-	agentInfoCommand.Flags().BoolVar(&agentInfoJSON, "json", false, "Output in JSON format (for easy parsing)")
-	agentInfoCommand.Flags().BoolVar(&agentInfoIncludePrompts, "include-prompts", false, "Include full prompts in the output")
-	agentInfoCommand.Flags().BoolVar(&agentInfoCheck, "check", false, "Check LLM connectivity (exits with error if LLM is not responding)")
-}
diff --git a/cmd/completion.go b/cmd/completion.go
new file mode 100644
index 0000000..99c1977
--- /dev/null
+++ b/cmd/completion.go
@@ -0,0 +1,148 @@
+package root
+
+import (
+	"os"
+	"path/filepath"
+	"strings"
+
+	"github.com/spf13/cobra"
+
+	"github.com/inercia/MCPShell/pkg/utils"
+)
+
+// completionCmd represents the completion command
+var completionCmd = &cobra.Command{
+	Use:   "completion [bash|zsh|fish|powershell]",
+	Short: "Generate shell completion scripts",
+	Long: `Generate shell completion scripts for MCPShell.
+
+To load completions:
+
+Bash:
+  $ source <(mcpshell completion bash)
+
+  # To load completions for each session, execute once:
+  # Linux:
+  $ mcpshell completion bash > /etc/bash_completion.d/mcpshell
+  # macOS:
+  $ mcpshell completion bash > $(brew --prefix)/etc/bash_completion.d/mcpshell
+
+Zsh:
+  # If shell completion is not already enabled in your environment,
+  # you will need to enable it. You can execute the following once:
+  $ echo "autoload -U compinit; compinit" >> ~/.zshrc
+
+  # To load completions for each session, execute once:
+  $ mcpshell completion zsh > "${fpath[1]}/_mcpshell"
+
+  # You will need to start a new shell for this setup to take effect.
+
+Fish:
+  $ mcpshell completion fish | source
+
+  # To load completions for each session, execute once:
+  $ mcpshell completion fish > ~/.config/fish/completions/mcpshell.fish
+
+PowerShell:
+  PS> mcpshell completion powershell | Out-String | Invoke-Expression
+
+  # To load completions for every new session, run:
+  PS> mcpshell completion powershell > mcpshell.ps1
+  # and source this file from your PowerShell profile.
+`,
+	DisableFlagsInUseLine: true,
+	ValidArgs:             []string{"bash", "zsh", "fish", "powershell"},
+	Args:                  cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs),
+	Run: func(cmd *cobra.Command, args []string) {
+		switch args[0] {
+		case "bash":
+			_ = cmd.Root().GenBashCompletion(os.Stdout)
+		case "zsh":
+			_ = cmd.Root().GenZshCompletion(os.Stdout)
+		case "fish":
+			_ = cmd.Root().GenFishCompletion(os.Stdout, true)
+		case "powershell":
+			_ = cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
+		}
+	},
+}
+
+// listToolsFiles returns a list of available tools files for completion
+func listToolsFiles() []string {
+	var completions []string
+
+	// Get tools from the tools directory
+	toolsDir, err := utils.GetMCPShellToolsDir()
+	if err == nil {
+		entries, err := os.ReadDir(toolsDir)
+		if err == nil {
+			for _, entry := range entries {
+				if entry.IsDir() {
+					continue
+				}
+				name := entry.Name()
+				if strings.HasSuffix(name, ".yaml") || strings.HasSuffix(name, ".yml") {
+					// Add both with and without extension
+					completions = append(completions, name)
+					completions = append(completions, strings.TrimSuffix(strings.TrimSuffix(name, ".yaml"), ".yml"))
+				}
+			}
+		}
+	}
+
+	// Get tools from current directory
+	cwd, err := os.Getwd()
+	if err == nil {
+		entries, err := os.ReadDir(cwd)
+		if err == nil {
+			for _, entry := range entries {
+				if entry.IsDir() {
+					continue
+				}
+				name := entry.Name()
+				if strings.HasSuffix(name, ".yaml") || strings.HasSuffix(name, ".yml") {
+					// Offer any YAML file in the current directory as a candidate tools file
+					completions = append(completions, name)
+				}
+			}
+		}
+	}
+
+	// Remove duplicates
+	seen := make(map[string]bool)
+	unique := make([]string, 0, len(completions))
+	for _, c := range completions {
+		if !seen[c] {
+			seen[c] = true
+			unique = append(unique, c)
+		}
+	}
+
+	return unique
+}
+
+// toolsFileCompletion provides completion for the --tools flag
+func toolsFileCompletion(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
+	completions := listToolsFiles()
+
+	// Filter by prefix if the user has typed something
+	if toComplete != "" {
+		filtered := make([]string, 0)
+		for _, c := range completions {
+			if strings.HasPrefix(c, toComplete) || strings.HasPrefix(filepath.Base(c), toComplete) {
+				filtered = append(filtered, c)
+			}
+		}
+		completions = filtered
+	}
+
+	// Also allow file completion for arbitrary paths
+	return completions, cobra.ShellCompDirectiveDefault
+}
+
+func init() {
+	rootCmd.AddCommand(completionCmd)
+
+	// Register completion function for the --tools flag
+	_ = rootCmd.RegisterFlagCompletionFunc("tools", toolsFileCompletion)
+}
diff --git a/cmd/root.go b/cmd/root.go
index 1f4870d..86aa143 100644
--- a/cmd/root.go
+++ b/cmd/root.go
@@ -28,14 +28,6 @@ var (
 	descriptionFile     []string
 	descriptionOverride bool
 
-	// Agent-specific flags
-	agentModel        string
-	agentSystemPrompt string
-	agentUserPrompt   string
-	agentOpenAIApiKey string
-	agentOpenAIApiURL string
-	agentOnce         bool
-
 	// Application version information (set via SetVersion from main)
 	version = "dev"
 	commit  = "none"
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..ba5eeb3
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,84 @@
+# MCPShell Documentation
+
+This directory contains comprehensive documentation for MCPShell.
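+
+For the impatient, a typical session serves a tools file over MCP (the file name below
+is illustrative; point `--tools` at any tools YAML you have):
+
+```bash
+# Serve the tools defined in a YAML file over the Model Context Protocol
+mcpshell mcp --tools=examples/config.yaml
+```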
+
+## Getting Started
+
+- [Usage Guide](usage.md) - Command-line usage and basic concepts
+- [Configuration](config.md) - YAML configuration file format
+- [Security Considerations](security.md) - Security best practices and guidelines
+
+## Usage Guides
+
+### MCP Client Integration
+
+- [Cursor Integration](usage-cursor.md) - Using MCPShell with Cursor IDE
+- [VS Code Integration](usage-vscode.md) - Using MCPShell with Visual Studio Code
+- [Claude Desktop Integration](usage-claude-desktop.md) - Using MCPShell with Claude
+  Desktop
+- [Codex CLI Integration](usage-codex-cli.md) - Using MCPShell with Codex CLI
+
+### Agent Mode
+
+For AI agent functionality (direct LLM connectivity, RAG support), see the
+[Don](https://github.com/inercia/don) project, which uses MCPShell's tool configuration.
+
+### Deployment
+
+- [Container Deployment](usage-containers.md) - Deploying MCPShell in containers and
+  Kubernetes
+
+## Configuration
+
+- [Configuration Reference](config.md) - Complete YAML configuration format
+- [Environment Variables](config-env.md) - Environment variables reference for all modes
+- [Runners Configuration](config-runners.md) - Sandboxed execution environments
+  (firejail, sandbox-exec, docker)
+
+## Development
+
+- [Development Guide](development.md) - Setting up a development environment and
+  contributing
+- [Release Process](release-process.md) - How releases are created and published
+- [Troubleshooting](troubleshooting.md) - Common issues and solutions
+
+## Quick Links
+
+### For Users
+
+- **First time?** Start with the [Usage Guide](usage.md)
+- **Setting up tools?** See [Configuration](config.md)
+- **Security concerns?** Read [Security Considerations](security.md)
+- **Using with Cursor?** Check [Cursor Integration](usage-cursor.md)
+- **Want agent mode?** See [Don](https://github.com/inercia/don)
+
+### For Developers
+
+- **Contributing?** Read the [Development Guide](development.md)
+- **Releasing?** Follow the [Release Process](release-process.md)
+
+## Documentation Structure
+
+```text
+docs/
+├── README.md                 # This file - documentation index
+├── usage.md                  # Main usage guide
+├── config.md                 # Configuration reference
+├── config-env.md             # Environment variables reference
+├── config-runners.md         # Runners configuration
+├── security.md               # Security guidelines
+├── troubleshooting.md        # Troubleshooting guide
+├── usage-cursor.md           # Cursor integration
+├── usage-vscode.md           # VS Code integration
+├── usage-claude-desktop.md   # Claude Desktop integration
+├── usage-codex-cli.md        # Codex CLI integration
+├── usage-containers.md       # Container deployment
+├── development.md            # Development guide
+└── release-process.md        # Release process
+```
+
+## External Resources
+
+- [GitHub Repository](https://github.com/inercia/MCPShell)
+- [Model Context Protocol](https://modelcontextprotocol.io/) - MCP specification
+- [cagent Library](https://github.com/docker/cagent) - Agent framework used by MCPShell
diff --git a/docs/config-env.md b/docs/config-env.md
new file mode 100644
index 0000000..ede1b5f
--- /dev/null
+++ b/docs/config-env.md
@@ -0,0 +1,60 @@
+# Environment Variables Reference
+
+MCPShell supports various environment variables to customize its behavior across
+different modes (MCP server, exe, daemon). Environment variables provide a
+flexible way to configure MCPShell without modifying configuration files or passing
+command-line flags.
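+
+For example, the following snippet (paths and file names are illustrative) points
+MCPShell at an isolated home directory for a test run, without touching any
+configuration file:
+
+```bash
+# Use a throwaway MCPShell home for this shell session only
+export MCPSHELL_DIR="$HOME/.mcpshell-test"
+mcpshell mcp --tools=my-tools.yaml
+```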
+
+## Overview
+
+Environment variables in MCPShell are used for:
+
+- **Configuration paths**: Override default locations for config files and directories
+- **System integration**: Platform-specific settings (HOME, SHELL, etc.)
+
+**Precedence**: In most cases, environment variables have lower precedence than
+command-line flags but higher precedence than default values. See individual variable
+descriptions for specific precedence rules.
+
+> **Note**: For agent-related environment variables (LLM API keys, model selection, RAG caching),
+> see the [Don](https://github.com/inercia/don) project documentation.
+
+## Configuration Paths
+
+### `MCPSHELL_DIR`
+
+Specifies a custom MCPShell home directory.
+
+- **Default**: `~/.mcpshell` (Unix/Linux/macOS) or `%USERPROFILE%\.mcpshell` (Windows)
+- **Used by**: All modes (mcp, exe, daemon)
+- **Example**:
+  ```bash
+  export MCPSHELL_DIR="/custom/mcpshell/dir"
+  mcpshell mcp --tools=tools.yaml
+  ```
+- **Use cases**:
+  - Testing with isolated configurations
+  - Multi-user environments
+  - Custom deployment locations
+
+### `MCPSHELL_TOOLS_DIR`
+
+Specifies a custom tools directory.
+
+- **Default**: `~/.mcpshell/tools`
+- **Used by**: All modes (mcp, exe, daemon)
+- **Example**:
+  ```bash
+  export MCPSHELL_TOOLS_DIR="/custom/tools/dir"
+  mcpshell mcp --tools=my-tools.yaml
+  ```
+- **Use cases**:
+  - Shared tools directory across projects
+  - Custom tools organization
+  - CI/CD environments
+
+## See Also
+
+- [Configuration Reference](config.md) - Tools configuration reference
+- [Security Guide](security.md) - Security best practices
+- [Don](https://github.com/inercia/don) - For agent-related environment variables
diff --git a/docs/config-runners.md b/docs/config-runners.md
index 2e718d8..6adf428 100644
--- a/docs/config-runners.md
+++ b/docs/config-runners.md
@@ -1,14 +1,16 @@
 # Runner Configuration
 
-The MCP CLI Adapter supports multiple _execution runners_ that allow tools to run in different
-environments with various security restrictions. This document details how to configure and use these runners.
+The MCP CLI Adapter supports multiple _execution runners_ that allow tools to run in
+different environments with various security restrictions. This document details how to
+configure and use these runners.
 
 For basic configuration information, see [Configuration Overview](config.md).
 
 ## Multiple Runners and Selection Process
 
-You can define multiple runners for a tool to support different execution environments. The system
-will select the first runner whose requirements are satisfied by the current system.
+You can define multiple runners for a tool to support different execution environments.
+The system will select the first runner whose requirements are satisfied by the current
+system.
 
 Each runner definition includes:
 
@@ -22,7 +24,7 @@ Here's an example of a tool with multiple runners:
 
 ```yaml
 run:
-  timeout: "30s"  # Command will timeout after 30 seconds
+  timeout: "30s" # Command will timeout after 30 seconds
   command: "echo 'Hello {{ .name }}'"
   runners:
     - name: sandbox-exec
@@ -33,10 +35,12 @@ run:
       options:
         allow_networking: false
         allow_user_folders: false
-    - name: exec  # acts as a fallback
+    - name: exec # acts as a fallback
 ```
 
-**Note**: The `timeout` setting applies to all runners. Regardless of which runner is selected (sandbox-exec, firejail, or exec), the command will be terminated if it exceeds the specified timeout duration.
+**Note**: The `timeout` setting applies to all runners. Regardless of which runner is
+selected (sandbox-exec, firejail, or exec), the command will be terminated if it exceeds
+the specified timeout duration.
 
 In this example:
 
@@ -46,23 +50,25 @@ In this example:
 
 **Important Notes on Runner Selection:**
 
-- The `runners` array is **optional**. If not provided,
-  **a default `exec` runner with no sandboxing will be used**.
-- If you do specify `runners`, at least one of them must meet its requirements
-  for the tool to be available.
-- No automatic fallback to `exec` occurs if you specify `runners` but none meet their requirements.
-- If you want a fallback, explicitly add an `exec` runner with empty
-  requirements at the end of your runners list.
+- The `runners` array is **optional**. If not provided, **a default `exec` runner with
+  no sandboxing will be used**.
+- If you do specify `runners`, at least one of them must meet its requirements for the
+  tool to be available.
+- No automatic fallback to `exec` occurs if you specify `runners` but none meet their
+  requirements.
+- If you want a fallback, explicitly add an `exec` runner with empty requirements at the
+  end of your runners list.
 
-It's recommended to always include a fallback runner (typically named "exec" with
-no requirements) to ensure your tool can run on any platform if you want it to be universally available.
+It's recommended to always include a fallback runner (typically named "exec" with no
+requirements) to ensure your tool can run on any platform if you want it to be
+universally available.
 
 ## Runner Types
 
 ### Default Runner (exec)
 
-The default runner executes commands directly on the host system using the configured shell.
-It has no special requirements or sandboxing.
+The default runner executes commands directly on the host system using the configured
+shell. It has no special requirements or sandboxing.
 
 ```yaml
 runners:
@@ -71,22 +77,22 @@ runners:
 
 ### `sandbox-exec` Runner (macOS Only)
 
-The sandbox runner uses macOS's `sandbox-exec` command to run commands in a sandboxed environment
-with restricted access to the system. This provides an additional layer of security by
-restricting what commands can access.
+The sandbox runner uses macOS's `sandbox-exec` command to run commands in a sandboxed
+environment with restricted access to the system. This provides an additional layer of
+security by restricting what commands can access.
 
 ```yaml
 runners:
   - name: sandbox-exec
     options:
-      allow_networking: false    # Disable network access
-      allow_user_folders: false  # Restrict access to user folders
-      allow_read_folders:        # List of folders to explicitly allow access to
+      allow_networking: false # Disable network access
+      allow_user_folders: false # Restrict access to user folders
+      allow_read_folders: # List of folders to explicitly allow access to
        - "/tmp"
        - "/path/to/project"
-      allow_read_files:          # List of specific files to allow access to
+      allow_read_files: # List of specific files to allow access to
        - "/etc/config.yaml"
-       - "{{ env \"HOME\" }}/app.conf"
+       - '{{ env "HOME" }}/app.conf'
 ```
 
 #### Sandbox Configuration Options
 
@@ -94,27 +100,32 @@ runners:
 Available options:
 
 - `allow_networking`: When set to `false`, blocks all network access
-- `allow_user_folders`: When set to `false`, restricts access to user folders like Documents, Desktop, etc.
-- `allow_read_folders`: List of directories to explicitly allow read access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
-- `allow_read_files`: List of specific files to explicitly allow read access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
-- `allow_write_folders`: List of directories to explicitly allow write access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
-- `allow_write_files`: List of specific files to explicitly allow write access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
+- `allow_user_folders`: When set to `false`, restricts access to user folders like
+  Documents, Desktop, etc.
+- `allow_read_folders`: List of directories to explicitly allow read access to. Items in
+  this list can use Golang template replacements (using the tool parameters).
+- `allow_read_files`: List of specific files to explicitly allow read access to. Items
+  in this list can use Golang template replacements (using the tool parameters).
+- `allow_write_folders`: List of directories to explicitly allow write access to. Items
+  in this list can use Golang template replacements (using the tool parameters).
+- `allow_write_files`: List of specific files to explicitly allow write access to. Items
+  in this list can use Golang template replacements (using the tool parameters).
 - `custom_profile`: Specify a custom sandbox profile for advanced configuration
 
 **Important**: macOS `sandbox-exec` requires different syntax for files vs directories:
 
-- Directories use `(allow file-read* (subpath "path"))` which allows access to the directory and all its contents
-- Files use `(allow file-read* (literal "path"))` which allows access to that specific file only
+- Directories use `(allow file-read* (subpath "path"))` which allows access to the
+  directory and all its contents
+- Files use `(allow file-read* (literal "path"))` which allows access to that specific
+  file only
 
-Use `allow_read_files` for specific file paths (e.g., config files) and `allow_read_folders` for directories.
+Use `allow_read_files` for specific file paths (e.g., config files) and
+`allow_read_folders` for directories.
 
 #### Custom Sandbox Profiles
 
-For advanced usage, you can specify a completely custom sandbox profile using the `custom_profile` option.
+For advanced usage, you can specify a completely custom sandbox profile using the
+`custom_profile` option.
 
 Here's an example of a custom profile that:
 
@@ -135,46 +146,53 @@ runners:
 
 ### `firejail` Runner (Linux Only)
 
-The firejail runner uses [firejail](https://firejail.wordpress.com/) to run commands in a sandboxed environment on Linux systems. Firejail is a SUID sandbox program that restricts the running environment of untrusted applications using Linux namespaces and seccomp-bpf.
+The firejail runner uses [firejail](https://firejail.wordpress.com/) to run commands in
+a sandboxed environment on Linux systems. Firejail is a SUID sandbox program that
+restricts the running environment of untrusted applications using Linux namespaces and
+seccomp-bpf.
 
 ```yaml
 runners:
   - name: firejail
     options:
-      allow_networking: false    # Disable network access
-      allow_user_folders: false  # Restrict access to user folders
-      allow_read_folders:        # List of folders to explicitly allow access to
+      allow_networking: false # Disable network access
+      allow_user_folders: false # Restrict access to user folders
+      allow_read_folders: # List of folders to explicitly allow access to
        - "/tmp"
        - "/etc/ssl/certs"
-      allow_read_files:          # List of specific files to allow access to
+      allow_read_files: # List of specific files to allow access to
        - "/etc/config.yaml"
-       - "{{ env \"HOME\" }}/app.conf"
+       - '{{ env "HOME" }}/app.conf'
 ```
 
 #### Requirements
 
 - Linux operating system
-- Firejail installed (`apt-get install firejail` on Debian/Ubuntu or equivalent for your distribution)
+- Firejail installed (`apt-get install firejail` on Debian/Ubuntu or equivalent for your
+  distribution)
 
 #### Firejail Configuration Options
 
 Available options:
 
 - `allow_networking`: When set to `false`, blocks all network access using `net none`
-- `allow_user_folders`: When set to `false`, restricts access to common user folders like Documents, Desktop, etc.
-- `allow_read_folders`: List of directories to explicitly allow read access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
-- `allow_read_files`: List of specific files to explicitly allow read access to. Items in this list can use
-  Golang template replacements (using the tool parameters).
-- `allow_write_folders`: List of directories to explicitly allow both read and write access to.
-  Items in this list can use Golang template replacements (using the tool parameters).
-- `allow_write_files`: List of specific files to explicitly allow both read and write access to.
-  Items in this list can use Golang template replacements (using the tool parameters).
+- `allow_user_folders`: When set to `false`, restricts access to common user folders
+  like Documents, Desktop, etc.
+- `allow_read_folders`: List of directories to explicitly allow read access to. Items in
+  this list can use Golang template replacements (using the tool parameters).
+- `allow_read_files`: List of specific files to explicitly allow read access to. Items
+  in this list can use Golang template replacements (using the tool parameters).
+- `allow_write_folders`: List of directories to explicitly allow both read and write
+  access to. Items in this list can use Golang template replacements (using the tool
+  parameters).
+- `allow_write_files`: List of specific files to explicitly allow both read and write
+  access to. Items in this list can use Golang template replacements (using the tool
+  parameters).
 - `custom_profile`: Specify a custom firejail profile for advanced configuration
 
-**Note**: For consistency with the sandbox-exec runner, firejail also supports separate file and folder lists.
-While firejail uses `whitelist` for both, maintaining this separation improves configuration clarity and
-cross-platform compatibility.
+**Note**: For consistency with the sandbox-exec runner, firejail also supports separate
+file and folder lists. While firejail uses `whitelist` for both, maintaining this
+separation improves configuration clarity and cross-platform compatibility.
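+
+As an illustration (paths below are placeholders, not defaults), a tool that needs to
+read a configuration file and write results under a scratch directory could combine the
+read and write options described above like this:
+
+```yaml
+runners:
+  - name: firejail
+    options:
+      allow_networking: false
+      allow_read_files:
+        - "/etc/myapp/config.yaml"   # configuration the tool only needs to read
+      allow_write_folders:
+        - "/tmp/myapp-output"        # scratch directory the tool may write into
+```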
 
 #### Security Benefits
 
@@ -188,7 +206,8 @@ The firejail runner adds several layers of security:
 
 #### Custom Firejail Profiles
 
-For advanced usage, you can specify a completely custom firejail profile using the `custom_profile` option:
+For advanced usage, you can specify a completely custom firejail profile using the
+`custom_profile` option:
 
 ```yaml
 runners:
@@ -205,22 +224,22 @@ runners:
 
 ### Docker Runner
 
-The Docker runner executes commands inside Docker containers, providing
-**strong isolation** from the host system. This runner creates a temporary script
-file containing your command, then mounts it into a Docker container and executes it.
+The Docker runner executes commands inside Docker containers, providing **strong
+isolation** from the host system. This runner creates a temporary script file containing
+your command, then mounts it into a Docker container and executes it.
 
 ```yaml
 runners:
   - name: docker
     options:
-      image: "alpine:latest"      # Required: Docker image to use
-      allow_networking: true      # Optional: Allow network access (default: true)
-      mounts:                     # Optional: Additional volumes to mount
-        - "/data:/data:ro"        # Format: "host-path:container-path[:options]"
+      image: "alpine:latest" # Required: Docker image to use
+      allow_networking: true # Optional: Allow network access (default: true)
+      mounts: # Optional: Additional volumes to mount
+        - "/data:/data:ro" # Format: "host-path:container-path[:options]"
         - "/config:/etc/myapp:ro"
-      user: "1000:1000"           # Optional: User to run as in container
-      workdir: "/app"             # Optional: Working directory in container
-      docker_run_opts: "--cpus 1 --memory 512m"  # Optional: Additional docker run options
+      user: "1000:1000" # Optional: User to run as in container
+      workdir: "/app" # Optional: Working directory in container
+      docker_run_opts: "--cpus 1 --memory 512m" # Optional: Additional docker run options
       prepare_command: | # Commands to run before the main command
         apt-get update
@@ -230,35 +249,45 @@ runners:
 
 #### Requirements
 
 - Docker installed and available in PATH
-- Appropriate permissions to run Docker containers (typically membership in the `docker` group or root)
+- Appropriate permissions to run Docker containers (typically membership in the `docker`
+  group or root)
 
 #### Docker Configuration Options
 
 Available options:
 
-- `image`: (Required) The Docker image to use for running the command (e.g., "alpine:latest", "ubuntu:22.04")
-- `allow_networking`: When set to `false`, disables all network access for the container using `--network none`
-- `network`: Specific network to connect the container to (e.g., "host", "bridge", or custom network name)
-- `mounts`: A list of additional volumes to mount in the format "host-path:container-path[:options]"
+- `image`: (Required) The Docker image to use for running the command (e.g.,
+  "alpine:latest", "ubuntu:22.04")
+- `allow_networking`: When set to `false`, disables all network access for the container
+  using `--network none`
+- `network`: Specific network to connect the container to (e.g., "host", "bridge", or
+  custom network name)
+- `mounts`: A list of additional volumes to mount in the format
+  "host-path:container-path[:options]"
 - `user`: Specify the user to run as within the container (format: "uid" or "uid:gid")
 - `workdir`: Set the working directory inside the container
 - `docker_run_opts`: String of additional options to pass to the `docker run` command
-- `prepare_command`: Commands to run before the main command (e.g., for installing packages or setting up the environment)
+- `prepare_command`: Commands to run before the main command (e.g., for installing
+  packages or setting up the environment)
 - `memory`: Memory limit for the container (e.g., "512m", "1g")
 - `memory_reservation`: Memory soft limit (e.g., "256m", "512m")
-- `memory_swap`: Swap limit equal to memory plus swap: '-1' to enable unlimited swap
+- `memory_swap`: Total limit for memory plus swap; set to '-1' to allow unlimited swap
 - `memory_swappiness`: Tune container memory swappiness (0 to 100, default -1)
-- `cap_add`: Linux capabilities to add to the container (e.g., ["NET_ADMIN", "SYS_PTRACE"])
+- `cap_add`: Linux capabilities to add to the container (e.g., ["NET_ADMIN",
+  "SYS_PTRACE"])
 - `cap_drop`: Linux capabilities to drop from the container (e.g., ["ALL"])
 - `dns`: Custom DNS servers for the container (e.g., ["8.8.8.8", "1.1.1.1"])
-- `dns_search`: Custom DNS search domains for the container (e.g., ["example.com", "mydomain.local"])
-- `platform`: Set platform if server is multi-platform capable (e.g., "linux/amd64", "linux/arm64")
+- `dns_search`: Custom DNS search domains for the container (e.g., ["example.com",
+  "mydomain.local"])
+- `platform`: Set platform if server is multi-platform capable (e.g., "linux/amd64",
+  "linux/arm64")
 
 #### Security Benefits
 
 The Docker runner provides several security advantages:
 
-1. **Complete process isolation**: Processes inside the container are isolated from the host
+1. **Complete process isolation**: Processes inside the container are isolated from the
+   host
 1. **Configurable resource limits**: Can limit CPU, memory, and other resources
 1. **Control over capabilities**: Docker restricts Linux capabilities by default
 1. **Filesystem isolation**: Only mounted volumes are accessible
@@ -313,11 +342,11 @@ runners:
   - name: docker
     options:
       image: "node:16-alpine"
-      memory: "1g"                 # Hard memory limit
-      memory_reservation: "512m"   # Soft memory limit (container will try to release memory if below this value)
-      memory_swap: "1.5g"          # Total memory+swap limit
-      memory_swappiness: 10        # Low swappiness value to prefer using RAM over swap
-      docker_run_opts: "--cpus 2"  # Limit to 2 CPU cores
+      memory: "1g" # Hard memory limit
+      memory_reservation: "512m" # Soft memory limit (container will try to release memory if below this value)
+      memory_swap: "1.5g" # Total memory+swap limit
+      memory_swappiness: 10 # Low swappiness value to prefer using RAM over swap
+      docker_run_opts: "--cpus 2" # Limit to 2 CPU cores
       workdir: "/app"
 ```
 
@@ -328,8 +357,8 @@ runners:
   - name: docker
     options:
       image: "ubuntu:22.04"
-      cap_drop: ["ALL"]                  # Drop all capabilities by default
-      cap_add: ["NET_ADMIN", "NET_RAW"]  # Add specific capabilities for network tools
+      cap_drop: ["ALL"] # Drop all capabilities by default
+      cap_add: ["NET_ADMIN", "NET_RAW"] # Add specific capabilities for network tools
       allow_networking: true
       prepare_command: |
         apt-get update
@@ -343,12 +372,12 @@ runners:
   - name: docker
     options:
       image: "alpine:latest"
-      dns: ["8.8.8.8", "8.8.4.4"]  # Use Google's public DNS servers
+      dns: ["8.8.8.8", "8.8.4.4"] # Use Google's public DNS servers
       dns_search: ["example.com", "internal.mycompany.net"]
       prepare_command: |
         # Install networking tools
         apk add --no-cache curl bind-tools
-        
+
         # Test DNS resolution
         echo "Testing DNS resolution..."
         nslookup api.example.com
@@ -361,14 +390,14 @@ runners:
   - name: docker
     options:
       image: "node:16"
-      platform: "linux/amd64"  # Force x86_64 architecture even on ARM systems
+      platform: "linux/amd64" # Force x86_64 architecture even on ARM systems
       workdir: "/app"
       mounts:
         - "./app:/app"
       prepare_command: |
         # Install dependencies for x86_64 architecture
         npm install
-        
+
         # Run tests to ensure platform compatibility
         npm test
 ```
@@ -380,14 +409,14 @@ runners:
   - name: docker
     options:
       image: "ubuntu:latest"
-      network: "host"  # Use host network mode for full network access
+      network: "host" # Use host network mode for full network access
       prepare_command: |
         # Update package list
         apt-get update
-        
+
         # Install networking tools
         apt-get install -y net-tools iputils-ping
-        
+
         # Test network connectivity with host network
         netstat -tuln
 ```
@@ -402,20 +431,23 @@ runners:
       prepare_command: |
         # Update package lists
         apt-get update -y
-        
+
         # Install required packages
         apt-get install -y --no-install-recommends \
           curl \
          jq \
          ca-certificates
-        
+
        # Clean up to reduce container size
        apt-get clean
        rm -rf /var/lib/apt/lists/*
      allow_networking: true
 ```
 
-The `prepare_command` is executed before the main command and can be used to install dependencies, configure the environment, or perform any setup tasks needed for the command to run successfully. This is especially useful for lightweight base images where you need to install additional tools.
+The `prepare_command` is executed before the main command and can be used to install
+dependencies, configure the environment, or perform any setup tasks needed for the
+command to run successfully. This is especially useful for lightweight base images where
+you need to install additional tools.
 
 ## Cross-Platform Example
 
@@ -430,11 +462,11 @@ Here's a complete example of a tool that uses different runners based on the pla
           description: "Path to the file to read"
           required: true
           constraints:
-            - "filename.size() > 0"  # Filename must not be empty
-            - "!filename.contains('../')"  # Prevent directory traversal
-            - "['.txt', '.log', '.md'].exists(ext, filename.endsWith(ext))"  # Only allow certain file extensions
+            - "filename.size() > 0" # Filename must not be empty
+            - "!filename.contains('../')" # Prevent directory traversal
+            - "['.txt', '.log', '.md'].exists(ext, filename.endsWith(ext))" # Only allow certain file extensions
       run:
-        timeout: "10s"  # Timeout after 10 seconds
+        timeout: "10s" # Timeout after 10 seconds
         command: "cat {{ .filename }}"
         runners:
           - name: sandbox-exec
@@ -444,7 +476,7 @@ Here's a complete example of a tool that uses different runners based on the pla
               allow_read_folders:
                 - "/tmp"
               allow_read_files:
-                - "{{ .filename }}"  # Specific file access
+                - "{{ .filename }}" # Specific file access
           - name: firejail
             options:
               allow_networking: false
@@ -452,7 +484,7 @@ Here's a complete example of a tool that uses different runners based on the pla
               allow_read_folders:
                 - "/tmp"
               allow_read_files:
-                - "{{ .filename }}"  # Specific file access
+                - "{{ .filename }}" # Specific file access
           - name: exec
       output:
         prefix: "Contents of {{ .filename }}:"
@@ -471,7 +503,7 @@ Here's an example showing how to properly configure file and folder access for k
           description: "Resource type (pods, deployments, etc.)"
           required: true
       run:
-        timeout: "30s"  # Timeout after 30 seconds
+        timeout: "30s" # Timeout after 30 seconds
         command: "kubectl get {{ .resource }}"
         env:
           - KUBECONFIG
@@ -483,9 +515,9 @@ Here's an example showing how to properly configure file and folder access for k
               allow_read_folders:
                 - "/usr/bin"
                 - "/bin"
-                - "{{ env \"HOME\" }}/.kube"  # Directory with multiple kubeconfig files
+                - '{{ env "HOME" }}/.kube' # Directory with multiple kubeconfig files
               allow_read_files:
-                - "{{ env \"KUBECONFIG\" }}"  # Specific kubeconfig file
+                - '{{ env "KUBECONFIG" }}' # Specific kubeconfig file
           - name: firejail
             options:
               allow_networking: true
@@ -493,8 +525,8 @@ Here's an example showing how to properly configure file and folder access for k
               allow_read_folders:
                 - "/usr/bin"
                 - "/bin"
-                - "{{ env \"HOME\" }}/.kube"
+                - '{{ env "HOME" }}/.kube'
               allow_read_files:
-                - "{{ env \"KUBECONFIG\" }}"
+                - '{{ env "KUBECONFIG" }}'
           - name: exec
 ```
diff --git a/docs/config.md b/docs/config.md
index d05cd43..e013256 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -1,6 +1,7 @@
 # Configuration File
 
-The MCPShell can be configured using a YAML configuration file to define the tools that the MCP server provides.
+The MCPShell can be configured using a YAML configuration file to define the tools that
+the MCP server provides.
 
 ## Basic Structure
 
@@ -32,8 +33,7 @@ mcp:
           os: ""
           executables:
             - ""
-        options:
-