Conversation
```go
if len(args) > 0 && !strings.HasPrefix(args[0], "-") {
	*addr = args[0]
	args = args[1:]
}
```
Potential for unexpected flag precedence
Assigning the positional argument directly to *addr before parsing flags can lead to confusion if both a positional argument and an --addr flag are provided. The flag parsing will override the positional argument, which may not be the user's intent. Consider explicitly detecting and rejecting cases where both are provided, or clearly documenting the precedence.
Recommended solution:

```go
if len(args) > 0 && !strings.HasPrefix(args[0], "-") {
	if containsAddrFlag(args[1:]) {
		return "", errors.New("cannot specify address both positionally and with --addr flag")
	}
	*addr = args[0]
	args = args[1:]
}
```

Implement a helper to check for the presence of the addr flag in the remaining args.
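A minimal sketch of such a helper, assuming the flag is registered via the standard flag package (so both -addr and --addr spellings, with or without =value, must be caught):

```go
// containsAddrFlag reports whether the addr flag appears anywhere in args.
// Sketch only: the accepted spellings are assumptions about how the flag is registered.
func containsAddrFlag(args []string) bool {
	for _, a := range args {
		if a == "-addr" || a == "--addr" ||
			strings.HasPrefix(a, "-addr=") || strings.HasPrefix(a, "--addr=") {
			return true
		}
	}
	return false
}
```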
| if host != "" && host != "localhost" { | ||
| if ip := net.ParseIP(host); ip == nil { | ||
| if strings.ContainsAny(host, " \t\n") { | ||
| return fmt.Errorf("invalid host: %s", host) | ||
| } | ||
| } |
Insufficient host validation
The current host validation only checks for whitespace and whether the host is 'localhost' or a valid IP. It does not validate the host against RFC-compliant hostname rules, nor does it prevent reserved or malformed hostnames. This could allow invalid or potentially unsafe hostnames to be accepted.
Recommended solution:
Consider stricter hostname validation, for example a regular expression or net package utilities, to ensure the host is a valid domain name or IP address.
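One possible sketch, using a regular expression that approximates RFC 1123 hostname rules (the exact policy, e.g. whether reserved names should also be rejected, is an assumption to adapt):

```go
// hostnameRE approximates RFC 1123: dot-separated labels of letters, digits,
// and hyphens, each 1-63 chars, not starting or ending with a hyphen.
var hostnameRE = regexp.MustCompile(
	`^([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)*` +
		`[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$`)

func validateHost(host string) error {
	if host == "" || host == "localhost" {
		return nil
	}
	if net.ParseIP(host) != nil {
		return nil // valid IPv4 or IPv6 literal
	}
	if len(host) > 253 || !hostnameRE.MatchString(host) {
		return fmt.Errorf("invalid host: %s", host)
	}
	return nil
}
```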
```go
t.Run(tt.name, func(t *testing.T) {
	t.Parallel()
	err := validateAddr(tt.addr)
	if tt.wantErr && err == nil {
		t.Errorf("validateAddr(%q) = nil, want error", tt.addr)
	}
	if !tt.wantErr && err != nil {
		t.Errorf("validateAddr(%q) = %v, want nil", tt.addr, err)
	}
})
```
The subtest closure in the table-driven test captures the loop variable tt by reference, which can cause data races and unpredictable results when subtests run in parallel. To avoid this, copy tt into a new variable inside the loop before the closure:

```go
for _, tt := range tests {
	tt := tt // capture by value
	t.Run(tt.name, func(t *testing.T) {
		t.Parallel()
		// ...
	})
}
```

This ensures each subtest gets its own copy of tt, preventing concurrency issues. (Note: Go 1.22 and later scope loop variables per iteration, so the copy is only needed on older toolchains.)
| return fmt.Errorf("loading config: %w", err) | ||
| } | ||
|
|
||
| ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM) |
The use of signal.NotifyContext to create a context that is canceled on SIGINT or SIGTERM is appropriate for graceful shutdown. However, passing this context directly to the Bubble Tea program (tea.NewProgram(model, tea.WithContext(ctx))) may result in the TUI being abruptly terminated when the signal is received, potentially preventing proper cleanup or user feedback. Consider handling the signal separately and coordinating a graceful shutdown sequence for the TUI.
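One possible shape for that coordination, assuming Bubble Tea's Program API (the model variable and error wrapping are placeholders):

```go
// Handle the signal ourselves instead of cancelling the program's context,
// so the TUI can run its normal teardown (restore terminal, final view).
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
defer signal.Stop(sigCh)

p := tea.NewProgram(model) // no tea.WithContext: the TUI owns its lifecycle

go func() {
	<-sigCh
	p.Send(tea.Quit()) // delivers tea.QuitMsg so the model quits gracefully
}()

if _, err := p.Run(); err != nil {
	return fmt.Errorf("running TUI: %w", err)
}
```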
```go
switch os.Args[1] {
case "cli":
	return runCLI()
case "serve":
	return runServe()
case "mcp":
	return runMCP()
case "version", "--version", "-v":
	runVersion()
	return nil
case "help", "--help", "-h":
	runHelp()
	return nil
default:
	return fmt.Errorf("unknown command: %s", os.Args[1])
}
```
Maintainability Concern: Monolithic Command Dispatch
The current implementation uses a large switch statement on os.Args[1] to dispatch commands. As the number of commands grows, this approach becomes harder to maintain and extend. Consider refactoring to a map of command names to handler functions, which improves modularity and makes it easier to add, remove, or modify commands.
Recommended refactor:

```go
var commands = map[string]func() error{
	"cli":   runCLI,
	"serve": runServe,
	"mcp":   runMCP,
	// ...
}

if cmd, ok := commands[os.Args[1]]; ok {
	return cmd()
}
return fmt.Errorf("unknown command: %s", os.Args[1])
```

This approach centralizes command registration and reduces the risk of errors when modifying the command set. Commands whose handlers return nothing (version, help) would need small func() error wrappers.
```go
func TestE2E_RateLimiting(t *testing.T) {
	handler := e2eServer(t)

	// Rate limiter is configured at 1 token/sec, burst 60.
	// /health bypasses the middleware stack, so use an API path that goes through rate limiting.
	var lastCode int
	for i := range 65 {
		w := httptest.NewRecorder()
		r := httptest.NewRequest(http.MethodGet, "/api/v1/csrf-token", nil)
		r.RemoteAddr = "10.99.99.99:12345" // Unique IP so no interference

		handler.ServeHTTP(w, r)
		lastCode = w.Code

		if w.Code == http.StatusTooManyRequests {
			// Verify Retry-After header
			if got := w.Header().Get("Retry-After"); got != "1" {
				t.Errorf("rate limited Retry-After = %q, want %q", got, "1")
			}
			t.Logf("rate limited at request %d", i+1)
			return
		}
	}

	t.Fatalf("rate limiter: no 429 within 65 requests, last status = %d", lastCode)
}
```
Rate Limiting Test May Be Brittle to Configuration Changes
The rate limiting test assumes a fixed configuration (1 token/sec, burst 60) and expects a 429 status within 65 requests. If the rate limiter configuration changes, this test could fail or become flaky, reducing maintainability.
Recommendation:
Consider making the test adaptive to the actual rate limiter configuration, or document the dependency clearly. Alternatively, expose the rate limiter parameters for the test so it can adjust the number of requests accordingly.
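For instance, if the limiter settings were exposed where the test can see them, the request budget could be derived rather than hard-coded (the constant names below are hypothetical):

```go
// Hypothetical: shared constants declared next to the middleware and
// referenced by the test, so the two cannot silently diverge.
const (
	rateLimitPerSec = 1  // tokens refilled per second
	rateLimitBurst  = 60 // bucket size
)

// In the test, derive the budget from the config instead of the literal 65:
maxReqs := rateLimitBurst + 5 // a few requests past the burst guarantees a 429
for i := range maxReqs {
	// ... same request loop as above ...
}
```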
```diff
 // Returns 200 OK with {"status":"ok"}.
 func health(w http.ResponseWriter, _ *http.Request) {
-	WriteJSON(w, http.StatusOK, map[string]string{"status": "ok"})
+	WriteJSON(w, http.StatusOK, map[string]string{"status": "ok"}, nil)
```
Ensure that WriteJSON properly handles the nil final parameter and always returns a valid JSON response. If WriteJSON does not internally handle encoding errors, consider adding explicit error handling to guarantee consistent and reliable responses for health checks.
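If the new trailing argument is a logger, which the WriteError call sites elsewhere in this PR suggest (though the actual signature may differ), a nil-tolerant sketch could look like:

```go
// Hypothetical sketch; the real WriteJSON signature and final parameter may differ.
func WriteJSON(w http.ResponseWriter, status int, v any, logger *slog.Logger) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	if err := json.NewEncoder(w).Encode(v); err != nil && logger != nil {
		logger.Error("encoding JSON response", "error", err)
	}
}
```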
```go
// setupIntegrationSessionManager creates a sessionManager backed by a real PostgreSQL database.
func setupIntegrationSessionManager(t *testing.T) *sessionManager {
	t.Helper()

	db := testutil.SetupTestDB(t)

	store := session.New(sqlc.New(db.Pool), db.Pool, slog.New(slog.DiscardHandler))

	return &sessionManager{
		store:      store,
		hmacSecret: []byte("test-secret-at-least-32-characters!!"),
		isDev:      true,
		logger:     slog.New(slog.DiscardHandler),
	}
}
```
Test Pollution Risk:
The integration tests create sessions and messages in the database but do not explicitly clean up after themselves. If the test database is not reset between tests, this can lead to test pollution and unreliable results.
Recommendation:
Ensure that the test database is reset or cleaned up after each test, either by using t.Cleanup() to drop created sessions/messages or by resetting the database state in SetupTestDB.
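A sketch of per-test cleanup via t.Cleanup (the table names are assumptions about the schema):

```go
db := testutil.SetupTestDB(t)
t.Cleanup(func() {
	// Truncate in one statement; CASCADE covers the messages -> sessions FK.
	if _, err := db.Pool.Exec(context.Background(),
		`TRUNCATE sessions, messages CASCADE`); err != nil {
		t.Logf("cleanup failed: %v", err)
	}
})
```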
```go
_, err = sm.store.Session(ctx, sess.ID)
if err == nil {
	t.Error("deleteSession() session still exists after deletion")
}
```
Incomplete Deletion Verification:
In TestDeleteSession_Success, the test verifies that the session is deleted, but does not check for orphaned messages or related data that may remain after session deletion. This could mask issues in the deletion logic.
Recommendation:
Extend the test to verify that all related data (e.g., messages) are also deleted when a session is removed, ensuring full cleanup and preventing data integrity issues.
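A sketch of the extra assertion (the Messages accessor and its signature are assumptions about the store's API):

```go
// Hypothetical: verify no messages survive the session deletion.
msgs, err := sm.store.Messages(ctx, sess.ID)
if err == nil && len(msgs) > 0 {
	t.Errorf("deleteSession() left %d orphaned messages", len(msgs))
}
```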
| "path", r.URL.Path, | ||
| "method", r.Method, | ||
| ) | ||
| WriteError(w, http.StatusForbidden, "csrf_invalid", "CSRF validation failed") | ||
| WriteError(w, http.StatusForbidden, "csrf_invalid", "CSRF validation failed", logger) | ||
| return | ||
| } | ||
| next.ServeHTTP(w, r) | ||
| return | ||
| } | ||
|
|
||
| // Session-bound token | ||
| sessionID, ok := SessionIDFromContext(r.Context()) | ||
| sessionID, ok := sessionIDFromContext(r.Context()) | ||
| if !ok { | ||
| logger.Error("CSRF validation failed: session ID not in context", | ||
| logger.Error("validating CSRF: session ID not in context", | ||
| "path", r.URL.Path, | ||
| "method", r.Method, | ||
| ) | ||
| WriteError(w, http.StatusForbidden, "session_required", "session required") | ||
| WriteError(w, http.StatusForbidden, "session_required", "session required", logger) | ||
| return | ||
| } | ||
|
|
||
| if err := sm.CheckCSRF(sessionID, csrfToken); err != nil { | ||
| logger.Warn("CSRF validation failed", | ||
| logger.Warn("validating CSRF", | ||
| "error", err, | ||
| "session", sessionID, | ||
| "path", r.URL.Path, | ||
| "method", r.Method, | ||
| ) | ||
| WriteError(w, http.StatusForbidden, "csrf_invalid", "CSRF validation failed") | ||
| WriteError(w, http.StatusForbidden, "csrf_invalid", "CSRF validation failed", logger) | ||
| return | ||
| } | ||
|
|
Potential Security Inconsistency: Middleware Ordering Required
The CSRF middleware assumes the session ID is already in the request context; if sessionMiddleware is not applied before csrfMiddleware, the session ID will be missing and every request will be rejected with a forbidden error. Security enforcement then depends silently on middleware order. Consider explicitly documenting or enforcing the ordering, or having csrfMiddleware fail fast when the session ID is absent because sessionMiddleware never ran.
Recommended solution:
- Ensure that sessionMiddleware is always applied before csrfMiddleware in the middleware chain.
- Optionally, add a runtime check or panic in csrfMiddleware if session ID is missing and sessionMiddleware is not present, to avoid silent failures.
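A sketch of making the dependency explicit at wiring time (the middleware names come from this diff; the composition helper is an assumption):

```go
// Compose session + CSRF in one place so the ordering cannot drift:
// sessionMiddleware runs first and puts the session ID into the context
// that csrfMiddleware later reads.
func (sm *sessionManager) withSessionAndCSRF(next http.Handler) http.Handler {
	return sm.sessionMiddleware(sm.csrfMiddleware(next))
}
```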