Implement SecureToken and TokenCache with comprehensive tests; add Vault trigger and watcher functionality with enhanced error handling and path validation #1

base: master-copy
Pull Request Overview
This PR implements secure token handling with SecureToken and TokenCache utilities, and enhances Vault integration with improved error handling, path validation, and parallel processing capabilities.
Key Changes
- SecureToken implementation: Thread-safe token wrapper with automatic zeroing, constant-time comparison, and masked logging to prevent token leakage
- Vault watcher enhancements: Parallel workload discovery, worker pool for version checks, path validation (isValidVaultPath), and security-focused sanitization functions
- Vault trigger improvements: Request size limits (1MB), path validation, and enhanced error sanitization for secure logging
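The token wrapper described above can be sketched as a minimal outline. This is illustrative only, not the PR's actual implementation: the type and method names (`NewSecureToken`, `Equal`, `Masked`, `Zero`) are assumptions, and the real `SecureToken` and `TokenCache` may differ.

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"sync"
)

// SecureToken is an illustrative sketch of a thread-safe token wrapper: the
// token lives in a mutable byte slice so it can be zeroed, comparisons run in
// constant time, and only a masked form is ever exposed for logging.
type SecureToken struct {
	mu    sync.Mutex
	token []byte
}

func NewSecureToken(t string) *SecureToken {
	return &SecureToken{token: []byte(t)}
}

// Equal compares against another token in constant time to avoid timing leaks.
func (s *SecureToken) Equal(other string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	return subtle.ConstantTimeCompare(s.token, []byte(other)) == 1
}

// Masked returns a log-safe representation: first two bytes, then asterisks.
func (s *SecureToken) Masked() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if len(s.token) <= 4 {
		return "****"
	}
	return string(s.token[:2]) + "****"
}

// Zero overwrites the underlying bytes so the plaintext no longer lives in memory.
func (s *SecureToken) Zero() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for i := range s.token {
		s.token[i] = 0
	}
	s.token = s.token[:0]
}

func main() {
	st := NewSecureToken("s.abcdef123456")
	fmt.Println(st.Equal("s.abcdef123456")) // true
	fmt.Println(st.Masked())                // masked form, safe to log
	st.Zero()
	fmt.Println(st.Equal("s.abcdef123456")) // false after zeroing
}
```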
Reviewed Changes
Copilot reviewed 7 out of 9 changed files in this pull request and generated 8 comments.
Summary per file:
| File | Description |
|---|---|
| internal/pkg/handler/secure_token.go | New SecureToken and TokenCache for secure credential management with memory zeroing |
| internal/pkg/handler/secure_token_test.go | Comprehensive tests for SecureToken including concurrency and security scenarios |
| internal/pkg/handler/vault_watcher.go | Enhanced with parallel processing, path validation, error sanitization, and token permission checks |
| internal/pkg/handler/vault_watcher_test.go | Extensive test coverage for validation, sanitization, and Vault API interactions |
| internal/pkg/handler/vault_trigger.go | Added path validation, request size limits, and error sanitization |
| internal/pkg/handler/vault_trigger_test.go | Comprehensive tests for trigger endpoint including auth and validation scenarios |
| pkg/kube/resourcemapper.go | Fixed indentation for namespaces entry |
| pkg/common/config.go | Reorganized imports to follow Go conventions (tabs, grouped) |
| internal/pkg/metrics/prometheus.go | Aligned struct field spacing for better readability |
```go
// HashToken creates a one-way hash of a token for comparison purposes
func HashToken(token string) string {
	if token == "" {
		return ""
	}
	// Use base64 encoding of the token for a simple non-reversible representation
	// In production, you might want to use a proper hash function like SHA256
	return base64.StdEncoding.EncodeToString([]byte(token))
}
```
Copilot AI · Nov 13, 2025
The HashToken function uses base64 encoding, which is reversible, not a "one-way hash" as stated in the comment and function name. This is misleading and could lead to incorrect security assumptions.
Consider using a proper cryptographic hash function:

```go
import "crypto/sha256"

func HashToken(token string) string {
	if token == "" {
		return ""
	}
	hash := sha256.Sum256([]byte(token))
	return base64.StdEncoding.EncodeToString(hash[:])
}
```
```go
// Remove full URLs that might contain sensitive query params
if strings.Contains(errMsg, "http://") || strings.Contains(errMsg, "https://") {
	return fmt.Errorf("vault API error (URL redacted): connection or access issue")
```
Copilot AI · Nov 13, 2025
[nitpick] The vault API error (URL redacted) error message on line 444 is missing a period at the end, while the error on line 450 has proper punctuation. This creates inconsistent error message formatting.
Consider adding the trailing period so this message matches the formatting of the other connection/access error:

```go
return fmt.Errorf("vault API error (URL redacted): connection or access issue.")
```
```go
logrus.Debugf("vault watcher: scanned %d workloads, tracked %d path-ns pairs", workloads, len(paths))
return nil
clients := kube.GetClients()
```
Copilot AI · Nov 13, 2025
[nitpick] The context timeout was increased from 20 seconds to 30 seconds. While this change accommodates parallel workload fetching, there's no accompanying documentation explaining why this specific timeout value was chosen or what happens if operations exceed it.
Consider adding a comment explaining the timeout:

```go
// Increased timeout to 30s to accommodate parallel fetching of Deployments, StatefulSets, and DaemonSets
// across all namespaces. If operations exceed this timeout, the iteration will abort and log an error.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
clients := kube.GetClients()
```
```go
limitedReader := http.MaxBytesReader(w, r.Body, 1<<20)
body, err := io.ReadAll(limitedReader)
if err != nil {
	http.Error(w, "request body too large or read error", http.StatusBadRequest)
```
Copilot AI · Nov 13, 2025
The error message "request body too large or read error" combines two different error conditions into one message, making it difficult to distinguish between a legitimately oversized request and a network/IO error.
Consider checking the error type to provide a more specific message:

```go
body, err := io.ReadAll(limitedReader)
if err != nil {
	if err.Error() == "http: request body too large" {
		http.Error(w, "request body exceeds 1MB limit", http.StatusBadRequest)
	} else {
		http.Error(w, "failed to read request body", http.StatusBadRequest)
	}
	return
}
```
```go
// If we get 200 or 403, the token has capabilities on data paths
if resp.StatusCode == http.StatusOK {
	logrus.Warn("⚠️ SECURITY WARNING: Vault token has READ access to secret DATA paths. For security, the token should ONLY have access to METADATA endpoints. Current setup: token can read actual secret values, which is unnecessary and increases security risk. Please restrict token to metadata-only permissions.")
```
Copilot AI · Nov 13, 2025
[nitpick] The security warning message on line 490 is extremely long (over 200 characters) and may be difficult to read in logs. Consider breaking this into a more concise warning message, or using multiple log statements.
For example:

```go
logrus.Warn("⚠️ SECURITY WARNING: Vault token has READ access to secret DATA paths.")
logrus.Warn("For security, the token should ONLY have access to METADATA endpoints.")
logrus.Warn("Please restrict token to metadata-only permissions to reduce security risk.")
```
```go
// Show only the last two segments to avoid exposing full internal structure
parts := strings.Split(path, "/")
if len(parts) > 2 {
	return ".../" + parts[len(parts)-2] + "/" + parts[len(parts)-1]
}
return path
```
Copilot AI · Nov 13, 2025
[nitpick] The sanitizePath function shows the last two segments of a path (e.g., .../database/password), which could still leak sensitive information in logs. For paths containing sensitive terms like "password", "secret", "key", etc., even the last segments can be revealing.
Consider either:
- Showing only the last segment, or
- Further masking if the path segments contain sensitive keywords
Example:

```go
// Show only the last segment, and mask it entirely if it contains sensitive keywords
parts := strings.Split(path, "/")
last := path
if len(parts) > 0 {
	last = parts[len(parts)-1]
}
sensitiveKeywords := []string{"password", "secret", "key", "token", "credential"}
for _, kw := range sensitiveKeywords {
	if strings.Contains(strings.ToLower(last), kw) {
		return ".../[redacted]"
	}
}
return ".../" + last
```
```go
// checkVaultVersionsParallel checks Vault versions for all paths using a worker pool
func checkVaultVersionsParallel(ctx context.Context, httpClient *http.Client, paths map[nsPath]struct{}, mu *sync.Mutex, last map[nsPath]int) []versionChange {
	const maxWorkers = 10 // Limit concurrent Vault API calls
```
Copilot AI · Nov 13, 2025
[nitpick] The worker pool uses a fixed size of 10 workers, but this constant is hardcoded without explanation. Depending on cluster size and the number of Vault paths to check, this could be either too few (causing slow processing) or too many (overwhelming Vault API).
Consider making this configurable or adding documentation:

```go
// maxWorkers limits concurrent Vault API calls to prevent overwhelming the Vault server
// and to respect potential rate limits. Adjust based on your Vault deployment capacity.
const maxWorkers = 10
```

Alternatively, consider making this configurable via an option or deriving it from the number of paths (e.g., min(10, len(paths))).
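The min(10, len(paths)) idea can be made concrete with a small helper. `workerCount` is a hypothetical name for illustration, not a function from the PR:

```go
package main

import "fmt"

// workerCount derives a bounded pool size from the work available: never more
// workers than there are paths to check, capped at the configured maximum so
// the Vault API is not overwhelmed.
func workerCount(numPaths, maxWorkers int) int {
	if numPaths < maxWorkers {
		return numPaths
	}
	return maxWorkers
}

func main() {
	fmt.Println(workerCount(3, 10))   // few paths: only 3 workers needed
	fmt.Println(workerCount(250, 10)) // many paths: capped at 10 workers
}
```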
```go
// Merge results
merged := make(map[nsPath]struct{})
for result := range results {
	for k := range result.paths {
		merged[k] = struct{}{}
	}
}

return merged
```
Copilot AI · Nov 13, 2025
[nitpick] The discoverAnnotatedWorkloadsParallel function logs warnings when workload listing fails (lines 145, 167, 189) but continues to merge results, silently ignoring the errors. This could lead to incomplete path discovery without clear indication to the operator.
Consider either:
- Returning an error from this function if any of the List operations fail critically
- Adding a summary log after merging that indicates if any workload types were skipped due to errors
- Tracking and reporting partial failures
Example:

```go
failedTypes := []string{}
for result := range results {
	if result.err != nil {
		failedTypes = append(failedTypes, "...")
	}
	for k := range result.paths {
		merged[k] = struct{}{}
	}
}
if len(failedTypes) > 0 {
	logrus.Warnf("vault watcher: failed to list some workload types: %v", failedTypes)
}
```