Fix: Resolve merge conflicts and enhance security (#79)
Conversation
…d rate limiting
- Add clustering support based on available CPU cores and environment settings
- Integrate PostHog analytics for API request and server metrics tracking
- Implement rate limiting with IP validation and bounded in-memory storage
- Enhance VercelRequest and VercelResponse interfaces with robust parsing and security headers
- Improve CORS handling with origin allowlists and credential support
- Validate and sanitize API endpoint paths to prevent directory traversal attacks
- Add request body size limit and enforce request timeout handling
- Provide structured logging for requests, responses, errors, and server lifecycle events
- Add health endpoint with uptime, metrics, environment, and version info
- Support graceful shutdown with analytics capture on termination signals
- Update create-checkout-session API with stricter CORS origin checks and OPTIONS method handling
- Refine hono-polar API subscription syncing with date object conversions and improved checkout flow
- Enhance secret-chat API error handling with detailed status codes and messages
- Update service worker cache revision for production deployment
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
High Priority Fixes:
- Replace vulnerable regex patterns in IP validation with safe string operations
- Secure cookie parsing with Object.create(null) to prevent prototype pollution
- Enhanced file system operations with additional validation layers
- Add PostHog analytics payload size limits (32KB) and comprehensive PII sanitization
- Implement error message sanitization to prevent information leakage

Security Improvements:
- Safe IPv4/IPv6 validation without regex DoS vulnerability
- Cookie name/value validation with length limits and safe patterns
- Multi-layer path traversal protection for API endpoint resolution
- PII pattern detection and redaction for analytics
- Development vs production error handling with safe messaging
- ESLint security rule compliance with appropriate exemptions for validated cases

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
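The Object.create(null) cookie-parsing hardening mentioned above can be sketched as follows. This is a minimal illustration only; the function name, the token pattern, and the length limit are assumptions, not the PR's actual code:

```typescript
// Minimal sketch of prototype-pollution-safe cookie parsing (illustrative;
// names and limits are assumptions, not the PR's actual implementation).
const COOKIE_NAME_RE = /^[a-zA-Z0-9!#$%&'*+\-.^_`|~]+$/; // RFC 6265 token chars
const MAX_PAIR_LEN = 4096;

function parseCookies(header: string): Record<string, string> {
  // Object.create(null) yields an object with no prototype, so a cookie
  // named "__proto__" becomes a plain own property instead of polluting
  // Object.prototype.
  const cookies: Record<string, string> = Object.create(null);
  for (const pair of header.split(';')) {
    if (pair.length > MAX_PAIR_LEN) continue; // bounded input
    const eq = pair.indexOf('=');
    if (eq < 0) continue;
    const name = pair.slice(0, eq).trim();
    const value = pair.slice(eq + 1).trim();
    if (!COOKIE_NAME_RE.test(name)) continue; // reject unsafe names
    cookies[name] = value;
  }
  return cookies;
}
```

Because the result has a null prototype, lookups like `cookies['toString']` return only values actually parsed from the header, never inherited functions.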
…ration limits
- Updated regex patterns for sanitizing metadata, navigation, images, stylesheets, scripts, fonts, and meta tags to prevent potential vulnerabilities.
- Implemented iteration limits to avoid catastrophic backtracking in regex operations.
- Added validation checks for extracted URLs and text to ensure safety and compliance with length restrictions.

This commit addresses security concerns and improves the robustness of HTML content extraction.
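The iteration-limit technique can be sketched like this (a hypothetical helper; the name, pattern, and cap are assumptions, not the commit's actual code):

```typescript
// Illustrative sketch of an iteration-limited sanitizer loop. Repeatedly
// applying a replace until a fixed point is reached, but never more than
// MAX_ITERATIONS times, bounds worst-case work on adversarial input.
const MAX_ITERATIONS = 100;

function stripTags(html: string, tag: string): string {
  // Pattern avoids nested quantifiers, which are the usual source of
  // catastrophic backtracking.
  const re = new RegExp(`<${tag}\\b[^>]*>[\\s\\S]*?</${tag}>`, 'gi');
  let out = html;
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const next = out.replace(re, '');
    if (next === out) break; // fixed point reached; nothing left to strip
    out = next;
  }
  return out;
}
```

The cap turns an unbounded "replace until stable" loop into one with a predictable upper bound, trading completeness on pathological inputs for guaranteed termination.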
- Resolved CORS configuration conflict in api-dev-server.ts using secure whitelist approach
- Resolved git provider detection conflict in lib/deployment/netlify.ts using comprehensive URL parsing
- Fixed regex escape character issue in netlify.ts for security compliance

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
**HIGH RISK - CORS Misconfiguration Fixed:**
- Separate trusted origins from allowed origins in api-dev-server.ts
- Only enable credentials for explicitly trusted domains
- Prevent credential hijacking via dynamic origin setting

**MEDIUM RISK - URL Validation Bypass Fixed:**
- Replace vulnerable substring matching with secure hostname validation
- Use proper URL parsing to prevent domain spoofing attacks
- Affected files: netlify.ts and vercel.ts deployment services

**MEDIUM RISK - Information Exposure Prevention:**
- Enhanced error sanitization in both development and production modes
- Remove ALL sensitive paths, environment variables, credentials from error messages
- Stricter character limits and complete information sanitization

Security improvements protect against:
- Credential theft via CORS misconfiguration
- Domain spoofing attacks (evil.com/github.com bypasses)
- Internal system information disclosure through error messages

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
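The substring-matching bypass and its fix can be illustrated with a small sketch (the helper name and policy are assumptions, not the code in netlify.ts/vercel.ts):

```typescript
// Illustrative hostname-based URL validation. A naive check like
// url.includes('github.com') matches "https://evil.com/github.com";
// parsing the URL and comparing hostnames closes that bypass.
function isTrustedHost(rawUrl: string, trustedHosts: string[]): boolean {
  let parsed: URL;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return false; // not a valid absolute URL
  }
  if (parsed.protocol !== 'https:') return false; // require TLS
  // Exact hostname match, or a subdomain of a trusted host.
  return trustedHosts.some(
    (host) => parsed.hostname === host || parsed.hostname.endsWith(`.${host}`),
  );
}
```

Note that `endsWith('.github.com')` also rejects lookalikes such as `github.com.evil.com`, because only the registrable suffix is compared.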
- Fix Promise being passed to Convex in EnhancedChatInterface.tsx by properly consuming textStream
- Fix 404 error on tRPC billing endpoint by correcting URL path to /hono/trpc/
- Add robust array checks to prevent Se.map undefined errors
- Improve metadata handling with proper values instead of undefined
- Enhanced error handling and logging for tRPC requests

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Take the more secure origin validation from the main branch, which includes additional isValidOrigin checks for both trusted and allowed origins.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace `existsSync` with `statSync` for safer file existence checks in API endpoints.
- Improve error handling for non-file responses and inaccessible directories.
- Update CORS handling in `create-checkout-session` to validate origins more robustly.
- Introduce helper functions for validating environment variables in `hono-polar` API.
- Refactor input sanitization and validation logic in various components for enhanced security.
- Clean up unused imports and optimize component structures across multiple files.
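The `statSync`-based check, combined with the path-confinement guard mentioned elsewhere in this PR, might look roughly like this (function names and structure are assumptions for illustration, not the actual endpoint code):

```typescript
// Illustrative sketch: unlike a bare existsSync, statSync distinguishes
// regular files from directories and surfaces missing/inaccessible paths
// as exceptions we can treat uniformly as "not servable".
import { statSync } from 'node:fs';
import * as path from 'node:path';

// Confine a request path to baseDir (directory-traversal guard); returns
// the resolved absolute path, or null if it escapes baseDir.
function resolveWithin(baseDir: string, requestPath: string): string | null {
  const root = path.resolve(baseDir);
  const resolved = path.resolve(root, `.${path.sep}${requestPath}`);
  return resolved === root || resolved.startsWith(root + path.sep) ? resolved : null;
}

function isServableFile(baseDir: string, requestPath: string): boolean {
  const resolved = resolveWithin(baseDir, requestPath);
  if (resolved === null) return false;
  try {
    return statSync(resolved).isFile(); // throws if missing or inaccessible
  } catch {
    return false;
  }
}
```

Splitting the traversal guard into a pure `resolveWithin` helper keeps the security-relevant path logic testable without touching the filesystem.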
- Update analytics configuration to enable based on the presence of the PostHog API key.
- Improve IP hashing method for better privacy using SHA-256.
- Refine IP validation logic with comprehensive regex for IPv6 support.
- Enhance error responses in API endpoints to include decoded endpoint information.
- Implement structured logging for API requests and errors, ensuring sensitive data is scrubbed.
- Update README to reflect enhanced security features and request timeout settings.
- Add new environment variables for Sentry error monitoring and privacy consent management.
- Optimize dependency versions in package.json and bun.lock for improved stability.
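The SHA-256 IP-hashing approach can be sketched as follows (the salt environment variable and truncation length are assumptions, not the PR's actual values):

```typescript
// Illustrative SHA-256 IP anonymization. A salted (keyed) hash prevents the
// trivial reversal that plain base64 encoding allows, and makes offline
// rainbow-table lookups impractical without the salt.
import { createHash } from 'node:crypto';

function anonymizeIp(ip: string, salt = process.env.IP_HASH_SALT ?? 'dev-salt'): string {
  return createHash('sha256').update(`${salt}:${ip}`).digest('hex').slice(0, 32);
}
```

The output is deterministic per salt, so the same client still maps to the same identifier for rate limiting and analytics, without storing the raw address.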
- Simplified regex patterns for email and phone number validation.
- Improved handling of sensitive headers in the scrubHeaders function.
- Updated error sanitization methods to prevent sensitive data leaks.
- Enhanced URL validation patterns for GitHub links.
- Refined object sanitization to prevent prototype pollution and ensure safe handling of sensitive fields.
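A `scrubHeaders`-style helper might look like this minimal sketch (the sensitive-header list and redaction marker are assumptions; the real function in this PR may differ):

```typescript
// Illustrative header scrubbing for logs: redact values of headers that
// commonly carry credentials, pass everything else through unchanged.
const SENSITIVE_HEADERS = new Set(['authorization', 'cookie', 'set-cookie', 'x-api-key']);

function scrubHeaders(headers: Record<string, string>): Record<string, string> {
  const scrubbed: Record<string, string> = Object.create(null); // no prototype keys
  for (const [name, value] of Object.entries(headers)) {
    // Header names are case-insensitive, so normalize before the lookup.
    scrubbed[name] = SENSITIVE_HEADERS.has(name.toLowerCase()) ? '[REDACTED]' : value;
  }
  return scrubbed;
}
```

Scrubbing before the log call (rather than inside the logger) keeps the sanitization point visible at every call site that handles request data.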
…for credentials transfer

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
- Resolved IP validation function conflicts by choosing the newer, more robust implementation
- Resolved cookie parsing conflicts by keeping enhanced prototype pollution protection
- Resolved file existence check conflicts by using the safer validation approach
- Resolved HTML sanitization conflicts by keeping the sanitize-html library approach
- All conflicts resolved while maintaining security best practices

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
…m our branch

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
…ranch

Merged security enhancements while preserving functionality:
- Enhanced error sanitization with comprehensive path and credential redaction
- Improved CORS handling, enabling credentials only for trusted origins
- Upgraded IP hashing from base64 to SHA-256 for better anonymization
- Tightened error message length limits and expanded error pattern matching
- More comprehensive PII detection and sanitization patterns
- Regenerated bun.lock to resolve dependency conflicts

All security improvements from dev-branch have been prioritized while maintaining existing functionality from the qoder branch.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
|---|---|---|---|---|---|
| 20372498 | Triggered | Generic High Entropy Secret | 72993ac | .env.deployment.template | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn here the best practices.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Walkthrough

This PR introduces broad security hardening, privacy consent infrastructure, Polar billing integration, deployment API/type refactors, safer parsing/sanitization across UI and server, a new /api/success endpoint, and multiple UX components (privacy banner, upgrade modal). It updates environment templates, package dependencies, and various APIs to use stricter validation, safe headers, IP extraction, and robust error sanitization.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant C as Client (Web)
  participant S as /api/success
  participant A as Auth (verifyAuth)
  participant B as Backend (/api/get-subscription)
  C->>S: POST /api/success { userId }
  alt Authorization header present
    S->>A: verifyAuth(headers)
    A-->>S: user or null
    alt Auth failed
      S-->>C: 401 Unauthorized (JSON)
    else Auth OK
      alt userId mismatch
        S-->>C: 403 Forbidden (JSON)
      else
        S->>B: GET /api/get-subscription (forward auth)
        alt OK
          B-->>S: { planId, status }
          S-->>C: 200 { success: true, planId, status }
        else Failure
          S-->>C: 200 { success: true, planId: "free", status: "none" }
        end
      end
    end
  else No Authorization
    S->>B: GET /api/get-subscription (no auth)
    alt OK
      B-->>S: { planId, status }
      S-->>C: 200 { success: true, planId, status }
    else Failure
      S-->>C: 200 { success: true, planId: "free", status: "none" }
    end
  end
  note over S: Errors are sanitized before logging
```
```mermaid
sequenceDiagram
  autonumber
  participant U as User
  participant Btn as PrivacyAwareSignInButton
  participant PC as PrivacyConsentBanner
  participant Clerk as Clerk Modal
  U->>Btn: Click (modal mode)
  Btn->>PC: Show banner
  PC-->>Btn: onConsentChange(consented)
  alt consented
    Btn->>Clerk: openSignIn(appearance, redirects)
    Clerk-->>U: Sign-in modal
  else declined
    Btn-->>U: Banner closed, no modal
  end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
⚠️ Review ran into problems: errors were encountered while retrieving linked issues (1 error).
Claude finished @otdoges's task — View job

PR Code Review ✅

Comprehensive review completed for this security-focused PR resolving merge conflicts and enhancing security across 53 files.

Review Checklist:
✅ Security Improvements Assessment

🛡️ Excellent Security Enhancements

GitGuardian Alert - FALSE POSITIVE

The detected "secret" in

🔒 CORS Protection (
```tsx
 */

import React, { useState } from 'react';
import { SignUp, useClerk } from '@clerk/clerk-react';
```

Check failure (Code scanning / ESLint): Disallow unused variables (Error)
```tsx
import React, { useState } from 'react';
import { SignUp, useClerk } from '@clerk/clerk-react';
import { motion, AnimatePresence } from 'framer-motion';
```

Check failure (Code scanning / ESLint): Disallow unused variables (Error)
```tsx
import React, { useState } from 'react';
import { SignUp, useClerk } from '@clerk/clerk-react';
import { motion, AnimatePresence } from 'framer-motion';
import PrivacyConsentBanner, { useSignupPrivacyConsent } from '../PrivacyConsentBanner';
```

Check failure (Code scanning / ESLint): Disallow unused variables (Error)
```tsx
import { SignUp, useClerk } from '@clerk/clerk-react';
import { motion, AnimatePresence } from 'framer-motion';
import PrivacyConsentBanner, { useSignupPrivacyConsent } from '../PrivacyConsentBanner';
import { Shield, Sparkles, Code, Zap } from 'lucide-react';
```

Check failure (Code scanning / ESLint): Disallow unused variables (Error)
```tsx
  onComplete?: () => void;
}

export function EnhancedSignUp({ redirectUrl = '/chat', onComplete }: EnhancedSignUpProps) {
```

Check failure (Code scanning / ESLint): Disallow unused variables (Error)
```tsx
};

// Utility for sanitizing strings outside of React rendering
```

Check warning (Code scanning / ESLint): Fast refresh only works when a file only exports components. Use a new file to share constants or functions between components. (Warning)
```diff
     setFetchStatus("idle");
   }
-}, [resolvedPlan.id, user?.id, user?.primaryEmailAddress?.emailAddress]);
+}, [resolvedPlan.id, user?.id, user?.primaryEmailAddress?.emailAddress, getToken, planPeriod]);
```

Check warning (Code scanning / ESLint): verifies the list of dependencies for Hooks like useEffect and similar (Warning)
```ts
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)(?:\/.*)?$/,
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)\.git$/,
/^git@github\.com:([^/]+)\/([^/]+)\.git$/,
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)(?:\/.*)?$/,
```

Check failure (Code scanning / ESLint): Detects potentially unsafe regular expressions, which may take a very long time to run, blocking the event loop. (Error)
```ts
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)\.git$/,
/^git@github\.com:([^/]+)\/([^/]+)\.git$/,
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)(?:\/.*)?$/,
/^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)\.git$/,
```

Check failure (Code scanning / ESLint): Detects potentially unsafe regular expressions, which may take a very long time to run, blocking the event loop. (Error)
```tsx
export default Sentry.withErrorBoundary(Pricing, {
  fallback: ({ resetError }) => (
const PricingErrorFallback = ({ resetError }: { resetError: () => void }) => (
```

Check warning (Code scanning / ESLint): Fast refresh only works when a file only exports components. Move your component(s) to a separate file. (Warning)
Actionable comments posted: 33
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (35)
src/lib/webcontainer-templates.ts (2)
1396-1401: Fix Biome parse error by switching to "satisfies" pattern instead of a left-hand Record annotation

Biome reports: expected ")" but found "}" at Line 1396. This is a common parser hiccup around generic annotations on the LHS (especially in files that may be parsed with JSX rules). Avoid the LHS type and use "satisfies" to keep strong typing and eliminate the parse error.
Apply this diff:
```diff
-export const templates: Record<string, () => WebContainerTemplate> = {
-  nextjs: createNextJsTemplate,
-  basic: createBasicExpressTemplate,
-  landing: createLandingPageTemplate,
-  portfolio: createPortfolioTemplate,
-};
+export const templates = {
+  nextjs: createNextJsTemplate,
+  basic: createBasicExpressTemplate,
+  landing: createLandingPageTemplate,
+  portfolio: createPortfolioTemplate,
+} satisfies Record<string, () => WebContainerTemplate>;
```
1396-1403: Preserve TemplateKey as a literal union (not string | number | symbol)

By annotating templates as Record<string, …> on the LHS, typeof templates becomes an index-signature type, so keyof typeof templates widens to string | number | symbol. That weakens type safety across the public API. Using the "satisfies" pattern keeps literal keys (nextjs | basic | landing | portfolio), so TemplateKey remains the precise union while still enforcing the value types.
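The widening behavior described here can be reproduced in a standalone snippet (names are invented for illustration; this is not the project's templates map):

```typescript
// Minimal illustration of why `satisfies` preserves literal keys while a
// left-hand Record annotation widens them.
type Factory = () => string;

// LHS annotation: the variable's type IS Record<string, Factory>,
// so keyof typeof widened is just the index-signature key type.
const widened: Record<string, Factory> = { a: () => 'a' };
type WidenedKey = keyof typeof widened; // string (no literal keys survive)

// satisfies: the object keeps its inferred literal shape, while the
// compiler still checks it against Record<string, Factory>.
const literal = { a: () => 'a', b: () => 'b' } satisfies Record<string, Factory>;
type LiteralKey = keyof typeof literal; // "a" | "b"

// A function constrained to LiteralKey rejects unknown keys at compile time.
function make(key: LiteralKey): string {
  return literal[key]();
}
```

Calling `make('c')` fails to compile under the `satisfies` version, which is exactly the type safety the Record annotation loses.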
No further changes are needed if you adopt the diff above; TemplateKey will correctly infer as keyof typeof templates (a union of the actual keys). If you also want immutability, append as const before satisfies:
```diff
-} satisfies Record<string, () => WebContainerTemplate>;
+} as const satisfies Record<string, () => WebContainerTemplate>;
```

src/utils/performance-test.ts (2)
37-49: Prevent crashes where PerformanceObserver is unavailable; also avoid dropping 0-duration entries

Instantiating `PerformanceObserver` without support will throw. Additionally, `||` treats `0` as falsy, replacing valid zero durations with `startTime`. Use `??`.

Apply this diff:

```diff
-    const observer = new PerformanceObserver((list) => {
-      for (const entry of list.getEntries()) {
-        this.addMetric(`${type}-${entry.name}`, entry.duration || entry.startTime);
-      }
-    });
-
-    try {
-      observer.observe({ entryTypes: [type] });
-      this.observers.set(type, observer);
-    } catch (error) {
-      console.warn(`Failed to observe ${type}:`, error);
-    }
+    if (typeof PerformanceObserver === 'function') {
+      const observer = new PerformanceObserver((list) => {
+        for (const entry of list.getEntries()) {
+          this.addMetric(`${type}-${entry.name}`, entry.duration ?? entry.startTime);
+        }
+      });
+      try {
+        observer.observe({ entryTypes: [type] });
+        this.observers.set(type, observer);
+      } catch (error) {
+        console.warn(`Failed to observe ${type}:`, error);
+      }
+    } else {
+      console.warn(`[PerformanceMonitor] PerformanceObserver not supported; skipping "${type}" monitoring.`);
+    }
```
94-105: Fix units for memory metrics (currently mislabeled as ms)

Memory values (`usedJSHeapSize`) are bytes, but `logReport` prints ms. This misleads consumers of the report.

Apply this diff:

```diff
-  const report = this.getReport();
-  console.group('🚀 Performance Report');
-
-  for (const [metric, stats] of Object.entries(report)) {
-    console.log(`${metric}:`, {
-      count: stats.count,
-      avg: `${stats.avg.toFixed(2)}ms`,
-      min: `${stats.min.toFixed(2)}ms`,
-      max: `${stats.max.toFixed(2)}ms`
-    });
-  }
+  const report = this.getReport();
+  console.group('🚀 Performance Report');
+  const asMs = (v: number) => `${v.toFixed(2)}ms`;
+  const asMiB = (v: number) => `${(v / (1024 * 1024)).toFixed(2)} MiB`;
+  for (const [metric, stats] of Object.entries(report)) {
+    const isMemory = metric.startsWith('memory-');
+    console.log(`${metric}:`, {
+      count: stats.count,
+      avg: isMemory ? asMiB(stats.avg) : asMs(stats.avg),
+      min: isMemory ? asMiB(stats.min) : asMs(stats.min),
+      max: isMemory ? asMiB(stats.max) : asMs(stats.max),
+    });
+  }
```

src/lib/github-service.ts (1)
254-265: Only fall back to create on 404; rethrow other errors

Bare `catch {}` treats all failures as "file not found," causing a noisy second API call and masking permission/network issues. Discriminate 404 (Not Found) from other errors; rethrow non-404s.

Apply this diff:

```diff
-      } catch {
-        // File might not exist, create it instead
-        logger.info(`File ${file.path} not found, creating new file`, { filePath: file.path, action: 'create' });
-      }
+      } catch (e) {
+        // Only fallback on 404; rethrow others to avoid masking auth/network errors
+        const msg = e instanceof Error ? e.message : String(e);
+        const notFound = /\b404\b/.test(msg);
+        if (notFound) {
+          logger.info(`File ${file.path} not found, creating new file`, { filePath: file.path, action: 'create' });
+        } else {
+          logger.warn('Unable to fetch file SHA for update; aborting update', { filePath: file.path, error: msg });
+          throw e;
+        }
+      }
```

src/lib/error-handler.ts (1)
311-315: Fix parameter mismatch in `triggerRetry` call

The `onClick` handler at src/lib/error-handler.ts:313 incorrectly passes `error` to the now-parameterless `triggerRetry()` method, causing a compile-time error. Update as follows:

```diff
--- a/src/lib/error-handler.ts
+++ b/src/lib/error-handler.ts
@@ -311,5 +311,5 @@
       action: error.isRetryable ? {
         label: 'Retry',
-        onClick: () => this.triggerRetry(error)
+        onClick: () => this.triggerRetry()
       } : undefined
     });
```

• No other invocations of `triggerRetry` with arguments were found in the repository.

src/lib/search-service.ts (4)
216-218: Restricted header: conditionally set User-Agent only on the server

User-Agent is also forbidden in browsers. Set it only when running server-side.

```diff
-      fetch(url, {
-        method: 'GET',
-        headers: {
-          'User-Agent': 'ZapDev Website Analyzer/1.0',
-          'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-        },
-      }),
+      (() => {
+        const headers: HeadersInit = {
+          'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
+        };
+        if (typeof window === 'undefined') {
+          (headers as Record<string, string>)['User-Agent'] = 'ZapDev Website Analyzer/1.0';
+        }
+        return fetch(url, { method: 'GET', headers });
+      })(),
```
199-207: Prevent client-side invocation of `analyzeWebsite`

I've confirmed that `analyzeWebsite` is being called directly in a React component on the client, which will break on cross-origin requests (and expose CORS/security concerns). This helper should only run in a server (Node/SSR) context.

• src/components/WebsiteCloneDialog.tsx (around lines 311–317):
  - Remove the direct calls to `analyzeWebsite()` in your client component (both the `onKeyDown` and `onClick`)
  - Instead, have the component invoke a server-side API route (e.g. /api/analyze-website) via `fetch`, which then calls `analyzeWebsite` under the hood.

• src/lib/search-service.ts (lines 199–207):
  - Add an explicit guard at the top of `analyzeWebsite`:

```ts
if (typeof window !== 'undefined') {
  throw new Error('analyzeWebsite is server-only; use via your API endpoint');
}
```

  - Document in JSDoc (and project README) that this function must only be invoked in SSR/Node.

These changes will ensure cross-origin requests aren't attempted in the browser and clearly enforce the server-only boundary.
5-5: Apply the app's sanitized logger across all modules

Destructuring `logger` from `Sentry` couples our code to Sentry internals and bypasses the central sanitization pipeline. We should consistently import and use the shared, PII-safe logger from @/lib/error-handler instead of `Sentry.logger`.

Modules where `const { logger } = Sentry;` appears:
- src/lib/firecrawl.ts:4
- src/lib/ai.ts:21
- src/lib/search-service.ts:5
- src/lib/github-service.ts:6
- src/lib/github-token-storage.ts:9
- src/lib/api-key-manager.ts:4
- src/lib/api-key-validator.ts:8
- src/lib/ai-monitoring.ts:27
- src/components/GitHubIntegration.tsx:29
- src/components/ChatInterface.tsx:46
- src/lib/ai-utils.ts:35
In each file, replace:

```diff
-import * as Sentry from '@sentry/react';
+import * as Sentry from '@sentry/react';
+import { logger } from '@/lib/error-handler';
@@
-const { logger } = Sentry;
+// Use app-level logger for consistency and PII-safe logging
```
141-146: Remove restricted header "Accept-Encoding"

The `Accept-Encoding` header is automatically managed by HTTP clients (browsers and many fetch libraries) and cannot be reliably set in browser environments; it will be ignored or may cause errors. Please remove it from the request headers.

Locations to update:
- File: src/lib/search-service.ts, Lines: 142–143

Suggested diff:

```diff
       headers: {
         'Accept': 'application/json',
-        'Accept-Encoding': 'gzip',
         'X-Subscription-Token': this.apiKey,
       },
```

src/test-ai-implementation.ts (1)
435-446: ESM bug: require.main is undefined in type: module; use an ESM-aware direct-run check

In an ESM project ("type": "module"), require is not defined. This will crash when running directly via tsx/node.

Apply:

```diff
+import { pathToFileURL } from 'url';
@@
-if (typeof window === 'undefined' && require.main === module) {
-  const tester = new AITester();
-  tester.runAllTests()
-    .then(() => {
-      console.log('All tests completed');
-      process.exit(0);
-    })
-    .catch((error) => {
-      console.error('Test suite failed:', error);
-      process.exit(1);
-    });
-}
+if (typeof window === 'undefined') {
+  const isDirectRun = process.argv[1] && import.meta.url === pathToFileURL(process.argv[1]).href;
+  if (isDirectRun) {
+    const tester = new AITester();
+    tester.runAllTests()
+      .then(() => {
+        console.log('All tests completed');
+        process.exit(0);
+      })
+      .catch((error) => {
+        console.error('Test suite failed:', error);
+        process.exit(1);
+      });
+  }
+}
```

src/components/WebsiteCloneDialog.tsx (3)
206-214: Fix potential stale-closure when marking the failing step in catch

`analyzeWebsite` captures `steps` and `currentStep` from the render that created the callback. If an error occurs after `setCurrentStep(...)`, the catch block may still see the older values and mark the wrong step as `error`. Use refs to hold the live values or determine the failing step id explicitly.

Minimal change within this hunk:

```diff
-const currentStepData = Array.isArray(steps) && steps.length > currentStep && currentStep >= 0 ? steps.at(currentStep) : null;
+const currentStepData = Array.isArray(steps) && steps.length > currentStep && currentStep >= 0
+  ? steps.at(currentStep)
+  : null;
```

Recommended full fix (outside hunk; add refs and keep them in sync):

```tsx
// near other state
const stepsRef = React.useRef<CloneStep[]>([]);
const currentStepRef = React.useRef(0);

// whenever you set steps/currentStep:
setSteps(analysisSteps);
stepsRef.current = analysisSteps;
setCurrentStep(n);
currentStepRef.current = n;

// in this catch, use refs:
const liveSteps = stepsRef.current;
const liveIndex = currentStepRef.current;
const currentStepData = Array.isArray(liveSteps) && liveSteps.length > liveIndex && liveIndex >= 0
  ? liveSteps.at(liveIndex)
  : null;
```
401-413: Sanitize and safely render external analysis fields (title, description, url, lists, and color swatches)

`analysis` is derived from crawling arbitrary sites and must be treated as untrusted. Render through `SafeText` and validate color strings before using them in inline styles.

Targeted changes (illustrative; apply similarly to other display sites in this file):

```diff
-<h3 className="text-lg font-semibold text-gradient-static mb-2">
-  {analysis.title || 'Website Analysis'}
-</h3>
+<h3 className="text-lg font-semibold text-gradient-static mb-2">
+  <SafeText>{analysis.title || 'Website Analysis'}</SafeText>
+</h3>
-<p className="text-muted-foreground text-sm mb-3">
-  {analysis.description || 'No description available'}
-</p>
+<p className="text-muted-foreground text-sm mb-3">
+  <SafeText>{analysis.description || 'No description available'}</SafeText>
+</p>
-<span>{analysis.url}</span>
+<span><SafeText>{analysis.url}</SafeText></span>
-{analysis.technologies.map((tech, index) => (
-  <Badge key={index} variant="secondary" className="text-xs">
-    {tech}
-  </Badge>
-))}
+{analysis.technologies.map((tech, index) => (
+  <Badge key={index} variant="secondary" className="text-xs">
+    <SafeText>{tech}</SafeText>
+  </Badge>
+))}
```

Color validation helper (new util, outside hunk):

```ts
const isSafeColor = (c: string) =>
  /^#([0-9a-f]{3}|[0-9a-f]{6})$/i.test(c) ||
  /^rgb(a)?\(\s*\d{1,3}\s*,\s*\d{1,3}\s*,\s*\d{1,3}(?:\s*,\s*(0|0?\.\d+|1))?\s*\)$/i.test(c) ||
  /^hsl(a)?\(\s*\d{1,3}\s*,\s*\d{1,3}%\s*,\s*\d{1,3}%(?:\s*,\s*(0|0?\.\d+|1))?\s*\)$/i.test(c);

// usage: style={{ backgroundColor: isSafeColor(color) ? color : '#000' }}
```

Also applies to: 424-449, 462-468, 479-485, 496-501, 513-521
126-134: Validate protocol for user-entered URLs before crawling

`new URL(url)` accepts schemes like `javascript:` or `data:`. Restrict to `http:` and `https:` to avoid unsafe protocols being handed to the crawler.

Example addition right after `new URL(url)`:

```ts
const parsed = new URL(url);
if (!['http:', 'https:'].includes(parsed.protocol)) {
  throw new Error('Only http(s) URLs are allowed');
}
```

src/pages/SecretChat.tsx (3)
379-385: Render message content with SafeText to prevent XSS

User/assistant messages are untrusted. Render via `SafeText` with `whitespace-pre-wrap` styling instead of interpolating raw strings.

Apply this diff:

```diff
-<p className="whitespace-pre-wrap">{message.content}</p>
+<SafeText className="whitespace-pre-wrap">{message.content}</SafeText>
```

And add the import at the top:

```tsx
import { SafeText } from "@/components/ui/SafeText";
```
200-207: Sanitize inputs before storing/sending to backend to meet project security policy

Per guidelines, sanitize user input before storage and display. Apply a sanitizer prior to `addMessage` and before sending to the API.

Apply this diff:

```diff
-const messageToSend = currentMessage;
+import { sanitizeText } from "@/utils/text-sanitizer"; // or from the unified sanitizer you adopt
+const messageToSend = sanitizeText(currentMessage);
```

```diff
       await addMessage({
         chatId: currentChatId,
         content: messageToSend,
         role: "user",
       });
```

```diff
       await addMessage({
         chatId: currentChatId,
-        content: assistantMessage.content,
+        content: sanitizeText(assistantMessage.content),
         role: "assistant",
         metadata: assistantMessage.metadata,
       });
```

Also applies to: 242-249
24-47: Ensure server-side authentication and ownership checks in all Convex functions handling secrets, API keys, and AI rate limits

Our automated analysis identified several Convex queries and mutations without any reference to `ctx.auth` or ownership verification. Per our security guidelines ("All Convex functions must verify authentication and enforce userId ownership checks"), these functions must be hardened to prevent unauthorized access.
- convex/secretChats.ts
- getSecretChats
- getSecretChat
- createSecretChat
- addSecretMessage
- updateSecretChatTitle
- deleteSecretChat
- convex/secretAccess.ts
- hasSecretAccess
- isSecretSetupComplete
- setupSecretAccess
- verifySecretPassword
- revokeSecretAccess
- convex/secretApiKeys.ts
- getUserApiKeys
- setApiKey
- getDecryptedApiKey
- updateApiKeyLastUsed
- deleteApiKey
- toggleApiKey
- convex/aiRateLimit.ts
- aiRateLimitState
- checkAIRateLimit
- getAIUsageStats
- cleanupAIRateLimits
Suggested guard pattern for each query/mutation:
export const someFunction = query({...}); export const someFunction = mutation({...}); // Inside handler: const userId = ctx.auth.userId; if (!userId) { throw new Error("Unauthorized"); } // When fetching or mutating records, always filter by ownerId === userIdPlease update these definitions to use
ctx.auth.userIdand enforce ownership before accessing or modifying any user-specific data.convex/messages.ts (1)
803-810: Do not expose message IDs in auth errors

These errors leak whether a message ID exists. For unauthorized access, respond with a generic message (and log details internally).

```diff
-    if (!chat || chat.userId !== identity.subject) {
-      throw new Error(`Access denied for message ${messageId}`);
-    }
+    if (!chat || chat.userId !== identity.subject) {
+      throw new Error('Access denied');
+    }
@@
-    if (message.userId !== identity.subject) {
-      throw new Error(`You can only delete your own messages (${messageId})`);
-    }
+    if (message.userId !== identity.subject) {
+      throw new Error('Access denied');
+    }
```

lib/deployment/types.ts (1)
167-199: Critical: Secrets reintroduced into a “non-sensitive” config surface (violates SECURITY NOTE).ZapdevDeploymentConfig currently embeds accessToken under netlify/vercel despite the earlier separation into ZapdevDeploymentSecrets. This makes accidental serialization/logging of secrets much more likely and contradicts the file’s own guidance and the Netlify/Vercel config comments above.
Apply this diff to remove tokens from ZapdevDeploymentConfig and introduce an explicit with-secrets type for internal wiring:
```diff
-// Configuration for zapdev deployment service (non-sensitive settings)
-export interface ZapdevDeploymentConfig {
-  // Include sensitive deployment configuration
-  netlify: {
-    accessToken: string;
-    teamId?: string;
-  };
-  vercel: {
-    accessToken: string;
-    teamId?: string;
-  };
+// Configuration for zapdev deployment service (non-sensitive settings)
+export interface ZapdevDeploymentConfig {
+  // Platform config without secrets
+  netlify?: {
+    teamId?: string;
+  };
+  vercel?: {
+    teamId?: string;
+  };
   // Main domain for custom subdomains
   baseDomain: string; // "zapdev.link"
   ...
 }
+
+// Internal helper for runtime wiring only (do not serialize)
+export type ZapdevDeploymentConfigWithSecrets = {
+  config: ZapdevDeploymentConfig;
+  secrets: ZapdevDeploymentSecrets;
+};
```

Follow-up: Update api/domains.ts (deploymentConfig construction and ZapdevDeploymentManager initialization) to pass secrets separately rather than embedding them in config. I’ve included a proposed patch in that file’s comment.
api/create-checkout-session.ts (1)
94-94: Avoid exposing PII in the checkout URL query string
Embedding userId in the metadata search parameter exposes personally identifiable information via logs, referer headers, and analytics.

• Location:
api/create-checkout-session.ts, line 92
• Confirm your security policy on passing user identifiers in URLs.
• Refactor approach: issue a short-lived opaque token (e.g. JWT or nonce stored in KV/session) and only include that token in the query string. In your /hono/checkout-polar handler, resolve the token back to { userId, planId, period } server-side.

Minimal diff example:
```diff
-honoCheckout.searchParams.set('metadata', JSON.stringify({ userId: auth.userId, planId, period: billingPeriod }));
+// Persist metadata on server under oneTimeToken and only include the token in the URL
+honoCheckout.searchParams.set('token', oneTimeToken);
```

api/domains.ts (2)
91-108: Secrets in config object: avoid embedding accessToken; pass via secrets structure.

You’re putting NETLIFY_ACCESS_TOKEN and VERCEL_ACCESS_TOKEN inside deploymentConfig (a general config object). This increases the chance of accidental exposure and conflicts with the types’ intent.
Apply this diff to keep config non-sensitive and pass secrets separately:
```diff
-const deploymentConfig: ZapdevDeploymentConfig = {
+const deploymentConfig: ZapdevDeploymentConfig = {
   baseDomain: 'zapdev.link',
-  netlify: {
-    accessToken: process.env.NETLIFY_ACCESS_TOKEN || '',
-    teamId: process.env.NETLIFY_TEAM_ID,
-  },
-  vercel: {
-    accessToken: process.env.VERCEL_ACCESS_TOKEN || '',
-    teamId: process.env.VERCEL_TEAM_ID,
-  },
+  netlify: { teamId: process.env.NETLIFY_TEAM_ID },
+  vercel: { teamId: process.env.VERCEL_TEAM_ID },
   defaults: {
     platform: (process.env.DEFAULT_DEPLOYMENT_PLATFORM as DeploymentPlatform) || 'vercel',
     buildCommand: 'npm run build',
     outputDirectory: 'dist',
     nodeVersion: '18.x',
   },
 };

 // Initialize deployment manager
 let deploymentManager: ZapdevDeploymentManager;
 try {
-  deploymentManager = new ZapdevDeploymentManager({
-    config: deploymentConfig,
-    analytics: { track: analytics.track.bind(analytics) },
-    logger,
-  });
+  deploymentManager = new ZapdevDeploymentManager({
+    config: deploymentConfig,
+    secrets: {
+      netlify: { accessToken: process.env.NETLIFY_ACCESS_TOKEN || '', teamId: process.env.NETLIFY_TEAM_ID },
+      vercel: { accessToken: process.env.VERCEL_ACCESS_TOKEN || '', teamId: process.env.VERCEL_TEAM_ID },
+    },
+    analytics: { track: analytics.track.bind(analytics) },
+    logger,
+  });
```

This assumes the manager constructor accepts secrets separately (recommended). If not, I can help adjust its signature.
61-76: Log redaction: hash or truncate IPs before logging to align with PII minimization.

Current logs include raw IPs. The PR objectives mention “Better IP anonymization using SHA-256.” Adopt anonymization.
Apply this diff and import crypto at file top:
```diff
+import { createHash } from 'crypto';
 @@
 const logger = {
   info: (message: string, meta?: Record<string, unknown>) => {
     const timestamp = new Date().toISOString();
     console.log(`🟢 [${timestamp}] DOMAINS-API INFO: ${message}`, meta ? JSON.stringify(meta) : '');
   },
 @@
 };
+
+function anonymizeIp(ip: unknown, salt = process.env.LOG_SALT || ''): string {
+  const str = typeof ip === 'string' ? ip.split(',')[0].trim() : String(ip ?? '');
+  return createHash('sha256').update(str + '|' + salt).digest('hex').slice(0, 16); // 64-bit-ish prefix
+}
```

And here:
```diff
- logger.info('Domains API request', {
-   action,
-   method: req.method,
-   userAgent: req.headers['user-agent'],
-   ip: req.headers['x-forwarded-for'] || req.socket?.remoteAddress
- });
+ logger.info('Domains API request', {
+   action,
+   method: req.method,
+   userAgent: req.headers['user-agent'],
+   ip_hash: anonymizeIp(req.headers['x-forwarded-for'] || req.socket?.remoteAddress),
+ });
```

src/App.tsx (1)
59-63: /success should not be behind AuthGuard (breaks Stripe return flow for new/unauth’d users).

The Success page handles session_id and displays auth prompts/toasts on 401; guarding it prevents unauthenticated customers from completing post-checkout. Prior guidance only mandated protecting /chat and /settings.
Move /success to the public routes block.
Apply:
```diff
- <Route path="/success" element={
-   <AuthGuard>
-     <Success />
-   </AuthGuard>
- } />
+ <Route path="/success" element={<Success />} />
```

lib/deployment/manager.ts (1)
483-486: Fix inferred never[]: explicitly type the instructions array.

With strict TS, const instructions = [] infers never[], making subsequent push calls type errors. Declare the element type explicitly.

```diff
- const instructions = [];
+ const instructions: Array<{
+   type: string;
+   name: string;
+   value: string;
+   description: string;
+ }> = [];
```

src/components/LivePreview.tsx (1)
95-107: Critical: CSP frame-ancestors 'none' prevents the blob document from rendering in the iframe

The generated HTML includes frame-ancestors 'none', which forbids framing and will block the preview inside your own iframe. Replace it with 'self' (the blob inherits the app origin) or remove the directive.

Apply:

```diff
- <meta http-equiv="Content-Security-Policy" content="default-src 'none'; img-src 'self' data:; media-src 'none'; script-src 'none'; style-src 'self' 'unsafe-inline'; font-src 'self' data:; connect-src 'none'; frame-ancestors 'none'; base-uri 'self'; form-action 'none'">
+ <meta http-equiv="Content-Security-Policy" content="default-src 'none'; img-src 'self' data:; media-src 'none'; script-src 'none'; style-src 'self' 'unsafe-inline'; font-src 'self' data:; connect-src 'none'; frame-ancestors 'self'; base-uri 'self'; form-action 'none'">
```

src/hono-server.ts (2)
73-97: Clerk token verification uses wrong option key (jwtKey vs issuer), causing auth failures

@clerk/backend verifyToken expects { issuer, audience? }. Passing jwtKey will break verification and treat all tokens as invalid.
```diff
- const { verifyToken } = await import('@clerk/backend');
-
- const options: { jwtKey?: string; audience?: string } = { jwtKey: issuer };
- if (audience) options.audience = audience;
-
- const verified = await verifyToken(token, options) as { sub?: string; email?: string };
+ const { verifyToken } = await import('@clerk/backend');
+ const options: { issuer: string; audience?: string } = { issuer };
+ if (audience) options.audience = audience;
+ const verified = await verifyToken(token, options) as { sub?: string; email?: string };
```
288-297: IDOR risk: Customer portal allows arbitrary customerId via query param

getCustomerId returns event.req.query('customerId'), letting a user supply another customer’s ID. This is an access control flaw. Derive the customer ID from the authenticated user instead (e.g., via email/metadata lookup as in the /subscription route) and ignore query input.
```diff
- getCustomerId: async (event) => {
-   // Extract customer ID from query params or user context
-   const customerId = event.req.query('customerId');
-   return customerId || '';
- },
+ getCustomerId: async (c) => {
+   // Derive from authenticated user; do not trust client-provided IDs
+   const user = c.get('user');
+   if (!user) throw new Error('Authentication required');
+   const polar = getPolarClient();
+   const customers = await polar.customers.list({ email: user.email || undefined, limit: 10 });
+   const list = Array.isArray(customers)
+     ? customers
+     : (customers as { data?: Array<{ id: string; email?: string; metadata?: { userId?: string } }> }).data || [];
+   const match = list.find((cust) => cust.metadata?.userId === user.id || (user.email && cust.email === user.email));
+   if (!match) throw new Error('Customer not found for authenticated user');
+   return match.id;
+ },
```

If you want to reuse the logic from /subscription, extract getUserCustomerId to a top-level helper and reuse it here for consistency.
api/hono-polar.ts (4)
131-148: CORS: avoid Access-Control-Allow-Origin: '*' with credentials

You set Access-Control-Allow-Credentials: true and sometimes echo '*'. Browsers will reject this combo and it’s risky. Only echo a whitelisted origin and default to 403 (or omit CORS headers) for others.
```diff
- const origin = c.req.header('Origin') || '*';
+ const origin = c.req.header('Origin') || '';
  const allowedOrigins = ['http://localhost:8080', 'https://zapdev.link', 'https://www.zapdev.link'];
-
- if (origin === '*' || allowedOrigins.includes(origin)) {
-   c.header('Access-Control-Allow-Origin', origin);
- }
+ if (origin && allowedOrigins.includes(origin)) {
+   c.header('Access-Control-Allow-Origin', origin);
+ }
  c.header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  c.header('Access-Control-Allow-Headers', 'Content-Type, Authorization, X-Requested-With, Polar-Webhook-Signature');
- c.header('Access-Control-Allow-Credentials', 'true');
+ if (origin) c.header('Access-Control-Allow-Credentials', 'true');
```

Also consider adding http://localhost:5173 to allowedOrigins to match the dev host used elsewhere.
151-169: Bearer token not stripped before verifyClerkToken(), causing auth failures

verifyClerkToken expects a raw token. You’re passing the whole Authorization header.
```diff
-async function verifyClerkAuth(authHeader: string): Promise<{ id: string; email?: string } | null> {
+async function verifyClerkAuth(authHeader: string): Promise<{ id: string; email?: string } | null> {
   try {
+    const token = authHeader.startsWith('Bearer ') ? authHeader.slice(7).trim() : authHeader.trim();
     const issuer = process.env.CLERK_JWT_ISSUER_DOMAIN;
     const audience = process.env.CLERK_JWT_AUDIENCE;
     if (!issuer) {
       return null;
     }
-    const verified = await verifyClerkToken(authHeader, issuer, audience);
+    const verified = await verifyClerkToken(token, issuer, audience);
```
382-389: Runtime guard missing for POLAR_ACCESS_TOKEN in direct adapter path

You use a non-null assertion (!) here while other endpoints validate envs. Add a guard or reuse validatePolarEnvironment to fail fast with a clear error.
```diff
-app.get('/checkout-polar', Checkout({
-  accessToken: process.env.POLAR_ACCESS_TOKEN!,
+app.get('/checkout-polar', Checkout({
+  accessToken: (() => {
+    const t = process.env.POLAR_ACCESS_TOKEN;
+    if (!t) throw new Error('POLAR_ACCESS_TOKEN is required');
+    return t;
+  })(),
```
484-555: Webhook secret uses non-null assertion without validation

Mirror the env validation approach to avoid silent misconfig and insecure deployments.
```diff
-app.post('/webhooks', Webhooks({
-  webhookSecret: process.env.POLAR_WEBHOOK_SECRET!,
+app.post('/webhooks', Webhooks({
+  webhookSecret: (() => {
+    const s = process.env.POLAR_WEBHOOK_SECRET;
+    if (!s) throw new Error('POLAR_WEBHOOK_SECRET is required');
+    return s;
+  })(),
```

api/deploy.ts (3)
372-375: 500 responses leak internal error messages to clients.

Returning error.message violates the "Stronger error message sanitization" objective. Return a generic message plus a server-side errorId; log the full error with the id.
Apply this diff:
```diff
- return res.status(500).json({
-   error: 'Internal server error',
-   message: error instanceof Error ? error.message : String(error)
- });
+ const errorId = (globalThis.crypto?.randomUUID?.() ?? `${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`);
+ logger.error('API request failed', error, { errorId });
+ return res.status(500).json({
+   error: 'Internal server error',
+   errorId
+ });
```
317-323: Anonymize client IP in Deployment API logs

The raw client IP is still being logged, which exposes PII. Replace it with a SHA-256 hash (truncated to 16 hex characters) to meet the PR’s objective of “Better IP anonymization using SHA-256.”
• Location needing change:
api/deploy.ts:321 – currently logs ip: req.headers['x-forwarded-for'] || req.socket?.remoteAddress

• Apply this refactor:
```diff
- logger.info('Deployment API request', {
-   action,
-   method: req.method,
-   userAgent: req.headers['user-agent'],
-   ip: req.headers['x-forwarded-for'] || req.socket?.remoteAddress
- });
+ const ipRaw = (req.headers['x-forwarded-for'] as string) || req.socket?.remoteAddress || '';
+ const ipFirst = ipRaw.split(',')[0].trim();
+ const { createHash } = await import('node:crypto');
+ const ipHash = createHash('sha256').update(ipFirst).digest('hex').slice(0, 16);
+ logger.info('Deployment API request', {
+   action,
+   method: req.method,
+   userAgent: req.headers['user-agent'],
+   ipHash
+ });
```
285-289: Enforce strict CORS across all API endpoints

The PR’s goal is “Enhanced CORS protection with strict origin validation and credential handling,” yet we still have Access-Control-Allow-Origin: '*' in three handlers, leaving them wide open:
- api/deploy.ts (lines 285–289)
- api/domains.ts (lines 180–182)
- api/success.ts (lines 7–9)
Please replace the permissive headers in each with origin-allowlist logic. For example:
```diff
- // CORS headers
- res.setHeader('Access-Control-Allow-Origin', '*');
- res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
- res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
+ // CORS headers (strict origin validation)
+ const origin = Array.isArray(req.headers.origin) ? req.headers.origin[0] : req.headers.origin;
+ const allowedOrigins = (process.env.ALLOWED_ORIGINS || '')
+   .split(',')
+   .map(o => o.trim())
+   .filter(Boolean);
+ const isAllowed = origin && allowedOrigins.includes(origin);
+ if (isAllowed) {
+   res.setHeader('Access-Control-Allow-Origin', origin as string);
+   res.setHeader('Vary', 'Origin');
+   res.setHeader('Access-Control-Allow-Credentials', 'true');
+ } else {
+   res.setHeader('Access-Control-Allow-Origin', 'null');
+   res.setHeader('Vary', 'Origin');
+ }
+ res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
+ res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
```

• Document ALLOWED_ORIGINS as a comma-separated env var (e.g. https://app.zapdev.link,https://staging.zapdev.link).
• To support complex preflights, echo back Access-Control-Request-Headers via Access-Control-Allow-Headers.

Please apply this change in api/deploy.ts, api/domains.ts, and api/success.ts to fully lock down CORS.
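Since the same allowlist logic is needed in all three handlers, it could be centralized in one small helper instead of three copies. A minimal sketch, assuming ALLOWED_ORIGINS is the comma-separated env var described above (helper names are illustrative, not from the PR):

```typescript
// Parse a comma-separated ALLOWED_ORIGINS value into a Set for O(1) checks.
function parseAllowedOrigins(raw: string | undefined): Set<string> {
  return new Set(
    (raw ?? '')
      .split(',')
      .map((o) => o.trim().replace(/\/+$/, '')) // drop trailing slashes
      .filter((o) => /^https?:\/\//.test(o))    // keep only absolute http(s) origins
  );
}

// Returns the origin to echo back, or null when no ACAO header should be set.
function corsOriginFor(requestOrigin: string | undefined, allowed: Set<string>): string | null {
  return requestOrigin && allowed.has(requestOrigin) ? requestOrigin : null;
}
```

A handler would then set Access-Control-Allow-Origin (plus Vary: Origin and the credentials header) only when corsOriginFor returns a non-null value.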
src/components/EnhancedChatInterface.tsx (1)
462-471: Render search results safely and enforce http(s) for href.

Sanitize title/description and guard href against javascript: URLs.
Apply this diff:
```diff
- {searchResults.map((result, index) => (
+ {searchResults.map((result, index) => {
+   const safeHref = /^https?:\/\//i.test(result.url) ? result.url : '#';
+   return (
    <div key={index} className="p-3 border-b border-white/10 last:border-0">
-     <h4 className="font-medium text-sm">{result.title}</h4>
-     <p className="text-xs text-muted-foreground mt-1">{result.description}</p>
-     <a href={result.url} target="_blank" rel="noopener noreferrer"
+     <h4 className="font-medium text-sm">{sanitizeText(result.title)}</h4>
+     <p className="text-xs text-muted-foreground mt-1">{sanitizeText(result.description)}</p>
+     <a href={safeHref} target="_blank" rel="noopener noreferrer"
         className="text-xs text-blue-400 hover:text-blue-300 transition-colors">
-       {result.url}
+       {sanitizeText(result.url)}
      </a>
    </div>
- ))}
+   );
+ })}
```
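Note the diff assumes a sanitizeText helper that this PR does not define. React already escapes text children, so the helper is defense in depth rather than XSS-critical; a minimal sketch of what it might look like (name and behavior are assumptions, not from the codebase):

```typescript
// Hypothetical sanitizeText: strip ASCII control characters and cap length so
// hostile search results cannot smuggle odd payloads into the UI.
function sanitizeText(input: unknown, maxLength = 300): string {
  const s = typeof input === 'string' ? input : '';
  return s
    .replace(/[\u0000-\u001f\u007f]/g, '') // drop control characters
    .slice(0, maxLength)                   // bound rendered length
    .trim();
}
```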
♻️ Duplicate comments (1)
api/hono-polar.ts (1)
51-57: Server-side Convex URL variable mismatch with env template

getConvexClient reads CONVEX_URL, but env-template defines VITE_CONVEX_URL. Add CONVEX_URL (server-side) to the template or fall back to VITE_CONVEX_URL if present.
See proposed env-template.txt diff in a separate comment.
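The fallback half of that suggestion can be sketched as follows (function name is illustrative; only the env variable names come from the review):

```typescript
// Prefer the server-side CONVEX_URL and fall back to the Vite-exposed
// VITE_CONVEX_URL when only that is configured; fail fast otherwise.
function resolveConvexUrl(env: Record<string, string | undefined>): string {
  const url = env.CONVEX_URL ?? env.VITE_CONVEX_URL;
  if (!url) {
    throw new Error('CONVEX_URL (or VITE_CONVEX_URL) must be configured');
  }
  return url;
}
```

getConvexClient would call this with process.env, so a deployment that only sets VITE_CONVEX_URL still works while the template catches up.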
| - **Security Headers**: HSTS, Content Security Policy (CSP), Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy (COOP), Cross-Origin-Resource-Policy (CORP), and X-Content-Type-Options=nosniff | ||
| - **Request Timeout**: Configurable timeout protection (default: 30s)- **Graceful Shutdown**: Clean shutdown with analytics reporting |
Fix list formatting and punctuation in Security Headers and Request Timeout bullets
There’s a run-on bullet and minor punctuation issues that affect readability.
-**Security Headers**: HSTS, Content Security Policy (CSP), Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy (COOP), Cross-Origin-Resource-Policy (CORP), and X-Content-Type-Options=nosniff
-**Request Timeout**: Configurable timeout protection (default: 30s)- **Graceful Shutdown**: Clean shutdown with analytics reporting
+**Security Headers**: HSTS, Content Security Policy (CSP), Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy (COOP), Cross-Origin-Resource-Policy (CORP), and X-Content-Type-Options=nosniff
+**Request Timeout**: Configurable timeout protection (default: 30s)
+**Graceful Shutdown**: Clean shutdown with analytics reporting

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - **Security Headers**: HSTS, Content Security Policy (CSP), Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy (COOP), Cross-Origin-Resource-Policy (CORP), and X-Content-Type-Options=nosniff | |
| - **Request Timeout**: Configurable timeout protection (default: 30s)- **Graceful Shutdown**: Clean shutdown with analytics reporting | |
| - **Security Headers**: HSTS, Content Security Policy (CSP), Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy (COOP), Cross-Origin-Resource-Policy (CORP), and X-Content-Type-Options=nosniff | |
| - **Request Timeout**: Configurable timeout protection (default: 30s) | |
| - **Graceful Shutdown**: Clean shutdown with analytics reporting |
🤖 Prompt for AI Agents
In API-SERVER-README.md around lines 18 to 19, the bullet list has a run-on
entry and punctuation issues; split the combined "Request Timeout" and "Graceful
Shutdown" into separate bullets, ensure each bullet is a single sentence, add
missing punctuation (periods) where appropriate, and make the "Security Headers"
bullet comma-separated and consistently formatted (e.g., use full header names
and consistent casing); update the list so each item is on its own line with
proper punctuation for readability.
| // Main CORS logic | ||
| if (isOriginAllowed(requestOrigin)) { | ||
| res.setHeader('Access-Control-Allow-Origin', requestOrigin); | ||
| res.setHeader('Access-Control-Allow-Credentials', 'true'); | ||
| } else if (requestOrigin) { | ||
| res.setHeader('Access-Control-Allow-Origin', 'null'); | ||
| } |
🛠️ Refactor suggestion
Do not return Access-Control-Allow-Origin: 'null' for disallowed origins.
Setting ACAO to 'null' can inadvertently allow access from sandboxed documents (file://, data:). Simply omit ACAO for disallowed origins.
Apply this diff:
- } else if (requestOrigin) {
- res.setHeader('Access-Control-Allow-Origin', 'null');
- }
+ }

📝 Committable suggestion
| // Main CORS logic | |
| if (isOriginAllowed(requestOrigin)) { | |
| res.setHeader('Access-Control-Allow-Origin', requestOrigin); | |
| res.setHeader('Access-Control-Allow-Credentials', 'true'); | |
| } else if (requestOrigin) { | |
| res.setHeader('Access-Control-Allow-Origin', 'null'); | |
| } | |
| // Main CORS logic | |
| if (isOriginAllowed(requestOrigin)) { | |
| res.setHeader('Access-Control-Allow-Origin', requestOrigin); | |
| res.setHeader('Access-Control-Allow-Credentials', 'true'); | |
| } |
🤖 Prompt for AI Agents
In api/create-checkout-session.ts around lines 34 to 40, the code sets
Access-Control-Allow-Origin: 'null' for disallowed origins which can
inadvertently permit sandboxed sources; remove the else-if branch that sets ACAO
to 'null' and instead do nothing (do not set Access-Control-Allow-Origin) when
the origin is not allowed so no ACAO header is returned for disallowed origins.
| if (subdomain && (subdomain.length < 3 || subdomain.length > 63 || !/^[a-z0-9-]+$/i.test(subdomain) || subdomain.startsWith('-') || subdomain.endsWith('-'))) { | ||
| return res.status(400).json({ | ||
| error: 'Invalid subdomain format. Must be 3-63 characters, alphanumeric and hyphens only.' | ||
| error: 'Invalid subdomain format. Must be 3-63 characters, alphanumeric and hyphens only, cannot start or end with hyphens.' | ||
| }); | ||
| } | ||
|
|
||
| const config: BaseDeploymentConfig = { | ||
| platform, | ||
| projectName, | ||
| subdomain, | ||
| subdomain: subdomain || projectName.toLowerCase().replace(/[^a-z0-9-]/g, '-'), | ||
| files, |
🛠️ Refactor suggestion
Auto-generated subdomain may still violate your own rules. Normalize and enforce length.
When subdomain is omitted, you derive it from projectName but don’t remove leading/trailing hyphens, collapse repeats, or enforce 3–63 length. That can yield invalid subdomains.
Apply this diff here:
- subdomain: subdomain || projectName.toLowerCase().replace(/[^a-z0-9-]/g, '-'),
+ subdomain: subdomain || generateValidSubdomain(projectName),

Then add this helper (outside this block):
```typescript
// Place near other helpers (top-level)
const generateValidSubdomain = (name: string): string => {
  const MAX = 63;
  const MIN = 3;
  let s = String(name || '')
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, '-')
    .replace(/-+/g, '-') // collapse repeats
    .replace(/^-+|-+$/g, ''); // trim edges
  if (s.length < MIN) s = `proj-${Date.now().toString(36)}`;
  if (s.length > MAX) {
    s = s.slice(0, MAX).replace(/-+$/g, ''); // avoid trailing hyphen after slice
    if (!s) s = `proj-${Date.now().toString(36)}`;
  }
  return s;
};
```

🤖 Prompt for AI Agents
In api/deploy.ts around lines 395 to 405, the auto-generated subdomain from
projectName can still produce invalid values (leading/trailing hyphens, repeated
hyphens, wrong length); add the provided generateValidSubdomain helper at
top-level near other helpers, then replace the existing inline derivation for
config.subdomain with a call to generateValidSubdomain(projectName) when
subdomain is not provided; keep the explicit incoming-subdomain validation block
as-is so user-supplied values are still rejected if invalid.
| const limit = parseInt((req.query.limit as string) || '20'); | ||
|
|
||
| const result = await deploymentManager.listAllDeployments(limit); | ||
| const manager = getDeploymentManager(); | ||
| const result = await manager.listAllDeployments(limit); | ||
|
|
🛠️ Refactor suggestion
Clamp and validate the list limit.
parseInt may yield NaN and allow unbounded values. Clamp to a sensible range (e.g., 1–100) and fallback safely.
Apply this diff:
- const limit = parseInt((req.query.limit as string) || '20');
+ const raw = Number(req.query.limit);
+ const limit = Number.isFinite(raw) ? Math.min(100, Math.max(1, Math.floor(raw))) : 20;

📝 Committable suggestion
| const limit = parseInt((req.query.limit as string) || '20'); | |
| const result = await deploymentManager.listAllDeployments(limit); | |
| const manager = getDeploymentManager(); | |
| const result = await manager.listAllDeployments(limit); | |
| const raw = Number(req.query.limit); | |
| const limit = Number.isFinite(raw) | |
| ? Math.min(100, Math.max(1, Math.floor(raw))) | |
| : 20; | |
| const manager = getDeploymentManager(); | |
| const result = await manager.listAllDeployments(limit); |
🤖 Prompt for AI Agents
In api/deploy.ts around lines 504 to 508, the parsing of req.query.limit uses
parseInt which can produce NaN or unbounded values; update the code to safely
read the query (handle possible array), parse to an integer, fallback to default
20 if parsing fails, then clamp the final value to the range 1–100 using
Math.max/Math.min (or equivalent) before passing it to
manager.listAllDeployments.
| res.setHeader('Access-Control-Allow-Origin', '*'); | ||
| res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS'); | ||
| res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization'); | ||
|
|
CORS set to wildcard '*' contradicts “strict origin validation” PR goal; tighten to an allowlist.
Reflect the request Origin when allowed, add Vary, and consider max-age. This prevents drive-by POSTs from arbitrary sites.
- res.setHeader('Access-Control-Allow-Origin', '*');
- res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');
- res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
+ const origin = req.headers.origin as string | undefined;
+ const allowedOrigins = new Set(
+ [process.env.PUBLIC_ORIGIN, 'http://localhost:3000', 'http://localhost:5173']
+ .filter(Boolean) as string[]
+ );
+ if (origin && allowedOrigins.has(origin)) {
+ res.setHeader('Access-Control-Allow-Origin', origin);
+ res.setHeader('Vary', 'Origin');
+ }
+ res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');
+ res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
+ res.setHeader('Access-Control-Max-Age', '600');

📝 Committable suggestion
| res.setHeader('Access-Control-Allow-Origin', '*'); | |
| res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS'); | |
| res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization'); | |
| // Tighten CORS to an allowlist rather than wildcard | |
| const origin = req.headers.origin as string | undefined; | |
| const allowedOrigins = new Set( | |
| [process.env.PUBLIC_ORIGIN, 'http://localhost:3000', 'http://localhost:5173'] | |
| .filter(Boolean) as string[] | |
| ); | |
| if (origin && allowedOrigins.has(origin)) { | |
| res.setHeader('Access-Control-Allow-Origin', origin); | |
| res.setHeader('Vary', 'Origin'); | |
| } | |
| res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS'); | |
| res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization'); | |
| res.setHeader('Access-Control-Max-Age', '600'); |
🤖 Prompt for AI Agents
In api/success.ts around lines 7-10, CORS is currently using a wildcard '*'
which conflicts with the “strict origin validation” goal; replace it with an
allowlist check that reads req.headers.origin, compares it against a configured
allowlist, and if allowed sets Access-Control-Allow-Origin to that origin (not
'*'), otherwise do not set the header or return 403; also add a Vary: Origin
header and optionally set Access-Control-Max-Age to a sensible value to cache
preflight decisions. Ensure the allowlist is configurable (env or config file),
perform a safe string equality check, and apply the same logic for OPTIONS
preflight responses.
| // Handle various GitHub URL formats with domain validation - using safer patterns | ||
| const patterns = [ | ||
| /^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)(?:\/.*)?$/, | ||
| /^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)\.git$/, | ||
| /^git@github\.com:([^/]+)\/([^/]+)\.git$/, | ||
| /^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)(?:\/.*)?$/, | ||
| /^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)\.git$/, | ||
| /^git@github\.com:([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)\.git$/, | ||
| ]; |
🛠️ Refactor suggestion
Fix ESLint security/detect-unsafe-regex by avoiding potentially unsafe URL regexes
ESLint is flagging the patterns at Line 132 and Line 133 as unsafe. While these look mostly benign, the trailing .* and broad alternations can still trigger the rule and can be problematic with pathological inputs. Replace the broad URL regex approach with URL parsing for HTTP(S) schemes and a tight SSH pattern. This both satisfies the linter and removes any ReDoS risk on user-controlled inputs.
Apply this diff to replace the regex-driven branch with a safer approach:
- // Handle various GitHub URL formats with domain validation - using safer patterns
- const patterns = [
- /^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)(?:\/.*)?$/,
- /^(?:https?:\/\/)?(?:www\.)?github\.com\/([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)\.git$/,
- /^git@github\.com:([a-zA-Z0-9_-]+)\/([a-zA-Z0-9_-]+)\.git$/,
- ];
- const cleanUrl = url.trim();
-
- for (const pattern of patterns) {
- const match = cleanUrl.match(pattern);
- if (match) {
- const [, owner, repo] = match;
-
- // Additional validation for owner and repo names
- const cleanOwner = owner.trim();
- const cleanRepo = repo.replace(/\.git$/, '').trim();
-
- // Validate GitHub username/org and repo name patterns separately - using safer patterns
- const ownerPattern = /^[a-z\d](?:[a-z\d]|[-](?=[a-z\d])){0,38}$/i;
- const repoPattern = /^[A-Za-z0-9._-]{1,100}$/;
-
- const isOwnerValid = ownerPattern.test(cleanOwner);
- const isRepoValid = repoPattern.test(cleanRepo);
-
- if (!isOwnerValid || !isRepoValid) {
- const invalidFields = [];
- if (!isOwnerValid) invalidFields.push('owner');
- if (!isRepoValid) invalidFields.push('repo');
-
- logger.warn(`Invalid GitHub ${invalidFields.join(' and ')} format`, {
- owner: cleanOwner,
- repo: cleanRepo,
- invalidFields
- });
- throw new Error('Invalid GitHub repository format');
- }
-
- return {
- owner: cleanOwner,
- repo: cleanRepo
- };
- }
- }
+ // Handle various GitHub URL formats without unsafe regex backtracking
+ const cleanUrl = url.trim();
+ let ownerCandidate: string | null = null;
+ let repoCandidate: string | null = null;
+
+ // SSH form: git@github.com:owner/repo(.git)?
+ const sshMatch = cleanUrl.match(/^git@github\.com:([\w-]{1,39})\/([\w._-]{1,100})(?:\.git)?$/);
+ if (sshMatch) {
+ ownerCandidate = sshMatch[1];
+ repoCandidate = sshMatch[2];
+ } else if (cleanUrl.startsWith('http://') || cleanUrl.startsWith('https://')) {
+ const parsed = new URL(cleanUrl);
+ if (!allowedDomains.includes(parsed.hostname.toLowerCase())) {
+ logger.warn('Rejected non-GitHub domain', { domain: parsed.hostname });
+ throw new Error('Only GitHub.com repositories are supported');
+ }
+ const segments = parsed.pathname.replace(/^\/+/, '').split('/');
+ if (segments.length >= 2) {
+ ownerCandidate = segments[0];
+ repoCandidate = segments[1].replace(/\.git$/i, '');
+ }
+ }
+
+ if (ownerCandidate && repoCandidate) {
+ // Additional validation for owner and repo names
+ const cleanOwner = ownerCandidate.trim();
+ const cleanRepo = repoCandidate.trim();
+
+ // Validate GitHub username/org and repo name patterns separately - using safer patterns
+ const ownerPattern = /^[a-z\d](?:[a-z\d]|-(?=[a-z\d])){0,38}$/i;
+ const repoPattern = /^[A-Za-z0-9._-]{1,100}$/;
+
+ const isOwnerValid = ownerPattern.test(cleanOwner);
+ const isRepoValid = repoPattern.test(cleanRepo);
+
+ if (!isOwnerValid || !isRepoValid) {
+ const invalidFields: Array<'owner' | 'repo'> = [];
+ if (!isOwnerValid) invalidFields.push('owner');
+ if (!isRepoValid) invalidFields.push('repo');
+
+ logger.warn(`Invalid GitHub ${invalidFields.join(' and ')} format`, {
+ owner: cleanOwner,
+ repo: cleanRepo,
+ invalidFields,
+ });
+ throw new Error('Invalid GitHub repository format');
+ }
+
+ return {
+ owner: cleanOwner,
+ repo: cleanRepo,
+ };
+ }
🧰 Tools
🪛 GitHub Check: ESLint
[failure] 132-132: Detects potentially unsafe regular expressions, which may take a very long time to run, blocking the event loop.
Unsafe Regular Expression
[failure] 133-133: Detects potentially unsafe regular expressions, which may take a very long time to run, blocking the event loop.
Unsafe Regular Expression
🪛 ESLint
[error] 132-132: Unsafe Regular Expression
(security/detect-unsafe-regex)
[error] 133-133: Unsafe Regular Expression
(security/detect-unsafe-regex)
🤖 Prompt for AI Agents
In src/lib/github-service.ts around lines 130 to 135, the three broad regexes
are triggering ESLint security/detect-unsafe-regex; replace them with a safer
approach: for HTTP(S) inputs use the built-in URL parser (new URL(input)) and
validate hostname is exactly "github.com" (allow optional "www.") then split
pathname, strip a trailing ".git" and extract owner and repo while validating
they match the allowed character set and non-empty; for SSH inputs keep a
single, tightly constrained regex only for the exact SSH form
"git@github.com:owner/repo.git" (no greedy quantifiers) to extract owner and
repo; remove the old patterns array and ensure all user input is
parsed/validated without using broad ".*" or alternation that can cause ReDoS.
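The approach the prompt describes can be sketched as a standalone function (the name `parseGitHubRepo` is hypothetical; the real implementation lives in `src/lib/github-service.ts` with its `logger` and `allowedDomains` wiring):

```typescript
// Hypothetical standalone sketch of the parser-based approach: use the
// built-in URL parser for http(s) inputs and keep one tightly constrained
// regex only for the exact SSH form.
function parseGitHubRepo(input: string): { owner: string; repo: string } | null {
  const clean = input.trim();

  // SSH form: git@github.com:owner/repo(.git)? -- lazy repo group so a
  // trailing ".git" is not absorbed into the captured name
  const ssh = clean.match(/^git@github\.com:([\w-]{1,39})\/([\w.-]{1,100}?)(?:\.git)?$/);
  if (ssh) return { owner: ssh[1], repo: ssh[2] };

  if (!clean.startsWith('http://') && !clean.startsWith('https://')) return null;

  let parsed: URL;
  try {
    parsed = new URL(clean);
  } catch {
    return null;
  }

  const host = parsed.hostname.toLowerCase();
  if (host !== 'github.com' && host !== 'www.github.com') return null;

  const [owner, rawRepo] = parsed.pathname.replace(/^\/+/, '').split('/');
  if (!owner || !rawRepo) return null;
  const repo = rawRepo.replace(/\.git$/i, '');

  // Same character-set validation as the suggestion above
  if (!/^[a-z\d](?:[a-z\d]|-(?=[a-z\d])){0,38}$/i.test(owner)) return null;
  if (!/^[A-Za-z0-9._-]{1,100}$/.test(repo)) return null;

  return { owner, repo };
}
```

No pattern here uses nested quantifiers or broad alternation, so the ESLint `security/detect-unsafe-regex` finding does not apply.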
```tsx
const PrivacyTab = () => {
  const updatePrivacySetting = (key: keyof typeof privacySettings, value: boolean) => {
    setPrivacySettings(prev => ({
      ...prev,
      [key]: value,
    }));
  };

  const savePrivacySettings = () => {
    const success = privacyConsent.setConsent(privacySettings);
    if (success) {
      toast.success('Privacy preferences saved successfully');
      // Reload if error monitoring consent changed to apply new Sentry configuration
      if (privacySettings.errorMonitoring !== privacyConsent.hasErrorMonitoringConsent()) {
        setTimeout(() => {
          window.location.reload();
        }, 1000);
      }
    } else {
      toast.error('Failed to save privacy preferences');
    }
  };
```
Reload logic never triggers after consent change
You compare privacySettings.errorMonitoring to privacyConsent.hasErrorMonitoringConsent() after writing consent, so they’ll always match and the reload won’t occur. Capture the previous value before setConsent, then compare.
- const savePrivacySettings = () => {
- const success = privacyConsent.setConsent(privacySettings);
+ const savePrivacySettings = () => {
+ const prev = privacyConsent.getConsent();
+ const prevErrorMonitoring = !!prev?.errorMonitoring;
+ const success = privacyConsent.setConsent(privacySettings);
if (success) {
toast.success('Privacy preferences saved successfully');
- // Reload if error monitoring consent changed to apply new Sentry configuration
- if (privacySettings.errorMonitoring !== privacyConsent.hasErrorMonitoringConsent()) {
+ // Reload if error monitoring consent changed to apply new Sentry configuration
+ if (privacySettings.errorMonitoring !== prevErrorMonitoring) {
setTimeout(() => {
window.location.reload();
}, 1000);
}
} else {
toast.error('Failed to save privacy preferences');
}
};
🤖 Prompt for AI Agents
In src/pages/Settings.tsx around lines 662 to 683, the
reload-after-consent-change logic compares privacySettings.errorMonitoring to
privacyConsent.hasErrorMonitoringConsent() after calling setConsent so the
values will always match and never trigger; fix by reading and storing the
previous error monitoring consent (e.g., const prev =
privacyConsent.hasErrorMonitoringConsent()) before calling
privacyConsent.setConsent(privacySettings), then after a successful setConsent
compare prev !== privacySettings.errorMonitoring and call
window.location.reload() (with the same 1s timeout) when they differ.
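The ordering bug is easy to reproduce with a minimal stand-in for the consent store (hypothetical names; the real `privacyConsent` object lives elsewhere in the app):

```typescript
// Minimal stand-in consent store to illustrate the read-after-write bug.
type Consent = { errorMonitoring: boolean };

const store = {
  consent: { errorMonitoring: false } as Consent,
  hasErrorMonitoringConsent(): boolean {
    return this.consent.errorMonitoring;
  },
  setConsent(next: Consent): boolean {
    this.consent = { ...next };
    return true;
  },
};

const newSettings: Consent = { errorMonitoring: true };

// Buggy ordering: read AFTER write -- the values always match, so the
// "did it change?" comparison is always false and the reload never fires.
store.setConsent(newSettings);
const buggyChanged = newSettings.errorMonitoring !== store.hasErrorMonitoringConsent();

// Fixed ordering: capture the previous value BEFORE the write, then compare.
store.consent = { errorMonitoring: false }; // reset for the demo
const prev = store.hasErrorMonitoringConsent();
store.setConsent(newSettings);
const fixedChanged = newSettings.errorMonitoring !== prev;

console.log({ buggyChanged, fixedChanged }); // { buggyChanged: false, fixedChanged: true }
```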
```ts
// Validate AI response content
export const validateResponse = (content: string): { isValid: boolean; error?: string } => {
  if (!content || typeof content !== 'string') {
    return { isValid: false, error: 'Response must be a non-empty string' };
  }

  if (content.trim().length === 0) {
    return { isValid: false, error: 'Response cannot be empty or only whitespace' };
  }

  if (content.length > MAX_RESPONSE_LENGTH) {
    return { isValid: false, error: `Response exceeds maximum length of ${MAX_RESPONSE_LENGTH} characters` };
  }

  // Check for potentially malicious patterns
  const maliciousPatterns = [
    /<script[^>]*>[\s\S]*?<\/script>/is,
    /\bjavascript:/i,
    // quoted or unquoted inline handlers
    /\bon\w+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s>]+)/i,
    /<iframe[^>]*>[\s\S]*?<\/iframe>/is,
    /<object[^>]*>[\s\S]*?<\/object>/is,
    /<embed[^>]*>/i,
    // restrict risky data URI media types commonly used for HTML/script delivery
    /\bdata:(?:text\/html|application\/xhtml\+xml|image\/svg\+xml)/i,
    /\bvbscript:/i
  ];

  const hasMaliciousPattern = maliciousPatterns.some(pattern => pattern.test(content));
  if (hasMaliciousPattern) {
    return { isValid: false, error: 'Response contains potentially unsafe content' };
  }
  return { isValid: true };
};
```
🛠️ Refactor suggestion
Fix regex to catch </script > with whitespace and reduce bypass surface (CodeQL finding).
CodeQL flagged the script-tag regex. It misses cases like </script > and overly relies on greedy groups. Tighten the patterns and align flags.
Apply this diff:
- const maliciousPatterns = [
- /<script[^>]*>[\s\S]*?<\/script>/is,
- /\bjavascript:/i,
- // quoted or unquoted inline handlers
- /\bon\w+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s>]+)/i,
- /<iframe[^>]*>[\s\S]*?<\/iframe>/is,
- /<object[^>]*>[\s\S]*?<\/object>/is,
- /<embed[^>]*>/i,
- // restrict risky data URI media types commonly used for HTML/script delivery
- /\bdata:(?:text\/html|application\/xhtml\+xml|image\/svg\+xml)/i,
- /\bvbscript:/i
- ];
+ const maliciousPatterns = [
+ // allow whitespace in closing tags and ensure word boundary on tag name
+ /<script\b[^>]*?>[\s\S]*?<\/script\s*>/i,
+ /\bjavascript:/i,
+ // quoted or unquoted inline handlers (onclick=, onload=, etc.)
+ /\bon\w+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s>]+)/i,
+ /<iframe\b[^>]*?>[\s\S]*?<\/iframe\s*>/i,
+ /<object\b[^>]*?>[\s\S]*?<\/object\s*>/i,
+ /<embed\b[^>]*?>/i,
+ // restrict risky data URI media types commonly used for HTML/script delivery
+ /\bdata:(?:text\/html|application\/xhtml\+xml|image\/svg\+xml)\b/i,
+ /\bvbscript:/i,
+ ];

Note: This validator is a defense-in-depth guard for plain-text contexts. Continue to render any user content with SafeText and avoid dangerouslySetInnerHTML.
🧰 Tools
🪛 GitHub Check: CodeQL
[failure] 95-95: Bad HTML filtering regexp
This regular expression does not match script end tags like </script >.
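The bypass the CodeQL finding describes can be checked directly; both patterns are copied from the diff above, and the payload uses a closing tag with whitespace before `>`, which browsers still honor:

```typescript
// Old pattern requires '</script>' exactly; new pattern allows '</script >'.
const oldScript = /<script[^>]*>[\s\S]*?<\/script>/is;
const newScript = /<script\b[^>]*?>[\s\S]*?<\/script\s*>/i;

const payload = '<script>alert(1)</script >';

const slippedThrough = !oldScript.test(payload); // old pattern misses it
const caught = newScript.test(payload);          // tightened pattern matches

console.log({ slippedThrough, caught }); // { slippedThrough: true, caught: true }
```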
```ts
/**
 * Sanitizes text for safe display in React components
 * Handles null/undefined values with optional fallback
 * @param text - The text to sanitize (can be null/undefined)
 * @param fallback - Fallback text if input is null/undefined
 * @returns Sanitized text safe for display
 */
export function sanitizeText(text: string | null | undefined, fallback = ''): string {
  if (text == null) {
    return fallback;
  }

  if (typeof text !== 'string') {
    return escapeHtml(String(text));
  }

  return escapeHtml(text);
}
```
💡 Verification agent
🧩 Analysis chain
Naming collision risk: sanitizeText already exists elsewhere. Consolidate to a single source of truth.
There are existing sanitizeText implementations in src/components/ui/SafeText.tsx and src/utils/security.ts. Keep one canonical function to avoid divergent behavior.
Would you like a codemod PR to replace imports and remove duplicates?
🏁 Script executed:

```bash
#!/bin/bash
# Find duplicate sanitizeText implementations/usages
rg -nP --type=ts --type=tsx -C2 '\bexport\s+(?:const|function)\s+sanitizeText\b|sanitizeText\s*\('
```

Length of output: 128
🏁 Script executed:

```bash
#!/bin/bash
# Searching for sanitizeText definitions and usages in .ts and .tsx files
rg -n -g '*.ts' -g '*.tsx' -C2 '\bexport\s+(?:const|function)\s+sanitizeText\b|\bsanitizeText\s*\('
```

Length of output: 7022
Consolidate sanitizeText Implementations into a Single Canonical Utility

There are three distinct sanitizeText exports with differing behavior, which risks inconsistent sanitization across the app. Please choose one implementation and replace all other definitions and imports with that single source of truth.

• src/utils/text-sanitizer.ts (lines 43–49): `export function sanitizeText(text: string | null | undefined, fallback = ''): string { … }`

• src/utils/security.ts (lines 13–15): `export const sanitizeText = (text: string): string => { … };`

• src/components/ui/SafeText.tsx (lines 40–46): `export function sanitizeText(input: string): string { … }`

Next steps:
- Decide which implementation best fits your requirements (e.g., DOMPurify-based for HTML, or a simple escape/trim utility).
- Consolidate that function in one file (e.g., `src/utils/text-sanitizer.ts`).
- Update all imports in `.ts`/`.tsx` files to point to the canonical module.
- Remove the now-redundant `sanitizeText` definitions in the other two files.

Let me know if you’d like help creating a codemod PR to automate these updates.
🤖 Prompt for AI Agents
In src/utils/text-sanitizer.ts around lines 36–54 there are multiple differing
sanitizeText implementations across the codebase; consolidate into a single
canonical utility by (1) choosing the desired behavior (e.g., DOMPurify-based
HTML sanitizer or simple escape+trim with null/undefined fallback) and
implementing that single function in src/utils/text-sanitizer.ts (ensure correct
types and default fallback behavior), (2) export it as the sole sanitizeText
export from that file, (3) update all imports in .ts/.tsx files
(src/utils/security.ts and src/components/ui/SafeText.tsx and any others) to
import from src/utils/text-sanitizer.ts, and (4) remove the duplicate
sanitizeText definitions from the other files; after changes run typecheck and
the test/build to verify no import/type errors.
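If the escape-based variant wins, the consolidated utility might look like this (a sketch under that assumption; a DOMPurify-based consolidation would look different, and `escapeHtml` is inlined here only so the sketch is self-contained):

```typescript
// Sketch of one possible canonical sanitizeText: escape-based, keeping the
// null/undefined fallback behavior of the text-sanitizer variant. In the
// real module both functions would be exported.
const HTML_ESCAPES: Record<string, string> = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
};

function escapeHtml(input: string): string {
  // Single pass, so already-escaped entities are not double-escaped mid-run
  return input.replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}

function sanitizeText(text: string | null | undefined, fallback = ''): string {
  if (text == null) return fallback;
  return escapeHtml(text);
}
```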
```ts
/**
 * Creates sanitized HTML string for use with dangerouslySetInnerHTML
 * Only use this when you need HTML rendering capabilities
 * @param text - The text to sanitize
 * @param fallback - Fallback text if input is null/undefined
 * @returns Object with __html property for dangerouslySetInnerHTML
 */
export function createSanitizedHtml(text: string | null | undefined, fallback = ''): { __html: string } {
  return { __html: sanitizeText(text, fallback) };
}
```
Do not expose a helper for dangerouslySetInnerHTML; it conflicts with project guidelines.
Guidelines: “NEVER use dangerouslySetInnerHTML with user content.” Remove createSanitizedHtml to avoid encouraging this pattern.
-/**
- * Creates sanitized HTML string for use with dangerouslySetInnerHTML
- * Only use this when you need HTML rendering capabilities
- * @param text - The text to sanitize
- * @param fallback - Fallback text if input is null/undefined
- * @returns Object with __html property for dangerouslySetInnerHTML
- */
-export function createSanitizedHtml(text: string | null | undefined, fallback = ''): { __html: string } {
- return { __html: sanitizeText(text, fallback) };
-}
+// Intentionally no helper for dangerouslySetInnerHTML.
+// Use SafeText and plain text rendering to avoid XSS per project guidelines.
🤖 Prompt for AI Agents
In src/utils/text-sanitizer.ts around lines 76 to 85, remove the
createSanitizedHtml export (delete the function) because we must not provide a
helper that encourages dangerouslySetInnerHTML; find all call sites and replace
them to call sanitizeText(...) directly (or a safe renderer) and render
sanitized strings as plain text rather than passing an object to
dangerouslySetInnerHTML; if any component relied on createSanitizedHtml,
refactor those components to set textContent/children to sanitizeText(...) or
use a vetted HTML-rendering component that enforces the project policy.
Summary
Fixed all merge conflicts from PR #78 (Qoder) by resolving conflicts between qoder branch and dev-branch, prioritizing security improvements from dev-branch while preserving functionality from qoder.
Key Security Improvements
Files Changed
- `api-dev-server.ts` - Multiple security enhancements (6 major conflicts resolved)
- `src/components/EnhancedChatInterface.tsx` - Better streaming response handling
- `lib/deployment/netlify.ts` - Enhanced input validation and security
- `lib/deployment/vercel.ts` - Secure URL parsing improvements
- `bun.lock` - Regenerated for dependency consistency

Test Plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Security/Privacy Enhancements
Documentation
Tests