
fix: resolve JavaScript errors in chat interface and tRPC integration#77

Merged
Jackson57279 merged 10 commits into main from dev-branch
Aug 23, 2025

Conversation

Jackson57279 (Owner) commented Aug 23, 2025

Summary

  • Fixed Promise being passed to Convex in EnhancedChatInterface.tsx by properly consuming the textStream from AI responses
  • Fixed 404 error on tRPC billing endpoint by correcting URL path from /trpc/ to /hono/trpc/
  • Added robust array checks to prevent Se.map undefined errors in chat and message handling
  • Improved metadata handling by supplying concrete values instead of undefined, which could cause Convex issues
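The streaming fix in the first bullet can be sketched roughly as follows. This is a minimal illustration, assuming the AI SDK exposes the reply as an AsyncIterable<string>; collectTextStream and the cap constant are illustrative names, not the actual identifiers in EnhancedChatInterface.tsx:

```typescript
// Hypothetical sketch: accumulate an AsyncIterable text stream into a plain
// string before persisting it, instead of handing Convex the Promise itself.
const MAX_RESPONSE_LENGTH = 50_000; // matches the 50,000-char cap described below

async function collectTextStream(
  textStream: AsyncIterable<string>,
  maxLength: number = MAX_RESPONSE_LENGTH,
): Promise<string> {
  const chunks: string[] = [];
  let totalLength = 0;
  for await (const chunk of textStream) {
    chunks.push(chunk);
    totalLength += chunk.length;
    if (totalLength > maxLength) break; // bound memory for very long replies
  }
  // Join once at the end (cheaper than repeated concatenation) and enforce the cap.
  return chunks.join("").slice(0, maxLength);
}
```

The important property is that the awaited string, not the pending Promise, is what gets written to Convex.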

Technical Details

  • EnhancedChatInterface.tsx: Properly consume streaming AI responses instead of passing Promise objects
  • useUsageTracking.ts: Corrected tRPC endpoint URL and switched to POST method as required by tRPC
  • Enhanced error handling and logging for better debugging
  • Added comprehensive array type checking to prevent runtime errors
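The endpoint correction amounts to prefixing the tRPC path and unwrapping the response envelope. A hedged sketch follows; trpcUrl and unwrapTrpcResult are hypothetical helpers, and the { result: { data } } shape mirrors the json?.result?.data ?? null access this PR describes:

```typescript
// The 404 came from the missing /hono prefix: the tRPC router is mounted
// under Hono, so the correct base is /hono/trpc, not /trpc.
const TRPC_BASE = "/hono/trpc";

function trpcUrl(procedure: string): string {
  return `${TRPC_BASE}/${procedure}`;
}

// Unwrap a tRPC response envelope, returning null if any layer is missing.
function unwrapTrpcResult<T>(json: unknown): T | null {
  const result = (json as { result?: { data?: T } } | null)?.result;
  return result?.data ?? null;
}
```

A call site would then POST to trpcUrl("billing.getUserSubscription") and pass the parsed JSON through unwrapTrpcResult.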

Test plan

  • Code builds successfully without compilation errors
  • No more JavaScript console errors for Promise objects in Convex
  • tRPC billing endpoints should now resolve correctly
  • Enhanced array checking prevents Se.map undefined errors
  • Chat functionality should work without crashes
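The array checking referenced above might look like the following sketch; toMessageArray is a hypothetical helper that normalizes either a plain array or a { messages: [...] } envelope, so a minified call like Se.map can never run on undefined:

```typescript
interface Message { content: string; role: "user" | "assistant" }

// Normalize whatever the query returns to an array before calling .map.
function toMessageArray(value: unknown): Message[] {
  // Plain array (the typical Convex query result)
  if (Array.isArray(value)) return value as Message[];
  // { messages: [...] } envelope shape
  if (value && typeof value === "object" && "messages" in value) {
    const inner = (value as { messages?: unknown }).messages;
    if (Array.isArray(inner)) return inner as Message[];
  }
  return []; // fall back to an empty array instead of crashing
}
```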

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Refactor

    • Switched the default AI model for chat responses and adjusted generation settings, improving response quality and stability.
    • Improved streaming of long AI replies, ensuring smoother display and better handling of lengthy outputs.
    • More reliable chat title generation and sidebar behavior on first messages or malformed data.
  • Bug Fixes

    • Prevented potential crashes by hardening message and chat data handling.
    • Stabilized subscription fetching with better error handling and credential management.
  • Chores

    • Added a new runtime dependency and reorganized dependency listings.

Jackson57279 and others added 9 commits August 22, 2025 20:52
…d rate limiting

- Add clustering support based on available CPU cores and environment settings
- Integrate PostHog analytics for API request and server metrics tracking
- Implement rate limiting with IP validation and bounded in-memory storage
- Enhance VercelRequest and VercelResponse interfaces with robust parsing and security headers
- Improve CORS handling with origin allowlists and credential support
- Validate and sanitize API endpoint paths to prevent directory traversal attacks
- Add request body size limit and enforce request timeout handling
- Provide structured logging for requests, responses, errors, and server lifecycle events
- Add health endpoint with uptime, metrics, environment, and version info
- Support graceful shutdown with analytics capture on termination signals
- Update create-checkout-session API with stricter CORS origin checks and OPTIONS method handling
- Refine hono-polar API subscription syncing with date object conversions and improved checkout flow
- Enhance secret-chat API error handling with detailed status codes and messages
- Update service worker cache revision for production deployment
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
High Priority Fixes:
- Replace vulnerable regex patterns in IP validation with safe string operations
- Secure cookie parsing with Object.create(null) to prevent prototype pollution
- Enhanced file system operations with additional validation layers
- Add PostHog analytics payload size limits (32KB) and comprehensive PII sanitization
- Implement error message sanitization to prevent information leakage

Security Improvements:
- Safe IPv4/IPv6 validation without regex DoS vulnerability
- Cookie name/value validation with length limits and safe patterns
- Multi-layer path traversal protection for API endpoint resolution
- PII pattern detection and redaction for analytics
- Development vs production error handling with safe messaging
- ESLint security rule compliance with appropriate exemptions for validated cases
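The regex-free IPv4 validation in the list above could be sketched like this (isValidIPv4 is a hypothetical name; the point is that only string and numeric checks are used, so there is no regex backtracking to exploit):

```typescript
// Validate an IPv4 address with plain string operations instead of a regex.
function isValidIPv4(ip: string): boolean {
  const parts = ip.split(".");
  if (parts.length !== 4) return false;
  for (const part of parts) {
    if (part.length === 0 || part.length > 3) return false;
    for (const ch of part) {
      if (ch < "0" || ch > "9") return false; // digits only
    }
    if (part.length > 1 && part[0] === "0") return false; // no leading zeros
    if (Number(part) > 255) return false; // octet range check
  }
  return true;
}
```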

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
…ration limits

- Updated regex patterns for sanitizing metadata, navigation, images, stylesheets, scripts, fonts, and meta tags to prevent potential vulnerabilities.
- Implemented iteration limits to avoid catastrophic backtracking in regex operations.
- Added validation checks for extracted URLs and text to ensure safety and compliance with length restrictions.

This commit addresses security concerns and improves the robustness of HTML content extraction.
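The iteration-limited extraction this commit describes might be sketched as follows; extractUrls, MAX_ITERATIONS, and the href pattern are illustrative only, and the real code covers more tag types than anchors:

```typescript
// Cap regex iterations and enforce length/scheme checks on extracted URLs,
// so a hostile document can neither loop forever nor smuggle odd schemes.
const MAX_ITERATIONS = 1_000;
const MAX_URL_LENGTH = 2_048;

function extractUrls(html: string): string[] {
  const urls: string[] = [];
  // Bounded quantifier keeps each match small; the loop counter bounds total work.
  const re = /href="([^"]{1,2048})"/g;
  let match: RegExpExecArray | null;
  let iterations = 0;
  while ((match = re.exec(html)) !== null) {
    if (++iterations > MAX_ITERATIONS) break; // hard stop on pathological input
    const url = match[1];
    if (url.length <= MAX_URL_LENGTH && /^https?:\/\//i.test(url)) {
      urls.push(url); // keep only http(s) URLs within the length limit
    }
  }
  return urls;
}
```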
- Resolved CORS configuration conflict in api-dev-server.ts using secure whitelist approach
- Resolved git provider detection conflict in lib/deployment/netlify.ts using comprehensive URL parsing
- Fixed regex escape character issue in netlify.ts for security compliance

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
**HIGH RISK - CORS Misconfiguration Fixed:**
- Separate trusted origins from allowed origins in api-dev-server.ts
- Only enable credentials for explicitly trusted domains
- Prevent credential hijacking via dynamic origin setting

**MEDIUM RISK - URL Validation Bypass Fixed:**
- Replace vulnerable substring matching with secure hostname validation
- Use proper URL parsing to prevent domain spoofing attacks
- Affected files: netlify.ts and vercel.ts deployment services

**MEDIUM RISK - Information Exposure Prevention:**
- Enhanced error sanitization in both development and production modes
- Remove ALL sensitive paths, environment variables, credentials from error messages
- Stricter character limits and complete information sanitization

Security improvements protect against:
- Credential theft via CORS misconfiguration
- Domain spoofing attacks (evil.com/github.com bypasses)
- Internal system information disclosure through error messages
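The hostname-based validation that replaces substring matching can be sketched like this (isAllowedOrigin is a hypothetical name; the key point is parsing with new URL so a path such as evil.com/github.com can no longer spoof the host):

```typescript
// Validate an Origin header against an allowlist of exact hostnames.
function isAllowedOrigin(origin: string, allowedHosts: string[]): boolean {
  try {
    const { protocol, hostname } = new URL(origin);
    if (protocol !== "https:" && protocol !== "http:") return false;
    // Exact hostname match: "evil.com/github.com" parses to hostname "evil.com".
    return allowedHosts.includes(hostname);
  } catch {
    return false; // not a parseable URL
  }
}
```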

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Fix Promise being passed to Convex in EnhancedChatInterface.tsx by properly consuming textStream
- Fix 404 error on tRPC billing endpoint by correcting URL path to /hono/trpc/
- Add robust array checks to prevent Se.map undefined errors
- Improve metadata handling with proper values instead of undefined
- Enhanced error handling and logging for tRPC requests

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

vercel bot commented Aug 23, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| zapdev | Ready | Preview | Comment | Aug 23, 2025 6:35pm |


gitguardian bot commented Aug 23, 2025

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
| --- | --- | --- | --- | --- |
| 20372498 | Triggered | Generic High Entropy Secret | 72993ac | .env.deployment.template |
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely. Learn the best practices here.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.

To avoid such incidents in the future consider


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.


coderabbitai bot commented Aug 23, 2025

Walkthrough

The PR updates dependencies, switches AI model identifiers and pricing in ai.ts, adds streaming response handling with length capping in EnhancedChatInterface, and changes the subscription fetch in useUsageTracking to a tRPC POST endpoint with revised headers and error handling. Various type guards and metadata assignments were added.

Changes

  • Dependencies (package.json): Added dependency @ai-sdk/vercel@^1.0.11. Reordered sanitize-html. No script/devDependency changes.
  • Chat UI streaming and safety (src/components/EnhancedChatInterface.tsx): Consumes the textStream AsyncIterable, assembles chunks with a 50,000-char cap, and falls back to string or generic text. Adds token estimation and static cost metadata. Strengthens type checks for messages/chats and first-message title logic.
  • Usage tracking subscription fetch (src/hooks/useUsageTracking.ts): Switched to POST /hono/trpc/billing.getUserSubscription with a JSON body, credentials included, and an optional Authorization header. Adds non-OK and exception logging; returns parsed json?.result?.data ?? null.
  • AI model switch and pricing (src/lib/ai.ts): Replaced openai/gpt-oss-120b with moonshotai/kimi-k2-instruct across selection, telemetry, caching keys, and cost math. Updated pricing tables and lowered temperature to 0.6. Title generation and streaming paths updated accordingly.
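The pricing change in src/lib/ai.ts can be illustrated with the rates quoted later in this review ($1.00 per 1M input tokens, $3.00 per 333,333 output tokens). MODEL_PRICING and calculateCost mirror identifiers the review discusses, but the exact shapes here are assumptions:

```typescript
const MODEL_ID = "moonshotai/kimi-k2-instruct" as const;

// Per-token dollar rates derived from the quoted Groq pricing.
const MODEL_PRICING: Record<string, { input: number; output: number }> = {
  [MODEL_ID]: {
    input: 1.0 / 1_000_000, // $1.00 per 1M input tokens
    output: 3.0 / 333_333,  // $3.00 per 333,333 output tokens
  },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const pricing = MODEL_PRICING[model];
  if (!pricing) return 0; // unknown model: report zero rather than throw
  return inputTokens * pricing.input + outputTokens * pricing.output;
}
```

Keeping the model ID in one constant also addresses the duplication the review flags below.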

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant UI as EnhancedChatInterface
  participant AI as AI Response Source
  User->>UI: Send message
  UI->>AI: Request response
  alt Streaming (AsyncIterable)
    AI-->>UI: text chunks
    loop accumulate (cap 50k chars)
      UI->>UI: append chunk
    end
    UI->>UI: estimate tokens, set cost metadata
  else String/Other
    AI-->>UI: final text
    UI->>UI: set content, estimate tokens, set cost
  end
  UI-->>User: Render assistant message
  note over UI: Guards for messages/chats shape
sequenceDiagram
  autonumber
  participant Hook as useUsageTracking
  participant API as /hono/trpc/billing.getUserSubscription
  Hook->>API: POST with JSON {} + credentials (+Authorization if token)
  alt 2xx
    API-->>Hook: JSON tRPC envelope
    Hook->>Hook: return result.data or null
  else non-OK / error
    API-->>Hook: status/statusText
    Hook->>Hook: log error and return null
  end
sequenceDiagram
  autonumber
  participant Caller as Call Site
  participant AIlib as src/lib/ai.ts
  Caller->>AIlib: generate/stream (no API change)
  AIlib->>AIlib: select model = moonshotai/kimi-k2-instruct
  AIlib->>AIlib: temperature = 0.6
  AIlib->>AIlib: compute pricing & telemetry with new model
  AIlib-->>Caller: output + cost metadata
  note over AIlib: Caching keys & spans use new model id

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Poem

A nibble of deps, a stream in the breeze,
I stitch the chunks with gentle ease.
New moonshot sails where prompts take flight,
Tokens tallied in softer light.
I thump my paw—POST, not GET!
Carrots secured; no errors yet. 🥕✨



Take the more secure origin validation from main branch that includes
additional isValidOrigin checks for both trusted and allowed origins.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

claude bot commented Aug 23, 2025

Claude finished @otdoges's task.


PR Review Complete ✅

I've thoroughly reviewed this PR and the fixes look solid overall. Here's my detailed feedback:

Review Tasks:

  • Analyze EnhancedChatInterface.tsx changes for Promise handling fixes
  • Review useUsageTracking.ts tRPC endpoint corrections
  • Examine ai.ts improvements and array handling
  • Check package.json dependency changes
  • Address GitGuardian security alert
  • Provide overall assessment and recommendations

Positive Changes

1. Promise Handling Fix (EnhancedChatInterface.tsx:144-169)

Excellent fix! The streaming response handling is now properly implemented:

  • ✅ Correctly consumes textStream from AI responses instead of passing Promise objects to Convex
  • ✅ Proper async iteration with for await loop
  • ✅ Smart chunking with size limits (50,000 chars) to prevent memory issues
  • ✅ Fallback handling for unexpected response formats

2. tRPC Endpoint Correction (useUsageTracking.ts:159)

Good catch! The URL path correction:

  • ✅ Fixed /trpc/ → /hono/trpc/; the corrected path aligns with the routing architecture
  • ✅ POST method correctly used for tRPC queries
  • ✅ Proper error handling with status logging

3. Enhanced Array Safety

Smart defensive programming:

  • ✅ Robust array checks in EnhancedChatInterface.tsx:238-242 prevent undefined errors
  • ✅ Safe property access with proper type guards
  • ✅ Fallback to empty arrays prevents crashes

⚠️ Security & Quality Concerns

1. GitGuardian Alert - HIGH PRIORITY

The .env.deployment.template file contains a flagged secret. Action needed:

  • 🔍 Review the flagged content in commit 72993ac
  • 🔄 If it's a real secret, rotate it immediately
  • 📝 If it's a template placeholder, consider using obvious placeholder text like your_key_here

2. Metadata Handling (EnhancedChatInterface.tsx:176-180)

Potential improvement:

// Current - hardcoded values
metadata: {
  model: 'ai-assistant',
  tokens: Math.floor(responseContent.length / 4), // Rough estimate
  cost: 0.01 // Default cost
}

Recommendation: Use actual values from AI response when available instead of estimates.
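That recommendation might be sketched like this; buildMessageMetadata, the Usage shape, and the default per-token rate are hypothetical, and the fallback keeps the existing length/4 heuristic:

```typescript
interface Usage { inputTokens?: number; outputTokens?: number }

// Prefer real usage numbers from the AI SDK result; fall back to the
// character-count heuristic only when usage is unavailable.
function buildMessageMetadata(
  content: string,
  model: string,
  usage?: Usage,
  costPerOutputToken = 0.000009, // assumed rate; replace with the real pricing table
) {
  const tokens = usage?.outputTokens ?? Math.floor(content.length / 4); // rough estimate fallback
  return { model, tokens, cost: tokens * costPerOutputToken };
}
```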

3. Error Handling Enhancement

The error handling is good but could be more specific:

  • Consider different error types (network, API rate limits, authentication)
  • Add more granular logging for debugging

🔍 Code Quality Assessment

Good Practices:

  • ✅ Proper TypeScript usage with type guards
  • ✅ Consistent error logging with structured data
  • ✅ Memory-conscious streaming with size limits
  • ✅ Fallback handling for edge cases

Performance Considerations:

  • ✅ Efficient string concatenation using array join
  • ✅ Sensible chunking limits prevent browser hanging
  • ✅ Proper cleanup of streaming resources

📋 Final Recommendations

Before Merging:

  1. 🚨 HIGH PRIORITY: Address the GitGuardian security alert
  2. 🧪 Testing: Verify the tRPC endpoint fix works in development
  3. 📊 Monitoring: Check that metadata values are reasonable in production

Future Enhancements:

  1. Add proper token counting instead of character-based estimates
  2. Implement retry logic for failed AI responses
  3. Consider adding response validation for AI output

🎯 Overall Assessment

✅ APPROVE - This PR successfully addresses the core JavaScript errors and improves system stability. The Promise handling fix is particularly well-implemented, and the defensive programming approach prevents runtime crashes.

Risk Level: Low (after security alert is addressed)
Code Quality: High
Test Coverage: Adequate for bug fixes

The changes are focused, well-reasoned, and follow good practices. Once the security concern is resolved, this is ready to merge.


coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/lib/ai.ts (1)

21-22: Bug: logger is undefined — Sentry React SDK doesn’t export logger

const { logger } = Sentry; yields undefined at runtime, causing Cannot read properties of undefined (reading 'info') the moment logging is used. Use your project logger (as elsewhere) or fallback to console.

Apply:

-import * as Sentry from '@sentry/react'
+import * as Sentry from '@sentry/react'
+import { logger } from '@/lib/error-handler'
@@
-const { logger } = Sentry;

If you prefer not to depend on the app logger here, replace calls with console.

🧹 Nitpick comments (15)
package.json (1)

126-126: Two HTML sanitization libs — consider consolidation

You now have both sanitize-html and dompurify in dependencies. Running two sanitization approaches tends to fragment XSS handling. Prefer a single, audited path and standardize via a sanitizeText helper used across components.

Happy to provide a repo-wide codemod to funnel all sanitization through one function.

src/hooks/useUsageTracking.ts (3)

158-169: tRPC HTTP call shape: header casing and body format

  • Header: use canonical Authorization casing. Header names are case-insensitive but consistent casing avoids surprises in some middleware.
  • Body: a raw {} may not match tRPC’s expected payload shape depending on your Hono adapter configuration. Some setups expect JSON-RPC envelopes or an input field. If this works locally, great—otherwise switch to a typed tRPC client and avoid manual fetch plumbing.

Apply header casing fix locally:

-          headers: {
-            'Content-Type': 'application/json',
-            ...(token ? { authorization: `Bearer ${token}` } : {}),
-          },
+          headers: {
+            'Content-Type': 'application/json',
+            ...(token ? { Authorization: `Bearer ${token}` } : {}),
+          },

Optional hardening (timeout + accept):

-        const res = await fetch(url, {
+        const res = await fetch(url, {
           method: 'POST', // tRPC queries use POST
           headers: {
             'Content-Type': 'application/json',
             ...(token ? { Authorization: `Bearer ${token}` } : {}),
           },
           body: JSON.stringify({}), // Empty body for query
           credentials: 'include',
+          signal: AbortSignal.timeout(15000),
+          // accept JSON explicitly (some proxies/edges are picky)
+          // headers: { Accept: 'application/json', ... }
         });

Longer-term: use a typed tRPC client with httpBatchLink to /hono/trpc so the request shape and response typing are guaranteed by the router. I can sketch this if you want.


170-176: Surface non-2xx body for diagnostics (optional)

You already log status/statusText. When debugging, dumping a short preview of the error payload (if JSON) dramatically speeds up triage.

-        if (!res.ok) {
-          console.error('tRPC subscription fetch failed:', res.status, res.statusText);
-          return null;
-        }
+        if (!res.ok) {
+          let bodyPreview = '';
+          try { bodyPreview = JSON.stringify(await res.clone().json()).slice(0, 300); } catch {}
+          console.error('tRPC subscription fetch failed:', res.status, res.statusText, bodyPreview);
+          return null;
+        }

174-179: Return type is implicit and untyped — leverage tRPC types

json?.result?.data ?? null is fine at runtime, but we lose type safety. Prefer using your router’s output type (e.g., RouterOutputs['billing']['getUserSubscription']) so consumers of getSubscription() are type-safe.

If you can expose your AppRouter type, I can wire this hook to infer the exact return type and replace the manual fetch with a typed client call.

src/lib/ai.ts (5)

29-33: Model pricing constants — verify rates and centralize model ID

Good to encapsulate pricing. Two suggestions:

  • Verify moonshotai/kimi-k2-instruct is supported by Groq and the rates are correct for your account/region.
  • Extract the model ID into a constant to avoid typos and duplicated string literals.
-const MODEL_PRICING = {
-  'moonshotai/kimi-k2-instruct': {
+const MODEL_ID = 'moonshotai/kimi-k2-instruct' as const;
+const MODEL_PRICING = {
+  [MODEL_ID]: {
     // Pricing based on Groq docs: $1.00 / 1M input tokens, $3.00 / 333,333 output tokens
     input: 1.00 / 1_000_000,
     output: 3.00 / 333_333,
   }
};

Follow-up: replace repeated 'moonshotai/kimi-k2-instruct' usages with MODEL_ID throughout this file.


160-167: Deduplicate hard-coded model string; add max tokens to control cost

You repeatedly embed the same model ID and omit an explicit maxTokens. Centralizing the ID improves maintainability, and setting maxTokens helps bound costs and latency.

Apply a subset of these replacements (assuming MODEL_ID from above):

-return (await groq)('moonshotai/kimi-k2-instruct');
+return (await groq)(MODEL_ID);
@@
-return groq('moonshotai/kimi-k2-instruct');
+return groq(MODEL_ID);
@@
-        aiMonitoring.recordOperation({
+        aiMonitoring.recordOperation({
           operation: 'generateText',
-          model: 'moonshotai/kimi-k2-instruct',
+          model: MODEL_ID,
@@
-        const estimatedCost = calculateCost('moonshotai/kimi-k2-instruct', estimatedInputTokens, estimatedOutputTokens);
+        const estimatedCost = calculateCost(MODEL_ID, estimatedInputTokens, estimatedOutputTokens);
@@
-        span.setAttribute("model", "moonshotai/kimi-k2-instruct");
+        span.setAttribute("model", MODEL_ID);
@@
-              () => withTimeout(generateText({
+              () => withTimeout(generateText({
                 model: currentModel,
                 prompt,
-                temperature: 0.6,
+                temperature: 0.6,
+                maxTokens: 1500,
               }), 60_000),
               'generateText',
-              { model: 'moonshotai/kimi-k2-instruct', promptLength: prompt.length }
+              { model: MODEL_ID, promptLength: prompt.length }
@@
-        const actualCost = usage ? calculateCost('moonshotai/kimi-k2-instruct', usage.inputTokens || 0, usage.outputTokens || 0) : estimatedCost;
+        const actualCost = usage ? calculateCost(MODEL_ID, usage.inputTokens || 0, usage.outputTokens || 0) : estimatedCost;
@@
-          model: 'moonshotai/kimi-k2-instruct',
+          model: MODEL_ID,
@@
-          model: "moonshotai/kimi-k2-instruct",
+          model: MODEL_ID,
@@
-          model: 'moonshotai/kimi-k2-instruct',
+          model: MODEL_ID,
@@
-        const estimatedCost = calculateCost('moonshotai/kimi-k2-instruct', estimatedInputTokens, estimatedOutputTokens);
+        const estimatedCost = calculateCost(MODEL_ID, estimatedInputTokens, estimatedOutputTokens);
@@
-        span.setAttribute("model", "moonshotai/kimi-k2-instruct");
+        span.setAttribute("model", MODEL_ID);
@@
-              async () => streamText({
-                model: await model,
+              async () => streamText({
+                model: model,
                 messages: [
                   { role: 'system', content: systemPrompt },
                   { role: 'user', content: prompt }
                 ],
-                temperature: 0.6,
+                temperature: 0.6,
+                maxTokens: 1500,
               }),
               'streamText',
-              { model: 'moonshotai/kimi-k2-instruct', promptLength: prompt.length }
+              { model: MODEL_ID, promptLength: prompt.length }
@@
-          model: 'moonshotai/kimi-k2-instruct',
+          model: MODEL_ID,
@@
-          model: "moonshotai/kimi-k2-instruct",
+          model: MODEL_ID,
@@
-          model: 'moonshotai/kimi-k2-instruct',
+          model: MODEL_ID,

Also applies to: 184-185, 219-221, 231-231, 239-242, 248-249, 252-253, 266-266, 275-275, 300-301, 370-370, 381-381, 392-395, 404-405, 412-412, 418-418


401-409: Cost accounting on streaming uses estimated cost only

For streaming you addTodayCost(estimatedCost) and record estimated tokens. If the SDK exposes actual usage upon stream completion, consider reconciling the estimate to avoid over/under‑charging.

If we can extract usage from the stream result or final aggregated text length, I can wire a reconciliation step and adjust daily cost accordingly.


164-167: Function name drift: getGemmaModel no longer returns Gemma

It now returns Kimi. Rename to getTitleModel or similar to prevent confusion for future maintainers.

I can include the rename and update call sites in this PR if you want.


482-498: Explicit return types for exported APIs

generateAIResponse, streamAIResponse, and generateChatTitleFromMessages rely on inference. For exported functions, add explicit return types to satisfy strict TS guidance and improve DX.

Example pattern:

type StreamResult = Awaited<ReturnType<typeof streamText>>;

export async function generateAIResponse(...): Promise<string> { ... }
export async function streamAIResponse(...): Promise<StreamResult> { ... }
export async function generateChatTitleFromMessages(...): Promise<string> { ... }
src/components/EnhancedChatInterface.tsx (6)

151-166: Good fix: consume textStream to avoid passing a Promise to Convex

The chunked consumption and 50k cap solve the earlier runtime error and add a safety bound. Two small refinements:

  • Hoist 50000 into a named constant for discoverability and consistency with other security limits.
  • Consider progressive UI updates (append chunks to a “draft assistant bubble”) for better UX; current approach waits until completion.

Add a local constant near the top of the file:

const MAX_STREAM_RESPONSE_LENGTH = 50_000 as const;

Then:

-          if (totalLength > 50000) {
+          if (totalLength > MAX_STREAM_RESPONSE_LENGTH) {
             break;
           }
@@
-        responseContent = chunks.join('').slice(0, 50000);
+        responseContent = chunks.join('').slice(0, MAX_STREAM_RESPONSE_LENGTH);

172-181: Sanitize AI output before storing; align metadata with model in ai.ts

  • Sanitize responseContent before saving to Convex to reduce XSS risk on render. Even if you render safely elsewhere, normalizing at write-time is safer.
  • The metadata model: 'ai-assistant' is vague. Consider syncing with the actual model ID used in ai.ts to keep analytics consistent.
-      await createMessageMutation({
+      await createMessageMutation({
         chatId: currentChatId as Id<'chats'>,
-        content: responseContent,
+        content: sanitizeText ? sanitizeText(responseContent) : responseContent,
         role: 'assistant',
         metadata: {
-          model: 'ai-assistant',
+          model: 'moonshotai/kimi-k2-instruct', // or import a shared MODEL_ID
           tokens: Math.floor(responseContent.length / 4), // Rough estimate
           cost: 0.01 // Default cost
         }
       });

If sanitizeText isn’t available from @/utils/security, I can add it and standardize usage.


184-190: Title generation trigger may be flaky for “first message”

You check messages.messages.length === 0 after creating the user message. Depending on Convex subscription timing, this might still be 0 or already 1. A more robust signal is to branch on whether we just created a new chat.

-      if (messages && typeof messages === 'object' && 'messages' in messages && Array.isArray(messages.messages) && messages.messages.length === 0) {
+      const isNewChat = !selectedChatId;
+      if (isNewChat) {
         await generateChatTitleFromMessages([
           { content: userInput, role: 'user' },
           { content: responseContent, role: 'assistant' }
         ]);
         // Update chat title (would need a mutation for this)
       }

You already know whether you created currentChatId in this call—using that signal avoids race conditions.


238-240: Guard shape for messages likely overfits; prefer array-or-empty

Unless your query returns a { messages: [...] } envelope, this might silently drop data. If api.messages.getChatMessages returns Message[] (typical Convex pattern), use a simple array guard.

-  if (messages && typeof messages === 'object' && 'messages' in messages && Array.isArray(messages.messages)) {
-    return messages.messages;
-  }
+  if (Array.isArray(messages)) return messages;
   return [];

If your server does return { messages }, ignore this.


288-289: Same guard issue for chats prop

Similar note here: Convex queries typically return arrays. If that’s your case, simplify to avoid shape drift.

-                chats={chats && typeof chats === 'object' && 'chats' in chats && Array.isArray(chats.chats) ? chats.chats : []}
+                chats={Array.isArray(chats) ? chats : []}

344-369: External search results — restrict URL schemes

You render arbitrary URLs from the web. Add a tiny guard to only allow http(s) links to avoid accidental javascript: navigation.

-                              <a href={result.url} target="_blank" rel="noopener noreferrer" 
+                              <a href={/^https?:\/\//i.test(result.url) ? result.url : '#'}
+                                 target="_blank" rel="noopener noreferrer" 
                                  className="text-xs text-blue-400 hover:text-blue-300 transition-colors">
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6b92285 and bfc239b.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • package.json (2 hunks)
  • src/components/EnhancedChatInterface.tsx (4 hunks)
  • src/hooks/useUsageTracking.ts (1 hunks)
  • src/lib/ai.ts (12 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/{pages,components}/**/*.tsx

📄 CodeRabbit inference engine (.cursor/rules/authentication-patterns.mdc)

Handle all authentication states in components by showing a loading spinner when loading, a sign-in prompt when unauthenticated, and protected content when authenticated.

Files:

  • src/components/EnhancedChatInterface.tsx
**/*ChatInterface*.tsx

📄 CodeRabbit inference engine (.cursor/rules/chat-ui-patterns.mdc)

Chat interfaces should follow the specified component structure: state management for selectedChatId, input, isTyping; useQuery for chats and messages; layout with ChatSidebar and ChatArea components.

Files:

  • src/components/EnhancedChatInterface.tsx
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/convex-security.mdc)

**/*.{ts,tsx}: All Convex queries and mutations MUST use proper authentication. Never accept user IDs from client parameters.
Always verify user owns the data before allowing access.
Use the authenticated user's identity.subject for user references.
Implement proper error messages that don't leak information.
Authentication verification in every function.
Authorization checks for data ownership.
Input validation and sanitization.
Error handling without information leakage.

**/*.{ts,tsx}: Use Sonner for toast notifications to provide consistent user feedback, including success, error, and loading states.
Always handle errors gracefully using try-catch blocks in asynchronous functions, providing user feedback and logging errors.
Provide specific, actionable error messages for form validation errors using toast notifications.
Handle common network error scenarios in catch blocks, providing appropriate toast messages for network errors, authentication errors, and unexpected errors.

If using TypeScript, use an enum to store flag names.

Strict TypeScript must be used with no 'any' types allowed

**/*.{ts,tsx}: NEVER use any type - use proper TypeScript types
Use unknown for truly unknown data types
Implement proper interface definitions
Do not use empty interfaces; use a type alias instead (e.g., type InputProps = ... instead of interface InputProps {})
All function parameters must be typed
All return types should be explicit for public APIs
Use proper generic constraints
Implement discriminated unions for state management
Use proper interface definitions for error handling types (e.g., interface ValidationResult { isValid: boolean; error?: string; })

**/*.{ts,tsx}: Always sanitize user input before storing or displaying using a sanitization function like sanitizeText.
Implement comprehensive input validation, including length checks and detection of malicious patterns, as shown in the validateInput function.
Define and use security constants suc...

Files:

  • src/components/EnhancedChatInterface.tsx
  • src/hooks/useUsageTracking.ts
  • src/lib/ai.ts
**/*.tsx

📄 CodeRabbit inference engine (.cursor/rules/error-handling.mdc)

**/*.tsx: Always provide loading feedback to users during asynchronous operations.
Use proper error boundaries to handle component crashes and display user-friendly error UI.

Use proper React component typing (e.g., const MyComponent: React.FC<Props> = ...)

**/*.tsx: Use the SafeText React component for all user-generated content to ensure safe text display.
NEVER use dangerouslySetInnerHTML with user content.
NEVER use direct string interpolation in HTML with user content.

Files:

  • src/components/EnhancedChatInterface.tsx
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/posthog-integration.mdc)

**/*.{js,jsx,ts,tsx}: Use a consistent naming convention for this storage. enum/const object members should be written UPPERCASE_WITH_UNDERSCORE.
If a custom property for a person or event is at any point referenced in two or more files or two or more callsites in the same file, use an enum or const object, as above in feature flags.

Files:

  • src/components/EnhancedChatInterface.tsx
  • src/hooks/useUsageTracking.ts
  • src/lib/ai.ts
src/components/**/*.tsx

📄 CodeRabbit inference engine (.cursor/rules/project-overview.mdc)

Input sanitization and validation must be implemented throughout the codebase to provide XSS protection

Files:

  • src/components/EnhancedChatInterface.tsx
src/**/*.tsx

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.tsx: Use React Hook Form with Zod validation for client-side forms
Prevent XSS by sanitizing any user-generated content before rendering (avoid unsafe HTML, or sanitize it)
Implement proper error handling with typed error boundaries in React

Files:

  • src/components/EnhancedChatInterface.tsx
🧠 Learnings (4)
📚 Learning: 2025-08-09T23:03:01.787Z
Learnt from: CR
PR: otdoges/zapdev#0
File: .cursor/rules/chat-ui-patterns.mdc:0-0
Timestamp: 2025-08-09T23:03:01.787Z
Learning: Applies to **/*.{tsx} : Show typing indicators during AI responses in chat interfaces.

Applied to files:

  • src/components/EnhancedChatInterface.tsx
📚 Learning: 2025-08-09T23:03:01.787Z
Learnt from: CR
PR: otdoges/zapdev#0
File: .cursor/rules/chat-ui-patterns.mdc:0-0
Timestamp: 2025-08-09T23:03:01.787Z
Learning: Applies to **/*ChatInterface*.tsx : Chat interfaces should follow the specified component structure: state management for selectedChatId, input, isTyping; useQuery for chats and messages; layout with ChatSidebar and ChatArea components.

Applied to files:

  • src/components/EnhancedChatInterface.tsx
📚 Learning: 2025-08-09T23:03:01.787Z
Learnt from: CR
PR: otdoges/zapdev#0
File: .cursor/rules/chat-ui-patterns.mdc:0-0
Timestamp: 2025-08-09T23:03:01.787Z
Learning: Applies to **/*.{tsx} : Ensure proper keyboard navigation throughout the chat interface for accessibility.

Applied to files:

  • src/components/EnhancedChatInterface.tsx
📚 Learning: 2025-08-09T23:03:01.787Z
Learnt from: CR
PR: otdoges/zapdev#0
File: .cursor/rules/chat-ui-patterns.mdc:0-0
Timestamp: 2025-08-09T23:03:01.787Z
Learning: Applies to **/*.{tsx} : Implement auto-scroll to bottom on new messages in chat interfaces.

Applied to files:

  • src/components/EnhancedChatInterface.tsx
🧬 Code graph analysis (3)
src/components/EnhancedChatInterface.tsx (1)
scripts/test-ai-only.js (2)
  • streamResult (86-86)
  • chunks (87-87)
src/hooks/useUsageTracking.ts (2)
src/lib/auth-token.ts (1)
  • authTokenManager (34-34)
api-dev-server.ts (2)
  • json (463-465)
  • json (598-609)
src/lib/ai.ts (3)
src/lib/ai-utils.ts (3)
  • withRetry (97-157)
  • monitorAIOperation (250-308)
  • withTimeout (13-20)
src/lib/usage-service.ts (1)
  • recordAIConversation (80-98)
src/lib/ai-monitoring.ts (1)
  • aiMonitoring (366-366)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: claude-review
🔇 Additional comments (2)
package.json (2)

39-39: New dependency @ai-sdk/vercel — confirm intent and peer deps

Looks fine to add, but please verify:

  • It’s actually referenced in code in this PR or near-future work, otherwise it bloats install size.
  • Peer dependencies (if any) are satisfied and versions align with ai@^5.0.11.

If unused today, I can open a follow-up chore PR to wire or remove it—your call.


133-133: Confirmed: @hookform/resolvers v5.1.0+ fully supports Zod v4

You can safely upgrade to @hookform/resolvers@^5.1.0 (or newer) and continue using zodResolver without changes—v5.1.0 added first-class Zod 4 support while retaining Zod 3 compatibility (mail-archive.com).

Please note that Zod v4 introduced some breaking changes from v3. Review and adjust any schemas that rely on removed or changed APIs:

  • Unified error API under a single error param; dropped invalid_type_error/required_error and renamed errorMap to error (zod.dev).
  • Refine no longer accepts type-predicate functions for narrowing, and ctx.path is no longer populated in refinements.
  • .strict(), .passthrough(), and .strip() are deprecated in favor of z.strictObject(), z.looseObject(), and default stripping.
  • z.record() single-argument usage removed; use z.partialRecord() for optional keys, and note that records with enum keys are now exhaustive.
  • Intersection errors now throw a regular Error instead of a ZodError for unmergeable types.
  • Internal API changes (e.g. ._def moved to ._zod.def, issue types renamed under z.core.$ZodIssue*) won’t affect typical resolver usage.

For a full list of Zod v4 migration notes, see the official guide: https://zod.dev/v4/changelog.

@Jackson57279 Jackson57279 merged commit f0860e5 into main Aug 23, 2025
11 of 12 checks passed
@Jackson57279 Jackson57279 deleted the dev-branch branch August 23, 2025 19:05