
Conversation

@urugondavishnu
Contributor

This PR adds a new cookbook example that demonstrates how to use the TinyFish Web Agent API
to check where a manga is available by verifying multiple platforms in real time.

@coderabbitai

coderabbitai bot commented Jan 25, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough


Adds a Vite + React + TypeScript app and UI primitives, components (SearchHero, AgentCard, ResultsSummary), a coordination hook (useMangaSearch), utilities, Supabase client and types, Supabase function config, and two Supabase Edge functions: discover-manga-sites (uses Gemini to return candidate sites) and search-manga (invokes the Mino API and streams SSE). Flow: client triggers a search, discovery returns candidate sites, parallel per-site search functions call Mino for browser automation and stream live previews and final results back to the UI.

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant UI as SearchHero (Client)
    participant Hook as useMangaSearch
    participant Discover as discover-manga-sites (Supabase)
    participant SearchFn as search-manga (Supabase)
    participant Mino as Mino API / Browser Agent
    participant AgentUI as AgentCard + ResultsSummary

    User->>UI: Enter manga title
    UI->>Hook: search(title)
    Hook->>Discover: POST /discover-manga-sites (title)
    Discover-->>Hook: [site1, site2, ...]
    Hook->>AgentUI: init agents (idle)
    Hook-->>UI: isSearching = true

    par Parallel site searches
        Hook->>SearchFn: POST /search-manga (siteUrl, title)
        SearchFn->>Mino: POST Mino.run(goal, url)
        Mino-->>SearchFn: SSE (streamingUrl, progress, complete)
        SearchFn-->>Hook: SSE events (stream, message, complete)
        Hook->>AgentUI: updateAgent(status, streamingUrl, statusMessage)
    and
        Hook->>SearchFn: POST /search-manga (siteN, title)
        SearchFn->>Mino: POST Mino.run(...)
        Mino-->>SearchFn: SSE events...
        SearchFn-->>Hook: SSE events...
        Hook->>AgentUI: updateAgent(...)
    end

    Hook-->>UI: isSearching = false
    AgentUI->>User: show live previews and final results / visit links
🚥 Pre-merge checks | ✅ 2
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the main change: adding a complete Manga Availability Finder cookbook example demonstrating TinyFish Web Agent API integration. |
| Description check | ✅ Passed | The description is directly related to the changeset, explaining that the PR adds a cookbook example demonstrating TinyFish Web Agent API usage for checking manga availability across multiple platforms. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai bot left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/package.json`:
- Around line 2-4: The package name in package.json is generic
("vite_react_shadcn_ts"); update the "name" field to a descriptive identifier
like "manga-availability-finder" to match the cookbook example and improve
clarity in logs and tooling. Open package.json, change the value of the
top-level "name" property (currently vite_react_shadcn_ts) to the new name,
ensure it follows npm package name rules (lowercase, hyphens) and run any
necessary tooling (e.g., reinstall or rebuild) to pick up the new package name.
- Around line 15-88: The package.json currently lists many dependencies (see
"dependencies" and "devDependencies") but lacks a packageManager field and no
lockfile is committed; fix by choosing a package manager (npm, pnpm, or yarn),
run the corresponding install to generate the lockfile (package-lock.json,
pnpm-lock.yaml, or yarn.lock), add a top-level "packageManager" entry to
package.json that pins the tool and major version (e.g., "npm@9" or "pnpm@8"),
and commit both the updated package.json and the generated lockfile so installs
become deterministic across environments.

In `@Manga-Availability-Finder/src/components/AgentCard.tsx`:
- Around line 77-81: The render currently calls new URL(siteUrl) inside the
AgentCard component which will throw for scheme-less or malformed strings; wrap
the parsing in a small safe normalization/parse step (either a helper like
parseHostname or inline) that first ensures a scheme (e.g., prepend "https://"
if missing) and then attempts new URL(...) inside try/catch, returning
URL.hostname on success and falling back to the original siteUrl or an empty
string on failure; replace the direct new URL(siteUrl).hostname usage with the
safe result so the card never crashes on bad API data.

In `@Manga-Availability-Finder/src/components/ResultsSummary.tsx`:
- Around line 87-90: The list uses unstable keys (siteUrl) causing diff issues;
update the SearchResult type/interface to include the existing agent id, ensure
the agent mapping in Index (1).tsx passes that id into each SearchResult when
building results, and then change ResultsSummary to use key={result.id}
(replacing key={result.siteUrl}); reference the SearchResult interface, the
agent mapping in Index (1).tsx that constructs the results array, and the
ResultsSummary rendering loop to make these changes.

In `@Manga-Availability-Finder/src/components/ui/card.tsx`:
- Around line 17-22: The ref generic for CardTitle passed to React.forwardRef is
incorrect (HTMLParagraphElement) while the component renders an <h3>; update the
forwardRef signature so the first generic type is HTMLHeadingElement (keeping
the props generic as React.HTMLAttributes<HTMLHeadingElement>) so the ref type
matches the rendered element in the CardTitle component and keep
CardTitle.displayName as-is.

In `@Manga-Availability-Finder/src/hooks/useMangaSearch.ts`:
- Around line 31-47: The SSE fetch in useMangaSearch.ts (the POST to
`${supabaseUrl}/functions/v1/search-manga` inside searchSite()) can hang because
reader.read() may never resolve; add an AbortController per request, pass
controller.signal into fetch, and set a per-request timeout that calls
controller.abort(); in the reader loop catch the abort (or check signal) and
gracefully resolve/reject searchSite() and clear the timeout so Promise.all()
can complete and isSearching is unset; ensure the timeout is cleared on normal
completion and that abort-related errors are handled separately from other
errors.
- Around line 50-96: The SSE handling currently drops partial JSON when chunks
split lines; modify the loop that reads from response.body (the reader and
decoder) to use decoder.decode(value, { stream: true }) and maintain a leftover
buffer string that accumulates chunks, splitting on "\n" and keeping the final
partial line in leftover between reads; parse only complete lines that start
with "data: ", and updateAgent as before (references: reader.read(), decoder,
updateAgent(agent.id,...)); additionally track a boolean terminalReceived that
flips true when a "complete" or "error" event is processed and after the read
loop ends, if terminalReceived is false set the agent status to "error" with an
appropriate statusMessage and clear streamingUrl to avoid agents stuck in
"searching".

In `@Manga-Availability-Finder/src/pages/Index (1).tsx`:
- Around line 1-83: The file name contains a copy artifact "Index (1).tsx" which
doesn't match the import used elsewhere (importing ./pages/Index); rename the
component file to Index.tsx and ensure the default-exported component Index
remains unchanged (component symbol: Index) so existing imports (e.g., from
App.tsx) continue to resolve; after renaming, run a quick build or TypeScript
check to confirm there are no stale imports referencing "Index (1).tsx" and
update any references if found.

In `@Manga-Availability-Finder/tsconfig (1).json`:
- Around line 1-3: The TypeScript config file is incorrectly named "tsconfig
(1).json" so Vite/tsc ignore it; rename this file to tsconfig.json (removing the
" (1)") so tooling finds it, and verify the "references" array
(tsconfig.app.json and tsconfig.node.json) remains intact; alternatively, if you
must keep the name, update build scripts (npm scripts/CI) to pass -p "tsconfig
(1).json" to tsc/Vite, but the preferred fix is renaming to tsconfig.json.
🧹 Nitpick comments (6)
Manga-Availability-Finder/public/robots.txt (1)

1-14: Consider collapsing to a single allow-all block.

If you don't need crawler-specific rules, a single User-agent: * with Allow: / keeps the file shorter and easier to maintain.

♻️ Proposed simplification
-User-agent: Googlebot
-Allow: /
-
-User-agent: Bingbot
-Allow: /
-
-User-agent: Twitterbot
-Allow: /
-
-User-agent: facebookexternalhit
-Allow: /
-
-User-agent: *
-Allow: /
+User-agent: *
+Allow: /
Manga-Availability-Finder/tsconfig (1).json (1)

9-14: Consider tightening TypeScript strictness for safer checks.
Disabling noImplicitAny, strictNullChecks, and unused checks significantly reduces type safety. If this isn’t intentional, enabling stricter options early helps catch issues before the UI grows. If you still need to compile JS files, keep allowJs: true.

♻️ Possible stricter baseline
   "compilerOptions": {
     "baseUrl": ".",
     "paths": {
       "@/*": ["./src/*"]
     },
-    "noImplicitAny": false,
-    "noUnusedParameters": false,
-    "skipLibCheck": true,
-    "allowJs": true,
-    "noUnusedLocals": false,
-    "strictNullChecks": false
+    "strict": true,
+    "noImplicitAny": true,
+    "noUnusedParameters": true,
+    "noUnusedLocals": true,
+    "strictNullChecks": true,
+    "skipLibCheck": true
   }
Manga-Availability-Finder/src/pages/NotFound.tsx (2)

7-9: Consider downgrading expected 404s from console.error.

Line 8 logs an expected 404 as an error; this can trigger noisy error reporting in production. Consider console.warn/info or gating to dev-only logging.

♻️ Example adjustment
-  useEffect(() => {
-    console.error("404 Error: User attempted to access non-existent route:", location.pathname);
-  }, [location.pathname]);
+  useEffect(() => {
+    if (import.meta.env.DEV) {
+      console.warn("404: User attempted to access non-existent route:", location.pathname);
+    }
+  }, [location.pathname]);

1-18: Use <Link> for internal navigation to avoid a full page reload.

Line 16 uses a raw anchor which will refresh the app and drop client state. Prefer Link from react-router-dom for SPA navigation.

♻️ Proposed change
-import { useLocation } from "react-router-dom";
+import { Link, useLocation } from "react-router-dom";
...
-        <a href="/" className="text-primary underline hover:text-primary/90">
-          Return to Home
-        </a>
+        <Link to="/" className="text-primary underline hover:text-primary/90">
+          Return to Home
+        </Link>
Manga-Availability-Finder/docs/MINO_API_INTEGRATION.md (2)

9-41: Add language specifier to fenced code block.

The ASCII architecture diagram should have a language specifier for consistency and to satisfy markdown linting. Use text for plain text blocks.

📝 Suggested fix
-```
+```text
 ┌─────────────────────────────────────────────────────────────────────────────┐

178-200: Add language specifier to the goal prompt code block.

This natural language prompt block should have a language specifier. Use text for plain text content.

📝 Suggested fix
-```
+```text
 You are searching for a manga/webtoon called "${mangaTitle}" on this website.

Comment on lines +2 to +4
"name": "vite_react_shadcn_ts",
"private": true,
"version": "0.0.0",

⚠️ Potential issue | 🟡 Minor

Consider renaming the package to match the cookbook example.

The current name (vite_react_shadcn_ts) is generic and can be confusing in logs and tooling for this specific example. A more descriptive name (e.g., manga-availability-finder) will be clearer.

💡 Possible adjustment
-  "name": "vite_react_shadcn_ts",
+  "name": "manga-availability-finder",

Comment on lines +15 to +88
"dependencies": {
"@hookform/resolvers": "^3.10.0",
"@radix-ui/react-accordion": "^1.2.11",
"@radix-ui/react-alert-dialog": "^1.1.14",
"@radix-ui/react-aspect-ratio": "^1.1.7",
"@radix-ui/react-avatar": "^1.1.10",
"@radix-ui/react-checkbox": "^1.3.2",
"@radix-ui/react-collapsible": "^1.1.11",
"@radix-ui/react-context-menu": "^2.2.15",
"@radix-ui/react-dialog": "^1.1.14",
"@radix-ui/react-dropdown-menu": "^2.1.15",
"@radix-ui/react-hover-card": "^1.1.14",
"@radix-ui/react-label": "^2.1.7",
"@radix-ui/react-menubar": "^1.1.15",
"@radix-ui/react-navigation-menu": "^1.2.13",
"@radix-ui/react-popover": "^1.1.14",
"@radix-ui/react-progress": "^1.1.7",
"@radix-ui/react-radio-group": "^1.3.7",
"@radix-ui/react-scroll-area": "^1.2.9",
"@radix-ui/react-select": "^2.2.5",
"@radix-ui/react-separator": "^1.1.7",
"@radix-ui/react-slider": "^1.3.5",
"@radix-ui/react-slot": "^1.2.3",
"@radix-ui/react-switch": "^1.2.5",
"@radix-ui/react-tabs": "^1.1.12",
"@radix-ui/react-toast": "^1.2.14",
"@radix-ui/react-toggle": "^1.1.9",
"@radix-ui/react-toggle-group": "^1.1.10",
"@radix-ui/react-tooltip": "^1.2.7",
"@supabase/supabase-js": "^2.91.0",
"@tanstack/react-query": "^5.83.0",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"cmdk": "^1.1.1",
"date-fns": "^3.6.0",
"embla-carousel-react": "^8.6.0",
"input-otp": "^1.4.2",
"lucide-react": "^0.462.0",
"next-themes": "^0.3.0",
"react": "^18.3.1",
"react-day-picker": "^8.10.1",
"react-dom": "^18.3.1",
"react-hook-form": "^7.61.1",
"react-resizable-panels": "^2.1.9",
"react-router-dom": "^6.30.1",
"recharts": "^2.15.4",
"sonner": "^1.7.4",
"tailwind-merge": "^2.6.0",
"tailwindcss-animate": "^1.0.7",
"vaul": "^0.9.9",
"zod": "^3.25.76"
},
"devDependencies": {
"@eslint/js": "^9.32.0",
"@tailwindcss/typography": "^0.5.16",
"@testing-library/jest-dom": "^6.6.0",
"@testing-library/react": "^16.0.0",
"@types/node": "^22.16.5",
"@types/react": "^18.3.23",
"@types/react-dom": "^18.3.7",
"@vitejs/plugin-react-swc": "^3.11.0",
"autoprefixer": "^10.4.21",
"eslint": "^9.32.0",
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-react-refresh": "^0.4.20",
"globals": "^15.15.0",
"jsdom": "^20.0.3",
"lovable-tagger": "^1.1.13",
"postcss": "^8.5.6",
"tailwindcss": "^3.4.17",
"typescript": "^5.8.3",
"typescript-eslint": "^8.38.0",
"vite": "^5.4.19",
"vitest": "^3.2.4"

⚠️ Potential issue | 🟠 Major


Add lockfile and packageManager field for reproducible installs.

With this wide dependency surface, a committed lockfile and a packageManager field are necessary; currently neither exists in package.json. Pin the toolchain version so installs are deterministic across environments.
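For illustration, the pinned metadata could look like this. The name echoes the rename suggested earlier, and npm@10.9.0 is a placeholder version, not a recommendation:

```json
{
  "name": "manga-availability-finder",
  "private": true,
  "version": "0.0.0",
  "packageManager": "npm@10.9.0"
}
```

After adding the field, run the matching install once and commit the generated lockfile (here, package-lock.json) alongside package.json.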


Comment on lines +77 to +81
<h3 className="font-display font-semibold text-foreground truncate">
{siteName}
</h3>
<p className="text-xs text-muted-foreground truncate">{new URL(siteUrl).hostname}</p>
</div>

⚠️ Potential issue | 🟠 Major


Guard new URL(siteUrl) against malformed or scheme-less URLs to prevent render-time crashes.

The siteUrl prop is passed directly from the discover-manga-sites API response without validation. If the API returns a URL without a scheme (e.g., mangadex.org), the new URL() constructor will throw and break the card render. Add safe normalization and error handling.

🛡️ Suggested fix
 export function AgentCard({
   siteName,
   siteUrl,
   status,
   statusMessage,
   streamingUrl,
   mangaTitle,
 }: AgentCardProps) {
   const config = statusConfig[status];
+  const normalizedUrl = /^https?:\/\//i.test(siteUrl) ? siteUrl : `https://${siteUrl}`;
+  let hostname = siteUrl;
+  try {
+    hostname = new URL(normalizedUrl).hostname;
+  } catch {
+    hostname = siteUrl;
+  }
 
   return (
@@
-              <p className="text-xs text-muted-foreground truncate">{new URL(siteUrl).hostname}</p>
+              <p className="text-xs text-muted-foreground truncate">{hostname}</p>
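The normalization in the suggested fix could also be factored into a small standalone helper, as the review hints. This is a sketch; `parseHostname` is the hypothetical name floated in the review, not existing code:

```typescript
// Hypothetical helper (name suggested in the review, not in the codebase):
// normalize a possibly scheme-less site URL and return its hostname,
// falling back to the raw input when parsing still fails.
function parseHostname(siteUrl: string): string {
  const normalized = /^https?:\/\//i.test(siteUrl) ? siteUrl : `https://${siteUrl}`;
  try {
    return new URL(normalized).hostname;
  } catch {
    return siteUrl;
  }
}

// Usage in the render (replacing the direct new URL(siteUrl).hostname call):
// <p className="text-xs text-muted-foreground truncate">{parseHostname(siteUrl)}</p>
```

Keeping the try/catch inside a helper means the card never throws at render time, whatever the discover-manga-sites API returns.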

Comment on lines +87 to +90
{results.map((result) => (
<div
key={result.siteUrl}
className="flex items-center justify-between p-3 rounded-lg bg-muted/30 border border-border/50"

⚠️ Potential issue | 🟡 Minor


Use a stable unique key for result rows.

The id field already exists in the source SiteAgent data but is not passed to ResultsSummary. Using siteUrl as a key is problematic because multiple agents could reference the same domain or mirror sites, causing list-diffing issues.

Add id to the SearchResult interface and pass it through from the agent mapping in Index (1).tsx, then use key={result.id} instead.

Required changes
interface SearchResult {
+  id: string;
   siteName: string;
   siteUrl: string;
   status: AgentStatus;
 }
results={agents.map((a) => ({
+  id: a.id,
   siteName: a.siteName,
   siteUrl: a.siteUrl,
   status: a.status,
 }))}
-            key={result.siteUrl}
+            key={result.id}

Comment on lines +17 to +22
const CardTitle = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLHeadingElement>>(
({ className, ...props }, ref) => (
<h3 ref={ref} className={cn("text-2xl font-semibold leading-none tracking-tight", className)} {...props} />
),
);
CardTitle.displayName = "CardTitle";

⚠️ Potential issue | 🟡 Minor

Fix type mismatch: ref type should be HTMLHeadingElement.

The ref is typed as HTMLParagraphElement but the rendered element is <h3>. This should be HTMLHeadingElement to match the actual DOM element.

🔧 Proposed fix
-const CardTitle = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLHeadingElement>>(
+const CardTitle = React.forwardRef<HTMLHeadingElement, React.HTMLAttributes<HTMLHeadingElement>>(
   ({ className, ...props }, ref) => (
     <h3 ref={ref} className={cn("text-2xl font-semibold leading-none tracking-tight", className)} {...props} />
   ),
 );
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/src/components/ui/card.tsx` around lines 17 - 22,
The ref generic for CardTitle passed to React.forwardRef is incorrect
(HTMLParagraphElement) while the component renders an <h3>; update the
forwardRef signature so the first generic type is HTMLHeadingElement (keeping
the props generic as React.HTMLAttributes<HTMLHeadingElement>) so the ref type
matches the rendered element in the CardTitle component and keep
CardTitle.displayName as-is.

Comment on lines 31 to 47
const supabaseUrl = import.meta.env.VITE_SUPABASE_URL;
const supabaseKey = import.meta.env.VITE_SUPABASE_PUBLISHABLE_KEY;

const response = await fetch(`${supabaseUrl}/functions/v1/search-manga`, {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${supabaseKey}`,
"apikey": supabaseKey,
},
body: JSON.stringify({ url: agent.siteUrl, mangaTitle: title }),
});

if (!response.ok) {
throw new Error(`HTTP error: ${response.status}`);
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

git ls-files | head -20

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 778


🏁 Script executed:

find . -name "useMangaSearch.ts" -o -name "package.json" | head -10

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 192


🏁 Script executed:

cat package.json | grep -A 20 '"browserslist"\|"version"\|"dependencies"\|"devDependencies"' | head -50

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 117


🏁 Script executed:

cat Manga-Availability-Finder/package.json

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 3014


🏁 Script executed:

wc -l Manga-Availability-Finder/src/hooks/useMangaSearch.ts

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 130


🏁 Script executed:

cat -n Manga-Availability-Finder/src/hooks/useMangaSearch.ts

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 6936


🌐 Web query:

AbortController fetch browser support compatibility 2025 modern browsers

💡 Result:

As of 2025, AbortController / AbortSignal for aborting fetch() requests is supported in all modern evergreen browsers:

  • Chrome / Chromium browsers (incl. new Edge): supported since Chrome 66 / Edge 16 (and later) [1]
  • Firefox: supported since Firefox 57 (and later) [1]
  • Safari (macOS): supported since Safari 12.1 (and later) [1]
  • iOS Safari: supported since iOS Safari 11.3 (and later) [1]
  • Not supported: Internet Explorer (all versions), Opera Mini [1]

MDN also lists AbortController as “widely available” and shows the same general minimum versions for desktop browsers (Chrome 66, Edge 16, Firefox 57, Safari 12.1) [2].

Sources:
[1] Can I use: “AbortController & AbortSignal” https://caniuse.com/abortcontroller
[2] MDN: “AbortController” https://developer.mozilla.org/en-US/docs/Web/API/AbortController


Add timeout/abort to prevent stalled SSE streams from blocking Promise.all().

The SSE stream reading loop (lines 48–96) can hang indefinitely if the connection stalls. When reader.read() never completes, searchSite() never resolves, causing Promise.all() at line 160 to block indefinitely and leaving isSearching stuck as true.

Add AbortController with a per-request timeout to abort stalled streams:

⏱️ Suggested fix (AbortController timeout)
-        const response = await fetch(`${supabaseUrl}/functions/v1/search-manga`, {
+        let timeoutId: ReturnType<typeof setTimeout> | undefined;
+        const controller = new AbortController();
+        timeoutId = setTimeout(() => controller.abort(), 60_000);
+
+        const response = await fetch(`${supabaseUrl}/functions/v1/search-manga`, {
           method: "POST",
           headers: {
             "Content-Type": "application/json",
             "Authorization": `Bearer ${supabaseKey}`,
             "apikey": supabaseKey,
           },
+          signal: controller.signal,
           body: JSON.stringify({ url: agent.siteUrl, mangaTitle: title }),
         });
@@
-      } catch (error) {
+      } catch (error) {
         console.error(`Error searching ${agent.siteName}:`, error);
         updateAgent(agent.id, {
           status: "error",
           statusMessage: error instanceof Error ? error.message : "Search failed",
           streamingUrl: undefined,
         });
+      } finally {
+        if (timeoutId) clearTimeout(timeoutId);
       }

AbortController is supported across all modern browsers (Chrome 66+, Firefox 57+, Safari 12.1+) and matches your project's target environment.
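As a sanity check, the abort-and-cleanup pattern above can be exercised without any network call. In the sketch below, `slowRead` is a stand-in for a possibly stalled `reader.read()`; every name is illustrative, not code from this PR:

```typescript
// Illustrative sketch (not PR code): slowRead stands in for a possibly
// stalled reader.read().
function slowRead(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve("chunk"), ms);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(new DOMException("Aborted", "AbortError"));
    });
  });
}

async function readWithTimeout(readMs: number, budgetMs: number): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), budgetMs);
  try {
    return await slowRead(readMs, controller.signal);
  } finally {
    clearTimeout(timer); // always clear, so a finished request can't abort later
  }
}

async function demo() {
  const fast = await readWithTimeout(10, 200); // completes inside the budget
  let timedOut = false;
  try {
    await readWithTimeout(200, 20); // stalls past the budget and gets aborted
  } catch (e) {
    timedOut = e instanceof DOMException && e.name === "AbortError";
  }
  return { fast, timedOut };
}
```

The real hook would pass `controller.signal` into the fetch options instead; the `finally`-side `clearTimeout` is what keeps a completed request from being aborted retroactively.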

🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/src/hooks/useMangaSearch.ts` around lines 31 - 47,
The SSE fetch in useMangaSearch.ts (the POST to
`${supabaseUrl}/functions/v1/search-manga` inside searchSite()) can hang because
reader.read() may never resolve; add an AbortController per request, pass
controller.signal into fetch, and set a per-request timeout that calls
controller.abort(); in the reader loop catch the abort (or check signal) and
gracefully resolve/reject searchSite() and clear the timeout so Promise.all()
can complete and isSearching is unset; ensure the timeout is cleared on normal
completion and that abort-related errors are handled separately from other
errors.

Comment on lines 50 to 96
// Handle SSE stream
if (contentType?.includes("text/event-stream")) {
const reader = response.body?.getReader();
if (!reader) throw new Error("No response body");

const decoder = new TextDecoder();

while (true) {
const { done, value } = await reader.read();
if (done) break;

const chunk = decoder.decode(value);
const lines = chunk.split("\n");

for (const line of lines) {
if (line.startsWith("data: ")) {
try {
const data = JSON.parse(line.slice(6));

if (data.type === "stream" && data.streamingUrl) {
updateAgent(agent.id, {
streamingUrl: data.streamingUrl,
statusMessage: "Agent browsing..."
});
}

if (data.type === "complete") {
updateAgent(agent.id, {
status: data.found ? "found" : "not_found",
statusMessage: data.found ? "Manga found on this site!" : "Not available on this site",
streamingUrl: undefined,
});
}

if (data.type === "error") {
updateAgent(agent.id, {
status: "error",
statusMessage: data.error || "Search failed",
streamingUrl: undefined,
});
}
} catch {
// Ignore parse errors
}
}
}
}
@coderabbitai coderabbitai bot Jan 25, 2026

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's locate and examine the useMangaSearch.ts file
fd "useMangaSearch.ts" --type f

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 126


🌐 Web query:

Can fetch() ReadableStream chunks split a single SSE data: line across multiple reads, requiring buffering to avoid JSON parse errors?

💡 Result:

Yes. fetch() streaming (response.body.getReader().read()) returns arbitrary byte chunks that do not align to SSE message boundaries or even line boundaries, so a single SSE line like data: {...}\n can be split across multiple reads. Therefore, if you JSON.parse() each decoded chunk (or each data: fragment) without buffering, you can hit parse errors because you’re parsing incomplete JSON.

To avoid this, buffer decoded text until you have a complete SSE “event” (terminated by a blank line), then apply the SSE rules: accumulate one or more data: lines (concatenated with \n between them), and only then parse as JSON if that payload is JSON. (w3.org)


🏁 Script executed:

# Let's examine the actual code in useMangaSearch.ts
wc -l "Manga-Availability-Finder/src/hooks/useMangaSearch.ts"

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 130


🏁 Script executed:

# Read the relevant section (lines 40-110 to see context)
sed -n '40,110p' "Manga-Availability-Finder/src/hooks/useMangaSearch.ts" | cat -n

Repository: tinyfish-io/TinyFish-cookbook

Length of output: 3149


Buffer SSE chunks to avoid dropped events and stuck agent statuses.

reader.read() chunk boundaries are arbitrary and do not align to line boundaries, so a single SSE data: line can be split across multiple reads. Without buffering, incomplete JSON is silently dropped via the empty catch block, leaving agents permanently stuck in "searching". The stream can also end prematurely without sending a complete or error event.

Implement line buffering: accumulate partial lines across chunks using decoder.decode(value, { stream: true }), and preserve incomplete final lines. Track whether a terminal event (complete or error) was received, and emit an error status if the stream ends without one.
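The buffering rule can be sanity-checked in isolation before touching the hook. `feedChunks` below is an illustrative helper, not code from this PR; it applies the same keep-the-last-partial-line rule so a `data:` line split across reads still parses:

```typescript
// Illustrative sketch (not PR code): line-buffered parsing of "data: " frames.
type SSEEvent = { type: string; [k: string]: unknown };

function feedChunks(chunks: string[]): SSEEvent[] {
  const events: SSEEvent[] = [];
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;            // in the hook: decoder.decode(value, { stream: true })
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for the next read
    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      try {
        events.push(JSON.parse(line.slice(6)));
      } catch {
        // genuinely malformed frame; ignore
      }
    }
  }
  return events;
}

// One event split mid-JSON across two reads still parses once buffered:
const split = feedChunks(['data: {"type":"comp', 'lete","found":true}\n']);
// split.length === 1, split[0].type === "complete"
```

Without the buffer, the first chunk's `data: {"type":"comp` would hit the empty catch and the event would be lost entirely.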

Suggested fix
-        if (contentType?.includes("text/event-stream")) {
-          const reader = response.body?.getReader();
-          if (!reader) throw new Error("No response body");
-          
-          const decoder = new TextDecoder();
-          
-          while (true) {
-            const { done, value } = await reader.read();
-            if (done) break;
-            
-            const chunk = decoder.decode(value);
-            const lines = chunk.split("\n");
-            
-            for (const line of lines) {
-              if (line.startsWith("data: ")) {
-                try {
-                  const data = JSON.parse(line.slice(6));
-                  
-                  if (data.type === "stream" && data.streamingUrl) {
-                    updateAgent(agent.id, { 
-                      streamingUrl: data.streamingUrl,
-                      statusMessage: "Agent browsing..." 
-                    });
-                  }
-                  
-                  if (data.type === "complete") {
-                    updateAgent(agent.id, {
-                      status: data.found ? "found" : "not_found",
-                      statusMessage: data.found ? "Manga found on this site!" : "Not available on this site",
-                      streamingUrl: undefined,
-                    });
-                  }
-                  
-                  if (data.type === "error") {
-                    updateAgent(agent.id, {
-                      status: "error",
-                      statusMessage: data.error || "Search failed",
-                      streamingUrl: undefined,
-                    });
-                  }
-                } catch {
-                  // Ignore parse errors
-                }
-              }
-            }
-          }
+        if (contentType?.includes("text/event-stream")) {
+          const reader = response.body?.getReader();
+          if (!reader) throw new Error("No response body");
+          
+          const decoder = new TextDecoder();
+          let buffer = "";
+          let completed = false;
+          
+          while (true) {
+            const { done, value } = await reader.read();
+            if (done) break;
+            
+            buffer += decoder.decode(value, { stream: true });
+            const lines = buffer.split("\n");
+            buffer = lines.pop() ?? "";
+            
+            for (const line of lines) {
+              if (line.startsWith("data: ")) {
+                try {
+                  const data = JSON.parse(line.slice(6));
+                  
+                  if (data.type === "stream" && data.streamingUrl) {
+                    updateAgent(agent.id, { 
+                      streamingUrl: data.streamingUrl,
+                      statusMessage: "Agent browsing..." 
+                    });
+                  }
+                  
+                  if (data.type === "complete") {
+                    completed = true;
+                    updateAgent(agent.id, {
+                      status: data.found ? "found" : "not_found",
+                      statusMessage: data.found ? "Manga found on this site!" : "Not available on this site",
+                      streamingUrl: undefined,
+                    });
+                  }
+                  
+                  if (data.type === "error") {
+                    completed = true;
+                    updateAgent(agent.id, {
+                      status: "error",
+                      statusMessage: data.error || "Search failed",
+                      streamingUrl: undefined,
+                    });
+                  }
+                } catch {
+                  // Ignore parse errors
+                }
+              }
+            }
+          }
+          
+          if (!completed) {
+            updateAgent(agent.id, {
+              status: "error",
+              statusMessage: "Stream ended before completion",
+              streamingUrl: undefined,
+            });
+          }
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/src/hooks/useMangaSearch.ts` around lines 50 - 96,
The SSE handling currently drops partial JSON when chunks split lines; modify
the loop that reads from response.body (the reader and decoder) to use
decoder.decode(value, { stream: true }) and maintain a leftover buffer string
that accumulates chunks, splitting on "\n" and keeping the final partial line in
leftover between reads; parse only complete lines that start with "data: ", and
updateAgent as before (references: reader.read(), decoder,
updateAgent(agent.id,...)); additionally track a boolean terminalReceived that
flips true when a "complete" or "error" event is processed and after the read
loop ends, if terminalReceived is false set the agent status to "error" with an
appropriate statusMessage and clear streamingUrl to avoid agents stuck in
"searching".


Address this @urugondavishnu


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

Contributor Author

fixed the issue @minhhuynh-tinyfish

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/README.md`:
- Line 78: Update the README environment variable name to be consistent: replace
the "TinyFish Web Agent AI KEY" label/comment with the actual environment
variable name used in the table, MINO_API_KEY (or change the table entry to
TinyFish Web Agent AI KEY if that is the intended variable), and ensure both the
descriptive label and the env var identifier match (reference the README's
environment variable table and the MINO_API_KEY identifier).
- Line 71: Replace the incorrect directory reference "cd webtoon-hunter" in the
README with the actual project directory name "cd Manga-Availability-Finder" so
the setup step navigates to the correct folder; update the line that currently
contains the command string "cd webtoon-hunter" to "cd
Manga-Availability-Finder".
- Line 107: Replace the inconsistent ASCII pipes at the end of the diagram line
"│  │  Gemini → Get site URLs│         │  TinyFish API → Browser Automation | 
|" with the box-drawing characters so it reads "│  │  Gemini → Get site URLs│   
│  TinyFish API → Browser Automation │  │"; update that exact line in README.md
to use "│" (U+2502) for the trailing two columns to match the rest of the
diagram.
🧹 Nitpick comments (3)
Manga-Availability-Finder/README.md (3)

34-37: Consider clarifying the goal string formatting.

The multi-line goal string uses ellipsis (...) which may be unclear to readers—is this meant to be pseudo-code or actual implementation? If it's actual code, consider showing proper template literal syntax with backticks and ${mangaTitle}.

📝 Suggested clarification
-    goal: `You are searching for a manga/webtoon called "${mangaTitle}"...
-           STEP 1: Find and use the search bar to enter the title
-           STEP 2: Analyze the search results for matches
-           STEP 3: Return JSON with { found: boolean, match_confidence: string }`,
+    goal: `You are searching for a manga/webtoon called "${mangaTitle}".
+           STEP 1: Find and use the search bar to enter the title.
+           STEP 2: Analyze the search results for matches.
+           STEP 3: Return JSON with { found: boolean, match_confidence: string }.`,

57-57: Clarify prerequisite options.

The phrasing "Lovable Cloud account (or Supabase project)" might be clearer if it explicitly states whether these are alternatives or if both are needed.


89-89: Add language specification to fenced code block.

The architecture diagram code block should specify a language (e.g., text) for consistent rendering across different markdown viewers.

📝 Proposed fix
-```
+```text

Based on static analysis hints.


# 3. Add secrets to your Lovable Cloud / Supabase project
# Navigate to Settings → Secrets and add:
# - TinyFish Web Agent AI KEY

⚠️ Potential issue | 🟠 Major

Fix environment variable naming inconsistency.

The comment refers to "TinyFish Web Agent AI KEY" but the environment variable table (line 63) defines it as MINO_API_KEY. This inconsistency could cause users to configure the wrong secret name.

🔧 Proposed fix
-#   - TinyFish Web Agent AI KEY
+#   - MINO_API_KEY
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/README.md` at line 78, Update the README
environment variable name to be consistent: replace the "TinyFish Web Agent AI
KEY" label/comment with the actual environment variable name used in the table,
MINO_API_KEY (or change the table entry to TinyFish Web Agent AI KEY if that is
the intended variable), and ensure both the descriptive label and the env var
identifier match (reference the README's environment variable table and the
MINO_API_KEY identifier).

│ │ discover-manga-sites │ │ search-manga (x6) │ │
│ │ (1x per search) │ │ (parallel browser agents) │ │
│ │ │ │ │ │
│ │ Gemini → Get site URLs│ │ TinyFish API → Browser Automation | |

⚠️ Potential issue | 🟡 Minor

Fix inconsistent box drawing characters.

The line ends with pipe characters (| |) instead of box drawing characters (│ │), creating a visual inconsistency in the diagram.

🎨 Proposed fix
-│  │  Gemini → Get site URLs│         │  TinyFish API → Browser Automation |  |
+│  │  Gemini → Get site URLs│         │  TinyFish API → Browser Automation │  │
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/README.md` at line 107, Replace the inconsistent
ASCII pipes at the end of the diagram line "│  │  Gemini → Get site URLs│       
│  TinyFish API → Browser Automation |  |" with the box-drawing characters so it
reads "│  │  Gemini → Get site URLs│         │  TinyFish API → Browser
Automation │  │"; update that exact line in README.md to use "│" (U+2502) for
the trailing two columns to match the rest of the diagram.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/src/pages/Index.tsx`:
- Around line 20-26: The header currently always displays "being searched";
update the label to reflect the search state by using the isSearching boolean in
the component that renders the agents count (the same block that references
agents.length). Change the span text to conditionally render "being searched"
when isSearching is true and a completed phrasing such as "searched" (or "were
searched") when isSearching is false so the UI accurately reflects the status
shown by agents and isSearching.
🧹 Nitpick comments (1)
Manga-Availability-Finder/tsconfig.json (1)

9-14: Enable TypeScript strict mode for this example project.

With noImplicitAny, noUnusedParameters, noUnusedLocals, and strictNullChecks all set to false, type checking is very permissive. Modern TypeScript+Vite+React projects use strict mode by default (via "strict": true), which provides better error detection and aligns with current best practices for cookbook examples.

♻️ Enable strictness flags
-    "noImplicitAny": false,
-    "noUnusedParameters": false,
+    "noImplicitAny": true,
+    "noUnusedParameters": true,
     "skipLibCheck": true,
     "allowJs": true,
-    "noUnusedLocals": false,
-    "strictNullChecks": false
+    "noUnusedLocals": true,
+    "strictNullChecks": true

Comment on lines +20 to +26
<div className="flex items-center justify-between">
<h2 className="font-display text-2xl font-bold text-foreground">
Search Agents
</h2>
<span className="text-sm text-muted-foreground">
{agents.length} site{agents.length !== 1 ? "s" : ""} being searched
</span>

⚠️ Potential issue | 🟡 Minor

Text says “being searched” even after search completes.

When isSearching is false, the header still reads “being searched,” which is misleading after completion. Consider switching the label based on isSearching.

📝 Suggested copy fix
-                <span className="text-sm text-muted-foreground">
-                  {agents.length} site{agents.length !== 1 ? "s" : ""} being searched
-                </span>
+                <span className="text-sm text-muted-foreground">
+                  {isSearching
+                    ? `${agents.length} site${agents.length !== 1 ? "s" : ""} being searched`
+                    : `${agents.length} site${agents.length !== 1 ? "s" : ""} searched`}
+                </span>
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/src/pages/Index.tsx` around lines 20 - 26, The
header currently always displays "being searched"; update the label to reflect
the search state by using the isSearching boolean in the component that renders
the agents count (the same block that references agents.length). Change the span
text to conditionally render "being searched" when isSearching is true and a
completed phrasing such as "searched" (or "were searched") when isSearching is
false so the UI accurately reflects the status shown by agents and isSearching.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/supabase/functions/discover-manga-sites/index.ts`:
- Around line 50-63: The fetch against Gemini (the call that builds `response`
using `GEMINI_API_KEY` and `prompt`) needs a client-side timeout: create an
AbortSignal via AbortSignal.timeout(10000) and pass it as `signal` in the fetch
options, then wrap the fetch in try/catch to handle the timeout (and rethrow/log
a clear TimeoutError case) so the function in discover-manga-sites/index.ts
cannot hang indefinitely.

In `@Manga-Availability-Finder/supabase/functions/search-manga/index.ts`:
- Around line 82-133: The stream handler currently splits each decoded chunk by
"\n" and JSON.parses immediately, which drops events when SSE frames are
fragmented; modify the start(controller) in the ReadableStream to use a
persistent buffer and streaming TextDecoder: keep a string buffer (e.g., let
buffer = ""), decode incoming Uint8Array with decoder.decode(value, { stream:
true }), append to buffer, then split buffer on "\n" into lines, process all
complete lines that start with "data: " (JSON.parse inside try/catch) and keep
the last partial line by assigning it back to buffer; when done (reader.read()
returns done) call decoder.decode() with no args to flush any remaining bytes,
append and process remaining lines, and ensure the existing logic around
streamingUrlSent, handling data.type === "COMPLETE"/"ERROR", and
controller.enqueue is preserved.

Comment on lines 50 to 63
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_API_KEY}`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
contents: [{ parts: [{ text: prompt }] }],
generationConfig: {
temperature: 0.3,
maxOutputTokens: 1024,
},
}),
}
);

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, locate the file and inspect the actual implementation at lines 50-63
find . -name "index.ts" -path "*/discover-manga-sites/*" -type f

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 149


🏁 Script executed:

# Get the file content to see the actual implementation
cat -n "Manga-Availability-Finder/supabase/functions/discover-manga-sites/index.ts" | head -80

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 3710


🏁 Script executed:

# Check for any existing timeout or AbortController usage in the file
rg -n "AbortController|timeout|signal" "Manga-Availability-Finder/supabase/functions/discover-manga-sites/index.ts"

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 55


🏁 Script executed:

# Check the Supabase configuration to understand platform constraints
find . -name "deno.json" -o -name "deno.jsonc" | xargs cat

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 55


🏁 Script executed:

# Look for deno config files more carefully
find . -name "deno.json*" -o -name "*.json" | grep -E "(deno|supabase)" | head -20

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 55


🏁 Script executed:

# Check the full error handling context around the fetch
cat -n "Manga-Availability-Finder/supabase/functions/discover-manga-sites/index.ts" | sed -n '50,120p'

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 3439


🌐 Web query:

Deno fetch timeout AbortController best practices

💡 Result:

Best practices for fetch timeouts in Deno (with AbortController / AbortSignal)

  1. Always set an explicit timeout for network calls
    Plain fetch() is allowed to hang indefinitely (no guarantee it resolves/rejects), so production code should attach a timeout. (github.com)

  2. Prefer AbortSignal.timeout(ms) (simplest + standard)
    Deno supports the standard AbortSignal.timeout(ms) helper, which returns an already-wired signal that aborts after the given duration. (deno.com)

    const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
  3. Differentiate timeout vs user-cancel vs other failures
    With AbortSignal.timeout(), failures reject with a TimeoutError DOMException, which you can distinguish from AbortError and other errors. (developer.mozilla.org)

    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
      // ...
    } catch (err) {
      if (err instanceof DOMException && err.name === "TimeoutError") {
        // timed out
      } else if (err instanceof DOMException && err.name === "AbortError") {
        // aborted (usually user-cancel or other abort source)
      } else {
        // network / DNS / TLS / etc.
        throw err;
      }
    }
  4. If you need both “user cancel” and “timeout”, combine signals
    Use AbortSignal.any([...]) to abort when either the user cancels or the timeout fires. (developer.mozilla.org)

    const user = new AbortController();
    const signal = AbortSignal.any([user.signal, AbortSignal.timeout(5_000)]);
    
    const res = await fetch(url, { signal });
    // later: user.abort();
  5. Fallback pattern (when you can’t use AbortSignal.timeout)
    Use AbortController + setTimeout, and always clear the timer in a finally to avoid leaking timers. (developer.chrome.com)

    const controller = new AbortController();
    const t = setTimeout(() => controller.abort(), 5_000);
    
    try {
      return await fetch(url, { signal: controller.signal });
    } finally {
      clearTimeout(t);
    }
  6. Don’t reuse an aborted signal
    Once a signal is aborted, it stays aborted; create a fresh controller/signal per request (or per logical operation). (developer.mozilla.org)



🌐 Web query:

Supabase Edge Function timeout limits default

💡 Result:

On Supabase hosted Edge Functions, the default time-related limits are:

  • Maximum duration (wall clock): 150s on Free, 400s on paid plans. [1]
  • Request idle timeout (no response sent before timeout → 504): 150s. [1]

(“Wall clock time limit reached” errors correspond to hitting the wall-clock duration and/or other runtime resource limits.) [2]

Sources: [1] Supabase Edge Functions Limits docs, [2] Supabase troubleshooting doc on wall clock time limit reached.


🌐 Web query:

Google Gemini API generateContent timeout recommendations

💡 Result:

Recommendations to reduce/handle generateContent timeouts (Gemini API / Vertex AI Gemini):

  • Increase the client-side deadline/timeout (many 504s are simply your request deadline being too short). In the Python client, set a larger timeout via request options (example shown with timeout: 600). [1]
  • Reduce prompt/context size and complexity (large prompts/long context are a common cause of 504 DEADLINE_EXCEEDED). Split large documents (e.g., multi‑page PDFs) into smaller chunks and/or summarize first, then ask follow-ups. [2]
  • Implement limited retries with exponential backoff for transient capacity issues (503 UNAVAILABLE) and occasional deadline spikes. Google’s Vertex AI guidance is to retry no more than 2 times, with >=1s delay and exponential backoff. [3]
  • Fail over to a faster model when appropriate (e.g., Pro → Flash) during overload/timeouts, per Gemini troubleshooting guidance for 503/500 scenarios. [2]
  • Expect long-running document requests to be risky over plain REST: community reports show REST requests timing out around ~60s on Vertex AI Gemini for large PDF inputs; consider smaller inputs or alternative approaches if you can’t complete within that window. [4]

Sources:
[1] Google AI Developers Forum example using request_options={"timeout": 600} for generate_content to address 504s. (discuss.ai.google.dev)
[2] Gemini API troubleshooting guidance for 503/504 and recommendations (switch model, reduce context, increase timeout). (ai.google.dev)
[3] Vertex AI Generative inference API errors page (retry guidance: <=2 retries, exponential backoff). (docs.cloud.google.com)
[4] Google Cloud Community thread reporting ~60s REST timeout for /models/...:generateContent with multi-page PDF. (googlecloudcommunity.com)



🏁 Script executed:

# Check Deno std version and confirm AbortSignal.timeout support
# Deno std 0.168.0 was released in 2023; AbortSignal.timeout() is available
# Let's verify by checking if there are any deno version hints in the repo
grep -r "deno" "Manga-Availability-Finder/supabase/functions/discover-manga-sites/" 2>/dev/null || echo "No other deno config found"

Repository: tinyfish-io/tinyfish-cookbook

Length of output: 217


Add a client-side timeout to the Gemini API fetch to prevent hanging.

The fetch call lacks an explicit timeout, risking function hangs if the connection stalls. While error handling covers API failures, it doesn't protect against network timeouts. Use AbortSignal.timeout(10000) (10 seconds):

⏱️ Example timeout with AbortSignal
    const response = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_API_KEY}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
+       signal: AbortSignal.timeout(10000),
        body: JSON.stringify({
          contents: [{ parts: [{ text: prompt }] }],
          generationConfig: {
            temperature: 0.3,
            maxOutputTokens: 1024,
          },
        }),
      }
    );

This is the standard Deno/Web API approach and requires no manual timer cleanup. Wrap in try-catch to handle TimeoutError separately if needed.
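As a runnable sketch of that error-branching pattern (hedged: `slowOperation` is a hypothetical stand-in for a stalled fetch, not the actual Gemini call):

```typescript
// Sketch: distinguish a TimeoutError from other failures, per the guidance
// above. `slowOperation` is a hypothetical stand-in for a fetch that stalls;
// it resolves far later than the AbortSignal.timeout window allows.
function slowOperation(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve("ok"), 5_000); // slower than the timeout
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(signal.reason); // for timeout signals: DOMException named "TimeoutError"
    });
  });
}

async function classify(): Promise<string> {
  try {
    await slowOperation(AbortSignal.timeout(50));
    return "completed";
  } catch (err) {
    if (err instanceof DOMException && err.name === "TimeoutError") return "timed out";
    if (err instanceof DOMException && err.name === "AbortError") return "aborted";
    throw err; // network / DNS / TLS / etc.
  }
}
```

The same `classify` shape drops into the edge function unchanged; only `slowOperation` would be replaced by the real fetch.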

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     const response = await fetch(
       `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_API_KEY}`,
       {
         method: "POST",
         headers: { "Content-Type": "application/json" },
+        signal: AbortSignal.timeout(10000),
         body: JSON.stringify({
           contents: [{ parts: [{ text: prompt }] }],
           generationConfig: {
             temperature: 0.3,
             maxOutputTokens: 1024,
           },
         }),
       }
     );
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/supabase/functions/discover-manga-sites/index.ts`
around lines 50 - 63, The fetch against Gemini (the call that builds `response`
using `GEMINI_API_KEY` and `prompt`) needs a client-side timeout: create an
AbortSignal via AbortSignal.timeout(10000) and pass it as `signal` in the fetch
options, then wrap the fetch in try/catch to handle the timeout (and rethrow/log
a clear TimeoutError case) so the function in discover-manga-sites/index.ts
cannot hang indefinitely.

@simantak-dabhade simantak-dabhade removed the request for review from lozzle January 30, 2026 23:21
Make sure the URLs include a search query for the manga title where possible. Return exactly 5-6 sites.`;

const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_API_KEY}`,


Nit: while this may have no immediate issue because this is server-side code, it's still not good practice to put the API key in a URL param. Should move this into a header.
example in https://ai.google.dev/gemini-api/docs/api-key

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -X POST \
  -d '{ "contents": [ { "parts": [ { "text": "Explain how AI works in a few words" } ] } ] }'

Contributor Author

Good catch - updated the request to pass the Gemini API key via headers (x-goog-api-key) instead of a URL param. Thanks!
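For illustration, a minimal sketch of the header-based variant — the model name and field names are placeholders; the point is only where the key travels:

```typescript
// Sketch: send the Gemini key in the x-goog-api-key header instead of a URL
// query parameter, so the secret never appears in logs that record request URLs.
// The endpoint URL and body shape are illustrative, not a full client.
function buildGeminiInit(apiKey: string, prompt: string) {
  return {
    url: "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-goog-api-key": apiKey, // key lives in a header, not the URL
      },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  };
}
```

The returned pair is what you would pass to `fetch(url, init)`.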

const { done, value } = await reader.read();
if (done) break;

const chunk = decoder.decode(value);
@minhhuynh-tinyfish minhhuynh-tinyfish Feb 3, 2026

Do we need to do it like this:

```
buffer += decoder.decode(value, { stream: true });

const lines = buffer.split('\n');

buffer = lines.pop() || '';
```

here?

Contributor Author

Good point. Updated the SSE parsing.
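To see why the buffered pattern matters, here is a self-contained sketch (names like `makeSseParser` are illustrative, not from the PR) that survives a `data:` line split across two chunks and flushes the final unterminated line:

```typescript
// Sketch: line-buffered SSE parsing. Chunks may split a line (or even a
// multi-byte character) anywhere, so we decode with { stream: true }, keep the
// trailing partial line in a buffer, and flush the decoder + buffer at the end.
function makeSseParser() {
  const decoder = new TextDecoder();
  let buffer = "";
  const extract = (lines: string[]): string[] =>
    lines
      .filter((l) => l.startsWith("data:")) // "data:" with or without a space
      .map((l) => l.slice(5).trimStart());
  return {
    push(chunk: Uint8Array): string[] {
      buffer += decoder.decode(chunk, { stream: true }); // keeps partial chars
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // last element may be an incomplete line
      return extract(lines);
    },
    flush(): string[] {
      buffer += decoder.decode(); // flush any buffered partial character
      const rest = buffer ? [buffer] : [];
      buffer = "";
      return extract(rest);
    },
  };
}
```

Calling `flush()` after the reader reports `done` is what keeps a final event without a trailing newline from being dropped.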

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/supabase/functions/search-manga/index.ts`:
- Around line 3-6: The corsHeaders object used for CORS responses (corsHeaders)
is missing Access-Control-Allow-Methods which can cause browsers to block POST
preflight; update the corsHeaders definition in index.ts to include an
"Access-Control-Allow-Methods" entry listing the allowed methods (e.g., "GET,
POST, OPTIONS") so preflight requests succeed and include that same header on
OPTIONS responses.
- Around line 13-21: The handler currently calls await req.json() which can
throw on malformed JSON and is bubbling up as a 500; modify the code around the
existing try so you explicitly catch JSON parse errors when calling req.json()
(e.g. wrap the const { url, mangaTitle } = await req.json() in its own
try/catch), and on parse errors return a 400 Response with the same
corsHeaders/"Content-Type": "application/json" and a body like { error: "invalid
JSON" }; for non-JSON errors keep the existing error handling path. Refer to the
existing req.json() call and the url/mangaTitle validation block to locate where
to add the small try/catch.
- Around line 82-143: The stream parsing can drop the last event and ignores
"data:" without a space; modify the ReadableStream start logic in the async
start(controller) (where decoder, encoder, buffer, lines, reader are used) to
flush the TextDecoder when the reader signals done (append
decoder.decode(undefined, { stream: false }) to buffer before breaking), process
any remaining buffer lines after the read loop (handle the leftover line instead
of discarding buffer), and change the per-line check from line.startsWith("data:
") to accept "data:" with or without a space (e.g., check
line.startsWith("data:") and trim the optional leading space when slicing the
payload) so final events and "data:"-prefixed SSE lines are correctly parsed and
emitted.

Comment on lines +3 to +6
const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};

⚠️ Potential issue | 🟠 Major

Add Access-Control-Allow-Methods to pass browser preflight.

Without it, cross-origin JSON POSTs can be blocked by the browser’s CORS preflight.

✅ Suggested fix
 const corsHeaders = {
   "Access-Control-Allow-Origin": "*",
   "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
+  "Access-Control-Allow-Methods": "POST, OPTIONS",
 };
📝 Committable suggestion


Suggested change
 const corsHeaders = {
   "Access-Control-Allow-Origin": "*",
   "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
+  "Access-Control-Allow-Methods": "POST, OPTIONS",
 };
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/supabase/functions/search-manga/index.ts` around
lines 3 - 6, The corsHeaders object used for CORS responses (corsHeaders) is
missing Access-Control-Allow-Methods which can cause browsers to block POST
preflight; update the corsHeaders definition in index.ts to include an
"Access-Control-Allow-Methods" entry listing the allowed methods (e.g., "GET,
POST, OPTIONS") so preflight requests succeed and include that same header on
OPTIONS responses.
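A minimal sketch of the fixed handler shape (hedged: `handlePreflight` and the 204 status are illustrative choices, not the PR's exact code):

```typescript
// Sketch: answer CORS preflights with the methods header included, per the
// fix above. Real handlers would fall through to POST logic when this
// returns null.
const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
};

function handlePreflight(req: Request): Response | null {
  if (req.method !== "OPTIONS") return null; // not a preflight
  return new Response(null, { status: 204, headers: corsHeaders });
}
```

Because the same `corsHeaders` object is spread onto every response, adding the methods entry once covers both the preflight and the actual POST reply.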

Comment on lines +13 to +21
  try {
    const { url, mangaTitle } = await req.json();

    if (!url || !mangaTitle) {
      return new Response(
        JSON.stringify({ error: "url and mangaTitle are required" }),
        { status: 400, headers: { ...corsHeaders, "Content-Type": "application/json" } }
      );
    }

⚠️ Potential issue | 🟡 Minor

Return 400 for invalid JSON instead of 500.

req.json() throws on malformed bodies, which currently surfaces as a server error.

✅ Suggested fix
-    const { url, mangaTitle } = await req.json();
+    let payload: { url?: string; mangaTitle?: string };
+    try {
+      payload = await req.json();
+    } catch {
+      return new Response(
+        JSON.stringify({ error: "Invalid JSON body" }),
+        { status: 400, headers: { ...corsHeaders, "Content-Type": "application/json" } }
+      );
+    }
+    const { url, mangaTitle } = payload;
📝 Committable suggestion


Suggested change
   try {
-    const { url, mangaTitle } = await req.json();
+    let payload: { url?: string; mangaTitle?: string };
+    try {
+      payload = await req.json();
+    } catch {
+      return new Response(
+        JSON.stringify({ error: "Invalid JSON body" }),
+        { status: 400, headers: { ...corsHeaders, "Content-Type": "application/json" } }
+      );
+    }
+    const { url, mangaTitle } = payload;

     if (!url || !mangaTitle) {
       return new Response(
         JSON.stringify({ error: "url and mangaTitle are required" }),
         { status: 400, headers: { ...corsHeaders, "Content-Type": "application/json" } }
       );
     }
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/supabase/functions/search-manga/index.ts` around
lines 13 - 21, The handler currently calls await req.json() which can throw on
malformed JSON and is bubbling up as a 500; modify the code around the existing
try so you explicitly catch JSON parse errors when calling req.json() (e.g. wrap
the const { url, mangaTitle } = await req.json() in its own try/catch), and on
parse errors return a 400 Response with the same corsHeaders/"Content-Type":
"application/json" and a body like { error: "invalid JSON" }; for non-JSON
errors keep the existing error handling path. Refer to the existing req.json()
call and the url/mangaTitle validation block to locate where to add the small
try/catch.
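The same defensive-parse step can be factored into a small helper, sketched below under stated assumptions: `readPayload` and `corsJson` are illustrative names, and the parameter is typed structurally so the sketch stays self-contained.

```typescript
// Sketch: parse the request body defensively so malformed JSON yields a 400
// instead of surfacing as a 500. `corsJson` stands in for the function's real
// corsHeaders spread together with a JSON content type.
const corsJson = { "Access-Control-Allow-Origin": "*", "Content-Type": "application/json" };

type Payload = { url: string; mangaTitle: string };

async function readPayload(req: { json(): Promise<unknown> }): Promise<Response | Payload> {
  let body: Partial<Payload>;
  try {
    body = (await req.json()) as Partial<Payload>;
  } catch {
    // req.json() throws on malformed bodies; map that to a client error
    return new Response(JSON.stringify({ error: "Invalid JSON body" }), {
      status: 400,
      headers: corsJson,
    });
  }
  const { url, mangaTitle } = body;
  if (!url || !mangaTitle) {
    return new Response(JSON.stringify({ error: "url and mangaTitle are required" }), {
      status: 400,
      headers: corsJson,
    });
  }
  return { url, mangaTitle };
}
```

The handler can then check `result instanceof Response` and return early, keeping the happy path free of parse concerns.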

Comment on lines +82 to +143
const stream = new ReadableStream({
async start(controller) {
const decoder = new TextDecoder();
const encoder = new TextEncoder();
let streamingUrlSent = false;

let buffer = "";

try {
while (true) {
const { done, value } = await reader.read();
if (done) break;

buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop() || "";

for (const line of lines) {
if (line.startsWith("data: ")) {
try {
const data = JSON.parse(line.slice(6));

// Send streaming URL immediately when available
if (data.streamingUrl && !streamingUrlSent) {
streamingUrlSent = true;
const event = `data: ${JSON.stringify({ type: "stream", streamingUrl: data.streamingUrl })}\n\n`;
controller.enqueue(encoder.encode(event));
}

// Check for completion
if (data.type === "COMPLETE" && data.resultJson) {
let found = false;
try {
const resultData = typeof data.resultJson === 'string'
? JSON.parse(data.resultJson)
: data.resultJson;
found = resultData.found === true;
} catch {
const resultStr = JSON.stringify(data.resultJson).toLowerCase();
found = resultStr.includes('"found": true') || resultStr.includes('"found":true');
}
const event = `data: ${JSON.stringify({ type: "complete", found })}\n\n`;
controller.enqueue(encoder.encode(event));
}

// Handle errors
if (data.type === "ERROR") {
const event = `data: ${JSON.stringify({ type: "error", error: data.message || "Search failed" })}\n\n`;
controller.enqueue(encoder.encode(event));
}
} catch {
// Ignore parse errors
}
}
}
}
} catch (error) {
const event = `data: ${JSON.stringify({ type: "error", error: "Stream error" })}\n\n`;
controller.enqueue(encoder.encode(event));
} finally {
controller.close();
}

⚠️ Potential issue | 🟠 Major

Flush the decoder and accept data: without a space.

When the stream ends without a trailing newline, the last event can be dropped. Also, SSE allows data: with no space, which is currently ignored.

✅ Suggested fix
     const stream = new ReadableStream({
       async start(controller) {
         const decoder = new TextDecoder();
         const encoder = new TextEncoder();
         let streamingUrlSent = false;

         let buffer = "";

+        const handleLine = (raw: string) => {
+          const trimmed = raw.trim();
+          if (!trimmed.startsWith("data:")) return;
+          try {
+            const payload = trimmed.replace(/^data:\s?/, "");
+            const data = JSON.parse(payload);
+            // Send streaming URL immediately when available
+            if (data.streamingUrl && !streamingUrlSent) {
+              streamingUrlSent = true;
+              const event = `data: ${JSON.stringify({ type: "stream", streamingUrl: data.streamingUrl })}\n\n`;
+              controller.enqueue(encoder.encode(event));
+            }
+            // Check for completion
+            if (data.type === "COMPLETE" && data.resultJson) {
+              let found = false;
+              try {
+                const resultData = typeof data.resultJson === "string"
+                  ? JSON.parse(data.resultJson)
+                  : data.resultJson;
+                found = resultData.found === true;
+              } catch {
+                const resultStr = JSON.stringify(data.resultJson).toLowerCase();
+                found = resultStr.includes('"found": true') || resultStr.includes('"found":true');
+              }
+              const event = `data: ${JSON.stringify({ type: "complete", found })}\n\n`;
+              controller.enqueue(encoder.encode(event));
+            }
+            // Handle errors
+            if (data.type === "ERROR") {
+              const event = `data: ${JSON.stringify({ type: "error", error: data.message || "Search failed" })}\n\n`;
+              controller.enqueue(encoder.encode(event));
+            }
+          } catch {
+            // Ignore parse errors
+          }
+        };
+
         try {
           while (true) {
             const { done, value } = await reader.read();
             if (done) break;

             buffer += decoder.decode(value, { stream: true }); 
             const lines = buffer.split("\n");
             buffer = lines.pop() || "";
             
-            for (const line of lines) {
-              if (line.startsWith("data: ")) {
-                try {
-                  const data = JSON.parse(line.slice(6));
-                  // Send streaming URL immediately when available
-                  if (data.streamingUrl && !streamingUrlSent) {
-                    streamingUrlSent = true;
-                    const event = `data: ${JSON.stringify({ type: "stream", streamingUrl: data.streamingUrl })}\n\n`;
-                    controller.enqueue(encoder.encode(event));
-                  }
-                  // Check for completion
-                  if (data.type === "COMPLETE" && data.resultJson) {
-                    let found = false;
-                    try {
-                      const resultData = typeof data.resultJson === 'string' 
-                        ? JSON.parse(data.resultJson) 
-                        : data.resultJson;
-                      found = resultData.found === true;
-                    } catch {
-                      const resultStr = JSON.stringify(data.resultJson).toLowerCase();
-                      found = resultStr.includes('"found": true') || resultStr.includes('"found":true');
-                    }
-                    const event = `data: ${JSON.stringify({ type: "complete", found })}\n\n`;
-                    controller.enqueue(encoder.encode(event));
-                  }
-                  // Handle errors
-                  if (data.type === "ERROR") {
-                    const event = `data: ${JSON.stringify({ type: "error", error: data.message || "Search failed" })}\n\n`;
-                    controller.enqueue(encoder.encode(event));
-                  }
-                } catch {
-                  // Ignore parse errors
-                }
-              }
-            }
+            for (const line of lines) handleLine(line);
           }
+          buffer += decoder.decode(); // flush
+          if (buffer) {
+            for (const line of buffer.split("\n")) handleLine(line);
+          }
         } catch (error) {
           const event = `data: ${JSON.stringify({ type: "error", error: "Stream error" })}\n\n`;
           controller.enqueue(encoder.encode(event));
         } finally {
           controller.close();
         }
       },
     });
🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/supabase/functions/search-manga/index.ts` around
lines 82 - 143, The stream parsing can drop the last event and ignores "data:"
without a space; modify the ReadableStream start logic in the async
start(controller) (where decoder, encoder, buffer, lines, reader are used) to
flush the TextDecoder when the reader signals done (append
decoder.decode(undefined, { stream: false }) to buffer before breaking), process
any remaining buffer lines after the read loop (handle the leftover line instead
of discarding buffer), and change the per-line check from line.startsWith("data:
") to accept "data:" with or without a space (e.g., check
line.startsWith("data:") and trim the optional leading space when slicing the
payload) so final events and "data:"-prefixed SSE lines are correctly parsed and
emitted.

@minhhuynh-tinyfish minhhuynh-tinyfish left a comment

Aside from a few critical flags from CodeRabbit, no other concern from me.

@urugondavishnu
Contributor Author

Resolved the critical flags.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@Manga-Availability-Finder/README.md`:
- Around line 89-122: The fenced ASCII diagram block in README.md is missing a
language specifier; update the opening triple-backtick for the diagram (the
ASCII diagram block) to include the language "text" so the fence becomes
```text, which satisfies Markdownlint and preserves rendering.

Comment on lines +89 to +122
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ User Interface │
│ │
│ ┌─────────────┐ ┌──────────────────┐ ┌─────────────────────────────┐ │
│ │ SearchHero │───▶│ useMangaSearch │───▶│ AgentCard (x6 parallel) │ │
│ │ Component │ │ Hook │ │ with Live Stream Preview │ │
│ └─────────────┘ └──────────────────┘ └─────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Edge Functions (Supabase) │
│ │
│ ┌────────────────────────┐ ┌────────────────────────────────────┐ │
│ │ discover-manga-sites │ │ search-manga (x6) │ │
│ │ (1x per search) │ │ (parallel browser agents) │ │
│ │ │ │ │ │
│ │ Gemini → Get site URLs│ │ TinyFish API → Browser Automation │ │
│ │ (+ fallback sites) │ │ (SSE real-time streaming) │ │
│ └────────────────────────┘ └────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ External APIs │
│ │
│ ┌────────────────────────┐ ┌────────────────────────────────────┐ │
│ │ Gemini API │ │ TinyFish Web Agent API │ │
│ │ (Site Discovery) │ │ (Browser Automation + SSE) │ │
│ │ Called: 1x │ │ Called: 5-6x parallel │ │
│ └────────────────────────┘ └────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
```

⚠️ Potential issue | 🟡 Minor

Add a language to the fenced code block for the diagram.

Markdownlint warns about missing language spec; use text to keep rendering unchanged.

🛠️ Proposed fix
-```
+```text
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 89-89: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@Manga-Availability-Finder/README.md` around lines 89 - 122, The fenced ASCII
diagram block in README.md is missing a language specifier; update the opening
triple-backtick for the diagram (the ASCII diagram block) to include the
language "text" so the fence becomes ```text, which satisfies Markdownlint and
preserves rendering.

@minhhuynh-tinyfish minhhuynh-tinyfish left a comment

LGTM
