add trpc-benchmarks package for Express vs Hono vs Elysia comparison #195
Conversation
New benchmark package comparing tRPC API performance across different server frameworks and runtimes: - Express + Node - Hono + Bun - Elysia + Bun Includes realistic e-commerce benchmark with SQLite/Prisma for DB ops, testing product listings, searches, order creation, and dashboard analytics. Uses k6 for load testing with ramping VU scenarios. Results show Hono+Bun ~16% faster than Express+Node, with Elysia performing similarly to Express in DB-bound scenarios.
Re-ran the 3-way comparison benchmark showing: - Hono+Bun: 7.20ms avg (15% faster than Express) - Elysia+Bun: 7.75ms avg (9% faster than Express) - Express+Node: 8.51ms avg (baseline) Hono remains the clear performance winner.
Results show Hono Native is still ~9.5% faster than Elysia Native on average latency (8.22ms vs 9.00ms), confirming the performance difference is in the frameworks, not tRPC overhead.
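The quoted percentages can be sanity-checked from the averages reported above. A minimal sketch (TypeScript, using the numbers from this thread; lower latency is better, so speedup is measured against the baseline average):

```typescript
// Percent speedup of a candidate over a baseline, from average latencies.
function speedupPercent(baselineMs: number, candidateMs: number): number {
  return ((baselineMs - candidateMs) / baselineMs) * 100
}

const expressAvg = 8.51 // ms, Express+Node baseline
const honoAvg = 7.20    // ms, Hono+Bun
const elysiaAvg = 7.75  // ms, Elysia+Bun

console.log(speedupPercent(expressAvg, honoAvg).toFixed(1))   // 15.4
console.log(speedupPercent(expressAvg, elysiaAvg).toFixed(1)) // 8.9
```

These match the "15% faster" and "9% faster" figures quoted in the re-run.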
Walkthrough

Adds a new tRPC benchmarking workspace under `packages/trpc-benchmarks`.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as Benchmark Runner
    participant Server as Server Process<br/>(Express/Hono/Elysia)
    participant K6 as k6/autocannon
    participant DB as Prisma<br/>(SQLite)
    Runner->>Server: spawn(server command, PORT)
    Server-->>Runner: stdout "Server running"
    Runner->>K6: spawn(benchmark script, env {BASE_URL})
    K6->>Server: HTTP requests (/trpc or /api)
    Server->>DB: prisma queries / transactions
    DB-->>Server: query results
    Server-->>K6: HTTP responses (200/4xx/5xx)
    K6-->>Runner: output JSON/summary
    Runner->>Runner: parse metrics → write results/*.json
    Runner->>Server: terminate process
```
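The spawn, wait-for-stdout, benchmark, terminate cycle in the diagram can be sketched as a minimal readiness helper. This is a sketch with hypothetical names (`waitForReady`, `marker`); the actual runners live under `src/runners/`:

```typescript
import { spawn, ChildProcess } from 'node:child_process'

// Resolve once the child prints the readiness marker on stdout,
// reject if it exits first or the timeout fires.
function waitForReady(proc: ChildProcess, marker: string, timeoutMs: number): Promise<void> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('server start timeout')), timeoutMs)
    proc.stdout?.on('data', (chunk: Buffer) => {
      if (chunk.toString().includes(marker)) {
        clearTimeout(timer)
        resolve()
      }
    })
    // Premature exit (before the marker) should fail fast, not hang.
    proc.on('exit', (code) => {
      clearTimeout(timer)
      reject(new Error(`server exited early (code ${code})`))
    })
  })
}
```

Handling the early-exit case here is what several review comments below ask the individual runners to add.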
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: 1 warning, 2 passed.
```ts
await prisma.order.create({
  data: {
    userId: randomElement(users).id,
```
Check failure: Code scanning / CodeQL: Insecure randomness (High), flagged on `Math.random()`.

Copilot Autofix (AI, 2 months ago):
To eliminate the use of cryptographically insecure random number generation (Math.random()), replace calls to Math.random() in helper functions with a cryptographically secure approach. For Node.js, use the crypto.randomInt() method (introduced in Node 14+) to securely generate random integers. Similarly, for selecting random elements, use crypto.randomInt() to select an index from the array. For random floats, securely generate a random integer in a range and scale it to the desired float range.
Files/Regions to change:

Edit the helper functions in `packages/trpc-benchmarks/src/db/seed.ts`:
- Replace the `randomElement`, `randomInt`, and `randomFloat` implementations to use `crypto.randomInt()` and secure byte/number generation instead of `Math.random()`.
- Import Node's `crypto` module at the top of the file.

Necessary changes:
- Add `import * as crypto from 'crypto'` at the top.
- Replace usage of `Math.random()` in lines 18, 22, and 26 with secure alternatives powered by `crypto.randomInt`.
```diff
@@ -2,6 +2,7 @@
 // Seeds the database with realistic e-commerce data for benchmarking
 
 import { PrismaClient } from '@prisma/client'
+import * as crypto from 'crypto'
 
 const prisma = new PrismaClient()
 
@@ -15,15 +16,19 @@
 
 // Helper functions
 function randomElement<T>(arr: T[]): T {
-  return arr[Math.floor(Math.random() * arr.length)]
+  return arr[crypto.randomInt(0, arr.length)]
 }
 
 function randomInt(min: number, max: number): number {
-  return Math.floor(Math.random() * (max - min + 1)) + min
+  return crypto.randomInt(min, max + 1)
 }
 
 function randomFloat(min: number, max: number): number {
-  return Math.random() * (max - min) + min
+  // Use a secure randomInt over a large range, scaled to float
+  const scale = 1e6 // precision: 6 decimal places
+  const rand = crypto.randomInt(0, scale)
+  const fraction = rand / scale
+  return fraction * (max - min) + min
 }
 
 const categoryNames = [
```
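The patched helpers from the diff above can be run in isolation under Node. A self-contained sketch (names mirror the diff; not the exact file contents):

```typescript
import { randomInt as cryptoRandomInt } from 'node:crypto'

// crypto.randomInt(min, max) is inclusive of min and exclusive of max.
function randomElement<T>(arr: T[]): T {
  if (arr.length === 0) throw new Error('empty array')
  return arr[cryptoRandomInt(0, arr.length)]
}

// Inclusive [min, max] integer range, so add 1 to the exclusive bound.
function randomInt(min: number, max: number): number {
  return cryptoRandomInt(min, max + 1)
}

// Secure float in [min, max) with ~6 decimal digits of granularity.
function randomFloat(min: number, max: number): number {
  const scale = 1e6
  return (cryptoRandomInt(0, scale) / scale) * (max - min) + min
}
```

Note that `crypto.randomInt` requires the range width to be below 2^48, which the `1e6` scale easily satisfies.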
```ts
let reviewsCreated = 0

while (reviewsCreated < NUM_REVIEWS) {
  const userId = randomElement(users).id
```
Check failure: Code scanning / CodeQL: Insecure randomness (High), flagged on `Math.random()`.

Copilot Autofix (AI, 2 months ago):
The problem is the use of Math.random(), which is not cryptographically secure, in the following helper functions: randomElement, randomInt, and randomFloat. These should be replaced with a secure version using Node's crypto module (specifically, crypto.randomInt). For array selection in randomElement, generate a random index via crypto.randomInt(0, arr.length). For integer ranges, use crypto.randomInt(min, max+1) (since crypto.randomInt is exclusive of max). For floating-point ranges, since there is no direct API, generate a secure random 48-bit or 53-bit integer and divide appropriately to obtain a float in [0, 1).
The required edit is:
- Import Node's built-in `crypto` module at the top.
- Replace the implementations of `randomElement`, `randomInt`, and `randomFloat` to use secure randomness.
- `crypto.randomInt` is synchronous when called without a callback (there is no separate `crypto.randomIntSync`), so the helpers can stay synchronous and retain the existing code structure.

All helper functions then switch to securely generated random values, which propagates through the codebase. The rest of the seed logic does not need to change.
```diff
@@ -2,6 +2,7 @@
 // Seeds the database with realistic e-commerce data for benchmarking
 
 import { PrismaClient } from '@prisma/client'
+import { randomInt as cryptoRandomInt, randomBytes } from 'crypto'
 
 const prisma = new PrismaClient()
 
@@ -15,15 +16,22 @@
 
 // Helper functions
 function randomElement<T>(arr: T[]): T {
-  return arr[Math.floor(Math.random() * arr.length)]
+  if (arr.length === 0) throw new Error('Cannot select a random element from an empty array');
+  // Securely pick random index in [0, arr.length)
+  return arr[cryptoRandomInt(0, arr.length)]
 }
 
 function randomInt(min: number, max: number): number {
-  return Math.floor(Math.random() * (max - min + 1)) + min
+  // cryptoRandomInt is exclusive on upper bound, so add 1
+  return cryptoRandomInt(min, max + 1)
 }
 
 function randomFloat(min: number, max: number): number {
-  return Math.random() * (max - min) + min
+  // Generate a secure float in [0, 1) by taking 6 random bytes (48 bits), dividing by 2^48.
+  const bytes = randomBytes(6); // 6 bytes = 48 bits
+  const intVal = bytes.readUIntBE(0, 6);
+  const random = intVal / 281474976710656; // 2^48
+  return min + random * (max - min)
 }
 
 const categoryNames = [
```
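The 48-bit float trick from this diff can be verified in isolation. A sketch (2^48 = 281474976710656; `readUIntBE` supports at most 6 bytes, which is why 48 bits is the ceiling for this approach):

```typescript
import { randomBytes } from 'node:crypto'

// Uniform float in [0, 1): read 48 random bits as an unsigned
// integer and divide by 2^48.
function secureUnitFloat(): number {
  const intVal = randomBytes(6).readUIntBE(0, 6)
  return intVal / 2 ** 48
}

// Scale the unit float into [min, max).
function randomFloat(min: number, max: number): number {
  return min + secureUnitFloat() * (max - min)
}
```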
```js
if (!product1 || !product2) return

var orderData = {
  userId: 'user-' + (Math.floor(Math.random() * 100) + 1),
```
Check failure: Code scanning / CodeQL: Insecure randomness (High), flagged on `Math.random()`.

Copilot Autofix (AI, 2 months ago):
To fix this issue, replace the use of `Math.random()` with a cryptographically secure random number generator. In Node.js that is `crypto.randomInt`; in the browser (and in k6, which this script targets) it is `crypto.getRandomValues` from the web crypto API. One caveat: k6 embeds its own JavaScript engine (Goja), not Node, so Node's `crypto` module is not available in a k6 run and `import crypto from 'crypto'` will not resolve there; the web crypto API is the portable choice.

For this code snippet, replace instances of `Math.floor(Math.random() * 100) + 1` with a random integer in the range [1, 100].

Edit on line 203:
- Change `userId: 'user-' + (Math.floor(Math.random() * 100) + 1),`
- to `userId: 'user-' + crypto.randomInt(1, 101),`

Add an import of crypto (`import crypto from 'crypto'`) at the top of the file; as noted above, this only works if the script also runs under Node.
```diff
@@ -5,7 +5,7 @@
 import http from 'k6/http'
 import { check, group, sleep } from 'k6'
 import { Trend, Counter, Gauge } from 'k6/metrics'
+import crypto from 'crypto'
 
 // Custom metrics
 const productListLatency = new Trend('api_product_list_latency')
 const productGetLatency = new Trend('api_product_get_latency')
@@ -200,7 +200,7 @@
 if (!product1 || !product2) return
 
 var orderData = {
-  userId: 'user-' + (Math.floor(Math.random() * 100) + 1),
+  userId: 'user-' + crypto.randomInt(1, 101),
   items: [
     { productId: product1.id, quantity: Math.floor(Math.random() * 2) + 1 },
     { productId: product2.id, quantity: Math.floor(Math.random() * 2) + 1 }
```
```js
} else if (rand < 0.90) {
  // 10% - User account
  group('User Account', function() {
    var userId = 'user-' + (Math.floor(Math.random() * 100) + 1)
```
Check failure: Code scanning / CodeQL: Insecure randomness (High), flagged on `Math.random()`.

Copilot Autofix (AI, 2 months ago):
To fix the issue, replace the use of Math.random() with a cryptographically secure random number generator to ensure unpredictability, even in non-production or simulated contexts. In the browser (which k6 emulates), use crypto.getRandomValues (native web crypto API). In k6 scripts, this API is available: see k6 JS API documentation. We can generate a secure random integer in the desired range (1-100) using crypto.getRandomValues(new Uint32Array(1))[0] % 100 + 1.
Changes to make:
- Edit `packages/trpc-benchmarks/src/k6/native-benchmark.js` line 253 to replace the insecure random number generator (`Math.random()`) with a cryptographically secure alternative using the web crypto API.
- No new imports are needed, as `crypto` is available directly in k6 scripts.
- Only line 253 needs to be changed.
```diff
@@ -250,7 +250,9 @@
 } else if (rand < 0.90) {
   // 10% - User account
   group('User Account', function() {
-    var userId = 'user-' + (Math.floor(Math.random() * 100) + 1)
+    var randArray = new Uint32Array(1);
+    crypto.getRandomValues(randArray);
+    var userId = 'user-' + (randArray[0] % 100 + 1);
 
     var start = Date.now()
     var res = http.get(BASE_URL + '/api/users/' + userId)
```
Actionable comments posted: 15
🧹 Nitpick comments (33)
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-11-40-697Z.json (1)

1-38: Add trailing newline. JSON files should end with a newline character for better compatibility with version control systems and text processing tools.

packages/trpc-benchmarks/README.md (1)

213-238: Add language identifier to fenced code block. The example output code block should specify a language (e.g., `text`, `console`, or `bash`) for better syntax highlighting and rendering in Markdown viewers. Apply this change: add the `text` identifier to the opening fence ahead of the `tRPC API Benchmark Results` example output.

packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-15-43-440Z.json (1)

1-38: Add trailing newline. JSON files should end with a newline character for better compatibility with version control systems and text processing tools.
packages/trpc-benchmarks/src/runners/compare-results.ts (2)

46-46: Use a sort comparator instead of reverse(). The pattern `.sort().reverse()` is less efficient and less clear than using a descending comparator. Apply this diff:

```diff
-  const jsonFiles = files.filter(f => f.endsWith('.json')).sort().reverse()
+  const jsonFiles = files.filter(f => f.endsWith('.json')).sort((a, b) => b.localeCompare(a))
```

132-133: Add safety check for division by zero. If `slowest.metrics.httpReqDuration.avg` is zero, the division will result in `Infinity`. While unlikely, defensive coding is recommended. Add a check:

```diff
-  const speedup = ((slowest.metrics.httpReqDuration.avg - fastest.metrics.httpReqDuration.avg) /
-    slowest.metrics.httpReqDuration.avg * 100)
+  const speedup = slowest.metrics.httpReqDuration.avg > 0
+    ? ((slowest.metrics.httpReqDuration.avg - fastest.metrics.httpReqDuration.avg) /
+        slowest.metrics.httpReqDuration.avg * 100)
+    : 0
```

packages/trpc-benchmarks/src/servers/express-server.ts (1)
40-54: Consider adding a shutdown timeout. The graceful shutdown handlers call `server.close()`, which waits indefinitely for existing connections to finish. This could hang if there are long-running requests. Consider adding a timeout:

```diff
 process.on('SIGTERM', () => {
   console.log('[Express+Node] SIGTERM received, shutting down...')
+  const timeout = setTimeout(() => {
+    console.log('[Express+Node] Forcing shutdown after timeout')
+    process.exit(1)
+  }, 10000) // 10 second timeout
   server.close(() => {
+    clearTimeout(timeout)
     console.log('[Express+Node] Server closed')
     process.exit(0)
   })
 })
```

Apply the same pattern to the SIGINT handler.

packages/trpc-benchmarks/src/servers/hono-server.ts (1)

58-73: Consider adding a shutdown timeout (same as the Express server). The Node.js path's graceful shutdown could likewise hang indefinitely waiting for connections to close. Apply the same timeout pattern suggested for the Express server.
packages/trpc-benchmarks/src/servers/express-db-server.ts (1)

38-49: Potential hang during shutdown if server.close() stalls. The `server.close()` callback may never fire if there are keep-alive connections, causing the process to hang indefinitely. Consider adding a force-exit timeout:

```diff
 const shutdown = async () => {
   console.log('[Express+Node+SQLite] Shutting down...')
   await prisma.$disconnect()
   server.close(() => {
     console.log('[Express+Node+SQLite] Server closed')
     process.exit(0)
   })
+  // Force exit after timeout if graceful shutdown stalls
+  setTimeout(() => {
+    console.log('[Express+Node+SQLite] Forcing exit after timeout')
+    process.exit(1)
+  }, 5000).unref()
 }
```
48-61: Unhandled fetch errors may cause misleading behavior.The empty catch block silently swallows all errors. While this is intentional for "server not ready" cases, network errors or other issues are also silenced.
Consider logging unexpected errors for debugging:
} catch { - // Server not ready yet + // Server not ready yet - continue waiting }
85-109: Consider async execution for k6 benchmark.Using
execSyncblocks the event loop. While acceptable for a CLI tool, usingspawnwith async handling (like in other runners) would be more consistent and allow for streaming output during the benchmark.
236-246: Errors during benchmark are logged but execution continues.Catching errors and continuing is intentional for running all benchmarks, but failed configs will have empty metrics, potentially causing issues in
printComparisonwith undefined values.Consider skipping failed benchmarks in comparison output or adding a failure flag:
} catch (error) { console.error(`Failed to benchmark ${config.name}:`, error) + results.push({ config, metrics: {}, failed: true }) }packages/trpc-benchmarks/src/servers/hono-db-server.ts (2)
46-56: Duplicate shutdown logic for SIGTERM and SIGINT in the Bun path. The two handlers are identical. Consider extracting a shared function for consistency with the Node path:

```diff
+  const shutdown = async () => {
+    console.log('[Hono+Bun+SQLite] Shutting down...')
+    await prisma.$disconnect()
+    process.exit(0)
+  }
+
   // Graceful shutdown for Bun
-  process.on('SIGTERM', async () => {
-    console.log('[Hono+Bun+SQLite] Shutting down...')
-    await prisma.$disconnect()
-    process.exit(0)
-  })
-  process.on('SIGINT', async () => {
-    console.log('[Hono+Bun+SQLite] Shutting down...')
-    await prisma.$disconnect()
-    process.exit(0)
-  })
+  process.on('SIGTERM', shutdown)
+  process.on('SIGINT', shutdown)
```

57-84: The Node path shutdown may hang, as in the Express server. `server.close()` can wait forever on keep-alive connections; consider adding a force-exit timeout for consistency:

```diff
 const shutdown = async () => {
   console.log('[Hono+Node+SQLite] Shutting down...')
   await prisma.$disconnect()
   server.close(() => {
     console.log('[Hono+Node+SQLite] Server closed')
     process.exit(0)
   })
+  setTimeout(() => {
+    console.log('[Hono+Node+SQLite] Forcing exit after timeout')
+    process.exit(1)
+  }, 5000).unref()
 }
```

packages/trpc-benchmarks/src/runners/run-express-benchmark.ts (2)
15-34: No validation for parsed CLI arguments. `parseInt` returns `NaN` for invalid input, and there's no validation of the `duration` format; invalid inputs could cause confusing k6 errors:

```diff
 for (let i = 0; i < args.length; i++) {
   if (args[i] === '--vus' && args[i + 1]) {
-    vus = parseInt(args[i + 1], 10)
+    const parsed = parseInt(args[i + 1], 10)
+    if (isNaN(parsed) || parsed < 1) {
+      console.error('Error: --vus must be a positive integer')
+      process.exit(1)
+    }
+    vus = parsed
     i++
   } else if (args[i] === '--duration' && args[i + 1]) {
+    if (!/^\d+[smh]?$/.test(args[i + 1])) {
+      console.error('Error: --duration must be in format like "30s", "1m", "1h"')
+      process.exit(1)
+    }
     duration = args[i + 1]
     i++
```

45-78: Server startup detection relies on string matching. The readiness check looks for "Server running" in stdout; if the server's log message changes, the benchmark will fail with a timeout. A health check endpoint would be more robust:

```ts
async function waitForServer(url: string, maxAttempts = 20): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    try {
      const response = await fetch(`${url}/health`)
      if (response.ok) return true
    } catch {
      // Server not ready
    }
    await new Promise(r => setTimeout(r, 500))
  }
  return false
}
```

This approach is already used in `run-native-benchmark.ts` and would provide consistency across runners.

packages/trpc-benchmarks/src/shared/router.ts (2)
91-97: Consider using TRPCError for consistent error handling. Throwing a plain `Error` works, but a `TRPCError` with the `NOT_FOUND` code provides better HTTP status mapping and client-side error handling:

```diff
+import { TRPCError } from '@trpc/server'
+
 get: publicProcedure.input(getUserInput).query(({ input }) => {
   const user = users.get(input.id)
   if (!user) {
-    throw new Error(`User ${input.id} not found`)
+    throw new TRPCError({
+      code: 'NOT_FOUND',
+      message: `User ${input.id} not found`
+    })
   }
   return user
 }),
```

192-196: The echo endpoint accepts `z.unknown()` without size limits. Large payloads could cause memory pressure. For a benchmark tool this may be acceptable, but consider adding a size limit if the benchmark suite might receive untrusted input.
packages/trpc-benchmarks/src/runners/run-hono-benchmark.ts (2)

15-34: Consider validating parsed integer arguments. If `--vus` is followed by a non-numeric string, `parseInt` returns `NaN`, which will propagate to k6 and cause unexpected behavior:

```diff
 for (let i = 0; i < args.length; i++) {
   if (args[i] === '--vus' && args[i + 1]) {
-    vus = parseInt(args[i + 1], 10)
+    const parsed = parseInt(args[i + 1], 10)
+    if (!Number.isNaN(parsed)) vus = parsed
     i++
   } else if (args[i] === '--duration' && args[i + 1]) {
```

73-96: Add a handler for premature server exit. If the server process exits before emitting "Server running" (e.g., a silent failure with exit code 0), the promise will hang until the 10-second timeout. The `run-all-benchmarks.ts` runner handles this case:

```diff
   serverProcess.on('error', (err) => {
     clearTimeout(timeout)
     reject(err)
   })
+
+  serverProcess.on('exit', (code) => {
+    if (code !== 0 && code !== null) {
+      clearTimeout(timeout)
+      reject(new Error(`Server exited with code ${code}`))
+    }
+  })
 })
```

packages/trpc-benchmarks/package.json (1)
12-12: The `db:reset` script is not cross-platform. The `rm -f` command won't work on Windows. Consider a cross-platform alternative if Windows support is needed:

```diff
-    "db:reset": "rm -f prisma/benchmark.db && prisma db push && tsx src/db/seed.ts",
+    "db:reset": "node -e \"try{require('fs').unlinkSync('prisma/benchmark.db')}catch(e){}\" && prisma db push && tsx src/db/seed.ts",
```

Alternatively, use a package like `rimraf` for cross-platform file deletion.

packages/trpc-benchmarks/src/runners/run-all-benchmarks.ts (2)
6-6: Remove unused import. `readFile` is imported but never used in this file:

```diff
-import { mkdir, writeFile, readFile } from 'node:fs/promises'
+import { mkdir, writeFile } from 'node:fs/promises'
```

298-330: Performance comparison limited to the first two results. The comparison logic only considers `results[0]` and `results[1]`. If more frameworks are added to `configs`, the comparison won't reflect all results. Consider iterating over all pairs or ranking all results.

packages/trpc-benchmarks/src/k6/native-benchmark.js (1)
76-81: Misleading comment and logic for test users. The comment says "Get a user" but the code fetches the dashboard endpoint and then hardcodes user IDs regardless of the response content. Consider either fetching actual users or updating the comment to clarify the intent:

```diff
-  // Get a user
-  var dashRes = http.get(BASE_URL + '/api/analytics/dashboard')
-  if (dashRes.status === 200) {
-    // Create a test user ID based on existing data
-    testUsers = [{ id: 'user-1' }, { id: 'user-2' }, { id: 'user-3' }]
-  }
+  // Use synthetic user IDs for the benchmark (actual users created during seeding)
+  testUsers = [{ id: 'user-1' }, { id: 'user-2' }, { id: 'user-3' }]
```

packages/trpc-benchmarks/src/runners/run-realistic-benchmark.ts (2)
137-141: Remove the hardcoded PATH modification. The PATH entry `/home/user/.local/bin` is machine-specific and unnecessary if k6 is properly installed; other benchmark runners don't include this. Users should ensure k6 is on their PATH:

```diff
 const k6Process = spawn('k6', [
   'run',
   '--env', `BASE_URL=${baseUrl}`,
   '--summary-trend-stats', 'avg,min,med,max,p(90),p(95),p(99)',
   'src/k6/realistic-benchmark.js'
 ], {
   cwd: ROOT_DIR,
-  env: { ...process.env, PATH: `/home/user/.local/bin:${process.env.PATH}` },
+  env: process.env,
   stdio: ['ignore', 'pipe', 'pipe']
 })
```

97-121: Add a handler for premature server exit. As in `run-hono-benchmark.ts`, this function lacks an exit handler. If the server exits before emitting "Server running", the promise will hang until the 15-second timeout:

```diff
   serverProcess.on('error', (err) => {
     clearTimeout(timeout)
     reject(err)
   })
+
+  serverProcess.on('exit', (code) => {
+    if (code !== 0 && code !== null) {
+      clearTimeout(timeout)
+      reject(new Error(`Server ${config.name} exited with code ${code}`))
+    }
+  })
 })
```

packages/trpc-benchmarks/src/db/seed.ts (2)
147-168: The review creation loop may become inefficient as `existingReviews` fills up. The while loop randomly selects user-product pairs until finding an unused one. As `existingReviews` grows, collision probability increases, potentially causing many wasted iterations near the target count. For the current configuration (3000 reviews from 500K possible pairs) this is acceptable, but worth noting if scaling up. Consider a deterministic approach for larger datasets:

```diff
-  const existingReviews = new Set<string>()
-  let reviewsCreated = 0
-
-  while (reviewsCreated < NUM_REVIEWS) {
-    const userId = randomElement(users).id
-    const productId = randomElement(products).id
-    const key = `${userId}-${productId}`
-
-    if (!existingReviews.has(key)) {
-      existingReviews.add(key)
-      await prisma.review.create({
+  // Generate unique pairs upfront
+  const pairs: Array<{ userId: string; productId: string }> = []
+  const seen = new Set<string>()
+  while (pairs.length < NUM_REVIEWS) {
+    const userId = randomElement(users).id
+    const productId = randomElement(products).id
+    const key = `${userId}-${productId}`
+    if (!seen.has(key)) {
+      seen.add(key)
+      pairs.push({ userId, productId })
+    }
+  }
+
+  for (let i = 0; i < pairs.length; i++) {
+    const { userId, productId } = pairs[i]
+    await prisma.review.create({
+      data: {
+        userId,
+        productId,
+        rating: randomInt(1, 5),
+        comment: randomElement(reviewComments)
+      }
+    })
+    if ((i + 1) % 1000 === 0) {
+      console.log(`  Created ${i + 1}/${NUM_REVIEWS} reviews`)
+    }
+  }
```

101-109: The non-null assertion is safe here but could use explicit handling. The `!` assertion on line 102 is safe since `selectedProducts` only contains IDs from the `products` array. However, for defensive coding, consider a Map lookup:

```diff
+  const productMap = new Map(products.map(p => [p.id, p]))
+
   const items = Array.from(selectedProducts).map((productId) => {
-    const product = products.find((p) => p.id === productId)!
+    const product = productMap.get(productId)!
     const quantity = randomInt(1, 5)
```

This also improves each lookup from O(n) to O(1).
packages/trpc-benchmarks/src/shared/db-router.ts (2)

150-161: Using the `any` type for a dynamic where clause. While this is a common pattern for building dynamic Prisma queries, consider using Prisma's generated types for better type safety:

```ts
import type { Prisma } from '@prisma/client'

const where: Prisma.ProductWhereInput = {}
```

445-465: Memory-intensive query loads all products and order items. This endpoint fetches all categories with all their products and all order items into memory before aggregating. For large datasets, this could cause memory issues and slow response times. For benchmarking purposes this may be intentional, to test a complex query pattern. If production use is considered, prefer database-level aggregation:

```ts
revenueByCategory: publicProcedure.query(async ({ ctx }) => {
  return ctx.prisma.$queryRaw`
    SELECT c.name as category,
           COALESCE(SUM(oi.price * oi.quantity), 0) as revenue,
           COUNT(DISTINCT p.id) as productCount
    FROM Category c
    LEFT JOIN Product p ON p.categoryId = c.id
    LEFT JOIN OrderItem oi ON oi.productId = p.id
    GROUP BY c.id, c.name
  `
})
```

packages/trpc-benchmarks/src/k6/benchmark.js (1)
74-95: Clean helper functions for tRPC calls. The `wrapInput`, `trpcQuery`, and `trpcMutation` helpers properly handle the superjson format and URL encoding, and are reusable across the benchmark suite. Consider extracting these helpers to a shared module, since `quick-benchmark.js` duplicates the same logic.

packages/trpc-benchmarks/src/servers/elysia-native-server.ts (2)
219-240: Revenue-by-category loads all data into memory. This query fetches all categories with all their products and all order items. For large datasets, this could cause memory issues and slow responses. For benchmarking purposes this is acceptable, but note that a production implementation would use aggregation queries or raw SQL.

249-257: Consider disconnecting Prisma on shutdown. The shutdown handlers exit without disconnecting the Prisma client, which could leave database connections hanging:

```diff
 process.on('SIGTERM', () => {
   console.log('[Elysia-Native+Bun+SQLite] SIGTERM received, shutting down...')
-  process.exit(0)
+  prisma.$disconnect().then(() => process.exit(0))
 })
 process.on('SIGINT', () => {
   console.log('[Elysia-Native+Bun+SQLite] SIGINT received, shutting down...')
-  process.exit(0)
+  prisma.$disconnect().then(() => process.exit(0))
 })
```

packages/trpc-benchmarks/src/servers/hono-native-server.ts (1)

255-263: Consider disconnecting Prisma on shutdown. Same as the Elysia server; adding `prisma.$disconnect()` before exiting would ensure clean database connection teardown:

```diff
 process.on('SIGTERM', () => {
   console.log('[Hono-Native+Bun+SQLite] SIGTERM received, shutting down...')
-  process.exit(0)
+  prisma.$disconnect().then(() => process.exit(0))
 })
 process.on('SIGINT', () => {
   console.log('[Hono-Native+Bun+SQLite] SIGINT received, shutting down...')
-  process.exit(0)
+  prisma.$disconnect().then(() => process.exit(0))
 })
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (2)

- `packages/trpc-benchmarks/prisma/benchmark.db` is excluded by `!**/*.db`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`

📒 Files selected for processing (34)

- `package.json` (2 hunks)
- `packages/trpc-benchmarks/README.md` (1 hunks)
- `packages/trpc-benchmarks/package.json` (1 hunks)
- `packages/trpc-benchmarks/prisma/schema.prisma` (1 hunks)
- `packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-08-41-097Z.json` (1 hunks)
- `packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-11-40-697Z.json` (1 hunks)
- `packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-15-43-440Z.json` (1 hunks)
- `packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-19-10-281Z.json` (1 hunks)
- `packages/trpc-benchmarks/results/realistic-benchmark-2025-11-29T22-37-16-123Z.json` (1 hunks)
- `packages/trpc-benchmarks/results/realistic-benchmark-2025-11-30T17-22-48-477Z.json` (1 hunks)
- `packages/trpc-benchmarks/src/db/client.ts` (1 hunks)
- `packages/trpc-benchmarks/src/db/seed.ts` (1 hunks)
- `packages/trpc-benchmarks/src/index.ts` (1 hunks)
- `packages/trpc-benchmarks/src/k6/benchmark.js` (1 hunks)
- `packages/trpc-benchmarks/src/k6/native-benchmark.js` (1 hunks)
- `packages/trpc-benchmarks/src/k6/quick-benchmark.js` (1 hunks)
- `packages/trpc-benchmarks/src/k6/realistic-benchmark.js` (1 hunks)
- `packages/trpc-benchmarks/src/runners/compare-results.ts` (1 hunks)
- `packages/trpc-benchmarks/src/runners/run-all-benchmarks.ts` (1 hunks)
- `packages/trpc-benchmarks/src/runners/run-express-benchmark.ts` (1 hunks)
- `packages/trpc-benchmarks/src/runners/run-hono-benchmark.ts` (1 hunks)
- `packages/trpc-benchmarks/src/runners/run-native-benchmark.ts` (1 hunks)
- `packages/trpc-benchmarks/src/runners/run-realistic-benchmark.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/elysia-db-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/elysia-native-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/elysia-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/express-db-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/express-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/hono-db-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/hono-native-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/servers/hono-server.ts` (1 hunks)
- `packages/trpc-benchmarks/src/shared/db-router.ts` (1 hunks)
- `packages/trpc-benchmarks/src/shared/router.ts` (1 hunks)
- `packages/trpc-benchmarks/tsconfig.json` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
packages/trpc-benchmarks/src/servers/hono-server.ts (1)
packages/trpc-benchmarks/src/shared/router.ts (1)
appRouter(69-196)
packages/trpc-benchmarks/src/shared/router.ts (2)
packages/trpc-benchmarks/src/shared/db-router.ts (2)
router(18-18)publicProcedure(19-19)packages/trpc-benchmarks/src/index.ts (2)
appRouter(4-4)AppRouter(4-4)
packages/trpc-benchmarks/src/servers/elysia-server.ts (2)
packages/trpc-benchmarks/src/index.ts (1)
appRouter(4-4)packages/trpc-benchmarks/src/shared/router.ts (1)
appRouter(69-196)
packages/trpc-benchmarks/src/servers/elysia-native-server.ts (1)
packages/trpc-benchmarks/src/db/client.ts (1)
prisma(9-13)
packages/trpc-benchmarks/src/shared/db-router.ts (3)
packages/trpc-benchmarks/src/db/client.ts (1)
PrismaClient (19-19)
packages/trpc-benchmarks/src/shared/router.ts (2)
router (21-21)
publicProcedure (22-22)
packages/node-backend/src/index.ts (1)
TRPCError (8-8)
packages/trpc-benchmarks/src/k6/realistic-benchmark.js (1)
packages/trpc-benchmarks/src/k6/benchmark.js (10)
userGetLatency (13-13)
errorRate (17-17)
BASE_URL (21-21)
TRPC_URL (22-22)
options (25-72)
url (81-83)
setup (278-291)
healthRes (282-282)
teardown (294-296)
packages/trpc-benchmarks/src/servers/hono-native-server.ts (1)
packages/trpc-benchmarks/src/db/client.ts (1)
prisma(9-13)
packages/trpc-benchmarks/src/servers/express-server.ts (2)
packages/trpc-benchmarks/src/index.ts (1)
appRouter (4-4)
packages/trpc-benchmarks/src/shared/router.ts (1)
appRouter(69-196)
packages/trpc-benchmarks/src/k6/quick-benchmark.js (2)
packages/trpc-benchmarks/src/k6/benchmark.js (5)
pingLatency (11-11)
errorRate (17-17)
BASE_URL (21-21)
setup (278-291)
teardown (294-296)
packages/trpc-benchmarks/src/k6/realistic-benchmark.js (2)
setup (318-338)
teardown (340-343)
packages/trpc-benchmarks/src/runners/run-all-benchmarks.ts (2)
packages/trpc-benchmarks/src/k6/benchmark.js (2)
options (25-72)
packages/trpc-benchmarks/src/k6/quick-benchmark.js (2)
options (20-27)
packages/trpc-benchmarks/src/db/seed.ts (2)
packages/trpc-benchmarks/src/db/client.ts (2)
prisma (9-13)
PrismaClient (19-19)
packages/js-utils/src/Numbers.ts (1)
randomInt (24-28)
🪛 ast-grep (0.40.0)
packages/trpc-benchmarks/src/runners/run-realistic-benchmark.ts
[warning] 180-180: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(`db_${metric}_latency[^=]*=\\s*avg=([0-9.]+)m?s.*p\\(95\\)=([0-9.]+)m?s`, 'i')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🪛 LanguageTool
packages/trpc-benchmarks/README.md
[grammar] ~174-~174: Ensure spelling is correct
Context: ...ss/fail thresholds: - 95th percentile < 500ms - 99th percentile < 1000ms - Ping P95 < 50...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.18.1)
packages/trpc-benchmarks/README.md
213-213: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (46)
packages/trpc-benchmarks/tsconfig.json (1)
1-21: LGTM! The TypeScript configuration is well-suited for this cross-runtime benchmarking package, with appropriate compiler options and type definitions for both Node.js and Bun environments.
package.json (3)
12-26: LGTM! The workspace catalog is a good addition for centralizing common dependency versions across packages.
50-51: LGTM! The new devDependencies for Elysia and tRPC integration align with the PR's benchmarking objectives.
59-63: The typebox override is necessary for monorepo consistency and compatibility. The override ensures all packages in the monorepo use @sinclair/typebox@^0.34.0, which aligns with the explicit peer dependency requirements from elysia@1.4.16 and exact-mirror. Both packages declare @sinclair/typebox@^0.34.0 as a peer dependency, and the override resolves to 0.34.41 without conflicts. This is a legitimate pattern for enforcing version consistency across a pnpm monorepo and does not mask underlying compatibility issues.
packages/trpc-benchmarks/README.md (1)
1-245: Excellent documentation! The README provides comprehensive coverage of installation, usage, benchmarking scenarios, API endpoints, metrics, and CLI options. This will help users understand and utilize the benchmarking suite effectively.
packages/trpc-benchmarks/src/index.ts (1)
1-4: LGTM! Clean re-export pattern for the public API surface. The file correctly exposes the shared router and type definitions for external consumption.
packages/trpc-benchmarks/results/realistic-benchmark-2025-11-30T17-22-48-477Z.json (1)
1-56: Benchmark results look consistent. The three configurations show comparable request throughput (4658-4665 requests) with zero errors, providing a good baseline for comparison.
packages/trpc-benchmarks/results/realistic-benchmark-2025-11-29T22-37-16-123Z.json (1)
1-56: LGTM! Results from an earlier benchmark run showing similar performance characteristics across frameworks.
packages/trpc-benchmarks/src/servers/express-server.ts (1)
1-29: LGTM! The server setup is clean and minimal, optimized for benchmarking with:
- Minimal middleware overhead (only JSON parsing with 1MB limit)
- x-powered-by header disabled
- Simple health endpoint outside tRPC
- Proper tRPC integration with empty context
packages/trpc-benchmarks/src/servers/hono-server.ts (1)
12-27: LGTM! Health endpoint correctly detects runtime, and tRPC integration is properly configured.
packages/trpc-benchmarks/src/servers/elysia-server.ts (1)
1-43: LGTM! The server implementation is clean and straightforward. The graceful shutdown using app.stop() is simpler than the Express/Hono Node.js paths since Elysia's stop method is synchronous.
packages/trpc-benchmarks/src/servers/elysia-db-server.ts (1)
1-25: LGTM! The server setup correctly integrates Prisma context with the tRPC router and provides proper health check metadata including database information.
packages/trpc-benchmarks/src/db/client.ts (1)
1-19: LGTM. The singleton pattern is correctly implemented. The NODE_ENV !== 'production' check on line 15 is appropriate. When NODE_ENV is undefined (the default in benchmark contexts), the condition evaluates to true, enabling global caching and ensuring the singleton works as intended. This also follows a good practice of disabling caching only in production serverless environments.
packages/trpc-benchmarks/src/servers/express-db-server.ts (3)
1-8: LGTM on imports and configuration. Clean imports and standard port configuration with environment variable override.
10-19: LGTM on Express app setup. Good practices: minimal middleware for benchmarking purposes and disabling the x-powered-by header for security.
21-28: LGTM on tRPC endpoint configuration. The tRPC middleware is correctly configured with the database router and Prisma context.
packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (2)
7-25: LGTM on interface definitions. Well-structured interfaces for benchmark configuration and results with appropriate optional metrics.
139-166: Server process may become orphaned if waitForServer throws. If waitForServer throws an unexpected error (e.g., a network issue), the finally block will still run, but code paths before the try block may leave the process running. That said, the current structure with try/finally is correct for cleanup, and the implementation handles both the happy path and failure scenarios appropriately.
packages/trpc-benchmarks/src/servers/hono-db-server.ts (1)
1-8: LGTM on imports and configuration. Clean imports with appropriate modules for tRPC/Hono integration and Prisma.
packages/trpc-benchmarks/src/runners/run-express-benchmark.ts (1)
130-139: LGTM on error handling and cleanup. Good use of try/catch/finally to ensure server cleanup regardless of benchmark success or failure.
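That pattern can be sketched generically; `startServer` and `runBenchmark` below are hypothetical stand-ins, not the runner's actual exports:

```typescript
// Generic sketch of the try/finally cleanup pattern praised above.
// `startServer` and `runBenchmark` are hypothetical stand-ins.
export async function runWithCleanup(
  startServer: () => Promise<{ kill: () => void }>,
  runBenchmark: () => Promise<void>,
): Promise<void> {
  const server = await startServer()
  try {
    await runBenchmark()
  } finally {
    // Runs whether the benchmark resolved or threw, so the
    // spawned server process is never left orphaned.
    server.kill()
  }
}
```

Errors from the benchmark still propagate to the caller; only the cleanup is guaranteed.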
packages/trpc-benchmarks/src/shared/router.ts (3)
17-22: LGTM on tRPC initialization. Good use of the superjson transformer for proper serialization of dates and other complex types.
24-42: In-memory data stores are module-level singletons. This is intentional for benchmarking, but be aware that the users Map will accumulate entries from user.create mutations across benchmark runs within the same server instance, potentially affecting memory usage in long-running tests.
144-155: Recursive Fibonacci can cause stack overflow for larger inputs. The max input is capped at 35, which is safe for stack depth but still slow (roughly 30 million recursive calls for n=35). This is intentional for CPU benchmarking, but consider documenting the expected runtime or lowering the max for shorter tests.
The implementation is correct for its benchmarking purpose. The comment "recursive, inefficient on purpose" clearly documents the intent.
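For reference, the capped recursive shape under discussion looks roughly like this (a sketch, not the router's exact code; the cap value is taken from the review):

```typescript
// Sketch of a capped, intentionally naive recursive Fibonacci.
// MAX_N = 35 mirrors the cap mentioned in the review; at the cap the
// function makes on the order of 30 million calls, not billions.
const MAX_N = 35

export function fibonacci(n: number): number {
  if (!Number.isInteger(n) || n < 0 || n > MAX_N) {
    throw new Error(`n must be an integer in [0, ${MAX_N}]`)
  }
  // Exponential-time recursion, deliberately inefficient for CPU load.
  return n <= 1 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}
```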
packages/trpc-benchmarks/src/runners/run-hono-benchmark.ts (1)
134-164: LGTM! The main function properly handles errors with try/catch/finally, ensures server cleanup, and provides clear status messages. The overall structure is sound.
packages/trpc-benchmarks/package.json (1)
36-50: LGTM! The dependency structure is appropriate for a private benchmark package. Framework dependencies (Hono, Elysia) in devDependencies work correctly since the package won't be published and servers run via tsx/bun, which includes devDependencies.
packages/trpc-benchmarks/src/runners/run-all-benchmarks.ts (1)
101-146: LGTM! The startServer function has comprehensive error handling, including the exit event handler, proper timeout cleanup, and a grace period for server initialization.
packages/trpc-benchmarks/src/k6/native-benchmark.js (1)
97-276: LGTM! The main test function is well-structured with realistic weighted action distribution, proper HTTP response checks, and latency tracking per operation type. The sleep intervals add realistic user behavior simulation.
packages/trpc-benchmarks/src/runners/run-realistic-benchmark.ts (2)
178-189: Static analysis ReDoS warning is a false positive here. The dbMetrics array contains only hardcoded strings, so there's no user-controlled input in the regex construction. The warning can be safely ignored.
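To illustrate why the construction is safe: the metric names come from a fixed list, so the full pattern space is known at build time. A simplified sketch of the parsing follows; the real pattern and k6's exact summary format may differ, and the sample line in the test is invented:

```typescript
// Sketch of parsing a k6 summary line for a known metric name.
// The metric list is hardcoded, so no user input ever reaches RegExp.
const dbMetrics = ['product_list', 'product_search'] as const

export function parseLatency(
  output: string,
  metric: (typeof dbMetrics)[number],
): { avg: number; p95: number } | null {
  // Pattern built only from the fixed metric name, hence no ReDoS surface.
  const re = new RegExp(
    `db_${metric}_latency[^:]*:\\s*avg=([0-9.]+)ms.*p\\(95\\)=([0-9.]+)ms`,
    'i',
  )
  const m = output.match(re)
  return m ? { avg: parseFloat(m[1]), p95: parseFloat(m[2]) } : null
}
```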
206-342: LGTM! The main function has comprehensive skip logic for missing runtimes, proper server lifecycle management with cleanup in finally blocks, and well-formatted comparison output with rankings. The JSON results persistence is a nice touch for further analysis.
packages/trpc-benchmarks/prisma/schema.prisma (1)
1-87: Well-structured Prisma schema for e-commerce benchmarking. The schema is well-designed with appropriate indexes for the query patterns used in the benchmark router (filters by category, price range, status, and date ordering). The cascade delete on OrderItem and the unique constraint on Review preventing duplicate user-product reviews are good choices.
packages/trpc-benchmarks/src/db/seed.ts (1)
179-186: Good error handling and cleanup. Proper use of .catch() for error handling with exit code and .finally() to ensure the Prisma client is disconnected regardless of success or failure.
packages/trpc-benchmarks/src/shared/db-router.ts (2)
297-350: Well-implemented transactional order creation. Good use of Prisma's $transaction for atomic operations: validating stock, creating the order with items, and decrementing stock all within a single transaction. The error handling with appropriate TRPCError codes (NOT_FOUND, BAD_REQUEST) is correct.
13-19: Separate tRPC instance with database context. This file creates its own tRPC instance with Context (containing PrismaClient), separate from the shared router.ts. This is appropriate since the db-router requires database context while the basic router may not. Just ensure consumers import from the correct module based on their needs.
packages/trpc-benchmarks/src/k6/quick-benchmark.js (1)
82-94: Good setup/teardown lifecycle hooks. The setup function properly validates server availability before running tests and passes server info to teardown for logging. This follows k6 best practices.
packages/trpc-benchmarks/src/k6/benchmark.js (2)
24-72: Well-designed k6 scenario configuration. Good progression from smoke (1 VU, 10s) → load (ramping 0→10→20→0) → stress (ramping to 100 VUs) with appropriate startTime offsets to sequence them. The thresholds (p95 < 500ms, p99 < 1s, errors < 1%) are reasonable for a benchmark.
158-226: All endpoints in benchmark.js exist in router.ts. The load test and stress test in benchmark.js call endpoints that are properly defined in the router used by express-server.ts. Endpoints like user.get, user.create, user.batch, items.list, items.all, compute.hash, and compute.fibonacci are all present in router.ts. The benchmark script is correctly configured for the express server.
packages/trpc-benchmarks/src/servers/elysia-native-server.ts (2)
1-16: LGTM! Health endpoint and server setup look good. Clear runtime metadata and proper environment-based port configuration.
18-55: LGTM! Product list and search endpoints are well-implemented with proper pagination, parallel queries using Promise.all, and appropriate Prisma includes.
packages/trpc-benchmarks/src/servers/hono-native-server.ts (3)
1-17: LGTM! Health endpoint and server initialization are clean and consistent with the Elysia implementation.
18-87: LGTM! Product and category routes are well-structured with proper pagination, parallel queries, and Prisma includes.
245-248: LGTM! The default export with port and fetch follows Bun's native server pattern correctly.
packages/trpc-benchmarks/src/k6/realistic-benchmark.js (5)
1-31: LGTM! Good setup with custom metrics per operation type. The metric naming convention (db_*) clearly indicates these are database-backed operations.
33-54: LGTM! The ramping VU scenario and thresholds are reasonable for a realistic e-commerce load pattern.
56-93: LGTM! Helper functions for tRPC query/mutation wrapping and result extraction are clean and handle errors gracefully.
95-163: LGTM! Browse and search behaviors are well-implemented with proper metric tracking, error rate recording, and realistic sleep delays between operations.
197-256: LGTM! Order placement logic handles the case where data isn't yet seeded, properly validates item uniqueness, and correctly treats stock issues (400) as acceptable outcomes rather than errors.
@@ -0,0 +1 @@
[]
\ No newline at end of file
Empty benchmark result file.
This file contains an empty array, which may indicate a failed benchmark run or an incomplete data capture. Empty result files can cause confusion and may lead to issues when processing results.
Consider either removing this file or documenting why it exists (e.g., as a template or representing a specific failed run scenario).
🤖 Prompt for AI Agents
In
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-08-41-097Z.json
(lines 1-1), the results file contains an empty array which indicates a failed
or incomplete benchmark run; either remove the file if it's not needed, or
replace its contents with a small explanation JSON object or a README entry
(e.g., {"status":"failed","reason":"<explain>","timestamp":"..."} or add a
comment/documentation file alongside) so consumers and CI know why the file
exists and processing scripts won’t misinterpret an empty array.
"avgLatency": 8.22,
"p95Latency": 70.61,
"p99Latency": 7.13,
Critical data inconsistency: p99Latency is lower than avgLatency.
The p99 latency (7.13ms) is lower than the average latency (8.22ms), which is mathematically impossible. By definition, the 99th percentile must be greater than or equal to the average. This indicates a data integrity issue—the values may be swapped, calculated incorrectly, or using inconsistent units.
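A small validator along these lines would catch such files automatically; the field names follow the result JSON above, and the check itself is a sketch:

```typescript
// Sketch of a sanity check for benchmark result entries.
// By definition avg <= p95 <= p99 must hold (up to rounding).
interface LatencySummary {
  avgLatency: number
  p95Latency: number
  p99Latency: number
}

export function findLatencyInconsistencies(entries: LatencySummary[]): string[] {
  const problems: string[] = []
  entries.forEach((e, i) => {
    if (e.p99Latency < e.p95Latency) problems.push(`entry ${i}: p99 < p95`)
    if (e.p99Latency < e.avgLatency) problems.push(`entry ${i}: p99 < avg`)
  })
  return problems
}
```

Running this over the results directory in CI would reject files like the one above before they are committed.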
"avgLatency": 9,
"p95Latency": 80.55,
"p99Latency": 7.53,
Critical data inconsistency: p99Latency is lower than avgLatency.
Same issue as above—the p99 latency (7.53ms) is lower than the average latency (9ms), which is impossible. This suggests the benchmark data collection or aggregation logic has a bug.
Would you like me to generate a script to validate the benchmark result files and identify all instances of this data inconsistency?
🤖 Prompt for AI Agents
In
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T18-19-10-281Z.json
around lines 36–38, the recorded p99Latency (7.53) is less than avgLatency (9),
indicating percentile/aggregation logic is wrong; fix by verifying percentiles
are computed from the same raw sample set and recomputing percentiles with a
correct algorithm (sort samples, use the proper index/interpolation for p95/p99)
and ensure post-checks enforce p99 >= p95 and p99 >= avg (or fail/log the file)
and add a validation step or small script that scans benchmark JSON files for
these inconsistencies and either corrects or rejects results.
// Test data
let testProducts = []
let testUsers = []
let testCategories = []
let createdOrders = []
Module-level mutable state won't work as expected across VUs.
In k6, each VU runs in its own JavaScript context, so createdOrders (and other module-level arrays) are not shared between VUs. The createdOrders.push() on line 203 only affects the current VU's array, and teardown() will see an empty or minimal array.
If cross-VU order tracking is needed, consider using k6's shared array or accepting that each VU tracks its own orders independently.
Since this is a benchmark (not a correctness test), this behavior may be acceptable—each VU will have its own small pool of created orders to query. Just be aware the teardown log won't reflect total orders created.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/k6/native-benchmark.js around lines 48-53,
module-level arrays (testProducts, testUsers, testCategories, createdOrders) are
per-VU and not shared across VUs; to fix, stop relying on createdOrders in a
global teardown or switch to a k6 shared mechanism: either 1) keep per-VU arrays
and adjust teardown to aggregate/report per-VU results (or remove teardown
reliance on total orders), or 2) replace createdOrders with a shared solution
such as a k6 Counter/Trend metric (imported from k6/metrics) to increment on
each order creation, or persist minimal read-only data via SharedArray/open if
you truly need cross-VU read-only state; pick one approach and update code to
use that pattern consistently.
// Query with input
const userRes = trpcQuery('user.get', { id: 'user-1' })
throughput.add(1)
queryLatency.add(userRes.timings.duration)
check(userRes, { 'user.get ok': (r) => r.status === 200 })
errorRate.add(userRes.status !== 200)

// Mutation
const createRes = trpcMutation('user.create', {
  name: `Test ${Date.now()}`,
  email: `test-${Date.now()}@example.com`
})
throughput.add(1)
mutationLatency.add(createRes.timings.duration)
check(createRes, { 'user.create ok': (r) => r.status === 200 })
errorRate.add(createRes.status !== 200)

// List query
const listRes = trpcQuery('items.list', { page: 1, pageSize: 20 })
throughput.add(1)
queryLatency.add(listRes.timings.duration)
check(listRes, { 'items.list ok': (r) => r.status === 200 })
errorRate.add(listRes.status !== 200)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's locate the router files
echo "=== Locating router files ==="
find packages/trpc-benchmarks -name "*router*" -type f
echo ""
echo "=== Locating benchmark files ==="
find packages/trpc-benchmarks -name "quick-benchmark.js" -type f
echo ""
echo "=== Looking at quick-benchmark.js to see if it specifies which router ==="
head -50 packages/trpc-benchmarks/src/k6/quick-benchmark.js

Repository: goat-io/fluent
Length of output: 1908
🏁 Script executed:
#!/bin/bash
# Let's examine both router files to see what procedures they define
echo "=== router.ts content ==="
wc -l packages/trpc-benchmarks/src/shared/router.ts
cat -n packages/trpc-benchmarks/src/shared/router.ts
echo ""
echo "=== db-router.ts content ==="
wc -l packages/trpc-benchmarks/src/shared/db-router.ts
cat -n packages/trpc-benchmarks/src/shared/db-router.ts

Repository: goat-io/fluent
Length of output: 25130
Consider which router this benchmark targets.
The script calls user.create and items.list, which exist in router.ts but not in db-router.ts:
- In router.ts: both endpoints are defined (user.create at line 100, items.list at line 121)
- In db-router.ts: only product.list exists; no user.create or items.list
Since the benchmark uses a configurable BASE_URL environment variable without specifying the target router, requests will fail if this benchmark is run against a db-router server. Clarify in documentation which router these benchmarks target, or add conditional endpoint logic if both routers should be supported.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/k6/quick-benchmark.js around lines 55–77, the
benchmark assumes the presence of user.create and items.list routes (present in
router.ts) which do not exist in db-router.ts, so tests will fail if BASE_URL
points to a db-router server; either (A) update the benchmark
documentation/README to state explicitly which router/server this script targets
(e.g., "router.ts" with user.create and items.list) or (B) make the script
environment-aware by adding a simple ROUTER_TYPE or TARGET_ROUTER env var and
conditional endpoint mapping: when ROUTER_TYPE==='db' use endpoints that exist
on db-router (e.g., product.list or other db-router routes) and adjust
checks/assertions and metrics accordingly, otherwise use the current user/items
endpoints; ensure any added conditional paths also update the check names and
errorRate logic to match the chosen endpoints.
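Option (B) could be as small as a lookup keyed by a hypothetical ROUTER_TYPE value. The procedure names below are the ones confirmed in the routers above; the env-var name and the mapping shape are assumptions:

```typescript
// Sketch of option (B): pick procedures based on which router the
// target server mounts. ROUTER_TYPE is a hypothetical env var.
type RouterType = 'shared' | 'db'

export function endpointsFor(routerType: RouterType): { list: string; create?: string } {
  if (routerType === 'db') {
    // db-router.ts exposes product.list but no user.create/items.list
    return { list: 'product.list' }
  }
  // shared router.ts exposes items.list and user.create
  return { list: 'items.list', create: 'user.create' }
}
```

The benchmark would then call `endpointsFor(__ENV.ROUTER_TYPE || 'shared')` once and use the returned names in its checks and metrics.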
function parseArgs() {
  const args = process.argv.slice(2)
  let duration = '60s'
  let maxVus = 20

  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--duration' && args[i + 1]) {
      duration = args[i + 1]
      i++
    } else if (args[i] === '--max-vus' && args[i + 1]) {
      maxVus = parseInt(args[i + 1], 10)
      i++
    }
  }

  return { duration, maxVus }
}
maxVus argument is parsed but never used.
The --max-vus argument is parsed at lines 58-60 but maxVus is never passed to runBenchmark or used elsewhere. Either remove the unused argument or wire it into the k6 invocation.
function parseArgs() {
const args = process.argv.slice(2)
let duration = '60s'
- let maxVus = 20
for (let i = 0; i < args.length; i++) {
if (args[i] === '--duration' && args[i + 1]) {
duration = args[i + 1]
i++
- } else if (args[i] === '--max-vus' && args[i + 1]) {
- maxVus = parseInt(args[i + 1], 10)
- i++
}
}
- return { duration, maxVus }
+ return { duration }
}

Or, if max VUs should be configurable, pass it to k6 via --env MAX_VUS=${maxVus}.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/runners/run-realistic-benchmark.ts around lines
49 to 65, the CLI parses a --max-vus value into the maxVus variable but never
uses it; either remove the parsing or wire maxVus into the k6 invocation. Fix by
passing maxVus into the benchmark runner (e.g., add maxVus to the object/args
passed to runBenchmark) or export it to the k6 process via an env flag (e.g.,
add --env MAX_VUS=${maxVus} to the spawned k6 command) and update any downstream
code to read process.env.MAX_VUS; ensure parsing still validates the integer and
document the new env/arg usage.
process.on('SIGTERM', async () => {
  console.log('[Elysia+Bun+SQLite] SIGTERM received, shutting down...')
  await prisma.$disconnect()
  app.stop()
  process.exit(0)
})

process.on('SIGINT', async () => {
  console.log('[Elysia+Bun+SQLite] SIGINT received, shutting down...')
  await prisma.$disconnect()
  app.stop()
  process.exit(0)
})
Fix async shutdown handlers - process.exit(0) doesn't wait for prisma.$disconnect().
The shutdown handlers are declared async and call await prisma.$disconnect(), but then immediately call process.exit(0). This causes the process to exit before the Prisma disconnection completes, potentially leaving the database connection in an inconsistent state.
The handlers need to be properly async or wrap the exit in the promise chain:
-process.on('SIGTERM', async () => {
+process.on('SIGTERM', () => {
console.log('[Elysia+Bun+SQLite] SIGTERM received, shutting down...')
- await prisma.$disconnect()
- app.stop()
- process.exit(0)
+ prisma.$disconnect()
+ .then(() => {
+ app.stop()
+ process.exit(0)
+ })
+ .catch((error) => {
+ console.error('[Elysia+Bun+SQLite] Error during shutdown:', error)
+ process.exit(1)
+ })
})
-process.on('SIGINT', async () => {
+process.on('SIGINT', () => {
console.log('[Elysia+Bun+SQLite] SIGINT received, shutting down...')
- await prisma.$disconnect()
- app.stop()
- process.exit(0)
+ prisma.$disconnect()
+ .then(() => {
+ app.stop()
+ process.exit(0)
+ })
+ .catch((error) => {
+ console.error('[Elysia+Bun+SQLite] Error during shutdown:', error)
+ process.exit(1)
+ })
})🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/elysia-db-server.ts around lines 33 to
45, the SIGTERM/SIGINT handlers call await prisma.$disconnect() but immediately
call process.exit(0) which can terminate the process before the async disconnect
completes; change the handlers to await app.stop() and await
prisma.$disconnect() inside a try/catch, only call process.exit(0) after those
awaits complete (or call process.exit in a .then() after the promises resolve),
and log and handle any errors before exiting to ensure the shutdown sequence
completes reliably.
.post('/api/orders', async ({ body, set }) => {
  const { userId, items } = body as { userId: string; items: { productId: string; quantity: number }[] }

  // Validate stock and calculate total
  let total = 0
  const productIds = items.map(i => i.productId)
  const products = await prisma.product.findMany({
    where: { id: { in: productIds } }
  })

  const productMap = new Map(products.map(p => [p.id, p]))

  for (const item of items) {
    const product = productMap.get(item.productId)
    if (!product) {
      set.status = 400
      return { error: `Product ${item.productId} not found` }
    }
    if (product.stock < item.quantity) {
      set.status = 400
      return { error: `Insufficient stock for ${product.name}` }
    }
    total += product.price * item.quantity
  }

  // Create order with transaction
  const order = await prisma.$transaction(async (tx) => {
    const newOrder = await tx.order.create({
      data: {
        userId,
        total,
        status: 'pending',
        items: {
          create: items.map(item => ({
            productId: item.productId,
            quantity: item.quantity,
            price: productMap.get(item.productId)!.price
          }))
        }
      },
      include: { items: true }
    })

    // Update stock
    for (const item of items) {
      await tx.product.update({
        where: { id: item.productId },
        data: { stock: { decrement: item.quantity } }
      })
    }

    return newOrder
  })

  return order
})
Race condition: stock validation occurs outside transaction.
Stock is checked at lines 138-149 before the transaction, but decremented inside the transaction at lines 170-175. Concurrent requests could both pass validation, then both decrement, resulting in negative stock.
For a benchmark, this may be acceptable, but in production you'd need to either:
- Move the stock validation inside the transaction with
SELECT ... FOR UPDATE - Add a database constraint preventing negative stock
// Create order with transaction
const order = await prisma.$transaction(async (tx) => {
+ // Re-validate stock inside transaction
+ const freshProducts = await tx.product.findMany({
+ where: { id: { in: productIds } }
+ })
+ for (const item of items) {
+ const product = freshProducts.find(p => p.id === item.productId)
+ if (!product || product.stock < item.quantity) {
+ throw new Error(`Insufficient stock for product ${item.productId}`)
+ }
+ }
+
const newOrder = await tx.order.create({🤖 Prompt for AI Agents
packages/trpc-benchmarks/src/servers/elysia-native-server.ts lines 126-181:
stock validation is done before the transaction which allows race conditions;
move the validation and total calculation into the same prisma.$transaction so
the reads and writes occur atomically—inside the transaction re-query the
products using the transaction client (tx) and either lock rows (via a FOR
UPDATE raw query if your DB/Prisma setup supports it) or perform conditional
updates that fail if stock is insufficient (e.g., update where id = X and stock
>= qty and check affected rows), compute total from those authoritative rows,
create the order and decrement stock inside the same tx, and return a 400 error
when any product is missing or any conditional update fails; alternatively add a
DB-level constraint preventing negative stock as a secondary safeguard.
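The conditional-update variant mentioned above (update only where stock >= quantity, then check affected rows) can be illustrated with an in-memory stand-in. In Prisma this would correspond to an `updateMany` with a `stock: { gte: qty }` filter inside `$transaction`; the function below is a sketch, not the server's code:

```typescript
// In-memory stand-in for an atomic conditional stock decrement.
// A real implementation would run:
//   tx.product.updateMany({ where: { id, stock: { gte: qty } },
//                           data: { stock: { decrement: qty } } })
// and treat count === 0 as "insufficient stock", rolling back the order.
export function tryDecrementStock(
  stock: Map<string, number>,
  productId: string,
  qty: number,
): boolean {
  const current = stock.get(productId)
  if (current === undefined || current < qty) {
    return false // mirrors "0 rows affected": unknown product or not enough stock
  }
  stock.set(productId, current - qty)
  return true
}
```

Because the check and the write happen as one step against the authoritative row, two concurrent orders can no longer both pass a stale validation.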
app.post('/api/orders', async (c) => {
  const { userId, items } = await c.req.json() as { userId: string; items: { productId: string; quantity: number }[] }

  // Validate stock and calculate total
  let total = 0
  const productIds = items.map(i => i.productId)
  const products = await prisma.product.findMany({
    where: { id: { in: productIds } }
  })

  const productMap = new Map(products.map(p => [p.id, p]))

  for (const item of items) {
    const product = productMap.get(item.productId)
    if (!product) {
      return c.json({ error: `Product ${item.productId} not found` }, 400)
    }
    if (product.stock < item.quantity) {
      return c.json({ error: `Insufficient stock for ${product.name}` }, 400)
    }
    total += product.price * item.quantity
  }

  // Create order with transaction
  const order = await prisma.$transaction(async (tx) => {
    const newOrder = await tx.order.create({
      data: {
        userId,
        total,
        status: 'pending',
        items: {
          create: items.map(item => ({
            productId: item.productId,
            quantity: item.quantity,
            price: productMap.get(item.productId)!.price
          }))
        }
      },
      include: { items: true }
    })

    // Update stock
    for (const item of items) {
      await tx.product.update({
        where: { id: item.productId },
        data: { stock: { decrement: item.quantity } }
      })
    }

    return newOrder
  })

  return c.json(order)
})
Same race condition as Elysia server: stock validation outside transaction.
Stock is validated at lines 142-151 before the transaction starts, then decremented inside the transaction at lines 172-177. This allows concurrent order requests to pass validation simultaneously, potentially causing negative stock.
See the comment on elysia-native-server.ts for the suggested fix pattern.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/hono-native-server.ts around lines 130
to 183, the code validates product stock before starting the transaction then
decrements stock inside the transaction, creating a race that can allow negative
stock; move all stock checks and updates into the same prisma transaction:
inside the prisma.$transaction callback re-query the products using the
transaction client (tx), validate stock there, perform conditional updates (e.g.
tx.product.update with a where clause that requires stock >= quantity or check
the update result) for each item so the check-and-decrement is atomic, and if
any conditional update fails throw an error to rollback and return a 400;
finally create the order within that same transaction using the tx client.
```ts
if (isBun) {
  // Bun native server
  const server = Bun.serve({
    port: PORT,
    fetch: app.fetch
  })

  console.log(`[Hono+Bun] Server running on http://localhost:${server.port}`)
  console.log(`[Hono+Bun] tRPC endpoint: http://localhost:${server.port}/trpc`)
  console.log(`[Hono+Bun] Bun version: ${Bun.version}`)
  console.log(`[Hono+Bun] PID: ${process.pid}`)
} else {
```
Add graceful shutdown handlers for the Bun runtime path.
The Bun server path lacks SIGTERM/SIGINT handlers, which could lead to abrupt termination and incomplete cleanup during benchmarks or deployment.
Add shutdown handlers after the server startup logging:
```diff
   console.log(`[Hono+Bun] Server running on http://localhost:${server.port}`)
   console.log(`[Hono+Bun] tRPC endpoint: http://localhost:${server.port}/trpc`)
   console.log(`[Hono+Bun] Bun version: ${Bun.version}`)
   console.log(`[Hono+Bun] PID: ${process.pid}`)
+
+  // Graceful shutdown
+  process.on('SIGTERM', () => {
+    console.log('[Hono+Bun] SIGTERM received, shutting down...')
+    server.stop()
+    process.exit(0)
+  })
+
+  process.on('SIGINT', () => {
+    console.log('[Hono+Bun] SIGINT received, shutting down...')
+    server.stop()
+    process.exit(0)
+  })
 } else {
```

🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/hono-server.ts around lines 32–43, the
Bun server startup path currently logs details but has no graceful shutdown
handlers; add SIGINT and SIGTERM listeners immediately after the startup logs
that call the Bun server's stop/close method, perform any necessary async
cleanup (await if stop returns a Promise), log shutdown completion, and then
exit the process with the appropriate code; ensure handlers are idempotent
(ignore repeated signals) and also handle unexpected errors/unhandledRejection
by logging and triggering the same cleanup flow.
Analysis shows Elysia Native and Hono Native perform nearly identically (~8ms avg latency, ~82 req/s) with real database workloads. The performance difference is within margin of error (~1.5%). Conclusion: tRPC is NOT slowing Elysia down - the bottleneck is Prisma/SQLite I/O, not framework routing overhead.
Actionable comments posted: 4
♻️ Duplicate comments (1)
packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1)

126-127: Misleading: p99Latency stores p90 value.

As previously flagged, this assigns the p90 value to `p99Latency`. The printed comparison at line 205 will report this as "P99 Latency", which is misleading. Either rename the field to `p90Latency` and update `printComparison`, or configure k6 to output actual p99 values.
🧹 Nitpick comments (7)
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T21-05-15-149Z.json (1)
20-20: Clarify error rate semantics.

The errorRate values (5 and 6) lack unit clarity. Are these absolute counts, percentages, or another metric? Document the unit to ensure proper interpretation of benchmark reliability.
Also applies to: 41-41
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-55-25-153Z.json (1)
1-44: Add trailing newline to JSON file.

JSON files should end with a newline character for consistency and to avoid potential issues with some tools and version control systems.

```diff
 }
-]
+]
```

packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts (4)
30-54: Redundant default fallbacks with TypeBox defaults.

TypeBox `t.Numeric({ default: 20 })` should handle defaults at the schema level. The nullish coalescing on lines 31-32 (`?? 20`, `?? 0`) may be redundant. However, this is harmless as a defensive measure.

Consider removing the redundant defaults if TypeBox handles them:

```diff
 .get('/', async ({ query }) => {
-  const limit = query.limit ?? 20
-  const offset = query.offset ?? 0
+  const limit = query.limit
+  const offset = query.offset
   const categoryId = query.categoryId
```

Alternatively, keep them for defensive coding if there's uncertainty about TypeBox's behavior with `t.Optional`.
122-143: User endpoint exposes email address.

Line 127 includes the user's email address in the response.
278-299: Revenue-by-category fetches entire dataset.

This query includes all products and order items per category, which could be slow with large datasets. For a benchmark testing DB-bound scenarios, this is likely intentional. In production, consider using a raw SQL aggregate query.

309-317: Shutdown handlers don't close Prisma connection.

The handlers call `process.exit(0)` without disconnecting Prisma, which could leave DB connections hanging. For a benchmark package this is acceptable, but note for production use.

```diff
+import { prisma } from '../db/client.js'

 process.on('SIGTERM', async () => {
   console.log('[Elysia-Optimized+Bun+SQLite] SIGTERM received, shutting down...')
+  await prisma.$disconnect()
   process.exit(0)
 })
```

packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1)

237-239: Consider using `recursive: true` for directory creation.

`mkdirSync('results')` will throw if a parent directory is missing (unlikely here, but defensive):

```diff
 if (!existsSync('results')) {
-  mkdirSync('results')
+  mkdirSync('results', { recursive: true })
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-55-25-153Z.json (1 hunks)
- packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-59-25-254Z.json (1 hunks)
- packages/trpc-benchmarks/results/native-benchmark-2025-11-30T21-05-15-149Z.json (1 hunks)
- packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1 hunks)
- packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-59-25-254Z.json
🧰 Additional context used
🧬 Code graph analysis (1)
packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts (1)
packages/trpc-benchmarks/src/db/client.ts (1)
- prisma (9-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (8)
packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts (3)
1-14: LGTM! Clean imports and straightforward port configuration with a sensible default.

17-24: LGTM! Health check endpoint is clean and provides useful runtime context for benchmarking.

107-117: LGTM! Clean categories listing with product counts. No pagination, which is acceptable for a benchmark with limited categories.

packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (5)

7-49: LGTM! Clear interface definitions and benchmark configurations with appropriate port assignments to avoid conflicts.

51-90: LGTM! Utility functions are clean. The `waitForServer` polling with a 15-second timeout is reasonable, and filtering `ExperimentalWarning` from stderr is a nice touch.

92-116: LGTM! Appropriate timeout (3 minutes) and buffer size (50MB) for benchmark output. Returning `error.stdout` on failure allows partial result parsing.

146-173: LGTM! Clean orchestration with try/finally ensuring server cleanup. The 1-second delay after SIGTERM is reasonable for graceful shutdown.

175-229: LGTM! Well-structured comparison table with dynamic column widths and meaningful ranking output. Note that line 205 displays "P99 Latency" which will show p90 data per the parsing issue above.
```json
"metrics": {
  "avgLatency": 7.71,
  "p95Latency": 18.18,
  "p99Latency": 6.37,
  "totalRequests": 5778,
  "requestsPerSecond": 82.317084,
  "errorRate": 5
}
```
🧩 Analysis chain
🏁 Script executed:
find packages/trpc-benchmarks -type f \( -name "*.ts" -o -name "*.js" \) | head -20

Repository: goat-io/fluent
Length of output: 1145

🏁 Script executed:

find packages/trpc-benchmarks -name "*k6*" -o -name "*benchmark*" | head -20

Length of output: 1329

🏁 Script executed:

cat packages/trpc-benchmarks/package.json

Length of output: 1862

🏁 Script executed:

ls -la packages/trpc-benchmarks/

Length of output: 527

🏁 Script executed:

cat packages/trpc-benchmarks/src/k6/native-benchmark.js

Length of output: 8622

🏁 Script executed:

cat packages/trpc-benchmarks/src/runners/run-native-benchmark.ts

Length of output: 7777

🏁 Script executed:

cat packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-55-25-153Z.json

Length of output: 938

🏁 Script executed:

# Let's also check another result to see if the pattern is consistent
cat packages/trpc-benchmarks/results/native-benchmark-2025-11-30T20-59-25-254Z.json
Length of output: 937
Fix metric labeling bug in k6 result parser: p90 is incorrectly reported as p99.
The metrics file contains systematically invalid percentile data across all benchmark runs because of a labeling error in packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (line 88-93). The k6 benchmark outputs avg, p(90), and p(95) latencies, but the parsing logic incorrectly assigns the p90 value to the p99Latency field:
```ts
// Current (incorrect):
metrics.p99Latency = parseFloat(durationMatch[3]) // This is p90, not p99
```

The regex capture groups are: [1]=avg, [3]=p(90), [5]=p(95). Since k6 does not output p99 by default, the comment suggests using p90 as a fallback, but the code mislabels it. Either:

- Rename `p99Latency` to `p90Latency` to correctly reflect what's being reported, or
- Configure k6 to output actual p99 percentiles and parse them correctly.
The reported p99 values (6.37–6.72ms) are actually p90 data, which explains why they consistently fall below average latency across all runs.
```json
"metrics": {
  "avgLatency": 8.03,
  "p95Latency": 76.8,
  "p99Latency": 6.82,
  "totalRequests": 5739,
  "requestsPerSecond": 81.733911,
  "errorRate": 5
}
},
```
🧩 Analysis chain
🏁 Script executed:
cat -n packages/trpc-benchmarks/results/native-benchmark-2025-11-30T21-05-15-149Z.json

Repository: goat-io/fluent
Length of output: 1245
Critical: Percentile latencies violate statistical ordering.
Both benchmark entries show p99Latency values lower than p95Latency values:
- Hono: p95Latency=76.8ms, p99Latency=6.82ms
- Elysia: p95Latency=78.16ms, p99Latency=6.91ms
The 99th percentile must always be ≥ the 95th percentile. This indicates the metrics were either swapped during recording, miscomputed, or corrupted.
Verify the data collection and calculation pipeline: re-run benchmarks, check the recording tool output, and confirm latency values are correctly assigned to their percentile fields.
🤖 Prompt for AI Agents
In
packages/trpc-benchmarks/results/native-benchmark-2025-11-30T21-05-15-149Z.json
around lines 14-22 the percentile fields are inconsistent (p99Latency <
p95Latency), indicating swapped or miscomputed percentile assignments; inspect
the data pipeline that computes and assigns percentiles, verify the raw latency
sample arrays and the percentile calculation routine (ensure you compute p95 and
p99 from the same sorted dataset with correct indices or interpolation), add a
runtime validation check that asserts p99 >= p95 and fails or corrects the
record if violated, and rerun the benchmark/recording tool to regenerate the
JSON with correct percentile values.
```ts
.post('/', async ({ body }) => {
  const { userId, items } = body

  // Validate stock and calculate total
  let total = 0
  const productIds = items.map(i => i.productId)
  const products = await prisma.product.findMany({
    where: { id: { in: productIds } }
  })

  const productMap = new Map(products.map(p => [p.id, p]))

  for (const item of items) {
    const product = productMap.get(item.productId)
    if (!product) {
      return error(400, { error: `Product ${item.productId} not found` })
    }
    if (product.stock < item.quantity) {
      return error(400, { error: `Insufficient stock for ${product.name}` })
    }
    total += product.price * item.quantity
  }

  // Create order with transaction
  const order = await prisma.$transaction(async (tx) => {
    const newOrder = await tx.order.create({
      data: {
        userId,
        total,
        status: 'pending',
        items: {
          create: items.map(item => ({
            productId: item.productId,
            quantity: item.quantity,
            price: productMap.get(item.productId)!.price
          }))
        }
      },
      include: { items: true }
    })

    // Update stock
    for (const item of items) {
      await tx.product.update({
        where: { id: item.productId },
        data: { stock: { decrement: item.quantity } }
      })
    }

    return newOrder
  })

  return order
```
Race condition in stock validation.
Stock is checked at lines 179-194 outside the transaction, but decremented inside the transaction at lines 215-220. Under concurrent load, two requests could both pass validation with remaining stock of 1, then both decrement, resulting in negative stock.
Move stock validation inside the transaction:
.post('/', async ({ body }) => {
const { userId, items } = body
- // Validate stock and calculate total
- let total = 0
- const productIds = items.map(i => i.productId)
- const products = await prisma.product.findMany({
- where: { id: { in: productIds } }
- })
-
- const productMap = new Map(products.map(p => [p.id, p]))
-
- for (const item of items) {
- const product = productMap.get(item.productId)
- if (!product) {
- return error(400, { error: `Product ${item.productId} not found` })
- }
- if (product.stock < item.quantity) {
- return error(400, { error: `Insufficient stock for ${product.name}` })
- }
- total += product.price * item.quantity
- }
-
// Create order with transaction
const order = await prisma.$transaction(async (tx) => {
+ // Validate stock and calculate total inside transaction
+ const productIds = items.map(i => i.productId)
+ const products = await tx.product.findMany({
+ where: { id: { in: productIds } }
+ })
+
+ const productMap = new Map(products.map(p => [p.id, p]))
+ let total = 0
+
+ for (const item of items) {
+ const product = productMap.get(item.productId)
+ if (!product) {
+ throw new Error(`Product ${item.productId} not found`)
+ }
+ if (product.stock < item.quantity) {
+ throw new Error(`Insufficient stock for ${product.name}`)
+ }
+ total += product.price * item.quantity
+ }
+
      const newOrder = await tx.order.create({

Note: Throwing inside the transaction will roll it back. You'll need to catch and convert to `error(400, ...)` responses.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts around lines
173 to 225, the stock validation is done outside the transaction which creates a
race condition; move all validation and total calculation inside the
prisma.$transaction so the same transactional snapshot is used: inside the
transaction callback use tx.product.findMany({ where: { id: { in: productIds } }
}) to rebuild productMap, validate existence and stock (throw an Error for
invalid cases), compute total and prepare the items payload, then create the
order and decrement stock within the same tx. Ensure you catch the thrown
validation Error around the transaction call and convert it to return error(400,
...) so the transaction rollback is propagated as a client 400 response.
```ts
body: t.Object({
  userId: t.String(),
  items: t.Array(t.Object({
    productId: t.String(),
    quantity: t.Number()
  }))
})
})
```
Quantity validation allows invalid values.
t.Number() accepts negative numbers, zero, and floats. Consider adding constraints:
body: t.Object({
userId: t.String(),
items: t.Array(t.Object({
productId: t.String(),
- quantity: t.Number()
+ quantity: t.Integer({ minimum: 1 })
}))
})📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
body: t.Object({
  userId: t.String(),
  items: t.Array(t.Object({
    productId: t.String(),
    quantity: t.Integer({ minimum: 1 })
  }))
})
```
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/elysia-optimized-server.ts around lines
227 to 234, the schema uses t.Number() for item.quantity which permits
negatives, zero and floats; change the schema to require a positive integer
(e.g., replace t.Number() with an integer type and enforce minimum 1 or add a
refinement/constraint of > 0) so quantity is validated as an integer >= 1.
- Add STRESS_TEST environment variable to k6 benchmark to ramp up to 200 VUs - Add ResourceMonitor class using /proc filesystem for CPU/memory tracking - Add --stress CLI flag to benchmark runner - Include stress test results: Hono 181.5 RPS vs Elysia 164.8 RPS
Actionable comments posted: 2
♻️ Duplicate comments (2)
packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1)

280-281: Misleading: p99Latency is assigned the p90 value.

The comment acknowledges p99 isn't in default k6 output, but storing p90 in a field named `p99Latency` is misleading. Either rename the field to accurately reflect the data, or configure k6 with `--summary-trend-stats` to include p99.

```diff
-  // p99 not in default k6 output, use p90 as fallback
-  metrics.p99Latency = parseFloat(durationMatch[3])
+  // Note: Default k6 output only includes p90, not p99
+  metrics.p90Latency = parseFloat(durationMatch[3])
```

Also update the `BenchmarkResult` interface at line 27 to use `p90Latency` instead of `p99Latency`.

packages/trpc-benchmarks/src/k6/native-benchmark.js (1)
74-78: Module-level `createdOrders` is per-VU, not shared across VUs.

The `createdOrders` array and the log in `teardown()` will only reflect orders created by one VU, not the total across all VUs. This is inherent to k6's execution model.

For benchmarking purposes this is acceptable, but be aware the teardown log is misleading. Consider removing the misleading log or using a k6 Counter metric to track total orders created across all VUs:

```diff
+const ordersCreated = new Counter('orders_created')

 // In Place Order group, after successful order:
 if (res.status === 200 || res.status === 201) {
+  ordersCreated.add(1)
   try {
     var order = JSON.parse(res.body)
```

Also applies to: 305-308
🧹 Nitpick comments (5)
packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (3)
109-138: Dead code: `getProcessStats` function is never used.

This standalone function duplicates logic already present in the `ResourceMonitor` class and is never called anywhere in the file.

Remove the unused function:

```diff
-// Resource monitoring using /proc filesystem (Linux)
-function getProcessStats(pid: number): { memoryMB: number; cpuPercent: number } | null {
-  try {
-    // Read memory from /proc/[pid]/status
-    const statusPath = `/proc/${pid}/status`
-    if (!existsSync(statusPath)) return null
-
-    const status = readFileSync(statusPath, 'utf-8')
-    const vmRssMatch = status.match(/VmRSS:\s+(\d+)\s+kB/)
-    const memoryKB = vmRssMatch ? parseInt(vmRssMatch[1]) : 0
-    const memoryMB = memoryKB / 1024
-
-    // Read CPU from /proc/[pid]/stat
-    const statPath = `/proc/${pid}/stat`
-    const stat = readFileSync(statPath, 'utf-8')
-    const parts = stat.split(' ')
-    // utime (14th field) + stime (15th field) = total CPU time in jiffies
-    const utime = parseInt(parts[13]) || 0
-    const stime = parseInt(parts[14]) || 0
-    const totalTime = utime + stime
-
-    // This is cumulative, so we just return a rough estimate
-    // Real CPU% would need two samples to calculate delta
-    const cpuPercent = totalTime / 100 // Rough approximation
-
-    return { memoryMB, cpuPercent }
-  } catch {
-    return null
-  }
-}
```

341-351: `monitor.stop()` is called twice.

Line 341 calls `monitor?.stop()` to retrieve samples, then line 346 in the `finally` block calls it again. The second call is harmless (returns an empty array) but redundant.

```diff
-  const resourceSamples = monitor?.stop() || []
+  const resourceSamples = monitor?.samples || []
   return { config, metrics, resourceSamples }
 } finally {
   if (monitor) {
     monitor.stop()
   }
```

Alternatively, make `stop()` idempotent by clearing the interval only if it exists (already done) and expose samples via a getter.

packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1)

72-85: Consider adding a timeout warning for server startup.

`waitForServer` silently returns `false` after 15 seconds (30 attempts × 500ms). The caller throws an error, but adding logging during retries would help diagnose slow server startups.

```diff
 async function waitForServer(url: string, maxAttempts = 30): Promise<boolean> {
   for (let i = 0; i < maxAttempts; i++) {
     try {
       const response = await fetch(`${url}/health`)
       if (response.ok) {
         return true
       }
     } catch {
       // Server not ready yet
     }
+    if (i > 0 && i % 10 === 0) {
+      console.log(`  Still waiting for server... (${i}/${maxAttempts} attempts)`)
+    }
     await sleep(500)
   }
   return false
 }
```

packages/trpc-benchmarks/src/k6/native-benchmark.js (2)
7-7: Unused import: `Gauge` is never used.

```diff
-import { Trend, Counter, Gauge } from 'k6/metrics'
+import { Trend, Counter } from 'k6/metrics'
```

20-20: Unused metric: `successfulRequests` is defined but never incremented.

Either remove this unused counter or add `successfulRequests.add(1)` after each successful request check.

```diff
-const successfulRequests = new Counter('successful_requests')
```
📜 Review details
📒 Files selected for processing (3)
- packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json (1 hunks)
- packages/trpc-benchmarks/src/k6/native-benchmark.js (1 hunks)
- packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
packages/trpc-benchmarks/src/k6/native-benchmark.js (3)
- packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1): start (151-164)
- packages/trpc-benchmarks/src/k6/quick-benchmark.js (1): res (83-83)
- packages/js-utils/src/Combinations.ts (1): check (311-317)

packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (2)
- packages/trpc-benchmarks/src/k6/native-benchmark.js (1): STRESS_TEST (26-26)
- packages/node-utils/src/Processes.ts (1): cpuPercent (66-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (8)
packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (3)
140-240: ResourceMonitor relies on the Linux `/proc` filesystem.

This implementation will silently fail on macOS/Windows, returning zero values. Consider adding a warning when the platform isn't supported, or documenting this Linux-only limitation.

```diff
 class ResourceMonitor {
   private pid: number
   private samples: ResourceSample[] = []
   private interval: NodeJS.Timeout | null = null
   private lastCpuTime = 0
   private lastSampleTime = Date.now()
+  private isLinux: boolean

   constructor(pid: number) {
     this.pid = pid
+    this.isLinux = process.platform === 'linux'
+    if (!this.isLinux) {
+      console.warn('[ResourceMonitor] Resource monitoring only supported on Linux. Metrics will be unavailable.')
+    }
   }

   start(intervalMs = 1000) {
+    if (!this.isLinux) return
     this.samples = []
```

87-107: LGTM! Server startup with stdout/stderr streaming and filtering of noisy warnings is well implemented.

438-476: LGTM! Main function orchestrates benchmarks correctly with proper error handling, delays between runs, result aggregation, and timestamped output files.

packages/trpc-benchmarks/src/k6/native-benchmark.js (4)

61-71: Threshold inconsistency: `error_rate` uses different aggregate types.

In stress mode (line 64), the threshold uses `rate<0.5` (rate per second), but in normal mode (line 67), it uses `count<100` (absolute count). This inconsistency may be intentional but could confuse future maintainers.

Verify this is the intended behavior. Using `rate` in stress tests allows for higher absolute error counts during longer high-load runs, while `count` in normal mode enforces a hard limit. If intentional, consider adding a comment explaining the difference.

202-208: `Math.random()` flagged by security scanner — acceptable for benchmarks.

The security scanner flagged `Math.random()` for generating user IDs (lines 203, 253). For load testing purposes, cryptographic randomness is unnecessary — these are synthetic test identifiers, not actual security tokens.

Also applies to: 253-253

80-117: Setup function correctly initializes test data.

The setup fetches products, categories, and creates synthetic user data. Data is returned and passed to VUs correctly.

124-303: LGTM — realistic weighted scenario distribution.

The action groups (Browse 35%, Search 20%, Order 15%, View Order 10%, User Account 10%, Dashboard 10%) simulate realistic e-commerce traffic patterns with appropriate latency tracking and error handling.

packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json (1)

22-25: Resource monitoring data is unavailable (Linux-only feature).

All resource metrics (peakMemoryMB, avgMemoryMB, peakCpuPercent, avgCpuPercent) are 0 and resourceSamples arrays are empty. Resource monitoring in the benchmark harness uses the Linux /proc filesystem and is automatically unavailable in non-Linux environments (macOS, Windows, sandbox/CI). This benchmark was likely run in such an environment. For comprehensive resource analysis, run benchmarks on a Linux system or document the platform constraints.
Also applies to: 49-52, 27-27, 54-54
```json
"avgLatency": 560.41,
"p95Latency": 4.72,
"p99Latency": 866.72,
```
🧩 Analysis chain
🏁 Script executed:
cat -n packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json

Repository: goat-io/fluent
Length of output: 1660
Fix latency metrics—p95/p99 percentile ordering is mathematically invalid.
The percentile values are inconsistent with basic distribution semantics:
- Hono: p95 (4.72ms) < average (560.41ms), which violates percentile ordering rules
- Elysia: p99 (4.11ms) < p95 (4.88ms), directly violating the definition that p99 ≥ p95; also p95 < average (618.88ms)
Percentile values must always satisfy p95 ≤ p99, and typically p95 should be ≥ average latency in a right-skewed distribution. These inversions indicate measurement calculation errors. Verify the metrics collection and calculation logic.
```json
"totalRequests": 27254,
"requestsPerSecond": 181.541404,
"peakRPS": 181.541404,
"errorRate": 2061,
```
🧩 Analysis chain
🏁 Script executed:
find . -name "stress-test-2025-12-04T18-02-20-896Z.json" -type f

Repository: goat-io/fluent
Length of output: 134

🏁 Script executed:

find . -type d -name "results" | head -10

Length of output: 92

🏁 Script executed:

ls -lh packages/trpc-benchmarks/results/ 2>/dev/null | head -20

Length of output: 1028

🏁 Script executed:

wc -l "packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json" 2>/dev/null || echo "File not found"

Length of output: 135

🏁 Script executed:

find . -path "*trpc-benchmarks*" -name "*.json" -type f 2>/dev/null | head -20

Length of output: 959

🏁 Script executed:

git ls-files "packages/trpc-benchmarks/" 2>/dev/null | grep -E "\.(json|md)$" | head -20

Length of output: 970

🏁 Script executed:

git ls-files | grep "stress-test" | head -10

Length of output: 132

🏁 Script executed:

find . -type f -name "*.json" 2>/dev/null | grep -i benchmark | head -20

Length of output: 1063

🏁 Script executed:

ls -la packages/ 2>/dev/null | head -20

Length of output: 1176

🏁 Script executed:

find packages -name "*.ts" -o -name "*.js" | grep -i benchmark | head -10

Length of output: 660

🏁 Script executed:

find packages -name "README*" -o -name "*.md" | xargs grep -l "benchmark\|stress" 2>/dev/null | head -10

Length of output: 385

🏁 Script executed:

ls -la packages/trpc-benchmarks/ 2>/dev/null || echo "Package not found"

Length of output: 527

🏁 Script executed:

git ls-files "packages/" | head -30

Length of output: 1633

🏁 Script executed:

find . -type f \( -name "*.json" -o -name "*.ts" -o -name "*.js" \) | xargs grep -l "errorRate\|error.*rate" 2>/dev/null | head -10

Length of output: 612

🏁 Script executed:

find . -type f -name "*stress*" -o -name "*bench*" 2>/dev/null | head -20

Length of output: 1286

🏁 Script executed:

stat "packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json" 2>&1

Length of output: 102

🏁 Script executed:

find . -name "*.json" -type f 2>/dev/null | xargs grep -l "errorRate" 2>/dev/null | head -20

Length of output: 796

🏁 Script executed:

cat -n packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json

Length of output: 1660

🏁 Script executed:

cat packages/trpc-benchmarks/README.md

Length of output: 6734

🏁 Script executed:

find packages/trpc-benchmarks/src -name "*.ts" -o -name "*.js" | head -10

Length of output: 557

🏁 Script executed:

ls -la packages/trpc-benchmarks/src/
Length of output: 492
Investigate why error rates significantly exceed documented threshold.
The error rates for both frameworks (Hono: 7.56%, Elysia: 7.70%) substantially exceed the benchmark's documented pass/fail threshold of < 1% error rate. While the stress test intentionally ramps up to 100 virtual users to find breaking points, errors at this magnitude suggest possible issues with request handling under extreme load. Verify the error types, confirm the servers recovered gracefully after the test concluded, and document whether these failures are acceptable for the intended stress scenario.
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/results/stress-test-2025-12-04T18-02-20-896Z.json
around line 21, the reported "errorRate": 2061 indicates ~7–8% failures which
far exceed the <1% threshold; investigate by (1) extracting and categorizing
error types from the raw test logs/curl/bench tool output (timeouts, 5xx,
connection resets, rate limits, client errors), (2) correlating error timestamps
with server logs and resource metrics (CPU, memory, open file/socket limits) to
identify bottlenecks or saturation points, (3) verifying whether servers
returned to healthy state after test completion and capturing any crash traces
or restart events, (4) re-run a targeted subset of runs at high load with
increased logging and adjusted timeouts to reproduce and confirm root cause, and
(5) update the JSON results with an annotated summary of findings and a short
note stating whether the observed failures are acceptable for this stress
scenario or if code/infra fixes are required.
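As a concrete starting point for step (1), the aggregate counters in an autocannon-style result already support a first-pass classification. This is a hedged sketch, not repo code: the field names (`errors`, `timeouts`, `non2xx`, `requests.total`) are assumptions modeled on autocannon's JSON output, and the sample numbers are back-filled from the reported 2061 failures.

```typescript
// Hypothetical helper: classify failures from an autocannon-style result.
// Field names are assumptions -- verify against your autocannon version.
interface StressMetrics {
  errors: number      // socket-level errors (autocannon counts timeouts here too)
  timeouts: number
  non2xx: number      // responses with non-2xx status codes
  requests: { total: number }
}

function summarizeFailures(m: StressMetrics) {
  const failed = m.errors + m.non2xx
  const errorRate = m.requests.total > 0 ? failed / m.requests.total : 0
  return {
    errorRate,
    breakdown: {
      socketErrors: m.errors - m.timeouts, // resets, refusals, etc.
      timeouts: m.timeouts,
      non2xx: m.non2xx
    },
    passes: errorRate < 0.01 // documented <1% threshold
  }
}

// Illustrative numbers chosen so that failed === 2061, matching the report
const sample: StressMetrics = {
  errors: 1500,
  timeouts: 1400,
  non2xx: 561,
  requests: { total: 27255 }
}
console.log(summarizeFailures(sample))
```

Splitting the rate into a breakdown like this is what makes the "timeouts vs 5xx vs connection resets" question in the prompt answerable at a glance.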
Stress test results with 200 concurrent connections for 30 seconds:
- Hono Native + Bun: 2246 req/s (winner)
- Express Native + Node: 1346 req/s

Hono handles 67% more requests than Express under identical load. Elysia excluded from test due to typebox version conflict.

Added:
- express-native-server.ts - Native Express REST API (no tRPC)
- run-stress-test.ts - autocannon-based stress test runner
- Updated run-native-benchmark.ts to include Express in configs
Actionable comments posted: 7
🧹 Nitpick comments (6)
packages/trpc-benchmarks/src/servers/express-native-server.ts (3)
22-80: Missing async error handling in route handlers.

If any Prisma query throws (e.g., connection issues, malformed queries), the unhandled rejection will crash the server or leave the request hanging. This is a benchmark server, so it may be acceptable, but for robustness consider wrapping in try-catch or using an async error handler middleware.
Example wrapper pattern:
const asyncHandler = (fn: (req: express.Request, res: express.Response) => Promise<void>) =>
  (req: express.Request, res: express.Response, next: express.NextFunction) =>
    Promise.resolve(fn(req, res)).catch(next)

app.get('/api/products', asyncHandler(async (req, res) => {
  // ... handler logic
}))
225-246: Potential memory issue with unbounded data fetch.

This endpoint fetches all categories with all their products and all order items into memory before aggregating. With large datasets, this could cause memory pressure. For benchmarking, this may be intentional to test worst-case scenarios, but for production-like behavior, consider using database-level aggregation:
// More efficient approach using raw SQL or Prisma's groupBy
const result = await prisma.$queryRaw`
  SELECT
    c.id as categoryId,
    c.name as categoryName,
    COALESCE(SUM(oi.price * oi.quantity), 0) as revenue,
    COUNT(DISTINCT p.id) as productCount
  FROM Category c
  LEFT JOIN Product p ON p.categoryId = c.id
  LEFT JOIN OrderItem oi ON oi.productId = p.id
  GROUP BY c.id, c.name
`
255-263: Consider disconnecting Prisma client on shutdown.

The graceful shutdown closes the HTTP server but doesn't disconnect the Prisma client. While not critical for benchmarks (the process exits anyway), adding prisma.$disconnect() ensures clean connection pool teardown:

process.on('SIGTERM', () => {
  console.log('[Express-Native+Node+SQLite] SIGTERM received, shutting down...')
-  server.close(() => process.exit(0))
+  server.close(async () => {
+    await prisma.$disconnect()
+    process.exit(0)
+  })
})

packages/trpc-benchmarks/src/runners/run-stress-test.ts (3)
27-49: Verify runtime and command availability before execution.

The configurations assume tsx (via npx), bun, and the server files exist. If any runtime or server file is missing, the process will fail with unclear error messages.

Consider adding dependency checks at startup:

async function validateDependencies(): Promise<boolean> {
  const checks = [
    { cmd: 'npx', args: ['tsx', '--version'], name: 'tsx' },
    { cmd: 'bun', args: ['--version'], name: 'bun' }
  ]
  for (const check of checks) {
    try {
      await new Promise((resolve, reject) => {
        spawn(check.cmd, check.args, { stdio: 'ignore' })
          .on('close', (code) => code === 0 ? resolve(true) : reject())
          .on('error', reject)
      })
    } catch {
      console.error(`❌ ${check.name} not available`)
      return false
    }
  }
  return true
}

Then call this in main() before running tests.
147-147: Consider making the test endpoint configurable.

The endpoint /api/products is hardcoded. If servers don't implement this exact route, the test will fail without clear indication. Consider adding endpoint validation or making it configurable per framework.

Add to ServerConfig interface:

interface ServerConfig {
  name: string
  runtime: 'bun' | 'node'
  framework: string
  port: number
  command: string[]
+ testEndpoint?: string // defaults to /api/products
}

Then use:

- const result = await runAutocannon(`${serverUrl}/api/products`, 30, 200)
+ const endpoint = config.testEndpoint || '/api/products'
+ const result = await runAutocannon(`${serverUrl}${endpoint}`, 30, 200)
211-244: Consider adding graceful shutdown handler.

If the script is interrupted (Ctrl+C) while a server is running, the server process will be left orphaned and continue running, potentially causing port conflicts on the next run.

Add cleanup on interrupt:

// At the top level before main()
let currentServer: ChildProcess | null = null

process.on('SIGINT', () => {
  console.log('\n\nReceived SIGINT, cleaning up...')
  if (currentServer && !currentServer.killed) {
    currentServer.kill('SIGTERM')
  }
  process.exit(130)
})

// Then in runStressTest, track the server:
async function runStressTest(config: ServerConfig): Promise<StressResult | null> {
  console.log(`\n${'='.repeat(60)}`)
  console.log(`Starting ${config.name}...`)

  const server = startServer(config)
  currentServer = server // Track for cleanup

  try {
    // ... rest of function
  } finally {
    console.log(`  Stopping server...`)
    server.kill('SIGTERM')
    currentServer = null // Clear tracking
    await sleep(1000)
  }
}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- packages/trpc-benchmarks/src/runners/run-native-benchmark.ts (1 hunks)
- packages/trpc-benchmarks/src/runners/run-stress-test.ts (1 hunks)
- packages/trpc-benchmarks/src/servers/express-native-server.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/trpc-benchmarks/src/runners/run-native-benchmark.ts
🧰 Additional context used
🧬 Code graph analysis (1)
packages/trpc-benchmarks/src/servers/express-native-server.ts (1)
packages/trpc-benchmarks/src/db/client.ts (1)
prisma (9-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (6)
packages/trpc-benchmarks/src/servers/express-native-server.ts (4)
1-10: LGTM!

Clean setup that reuses the shared Prisma client from ../db/client.ts, ensuring consistent database connection handling across benchmark servers.
12-19: LGTM!

Health endpoint correctly returns runtime metadata for benchmark identification.
82-90: LGTM!

Category listing with product counts is correctly implemented.
92-130: Minor: User orders endpoint behavior for non-existent users.

GET /api/users/:id/orders returns an empty array for non-existent users instead of a 404. This is acceptable for benchmarking but differs from the user detail endpoint behavior. Also note that email is exposed in user responses—fine for benchmarks but worth considering in production contexts.

packages/trpc-benchmarks/src/runners/run-stress-test.ts (2)
7-25: LGTM: Well-defined TypeScript interfaces.

The type definitions are clear and comprehensive, covering all necessary configuration and result data.
51-66: LGTM: Robust server readiness check.

The polling mechanism with exponential-style retry is appropriate for waiting on server startup.
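For readers who want the shape of such a poll, here is a minimal sketch. It is a hypothetical reconstruction, not the code under review: the /health path, attempt count, and backoff parameters are all assumptions.

```typescript
// Compute an exponential-style backoff schedule, capped at capMs.
function backoffDelays(baseMs: number, factor: number, attempts: number, capMs: number): number[] {
  const delays: number[] = []
  let d = baseMs
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(d, capMs))
    d *= factor
  }
  return delays
}

// Poll the (assumed) /health endpoint until it responds OK or attempts run out.
async function waitForServer(url: string, attempts = 10): Promise<boolean> {
  for (const delay of backoffDelays(100, 2, attempts, 2000)) {
    try {
      const res = await fetch(`${url}/health`) // requires Node 18+ global fetch
      if (res.ok) return true
    } catch {
      // server not accepting connections yet -- fall through to retry
    }
    await new Promise(resolve => setTimeout(resolve, delay))
  }
  return false
}

console.log(backoffDelays(100, 2, 5, 2000)) // delays grow geometrically until the cap
```

Capping the delay keeps slow starters from blowing up total wait time, while the geometric growth avoids hammering a server that is still booting.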
function startServer(config: ServerConfig): ChildProcess {
  const env = { ...process.env, PORT: String(config.port) }
  const proc = spawn(config.command[0], config.command.slice(1), {
    env,
    stdio: ['ignore', 'pipe', 'pipe'],
    cwd: process.cwd()
  })

  proc.stdout?.on('data', (data) => {
    console.log(`[${config.name}] ${data.toString().trim()}`)
  })

  proc.stderr?.on('data', (data) => {
    const msg = data.toString().trim()
    if (msg && !msg.includes('ExperimentalWarning')) {
      console.error(`[${config.name}] ${msg}`)
    }
  })

  return proc
}
Add error handler for spawn failures.
If the command doesn't exist or spawn fails, the process will emit an 'error' event. Without a handler, this could cause an unhandled error and leave the script hanging.
Add error handling:
function startServer(config: ServerConfig): ChildProcess {
const env = { ...process.env, PORT: String(config.port) }
const proc = spawn(config.command[0], config.command.slice(1), {
env,
stdio: ['ignore', 'pipe', 'pipe'],
cwd: process.cwd()
})
+ proc.on('error', (err) => {
+ console.error(`[${config.name}] Failed to start: ${err.message}`)
+ })
+
proc.stdout?.on('data', (data) => {
console.log(`[${config.name}] ${data.toString().trim()}`)
})🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/runners/run-stress-test.ts around lines 68 to
88, the spawned child process lacks an 'error' event handler so a spawn failure
(e.g., command not found) can produce an unhandled exception; add
proc.on('error', ...) to log the error with the server name (using processLogger
or console.error) and ensure the error is propagated or causes the parent to
exit/cleanup (e.g., reject the caller or throw/exit after logging) so the script
doesn't hang silently.
async function runAutocannon(url: string, duration: number, connections: number): Promise<any> {
  return new Promise((resolve, reject) => {
    const args = [
      'autocannon',
      '-c', String(connections), // concurrent connections
      '-d', String(duration),    // duration in seconds
      '-j',                      // JSON output
      url
    ]

    const proc = spawn('npx', args, { cwd: process.cwd() })
    let output = ''

    proc.stdout?.on('data', (data) => {
      output += data.toString()
    })

    proc.stderr?.on('data', (data) => {
      const msg = data.toString()
      if (!msg.includes('Running')) {
        console.log(`  [autocannon] ${msg.trim()}`)
      }
    })

    proc.on('close', (code) => {
      if (code === 0) {
        try {
          resolve(JSON.parse(output))
        } catch {
          reject(new Error('Failed to parse autocannon output'))
        }
      } else {
        reject(new Error(`autocannon exited with code ${code}`))
      }
    })
  })
}
Add timeout protection and improve error messaging.
Two issues:
- If autocannon hangs or takes too long, there's no timeout to abort the test
- JSON parse errors lose context about what output was received
Add a timeout wrapper and preserve parse errors:
async function runAutocannon(url: string, duration: number, connections: number): Promise<any> {
- return new Promise((resolve, reject) => {
+ const runPromise = new Promise((resolve, reject) => {
const args = [
'autocannon',
'-c', String(connections),
'-d', String(duration),
'-j',
url
]
const proc = spawn('npx', args, { cwd: process.cwd() })
let output = ''
proc.stdout?.on('data', (data) => {
output += data.toString()
})
proc.stderr?.on('data', (data) => {
const msg = data.toString()
if (!msg.includes('Running')) {
console.log(` [autocannon] ${msg.trim()}`)
}
})
proc.on('close', (code) => {
if (code === 0) {
try {
resolve(JSON.parse(output))
- } catch {
- reject(new Error('Failed to parse autocannon output'))
+ } catch (err) {
+ reject(new Error(`Failed to parse autocannon output: ${err}. Output: ${output.substring(0, 200)}`))
}
} else {
reject(new Error(`autocannon exited with code ${code}`))
}
})
})
+
+ // Add timeout (duration + 60s buffer)
+ const timeout = (duration + 60) * 1000
+ return Promise.race([
+ runPromise,
+ new Promise((_, reject) =>
+ setTimeout(() => reject(new Error(`autocannon timed out after ${timeout}ms`)), timeout)
+ )
+ ])
}🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/runners/run-stress-test.ts around lines 90 to
126, the runAutocannon helper lacks timeout protection and obscures parse
errors; wrap the spawned process in a timeout (e.g., setTimeout) so the test is
aborted after a configurable max duration, kill the child process on timeout,
clear the timer on normal exit, and reject with a clear timeout Error; also
preserve output when JSON.parse fails by including the raw output (and any
stderr captured) in the rejection message so callers can see what was returned,
and improve non-zero exit errors to include the exit code and collected output.
} finally {
  console.log(`  Stopping server...`)
  server.kill('SIGTERM')
  await sleep(1000)
}
Improve server cleanup reliability.
Two concerns:
- SIGTERM might not work reliably on Windows
- A fixed 1-second sleep might be insufficient for slow shutdowns, potentially causing port conflicts in subsequent tests
Consider a more robust cleanup:
} finally {
console.log(` Stopping server...`)
- server.kill('SIGTERM')
- await sleep(1000)
+
+ // Kill with fallback
+ server.kill('SIGTERM')
+ await sleep(2000)
+
+ // Force kill if still running
+ if (!server.killed) {
+ console.log(` Force killing server...`)
+ server.kill('SIGKILL')
+ await sleep(1000)
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
} finally {
  console.log(`  Stopping server...`)

  // Kill with fallback
  server.kill('SIGTERM')
  await sleep(2000)

  // Force kill if still running
  if (!server.killed) {
    console.log(`  Force killing server...`)
    server.kill('SIGKILL')
    await sleep(1000)
  }
}
// Performance comparison
if (sorted.length >= 2) {
  console.log('\nPerformance Analysis:')
  const best = sorted[0]
  sorted.slice(1).forEach(r => {
    const diff = ((best.metrics.requestsPerSecond - r.metrics.requestsPerSecond) / r.metrics.requestsPerSecond * 100).toFixed(1)
    console.log(`  ${best.config.name} is ${diff}% faster than ${r.config.name}`)
  })
}
Protect against division by zero in performance comparison.
If a framework has zero requests per second (e.g., due to errors), line 205 will produce Infinity or NaN in the percentage calculation.
Add validation:
if (sorted.length >= 2) {
console.log('\nPerformance Analysis:')
const best = sorted[0]
+ if (best.metrics.requestsPerSecond === 0) {
+ console.log(' Unable to compare: best framework had no successful requests')
+ return
+ }
sorted.slice(1).forEach(r => {
+ if (r.metrics.requestsPerSecond === 0) {
+ console.log(` ${r.config.name} had no successful requests`)
+ return
+ }
const diff = ((best.metrics.requestsPerSecond - r.metrics.requestsPerSecond) / r.metrics.requestsPerSecond * 100).toFixed(1)
console.log(` ${best.config.name} is ${diff}% faster than ${r.config.name}`)
})
// Performance comparison
if (sorted.length >= 2) {
  console.log('\nPerformance Analysis:')
  const best = sorted[0]
  if (best.metrics.requestsPerSecond === 0) {
    console.log('  Unable to compare: best framework had no successful requests')
    return
  }
  sorted.slice(1).forEach(r => {
    if (r.metrics.requestsPerSecond === 0) {
      console.log(`  ${r.config.name} had no successful requests`)
      return
    }
    const diff = ((best.metrics.requestsPerSecond - r.metrics.requestsPerSecond) / r.metrics.requestsPerSecond * 100).toFixed(1)
    console.log(`  ${best.config.name} is ${diff}% faster than ${r.config.name}`)
  })
}
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/runners/run-stress-test.ts around lines 200 to
208, the percentage calculation uses r.metrics.requestsPerSecond as the
denominator and can produce Infinity/NaN when that value is zero; update the
loop to validate the denominator before computing the diff — if
r.metrics.requestsPerSecond is 0 or not a finite number, avoid the division and
print a clear fallback (e.g., "N/A" or "no requests") or skip the comparison;
otherwise compute the percentage as before, making sure to coerce values to
numbers and format to one decimal place.
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const resultsFile = `results/stress-test-autocannon-${timestamp}.json`
writeFileSync(resultsFile, JSON.stringify(results, null, 2))
console.log(`\nResults saved to ${resultsFile}`)
Add error handling for file operations.
writeFileSync can fail due to permissions, disk space, or other I/O errors, which would crash the script after all tests completed successfully.
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const resultsFile = `results/stress-test-autocannon-${timestamp}.json`
- writeFileSync(resultsFile, JSON.stringify(results, null, 2))
- console.log(`\nResults saved to ${resultsFile}`)
+ try {
+ writeFileSync(resultsFile, JSON.stringify(results, null, 2))
+ console.log(`\nResults saved to ${resultsFile}`)
+ } catch (err) {
+ console.error(`\nFailed to save results: ${err}`)
+ console.log('\nResults:', JSON.stringify(results, null, 2))
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const resultsFile = `results/stress-test-autocannon-${timestamp}.json`
try {
  writeFileSync(resultsFile, JSON.stringify(results, null, 2))
  console.log(`\nResults saved to ${resultsFile}`)
} catch (err) {
  console.error(`\nFailed to save results: ${err}`)
  console.log('\nResults:', JSON.stringify(results, null, 2))
}
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/runners/run-stress-test.ts around lines 237 to
240, the call to writeFileSync can throw and currently will crash the script
without a clear error path; wrap the file write in a try/catch, ensure the
results directory exists (create it with fs.mkdirSync(resultsDir, { recursive:
true }) if missing), catch and log the full error details (console.error with
the error object and a descriptive message), and handle failure cleanly (e.g.,
exit with a non-zero code or return a failed status) so the process doesn't
crash silently.
app.post('/api/orders', async (req, res) => {
  const { userId, items } = req.body as { userId: string; items: { productId: string; quantity: number }[] }

  // Validate stock and calculate total
  let total = 0
  const productIds = items.map(i => i.productId)
  const products = await prisma.product.findMany({
    where: { id: { in: productIds } }
  })

  const productMap = new Map(products.map(p => [p.id, p]))

  for (const item of items) {
    const product = productMap.get(item.productId)
    if (!product) {
      return res.status(400).json({ error: `Product ${item.productId} not found` })
    }
    if (product.stock < item.quantity) {
      return res.status(400).json({ error: `Insufficient stock for ${product.name}` })
    }
    total += product.price * item.quantity
  }
Missing input validation on request body.
The type assertion on line 134 doesn't validate the actual structure of req.body. If items is undefined or not an array, items.map() on line 138 will throw. Consider adding basic validation:
app.post('/api/orders', async (req, res) => {
const { userId, items } = req.body as { userId: string; items: { productId: string; quantity: number }[] }
+
+ if (!userId || !Array.isArray(items) || items.length === 0) {
+ return res.status(400).json({ error: 'Invalid request: userId and items array required' })
+ }
app.post('/api/orders', async (req, res) => {
  const { userId, items } = req.body as { userId: string; items: { productId: string; quantity: number }[] }

  if (!userId || !Array.isArray(items) || items.length === 0) {
    return res.status(400).json({ error: 'Invalid request: userId and items array required' })
  }

  // Validate stock and calculate total
  let total = 0
  const productIds = items.map(i => i.productId)
  const products = await prisma.product.findMany({
    where: { id: { in: productIds } }
  })

  const productMap = new Map(products.map(p => [p.id, p]))

  for (const item of items) {
    const product = productMap.get(item.productId)
    if (!product) {
      return res.status(400).json({ error: `Product ${item.productId} not found` })
    }
    if (product.stock < item.quantity) {
      return res.status(400).json({ error: `Insufficient stock for ${product.name}` })
    }
    total += product.price * item.quantity
  }
🤖 Prompt for AI Agents
In packages/trpc-benchmarks/src/servers/express-native-server.ts around lines
133 to 154, the handler uses a type assertion for req.body but does not validate
it, so items.map can throw if items is missing or not an array; add explicit
validation before using items: check that req.body exists, that userId is a
non-empty string, that items is an array, and that each item has a string
productId and a numeric quantity > 0; if validation fails return
res.status(400).json with a clear error; only after validation proceed to build
productIds and the rest of the logic.
for (const item of items) {
  const product = productMap.get(item.productId)
  if (!product) {
    return res.status(400).json({ error: `Product ${item.productId} not found` })
  }
  if (product.stock < item.quantity) {
    return res.status(400).json({ error: `Insufficient stock for ${product.name}` })
  }
  total += product.price * item.quantity
}

// Create order with transaction
const order = await prisma.$transaction(async (tx) => {
  const newOrder = await tx.order.create({
    data: {
      userId,
      total,
      status: 'pending',
      items: {
        create: items.map(item => ({
          productId: item.productId,
          quantity: item.quantity,
          price: productMap.get(item.productId)!.price
        }))
      }
    },
    include: { items: true }
  })

  // Update stock
  for (const item of items) {
    await tx.product.update({
      where: { id: item.productId },
      data: { stock: { decrement: item.quantity } }
    })
  }

  return newOrder
})
TOCTOU race condition in stock validation.
Stock is validated outside the transaction (lines 145-154), but decremented inside it (lines 175-180). Under concurrent load, two requests could simultaneously pass validation for the last item in stock, both enter the transaction, and stock could go negative.
For accurate benchmarking of a realistic e-commerce flow, move the stock check inside the transaction:
- for (const item of items) {
- const product = productMap.get(item.productId)
- if (!product) {
- return res.status(400).json({ error: `Product ${item.productId} not found` })
- }
- if (product.stock < item.quantity) {
- return res.status(400).json({ error: `Insufficient stock for ${product.name}` })
- }
- total += product.price * item.quantity
- }
-
// Create order with transaction
const order = await prisma.$transaction(async (tx) => {
+ // Re-fetch and validate stock inside transaction
+ const products = await tx.product.findMany({
+ where: { id: { in: productIds } }
+ })
+ const productMap = new Map(products.map(p => [p.id, p]))
+
+ for (const item of items) {
+ const product = productMap.get(item.productId)
+ if (!product) {
+ throw new Error(`Product ${item.productId} not found`)
+ }
+ if (product.stock < item.quantity) {
+ throw new Error(`Insufficient stock for ${product.name}`)
+ }
+ total += product.price * item.quantity
+ }
+
const newOrder = await tx.order.create({Alternatively, use SELECT ... FOR UPDATE semantics if the database supports it, though SQLite doesn't support row-level locking in the same way.
Committable suggestion skipped: line range outside the PR's diff.
Stress test results with 200 concurrent connections for 30 seconds:
- Hono Native + Bun: 2208 req/s (fastest)
- Express Native + Node: 1362 req/s (-38% vs Hono)
- Express + tRPC + Bun: 708 req/s (-68% vs Hono)

Key findings:
- Hono is 62% faster than Express Native
- tRPC overhead: Express Native is ~2x faster than Express+tRPC
- Bun runtime with native routes is the clear performance winner

Added:
- express-trpc-bun-server.ts - Express + tRPC running on Bun
- Updated run-stress-test.ts to support tRPC endpoints
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/trpc-benchmarks/src/servers/express-trpc-bun-server.ts (2)
8-9: Consider preserving explicit PORT=0 and avoiding || fallback quirks

const PORT = Number(process.env.PORT) || 3008 will treat PORT=0 and any non-numeric but truthy value as the default 3008. If you ever want to allow PORT=0 (ephemeral port) or distinguish between "not set" and "invalid", consider a slightly stricter expression:

-const PORT = Number(process.env.PORT) || 3008
+const envPort = process.env.PORT
+const PORT = envPort !== undefined ? Number(envPort) : 3008

This keeps 0 if explicitly configured while still defaulting when the env var is absent.
43-54: Make shutdown idempotent and avoid cutting off in-flight requests

Right now, the shutdown handler:
- Disconnects the DB client before closing the HTTP server, which can interrupt in‑flight requests.
- Can be invoked multiple times (SIGINT/SIGTERM) without any guard, potentially triggering multiple disconnect/close calls.
For slightly more robust behavior, you could:
- Add a simple guard flag to make shutdown idempotent.
- Close the HTTP server first, then disconnect the DB client in the server.close callback.

For example:

-// Graceful shutdown
-const shutdown = async () => {
-  console.log('[Express+tRPC+Bun+SQLite] Shutting down...')
-  await prisma.$disconnect()
-  server.close(() => {
-    console.log('[Express+tRPC+Bun+SQLite] Server closed')
-    process.exit(0)
-  })
-}
-
-process.on('SIGTERM', shutdown)
-process.on('SIGINT', shutdown)
+let isShuttingDown = false
+
+// Graceful shutdown
+const shutdown = () => {
+  if (isShuttingDown) return
+  isShuttingDown = true
+  console.log('[Express+tRPC+Bun+SQLite] Shutting down...')
+  server.close(async () => {
+    console.log('[Express+tRPC+Bun+SQLite] Server closed')
+    try {
+      await prisma.$disconnect()
+    } finally {
+      process.exit(0)
+    }
+  })
+}
+
+process.on('SIGTERM', shutdown)
+process.on('SIGINT', shutdown)

Not critical for pure benchmarking, but it will behave better if you re-use this server in other contexts.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- packages/trpc-benchmarks/results/stress-test-autocannon-2025-12-04T18-54-42-380Z.json (1 hunks)
- packages/trpc-benchmarks/src/runners/run-stress-test.ts (1 hunks)
- packages/trpc-benchmarks/src/servers/express-trpc-bun-server.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/trpc-benchmarks/results/stress-test-autocannon-2025-12-04T18-54-42-380Z.json
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/trpc-benchmarks/src/runners/run-stress-test.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (1)
packages/trpc-benchmarks/src/servers/express-trpc-bun-server.ts (1)
10-41: Core wiring (health + /trpc + logging) looks solid for benchmark use

The minimal middleware, separate /health endpoint, and /trpc mounting with a shared DB context are all appropriate for a benchmark harness, and the startup logs give enough detail to distinguish this variant at runtime.
New benchmark package comparing tRPC API performance across different
server frameworks and runtimes:
Includes realistic e-commerce benchmark with SQLite/Prisma for DB ops,
testing product listings, searches, order creation, and dashboard analytics.
Uses k6 for load testing with ramping VU scenarios.
Results show Hono+Bun ~16% faster than Express+Node, with Elysia
performing similarly to Express in DB-bound scenarios.
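The ramping-VU scenario mentioned above is expressed in k6 as an options object with staged virtual-user targets. The sketch below is illustrative only: the stage durations, targets, and thresholds are assumptions, not this package's actual k6 configuration, and in a real k6 script this object is exported alongside a default function that issues the HTTP requests via k6's http module.

```typescript
// Hypothetical k6-style options: ramp VUs up, hold, then ramp down.
// Numbers here are placeholders, not the repo's real scenario values.
const options = {
  thresholds: {
    http_req_failed: ['rate<0.01'],   // the documented <1% error budget
    http_req_duration: ['p(95)<500'], // assumed latency target in ms
  },
  scenarios: {
    ramping: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '30s', target: 20 },  // warm up
        { duration: '1m', target: 100 },  // ramp to peak load
        { duration: '30s', target: 0 },   // ramp down
      ],
    },
  },
}

console.log(options.scenarios.ramping.stages.length) // 3
```

Ramping stages like these are what let a benchmark distinguish "fast at low concurrency" from "degrades gracefully at peak", which is the comparison the results above are drawing.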
Summary by CodeRabbit
New Features
Tools
Data
Documentation
Chores