PR description: Fix /requests page lag/freeze (WS load shedding + frontend re-render suppression + index optimization) #246
SurviveM merged 5 commits into awsl-project:main from
Conversation
api/admin/requests?limit=100 — the backend is painfully slow
📝 Walkthrough
Adds a pre-broadcast data sanitization function and uses it on the broadcast paths; the backend introduces 200ms throttling and batching to coalesce realtime adapter events and snapshot the data; the WebSocket Hub switches to non-blocking enqueue and snapshots before sending; SQLite gains a provider_id index migration plus tests; the frontend centralizes the cooldown subscription, batches request/attempt updates with memoization, and adds request-navigation changes. Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Executor as Executor (event producer)
    participant Broadcaster as Broadcaster (sanitize/snapshot)
    participant WSHub as WebSocketHub (non-blocking enqueue)
    participant Clients as ConnectedClients
    Client->>Executor: ProxyUpstreamAttempt / events (high concurrency)
    Executor->>Executor: Cache/merge updates and mark dirty
    Note right of Executor: 200ms ticker batches the merges
    Executor->>Executor: Ticker or shutdown triggers flush (creates snapshot)
    Executor->>Broadcaster: Send pending broadcast snapshot
    Broadcaster->>Broadcaster: Call Sanitize and create JSON snapshot
    Broadcaster-->>WSHub: Non-blocking enqueue of broadcast message (may be dropped)
    WSHub->>WSHub: RLock to collect active clients, then unlock
    WSHub->>Clients: WriteJSON to each client (write deadline set, failed connections cleaned up)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 2
🧹 Nitpick comments (6)
internal/event/broadcaster.go (1)
1-1: The file starts with a BOM (byte order mark) character.
There is an invisible UTF-8 BOM (\xEF\xBB\xBF) before package event. The Go compiler tolerates it, but it is not idiomatic Go and can trip up parts of the toolchain (diff, linters). Recommend removing it uniformly from this file and from any other BOM-prefixed files in this PR. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/event/broadcaster.go` at line 1, The file contains a UTF-8 BOM before the package declaration ("package event") which should be removed; open internal/event/broadcaster.go and delete the leading BOM bytes (0xEF 0xBB 0xBF) so the file starts exactly with "package event", save as UTF-8 without BOM, and repeat for any other files in this PR that include a BOM (or run a repo-wide tool to strip BOMs) to avoid tooling/diff issues.
internal/event/wails_broadcaster_desktop.go (1)
1-1: Remove the UTF-8 BOM at the start of the file for consistency. The file does begin with a BOM character. Although the Go compiler explicitly supports and correctly handles a UTF-8 BOM at the start of a file (behavior verified in the Go toolchain's conformance tests), removing it is the cleaner convention and keeps platforms and tools consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/event/wails_broadcaster_desktop.go` at line 1, the file starts with a UTF-8 BOM that should be removed per convention: open internal/event/wails_broadcaster_desktop.go, delete the leading BOM character (U+FEFF), save as plain UTF-8 (no BOM), then confirm the package/top-level declarations (e.g., package ...) now start at the first character of the file and the build passes.
internal/handler/websocket.go (2)
104-108: The non-blocking send (select + default) strategy matches the PR's goal, but silent drops can be hard to notice in production. When the h.broadcast channel (buffer 100) is full, messages are silently dropped. Under high pressure this is a reasonable load-shedding strategy, but consider at least emitting a debug log/metric or incrementing a counter so operators can monitor the message drop rate. 💡 Suggestion: add a drop counter/log
Using BroadcastProxyRequest as an example:

```diff
 select {
 case h.broadcast <- msg:
 default:
-	// Drop when the channel is full, to avoid blocking the executor/request path
+	// Drop when the channel is full, to avoid blocking the executor/request path
+	log.Printf("[WARN] broadcast channel full, dropping %s message", msg.Type)
 }
```

In production, prefer a prometheus counter or structured logging so that under sustained saturation the logging itself does not become a bottleneck.
Also applies to: 117-120, 129-132, 141-144
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 104 - 108, The non-blocking send to h.broadcast (used for BroadcastProxyRequest and similar messages) currently silently drops messages when the buffered channel is full; update each select default branch (the instances around h.broadcast sends) to record the drop—either increment a drop counter/metric (preferred: Prometheus counter) or emit a debug/structured log that includes the message type and any request ID—so operators can monitor drop rates without blocking the hot path; reference the send sites using h.broadcast and the BroadcastProxyRequest message type and add the metric/log increment in each default branch (also apply the same change to the other select blocks noted at the other occurrences).
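As a rough, self-contained sketch of the drop counting suggested above — `Hub` and `tryEnqueue` are illustrative names, not the PR's actual identifiers:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Hub is a hypothetical stand-in for the PR's WebSocketHub.
type Hub struct {
	broadcast    chan string
	droppedTotal atomic.Uint64
}

// tryEnqueue performs a non-blocking send; when the buffered channel is
// full it drops the message and counts the drop instead of blocking the
// hot request path.
func (h *Hub) tryEnqueue(msg string) bool {
	select {
	case h.broadcast <- msg:
		return true
	default:
		n := h.droppedTotal.Add(1)
		// Sample the log so sustained saturation cannot flood the output.
		if n == 1 || n%100 == 0 {
			fmt.Printf("[WARN] broadcast channel full, %d messages dropped so far\n", n)
		}
		return false
	}
}

func main() {
	h := &Hub{broadcast: make(chan string, 2)} // no consumer: fills after 2 sends
	for i := 0; i < 5; i++ {
		h.tryEnqueue("update")
	}
	fmt.Println("dropped:", h.droppedTotal.Load()) // dropped: 3
}
```

A Prometheus counter could replace the `atomic.Uint64` with the same non-blocking shape; the `select`/`default` is the part that protects the hot path.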
54-58: WriteJSON writes to all clients serially — a slow client delays message delivery to subsequent clients. The current run() is a single goroutine that iterates over all clients calling WriteJSON. If one client's network is slow (e.g., its TCP buffer fills while no write deadline is set), that WriteJSON call can block for a long time, delaying messages to every other client. Consider calling SetWriteDeadline per connection, or maintaining a dedicated write goroutine + send queue per client (per-client fan-out) to isolate clients from one another. At the current scale, with few clients, this is not urgent. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 54 - 58, The run() loop is writing to all clients sequentially with client.WriteJSON, so a slow client blocks others; change to per-client isolated writes by either setting a write deadline before each WriteJSON (use client.SetWriteDeadline) and handling timeout errors when writing, or implement a per-client send queue and a dedicated write goroutine for each connection (create a buffered channel per client, send messages into that channel from run(), and have the goroutine read the channel and call WriteJSON; on write errors close the connection and append to toRemove). Ensure to reference the existing clients slice, the run() function, client.WriteJSON calls, and the toRemove handling so slow/failed clients are removed cleanly.
web/src/pages/requests/index.tsx (1)
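A minimal sketch of the per-client send-queue idea, using plain channels and a write callback in place of gorilla/websocket — the `client` type, queue size, and function names are assumptions for illustration:

```go
package main

import "fmt"

// client is a hypothetical connection wrapper: each client owns a buffered
// send queue and a dedicated writer goroutine, so one slow client only
// fills its own queue instead of stalling the hub's broadcast loop.
type client struct {
	send chan string
	done chan struct{}
}

func newClient(write func(string) error) *client {
	c := &client{send: make(chan string, 16), done: make(chan struct{})}
	go func() {
		defer close(c.done)
		for msg := range c.send {
			if err := write(msg); err != nil {
				return // real code: close the connection and remove it from the hub
			}
		}
	}()
	return c
}

// broadcast fans out without blocking the hub: a full per-client queue
// drops the message for that client only.
func broadcast(clients []*client, msg string) {
	for _, c := range clients {
		select {
		case c.send <- msg: // fast path: queue has room
		default: // this client's queue is full; drop for this client only
		}
	}
}

func main() {
	var got []string
	c := newClient(func(m string) error {
		got = append(got, m)
		return nil
	})
	broadcast([]*client{c}, "hello")
	close(c.send)
	<-c.done // channel-close ordering makes the append visible here
	fmt.Println(got)
}
```

In a real hub the `write` callback would wrap `SetWriteDeadline` plus `WriteJSON`, preserving gorilla/websocket's one-concurrent-writer rule because each connection has exactly one writer goroutine.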
578: The useCallback inside LogRow is an optional defensive optimization. Since LogRow is already wrapped in memo, handleClick is only recreated when the component actually re-renders. The useCallback here mainly prevents `<TableRow onClick>` from triggering subtree re-renders via a changed function reference, but if TableRow has no memo/shouldComponentUpdate, the benefit of this layer is limited. It does not affect correctness; keeping it is fine. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` at line 578, The useCallback wrapper around handleClick inside LogRow is an unnecessary defensive optimization because LogRow is already wrapped with memo; remove the useCallback and replace the memoized handler with a plain function reference (i.e., make handleClick a simple arrow function that calls onOpenRequest(request.id)) so the code is simpler and behavior unchanged; keep references to LogRow, handleClick, onOpenRequest and note that TableRow may still re-render if it isn’t memoized.
internal/repository/sqlite/migrations.go (1)
63-66: The duplicate-index error check for MySQL is too broad; narrow the matched string.
strings.Contains(..., "duplicate") matches both MySQL error 1061 (ER_DUP_KEYNAME, "Duplicate key name", the expected case) and 1062 (ER_DUP_ENTRY, "Duplicate entry"); the latter is nearly impossible in a CREATE INDEX context but is still a potential false positive. Narrow the match to at least "duplicate key name", or check the MySQL driver's error code (1061) directly. ♻️ Suggestion: narrow the string match

```diff
-if err != nil && strings.Contains(strings.ToLower(err.Error()), "duplicate") {
+if err != nil && strings.Contains(strings.ToLower(err.Error()), "duplicate key name") {
```

If the project already imports github.com/go-sql-driver/mysql, the error code can be checked precisely:

```go
import "github.com/go-sql-driver/mysql"

var mysqlErr *mysql.MySQLError
if err != nil && errors.As(err, &mysqlErr) && mysqlErr.Number == 1061 {
	return nil
}
return err
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/repository/sqlite/migrations.go` around lines 63 - 66, The current error check in the db.Exec call that creates idx_proxy_requests_provider_id uses strings.Contains(err.Error(), "duplicate"), which is too broad; change the logic in function(s) handling the CREATE INDEX to either (1) narrow the string match to "duplicate key name" or (preferred) (2) import github.com/go-sql-driver/mysql and use errors.As to cast to *mysql.MySQLError and return nil only when mysqlErr.Number == 1061 (ER_DUP_KEYNAME); ensure you return the original err for all other cases and add the mysql import and errors package usage where db.Exec is invoked.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- internal/event/broadcaster.go
- internal/event/wails_broadcaster_desktop.go
- internal/executor/executor.go
- internal/handler/websocket.go
- internal/repository/sqlite/migrations.go
- internal/repository/sqlite/models.go
- web/src/contexts/cooldowns-context.tsx
- web/src/hooks/queries/use-providers.ts
- web/src/hooks/queries/use-requests.ts
- web/src/hooks/use-cooldowns.ts
- web/src/hooks/use-streaming.ts
- web/src/lib/cooldown-update-subscription.ts
- web/src/pages/requests/index.tsx
🧰 Additional context used
🧬 Code graph analysis (8)
web/src/hooks/use-streaming.ts (4)
- web/src/lib/transport/types.ts (1): ClientType (8-8)
- internal/domain/model.go (1): ClientType (9-9)
- web/src/lib/theme.ts (1): ClientType (26-26)
- web/src/lib/transport/index.ts (1): ClientType (8-8)

web/src/lib/cooldown-update-subscription.ts (1)
- web/src/lib/query-client.ts (1): queryClient (3-14)

internal/event/broadcaster.go (2)
- internal/domain/model.go (4): ProxyRequest (287-356), ProxyUpstreamAttempt (358-412), RequestInfo (274-279), ResponseInfo (280-284)
- internal/adapter/client/adapter.go (1): RequestInfo (17-20)

internal/repository/sqlite/migrations.go (2)
- internal/version/version.go (1): Version (6-6)
- internal/repository/sqlite/db.go (1): DB (17-20)

web/src/contexts/cooldowns-context.tsx (2)
- web/src/lib/cooldown-update-subscription.ts (1): subscribeCooldownUpdates (9-28)
- web/src/lib/query-client.ts (1): queryClient (3-14)

internal/handler/websocket.go (4)
- internal/event/broadcaster.go (2): SanitizeProxyRequestForBroadcast (28-40), SanitizeProxyUpstreamAttemptForBroadcast (43-55)
- web/src/lib/transport/types.ts (2): WSMessage (322-325), ProxyUpstreamAttempt (245-274)
- web/src/lib/transport/index.ts (2): WSMessage (39-39), ProxyUpstreamAttempt (28-28)
- internal/domain/model.go (1): ProxyUpstreamAttempt (358-412)

web/src/hooks/use-cooldowns.ts (2)
- web/src/lib/cooldown-update-subscription.ts (1): subscribeCooldownUpdates (9-28)
- web/src/lib/query-client.ts (1): queryClient (3-14)

internal/event/wails_broadcaster_desktop.go (1)
- internal/event/broadcaster.go (2): SanitizeProxyRequestForBroadcast (28-40), SanitizeProxyUpstreamAttemptForBroadcast (43-55)
🔇 Additional comments (19)
web/src/hooks/queries/use-providers.ts (1)
96-103: LGTM!
staleTime: 5000 is a sensible caching policy; combined with the WS-event-driven invalidateQueries, it reduces unnecessary requests while keeping data fresh.
web/src/hooks/use-streaming.ts (1)
108-147: LGTM! Moving the statistics computation into useMemo is the right optimization. The [activeRequests] dependency precisely tracks Map reference changes, so recomputation happens only when active requests actually change, avoiding the cost of rebuilding the Map and arrays on every render.
internal/event/broadcaster.go (1)
28-55: LGTM! The shallow-copy + nil-out-heavy-fields approach is simple and efficient. The two Sanitize functions are clearly structured: nil guard, fast return when already slim, shallow copy then clear the heavy fields. In a broadcast-serialization scenario this creates no shared-reference problems.
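The sanitize pattern praised above, reduced to a self-contained sketch — the structs here are cut-down stand-ins, not the project's actual domain model:

```go
package main

import "fmt"

// Cut-down stand-ins for the heavy broadcast payload.
type RequestInfo struct{ Body string }

type ProxyRequest struct {
	ID          string
	RequestInfo *RequestInfo // heavy field, stripped before broadcast
}

// sanitizeForBroadcast returns a lightweight snapshot: nil guard, fast
// path when already slim, otherwise shallow copy and drop heavy fields.
func sanitizeForBroadcast(r *ProxyRequest) *ProxyRequest {
	if r == nil {
		return nil
	}
	if r.RequestInfo == nil {
		return r // already slim: nothing to strip
	}
	copied := *r // shallow copy; remaining fields are value types
	copied.RequestInfo = nil
	return &copied
}

func main() {
	orig := &ProxyRequest{ID: "42", RequestInfo: &RequestInfo{Body: "huge"}}
	slim := sanitizeForBroadcast(orig)
	fmt.Println(slim.RequestInfo == nil, orig.RequestInfo != nil) // true true
}
```

The shallow copy is safe here precisely because every field other than the stripped pointers is a value type, which is the same condition the review calls out for ProxyUpstreamAttempt.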
internal/event/wails_broadcaster_desktop.go (1)
49-56: LGTM! The Sanitize placement is sensible. Slimming once in the WailsBroadcaster layer ensures both paths — the inner broadcaster (WebSocket) and the Wails Event — receive a lightweight payload. Even if the inner broadcaster calls Sanitize again internally, the nil fast-return makes that free.
web/src/lib/cooldown-update-subscription.ts (1)
1-29: LGTM! The singleton subscription pattern is well implemented. The reference-counted lifecycle management (a Set tracking active clients, releasing the transport listener when the last one leaves) is clean and correct. Set.add is naturally idempotent, so the double invocation under React Strict Mode causes no problems.
web/src/hooks/use-cooldowns.ts (1)
22-25: LGTM! Centralizing subscription management through subscribeCooldownUpdates, with Set<QueryClient> deduplication, effectively fixes the duplicate subscription when useCooldowns and CooldownsProvider are mounted at the same time.
web/src/contexts/cooldowns-context.tsx (1)
40-43: LGTM! Consistent with use-cooldowns.ts — both use the centralized subscription manager, and the cleanup logic is concise.
web/src/hooks/queries/use-requests.ts (4)
91-129: LGTM! The attempt batching/merge logic is correct. The flushAttempts implementation deserves credit:
- Array.from first, then clear(), avoiding mutation during iteration.
- Merging with ?? preserves the already-cached requestInfo/responseInfo, dovetailing with the backend Sanitize's null-out behavior.
- getObserversCount() is checked again before flushing, so even if the early check at subscribe time is stale, nothing is written incorrectly.
166-178: LGTM! The detail-cache merge protection is in place. It uses the same ?? merge strategy as flushAttempts, ensuring the slimmed WS messages never overwrite the requestInfo/responseInfo already loaded on the detail page.
131-139: flush may lag one cycle when only pending attempts exist. When pendingRequests is empty but pendingAttemptsByRequest is not, flush calls flushAttempts() directly and returns — that is correct. The end of the main flow (Line 354) also calls flushAttempts() unconditionally, but since Line 97 already clear()ed the map, the second call returns immediately on size === 0 and nothing is processed twice. The logic is sound; just confirming this design is intentional.
373-390: The observer check in the subscription callback and the double-check at flush time form good layered protection. A quick filter at subscribe time reduces useless data accumulation, and the precise check at flush time guards against observers disappearing within the 250ms window. The only minor cost is that a few attempts that are later discarded may accumulate within 250ms; the memory impact is negligible.
internal/executor/executor.go (2)
324-340: The throttle + flush snapshot design is sound. The 200ms aggregation window, together with the dirty flag and flush() snapshot broadcast, avoids broadcast storms in streaming scenarios while ensuring the last batch of dirty data is not dropped when the channel closes (line 346). SanitizeProxyUpstreamAttemptForBroadcast makes a shallow copy and then nils the pointer fields, which is safe for the current ProxyUpstreamAttempt struct (all remaining fields are value types).
342-390: Review withdrawn — the code correctly follows the synchronization protocol. The call site at internal/executor/middleware_dispatch.go:130 does follow the "wait for done to close, then read attempt" protocol:
- Line 131: starts the processAdapterEventsRealtime goroutine
- Line 145: runs the adapter synchronously (events are written via eventChan)
- Line 154: eventChan.Close() signals the goroutine to exit
- Line 155: <-eventDone blocks until the goroutine finishes (triggered by defer close(done))
- Line 157+: only here are attemptRecord's fields accessed
In addition, the flush() function creates a shallow copy via SanitizeProxyUpstreamAttemptForBroadcast(attempt) (copied := *attempt) and clears the request/response details before broadcasting, fully avoiding pointer aliasing. There is no data race. Likely an incorrect or invalid review comment.
web/src/pages/requests/index.tsx (4)
764-786: The memo equality function's logic is sound and achieves the goal that completed rows do not re-render with nowMs. The custom arePropsEqual first rules out request changes via reference comparison, then compares only nowMs for PENDING/IN_PROGRESS rows and skips completed rows entirely, matching the re-render suppression strategy in the PR description. One small suggestion: given prev.request === next.request, the condition prevPending || nextPending is actually equivalent to prevPending && nextPending; writing && would be semantically clearer, though correctness is unaffected.
229-236: Mobile rows lack nowMs / live duration display — please confirm this is intentional. The desktop MemoLogRow receives nowMs to provide a live timer for in-progress requests, but MemoMobileRequestCard does not receive nowMs and only shows request.duration. That means on mobile, the duration of PENDING/IN_PROGRESS requests does not update in real time. If this is the expected behavior (mobile doesn't need a live stopwatch), ignore this.
856-864: MemoMobileRequestCard's equality function is correct and concise. The three reference comparisons cover all of the component's props, and onOpenRequest is already stabilized via useCallback, so no unnecessary re-renders result.
171-176: handleOpenRequest has a stable reference ✓ useCallback with the [navigate] dependency is correct. React Router v7's navigate is referentially stable over the component lifecycle, so the callback is not rebuilt frequently and works well with the downstream memo.
internal/handler/websocket.go (1)
43-69: The two-phase locking rework of run() is correct and removes the risk of blocking while "writing to the network under the lock". RLock to snapshot the client list, then write lock-free, then WLock to clean up failed connections — the single goroutine preserves Gorilla WebSocket's "one concurrent writer" constraint. HandleWebSocket's defer may Close the connection first and cause WriteJSON to fail, but the error path handles that correctly (a second Close is idempotent for Gorilla).
internal/repository/sqlite/models.go (1)
225-225: LGTM — the gorm:"index" tag matches the migration script's naming exactly. The default index name GORM generates for this field is idx_proxy_requests_provider_id (i.e., idx_{tableName}_{columnName}), fully aligned with the SQL in the migrations.go v2 migration. Regardless of the execution order of AutoMigrate and RunMigrations, both paths are idempotent, so there is no conflict risk.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/handler/websocket.go`:
- Around line 98-121: Broadcast currently sends potentially-mutable original
pointers because event.SanitizeProxyUpstreamAttemptForBroadcast /
SanitizeProxyRequestForBroadcast may return the same pointer when
RequestInfo/ResponseInfo are nil, causing data races when
executor.processAdapterEventsRealtime modifies the object after enqueueing;
change WebSocketHub.BroadcastProxyRequest and
WebSocketHub.BroadcastProxyUpstreamAttempt to always make a shallow copy
(snapshot) of the sanitized object (copy the struct value into a new variable
and use that for WSMessage.Data) before sending to h.broadcast so the async
consumer/WriteJSON reads an immutable snapshot rather than the original mutable
object.
In `@internal/repository/sqlite/migrations.go`:
- Around line 75-77: The Down path silently ignores errors from the DROP INDEX
statement (the call to db.Exec("DROP INDEX idx_proxy_requests_provider_id ON
proxy_requests").Error inside rollbackMigration), which can leave
schema_migrations out of sync; change the code to capture the error (err :=
db.Exec(...).Error) and if err != nil emit a warning log that includes the
migration id/version, the SQL being run and the actual error (using the existing
logger used elsewhere), then continue returning nil so the migration record is
removed but the failure is visible for debugging.
---
Nitpick comments:
In `@internal/event/broadcaster.go`:
- Line 1: The file contains a UTF-8 BOM before the package declaration ("package
event") which should be removed; open internal/event/broadcaster.go and delete
the leading BOM bytes (0xEF 0xBB 0xBF) so the file starts exactly with "package
event", save as UTF-8 without BOM, and repeat for any other files in this PR
that include a BOM (or run a repo-wide tool to strip BOMs) to avoid tooling/diff
issues.
In `@internal/event/wails_broadcaster_desktop.go`:
- Line 1: The file starts with a UTF-8 BOM that should be removed per convention: open
internal/event/wails_broadcaster_desktop.go, delete the leading BOM character (U+FEFF), save as plain UTF-8 (no
BOM), then confirm the package/top-level declarations (e.g., package ... or any top-level declaration) now start at the first character of the file and the build passes.
In `@internal/handler/websocket.go`:
- Around line 104-108: The non-blocking send to h.broadcast (used for
BroadcastProxyRequest and similar messages) currently silently drops messages
when the buffered channel is full; update each select default branch (the
instances around h.broadcast sends) to record the drop—either increment a drop
counter/metric (preferred: Prometheus counter) or emit a debug/structured log
that includes the message type and any request ID—so operators can monitor drop
rates without blocking the hot path; reference the send sites using h.broadcast
and the BroadcastProxyRequest message type and add the metric/log increment in
each default branch (also apply the same change to the other select blocks noted
at the other occurrences).
- Around line 54-58: The run() loop is writing to all clients sequentially with
client.WriteJSON, so a slow client blocks others; change to per-client isolated
writes by either setting a write deadline before each WriteJSON (use
client.SetWriteDeadline) and handling timeout errors when writing, or implement
a per-client send queue and a dedicated write goroutine for each connection
(create a buffered channel per client, send messages into that channel from
run(), and have the goroutine read the channel and call WriteJSON; on write
errors close the connection and append to toRemove). Ensure to reference the
existing clients slice, the run() function, client.WriteJSON calls, and the
toRemove handling so slow/failed clients are removed cleanly.
In `@internal/repository/sqlite/migrations.go`:
- Around line 63-66: The current error check in the db.Exec call that creates
idx_proxy_requests_provider_id uses strings.Contains(err.Error(), "duplicate"),
which is too broad; change the logic in function(s) handling the CREATE INDEX to
either (1) narrow the string match to "duplicate key name" or (preferred) (2)
import github.com/go-sql-driver/mysql and use errors.As to cast to
*mysql.MySQLError and return nil only when mysqlErr.Number == 1061
(ER_DUP_KEYNAME); ensure you return the original err for all other cases and add
the mysql import and errors package usage where db.Exec is invoked.
In `@web/src/pages/requests/index.tsx`:
- Line 578: The useCallback wrapper around handleClick inside LogRow is an
unnecessary defensive optimization because LogRow is already wrapped with memo;
remove the useCallback and replace the memoized handler with a plain function
reference (i.e., make handleClick a simple arrow function that calls
onOpenRequest(request.id)) so the code is simpler and behavior unchanged; keep
references to LogRow, handleClick, onOpenRequest and note that TableRow may
still re-render if it isn’t memoized.
Actionable comments posted: 3
🧹 Nitpick comments (8)
internal/handler/websocket.go (2)
51-78: Sequential fan-out may amplify latency when there are many clients. The current implementation calls WriteJSON serially for each client, so a single message's worst-case broadcast latency is O(n × writeTimeout). If this is only an internal admin panel (few clients), that's acceptable; but if client counts grow, consider concurrent writes or capping the number of clients. On locking/races: RLock snapshot → release lock → lock-free writes → collect failed connections → batch delete under the write lock — the logic is correct and fixes the earlier race of deleting from the map under RLock. 👍
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 51 - 78, The current WebSocketHub.run serially calls SetWriteDeadline and WriteJSON for each client (using h.broadcast, h.clients and websocket.WriteJSON), causing O(n × writeTimeout) broadcast latency when clients grow; change run to perform writes concurrently: snapshot h.clients as you already do, then spawn a goroutine per client (or use a bounded worker pool/semaphore) that does SetWriteDeadline and WriteJSON, collect failures into a thread-safe slice/channel, wait with a sync.WaitGroup for all writes to finish, then acquire h.mu.Lock and remove failed connections from h.clients; ensure no map access happens while goroutines are writing and that removal still happens under the lock to avoid races.
80-95: The drop counter is a global monotonic increment, so occasional drops can be "silenced". broadcastDroppedTotal is never reset. If the system recovers after a brief burst of pressure, later sporadic drops put the counter at 2, 3, …99 — none of which trigger a log (the next line prints only at 100). For better observability, consider throttling by time window (e.g., log at most once every 10 seconds) or exposing a Prometheus counter or similar metric. The current implementation is sufficient for diagnosing large-scale storms; this is only a suggested improvement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 80 - 95, The broadcastDroppedTotal counter in tryEnqueueBroadcast is monotonically increasing so intermittent drops may be silent until the counter hits the next 100 boundary; change tryEnqueueBroadcast to rate-limit logging by time window instead of only by counter: keep a lastLogTime (e.g., lastBroadcastDropLog int64) on WebSocketHub and, when a drop occurs, increment broadcastDroppedTotal as now but also check the lastLogTime (use atomic load/store or a mutex) and only emit log.Printf if the lastLogTime is older than a configured window (e.g., 10s), then update lastLogTime to now; keep the existing counter-based sampling if you want, but add the time-window gate so occasional drops after recovery still produce an observable log.internal/handler/websocket_test.go (1)
86-123: The drop-log test logic is correct, but it mutates global log state. The test captures log output by mutating the global log.SetOutput, and the defer correctly restores the original state. Since in-package tests run sequentially by default, there is no concurrency problem today. But if someone later adds t.Parallel(), mutating global log state will make the test flaky. Consider noting this constraint in a comment, or injecting the logger into WebSocketHub for testability. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket_test.go` around lines 86 - 123, The test mutates the global log state (log.SetOutput/Flags/Prefix) which makes it flaky if tests run in parallel; instead, modify WebSocketHub to accept an injectable logger (add a Logger *log.Logger field to the WebSocketHub struct and use that logger inside BroadcastProxyRequest) and update TestWebSocketHub_BroadcastProxyRequest_LogsWhenDropped to construct the hub with a local log.New(&buf, "", 0) so the test captures output without touching global state; update references in BroadcastProxyRequest and any hub constructors to use the new Logger when present.internal/repository/sqlite/migrations.go (1)
90-101: The "error 1061" string fallback match has a theoretical prefix-substring false positive. Line 100:

```go
return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061")
```

"error 1061" is a substring of "error 10619", so an error message containing something like "error 10619: ..." would be misclassified as true. Although this scenario is extremely unlikely in a CREATE INDEX context, and this function is only called from the case "mysql": branch, a more precise boundary match removes the ambiguity. ♻️ Optional: use a more precise boundary match

```diff
-return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061")
+return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061 ") || strings.HasSuffix(lower, "error 1061")
```

Or use a regexp directly (if the team allows it):

```go
import "regexp"

var errCode1061Re = regexp.MustCompile(`\berror 1061\b`)

return strings.Contains(lower, "duplicate key name") || errCode1061Re.MatchString(lower)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/repository/sqlite/migrations.go` around lines 90 - 101, The fallback string match in isMySQLDuplicateIndexError uses strings.Contains(lower, "error 1061") which can falsely match longer codes like "error 10619"; update the fallback to perform a bounded match instead (e.g., check for "error 1061" with word/colon boundaries or use a precompiled regexp with \b around 1061) so only the exact error code 1061 is considered while keeping the existing "duplicate key name" check; modify the isMySQLDuplicateIndexError function to replace the loose strings.Contains call with this precise boundary match.internal/repository/sqlite/migrations_test.go (1)
10-22: Suggest adding edge cases for nil input and the string-prefix false positive. The current 4 cases cover the main branches, but two optional additions:
- nil input: the first line of isMySQLDuplicateIndexError handles nil, but no test covers it; adding one fully documents the behavior.
- String-prefix false positive (matching the "error 1061" substring issue above): if the maintainers adopt the precise-match suggestion, add a counterexample (e.g., "error 10619: something") to ensure the change does not misclassify. ✅ Optional: additional cases

```diff
+if isMySQLDuplicateIndexError(nil) {
+	t.Fatalf("expected false for nil error")
+}
+// Ensure "error 10619" is not misclassified as 1061
+if isMySQLDuplicateIndexError(errors.New("error 10619: something unrelated")) {
+	t.Fatalf("expected false for error code 10619 (should not match 1061 as substring)")
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/repository/sqlite/migrations_test.go` around lines 10 - 22, Add two additional assertions to TestIsMySQLDuplicateIndexError to cover the nil and string-prefix edge cases: call isMySQLDuplicateIndexError(nil) and assert it returns false, and add a negative string case (e.g., errors.New("Error 10619: ...") or similar) to assert it does not match as a duplicate-index error; update the test function TestIsMySQLDuplicateIndexError to include these two assertions so the behavior of isMySQLDuplicateIndexError for nil inputs and near-prefix numeric matches is explicitly verified.web/src/pages/requests/index.tsx (3)
778-782: nextPending is a redundant variable. By this point prev.request === next.request (the same object reference; if the references differed, line 767 would already have returned false), so prev.request.status === next.request.status always holds and prevPending === nextPending is always true. nextPending can be deleted and the condition can use prevPending directly. ♻️ Simplification

```diff
-  const prevPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
-  const nextPending = next.request.status === 'PENDING' || next.request.status === 'IN_PROGRESS';
-  if (prevPending || nextPending) {
+  const isPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
+  if (isPending) {
     return prev.nowMs === next.nowMs;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` around lines 778 - 782, 此处多定义了 nextPending 且其值与 prevPending 恒等(因为此处 prev.request === next.request 已为同一引用),请删除 nextPending 变量并在条件判断中直接使用 prevPending(即将 "if (prevPending || nextPending) { return prev.nowMs === next.nowMs; }" 简化为 "if (prevPending) { return prev.nowMs === next.nowMs; }"),保留对 prev.request/status 和 prev.nowMs 的现有引用以保持逻辑不变。
1-2: Merge the two React imports into one. Both lines import from 'react' and can be combined into a single statement, satisfying the no-duplicate-imports rule. ♻️ Suggested merge

```diff
-import { useState, useMemo, useRef, useEffect } from 'react';
-import { useCallback, memo } from 'react';
+import { useState, useMemo, useRef, useEffect, useCallback, memo } from 'react';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` around lines 1 - 2, Two separate imports from 'react' should be combined: replace the two import statements that bring in useState, useMemo, useRef, useEffect and useCallback, memo with a single import from 'react' that lists all six identifiers (useState, useMemo, useRef, useEffect, useCallback, memo) to satisfy the no-duplicate-imports rule and avoid duplicated module imports.
798-798: The useCallback in MobileRequestCard brings no real benefit and is inconsistent with LogRow. handleClick is only used as a DOM onClick handler and is not passed to any memoized child, so useCallback adds comparison overhead without any render optimization. LogRow (line 578) uses a plain arrow function; keeping the two consistent is preferable. ♻️ Align and simplify

```diff
-  const handleClick = useCallback(() => onOpenRequest(request.id), [onOpenRequest, request.id]);
+  const handleClick = () => onOpenRequest(request.id);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` at line 798, Remove the unnecessary useCallback wrapper around handleClick in MobileRequestCard: replace the memoized callback (useCallback(() => onOpenRequest(request.id), [...])) with a plain arrow function or inline handler (e.g., () => onOpenRequest(request.id)) to match LogRow's simple arrow style; update the declaration of handleClick (or the onClick prop) to call onOpenRequest(request.id) directly and remove the useCallback import/usage to avoid extra comparison overhead.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- internal/event/broadcaster.go
- internal/event/wails_broadcaster_desktop.go
- internal/handler/websocket.go
- internal/handler/websocket_test.go
- internal/repository/sqlite/migrations.go
- internal/repository/sqlite/migrations_test.go
- web/src/pages/requests/index.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- internal/event/wails_broadcaster_desktop.go
- internal/event/broadcaster.go
🧰 Additional context used
🧬 Code graph analysis (3)
internal/repository/sqlite/migrations.go (1)
- internal/repository/sqlite/db.go (1): DB (17-20)

internal/handler/websocket.go (3)
- web/src/lib/transport/types.ts (2): WSMessage (322-325), ProxyUpstreamAttempt (245-274)
- web/src/lib/transport/index.ts (2): WSMessage (39-39), ProxyUpstreamAttempt (28-28)
- internal/event/broadcaster.go (2): SanitizeProxyRequestForBroadcast (28-40), SanitizeProxyUpstreamAttemptForBroadcast (43-55)

internal/handler/websocket_test.go (1)
- internal/handler/websocket.go (2): WebSocketHub (31-38), WSMessage (26-29)
🔇 Additional comments (7)
internal/handler/websocket.go (4)
124-142: The snapshot semantics are correct and async consumption is safe. Whether or not SanitizeProxyRequestForBroadcast returns the original pointer, Line 130 always makes a shallow copy, and data = snapshot stores a value type in the interface{}, so the async consumer reads an independent copy. The race risk flagged in the earlier review has been fixed correctly.
144-166: Same snapshot pattern as BroadcastProxyRequest; implemented correctly.
62-68: client.Close() may be called twice (in run() and in HandleWebSocket's defer), but that is safe. After run() calls client.Close() on a write failure, ReadMessage in HandleWebSocket also returns an error, and the defer then calls conn.Close() again. gorilla/websocket's underlying net.Conn.Close() is idempotent and does not panic, so the current code is safe. No change needed.
35-45: The new constants and struct fields are reasonable. broadcastDroppedTotal uses atomic.Uint64, whose zero value is ready to use with no extra initialization; websocketWriteTimeout = 5s is a reasonable write timeout for WebSocket; and a channel buffer of 100 is sufficient for an admin-panel workload.
internal/handler/websocket_test.go (2)
12-48: The snapshot-semantics test is correct; the type switch covers both the value and pointer cases. In the current BroadcastProxyRequest implementation, data = snapshot (a value type), so the case *domain.ProxyRequest (Line 34) never fires under the current implementation. As a defensive test that's fine, but to reflect the implementation more precisely, you could keep only the case domain.ProxyRequest branch.
50-84: Same structure as the previous test, covering the attempt's snapshot semantics. LGTM.
internal/repository/sqlite/migrations.go (1)
103-142: LGTM — the new migration integrates correctly with the existing framework. RunMigrations runs AutoMigrate first and then runs migrations in ascending version order; Version 2's idempotent design (IF NOT EXISTS / isMySQLDuplicateIndexError) is fully compatible with the framework logic.
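The version-gated, idempotent runner described above can be reduced to a sketch; this stands in for the gorm-based RunMigrations, and the types here are illustrative assumptions:

```go
package main

import "fmt"

type migration struct {
	Version int
	Up      func() error
}

// runMigrations applies migrations in ascending version order, skipping
// any version already recorded, so re-running is idempotent.
func runMigrations(applied map[int]bool, migrations []migration) error {
	for _, m := range migrations {
		if applied[m.Version] {
			continue // already recorded: skip
		}
		if err := m.Up(); err != nil {
			return fmt.Errorf("migration v%d: %w", m.Version, err)
		}
		applied[m.Version] = true // real code: insert a schema_migrations row
	}
	return nil
}

func main() {
	var ran []int
	applied := map[int]bool{1: true} // v1 was applied on a previous run
	ms := []migration{
		{Version: 1, Up: func() error { ran = append(ran, 1); return nil }},
		{Version: 2, Up: func() error { ran = append(ran, 2); return nil }},
	}
	_ = runMigrations(applied, ms)
	fmt.Println(ran) // [2]
}
```

On MySQL, where DDL implicitly commits, the per-migration Up itself must also be idempotent (IF NOT EXISTS, duplicate-error tolerance), since the version record and the DDL cannot be made atomic — which is exactly the caveat the review appendix asks to document.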
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/handler/websocket.go`:
- Around line 168-184: BroadcastMessage currently enqueues the provided data
directly (via BroadcastMessage -> tryEnqueueBroadcast) which risks races if
callers pass mutable pointers; match the behavior of
BroadcastProxyRequest/BroadcastProxyUpstreamAttempt by snapshotting data before
enqueuing: detect common mutable types or, more generally, marshal the data to
JSON and unmarshal into an interface{} (or clone slices/maps/structs) to create
an immutable copy, then build WSMessage with that copy and call
tryEnqueueBroadcast; alternatively, if you choose caller responsibility, add a
clear comment on BroadcastMessage documenting that callers must pass
immutable/serialized data.
In `@internal/repository/sqlite/migrations.go`:
- Around line 57-88: The v2 migration uses CREATE/DROP INDEX which on MySQL
causes implicit commits so the surrounding gorm.Transaction in runMigration
cannot guarantee atomicity; update the migration (the Version:2 entry in
migrations.go, specifically the Up/Down funcs and/or nearby runMigration
comment) to add a clear note that MySQL DDL triggers implicit commits and that
m.Up(tx) + tx.Create(&SchemaMigration{...}) are not atomic on MySQL, reference
isMySQLDuplicateIndexError and SchemaMigration, and state that the behavior
relies on Up being idempotent (self-healing) rather than transactional rollback
so future maintainers understand this limitation.
In `@web/src/pages/requests/index.tsx`:
- Around line 764-786: MemoLogRow's comparator uses the index prop (in
LogRowProps) so when a new real-time request is unshifted all subsequent rows
get index+1 and the comparator returns false, causing full re-renders; remove
index from the props and comparator, switch the zebra styling from using the
index to a CSS nth-child rule on the TableRow (e.g., Tailwind
[&:nth-child(even)]/[odd]) and stop passing index into LogRow/LogRowProps and
MemoLogRow so the comparator no longer invalidates rows on inserts.
---
Nitpick comments:
In `@internal/handler/websocket_test.go`:
- Around line 86-123: The test mutates the global log state
(log.SetOutput/Flags/Prefix) which makes it flaky if tests run in parallel;
instead, modify WebSocketHub to accept an injectable logger (add a Logger
*log.Logger field to the WebSocketHub struct and use that logger inside
BroadcastProxyRequest) and update
TestWebSocketHub_BroadcastProxyRequest_LogsWhenDropped to construct the hub with
a local log.New(&buf, "", 0) so the test captures output without touching global
state; update references in BroadcastProxyRequest and any hub constructors to
use the new Logger when present.
In `@internal/handler/websocket.go`:
- Around line 51-78: The current WebSocketHub.run serially calls
SetWriteDeadline and WriteJSON for each client (using h.broadcast, h.clients and
websocket.WriteJSON), causing O(n × writeTimeout) broadcast latency when clients
grow; change run to perform writes concurrently: snapshot h.clients as you
already do, then spawn a goroutine per client (or use a bounded worker
pool/semaphore) that does SetWriteDeadline and WriteJSON, collect failures into
a thread-safe slice/channel, wait with a sync.WaitGroup for all writes to
finish, then acquire h.mu.Lock and remove failed connections from h.clients;
ensure no map access happens while goroutines are writing and that removal still
happens under the lock to avoid races.
- Around line 80-95: The broadcastDroppedTotal counter in tryEnqueueBroadcast is
monotonically increasing so intermittent drops may be silent until the counter
hits the next 100 boundary; change tryEnqueueBroadcast to rate-limit logging by
time window instead of only by counter: keep a lastLogTime (e.g.,
lastBroadcastDropLog int64) on WebSocketHub and, when a drop occurs, increment
broadcastDroppedTotal as now but also check the lastLogTime (use atomic
load/store or a mutex) and only emit log.Printf if the lastLogTime is older than
a configured window (e.g., 10s), then update lastLogTime to now; keep the
existing counter-based sampling if you want, but add the time-window gate so
occasional drops after recovery still produce an observable log.
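The concurrent fan-out described in the first prompt above (one goroutine per client with a bounded pool, failures collected thread-safely, removal under the lock after all writes finish) can be modeled compactly — `fakeConn`, `hub`, and `broadcast` are illustrative stand-ins for `*websocket.Conn` and the real `WebSocketHub`, not the handler's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"sort"
	"sync"
)

// fakeConn stands in for *websocket.Conn; only the error behaviour matters here.
type fakeConn struct {
	id   int
	fail bool
}

func (c *fakeConn) write(msg string) error {
	if c.fail {
		return errors.New("write failed")
	}
	return nil
}

type hub struct {
	mu      sync.RWMutex
	clients map[*fakeConn]bool
}

// broadcast snapshots the client set under RLock, writes concurrently through
// a bounded worker pool, waits, then removes failed connections under the lock.
func (h *hub) broadcast(msg string) []int {
	h.mu.RLock()
	clients := make([]*fakeConn, 0, len(h.clients))
	for c := range h.clients {
		clients = append(clients, c)
	}
	h.mu.RUnlock()

	var (
		wg       sync.WaitGroup
		failMu   sync.Mutex
		toRemove []*fakeConn
	)
	sem := make(chan struct{}, 4) // bounded worker pool
	for _, c := range clients {
		wg.Add(1)
		sem <- struct{}{}
		go func(c *fakeConn) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := c.write(msg); err != nil {
				failMu.Lock()
				toRemove = append(toRemove, c)
				failMu.Unlock()
			}
		}(c)
	}
	wg.Wait() // no map access until every write has finished

	h.mu.Lock()
	for _, c := range toRemove {
		delete(h.clients, c)
	}
	h.mu.Unlock()

	removed := make([]int, 0, len(toRemove))
	for _, c := range toRemove {
		removed = append(removed, c.id)
	}
	sort.Ints(removed)
	return removed
}

func main() {
	h := &hub{clients: map[*fakeConn]bool{}}
	for i := 1; i <= 5; i++ {
		h.clients[&fakeConn{id: i, fail: i%2 == 0}] = true // clients 2 and 4 fail
	}
	fmt.Println(h.broadcast("hello")) // failed clients are removed
	fmt.Println(len(h.clients))
}
```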
In `@internal/repository/sqlite/migrations_test.go`:
- Around line 10-22: Add two additional assertions to
TestIsMySQLDuplicateIndexError to cover the nil and string-prefix edge cases:
call isMySQLDuplicateIndexError(nil) and assert it returns false, and add a
negative string case (e.g., errors.New("Error 10619: ...") or similar) to assert
it does not match as a duplicate-index error; update the test function
TestIsMySQLDuplicateIndexError to include these two assertions so the behavior
of isMySQLDuplicateIndexError for nil inputs and near-prefix numeric matches is
explicitly verified.
In `@internal/repository/sqlite/migrations.go`:
- Around line 90-101: The fallback string match in isMySQLDuplicateIndexError
uses strings.Contains(lower, "error 1061") which can falsely match longer codes
like "error 10619"; update the fallback to perform a bounded match instead
(e.g., check for "error 1061" with word/colon boundaries or use a precompiled
regexp with \b around 1061) so only the exact error code 1061 is considered
while keeping the existing "duplicate key name" check; modify the
isMySQLDuplicateIndexError function to replace the loose strings.Contains call
with this precise boundary match.
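The bounded match suggested here can be sketched like this — `isDuplicateIndexMessage` is a hypothetical helper covering only the string fallback; the real function should still try `errors.As` against the driver's `*mysql.MySQLError` first:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// dupIndexPattern matches MySQL error code 1061 only as a whole token, so a
// longer code such as "error 10619" no longer slips through the substring check.
var dupIndexPattern = regexp.MustCompile(`\berror 1061\b`)

// isDuplicateIndexMessage sketches the tightened fallback string match.
func isDuplicateIndexMessage(msg string) bool {
	lower := strings.ToLower(msg)
	if strings.Contains(lower, "duplicate key name") {
		return true
	}
	return dupIndexPattern.MatchString(lower)
}

func main() {
	fmt.Println(isDuplicateIndexMessage("Error 1061: Duplicate key name 'idx_provider_id'")) // true
	fmt.Println(isDuplicateIndexMessage("Error 10619: something unrelated"))                 // false
}
```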
In `@web/src/pages/requests/index.tsx`:
- Around line 778-782: nextPending is defined redundantly here and its value is always identical to prevPending (because prev.request === next.request is already the same reference at this point); delete the nextPending variable and use prevPending directly in the condition (i.e. simplify "if (prevPending || nextPending) { return prev.nowMs === next.nowMs; }" to "if (prevPending) { return prev.nowMs === next.nowMs; }"), keeping the existing references to prev.request/status and prev.nowMs so the logic stays unchanged.
- Around line 1-2: Two separate imports from 'react' should be combined: replace
the two import statements that bring in useState, useMemo, useRef, useEffect and
useCallback, memo with a single import from 'react' that lists all six
identifiers (useState, useMemo, useRef, useEffect, useCallback, memo) to satisfy
the no-duplicate-imports rule and avoid duplicated module imports.
- Line 798: Remove the unnecessary useCallback wrapper around handleClick in
MobileRequestCard: replace the memoized callback (useCallback(() =>
onOpenRequest(request.id), [...])) with a plain arrow function or inline handler
(e.g., () => onOpenRequest(request.id)) to match LogRow's simple arrow style;
update the declaration of handleClick (or the onClick prop) to call
onOpenRequest(request.id) directly and remove the useCallback import/usage to
avoid extra comparison overhead.
🧹 Nitpick comments (8)
internal/handler/websocket.go (3)
81-96: The non-blocking enqueue plus throttled logging in tryEnqueueBroadcast is well designed.
select/default keeps the hot path from blocking, and the atomic counter with first-drop/every-100th logging strikes a reasonable balance between observability and performance. One minor suggestion: broadcastDroppedTotal is a cumulative counter that is never reset. If you want to tell "drops happened in the past" apart from "drops are still happening" after the system recovers, consider adding a sliding-window count or periodic reset logic. The impact on the current scenario is minimal, though.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 81 - 96, The current tryEnqueueBroadcast uses a cumulative counter broadcastDroppedTotal that never resets, making it hard to tell if drops are recent or historical; modify tryEnqueueBroadcast (or nearby metric collection logic) to maintain a short-term sliding/resettable counter (e.g., a time-windowed counter or a counter that is reset periodically) in addition to the cumulative broadcastDroppedTotal, and use that short-term counter for the "dropped_total" log/observability checks (while keeping the overall cumulative metric for long-term stats) so you can distinguish recent ongoing drops from historical ones when logging or alerting.
125-143: BroadcastProxyRequest/BroadcastProxyUpstreamAttempt use a different snapshot strategy from BroadcastMessage, leaving a performance optimization on the table.
BroadcastMessage pre-serializes into a json.RawMessage, so run()'s WriteJSON only has to write already-serialized bytes to each of the N clients. BroadcastProxyRequest/BroadcastProxyUpstreamAttempt, by contrast, incur one json.Marshal per client inside run() (called internally by WriteJSON). With many connected clients, pre-serialization avoids N redundant serializations. Consider adopting BroadcastMessage's json.RawMessage strategy uniformly.

♻️ Example (using BroadcastProxyRequest):

```diff
 func (h *WebSocketHub) BroadcastProxyRequest(req *domain.ProxyRequest) {
 	sanitized := event.SanitizeProxyRequestForBroadcast(req)
-	var data interface{} = sanitized
 	var meta string
 	if sanitized != nil {
-		// Always take a shallow-copy snapshot, whether or not Sanitize returns the original pointer
 		snapshot := *sanitized
-		data = snapshot
 		meta = "requestID=" + snapshot.RequestID
 		if snapshot.ID != 0 {
 			meta += " requestDbID=" + strconv.FormatUint(snapshot.ID, 10)
 		}
+		b, err := json.Marshal(snapshot)
+		if err != nil {
+			log.Printf("[WebSocket] drop broadcast message type=proxy_request_update: marshal snapshot failed: %v", err)
+			return
+		}
+		msg := WSMessage{
+			Type: "proxy_request_update",
+			Data: json.RawMessage(b),
+		}
+		h.tryEnqueueBroadcast(msg, meta)
+		return
 	}
 	msg := WSMessage{
 		Type: "proxy_request_update",
-		Data: data,
+		Data: nil,
 	}
 	h.tryEnqueueBroadcast(msg, meta)
 }
```

Also applies to: 145-167, 170-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 125 - 143, BroadcastProxyRequest and BroadcastProxyUpstreamAttempt currently pass Go structs to WSMessage so run()/WriteJSON ends up json.Marshal-ing per-client; change them to follow BroadcastMessage by taking a shallow snapshot (as you already do), json.Marshal that snapshot once into a []byte and set WSMessage.Data to json.RawMessage(theBytes) before calling tryEnqueueBroadcast so run() can write the already-serialized bytes to all clients; preserve the existing nil handling, meta construction (RequestID and ID), and the snapshot logic, and apply the same change to BroadcastProxyUpstreamAttempt and the other affected blocks so the pre-serialization strategy is consistent with BroadcastMessage.
52-68: run() writes the same message to multiple clients; this can be reduced to a single serialization.
WriteJSON(msg) currently performs one JSON serialization per client. Pre-serializing into a []byte outside the loop and writing with WriteMessage(websocket.TextMessage, data) avoids N redundant marshals:

♻️ Pre-serialization suggestion

```diff
 for msg := range h.broadcast {
+	data, err := json.Marshal(msg)
+	if err != nil {
+		log.Printf("[WebSocket] failed to marshal broadcast message type=%s: %v", msg.Type, err)
+		continue
+	}
+
 	h.mu.RLock()
 	clients := make([]*websocket.Conn, 0, len(h.clients))
 	for client := range h.clients {
 		clients = append(clients, client)
 	}
 	h.mu.RUnlock()
 	var toRemove []*websocket.Conn
 	for _, client := range clients {
 		_ = client.SetWriteDeadline(time.Now().Add(websocketWriteTimeout))
-		if err := client.WriteJSON(msg); err != nil {
+		if err := client.WriteMessage(websocket.TextMessage, data); err != nil {
 			_ = client.Close()
 			toRemove = append(toRemove, client)
 		}
 	}
```

This is complementary to the earlier suggestion of adopting json.RawMessage uniformly in BroadcastProxyRequest/BroadcastProxyUpstreamAttempt — if every enqueued message is already a pre-serialized json.RawMessage, this spot can be simplified further.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket.go` around lines 52 - 68, The run() loop currently calls client.WriteJSON(msg) for each connection which re-marshals the same msg repeatedly; change run() in WebSocketHub to pre-serialize the message once (e.g. json.Marshal or use existing json.RawMessage when available) into a []byte before iterating clients, then replace WriteJSON(msg) with client.WriteMessage(websocket.TextMessage, data) and keep the existing SetWriteDeadline(websocketWriteTimeout) and close/remove logic; this reduces repeated marshaling and is compatible with BroadcastProxyRequest/BroadcastProxyUpstreamAttempt when those use json.RawMessage.
internal/repository/sqlite/migrations.go (2)
125-142: RunMigrations skips applied migrations using getCurrentVersion, the highest executed version number, which risks skipping gaps.
getCurrentVersion returns MAX(version). If v1 and v3 have run but v2 has not (e.g. after manual intervention or a concurrency anomaly), currentVersion is 3 and v2 will be skipped forever. With only the v1 and v2 migrations today this cannot actually trigger, but it is worth watching as the number of migrations grows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/repository/sqlite/migrations.go` around lines 125 - 142, RunMigrations currently uses getCurrentVersion (MAX(version)) which can skip intermediate migrations; change the logic to fetch the set of applied migration versions (e.g. query the migrations table for all executed versions) and build a map of appliedVersions, then iterate sortedMigrations and call d.runMigration(m) only when m.Version is not present in appliedVersions; remove reliance on getCurrentVersion for skipping and use the appliedVersions map instead so missing intermediate migrations (e.g. v2 when v1 and v3 exist) will still execute.
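The applied-versions-map approach suggested here can be sketched as follows — migration and pendingMigrations are illustrative stand-ins, not the repository's actual types:

```go
package main

import (
	"fmt"
	"sort"
)

type migration struct {
	Version int
	Name    string
}

// pendingMigrations returns migrations not yet recorded as applied, in
// ascending version order — so a gap like applied={1,3} still runs v2,
// which a MAX(version) comparison would silently skip.
func pendingMigrations(all []migration, applied map[int]bool) []migration {
	var pending []migration
	for _, m := range all {
		if !applied[m.Version] {
			pending = append(pending, m)
		}
	}
	sort.Slice(pending, func(i, j int) bool { return pending[i].Version < pending[j].Version })
	return pending
}

func main() {
	all := []migration{{1, "init"}, {2, "idx_provider_id"}, {3, "future"}}
	applied := map[int]bool{1: true, 3: true} // v2 was missed
	for _, m := range pendingMigrations(all, applied) {
		fmt.Println(m.Version, m.Name)
	}
}
```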
90-101: The fallback string match in isMySQLDuplicateIndexError could be scoped more tightly.
The fallback currently matches both "duplicate key name" and "error 1061". As a substring, "error 1061" may accidentally match non-MySQL error messages that happen to embed that text (for example an error reference inside a log format). Since this function is only called on the MySQL dialect path, the real-world risk of a false match is low, and the current implementation is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/repository/sqlite/migrations.go` around lines 90 - 101, The fallback string check in isMySQLDuplicateIndexError is too broad because matching "error 1061" can falsely trigger on unrelated messages; update the fallback to only look for the specific MySQL phrase "duplicate key name" and/or a tighter pattern for the MySQL code (e.g., "1061" only when paired with "ER_DUP_KEYNAME" or "mysql" context) instead of the loose "error 1061" substring; retain the errors.As check against mysqlDriver.MySQLError and adjust the final strings.Contains calls in isMySQLDuplicateIndexError accordingly.
internal/handler/websocket_test.go (1)
87-124: Redirecting the global logger is risky if tests ever run in parallel.
The test captures log output by redirecting the global logger via log.SetOutput. t.Parallel() is not used today, but if someone adds it later this will race on global logger state. Consider adding a comment at the top of the test file or function explaining why this test must not run in parallel, or let WebSocketHub accept a custom logger so tests can inject one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/handler/websocket_test.go` around lines 87 - 124, The test mutates the global logger via log.SetOutput which races in parallel tests; either mark the test as non-parallel with a comment at the top of the test file explaining it must not be run with t.Parallel(), or better refactor WebSocketHub to accept a logger (e.g., add a logger field to WebSocketHub and use it inside BroadcastProxyRequest and any other logging call), update the test to create the hub with a test-specific logger that writes to an in-memory buffer (instead of calling log.SetOutput), and assert against that buffer; refer to WebSocketHub, BroadcastProxyRequest and WSMessage to locate the code to change.
web/src/pages/requests/index.tsx (2)
759-780: The comparator logic is correct; prevPending || nextPending can be simplified.
The overall logic holds: completed rows correctly skip re-rendering on nowMs ticks, pending rows update every second as nowMs changes, and status changes produce new object references from React Query, which is intercepted early at line 762. But by the time line 774 is reached, prev.request === next.request already holds (same object reference), so prev.request.status === next.request.status, meaning prevPending === nextPending and || is equivalent to &&. Simplifying to if (prevPending) improves readability:

♻️ Optional simplification

```diff
-  const prevPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
-  const nextPending = next.request.status === 'PENDING' || next.request.status === 'IN_PROGRESS';
-  if (prevPending || nextPending) {
-    return prev.nowMs === next.nowMs;
-  }
+  const isPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
+  if (isPending) {
+    return prev.nowMs === next.nowMs;
+  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` around lines 759 - 780, The comparator in MemoLogRow redundantly checks both prevPending and nextPending even though earlier the comparator already returned false if prev.request !== next.request, guaranteeing prev.request === next.request and therefore prevPending === nextPending; simplify the branch by replacing the condition "if (prevPending || nextPending)" with "if (prevPending)" and keep the existing early-return behavior that compares prev.nowMs === next.nowMs to decide re-rendering for pending rows (variables: MemoLogRow, prev.request/next.request, prevPending, nextPending, prev.nowMs/next.nowMs).
792-792: The useCallback inside MobileRequestCard brings no real benefit and is inconsistent with LogRow.
handleClick is only used as the onClick of a plain <div>, not passed to a memo-wrapped child, so the stability of the function reference has no effect on rendering. useCallback only adds dependency-comparison overhead here, while the equivalent code in LogRow (line 573) already correctly uses an inline arrow function.

♻️ Suggested removal of the redundant useCallback

```diff
- const handleClick = useCallback(() => onOpenRequest(request.id), [onOpenRequest, request.id]);
+ const handleClick = () => onOpenRequest(request.id);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/pages/requests/index.tsx` at line 792, The handleClick inside MobileRequestCard is unnecessarily wrapped with useCallback—remove the useCallback wrapper and replace the handler with an inline arrow function (e.g., onClick={() => onOpenRequest(request.id)}) like LogRow does; keep referencing onOpenRequest and request.id directly and delete the handleClick constant to eliminate unneeded dependency tracking and simplify the component.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
internal/handler/websocket.go
internal/handler/websocket_test.go
internal/repository/sqlite/migrations.go
web/src/pages/requests/index.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
internal/handler/websocket.go (3)
- web/src/lib/transport/index.ts (2)
  - WSMessage (39-39)
  - ProxyUpstreamAttempt (28-28)
- internal/event/broadcaster.go (2)
  - SanitizeProxyRequestForBroadcast (28-40)
  - SanitizeProxyUpstreamAttemptForBroadcast (43-55)
- internal/domain/model.go (1)
  - ProxyUpstreamAttempt (358-412)

internal/handler/websocket_test.go (1)
- internal/handler/websocket.go (2)
  - WebSocketHub (32-39)
  - WSMessage (27-30)

internal/repository/sqlite/migrations.go (1)
- internal/repository/sqlite/db.go (1)
  - DB (17-20)
🔇 Additional comments (7)
internal/repository/sqlite/migrations.go (1)
57-88: The v2 migration is cleanly implemented, with sound MySQL compatibility handling. The dialect branching is reasonable: MySQL achieves idempotency via isMySQLDuplicateIndexError, while other dialects use IF NOT EXISTS/IF EXISTS. The warning log on the Down path has also been added, all consistent with the suggestions from the previous review.
internal/handler/websocket_test.go (2)
13-49: Good coverage of the snapshot-semantics tests. Both the *domain.ProxyRequest and domain.ProxyRequest type assertion branches are handled, making the tests robust against implementation changes.
51-85: The attempt-snapshot and BroadcastMessage-snapshot tests are clearly structured and cover the core immutability guarantee. The test strategy (broadcast → mutate the original object → verify the snapshot in the channel is untainted) is concise and effective.
Also applies to: 126-158
internal/handler/websocket.go (3)
52-79: The run() refactor is correct, with sensible lock granularity and cleanup logic. RLock-snapshot the client list → release the lock → write per connection (with a deadline) → batch-remove failed connections. This avoids both problems of the old implementation: doing network I/O while holding the lock, and deleting from the map under RLock. SetWriteDeadline ensures slow clients cannot stall the broadcast loop.
125-167: The snapshot strategy in BroadcastProxyRequest/BroadcastProxyUpstreamAttempt is correct and resolves the data race flagged in the previous review. Whether or not Sanitize returns the original pointer, a shallow copy is always taken (snapshot := *sanitized) and stored in the interface{} as a value type, so async consumers are unaffected. The metadata string gives the drop log good traceability.
Note: the shallow copy is currently safe for domain.ProxyRequest/domain.ProxyUpstreamAttempt because Sanitize already nils the RequestInfo/ResponseInfo pointer fields. If the domain models later gain other mutable reference fields (slice/map/pointer), the shallow copy may no longer be sufficient and should be re-evaluated.
170-195: BroadcastMessage's JSON pre-serialization snapshot strategy is the right fix and resolves the issue from the previous review. The special case for json.RawMessage avoids double serialization, and marshal failures are logged with an early return. encoding/json embeds the raw bytes of a json.RawMessage directly during serialization, without extra escaping.
web/src/pages/requests/index.tsx (1)
576-608: Zebra striping now uses the CSS even: selector; the earlier issue is fixed. After removing the index prop, zebra striping is implemented with Tailwind's even:bg-* pseudo-class, which completely eliminates the full re-render caused by every row's index shifting by one when a new real-time request is unshifted. tailwind-merge underneath cn() correctly resolves the conflict between even:bg-foreground/[0.03] and the subsequent even:bg-red-500/25 / even:bg-amber-500/15 / even:bg-blue-500/10 (the latter overrides the former).
Background
Fixes in this PR (main gains)
High-frequency paths such as limit=100.
Behavior changes / caveats (trading some performance for stability)
Summary by CodeRabbit
Release notes
New features
Performance optimizations
Improvements
Tests