PR description: fix /requests page lag/freeze (WS load shedding + frontend re-render suppression + index optimization) #246

Merged
SurviveM merged 5 commits into awsl-project:main from luodaoyi:main
Feb 22, 2026

Conversation

@luodaoyi
Contributor

@luodaoyi luodaoyi commented Feb 22, 2026

Background

  • /requests became extremely slow to load as the number of in-flight (PENDING/IN_PROGRESS) entries grew, getting worse the longer the page stayed open, eventually freezing Chrome and sometimes the whole system.
  • Individual proxy_request_update / proxy_upstream_attempt_update messages were observed at 60KB+ (sometimes 200KB+), and high-frequency pushes caused a frontend parse/GC/re-render storm.

Fixes in this PR (main wins)

  1. WebSocket message slimming (the key change)
  • Realtime broadcasts no longer carry large fields such as requestInfo/responseInfo, significantly reducing WS frame size and frontend JSON-parsing/GC pressure.
  • Large fields needed by the detail page are now fetched through the existing detail API (realtime events carry only lightweight status/count updates).
  2. The WebSocket Hub no longer blocks the backend execution path
  • When the broadcast channel is full, messages are dropped instead of blocking, so a slow client can no longer stall an executor goroutine and drag down the whole request-processing pipeline.
  3. Throttled broadcasting of streaming/high-frequency adapter events
  • Realtime attempt updates are aggregated into a 200ms window before broadcasting, avoiding the "broadcast storm" triggered by per-token/metrics events during streaming responses.
  4. Stopping the /requests re-render storm on the frontend (P2)
  • Row components use memo and stable callbacks: the per-second nowMs tick now re-renders only "in-progress" rows; completed rows no longer follow a full-table refresh.
  • This removes the linear slowdown as the list stays open (from "whole table re-renders every second" to "only active rows refresh").
  5. Additional P1 frontend optimizations (no change in business semantics)
  • Deduplicated cooldown_update subscriptions: the same event was subscribed in two places, causing invalidateQueries(['cooldowns']) to fire twice.
  • Batched frontend cache writes for proxy_upstream_attempt_update: the same 250ms flush as request updates, avoiding per-message setQueryData calls.
  • Added staleTime to useAllProviderStats: avoids the duplicate "stale immediately on every mount, refetch" requests.
  • Wrapped the statistics computation in use-streaming.ts in useMemo: avoids rebuilding several Maps plus an O(n) traversal on every render.
  6. Database query optimization
  • Added an index on proxy_requests.provider_id with an explicit migration, so WHERE provider_id = ? ORDER BY id DESC LIMIT ... no longer degrades into a full table scan, improving hot paths such as /api/admin/requests?limit=100.

Behavior changes / caveats (trading a little latency for stability)

  • Under high system load or with slow-consuming clients, realtime WS messages may be dropped (this keeps the backend from being stalled and avoids a global freeze).
  • Attempt updates can lag by up to ~200ms (broadcast aggregation); the frontend attempts list can lag by up to ~250ms (batched flush).
  • Realtime WS messages no longer carry large fields, so detail views fetch them through the API; the frontend merges defensively so slimmed WS messages cannot overwrite already-loaded details and cause a blank flash.

Summary by CodeRabbit

Release notes

  • New features

    • Centralized subscription management for cooldown updates (a single subscribe/unsubscribe interface)
    • Navigation handler on the request list to open the request detail page directly
  • Performance

    • Throttling and batching for realtime broadcasts, reducing broadcast load under high concurrency
    • Database index on request data to speed up queries
    • Frontend batched merging and memoization, reducing render and network-update frequency
    • Optimized WebSocket broadcast path to reduce lock contention and blocking
  • Improvements

    • Realtime payloads are slimmed and snapshotted before broadcast, reducing message size and avoiding concurrent mutation
  • Tests

    • Unit tests for WebSocket and migrations covering snapshots and error detection

@coderabbitai

coderabbitai bot commented Feb 22, 2026

📝 Walkthrough

Adds a pre-broadcast data-sanitizing function used on the broadcast path; the backend gains 200ms throttling and batching that merges realtime adapter events and snapshots the data; the WebSocket Hub switches to non-blocking enqueue and snapshots before sending; SQLite gains a provider_id index migration with tests; the frontend centralizes cooldown subscriptions, batches and memoizes request/attempt handling, and adds request navigation.

Changes

Cohort / File(s) Summary
Broadcast sanitization
internal/event/broadcaster.go, internal/event/wails_broadcaster_desktop.go
Adds SanitizeProxyRequestForBroadcast and SanitizeProxyUpstreamAttemptForBroadcast and uses them on the desktop broadcast path to strip large fields and avoid data races during async sends.
Executor realtime throttling and batching
internal/executor/executor.go
Adds a 200ms ticker with dirty/flush snapshot logic in processAdapterEventsRealtime, merging adapter events in batches and broadcasting a snapshot on flush instead of broadcasting every event immediately.
Non-blocking WebSocket broadcast
internal/handler/websocket.go, internal/handler/websocket_test.go
Hub switches to non-blocking enqueue (tryEnqueueBroadcast), counting drops with rate-limited logging; writes collect clients first and then write outside the lock, handling write timeouts/errors; JSON snapshots are taken before broadcast to avoid races; unit tests cover snapshot and drop-logging behavior.
SQLite schema and tests
internal/repository/sqlite/migrations.go, internal/repository/sqlite/models.go, internal/repository/sqlite/migrations_test.go
A new migration version creates an index on proxy_requests.provider_id (with dialect-aware MySQL handling) and adds the GORM index tag on the model; a new test detects the MySQL duplicate-index error.
Centralized frontend cooldown subscription
web/src/lib/cooldown-update-subscription.ts, web/src/contexts/cooldowns-context.tsx, web/src/hooks/use-cooldowns.ts
Adds a subscribeCooldownUpdates(queryClient) singleton subscription manager, replacing the scattered manual subscribe/unsubscribe calls, using reference counting and lazy binding of transport events.
Frontend request/attempt batching and cache merging
web/src/hooks/queries/use-requests.ts
Introduces pendingAttemptsByRequest with a batched flush mechanism that aggregates ProxyUpstreamAttempt updates per request and merges them into the cache; refines detail merging so sparse updates do not overwrite already-loaded content.
Other frontend improvements
web/src/hooks/queries/use-providers.ts, web/src/hooks/use-streaming.ts, web/src/pages/requests/index.tsx
Sets staleTime: 5000 for provider stats; use-streaming memoizes the count computation with useMemo; the Requests page adds a navigation handler, switches component callbacks to onOpenRequest, and uses memo to reduce re-renders.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Executor as Executor (event source)
    participant Broadcaster as Broadcaster (sanitize/snapshot)
    participant WSHub as WebSocketHub (non-blocking enqueue)
    participant Clients as ConnectedClients

    Client->>Executor: produce ProxyUpstreamAttempt / events (high concurrency)
    Executor->>Executor: cache/merge updates and mark dirty
    note right of Executor: 200ms ticker batches the merges
    Executor->>Executor: ticker or shutdown triggers flush (creates snapshot)
    Executor->>Broadcaster: send the pending snapshot
    Broadcaster->>Broadcaster: call Sanitize and create a JSON snapshot
    Broadcaster-->>WSHub: non-blocking enqueue of the broadcast message (may be dropped)
    WSHub->>WSHub: RLock to collect active clients, then unlock
    WSHub->>Clients: WriteJSON per client (write deadline set, failed connections cleaned up)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • awsl233777
  • whhjdi
  • dreamhunter2333

Poem

🐰 A rabbit in the code, ears tuned to every event,
Heavy fields dropped on the sly, broadcasts light and spry,
Throttles gather into drizzle, WebSocket runs without a sound,
Memoized fronts keep the peace, an index lights the night.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 35.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly and accurately summarizes the main change — fixing the /requests page lag — with the three key optimizations in parentheses (WS load shedding, frontend re-render suppression, index optimization) describing the solution.


@luodaoyi
Contributor Author

Addresses the "frontend lag + backend slow query/blocking" risk on the realtime /requests list and /admin/requests?limit=100 under high-frequency / large-volume load: adds backend broadcast throttling/slimming plus query ordering/index optimizations, and reduces frontend cache-write and re-render pressure. (Includes one merge commit resolving conflicts with main.)

  • Fixed the SQLite list query being unable to use indexes effectively due to complex ordering on large tables, where even limit=100 could trigger a full-table sort (list ordering now uses the same stable id ordering as the cursor; the after case uses id ASC).
  • Fixed the realtime path triggering a "broadcast storm" under high-concurrency streaming (adapter events are merged and throttled before broadcasting instead of pushing every fine-grained event immediately).
  • Fixed oversized WebSocket broadcast payloads driving up frontend JSON.parse/GC pressure and visibly lagging the list page (ProxyRequest/Attempt are slimmed before broadcast, dropping the large requestInfo/responseInfo fields; details are fetched on demand through the API).
  • Fixed WebSocketHub performing network writes while holding the read lock, and deleting clients under RLock, a data-race/blocking risk (the connection list is copied before writing; failures are then cleaned up under the write lock).
  • Fixed the frontend realtime-update "cache-write memory/re-render storm" (WS updates are flushed in batches; attempts/detail are updated only when the query has observers; list rows/cards use memo + stable callbacks to cut unnecessary renders).
  • Fixed the missing index on the provider-filtered hot path (new proxy_requests(provider_id) index migration, plus gorm:"index" on the model).

Scope / affected files
  • internal/repository/sqlite/proxy_request.go
  • internal/repository/sqlite/migrations.go
  • internal/executor/executor.go
  • internal/handler/websocket.go
  • internal/event/broadcaster.go
  • web/src/hooks/queries/use-requests.ts
  • web/src/pages/requests/index.tsx


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
internal/event/broadcaster.go (1)

1-1: The file starts with a BOM (byte order mark).

There is an invisible UTF-8 BOM (\xEF\xBB\xBF) before package event. The Go compiler tolerates it, but it is not conventional Go and may affect some tooling (diff, linters). Recommend removing it from this file and from any other BOM-prefixed files in this PR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/event/broadcaster.go` at line 1, The file contains a UTF-8 BOM
before the package declaration ("package event") which should be removed; open
internal/event/broadcaster.go and delete the leading BOM bytes (0xEF 0xBB 0xBF)
so the file starts exactly with "package event", save as UTF-8 without BOM, and
repeat for any other files in this PR that include a BOM (or run a repo-wide
tool to strip BOMs) to avoid tooling/diff issues.
internal/event/wails_broadcaster_desktop.go (1)

1-1: Remove the UTF-8 BOM at the start of the file for consistency.

The file does begin with a BOM. The Go compiler explicitly supports and correctly handles a UTF-8 BOM at the start of a file (verified in the toolchain's conformance tests), but removing it is cleaner and keeps behavior consistent across platforms and tools.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/event/wails_broadcaster_desktop.go` at line 1, the file starts
with a UTF-8 BOM which should be removed: open
internal/event/wails_broadcaster_desktop.go, delete the leading BOM character
(U+FEFF), save as plain UTF-8 (no BOM), then confirm the package/top-level
declarations now start at the first character of the file and the build passes.
internal/handler/websocket.go (2)

104-108: The non-blocking send (select + default) matches the PR goal, but silent drops may be hard to notice in production.

When the h.broadcast channel (buffer 100) is full, messages are silently dropped. Under pressure this is a reasonable load-shedding strategy, but consider at least a debug log line or a metric counter so operators can monitor the drop rate.

💡 Suggestion: add a drop counter/log

Taking BroadcastProxyRequest as an example:

 select {
 case h.broadcast <- msg:
 default:
-    // Drop when the channel is full to avoid blocking the executor/request pipeline
+    // Drop when the channel is full to avoid blocking the executor/request pipeline
+    log.Printf("[WARN] broadcast channel full, dropping %s message", msg.Type)
 }

In production, prefer a Prometheus counter or structured logging so the log itself does not become a bottleneck under sustained overload.

Also applies to: 117-120, 129-132, 141-144

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 104 - 108, The non-blocking send
to h.broadcast (used for BroadcastProxyRequest and similar messages) currently
silently drops messages when the buffered channel is full; update each select
default branch (the instances around h.broadcast sends) to record the
drop—either increment a drop counter/metric (preferred: Prometheus counter) or
emit a debug/structured log that includes the message type and any request ID—so
operators can monitor drop rates without blocking the hot path; reference the
send sites using h.broadcast and the BroadcastProxyRequest message type and add
the metric/log increment in each default branch (also apply the same change to
the other select blocks noted at the other occurrences).

54-58: WriteJSON writes to all clients serially — a slow client delays delivery to the rest.

run() is a single goroutine that iterates over all clients and calls WriteJSON sequentially. If one client's network is slow (for example, the TCP buffer fills while no write deadline is set), that WriteJSON can block for a long time, delaying messages to every other client.

Consider setting SetWriteDeadline per connection, or maintaining a dedicated write goroutine + send queue per client (per-client fan-out) to isolate clients from each other. At the current scale, with few clients, this is not urgent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 54 - 58, The run() loop is
writing to all clients sequentially with client.WriteJSON, so a slow client
blocks others; change to per-client isolated writes by either setting a write
deadline before each WriteJSON (use client.SetWriteDeadline) and handling
timeout errors when writing, or implement a per-client send queue and a
dedicated write goroutine for each connection (create a buffered channel per
client, send messages into that channel from run(), and have the goroutine read
the channel and call WriteJSON; on write errors close the connection and append
to toRemove). Ensure to reference the existing clients slice, the run()
function, client.WriteJSON calls, and the toRemove handling so slow/failed
clients are removed cleanly.
web/src/pages/requests/index.tsx (1)

578-578: The useCallback inside LogRow is an optional defensive optimization.

Since LogRow is already wrapped in memo, handleClick is only recreated when the component actually re-renders. The useCallback here mainly prevents <TableRow onClick> from triggering subtree re-renders via a changed function reference, but if TableRow itself has no memo/shouldComponentUpdate, the benefit is limited. It does not affect correctness; keeping it is fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` at line 578, The useCallback wrapper around
handleClick inside LogRow is an unnecessary defensive optimization because
LogRow is already wrapped with memo; remove the useCallback and replace the
memoized handler with a plain function reference (i.e., make handleClick a
simple arrow function that calls onOpenRequest(request.id)) so the code is
simpler and behavior unchanged; keep references to LogRow, handleClick,
onOpenRequest and note that TableRow may still re-render if it isn’t memoized.
internal/repository/sqlite/migrations.go (1)

63-66: The MySQL duplicate-index error check is too broad; narrow the matched string

strings.Contains(..., "duplicate") matches both MySQL error 1061 (ER_DUP_KEYNAME, "Duplicate key name", the expected case) and 1062 (ER_DUP_ENTRY, "Duplicate entry"). The latter is nearly impossible in a CREATE INDEX context, but it is still a potential false positive. Narrow the match to at least "duplicate key name", or check the MySQL driver's error code (1061) directly.

♻️ Suggestion: narrow the string match
-if err != nil && strings.Contains(strings.ToLower(err.Error()), "duplicate") {
+if err != nil && strings.Contains(strings.ToLower(err.Error()), "duplicate key name") {

If the project already imports github.com/go-sql-driver/mysql, the error code can be checked precisely:

import "github.com/go-sql-driver/mysql"

var mysqlErr *mysql.MySQLError
if err != nil && errors.As(err, &mysqlErr) && mysqlErr.Number == 1061 {
    return nil
}
return err
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/repository/sqlite/migrations.go` around lines 63 - 66, The current
error check in the db.Exec call that creates idx_proxy_requests_provider_id uses
strings.Contains(err.Error(), "duplicate"), which is too broad; change the logic
in function(s) handling the CREATE INDEX to either (1) narrow the string match
to "duplicate key name" or (preferred) (2) import github.com/go-sql-driver/mysql
and use errors.As to cast to *mysql.MySQLError and return nil only when
mysqlErr.Number == 1061 (ER_DUP_KEYNAME); ensure you return the original err for
all other cases and add the mysql import and errors package usage where db.Exec
is invoked.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e71063 and 50aba33.

📒 Files selected for processing (13)
  • internal/event/broadcaster.go
  • internal/event/wails_broadcaster_desktop.go
  • internal/executor/executor.go
  • internal/handler/websocket.go
  • internal/repository/sqlite/migrations.go
  • internal/repository/sqlite/models.go
  • web/src/contexts/cooldowns-context.tsx
  • web/src/hooks/queries/use-providers.ts
  • web/src/hooks/queries/use-requests.ts
  • web/src/hooks/use-cooldowns.ts
  • web/src/hooks/use-streaming.ts
  • web/src/lib/cooldown-update-subscription.ts
  • web/src/pages/requests/index.tsx
🧰 Additional context used
🧬 Code graph analysis (8)
web/src/hooks/use-streaming.ts (4)
web/src/lib/transport/types.ts (1)
  • ClientType (8-8)
internal/domain/model.go (1)
  • ClientType (9-9)
web/src/lib/theme.ts (1)
  • ClientType (26-26)
web/src/lib/transport/index.ts (1)
  • ClientType (8-8)
web/src/lib/cooldown-update-subscription.ts (1)
web/src/lib/query-client.ts (1)
  • queryClient (3-14)
internal/event/broadcaster.go (2)
internal/domain/model.go (4)
  • ProxyRequest (287-356)
  • ProxyUpstreamAttempt (358-412)
  • RequestInfo (274-279)
  • ResponseInfo (280-284)
internal/adapter/client/adapter.go (1)
  • RequestInfo (17-20)
internal/repository/sqlite/migrations.go (2)
internal/version/version.go (1)
  • Version (6-6)
internal/repository/sqlite/db.go (1)
  • DB (17-20)
web/src/contexts/cooldowns-context.tsx (2)
web/src/lib/cooldown-update-subscription.ts (1)
  • subscribeCooldownUpdates (9-28)
web/src/lib/query-client.ts (1)
  • queryClient (3-14)
internal/handler/websocket.go (4)
internal/event/broadcaster.go (2)
  • SanitizeProxyRequestForBroadcast (28-40)
  • SanitizeProxyUpstreamAttemptForBroadcast (43-55)
web/src/lib/transport/types.ts (2)
  • WSMessage (322-325)
  • ProxyUpstreamAttempt (245-274)
web/src/lib/transport/index.ts (2)
  • WSMessage (39-39)
  • ProxyUpstreamAttempt (28-28)
internal/domain/model.go (1)
  • ProxyUpstreamAttempt (358-412)
web/src/hooks/use-cooldowns.ts (2)
web/src/lib/cooldown-update-subscription.ts (1)
  • subscribeCooldownUpdates (9-28)
web/src/lib/query-client.ts (1)
  • queryClient (3-14)
internal/event/wails_broadcaster_desktop.go (1)
internal/event/broadcaster.go (2)
  • SanitizeProxyRequestForBroadcast (28-40)
  • SanitizeProxyUpstreamAttemptForBroadcast (43-55)
🔇 Additional comments (19)
web/src/hooks/queries/use-providers.ts (1)

96-103: LGTM!

staleTime: 5000 is a reasonable caching policy; combined with the WS-event-driven invalidateQueries, it cuts unnecessary requests while keeping data fresh.

web/src/hooks/use-streaming.ts (1)

108-147: LGTM!

Moving the statistics computation into useMemo is the right optimization. The [activeRequests] dependency accurately tracks Map reference changes, so recomputation happens only when active requests actually change, avoiding rebuilding the Maps and arrays on every render.

internal/event/broadcaster.go (1)

28-55: LGTM! The shallow-copy + nil-out-large-fields approach is simple and efficient.

Both Sanitize functions are clear: nil guard, fast return when already slimmed, shallow copy then zero the heavy fields. In the broadcast-serialization use case this introduces no shared-reference problems.

internal/event/wails_broadcaster_desktop.go (1)

49-56: LGTM! The Sanitize call is well placed.

Slimming at the WailsBroadcaster layer ensures both the inner broadcaster (WebSocket) and the Wails Event path receive lightweight payloads. Even if the inner broadcaster calls Sanitize again, the nil fast path makes the second call free.

web/src/lib/cooldown-update-subscription.ts (1)

1-29: LGTM! The singleton subscription pattern is well implemented.

The reference-counted lifecycle (a Set tracks active clients; the transport listener is released when the last one leaves) is clean and correct. Set.add is naturally idempotent, so the double invocation under React Strict Mode is also safe.

web/src/hooks/use-cooldowns.ts (1)

22-25: LGTM!

Centralizing subscriptions through subscribeCooldownUpdates, with Set<QueryClient> deduplication, effectively fixes the duplicate subscription when useCooldowns and CooldownsProvider are mounted at the same time.

web/src/contexts/cooldowns-context.tsx (1)

40-43: LGTM!

use-cooldowns.ts 保持一致,统一使用集中式订阅管理器,清理逻辑简洁。

web/src/hooks/queries/use-requests.ts (4)

91-129: LGTM! The attempt batching/merge logic is correct.

The flushAttempts implementation is solid:

  • Array.from before clear() avoids mutation during iteration.
  • Merging with ?? preserves cached requestInfo/responseInfo, pairing cleanly with the backend Sanitize setting them to null.
  • getObserversCount() is re-checked before flushing, so even a stale early check at subscription time cannot cause a bad write.

166-178: LGTM! Detail-cache merge protection is in place.

flushAttempts applies the same ?? merge strategy, ensuring slimmed WS messages never overwrite requestInfo/responseInfo already loaded on the detail page.


131-139: A flush with only pending attempts could lag one cycle — confirmed fine.

When pendingRequests is empty but pendingAttemptsByRequest is not, flush calls flushAttempts() directly and returns, which is correct. The end of the main flow (line 354) also calls flushAttempts() unconditionally, but since line 97 already clear()ed the map, the second call returns immediately on size === 0 with no double processing. The logic is sound; just confirming the design is intentional.


373-390: The observer check in the subscription callback plus the re-check at flush time form good layered protection.

Subscription time does a fast filter (less useless accumulation); flush time does the precise check (handles observers disappearing within the 250ms window). The only minor cost is that a few attempts may accumulate within 250ms and then be discarded, with negligible memory impact.

internal/executor/executor.go (2)

324-340: The throttle + flush snapshot design is sound.

The 200ms aggregation window with a dirty flag and flush() snapshot broadcasting avoids the broadcast storm in streaming scenarios while ensuring the last batch of dirty data is not dropped on channel close (line 346). SanitizeProxyUpstreamAttemptForBroadcast makes a shallow copy and nils the pointer fields, which is safe for the current ProxyUpstreamAttempt struct (all remaining fields are value types).


342-390: Withdrawing this review item — the code already follows the synchronization protocol correctly.

The call site at internal/executor/middleware_dispatch.go:130 does follow the "wait for done to close before reading attempt" protocol:

  1. Line 131: start the processAdapterEventsRealtime goroutine
  2. Line 145: run the adapter synchronously (events are written through eventChan)
  3. Line 154: eventChan.Close() signals the goroutine to exit
  4. Line 155: <-eventDone blocks until the goroutine finishes (defer close(done) fires)
  5. Line 157+: only after this are attemptRecord's fields accessed

In addition, flush() creates a shallow copy via SanitizeProxyUpstreamAttemptForBroadcast(attempt) (copied := *attempt) and clears the request/response details before broadcasting, which fully avoids pointer aliasing. There is no data race.

Likely an incorrect or invalid review comment.

web/src/pages/requests/index.tsx (4)

764-786: The memo equality function is well designed and achieves "completed rows do not re-render on nowMs ticks."

The custom arePropsEqual first rules out request changes by reference comparison, then compares nowMs only for PENDING/IN_PROGRESS rows and skips completed rows, matching the re-render-suppression strategy described in the PR.

One small suggestion: given prev.request === next.request, prevPending || nextPending is equivalent to prevPending && nextPending; writing && reads more clearly, though correctness is unaffected.


229-236: Mobile rows have no nowMs / live-duration display — please confirm this is intentional.

The desktop MemoLogRow receives nowMs for a live timer on in-progress requests, but MemoMobileRequestCard does not receive nowMs and only shows request.duration. This means durations of PENDING/IN_PROGRESS requests do not update live on mobile. If that is expected (no live stopwatch on mobile), ignore this.


856-864: MemoMobileRequestCard's equality function is correct and concise.

The three reference comparisons cover all of the component's props, and onOpenRequest is already stabilized with useCallback, so no unnecessary re-renders result.


171-176: handleOpenRequest has a stable reference ✓

useCallback with the [navigate] dependency is correct. React Router v7 keeps navigate referentially stable over the component lifecycle, so the callback is not rebuilt frequently and works well with the downstream memos.

internal/handler/websocket.go (1)

43-69: The two-phase locking rework of run() is correct and removes the "network write while holding the lock" blocking risk.

RLock to snapshot the client list, then write per client, then WLock to clean up failed connections — the single goroutine preserves Gorilla WebSocket's "one concurrent writer" constraint. HandleWebSocket's defer may close a connection first and make WriteJSON fail, but the error path handles this correctly (a second Close is idempotent in Gorilla).

internal/repository/sqlite/models.go (1)

225-225: LGTM — the gorm:"index" tag matches the migration's index name exactly

GORM's default index name for this field is idx_proxy_requests_provider_id (i.e. idx_{tableName}_{columnName}), aligned with the SQL in the migrations.go v2 migration. Regardless of the order of AutoMigrate and RunMigrations, both paths are idempotent, so there is no conflict risk.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/handler/websocket.go`:
- Around line 98-121: Broadcast currently sends potentially-mutable original
pointers because event.SanitizeProxyUpstreamAttemptForBroadcast /
SanitizeProxyRequestForBroadcast may return the same pointer when
RequestInfo/ResponseInfo are nil, causing data races when
executor.processAdapterEventsRealtime modifies the object after enqueueing;
change WebSocketHub.BroadcastProxyRequest and
WebSocketHub.BroadcastProxyUpstreamAttempt to always make a shallow copy
(snapshot) of the sanitized object (copy the struct value into a new variable
and use that for WSMessage.Data) before sending to h.broadcast so the async
consumer/WriteJSON reads an immutable snapshot rather than the original mutable
object.

In `@internal/repository/sqlite/migrations.go`:
- Around line 75-77: The Down path silently ignores errors from the DROP INDEX
statement (the call to db.Exec("DROP INDEX idx_proxy_requests_provider_id ON
proxy_requests").Error inside rollbackMigration), which can leave
schema_migrations out of sync; change the code to capture the error (err :=
db.Exec(...).Error) and if err != nil emit a warning log that includes the
migration id/version, the SQL being run and the actual error (using the existing
logger used elsewhere), then continue returning nil so the migration record is
removed but the failure is visible for debugging.

---

Nitpick comments:
In `@internal/event/broadcaster.go`:
- Line 1: The file contains a UTF-8 BOM before the package declaration ("package
event") which should be removed; open internal/event/broadcaster.go and delete
the leading BOM bytes (0xEF 0xBB 0xBF) so the file starts exactly with "package
event", save as UTF-8 without BOM, and repeat for any other files in this PR
that include a BOM (or run a repo-wide tool to strip BOMs) to avoid tooling/diff
issues.

In `@internal/event/wails_broadcaster_desktop.go`:
- Line 1: The file starts with a UTF-8 BOM which should be removed: open
internal/event/wails_broadcaster_desktop.go, delete the leading BOM character
(U+FEFF), save as plain UTF-8 (no BOM), then confirm the package/top-level
declarations now start at the first character of the file and the build passes.

In `@internal/handler/websocket.go`:
- Around line 104-108: The non-blocking send to h.broadcast (used for
BroadcastProxyRequest and similar messages) currently silently drops messages
when the buffered channel is full; update each select default branch (the
instances around h.broadcast sends) to record the drop—either increment a drop
counter/metric (preferred: Prometheus counter) or emit a debug/structured log
that includes the message type and any request ID—so operators can monitor drop
rates without blocking the hot path; reference the send sites using h.broadcast
and the BroadcastProxyRequest message type and add the metric/log increment in
each default branch (also apply the same change to the other select blocks noted
at the other occurrences).
- Around line 54-58: The run() loop is writing to all clients sequentially with
client.WriteJSON, so a slow client blocks others; change to per-client isolated
writes by either setting a write deadline before each WriteJSON (use
client.SetWriteDeadline) and handling timeout errors when writing, or implement
a per-client send queue and a dedicated write goroutine for each connection
(create a buffered channel per client, send messages into that channel from
run(), and have the goroutine read the channel and call WriteJSON; on write
errors close the connection and append to toRemove). Ensure to reference the
existing clients slice, the run() function, client.WriteJSON calls, and the
toRemove handling so slow/failed clients are removed cleanly.

In `@internal/repository/sqlite/migrations.go`:
- Around line 63-66: The current error check in the db.Exec call that creates
idx_proxy_requests_provider_id uses strings.Contains(err.Error(), "duplicate"),
which is too broad; change the logic in function(s) handling the CREATE INDEX to
either (1) narrow the string match to "duplicate key name" or (preferred) (2)
import github.com/go-sql-driver/mysql and use errors.As to cast to
*mysql.MySQLError and return nil only when mysqlErr.Number == 1061
(ER_DUP_KEYNAME); ensure you return the original err for all other cases and add
the mysql import and errors package usage where db.Exec is invoked.

In `@web/src/pages/requests/index.tsx`:
- Line 578: The useCallback wrapper around handleClick inside LogRow is an
unnecessary defensive optimization because LogRow is already wrapped with memo;
remove the useCallback and replace the memoized handler with a plain function
reference (i.e., make handleClick a simple arrow function that calls
onOpenRequest(request.id)) so the code is simpler and behavior unchanged; keep
references to LogRow, handleClick, onOpenRequest and note that TableRow may
still re-render if it isn’t memoized.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (8)
internal/handler/websocket.go (2)

51-78: Sequential fan-out may amplify latency as clients grow.

The current implementation calls WriteJSON serially for each client, so worst-case broadcast latency for one message is O(n × writeTimeout). For an internal admin panel with few clients this is acceptable; if the client count grows, consider concurrent writes or capping the maximum number of clients.

On locking/races: RLock snapshot → release lock → lock-free writes → collect failed connections → WLock batch delete is correct and fixes the earlier race of deleting from the map under RLock. 👍

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 51 - 78, The current
WebSocketHub.run serially calls SetWriteDeadline and WriteJSON for each client
(using h.broadcast, h.clients and websocket.WriteJSON), causing O(n ×
writeTimeout) broadcast latency when clients grow; change run to perform writes
concurrently: snapshot h.clients as you already do, then spawn a goroutine per
client (or use a bounded worker pool/semaphore) that does SetWriteDeadline and
WriteJSON, collect failures into a thread-safe slice/channel, wait with a
sync.WaitGroup for all writes to finish, then acquire h.mu.Lock and remove
failed connections from h.clients; ensure no map access happens while goroutines
are writing and that removal still happens under the lock to avoid races.

80-95: The drop counter is a global monotonic count, so occasional drops can go "silent."

broadcastDroppedTotal is never reset. If the system recovers after a brief spike, later occasional drops at counts 2, 3, …, 99 trigger no log (the next log waits until 100). For better observability, consider time-window throttling (e.g. at most one log every 10 seconds) or exposing a metric such as a Prometheus counter.

For diagnosing large-scale storms the current implementation is sufficient; this is only a suggestion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 80 - 95, The
broadcastDroppedTotal counter in tryEnqueueBroadcast is monotonically increasing
so intermittent drops may be silent until the counter hits the next 100
boundary; change tryEnqueueBroadcast to rate-limit logging by time window
instead of only by counter: keep a lastLogTime (e.g., lastBroadcastDropLog
int64) on WebSocketHub and, when a drop occurs, increment broadcastDroppedTotal
as now but also check the lastLogTime (use atomic load/store or a mutex) and
only emit log.Printf if the lastLogTime is older than a configured window (e.g.,
10s), then update lastLogTime to now; keep the existing counter-based sampling
if you want, but add the time-window gate so occasional drops after recovery
still produce an observable log.
internal/handler/websocket_test.go (1)

86-123: The drop-logging test logic is correct, but it mutates global log state.

The test captures log output by changing the global log.SetOutput, and the defer restores it correctly. Since in-package tests run sequentially by default, there is no concurrency problem today; if someone later adds t.Parallel(), mutating global log state would make the test flaky. Consider noting this constraint in a comment, or injecting the logger into WebSocketHub for testability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket_test.go` around lines 86 - 123, The test mutates
the global log state (log.SetOutput/Flags/Prefix) which makes it flaky if tests
run in parallel; instead, modify WebSocketHub to accept an injectable logger
(add a Logger *log.Logger field to the WebSocketHub struct and use that logger
inside BroadcastProxyRequest) and update
TestWebSocketHub_BroadcastProxyRequest_LogsWhenDropped to construct the hub with
a local log.New(&buf, "", 0) so the test captures output without touching global
state; update references in BroadcastProxyRequest and any hub constructors to
use the new Logger when present.
internal/repository/sqlite/migrations.go (1)

90-101: The "error 1061" substring fallback has a theoretical prefix false positive

Line 100:

return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061")

"error 1061" is a substring of "error 10619", so an error message like "error 10619: ..." would incorrectly match. This is extremely unlikely in a CREATE INDEX context, and the function is only called from the case "mysql": branch, but a tighter boundary match removes the ambiguity:

♻️ Optional: use a tighter boundary match
-return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061")
+return strings.Contains(lower, "duplicate key name") || strings.Contains(lower, "error 1061 ") || strings.HasSuffix(lower, "error 1061")

Or a regexp, if the team allows it:

import "regexp"
var errCode1061Re = regexp.MustCompile(`\berror 1061\b`)
return strings.Contains(lower, "duplicate key name") || errCode1061Re.MatchString(lower)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/repository/sqlite/migrations.go` around lines 90 - 101, The fallback
string match in isMySQLDuplicateIndexError uses strings.Contains(lower, "error
1061") which can falsely match longer codes like "error 10619"; update the
fallback to perform a bounded match instead (e.g., check for "error 1061" with
word/colon boundaries or use a precompiled regexp with \b around 1061) so only
the exact error code 1061 is considered while keeping the existing "duplicate
key name" check; modify the isMySQLDuplicateIndexError function to replace the
loose strings.Contains call with this precise boundary match.
internal/repository/sqlite/migrations_test.go (1)

10-22: Consider adding nil-input and substring-prefix edge cases

The current 4 cases cover the main branches; two optional additions:

  1. nil input: isMySQLDuplicateIndexError handles nil on its first line, but the tests do not cover it; adding a case documents the behavior.
  2. Substring false positive (the "error 1061" issue above): if the maintainer adopts the precise match, add a counterexample (e.g. "error 10619: something") to ensure it no longer matches.
✅ Optional: additional cases
+if isMySQLDuplicateIndexError(nil) {
+    t.Fatalf("expected false for nil error")
+}
+// ensure "error 10619" is not misread as 1061
+if isMySQLDuplicateIndexError(errors.New("error 10619: something unrelated")) {
+    t.Fatalf("expected false for error code 10619 (should not match 1061 as substring)")
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/repository/sqlite/migrations_test.go` around lines 10 - 22, Add two
additional assertions to TestIsMySQLDuplicateIndexError to cover the nil and
string-prefix edge cases: call isMySQLDuplicateIndexError(nil) and assert it
returns false, and add a negative string case (e.g., errors.New("Error 10619:
...") or similar) to assert it does not match as a duplicate-index error; update
the test function TestIsMySQLDuplicateIndexError to include these two assertions
so the behavior of isMySQLDuplicateIndexError for nil inputs and near-prefix
numeric matches is explicitly verified.
web/src/pages/requests/index.tsx (3)

778-782: nextPending is a redundant variable

By this point prev.request === next.request (same object reference; otherwise line 767 returned false early), so prev.request.status === next.request.status always holds and prevPending === nextPending is always true. nextPending can be removed and the condition can use prevPending directly.

♻️ Simplification
- const prevPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
- const nextPending = next.request.status === 'PENDING' || next.request.status === 'IN_PROGRESS';
- if (prevPending || nextPending) {
+ const isPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
+ if (isPending) {
    return prev.nowMs === next.nowMs;
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` around lines 778 - 782, nextPending is
defined redundantly and always equals prevPending (prev.request === next.request
is already the same reference here); delete the nextPending variable and use
prevPending directly in the condition (i.e. simplify "if (prevPending ||
nextPending) { return prev.nowMs === next.nowMs; }" to "if (prevPending) {
return prev.nowMs === next.nowMs; }"), keeping the existing references to
prev.request/status and prev.nowMs so the logic is unchanged.

1-2: 将两条 React import 合并为一条

两行均从 'react' 导入,可合并成单条语句,符合 no-duplicate-imports 规范。

♻️ Suggested merge
-import { useState, useMemo, useRef, useEffect } from 'react';
-import { useCallback, memo } from 'react';
+import { useState, useMemo, useRef, useEffect, useCallback, memo } from 'react';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` around lines 1 - 2, Two separate imports
from 'react' should be combined: replace the two import statements that bring in
useState, useMemo, useRef, useEffect and useCallback, memo with a single import
from 'react' that lists all six identifiers (useState, useMemo, useRef,
useEffect, useCallback, memo) to satisfy the no-duplicate-imports rule and avoid
duplicated module imports.

798-798: The useCallback in MobileRequestCard brings no real benefit and is inconsistent with LogRow

handleClick is only used as a DOM onClick handler and is not passed to any memoized child component, so useCallback yields no extra render optimization and only adds comparison overhead. LogRow (line 578) uses a plain arrow function; the two should be kept consistent.

♻️ Align and simplify
- const handleClick = useCallback(() => onOpenRequest(request.id), [onOpenRequest, request.id]);
+ const handleClick = () => onOpenRequest(request.id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` at line 798, Remove the unnecessary
useCallback wrapper around handleClick in MobileRequestCard: replace the
memoized callback (useCallback(() => onOpenRequest(request.id), [...])) with a
plain arrow function or inline handler (e.g., () => onOpenRequest(request.id))
to match LogRow's simple arrow style; update the declaration of handleClick (or
the onClick prop) to call onOpenRequest(request.id) directly and remove the
useCallback import/usage to avoid extra comparison overhead.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 50aba33 and 7b80677.

📒 Files selected for processing (7)
  • internal/event/broadcaster.go
  • internal/event/wails_broadcaster_desktop.go
  • internal/handler/websocket.go
  • internal/handler/websocket_test.go
  • internal/repository/sqlite/migrations.go
  • internal/repository/sqlite/migrations_test.go
  • web/src/pages/requests/index.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • internal/event/wails_broadcaster_desktop.go
  • internal/event/broadcaster.go
🧰 Additional context used
🧬 Code graph analysis (3)
internal/repository/sqlite/migrations.go (1)
internal/repository/sqlite/db.go (1)
  • DB (17-20)
internal/handler/websocket.go (3)
web/src/lib/transport/types.ts (2)
  • WSMessage (322-325)
  • ProxyUpstreamAttempt (245-274)
web/src/lib/transport/index.ts (2)
  • WSMessage (39-39)
  • ProxyUpstreamAttempt (28-28)
internal/event/broadcaster.go (2)
  • SanitizeProxyRequestForBroadcast (28-40)
  • SanitizeProxyUpstreamAttemptForBroadcast (43-55)
internal/handler/websocket_test.go (1)
internal/handler/websocket.go (2)
  • WebSocketHub (31-38)
  • WSMessage (26-29)
🔇 Additional comments (7)
internal/handler/websocket.go (4)

124-142: Snapshot semantics are correct; asynchronous consumption is safe.

Whether or not SanitizeProxyRequestForBroadcast returns the original pointer, Line 130 always makes a shallow copy, and data = snapshot stores a value type into the interface{}, so asynchronous consumers read an independent copy. The race risk flagged in the earlier review has been fixed correctly.


144-166: BroadcastProxyRequest follows the same consistent snapshot pattern; the implementation is correct.


62-68: client.Close() may be called twice (in run() and in HandleWebSocket's defer), but this is safe.

After run() calls client.Close() on a write failure, ReadMessage in HandleWebSocket also returns an error, and the defer then calls conn.Close() again. The underlying net.Conn.Close() used by gorilla/websocket is idempotent and does not panic, so the current code is safe. No change needed.


35-45: The new constants and struct fields are reasonable.

broadcastDroppedTotal uses atomic.Uint64, whose zero value is ready to use with no extra initialization; websocketWriteTimeout = 5s is a reasonable timeout for WebSocket writes; a channel buffer of 100 is sufficient for the admin-panel scenario.

internal/handler/websocket_test.go (2)

12-48: The snapshot-semantics test is correct; the type switch covers both the value and pointer cases.

In the current BroadcastProxyRequest implementation, data = snapshot (a value type), so case *domain.ProxyRequest (Line 34) never matches under the current implementation. As a defensive test that is fine, but if you want the test to mirror the implementation more precisely, you could keep only the case domain.ProxyRequest branch.


50-84: Same structure as the previous test, covering the attempt snapshot semantics. LGTM.

internal/repository/sqlite/migrations.go (1)

103-142: LGTM; the new migration integrates correctly with the existing framework

RunMigrations runs AutoMigrate first and then executes migrations in ascending version order; Version 2's idempotent design (IF NOT EXISTS / isMySQLDuplicateIndexError) is fully compatible with the framework's logic.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/handler/websocket.go`:
- Around line 168-184: BroadcastMessage currently enqueues the provided data
directly (via BroadcastMessage -> tryEnqueueBroadcast) which risks races if
callers pass mutable pointers; match the behavior of
BroadcastProxyRequest/BroadcastProxyUpstreamAttempt by snapshotting data before
enqueuing: detect common mutable types or, more generally, marshal the data to
JSON and unmarshal into an interface{} (or clone slices/maps/structs) to create
an immutable copy, then build WSMessage with that copy and call
tryEnqueueBroadcast; alternatively, if you choose caller responsibility, add a
clear comment on BroadcastMessage documenting that callers must pass
immutable/serialized data.

In `@internal/repository/sqlite/migrations.go`:
- Around line 57-88: The v2 migration uses CREATE/DROP INDEX which on MySQL
causes implicit commits so the surrounding gorm.Transaction in runMigration
cannot guarantee atomicity; update the migration (the Version:2 entry in
migrations.go, specifically the Up/Down funcs and/or nearby runMigration
comment) to add a clear note that MySQL DDL triggers implicit commits and that
m.Up(tx) + tx.Create(&SchemaMigration{...}) are not atomic on MySQL, reference
isMySQLDuplicateIndexError and SchemaMigration, and state that the behavior
relies on Up being idempotent (self-healing) rather than transactional rollback
so future maintainers understand this limitation.

In `@web/src/pages/requests/index.tsx`:
- Around line 764-786: MemoLogRow's comparator uses the index prop (in
LogRowProps) so when a new real-time request is unshifted all subsequent rows
get index+1 and the comparator returns false, causing full re-renders; remove
index from the props and comparator, switch the zebra styling from using the
index to a CSS nth-child rule on the TableRow (e.g., Tailwind
[&:nth-child(even)]/[odd]) and stop passing index into LogRow/LogRowProps and
MemoLogRow so the comparator no longer invalidates rows on inserts.

---

Nitpick comments:
In `@internal/handler/websocket_test.go`:
- Around line 86-123: The test mutates the global log state
(log.SetOutput/Flags/Prefix) which makes it flaky if tests run in parallel;
instead, modify WebSocketHub to accept an injectable logger (add a Logger
*log.Logger field to the WebSocketHub struct and use that logger inside
BroadcastProxyRequest) and update
TestWebSocketHub_BroadcastProxyRequest_LogsWhenDropped to construct the hub with
a local log.New(&buf, "", 0) so the test captures output without touching global
state; update references in BroadcastProxyRequest and any hub constructors to
use the new Logger when present.

In `@internal/handler/websocket.go`:
- Around line 51-78: The current WebSocketHub.run serially calls
SetWriteDeadline and WriteJSON for each client (using h.broadcast, h.clients and
websocket.WriteJSON), causing O(n × writeTimeout) broadcast latency when clients
grow; change run to perform writes concurrently: snapshot h.clients as you
already do, then spawn a goroutine per client (or use a bounded worker
pool/semaphore) that does SetWriteDeadline and WriteJSON, collect failures into
a thread-safe slice/channel, wait with a sync.WaitGroup for all writes to
finish, then acquire h.mu.Lock and remove failed connections from h.clients;
ensure no map access happens while goroutines are writing and that removal still
happens under the lock to avoid races.
- Around line 80-95: The broadcastDroppedTotal counter in tryEnqueueBroadcast is
monotonically increasing so intermittent drops may be silent until the counter
hits the next 100 boundary; change tryEnqueueBroadcast to rate-limit logging by
time window instead of only by counter: keep a lastLogTime (e.g.,
lastBroadcastDropLog int64) on WebSocketHub and, when a drop occurs, increment
broadcastDroppedTotal as now but also check the lastLogTime (use atomic
load/store or a mutex) and only emit log.Printf if the lastLogTime is older than
a configured window (e.g., 10s), then update lastLogTime to now; keep the
existing counter-based sampling if you want, but add the time-window gate so
occasional drops after recovery still produce an observable log.

In `@internal/repository/sqlite/migrations_test.go`:
- Around line 10-22: Add two additional assertions to
TestIsMySQLDuplicateIndexError to cover the nil and string-prefix edge cases:
call isMySQLDuplicateIndexError(nil) and assert it returns false, and add a
negative string case (e.g., errors.New("Error 10619: ...") or similar) to assert
it does not match as a duplicate-index error; update the test function
TestIsMySQLDuplicateIndexError to include these two assertions so the behavior
of isMySQLDuplicateIndexError for nil inputs and near-prefix numeric matches is
explicitly verified.

In `@internal/repository/sqlite/migrations.go`:
- Around line 90-101: The fallback string match in isMySQLDuplicateIndexError
uses strings.Contains(lower, "error 1061") which can falsely match longer codes
like "error 10619"; update the fallback to perform a bounded match instead
(e.g., check for "error 1061" with word/colon boundaries or use a precompiled
regexp with \b around 1061) so only the exact error code 1061 is considered
while keeping the existing "duplicate key name" check; modify the
isMySQLDuplicateIndexError function to replace the loose strings.Contains call
with this precise boundary match.

In `@web/src/pages/requests/index.tsx`:
- Around line 778-782: nextPending is defined redundantly and its value always
equals prevPending (prev.request === next.request is already the same reference
here); delete the nextPending variable and use prevPending directly in the
condition (i.e., simplify "if (prevPending || nextPending) { return prev.nowMs
=== next.nowMs; }" to "if (prevPending) { return prev.nowMs === next.nowMs;
}"), keeping the existing references to prev.request/status and prev.nowMs so
the logic is unchanged.
- Around line 1-2: Two separate imports from 'react' should be combined: replace
the two import statements that bring in useState, useMemo, useRef, useEffect and
useCallback, memo with a single import from 'react' that lists all six
identifiers (useState, useMemo, useRef, useEffect, useCallback, memo) to satisfy
the no-duplicate-imports rule and avoid duplicated module imports.
- Line 798: Remove the unnecessary useCallback wrapper around handleClick in
MobileRequestCard: replace the memoized callback (useCallback(() =>
onOpenRequest(request.id), [...])) with a plain arrow function or inline handler
(e.g., () => onOpenRequest(request.id)) to match LogRow's simple arrow style;
update the declaration of handleClick (or the onClick prop) to call
onOpenRequest(request.id) directly and remove the useCallback import/usage to
avoid extra comparison overhead.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (8)
internal/handler/websocket.go (3)

81-96: tryEnqueueBroadcast's non-blocking enqueue plus throttled logging is a sound design.

select/default guarantees the hot path never blocks; the atomic counter plus first-time/every-100th logging strikes a good balance between observability and performance.

One minor suggestion: broadcastDroppedTotal is a cumulative counter that is never reset. If you want to distinguish "dropped in the past" from "still dropping now" after the system recovers, consider adding a sliding-window count or periodic reset. For the current scenario this has little impact, though.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 81 - 96, The current
tryEnqueueBroadcast uses a cumulative counter broadcastDroppedTotal that never
resets, making it hard to tell if drops are recent or historical; modify
tryEnqueueBroadcast (or nearby metric collection logic) to maintain a short-term
sliding/resettable counter (e.g., a time-windowed counter or a counter that is
reset periodically) in addition to the cumulative broadcastDroppedTotal, and use
that short-term counter for the "dropped_total" log/observability checks (while
keeping the overall cumulative metric for long-term stats) so you can
distinguish recent ongoing drops from historical ones when logging or alerting.

125-143: The snapshot strategies of BroadcastProxyRequest/BroadcastProxyUpstreamAttempt and BroadcastMessage are inconsistent; there is a performance optimization opportunity.

BroadcastMessage pre-serializes to json.RawMessage, so run()'s WriteJSON only needs to write the already-serialized bytes for all N clients. BroadcastProxyRequest/BroadcastProxyUpstreamAttempt, by contrast, trigger one json.Marshal per client in run() (inside WriteJSON).

With a larger number of clients, pre-serialization avoids N redundant serializations. Consider unifying on BroadcastMessage's json.RawMessage strategy:

♻️ Example (using BroadcastProxyRequest)
 func (h *WebSocketHub) BroadcastProxyRequest(req *domain.ProxyRequest) {
 	sanitized := event.SanitizeProxyRequestForBroadcast(req)
-	var data interface{} = sanitized
 	var meta string
 	if sanitized != nil {
-		// 无论 Sanitize 是否返回原指针,都强制做一次浅拷贝快照
 		snapshot := *sanitized
-		data = snapshot
 		meta = "requestID=" + snapshot.RequestID
 		if snapshot.ID != 0 {
 			meta += " requestDbID=" + strconv.FormatUint(snapshot.ID, 10)
 		}
+		b, err := json.Marshal(snapshot)
+		if err != nil {
+			log.Printf("[WebSocket] drop broadcast message type=proxy_request_update: marshal snapshot failed: %v", err)
+			return
+		}
+		msg := WSMessage{
+			Type: "proxy_request_update",
+			Data: json.RawMessage(b),
+		}
+		h.tryEnqueueBroadcast(msg, meta)
+		return
 	}
 	msg := WSMessage{
 		Type: "proxy_request_update",
-		Data: data,
+		Data: nil,
 	}
 	h.tryEnqueueBroadcast(msg, meta)
 }

Also applies to: 145-167, 170-195

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 125 - 143, BroadcastProxyRequest
and BroadcastProxyUpstreamAttempt currently pass Go structs to WSMessage so
run()/WriteJSON ends up json.Marshal-ing per-client; change them to follow
BroadcastMessage by taking a shallow snapshot (as you already do), json.Marshal
that snapshot once into a []byte and set WSMessage.Data to
json.RawMessage(theBytes) before calling tryEnqueueBroadcast so run() can write
the already-serialized bytes to all clients; preserve the existing nil handling,
meta construction (RequestID and ID), and the snapshot logic, and apply the same
change to BroadcastProxyUpstreamAttempt and the other affected blocks so the
pre-serialization strategy is consistent with BroadcastMessage.

52-68: The multi-client writes of the same message in run() can be reduced to a single serialization.

WriteJSON(msg) currently performs a separate JSON serialization for every client. You can pre-serialize to []byte outside the loop and write with WriteMessage(websocket.TextMessage, data), avoiding N redundant marshals:

♻️ Pre-serialization suggestion
 for msg := range h.broadcast {
+	data, err := json.Marshal(msg)
+	if err != nil {
+		log.Printf("[WebSocket] failed to marshal broadcast message type=%s: %v", msg.Type, err)
+		continue
+	}
+
 	h.mu.RLock()
 	clients := make([]*websocket.Conn, 0, len(h.clients))
 	for client := range h.clients {
 		clients = append(clients, client)
 	}
 	h.mu.RUnlock()

 	var toRemove []*websocket.Conn
 	for _, client := range clients {
 		_ = client.SetWriteDeadline(time.Now().Add(websocketWriteTimeout))
-		if err := client.WriteJSON(msg); err != nil {
+		if err := client.WriteMessage(websocket.TextMessage, data); err != nil {
 			_ = client.Close()
 			toRemove = append(toRemove, client)
 		}
 	}

This is complementary to the previous comment's suggestion to unify BroadcastProxyRequest/BroadcastProxyUpstreamAttempt on json.RawMessage: if every enqueued message is already pre-serialized json.RawMessage, this loop can be simplified further.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket.go` around lines 52 - 68, The run() loop currently
calls client.WriteJSON(msg) for each connection which re-marshals the same msg
repeatedly; change run() in WebSocketHub to pre-serialize the message once (e.g.
json.Marshal or use existing json.RawMessage when available) into a []byte
before iterating clients, then replace WriteJSON(msg) with
client.WriteMessage(websocket.TextMessage, data) and keep the existing
SetWriteDeadline(websocketWriteTimeout) and close/remove logic; this reduces
repeated marshaling and is compatible with
BroadcastProxyRequest/BroadcastProxyUpstreamAttempt when those use
json.RawMessage.
internal/repository/sqlite/migrations.go (2)

125-142: RunMigrations uses getCurrentVersion to fetch the highest executed version and skip applied migrations, which risks skipping over gaps.

getCurrentVersion returns MAX(version). If v1 and v3 have been applied but v2 has not (for example due to manual intervention or a concurrency anomaly), currentVersion is 3 and v2 will be skipped forever. With only v1 and v2 today this cannot actually trigger, but it is worth watching as the number of migrations grows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/repository/sqlite/migrations.go` around lines 125 - 142,
RunMigrations currently uses getCurrentVersion (MAX(version)) which can skip
intermediate migrations; change the logic to fetch the set of applied migration
versions (e.g. query the migrations table for all executed versions) and build a
map of appliedVersions, then iterate sortedMigrations and call d.runMigration(m)
only when m.Version is not present in appliedVersions; remove reliance on
getCurrentVersion for skipping and use the appliedVersions map instead so
missing intermediate migrations (e.g. v2 when v1 and v3 exist) will still
execute.
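A minimal sketch of the applied-versions-set approach suggested above. pendingVersions is a hypothetical helper; the real code would read the applied versions from the schema migrations table.

```go
package main

import "fmt"

// pendingVersions filters migrations against a set of already-applied
// versions rather than comparing to MAX(version), so a gap such as v2
// between applied v1 and v3 is not silently skipped.
func pendingVersions(all []int, applied []int) []int {
	appliedSet := make(map[int]bool, len(applied))
	for _, v := range applied {
		appliedSet[v] = true
	}
	var pending []int
	for _, v := range all { // assumes `all` is already sorted ascending
		if !appliedSet[v] {
			pending = append(pending, v)
		}
	}
	return pending
}

func main() {
	// v1 and v3 applied, v2 missing: a MAX(version)=3 check would skip v2.
	fmt.Println(pendingVersions([]int{1, 2, 3}, []int{1, 3})) // → [2]
}
```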

90-101: The fallback string matching in isMySQLDuplicateIndexError could be scoped more tightly.

The current fallback matches both "duplicate key name" and "error 1061". As a substring, "error 1061" could accidentally match non-MySQL error messages that happen to contain that text (for example an error reference embedded in a log format). That said, since this function is only called on the MySQL dialect path, the real risk of a false match is low and the current implementation is acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/repository/sqlite/migrations.go` around lines 90 - 101, The fallback
string check in isMySQLDuplicateIndexError is too broad because matching "error
1061" can falsely trigger on unrelated messages; update the fallback to only
look for the specific MySQL phrase "duplicate key name" and/or a tighter pattern
for the MySQL code (e.g., "1061" only when paired with "ER_DUP_KEYNAME" or
"mysql" context) instead of the loose "error 1061" substring; retain the
errors.As check against mysqlDriver.MySQLError and adjust the final
strings.Contains calls in isMySQLDuplicateIndexError accordingly.
internal/handler/websocket_test.go (1)

87-124: Redirecting the global logger is risky under parallel test execution.

The test captures log output by redirecting the global logger via log.SetOutput. Although t.Parallel() is not used today, if someone adds the parallel flag later this will race on global logger state. Consider adding a comment at the top of the test file or function explaining why this test must not run in parallel, or letting WebSocketHub accept a custom logger that tests can inject.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/handler/websocket_test.go` around lines 87 - 124, The test mutates
the global logger via log.SetOutput which races in parallel tests; either mark
the test as non-parallel with a comment at the top of the test file explaining
it must not be run with t.Parallel(), or better refactor WebSocketHub to accept
a logger (e.g., add a logger field to WebSocketHub and use it inside
BroadcastProxyRequest and any other logging call), update the test to create the
hub with a test-specific logger that writes to an in-memory buffer (instead of
calling log.SetOutput), and assert against that buffer; refer to WebSocketHub,
BroadcastProxyRequest and WSMessage to locate the code to change.
web/src/pages/requests/index.tsx (2)

759-780: The comparator logic is correct; prevPending || nextPending can be simplified

The overall logic is sound: completed rows correctly skip re-rendering on nowMs ticks; pending rows re-render every second as nowMs changes; on a status change, React Query produces a new object reference, which is intercepted early at line 762.

But by the time line 774 is reached, prev.request === next.request already holds (same object reference), so prev.request.status === next.request.status, meaning prevPending === nextPending and || is equivalent to && here. Simplifying to if (prevPending) improves readability:

♻️ Optional simplification
- const prevPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
- const nextPending = next.request.status === 'PENDING' || next.request.status === 'IN_PROGRESS';
- if (prevPending || nextPending) {
-   return prev.nowMs === next.nowMs;
- }
+ const isPending = prev.request.status === 'PENDING' || prev.request.status === 'IN_PROGRESS';
+ if (isPending) {
+   return prev.nowMs === next.nowMs;
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` around lines 759 - 780, The comparator in
MemoLogRow redundantly checks both prevPending and nextPending even though
earlier the comparator already returned false if prev.request !== next.request,
guaranteeing prev.request === next.request and therefore prevPending ===
nextPending; simplify the branch by replacing the condition "if (prevPending ||
nextPending)" with "if (prevPending)" and keep the existing early-return
behavior that compares prev.nowMs === next.nowMs to decide re-rendering for
pending rows (variables: MemoLogRow, prev.request/next.request, prevPending,
nextPending, prev.nowMs/next.nowMs).

792-792: The useCallback in MobileRequestCard has no practical benefit and is inconsistent with LogRow

handleClick is only used as the onClick of a plain <div> and is not passed into a memo-wrapped child component, so the stability of the function reference has no effect on rendering. useCallback here only adds dependency-comparison overhead, while the equivalent code in LogRow (line 573) already correctly uses an inline arrow function.

♻️ Suggest removing the redundant useCallback
- const handleClick = useCallback(() => onOpenRequest(request.id), [onOpenRequest, request.id]);
+ const handleClick = () => onOpenRequest(request.id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/pages/requests/index.tsx` at line 792, The handleClick inside
MobileRequestCard is unnecessarily wrapped with useCallback—remove the
useCallback wrapper and replace the handler with an inline arrow function (e.g.,
onClick={() => onOpenRequest(request.id)}) like LogRow does; keep referencing
onOpenRequest and request.id directly and delete the handleClick constant to
eliminate unneeded dependency tracking and simplify the component.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b80677 and a793174.

📒 Files selected for processing (4)
  • internal/handler/websocket.go
  • internal/handler/websocket_test.go
  • internal/repository/sqlite/migrations.go
  • web/src/pages/requests/index.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
internal/handler/websocket.go (3)
web/src/lib/transport/index.ts (2)
  • WSMessage (39-39)
  • ProxyUpstreamAttempt (28-28)
internal/event/broadcaster.go (2)
  • SanitizeProxyRequestForBroadcast (28-40)
  • SanitizeProxyUpstreamAttemptForBroadcast (43-55)
internal/domain/model.go (1)
  • ProxyUpstreamAttempt (358-412)
internal/handler/websocket_test.go (1)
internal/handler/websocket.go (2)
  • WebSocketHub (32-39)
  • WSMessage (27-30)
internal/repository/sqlite/migrations.go (1)
internal/repository/sqlite/db.go (1)
  • DB (17-20)
🔇 Additional comments (7)
internal/repository/sqlite/migrations.go (1)

57-88: Migration v2 is implemented cleanly, with proper MySQL compatibility handling.

The dialect branching is sound: MySQL achieves idempotency via isMySQLDuplicateIndexError, while other dialects use IF NOT EXISTS/IF EXISTS. The warning log on the Down path has also been added, all consistent with the earlier review suggestions.

internal/handler/websocket_test.go (2)

13-49: The snapshot-semantics tests have good coverage.

They handle both the *domain.ProxyRequest and domain.ProxyRequest type assertion branches, which makes them robust against implementation changes.


51-85: The attempt snapshot test and the BroadcastMessage snapshot test have clear logic and cover the core immutability guarantee.

The test strategy (broadcast → mutate the original object → verify the snapshot in the channel was not polluted) is concise and effective.

Also applies to: 126-158

internal/handler/websocket.go (3)

52-79: The run() refactor is correct; the lock granularity and cleanup logic are sound.

RLock to snapshot the client list → release the lock → write per connection (with a deadline) → clean up failed connections in batch. This avoids the two problems of the old implementation: doing network I/O while holding the lock, and deleting from the map under RLock. SetWriteDeadline ensures slow clients cannot block the broadcast loop.


125-167: The BroadcastProxyRequest / BroadcastProxyUpstreamAttempt snapshot strategy is correct and resolves the data race flagged in the earlier review.

Whether or not Sanitize returns the original pointer, a shallow copy is always forced (snapshot := *sanitized) and stored as a value type in the interface{}, ensuring asynchronous consumers are unaffected. The metadata string construction gives drop logs good traceability.

Note: the shallow copy is safe for domain.ProxyRequest / domain.ProxyUpstreamAttempt today because Sanitize already nils out the RequestInfo/ResponseInfo pointer fields. If the domain models gain other mutable reference fields (slice/map/pointer) in the future, a shallow copy may no longer be sufficient and this should be re-evaluated.
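The shallow-copy guarantee discussed above can be demonstrated in isolation. proxyRequest and snapshotForBroadcast are illustrative stand-ins for the domain model and the hub's snapshot step.

```go
package main

import "fmt"

// proxyRequest is a tiny stand-in for the domain model.
type proxyRequest struct {
	RequestID string
	Status    string
}

// snapshotForBroadcast dereferences the pointer so the enqueued interface{}
// holds a value copy; later mutations of the original are invisible to the
// async consumer. Pointer-typed fields would still be shared, which is why
// Sanitize must nil them out first.
func snapshotForBroadcast(req *proxyRequest) interface{} {
	snapshot := *req
	return snapshot
}

func main() {
	req := &proxyRequest{RequestID: "r-1", Status: "IN_PROGRESS"}
	data := snapshotForBroadcast(req)
	req.Status = "COMPLETED" // mutation after "enqueue"
	fmt.Println(data.(proxyRequest).Status) // → IN_PROGRESS
}
```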


170-195: BroadcastMessage's JSON pre-serialization snapshot strategy is the right solution and resolves the issue from the earlier review.

The special-casing of json.RawMessage avoids double serialization, and a marshal failure is logged with an early return. When serializing, encoding/json embeds the raw bytes of a json.RawMessage directly, without extra escaping.

web/src/pages/requests/index.tsx (1)

576-608: Zebra striping now uses the CSS even: selector; the earlier problem is fixed

After removing the index prop, zebra striping is implemented with Tailwind's even:bg-* pseudo-class, completely eliminating the full re-render caused by every row's index shifting by +1 when a new real-time request was unshifted. tailwind-merge underneath cn() correctly resolves the conflict between even:bg-foreground/[0.03] and the later even:bg-red-500/25 / even:bg-amber-500/15 / even:bg-blue-500/10 (the latter override the former).

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@internal/handler/websocket_test.go`:
- Around line 87-124: The test mutates the global logger via log.SetOutput which
races in parallel tests; either mark the test as non-parallel with a comment at
the top of the test file explaining it must not be run with t.Parallel(), or
better refactor WebSocketHub to accept a logger (e.g., add a logger field to
WebSocketHub and use it inside BroadcastProxyRequest and any other logging
call), update the test to create the hub with a test-specific logger that writes
to an in-memory buffer (instead of calling log.SetOutput), and assert against
that buffer; refer to WebSocketHub, BroadcastProxyRequest and WSMessage to
locate the code to change.

In `@internal/handler/websocket.go`:
- Around line 81-96: The current tryEnqueueBroadcast uses a cumulative counter
broadcastDroppedTotal that never resets, making it hard to tell if drops are
recent or historical; modify tryEnqueueBroadcast (or nearby metric collection
logic) to maintain a short-term sliding/resettable counter (e.g., a
time-windowed counter or a counter that is reset periodically) in addition to
the cumulative broadcastDroppedTotal, and use that short-term counter for the
"dropped_total" log/observability checks (while keeping the overall cumulative
metric for long-term stats) so you can distinguish recent ongoing drops from
historical ones when logging or alerting.
- Around line 125-143: BroadcastProxyRequest and BroadcastProxyUpstreamAttempt
currently pass Go structs to WSMessage so run()/WriteJSON ends up
json.Marshal-ing per-client; change them to follow BroadcastMessage by taking a
shallow snapshot (as you already do), json.Marshal that snapshot once into a
[]byte and set WSMessage.Data to json.RawMessage(theBytes) before calling
tryEnqueueBroadcast so run() can write the already-serialized bytes to all
clients; preserve the existing nil handling, meta construction (RequestID and
ID), and the snapshot logic, and apply the same change to
BroadcastProxyUpstreamAttempt and the other affected blocks so the
pre-serialization strategy is consistent with BroadcastMessage.
- Around line 52-68: The run() loop currently calls client.WriteJSON(msg) for
each connection which re-marshals the same msg repeatedly; change run() in
WebSocketHub to pre-serialize the message once (e.g. json.Marshal or use
existing json.RawMessage when available) into a []byte before iterating clients,
then replace WriteJSON(msg) with client.WriteMessage(websocket.TextMessage,
data) and keep the existing SetWriteDeadline(websocketWriteTimeout) and
close/remove logic; this reduces repeated marshaling and is compatible with
BroadcastProxyRequest/BroadcastProxyUpstreamAttempt when those use
json.RawMessage.

In `@internal/repository/sqlite/migrations.go`:
- Around line 125-142: RunMigrations currently uses getCurrentVersion
(MAX(version)) which can skip intermediate migrations; change the logic to fetch
the set of applied migration versions (e.g. query the migrations table for all
executed versions) and build a map of appliedVersions, then iterate
sortedMigrations and call d.runMigration(m) only when m.Version is not present
in appliedVersions; remove reliance on getCurrentVersion for skipping and use
the appliedVersions map instead so missing intermediate migrations (e.g. v2 when
v1 and v3 exist) will still execute.
- Around line 90-101: The fallback string check in isMySQLDuplicateIndexError is
too broad because matching "error 1061" can falsely trigger on unrelated
messages; update the fallback to only look for the specific MySQL phrase
"duplicate key name" and/or a tighter pattern for the MySQL code (e.g., "1061"
only when paired with "ER_DUP_KEYNAME" or "mysql" context) instead of the loose
"error 1061" substring; retain the errors.As check against
mysqlDriver.MySQLError and adjust the final strings.Contains calls in
isMySQLDuplicateIndexError accordingly.

In `@web/src/pages/requests/index.tsx`:
- Around line 759-780: The comparator in MemoLogRow redundantly checks both
prevPending and nextPending even though earlier the comparator already returned
false if prev.request !== next.request, guaranteeing prev.request ===
next.request and therefore prevPending === nextPending; simplify the branch by
replacing the condition "if (prevPending || nextPending)" with "if
(prevPending)" and keep the existing early-return behavior that compares
prev.nowMs === next.nowMs to decide re-rendering for pending rows (variables:
MemoLogRow, prev.request/next.request, prevPending, nextPending,
prev.nowMs/next.nowMs).
- Line 792: The handleClick inside MobileRequestCard is unnecessarily wrapped
with useCallback—remove the useCallback wrapper and replace the handler with an
inline arrow function (e.g., onClick={() => onOpenRequest(request.id)}) like
LogRow does; keep referencing onOpenRequest and request.id directly and delete
the handleClick constant to eliminate unneeded dependency tracking and simplify
the component.

@SurviveM SurviveM merged commit 4d29e90 into awsl-project:main Feb 22, 2026
2 checks passed