Conversation
Force-pushed from 22d76f0 to c1ed23c
What do you think @mpscholten?
Can you give some more details on why this feature is needed? I've never seen a project where auto refresh was the bottleneck.
Well, I didn't find the bottleneck, but the current autorefresh seems not scalable. I did a project with ihp-openai to generate CVs, streaming the CV generation live with autorefresh (the ihp-openai docs advise using autorefresh + a job). So if I have 1000 users connected to this page (doing nothing), then for each write of another user's OpenAI job it will refresh the page server side for every connected user, which would do at least 10,000 SQL reads per second (if the route contains only 1 query)? And if the 1000 users each do one OpenAI request at the same time, it would even do 1000 * 1000 * 10 = 10M SQL requests per second! With autoRefreshWith, all these autorefresh reads are suppressed, as it refreshes only when shouldRefresh returns true. So we replace 10M SQL requests with 1000 shouldRefresh checks. Did I misunderstand something?
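The arithmetic in that comment can be sanity-checked with a tiny sketch. Everything below is illustrative back-of-the-envelope math, not IHP code; the names are made up for this example.

```haskell
-- Back-of-the-envelope check of the numbers in the comment above
-- (1000 connected users, a job writing ~10 times per request).
-- Purely illustrative arithmetic, not IHP code.
users, writesPerJob :: Int
users = 1000
writesPerJob = 10

-- Plain autoRefresh: every write re-renders the page server side for
-- every connected user, and each render runs at least one SQL query.
queriesWithAutoRefresh :: Int
queriesWithAutoRefresh = users * users * writesPerJob  -- 10,000,000

-- autoRefreshWith: each write only evaluates the shouldRefresh
-- predicate per connected user; no SQL unless it returns True.
predicateChecksPerWrite :: Int
predicateChecksPerWrite = users
```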
Makes sense. The new interface just feels a bit complex. We basically solved the same problems in DataSync already; likely we could just copy some of the ideas and transfer them to auto refresh. The ideal interface would just be like today's. In DataSync we do this by keeping track of all ids of all objects; then for an UPDATE or DELETE we can check the id against that set. Here's some thoughts from claude:
Would be nice if it were automatic, indeed. So DataSync still has the scaling issue in case of inserts. And in DataSync we know exactly the requests performed; autorefresh currently only tracks tables. Is it possible to track the query? And isn't it even impossible if pgquery is used in the route? Whereas with autoRefreshWith I can add a user_id on all records, create a helper, and use that helper on all autorefresh calls, for example. Maybe the doc examples are too complex? It is true the notification table is duplicated, though.
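The "shared helper" idea from that comment could look something like the sketch below, assuming autoRefreshWith accepts a predicate over the notification payload. The payload shape and the helper name are assumptions for illustration, not IHP's actual API.

```haskell
-- Hypothetical sketch of a reusable shouldRefresh helper; the Payload
-- type and refreshWhenOwnedBy are made-up names, not IHP's API.
type Payload = [(String, String)]  -- changed row's columns, as text

-- One helper reused by every autoRefreshWith call: only refresh when
-- the changed row carries the current user's id.
refreshWhenOwnedBy :: String -> Payload -> Bool
refreshWhenOwnedBy currentUserId payload =
    lookup "user_id" payload == Just currentUserId
```

With a user_id column on all records, every action can pass the same predicate, so the per-notification cost is a list lookup instead of an SQL query.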
Force-pushed from a2282e4 to f5e2d04
What do you think @mpscholten?
Seems other reactive systems don't fully solve the scaling issue automatically either. Maybe doing what DataSync does is enough, as the SQL query to check is very cheap? For example, for this scenario it would only do ~1000 * 10 queries, so it still scales linearly with the number of users, while having an acceptable 100 ms latency.
I think we need to extend the
Force-pushed from f5e2d04 to 547af72
Make autoRefresh smart by tracking fetched row IDs and filtering notifications in Haskell — no SQL at notification time, zero API changes.

On notification:
- UPDATE/DELETE: extract row ID from payload JSON, skip if not in tracked set
- INSERT: conservative refresh (new row, can't check without filter values)
- Tables without ID tracking (raw SQL, fetchCount): always refresh

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
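The filtering rules in that commit message can be sketched as follows. The `Notification`, `TrackedIds`, and `shouldRefreshForPayload` names and shapes here are illustrative assumptions, not IHP's actual types.

```haskell
-- Hedged sketch of ID-based notification filtering, per the commit
-- message above; types and names are illustrative, not IHP's real API.
data Operation = Insert | Update | Delete deriving (Eq, Show)

data Notification = Notification
    { notifTable :: String
    , notifOp    :: Operation
    , notifRowId :: Maybe String  -- id extracted from the payload JSON
    }

-- Row ids seen while rendering, per table. Tables fetched via raw SQL
-- or fetchCount have no entry.
type TrackedIds = [(String, [String])]

shouldRefreshForPayload :: TrackedIds -> Notification -> Bool
shouldRefreshForPayload tracked notification =
    case lookup (notifTable notification) tracked of
        -- No id tracking for this table (raw SQL, fetchCount): refresh.
        Nothing -> True
        Just ids -> case notifOp notification of
            -- New row: can't check it without the query's filter
            -- values, so refresh conservatively.
            Insert -> True
            -- UPDATE/DELETE: skip unless the row id was rendered.
            _ -> case notifRowId notification of
                Just rowId -> rowId `elem` ids
                Nothing    -> True  -- payload without id: fail open
```

Note the conservative defaults: whenever the row cannot be identified, the sketch falls back to refreshing, so filtering can only suppress provably irrelevant re-renders.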
autoRefresh now always uses ID-based smart filtering, making the separate autoRefreshWith API unnecessary. This also resolves the subscription conflict where registerSmartNotificationTrigger and registerRowNotificationTrigger shared subscribedRowTables.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from a95265d to 0b67156
Force-pushed from a6c2453 to 535891b
Track WHERE conditions alongside row IDs during fetch. On INSERT notifications, evaluate the new row against the query's conditions using the hasql encoder printer. Rows that don't match the filters (e.g. different projectId) skip the re-render.

- Add getParamPrinterText to extract text values from Encoders.Params
- Track conditions via ModelContext callback and Dynamic wrapper
- Evaluate ColumnCondition (EqOp, IsOp, InOp), And/Or trees
- Safe fallback to refresh for unsupported operators (LIKE, etc.)
- 16 new tests for matchesInsertPayload and shouldRefreshForPayload

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
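A minimal sketch of evaluating an INSERT payload against tracked WHERE conditions, following the description in that commit message. The `Condition` constructors, `Payload` type, and both function bodies below are assumptions for illustration; they mirror the commit's EqOp/InOp/And/Or idea but are not IHP's real implementation.

```haskell
-- Illustrative condition evaluation for INSERT notifications; all
-- names here are hypothetical, not IHP's actual types.
data Condition
    = ColumnEq String String    -- column = value   (like EqOp)
    | ColumnIn String [String]  -- column IN (...)  (like InOp)
    | And Condition Condition
    | Or  Condition Condition
    | Unsupported               -- e.g. LIKE: not evaluated here

type Payload = [(String, String)]  -- new row's columns, as text

-- Nothing means "can't decide": caller must fall back to refreshing.
matchesInsertPayload :: Condition -> Payload -> Maybe Bool
matchesInsertPayload cond row = case cond of
    ColumnEq col value  -> (== value) <$> lookup col row
    ColumnIn col values -> (`elem` values) <$> lookup col row
    And a b -> (&&) <$> matchesInsertPayload a row <*> matchesInsertPayload b row
    Or  a b -> (||) <$> matchesInsertPayload a row <*> matchesInsertPayload b row
    Unsupported -> Nothing

-- Refresh unless the row provably fails the query's filters.
insertNeedsRefresh :: Condition -> Payload -> Bool
insertNeedsRefresh cond row = case matchesInsertPayload cond row of
    Just False -> False  -- e.g. different projectId: skip the re-render
    _          -> True   -- matches, or undecidable: refresh
```

Using `Maybe Bool` keeps the fallback honest: any unsupported operator or missing column propagates `Nothing` through the And/Or tree, and the subscriber refreshes as before.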
Force-pushed from 535891b to 8c7cbfb
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>