
Conversation

@qaz741wsd856

Summary

This PR changes the import endpoint to use Cloudflare D1's batch API to insert folders and ciphers in bulk instead of performing individual insert operations per record. The change reduces network round-trips and the number of D1 database operations performed per Worker request.

Motivation

  • The Cloudflare Workers free plan enforces a limit of 50 D1 database operations per Worker request, and each individual D1 execution counts toward this limit. See Limits.
  • D1's batch API executes multiple prepared statements in a single call; in practice, one batch call counts as a single D1 operation while executing many statements server-side. Batching the inserts therefore drastically reduces the number of D1 operations and keeps the import under the per-request cap; it also cuts network round-trips, which improves import performance (see the sketch after this list).
  • Although you might expect Worker↔D1 communication over Cloudflare's internal network to be extremely fast, each D1 operation still incurs non-trivial latency: in my testing, the original per-item implementation took approximately 30 seconds to import a vault with 600+ records. (I am not sure why that import did not trigger the per-request D1 operation cap.)
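For concreteness, here is a minimal sketch of the batching pattern using the worker crate's Rust bindings for D1 (D1Database::prepare / bind / batch). The folders table and its columns are illustrative assumptions, not the project's actual schema; the real statements live in src/handlers/import.rs.

```rust
use worker::*;

// Minimal sketch, assuming the worker crate's D1 bindings. The table
// and column names here are illustrative, not the project's schema.
async fn insert_folders_batched(db: &D1Database, folders: &[(String, String)]) -> Result<()> {
    // Prepare one statement per folder; nothing is sent to D1 yet.
    let stmts = folders
        .iter()
        .map(|(id, name)| {
            db.prepare("INSERT INTO folders (id, name) VALUES (?1, ?2)")
                .bind(&[id.as_str().into(), name.as_str().into()])
        })
        .collect::<Result<Vec<D1PreparedStatement>>>()?;

    // A single batch() call sends every statement in one request and
    // executes them server-side, counting as one D1 operation instead
    // of one per insert.
    db.batch(stmts).await?;
    Ok(())
}
```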

What changed

  • src/handlers/import.rs now prepares insert statements for folders and ciphers and executes them via D1Database::batch() instead of running each insert individually.
  • Folder inserts are executed before cipher inserts so that folder rows exist before the ciphers that reference them, preserving the relationship between the two.
  • A new environment variable, IMPORT_BATCH_SIZE, controls how many statements are grouped into each batch call (see the sketch after this list).
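The following is a hypothetical sketch of how IMPORT_BATCH_SIZE could bound the size of each batch() call. The variable name comes from this PR, but the parsing, the default of 50, and the chunking logic below are assumptions for illustration, not code lifted from the change.

```rust
use worker::*;

// Hypothetical sketch of IMPORT_BATCH_SIZE handling; only the variable
// name comes from the PR, the rest is assumed for illustration.
async fn run_in_batches(
    env: &Env,
    db: &D1Database,
    stmts: Vec<D1PreparedStatement>,
) -> Result<()> {
    // Fall back to an assumed default when the variable is unset,
    // unparsable, or zero.
    let size: usize = env
        .var("IMPORT_BATCH_SIZE")
        .ok()
        .and_then(|v| v.to_string().parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(50);

    // Consume the statements in chunks of `size`, issuing one D1
    // operation per chunk instead of one per statement.
    let mut iter = stmts.into_iter().peekable();
    while iter.peek().is_some() {
        let chunk: Vec<_> = iter.by_ref().take(size).collect();
        db.batch(chunk).await?;
    }
    Ok(())
}
```

Exposing the chunk size as an environment variable lets operators trade the number of D1 operations against the size of each batch without recompiling the Worker.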

hotianbexuanto added a commit to hotianbexuanto/warden-worker that referenced this pull request Dec 23, 2025