
feat: add performance optimizations for high-concurrency workloads#22

Merged
chenyanchen merged 2 commits into main from feat/shared-cache-kv on Jan 15, 2026

Conversation

@chenyanchen (Owner)

  • Add sharded KV implementation (cachekv.NewSharded) that reduces lock contention by partitioning keys across multiple rwMutexKV instances, achieving 4.7-6x faster reads and 2.8-3.2x faster writes under parallel access
  • Add WithWriteThrough option to layerkv.New for write-through caching strategy, improving Set+Get patterns by 1.6x
  • Fix triple allocation in layerkv batch.Set by replacing maps.Keys + slices.Collect with direct key iteration, reducing allocations by 80-91%
  • Add comprehensive benchmarks for cachekv and layerkv packages
  • Update README with documentation, benchmarks, and usage examples
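The sharded-KV idea in the first bullet can be illustrated with a minimal sketch. This is not the cachekv.NewSharded implementation itself — the `shardedKV` type, its field names, and the FNV hash choice are assumptions for illustration — but it shows the core technique: hash each key to one of N independently locked maps, so goroutines touching different shards never contend on the same mutex.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// shard is one partition: its own RWMutex guarding its own map.
type shard struct {
	mu sync.RWMutex
	m  map[string]string
}

// shardedKV spreads keys across shards so that concurrent reads and
// writes of keys in different shards proceed without lock contention.
type shardedKV struct {
	shards []*shard
}

func newShardedKV(n int) *shardedKV {
	s := &shardedKV{shards: make([]*shard, n)}
	for i := range s.shards {
		s.shards[i] = &shard{m: make(map[string]string)}
	}
	return s
}

// shardFor hashes the key to select a shard deterministically.
func (s *shardedKV) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *shardedKV) Get(key string) (string, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	defer sh.mu.RUnlock()
	v, ok := sh.m[key]
	return v, ok
}

func (s *shardedKV) Set(key, value string) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	defer sh.mu.Unlock()
	sh.m[key] = value
}

func main() {
	kv := newShardedKV(8)
	kv.Set("user:1", "alice")
	v, ok := kv.Get("user:1")
	fmt.Println(v, ok)
}
```

Because readers take only a per-shard RLock, parallel reads of different shards scale with the shard count, which is where the reported 4.7-6x parallel read speedup would come from.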
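The write-through option in the second bullet can be sketched as follows. This is a simplified model, not the layerkv.New/WithWriteThrough API: the `layeredKV` type and plain maps standing in for the cache and backing store are assumptions. The point is the strategy itself — on Set, populate the fast cache layer as well as the store, so the next Get hits the cache instead of missing and refilling.

```go
package main

import "fmt"

// layeredKV is a hypothetical two-layer store: a fast cache in front of
// a slower backing store (both modeled as plain maps here).
type layeredKV struct {
	cache        map[string]string
	store        map[string]string
	writeThrough bool
}

func (l *layeredKV) Set(key, value string) {
	l.store[key] = value
	if l.writeThrough {
		// Write-through: populate the cache on write, not only on a
		// later read miss, so Set-then-Get patterns skip the slow layer.
		l.cache[key] = value
	}
}

func (l *layeredKV) Get(key string) (string, bool) {
	if v, ok := l.cache[key]; ok {
		return v, true // cache hit
	}
	v, ok := l.store[key]
	if ok {
		l.cache[key] = v // fill cache on miss
	}
	return v, ok
}

func main() {
	l := &layeredKV{
		cache:        map[string]string{},
		store:        map[string]string{},
		writeThrough: true,
	}
	l.Set("user:1", "alice")
	v, _ := l.Get("user:1")
	fmt.Println(v)
}
```

Without write-through, the first Get after a Set pays a cache miss plus a store read plus a cache fill; write-through trades a little extra work on Set for cheaper subsequent reads, consistent with the 1.6x Set+Get improvement cited above.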
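The allocation fix in the third bullet follows a common Go pattern. `slices.Collect(maps.Keys(m))` allocates an iterator and grows the result slice repeatedly; ranging over the map directly into a slice preallocated with `make([]K, 0, len(m))` does a single right-sized allocation. The `collectKeys` helper below is a hypothetical stand-in for the batch.Set change:

```go
package main

import "fmt"

// collectKeys gathers map keys with one right-sized allocation, instead
// of slices.Collect(maps.Keys(m)), which allocates an iterator and
// regrows the slice as it appends.
func collectKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m)) // single allocation, exact capacity
	for k := range m {
		keys = append(keys, k)
	}
	return keys
}

func main() {
	m := map[string]int{"a": 1, "b": 2, "c": 3}
	fmt.Println(len(collectKeys(m)))
}
```

In a hot batch path, where the keys are consumed immediately, iterating the map directly can often avoid the intermediate slice entirely, which is likely where the 80-91% allocation reduction comes from.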


Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@chenyanchen chenyanchen self-assigned this Jan 15, 2026
@chenyanchen chenyanchen merged commit cfcc983 into main Jan 15, 2026
1 check passed
@chenyanchen chenyanchen deleted the feat/shared-cache-kv branch January 15, 2026 12:32
