## Problem
The current `send_log` table creates one record per email sent, used only for rate limiting. This causes:
- Unbounded growth: 1M emails/day = 1M records/day
- No automatic purge: records accumulate indefinitely
- Inefficient queries: counting records instead of reading a value
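For illustration, the count-per-check pattern looks roughly like this (the schema and column names below are hypothetical; the real `send_log` table may differ):

```python
import sqlite3

# Hypothetical send_log schema: one row per email sent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE send_log (account_id TEXT, sent_at INTEGER)")
conn.executemany(
    "INSERT INTO send_log VALUES (?, ?)",
    [("acct-1", t) for t in range(1000)],
)

# Every rate-limit check scans and counts rows in the window,
# instead of reading a single pre-aggregated counter value.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM send_log WHERE account_id = ? AND sent_at >= ?",
    ("acct-1", 400),
).fetchone()
print(count)  # 600
```

The cost of this query grows with the number of rows in the window, and the rows themselves exist only to be counted.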
## Proposed Solution
An in-memory rate limiter: no database involvement at all for rate limiting.
```python
import time

class RateLimiter:
    """Counters in memory, zero DB."""

    def __init__(self):
        # {account_id: {minute_ts: count, hour_ts: count, day_ts: count}}
        self._counters: dict[str, dict[str, int]] = {}

    def check_and_increment(self, account_id: str, limits: dict) -> bool:
        """Check limits and increment. Returns True if allowed."""
        now = int(time.time())
        # ... rolling counters logic
```

## Benefits
| Aspect | DB | In-Memory |
|---|---|---|
| Latency | ~1ms query | ~1μs dict lookup |
| I/O | Write every email | Zero I/O |
| Complexity | Schema, migrations | One dict |
| Crash recovery | Keeps counters | Reset to zero |
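The elided counter logic could be filled in along these lines. This is a minimal sketch using fixed-window buckets keyed by the current period; the window names, bucket-key format, and `limits` shape are assumptions, not the final design:

```python
import time

class RateLimiter:
    """In-memory rate limiter sketch: fixed-window counters, zero DB.

    Window names and the limits-dict shape are assumptions for
    illustration (e.g. limits={"minute": 10, "hour": 100, "day": 500}).
    """

    WINDOWS = {"minute": 60, "hour": 3600, "day": 86400}

    def __init__(self):
        # {account_id: {"minute:28472193": 3, "hour:474536": 41, ...}}
        self._counters: dict[str, dict[str, int]] = {}

    def check_and_increment(self, account_id: str, limits: dict) -> bool:
        """Check all window limits and increment. Returns True if allowed."""
        now = int(time.time())
        buckets = self._counters.setdefault(account_id, {})
        # Bucket key encodes the current period for each window.
        keys = {w: f"{w}:{now // secs}" for w, secs in self.WINDOWS.items()}
        # Drop buckets from past periods so memory stays bounded.
        for stale in set(buckets) - set(keys.values()):
            del buckets[stale]
        # Check every window before incrementing any, so a rejected
        # send leaves all counters untouched.
        for window, key in keys.items():
            if buckets.get(key, 0) >= limits.get(window, float("inf")):
                return False
        for key in keys.values():
            buckets[key] = buckets.get(key, 0) + 1
        return True
```

Usage: `RateLimiter().check_and_increment("acct-1", {"day": 500})`. Checking all windows before incrementing any keeps the counters consistent when a send is rejected; a true rolling window would need per-second sub-buckets, which may or may not be worth the extra bookkeeping here.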
## Crash Recovery
If the service restarts, counters reset to zero. This is:
- Fail-open: the worst case is a brief over-allowance, since accounts can send again immediately after a restart
- Self-healing: limits are per minute/hour/day, so counters realign within one window
- Same as Redis with TTL keys: expired counters disappear there too
## Tasks

- [ ] Create `RateLimiter` class with in-memory counters
- [ ] Modify the rate limit check to use `RateLimiter` instead of `send_log`
- [ ] Remove the `send_log` table and the `entities/send_log/` directory
- [ ] Update tests