Known major limitations

  • There is no way at the moment to drop in-progress jobs when the connection is closed during processing. For instance, if a client initiates a heavy COUNT event and disconnects immediately, there is no way to stop execution of the job, so resources are simply wasted. (A rough sketch of what a cooperative cancellation check could look like follows this list.)
  • Every connection gets its own connection_id and there is currently no way to reconnect with the same id (however, there is no obvious use case for it either, apart from recovering some lost responses).
  • There is no acknowledgement mechanism, since Redis PUBSUB is used rather than STREAMS, and it is not yet clear whether changing this in the future makes sense (see the PUBSUB vs. STREAMS sketch after this list).
  • Responses for each client connection are handled over Redis PUBSUB, which means each connection requires a dedicated Redis connection that cannot be shared through connection pooling. As a result, redis_connections_count = puma_pool + sidekiq_pool + subscribers_count. In theory this is not a problem, since Redis can handle tens of thousands of connections and can be scaled horizontally, but some providers cap the connection count, and deployments at scale should account for that. In the future it may be possible to optimize this down to one Redis connection per Puma cluster, with Puma handling its own set of WebSocket connections and routing Redis Pub/Sub messages to the appropriate clients, but this is not a priority for now. (A sketch of why a subscriber monopolizes its Redis connection follows this list.)
  • Nothing has been implemented yet in terms of TOR deployment
  • The WebSocket server runs on top of the Puma application server. Puma itself has a phased ("hot") restart feature for true zero-downtime deployments (TODO: verify how it behaves with WebSockets). However, most modern deployment methods (Kubernetes, Docker, App Platform, Heroku) will "kill" the running instance in one way or another and will not wait for each WebSocket connection to disconnect, so clients should expect that disconnects are possible during deployments. (A minimal Puma configuration sketch for phased restarts follows this list.)
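
Below is a rough, hypothetical sketch of the cooperative cancellation that the first limitation is missing. None of the class, job, or key names exist in the project; it only assumes a Sidekiq-style worker and the redis gem, and shows the kind of periodic check a long-running job would need in order to stop when its client disconnects.

```ruby
# Hypothetical sketch only: HeavyCountJob and the "cancelled:*" key are invented
# names, not part of the project. The idea is cooperative cancellation: the job
# checks a flag in Redis between chunks of work and aborts when it appears.
require "sidekiq"
require "redis"

class HeavyCountJob
  include Sidekiq::Worker

  def perform(connection_id, collection_size)
    redis = Redis.new

    processed = 0
    while processed < collection_size
      # Abort as soon as the flag set by the disconnect handler appears
      break if redis.exists?("cancelled:#{connection_id}")

      process_chunk(processed) # placeholder for the real heavy work
      processed += 1_000
    end
  ensure
    redis&.close
  end

  private

  def process_chunk(offset)
    # heavy work for one slice of the collection would go here
  end
end

# A disconnect handler would set the flag (again, hypothetical key name):
#   redis.set("cancelled:#{connection_id}", "1", ex: 3600)
```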
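
To make the PUBSUB vs. STREAMS trade-off concrete, here is a minimal sketch using the plain redis gem. The channel, stream, group, and payload names are invented for illustration: PUBLISH is fire-and-forget with no acknowledgement, while a stream read through a consumer group is explicitly acknowledged with XACK and can be redelivered.

```ruby
# Illustrative sketch; names are hypothetical and not part of the project.
require "redis"

redis = Redis.new

# PUBSUB (current approach): fire-and-forget. If no subscriber is listening at
# the moment of PUBLISH, the message is lost and nothing is acknowledged.
redis.publish("responses:abc123", '{"event":"COUNT","result":42}')

# STREAMS (possible alternative): messages are persisted, read via a consumer
# group, and explicitly acknowledged, so delivery can be tracked and retried.
begin
  redis.xgroup(:create, "responses_stream", "subscribers", "$", mkstream: true)
rescue Redis::CommandError
  # consumer group already exists
end

redis.xadd("responses_stream", { "payload" => '{"event":"COUNT","result":42}' })

entries = redis.xreadgroup("subscribers", "consumer-1", "responses_stream", ">", count: 10)
entries.fetch("responses_stream", []).each do |id, _fields|
  # ... deliver the payload to the websocket client here ...
  redis.xack("responses_stream", "subscribers", id)
end
```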
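
The sketch below illustrates why each subscriber needs its own Redis connection. With the redis gem, subscribe blocks the connection it runs on until the subscription ends, so that connection cannot be returned to a pool; the channel name and the connection counts are purely illustrative.

```ruby
# Illustrative sketch; channel name and numbers are hypothetical.
require "redis"

subscriber = Redis.new # this connection is now dedicated to SUBSCRIBE

Thread.new do
  subscriber.subscribe("responses:abc123") do |on|
    on.message do |_channel, message|
      # route the message to the right websocket client here
      puts message
    end
  end
  # subscribe only returns after `unsubscribe`, so `subscriber` cannot be used
  # for any other command (or shared through a pool) in the meantime
end

# Rough connection budget from the limitation above (illustrative numbers):
puma_pool         = 16   # e.g. Puma threads across workers
sidekiq_pool      = 10   # Sidekiq concurrency
subscribers_count = 500  # one per connected websocket client
redis_connections_count = puma_pool + sidekiq_pool + subscribers_count
# => 526 Redis connections for this hypothetical deployment
```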
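
For reference, a minimal config/puma.rb sketch of the phased-restart setup mentioned above. This is not the project's actual configuration, just Puma's standard clustered-mode DSL; phased restarts require cluster mode (workers > 0) and are incompatible with preload_app!.

```ruby
# Minimal, hypothetical config/puma.rb sketch for phased (hot) restarts.
workers 2
threads 5, 5

port ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "production")

# preload_app!  # intentionally disabled: phased restarts cannot preload the app
```

A phased restart is triggered with `bundle exec pumactl phased-restart` and cycles workers one at a time, but long-lived WebSocket connections held by a worker that is being cycled may still be dropped, which is part of what the TODO above would need to verify.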