Redis cache #623
Replies: 2 comments 8 replies
-
hi @YannickLecroart21 you need to use the work queue mode to support bulk operations via redis-cache. Basically, in this configuration you will have two roles:
- api: gets whatever comes from orion and stashes it in redis
- worker: picks payloads up from redis and inserts them into the db
@c0c0n3 knows more, but i don't think we are optimizing the sql, i.e. grouping together messages and doing a single sql insert. Consider that when you do inserts of batches (which are supported if the message from orion contains multiple entities), the whole lot should be inserted in one go.
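To make the two roles concrete, here's a minimal sketch of the flow (made-up function names; a plain in-memory deque stands in for the Redis list, where a real deployment would use RPUSH/BLPOP against a Redis server):

```python
import json
from collections import deque

# Stand-in for the Redis-backed work queue.
queue = deque()

def api_front_end(notification: dict) -> None:
    """Role 1: stash whatever comes from orion in redis, untouched."""
    queue.append(json.dumps(notification))

def queue_worker(insert) -> None:
    """Role 2: pick up notification payloads and hand the contained
    entities to the DB insert routine."""
    while queue:
        payload = json.loads(queue.popleft())
        insert(payload["data"])  # the NGSI entities in the notification

# Example: one notification carrying two entities of the same type.
api_front_end({"data": [
    {"id": "Room1", "type": "Room", "temperature": {"value": 21}},
    {"id": "Room2", "type": "Room", "temperature": {"value": 23}},
]})
inserted = []
queue_worker(inserted.extend)
print(len(inserted))  # 2 -- both entities reach the DB layer
```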
-
Hi @YannickLecroart21, nice meeting you!
QuantumLeap uses Redis to speed up data ingestion. Every time an NGSI notification comes in from Context Broker, QuantumLeap needs to look up some data in the DB before it can insert the NGSI entities in the notification payload. To avoid multiple DB queries for each entity to insert, QuantumLeap runs the queries once and then caches the result sets in Redis. That's the Redis cache. But like @chicco785 explained, QuantumLeap also uses Redis as a work queue back end---technically QL uses Redis as a distributed data store rather than a cache in this case.
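That "query once, then serve repeats from the cache" idea is the classic cache-aside pattern. A minimal sketch (hypothetical names; a plain dict stands in for Redis, and QuantumLeap's actual metadata queries differ):

```python
# Cache-aside lookup: hit the DB only on a cache miss, then stash the
# result set so later entities of the same type skip the round trip.
cache: dict = {}
db_queries_run = 0

def lookup_table_metadata(entity_type: str) -> dict:
    global db_queries_run
    if entity_type in cache:            # cache hit: no DB query
        return cache[entity_type]
    db_queries_run += 1                 # cache miss: query the DB once...
    result = {"table": f"et{entity_type.lower()}"}  # pretend result set
    cache[entity_type] = result         # ...and cache the result set
    return result

for _ in range(3):                      # three entities of the same type
    lookup_table_metadata("Room")
print(db_queries_run)  # 1 -- only the first lookup hits the DB
```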
TL;DR Not directly. But if you can control what data gets sent to QuantumLeap's …
If a notification payload contains multiple entities, the whole lot should be inserted in one go, i.e. QuantumLeap should issue a SQL statement like INSERT INTO T (entity_id, ...) VALUES (1, ..), (2, ..), (3, ..), ... Keep in mind though that if you have lots of data to insert in one go, this probably isn't what you're looking for. For starters, QuantumLeap doesn't use prepared statements at the moment, and it doesn't support streaming either, so the entire dataset would have to be loaded in memory---see #193 about it. Also, Crate offers better options when it comes to bulk imports. Read on if you'd like to know a bit more about what's going on under the bonnet.
Relationship b/w Work Queue & Redis cache
Like @chicco785 said, you could configure the QuantumLeap work queue. In this deployment scenario, when the QuantumLeap API front end receives a notification from Context Broker, it puts that notification payload on an insert queue in Redis---which boils down to a hashtable with items to insert. Back-end QuantumLeap queue workers pick up notification payloads from the queue and insert them into the configured database backend---Crate DB in your case. As is the case for vanilla QuantumLeap (i.e. no work queue), queue workers still use the Redis DB query cache (as explained earlier) when inserting NGSI entities into the database.
Relationship b/w notification payload & DB inserts
If there are multiple NGSI entities in a notification payload, queue workers group them by entity type and then insert each group with a single SQL statement---exactly the same behaviour as vanilla QuantumLeap. Well, things are slightly more complicated than that: if a group is too large, QuantumLeap will issue multiple inserts---see #450 for the gory details :-) But assuming each group fits in a single insert, schematically this is what would happen.
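A minimal sketch of that group-by-type, one-INSERT-per-group behaviour (illustrative only: the table/column names are made up, and the real code also splits oversized groups as per #450):

```python
from itertools import groupby

def batch_inserts(entities):
    """Group NGSI entities by type and build one multi-row INSERT
    statement per group (hypothetical SQL, not QuantumLeap's actual
    schema)."""
    stmts = []
    key = lambda e: e["type"]
    for etype, group in groupby(sorted(entities, key=key), key=key):
        rows = [(e["id"], e["temperature"]["value"]) for e in group]
        placeholders = ", ".join(["(?, ?)"] * len(rows))
        sql = (f'INSERT INTO "et{etype.lower()}" (entity_id, temperature) '
               f"VALUES {placeholders}")
        params = [v for row in rows for v in row]  # flatten row values
        stmts.append((sql, params))
    return stmts

stmts = batch_inserts([
    {"id": "1", "type": "Room", "temperature": {"value": 21}},
    {"id": "2", "type": "Room", "temperature": {"value": 23}},
])
print(stmts[0][0])
# INSERT INTO "etroom" (entity_id, temperature) VALUES (?, ?), (?, ?)
```

Two entities of the same type end up in a single statement, so the DB sees one round trip per entity type rather than one per entity.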
Hope that helps!
-
Hi,
First of all, I would like to thank you for all the effort you have put into this FIWARE component. Would it be possible to give us more details on how
the Redis cache works when inserting new data from an Orion subscription? I am wondering whether QuantumLeap can perform bulk operations on CrateDB via the Redis cache.
Thanks in advance.
Regards,