How to change publish's query selector and keep already inserted Minimongo docs? #1
First, why the nested autorun?
Why don't you then make two queries: one based on the original query, and one based on the new selector? What exactly are you trying to accomplish?
Hmm, the inner autorun was there only to scope some tests; I will remove it, thanks.
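For reference, a rough sketch of a publish function with a single server-side autorun, assuming the `this.autorun` publish API from mitar's reactive-publish package; the collection and field names are made up for illustration:

```js
// Illustrative collection; a real app would define this elsewhere.
var Documents = new Mongo.Collection('documents');

// Server: one autorun per publish function is typically enough. Any reactive
// value read inside the autorun reruns just this block and republishes the
// returned cursor; a nested autorun is not needed for that.
Meteor.publish('documents', function (limit) {
  this.autorun(function (computation) {
    return Documents.find({}, {sort: {createdAt: -1}, limit: limit});
  });
});
```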
@mitar, I researched meteor-easy-search as an easier solution for my problem, but it seems you spotted the same advantage that your packages can bring: sending only new documents to the client and avoiding restarts, as discussed here: matteodem/meteor-easy-search#405 (comment). The only missing piece is changing the query selector besides the limit. But I understand the default Meteor publish behavior is to make Minimongo reflect the publish cursor's results, so maybe this falls outside this package's intentions.
I think what you want, then, is to use multiple subscriptions instead. You do not just want to switch to different documents, you also want to keep the old ones around. So just open two subscriptions, for whatever you call "old" and "new".
You mean creating a new subscription from the client every time the filter controls are changed? Doesn't that have performance implications for a large number of subscriptions?
How many previous states do you want to keep on the client? You want to complicate things by keeping previous state around so that you do not have to reload documents. Personally, I think you are probably optimizing this prematurely. But if you want, decide how many of those previous states you need and keep them around. If you just want to be able to go back to the previous one, then keep only the previous one, so only one step back.

The reason you have to keep the earlier subscription around is that documents may have changed between the user entering filter mode and going back to no-filter mode. If you just "cached" old documents on the client side, you would lose any changes made after that. So you have to keep the subscription if you want to go back. In fact, Meteor already makes sure that only new documents are sent over, so you can have two subscriptions at the same time. It may have performance implications on the server side, but I care only about the client side: the server side I can scale horizontally as necessary, and the client side is what matters to clients.

What this package solves is the problem that, between unsubscribing and resubscribing, documents are sent again: the flicker which happens when documents are removed and then new ones with a larger limit are returned. That is per one subscription. If you have multiple subscriptions, then fewer documents might be resent if there is overlap. BTW, if you have multiple subscriptions over the same collection, you might want to use this package to know which documents are coming from where, so you can tell which documents come from the filtered subscription and which from the unfiltered one.

So I would suggest that you use this package for limit changes. For filtering, you just resubscribe. You can keep the previous subscription around to be able to quickly go back, if you want that; see the sketch below.
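One way to sketch that suggestion on the client, using plain `Meteor.subscribe` handles and keeping only one previous subscription alive; the publication name and its selector argument are illustrative, not part of any package:

```js
// Client: keep at most one previous subscription alive so switching back to
// it does not force the server to resend its documents.
var currentSub = Meteor.subscribe('documents', {});  // initial, unfiltered
var previousSub = null;

function applyFilter(selector) {
  // Keep only one step of history: stop anything older than that.
  if (previousSub) {
    previousSub.stop();
  }
  previousSub = currentSub;
  // A real publish function should validate or whitelist whatever
  // selector the client sends.
  currentSub = Meteor.subscribe('documents', selector);
}

function clearFilter() {
  // Switch back to the still-live previous subscription.
  if (previousSub) {
    currentSub.stop();
    currentSub = previousSub;
    previousSub = null;
  }
}
```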
Thanks for the enlightenment, Mitar! These packages really open up fine-grained control over Meteor's mechanics. I'm evaluating the approach you proposed, combining these packages, as well as Arunoda's search-source, which is very fast because it uses Meteor Methods instead of subscriptions and does client-side caching, but unfortunately isn't reactive to database changes.
Do you know of some way I can mark docs as already sent in the mergebox, so I can fetch them in batches via Meteor Methods?
What exactly are you trying to optimize?
The initial load of data and subsequent requests for a large number of docs, like pagination. If I request more than 100 docs, processing the DDP messages sent from the server, receiving them on the client, and inserting them into Minimongo one by one is much slower than requesting all the data at once with a method. Just the serialization on the server and the parsing on the client, message by message, already makes a big difference in speed, and the bigger the batch size, the bigger the difference.
And that is without even considering the observer triggers and view rendering.
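As a sketch of the method-based batch load being compared here, using standard Meteor methods and a client-only local collection; the method name and fields are made up, and it reuses the illustrative `Documents` collection from the earlier sketch:

```js
// Server: return a whole page of documents from one method call, so the
// client receives a single DDP result message instead of one `added`
// message per document.
Meteor.methods({
  'documents.fetchPage': function (skip, limit) {
    check(skip, Number);
    check(limit, Number);
    return Documents.find({}, {sort: {createdAt: -1}, skip: skip, limit: limit}).fetch();
  }
});

// Client: keep the batch in a local (unnamed) collection so it does not get
// mixed up with documents arriving through subscriptions.
var LocalDocuments = new Mongo.Collection(null);

Meteor.call('documents.fetchPage', 0, 100, function (error, docs) {
  if (error) {
    console.error(error);
    return;
  }
  docs.forEach(function (doc) {
    if (!LocalDocuments.findOne(doc._id)) {
      LocalDocuments.insert(doc);
    }
  });
});
```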
Ah, this problem. Sorry, I am already using this branch to address it; try it out. :-) I forget about old not-yet-merged issues Meteor has that I have already solved. :-)
Thanks Mitar, I'm following the PR discussion closely before trying this.
I'm trying this package and it has been very useful :)
@mitar, do you have some idea how I can change the query selector in a publication with .setData, but preserve the other docs from that subscription that are already in Minimongo? Perhaps there is a way to use it along with SubsManager.
I have something like that now:
Appreciate any help.
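A minimal sketch of the kind of data-driven publication being described, assuming the client-side `setData(...)` mentioned above together with a reactive `this.data(...)` reader and `this.autorun` inside the publish function; the reader name, the data keys, and the `Documents` collection are assumptions for illustration, not the package's confirmed API:

```js
// Server: republish whenever the subscription's data context changes.
Meteor.publish('documents', function () {
  var self = this;
  self.autorun(function (computation) {
    // `self.data(...)` is assumed here to be the reactive reader the
    // subscription-data package exposes inside publish functions.
    var selector = self.data('selector') || {};
    var limit = self.data('limit') || 10;
    // Note: by default Meteor's mergebox makes Minimongo mirror this cursor,
    // so documents matched only by the previous selector are removed on the
    // client, which is what this issue is about.
    return Documents.find(selector, {limit: limit});
  });
});

// Client: change the selector and limit without tearing the subscription down.
var sub = Meteor.subscribe('documents');
sub.setData('limit', 20);
sub.setData('selector', {category: 'news'});
```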