
How to change publish's query selector and keep already inserted Minimongo docs? #1

Closed
fabiodr opened this issue Feb 21, 2016 · 14 comments


fabiodr commented Feb 21, 2016

I'm trying out this package and it's been very useful :)

@mitar, do you have any idea how I can change the query selector in the publication with .setData but preserve the other docs from that subscription already in Minimongo? Perhaps there is a way to use it along with SubsManager.

I have something like this now:

```js
Meteor.publish("leadsIncrementalByFilter", function (filter, findOptions) {
  this.autorun(function (computation) {
    var self = this;

    if (!self.userId) {
      self.ready();
      return;
    }

    self.autorun(function (computation) {
      return Leads.find(
        self.data("filter") || filter,
        self.data("findOptions") || findOptions);
    });
  });
});
```

```js
Template.leads.onCreated(function () {
  var quantityInitialItems = 10;
  Template.instance().leadsLimit = new ReactiveVar(quantityInitialItems);

  Template.instance().leadsSub = Meteor.subscribe("leadsIncrementalByFilter", buildFilter(), buildFindOptions());
});
```

```js
Template.leads.events({
  "click a[data-idphasefilter]": function (e, instance) {
    e.preventDefault();
    Session.set("leadOrderByField", $(e.target).data("orderby"));
    Template.instance().leadsSub.setData("filter", buildFilter());
  },
  "click #searchButton": function (e, instance) {
    e.preventDefault();
    Session.set("leadSearchString", $('#searchText').val());
    Template.instance().leadsSub.setData("filter", buildFilter());
  },
  "click [data-getmoreleads]": function (e, instance) {
    var increasedLimit = Template.instance().leadsLimit.get() + 10;
    Template.instance().leadsLimit.set(increasedLimit);
    Template.instance().leadsSub.setData("findOptions", buildFindOptions());
  }
});
```

I'd appreciate any help.

@mitar mitar added the question label Feb 21, 2016

mitar commented Feb 21, 2016

First, why nested autoruns inside the publish?


mitar commented Feb 21, 2016

> but preserve other docs from that subscription already in Minimongo

Why don't you then make two queries, one based on the original query and one based on the data?

What exactly are you trying to accomplish?


fabiodr commented Feb 21, 2016

Hmm, the inner autorun was only there to scope some tests; I will remove it, thanks.

- My screen has some filters.
- Initially I want to display only around 20 docs.
- It has infinite scroll too: when the page is scrolled to the bottom, I fetch 20 more docs by changing the limit with .setData.
- The user can filter for docs that aren't among the already-retrieved docs, so I change the filter with .setData too and the publication sends the new docs. But the non-matching docs are being removed from the Minimongo client cache.
- If the user clears the filter, I want to display the in-memory docs without going to the server again.


fabiodr commented Feb 21, 2016

@mitar, I researched meteor-easy-search as an easier solution for my problem, but it seems you spotted the same advantage that your packages can bring: sending only new documents to the client and avoiding restarts.

as discussed here: matteodem/meteor-easy-search#405 (comment)

The only missing piece is changing the query selector, not just the limit.

But I understand the default Meteor publish behavior is to make Minimongo reflect the publish cursor's results, so maybe this falls outside this package's intentions.


mitar commented Feb 21, 2016

> But i understand the default Meteor publish behavior is making the Minimongo reflect the publish cursor results so maybe it goes outside this package intentions.

I think what you want to do is use multiple subscriptions instead.

So you do not just want to change to different documents, you also want to keep the old ones around. So just open two subscriptions, with whatever you call "old" and "new".
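[Editor's note] With two subscriptions over the same collection, the client-side cache effectively holds the union of both result sets, deduplicated by `_id`. A minimal plain-JS sketch of that merge (not from the thread; simplified to last-writer-wins, whereas Meteor's mergebox actually merges field sets per document):

```js
// Model of how docs from an "old" and a "new" subscription combine on the
// client: union keyed by _id, with the later doc's fields taking precedence.
function mergeById(oldDocs, newDocs) {
  var byId = {};
  oldDocs.concat(newDocs).forEach(function (doc) {
    // Later (newer) docs win for the same _id.
    byId[doc._id] = doc;
  });
  return Object.keys(byId).map(function (id) {
    return byId[id];
  });
}

// mergeById([{_id: 'a', n: 1}, {_id: 'b', n: 2}], [{_id: 'b', n: 3}])
// → [{_id: 'a', n: 1}, {_id: 'b', n: 3}]
```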


fabiodr commented Feb 21, 2016

You mean creating a new subscription from the client every time the filter controls change? Doesn't that have performance implications for a large number of subscriptions?


mitar commented Feb 22, 2016

How many previous states do you want to keep on the client? You want to complicate things by keeping previous state around so that you do not have to reload documents.

Personally, I think you are probably optimizing this prematurely. But if you want, then decide how many of those previous states to keep around. If you just want to be able to go back to the previous one, then keep only the previous one, so only one step back.

The reason you have to keep the earlier subscription around is that documents may have changed between the user entering filter mode and going back to no-filter mode. If you just "cached" old documents on the client side, you would lose any changes made after that. So you have to keep the subscription if you want to go back.

In fact, Meteor already makes sure that only new documents are sent over. So you can have two subscriptions at the same time. It may have performance implications on the server side, but I care only about the client side. The server side I can scale horizontally as necessary; the client side is what matters to clients.

So this package solves the problem that between unsubscribing and resubscribing, new documents are sent again: the flicker which happens when documents are removed and then new ones with a larger limit are returned. That is per subscription. If you have multiple subscriptions, then fewer documents might be resent when there is overlap.

BTW, if you have multiple subscriptions over the same collection, you might want to use this package to know which documents are coming from where, so you can tell which documents come from the filtered subscription and which from the unfiltered one.

So I would suggest that you use this package for limit changes. For filtering, you just resubscribe. You can keep the previous subscription around to be able to quickly go back, if you want that. You can use the subscription-scope package to know which documents are really part of the search results. (Otherwise you might get some invalid documents into the search results on the client. Or you would have to redo the filter on the client, which might not be possible (full-text search is an example), including the order.)
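[Editor's note] A hypothetical sketch of the subscription-scope approach described above, assuming the API names from the peerlibrary:subscription-scope README (`this.enableScope()` on the server, `subscription.scopeQuery()` on the client) — treat those names as assumptions, not verified here. The one plain-JS piece is combining the scope query with a local selector:

```js
// Server (hypothetical, per the subscription-scope README):
//   Meteor.publish('leadsFiltered', function (filter) {
//     this.enableScope();  // restricts client queries to this subscription's docs
//     return Leads.find(filter);
//   });
//
// Client: AND the subscription's scope query with your own selector, so a
// find() only returns docs that actually came from that subscription:
function scopedSelector(scopeQuery, selector) {
  // $and keeps both conditions without clobbering shared keys.
  return { $and: [scopeQuery, selector] };
}

// Hypothetical usage:
//   var sub = Meteor.subscribe('leadsFiltered', buildFilter());
//   Leads.find(scopedSelector(sub.scopeQuery(), { archived: false }));
```

This is what prevents "invalid" documents (published by the unfiltered subscription) from leaking into the filtered search results on the client.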


fabiodr commented Mar 1, 2016

Thanks for the enlightenment, Mitar! These packages really open up fine-grained control over Meteor's mechanics.

I'm evaluating the approach you proposed, combining these packages, as well as Arunoda's search-source, which is very fast because it uses Meteor methods instead of subscriptions and does client-side caching, but unfortunately isn't reactive to database changes.


fabiodr commented Mar 1, 2016

Do you know of some way I can mark docs as sent in the mergebox, so I can fetch them in batch via Meteor methods?


mitar commented Mar 1, 2016

What exactly are you trying to optimize?


fabiodr commented Mar 1, 2016

The initial load of data and subsequent requests for large numbers of docs, like pagination. If I request more than 100 docs, processing the DDP messages sent from the server, receiving them on the client, and inserting them into Minimongo one by one is much slower than requesting all the data at once with a method. Just the serialization on the server and the parsing on the client, one message at a time, already makes a big difference in speed. And the bigger the batch size, the bigger the difference.
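[Editor's note] The overhead described above can be illustrated with a rough plain-JS model (an illustration only, not actual DDP internals): each per-document `added` message carries its own envelope, while a single method result carries all docs in one array, so the per-document path serializes and parses strictly more bytes:

```js
// Approximate total payload for N individual DDP-style "added" messages.
function perDocBytes(docs) {
  return docs.reduce(function (sum, doc) {
    var msg = { msg: 'added', collection: 'leads', id: doc._id, fields: doc };
    return sum + JSON.stringify(msg).length;
  }, 0);
}

// Approximate payload for one method "result" message carrying all docs.
function batchedBytes(docs) {
  var msg = { msg: 'result', id: '1', result: docs };
  return JSON.stringify(msg).length;
}
```

For any non-trivial batch, `perDocBytes(docs)` exceeds `batchedBytes(docs)`, because the message envelope is repeated N times; the gap grows with the batch size, matching the behavior described in the comment.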


fabiodr commented Mar 1, 2016

And that's without even considering the observer triggers and view rendering.


mitar commented Mar 2, 2016

> If i request more than 100 docs, the processing of DDP messages sent from server, reception on client and insertion on minimongo, one by one, is much slower than the request all data in one time with a method.

Ah, this problem. Sorry, I am already using this branch to address it:

meteor/meteor#5680

Try it out. :-) I forget about Meteor's old, not-yet-merged issues that I have already solved. :-)


fabiodr commented Mar 15, 2016

Thanks Mitar, I'm following the PR discussion closely before trying this.

@mitar mitar closed this as completed May 30, 2016