I use this library and other spaCy models to create Doc objects.
I use the pipe() method to apply the pipeline to a large corpus of text. The main challenge is that accessing the vector of each document one at a time is too slow.
Is there a way to get only the vector from applying the model to the text, or to extract the vectors in batches?
This came up while computing similarity: I couldn't call the similarity() method on batches, only on one pair of documents at a time. Is there a way to compute similarity in batches? A rough sketch of what I have in mind is below.
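For reference, roughly the batched approach I'm after (`en_core_web_md` and the toy texts are just placeholders for my actual model and corpus):

```python
import numpy as np
import spacy

# Placeholder model and corpus -- substitute your own.
nlp = spacy.load("en_core_web_md")
texts = ["first document", "second document", "third document"]

# Collect vectors while streaming docs through the pipeline.
vectors = np.array([doc.vector for doc in nlp.pipe(texts, batch_size=64)])

# Batched cosine similarity: normalize the rows, then a single matrix
# product yields all pairwise similarities instead of calling
# similarity() once per pair of docs.
norms = np.linalg.norm(vectors, axis=1, keepdims=True)
unit = vectors / np.maximum(norms, 1e-12)
sim_matrix = unit @ unit.T  # sim_matrix[i, j] ~ doc_i.similarity(doc_j)
```

This still materializes every Doc just to read its vector, which is the overhead I'd like to avoid.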
I'm using a 4-core CPU.
I hope my question is clear.
Thanks.
Hi, imho spaCy's one-document-at-a-time design sort of gets in the way here. I don't recall a built-in way to handle batches of spaCy docs!?
I'm coming from the other direction: I have a huge number of precomputed embeddings from USE and would like to feed them into a spaCy pipeline. I think the easiest route, for your case as well, is to overwrite or add the USE embedding as a "vector" hook on the respective Doc object.
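A minimal sketch of that hook idea. The `user_hooks` mechanism is spaCy's documented way to override `Doc.vector`; the `use_embeddings` lookup and component name here are hypothetical stand-ins for however you store your precomputed vectors:

```python
import numpy as np
import spacy
from spacy.language import Language

# Hypothetical lookup of precomputed USE embeddings, keyed by text.
use_embeddings = {
    "a sample sentence": np.random.rand(512).astype("float32"),
}

@Language.component("use_vectors")
def use_vectors(doc):
    # user_hooks["vector"] replaces Doc.vector with a custom function,
    # so downstream code that reads the vector (including the default
    # vector-based similarity()) sees the precomputed USE embedding.
    doc.user_hooks["vector"] = lambda d: use_embeddings[d.text]
    return doc

nlp = spacy.blank("en")
nlp.add_pipe("use_vectors")
doc = nlp("a sample sentence")
print(doc.vector.shape)  # (512,)
```

With the vectors hooked in like this, you can still pull them out in bulk and do the pairwise math in NumPy, as in the sketch above.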
Of course, it would be great if @ATAboukhadra could assist here and maybe extend his plugin with a convenience method.