As I was reading through the history, I noticed that somewhere down the line, `AzureSearchEmbedService` was updated to call Document Intelligence once per page of the document, whereas it used to call Document Intelligence for the whole document and use a method named `BlobNameFromFilePage` to decide which page each created index record relates to.
Is there a reason this was changed to run per page instead of on the whole document? Doesn't this change make it more costly to embed documents?
What are the pros and cons of each approach?
I would appreciate any information you can provide.
The source doc is still included in the metadata. If you choose vector search (you should) as a search strategy for RAG, the embedding model generates a single 1536-dimensional vector regardless of content length. So, for both specificity and search optimization, it makes sense to split potential search results into chunks that would actually be responsive AND not immediately exceed the model's context window.
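To make the trade-off concrete, here is a minimal sketch (in Python, not the repo's actual C# code) of the per-page approach. The function names `fake_embed`, `blob_name_from_file_page`, and `index_per_page` are hypothetical stand-ins, not the real `AzureSearchEmbedService` API; the point is that each chunk gets its own fixed-size vector, so a whole-document embedding would compress every page into one 1536-dimensional point while per-page chunks each stay individually searchable.

```python
# Sketch only: illustrates per-page chunking for vector search. All names
# here are hypothetical; they mirror the idea of BlobNameFromFilePage,
# not its actual implementation.

EMBEDDING_DIMENSIONS = 1536  # ada-002 output size, independent of input length


def fake_embed(text: str) -> list[float]:
    """Stand-in for a real embedding call; only the fixed size matters here."""
    return [0.0] * EMBEDDING_DIMENSIONS


def blob_name_from_file_page(filename: str, page: int) -> str:
    """Hypothetical analogue of BlobNameFromFilePage: ties an index
    record back to the page it came from."""
    stem, _, ext = filename.rpartition(".")
    return f"{stem}-{page}.{ext}"


def index_per_page(filename: str, pages: list[str]) -> list[dict]:
    """One index record, and therefore one embedding, per page."""
    return [
        {
            "id": blob_name_from_file_page(filename, i),
            "content": page_text,
            # Each chunk's vector has the same 1536 dimensions a
            # whole-document embedding would have, but covers less text,
            # so matches are more specific.
            "embedding": fake_embed(page_text),
        }
        for i, page_text in enumerate(pages)
    ]


records = index_per_page("report.pdf", ["page one text", "page two text"])
```

The cost question from the issue shows up here too: per-page indexing makes more embedding calls, but each call is small, and the resulting vectors point at a single page rather than averaging the whole document together.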