[Question]: Empty Response after query_engine.query #8421
Replies: 5 comments
-
🤖 Hello! Based on the information you've provided, the issue might be related to the content being loaded into the index or to the query you're executing. Here are a few things you could check:
You can refer to the following sources for more information:
Also, similar issues have been reported in the past; you might find those discussions and solutions helpful. Please let me know if this helps or if you need further assistance. Sources
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
When you load the index, you should pass the service context back in.
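A minimal sketch of what "passing the service context back in" could look like, assuming the same `service_context` that was used when building the index and the `db/naval_index` persist directory from the question's code (APIs as of llama_index 0.8.x):

```python
# Sketch (llama_index ~0.8.x): pass the same ServiceContext used at build
# time back in when reloading the persisted index, so the query engine
# uses your embed model and LLM instead of the defaults.
from llama_index import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults(persist_dir="db/naval_index")
new_index = load_index_from_storage(
    storage_context,
    service_context=service_context,  # same context used when building
)
query_engine = new_index.as_query_engine()
```

Without this, `load_index_from_storage` falls back to the default service context, which may try to use a different LLM or embedding model than the one the index was built with.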
-
Also, it looks like you are using the langchain Document class, or you set the wrong kwargs. It should be `Document(text=...)`, i.e. llama-index's `Document` uses the `text` kwarg.
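To illustrate the kwarg difference: langchain's `Document` takes `page_content=...`, while llama_index's `Document` takes `text=...`. A hedged sketch of converting loader output into llama_index documents (import paths per llama_index 0.8.x and contemporaneous langchain; the URL is one of those from the question):

```python
from langchain.document_loaders import WebBaseLoader
from llama_index import Document  # llama_index's Document, not langchain's

loader = WebBaseLoader(["https://www.databricks.com/"])
lc_docs = loader.load()

# llama_index's Document uses `text=`, not langchain's `page_content=`.
docs = [
    Document(text=d.page_content, metadata={"source": d.metadata["source"]})
    for d in lc_docs
]
```

Passing langchain documents (or the wrong kwarg) to `GPTVectorStoreIndex.from_documents` can leave the index with no usable node text, which matches the empty `nodes_dict` seen in the persisted store.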
-
Thank you for the response @logan-markewich. I have now removed the manual Document creation code and replaced the web loader from Langchain with BeautifulSoupWebReader from llama_index. Below is the index.json file content:

```json
{"index_store/data": {"some_string": {"type": "vector_store", "data": "{"index_id": "some_string", "summary": null, "nodes_dict": {"2e48e410-a236-4f61-a282-e94429cb9bb9": "2e48e410-a236-4f61-a282-e94429cb9bb9"}, "doc_id_dict": {}, "embeddings_dict": {}}"}}}
```

But `doc_id_dict` and `embeddings_dict` are still empty. Below is the code:

```python
from llama_index.embeddings import LangchainEmbedding
def document_loader():
loader = BeautifulSoupWebReader()
embeddings =LangchainEmbedding(
service_context = ServiceContext.from_defaults(
set_global_service_context(service_context)
```

The output now is blank; at least earlier it was printing "Empty Response"!!
-
Hmm, seems like maybe an issue with the LLM? Which LLM are you using, and how is it set up? I would try decreasing the chunk size to 1024, and maybe setting
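The chunk-size suggestion above could look something like this (a sketch against llama_index 0.8.x, with `llm` and `embeddings` as defined in the question's code):

```python
from llama_index import ServiceContext, set_global_service_context

# Smaller chunks (1024 vs 2048) keep each retrieved context within what
# many LLMs can comfortably handle, which can help avoid empty responses.
service_context = ServiceContext.from_defaults(
    chunk_size=1024,
    llm=llm,                 # your LLM, defined elsewhere
    embed_model=embeddings,  # the LangchainEmbedding wrapper from the question
)
set_global_service_context(service_context)
```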
-
Question Validation
Question
I am trying to read the content of a website and index it using llama_index, but after I perform query_engine.query(question) I get an empty response. I have the latest version of llama_index installed (0.8.31).
When I looked at the index_store.json file, its content was:

```json
{"index_store/data": {"some_string": {"type": "vector_store", "data": "{"index_id": "some_string", "summary": null, "nodes_dict": {}, "doc_id_dict": {}, "embeddings_dict": {}}"}}}
```
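One way to sanity-check a persisted store like the snippet above is to parse it and confirm `nodes_dict` is non-empty; an empty `nodes_dict` means no nodes were ever added to the index, so every query will come back empty. A small stdlib-only sketch (the key layout mirrors the pasted file; in the real file the `"data"` value is an escaped JSON string):

```python
import json

def index_is_empty(index_store: dict) -> bool:
    """Return True if no index in this store contains any nodes."""
    for index_struct in index_store.get("index_store/data", {}).values():
        nodes = json.loads(index_struct["data"]).get("nodes_dict", {})
        if nodes:
            return False
    return True

# Shape mirrors the pasted index_store.json ("data" is a JSON string).
store = {
    "index_store/data": {
        "some_string": {
            "type": "vector_store",
            "data": json.dumps({"index_id": "some_string", "summary": None,
                                "nodes_dict": {}, "doc_id_dict": {},
                                "embeddings_dict": {}}),
        }
    }
}
print(index_is_empty(store))  # -> True: nothing was indexed
```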
Below is the code:

```python
# `llm` and `question` are defined elsewhere in my script.
from langchain.document_loaders import WebBaseLoader
from langchain.schema import Document
from langchain.embeddings import HuggingFaceEmbeddings

from llama_index import (
    GPTVectorStoreIndex,
    ServiceContext,
    StorageContext,
    load_index_from_storage,
    set_global_service_context,
)
from llama_index.embeddings import LangchainEmbedding

def document_loader():
    web_links = ["https://www.databricks.com/", "https://help.databricks.com",
                 "https://databricks.com/try-databricks", "https://help.databricks.com/s/",
                 "https://docs.databricks.com"]
    loader = WebBaseLoader(web_links)
    documents = loader.load()
    docs = [Document(page_content=doc.page_content,
                     metadata={"source": doc.metadata["source"]})
            for doc in documents]
    model_name = "sentence-transformers/all-mpnet-base-v2"
    model_kwargs = {"device": "cuda"}
    embeddings = LangchainEmbedding(
        HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
    )
    service_context = ServiceContext.from_defaults(
        chunk_size=2048,
        llm=llm,
        embed_model=embeddings,
    )
    set_global_service_context(service_context)
    index = GPTVectorStoreIndex.from_documents(docs)
    index.storage_context.persist("db/naval_index")
    storage_context = StorageContext.from_defaults(persist_dir="db/naval_index")
    new_index = load_index_from_storage(storage_context)
    new_query_engine = new_index.as_query_engine()
    new_query_engine.query(question)
```
Is this a problem with llama_index, or is my approach incorrect?