Commit

Update create_chat_chain.py
hellokayas authored Nov 24, 2024
1 parent dc63ebe commit 9743a4f
Showing 1 changed file with 53 additions and 2 deletions.
55 changes: 53 additions & 2 deletions readme_ready/query/create_chat_chain.py
@@ -149,7 +149,32 @@ def make_qa_chain(
device: str = "cpu",
on_token_stream=None,
):
"""Make QA Chain"""
"""
Creates a question-answering (QA) chain for the specified project.
Initializes and configures the QA chain using the provided repository and user configurations.
Selects the appropriate language model (LLM), sets up the retriever with a history-aware mechanism,
and combines document chains for processing queries. The chain facilitates interaction with the
vector store to retrieve and process relevant information based on user queries.
Args:
project_name: The name of the project for which the QA chain is being created.
repository_url: The URL of the repository containing the project.
content_type: The type of content to be processed (e.g., 'code', 'documentation').
chat_prompt: The prompt template used for generating chat responses.
target_audience: The intended audience for the QA responses.
vectorstore: An instance of HNSWLib representing the vector store containing document embeddings.
llms: A list of LLMModels to select from for generating embeddings and responses.
device: The device to use for model inference (default is 'cpu').
on_token_stream: Optional callback for handling token streams during model inference.
Returns:
A retrieval chain configured for question-answering, combining the retriever and document processing chain.
Raises:
ValueError: If no suitable model is found in the provided LLMs.
RuntimeError: If there is an issue initializing the chat models or creating the chains.
"""
llm = llms[1] if len(llms) > 1 else llms[0]
llm_name = llm.value
print(f"LLM: {llm_name.lower()}")
@@ -240,7 +265,33 @@ def make_readme_chain(
device: str = "cpu",
on_token_stream=None,
):
"""Make Readme Chain"""
"""
Creates a README generation chain for the specified project.
Initializes and configures the README generation chain using the provided repository, user, and README configurations.
Selects the appropriate language model (LLM), sets up the document processing chain with the specified prompts,
and integrates with the vector store to generate comprehensive README sections based on project data.
The chain facilitates automated generation of README files tailored to the project's specifications.
Args:
project_name: The name of the project for which the README is being generated.
repository_url: The URL of the repository containing the project.
content_type: The type of content to be included in the README (e.g., 'overview', 'installation').
chat_prompt: The prompt template used for generating README content.
target_audience: The intended audience for the README.
vectorstore: An instance of HNSWLib representing the vector store containing document embeddings.
llms: A list of LLMModels to select from for generating README content.
peft_model: An optional parameter specifying a PEFT (Parameter-Efficient Fine-Tuning) model for enhanced performance.
device: The device to use for model inference (default is 'cpu').
on_token_stream: Optional callback for handling token streams during model inference.
Returns:
A retrieval chain configured for README generation, combining the retriever and document processing chain.
Raises:
ValueError: If no suitable model is found in the provided LLMs.
RuntimeError: If there is an issue initializing the chat models or creating the chains.
"""
llm = llms[1] if len(llms) > 1 else llms[0]
llm_name = llm.value
doc_chat_model = None
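Both docstrings describe the same underlying shape: a retriever fetches relevant documents from the vector store, and a document-processing step turns them into an answer or README section. A toy sketch of that composition pattern, using plain Python stand-ins rather than the actual LangChain retriever and chain objects (the store contents and function names are invented for illustration):

```python
def make_chain(retrieve, process):
    """Compose a retriever and a document-processing step into one callable."""
    def chain(query):
        docs = retrieve(query)        # fetch relevant documents for the query
        return process(query, docs)   # turn query + documents into an answer
    return chain

# Hypothetical stand-ins for the vector-store retriever and the LLM step
fake_store = {"install": ["Run pip install readme-ready."]}

def retrieve(query):
    return fake_store.get(query, [])

def process(query, docs):
    return f"{query}: " + " ".join(docs)

chain = make_chain(retrieve, process)
print(chain("install"))  # → install: Run pip install readme-ready.
```

In the committed code this composition is handled by LangChain's chain constructors; the sketch only shows why the returned object can be called with a query and yields processed output.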
