Commit dd7d085

Deployed be87ba6 with MkDocs version: 1.6.0
File tree

84 files changed: +60894 additions, 0 deletions

.nojekyll: whitespace-only changes
404.html: +1174 lines (large diff not rendered)
Inference/docker/index.html: +1342 lines (large diff not rendered)
Inference/index.html: +1205 lines (large diff not rendered)
Inference/inference/index.html: +3453 lines (large diff not rendered)
Ollama server/index.html: +1216 lines (large diff not rendered)
Query processing LLM/api_reference/index.html: +1562 lines (large diff not rendered)
Query processing LLM/index.html: +1382 lines (large diff not rendered)
Rag Pipeline/Developer Tutorials/change data input/index.html: +1485 lines (large diff not rendered)
Rag Pipeline/Developer Tutorials/change model/index.html: +1475 lines (large diff not rendered)
Lines changed: 63 additions & 0 deletions

```python
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.16.3
#   kernelspec:
#     display_name: openml
#     language: python
#     name: python3
# ---

# # Tutorial on changing models
# - How would you use a different embedding and LLM model?

from __future__ import annotations

import os
import sys

import chromadb
from langchain_community.cache import SQLiteCache

from backend.modules.rag_llm import QASetup
from backend.modules.utils import load_config_and_device

# ## Initial config

config = load_config_and_device("../../../backend/config.json")
config["persist_dir"] = "../../data/doc_examples/chroma_db/"
config["data_dir"] = "../../data/doc_examples/"
config["type_of_data"] = "dataset"
config["training"] = True
config["test_subset"] = True  # set this to False while training; True is for the demo
# Load the persistent database using ChromaDB
client = chromadb.PersistentClient(path=config["persist_dir"])
print(config)

# ## Embedding model
# - Pick a model from HF

config["embedding_model"] = "BAAI/bge-large-en-v1.5"

# ## LLM model
# - Pick a model from Ollama - https://ollama.com/library?sort=popular
# - e.g. mistral

config["llm_model"] = "mistral"

# +
qa_dataset_handler = QASetup(
    config=config,
    data_type=config["type_of_data"],
    client=client,
)

qa_dataset, _ = qa_dataset_handler.setup_vector_db_and_qa()
# -

# # IMPORTANT
# - Do NOT forget to change the model to the best model in ollama/get_ollama.sh
```
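The pattern the tutorial relies on is simple: load a JSON config, then override the model keys before building the pipeline. The sketch below imitates that flow with the standard library only. `load_config` is a hypothetical stand-in for the project's `load_config_and_device` (which additionally selects a torch device), and the sample config contents are illustrative, not taken from the repository.

```python
import json
import os
import tempfile


def load_config(path: str) -> dict:
    """Load a JSON config file into a dict.

    Hypothetical stand-in for load_config_and_device; the real helper
    also detects and stores the compute device.
    """
    with open(path) as f:
        return json.load(f)


# Demo: write a tiny config file, load it, then override the model keys
# exactly as the tutorial does before constructing QASetup.
with tempfile.TemporaryDirectory() as tmp:
    cfg_path = os.path.join(tmp, "config.json")
    with open(cfg_path, "w") as f:
        # Illustrative defaults, not the project's actual config.json
        json.dump({"embedding_model": "BAAI/bge-small-en", "llm_model": "llama3"}, f)

    config = load_config(cfg_path)
    config["embedding_model"] = "BAAI/bge-large-en-v1.5"  # any HF embedding model id
    config["llm_model"] = "mistral"  # any model name available in Ollama
    print(config)
```

Because the overrides happen in plain dict assignments, swapping models never requires editing the checked-in config file, only the notebook cell.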
