From 0b09740887b67c28bcf9c77238381c3f2a1cda7f Mon Sep 17 00:00:00 2001
From: Bogdan Farca
Date: Wed, 22 Jan 2025 12:08:13 +0100
Subject: [PATCH] WMS ID 11661: Emergency bug fixes (#750)

* Submitted: Create a Large Language Model (LLM) chatbot using Oracle Database 23ai and Generative AI Service.
* Name adjustments
* Folder rename
* Deleted old folder
* Fixed some bugs
* Added the LiveLabs version of the workshop
* Fixed some bugs
* Changed to the new LiveLabs image + switched model to cohere.command-r-plus
* Updates for the new LiveLabs image
* Fixed image name
* Fixed some prerequisites
* Corrected Liana's job role
* Fixed some conditional inclusions
* Created the ocw24 version of the lab
* Fixed some typos + disclaimers
* Fixed index.html + duplicated names
* Fixed some OCW24 bugs
* Using ATP instead of local DB
* Bug fixing + adding credentials
* Blurred personal information
* Hide logos in screenshots
* Small bugfix
* Added the CWT API key and instructions
* Bug fixes

---------

Co-authored-by: Bogdan Farca
---
 ai-chatbot-engine/search+llm/search+llm.md       | 4 +++-
 ai-chatbot-engine/vectorization/vectorization.md | 7 +++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/ai-chatbot-engine/search+llm/search+llm.md b/ai-chatbot-engine/search+llm/search+llm.md
index 1196644d7..7970a21d0 100644
--- a/ai-chatbot-engine/search+llm/search+llm.md
+++ b/ai-chatbot-engine/search+llm/search+llm.md
@@ -88,8 +88,10 @@ The SQL query is executed with the provided vector parameter, fetching relevant
 If we print the results, we obtain something like the following. As requested, we have the "score" of each hit, which is essentially the distance in vector space between the question and the text chunk, as well as the metadata JSON embedded in each chunk.
 ```python
+import pprint
 pprint.pp(results)
+```
 ```
@@ -166,7 +168,7 @@ In a Retrieval-Augmented Generation (RAG) application, the prompt given to a Lar
 ```python
 # transform docs into a string array using the "paylod" key
-docs_as_one_string = "\n=========\n".join([doc["text"] for doc in results])
+docs_as_one_string = "\n=========\n".join([doc[1]["text"] for doc in results])
 docs_truncated = truncate_string(docs_as_one_string, 1000)
 ```
diff --git a/ai-chatbot-engine/vectorization/vectorization.md b/ai-chatbot-engine/vectorization/vectorization.md
index 876b09469..89d95c2d3 100644
--- a/ai-chatbot-engine/vectorization/vectorization.md
+++ b/ai-chatbot-engine/vectorization/vectorization.md
@@ -149,13 +149,12 @@ Finally, let's start to code.
 It is now time to insert the prepared chunks into the vector database.

 ### Step 1: Create a database connection
-1. Drag and drop the wallet file you downloaded previosly into the Jupyter file pane. Unzip it in folder named "wallet".
-
+1. Drag and drop the wallet file you downloaded previosly into the Jupyter file pane. Unzip it in folder named "wallet".
 1. The connection details should be pinned down in a cell.

     ```python
-    un = 
-    pw = 
+    un = ""
+    pw = ""
     cs = "host.containers.internal/FREEPDB1"
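
The `doc[1]["text"]` change in the first hunk can be sketched as follows. This is an illustrative sketch only: it assumes each row returned by the vector search is a sequence whose second element is the chunk payload dict (the sample data here is made up, not taken from the lab).

```python
# Assumed row shape: (score, payload_dict) — the bug was indexing the row
# itself with "text" instead of the payload dict at position 1.
results = [
    (0.12, {"text": "First chunk", "metadata": {"page": 1}}),
    (0.34, {"text": "Second chunk", "metadata": {"page": 2}}),
]

# Join the chunk texts with a separator, mirroring the patched line.
docs_as_one_string = "\n=========\n".join([doc[1]["text"] for doc in results])
print(docs_as_one_string)
```

With the pre-patch expression `doc["text"]`, each `doc` would be a tuple and the lookup would raise a `TypeError`, which is consistent with this being an emergency fix.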
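
The second hunk replaces bare `un = ` / `pw = ` assignments with empty-string placeholders for the learner to fill in. One common alternative sketch, if you want to avoid typing credentials into the notebook at all, is to read them from environment variables; the variable names `DB_USER` and `DB_PASSWORD` below are assumptions for illustration, not part of the lab.

```python
import os

# Read credentials from the environment instead of hardcoding them in a cell.
# Falls back to empty strings, matching the lab's placeholder behavior.
un = os.environ.get("DB_USER", "")
pw = os.environ.get("DB_PASSWORD", "")
cs = "host.containers.internal/FREEPDB1"  # connection string from the lab
```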