diff --git a/notebook.ipynb b/notebook.ipynb
index 1d45359..2753526 100644
--- a/notebook.ipynb
+++ b/notebook.ipynb
@@ -9,21 +9,21 @@
"source": [
"# MLRun's Call Center Demo\n",
"\n",
- "Welcome to Iguazio Internet company's call center. In this demo we will be showcasing how we used GenAI to analyze calls - turnning call center conversation audio files of customers and agents into valueable data in a single workflow orchastrated by MLRun.\n",
+ "Welcome to Iguazio Internet company's call center. This demo showcases how to use GenAI to analyze calls, turning call center audio files of customers and agents into valuable data, all in one single workflow orchestrated by MLRun.\n",
"\n",
- "MLRun will be automating the entire workflow, auto-scale resources as needed, automatically distribute inference jobs to multiple workers and automatically log and parse values between the workflow different steps.\n",
+ "MLRun automates the entire workflow, auto-scales resources as needed, automatically distributes inference jobs to multiple workers, and automatically logs and parses values between the different workflow steps.\n",
"\n",
- "The demo will demonstrate two usages of GenAI:\n",
- "* **Unstructured Data Generation** - Generating audio data with ground truth metadata to evaluate our analysis.\n",
- "* **Unstructured Data Analysis** - Turning audio calls to text and into tabular features.\n",
+ "The demo demonstrates two usages of GenAI:\n",
+ "* **Unstructured Data Generation** — Generating audio data with ground truth metadata to evaluate the analysis.\n",
+ "* **Unstructured Data Analysis** — Turning audio calls into text and tabular features.\n",
"\n",
"## Table of contents:\n",
"\n",
- "1. [Project Creation](#project_creation)\n",
- "2. [Calls Data Generation](#calls_data_generation)\n",
- "3. [Calls Analysis](#calls_analysis)\n",
- "4. [Calls Viewer](#calls_viewer)\n",
- "5. [Future Work](#future_work)"
+ "1. [Create the project](#create_the_project)\n",
+ "2. [Generate the call data](#generate_the_call_data)\n",
+ "3. [Calls analysis](#calls_analysis)\n",
+ "4. [View the data](#view_the_data)\n",
+ "5. [Future work](#future_work)"
]
},
{
@@ -32,8 +32,8 @@
"metadata": {},
"source": [
"___\n",
- "\n",
- "## 1. Project Creation"
+ "\n",
+ "## 1. Create the project "
]
},
{
@@ -41,12 +41,12 @@
"id": "0d961f97-5930-4bff-8dbf-c0875a4ab47a",
"metadata": {},
"source": [
- "### 1.1. Install Requirements\n",
+ "### 1.1 Install the requirements\n",
"\n",
- "For the demo, we will need:\n",
- "* [**MLRun**](https://www.mlrun.org/) - Orchestrate the demo's workflows.\n",
- "* [**SQLAlchemy**](https://www.sqlalchemy.org/) - Manage the MySQL DB of calls, clients and agents.\n",
- "* [**Gradio**](https://www.gradio.app/) - To view the call center DB, transcriptions and play the generated conversations."
+ "This demo requires:\n",
+ "* [**MLRun**](https://www.mlrun.org/) — Orchestrate the demo's workflows.\n",
+ "* [**SQLAlchemy**](https://www.sqlalchemy.org/) — Manage the MySQL DB of calls, clients and agents.\n",
+ "* [**Gradio**](https://www.gradio.app/) — To view the call center DB and transcriptions, and to play the generated conversations."
]
},
{
@@ -68,16 +68,16 @@
"id": "b5eb3156-4dba-4ef7-a406-13e89772e700",
"metadata": {},
"source": [
- "### 1.2. Fill Tokens & URL\n",
+ "### 1.2 Fill the tokens and URL\n",
"\n",
- "There are 3 requirred tokens to run the demo end-to-end:\n",
- "* [OpenAI ChatGPT](https://chat.openai.com/) - In order to generate conversations. 2 tokens are required:\n",
+ "Three tokens are required to run the demo end-to-end:\n",
+ "* [OpenAI ChatGPT](https://chat.openai.com/) — To generate conversations, two tokens are required:\n",
" * `OPENAI_API_KEY`\n",
" * `OPENAI_API_BASE`\n",
" \n",
- "> Note: The requirement for OpenAI token will be removed soon in favor of an open-source LLM.\n",
+ "> Note: The requirement for the OpenAI token will be removed soon in favor of an open-source LLM.\n",
"\n",
- "* [MySQL](https://www.mysql.com/) - A URL with user name and password for collecting the calls into the DB.\n",
+ "* [MySQL](https://www.mysql.com/) — A URL with user name and password for collecting the calls into the DB.\n",
"\n"
]
},
@@ -141,20 +141,20 @@
"id": "1ea33fee-ec95-48e3-aae8-9247ae182481",
"metadata": {},
"source": [
- "### 1.3. Setup Project\n",
+ "### 1.3 Set up the project\n",
"\n",
- "We'll create the MLRun project with the function [`mlrun.get_or_create_project`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.get_or_create_project). The project is being created (or loaded if previously created) and being setup automatically according to the [project_setup.py](./project_setup.py) file located in this repo. \n",
+ "The MLRun project is created by running the function [`mlrun.get_or_create_project`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.get_or_create_project). This creates the project (or loads it if previously created) and sets it up automatically according to the [project_setup.py](./project_setup.py) file located in this repo. \n",
"\n",
- "This file is setting the functions accoridng to the given `parameters` down below. Feel free to set them as you wish:\n",
+ "The project_setup.py file sets the functions according to the following default `parameters`. You can adjust them as relevant:\n",
"\n",
- "* `source : str` - The git repo source of the project to clone when each function is running.\n",
- "* `default_image : str` - The default image to use for running the workflow's functions. For the sake of simplicity, the demo will use the same image for all the functions.\n",
- "* `gpus: int` - The amount of GPUs to use when running the demo. 0 means CPU.\n",
- "* `node_name: str` - The node name to run the demo on (Optional).\n",
+ "* `source : str` — The git repo source of the project to clone when each function is running.\n",
+ "* `default_image : str` — The default image to use for running the workflow's functions. For the sake of simplicity, the demo uses the same image for all the functions.\n",
+ "* `gpus: int` — The number of GPUs to use when running the demo. 0 means CPU.\n",
+ "* `node_name: str` — The node name to run the demo on (optional).\n",
"\n",
- "> Note: Multiple GPUs (`gpus` > 1) will automatically deploy [OpenMPI](https://www.open-mpi.org/) jobs for **better performance and GPU utilization**.\n",
+ "> Note: Multiple GPUs (`gpus` > 1) automatically deploy [OpenMPI](https://www.open-mpi.org/) jobs for **better performance and GPU utilization**.\n",
"\n",
- "You may notice there are not many functions under the source directory, that's because most of the code in this project is being imported from [**MLRun's Functions Hub**](https://www.mlrun.org/hub/) - A collection of reusable functions and assets that are optimized and tested to make the move to production easier and faster!"
+ "There are not many functions under the source directory. That's because most of the code in this project is imported from [**MLRun's Functions Hub**](https://www.mlrun.org/hub/) — a collection of reusable functions and assets that are optimized and tested to simplify and accelate the move to production!"
]
},
{
@@ -192,21 +192,21 @@
},
"source": [
"___\n",
- "\n",
- "## 2. Calls Data Generation\n",
+ "\n",
+ "## 2. Generate the call data\n",
"\n",
- "> Note: This entire workflow may be skipped in favour of using the already generated data that is available in this demo. See the [next cell](#skip_and_import_local_data) for more details\n",
+ "> Note: This entire workflow can be skipped if you want to use data that is already generated and available in this demo. See the [next cell](#skip_and_import_local_data) for more details.\n",
"\n",
- "The data generation workflow includes 6 steps. You may skip the agents and clients data generation and only generate calls using the existing agents and clients by passing `generate_clients_and_agents = True`. You can see each function's docstring and code by clicking the function name in the following list:\n",
+ "The data generation workflow comprises six steps. If you want to skip the agents and clients data generation and just generate calls using the existing agents and clients, then pass `generate_clients_and_agents = True`. You can see each function's docstring and code by clicking the function name in the following list:\n",
"\n",
- "1. (Skipable) [**Agents & Clients Data Generator**](https://github.com/mlrun/functions/blob/master/structured_data_generator)- ***Hub Function***: Using OpenAI's ChatGPT we generate the metadata for the call center's agents and clients. The data include fields like first name, last name, phone number, etc. All the agents and clients steps are running in parallel.\n",
- "2. (Skipable) [**Insert Agents & Clients Data to DB**](.src/calls_analysis/data_management.py): Insert the generated agents and clients data into the MySQL database.\n",
- "3. [**Get Agents & Clients from DB**](.src/calls_analysis/data_management.py): Getting the call center data from the database in order to get to pass it to the next step.\n",
- "4. [**Conversation Generation**](./src/calls_generation/conversations_generator.py): Here OpenAI's ChatGPT is being used to generate the conversations for the demo. We randomize prompts and keep their values for ground truths to evaluate the analysis later on.\n",
- "5. [**Text to Audio**](https://github.com/mlrun/functions/blob/master/text_to_audio_generator) - ***Hub Function***: Using [SunoAI's Bark](https://github.com/suno-ai/bark), we generate audio files from the conversations text files produced.\n",
- "6. [**Batch Creation**](./src/calls_generation/conversations_generator.py): The last step is to wrap all of our generated data to an input batch that is ready for the analysis workflow!\n",
+ "1. (Skippable) [**Agents & Clients Data Generator**](https://github.com/mlrun/functions/blob/master/structured_data_generator)- ***Hub Function*** — Use OpenAI's ChatGPT to generate the metadata for the call center's agents and clients. The data include fields like first name, last name, phone number, etc. All the agents and clients steps run in parallel.\n",
+ "2. (Skippable) [**Insert Agents & Clients Data to DB**](.src/calls_analysis/data_management.py) — Insert the generated agents and clients data into the MySQL database.\n",
+ "3. [**Get Agents & Clients from DB**](.src/calls_analysis/data_management.py) — Get the call center data from the database, before passing it in the next step.\n",
+ "4. [**Conversation Generation**](./src/calls_generation/conversations_generator.py) — Here, OpenAI's ChatGPT is used to generate the conversations for the demo. We randomize prompts and keep their values for ground truths to evaluate the analysis later on.\n",
+ "5. [**Text to Audio**](https://github.com/mlrun/functions/blob/master/text_to_audio_generator) — ***Hub Function***: Using [SunoAI's Bark](https://github.com/suno-ai/bark), we generate audio files from the conversations that were produced by the text files.\n",
+ "6. [**Batch Creation**](./src/calls_generation/conversations_generator.py) — The last step is to wrap all of the generated data to an input batch that is ready for the analysis workflow!\n",
"\n",
- "After this workflow, the database will be filled with data in the following structure:\n",
+ "After this workflow, the database is filled with data in the following structure:\n",
"* **Client Table**\n",
" | Client ID | First Name | Last Name | Phone Number | Email | Calls |\n",
" | :------- | :--------- | :-------- | :----------- | :--------------- | :---- |\n",
@@ -226,7 +226,7 @@
},
"source": [
"\n",
- "If you'd like to experiment with the provided example data and skip this section, you can run the following commands to generate the necessary artifacts for the analysis workflow:"
+ "If you want to experiment with the provided example data and skip this section, you can run the following commands to generate the necessary artifacts for the analysis workflow:"
]
},
{
@@ -248,26 +248,26 @@
"tags": []
},
"source": [
- "### 2.1. Run The Workflow\n",
+ "### 2.1. Run the workflow\n",
"\n",
- "Let us run the [described workflow](./src/workflows/calls_generation.py) by calling the project's method [`project.run`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.MlrunProject.run).\n",
+ "Run the [described workflow](./src/workflows/calls_generation.py) by calling the project's method [`project.run`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.MlrunProject.run).\n",
"\n",
- "Parameters passed to the function can be adjusted according to the workflow `arguments`. Feel free to set them as you wish. You may choose the `amount` of calls you wish to generate, the models to use and configure the metadata as you wish:\n",
+ "The following parameters are passed to the function as workflow `arguments`. Set them as relevant. You can choose the `amount` of calls you wish to generate, the models to use, and configure the metadata as you wish:\n",
"\n",
- "* `amount: int` - The amount of samples to generate.\n",
- "* `generation_model: str` - What model from Open AI to use for data generation.\n",
- "* `use_small_models: bool` - Whether to use the smaller bark models for the text to audio generation.\n",
- "* `language: str` - The language to generate data in (in the existing sample we chose Spanish).\n",
- "* `available_voices: List[str]` - What voices pool to choose from.\n",
- "* `min_time: int` - Minimum duration of the generated calls.\n",
- "* `max_time: int` - Maximum duration of the generated calls.\n",
- "* `from_date: str` - From which date the call can occur (metadata of the call).\n",
- "* `to_date: str` - To which date the call can occur (metadata of the call).\n",
- "* `from_time: str` - From which time the call can occur (metadata of the call).\n",
- "* `to_time: str` - To which time the call can occur (metadata of the call).\n",
- "* `num_clients: int` - Amount of clients to generate.\n",
- "* `num_agents: int` - Amount of agents to generate.\n",
- "* `generate_clients_and_agents: bool` - Skip the clients and agents generation (only generate new calls with the existing data)."
+ "* `amount: int` — The number of samples to generate.\n",
+ "* `generation_model: str` — Which model from Open AI to use for data generation.\n",
+ "* `use_small_models: bool` — Whether to use the smaller bark models for the text-to-audio generation.\n",
+ "* `language: str` — The language to generate data in (this sample uses Spanish).\n",
+ "* `available_voices: List[str]` — What voices pool to choose from.\n",
+ "* `min_time: int` — Minimum duration of the generated calls.\n",
+ "* `max_time: int` — Maximum duration of the generated calls.\n",
+ "* `from_date: str` — Starting date for the calls (metadata of the call).\n",
+ "* `to_date: str` — Latest date for the calls (metadata of the call).\n",
+ "* `from_time: str` — Starting time for the calls (metadata of the call).\n",
+ "* `to_time: str` — Last time for the calls (time the call center closes). (metadata of the call).\n",
+ "* `num_clients: int` — Number of clients to generate.\n",
+ "* `num_agents: int` — Number of agents to generate.\n",
+ "* `generate_clients_and_agents: bool` — Skip the clients and agents generation (only generate new calls with the existing data)."
]
},
{
@@ -609,35 +609,35 @@
"source": [
"___\n",
"\n",
- "## 3. Calls Analysis\n",
+ "## 3. Calls analysis\n",
"\n",
- "The workflow include multiple steps where all the main functions are imported from **MLRun's Functions Hub**. You can see each hub function's docstring, code and example by clicking the function name in the following list:\n",
+ "The workflow includes multiple steps for which all of the main functions are imported from the **[MLRun Function Hub](https://www.mlrun.org/hub/)**. You can see each hub function's docstring, code, and example, by clicking the function name in the following list:\n",
"\n",
- "1. [**Insert Calls Data to DB**](./src/calls_analysis/db_management.py): Insert the calls metadata to the MySQL DB.\n",
- "2. [**Perform Speech Diarization**](https://github.com/mlrun/functions/tree/development/silero_vad) - ***Hub Function***: Analyze which person is talking when during the call for better transcription and analysis later on. Diarization gives context to the LLM and yields better results. The function uses the [silero-VAD](https://github.com/snakers4/silero-vad) model. The speech diarization is performed per channel based on the assumption call center recordings has each channel in the audio belong to a different speaker.\n",
- "3. [**Transcribe**](https://github.com/mlrun/functions/tree/master/transcribe) - ***Hub function***: Uses [Huggingface's ASR pipeline](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.AutomaticSpeechRecognitionPipeline) with [OpenAI's Whisper models](https://huggingface.co/openai).This function transcribe and translate the calls into text and save them as text files. It is an optimized version of [OpenAI's Whisper](https://openai.com/research/whisper) package - enable to use batching, CPU offloading to multiprocessing workers and distribute across multiple GPUs using MLRun and OpenMPI.\n",
- "4. [**Recognize PII**](https://github.com/mlrun/functions/tree/master/pii_recognizer) - ***Hub Function***: Uses 3 techniques to recognize private identfiable information: RegEx, [Flair](https://flairnlp.github.io/) and [Microsoft's Presidio Analyzer](https://microsoft.github.io/presidio/analyzer/) and [Anonymizer](https://microsoft.github.io/presidio/anonymizer/). The function clear the recognized private data and produces multiple artifacts to review and understand the recogniztion process.\n",
- "5. [**Analysis**](https://github.com/mlrun/functions/tree/master/question_answering) - ***Hub Function***: Use a LLM to analyze a given text. It expects a prompt template and questions to send to the LLM and construct a dataframe dataset out of its answers. In this demo we will use a GPTQ quantized version of [Mistral-7B](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ) to analyze our conversation calls. It will help us extract the following features:\n",
+ "1. [**Insert the Calls Data to the DB**](./src/calls_analysis/db_management.py) — Insert the calls metadata to the MySQL DB.\n",
+ "2. [**Perform Speech Diarization**](https://github.com/mlrun/functions/tree/development/silero_vad) – ***Hub Function***: Analyze when each person is talking during the call for subsequent improved transcription and analysis. Diarization gives context to the LLM and yields better results. The function uses the [silero-VAD](https://github.com/snakers4/silero-vad) model. The speech diarization is performed per channel based on the assumption that each channel of the audio in the call center recordings belongs to a different speaker.\n",
+ "3. [**Transcribe**](https://github.com/mlrun/functions/tree/master/transcribe) – ***Hub function***: Uses [Huggingface's ASR pipeline](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.AutomaticSpeechRecognitionPipeline) with [OpenAI's Whisper models](https://huggingface.co/openai).This function transcribes and translates the calls into text and saves them as text files. It is an optimized version of [OpenAI's Whisper](https://openai.com/research/whisper) package — enabled to use batching, CPU offloading to multiprocessing workers, and to distribute across multiple GPUs using MLRun and OpenMPI.\n",
+ "4. [**Recognize PII**](https://github.com/mlrun/functions/tree/master/pii_recognizer) – ***Hub Function***: Uses three techniques to recognize personally identifiable information: RegEx, [Flair](https://flairnlp.github.io/) and [Microsoft's Presidio Analyzer](https://microsoft.github.io/presidio/analyzer/) and [Anonymizer](https://microsoft.github.io/presidio/anonymizer/). The function clears the recognized personal data and produces multiple artifacts to review and understand the recognition process.\n",
+ "5. [**Analysis**](https://github.com/mlrun/functions/tree/master/question_answering) – ***Hub Function***: Uses an LLM to analyze a given text. It expects a prompt template and questions to send to the LLM, and then constructs a dataframe dataset from its answers. This demo uses a GPTQ quantized version of [Mistral-7B](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ) to analyze the calls' conversations. It helps to extract the following features:\n",
" \n",
- " * `topic: str` - The general subject of the call out of a given list of topics.\n",
- " * `summary: str` - The summary of the entire call in few sentences.\n",
- " * `concern_addressed: bool` - Whether the client's concern was addressed by the end of the call. Can be one of {yes, no}.\n",
- " * `customer_tone: str` - The general customer tone durring the call. Can be one of {posetive, netural, negative}.\n",
- " * `agent_tone: str` - The general agent tone durring the call. Can be one of {posetive, netural, negative}. \n",
- " * `upsale_attempted: bool` - Whether the agent tried to upsale the client during the call.\n",
- " * `upsale_success: bool` - Whether the upsale attempt was succesfull.\n",
- " * `empathy: int` - How empathy was the agent between 1 to 5.\n",
- " * `professionalism: int` - How professional was the agent between 1 to 5.\n",
- " * `kindness: int` - How kind was the agent between 1 to 5.\n",
- " * `effective_communication: int` - How was the agent's communication scored between 1 to 5.\n",
- " * `active_listening: int` - How was the agent activly listening scored between 1 to 5.\n",
- " * `customization: int` - How custom was the agent to the client's needs between 1 to 5.\n",
+ " * `topic: str` — The general subject of the call out of a given list of topics.\n",
+ " * `summary: str` — The summary of the entire call in few sentences.\n",
+ " * `concern_addressed: bool` — Whether the client's concern was addressed at the end of the call. Can be one of {yes, no}.\n",
+ " * `customer_tone: str` — The general customer tone durring the call. Can be one of {positive, neutral, negative}.\n",
+ " * `agent_tone: str` — The general agent tone during the call. Can be one of {positive, neutral, negative}. \n",
+ " * `upsale_attempted: bool` — Whether the agent tried to upsell the client during the call.\n",
+ " * `upsale_success: bool` - Whether the upsell attempt was successful.\n",
+ " * `empathy: int` — The level of empathy on the part of the agent from 1 to 5.\n",
+ " * `professionalism: int` — The agent's professionalism from 1 to 5.\n",
+ " * `kindness: int` — How kind was the agent from 1 to 5.\n",
+ " * `effective_communication: int` — Efficacy of the agent's communication from 1 to 5.\n",
+ " * `active_listening: int` — The level of active listening on the part of the agent from 1 to 5.\n",
+ " * `customization: int` — How closely the agent responded to the client's needs from 1 to 5.\n",
"\n",
"6. [**Postprocess Analysis Answers**](./src/postprocess.py) - A project function used to postprocess the LLM's answers before updating them into the DB.\n",
" \n",
- "Between each step, there is a call for the function [**Update Calls**](./src/calls_analysis/db_management.py) that is updating the calls DB with their newly collected data and status.\n",
+ "Between each step, there is a call for the function [**Update Calls**](./src/calls_analysis/db_management.py) that updates the calls DB with the newly collected data and status.\n",
"\n",
- "Here we can see an example of a call row in the database before and after the analysis:\n",
+ "Here you can see an example of a call row in the database before and after the analysis:\n",
"\n",
"* In the beginning of the workflow:\n",
"| Call ID | Client ID | Agent ID | Date | Time | Audio File |\n",
@@ -655,20 +655,20 @@
"id": "c68f2edd-0c0b-403a-8fc6-156b299dc8f1",
"metadata": {},
"source": [
- "### 3.2. Run The Workflow\n",
+ "### 3.2. Run the workflow\n",
"\n",
- "We will now run the workflow using the following parameters:\n",
- "* `batch: str` - Path to the dataframe artifact that represents the batch to analyze. \n",
- "* `calls_audio_files : str` - Path to the conversation audio files directory or a given input batch.\n",
+ "Now, run the workflow using the following parameters:\n",
+ "* `batch: str` — Path to the dataframe artifact that represents the batch to analyze. \n",
+ "* `calls_audio_files : str` — Path to the conversation audio files directory or a given input batch.\n",
"\n",
- "> Notice: We pass the `batch` and the `calls_audio_files` to the workflow using the artifact's store paths. \n",
- "* `batch_size: int` The batch size for the transcription's inference model (size for each worker).\n",
- "* `transcribe_model : str` - The model to use for the transcribe function. Must be one of the official model names listed [here](https://github.com/guillaumekln/faster-whisper).\n",
- "* `translate_to_english: bool` - Whether to translate the transcriptions to English. Recommended for use when the audio language is not English.\n",
- "* `pii_recognition_model : str` - The model to use. Can be \"spacy\", \"flair\", \"pattern\" or \"whole\".\n",
- "* `pii_recognition_entities : Listr[str]` - The list of entities to recognize.\n",
- "* `pii_recognition_entity_operator_map: Dict[str, tuple]` - A dictionary that maps entity to operator name and operator params.\n",
- "* `question_answering_model : str` - The model to use for asnwering the given questions."
+ "> Notice: The `batch` and the `calls_audio_files` are passed to the workflow using the artifact's store paths. \n",
+ "* `batch_size: int` — The batch size for the transcription's inference model (size for each worker).\n",
+ "* `transcribe_model : str` — The model to use for the transcribe function. Must be one of the official model names listed [here](https://github.com/guillaumekln/faster-whisper).\n",
+ "* `translate_to_english: bool` — Whether to translate the transcriptions to English. Recommended for use when the audio language is not English.\n",
+ "* `pii_recognition_model : str` — The model to use. Can be \"spacy\", \"flair\", \"pattern\" or \"whole\".\n",
+ "* `pii_recognition_entities : Listr[str]` — The list of entities to recognize.\n",
+ "* `pii_recognition_entity_operator_map: Dict[str, tuple]` — A dictionary that maps entity to operator name and operator params.\n",
+ "* `question_answering_model : str` — The model to use for answering the given questions."
]
},
{
@@ -988,12 +988,12 @@
},
"source": [
"___\n",
- "\n",
- "## 4. Calls Viewer\n",
+ "\n",
+ "## 4. View the data\n",
"\n",
- "While the workflow is running, we can view the data and features as they are being collected.\n",
+ "While the workflow is running, you can view the data and features as they are collected.\n",
"\n",
- "> Note: As each step in the workflow is automatically logged, it can be viewed as well separatly in the MLRun UI under the project's artifacts. MLRun's experiment tracking enables full exploration and reproducibility between steps in a workflow due to its automatic logging features. Here we only see the MySQL DB."
+ "> Note: Since each step in the workflow is automatically logged, it can also be viewed in the MLRun UI under the project's artifacts. MLRun's experiment tracking enables full exploration and reproducibility between steps in a workflow due to its automatic logging features. Here you only see the MySQL DB."
]
},
{
@@ -1137,12 +1137,12 @@
"source": [
"___\n",
"\n",
- "## 5. Future Work\n",
+ "## 5. Future work\n",
"\n",
- "This demo was a proof of concept for LLMs feature extraction capabilities, while using MLRun for the orchestration from developemny to production. The demo is being further developed and you are welcome to track and develop it with us:\n",
+ "This demo is a proof of concept for LLM's feature-extraction capabilities, while using MLRun for the orchestration from development to production. The demo continues to be developed. You are welcome to track and develop it with us:\n",
"\n",
"### v0.3\n",
- "#### New Features\n",
+ "#### New features\n",
"* [ ] **Open Source Data Generation Workflow** - Replace OpenAI ChatGPT with an open-source LLM.\n",
"* [ ] **Data Storage** - Generated data files will be uploaded into a data storage like S3.\n",
"* [ ] **Evaluation** - Perform evaluation based on the ground truth generated along the data to assess the transcription quality and the LLM analysis accuracy.\n",
@@ -1150,11 +1150,11 @@
"* [ ] **Data Analysis Workflow** - Workflow to analyze the collected data from the RDB (MySQL) and VDB (Milvus).\n",
"\n",
"### v0.2\n",
- "#### New Features\n",
+ "#### New features\n",
"* [x] **Calls Generation Pipeline** - Generate data and a batch of calls with ground truth metadata for evaluation.\n",
"* [x] **MySQL as Relational DB** - Store all the collected analysis and data in a MySQL database.\n",
"* [x] **Speech Diarization** - Know who talks when by performing speech diarization per channel.\n",
- "* [x] **Translation** - Enable translating into English the transcriptions for inferring thorugh the open source LLM.\n",
+ "* [x] **Translation** - Enable translating the transcriptions into English for inferring thorugh the open source LLM.\n",
"\n",
"#### Improvements\n",
"* [x] **Distributed Transcription and Diarization** - Add OpenMPI support to distribute the pipeline functions across multiple workers.\n",
@@ -1163,9 +1163,9 @@
"* [x] **GPTQ Quantization** - Use a GPTQ quantized model for analysis for faster inference time.\n",
"\n",
"### v0.1\n",
- "* [x] **Trasncription** - Use Open AI's Whisper for transcribing audio calls.\n",
- "* [x] **Anonimization** - Anonimize the text before inferring.\n",
- "* [x] **Analysis** - Perform question asnwering for feature extraction using Falcon-40B."
+ "* [x] **Transcription** - Use Open AI's Whisper for transcribing audio calls.\n",
+ "* [x] **Anonymization** - Anonymize the text before inferring.\n",
+ "* [x] **Analysis** - Perform question answering for feature extraction using Falcon-40B."
]
},
{
@@ -1179,9 +1179,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "mlrun-base",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
- "name": "conda-env-mlrun-base-py"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -1193,7 +1193,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.16"
+ "version": "3.9.13"
}
},
"nbformat": 4,