This repository contains the logic of the dialogue planner. It is deployed as a Flask Python application with an SQLAlchemy database that stores the solutions generated by the planner.
To avoid potential TensorFlow configuration errors (and to allow for GPU support), install TensorFlow by following their tutorial. If you do so, run the following steps within the created conda environment. Otherwise, we still recommend using a virtual environment like conda or venv.
Install the dependencies listed in requirements.txt to be able to run the app locally:
pip install -r requirements.txt
python -m spacy download en_core_web_md
Run the following in the terminal to run the bot locally. You can replace local_data/updated_gold_standard_bot with the path to any directory populated with valid plan4dial-generated files.
python contingent_plan_executor/local_main.py local_data/updated_gold_standard_bot
Run the following in the terminal to start the Flask server. You can replace local_data/updated_gold_standard_bot with the path to any directory populated with valid plan4dial-generated files.
python contingent_plan_executor/app.py local_data/updated_gold_standard_bot
With Docker installed, run this command in the terminal to build the image:
docker build -t hovor:latest .
Once the image is built, you have two options for running the container:
Use this version if you want conversation data to be deleted once the container is removed. (This is ideal for testing.)
docker run -it --rm -p 5000:5000 -d hovor:latest local_data/updated_gold_standard_bot
Use this version to persist conversation data.
docker run -it --rm -p 5000:5000 -d -v convo_data:/data hovor:latest local_data/updated_gold_standard_bot
Below are the various sub-endpoints, an explanation of the services they provide, and the necessary input/output for each. Note that for any endpoint, an unsuccessful call returns an "error" status; check "msg" to see what went wrong.
Begins a new conversation. Returns the agent's message(s) under "msg" and the user's id under "user_id". Be sure to store the "user_id" so you can load your conversation later!
- "user_id": the user_id that identifies your save slot

Begins a new conversation, but overwrites the existing user save slot if it exists. Returns the same as the GET request, but keeps the same "user_id".
- "user_id": the user_id that identifies your save slot
- "msg": the message you want to send to the agent

Sends a message to the agent speaking to the given "user_id". Returns the agent's message(s) under "msg". Other diagnostic information, like "confidence", is also returned.
- "user_id": the user_id that identifies your save slot
Loads the conversation of the given "user_id" in its most recent save state. Returns the agent's message(s) under "msg". Call the new-message endpoint to continue the conversation.
View your app at http://localhost:5000. Send new messages with curl, e.g.:
curl -d '{"user_id":"haz", "msg":"I want to go to Toronto"}' -H "Content-Type: application/json" -X POST http://localhost:5000/new-message
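The same request can also be sent from Python. The sketch below mirrors the curl call above; the endpoint path and JSON fields come from that example, but the helper function names are our own:

```python
import json
import urllib.request

def build_new_message_payload(user_id, msg):
    """Build the JSON body expected by the /new-message endpoint."""
    return json.dumps({"user_id": user_id, "msg": msg})

def send_message(user_id, msg, host="http://localhost:5000"):
    """POST a message to the running server and return the parsed response."""
    req = urllib.request.Request(
        f"{host}/new-message",
        data=build_new_message_payload(user_id, msg).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# reply = send_message("haz", "I want to go to Toronto")
# print(reply["msg"])
```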
Once you have a server running (local or otherwise), see here.
This allows chatbot designers to simulate conversations and analyze problematic ones.
To run simulation and evaluation, some libraries are needed in addition to the normal HOVOR libraries. In my experience, the best way to do this is to manage your environments with conda, but the same package list should work for pip-based approaches.
To create a working env with conda, follow these steps:
conda create --name hovor-sim python=3.8.15
conda activate hovor-sim
conda install pip
pip install -r requirements_sim.txt
python -m spacy download en_core_web_md
You can see the packages that will be installed in the file requirements_sim.txt.
If you have difficulties with package versions, you can view conda_list.txt, which contains the output of conda list for my working conda environment on Ubuntu. You can likely pin these package versions in the requirements file to fix any conflicts.
The best way to apply simulation and evaluation is to use our user interface. With your simulation environment activated, run the command:
streamlit run local_simulate_evaluate_streamlit.py
Once the paths are correct, it will run these processes.
If you want to make your own outcome determiner, start by looking at the DefaultSystemOutcomeDeterminer. You will need to return a list of tuples, each holding an outcome group and a confidence, and to update the context with any new variable values. Finally, you will need to specify the conditions for your action to run with this function and this function.
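As an illustration, a minimal custom determiner might look like the sketch below. Only the idea of returning (outcome group, confidence) tuples and updating the context comes from the text above; the class name, method signature, and assumed shapes of the outcome groups and context are hypothetical, so take the real interface from DefaultSystemOutcomeDeterminer.

```python
class KeywordOutcomeDeterminer:
    """Hypothetical outcome determiner: ranks outcome groups by keyword match.

    Assumed shapes (not the real HOVOR API): outcome groups are objects with
    a .name attribute, and the context behaves like a dict of variable values.
    """

    def __init__(self, keyword_map):
        # keyword_map: {keyword: name of the outcome group it should trigger}
        self.keyword_map = keyword_map

    def rank_groups(self, outcome_groups, utterance, context):
        utterance = utterance.lower()
        # Names of groups triggered by a keyword appearing in the utterance.
        triggered = {
            group_name
            for keyword, group_name in self.keyword_map.items()
            if keyword in utterance
        }
        ranked = []
        for group in outcome_groups:
            confidence = 1.0 if group.name in triggered else 0.0
            ranked.append((group, confidence))
            if confidence:
                # Update the context with an extracted value, as real
                # determiners record variables when they detect them.
                context[f"matched_{group.name}"] = True
        # Return highest-confidence outcomes first.
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```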