diff --git a/labs/03-orchestration/04-Functions/stock.ipynb b/labs/03-orchestration/04-Functions/stock.ipynb
new file mode 100644
index 0000000..8080b44
--- /dev/null
+++ b/labs/03-orchestration/04-Functions/stock.ipynb
@@ -0,0 +1,498 @@
+{
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Working with functions in Azure OpenAI\n",
+ "This notebook shows how to use the Chat Completions API in combination with functions to extend the capabilities of GPT models. GPT models do not inherently support real-time interaction with external systems, databases, or files, but functions can be used to bridge that gap.\n",
+ "\n",
+ "Overview:\n",
+ "`functions` is an optional parameter in the Chat Completion API which can be used to provide function specifications. This allows models to generate function arguments for the specifications provided by the user. \n",
+ "\n",
+ "Note: The API will not execute any function calls. Executing function calls using the output arguments must be done by the developer."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# if needed, install and/or upgrade to the latest version of the OpenAI Python library\n",
+ "#%pip install --upgrade openai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import openai\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "\n",
+ "\n",
+ "# Load environment variables\n",
+ "if load_dotenv():\n",
+ " print(\"Found Azure OpenAI Base Endpoint: \" + os.getenv(\"OPENAI_API_BASE\"))\n",
+ "else: \n",
+ " print(\"No .env file found\")\n",
+ "\n",
+ "# Setting up the deployment name\n",
+ "deployment_name = os.getenv(\"DEPLOYMENT_ID\")\n",
+ "\n",
+ "# This is set to `azure`\n",
+ "openai.api_type = \"azure\"\n",
+ "\n",
+ "# The API key for your Azure OpenAI resource.\n",
+ "openai.api_key = os.getenv(\"OPENAI_API_KEY\")\n",
+ "\n",
+ "# The base URL for your Azure OpenAI resource. e.g. \"https://<your-resource-name>.openai.azure.com\"\n",
+ "openai.api_base = os.getenv(\"OPENAI_API_BASE\") \n",
+ "\n",
+ "# Currently the Chat Completion API has the following versions available: 2023-07-01-preview\n",
+ "openai.api_version = os.getenv(\"OPENAI_API_VERSION\") "
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1.0 Test functions\n",
+ "\n",
+ "This code calls the model with the user query and the set of functions defined in the functions parameter. The model can then choose whether to call a function. If a function is called, the arguments will be a stringified JSON object. The function call that should be made and its arguments are located in: response[`choices`][0][`message`][`function_call`]."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_function_call(messages, function_call = \"auto\"):\n",
+ " # Define the functions to use\n",
+ " functions = [\n",
+ " {\n",
+ " \"name\": \"get_current_weather\",\n",
+ " \"description\": \"Get the current weather in a given location\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"location\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
+ " },\n",
+ " \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
+ " },\n",
+ " \"required\": [\"location\"],\n",
+ " },\n",
+ " },\n",
+ " ]\n",
+ "\n",
+ " # Call the model with the user query (messages) and the functions defined in the functions parameter\n",
+ " response = openai.ChatCompletion.create(\n",
+ " deployment_id = deployment_name,\n",
+ " messages=messages,\n",
+ " functions=functions,\n",
+ " function_call=function_call, \n",
+ " )\n",
+ "\n",
+ " return response"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Forcing the use of a specific function or no function\n",
+ "By changing the value of the `function_call` parameter you can allow the model to decide what function to use, force the model to use a specific function, or prevent the model from using any function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "first_message = [{\"role\": \"user\", \"content\": \"What's the weather like in San Francisco?\"}]\n",
+ "# 'auto' : Let the model decide what function to call\n",
+ "print(\"Let the model decide what function to call:\")\n",
+ "print(get_function_call(first_message, \"auto\")[\"choices\"][0]['message'])\n",
+ "\n",
+ "# 'none' : Don't call any function \n",
+ "print(\"Don't call any function:\")\n",
+ "print(get_function_call(first_message, \"none\")[\"choices\"][0]['message'])\n",
+ "\n",
+ "# force a specific function call\n",
+ "print(\"Force a specific function call:\")\n",
+ "print(get_function_call(first_message, function_call={\"name\": \"get_current_weather\"})[\"choices\"][0]['message'])"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2.0 Defining functions\n",
+ "Now that we know how to work with functions, let's define some functions in code so that we can walk through the process of using functions end to end."
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Function #1: Get current time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pytz\n",
+ "from datetime import datetime\n",
+ "\n",
+ "def get_current_time(location):\n",
+ " try:\n",
+ " # Get the timezone for the city\n",
+ " timezone = pytz.timezone(location)\n",
+ "\n",
+ " # Get the current time in the timezone\n",
+ " now = datetime.now(timezone)\n",
+ " current_time = now.strftime(\"%I:%M:%S %p\")\n",
+ "\n",
+ " return current_time\n",
+ " except pytz.exceptions.UnknownTimeZoneError:\n",
+ " return \"Sorry, I couldn't find the timezone for that location.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "get_current_time(\"America/New_York\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Function #2: Get stock market data\n",
+ "We use the `yfinance` library to retrieve the latest stock price from Yahoo Finance, but you could easily edit the code to call out to any other API that provides real-time data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import yfinance as yf\n",
+ "\n",
+ "def get_current_stock_price(name):\n",
+ " \"\"\"Method to get current stock price\"\"\"\n",
+ " ticker_data = yf.Ticker(name)\n",
+ " recent = ticker_data.history(period='1d')\n",
+ " return str(recent.iloc[0]['Close']) + ' USD'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(get_current_stock_price(\"MSFT\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Function #3: Calculator "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import math\n",
+ "\n",
+ "def calculator(num1, num2, operator):\n",
+ " if operator == '+':\n",
+ " return str(num1 + num2)\n",
+ " elif operator == '-':\n",
+ " return str(num1 - num2)\n",
+ " elif operator == '*':\n",
+ " return str(num1 * num2)\n",
+ " elif operator == '/':\n",
+ " return str(num1 / num2)\n",
+ " elif operator == '**':\n",
+ " return str(num1 ** num2)\n",
+ " elif operator == 'sqrt':\n",
+ " return str(math.sqrt(num1))\n",
+ " else:\n",
+ " return \"Invalid operator\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(calculator(5, 5, '+'))"
+ ]
+ },
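+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In the next section, the model will return function arguments as a stringified JSON object. As a small preview (with a hand-written arguments string standing in for the model's output), such a string can be parsed with `json.loads` and unpacked into the `calculator` function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "# Example arguments string, similar to what the model returns for `calculator`\n",
+ "arguments = '{\"num1\": 5, \"num2\": 3, \"operator\": \"*\"}'\n",
+ "\n",
+ "# Parse the string into a dict and unpack it as keyword arguments\n",
+ "args = json.loads(arguments)\n",
+ "print(calculator(**args))"
+ ]
+ },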
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3.0 Calling a function using GPT\n",
+ "\n",
+ "Steps for Function Calling: \n",
+ "\n",
+ "1. Call the model with the user query and a set of functions defined in the functions parameter.\n",
+ "2. The model can choose to call a function; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may generate invalid JSON or hallucinate parameters).\n",
+ "3. Parse the string into JSON in your code, and call your function with the provided arguments if they exist.\n",
+ "4. Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.\n",
+ "\n",
+ "### 3.1 Describe the functions so that the model knows how to call them"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "functions = [\n",
+ " {\n",
+ " \"name\": \"get_current_time\",\n",
+ " \"description\": \"Get the current time in a given location\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"location\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The location name. The pytz module is used to get the timezone for that location. Location names should be in a format like America/New_York, Asia/Bangkok, Europe/London\",\n",
+ " }\n",
+ " },\n",
+ " \"required\": [\"location\"],\n",
+ " },\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"get_current_stock_price\",\n",
+ " \"description\": \"Get the stock value for a given stock name\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"name\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The stock name. The stock market symbol name is used to retrieve the value on the stock exchange\"\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"name\"],\n",
+ " }, \n",
+ " },\n",
+ " {\n",
+ " \"name\": \"calculator\",\n",
+ " \"description\": \"A simple calculator used to perform basic arithmetic operations\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"num1\": {\"type\": \"number\"},\n",
+ " \"num2\": {\"type\": \"number\"},\n",
+ " \"operator\": {\"type\": \"string\", \"enum\": [\"+\", \"-\", \"*\", \"/\", \"**\", \"sqrt\"]},\n",
+ " },\n",
+ " \"required\": [\"num1\", \"num2\", \"operator\"],\n",
+ " },\n",
+ " }\n",
+ " ]\n",
+ "\n",
+ "available_functions = {\n",
+ " \"get_current_time\": get_current_time,\n",
+ " \"get_current_stock_price\": get_current_stock_price,\n",
+ " \"calculator\": calculator,\n",
+ " } "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3.2 Define a helper function to validate the function call\n",
+ "It's possible that the model could generate incorrect function calls, so it's important to validate them. Here we define a simple helper function to validate the function call, although you could apply more complex validation for your use case."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import inspect\n",
+ "\n",
+ "# helper method used to check if the correct arguments are provided to a function\n",
+ "def check_args(function, args):\n",
+ " sig = inspect.signature(function)\n",
+ " params = sig.parameters\n",
+ "\n",
+ " # Check if there are extra arguments\n",
+ " for name in args:\n",
+ " if name not in params:\n",
+ " return False\n",
+ " # Check if the required arguments are provided \n",
+ " for name, param in params.items():\n",
+ " if param.default is param.empty and name not in args:\n",
+ " return False\n",
+ "\n",
+ " return True"
+ ]
+ },
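+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, we can run `check_args` against the `calculator` function defined earlier: a complete argument set is accepted, while a missing required argument or an unexpected extra argument is rejected."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A complete argument set passes validation (True)\n",
+ "print(check_args(calculator, {\"num1\": 5, \"num2\": 3, \"operator\": \"+\"}))\n",
+ "\n",
+ "# A missing required argument fails validation (False)\n",
+ "print(check_args(calculator, {\"num1\": 5, \"num2\": 3}))\n",
+ "\n",
+ "# An unexpected extra argument also fails validation (False)\n",
+ "print(check_args(calculator, {\"num1\": 5, \"num2\": 3, \"mode\": \"strict\", \"operator\": \"+\"}))"
+ ]
+ },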
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def run_conversation(messages, functions, available_functions, deployment_id):\n",
+ " # Step 1: send the conversation and available functions to GPT\n",
+ "\n",
+ " response = openai.ChatCompletion.create(\n",
+ " deployment_id=deployment_id,\n",
+ " messages=messages,\n",
+ " functions=functions,\n",
+ " function_call=\"auto\", \n",
+ " )\n",
+ " response_message = response[\"choices\"][0][\"message\"]\n",
+ "\n",
+ "\n",
+ " # Step 2: check if GPT wanted to call a function\n",
+ " if response_message.get(\"function_call\"):\n",
+ " print(\"Recommended Function call:\")\n",
+ " print(response_message.get(\"function_call\"))\n",
+ " print()\n",
+ " \n",
+ " # Step 3: call the function\n",
+ " # Note: the JSON response may not always be valid; be sure to handle errors\n",
+ " \n",
+ " function_name = response_message[\"function_call\"][\"name\"]\n",
+ " \n",
+ " # verify function exists\n",
+ " if function_name not in available_functions:\n",
+ " return \"Function \" + function_name + \" does not exist\"\n",
+ " function_to_call = available_functions[function_name] \n",
+ " \n",
+ " # verify function has correct number of arguments\n",
+ " function_args = json.loads(response_message[\"function_call\"][\"arguments\"])\n",
+ " if check_args(function_to_call, function_args) is False:\n",
+ " return \"Invalid number of arguments for function: \" + function_name\n",
+ " function_response = function_to_call(**function_args)\n",
+ " \n",
+ " print(\"Output of function call:\")\n",
+ " print(function_response)\n",
+ " print()\n",
+ " \n",
+ " # Step 4: send the info on the function call and function response to GPT\n",
+ " \n",
+ " # adding assistant response to messages\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": response_message[\"role\"],\n",
+ " \"function_call\": {\n",
+ " \"name\": response_message[\"function_call\"][\"name\"],\n",
+ " \"arguments\": response_message[\"function_call\"][\"arguments\"],\n",
+ " },\n",
+ " \"content\": None\n",
+ " }\n",
+ " )\n",
+ "\n",
+ " # adding function response to messages\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": \"function\",\n",
+ " \"name\": function_name,\n",
+ " \"content\": function_response,\n",
+ " }\n",
+ " ) # extend conversation with function response\n",
+ "\n",
+ " print(\"Messages in second request:\")\n",
+ " for message in messages:\n",
+ " print(message)\n",
+ " print()\n",
+ "\n",
+ " second_response = openai.ChatCompletion.create(\n",
+ " messages=messages,\n",
+ " deployment_id=deployment_id\n",
+ " ) # get a new response from GPT where it can see the function response\n",
+ "\n",
+ " return second_response"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "messages = [{\"role\": \"user\", \"content\": \"What time is it in New York?\"}]\n",
+ "assistant_response = run_conversation(messages, functions, available_functions, deployment_name)\n",
+ "print(assistant_response['choices'][0]['message'])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "messages = [{\"role\": \"user\", \"content\": \"What is the value of the Microsoft stock?\"}]\n",
+ "assistant_response = run_conversation(messages, functions, available_functions, deployment_name)\n",
+ "print(assistant_response)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.6"
+ },
+ "orig_nbformat": 4
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/requirements.txt b/requirements.txt
index 9c23a42..c80235b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,4 +10,5 @@ requests==2.31.0
unstructured==0.10.5
markdown==3.4.4
qdrant-client==1.4.0
-chromadb==0.4.6
\ No newline at end of file
+chromadb==0.4.6
+yfinance==0.2.32
\ No newline at end of file