Merge pull request #2073 from MicrosoftDocs/main

12/18/2024 PM Publish

Taojunshen authored Dec 18, 2024
2 parents 34987bb + a06e343 commit 6064893
Showing 64 changed files with 609 additions and 261 deletions.
14 changes: 1 addition & 13 deletions articles/ai-services/agents/how-to/tools/openapi-spec.md
@@ -25,7 +25,7 @@ OpenAPI Specified tool improves your function calling experience by providing st
automated, and scalable API integrations that enhance the capabilities and efficiency of your agent.
[OpenAPI specifications](https://spec.openapis.org/oas/latest.html) provide a formal standard for
describing HTTP APIs. This allows people to understand how an API works, how a sequence of APIs
work together, generate client code, create tests, apply design standards, and more.
work together, generate client code, create tests, apply design standards, and more. Currently, we support three authentication types with OpenAPI 3.0 specified tools: `anonymous`, `API key`, and `managed identity`.

## Set up
1. Ensure you've completed the prerequisites and setup steps in the [quickstart](../../quickstart.md).
@@ -51,18 +51,6 @@ work together, generate client code, create tests, apply design standards, and m
- Connection name: `YOUR_CONNECTION_NAME` (You will use this connection name in the sample code below.)
    - Access: you can choose either *this project only* or *shared to all projects*. Just make sure the project whose connection string you use in the sample code below has access to this connection.

1. Update your OpenAPI Spec with the following:
```json
"components": {
"securitySchemes": {
"cosoLocationApiLambdaAuthorizer": {
"type": "apiKey",
"name": "key",
"in": "query"
}
}
}
```
::: zone-end

::: zone pivot="code-example"
3 changes: 2 additions & 1 deletion articles/ai-services/agents/how-to/tools/overview.md
@@ -6,7 +6,7 @@ services: cognitive-services
manager: nitinme
ms.service: azure
ms.topic: how-to
ms.date: 12/11/2024
ms.date: 12/18/2024
author: aahill
ms.author: aahi
ms.custom: azure-ai-agents
@@ -39,3 +39,4 @@ Agents can access multiple tools in parallel. These can be both Azure OpenAI-hos
| [Code interpreter](./code-interpreter.md) | Enables agents to write and run Python code in a sandboxed execution environment. | ✔️ | ✔️ | ✔️ | ✔️ |
|[Function calling](./function-calling.md) | Allows you to describe the structure of functions to an agent and then return the functions that need to be called along with their arguments. | ✔️ | ✔️ | ✔️ | ✔️ |
|[OpenAPI Specification](./openapi-spec.md) | Connect to an external API using an OpenAPI 3.0 specified tool, allowing for scalable interoperability with various applications. | ✔️ | ✔️ | ✔️ | ✔️ |
|[Azure Functions](./azure-functions.md) | Use Azure Functions to leverage the scalability and flexibility of serverless computing. | ✔️ | | | ✔️ |
195 changes: 14 additions & 181 deletions articles/ai-services/openai/concepts/models.md
@@ -18,7 +18,7 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi

| Models | Description |
|--|--|
| [o1-preview and o1-mini](#o1-preview-and-o1-mini-models-limited-access) | Limited access models, specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. |
| [o1 & o1-mini](#o1-and-o1-mini-models-limited-access) | Limited access models, specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. |
| [GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo) | The latest and most capable Azure OpenAI models, with multimodal versions that can accept both text and images as input. |
| [GPT-4o-Realtime-Preview](#gpt-4o-realtime-preview) | A GPT-4o model that supports low-latency, "speech in, speech out" conversational interactions. |
| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
@@ -28,200 +28,33 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi
| [Whisper](#whisper-models) | A series of models in preview that can transcribe and translate speech to text. |
| [Text to speech](#text-to-speech-models-preview) (Preview) | A series of models in preview that can synthesize text to speech. |

## o1-preview and o1-mini models limited access
## o1 and o1-mini models limited access

The Azure OpenAI `o1-preview` and `o1-mini` models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.
The Azure OpenAI `o1` and `o1-mini` models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.

| Model ID | Description | Max Request (tokens) | Training Data (up to) |
| --- | :--- |:--- |:---: |
|`o1-preview` (2024-09-12) | The most capable model in the o1 series, offering enhanced reasoning abilities.| Input: 128,000 <br> Output: 32,768 | Oct 2023 |
| `o1` (2024-12-17) | The most capable model in the o1 series, offering enhanced reasoning abilities. <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> | Input: 200,000 <br> Output: 100,000 | |
|`o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
| `o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |

### Availability

The `o1-preview` and `o1-mini` models are now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria**.
The `o1` and `o1-mini` models are now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria**. Customers who previously applied and received access to `o1-preview` don't need to reapply; they are automatically on the waitlist for the latest model.

Request access: [limited access model application](https://aka.ms/oai/modelaccess)
Request access: [limited access model application](https://aka.ms/OAI/o1access)

Once access has been granted, you will need to create a deployment for each model.
Once access has been granted, you will need to create a deployment for each model. If you have an existing `o1-preview` deployment, upgrading it is currently not supported; you will need to create a new deployment.

### API support

Support for the **o1 series** models was added in API version `2024-09-01-preview`.

The `max_tokens` parameter has been deprecated and replaced with the new `max_completion_tokens` parameter. **o1 series** models will only work with the `max_completion_tokens` parameter.

### Usage

These models do not currently support the same set of parameters as other models that use the chat completions API. Only a very limited subset is supported, so common parameters like `temperature` and `top_p` are not available, and including them will cause your request to fail. The `o1-preview` and `o1-mini` models also do not accept the system message role as part of the messages array.

# [Python (Microsoft Entra ID)](#tab/python-secure)

You may need to upgrade your version of the OpenAI Python library to take advantage of the new `max_completion_tokens` parameter.

```cmd
pip install openai --upgrade
```

If you are new to using Microsoft Entra ID for authentication, see [How to configure Azure OpenAI Service with Microsoft Entra ID authentication](../how-to/managed-identity.md).

```python
import os

from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_ad_token_provider=token_provider,
    api_version="2024-09-01-preview"
)

response = client.chat.completions.create(
    model="o1-preview-new",  # replace with the deployment name of your o1-preview or o1-mini model
    messages=[
        {"role": "user", "content": "What steps should I think about when writing my first Python API?"},
    ],
    max_completion_tokens=5000
)

print(response.model_dump_json(indent=2))
```

# [Python (key-based auth)](#tab/python)

You may need to upgrade your version of the OpenAI Python library to take advantage of the new `max_completion_tokens` parameter.

```cmd
pip install openai --upgrade
```

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-09-01-preview"
)

response = client.chat.completions.create(
    model="o1-preview-new",  # replace with the deployment name of your o1-preview or o1-mini model
    messages=[
        {"role": "user", "content": "What steps should I think about when writing my first Python API?"},
    ],
    max_completion_tokens=5000
)

print(response.model_dump_json(indent=2))
```

# [Output](#tab/python-output)

```json
{
"id": "chatcmpl-AEj7pKFoiTqDPHuxOcirA9KIvf3yz",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "Writing your first Python API is an exciting step in developing software that can communicate with other applications. An API (Application Programming Interface) allows different software systems to interact with each other, enabling data exchange and functionality sharing. Here are the steps you should consider when creating your first Python API:\n\n1. **Define the Purpose and Requirements**\n\n - **Identify the Functionality**: Clearly outline what your API is supposed to do. What data or services will it provide to the users?\n - **Determine the Endpoints**: Plan the different URLs (endpoints) through which users can access the API functionalities.\n - **Specify Request and Response Formats**: Decide on the data formats (usually JSON) for incoming requests and outgoing responses.\n\n2. **Choose the Right Framework**\n\n Python offers several frameworks for building APIs. Two of the most popular are:\n\n - **Flask**: A lightweight and flexible web framework, great for small to medium-sized APIs.\n - **FastAPI**: A modern, high-performance framework for building APIs with Python 3.6+ types, offering automatic interactive documentation.\n\n **Example**:\n ```bash\n pip install flask\n ```\n or\n ```bash\n pip install fastapi uvicorn\n ```\n\n3. **Set Up the Development Environment**\n\n - **Create a Virtual Environment**: Isolate your project dependencies using `venv` or `conda`.\n - **Install Required Packages**: Ensure all necessary libraries and packages are installed.\n\n **Example**:\n ```bash\n python -m venv env\n source env/bin/activate # On Windows use `env\\Scripts\\activate`\n ```\n\n4. **Implement the API Endpoints**\n\n - **Write the Code for Each Endpoint**: Implement the logic that handles requests and returns responses.\n - **Use Decorators to Define Routes**: In frameworks like Flask, you use decorators to specify the URL endpoints.\n\n **Example with Flask**:\n ```python\n from flask import Flask, request, jsonify\n\n app = Flask(__name__)\n\n @app.route('/hello', methods=['GET'])\n def hello_world():\n return jsonify({'message': 'Hello, World!'})\n\n if __name__ == '__main__':\n app.run(debug=True)\n ```\n\n5. **Handle Data Serialization and Deserialization**\n\n - **Parsing Incoming Data**: Use libraries to parse JSON or other data formats from requests.\n - **Formatting Output Data**: Ensure that responses are properly formatted in JSON or XML.\n\n6. **Implement Error Handling**\n\n - **Handle Exceptions Gracefully**: Provide meaningful error messages and HTTP status codes.\n - **Validate Input Data**: Check for required fields and appropriate data types to prevent errors.\n\n **Example**:\n ```python\n @app.errorhandler(404)\n def resource_not_found(e):\n return jsonify(error=str(e)), 404\n ```\n\n7. **Add Authentication and Authorization (If Necessary)**\n\n - **Secure Endpoints**: If your API requires, implement security measures such as API keys, tokens (JWT), or OAuth.\n - **Manage User Sessions**: Handle user login states and permissions appropriately.\n\n8. **Document Your API**\n\n - **Use Tools Like Swagger/OpenAPI**: Automatically generate interactive API documentation.\n - **Provide Usage Examples**: Help users understand how to interact with your API.\n\n **Example with FastAPI**:\n FastAPI automatically generates docs at `/docs` using Swagger UI.\n\n9. 
**Test Your API**\n\n - **Write Unit and Integration Tests**: Ensure each endpoint works as expected.\n - **Use Testing Tools**: Utilize tools like `unittest`, `pytest`, or API testing platforms like Postman.\n\n **Example**:\n ```python\n import unittest\n class TestAPI(unittest.TestCase):\n def test_hello_world(self):\n response = app.test_client().get('/hello')\n self.assertEqual(response.status_code, 200)\n ```\n\n10. **Optimize Performance**\n\n - **Improve Response Times**: Optimize your code and consider using asynchronous programming if necessary.\n - **Manage Resource Utilization**: Ensure your API can handle the expected load.\n\n11. **Deploy Your API**\n\n - **Choose a Hosting Platform**: Options include AWS, Heroku, DigitalOcean, etc.\n - **Configure the Server**: Set up the environment to run your API in a production setting.\n - **Use a Production Server**: Instead of the development server, use WSGI servers like Gunicorn or Uvicorn.\n\n **Example**:\n ```bash\n uvicorn main:app --host 0.0.0.0 --port 80\n ```\n\n12. **Monitor and Maintain**\n\n - **Logging**: Implement logging to track events and errors.\n - **Monitoring**: Use monitoring tools to track performance and uptime.\n - **Update and Patch**: Keep dependencies up to date and patch any security vulnerabilities.\n\n13. **Consider Versioning**\n\n - **Plan for Updates**: Use versioning in your API endpoints to manage changes without breaking existing clients.\n - **Example**:\n ```python\n @app.route('/v1/hello', methods=['GET'])\n ```\n\n14. **Gather Feedback and Iterate**\n\n - **User Feedback**: Encourage users to provide feedback on your API.\n - **Continuous Improvement**: Use the feedback to make improvements and add features.\n\n**Additional Tips**:\n\n- **Keep It Simple**: Start with a minimal viable API and expand functionality over time.\n- **Follow RESTful Principles**: Design your API according to REST standards to make it intuitive and standard-compliant.\n- **Security Best Practices**: Always sanitize inputs and protect against common vulnerabilities like SQL injection and cross-site scripting (XSS).\nBy following these steps, you'll be well on your way to creating a functional and robust Python API. Good luck with your development!",
"refusal": null,
"role": "assistant",
"function_call": null,
"tool_calls": null
},
"content_filter_results": {
"hate": {
"filtered": false,
"severity": "safe"
},
"protected_material_code": {
"filtered": false,
"detected": false
},
"protected_material_text": {
"filtered": false,
"detected": false
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": false,
"severity": "safe"
}
}
}
],
"created": 1728073417,
"model": "o1-preview-2024-09-12",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_503a95a7d8",
"usage": {
"completion_tokens": 1843,
"prompt_tokens": 20,
"total_tokens": 1863,
"completion_tokens_details": {
"audio_tokens": null,
"reasoning_tokens": 448
},
"prompt_tokens_details": {
"audio_tokens": null,
"cached_tokens": 0
}
},
"prompt_filter_results": [
{
"prompt_index": 0,
"content_filter_results": {
"custom_blocklists": {
"filtered": false
},
"hate": {
"filtered": false,
"severity": "safe"
},
"jailbreak": {
"filtered": false,
"detected": false
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": false,
"severity": "safe"
}
}
}
]
}
```

---
To learn more about the advanced `o1` series models, see [getting started with o1 series reasoning models](../how-to/reasoning.md).

### Region availability

Available for standard and global standard deployment in East US, East US2, North Central US, South Central US, Sweden Central, West US, and West US3 for approved customers.
| Model | Region |
|---|---|
|`o1` | East US2 (Global Standard) <br> Sweden Central (Global Standard) |
| `o1-preview` | See the [models table](#global-standard-model-availability). |
| `o1-mini` | See the [models table](#global-provisioned-managed-model-availability). |

## GPT-4o-Realtime-Preview

6 changes: 3 additions & 3 deletions articles/ai-services/openai/how-to/gpt-with-vision.md
@@ -15,7 +15,7 @@ manager: nitinme

Vision-enabled chat models are large multimodal models (LMMs) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are GPT-4 Turbo with Vision, GPT-4o, and GPT-4o-mini.

The vision-enabled models answer general questions about what's present in the images or videos you upload.
The vision-enabled models answer general questions about what's present in the images you upload.

> [!TIP]
> To use vision-enabled models, you call the Chat Completion API on a supported model that you have deployed. If you're not familiar with the Chat Completion API, see the [Vision-enabled chat how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions).
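
To make the tip concrete, here is a minimal sketch of such a call with the `openai` Python library; the deployment name, image file, and API version are placeholders rather than values from this commit:

```python
import base64
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",  # assumed version; use one your resource supports
)

# Encode a local image as a data URL; a publicly reachable HTTPS image URL also works.
with open("photo.jpg", "rb") as f:
    image_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: the deployment name of your vision-enabled model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's shown in this image?"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```
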
@@ -290,7 +290,7 @@ Every response includes a `"finish_reason"` field. It has the following possible
- `length`: Incomplete model output due to the `max_tokens` input parameter or the model's token limit.
- `content_filter`: Omitted content due to a flag from our content filters.
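
As a quick illustration, a minimal sketch that branches on these values, assuming `response` is a chat completions response object like the ones returned above:

```python
# Sketch: inspect why generation stopped for the first choice.
choice = response.choices[0]

if choice.finish_reason == "stop":
    print(choice.message.content)  # the model completed its answer
elif choice.finish_reason == "length":
    print("Truncated: raise max_tokens or shorten the prompt.")
elif choice.finish_reason == "content_filter":
    print("Content was omitted by the content filters.")
```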


<!--
### Create a video retrieval index
@@ -366,7 +366,7 @@ Every response includes a `"finish_reason"` field. It has the following possible
```bash
curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "ocp-apim-subscription-key: <YOUR_SUBSCRIPTION_KEY>"
```
-->

## Next steps

1 change: 1 addition & 0 deletions articles/ai-services/openai/how-to/prompt-caching.md
@@ -22,6 +22,7 @@ Caches are typically cleared within 5-10 minutes of inactivity and are always re

Currently, only the following models support prompt caching with Azure OpenAI:

- `o1-2024-12-17`
- `o1-preview-2024-09-12`
- `o1-mini-2024-09-12`
- `gpt-4o-2024-05-13`
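
As a rough illustration, cache hits surface in the `usage` block of a response (see the `prompt_tokens_details.cached_tokens` field in the sample output earlier in this commit). A minimal sketch, assuming the key-based setup shown above and a hypothetical deployment name:

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-09-01-preview",
)

# Repeated requests that share a long, identical prefix are what caching rewards.
shared_prefix = "You are a support agent. " + ("Reference document text. " * 200)

response = client.chat.completions.create(
    model="gpt-4o-2024-05-13",  # hypothetical deployment name of a supported model
    messages=[{"role": "user", "content": shared_prefix + "Summarize the document."}],
)

# Inspect how many prompt tokens were served from the cache.
details = response.usage.prompt_tokens_details
print("Cached prompt tokens:", details.cached_tokens if details else 0)
```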