
Releases: BerriAI/litellm

v1.18.1

18 Jan 17:54

What's Changed

New Contributors

Full Changelog: v1.18.0...v1.18.1

v1.18.0

18 Jan 02:55

What's Changed

https://docs.litellm.ai/docs/simple_proxy

  • [Feat] Proxy - Access Key metadata in callbacks by @ishaan-jaff in #1484
    • Access Proxy Key metadata in callbacks
    • Access the endpoint URL in callbacks - you can see whether /chat/completions, /embeddings, /image/generation, etc. was called
    • Support for Langfuse tags - request metadata is logged as Langfuse tags
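As a rough sketch of how a callback might consume this, the function below reads the endpoint and key metadata from a callback's kwargs. The exact kwargs layout and field names here are assumptions for illustration, not the precise litellm callback API:

```python
# Hypothetical sketch of a proxy callback reading key metadata and the endpoint.
# The kwargs structure and key names below are assumptions, not the exact litellm API.
def on_success(kwargs: dict) -> dict:
    metadata = kwargs.get("litellm_params", {}).get("metadata", {}) or {}
    return {
        "endpoint": metadata.get("endpoint"),                   # e.g. "/chat/completions"
        "key_metadata": metadata.get("user_api_key_metadata"),  # metadata stored on the proxy key
    }

event = on_success({
    "litellm_params": {
        "metadata": {
            "endpoint": "/chat/completions",
            "user_api_key_metadata": {"team": "beta"},
        }
    }
})
print(event)
```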


Support for model access groups

Use this if you have keys with access to specific models, and you want to give them all access to a new model.

You can now assign keys access to model groups, and add new models to that group via the config.yaml - https://docs.litellm.ai/docs/proxy/users#grant-access-to-new-model

curl --location 'http://localhost:8000/key/generate' \
-H 'Authorization: Bearer <your-master-key>' \
-H 'Content-Type: application/json' \
-d '{
      "models": ["beta-models"],
      "max_budget": 0
    }'

Here "beta-models" is a model access group, not an individual model.
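The same request can be sketched from Python; the endpoint, header, and payload mirror the curl call above (the master-key value is a placeholder):

```python
import json

# Build the /key/generate request shown above.
# "beta-models" is a model access group; max_budget of 0 blocks any spend on the key.
MASTER_KEY = "<your-master-key>"  # placeholder

payload = {"models": ["beta-models"], "max_budget": 0}
headers = {
    "Authorization": f"Bearer {MASTER_KEY}",
    "Content-Type": "application/json",
}
body = json.dumps(payload)
# e.g. requests.post("http://localhost:8000/key/generate", headers=headers, data=body)
print(body)
```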

Langfuse Tags logged:

  • feat(proxy_server.py): support model access groups by @krrishdholakia in #1483

Full Changelog: v1.17.18...v1.18.0


v1.17.18

18 Jan 01:39

What's Changed

  • [Fix+Test] /key/delete functions by @ishaan-jaff in #1482 Added extensive testing + improved swagger

Full Changelog: v1.17.17...v1.17.18

v1.17.17

17 Jan 22:03

What's Changed

Testing + fixes for: https://docs.litellm.ai/docs/proxy/virtual_keys

  1. Generate a Key, and use it to make a call
  2. Make a call with an invalid key, expect it to fail
  3. Make a call using a key with an invalid model - expect it to fail
  4. Make a call using a key with a valid model - expect it to pass
  5. Make a call with a key over budget, expect it to fail
  6. Make a streaming chat/completions call with a key over budget, expect it to fail
  7. Make a call with a key that never expires, expect it to pass
  8. Make a call with an expired key, expect it to fail
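The scenarios above amount to checks like the following toy validator (an illustrative sketch, not the proxy's actual implementation; all field names are assumptions):

```python
from datetime import datetime, timezone

# Toy key validator mirroring the tested scenarios; names are illustrative only.
def validate_key(key: dict, model: str) -> bool:
    # Scenarios 7-8: an expiry of None means the key never expires.
    if key.get("expires") is not None and key["expires"] <= datetime.now(timezone.utc):
        return False
    # Scenarios 3-4: if the key restricts models, the requested model must be listed.
    if key.get("models") and model not in key["models"]:
        return False
    # Scenarios 5-6: reject keys at or over their budget.
    if key.get("max_budget") is not None and key.get("spend", 0) >= key["max_budget"]:
        return False
    return True

ok_key = {"models": ["gpt-3.5-turbo"], "max_budget": 10.0, "spend": 1.0, "expires": None}
print(validate_key(ok_key, "gpt-3.5-turbo"))  # True
print(validate_key(ok_key, "gpt-4"))          # False: model not allowed for this key
```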

Full Changelog: v1.17.16...v1.17.17

v1.17.16

17 Jan 20:39

Full Changelog: v1.17.15...v1.17.16

v1.17.15

17 Jan 19:50

What's Changed

Usage - with Azure Vision enhancements

Docs: https://docs.litellm.ai/docs/providers/azure#usage---with-azure-vision-enhancements

Note: Azure requires the base_url to end with /extensions

Example

base_url=https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions
# base_url="{azure_endpoint}/openai/deployments/{azure_deployment}/extensions"
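The base_url pattern above can be sketched in Python; the resource and deployment names are the example values from the snippet:

```python
# Build the Azure /extensions base_url from the endpoint and deployment name.
azure_endpoint = "https://gpt-4-vision-resource.openai.azure.com"  # example resource
azure_deployment = "gpt-4-vision"                                  # example deployment

base_url = f"{azure_endpoint}/openai/deployments/{azure_deployment}/extensions"
print(base_url)
```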

Usage

import os
from litellm import completion

# set the two keys read by the call below
os.environ["AZURE_VISION_API_KEY"] = "your-api-key"
os.environ["AZURE_VISION_ENHANCE_KEY"] = "your-enhancement-key"

# azure call
response = completion(
            model="azure/gpt-4-vision",
            timeout=5,
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "What's in this image?"},
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": "https://avatars.githubusercontent.com/u/29436595?v=4"
                            },
                        },
                    ],
                }
            ],
            base_url="https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions",
            api_key=os.getenv("AZURE_VISION_API_KEY"),
            enhancements={"ocr": {"enabled": True}, "grounding": {"enabled": True}},
            dataSources=[
                {
                    "type": "AzureComputerVision",
                    "parameters": {
                        "endpoint": "https://gpt-4-vision-enhancement.cognitiveservices.azure.com/",
                        "key": os.environ["AZURE_VISION_ENHANCE_KEY"],
                    },
                }
            ],
)

Full Changelog: v1.17.14...v1.17.15

v1.17.14

17 Jan 18:06

Fixes a bug in the Mistral AI API optional param mapping

Full Changelog: v1.17.13...v1.17.14

v1.17.13

17 Jan 05:57

What's Changed

Proxy Virtual Keys Improvements

Added Testing + minor fixes for the following scenarios:

  1. Generate a Key, and use it to make a call
  2. Make a call with an invalid key, expect it to fail
  3. Make a call using a key with an invalid model - expect it to fail
  4. Make a call using a key with a valid model - expect it to pass
  5. Make a call with a key over budget, expect it to fail
  6. Make a streaming chat/completions call with a key over budget, expect it to fail

Full Changelog: v1.17.12...v1.17.13

v1.17.12

17 Jan 04:28

What's Changed

LiteLLM Proxy:

https://docs.litellm.ai/docs/proxy/virtual_keys

  • /key/generate, user_auth: fixed a bug in how expiry time was checked
  • user_auth: requests now fail when a user crosses their budget
  • user_auth: requests now fail when a user crosses their budget (streaming requests too)

PRs with fixes

Full Changelog: v1.17.10...v1.17.12

v1.17.10

16 Jan 23:41

What's Changed

LiteLLM Proxy:

Usage - set the number of proxy workers via the NUM_WORKERS environment variable:

export NUM_WORKERS=4
litellm --config config.yaml
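As a sketch of what the CLI consumes, the worker count comes from the NUM_WORKERS environment variable; the default of 1 and the parsing below are assumptions, not litellm's exact internals:

```python
import os

# Read the worker count the proxy CLI would use; falling back to 1 is an assumption.
num_workers = int(os.environ.get("NUM_WORKERS", "1"))
print(num_workers)
```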

https://docs.litellm.ai/docs/proxy/cli

Full Changelog: v1.17.9...v1.17.10