Merged
31 commits
df9a26d
Use video id for videos api
Sameerlite Nov 3, 2025
35f5dc9
remove mock code
Sameerlite Nov 3, 2025
64a71fb
Potential fix for code scanning alert no. 3630: Clear-text logging of…
Sameerlite Nov 3, 2025
3d66b40
remove print statements
Sameerlite Nov 3, 2025
0db0896
Update video prefix for 'video_'
Sameerlite Nov 3, 2025
4dc8670
Merge pull request #16192 from BerriAI/litellm_openai_use_videos_id
Sameerlite Nov 4, 2025
3a0650c
Merge pull request #16228 from BerriAI/main
Sameerlite Nov 4, 2025
1943be5
Add veo with openai videos unified specs
Sameerlite Nov 4, 2025
1412055
Add videos testing to UI
Sameerlite Nov 4, 2025
f973b9c
remove mock code
Sameerlite Nov 4, 2025
8762131
Remove not-needed UI changes
Sameerlite Nov 4, 2025
1a2a289
Fix mypy errors related to gemini
Sameerlite Nov 4, 2025
73ce407
fix test_transform_video_create_request
Sameerlite Nov 5, 2025
7c56fe2
Add vertex ai veo config
Sameerlite Nov 5, 2025
6685c58
Add vertex ai veo config
Sameerlite Nov 5, 2025
faf8332
Add cost tracking for gemini and add optional param passing
Sameerlite Nov 5, 2025
184430c
fix bugs related to vertex ai veo
Sameerlite Nov 5, 2025
319797d
Merge pull request #16230 from BerriAI/litellm_videos_UI_addition
Sameerlite Nov 5, 2025
7dad31f
Add Gemini Veo Video Generation in Openai Videos Unified Spec (#16229)
Sameerlite Nov 6, 2025
8b2f02b
Merge pull request #16318 from BerriAI/main
Sameerlite Nov 6, 2025
1434b20
Add constant video duration for gemini and vertex
Sameerlite Nov 6, 2025
0eb1b4a
Merge pull request #16262 from BerriAI/litellm_vertex_ai_videos_spec
Sameerlite Nov 6, 2025
c613e70
Merge branch 'litellm_sameer_nov' into litellm_veo_videos_spec
Sameerlite Nov 6, 2025
c28a075
Merge pull request #16322 from BerriAI/litellm_veo_videos_spec
Sameerlite Nov 6, 2025
02ef0a0
Fix litellm_mapped_tests tests
Sameerlite Nov 6, 2025
baca1ee
fix azure videos issue
Sameerlite Nov 7, 2025
af96b53
Added doc for videos vertex ai
Sameerlite Nov 7, 2025
a9749f1
fix seconds param error
Sameerlite Nov 8, 2025
1d433b3
fix lint errors
Sameerlite Nov 8, 2025
5f6b23f
Merge branch 'main' into litellm_sameer_nov
ishaan-jaff Nov 8, 2025
d70d00d
test_transform_video_create_response_cost_tracking_no_duration
ishaan-jaff Nov 8, 2025
18 changes: 5 additions & 13 deletions docs/my-website/docs/providers/azure/videos.md
@@ -25,7 +25,6 @@ LiteLLM supports Azure OpenAI's video generation models including Sora with full
import os
os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_OPENAI_API_VERSION"] = "2024-02-15-preview"
```

### Basic Usage
@@ -37,7 +36,6 @@ import time

os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_OPENAI_API_VERSION"] = "2024-02-15-preview"

# Generate video
response = video_generation(
@@ -53,8 +51,7 @@ print(f"Initial Status: {response.status}")
# Check status until video is ready
while True:
status_response = video_status(
-video_id=response.id,
-custom_llm_provider="azure"
+video_id=response.id
)

print(f"Current Status: {status_response.status}")
@@ -69,8 +66,7 @@

# Download video content when ready
video_bytes = video_content(
-video_id=response.id,
-custom_llm_provider="azure"
+video_id=response.id
)

# Save to file
@@ -87,7 +83,6 @@ Here's how to call Azure video generation models with the LiteLLM Proxy Server
```bash
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_API_BASE="https://your-resource.openai.azure.com/"
-export AZURE_OPENAI_API_VERSION="2024-02-15-preview"
```

### 2. Start the proxy
@@ -102,7 +97,6 @@ model_list:
model: azure/sora-2
api_key: os.environ/AZURE_OPENAI_API_KEY
api_base: os.environ/AZURE_OPENAI_API_BASE
-api_version: "2024-02-15-preview"
```
</TabItem>
@@ -211,8 +205,7 @@ general_settings:
```python
# Download video content
video_bytes = video_content(
video_id="video_1234567890",
model="azure/sora-2"
video_id="video_1234567890"
)
# Save to file
@@ -243,8 +236,7 @@ def generate_and_download_video(prompt):
# Step 3: Download video
video_bytes = litellm.video_content(
-video_id=video_id,
-custom_llm_provider="azure"
+video_id=video_id
)
# Step 4: Save to file
@@ -264,9 +256,9 @@ video_file = generate_and_download_video(
```python
# Video editing with reference image
response = litellm.video_remix(
video_id="video_456",
prompt="Make the cat jump higher",
input_reference=open("path/to/image.jpg", "rb"), # Reference image as file object
custom_llm_provider="azure"
seconds="8"
)
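Taken together, the Azure hunks above drop the `api_version` environment/config requirement and the `custom_llm_provider` / `model` arguments from the status and download calls, since the `video_`-prefixed IDs now carry the provider. A minimal sketch of the simplified flow, assuming `litellm.video_generation`, `video_status`, and `video_content` behave as the updated doc shows and that `"completed"` / `"failed"` are the terminal status strings:

```python
import os
import time

import litellm

# Credentials per the updated doc -- note: no AZURE_OPENAI_API_VERSION needed.
os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"

# Kick off generation on an Azure Sora deployment.
response = litellm.video_generation(
    model="azure/sora-2",
    prompt="A calico cat leaping between rooftops at sunset",
)
print(f"Initial Status: {response.status}")

# Poll by video id alone; the provider is inferred from the "video_" id,
# so no custom_llm_provider argument is passed.
while True:
    status_response = litellm.video_status(video_id=response.id)
    print(f"Current Status: {status_response.status}")
    if status_response.status in ("completed", "failed"):  # assumed terminal states
        break
    time.sleep(10)

# Download and save the finished video.
if status_response.status == "completed":
    video_bytes = litellm.video_content(video_id=response.id)
    with open("generated_video.mp4", "wb") as f:
        f.write(video_bytes)
```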
2 changes: 1 addition & 1 deletion docs/my-website/docs/providers/gemini.md
@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';
| Provider Route on LiteLLM | `gemini/` |
| Provider Doc | [Google AI Studio ↗](https://aistudio.google.com/) |
| API Endpoint for Provider | https://generativelanguage.googleapis.com |
-| Supported OpenAI Endpoints | `/chat/completions`, [`/embeddings`](../embedding/supported_embedding#gemini-ai-embedding-models), `/completions` |
+| Supported OpenAI Endpoints | `/chat/completions`, [`/embeddings`](../embedding/supported_embedding#gemini-ai-embedding-models), `/completions`, [`/videos`](./gemini/videos.md) |
| Pass-through Endpoint | [Supported](../pass_through/google_ai_studio.md) |

<br />
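The Gemini row above now advertises the unified `/videos` endpoint alongside chat, embeddings, and completions. A sketch of what the same SDK flow might look like against Google AI Studio, assuming the usual `GEMINI_API_KEY` variable and a hypothetical Veo model id (the concrete ids are documented in the linked `gemini/videos.md` page):

```python
import os

import litellm

os.environ["GEMINI_API_KEY"] = "your-gemini-api-key"

# "gemini/veo-3.0-generate-001" is a placeholder model id for illustration;
# check gemini/videos.md for the ids this PR actually wires up.
response = litellm.video_generation(
    model="gemini/veo-3.0-generate-001",
    prompt="A timelapse of clouds rolling over a mountain ridge",
)
print(f"Video id: {response.id}")

# The same provider-free follow-ups apply: poll and download by id only.
status = litellm.video_status(video_id=response.id)
print(f"Status: {status.status}")
```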