
Commit ef01eba

Merge branch 'main' into litellm_embeddings_fix
2 parents 9b76c73 + b803af4

File tree: 266 files changed, +6882 -2533 lines


.circleci/config.yml

Lines changed: 3 additions & 3 deletions

Each of the three pytest invocations drops one output flag (`-v` in the router tests job, `-s` in the other two):

```diff
@@ -532,7 +532,7 @@ jobs:
          command: |
            pwd
            ls
-           python -m pytest -vv tests/router_unit_tests --cov=litellm --cov-report=xml -x -s -v --junitxml=test-results/junit.xml --durations=5
+           python -m pytest -vv tests/router_unit_tests --cov=litellm --cov-report=xml -x -s --junitxml=test-results/junit.xml --durations=5
          no_output_timeout: 120m
      - run:
          name: Rename the coverage files
@@ -1164,7 +1164,7 @@ jobs:
          command: |
            pwd
            ls
-           python -m pytest -vv tests/test_litellm --cov=litellm --cov-report=xml -s -v --junitxml=test-results/junit-litellm.xml --durations=10 -n 8
+           python -m pytest -vv tests/test_litellm --cov=litellm --cov-report=xml -v --junitxml=test-results/junit-litellm.xml --durations=10 -n 8
          no_output_timeout: 120m
      - run:
          name: Rename the coverage files
@@ -1396,7 +1396,7 @@ jobs:
          command: |
            pwd
            ls
-           python -m pytest -vv tests/image_gen_tests --cov=litellm --cov-report=xml -x -s -v --junitxml=test-results/junit.xml --durations=5
+           python -m pytest -vv tests/image_gen_tests --cov=litellm --cov-report=xml -x -v --junitxml=test-results/junit.xml --durations=5
          no_output_timeout: 120m
      - run:
          name: Rename the coverage files
```

docker/Dockerfile.dev

Lines changed: 3 additions & 0 deletions

```diff
@@ -57,6 +57,9 @@ USER root
 # Install only runtime dependencies
 RUN apt-get update && apt-get install -y --no-install-recommends \
     libssl3 \
+    libatomic1 \
+    nodejs \
+    npm \
     && rm -rf /var/lib/apt/lists/*
 
 WORKDIR /app
```

docs/my-website/docs/exception_mapping.md

Lines changed: 79 additions & 0 deletions

The hunk at @@ -112,6 +112,85 @@ inserts a new "Advanced" section between the existing retry example and "## Details":

## Advanced

### Accessing Provider-Specific Error Details

LiteLLM exceptions include a `provider_specific_fields` attribute that contains additional error information specific to each provider. This is particularly useful for Azure OpenAI, which provides detailed content filtering information.

#### Azure OpenAI - Content Policy Violation Inner Error Access

When Azure OpenAI returns content policy violations, you can access the detailed content filtering results through the `innererror` field:

```python
import litellm
from litellm.exceptions import ContentPolicyViolationError

try:
    response = litellm.completion(
        model="azure/gpt-4",
        messages=[
            {
                "role": "user",
                "content": "Some content that might violate policies"
            }
        ]
    )
except ContentPolicyViolationError as e:
    # Access Azure-specific error details
    if e.provider_specific_fields and "innererror" in e.provider_specific_fields:
        innererror = e.provider_specific_fields["innererror"]

        # Access content filter results
        content_filter_result = innererror.get("content_filter_result", {})

        print(f"Content filter code: {innererror.get('code')}")
        print(f"Hate filtered: {content_filter_result.get('hate', {}).get('filtered')}")
        print(f"Violence severity: {content_filter_result.get('violence', {}).get('severity')}")
        print(f"Sexual content filtered: {content_filter_result.get('sexual', {}).get('filtered')}")
```

**Example Response Structure:**

When calling the LiteLLM proxy, content policy violations will return detailed filtering information:

```json
{
  "error": {
    "message": "litellm.ContentPolicyViolationError: AzureException - The response was filtered due to the prompt triggering Azure OpenAI's content management policy...",
    "type": null,
    "param": null,
    "code": "400",
    "provider_specific_fields": {
      "innererror": {
        "code": "ResponsibleAIPolicyViolation",
        "content_filter_result": {
          "hate": {
            "filtered": true,
            "severity": "high"
          },
          "jailbreak": {
            "filtered": false,
            "detected": false
          },
          "self_harm": {
            "filtered": false,
            "severity": "safe"
          },
          "sexual": {
            "filtered": false,
            "severity": "safe"
          },
          "violence": {
            "filtered": true,
            "severity": "medium"
          }
        }
      }
    }
  }
}
```

Unchanged context following the new section:

## Details

To see how it's implemented - [check out the code](https://github.com/BerriAI/litellm/blob/a42c197e5a6de56ea576c73715e6c7c6b19fa249/litellm/utils.py#L1217)
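Proxy clients can pull the same details out of the JSON error body shown above. A minimal client-side sketch, assuming a proxy at `http://localhost:4000` and the `requests` library; the URL, key, and endpoint path are placeholders, and only the error shape comes from the example response:

```python
import requests

# Hypothetical proxy call; URL and API key are placeholders
resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "azure/gpt-4",
        "messages": [{"role": "user", "content": "Some content that might violate policies"}],
    },
)

if resp.status_code == 400:
    error = resp.json().get("error", {})
    innererror = (error.get("provider_specific_fields") or {}).get("innererror", {})
    print(f"Content filter code: {innererror.get('code')}")
    # Walk every category in content_filter_result, per the JSON shape above
    for category, result in innererror.get("content_filter_result", {}).items():
        if result.get("filtered"):
            print(f"{category}: filtered (severity={result.get('severity')})")
```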

docs/my-website/docs/providers/azure/videos.md

Lines changed: 5 additions & 13 deletions

The `AZURE_OPENAI_API_VERSION` setup is dropped throughout, the `custom_llm_provider`/`model` arguments are removed from the status, download, and remix calls, and `video_remix` gains a `video_id` argument:

````diff
@@ -25,7 +25,6 @@ LiteLLM supports Azure OpenAI's video generation models including Sora with full
 import os
 os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
 os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"
-os.environ["AZURE_OPENAI_API_VERSION"] = "2024-02-15-preview"
 ```
 
 ### Basic Usage
@@ -37,7 +36,6 @@ import time
 
 os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
 os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"
-os.environ["AZURE_OPENAI_API_VERSION"] = "2024-02-15-preview"
 
 # Generate video
 response = video_generation(
@@ -53,8 +51,7 @@ print(f"Initial Status: {response.status}")
 # Check status until video is ready
 while True:
     status_response = video_status(
-        video_id=response.id,
-        custom_llm_provider="azure"
+        video_id=response.id
     )
 
     print(f"Current Status: {status_response.status}")
@@ -69,8 +66,7 @@ while True:
 
 # Download video content when ready
 video_bytes = video_content(
-    video_id=response.id,
-    custom_llm_provider="azure"
+    video_id=response.id
 )
 
 # Save to file
@@ -87,7 +83,6 @@ Here's how to call Azure video generation models with the LiteLLM Proxy Server
 ```bash
 export AZURE_OPENAI_API_KEY="your-azure-api-key"
 export AZURE_OPENAI_API_BASE="https://your-resource.openai.azure.com/"
-export AZURE_OPENAI_API_VERSION="2024-02-15-preview"
 ```
 
 ### 2. Start the proxy
@@ -102,7 +97,6 @@ model_list:
     model: azure/sora-2
     api_key: os.environ/AZURE_OPENAI_API_KEY
     api_base: os.environ/AZURE_OPENAI_API_BASE
-    api_version: "2024-02-15-preview"
 ```
 
 </TabItem>
@@ -211,8 +205,7 @@ general_settings:
 ```python
 # Download video content
 video_bytes = video_content(
-    video_id="video_1234567890",
-    model="azure/sora-2"
+    video_id="video_1234567890"
 )
 
 # Save to file
@@ -243,8 +236,7 @@ def generate_and_download_video(prompt):
 
     # Step 3: Download video
     video_bytes = litellm.video_content(
-        video_id=video_id,
-        custom_llm_provider="azure"
+        video_id=video_id
     )
 
     # Step 4: Save to file
@@ -264,9 +256,9 @@ video_file = generate_and_download_video(
 ```python
 # Video editing with reference image
 response = litellm.video_remix(
+    video_id="video_456",
     prompt="Make the cat jump higher",
     input_reference=open("path/to/image.jpg", "rb"),  # Reference image as file object
-    custom_llm_provider="azure"
     seconds="8"
 )
 ```
````
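Taken together, the hunks above simplify the SDK flow to the shape below. This is a sketch, not the doc's verbatim example: the import path, the `"completed"` status value, and the 10-second polling interval are assumptions, while the call signatures match the post-change lines in the diff:

```python
import os
import time

from litellm import video_generation, video_status, video_content  # assumed import path

os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_OPENAI_API_BASE"] = "https://your-resource.openai.azure.com/"

# Generate video (model name taken from the proxy config hunk above)
response = video_generation(
    model="azure/sora-2",
    prompt="A cat playing with a ball of yarn",
)
print(f"Initial Status: {response.status}")

# Poll until ready; video_status no longer needs custom_llm_provider="azure"
while True:
    status_response = video_status(video_id=response.id)
    print(f"Current Status: {status_response.status}")
    if status_response.status == "completed":  # terminal status value is an assumption
        break
    time.sleep(10)  # polling interval is an assumption

# Download and save; video_content likewise infers the provider from the video id
video_bytes = video_content(video_id=response.id)
with open("generated_video.mp4", "wb") as f:
    f.write(video_bytes)
```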
docs/my-website/docs/providers/gemini.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';
 | Provider Route on LiteLLM | `gemini/` |
 | Provider Doc | [Google AI Studio ↗](https://aistudio.google.com/) |
 | API Endpoint for Provider | https://generativelanguage.googleapis.com |
-| Supported OpenAI Endpoints | `/chat/completions`, [`/embeddings`](../embedding/supported_embedding#gemini-ai-embedding-models), `/completions` |
+| Supported OpenAI Endpoints | `/chat/completions`, [`/embeddings`](../embedding/supported_embedding#gemini-ai-embedding-models), `/completions`, [`/videos`](./gemini/videos.md) |
 | Pass-through Endpoint | [Supported](../pass_through/google_ai_studio.md) |
 
 <br />
```
