Commit bb9c749

Merge: Fix recurrent merge conflict
2 parents d0b0adb + b803af4 commit bb9c749

346 files changed: +9275 −2940 lines changed

.circleci/config.yml

Lines changed: 3 additions & 3 deletions

@@ -532,7 +532,7 @@ jobs:
         command: |
           pwd
           ls
-          python -m pytest -vv tests/router_unit_tests --cov=litellm --cov-report=xml -x -s -v --junitxml=test-results/junit.xml --durations=5
+          python -m pytest -vv tests/router_unit_tests --cov=litellm --cov-report=xml -x -s --junitxml=test-results/junit.xml --durations=5
         no_output_timeout: 120m
     - run:
         name: Rename the coverage files
@@ -1164,7 +1164,7 @@ jobs:
         command: |
           pwd
           ls
-          python -m pytest -vv tests/test_litellm --cov=litellm --cov-report=xml -s -v --junitxml=test-results/junit-litellm.xml --durations=10 -n 8
+          python -m pytest -vv tests/test_litellm --cov=litellm --cov-report=xml -v --junitxml=test-results/junit-litellm.xml --durations=10 -n 8
         no_output_timeout: 120m
     - run:
         name: Rename the coverage files
@@ -1396,7 +1396,7 @@ jobs:
         command: |
           pwd
           ls
-          python -m pytest -vv tests/image_gen_tests --cov=litellm --cov-report=xml -x -s -v --junitxml=test-results/junit.xml --durations=5
+          python -m pytest -vv tests/image_gen_tests --cov=litellm --cov-report=xml -x -v --junitxml=test-results/junit.xml --durations=5
         no_output_timeout: 120m
     - run:
         name: Rename the coverage files

docker/Dockerfile.dev

Lines changed: 3 additions & 0 deletions

@@ -57,6 +57,9 @@ USER root
 # Install only runtime dependencies
 RUN apt-get update && apt-get install -y --no-install-recommends \
     libssl3 \
+    libatomic1 \
+    nodejs \
+    npm \
     && rm -rf /var/lib/apt/lists/*

 WORKDIR /app

docs/my-website/docs/completion/image_generation_chat.md

Lines changed: 46 additions & 24 deletions

@@ -15,16 +15,22 @@ Supported Providers:
 - Google AI Studio (`gemini`)
 - Vertex AI (`vertex_ai/`)

-LiteLLM will standardize the `image` response in the assistant message for models that support image generation during chat completions.
+LiteLLM will standardize the `images` response in the assistant message for models that support image generation during chat completions.

 ```python title="Example response from litellm"
 "message": {
     ...
     "content": "Here's the image you requested:",
-    "image": {
-        "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...",
-        "detail": "auto"
-    }
+    "images": [
+        {
+            "image_url": {
+                "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...",
+                "detail": "auto"
+            },
+            "index": 0,
+            "type": "image_url"
+        }
+    ]
 }
 ```

@@ -47,7 +53,7 @@ response = completion(
 )

 print(response.choices[0].message.content) # Text response
-print(response.choices[0].message.image) # Image data
+print(response.choices[0].message.images) # List of image objects
 ```

 </TabItem>
@@ -103,10 +109,16 @@ curl http://0.0.0.0:4000/v1/chat/completions \
 "message": {
     "content": "Here's the image you requested:",
     "role": "assistant",
-    "image": {
-        "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...",
-        "detail": "auto"
-    }
+    "images": [
+        {
+            "image_url": {
+                "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...",
+                "detail": "auto"
+            },
+            "index": 0,
+            "type": "image_url"
+        }
+    ]
 }
 }
 ],
@@ -141,8 +153,8 @@ response = completion(
 )

 for chunk in response:
-    if hasattr(chunk.choices[0].delta, "image") and chunk.choices[0].delta.image is not None:
-        print("Generated image:", chunk.choices[0].delta.image["url"])
+    if hasattr(chunk.choices[0].delta, "images") and chunk.choices[0].delta.images is not None:
+        print("Generated image:", chunk.choices[0].delta.images[0]["image_url"]["url"])
         break
 ```

@@ -175,7 +187,7 @@ data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723323084

 data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723323084,"model":"gemini/gemini-2.5-flash-image-preview","choices":[{"index":0,"delta":{"content":"Here's the image you requested:"},"finish_reason":null}]}

-data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723323084,"model":"gemini/gemini-2.5-flash-image-preview","choices":[{"index":0,"delta":{"image":{"url":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...","detail":"auto"}},"finish_reason":null}]}
+data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723323084,"model":"gemini/gemini-2.5-flash-image-preview","choices":[{"index":0,"delta":{"images":[{"image_url":{"url":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...","detail":"auto"},"index":0,"type":"image_url"}]},"finish_reason":null}]}

 data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723323084,"model":"gemini/gemini-2.5-flash-image-preview","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

@@ -200,8 +212,8 @@ async def generate_image():
     )

     print(response.choices[0].message.content) # Text response
-    print(response.choices[0].message.image) # Image data
-
+    print(response.choices[0].message.images) # List of image objects
+
     return response

 # Run the async function
@@ -215,18 +227,28 @@ asyncio.run(generate_image())
 | Google AI Studio | `gemini/gemini-2.5-flash-image-preview` |
 | Vertex AI | `vertex_ai/gemini-2.5-flash-image-preview` |

-## Spec
+## Spec

-The `image` field in the response follows this structure:
+The `images` field in the response follows this structure:

 ```python
-"image": {
-    "url": "data:image/png;base64,<base64_encoded_image>",
-    "detail": "auto"
-}
+"images": [
+    {
+        "image_url": {
+            "url": "data:image/png;base64,<base64_encoded_image>",
+            "detail": "auto"
+        },
+        "index": 0,
+        "type": "image_url"
+    }
+]
 ```

-- `url` - str: Base64 encoded image data in data URI format
-- `detail` - str: Image detail level (always "auto" for generated images)
+- `images` - List[ImageURLListItem]: Array of generated images
+  - `image_url` - ImageURLObject: Container for image data
+    - `url` - str: Base64 encoded image data in data URI format
+    - `detail` - str: Image detail level (always "auto" for generated images)
+  - `index` - int: Index of the image in the response
+  - `type` - str: Type identifier (always "image_url")

-The image is returned as a base64-encoded data URI that can be directly used in HTML `<img>` tags or saved to a file.
+The images are returned as base64-encoded data URIs that can be directly used in HTML `<img>` tags or saved to files.
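The `images` entries in the new spec carry base64 data URIs, so recovering raw bytes is a split-and-decode. A minimal sketch of consuming the new shape (the `decode_data_uri` helper and the fake payload are illustrative, not part of litellm):

```python
import base64

# A message dict shaped like the `images` spec above. The payload is a
# stand-in (it encodes b"fake-png-bytes"), not a real PNG.
message = {
    "content": "Here's the image you requested:",
    "images": [
        {
            "image_url": {
                "url": "data:image/png;base64,"
                + base64.b64encode(b"fake-png-bytes").decode(),
                "detail": "auto",
            },
            "index": 0,
            "type": "image_url",
        }
    ],
}


def decode_data_uri(uri: str) -> bytes:
    """Split a `data:<mime>;base64,<payload>` URI and decode the payload."""
    header, _, payload = uri.partition(",")
    if not (header.startswith("data:") and header.endswith(";base64")):
        raise ValueError(f"not a base64 data URI: {header}")
    return base64.b64decode(payload)


raw = decode_data_uri(message["images"][0]["image_url"]["url"])
print(len(raw))  # byte count of the decoded image payload
```

The decoded bytes could then be written to disk, e.g. `open("out.png", "wb").write(raw)`.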

docs/my-website/docs/exception_mapping.md

Lines changed: 79 additions & 0 deletions

@@ -112,6 +112,85 @@ except openai.APITimeoutError as e:
     print(f"should_retry: {should_retry}")
 ```

+## Advanced
+
+### Accessing Provider-Specific Error Details
+
+LiteLLM exceptions include a `provider_specific_fields` attribute that contains additional error information specific to each provider. This is particularly useful for Azure OpenAI, which provides detailed content filtering information.
+
+#### Azure OpenAI - Content Policy Violation Inner Error Access
+
+When Azure OpenAI returns content policy violations, you can access the detailed content filtering results through the `innererror` field:
+
+```python
+import litellm
+from litellm.exceptions import ContentPolicyViolationError
+
+try:
+    response = litellm.completion(
+        model="azure/gpt-4",
+        messages=[
+            {
+                "role": "user",
+                "content": "Some content that might violate policies"
+            }
+        ]
+    )
+except ContentPolicyViolationError as e:
+    # Access Azure-specific error details
+    if e.provider_specific_fields and "innererror" in e.provider_specific_fields:
+        innererror = e.provider_specific_fields["innererror"]
+
+        # Access content filter results
+        content_filter_result = innererror.get("content_filter_result", {})
+
+        print(f"Content filter code: {innererror.get('code')}")
+        print(f"Hate filtered: {content_filter_result.get('hate', {}).get('filtered')}")
+        print(f"Violence severity: {content_filter_result.get('violence', {}).get('severity')}")
+        print(f"Sexual content filtered: {content_filter_result.get('sexual', {}).get('filtered')}")
+```
+
+**Example Response Structure:**
+
+When calling the LiteLLM proxy, content policy violations will return detailed filtering information:
+
+```json
+{
+  "error": {
+    "message": "litellm.ContentPolicyViolationError: AzureException - The response was filtered due to the prompt triggering Azure OpenAI's content management policy...",
+    "type": null,
+    "param": null,
+    "code": "400",
+    "provider_specific_fields": {
+      "innererror": {
+        "code": "ResponsibleAIPolicyViolation",
+        "content_filter_result": {
+          "hate": {
+            "filtered": true,
+            "severity": "high"
+          },
+          "jailbreak": {
+            "filtered": false,
+            "detected": false
+          },
+          "self_harm": {
+            "filtered": false,
+            "severity": "safe"
+          },
+          "sexual": {
+            "filtered": false,
+            "severity": "safe"
+          },
+          "violence": {
+            "filtered": true,
+            "severity": "medium"
+          }
+        }
+      }
+    }
+  }
+}
+```

 ## Details

 To see how it's implemented - [check out the code](https://github.com/BerriAI/litellm/blob/a42c197e5a6de56ea576c73715e6c7c6b19fa249/litellm/utils.py#L1217)
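Downstream code usually only needs to know which categories tripped Azure's filter. A small sketch over the example `innererror` payload from the diff above (the `filtered_categories` helper is hypothetical, not a litellm API):

```python
# Hypothetical helper: reduce Azure's `content_filter_result` map to the
# sorted list of categories that were actually filtered.
def filtered_categories(innererror: dict) -> list:
    result = innererror.get("content_filter_result", {})
    return sorted(
        category for category, details in result.items() if details.get("filtered")
    )


# Same shape as the `innererror` object in the example response above.
innererror = {
    "code": "ResponsibleAIPolicyViolation",
    "content_filter_result": {
        "hate": {"filtered": True, "severity": "high"},
        "jailbreak": {"filtered": False, "detected": False},
        "self_harm": {"filtered": False, "severity": "safe"},
        "sexual": {"filtered": False, "severity": "safe"},
        "violence": {"filtered": True, "severity": "medium"},
    },
}

print(filtered_categories(innererror))  # ['hate', 'violence']
```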

docs/my-website/docs/moderation.md

Lines changed: 12 additions & 3 deletions

@@ -22,10 +22,19 @@ response = moderation(

 For `/moderations` endpoint, there is **no need to specify `model` in the request or on the litellm config.yaml**

-Start litellm proxy server

+1. Setup config.yaml
+```yaml
+model_list:
+  - model_name: text-moderation-stable
+    litellm_params:
+      model: openai/omni-moderation-latest
 ```
-litellm
+
+2. Start litellm proxy server
+
+```
+litellm --config /path/to/config.yaml
 ```


@@ -41,7 +50,7 @@ client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

 response = client.moderations.create(
     input="hello from litellm",
-    model="text-moderation-stable" # optional, defaults to `omni-moderation-latest`
+    model="text-moderation-stable"
 )

 print(response)

docs/my-website/docs/observability/datadog.md

Lines changed: 27 additions & 2 deletions

@@ -56,12 +56,32 @@ litellm_settings:

 **Step 2**: Set Required env variables for datadog

+#### Direct API
+
+Send logs directly to Datadog API:
+
 ```shell
 DD_API_KEY="5f2d0f310***********" # your datadog API Key
 DD_SITE="us5.datadoghq.com" # your datadog base url
 DD_SOURCE="litellm_dev" # [OPTIONAL] your datadog source. use to differentiate dev vs. prod deployments
 ```

+#### Via DataDog Agent
+
+Send logs through a local DataDog agent (useful for containerized environments):
+
+```shell
+DD_AGENT_HOST="localhost" # hostname or IP of DataDog agent
+DD_AGENT_PORT="10518" # [OPTIONAL] port of DataDog agent (default: 10518)
+DD_API_KEY="5f2d0f310***********" # [OPTIONAL] your datadog API Key (agent handles auth)
+DD_SOURCE="litellm_dev" # [OPTIONAL] your datadog source
+```
+
+When `DD_AGENT_HOST` is set, logs are sent to the agent instead of directly to DataDog API. This is useful for:
+- Centralized log shipping in containerized environments
+- Reducing direct API calls from multiple services
+- Leveraging agent-side processing and filtering
+
 **Step 3**: Start the proxy, make a test request

 Start proxy
@@ -169,12 +189,17 @@ LiteLLM supports customizing the following Datadog environment variables

 | Environment Variable | Description | Default Value | Required |
 |---------------------|-------------|---------------|----------|
-| `DD_API_KEY` | Your Datadog API key for authentication | None | ✅ Yes |
-| `DD_SITE` | Your Datadog site (e.g., "us5.datadoghq.com") | None | ✅ Yes |
+| `DD_API_KEY` | Your Datadog API key for authentication (required for direct API, optional for agent) | None | Conditional* |
+| `DD_SITE` | Your Datadog site (e.g., "us5.datadoghq.com") (required for direct API) | None | Conditional* |
+| `DD_AGENT_HOST` | Hostname or IP of DataDog agent (e.g., "localhost"). When set, logs are sent to agent instead of direct API | None | ❌ No |
+| `DD_AGENT_PORT` | Port of DataDog agent for log intake | "10518" | ❌ No |
 | `DD_ENV` | Environment tag for your logs (e.g., "production", "staging") | "unknown" | ❌ No |
 | `DD_SERVICE` | Service name for your logs | "litellm-server" | ❌ No |
 | `DD_SOURCE` | Source name for your logs | "litellm" | ❌ No |
 | `DD_VERSION` | Version tag for your logs | "unknown" | ❌ No |
 | `HOSTNAME` | Hostname tag for your logs | "" | ❌ No |
 | `POD_NAME` | Pod name tag (useful for Kubernetes deployments) | "unknown" | ❌ No |

+\* **Required when using Direct API** (default): `DD_API_KEY` and `DD_SITE` are required
+\* **Optional when using DataDog Agent**: Set `DD_AGENT_HOST` to use agent mode; `DD_API_KEY` and `DD_SITE` are not required
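The mode selection documented above reduces to one rule: `DD_AGENT_HOST` set means agent mode, otherwise direct API. A sketch of that rule (an illustration of the documented behavior, not litellm's actual code; the direct-API URL here assumes Datadog's public log-intake endpoint pattern derived from `DD_SITE`):

```python
# Sketch: pick a log-intake endpoint from Datadog-related env vars.
# Agent mode wins whenever DD_AGENT_HOST is set; otherwise DD_SITE
# (and DD_API_KEY, validated by the caller) are required for direct API.
def datadog_log_endpoint(env: dict) -> str:
    agent_host = env.get("DD_AGENT_HOST")
    if agent_host:
        port = env.get("DD_AGENT_PORT", "10518")  # documented default
        return f"http://{agent_host}:{port}"
    site = env["DD_SITE"]  # required in direct-API mode
    return f"https://http-intake.logs.{site}/api/v2/logs"


print(datadog_log_endpoint({"DD_AGENT_HOST": "localhost"}))
# http://localhost:10518
print(datadog_log_endpoint({"DD_SITE": "us5.datadoghq.com", "DD_API_KEY": "..."}))
# https://http-intake.logs.us5.datadoghq.com/api/v2/logs
```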