
Using gpt-5 model in Autogen 0.7.2 #7048

@karun19

Description


What happened?

Describe the bug
I am trying to use the gpt-5 model in AutoGen 0.7.2, and it is not working as smoothly as expected: I am seeing slowness and getting various errors.
I need guidance on the right way to use gpt-5 in AutoGen.

To Reproduce
Below is the model config used:

```python
AzureOpenAIChatCompletionClient(
    model="gpt-5",
    model_info={
        "model": "gpt-5",
        "family": "gpt",
        "vision": False,
        "input_format": "chat",
        "output_format": "text",
        "function_calling": True,
        "json_output": False,
        "structured_output": True,
    },
    temperature=1,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)
```
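For reference, here is a minimal sketch of a fuller client instantiation. The endpoint, deployment name, API version, and key below are placeholders (not taken from the report above), and omitting temperature, frequency_penalty, and presence_penalty is an assumption based on gpt-5 being a reasoning model, which typically accepts only default sampling values:

```python
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

# Placeholder Azure values -- replace with the ones for your own deployment.
client = AzureOpenAIChatCompletionClient(
    model="gpt-5",
    azure_deployment="gpt-5",                        # assumed deployment name
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_version="2024-12-01-preview",                # assumed; use the version your resource supports
    api_key="<your-api-key>",
    model_info={
        "family": "unknown",        # safe fallback if the client does not recognize gpt-5
        "vision": False,
        "function_calling": True,
        "json_output": False,
        "structured_output": True,
    },
    # temperature, frequency_penalty, and presence_penalty are omitted here:
    # reasoning models such as gpt-5 generally reject non-default sampling parameters.
)
```

Keeping only the required model_info fields and dropping the sampling parameters keeps the configuration minimal while gpt-5 support settles.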

Expected behavior
AutoGen should work smoothly with the gpt-5 model, similar to gpt-4o.

Which package was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python 0.7.2

Other library version.

No response

Model used

gpt-5

Model provider

Azure AI Foundry (Azure AI Studio)

Other model provider

No response

Python version

3.13

.NET version

None

Operating system

Windows
