4 changes: 2 additions & 2 deletions in articles/gpt-oss/run-locally-ollama.md

@@ -114,12 +114,12 @@ Ollama doesn’t (yet) support the **Responses API** natively.

If you do want to use the Responses API you can use [**Hugging Face’s `Responses.js` proxy**](https://github.com/huggingface/responses.js) to convert Chat Completions to Responses API.

```diff
- For basic use cases you can also [**run our example Python server with Ollama as the backend.**](https://github.com/openai/gpt-oss?tab=readme-ov-file#responses-api) This server is a basic example server and does not have the
+ For basic use cases you can also [**run our example Python server with Ollama as the backend.**](https://github.com/openai/gpt-oss?tab=readme-ov-file#responses-api) This server is a basic example server and does not have the ...
```

**Reviewer:** Changes that include hyperlinks need to be sanitized before you push them.

**Author:** The hyperlink hasn't changed; only three dots were added to draw attention. The sentence seems incomplete.

**Reviewer:** It is incomplete!


```diff
 pip install gpt-oss
 python -m gpt_oss.responses_api.serve \
-  --inference_backend=ollama \
+  --inference-backend ollama \
   --checkpoint gpt-oss:20b
```
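
Once the server is up, a quick smoke test can confirm it is wired to Ollama. A minimal sketch, assuming the server listens on `localhost:8000` (check the `serve` command's startup output for the actual port) and exposes the standard Responses API `/v1/responses` route:

```shell
# Hypothetical smoke test; adjust host/port to match the server's startup log.
curl http://localhost:8000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-oss:20b",
        "input": "Say hello in one short sentence."
      }'
```

If everything is connected, the JSON reply should contain an `output` array holding the model's response.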
