From 09e9bbc4f6b12c536ca3710fb78d54aecb610cda Mon Sep 17 00:00:00 2001
From: weijh
Date: Tue, 14 Oct 2025 10:12:17 +0800
Subject: [PATCH] fix typo

The sentence above seems incomplete...
---
 articles/gpt-oss/run-locally-ollama.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/articles/gpt-oss/run-locally-ollama.md b/articles/gpt-oss/run-locally-ollama.md
index cfe0f1c2f2..cd9a69f5c2 100644
--- a/articles/gpt-oss/run-locally-ollama.md
+++ b/articles/gpt-oss/run-locally-ollama.md
@@ -114,12 +114,12 @@
 Ollama doesn’t (yet) support the **Responses API** natively.
 
 If you do want to use the Responses API you can use [**Hugging Face’s `Responses.js` proxy**](https://github.com/huggingface/responses.js) to convert Chat Completions to Responses API.
 
-For basic use cases you can also [**run our example Python server with Ollama as the backend.**](https://github.com/openai/gpt-oss?tab=readme-ov-file#responses-api) This server is a basic example server and does not have the
+For basic use cases you can also [**run our example Python server with Ollama as the backend.**](https://github.com/openai/gpt-oss?tab=readme-ov-file#responses-api) This server is a basic example server and does not have the ...
 
 ```shell
 pip install gpt-oss
 python -m gpt_oss.responses_api.serve \
-  --inference_backend=ollama \
+  --inference-backend ollama \
   --checkpoint gpt-oss:20b
 ```
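
Note (reviewer sketch, not part of the commit): a quick way to check which flag spelling the CLI actually accepts before merging, assuming the `gpt-oss` package installs as shown in the article and its server module exposes a standard argparse-style `--help`:

    # install the example server, then list its accepted flags;
    # the usage text should show whether the flag is spelled
    # --inference-backend or --inference_backend
    pip install gpt-oss
    python -m gpt_oss.responses_api.serve --help

If the flag is defined with argparse as `--inference-backend`, both `--inference-backend ollama` and `--inference-backend=ollama` would parse, while the underscore spelling would be rejected, which matches what this patch changes.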