[BUG] Response from OpenAI compatible API with LocalAI not shown #681
Comments
Any messages in the console for big-AGI? And what branch was this on?
@enricoros There are no messages in the console, besides big-AGI pinging the default chat LLM (not the LocalAI RAG one) in order to make a title for the chat. This is on the v1-dev branch.
@tsoernes could you try the v2-dev branch? Its AI engine was rewritten from scratch and is much more powerful (and will tell you about non-compliances with strict schema parsing). v1-dev is unsupported and will fade away soon (it doesn't support images, etc.).
I have tried it now. It says: [error screenshot not included in this extract]. Debugging log from server says: [log not included].
From the code:

# Imports needed by the snippet (omitted from the original paste):
import json
import random
import time
from string import ascii_letters

import azure.functions as func

app = func.FunctionApp()

default_response = {
    "id": "chatcmpl-ATocTflkPHoUliboru2hb5h9tk0ox",
    "object": "chat.completion",
    "created": 1731669457,
    "model": "gpt-4o",
    "system_fingerprint": "fp_000eow_rag",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "PLACEHOLDER",
            "refusal": None,
        },
        "logprobs": None,
        "finish_reason": "stop",
        "content_filter_results": {
            "hate": {"filtered": False, "severity": "safe"},
            "self_harm": {"filtered": False, "severity": "safe"},
            "sexual": {"filtered": False, "severity": "safe"},
            "violence": {"filtered": False, "severity": "safe"},
        },
    }],
    "usage": {
        "prompt_tokens": 1,
        "completion_tokens": 1,
        "total_tokens": 2,
    },
    "prompt_filter_results": [{"prompt_index": 0, "content_filter_results": {}}],
}


@app.route(route="v1/chat/completions", auth_level=func.AuthLevel.ANONYMOUS)
def chat_completions(req: func.HttpRequest) -> func.HttpResponse:
    js = req.get_json()
    messages = js["messages"]
    print(f"{messages=}")
    # Randomize the id and timestamp, then fill in the canned message content.
    default_response["id"] = "chatcmpl-" + "".join(random.choices(ascii_letters, k=29))
    default_response["created"] = int(time.time())
    default_response["choices"][0]["message"]["content"] = "SOME MESSAGE"
    js = json.dumps(default_response)
    print(f"Returning {js}")
    return func.HttpResponse(js, status_code=200, mimetype="application/json")
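For reference, a minimal way to exercise this endpoint the way a streaming client would (a sketch only: the URL assumes the default local Azure Functions route prefix, and the request shape assumes big-AGI sends "stream": true, as the next comment suggests):

import requests

# Hypothetical smoke test for the function above; adjust host/port/route
# to match your local Azure Functions runtime.
resp = requests.post(
    "http://localhost:7071/api/v1/chat/completions",
    json={
        "model": "gpt-4o",
        "stream": True,  # what a streaming client such as big-AGI would send
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.headers.get("content-type"))  # application/json here, not text/event-stream
print(resp.text)  # a full chat.completion object, not chat.completion.chunk events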
@tsoernes I think this is because it's expecting a streaming answer (chat.completion.chunk objects) but getting a chat completion (full completion) object instead. I think the fix won't be hard; the streaming flag needs to be set to false in the code when performing the AIX call. Two options: 1. Change the server to reply with a streaming answer (SSE, chat.completion.chunk objects), or 2. Change big-AGI to not expect a streaming answer when making a streaming request, or even never make a streaming request for this model.
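A minimal sketch of option 1 (the server replies with SSE chunks), assuming the same Azure Functions v2 setup as in the code above; the chunk payloads follow OpenAI's public chat.completion.chunk streaming format. Note that the classic Azure Functions HTTP binding buffers the response, so the SSE body arrives in one piece rather than truly streamed token by token:

import json
import random
import time
from string import ascii_letters

import azure.functions as func

app = func.FunctionApp()  # this handler would replace the one in the snippet above


def sse_chunk(chunk_id: str, created: int, delta: dict, finish_reason=None) -> str:
    # Format one chat.completion.chunk object as an SSE "data:" event.
    payload = {
        "id": chunk_id,
        "object": "chat.completion.chunk",
        "created": created,
        "model": "gpt-4o",
        "choices": [{"index": 0, "delta": delta, "finish_reason": finish_reason}],
    }
    return f"data: {json.dumps(payload)}\n\n"


@app.route(route="v1/chat/completions", auth_level=func.AuthLevel.ANONYMOUS)
def chat_completions(req: func.HttpRequest) -> func.HttpResponse:
    js = req.get_json()
    if not js.get("stream", False):
        # Non-streaming callers keep the original full chat.completion reply
        # (see the snippet above); this sketch only covers the streaming path.
        return func.HttpResponse('{"error": "non-streaming path omitted"}',
                                 status_code=501, mimetype="application/json")
    chunk_id = "chatcmpl-" + "".join(random.choices(ascii_letters, k=29))
    created = int(time.time())
    body = (
        sse_chunk(chunk_id, created, {"role": "assistant", "content": ""})
        + sse_chunk(chunk_id, created, {"content": "SOME MESSAGE"})
        + sse_chunk(chunk_id, created, {}, finish_reason="stop")
        + "data: [DONE]\n\n"
    )
    return func.HttpResponse(body, status_code=200, mimetype="text/event-stream")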
Description
I'm attempting to make an API to interact with a RAG chatbot that I've made.
The API should be OpenAI-compatible, but I cannot get big-AGI to display the response that the API sends back.
I have logging enabled on the API side, and it is receiving the request from big-AGI and, as far as I can tell, responding correctly. But no response message is shown. When I send a message from big-AGI, it waits until the API responds (and shows the typing animation), but it displays no message content from the response.
Example "messages" key in from request received in API:
And the response which the API sends back, with status code 200 and
mimetype="application/json"
:And here is the code for the API. It is an azure function running locally:
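Since the original capture is missing, here is a hypothetical messages array of the shape an OpenAI-compatible client sends (illustrative values only):

# Illustrative shape only - not the original captured data.
messages = [
    {"role": "system", "content": "You are a helpful RAG assistant."},
    {"role": "user", "content": "What does the indexed document say about X?"},
]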
Device and browser
Chrome
Screenshots and more
Willingness to Contribute