Describe the bug
text-generation-webui API chat completion broke completely after I tried setting my own "Generation Preset/Character Name". It stayed broken even after I removed the value, and even after deleting and re-adding the services.
For some reason, it now always loads the default Assistant character and uses it without the home-llm system prompt or the correct prompt template. I've spent about an hour hunting for a configuration file or database entry that might have changed and be stuck on the old value, but I didn't find anything. I've restarted, deleted, and reconfigured ooba and home-llm, but it remains messed up.
Expected behavior
Just as before, when the "Generation Preset/Character Name" box is empty, don't use any of ooba's default assistants.
Logs
ooba's console log:
19:34:08-504205 INFO PROMPT=
<BOS_TOKEN>The following is a conversation with an AI Large Language Model. The AI has been trained to answer questions, provide recommendations, and help with decision making. The AI follows user requests. The AI thinks outside the box.
You: HELP!
AI:
As you can see, that's the ooba default Assistant prompt, not the home-llm prompt. And there's no prompt template, although everything looks correct in the home-llm settings UI.
Update:
After almost 2 hours of troubleshooting, it's still broken, even after completely reinstalling ooba and home-llm from scratch! I don't think ooba has any secret config files or registry entries, so I wonder if home-llm keeps some around and doesn't remove them when the integration is deleted?
Totally stumped right now, as everything worked perfectly before I touched that cursed "Generation Preset/Character Name" field. Now, no matter which model I use, as soon as I enable "Use chat completions endpoint", it drops the system prompt and prompt template and falls back to ooba's default Assistant character with no prompt template at all.
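To take home-llm out of the loop, here's a minimal sketch that sends a system message straight to ooba's OpenAI-compatible chat completions endpoint (the host, the default port 5000, and the prompt text are assumptions; adjust them to your setup). If ooba's console still prints the default Assistant prompt instead of the system message below, the character override is happening on ooba's side rather than in home-llm.

# Minimal sketch: call ooba's OpenAI-compatible chat completions endpoint
# directly, bypassing home-llm. Host, port, and prompt text are assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a Home Assistant voice assistant."},
            {"role": "user", "content": "HELP!"},
        ],
        "max_tokens": 32,
    },
    timeout=60,
)
# Print the reply; the interesting part is what ooba logs as PROMPT= meanwhile.
print(resp.json()["choices"][0]["message"]["content"])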
Update 2:
Three hours later and still no luck. I'm trying to find out where home-llm stores its settings; I've searched the database and the file system. Any pointers? The integrations I set up must be saved somewhere, after all...
Also, if I switch Chat Mode to "Instruct", I can enter the name of an instruct prompt template, but it's ignored; home-llm's "Prompt Format" option is used instead. And not even consistently: only ChatML, Alpaca, and Mistral are applied, while ALL the others result in Alpaca being used!
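For reference when reading ooba's PROMPT= log lines, this is roughly what a system + user turn should look like under ChatML versus Alpaca formatting (a sketch of the common conventions only, not necessarily home-llm's exact templates):

# Rough reference renderings of one system + user turn (common conventions;
# home-llm's exact templates may differ).
system = "You are a Home Assistant voice assistant."
user = "HELP!"

chatml = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

alpaca = (
    f"{system}\n\n"
    f"### Instruction:\n{user}\n\n"
    "### Response:\n"
)

print(chatml)
print(alpaca)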
I've opened a PR for prompt templates for Command-R and Phi, which would make the text completion endpoint work with these models. It's not a fix for this issue with the chat completion endpoint, but at least there would be an alternative.
I'll still continue looking for a fix for this, too...
Don't mean to hijack, but how did you manage to point the integration to the Ollama API through OpenWebUI?
When configuring the integration and setting the API hostname to ai.hq.arpa/ollama, I just get:
Failed to connect to the remote API: 0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8', url='https://ai.hq.arpa/ollama:443/api/tags'
@rwjack Is that your own internal domain? I guess the integration does not support any extra path prefix on the API route.
Frankly, while it would be nice to have more flexibility here, it's highly non-standard.
I think what you can, and maybe should, do is use a reverse proxy to give the Ollama API its own hostname. You could create another subdomain for it, like ollama.hq.arpa?
If /ollama is a path prefix that OpenWebUI adds, you can use proxy_pass in nginx, or a similar configuration in any other reverse proxy, to serve it under another subdomain and drop the /ollama part.
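For example, something along these lines in nginx (a sketch only: ollama.hq.arpa is a hypothetical subdomain, and 127.0.0.1:11434 is Ollama's default listen address; TLS certificates and any auth layer are up to you):

server {
    listen 443 ssl;
    server_name ollama.hq.arpa;

    # ssl_certificate / ssl_certificate_key for your internal CA go here

    location / {
        # Pass /api/tags etc. straight through, so the integration can use
        # the bare hostname without any /ollama prefix.
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}

Then point the integration's API hostname at ollama.hq.arpa with no extra path.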
I think what you can, and maybe should, do is use a reverse proxy to give the Ollama API its own hostname. You could create another subdomain for it, like ollama.hq.arpa?
This is what I did initially, but I did not want to have the unauthenticated Ollama API exposed at all.