Example connections parameters to api #582

Open · Tom-Neverwinter opened this issue Jul 30, 2024 · 1 comment

Tom-Neverwinter commented Jul 30, 2024

[Screenshot: Capture]

Tested with the default address and port:
http://127.0.0.1:11434/

Models tried:
llama3.1
llama3.1:latest
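
As a sanity check that the server is actually reachable at that address, a stock Ollama install answers a plain GET on its root path with a short status string. A minimal sketch, assuming a default local install:

```python
import urllib.request

# A default Ollama server replies with a short status string on its root path.
with urllib.request.urlopen("http://127.0.0.1:11434/") as resp:
    print(resp.read().decode())  # prints "Ollama is running"
```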

Mimicking the oobabooga setup (per https://docs.sillytavern.app/usage/api-connections/) did not result in a connection either.

https://youtu.be/SxRiRZu_Jhc?si=2-sG1iUQx0DLbxq4

Kobold AI setup: https://www.youtube.com/watch?v=ksBWKa_30Hc

This Reddit thread was not much help either: https://www.reddit.com/r/RisuAI/comments/1d57aki/does_anyone_know_how_to_connect_ollama_local_with/

Tested and didn't work:

http://localhost:11434/api/chat
http://localhost:11434/api/v1/generate
http://localhost:11434/api/embed
http://localhost:11434/api/tokenize
http://localhost:11434/api/complete

http://127.0.0.1:11434/api/chat
http://127.0.0.1:11434/api/v1/generate
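
For reference, Ollama's native chat endpoint is POST /api/chat (and the native generate endpoint is /api/generate, not /api/v1/generate). A minimal sketch of a known-good request, assuming a default install with llama3.1 already pulled:

```python
import json
import urllib.request

# Minimal request against Ollama's native chat endpoint.
payload = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,  # return a single JSON object instead of a stream
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```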

Adding an example for the default setup would help a lot of people:
- openai gpt
- anthropic claude (d5837e5)
- custom (open ai)
- oobabooga
- mancer
- openrouter

"Content-Type": "application/json"

- mistral api
- google gemini
- kobold (90d1660)
- novellist
- cohere
- novel ai
- horde
- ollama (https://github.com/search?q=repo%3Akwaroran%2FRisuAI+ollama&type=code)
- "sonnet 3.5 for aws and custom" (25a60db)
- fal.ai (de6c90c)
- comfyui (c6d96d9)
- remove tos (f7ddc09)

Tom-Neverwinter changed the title from "connection paramater to ollama" to "Example connections parameters to api" on Aug 15, 2024
underhill-gb commented

I've had some success using RisuAI and Ollama with these settings:

Custom OpenAI-compatible

URL: http://localhost:11434
Proxy Key/Password: ollama
Request Model - Custom - fluffy/l3-8b-stheno-v3.2:latest
Tokenizer: Llama3
Response Streaming: Enabled

You also need to import presets for Context and Instruct, which you can do within RisuAI:

L3 Instruct Mode.json
L3 Context Template.json
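
For anyone wiring this up by hand, here is a minimal sketch of the request those settings map to. Ollama also exposes an OpenAI-compatible API under /v1, and a local server does not validate the key, so any placeholder string (e.g. "ollama") works as the Bearer token:

```python
import json
import urllib.request

# Sketch of the OpenAI-compatible request behind the settings above.
payload = {
    "model": "fluffy/l3-8b-stheno-v3.2:latest",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,  # RisuAI sets this to True when Response Streaming is enabled
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ollama",  # placeholder; local Ollama ignores it
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```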
