Simple test project for Ollama (a locally run LLM).
With this project, you can simply add Prompt Types
and re-use them on every request.
- Pre-define prompt types for re-use
- Add a Prompt Type with the following PromptPrefix: `Write a blog article with the following keywords:`
- Use this Prompt Type on http://localhost/ and fill in the data with keywords for your article.
- Add a Prompt Type with the following PromptPrefix: `Parse the following data to the format: Name, Date of Birth`
- Use this Prompt Type on http://localhost/ and fill in the data with information you have (see the composed-prompt sketch below).
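
For illustration, here is roughly what the final prompt sent to the model looks like once the PromptPrefix and your data are combined. The exact concatenation (spacing, newlines) is an assumption about this project's internals, and the sample data is made up:

```text
Write a blog article with the following keywords:
ollama, docker, php, queues

Parse the following data to the format: Name, Date of Birth
John Doe, born July 4th 1990
```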
- Make sure Docker is installed and running.
- Run `./Taskfile init` (you may need to do this twice; too lazy to fix migrations for both the worker and PHP containers...).
- Open http://localhost:3000/ and download the `llama3` model (or your own model; you can switch to your favorite model in `.env`, see the sketch below).
- Add some prompt types at http://localhost/prompt/type/.
  a. Example: Name: `Blog`. Prompt Prefix: `Write a blog article with the following keywords:`
- On http://localhost/, add some `ollama_request`s with the prompt type you just created.
- Visit http://localhost/ollama/request/ to view your requests. Output is available once processed by the queue.
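
The model name is read from `.env`. A minimal sketch of the relevant entry, assuming the variable is named `OLLAMA_MODEL` (check the project's `.env` for the actual key):

```dotenv
# Hypothetical variable name: see the project's .env for the real key.
# Any model available in your Ollama instance works, e.g. llama3, mistral.
OLLAMA_MODEL=llama3
```

After switching models, remember to download the new model via http://localhost:3000/ first, or queued requests will likely fail.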
Run `./Taskfile` for a complete list of Taskfile commands.
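
Under the hood, the queue worker presumably talks to Ollama's standard REST API. If you want to test a model directly, bypassing the PHP app, something like this works, assuming Ollama's default port 11434 is exposed by the compose setup (an assumption; only port 3000 is mentioned above):

```sh
# Query Ollama's generate endpoint directly (standard Ollama REST API).
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Write a blog article with the following keywords: ollama, docker",
  "stream": false
}'
```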