Commit af82797: mention the cache

masci committed May 28, 2024 · 1 parent 092ea9d

Showing 1 changed file (README.md) with 10 additions and 6 deletions.
@@ -24,8 +24,8 @@ Docs are available [here](https://masci.github.io/banks/).
 - [banks](#banks)
   - [Installation](#installation)
   - [Examples](#examples)
-    - [Generate a blog writing prompt](#generate-a-blog-writing-prompt)
-    - [Generate a summarizer prompt](#generate-a-summarizer-prompt)
+    - [Create a blog writing prompt](#create-a-blog-writing-prompt)
+    - [Create a summarizer prompt](#create-a-summarizer-prompt)
     - [Lemmatize text while processing a template](#lemmatize-text-while-processing-a-template)
     - [Use a LLM to generate a text while rendering a prompt](#use-a-llm-to-generate-a-text-while-rendering-a-prompt)
     - [Go meta: create a prompt and `generate` its response](#go-meta-create-a-prompt-and-generate-its-response)
@@ -42,7 +42,7 @@ pip install banks
 
 ## Examples
 
-### Generate a blog writing prompt
+### Create a blog writing prompt
 
 Given a generic template to instruct an LLM to generate a blog article, we
 use Banks to generate the actual prompt on our topic of choice, "retrogame computing":
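
A minimal sketch of the idea, assuming Banks exposes a `Prompt` class whose `text()` method renders the underlying Jinja template with a context dict; the template string here is illustrative, since the actual example sits in a collapsed part of the diff:

```py
from banks import Prompt

# Illustrative template; the real one is not shown in this hunk.
p = Prompt("Write a 500-word blog post on {{ topic }}.\n\nBlog post:")

# Render the template with our topic of choice.
print(p.text({"topic": "retrogame computing"}))
# Write a 500-word blog post on retrogame computing.
#
# Blog post:
```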
@@ -64,9 +64,9 @@ Write a 500-word blog post on retrogame computing.
 Blog post:
 ```
 
-### Generate a summarizer prompt
+### Create a summarizer prompt
 
-Instead of hardcoding the content to summarize in the prompt itself, we can generate it
+Instead of hardcoding the content to summarize in the prompt itself, we can inject it
 starting from a generic one:
 
 
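A minimal sketch of the injection idea, again assuming the `Prompt.text()` API described above; the template string is illustrative:

```py
from banks import Prompt

# The content to summarize is a template variable, so the same
# generic prompt template works for any document.
summarizer = Prompt("Summarize the following text:\n\n{{ content }}\n\nSummary:")

for document in ("first document...", "second document..."):
    print(summarizer.text({"content": document}))
```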
@@ -189,10 +189,14 @@ If you paste Banks' output into ChatGPT you would get something like this:
 Climate change is a pressing global issue, but together we can create positive change! Let's embrace renewable energy, protect our planet, and build a sustainable future for generations to come. 🌍💚 #ClimateAction #PositiveFuture
 ```
 
-> [!TIP]
+> [!IMPORTANT]
 > The `generate` extension uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood, and provided you have the
 > proper environment variables set, you can use any model from the supported [model providers](https://docs.litellm.ai/docs/providers).
+
+> [!NOTE]
+> Banks uses a cache to avoid generating text again for the same template with the same context. By default
+> the cache is in-memory but it can be customized.
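
A hedged sketch of how the `generate` extension and the cache fit together; the `{% generate %}` tag syntax and the model name below are assumptions for illustration, not the documented API:

```py
from banks import Prompt

# Assumed tag syntax: {% generate "<prompt>" "<model>" %}. With the proper
# LiteLLM environment variables set, any supported provider could be used.
p = Prompt('{% generate "write a tweet about climate change with a positive outlook" "gpt-3.5-turbo" %}')

print(p.text())  # first render calls the model through LiteLLM
print(p.text())  # same template, same context: served from the in-memory cache
```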
 ### Go meta: create a prompt and `generate` its response
 
 We can leverage Jinja's macro system to generate a prompt, send the result to OpenAI and get a response.
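
The macro-based example itself sits below the fold of this diff; here is a sketch under the assumption that Banks ships a `run_prompt` macro in a `banks_macros.jinja` template that sends the wrapped text to the LLM and renders the response in its place:

```py
from banks import Prompt

# `run_prompt` and "banks_macros.jinja" are assumed names for illustration.
prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{% call run_prompt() %}
Write a tweet about {{ topic }} with a positive outlook.
{% endcall %}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))  # prints the LLM's response, not the prompt text
```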