update docs

masci committed Oct 8, 2024
1 parent f7bc694 commit 70effef
Showing 4 changed files with 326 additions and 9 deletions.
296 changes: 296 additions & 0 deletions docs/examples.md
@@ -0,0 +1,296 @@
**Table of Contents**

- [Create a blog writing prompt](#create-a-blog-writing-prompt)
- [Create a summarizer prompt](#create-a-summarizer-prompt)
- [Lemmatize text while processing a template](#lemmatize-text-while-processing-a-template)
- [Use a LLM to generate a text while rendering a prompt](#use-a-llm-to-generate-a-text-while-rendering-a-prompt)
- [Go meta: create a prompt and `generate` its response](#go-meta-create-a-prompt-and-generate-its-response)
- [Go meta(meta): process a LLM response](#go-metameta-process-a-llm-response)
- [Reuse templates from registries](#reuse-templates-from-registries)
- [Async support](#async-support)



## Create a blog writing prompt

Given a generic template to instruct an LLM to generate a blog article, we
use Banks to generate the actual prompt on our topic of choice, "retrogame computing":

```py
from banks import Prompt


p = Prompt("Write a 500-word blog post on {{ topic }}.\n\nBlog post:")
topic = "retrogame computing"
print(p.text({"topic": topic}))
```

This will print the following text, which can be pasted directly into ChatGPT:

```txt
Write a 500-word blog post on retrogame computing.
Blog post:
```

The same prompt can be written in the form of chat messages:
```py
prompt_text = """{% chat role="system" %}
I want you to act as a title generator for written pieces.
{% endchat %}
{% chat role="user" %}
Write a 500-word blog post on {{ topic }}.
Blog post:
{% endchat %}"""

p = Prompt(prompt_text)
print(p.chat_messages({"topic":"prompt engineering"}))
```

This will output the following:
```txt
[
  ChatMessage(role='system', content='I want you to act as a title generator for written pieces.\n'),
  ChatMessage(role='user', content='Write a 500-word blog post on prompt engineering.\n\nBlog post:\n')
]
```
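To send these messages to a chat API you usually need plain dictionaries. Here is a minimal sketch of that conversion, using a stand-in dataclass instead of importing Banks' `ChatMessage` (the stand-in only assumes the `role` and `content` attributes shown in the output above):

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    # stand-in mirroring the role/content attributes shown above
    role: str
    content: str


def to_openai_messages(messages):
    # OpenAI-style chat endpoints expect a list of {"role": ..., "content": ...} dicts
    return [{"role": m.role, "content": m.content} for m in messages]


messages = [
    ChatMessage(role="system", content="I want you to act as a title generator for written pieces.\n"),
    ChatMessage(role="user", content="Write a 500-word blog post on prompt engineering.\n"),
]
print(to_openai_messages(messages))
```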

## Create a summarizer prompt

Instead of hardcoding the content to summarize in the prompt itself, we can inject it
starting from a generic one:


```py
from banks import Prompt


prompt_template = """
Summarize the following documents:
{% for document in documents %}
{{ document }}
{% endfor %}
Summary:
"""

# In a real-world scenario, these would be loaded as external resources from files or network
documents = [
    "A first paragraph talking about AI",
    "A second paragraph talking about climate change",
    "A third paragraph talking about retrogaming",
]

p = Prompt(prompt_template)
print(p.text({"documents": documents}))
```

The resulting prompt:

```txt
Summarize the following documents:
A first paragraph talking about AI
A second paragraph talking about climate change
A third paragraph talking about retrogaming
Summary:
```
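The comment in the snippet mentions loading documents from external resources; here is a minimal sketch of that, reading every `.txt` file from a folder (a throwaway temporary folder is used here only to keep the sketch self-contained):

```python
import tempfile
from pathlib import Path

# create a throwaway folder with a couple of documents, so the sketch runs as-is
docs_dir = Path(tempfile.mkdtemp())
(docs_dir / "ai.txt").write_text("A first paragraph talking about AI")
(docs_dir / "climate.txt").write_text("A second paragraph talking about climate change")

# load every .txt file, sorted for a stable order
documents = [p.read_text() for p in sorted(docs_dir.glob("*.txt"))]
print(documents)
```

The resulting `documents` list can be passed to `p.text({"documents": documents})` exactly as above.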

## Lemmatize text while processing a template

Banks comes with predefined filters you can use to process data before generating the
prompt. Say you want to use a lemmatizer on a document before summarizing it, first
you need to install `simplemma`:

```sh
pip install simplemma
```

then you can use the `lemmatize` filter in your templates like this:

```py
from banks import Prompt


prompt_template = """
Summarize the following document:
{{ document | lemmatize }}
Summary:
"""

p = Prompt(prompt_template)
print(p.text({"document": "The cats are running"}))
```

the output would be:

```txt
Summarize the following document:
the cat be run
Summary:
```
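Under the hood the filter delegates to `simplemma`; to see the shape of the transformation without installing anything, here is a toy stand-in that lemmatizes with a hardcoded lookup table (the real filter handles entire languages, not just these words):

```python
# toy lemma table; the real filter relies on simplemma's dictionaries
LEMMAS = {"cats": "cat", "are": "be", "running": "run"}


def toy_lemmatize(text: str) -> str:
    # lowercase each word and replace it with its lemma when we know one
    words = [w.lower() for w in text.split()]
    return " ".join(LEMMAS.get(w, w) for w in words)


print(toy_lemmatize("The cats are running"))  # the cat be run
```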

## Use a LLM to generate a text while rendering a prompt

Sometimes it might be useful to ask another LLM to generate examples for you in a
few-shot prompt. Provided you have a valid OpenAI API key stored in an env var
called `OPENAI_API_KEY` you can ask Banks to do something like this (note we can
annotate the prompt using comments - anything within `{# ... #}` will be removed
from the final prompt):

```py
from banks import Prompt


prompt_template = """
Generate a tweet about the topic {{ topic }} with a positive sentiment.
{#
This is for illustration purposes only, there are better and cheaper ways
to generate examples for a few-shots prompt.
#}
Examples:
{% for number in range(3) %}
- {% generate "write a tweet with positive sentiment" "gpt-3.5-turbo" %}
{% endfor %}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

The output would be something similar to the following:
```txt
Generate a tweet about the topic climate change with a positive sentiment.
Examples:
- "Feeling grateful for the amazing capabilities of #GPT3.5Turbo! It's making my work so much easier and efficient. Thank you, technology!" #positivity #innovation
- "Feeling grateful for all the opportunities that come my way! With #GPT3.5Turbo, I am able to accomplish tasks faster and more efficiently. #positivity #productivity"
- "Feeling grateful for all the wonderful opportunities and experiences that life has to offer! #positivity #gratitude #blessed #gpt3.5turbo"
```

If you paste Banks' output into ChatGPT you would get something like this:
```txt
Climate change is a pressing global issue, but together we can create positive change! Let's embrace renewable energy, protect our planet, and build a sustainable future for generations to come. 🌍💚 #ClimateAction #PositiveFuture
```

> [!IMPORTANT]
> The `generate` extension uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood, and provided you have the
> proper environment variables set, you can use any model from the supported [model providers](https://docs.litellm.ai/docs/providers).

> [!NOTE]
> Banks uses a cache to avoid generating text again for the same template with the same context. By default
> the cache is in-memory but it can be customized.

## Go meta: create a prompt and `generate` its response

We can leverage Jinja's macro system to generate a prompt, send the result to OpenAI and get a response.
Let's bring back the blog writing example:

```py
from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}
Blog post:
{%- endcall -%}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

The snippet above won't print the prompt; instead, it will render the prompt text

```
Write a 500-word blog post on climate change
Blog post:
```

and will send it to OpenAI using the `generate` extension, eventually returning its response:

```
Climate change is a phenomenon that has been gaining attention in recent years...
...
```

## Go meta(meta): process a LLM response

When generating a response from a prompt template, we can go a step further and
post-process the LLM response by assigning it to a variable and applying filters
to it:

```py
from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{%- set prompt_result %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}
Blog post:
{%- endcall -%}
{%- endset %}
{# nothing is returned at this point: the variable 'prompt_result' contains the result #}
{# let's use the prompt_result variable now #}
{{ prompt_result | upper }}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

The final answer from the LLM will be printed, this time all in uppercase.

## Reuse templates from registries

We can get the same result as the previous example loading the prompt template from a registry
instead of hardcoding it into the Python code. For convenience, Banks comes with a few registry types
you can use to store your templates. For example, the `DirectoryTemplateRegistry` can load templates
from a directory in the file system. Suppose you have a folder called `templates` in the current path,
and the folder contains a file called `blog.jinja`. You can load the prompt template like this:

```py
from pathlib import Path

from banks.registries import DirectoryTemplateRegistry

# point the registry at the "templates" folder described above
registry = DirectoryTemplateRegistry(Path("templates"))
prompt = registry.get(name="blog")

print(prompt.text({"topic": "retrogame computing"}))
```
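For the snippet above to work, the `templates` folder must exist and contain `blog.jinja`; here is a sketch that prepares it with the standard library (the template body is the one from the first example):

```python
from pathlib import Path

templates_dir = Path("templates")
templates_dir.mkdir(exist_ok=True)

# the registry looks templates up by name, so "blog" maps to blog.jinja
(templates_dir / "blog.jinja").write_text(
    "Write a 500-word blog post on {{ topic }}.\n\nBlog post:"
)
```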

## Async support

To run Banks within an `asyncio` loop you have to do two things:

1. Set the environment variable `BANKS_ASYNC_ENABLED=true`.
2. Use the `AsyncPrompt` class, whose rendering methods (like `text`) are awaitable.

Example:
```python
import asyncio

from banks import AsyncPrompt


async def main():
    p = AsyncPrompt("Write a blog article about the topic {{ topic }}")
    result = await p.text({"topic": "AI frameworks"})
    print(result)


asyncio.run(main())
```
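Once prompts render asynchronously, several of them can be awaited concurrently with `asyncio.gather`. The pattern is sketched here with a stand-in coroutine instead of `AsyncPrompt`, so it runs without an event-loop-enabled Banks setup; a real version would await `p.text({"topic": topic})` inside `render`:

```python
import asyncio


async def render(topic: str) -> str:
    # stand-in for AsyncPrompt.text
    await asyncio.sleep(0)  # yield control, as a real I/O-bound render would
    return f"Write a blog article about the topic {topic}"


async def main() -> list[str]:
    topics = ["AI frameworks", "retrogame computing"]
    # render all prompts concurrently instead of one after another
    return await asyncio.gather(*(render(t) for t in topics))


results = asyncio.run(main())
print(results)
```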
32 changes: 24 additions & 8 deletions docs/index.md
@@ -3,24 +3,33 @@
[Banks](https://en.wikipedia.org/wiki/Arrival_(film)) is the linguist professor who will help you generate meaningful
LLM prompts using a template language that makes sense.

Prompts are instrumental for the success of any LLM application, and Banks focuses on specific areas of their
lifecycle:

- :orange_book: **Templating**: Banks provides tools and functions to build prompt texts and chat messages from generic blueprints.
- :tickets: **Versioning and metadata**: Banks supports attaching metadata to prompts to ease their management, and versioning is a
first-class citizen.
- :file_cabinet: **Management**: Banks provides ways to store prompts on disk along with their metadata.

Banks is fundamentally [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/intro/) with additional functionalities
specifically designed to work with Large Language Model prompts. Similar to other template languages, Banks takes
as input a generic piece of text called _template_ and gives you back its _rendered_ version, where the generic bits
are replaced by actual data provided by the user and returned in a form that's
suitable for sending to an LLM, like plain text or chat messages.

## Features

Banks currently supports all the [features from Jinja2](https://jinja.palletsprojects.com/en/3.1.x/templates/#jinja-filters.truncate)
along with some additions specifically designed to help developers with LLM prompts:

- [Filters](prompt.md#filters): useful to manipulate the prompt text during template rendering.
- [Extensions](prompt.md#extensions): useful to support custom functions (e.g. text generation via LiteLLM).
- [Macros](prompt.md#macros): useful to implement complex logic in the template itself instead of Python code.

The library comes with its own set of features:

- [Template registry](registry.md): storage API for versioned prompts.
- [Configuration](config.md): useful to integrate the library with existing applications.

## Installation

@@ -30,12 +39,19 @@ Install the latest version of Banks using `pip`:
pip install banks
```

### Optional dependencies

Some functionalities require additional dependencies that need to be installed manually:

- `pip install simplemma` is required by the `lemmatize` filter

## Examples

If you'd like to jump straight to the code:

- See a showcase of basic examples [here](examples).
- Check out the Cookbooks:
- :blue_book: [Prompt caching with Anthropic](./cookbooks/Prompt_Caching_with_Anthropic.ipynb)
- :blue_book: [Prompt versioning](./cookbooks/Prompt_Versioning.ipynb)

## License

`banks` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
5 changes: 5 additions & 0 deletions mkdocs.yml
@@ -8,6 +8,7 @@ theme:

nav:
- Home: 'index.md'
- Examples: 'examples.md'
- Python API: 'python.md'
- Prompt API: 'prompt.md'
- Configuration: 'config.md'
@@ -29,6 +30,10 @@ plugins:
show_bases: false

markdown_extensions:
- attr_list
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
- pymdownx.highlight:
anchor_linenums: true
line_spans: __span
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -58,7 +58,7 @@ cov = [
"test-cov",
"cov-report",
]
docs = "mkdocs {args:build}"

[[tool.hatch.envs.all.matrix]]
python = ["3.10", "3.11", "3.12"]