# msglm


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Installation

Install the latest version from PyPI:

``` sh
$ pip install msglm
```

## Usage

To use an LLM, we need to structure our messages in a particular format.

Here’s an example of a text chat from the OpenAI docs.

``` python
from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the Wild Atlantic Way?"}
    ]
)
```

Generating the correct format for a particular API can get tedious. The
goal of *msglm* is to make it easier.

The examples below will show you how to use *msglm* for text and image
chats with OpenAI and Anthropic.

### Text Chats

For a text chat, simply pass a list of strings along with the API format
(e.g. “openai”) to **mk_msgs** and it will generate the correct format.
The strings are assigned alternating *user* and *assistant* roles,
starting with *user*.

``` python
from msglm import mk_msgs

mk_msgs(["Hello, world!", "some assistant response"], api="openai")
```

``` js
[
    {"role": "user", "content": "Hello, world!"},
    {"role": "assistant", "content": "some assistant response"}
]
```

#### anthropic

``` python
from msglm import mk_msgs_anthropic as mk_msgs
from anthropic import Anthropic
client = Anthropic()

r = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=mk_msgs(["Hello, world!", "some LLM response"])
)
print(r.content[0].text)
```
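
Because the strings alternate between user and assistant roles, continuing
a conversation is just a matter of appending the model’s reply and your
next question to the list. A minimal sketch continuing the example above
(the follow-up question is illustrative):

``` python
# Rebuild the message list with the previous reply appended;
# mk_msgs again assigns alternating user/assistant roles.
r2 = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=mk_msgs(["Hello, world!", r.content[0].text, "Tell me more!"])
)
print(r2.content[0].text)
```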

#### openai

``` python
from msglm import mk_msgs_openai as mk_msgs
from openai import OpenAI

client = OpenAI()
r = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=mk_msgs(["Hello, world!", "some LLM response"])
)
print(r.choices[0].message.content)
```

### Image Chats

For an image chat, simply pass the raw image bytes in a list with your
question to **mk_msg** and it will generate the correct format.

``` python
from msglm import mk_msg

# img holds the raw image bytes
mk_msg([img, "What's in this image?"], api="anthropic")
```

``` js
{
    "role": "user",
    "content": [
        {"type": "image", "source": {"type": "base64", "media_type": media_type, "data": img}},
        {"type": "text", "text": "What's in this image?"}
    ]
}
```
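
The image bytes can come from anywhere, not just an HTTP request. For
example, reading a local file (a sketch; the filename is illustrative):

``` python
from pathlib import Path

# Read raw bytes from a local file; msglm fills in the base64
# encoding and media type, per the output format shown above.
img = Path("photo.jpg").read_bytes()
mk_msg([img, "What's in this image?"], api="anthropic")
```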

#### anthropic

``` python
import httpx
from msglm import mk_msg_anthropic as mk_msg
from anthropic import Anthropic

client = Anthropic()

img_url = "https://www.atshq.org/wp-content/uploads/2022/07/shutterstock_1626122512.jpg"
img = httpx.get(img_url).content

r = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[mk_msg([img, "Describe the image"])]
)
print(r.content[0].text)
```

#### openai

``` python
import httpx
from msglm import mk_msg_openai as mk_msg
from openai import OpenAI

img_url = "https://www.atshq.org/wp-content/uploads/2022/07/shutterstock_1626122512.jpg"
img = httpx.get(img_url).content

client = OpenAI()
r = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[mk_msg([img, "Describe the image"])]
)
print(r.choices[0].message.content)
```

### API Wrappers

To make life a little easier, msglm comes with API-specific wrappers for
[`mk_msg`](https://AnswerDotAI.github.io/msglm/core.html#mk_msg) and
[`mk_msgs`](https://AnswerDotAI.github.io/msglm/core.html#mk_msgs).

For Anthropic use

``` python
from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs
```

For OpenAI use

``` python
from msglm import mk_msg_openai as mk_msg, mk_msgs_openai as mk_msgs
```
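
Since both sets of wrappers share the same interface, switching providers
only means changing the import line. A sketch using the Anthropic
wrappers (the strings are illustrative):

``` python
# Swap this single import to target a different API; the
# message-building calls below stay exactly the same.
from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs

msg  = mk_msg("Hello, world!")                          # a single message
msgs = mk_msgs(["Hello, world!", "some LLM response"])  # a conversation
```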

### Other use-cases

#### Prompt Caching

*msglm* supports [prompt
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
for Anthropic models. Simply pass *cache=True* to *mk_msg* or *mk_msgs*.

``` python
from msglm import mk_msg_anthropic as mk_msg

mk_msg("please cache my message", cache=True)
```

This generates a message containing the expected cache block:

``` js
{
    "role": "user",
    "content": [
        {"type": "text", "text": "please cache my message", "cache_control": {"type": "ephemeral"}}
    ]
}
```
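
A sketch of using this in a real request (caching only pays off once the
prompt exceeds a minimum size, so in practice you would cache something
long, such as a reference document, rather than a short string):

``` python
from msglm import mk_msg_anthropic as mk_msg
from anthropic import Anthropic

client = Anthropic()

# Depending on your SDK version, prompt caching may require a beta header,
# e.g. extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"}.
r = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[mk_msg("please cache my message", cache=True)]
)
print(r.content[0].text)
```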

### Summary

We hope *msglm* will make your life a little easier when chatting with
LLMs. To learn more about the package, please read the
[docs](https://answerdotai.github.io/msglm/).