From 418a1883117e3e560ef4f02ef9b087fdcae3078e Mon Sep 17 00:00:00 2001
From: Erik Gaasedelen
Date: Tue, 19 Nov 2024 22:07:17 -0800
Subject: [PATCH 1/9] add llms txt

---
 llms-ctx-full.txt | 4794 +++++++++++++++++++++++++++++++++++++++++++++
 llms-ctx.txt      | 4794 +++++++++++++++++++++++++++++++++++++++++++++
 llms.txt          |   23 +
 3 files changed, 9611 insertions(+)
 create mode 100644 llms-ctx-full.txt
 create mode 100644 llms-ctx.txt
 create mode 100644 llms.txt

diff --git a/llms-ctx-full.txt b/llms-ctx-full.txt
new file mode 100644
index 0000000..b6e0a91
--- /dev/null
+++ b/llms-ctx-full.txt
@@ -0,0 +1,4794 @@
+Things to remember when using Claudette:
+
+- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
+- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
+- The library provides both synchronous and asynchronous interfaces
+- Use `Chat()` for maintaining conversation state and handling tool interactions
+- When using tools, the library automatically handles the request/response loop
+- Image support is built in but only available on compatible models (not Haiku)

# Claudette’s source

This is the ‘literate’ source code for Claudette. You can view the fully
rendered version of the notebook
[here](https://claudette.answer.ai/core.html), or you can clone the git
repo and run the [interactive
notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb)
in Jupyter. The notebook is converted to the [Python module
claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py)
using [nbdev](https://nbdev.fast.ai/). The goal of this source code is
both to create the Python module, and to teach the reader *how* it is
created, without assuming much existing knowledge about Claude’s API.

Most of the time you’ll see that we write some source code *first*, and
then a description or discussion of it *afterwards*.

## Setup

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

To print every HTTP request and response in full, uncomment the above
line. This functionality is provided by Anthropic’s SDK.
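Since the SDK reads your API key from the environment, one simple way to
provide it (a minimal sketch; the key shown is a placeholder) is to set
it before creating any client:

``` python
import os
os.environ['ANTHROPIC_API_KEY'] = 'sk-ant-...'  # placeholder: substitute your real key
```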
+ +> **Tip** +> +> If you’re reading the rendered version of this notebook, you’ll see an +> “Exported source” collapsible widget below. If you’re reading the +> source notebook directly, you’ll see `#| exports` at the top of the +> cell. These show that this piece of code will be exported into the +> python module that this notebook creates. No other code will be +> included – any other code in this notebook is just for demonstration, +> documentation, and testing. +> +> You can toggle expanding/collapsing the source code of all exported +> sections by using the ` Code` menu in the top right of the rendered +> notebook page. + +
+ +
+Exported source + +``` python +model_types = { + # Anthropic + 'claude-3-opus-20240229': 'opus', + 'claude-3-5-sonnet-20241022': 'sonnet', + 'claude-3-haiku-20240307': 'haiku-3', + 'claude-3-5-haiku-20241022': 'haiku-3-5', + # AWS + 'anthropic.claude-3-opus-20240229-v1:0': 'opus', + 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', + 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', + 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', + # Google + 'claude-3-opus@20240229': 'opus', + 'claude-3-5-sonnet-v2@20241022': 'sonnet', + 'claude-3-sonnet@20240229': 'sonnet', + 'claude-3-haiku@20240307': 'haiku', +} + +all_models = list(model_types) +``` + +
+
+Exported source + +``` python +text_only_models = ('claude-3-5-haiku-20241022',) +``` + +

These are the current versions and
[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
models at the time of writing.

``` python
model = models[1]; model
```

    'claude-3-5-sonnet-20241022'

For examples, we’ll use Sonnet 3.5, since it’s awesome.

## Anthropic SDK

``` python
cli = Anthropic()
```

This is the client that Anthropic’s SDK provides for interacting with
Claude from Python. To use it, pass it a list of *messages*, each with
*content* and a *role*. The roles should alternate between *user* and
*assistant*.
+ +> **Tip** +> +> After the code below you’ll see an indented section with an orange +> vertical line on the left. This is used to show the *result* of +> running the code above. Because the code is running in a Jupyter +> Notebook, we don’t have to use `print` to display results, we can just +> type the expression directly, as we do with `r` here. + +
+ +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +

### Formatting output

That output is pretty long and hard to read, so let’s clean it up. We’ll
start by pulling out the `Content` part of the message. To do that,
we’re going to write our first function, which will be included in the
`claudette/core.py` module.

> **Tip**
>
> This is the first exported public function or class we’re creating
> (the previous export was of a variable). In the rendered version of
> the notebook for these you’ll see 4 things, in this order (unless the
> symbol starts with a single `_`, which indicates it’s *private*):
>
> - The signature (with the symbol name as a heading, with a horizontal
>   rule above)
> - A table of parameter docs (if provided)
> - The doc string (in italics)
> - The source code (in a collapsible “Exported source” block)
>
> After that, we generally provide a bit more detail on what we’ve
> created, and why, along with a sample usage.
+ +------------------------------------------------------------------------ + +source + +### find_block + +> find_block (r:collections.abc.Mapping, blk_type:type= 'anthropic.types.text_block.TextBlock'>) + +*Find the first block of type `blk_type` in `r.content`.* + + + + + + + + + + + + + + + + + + + + + + + + +

|          | Type    | Default   | Details                    |
|----------|---------|-----------|----------------------------|
| r        | Mapping |           | The message to look in     |
| blk_type | type    | TextBlock | The type of block to find  |
+ +
+Exported source + +``` python +def find_block(r:abc.Mapping, # The message to look in + blk_type:type=TextBlock # The type of block to find + ): + "Find the first block of type `blk_type` in `r.content`." + return first(o for o in r.content if isinstance(o,blk_type)) +``` + +
+ +This makes it easier to grab the needed parts of Claude’s responses, +which can include multiple pieces of content. By default, we look for +the first text block. That will generally have the content we want to +display. + +``` python +find_block(r) +``` + + TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') + +------------------------------------------------------------------------ + +source + +### contents + +> contents (r) + +*Helper to get the contents from Claude response `r`.* + +
+Exported source + +``` python +def contents(r): + "Helper to get the contents from Claude response `r`." + blk = find_block(r) + if not blk and r.content: blk = r.content[0] + return blk.text.strip() if hasattr(blk,'text') else str(blk) +``` + +
+ +For display purposes, we often just want to show the text itself. + +``` python +contents(r) +``` + + "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" + +
Exported source

``` python
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self))  # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>"""
```
+ +Jupyter looks for a `_repr_markdown_` method in displayed objects; we +add this in order to display just the content text, and collapse full +details into a hideable section. Note that `patch` is from +[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) +functionality in an existing class. We pass the class(es) that we want +to patch as type annotations to `self`. In this case, `_repr_markdown_` +is being added to Anthropic’s `Message` class, so when we display the +message now we just see the contents, and the details are hidden away in +a collapsible details block. + +``` python +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +
+ +One key part of the response is the +[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells +us how many tokens we used by returning a `Usage` object. + +We’ll add some helpers to make things a bit cleaner for creating and +formatting these objects. + +``` python +r.usage +``` + + In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 + +------------------------------------------------------------------------ + +source + +### usage + +> usage (inp=0, out=0, cache_create=0, cache_read=0) + +*Slightly more concise version of `Usage`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

|              | Type | Default | Details               |
|--------------|------|---------|-----------------------|
| inp          | int  | 0       | Input tokens          |
| out          | int  | 0       | Output tokens         |
| cache_create | int  | 0       | Cache creation tokens |
| cache_read   | int  | 0       | Cache read tokens     |
+ +
+Exported source + +``` python +def usage(inp=0, # input tokens + out=0, # Output tokens + cache_create=0, # Cache creation tokens + cache_read=0 # Cache read tokens + ): + "Slightly more concise version of `Usage`." + return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) +``` + +
+ +The constructor provided by Anthropic is rather verbose, so we clean it +up a bit, using a lowercase version of the name. + +``` python +usage(5) +``` + + In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 + +------------------------------------------------------------------------ + +source + +### Usage.total + +> Usage.total () + +
+Exported source + +``` python +@patch(as_prop=True) +def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) +``` + +
+ +Adding a `total` property to `Usage` makes it easier to see how many +tokens we’ve used up altogether. + +``` python +usage(5,1).total +``` + + 6 + +------------------------------------------------------------------------ + +source + +### Usage.\_\_repr\_\_ + +> Usage.__repr__ () + +*Return repr(self).* + +
+Exported source + +``` python +@patch +def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' +``` + +
+ +In python, patching `__repr__` lets us change how an object is +displayed. (More generally, methods starting and ending in `__` in +Python are called `dunder` methods, and have some `magic` behavior – +such as, in this case, changing how an object is displayed.) + +``` python +usage(5) +``` + + In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 + +------------------------------------------------------------------------ + +source + +### Usage.\_\_add\_\_ + +> Usage.__add__ (b) + +*Add together each of `input_tokens` and `output_tokens`* + +
+Exported source + +``` python +@patch +def __add__(self:Usage, b): + "Add together each of `input_tokens` and `output_tokens`" + return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) +``` + +
+ +And, patching `__add__` lets `+` work on a `Usage` object. + +``` python +r.usage+r.usage +``` + + In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 + +### Creating messages + +Creating correctly formatted `dict`s from scratch every time isn’t very +handy, so next up we’ll add helpers for this. + +``` python +def mk_msg(content, role='user', **kw): + return dict(role=role, content=content, **kw) +``` + +We make things a bit more convenient by writing a function to create a +message for us. + +
+ +> **Note** +> +> You may have noticed that we didn’t export the +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function +> (i.e. there’s no “Exported source” block around it). That’s because +> we’ll need more functionality in our final version than this version +> has – so we’ll be defining a more complete version later. Rather than +> refactoring/editing in notebooks, often it’s helpful to simply +> gradually build up complexity by re-defining a symbol. + +
+ +``` python +prompt = "I'm Jeremy" +m = mk_msg(prompt) +m +``` + + {'role': 'user', 'content': "I'm Jeremy"} + +``` python +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? + +
+ +- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` +- content: + `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 24}` + +
+ +------------------------------------------------------------------------ + +source + +### mk_msgs + +> mk_msgs (msgs:list, **kw) + +*Helper to set ‘assistant’ role on alternate messages.* + +
+Exported source + +``` python +def mk_msgs(msgs:list, **kw): + "Helper to set 'assistant' role on alternate messages." + if isinstance(msgs,str): msgs=[msgs] + return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] +``` + +

LLMs, including Claude, don’t actually have state, but instead dialogs
are created by passing back all previous prompts and responses every
time. With Claude, they always alternate *user* and *assistant*.
Therefore we create a function to make it easier to build up these
dialog lists.

But to do so, we need to update
[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
not only pass a `str` as `content`, but can also pass a `dict` or an
object with a `content` attr, since these are both types of message that
Claude can create. We therefore check for a `content` key or attr, and
use it if found.
+Exported source + +``` python +def _str_if_needed(o): + if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o + return str(o) +``` + +
+ +``` python +def mk_msg(content, role='user', **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content['content'] + return dict(role=role, content=_str_if_needed(content), **kw) +``` + +``` python +msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) +msgs +``` + + [{'role': 'user', 'content': "I'm Jeremy"}, + {'role': 'assistant', + 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, + {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] + +Now, if we pass this list of messages to Claude, the model treats it as +a conversation to respond to. + +``` python +cli.messages.create(messages=msgs, model=model, max_tokens=200) +``` + +You just told me your name is Jeremy. + +
+ +- id: `msg_01KZski1R3z1iGjF6XsBb9dM` +- content: + `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 46, 'output_tokens': 13}` + +
+ +## Client + +------------------------------------------------------------------------ + +source + +### Client + +> Client (model, cli=None, log=False) + +*Basic Anthropic messages client.* + +
+Exported source + +``` python +class Client: + def __init__(self, model, cli=None, log=False): + "Basic Anthropic messages client." + self.model,self.use = model,usage() + self.text_only = model in text_only_models + self.log = [] if log else None + self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) +``` + +

We’ll create a simple
[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
which tracks usage and stores the model to use. We don’t add any methods
right away – instead we’ll use `patch` for that so we can add and
document them incrementally.

``` python
c = Client(model)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
+Exported source + +``` python +@patch +def _r(self:Client, r:Message, prefill=''): + "Store the result of the message and accrue total usage." + if prefill: + blk = find_block(r) + blk.text = prefill + (blk.text or '') + self.result = r + self.use += r.usage + self.stop_reason = r.stop_reason + self.stop_sequence = r.stop_sequence + return r +``` + +
+ +We use a `_` prefix on private methods, but we document them here in the +interests of literate source code. + +`_r` will be used each time we get a new result, to track usage and also +to keep the result available for later. + +``` python +c._r(r) +c.use +``` + + In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 + +Whereas OpenAI’s models use a `stream` parameter for streaming, +Anthropic’s use a separate method. We implement Anthropic’s approach in +a private method, and then use a `stream` parameter in `__call__` for +consistency: + +
Exported source

``` python
@patch
def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
    self._r(final, prefill)
    if self.log is not None: self.log.append({
        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
    })
    return self.result
```
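`_log` is the common bookkeeping path for both the streaming and
non-streaming cases: it stores the result via `_r`, and, if logging was
enabled when the client was created, appends a record of the call. A
minimal sketch of inspecting that record (the same `log=True` client is
demonstrated below):

``` python
c = Client(model, log=True)
c('Hi')
c.log[-1]['msgs']    # the messages sent on the last call
c.log[-1]['result']  # the full Message object that came back
```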
+
+Exported source + +``` python +@patch +def _stream(self:Client, msgs:list, prefill='', **kwargs): + with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield(prefill) + yield from s.text_stream + self._log(s.get_final_message(), prefill, msgs, **kwargs) +``` + +
+ +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. However Claude doesn’t actually repeat that in the +response, so for convenience we add it. + +
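For example, with a prefill the dialog we send ends with a partial
assistant turn, which Claude then continues (a sketch of the shape,
built with the `mk_msgs` helper defined above):

``` python
mk_msgs(["Concisely, what is the meaning of life?", "According to Douglas Adams,"])
# [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
#  {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
```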
+Exported source + +``` python +@patch +def _precall(self:Client, msgs, prefill, stop, kwargs): + pref = [prefill.strip()] if prefill else [] + if not isinstance(msgs,list): msgs = [msgs] + if stop is not None: + if not isinstance(stop, (list)): stop = [stop] + kwargs["stop_sequences"] = stop + msgs = mk_msgs(msgs+pref) + return msgs +``` + +

``` python
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
```

Defining `__call__` lets us use an object like a function (i.e. it’s
*callable*). We use it as a small wrapper over `messages.create`.
However we’re not exporting this version just yet – we have some
additions we’ll make in a moment…

``` python
c = Client(model, log=True)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0

``` python
c('Hi')
```

Hello! How can I help you today?
+ +- id: `msg_01DZfHpTqbodjegmvG6kkQvn` +- content: + `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 + +Let’s try out *prefill*: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +c(q, prefill=pref) +``` + +According to Douglas Adams, it’s 42. More seriously, there’s no +universal answer - it’s deeply personal. Common perspectives include: +finding happiness, making meaningful connections, pursuing purpose +through work/creativity, helping others, or simply experiencing and +appreciating existence. + +
+ +- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` +- content: + `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

We can pass `stream=True` to stream the response back incrementally:

``` python
for o in c('Hi', stream=True): print(o, end='')
```

    Hello! How can I help you today?

``` python
c.use
```

    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137

``` python
for o in c(q, prefill=pref, stream=True): print(o, end='')
```

    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.

``` python
c.use
```

    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214

Pass a stop sequence if you want Claude to stop generating text when it
encounters it.

``` python
c("Count from 1 to 10", stop="5")
```

1 2 3 4
+ +- id: `msg_01D3kdCAHNbXadE144FLPbQV` +- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `stop_sequence` +- stop_sequence: `5` +- type: `message` +- usage: + `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +This also works with streaming, and you can pass more than one stop +sequence: + +``` python +for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') +print(c.stop_reason, c.stop_sequence) +``` + + 1 + stop_sequence 2 + +You can check the logs: + +``` python +c.log[-1] +``` + + {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], + 'prefill': '', + 'max_tokens': 4096, + 'system': '', + 'temperature': 0, + 'stop_sequences': ['2', 'yellow'], + 'maxtok': None, + 'sp': None, + 'temp': None, + 'stream': None, + 'stop': None, + 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), + 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, + 'stop_reason': 'stop_sequence', + 'stop_sequence': '2'} + +## Tool use + +Let’s now add tool use (aka *function calling*). + +------------------------------------------------------------------------ + +source + +### mk_tool_choice + +> mk_tool_choice (choose:Union[str,bool,NoneType]) + +*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ +if it is True, or ‘tool’ otherwise* + +
+Exported source + +``` python +def mk_tool_choice(choose:Union[str,bool,None])->dict: + "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" + return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} +``` + +
+ +``` python +print(mk_tool_choice('sums')) +print(mk_tool_choice(True)) +print(mk_tool_choice(None)) +``` + + {'type': 'tool', 'name': 'sums'} + {'type': 'any'} + {'type': 'auto'} + +Claude can be forced to use a particular tool, or select from a specific +list of tools, or decide for itself when to use a tool. If you want to +force a tool (or force choosing from a list), include a `tool_choice` +param with a dict from +[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice). + +For testing, we need a function that Claude can call; we’ll write a +simple function that adds numbers together, and will tell us when it’s +being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +sp = "You are a summing expert." +``` + +Claudette can autogenerate a schema thanks to the `toolslm` library. +We’ll force the use of the tool using the function we created earlier. + +``` python +tools=[get_schema(sums)] +choice = mk_tool_choice('sums') +``` + +We’ll start a dialog with Claude now. We’ll store the messages of our +dialog in `msgs`. The first message will be our prompt `pr`, and we’ll +pass our `tools` schema. + +``` python +msgs = mk_msgs(pr) +r = c(msgs, sp=sp, tools=tools, tool_choice=choice) +r +``` + +ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_015eEr2H8V4j8nNEh1KQifjH` +- content: + `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

When Claude decides that it should use a tool, it passes back a
`ToolUseBlock` with the name of the tool to call, and the params to use.

We don’t want to allow it to call just any possible function (that would
be a security disaster!) so we create a *namespace* – that is, a
dictionary of allowable function names to call.

``` python
ns = mk_ns(sums)
ns
```

    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}

------------------------------------------------------------------------

source

### mk_funcres

> mk_funcres (tuid, res)

*Given tool use id and the tool result, create a tool_result response.*
+Exported source + +``` python +def mk_funcres(tuid, res): + "Given tool use id and the tool result, create a tool_result response." + return dict(type="tool_result", tool_use_id=tuid, content=str(res)) +``` + +
+ +We can now use the function requested by Claude. We look it up in `ns`, +and pass in the provided parameters. + +``` python +fc = find_block(r, ToolUseBlock) +res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) +res +``` + + Finding the sum of 604542 and 6458932 + + {'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'} + +------------------------------------------------------------------------ + +source + +### mk_toolres + +> mk_toolres (r:collections.abc.Mapping, +> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) + +*Create a `tool_result` message from response `r`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

|     | Type     | Default | Details                                |
|-----|----------|---------|----------------------------------------|
| r   | Mapping  |         | Tool use request response from Claude  |
| ns  | Optional | None    | Namespace to search for tools          |
| obj | Optional | None    | Class to search for tools              |
+ +
+Exported source + +``` python +def mk_toolres( + r:abc.Mapping, # Tool use request response from Claude + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + obj:Optional=None # Class to search for tools + ): + "Create a `tool_result` message from response `r`." + cts = getattr(r, 'content', []) + res = [mk_msg(r)] + if ns is None: ns=globals() + if obj is not None: ns = mk_ns(obj) + tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] + if tcs: res.append(mk_msg(tcs)) + return res +``` + +
+ +In order to tell Claude the result of the tool call, we pass back the +tool use assistant request and the `tool_result` response. + +``` python +tr = mk_toolres(r, ns=ns) +tr +``` + + Finding the sum of 604542 and 6458932 + + [{'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +We add this to our dialog, and now Claude has all the information it +needs to answer our question. + +``` python +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + 'The sum of 604542 and 6458932 is 7063474.' + +``` python +msgs +``` + + [{'role': 'user', 'content': 'What is 604542+6458932?'}, + {'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +This works with methods as well – in this case, use the object itself +for `ns`: + +``` python +class Dummy: + def sums( + self, + a:int, # First thing to sum + b:int=1 # Second thing to sum + ) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +tools = [get_schema(Dummy.sums)] +o = Dummy() +r = c(pr, sp=sp, tools=tools, tool_choice=choice) +tr = mk_toolres(r, obj=o) +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + Finding the sum of 604542 and 6458932 + + 'The sum of 604542 and 6458932 is 7063474.' + +------------------------------------------------------------------------ + +source + +### get_types + +> get_types (msgs) + +``` python +get_types(msgs) +``` + + ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] + +------------------------------------------------------------------------ + +source + +### Client.\_\_call\_\_ + +> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', +> stream:bool=False, stop=None, tools:Optional[list]=None, +> tool_choice:Optional[dict]=None, +> metadata:MetadataParam|NotGiven=NOT_GIVEN, +> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio +> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, +> temperature:float|NotGiven=NOT_GIVEN, +> top_k:int|NotGiven=NOT_GIVEN, +> top_p:float|NotGiven=NOT_GIVEN, +> extra_headers:Headers|None=None, +> extra_query:Query|None=None, extra_body:Body|None=None, +> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) + +*Make a call to Claude.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
+ +
+Exported source + +``` python +@patch +@delegates(messages.Messages.create) +def __call__(self:Client, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + tools:Optional[list]=None, # List of tools to make available to Claude + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kwargs): + "Make a call to Claude." + if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] + if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) + msgs = self._precall(msgs, prefill, stop, kwargs) + if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
+ +``` python +r = c(pr, sp=sp, tools=sums, tool_choice=sums) +r +``` + +ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_01T8zmguPksQaKLLgUuaYAJL` +- content: + `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

``` python
tr = mk_toolres(r, ns=ns)
```

    Finding the sum of 604542 and 6458932

------------------------------------------------------------------------

source

### Client.structured

> Client.structured (msgs:list, tools:Optional[list]=None,
>                    obj:Optional=None,
>                    ns:Optional[collections.abc.Mapping]=None, sp='',
>                    temp=0, maxtok=4096, prefill='', stream:bool=False,
>                    stop=None, tool_choice:Optional[dict]=None,
>                    metadata:MetadataParam|NotGiven=NOT_GIVEN,
>                    stop_sequences:List[str]|NotGiven=NOT_GIVEN,
>                    system:Union[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN,
>                    temperature:float|NotGiven=NOT_GIVEN,
>                    top_k:int|NotGiven=NOT_GIVEN,
>                    top_p:float|NotGiven=NOT_GIVEN,
>                    extra_headers:Headers|None=None,
>                    extra_query:Query|None=None,
>                    extra_body:Body|None=None,
>                    timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

*Return the value of all tool calls (generally used for structured
outputs)*

|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| obj            | Optional                                         | None      | Class to search for tools                                   |
| ns             | Optional                                         | None      | Namespace to search for tools                               |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
+ +
+Exported source + +``` python +@patch +@delegates(Client.__call__) +def structured(self:Client, + msgs:list, # List of messages in the dialog + tools:Optional[list]=None, # List of tools to make available to Claude + obj:Optional=None, # Class to search for tools + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + **kwargs): + "Return the value of all tool calls (generally used for structured outputs)" + tools = listify(tools) + res = self(msgs, tools=tools, tool_choice=tools, **kwargs) + if ns is None: ns=mk_ns(*tools) + if obj is not None: ns = mk_ns(obj) + cts = getattr(res, 'content', []) + tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] + return tcs +``` + +

Anthropic’s API does not support response formats directly, so instead
we provide a `structured` method to use tool calling to achieve the same
result. The result of the tool is not passed back to Claude in this
case, but instead is returned directly to the user.

``` python
c.structured(pr, tools=[sums])
```

    Finding the sum of 604542 and 6458932

    [7063474]

## Chat

Rather than manually adding the responses to a dialog, we’ll create a
simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do
that for us each time we make a request. We’ll also store the system
prompt and tools here, to avoid passing them every time.

------------------------------------------------------------------------

source

### Chat

> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None,
>       sp='', tools:Optional[list]=None, temp=0,
>       cont_pr:Optional[str]=None)

*Anthropic chat client.*

|         | Type     | Default | Details                                                                          |
|---------|----------|---------|----------------------------------------------------------------------------------|
| model   | Optional | None    | Model to use (leave empty if passing `cli`)                                      |
| cli     | Optional | None    | Client to use (leave empty if passing `model`)                                   |
| sp      | str      |         | Optional system prompt                                                           |
| tools   | Optional | None    | List of tools to make available to Claude                                        |
| temp    | int      | 0       | Temperature                                                                      |
| cont_pr | Optional | None    | User prompt to continue an assistant response: assistant,[user:"…"],assistant    |
+ +
+Exported source + +``` python +class Chat: + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + sp='', # Optional system prompt + tools:Optional[list]=None, # List of tools to make available to Claude + temp=0, # Temperature + cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant + "Anthropic chat client." + assert model or cli + assert cont_pr != "", "cont_pr may not be an empty string" + self.c = (cli or Client(model)) + self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp + + @property + def use(self): return self.c.use +``` + +

The class stores the
[`Client`](https://claudette.answer.ai/core.html#client) that will
provide the responses in `c`, and a history of messages in `h`.

``` python
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])

We’ve shown the token usage, but what we really care about is pricing.
Let’s extract the latest
[pricing](https://www.anthropic.com/pricing#anthropic-api) from
Anthropic into a `pricing` dict.

We’ll patch `Usage` to enable it to compute the cost given pricing.

------------------------------------------------------------------------

source

### Usage.cost

> Usage.cost (costs:tuple)
+Exported source + +``` python +@patch +def cost(self:Usage, costs:tuple) -> float: + cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) + return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 +``` + +
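The `pricing` dict itself isn’t shown in this extract. As a hedged
sketch of its shape (illustrative numbers only; check Anthropic’s
pricing page for current rates), each tuple holds dollars per million
input, output, cache-write, and cache-read tokens, in the order that
`cost` indexes them:

``` python
# Illustrative shape only: the rates here are placeholders, not authoritative.
pricing = {'opus':   (15, 75, 18.75, 1.5),
           'sonnet': ( 3, 15,  3.75, 0.3),
           'haiku':  (0.25, 1.25, 0.3, 0.03)}
```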
+ +``` python +chat.c.use.cost(pricing[model_types[chat.c.model]]) +``` + + 0.0 + +This is clunky. Let’s add `cost` as a property for the +[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass +in the appropriate prices for the current model to the usage cost +calculator. + +------------------------------------------------------------------------ + +source + +### Chat.cost + +> Chat.cost () + +
+Exported source + +``` python +@patch(as_prop=True) +def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) +``` + +

``` python
chat.cost
```

    0.0

------------------------------------------------------------------------

source

### Chat.\_\_call\_\_

> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None, **kw)

*Call self as a function.*

|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          | NoneType | None    | Prompt / message                                            |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
| kw          |          |         |                                                             |
+ +
+Exported source + +``` python +@patch +def _stream(self:Chat, res): + yield from res + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +def _post_pr(self:Chat, pr, prev_role): + if pr is None and prev_role == 'assistant': + if self.cont_pr is None: + raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") + pr = self.cont_pr # No user prompt, keep the chain + if pr: self.h.append(mk_msg(pr)) +``` + +
+
+Exported source + +``` python +@patch +def _append_pr(self:Chat, + pr=None, # Prompt / message + ): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' + if pr and prev_role == 'user': self() # already user request pending + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +def __call__(self:Chat, + pr=None, # Prompt / message + temp=None, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kw): + if temp is None: temp=self.temp + self._append_pr(pr) + res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, + tools=self.tools, tool_choice=tool_choice,**kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools) + return res +``` + +
+ +The `__call__` method just passes the request along to the +[`Client`](https://claudette.answer.ai/core.html#client), but rather +than just passing in this one prompt, it appends it to the history and +passes it all along. As a result, we now have state! + +``` python +chat = Chat(model, sp=sp) +``` + +``` python +chat("I'm Jeremy") +chat("What's my name?") +``` + +Your name is Jeremy. + +
+ +- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
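Since the dialog lives in `chat.h`, we can check what will be resent on
the next call (a sketch; at this point each entry is a plain message
dict):

``` python
[m['role'] for m in chat.h]
# -> ['user', 'assistant', 'user', 'assistant']
```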
+ +``` python +chat.use, chat.cost +``` + + (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) + +Let’s try out prefill too: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +chat(q, prefill=pref) +``` + +According to Douglas Adams,42. But seriously: To find purpose, create +meaning, love, grow, and make a positive impact while experiencing +life’s journey. + +
+ +- id: `msg_011s2iLranbHFhdsVg8sz6eY` +- content: + `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +By default messages must be in user, assistant, user format. If this +isn’t followed (aka calling `chat()` without a user message) it will +error out: + +``` python +try: chat() +except ValueError as e: print("Error:", e) +``` + + Error: Prompt must be given after assistant completion, or use `self.cont_pr`. + +Setting `cont_pr` allows a “default prompt” to be specified when a +prompt isn’t specified. Usually used to prompt the model to continue. + +``` python +chat.cont_pr = "keep going..." +chat() +``` + +To build meaningful relationships, pursue passions, learn continuously, +help others, appreciate beauty, overcome challenges, leave a positive +legacy, and find personal fulfillment through whatever brings you joy +and contributes to the greater good. + +
+ +- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` +- content: + `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +We can also use streaming: + +``` python +chat = Chat(model, sp=sp) +for o in chat("I'm Jeremy", stream=True): print(o, end='') +``` + + Hello Jeremy! Nice to meet you. How are you today? + +``` python +for o in chat(q, prefill=pref, stream=True): print(o, end='') +``` + + According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. + +### Chat tool use + +We automagically get streamlined tool use as well: + +``` python +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +r = chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +Let me calculate that sum for you. + +
+ +- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` +- content: + `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Now we need to send this result to Claude—calling the object with no +parameters tells it to return the tool result to Claude: + +``` python +chat() +``` + +604542 + 6458932 = 7063474 + +
+ +- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` +- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +It should be correct, because it actually used our Python function to do +the addition. Let’s check: + +``` python +a+b +``` + + 7063474 + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +# Image is Cute_dog.jpg from Wikimedia +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +``` python +img = fn.read_bytes() +``` + +
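Anthropic’s API expects image content as a base64-encoded dict; as a
sketch, the target structure looks like the following (this is what the
`img_msg` helper below builds for us):

``` python
import base64
# Base64-encode the raw bytes and wrap them in the documented structure.
src = dict(type='base64', media_type='image/jpeg', data=base64.b64encode(img).decode('utf-8'))
image_part = dict(type='image', source=src)
```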
+Exported source + +``` python +def _add_cache(d, cache): + "Optionally add cache control" + if cache: d["cache_control"] = {"type": "ephemeral"} + return d +``` + +
+ +Claude supports context caching by adding a `cache_control` header, so +we provide an option to enable that. + +------------------------------------------------------------------------ + +source + +### img_msg + +> img_msg (data:bytes, cache=False) + +*Convert image `data` into an encoded `dict`* + +
+Exported source + +``` python +def img_msg(data:bytes, cache=False)->dict: + "Convert image `data` into an encoded `dict`" + img = base64.b64encode(data).decode("utf-8") + mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] + r = dict(type="base64", media_type=mtype, data=img) + return _add_cache({"type": "image", "source": r}, cache) +``` + +

Anthropic have documented the particular `dict` structure that they
expect image data to be in, so we have a little function to create that
for us.

------------------------------------------------------------------------

source

### text_msg

> text_msg (s:str, cache=False)

*Convert `s` to a text message*
+Exported source + +``` python +def text_msg(s:str, cache=False)->dict: + "Convert `s` to a text message" + return _add_cache({"type": "text", "text": s}, cache) +``` + +
+ +A Claude message can be a list of image and text parts. So we’ve also +created a helper for making the text parts. + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +``` + +``` python +c([msg]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+
+Exported source + +``` python +def _mk_content(src, cache=False): + "Create appropriate content data structure based on type of content" + if isinstance(src,str): return text_msg(src, cache=cache) + if isinstance(src,bytes): return img_msg(src, cache=cache) + if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} + return _str_if_needed(src) +``` + +

There’s no need to manually choose the type of message, since we figure
that out from the type of the source data.

``` python
_mk_content('Hi')
```

    {'type': 'text', 'text': 'Hi'}

------------------------------------------------------------------------

source

### mk_msg

> mk_msg (content, role='user', cache=False, **kw)

*Helper to create a `dict` appropriate for a Claude message. `kw` are
added as key/value pairs to the message*

|         | Type | Default | Details                                                         |
|---------|------|---------|-----------------------------------------------------------------|
| content |      |         | A string, list, or dict containing the contents of the message  |
| role    | str  | user    | Must be 'user' or 'assistant'                                   |
| cache   | bool | False   |                                                                 |
| kw      |      |         |                                                                 |
+ +
+Exported source + +``` python +def mk_msg(content, # A string, list, or dict containing the contents of the message + role='user', # Must be 'user' or 'assistant' + cache=False, + **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content.get('content', content) + if not isinstance(content, list): content=[content] + content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' + return dict2obj(dict(role=role, content=content, **kw), list_func=list) +``` + +
+ +``` python +mk_msg(['hi', 'there'], cache=True) +``` + +``` json +{ 'content': [ {'text': 'hi', 'type': 'text'}, + { 'cache_control': {'type': 'ephemeral'}, + 'text': 'there', + 'type': 'text'}], + 'role': 'user'} +``` + +``` python +m = mk_msg(['hi', 'there'], cache=True) +``` + +When we construct a message, we now use +[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to +create the appropriate parts. Since a dialog contains multiple messages, +and a message can contain multiple content parts, to pass a single +message with multiple parts we have to use a list containing a single +list: + +``` python +c([[img, q]]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
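Note the `cache` flag defined on `mk_msg`: marking the final content
part as ephemeral enables Anthropic’s prompt caching (a sketch; caching
only kicks in above a minimum prompt size, and hits show up in the usage
fields rather than in the reply):

``` python
# A sketch of enabling prompt caching on the last content part.
msg = mk_msg([long_context, q], cache=True)  # long_context is a hypothetical large string
# A first call would report cache_creation_input_tokens; an identical
# follow-up call would report cache_read_input_tokens instead.
```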
+ +
+ +> **Note** +> +> As promised (much!) earlier, we’ve now finally completed our +> definition of +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this +> version is the one we export to the Python module. + +

Some models, such as Haiku 3.5, unfortunately do not support image
inputs:

``` python
model = models[-1]; model
```

    'claude-3-5-haiku-20241022'

``` python
c = Client(model)
c([[img, q]])
```

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    Cell In[115], line 2
          1 c = Client(model)
    ----> 2 c([[img, q]])

    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
         18 msgs = self._precall(msgs, prefill, stop, kwargs)
    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022

## Third party providers

### Amazon Bedrock

These are Amazon’s current Claude models:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

> **Note**
>
> `anthropic` at version 0.34.2 seems not to install `boto3` as a
> dependency. You may need to do a `pip install boto3`, or the creation
> of the [`Client`](https://claudette.answer.ai/core.html#client) below
> will fail.

Provided `boto3` is installed, we otherwise don’t need any extra code to
support Amazon Bedrock – we just have to set up the appropriate client:

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

``` python
chat = Chat(cli=client)
```

``` python
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
+ +- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
+ +### Google Vertex + +``` python +models_goog +``` + + ['claude-3-opus@20240229', + 'claude-3-5-sonnet-v2@20241022', + 'claude-3-sonnet@20240229', + 'claude-3-haiku@20240307'] + +``` python +from anthropic import AnthropicVertex +import google.auth +``` + +``` python +project_id = google.auth.default()[1] +region = "us-east5" +gv = AnthropicVertex(project_id=project_id, region=region) +client = Client(models_goog[-1], gv) +``` + +``` python +chat = Chat(cli=client) +``` + +``` python +chat("I'm Jeremy") +```
# Tool loop

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We’ll start by defining the same mock customer/order data as in
Anthropic’s example, and create an entity relationship between customers
and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We’ll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We’re now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We’ll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?
+ +``` python +chat = Chat(model, tools=tools) +r = chat('Please cancel all orders for customer C1 for me.') +print(r.stop_reason) +r.content +``` + + - Retrieving customer C1 + tool_use + + [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), + ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')] + +This is the start of a multi-stage tool use process. Doing it manually +step by step is inconvenient, so let’s write a function to handle this +for us: + +------------------------------------------------------------------------ + +source + +### Chat.toolloop + +> Chat.toolloop (pr, max_steps=10, trace_func:Optional[ infunctioncallable>]=None, cont_func:Optional[ infunctioncallable>]=, temp=None, +> maxtok=4096, stream=False, prefill='', +> tool_choice:Optional[dict]=None) + +*Add prompt `pr` to dialog and get a response from Claude, automatically +following up with `tool_use` messages* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          |          |         | Prompt to pass to Claude                                    |
| max_steps   | int      | 10      | Maximum number of tool requests to loop through             |
| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)              |
| cont_func   | Optional | noop    | Function that stops loop if returns False                   |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
+ +
+Exported source + +``` python +@patch +@delegates(Chat.__call__) +def toolloop(self:Chat, + pr, # Prompt to pass to Claude + max_steps=10, # Maximum number of tool requests to loop through + trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) + cont_func:Optional[callable]=noop, # Function that stops loop if returns False + **kwargs): + "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" + n_msgs = len(self.h) + r = self(pr, **kwargs) + for i in range(max_steps): + if r.stop_reason!='tool_use': break + if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) + r = self(**kwargs) + if not (cont_func or noop)(self.h[-2]): break + if trace_func: trace_func(self.h[n_msgs:]) + return r +``` + +
+ +We’ll start by re-running our previous request - we shouldn’t have to +manually pass back the `tool_use` message any more: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Can you tell me the email address for customer C1?') +r +``` + + - Retrieving customer C1 + +The email address for customer C1 is john@example.com. + +
+ +- id: `msg_01Fm2CY76dNeWief4kUW6r71` +- content: + `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
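If Claude kept requesting tools indefinitely, the loop would still terminate: the `max_steps` parameter (default 10) bounds the number of tool-use rounds. A minimal sketch, with an illustratively low limit:

``` python
chat = Chat(model, tools=tools)
# Allow at most two follow-up calls after tool uses; if the task needs
# more steps, toolloop simply returns the last response it received.
r = chat.toolloop('Please cancel all orders for customer C2 for me.', max_steps=2)
print(r.stop_reason)
```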
+ +Let’s see if it can handle the multi-stage process now – we’ll add +`trace_func=print` to see each stage of the process: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) +r +``` + + - Retrieving customer C1 + [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] + - Cancelling order O1 + [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] + - Cancelling order O2 + [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] + [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] + +I’ve successfully cancelled both orders O1 and O2 for customer C1. +Please let me know if you need anything else! + +
+ +- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` +- content: + `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +OK Claude thinks the orders were cancelled – let’s check one: + +``` python +chat.toolloop('What is the status of order O2?') +``` + + - Retrieving order O2 + +The status of order O2 is now ‘Cancelled’ since I successfully cancelled +that order earlier. + +
+ +- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` +- content: + `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
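`cont_func` gives us an early exit: after each tool call it receives the tool-result message, and returning `False` stops the loop. A minimal sketch (the predicate here is illustrative; it mirrors the `nested_idx` pattern used in the code interpreter example below):

``` python
chat = Chat(model, tools=tools)
# Bail out as soon as a lookup reports a missing order, rather than
# letting Claude keep trying further tool calls.
def _keep_going(m): return nested_idx(m, 'content', 'content') != 'Order not found'
r = chat.toolloop('Cancel order O99 for me.', cont_func=_keep_going)
```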
+ +## Code interpreter + +Here is an example of using `toolloop` to implement a simple code +interpreter with additional tools. + +``` python +from toolslm.shell import get_shell +from fastcore.meta import delegates +import traceback +``` + +``` python +@delegates() +class CodeChat(Chat): + imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses' + def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs): + super().__init__(model=model, **kwargs) + self.ask = ask + self.tools.append(self.run_cell) + self.shell = get_shell() + self.shell.run_cell('import '+self.imps) +``` + +We have one additional parameter to creating a `CodeChat` beyond what we +pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is +`ask` – if that’s `True`, we’ll prompt the user before running code. + +``` python +@patch +def run_cell( + self:CodeChat, + code:str, # Code to execute in persistent IPython session +): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute + "Asks user for permission, and if provided, executes python `code` using persistent IPython session." + confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n' + if self.ask and input(confirm): return '#DECLINED#' + try: res = self.shell.run_cell(code) + except Exception as e: return traceback.format_exc() + return res.stdout if res.result is None else res.result +``` + +We just pass along requests to run code to the shell’s implementation. +Claude often prints results instead of just using the last expression, +so we capture stdout in those cases. + +``` python +sp = f'''You are a knowledgable assistant. Do not use tools unless needed. +Don't do complex calculations yourself -- use code for them. +The following modules are pre-imported for `run_cell` automatically: + +{CodeChat.imps} + +Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls. + +If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.''' +``` + +``` python +def get_user(ignored:str='' # Unused parameter + ): # Username of current user + "Get the username of the user running this session" + print("Looking up username") + return 'Jeremy' +``` + +In order to test out multi-stage tool use, we create a mock function +that Claude can call to get the current username. + +``` python +model = models[1] +``` + +``` python +chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3) +``` + +Claude gets confused sometimes about how tools work, so we use examples +to remind it: + +``` python +chat.h = [ + 'Calculate the square root of `10332`', 'math.sqrt(10332)', + '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.' 
]
```

Providing a callable to toolloop’s `trace_func` lets us print out
information during the loop:

``` python
def _show_cts(h):
    for r in h:
        for o in r.get('content'):
            if hasattr(o,'text'): print(o.text)
            nm = getattr(o, 'name', None)
            if nm=='run_cell': print(o.input['code'])
            elif nm: print(f'{o.name}({o.input})')
```

…and toolloop’s `cont_func` callable lets us provide a function which,
if it returns `False`, stops the loop:

``` python
def _cont_decline(c):
    return nested_idx(c, 'content', 'content') != '#DECLINED#'
```

Now we can try our code interpreter. We start by asking for a function
to be created, which we’ll use in the next prompt to test that the
interpreter is persistent.

``` python
pr = '''Create a 1-line function `checksum` for a string `s`,
that multiplies together the ascii values of each character in `s` using `reduce`.'''
chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
```

    Press Enter to execute, or enter "n" to skip?
    ```
    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
    ```

    Create a 1-line function `checksum` for a string `s`,
    that multiplies together the ascii values of each character in `s` using `reduce`.
    Let me help you create that function using `reduce` and `functools`.
    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
    The function has been created. Let me explain how it works:
    1. It takes a string `s` as input
    2. Uses `functools.reduce` to multiply together all ASCII values
    3. `ord(y)` gets the ASCII value of each character
    4. The initial value is 1 (the third parameter to reduce)
    5. The lambda function multiplies the accumulator (x) with each new ASCII value

    You can test it with any string. For example, you could try `checksum("hello")` to see it in action.

The function has been created. Let me explain how it works: 1. It takes
a string `s` as input 2. Uses `functools.reduce` to multiply together
all ASCII values 3. `ord(y)` gets the ASCII value of each character 4.
The initial value is 1 (the third parameter to reduce) 5. The lambda
function multiplies the accumulator (x) with each new ASCII value

You can test it with any string. For example, you could try
`checksum("hello")` to see it in action.

<details>
+ +- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` +- content: + `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
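We can sanity-check the function Claude wrote without involving the model at all, since it's just plain Python; a quick sketch of the same reduction:

``` python
import functools
# Multiply together the ASCII values of each character, starting from 1 --
# the same computation as the `checksum` lambda defined above.
print(functools.reduce(lambda x, y: x * ord(y), 'hello', 1))
```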
+ +By asking for a calculation to be done on the username, we force it to +use multiple steps: + +``` python +pr = 'Use it to get the checksum of the username of this session.' +chat.toolloop(pr, trace_func=_show_cts) +``` + + Looking up username + Use it to get the checksum of the username of this session. + I'll first get the username using `get_user` and then apply our `checksum` function to it. + get_user({'ignored': ''}) + Press Enter to execute, or enter "n" to skip? + ``` + print(checksum("Jeremy")) + ``` + + Now I'll calculate the checksum of "Jeremy": + print(checksum("Jeremy")) + The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". + +The checksum of the username “Jeremy” is 1134987783204. This was +calculated by multiplying together the ASCII values of each character in +“Jeremy”. + +
+ +- id: `msg_01UXvtcLzzykZpnQUT35v4uD` +- content: + `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
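Because the IPython session is persistent, the objects Claude created are still there; a minimal sketch that queries the shell directly, bypassing the model:

``` python
# `checksum` was defined inside the persistent session, so we can call
# it via the shell without another round trip to Claude.
res = chat.shell.run_cell('checksum("Jeremy")')
print(res.result)  # 1134987783204
```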
# The async version + + + +## Setup + +## Async SDK + +``` python +model = models[1] +cli = AsyncAnthropic() +``` + +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = await cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hello Jeremy! It’s nice to meet you. How can I assist you today? Is +there anything specific you’d like to talk about or any questions you +have? + +
+ +- id: `msg_019gsEQs5dqb3kgwNJbTH27M` +- content: + `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 36}` + +
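The raw async SDK can also stream; a minimal sketch (reusing `cli` and `m` from above) using the SDK's `messages.stream` context manager, which we'll wrap more conveniently below:

``` python
async with cli.messages.stream(model=model, messages=[m], max_tokens=100) as s:
    # Print each text chunk as it arrives.
    async for text in s.text_stream: print(text, end='')
```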
+ +------------------------------------------------------------------------ + +source + +### AsyncClient + +> AsyncClient (model, cli=None, log=False) + +*Async Anthropic messages client.* + +
+Exported source + +``` python +class AsyncClient(Client): + def __init__(self, model, cli=None, log=False): + "Async Anthropic messages client." + super().__init__(model,cli,log) + if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) +``` + +
+ +``` python +c = AsyncClient(model) +``` + +``` python +c._r(r) +c.use +``` + + In: 10; Out: 36; Total: 46 + +------------------------------------------------------------------------ + +source + +### AsyncClient.\_\_call\_\_ + +> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', +> stream:bool=False, stop=None, cli=None, log=False) + +*Make an async call to Claude.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|         | Type     | Default | Details                                                     |
|---------|----------|---------|-------------------------------------------------------------|
| msgs    | list     |         | List of messages in the dialog                              |
| sp      | str      |         | The system prompt                                           |
| temp    | int      | 0       | Temperature                                                 |
| maxtok  | int      | 4096    | Maximum tokens                                              |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response |
| stream  | bool     | False   | Stream response?                                            |
| stop    | NoneType | None    | Stop sequence                                               |
| cli     | NoneType | None    |                                                             |
| log     | bool     | False   |                                                             |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): + async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield prefill + async for o in s.text_stream: yield o + self._log(await s.get_final_message(), prefill, msgs, kwargs) +``` + +
+
+Exported source + +``` python +@patch +@delegates(Client) +async def __call__(self:AsyncClient, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + **kwargs): + "Make an async call to Claude." + msgs = self._precall(msgs, prefill, stop, kwargs) + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = await self.c.messages.create( + model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
+ +``` python +c = AsyncClient(model, log=True) +c.use +``` + + In: 0; Out: 0; Total: 0 + +``` python +c.model = models[1] +await c('Hi') +``` + +Hello! How can I assist you today? Feel free to ask any questions or let +me know if you need help with anything. + +
+ +- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` +- content: + `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 29; Total: 37 + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await c(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon meaning of life. Many philosophers +and religions have proposed different answers, but it remains an open +question that individuals must grapple with for themselves. + +
+ +- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +async for o in (await c('Hi', stream=True)): print(o, end='') +``` + + Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything. + +``` python +c.use +``` + + In: 40; Out: 109; Total: 149 + +``` python +async for o in (await c(q, prefill=pref, stream=True)): print(o, end='') +``` + + According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves. + +``` python +c.use +``` + + In: 64; Out: 160; Total: 224 + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +sp = "You are a summing expert." +``` + +``` python +tools=[get_schema(sums)] +choice = mk_tool_choice('sums') +``` + +``` python +tools = [get_schema(sums)] +msgs = mk_msgs(pr) +r = await c(msgs, sp=sp, tools=tools, tool_choice=choice) +tr = mk_toolres(r, ns=globals()) +msgs += tr +contents(await c(msgs, sp=sp, tools=tools)) +``` + + Finding the sum of 604542 and 6458932 + + 'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?' + +## AsyncChat + +------------------------------------------------------------------------ + +source + +### AsyncChat + +> AsyncChat (model:Optional[str]=None, +> cli:Optional[claudette.core.Client]=None, sp='', +> tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None) + +*Anthropic async chat client.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|         | Type     | Default | Details                                      |
|---------|----------|---------|----------------------------------------------|
| model   | Optional | None    | Model to use (leave empty if passing cli)    |
| cli     | Optional | None    | Client to use (leave empty if passing model) |
| sp      | str      |         |                                              |
| tools   | Optional | None    |                                              |
| temp    | int      | 0       |                                              |
| cont_pr | Optional | None    |                                              |
+ +
+Exported source + +``` python +@delegates() +class AsyncChat(Chat): + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + **kwargs): + "Anthropic async chat client." + super().__init__(model, cli, **kwargs) + if not cli: self.c = AsyncClient(model) +``` + +
+ +``` python +sp = "Never mention what tools you use." +chat = AsyncChat(model, sp=sp) +chat.c.use, chat.h +``` + + (In: 0; Out: 0; Total: 0, []) + +------------------------------------------------------------------------ + +source + +### AsyncChat.\_\_call\_\_ + +> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False, +> prefill='', **kw) + +*Call self as a function.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|         | Type     | Default | Details                                                     |
|---------|----------|---------|-------------------------------------------------------------|
| pr      | NoneType | None    | Prompt / message                                            |
| temp    | int      | 0       | Temperature                                                 |
| maxtok  | int      | 4096    | Maximum tokens                                              |
| stream  | bool     | False   | Stream response?                                            |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response |
| kw      |         |         |                                                              |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncChat, res): + async for o in res: yield o + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +async def _append_pr(self:AsyncChat, pr=None): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history + if pr and prev_role == 'user': await self() + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +async def __call__(self:AsyncChat, + pr=None, # Prompt / message + temp=0, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + **kw): + await self._append_pr(pr) + if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] + res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) + return res +``` + +
+ +``` python +await chat("I'm Jeremy") +await chat("What's my name?") +``` + +Your name is Jeremy, as you mentioned in your previous message. + +
+ +- id: `msg_01NMugMXWpDP9iuTXeLkHarn` +- content: + `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await chat(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon answer. Common philosophical +perspectives include: + +1. Finding personal fulfillment +2. Serving others +3. Pursuing happiness +4. Creating meaning through our choices +5. Experiencing and appreciating existence + +Ultimately, many believe each individual must determine their own life’s +meaning. + +
+ +- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat = AsyncChat(model, sp=sp) +async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') +``` + + Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? + +``` python +pr = f"What is {a}+{b}?" +chat = AsyncChat(model, sp=sp, tools=[sums]) +r = await chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +To answer this question, I can use the “sums” function to add these two +numbers together. Let me do that for you. + +
+ +- id: `msg_015z1rffSWFxvj7rSpzc43ZE` +- content: + `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +await chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +fn = Path('samples/puppy.jpg') +img = fn.read_bytes() +``` + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +await c([msg]) +``` + +The flowers in this image are purple. They appear to be small, +daisy-like flowers, possibly asters or some type of purple daisy, +blooming in the background behind the adorable puppy in the foreground. + +
+ +- id: `msg_017qgZggLjUY915mWbWCkb9X` +- content: + `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
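Streaming and tool use compose in the async client too: the stream handler appends any `tool_use` results to the history once the stream finishes (see the `_stream` source above). A minimal sketch reusing `sums`, `sp`, and `pr` from earlier:

``` python
chat = AsyncChat(model, sp=sp, tools=[sums])
# Stream the first response; if Claude requests the tool, a plain
# follow-up call completes the answer.
async for o in (await chat(pr, stream=True)): print(o, end='')
r = await chat()
```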
# claudette

> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> format](https://claudette.answer.ai/) of this tutorial.

*Claudette* is a wrapper for Anthropic’s [Python
SDK](https://github.com/anthropics/anthropic-sdk-python).

The SDK works well, but it is quite low level – it leaves the developer
to do a lot of stuff manually. That’s a lot of extra work and
boilerplate! Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
  creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
  words of its response
- Convenient image support
- Simple and convenient support for Claude’s new Tool Use API.

You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.

Note that this library is the first ever “literate nbdev” project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you’ve never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
because we take seriously Knuth’s call to action, that we have a “*moral
commitment*” to never write an “*illiterate program*” – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)

> “*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*” Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)

## Install

``` sh
pip install claudette
```

## Getting started

Anthropic’s Python SDK will automatically be installed with Claudette,
if you don’t already have it.

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

To print every HTTP request and response in full, uncomment the above
line.

``` python
from claudette import *
```

Claudette only exports the symbols that are needed to use the library,
so you can use `import *` to import them. Alternatively, just use:

``` python
import claudette
```

…and then add the prefix `claudette.` to any usages of the module.

Claudette provides `models`, which is a list of models currently
available from the SDK.

``` python
models
```

    ['claude-3-opus-20240229',
     'claude-3-5-sonnet-20241022',
     'claude-3-haiku-20240307']

For these examples, we’ll use Sonnet 3.5, since it’s awesome!
+ +``` python +model = models[1] +``` + +## Chat + +The main interface to Claudette is the +[`Chat`](https://claudette.answer.ai/core.html#chat) class, which +provides a stateful interface to Claude: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +chat("I'm Jeremy") +``` + +Hello Jeremy, nice to meet you. + +
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
The response records that a cache has been created using these input
tokens:

``` python
print(r.usage)
```

    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)

We can now ask a followup question in this chat:

``` python
r = chat('How does it make tool use more ergonomic?')
r
```

According to the README, Claudette makes tool use more ergonomic in
several ways:

1. It uses docments to make Python function definitions more
   user-friendly - each parameter and return value should have a type
   and description

2. It handles the tool calling process automatically - when Claude
   returns a tool_use message, Claudette manages calling the tool with
   the provided parameters behind the scenes

3. It provides a `toolloop` method that can handle multiple tool calls
   in a single step to solve more complex problems

4. It allows you to pass a list of tools to the Chat constructor and
   optionally force Claude to always use a specific tool via
   `tool_choice`

Here’s a simple example from the README:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b

chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
```

This makes it much simpler compared to manually handling all the tool
use logic that would be required with the base SDK.

<details>
+ +- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` +- content: + `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` + +
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.

``` python
chat.use
```

    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int,  # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing a
`trace_func` we can see each response from Claude in the process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
chat.toolloop(pr, trace_func=print)
```

    Finding the sum of 604542 and 6458932
    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
    Finding the product of 7063474 and 2
    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]

The final result is 14,126,948.

<details>
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
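+

In a multi-turn chat like this, the image is re-sent with every request.
For large images it can therefore be worth marking the image part as
cacheable via [`img_msg`](https://claudette.answer.ai/core.html#img_msg)’s
`cache` option. A minimal sketch, not from the original notebook; it
assumes a multi-part prompt can be passed straight to
[`Chat`](https://claudette.answer.ai/core.html#chat) just as it can to
[`Client`](https://claudette.answer.ai/core.html#client), and uses a
fresh `cached_chat` object so the running example’s usage is not
disturbed:

``` python
cached_chat = Chat(model)
# Mark the image block as cacheable; follow-up turns that re-send it may
# then count as cache reads rather than fresh input tokens (subject to
# the API's minimum cacheable prompt size).
cached_chat([img_msg(img, cache=True), text_msg('In brief, what color flowers are in this image?')])
cached_chat('What direction is the puppy facing?')
cached_chat.use
```
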
Because the image is passed in again for every input in the dialog, the
number of input tokens increases quickly with this kind of chat; for
large images, prompt caching (as sketched above) makes a real
difference. Here’s the total usage of our (uncached) conversation:

``` python
chat.use
```

    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964

## Other model providers

You can also use third-party providers of Anthropic’s models, as shown
here.

### Amazon Bedrock

These are the models available through Bedrock:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

To use them, call `AnthropicBedrock` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicBedrock
```

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
object, passing this client as the `cli` parameter; from then on,
everything is identical to the previous examples.

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
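+

If your AWS credentials are already configured in your environment (for
example via `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or a shared
credentials file), you may not need to pass keys explicitly. A hedged
sketch, assuming `AnthropicBedrock` falls back to the SDK’s default
credential resolution when no keys are given:

``` python
# Assumption: with no explicit keys, credentials are resolved from the
# environment; only the region is specified here.
ab = AnthropicBedrock(aws_region='us-east-1')
chat = Chat(cli=Client(models_aws[-1], ab))
chat("What's the capital of France?")
```
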
### Google Vertex

These are the models available through Vertex:

``` python
models_goog
```

    ['claude-3-opus@20240229',
     'claude-3-5-sonnet-v2@20241022',
     'claude-3-sonnet@20240229',
     'claude-3-haiku@20240307']

To use them, call `AnthropicVertex` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicVertex
import google.auth
```

``` python
project_id = google.auth.default()[1]
gv = AnthropicVertex(project_id=project_id, region="us-east5")
client = Client(models_goog[-1], gv)
```

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

As with Bedrock, once the client is created, everything else works
exactly as before.

## Extensions

- [Pydantic Structured
  Output](https://github.com/tom-pollak/claudette-pydantic)
# claudette-pydantic



> Adds Pydantic support for
> [claudette](https://github.com/AnswerDotAI/claudette) through function
> calling

claudette_pydantic adds a `struct` method to claudette’s `Client` and
`Chat` classes.

`struct` is a wrapper around `__call__`: provide a Pydantic `BaseModel`
subclass as the schema, and the model’s response is returned as an
initialized instance of that `BaseModel`.

I’ve found Haiku to be quite reliable at even complicated schemas.

## Install

``` sh
pip install claudette-pydantic
```

## Getting Started

``` python
from claudette.core import *
import claudette_pydantic # patches claudette with `struct`
from pydantic import BaseModel, Field
from typing import Literal, Union, List
```

``` python
model = models[-1]
model
```

    'claude-3-haiku-20240307'

``` python
class Pet(BaseModel):
    "Create a new pet"
    name: str
    age: int
    owner: str = Field(default="NA", description="Owner name. Do not return if not given.")
    type: Literal['dog', 'cat', 'mouse']

c = Client(model)
print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet)))
print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet)))
```

    Pet(name='Mac', age=14, owner='NA', type='dog')
    Pet(name='juma', age=16, owner='Tom', type='cat')

## Going Deeper

This example, pulled from the [pydantic
docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions),
uses a list of discriminated unions, discriminated by `pet_type`. For
each member of the union, the model is required to return different
fields.

You should be able to use the full power of Pydantic here. I’ve found
that instructor for Claude fails on this example.

Each sub-`BaseModel` may also have a docstring describing its usage.
I’ve found prompting this way to be quite reliable.

``` python
class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Reptile(BaseModel):
    pet_type: Literal['lizard', 'dragon']
    scales: bool

# Dummy to show doc strings
class Create(BaseModel):
    "Pass as final member of the `pet` list to indicate success"
    pet_type: Literal['create']

class OwnersPets(BaseModel):
    """
    Information to gather for an Owner's pets
    """
    pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type')

chat = Chat(model)
pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows"
print(repr(chat.struct(OwnersPets, pr=pr)))
print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?")))
```

    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])

While `struct` uses tool calling to enforce the schema, the result is
saved in history as a plain `repr` response, preserving the user,
assistant, user flow.

``` python
chat.h
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. 
I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]},
     {'role': 'user',
      'content': [{'type': 'text',
        'text': 'actually my dragon does have scales, can you change that for me?'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}]

Alternatively, you can use `struct` in a tool-use flow by passing
`treat_as_output=False` (though this requires the next message in the
history to come from the assistant):

``` python
chat.struct(OwnersPets, pr=pr, treat_as_output=False)
chat.h[-3:]
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]},
     {'role': 'user',
      'content': [{'type': 'tool_result',
        'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR',
        'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}]

(So we can’t prompt again here; the next message would have to come from
the assistant.)

### User Creation & few-shot examples

You can even add few-shot examples *for each input*:

``` python
class User(BaseModel):
    "User creation tool"
    age: int = Field(description='Age of the user')
    name: str = Field(title='Username')
    password: str = Field(
        json_schema_extra={
            'title': 'Password',
            'description': 'Password of the user',
            'examples': ['Monkey!123'],
        }
    )
print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples")))
```

    User(age=22, name='tom', password='Monkey!123')

The model uses the few-shot password example, as asked for in the system
prompt.

### You can find more examples in [nbs/examples](nbs/examples)

## Signatures

``` python
Client.struct(
    self: claudette.core.Client,
    msgs: list,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    **kwargs, # passed through to Client.__call__
) -> BaseModel
```

``` python
Chat.struct(
    self: claudette.core.Chat,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    treat_as_output=True, # if True, save the repr in history rather than the raw tool-use flow
    **kwargs, # passed through to Chat.__call__
) -> BaseModel
```
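
For comparison, claudette’s built-in
[`Client.structured`](https://claudette.answer.ai/core.html#client.structured)
can produce similar structured output from a plain function instead of a
`BaseModel` (without Pydantic’s validation or nested models). A rough
sketch, not from the claudette-pydantic README; `make_pet` is a made-up
function for illustration:

``` python
def make_pet(
    name:str, # The pet's name
    age:int,  # The pet's age
    type:str  # One of 'dog', 'cat', or 'mouse'
):
    "Create a new pet"
    return dict(name=name, age=age, type=type)

# Likely result: [{'name': 'Mac', 'age': 14, 'type': 'dog'}]
c.structured("Can you make a pet for my dog Mac? He's 14 years old", tools=[make_pet])
```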
diff --git a/llms-ctx.txt b/llms-ctx.txt new file mode 100644 index 0000000..b6e0a91 --- /dev/null +++ b/llms-ctx.txt @@ -0,0 +1,4794 @@ +Things to remember when using Claudette: + +- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key +- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) +- The library provides both synchronous and asynchronous interfaces +- Use `Chat()` for maintaining conversation state and handling tool interactions +- When using tools, the library automatically handles the request/response loop +- Image support is built in but only available on compatible models (not Haiku)# Claudette’s source + + + +This is the ‘literate’ source code for Claudette. You can view the fully +rendered version of the notebook +[here](https://claudette.answer.ai/core.html), or you can clone the git +repo and run the [interactive +notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb) +in Jupyter. The notebook is converted the [Python module +claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py) +using [nbdev](https://nbdev.fast.ai/). The goal of this source code is +to both create the Python module, and also to teach the reader *how* it +is created, without assuming much existing knowledge about Claude’s API. + +Most of the time you’ll see that we write some source code *first*, and +then a description or discussion of it *afterwards*. + +## Setup + +``` python +import os +# os.environ['ANTHROPIC_LOG'] = 'debug' +``` + +To print every HTTP request and response in full, uncomment the above +line. This functionality is provided by Anthropic’s SDK. + +
+ +> **Tip** +> +> If you’re reading the rendered version of this notebook, you’ll see an +> “Exported source” collapsible widget below. If you’re reading the +> source notebook directly, you’ll see `#| exports` at the top of the +> cell. These show that this piece of code will be exported into the +> python module that this notebook creates. No other code will be +> included – any other code in this notebook is just for demonstration, +> documentation, and testing. +> +> You can toggle expanding/collapsing the source code of all exported +> sections by using the ` Code` menu in the top right of the rendered +> notebook page. + +
+ +
+Exported source + +``` python +model_types = { + # Anthropic + 'claude-3-opus-20240229': 'opus', + 'claude-3-5-sonnet-20241022': 'sonnet', + 'claude-3-haiku-20240307': 'haiku-3', + 'claude-3-5-haiku-20241022': 'haiku-3-5', + # AWS + 'anthropic.claude-3-opus-20240229-v1:0': 'opus', + 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', + 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', + 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', + # Google + 'claude-3-opus@20240229': 'opus', + 'claude-3-5-sonnet-v2@20241022': 'sonnet', + 'claude-3-sonnet@20240229': 'sonnet', + 'claude-3-haiku@20240307': 'haiku', +} + +all_models = list(model_types) +``` + +
+
+Exported source + +``` python +text_only_models = ('claude-3-5-haiku-20241022',) +``` + +
+

These are the current versions and
[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
models at the time of writing.

``` python
model = models[1]; model
```

    'claude-3-5-sonnet-20241022'

For examples, we’ll use Sonnet 3.5, since it’s awesome.

## Anthropic SDK

``` python
cli = Anthropic()
```

This is the client that Anthropic’s SDK provides for interacting with
the API from Python. To use it, pass it a list of *messages*, each with
*content* and a *role*. The roles should alternate between *user* and
*assistant*.
+ +> **Tip** +> +> After the code below you’ll see an indented section with an orange +> vertical line on the left. This is used to show the *result* of +> running the code above. Because the code is running in a Jupyter +> Notebook, we don’t have to use `print` to display results, we can just +> type the expression directly, as we do with `r` here. + +
+ +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +
+

### Formatting output

That output is pretty long and hard to read, so let’s clean it up. We’ll
start by pulling out the `content` part of the message. To do that,
we’re going to write our first function, which will be included in the
`claudette/core.py` module.
+

> **Tip**
>
> This is the first exported public function or class we’re creating
> (the previous export was of a variable). In the rendered version of
> the notebook for these you’ll see 4 things, in this order (unless the
> symbol starts with a single `_`, which indicates it’s *private*):
>
> - The signature (with the symbol name as a heading, with a horizontal
>   rule above)
> - A table of parameter docs (if provided)
> - The doc string (in italics).
> - The source code (in a collapsible “Exported source” block)
>
> After that, we generally provide a bit more detail on what we’ve
> created, and why, along with a sample usage.
+

------------------------------------------------------------------------

source

### find_block

> find_block (r:collections.abc.Mapping, blk_type:type=<class
> 'anthropic.types.text_block.TextBlock'>)

*Find the first block of type `blk_type` in `r.content`.*
|          | Type    | Default   | Details                    |
|----------|---------|-----------|----------------------------|
| r        | Mapping |           | The message to look in     |
| blk_type | type    | TextBlock | The type of block to find  |
+ +
+Exported source + +``` python +def find_block(r:abc.Mapping, # The message to look in + blk_type:type=TextBlock # The type of block to find + ): + "Find the first block of type `blk_type` in `r.content`." + return first(o for o in r.content if isinstance(o,blk_type)) +``` + +
+ +This makes it easier to grab the needed parts of Claude’s responses, +which can include multiple pieces of content. By default, we look for +the first text block. That will generally have the content we want to +display. + +``` python +find_block(r) +``` + + TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') + +------------------------------------------------------------------------ + +source + +### contents + +> contents (r) + +*Helper to get the contents from Claude response `r`.* + +
+Exported source + +``` python +def contents(r): + "Helper to get the contents from Claude response `r`." + blk = find_block(r) + if not blk and r.content: blk = r.content[0] + return blk.text.strip() if hasattr(blk,'text') else str(blk) +``` + +
+ +For display purposes, we often just want to show the text itself. + +``` python +contents(r) +``` + + "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" + +
+
Exported source

``` python
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>"""
```
+ +Jupyter looks for a `_repr_markdown_` method in displayed objects; we +add this in order to display just the content text, and collapse full +details into a hideable section. Note that `patch` is from +[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) +functionality in an existing class. We pass the class(es) that we want +to patch as type annotations to `self`. In this case, `_repr_markdown_` +is being added to Anthropic’s `Message` class, so when we display the +message now we just see the contents, and the details are hidden away in +a collapsible details block. + +``` python +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +
+ +One key part of the response is the +[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells +us how many tokens we used by returning a `Usage` object. + +We’ll add some helpers to make things a bit cleaner for creating and +formatting these objects. + +``` python +r.usage +``` + + In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 + +------------------------------------------------------------------------ + +source + +### usage + +> usage (inp=0, out=0, cache_create=0, cache_read=0) + +*Slightly more concise version of `Usage`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|              | Type | Default | Details               |
|--------------|------|---------|-----------------------|
| inp          | int  | 0       | input tokens          |
| out          | int  | 0       | Output tokens         |
| cache_create | int  | 0       | Cache creation tokens |
| cache_read   | int  | 0       | Cache read tokens     |
+ +
+Exported source + +``` python +def usage(inp=0, # input tokens + out=0, # Output tokens + cache_create=0, # Cache creation tokens + cache_read=0 # Cache read tokens + ): + "Slightly more concise version of `Usage`." + return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) +``` + +
+ +The constructor provided by Anthropic is rather verbose, so we clean it +up a bit, using a lowercase version of the name. + +``` python +usage(5) +``` + + In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 + +------------------------------------------------------------------------ + +source + +### Usage.total + +> Usage.total () + +
+Exported source + +``` python +@patch(as_prop=True) +def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) +``` + +
+ +Adding a `total` property to `Usage` makes it easier to see how many +tokens we’ve used up altogether. + +``` python +usage(5,1).total +``` + + 6 + +------------------------------------------------------------------------ + +source + +### Usage.\_\_repr\_\_ + +> Usage.__repr__ () + +*Return repr(self).* + +
+Exported source + +``` python +@patch +def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' +``` + +
+

In Python, patching `__repr__` lets us change how an object is
displayed. (More generally, methods starting and ending in `__` are
called *dunder* methods, and have some special behavior; in this case,
controlling how an object is displayed.)

``` python
usage(5)
```

    In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5

------------------------------------------------------------------------

source

### Usage.\_\_add\_\_

> Usage.__add__ (b)

*Add together each of `input_tokens` and `output_tokens`*
+Exported source + +``` python +@patch +def __add__(self:Usage, b): + "Add together each of `input_tokens` and `output_tokens`" + return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) +``` + +
+ +And, patching `__add__` lets `+` work on a `Usage` object. + +``` python +r.usage+r.usage +``` + + In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 + +### Creating messages + +Creating correctly formatted `dict`s from scratch every time isn’t very +handy, so next up we’ll add helpers for this. + +``` python +def mk_msg(content, role='user', **kw): + return dict(role=role, content=content, **kw) +``` + +We make things a bit more convenient by writing a function to create a +message for us. + +
+ +> **Note** +> +> You may have noticed that we didn’t export the +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function +> (i.e. there’s no “Exported source” block around it). That’s because +> we’ll need more functionality in our final version than this version +> has – so we’ll be defining a more complete version later. Rather than +> refactoring/editing in notebooks, often it’s helpful to simply +> gradually build up complexity by re-defining a symbol. + +
+ +``` python +prompt = "I'm Jeremy" +m = mk_msg(prompt) +m +``` + + {'role': 'user', 'content': "I'm Jeremy"} + +``` python +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? + +
+ +- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` +- content: + `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 24}` + +
+ +------------------------------------------------------------------------ + +source + +### mk_msgs + +> mk_msgs (msgs:list, **kw) + +*Helper to set ‘assistant’ role on alternate messages.* + +
+Exported source + +``` python +def mk_msgs(msgs:list, **kw): + "Helper to set 'assistant' role on alternate messages." + if isinstance(msgs,str): msgs=[msgs] + return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] +``` + +
+

LLMs, including Claude, don’t actually have state; instead, dialogs are
created by passing back all previous prompts and responses every time.
With Claude, they always alternate *user* and *assistant*. Therefore we
create a function to make it easier to build up these dialog lists.

But to do so, we need to update
[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
pass not only a `str` as `content`, but also a `dict` or an object with
a `content` attr, since these are both types of message that Claude can
create. To handle this, we check for a `content` key or attr, and use it
if found.
+Exported source + +``` python +def _str_if_needed(o): + if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o + return str(o) +``` + +
+ +``` python +def mk_msg(content, role='user', **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content['content'] + return dict(role=role, content=_str_if_needed(content), **kw) +``` + +``` python +msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) +msgs +``` + + [{'role': 'user', 'content': "I'm Jeremy"}, + {'role': 'assistant', + 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, + {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] + +Now, if we pass this list of messages to Claude, the model treats it as +a conversation to respond to. + +``` python +cli.messages.create(messages=msgs, model=model, max_tokens=200) +``` + +You just told me your name is Jeremy. + +
+ +- id: `msg_01KZski1R3z1iGjF6XsBb9dM` +- content: + `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 46, 'output_tokens': 13}` + +
+ +## Client + +------------------------------------------------------------------------ + +source + +### Client + +> Client (model, cli=None, log=False) + +*Basic Anthropic messages client.* + +
+Exported source + +``` python +class Client: + def __init__(self, model, cli=None, log=False): + "Basic Anthropic messages client." + self.model,self.use = model,usage() + self.text_only = model in text_only_models + self.log = [] if log else None + self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) +``` + +
+

We’ll create a simple
[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
which tracks usage and stores the model to use. We don’t add any methods
right away – instead we’ll use `patch` for that so we can add and
document them incrementally.

``` python
c = Client(model)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
+Exported source + +``` python +@patch +def _r(self:Client, r:Message, prefill=''): + "Store the result of the message and accrue total usage." + if prefill: + blk = find_block(r) + blk.text = prefill + (blk.text or '') + self.result = r + self.use += r.usage + self.stop_reason = r.stop_reason + self.stop_sequence = r.stop_sequence + return r +``` + +
+ +We use a `_` prefix on private methods, but we document them here in the +interests of literate source code. + +`_r` will be used each time we get a new result, to track usage and also +to keep the result available for later. + +``` python +c._r(r) +c.use +``` + + In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 + +Whereas OpenAI’s models use a `stream` parameter for streaming, +Anthropic’s use a separate method. We implement Anthropic’s approach in +a private method, and then use a `stream` parameter in `__call__` for +consistency: + +
+
Exported source

``` python
@patch
def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
    self._r(final, prefill)
    if self.log is not None: self.log.append({
        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
    })
    return self.result
```
+
+Exported source + +``` python +@patch +def _stream(self:Client, msgs:list, prefill='', **kwargs): + with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield(prefill) + yield from s.text_stream + self._log(s.get_final_message(), prefill, msgs, **kwargs) +``` + +
+ +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. However Claude doesn’t actually repeat that in the +response, so for convenience we add it. + +
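
Before looking at the implementation, it’s worth seeing what prefill
amounts to at the message level: the prefill string simply becomes a
trailing *assistant* message in the list we send. A small sketch (not
part of the exported module; `pf_msgs` is just a throwaway name):

``` python
# The prefill string is appended as the final 'assistant' turn
pf_msgs = mk_msgs(["Concisely, what is the meaning of life?",
                   'According to Douglas Adams,'])
# -> [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
#     {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
```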
+Exported source + +``` python +@patch +def _precall(self:Client, msgs, prefill, stop, kwargs): + pref = [prefill.strip()] if prefill else [] + if not isinstance(msgs,list): msgs = [msgs] + if stop is not None: + if not isinstance(stop, (list)): stop = [stop] + kwargs["stop_sequences"] = stop + msgs = mk_msgs(msgs+pref) + return msgs +``` + +
+

``` python
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
```

Defining `__call__` lets us use an object like a function (i.e. it’s
*callable*). We use it as a small wrapper over `messages.create`.
However we’re not exporting this version just yet – we have some
additions we’ll make in a moment…

``` python
c = Client(model, log=True)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0

``` python
c('Hi')
```

Hello! How can I help you today?
+ +- id: `msg_01DZfHpTqbodjegmvG6kkQvn` +- content: + `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 + +Let’s try out *prefill*: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +c(q, prefill=pref) +``` + +According to Douglas Adams, it’s 42. More seriously, there’s no +universal answer - it’s deeply personal. Common perspectives include: +finding happiness, making meaningful connections, pursuing purpose +through work/creativity, helping others, or simply experiencing and +appreciating existence. + +
+ +- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` +- content: + `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+

We can pass `stream=True` to stream the response back incrementally:

``` python
for o in c('Hi', stream=True): print(o, end='')
```

    Hello! How can I help you today?

``` python
c.use
```

    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137

``` python
for o in c(q, prefill=pref, stream=True): print(o, end='')
```

    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.

``` python
c.use
```

    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214

Pass a stop sequence if you want Claude to stop generating text when it
encounters it.

``` python
c("Count from 1 to 10", stop="5")
```

1 2 3 4
+ +- id: `msg_01D3kdCAHNbXadE144FLPbQV` +- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `stop_sequence` +- stop_sequence: `5` +- type: `message` +- usage: + `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +This also works with streaming, and you can pass more than one stop +sequence: + +``` python +for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') +print(c.stop_reason, c.stop_sequence) +``` + + 1 + stop_sequence 2 + +You can check the logs: + +``` python +c.log[-1] +``` + + {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], + 'prefill': '', + 'max_tokens': 4096, + 'system': '', + 'temperature': 0, + 'stop_sequences': ['2', 'yellow'], + 'maxtok': None, + 'sp': None, + 'temp': None, + 'stream': None, + 'stop': None, + 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), + 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, + 'stop_reason': 'stop_sequence', + 'stop_sequence': '2'} + +## Tool use + +Let’s now add tool use (aka *function calling*). + +------------------------------------------------------------------------ + +source + +### mk_tool_choice + +> mk_tool_choice (choose:Union[str,bool,NoneType]) + +*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ +if it is True, or ‘tool’ otherwise* + +
+Exported source + +``` python +def mk_tool_choice(choose:Union[str,bool,None])->dict: + "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" + return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} +``` + +
+

``` python
print(mk_tool_choice('sums'))
print(mk_tool_choice(True))
print(mk_tool_choice(None))
```

    {'type': 'tool', 'name': 'sums'}
    {'type': 'any'}
    {'type': 'auto'}

Claude can be forced to use a particular tool, or select from a specific
list of tools, or decide for itself when to use a tool. If you want to
force a tool (or force choosing from a list), include a `tool_choice`
param with a dict from
[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice).

For testing, we need a function that Claude can call; we’ll write a
simple function that adds numbers together, and will tell us when it’s
being called:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```

Claudette can autogenerate a schema for `sums` thanks to the `toolslm`
library, and we’ll force Claude to use that tool via the `tool_choice`
dict we created earlier.

``` python
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
```

We’ll start a dialog with Claude now. We’ll store the messages of our
dialog in `msgs`. The first message will be our prompt `pr`, and we’ll
pass our `tools` schema.

``` python
msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
r
```

ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542,
‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
+ +- id: `msg_015eEr2H8V4j8nNEh1KQifjH` +- content: + `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+

When Claude decides that it should use a tool, it passes back a
`ToolUseBlock` with the name of the tool to call, and the params to use.

We don’t want to allow it to call just any possible function (that would
be a security disaster!) so we create a *namespace* – that is, a
dictionary of allowable function names to call.

``` python
ns = mk_ns(sums)
ns
```

    {'sums': <function sums(a: int, b: int = 1) -> int>}

------------------------------------------------------------------------

source

### mk_funcres

> mk_funcres (tuid, res)

*Given tool use id and the tool result, create a tool_result response.*
+Exported source + +``` python +def mk_funcres(tuid, res): + "Given tool use id and the tool result, create a tool_result response." + return dict(type="tool_result", tool_use_id=tuid, content=str(res)) +``` + +
+ +We can now use the function requested by Claude. We look it up in `ns`, +and pass in the provided parameters. + +``` python +fc = find_block(r, ToolUseBlock) +res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) +res +``` + + Finding the sum of 604542 and 6458932 + + {'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'} + +------------------------------------------------------------------------ + +source + +### mk_toolres + +> mk_toolres (r:collections.abc.Mapping, +> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) + +*Create a `tool_result` message from response `r`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|     | Type     | Default | Details                                |
|-----|----------|---------|----------------------------------------|
| r   | Mapping  |         | Tool use request response from Claude  |
| ns  | Optional | None    | Namespace to search for tools          |
| obj | Optional | None    | Class to search for tools              |
+ +
+Exported source + +``` python +def mk_toolres( + r:abc.Mapping, # Tool use request response from Claude + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + obj:Optional=None # Class to search for tools + ): + "Create a `tool_result` message from response `r`." + cts = getattr(r, 'content', []) + res = [mk_msg(r)] + if ns is None: ns=globals() + if obj is not None: ns = mk_ns(obj) + tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] + if tcs: res.append(mk_msg(tcs)) + return res +``` + +
+ +In order to tell Claude the result of the tool call, we pass back the +tool use assistant request and the `tool_result` response. + +``` python +tr = mk_toolres(r, ns=ns) +tr +``` + + Finding the sum of 604542 and 6458932 + + [{'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +We add this to our dialog, and now Claude has all the information it +needs to answer our question. + +``` python +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + 'The sum of 604542 and 6458932 is 7063474.' + +``` python +msgs +``` + + [{'role': 'user', 'content': 'What is 604542+6458932?'}, + {'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +This works with methods as well – in this case, use the object itself +for `ns`: + +``` python +class Dummy: + def sums( + self, + a:int, # First thing to sum + b:int=1 # Second thing to sum + ) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +tools = [get_schema(Dummy.sums)] +o = Dummy() +r = c(pr, sp=sp, tools=tools, tool_choice=choice) +tr = mk_toolres(r, obj=o) +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + Finding the sum of 604542 and 6458932 + + 'The sum of 604542 and 6458932 is 7063474.' + +------------------------------------------------------------------------ + +source + +### get_types + +> get_types (msgs) + +``` python +get_types(msgs) +``` + + ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] + +------------------------------------------------------------------------ + +source + +### Client.\_\_call\_\_ + +> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', +> stream:bool=False, stop=None, tools:Optional[list]=None, +> tool_choice:Optional[dict]=None, +> metadata:MetadataParam|NotGiven=NOT_GIVEN, +> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio +> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, +> temperature:float|NotGiven=NOT_GIVEN, +> top_k:int|NotGiven=NOT_GIVEN, +> top_p:float|NotGiven=NOT_GIVEN, +> extra_headers:Headers|None=None, +> extra_query:Query|None=None, extra_body:Body|None=None, +> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) + +*Make a call to Claude.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
+ +
+Exported source + +``` python +@patch +@delegates(messages.Messages.create) +def __call__(self:Client, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + tools:Optional[list]=None, # List of tools to make available to Claude + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kwargs): + "Make a call to Claude." + if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] + if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) + msgs = self._precall(msgs, prefill, stop, kwargs) + if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
+ +``` python +r = c(pr, sp=sp, tools=sums, tool_choice=sums) +r +``` + +ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_01T8zmguPksQaKLLgUuaYAJL` +- content: + `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +tr = mk_toolres(r, ns=ns) +``` + + Finding the sum of 604542 and 6458932 + +------------------------------------------------------------------------ + +source + +### Client.structured + +> Client.structured (msgs:list, tools:Optional[list]=None, +> obj:Optional=None, +> ns:Optional[collections.abc.Mapping]=None, sp='', +> temp=0, maxtok=4096, prefill='', stream:bool=False, +> stop=None, tool_choice:Optional[dict]=None, +> metadata:MetadataParam|NotGiven=NOT_GIVEN, +> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un +> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, +> temperature:float|NotGiven=NOT_GIVEN, +> top_k:int|NotGiven=NOT_GIVEN, +> top_p:float|NotGiven=NOT_GIVEN, +> extra_headers:Headers|None=None, +> extra_query:Query|None=None, +> extra_body:Body|None=None, +> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) + +*Return the value of all tool calls (generally used for structured +outputs)* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| obj            | Optional                                         | None      | Class to search for tools                                   |
| ns             | Optional                                         | None      | Namespace to search for tools                               |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
+ +
+Exported source + +``` python +@patch +@delegates(Client.__call__) +def structured(self:Client, + msgs:list, # List of messages in the dialog + tools:Optional[list]=None, # List of tools to make available to Claude + obj:Optional=None, # Class to search for tools + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + **kwargs): + "Return the value of all tool calls (generally used for structured outputs)" + tools = listify(tools) + res = self(msgs, tools=tools, tool_choice=tools, **kwargs) + if ns is None: ns=mk_ns(*tools) + if obj is not None: ns = mk_ns(obj) + cts = getattr(res, 'content', []) + tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] + return tcs +``` + +
+

Anthropic’s API does not support response formats directly, so instead
we provide a `structured` method that uses tool calling to achieve the
same result. The result of the tool is not passed back to Claude in this
case, but instead is returned directly to the user.

``` python
c.structured(pr, tools=[sums])
```

    Finding the sum of 604542 and 6458932

    [7063474]

## Chat

Rather than manually adding the responses to a dialog, we’ll create a
simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do
that for us, each time we make a request. We’ll also store the system
prompt and tools here, to avoid passing them every time.

------------------------------------------------------------------------

source

### Chat

> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None,
>       sp='', tools:Optional[list]=None, temp=0,
>       cont_pr:Optional[str]=None)

*Anthropic chat client.*
|         | Type     | Default | Details                                                                          |
|---------|----------|---------|----------------------------------------------------------------------------------|
| model   | Optional | None    | Model to use (leave empty if passing `cli`)                                      |
| cli     | Optional | None    | Client to use (leave empty if passing `model`)                                   |
| sp      | str      |         | Optional system prompt                                                           |
| tools   | Optional | None    | List of tools to make available to Claude                                        |
| temp    | int      | 0       | Temperature                                                                      |
| cont_pr | Optional | None    | User prompt to continue an assistant response: assistant,[user:"…"],assistant    |
+ +
+Exported source + +``` python +class Chat: + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + sp='', # Optional system prompt + tools:Optional[list]=None, # List of tools to make available to Claude + temp=0, # Temperature + cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant + "Anthropic chat client." + assert model or cli + assert cont_pr != "", "cont_pr may not be an empty string" + self.c = (cli or Client(model)) + self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp + + @property + def use(self): return self.c.use +``` + +
+

The class stores the
[`Client`](https://claudette.answer.ai/core.html#client) that will
provide the responses in `c`, and a history of messages in `h`.

``` python
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])

We’ve shown the token usage, but what we really care about is pricing.
Let’s extract the latest
[pricing](https://www.anthropic.com/pricing#anthropic-api) from
Anthropic into a `pricing` dict.

We’ll patch `Usage` to enable it to compute the cost given pricing.

------------------------------------------------------------------------

source

### Usage.cost

> Usage.cost (costs:tuple)
+Exported source + +``` python +@patch +def cost(self:Usage, costs:tuple) -> float: + cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) + return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 +``` + +
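+

The `pricing` dict itself isn’t shown in this excerpt. Conceptually it
maps each `model_types` value to per-million-token costs, in the order
`Usage.cost` expects. A hypothetical sketch; the figures are taken from
Anthropic’s published prices at the time of writing, but check the
pricing page rather than trusting these:

``` python
# (input, output, cache write, cache read) cost per million tokens
pricing = {
    'opus':   (15,   75,   18.75, 1.5),
    'sonnet': (3,    15,   3.75,  0.3),
    'haiku':  (0.25, 1.25, 0.3,   0.03),
    # ...plus entries for any other model_types values you use
}
```
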
``` python
chat.c.use.cost(pricing[model_types[chat.c.model]])
```

    0.0

This is clunky. Let’s add `cost` as a property for the
[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass
in the appropriate prices for the current model to the usage cost
calculator.

------------------------------------------------------------------------

source

### Chat.cost

> Chat.cost ()
+Exported source + +``` python +@patch(as_prop=True) +def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) +``` + +
+ +``` python +chat.cost +``` + + 0.0 + +------------------------------------------------------------------------ + +source + +### Chat.\_\_call\_\_ + +> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='', +> tool_choice:Optional[dict]=None, **kw) + +*Call self as a function.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          | NoneType | None    | Prompt / message                                            |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
| kw          |          |         |                                                             |
+ +
+Exported source + +``` python +@patch +def _stream(self:Chat, res): + yield from res + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +def _post_pr(self:Chat, pr, prev_role): + if pr is None and prev_role == 'assistant': + if self.cont_pr is None: + raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") + pr = self.cont_pr # No user prompt, keep the chain + if pr: self.h.append(mk_msg(pr)) +``` + +
+
+Exported source + +``` python +@patch +def _append_pr(self:Chat, + pr=None, # Prompt / message + ): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' + if pr and prev_role == 'user': self() # already user request pending + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +def __call__(self:Chat, + pr=None, # Prompt / message + temp=None, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kw): + if temp is None: temp=self.temp + self._append_pr(pr) + res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, + tools=self.tools, tool_choice=tool_choice,**kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools) + return res +``` + +
+ +The `__call__` method just passes the request along to the +[`Client`](https://claudette.answer.ai/core.html#client), but rather +than just passing in this one prompt, it appends it to the history and +passes it all along. As a result, we now have state! + +``` python +chat = Chat(model, sp=sp) +``` + +``` python +chat("I'm Jeremy") +chat("What's my name?") +``` + +Your name is Jeremy. + +
+ +- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat.use, chat.cost +``` + + (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) + +Let’s try out prefill too: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +chat(q, prefill=pref) +``` + +According to Douglas Adams,42. But seriously: To find purpose, create +meaning, love, grow, and make a positive impact while experiencing +life’s journey. + +
+ +- id: `msg_011s2iLranbHFhdsVg8sz6eY` +- content: + `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+

By default, messages must be in user, assistant, user format. If this
isn’t followed (e.g. calling `chat()` without a user message), it will
error out:

``` python
try: chat()
except ValueError as e: print("Error:", e)
```

    Error: Prompt must be given after assistant completion, or use `self.cont_pr`.

Setting `cont_pr` lets you specify a “default prompt” to be used
whenever an explicit prompt isn’t given – usually to ask the model to
continue.

``` python
chat.cont_pr = "keep going..."
chat()
```

To build meaningful relationships, pursue passions, learn continuously,
help others, appreciate beauty, overcome challenges, leave a positive
legacy, and find personal fulfillment through whatever brings you joy
and contributes to the greater good.
+ +- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` +- content: + `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +We can also use streaming: + +``` python +chat = Chat(model, sp=sp) +for o in chat("I'm Jeremy", stream=True): print(o, end='') +``` + + Hello Jeremy! Nice to meet you. How are you today? + +``` python +for o in chat(q, prefill=pref, stream=True): print(o, end='') +``` + + According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. + +### Chat tool use + +We automagically get streamlined tool use as well: + +``` python +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +r = chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +Let me calculate that sum for you. + +
+ +- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` +- content: + `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
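Behind the scenes, Claudette has already called `sums` with the
arguments above (hence the “Finding the sum…” print). The result goes
back to Claude as a `tool_result` content block in a user message,
shaped roughly like this (a sketch, with the id copied from the
`tool_use` block above; the same shape appears in the toolloop traces
later in this document):

``` python
# Sketch of the follow-up message containing the tool's output:
{'role': 'user',
 'content': [{'type': 'tool_result',
              'tool_use_id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB',
              'content': '7063474'}]}
```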
+

Now we need to send this result back to Claude. Calling the object with
no parameters tells it to return the tool result to Claude:

``` python
chat()
```

604542 + 6458932 = 7063474
+ +- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` +- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +It should be correct, because it actually used our Python function to do +the addition. Let’s check: + +``` python +a+b +``` + + 7063474 + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +# Image is Cute_dog.jpg from Wikimedia +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +``` python +img = fn.read_bytes() +``` + +
+Exported source + +``` python +def _add_cache(d, cache): + "Optionally add cache control" + if cache: d["cache_control"] = {"type": "ephemeral"} + return d +``` + +
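From the definition above, enabling `cache` simply tags a content block
with an ephemeral `cache_control` entry; a quick sketch of the expected
result:

``` python
_add_cache({'type': 'text', 'text': 'hi'}, True)
# -> {'type': 'text', 'text': 'hi', 'cache_control': {'type': 'ephemeral'}}
```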
+

Claude supports context caching when a `cache_control` entry is added to
a content block, so we provide an option to enable that.

------------------------------------------------------------------------

source

### img_msg

> img_msg (data:bytes, cache=False)

*Convert image `data` into an encoded `dict`*
+Exported source + +``` python +def img_msg(data:bytes, cache=False)->dict: + "Convert image `data` into an encoded `dict`" + img = base64.b64encode(data).decode("utf-8") + mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] + r = dict(type="base64", media_type=mtype, data=img) + return _add_cache({"type": "image", "source": r}, cache) +``` + +
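For reference, the structure this produces looks roughly like the
following (a sketch based on the definition above; the base64 payload is
truncated):

``` python
img_msg(img)
# roughly:
# {'type': 'image',
#  'source': {'type': 'base64', 'media_type': 'image/jpeg', 'data': '/9j/4AAQ...'}}
```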
+

Anthropic have documented the particular `dict` structure they expect
image data to be in, so we have a little function to create that for us.

------------------------------------------------------------------------

source

### text_msg

> text_msg (s:str, cache=False)

*Convert `s` to a text message*
+Exported source + +``` python +def text_msg(s:str, cache=False)->dict: + "Convert `s` to a text message" + return _add_cache({"type": "text", "text": s}, cache) +``` + +
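A quick check of the expected output, from the definition above:

``` python
text_msg('Hi')
# -> {'type': 'text', 'text': 'Hi'}
```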
+ +A Claude message can be a list of image and text parts. So we’ve also +created a helper for making the text parts. + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +``` + +``` python +c([msg]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+
+Exported source + +``` python +def _mk_content(src, cache=False): + "Create appropriate content data structure based on type of content" + if isinstance(src,str): return text_msg(src, cache=cache) + if isinstance(src,bytes): return img_msg(src, cache=cache) + if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} + return _str_if_needed(src) +``` + +
+

There’s no need to manually choose the type of message, since we figure
that out from the type of the source data.

``` python
_mk_content('Hi')
```

    {'type': 'text', 'text': 'Hi'}

------------------------------------------------------------------------

source

### mk_msg

> mk_msg (content, role='user', cache=False, **kw)

*Helper to create a `dict` appropriate for a Claude message. `kw` are
added as key/value pairs to the message*

|         | Type | Default | Details |
|---------|------|---------|---------|
| content |      |         | A string, list, or dict containing the contents of the message |
| role    | str  | user    | Must be ‘user’ or ‘assistant’ |
| cache   | bool | False   |  |
| kw      |      |         |  |

+
+Exported source + +``` python +def mk_msg(content, # A string, list, or dict containing the contents of the message + role='user', # Must be 'user' or 'assistant' + cache=False, + **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content.get('content', content) + if not isinstance(content, list): content=[content] + content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' + return dict2obj(dict(role=role, content=content, **kw), list_func=list) +``` + +
+ +``` python +mk_msg(['hi', 'there'], cache=True) +``` + +``` json +{ 'content': [ {'text': 'hi', 'type': 'text'}, + { 'cache_control': {'type': 'ephemeral'}, + 'text': 'there', + 'type': 'text'}], + 'role': 'user'} +``` + +``` python +m = mk_msg(['hi', 'there'], cache=True) +``` + +When we construct a message, we now use +[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to +create the appropriate parts. Since a dialog contains multiple messages, +and a message can contain multiple content parts, to pass a single +message with multiple parts we have to use a list containing a single +list: + +``` python +c([[img, q]]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +
+ +> **Note** +> +> As promised (much!) earlier, we’ve now finally completed our +> definition of +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this +> version is the one we export to the Python module. + +
+

Some models, such as Haiku 3.5, unfortunately do not support image
inputs:

``` python
model = models[-1]; model
```

    'claude-3-5-haiku-20241022'

``` python
c = Client(model)
c([[img, q]])
```

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    Cell In[115], line 2
         1 c = Client(model)
    ----> 2 c([[img, q]])

    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
         18 msgs = self._precall(msgs, prefill, stop, kwargs)
    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022

## Third party providers

### Amazon Bedrock

These are Amazon’s current Claude models:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']
+ +> **Note** +> +> `anthropic` at version 0.34.2 seems not to install `boto3` as a +> dependency. You may need to do a `pip install boto3` or the creation +> of the [`Client`](https://claudette.answer.ai/core.html#client) below +> fails. + +
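If you hit that, installing it directly fixes things:

``` sh
pip install boto3
```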
+

Provided `boto3` is installed, we otherwise don’t need any extra code to
support Amazon Bedrock – we just have to set up the appropriate client:

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

``` python
chat = Chat(cli=client)
```

``` python
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
+ +- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
+ +### Google Vertex + +``` python +models_goog +``` + + ['claude-3-opus@20240229', + 'claude-3-5-sonnet-v2@20241022', + 'claude-3-sonnet@20240229', + 'claude-3-haiku@20240307'] + +``` python +from anthropic import AnthropicVertex +import google.auth +``` + +``` python +project_id = google.auth.default()[1] +region = "us-east5" +gv = AnthropicVertex(project_id=project_id, region=region) +client = Client(models_goog[-1], gv) +``` + +``` python +chat = Chat(cli=client) +``` + +``` python +chat("I'm Jeremy") +```
# Tool loop



``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We’ll start by defining the same mock customer/order data as in
Anthropic’s example, plus create an entity relationship between
customers and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We’ll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We’re now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We’ll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?
+

``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let’s write a function to handle this
for us:

------------------------------------------------------------------------

source

### Chat.toolloop

> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
>                cont_func:Optional[callable]=noop, temp=None,
>                maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None)

*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*

|             | Type     | Default | Details |
|-------------|----------|---------|---------|
| pr          |          |         | Prompt to pass to Claude |
| max_steps   | int      | 10      | Maximum number of tool requests to loop through |
| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`) |
| cont_func   | Optional | noop    | Function that stops loop if returns False |
| temp        | NoneType | None    | Temperature |
| maxtok      | int      | 4096    | Maximum tokens |
| stream      | bool     | False   | Stream response? |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool |

+
+Exported source + +``` python +@patch +@delegates(Chat.__call__) +def toolloop(self:Chat, + pr, # Prompt to pass to Claude + max_steps=10, # Maximum number of tool requests to loop through + trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) + cont_func:Optional[callable]=noop, # Function that stops loop if returns False + **kwargs): + "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" + n_msgs = len(self.h) + r = self(pr, **kwargs) + for i in range(max_steps): + if r.stop_reason!='tool_use': break + if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) + r = self(**kwargs) + if not (cont_func or noop)(self.h[-2]): break + if trace_func: trace_func(self.h[n_msgs:]) + return r +``` + +
+

We’ll start by re-running our previous request – we shouldn’t have to
manually pass back the `tool_use` message any more:

``` python
chat = Chat(model, tools=tools)
r = chat.toolloop('Can you tell me the email address for customer C1?')
r
```

    - Retrieving customer C1

The email address for customer C1 is john@example.com.
+ +- id: `msg_01Fm2CY76dNeWief4kUW6r71` +- content: + `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
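As the source above shows, `toolloop` passes `trace_func` the slice of
history added during each step, so any callable that accepts a list of
messages will work. For instance, a hypothetical minimal tracer (not
part of the library) might look like:

``` python
def trace_roles(msgs):
    "Hypothetical tracer: print the role and truncated content of each new message."
    for m in msgs: print(m['role'], str(m.get('content'))[:60])
```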
+ +Let’s see if it can handle the multi-stage process now – we’ll add +`trace_func=print` to see each stage of the process: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) +r +``` + + - Retrieving customer C1 + [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] + - Cancelling order O1 + [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] + - Cancelling order O2 + [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] + [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] + +I’ve successfully cancelled both orders O1 and O2 for customer C1. +Please let me know if you need anything else! + +
+ +- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` +- content: + `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +OK Claude thinks the orders were cancelled – let’s check one: + +``` python +chat.toolloop('What is the status of order O2?') +``` + + - Retrieving order O2 + +The status of order O2 is now ‘Cancelled’ since I successfully cancelled +that order earlier. + +
+ +- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` +- content: + `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+

## Code interpreter

Here is an example of using `toolloop` to implement a simple code
interpreter with additional tools.

``` python
from toolslm.shell import get_shell
from fastcore.meta import delegates
import traceback
```

``` python
@delegates()
class CodeChat(Chat):
    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
        super().__init__(model=model, **kwargs)
        self.ask = ask
        self.tools.append(self.run_cell)
        self.shell = get_shell()
        self.shell.run_cell('import '+self.imps)
```

We have one additional parameter when creating a `CodeChat` beyond what
we pass to [`Chat`](https://claudette.answer.ai/core.html#chat): `ask` –
if that’s `True`, we’ll prompt the user before running code.

``` python
@patch
def run_cell(
    self:CodeChat,
    code:str,   # Code to execute in persistent IPython session
): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
    if self.ask and input(confirm): return '#DECLINED#'
    try: res = self.shell.run_cell(code)
    except Exception as e: return traceback.format_exc()
    return res.stdout if res.result is None else res.result
```

We just pass along requests to run code to the shell’s implementation.
Claude often prints results instead of just using the last expression,
so we capture stdout in those cases.

``` python
sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
Don't do complex calculations yourself -- use code for them.
The following modules are pre-imported for `run_cell` automatically:

{CodeChat.imps}

Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.

If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
```

``` python
def get_user(ignored:str='' # Unused parameter
    ): # Username of current user
    "Get the username of the user running this session"
    print("Looking up username")
    return 'Jeremy'
```

In order to test out multi-stage tool use, we create a mock function
that Claude can call to get the current username.

``` python
model = models[1]
```

``` python
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
```

Claude gets confused sometimes about how tools work, so we use examples
to remind it:

``` python
chat.h = [
    'Calculate the square root of `10332`', 'math.sqrt(10332)',
    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
+
]
```

Providing a callable to toolloop’s `trace_func` lets us print out
information during the loop:

``` python
def _show_cts(h):
    for r in h:
        for o in r.get('content'):
            if hasattr(o,'text'): print(o.text)
            nm = getattr(o, 'name', None)
            if nm=='run_cell': print(o.input['code'])
            elif nm: print(f'{o.name}({o.input})')
```

…and toolloop’s `cont_func` callable lets us provide a function which,
if it returns `False`, stops the loop:

``` python
def _cont_decline(c):
    return nested_idx(c, 'content', 'content') != '#DECLINED#'
```

Now we can try our code interpreter. We start by asking for a function
to be created, which we’ll use in the next prompt to test that the
interpreter is persistent.

``` python
pr = '''Create a 1-line function `checksum` for a string `s`,
that multiplies together the ascii values of each character in `s` using `reduce`.'''
chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
```

    Press Enter to execute, or enter "n" to skip?
    ```
    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
    ```

    Create a 1-line function `checksum` for a string `s`,
    that multiplies together the ascii values of each character in `s` using `reduce`.
    Let me help you create that function using `reduce` and `functools`.
    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
    The function has been created. Let me explain how it works:
    1. It takes a string `s` as input
    2. Uses `functools.reduce` to multiply together all ASCII values
    3. `ord(y)` gets the ASCII value of each character
    4. The initial value is 1 (the third parameter to reduce)
    5. The lambda function multiplies the accumulator (x) with each new ASCII value

    You can test it with any string. For example, you could try `checksum("hello")` to see it in action.

The function has been created. Let me explain how it works: 1. It takes
a string `s` as input 2. Uses `functools.reduce` to multiply together
all ASCII values 3. `ord(y)` gets the ASCII value of each character 4.
The initial value is 1 (the third parameter to reduce) 5. The lambda
function multiplies the accumulator (x) with each new ASCII value

You can test it with any string. For example, you could try
`checksum("hello")` to see it in action.
+ +- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` +- content: + `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +By asking for a calculation to be done on the username, we force it to +use multiple steps: + +``` python +pr = 'Use it to get the checksum of the username of this session.' +chat.toolloop(pr, trace_func=_show_cts) +``` + + Looking up username + Use it to get the checksum of the username of this session. + I'll first get the username using `get_user` and then apply our `checksum` function to it. + get_user({'ignored': ''}) + Press Enter to execute, or enter "n" to skip? + ``` + print(checksum("Jeremy")) + ``` + + Now I'll calculate the checksum of "Jeremy": + print(checksum("Jeremy")) + The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". + +The checksum of the username “Jeremy” is 1134987783204. This was +calculated by multiplying together the ASCII values of each character in +“Jeremy”. + +
+ +- id: `msg_01UXvtcLzzykZpnQUT35v4uD` +- content: + `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
# The async version + + + +## Setup + +## Async SDK + +``` python +model = models[1] +cli = AsyncAnthropic() +``` + +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = await cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hello Jeremy! It’s nice to meet you. How can I assist you today? Is +there anything specific you’d like to talk about or any questions you +have? + +
+ +- id: `msg_019gsEQs5dqb3kgwNJbTH27M` +- content: + `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 36}` + +
+ +------------------------------------------------------------------------ + +source + +### AsyncClient + +> AsyncClient (model, cli=None, log=False) + +*Async Anthropic messages client.* + +
+Exported source + +``` python +class AsyncClient(Client): + def __init__(self, model, cli=None, log=False): + "Async Anthropic messages client." + super().__init__(model,cli,log) + if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) +``` + +
+

``` python
c = AsyncClient(model)
```

``` python
c._r(r)
c.use
```

    In: 10; Out: 36; Total: 46

------------------------------------------------------------------------

source

### AsyncClient.\_\_call\_\_

> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='',
>                       stream:bool=False, stop=None, cli=None, log=False)

*Make an async call to Claude.*

|         | Type     | Default | Details |
|---------|----------|---------|---------|
| msgs    | list     |         | List of messages in the dialog |
| sp      | str      |         | The system prompt |
| temp    | int      | 0       | Temperature |
| maxtok  | int      | 4096    | Maximum tokens |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response |
| stream  | bool     | False   | Stream response? |
| stop    | NoneType | None    | Stop sequence |
| cli     | NoneType | None    |  |
| log     | bool     | False   |  |

+
+Exported source + +``` python +@patch +async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): + async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield prefill + async for o in s.text_stream: yield o + self._log(await s.get_final_message(), prefill, msgs, kwargs) +``` + +
+
+Exported source + +``` python +@patch +@delegates(Client) +async def __call__(self:AsyncClient, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + **kwargs): + "Make an async call to Claude." + msgs = self._precall(msgs, prefill, stop, kwargs) + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = await self.c.messages.create( + model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
+ +``` python +c = AsyncClient(model, log=True) +c.use +``` + + In: 0; Out: 0; Total: 0 + +``` python +c.model = models[1] +await c('Hi') +``` + +Hello! How can I assist you today? Feel free to ask any questions or let +me know if you need help with anything. + +
+ +- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` +- content: + `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 29; Total: 37 + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await c(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon meaning of life. Many philosophers +and religions have proposed different answers, but it remains an open +question that individuals must grapple with for themselves. + +
+ +- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+

``` python
async for o in (await c('Hi', stream=True)): print(o, end='')
```

    Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.

``` python
c.use
```

    In: 40; Out: 109; Total: 149

``` python
async for o in (await c(q, prefill=pref, stream=True)): print(o, end='')
```

    According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.

``` python
c.use
```

    In: 64; Out: 160; Total: 224

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```

``` python
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
```

``` python
tools = [get_schema(sums)]
msgs = mk_msgs(pr)
r = await c(msgs, sp=sp, tools=tools, tool_choice=choice)
tr = mk_toolres(r, ns=globals())
msgs += tr
contents(await c(msgs, sp=sp, tools=tools))
```

    Finding the sum of 604542 and 6458932

    'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?'

## AsyncChat

------------------------------------------------------------------------

source

### AsyncChat

> AsyncChat (model:Optional[str]=None,
>            cli:Optional[claudette.core.Client]=None, sp='',
>            tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None)

*Anthropic async chat client.*

|         | Type     | Default | Details |
|---------|----------|---------|---------|
| model   | Optional | None    | Model to use (leave empty if passing `cli`) |
| cli     | Optional | None    | Client to use (leave empty if passing `model`) |
| sp      | str      |         |  |
| tools   | Optional | None    |  |
| temp    | int      | 0       |  |
| cont_pr | Optional | None    |  |

+
+Exported source + +``` python +@delegates() +class AsyncChat(Chat): + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + **kwargs): + "Anthropic async chat client." + super().__init__(model, cli, **kwargs) + if not cli: self.c = AsyncClient(model) +``` + +
+

``` python
sp = "Never mention what tools you use."
chat = AsyncChat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Total: 0, [])

------------------------------------------------------------------------

source

### AsyncChat.\_\_call\_\_

> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False,
>                     prefill='', **kw)

*Call self as a function.*

|         | Type     | Default | Details |
|---------|----------|---------|---------|
| pr      | NoneType | None    | Prompt / message |
| temp    | int      | 0       | Temperature |
| maxtok  | int      | 4096    | Maximum tokens |
| stream  | bool     | False   | Stream response? |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response |
| kw      |          |         |  |

+
+Exported source + +``` python +@patch +async def _stream(self:AsyncChat, res): + async for o in res: yield o + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +async def _append_pr(self:AsyncChat, pr=None): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history + if pr and prev_role == 'user': await self() + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +async def __call__(self:AsyncChat, + pr=None, # Prompt / message + temp=0, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + **kw): + await self._append_pr(pr) + if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] + res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) + return res +``` + +
+ +``` python +await chat("I'm Jeremy") +await chat("What's my name?") +``` + +Your name is Jeremy, as you mentioned in your previous message. + +
+ +- id: `msg_01NMugMXWpDP9iuTXeLkHarn` +- content: + `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await chat(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon answer. Common philosophical +perspectives include: + +1. Finding personal fulfillment +2. Serving others +3. Pursuing happiness +4. Creating meaning through our choices +5. Experiencing and appreciating existence + +Ultimately, many believe each individual must determine their own life’s +meaning. + +
+ +- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat = AsyncChat(model, sp=sp) +async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') +``` + + Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? + +``` python +pr = f"What is {a}+{b}?" +chat = AsyncChat(model, sp=sp, tools=[sums]) +r = await chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +To answer this question, I can use the “sums” function to add these two +numbers together. Let me do that for you. + +
+ +- id: `msg_015z1rffSWFxvj7rSpzc43ZE` +- content: + `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +await chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +fn = Path('samples/puppy.jpg') +img = fn.read_bytes() +``` + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +await c([msg]) +``` + +The flowers in this image are purple. They appear to be small, +daisy-like flowers, possibly asters or some type of purple daisy, +blooming in the background behind the adorable puppy in the foreground. + +
+ +- id: `msg_017qgZggLjUY915mWbWCkb9X` +- content: + `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
# claudette



> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> format](https://claudette.answer.ai/) of this tutorial.

*Claudette* is a wrapper for Anthropic’s [Python
SDK](https://github.com/anthropics/anthropic-sdk-python).

The SDK works well, but it is quite low level – it leaves the developer
to do a lot of stuff manually. That’s a lot of extra work and
boilerplate! Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
  creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
  words of its response
- Convenient image support
- Simple and convenient support for Claude’s new Tool Use API.

You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.

Note that this library is the first ever “literate nbdev” project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you’ve never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
that we take seriously Knuth’s call to action, that we have a “*moral
commitment*” to never write an “*illiterate program*” – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)

> “*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*” Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)

## Install

``` sh
pip install claudette
```

## Getting started

Anthropic’s Python SDK will automatically be installed with Claudette,
if you don’t already have it.

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

To print every HTTP request and response in full, uncomment the above
line.

``` python
from claudette import *
```

Claudette only exports the symbols that are needed to use the library,
so you can use `import *` to import them. Alternatively, just use:

``` python
import claudette
```

…and then add the prefix `claudette.` to any usages of the module.

Claudette provides `models`, which is a list of models currently
available from the SDK.

``` python
models
```

    ['claude-3-opus-20240229',
     'claude-3-5-sonnet-20241022',
     'claude-3-haiku-20240307']

For these examples, we’ll use Sonnet 3.5, since it’s awesome!
+ +``` python +model = models[1] +``` + +## Chat + +The main interface to Claudette is the +[`Chat`](https://claudette.answer.ai/core.html#chat) class, which +provides a stateful interface to Claude: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +chat("I'm Jeremy") +``` + +Hello Jeremy, nice to meet you. + +
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
+

The response records that a cache has been created using these input
tokens:

``` python
print(r.usage)
```

    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)

We can now ask a followup question in this chat:

``` python
r = chat('How does it make tool use more ergonomic?')
r
```

According to the README, Claudette makes tool use more ergonomic in
several ways:

1. It uses docments to make Python function definitions more
   user-friendly - each parameter and return value should have a type
   and description

2. It handles the tool calling process automatically - when Claude
   returns a tool_use message, Claudette manages calling the tool with
   the provided parameters behind the scenes

3. It provides a `toolloop` method that can handle multiple tool calls
   in a single step to solve more complex problems

4. It allows you to pass a list of tools to the Chat constructor and
   optionally force Claude to always use a specific tool via
   `tool_choice`

Here’s a simple example from the README:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b

chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
```

This makes it much simpler compared to manually handling all the tool
use logic that would be required with the base SDK.
+ +- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` +- content: + `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` + +
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
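
For context, here is an illustrative sketch of satisfying that
`tool_use` message with the bare SDK. This is not Claudette’s actual
implementation (its `mk_toolres` helper does the equivalent), and the
hand-written `sums_schema` below is a stand-in for the schema Claudette
derives from the docments annotations:

``` python
from anthropic import Anthropic

# Hand-written stand-in for the tool schema Claudette generates from `sums`:
sums_schema = {"name": "sums", "description": "Adds a + b.",
               "input_schema": {
                   "type": "object",
                   "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
                   "required": ["a"]}}

tub = next(b for b in r.content if b.type == 'tool_use')  # the ToolUseBlock above
res = sums(**tub.input)                                   # run the Python function
follow_up = [
    {"role": "user", "content": pr},
    {"role": "assistant", "content": r.content},          # echo Claude's turn back
    {"role": "user", "content": [{"type": "tool_result",
                                  "tool_use_id": tub.id, "content": str(res)}]},
]
r2 = Anthropic().messages.create(model=model, max_tokens=1024,
                                 tools=[sums_schema], messages=follow_up)
```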
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.

``` python
chat.use
```

    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int, # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing a trace
function via `trace_func` we can see each response from Claude in the
process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
chat.toolloop(pr, trace_func=print)
```

    Finding the sum of 604542 and 6458932
    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
    Finding the product of 7063474 and 2
    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]

The final result is 14,126,948.

<details>
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
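
Under the hood, the loop is conceptually simple. A minimal sketch of the
idea (illustrative only; the real
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop)
also supports tracing and early stopping):

``` python
def simple_toolloop(chat, pr, max_steps=10):
    "Illustrative only: keep calling back while Claude keeps requesting tools."
    r = chat(pr)
    for _ in range(max_steps):
        if r.stop_reason != 'tool_use': break  # Claude produced a final answer
        r = chat()  # as shown earlier, a bare call sends back the tool results
    return r
```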
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
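
For reference, the bytes you pass in end up as one of Claude’s base64
image content blocks on the wire. An illustrative equivalent of what
Claudette’s `img_msg` helper produces (the media-type detection here is
simplified to a filename suffix, which Claudette does not require):

``` python
import base64, mimetypes

def as_image_block(data: bytes, fname: str = 'samples/puppy.jpg') -> dict:
    "Illustrative sketch: wrap raw image bytes as a Claude image content block."
    media_type = mimetypes.guess_type(fname)[0] or 'image/jpeg'
    return {"type": "image",
            "source": {"type": "base64", "media_type": media_type,
                       "data": base64.b64encode(data).decode()}}
```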

Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.
(For large images, using prompt caching might be a good idea.)

``` python
chat.use
```

    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964

## Other model providers

You can also use third-party providers of Anthropic models, as shown
here.

### Amazon Bedrock

These are the models available through Bedrock:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

To use them, call `AnthropicBedrock` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicBedrock
```

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
object passing this client to the `cli` parameter – and from then on,
everything is identical to the previous examples.

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?

<details>
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
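
If your AWS credentials are already configured in the usual places
(environment variables, `~/.aws`, or an instance role), you should be
able to rely on the SDK’s standard AWS credential resolution instead of
passing keys explicitly. A sketch, assuming your Bedrock model access is
enabled in the chosen region:

``` python
ab = AnthropicBedrock(aws_region='us-east-1')  # keys resolved from the environment
client = Client(models_aws[-1], ab)
chat = Chat(cli=client)
```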

### Google Vertex

These are the models available through Vertex:

``` python
models_goog
```

    ['claude-3-opus@20240229',
     'claude-3-5-sonnet-v2@20241022',
     'claude-3-sonnet@20240229',
     'claude-3-haiku@20240307']

To use them, call `AnthropicVertex` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicVertex
import google.auth
```

``` python
project_id = google.auth.default()[1]
gv = AnthropicVertex(project_id=project_id, region="us-east5")
client = Client(models_goog[-1], gv)
```

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

## Extensions

- [Pydantic Structured
  Output](https://github.com/tom-pollak/claudette-pydantic)
# claudette-pydantic


> Adds Pydantic support for
> [claudette](https://github.com/AnswerDotAI/claudette) through function
> calling

claudette_pydantic adds a `struct` method to claudette’s `Client` and
`Chat` classes.

`struct` provides a wrapper around `__call__`. Provide a Pydantic
`BaseModel` as the schema, and the model will return an initialized
`BaseModel` object.

I’ve found Haiku to be quite reliable at even complicated schemas.

## Install

``` sh
pip install claudette-pydantic
```

## Getting Started

``` python
from claudette.core import *
import claudette_pydantic # patches claudette with `struct`
from pydantic import BaseModel, Field
from typing import Literal, Union, List
```

``` python
model = models[-1]
model
```

    'claude-3-haiku-20240307'

``` python
class Pet(BaseModel):
    "Create a new pet"
    name: str
    age: int
    owner: str = Field(default="NA", description="Owner name. Do not return if not given.")
    type: Literal['dog', 'cat', 'mouse']

c = Client(model)
print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet)))
print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet)))
```

    Pet(name='Mac', age=14, owner='NA', type='dog')
    Pet(name='juma', age=16, owner='Tom', type='cat')

## Going Deeper

I pulled this example from the [pydantic
docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions);
it uses a list of discriminated unions, distinguished by `pet_type`. For
each object the model is required to return different fields.

You should be able to use the full power of Pydantic here. I’ve found
that instructor for Claude fails on this example.

Each sub BaseModel may also have docstrings describing usage. I’ve found
prompting this way to be quite reliable.

``` python
class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Reptile(BaseModel):
    pet_type: Literal['lizard', 'dragon']
    scales: bool

# Dummy to show doc strings
class Create(BaseModel):
    "Pass as final member of the `pet` list to indicate success"
    pet_type: Literal['create']

class OwnersPets(BaseModel):
    """
    Information to gather for an Owner's pets
    """
    pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type')

chat = Chat(model)
pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows"
print(repr(chat.struct(OwnersPets, pr=pr)))
print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?")))
```

    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])

While `struct` uses tool use to enforce the schema, the `repr` of the
response is saved in history, to keep the user/assistant/user flow.

``` python
chat.h
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. 
I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]},
     {'role': 'user',
      'content': [{'type': 'text',
        'text': 'actually my dragon does have scales, can you change that for me?'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}]

Alternatively you can use `struct` as a tool use flow with
`treat_as_output=False` (but this requires the next input to be an
assistant message):

``` python
chat.struct(OwnersPets, pr=pr, treat_as_output=False)
chat.h[-3:]
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]},
     {'role': 'user',
      'content': [{'type': 'tool_result',
        'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR',
        'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}]

(So I couldn’t prompt it again here; the next input would have to be an
assistant message.)

### User Creation & few-shot examples

You can even add few-shot examples *for each input*:

``` python
class User(BaseModel):
    "User creation tool"
    age: int = Field(description='Age of the user')
    name: str = Field(title='Username')
    password: str = Field(
        json_schema_extra={
            'title': 'Password',
            'description': 'Password of the user',
            'examples': ['Monkey!123'],
        }
    )
print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples")))
```

    User(age=22, name='tom', password='Monkey!123')

Claude uses the few-shot example as asked for in the system prompt.

### You can find more examples in [nbs/examples](nbs/examples)

## Signatures

``` python
Client.struct(
    self: claudette.core.Client,
    msgs: list,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    **kwargs, # Client.__call__ kwargs...
) -> BaseModel
```

``` python
Chat.struct(
    self: claudette.core.Chat,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    treat_as_output=True, # In chat history, tool is reflected
    **kwargs, # Chat.__call__ kwargs...
) -> BaseModel
```
diff --git a/llms.txt b/llms.txt new file mode 100644 index 0000000..c2ec98f --- /dev/null +++ b/llms.txt @@ -0,0 +1,23 @@ +# Claudette + +> Claudette is a Python library that wraps Anthropic's Claude API to provide a higher-level interface for creating AI applications. It automates common patterns while maintaining full control, offering features like stateful chat, prefill support, image handling, and streamlined tool use. + +Things to remember when using Claudette: + +- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key +- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) +- The library provides both synchronous and asynchronous interfaces +- Use `Chat()` for maintaining conversation state and handling tool interactions +- When using tools, the library automatically handles the request/response loop +- Image support is built in but only available on compatible models (not Haiku) + +## Docs + +- [Core functionality](https://claudette.answer.ai/core.html.md): Detailed walkthrough of main features including Client, Chat, and tool usage +- [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions +- [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await +- [README](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/README.md): Quick start guide and overview + +## Optional Extensions + +- [Pydantic Structured Output](https://raw.githubusercontent.com/tom-pollak/claudette-pydantic/refs/heads/main/README.md): Extension for structured data output using Pydantic models From 04df708d32fb5863f0e7ca2880c000d7cf00a740 Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 22:15:09 -0800 Subject: [PATCH 2/9] fix optional --- llms-ctx-full.txt | 4 +- llms-ctx.txt | 189 +--------------------------------------------- llms.txt | 2 +- 3 files changed, 4 insertions(+), 191 deletions(-) diff --git a/llms-ctx-full.txt b/llms-ctx-full.txt index b6e0a91..ee66a3a 100644 --- a/llms-ctx-full.txt +++ b/llms-ctx-full.txt @@ -4604,7 +4604,7 @@ chat("I'm Jeremy") ## Extensions - [Pydantic Structured - Ouput](https://github.com/tom-pollak/claudette-pydantic)# claudette-pydantic + Ouput](https://github.com/tom-pollak/claudette-pydantic)# claudette-pydantic @@ -4791,4 +4791,4 @@ Chat.struct( treat_as_output=True, # In chat history, tool is reflected **, # Chat.__call__ kwargs... ) -> BaseModel -``` +``` diff --git a/llms-ctx.txt b/llms-ctx.txt index b6e0a91..7b7e5f2 100644 --- a/llms-ctx.txt +++ b/llms-ctx.txt @@ -4604,191 +4604,4 @@ chat("I'm Jeremy") ## Extensions - [Pydantic Structured - Ouput](https://github.com/tom-pollak/claudette-pydantic)# claudette-pydantic - - - -> Adds Pydantic support for -> [claudette](https://github.com/AnswerDotAI/claudette) through function -> calling - -claudette_pydantic provides the `struct` method in the `Client` and -`Chat` of claudette - -`struct` provides a wrapper around `__call__`. Provide a Pydantic -`BaseModel` as schema, and the model will return an initialized -`BaseModel` object. - -I’ve found Haiku to be quite reliable at even complicated schemas. 
- -## Install - -``` sh -pip install claudette-pydantic -``` - -## Getting Started - -``` python -from claudette.core import * -import claudette_pydantic # patches claudette with `struct` -from pydantic import BaseModel, Field -from typing import Literal, Union, List -``` - -``` python -model = models[-1] -model -``` - - 'claude-3-haiku-20240307' - -``` python -class Pet(BaseModel): - "Create a new pet" - name: str - age: int - owner: str = Field(default="NA", description="Owner name. Do not return if not given.") - type: Literal['dog', 'cat', 'mouse'] - -c = Client(model) -print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet))) -print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet))) -``` - - Pet(name='Mac', age=14, owner='NA', type='dog') - Pet(name='juma', age=16, owner='Tom', type='cat') - -## Going Deeper - -I pulled this example from [pydantic -docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions) -has a list of discriminated unions, shown by `pet_type`. For each object -the model is required to return different things. - -You should be able to use the full power of Pydantic here. I’ve found -that instructor for Claude fails on this example. - -Each sub BaseModel may also have docstrings describing usage. I’ve found -prompting this way to be quite reliable. - -``` python -class Cat(BaseModel): - pet_type: Literal['cat'] - meows: int - - -class Dog(BaseModel): - pet_type: Literal['dog'] - barks: float - - -class Reptile(BaseModel): - pet_type: Literal['lizard', 'dragon'] - scales: bool - -# Dummy to show doc strings -class Create(BaseModel): - "Pass as final member of the `pet` list to indicate success" - pet_type: Literal['create'] - -class OwnersPets(BaseModel): - """ - Information for to gather for an Owner's pets - """ - pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type') - -chat = Chat(model) -pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows" -print(repr(chat.struct(OwnersPets, pr=pr))) -print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?"))) -``` - - OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')]) - OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')]) - -While the struct uses tool use to enforce the schema, we save in history -as the `repr` response to keep the user,assistant,user flow. - -``` python -chat.h -``` - - [{'role': 'user', - 'content': [{'type': 'text', - 'text': 'hello I am a new owner and I would like to add some pets for me. 
I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]}, - {'role': 'assistant', - 'content': [{'type': 'text', - 'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}, - {'role': 'user', - 'content': [{'type': 'text', - 'text': 'actually my dragon does have scales, can you change that for me?'}]}, - {'role': 'assistant', - 'content': [{'type': 'text', - 'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}] - -Alternatively you can use struct as tool use flow with -`treat_as_output=False` (but requires the next input to be assistant) - -``` python -chat.struct(OwnersPets, pr=pr, treat_as_output=False) -chat.h[-3:] -``` - - [{'role': 'user', - 'content': [{'type': 'text', - 'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]}, - {'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR', - 'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}] - -(So I couldn’t prompt it again here, next input would have to be an -assistant) - -### User Creation & few-shot examples - -You can even add few shot examples *for each input* - -``` python -class User(BaseModel): - "User creation tool" - age: int = Field(description='Age of the user') - name: str = Field(title='Username') - password: str = Field( - json_schema_extra={ - 'title': 'Password', - 'description': 'Password of the user', - 'examples': ['Monkey!123'], - } - ) -print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples"))) -``` - - User(age=22, name='tom', password='Monkey!123') - -Uses the few-shot example as asked for in the system prompt. - -### You can find more examples [nbs/examples](nbs/examples) - -## Signature: - -``` python -Client.struct( - self: claudette.core.Client, - msgs: list, - resp_model: type[BaseModel], # non-initialized Pydantic BaseModel - **, # Client.__call__ kwargs... -) -> BaseModel -``` - -``` python -Chat.struct( - self: claudette.core.Chat, - resp_model: type[BaseModel], # non-initialized Pydantic BaseModel - treat_as_output=True, # In chat history, tool is reflected - **, # Chat.__call__ kwargs... 
-) -> BaseModel -``` + Ouput](https://github.com/tom-pollak/claudette-pydantic) diff --git a/llms.txt b/llms.txt index c2ec98f..936ec93 100644 --- a/llms.txt +++ b/llms.txt @@ -18,6 +18,6 @@ Things to remember when using Claudette: - [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await - [README](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/README.md): Quick start guide and overview -## Optional Extensions +## Optional - [Pydantic Structured Output](https://raw.githubusercontent.com/tom-pollak/claudette-pydantic/refs/heads/main/README.md): Extension for structured data output using Pydantic models From b30f08e3549554f53b06fbd9bf03a0c961de3023 Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 22:46:05 -0800 Subject: [PATCH 3/9] create tools and llm folder --- llm/apilist.txt | 74 ++++++++++++++++++++++++++++++++++++++++ llms.txt => llm/llms.txt | 10 ++++-- 2 files changed, 81 insertions(+), 3 deletions(-) create mode 100644 llm/apilist.txt rename llms.txt => llm/llms.txt (93%) diff --git a/llm/apilist.txt b/llm/apilist.txt new file mode 100644 index 0000000..72ce0bb --- /dev/null +++ b/llm/apilist.txt @@ -0,0 +1,74 @@ +# claudette Module Documentation + +## claudette.asink + +- `class AsyncClient` + - `def __init__(self, model, cli, log)` + Async Anthropic messages client. + + +- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` + Make an async call to Claude. + +- `@delegates() class AsyncChat` + - `def __init__(self, model, cli, **kwargs)` + Anthropic async chat client. + + +## claudette.core + +- `def find_block(r, blk_type)` + Find the first block of type `blk_type` in `r.content`. + +- `def contents(r)` + Helper to get the contents from Claude response `r`. + +- `def usage(inp, out, cache_create, cache_read)` + Slightly more concise version of `Usage`. + +- `@patch def __add__(self, b)` + Add together each of `input_tokens` and `output_tokens` + +- `def mk_msgs(msgs, **kw)` + Helper to set 'assistant' role on alternate messages. + +- `class Client` + - `def __init__(self, model, cli, log)` + Basic Anthropic messages client. + + +- `def mk_tool_choice(choose)` + Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise + +- `def mk_funcres(tuid, res)` + Given tool use id and the tool result, create a tool_result response. + +- `def mk_toolres(r, ns, obj)` + Create a `tool_result` message from response `r`. + +- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` + Make a call to Claude. + +- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` + Return the value of all tool calls (generally used for structured outputs) + +- `class Chat` + - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` + Anthropic chat client. + + - `@property def use(self)` + +- `def img_msg(data, cache)` + Convert image `data` into an encoded `dict` + +- `def text_msg(s, cache)` + Convert `s` to a text message + +- `def mk_msg(content, role, cache, **kw)` + Helper to create a `dict` appropriate for a Claude message. 
`kw` are added as key/value pairs to the message + +## claudette.toolloop + +- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` + Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages + diff --git a/llms.txt b/llm/llms.txt similarity index 93% rename from llms.txt rename to llm/llms.txt index 936ec93..6c0b397 100644 --- a/llms.txt +++ b/llm/llms.txt @@ -13,11 +13,15 @@ Things to remember when using Claudette: ## Docs -- [Core functionality](https://claudette.answer.ai/core.html.md): Detailed walkthrough of main features including Client, Chat, and tool usage -- [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions -- [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await - [README](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/README.md): Quick start guide and overview +- [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions + +## API + +- [API List](https://docs.fastht.ml/apilist.txt): A succint list of all functions and methods in claudette. ## Optional +- [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await +- [Core functionality](https://claudette.answer.ai/core.html.md): Detailed walkthrough of main features including Client, Chat, and tool usage - [Pydantic Structured Output](https://raw.githubusercontent.com/tom-pollak/claudette-pydantic/refs/heads/main/README.md): Extension for structured data output using Pydantic models From a306d668af63411a9fbc8e62c69d9d09e0548e1c Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 22:56:48 -0800 Subject: [PATCH 4/9] add tools and llm folder --- llm/llms-ctx-full.txt | 4867 +++++++++++++++++++++++++++++++++++++ llm/llms-ctx.txt | 1462 +++++++++++ llm/llms.txt | 2 +- tools/refresh_llm_docs.sh | 12 + 4 files changed, 6342 insertions(+), 1 deletion(-) create mode 100644 llm/llms-ctx-full.txt create mode 100644 llm/llms-ctx.txt create mode 100755 tools/refresh_llm_docs.sh diff --git a/llm/llms-ctx-full.txt b/llm/llms-ctx-full.txt new file mode 100644 index 0000000..ff98bee --- /dev/null +++ b/llm/llms-ctx-full.txt @@ -0,0 +1,4867 @@ +Things to remember when using Claudette: + +- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key +- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) +- The library provides both synchronous and asynchronous interfaces +- Use `Chat()` for maintaining conversation state and handling tool interactions +- When using tools, the library automatically handles the request/response loop +- Image support is built in but only available on compatible models (not Haiku)# claudette + + + +> **NB**: If you are reading this in GitHub’s readme, we recommend you +> instead read the much more nicely formatted [documentation +> format](https://claudette.answer.ai/) of this tutorial. + +*Claudette* is a wrapper for Anthropic’s [Python +SDK](https://github.com/anthropics/anthropic-sdk-python). + +The SDK works well, but it is quite low level – it leaves the developer +to do a lot of stuff manually. That’s a lot of extra work and +boilerplate! 
Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
  creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
  words of its response
- Convenient image support
- Simple and convenient support for Claude’s new Tool Use API.

You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.

Note that this library is the first ever “literate nbdev” project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you’ve never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
that we take seriously Knuth’s call to action, that we have a “*moral
commitment*” to never write an “*illiterate program*” – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)

> “*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*” Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)

## Install

``` sh
pip install claudette
```

## Getting started

Anthropic’s Python SDK will automatically be installed with Claudette,
if you don’t already have it.

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

To print every HTTP request and response in full, uncomment the above
line.

``` python
from claudette import *
```

Claudette only exports the symbols that are needed to use the library,
so you can use `import *` to import them. Alternatively, just use:

``` python
import claudette
```

…and then add the prefix `claudette.` to any usages of the module.

Claudette provides `models`, which is a list of models currently
available from the SDK.

``` python
models
```

    ['claude-3-opus-20240229',
     'claude-3-5-sonnet-20241022',
     'claude-3-haiku-20240307']

For these examples, we’ll use Sonnet 3.5, since it’s awesome!

``` python
model = models[1]
```

## Chat

The main interface to Claudette is the
[`Chat`](https://claudette.answer.ai/core.html#chat) class, which
provides a stateful interface to Claude:

``` python
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
chat("I'm Jeremy")
```

Hello Jeremy, nice to meet you.

<details>
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
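
The growing `input_tokens` counts above (19, then 38, then 54) show that
[`Chat`](https://claudette.answer.ai/core.html#chat) re-sends the
accumulated history on every call. You can inspect that history directly
via the `h` attribute (the same attribute the extensions section uses
later):

``` python
len(chat.h)  # 6 -- three user/assistant pairs so far
```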
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
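
Prefill is plain Messages-API mechanics: the request simply ends with a
partial *assistant* message, and Claude continues from wherever it
stops. Roughly what gets sent on your behalf (an illustrative sketch,
not Claudette’s exact payload):

``` python
msgs = [
    {"role": "user", "content": "Concisely, what is the meaning of life?"},
    {"role": "assistant", "content": "According to Douglas Adams,"},  # the prefill
]
```

Claudette then shows the prefill together with the completion, which is
why the response above reads as a single sentence.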
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
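
One practical payoff of the async interface is firing independent
requests concurrently. A small sketch (each
[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) keeps
its own history, so the conversations don’t interfere):

``` python
import asyncio

async def ask_all(prompts):
    chats = [AsyncChat(model) for _ in prompts]
    return await asyncio.gather(*(c(p) for c, p in zip(chats, prompts)))

# In a notebook: results = await ask_all(["I'm Jeremy", "I'm Rachel"])
```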
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
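
On the wire, `cache=True` corresponds to Anthropic’s `cache_control`
marker on the content block. Roughly the shape `mk_msg` produces in that
case (an illustrative sketch, not Claudette’s exact output):

``` python
cached_msg = {"role": "user",
              "content": [{"type": "text", "text": msg,
                           "cache_control": {"type": "ephemeral"}}]}
```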

The response records that a cache has been created using these input
tokens:

``` python
print(r.usage)
```

    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)

We can now ask a followup question in this chat:

``` python
r = chat('How does it make tool use more ergonomic?')
r
```

According to the README, Claudette makes tool use more ergonomic in
several ways:

1. It uses docments to make Python function definitions more
   user-friendly - each parameter and return value should have a type
   and description

2. It handles the tool calling process automatically - when Claude
   returns a tool_use message, Claudette manages calling the tool with
   the provided parameters behind the scenes

3. It provides a `toolloop` method that can handle multiple tool calls
   in a single step to solve more complex problems

4. It allows you to pass a list of tools to the Chat constructor and
   optionally force Claude to always use a specific tool via
   `tool_choice`

Here’s a simple example from the README:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b

chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
```

This makes it much simpler compared to manually handling all the tool
use logic that would be required with the base SDK.

<details>
+ +- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` +- content: + `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` + +
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
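
The `tool_choice` argument maps onto the Messages API’s three
`tool_choice` modes via claudette’s `mk_tool_choice` helper (listed in
the API list; importable from `claudette.core`). The printed shapes
below are the standard API values:

``` python
from claudette.core import mk_tool_choice

print(mk_tool_choice(None))    # {'type': 'auto'} -- Claude decides whether to use a tool
print(mk_tool_choice(True))    # {'type': 'any'}  -- Claude must use *some* tool
print(mk_tool_choice('sums'))  # {'type': 'tool', 'name': 'sums'} -- force this tool
```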
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.

``` python
chat.use
```

    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int, # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing a trace
function via `trace_func` we can see each response from Claude in the
process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
chat.toolloop(pr, trace_func=print)
```

    Finding the sum of 604542 and 6458932
    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
    Finding the product of 7063474 and 2
    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]

The final result is 14,126,948.

<details>
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
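
Per the API list,
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop)
also accepts `max_steps` and `cont_func` in addition to `trace_func`. A
hedged sketch capping the loop and collecting the trace instead of
printing it (the per-exchange behaviour of `trace_func` here is my
reading of the signature, so verify against the toolloop docs):

``` python
trace = []
chat = Chat(model, sp=sp, tools=[sums, mults])
r = chat.toolloop(pr, max_steps=5, trace_func=trace.append)
len(trace)  # one entry per exchange in the loop
```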
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.
(For large images, using prompt caching might be a good idea.)

``` python
chat.use
```

    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964

## Other model providers

You can also use 3rd party providers of Anthropic models, as shown here.

### Amazon Bedrock

These are the models available through Bedrock:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

To use them, call `AnthropicBedrock` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicBedrock
```

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
object passing this client to the `cli` parameter – and from then on,
everything is identical to the previous examples.

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +

### Google Vertex

These are the models available through Vertex:

``` python
models_goog
```

    ['claude-3-opus@20240229',
     'claude-3-5-sonnet-v2@20241022',
     'claude-3-sonnet@20240229',
     'claude-3-haiku@20240307']

To use them, call `AnthropicVertex` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicVertex
import google.auth
```

``` python
project_id = google.auth.default()[1]
gv = AnthropicVertex(project_id=project_id, region="us-east5")
client = Client(models_goog[-1], gv)
```

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

## Extensions

- [Pydantic Structured
  Output](https://github.com/tom-pollak/claudette-pydantic)
# Tool loop



``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We’ll start by defining the same mock customer/order data as in
Anthropic’s example, plus create an entity relationship between
customers and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We’ll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We’re now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We’ll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?

``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let’s write a function to handle this
for us:

------------------------------------------------------------------------

source

### Chat.toolloop

> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
>                cont_func:Optional[callable]=noop, temp=None,
>                maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None)

*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*

|             | Type     | Default | Details                                                      |
|-------------|----------|---------|--------------------------------------------------------------|
| pr          |          |         | Prompt to pass to Claude                                     |
| max_steps   | int      | 10      | Maximum number of tool requests to loop through              |
| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)               |
| cont_func   | Optional | noop    | Function that stops loop if returns False                    |
| temp        | NoneType | None    | Temperature                                                  |
| maxtok      | int      | 4096    | Maximum tokens                                               |
| stream      | bool     | False   | Stream response?                                             |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response  |
| tool_choice | Optional | None    | Optionally force use of some tool                            |
+ +
+Exported source + +``` python +@patch +@delegates(Chat.__call__) +def toolloop(self:Chat, + pr, # Prompt to pass to Claude + max_steps=10, # Maximum number of tool requests to loop through + trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) + cont_func:Optional[callable]=noop, # Function that stops loop if returns False + **kwargs): + "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" + n_msgs = len(self.h) + r = self(pr, **kwargs) + for i in range(max_steps): + if r.stop_reason!='tool_use': break + if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) + r = self(**kwargs) + if not (cont_func or noop)(self.h[-2]): break + if trace_func: trace_func(self.h[n_msgs:]) + return r +``` + +
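
The `cont_func` hook receives the second-to-last history item – the
`user` message holding the latest `tool_result` blocks – before each
follow-up call, so it can end the loop early. Here’s a minimal sketch of
a custom stopper (`stop_on_failure` is a hypothetical example, not part
of Claudette):

``` python
def stop_on_failure(msg):
    "Return False (ending the loop) if any tool in the last round reported failure."
    results = [o.get('content') for o in msg.get('content', []) if isinstance(o, dict)]
    return not any(str(r) == 'False' for r in results)

# chat.toolloop('Please cancel all orders for customer C1.', cont_func=stop_on_failure)
```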
+ +We’ll start by re-running our previous request - we shouldn’t have to +manually pass back the `tool_use` message any more: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Can you tell me the email address for customer C1?') +r +``` + + - Retrieving customer C1 + +The email address for customer C1 is john@example.com. + +
+ +- id: `msg_01Fm2CY76dNeWief4kUW6r71` +- content: + `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Let’s see if it can handle the multi-stage process now – we’ll add +`trace_func=print` to see each stage of the process: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) +r +``` + + - Retrieving customer C1 + [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] + - Cancelling order O1 + [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] + - Cancelling order O2 + [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] + [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] + +I’ve successfully cancelled both orders O1 and O2 for customer C1. +Please let me know if you need anything else! + +
+ +- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` +- content: + `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +OK Claude thinks the orders were cancelled – let’s check one: + +``` python +chat.toolloop('What is the status of order O2?') +``` + + - Retrieving order O2 + +The status of order O2 is now ‘Cancelled’ since I successfully cancelled +that order earlier. + +
+ +- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` +- content: + `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

## Code interpreter

Here is an example of using `toolloop` to implement a simple code
interpreter with additional tools.

``` python
from toolslm.shell import get_shell
from fastcore.meta import delegates
import traceback
```

``` python
@delegates()
class CodeChat(Chat):
    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
        super().__init__(model=model, **kwargs)
        self.ask = ask
        self.tools.append(self.run_cell)
        self.shell = get_shell()
        self.shell.run_cell('import '+self.imps)
```

We have one additional parameter when creating a `CodeChat` beyond what
we pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is
`ask` – if that’s `True`, we’ll prompt the user before running code.

``` python
@patch
def run_cell(
    self:CodeChat,
    code:str,   # Code to execute in persistent IPython session
): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
    if self.ask and input(confirm): return '#DECLINED#'
    try: res = self.shell.run_cell(code)
    except Exception as e: return traceback.format_exc()
    return res.stdout if res.result is None else res.result
```

We just pass along requests to run code to the shell’s implementation.
Claude often prints results instead of just using the last expression,
so we capture stdout in those cases.

``` python
sp = f'''You are a knowledgable assistant. Do not use tools unless needed.
Don't do complex calculations yourself -- use code for them.
The following modules are pre-imported for `run_cell` automatically:

{CodeChat.imps}

Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.

If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
```

``` python
def get_user(ignored:str='' # Unused parameter
    ): # Username of current user
    "Get the username of the user running this session"
    print("Looking up username")
    return 'Jeremy'
```

In order to test out multi-stage tool use, we create a mock function
that Claude can call to get the current username.

``` python
model = models[1]
```

``` python
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
```

Claude gets confused sometimes about how tools work, so we use examples
to remind it:

``` python
chat.h = [
    'Calculate the square root of `10332`', 'math.sqrt(10332)',
    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
+] +``` + +Providing a callable to toolloop’s `trace_func` lets us print out +information during the loop: + +``` python +def _show_cts(h): + for r in h: + for o in r.get('content'): + if hasattr(o,'text'): print(o.text) + nm = getattr(o, 'name', None) + if nm=='run_cell': print(o.input['code']) + elif nm: print(f'{o.name}({o.input})') +``` + +…and toolloop’s `cont_func` callable let’s us provide a function which, +if it returns `False`, stops the loop: + +``` python +def _cont_decline(c): + return nested_idx(c, 'content', 'content') != '#DECLINED#' +``` + +Now we can try our code interpreter. We start by asking for a function +to be created, which we’ll use in the next prompt to test that the +interpreter is persistent. + +``` python +pr = '''Create a 1-line function `checksum` for a string `s`, +that multiplies together the ascii values of each character in `s` using `reduce`.''' +chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline) +``` + + Press Enter to execute, or enter "n" to skip? + ``` + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + ``` + + Create a 1-line function `checksum` for a string `s`, + that multiplies together the ascii values of each character in `s` using `reduce`. + Let me help you create that function using `reduce` and `functools`. + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + The function has been created. Let me explain how it works: + 1. It takes a string `s` as input + 2. Uses `functools.reduce` to multiply together all ASCII values + 3. `ord(y)` gets the ASCII value of each character + 4. The initial value is 1 (the third parameter to reduce) + 5. The lambda function multiplies the accumulator (x) with each new ASCII value + + You can test it with any string. For example, you could try `checksum("hello")` to see it in action. + +The function has been created. Let me explain how it works: 1. It takes +a string `s` as input 2. Uses `functools.reduce` to multiply together +all ASCII values 3. `ord(y)` gets the ASCII value of each character 4. +The initial value is 1 (the third parameter to reduce) 5. The lambda +function multiplies the accumulator (x) with each new ASCII value + +You can test it with any string. For example, you could try +`checksum("hello")` to see it in action. + +
+ +- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` +- content: + `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
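
Since the IPython session is persistent and shared with Claude, we can
also poke at it directly from our side. A quick sanity check (a sketch,
assuming the chat above has run; `run_cell` returns IPython’s
`ExecutionResult`):

``` python
res = chat.shell.run_cell('checksum("hello")')
res.result  # 13599570816, i.e. 104*101*108*108*111
```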
+ +By asking for a calculation to be done on the username, we force it to +use multiple steps: + +``` python +pr = 'Use it to get the checksum of the username of this session.' +chat.toolloop(pr, trace_func=_show_cts) +``` + + Looking up username + Use it to get the checksum of the username of this session. + I'll first get the username using `get_user` and then apply our `checksum` function to it. + get_user({'ignored': ''}) + Press Enter to execute, or enter "n" to skip? + ``` + print(checksum("Jeremy")) + ``` + + Now I'll calculate the checksum of "Jeremy": + print(checksum("Jeremy")) + The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". + +The checksum of the username “Jeremy” is 1134987783204. This was +calculated by multiplying together the ASCII values of each character in +“Jeremy”. + +
+ +- id: `msg_01UXvtcLzzykZpnQUT35v4uD` +- content: + `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
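
When handing a model a code-execution tool like this, it’s worth keeping
all the guard rails from this section on: `ask=True` so nothing runs
unconfirmed, a modest `max_steps`, and a `cont_func` that halts on
`#DECLINED#`. A hedged recap using the pieces defined above:

``` python
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
chat.toolloop('How many digits does 2**100 have?',
              max_steps=3, trace_func=_show_cts, cont_func=_cont_decline)
```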
# claudette Module Documentation + +## claudette.asink + +- `class AsyncClient` + - `def __init__(self, model, cli, log)` + Async Anthropic messages client. + + +- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` + Make an async call to Claude. + +- `@delegates() class AsyncChat` + - `def __init__(self, model, cli, **kwargs)` + Anthropic async chat client. + + +## claudette.core + +- `def find_block(r, blk_type)` + Find the first block of type `blk_type` in `r.content`. + +- `def contents(r)` + Helper to get the contents from Claude response `r`. + +- `def usage(inp, out, cache_create, cache_read)` + Slightly more concise version of `Usage`. + +- `@patch def __add__(self, b)` + Add together each of `input_tokens` and `output_tokens` + +- `def mk_msgs(msgs, **kw)` + Helper to set 'assistant' role on alternate messages. + +- `class Client` + - `def __init__(self, model, cli, log)` + Basic Anthropic messages client. + + +- `def mk_tool_choice(choose)` + Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise + +- `def mk_funcres(tuid, res)` + Given tool use id and the tool result, create a tool_result response. + +- `def mk_toolres(r, ns, obj)` + Create a `tool_result` message from response `r`. + +- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` + Make a call to Claude. + +- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` + Return the value of all tool calls (generally used for structured outputs) + +- `class Chat` + - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` + Anthropic chat client. + + - `@property def use(self)` + +- `def img_msg(data, cache)` + Convert image `data` into an encoded `dict` + +- `def text_msg(s, cache)` + Convert `s` to a text message + +- `def mk_msg(content, role, cache, **kw)` + Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message + +## claudette.toolloop + +- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` + Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages +# The async version + + + +## Setup + +## Async SDK + +``` python +model = models[1] +cli = AsyncAnthropic() +``` + +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = await cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hello Jeremy! It’s nice to meet you. How can I assist you today? Is +there anything specific you’d like to talk about or any questions you +have? + +
+ +- id: `msg_019gsEQs5dqb3kgwNJbTH27M` +- content: + `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 36}` + +
+ +------------------------------------------------------------------------ + +source + +### AsyncClient + +> AsyncClient (model, cli=None, log=False) + +*Async Anthropic messages client.* + +
+Exported source + +``` python +class AsyncClient(Client): + def __init__(self, model, cli=None, log=False): + "Async Anthropic messages client." + super().__init__(model,cli,log) + if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) +``` + +
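
In a notebook, top-level `await` works directly, as in the cells below;
in a plain script you’d drive the client with `asyncio`. A minimal
sketch (assuming `ANTHROPIC_API_KEY` is set, and that `AsyncClient` and
`models` are importable from the claudette package):

``` python
import asyncio
from claudette import AsyncClient, models

async def main():
    c = AsyncClient(models[1])
    r = await c('Hi')       # same Message object the sync Client returns
    print(r.stop_reason)

# asyncio.run(main())  # in a script; in a notebook just `await main()`
```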

``` python
c = AsyncClient(model)
```

``` python
c._r(r)
c.use
```

    In: 10; Out: 36; Total: 46

------------------------------------------------------------------------

source

### AsyncClient.\_\_call\_\_

> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='',
>                       stream:bool=False, stop=None, cli=None, log=False)

*Make an async call to Claude.*

|         | Type     | Default | Details                                                      |
|---------|----------|---------|--------------------------------------------------------------|
| msgs    | list     |         | List of messages in the dialog                               |
| sp      | str      |         | The system prompt                                            |
| temp    | int      | 0       | Temperature                                                  |
| maxtok  | int      | 4096    | Maximum tokens                                               |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response  |
| stream  | bool     | False   | Stream response?                                             |
| stop    | NoneType | None    | Stop sequence                                                |
| cli     | NoneType | None    |                                                              |
| log     | bool     | False   |                                                              |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): + async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield prefill + async for o in s.text_stream: yield o + self._log(await s.get_final_message(), prefill, msgs, kwargs) +``` + +
+
+Exported source + +``` python +@patch +@delegates(Client) +async def __call__(self:AsyncClient, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + **kwargs): + "Make an async call to Claude." + msgs = self._precall(msgs, prefill, stop, kwargs) + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = await self.c.messages.create( + model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
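
Like the sync client, the async `__call__` accepts `stop`, which
`_precall` normalizes into the SDK’s `stop_sequences` list – a single
string or a list of strings both work. A quick sketch (the exact output
depends on what Claude generates):

``` python
r = await c('Count from 1 to 10, comma-separated.', stop=[', 5'])
r.stop_reason  # 'stop_sequence' if Claude emitted the sequence, else 'end_turn'
```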
+ +``` python +c = AsyncClient(model, log=True) +c.use +``` + + In: 0; Out: 0; Total: 0 + +``` python +c.model = models[1] +await c('Hi') +``` + +Hello! How can I assist you today? Feel free to ask any questions or let +me know if you need help with anything. + +
+ +- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` +- content: + `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 29; Total: 37 + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await c(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon meaning of life. Many philosophers +and religions have proposed different answers, but it remains an open +question that individuals must grapple with for themselves. + +
+ +- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

``` python
async for o in (await c('Hi', stream=True)): print(o, end='')
```

    Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.

``` python
c.use
```

    In: 40; Out: 109; Total: 149

``` python
async for o in (await c(q, prefill=pref, stream=True)): print(o, end='')
```

    According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.

``` python
c.use
```

    In: 64; Out: 160; Total: 224

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```

``` python
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
```

``` python
tools = [get_schema(sums)]
msgs = mk_msgs(pr)
r = await c(msgs, sp=sp, tools=tools, tool_choice=choice)
tr = mk_toolres(r, ns=globals())
msgs += tr
contents(await c(msgs, sp=sp, tools=tools))
```

    Finding the sum of 604542 and 6458932

    'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?'

## AsyncChat

------------------------------------------------------------------------

source

### AsyncChat

> AsyncChat (model:Optional[str]=None,
>            cli:Optional[claudette.core.Client]=None, sp='',
>            tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None)

*Anthropic async chat client.*

|         | Type     | Default | Details                                       |
|---------|----------|---------|-----------------------------------------------|
| model   | Optional | None    | Model to use (leave empty if passing cli)     |
| cli     | Optional | None    | Client to use (leave empty if passing model)  |
| sp      | str      |         |                                               |
| tools   | Optional | None    |                                               |
| temp    | int      | 0       |                                               |
| cont_pr | Optional | None    |                                               |
+ +
+Exported source + +``` python +@delegates() +class AsyncChat(Chat): + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + **kwargs): + "Anthropic async chat client." + super().__init__(model, cli, **kwargs) + if not cli: self.c = AsyncClient(model) +``` + +

``` python
sp = "Never mention what tools you use."
chat = AsyncChat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Total: 0, [])

------------------------------------------------------------------------

source

### AsyncChat.\_\_call\_\_

> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False,
>                     prefill='', **kw)

*Call self as a function.*

|         | Type     | Default | Details                                                      |
|---------|----------|---------|--------------------------------------------------------------|
| pr      | NoneType | None    | Prompt / message                                             |
| temp    | int      | 0       | Temperature                                                  |
| maxtok  | int      | 4096    | Maximum tokens                                               |
| stream  | bool     | False   | Stream response?                                             |
| prefill | str      |         | Optional prefill to pass to Claude as start of its response  |
| kw      |          |         |                                                              |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncChat, res): + async for o in res: yield o + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +async def _append_pr(self:AsyncChat, pr=None): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history + if pr and prev_role == 'user': await self() + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +async def __call__(self:AsyncChat, + pr=None, # Prompt / message + temp=0, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + **kw): + await self._append_pr(pr) + if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] + res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) + return res +``` + +
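
One subtlety from `_append_pr` above: if the history already ends with a
`user` message (which happens after `mk_toolres` appends a
`tool_result`), sending a new prompt first awaits Claude’s pending
reply, so roles keep alternating. A hypothetical illustration:

``` python
chat = AsyncChat(model, tools=[sums])
r = await chat(f"What is {a}+{b}?")   # stop_reason 'tool_use'; history now ends with a user tool_result
r = await chat("Thanks! Now add 1.")  # _append_pr awaits the tool follow-up first, then sends this prompt
```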
+ +``` python +await chat("I'm Jeremy") +await chat("What's my name?") +``` + +Your name is Jeremy, as you mentioned in your previous message. + +
+ +- id: `msg_01NMugMXWpDP9iuTXeLkHarn` +- content: + `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await chat(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon answer. Common philosophical +perspectives include: + +1. Finding personal fulfillment +2. Serving others +3. Pursuing happiness +4. Creating meaning through our choices +5. Experiencing and appreciating existence + +Ultimately, many believe each individual must determine their own life’s +meaning. + +
+ +- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat = AsyncChat(model, sp=sp) +async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') +``` + + Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? + +``` python +pr = f"What is {a}+{b}?" +chat = AsyncChat(model, sp=sp, tools=[sums]) +r = await chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +To answer this question, I can use the “sums” function to add these two +numbers together. Let me do that for you. + +
+ +- id: `msg_015z1rffSWFxvj7rSpzc43ZE` +- content: + `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +await chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +fn = Path('samples/puppy.jpg') +img = fn.read_bytes() +``` + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +await c([msg]) +``` + +The flowers in this image are purple. They appear to be small, +daisy-like flowers, possibly asters or some type of purple daisy, +blooming in the background behind the adorable puppy in the foreground. + +
+ +- id: `msg_017qgZggLjUY915mWbWCkb9X` +- content: + `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
# Claudette’s source + + + +This is the ‘literate’ source code for Claudette. You can view the fully +rendered version of the notebook +[here](https://claudette.answer.ai/core.html), or you can clone the git +repo and run the [interactive +notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb) +in Jupyter. The notebook is converted the [Python module +claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py) +using [nbdev](https://nbdev.fast.ai/). The goal of this source code is +to both create the Python module, and also to teach the reader *how* it +is created, without assuming much existing knowledge about Claude’s API. + +Most of the time you’ll see that we write some source code *first*, and +then a description or discussion of it *afterwards*. + +## Setup + +``` python +import os +# os.environ['ANTHROPIC_LOG'] = 'debug' +``` + +To print every HTTP request and response in full, uncomment the above +line. This functionality is provided by Anthropic’s SDK. + +
+ +> **Tip** +> +> If you’re reading the rendered version of this notebook, you’ll see an +> “Exported source” collapsible widget below. If you’re reading the +> source notebook directly, you’ll see `#| exports` at the top of the +> cell. These show that this piece of code will be exported into the +> python module that this notebook creates. No other code will be +> included – any other code in this notebook is just for demonstration, +> documentation, and testing. +> +> You can toggle expanding/collapsing the source code of all exported +> sections by using the ` Code` menu in the top right of the rendered +> notebook page. + +
+ +
+Exported source + +``` python +model_types = { + # Anthropic + 'claude-3-opus-20240229': 'opus', + 'claude-3-5-sonnet-20241022': 'sonnet', + 'claude-3-haiku-20240307': 'haiku-3', + 'claude-3-5-haiku-20241022': 'haiku-3-5', + # AWS + 'anthropic.claude-3-opus-20240229-v1:0': 'opus', + 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', + 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', + 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', + # Google + 'claude-3-opus@20240229': 'opus', + 'claude-3-5-sonnet-v2@20241022': 'sonnet', + 'claude-3-sonnet@20240229': 'sonnet', + 'claude-3-haiku@20240307': 'haiku', +} + +all_models = list(model_types) +``` + +
+
+Exported source + +``` python +text_only_models = ('claude-3-5-haiku-20241022',) +``` + +

These are the current versions and
[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
models at the time of writing.

``` python
model = models[1]; model
```

    'claude-3-5-sonnet-20241022'

For examples, we’ll use Sonnet 3.5, since it’s awesome.

## Anthropic SDK

``` python
cli = Anthropic()
```

This is what Anthropic’s SDK provides for interacting with Claude from
Python. To use it, pass it a list of *messages*, with *content* and a
*role*. The roles should alternate between *user* and *assistant*.
+ +> **Tip** +> +> After the code below you’ll see an indented section with an orange +> vertical line on the left. This is used to show the *result* of +> running the code above. Because the code is running in a Jupyter +> Notebook, we don’t have to use `print` to display results, we can just +> type the expression directly, as we do with `r` here. + +
+ +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +

### Formatting output

That output is pretty long and hard to read, so let’s clean it up. We’ll
start by pulling out the `Content` part of the message. To do that,
we’re going to write our first function which will be included in the
`claudette/core.py` module.

> **Tip**
>
> This is the first exported public function or class we’re creating
> (the previous export was of a variable). In the rendered version of
> the notebook for these you’ll see 4 things, in this order (unless the
> symbol starts with a single `_`, which indicates it’s *private*):
>
> - The signature (with the symbol name as a heading, with a horizontal
>   rule above)
> - A table of parameter docs (if provided)
> - The doc string (in italics).
> - The source code (in a collapsible “Exported source” block)
>
> After that, we generally provide a bit more detail on what we’ve
> created, and why, along with a sample usage.
+ +------------------------------------------------------------------------ + +source + +### find_block + +> find_block (r:collections.abc.Mapping, blk_type:type= 'anthropic.types.text_block.TextBlock'>) + +*Find the first block of type `blk_type` in `r.content`.* + + + + + + + + + + + + + + + + + + + + + + + + +
|          | Type    | Default   | Details                    |
|----------|---------|-----------|----------------------------|
| r        | Mapping |           | The message to look in     |
| blk_type | type    | TextBlock | The type of block to find  |
+ +
+Exported source + +``` python +def find_block(r:abc.Mapping, # The message to look in + blk_type:type=TextBlock # The type of block to find + ): + "Find the first block of type `blk_type` in `r.content`." + return first(o for o in r.content if isinstance(o,blk_type)) +``` + +
+ +This makes it easier to grab the needed parts of Claude’s responses, +which can include multiple pieces of content. By default, we look for +the first text block. That will generally have the content we want to +display. + +``` python +find_block(r) +``` + + TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') + +------------------------------------------------------------------------ + +source + +### contents + +> contents (r) + +*Helper to get the contents from Claude response `r`.* + +
+Exported source + +``` python +def contents(r): + "Helper to get the contents from Claude response `r`." + blk = find_block(r) + if not blk and r.content: blk = r.content[0] + return blk.text.strip() if hasattr(blk,'text') else str(blk) +``` + +
+ +For display purposes, we often just want to show the text itself. + +``` python +contents(r) +``` + + "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" + +
Exported source

``` python
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>"""
```
+ +Jupyter looks for a `_repr_markdown_` method in displayed objects; we +add this in order to display just the content text, and collapse full +details into a hideable section. Note that `patch` is from +[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) +functionality in an existing class. We pass the class(es) that we want +to patch as type annotations to `self`. In this case, `_repr_markdown_` +is being added to Anthropic’s `Message` class, so when we display the +message now we just see the contents, and the details are hidden away in +a collapsible details block. + +``` python +r +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help +you today? + +
+ +- id: `msg_017Q8WYvvANfyHWLJWt95UR1` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 27}` + +
+ +One key part of the response is the +[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells +us how many tokens we used by returning a `Usage` object. + +We’ll add some helpers to make things a bit cleaner for creating and +formatting these objects. + +``` python +r.usage +``` + + In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 + +------------------------------------------------------------------------ + +source + +### usage + +> usage (inp=0, out=0, cache_create=0, cache_read=0) + +*Slightly more concise version of `Usage`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|              | Type | Default | Details               |
|--------------|------|---------|-----------------------|
| inp          | int  | 0       | input tokens          |
| out          | int  | 0       | Output tokens         |
| cache_create | int  | 0       | Cache creation tokens |
| cache_read   | int  | 0       | Cache read tokens     |
+ +
+Exported source + +``` python +def usage(inp=0, # input tokens + out=0, # Output tokens + cache_create=0, # Cache creation tokens + cache_read=0 # Cache read tokens + ): + "Slightly more concise version of `Usage`." + return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) +``` + +
+ +The constructor provided by Anthropic is rather verbose, so we clean it +up a bit, using a lowercase version of the name. + +``` python +usage(5) +``` + + In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 + +------------------------------------------------------------------------ + +source + +### Usage.total + +> Usage.total () + +
+Exported source + +``` python +@patch(as_prop=True) +def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) +``` + +
+ +Adding a `total` property to `Usage` makes it easier to see how many +tokens we’ve used up altogether. + +``` python +usage(5,1).total +``` + + 6 + +------------------------------------------------------------------------ + +source + +### Usage.\_\_repr\_\_ + +> Usage.__repr__ () + +*Return repr(self).* + +
+Exported source + +``` python +@patch +def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' +``` + +

In Python, patching `__repr__` lets us change how an object is
displayed. (More generally, methods starting and ending in `__` in
Python are called `dunder` methods, and have some `magic` behavior –
such as, in this case, changing how an object is displayed.)

``` python
usage(5)
```

    In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5

------------------------------------------------------------------------

source

### Usage.\_\_add\_\_

> Usage.__add__ (b)

*Add together each of `input_tokens` and `output_tokens`*
+Exported source + +``` python +@patch +def __add__(self:Usage, b): + "Add together each of `input_tokens` and `output_tokens`" + return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) +``` + +
+ +And, patching `__add__` lets `+` work on a `Usage` object. + +``` python +r.usage+r.usage +``` + + In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 + +### Creating messages + +Creating correctly formatted `dict`s from scratch every time isn’t very +handy, so next up we’ll add helpers for this. + +``` python +def mk_msg(content, role='user', **kw): + return dict(role=role, content=content, **kw) +``` + +We make things a bit more convenient by writing a function to create a +message for us. + +
+ +> **Note** +> +> You may have noticed that we didn’t export the +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function +> (i.e. there’s no “Exported source” block around it). That’s because +> we’ll need more functionality in our final version than this version +> has – so we’ll be defining a more complete version later. Rather than +> refactoring/editing in notebooks, often it’s helpful to simply +> gradually build up complexity by re-defining a symbol. + +
+ +``` python +prompt = "I'm Jeremy" +m = mk_msg(prompt) +m +``` + + {'role': 'user', 'content': "I'm Jeremy"} + +``` python +r = cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? + +
+ +- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` +- content: + `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 24}` + +
+ +------------------------------------------------------------------------ + +source + +### mk_msgs + +> mk_msgs (msgs:list, **kw) + +*Helper to set ‘assistant’ role on alternate messages.* + +
+Exported source + +``` python +def mk_msgs(msgs:list, **kw): + "Helper to set 'assistant' role on alternate messages." + if isinstance(msgs,str): msgs=[msgs] + return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] +``` + +

LLMs, including Claude, don’t actually have state, but instead dialogs
are created by passing back all previous prompts and responses every
time. With Claude, they always alternate *user* and *assistant*.
Therefore we create a function to make it easier to build up these
dialog lists.

But to do so, we need to update
[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
pass not only a `str` as `content`, but also a `dict` or an object with
a `content` attr, since these are both types of message that Claude can
create. To do so, we check for a `content` key or attr, and use it if
found.
+Exported source + +``` python +def _str_if_needed(o): + if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o + return str(o) +``` + +
+ +``` python +def mk_msg(content, role='user', **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content['content'] + return dict(role=role, content=_str_if_needed(content), **kw) +``` + +``` python +msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) +msgs +``` + + [{'role': 'user', 'content': "I'm Jeremy"}, + {'role': 'assistant', + 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, + {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] + +Now, if we pass this list of messages to Claude, the model treats it as +a conversation to respond to. + +``` python +cli.messages.create(messages=msgs, model=model, max_tokens=200) +``` + +You just told me your name is Jeremy. + +
+ +- id: `msg_01KZski1R3z1iGjF6XsBb9dM` +- content: + `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 46, 'output_tokens': 13}` + +
+ +## Client + +------------------------------------------------------------------------ + +source + +### Client + +> Client (model, cli=None, log=False) + +*Basic Anthropic messages client.* + +
+Exported source + +``` python +class Client: + def __init__(self, model, cli=None, log=False): + "Basic Anthropic messages client." + self.model,self.use = model,usage() + self.text_only = model in text_only_models + self.log = [] if log else None + self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) +``` + +

We’ll create a simple
[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
which tracks usage and stores the model to use. We don’t add any methods
right away – instead we’ll use `patch` for that so we can add and
document them incrementally.

``` python
c = Client(model)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
+Exported source + +``` python +@patch +def _r(self:Client, r:Message, prefill=''): + "Store the result of the message and accrue total usage." + if prefill: + blk = find_block(r) + blk.text = prefill + (blk.text or '') + self.result = r + self.use += r.usage + self.stop_reason = r.stop_reason + self.stop_sequence = r.stop_sequence + return r +``` + +
+ +We use a `_` prefix on private methods, but we document them here in the +interests of literate source code. + +`_r` will be used each time we get a new result, to track usage and also +to keep the result available for later. + +``` python +c._r(r) +c.use +``` + + In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 + +Whereas OpenAI’s models use a `stream` parameter for streaming, +Anthropic’s use a separate method. We implement Anthropic’s approach in +a private method, and then use a `stream` parameter in `__call__` for +consistency: + +
Exported source

``` python
@patch
def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
    self._r(final, prefill)
    if self.log is not None: self.log.append({
        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
    })
    return self.result
```
+
+Exported source + +``` python +@patch +def _stream(self:Client, msgs:list, prefill='', **kwargs): + with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield(prefill) + yield from s.text_stream + self._log(s.get_final_message(), prefill, msgs, **kwargs) +``` + +
+ +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. However Claude doesn’t actually repeat that in the +response, so for convenience we add it. + +
+Exported source + +``` python +@patch +def _precall(self:Client, msgs, prefill, stop, kwargs): + pref = [prefill.strip()] if prefill else [] + if not isinstance(msgs,list): msgs = [msgs] + if stop is not None: + if not isinstance(stop, (list)): stop = [stop] + kwargs["stop_sequences"] = stop + msgs = mk_msgs(msgs+pref) + return msgs +``` + +
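To make the prefill and stop handling concrete, here's roughly what `_precall` produces (an illustrative cell only; it isn't exported):

``` python
kwargs = {}
msgs = c._precall("Count up from 1", "1, 2,", ".", kwargs)
# The stripped prefill is appended as a trailing `assistant` message:
# [{'role': 'user', 'content': 'Count up from 1'},
#  {'role': 'assistant', 'content': '1, 2,'}]
# ...and the stop sequence has been listified into kwargs:
# {'stop_sequences': ['.']}
```
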
``` python
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
```

Defining `__call__` lets us use an object like a function (i.e. it’s
*callable*). We use it as a small wrapper over `messages.create`.
However we’re not exporting this version just yet – we have some
additions we’ll make in a moment…

``` python
c = Client(model, log=True)
c.use
```

    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0

``` python
c('Hi')
```

Hello! How can I help you today?

+ +- id: `msg_01DZfHpTqbodjegmvG6kkQvn` +- content: + `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 + +Let’s try out *prefill*: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +c(q, prefill=pref) +``` + +According to Douglas Adams, it’s 42. More seriously, there’s no +universal answer - it’s deeply personal. Common perspectives include: +finding happiness, making meaningful connections, pursuing purpose +through work/creativity, helping others, or simply experiencing and +appreciating existence. + +
+ +- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` +- content: + `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
We can pass `stream=True` to stream the response back incrementally:

``` python
for o in c('Hi', stream=True): print(o, end='')
```

    Hello! How can I help you today?

``` python
c.use
```

    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137

``` python
for o in c(q, prefill=pref, stream=True): print(o, end='')
```

    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.

``` python
c.use
```

    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214

Pass a stop sequence if you want Claude to stop generating text when it
encounters it.

``` python
c("Count from 1 to 10", stop="5")
```

1 2 3 4

+ +- id: `msg_01D3kdCAHNbXadE144FLPbQV` +- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `stop_sequence` +- stop_sequence: `5` +- type: `message` +- usage: + `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +This also works with streaming, and you can pass more than one stop +sequence: + +``` python +for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') +print(c.stop_reason, c.stop_sequence) +``` + + 1 + stop_sequence 2 + +You can check the logs: + +``` python +c.log[-1] +``` + + {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], + 'prefill': '', + 'max_tokens': 4096, + 'system': '', + 'temperature': 0, + 'stop_sequences': ['2', 'yellow'], + 'maxtok': None, + 'sp': None, + 'temp': None, + 'stream': None, + 'stop': None, + 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), + 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, + 'stop_reason': 'stop_sequence', + 'stop_sequence': '2'} + +## Tool use + +Let’s now add tool use (aka *function calling*). + +------------------------------------------------------------------------ + +source + +### mk_tool_choice + +> mk_tool_choice (choose:Union[str,bool,NoneType]) + +*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ +if it is True, or ‘tool’ otherwise* + +
+Exported source + +``` python +def mk_tool_choice(choose:Union[str,bool,None])->dict: + "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" + return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} +``` + +
``` python
print(mk_tool_choice('sums'))
print(mk_tool_choice(True))
print(mk_tool_choice(None))
```

    {'type': 'tool', 'name': 'sums'}
    {'type': 'any'}
    {'type': 'auto'}

Claude can be forced to use a particular tool, or select from a specific
list of tools, or decide for itself when to use a tool. If you want to
force a tool (or force choosing from a list), include a `tool_choice`
param with a dict from
[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice).

For testing, we need a function that Claude can call; we’ll write a
simple function that adds numbers together, and will tell us when it’s
being called:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```

Claudette can autogenerate a schema thanks to the `toolslm` library. We
can then force the use of the tool using the `mk_tool_choice` function
we created earlier.

``` python
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
```

We’ll start a dialog with Claude now. We’ll store the messages of our
dialog in `msgs`. The first message will be our prompt `pr`, and we’ll
pass our `tools` schema.

``` python
msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
r
```

ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542,
‘b’: 6458932}, name=‘sums’, type=‘tool_use’)

+ +- id: `msg_015eEr2H8V4j8nNEh1KQifjH` +- content: + `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
When Claude decides that it should use a tool, it passes back a
`ToolUseBlock` with the name of the tool to call, and the params to use.

We don’t want to allow it to call just any possible function (that would
be a security disaster!) so we create a *namespace* – that is, a
dictionary of allowable function names to call.

``` python
ns = mk_ns(sums)
ns
```

    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}

------------------------------------------------------------------------

source

### mk_funcres

> mk_funcres (tuid, res)

*Given tool use id and the tool result, create a tool_result response.*

+Exported source + +``` python +def mk_funcres(tuid, res): + "Given tool use id and the tool result, create a tool_result response." + return dict(type="tool_result", tool_use_id=tuid, content=str(res)) +``` + +
+ +We can now use the function requested by Claude. We look it up in `ns`, +and pass in the provided parameters. + +``` python +fc = find_block(r, ToolUseBlock) +res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) +res +``` + + Finding the sum of 604542 and 6458932 + + {'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'} + +------------------------------------------------------------------------ + +source + +### mk_toolres + +> mk_toolres (r:collections.abc.Mapping, +> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) + +*Create a `tool_result` message from response `r`.* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|     | Type     | Default | Details                                |
|-----|----------|---------|----------------------------------------|
| r   | Mapping  |         | Tool use request response from Claude  |
| ns  | Optional | None    | Namespace to search for tools          |
| obj | Optional | None    | Class to search for tools              |

+Exported source + +``` python +def mk_toolres( + r:abc.Mapping, # Tool use request response from Claude + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + obj:Optional=None # Class to search for tools + ): + "Create a `tool_result` message from response `r`." + cts = getattr(r, 'content', []) + res = [mk_msg(r)] + if ns is None: ns=globals() + if obj is not None: ns = mk_ns(obj) + tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] + if tcs: res.append(mk_msg(tcs)) + return res +``` + +
+ +In order to tell Claude the result of the tool call, we pass back the +tool use assistant request and the `tool_result` response. + +``` python +tr = mk_toolres(r, ns=ns) +tr +``` + + Finding the sum of 604542 and 6458932 + + [{'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +We add this to our dialog, and now Claude has all the information it +needs to answer our question. + +``` python +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + 'The sum of 604542 and 6458932 is 7063474.' + +``` python +msgs +``` + + [{'role': 'user', 'content': 'What is 604542+6458932?'}, + {'role': 'assistant', + 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, + {'role': 'user', + 'content': [{'type': 'tool_result', + 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', + 'content': '7063474'}]}] + +This works with methods as well – in this case, use the object itself +for `ns`: + +``` python +class Dummy: + def sums( + self, + a:int, # First thing to sum + b:int=1 # Second thing to sum + ) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +tools = [get_schema(Dummy.sums)] +o = Dummy() +r = c(pr, sp=sp, tools=tools, tool_choice=choice) +tr = mk_toolres(r, obj=o) +msgs += tr +contents(c(msgs, sp=sp, tools=tools)) +``` + + Finding the sum of 604542 and 6458932 + + 'The sum of 604542 and 6458932 is 7063474.' + +------------------------------------------------------------------------ + +source + +### get_types + +> get_types (msgs) + +``` python +get_types(msgs) +``` + + ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] + +------------------------------------------------------------------------ + +source + +### Client.\_\_call\_\_ + +> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', +> stream:bool=False, stop=None, tools:Optional[list]=None, +> tool_choice:Optional[dict]=None, +> metadata:MetadataParam|NotGiven=NOT_GIVEN, +> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio +> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, +> temperature:float|NotGiven=NOT_GIVEN, +> top_k:int|NotGiven=NOT_GIVEN, +> top_p:float|NotGiven=NOT_GIVEN, +> extra_headers:Headers|None=None, +> extra_query:Query|None=None, extra_body:Body|None=None, +> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) + +*Make a call to Claude.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|  | Type | Default | Details |
|----|------|---------|---------|
| msgs | list |  | List of messages in the dialog |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tools | Optional | None | List of tools to make available to Claude |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
| temperature | float \| NotGiven | NOT_GIVEN |  |
| top_k | int \| NotGiven | NOT_GIVEN |  |
| top_p | float \| NotGiven | NOT_GIVEN |  |
| extra_headers | Headers \| None | None |  |
| extra_query | Query \| None | None |  |
| extra_body | Body \| None | None |  |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |

+Exported source + +``` python +@patch +@delegates(messages.Messages.create) +def __call__(self:Client, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + tools:Optional[list]=None, # List of tools to make available to Claude + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kwargs): + "Make a call to Claude." + if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] + if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) + msgs = self._precall(msgs, prefill, stop, kwargs) + if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
+ +``` python +r = c(pr, sp=sp, tools=sums, tool_choice=sums) +r +``` + +ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_01T8zmguPksQaKLLgUuaYAJL` +- content: + `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +tr = mk_toolres(r, ns=ns) +``` + + Finding the sum of 604542 and 6458932 + +------------------------------------------------------------------------ + +source + +### Client.structured + +> Client.structured (msgs:list, tools:Optional[list]=None, +> obj:Optional=None, +> ns:Optional[collections.abc.Mapping]=None, sp='', +> temp=0, maxtok=4096, prefill='', stream:bool=False, +> stop=None, tool_choice:Optional[dict]=None, +> metadata:MetadataParam|NotGiven=NOT_GIVEN, +> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un +> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, +> temperature:float|NotGiven=NOT_GIVEN, +> top_k:int|NotGiven=NOT_GIVEN, +> top_p:float|NotGiven=NOT_GIVEN, +> extra_headers:Headers|None=None, +> extra_query:Query|None=None, +> extra_body:Body|None=None, +> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) + +*Return the value of all tool calls (generally used for structured +outputs)* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|  | Type | Default | Details |
|----|------|---------|---------|
| msgs | list |  | List of messages in the dialog |
| tools | Optional | None | List of tools to make available to Claude |
| obj | Optional | None | Class to search for tools |
| ns | Optional | None | Namespace to search for tools |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
| temperature | float \| NotGiven | NOT_GIVEN |  |
| top_k | int \| NotGiven | NOT_GIVEN |  |
| top_p | float \| NotGiven | NOT_GIVEN |  |
| extra_headers | Headers \| None | None |  |
| extra_query | Query \| None | None |  |
| extra_body | Body \| None | None |  |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |

+Exported source + +``` python +@patch +@delegates(Client.__call__) +def structured(self:Client, + msgs:list, # List of messages in the dialog + tools:Optional[list]=None, # List of tools to make available to Claude + obj:Optional=None, # Class to search for tools + ns:Optional[abc.Mapping]=None, # Namespace to search for tools + **kwargs): + "Return the value of all tool calls (generally used for structured outputs)" + tools = listify(tools) + res = self(msgs, tools=tools, tool_choice=tools, **kwargs) + if ns is None: ns=mk_ns(*tools) + if obj is not None: ns = mk_ns(obj) + cts = getattr(res, 'content', []) + tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] + return tcs +``` + +
+ +Anthropic’s API does not support response formats directly, so instead +we provide a `structured` method to use tool calling to achieve the same +result. The result of the tool is not passed back to Claude in this +case, but instead is returned directly to the user. + +``` python +c.structured(pr, tools=[sums]) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +## Chat + +Rather than manually adding the responses to a dialog, we’ll create a +simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do +that for us, each time we make a request. We’ll also store the system +prompt and tools here, to avoid passing them every time. + +------------------------------------------------------------------------ + +source + +### Chat + +> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, +> sp='', tools:Optional[list]=None, temp=0, +> cont_pr:Optional[str]=None) + +*Anthropic chat client.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|  | Type | Default | Details |
|----|------|---------|---------|
| model | Optional | None | Model to use (leave empty if passing cli) |
| cli | Optional | None | Client to use (leave empty if passing model) |
| sp | str |  | Optional system prompt |
| tools | Optional | None | List of tools to make available to Claude |
| temp | int | 0 | Temperature |
| cont_pr | Optional | None | User prompt to continue an assistant response: assistant,[user:“…”],assistant |

+Exported source + +``` python +class Chat: + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + sp='', # Optional system prompt + tools:Optional[list]=None, # List of tools to make available to Claude + temp=0, # Temperature + cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant + "Anthropic chat client." + assert model or cli + assert cont_pr != "", "cont_pr may not be an empty string" + self.c = (cli or Client(model)) + self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp + + @property + def use(self): return self.c.use +``` + +
The class stores the
[`Client`](https://claudette.answer.ai/core.html#client) that will
provide the responses in `c`, and a history of messages in `h`.

``` python
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])

We’ve shown the token usage, but what we really care about is pricing.
Let’s extract the latest
[pricing](https://www.anthropic.com/pricing#anthropic-api) from
Anthropic into a `pricing` dict (sketched below).

We’ll patch `Usage` to enable it to compute the cost given pricing.

------------------------------------------------------------------------

source

### Usage.cost

> Usage.cost (costs:tuple)

+Exported source + +``` python +@patch +def cost(self:Usage, costs:tuple) -> float: + cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) + return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 +``` + +
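The `pricing` dict itself isn't shown here; it's built from Anthropic's published prices, and maps each model type to a tuple of per-million-token costs in the order `Usage.cost` expects: input, output, cache write, cache read. A sketch of its shape, with illustrative late-2024 numbers (check the pricing page before relying on them):

``` python
# (input, output, cache write, cache read) costs per million tokens.
# Illustrative values only -- consult https://www.anthropic.com/pricing
# for current prices.
pricing = {
    'opus':      (15.00, 75.00, 18.75, 1.50),
    'sonnet':    ( 3.00, 15.00,  3.75, 0.30),
    'haiku-3':   ( 0.25,  1.25,  0.30, 0.03),
    'haiku-3-5': ( 1.00,  5.00,  1.25, 0.10),
}
```

As a sanity check of the arithmetic: 58 input and 27 output tokens on Sonnet come to 58\*3/1e6 + 27\*15/1e6 = \$0.000579, which matches the `chat.cost` value shown a little later.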
+ +``` python +chat.c.use.cost(pricing[model_types[chat.c.model]]) +``` + + 0.0 + +This is clunky. Let’s add `cost` as a property for the +[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass +in the appropriate prices for the current model to the usage cost +calculator. + +------------------------------------------------------------------------ + +source + +### Chat.cost + +> Chat.cost () + +
+Exported source + +``` python +@patch(as_prop=True) +def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) +``` + +
+ +``` python +chat.cost +``` + + 0.0 + +------------------------------------------------------------------------ + +source + +### Chat.\_\_call\_\_ + +> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='', +> tool_choice:Optional[dict]=None, **kw) + +*Call self as a function.* + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|  | Type | Default | Details |
|----|------|---------|---------|
| pr | NoneType | None | Prompt / message |
| temp | NoneType | None | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None | Optionally force use of some tool |
| kw |  |  |  |

+Exported source + +``` python +@patch +def _stream(self:Chat, res): + yield from res + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
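Note that the history update happens *inside* this generator, after the final message is available. So once `__call__` (defined below) wires this in, the response is only appended to the chat history after the stream has been fully consumed. For example (illustrative):

``` python
chat = Chat(model)
g = chat("Hi", stream=True)   # nothing in `h` beyond the user message yet
for tok in g: pass            # consuming the stream completes the request...
chat.h[-1]['role']            # ...and only now is 'assistant' in the history
```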
+
+Exported source + +``` python +@patch +def _post_pr(self:Chat, pr, prev_role): + if pr is None and prev_role == 'assistant': + if self.cont_pr is None: + raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") + pr = self.cont_pr # No user prompt, keep the chain + if pr: self.h.append(mk_msg(pr)) +``` + +
+
+Exported source + +``` python +@patch +def _append_pr(self:Chat, + pr=None, # Prompt / message + ): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' + if pr and prev_role == 'user': self() # already user request pending + self._post_pr(pr, prev_role) +``` + +
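The middle line is subtle: if the last message in the history is already a `user` message (most commonly a pending tool result) and a new prompt arrives, `_append_pr` first calls `self()` so Claude can respond to what's pending before the new prompt is appended. A hypothetical flow (using the `sums` tool from earlier, once `__call__` below is defined):

``` python
chat = Chat(model, sp=sp, tools=[sums])
chat("What is 5+7?")     # Claude requests the tool; its result is
                         # appended to history as a 'user' message
chat("Add 10 to that.")  # flushes the pending tool result first,
                         # then sends the new prompt
```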
+
+Exported source + +``` python +@patch +def __call__(self:Chat, + pr=None, # Prompt / message + temp=None, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + tool_choice:Optional[dict]=None, # Optionally force use of some tool + **kw): + if temp is None: temp=self.temp + self._append_pr(pr) + res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, + tools=self.tools, tool_choice=tool_choice,**kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools) + return res +``` + +
+ +The `__call__` method just passes the request along to the +[`Client`](https://claudette.answer.ai/core.html#client), but rather +than just passing in this one prompt, it appends it to the history and +passes it all along. As a result, we now have state! + +``` python +chat = Chat(model, sp=sp) +``` + +``` python +chat("I'm Jeremy") +chat("What's my name?") +``` + +Your name is Jeremy. + +
+ +- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat.use, chat.cost +``` + + (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) + +Let’s try out prefill too: + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +``` + +``` python +chat(q, prefill=pref) +``` + +According to Douglas Adams,42. But seriously: To find purpose, create +meaning, love, grow, and make a positive impact while experiencing +life’s journey. + +
+ +- id: `msg_011s2iLranbHFhdsVg8sz6eY` +- content: + `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
By default messages must be in user, assistant, user format. If this
isn’t followed (e.g. calling `chat()` without a user message) it will
error out:

``` python
try: chat()
except ValueError as e: print("Error:", e)
```

    Error: Prompt must be given after assistant completion, or use `self.cont_pr`.

Setting `cont_pr` allows a “default prompt” to be used when none is
given, usually to ask the model to continue.

``` python
chat.cont_pr = "keep going..."
chat()
```

To build meaningful relationships, pursue passions, learn continuously,
help others, appreciate beauty, overcome challenges, leave a positive
legacy, and find personal fulfillment through whatever brings you joy
and contributes to the greater good.

+ +- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` +- content: + `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +We can also use streaming: + +``` python +chat = Chat(model, sp=sp) +for o in chat("I'm Jeremy", stream=True): print(o, end='') +``` + + Hello Jeremy! Nice to meet you. How are you today? + +``` python +for o in chat(q, prefill=pref, stream=True): print(o, end='') +``` + + According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. + +### Chat tool use + +We automagically get streamlined tool use as well: + +``` python +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +r = chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +Let me calculate that sum for you. + +
+ +- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` +- content: + `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
Now we need to send this result back to Claude. Calling the object with
no parameters tells it to pass the pending tool result along:

``` python
chat()
```

604542 + 6458932 = 7063474

+ +- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` +- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +It should be correct, because it actually used our Python function to do +the addition. Let’s check: + +``` python +a+b +``` + + 7063474 + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +# Image is Cute_dog.jpg from Wikimedia +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +``` python +img = fn.read_bytes() +``` + +
+Exported source + +``` python +def _add_cache(d, cache): + "Optionally add cache control" + if cache: d["cache_control"] = {"type": "ephemeral"} + return d +``` + +
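For instance, here's its effect on a plain text part (illustrative):

``` python
_add_cache({"type": "text", "text": "hi"}, cache=True)
# {'type': 'text', 'text': 'hi', 'cache_control': {'type': 'ephemeral'}}
```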
Claude supports prompt caching when a `cache_control` attribute is added
to a content block, so we provide an option to enable that.

------------------------------------------------------------------------

source

### img_msg

> img_msg (data:bytes, cache=False)

*Convert image `data` into an encoded `dict`*

+Exported source + +``` python +def img_msg(data:bytes, cache=False)->dict: + "Convert image `data` into an encoded `dict`" + img = base64.b64encode(data).decode("utf-8") + mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] + r = dict(type="base64", media_type=mtype, data=img) + return _add_cache({"type": "image", "source": r}, cache) +``` + +
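The resulting structure looks roughly like this (base64 payload abridged; the exact prefix depends on the image file):

``` python
m = img_msg(img)
m['type'], m['source']['media_type'], m['source']['data'][:10] + '...'
# e.g. ('image', 'image/jpeg', '/9j/4AAQSk...')
```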
Anthropic have documented the particular `dict` structure that they
expect image data to be in, so we have a little function to create that
for us.

------------------------------------------------------------------------

source

### text_msg

> text_msg (s:str, cache=False)

*Convert `s` to a text message*

+Exported source + +``` python +def text_msg(s:str, cache=False)->dict: + "Convert `s` to a text message" + return _add_cache({"type": "text", "text": s}, cache) +``` + +
+ +A Claude message can be a list of image and text parts. So we’ve also +created a helper for making the text parts. + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +``` + +``` python +c([msg]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+
+Exported source + +``` python +def _mk_content(src, cache=False): + "Create appropriate content data structure based on type of content" + if isinstance(src,str): return text_msg(src, cache=cache) + if isinstance(src,bytes): return img_msg(src, cache=cache) + if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} + return _str_if_needed(src) +``` + +
There’s no need to manually choose the type of message, since we figure
that out from the type of the source data.

``` python
_mk_content('Hi')
```

    {'type': 'text', 'text': 'Hi'}

------------------------------------------------------------------------

source

### mk_msg

> mk_msg (content, role='user', cache=False, **kw)

*Helper to create a `dict` appropriate for a Claude message. `kw` are
added as key/value pairs to the message*

|  | Type | Default | Details |
|----|------|---------|---------|
| content |  |  | A string, list, or dict containing the contents of the message |
| role | str | user | Must be ‘user’ or ‘assistant’ |
| cache | bool | False |  |
| kw |  |  |  |

+Exported source + +``` python +def mk_msg(content, # A string, list, or dict containing the contents of the message + role='user', # Must be 'user' or 'assistant' + cache=False, + **kw): + "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" + if hasattr(content, 'content'): content,role = content.content,content.role + if isinstance(content, abc.Mapping): content=content.get('content', content) + if not isinstance(content, list): content=[content] + content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' + return dict2obj(dict(role=role, content=content, **kw), list_func=list) +``` + +
+ +``` python +mk_msg(['hi', 'there'], cache=True) +``` + +``` json +{ 'content': [ {'text': 'hi', 'type': 'text'}, + { 'cache_control': {'type': 'ephemeral'}, + 'text': 'there', + 'type': 'text'}], + 'role': 'user'} +``` + +``` python +m = mk_msg(['hi', 'there'], cache=True) +``` + +When we construct a message, we now use +[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to +create the appropriate parts. Since a dialog contains multiple messages, +and a message can contain multiple content parts, to pass a single +message with multiple parts we have to use a list containing a single +list: + +``` python +c([[img, q]]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +
+ +> **Note** +> +> As promised (much!) earlier, we’ve now finally completed our +> definition of +> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this +> version is the one we export to the Python module. + +
Some models, such as Haiku 3.5, unfortunately do not support image
inputs:

``` python
model = models[-1]; model
```

    'claude-3-5-haiku-20241022'

``` python
c = Client(model)
c([[img, q]])
```

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    Cell In[115], line 2
          1 c = Client(model)
    ----> 2 c([[img, q]])

    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
         18 msgs = self._precall(msgs, prefill, stop, kwargs)
    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022

## Third party providers

### Amazon Bedrock

These are Amazon’s current Claude models:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

> **Note**
>
> `anthropic` at version 0.34.2 seems not to install `boto3` as a
> dependency. You may need to `pip install boto3`, otherwise the
> creation of the
> [`Client`](https://claudette.answer.ai/core.html#client) below will
> fail.

Provided `boto3` is installed, we otherwise don’t need any extra code
to support Amazon Bedrock – we just have to set up the appropriate
client:

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

``` python
chat = Chat(cli=client)
```

``` python
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?

+ +- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
+ +### Google Vertex + +``` python +models_goog +``` + + ['claude-3-opus@20240229', + 'claude-3-5-sonnet-v2@20241022', + 'claude-3-sonnet@20240229', + 'claude-3-haiku@20240307'] + +``` python +from anthropic import AnthropicVertex +import google.auth +``` + +``` python +project_id = google.auth.default()[1] +region = "us-east5" +gv = AnthropicVertex(project_id=project_id, region=region) +client = Client(models_goog[-1], gv) +``` + +``` python +chat = Chat(cli=client) +``` + +``` python +chat("I'm Jeremy") +```
# claudette-pydantic



> Adds Pydantic support for
> [claudette](https://github.com/AnswerDotAI/claudette) through function
> calling

claudette_pydantic provides the `struct` method on claudette’s `Client`
and `Chat`.

`struct` provides a wrapper around `__call__`. Provide a Pydantic
`BaseModel` as schema, and the model will return an initialized
`BaseModel` object.

I’ve found Haiku to be quite reliable at even complicated schemas.

## Install

``` sh
pip install claudette-pydantic
```

## Getting Started

``` python
from claudette.core import *
import claudette_pydantic # patches claudette with `struct`
from pydantic import BaseModel, Field
from typing import Literal, Union, List
```

``` python
model = models[-1]
model
```

    'claude-3-haiku-20240307'

``` python
class Pet(BaseModel):
    "Create a new pet"
    name: str
    age: int
    owner: str = Field(default="NA", description="Owner name. Do not return if not given.")
    type: Literal['dog', 'cat', 'mouse']

c = Client(model)
print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet)))
print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet)))
```

    Pet(name='Mac', age=14, owner='NA', type='dog')
    Pet(name='juma', age=16, owner='Tom', type='cat')

## Going Deeper

I pulled this example from the [pydantic
docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions);
it has a list of discriminated unions, discriminated by `pet_type`. For
each object the model is required to return different things.

You should be able to use the full power of Pydantic here. I’ve found
that instructor for Claude fails on this example.

Each sub-`BaseModel` may also have a docstring describing its usage.
I’ve found prompting this way to be quite reliable.

``` python
class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Reptile(BaseModel):
    pet_type: Literal['lizard', 'dragon']
    scales: bool

# Dummy to show doc strings
class Create(BaseModel):
    "Pass as final member of the `pet` list to indicate success"
    pet_type: Literal['create']

class OwnersPets(BaseModel):
    """
    Information to gather for an Owner's pets
    """
    pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type')

chat = Chat(model)
pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows"
print(repr(chat.struct(OwnersPets, pr=pr)))
print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?")))
```

    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])

While `struct` uses tool use to enforce the schema, we save the `repr`
of the response in history to keep the user, assistant, user flow.

``` python
chat.h
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. 
I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]},
     {'role': 'user',
      'content': [{'type': 'text',
        'text': 'actually my dragon does have scales, can you change that for me?'}]},
     {'role': 'assistant',
      'content': [{'type': 'text',
        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}]

Alternatively you can use `struct` in a tool use flow with
`treat_as_output=False` (but this requires the next input to be an
assistant message):

``` python
chat.struct(OwnersPets, pr=pr, treat_as_output=False)
chat.h[-3:]
```

    [{'role': 'user',
      'content': [{'type': 'text',
        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
     {'role': 'assistant',
      'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]},
     {'role': 'user',
      'content': [{'type': 'tool_result',
        'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR',
        'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}]

(So we couldn’t prompt again here; the next input would have to be an
assistant message.)

### User Creation & few-shot examples

You can even add few-shot examples *for each input*:

``` python
class User(BaseModel):
    "User creation tool"
    age: int = Field(description='Age of the user')
    name: str = Field(title='Username')
    password: str = Field(
        json_schema_extra={
            'title': 'Password',
            'description': 'Password of the user',
            'examples': ['Monkey!123'],
        }
    )
print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples")))
```

    User(age=22, name='tom', password='Monkey!123')

The model uses the few-shot example, as asked for in the system prompt.

### You can find more examples in [nbs/examples](nbs/examples)

## Signature:

``` python
Client.struct(
    self: claudette.core.Client,
    msgs: list,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    **, # Client.__call__ kwargs...
) -> BaseModel
```

``` python
Chat.struct(
    self: claudette.core.Chat,
    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
    treat_as_output=True, # In chat history, tool is reflected
    **, # Chat.__call__ kwargs...
) -> BaseModel
```
diff --git a/llm/llms-ctx.txt b/llm/llms-ctx.txt new file mode 100644 index 0000000..4239b3e --- /dev/null +++ b/llm/llms-ctx.txt @@ -0,0 +1,1462 @@ +Things to remember when using Claudette: + +- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key +- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) +- The library provides both synchronous and asynchronous interfaces +- Use `Chat()` for maintaining conversation state and handling tool interactions +- When using tools, the library automatically handles the request/response loop +- Image support is built in but only available on compatible models (not Haiku)# claudette + + + +> **NB**: If you are reading this in GitHub’s readme, we recommend you +> instead read the much more nicely formatted [documentation +> format](https://claudette.answer.ai/) of this tutorial. + +*Claudette* is a wrapper for Anthropic’s [Python +SDK](https://github.com/anthropics/anthropic-sdk-python). + +The SDK works well, but it is quite low level – it leaves the developer +to do a lot of stuff manually. That’s a lot of extra work and +boilerplate! Claudette automates pretty much everything that can be +automated, whilst providing full control. Amongst the features provided: + +- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that + creates stateful dialogs +- Support for *prefill*, which tells Claude what to use as the first few + words of its response +- Convenient image support +- Simple and convenient support for Claude’s new Tool Use API. + +You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the +key provided to you by Anthropic in order to use this library. + +Note that this library is the first ever “literate nbdev” project. That +means that the actual source code for the library is a rendered Jupyter +Notebook which includes callout notes and tips, HTML tables and images, +detailed explanations, and teaches *how* and *why* the code is written +the way it is. Even if you’ve never used the Anthropic Python SDK or +Claude API before, you should be able to read the source code. Click +[Claudette’s Source](https://claudette.answer.ai/core.html) to read it, +or clone the git repo and execute the notebook yourself to see every +step of the creation process in action. The tutorial below includes +links to API details which will take you to relevant parts of the +source. The reason this project is a new kind of literal program is +because we take seriously Knuth’s call to action, that we have a “*moral +commitment*” to never write an “*illiterate program*” – and so we have a +commitment to making literate programming and easy and pleasant +experience. (For more on this, see [this +talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.) + +> “*Let us change our traditional attitude to the construction of +> programs: Instead of imagining that our main task is to instruct a +> **computer** what to do, let us concentrate rather on explaining to +> **human beings** what we want a computer to do.*” Donald E. Knuth, +> [Literate +> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf) +> (1984) + +## Install + +``` sh +pip install claudette +``` + +## Getting started + +Anthropic’s Python SDK will automatically be installed with Claudette, +if you don’t already have it. 
+ +``` python +import os +# os.environ['ANTHROPIC_LOG'] = 'debug' +``` + +To print every HTTP request and response in full, uncomment the above +line. + +``` python +from claudette import * +``` + +Claudette only exports the symbols that are needed to use the library, +so you can use `import *` to import them. Alternatively, just use: + +``` python +import claudette +``` + +…and then add the prefix `claudette.` to any usages of the module. + +Claudette provides `models`, which is a list of models currently +available from the SDK. + +``` python +models +``` + + ['claude-3-opus-20240229', + 'claude-3-5-sonnet-20241022', + 'claude-3-haiku-20240307'] + +For these examples, we’ll use Sonnet 3.5, since it’s awesome! + +``` python +model = models[1] +``` + +## Chat + +The main interface to Claudette is the +[`Chat`](https://claudette.answer.ai/core.html#chat) class, which +provides a stateful interface to Claude: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +chat("I'm Jeremy") +``` + +Hello Jeremy, nice to meet you. + +
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
+ +The response records the a cache has been created using these input +tokens: + +``` python +print(r.usage) +``` + + Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0) + +We can now ask a followup question in this chat: + +``` python +r = chat('How does it make tool use more ergonomic?') +r +``` + +According to the README, Claudette makes tool use more ergonomic in +several ways: + +1. It uses docments to make Python function definitions more + user-friendly - each parameter and return value should have a type + and description + +2. It handles the tool calling process automatically - when Claude + returns a tool_use message, Claudette manages calling the tool with + the provided parameters behind the scenes + +3. It provides a `toolloop` method that can handle multiple tool calls + in a single step to solve more complex problems + +4. It allows you to pass a list of tools to the Chat constructor and + optionally force Claude to always use a specific tool via + `tool_choice` + +Here’s a simple example from the README: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b + +chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums') +``` + +This makes it much simpler compared to manually handling all the tool +use logic that would be required with the base SDK. + +
+ +- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` +- content: + `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` + +
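
Why this matters: cached tokens are billed differently from regular input
tokens. As a back-of-the-envelope sketch – the 1.25x write and 0.1x read
multipliers below are Anthropic's published rates at the time of writing,
so treat them as an assumption to verify against current pricing:

``` python
def effective_input_tokens(u, write_mult=1.25, read_mult=0.1):
    "Billing-weighted input token count for a `Usage`-like object (sketch)."
    return (u.input_tokens
            + write_mult*u.cache_creation_input_tokens
            + read_mult*u.cache_read_input_tokens)

effective_input_tokens(r.usage)  # far less than the 7000+ raw context tokens
```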
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
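
To make clear what Claudette automates here: without it, we would have to
find the `tool_use` block ourselves, call the matching Python function, and
append a `tool_result` message to the dialog. A minimal sketch of that
manual round trip, using the `r` and `sums` from above (the `tool_result`
dict mirrors the format visible in the traces later in this document):

``` python
tub = next(o for o in r.content if o.type=='tool_use')  # Claude's tool request
res = {'sums': sums}[tub.name](**tub.input)             # call the named tool
tool_msg = {'role': 'user',
            'content': [{'type': 'tool_result',
                         'tool_use_id': tub.id,
                         'content': str(res)}]}         # what gets sent back
```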
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.

``` python
chat.use
```

    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int, # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing a
`trace_func` (here, `print`) we can see each response from Claude in the
process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
chat.toolloop(pr, trace_func=print)
```

    Finding the sum of 604542 and 6458932
    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
    Finding the product of 7063474 and 2
    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]

The final result is 14,126,948.

<details>
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
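
Since every tool call is a full round trip to the model, it can be worth
bounding the loop explicitly. The `max_steps` parameter (it defaults to 10,
as documented with `Chat.toolloop` later in this document) caps the number
of tool requests – an illustrative example:

``` python
# If Claude still wants to call tools after 3 steps, the loop exits anyway.
chat = Chat(model, sp=sp, tools=[sums,mults])
chat.toolloop(f'Calculate ({a}+{b})*2', max_steps=3)
```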
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
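
Under the hood, raw image bytes must be base64 encoded into Anthropic's
image content format before sending. Claudette's `img_msg` helper does this
for you; the sketch below is a simplified illustration (the filename-based
media type guess is an assumption for demonstration):

``` python
import base64, mimetypes

def img_block(data: bytes, fname='samples/puppy.jpg'):
    "Sketch: wrap raw image bytes in Anthropic's base64 image block format."
    mt = mimetypes.guess_type(fname)[0] or 'image/jpeg'
    return {'type': 'image',
            'source': {'type': 'base64', 'media_type': mt,
                       'data': base64.b64encode(data).decode()}}
```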
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +

Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.
(For large images, using prompt caching might be a good idea.)

``` python
chat.use
```

    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964

## Other model providers

You can also use 3rd party providers of Anthropic models, as shown here.

### Amazon Bedrock

These are the models available through Bedrock:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']

To use them, call `AnthropicBedrock` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicBedrock
```

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
object passing this client to the `cli` parameter – and from then on,
everything is identical to the previous examples.

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?

<details>
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +

### Google Vertex

These are the models available through Vertex:

``` python
models_goog
```

    ['claude-3-opus@20240229',
     'claude-3-5-sonnet-v2@20241022',
     'claude-3-sonnet@20240229',
     'claude-3-haiku@20240307']

To use them, call `AnthropicVertex` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):

``` python
from anthropic import AnthropicVertex
import google.auth
```

``` python
project_id = google.auth.default()[1]
gv = AnthropicVertex(project_id=project_id, region="us-east5")
client = Client(models_goog[-1], gv)
```

``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```

## Extensions

- [Pydantic Structured
  Output](https://github.com/tom-pollak/claudette-pydantic)
# Tool loop



``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We’ll start by defining the same mock customer/order data as in
Anthropic’s example, plus create an entity relationship between customers
and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We’ll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We’re now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We’ll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?

``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let’s write a function to handle this
for us:

------------------------------------------------------------------------

source

### Chat.toolloop

> Chat.toolloop (pr, max_steps=10,
>                trace_func:Optional[<built-in function callable>]=None,
>                cont_func:Optional[<built-in function callable>]=<function noop>,
>                temp=None, maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None)

*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*
|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          |          |         | Prompt to pass to Claude                                    |
| max_steps   | int      | 10      | Maximum number of tool requests to loop through             |
| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)              |
| cont_func   | Optional | noop    | Function that stops loop if returns False                   |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
+ +
+Exported source + +``` python +@patch +@delegates(Chat.__call__) +def toolloop(self:Chat, + pr, # Prompt to pass to Claude + max_steps=10, # Maximum number of tool requests to loop through + trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) + cont_func:Optional[callable]=noop, # Function that stops loop if returns False + **kwargs): + "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" + n_msgs = len(self.h) + r = self(pr, **kwargs) + for i in range(max_steps): + if r.stop_reason!='tool_use': break + if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) + r = self(**kwargs) + if not (cont_func or noop)(self.h[-2]): break + if trace_func: trace_func(self.h[n_msgs:]) + return r +``` + +
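
Note from the source that `cont_func` is passed `self.h[-2]` – the most
recent `tool_result` message – so it can inspect the tool output and end
the loop early. A hedged sketch (the `'#ERROR#'` marker is made up for
illustration, and `nested_idx` is fastcore's helper, used the same way by
the code interpreter example below):

``` python
def stop_on_error(msg):
    "Return False (ending the loop) when a tool reports our error marker."
    return nested_idx(msg, 'content', 'content') != '#ERROR#'

# chat.toolloop(pr, cont_func=stop_on_error)
```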
+ +We’ll start by re-running our previous request - we shouldn’t have to +manually pass back the `tool_use` message any more: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Can you tell me the email address for customer C1?') +r +``` + + - Retrieving customer C1 + +The email address for customer C1 is john@example.com. + +
+ +- id: `msg_01Fm2CY76dNeWief4kUW6r71` +- content: + `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Let’s see if it can handle the multi-stage process now – we’ll add +`trace_func=print` to see each stage of the process: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) +r +``` + + - Retrieving customer C1 + [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] + - Cancelling order O1 + [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] + - Cancelling order O2 + [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] + [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] + +I’ve successfully cancelled both orders O1 and O2 for customer C1. +Please let me know if you need anything else! + +
+ +- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` +- content: + `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +OK Claude thinks the orders were cancelled – let’s check one: + +``` python +chat.toolloop('What is the status of order O2?') +``` + + - Retrieving order O2 + +The status of order O2 is now ‘Cancelled’ since I successfully cancelled +that order earlier. + +
+ +- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` +- content: + `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
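
Since the tools mutate the in-memory mock data directly, we can also
confirm the cancellations without going through Claude at all:

``` python
# cancel_order updated the shared `orders` dicts in place:
[(o['id'], o['status']) for o in customers['C1']['orders']]
# [('O1', 'Cancelled'), ('O2', 'Cancelled')]
```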
+ +## Code interpreter + +Here is an example of using `toolloop` to implement a simple code +interpreter with additional tools. + +``` python +from toolslm.shell import get_shell +from fastcore.meta import delegates +import traceback +``` + +``` python +@delegates() +class CodeChat(Chat): + imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses' + def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs): + super().__init__(model=model, **kwargs) + self.ask = ask + self.tools.append(self.run_cell) + self.shell = get_shell() + self.shell.run_cell('import '+self.imps) +``` + +We have one additional parameter to creating a `CodeChat` beyond what we +pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is +`ask` – if that’s `True`, we’ll prompt the user before running code. + +``` python +@patch +def run_cell( + self:CodeChat, + code:str, # Code to execute in persistent IPython session +): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute + "Asks user for permission, and if provided, executes python `code` using persistent IPython session." + confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n' + if self.ask and input(confirm): return '#DECLINED#' + try: res = self.shell.run_cell(code) + except Exception as e: return traceback.format_exc() + return res.stdout if res.result is None else res.result +``` + +We just pass along requests to run code to the shell’s implementation. +Claude often prints results instead of just using the last expression, +so we capture stdout in those cases. + +``` python +sp = f'''You are a knowledgable assistant. Do not use tools unless needed. +Don't do complex calculations yourself -- use code for them. +The following modules are pre-imported for `run_cell` automatically: + +{CodeChat.imps} + +Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls. + +If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.''' +``` + +``` python +def get_user(ignored:str='' # Unused parameter + ): # Username of current user + "Get the username of the user running this session" + print("Looking up username") + return 'Jeremy' +``` + +In order to test out multi-stage tool use, we create a mock function +that Claude can call to get the current username. + +``` python +model = models[1] +``` + +``` python +chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3) +``` + +Claude gets confused sometimes about how tools work, so we use examples +to remind it: + +``` python +chat.h = [ + 'Calculate the square root of `10332`', 'math.sqrt(10332)', + '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.' 
+] +``` + +Providing a callable to toolloop’s `trace_func` lets us print out +information during the loop: + +``` python +def _show_cts(h): + for r in h: + for o in r.get('content'): + if hasattr(o,'text'): print(o.text) + nm = getattr(o, 'name', None) + if nm=='run_cell': print(o.input['code']) + elif nm: print(f'{o.name}({o.input})') +``` + +…and toolloop’s `cont_func` callable let’s us provide a function which, +if it returns `False`, stops the loop: + +``` python +def _cont_decline(c): + return nested_idx(c, 'content', 'content') != '#DECLINED#' +``` + +Now we can try our code interpreter. We start by asking for a function +to be created, which we’ll use in the next prompt to test that the +interpreter is persistent. + +``` python +pr = '''Create a 1-line function `checksum` for a string `s`, +that multiplies together the ascii values of each character in `s` using `reduce`.''' +chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline) +``` + + Press Enter to execute, or enter "n" to skip? + ``` + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + ``` + + Create a 1-line function `checksum` for a string `s`, + that multiplies together the ascii values of each character in `s` using `reduce`. + Let me help you create that function using `reduce` and `functools`. + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + The function has been created. Let me explain how it works: + 1. It takes a string `s` as input + 2. Uses `functools.reduce` to multiply together all ASCII values + 3. `ord(y)` gets the ASCII value of each character + 4. The initial value is 1 (the third parameter to reduce) + 5. The lambda function multiplies the accumulator (x) with each new ASCII value + + You can test it with any string. For example, you could try `checksum("hello")` to see it in action. + +The function has been created. Let me explain how it works: 1. It takes +a string `s` as input 2. Uses `functools.reduce` to multiply together +all ASCII values 3. `ord(y)` gets the ASCII value of each character 4. +The initial value is 1 (the third parameter to reduce) 5. The lambda +function multiplies the accumulator (x) with each new ASCII value + +You can test it with any string. For example, you could try +`checksum("hello")` to see it in action. + +
+ +- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` +- content: + `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +By asking for a calculation to be done on the username, we force it to +use multiple steps: + +``` python +pr = 'Use it to get the checksum of the username of this session.' +chat.toolloop(pr, trace_func=_show_cts) +``` + + Looking up username + Use it to get the checksum of the username of this session. + I'll first get the username using `get_user` and then apply our `checksum` function to it. + get_user({'ignored': ''}) + Press Enter to execute, or enter "n" to skip? + ``` + print(checksum("Jeremy")) + ``` + + Now I'll calculate the checksum of "Jeremy": + print(checksum("Jeremy")) + The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". + +The checksum of the username “Jeremy” is 1134987783204. This was +calculated by multiplying together the ASCII values of each character in +“Jeremy”. + +
+ +- id: `msg_01UXvtcLzzykZpnQUT35v4uD` +- content: + `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
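
We can sanity-check that figure locally, since it is the same reduction
the model ran in the IPython session:

``` python
import functools
# Multiply the ASCII codes of each character together, starting from 1.
assert functools.reduce(lambda x, y: x * ord(y), "Jeremy", 1) == 1134987783204
```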
# claudette Module Documentation + +## claudette.asink + +- `class AsyncClient` + - `def __init__(self, model, cli, log)` + Async Anthropic messages client. + + +- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` + Make an async call to Claude. + +- `@delegates() class AsyncChat` + - `def __init__(self, model, cli, **kwargs)` + Anthropic async chat client. + + +## claudette.core + +- `def find_block(r, blk_type)` + Find the first block of type `blk_type` in `r.content`. + +- `def contents(r)` + Helper to get the contents from Claude response `r`. + +- `def usage(inp, out, cache_create, cache_read)` + Slightly more concise version of `Usage`. + +- `@patch def __add__(self, b)` + Add together each of `input_tokens` and `output_tokens` + +- `def mk_msgs(msgs, **kw)` + Helper to set 'assistant' role on alternate messages. + +- `class Client` + - `def __init__(self, model, cli, log)` + Basic Anthropic messages client. + + +- `def mk_tool_choice(choose)` + Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise + +- `def mk_funcres(tuid, res)` + Given tool use id and the tool result, create a tool_result response. + +- `def mk_toolres(r, ns, obj)` + Create a `tool_result` message from response `r`. + +- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` + Make a call to Claude. + +- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` + Return the value of all tool calls (generally used for structured outputs) + +- `class Chat` + - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` + Anthropic chat client. + + - `@property def use(self)` + +- `def img_msg(data, cache)` + Convert image `data` into an encoded `dict` + +- `def text_msg(s, cache)` + Convert `s` to a text message + +- `def mk_msg(content, role, cache, **kw)` + Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message + +## claudette.toolloop + +- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` + Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages +
diff --git a/llm/llms.txt b/llm/llms.txt index 6c0b397..34be56f 100644 --- a/llm/llms.txt +++ b/llm/llms.txt @@ -18,7 +18,7 @@ Things to remember when using Claudette: ## API -- [API List](https://docs.fastht.ml/apilist.txt): A succint list of all functions and methods in claudette. +- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/b30f08e3549554f53b06fbd9bf03a0c961de3023/llm/apilist.txt): A succint list of all functions and methods in claudette. ## Optional diff --git a/tools/refresh_llm_docs.sh b/tools/refresh_llm_docs.sh new file mode 100755 index 0000000..9be88e7 --- /dev/null +++ b/tools/refresh_llm_docs.sh @@ -0,0 +1,12 @@ +#!/bin/bash + +echo "Refreshing LLM documentation files..." + +echo "Generating API list documentation..." +pysym2md claudette --output_file llm/apilist.txt > llm/apilist.txt + +echo "Generating context files..." +llms_txt2ctx llm/llms.txt > llm/llms-ctx.txt +llms_txt2ctx llm/llms.txt --optional True > llm/llms-ctx-full.txt + +echo "✅ Documentation refresh complete!" From bf36eeef43269f7aa861bc98b65b8f72c831d4e3 Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 23:19:56 -0800 Subject: [PATCH 5/9] trim down number of tokens --- llm/llms-ctx-full.txt | 2892 +------------------------ llm/llms-ctx.txt | 522 +---- llm/llms.txt | 5 +- llms-ctx-full.txt | 4794 ----------------------------------------- llms-ctx.txt | 4607 --------------------------------------- 5 files changed, 78 insertions(+), 12742 deletions(-) delete mode 100644 llms-ctx-full.txt delete mode 100644 llms-ctx.txt diff --git a/llm/llms-ctx-full.txt b/llm/llms-ctx-full.txt index ff98bee..4ecb0ef 100644 --- a/llm/llms-ctx-full.txt +++ b/llm/llms-ctx-full.txt @@ -866,7 +866,80 @@ chat("I'm Jeremy") ## Extensions - [Pydantic Structured - Ouput](https://github.com/tom-pollak/claudette-pydantic)# Tool loop + Ouput](https://github.com/tom-pollak/claudette-pydantic)# claudette Module Documentation + +## claudette.asink + +- `class AsyncClient` + - `def __init__(self, model, cli, log)` + Async Anthropic messages client. + + +- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` + Make an async call to Claude. + +- `@delegates() class AsyncChat` + - `def __init__(self, model, cli, **kwargs)` + Anthropic async chat client. + + +## claudette.core + +- `def find_block(r, blk_type)` + Find the first block of type `blk_type` in `r.content`. + +- `def contents(r)` + Helper to get the contents from Claude response `r`. + +- `def usage(inp, out, cache_create, cache_read)` + Slightly more concise version of `Usage`. + +- `@patch def __add__(self, b)` + Add together each of `input_tokens` and `output_tokens` + +- `def mk_msgs(msgs, **kw)` + Helper to set 'assistant' role on alternate messages. + +- `class Client` + - `def __init__(self, model, cli, log)` + Basic Anthropic messages client. + + +- `def mk_tool_choice(choose)` + Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise + +- `def mk_funcres(tuid, res)` + Given tool use id and the tool result, create a tool_result response. + +- `def mk_toolres(r, ns, obj)` + Create a `tool_result` message from response `r`. + +- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` + Make a call to Claude. 
+ +- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` + Return the value of all tool calls (generally used for structured outputs) + +- `class Chat` + - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` + Anthropic chat client. + + - `@property def use(self)` + +- `def img_msg(data, cache)` + Convert image `data` into an encoded `dict` + +- `def text_msg(s, cache)` + Convert `s` to a text message + +- `def mk_msg(content, role, cache, **kw)` + Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message + +## claudette.toolloop + +- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` + Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages +# Tool loop @@ -1386,80 +1459,7 @@ calculated by multiplying together the ASCII values of each character in - usage: `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` -# claudette Module Documentation - -## claudette.asink - -- `class AsyncClient` - - `def __init__(self, model, cli, log)` - Async Anthropic messages client. - - -- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` - Make an async call to Claude. - -- `@delegates() class AsyncChat` - - `def __init__(self, model, cli, **kwargs)` - Anthropic async chat client. - - -## claudette.core - -- `def find_block(r, blk_type)` - Find the first block of type `blk_type` in `r.content`. - -- `def contents(r)` - Helper to get the contents from Claude response `r`. - -- `def usage(inp, out, cache_create, cache_read)` - Slightly more concise version of `Usage`. - -- `@patch def __add__(self, b)` - Add together each of `input_tokens` and `output_tokens` - -- `def mk_msgs(msgs, **kw)` - Helper to set 'assistant' role on alternate messages. - -- `class Client` - - `def __init__(self, model, cli, log)` - Basic Anthropic messages client. - - -- `def mk_tool_choice(choose)` - Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise - -- `def mk_funcres(tuid, res)` - Given tool use id and the tool result, create a tool_result response. - -- `def mk_toolres(r, ns, obj)` - Create a `tool_result` message from response `r`. - -- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` - Make a call to Claude. - -- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` - Return the value of all tool calls (generally used for structured outputs) - -- `class Chat` - - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` - Anthropic chat client. - - - `@property def use(self)` - -- `def img_msg(data, cache)` - Convert image `data` into an encoded `dict` - -- `def text_msg(s, cache)` - Convert `s` to a text message - -- `def mk_msg(content, role, cache, **kw)` - Helper to create a `dict` appropriate for a Claude message. 
`kw` are added as key/value pairs to the message - -## claudette.toolloop - -- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` - Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages -# The async version +# The async version @@ -2124,2744 +2124,4 @@ blooming in the background behind the adorable puppy in the foreground. - usage: `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` -# Claudette’s source - - - -This is the ‘literate’ source code for Claudette. You can view the fully -rendered version of the notebook -[here](https://claudette.answer.ai/core.html), or you can clone the git -repo and run the [interactive -notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb) -in Jupyter. The notebook is converted the [Python module -claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py) -using [nbdev](https://nbdev.fast.ai/). The goal of this source code is -to both create the Python module, and also to teach the reader *how* it -is created, without assuming much existing knowledge about Claude’s API. - -Most of the time you’ll see that we write some source code *first*, and -then a description or discussion of it *afterwards*. - -## Setup - -``` python -import os -# os.environ['ANTHROPIC_LOG'] = 'debug' -``` - -To print every HTTP request and response in full, uncomment the above -line. This functionality is provided by Anthropic’s SDK. - -
- -> **Tip** -> -> If you’re reading the rendered version of this notebook, you’ll see an -> “Exported source” collapsible widget below. If you’re reading the -> source notebook directly, you’ll see `#| exports` at the top of the -> cell. These show that this piece of code will be exported into the -> python module that this notebook creates. No other code will be -> included – any other code in this notebook is just for demonstration, -> documentation, and testing. -> -> You can toggle expanding/collapsing the source code of all exported -> sections by using the ` Code` menu in the top right of the rendered -> notebook page. - -
- -
-Exported source - -``` python -model_types = { - # Anthropic - 'claude-3-opus-20240229': 'opus', - 'claude-3-5-sonnet-20241022': 'sonnet', - 'claude-3-haiku-20240307': 'haiku-3', - 'claude-3-5-haiku-20241022': 'haiku-3-5', - # AWS - 'anthropic.claude-3-opus-20240229-v1:0': 'opus', - 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', - 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', - 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', - # Google - 'claude-3-opus@20240229': 'opus', - 'claude-3-5-sonnet-v2@20241022': 'sonnet', - 'claude-3-sonnet@20240229': 'sonnet', - 'claude-3-haiku@20240307': 'haiku', -} - -all_models = list(model_types) -``` - -
-
-Exported source - -``` python -text_only_models = ('claude-3-5-haiku-20241022',) -``` - -

These are the current versions and
[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
models at the time of writing.

``` python
model = models[1]; model
```

    'claude-3-5-sonnet-20241022'

For examples, we’ll use Sonnet 3.5, since it’s awesome.

## Anthropic SDK

``` python
cli = Anthropic()
```

This is what Anthropic’s SDK provides for interacting with Claude from
Python. To use it, pass it a list of *messages*, with *content* and a
*role*. The roles should alternate between *user* and *assistant*.

<div>
- -> **Tip** -> -> After the code below you’ll see an indented section with an orange -> vertical line on the left. This is used to show the *result* of -> running the code above. Because the code is running in a Jupyter -> Notebook, we don’t have to use `print` to display results, we can just -> type the expression directly, as we do with `r` here. - -
- -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -

### Formatting output

That output is pretty long and hard to read, so let’s clean it up. We’ll
start by pulling out the `Content` part of the message. To do that,
we’re going to write our first function which will be included in the
`claudette/core.py` module.

<div>

> **Tip**
>
> This is the first exported public function or class we’re creating
> (the previous export was of a variable). In the rendered version of
> the notebook for these you’ll see 4 things, in this order (unless the
> symbol starts with a single `_`, which indicates it’s *private*):
>
> - The signature (with the symbol name as a heading, with a horizontal
>   rule above)
> - A table of parameter docs (if provided)
> - The doc string (in italics).
> - The source code (in a collapsible “Exported source” block)
>
> After that, we generally provide a bit more detail on what we’ve
> created, and why, along with a sample usage.

</div>

------------------------------------------------------------------------

source

### find_block

> find_block (r:collections.abc.Mapping,
>             blk_type:type=<class 'anthropic.types.text_block.TextBlock'>)

*Find the first block of type `blk_type` in `r.content`.*
|          | Type    | Default   | Details                   |
|----------|---------|-----------|---------------------------|
| r        | Mapping |           | The message to look in    |
| blk_type | type    | TextBlock | The type of block to find |
- -
-Exported source - -``` python -def find_block(r:abc.Mapping, # The message to look in - blk_type:type=TextBlock # The type of block to find - ): - "Find the first block of type `blk_type` in `r.content`." - return first(o for o in r.content if isinstance(o,blk_type)) -``` - -
- -This makes it easier to grab the needed parts of Claude’s responses, -which can include multiple pieces of content. By default, we look for -the first text block. That will generally have the content we want to -display. - -``` python -find_block(r) -``` - - TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') - ------------------------------------------------------------------------- - -source - -### contents - -> contents (r) - -*Helper to get the contents from Claude response `r`.* - -
-Exported source - -``` python -def contents(r): - "Helper to get the contents from Claude response `r`." - blk = find_block(r) - if not blk and r.content: blk = r.content[0] - return blk.text.strip() if hasattr(blk,'text') else str(blk) -``` - -
- -For display purposes, we often just want to show the text itself. - -``` python -contents(r) -``` - - "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" - -

Exported source

``` python
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>
"""
```

</details>
- -Jupyter looks for a `_repr_markdown_` method in displayed objects; we -add this in order to display just the content text, and collapse full -details into a hideable section. Note that `patch` is from -[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) -functionality in an existing class. We pass the class(es) that we want -to patch as type annotations to `self`. In this case, `_repr_markdown_` -is being added to Anthropic’s `Message` class, so when we display the -message now we just see the contents, and the details are hidden away in -a collapsible details block. - -``` python -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -
- -One key part of the response is the -[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells -us how many tokens we used by returning a `Usage` object. - -We’ll add some helpers to make things a bit cleaner for creating and -formatting these objects. - -``` python -r.usage -``` - - In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 - ------------------------------------------------------------------------- - -source - -### usage - -> usage (inp=0, out=0, cache_create=0, cache_read=0) - -*Slightly more concise version of `Usage`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|              | Type | Default | Details               |
|--------------|------|---------|-----------------------|
| inp          | int  | 0       | input tokens          |
| out          | int  | 0       | Output tokens         |
| cache_create | int  | 0       | Cache creation tokens |
| cache_read   | int  | 0       | Cache read tokens     |
- -
-Exported source - -``` python -def usage(inp=0, # input tokens - out=0, # Output tokens - cache_create=0, # Cache creation tokens - cache_read=0 # Cache read tokens - ): - "Slightly more concise version of `Usage`." - return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) -``` - -
- -The constructor provided by Anthropic is rather verbose, so we clean it -up a bit, using a lowercase version of the name. - -``` python -usage(5) -``` - - In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 - ------------------------------------------------------------------------- - -source - -### Usage.total - -> Usage.total () - -
-Exported source - -``` python -@patch(as_prop=True) -def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) -``` - -
- -Adding a `total` property to `Usage` makes it easier to see how many -tokens we’ve used up altogether. - -``` python -usage(5,1).total -``` - - 6 - ------------------------------------------------------------------------- - -source - -### Usage.\_\_repr\_\_ - -> Usage.__repr__ () - -*Return repr(self).* - -
-Exported source - -``` python -@patch -def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' -``` - -
- -In python, patching `__repr__` lets us change how an object is -displayed. (More generally, methods starting and ending in `__` in -Python are called `dunder` methods, and have some `magic` behavior – -such as, in this case, changing how an object is displayed.) - -``` python -usage(5) -``` - - In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 - ------------------------------------------------------------------------- - -source - -### Usage.\_\_add\_\_ - -> Usage.__add__ (b) - -*Add together each of `input_tokens` and `output_tokens`* - -
-Exported source - -``` python -@patch -def __add__(self:Usage, b): - "Add together each of `input_tokens` and `output_tokens`" - return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) -``` - -
- -And, patching `__add__` lets `+` work on a `Usage` object. - -``` python -r.usage+r.usage -``` - - In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 - -### Creating messages - -Creating correctly formatted `dict`s from scratch every time isn’t very -handy, so next up we’ll add helpers for this. - -``` python -def mk_msg(content, role='user', **kw): - return dict(role=role, content=content, **kw) -``` - -We make things a bit more convenient by writing a function to create a -message for us. - -
- -> **Note** -> -> You may have noticed that we didn’t export the -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function -> (i.e. there’s no “Exported source” block around it). That’s because -> we’ll need more functionality in our final version than this version -> has – so we’ll be defining a more complete version later. Rather than -> refactoring/editing in notebooks, often it’s helpful to simply -> gradually build up complexity by re-defining a symbol. - -
- -``` python -prompt = "I'm Jeremy" -m = mk_msg(prompt) -m -``` - - {'role': 'user', 'content': "I'm Jeremy"} - -``` python -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? - -
- -- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` -- content: - `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 24}` - -
- ------------------------------------------------------------------------- - -source - -### mk_msgs - -> mk_msgs (msgs:list, **kw) - -*Helper to set ‘assistant’ role on alternate messages.* - -
-Exported source - -``` python -def mk_msgs(msgs:list, **kw): - "Helper to set 'assistant' role on alternate messages." - if isinstance(msgs,str): msgs=[msgs] - return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] -``` - -

LLMs, including Claude, don’t actually have state, but instead dialogs
are created by passing back all previous prompts and responses every
time. With Claude, they always alternate *user* and *assistant*.
Therefore we create a function to make it easier to build up these
dialog lists.

But to do so, we need to update
[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
not only pass a `str` as `content`, but can also pass a `dict` or an
object with a `content` attr, since these are both types of message that
Claude can create. To do so, we check for a `content` key or attr, and
use it if found.

<details>
-Exported source - -``` python -def _str_if_needed(o): - if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o - return str(o) -``` - -
- -``` python -def mk_msg(content, role='user', **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content['content'] - return dict(role=role, content=_str_if_needed(content), **kw) -``` - -``` python -msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) -msgs -``` - - [{'role': 'user', 'content': "I'm Jeremy"}, - {'role': 'assistant', - 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, - {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] - -Now, if we pass this list of messages to Claude, the model treats it as -a conversation to respond to. - -``` python -cli.messages.create(messages=msgs, model=model, max_tokens=200) -``` - -You just told me your name is Jeremy. - -
- -- id: `msg_01KZski1R3z1iGjF6XsBb9dM` -- content: - `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 46, 'output_tokens': 13}` - -
- -## Client - ------------------------------------------------------------------------- - -source - -### Client - -> Client (model, cli=None, log=False) - -*Basic Anthropic messages client.* - -
-Exported source - -``` python -class Client: - def __init__(self, model, cli=None, log=False): - "Basic Anthropic messages client." - self.model,self.use = model,usage() - self.text_only = model in text_only_models - self.log = [] if log else None - self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) -``` - -
- -We’ll create a simple
-[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
-which tracks usage and stores the model to use. We don’t add any methods
-right away – instead we’ll use `patch` for that so we can add and
-document them incrementally.
-
-``` python
-c = Client(model)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-
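-Throughout, we use fastcore’s `patch` decorator to attach methods to
-`Client` after the class is defined. As a toy illustration of the
-pattern (the `greet` method below is hypothetical; it isn’t part of
-Claudette):
-
-``` python
-from fastcore.basics import patch
-
-@patch
-def greet(self:Client):
-    "Hypothetical method, attached to the existing `Client` class"
-    return f"Client for {self.model}"
-
-Client(model).greet()  # 'Client for claude-3-5-sonnet-20241022'
-```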
-Exported source - -``` python -@patch -def _r(self:Client, r:Message, prefill=''): - "Store the result of the message and accrue total usage." - if prefill: - blk = find_block(r) - blk.text = prefill + (blk.text or '') - self.result = r - self.use += r.usage - self.stop_reason = r.stop_reason - self.stop_sequence = r.stop_sequence - return r -``` - -
- -We use a `_` prefix on private methods, but we document them here in the -interests of literate source code. - -`_r` will be used each time we get a new result, to track usage and also -to keep the result available for later. - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 - -Whereas OpenAI’s models use a `stream` parameter for streaming, -Anthropic’s use a separate method. We implement Anthropic’s approach in -a private method, and then use a `stream` parameter in `__call__` for -consistency: - -
-Exported source
-
-``` python
-@patch
-def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
-    self._r(final, prefill)
-    if self.log is not None: self.log.append({
-        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
-        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
-    })
-    return self.result
-```
-
-
-Exported source - -``` python -@patch -def _stream(self:Client, msgs:list, prefill='', **kwargs): - with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield(prefill) - yield from s.text_stream - self._log(s.get_final_message(), prefill, msgs, **kwargs) -``` - -
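-For reference, here’s a minimal sketch of the underlying SDK streaming
-call that `_stream` wraps (reusing `cli` and `m` from earlier; this is
-the context-manager interface Anthropic’s SDK documents, not new
-Claudette functionality):
-
-``` python
-# Stream a response directly with the SDK's context manager
-with cli.messages.stream(model=model, messages=[m], max_tokens=100) as s:
-    for txt in s.text_stream: print(txt, end='')
-```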
- -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. However Claude doesn’t actually repeat that in the -response, so for convenience we add it. - -
-Exported source - -``` python -@patch -def _precall(self:Client, msgs, prefill, stop, kwargs): - pref = [prefill.strip()] if prefill else [] - if not isinstance(msgs,list): msgs = [msgs] - if stop is not None: - if not isinstance(stop, (list)): stop = [stop] - kwargs["stop_sequences"] = stop - msgs = mk_msgs(msgs+pref) - return msgs -``` - -
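-To see what `_precall` actually builds when a prefill is supplied,
-here’s a small sketch (not a cell from the original notebook): the
-prompt becomes a `user` message, and the prefill is appended as a
-trailing `assistant` message.
-
-``` python
-# Sketched output shown in the comment below
-c._precall("Pick a number", prefill="My number is", stop=None, kwargs={})
-# -> [{'role': 'user', 'content': 'Pick a number'},
-#     {'role': 'assistant', 'content': 'My number is'}]
-```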
- -``` python
-@patch
-@delegates(messages.Messages.create)
-def __call__(self:Client,
-             msgs:list, # List of messages in the dialog
-             sp='', # The system prompt
-             temp=0, # Temperature
-             maxtok=4096, # Maximum tokens
-             prefill='', # Optional prefill to pass to Claude as start of its response
-             stream:bool=False, # Stream response?
-             stop=None, # Stop sequence
-             **kwargs):
-    "Make a call to Claude."
-    msgs = self._precall(msgs, prefill, stop, kwargs)
-    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    res = self.c.messages.create(
-        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
-```
-
-Defining `__call__` lets us use an object like a function (i.e. it’s
-*callable*). We use it as a small wrapper over `messages.create`.
-However we’re not exporting this version just yet – we have some
-additions we’ll make in a moment…
-
-``` python
-c = Client(model, log=True)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-
-``` python
-c('Hi')
-```
-
-Hello! How can I help you today?
-
- -- id: `msg_01DZfHpTqbodjegmvG6kkQvn` -- content: - `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 - -Let’s try out *prefill*: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -c(q, prefill=pref) -``` - -According to Douglas Adams, it’s 42. More seriously, there’s no -universal answer - it’s deeply personal. Common perspectives include: -finding happiness, making meaningful connections, pursuing purpose -through work/creativity, helping others, or simply experiencing and -appreciating existence. - -
- -- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` -- content: - `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -We can pass `stream=True` to stream the response back incrementally:
-
-``` python
-for o in c('Hi', stream=True): print(o, end='')
-```
-
-    Hello! How can I help you today?
-
-``` python
-c.use
-```
-
-    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137
-
-``` python
-for o in c(q, prefill=pref, stream=True): print(o, end='')
-```
-
-    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.
-
-``` python
-c.use
-```
-
-    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214
-
-Pass a stop sequence if you want Claude to stop generating text when it
-encounters it.
-
-``` python
-c("Count from 1 to 10", stop="5")
-```
-
-1 2 3 4
-
- -- id: `msg_01D3kdCAHNbXadE144FLPbQV` -- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `stop_sequence` -- stop_sequence: `5` -- type: `message` -- usage: - `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -This also works with streaming, and you can pass more than one stop -sequence: - -``` python -for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') -print(c.stop_reason, c.stop_sequence) -``` - - 1 - stop_sequence 2 - -You can check the logs: - -``` python -c.log[-1] -``` - - {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], - 'prefill': '', - 'max_tokens': 4096, - 'system': '', - 'temperature': 0, - 'stop_sequences': ['2', 'yellow'], - 'maxtok': None, - 'sp': None, - 'temp': None, - 'stream': None, - 'stop': None, - 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), - 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, - 'stop_reason': 'stop_sequence', - 'stop_sequence': '2'} - -## Tool use - -Let’s now add tool use (aka *function calling*). - ------------------------------------------------------------------------- - -source - -### mk_tool_choice - -> mk_tool_choice (choose:Union[str,bool,NoneType]) - -*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ -if it is True, or ‘tool’ otherwise* - -
-Exported source - -``` python -def mk_tool_choice(choose:Union[str,bool,None])->dict: - "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" - return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} -``` - -
- -``` python -print(mk_tool_choice('sums')) -print(mk_tool_choice(True)) -print(mk_tool_choice(None)) -``` - - {'type': 'tool', 'name': 'sums'} - {'type': 'any'} - {'type': 'auto'} - -Claude can be forced to use a particular tool, or select from a specific -list of tools, or decide for itself when to use a tool. If you want to -force a tool (or force choosing from a list), include a `tool_choice` -param with a dict from -[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice). - -For testing, we need a function that Claude can call; we’ll write a -simple function that adds numbers together, and will tell us when it’s -being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -Claudette can autogenerate a schema thanks to the `toolslm` library. -We’ll force the use of the tool using the function we created earlier. - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -We’ll start a dialog with Claude now. We’ll store the messages of our -dialog in `msgs`. The first message will be our prompt `pr`, and we’ll -pass our `tools` schema. - -``` python -msgs = mk_msgs(pr) -r = c(msgs, sp=sp, tools=tools, tool_choice=choice) -r -``` - -ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_015eEr2H8V4j8nNEh1KQifjH` -- content: - `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -When Claude decides that it should use a tool, it passes back a
-`ToolUseBlock` with the name of the tool to call, and the params to use.
-
-We don’t want to allow it to call just any possible function (that would
-be a security disaster!) so we create a *namespace* – that is, a
-dictionary of allowable function names to call.
-
-``` python
-ns = mk_ns(sums)
-ns
-```
-
-    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
-
-----------------------------------------------------------------------
-
-source
-
-### mk_funcres
-
-> mk_funcres (tuid, res)
-
-*Given tool use id and the tool result, create a tool_result response.*
-
-Exported source - -``` python -def mk_funcres(tuid, res): - "Given tool use id and the tool result, create a tool_result response." - return dict(type="tool_result", tool_use_id=tuid, content=str(res)) -``` - -
- -We can now use the function requested by Claude. We look it up in `ns`, -and pass in the provided parameters. - -``` python -fc = find_block(r, ToolUseBlock) -res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) -res -``` - - Finding the sum of 604542 and 6458932 - - {'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'} - ------------------------------------------------------------------------- - -source - -### mk_toolres - -> mk_toolres (r:collections.abc.Mapping, -> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) - -*Create a `tool_result` message from response `r`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|     | Type     | Default | Details                                |
|-----|----------|---------|----------------------------------------|
| r   | Mapping  |         | Tool use request response from Claude  |
| ns  | Optional | None    | Namespace to search for tools          |
| obj | Optional | None    | Class to search for tools              |
- -
-Exported source - -``` python -def mk_toolres( - r:abc.Mapping, # Tool use request response from Claude - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - obj:Optional=None # Class to search for tools - ): - "Create a `tool_result` message from response `r`." - cts = getattr(r, 'content', []) - res = [mk_msg(r)] - if ns is None: ns=globals() - if obj is not None: ns = mk_ns(obj) - tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] - if tcs: res.append(mk_msg(tcs)) - return res -``` - -
- -In order to tell Claude the result of the tool call, we pass back the -tool use assistant request and the `tool_result` response. - -``` python -tr = mk_toolres(r, ns=ns) -tr -``` - - Finding the sum of 604542 and 6458932 - - [{'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -We add this to our dialog, and now Claude has all the information it -needs to answer our question. - -``` python -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - 'The sum of 604542 and 6458932 is 7063474.' - -``` python -msgs -``` - - [{'role': 'user', 'content': 'What is 604542+6458932?'}, - {'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -This works with methods as well – in this case, use the object itself -for `ns`: - -``` python -class Dummy: - def sums( - self, - a:int, # First thing to sum - b:int=1 # Second thing to sum - ) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -tools = [get_schema(Dummy.sums)] -o = Dummy() -r = c(pr, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, obj=o) -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'The sum of 604542 and 6458932 is 7063474.' - ------------------------------------------------------------------------- - -source - -### get_types - -> get_types (msgs) - -``` python -get_types(msgs) -``` - - ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] - ------------------------------------------------------------------------- - -source - -### Client.\_\_call\_\_ - -> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, tools:Optional[list]=None, -> tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio -> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Make a call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
- -
-Exported source - -``` python -@patch -@delegates(messages.Messages.create) -def __call__(self:Client, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - tools:Optional[list]=None, # List of tools to make available to Claude - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kwargs): - "Make a call to Claude." - if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] - if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) - msgs = self._precall(msgs, prefill, stop, kwargs) - if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -r = c(pr, sp=sp, tools=sums, tool_choice=sums) -r -``` - -ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_01T8zmguPksQaKLLgUuaYAJL` -- content: - `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -tr = mk_toolres(r, ns=ns) -``` - - Finding the sum of 604542 and 6458932 - ------------------------------------------------------------------------- - -source - -### Client.structured - -> Client.structured (msgs:list, tools:Optional[list]=None, -> obj:Optional=None, -> ns:Optional[collections.abc.Mapping]=None, sp='', -> temp=0, maxtok=4096, prefill='', stream:bool=False, -> stop=None, tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un -> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, -> extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Return the value of all tool calls (generally used for structured -outputs)* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|                | Type                                             | Default   | Details                                                     |
|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
| msgs           | list                                             |           | List of messages in the dialog                              |
| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
| obj            | Optional                                         | None      | Class to search for tools                                   |
| ns             | Optional                                         | None      | Namespace to search for tools                               |
| sp             | str                                              |           | The system prompt                                           |
| temp           | int                                              | 0         | Temperature                                                 |
| maxtok         | int                                              | 4096      | Maximum tokens                                              |
| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
| stream         | bool                                             | False     | Stream response?                                            |
| stop           | NoneType                                         | None      | Stop sequence                                               |
| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
| extra_headers  | Headers \| None                                  | None      |                                                             |
| extra_query    | Query \| None                                    | None      |                                                             |
| extra_body     | Body \| None                                     | None      |                                                             |
| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
- -
-Exported source - -``` python -@patch -@delegates(Client.__call__) -def structured(self:Client, - msgs:list, # List of messages in the dialog - tools:Optional[list]=None, # List of tools to make available to Claude - obj:Optional=None, # Class to search for tools - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - **kwargs): - "Return the value of all tool calls (generally used for structured outputs)" - tools = listify(tools) - res = self(msgs, tools=tools, tool_choice=tools, **kwargs) - if ns is None: ns=mk_ns(*tools) - if obj is not None: ns = mk_ns(obj) - cts = getattr(res, 'content', []) - tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] - return tcs -``` - -
- -Anthropic’s API does not support response formats directly, so instead -we provide a `structured` method to use tool calling to achieve the same -result. The result of the tool is not passed back to Claude in this -case, but instead is returned directly to the user. - -``` python -c.structured(pr, tools=[sums]) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -## Chat - -Rather than manually adding the responses to a dialog, we’ll create a -simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do -that for us, each time we make a request. We’ll also store the system -prompt and tools here, to avoid passing them every time. - ------------------------------------------------------------------------- - -source - -### Chat - -> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, -> sp='', tools:Optional[list]=None, temp=0, -> cont_pr:Optional[str]=None) - -*Anthropic chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|         | Type     | Default | Details                                                                       |
|---------|----------|---------|-------------------------------------------------------------------------------|
| model   | Optional | None    | Model to use (leave empty if passing `cli`)                                   |
| cli     | Optional | None    | Client to use (leave empty if passing `model`)                                |
| sp      | str      |         | Optional system prompt                                                        |
| tools   | Optional | None    | List of tools to make available to Claude                                     |
| temp    | int      | 0       | Temperature                                                                   |
| cont_pr | Optional | None    | User prompt to continue an assistant response: assistant,[user:"…"],assistant |
- -
-Exported source - -``` python -class Chat: - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - sp='', # Optional system prompt - tools:Optional[list]=None, # List of tools to make available to Claude - temp=0, # Temperature - cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant - "Anthropic chat client." - assert model or cli - assert cont_pr != "", "cont_pr may not be an empty string" - self.c = (cli or Client(model)) - self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp - - @property - def use(self): return self.c.use -``` - -
- -The class stores the
-[`Client`](https://claudette.answer.ai/core.html#client) that will
-provide the responses in `c`, and a history of messages in `h`.
-
-``` python
-sp = "Never mention what tools you use."
-chat = Chat(model, sp=sp)
-chat.c.use, chat.h
-```
-
-    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])
-
-We’ve shown the token usage, but what we really care about is pricing.
-Let’s extract the latest
-[pricing](https://www.anthropic.com/pricing#anthropic-api) from
-Anthropic into a `pricing` dict.
-
-We’ll patch `Usage` to enable it to compute the cost given pricing.
-
-----------------------------------------------------------------------
-
-source
-
-### Usage.cost
-
-> Usage.cost (costs:tuple)
-
-Exported source - -``` python -@patch -def cost(self:Usage, costs:tuple) -> float: - cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) - return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 -``` - -
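-The `pricing` dict itself isn’t shown in this excerpt. Based on how
-`Usage.cost` indexes `costs`, each entry maps a model type to a 4-tuple
-of per-million-token prices: (input, output, cache write, cache read).
-Here is a sketch with illustrative numbers only (check Anthropic’s
-pricing page for current values):
-
-``` python
-# Illustrative values, not Claudette's actual table. Keys must match the
-# values in `model_types`; each tuple is $/million tokens in the order
-# Usage.cost expects: (input, output, cache write, cache read).
-pricing = {
-    'opus':   (15,   75,   18.75, 1.5),
-    'sonnet': (3,    15,   3.75,  0.3),
-    'haiku':  (0.25, 1.25, 0.3,   0.03),
-}
-usage(1_000_000, 100_000).cost(pricing['sonnet'])  # 1.0*3 + 0.1*15 = 4.5 dollars
-```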
- -``` python -chat.c.use.cost(pricing[model_types[chat.c.model]]) -``` - - 0.0 - -This is clunky. Let’s add `cost` as a property for the -[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass -in the appropriate prices for the current model to the usage cost -calculator. - ------------------------------------------------------------------------- - -source - -### Chat.cost - -> Chat.cost () - -
-Exported source - -``` python -@patch(as_prop=True) -def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) -``` - -
- -``` python -chat.cost -``` - - 0.0 - ------------------------------------------------------------------------- - -source - -### Chat.\_\_call\_\_ - -> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='', -> tool_choice:Optional[dict]=None, **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          | NoneType | None    | Prompt / message                                            |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
| kw          |          |         |                                                             |
- -
-Exported source - -``` python -@patch -def _stream(self:Chat, res): - yield from res - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) -``` - -
-
-Exported source - -``` python -@patch -def _post_pr(self:Chat, pr, prev_role): - if pr is None and prev_role == 'assistant': - if self.cont_pr is None: - raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") - pr = self.cont_pr # No user prompt, keep the chain - if pr: self.h.append(mk_msg(pr)) -``` - -
-
-Exported source - -``` python -@patch -def _append_pr(self:Chat, - pr=None, # Prompt / message - ): - prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' - if pr and prev_role == 'user': self() # already user request pending - self._post_pr(pr, prev_role) -``` - -
-
-Exported source - -``` python -@patch -def __call__(self:Chat, - pr=None, # Prompt / message - temp=None, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kw): - if temp is None: temp=self.temp - self._append_pr(pr) - res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, - tools=self.tools, tool_choice=tool_choice,**kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools) - return res -``` - -
- -The `__call__` method just passes the request along to the -[`Client`](https://claudette.answer.ai/core.html#client), but rather -than just passing in this one prompt, it appends it to the history and -passes it all along. As a result, we now have state! - -``` python -chat = Chat(model, sp=sp) -``` - -``` python -chat("I'm Jeremy") -chat("What's my name?") -``` - -Your name is Jeremy. - -
- -- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat.use, chat.cost -``` - - (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) - -Let’s try out prefill too: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -chat(q, prefill=pref) -``` - -According to Douglas Adams,42. But seriously: To find purpose, create -meaning, love, grow, and make a positive impact while experiencing -life’s journey. - -
- -- id: `msg_011s2iLranbHFhdsVg8sz6eY` -- content: - `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -By default messages must be in user, assistant, user format. If this
-isn’t followed (e.g. calling `chat()` without a user message) it will
-error out:
-
-``` python
-try: chat()
-except ValueError as e: print("Error:", e)
-```
-
-    Error: Prompt must be given after assistant completion, or use `self.cont_pr`.
-
-Setting `cont_pr` provides a “default prompt” to be used when no prompt
-is given; it’s usually used to ask the model to continue.
-
-``` python
-chat.cont_pr = "keep going..."
-chat()
-```
-
-To build meaningful relationships, pursue passions, learn continuously,
-help others, appreciate beauty, overcome challenges, leave a positive
-legacy, and find personal fulfillment through whatever brings you joy
-and contributes to the greater good.
-
- -- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` -- content: - `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -We can also use streaming: - -``` python -chat = Chat(model, sp=sp) -for o in chat("I'm Jeremy", stream=True): print(o, end='') -``` - - Hello Jeremy! Nice to meet you. How are you today? - -``` python -for o in chat(q, prefill=pref, stream=True): print(o, end='') -``` - - According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. - -### Chat tool use - -We automagically get streamlined tool use as well: - -``` python -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -r = chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -Let me calculate that sum for you. - -
- -- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` -- content: - `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Now we need to send this result to Claude—calling the object with no -parameters tells it to return the tool result to Claude: - -``` python -chat() -``` - -604542 + 6458932 = 7063474 - -
- -- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` -- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -It should be correct, because it actually used our Python function to do -the addition. Let’s check: - -``` python -a+b -``` - - 7063474 - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -# Image is Cute_dog.jpg from Wikimedia -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -``` python -img = fn.read_bytes() -``` - -
-Exported source - -``` python -def _add_cache(d, cache): - "Optionally add cache control" - if cache: d["cache_control"] = {"type": "ephemeral"} - return d -``` - -
- -Claude supports context caching by adding a `cache_control` header, so -we provide an option to enable that. - ------------------------------------------------------------------------- - -source - -### img_msg - -> img_msg (data:bytes, cache=False) - -*Convert image `data` into an encoded `dict`* - -
-Exported source - -``` python -def img_msg(data:bytes, cache=False)->dict: - "Convert image `data` into an encoded `dict`" - img = base64.b64encode(data).decode("utf-8") - mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] - r = dict(type="base64", media_type=mtype, data=img) - return _add_cache({"type": "image", "source": r}, cache) -``` - -
- -Anthropic have documented the particular `dict` structure that they
-expect image data to be in, so we have a little function to create that
-for us.
-
-----------------------------------------------------------------------
-
-source
-
-### text_msg
-
-> text_msg (s:str, cache=False)
-
-*Convert `s` to a text message*
-
-Exported source - -``` python -def text_msg(s:str, cache=False)->dict: - "Convert `s` to a text message" - return _add_cache({"type": "text", "text": s}, cache) -``` - -
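-To make the `cache` flag concrete, here’s an illustrative call (the
-output comment is a sketch, not a cell from the original notebook):
-
-``` python
-text_msg('hello', cache=True)
-# -> {'type': 'text', 'text': 'hello', 'cache_control': {'type': 'ephemeral'}}
-```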
- -A Claude message can be a list of image and text parts. So we’ve also -created a helper for making the text parts. - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -``` - -``` python -c([msg]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Exported source - -``` python -def _mk_content(src, cache=False): - "Create appropriate content data structure based on type of content" - if isinstance(src,str): return text_msg(src, cache=cache) - if isinstance(src,bytes): return img_msg(src, cache=cache) - if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} - return _str_if_needed(src) -``` - -
- -There’s no need to manually choose the type of message, since we figure
-that out from the type of the source data.
-
-``` python
-_mk_content('Hi')
-```
-
-    {'type': 'text', 'text': 'Hi'}
-
-----------------------------------------------------------------------
-
-source
-
-### mk_msg
-
-> mk_msg (content, role='user', cache=False, **kw)
-
-*Helper to create a `dict` appropriate for a Claude message. `kw` are
-added as key/value pairs to the message*
-
|         | Type | Default | Details                                                        |
|---------|------|---------|----------------------------------------------------------------|
| content |      |         | A string, list, or dict containing the contents of the message |
| role    | str  | user    | Must be ‘user’ or ‘assistant’                                  |
| cache   | bool | False   |                                                                |
| kw      |      |         |                                                                |
- -
-Exported source - -``` python -def mk_msg(content, # A string, list, or dict containing the contents of the message - role='user', # Must be 'user' or 'assistant' - cache=False, - **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content.get('content', content) - if not isinstance(content, list): content=[content] - content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' - return dict2obj(dict(role=role, content=content, **kw), list_func=list) -``` - -
- -``` python -mk_msg(['hi', 'there'], cache=True) -``` - -``` json -{ 'content': [ {'text': 'hi', 'type': 'text'}, - { 'cache_control': {'type': 'ephemeral'}, - 'text': 'there', - 'type': 'text'}], - 'role': 'user'} -``` - -``` python -m = mk_msg(['hi', 'there'], cache=True) -``` - -When we construct a message, we now use -[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to -create the appropriate parts. Since a dialog contains multiple messages, -and a message can contain multiple content parts, to pass a single -message with multiple parts we have to use a list containing a single -list: - -``` python -c([[img, q]]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -
- -> **Note** -> -> As promised (much!) earlier, we’ve now finally completed our -> definition of -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this -> version is the one we export to the Python module. - -
- -Some models, such as Haiku 3.5, unfortunately do not support image
-inputs:
-
-``` python
-model = models[-1]; model
-```
-
-    'claude-3-5-haiku-20241022'
-
-``` python
-c = Client(model)
-c([[img, q]])
-```
-
-    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
-    ---------------------------------------------------------------------------
-    AssertionError                            Traceback (most recent call last)
-    Cell In[115], line 2
-          1 c = Client(model)
-    ----> 2 c([[img, q]])
-
-    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
-         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
-         18 msgs = self._precall(msgs, prefill, stop, kwargs)
-    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
-         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-
-    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
-
-## Third party providers
-
-### Amazon Bedrock
-
-These are Amazon’s current Claude models:
-
-``` python
-models_aws
-```
-
-    ['anthropic.claude-3-opus-20240229-v1:0',
-     'anthropic.claude-3-5-sonnet-20241022-v2:0',
-     'anthropic.claude-3-sonnet-20240229-v1:0',
-     'anthropic.claude-3-haiku-20240307-v1:0']
-
- -> **Note**
->
-> `anthropic` at version 0.34.2 seems not to install `boto3` as a
-> dependency. You may need to do a `pip install boto3`, otherwise the
-> creation of the
-> [`Client`](https://claudette.answer.ai/core.html#client) below fails.
-
- -Provided `boto3` is installed, we otherwise don’t need any extra code to
-support Amazon Bedrock – we just have to set up the appropriate client:
-
-``` python
-ab = AnthropicBedrock(
-    aws_access_key=os.environ['AWS_ACCESS_KEY'],
-    aws_secret_key=os.environ['AWS_SECRET_KEY'],
-)
-client = Client(models_aws[-1], ab)
-```
-
-``` python
-chat = Chat(cli=client)
-```
-
-``` python
-chat("I'm Jeremy")
-```
-
-It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
-Anthropic. How can I help you today?
-
- -- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
- -### Google Vertex - -``` python -models_goog -``` - - ['claude-3-opus@20240229', - 'claude-3-5-sonnet-v2@20241022', - 'claude-3-sonnet@20240229', - 'claude-3-haiku@20240307'] - -``` python -from anthropic import AnthropicVertex -import google.auth -``` - -``` python -project_id = google.auth.default()[1] -region = "us-east5" -gv = AnthropicVertex(project_id=project_id, region=region) -client = Client(models_goog[-1], gv) -``` - -``` python -chat = Chat(cli=client) -``` - -``` python -chat("I'm Jeremy") -```
# claudette-pydantic
-
-
-
-> Adds Pydantic support for
-> [claudette](https://github.com/AnswerDotAI/claudette) through function
-> calling
-
-claudette_pydantic provides the `struct` method on claudette’s `Client`
-and `Chat`.
-
-`struct` provides a wrapper around `__call__`. Provide a Pydantic
-`BaseModel` as schema, and the model will return an initialized
-`BaseModel` object.
-
-I’ve found Haiku to be quite reliable at even complicated schemas.
-
-## Install
-
-``` sh
-pip install claudette-pydantic
-```
-
-## Getting Started
-
-``` python
-from claudette.core import *
-import claudette_pydantic # patches claudette with `struct`
-from pydantic import BaseModel, Field
-from typing import Literal, Union, List
-```
-
-``` python
-model = models[-1]
-model
-```
-
-    'claude-3-haiku-20240307'
-
-``` python
-class Pet(BaseModel):
-    "Create a new pet"
-    name: str
-    age: int
-    owner: str = Field(default="NA", description="Owner name. Do not return if not given.")
-    type: Literal['dog', 'cat', 'mouse']
-
-c = Client(model)
-print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet)))
-print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet)))
-```
-
-    Pet(name='Mac', age=14, owner='NA', type='dog')
-    Pet(name='juma', age=16, owner='Tom', type='cat')
-
-## Going Deeper
-
-I pulled this example from the [pydantic
-docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions);
-it has a list of discriminated unions, discriminated by `pet_type`. For
-each object the model is required to return different things.
-
-You should be able to use the full power of Pydantic here. I’ve found
-that instructor for Claude fails on this example.
-
-Each sub BaseModel may also have docstrings describing usage. I’ve found
-prompting this way to be quite reliable.
-
-``` python
-class Cat(BaseModel):
-    pet_type: Literal['cat']
-    meows: int
-
-
-class Dog(BaseModel):
-    pet_type: Literal['dog']
-    barks: float
-
-
-class Reptile(BaseModel):
-    pet_type: Literal['lizard', 'dragon']
-    scales: bool
-
-# Dummy to show doc strings
-class Create(BaseModel):
-    "Pass as final member of the `pet` list to indicate success"
-    pet_type: Literal['create']
-
-class OwnersPets(BaseModel):
-    """
-    Information to gather for an Owner's pets
-    """
-    pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type')
-
-chat = Chat(model)
-pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows"
-print(repr(chat.struct(OwnersPets, pr=pr)))
-print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?")))
-```
-
-    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
-    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
-
-While `struct` uses tool use to enforce the schema, we save the `repr`
-of the response in history to keep the user,assistant,user flow.
-
-``` python
-chat.h
-```
-
-    [{'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'hello I am a new owner and I would like to add some pets for me. 
I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
-     {'role': 'assistant',
-      'content': [{'type': 'text',
-        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]},
-     {'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'actually my dragon does have scales, can you change that for me?'}]},
-     {'role': 'assistant',
-      'content': [{'type': 'text',
-        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}]
-
-Alternatively you can use `struct` as a tool use flow with
-`treat_as_output=False` (but this requires the next input to be an
-assistant message).
-
-``` python
-chat.struct(OwnersPets, pr=pr, treat_as_output=False)
-chat.h[-3:]
-```
-
-    [{'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
-     {'role': 'assistant',
-      'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]},
-     {'role': 'user',
-      'content': [{'type': 'tool_result',
-        'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR',
-        'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}]
-
-(So I couldn’t prompt it again here; the next input would have to be an
-assistant message.)
-
-### User Creation & few-shot examples
-
-You can even add few-shot examples *for each input*:
-
-``` python
-class User(BaseModel):
-    "User creation tool"
-    age: int = Field(description='Age of the user')
-    name: str = Field(title='Username')
-    password: str = Field(
-        json_schema_extra={
-            'title': 'Password',
-            'description': 'Password of the user',
-            'examples': ['Monkey!123'],
-        }
-    )
-print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples")))
-```
-
-    User(age=22, name='tom', password='Monkey!123')
-
-Uses the few-shot example as asked for in the system prompt.
-
-### You can find more examples [nbs/examples](nbs/examples)
-
-## Signature:
-
-``` python
-Client.struct(
-    self: claudette.core.Client,
-    msgs: list,
-    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
-    **, # Client.__call__ kwargs...
-) -> BaseModel
-```
-
-``` python
-Chat.struct(
-    self: claudette.core.Chat,
-    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
-    treat_as_output=True, # In chat history, tool is reflected
-    **, # Chat.__call__ kwargs...
-) -> BaseModel
-```
+
diff --git a/llm/llms-ctx.txt b/llm/llms-ctx.txt
index 4239b3e..732c1dc 100644
--- a/llm/llms-ctx.txt
+++ b/llm/llms-ctx.txt
@@ -866,527 +866,7 @@ chat("I'm Jeremy")
 ## Extensions

 - [Pydantic Structured
-  Output](https://github.com/tom-pollak/claudette-pydantic)# Tool loop
-
-
-
-``` python
-import os
-# os.environ['ANTHROPIC_LOG'] = 'debug'
-```
-
-``` python
-model = models[-1]
-```
-
-Anthropic provides an [interesting
-example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
-of using tools to mock up a hypothetical ordering system. We’re going to
-take it a step further, and show how we can dramatically simplify the
-process, whilst completing more complex tasks.
-
-We’ll start by defining the same mock customer/order data as in
-Anthropic’s example, plus create an entity relationship between
-customers and orders:
-
-``` python
-orders = {
-    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
-    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
-    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
-
-customers = {
-    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
-               orders=[orders['O1'], orders['O2']]),
-    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
-               orders=[orders['O3']])
-}
-```
-
-We can now define the same functions from the original example – but
-note that we don’t need to manually create the large JSON schema, since
-Claudette handles all that for us automatically from the functions
-directly. We’ll add some extra functionality to update order details
-when cancelling too.
-
-``` python
-def get_customer_info(
-    customer_id:str # ID of the customer
-): # Customer's name, email, phone number, and list of orders
-    "Retrieves a customer's information and their orders based on the customer ID"
-    print(f'- Retrieving customer {customer_id}')
-    return customers.get(customer_id, "Customer not found")
-
-def get_order_details(
-    order_id:str # ID of the order
-): # Order's ID, product name, quantity, price, and order status
-    "Retrieves the details of a specific order based on the order ID"
-    print(f'- Retrieving order {order_id}')
-    return orders.get(order_id, "Order not found")
-
-def cancel_order(
-    order_id:str # ID of the order to cancel
-)->bool: # True if the cancellation is successful
-    "Cancels an order based on the provided order ID"
-    print(f'- Cancelling order {order_id}')
-    if order_id not in orders: return False
-    orders[order_id]['status'] = 'Cancelled'
-    return True
-```
-
-We’re now ready to start our chat.
-
-``` python
-tools = [get_customer_info, get_order_details, cancel_order]
-chat = Chat(model, tools=tools)
-```
-
-We’ll start with the same request as Anthropic showed:
-
-``` python
-r = chat('Can you tell me the email address for customer C1?')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-Claude asks us to use a tool. Claudette handles that automatically by
-just calling it again:
-
-``` python
-r = chat()
-contents(r)
-```
-
-    'The email address for customer C1 is john@example.com.'
-
-Let’s consider a more complex case than in the original example – what
-happens if a customer wants to cancel all of their orders? 
- -``` python
-chat = Chat(model, tools=tools)
-r = chat('Please cancel all orders for customer C1 for me.')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
-     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-This is the start of a multi-stage tool use process. Doing it manually
-step by step is inconvenient, so let’s write a function to handle this
-for us:
-
-----------------------------------------------------------------------
-
-source
-
-### Chat.toolloop
-
-> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
->                cont_func:Optional[callable]=noop, temp=None,
->                maxtok=4096, stream=False, prefill='',
->                tool_choice:Optional[dict]=None)
-
-*Add prompt `pr` to dialog and get a response from Claude, automatically
-following up with `tool_use` messages*
-
|             | Type     | Default | Details                                                     |
|-------------|----------|---------|-------------------------------------------------------------|
| pr          |          |         | Prompt to pass to Claude                                    |
| max_steps   | int      | 10      | Maximum number of tool requests to loop through             |
| trace_func  | Optional | None    | Function to trace tool use steps (e.g. `print`)             |
| cont_func   | Optional | noop    | Function that stops loop if returns False                   |
| temp        | NoneType | None    | Temperature                                                 |
| maxtok      | int      | 4096    | Maximum tokens                                              |
| stream      | bool     | False   | Stream response?                                            |
| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None    | Optionally force use of some tool                           |
- -
-Exported source - -``` python -@patch -@delegates(Chat.__call__) -def toolloop(self:Chat, - pr, # Prompt to pass to Claude - max_steps=10, # Maximum number of tool requests to loop through - trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) - cont_func:Optional[callable]=noop, # Function that stops loop if returns False - **kwargs): - "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" - n_msgs = len(self.h) - r = self(pr, **kwargs) - for i in range(max_steps): - if r.stop_reason!='tool_use': break - if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) - r = self(**kwargs) - if not (cont_func or noop)(self.h[-2]): break - if trace_func: trace_func(self.h[n_msgs:]) - return r -``` - -
- -We’ll start by re-running our previous request - we shouldn’t have to -manually pass back the `tool_use` message any more: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Can you tell me the email address for customer C1?') -r -``` - - - Retrieving customer C1 - -The email address for customer C1 is john@example.com. - -
- -- id: `msg_01Fm2CY76dNeWief4kUW6r71` -- content: - `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Let’s see if it can handle the multi-stage process now – we’ll add -`trace_func=print` to see each stage of the process: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) -r -``` - - - Retrieving customer C1 - [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] - - Cancelling order O1 - [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] - - Cancelling order O2 - [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] - [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] - -I’ve successfully cancelled both orders O1 and O2 for customer C1. -Please let me know if you need anything else! - -
- -- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` -- content: - `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -OK Claude thinks the orders were cancelled – let’s check one: - -``` python -chat.toolloop('What is the status of order O2?') -``` - - - Retrieving order O2 - -The status of order O2 is now ‘Cancelled’ since I successfully cancelled -that order earlier. - -
- -- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` -- content: - `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-## Code interpreter
-
-Here is an example of using `toolloop` to implement a simple code
-interpreter with additional tools.
-
-``` python
-from toolslm.shell import get_shell
-from fastcore.meta import delegates
-import traceback
-```
-
-``` python
-@delegates()
-class CodeChat(Chat):
-    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
-    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
-        super().__init__(model=model, **kwargs)
-        self.ask = ask
-        self.tools.append(self.run_cell)
-        self.shell = get_shell()
-        self.shell.run_cell('import '+self.imps)
-```
-
-We have one additional parameter when creating a `CodeChat` beyond what
-we pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is
-`ask` – if that’s `True`, we’ll prompt the user before running code.
-
-``` python
-@patch
-def run_cell(
-    self:CodeChat,
-    code:str,  # Code to execute in persistent IPython session
-): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
-    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
-    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
-    if self.ask and input(confirm): return '#DECLINED#'
-    try: res = self.shell.run_cell(code)
-    except Exception as e: return traceback.format_exc()
-    return res.stdout if res.result is None else res.result
-```
-
-We just pass along requests to run code to the shell’s implementation.
-Claude often prints results instead of just using the last expression,
-so we capture stdout in those cases.
-
-``` python
-sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
-Don't do complex calculations yourself -- use code for them.
-The following modules are pre-imported for `run_cell` automatically:
-
-{CodeChat.imps}
-
-Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.
-
-If a tool returns `#DECLINED#`, report to the user that the attempt was declined and no further progress can be made.'''
-```
-
-``` python
-def get_user(ignored:str='' # Unused parameter
-            ): # Username of current user
-    "Get the username of the user running this session"
-    print("Looking up username")
-    return 'Jeremy'
-```
-
-In order to test out multi-stage tool use, we create a mock function
-that Claude can call to get the current username.
-
-``` python
-model = models[1]
-```
-
-``` python
-chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
-```
-
-Claude gets confused sometimes about how tools work, so we use examples
-to remind it:
-
-``` python
-chat.h = [
-    'Calculate the square root of `10332`', 'math.sqrt(10332)',
-    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
-]
-```
-
-Providing a callable to toolloop’s `trace_func` lets us print out
-information during the loop:
-
-``` python
-def _show_cts(h):
-    for r in h:
-        for o in r.get('content'):
-            if hasattr(o,'text'): print(o.text)
-            nm = getattr(o, 'name', None)
-            if nm=='run_cell': print(o.input['code'])
-            elif nm: print(f'{o.name}({o.input})')
-```
-
-…and toolloop’s `cont_func` callable lets us provide a function which,
-if it returns `False`, stops the loop:
-
-``` python
-def _cont_decline(c):
-    return nested_idx(c, 'content', 'content') != '#DECLINED#'
-```
-
-Now we can try our code interpreter. We start by asking for a function
-to be created, which we’ll use in the next prompt to test that the
-interpreter is persistent.
-
-``` python
-pr = '''Create a 1-line function `checksum` for a string `s`,
-that multiplies together the ascii values of each character in `s` using `reduce`.'''
-chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
-```
-
-    Press Enter to execute, or enter "n" to skip?
-    ```
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    ```
-
-    Create a 1-line function `checksum` for a string `s`,
-    that multiplies together the ascii values of each character in `s` using `reduce`.
-    Let me help you create that function using `reduce` and `functools`.
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    The function has been created. Let me explain how it works:
-    1. It takes a string `s` as input
-    2. Uses `functools.reduce` to multiply together all ASCII values
-    3. `ord(y)` gets the ASCII value of each character
-    4. The initial value is 1 (the third parameter to reduce)
-    5. The lambda function multiplies the accumulator (x) with each new ASCII value
-
-    You can test it with any string. For example, you could try `checksum("hello")` to see it in action.
-
-The function has been created. Let me explain how it works: 1. It takes
-a string `s` as input 2. Uses `functools.reduce` to multiply together
-all ASCII values 3. `ord(y)` gets the ASCII value of each character 4.
-The initial value is 1 (the third parameter to reduce) 5. The lambda
-function multiplies the accumulator (x) with each new ASCII value
-
-You can test it with any string. For example, you could try
-`checksum("hello")` to see it in action.
- -- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` -- content: - `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
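-
-As noted above, `run_cell` returns the value of the last expression when
-there is one, and the captured stdout otherwise. A quick sketch of that
-behavior (illustrative – it assumes a `CodeChat` created with
-`ask=False`, so nothing prompts for confirmation):
-
-``` python
-cc = CodeChat(model, ask=False)
-cc.run_cell('1+2')          # -> 3: the result of the last expression
-cc.run_cell('print("hi")')  # -> 'hi\n': no result, so stdout is returned
-```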
- -By asking for a calculation to be done on the username, we force it to -use multiple steps: - -``` python -pr = 'Use it to get the checksum of the username of this session.' -chat.toolloop(pr, trace_func=_show_cts) -``` - - Looking up username - Use it to get the checksum of the username of this session. - I'll first get the username using `get_user` and then apply our `checksum` function to it. - get_user({'ignored': ''}) - Press Enter to execute, or enter "n" to skip? - ``` - print(checksum("Jeremy")) - ``` - - Now I'll calculate the checksum of "Jeremy": - print(checksum("Jeremy")) - The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". - -The checksum of the username “Jeremy” is 1134987783204. This was -calculated by multiplying together the ASCII values of each character in -“Jeremy”. - -
- -- id: `msg_01UXvtcLzzykZpnQUT35v4uD` -- content: - `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
# claudette Module Documentation + Ouput](https://github.com/tom-pollak/claudette-pydantic)# claudette Module Documentation ## claudette.asink diff --git a/llm/llms.txt b/llm/llms.txt index 34be56f..efa3035 100644 --- a/llm/llms.txt +++ b/llm/llms.txt @@ -14,14 +14,11 @@ Things to remember when using Claudette: ## Docs - [README](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/README.md): Quick start guide and overview -- [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions ## API - [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/b30f08e3549554f53b06fbd9bf03a0c961de3023/llm/apilist.txt): A succint list of all functions and methods in claudette. ## Optional - +- [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions - [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await -- [Core functionality](https://claudette.answer.ai/core.html.md): Detailed walkthrough of main features including Client, Chat, and tool usage -- [Pydantic Structured Output](https://raw.githubusercontent.com/tom-pollak/claudette-pydantic/refs/heads/main/README.md): Extension for structured data output using Pydantic models diff --git a/llms-ctx-full.txt b/llms-ctx-full.txt deleted file mode 100644 index ee66a3a..0000000 --- a/llms-ctx-full.txt +++ /dev/null @@ -1,4794 +0,0 @@ -Things to remember when using Claudette: - -- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key -- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) -- The library provides both synchronous and asynchronous interfaces -- Use `Chat()` for maintaining conversation state and handling tool interactions -- When using tools, the library automatically handles the request/response loop -- Image support is built in but only available on compatible models (not Haiku)# Claudette’s source - - - -This is the ‘literate’ source code for Claudette. You can view the fully -rendered version of the notebook -[here](https://claudette.answer.ai/core.html), or you can clone the git -repo and run the [interactive -notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb) -in Jupyter. The notebook is converted the [Python module -claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py) -using [nbdev](https://nbdev.fast.ai/). The goal of this source code is -to both create the Python module, and also to teach the reader *how* it -is created, without assuming much existing knowledge about Claude’s API. - -Most of the time you’ll see that we write some source code *first*, and -then a description or discussion of it *afterwards*. - -## Setup - -``` python -import os -# os.environ['ANTHROPIC_LOG'] = 'debug' -``` - -To print every HTTP request and response in full, uncomment the above -line. This functionality is provided by Anthropic’s SDK. - -
- -> **Tip** -> -> If you’re reading the rendered version of this notebook, you’ll see an -> “Exported source” collapsible widget below. If you’re reading the -> source notebook directly, you’ll see `#| exports` at the top of the -> cell. These show that this piece of code will be exported into the -> python module that this notebook creates. No other code will be -> included – any other code in this notebook is just for demonstration, -> documentation, and testing. -> -> You can toggle expanding/collapsing the source code of all exported -> sections by using the ` Code` menu in the top right of the rendered -> notebook page. - -
- -
-Exported source - -``` python -model_types = { - # Anthropic - 'claude-3-opus-20240229': 'opus', - 'claude-3-5-sonnet-20241022': 'sonnet', - 'claude-3-haiku-20240307': 'haiku-3', - 'claude-3-5-haiku-20241022': 'haiku-3-5', - # AWS - 'anthropic.claude-3-opus-20240229-v1:0': 'opus', - 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', - 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', - 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', - # Google - 'claude-3-opus@20240229': 'opus', - 'claude-3-5-sonnet-v2@20241022': 'sonnet', - 'claude-3-sonnet@20240229': 'sonnet', - 'claude-3-haiku@20240307': 'haiku', -} - -all_models = list(model_types) -``` - -
-
-Exported source - -``` python -text_only_models = ('claude-3-5-haiku-20241022',) -``` - -
-
-These are the current versions and
-[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
-models at the time of writing.
-
-``` python
-model = models[1]; model
-```
-
-    'claude-3-5-sonnet-20241022'
-
-For examples, we’ll use Sonnet 3.5, since it’s awesome.
-
-## Anthropic SDK
-
-``` python
-cli = Anthropic()
-```
-
-This is what Anthropic’s SDK provides for interacting with Claude from
-Python. To use it, pass it a list of *messages*, with *content* and a
-*role*. The roles should alternate between *user* and *assistant*.
- -> **Tip** -> -> After the code below you’ll see an indented section with an orange -> vertical line on the left. This is used to show the *result* of -> running the code above. Because the code is running in a Jupyter -> Notebook, we don’t have to use `print` to display results, we can just -> type the expression directly, as we do with `r` here. - -
- -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -
-
-### Formatting output
-
-That output is pretty long and hard to read, so let’s clean it up. We’ll
-start by pulling out the `Content` part of the message. To do that,
-we’re going to write our first function which will be included in the
-`claudette/core.py` module.
-
-> **Tip**
->
-> This is the first exported public function or class we’re creating
-> (the previous export was of a variable). In the rendered version of
-> the notebook for these you’ll see 4 things, in this order (unless the
-> symbol starts with a single `_`, which indicates it’s *private*):
->
-> - The signature (with the symbol name as a heading, with a horizontal
->   rule above)
-> - A table of parameter docs (if provided)
-> - The doc string (in italics).
-> - The source code (in a collapsible “Exported source” block)
->
-> After that, we generally provide a bit more detail on what we’ve
-> created, and why, along with a sample usage.
-
-------------------------------------------------------------------------
-
-source
-
-### find_block
-
-> find_block (r:collections.abc.Mapping, blk_type:type=<class
->             'anthropic.types.text_block.TextBlock'>)
-
-*Find the first block of type `blk_type` in `r.content`.*
-
-|          | Type    | Default   | Details                   |
-|----------|---------|-----------|---------------------------|
-| r        | Mapping |           | The message to look in    |
-| blk_type | type    | TextBlock | The type of block to find |
- -
-Exported source - -``` python -def find_block(r:abc.Mapping, # The message to look in - blk_type:type=TextBlock # The type of block to find - ): - "Find the first block of type `blk_type` in `r.content`." - return first(o for o in r.content if isinstance(o,blk_type)) -``` - -
- -This makes it easier to grab the needed parts of Claude’s responses, -which can include multiple pieces of content. By default, we look for -the first text block. That will generally have the content we want to -display. - -``` python -find_block(r) -``` - - TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') - ------------------------------------------------------------------------- - -source - -### contents - -> contents (r) - -*Helper to get the contents from Claude response `r`.* - -
-Exported source - -``` python -def contents(r): - "Helper to get the contents from Claude response `r`." - blk = find_block(r) - if not blk and r.content: blk = r.content[0] - return blk.text.strip() if hasattr(blk,'text') else str(blk) -``` - -
- -For display purposes, we often just want to show the text itself. - -``` python -contents(r) -``` - - "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" - -
-Exported source
-
-``` python
-@patch
-def _repr_markdown_(self:(Message)):
-    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
-    cts = re.sub(r'\$', '&#36;', contents(self))  # escape `$` for jupyter latex
-    return f"""{cts}
-
-<details>
-
-- {det}
-
-</details>"""
-```
-
- -Jupyter looks for a `_repr_markdown_` method in displayed objects; we -add this in order to display just the content text, and collapse full -details into a hideable section. Note that `patch` is from -[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) -functionality in an existing class. We pass the class(es) that we want -to patch as type annotations to `self`. In this case, `_repr_markdown_` -is being added to Anthropic’s `Message` class, so when we display the -message now we just see the contents, and the details are hidden away in -a collapsible details block. - -``` python -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -
- -One key part of the response is the -[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells -us how many tokens we used by returning a `Usage` object. - -We’ll add some helpers to make things a bit cleaner for creating and -formatting these objects. - -``` python -r.usage -``` - - In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 - ------------------------------------------------------------------------- - -source - -### usage - -> usage (inp=0, out=0, cache_create=0, cache_read=0) - -*Slightly more concise version of `Usage`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|              | Type | Default | Details               |
-|--------------|------|---------|-----------------------|
-| inp          | int  | 0       | input tokens          |
-| out          | int  | 0       | Output tokens         |
-| cache_create | int  | 0       | Cache creation tokens |
-| cache_read   | int  | 0       | Cache read tokens     |
- -
-Exported source - -``` python -def usage(inp=0, # input tokens - out=0, # Output tokens - cache_create=0, # Cache creation tokens - cache_read=0 # Cache read tokens - ): - "Slightly more concise version of `Usage`." - return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) -``` - -
- -The constructor provided by Anthropic is rather verbose, so we clean it -up a bit, using a lowercase version of the name. - -``` python -usage(5) -``` - - In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 - ------------------------------------------------------------------------- - -source - -### Usage.total - -> Usage.total () - -
-Exported source - -``` python -@patch(as_prop=True) -def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) -``` - -
- -Adding a `total` property to `Usage` makes it easier to see how many -tokens we’ve used up altogether. - -``` python -usage(5,1).total -``` - - 6 - ------------------------------------------------------------------------- - -source - -### Usage.\_\_repr\_\_ - -> Usage.__repr__ () - -*Return repr(self).* - -
-Exported source - -``` python -@patch -def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' -``` - -
-
-In Python, patching `__repr__` lets us change how an object is
-displayed. (More generally, methods starting and ending in `__` in
-Python are called `dunder` methods, and have some `magic` behavior –
-such as, in this case, changing how an object is displayed.)
-
-``` python
-usage(5)
-```
-
-    In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5
-
-------------------------------------------------------------------------
-
-source
-
-### Usage.\_\_add\_\_
-
-> Usage.__add__ (b)
-
-*Add together each of `input_tokens` and `output_tokens`*
-Exported source - -``` python -@patch -def __add__(self:Usage, b): - "Add together each of `input_tokens` and `output_tokens`" - return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) -``` - -
- -And, patching `__add__` lets `+` work on a `Usage` object. - -``` python -r.usage+r.usage -``` - - In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 - -### Creating messages - -Creating correctly formatted `dict`s from scratch every time isn’t very -handy, so next up we’ll add helpers for this. - -``` python -def mk_msg(content, role='user', **kw): - return dict(role=role, content=content, **kw) -``` - -We make things a bit more convenient by writing a function to create a -message for us. - -
- -> **Note** -> -> You may have noticed that we didn’t export the -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function -> (i.e. there’s no “Exported source” block around it). That’s because -> we’ll need more functionality in our final version than this version -> has – so we’ll be defining a more complete version later. Rather than -> refactoring/editing in notebooks, often it’s helpful to simply -> gradually build up complexity by re-defining a symbol. - -
- -``` python -prompt = "I'm Jeremy" -m = mk_msg(prompt) -m -``` - - {'role': 'user', 'content': "I'm Jeremy"} - -``` python -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? - -
- -- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` -- content: - `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 24}` - -
- ------------------------------------------------------------------------- - -source - -### mk_msgs - -> mk_msgs (msgs:list, **kw) - -*Helper to set ‘assistant’ role on alternate messages.* - -
-Exported source - -``` python -def mk_msgs(msgs:list, **kw): - "Helper to set 'assistant' role on alternate messages." - if isinstance(msgs,str): msgs=[msgs] - return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] -``` - -
-
-LLMs, including Claude, don’t actually have state, but instead dialogs
-are created by passing back all previous prompts and responses every
-time. With Claude, they always alternate *user* and *assistant*.
-Therefore we create a function to make it easier to build up these
-dialog lists.
-
-But to do so, we need to update
-[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
-pass not only a `str` as `content`, but also a `dict` or an object with
-a `content` attr, since these are both types of message that Claude can
-create. To do so, we check for a `content` key or attr, and use it if
-found.
-Exported source - -``` python -def _str_if_needed(o): - if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o - return str(o) -``` - -
- -``` python -def mk_msg(content, role='user', **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content['content'] - return dict(role=role, content=_str_if_needed(content), **kw) -``` - -``` python -msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) -msgs -``` - - [{'role': 'user', 'content': "I'm Jeremy"}, - {'role': 'assistant', - 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, - {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] - -Now, if we pass this list of messages to Claude, the model treats it as -a conversation to respond to. - -``` python -cli.messages.create(messages=msgs, model=model, max_tokens=200) -``` - -You just told me your name is Jeremy. - -
- -- id: `msg_01KZski1R3z1iGjF6XsBb9dM` -- content: - `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 46, 'output_tokens': 13}` - -
- -## Client - ------------------------------------------------------------------------- - -source - -### Client - -> Client (model, cli=None, log=False) - -*Basic Anthropic messages client.* - -
-Exported source - -``` python -class Client: - def __init__(self, model, cli=None, log=False): - "Basic Anthropic messages client." - self.model,self.use = model,usage() - self.text_only = model in text_only_models - self.log = [] if log else None - self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) -``` - -
-
-We’ll create a simple
-[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
-which tracks usage and stores the model to use. We don’t add any methods
-right away – instead we’ll use `patch` for that so we can add and
-document them incrementally.
-
-``` python
-c = Client(model)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-Exported source - -``` python -@patch -def _r(self:Client, r:Message, prefill=''): - "Store the result of the message and accrue total usage." - if prefill: - blk = find_block(r) - blk.text = prefill + (blk.text or '') - self.result = r - self.use += r.usage - self.stop_reason = r.stop_reason - self.stop_sequence = r.stop_sequence - return r -``` - -
- -We use a `_` prefix on private methods, but we document them here in the -interests of literate source code. - -`_r` will be used each time we get a new result, to track usage and also -to keep the result available for later. - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 - -Whereas OpenAI’s models use a `stream` parameter for streaming, -Anthropic’s use a separate method. We implement Anthropic’s approach in -a private method, and then use a `stream` parameter in `__call__` for -consistency: - -
-Exported source
-
-``` python
-@patch
-def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
-    self._r(final, prefill)
-    if self.log is not None: self.log.append({
-        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
-        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
-    })
-    return self.result
-```
-
-
-Exported source - -``` python -@patch -def _stream(self:Client, msgs:list, prefill='', **kwargs): - with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield(prefill) - yield from s.text_stream - self._log(s.get_final_message(), prefill, msgs, **kwargs) -``` - -
- -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. However Claude doesn’t actually repeat that in the -response, so for convenience we add it. - -
-Exported source - -``` python -@patch -def _precall(self:Client, msgs, prefill, stop, kwargs): - pref = [prefill.strip()] if prefill else [] - if not isinstance(msgs,list): msgs = [msgs] - if stop is not None: - if not isinstance(stop, (list)): stop = [stop] - kwargs["stop_sequences"] = stop - msgs = mk_msgs(msgs+pref) - return msgs -``` - -
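-
-To make the normalization concrete, here’s a sketch of what `_precall`
-produces for a plain string prompt (illustrative only – you wouldn’t
-normally call this private method yourself):
-
-``` python
-kwargs = {}
-msgs = c._precall("Count to 3", "1", ".", kwargs)
-# msgs is now:
-#   [{'role': 'user', 'content': 'Count to 3'},
-#    {'role': 'assistant', 'content': '1'}]   # <-- the prefill
-# ...and kwargs now contains {'stop_sequences': ['.']}
-```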
-
-``` python
-@patch
-@delegates(messages.Messages.create)
-def __call__(self:Client,
-             msgs:list, # List of messages in the dialog
-             sp='', # The system prompt
-             temp=0, # Temperature
-             maxtok=4096, # Maximum tokens
-             prefill='', # Optional prefill to pass to Claude as start of its response
-             stream:bool=False, # Stream response?
-             stop=None, # Stop sequence
-             **kwargs):
-    "Make a call to Claude."
-    msgs = self._precall(msgs, prefill, stop, kwargs)
-    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    res = self.c.messages.create(
-        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
-```
-
-Defining `__call__` lets us use an object like a function (i.e. it’s
-*callable*). We use it as a small wrapper over `messages.create`.
-However, we’re not exporting this version just yet – we have some
-additions we’ll make in a moment…
-
-``` python
-c = Client(model, log=True)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-
-``` python
-c('Hi')
-```
-
-Hello! How can I help you today?
- -- id: `msg_01DZfHpTqbodjegmvG6kkQvn` -- content: - `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 - -Let’s try out *prefill*: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -c(q, prefill=pref) -``` - -According to Douglas Adams, it’s 42. More seriously, there’s no -universal answer - it’s deeply personal. Common perspectives include: -finding happiness, making meaningful connections, pursuing purpose -through work/creativity, helping others, or simply experiencing and -appreciating existence. - -
- -- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` -- content: - `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-We can pass `stream=True` to stream the response back incrementally:
-
-``` python
-for o in c('Hi', stream=True): print(o, end='')
-```
-
-    Hello! How can I help you today?
-
-``` python
-c.use
-```
-
-    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137
-
-``` python
-for o in c(q, prefill=pref, stream=True): print(o, end='')
-```
-
-    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.
-
-``` python
-c.use
-```
-
-    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214
-
-Pass a stop sequence if you want Claude to stop generating text when it
-encounters it.
-
-``` python
-c("Count from 1 to 10", stop="5")
-```
-
-1 2 3 4
- -- id: `msg_01D3kdCAHNbXadE144FLPbQV` -- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `stop_sequence` -- stop_sequence: `5` -- type: `message` -- usage: - `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -This also works with streaming, and you can pass more than one stop -sequence: - -``` python -for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') -print(c.stop_reason, c.stop_sequence) -``` - - 1 - stop_sequence 2 - -You can check the logs: - -``` python -c.log[-1] -``` - - {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], - 'prefill': '', - 'max_tokens': 4096, - 'system': '', - 'temperature': 0, - 'stop_sequences': ['2', 'yellow'], - 'maxtok': None, - 'sp': None, - 'temp': None, - 'stream': None, - 'stop': None, - 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), - 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, - 'stop_reason': 'stop_sequence', - 'stop_sequence': '2'} - -## Tool use - -Let’s now add tool use (aka *function calling*). - ------------------------------------------------------------------------- - -source - -### mk_tool_choice - -> mk_tool_choice (choose:Union[str,bool,NoneType]) - -*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ -if it is True, or ‘tool’ otherwise* - -
-Exported source - -``` python -def mk_tool_choice(choose:Union[str,bool,None])->dict: - "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" - return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} -``` - -
- -``` python -print(mk_tool_choice('sums')) -print(mk_tool_choice(True)) -print(mk_tool_choice(None)) -``` - - {'type': 'tool', 'name': 'sums'} - {'type': 'any'} - {'type': 'auto'} - -Claude can be forced to use a particular tool, or select from a specific -list of tools, or decide for itself when to use a tool. If you want to -force a tool (or force choosing from a list), include a `tool_choice` -param with a dict from -[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice). - -For testing, we need a function that Claude can call; we’ll write a -simple function that adds numbers together, and will tell us when it’s -being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -Claudette can autogenerate a schema thanks to the `toolslm` library. -We’ll force the use of the tool using the function we created earlier. - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -We’ll start a dialog with Claude now. We’ll store the messages of our -dialog in `msgs`. The first message will be our prompt `pr`, and we’ll -pass our `tools` schema. - -``` python -msgs = mk_msgs(pr) -r = c(msgs, sp=sp, tools=tools, tool_choice=choice) -r -``` - -ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_015eEr2H8V4j8nNEh1KQifjH` -- content: - `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
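-
-Before moving on, it’s worth seeing roughly what `get_schema(sums)`
-generated from the signature and comments above. A sketch of the shape
-(illustrative – the exact output depends on the `toolslm` version):
-
-``` python
-# get_schema(sums) returns something along these lines:
-{'name': 'sums',
- 'description': 'Adds a + b.',
- 'input_schema': {'type': 'object',
-                  'properties': {'a': {'type': 'integer', 'description': 'First thing to sum'},
-                                 'b': {'type': 'integer', 'description': 'Second thing to sum', 'default': 1}},
-                  'required': ['a']}}
-```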
-
-When Claude decides that it should use a tool, it passes back a
-`ToolUseBlock` with the name of the tool to call, and the params to use.
-
-We don’t want to allow it to call just any possible function (that would
-be a security disaster!) so we create a *namespace* – that is, a
-dictionary of allowable function names to call.
-
-``` python
-ns = mk_ns(sums)
-ns
-```
-
-    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
-
-------------------------------------------------------------------------
-
-source
-
-### mk_funcres
-
-> mk_funcres (tuid, res)
-
-*Given tool use id and the tool result, create a tool_result response.*
-Exported source - -``` python -def mk_funcres(tuid, res): - "Given tool use id and the tool result, create a tool_result response." - return dict(type="tool_result", tool_use_id=tuid, content=str(res)) -``` - -
- -We can now use the function requested by Claude. We look it up in `ns`, -and pass in the provided parameters. - -``` python -fc = find_block(r, ToolUseBlock) -res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) -res -``` - - Finding the sum of 604542 and 6458932 - - {'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'} - ------------------------------------------------------------------------- - -source - -### mk_toolres - -> mk_toolres (r:collections.abc.Mapping, -> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) - -*Create a `tool_result` message from response `r`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|     | Type     | Default | Details                               |
-|-----|----------|---------|---------------------------------------|
-| r   | Mapping  |         | Tool use request response from Claude |
-| ns  | Optional | None    | Namespace to search for tools         |
-| obj | Optional | None    | Class to search for tools             |
- -
-Exported source - -``` python -def mk_toolres( - r:abc.Mapping, # Tool use request response from Claude - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - obj:Optional=None # Class to search for tools - ): - "Create a `tool_result` message from response `r`." - cts = getattr(r, 'content', []) - res = [mk_msg(r)] - if ns is None: ns=globals() - if obj is not None: ns = mk_ns(obj) - tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] - if tcs: res.append(mk_msg(tcs)) - return res -``` - -
- -In order to tell Claude the result of the tool call, we pass back the -tool use assistant request and the `tool_result` response. - -``` python -tr = mk_toolres(r, ns=ns) -tr -``` - - Finding the sum of 604542 and 6458932 - - [{'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -We add this to our dialog, and now Claude has all the information it -needs to answer our question. - -``` python -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - 'The sum of 604542 and 6458932 is 7063474.' - -``` python -msgs -``` - - [{'role': 'user', 'content': 'What is 604542+6458932?'}, - {'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -This works with methods as well – in this case, use the object itself -for `ns`: - -``` python -class Dummy: - def sums( - self, - a:int, # First thing to sum - b:int=1 # Second thing to sum - ) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -tools = [get_schema(Dummy.sums)] -o = Dummy() -r = c(pr, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, obj=o) -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'The sum of 604542 and 6458932 is 7063474.' - ------------------------------------------------------------------------- - -source - -### get_types - -> get_types (msgs) - -``` python -get_types(msgs) -``` - - ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] - ------------------------------------------------------------------------- - -source - -### Client.\_\_call\_\_ - -> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, tools:Optional[list]=None, -> tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio -> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Make a call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|                | Type                                             | Default   | Details                                                     |
-|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
-| msgs           | list                                             |           | List of messages in the dialog                              |
-| sp             | str                                              |           | The system prompt                                           |
-| temp           | int                                              | 0         | Temperature                                                 |
-| maxtok         | int                                              | 4096      | Maximum tokens                                              |
-| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
-| stream         | bool                                             | False     | Stream response?                                            |
-| stop           | NoneType                                         | None      | Stop sequence                                               |
-| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
-| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
-| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
-| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
-| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
-| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
-| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
-| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
-| extra_headers  | Headers \| None                                  | None      |                                                             |
-| extra_query    | Query \| None                                    | None      |                                                             |
-| extra_body     | Body \| None                                     | None      |                                                             |
-| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
- -
-Exported source - -``` python -@patch -@delegates(messages.Messages.create) -def __call__(self:Client, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - tools:Optional[list]=None, # List of tools to make available to Claude - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kwargs): - "Make a call to Claude." - if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] - if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) - msgs = self._precall(msgs, prefill, stop, kwargs) - if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -r = c(pr, sp=sp, tools=sums, tool_choice=sums) -r -``` - -ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_01T8zmguPksQaKLLgUuaYAJL` -- content: - `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -tr = mk_toolres(r, ns=ns) -``` - - Finding the sum of 604542 and 6458932 - ------------------------------------------------------------------------- - -source - -### Client.structured - -> Client.structured (msgs:list, tools:Optional[list]=None, -> obj:Optional=None, -> ns:Optional[collections.abc.Mapping]=None, sp='', -> temp=0, maxtok=4096, prefill='', stream:bool=False, -> stop=None, tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un -> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, -> extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Return the value of all tool calls (generally used for structured -outputs)* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|                | Type                                             | Default   | Details                                                     |
-|----------------|--------------------------------------------------|-----------|-------------------------------------------------------------|
-| msgs           | list                                             |           | List of messages in the dialog                              |
-| tools          | Optional                                         | None      | List of tools to make available to Claude                   |
-| obj            | Optional                                         | None      | Class to search for tools                                   |
-| ns             | Optional                                         | None      | Namespace to search for tools                               |
-| sp             | str                                              |           | The system prompt                                           |
-| temp           | int                                              | 0         | Temperature                                                 |
-| maxtok         | int                                              | 4096      | Maximum tokens                                              |
-| prefill        | str                                              |           | Optional prefill to pass to Claude as start of its response |
-| stream         | bool                                             | False     | Stream response?                                            |
-| stop           | NoneType                                         | None      | Stop sequence                                               |
-| tool_choice    | Optional                                         | None      | Optionally force use of some tool                           |
-| metadata       | MetadataParam \| NotGiven                        | NOT_GIVEN |                                                             |
-| stop_sequences | List[str] \| NotGiven                            | NOT_GIVEN |                                                             |
-| system         | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |                                                             |
-| temperature    | float \| NotGiven                                | NOT_GIVEN |                                                             |
-| top_k          | int \| NotGiven                                  | NOT_GIVEN |                                                             |
-| top_p          | float \| NotGiven                                | NOT_GIVEN |                                                             |
-| extra_headers  | Headers \| None                                  | None      |                                                             |
-| extra_query    | Query \| None                                    | None      |                                                             |
-| extra_body     | Body \| None                                     | None      |                                                             |
-| timeout        | float \| httpx.Timeout \| None \| NotGiven       | NOT_GIVEN |                                                             |
- -
-Exported source - -``` python -@patch -@delegates(Client.__call__) -def structured(self:Client, - msgs:list, # List of messages in the dialog - tools:Optional[list]=None, # List of tools to make available to Claude - obj:Optional=None, # Class to search for tools - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - **kwargs): - "Return the value of all tool calls (generally used for structured outputs)" - tools = listify(tools) - res = self(msgs, tools=tools, tool_choice=tools, **kwargs) - if ns is None: ns=mk_ns(*tools) - if obj is not None: ns = mk_ns(obj) - cts = getattr(res, 'content', []) - tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] - return tcs -``` - -
- -Anthropic’s API does not support response formats directly, so instead -we provide a `structured` method to use tool calling to achieve the same -result. The result of the tool is not passed back to Claude in this -case, but instead is returned directly to the user. - -``` python -c.structured(pr, tools=[sums]) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -## Chat - -Rather than manually adding the responses to a dialog, we’ll create a -simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do -that for us, each time we make a request. We’ll also store the system -prompt and tools here, to avoid passing them every time. - ------------------------------------------------------------------------- - -source - -### Chat - -> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, -> sp='', tools:Optional[list]=None, temp=0, -> cont_pr:Optional[str]=None) - -*Anthropic chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|         | Type     | Default | Details                                                                        |
-|---------|----------|---------|--------------------------------------------------------------------------------|
-| model   | Optional | None    | Model to use (leave empty if passing cli)                                      |
-| cli     | Optional | None    | Client to use (leave empty if passing model)                                   |
-| sp      | str      |         | Optional system prompt                                                         |
-| tools   | Optional | None    | List of tools to make available to Claude                                      |
-| temp    | int      | 0       | Temperature                                                                    |
-| cont_pr | Optional | None    | User prompt to continue an assistant response: assistant,[user:"…"],assistant  |
- -
-Exported source - -``` python -class Chat: - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - sp='', # Optional system prompt - tools:Optional[list]=None, # List of tools to make available to Claude - temp=0, # Temperature - cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant - "Anthropic chat client." - assert model or cli - assert cont_pr != "", "cont_pr may not be an empty string" - self.c = (cli or Client(model)) - self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp - - @property - def use(self): return self.c.use -``` - -
-
-The class stores the
-[`Client`](https://claudette.answer.ai/core.html#client) that will
-provide the responses in `c`, and a history of messages in `h`.
-
-``` python
-sp = "Never mention what tools you use."
-chat = Chat(model, sp=sp)
-chat.c.use, chat.h
-```
-
-    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])
-
-We’ve shown the token usage but what we really care about is pricing.
-Let’s extract the latest
-[pricing](https://www.anthropic.com/pricing#anthropic-api) from
-Anthropic into a `pricing` dict.
-
-We’ll patch `Usage` to enable it to compute the cost given pricing.
-
-------------------------------------------------------------------------
-
-source
-
-### Usage.cost
-
-> Usage.cost (costs:tuple)
-Exported source - -``` python -@patch -def cost(self:Usage, costs:tuple) -> float: - cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) - return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 -``` - -
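-
-The `costs` tuple is ordered (input, output, cache write, cache read),
-each in dollars per million tokens, matching the indexing above. The
-`pricing` dict itself isn’t shown in this excerpt; it maps the short
-model-type names from `model_types` to such tuples – a sketch of its
-shape (the figures below are placeholders, not Anthropic’s actual
-prices):
-
-``` python
-pricing = {
-    'opus':   (15, 75, 18.75, 1.50),  # placeholder figures, per million tokens
-    'sonnet': ( 3, 15,  3.75, 0.30),
-    'haiku':  ( 1,  5,  1.25, 0.10),
-}
-```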
- -``` python -chat.c.use.cost(pricing[model_types[chat.c.model]]) -``` - - 0.0 - -This is clunky. Let’s add `cost` as a property for the -[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass -in the appropriate prices for the current model to the usage cost -calculator. - ------------------------------------------------------------------------- - -source - -### Chat.cost - -> Chat.cost () - -
-Exported source - -``` python -@patch(as_prop=True) -def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) -``` - -
- -``` python -chat.cost -``` - - 0.0 - ------------------------------------------------------------------------- - -source - -### Chat.\_\_call\_\_ - -> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='', -> tool_choice:Optional[dict]=None, **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|             | Type     | Default | Details                                                     |
-|-------------|----------|---------|-------------------------------------------------------------|
-| pr          | NoneType | None    | Prompt / message                                            |
-| temp        | NoneType | None    | Temperature                                                 |
-| maxtok      | int      | 4096    | Maximum tokens                                              |
-| stream      | bool     | False   | Stream response?                                            |
-| prefill     | str      |         | Optional prefill to pass to Claude as start of its response |
-| tool_choice | Optional | None    | Optionally force use of some tool                           |
-| kw          |          |         |                                                             |
- -
-Exported source
-
-``` python
-@patch
-def _stream(self:Chat, res):
-    yield from res  # re-yield each streamed token to the caller
-    # once the stream is exhausted, record the result (and any tool results) in the history
-    self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
-```
-
-
-Exported source - -``` python -@patch -def _post_pr(self:Chat, pr, prev_role): - if pr is None and prev_role == 'assistant': - if self.cont_pr is None: - raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") - pr = self.cont_pr # No user prompt, keep the chain - if pr: self.h.append(mk_msg(pr)) -``` - -
-
-Exported source - -``` python -@patch -def _append_pr(self:Chat, - pr=None, # Prompt / message - ): - prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' - if pr and prev_role == 'user': self() # already user request pending - self._post_pr(pr, prev_role) -``` - -
-
-Exported source - -``` python -@patch -def __call__(self:Chat, - pr=None, # Prompt / message - temp=None, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kw): - if temp is None: temp=self.temp - self._append_pr(pr) - res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, - tools=self.tools, tool_choice=tool_choice,**kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools) - return res -``` - -
- -The `__call__` method just passes the request along to the -[`Client`](https://claudette.answer.ai/core.html#client), but rather -than just passing in this one prompt, it appends it to the history and -passes it all along. As a result, we now have state! - -``` python -chat = Chat(model, sp=sp) -``` - -``` python -chat("I'm Jeremy") -chat("What's my name?") -``` - -Your name is Jeremy. - -
- -- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat.use, chat.cost -``` - - (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) - -Let’s try out prefill too: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -chat(q, prefill=pref) -``` - -According to Douglas Adams,42. But seriously: To find purpose, create -meaning, love, grow, and make a positive impact while experiencing -life’s journey. - -
- -- id: `msg_011s2iLranbHFhdsVg8sz6eY` -- content: - `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-By default messages must be in user, assistant, user format. If this
-isn’t followed (e.g. calling `chat()` without a user message) it will
-error out:
-
-``` python
-try: chat()
-except ValueError as e: print("Error:", e)
-```
-
-    Error: Prompt must be given after assistant completion, or use `self.cont_pr`.
-
-Setting `cont_pr` allows a “default prompt” to be specified when a
-prompt isn’t specified. Usually used to prompt the model to continue.
-
-``` python
-chat.cont_pr = "keep going..."
-chat()
-```
-
-To build meaningful relationships, pursue passions, learn continuously,
-help others, appreciate beauty, overcome challenges, leave a positive
-legacy, and find personal fulfillment through whatever brings you joy
-and contributes to the greater good.
- -- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` -- content: - `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -We can also use streaming: - -``` python -chat = Chat(model, sp=sp) -for o in chat("I'm Jeremy", stream=True): print(o, end='') -``` - - Hello Jeremy! Nice to meet you. How are you today? - -``` python -for o in chat(q, prefill=pref, stream=True): print(o, end='') -``` - - According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. - -### Chat tool use - -We automagically get streamlined tool use as well: - -``` python -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -r = chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -Let me calculate that sum for you. - -
- -- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` -- content: - `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Now we need to send this result to Claude – calling the object with no
-parameters tells it to return the tool result to Claude:
-
-``` python
-chat()
-```
-
-604542 + 6458932 = 7063474
-
- -- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` -- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
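-
-Behind the scenes, the tool call and its result were appended to the
-history as ordinary messages. For illustration (this cell is not in the
-original notebook), the second-to-last history entry should be the
-`tool_result` message that Claudette sent back:
-
-``` python
-chat.h[-2]  # the `tool_result` content returned to Claude
-```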
- -It should be correct, because it actually used our Python function to do -the addition. Let’s check: - -``` python -a+b -``` - - 7063474 - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -# Image is Cute_dog.jpg from Wikimedia -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -``` python -img = fn.read_bytes() -``` - -
-Exported source - -``` python -def _add_cache(d, cache): - "Optionally add cache control" - if cache: d["cache_control"] = {"type": "ephemeral"} - return d -``` - -
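-
-For illustration (this cell is not in the original notebook), here is
-what `_add_cache` does to a content block when caching is requested:
-
-``` python
-_add_cache({"type": "text", "text": "hi"}, cache=True)
-```
-
-    {'type': 'text', 'text': 'hi', 'cache_control': {'type': 'ephemeral'}}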
-
-Claude supports context caching when a `cache_control` entry is added to
-a content block, so we provide an option to add that.
-
------------------------------------------------------------------------
-
-source
-
-### img_msg
-
-> img_msg (data:bytes, cache=False)
-
-*Convert image `data` into an encoded `dict`*
-
-Exported source - -``` python -def img_msg(data:bytes, cache=False)->dict: - "Convert image `data` into an encoded `dict`" - img = base64.b64encode(data).decode("utf-8") - mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] - r = dict(type="base64", media_type=mtype, data=img) - return _add_cache({"type": "image", "source": r}, cache) -``` - -
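-
-As a quick illustrative check (not from the original notebook), we can
-confirm the encoded structure – for our JPEG sample this should report
-an `image/jpeg` media type:
-
-``` python
-r = img_msg(img)
-r['type'], r['source']['media_type']
-```
-
-    ('image', 'image/jpeg')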
-
-Anthropic have documented the particular `dict` structure that they
-expect image data to be in, so we have a little function to create that
-for us.
-
------------------------------------------------------------------------
-
-source
-
-### text_msg
-
-> text_msg (s:str, cache=False)
-
-*Convert `s` to a text message*
-
-Exported source - -``` python -def text_msg(s:str, cache=False)->dict: - "Convert `s` to a text message" - return _add_cache({"type": "text", "text": s}, cache) -``` - -
- -A Claude message can be a list of image and text parts. So we’ve also -created a helper for making the text parts. - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -``` - -``` python -c([msg]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Exported source - -``` python -def _mk_content(src, cache=False): - "Create appropriate content data structure based on type of content" - if isinstance(src,str): return text_msg(src, cache=cache) - if isinstance(src,bytes): return img_msg(src, cache=cache) - if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} - return _str_if_needed(src) -``` - -
-
-There’s no need to manually choose the type of message, since we figure
-that out from the type of the source data.
-
-``` python
-_mk_content('Hi')
-```
-
-    {'type': 'text', 'text': 'Hi'}
-
------------------------------------------------------------------------
-
-source
-
-### mk_msg
-
-> mk_msg (content, role='user', cache=False, **kw)
-
-*Helper to create a `dict` appropriate for a Claude message. `kw` are
-added as key/value pairs to the message*
-
-|  | Type | Default | Details |
-|--|------|---------|---------|
-| content |  |  | A string, list, or dict containing the contents of the message |
-| role | str | user | Must be ‘user’ or ‘assistant’ |
-| cache | bool | False |  |
-| kw |  |  |  |
- -
-Exported source - -``` python -def mk_msg(content, # A string, list, or dict containing the contents of the message - role='user', # Must be 'user' or 'assistant' - cache=False, - **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content.get('content', content) - if not isinstance(content, list): content=[content] - content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' - return dict2obj(dict(role=role, content=content, **kw), list_func=list) -``` - -
- -``` python -mk_msg(['hi', 'there'], cache=True) -``` - -``` json -{ 'content': [ {'text': 'hi', 'type': 'text'}, - { 'cache_control': {'type': 'ephemeral'}, - 'text': 'there', - 'type': 'text'}], - 'role': 'user'} -``` - -``` python -m = mk_msg(['hi', 'there'], cache=True) -``` - -When we construct a message, we now use -[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to -create the appropriate parts. Since a dialog contains multiple messages, -and a message can contain multiple content parts, to pass a single -message with multiple parts we have to use a list containing a single -list: - -``` python -c([[img, q]]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -
- -> **Note** -> -> As promised (much!) earlier, we’ve now finally completed our -> definition of -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this -> version is the one we export to the Python module. - -
-
-Unfortunately some models, such as Haiku 3.5, do not support image
-inputs:
-
-``` python
-model = models[-1]; model
-```
-
-    'claude-3-5-haiku-20241022'
-
-``` python
-c = Client(model)
-c([[img, q]])
-```
-
-    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
-
-    --------------------------------------------------------------------------
-    AssertionError                            Traceback (most recent call last)
-    Cell In[115], line 2
-          1 c = Client(model)
-    ----> 2 c([[img, q]])
-
-    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
-         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
-         18 msgs = self._precall(msgs, prefill, stop, kwargs)
-    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
-         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-
-    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022
-
-## Third party providers
-
-### Amazon Bedrock
-
-These are Amazon’s current Claude models:
-
-``` python
-models_aws
-```
-
-    ['anthropic.claude-3-opus-20240229-v1:0',
-     'anthropic.claude-3-5-sonnet-20241022-v2:0',
-     'anthropic.claude-3-sonnet-20240229-v1:0',
-     'anthropic.claude-3-haiku-20240307-v1:0']
-
- -> **Note** -> -> `anthropic` at version 0.34.2 seems not to install `boto3` as a -> dependency. You may need to do a `pip install boto3` or the creation -> of the [`Client`](https://claudette.answer.ai/core.html#client) below -> fails. - -
-
-Provided `boto3` is installed, we otherwise don’t need any extra code to
-support Amazon Bedrock – we just have to set up the appropriate client:
-
-``` python
-ab = AnthropicBedrock(
-    aws_access_key=os.environ['AWS_ACCESS_KEY'],
-    aws_secret_key=os.environ['AWS_SECRET_KEY'],
-)
-client = Client(models_aws[-1], ab)
-```
-
-``` python
-chat = Chat(cli=client)
-```
-
-``` python
-chat("I'm Jeremy")
-```
-
-It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
-Anthropic. How can I help you today?
-
- -- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
- -### Google Vertex - -``` python -models_goog -``` - - ['claude-3-opus@20240229', - 'claude-3-5-sonnet-v2@20241022', - 'claude-3-sonnet@20240229', - 'claude-3-haiku@20240307'] - -``` python -from anthropic import AnthropicVertex -import google.auth -``` - -``` python -project_id = google.auth.default()[1] -region = "us-east5" -gv = AnthropicVertex(project_id=project_id, region=region) -client = Client(models_goog[-1], gv) -``` - -``` python -chat = Chat(cli=client) -``` - -``` python -chat("I'm Jeremy") -```
# Tool loop
-
-
-
-``` python
-import os
-# os.environ['ANTHROPIC_LOG'] = 'debug'
-```
-
-``` python
-model = models[-1]
-```
-
-Anthropic provides an [interesting
-example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
-of using tools to mock up a hypothetical ordering system. We’re going to
-take it a step further, and show how we can dramatically simplify the
-process, whilst completing more complex tasks.
-
-We’ll start by defining the same mock customer/order data as in
-Anthropic’s example, plus create an entity relationship between
-customers and orders:
-
-``` python
-orders = {
-    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
-    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
-    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
-
-customers = {
-    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
-               orders=[orders['O1'], orders['O2']]),
-    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
-               orders=[orders['O3']])
-}
-```
-
-We can now define the same functions from the original example – but
-note that we don’t need to manually create the large JSON schema, since
-Claudette handles all that for us automatically from the functions
-directly. We’ll add some extra functionality to update order details
-when cancelling too.
-
-``` python
-def get_customer_info(
-    customer_id:str # ID of the customer
-): # Customer's name, email, phone number, and list of orders
-    "Retrieves a customer's information and their orders based on the customer ID"
-    print(f'- Retrieving customer {customer_id}')
-    return customers.get(customer_id, "Customer not found")
-
-def get_order_details(
-    order_id:str # ID of the order
-): # Order's ID, product name, quantity, price, and order status
-    "Retrieves the details of a specific order based on the order ID"
-    print(f'- Retrieving order {order_id}')
-    return orders.get(order_id, "Order not found")
-
-def cancel_order(
-    order_id:str # ID of the order to cancel
-)->bool: # True if the cancellation is successful
-    "Cancels an order based on the provided order ID"
-    print(f'- Cancelling order {order_id}')
-    if order_id not in orders: return False
-    orders[order_id]['status'] = 'Cancelled'
-    return True
-```
-
-We’re now ready to start our chat.
-
-``` python
-tools = [get_customer_info, get_order_details, cancel_order]
-chat = Chat(model, tools=tools)
-```
-
-We’ll start with the same request as Anthropic showed:
-
-``` python
-r = chat('Can you tell me the email address for customer C1?')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-Claude asks us to use a tool. Claudette handles that automatically by
-just calling it again:
-
-``` python
-r = chat()
-contents(r)
-```
-
-    'The email address for customer C1 is john@example.com.'
-
-Let’s consider a more complex case than in the original example – what
-happens if a customer wants to cancel all of their orders?
-
-``` python
-chat = Chat(model, tools=tools)
-r = chat('Please cancel all orders for customer C1 for me.')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
-     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-This is the start of a multi-stage tool use process. Doing it manually
-step by step is inconvenient, so let’s write a function to handle this
-for us:
-
------------------------------------------------------------------------
-
-source
-
-### Chat.toolloop
-
-> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
->                cont_func:Optional[callable]=noop, temp=None,
->                maxtok=4096, stream=False, prefill='',
->                tool_choice:Optional[dict]=None)
-
-*Add prompt `pr` to dialog and get a response from Claude, automatically
-following up with `tool_use` messages*
-
-|  | Type | Default | Details |
-|--|------|---------|---------|
-| pr |  |  | Prompt to pass to Claude |
-| max_steps | int | 10 | Maximum number of tool requests to loop through |
-| trace_func | Optional | None | Function to trace tool use steps (e.g `print`) |
-| cont_func | Optional | noop | Function that stops loop if returns False |
-| temp | NoneType | None | Temperature |
-| maxtok | int | 4096 | Maximum tokens |
-| stream | bool | False | Stream response? |
-| prefill | str |  | Optional prefill to pass to Claude as start of its response |
-| tool_choice | Optional | None | Optionally force use of some tool |
- -
-Exported source - -``` python -@patch -@delegates(Chat.__call__) -def toolloop(self:Chat, - pr, # Prompt to pass to Claude - max_steps=10, # Maximum number of tool requests to loop through - trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) - cont_func:Optional[callable]=noop, # Function that stops loop if returns False - **kwargs): - "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" - n_msgs = len(self.h) - r = self(pr, **kwargs) - for i in range(max_steps): - if r.stop_reason!='tool_use': break - if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) - r = self(**kwargs) - if not (cont_func or noop)(self.h[-2]): break - if trace_func: trace_func(self.h[n_msgs:]) - return r -``` - -
- -We’ll start by re-running our previous request - we shouldn’t have to -manually pass back the `tool_use` message any more: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Can you tell me the email address for customer C1?') -r -``` - - - Retrieving customer C1 - -The email address for customer C1 is john@example.com. - -
- -- id: `msg_01Fm2CY76dNeWief4kUW6r71` -- content: - `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Let’s see if it can handle the multi-stage process now – we’ll add -`trace_func=print` to see each stage of the process: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) -r -``` - - - Retrieving customer C1 - [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] - - Cancelling order O1 - [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] - - Cancelling order O2 - [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] - [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] - -I’ve successfully cancelled both orders O1 and O2 for customer C1. -Please let me know if you need anything else! - -
- -- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` -- content: - `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -OK Claude thinks the orders were cancelled – let’s check one: - -``` python -chat.toolloop('What is the status of order O2?') -``` - - - Retrieving order O2 - -The status of order O2 is now ‘Cancelled’ since I successfully cancelled -that order earlier. - -
- -- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` -- content: - `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
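-
-We can also confirm this directly in our mock data, without going
-through Claude at all:
-
-``` python
-orders['O2']['status']
-```
-
-    'Cancelled'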
- -## Code interpreter - -Here is an example of using `toolloop` to implement a simple code -interpreter with additional tools. - -``` python -from toolslm.shell import get_shell -from fastcore.meta import delegates -import traceback -``` - -``` python -@delegates() -class CodeChat(Chat): - imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses' - def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs): - super().__init__(model=model, **kwargs) - self.ask = ask - self.tools.append(self.run_cell) - self.shell = get_shell() - self.shell.run_cell('import '+self.imps) -``` - -We have one additional parameter to creating a `CodeChat` beyond what we -pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is -`ask` – if that’s `True`, we’ll prompt the user before running code. - -``` python -@patch -def run_cell( - self:CodeChat, - code:str, # Code to execute in persistent IPython session -): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute - "Asks user for permission, and if provided, executes python `code` using persistent IPython session." - confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n' - if self.ask and input(confirm): return '#DECLINED#' - try: res = self.shell.run_cell(code) - except Exception as e: return traceback.format_exc() - return res.stdout if res.result is None else res.result -``` - -We just pass along requests to run code to the shell’s implementation. -Claude often prints results instead of just using the last expression, -so we capture stdout in those cases. - -``` python -sp = f'''You are a knowledgable assistant. Do not use tools unless needed. -Don't do complex calculations yourself -- use code for them. -The following modules are pre-imported for `run_cell` automatically: - -{CodeChat.imps} - -Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls. - -If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.''' -``` - -``` python -def get_user(ignored:str='' # Unused parameter - ): # Username of current user - "Get the username of the user running this session" - print("Looking up username") - return 'Jeremy' -``` - -In order to test out multi-stage tool use, we create a mock function -that Claude can call to get the current username. - -``` python -model = models[1] -``` - -``` python -chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3) -``` - -Claude gets confused sometimes about how tools work, so we use examples -to remind it: - -``` python -chat.h = [ - 'Calculate the square root of `10332`', 'math.sqrt(10332)', - '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.' 
-]
-```
-
-Providing a callable to toolloop’s `trace_func` lets us print out
-information during the loop:
-
-``` python
-def _show_cts(h):
-    for r in h:
-        for o in r.get('content'):
-            if hasattr(o,'text'): print(o.text)
-            nm = getattr(o, 'name', None)
-            if nm=='run_cell': print(o.input['code'])
-            elif nm: print(f'{o.name}({o.input})')
-```
-
-…and toolloop’s `cont_func` callable lets us provide a function which,
-if it returns `False`, stops the loop:
-
-``` python
-def _cont_decline(c):
-    return nested_idx(c, 'content', 'content') != '#DECLINED#'
-```
-
-Now we can try our code interpreter. We start by asking for a function
-to be created, which we’ll use in the next prompt to test that the
-interpreter is persistent.
-
-``` python
-pr = '''Create a 1-line function `checksum` for a string `s`,
-that multiplies together the ascii values of each character in `s` using `reduce`.'''
-chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
-```
-
-    Press Enter to execute, or enter "n" to skip?
-    ```
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    ```
-
-    Create a 1-line function `checksum` for a string `s`,
-    that multiplies together the ascii values of each character in `s` using `reduce`.
-    Let me help you create that function using `reduce` and `functools`.
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    The function has been created. Let me explain how it works:
-    1. It takes a string `s` as input
-    2. Uses `functools.reduce` to multiply together all ASCII values
-    3. `ord(y)` gets the ASCII value of each character
-    4. The initial value is 1 (the third parameter to reduce)
-    5. The lambda function multiplies the accumulator (x) with each new ASCII value
-
-    You can test it with any string. For example, you could try `checksum("hello")` to see it in action.
-
-The function has been created. Let me explain how it works: 1. It takes
-a string `s` as input 2. Uses `functools.reduce` to multiply together
-all ASCII values 3. `ord(y)` gets the ASCII value of each character 4.
-The initial value is 1 (the third parameter to reduce) 5. The lambda
-function multiplies the accumulator (x) with each new ASCII value
-
-You can test it with any string. For example, you could try
-`checksum("hello")` to see it in action.
-
- -- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` -- content: - `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -By asking for a calculation to be done on the username, we force it to -use multiple steps: - -``` python -pr = 'Use it to get the checksum of the username of this session.' -chat.toolloop(pr, trace_func=_show_cts) -``` - - Looking up username - Use it to get the checksum of the username of this session. - I'll first get the username using `get_user` and then apply our `checksum` function to it. - get_user({'ignored': ''}) - Press Enter to execute, or enter "n" to skip? - ``` - print(checksum("Jeremy")) - ``` - - Now I'll calculate the checksum of "Jeremy": - print(checksum("Jeremy")) - The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". - -The checksum of the username “Jeremy” is 1134987783204. This was -calculated by multiplying together the ASCII values of each character in -“Jeremy”. - -
- -- id: `msg_01UXvtcLzzykZpnQUT35v4uD` -- content: - `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
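-
-As a quick sanity check (not part of the original notebook), we can
-compute the same product directly in plain Python:
-
-``` python
-import functools
-functools.reduce(lambda x, y: x * ord(y), 'Jeremy', 1)
-```
-
-    1134987783204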
# The async version - - - -## Setup - -## Async SDK - -``` python -model = models[1] -cli = AsyncAnthropic() -``` - -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = await cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hello Jeremy! It’s nice to meet you. How can I assist you today? Is -there anything specific you’d like to talk about or any questions you -have? - -
- -- id: `msg_019gsEQs5dqb3kgwNJbTH27M` -- content: - `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 36}` - -
- ------------------------------------------------------------------------- - -source - -### AsyncClient - -> AsyncClient (model, cli=None, log=False) - -*Async Anthropic messages client.* - -
-Exported source - -``` python -class AsyncClient(Client): - def __init__(self, model, cli=None, log=False): - "Async Anthropic messages client." - super().__init__(model,cli,log) - if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) -``` - -
- -``` python -c = AsyncClient(model) -``` - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 36; Total: 46 - ------------------------------------------------------------------------- - -source - -### AsyncClient.\_\_call\_\_ - -> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, cli=None, log=False) - -*Make an async call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|--|------|---------|---------|
-| msgs | list |  | List of messages in the dialog |
-| sp | str |  | The system prompt |
-| temp | int | 0 | Temperature |
-| maxtok | int | 4096 | Maximum tokens |
-| prefill | str |  | Optional prefill to pass to Claude as start of its response |
-| stream | bool | False | Stream response? |
-| stop | NoneType | None | Stop sequence |
-| cli | NoneType | None |  |
-| log | bool | False |  |
- -
-Exported source - -``` python -@patch -async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): - async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield prefill - async for o in s.text_stream: yield o - self._log(await s.get_final_message(), prefill, msgs, kwargs) -``` - -
-
-Exported source - -``` python -@patch -@delegates(Client) -async def __call__(self:AsyncClient, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - **kwargs): - "Make an async call to Claude." - msgs = self._precall(msgs, prefill, stop, kwargs) - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = await self.c.messages.create( - model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -c = AsyncClient(model, log=True) -c.use -``` - - In: 0; Out: 0; Total: 0 - -``` python -c.model = models[1] -await c('Hi') -``` - -Hello! How can I assist you today? Feel free to ask any questions or let -me know if you need help with anything. - -
- -- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` -- content: - `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 29; Total: 37 - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await c(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon meaning of life. Many philosophers -and religions have proposed different answers, but it remains an open -question that individuals must grapple with for themselves. - -
- -- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -async for o in (await c('Hi', stream=True)): print(o, end='') -``` - - Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything. - -``` python -c.use -``` - - In: 40; Out: 109; Total: 149 - -``` python -async for o in (await c(q, prefill=pref, stream=True)): print(o, end='') -``` - - According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves. - -``` python -c.use -``` - - In: 64; Out: 160; Total: 224 - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -``` python -tools = [get_schema(sums)] -msgs = mk_msgs(pr) -r = await c(msgs, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, ns=globals()) -msgs += tr -contents(await c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?' - -## AsyncChat - ------------------------------------------------------------------------- - -source - -### AsyncChat - -> AsyncChat (model:Optional[str]=None, -> cli:Optional[claudette.core.Client]=None, sp='', -> tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None) - -*Anthropic async chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|--|------|---------|---------|
-| model | Optional | None | Model to use (leave empty if passing `cli`) |
-| cli | Optional | None | Client to use (leave empty if passing `model`) |
-| sp | str |  |  |
-| tools | Optional | None |  |
-| temp | int | 0 |  |
-| cont_pr | Optional | None |  |
- -
-Exported source - -``` python -@delegates() -class AsyncChat(Chat): - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - **kwargs): - "Anthropic async chat client." - super().__init__(model, cli, **kwargs) - if not cli: self.c = AsyncClient(model) -``` - -
- -``` python -sp = "Never mention what tools you use." -chat = AsyncChat(model, sp=sp) -chat.c.use, chat.h -``` - - (In: 0; Out: 0; Total: 0, []) - ------------------------------------------------------------------------- - -source - -### AsyncChat.\_\_call\_\_ - -> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False, -> prefill='', **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|--|------|---------|---------|
-| pr | NoneType | None | Prompt / message |
-| temp | int | 0 | Temperature |
-| maxtok | int | 4096 | Maximum tokens |
-| stream | bool | False | Stream response? |
-| prefill | str |  | Optional prefill to pass to Claude as start of its response |
-| kw |  |  |  |
- -
-Exported source - -``` python -@patch -async def _stream(self:AsyncChat, res): - async for o in res: yield o - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) -``` - -
-
-Exported source - -``` python -@patch -async def _append_pr(self:AsyncChat, pr=None): - prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history - if pr and prev_role == 'user': await self() - self._post_pr(pr, prev_role) -``` - -
-
-Exported source - -``` python -@patch -async def __call__(self:AsyncChat, - pr=None, # Prompt / message - temp=0, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - **kw): - await self._append_pr(pr) - if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] - res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) - return res -``` - -
- -``` python -await chat("I'm Jeremy") -await chat("What's my name?") -``` - -Your name is Jeremy, as you mentioned in your previous message. - -
- -- id: `msg_01NMugMXWpDP9iuTXeLkHarn` -- content: - `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await chat(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon answer. Common philosophical -perspectives include: - -1. Finding personal fulfillment -2. Serving others -3. Pursuing happiness -4. Creating meaning through our choices -5. Experiencing and appreciating existence - -Ultimately, many believe each individual must determine their own life’s -meaning. - -
- -- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat = AsyncChat(model, sp=sp) -async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') -``` - - Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? - -``` python -pr = f"What is {a}+{b}?" -chat = AsyncChat(model, sp=sp, tools=[sums]) -r = await chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -To answer this question, I can use the “sums” function to add these two -numbers together. Let me do that for you. - -
- -- id: `msg_015z1rffSWFxvj7rSpzc43ZE` -- content: - `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -await chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -fn = Path('samples/puppy.jpg') -img = fn.read_bytes() -``` - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -await c([msg]) -``` - -The flowers in this image are purple. They appear to be small, -daisy-like flowers, possibly asters or some type of purple daisy, -blooming in the background behind the adorable puppy in the foreground. - -
- -- id: `msg_017qgZggLjUY915mWbWCkb9X` -- content: - `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
# claudette
-
-
-
-> **NB**: If you are reading this in GitHub’s readme, we recommend you
-> instead read the much more nicely formatted [documentation
-> format](https://claudette.answer.ai/) of this tutorial.
-
-*Claudette* is a wrapper for Anthropic’s [Python
-SDK](https://github.com/anthropics/anthropic-sdk-python).
-
-The SDK works well, but it is quite low level – it leaves the developer
-to do a lot of stuff manually. That’s a lot of extra work and
-boilerplate! Claudette automates pretty much everything that can be
-automated, whilst providing full control. Amongst the features provided:
-
-- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
-  creates stateful dialogs
-- Support for *prefill*, which tells Claude what to use as the first few
-  words of its response
-- Convenient image support
-- Simple and convenient support for Claude’s new Tool Use API.
-
-You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
-key provided to you by Anthropic in order to use this library.
-
-Note that this library is the first ever “literate nbdev” project. That
-means that the actual source code for the library is a rendered Jupyter
-Notebook which includes callout notes and tips, HTML tables and images,
-detailed explanations, and teaches *how* and *why* the code is written
-the way it is. Even if you’ve never used the Anthropic Python SDK or
-Claude API before, you should be able to read the source code. Click
-[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
-or clone the git repo and execute the notebook yourself to see every
-step of the creation process in action. The tutorial below includes
-links to API details which will take you to relevant parts of the
-source. The reason this project is a new kind of literate program is
-because we take seriously Knuth’s call to action, that we have a “*moral
-commitment*” to never write an “*illiterate program*” – and so we have a
-commitment to making literate programming an easy and pleasant
-experience. (For more on this, see [this
-talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
-
-> “*Let us change our traditional attitude to the construction of
-> programs: Instead of imagining that our main task is to instruct a
-> **computer** what to do, let us concentrate rather on explaining to
-> **human beings** what we want a computer to do.*” Donald E. Knuth,
-> [Literate
-> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
-> (1984)
-
-## Install
-
-``` sh
-pip install claudette
-```
-
-## Getting started
-
-Anthropic’s Python SDK will automatically be installed with Claudette,
-if you don’t already have it.
-
-``` python
-import os
-# os.environ['ANTHROPIC_LOG'] = 'debug'
-```
-
-To print every HTTP request and response in full, uncomment the above
-line.
-
-``` python
-from claudette import *
-```
-
-Claudette only exports the symbols that are needed to use the library,
-so you can use `import *` to import them. Alternatively, just use:
-
-``` python
-import claudette
-```
-
-…and then add the prefix `claudette.` to any usages of the module.
-
-Claudette provides `models`, which is a list of models currently
-available from the SDK.
-
-``` python
-models
-```
-
-    ['claude-3-opus-20240229',
-     'claude-3-5-sonnet-20241022',
-     'claude-3-haiku-20240307']
-
-For these examples, we’ll use Sonnet 3.5, since it’s awesome!
- -``` python -model = models[1] -``` - -## Chat - -The main interface to Claudette is the -[`Chat`](https://claudette.answer.ai/core.html#chat) class, which -provides a stateful interface to Claude: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -chat("I'm Jeremy") -``` - -Hello Jeremy, nice to meet you. - -
- -- id: `msg_015oK9jEcra3TEKHUGYULjWB` -- content: - `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01Si8sTFJe8d8vq7enanbAwj` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -As you see above, displaying the results of a call in a notebook shows -just the message contents, with the other details hidden behind a -collapsible section. Alternatively you can `print` the details: - -``` python -print(r) -``` - - Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) - -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. Let’s try it out: - -``` python -chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,') -``` - -According to Douglas Adams,42. Philosophically, it’s to find personal -meaning through relationships, purpose, and experiences. - -
- -- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` -- content: - `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -You can add `stream=True` to stream the results as soon as they arrive -(although you will only see the gradual generation if you execute the -notebook yourself, of course!) - -``` python -for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): - print(o, end='') -``` - - It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. - -### Async - -Alternatively, you can use -[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or -[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for -the async versions, e.g: - -``` python -chat = AsyncChat(model) -await chat("I'm Jeremy") -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_016Q8cdc3sPWBS8eXcNj841L` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Remember to use `async for` when streaming in this case: - -``` python -async for o in await chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,', stream=True): - print(o, end='') -``` - - According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. - -## Prompt caching - -If you use `mk_msg(msg, cache=True)`, then the message is cached using -Claude’s [prompt -caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) -feature. For instance, here we use caching when asking about Claudette’s -readme file: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -``` - -``` python -nbtxt = Path('README.txt').read_text() -msg = f''' -{nbtxt} - -In brief, what is the purpose of this project based on the readme?''' -r = chat(mk_msg(msg, cache=True)) -r -``` - -Claudette is a high-level wrapper for Anthropic’s Python SDK that -automates common tasks and provides additional functionality. Its main -features include: - -1. A Chat class for stateful dialogs -2. Support for prefill (controlling Claude’s initial response words) -3. Convenient image handling -4. Simple tool use API integration -5. Support for multiple model providers (Anthropic, AWS Bedrock, Google - Vertex) - -The project is notable for being the first “literate nbdev” project, -meaning its source code is written as a detailed, readable Jupyter -Notebook that includes explanations, examples, and teaching material -alongside the functional code. - -The goal is to simplify working with Claude’s API while maintaining full -control, reducing boilerplate code and manual work that would otherwise -be needed with the base SDK. - -
- -- id: `msg_014rVQnYoZXZuyWUCMELG1QW` -- content: - `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` - -
-
-The response records that a cache has been created using these input
-tokens:
-
-``` python
-print(r.usage)
-```
-
-    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
-
-We can now ask a followup question in this chat:
-
-``` python
-r = chat('How does it make tool use more ergonomic?')
-r
-```
-
-According to the README, Claudette makes tool use more ergonomic in
-several ways:
-
-1. It uses docments to make Python function definitions more
-   user-friendly - each parameter and return value should have a type
-   and description
-
-2. It handles the tool calling process automatically - when Claude
-   returns a tool_use message, Claudette manages calling the tool with
-   the provided parameters behind the scenes
-
-3. It provides a `toolloop` method that can handle multiple tool calls
-   in a single step to solve more complex problems
-
-4. It allows you to pass a list of tools to the Chat constructor and
-   optionally force Claude to always use a specific tool via
-   `tool_choice`
-
-Here’s a simple example from the README:
-
-``` python
-def sums(
-    a:int, # First thing to sum
-    b:int=1 # Second thing to sum
-) -> int: # The sum of the inputs
-    "Adds a + b."
-    print(f"Finding the sum of {a} and {b}")
-    return a + b
-
-chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
-```
-
-This makes it much simpler compared to manually handling all the tool
-use logic that would be required with the base SDK.
-
- -- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` -- content: - `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` - -
- -We can see that this only used ~200 regular input tokens – the 7000+ -context tokens have been read from cache. - -``` python -print(r.usage) -``` - - Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) - -``` python -chat.use -``` - - In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 - -## Tool use - -[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude -use external tools. - -We use [docments](https://fastcore.fast.ai/docments.html) to make -defining Python functions as ergonomic as possible. Each parameter (and -the return value) should have a type, and a docments comment with the -description of what it is. As an example we’ll write a simple function -that adds numbers together, and will tell us when it’s being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -Sometimes Claude will say something like “according to the `sums` tool -the answer is” – generally we’d rather it just tells the user the -answer, so we can use a system prompt to help with this: - -``` python -sp = "Never mention what tools you use." -``` - -We’ll get Claude to add up some long numbers: - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -To use tools, pass a list of them to -[`Chat`](https://claudette.answer.ai/core.html#chat): - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -``` - -To force Claude to always answer using a tool, set `tool_choice` to that -function name. When Claude needs to use a tool, it doesn’t return the -answer, but instead returns a `tool_use` message, which means we have to -call the named tool with the provided parameters. - -``` python -r = chat(pr, tool_choice='sums') -r -``` - - Finding the sum of 604542 and 6458932 - -ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_014xrPyotyiBmFSctkp1LZHk` -- content: - `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Claudette handles all that for us – we just call it again, and it all -happens automatically: - -``` python -chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_01151puJxG8Fa6k6QSmzwKQA` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
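-For reference, here is a rough sketch of the manual round trip that
-[`Chat`](https://claudette.answer.ai/core.html#chat) automates, using
-the lower-level helpers documented in the source sections
-([`Client`](https://claudette.answer.ai/core.html#client), `mk_msgs`,
-`mk_ns`, and `mk_toolres`); this is illustrative only, re-using `pr`,
-`sp`, and `sums` from above:
-
-``` python
-# A sketch of the manual tool-use flow; `Chat` does all of this for us.
-c = Client(model)
-msgs = mk_msgs(pr)
-r = c(msgs, sp=sp, tools=[sums], tool_choice='sums')  # Claude requests the tool
-msgs += mk_toolres(r, ns=mk_ns(sums))                 # run `sums`, append the result
-contents(c(msgs, sp=sp, tools=[sums]))                # Claude now answers with the sum
-```
-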
-
-You can see how many tokens have been used at any time by checking the
-`use` property. Note that (as of May 2024) tool use in Claude uses a
-*lot* of tokens, since it automatically adds a large system prompt.
-
-``` python
-chat.use
-```
-
-    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
-
-We can do everything needed to use tools in a single step, by using
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
-This can even call multiple tools as needed to solve a problem. For
-example, let’s define a tool to handle multiplication:
-
-``` python
-def mults(
-    a:int, # First thing to multiply
-    b:int=1 # Second thing to multiply
-) -> int: # The product of the inputs
-    "Multiplies a * b."
-    print(f"Finding the product of {a} and {b}")
-    return a * b
-```
-
-Now with a single call we can calculate `(a+b)*2` – by passing a
-`trace_func` (here `print`) we can see each response from Claude in the
-process:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-pr = f'Calculate ({a}+{b})*2'
-pr
-```
-
-    'Calculate (604542+6458932)*2'
-
-``` python
-chat.toolloop(pr, trace_func=print)
-```
-
-    Finding the sum of 604542 and 6458932
-    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
-    Finding the product of 7063474 and 2
-    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
-    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
-
-The final result is 14,126,948.
-
- -- id: `msg_0162teyBcJHriUzZXMPz4r5d` -- content: - `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
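-Under the hood, the idea behind
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop)
-is simply to keep calling the chat until Claude stops requesting tools.
-A minimal sketch of that idea (not the actual implementation; the
-`max_steps` guard and function name are illustrative):
-
-``` python
-def mini_toolloop(chat, pr, max_steps=10):
-    "Keep calling `chat` until Claude returns a final answer."
-    r = chat(pr)
-    for _ in range(max_steps):
-        if chat.c.stop_reason != 'tool_use': break  # final answer reached
-        r = chat()  # an empty call runs the pending tool and continues
-    return r
-```
-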
- -## Structured data - -If you just want the immediate result from a single tool, use -[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). - -``` python -cli = Client(model) -``` - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -cli.structured("What is 604542+6458932", sums) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -This is particularly useful for getting back structured information, -e.g: - -``` python -class President: - "Information about a president of the United States" - def __init__(self, - first:str, # first name - last:str, # last name - spouse:str, # name of spouse - years_in_office:str, # format: "{start_year}-{end_year}" - birthplace:str, # name of city - birth_year:int # year of birth, `0` if unknown - ): - assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" - store_attr() - - __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') -``` - -``` python -cli.structured("Provide key information about the 3rd President of the United States", President) -``` - - [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object -as before: - -``` python -chat = Chat(model) -``` - -Claudette expects images as a list of bytes, so we read in the file: - -``` python -img = fn.read_bytes() -``` - -Prompts to Claudette can be lists, containing text, images, or both, eg: - -``` python -chat([img, "In brief, what color flowers are in this image?"]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -The image is included as input tokens. - -``` python -chat.use -``` - - In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 - -Alternatively, Claudette supports creating a multi-stage chat with -separate image and text prompts. For instance, you can pass just the -image as the initial prompt (in which case Claude will make some general -comments about what it sees), and then follow up with questions in -additional prompts: - -``` python -chat = Chat(model) -chat(img) -``` - -What an adorable Cavalier King Charles Spaniel puppy! The photo captures -the classic brown and white coloring of the breed, with those soulful -dark eyes that are so characteristic. The puppy is lying in the grass, -and there are lovely purple asters blooming in the background, creating -a beautiful natural setting. The combination of the puppy’s sweet -expression and the delicate flowers makes for a charming composition. -Cavalier King Charles Spaniels are known for their gentle, affectionate -nature, and this little one certainly seems to embody those traits with -its endearing look. - -
- -- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` -- content: - `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What direction is the puppy facing?') -``` - -The puppy is facing towards the left side of the image. Its head is -positioned so we can see its right side profile, though it appears to be -looking slightly towards the camera, giving us a good view of its -distinctive brown and white facial markings and one of its dark eyes. -The puppy is lying down with its white chest/front visible against the -green grass. - -
- -- id: `msg_01AeR9eWjbxa788YF97iErtN` -- content: - `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What color is it?') -``` - -The puppy has a classic Cavalier King Charles Spaniel coat with a rich -chestnut brown (sometimes called Blenheim) coloring on its ears and -patches on its face, combined with a bright white base color. The white -is particularly prominent on its face (creating a distinctive blaze down -the center) and chest area. This brown and white combination is one of -the most recognizable color patterns for the breed. - -
- -- id: `msg_01R91AqXG7pLc8hK24F5mc7x` -- content: - `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Note that the image is passed in again for every input in the dialog, so
-the number of input tokens increases quickly with this kind of chat.
-(For large images, using prompt caching might be a good idea.)
-
-``` python
-chat.use
-```
-
-    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964
-
-## Other model providers
-
-You can also use 3rd party providers of Anthropic models, as shown here.
-
-### Amazon Bedrock
-
-These are the models available through Bedrock:
-
-``` python
-models_aws
-```
-
-    ['anthropic.claude-3-opus-20240229-v1:0',
-     'anthropic.claude-3-5-sonnet-20241022-v2:0',
-     'anthropic.claude-3-sonnet-20240229-v1:0',
-     'anthropic.claude-3-haiku-20240307-v1:0']
-
-To use them, call `AnthropicBedrock` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicBedrock
-```
-
-``` python
-ab = AnthropicBedrock(
-    aws_access_key=os.environ['AWS_ACCESS_KEY'],
-    aws_secret_key=os.environ['AWS_SECRET_KEY'],
-)
-client = Client(models_aws[-1], ab)
-```
-
-Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
-object, passing this client to the `cli` parameter – and from then on,
-everything is identical to the previous examples.
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
-Anthropic. How can I help you today?
-
- -- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
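-Since this is the same
-[`Client`](https://claudette.answer.ai/core.html#client) class, the
-earlier patterns should carry over unchanged. For instance, a sketch of
-tool use over Bedrock, re-using the `sums` tool and the system prompt
-`sp` from above (illustrative, not run here):
-
-``` python
-chat = Chat(cli=client, sp=sp, tools=[sums])
-chat(f"What is {a}+{b}?")
-```
-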
-
-### Google Vertex
-
-These are the models available through Vertex:
-
-``` python
-models_goog
-```
-
-    ['claude-3-opus@20240229',
-     'claude-3-5-sonnet-v2@20241022',
-     'claude-3-sonnet@20240229',
-     'claude-3-haiku@20240307']
-
-To use them, call `AnthropicVertex` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicVertex
-import google.auth
-```
-
-``` python
-project_id = google.auth.default()[1]
-gv = AnthropicVertex(project_id=project_id, region="us-east5")
-client = Client(models_goog[-1], gv)
-```
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-## Extensions
-
-- [Pydantic Structured
-  Output](https://github.com/tom-pollak/claudette-pydantic)
-# claudette-pydantic
-
-
-
-> Adds Pydantic support for
-> [claudette](https://github.com/AnswerDotAI/claudette) through function
-> calling
-
-claudette_pydantic adds a `struct` method to claudette’s `Client` and
-`Chat` classes.
-
-`struct` is a wrapper around `__call__`: provide a Pydantic `BaseModel`
-as the schema, and the model will return an initialized `BaseModel`
-object.
-
-I’ve found Haiku to be quite reliable at even complicated schemas.
-
-## Install
-
-``` sh
-pip install claudette-pydantic
-```
-
-## Getting Started
-
-``` python
-from claudette.core import *
-import claudette_pydantic # patches claudette with `struct`
-from pydantic import BaseModel, Field
-from typing import Literal, Union, List
-```
-
-``` python
-model = models[-1]
-model
-```
-
-    'claude-3-haiku-20240307'
-
-``` python
-class Pet(BaseModel):
-    "Create a new pet"
-    name: str
-    age: int
-    owner: str = Field(default="NA", description="Owner name. Do not return if not given.")
-    type: Literal['dog', 'cat', 'mouse']
-
-c = Client(model)
-print(repr(c.struct(msgs="Can you make a pet for my dog Mac? He's 14 years old", resp_model=Pet)))
-print(repr(c.struct(msgs="Tom: my cat is juma and he's 16 years old", resp_model=Pet)))
-```
-
-    Pet(name='Mac', age=14, owner='NA', type='dog')
-    Pet(name='juma', age=16, owner='Tom', type='cat')
-
-## Going Deeper
-
-This example, pulled from the [pydantic
-docs](https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions),
-uses a list of discriminated unions, discriminated by `pet_type`. The
-model is required to return different fields for each object type.
-
-You should be able to use the full power of Pydantic here. I’ve found
-that instructor for Claude fails on this example.
-
-Each sub-`BaseModel` may also have a docstring describing its usage.
-I’ve found prompting this way to be quite reliable.
-
-``` python
-class Cat(BaseModel):
-    pet_type: Literal['cat']
-    meows: int
-
-
-class Dog(BaseModel):
-    pet_type: Literal['dog']
-    barks: float
-
-
-class Reptile(BaseModel):
-    pet_type: Literal['lizard', 'dragon']
-    scales: bool
-
-# Dummy to show doc strings
-class Create(BaseModel):
-    "Pass as final member of the `pet` list to indicate success"
-    pet_type: Literal['create']
-
-class OwnersPets(BaseModel):
-    """
-    Information to gather for an Owner's pets
-    """
-    pet: List[Union[Cat, Dog, Reptile, Create]] = Field(..., discriminator='pet_type')
-
-chat = Chat(model)
-pr = "hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows"
-print(repr(chat.struct(OwnersPets, pr=pr)))
-print(repr(chat.struct(OwnersPets, pr="actually my dragon does have scales, can you change that for me?")))
-```
-
-    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
-    OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])
-
-While `struct` uses tool calling to enforce the schema, the response is
-saved in history as its `repr`, to keep the user/assistant/user flow.
-
-``` python
-chat.h
-```
-
-    [{'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
-     {'role': 'assistant',
-      'content': [{'type': 'text',
-        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]},
-     {'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'actually my dragon does have scales, can you change that for me?'}]},
-     {'role': 'assistant',
-      'content': [{'type': 'text',
-        'text': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=True), Cat(pet_type='cat', meows=2), Create(pet_type='create')])"}]}]
-
-Alternatively you can use `struct` as a tool-use flow with
-`treat_as_output=False` (but this requires the next input to be an
-assistant message).
-
-``` python
-chat.struct(OwnersPets, pr=pr, treat_as_output=False)
-chat.h[-3:]
-```
-
-    [{'role': 'user',
-      'content': [{'type': 'text',
-        'text': 'hello I am a new owner and I would like to add some pets for me. I have a dog which has 6 barks, a dragon with no scales, and a cat with 2 meows'}]},
-     {'role': 'assistant',
-      'content': [ToolUseBlock(id='toolu_015ggQ1iH6BxBffd7erj3rjR', input={'pet': [{'pet_type': 'dog', 'barks': 6.0}, {'pet_type': 'dragon', 'scales': False}, {'pet_type': 'cat', 'meows': 2}]}, name='OwnersPets', type='tool_use')]},
-     {'role': 'user',
-      'content': [{'type': 'tool_result',
-        'tool_use_id': 'toolu_015ggQ1iH6BxBffd7erj3rjR',
-        'content': "OwnersPets(pet=[Dog(pet_type='dog', barks=6.0), Reptile(pet_type='dragon', scales=False), Cat(pet_type='cat', meows=2)])"}]}]
-
-(So we can’t prompt it again here; the next input would have to be an
-assistant message.)
-
-### User Creation & few-shot examples
-
-You can even add few-shot examples *for each input*
-
-``` python
-class User(BaseModel):
-    "User creation tool"
-    age: int = Field(description='Age of the user')
-    name: str = Field(title='Username')
-    password: str = Field(
-        json_schema_extra={
-            'title': 'Password',
-            'description': 'Password of the user',
-            'examples': ['Monkey!123'],
-        }
-    )
-print(repr(c.struct(msgs=["Can you create me a new user for tom age 22"], resp_model=User, sp="for a given user, generate a similar password based on examples")))
-```
-
-    User(age=22, name='tom', password='Monkey!123')
-
-It uses the few-shot example, as asked for in the system prompt.
-
-### You can find more examples in [nbs/examples](nbs/examples)
-
-## Signature:
-
-``` python
-Client.struct(
-    self: claudette.core.Client,
-    msgs: list,
-    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
-    **, # Client.__call__ kwargs...
-) -> BaseModel
-```
-
-``` python
-Chat.struct(
-    self: claudette.core.Chat,
-    resp_model: type[BaseModel], # non-initialized Pydantic BaseModel
-    treat_as_output=True, # In chat history, tool is reflected
-    **, # Chat.__call__ kwargs...
-) -> BaseModel
-```
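-
-Putting the signatures together, a minimal sketch of `Chat.struct` usage
-with the `Pet` model defined earlier (illustrative; the prompt and the
-expected result shown in the comment are assumptions):
-
-``` python
-chat = Chat(model)
-pet = chat.struct(Pet, pr="my mouse Stuart is 3")  # e.g. Pet(name='Stuart', age=3, ...)
-```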
diff --git a/llms-ctx.txt b/llms-ctx.txt deleted file mode 100644 index 7b7e5f2..0000000 --- a/llms-ctx.txt +++ /dev/null @@ -1,4607 +0,0 @@ -Things to remember when using Claudette: - -- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key -- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) -- The library provides both synchronous and asynchronous interfaces -- Use `Chat()` for maintaining conversation state and handling tool interactions -- When using tools, the library automatically handles the request/response loop -- Image support is built in but only available on compatible models (not Haiku)# Claudette’s source - - - -This is the ‘literate’ source code for Claudette. You can view the fully -rendered version of the notebook -[here](https://claudette.answer.ai/core.html), or you can clone the git -repo and run the [interactive -notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb) -in Jupyter. The notebook is converted the [Python module -claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py) -using [nbdev](https://nbdev.fast.ai/). The goal of this source code is -to both create the Python module, and also to teach the reader *how* it -is created, without assuming much existing knowledge about Claude’s API. - -Most of the time you’ll see that we write some source code *first*, and -then a description or discussion of it *afterwards*. - -## Setup - -``` python -import os -# os.environ['ANTHROPIC_LOG'] = 'debug' -``` - -To print every HTTP request and response in full, uncomment the above -line. This functionality is provided by Anthropic’s SDK. - -
- -> **Tip** -> -> If you’re reading the rendered version of this notebook, you’ll see an -> “Exported source” collapsible widget below. If you’re reading the -> source notebook directly, you’ll see `#| exports` at the top of the -> cell. These show that this piece of code will be exported into the -> python module that this notebook creates. No other code will be -> included – any other code in this notebook is just for demonstration, -> documentation, and testing. -> -> You can toggle expanding/collapsing the source code of all exported -> sections by using the ` Code` menu in the top right of the rendered -> notebook page. - -
- -
-Exported source - -``` python -model_types = { - # Anthropic - 'claude-3-opus-20240229': 'opus', - 'claude-3-5-sonnet-20241022': 'sonnet', - 'claude-3-haiku-20240307': 'haiku-3', - 'claude-3-5-haiku-20241022': 'haiku-3-5', - # AWS - 'anthropic.claude-3-opus-20240229-v1:0': 'opus', - 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet', - 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet', - 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku', - # Google - 'claude-3-opus@20240229': 'opus', - 'claude-3-5-sonnet-v2@20241022': 'sonnet', - 'claude-3-sonnet@20240229': 'sonnet', - 'claude-3-haiku@20240307': 'haiku', -} - -all_models = list(model_types) -``` - -
-
-Exported source - -``` python -text_only_models = ('claude-3-5-haiku-20241022',) -``` - -
-
-These are the current versions and
-[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
-models at the time of writing.
-
-``` python
-model = models[1]; model
-```
-
-    'claude-3-5-sonnet-20241022'
-
-For examples, we’ll use Sonnet 3.5, since it’s awesome.
-
-## Anthropic SDK
-
-``` python
-cli = Anthropic()
-```
-
-This is what Anthropic’s SDK provides for interacting with Claude from
-Python. To use it, pass it a list of *messages*, with *content* and a
-*role*. The roles should alternate between *user* and *assistant*.
-
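-For instance, a dialog with one completed exchange and a new user prompt
-would be structured like this (an illustrative sketch of the format just
-described; the actual calls follow below):
-
-``` python
-msgs = [
-    {'role': 'user', 'content': "I'm Jeremy"},
-    {'role': 'assistant', 'content': "Nice to meet you, Jeremy."},
-    {'role': 'user', 'content': 'What did I just say my name was?'},
-]
-```
-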
- -> **Tip** -> -> After the code below you’ll see an indented section with an orange -> vertical line on the left. This is used to show the *result* of -> running the code above. Because the code is running in a Jupyter -> Notebook, we don’t have to use `print` to display results, we can just -> type the expression directly, as we do with `r` here. - -
- -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -
-
-### Formatting output
-
-That output is pretty long and hard to read, so let’s clean it up. We’ll
-start by pulling out the `Content` part of the message. To do that,
-we’re going to write our first function which will be included in the
-`claudette/core.py` module.
-
-
-> **Tip**
->
-> This is the first exported public function or class we’re creating
-> (the previous export was of a variable). In the rendered version of
-> the notebook for these you’ll see 4 things, in this order (unless the
-> symbol starts with a single `_`, which indicates it’s *private*):
->
-> - The signature (with the symbol name as a heading, with a horizontal
->   rule above)
-> - A table of parameter docs (if provided)
-> - The doc string (in italics).
-> - The source code (in a collapsible “Exported source” block)
->
-> After that, we generally provide a bit more detail on what we’ve
-> created, and why, along with a sample usage.
-
-
-------------------------------------------------------------------------
-
-source
-
-### find_block
-
->      find_block (r:collections.abc.Mapping, blk_type:type=<class
->                  'anthropic.types.text_block.TextBlock'>)
-
-*Find the first block of type `blk_type` in `r.content`.*
-
-|  | Type | Default | Details |
-|----|----|----|----|
-| r | Mapping |  | The message to look in |
-| blk_type | type | TextBlock | The type of block to find |
- -
-Exported source - -``` python -def find_block(r:abc.Mapping, # The message to look in - blk_type:type=TextBlock # The type of block to find - ): - "Find the first block of type `blk_type` in `r.content`." - return first(o for o in r.content if isinstance(o,blk_type)) -``` - -
- -This makes it easier to grab the needed parts of Claude’s responses, -which can include multiple pieces of content. By default, we look for -the first text block. That will generally have the content we want to -display. - -``` python -find_block(r) -``` - - TextBlock(text="Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", type='text') - ------------------------------------------------------------------------- - -source - -### contents - -> contents (r) - -*Helper to get the contents from Claude response `r`.* - -
-Exported source - -``` python -def contents(r): - "Helper to get the contents from Claude response `r`." - blk = find_block(r) - if not blk and r.content: blk = r.content[0] - return blk.text.strip() if hasattr(blk,'text') else str(blk) -``` - -
- -For display purposes, we often just want to show the text itself. - -``` python -contents(r) -``` - - "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?" - -
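-Note that [`find_block`](https://claudette.answer.ai/core.html#find_block)
-takes a `blk_type` parameter, so the same helper can pull out other kinds
-of content too. A preview of a use we’ll see later in the tool-use
-section (assuming `r` is a tool-use response):
-
-``` python
-fc = find_block(r, ToolUseBlock)  # first ToolUseBlock rather than first TextBlock
-```
-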
-
-Exported source
-
-``` python
-@patch
-def _repr_markdown_(self:(Message)):
-    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
-    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
-    return f"""{cts}
-
-<details>
-
-- {det}
-
-</details>"""
-```
- -Jupyter looks for a `_repr_markdown_` method in displayed objects; we -add this in order to display just the content text, and collapse full -details into a hideable section. Note that `patch` is from -[fastcore](https://fastcore.fast.ai/), and is used to add (or replace) -functionality in an existing class. We pass the class(es) that we want -to patch as type annotations to `self`. In this case, `_repr_markdown_` -is being added to Anthropic’s `Message` class, so when we display the -message now we just see the contents, and the details are hidden away in -a collapsible details block. - -``` python -r -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant. How can I help -you today? - -
- -- id: `msg_017Q8WYvvANfyHWLJWt95UR1` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 27}` - -
- -One key part of the response is the -[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells -us how many tokens we used by returning a `Usage` object. - -We’ll add some helpers to make things a bit cleaner for creating and -formatting these objects. - -``` python -r.usage -``` - - In: 10; Out: 27; Cache create: 0; Cache read: 0; Total: 37 - ------------------------------------------------------------------------- - -source - -### usage - -> usage (inp=0, out=0, cache_create=0, cache_read=0) - -*Slightly more concise version of `Usage`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|----|----|----|----|
-| inp | int | 0 | input tokens |
-| out | int | 0 | Output tokens |
-| cache_create | int | 0 | Cache creation tokens |
-| cache_read | int | 0 | Cache read tokens |
- -
-Exported source - -``` python -def usage(inp=0, # input tokens - out=0, # Output tokens - cache_create=0, # Cache creation tokens - cache_read=0 # Cache read tokens - ): - "Slightly more concise version of `Usage`." - return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read) -``` - -
- -The constructor provided by Anthropic is rather verbose, so we clean it -up a bit, using a lowercase version of the name. - -``` python -usage(5) -``` - - In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 - ------------------------------------------------------------------------- - -source - -### Usage.total - -> Usage.total () - -
-Exported source - -``` python -@patch(as_prop=True) -def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0) -``` - -
- -Adding a `total` property to `Usage` makes it easier to see how many -tokens we’ve used up altogether. - -``` python -usage(5,1).total -``` - - 6 - ------------------------------------------------------------------------- - -source - -### Usage.\_\_repr\_\_ - -> Usage.__repr__ () - -*Return repr(self).* - -
-Exported source - -``` python -@patch -def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}' -``` - -
- -In python, patching `__repr__` lets us change how an object is -displayed. (More generally, methods starting and ending in `__` in -Python are called `dunder` methods, and have some `magic` behavior – -such as, in this case, changing how an object is displayed.) - -``` python -usage(5) -``` - - In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5 - ------------------------------------------------------------------------- - -source - -### Usage.\_\_add\_\_ - -> Usage.__add__ (b) - -*Add together each of `input_tokens` and `output_tokens`* - -
-Exported source - -``` python -@patch -def __add__(self:Usage, b): - "Add together each of `input_tokens` and `output_tokens`" - return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0)) -``` - -
- -And, patching `__add__` lets `+` work on a `Usage` object. - -``` python -r.usage+r.usage -``` - - In: 20; Out: 54; Cache create: 0; Cache read: 0; Total: 74 - -### Creating messages - -Creating correctly formatted `dict`s from scratch every time isn’t very -handy, so next up we’ll add helpers for this. - -``` python -def mk_msg(content, role='user', **kw): - return dict(role=role, content=content, **kw) -``` - -We make things a bit more convenient by writing a function to create a -message for us. - -
- -> **Note** -> -> You may have noticed that we didn’t export the -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function -> (i.e. there’s no “Exported source” block around it). That’s because -> we’ll need more functionality in our final version than this version -> has – so we’ll be defining a more complete version later. Rather than -> refactoring/editing in notebooks, often it’s helpful to simply -> gradually build up complexity by re-defining a symbol. - -
- -``` python -prompt = "I'm Jeremy" -m = mk_msg(prompt) -m -``` - - {'role': 'user', 'content': "I'm Jeremy"} - -``` python -r = cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hi Jeremy! I’m Claude. Nice to meet you. How can I help you today? - -
- -- id: `msg_01BhkuvQtEPoC8wHSbU7YRpV` -- content: - `[{'text': "Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 24}` - -
- ------------------------------------------------------------------------- - -source - -### mk_msgs - -> mk_msgs (msgs:list, **kw) - -*Helper to set ‘assistant’ role on alternate messages.* - -
-Exported source - -``` python -def mk_msgs(msgs:list, **kw): - "Helper to set 'assistant' role on alternate messages." - if isinstance(msgs,str): msgs=[msgs] - return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)] -``` - -
-
-LLMs, including Claude, don’t actually have state, but instead dialogs
-are created by passing back all previous prompts and responses every
-time. With Claude, they always alternate *user* and *assistant*.
-Therefore we create a function to make it easier to build up these
-dialog lists.
-
-But to do so, we need to update
-[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we can
-not only pass a `str` as `content`, but can also pass a `dict` or an
-object with a `content` attr, since these are both types of message that
-Claude can create. To do so, we check for a `content` key or attr, and
-use it if found.
-
-Exported source - -``` python -def _str_if_needed(o): - if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o - return str(o) -``` - -
- -``` python -def mk_msg(content, role='user', **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content['content'] - return dict(role=role, content=_str_if_needed(content), **kw) -``` - -``` python -msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?']) -msgs -``` - - [{'role': 'user', 'content': "I'm Jeremy"}, - {'role': 'assistant', - 'content': [TextBlock(text="Hi Jeremy! I'm Claude. Nice to meet you. How can I help you today?", type='text')]}, - {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}] - -Now, if we pass this list of messages to Claude, the model treats it as -a conversation to respond to. - -``` python -cli.messages.create(messages=msgs, model=model, max_tokens=200) -``` - -You just told me your name is Jeremy. - -
- -- id: `msg_01KZski1R3z1iGjF6XsBb9dM` -- content: - `[{'text': 'You just told me your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 46, 'output_tokens': 13}` - -
- -## Client - ------------------------------------------------------------------------- - -source - -### Client - -> Client (model, cli=None, log=False) - -*Basic Anthropic messages client.* - -
-Exported source - -``` python -class Client: - def __init__(self, model, cli=None, log=False): - "Basic Anthropic messages client." - self.model,self.use = model,usage() - self.text_only = model in text_only_models - self.log = [] if log else None - self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})) -``` - -
-
-We’ll create a simple
-[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
-which tracks usage and stores the model to use. We don’t add any methods
-right away – instead we’ll use `patch` for that so we can add and
-document them incrementally.
-
-``` python
-c = Client(model)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-
-Exported source - -``` python -@patch -def _r(self:Client, r:Message, prefill=''): - "Store the result of the message and accrue total usage." - if prefill: - blk = find_block(r) - blk.text = prefill + (blk.text or '') - self.result = r - self.use += r.usage - self.stop_reason = r.stop_reason - self.stop_sequence = r.stop_sequence - return r -``` - -
- -We use a `_` prefix on private methods, but we document them here in the -interests of literate source code. - -`_r` will be used each time we get a new result, to track usage and also -to keep the result available for later. - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 24; Cache create: 0; Cache read: 0; Total: 34 - -Whereas OpenAI’s models use a `stream` parameter for streaming, -Anthropic’s use a separate method. We implement Anthropic’s approach in -a private method, and then use a `stream` parameter in `__call__` for -consistency: - -
-
-Exported source
-
-``` python
-@patch
-def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
-    self._r(final, prefill)
-    if self.log is not None: self.log.append({
-        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
-        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
-    })
-    return self.result
-```
-
-
-Exported source - -``` python -@patch -def _stream(self:Client, msgs:list, prefill='', **kwargs): - with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield(prefill) - yield from s.text_stream - self._log(s.get_final_message(), prefill, msgs, **kwargs) -``` - -
- -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. However Claude doesn’t actually repeat that in the -response, so for convenience we add it. - -
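-To make this concrete, here is roughly what gets sent when a prefill is
-given (an illustrative sketch using
-[`mk_msgs`](https://claudette.answer.ai/core.html#mk_msgs) from above):
-
-``` python
-# The prefill is appended as a trailing `assistant` message:
-mk_msgs(["Concisely, what is the meaning of life?", "According to Douglas Adams,"])
-# [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
-#  {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
-```
-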
-Exported source - -``` python -@patch -def _precall(self:Client, msgs, prefill, stop, kwargs): - pref = [prefill.strip()] if prefill else [] - if not isinstance(msgs,list): msgs = [msgs] - if stop is not None: - if not isinstance(stop, (list)): stop = [stop] - kwargs["stop_sequences"] = stop - msgs = mk_msgs(msgs+pref) - return msgs -``` - -
-
-``` python
-@patch
-@delegates(messages.Messages.create)
-def __call__(self:Client,
-             msgs:list, # List of messages in the dialog
-             sp='', # The system prompt
-             temp=0, # Temperature
-             maxtok=4096, # Maximum tokens
-             prefill='', # Optional prefill to pass to Claude as start of its response
-             stream:bool=False, # Stream response?
-             stop=None, # Stop sequence
-             **kwargs):
-    "Make a call to Claude."
-    msgs = self._precall(msgs, prefill, stop, kwargs)
-    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    res = self.c.messages.create(
-        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
-    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
-```
-
-Defining `__call__` lets us use an object like a function (i.e. it’s
-*callable*). We use it as a small wrapper over `messages.create`.
-However we’re not exporting this version just yet – we have some
-additions we’ll make in a moment…
-
-``` python
-c = Client(model, log=True)
-c.use
-```
-
-    In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
-
-``` python
-c('Hi')
-```
-
-Hello! How can I help you today?
-
- -- id: `msg_01DZfHpTqbodjegmvG6kkQvn` -- content: - `[{'text': 'Hello! How can I help you today?', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 22; Cache create: 0; Cache read: 0; Total: 30 - -Let’s try out *prefill*: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -c(q, prefill=pref) -``` - -According to Douglas Adams, it’s 42. More seriously, there’s no -universal answer - it’s deeply personal. Common perspectives include: -finding happiness, making meaningful connections, pursuing purpose -through work/creativity, helping others, or simply experiencing and -appreciating existence. - -
- -- id: `msg_01RKAjFBMhyBjvKw59ypM6tp` -- content: - `[{'text': "According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-We can pass `stream=True` to stream the response back incrementally:
-
-``` python
-for o in c('Hi', stream=True): print(o, end='')
-```
-
-    Hello! How can I help you today?
-
-``` python
-c.use
-```
-
-    In: 40; Out: 97; Cache create: 0; Cache read: 0; Total: 137
-
-``` python
-for o in c(q, prefill=pref, stream=True): print(o, end='')
-```
-
-    According to Douglas Adams, it's 42. More seriously, there's no universal answer - it's deeply personal. Common perspectives include: finding happiness, making meaningful connections, pursuing purpose through work/creativity, helping others, or simply experiencing and appreciating existence.
-
-``` python
-c.use
-```
-
-    In: 64; Out: 150; Cache create: 0; Cache read: 0; Total: 214
-
-Pass a stop sequence if you want Claude to stop generating text when it
-encounters it.
-
-``` python
-c("Count from 1 to 10", stop="5")
-```
-
-1 2 3 4
-
- -- id: `msg_01D3kdCAHNbXadE144FLPbQV` -- content: `[{'text': '1\n2\n3\n4\n', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `stop_sequence` -- stop_sequence: `5` -- type: `message` -- usage: - `{'input_tokens': 15, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -This also works with streaming, and you can pass more than one stop -sequence: - -``` python -for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='') -print(c.stop_reason, c.stop_sequence) -``` - - 1 - stop_sequence 2 - -You can check the logs: - -``` python -c.log[-1] -``` - - {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}], - 'prefill': '', - 'max_tokens': 4096, - 'system': '', - 'temperature': 0, - 'stop_sequences': ['2', 'yellow'], - 'maxtok': None, - 'sp': None, - 'temp': None, - 'stream': None, - 'stop': None, - 'result': Message(id='msg_01PbJN7QLwYALfoqTtYJHYVR', content=[TextBlock(text='1\n', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 11; Cache create: 0; Cache read: 0; Total: 26), - 'use': In: 94; Out: 172; Cache create: 0; Cache read: 0; Total: 266, - 'stop_reason': 'stop_sequence', - 'stop_sequence': '2'} - -## Tool use - -Let’s now add tool use (aka *function calling*). - ------------------------------------------------------------------------- - -source - -### mk_tool_choice - -> mk_tool_choice (choose:Union[str,bool,NoneType]) - -*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’ -if it is True, or ‘tool’ otherwise* - -
-Exported source - -``` python -def mk_tool_choice(choose:Union[str,bool,None])->dict: - "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise" - return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'} -``` - -
- -``` python -print(mk_tool_choice('sums')) -print(mk_tool_choice(True)) -print(mk_tool_choice(None)) -``` - - {'type': 'tool', 'name': 'sums'} - {'type': 'any'} - {'type': 'auto'} - -Claude can be forced to use a particular tool, or select from a specific -list of tools, or decide for itself when to use a tool. If you want to -force a tool (or force choosing from a list), include a `tool_choice` -param with a dict from -[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice). - -For testing, we need a function that Claude can call; we’ll write a -simple function that adds numbers together, and will tell us when it’s -being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -Claudette can autogenerate a schema thanks to the `toolslm` library. -We’ll force the use of the tool using the function we created earlier. - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -We’ll start a dialog with Claude now. We’ll store the messages of our -dialog in `msgs`. The first message will be our prompt `pr`, and we’ll -pass our `tools` schema. - -``` python -msgs = mk_msgs(pr) -r = c(msgs, sp=sp, tools=tools, tool_choice=choice) -r -``` - -ToolUseBlock(id=‘toolu_01JEJNPyeeGm7uwckeF5J4pf’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_015eEr2H8V4j8nNEh1KQifjH` -- content: - `[{'id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 55, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-When Claude decides that it should use a tool, it passes back a
-`ToolUseBlock` with the name of the tool to call, and the params to use.
-
-We don’t want to allow it to call just any possible function (that would
-be a security disaster!) so we create a *namespace* – that is, a
-dictionary of allowable function names to call.
-
-``` python
-ns = mk_ns(sums)
-ns
-```
-
-    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
-
-------------------------------------------------------------------------
-
-source
-
-### mk_funcres
-
-> mk_funcres (tuid, res)
-
-*Given tool use id and the tool result, create a tool_result response.*
-
-Exported source - -``` python -def mk_funcres(tuid, res): - "Given tool use id and the tool result, create a tool_result response." - return dict(type="tool_result", tool_use_id=tuid, content=str(res)) -``` - -
- -We can now use the function requested by Claude. We look it up in `ns`, -and pass in the provided parameters. - -``` python -fc = find_block(r, ToolUseBlock) -res = mk_funcres(fc.id, call_func(fc.name, fc.input, ns=ns)) -res -``` - - Finding the sum of 604542 and 6458932 - - {'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'} - ------------------------------------------------------------------------- - -source - -### mk_toolres - -> mk_toolres (r:collections.abc.Mapping, -> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None) - -*Create a `tool_result` message from response `r`.* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|----|----|----|----|
-| r | Mapping |  | Tool use request response from Claude |
-| ns | Optional | None | Namespace to search for tools |
-| obj | Optional | None | Class to search for tools |
- -
-Exported source - -``` python -def mk_toolres( - r:abc.Mapping, # Tool use request response from Claude - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - obj:Optional=None # Class to search for tools - ): - "Create a `tool_result` message from response `r`." - cts = getattr(r, 'content', []) - res = [mk_msg(r)] - if ns is None: ns=globals() - if obj is not None: ns = mk_ns(obj) - tcs = [mk_funcres(o.id, call_func(o.name, o.input, ns)) for o in cts if isinstance(o,ToolUseBlock)] - if tcs: res.append(mk_msg(tcs)) - return res -``` - -
- -In order to tell Claude the result of the tool call, we pass back the -tool use assistant request and the `tool_result` response. - -``` python -tr = mk_toolres(r, ns=ns) -tr -``` - - Finding the sum of 604542 and 6458932 - - [{'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -We add this to our dialog, and now Claude has all the information it -needs to answer our question. - -``` python -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - 'The sum of 604542 and 6458932 is 7063474.' - -``` python -msgs -``` - - [{'role': 'user', 'content': 'What is 604542+6458932?'}, - {'role': 'assistant', - 'content': [ToolUseBlock(id='toolu_01JEJNPyeeGm7uwckeF5J4pf', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, - {'role': 'user', - 'content': [{'type': 'tool_result', - 'tool_use_id': 'toolu_01JEJNPyeeGm7uwckeF5J4pf', - 'content': '7063474'}]}] - -This works with methods as well – in this case, use the object itself -for `ns`: - -``` python -class Dummy: - def sums( - self, - a:int, # First thing to sum - b:int=1 # Second thing to sum - ) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -tools = [get_schema(Dummy.sums)] -o = Dummy() -r = c(pr, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, obj=o) -msgs += tr -contents(c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'The sum of 604542 and 6458932 is 7063474.' - ------------------------------------------------------------------------- - -source - -### get_types - -> get_types (msgs) - -``` python -get_types(msgs) -``` - - ['text', 'tool_use', 'tool_result', 'tool_use', 'tool_result'] - ------------------------------------------------------------------------- - -source - -### Client.\_\_call\_\_ - -> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, tools:Optional[list]=None, -> tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio -> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Make a call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|----|----|----|----|
-| msgs | list |  | List of messages in the dialog |
-| sp | str |  | The system prompt |
-| temp | int | 0 | Temperature |
-| maxtok | int | 4096 | Maximum tokens |
-| prefill | str |  | Optional prefill to pass to Claude as start of its response |
-| stream | bool | False | Stream response? |
-| stop | NoneType | None | Stop sequence |
-| tools | Optional | None | List of tools to make available to Claude |
-| tool_choice | Optional | None | Optionally force use of some tool |
-| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
-| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
-| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
-| temperature | float \| NotGiven | NOT_GIVEN |  |
-| top_k | int \| NotGiven | NOT_GIVEN |  |
-| top_p | float \| NotGiven | NOT_GIVEN |  |
-| extra_headers | Headers \| None | None |  |
-| extra_query | Query \| None | None |  |
-| extra_body | Body \| None | None |  |
-| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
- -
-Exported source - -``` python -@patch -@delegates(messages.Messages.create) -def __call__(self:Client, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - tools:Optional[list]=None, # List of tools to make available to Claude - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kwargs): - "Make a call to Claude." - if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)] - if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice) - msgs = self._precall(msgs, prefill, stop, kwargs) - if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}" - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -r = c(pr, sp=sp, tools=sums, tool_choice=sums) -r -``` - -ToolUseBlock(id=‘toolu_01KNbjuc8utt6ZroFngmAcuj’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_01T8zmguPksQaKLLgUuaYAJL` -- content: - `[{'id': 'toolu_01KNbjuc8utt6ZroFngmAcuj', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 438, 'output_tokens': 64, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -tr = mk_toolres(r, ns=ns) -``` - - Finding the sum of 604542 and 6458932 - ------------------------------------------------------------------------- - -source - -### Client.structured - -> Client.structured (msgs:list, tools:Optional[list]=None, -> obj:Optional=None, -> ns:Optional[collections.abc.Mapping]=None, sp='', -> temp=0, maxtok=4096, prefill='', stream:bool=False, -> stop=None, tool_choice:Optional[dict]=None, -> metadata:MetadataParam|NotGiven=NOT_GIVEN, -> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un -> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, -> temperature:float|NotGiven=NOT_GIVEN, -> top_k:int|NotGiven=NOT_GIVEN, -> top_p:float|NotGiven=NOT_GIVEN, -> extra_headers:Headers|None=None, -> extra_query:Query|None=None, -> extra_body:Body|None=None, -> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN) - -*Return the value of all tool calls (generally used for structured -outputs)* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|  | Type | Default | Details |
-|----|----|----|----|
-| msgs | list |  | List of messages in the dialog |
-| tools | Optional | None | List of tools to make available to Claude |
-| obj | Optional | None | Class to search for tools |
-| ns | Optional | None | Namespace to search for tools |
-| sp | str |  | The system prompt |
-| temp | int | 0 | Temperature |
-| maxtok | int | 4096 | Maximum tokens |
-| prefill | str |  | Optional prefill to pass to Claude as start of its response |
-| stream | bool | False | Stream response? |
-| stop | NoneType | None | Stop sequence |
-| tool_choice | Optional | None | Optionally force use of some tool |
-| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
-| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
-| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
-| temperature | float \| NotGiven | NOT_GIVEN |  |
-| top_k | int \| NotGiven | NOT_GIVEN |  |
-| top_p | float \| NotGiven | NOT_GIVEN |  |
-| extra_headers | Headers \| None | None |  |
-| extra_query | Query \| None | None |  |
-| extra_body | Body \| None | None |  |
-| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
- -
-Exported source - -``` python -@patch -@delegates(Client.__call__) -def structured(self:Client, - msgs:list, # List of messages in the dialog - tools:Optional[list]=None, # List of tools to make available to Claude - obj:Optional=None, # Class to search for tools - ns:Optional[abc.Mapping]=None, # Namespace to search for tools - **kwargs): - "Return the value of all tool calls (generally used for structured outputs)" - tools = listify(tools) - res = self(msgs, tools=tools, tool_choice=tools, **kwargs) - if ns is None: ns=mk_ns(*tools) - if obj is not None: ns = mk_ns(obj) - cts = getattr(res, 'content', []) - tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)] - return tcs -``` - -
- -Anthropic’s API does not support response formats directly, so instead -we provide a `structured` method to use tool calling to achieve the same -result. The result of the tool is not passed back to Claude in this -case, but instead is returned directly to the user. - -``` python -c.structured(pr, tools=[sums]) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -## Chat - -Rather than manually adding the responses to a dialog, we’ll create a -simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do -that for us, each time we make a request. We’ll also store the system -prompt and tools here, to avoid passing them every time. - ------------------------------------------------------------------------- - -source - -### Chat - -> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, -> sp='', tools:Optional[list]=None, temp=0, -> cont_pr:Optional[str]=None) - -*Anthropic chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|  | Type | Default | Details |
|----|----|----|----|
| model | Optional | None | Model to use (leave empty if passing `cli`) |
| cli | Optional | None | Client to use (leave empty if passing `model`) |
| sp | str |  | Optional system prompt |
| tools | Optional | None | List of tools to make available to Claude |
| temp | int | 0 | Temperature |
| cont_pr | Optional | None | User prompt to continue an assistant response: assistant,[user:"…"],assistant |
-Exported source - -``` python -class Chat: - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - sp='', # Optional system prompt - tools:Optional[list]=None, # List of tools to make available to Claude - temp=0, # Temperature - cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant - "Anthropic chat client." - assert model or cli - assert cont_pr != "", "cont_pr may not be an empty string" - self.c = (cli or Client(model)) - self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp - - @property - def use(self): return self.c.use -``` - -
-

The class stores the
[`Client`](https://claudette.answer.ai/core.html#client) that will
provide the responses in `c`, and a history of messages in `h`.

``` python
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])

We've shown the token usage, but what we really care about is pricing.
Let's extract the latest
[pricing](https://www.anthropic.com/pricing#anthropic-api) from
Anthropic into a `pricing` dict.

We'll patch `Usage` to enable it to compute the cost given pricing.

------------------------------------------------------------------------

source

### Usage.cost

> Usage.cost (costs:tuple)
-Exported source - -``` python -@patch -def cost(self:Usage, costs:tuple) -> float: - cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0) - return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6 -``` - -
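To make the formula concrete, here's a small worked example. The prices below are illustrative, not authoritative (check Anthropic's pricing page for current values); the tuple order matches the code above: input, output, cache write, and cache read, in dollars per million tokens.

``` python
costs = (3, 15, 3.75, 0.3)  # illustrative $/Mtok: in, out, cache write, cache read
in_tok, out_tok = 58, 27    # token counts from a small request
(in_tok*costs[0] + out_tok*costs[1]) / 1e6  # (174 + 405)/1e6 = 0.000579
```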
- -``` python -chat.c.use.cost(pricing[model_types[chat.c.model]]) -``` - - 0.0 - -This is clunky. Let’s add `cost` as a property for the -[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass -in the appropriate prices for the current model to the usage cost -calculator. - ------------------------------------------------------------------------- - -source - -### Chat.cost - -> Chat.cost () - -
-Exported source - -``` python -@patch(as_prop=True) -def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]]) -``` - -
- -``` python -chat.cost -``` - - 0.0 - ------------------------------------------------------------------------- - -source - -### Chat.\_\_call\_\_ - -> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='', -> tool_choice:Optional[dict]=None, **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|  | Type | Default | Details |
|----|----|----|----|
| pr | NoneType | None | Prompt / message |
| temp | NoneType | None | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None | Optionally force use of some tool |
| kw |  |  |  |
-Exported source - -``` python -@patch -def _stream(self:Chat, res): - yield from res - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) -``` - -
-
-Exported source - -``` python -@patch -def _post_pr(self:Chat, pr, prev_role): - if pr is None and prev_role == 'assistant': - if self.cont_pr is None: - raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.") - pr = self.cont_pr # No user prompt, keep the chain - if pr: self.h.append(mk_msg(pr)) -``` - -
-
-Exported source - -``` python -@patch -def _append_pr(self:Chat, - pr=None, # Prompt / message - ): - prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' - if pr and prev_role == 'user': self() # already user request pending - self._post_pr(pr, prev_role) -``` - -
-
-Exported source - -``` python -@patch -def __call__(self:Chat, - pr=None, # Prompt / message - temp=None, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - tool_choice:Optional[dict]=None, # Optionally force use of some tool - **kw): - if temp is None: temp=self.temp - self._append_pr(pr) - res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, - tools=self.tools, tool_choice=tool_choice,**kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools) - return res -``` - -
- -The `__call__` method just passes the request along to the -[`Client`](https://claudette.answer.ai/core.html#client), but rather -than just passing in this one prompt, it appends it to the history and -passes it all along. As a result, we now have state! - -``` python -chat = Chat(model, sp=sp) -``` - -``` python -chat("I'm Jeremy") -chat("What's my name?") -``` - -Your name is Jeremy. - -
- -- id: `msg_01GpNv4P5x9Gzc5mxxw9FgEL` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 41, 'output_tokens': 9, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat.use, chat.cost -``` - - (In: 58; Out: 27; Cache create: 0; Cache read: 0; Total: 85, 0.000579) - -Let’s try out prefill too: - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -``` - -``` python -chat(q, prefill=pref) -``` - -According to Douglas Adams,42. But seriously: To find purpose, create -meaning, love, grow, and make a positive impact while experiencing -life’s journey. - -
- -- id: `msg_011s2iLranbHFhdsVg8sz6eY` -- content: - `[{'text': "According to Douglas Adams,42. But seriously: To find purpose, create meaning, love, grow, and make a positive impact while experiencing life's journey.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 69, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-

By default messages must be in user, assistant, user format. If this
isn't followed (i.e. calling `chat()` without a user message), it will
raise an error:

``` python
try: chat()
except ValueError as e: print("Error:", e)
```

    Error: Prompt must be given after assistant completion, or use `self.cont_pr`.

Setting `cont_pr` provides a "default prompt" to use whenever one isn't
given; it's typically used to ask the model to continue.

``` python
chat.cont_pr = "keep going..."
chat()
```

To build meaningful relationships, pursue passions, learn continuously,
help others, appreciate beauty, overcome challenges, leave a positive
legacy, and find personal fulfillment through whatever brings you joy
and contributes to the greater good.
- -- id: `msg_01Rz8oydLAinmSMyaKbmmpE9` -- content: - `[{'text': 'To build meaningful relationships, pursue passions, learn continuously, help others, appreciate beauty, overcome challenges, leave a positive legacy, and find personal fulfillment through whatever brings you joy and contributes to the greater good.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 105, 'output_tokens': 54, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -We can also use streaming: - -``` python -chat = Chat(model, sp=sp) -for o in chat("I'm Jeremy", stream=True): print(o, end='') -``` - - Hello Jeremy! Nice to meet you. How are you today? - -``` python -for o in chat(q, prefill=pref, stream=True): print(o, end='') -``` - - According to Douglas Adams, 42. More seriously: to find purpose, love, grow, and make a positive impact while experiencing life's journey. - -### Chat tool use - -We automagically get streamlined tool use as well: - -``` python -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -r = chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -Let me calculate that sum for you. - -
- -- id: `msg_01MY2VWnZuU8jKyRKJ5FGzmM` -- content: - `[{'text': 'Let me calculate that sum for you.', 'type': 'text'}, {'id': 'toolu_01JXnJ1ReFqx5ppX3y7UcQCB', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 437, 'output_tokens': 87, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Now we need to send this result to Claude—calling the object with no -parameters tells it to return the tool result to Claude: - -``` python -chat() -``` - -604542 + 6458932 = 7063474 - -
- -- id: `msg_01Sog8j3pgYb3TBWPYwR4uQU` -- content: `[{'text': '604542 + 6458932 = 7063474', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 532, 'output_tokens': 22, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -It should be correct, because it actually used our Python function to do -the addition. Let’s check: - -``` python -a+b -``` - - 7063474 - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -# Image is Cute_dog.jpg from Wikimedia -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -``` python -img = fn.read_bytes() -``` - -
-Exported source - -``` python -def _add_cache(d, cache): - "Optionally add cache control" - if cache: d["cache_control"] = {"type": "ephemeral"} - return d -``` - -
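As a quick check of what this helper does, here it is applied to a text part (the comment shows the returned dict when `cache=True`):

``` python
_add_cache({"type": "text", "text": "hi"}, cache=True)
# {'type': 'text', 'text': 'hi', 'cache_control': {'type': 'ephemeral'}}
```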
- -Claude supports context caching by adding a `cache_control` header, so -we provide an option to enable that. - ------------------------------------------------------------------------- - -source - -### img_msg - -> img_msg (data:bytes, cache=False) - -*Convert image `data` into an encoded `dict`* - -
-Exported source - -``` python -def img_msg(data:bytes, cache=False)->dict: - "Convert image `data` into an encoded `dict`" - img = base64.b64encode(data).decode("utf-8") - mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)] - r = dict(type="base64", media_type=mtype, data=img) - return _add_cache({"type": "image", "source": r}, cache) -``` - -
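For instance, applied to the JPEG bytes we loaded above, the result has the structure shown in the function body (base64 payload elided here):

``` python
r = img_msg(img)
r['type'], r['source']['media_type']
# ('image', 'image/jpeg')
```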
-

Anthropic have documented the particular `dict` structure that they
expect image data to be in, so we have a little function to create that
for us.

------------------------------------------------------------------------

source

### text_msg

> text_msg (s:str, cache=False)

*Convert `s` to a text message*
-Exported source - -``` python -def text_msg(s:str, cache=False)->dict: - "Convert `s` to a text message" - return _add_cache({"type": "text", "text": s}, cache) -``` - -
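For example, a quick check of its output:

``` python
text_msg('Hi')
```

    {'type': 'text', 'text': 'Hi'}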
- -A Claude message can be a list of image and text parts. So we’ve also -created a helper for making the text parts. - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -``` - -``` python -c([msg]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01Ej9XSFQKFtD9pUns5g7tom` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 44, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Exported source - -``` python -def _mk_content(src, cache=False): - "Create appropriate content data structure based on type of content" - if isinstance(src,str): return text_msg(src, cache=cache) - if isinstance(src,bytes): return img_msg(src, cache=cache) - if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()} - return _str_if_needed(src) -``` - -
-

There's no need to manually choose the type of message, since we figure
that out from the type of the source data.

``` python
_mk_content('Hi')
```

    {'type': 'text', 'text': 'Hi'}

------------------------------------------------------------------------

source

### mk_msg

> mk_msg (content, role='user', cache=False, **kw)

*Helper to create a `dict` appropriate for a Claude message. `kw` are
added as key/value pairs to the message*
|  | Type | Default | Details |
|----|----|----|----|
| content |  |  | A string, list, or dict containing the contents of the message |
| role | str | user | Must be 'user' or 'assistant' |
| cache | bool | False |  |
| kw |  |  |  |
-Exported source - -``` python -def mk_msg(content, # A string, list, or dict containing the contents of the message - role='user', # Must be 'user' or 'assistant' - cache=False, - **kw): - "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message" - if hasattr(content, 'content'): content,role = content.content,content.role - if isinstance(content, abc.Mapping): content=content.get('content', content) - if not isinstance(content, list): content=[content] - content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.' - return dict2obj(dict(role=role, content=content, **kw), list_func=list) -``` - -
- -``` python -mk_msg(['hi', 'there'], cache=True) -``` - -``` json -{ 'content': [ {'text': 'hi', 'type': 'text'}, - { 'cache_control': {'type': 'ephemeral'}, - 'text': 'there', - 'type': 'text'}], - 'role': 'user'} -``` - -``` python -m = mk_msg(['hi', 'there'], cache=True) -``` - -When we construct a message, we now use -[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to -create the appropriate parts. Since a dialog contains multiple messages, -and a message can contain multiple content parts, to pass a single -message with multiple parts we have to use a list containing a single -list: - -``` python -c([[img, q]]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_014GQfAQF5FYU8a4Y8bvVm16` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -
- -> **Note** -> -> As promised (much!) earlier, we’ve now finally completed our -> definition of -> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this -> version is the one we export to the Python module. - -
-

Unfortunately, some models, such as Haiku 3.5, do not support image
inputs:

``` python
model = models[-1]; model
```

    'claude-3-5-haiku-20241022'

``` python
c = Client(model)
c([[img, q]])
```

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022

    --------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    Cell In[115], line 2
          1 c = Client(model)
    ----> 2 c([[img, q]])

    Cell In[72], line 19, in __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)
         17 if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
         18 msgs = self._precall(msgs, prefill, stop, kwargs)
    ---> 19 if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
         20 if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
         21 res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)

    AssertionError: Images are not supported by the current model type: claude-3-5-haiku-20241022

## Third party providers

### Amazon Bedrock

These are Amazon's current Claude models:

``` python
models_aws
```

    ['anthropic.claude-3-opus-20240229-v1:0',
     'anthropic.claude-3-5-sonnet-20241022-v2:0',
     'anthropic.claude-3-sonnet-20240229-v1:0',
     'anthropic.claude-3-haiku-20240307-v1:0']
- -> **Note** -> -> `anthropic` at version 0.34.2 seems not to install `boto3` as a -> dependency. You may need to do a `pip install boto3` or the creation -> of the [`Client`](https://claudette.answer.ai/core.html#client) below -> fails. - -
-

Provided `boto3` is installed, we otherwise don't need any extra code
to support Amazon Bedrock – we just have to set up the appropriate
client:

``` python
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```

``` python
chat = Chat(cli=client)
```

``` python
chat("I'm Jeremy")
```

It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by
Anthropic. How can I help you today?
- -- id: `msg_bdrk_01JPBwsACbf1HZoNDUzbHNpJ` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
- -### Google Vertex - -``` python -models_goog -``` - - ['claude-3-opus@20240229', - 'claude-3-5-sonnet-v2@20241022', - 'claude-3-sonnet@20240229', - 'claude-3-haiku@20240307'] - -``` python -from anthropic import AnthropicVertex -import google.auth -``` - -``` python -project_id = google.auth.default()[1] -region = "us-east5" -gv = AnthropicVertex(project_id=project_id, region=region) -client = Client(models_goog[-1], gv) -``` - -``` python -chat = Chat(cli=client) -``` - -``` python -chat("I'm Jeremy") -```
# Tool loop



``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We're going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We'll start by defining the same mock customer/order data as in
Anthropic's example, plus create an entity relationship between customers
and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don't need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We'll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We're now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We'll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let's consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?
-

``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let's write a function to handle this
for us:

------------------------------------------------------------------------

source

### Chat.toolloop

> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
>                cont_func:Optional[callable]=noop, temp=None,
>                maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None)

*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*
|  | Type | Default | Details |
|----|----|----|----|
| pr |  |  | Prompt to pass to Claude |
| max_steps | int | 10 | Maximum number of tool requests to loop through |
| trace_func | Optional | None | Function to trace tool use steps (e.g. `print`) |
| cont_func | Optional | noop | Function that stops loop if returns False |
| temp | NoneType | None | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None | Optionally force use of some tool |
-Exported source - -``` python -@patch -@delegates(Chat.__call__) -def toolloop(self:Chat, - pr, # Prompt to pass to Claude - max_steps=10, # Maximum number of tool requests to loop through - trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) - cont_func:Optional[callable]=noop, # Function that stops loop if returns False - **kwargs): - "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" - n_msgs = len(self.h) - r = self(pr, **kwargs) - for i in range(max_steps): - if r.stop_reason!='tool_use': break - if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) - r = self(**kwargs) - if not (cont_func or noop)(self.h[-2]): break - if trace_func: trace_func(self.h[n_msgs:]) - return r -``` - -
- -We’ll start by re-running our previous request - we shouldn’t have to -manually pass back the `tool_use` message any more: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Can you tell me the email address for customer C1?') -r -``` - - - Retrieving customer C1 - -The email address for customer C1 is john@example.com. - -
- -- id: `msg_01Fm2CY76dNeWief4kUW6r71` -- content: - `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Let’s see if it can handle the multi-stage process now – we’ll add -`trace_func=print` to see each stage of the process: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) -r -``` - - - Retrieving customer C1 - [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] - - Cancelling order O1 - [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] - - Cancelling order O2 - [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] - [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] - -I’ve successfully cancelled both orders O1 and O2 for customer C1. -Please let me know if you need anything else! - -
- -- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` -- content: - `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -OK Claude thinks the orders were cancelled – let’s check one: - -``` python -chat.toolloop('What is the status of order O2?') -``` - - - Retrieving order O2 - -The status of order O2 is now ‘Cancelled’ since I successfully cancelled -that order earlier. - -
- -- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` -- content: - `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-

## Code interpreter

Here is an example of using `toolloop` to implement a simple code
interpreter with additional tools.

``` python
from toolslm.shell import get_shell
from fastcore.meta import delegates
import traceback
```

``` python
@delegates()
class CodeChat(Chat):
    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
        super().__init__(model=model, **kwargs)
        self.ask = ask
        self.tools.append(self.run_cell)
        self.shell = get_shell()
        self.shell.run_cell('import '+self.imps)
```

We have one additional parameter when creating a `CodeChat` beyond what
we pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is
`ask` – if that's `True`, we'll prompt the user before running code.

``` python
@patch
def run_cell(
    self:CodeChat,
    code:str,   # Code to execute in persistent IPython session
): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
    if self.ask and input(confirm): return '#DECLINED#'
    try: res = self.shell.run_cell(code)
    except Exception as e: return traceback.format_exc()
    return res.stdout if res.result is None else res.result
```

We just pass requests to run code along to the shell's implementation.
Claude often prints results instead of just using the last expression,
so we capture stdout in those cases.

``` python
sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
Don't do complex calculations yourself -- use code for them.
The following modules are pre-imported for `run_cell` automatically:

{CodeChat.imps}

Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.

If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
```

``` python
def get_user(ignored:str='' # Unused parameter
    ): # Username of current user
    "Get the username of the user running this session"
    print("Looking up username")
    return 'Jeremy'
```

In order to test out multi-stage tool use, we create a mock function
that Claude can call to get the current username.

``` python
model = models[1]
```

``` python
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
```

Claude gets confused sometimes about how tools work, so we use examples
to remind it:

``` python
chat.h = [
    'Calculate the square root of `10332`', 'math.sqrt(10332)',
    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
-] -``` - -Providing a callable to toolloop’s `trace_func` lets us print out -information during the loop: - -``` python -def _show_cts(h): - for r in h: - for o in r.get('content'): - if hasattr(o,'text'): print(o.text) - nm = getattr(o, 'name', None) - if nm=='run_cell': print(o.input['code']) - elif nm: print(f'{o.name}({o.input})') -``` - -…and toolloop’s `cont_func` callable let’s us provide a function which, -if it returns `False`, stops the loop: - -``` python -def _cont_decline(c): - return nested_idx(c, 'content', 'content') != '#DECLINED#' -``` - -Now we can try our code interpreter. We start by asking for a function -to be created, which we’ll use in the next prompt to test that the -interpreter is persistent. - -``` python -pr = '''Create a 1-line function `checksum` for a string `s`, -that multiplies together the ascii values of each character in `s` using `reduce`.''' -chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline) -``` - - Press Enter to execute, or enter "n" to skip? - ``` - checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) - ``` - - Create a 1-line function `checksum` for a string `s`, - that multiplies together the ascii values of each character in `s` using `reduce`. - Let me help you create that function using `reduce` and `functools`. - checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) - The function has been created. Let me explain how it works: - 1. It takes a string `s` as input - 2. Uses `functools.reduce` to multiply together all ASCII values - 3. `ord(y)` gets the ASCII value of each character - 4. The initial value is 1 (the third parameter to reduce) - 5. The lambda function multiplies the accumulator (x) with each new ASCII value - - You can test it with any string. For example, you could try `checksum("hello")` to see it in action. - -The function has been created. Let me explain how it works: 1. It takes -a string `s` as input 2. Uses `functools.reduce` to multiply together -all ASCII values 3. `ord(y)` gets the ASCII value of each character 4. -The initial value is 1 (the third parameter to reduce) 5. The lambda -function multiplies the accumulator (x) with each new ASCII value - -You can test it with any string. For example, you could try -`checksum("hello")` to see it in action. - -
- -- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` -- content: - `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
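Claude's explanation checks out; we can also verify the same reduction directly in plain Python, outside the model:

``` python
from functools import reduce
checksum = lambda s: reduce(lambda x, y: x * ord(y), s, 1)
checksum('hello')  # 104*101*108*108*111 = 13599570816
```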
- -By asking for a calculation to be done on the username, we force it to -use multiple steps: - -``` python -pr = 'Use it to get the checksum of the username of this session.' -chat.toolloop(pr, trace_func=_show_cts) -``` - - Looking up username - Use it to get the checksum of the username of this session. - I'll first get the username using `get_user` and then apply our `checksum` function to it. - get_user({'ignored': ''}) - Press Enter to execute, or enter "n" to skip? - ``` - print(checksum("Jeremy")) - ``` - - Now I'll calculate the checksum of "Jeremy": - print(checksum("Jeremy")) - The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". - -The checksum of the username “Jeremy” is 1134987783204. This was -calculated by multiplying together the ASCII values of each character in -“Jeremy”. - -
- -- id: `msg_01UXvtcLzzykZpnQUT35v4uD` -- content: - `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
# The async version - - - -## Setup - -## Async SDK - -``` python -model = models[1] -cli = AsyncAnthropic() -``` - -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = await cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hello Jeremy! It’s nice to meet you. How can I assist you today? Is -there anything specific you’d like to talk about or any questions you -have? - -
- -- id: `msg_019gsEQs5dqb3kgwNJbTH27M` -- content: - `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 36}` - -
- ------------------------------------------------------------------------- - -source - -### AsyncClient - -> AsyncClient (model, cli=None, log=False) - -*Async Anthropic messages client.* - -
-Exported source - -``` python -class AsyncClient(Client): - def __init__(self, model, cli=None, log=False): - "Async Anthropic messages client." - super().__init__(model,cli,log) - if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) -``` - -
- -``` python -c = AsyncClient(model) -``` - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 36; Total: 46 - ------------------------------------------------------------------------- - -source - -### AsyncClient.\_\_call\_\_ - -> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, cli=None, log=False) - -*Make an async call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|  | Type | Default | Details |
|----|----|----|----|
| msgs | list |  | List of messages in the dialog |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| cli | NoneType | None |  |
| log | bool | False |  |
-Exported source - -``` python -@patch -async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): - async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield prefill - async for o in s.text_stream: yield o - self._log(await s.get_final_message(), prefill, msgs, kwargs) -``` - -
-
-Exported source - -``` python -@patch -@delegates(Client) -async def __call__(self:AsyncClient, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - **kwargs): - "Make an async call to Claude." - msgs = self._precall(msgs, prefill, stop, kwargs) - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = await self.c.messages.create( - model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -c = AsyncClient(model, log=True) -c.use -``` - - In: 0; Out: 0; Total: 0 - -``` python -c.model = models[1] -await c('Hi') -``` - -Hello! How can I assist you today? Feel free to ask any questions or let -me know if you need help with anything. - -
- -- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` -- content: - `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 29; Total: 37 - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await c(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon meaning of life. Many philosophers -and religions have proposed different answers, but it remains an open -question that individuals must grapple with for themselves. - -
- -- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -async for o in (await c('Hi', stream=True)): print(o, end='') -``` - - Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything. - -``` python -c.use -``` - - In: 40; Out: 109; Total: 149 - -``` python -async for o in (await c(q, prefill=pref, stream=True)): print(o, end='') -``` - - According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves. - -``` python -c.use -``` - - In: 64; Out: 160; Total: 224 - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -``` python -tools = [get_schema(sums)] -msgs = mk_msgs(pr) -r = await c(msgs, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, ns=globals()) -msgs += tr -contents(await c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?' - -## AsyncChat - ------------------------------------------------------------------------- - -source - -### AsyncChat - -> AsyncChat (model:Optional[str]=None, -> cli:Optional[claudette.core.Client]=None, sp='', -> tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None) - -*Anthropic async chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|  | Type | Default | Details |
|----|----|----|----|
| model | Optional | None | Model to use (leave empty if passing `cli`) |
| cli | Optional | None | Client to use (leave empty if passing `model`) |
| sp | str |  |  |
| tools | Optional | None |  |
| temp | int | 0 |  |
| cont_pr | Optional | None |  |
-Exported source - -``` python -@delegates() -class AsyncChat(Chat): - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - **kwargs): - "Anthropic async chat client." - super().__init__(model, cli, **kwargs) - if not cli: self.c = AsyncClient(model) -``` - -
- -``` python -sp = "Never mention what tools you use." -chat = AsyncChat(model, sp=sp) -chat.c.use, chat.h -``` - - (In: 0; Out: 0; Total: 0, []) - ------------------------------------------------------------------------- - -source - -### AsyncChat.\_\_call\_\_ - -> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False, -> prefill='', **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|  | Type | Default | Details |
|----|----|----|----|
| pr | NoneType | None | Prompt / message |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| kw |  |  |  |
-Exported source - -``` python -@patch -async def _stream(self:AsyncChat, res): - async for o in res: yield o - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) -``` - -
-
-Exported source - -``` python -@patch -async def _append_pr(self:AsyncChat, pr=None): - prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history - if pr and prev_role == 'user': await self() - self._post_pr(pr, prev_role) -``` - -
-
-Exported source - -``` python -@patch -async def __call__(self:AsyncChat, - pr=None, # Prompt / message - temp=0, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - **kw): - await self._append_pr(pr) - if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] - res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) - return res -``` - -
- -``` python -await chat("I'm Jeremy") -await chat("What's my name?") -``` - -Your name is Jeremy, as you mentioned in your previous message. - -
- -- id: `msg_01NMugMXWpDP9iuTXeLkHarn` -- content: - `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await chat(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon answer. Common philosophical -perspectives include: - -1. Finding personal fulfillment -2. Serving others -3. Pursuing happiness -4. Creating meaning through our choices -5. Experiencing and appreciating existence - -Ultimately, many believe each individual must determine their own life’s -meaning. - -
- -- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat = AsyncChat(model, sp=sp) -async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') -``` - - Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? - -``` python -pr = f"What is {a}+{b}?" -chat = AsyncChat(model, sp=sp, tools=[sums]) -r = await chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -To answer this question, I can use the “sums” function to add these two -numbers together. Let me do that for you. - -
- -- id: `msg_015z1rffSWFxvj7rSpzc43ZE` -- content: - `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -await chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -fn = Path('samples/puppy.jpg') -img = fn.read_bytes() -``` - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -await c([msg]) -``` - -The flowers in this image are purple. They appear to be small, -daisy-like flowers, possibly asters or some type of purple daisy, -blooming in the background behind the adorable puppy in the foreground. - -
- -- id: `msg_017qgZggLjUY915mWbWCkb9X` -- content: - `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
# claudette



> **NB**: If you are reading this in GitHub's readme, we recommend you
> instead read the much more nicely formatted [documentation
> format](https://claudette.answer.ai/) of this tutorial.

*Claudette* is a wrapper for Anthropic's [Python
SDK](https://github.com/anthropics/anthropic-sdk-python).

The SDK works well, but it is quite low level – it leaves the developer
to do a lot of stuff manually. That's a lot of extra work and
boilerplate! Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
  creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
  words of its response
- Convenient image support
- Simple and convenient support for Claude's new Tool Use API.

You'll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.

Note that this library is the first ever "literate nbdev" project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you've never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette's Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
because we take seriously Knuth's call to action, that we have a "*moral
commitment*" to never write an "*illiterate program*" – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)

> "*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*" Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)

## Install

``` sh
pip install claudette
```

## Getting started

Anthropic's Python SDK will automatically be installed with Claudette,
if you don't already have it.

``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

To print every HTTP request and response in full, uncomment the above
line.

``` python
from claudette import *
```

Claudette only exports the symbols that are needed to use the library,
so you can use `import *` to import them. Alternatively, just use:

``` python
import claudette
```

…and then add the prefix `claudette.` to any usages of the module.

Claudette provides `models`, which is a list of models currently
available from the SDK.

``` python
models
```

    ['claude-3-opus-20240229',
     'claude-3-5-sonnet-20241022',
     'claude-3-haiku-20240307']

For these examples, we'll use Sonnet 3.5, since it's awesome!
- -``` python -model = models[1] -``` - -## Chat - -The main interface to Claudette is the -[`Chat`](https://claudette.answer.ai/core.html#chat) class, which -provides a stateful interface to Claude: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -chat("I'm Jeremy") -``` - -Hello Jeremy, nice to meet you. - -
- -- id: `msg_015oK9jEcra3TEKHUGYULjWB` -- content: - `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01Si8sTFJe8d8vq7enanbAwj` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -As you see above, displaying the results of a call in a notebook shows -just the message contents, with the other details hidden behind a -collapsible section. Alternatively you can `print` the details: - -``` python -print(r) -``` - - Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) - -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. Let’s try it out: - -``` python -chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,') -``` - -According to Douglas Adams,42. Philosophically, it’s to find personal -meaning through relationships, purpose, and experiences. - -
- -- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` -- content: - `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -You can add `stream=True` to stream the results as soon as they arrive -(although you will only see the gradual generation if you execute the -notebook yourself, of course!) - -``` python -for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): - print(o, end='') -``` - - It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. - -### Async - -Alternatively, you can use -[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or -[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for -the async versions, e.g: - -``` python -chat = AsyncChat(model) -await chat("I'm Jeremy") -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_016Q8cdc3sPWBS8eXcNj841L` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
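-
-For one-off calls that don’t need conversation state, there is also
-[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient).
-A small sketch, assuming (per the API list) that it is constructed with
-a model name just like `Client`:
-
-``` python
-from claudette import AsyncClient
-
-cli = AsyncClient(model)
-r = await cli("Concisely, what's 2+2?")   # single stateless call
-```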
- -Remember to use `async for` when streaming in this case: - -``` python -async for o in await chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,', stream=True): - print(o, end='') -``` - - According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. - -## Prompt caching - -If you use `mk_msg(msg, cache=True)`, then the message is cached using -Claude’s [prompt -caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) -feature. For instance, here we use caching when asking about Claudette’s -readme file: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -``` - -``` python -nbtxt = Path('README.txt').read_text() -msg = f''' -{nbtxt} - -In brief, what is the purpose of this project based on the readme?''' -r = chat(mk_msg(msg, cache=True)) -r -``` - -Claudette is a high-level wrapper for Anthropic’s Python SDK that -automates common tasks and provides additional functionality. Its main -features include: - -1. A Chat class for stateful dialogs -2. Support for prefill (controlling Claude’s initial response words) -3. Convenient image handling -4. Simple tool use API integration -5. Support for multiple model providers (Anthropic, AWS Bedrock, Google - Vertex) - -The project is notable for being the first “literate nbdev” project, -meaning its source code is written as a detailed, readable Jupyter -Notebook that includes explanations, examples, and teaching material -alongside the functional code. - -The goal is to simplify working with Claude’s API while maintaining full -control, reducing boilerplate code and manual work that would otherwise -be needed with the base SDK. - -
- -- id: `msg_014rVQnYoZXZuyWUCMELG1QW` -- content: - `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` - -
-
-The response records that a cache has been created using these input
-tokens:
-
-``` python
-print(r.usage)
-```
-
-    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
-
-We can now ask a followup question in this chat:
-
-``` python
-r = chat('How does it make tool use more ergonomic?')
-r
-```
-
-According to the README, Claudette makes tool use more ergonomic in
-several ways:
-
-1. It uses docments to make Python function definitions more
-   user-friendly - each parameter and return value should have a type
-   and description
-
-2. It handles the tool calling process automatically - when Claude
-   returns a tool_use message, Claudette manages calling the tool with
-   the provided parameters behind the scenes
-
-3. It provides a `toolloop` method that can handle multiple tool calls
-   in a single step to solve more complex problems
-
-4. It allows you to pass a list of tools to the Chat constructor and
-   optionally force Claude to always use a specific tool via
-   `tool_choice`
-
-Here’s a simple example from the README:
-
-``` python
-def sums(
-    a:int, # First thing to sum
-    b:int=1 # Second thing to sum
-) -> int: # The sum of the inputs
-    "Adds a + b."
-    print(f"Finding the sum of {a} and {b}")
-    return a + b
-
-chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
-```
-
-This makes it much simpler compared to manually handling all the tool
-use logic that would be required with the base SDK.
-
-<details>
- -- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` -- content: - `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` - -
- -We can see that this only used ~200 regular input tokens – the 7000+ -context tokens have been read from cache. - -``` python -print(r.usage) -``` - - Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) - -``` python -chat.use -``` - - In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 - -## Tool use - -[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude -use external tools. - -We use [docments](https://fastcore.fast.ai/docments.html) to make -defining Python functions as ergonomic as possible. Each parameter (and -the return value) should have a type, and a docments comment with the -description of what it is. As an example we’ll write a simple function -that adds numbers together, and will tell us when it’s being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -Sometimes Claude will say something like “according to the `sums` tool -the answer is” – generally we’d rather it just tells the user the -answer, so we can use a system prompt to help with this: - -``` python -sp = "Never mention what tools you use." -``` - -We’ll get Claude to add up some long numbers: - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -To use tools, pass a list of them to -[`Chat`](https://claudette.answer.ai/core.html#chat): - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -``` - -To force Claude to always answer using a tool, set `tool_choice` to that -function name. When Claude needs to use a tool, it doesn’t return the -answer, but instead returns a `tool_use` message, which means we have to -call the named tool with the provided parameters. - -``` python -r = chat(pr, tool_choice='sums') -r -``` - - Finding the sum of 604542 and 6458932 - -ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_014xrPyotyiBmFSctkp1LZHk` -- content: - `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
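-
-For reference, with the plain SDK you would now have to call `sums`
-yourself and send the result back as a `tool_result` message. The API
-list later in this document includes `mk_toolres` for exactly this; the
-sketch below assumes `ns` accepts a list of candidate tools and that the
-helper both runs the tool and wraps its output:
-
-``` python
-from claudette.core import mk_toolres
-
-# Run the tool named in the tool_use block and build the
-# tool_result message(s) to send back to Claude.
-toolres = mk_toolres(r, ns=[sums])
-```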
- -Claudette handles all that for us – we just call it again, and it all -happens automatically: - -``` python -chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_01151puJxG8Fa6k6QSmzwKQA` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
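-
-The token accounting works because claudette patches `Usage.__add__`
-(per the API list: “Add together each of `input_tokens` and
-`output_tokens`”), so usage objects can simply be summed. A hypothetical
-sketch, assuming `r1` and `r2` are two captured responses:
-
-``` python
-total = r1.usage + r2.usage   # field-by-field sum, like Chat.use
-print(total)
-```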
-
-You can see how many tokens have been used at any time by checking the
-`use` property. Note that (as of May 2024) tool use in Claude uses a
-*lot* of tokens, since it automatically adds a large system prompt.
-
-``` python
-chat.use
-```
-
-    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
-
-We can do everything needed to use tools in a single step, by using
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
-This can even call multiple tools as needed to solve a problem. For
-example, let’s define a tool to handle multiplication:
-
-``` python
-def mults(
-    a:int, # First thing to multiply
-    b:int=1 # Second thing to multiply
-) -> int: # The product of the inputs
-    "Multiplies a * b."
-    print(f"Finding the product of {a} and {b}")
-    return a * b
-```
-
-Now with a single call we can calculate `(a+b)*2` – by passing a trace
-function via `trace_func` we can see each response from Claude in the
-process:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-pr = f'Calculate ({a}+{b})*2'
-pr
-```
-
-    'Calculate (604542+6458932)*2'
-
-``` python
-chat.toolloop(pr, trace_func=print)
-```
-
-    Finding the sum of 604542 and 6458932
-    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
-    Finding the product of 7063474 and 2
-    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
-    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
-
-The final result is 14,126,948.
-
-<details>
- -- id: `msg_0162teyBcJHriUzZXMPz4r5d` -- content: - `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
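-
-`toolloop` also accepts a `max_steps` argument (default 10, per the
-signature shown later in this document) that bounds how many tool
-requests it will follow before giving up. A short sketch:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-chat.toolloop(pr, max_steps=3)   # stop after at most 3 tool-use rounds
-```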
- -## Structured data - -If you just want the immediate result from a single tool, use -[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). - -``` python -cli = Client(model) -``` - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -cli.structured("What is 604542+6458932", sums) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -This is particularly useful for getting back structured information, -e.g: - -``` python -class President: - "Information about a president of the United States" - def __init__(self, - first:str, # first name - last:str, # last name - spouse:str, # name of spouse - years_in_office:str, # format: "{start_year}-{end_year}" - birthplace:str, # name of city - birth_year:int # year of birth, `0` if unknown - ): - assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" - store_attr() - - __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') -``` - -``` python -cli.structured("Provide key information about the 3rd President of the United States", President) -``` - - [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object -as before: - -``` python -chat = Chat(model) -``` - -Claudette expects images as a list of bytes, so we read in the file: - -``` python -img = fn.read_bytes() -``` - -Prompts to Claudette can be lists, containing text, images, or both, eg: - -``` python -chat([img, "In brief, what color flowers are in this image?"]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -The image is included as input tokens. - -``` python -chat.use -``` - - In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 - -Alternatively, Claudette supports creating a multi-stage chat with -separate image and text prompts. For instance, you can pass just the -image as the initial prompt (in which case Claude will make some general -comments about what it sees), and then follow up with questions in -additional prompts: - -``` python -chat = Chat(model) -chat(img) -``` - -What an adorable Cavalier King Charles Spaniel puppy! The photo captures -the classic brown and white coloring of the breed, with those soulful -dark eyes that are so characteristic. The puppy is lying in the grass, -and there are lovely purple asters blooming in the background, creating -a beautiful natural setting. The combination of the puppy’s sweet -expression and the delicate flowers makes for a charming composition. -Cavalier King Charles Spaniels are known for their gentle, affectionate -nature, and this little one certainly seems to embody those traits with -its endearing look. - -
- -- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` -- content: - `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What direction is the puppy facing?') -``` - -The puppy is facing towards the left side of the image. Its head is -positioned so we can see its right side profile, though it appears to be -looking slightly towards the camera, giving us a good view of its -distinctive brown and white facial markings and one of its dark eyes. -The puppy is lying down with its white chest/front visible against the -green grass. - -
- -- id: `msg_01AeR9eWjbxa788YF97iErtN` -- content: - `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What color is it?') -``` - -The puppy has a classic Cavalier King Charles Spaniel coat with a rich -chestnut brown (sometimes called Blenheim) coloring on its ears and -patches on its face, combined with a bright white base color. The white -is particularly prominent on its face (creating a distinctive blaze down -the center) and chest area. This brown and white combination is one of -the most recognizable color patterns for the breed. - -
- -- id: `msg_01R91AqXG7pLc8hK24F5mc7x` -- content: - `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Note that the image is passed in again for every input in the dialog, so -that number of input tokens increases quickly with this kind of chat. -(For large images, using prompt caching might be a good idea.) - -``` python -chat.use -``` - - In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964 - -## Other model providers - -You can also use 3rd party providers of Anthropic models, as shown here. - -### Amazon Bedrock - -These are the models available through Bedrock: - -``` python -models_aws -``` - - ['anthropic.claude-3-opus-20240229-v1:0', - 'anthropic.claude-3-5-sonnet-20241022-v2:0', - 'anthropic.claude-3-sonnet-20240229-v1:0', - 'anthropic.claude-3-haiku-20240307-v1:0'] - -To use them, call `AnthropicBedrock` with your access details, and pass -that to [`Client`](https://claudette.answer.ai/core.html#client): - -``` python -from anthropic import AnthropicBedrock -``` - -``` python -ab = AnthropicBedrock( - aws_access_key=os.environ['AWS_ACCESS_KEY'], - aws_secret_key=os.environ['AWS_SECRET_KEY'], -) -client = Client(models_aws[-1], ab) -``` - -Now create your [`Chat`](https://claudette.answer.ai/core.html#chat) -object passing this client to the `cli` parameter – and from then on, -everything is identical to the previous examples. - -``` python -chat = Chat(cli=client) -chat("I'm Jeremy") -``` - -It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
-
-### Google Vertex
-
-These are the models available through Vertex:
-
-``` python
-models_goog
-```
-
-    ['claude-3-opus@20240229',
-     'claude-3-5-sonnet-v2@20241022',
-     'claude-3-sonnet@20240229',
-     'claude-3-haiku@20240307']
-
-To use them, call `AnthropicVertex` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicVertex
-import google.auth
-```
-
-``` python
-project_id = google.auth.default()[1]
-gv = AnthropicVertex(project_id=project_id, region="us-east5")
-client = Client(models_goog[-1], gv)
-```
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-## Extensions
-
-- [Pydantic Structured
-  Output](https://github.com/tom-pollak/claudette-pydantic)
From 8da69219561b5098cdcec48191893212a3e17bca Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 23:27:23 -0800 Subject: [PATCH 6/9] correct apilist path --- llm/llms.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/llm/llms.txt b/llm/llms.txt index efa3035..deecedc 100644 --- a/llm/llms.txt +++ b/llm/llms.txt @@ -17,7 +17,7 @@ Things to remember when using Claudette: ## API -- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/b30f08e3549554f53b06fbd9bf03a0c961de3023/llm/apilist.txt): A succint list of all functions and methods in claudette. +- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/llm/apilist.txt): A succint list of all functions and methods in claudette. ## Optional - [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions From 84bd02a3195fd6061ff82d6f716ddd58b65ea29c Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Tue, 19 Nov 2024 23:30:44 -0800 Subject: [PATCH 7/9] actually correct path --- llm/llms.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/llm/llms.txt b/llm/llms.txt index deecedc..42a8cf6 100644 --- a/llm/llms.txt +++ b/llm/llms.txt @@ -17,7 +17,7 @@ Things to remember when using Claudette: ## API -- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/llm/apilist.txt): A succint list of all functions and methods in claudette. +- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/llm/apilist.txt): A succint list of all functions and methods in claudette. ## Optional - [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions From cbc917e6ebccf0412ba38da6ecced501920e4d74 Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Wed, 20 Nov 2024 21:01:52 -0800 Subject: [PATCH 8/9] change to root --- apilist.txt | 74 ++ llms-ctx-full.txt | 2054 +++++++++++++++++++++++++++++++++++++ llms-ctx.txt | 869 ++++++++++++++++ llm/llms.txt => llms.txt | 2 +- tools/refresh_llm_docs.sh | 6 +- 5 files changed, 3001 insertions(+), 4 deletions(-) create mode 100644 apilist.txt create mode 100644 llms-ctx-full.txt create mode 100644 llms-ctx.txt rename llm/llms.txt => llms.txt (93%) diff --git a/apilist.txt b/apilist.txt new file mode 100644 index 0000000..72ce0bb --- /dev/null +++ b/apilist.txt @@ -0,0 +1,74 @@ +# claudette Module Documentation + +## claudette.asink + +- `class AsyncClient` + - `def __init__(self, model, cli, log)` + Async Anthropic messages client. + + +- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` + Make an async call to Claude. + +- `@delegates() class AsyncChat` + - `def __init__(self, model, cli, **kwargs)` + Anthropic async chat client. + + +## claudette.core + +- `def find_block(r, blk_type)` + Find the first block of type `blk_type` in `r.content`. + +- `def contents(r)` + Helper to get the contents from Claude response `r`. + +- `def usage(inp, out, cache_create, cache_read)` + Slightly more concise version of `Usage`. + +- `@patch def __add__(self, b)` + Add together each of `input_tokens` and `output_tokens` + +- `def mk_msgs(msgs, **kw)` + Helper to set 'assistant' role on alternate messages. + +- `class Client` + - `def __init__(self, model, cli, log)` + Basic Anthropic messages client. 
+ + +- `def mk_tool_choice(choose)` + Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise + +- `def mk_funcres(tuid, res)` + Given tool use id and the tool result, create a tool_result response. + +- `def mk_toolres(r, ns, obj)` + Create a `tool_result` message from response `r`. + +- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` + Make a call to Claude. + +- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` + Return the value of all tool calls (generally used for structured outputs) + +- `class Chat` + - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` + Anthropic chat client. + + - `@property def use(self)` + +- `def img_msg(data, cache)` + Convert image `data` into an encoded `dict` + +- `def text_msg(s, cache)` + Convert `s` to a text message + +- `def mk_msg(content, role, cache, **kw)` + Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message + +## claudette.toolloop + +- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` + Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages + diff --git a/llms-ctx-full.txt b/llms-ctx-full.txt new file mode 100644 index 0000000..df3e806 --- /dev/null +++ b/llms-ctx-full.txt @@ -0,0 +1,2054 @@ +Things to remember when using Claudette: + +- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key +- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) +- The library provides both synchronous and asynchronous interfaces +- Use `Chat()` for maintaining conversation state and handling tool interactions +- When using tools, the library automatically handles the request/response loop +- Image support is built in but only available on compatible models (not Haiku)# claudette + + + +> **NB**: If you are reading this in GitHub’s readme, we recommend you +> instead read the much more nicely formatted [documentation +> format](https://claudette.answer.ai/) of this tutorial. + +*Claudette* is a wrapper for Anthropic’s [Python +SDK](https://github.com/anthropics/anthropic-sdk-python). + +The SDK works well, but it is quite low level – it leaves the developer +to do a lot of stuff manually. That’s a lot of extra work and +boilerplate! Claudette automates pretty much everything that can be +automated, whilst providing full control. Amongst the features provided: + +- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that + creates stateful dialogs +- Support for *prefill*, which tells Claude what to use as the first few + words of its response +- Convenient image support +- Simple and convenient support for Claude’s new Tool Use API. + +You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the +key provided to you by Anthropic in order to use this library. + +Note that this library is the first ever “literate nbdev” project. That +means that the actual source code for the library is a rendered Jupyter +Notebook which includes callout notes and tips, HTML tables and images, +detailed explanations, and teaches *how* and *why* the code is written +the way it is. 
Even if you’ve never used the Anthropic Python SDK or
+Claude API before, you should be able to read the source code. Click
+[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
+or clone the git repo and execute the notebook yourself to see every
+step of the creation process in action. The tutorial below includes
+links to API details which will take you to relevant parts of the
+source. The reason this project is a new kind of literate program is
+because we take seriously Knuth’s call to action, that we have a “*moral
+commitment*” to never write an “*illiterate program*” – and so we have a
+commitment to making literate programming an easy and pleasant
+experience. (For more on this, see [this
+talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
+
+> “*Let us change our traditional attitude to the construction of
+> programs: Instead of imagining that our main task is to instruct a
+> **computer** what to do, let us concentrate rather on explaining to
+> **human beings** what we want a computer to do.*” Donald E. Knuth,
+> [Literate
+> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
+> (1984)
+
+## Install
+
+``` sh
+pip install claudette
+```
+
+## Getting started
+
+Anthropic’s Python SDK will automatically be installed with Claudette,
+if you don’t already have it.
+
+``` python
+import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+```
+
+To print every HTTP request and response in full, uncomment the above
+line.
+
+``` python
+from claudette import *
+```
+
+Claudette only exports the symbols that are needed to use the library,
+so you can use `import *` to import them. Alternatively, just use:
+
+``` python
+import claudette
+```
+
+…and then add the prefix `claudette.` to any usages of the module.
+
+Claudette provides `models`, which is a list of models currently
+available from the SDK.
+
+``` python
+models
+```
+
+    ['claude-3-opus-20240229',
+     'claude-3-5-sonnet-20241022',
+     'claude-3-haiku-20240307']
+
+For these examples, we’ll use Sonnet 3.5, since it’s awesome!
+
+``` python
+model = models[1]
+```
+
+## Chat
+
+The main interface to Claudette is the
+[`Chat`](https://claudette.answer.ai/core.html#chat) class, which
+provides a stateful interface to Claude:
+
+``` python
+chat = Chat(model, sp="""You are a helpful and concise assistant.""")
+chat("I'm Jeremy")
+```
+
+Hello Jeremy, nice to meet you.
+
+<details>
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
+
+The response records that a cache has been created using these input
+tokens:
+
+``` python
+print(r.usage)
+```
+
+    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
+
+We can now ask a followup question in this chat:
+
+``` python
+r = chat('How does it make tool use more ergonomic?')
+r
+```
+
+According to the README, Claudette makes tool use more ergonomic in
+several ways:
+
+1. It uses docments to make Python function definitions more
+   user-friendly - each parameter and return value should have a type
+   and description
+
+2. It handles the tool calling process automatically - when Claude
+   returns a tool_use message, Claudette manages calling the tool with
+   the provided parameters behind the scenes
+
+3. It provides a `toolloop` method that can handle multiple tool calls
+   in a single step to solve more complex problems
+
+4. It allows you to pass a list of tools to the Chat constructor and
+   optionally force Claude to always use a specific tool via
+   `tool_choice`
+
+Here’s a simple example from the README:
+
+``` python
+def sums(
+    a:int, # First thing to sum
+    b:int=1 # Second thing to sum
+) -> int: # The sum of the inputs
+    "Adds a + b."
+    print(f"Finding the sum of {a} and {b}")
+    return a + b
+
+chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
+```
+
+This makes it much simpler compared to manually handling all the tool
+use logic that would be required with the base SDK.
+
+<details>
+ +- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` +- content: + `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` + +
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
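+
+If you want to inspect the requested call yourself, the `find_block`
+helper (per the API list: “Find the first block of type `blk_type` in
+`r.content`”) pulls it out of the response; this sketch assumes
+`ToolUseBlock` is importable from `anthropic.types`:
+
+``` python
+from anthropic.types import ToolUseBlock
+from claudette import find_block
+
+tu = find_block(r, ToolUseBlock)
+print(tu.name, tu.input)   # sums {'a': 604542, 'b': 6458932}
+```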
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+
+You can see how many tokens have been used at any time by checking the
+`use` property. Note that (as of May 2024) tool use in Claude uses a
+*lot* of tokens, since it automatically adds a large system prompt.
+
+``` python
+chat.use
+```
+
+    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
+
+We can do everything needed to use tools in a single step, by using
+[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
+This can even call multiple tools as needed to solve a problem. For
+example, let’s define a tool to handle multiplication:
+
+``` python
+def mults(
+    a:int, # First thing to multiply
+    b:int=1 # Second thing to multiply
+) -> int: # The product of the inputs
+    "Multiplies a * b."
+    print(f"Finding the product of {a} and {b}")
+    return a * b
+```
+
+Now with a single call we can calculate `(a+b)*2` – by passing a trace
+function via `trace_func` we can see each response from Claude in the
+process:
+
+``` python
+chat = Chat(model, sp=sp, tools=[sums,mults])
+pr = f'Calculate ({a}+{b})*2'
+pr
+```
+
+    'Calculate (604542+6458932)*2'
+
+``` python
+chat.toolloop(pr, trace_func=print)
+```
+
+    Finding the sum of 604542 and 6458932
+    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
+    Finding the product of 7063474 and 2
+    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
+    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
+
+The final result is 14,126,948.
+
+<details>
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
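+
+Incidentally, the `tool_choice` argument is normalised by the
+`mk_tool_choice` helper, which per the API list creates “a `tool_choice`
+dict that’s ‘auto’ if `choose` is `None`, ‘any’ if it is True, or ‘tool’
+otherwise”. A sketch (the expected dicts below follow Anthropic’s
+tool-choice format and are an assumption, not captured output):
+
+``` python
+from claudette.core import mk_tool_choice
+
+mk_tool_choice(None)      # {'type': 'auto'}
+mk_tool_choice(True)      # {'type': 'any'}
+mk_tool_choice('sums')    # {'type': 'tool', 'name': 'sums'}
+```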
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
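+
+As the note below explains, the image is re-sent with every turn of the
+dialog, so for large images it can pay to cache that message. A sketch
+combining image input with the `cache=True` flag of `mk_msg` shown in
+the prompt-caching section (this combination is an assumption, not an
+example from the original README):
+
+``` python
+chat = Chat(model)
+chat(mk_msg([img, "In brief, what is in this image?"], cache=True))
+```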
+ +Note that the image is passed in again for every input in the dialog, so +that number of input tokens increases quickly with this kind of chat. +(For large images, using prompt caching might be a good idea.) + +``` python +chat.use +``` + + In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964 + +## Other model providers + +You can also use 3rd party providers of Anthropic models, as shown here. + +### Amazon Bedrock + +These are the models available through Bedrock: + +``` python +models_aws +``` + + ['anthropic.claude-3-opus-20240229-v1:0', + 'anthropic.claude-3-5-sonnet-20241022-v2:0', + 'anthropic.claude-3-sonnet-20240229-v1:0', + 'anthropic.claude-3-haiku-20240307-v1:0'] + +To use them, call `AnthropicBedrock` with your access details, and pass +that to [`Client`](https://claudette.answer.ai/core.html#client): + +``` python +from anthropic import AnthropicBedrock +``` + +``` python +ab = AnthropicBedrock( + aws_access_key=os.environ['AWS_ACCESS_KEY'], + aws_secret_key=os.environ['AWS_SECRET_KEY'], +) +client = Client(models_aws[-1], ab) +``` + +Now create your [`Chat`](https://claudette.answer.ai/core.html#chat) +object passing this client to the `cli` parameter – and from then on, +everything is identical to the previous examples. + +``` python +chat = Chat(cli=client) +chat("I'm Jeremy") +``` + +It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
+
+### Google Vertex
+
+These are the models available through Vertex:
+
+``` python
+models_goog
+```
+
+    ['claude-3-opus@20240229',
+     'claude-3-5-sonnet-v2@20241022',
+     'claude-3-sonnet@20240229',
+     'claude-3-haiku@20240307']
+
+To use them, call `AnthropicVertex` with your access details, and pass
+that to [`Client`](https://claudette.answer.ai/core.html#client):
+
+``` python
+from anthropic import AnthropicVertex
+import google.auth
+```
+
+``` python
+project_id = google.auth.default()[1]
+gv = AnthropicVertex(project_id=project_id, region="us-east5")
+client = Client(models_goog[-1], gv)
+```
+
+``` python
+chat = Chat(cli=client)
+chat("I'm Jeremy")
+```
+
+## Extensions
+
+- [Pydantic Structured
+  Output](https://github.com/tom-pollak/claudette-pydantic)
# Tool loop



``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```

``` python
model = models[-1]
```

Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.

We’ll start by defining the same mock customer/order data as in
Anthropic’s example, plus create an entity relationship between
customers and orders:

``` python
orders = {
    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}

customers = {
    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
               orders=[orders['O1'], orders['O2']]),
    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
               orders=[orders['O3']])
}
```

We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette handles all that for us automatically from the functions
directly. We’ll add some extra functionality to update order details
when cancelling too.

``` python
def get_customer_info(
    customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
    "Retrieves a customer's information and their orders based on the customer ID"
    print(f'- Retrieving customer {customer_id}')
    return customers.get(customer_id, "Customer not found")

def get_order_details(
    order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
    "Retrieves the details of a specific order based on the order ID"
    print(f'- Retrieving order {order_id}')
    return orders.get(order_id, "Order not found")

def cancel_order(
    order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
    "Cancels an order based on the provided order ID"
    print(f'- Cancelling order {order_id}')
    if order_id not in orders: return False
    orders[order_id]['status'] = 'Cancelled'
    return True
```

We’re now ready to start our chat.

``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = Chat(model, tools=tools)
```

We’ll start with the same request as Anthropic showed:

``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:

``` python
r = chat()
contents(r)
```

    'The email address for customer C1 is john@example.com.'

Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?
``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```

    - Retrieving customer C1
    tool_use

    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]

This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let’s write a function to handle this
for us:

------------------------------------------------------------------------

source

### Chat.toolloop

> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
>                cont_func:Optional[callable]=noop, temp=None,
>                maxtok=4096, stream=False, prefill='',
>                tool_choice:Optional[dict]=None)

*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*

|  | Type | Default | Details |
|----|----|----|----|
| pr |  |  | Prompt to pass to Claude |
| max_steps | int | 10 | Maximum number of tool requests to loop through |
| trace_func | Optional | None | Function to trace tool use steps (e.g `print`) |
| cont_func | Optional | noop | Function that stops loop if returns False |
| temp | NoneType | None | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| tool_choice | Optional | None | Optionally force use of some tool |
+ +
+Exported source + +``` python +@patch +@delegates(Chat.__call__) +def toolloop(self:Chat, + pr, # Prompt to pass to Claude + max_steps=10, # Maximum number of tool requests to loop through + trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) + cont_func:Optional[callable]=noop, # Function that stops loop if returns False + **kwargs): + "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" + n_msgs = len(self.h) + r = self(pr, **kwargs) + for i in range(max_steps): + if r.stop_reason!='tool_use': break + if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) + r = self(**kwargs) + if not (cont_func or noop)(self.h[-2]): break + if trace_func: trace_func(self.h[n_msgs:]) + return r +``` + +
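Note that if Claude is still requesting tools after `max_steps` rounds, the loop simply returns that last `tool_use` response rather than raising an error. When a task might need many steps, it’s worth checking `stop_reason` afterwards – a quick sketch:

``` python
r = chat.toolloop('Please cancel all orders for customer C1 for me.', max_steps=2)
if r.stop_reason == 'tool_use': print('Hit max_steps before the task completed')
```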
+ +We’ll start by re-running our previous request - we shouldn’t have to +manually pass back the `tool_use` message any more: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Can you tell me the email address for customer C1?') +r +``` + + - Retrieving customer C1 + +The email address for customer C1 is john@example.com. + +
+ +- id: `msg_01Fm2CY76dNeWief4kUW6r71` +- content: + `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Let’s see if it can handle the multi-stage process now – we’ll add +`trace_func=print` to see each stage of the process: + +``` python +chat = Chat(model, tools=tools) +r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) +r +``` + + - Retrieving customer C1 + [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] + - Cancelling order O1 + [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] + - Cancelling order O2 + [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] + [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] + +I’ve successfully cancelled both orders O1 and O2 for customer C1. +Please let me know if you need anything else! + +
+ +- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` +- content: + `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
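Since `cancel_order` mutates our mock `orders` dict in place, we can also confirm the new state directly in Python, outside the chat:

``` python
[(o['id'], o['status']) for o in orders.values()]
```

    [('O1', 'Cancelled'), ('O2', 'Cancelled'), ('O3', 'Shipped')]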
+ +OK Claude thinks the orders were cancelled – let’s check one: + +``` python +chat.toolloop('What is the status of order O2?') +``` + + - Retrieving order O2 + +The status of order O2 is now ‘Cancelled’ since I successfully cancelled +that order earlier. + +
+ +- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` +- content: + `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
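The `cont_func` parameter gives us an early-exit hook: after each tool call, `toolloop` passes the message holding the tool result to `cont_func`, and stops looping if it returns `False`. Here’s a minimal sketch (the `stop_on_failure` helper and order `O99` are our own inventions, not part of Claudette):

``` python
def stop_on_failure(msg):
    # `msg` is the user message carrying tool results;
    # continue only while no tool has returned 'False'
    return all(tr.get('content') != 'False' for tr in msg.get('content', []))

chat = Chat(model, tools=tools)
chat.toolloop('Please cancel order O99 for me.', cont_func=stop_on_failure)
```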
## Code interpreter

Here is an example of using `toolloop` to implement a simple code
interpreter with additional tools.

``` python
from toolslm.shell import get_shell
from fastcore.meta import delegates
import traceback
```

``` python
@delegates()
class CodeChat(Chat):
    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
        super().__init__(model=model, **kwargs)
        self.ask = ask
        self.tools.append(self.run_cell)
        self.shell = get_shell()
        self.shell.run_cell('import '+self.imps)
```

`CodeChat` takes one additional parameter beyond what we pass to
[`Chat`](https://claudette.answer.ai/core.html#chat), which is `ask` –
if that’s `True`, we’ll prompt the user before running code.

``` python
@patch
def run_cell(
    self:CodeChat,
    code:str,   # Code to execute in persistent IPython session
): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
    if self.ask and input(confirm): return '#DECLINED#'
    try: res = self.shell.run_cell(code)
    except Exception as e: return traceback.format_exc()
    return res.stdout if res.result is None else res.result
```

We just pass along requests to run code to the shell’s implementation.
Claude often prints results instead of just using the last expression,
so we capture stdout in those cases.

``` python
sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
Don't do complex calculations yourself -- use code for them.
The following modules are pre-imported for `run_cell` automatically:

{CodeChat.imps}

Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.

If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
```

``` python
def get_user(ignored:str='' # Unused parameter
            ): # Username of current user
    "Get the username of the user running this session"
    print("Looking up username")
    return 'Jeremy'
```

In order to test out multi-stage tool use, we create a mock function
that Claude can call to get the current username.

``` python
model = models[1]
```

``` python
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
```

Claude gets confused sometimes about how tools work, so we use examples
to remind it:

``` python
chat.h = [
    'Calculate the square root of `10332`', 'math.sqrt(10332)',
    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
+] +``` + +Providing a callable to toolloop’s `trace_func` lets us print out +information during the loop: + +``` python +def _show_cts(h): + for r in h: + for o in r.get('content'): + if hasattr(o,'text'): print(o.text) + nm = getattr(o, 'name', None) + if nm=='run_cell': print(o.input['code']) + elif nm: print(f'{o.name}({o.input})') +``` + +…and toolloop’s `cont_func` callable let’s us provide a function which, +if it returns `False`, stops the loop: + +``` python +def _cont_decline(c): + return nested_idx(c, 'content', 'content') != '#DECLINED#' +``` + +Now we can try our code interpreter. We start by asking for a function +to be created, which we’ll use in the next prompt to test that the +interpreter is persistent. + +``` python +pr = '''Create a 1-line function `checksum` for a string `s`, +that multiplies together the ascii values of each character in `s` using `reduce`.''' +chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline) +``` + + Press Enter to execute, or enter "n" to skip? + ``` + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + ``` + + Create a 1-line function `checksum` for a string `s`, + that multiplies together the ascii values of each character in `s` using `reduce`. + Let me help you create that function using `reduce` and `functools`. + checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1) + The function has been created. Let me explain how it works: + 1. It takes a string `s` as input + 2. Uses `functools.reduce` to multiply together all ASCII values + 3. `ord(y)` gets the ASCII value of each character + 4. The initial value is 1 (the third parameter to reduce) + 5. The lambda function multiplies the accumulator (x) with each new ASCII value + + You can test it with any string. For example, you could try `checksum("hello")` to see it in action. + +The function has been created. Let me explain how it works: 1. It takes +a string `s` as input 2. Uses `functools.reduce` to multiply together +all ASCII values 3. `ord(y)` gets the ASCII value of each character 4. +The initial value is 1 (the third parameter to reduce) 5. The lambda +function multiplies the accumulator (x) with each new ASCII value + +You can test it with any string. For example, you could try +`checksum("hello")` to see it in action. + +
+ +- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` +- content: + `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +By asking for a calculation to be done on the username, we force it to +use multiple steps: + +``` python +pr = 'Use it to get the checksum of the username of this session.' +chat.toolloop(pr, trace_func=_show_cts) +``` + + Looking up username + Use it to get the checksum of the username of this session. + I'll first get the username using `get_user` and then apply our `checksum` function to it. + get_user({'ignored': ''}) + Press Enter to execute, or enter "n" to skip? + ``` + print(checksum("Jeremy")) + ``` + + Now I'll calculate the checksum of "Jeremy": + print(checksum("Jeremy")) + The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". + +The checksum of the username “Jeremy” is 1134987783204. This was +calculated by multiplying together the ASCII values of each character in +“Jeremy”. + +
+ +- id: `msg_01UXvtcLzzykZpnQUT35v4uD` +- content: + `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
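Since the checksum is just a product of ASCII codes, we can sanity-check Claude’s answer directly:

``` python
import functools
# Multiply together the ASCII values of each character, starting from 1
functools.reduce(lambda x, y: x * ord(y), 'Jeremy', 1)
```

    1134987783204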
# The async version + + + +## Setup + +## Async SDK + +``` python +model = models[1] +cli = AsyncAnthropic() +``` + +``` python +m = {'role': 'user', 'content': "I'm Jeremy"} +r = await cli.messages.create(messages=[m], model=model, max_tokens=100) +r +``` + +Hello Jeremy! It’s nice to meet you. How can I assist you today? Is +there anything specific you’d like to talk about or any questions you +have? + +
+ +- id: `msg_019gsEQs5dqb3kgwNJbTH27M` +- content: + `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 36}` + +
+ +------------------------------------------------------------------------ + +source + +### AsyncClient + +> AsyncClient (model, cli=None, log=False) + +*Async Anthropic messages client.* + +
+Exported source + +``` python +class AsyncClient(Client): + def __init__(self, model, cli=None, log=False): + "Async Anthropic messages client." + super().__init__(model,cli,log) + if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) +``` + +
``` python
c = AsyncClient(model)
```

``` python
c._r(r)
c.use
```

    In: 10; Out: 36; Total: 46

------------------------------------------------------------------------

source

### AsyncClient.\_\_call\_\_

> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='',
>                       stream:bool=False, stop=None, cli=None, log=False)

*Make an async call to Claude.*

|  | Type | Default | Details |
|----|----|----|----|
| msgs | list |  | List of messages in the dialog |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| cli | NoneType | None |  |
| log | bool | False |  |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): + async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: + if prefill: yield prefill + async for o in s.text_stream: yield o + self._log(await s.get_final_message(), prefill, msgs, kwargs) +``` + +
+
+Exported source + +``` python +@patch +@delegates(Client) +async def __call__(self:AsyncClient, + msgs:list, # List of messages in the dialog + sp='', # The system prompt + temp=0, # Temperature + maxtok=4096, # Maximum tokens + prefill='', # Optional prefill to pass to Claude as start of its response + stream:bool=False, # Stream response? + stop=None, # Stop sequence + **kwargs): + "Make an async call to Claude." + msgs = self._precall(msgs, prefill, stop, kwargs) + if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + res = await self.c.messages.create( + model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) + return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) +``` + +
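One detail worth noting: when `stream=True`, the awaited call returns an async generator, and the final message – and hence the token usage – is only logged once the stream has been fully consumed, since `_stream` calls `_log` with `get_final_message()` after iteration finishes. A quick sketch:

``` python
s = await c('Hi', stream=True)  # returns an async generator; nothing logged yet
async for o in s: pass          # consuming the stream triggers the final `_log` call
c.use                           # usage now includes this call
```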
+ +``` python +c = AsyncClient(model, log=True) +c.use +``` + + In: 0; Out: 0; Total: 0 + +``` python +c.model = models[1] +await c('Hi') +``` + +Hello! How can I assist you today? Feel free to ask any questions or let +me know if you need help with anything. + +
+ +- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` +- content: + `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +c.use +``` + + In: 8; Out: 29; Total: 37 + +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await c(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon meaning of life. Many philosophers +and religions have proposed different answers, but it remains an open +question that individuals must grapple with for themselves. + +
+ +- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
``` python
async for o in (await c('Hi', stream=True)): print(o, end='')
```

    Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.

``` python
c.use
```

    In: 40; Out: 109; Total: 149

``` python
async for o in (await c(q, prefill=pref, stream=True)): print(o, end='')
```

    According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.

``` python
c.use
```

    In: 64; Out: 160; Total: 224

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```

``` python
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
```

``` python
tools = [get_schema(sums)]
msgs = mk_msgs(pr)
r = await c(msgs, sp=sp, tools=tools, tool_choice=choice)
tr = mk_toolres(r, ns=globals())
msgs += tr
contents(await c(msgs, sp=sp, tools=tools))
```

    Finding the sum of 604542 and 6458932

    'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?'

## AsyncChat

------------------------------------------------------------------------

source

### AsyncChat

> AsyncChat (model:Optional[str]=None,
>            cli:Optional[claudette.core.Client]=None, sp='',
>            tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None)

*Anthropic async chat client.*

|  | Type | Default | Details |
|----|----|----|----|
| model | Optional | None | Model to use (leave empty if passing `cli`) |
| cli | Optional | None | Client to use (leave empty if passing `model`) |
| sp | str |  |  |
| tools | Optional | None |  |
| temp | int | 0 |  |
| cont_pr | Optional | None |  |
+ +
+Exported source + +``` python +@delegates() +class AsyncChat(Chat): + def __init__(self, + model:Optional[str]=None, # Model to use (leave empty if passing `cli`) + cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) + **kwargs): + "Anthropic async chat client." + super().__init__(model, cli, **kwargs) + if not cli: self.c = AsyncClient(model) +``` + +
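Like [`Chat`](https://claudette.answer.ai/core.html#chat), `AsyncChat` accepts a ready-made client via `cli` instead of a model name, which is handy if you need custom client settings. A short sketch:

``` python
cli = AsyncClient(models[1])
chat = AsyncChat(cli=cli, sp='Be concise.')
await chat("I'm Jeremy")
```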
``` python
sp = "Never mention what tools you use."
chat = AsyncChat(model, sp=sp)
chat.c.use, chat.h
```

    (In: 0; Out: 0; Total: 0, [])

------------------------------------------------------------------------

source

### AsyncChat.\_\_call\_\_

> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False,
>                     prefill='', **kw)

*Call self as a function.*

|  | Type | Default | Details |
|----|----|----|----|
| pr | NoneType | None | Prompt / message |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| stream | bool | False | Stream response? |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| kw |  |  |  |
+ +
+Exported source + +``` python +@patch +async def _stream(self:AsyncChat, res): + async for o in res: yield o + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) +``` + +
+
+Exported source + +``` python +@patch +async def _append_pr(self:AsyncChat, pr=None): + prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history + if pr and prev_role == 'user': await self() + self._post_pr(pr, prev_role) +``` + +
+
+Exported source + +``` python +@patch +async def __call__(self:AsyncChat, + pr=None, # Prompt / message + temp=0, # Temperature + maxtok=4096, # Maximum tokens + stream=False, # Stream response? + prefill='', # Optional prefill to pass to Claude as start of its response + **kw): + await self._append_pr(pr) + if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] + res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) + if stream: return self._stream(res) + self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) + return res +``` + +
+ +``` python +await chat("I'm Jeremy") +await chat("What's my name?") +``` + +Your name is Jeremy, as you mentioned in your previous message. + +
+ +- id: `msg_01NMugMXWpDP9iuTXeLkHarn` +- content: + `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +q = "Concisely, what is the meaning of life?" +pref = 'According to Douglas Adams,' +await chat(q, prefill=pref) +``` + +According to Douglas Adams, the meaning of life is 42. More seriously, +there’s no universally agreed upon answer. Common philosophical +perspectives include: + +1. Finding personal fulfillment +2. Serving others +3. Pursuing happiness +4. Creating meaning through our choices +5. Experiencing and appreciating existence + +Ultimately, many believe each individual must determine their own life’s +meaning. + +
+ +- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` +- content: + `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat = AsyncChat(model, sp=sp) +async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') +``` + + Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? + +``` python +pr = f"What is {a}+{b}?" +chat = AsyncChat(model, sp=sp, tools=[sums]) +r = await chat(pr) +r +``` + + Finding the sum of 604542 and 6458932 + +To answer this question, I can use the “sums” function to add these two +numbers together. Let me do that for you. + +
+ +- id: `msg_015z1rffSWFxvj7rSpzc43ZE` +- content: + `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +await chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
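There’s no async version of `toolloop` shown in this notebook, but the same pattern is easy to write out by hand: keep calling the chat while Claude keeps asking for tools. A minimal sketch:

``` python
chat = AsyncChat(model, sp=sp, tools=[sums])
r = await chat(pr)
while r.stop_reason == 'tool_use':
    r = await chat()  # passes the tool results back and gets the next response
r
```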
+ +``` python +fn = Path('samples/puppy.jpg') +img = fn.read_bytes() +``` + +``` python +q = "In brief, what color flowers are in this image?" +msg = mk_msg([img_msg(img), text_msg(q)]) +await c([msg]) +``` + +The flowers in this image are purple. They appear to be small, +daisy-like flowers, possibly asters or some type of purple daisy, +blooming in the background behind the adorable puppy in the foreground. + +
+ +- id: `msg_017qgZggLjUY915mWbWCkb9X` +- content: + `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20240620` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
diff --git a/llms-ctx.txt b/llms-ctx.txt
new file mode 100644
index 0000000..ad4b3fc
--- /dev/null
+++ b/llms-ctx.txt
@@ -0,0 +1,869 @@
+Things to remember when using Claudette:
+
+- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
+- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
+- The library provides both synchronous and asynchronous interfaces
+- Use `Chat()` for maintaining conversation state and handling tool interactions
+- When using tools, the library automatically handles the request/response loop
+- Image support is built in but only available on compatible models (not Haiku)# claudette



> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> format](https://claudette.answer.ai/) of this tutorial.

*Claudette* is a wrapper for Anthropic’s [Python
SDK](https://github.com/anthropics/anthropic-sdk-python).

The SDK works well, but it is quite low level – it leaves the developer
to do a lot of stuff manually. That’s a lot of extra work and
boilerplate! Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
  creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
  words of its response
- Convenient image support
- Simple and convenient support for Claude’s new Tool Use API.

You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.

Note that this library is the first ever “literate nbdev” project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you’ve never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
because we take seriously Knuth’s call to action, that we have a “*moral
commitment*” to never write an “*illiterate program*” – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)

> “*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*” Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)

## Install

``` sh
pip install claudette
```

## Getting started

Anthropic’s Python SDK will automatically be installed with Claudette,
if you don’t already have it.
+ +``` python +import os +# os.environ['ANTHROPIC_LOG'] = 'debug' +``` + +To print every HTTP request and response in full, uncomment the above +line. + +``` python +from claudette import * +``` + +Claudette only exports the symbols that are needed to use the library, +so you can use `import *` to import them. Alternatively, just use: + +``` python +import claudette +``` + +…and then add the prefix `claudette.` to any usages of the module. + +Claudette provides `models`, which is a list of models currently +available from the SDK. + +``` python +models +``` + + ['claude-3-opus-20240229', + 'claude-3-5-sonnet-20241022', + 'claude-3-haiku-20240307'] + +For these examples, we’ll use Sonnet 3.5, since it’s awesome! + +``` python +model = models[1] +``` + +## Chat + +The main interface to Claudette is the +[`Chat`](https://claudette.answer.ai/core.html#chat) class, which +provides a stateful interface to Claude: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +chat("I'm Jeremy") +``` + +Hello Jeremy, nice to meet you. + +
+ +- id: `msg_015oK9jEcra3TEKHUGYULjWB` +- content: + `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01Si8sTFJe8d8vq7enanbAwj` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +r = chat("What's my name?") +r +``` + +Your name is Jeremy. + +
+ +- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` +- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +As you see above, displaying the results of a call in a notebook shows +just the message contents, with the other details hidden behind a +collapsible section. Alternatively you can `print` the details: + +``` python +print(r) +``` + + Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) + +Claude supports adding an extra `assistant` message at the end, which +contains the *prefill* – i.e. the text we want Claude to assume the +response starts with. Let’s try it out: + +``` python +chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,') +``` + +According to Douglas Adams,42. Philosophically, it’s to find personal +meaning through relationships, purpose, and experiences. + +
+ +- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` +- content: + `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +You can add `stream=True` to stream the results as soon as they arrive +(although you will only see the gradual generation if you execute the +notebook yourself, of course!) + +``` python +for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): + print(o, end='') +``` + + It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. + +### Async + +Alternatively, you can use +[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or +[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for +the async versions, e.g: + +``` python +chat = AsyncChat(model) +await chat("I'm Jeremy") +``` + +Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_016Q8cdc3sPWBS8eXcNj841L` +- content: + `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Remember to use `async for` when streaming in this case: + +``` python +async for o in await chat("Concisely, what is the meaning of life?", + prefill='According to Douglas Adams,', stream=True): + print(o, end='') +``` + + According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. + +## Prompt caching + +If you use `mk_msg(msg, cache=True)`, then the message is cached using +Claude’s [prompt +caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) +feature. For instance, here we use caching when asking about Claudette’s +readme file: + +``` python +chat = Chat(model, sp="""You are a helpful and concise assistant.""") +``` + +``` python +nbtxt = Path('README.txt').read_text() +msg = f''' +{nbtxt} + +In brief, what is the purpose of this project based on the readme?''' +r = chat(mk_msg(msg, cache=True)) +r +``` + +Claudette is a high-level wrapper for Anthropic’s Python SDK that +automates common tasks and provides additional functionality. Its main +features include: + +1. A Chat class for stateful dialogs +2. Support for prefill (controlling Claude’s initial response words) +3. Convenient image handling +4. Simple tool use API integration +5. Support for multiple model providers (Anthropic, AWS Bedrock, Google + Vertex) + +The project is notable for being the first “literate nbdev” project, +meaning its source code is written as a detailed, readable Jupyter +Notebook that includes explanations, examples, and teaching material +alongside the functional code. + +The goal is to simplify working with Claude’s API while maintaining full +control, reducing boilerplate code and manual work that would otherwise +be needed with the base SDK. + +
+ +- id: `msg_014rVQnYoZXZuyWUCMELG1QW` +- content: + `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` + +
The response records that a cache has been created using these input
tokens:

``` python
print(r.usage)
```

    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)

We can now ask a followup question in this chat:

``` python
r = chat('How does it make tool use more ergonomic?')
r
```

According to the README, Claudette makes tool use more ergonomic in
several ways:

1. It uses docments to make Python function definitions more
   user-friendly - each parameter and return value should have a type
   and description

2. It handles the tool calling process automatically - when Claude
   returns a tool_use message, Claudette manages calling the tool with
   the provided parameters behind the scenes

3. It provides a `toolloop` method that can handle multiple tool calls
   in a single step to solve more complex problems

4. It allows you to pass a list of tools to the Chat constructor and
   optionally force Claude to always use a specific tool via
   `tool_choice`

Here’s a simple example from the README:

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b

chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
```

This makes it much simpler compared to manually handling all the tool
use logic that would be required with the base SDK.
- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU`
- content:
  `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a `toolloop` method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via `tool_choice`\n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n    a:int, # First thing to sum \n    b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n    "Adds a + b."\n    print(f"Finding the sum of {a} and {b}")\n    return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
  `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}`
+ +We can see that this only used ~200 regular input tokens – the 7000+ +context tokens have been read from cache. + +``` python +print(r.usage) +``` + + Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) + +``` python +chat.use +``` + + In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 + +## Tool use + +[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude +use external tools. + +We use [docments](https://fastcore.fast.ai/docments.html) to make +defining Python functions as ergonomic as possible. Each parameter (and +the return value) should have a type, and a docments comment with the +description of what it is. As an example we’ll write a simple function +that adds numbers together, and will tell us when it’s being called: + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +Sometimes Claude will say something like “according to the `sums` tool +the answer is” – generally we’d rather it just tells the user the +answer, so we can use a system prompt to help with this: + +``` python +sp = "Never mention what tools you use." +``` + +We’ll get Claude to add up some long numbers: + +``` python +a,b = 604542,6458932 +pr = f"What is {a}+{b}?" +pr +``` + + 'What is 604542+6458932?' + +To use tools, pass a list of them to +[`Chat`](https://claudette.answer.ai/core.html#chat): + +``` python +chat = Chat(model, sp=sp, tools=[sums]) +``` + +To force Claude to always answer using a tool, set `tool_choice` to that +function name. When Claude needs to use a tool, it doesn’t return the +answer, but instead returns a `tool_use` message, which means we have to +call the named tool with the provided parameters. + +``` python +r = chat(pr, tool_choice='sums') +r +``` + + Finding the sum of 604542 and 6458932 + +ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, +‘b’: 6458932}, name=‘sums’, type=‘tool_use’) + +
+ +- id: `msg_014xrPyotyiBmFSctkp1LZHk` +- content: + `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `tool_use` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Claudette handles all that for us – we just call it again, and it all +happens automatically: + +``` python +chat() +``` + +The sum of 604542 and 6458932 is 7063474. + +
+ +- id: `msg_01151puJxG8Fa6k6QSmzwKQA` +- content: + `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.

``` python
chat.use
```

    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int, # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing
`trace_func` we can see each response from Claude in the process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
chat.toolloop(pr, trace_func=print)
```

    Finding the sum of 604542 and 6458932
    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
    Finding the product of 7063474 and 2
    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]

The final result is 14,126,948.
+ +- id: `msg_0162teyBcJHriUzZXMPz4r5d` +- content: + `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
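Since `trace_func` simply receives the list of new messages generated at each step, any callable works there. For instance (a sketch; the `steps` list is our own variable, not part of Claudette) we can collect the intermediate messages for later inspection instead of printing them:

``` python
steps = []
chat = Chat(model, sp=sp, tools=[sums, mults])
chat.toolloop(f'Calculate ({a}+{b})*2', trace_func=steps.append)
len(steps)  # one entry per round of the loop
```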
+ +## Structured data + +If you just want the immediate result from a single tool, use +[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). + +``` python +cli = Client(model) +``` + +``` python +def sums( + a:int, # First thing to sum + b:int=1 # Second thing to sum +) -> int: # The sum of the inputs + "Adds a + b." + print(f"Finding the sum of {a} and {b}") + return a + b +``` + +``` python +cli.structured("What is 604542+6458932", sums) +``` + + Finding the sum of 604542 and 6458932 + + [7063474] + +This is particularly useful for getting back structured information, +e.g: + +``` python +class President: + "Information about a president of the United States" + def __init__(self, + first:str, # first name + last:str, # last name + spouse:str, # name of spouse + years_in_office:str, # format: "{start_year}-{end_year}" + birthplace:str, # name of city + birth_year:int # year of birth, `0` if unknown + ): + assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" + store_attr() + + __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') +``` + +``` python +cli.structured("Provide key information about the 3rd President of the United States", President) +``` + + [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] + +## Images + +Claude can handle image data as well. As everyone knows, when testing +image APIs you have to use a cute puppy. + +``` python +fn = Path('samples/puppy.jpg') +display.Image(filename=fn, width=200) +``` + + + +We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object +as before: + +``` python +chat = Chat(model) +``` + +Claudette expects images as a list of bytes, so we read in the file: + +``` python +img = fn.read_bytes() +``` + +Prompts to Claudette can be lists, containing text, images, or both, eg: + +``` python +chat([img, "In brief, what color flowers are in this image?"]) +``` + +In this adorable puppy photo, there are purple/lavender colored flowers +(appears to be asters or similar daisy-like flowers) in the background. + +
+ +- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` +- content: + `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +The image is included as input tokens. + +``` python +chat.use +``` + + In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 + +Alternatively, Claudette supports creating a multi-stage chat with +separate image and text prompts. For instance, you can pass just the +image as the initial prompt (in which case Claude will make some general +comments about what it sees), and then follow up with questions in +additional prompts: + +``` python +chat = Chat(model) +chat(img) +``` + +What an adorable Cavalier King Charles Spaniel puppy! The photo captures +the classic brown and white coloring of the breed, with those soulful +dark eyes that are so characteristic. The puppy is lying in the grass, +and there are lovely purple asters blooming in the background, creating +a beautiful natural setting. The combination of the puppy’s sweet +expression and the delicate flowers makes for a charming composition. +Cavalier King Charles Spaniels are known for their gentle, affectionate +nature, and this little one certainly seems to embody those traits with +its endearing look. + +
+ +- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` +- content: + `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What direction is the puppy facing?') +``` + +The puppy is facing towards the left side of the image. Its head is +positioned so we can see its right side profile, though it appears to be +looking slightly towards the camera, giving us a good view of its +distinctive brown and white facial markings and one of its dark eyes. +The puppy is lying down with its white chest/front visible against the +green grass. + +
+ +- id: `msg_01AeR9eWjbxa788YF97iErtN` +- content: + `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +``` python +chat('What color is it?') +``` + +The puppy has a classic Cavalier King Charles Spaniel coat with a rich +chestnut brown (sometimes called Blenheim) coloring on its ears and +patches on its face, combined with a bright white base color. The white +is particularly prominent on its face (creating a distinctive blaze down +the center) and chest area. This brown and white combination is one of +the most recognizable color patterns for the breed. + +
+ +- id: `msg_01R91AqXG7pLc8hK24F5mc7x` +- content: + `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` +- model: `claude-3-5-sonnet-20241022` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: + `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` + +
+ +Note that the image is passed in again for every input in the dialog, so +that number of input tokens increases quickly with this kind of chat. +(For large images, using prompt caching might be a good idea.) + +``` python +chat.use +``` + + In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964 + +## Other model providers + +You can also use 3rd party providers of Anthropic models, as shown here. + +### Amazon Bedrock + +These are the models available through Bedrock: + +``` python +models_aws +``` + + ['anthropic.claude-3-opus-20240229-v1:0', + 'anthropic.claude-3-5-sonnet-20241022-v2:0', + 'anthropic.claude-3-sonnet-20240229-v1:0', + 'anthropic.claude-3-haiku-20240307-v1:0'] + +To use them, call `AnthropicBedrock` with your access details, and pass +that to [`Client`](https://claudette.answer.ai/core.html#client): + +``` python +from anthropic import AnthropicBedrock +``` + +``` python +ab = AnthropicBedrock( + aws_access_key=os.environ['AWS_ACCESS_KEY'], + aws_secret_key=os.environ['AWS_SECRET_KEY'], +) +client = Client(models_aws[-1], ab) +``` + +Now create your [`Chat`](https://claudette.answer.ai/core.html#chat) +object passing this client to the `cli` parameter – and from then on, +everything is identical to the previous examples. + +``` python +chat = Chat(cli=client) +chat("I'm Jeremy") +``` + +It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by +Anthropic. How can I help you today? + +
+ +- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` +- content: + `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` +- model: `claude-3-haiku-20240307` +- role: `assistant` +- stop_reason: `end_turn` +- stop_sequence: `None` +- type: `message` +- usage: `{'input_tokens': 10, 'output_tokens': 32}` + +
+
+### Google Vertex
+
+These are the models available through Vertex:
+
+``` python
+models_goog
+```
+
+    ['claude-3-opus@20240229',
+     'claude-3-5-sonnet-v2@20241022',
+     'claude-3-sonnet@20240229',
+     'claude-3-haiku@20240307']
+
+To use them, call `AnthropicVertex` with your access details, and pass
+that to [`Client`](https://claudette.answer.ai/core.html#client):
+
+``` python
+from anthropic import AnthropicVertex
+import google.auth
+```
+
+``` python
+project_id = google.auth.default()[1]
+gv = AnthropicVertex(project_id=project_id, region="us-east5")
+client = Client(models_goog[-1], gv)
+```
+
+``` python
+chat = Chat(cli=client)
+chat("I'm Jeremy")
+```
+
+## Extensions
+
+- [Pydantic Structured
+  Output](https://github.com/tom-pollak/claudette-pydantic)
diff --git a/llm/llms.txt b/llms.txt similarity index 93% rename from llm/llms.txt rename to llms.txt index 42a8cf6..0e28db3 100644 --- a/llm/llms.txt +++ b/llms.txt @@ -17,7 +17,7 @@ Things to remember when using Claudette: ## API -- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/llm/apilist.txt): A succint list of all functions and methods in claudette. +- [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/apilist.txt): A succint list of all functions and methods in claudette. ## Optional - [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions diff --git a/tools/refresh_llm_docs.sh b/tools/refresh_llm_docs.sh index 9be88e7..e98009f 100755 --- a/tools/refresh_llm_docs.sh +++ b/tools/refresh_llm_docs.sh @@ -3,10 +3,10 @@ echo "Refreshing LLM documentation files..." echo "Generating API list documentation..." -pysym2md claudette --output_file llm/apilist.txt > llm/apilist.txt +pysym2md claudette --output_file apilist.txt echo "Generating context files..." -llms_txt2ctx llm/llms.txt > llm/llms-ctx.txt -llms_txt2ctx llm/llms.txt --optional True > llm/llms-ctx-full.txt +llms_txt2ctx llms.txt > llms-ctx.txt +llms_txt2ctx llms.txt --optional True > llms-ctx-full.txt echo "✅ Documentation refresh complete!" From a9860c43b336c72ea072c059ee2ce435a5f460dc Mon Sep 17 00:00:00 2001 From: Erik Gaasedelen Date: Wed, 20 Nov 2024 21:03:02 -0800 Subject: [PATCH 9/9] remove llm folder --- llm/apilist.txt | 74 -- llm/llms-ctx-full.txt | 2127 ----------------------------------------- llm/llms-ctx.txt | 942 ------------------ 3 files changed, 3143 deletions(-) delete mode 100644 llm/apilist.txt delete mode 100644 llm/llms-ctx-full.txt delete mode 100644 llm/llms-ctx.txt diff --git a/llm/apilist.txt b/llm/apilist.txt deleted file mode 100644 index 72ce0bb..0000000 --- a/llm/apilist.txt +++ /dev/null @@ -1,74 +0,0 @@ -# claudette Module Documentation - -## claudette.asink - -- `class AsyncClient` - - `def __init__(self, model, cli, log)` - Async Anthropic messages client. - - -- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` - Make an async call to Claude. - -- `@delegates() class AsyncChat` - - `def __init__(self, model, cli, **kwargs)` - Anthropic async chat client. - - -## claudette.core - -- `def find_block(r, blk_type)` - Find the first block of type `blk_type` in `r.content`. - -- `def contents(r)` - Helper to get the contents from Claude response `r`. - -- `def usage(inp, out, cache_create, cache_read)` - Slightly more concise version of `Usage`. - -- `@patch def __add__(self, b)` - Add together each of `input_tokens` and `output_tokens` - -- `def mk_msgs(msgs, **kw)` - Helper to set 'assistant' role on alternate messages. - -- `class Client` - - `def __init__(self, model, cli, log)` - Basic Anthropic messages client. - - -- `def mk_tool_choice(choose)` - Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise - -- `def mk_funcres(tuid, res)` - Given tool use id and the tool result, create a tool_result response. - -- `def mk_toolres(r, ns, obj)` - Create a `tool_result` message from response `r`. - -- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` - Make a call to Claude. 
- -- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` - Return the value of all tool calls (generally used for structured outputs) - -- `class Chat` - - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` - Anthropic chat client. - - - `@property def use(self)` - -- `def img_msg(data, cache)` - Convert image `data` into an encoded `dict` - -- `def text_msg(s, cache)` - Convert `s` to a text message - -- `def mk_msg(content, role, cache, **kw)` - Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message - -## claudette.toolloop - -- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` - Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages - diff --git a/llm/llms-ctx-full.txt b/llm/llms-ctx-full.txt deleted file mode 100644 index 4ecb0ef..0000000 --- a/llm/llms-ctx-full.txt +++ /dev/null @@ -1,2127 +0,0 @@ -Things to remember when using Claudette: - -- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key -- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex) -- The library provides both synchronous and asynchronous interfaces -- Use `Chat()` for maintaining conversation state and handling tool interactions -- When using tools, the library automatically handles the request/response loop -- Image support is built in but only available on compatible models (not Haiku)# claudette - - - -> **NB**: If you are reading this in GitHub’s readme, we recommend you -> instead read the much more nicely formatted [documentation -> format](https://claudette.answer.ai/) of this tutorial. - -*Claudette* is a wrapper for Anthropic’s [Python -SDK](https://github.com/anthropics/anthropic-sdk-python). - -The SDK works well, but it is quite low level – it leaves the developer -to do a lot of stuff manually. That’s a lot of extra work and -boilerplate! Claudette automates pretty much everything that can be -automated, whilst providing full control. Amongst the features provided: - -- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that - creates stateful dialogs -- Support for *prefill*, which tells Claude what to use as the first few - words of its response -- Convenient image support -- Simple and convenient support for Claude’s new Tool Use API. - -You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the -key provided to you by Anthropic in order to use this library. - -Note that this library is the first ever “literate nbdev” project. That -means that the actual source code for the library is a rendered Jupyter -Notebook which includes callout notes and tips, HTML tables and images, -detailed explanations, and teaches *how* and *why* the code is written -the way it is. Even if you’ve never used the Anthropic Python SDK or -Claude API before, you should be able to read the source code. Click -[Claudette’s Source](https://claudette.answer.ai/core.html) to read it, -or clone the git repo and execute the notebook yourself to see every -step of the creation process in action. The tutorial below includes -links to API details which will take you to relevant parts of the -source. 
The reason this project is a new kind of literate program is
-because we take seriously Knuth’s call to action, that we have a “*moral
-commitment*” to never write an “*illiterate program*” – and so we have a
-commitment to making literate programming an easy and pleasant
-experience. (For more on this, see [this
-talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
-
-> “*Let us change our traditional attitude to the construction of
-> programs: Instead of imagining that our main task is to instruct a
-> **computer** what to do, let us concentrate rather on explaining to
-> **human beings** what we want a computer to do.*” Donald E. Knuth,
-> [Literate
-> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
-> (1984)
-
-## Install
-
-``` sh
-pip install claudette
-```
-
-## Getting started
-
-Anthropic’s Python SDK will automatically be installed with Claudette,
-if you don’t already have it.
-
-``` python
-import os
-# os.environ['ANTHROPIC_LOG'] = 'debug'
-```
-
-To print every HTTP request and response in full, uncomment the above
-line.
-
-``` python
-from claudette import *
-```
-
-Claudette only exports the symbols that are needed to use the library,
-so you can use `import *` to import them. Alternatively, just use:
-
-``` python
-import claudette
-```
-
-…and then add the prefix `claudette.` to any usages of the module.
-
-Claudette provides `models`, which is a list of models currently
-available from the SDK.
-
-``` python
-models
-```
-
-    ['claude-3-opus-20240229',
-     'claude-3-5-sonnet-20241022',
-     'claude-3-haiku-20240307']
-
-For these examples, we’ll use Sonnet 3.5, since it’s awesome!
-
-``` python
-model = models[1]
-```
-
-## Chat
-
-The main interface to Claudette is the
-[`Chat`](https://claudette.answer.ai/core.html#chat) class, which
-provides a stateful interface to Claude:
-
-``` python
-chat = Chat(model, sp="""You are a helpful and concise assistant.""")
-chat("I'm Jeremy")
-```
-
-Hello Jeremy, nice to meet you.
-
- -- id: `msg_015oK9jEcra3TEKHUGYULjWB` -- content: - `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
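-
-The [`Chat`](https://claudette.answer.ai/core.html#chat) object keeps the whole dialog on its `h` attribute (the
-same history list the code-interpreter example later assigns to
-directly), so between calls you can simply inspect it:
-
-``` python
-chat.h  # the list of user/assistant messages accumulated so far
-```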
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01Si8sTFJe8d8vq7enanbAwj` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -As you see above, displaying the results of a call in a notebook shows -just the message contents, with the other details hidden behind a -collapsible section. Alternatively you can `print` the details: - -``` python -print(r) -``` - - Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) - -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. Let’s try it out: - -``` python -chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,') -``` - -According to Douglas Adams,42. Philosophically, it’s to find personal -meaning through relationships, purpose, and experiences. - -
- -- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` -- content: - `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
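-
-Prefill works by appending a partial `assistant` message that Claude then
-continues from; this is the “extra `assistant` message” mentioned above.
-A sketch of the equivalent raw request (simplified; Claudette builds this
-for you):
-
-``` python
-msgs = [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
-        {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
-# Claude's reply continues straight on from the prefill text.
-```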
- -You can add `stream=True` to stream the results as soon as they arrive -(although you will only see the gradual generation if you execute the -notebook yourself, of course!) - -``` python -for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): - print(o, end='') -``` - - It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. - -### Async - -Alternatively, you can use -[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or -[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for -the async versions, e.g: - -``` python -chat = AsyncChat(model) -await chat("I'm Jeremy") -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_016Q8cdc3sPWBS8eXcNj841L` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Remember to use `async for` when streaming in this case: - -``` python -async for o in await chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,', stream=True): - print(o, end='') -``` - - According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. - -## Prompt caching - -If you use `mk_msg(msg, cache=True)`, then the message is cached using -Claude’s [prompt -caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) -feature. For instance, here we use caching when asking about Claudette’s -readme file: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -``` - -``` python -nbtxt = Path('README.txt').read_text() -msg = f''' -{nbtxt} - -In brief, what is the purpose of this project based on the readme?''' -r = chat(mk_msg(msg, cache=True)) -r -``` - -Claudette is a high-level wrapper for Anthropic’s Python SDK that -automates common tasks and provides additional functionality. Its main -features include: - -1. A Chat class for stateful dialogs -2. Support for prefill (controlling Claude’s initial response words) -3. Convenient image handling -4. Simple tool use API integration -5. Support for multiple model providers (Anthropic, AWS Bedrock, Google - Vertex) - -The project is notable for being the first “literate nbdev” project, -meaning its source code is written as a detailed, readable Jupyter -Notebook that includes explanations, examples, and teaching material -alongside the functional code. - -The goal is to simplify working with Claude’s API while maintaining full -control, reducing boilerplate code and manual work that would otherwise -be needed with the base SDK. - -
- -- id: `msg_014rVQnYoZXZuyWUCMELG1QW` -- content: - `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` - -
-
-The response records that a cache has been created using these input
-tokens:
-
-``` python
-print(r.usage)
-```
-
-    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
-
-We can now ask a followup question in this chat:
-
-``` python
-r = chat('How does it make tool use more ergonomic?')
-r
-```
-
-According to the README, Claudette makes tool use more ergonomic in
-several ways:
-
-1. It uses docments to make Python function definitions more
-   user-friendly - each parameter and return value should have a type
-   and description
-
-2. It handles the tool calling process automatically - when Claude
-   returns a tool_use message, Claudette manages calling the tool with
-   the provided parameters behind the scenes
-
-3. It provides a `toolloop` method that can handle multiple tool calls
-   in a single step to solve more complex problems
-
-4. It allows you to pass a list of tools to the Chat constructor and
-   optionally force Claude to always use a specific tool via
-   `tool_choice`
-
-Here’s a simple example from the README:
-
-``` python
-def sums(
-    a:int,  # First thing to sum
-    b:int=1 # Second thing to sum
-) -> int: # The sum of the inputs
-    "Adds a + b."
-    print(f"Finding the sum of {a} and {b}")
-    return a + b
-
-chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
-```
-
-This makes it much simpler compared to manually handling all the tool
-use logic that would be required with the base SDK.
-
- -- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU` -- content: - `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ```` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}` - -
- -We can see that this only used ~200 regular input tokens – the 7000+ -context tokens have been read from cache. - -``` python -print(r.usage) -``` - - Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) - -``` python -chat.use -``` - - In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 - -## Tool use - -[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude -use external tools. - -We use [docments](https://fastcore.fast.ai/docments.html) to make -defining Python functions as ergonomic as possible. Each parameter (and -the return value) should have a type, and a docments comment with the -description of what it is. As an example we’ll write a simple function -that adds numbers together, and will tell us when it’s being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -Sometimes Claude will say something like “according to the `sums` tool -the answer is” – generally we’d rather it just tells the user the -answer, so we can use a system prompt to help with this: - -``` python -sp = "Never mention what tools you use." -``` - -We’ll get Claude to add up some long numbers: - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -To use tools, pass a list of them to -[`Chat`](https://claudette.answer.ai/core.html#chat): - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -``` - -To force Claude to always answer using a tool, set `tool_choice` to that -function name. When Claude needs to use a tool, it doesn’t return the -answer, but instead returns a `tool_use` message, which means we have to -call the named tool with the provided parameters. - -``` python -r = chat(pr, tool_choice='sums') -r -``` - - Finding the sum of 604542 and 6458932 - -ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_014xrPyotyiBmFSctkp1LZHk` -- content: - `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
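-
-For reference, here is roughly the follow-up that has to be sent back by
-hand when using the raw SDK – the `tool_result` format below matches the
-tool traces shown later in this document:
-
-``` python
-tool_use = r.content[0]        # the ToolUseBlock above
-res = sums(**tool_use.input)   # actually run the requested function
-follow_up = {'role': 'user', 'content': [
-    {'type': 'tool_result', 'tool_use_id': tool_use.id, 'content': str(res)}]}
-```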
- -Claudette handles all that for us – we just call it again, and it all -happens automatically: - -``` python -chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_01151puJxG8Fa6k6QSmzwKQA` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-You can see how many tokens have been used at any time by checking the
-`use` property. Note that (as of May 2024) tool use in Claude uses a
-*lot* of tokens, since it automatically adds a large system prompt.
-
-``` python
-chat.use
-```
-
-    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
-
-We can do everything needed to use tools in a single step, by using
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
-This can even call multiple tools as needed to solve a problem. For
-example, let’s define a tool to handle multiplication:
-
-``` python
-def mults(
-    a:int,  # First thing to multiply
-    b:int=1 # Second thing to multiply
-) -> int: # The product of the inputs
-    "Multiplies a * b."
-    print(f"Finding the product of {a} and {b}")
-    return a * b
-```
-
-Now with a single call we can calculate `(a+b)*2` – by passing
-`trace_func` we can see each response from Claude in the process:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-pr = f'Calculate ({a}+{b})*2'
-pr
-```
-
-    'Calculate (604542+6458932)*2'
-
-``` python
-chat.toolloop(pr, trace_func=print)
-```
-
-    Finding the sum of 604542 and 6458932
-    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
-    Finding the product of 7063474 and 2
-    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
-    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
-
-The final result is 14,126,948.
-
- -- id: `msg_0162teyBcJHriUzZXMPz4r5d` -- content: - `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
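-
-`toolloop` follows up on at most `max_steps` tool requests (10 by
-default), so a runaway chain of tool calls stops on its own; a sketch of
-tightening that cap:
-
-``` python
-chat.toolloop(pr, max_steps=3)  # allow at most 3 tool-use round trips
-```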
- -## Structured data - -If you just want the immediate result from a single tool, use -[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). - -``` python -cli = Client(model) -``` - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -cli.structured("What is 604542+6458932", sums) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -This is particularly useful for getting back structured information, -e.g: - -``` python -class President: - "Information about a president of the United States" - def __init__(self, - first:str, # first name - last:str, # last name - spouse:str, # name of spouse - years_in_office:str, # format: "{start_year}-{end_year}" - birthplace:str, # name of city - birth_year:int # year of birth, `0` if unknown - ): - assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" - store_attr() - - __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') -``` - -``` python -cli.structured("Provide key information about the 3rd President of the United States", President) -``` - - [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object -as before: - -``` python -chat = Chat(model) -``` - -Claudette expects images as a list of bytes, so we read in the file: - -``` python -img = fn.read_bytes() -``` - -Prompts to Claudette can be lists, containing text, images, or both, eg: - -``` python -chat([img, "In brief, what color flowers are in this image?"]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -The image is included as input tokens. - -``` python -chat.use -``` - - In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 - -Alternatively, Claudette supports creating a multi-stage chat with -separate image and text prompts. For instance, you can pass just the -image as the initial prompt (in which case Claude will make some general -comments about what it sees), and then follow up with questions in -additional prompts: - -``` python -chat = Chat(model) -chat(img) -``` - -What an adorable Cavalier King Charles Spaniel puppy! The photo captures -the classic brown and white coloring of the breed, with those soulful -dark eyes that are so characteristic. The puppy is lying in the grass, -and there are lovely purple asters blooming in the background, creating -a beautiful natural setting. The combination of the puppy’s sweet -expression and the delicate flowers makes for a charming composition. -Cavalier King Charles Spaniels are known for their gentle, affectionate -nature, and this little one certainly seems to embody those traits with -its endearing look. - -
- -- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` -- content: - `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What direction is the puppy facing?') -``` - -The puppy is facing towards the left side of the image. Its head is -positioned so we can see its right side profile, though it appears to be -looking slightly towards the camera, giving us a good view of its -distinctive brown and white facial markings and one of its dark eyes. -The puppy is lying down with its white chest/front visible against the -green grass. - -
- -- id: `msg_01AeR9eWjbxa788YF97iErtN` -- content: - `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What color is it?') -``` - -The puppy has a classic Cavalier King Charles Spaniel coat with a rich -chestnut brown (sometimes called Blenheim) coloring on its ears and -patches on its face, combined with a bright white base color. The white -is particularly prominent on its face (creating a distinctive blaze down -the center) and chest area. This brown and white combination is one of -the most recognizable color patterns for the breed. - -
- -- id: `msg_01R91AqXG7pLc8hK24F5mc7x` -- content: - `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Note that the image is passed in again for every input in the dialog, so -that number of input tokens increases quickly with this kind of chat. -(For large images, using prompt caching might be a good idea.) - -``` python -chat.use -``` - - In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964 - -## Other model providers - -You can also use 3rd party providers of Anthropic models, as shown here. - -### Amazon Bedrock - -These are the models available through Bedrock: - -``` python -models_aws -``` - - ['anthropic.claude-3-opus-20240229-v1:0', - 'anthropic.claude-3-5-sonnet-20241022-v2:0', - 'anthropic.claude-3-sonnet-20240229-v1:0', - 'anthropic.claude-3-haiku-20240307-v1:0'] - -To use them, call `AnthropicBedrock` with your access details, and pass -that to [`Client`](https://claudette.answer.ai/core.html#client): - -``` python -from anthropic import AnthropicBedrock -``` - -``` python -ab = AnthropicBedrock( - aws_access_key=os.environ['AWS_ACCESS_KEY'], - aws_secret_key=os.environ['AWS_SECRET_KEY'], -) -client = Client(models_aws[-1], ab) -``` - -Now create your [`Chat`](https://claudette.answer.ai/core.html#chat) -object passing this client to the `cli` parameter – and from then on, -everything is identical to the previous examples. - -``` python -chat = Chat(cli=client) -chat("I'm Jeremy") -``` - -It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
-
-### Google Vertex
-
-These are the models available through Vertex:
-
-``` python
-models_goog
-```
-
-    ['claude-3-opus@20240229',
-     'claude-3-5-sonnet-v2@20241022',
-     'claude-3-sonnet@20240229',
-     'claude-3-haiku@20240307']
-
-To use them, call `AnthropicVertex` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicVertex
-import google.auth
-```
-
-``` python
-project_id = google.auth.default()[1]
-gv = AnthropicVertex(project_id=project_id, region="us-east5")
-client = Client(models_goog[-1], gv)
-```
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-## Extensions
-
-- [Pydantic Structured
-  Output](https://github.com/tom-pollak/claudette-pydantic)
# claudette Module Documentation - -## claudette.asink - -- `class AsyncClient` - - `def __init__(self, model, cli, log)` - Async Anthropic messages client. - - -- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` - Make an async call to Claude. - -- `@delegates() class AsyncChat` - - `def __init__(self, model, cli, **kwargs)` - Anthropic async chat client. - - -## claudette.core - -- `def find_block(r, blk_type)` - Find the first block of type `blk_type` in `r.content`. - -- `def contents(r)` - Helper to get the contents from Claude response `r`. - -- `def usage(inp, out, cache_create, cache_read)` - Slightly more concise version of `Usage`. - -- `@patch def __add__(self, b)` - Add together each of `input_tokens` and `output_tokens` - -- `def mk_msgs(msgs, **kw)` - Helper to set 'assistant' role on alternate messages. - -- `class Client` - - `def __init__(self, model, cli, log)` - Basic Anthropic messages client. - - -- `def mk_tool_choice(choose)` - Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise - -- `def mk_funcres(tuid, res)` - Given tool use id and the tool result, create a tool_result response. - -- `def mk_toolres(r, ns, obj)` - Create a `tool_result` message from response `r`. - -- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` - Make a call to Claude. - -- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` - Return the value of all tool calls (generally used for structured outputs) - -- `class Chat` - - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` - Anthropic chat client. - - - `@property def use(self)` - -- `def img_msg(data, cache)` - Convert image `data` into an encoded `dict` - -- `def text_msg(s, cache)` - Convert `s` to a text message - -- `def mk_msg(content, role, cache, **kw)` - Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message - -## claudette.toolloop - -- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` - Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages -# Tool loop - - - -``` python -import os -# os.environ['ANTHROPIC_LOG'] = 'debug' -``` - -``` python -model = models[-1] -``` - -Anthropic provides an [interesting -example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb) -of using tools to mock up a hypothetical ordering system. We’re going to -take it a step further, and show how we can dramatically simplify the -process, whilst completing more complex tasks. 
-
-We’ll start by defining the same mock customer/order data as in
-Anthropic’s example, plus create an entity relationship between customers
-and orders:
-
-``` python
-orders = {
-    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
-    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
-    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
-
-customers = {
-    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
-               orders=[orders['O1'], orders['O2']]),
-    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
-               orders=[orders['O3']])
-}
-```
-
-We can now define the same functions from the original example – but
-note that we don’t need to manually create the large JSON schema, since
-Claudette handles all that for us automatically from the functions
-directly. We’ll add some extra functionality to update order details
-when cancelling too.
-
-``` python
-def get_customer_info(
-    customer_id:str # ID of the customer
-): # Customer's name, email, phone number, and list of orders
-    "Retrieves a customer's information and their orders based on the customer ID"
-    print(f'- Retrieving customer {customer_id}')
-    return customers.get(customer_id, "Customer not found")
-
-def get_order_details(
-    order_id:str # ID of the order
-): # Order's ID, product name, quantity, price, and order status
-    "Retrieves the details of a specific order based on the order ID"
-    print(f'- Retrieving order {order_id}')
-    return orders.get(order_id, "Order not found")
-
-def cancel_order(
-    order_id:str # ID of the order to cancel
-)->bool: # True if the cancellation is successful
-    "Cancels an order based on the provided order ID"
-    print(f'- Cancelling order {order_id}')
-    if order_id not in orders: return False
-    orders[order_id]['status'] = 'Cancelled'
-    return True
-```
-
-We’re now ready to start our chat.
-
-``` python
-tools = [get_customer_info, get_order_details, cancel_order]
-chat = Chat(model, tools=tools)
-```
-
-We’ll start with the same request as Anthropic showed:
-
-``` python
-r = chat('Can you tell me the email address for customer C1?')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [ToolUseBlock(id='toolu_0168sUZoEUpjzk5Y8WN3q9XL', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-Claude asks us to use a tool. Claudette handles that automatically by
-just calling it again:
-
-``` python
-r = chat()
-contents(r)
-```
-
-    'The email address for customer C1 is john@example.com.'
-
-Let’s consider a more complex case than in the original example – what
-happens if a customer wants to cancel all of their orders?
-
-``` python
-chat = Chat(model, tools=tools)
-r = chat('Please cancel all orders for customer C1 for me.')
-print(r.stop_reason)
-r.content
-```
-
-    - Retrieving customer C1
-    tool_use
-
-    [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
-     ToolUseBlock(id='toolu_01ADr1rEp7NLZ2iKWfLp7vz7', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
-
-This is the start of a multi-stage tool use process. 
Doing it manually
-step by step is inconvenient, so let’s write a function to handle this
-for us:
-
-------------------------------------------------------------------------
-
-source
-
-### Chat.toolloop
-
->      Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
->                     cont_func:Optional[callable]=noop, temp=None,
->                     maxtok=4096, stream=False, prefill='',
->                     tool_choice:Optional[dict]=None)
-
-*Add prompt `pr` to dialog and get a response from Claude, automatically
-following up with `tool_use` messages*
-
-|             | Type     | Default | Details                                                      |
-|-------------|----------|---------|--------------------------------------------------------------|
-| pr          |          |         | Prompt to pass to Claude                                     |
-| max_steps   | int      | 10      | Maximum number of tool requests to loop through              |
-| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)               |
-| cont_func   | Optional | noop    | Function that stops loop if returns False                    |
-| temp        | NoneType | None    | Temperature                                                  |
-| maxtok      | int      | 4096    | Maximum tokens                                               |
-| stream      | bool     | False   | Stream response?                                             |
-| prefill     | str      |         | Optional prefill to pass to Claude as start of its response  |
-| tool_choice | Optional | None    | Optionally force use of some tool                            |
-
-Exported source - -``` python -@patch -@delegates(Chat.__call__) -def toolloop(self:Chat, - pr, # Prompt to pass to Claude - max_steps=10, # Maximum number of tool requests to loop through - trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`) - cont_func:Optional[callable]=noop, # Function that stops loop if returns False - **kwargs): - "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages" - n_msgs = len(self.h) - r = self(pr, **kwargs) - for i in range(max_steps): - if r.stop_reason!='tool_use': break - if trace_func: trace_func(self.h[n_msgs:]); n_msgs = len(self.h) - r = self(**kwargs) - if not (cont_func or noop)(self.h[-2]): break - if trace_func: trace_func(self.h[n_msgs:]) - return r -``` - -
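-
-Note from the source that `trace_func` is handed only the history entries
-added since the previous step, which is why a bare `print` works as a
-tracer. A custom tracer is just as easy – for example, a sketch that
-counts tool calls per step (assuming the content layout shown in the
-traces below):
-
-``` python
-def count_tools(steps):
-    "Print how many tool calls Claude made in this step of the loop."
-    n = sum(1 for m in steps for o in m.get('content', [])
-            if getattr(o, 'type', None) == 'tool_use')
-    print(f'{n} tool call(s) this step')
-```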
- -We’ll start by re-running our previous request - we shouldn’t have to -manually pass back the `tool_use` message any more: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Can you tell me the email address for customer C1?') -r -``` - - - Retrieving customer C1 - -The email address for customer C1 is john@example.com. - -
- -- id: `msg_01Fm2CY76dNeWief4kUW6r71` -- content: - `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 720, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Let’s see if it can handle the multi-stage process now – we’ll add -`trace_func=print` to see each stage of the process: - -``` python -chat = Chat(model, tools=tools) -r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print) -r -``` - - - Retrieving customer C1 - [{'role': 'user', 'content': [{'type': 'text', 'text': 'Please cancel all orders for customer C1 for me.'}]}, {'role': 'assistant', 'content': [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01SvivKytaRHEdKixEY9dUDz', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01SvivKytaRHEdKixEY9dUDz', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}] - - Cancelling order O1 - [{'role': 'assistant', 'content': [TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel each of these orders:", type='text'), ToolUseBlock(id='toolu_01DoGVUPVBeDYERMePHDzUoT', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DoGVUPVBeDYERMePHDzUoT', 'content': 'True'}]}] - - Cancelling order O2 - [{'role': 'assistant', 'content': [ToolUseBlock(id='toolu_01XNwS35yY88Mvx4B3QqDeXX', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01XNwS35yY88Mvx4B3QqDeXX', 'content': 'True'}]}] - [{'role': 'assistant', 'content': [TextBlock(text="I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", type='text')]}] - -I’ve successfully cancelled both orders O1 and O2 for customer C1. -Please let me know if you need anything else! - -
- -- id: `msg_01K1QpUZ8nrBVUHYTrH5QjSF` -- content: - `[{'text': "I've successfully cancelled both orders O1 and O2 for customer C1. Please let me know if you need anything else!", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 921, 'output_tokens': 32, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -OK Claude thinks the orders were cancelled – let’s check one: - -``` python -chat.toolloop('What is the status of order O2?') -``` - - - Retrieving order O2 - -The status of order O2 is now ‘Cancelled’ since I successfully cancelled -that order earlier. - -
- -- id: `msg_01XcXpFDwoZ3u1bFDf5mY8x1` -- content: - `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1092, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-## Code interpreter
-
-Here is an example of using `toolloop` to implement a simple code
-interpreter with additional tools.
-
-``` python
-from toolslm.shell import get_shell
-from fastcore.meta import delegates
-import traceback
-```
-
-``` python
-@delegates()
-class CodeChat(Chat):
-    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
-    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
-        super().__init__(model=model, **kwargs)
-        self.ask = ask
-        self.tools.append(self.run_cell)
-        self.shell = get_shell()
-        self.shell.run_cell('import '+self.imps)
-```
-
-We have one additional parameter when creating a `CodeChat` beyond what
-we pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is
-`ask` – if that’s `True`, we’ll prompt the user before running code.
-
-``` python
-@patch
-def run_cell(
-    self:CodeChat,
-    code:str,   # Code to execute in persistent IPython session
-): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
-    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
-    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
-    if self.ask and input(confirm): return '#DECLINED#'
-    try: res = self.shell.run_cell(code)
-    except Exception as e: return traceback.format_exc()
-    return res.stdout if res.result is None else res.result
-```
-
-We just pass along requests to run code to the shell’s implementation.
-Claude often prints results instead of just using the last expression,
-so we capture stdout in those cases.
-
-``` python
-sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
-Don't do complex calculations yourself -- use code for them.
-The following modules are pre-imported for `run_cell` automatically:
-
-{CodeChat.imps}
-
-Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.
-
-If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
-```
-
-``` python
-def get_user(ignored:str='' # Unused parameter
-    ): # Username of current user
-    "Get the username of the user running this session"
-    print("Looking up username")
-    return 'Jeremy'
-```
-
-In order to test out multi-stage tool use, we create a mock function
-that Claude can call to get the current username.
-
-``` python
-model = models[1]
-```
-
-``` python
-chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
-```
-
-Claude gets confused sometimes about how tools work, so we use examples
-to remind it:
-
-``` python
-chat.h = [
-    'Calculate the square root of `10332`', 'math.sqrt(10332)',
-    '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
-]
-```
-
-Providing a callable to toolloop’s `trace_func` lets us print out
-information during the loop:
-
-``` python
-def _show_cts(h):
-    for r in h:
-        for o in r.get('content'):
-            if hasattr(o,'text'): print(o.text)
-            nm = getattr(o, 'name', None)
-            if nm=='run_cell': print(o.input['code'])
-            elif nm: print(f'{o.name}({o.input})')
-```
-
-…and toolloop’s `cont_func` callable lets us provide a function which,
-if it returns `False`, stops the loop:
-
-``` python
-def _cont_decline(c):
-    return nested_idx(c, 'content', 'content') != '#DECLINED#'
-```
-
-Now we can try our code interpreter. We start by asking for a function
-to be created, which we’ll use in the next prompt to test that the
-interpreter is persistent.
-
-``` python
-pr = '''Create a 1-line function `checksum` for a string `s`,
-that multiplies together the ascii values of each character in `s` using `reduce`.'''
-chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
-```
-
-    Press Enter to execute, or enter "n" to skip?
-    ```
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    ```
-
-    Create a 1-line function `checksum` for a string `s`,
-    that multiplies together the ascii values of each character in `s` using `reduce`.
-    Let me help you create that function using `reduce` and `functools`.
-    checksum = lambda s: functools.reduce(lambda x, y: x * ord(y), s, 1)
-    The function has been created. Let me explain how it works:
-    1. It takes a string `s` as input
-    2. Uses `functools.reduce` to multiply together all ASCII values
-    3. `ord(y)` gets the ASCII value of each character
-    4. The initial value is 1 (the third parameter to reduce)
-    5. The lambda function multiplies the accumulator (x) with each new ASCII value
-
-    You can test it with any string. For example, you could try `checksum("hello")` to see it in action.
-
-The function has been created. Let me explain how it works: 1. It takes
-a string `s` as input 2. Uses `functools.reduce` to multiply together
-all ASCII values 3. `ord(y)` gets the ASCII value of each character 4.
-The initial value is 1 (the third parameter to reduce) 5. The lambda
-function multiplies the accumulator (x) with each new ASCII value
-
-You can test it with any string. For example, you could try
-`checksum("hello")` to see it in action.
-
- -- id: `msg_011pcGY9LbYqvRSfDPgCqUkT` -- content: - `[{'text': 'The function has been created. Let me explain how it works:\n1. It takes a string`s`as input\n2. Uses`functools.reduce`to multiply together all ASCII values\n3.`ord(y)`gets the ASCII value of each character\n4. The initial value is 1 (the third parameter to reduce)\n5. The lambda function multiplies the accumulator (x) with each new ASCII value\n\nYou can test it with any string. For example, you could try`checksum(“hello”)`to see it in action.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 824, 'output_tokens': 125, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -By asking for a calculation to be done on the username, we force it to -use multiple steps: - -``` python -pr = 'Use it to get the checksum of the username of this session.' -chat.toolloop(pr, trace_func=_show_cts) -``` - - Looking up username - Use it to get the checksum of the username of this session. - I'll first get the username using `get_user` and then apply our `checksum` function to it. - get_user({'ignored': ''}) - Press Enter to execute, or enter "n" to skip? - ``` - print(checksum("Jeremy")) - ``` - - Now I'll calculate the checksum of "Jeremy": - print(checksum("Jeremy")) - The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy". - -The checksum of the username “Jeremy” is 1134987783204. This was -calculated by multiplying together the ASCII values of each character in -“Jeremy”. - -
- -- id: `msg_01UXvtcLzzykZpnQUT35v4uD` -- content: - `[{'text': 'The checksum of the username "Jeremy" is 1134987783204. This was calculated by multiplying together the ASCII values of each character in "Jeremy".', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 1143, 'output_tokens': 38, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
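-
-As a quick sanity check, the same computation in plain Python gives
-exactly the number Claude reported:
-
-``` python
-import functools
-functools.reduce(lambda x, y: x * ord(y), 'Jeremy', 1)  # 1134987783204
-```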
# The async version - - - -## Setup - -## Async SDK - -``` python -model = models[1] -cli = AsyncAnthropic() -``` - -``` python -m = {'role': 'user', 'content': "I'm Jeremy"} -r = await cli.messages.create(messages=[m], model=model, max_tokens=100) -r -``` - -Hello Jeremy! It’s nice to meet you. How can I assist you today? Is -there anything specific you’d like to talk about or any questions you -have? - -
- -- id: `msg_019gsEQs5dqb3kgwNJbTH27M` -- content: - `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 36}` - -
- ------------------------------------------------------------------------- - -source - -### AsyncClient - -> AsyncClient (model, cli=None, log=False) - -*Async Anthropic messages client.* - -
-Exported source - -``` python -class AsyncClient(Client): - def __init__(self, model, cli=None, log=False): - "Async Anthropic messages client." - super().__init__(model,cli,log) - if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}) -``` - -
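-
-As with the synchronous `Client`, you can pass your own SDK client via
-`cli` – for instance an async provider client. A minimal sketch (the
-`AsyncAnthropicBedrock` class comes from the `anthropic` SDK; we assume
-here that AWS credentials are picked up from the environment):
-
-``` python
-from anthropic import AsyncAnthropicBedrock
-
-# Sketch: any compatible async SDK client can be passed as `cli`
-ab = AsyncAnthropicBedrock()  # assumes AWS credentials in the environment
-c_aws = AsyncClient(models_aws[-1], ab)
-```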
- -``` python -c = AsyncClient(model) -``` - -``` python -c._r(r) -c.use -``` - - In: 10; Out: 36; Total: 46 - ------------------------------------------------------------------------- - -source - -### AsyncClient.\_\_call\_\_ - -> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='', -> stream:bool=False, stop=None, cli=None, log=False) - -*Make an async call to Claude.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|         | **Type** | **Default** | **Details**                                                 |
-|---------|----------|-------------|-------------------------------------------------------------|
-| msgs    | list     |             | List of messages in the dialog                              |
-| sp      | str      |             | The system prompt                                           |
-| temp    | int      | 0           | Temperature                                                 |
-| maxtok  | int      | 4096        | Maximum tokens                                              |
-| prefill | str      |             | Optional prefill to pass to Claude as start of its response |
-| stream  | bool     | False       | Stream response?                                            |
-| stop    | NoneType | None        | Stop sequence                                               |
-| cli     | NoneType | None        |                                                             |
-| log     | bool     | False       |                                                             |
-
-Exported source - -``` python -@patch -async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs): - async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s: - if prefill: yield prefill - async for o in s.text_stream: yield o - self._log(await s.get_final_message(), prefill, msgs, kwargs) -``` - -
-
-Exported source - -``` python -@patch -@delegates(Client) -async def __call__(self:AsyncClient, - msgs:list, # List of messages in the dialog - sp='', # The system prompt - temp=0, # Temperature - maxtok=4096, # Maximum tokens - prefill='', # Optional prefill to pass to Claude as start of its response - stream:bool=False, # Stream response? - stop=None, # Stop sequence - **kwargs): - "Make an async call to Claude." - msgs = self._precall(msgs, prefill, stop, kwargs) - if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - res = await self.c.messages.create( - model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs) - return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs) -``` - -
- -``` python -c = AsyncClient(model, log=True) -c.use -``` - - In: 0; Out: 0; Total: 0 - -``` python -c.model = models[1] -await c('Hi') -``` - -Hello! How can I assist you today? Feel free to ask any questions or let -me know if you need help with anything. - -
- -- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo` -- content: - `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -c.use -``` - - In: 8; Out: 29; Total: 37 - -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await c(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon meaning of life. Many philosophers -and religions have proposed different answers, but it remains an open -question that individuals must grapple with for themselves. - -
- -- id: `msg_01KAJbCneA2oCRPVm9EkyDXF` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -async for o in (await c('Hi', stream=True)): print(o, end='') -``` - - Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything. - -``` python -c.use -``` - - In: 40; Out: 109; Total: 149 - -``` python -async for o in (await c(q, prefill=pref, stream=True)): print(o, end='') -``` - - According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves. - -``` python -c.use -``` - - In: 64; Out: 160; Total: 224 - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -sp = "You are a summing expert." -``` - -``` python -tools=[get_schema(sums)] -choice = mk_tool_choice('sums') -``` - -``` python -tools = [get_schema(sums)] -msgs = mk_msgs(pr) -r = await c(msgs, sp=sp, tools=tools, tool_choice=choice) -tr = mk_toolres(r, ns=globals()) -msgs += tr -contents(await c(msgs, sp=sp, tools=tools)) -``` - - Finding the sum of 604542 and 6458932 - - 'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?' - -## AsyncChat - ------------------------------------------------------------------------- - -source - -### AsyncChat - -> AsyncChat (model:Optional[str]=None, -> cli:Optional[claudette.core.Client]=None, sp='', -> tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None) - -*Anthropic async chat client.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|         | **Type** | **Default** | **Details**                                    |
-|---------|----------|-------------|------------------------------------------------|
-| model   | Optional | None        | Model to use (leave empty if passing `cli`)    |
-| cli     | Optional | None        | Client to use (leave empty if passing `model`) |
-| sp      | str      |             |                                                |
-| tools   | Optional | None        |                                                |
-| temp    | int      | 0           |                                                |
-| cont_pr | Optional | None        |                                                |
-
-Exported source - -``` python -@delegates() -class AsyncChat(Chat): - def __init__(self, - model:Optional[str]=None, # Model to use (leave empty if passing `cli`) - cli:Optional[Client]=None, # Client to use (leave empty if passing `model`) - **kwargs): - "Anthropic async chat client." - super().__init__(model, cli, **kwargs) - if not cli: self.c = AsyncClient(model) -``` - -
- -``` python -sp = "Never mention what tools you use." -chat = AsyncChat(model, sp=sp) -chat.c.use, chat.h -``` - - (In: 0; Out: 0; Total: 0, []) - ------------------------------------------------------------------------- - -source - -### AsyncChat.\_\_call\_\_ - -> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False, -> prefill='', **kw) - -*Call self as a function.* - - ------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-|         | **Type** | **Default** | **Details**                                                 |
-|---------|----------|-------------|-------------------------------------------------------------|
-| pr      | NoneType | None        | Prompt / message                                            |
-| temp    | int      | 0           | Temperature                                                 |
-| maxtok  | int      | 4096        | Maximum tokens                                              |
-| stream  | bool     | False       | Stream response?                                            |
-| prefill | str      |             | Optional prefill to pass to Claude as start of its response |
-| kw      |          |             |                                                             |
-
-Exported source - -``` python -@patch -async def _stream(self:AsyncChat, res): - async for o in res: yield o - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) -``` - -
-
-Exported source
-
-``` python
-@patch
-async def _append_pr(self:AsyncChat, pr=None):
-    prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history
-    if pr and prev_role == 'user': await self() # Roles must alternate, so get Claude's reply before appending a new user prompt
-    self._post_pr(pr, prev_role)
-```
-
-
-Exported source - -``` python -@patch -async def __call__(self:AsyncChat, - pr=None, # Prompt / message - temp=0, # Temperature - maxtok=4096, # Maximum tokens - stream=False, # Stream response? - prefill='', # Optional prefill to pass to Claude as start of its response - **kw): - await self._append_pr(pr) - if self.tools: kw['tools'] = [get_schema(o) for o in self.tools] - res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw) - if stream: return self._stream(res) - self.h += mk_toolres(self.c.result, ns=self.tools, obj=self) - return res -``` - -
- -``` python -await chat("I'm Jeremy") -await chat("What's my name?") -``` - -Your name is Jeremy, as you mentioned in your previous message. - -
- -- id: `msg_01NMugMXWpDP9iuTXeLkHarn` -- content: - `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -q = "Concisely, what is the meaning of life?" -pref = 'According to Douglas Adams,' -await chat(q, prefill=pref) -``` - -According to Douglas Adams, the meaning of life is 42. More seriously, -there’s no universally agreed upon answer. Common philosophical -perspectives include: - -1. Finding personal fulfillment -2. Serving others -3. Pursuing happiness -4. Creating meaning through our choices -5. Experiencing and appreciating existence - -Ultimately, many believe each individual must determine their own life’s -meaning. - -
- -- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu` -- content: - `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat = AsyncChat(model, sp=sp) -async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='') -``` - - Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with? - -``` python -pr = f"What is {a}+{b}?" -chat = AsyncChat(model, sp=sp, tools=[sums]) -r = await chat(pr) -r -``` - - Finding the sum of 604542 and 6458932 - -To answer this question, I can use the “sums” function to add these two -numbers together. Let me do that for you. - -
- -- id: `msg_015z1rffSWFxvj7rSpzc43ZE` -- content: - `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -await chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_018KAsE2YGiXWjUJkLPrXpb2` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
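-
-Note that `AsyncChat` appended the `tool_use` and `tool_result` messages
-to `chat.h` automatically (via the `mk_toolres` call in `__call__`
-above), which is why the bare `await chat()` could pick up where the
-tool call left off. A quick way to see this (a sketch – history items
-may be plain dicts or SDK objects, hence the guard):
-
-``` python
-# Print the role of each message in the dialog history: user prompt,
-# assistant tool_use, user tool_result, assistant answer.
-for m in chat.h:
-    print(m['role'] if isinstance(m, dict) else m.role)
-```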
- -``` python -fn = Path('samples/puppy.jpg') -img = fn.read_bytes() -``` - -``` python -q = "In brief, what color flowers are in this image?" -msg = mk_msg([img_msg(img), text_msg(q)]) -await c([msg]) -``` - -The flowers in this image are purple. They appear to be small, -daisy-like flowers, possibly asters or some type of purple daisy, -blooming in the background behind the adorable puppy in the foreground. - -
- -- id: `msg_017qgZggLjUY915mWbWCkb9X` -- content: - `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20240620` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
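-
-`AsyncChat` accepts the same list-of-parts prompts as the synchronous
-`Chat`, so you don’t need to build the message by hand. A minimal
-sketch (reusing `img` from above; not executed here):
-
-``` python
-achat = AsyncChat(model)
-# Mixed image+text prompt, mirroring the synchronous Chat API
-r = await achat([img, "In brief, what color flowers are in this image?"])
-```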
diff --git a/llm/llms-ctx.txt b/llm/llms-ctx.txt
deleted file mode 100644
index 732c1dc..0000000
--- a/llm/llms-ctx.txt
+++ /dev/null
@@ -1,942 +0,0 @@
-Things to remember when using Claudette:
-
-- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
-- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
-- The library provides both synchronous and asynchronous interfaces
-- Use `Chat()` for maintaining conversation state and handling tool interactions
-- When using tools, the library automatically handles the request/response loop
-- Image support is built in but only available on compatible models (not Haiku)# claudette
-
-
-
-> **NB**: If you are reading this in GitHub’s readme, we recommend you
-> instead read the much more nicely formatted [documentation
-> format](https://claudette.answer.ai/) of this tutorial.
-
-*Claudette* is a wrapper for Anthropic’s [Python
-SDK](https://github.com/anthropics/anthropic-sdk-python).
-
-The SDK works well, but it is quite low level – it leaves the developer
-to do a lot of stuff manually. That’s a lot of extra work and
-boilerplate! Claudette automates pretty much everything that can be
-automated, whilst providing full control. Amongst the features provided:
-
-- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
-  creates stateful dialogs
-- Support for *prefill*, which tells Claude what to use as the first few
-  words of its response
-- Convenient image support
-- Simple and convenient support for Claude’s new Tool Use API.
-
-You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
-key provided to you by Anthropic in order to use this library.
-
-Note that this library is the first ever “literate nbdev” project. That
-means that the actual source code for the library is a rendered Jupyter
-Notebook which includes callout notes and tips, HTML tables and images,
-detailed explanations, and teaches *how* and *why* the code is written
-the way it is. Even if you’ve never used the Anthropic Python SDK or
-Claude API before, you should be able to read the source code. Click
-[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
-or clone the git repo and execute the notebook yourself to see every
-step of the creation process in action. The tutorial below includes
-links to API details which will take you to relevant parts of the
-source. The reason this project is a new kind of literate program is
-because we take seriously Knuth’s call to action, that we have a “*moral
-commitment*” to never write an “*illiterate program*” – and so we have a
-commitment to making literate programming an easy and pleasant
-experience. (For more on this, see [this
-talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
-
-> “*Let us change our traditional attitude to the construction of
-> programs: Instead of imagining that our main task is to instruct a
-> **computer** what to do, let us concentrate rather on explaining to
-> **human beings** what we want a computer to do.*” Donald E. Knuth,
-> [Literate
-> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
-> (1984)
-
-## Install
-
-``` sh
-pip install claudette
-```
-
-## Getting started
-
-Anthropic’s Python SDK will automatically be installed with Claudette,
-if you don’t already have it.
- -``` python -import os -# os.environ['ANTHROPIC_LOG'] = 'debug' -``` - -To print every HTTP request and response in full, uncomment the above -line. - -``` python -from claudette import * -``` - -Claudette only exports the symbols that are needed to use the library, -so you can use `import *` to import them. Alternatively, just use: - -``` python -import claudette -``` - -…and then add the prefix `claudette.` to any usages of the module. - -Claudette provides `models`, which is a list of models currently -available from the SDK. - -``` python -models -``` - - ['claude-3-opus-20240229', - 'claude-3-5-sonnet-20241022', - 'claude-3-haiku-20240307'] - -For these examples, we’ll use Sonnet 3.5, since it’s awesome! - -``` python -model = models[1] -``` - -## Chat - -The main interface to Claudette is the -[`Chat`](https://claudette.answer.ai/core.html#chat) class, which -provides a stateful interface to Claude: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -chat("I'm Jeremy") -``` - -Hello Jeremy, nice to meet you. - -
- -- id: `msg_015oK9jEcra3TEKHUGYULjWB` -- content: - `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01Si8sTFJe8d8vq7enanbAwj` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -r = chat("What's my name?") -r -``` - -Your name is Jeremy. - -
- -- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx` -- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -As you see above, displaying the results of a call in a notebook shows -just the message contents, with the other details hidden behind a -collapsible section. Alternatively you can `print` the details: - -``` python -print(r) -``` - - Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62) - -Claude supports adding an extra `assistant` message at the end, which -contains the *prefill* – i.e. the text we want Claude to assume the -response starts with. Let’s try it out: - -``` python -chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,') -``` - -According to Douglas Adams,42. Philosophically, it’s to find personal -meaning through relationships, purpose, and experiences. - -
- -- id: `msg_01R9RvMdFwea9iRX5uYSSHG7` -- content: - `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -You can add `stream=True` to stream the results as soon as they arrive -(although you will only see the gradual generation if you execute the -notebook yourself, of course!) - -``` python -for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True): - print(o, end='') -``` - - It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. - -### Async - -Alternatively, you can use -[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or -[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for -the async versions, e.g: - -``` python -chat = AsyncChat(model) -await chat("I'm Jeremy") -``` - -Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by -Anthropic. How can I help you today? - -
- -- id: `msg_016Q8cdc3sPWBS8eXcNj841L` -- content: - `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Remember to use `async for` when streaming in this case: - -``` python -async for o in await chat("Concisely, what is the meaning of life?", - prefill='According to Douglas Adams,', stream=True): - print(o, end='') -``` - - According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful. - -## Prompt caching - -If you use `mk_msg(msg, cache=True)`, then the message is cached using -Claude’s [prompt -caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) -feature. For instance, here we use caching when asking about Claudette’s -readme file: - -``` python -chat = Chat(model, sp="""You are a helpful and concise assistant.""") -``` - -``` python -nbtxt = Path('README.txt').read_text() -msg = f''' -{nbtxt} - -In brief, what is the purpose of this project based on the readme?''' -r = chat(mk_msg(msg, cache=True)) -r -``` - -Claudette is a high-level wrapper for Anthropic’s Python SDK that -automates common tasks and provides additional functionality. Its main -features include: - -1. A Chat class for stateful dialogs -2. Support for prefill (controlling Claude’s initial response words) -3. Convenient image handling -4. Simple tool use API integration -5. Support for multiple model providers (Anthropic, AWS Bedrock, Google - Vertex) - -The project is notable for being the first “literate nbdev” project, -meaning its source code is written as a detailed, readable Jupyter -Notebook that includes explanations, examples, and teaching material -alongside the functional code. - -The goal is to simplify working with Claude’s API while maintaining full -control, reducing boilerplate code and manual work that would otherwise -be needed with the base SDK. - -
- -- id: `msg_014rVQnYoZXZuyWUCMELG1QW` -- content: - `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}` - -
-
-The response records that a cache has been created using these input
-tokens:
-
-``` python
-print(r.usage)
-```
-
-    Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
-
-We can now ask a follow-up question in this chat:
-
-``` python
-r = chat('How does it make tool use more ergonomic?')
-r
-```
-
-According to the README, Claudette makes tool use more ergonomic in
-several ways:
-
-1. It uses docments to make Python function definitions more
-   user-friendly - each parameter and return value should have a type
-   and description
-
-2. It handles the tool calling process automatically - when Claude
-   returns a tool_use message, Claudette manages calling the tool with
-   the provided parameters behind the scenes
-
-3. It provides a `toolloop` method that can handle multiple tool calls
-   in a single step to solve more complex problems
-
-4. It allows you to pass a list of tools to the Chat constructor and
-   optionally force Claude to always use a specific tool via
-   `tool_choice`
-
-Here’s a simple example from the README:
-
-``` python
-def sums(
-    a:int, # First thing to sum
-    b:int=1 # Second thing to sum
-) -> int: # The sum of the inputs
-    "Adds a + b."
-    print(f"Finding the sum of {a} and {b}")
-    return a + b
-
-chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
-```
-
-This makes it much simpler compared to manually handling all the tool
-use logic that would be required with the base SDK.
-
-
-- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU`
-- content:
-  `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a `toolloop` method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via `tool_choice`\n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n    a:int, # First thing to sum\n    b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n    "Adds a + b."\n    print(f"Finding the sum of {a} and {b}")\n    return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}]`
-- model: `claude-3-5-sonnet-20241022`
-- role: `assistant`
-- stop_reason: `end_turn`
-- stop_sequence: `None`
-- type: `message`
-- usage:
-  `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}`
-
- -We can see that this only used ~200 regular input tokens – the 7000+ -context tokens have been read from cache. - -``` python -print(r.usage) -``` - - Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205) - -``` python -chat.use -``` - - In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070 - -## Tool use - -[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude -use external tools. - -We use [docments](https://fastcore.fast.ai/docments.html) to make -defining Python functions as ergonomic as possible. Each parameter (and -the return value) should have a type, and a docments comment with the -description of what it is. As an example we’ll write a simple function -that adds numbers together, and will tell us when it’s being called: - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -Sometimes Claude will say something like “according to the `sums` tool -the answer is” – generally we’d rather it just tells the user the -answer, so we can use a system prompt to help with this: - -``` python -sp = "Never mention what tools you use." -``` - -We’ll get Claude to add up some long numbers: - -``` python -a,b = 604542,6458932 -pr = f"What is {a}+{b}?" -pr -``` - - 'What is 604542+6458932?' - -To use tools, pass a list of them to -[`Chat`](https://claudette.answer.ai/core.html#chat): - -``` python -chat = Chat(model, sp=sp, tools=[sums]) -``` - -To force Claude to always answer using a tool, set `tool_choice` to that -function name. When Claude needs to use a tool, it doesn’t return the -answer, but instead returns a `tool_use` message, which means we have to -call the named tool with the provided parameters. - -``` python -r = chat(pr, tool_choice='sums') -r -``` - - Finding the sum of 604542 and 6458932 - -ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542, -‘b’: 6458932}, name=‘sums’, type=‘tool_use’) - -
- -- id: `msg_014xrPyotyiBmFSctkp1LZHk` -- content: - `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `tool_use` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -Claudette handles all that for us – we just call it again, and it all -happens automatically: - -``` python -chat() -``` - -The sum of 604542 and 6458932 is 7063474. - -
- -- id: `msg_01151puJxG8Fa6k6QSmzwKQA` -- content: - `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
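-
-You only need `tool_choice` on the turn where you want to force a tool.
-Per `mk_tool_choice` in the module documentation at the end of this
-file, passing `True` maps to `'any'` (use some tool) and `None` to
-`'auto'`. A sketch – treat the exact behaviour as an assumption:
-
-``` python
-# Force Claude to use *some* tool on this turn ('any'), per mk_tool_choice
-r = chat(pr, tool_choice=True)
-```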
-
-You can see how many tokens have been used at any time by checking the
-`use` property. Note that (as of May 2024) tool use in Claude uses a
-*lot* of tokens, since it automatically adds a large system prompt.
-
-``` python
-chat.use
-```
-
-    In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
-
-We can do everything needed to use tools in a single step, by using
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
-This can even call multiple tools as needed to solve a problem. For
-example, let’s define a tool to handle multiplication:
-
-``` python
-def mults(
-    a:int, # First thing to multiply
-    b:int=1 # Second thing to multiply
-) -> int: # The product of the inputs
-    "Multiplies a * b."
-    print(f"Finding the product of {a} and {b}")
-    return a * b
-```
-
-Now with a single call we can calculate `(a+b)*2` – by passing a
-`trace_func` we can see each response from Claude in the process:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-pr = f'Calculate ({a}+{b})*2'
-pr
-```
-
-    'Calculate (604542+6458932)*2'
-
-``` python
-chat.toolloop(pr, trace_func=print)
-```
-
-    Finding the sum of 604542 and 6458932
-    [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
-    Finding the product of 7063474 and 2
-    [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
-    [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
-
-The final result is 14,126,948.
-
- -- id: `msg_0162teyBcJHriUzZXMPz4r5d` -- content: - `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
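-
-[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop)
-also accepts a `max_steps` argument (see the module documentation at the
-end of this file) to bound the number of tool-calling rounds. A minimal
-sketch – the value and the behaviour when the cap is hit are
-illustrative assumptions:
-
-``` python
-chat = Chat(model, sp=sp, tools=[sums,mults])
-# Allow at most 3 assistant turns before the loop stops
-chat.toolloop(f'Calculate ({a}+{b})*2', max_steps=3)
-```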
- -## Structured data - -If you just want the immediate result from a single tool, use -[`Client.structured`](https://claudette.answer.ai/core.html#client.structured). - -``` python -cli = Client(model) -``` - -``` python -def sums( - a:int, # First thing to sum - b:int=1 # Second thing to sum -) -> int: # The sum of the inputs - "Adds a + b." - print(f"Finding the sum of {a} and {b}") - return a + b -``` - -``` python -cli.structured("What is 604542+6458932", sums) -``` - - Finding the sum of 604542 and 6458932 - - [7063474] - -This is particularly useful for getting back structured information, -e.g: - -``` python -class President: - "Information about a president of the United States" - def __init__(self, - first:str, # first name - last:str, # last name - spouse:str, # name of spouse - years_in_office:str, # format: "{start_year}-{end_year}" - birthplace:str, # name of city - birth_year:int # year of birth, `0` if unknown - ): - assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`" - store_attr() - - __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year') -``` - -``` python -cli.structured("Provide key information about the 3rd President of the United States", President) -``` - - [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)] - -## Images - -Claude can handle image data as well. As everyone knows, when testing -image APIs you have to use a cute puppy. - -``` python -fn = Path('samples/puppy.jpg') -display.Image(filename=fn, width=200) -``` - - - -We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object -as before: - -``` python -chat = Chat(model) -``` - -Claudette expects images as a list of bytes, so we read in the file: - -``` python -img = fn.read_bytes() -``` - -Prompts to Claudette can be lists, containing text, images, or both, eg: - -``` python -chat([img, "In brief, what color flowers are in this image?"]) -``` - -In this adorable puppy photo, there are purple/lavender colored flowers -(appears to be asters or similar daisy-like flowers) in the background. - -
- -- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT` -- content: - `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -The image is included as input tokens. - -``` python -chat.use -``` - - In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147 - -Alternatively, Claudette supports creating a multi-stage chat with -separate image and text prompts. For instance, you can pass just the -image as the initial prompt (in which case Claude will make some general -comments about what it sees), and then follow up with questions in -additional prompts: - -``` python -chat = Chat(model) -chat(img) -``` - -What an adorable Cavalier King Charles Spaniel puppy! The photo captures -the classic brown and white coloring of the breed, with those soulful -dark eyes that are so characteristic. The puppy is lying in the grass, -and there are lovely purple asters blooming in the background, creating -a beautiful natural setting. The combination of the puppy’s sweet -expression and the delicate flowers makes for a charming composition. -Cavalier King Charles Spaniels are known for their gentle, affectionate -nature, and this little one certainly seems to embody those traits with -its endearing look. - -
- -- id: `msg_01Ciyymq44uwp2iYwRZdKWNN` -- content: - `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What direction is the puppy facing?') -``` - -The puppy is facing towards the left side of the image. Its head is -positioned so we can see its right side profile, though it appears to be -looking slightly towards the camera, giving us a good view of its -distinctive brown and white facial markings and one of its dark eyes. -The puppy is lying down with its white chest/front visible against the -green grass. - -
- -- id: `msg_01AeR9eWjbxa788YF97iErtN` -- content: - `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
- -``` python -chat('What color is it?') -``` - -The puppy has a classic Cavalier King Charles Spaniel coat with a rich -chestnut brown (sometimes called Blenheim) coloring on its ears and -patches on its face, combined with a bright white base color. The white -is particularly prominent on its face (creating a distinctive blaze down -the center) and chest area. This brown and white combination is one of -the most recognizable color patterns for the breed. - -
- -- id: `msg_01R91AqXG7pLc8hK24F5mc7x` -- content: - `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]` -- model: `claude-3-5-sonnet-20241022` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: - `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}` - -
-
-Note that the image is passed in again for every input in the dialog, so
-the number of input tokens increases quickly with this kind of chat.
-(For large images, using prompt caching might be a good idea.)
-
-``` python
-chat.use
-```
-
-    In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964
-
-## Other model providers
-
-You can also use 3rd party providers of Anthropic models, as shown here.
-
-### Amazon Bedrock
-
-These are the models available through Bedrock:
-
-``` python
-models_aws
-```
-
-    ['anthropic.claude-3-opus-20240229-v1:0',
-     'anthropic.claude-3-5-sonnet-20241022-v2:0',
-     'anthropic.claude-3-sonnet-20240229-v1:0',
-     'anthropic.claude-3-haiku-20240307-v1:0']
-
-To use them, call `AnthropicBedrock` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicBedrock
-```
-
-``` python
-ab = AnthropicBedrock(
-    aws_access_key=os.environ['AWS_ACCESS_KEY'],
-    aws_secret_key=os.environ['AWS_SECRET_KEY'],
-)
-client = Client(models_aws[-1], ab)
-```
-
-Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
-object passing this client to the `cli` parameter – and from then on,
-everything is identical to the previous examples.
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
-Anthropic. How can I help you today?
-
- -- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq` -- content: - `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]` -- model: `claude-3-haiku-20240307` -- role: `assistant` -- stop_reason: `end_turn` -- stop_sequence: `None` -- type: `message` -- usage: `{'input_tokens': 10, 'output_tokens': 32}` - -
-
-### Google Vertex
-
-These are the models available through Vertex:
-
-``` python
-models_goog
-```
-
-    ['claude-3-opus@20240229',
-     'claude-3-5-sonnet-v2@20241022',
-     'claude-3-sonnet@20240229',
-     'claude-3-haiku@20240307']
-
-To use them, call `AnthropicVertex` with your access details, and pass
-that to [`Client`](https://claudette.answer.ai/core.html#client):
-
-``` python
-from anthropic import AnthropicVertex
-import google.auth
-```
-
-``` python
-project_id = google.auth.default()[1]
-gv = AnthropicVertex(project_id=project_id, region="us-east5")
-client = Client(models_goog[-1], gv)
-```
-
-``` python
-chat = Chat(cli=client)
-chat("I'm Jeremy")
-```
-
-## Extensions
-
-- [Pydantic Structured
-  Output](https://github.com/tom-pollak/claudette-pydantic)
# claudette Module Documentation - -## claudette.asink - -- `class AsyncClient` - - `def __init__(self, model, cli, log)` - Async Anthropic messages client. - - -- `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)` - Make an async call to Claude. - -- `@delegates() class AsyncChat` - - `def __init__(self, model, cli, **kwargs)` - Anthropic async chat client. - - -## claudette.core - -- `def find_block(r, blk_type)` - Find the first block of type `blk_type` in `r.content`. - -- `def contents(r)` - Helper to get the contents from Claude response `r`. - -- `def usage(inp, out, cache_create, cache_read)` - Slightly more concise version of `Usage`. - -- `@patch def __add__(self, b)` - Add together each of `input_tokens` and `output_tokens` - -- `def mk_msgs(msgs, **kw)` - Helper to set 'assistant' role on alternate messages. - -- `class Client` - - `def __init__(self, model, cli, log)` - Basic Anthropic messages client. - - -- `def mk_tool_choice(choose)` - Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise - -- `def mk_funcres(tuid, res)` - Given tool use id and the tool result, create a tool_result response. - -- `def mk_toolres(r, ns, obj)` - Create a `tool_result` message from response `r`. - -- `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)` - Make a call to Claude. - -- `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)` - Return the value of all tool calls (generally used for structured outputs) - -- `class Chat` - - `def __init__(self, model, cli, sp, tools, temp, cont_pr)` - Anthropic chat client. - - - `@property def use(self)` - -- `def img_msg(data, cache)` - Convert image `data` into an encoded `dict` - -- `def text_msg(s, cache)` - Convert `s` to a text message - -- `def mk_msg(content, role, cache, **kw)` - Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message - -## claudette.toolloop - -- `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)` - Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages -