# Release notes
+
+<!-- do not remove -->
+
+## 0.0.10
+
+### New Features
+
+- Add `Client.structured` ([#32](https://github.com/AnswerDotAI/claudette/issues/32))
+- Use `dict2obj` ([#30](https://github.com/AnswerDotAI/claudette/issues/30))
+- Store tool call result without stringifying ([#29](https://github.com/AnswerDotAI/claudette/issues/29))
+
+
+## 0.0.9
+
+### New Features
+
+- Async support ([#21](https://github.com/AnswerDotAI/claudette/issues/21))
+
+
+## 0.0.7
+
+### New Features
+
+- Prompt caching ([#20](https://github.com/AnswerDotAI/claudette/issues/20))
+- Add markdown to doc output ([#19](https://github.com/AnswerDotAI/claudette/issues/19))
+- Support vscode details tags ([#18](https://github.com/AnswerDotAI/claudette/issues/18))
+- Add a `cont_pr` param to Chat as a "default" prompt ([#15](https://github.com/AnswerDotAI/claudette/pull/15)), thanks to [@tom-pollak](https://github.com/tom-pollak)
+
+### Bugs Squashed
+
+- Explicit `tool_choice` causes chat() to call tool twice. ([#11](https://github.com/AnswerDotAI/claudette/issues/11))
+
+
+## 0.0.6
+
+### New Features
+
+- Default chat prompt & function calling refactor ([#15](https://github.com/AnswerDotAI/claudette/pull/15)), thanks to [@tom-pollak](https://github.com/tom-pollak)
+
+
+## 0.0.5
+
+### New Features
+
+- Better support for stop sequences ([#12](https://github.com/AnswerDotAI/claudette/pull/12)), thanks to [@xl0](https://github.com/xl0)
+
+
+## 0.0.3
+
+### New Features
+
+- Amazon Bedrock and Google Vertex support ([#7](https://github.com/AnswerDotAI/claudette/issues/7))
+
+### Bug Fixes
+
+- Update model paths for non-beta tool use ([#2](https://github.com/AnswerDotAI/claudette/pull/2)), thanks to [@sarahpannn](https://github.com/sarahpannn)
+
+
+## 0.0.1
+
+- Initial release
diff --git a/async.html.md b/async.html.md
new file mode 100644
index 0000000..07f2180
--- /dev/null
+++ b/async.html.md
@@ -0,0 +1,668 @@
+# The async version
+
+
+
+
+## Setup
+
+## Async SDK
+
+``` python
+model = models[1]
+cli = AsyncAnthropic()
+```
+
+``` python
+m = {'role': 'user', 'content': "I'm Jeremy"}
+r = await cli.messages.create(messages=[m], model=model, max_tokens=100)
+r
+```
+
+Hello Jeremy! It’s nice to meet you. How can I assist you today? Is
+there anything specific you’d like to talk about or any questions you
+have?
+
+
+
+- id: `msg_019gsEQs5dqb3kgwNJbTH27M`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 36}`
+
+
+
+------------------------------------------------------------------------
+
+source
+
+### AsyncClient
+
+> AsyncClient (model, cli=None, log=False)
+
+*Async Anthropic messages client.*
+
+
+Exported source
+
+``` python
+class AsyncClient(Client):
+ def __init__(self, model, cli=None, log=False):
+ "Async Anthropic messages client."
+ super().__init__(model,cli,log)
+ if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})
+```
+
+
+
+``` python
+c = AsyncClient(model)
+```
+
+``` python
+c._r(r)
+c.use
+```
+
+ In: 10; Out: 36; Total: 46
+
+------------------------------------------------------------------------
+
+source
+
+### AsyncClient.\_\_call\_\_
+
+> AsyncClient.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='',
+> stream:bool=False, stop=None, cli=None, log=False)
+
+*Make an async call to Claude.*
+
+
+
+
+
+
+
+
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| msgs | list |  | List of messages in the dialog |
+| sp | str |  | The system prompt |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| prefill | str |  | Optional prefill to pass to Claude as start of its response |
+| stream | bool | False | Stream response? |
+| stop | NoneType | None | Stop sequence |
+| cli | NoneType | None |  |
+| log | bool | False |  |
+
+
+
+
+
+
+Exported source
+
+``` python
+@patch
+async def _stream(self:AsyncClient, msgs:list, prefill='', **kwargs):
+ async with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s:
+ if prefill: yield prefill
+ async for o in s.text_stream: yield o
+ self._log(await s.get_final_message(), prefill, msgs, kwargs)
+```
+
+
+
+Exported source
+
+``` python
+@patch
+@delegates(Client)
+async def __call__(self:AsyncClient,
+ msgs:list, # List of messages in the dialog
+ sp='', # The system prompt
+ temp=0, # Temperature
+ maxtok=4096, # Maximum tokens
+ prefill='', # Optional prefill to pass to Claude as start of its response
+ stream:bool=False, # Stream response?
+ stop=None, # Stop sequence
+ **kwargs):
+ "Make an async call to Claude."
+ msgs = self._precall(msgs, prefill, stop, kwargs)
+ if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ res = await self.c.messages.create(
+ model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs)
+```
+
+
+
+``` python
+c = AsyncClient(model, log=True)
+c.use
+```
+
+ In: 0; Out: 0; Total: 0
+
+``` python
+c.model = models[1]
+await c('Hi')
+```
+
+Hello! How can I assist you today? Feel free to ask any questions or let
+me know if you need help with anything.
+
+
+
+- id: `msg_01L9vqP9r1LcmvSk8vWGLbPo`
+- content:
+ `[{'text': 'Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 8, 'output_tokens': 29, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+c.use
+```
+
+ In: 8; Out: 29; Total: 37
+
+``` python
+q = "Concisely, what is the meaning of life?"
+pref = 'According to Douglas Adams,'
+await c(q, prefill=pref)
+```
+
+According to Douglas Adams, the meaning of life is 42. More seriously,
+there’s no universally agreed upon meaning of life. Many philosophers
+and religions have proposed different answers, but it remains an open
+question that individuals must grapple with for themselves.
+
+
+
+- id: `msg_01KAJbCneA2oCRPVm9EkyDXF`
+- content:
+ `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 24, 'output_tokens': 51, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+async for o in (await c('Hi', stream=True)): print(o, end='')
+```
+
+ Hello! How can I assist you today? Feel free to ask any questions or let me know if you need help with anything.
+
+``` python
+c.use
+```
+
+ In: 40; Out: 109; Total: 149
+
+``` python
+async for o in (await c(q, prefill=pref, stream=True)): print(o, end='')
+```
+
+ According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon meaning of life. Many philosophers and religions have proposed different answers, but it remains an open question that individuals must grapple with for themselves.
+
+``` python
+c.use
+```
+
+ In: 64; Out: 160; Total: 224
+
+``` python
+def sums(
+ a:int, # First thing to sum
+ b:int=1 # Second thing to sum
+) -> int: # The sum of the inputs
+ "Adds a + b."
+ print(f"Finding the sum of {a} and {b}")
+ return a + b
+```
+
+``` python
+a,b = 604542,6458932
+pr = f"What is {a}+{b}?"
+sp = "You are a summing expert."
+```
+
+``` python
+tools=[get_schema(sums)]
+choice = mk_tool_choice('sums')
+```
+
+``` python
+tools = [get_schema(sums)]
+msgs = mk_msgs(pr)
+r = await c(msgs, sp=sp, tools=tools, tool_choice=choice)
+tr = mk_toolres(r, ns=globals())
+msgs += tr
+contents(await c(msgs, sp=sp, tools=tools))
+```
+
+ Finding the sum of 604542 and 6458932
+
+ 'As a summing expert, I\'m happy to help you with this addition. The sum of 604542 and 6458932 is 7063474.\n\nTo break it down:\n604542 (first number)\n+ 6458932 (second number)\n= 7063474 (total sum)\n\nThis result was calculated using the "sums" function, which adds two numbers together. Is there anything else you\'d like me to sum for you?'
+
+## AsyncChat
+
+------------------------------------------------------------------------
+
+source
+
+### AsyncChat
+
+> AsyncChat (model:Optional[str]=None,
+> cli:Optional[claudette.core.Client]=None, sp='',
+> tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None)
+
+*Anthropic async chat client.*
+
+
+
+
+
+
+
+
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| model | Optional | None | Model to use (leave empty if passing cli) |
+| cli | Optional | None | Client to use (leave empty if passing model) |
+| sp | str |  |  |
+| tools | Optional | None |  |
+| temp | int | 0 |  |
+| cont_pr | Optional | None |  |
+
+
+
+
+
+
+Exported source
+
+``` python
+@delegates()
+class AsyncChat(Chat):
+ def __init__(self,
+ model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
+ cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
+ **kwargs):
+ "Anthropic async chat client."
+ super().__init__(model, cli, **kwargs)
+ if not cli: self.c = AsyncClient(model)
+```
+
+
+
+``` python
+sp = "Never mention what tools you use."
+chat = AsyncChat(model, sp=sp)
+chat.c.use, chat.h
+```
+
+ (In: 0; Out: 0; Total: 0, [])
+
+------------------------------------------------------------------------
+
+source
+
+### AsyncChat.\_\_call\_\_
+
+> AsyncChat.__call__ (pr=None, temp=0, maxtok=4096, stream=False,
+> prefill='', **kw)
+
+*Call self as a function.*
+
+
+
+
+
+
+
+
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| pr | NoneType | None | Prompt / message |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| stream | bool | False | Stream response? |
+| prefill | str |  | Optional prefill to pass to Claude as start of its response |
+| kw |  |  |  |
+
+
+
+
+
+
+
+
+Exported source
+
+``` python
+@patch
+async def _stream(self:AsyncChat, res):
+ async for o in res: yield o
+ self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
+```
+
+
+
+Exported source
+
+``` python
+@patch
+async def _append_pr(self:AsyncChat, pr=None):
+ prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history
+ if pr and prev_role == 'user': await self()
+ self._post_pr(pr, prev_role)
+```
+
+
+
+Exported source
+
+``` python
+@patch
+async def __call__(self:AsyncChat,
+ pr=None, # Prompt / message
+ temp=0, # Temperature
+ maxtok=4096, # Maximum tokens
+ stream=False, # Stream response?
+ prefill='', # Optional prefill to pass to Claude as start of its response
+ **kw):
+ await self._append_pr(pr)
+ if self.tools: kw['tools'] = [get_schema(o) for o in self.tools]
+ if self.tool_choice and pr: kw['tool_choice'] = mk_tool_choice(self.tool_choice)
+ res = await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, **kw)
+ if stream: return self._stream(res)
+ self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
+ return res
+```
+
+
+
+``` python
+await chat("I'm Jeremy")
+await chat("What's my name?")
+```
+
+Your name is Jeremy, as you mentioned in your previous message.
+
+
+
+- id: `msg_01NMugMXWpDP9iuTXeLkHarn`
+- content:
+ `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+q = "Concisely, what is the meaning of life?"
+pref = 'According to Douglas Adams,'
+await chat(q, prefill=pref)
+```
+
+According to Douglas Adams, the meaning of life is 42. More seriously,
+there’s no universally agreed upon answer. Common philosophical
+perspectives include:
+
+1. Finding personal fulfillment
+2. Serving others
+3. Pursuing happiness
+4. Creating meaning through our choices
+5. Experiencing and appreciating existence
+
+Ultimately, many believe each individual must determine their own life’s
+meaning.
+
+
+
+- id: `msg_01VPWUQn5Do1Kst8RYUDQvCu`
+- content:
+ `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+chat = AsyncChat(model, sp=sp)
+async for o in (await chat("I'm Jeremy", stream=True)): print(o, end='')
+```
+
+ Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions I can help you with?
+
+``` python
+pr = f"What is {a}+{b}?"
+chat = AsyncChat(model, sp=sp, tools=[sums])
+r = await chat(pr)
+r
+```
+
+ Finding the sum of 604542 and 6458932
+
+To answer this question, I can use the “sums” function to add these two
+numbers together. Let me do that for you.
+
+
+
+- id: `msg_015z1rffSWFxvj7rSpzc43ZE`
+- content:
+ `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01SNKhtfnDQBC4RGY4mUCq1v', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `tool_use`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+await chat()
+```
+
+The sum of 604542 and 6458932 is 7063474.
+
+
+
+- id: `msg_018KAsE2YGiXWjUJkLPrXpb2`
+- content:
+ `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+fn = Path('samples/puppy.jpg')
+img = fn.read_bytes()
+```
+
+``` python
+q = "In brief, what color flowers are in this image?"
+msg = mk_msg([img_msg(img), text_msg(q)])
+await c([msg])
+```
+
+The flowers in this image are purple. They appear to be small,
+daisy-like flowers, possibly asters or some type of purple daisy,
+blooming in the background behind the adorable puppy in the foreground.
+
+
+
+- id: `msg_017qgZggLjUY915mWbWCkb9X`
+- content:
+ `[{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 110, 'output_tokens': 50, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
diff --git a/core.html b/core.html
new file mode 100644
index 0000000..e7ed5de
--- /dev/null
+++ b/core.html
@@ -0,0 +1,2918 @@
Claudette’s source – claudette
This is the ‘literate’ source code for Claudette. You can view the fully rendered version of the notebook here, or you can clone the git repo and run the interactive notebook in Jupyter. The notebook is converted to the Python module claudette/core.py using nbdev. The goal of this source code is to both create the Python module, and also to teach the reader how it is created, without assuming much existing knowledge about Claude’s API.
+
Most of the time you’ll see that we write some source code first, and then a description or discussion of it afterwards.
+
+
Setup
+
+
import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+
+
To print every HTTP request and response in full, uncomment the above line. This functionality is provided by Anthropic’s SDK.
+
+
+
+
+
+
+Tip
+
+
+
+
If you’re reading the rendered version of this notebook, you’ll see an “Exported source” collapsible widget below. If you’re reading the source notebook directly, you’ll see #| exports at the top of the cell. These show that this piece of code will be exported into the python module that this notebook creates. No other code will be included – any other code in this notebook is just for demonstration, documentation, and testing.
+
You can toggle expanding/collapsing the source code of all exported sections by using the </> Code menu in the top right of the rendered notebook page.
These are the current versions and prices of Anthropic’s models at the time of writing.
+
+
model = models[1]
+
+
For examples, we’ll use Sonnet 3.5, since it’s awesome.
+
+
+
Anthropic SDK
+
+
cli = Anthropic()
+
+
This is what Anthropic’s SDK provides for interacting with Claude from Python. To use it, pass it a list of messages, with content and a role. The roles should alternate between user and assistant.
+
+
+
+
+
+
+Tip
+
+
+
+
After the code below you’ll see an indented section with an orange vertical line on the left. This is used to show the result of running the code above. Because the code is running in a Jupyter Notebook, we don’t have to use print to display results, we can just type the expression directly, as we do with r here.
Hello Jeremy! It’s nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.
+
+
+
id: msg_01JGTX1KNKpS3W7KogJVmSRX
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", 'type': 'text'}]
+
model: claude-3-5-sonnet-20240620
+
role: assistant
+
stop_reason: end_turn
+
stop_sequence: None
+
type: message
+
usage: {'input_tokens': 10, 'output_tokens': 38}
+
+
+
+
+
+
Formatting output
+
That output is pretty long and hard to read, so let’s clean it up. We’ll start by pulling out the Content part of the message. To do that, we’re going to write our first function which will be included in the claudette/core.py module.
+
+
+
+
+
+
+Tip
+
+
+
+
This is the first exported public function or class we’re creating (the previous export was of a variable). In the rendered version of the notebook for these you’ll see 4 things, in this order (unless the symbol starts with a single _, which indicates it’s private):
+
+
The signature (with the symbol name as a heading, with a horizontal rule above)
+
A table of parameter docs (if provided)
+
The doc string (in italics).
+
The source code (in a collapsible “Exported source” block)
+
+
After that, we generally provide a bit more detail on what we’ve created, and why, along with a sample usage.
Find the first block of type blk_type in r.content.
+
+
+
+
+
|  | Type | Default | Details |
|----|----|----|----|
| r | Mapping |  | The message to look in |
| blk_type | type | TextBlock | The type of block to find |
+
+
+
+
+
+Exported source
+
def find_block(r:abc.Mapping, # The message to look in
               blk_type:type=TextBlock  # The type of block to find
              ):
    "Find the first block of type `blk_type` in `r.content`."
    return first(o for o in r.content if isinstance(o,blk_type))
+
+
+
This makes it easier to grab the needed parts of Claude’s responses, which can include multiple pieces of content. By default, we look for the first text block. That will generally have the content we want to display.
+
+
find_block(r)
+
+
TextBlock(text="Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", type='text')
Helper to get the contents from Claude response r.
+
+
+Exported source
+
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    return blk.text.strip() if hasattr(blk,'text') else str(blk)
+
+
+
For display purposes, we often just want to show the text itself.
+
+
contents(r)
+
+
"Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything."
Jupyter looks for a _repr_markdown_ method in displayed objects; we add this in order to display just the content text, and collapse full details into a hideable section. Note that patch is from fastcore, and is used to add (or replace) functionality in an existing class. We pass the class(es) that we want to patch as type annotations to self. In this case, _repr_markdown_ is being added to Anthropic’s Message class, so when we display the message now we just see the contents, and the details are hidden away in a collapsible details block.
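The exported patch itself isn’t visible in this extract; a minimal sketch of the idea (simplified, not the exported implementation) could look like:

``` python
@patch
def _repr_markdown_(self:Message):
    det = '\n- '.join(f'{k}: {v}' for k,v in self.model_dump().items())
    # Show just the text contents, with the full message collapsed into a <details> block
    return f"{contents(self)}\n\n<details>\n\n- {det}\n\n</details>"
```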
+
+
r
+
+
Hello Jeremy! It’s nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.
+
+
+
id: msg_01JGTX1KNKpS3W7KogJVmSRX
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", 'type': 'text'}]
+
model: claude-3-5-sonnet-20240620
+
role: assistant
+
stop_reason: end_turn
+
stop_sequence: None
+
type: message
+
usage: {'input_tokens': 10, 'output_tokens': 38}
+
+
+
+
+
One key part of the response is the usage key, which tells us how many tokens we used by returning a Usage object.
+
We’ll add some helpers to make things a bit cleaner for creating and formatting these objects.
In python, patching __repr__ lets us change how an object is displayed. (More generally, methods starting and ending in __ in Python are called dunder methods, and have some magic behavior – such as, in this case, changing how an object is displayed.)
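The exported helpers themselves aren’t visible in this extract. A minimal sketch of what they could look like (the `usage` signature is an assumption inferred from how it’s called in `__add__` below; the `__repr__` format matches the `In: ...; Out: ...; Total: ...` lines shown throughout this page):

``` python
def usage(inp=0, out=0, cache_create=0, cache_read=0):
    "Build a `Usage` object from plain ints (assumed helper, matching the calls below)."
    return Usage(input_tokens=inp, output_tokens=out,
                 cache_creation_input_tokens=cache_create,
                 cache_read_input_tokens=cache_read)

@patch
def __repr__(self:Usage):
    return f'In: {self.input_tokens}; Out: {self.output_tokens}; Total: {self.input_tokens+self.output_tokens}'
```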
Add together each of input_tokens and output_tokens
+
+
+Exported source
+
@patch
def __add__(self:Usage, b):
    "Add together each of `input_tokens` and `output_tokens`"
    return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens,
                 getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0),
                 getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0))
+
+
+
And, patching __add__ lets + work on a Usage object.
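For example, combining the usage of the response above with itself (illustrative):

``` python
r.usage + r.usage
# In: 20; Out: 76; Total: 96
```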
We make things a bit more convenient by writing a function to create a message for us.
+
+
+
+
+
+
+Note
+
+
+
+
You may have noticed that we didn’t export the mk_msg function (i.e. there’s no “Exported source” block around it). That’s because we’ll need more functionality in our final version than this version has – so we’ll be defining a more complete version later. Rather than refactoring/editing in notebooks, often it’s helpful to simply gradually build up complexity by re-defining a symbol.
+
+
+
+
prompt = "I'm Jeremy"
+m = mk_msg(prompt)
+m
+
+
{'role': 'user', 'content': "I'm Jeremy"}
+
+
+
+
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
+r
+
+
Hello Jeremy! It’s nice to meet you. How are you doing today? Is there anything I can help you with or any questions you’d like to ask?
+
+
+
id: msg_01FPMrkmWuYfacN8xDnBXKUY
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or any questions you'd like to ask?", 'type': 'text'}]
Helper to set ‘assistant’ role on alternate messages.
+
+
+Exported source
+
def mk_msgs(msgs:list, **kw):
    "Helper to set 'assistant' role on alternate messages."
    if isinstance(msgs,str): msgs=[msgs]
    return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)]
+
+
+
LLMs, including Claude, don’t actually have state, but instead dialogs are created by passing back all previous prompts and responses every time. With Claude, they always alternate user and assistant. Therefore we create a function to make it easier to build up these dialog lists.
+
+But to do so, we need to update mk_msg so that we can pass not only a str as content, but also a dict or an object with a content attr, since these are both types of message that Claude can create. To do so, we check for a content key or attr, and use it if found.
+
+
+Exported source
+
def _str_if_needed(o):
    if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o
    return str(o)
+
+
+
+
def mk_msg(content, role='user', **kw):
    "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message"
    if hasattr(content, 'content'): content,role = content.content,content.role
    if isinstance(content, abc.Mapping): content=content['content']
    return dict(role=role, content=_str_if_needed(content), **kw)
+
+
+
msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?'])
+msgs
+
+
[{'role': 'user', 'content': "I'm Jeremy"},
+ {'role': 'assistant',
+ 'content': [TextBlock(text="Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or any questions you'd like to ask?", type='text')]},
+ {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
+
+
+
Now, if we pass this list of messages to Claude, the model treats it as a conversation to respond to.
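For example (illustrative, reusing `cli` and `model` from above):

``` python
r = cli.messages.create(messages=msgs, model=model, max_tokens=200)
contents(r)
```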
class Client:
    def __init__(self, model, cli=None, log=False):
        "Basic Anthropic messages client."
        self.model,self.use = model,usage()
        self.log = [] if log else None
        self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
+
+
+
We’ll create a simple Client for Anthropic which tracks usage and stores the model to use. We don’t add any methods right away – instead we’ll use patch for that so we can add and document them incrementally.
Whereas OpenAI’s models use a stream parameter for streaming, Anthropic’s use a separate method. We implement Anthropic’s approach in a private method, and then use a stream parameter in __call__ for consistency:
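The exported `_stream` method isn’t visible in this extract; a sketch of it, mirroring the async version shown in `async.html.md` later in this diff, would be:

``` python
@patch
def _stream(self:Client, msgs:list, prefill='', **kwargs):
    with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s:
        if prefill: yield prefill
        yield from s.text_stream           # yield each text chunk as it arrives
        self._log(s.get_final_message(), prefill, msgs, kwargs)
```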
Claude supports adding an extra assistant message at the end, which contains the prefill – i.e. the text we want Claude to assume the response starts with. However Claude doesn’t actually repeat that in the response, so for convenience we add it.
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list,          # List of messages in the dialog
             sp='',              # The system prompt
             temp=0,             # Temperature
             maxtok=4096,        # Maximum tokens
             prefill='',         # Optional prefill to pass to Claude as start of its response
             stream:bool=False,  # Stream response?
             stop=None,          # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
+
+
Defining __call__ lets us use an object like a function (i.e. it’s callable). We use it as a small wrapper over messages.create. However we’re not exporting this version just yet – we have some additions we’ll make in a moment…
Create a tool_choice dict that’s ‘auto’ if choose is None, ‘any’ if it is True, or ‘tool’ otherwise
+
+
+Exported source
+
def mk_tool_choice(choose:Union[str,bool,None])->dict:
    "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise"
    return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'}
Claude can be forced to use a particular tool, or select from a specific list of tools, or decide for itself when to use a tool. If you want to force a tool (or force choosing from a list), include a tool_choice param with a dict from mk_tool_choice.
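For example:

``` python
mk_tool_choice('sums'), mk_tool_choice(True), mk_tool_choice(None)
# ({'type': 'tool', 'name': 'sums'}, {'type': 'any'}, {'type': 'auto'})
```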
+
For testing, we need a function that Claude can call; we’ll write a simple function that adds numbers together, and will tell us when it’s being called:
+
+
def sums(
    a:int,   # First thing to sum
    b:int=1  # Second thing to sum
) -> int:    # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b


a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
+
+
Claudette can autogenerate a schema thanks to the toolslm library. We’ll force the use of the tool using the function we created earlier.
We’ll start a dialog with Claude now. We’ll store the messages of our dialog in msgs. The first message will be our prompt pr, and we’ll pass our tools schema.
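The corresponding cells aren’t visible in this extract; they mirror the async example later in this diff:

``` python
tools = [get_schema(sums)]
choice = mk_tool_choice('sums')
msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
```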
When Claude decides that it should use a tool, it passes back a ToolUseBlock with the name of the tool to call, and the params to use.
+
We don’t want to allow it to call just any possible function (that would be a security disaster!) so we create a namespace – that is, a dictionary of allowable function names to call.
+
+
+Exported source
+
def _mk_ns(*funcs:list[callable]) -> dict[str,callable]:
    "Create a `dict` of name to function in `funcs`, to use as a namespace"
    return {f.__name__:f for f in funcs}
+
+
+
+
ns = _mk_ns(sums)
+ns
+
+
{'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
Given tool use id and the tool result, create a tool_result response.
+
+
+Exported source
+
def call_func(fc:ToolUseBlock,                # Tool use block from Claude's message
              ns:Optional[abc.Mapping]=None,  # Namespace to search for tools, defaults to `globals()`
              obj:Optional=None               # Object to search for tools
             ):
    "Call the function in the tool response `tr`, using namespace `ns`."
    if ns is None: ns=globals()
    if not isinstance(ns, abc.Mapping): ns = _mk_ns(*ns)
    func = getattr(obj, fc.name, None)
    if not func: func = ns[fc.name]
    return func(**fc.input)

def mk_funcres(tuid, res):
    "Given tool use id and the tool result, create a tool_result response."
    return dict(type="tool_result", tool_use_id=tuid, content=str(res))
def mk_toolres(
    r:abc.Mapping,                  # Tool use request response from Claude
    ns:Optional[abc.Mapping]=None,  # Namespace to search for tools
    obj:Optional=None               # Class to search for tools
    ):
    "Create a `tool_result` message from response `r`."
    cts = getattr(r, 'content', [])
    res = [mk_msg(r)]
    tcs = [mk_funcres(o.id, call_func(o, ns=ns, obj=obj)) for o in cts if isinstance(o,ToolUseBlock)]
    if tcs: res.append(mk_msg(tcs))
    return res
+
+
+
In order to tell Claude the result of the tool call, we pass back the tool use assistant request and the tool_result response.
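The cell that builds `tr` isn’t visible here; illustratively:

``` python
tr = mk_toolres(r, ns=ns)
```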
We add this to our dialog, and now Claude has all the information it needs to answer our question.
+
+
msgs += tr
+contents(c(msgs, sp=sp, tools=tools))
+
+
'The sum of 604542 and 6458932 is 7063474.'
+
+
+
This works with methods as well – in this case, use the object itself for ns:
+
+
class Dummy:
    def sums(
        self,
        a:int,   # First thing to sum
        b:int=1  # Second thing to sum
    ) -> int:    # The sum of the inputs
        "Adds a + b."
        print(f"Finding the sum of {a} and {b}")
        return a + b
|  | Type | Default | Details |
|----|----|----|----|
| msgs | list |  | List of messages in the dialog |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tools | Optional | None | List of tools to make available to Claude |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
| temperature | float \| NotGiven | NOT_GIVEN |  |
| top_k | int \| NotGiven | NOT_GIVEN |  |
| top_p | float \| NotGiven | NOT_GIVEN |  |
| extra_headers | Headers \| None | None |  |
| extra_query | Query \| None | None |  |
| extra_body | Body \| None | None |  |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
+
+
+
+
+
+
+Exported source
+
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list,                        # List of messages in the dialog
             sp='',                            # The system prompt
             temp=0,                           # Temperature
             maxtok=4096,                      # Maximum tokens
             prefill='',                       # Optional prefill to pass to Claude as start of its response
             stream:bool=False,                # Stream response?
             stop=None,                        # Stop sequence
             tools:Optional[list]=None,        # List of tools to make available to Claude
             tool_choice:Optional[dict]=None,  # Optionally force use of some tool
             **kwargs):
    "Make a call to Claude."
    if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)]
    if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs)
Return the value of all tool calls (generally used for structured outputs)
+
+
+
+
+
+
+
+
+
+
+
|  | Type | Default | Details |
|----|----|----|----|
| msgs | list |  | List of messages in the dialog |
| tools | Optional | None | List of tools to make available to Claude |
| obj | Optional | None | Class to search for tools |
| ns | Optional | None | Namespace to search for tools |
| sp | str |  | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| prefill | str |  | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
| temperature | float \| NotGiven | NOT_GIVEN |  |
| top_k | int \| NotGiven | NOT_GIVEN |  |
| top_p | float \| NotGiven | NOT_GIVEN |  |
| extra_headers | Headers \| None | None |  |
| extra_query | Query \| None | None |  |
| extra_body | Body \| None | None |  |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
+
+
+
+
+
+
+Exported source
+
@patch
@delegates(Client.__call__)
def structured(self:Client,
               msgs:list,                      # List of messages in the dialog
               tools:Optional[list]=None,      # List of tools to make available to Claude
               obj:Optional=None,              # Class to search for tools
               ns:Optional[abc.Mapping]=None,  # Namespace to search for tools
               **kwargs):
    "Return the value of all tool calls (generally used for structured outputs)"
    tools = listify(tools)
    res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
    if ns is None: ns=tools
    cts = getattr(res, 'content', [])
    tcs = [call_func(o, ns=ns, obj=obj) for o in cts if isinstance(o,ToolUseBlock)]
    return tcs
+
+
+
Anthropic’s API does not support response formats directly, so instead we provide a structured method to use tool calling to achieve the same result. The result of the tool is not passed back to Claude in this case, but instead is returned directly to the user.
+
+
c.structured(pr, tools=[sums])
+
+
Finding the sum of 604542 and 6458932
+
+
+
[7063474]
+
+
+
+
+
+
Chat
+
Rather than manually adding the responses to a dialog, we’ll create a simple Chat class to do that for us, each time we make a request. We’ll also store the system prompt and tools here, to avoid passing them every time.
User prompt to continue an assistant response: assistant,[user:“…”],assistant
+
+
+
+
+
+Exported source
+
class Chat:
    def __init__(self,
                 model:Optional[str]=None,   # Model to use (leave empty if passing `cli`)
                 cli:Optional[Client]=None,  # Client to use (leave empty if passing `model`)
                 sp='',                      # Optional system prompt
                 tools:Optional[list]=None,  # List of tools to make available to Claude
                 temp=0,                     # Temperature
                 cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant
        "Anthropic chat client."
        assert model or cli
        assert cont_pr != "", "cont_pr may not be an empty string"
        self.c = (cli or Client(model))
        self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp

    @property
    def use(self): return self.c.use
+
+
+
The class stores the Client that will provide the responses in c, and a history of messages in h.
+
+
sp = "Never mention what tools you use."
+chat = Chat(model, sp=sp)
+chat.c.use, chat.h
This is clunky. Let’s add cost as a property for the Chat class. It will pass in the appropriate prices for the current model to the usage cost calculator.
@patch
def _post_pr(self:Chat, pr, prev_role):
    if pr is None and prev_role == 'assistant':
        if self.cont_pr is None:
            raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.")
        pr = self.cont_pr  # No user prompt, keep the chain
    if pr: self.h.append(mk_msg(pr))
+
+
+
+
+Exported source
+
@patch
def _append_pr(self:Chat,
               pr=None,  # Prompt / message
              ):
    prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant'  # First message should be 'user'
    if pr and prev_role == 'user': self()  # already user request pending
    self._post_pr(pr, prev_role)
+
+
+
+
+Exported source
+
@patch
def __call__(self:Chat,
             pr=None,       # Prompt / message
             temp=None,     # Temperature
             maxtok=4096,   # Maximum tokens
             stream=False,  # Stream response?
             prefill='',    # Optional prefill to pass to Claude as start of its response
             tool_choice:Optional[dict]=None,  # Optionally force use of some tool
             **kw):
    if temp is None: temp=self.temp
    self._append_pr(pr)
    res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok,
                 tools=self.tools, tool_choice=tool_choice, **kw)
    if stream: return self._stream(res)
    self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
    return res
+
+
+
The __call__ method just passes the request along to the Client, but rather than just passing in this one prompt, it appends it to the history and passes it all along. As a result, we now have state!
+
+
chat = Chat(model, sp=sp)
+
+
+
chat("I'm Jeremy")
+chat("What's my name?")
+
+
Your name is Jeremy, as you mentioned in your previous message.
+
+
+
id: msg_01Lxa5M7S1cjCBTQtaZfWWCQ
+
content: [{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]
q = "Concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
+
+
+
chat(q, prefill=pref)
+
+
According to Douglas Adams, the meaning of life is 42. More seriously, there’s no universally agreed upon answer. Common philosophical perspectives include:
+
+
Finding personal fulfillment
+
Serving others
+
Pursuing happiness
+
Creating meaning through our choices
+
Experiencing and appreciating existence
+
+
Ultimately, many believe each individual must determine their own life’s meaning.
+
+
+
id: msg_012Z1zkix1fb1B7fHWZQpMoF
+
content: [{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]
Error: Prompt must be given after assistant completion, or use `self.cont_pr`.
+
+
+
Setting cont_pr allows a “default prompt” to be specified when a prompt isn’t specified. Usually used to prompt the model to continue.
+
+
chat.cont_pr = "keep going..."
chat()
+
+
Continuing on the topic of life’s meaning:
+
+
Achieving self-actualization
+
Leaving a positive legacy
+
Connecting with others and forming relationships
+
Exploring and understanding the universe
+
Evolving as a species
+
Overcoming challenges and growing
+
Finding balance between various aspects of life
+
Expressing creativity and individuality
+
Pursuing knowledge and wisdom
+
Living in harmony with nature
+
+
These perspectives often overlap and can be combined in various ways. Some argue that the absence of an inherent meaning allows for the freedom to create our own purpose.
+
+
+
id: msg_012qPDMHoEpaT9RDDu4UyYKq
+
content: [{'text': "Continuing on the topic of life's meaning:\n\n6. Achieving self-actualization\n7. Leaving a positive legacy\n8. Connecting with others and forming relationships\n9. Exploring and understanding the universe\n10. Evolving as a species\n11. Overcoming challenges and growing\n12. Finding balance between various aspects of life\n13. Expressing creativity and individuality\n14. Pursuing knowledge and wisdom\n15. Living in harmony with nature\n\nThese perspectives often overlap and can be combined in various ways. Some argue that the absence of an inherent meaning allows for the freedom to create our own purpose.", 'type': 'text'}]
chat = Chat(model, sp=sp)
+for o in chat("I'm Jeremy", stream=True): print(o, end='')
+
+
Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions you have?
+
+
+
+
for o in chat(q, prefill=pref, stream=True): print(o, end='')
+
+
According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:
+
+1. Finding personal fulfillment
+2. Serving others or a higher purpose
+3. Experiencing and creating love and happiness
+4. Pursuing knowledge and understanding
+5. Leaving a positive legacy
+
+Ultimately, many believe each individual must determine their own meaning.
+
+
+
+
+
Chat tool use
+
We automagically get streamlined tool use as well:
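The setup cell isn’t visible in this extract; it mirrors the async example elsewhere in this diff:

``` python
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr)
r
```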
To answer this question, I can use the “sums” function to add these two numbers together. Let me do that for you.
+
+
+
id: msg_01Y6nic6azHsQUHgnu92UDTu
+
content: [{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01DrdxEJ2KuqCPRKmKjgCsyZ', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]
def _mk_content(src, cache=False):
    "Create appropriate content data structure based on type of content"
    if isinstance(src,str): return text_msg(src, cache=cache)
    if isinstance(src,bytes): return img_msg(src, cache=cache)
    if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()}
    return _str_if_needed(src)
+
+
+
There’s no need to manually choose the type of message, since we figure that out from the type of the source data.
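For instance (illustrative):

``` python
_mk_content('Hi')
# {'type': 'text', 'text': 'Hi'}
```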
Helper to create a dict appropriate for a Claude message. kw are added as key/value pairs to the message
+
+
+
+
+
+
+
+
+
+
+
|  | Type | Default | Details |
|----|----|----|----|
| content |  |  | A string, list, or dict containing the contents of the message |
| role | str | user | Must be 'user' or 'assistant' |
| cache | bool | False |  |
| kw |  |  |  |
+
+
+
+
+
+
+
+
+Exported source
+
def mk_msg(content,      # A string, list, or dict containing the contents of the message
           role='user',  # Must be 'user' or 'assistant'
           cache=False,
           **kw):
    "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message"
    if hasattr(content, 'content'): content,role = content.content,content.role
    if isinstance(content, abc.Mapping): content=content.get('content', content)
    if not isinstance(content, list): content=[content]
    content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.'
    return dict2obj(dict(role=role, content=content, **kw), list_func=list)
When we construct a message, we now use _mk_content to create the appropriate parts. Since a dialog contains multiple messages, and a message can contain multiple content parts, to pass a single message with multiple parts we have to use a list containing a single list:
+
+
c([[img, q]])
+
+
The image contains purple or lavender-colored flowers, which appear to be daisies or a similar type of flower.
+
+
+
id: msg_01FDxZ8umYNK4yPSuUcFqNoE
+
content: [{'text': 'The image contains purple or lavender-colored flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]
anthropic at version 0.34.2 seems not to install boto3 as a dependency. You may need to do a pip install boto3 or the creation of the Client below fails.
+
+
+
+Provided boto3 is installed, we otherwise don’t need any extra code to support Amazon Bedrock – we just have to set up the appropriate client:
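A sketch of that setup (the AWS argument names and the `models_aws` list are assumptions here, not shown in this extract):

``` python
from anthropic import AnthropicBedrock

ab = AnthropicBedrock(aws_access_key=os.environ.get('AWS_ACCESS_KEY'),
                      aws_secret_key=os.environ.get('AWS_SECRET_KEY'),
                      aws_region='us-east-1')   # assumed region
client = Client(models_aws[-1], ab)  # `models_aws` assumed to hold Bedrock model ids
chat = Chat(cli=client)
chat("I'm Jeremy")
```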
Hello Jeremy! It’s nice to meet you. How can I assist you today? Is there anything specific you’d like to talk about or any questions you have?
+
+
+
id: msg_bdrk_01MwjVA5hwyfob3w4vdsqpnU
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]
Hello Jeremy! It’s nice to meet you. How can I assist you today? Is there anything specific you’d like to talk about or any questions you have?
+
+
+
id: msg_vrtx_01PFtHewPDe35yShy7vecp5q
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]
+
model: claude-3-5-sonnet-20240620
+
role: assistant
+
stop_reason: end_turn
+
stop_sequence: None
+
type: message
+
usage: {'input_tokens': 10, 'output_tokens': 36}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/core.html.md b/core.html.md
new file mode 100644
index 0000000..632b943
--- /dev/null
+++ b/core.html.md
@@ -0,0 +1,2609 @@
+# Claudette’s source
+
+
+
+
+This is the ‘literate’ source code for Claudette. You can view the fully
+rendered version of the notebook
+[here](https://claudette.answer.ai/core.html), or you can clone the git
+repo and run the [interactive
+notebook](https://github.com/AnswerDotAI/claudette/blob/main/00_core.ipynb)
+in Jupyter. The notebook is converted to the [Python module
+claudette/core.py](https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py)
+using [nbdev](https://nbdev.fast.ai/). The goal of this source code is
+to both create the Python module, and also to teach the reader *how* it
+is created, without assuming much existing knowledge about Claude’s API.
+
+Most of the time you’ll see that we write some source code *first*, and
+then a description or discussion of it *afterwards*.
+
+## Setup
+
+``` python
+import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+```
+
+To print every HTTP request and response in full, uncomment the above
+line. This functionality is provided by Anthropic’s SDK.
+
+
+
+> **Tip**
+>
+> If you’re reading the rendered version of this notebook, you’ll see an
+> “Exported source” collapsible widget below. If you’re reading the
+> source notebook directly, you’ll see `#| exports` at the top of the
+> cell. These show that this piece of code will be exported into the
+> python module that this notebook creates. No other code will be
+> included – any other code in this notebook is just for demonstration,
+> documentation, and testing.
+>
+> You can toggle expanding/collapsing the source code of all exported
+> sections by using the `</> Code` menu in the top right of the rendered
+> notebook page.
+
+
+
+
+Exported source
+
+``` python
+model_types = {
+ # Anthropic
+ 'claude-3-opus-20240229': 'opus',
+ 'claude-3-5-sonnet-20240620': 'sonnet',
+ 'claude-3-haiku-20240307': 'haiku',
+ # AWS
+ 'anthropic.claude-3-opus-20240229-v1:0': 'opus',
+ 'anthropic.claude-3-5-sonnet-20240620-v1:0': 'sonnet',
+ 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet',
+ 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku',
+ # Google
+ 'claude-3-opus@20240229': 'opus',
+ 'claude-3-5-sonnet@20240620': 'sonnet',
+ 'claude-3-sonnet@20240229': 'sonnet',
+ 'claude-3-haiku@20240307': 'haiku',
+}
+
+all_models = list(model_types)
+```
+
+
+
+These are the current versions and
+[prices](https://www.anthropic.com/pricing#anthropic-api) of Anthropic’s
+models at the time of writing.
+
+``` python
+model = models[1]
+```
+
+For examples, we’ll use Sonnet 3.5, since it’s awesome.
+
+## Anthropic SDK
+
+``` python
+cli = Anthropic()
+```
+
+This is what Anthropic’s SDK provides for interacting with Claude from Python. To
+use it, pass it a list of *messages*, with *content* and a *role*. The
+roles should alternate between *user* and *assistant*.
+
+
+
+> **Tip**
+>
+> After the code below you’ll see an indented section with an orange
+> vertical line on the left. This is used to show the *result* of
+> running the code above. Because the code is running in a Jupyter
+> Notebook, we don’t have to use `print` to display results, we can just
+> type the expression directly, as we do with `r` here.
+
+
+
+``` python
+m = {'role': 'user', 'content': "I'm Jeremy"}
+r = cli.messages.create(messages=[m], model=model, max_tokens=100)
+r
+```
+
+Hello Jeremy! It’s nice to meet you. How can I assist you today? Feel
+free to ask me any questions or let me know if you need help with
+anything.
+
+
+
+- id: `msg_01JGTX1KNKpS3W7KogJVmSRX`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 38}`
+
+
+
+### Formatting output
+
+That output is pretty long and hard to read, so let’s clean it up. We’ll
+start by pulling out the `Content` part of the message. To do that,
+we’re going to write our first function which will be included in the
+`claudette/core.py` module.
+
+
+
+> **Tip**
+>
+> This is the first exported public function or class we’re creating
+> (the previous export was of a variable). In the rendered version of
+> the notebook for these you’ll see 4 things, in this order (unless the
+> symbol starts with a single `_`, which indicates it’s *private*):
+>
+> - The signature (with the symbol name as a heading, with a horizontal
+> rule above)
+> - A table of parameter docs (if provided)
+> - The doc string (in italics).
+> - The source code (in a collapsible “Exported source” block)
+>
+> After that, we generally provide a bit more detail on what we’ve
+> created, and why, along with a sample usage.
+
+
+
+------------------------------------------------------------------------
+
+source
+
+### find_block
+
+> find_block (r:collections.abc.Mapping, blk_type:type=<class 'anthropic.types.text_block.TextBlock'>)
+
+*Find the first block of type `blk_type` in `r.content`.*
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| r | Mapping |  | The message to look in |
+| blk_type | type | TextBlock | The type of block to find |
+
+Exported source
+
+``` python
+def find_block(r:abc.Mapping, # The message to look in
+ blk_type:type=TextBlock # The type of block to find
+ ):
+ "Find the first block of type `blk_type` in `r.content`."
+ return first(o for o in r.content if isinstance(o,blk_type))
+```
+
+
+
+This makes it easier to grab the needed parts of Claude’s responses,
+which can include multiple pieces of content. By default, we look for
+the first text block. That will generally have the content we want to
+display.
+
+``` python
+find_block(r)
+```
+
+ TextBlock(text="Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", type='text')
+
+------------------------------------------------------------------------
+
+source
+
+### contents
+
+> contents (r)
+
+*Helper to get the contents from Claude response `r`.*
+
+
+Exported source
+
+``` python
+def contents(r):
+ "Helper to get the contents from Claude response `r`."
+ blk = find_block(r)
+ if not blk and r.content: blk = r.content[0]
+ return blk.text.strip() if hasattr(blk,'text') else str(blk)
+```
+
+
+
+For display purposes, we often just want to show the text itself.
+
+``` python
+contents(r)
+```
+
+ "Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything."
+
+
+Exported source
+
+``` python
+@patch
+def _repr_markdown_(self:(Message)):
+ det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
+    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
+ return f"""{cts}
+
+
+
+- {det}
+
+"""
+```
+
+
+
+Jupyter looks for a `_repr_markdown_` method in displayed objects; we
+add this in order to display just the content text, and collapse full
+details into a hideable section. Note that `patch` is from
+[fastcore](https://fastcore.fast.ai/), and is used to add (or replace)
+functionality in an existing class. We pass the class(es) that we want
+to patch as type annotations to `self`. In this case, `_repr_markdown_`
+is being added to Anthropic’s `Message` class, so when we display the
+message now we just see the contents, and the details are hidden away in
+a collapsible details block.
+
+``` python
+r
+```
+
+Hello Jeremy! It’s nice to meet you. How can I assist you today? Feel
+free to ask me any questions or let me know if you need help with
+anything.
+
+
+
+- id: `msg_01JGTX1KNKpS3W7KogJVmSRX`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 38}`
+
+
+
+One key part of the response is the
+[`usage`](https://claudette.answer.ai/core.html#usage) key, which tells
+us how many tokens we used by returning a `Usage` object.
+
+We’ll add some helpers to make things a bit cleaner for creating and
+formatting these objects.
+
+``` python
+r.usage
+```
+
+ In: 10; Out: 38; Cache create: 0; Cache read: 0; Total: 48
+
+------------------------------------------------------------------------
+
+source
+
+### usage
+
+> usage (inp=0, out=0, cache_create=0, cache_read=0)
+
+*Slightly more concise version of `Usage`.*
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| inp | int | 0 | input tokens |
+| out | int | 0 | Output tokens |
+| cache_create | int | 0 | Cache creation tokens |
+| cache_read | int | 0 | Cache read tokens |
+
+
+Exported source
+
+``` python
+def usage(inp=0, # input tokens
+ out=0, # Output tokens
+ cache_create=0, # Cache creation tokens
+ cache_read=0 # Cache read tokens
+ ):
+ "Slightly more concise version of `Usage`."
+ return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create, cache_read_input_tokens=cache_read)
+```
+
+
+
+The constructor provided by Anthropic is rather verbose, so we clean it
+up a bit, using a lowercase version of the name.
+
+``` python
+usage(5)
+```
+
+ In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5
+
+------------------------------------------------------------------------
+
+source
+
+### Usage.total
+
+> Usage.total ()
+
+
+Exported source
+
+``` python
+@patch(as_prop=True)
+def total(self:Usage): return self.input_tokens+self.output_tokens+getattr(self, "cache_creation_input_tokens",0)+getattr(self, "cache_read_input_tokens",0)
+```
+
+
+
+Adding a `total` property to `Usage` makes it easier to see how many
+tokens we’ve used up altogether.
+
+``` python
+usage(5,1).total
+```
+
+ 6
+
+------------------------------------------------------------------------
+
+source
+
+### Usage.\_\_repr\_\_
+
+> Usage.__repr__ ()
+
+*Return repr(self).*
+
+
+Exported source
+
+``` python
+@patch
+def __repr__(self:Usage): return f'In: {self.input_tokens}; Out: {self.output_tokens}; Cache create: {getattr(self, "cache_creation_input_tokens",0)}; Cache read: {getattr(self, "cache_read_input_tokens",0)}; Total: {self.total}'
+```
+
+
+
+In python, patching `__repr__` lets us change how an object is
+displayed. (More generally, methods starting and ending in `__` in
+Python are called `dunder` methods, and have some `magic` behavior –
+such as, in this case, changing how an object is displayed.)
+
+``` python
+usage(5)
+```
+
+ In: 5; Out: 0; Cache create: 0; Cache read: 0; Total: 5
+
+------------------------------------------------------------------------
+
+source
+
+### Usage.\_\_add\_\_
+
+> Usage.__add__ (b)
+
+*Add together each of `input_tokens` and `output_tokens`*
+
+
+Exported source
+
+``` python
+@patch
+def __add__(self:Usage, b):
+ "Add together each of `input_tokens` and `output_tokens`"
+ return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens, getattr(self,'cache_creation_input_tokens',0)+getattr(b,'cache_creation_input_tokens',0), getattr(self,'cache_read_input_tokens',0)+getattr(b,'cache_read_input_tokens',0))
+```
+
+
+
+And, patching `__add__` lets `+` work on a `Usage` object.
+
+``` python
+r.usage+r.usage
+```
+
+ In: 20; Out: 76; Cache create: 0; Cache read: 0; Total: 96
+
+### Creating messages
+
+Creating correctly formatted `dict`s from scratch every time isn’t very
+handy, so next up we’ll add helpers for this.
+
+``` python
+def mk_msg(content, role='user', **kw):
+ return dict(role=role, content=content, **kw)
+```
+
+We make things a bit more convenient by writing a function to create a
+message for us.
+
+
+
+> **Note**
+>
+> You may have noticed that we didn’t export the
+> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) function
+> (i.e. there’s no “Exported source” block around it). That’s because
+> we’ll need more functionality in our final version than this version
+> has – so we’ll be defining a more complete version later. Rather than
+> refactoring/editing in notebooks, often it’s helpful to simply
+> gradually build up complexity by re-defining a symbol.
+
+
+
+``` python
+prompt = "I'm Jeremy"
+m = mk_msg(prompt)
+m
+```
+
+ {'role': 'user', 'content': "I'm Jeremy"}
+
+``` python
+r = cli.messages.create(messages=[m], model=model, max_tokens=100)
+r
+```
+
+Hello Jeremy! It’s nice to meet you. How are you doing today? Is there
+anything I can help you with or any questions you’d like to ask?
+
+
+
+- id: `msg_01FPMrkmWuYfacN8xDnBXKUY`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or any questions you'd like to ask?", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 36}`
+
+
+
+------------------------------------------------------------------------
+
+source
+
+### mk_msgs
+
+> mk_msgs (msgs:list, **kw)
+
+*Helper to set ‘assistant’ role on alternate messages.*
+
+
+Exported source
+
+``` python
+def mk_msgs(msgs:list, **kw):
+ "Helper to set 'assistant' role on alternate messages."
+ if isinstance(msgs,str): msgs=[msgs]
+ return [mk_msg(o, ('user','assistant')[i%2], **kw) for i,o in enumerate(msgs)]
+```
+
+
+
+LLMs, including Claude, don’t actually have state, but instead dialogs
+are created by passing back all previous prompts and responses every
+time. With Claude, they always alternate *user* and *assistant*.
+Therefore we create a function to make it easier to build up these
+dialog lists.
+
+But to do so, we need to update
+[`mk_msg`](https://claudette.answer.ai/core.html#mk_msg) so that we
+can not only pass a `str` as `content`, but can also pass a `dict` or an
+object with a `content` attr, since these are both types of message that
+Claude can create. To do so, we check for a `content` key or attr, and
+use it if found.
+
+
+Exported source
+
+``` python
+def _str_if_needed(o):
+ if isinstance(o, (list,tuple,abc.Mapping,L)) or hasattr(o, '__pydantic_serializer__'): return o
+ return str(o)
+```
+
+
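+`_str_if_needed` leaves structured content (lists, tuples, mappings, fastcore
+`L`s, and pydantic objects) alone and stringifies everything else, e.g.:
+
+``` python
+_str_if_needed(42), _str_if_needed({'content': 'hi'})
+# ('42', {'content': 'hi'})
+```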
+
+``` python
+def mk_msg(content, role='user', **kw):
+ "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message"
+ if hasattr(content, 'content'): content,role = content.content,content.role
+ if isinstance(content, abc.Mapping): content=content['content']
+ return dict(role=role, content=_str_if_needed(content), **kw)
+```
+
+``` python
+msgs = mk_msgs([prompt, r, 'I forgot my name. Can you remind me please?'])
+msgs
+```
+
+ [{'role': 'user', 'content': "I'm Jeremy"},
+ {'role': 'assistant',
+ 'content': [TextBlock(text="Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or any questions you'd like to ask?", type='text')]},
+ {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
+
+Now, if we pass this list of messages to Claude, the model treats it as
+a conversation to respond to.
+
+``` python
+cli.messages.create(messages=msgs, model=model, max_tokens=200)
+```
+
+Of course! You just told me that your name is Jeremy.
+
+
+
+- id: `msg_014ntKw7Mvfws4BHjjsyYLPn`
+- content:
+ `[{'text': 'Of course! You just told me that your name is Jeremy.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 60, 'output_tokens': 16}`
+
+
+
+## Client
+
+------------------------------------------------------------------------
+
+source
+
+### Client
+
+> Client (model, cli=None, log=False)
+
+*Basic Anthropic messages client.*
+
+
+Exported source
+
+``` python
+class Client:
+ def __init__(self, model, cli=None, log=False):
+ "Basic Anthropic messages client."
+ self.model,self.use = model,usage()
+ self.log = [] if log else None
+ self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
+```
+
+
+
+We’ll create a simple
+[`Client`](https://claudette.answer.ai/core.html#client) for `Anthropic`
+which tracks usage and stores the model to use. We don’t add any methods
+right away – instead we’ll use `patch` for that so we can add and
+document them incrementally.
+
+``` python
+c = Client(model)
+c.use
+```
+
+ In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
+
+
+Exported source
+
+``` python
+@patch
+def _r(self:Client, r:Message, prefill=''):
+ "Store the result of the message and accrue total usage."
+ if prefill:
+ blk = find_block(r)
+ blk.text = prefill + (blk.text or '')
+ self.result = r
+ self.use += r.usage
+ self.stop_reason = r.stop_reason
+ self.stop_sequence = r.stop_sequence
+ return r
+```
+
+
+
+We use a `_` prefix on private methods, but we document them here in the
+interests of literate source code.
+
+`_r` will be used each time we get a new result, to track usage and also
+to keep the result available for later.
+
+``` python
+c._r(r)
+c.use
+```
+
+ In: 10; Out: 36; Cache create: 0; Cache read: 0; Total: 46
+
+Whereas OpenAI’s models use a `stream` parameter for streaming,
+Anthropic’s use a separate method. We implement Anthropic’s approach in
+a private method, and then use a `stream` parameter in `__call__` for
+consistency:
+
+
+Exported source
+
+``` python
+@patch
+def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
+ self._r(final, prefill)
+ if self.log is not None: self.log.append({
+        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
+ "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
+ })
+ return self.result
+```
+
+
+
+Exported source
+
+``` python
+@patch
+def _stream(self:Client, msgs:list, prefill='', **kwargs):
+ with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs), **kwargs) as s:
+ if prefill: yield(prefill)
+ yield from s.text_stream
+ self._log(s.get_final_message(), prefill, msgs, **kwargs)
+```
+
+
+
+Claude supports adding an extra `assistant` message at the end, which
+contains the *prefill* – i.e. the text we want Claude to assume the
+response starts with. However Claude doesn’t actually repeat that in the
+response, so for convenience we add it.
+
+
+Exported source
+
+``` python
+@patch
+def _precall(self:Client, msgs, prefill, stop, kwargs):
+ pref = [prefill.strip()] if prefill else []
+ if not isinstance(msgs,list): msgs = [msgs]
+ if stop is not None:
+ if not isinstance(stop, (list)): stop = [stop]
+ kwargs["stop_sequences"] = stop
+ msgs = mk_msgs(msgs+pref)
+ return msgs
+```
+
+
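+As a quick illustration (not part of the exported module), here is roughly
+what `_precall` produces when a prefill is supplied:
+
+``` python
+# Illustrative only: the prefill becomes a trailing 'assistant' message.
+c._precall("Concisely, what is the meaning of life?",
+           prefill="According to Douglas Adams,", stop=None, kwargs={})
+# [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
+#  {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
+```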
+
+``` python
+@patch
+@delegates(messages.Messages.create)
+def __call__(self:Client,
+ msgs:list, # List of messages in the dialog
+ sp='', # The system prompt
+ temp=0, # Temperature
+ maxtok=4096, # Maximum tokens
+ prefill='', # Optional prefill to pass to Claude as start of its response
+ stream:bool=False, # Stream response?
+ stop=None, # Stop sequence
+ **kwargs):
+ "Make a call to Claude."
+ msgs = self._precall(msgs, prefill, stop, kwargs)
+ if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ res = self.c.messages.create(
+ model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
+```
+
+Defining `__call__` lets us use an object like a function (i.e. it’s
+*callable*). We use it as a small wrapper over `messages.create`.
+However we’re not exporting this version just yet – we have some
+additions we’ll make in a moment…
+
+``` python
+c = Client(model, log=True)
+c.use
+```
+
+ In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0
+
+``` python
+c.model = models[-1]
+```
+
+``` python
+c('Hi')
+```
+
+Hello! How can I assist you today?
+
+
+
+- id: `msg_01GQLdNJ4BT92shxxq9JAiUE`
+- content:
+ `[{'text': 'Hello! How can I assist you today?', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 8, 'output_tokens': 12, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+c.use
+```
+
+ In: 8; Out: 12; Cache create: 0; Cache read: 0; Total: 20
+
+Let’s try out *prefill*:
+
+``` python
+q = "Concisely, what is the meaning of life?"
+pref = 'According to Douglas Adams,'
+```
+
+``` python
+c(q, prefill=pref)
+```
+
+According to Douglas Adams, “The answer to the ultimate question of
+life, the universe, and everything is 42.”
+
+
+
+- id: `msg_019ULPf3arEfv9yqdEm8oZat`
+- content:
+ `[{'text': 'According to Douglas Adams, "The answer to the ultimate question of life, the universe, and everything is 42."', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 24, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+We can pass `stream=True` to stream the response back incrementally:
+
+``` python
+for o in c('Hi', stream=True): print(o, end='')
+```
+
+ Hello! How can I assist you today?
+
+``` python
+c.use
+```
+
+ In: 40; Out: 47; Cache create: 0; Cache read: 0; Total: 87
+
+``` python
+for o in c(q, prefill=pref, stream=True): print(o, end='')
+```
+
+ According to Douglas Adams, "The answer to the ultimate question of life, the universe, and everything is 42."
+
+``` python
+c.use
+```
+
+ In: 64; Out: 70; Cache create: 0; Cache read: 0; Total: 134
+
+Pass a stop sequence if you want Claude to stop generating text when it
+encounters it.
+
+``` python
+c("Count from 1 to 10", stop="5")
+```
+
+1, 2, 3, 4,
+
+
+
+- id: `msg_013PPywqnu3H1zL7eG6M9BsF`
+- content: `[{'text': '1, 2, 3, 4, ', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `stop_sequence`
+- stop_sequence: `5`
+- type: `message`
+- usage:
+ `{'input_tokens': 15, 'output_tokens': 14, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+This also works with streaming, and you can pass more than one stop
+sequence:
+
+``` python
+for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='')
+print(c.stop_reason, c.stop_sequence)
+```
+
+ 1, stop_sequence 2
+
+You can check the logs:
+
+``` python
+c.log[-1]
+```
+
+ {'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}],
+ 'prefill': '',
+ 'max_tokens': 4096,
+ 'system': '',
+ 'temperature': 0,
+ 'stop_sequences': ['2', 'yellow'],
+ 'maxtok': None,
+ 'sp': None,
+ 'temp': None,
+ 'stream': None,
+ 'stop': None,
+ 'result': Message(id='msg_01K76r1E9Wh8fM8CkmotxDQs', content=[TextBlock(text='1, ', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 5; Cache create: 0; Cache read: 0; Total: 20),
+ 'use': In: 94; Out: 89; Cache create: 0; Cache read: 0; Total: 183,
+ 'stop_reason': 'stop_sequence',
+ 'stop_sequence': '2'}
+
+## Tool use
+
+Let’s now add tool use (aka *function calling*).
+
+------------------------------------------------------------------------
+
+source
+
+### mk_tool_choice
+
+> mk_tool_choice (choose:Union[str,bool,NoneType])
+
+*Create a `tool_choice` dict that’s ‘auto’ if `choose` is `None`, ‘any’
+if it is True, or ‘tool’ otherwise*
+
+
+Exported source
+
+``` python
+def mk_tool_choice(choose:Union[str,bool,None])->dict:
+ "Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise"
+ return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'}
+```
+
+
+
+``` python
+print(mk_tool_choice('sums'))
+print(mk_tool_choice(True))
+print(mk_tool_choice(None))
+```
+
+ {'type': 'tool', 'name': 'sums'}
+ {'type': 'any'}
+ {'type': 'auto'}
+
+Claude can be forced to use a particular tool, or select from a specific
+list of tools, or decide for itself when to use a tool. If you want to
+force a tool (or force choosing from a list), include a `tool_choice`
+param with a dict from
+[`mk_tool_choice`](https://claudette.answer.ai/core.html#mk_tool_choice).
+
+For testing, we need a function that Claude can call; we’ll write a
+simple function that adds numbers together, and will tell us when it’s
+being called:
+
+``` python
+def sums(
+ a:int, # First thing to sum
+ b:int=1 # Second thing to sum
+) -> int: # The sum of the inputs
+ "Adds a + b."
+ print(f"Finding the sum of {a} and {b}")
+ return a + b
+```
+
+``` python
+a,b = 604542,6458932
+pr = f"What is {a}+{b}?"
+sp = "You are a summing expert."
+```
+
+Claudette can autogenerate a schema thanks to the `toolslm` library.
+We’ll force the use of the tool using the function we created earlier.
+
+``` python
+tools=[get_schema(sums)]
+choice = mk_tool_choice('sums')
+```
+
+We’ll start a dialog with Claude now. We’ll store the messages of our
+dialog in `msgs`. The first message will be our prompt `pr`, and we’ll
+pass our `tools` schema.
+
+``` python
+msgs = mk_msgs(pr)
+r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
+r
+```
+
+ToolUseBlock(id=‘toolu_01KDmzXp4XfPBT5e4pRe551A’, input={‘a’: 604542,
+‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
+
+
+
+- id: `msg_01VeFngZj3FcjgoMLepXAWqS`
+- content:
+ `[{'id': 'toolu_01KDmzXp4XfPBT5e4pRe551A', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `tool_use`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 493, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+When Claude decides that it should use a tool, it passes back a
+`ToolUseBlock` with the name of the tool to call, and the params to use.
+
+We don’t want to allow it to call just any possible function (that would
+be a security disaster!) so we create a *namespace* – that is, a
+dictionary of allowable function names to call.
+
+
+Exported source
+
+``` python
+def _mk_ns(*funcs:list[callable]) -> dict[str,callable]:
+ "Create a `dict` of name to function in `funcs`, to use as a namespace"
+ return {f.__name__:f for f in funcs}
+```
+
+
+
+``` python
+ns = _mk_ns(sums)
+ns
+```
+
    {'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
+
+------------------------------------------------------------------------
+
+source
+
+### mk_funcres
+
+> mk_funcres (tuid, res)
+
+*Given tool use id and the tool result, create a tool_result response.*
+
+
+Exported source
+
+``` python
+def call_func(fc:ToolUseBlock, # Tool use block from Claude's message
+ ns:Optional[abc.Mapping]=None, # Namespace to search for tools, defaults to `globals()`
+ obj:Optional=None # Object to search for tools
+ ):
+ "Call the function in the tool response `tr`, using namespace `ns`."
+ if ns is None: ns=globals()
+ if not isinstance(ns, abc.Mapping): ns = _mk_ns(*ns)
+ func = getattr(obj, fc.name, None)
+ if not func: func = ns[fc.name]
+ return func(**fc.input)
+
+def mk_funcres(tuid, res):
+ "Given tool use id and the tool result, create a tool_result response."
+ return dict(type="tool_result", tool_use_id=tuid, content=str(res))
+```
+
+
+
+------------------------------------------------------------------------
+
+source
+
+### call_func
+
+> call_func (fc:anthropic.types.tool_use_block.ToolUseBlock,
+> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)
+
+*Call the function in the tool response `tr`, using namespace `ns`.*
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| fc | ToolUseBlock |  | Tool use block from Claude’s message |
+| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |
+| obj | Optional | None | Object to search for tools |
+
+We can now use the function requested by Claude. We look it up in `ns`,
+and pass in the provided parameters.
+
+``` python
+fc = find_block(r, ToolUseBlock)
+res = mk_funcres(fc.id, call_func(fc, ns=ns))
+res
+```
+
+ Finding the sum of 604542 and 6458932
+
+ {'type': 'tool_result',
+ 'tool_use_id': 'toolu_01KDmzXp4XfPBT5e4pRe551A',
+ 'content': '7063474'}
+
+------------------------------------------------------------------------
+
+source
+
+### mk_toolres
+
+> mk_toolres (r:collections.abc.Mapping,
+> ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)
+
+*Create a `tool_result` message from response `r`.*
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| r | Mapping |  | Tool use request response from Claude |
+| ns | Optional | None | Namespace to search for tools |
+| obj | Optional | None | Class to search for tools |
+
+
+Exported source
+
+``` python
+def mk_toolres(
+ r:abc.Mapping, # Tool use request response from Claude
+ ns:Optional[abc.Mapping]=None, # Namespace to search for tools
+ obj:Optional=None # Class to search for tools
+ ):
+ "Create a `tool_result` message from response `r`."
+ cts = getattr(r, 'content', [])
+ res = [mk_msg(r)]
+ tcs = [mk_funcres(o.id, call_func(o, ns=ns, obj=obj)) for o in cts if isinstance(o,ToolUseBlock)]
+ if tcs: res.append(mk_msg(tcs))
+ return res
+```
+
+
+
+In order to tell Claude the result of the tool call, we pass back the
+tool use assistant request and the `tool_result` response.
+
+``` python
+tr = mk_toolres(r, ns=ns)
+tr
+```
+
+ Finding the sum of 604542 and 6458932
+
+ [{'role': 'assistant',
+ 'content': [ToolUseBlock(id='toolu_01KDmzXp4XfPBT5e4pRe551A', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]},
+ {'role': 'user',
+ 'content': [{'type': 'tool_result',
+ 'tool_use_id': 'toolu_01KDmzXp4XfPBT5e4pRe551A',
+ 'content': '7063474'}]}]
+
+We add this to our dialog, and now Claude has all the information it
+needs to answer our question.
+
+``` python
+msgs += tr
+contents(c(msgs, sp=sp, tools=tools))
+```
+
+ 'The sum of 604542 and 6458932 is 7063474.'
+
+This works with methods as well – in this case, pass the object itself
+as `obj`:
+
+``` python
+class Dummy:
+ def sums(
+ self,
+ a:int, # First thing to sum
+ b:int=1 # Second thing to sum
+ ) -> int: # The sum of the inputs
+ "Adds a + b."
+ print(f"Finding the sum of {a} and {b}")
+ return a + b
+```
+
+``` python
+tools = [get_schema(Dummy.sums)]
+o = Dummy()
+r = c(pr, sp=sp, tools=tools, tool_choice=choice)
+tr = mk_toolres(r, obj=o)
+msgs += tr
+contents(c(msgs, sp=sp, tools=tools))
+```
+
+ Finding the sum of 604542 and 6458932
+
+ 'The sum of 604,542 and 6,458,932 is 7,063,474.'
+
+------------------------------------------------------------------------
+
+source
+
+### Client.\_\_call\_\_
+
+> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, prefill='',
+> stream:bool=False, stop=None, tools:Optional[list]=None,
+> tool_choice:Optional[dict]=None,
+> metadata:MetadataParam|NotGiven=NOT_GIVEN,
+> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio
+> n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN,
+> temperature:float|NotGiven=NOT_GIVEN,
+> top_k:int|NotGiven=NOT_GIVEN,
+> top_p:float|NotGiven=NOT_GIVEN,
+> extra_headers:Headers|None=None,
+> extra_query:Query|None=None, extra_body:Body|None=None,
+> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
+
+*Make a call to Claude.*
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| msgs | list |  | List of messages in the dialog |
+| sp | str |  | The system prompt |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| prefill | str |  | Optional prefill to pass to Claude as start of its response |
+| stream | bool | False | Stream response? |
+| stop | NoneType | None | Stop sequence |
+| tools | Optional | None | List of tools to make available to Claude |
+| tool_choice | Optional | None | Optionally force use of some tool |
+| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
+| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
+| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
+| temperature | float \| NotGiven | NOT_GIVEN |  |
+| top_k | int \| NotGiven | NOT_GIVEN |  |
+| top_p | float \| NotGiven | NOT_GIVEN |  |
+| extra_headers | Headers \| None | None |  |
+| extra_query | Query \| None | None |  |
+| extra_body | Body \| None | None |  |
+| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
+
+
+
+
+Exported source
+
+``` python
+@patch
+@delegates(messages.Messages.create)
+def __call__(self:Client,
+ msgs:list, # List of messages in the dialog
+ sp='', # The system prompt
+ temp=0, # Temperature
+ maxtok=4096, # Maximum tokens
+ prefill='', # Optional prefill to pass to Claude as start of its response
+ stream:bool=False, # Stream response?
+ stop=None, # Stop sequence
+ tools:Optional[list]=None, # List of tools to make available to Claude
+ tool_choice:Optional[dict]=None, # Optionally force use of some tool
+ **kwargs):
+ "Make a call to Claude."
+ if tools: kwargs['tools'] = [get_schema(o) for o in listify(tools)]
+ if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
+ msgs = self._precall(msgs, prefill, stop, kwargs)
+ if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
+ return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs)
+```
+
+
+
+``` python
+r = c(pr, sp=sp, tools=sums, tool_choice=sums)
+r
+```
+
+ToolUseBlock(id=‘toolu_01BivK3vyqdKSM9pAWQqWfp7’, input={‘a’: 604542,
+‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
+
+
+
+- id: `msg_017toeJm3hbSP4iLPYgi6YDs`
+- content:
+ `[{'id': 'toolu_01BivK3vyqdKSM9pAWQqWfp7', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `tool_use`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 489, 'output_tokens': 57, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+tr = mk_toolres(r)
+```
+
+ Finding the sum of 604542 and 6458932
+
+------------------------------------------------------------------------
+
+source
+
+### Client.structured
+
+> Client.structured (msgs:list, tools:Optional[list]=None,
+> obj:Optional=None,
+> ns:Optional[collections.abc.Mapping]=None, sp='',
+> temp=0, maxtok=4096, prefill='', stream:bool=False,
+> stop=None, tool_choice:Optional[dict]=None,
+> metadata:MetadataParam|NotGiven=NOT_GIVEN,
+> stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Un
+> ion[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN,
+> temperature:float|NotGiven=NOT_GIVEN,
+> top_k:int|NotGiven=NOT_GIVEN,
+> top_p:float|NotGiven=NOT_GIVEN,
+> extra_headers:Headers|None=None,
+> extra_query:Query|None=None,
+> extra_body:Body|None=None,
+> timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
+
+*Return the value of all tool calls (generally used for structured
+outputs)*
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| msgs | list |  | List of messages in the dialog |
+| tools | Optional | None | List of tools to make available to Claude |
+| obj | Optional | None | Class to search for tools |
+| ns | Optional | None | Namespace to search for tools |
+| sp | str |  | The system prompt |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| prefill | str |  | Optional prefill to pass to Claude as start of its response |
+| stream | bool | False | Stream response? |
+| stop | NoneType | None | Stop sequence |
+| tool_choice | Optional | None | Optionally force use of some tool |
+| metadata | MetadataParam \| NotGiven | NOT_GIVEN |  |
+| stop_sequences | List[str] \| NotGiven | NOT_GIVEN |  |
+| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN |  |
+| temperature | float \| NotGiven | NOT_GIVEN |  |
+| top_k | int \| NotGiven | NOT_GIVEN |  |
+| top_p | float \| NotGiven | NOT_GIVEN |  |
+| extra_headers | Headers \| None | None |  |
+| extra_query | Query \| None | None |  |
+| extra_body | Body \| None | None |  |
+| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN |  |
+
+
+
+
+Exported source
+
+``` python
+@patch
+@delegates(Client.__call__)
+def structured(self:Client,
+ msgs:list, # List of messages in the dialog
+ tools:Optional[list]=None, # List of tools to make available to Claude
+ obj:Optional=None, # Class to search for tools
+ ns:Optional[abc.Mapping]=None, # Namespace to search for tools
+ **kwargs):
+ "Return the value of all tool calls (generally used for structured outputs)"
+ tools = listify(tools)
+ res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
+ if ns is None: ns=tools
+ cts = getattr(res, 'content', [])
+ tcs = [call_func(o, ns=ns, obj=obj) for o in cts if isinstance(o,ToolUseBlock)]
+ return tcs
+```
+
+
+
+Anthropic’s API does not support response formats directly, so instead
+we provide a `structured` method to use tool calling to achieve the same
+result. The result of the tool is not passed back to Claude in this
+case, but instead is returned directly to the user.
+
+``` python
+c.structured(pr, tools=[sums])
+```
+
+ Finding the sum of 604542 and 6458932
+
+ [7063474]
+
+## Chat
+
+Rather than manually adding the responses to a dialog, we’ll create a
+simple [`Chat`](https://claudette.answer.ai/core.html#chat) class to do
+that for us, each time we make a request. We’ll also store the system
+prompt and tools here, to avoid passing them every time.
+
+------------------------------------------------------------------------
+
+source
+
+### Chat
+
+> Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None,
+> sp='', tools:Optional[list]=None, temp=0,
+> cont_pr:Optional[str]=None)
+
+*Anthropic chat client.*
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| model | Optional | None | Model to use (leave empty if passing `cli`) |
+| cli | Optional | None | Client to use (leave empty if passing `model`) |
+| sp | str |  | Optional system prompt |
+| tools | Optional | None | List of tools to make available to Claude |
+| temp | int | 0 | Temperature |
+| cont_pr | Optional | None | User prompt to continue an assistant response: assistant,[user:"…"],assistant |
+
+
+
+Exported source
+
+``` python
+class Chat:
+ def __init__(self,
+ model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
+ cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
+ sp='', # Optional system prompt
+ tools:Optional[list]=None, # List of tools to make available to Claude
+ temp=0, # Temperature
+ cont_pr:Optional[str]=None): # User prompt to continue an assistant response: assistant,[user:"..."],assistant
+ "Anthropic chat client."
+ assert model or cli
+ assert cont_pr != "", "cont_pr may not be an empty string"
+ self.c = (cli or Client(model))
+ self.h,self.sp,self.tools,self.cont_pr,self.temp = [],sp,tools,cont_pr,temp
+
+ @property
+ def use(self): return self.c.use
+```
+
+
+
+The class stores the
+[`Client`](https://claudette.answer.ai/core.html#client) that will
+provide the responses in `c`, and a history of messages in `h`.
+
+``` python
+sp = "Never mention what tools you use."
+chat = Chat(model, sp=sp)
+chat.c.use, chat.h
+```
+
+ (In: 0; Out: 0; Cache create: 0; Cache read: 0; Total: 0, [])
+
+We’ve shown the token usage but what we really care about is pricing. Let’s
+extract the latest
+[pricing](https://www.anthropic.com/pricing#anthropic-api) from
+Anthropic into a `pricing` dict.
+
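+The `pricing` dict itself isn’t shown in this excerpt. Based on how
+`Usage.cost` (defined below) indexes it, it maps each model type to a tuple
+of per-million-token costs: (input, output, cache write, cache read). A
+sketch, using Anthropic’s 2024 list prices as illustrative values (check the
+pricing page for current numbers):
+
+``` python
+# Assumed shape; dollar amounts are illustrative 2024 list prices per million tokens.
+pricing = {
+    'opus':   (15,   75,   18.75, 1.5),
+    'sonnet': (3,    15,   3.75,  0.3),
+    'haiku':  (0.25, 1.25, 0.3,   0.03),
+}
+```
+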
+We’ll patch `Usage` to enable it to compute the cost given the pricing.
+
+------------------------------------------------------------------------
+
+source
+
+### Usage.cost
+
+> Usage.cost (costs:tuple)
+
+
+Exported source
+
+``` python
+@patch
+def cost(self:Usage, costs:tuple) -> float:
+ cache_w, cache_r = getattr(self, "cache_creation_input_tokens",0), getattr(self, "cache_read_input_tokens",0)
+ return sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6
+```
+
+
+
+``` python
+chat.c.use.cost(pricing[model_types[chat.c.model]])
+```
+
+ 0.0
+
+This is clunky. Let’s add `cost` as a property for the
+[`Chat`](https://claudette.answer.ai/core.html#chat) class. It will pass
+in the appropriate prices for the current model to the usage cost
+calculator.
+
+------------------------------------------------------------------------
+
+source
+
+### Chat.cost
+
+> Chat.cost ()
+
+
+Exported source
+
+``` python
+@patch(as_prop=True)
+def cost(self: Chat) -> float: return self.c.use.cost(pricing[model_types[self.c.model]])
+```
+
+
+
+``` python
+chat.cost
+```
+
+ 0.0
+
+------------------------------------------------------------------------
+
+source
+
+### Chat.\_\_call\_\_
+
+> Chat.__call__ (pr=None, temp=None, maxtok=4096, stream=False, prefill='',
+> tool_choice:Optional[dict]=None, **kw)
+
+*Call self as a function.*
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| pr | NoneType | None | Prompt / message |
+| temp | NoneType | None | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| stream | bool | False | Stream response? |
+| prefill | str |  | Optional prefill to pass to Claude as start of its response |
+| tool_choice | Optional | None | Optionally force use of some tool |
+| kw |  |  |  |
+
+
+Exported source
+
+``` python
+@patch
+def _stream(self:Chat, res):
+ yield from res
+ self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
+```
+
+
+
+Exported source
+
+``` python
+@patch
+def _post_pr(self:Chat, pr, prev_role):
+ if pr is None and prev_role == 'assistant':
+ if self.cont_pr is None:
+ raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.")
+ pr = self.cont_pr # No user prompt, keep the chain
+ if pr: self.h.append(mk_msg(pr))
+```
+
+
+
+Exported source
+
+``` python
+@patch
+def _append_pr(self:Chat,
+ pr=None, # Prompt / message
+ ):
+ prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user'
+ if pr and prev_role == 'user': self() # already user request pending
+ self._post_pr(pr, prev_role)
+```
+
+
+
+Exported source
+
+``` python
+@patch
+def __call__(self:Chat,
+ pr=None, # Prompt / message
+ temp=None, # Temperature
+ maxtok=4096, # Maximum tokens
+ stream=False, # Stream response?
+ prefill='', # Optional prefill to pass to Claude as start of its response
+ tool_choice:Optional[dict]=None, # Optionally force use of some tool
+ **kw):
+ if temp is None: temp=self.temp
+ self._append_pr(pr)
+ res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok,
+ tools=self.tools, tool_choice=tool_choice,**kw)
+ if stream: return self._stream(res)
+ self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
+ return res
+```
+
+
+
+The `__call__` method just passes the request along to the
+[`Client`](https://claudette.answer.ai/core.html#client), but rather
+than just passing in this one prompt, it appends it to the history and
+passes it all along. As a result, we now have state!
+
+``` python
+chat = Chat(model, sp=sp)
+```
+
+``` python
+chat("I'm Jeremy")
+chat("What's my name?")
+```
+
+Your name is Jeremy, as you mentioned in your previous message.
+
+
+
+- id: `msg_01Lxa5M7S1cjCBTQtaZfWWCQ`
+- content:
+ `[{'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 64, 'output_tokens': 16, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+``` python
+chat.use, chat.cost
+```
+
+ (In: 81; Out: 55; Cache create: 0; Cache read: 0; Total: 136, 0.001068)
+
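+That figure follows directly from the usage shown: assuming Sonnet’s list
+prices at the time of writing ($3 per million input tokens, $15 per million
+output tokens), the cost is 81 × 3/1e6 + 55 × 15/1e6 = 0.000243 + 0.000825 =
+$0.001068.
+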
+Let’s try out prefill too:
+
+``` python
+q = "Concisely, what is the meaning of life?"
+pref = 'According to Douglas Adams,'
+```
+
+``` python
+chat(q, prefill=pref)
+```
+
+According to Douglas Adams, the meaning of life is 42. More seriously,
+there’s no universally agreed upon answer. Common philosophical
+perspectives include:
+
+1. Finding personal fulfillment
+2. Serving others
+3. Pursuing happiness
+4. Creating meaning through our choices
+5. Experiencing and appreciating existence
+
+Ultimately, many believe each individual must determine their own life’s
+meaning.
+
+
+
+- id: `msg_012Z1zkix1fb1B7fHWZQpMoF`
+- content:
+ `[{'text': "According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:\n\n1. Finding personal fulfillment\n2. Serving others\n3. Pursuing happiness\n4. Creating meaning through our choices\n5. Experiencing and appreciating existence\n\nUltimately, many believe each individual must determine their own life's meaning.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 100, 'output_tokens': 82, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+By default messages must be in user, assistant, user format. If this
+isn’t followed (e.g. calling `chat()` without a user message) it will
+error out:
+
+``` python
+try: chat()
+except ValueError as e: print("Error:", e)
+```
+
+ Error: Prompt must be given after assistant completion, or use `self.cont_pr`.
+
+Setting `cont_pr` allows a “default prompt” to be specified when a
+prompt isn’t specified. Usually used to prompt the model to continue.
+
+``` python
+chat.cont_pr = "keep going..."
+chat()
+```
+
+Continuing on the topic of life’s meaning:
+
+6. Achieving self-actualization
+7. Leaving a positive legacy
+8. Connecting with others and forming relationships
+9. Exploring and understanding the universe
+10. Evolving as a species
+11. Overcoming challenges and growing
+12. Finding balance between various aspects of life
+13. Expressing creativity and individuality
+14. Pursuing knowledge and wisdom
+15. Living in harmony with nature
+
+These perspectives often overlap and can be combined in various ways.
+Some argue that the absence of an inherent meaning allows for the
+freedom to create our own purpose.
+
+
+
+- id: `msg_012qPDMHoEpaT9RDDu4UyYKq`
+- content:
+ `[{'text': "Continuing on the topic of life's meaning:\n\n6. Achieving self-actualization\n7. Leaving a positive legacy\n8. Connecting with others and forming relationships\n9. Exploring and understanding the universe\n10. Evolving as a species\n11. Overcoming challenges and growing\n12. Finding balance between various aspects of life\n13. Expressing creativity and individuality\n14. Pursuing knowledge and wisdom\n15. Living in harmony with nature\n\nThese perspectives often overlap and can be combined in various ways. Some argue that the absence of an inherent meaning allows for the freedom to create our own purpose.", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 188, 'output_tokens': 135, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+We can also use streaming:
+
+``` python
+chat = Chat(model, sp=sp)
+for o in chat("I'm Jeremy", stream=True): print(o, end='')
+```
+
+ Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions you have?
+
+``` python
+for o in chat(q, prefill=pref, stream=True): print(o, end='')
+```
+
+ According to Douglas Adams, the meaning of life is 42. More seriously, there's no universally agreed upon answer. Common philosophical perspectives include:
+
+ 1. Finding personal fulfillment
+ 2. Serving others or a higher purpose
+ 3. Experiencing and creating love and happiness
+ 4. Pursuing knowledge and understanding
+ 5. Leaving a positive legacy
+
+ Ultimately, many believe each individual must determine their own meaning.
+
+### Chat tool use
+
+We automagically get streamlined tool use as well:
+
+``` python
+pr = f"What is {a}+{b}?"
+pr
+```
+
+ 'What is 604542+6458932?'
+
+``` python
+chat = Chat(model, sp=sp, tools=[sums])
+r = chat(pr)
+r
+```
+
+ Finding the sum of 604542 and 6458932
+
+To answer this question, I can use the “sums” function to add these two
+numbers together. Let me do that for you.
+
+
+
+- id: `msg_01Y6nic6azHsQUHgnu92UDTu`
+- content:
+ `[{'text': 'To answer this question, I can use the "sums" function to add these two numbers together. Let me do that for you.', 'type': 'text'}, {'id': 'toolu_01DrdxEJ2KuqCPRKmKjgCsyZ', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `tool_use`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 428, 'output_tokens': 101, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+Now we need to send this result to Claude—calling the object with no
+parameters tells it to return the tool result to Claude:
+
+``` python
+chat()
+```
+
+The sum of 604542 and 6458932 is 7063474.
+
+
+
+- id: `msg_01JNastkUht4sTSZ17ayuqdd`
+- content:
+ `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 543, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+It should be correct, because it actually used our Python function to do
+the addition. Let’s check:
+
+``` python
+a+b
+```
+
+ 7063474
+
+## Images
+
+Claude can handle image data as well. As everyone knows, when testing
+image APIs you have to use a cute puppy.
+
+``` python
+# Image is Cute_dog.jpg from Wikimedia
+fn = Path('samples/puppy.jpg')
+display.Image(filename=fn, width=200)
+```
+
+
+
+``` python
+img = fn.read_bytes()
+```
+
+
+Exported source
+
+``` python
+def _add_cache(d, cache):
+ "Optionally add cache control"
+ if cache: d["cache_control"] = {"type": "ephemeral"}
+ return d
+```
+
+
+
+Claude supports context caching by adding a `cache_control` header, so
+we provide an option to enable that.
+
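+For instance, here is what the helper adds to a content part (derived
+directly from the code above):
+
+``` python
+_add_cache({"type": "text", "text": "hi"}, cache=True)
+# {'type': 'text', 'text': 'hi', 'cache_control': {'type': 'ephemeral'}}
+```
+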
+------------------------------------------------------------------------
+
+source
+
+### img_msg
+
+> img_msg (data:bytes, cache=False)
+
+*Convert image `data` into an encoded `dict`*
+
+
+Exported source
+
+``` python
+def img_msg(data:bytes, cache=False)->dict:
+ "Convert image `data` into an encoded `dict`"
+ img = base64.b64encode(data).decode("utf-8")
+ mtype = mimetypes.types_map['.'+imghdr.what(None, h=data)]
+ r = dict(type="base64", media_type=mtype, data=img)
+ return _add_cache({"type": "image", "source": r}, cache)
+```
+
+
+
+Anthropic have documented the particular `dict` structure that they expect
+image data to be in, so we have a little function to create that for us.
+
+------------------------------------------------------------------------
+
+source
+
+### text_msg
+
+> text_msg (s:str, cache=False)
+
+*Convert `s` to a text message*
+
+
+Exported source
+
+``` python
+def text_msg(s:str, cache=False)->dict:
+ "Convert `s` to a text message"
+ return _add_cache({"type": "text", "text": s}, cache)
+```
+
+
+
+A Claude message can be a list of image and text parts. So we’ve also
+created a helper for making the text parts.
+
+``` python
+q = "In brief, what color flowers are in this image?"
+msg = mk_msg([img_msg(img), text_msg(q)])
+```
+
+``` python
+c([msg])
+```
+
+The image contains purple or lavender-colored flowers, which appear to
+be daisies or a similar type of flower.
+
+
+
+- id: `msg_011KHngSPHFC2P8TCGxB3LGf`
+- content:
+ `[{'text': 'The image contains purple or lavender-colored flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 110, 'output_tokens': 28, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+Exported source
+
+``` python
+def _mk_content(src, cache=False):
+ "Create appropriate content data structure based on type of content"
+ if isinstance(src,str): return text_msg(src, cache=cache)
+ if isinstance(src,bytes): return img_msg(src, cache=cache)
+ if isinstance(src, abc.Mapping): return {k:_str_if_needed(v) for k,v in src.items()}
+ return _str_if_needed(src)
+```
+
+
+
+There’s no need to manually choose the type of message, since we figure
+it out from the type of the source data.
+
+``` python
+_mk_content('Hi')
+```
+
+ {'type': 'text', 'text': 'Hi'}
+
+------------------------------------------------------------------------
+
+source
+
+### mk_msg
+
+> mk_msg (content, role='user', cache=False, **kw)
+
+*Helper to create a `dict` appropriate for a Claude message. `kw` are
+added as key/value pairs to the message*
+
+
+
+
+|  | **Type** | **Default** | **Details** |
+|----|----|----|----|
+| content |  |  | A string, list, or dict containing the contents of the message |
+| role | str | user | Must be 'user' or 'assistant' |
+| cache | bool | False |  |
+| kw |  |  |  |
+
+Exported source
+
+``` python
+def mk_msg(content, # A string, list, or dict containing the contents of the message
+ role='user', # Must be 'user' or 'assistant'
+ cache=False,
+ **kw):
+ "Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message"
+ if hasattr(content, 'content'): content,role = content.content,content.role
+ if isinstance(content, abc.Mapping): content=content.get('content', content)
+ if not isinstance(content, list): content=[content]
+ content = [_mk_content(o, cache if islast else False) for islast,o in loop_last(content)] if content else '.'
+ return dict2obj(dict(role=role, content=content, **kw), list_func=list)
+```
+
+
+
+``` python
+mk_msg(['hi', 'there'], cache=True)
+```
+
+``` json
+{ 'content': [ {'text': 'hi', 'type': 'text'},
+ { 'cache_control': {'type': 'ephemeral'},
+ 'text': 'there',
+ 'type': 'text'}],
+ 'role': 'user'}
+```
+
+``` python
+m = mk_msg(['hi', 'there'], cache=True)
+```
+
+When we construct a message, we now use
+[`_mk_content`](https://claudette.answer.ai/core.html#_mk_content) to
+create the appropriate parts. Since a dialog contains multiple messages,
+and a message can contain multiple content parts, to pass a single
+message with multiple parts we have to use a list containing a single
+list:
+
+``` python
+c([[img, q]])
+```
+
+The image contains purple or lavender-colored flowers, which appear to
+be daisies or a similar type of flower.
+
+
+
+- id: `msg_01FDxZ8umYNK4yPSuUcFqNoE`
+- content:
+ `[{'text': 'The image contains purple or lavender-colored flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 110, 'output_tokens': 28, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+
+
+> **Note**
+>
+> As promised (much!) earlier, we’ve now finally completed our
+> definition of
+> [`mk_msg`](https://claudette.answer.ai/core.html#mk_msg), and this
+> version is the one we export to the Python module.
+
+
+
+## Third party providers
+
+### Amazon Bedrock
+
+These are Amazon’s current Claude models:
+
+``` python
+models_aws
+```
+
+ ['anthropic.claude-3-opus-20240229-v1:0',
+ 'anthropic.claude-3-5-sonnet-20240620-v1:0',
+ 'anthropic.claude-3-sonnet-20240229-v1:0',
+ 'anthropic.claude-3-haiku-20240307-v1:0']
+
+
+
+> **Note**
+>
+> `anthropic` at version 0.34.2 seems not to install `boto3` as a
+> dependency. You may need to do a `pip install boto3`, or the creation
+> of the [`Client`](https://claudette.answer.ai/core.html#client) below
+> will fail.
+
+
+
+Provided `boto3` is installed, we otherwise don’t need any extra code to
+support Amazon Bedrock – we just have to set up the appropriate client:
+
+``` python
+ab = AnthropicBedrock(
+ aws_access_key=os.environ['AWS_ACCESS_KEY'],
+ aws_secret_key=os.environ['AWS_SECRET_KEY'],
+)
+client = Client(models_aws[-1], ab)
+```
+
+``` python
+chat = Chat(cli=client)
+```
+
+``` python
+chat("I'm Jeremy")
+```
+
+Hello Jeremy! It’s nice to meet you. How can I assist you today? Is
+there anything specific you’d like to talk about or any questions you
+have?
+
+
+
+- id: `msg_bdrk_01MwjVA5hwyfob3w4vdsqpnU`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 36}`
+
+
+
+### Google Vertex
+
+``` python
+models_goog
+```
+
+ ['claude-3-opus@20240229',
+ 'claude-3-5-sonnet@20240620',
+ 'claude-3-sonnet@20240229',
+ 'claude-3-haiku@20240307']
+
+``` python
+from anthropic import AnthropicVertex
+import google.auth
+```
+
+``` python
+project_id = google.auth.default()[1]
+region = "us-east5"
+gv = AnthropicVertex(project_id=project_id, region=region)
+client = Client(models_goog[-1], gv)
+```
+
+``` python
+chat = Chat(cli=client)
+```
+
+``` python
+chat("I'm Jeremy")
+```
+
+Hello Jeremy! It’s nice to meet you. How can I assist you today? Is
+there anything specific you’d like to talk about or any questions you
+have?
+
+
+
+- id: `msg_vrtx_01PFtHewPDe35yShy7vecp5q`
+- content:
+ `[{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage: `{'input_tokens': 10, 'output_tokens': 36}`
+
+
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..75bc94c
--- /dev/null
+++ b/index.html
@@ -0,0 +1,1376 @@
+
+claudette
+
NB: If you are reading this in GitHub’s readme, we recommend you instead read the much more nicely formatted documentation version of this tutorial.
+
+
Claudette is a wrapper for Anthropic’s Python SDK.
+
The SDK works well, but it is quite low level – it leaves the developer to do a lot of stuff manually. That’s a lot of extra work and boilerplate! Claudette automates pretty much everything that can be automated, whilst providing full control. Amongst the features provided:
Support for prefill, which tells Claude what to use as the first few words of its response
+
Convenient image support
+
Simple and convenient support for Claude’s new Tool Use API.
+
+
You’ll need to set the ANTHROPIC_API_KEY environment variable to the key provided to you by Anthropic in order to use this library.
+
Note that this library is the first ever “literate nbdev” project. That means that the actual source code for the library is a rendered Jupyter Notebook which includes callout notes and tips, HTML tables and images, detailed explanations, and teaches how and why the code is written the way it is. Even if you’ve never used the Anthropic Python SDK or Claude API before, you should be able to read the source code. Click Claudette’s Source to read it, or clone the git repo and execute the notebook yourself to see every step of the creation process in action. The tutorial below includes links to API details which will take you to relevant parts of the source. The reason this project is a new kind of literate program is because we take seriously Knuth’s call to action, that we have a “moral commitment” to never write an “illiterate program” – and so we have a commitment to making literate programming an easy and pleasant experience. (For more on this, see this talk from Hamel Husain.)
+
+
“Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.” Donald E. Knuth, Literate Programming (1984)
+
+
+
Install
+
pip install claudette
+
+
+
Getting started
+
Anthropic’s Python SDK will automatically be installed with Claudette, if you don’t already have it.
+
+
import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+
+
To print every HTTP request and response in full, uncomment the above line.
+
+
from claudette import *
+
+
Claudette only exports the symbols that are needed to use the library, so you can use import * to import them. Alternatively, just use:
+
import claudette
+
…and then add the prefix claudette. to any usages of the module.
+
Claudette provides models, which is a list of models currently available from the SDK.
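+
+For instance, a minimal sketch of picking a model (the exact IDs in `models` depend on the claudette and Anthropic SDK versions installed, so the index used here is illustrative):
+
+``` python
+print(models)      # the list of Claude model IDs claudette currently knows about
+model = models[1]  # e.g. Claude 3.5 Sonnet, the model used for most outputs in this document
+```
+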
As you see above, displaying the results of a call in a notebook shows just the message contents, with the other details hidden behind a collapsible section. Alternatively you can print the details:
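+
+For instance (a sketch, assuming `r` holds the response object returned by a previous `chat(...)` call, as in the examples above):
+
+``` python
+print(r)  # full Message details: id, content, model, stop_reason, usage, ...
+```
+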
Claude supports adding an extra assistant message at the end, which contains the prefill – i.e. the text we want Claude to assume the response starts with. Let’s try it out:
+
+
chat("Concisely, what is the meaning of life?",
+ prefill='According to Douglas Adams,')
+
+
According to Douglas Adams, the meaning of life is 42. Philosophically, it’s often considered to be finding purpose, happiness, and fulfillment in one’s existence.
+
+
+
id: msg_01MvA934wD5Ssyr3jhWTmV1G
+
content: [{'text': "According to Douglas Adams, the meaning of life is 42. Philosophically, it's often considered to be finding purpose, happiness, and fulfillment in one's existence.", 'type': 'text'}]
You can add stream=True to stream the results as soon as they arrive (although you will only see the gradual generation if you execute the notebook yourself, of course!)
+
+
+for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True):
+    print(o, end='')
+
+
It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
+
+
+
+
Async
+
Alternatively, you can use AsyncChat (or AsyncClient) for the async versions, e.g:
+
+
chat = AsyncChat(model)
+await chat("I'm Jeremy")
+
+
Hello Jeremy! It’s nice to meet you. How can I assist you today? Is there anything specific you’d like to talk about or any questions you have?
+
+
+
id: msg_018i1EFCqB2vHmNBvspg9eUZ
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]
Remember to use async for when streaming in this case:
+
+
+async for o in await chat("Concisely, what is the meaning of life?",
+                          prefill='According to Douglas Adams,', stream=True):
+    print(o, end='')
+
+
According to Douglas Adams, the meaning of life is 42. More seriously, philosophers have debated this for millennia. Common answers include:
+
+1. Finding personal happiness
+2. Serving others
+3. Pursuing knowledge
+4. Creating meaning through our choices
+5. Fulfilling our potential
+6. Connecting with others
+7. Experiencing love and beauty
+
+Ultimately, many believe we must each find our own meaning.
+
+
+
+
+
+
Prompt caching
+
If you use mk_msg(msg, cache=True), then the message is cached using Claude’s prompt caching feature. For instance, here we use caching when asking about Claudette’s readme file:
+
+
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
+
+
+
nbtxt = Path('README.txt').read_text()
+msg =f'''<README>
+{nbtxt}
+</README>
+In brief, what is the purpose of this project based on the readme?'''
+r = chat(mk_msg(msg, cache=True))
+r
+
+
Based on the readme, the main purpose of the Claudette project is to provide a high-level wrapper around Anthropic’s Python SDK for interacting with Claude AI models. Key features and goals include:
+
+
Automating and simplifying interactions with Claude, reducing boilerplate code.
+
Providing a stateful dialog interface through the Chat class.
+
Supporting features like prefill (specifying the start of Claude’s response) and image handling.
+
Offering convenient support for Claude’s Tool Use API.
+
Serving as an example of “literate programming”, with the source code designed to be readable and educational, including explanations of how and why the code is written.
+
Supporting multiple model providers, including direct Anthropic API access as well as Claude models available through Amazon Bedrock and Google Vertex AI.
+
+
The project aims to make working with Claude models more convenient and accessible for developers while also serving as an educational resource on how to effectively use and interact with these AI models.
+
+
+
id: msg_015khP4yqW57tH4qK6tGTkQr
+
content: [{'text': 'Based on the readme, the main purpose of the Claudette project is to provide a high-level wrapper around Anthropic\'s Python SDK for interacting with Claude AI models. Key features and goals include:\n\n1. Automating and simplifying interactions with Claude, reducing boilerplate code.\n\n2. Providing a stateful dialog interface through the [Chat](https://claudette.answer.ai/core.html#chat) class.\n\n3. Supporting features like prefill (specifying the start of Claude\'s response) and image handling.\n\n4. Offering convenient support for Claude\'s Tool Use API.\n\n5. Serving as an example of "literate programming", with the source code designed to be readable and educational, including explanations of how and why the code is written.\n\n6. Supporting multiple model providers, including direct Anthropic API access as well as Claude models available through Amazon Bedrock and Google Vertex AI.\n\nThe project aims to make working with Claude models more convenient and accessible for developers while also serving as an educational resource on how to effectively use and interact with these AI models.', 'type': 'text'}]
r = chat('How does it make tool use more ergonomic?')
+r
+
+
Claudette makes tool use more ergonomic in several ways:
+
+
Simplified function definition: It uses docments to make defining Python functions for tools as simple as possible. Each parameter and the return value should have a type and a description.
+
Automatic handling: The Chat class can be initialized with a list of tools, and Claudette handles the back-and-forth between Claude and the tools automatically.
+
Single-step execution: The Chat.toolloop method allows for executing a series of tool calls in a single step, even if multiple tools are needed to solve a problem.
+
Forced tool use: You can set tool_choice to force Claude to always answer using a specific tool.
+
Tracing: The toolloop method supports a trace_func parameter, allowing you to see each response from Claude during the process.
+
Automatic parameter passing: When Claude decides to use a tool, Claudette automatically calls the tool with the provided parameters.
+
System prompt integration: It allows setting a system prompt to guide Claude’s behavior when using tools, such as instructing it not to mention the tools it’s using.
+
+
These features significantly reduce the amount of code and manual handling required to use Claude’s tool use capabilities, making the process more streamlined and developer-friendly.
+
+
+
id: msg_01B4KHLHzM6MUnRgiB3tZ1m5
+
content: [{'text': "Claudette makes tool use more ergonomic in several ways:\n\n1. Simplified function definition: It uses docments to make defining Python functions for tools as simple as possible. Each parameter and the return value should have a type and a description.\n\n2. Automatic handling: The [Chat](https://claudette.answer.ai/core.html#chat) class can be initialized with a list of tools, and Claudette handles the back-and-forth between Claude and the tools automatically.\n\n3. Single-step execution: The [Chat.toolloop](https://claudette.answer.ai/toolloop.html#chat.toolloop) method allows for executing a series of tool calls in a single step, even if multiple tools are needed to solve a problem.\n\n4. Forced tool use: You can settool_choiceto force Claude to always answer using a specific tool.\n\n5. Tracing: Thetoolloopmethod supports atrace_funcparameter, allowing you to see each response from Claude during the process.\n\n6. Automatic parameter passing: When Claude decides to use a tool, Claudette automatically calls the tool with the provided parameters.\n\n7. System prompt integration: It allows setting a system prompt to guide Claude's behavior when using tools, such as instructing it not to mention the tools it's using.\n\nThese features significantly reduce the amount of code and manual handling required to use Claude's tool use capabilities, making the process more streamlined and developer-friendly.", 'type': 'text'}]
We use docments to make defining Python functions as ergonomic as possible. Each parameter (and the return value) should have a type, and a docments comment with the description of what it is. As an example we’ll write a simple function that adds numbers together, and will tell us when it’s being called:
+
+
+def sums(
+    a:int,   # First thing to sum
+    b:int=1  # Second thing to sum
+) -> int:    # The sum of the inputs
+    "Adds a + b."
+    print(f"Finding the sum of {a} and {b}")
+    return a + b
+
+
Sometimes Claude will say something like “according to the sums tool the answer is” – generally we’d rather it just tells the user the answer, so we can use a system prompt to help with this:
+
+
sp ="Never mention what tools you use."
+
+
We’ll get Claude to add up some long numbers:
+
+
a,b =604542,6458932
+pr =f"What is {a}+{b}?"
+pr
+
+
'What is 604542+6458932?'
+
+
+
To use tools, pass a list of them to Chat, and to force it to always answer using a tool, set tool_choice to that function name:
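+
+A sketch of what that looks like with the `sums` function and system prompt defined above (the exact form `tool_choice` accepts – a tool name as used here, or a dict – may depend on the claudette version):
+
+``` python
+chat = Chat(model, sp=sp, tools=[sums])
+r = chat(pr, tool_choice='sums')  # force Claude to answer via the sums tool
+```
+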
Now when we call that with our prompt, Claude doesn’t return the answer, but instead returns a tool_use message, which means we have to call the named tool with the provided parameters:
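+
+Sketched out, and following the same pattern shown in the tool loop notebook later in this document, the follow-up is just another call to the chat, which runs the tool and passes its result back to Claude:
+
+``` python
+print(r.stop_reason)  # 'tool_use' -- Claude is asking for the sums tool
+r = chat()            # claudette calls sums(...) and returns Claude's final answer
+```
+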
You can see how many tokens have been used at any time by checking the use property. Note that (as of May 2024) tool use in Claude uses a lot of tokens, since it automatically adds a large system prompt.
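+
+For example (illustrative numbers only):
+
+``` python
+chat.use  # e.g. In: 350; Out: 72; Total: 422
+```
+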
We can do everything needed to use tools in a single step, by using Chat.toolloop. This can even call multiple tools as needed to solve a problem. For example, let’s define a tool to handle multiplication:
+
+
+def mults(
+    a:int,   # First thing to multiply
+    b:int=1  # Second thing to multiply
+) -> int:    # The product of the inputs
+    "Multiplies a * b."
+    print(f"Finding the product of {a} and {b}")
+    return a * b
+
+
Now with a single call we can calculate (a+b)*2 – by passing trace_func we can see each response from Claude in the process:
Finding the sum of 604542 and 6458932
+Message(id='msg_01BidPp2g3FuMLzFJd7jHDeb', content=[TextBlock(text='Certainly! To calculate (604542+6458932)*2, we\'ll need to use the available tools to perform the addition and multiplication operations. Let\'s break it down step by step:\n\n1. First, we\'ll add 604542 and 6458932 using the "sums" function.\n2. Then, we\'ll multiply the result by 2 using the "mults" function.\n\nLet\'s start with the addition:', type='text'), ToolUseBlock(id='toolu_017v8XraNE8sEaErP3SqwWw2', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-3-5-sonnet-20240620', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 538; Out: 168; Cache create: 0; Cache read: 0; Total: 706)
+Finding the product of 7063474 and 2
+Message(id='msg_01XpkGk396hzw5zS8qfC6zb5', content=[TextBlock(text="Great! The sum of 604542 and 6458932 is 7063474.\n\nNow, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_012R3kQMdwT75GtbzWjfXL3k', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')], model='claude-3-5-sonnet-20240620', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 721; Out: 106; Cache create: 0; Cache read: 0; Total: 827)
+Message(id='msg_018bqP3PhfdcP5N7KKyTSLzF', content=[TextBlock(text='Now we have our final result. \n\nThe calculation (604542+6458932)*2 equals 14126948.\n\nTo break it down:\n1. 604542 + 6458932 = 7063474\n2. 7063474 * 2 = 14126948\n\nSo, the final answer to (604542+6458932)*2 is 14126948.', type='text')], model='claude-3-5-sonnet-20240620', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 841; Out: 95; Cache create: 0; Cache read: 0; Total: 936)
+
+
+
Now we have our final result.
+
The calculation (604542+6458932)*2 equals 14126948.
+
To break it down: 1. 604542 + 6458932 = 7063474 2. 7063474 * 2 = 14126948
+
So, the final answer to (604542+6458932)*2 is 14126948.
+
+
+
id: msg_018bqP3PhfdcP5N7KKyTSLzF
+
content: [{'text': 'Now we have our final result. \n\nThe calculation (604542+6458932)*2 equals 14126948.\n\nTo break it down:\n1. 604542 + 6458932 = 7063474\n2. 7063474 * 2 = 14126948\n\nSo, the final answer to (604542+6458932)*2 is 14126948.', 'type': 'text'}]
If you just want the immediate result from a single tool, use Client.structured.
+
+
cli = Client(model)
+
+
+
+def sums(
+    a:int,   # First thing to sum
+    b:int=1  # Second thing to sum
+) -> int:    # The sum of the inputs
+    "Adds a + b."
+    print(f"Finding the sum of {a} and {b}")
+    return a + b
+
+
+
cli.structured("What is 604542+6458932", sums)
+
+
Finding the sum of 604542 and 6458932
+
+
+
[7063474]
+
+
+
This is particularly useful for getting back structured information, e.g:
+
+
+class President:
+    "Information about a president of the United States"
+    def __init__(self,
+                 first:str,           # first name
+                 last:str,            # last name
+                 spouse:str,          # name of spouse
+                 years_in_office:str, # format: "{start_year}-{end_year}"
+                 birthplace:str,      # name of city
+                 birth_year:int       # year of birth, `0` if unknown
+                ):
+        assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
+        store_attr()
+
+    __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year')
+
+
+
cli.structured("Provide key information about the 3rd President of the United States", President)
Claudette expects images as a list of bytes, so we read in the file:
+
+
img = fn.read_bytes()
+
+
Prompts to Claudette can be lists, containing text, images, or both, eg:
+
+
chat([img, "In brief, what color flowers are in this image?"])
+
+
The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.
+
+
+
id: msg_01Wq2UqWLrQhWmmcuS7Dd8aL
+
content: [{'text': 'The flowers in this image are purple. They appear to be small, daisy-like flowers, possibly asters or some type of purple daisy, blooming in the background behind the adorable puppy in the foreground.', 'type': 'text'}]
Alternatively, Claudette supports creating a multi-stage chat with separate image and text prompts. For instance, you can pass just the image as the initial prompt (in which case Claude will make some general comments about what it sees), and then follow up with questions in additional prompts:
+
+
chat = Chat(model)
+chat(img)
+
+
This image shows an adorable puppy lying in the grass. The puppy appears to be a Cavalier King Charles Spaniel or a similar breed, with distinctive white and reddish-brown fur coloring. Its face is predominantly white with large, expressive dark eyes and a small black nose.
+
The puppy is resting on a grassy surface, and behind it, you can see some purple flowers, which look like asters or michaelmas daisies. These flowers add a lovely splash of color to the background. There’s also what seems to be a wooden structure or fence visible behind the puppy, giving the scene a rustic, garden-like feel.
+
The composition of the image is quite charming, with the puppy as the main focus in the foreground and the natural elements providing a beautiful, colorful backdrop. The lighting appears soft, highlighting the puppy’s fur and giving the whole image a warm, inviting atmosphere.
+
This kind of image would be perfect for a greeting card, calendar, or as a heartwarming pet portrait. It captures the innocence and cuteness of a young dog in a picturesque outdoor setting.
+
+
+
id: msg_015NoQzCLM5ofbZTCxDPmWAT
+
content: [{'text': "This image shows an adorable puppy lying in the grass. The puppy appears to be a Cavalier King Charles Spaniel or a similar breed, with distinctive white and reddish-brown fur coloring. Its face is predominantly white with large, expressive dark eyes and a small black nose.\n\nThe puppy is resting on a grassy surface, and behind it, you can see some purple flowers, which look like asters or michaelmas daisies. These flowers add a lovely splash of color to the background. There's also what seems to be a wooden structure or fence visible behind the puppy, giving the scene a rustic, garden-like feel.\n\nThe composition of the image is quite charming, with the puppy as the main focus in the foreground and the natural elements providing a beautiful, colorful backdrop. The lighting appears soft, highlighting the puppy's fur and giving the whole image a warm, inviting atmosphere.\n\nThis kind of image would be perfect for a greeting card, calendar, or as a heartwarming pet portrait. It captures the innocence and cuteness of a young dog in a picturesque outdoor setting.", 'type': 'text'}]
The puppy in the image is facing towards the left side of the frame. Its head is turned slightly, allowing us to see most of its face, including both eyes, which are looking directly at the camera. The puppy’s body is angled diagonally, with its front paws and chest visible as it rests on the grass. This positioning gives a good view of the puppy’s facial features and part of its body, creating an engaging and endearing portrait of the young dog.
+
+
+
id: msg_018bjcun7oQyBLtn3eMi1nHU
+
content: [{'text': "The puppy in the image is facing towards the left side of the frame. Its head is turned slightly, allowing us to see most of its face, including both eyes, which are looking directly at the camera. The puppy's body is angled diagonally, with its front paws and chest visible as it rests on the grass. This positioning gives a good view of the puppy's facial features and part of its body, creating an engaging and endearing portrait of the young dog.", 'type': 'text'}]
The puppy in the image has a combination of two main colors:
+
+
White: The majority of its face, including the area around its eyes, muzzle, and part of its chest, is white.
+
Reddish-brown (often called “ruby” or “chestnut” in dog breed descriptions): This color appears on its ears and patches on its body.
+
+
This color combination is typical of Cavalier King Charles Spaniels, particularly the Blenheim variety, though without being able to see the full body, it’s hard to confirm the exact breed. The contrast between the white and reddish-brown fur creates a striking and adorable appearance for the puppy.
+
+
+
id: msg_01T4JvKPNT9a9iWXachmszAU
+
content: [{'text': 'The puppy in the image has a combination of two main colors:\n\n1. White: The majority of its face, including the area around its eyes, muzzle, and part of its chest, is white.\n\n2. Reddish-brown (often called "ruby" or "chestnut" in dog breed descriptions): This color appears on its ears and patches on its body.\n\nThis color combination is typical of Cavalier King Charles Spaniels, particularly the Blenheim variety, though without being able to see the full body, it\'s hard to confirm the exact breed. The contrast between the white and reddish-brown fur creates a striking and adorable appearance for the puppy.', 'type': 'text'}]
Note that the image is passed in again for every input in the dialog, so the number of input tokens increases quickly with this kind of chat. (For large images, using prompt caching might be a good idea.)
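+
+A sketch of that idea, assuming `mk_msg` accepts the same list-style content as `chat` (see the prompt caching section above; `img` is the image bytes read in earlier):
+
+``` python
+chat = Chat(model)
+chat(mk_msg([img, "In brief, what color flowers are in this image?"], cache=True))
+```
+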
Now create your Chat object passing this client to the cli parameter – and from then on, everything is identical to the previous examples.
+
+
chat = Chat(cli=client)
+chat("I'm Jeremy")
+
+
Hello Jeremy! It’s nice to meet you. How can I assist you today? Is there anything specific you’d like to talk about or any questions you have?
+
+
+
id: msg_bdrk_01VFVE1Pe5LNubaWYKC1sz8f
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]
Hello Jeremy! It’s nice to meet you. How can I assist you today? Is there anything specific you’d like to talk about or any questions you have?
+
+
+
id: msg_vrtx_01P251BUJXBBvihsvb3VVgZ3
+
content: [{'text': "Hello Jeremy! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to talk about or any questions you have?", 'type': 'text'}]
import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+
+
+
model = models[-1]
+
+
Anthropic provides an interesting example of using tools to mock up a hypothetical ordering system. We’re going to take it a step further, and show how we can dramatically simplify the process, whilst completing more complex tasks.
+
We’ll start by defining the same mock customer/order data as in Anthropic’s example, plus create an entity relationship between customers and orders:
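+
+The data definitions (identical to the ones in the tool loop notebook later in this document):
+
+``` python
+orders = {
+    "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
+    "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
+    "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
+
+customers = {
+    "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
+               orders=[orders['O1'], orders['O2']]),
+    "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
+               orders=[orders['O3']])
+}
+```
+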
We can now define the same functions from the original example – but note that we don’t need to manually create the large JSON schema, since Claudette handles all that for us automatically from the functions directly. We’ll add some extra functionality to update order details when cancelling too.
+
+
+def get_customer_info(
+    customer_id:str # ID of the customer
+): # Customer's name, email, phone number, and list of orders
+    "Retrieves a customer's information and their orders based on the customer ID"
+    print(f'- Retrieving customer {customer_id}')
+    return customers.get(customer_id, "Customer not found")
+
+def get_order_details(
+    order_id:str # ID of the order
+): # Order's ID, product name, quantity, price, and order status
+    "Retrieves the details of a specific order based on the order ID"
+    print(f'- Retrieving order {order_id}')
+    return orders.get(order_id, "Order not found")
+
+def cancel_order(
+    order_id:str # ID of the order to cancel
+)->bool: # True if the cancellation is successful
+    "Cancels an order based on the provided order ID"
+    print(f'- Cancelling order {order_id}')
+    if order_id not in orders: return False
+    orders[order_id]['status'] = 'Cancelled'
+    return True
Add prompt pr to dialog and get a response from Claude, automatically following up with tool_use messages
+
+|             | Type     | Default | Details                                                      |
+|-------------|----------|---------|--------------------------------------------------------------|
+| pr          |          |         | Prompt to pass to Claude                                     |
+| max_steps   | int      | 10      | Maximum number of tool requests to loop through              |
+| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)               |
+| cont_func   | Optional | noop    | Function that stops loop if returns False                    |
+| temp        | NoneType | None    | Temperature                                                  |
+| maxtok      | int      | 4096    | Maximum tokens                                               |
+| stream      | bool     | False   | Stream response?                                             |
+| prefill     | str      |         | Optional prefill to pass to Claude as start of its response  |
+| tool_choice | Optional | None    | Optionally force use of some tool                            |
+
+
+
+Exported source
+
+@patch
+@delegates(Chat.__call__)
+def toolloop(self:Chat,
+             pr, # Prompt to pass to Claude
+             max_steps=10, # Maximum number of tool requests to loop through
+             trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`)
+             cont_func:Optional[callable]=noop, # Function that stops loop if returns False
+             **kwargs):
+    "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
+    r = self(pr, **kwargs)
+    for i in range(max_steps):
+        if r.stop_reason!='tool_use': break
+        if trace_func: trace_func(r)
+        r = self(**kwargs)
+        if not (cont_func or noop)(self.h[-2]): break
+    if trace_func: trace_func(r)
+    return r
+
+
+
We’ll start by re-running our previous request - we shouldn’t have to manually pass back the tool_use message any more:
+
+
chat = Chat(model, tools=tools)
+r = chat.toolloop('Can you tell me the email address for customer C1?')
+r
+
+
- Retrieving customer C1
+
+
+
The email address for customer C1 is john@example.com.
+
+
+
id: msg_019SscrFmtXCmyAknBfLNv5i
+
content: [{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]
We have one additional parameter to creating a CodeChat beyond what we pass to Chat, which is ask – if that’s True, we’ll prompt the user before running code.
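+
+For reference, the `CodeChat` definition this refers to (it is shown again in the tool loop notebook later in this document):
+
+``` python
+from toolslm.shell import get_shell
+from fastcore.meta import delegates
+
+@delegates()
+class CodeChat(Chat):
+    imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
+    def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
+        super().__init__(model=model, **kwargs)
+        self.ask = ask
+        self.tools.append(self.run_cell)
+        self.shell = get_shell()
+        self.shell.run_cell('import '+self.imps)
+```
+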
+
+
+@patch
+def run_cell(
+    self:CodeChat,
+    code:str, # Code to execute in persistent IPython session
+): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
+    "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
+    confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
+    if self.ask and input(confirm): return '#DECLINED#'
+    try: res = self.shell.run_cell(code)
+    except Exception as e: return traceback.format_exc()
+    return res.stdout if res.result is None else res.result
+
+
We just pass along requests to run code to the shell’s implementation. Claude often prints results instead of just using the last expression, so we capture stdout in those cases.
+
+
sp =f'''You are a knowledgable assistant. Do not use tools unless needed.
+Don't do complex calculations yourself -- use code for them.
+The following modules are pre-imported for `run_cell` automatically:
+
+{CodeChat.imps}
+
+Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.
+
+If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
+
+
+
+def get_user(ignored:str='' # Unused parameter
+    ): # Username of current user
+    "Get the username of the user running this session"
+    print("Looking up username")
+    return 'Jeremy'
+
+
In order to test out multi-stage tool use, we create a mock function that Claude can call to get the current username.
Claude gets confused sometimes about how tools work, so we use examples to remind it:
+
+
chat.h = [
+'Calculate the square root of `10332`', 'math.sqrt(10332)',
+'#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
+]
+
+
Providing a callable to toolloop’s trace_func lets us print out information during the loop:
+
+
+def _show_cts(r):
+    for o in r.content:
+        if hasattr(o,'text'): print(o.text)
+        nm = getattr(o, 'name', None)
+        if nm=='run_cell': print(o.input['code'])
+        elif nm: print(f'{o.name}({o.input})')
+
+
…and toolloop’s cont_func callable lets us provide a function which, if it returns False, stops the loop:
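+
+The stopping condition used in these examples (also shown in the tool loop notebook later in this document) checks whether the last tool result was a decline:
+
+``` python
+def _cont_decline(c):
+    return nested_idx(c, 'content', 'content') != '#DECLINED#'
+```
+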
Now we can try our code interpreter. We start by asking for a function to be created, which we’ll use in the next prompt to test that the interpreter is persistent.
+
+
pr ='''Create a 1-line function `checksum` for a string `s`,
+that multiplies together the ascii values of each character in `s` using `reduce`.'''
+chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
+
+
Press Enter to execute, or enter "n" to skip?
+```
+from functools import reduce
+checksum = lambda s: reduce(lambda x, y: x * ord(y), s, 1)
+print("Function 'checksum' has been created.")
+print("Example usage: checksum('hello') =", checksum('hello'))
+```
+
+Certainly! I'll create a one-line function called `checksum` that multiplies together the ASCII values of each character in a given string `s` using the `reduce` function. To do this, we'll use the `run_cell` function to execute the Python code. Here's how we'll do it:
+from functools import reduce
+checksum = lambda s: reduce(lambda x, y: x * ord(y), s, 1)
+print("Function 'checksum' has been created.")
+print("Example usage: checksum('hello') =", checksum('hello'))
+Great! The `checksum` function has been created successfully. Let me explain the function:
+
+1. We import `reduce` from the `functools` module (which is pre-imported in the environment).
+2. The `checksum` function is defined as a lambda function that takes a string `s` as input.
+3. Inside the lambda, we use `reduce` to multiply the ASCII values of each character in the string.
+4. The `reduce` function uses another lambda that multiplies the accumulator `x` by the ASCII value of each character `y` (obtained using `ord(y)`).
+5. The initial value for the reduction is 1, ensuring that the multiplication starts correctly.
+
+As we can see from the example output, calling `checksum('hello')` returns `13599570816`, which is the product of the ASCII values of 'h', 'e', 'l', 'l', and 'o'.
+
+You can now use this `checksum` function with any string. For example, if you want to calculate the checksum of another string, you can do so by calling `checksum('your_string_here')`.
+
+Is there anything else you'd like to do with this function or any other string operations you're interested in?
+
+
+
Great! The checksum function has been created successfully. Let me explain the function:
+
+
We import reduce from the functools module (which is pre-imported in the environment).
+
The checksum function is defined as a lambda function that takes a string s as input.
+
Inside the lambda, we use reduce to multiply the ASCII values of each character in the string.
+
The reduce function uses another lambda that multiplies the accumulator x by the ASCII value of each character y (obtained using ord(y)).
+
The initial value for the reduction is 1, ensuring that the multiplication starts correctly.
+
+
As we can see from the example output, calling checksum('hello') returns 13599570816, which is the product of the ASCII values of ‘h’, ‘e’, ‘l’, ‘l’, and ‘o’.
+
You can now use this checksum function with any string. For example, if you want to calculate the checksum of another string, you can do so by calling checksum('your_string_here').
+
Is there anything else you’d like to do with this function or any other string operations you’re interested in?
+
+
+
id: msg_01LguR5AhsAdeBbYNRC3oNQM
+
content: [{'text': "Great! Thechecksumfunction has been created successfully. Let me explain the function:\n\n1. We importreducefrom thefunctoolsmodule (which is pre-imported in the environment).\n2. Thechecksumfunction is defined as a lambda function that takes a stringsas input.\n3. Inside the lambda, we usereduceto multiply the ASCII values of each character in the string.\n4. Thereducefunction uses another lambda that multiplies the accumulatorxby the ASCII value of each charactery(obtained usingord(y)).\n5. The initial value for the reduction is 1, ensuring that the multiplication starts correctly.\n\nAs we can see from the example output, callingchecksum(‘hello’)returns13599570816, which is the product of the ASCII values of 'h', 'e', 'l', 'l', and 'o'.\n\nYou can now use thischecksumfunction with any string. For example, if you want to calculate the checksum of another string, you can do so by callingchecksum(‘your_string_here’).\n\nIs there anything else you'd like to do with this function or any other string operations you're interested in?", 'type': 'text'}]
By asking for a calculation to be done on the username, we force it to use multiple steps:
+
+
pr ='Use it to get the checksum of the username of this session.'
+chat.toolloop(pr, trace_func=_show_cts)
+
+
Looking up username
+Certainly! I'll use the `checksum` function we just created to calculate the checksum of the username for this session. To do this, we'll first need to get the username using the `get_user` function, and then we'll apply the `checksum` function to that username. Here's how we'll do it:
+get_user({'ignored': ''})
+Press Enter to execute, or enter "n" to skip?
+```
+username = "Jeremy"
+result = checksum(username)
+print(f"The checksum of the username '{username}' is: {result}")
+```
+
+Now that we have the username "Jeremy", let's calculate its checksum:
+username = "Jeremy"
+result = checksum(username)
+print(f"The checksum of the username '{username}' is: {result}")
+There you have it! The checksum of the username "Jeremy" for this session is 1134987783204.
+
+To break it down:
+1. We first retrieved the username "Jeremy" using the `get_user` function.
+2. Then we used our previously defined `checksum` function to calculate the checksum of this username.
+3. The result, 1134987783204, is the product of the ASCII values of each character in "Jeremy".
+
+For verification, we can manually calculate this:
+- ASCII values: J (74), e (101), r (114), e (101), m (109), y (121)
+- 74 * 101 * 114 * 101 * 109 * 121 = 1134987783204
+
+This confirms that our `checksum` function is working correctly for the username of this session.
+
+Is there anything else you'd like to do with the username or the checksum function?
+
+
+
There you have it! The checksum of the username “Jeremy” for this session is 1134987783204.
+
To break it down: 1. We first retrieved the username “Jeremy” using the get_user function. 2. Then we used our previously defined checksum function to calculate the checksum of this username. 3. The result, 1134987783204, is the product of the ASCII values of each character in “Jeremy”.
+
For verification, we can manually calculate this: - ASCII values: J (74), e (101), r (114), e (101), m (109), y (121) - 74 * 101 * 114 * 101 * 109 * 121 = 1134987783204
+
This confirms that our checksum function is working correctly for the username of this session.
+
Is there anything else you’d like to do with the username or the checksum function?
+
+
+
id: msg_01Htvo4rw9rBaozPapFy8XQE
+
content: [{'text': 'There you have it! The checksum of the username "Jeremy" for this session is 1134987783204.\n\nTo break it down:\n1. We first retrieved the username "Jeremy" using theget_userfunction.\n2. Then we used our previously definedchecksumfunction to calculate the checksum of this username.\n3. The result, 1134987783204, is the product of the ASCII values of each character in "Jeremy".\n\nFor verification, we can manually calculate this:\n- ASCII values: J (74), e (101), r (114), e (101), m (109), y (121)\n- 74 * 101 * 114 * 101 * 109 * 121 = 1134987783204\n\nThis confirms that ourchecksumfunction is working correctly for the username of this session.\n\nIs there anything else you\'d like to do with the username or the checksum function?', 'type': 'text'}]
+
+
+
+
+
\ No newline at end of file
diff --git a/toolloop.html.md b/toolloop.html.md
new file mode 100644
index 0000000..f7c2513
--- /dev/null
+++ b/toolloop.html.md
@@ -0,0 +1,577 @@
+# Tool loop
+
+
+
+
+``` python
+import os
+# os.environ['ANTHROPIC_LOG'] = 'debug'
+```
+
+``` python
+model = models[-1]
+```
+
+Anthropic provides an [interesting
+example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
+of using tools to mock up a hypothetical ordering system. We’re going to
+take it a step further, and show how we can dramatically simplify the
+process, whilst completing more complex tasks.
+
+We’ll start by defining the same mock customer/order data as in
+Anthropic’s example, plus create an entity relationship between customers
+and orders:
+
+``` python
+orders = {
+ "O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
+ "O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
+ "O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
+
+customers = {
+ "C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
+ orders=[orders['O1'], orders['O2']]),
+ "C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
+ orders=[orders['O3']])
+}
+```
+
+We can now define the same functions from the original example – but
+note that we don’t need to manually create the large JSON schema, since
+Claudette handles all that for us automatically from the functions
+directly. We’ll add some extra functionality to update order details
+when cancelling too.
+
+``` python
+def get_customer_info(
+ customer_id:str # ID of the customer
+): # Customer's name, email, phone number, and list of orders
+ "Retrieves a customer's information and their orders based on the customer ID"
+ print(f'- Retrieving customer {customer_id}')
+ return customers.get(customer_id, "Customer not found")
+
+def get_order_details(
+ order_id:str # ID of the order
+): # Order's ID, product name, quantity, price, and order status
+ "Retrieves the details of a specific order based on the order ID"
+ print(f'- Retrieving order {order_id}')
+ return orders.get(order_id, "Order not found")
+
+def cancel_order(
+ order_id:str # ID of the order to cancel
+)->bool: # True if the cancellation is successful
+ "Cancels an order based on the provided order ID"
+ print(f'- Cancelling order {order_id}')
+ if order_id not in orders: return False
+ orders[order_id]['status'] = 'Cancelled'
+ return True
+```
+
+We’re now ready to start our chat.
+
+``` python
+tools = [get_customer_info, get_order_details, cancel_order]
+chat = Chat(model, tools=tools)
+```
+
+We’ll start with the same request as Anthropic showed:
+
+``` python
+r = chat('Can you tell me the email address for customer C1?')
+print(r.stop_reason)
+r.content
+```
+
+ - Retrieving customer C1
+ tool_use
+
+ [ToolUseBlock(id='toolu_019DSHxKkKDrNTcUe9AgKqjm', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
+
+Claude asks us to use a tool. Claudette handles that automatically by
+just calling it again:
+
+``` python
+r = chat()
+contents(r)
+```
+
+ 'The email address for customer C1 is john@example.com.'
+
+Let’s consider a more complex case than in the original example – what
+happens if a customer wants to cancel all of their orders?
+
+``` python
+chat = Chat(model, tools=tools)
+r = chat('Please cancel all orders for customer C1 for me.')
+print(r.stop_reason)
+r.content
+```
+
+ - Retrieving customer C1
+ tool_use
+
+ [TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'),
+ ToolUseBlock(id='toolu_01Gn7zKBeBgWzi2AKfN6bVNZ', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
+
+This is the start of a multi-stage tool use process. Doing it manually
+step by step is inconvenient, so let’s write a function to handle this
+for us:
+
+------------------------------------------------------------------------
+
+source
+
+### Chat.toolloop
+
+> Chat.toolloop (pr, max_steps=10, trace_func:Optional[callable]=None,
+>                cont_func:Optional[callable]=noop, temp=None,
+>                maxtok=4096, stream=False, prefill='',
+>                tool_choice:Optional[dict]=None)
+
+*Add prompt `pr` to dialog and get a response from Claude, automatically
+following up with `tool_use` messages*
+
+
+|             | Type     | Default | Details                                                      |
+|-------------|----------|---------|--------------------------------------------------------------|
+| pr          |          |         | Prompt to pass to Claude                                     |
+| max_steps   | int      | 10      | Maximum number of tool requests to loop through              |
+| trace_func  | Optional | None    | Function to trace tool use steps (e.g `print`)               |
+| cont_func   | Optional | noop    | Function that stops loop if returns False                    |
+| temp        | NoneType | None    | Temperature                                                  |
+| maxtok      | int      | 4096    | Maximum tokens                                               |
+| stream      | bool     | False   | Stream response?                                             |
+| prefill     | str      |         | Optional prefill to pass to Claude as start of its response  |
+| tool_choice | Optional | None    | Optionally force use of some tool                            |
+
+
+
+Exported source
+
+``` python
+@patch
+@delegates(Chat.__call__)
+def toolloop(self:Chat,
+ pr, # Prompt to pass to Claude
+ max_steps=10, # Maximum number of tool requests to loop through
+ trace_func:Optional[callable]=None, # Function to trace tool use steps (e.g `print`)
+ cont_func:Optional[callable]=noop, # Function that stops loop if returns False
+ **kwargs):
+ "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
+ r = self(pr, **kwargs)
+ for i in range(max_steps):
+ if r.stop_reason!='tool_use': break
+ if trace_func: trace_func(r)
+ r = self(**kwargs)
+ if not (cont_func or noop)(self.h[-2]): break
+ if trace_func: trace_func(r)
+ return r
+```
+
+
+
+We’ll start by re-running our previous request - we shouldn’t have to
+manually pass back the `tool_use` message any more:
+
+``` python
+chat = Chat(model, tools=tools)
+r = chat.toolloop('Can you tell me the email address for customer C1?')
+r
+```
+
+ - Retrieving customer C1
+
+The email address for customer C1 is john@example.com.
+
+
+
+- id: `msg_019SscrFmtXCmyAknBfLNv5i`
+- content:
+ `[{'text': 'The email address for customer C1 is john@example.com.', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 732, 'output_tokens': 19, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+Let’s see if it can handle the multi-stage process now – we’ll add
+`trace_func=print` to see each stage of the process:
+
+``` python
+chat = Chat(model, tools=tools)
+r = chat.toolloop('Please cancel all orders for customer C1 for me.', trace_func=print)
+r
+```
+
+ - Retrieving customer C1
+ Message(id='msg_01VzFkDJ59R7NP6gQ7cNrWi8', content=[TextBlock(text="Okay, let's cancel all orders for customer C1:", type='text'), ToolUseBlock(id='toolu_01YJ6Kh3LMVL5Ekzn44VTH6E', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')], model='claude-3-haiku-20240307', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 537; Out: 72; Cache create: 0; Cache read: 0; Total: 609)
+ - Cancelling order O1
+ Message(id='msg_01SxniJb85ofdMm3UJ4o2XCy', content=[TextBlock(text="Based on the customer information, it looks like there are 2 orders for customer C1:\n- Order O1 for Widget A\n- Order O2 for Gadget B\n\nLet's cancel both of these orders:", type='text'), ToolUseBlock(id='toolu_01M5i2uKyNWDk7s5pQKF3uh3', input={'order_id': 'O1'}, name='cancel_order', type='tool_use')], model='claude-3-haiku-20240307', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 745; Out: 107; Cache create: 0; Cache read: 0; Total: 852)
+ - Cancelling order O2
+ Message(id='msg_01CQg3uJfWmCFRYBAWyUdXAC', content=[ToolUseBlock(id='toolu_012fBAoEhpixe16w7dZFLsnD', input={'order_id': 'O2'}, name='cancel_order', type='tool_use')], model='claude-3-haiku-20240307', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 864; Out: 57; Cache create: 0; Cache read: 0; Total: 921)
+ Message(id='msg_01SghTUGjDJAKifcq7pyiLKb', content=[TextBlock(text='Both order cancellations were successful. I have now cancelled all orders for customer C1.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 933; Out: 23; Cache create: 0; Cache read: 0; Total: 956)
+
+Both order cancellations were successful. I have now cancelled all
+orders for customer C1.
+
+
+
+- id: `msg_01SghTUGjDJAKifcq7pyiLKb`
+- content:
+ `[{'text': 'Both order cancellations were successful. I have now cancelled all orders for customer C1.', 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 933, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+OK Claude thinks the orders were cancelled – let’s check one:
+
+``` python
+chat.toolloop('What is the status of order O2?')
+```
+
+ - Retrieving order O2
+
+The status of order O2 is now ‘Cancelled’ since I successfully cancelled
+that order earlier.
+
+
+
+- id: `msg_01VdootDagiKh44zVBhHBMnK`
+- content:
+ `[{'text': "The status of order O2 is now 'Cancelled' since I successfully cancelled that order earlier.", 'type': 'text'}]`
+- model: `claude-3-haiku-20240307`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 1095, 'output_tokens': 26, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+## Code interpreter
+
+Here is an example of using `toolloop` to implement a simple code
+interpreter with additional tools.
+
+``` python
+from toolslm.shell import get_shell
+from fastcore.meta import delegates
+import traceback
+```
+
+``` python
+@delegates()
+class CodeChat(Chat):
+ imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
+ def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
+ super().__init__(model=model, **kwargs)
+ self.ask = ask
+ self.tools.append(self.run_cell)
+ self.shell = get_shell()
+ self.shell.run_cell('import '+self.imps)
+```
+
+We have one additional parameter to creating a `CodeChat` beyond what we
+pass to [`Chat`](https://claudette.answer.ai/core.html#chat), which is
+`ask` – if that’s `True`, we’ll prompt the user before running code.
+
+``` python
+@patch
+def run_cell(
+ self:CodeChat,
+ code:str, # Code to execute in persistent IPython session
+): # Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute
+ "Asks user for permission, and if provided, executes python `code` using persistent IPython session."
+ confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
+ if self.ask and input(confirm): return '#DECLINED#'
+ try: res = self.shell.run_cell(code)
+ except Exception as e: return traceback.format_exc()
+ return res.stdout if res.result is None else res.result
+```
+
+We just pass along requests to run code to the shell’s implementation.
+Claude often prints results instead of just using the last expression,
+so we capture stdout in those cases.
+
+``` python
+sp = f'''You are a knowledgable assistant. Do not use tools unless needed.
+Don't do complex calculations yourself -- use code for them.
+The following modules are pre-imported for `run_cell` automatically:
+
+{CodeChat.imps}
+
+Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.
+
+If a tool returns `#DECLINED#` report to the user that the attempt was declined and no further progress can be made.'''
+```
+
+``` python
+def get_user(ignored:str='' # Unused parameter
+ ): # Username of current user
+ "Get the username of the user running this session"
+ print("Looking up username")
+ return 'Jeremy'
+```
+
+In order to test out multi-stage tool use, we create a mock function
+that Claude can call to get the current username.
+
+``` python
+model = models[1]
+```
+
+``` python
+chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
+```
+
+Claude gets confused sometimes about how tools work, so we use examples
+to remind it:
+
+``` python
+chat.h = [
+ 'Calculate the square root of `10332`', 'math.sqrt(10332)',
+ '#DECLINED#', 'I am sorry but the request to execute that was declined and no further progress can be made.'
+]
+```
+
+Providing a callable to toolloop’s `trace_func` lets us print out
+information during the loop:
+
+``` python
+def _show_cts(r):
+ for o in r.content:
+ if hasattr(o,'text'): print(o.text)
+ nm = getattr(o, 'name', None)
+ if nm=='run_cell': print(o.input['code'])
+ elif nm: print(f'{o.name}({o.input})')
+```
+
+…and toolloop’s `cont_func` callable lets us provide a function which,
+if it returns `False`, stops the loop:
+
+``` python
+def _cont_decline(c):
+ return nested_idx(c, 'content', 'content') != '#DECLINED#'
+```
+
+Now we can try our code interpreter. We start by asking for a function
+to be created, which we’ll use in the next prompt to test that the
+interpreter is persistent.
+
+``` python
+pr = '''Create a 1-line function `checksum` for a string `s`,
+that multiplies together the ascii values of each character in `s` using `reduce`.'''
+chat.toolloop(pr, temp=0.2, trace_func=_show_cts, cont_func=_cont_decline)
+```
+
+ Press Enter to execute, or enter "n" to skip?
+ ```
+ from functools import reduce
+ checksum = lambda s: reduce(lambda x, y: x * ord(y), s, 1)
+ print("Function 'checksum' has been created.")
+ print("Example usage: checksum('hello') =", checksum('hello'))
+ ```
+
+ Certainly! I'll create a one-line function called `checksum` that multiplies together the ASCII values of each character in a given string `s` using the `reduce` function. To do this, we'll use the `run_cell` function to execute the Python code. Here's how we'll do it:
+ from functools import reduce
+ checksum = lambda s: reduce(lambda x, y: x * ord(y), s, 1)
+ print("Function 'checksum' has been created.")
+ print("Example usage: checksum('hello') =", checksum('hello'))
+ Great! The `checksum` function has been created successfully. Let me explain the function:
+
+ 1. We import `reduce` from the `functools` module (which is pre-imported in the environment).
+ 2. The `checksum` function is defined as a lambda function that takes a string `s` as input.
+ 3. Inside the lambda, we use `reduce` to multiply the ASCII values of each character in the string.
+ 4. The `reduce` function uses another lambda that multiplies the accumulator `x` by the ASCII value of each character `y` (obtained using `ord(y)`).
+ 5. The initial value for the reduction is 1, ensuring that the multiplication starts correctly.
+
+ As we can see from the example output, calling `checksum('hello')` returns `13599570816`, which is the product of the ASCII values of 'h', 'e', 'l', 'l', and 'o'.
+
+ You can now use this `checksum` function with any string. For example, if you want to calculate the checksum of another string, you can do so by calling `checksum('your_string_here')`.
+
+ Is there anything else you'd like to do with this function or any other string operations you're interested in?
+
+Great! The `checksum` function has been created successfully. Let me
+explain the function:
+
+1. We import `reduce` from the `functools` module (which is
+ pre-imported in the environment).
+2. The `checksum` function is defined as a lambda function that takes a
+ string `s` as input.
+3. Inside the lambda, we use `reduce` to multiply the ASCII values of
+ each character in the string.
+4. The `reduce` function uses another lambda that multiplies the
+ accumulator `x` by the ASCII value of each character `y` (obtained
+ using `ord(y)`).
+5. The initial value for the reduction is 1, ensuring that the
+ multiplication starts correctly.
+
+As we can see from the example output, calling `checksum('hello')`
+returns `13599570816`, which is the product of the ASCII values of ‘h’,
+‘e’, ‘l’, ‘l’, and ‘o’.
+
+You can now use this `checksum` function with any string. For example,
+if you want to calculate the checksum of another string, you can do so
+by calling `checksum('your_string_here')`.
+
+Is there anything else you’d like to do with this function or any other
+string operations you’re interested in?
+
+
+
+- id: `msg_01LguR5AhsAdeBbYNRC3oNQM`
+- content:
+ `[{'text': "Great! The`checksum`function has been created successfully. Let me explain the function:\n\n1. We import`reduce`from the`functools`module (which is pre-imported in the environment).\n2. The`checksum`function is defined as a lambda function that takes a string`s`as input.\n3. Inside the lambda, we use`reduce`to multiply the ASCII values of each character in the string.\n4. The`reduce`function uses another lambda that multiplies the accumulator`x`by the ASCII value of each character`y`(obtained using`ord(y)`).\n5. The initial value for the reduction is 1, ensuring that the multiplication starts correctly.\n\nAs we can see from the example output, calling`checksum(‘hello’)`returns`13599570816`, which is the product of the ASCII values of 'h', 'e', 'l', 'l', and 'o'.\n\nYou can now use this`checksum`function with any string. For example, if you want to calculate the checksum of another string, you can do so by calling`checksum(‘your_string_here’)`.\n\nIs there anything else you'd like to do with this function or any other string operations you're interested in?", 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 908, 'output_tokens': 281, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+
+
+By asking for a calculation to be done on the username, we force it to
+use multiple steps:
+
+``` python
+pr = 'Use it to get the checksum of the username of this session.'
+chat.toolloop(pr, trace_func=_show_cts)
+```
+
+ Looking up username
+ Certainly! I'll use the `checksum` function we just created to calculate the checksum of the username for this session. To do this, we'll first need to get the username using the `get_user` function, and then we'll apply the `checksum` function to that username. Here's how we'll do it:
+ get_user({'ignored': ''})
+ Press Enter to execute, or enter "n" to skip?
+ ```
+ username = "Jeremy"
+ result = checksum(username)
+ print(f"The checksum of the username '{username}' is: {result}")
+ ```
+
+ Now that we have the username "Jeremy", let's calculate its checksum:
+ username = "Jeremy"
+ result = checksum(username)
+ print(f"The checksum of the username '{username}' is: {result}")
+ There you have it! The checksum of the username "Jeremy" for this session is 1134987783204.
+
+ To break it down:
+ 1. We first retrieved the username "Jeremy" using the `get_user` function.
+ 2. Then we used our previously defined `checksum` function to calculate the checksum of this username.
+ 3. The result, 1134987783204, is the product of the ASCII values of each character in "Jeremy".
+
+ For verification, we can manually calculate this:
+ - ASCII values: J (74), e (101), r (114), e (101), m (109), y (121)
+ - 74 * 101 * 114 * 101 * 109 * 121 = 1134987783204
+
+ This confirms that our `checksum` function is working correctly for the username of this session.
+
+ Is there anything else you'd like to do with the username or the checksum function?
+
+There you have it! The checksum of the username “Jeremy” for this
+session is 1134987783204.
+
+To break it down: 1. We first retrieved the username “Jeremy” using the
+`get_user` function. 2. Then we used our previously defined `checksum`
+function to calculate the checksum of this username. 3. The result,
+1134987783204, is the product of the ASCII values of each character in
+“Jeremy”.
+
+For verification, we can manually calculate this: - ASCII values: J
+(74), e (101), r (114), e (101), m (109), y (121) - 74 \* 101 \* 114 \*
+101 \* 109 \* 121 = 1134987783204
+
+This confirms that our `checksum` function is working correctly for the
+username of this session.
+
+Is there anything else you’d like to do with the username or the
+checksum function?
+
+
+
+- id: `msg_01Htvo4rw9rBaozPapFy8XQE`
+- content:
+ `[{'text': 'There you have it! The checksum of the username "Jeremy" for this session is 1134987783204.\n\nTo break it down:\n1. We first retrieved the username "Jeremy" using the`get_user`function.\n2. Then we used our previously defined`checksum`function to calculate the checksum of this username.\n3. The result, 1134987783204, is the product of the ASCII values of each character in "Jeremy".\n\nFor verification, we can manually calculate this:\n- ASCII values: J (74), e (101), r (114), e (101), m (109), y (121)\n- 74 * 101 * 114 * 101 * 109 * 121 = 1134987783204\n\nThis confirms that our`checksum`function is working correctly for the username of this session.\n\nIs there anything else you\'d like to do with the username or the checksum function?', 'type': 'text'}]`
+- model: `claude-3-5-sonnet-20240620`
+- role: `assistant`
+- stop_reason: `end_turn`
+- stop_sequence: `None`
+- type: `message`
+- usage:
+ `{'input_tokens': 1474, 'output_tokens': 215, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
+
+