)
-
-*Convert function `f` into a JSON schema `dict` for tool use.*
-
-``` python
-a,b = 604542,6458932
-pr = f"What is {a}+{b}?"
-sp = "You must use the `sums` function instead of adding yourself, but don't mention what tools you use."
-tools=[get_schema(sums)]
+``` xml
+<html>
+  <p>This is a paragraph</p>
+  <hr />
+  <img src="http://example.prg" />
+  <div class="foo">
+    <h1>This is a header</h1>
+    <h2 style="k:v">This is a sub-header</h2>
+  </div>
+</html>
+```
-We’ll start a dialog with Claude now. We’ll store the messages of our
-dialog in `msgs`. The first message will be our prompt `pr`, and we’ll
-pass our `tools` schema.
-
``` python
-msgs = mk_msgs(pr)
-r = c(msgs, sp=sp, tools=tools)
-r
+a = dict(surname='Howard', firstnames=['Jeremy','Peter'],
+ address=dict(state='Queensland',country='Australia'))
+hl_md(json_to_xml(a, 'person'))
```
-ToolUseBlock(id='toolu_01CsuZfPAas75MkDABXAvjWD', input={'a': 604542,
-'b': 6458932}, name='sums', type='tool_use')
-
-
-
-- id: msg_01StvQvvrnwaBtuUwHQLrpFt
-- content: \[{'id': 'toolu_01CsuZfPAas75MkDABXAvjWD', 'input': {'a':
-  604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}\]
-- model: claude-3-haiku-20240307
-- role: assistant
-- stop_reason: tool_use
-- stop_sequence: None
-- type: message
-- usage: {'input_tokens': 414, 'output_tokens': 72}
-
-
-
-When Claude decides that it should use a tool, it passes back a
-`ToolUseBlock` with the name of the tool to call, and the params to use.
-
-We need to append the response to the dialog so Claude knows what’s
-happening (since it’s stateless).
-
-``` python
-msgs.append(mk_msg(r))
+``` xml
+<person>
+  <surname>Howard</surname>
+  <firstnames>
+    - Jeremy
+    - Peter
+  </firstnames>
+  <address>
+    <state>Queensland</state>
+    <country>Australia</country>
+  </address>
+</person>
+```
-We don’t want to allow it to call just any possible function (that would
-be a security disaster!) so we create a *namespace* – that is, a
-dictionary of allowable function names to call.
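The idea can be sketched in plain Python. Note that `dispatch` below is a hypothetical stand-in for claudette's `call_func`, written out only to show the namespace check; the `sums` function and numbers come from the examples above:

``` python
def sums(
    a:int,  # First thing to sum
    b:int=1 # Second thing to sum
) -> int:   # The sum of the inputs
    "Adds a + b."
    return a + b

ns = {'sums': sums}  # the only functions Claude may call

def dispatch(name, inputs, ns):
    # Look the requested tool up in the namespace; reject unknown names,
    # so Claude can never trigger an arbitrary function.
    if name not in ns: raise ValueError(f"tool {name!r} not in namespace")
    return ns[name](**inputs)

dispatch('sums', {'a': 604542, 'b': 6458932}, ns)  # 7063474
```

Anything outside the namespace, such as a made-up tool name, raises an error instead of being executed.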
-
------------------------------------------------------------------------
source
-### call_func
-
-> call_func (tr:collections.abc.Mapping,
-> ns:Optional[collections.abc.Mapping]=None)
-
-*Call the function in the tool response `tr`, using namespace `ns`.*
-
-| | **Type** | **Default** | **Details** |
-|-----|----------|-------------|--------------------------------------------------------|
-| tr | Mapping | | Tool use request response from Claude |
-| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |
+### Chat
-We can now use the function requested by Claude. We look it up in `ns`,
-and pass in the provided parameters.
+> Chat (model:Optional[str]=None, cli:Optional[claudette.core.Client]=None,
+> sp='', tools:Optional[list]=None)
-``` python
-res = call_func(r, ns=ns)
-res
-```
+*Anthropic chat client.*
- 7063474
+| | **Type** | **Default** | **Details** |
+|-------|----------|-------------|------------------------------------------------|
+| model | Optional | None | Model to use (leave empty if passing `cli`) |
+| cli | Optional | None | Client to use (leave empty if passing `model`) |
+| sp | str | | Optional system prompt |
+| tools | Optional | None | List of tools to make available to Claude |
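The key point is that `Chat` is stateful: each prompt and each reply is appended to a dialog history, which is sent back to Claude on every call. Here's a toy sketch of that pattern (not the real class -- `respond` is a hypothetical stand-in for the API call):

``` python
class ToyChat:
    "Minimal sketch of a stateful chat: keeps the whole dialog in `h`."
    def __init__(self, respond, sp=''):
        self.respond,self.sp,self.h = respond,sp,[]
    def __call__(self, pr):
        self.h.append({'role': 'user', 'content': pr})
        r = self.respond(self.h, self.sp)  # real class calls the Anthropic API here
        self.h.append({'role': 'assistant', 'content': r})
        return r

chat = ToyChat(lambda h,sp: f"({len(h)} messages so far)")
chat("I'm Jeremy")
chat("What's my name?")
len(chat.h)  # 4: two user turns, two assistant replies
```

Because the full history is resent each time, the (stateless) model can answer follow-up questions like "What's my name?".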
------------------------------------------------------------------------
@@ -672,50 +505,48 @@ res
href="https://github.com/AnswerDotAI/claudette/blob/main/claudette/core.py#LNone"
target="_blank" style="float:right; font-size:smaller">source
-### mk_toolres
-
-> mk_toolres (r:collections.abc.Mapping, res=None,
-> ns:Optional[collections.abc.Mapping]=None)
-
-*Create a `tool_result` message from response `r`.*
+### Chat.\_\_call\_\_
-| | **Type** | **Default** | **Details** |
-|-----|----------|-------------|----------------------------------------------------------------------------------------------------------------------------------------|
-| r | Mapping | | Tool use request response from Claude |
-| res | NoneType | None | The result of calling the tool (calculated with [`call_func`](https://AnswerDotAI.github.io/claudette/core.html#call_func) by default) |
-| ns | Optional | None | Namespace to search for tools |
+> Chat.__call__ (pr, sp='', temp=0, maxtok=4096,
+> stop:Optional[list[str]]=None,
+> ns:Optional[collections.abc.Mapping]=None, prefill='',
+> **kw)
-In order to tell Claude the result of the tool call, we pass back a
-`tool_result` message, created by calling
-[`call_func`](https://AnswerDotAI.github.io/claudette/core.html#call_func).
+*Add prompt `pr` to dialog and get a response from Claude*
-``` python
-tr = mk_toolres(r, res=res, ns=ns)
-tr
-```
+| | **Type** | **Default** | **Details** |
+|---------|----------|-------------|-------------------------------------------------------------|
+| pr | | | Prompt / message |
+| sp | str | | The system prompt |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| stop | Optional | None | Stop sequences |
+| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |
+| prefill | str | | Optional prefill to pass to Claude as start of its response |
+| kw | | | |
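The `prefill` parameter works by appending a partial `assistant` message that Claude must continue. A sketch of the message list this produces (illustrative only; the exact wire format is defined by Anthropic's API):

``` python
def mk_prefill_msgs(pr, prefill=''):
    "Hypothetical helper: prompt as a user turn, prefill as the start of the reply."
    msgs = [{'role': 'user', 'content': pr}]
    if prefill: msgs.append({'role': 'assistant', 'content': prefill})
    return msgs

mk_prefill_msgs("What is the meaning of life?", 'According to Douglas Adams,')
```

Claude's response then picks up right after the prefill text, which is why the reply in the example below begins "According to Douglas Adams,".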
- {'role': 'user',
- 'content': [{'type': 'tool_result',
- 'tool_use_id': 'toolu_01CsuZfPAas75MkDABXAvjWD',
- 'content': '7063474'}]}
+------------------------------------------------------------------------
-We add this to our dialog, and now Claude has all the information it
-needs to answer our question.
+source
-``` python
-msgs.append(tr)
-contents(c(msgs, sp=sp, tools=tools))
-```
+### Chat.stream
- 'The sum of 604542 and 6458932 is 7063474.'
+> Chat.stream (pr, sp='', temp=0, maxtok=4096,
+> stop:Optional[list[str]]=None, prefill='', **kw)
-## XML helpers
+*Add prompt `pr` to dialog and stream the response from Claude*
-Claude works well with XML inputs, but XML can be a bit clunky to work
-with manually. Therefore, we create a couple of more streamlined
-approaches for XML generation. You don’t need to use these if you don’t
-find them useful – you can always just use plain strings for XML
-directly.
+| | **Type** | **Default** | **Details** |
+|---------|----------|-------------|-------------------------------------------------------------|
+| pr | | | Prompt / message |
+| sp | str | | The system prompt |
+| temp | int | 0 | Temperature |
+| maxtok | int | 4096 | Maximum tokens |
+| stop | Optional | None | Stop sequences |
+| prefill | str | | Optional prefill to pass to Claude as start of its response |
+| kw | | | |
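`Chat.stream` returns a generator that yields pieces of the reply as they arrive, so you can display partial output immediately. The consumption pattern looks like this (with a hypothetical generator standing in for the API):

``` python
def toy_stream(chunks):
    "Stand-in for Chat.stream: yield the reply piece by piece."
    for o in chunks:
        yield o

out = ''
for o in toy_stream(['The answer ', 'is ', '42.']):
    out += o  # with the real class: `for o in chat.stream(pr): print(o, end='')`
out
```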
------------------------------------------------------------------------
@@ -736,73 +567,23 @@ target="_blank" style="float:right; font-size:smaller">source
| c | Optional | None | Children |
| kw | | | |
-An XML node contains a tag, optional children, and optional attributes.
-[`xt`](https://AnswerDotAI.github.io/claudette/core.html#xt) creates a
-tuple of these three things, which we will use to general XML shortly.
-Attributes are passed as kwargs; since these might conflict with
-reserved words in Python, you can optionally add a `_` prefix and it’ll
-be stripped off.
-
-``` python
-xt('x-custom', ['hi'], _class='bar')
-```
-
- ('x-custom', ['hi'], {'class': 'bar'})
-
-``` python
-from claudette.core import div,img,h1,h2,p,hr,html
-```
-
-If you have to use a lot of tags of the same type, it’s convenient to
-use `partial` to create specialised functions for them. Here, we’re
-creating functions for some common HTML tags. Here’s an example of using
-them:
-
-``` python
-a = html([
- p('This is a paragraph'),
- hr(),
- img(src='http://example.prg'),
- div([
- h1('This is a header'),
- h2('This is a sub-header', style='k:v'),
- ], _class='foo')
-])
-a
-```
-
- ('html',
- [('p', 'This is a paragraph', {}),
- ('hr', None, {}),
- ('img', None, {'src': 'http://example.prg'}),
- ('div',
- [('h1', 'This is a header', {}),
- ('h2', 'This is a sub-header', {'style': 'k:v'})],
- {'class': 'foo'})],
- {})
-
------------------------------------------------------------------------
source
-### hl_md
-
-> hl_md (s, lang='xml')
-
-*Syntax highlight `s` using `lang`.*
+### json_to_xml
-When we display XML in a notebook, it’s nice to highlight it, so we
-create a function to simplify that:
+> json_to_xml (d:dict, rnm:str)
-``` python
-hl_md('a child')
-```
+*Convert `d` to XML.*
-``` xml
-a child
-```
+| | **Type** | **Details** |
+|-------------|----------|----------------------------|
+| d | dict | JSON dictionary to convert |
+| rnm | str | Root name |
+| **Returns** | **str** | |
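A minimal sketch of the mapping (assumed behaviour for illustration, not claudette's exact code): dict keys become nested tags, and list values become `- ` bullet lines inside their parent tag.

``` python
def toy_json_to_xml(d, rnm):
    "Recursively render a JSON dict as indented XML; lists become bullets."
    def go(v, nm, ind):
        pad = ' '*ind
        if isinstance(v, dict):
            body = '\n'.join(go(x, k, ind+2) for k,x in v.items())
            return f"{pad}<{nm}>\n{body}\n{pad}</{nm}>"
        if isinstance(v, list):
            body = '\n'.join(f"{pad}  - {x}" for x in v)
            return f"{pad}<{nm}>\n{body}\n{pad}</{nm}>"
        return f"{pad}<{nm}>{v}</{nm}>"
    return go(d, rnm, 0)

a = dict(surname='Howard', firstnames=['Jeremy','Peter'],
         address=dict(state='Queensland',country='Australia'))
print(toy_json_to_xml(a, 'person'))
```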
------------------------------------------------------------------------
@@ -821,62 +602,6 @@ target="_blank" style="float:right; font-size:smaller">source
| node | tuple | | XML structure in [`xt`](https://AnswerDotAI.github.io/claudette/core.html#xt) format |
| hl | bool | False | Syntax highlight response? |
-Now we can convert that HTML data structure we created into XML:
-
-``` python
-to_xml(a, hl=True)
-```
-
-``` xml
-<html>
-  <p>This is a paragraph</p>
-  <hr />
-  <img src="http://example.prg" />
-  <div class="foo">
-    <h1>This is a header</h1>
-    <h2 style="k:v">This is a sub-header</h2>
-  </div>
-</html>
-```
-
-------------------------------------------------------------------------
-
-source
-
-### json_to_xml
-
-> json_to_xml (d:dict, rnm:str)
-
-*Convert `d` to XML.*
-
-| | **Type** | **Details** |
-|-------------|----------|----------------------------|
-| d | dict | JSON dictionary to convert |
-| rnm | str | Root name |
-| **Returns** | **str** | |
-
JSON doesn’t map as nicely to XML as the data structure used in the
previous section, but for simple XML trees it can be convenient – for
example:
-
-``` python
-a = dict(surname='Howard', firstnames=['Jeremy','Peter'],
- address=dict(state='Queensland',country='Australia'))
-hl_md(json_to_xml(a, 'person'))
-```
-
-``` xml
-<person>
-  <surname>Howard</surname>
-  <firstnames>
-    - Jeremy
-    - Peter
-  </firstnames>
-  <address>
-    <state>Queensland</state>
-    <country>Australia</country>
-  </address>
-</person>
-```
diff --git a/index.ipynb b/index.ipynb
index 7fb495d..4e33654 100644
--- a/index.ipynb
+++ b/index.ipynb
@@ -26,9 +26,7 @@
"id": "576899c4",
"metadata": {},
"source": [
- "*Claudette* is a wrapper for Anthropic's [Python SDK](https://github.com/anthropics/anthropic-sdk-python).\n",
- "\n",
- "TODO: This README is incomplete."
+ "*Claudette* is a wrapper for Anthropic's [Python SDK](https://github.com/anthropics/anthropic-sdk-python)."
]
},
{
@@ -107,7 +105,9 @@
"import claudette\n",
"```\n",
"\n",
- "...and then add the prefix `claudette.` to any usages of the module."
+ "...and then add the prefix `claudette.` to any usages of the module.\n",
+ "\n",
+ "Claudette provides `models`, which is a list of models currently available from the SDK."
]
},
{
@@ -135,10 +135,10 @@
},
{
"cell_type": "markdown",
- "id": "25398d5a",
+ "id": "73d95587",
"metadata": {},
"source": [
- "These are the models currently available from the SDK."
+ "For these examples, we'll use Haiku, since it's fast and cheap (and surprisingly good!)"
]
},
{
@@ -153,61 +153,46 @@
},
{
"cell_type": "markdown",
- "id": "55d9ad70",
+ "id": "ff6f6471-8061-4fdd-85a1-25fdc27c5cf3",
"metadata": {},
"source": [
- "For examples, we'll use Haiku, since it's fast and cheap (and surprisingly good!)"
+ "## Chat"
]
},
{
"cell_type": "markdown",
- "id": "ff6f6471-8061-4fdd-85a1-25fdc27c5cf3",
+ "id": "5bfa05ce",
"metadata": {},
"source": [
- "## Chat"
+    "The main interface to Claudette is the `Chat` class, which provides a stateful interface to Claude:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "9bcabd62",
+ "id": "d3e344c4",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "---\n",
- "\n",
- "### Chat\n",
+ "Ahoy there, Jeremy! Ye be speakin' to Cap'n Blackheart, the most fearsome pirate on the high seas. What can I be doin' for ye, me hearty?\n",
"\n",
- "> Chat (model:Optional[str]=None, cli:Optional[claudette.core.Client]=None,\n",
- "> sp='', tools:Optional[list]=None)\n",
+ "\n",
"\n",
- "*Anthropic chat client.*\n",
+ "- id: msg_015NDhj1QD3A7YFBSWfm95Wu\n",
+ "- content: [{'text': \"Ahoy there, Jeremy! Ye be speakin' to Cap'n Blackheart, the most fearsome pirate on the high seas. What can I be doin' for ye, me hearty?\", 'type': 'text'}]\n",
+ "- model: claude-3-haiku-20240307\n",
+ "- role: assistant\n",
+ "- stop_reason: end_turn\n",
+ "- stop_sequence: None\n",
+ "- type: message\n",
+ "- usage: {'input_tokens': 29, 'output_tokens': 51}\n",
"\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| model | Optional | None | Model to use (leave empty if passing `cli`) |\n",
- "| cli | Optional | None | Client to use (leave empty if passing `model`) |\n",
- "| sp | str | | Optional system prompt |\n",
- "| tools | Optional | None | List of tools to make available to Claude |"
+ " "
],
"text/plain": [
- "---\n",
- "\n",
- "### Chat\n",
- "\n",
- "> Chat (model:Optional[str]=None, cli:Optional[claudette.core.Client]=None,\n",
- "> sp='', tools:Optional[list]=None)\n",
- "\n",
- "*Anthropic chat client.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| model | Optional | None | Model to use (leave empty if passing `cli`) |\n",
- "| cli | Optional | None | Client to use (leave empty if passing `model`) |\n",
- "| sp | str | | Optional system prompt |\n",
- "| tools | Optional | None | List of tools to make available to Claude |"
+ "ToolsBetaMessage(id='msg_015NDhj1QD3A7YFBSWfm95Wu', content=[TextBlock(text=\"Ahoy there, Jeremy! Ye be speakin' to Cap'n Blackheart, the most fearsome pirate on the high seas. What can I be doin' for ye, me hearty?\", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 29; Out: 51; Total: 80)"
]
},
"execution_count": null,
@@ -216,7 +201,9 @@
}
],
"source": [
- "show_doc(Chat)"
+ "chat = Chat(model, sp=\"\"\"You are a helpful, concise, pirate assistant.\n",
+ "Talk like a pirate.\"\"\")\n",
+ "chat(\"I'm Jeremy\")"
]
},
{
@@ -224,64 +211,27 @@
"execution_count": null,
"id": "775e570f",
"metadata": {},
- "outputs": [],
- "source": [
- "chat = Chat(model, sp=\"You are a helpful assistant.\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "3489fb34",
- "metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "---\n",
- "\n",
- "### Chat.__call__\n",
+ "Arr, ye be Jeremy, me scurvy dog! I be rememberin' that from just a moment ago. Ye best not be tryin' to fool old Cap'n Blackheart, or ye'll be walkin' the plank!\n",
"\n",
- "> Chat.__call__ (pr, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None,\n",
- "> ns:Optional[collections.abc.Mapping]=None, prefill='',\n",
- "> **kw)\n",
+ "\n",
"\n",
- "*Add prompt `pr` to dialog and get a response from Claude*\n",
+ "- id: msg_01Kfdw6MjPupY2CiANUx1sRi\n",
+ "- content: [{'text': \"Arr, ye be Jeremy, me scurvy dog! I be rememberin' that from just a moment ago. Ye best not be tryin' to fool old Cap'n Blackheart, or ye'll be walkin' the plank!\", 'type': 'text'}]\n",
+ "- model: claude-3-haiku-20240307\n",
+ "- role: assistant\n",
+ "- stop_reason: end_turn\n",
+ "- stop_sequence: None\n",
+ "- type: message\n",
+ "- usage: {'input_tokens': 88, 'output_tokens': 58}\n",
"\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| pr | | | Prompt / message |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |\n",
- "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
- "| kw | | | |"
+ " "
],
"text/plain": [
- "---\n",
- "\n",
- "### Chat.__call__\n",
- "\n",
- "> Chat.__call__ (pr, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None,\n",
- "> ns:Optional[collections.abc.Mapping]=None, prefill='',\n",
- "> **kw)\n",
- "\n",
- "*Add prompt `pr` to dialog and get a response from Claude*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| pr | | | Prompt / message |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |\n",
- "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
- "| kw | | | |"
+ "ToolsBetaMessage(id='msg_01Kfdw6MjPupY2CiANUx1sRi', content=[TextBlock(text=\"Arr, ye be Jeremy, me scurvy dog! I be rememberin' that from just a moment ago. Ye best not be tryin' to fool old Cap'n Blackheart, or ye'll be walkin' the plank!\", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 88; Out: 58; Total: 146)"
]
},
"execution_count": null,
@@ -290,29 +240,34 @@
}
],
"source": [
- "show_doc(Chat.__call__)"
+ "r = chat(\"What's my name?\")\n",
+ "r"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d1fd5c8e",
+ "metadata": {},
+ "source": [
+ "As you see above, displaying the results of a call in a notebook shows just the message contents, with the other details hidden behind a collapsible section. Alternatively you can `print` the details:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "bf5ae8f7",
+ "id": "427f16ce",
"metadata": {},
"outputs": [
{
- "data": {
- "text/plain": [
- "'Your name is Jeremy, as you told me earlier.'"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "ToolsBetaMessage(id='msg_01Kfdw6MjPupY2CiANUx1sRi', content=[TextBlock(text=\"Arr, ye be Jeremy, me scurvy dog! I be rememberin' that from just a moment ago. Ye best not be tryin' to fool old Cap'n Blackheart, or ye'll be walkin' the plank!\", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 88; Out: 58; Total: 146)\n"
+ ]
}
],
"source": [
- "chat(\"I'm Jeremy\")\n",
- "contents(chat(\"What's my name?\"))"
+ "print(r)"
]
},
{
@@ -320,9 +275,7 @@
"id": "967f0f8c",
"metadata": {},
"source": [
- "Claude supports adding an extra `assistant` message at the end, which contains the *prefill* -- i.e. the text we want Claude to assume the response starts with.\n",
- "\n",
- "Let's try it out:"
+ "Claude supports adding an extra `assistant` message at the end, which contains the *prefill* -- i.e. the text we want Claude to assume the response starts with. Let's try it out:"
]
},
{
@@ -334,23 +287,23 @@
{
"data": {
"text/markdown": [
- "According to Douglas Adams, \"The answer to the ultimate question of life, the universe, and everything is 42.\"\n",
+ "According to Douglas Adams,the meaning of life is \"42\". But this be a question that has vexed the greatest minds of all time, me bucko. As a pirate, I be more concerned with the simple pleasures in life - a bottle of rum, a crew of loyal scallywags, and the open sea. The true meaning of life be findin' yer own path and livin' it to the fullest, savvy? Now, enough of this philosophical nonsense - let's go plunder some treasure!\n",
"\n",
"\n",
"\n",
- "- id: msg_011BL35YKAgwg8UR7nKjM1p2\n",
- "- content: [{'text': 'According to Douglas Adams, \"The answer to the ultimate question of life, the universe, and everything is 42.\"', 'type': 'text'}]\n",
+ "- id: msg_012JW54NgW2HWcZYzpUfSBTE\n",
+ "- content: [{'text': 'According to Douglas Adams,the meaning of life is \"42\". But this be a question that has vexed the greatest minds of all time, me bucko. As a pirate, I be more concerned with the simple pleasures in life - a bottle of rum, a crew of loyal scallywags, and the open sea. The true meaning of life be findin\\' yer own path and livin\\' it to the fullest, savvy? Now, enough of this philosophical nonsense - let\\'s go plunder some treasure!', 'type': 'text'}]\n",
"- model: claude-3-haiku-20240307\n",
"- role: assistant\n",
"- stop_reason: end_turn\n",
"- stop_sequence: None\n",
"- type: message\n",
- "- usage: {'input_tokens': 109, 'output_tokens': 23}\n",
+ "- usage: {'input_tokens': 161, 'output_tokens': 111}\n",
"\n",
" "
],
"text/plain": [
- "ToolsBetaMessage(id='msg_011BL35YKAgwg8UR7nKjM1p2', content=[TextBlock(text='According to Douglas Adams, \"The answer to the ultimate question of life, the universe, and everything is 42.\"', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 109; Out: 23; Total: 132)"
+ "ToolsBetaMessage(id='msg_012JW54NgW2HWcZYzpUfSBTE', content=[TextBlock(text='According to Douglas Adams,the meaning of life is \"42\". But this be a question that has vexed the greatest minds of all time, me bucko. As a pirate, I be more concerned with the simple pleasures in life - a bottle of rum, a crew of loyal scallywags, and the open sea. The true meaning of life be findin\\' yer own path and livin\\' it to the fullest, savvy? Now, enough of this philosophical nonsense - let\\'s go plunder some treasure!', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 161; Out: 111; Total: 272)"
]
},
"execution_count": null,
@@ -359,133 +312,105 @@
}
],
"source": [
- "q = \"Concisely, what is the meaning of life?\"\n",
- "pref = 'According to Douglas Adams,'\n",
- "chat(q, prefill=pref)"
+ "chat(\"What is the meaning of life?\",\n",
+ " prefill='According to Douglas Adams,')"
]
},
{
- "cell_type": "code",
- "execution_count": null,
- "id": "61df99d0",
+ "cell_type": "markdown",
+ "id": "b03a94d8",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### Chat.stream\n",
- "\n",
- "> Chat.stream (pr, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, prefill='', **kw)\n",
- "\n",
- "*Add prompt `pr` to dialog and stream the response from Claude*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| pr | | | Prompt / message |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
- "| kw | | | |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### Chat.stream\n",
- "\n",
- "> Chat.stream (pr, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, prefill='', **kw)\n",
- "\n",
- "*Add prompt `pr` to dialog and stream the response from Claude*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| pr | | | Prompt / message |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
- "| kw | | | |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
"source": [
- "show_doc(Chat.stream)"
+ "Instead of calling `Chat` directly, you can use `Chat.stream` to stream the results as soon as they arrive (although you will only see the gradual generation if you execute the notebook yourself, of course!)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "4a27af3e",
+ "id": "686dd395",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
- "Unfortunately, the book never explicitly states what the \"ultimate question\" is that corresponds to the answer of 42. That remains a mystery in the Hitchhiker's Guide to the Galaxy series. The meaning of life is left open to interpretation."
+ "*chuckles heartily* Ye be askin' about the legendary answer of \"42\", me hearty? Well, that be the work of the great philosopher and author, Douglas Adams, in his masterpiece \"The Hitchhiker's Guide to the Galaxy\". \n",
+ "\n",
+ "In the story, a mighty supercomputer named Deep Thought is tasked with finding the answer to the ultimate question of life, the universe, and everything. After millions of years of calculation, Deep Thought reveals that the answer is simply the number 42. \n",
+ "\n",
+ "Of course, the true genius of this is that the question itself remains a mystery - for how can one know the meaning of life without first understanding the question? It be a riddle wrapped in an enigma, me bucko!\n",
+ "\n",
+ "As a pirate, I be more concerned with the simple pleasures of life - a bottle of rum, a trusty crew, and the open sea. But I do enjoy a good philosophical puzzle now and then. Now, what say ye we go hunt for some real treasure, eh?"
]
}
],
"source": [
- "for o in chat.stream(\"And what is the question?\"): print(o, end='')"
+ "for o in chat.stream(\"Who or what calculated that?\"): print(o, end='')"
]
},
{
"cell_type": "markdown",
- "id": "5c987815",
+ "id": "0123ade0",
"metadata": {},
"source": [
- "### Tool use"
+ "## Tool use"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f92f8e76",
+ "metadata": {},
+ "source": [
+ "[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude use external tools.\n",
+ "\n",
+ "We'll use [docments](https://fastcore.fast.ai/docments.html) to make defining Python functions as ergonomic as possible. Each parameter (and the return value) should have a type, and a docments comment with the description of what it is. As an example we'll write a simple function that adds numbers together:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "eafef956",
+ "id": "f036706b",
"metadata": {},
"outputs": [],
"source": [
- "sp = \"If asked to add things up, use the `sums` function instead of doing it yourself. Never mention what tools you use.\""
+ "def sums(\n",
+ " # First thing to sum\n",
+ " a:int,\n",
+ " # Second thing to sum\n",
+ " b:int=1\n",
+ "# The sum of the inputs\n",
+ ") -> int:\n",
+ " \"Adds a + b.\"\n",
+ " return a + b"
]
},
{
- "cell_type": "markdown",
- "id": "e217a92a",
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "64654466",
"metadata": {},
+ "outputs": [],
"source": [
- "We automagically get streamlined tool use as well:"
+ "a,b = 604542,6458932\n",
+ "pr = f\"What is {a}+{b}?\""
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "29331f88",
+ "id": "ca3f5cc2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sp = \"If asked to add things up, use the `sums` function instead of doing it yourself. Never mention what tools you use.\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8eff1944",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'What is 604542+6458932?'"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
"source": [
- "pr = f\"What is {a}+{b}?\"\n",
- "pr"
+ "We don't want to allow it to call just any possible function (that would be a security disaster!) so we create a *namespace* -- that is, a dictionary of allowable function names to call."
]
},
{
@@ -647,31 +572,61 @@
"img = fn.read_bytes()"
]
},
+ {
+ "cell_type": "markdown",
+ "id": "d2bbac0f",
+ "metadata": {},
+ "source": [
+ "Claude also supports uploading an image without any text, in which case it'll make a general comment about what it sees. You can then use `Chat` to ask questions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9e941644",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sp = \"You are a helpful assistant.\"\n",
+ "chat = Chat(model, sp=sp)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a68f4497",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "q = \"In brief, what color flowers are in this image?\""
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
- "id": "fdc2159b",
+ "id": "56140fa8",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "---\n",
+ "The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.\n",
"\n",
- "### img_msg\n",
+ "\n",
"\n",
- "> img_msg (data:bytes)\n",
+ "- id: msg_01ArrMvaZoXa1JTjULMentQJ\n",
+ "- content: [{'text': 'The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]\n",
+ "- model: claude-3-haiku-20240307\n",
+ "- role: assistant\n",
+ "- stop_reason: end_turn\n",
+ "- stop_sequence: None\n",
+ "- type: message\n",
+ "- usage: {'input_tokens': 1665, 'output_tokens': 29}\n",
"\n",
- "*Convert image `data` into an encoded `dict`*"
+ " "
],
"text/plain": [
- "---\n",
- "\n",
- "### img_msg\n",
- "\n",
- "> img_msg (data:bytes)\n",
- "\n",
- "*Convert image `data` into an encoded `dict`*"
+ "ToolsBetaMessage(id='msg_01ArrMvaZoXa1JTjULMentQJ', content=[TextBlock(text='The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1665; Out: 29; Total: 1694)"
]
},
"execution_count": null,
@@ -680,42 +635,19 @@
}
],
"source": [
- "show_doc(img_msg)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b395cbee",
- "metadata": {},
- "source": [
- "Anthropic have documented the particular `dict` structure that expect image data to be in, so we have a little function to create that for us."
+ "c([[img, q]])"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "07b3911b",
+ "id": "e396e649",
"metadata": {},
"outputs": [
{
"data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### text_msg\n",
- "\n",
- "> text_msg (s:str)\n",
- "\n",
- "*Convert `s` to a text message*"
- ],
"text/plain": [
- "---\n",
- "\n",
- "### text_msg\n",
- "\n",
- "> text_msg (s:str)\n",
- "\n",
- "*Convert `s` to a text message*"
+ "In: 18; Out: 64; Total: 82"
]
},
"execution_count": null,
@@ -724,54 +656,35 @@
}
],
"source": [
- "show_doc(text_msg)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "614e8245",
- "metadata": {},
- "source": [
- "A Claude message can be a list of image and text parts. So we've also created a helper for making the text parts."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "2bad635d",
- "metadata": {},
- "outputs": [],
- "source": [
- "q = \"In brief, what color flowers are in this image?\"\n",
- "msg = mk_msg([img_msg(img), text_msg(q)])"
+ "c.use"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "9c2a0a1f",
+ "id": "bef1304c",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.\n",
+ "The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\n",
"\n",
"\n",
"\n",
- "- id: msg_01GSzzitXbvkzEJtfJquzSXE\n",
- "- content: [{'text': 'The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]\n",
+ "- id: msg_01535kuKhiN6Do5PTcTmTst7\n",
+ "- content: [{'text': \"The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\", 'type': 'text'}]\n",
"- model: claude-3-haiku-20240307\n",
"- role: assistant\n",
"- stop_reason: end_turn\n",
"- stop_sequence: None\n",
"- type: message\n",
- "- usage: {'input_tokens': 1665, 'output_tokens': 29}\n",
+ "- usage: {'input_tokens': 1681, 'output_tokens': 83}\n",
"\n",
" "
],
"text/plain": [
- "ToolsBetaMessage(id='msg_01GSzzitXbvkzEJtfJquzSXE', content=[TextBlock(text='The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1665; Out: 29; Total: 1694)"
+ "ToolsBetaMessage(id='msg_01535kuKhiN6Do5PTcTmTst7', content=[TextBlock(text=\"The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1681; Out: 83; Total: 1764)"
]
},
"execution_count": null,
@@ -780,27 +693,36 @@
}
],
"source": [
- "c([msg])"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f76150cc",
- "metadata": {},
- "source": [
- "There's not need to manually choose the type of message, since we figure that out from the data of the source data."
+ "chat = Chat(model, sp=sp)\n",
+ "chat(img)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "50f20a38",
+ "id": "94ffc7f3",
"metadata": {},
"outputs": [
{
"data": {
+ "text/markdown": [
+ "The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.\n",
+ "\n",
+ "\n",
+ "\n",
+ "- id: msg_01Ge4M4Z4J6ywg9V8cCXy2aN\n",
+ "- content: [{'text': 'The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.', 'type': 'text'}]\n",
+ "- model: claude-3-haiku-20240307\n",
+ "- role: assistant\n",
+ "- stop_reason: end_turn\n",
+ "- stop_sequence: None\n",
+ "- type: message\n",
+ "- usage: {'input_tokens': 1775, 'output_tokens': 23}\n",
+ "\n",
+ " "
+ ],
"text/plain": [
- "{'type': 'text', 'text': 'Hi'}"
+ "ToolsBetaMessage(id='msg_01Ge4M4Z4J6ywg9V8cCXy2aN', content=[TextBlock(text='The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1775; Out: 23; Total: 1798)"
]
},
"execution_count": null,
@@ -809,35 +731,35 @@
}
],
"source": [
- "_mk_content('Hi')"
+ "chat('What direction is the puppy facing?')"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "c2e92664",
+ "id": "e25daa12",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.\n",
+ "The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.\n",
"\n",
"\n",
"\n",
- "- id: msg_01ArrMvaZoXa1JTjULMentQJ\n",
- "- content: [{'text': 'The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', 'type': 'text'}]\n",
+ "- id: msg_01JbUH6MvqWMvkF8UJVjo33z\n",
+ "- content: [{'text': 'The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.', 'type': 'text'}]\n",
"- model: claude-3-haiku-20240307\n",
"- role: assistant\n",
"- stop_reason: end_turn\n",
"- stop_sequence: None\n",
"- type: message\n",
- "- usage: {'input_tokens': 1665, 'output_tokens': 29}\n",
+ "- usage: {'input_tokens': 1806, 'output_tokens': 53}\n",
"\n",
" "
],
"text/plain": [
- "ToolsBetaMessage(id='msg_01ArrMvaZoXa1JTjULMentQJ', content=[TextBlock(text='The image contains purple and yellow daisy-like flowers, which appear to be daisies or a similar type of flower.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1665; Out: 29; Total: 1694)"
+ "ToolsBetaMessage(id='msg_01JbUH6MvqWMvkF8UJVjo33z', content=[TextBlock(text='The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1806; Out: 53; Total: 1859)"
]
},
"execution_count": null,
@@ -846,43 +768,43 @@
}
],
"source": [
- "c([[img, q]])"
+ "chat('What color is it?')"
]
},
{
"cell_type": "markdown",
- "id": "58eda391",
+ "id": "65b52012",
"metadata": {},
"source": [
- "Claude also supports uploading an image without any text, in which case it'll make a general comment about what it sees. You can then use `Chat` to ask questions:"
+ "## XML helpers"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa15af54",
+ "metadata": {},
+ "source": [
+ "Claude works well with XML inputs, but XML can be a bit clunky to work with manually. Therefore, we create a couple of more streamlined approaches for XML generation. You don't need to use these if you don't find them useful -- you can always just use plain strings for XML directly."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1f063c86",
+ "metadata": {},
+ "source": [
+ "An XML node contains a tag, optional children, and optional attributes. `xt` creates a tuple of these three things, which we will use to general XML shortly. Attributes are passed as kwargs; since these might conflict with reserved words in Python, you can optionally add a `_` prefix and it'll be stripped off."
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "7ed904f9",
+ "id": "180f1934",
"metadata": {},
"outputs": [
{
"data": {
- "text/markdown": [
- "The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\n",
- "\n",
- "\n",
- "\n",
- "- id: msg_01535kuKhiN6Do5PTcTmTst7\n",
- "- content: [{'text': \"The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\", 'type': 'text'}]\n",
- "- model: claude-3-haiku-20240307\n",
- "- role: assistant\n",
- "- stop_reason: end_turn\n",
- "- stop_sequence: None\n",
- "- type: message\n",
- "- usage: {'input_tokens': 1681, 'output_tokens': 83}\n",
- "\n",
- " "
- ],
"text/plain": [
- "ToolsBetaMessage(id='msg_01535kuKhiN6Do5PTcTmTst7', content=[TextBlock(text=\"The image shows a cute puppy, likely a Cavalier King Charles Spaniel, sitting in a grassy area surrounded by purple daisy flowers. The puppy has a friendly, curious expression on its face as it gazes directly at the camera. The contrast between the puppy's soft, fluffy fur and the vibrant flowers creates a charming and picturesque scene.\", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1681; Out: 83; Total: 1764)"
+ "('x-custom', ['hi'], {'class': 'bar'})"
]
},
"execution_count": null,
@@ -891,36 +813,77 @@
}
],
"source": [
- "chat = Chat(model, sp=sp)\n",
- "chat(img)"
+ "xt('x-custom', ['hi'], _class='bar')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a503937",
+ "metadata": {},
+ "source": [
+ "Claudette has functions defined for some common HTML elements:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "964cbd0c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from claudette.core import div,img,h1,h2,p,hr,html"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "da6d8c0e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "a = html([\n",
+ " p('This is a paragraph'),\n",
+ " hr(),\n",
+ " img(src='http://example.prg'),\n",
+ " div([\n",
+ " h1('This is a header'),\n",
+ " h2('This is a sub-header', style='k:v'),\n",
+ " ], _class='foo')\n",
+ "])\n",
+ "a"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a7fe4c6",
+ "metadata": {},
+ "source": [
+ "Now we can convert that HTML data structure we created into XML:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "85e1c14d",
+ "id": "80a0cde7",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.\n",
- "\n",
- "\n",
- "\n",
- "- id: msg_01Ge4M4Z4J6ywg9V8cCXy2aN\n",
- "- content: [{'text': 'The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.', 'type': 'text'}]\n",
- "- model: claude-3-haiku-20240307\n",
- "- role: assistant\n",
- "- stop_reason: end_turn\n",
- "- stop_sequence: None\n",
- "- type: message\n",
- "- usage: {'input_tokens': 1775, 'output_tokens': 23}\n",
- "\n",
- " "
+ "```xml\n",
+ "\n",
+ " This is a paragraph
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
This is a header
\n",
+ " This is a sub-header
\n",
+ " \n",
+ "\n",
+ "```"
],
"text/plain": [
- "ToolsBetaMessage(id='msg_01Ge4M4Z4J6ywg9V8cCXy2aN', content=[TextBlock(text='The puppy in the image is facing the camera directly, looking straight ahead with a curious expression.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1775; Out: 23; Total: 1798)"
+ ""
]
},
"execution_count": null,
@@ -929,35 +892,34 @@
}
],
"source": [
- "chat('What direction is the puppy facing?')"
+ "to_xml(a, hl=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "d55c2459",
+ "id": "38827209",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.\n",
- "\n",
- "\n",
- "\n",
- "- id: msg_01JbUH6MvqWMvkF8UJVjo33z\n",
- "- content: [{'text': 'The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.', 'type': 'text'}]\n",
- "- model: claude-3-haiku-20240307\n",
- "- role: assistant\n",
- "- stop_reason: end_turn\n",
- "- stop_sequence: None\n",
- "- type: message\n",
- "- usage: {'input_tokens': 1806, 'output_tokens': 53}\n",
- "\n",
- " "
+ "```xml\n",
+ "\n",
+ " Howard\n",
+ " \n",
+ " - Jeremy
\n",
+ " - Peter
\n",
+ " \n",
+ " \n",
+ " Queensland\n",
+ " Australia\n",
+ " \n",
+ "\n",
+ "```"
],
"text/plain": [
- "ToolsBetaMessage(id='msg_01JbUH6MvqWMvkF8UJVjo33z', content=[TextBlock(text='The puppy in the image has a combination of colors - it has a white and brown/tan coat. The head and ears appear to be a reddish-brown color, while the body is mostly white with some tan/brown patches.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 1806; Out: 53; Total: 1859)"
+ ""
]
},
"execution_count": null,
@@ -966,13 +928,23 @@
}
],
"source": [
- "chat('What color is it?')"
+ "a = dict(surname='Howard', firstnames=['Jeremy','Peter'],\n",
+ " address=dict(state='Queensland',country='Australia'))\n",
+ "hl_md(json_to_xml(a, 'person'))"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "4f3e4de9",
+ "id": "10fa392d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e62aa417",
"metadata": {},
"outputs": [
{
@@ -980,32 +952,36 @@
"text/markdown": [
"---\n",
"\n",
- "### mk_msg\n",
+ "### Chat\n",
"\n",
- "> mk_msg (content, role='user', **kw)\n",
+ "> Chat (model:Optional[str]=None, cli:Optional[claudette.core.Client]=None,\n",
+ "> sp='', tools:Optional[list]=None)\n",
"\n",
- "*Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message*\n",
+ "*Anthropic chat client.*\n",
"\n",
"| | **Type** | **Default** | **Details** |\n",
"| -- | -------- | ----------- | ----------- |\n",
- "| content | | | A string, list, or dict containing the contents of the message |\n",
- "| role | str | user | Must be 'user' or 'assistant' |\n",
- "| kw | | | |"
+ "| model | Optional | None | Model to use (leave empty if passing `cli`) |\n",
+ "| cli | Optional | None | Client to use (leave empty if passing `model`) |\n",
+ "| sp | str | | Optional system prompt |\n",
+ "| tools | Optional | None | List of tools to make available to Claude |"
],
"text/plain": [
"---\n",
"\n",
- "### mk_msg\n",
+ "### Chat\n",
"\n",
- "> mk_msg (content, role='user', **kw)\n",
+ "> Chat (model:Optional[str]=None, cli:Optional[claudette.core.Client]=None,\n",
+ "> sp='', tools:Optional[list]=None)\n",
"\n",
- "*Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message*\n",
+ "*Anthropic chat client.*\n",
"\n",
"| | **Type** | **Default** | **Details** |\n",
"| -- | -------- | ----------- | ----------- |\n",
- "| content | | | A string, list, or dict containing the contents of the message |\n",
- "| role | str | user | Must be 'user' or 'assistant' |\n",
- "| kw | | | |"
+ "| model | Optional | None | Model to use (leave empty if passing `cli`) |\n",
+ "| cli | Optional | None | Client to use (leave empty if passing `model`) |\n",
+ "| sp | str | | Optional system prompt |\n",
+ "| tools | Optional | None | List of tools to make available to Claude |"
]
},
"execution_count": null,
@@ -1014,13 +990,13 @@
}
],
"source": [
- "show_doc(mk_msg)"
+ "show_doc(Chat)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "be0d1d8f",
+ "id": "608fe313",
"metadata": {},
"outputs": [
{
@@ -1028,56 +1004,48 @@
"text/markdown": [
"---\n",
"\n",
- "### mk_msgs\n",
+ "### Chat.__call__\n",
"\n",
- "> mk_msgs (msgs:list, **kw)\n",
+ "> Chat.__call__ (pr, sp='', temp=0, maxtok=4096,\n",
+ "> stop:Optional[list[str]]=None,\n",
+ "> ns:Optional[collections.abc.Mapping]=None, prefill='',\n",
+ "> **kw)\n",
"\n",
- "*Helper to set 'assistant' role on alternate messages.*"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### mk_msgs\n",
- "\n",
- "> mk_msgs (msgs:list, **kw)\n",
- "\n",
- "*Helper to set 'assistant' role on alternate messages.*"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(mk_msgs)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "8524d27d",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### Client\n",
- "\n",
- "> Client (model, cli=None)\n",
+ "*Add prompt `pr` to dialog and get a response from Claude*\n",
"\n",
- "*Basic Anthropic messages client.*"
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| pr | | | Prompt / message |\n",
+ "| sp | str | | The system prompt |\n",
+ "| temp | int | 0 | Temperature |\n",
+ "| maxtok | int | 4096 | Maximum tokens |\n",
+ "| stop | Optional | None | Stop sequences |\n",
+ "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |\n",
+ "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
+ "| kw | | | |"
],
"text/plain": [
"---\n",
"\n",
- "### Client\n",
+ "### Chat.__call__\n",
+ "\n",
+ "> Chat.__call__ (pr, sp='', temp=0, maxtok=4096,\n",
+ "> stop:Optional[list[str]]=None,\n",
+ "> ns:Optional[collections.abc.Mapping]=None, prefill='',\n",
+ "> **kw)\n",
"\n",
- "> Client (model, cli=None)\n",
+ "*Add prompt `pr` to dialog and get a response from Claude*\n",
"\n",
- "*Basic Anthropic messages client.*"
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| pr | | | Prompt / message |\n",
+ "| sp | str | | The system prompt |\n",
+ "| temp | int | 0 | Temperature |\n",
+ "| maxtok | int | 4096 | Maximum tokens |\n",
+ "| stop | Optional | None | Stop sequences |\n",
+ "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |\n",
+ "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
+ "| kw | | | |"
]
},
"execution_count": null,
@@ -1086,13 +1054,13 @@
}
],
"source": [
- "show_doc(Client)"
+ "show_doc(Chat.__call__)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "5fa385b8",
+ "id": "8bc3a0b8",
"metadata": {},
"outputs": [
{
@@ -1100,39 +1068,41 @@
"text/markdown": [
"---\n",
"\n",
- "### Client.__call__\n",
+ "### Chat.stream\n",
"\n",
- "> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, **kw)\n",
+ "> Chat.stream (pr, sp='', temp=0, maxtok=4096,\n",
+ "> stop:Optional[list[str]]=None, prefill='', **kw)\n",
"\n",
- "*Make a call to Claude without streaming.*\n",
+ "*Add prompt `pr` to dialog and stream the response from Claude*\n",
"\n",
"| | **Type** | **Default** | **Details** |\n",
"| -- | -------- | ----------- | ----------- |\n",
- "| msgs | list | | List of messages in the dialog |\n",
+ "| pr | | | Prompt / message |\n",
"| sp | str | | The system prompt |\n",
"| temp | int | 0 | Temperature |\n",
"| maxtok | int | 4096 | Maximum tokens |\n",
"| stop | Optional | None | Stop sequences |\n",
+ "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
"| kw | | | |"
],
"text/plain": [
"---\n",
"\n",
- "### Client.__call__\n",
+ "### Chat.stream\n",
"\n",
- "> Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, **kw)\n",
+ "> Chat.stream (pr, sp='', temp=0, maxtok=4096,\n",
+ "> stop:Optional[list[str]]=None, prefill='', **kw)\n",
"\n",
- "*Make a call to Claude without streaming.*\n",
+ "*Add prompt `pr` to dialog and stream the response from Claude*\n",
"\n",
"| | **Type** | **Default** | **Details** |\n",
"| -- | -------- | ----------- | ----------- |\n",
- "| msgs | list | | List of messages in the dialog |\n",
+ "| pr | | | Prompt / message |\n",
"| sp | str | | The system prompt |\n",
"| temp | int | 0 | Temperature |\n",
"| maxtok | int | 4096 | Maximum tokens |\n",
"| stop | Optional | None | Stop sequences |\n",
+ "| prefill | str | | Optional prefill to pass to Claude as start of its response |\n",
"| kw | | | |"
]
},
@@ -1142,64 +1112,46 @@
}
],
"source": [
- "show_doc(Client.__call__)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3cee10c8",
- "metadata": {},
- "source": [
- "Defining `__call__` let's us use an object like a function (i.e it's *callable*). We use it as a small wrapper over `messages.create`."
+ "show_doc(Chat.stream)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "338a38e5",
+ "id": "fa3ddc32",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "Hello! How can I assist you today?\n",
+ "---\n",
"\n",
- "\n",
+ "### xt\n",
"\n",
- "- id: msg_01Vr6t6QdodntSMvHthnRDBc\n",
- "- content: [{'text': 'Hello! How can I assist you today?', 'type': 'text'}]\n",
- "- model: claude-3-haiku-20240307\n",
- "- role: assistant\n",
- "- stop_reason: end_turn\n",
- "- stop_sequence: None\n",
- "- type: message\n",
- "- usage: {'input_tokens': 8, 'output_tokens': 12}\n",
+ "> xt (tag:str, c:Optional[list]=None, **kw)\n",
"\n",
- " "
+ "*Helper to create appropriate data structure for `to_xml`.*\n",
+ "\n",
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| tag | str | | XML tag name |\n",
+ "| c | Optional | None | Children |\n",
+ "| kw | | | |"
],
"text/plain": [
- "ToolsBetaMessage(id='msg_01Vr6t6QdodntSMvHthnRDBc', content=[TextBlock(text='Hello! How can I assist you today?', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 8; Out: 12; Total: 20)"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "c('Hi')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "d7c3a5b6",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "In: 18; Out: 64; Total: 82"
+ "---\n",
+ "\n",
+ "### xt\n",
+ "\n",
+ "> xt (tag:str, c:Optional[list]=None, **kw)\n",
+ "\n",
+ "*Helper to create appropriate data structure for `to_xml`.*\n",
+ "\n",
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| tag | str | | XML tag name |\n",
+ "| c | Optional | None | Children |\n",
+ "| kw | | | |"
]
},
"execution_count": null,
@@ -1208,13 +1160,13 @@
}
],
"source": [
- "c.use"
+ "show_doc(xt)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "0e64c6e3",
+ "id": "2795f9fc",
"metadata": {},
"outputs": [
{
@@ -1222,40 +1174,32 @@
"text/markdown": [
"---\n",
"\n",
- "### Client.stream\n",
+ "### json_to_xml\n",
"\n",
- "> Client.stream (msgs:list, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, **kw)\n",
+ "> json_to_xml (d:dict, rnm:str)\n",
"\n",
- "*Make a call to Claude, streaming the result.*\n",
+ "*Convert `d` to XML.*\n",
"\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| msgs | list | | List of messages in the dialog |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| kw | | | |"
+ "| | **Type** | **Details** |\n",
+ "| -- | -------- | ----------- |\n",
+ "| d | dict | JSON dictionary to convert |\n",
+ "| rnm | str | Root name |\n",
+ "| **Returns** | **str** | |"
],
"text/plain": [
"---\n",
"\n",
- "### Client.stream\n",
+ "### json_to_xml\n",
"\n",
- "> Client.stream (msgs:list, sp='', temp=0, maxtok=4096,\n",
- "> stop:Optional[list[str]]=None, **kw)\n",
+ "> json_to_xml (d:dict, rnm:str)\n",
"\n",
- "*Make a call to Claude, streaming the result.*\n",
+ "*Convert `d` to XML.*\n",
"\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| msgs | list | | List of messages in the dialog |\n",
- "| sp | str | | The system prompt |\n",
- "| temp | int | 0 | Temperature |\n",
- "| maxtok | int | 4096 | Maximum tokens |\n",
- "| stop | Optional | None | Stop sequences |\n",
- "| kw | | | |"
+ "| | **Type** | **Details** |\n",
+ "| -- | -------- | ----------- |\n",
+ "| d | dict | JSON dictionary to convert |\n",
+ "| rnm | str | Root name |\n",
+ "| **Returns** | **str** | |"
]
},
"execution_count": null,
@@ -1264,75 +1208,13 @@
}
],
"source": [
- "show_doc(Client.stream)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "daf74ead",
- "metadata": {},
- "source": [
- "We also define a wrapper over `messages.stream`, which is like `messages.create`, but streams the response back incrementally."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "6bf0bd41",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Hello! How can I assist you today?"
- ]
- }
- ],
- "source": [
- "for o in c.stream('Hi'): print(o, end='')"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1a7cdbc6",
- "metadata": {},
- "source": [
- "## Tool use"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7ec35c95",
- "metadata": {},
- "source": [
- "[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude use external tools.\n",
- "\n",
- "We'll use [docments](https://fastcore.fast.ai/docments.html) to make defining Python functions as ergonomic as possible. Each parameter (and the return value) should have a type, and a docments comment with the description of what it is. As an example we'll write a simple function that adds numbers together:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "562736ae",
- "metadata": {},
- "outputs": [],
- "source": [
- "def sums(\n",
- " # First thing to sum\n",
- " a:int,\n",
- " # Second thing to sum\n",
- " b:int=1\n",
- "# The sum of the inputs\n",
- ") -> int:\n",
- " \"Adds a + b.\"\n",
- " return a + b"
+ "show_doc(json_to_xml)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "2394cfed",
+ "id": "f4f87459",
"metadata": {},
"outputs": [
{
@@ -1340,20 +1222,30 @@
"text/markdown": [
"---\n",
"\n",
- "### get_schema\n",
+ "### to_xml\n",
+ "\n",
+ "> to_xml (node:tuple, hl=False)\n",
"\n",
- "> get_schema (f:)\n",
+ "*Convert `node` to an XML string.*\n",
"\n",
- "*Convert function `f` into a JSON schema `dict` for tool use.*"
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| node | tuple | | XML structure in `xt` format |\n",
+ "| hl | bool | False | Syntax highlight response? |"
],
"text/plain": [
"---\n",
"\n",
- "### get_schema\n",
+ "### to_xml\n",
+ "\n",
+ "> to_xml (node:tuple, hl=False)\n",
"\n",
- "> get_schema (f:)\n",
+ "*Convert `node` to an XML string.*\n",
"\n",
- "*Convert function `f` into a JSON schema `dict` for tool use.*"
+ "| | **Type** | **Default** | **Details** |\n",
+ "| -- | -------- | ----------- | ----------- |\n",
+ "| node | tuple | | XML structure in `xt` format |\n",
+ "| hl | bool | False | Syntax highlight response? |"
]
},
"execution_count": null,
@@ -1362,689 +1254,17 @@
}
],
"source": [
- "show_doc(get_schema)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "bff81d52",
- "metadata": {},
- "outputs": [],
- "source": [
- "a,b = 604542,6458932\n",
- "pr = f\"What is {a}+{b}?\"\n",
- "sp = \"You must use the `sums` function instead of adding yourself, but don't mention what tools you use.\"\n",
- "tools=[get_schema(sums)]"
+ "show_doc(to_xml)"
]
},
{
"cell_type": "markdown",
- "id": "91937f47",
- "metadata": {},
- "source": [
- "We'll start a dialog with Claude now. We'll store the messages of our dialog in `msgs`. The first message will be our prompt `pr`, and we'll pass our `tools` schema."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "c2ceeb75",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "ToolUseBlock(id='toolu_01CsuZfPAas75MkDABXAvjWD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')\n",
- "\n",
- "\n",
- "\n",
- "- id: msg_01StvQvvrnwaBtuUwHQLrpFt\n",
- "- content: [{'id': 'toolu_01CsuZfPAas75MkDABXAvjWD', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]\n",
- "- model: claude-3-haiku-20240307\n",
- "- role: assistant\n",
- "- stop_reason: tool_use\n",
- "- stop_sequence: None\n",
- "- type: message\n",
- "- usage: {'input_tokens': 414, 'output_tokens': 72}\n",
- "\n",
- " "
- ],
- "text/plain": [
- "ToolsBetaMessage(id='msg_01StvQvvrnwaBtuUwHQLrpFt', content=[ToolUseBlock(id='toolu_01CsuZfPAas75MkDABXAvjWD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-3-haiku-20240307', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 414; Out: 72; Total: 486)"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "msgs = mk_msgs(pr)\n",
- "r = c(msgs, sp=sp, tools=tools)\n",
- "r"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "60a2fce0",
- "metadata": {},
- "source": [
- "When Claude decides that it should use a tool, it passes back a `ToolUseBlock` with the name of the tool to call, and the params to use.\n",
- "\n",
- "We need to append the response to the dialog so Claude knows what's happening (since it's stateless)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "46c8e981",
- "metadata": {},
- "outputs": [],
- "source": [
- "msgs.append(mk_msg(r))"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "815eaff3",
- "metadata": {},
- "source": [
- "We don't want to allow it to call just any possible function (that would be a security disaster!) so we create a *namespace* -- that is, a dictionary of allowable function names to call."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ed48a299",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### call_func\n",
- "\n",
- "> call_func (tr:collections.abc.Mapping,\n",
- "> ns:Optional[collections.abc.Mapping]=None)\n",
- "\n",
- "*Call the function in the tool response `tr`, using namespace `ns`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| tr | Mapping | | Tool use request response from Claude |\n",
- "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### call_func\n",
- "\n",
- "> call_func (tr:collections.abc.Mapping,\n",
- "> ns:Optional[collections.abc.Mapping]=None)\n",
- "\n",
- "*Call the function in the tool response `tr`, using namespace `ns`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| tr | Mapping | | Tool use request response from Claude |\n",
- "| ns | Optional | None | Namespace to search for tools, defaults to `globals()` |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(call_func)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3d1f23eb",
- "metadata": {},
- "source": [
- "We can now use the function requested by Claude. We look it up in `ns`, and pass in the provided parameters."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "893f81ca",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "7063474"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "res = call_func(r, ns=ns)\n",
- "res"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "d475922d",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### mk_toolres\n",
- "\n",
- "> mk_toolres (r:collections.abc.Mapping, res=None,\n",
- "> ns:Optional[collections.abc.Mapping]=None)\n",
- "\n",
- "*Create a `tool_result` message from response `r`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| r | Mapping | | Tool use request response from Claude |\n",
- "| res | NoneType | None | The result of calling the tool (calculated with `call_func` by default) |\n",
- "| ns | Optional | None | Namespace to search for tools |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### mk_toolres\n",
- "\n",
- "> mk_toolres (r:collections.abc.Mapping, res=None,\n",
- "> ns:Optional[collections.abc.Mapping]=None)\n",
- "\n",
- "*Create a `tool_result` message from response `r`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| r | Mapping | | Tool use request response from Claude |\n",
- "| res | NoneType | None | The result of calling the tool (calculated with `call_func` by default) |\n",
- "| ns | Optional | None | Namespace to search for tools |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(mk_toolres)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5a57d1ca",
- "metadata": {},
- "source": [
- "In order to tell Claude the result of the tool call, we pass back a `tool_result` message, created by calling `call_func`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f13de1fc",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "{'role': 'user',\n",
- " 'content': [{'type': 'tool_result',\n",
- " 'tool_use_id': 'toolu_01CsuZfPAas75MkDABXAvjWD',\n",
- " 'content': '7063474'}]}"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "tr = mk_toolres(r, res=res, ns=ns)\n",
- "tr"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "faf4fe37",
- "metadata": {},
- "source": [
- "We add this to our dialog, and now Claude has all the information it needs to answer our question."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "eed99502",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'The sum of 604542 and 6458932 is 7063474.'"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "msgs.append(tr)\n",
- "contents(c(msgs, sp=sp, tools=tools))"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "65b52012",
- "metadata": {},
- "source": [
- "## XML helpers"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "aa15af54",
- "metadata": {},
- "source": [
- "Claude works well with XML inputs, but XML can be a bit clunky to work with manually. Therefore, we create a couple of more streamlined approaches for XML generation. You don't need to use these if you don't find them useful -- you can always just use plain strings for XML directly."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "26f66da9",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### xt\n",
- "\n",
- "> xt (tag:str, c:Optional[list]=None, **kw)\n",
- "\n",
- "*Helper to create appropriate data structure for `to_xml`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| tag | str | | XML tag name |\n",
- "| c | Optional | None | Children |\n",
- "| kw | | | |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### xt\n",
- "\n",
- "> xt (tag:str, c:Optional[list]=None, **kw)\n",
- "\n",
- "*Helper to create appropriate data structure for `to_xml`.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| tag | str | | XML tag name |\n",
- "| c | Optional | None | Children |\n",
- "| kw | | | |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(xt)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1f063c86",
- "metadata": {},
- "source": [
    -    "An XML node contains a tag, optional children, and optional attributes. `xt` creates a tuple of these three things, which we will use to generate XML shortly. Attributes are passed as kwargs; since these might conflict with reserved words in Python, you can optionally add a `_` prefix and it'll be stripped off."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "180f1934",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "('x-custom', ['hi'], {'class': 'bar'})"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "xt('x-custom', ['hi'], _class='bar')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "59329ea3",
- "metadata": {},
- "outputs": [],
- "source": [
- "from claudette.core import div,img,h1,h2,p,hr,html"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "9a503937",
- "metadata": {},
- "source": [
    -    "If you have to use a lot of tags of the same type, it's convenient to use `partial` to create specialised functions for them. The import above brings in functions for some common HTML tags; here's an example of using them:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "b6122acf",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "('html',\n",
- " [('p', 'This is a paragraph', {}),\n",
- " ('hr', None, {}),\n",
- " ('img', None, {'src': 'http://example.prg'}),\n",
- " ('div',\n",
- " [('h1', 'This is a header', {}),\n",
- " ('h2', 'This is a sub-header', {'style': 'k:v'})],\n",
- " {'class': 'foo'})],\n",
- " {})"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "a = html([\n",
- " p('This is a paragraph'),\n",
- " hr(),\n",
- " img(src='http://example.prg'),\n",
- " div([\n",
- " h1('This is a header'),\n",
- " h2('This is a sub-header', style='k:v'),\n",
- " ], _class='foo')\n",
- "])\n",
- "a"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "15807ed7",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### hl_md\n",
- "\n",
- "> hl_md (s, lang='xml')\n",
- "\n",
- "*Syntax highlight `s` using `lang`.*"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### hl_md\n",
- "\n",
- "> hl_md (s, lang='xml')\n",
- "\n",
- "*Syntax highlight `s` using `lang`.*"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(hl_md)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "79155289",
- "metadata": {},
- "source": [
- "When we display XML in a notebook, it's nice to highlight it, so we create a function to simplify that:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "eb4907fe",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "```xml\n",
- "a child\n",
- "```"
- ],
- "text/plain": [
- ""
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "hl_md('a child')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "20467373",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### to_xml\n",
- "\n",
- "> to_xml (node:tuple, hl=False)\n",
- "\n",
- "*Convert `node` to an XML string.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| node | tuple | | XML structure in `xt` format |\n",
- "| hl | bool | False | Syntax highlight response? |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### to_xml\n",
- "\n",
- "> to_xml (node:tuple, hl=False)\n",
- "\n",
- "*Convert `node` to an XML string.*\n",
- "\n",
- "| | **Type** | **Default** | **Details** |\n",
- "| -- | -------- | ----------- | ----------- |\n",
- "| node | tuple | | XML structure in `xt` format |\n",
- "| hl | bool | False | Syntax highlight response? |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(to_xml)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7a7fe4c6",
- "metadata": {},
- "source": [
- "Now we can convert that HTML data structure we created into XML:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "80a0cde7",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "```xml\n",
    -    "<html>\n",
    -    "  <p>This is a paragraph</p>\n",
    -    "  <hr />\n",
    -    "  <img src=\"http://example.prg\" />\n",
    -    "  <div class=\"foo\">\n",
    -    "    <h1>This is a header</h1>\n",
    -    "    <h2 style=\"k:v\">This is a sub-header</h2>\n",
    -    "  </div>\n",
    -    "</html>\n",
- "```"
- ],
- "text/plain": [
- ""
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "to_xml(a, hl=True)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "2795f9fc",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "---\n",
- "\n",
- "### json_to_xml\n",
- "\n",
- "> json_to_xml (d:dict, rnm:str)\n",
- "\n",
- "*Convert `d` to XML.*\n",
- "\n",
- "| | **Type** | **Details** |\n",
- "| -- | -------- | ----------- |\n",
- "| d | dict | JSON dictionary to convert |\n",
- "| rnm | str | Root name |\n",
- "| **Returns** | **str** | |"
- ],
- "text/plain": [
- "---\n",
- "\n",
- "### json_to_xml\n",
- "\n",
- "> json_to_xml (d:dict, rnm:str)\n",
- "\n",
- "*Convert `d` to XML.*\n",
- "\n",
- "| | **Type** | **Details** |\n",
- "| -- | -------- | ----------- |\n",
- "| d | dict | JSON dictionary to convert |\n",
- "| rnm | str | Root name |\n",
- "| **Returns** | **str** | |"
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "show_doc(json_to_xml)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "140a35a2",
+ "id": "140a35a2",
"metadata": {},
"source": [
"JSON doesn't map as nicely to XML as the data structure used in the previous section, but for simple XML trees it can be convenient -- for example:"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "005a5be4",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "```xml\n",
    -    "<person>\n",
    -    "  <surname>Howard</surname>\n",
    -    "  <firstnames>\n",
    -    "    <item>Jeremy</item>\n",
    -    "    <item>Peter</item>\n",
    -    "  </firstnames>\n",
    -    "  <address>\n",
    -    "    <state>Queensland</state>\n",
    -    "    <country>Australia</country>\n",
    -    "  </address>\n",
    -    "</person>\n",
- "```"
- ],
- "text/plain": [
- ""
- ]
- },
- "execution_count": null,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "a = dict(surname='Howard', firstnames=['Jeremy','Peter'],\n",
- " address=dict(state='Queensland',country='Australia'))\n",
- "hl_md(json_to_xml(a, 'person'))"
- ]
- },
{
"cell_type": "code",
"execution_count": null,
diff --git a/puppy.jpg b/puppy.jpg
index e8d6004..c7f420c 100644
Binary files a/puppy.jpg and b/puppy.jpg differ
diff --git a/settings.ini b/settings.ini
index 75b68f8..96ea9b3 100644
--- a/settings.ini
+++ b/settings.ini
@@ -5,7 +5,7 @@ version = 0.0.2
min_python = 3.8
license = apache2
black_formatting = False
-requirements = fastcore>1.5.30 anthropic
+requirements = fastcore>=1.5.33 anthropic
doc_path = _docs
lib_path = claudette
nbs_path = .
diff --git a/styles.css b/styles.css
index e26caae..36c5a19 100644
--- a/styles.css
+++ b/styles.css
@@ -10,7 +10,10 @@
margin-bottom: 0;
}
-.cell-output > pre, .cell-output > .sourceCode > pre, .cell-output-stdout > pre, .cell-output-markdown {
+.cell-output:not(.cell-output-markdown) > pre,
+.cell-output:not(.cell-output-markdown) > .sourceCode > pre,
+.cell-output-stdout > pre,
+.cell-output-markdown {
margin-left: 0.4rem;
padding-left: 0.4rem;
margin-top: 0;