Commit b6fe03c

[Community] Add streaming for maritalk (#13207)

RosevalJr authored May 3, 2024
1 parent 8337428 commit b6fe03c

Showing 4 changed files with 338 additions and 89 deletions.
97 changes: 85 additions & 12 deletions docs/docs/examples/llm/maritalk.ipynb
@@ -13,7 +13,10 @@
"MariTalk is an assistant developed by the Brazilian company [Maritaca AI](https://www.maritaca.ai).\n",
"MariTalk is based on language models that have been specially trained to understand Portuguese well.\n",
"\n",
"This notebook demonstrates how to use MariTalk with llama-index through a simple example."
"This notebook demonstrates how to use MariTalk with Llama Index through two examples:\n",
"\n",
"1. Get pet name suggestions with chat method;\n",
"2. Classify film reviews as negative or positive with few-shot examples with complete method."
]
},
{
@@ -31,7 +34,8 @@
"outputs": [],
"source": [
"!pip install llama-index\n",
"!pip install llama-index-llms-maritalk"
"!pip install llama-index-llms-maritalk\n",
"!pip install asyncio"
]
},
{
@@ -46,9 +50,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"### Chat"
"### Example 1 - Pet Name Suggestions with Chat"
]
},
{
@@ -60,11 +62,11 @@
"from llama_index.core.llms import ChatMessage\n",
"from llama_index.llms.maritalk import Maritalk\n",
"\n",
"import asyncio\n",
"\n",
"# To customize your API key, do this\n",
"# otherwise it will lookup MARITALK_API_KEY from your env variable\n",
"# llm = Maritalk(api_key=\"<your_maritalk_api_key>\")\n",
"\n",
"llm = Maritalk()\n",
"llm = Maritalk(api_key=\"<your_maritalk_api_key>\", model=\"sabia-2-medium\")\n",
"\n",
"# Call chat with a list of messages\n",
"messages = [\n",
@@ -75,15 +77,55 @@
" ChatMessage(role=\"user\", content=\"I have a dog.\"),\n",
"]\n",
"\n",
"# Sync chat\n",
"response = llm.chat(messages)\n",
"print(response)"
"print(response)\n",
"\n",
"\n",
"# Async chat\n",
"async def get_dog_name(llm, messages):\n",
" response = await llm.achat(messages)\n",
" print(response)\n",
"\n",
"\n",
"asyncio.run(get_dog_name(llm, messages))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Few-shot examples\n",
"#### Stream Generation\n",
"\n",
"For tasks involving the generation of long text, such as creating an extensive article or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, instead of waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is extensive. We offer two approaches to meet this need: one synchronous and another asynchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sync streaming chat\n",
"response = llm.stream_chat(messages)\n",
"for chunk in response:\n",
" print(chunk.delta, end=\"\", flush=True)\n",
"\n",
"\n",
"# Async streaming chat\n",
"async def get_dog_name_streaming(llm, messages):\n",
" async for chunk in await llm.astream_chat(messages):\n",
" print(chunk.delta, end=\"\", flush=True)\n",
"\n",
"\n",
"asyncio.run(get_dog_name_streaming(llm, messages))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 2 - Few-shot Examples with Complete\n",
"\n",
"We recommend using the `llm.complete()` method when using the model with few-shot examples"
]
@@ -105,8 +147,39 @@
"Resenha: Apesar de longo, valeu o ingresso..\n",
"Classe:\"\"\"\n",
"\n",
"response = llm.complete(prompt, stopping_tokens=[\"\\n\"])\n",
"print(response)"
"# Sync complete\n",
"response = llm.complete(prompt)\n",
"print(response)\n",
"\n",
"\n",
"# Async complete\n",
"async def classify_review(llm, prompt):\n",
" response = await llm.acomplete(prompt)\n",
" print(response)\n",
"\n",
"\n",
"asyncio.run(classify_review(llm, prompt))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sync streaming complete\n",
"response = llm.stream_complete(prompt)\n",
"for chunk in response:\n",
" print(chunk.delta, end=\"\", flush=True)\n",
"\n",
"\n",
"# Async streaming complete\n",
"async def classify_review_streaming(llm, prompt):\n",
" async for chunk in await llm.astream_complete(prompt):\n",
" print(chunk.delta, end=\"\", flush=True)\n",
"\n",
"\n",
"asyncio.run(classify_review_streaming(llm, prompt))"
]
}
],
18 changes: 18 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-maritalk/README.md
@@ -1 +1,19 @@
# LlamaIndex Llms Integration: Maritalk

MariTalk is an assistant developed by the Brazilian company [Maritaca AI](https://www.maritaca.ai). MariTalk is based on language models that have been specially trained to understand Portuguese well.

## Installation

First, install the LlamaIndex library (and all of its dependencies) using the following command:

```
pip install llama-index llama-index-llms-maritalk
```

## API Key

You will need an API key, which can be obtained from chat.maritaca.ai (the "Chaves da API" section).
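
You can either let the client read the key from the `MARITALK_API_KEY` environment variable or pass it explicitly to the constructor. A minimal sketch (the key value and model name below are placeholders):

```python
import os

from llama_index.llms.maritalk import Maritalk

# Option 1: read MARITALK_API_KEY from the environment
os.environ["MARITALK_API_KEY"] = "<your_maritalk_api_key>"
llm = Maritalk()

# Option 2: pass the key (and, optionally, the model) explicitly
llm = Maritalk(api_key="<your_maritalk_api_key>", model="sabia-2-medium")
```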

## Examples

Examples of usage are presented in the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/examples/llm/maritalk/).