chore: update docstrings #497

Merged 4 commits on Feb 28, 2024
@@ -43,17 +43,27 @@ def __init__(
embedding_separator: str = "\n",
):
"""
Create a MistralDocumentEmbedder component.
:param api_key: The Mistral API key.
:param model: The name of the model to use.
:param api_base_url: The Mistral API Base url, defaults to None. For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param prefix: A string to add to the beginning of each text.
:param suffix: A string to add to the end of each text.
:param batch_size: Number of Documents to encode at once.
:param progress_bar: Whether to show a progress bar or not. Can be helpful to disable in production deployments
to keep the logs clean.
:param meta_fields_to_embed: List of meta fields that should be embedded along with the Document text.
:param embedding_separator: Separator used to concatenate the meta fields to the Document text.
Creates a MistralDocumentEmbedder component.

:param api_key:
The Mistral API key.
:param model:
The name of the model to use.
:param api_base_url:
The Mistral API Base url. For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param prefix:
A string to add to the beginning of each text.
:param suffix:
A string to add to the end of each text.
:param batch_size:
Number of Documents to encode at once.
:param progress_bar:
Whether to show a progress bar or not. Can be helpful to disable in production deployments to keep
the logs clean.
:param meta_fields_to_embed:
List of meta fields that should be embedded along with the Document text.
:param embedding_separator:
Separator used to concatenate the meta fields to the Document text.
"""
super(MistralDocumentEmbedder, self).__init__( # noqa: UP008
api_key=api_key,
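As a quick orientation for reviewers, here is a minimal usage sketch for the `MistralDocumentEmbedder` documented in the hunk above. The import path mirrors the `text_embedder` import shown later in this diff, and the `run(documents=...)` call plus the returned `documents`/`meta` keys follow the usual Haystack document-embedder convention; treat those details as assumptions rather than part of this change.

```python
from haystack import Document
from haystack_integrations.components.embedders.mistral.document_embedder import (  # assumed module path
    MistralDocumentEmbedder,
)

# Parameters taken from the docstring above; the API key is assumed to be
# picked up from the environment (or passed explicitly via `api_key`).
embedder = MistralDocumentEmbedder(
    model="mistral-embed",
    meta_fields_to_embed=["title"],  # embed selected metadata with the text
    embedding_separator="\n",        # separator between metadata and text
    batch_size=32,
    progress_bar=False,              # keep production logs clean
)

docs = [Document(content="I love pizza!", meta={"title": "food"})]

# Assumed Haystack convention: documents come back with `embedding` populated.
result = embedder.run(documents=docs)
print(result["documents"][0].embedding[:3])
print(result["meta"])
```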
@@ -11,22 +11,21 @@
@component
class MistralTextEmbedder(OpenAITextEmbedder):
"""
A component for embedding strings using Mistral models.
A component for embedding strings using Mistral models.

Usage example:
Usage example:
```python
from haystack_integrations.components.embedders.mistral.text_embedder import MistralTextEmbedder

text_to_embed = "I love pizza!"
text_to_embed = "I love pizza!"
text_embedder = MistralTextEmbedder()
print(text_embedder.run(text_to_embed))

text_embedder = MistralTextEmbedder()

print(text_embedder.run(text_to_embed))

# {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
# 'meta': {'model': 'text-embedding-ada-002-v2',
# 'usage': {'prompt_tokens': 4, 'total_tokens': 4}}}
```
# output:
# {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
# 'meta': {'model': 'mistral-embed',
# 'usage': {'prompt_tokens': 4, 'total_tokens': 4}}}
```
"""

def __init__(
@@ -38,14 +37,19 @@ def __init__(
suffix: str = "",
):
"""
Create an MistralTextEmbedder component.

:param api_key: The Misttal API key.
:param model: The name of the Mistral embedding models to be used.
:param api_base_url: The Mistral API Base url, defaults to `https://api.mistral.ai/v1`.
For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param prefix: A string to add to the beginning of each text.
:param suffix: A string to add to the end of each text.
Creates a MistralTextEmbedder component.

:param api_key:
The Mistral API key.
:param model:
The name of the Mistral embedding model to be used.
:param api_base_url:
The Mistral API Base url.
For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param prefix:
A string to add to the beginning of each text.
:param suffix:
A string to add to the end of each text.
"""
super(MistralTextEmbedder, self).__init__( # noqa: UP008
api_key=api_key,
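The constructor docstring in this hunk lists `prefix`, `suffix`, and `api_base_url` without showing them in use, so here is a small sketch of how they combine with the usage example above. The default base URL and the keys of the returned dict are taken from the docstring text and the class-level example; the prefix value itself is purely illustrative.

```python
from haystack_integrations.components.embedders.mistral.text_embedder import MistralTextEmbedder

# prefix/suffix are added to every input string before it is embedded;
# api_base_url only needs overriding for proxied or self-hosted gateways.
text_embedder = MistralTextEmbedder(
    model="mistral-embed",
    prefix="passage: ",                        # illustrative retrieval-style prefix
    suffix="",
    api_base_url="https://api.mistral.ai/v1",  # default per the previous docstring
)

result = text_embedder.run("I love pizza!")
print(len(result["embedding"]))  # embedding dimensionality
print(result["meta"])            # e.g. model name and token usage
```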
@@ -12,17 +12,27 @@
@component
class MistralChatGenerator(OpenAIChatGenerator):
"""
Enables text generation using Mistral's large language models (LLMs).
Currently supports `mistral-tiny`, `mistral-small` and `mistral-medium`
models accessed through the chat completions API endpoint.
Enables text generation using Mistral AI generative models.
For supported models, see [Mistral AI docs](https://docs.mistral.ai/platform/endpoints/#operation/listModels).

Users can pass any text generation parameters valid for the `openai.ChatCompletion.create` method
directly to this component via the `**generation_kwargs` parameter in __init__ or the `**generation_kwargs`
Users can pass any text generation parameters valid for the Mistral Chat Completion API
directly to this component via the `generation_kwargs` parameter in `__init__` or the `generation_kwargs`
parameter in `run` method.

Key Features and Compatibility:
- **Primary Compatibility**: Designed to work seamlessly with the Mistral API Chat Completion endpoint.
- **Streaming Support**: Supports streaming responses from the Mistral API Chat Completion endpoint.
- **Customizability**: Supports all parameters supported by the Mistral API Chat Completion endpoint.

This component uses the ChatMessage format for structuring both input and output,
ensuring coherent and contextually relevant responses in chat-based text generation scenarios.
Details on the ChatMessage format can be found in the
[Haystack docs](https://docs.haystack.deepset.ai/v2.0/docs/data-classes#chatmessage)

For more details on the parameters supported by the Mistral API, refer to the
[Mistral API Docs](https://docs.mistral.ai/api/).

Usage example:
```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.dataclasses import ChatMessage
@@ -38,19 +48,7 @@ class MistralChatGenerator(OpenAIChatGenerator):
>>meaningful and useful.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None,
>>meta={'model': 'mistral-tiny', 'index': 0, 'finish_reason': 'stop',
>>'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})]}

```

Key Features and Compatibility:
- **Primary Compatibility**: Designed to work seamlessly with the Mistral API Chat Completion endpoint.
- **Streaming Support**: Supports streaming responses from the Mistral API Chat Completion endpoint.
- **Customizability**: Supports all parameters supported by the Mistral API Chat Completion endpoint.

Input and Output Format:
- **ChatMessage Format**: This component uses the ChatMessage format for structuring both input and output,
ensuring coherent and contextually relevant responses in chat-based text generation scenarios.
Details on the ChatMessage format can be found at: https://github.com/openai/openai-python/blob/main/chatml.md.
Note that the Mistral API does not accept `system` messages yet. You can use `user` and `assistant` messages.
"""

def __init__(
@@ -65,15 +63,19 @@ def __init__(
Creates an instance of MistralChatGenerator. Unless specified otherwise in the `model`, this is for Mistral's
`mistral-tiny` model.

:param api_key: The Mistral API key.
:param model: The name of the Mistral chat completion model to use.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
:param api_key:
The Mistral API key.
:param model:
The name of the Mistral chat completion model to use.
:param streaming_callback:
A callback function that is called when a new token is received from the stream.
The callback function accepts StreamingChunk as an argument.
:param api_base_url: The Mistral API Base url, defaults to `https://api.mistral.ai/v1`.
For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param generation_kwargs: Other parameters to use for the model. These parameters are all sent directly to
the Mistrak endpoint. See [Mistral API docs](https://docs.mistral.ai/api/t) for
more details.
:param api_base_url:
The Mistral API Base url.
For more details, see Mistral [docs](https://docs.mistral.ai/api/).
:param generation_kwargs:
Other parameters to use for the model. These parameters are all sent directly to
the Mistral endpoint. See [Mistral API docs](https://docs.mistral.ai/api/) for more details.
Some of the supported parameters:
- `max_tokens`: The maximum number of tokens the output text can have.
- `temperature`: What sampling temperature to use. Higher values mean the model will take more risks.
@@ -83,7 +85,6 @@ def __init__(
comprising the top 10% probability mass are considered.
- `stream`: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent
events as they become available, with the stream terminated by a data: [DONE] message.
- `stop`: One or more sequences after which the LLM should stop generating tokens.
- `safe_prompt`: Whether to inject a safety prompt before all conversations.
- `random_seed`: The seed to use for random sampling.
"""
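To round out the generator changes, here is a hedged sketch exercising the `generation_kwargs` and `streaming_callback` parameters described in the hunks above. The names inside `generation_kwargs` come straight from the docstring bullets; the `replies` output key follows the Haystack chat-generator convention, and the assumption that `StreamingChunk` exposes partial text as `.content` is the editor's, not part of this PR.

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# generation_kwargs are forwarded unchanged to the Mistral chat completion
# endpoint (parameter names taken from the docstring above).
generator = MistralChatGenerator(
    model="mistral-tiny",
    generation_kwargs={
        "max_tokens": 128,
        "temperature": 0.2,
        "random_seed": 42,
        "safe_prompt": True,
    },
    # Called once per streamed token; StreamingChunk.content is assumed here.
    streaming_callback=lambda chunk: print(chunk.content, end=""),
)

messages = [ChatMessage.from_user("What's Natural Language Processing?")]
result = generator.run(messages=messages)
print(result["replies"][0].meta)  # model, finish_reason, token usage
```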