
Commit

Update version to v0.0.63
GitHub Actions committed Jul 24, 2024
1 parent ac3e42c commit 99237c7
Showing 8 changed files with 308 additions and 52 deletions.
2 changes: 1 addition & 1 deletion docs/capabilities/finetuning.mdx
@@ -222,7 +222,7 @@ curl https://api.mistral.ai/v1/files \

## Create a fine-tuning job
The next step is to create a fine-tuning job.
- model: the specific model you would like to fine-tune. The choices are `open-mistral-7b` (v0.3) and `mistral-small-latest` (`mistral-small-2402`).
- model: the specific model you would like to fine-tune. The choices are `open-mistral-7b` (v0.3), `mistral-small-latest` (`mistral-small-2402`), `codestral-latest` (`codestral-2405`), `open-mistral-nemo`, and `mistral-large-latest` (`mistral-large-2407`).
- training_files: a collection of training file IDs, which can consist of a single file or multiple files
- validation_files: a collection of validation file IDs, which can consist of a single file or multiple files
- hyperparameters: two adjustable hyperparameters, "training_step" and "learning_rate"; a sketch of the full request follows this list.
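
As a rough sketch, creating a job with the `mistralai` Python client can look like the following. This assumes a 0.4.x-era client where fine-tuning jobs are managed through `client.jobs`; the file IDs are placeholders from the upload step, not real values.

```python
import os

from mistralai.client import MistralClient
from mistralai.models.jobs import TrainingParameters

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Create a fine-tuning job; replace the placeholder IDs with the file IDs
# returned by the upload step. `training_steps` is assumed to be the
# client-side field backing the "training_step" hyperparameter above.
created_job = client.jobs.create(
    model="open-mistral-7b",
    training_files=["<training_file_id>"],
    validation_files=["<validation_file_id>"],
    hyperparameters=TrainingParameters(
        training_steps=10,
        learning_rate=0.0001,
    ),
)
print(created_job)
```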
1 change: 1 addition & 0 deletions docs/deployment/cloud/overview.mdx
@@ -9,5 +9,6 @@ In particular, Mistral's optimized commercial models are available on:

- [Azure AI](../azure)
- [AWS Bedrock](../aws)
- [Google Cloud Vertex AI Model Garden](../vertex)
- Snowflake Cortex

252 changes: 252 additions & 0 deletions docs/deployment/cloud/vertex.mdx
@@ -0,0 +1,252 @@
---
id: vertex
title: Vertex AI
sidebar_position: 3.23
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';


You can deploy the following Mistral AI models from Google Cloud Vertex AI's Model Garden:

- Mistral NeMo
- Codestral (instruct and FIM modes)
- Mistral Large

## Prerequisites

To query these models, you will need:

- Access to a Google Cloud Project with the Vertex AI API enabled
- The relevant IAM permissions to enable the model and query its endpoints, granted through the following roles:
  - The [Vertex AI User IAM role](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.user)
  - The Consumer Procurement Entitlement Manager role

On the client side, you will also need:
- The `gcloud` CLI to authenticate against the Google Cloud APIs; see
  [this page](https://cloud.google.com/docs/authentication/provide-credentials-adc#google-idp)
  for more details.
- A Python virtual environment with the `mistralai-google-cloud` client package installed.
- The following environment variables properly set up:
- `GOOGLE_PROJECT_ID`: a Google Cloud Project ID with the Vertex AI API enabled
- `GOOGLE_REGION`: a Google Cloud region where Mistral models are available
(e.g. `europe-west4`)
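
As a quick sanity check before running the examples below, a small sketch (nothing Vertex-specific is assumed here) that fails fast if either variable is missing:

```python
import os

# Fail fast if a required environment variable is missing
for var in ("GOOGLE_PROJECT_ID", "GOOGLE_REGION"):
    if not os.environ.get(var):
        raise RuntimeError(f"Environment variable {var} is not set")
```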

## Querying the models (instruct mode)


<Tabs>
<TabItem value="python" label="Python">

```python
import os

import httpx
import google.auth
from google.auth.transport.requests import Request


def get_credentials() -> str:
    """Fetch an access token from the application default credentials."""
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(Request())
    return credentials.token


def build_endpoint_url(
    region: str,
    project_id: str,
    model_name: str,
    model_version: str,
    streaming: bool = False,
) -> str:
    """Build the rawPredict / streamRawPredict URL for a publisher model."""
    base_url = f"https://{region}-aiplatform.googleapis.com/v1/"
    project_fragment = f"projects/{project_id}"
    location_fragment = f"locations/{region}"
    specifier = "streamRawPredict" if streaming else "rawPredict"
    model_fragment = f"publishers/mistralai/models/{model_name}@{model_version}"
    return f"{base_url}{'/'.join([project_fragment, location_fragment, model_fragment])}:{specifier}"


# Retrieve the Google Cloud project ID and region from environment variables
project_id = os.environ.get("GOOGLE_PROJECT_ID")
region = os.environ.get("GOOGLE_REGION")

# Retrieve Google Cloud credentials
access_token = get_credentials()

model = "mistral-nemo"  # Replace with the model you want to use
model_version = "2407"  # Replace with the model version you want to use
is_streamed = False  # Change to True to stream token responses

# Build the endpoint URL
url = build_endpoint_url(
    project_id=project_id,
    region=region,
    model_name=model,
    model_version=model_version,
    streaming=is_streamed,
)

# Define the request headers
headers = {
    "Authorization": f"Bearer {access_token}",
    "Accept": "application/json",
}

# Define the POST payload
data = {
    "model": model,
    "messages": [{"role": "user", "content": "Who is the best French painter?"}],
    "stream": is_streamed,
}

# Make the call
with httpx.Client() as client:
    resp = client.post(url, json=data, headers=headers, timeout=None)
    print(resp.text)
```

</TabItem>
<TabItem value="curl" label="cURL">

```bash
MODEL="mistral-nemo"
MODEL_VERSION="2407"

url="https://$GOOGLE_REGION-aiplatform.googleapis.com/v1/projects/$GOOGLE_PROJECT_ID/locations/$GOOGLE_REGION/publishers/mistralai/models/$MODEL@$MODEL_VERSION:rawPredict"

curl \
  -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "$url" \
  --data '{
    "model": "'"$MODEL"'",
    "temperature": 0,
    "messages": [
      {"role": "user", "content": "What is the best French cheese?"}
    ]
  }'

```
</TabItem>
</Tabs>
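
If you set `is_streamed = True`, the URL above targets `streamRawPredict` and tokens arrive incrementally. A minimal sketch for consuming the stream with `httpx`, assuming the endpoint emits server-sent events as `data: ...` lines like the Mistral chat API:

```python
# Streaming sketch: reuses `url`, `data`, and `headers` from above with
# is_streamed = True. Assumes SSE-style "data: ..." framing; adjust the
# parsing if the endpoint responds differently.
with httpx.Client() as client:
    with client.stream("POST", url, json=data, headers=headers, timeout=None) as resp:
        for line in resp.iter_lines():
            if line.startswith("data: ") and line != "data: [DONE]":
                print(line[len("data: "):], flush=True)
```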

## Querying Codestral in FIM mode


<Tabs>
<TabItem value="python" label="Python">

```python
import os

import httpx
import google.auth
from google.auth.transport.requests import Request


def get_credentials() -> str:
    """Fetch an access token from the application default credentials."""
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(Request())
    return credentials.token


def build_endpoint_url(
    region: str,
    project_id: str,
    model_name: str,
    model_version: str,
    streaming: bool = False,
) -> str:
    """Build the rawPredict / streamRawPredict URL for a publisher model."""
    base_url = f"https://{region}-aiplatform.googleapis.com/v1/"
    project_fragment = f"projects/{project_id}"
    location_fragment = f"locations/{region}"
    specifier = "streamRawPredict" if streaming else "rawPredict"
    model_fragment = f"publishers/mistralai/models/{model_name}@{model_version}"
    return f"{base_url}{'/'.join([project_fragment, location_fragment, model_fragment])}:{specifier}"


# Retrieve the Google Cloud project ID and region from environment variables
project_id = os.environ.get("GOOGLE_PROJECT_ID")
region = os.environ.get("GOOGLE_REGION")

# Retrieve Google Cloud credentials
access_token = get_credentials()

model = "codestral"
model_version = "2405"
is_streamed = False  # Change to True to stream token responses

# Build the endpoint URL
url = build_endpoint_url(
    project_id=project_id,
    region=region,
    model_name=model,
    model_version=model_version,
    streaming=is_streamed,
)

# Define the request headers
headers = {
    "Authorization": f"Bearer {access_token}",
    "Accept": "application/json",
}

# Define the POST payload: the model fills in the code between
# `prompt` and `suffix`
data = {
    "model": model,
    "prompt": "def count_words_in_file(file_path: str) -> int:",
    "suffix": "return n_words",
}

# Make the call
with httpx.Client() as client:
    resp = client.post(url, json=data, headers=headers, timeout=None)
    print(resp.text)
```

</TabItem>
<TabItem value="curl" label="cURL">

```bash
MODEL="codestral"
MODEL_VERSION="2405"

url="https://$GOOGLE_REGION-aiplatform.googleapis.com/v1/projects/$GOOGLE_PROJECT_ID/locations/$GOOGLE_REGION/publishers/mistralai/models/$MODEL@$MODEL_VERSION:rawPredict"


curl \
  -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "$url" \
  --data '{
    "model": "'"$MODEL"'",
    "prompt": "def count_words_in_file(file_path: str) -> int:",
    "suffix": "return n_words"
  }'

```
</TabItem>
</Tabs>
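
The raw response body is JSON. Assuming it follows the same schema as Mistral's own FIM endpoint (worth verifying against an actual payload), the generated middle section can be extracted like so:

```python
# Field names assume a Mistral-style completion schema
resp_json = resp.json()
print(resp_json["choices"][0]["message"]["content"])
```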


## Going further

For more information and examples, you can check:

- The Google Cloud [Partner Models](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/mistral)
documentation page.
- The Vertex Model Cards for [Mistral Large](https://console.cloud.google.com/vertex-ai/publishers/mistralai/model-garden/mistral-large),
[Mistral-NeMo](https://console.cloud.google.com/vertex-ai/publishers/mistralai/model-garden/mistral-nemo) and
[Codestral](https://console.cloud.google.com/vertex-ai/publishers/mistralai/model-garden/codestral).
- The [Getting Started Colab Notebook](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/generative_ai/mistralai_intro.ipynb)
for Mistral models on Vertex, along with the [source file on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/main/notebooks/official/generative_ai/mistralai_intro.ipynb).

26 changes: 9 additions & 17 deletions docs/getting-started/Open-weight-models.mdx
@@ -4,22 +4,12 @@ title: Open-weight models
sidebar_position: 1.4
---

We open-source both pre-trained models and fine-tuned models. These models are not tuned for safety as we want to empower users to test and refine moderation based on their use cases. For safer models, follow our [guardrailing tutorial](/capabilities/guardrailing).

| Model | Available Open-weight|Available via API| Description | Max Tokens| API Endpoints|
|--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
| Mistral 7B | :heavy_check_mark: <br/> Apache2 |:heavy_check_mark: |The first dense model released by Mistral AI, perfect for experimentation, customization, and quick iteration. At the time of the release, it matched the capabilities of models up to 30B parameters. Learn more on our [blog post](https://mistral.ai/news/announcing-mistral-7b/)| 32k | `open-mistral-7b`|
| Mixtral 8x7B |:heavy_check_mark: <br/> Apache2 | :heavy_check_mark: |A sparse mixture of experts model. As such, it leverages up to 45B parameters but only uses about 12B during inference, leading to better inference throughput at the cost of more vRAM. Learn more on the dedicated [blog post](https://mistral.ai/news/mixtral-of-experts/)| 32k | `open-mixtral-8x7b`|
| Mixtral 8x22B |:heavy_check_mark: <br/> Apache2 | :heavy_check_mark: |A bigger sparse mixture of experts model. As such, it leverages up to 141B parameters but only uses about 39B during inference, leading to better inference throughput at the cost of more vRAM. Learn more on the dedicated [blog post](https://mistral.ai/news/mixtral-8x22b/)| 64k | `open-mixtral-8x22b`|
| Codestral |:heavy_check_mark: <br/> MNPL|:heavy_check_mark: | A cutting-edge generative model that has been specifically designed and optimized for code generation tasks, including fill-in-the-middle and code completion | 32k | `codestral-latest`|
| Codestral Mamba | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A Mamba 2 language model specialized in code generation. Learn more on our [blog post](https://mistral.ai/news/codestral-mamba/) | 256k | `open-codestral-mamba`|
| Mathstral | :heavy_check_mark: <br/> Apache2 | | A math-specific 7B model designed for math reasoning and scientific tasks. Learn more on our [blog post](https://mistral.ai/news/mathstral/) | 32k | NA|
| Mistral NeMo | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A 12B model built with the partnership with Nvidia. It is easy to use and a drop-in replacement in any system using Mistral 7B that it supersedes. Learn more on our [blog post](https://mistral.ai/news/mistral-nemo/) | 128k | `open-mistral-nemo`|
We open-source both pre-trained models and instruction-tuned models. These models are not tuned for safety as we want to empower users to test and refine moderation based on their use cases. For safer models, follow our [guardrailing tutorial](/capabilities/guardrailing).

## License
- Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, Codestral Mamba, Mathstral, and Mistral NeMo are under [Apache 2 License](https://choosealicense.com/licenses/apache-2.0/), which permits their use without any constraints.
- Codestral is under [Mistral AI Non-Production (MNPL) License](https://mistral.ai/licences/MNPL-0.1.md).

- Mistral Large is under [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md).

## Downloading

@@ -37,10 +37,11 @@ We open-source both pre-trained models and fine-tuned models.
| Mixtral-8x22B-Instruct-v0.1/ <br/> Mixtral-8x22B-Instruct-v0.3 | [Hugging Face](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/mixtral-8x22b-v0-3/mixtral-8x22B-Instruct-v0.3.tar) (md5sum: `471a02a6902706a2f1e44a693813855b`)|- 32768 vocabulary size |
| Mixtral-8x22B-v0.3 | [raw_weights](https://models.mistralcdn.com/mixtral-8x22b-v0-3/mixtral-8x22B-v0.3.tar) (md5sum: `a2fa75117174f87d1197e3a4eb50371a`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Codestral-22B-v0.1 | [Hugging Face](https://huggingface.co/mistralai/Codestral-22B-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/codestral-22b-v0-1/codestral-22B-v0.1.tar) (md5sum: `1ea95d474a1d374b1d1b20a8e0159de3`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Codestral-Mamba-7B-v0.1 | [Hugging Face](https://huggingface.co/mistralai/mamba-codestral-7B-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/codestral-mamba-7b-v0-1/codestral-mamba-7B-v0.1.tar)(md5sum: `d3993e4024d1395910c55db0d11db163`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Mathstral-7B-v0.1 | [Hugging Face](https://huggingface.co/mistralai/mathstral-7B-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/mathstral-7b-v0-1/mathstral-7B-v0.1.tar)(md5sum: `5f05443e94489c261462794b1016f10b`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Mistral-NeMo-Base-2407 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) <br/> [raw_weights](https://models.mistralcdn.com/mistral-nemo-2407/mistral-nemo-base-2407.tar)(md5sum: `c5d079ac4b55fc1ae35f51f0a3c0eb83`) | - 131k vocabulary size <br/> - Supports tekken.json tokenizer |
| Mistral-NeMo-Instruct-2407 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) <br/> [raw_weights](https://models.mistralcdn.com/mistral-nemo-2407/mistral-nemo-instruct-2407.tar)(md5sum: `296fbdf911cb88e6f0be74cd04827fe7`) | - 131k vocabulary size <br/> - Supports tekken.json tokenizer <br/> - Supports function calling |
| Codestral-Mamba-7B-v0.1 | [Hugging Face](https://huggingface.co/mistralai/mamba-codestral-7B-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/codestral-mamba-7b-v0-1/codestral-mamba-7B-v0.1.tar) (md5sum: `d3993e4024d1395910c55db0d11db163`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Mathstral-7B-v0.1 | [Hugging Face](https://huggingface.co/mistralai/mathstral-7B-v0.1) <br/> [raw_weights](https://models.mistralcdn.com/mathstral-7b-v0-1/mathstral-7B-v0.1.tar) (md5sum: `5f05443e94489c261462794b1016f10b`) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer |
| Mistral-NeMo-Base-2407 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) <br/> [raw_weights](https://models.mistralcdn.com/mistral-nemo-2407/mistral-nemo-base-2407.tar) (md5sum: `c5d079ac4b55fc1ae35f51f0a3c0eb83`) | - 131k vocabulary size <br/> - Supports tekken.json tokenizer |
| Mistral-NeMo-Instruct-2407 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) <br/> [raw_weights](https://models.mistralcdn.com/mistral-nemo-2407/mistral-nemo-instruct-2407.tar) (md5sum: `296fbdf911cb88e6f0be74cd04827fe7`) | - 131k vocabulary size <br/> - Supports tekken.json tokenizer <br/> - Supports function calling |
| Mistral-Large-Instruct-2407 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407) <br/> [raw_weights](https://models.mistralcdn.com/mistral-large-2407/mistral-large-instruct-2407.tar) (md5sum: `fc602155f9e39151fba81fcaab2fa7c4`)| - 32768 vocabulary size <br/> - Supports v3 Tokenizer <br/> - Supports function calling |


## Sizes
@@ -53,7 +44,8 @@ We open-source both pre-trained models and fine-tuned models.
| Codestral-22B-v0.1 | 22.2B | 22.2B | 60 |
| Codestral-Mamba-7B-v0.1 | 7.3B | 7.3B | 16 |
| Mathstral-7B-v0.1 | 7.3B | 7.3B | 16 |
| Mistral-NeMo-12B-v0.1 | 12B | 12B | 28 - bf16 <br/> 16 - fp8 |
| Mistral-NeMo-Instruct-2407 | 12B | 12B | 28 - bf16 <br/> 16 - fp8 |
| Mistral-Large-Instruct-2407 | 123B | 123B | 228 |

## How to run?
Check out [mistral-inference](https://github.com/mistralai/mistral-inference/), a Python package for running our models. You can install `mistral-inference` by
4 changes: 4 additions & 0 deletions docs/getting-started/changelog.mdx
@@ -6,6 +6,10 @@ sidebar_position: 1.8

This is the list of changes to the Mistral API.

July 24, 2024
- We released Mistral Large 2 (`mistral-large-2407`).
- We added fine-tuning support for Codestral, Mistral NeMo and Mistral Large. The model choices for fine-tuning are now `open-mistral-7b` (v0.3), `mistral-small-latest` (`mistral-small-2402`), `codestral-latest` (`codestral-2405`), `open-mistral-nemo`, and `mistral-large-latest` (`mistral-large-2407`).

July 18, 2024
- We released Mistral NeMo (`open-mistral-nemo`).

