Update version to v0.0.100
GitHub Actions committed Nov 18, 2024
1 parent 430ed64 commit 89abfa4
Showing 10 changed files with 67 additions and 15 deletions.
4 changes: 4 additions & 0 deletions docs/capabilities/guardrailing.mdx
The table below describes the types of content that can be detected in the moderation API.
| PII | Content that requests, shares, or attempts to elicit personal identifying information such as full names, addresses, phone numbers, social security numbers, or financial account details. |


### FAQ
Q: What is the distribution of false-positive and false-negative results on the new moderation API models? Specifically, are they more likely to flag something as harmful when it is not, or to miss something that is harmful?

A: On our internal test set, policies have a precision between 0.8 and 0.9 and a recall between 0.7 and 0.99. If you have specific application objectives (e.g. reducing false positives), we recommend leveraging the raw scores instead of the boolean responses and setting thresholds accordingly. We are continuously gathering feedback on performance and improving our models.
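Working from raw scores rather than boolean flags can be sketched as a small thresholding helper. This is a minimal illustration only: the category names, score-object shape, and the `0.5` default threshold are assumptions, not the moderation API's actual response schema.

```typescript
// Sketch: apply per-category thresholds to raw moderation scores.
// The score shape and category names are assumed for illustration;
// consult the moderation API reference for the real response schema.
type CategoryScores = Record<string, number>;

function flag(
  scores: CategoryScores,
  thresholds: Record<string, number>,
  defaultThreshold = 0.5
): string[] {
  // Return the categories whose raw score meets or exceeds its threshold.
  return Object.entries(scores)
    .filter(([category, score]) => score >= (thresholds[category] ?? defaultThreshold))
    .map(([category]) => category);
}

// Raising a category's threshold trades recall for precision
// (fewer false positives, more false negatives).
const scores = { pii: 0.62, hate_and_discrimination: 0.12 };
console.log(flag(scores, { pii: 0.7 })); // [] — pii is below its stricter threshold
console.log(flag(scores, {}));           // ["pii"] — default 0.5 threshold
```

Tuning `thresholds` per category is how you encode an application objective such as "minimize false positives for PII".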


## System prompt to enforce guardrails
6 changes: 6 additions & 0 deletions docs/capabilities/vision.md
Model output:
</details>

## FAQ
- What is the price per image?

The price is calculated using the same pricing as input tokens. Each image is divided into batches of 16x16 pixels, with each batch converted to a token. As a rule of thumb, an image with a resolution of "ResolutionX"x"ResolutionY" will consume approximately `(ResolutionX/16) * (ResolutionY/16)` tokens.
For example, a 720x512 image will consume approximately `(720/16) * (512/16)` = 1440 tokens.
Note that all images with a resolution higher than 1024x1024 will be downscaled while maintaining the same aspect ratio. For instance, a 1436x962 image will be downscaled to approximately 1024x686, consuming around `(1024/16) * (686/16)` ≈ 2744 tokens.
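The rule of thumb above can be sketched as a small helper. This follows the 16x16-pixel batching and the 1024x1024 downscaling cap described in this FAQ; treat it as an estimate, not a billing guarantee, since the service's exact rounding may differ.

```typescript
// Rough token estimate for an image, per the rule of thumb above:
// downscale so neither side exceeds 1024 px (preserving aspect ratio),
// then count one token per 16x16-pixel batch.
function estimateImageTokens(width: number, height: number): number {
  const scale = Math.min(1, 1024 / Math.max(width, height));
  const w = width * scale;
  const h = height * scale;
  return Math.ceil(w / 16) * Math.ceil(h / 16);
}

console.log(estimateImageTokens(720, 512)); // 1440
```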

- Can I fine-tune the image capabilities in Pixtral 12B?

No, we do not currently support fine-tuning the image capabilities of Pixtral 12B.
4 changes: 2 additions & 2 deletions docs/deployment/cloud/outscale.mdx
Deployed models expose a REST API that you can query using Mistral's SDK or plain HTTP calls.
To run the examples below you will need to set the following environment variables:

- `OUTSCALE_SERVER_URL`: the URL of the VM hosting your Mistral model
- `OUTSCALE_MODEL_NAME`: the name of the model to query (e.g. `small-2409`, `codestral-2405`)


<Tabs>
For more information, see the
serverURL: process.env.OUTSCALE_SERVER_URL || ""
});

const modelName = "codestral-2405";

async function fimCompletion(prompt: string, suffix: string) {
const resp = await client.fim.complete({
13 changes: 11 additions & 2 deletions docs/getting-started/changelog.mdx
title: Changelog
sidebar_position: 1.8
---

This is the list of changes to the Mistral API.
November 18, 2024
- We released Mistral Large 24.11 (`mistral-large-2411`) and Pixtral Large (`pixtral-large-2411`).
- [Le Chat](https://chat.mistral.ai/):
- Web search with citations
- Canvas for ideation, in-line editing, and export
- State of the art document and image understanding, powered by the new multimodal Pixtral Large
- Image generation, powered by Black Forest Labs Flux Pro
- Fully integrated offering, from models to outputs
- Faster responses powered by speculative editing

November 6, 2024
- We released moderation API and batch API.
- We introduced three new parameters:
- `presence_penalty`: penalizes the repetition of words or phrases
3 changes: 2 additions & 1 deletion docs/getting-started/introduction.mdx
We release both premier models and free models, driving innovation and convenience.

### Premier models

- Mistral Large, our top-tier reasoning model for high-complexity tasks, with the latest version released [November 2024](https://mistral.ai/news/pixtral-large/)
- Pixtral Large, our frontier-class multimodal model released [November 2024](https://mistral.ai/news/pixtral-large/)
- Ministral 3B, the world's best edge model, released [October 2024](https://mistral.ai/news/ministraux/)
- Ministral 8B, a powerful edge model with an extremely high performance/price ratio, released [October 2024](https://mistral.ai/news/ministraux/)
- Mistral Small, our latest enterprise-grade small model, with the latest version v2 released [September 2024](https://mistral.ai/news/september-24-release/)
- Codestral, our cutting-edge language model for coding, released [May 2024](https://mistral.ai/news/codestral/)
- Mistral Embed, our state-of-the-art semantic model for extracting representations of text extracts

3 changes: 2 additions & 1 deletion docs/getting-started/models/benchmark.md
LLM (Large Language Model) benchmarks are standardized tests or datasets used to evaluate the performance of large language models.
Mistral demonstrates top-tier reasoning capabilities and excels in advanced reasoning, multilingual tasks, math, and code generation. The company reports benchmark results on popular public benchmarks such as MMLU (Massive Multitask Language Understanding), MT-bench, and others.

You can find the benchmark results in the following blog posts:
- [Pixtral Large](https://mistral.ai/news/pixtral-large/): Pixtral Large is a 124B open-weights multimodal model built on top of Mistral Large 2. It is the second model in our multimodal family and demonstrates frontier-level image understanding.
- [Pixtral 12B](https://mistral.ai/news/pixtral-12b/): Pixtral 12B is the first open-source model to demonstrate state-of-the-art multimodal understanding, without regressing on abilities in pure text.
- [Mistral Large](https://mistral.ai/news/mistral-large-2407/): a cutting-edge text generation model with top-tier reasoning capabilities.
It can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.
- [Mistral Nemo](https://mistral.ai/news/mistral-nemo/): Mistral Nemo's reasoning, world knowledge, and coding performance are state-of-the-art in its size category. As it relies on standard architecture, Mistral Nemo is easy to use and a drop-in replacement in any system using Mistral 7B that it supersedes.
6 changes: 4 additions & 2 deletions docs/getting-started/models/overview.md
Mistral provides two types of models: free models and premier models.

| Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
|--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
| Mistral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Our top-tier reasoning model for high-complexity tasks, with the latest version released November 2024. Learn more on our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-latest`| 24.11|
| Pixtral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Our frontier-class multimodal model released November 2024. Learn more on our [blog post](https://mistral.ai/news/pixtral-large/)| 128k | `pixtral-large-latest`| 24.11|
| Ministral 3B | | :heavy_check_mark: | World's best edge model. Learn more on our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-latest` | 24.10|
| Ministral 8B | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with an extremely high performance/price ratio. Learn more on our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-latest` | 24.10|
| Mistral Small | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md) | :heavy_check_mark: | Our latest enterprise-grade small model, with the latest version v2 released September 2024. Learn more on our [blog post](https://mistral.ai/news/september-24-release/) | 32k | `mistral-small-latest` | 24.09|
| Codestral |:heavy_check_mark: <br/> [Mistral Non-Production License](https://mistral.ai/licenses/MNPL-0.1.md) | :heavy_check_mark: | Our cutting-edge language model for coding released May 2024 | 32k | `codestral-latest` | 24.05|
| Mistral Embed | | :heavy_check_mark: | Our state-of-the-art semantic model for extracting representations of text extracts | 8k | `mistral-embed` | 23.12|
it is recommended to use the dated versions of the Mistral AI API.
Additionally, be prepared for the deprecation of certain endpoints in the coming months.

Here are the details of the available versions:
- `mistral-large-latest`: currently points to `mistral-large-2411`. `mistral-large-2407` and `mistral-large-2402` will be deprecated shortly.
- `pixtral-large-latest`: currently points to `pixtral-large-2411`.
- `mistral-moderation-latest`: currently points to `mistral-moderation-2411`.
- `ministral-3b-latest`: currently points to `ministral-3b-2410`.
- `ministral-8b-latest`: currently points to `ministral-8b-2410`.
- `mistral-medium-latest`: currently points to `mistral-medium-2312`.
The previous `mistral-medium` has been dated and tagged as `mistral-medium-2312`.
Mistral Medium will be deprecated shortly.
- `codestral-latest`: currently points to `codestral-2405`.
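When pinning a dated version, the request simply names the dated model instead of the `-latest` alias. A minimal sketch in TypeScript (the payload-builder helper is illustrative, not part of the SDK):

```typescript
// Sketch: build a chat-completion payload pinned to a dated model version.
// Pinning (e.g. "mistral-large-2411" rather than "mistral-large-latest")
// keeps behavior stable when the alias is repointed to a newer release.
interface ChatPayload {
  model: string;
  messages: { role: "user" | "system" | "assistant"; content: string }[];
}

function buildChatRequest(model: string, userPrompt: string): ChatPayload {
  return { model, messages: [{ role: "user", content: userPrompt }] };
}

const payload = buildChatRequest("mistral-large-2411", "Hello!");
console.log(payload.model); // "mistral-large-2411"
```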
11 changes: 7 additions & 4 deletions docs/getting-started/models/weights.md
slug: weights
We open-source both pre-trained models and instruction-tuned models. These models are not tuned for safety as we want to empower users to test and refine moderation based on their use cases. For safer models, follow our [guardrailing tutorial](/capabilities/guardrailing).

## License
- Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, Codestral Mamba, Mathstral, Mistral Nemo, and Pixtral 12B are under the [Apache 2 License](https://choosealicense.com/licenses/apache-2.0/), which permits their use without any constraints.
- Codestral is under the [Mistral AI Non-Production (MNPL) License](https://mistral.ai/licences/MNPL-0.1.md).
- Ministral 8B, Mistral Large, Pixtral Large, and Mistral Small are under the [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md).

:::note[ ]
If you are interested in purchasing a commercial license for our models, please [contact our team](https://mistral.ai/contact/)
:::
| Pixtral-2409 | [Hugging Face](https://huggingface.co/mistralai/Pixtral-12B-2409) | - 131k vocabulary size <br/> - Supports v3 tekken.json tokenizer <br/> - Supports function calling |
| Mistral-Small-Instruct-2409 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) | - 32768 vocabulary size <br/> - Supports v3 Tokenizer <br/> - Supports function calling |
| Ministral-8B-Instruct-2410 | [Hugging Face](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) | - 131k vocabulary size <br/> - Supports v3 tekken.json tokenizer <br/> - Supports function calling |
| Mistral-Large-Instruct-2411 | [Hugging Face](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411)| - 32768 vocabulary size <br/> - Supports v7 tokenizer <br/> - Supports function calling |
| Pixtral-Large-Instruct-2411 | [Hugging Face](https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411)| - 32768 vocabulary size <br/> - Supports v7 tokenizer <br/> - Supports function calling |

## Sizes

| Name | Number of parameters | Number of active parameters | Minimum GPU RAM for inference (GB) |
|------|:--------------------:|:---------------------------:|:----------------------------------:|
| Codestral-Mamba-7B-v0.1 | 7.3B | 7.3B | 16 |
| Mathstral-7B-v0.1 | 7.3B | 7.3B | 16 |
| Mistral-Nemo-Instruct-2407 | 12B | 12B | 28 - bf16 <br/> 16 - fp8 |
| Mistral-Large-Instruct-2407 | 123B | 123B | 250 |
| Pixtral-2409 | 12B | 12B | 28 - bf16 <br/> 16 - fp8 |
| Mistral-Small-2409 | 22B | 22B | 60 |
| Ministral-8B-2410 | 8B | 8B | 24 |
| Mistral-Large-Instruct-2411 | 123B | 123B | 250 |
| Pixtral-Large-Instruct-2411 | 124B | 124B | 250 |

## How to run?
Check out [mistral-inference](https://github.com/mistralai/mistral-inference/), a Python package for running our models. You can install `mistral-inference` by running `pip install mistral-inference`.
To learn more about how to use mistral-inference, take a look at the [README](https://github.com/mistralai/mistral-inference/).
<a target="_blank" href="https://colab.research.google.com/github/mistralai/mistral-inference/blob/main/tutorials/getting_started.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

30 changes: 28 additions & 2 deletions openapi.yaml
components:
title: Content
anyOf:
- type: string
- type: "null"
- items:
$ref: "#/components/schemas/ContentChunk"
type: array
- image_url
title: ImageURLChunk
description: '{"type":"image_url","image_url":{"url":"data:image/png;base64,iVBORw0'
ReferenceChunk:
properties:
type:
type: string
enum:
- reference
const: reference
title: Type
default: reference
reference_ids:
items:
type: integer
type: array
title: Reference Ids
additionalProperties: false
type: object
required:
- reference_ids
title: ReferenceChunk
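# Illustrative ReferenceChunk instance per the schema above (values are
# assumed examples, not taken from the specification):
#   {"type": "reference", "reference_ids": [1, 3]}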
ResponseFormat:
properties:
type:
ToolMessage:
properties:
content:
title: Content
anyOf:
- type: string
- type: "null"
- items:
$ref: "#/components/schemas/ContentChunk"
type: array
tool_call_id:
anyOf:
- type: string
title: Content
anyOf:
- type: string
- type: "null"
- items:
$ref: "#/components/schemas/ContentChunk"
type: array
type: integer
example: 0
message:
$ref: "#/components/schemas/DeltaMessage"
$ref: "#/components/schemas/AssistantMessage"
finish_reason:
type: string
enum:
2 changes: 1 addition & 1 deletion version.txt
@@ -1 +1 @@
v0.0.100
