Commit 837754d

Merge pull request #8856 from microsoft/ntrogh/language-model-provider
Add language model chat provider guide
2 parents 8c0fb5d + c10a405

3 files changed: +233 -0 lines changed

Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
---
# DO NOT TOUCH — Managed by doc writer
ContentId: 7f90ee4f-cac1-4b99-aee6-c99e088789d0
DateApproved: 08/07/2025

# Summarize the whole topic in less than 300 characters for SEO purpose
MetaDescription: Learn how to implement a LanguageModelChatProvider to contribute custom language models to VS Code's chat experience for extensions.
---

# Language Model Chat Provider API

The Language Model Chat Provider API enables you to contribute your own language models to chat in Visual Studio Code.

## Overview

The `LanguageModelChatProvider` interface follows a one-provider-to-many-models relationship, enabling providers to offer multiple models. Each provider is responsible for:

- Discovering and preparing the available language models
- Handling chat requests for its models
- Providing token counting functionality

## Language model information

Each language model must provide metadata through the `LanguageModelChatInformation` interface. The `prepareLanguageModelChatInformation` method returns an array of these objects to inform VS Code about the available models.

```typescript
interface LanguageModelChatInformation {
  readonly id: string; // Unique identifier for the model - unique within the provider
  readonly name: string; // Human-readable name of the language model - shown in the model picker
  readonly family: string; // Model family name
  readonly version: string; // Version string
  readonly maxInputTokens: number; // Maximum number of tokens the model can accept as input
  readonly maxOutputTokens: number; // Maximum number of tokens the model is capable of producing
  readonly tooltip?: string; // Optional tooltip text when hovering the model in the UI
  readonly detail?: string; // Human-readable text that is rendered alongside the model
  readonly capabilities: {
    readonly imageInput?: boolean; // Supports image inputs
    readonly toolCalling?: boolean | number; // Supports tool calling
  };
}
```
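For illustration, the metadata for a single model might look like the following. All values here are hypothetical placeholders, not part of the API:

```typescript
// Hypothetical metadata for one model, following the
// LanguageModelChatInformation shape (illustrative values only).
const sampleModelInfo = {
  id: 'my-model-a',
  name: 'My Model A',
  family: 'my-family',
  version: '1.0.0',
  maxInputTokens: 120_000,
  maxOutputTokens: 8_192,
  tooltip: 'My Model A, hosted by My Provider',
  detail: 'Fast general-purpose model',
  capabilities: {
    imageInput: false,
    toolCalling: true
  }
};
```

Note that `maxInputTokens` and `maxOutputTokens` are separate budgets; a model with a fixed context window typically splits it between the two.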

## Register the provider

1. The first step is to register the provider in your `package.json`, in the `contributes.languageModelChatProviders` section. Provide a unique `vendor` ID and a `displayName`.

   ```json
   {
     "contributes": {
       "languageModelChatProviders": [
         {
           "vendor": "my-provider",
           "displayName": "My Provider"
         }
       ]
     }
   }
   ```

1. Next, in your extension activation function, register your language model provider using the `lm.registerLanguageModelChatProvider` method.

   Provide the vendor ID that you used in the `package.json` and an instance of your provider class:

   ```typescript
   import * as vscode from 'vscode';
   import { SampleChatModelProvider } from './provider';

   export function activate(_: vscode.ExtensionContext) {
     vscode.lm.registerLanguageModelChatProvider('my-provider', new SampleChatModelProvider());
   }
   ```

1. Optionally, provide a `contributes.languageModelChatProviders.managementCommand` in your `package.json` to allow users to manage the language model provider.

   The value of the `managementCommand` property must be a command defined in the `contributes.commands` section of your `package.json`. In your extension, register the command (`vscode.commands.registerCommand`) and implement the logic for managing the provider, such as configuring API keys or other settings.

   ```json
   {
     "contributes": {
       "languageModelChatProviders": [
         {
           "vendor": "my-provider",
           "displayName": "My Provider",
           "managementCommand": "my-provider.manage"
         }
       ],
       "commands": [
         {
           "command": "my-provider.manage",
           "title": "Manage My Provider"
         }
       ]
     }
   }
   ```

## Implement the provider

A language model provider must implement the `LanguageModelChatProvider` interface, which has three main methods:

- `prepareLanguageModelChatInformation`: returns the list of available models
- `provideLanguageModelChatResponse`: handles chat requests and streams responses
- `provideTokenCount`: implements token counting functionality

### Prepare language model information

VS Code calls the `prepareLanguageModelChatInformation` method to discover the available models; it returns a list of `LanguageModelChatInformation` objects.

Use the `options.silent` parameter to control whether to prompt the user for credentials or extra configuration:

```typescript
async prepareLanguageModelChatInformation(
  options: { silent: boolean },
  token: CancellationToken
): Promise<LanguageModelChatInformation[]> {
  if (options.silent) {
    return []; // Don't prompt the user in silent mode
  } else {
    await this.promptForApiKey(); // Prompt the user for credentials
  }

  // Fetch the available models from your service
  const models = await this.fetchAvailableModels();

  // Map your models to the LanguageModelChatInformation format
  return models.map(model => ({
    id: model.id,
    name: model.displayName,
    family: model.family,
    version: '1.0.0',
    maxInputTokens: model.contextWindow - model.maxOutput,
    maxOutputTokens: model.maxOutput,
    capabilities: {
      imageInput: model.supportsImages,
      toolCalling: model.supportsTools
    },
    requiresAuthorization: true
  }));
}
```

### Handle chat requests

The `provideLanguageModelChatResponse` method handles the actual chat requests. The provider receives an array of messages in the `LanguageModelChatRequestMessage` format, which you typically convert to the format required by your language model API (see [Message format and conversion](#message-format-and-conversion)).

Use the `progress` parameter to stream response chunks. The response can include text parts, tool calls, and tool results (see [Response parts](#response-parts)).

```typescript
async provideLanguageModelChatResponse(
  model: LanguageModelChatInformation,
  messages: readonly LanguageModelChatRequestMessage[],
  options: LanguageModelChatRequestHandleOptions,
  progress: Progress<LanguageModelResponsePart>,
  token: CancellationToken
): Promise<void> {
  // TODO: Implement message conversion, processing, and response streaming

  // Optionally, differentiate behavior based on the model ID
  if (model.id === "my-model-a") {
    progress.report(new LanguageModelTextPart("This is my A response."));
  } else {
    progress.report(new LanguageModelTextPart("Unknown model."));
  }
}
```
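When streaming from a real service, you typically iterate over the chunks your API returns and stop as soon as the request is cancelled. The following self-contained sketch illustrates that pattern; `streamChunks` and `fakeService` are hypothetical helpers, and the `Progress`, `CancellationToken`, and `LanguageModelTextPart` declarations are minimal stand-ins for the real `vscode` module types:

```typescript
// Stand-ins for the vscode API types (illustration only).
class LanguageModelTextPart {
  constructor(public readonly value: string) {}
}
interface Progress<T> { report(item: T): void; }
interface CancellationToken { readonly isCancellationRequested: boolean; }

// Stream chunks to the progress callback, honoring cancellation.
async function streamChunks(
  chunks: AsyncIterable<string>,
  progress: Progress<LanguageModelTextPart>,
  token: CancellationToken
): Promise<void> {
  for await (const chunk of chunks) {
    if (token.isCancellationRequested) {
      return; // Stop streaming as soon as the user cancels
    }
    progress.report(new LanguageModelTextPart(chunk));
  }
}

// A fake chunk source standing in for a service response.
async function* fakeService(): AsyncIterable<string> {
  yield 'Hello, ';
  yield 'world!';
}
```

In a real provider, the chunk source would be your service's streaming response rather than a local generator.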

### Provide token count

The `provideTokenCount` method is responsible for estimating the number of tokens in a given text or message:

```typescript
async provideTokenCount(
  model: LanguageModelChatInformation,
  text: string | LanguageModelChatRequestMessage,
  token: CancellationToken
): Promise<number> {
  // TODO: Implement token counting for your models

  // Example estimation for strings (roughly four characters per token)
  if (typeof text === 'string') {
    return Math.ceil(text.length / 4);
  }

  // For messages, estimate based on the text parts of the content
  const combined = text.content
    .filter(part => part instanceof LanguageModelTextPart)
    .map(part => (part as LanguageModelTextPart).value)
    .join('');
  return Math.ceil(combined.length / 4);
}
```
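The character-based heuristic above can be factored into a small pure function and sanity-checked outside of VS Code. This is an illustrative stand-in, not a real tokenizer; production providers should count tokens with their model's actual tokenizer:

```typescript
// Rough token estimate: ~4 characters per token.
// Illustrative heuristic only; use your model's real tokenizer in production.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```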

## Message format and conversion

Your provider receives messages in the `LanguageModelChatRequestMessage` format, which you'll typically need to convert to your service's API format. The message content can be a mix of text parts, tool calls, and tool results.

```typescript
interface LanguageModelChatRequestMessage {
  readonly role: LanguageModelChatMessageRole;
  readonly content: ReadonlyArray<LanguageModelInputPart | unknown>;
  readonly name: string | undefined;
}
```

Convert these messages as appropriate for your language model API:

```typescript
private convertMessages(messages: readonly LanguageModelChatRequestMessage[]) {
  return messages.map(msg => ({
    role: msg.role === vscode.LanguageModelChatMessageRole.User ? 'user' : 'assistant',
    // Note: this simple example only extracts text parts;
    // handle tool calls and tool results as needed
    content: msg.content
      .filter(part => part instanceof vscode.LanguageModelTextPart)
      .map(part => (part as vscode.LanguageModelTextPart).value)
      .join('')
  }));
}
```

## Response parts

Your provider can report different types of response parts through the `progress` callback via the `LanguageModelResponsePart` type, which can be one of:

- `LanguageModelTextPart` - Text content
- `LanguageModelToolCallPart` - Tool/function calls
- `LanguageModelToolResultPart` - Tool result content

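As a sketch of how these parts flow through the callback, a provider might report a tool call followed by text. The classes below are minimal stand-ins for the real `vscode` module types, and the `getWeather` tool is hypothetical:

```typescript
// Stand-ins for the vscode response part types (illustration only).
class LanguageModelTextPart {
  constructor(public readonly value: string) {}
}
class LanguageModelToolCallPart {
  constructor(
    public readonly callId: string,
    public readonly name: string,
    public readonly input: object
  ) {}
}
type LanguageModelResponsePart = LanguageModelTextPart | LanguageModelToolCallPart;

// Collect the parts a provider reports during one response.
const reported: LanguageModelResponsePart[] = [];
const progress = { report: (part: LanguageModelResponsePart) => reported.push(part) };

// A provider that decides to call a (hypothetical) getWeather tool before answering.
progress.report(new LanguageModelToolCallPart('call-1', 'getWeather', { city: 'Zurich' }));
progress.report(new LanguageModelTextPart('Let me check the weather for you.'));
```

VS Code matches tool results back to tool calls via the `callId`, so each call a provider reports should carry a unique identifier.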
## Getting started

You can get started with a [basic example project](https://github.com/microsoft/vscode-extension-samples/blob/main/chat-model-provider-sample).

## Related content

- [VS Code API Reference](/api/references/vscode-api)
- [Language Model API Guide](/api/extension-guides/ai/language-model)
- [Chat Extension Guide](/api/extension-guides/ai/chat)

api/toc.json

Lines changed: 1 addition & 0 deletions
@@ -34,6 +34,7 @@
         ["Chat Tutorial", "/api/extension-guides/ai/chat-tutorial"],
         ["Language Model", "/api/extension-guides/ai/language-model"],
         ["Language Model Tutorial", "/api/extension-guides/ai/language-model-tutorial"],
+        ["Language Model Chat Provider", "/api/extension-guides/ai/language-model-chat-provider"],
         ["Prompt TSX", "/api/extension-guides/ai/prompt-tsx"]
       ]
     }],

build/sitemap.xml

Lines changed: 5 additions & 0 deletions
@@ -1505,6 +1505,11 @@
     <priority>0.8</priority>
   </url>
   <url>
+    <loc>https://code.visualstudio.com/api/extension-guides/ai/language-model-chat-provider</loc>
+    <changefreq>weekly</changefreq>
+    <priority>0.8</priority>
+  </url>
+  <url>
     <loc>https://code.visualstudio.com/api/extension-guides/ai/mcp</loc>
     <changefreq>weekly</changefreq>
     <priority>0.8</priority>