feat: Streaming azure openai #244

Status: Open. Wants to merge 64 commits into base: main.

Commits (64)
ea0c687
debug code
ZhongpinWang Oct 21, 2024
5d0985f
Make streaming work
ZhongpinWang Oct 22, 2024
a74cf6b
fix: remove await
ZhongpinWang Oct 22, 2024
9025451
fix: await again
ZhongpinWang Oct 22, 2024
fc12de0
small changes
ZhongpinWang Oct 23, 2024
d5d38bd
chore: add missing javadoc
ZhongpinWang Oct 23, 2024
1f466a7
Merge branch 'feat-streaming-azure-openai-playground' into feat-strea…
ZhongpinWang Oct 23, 2024
07dda35
wip
ZhongpinWang Oct 23, 2024
c218211
feat: pipe streams
ZhongpinWang Oct 23, 2024
4ebd37d
feat: wrap chunk to see usage and finish reason
ZhongpinWang Oct 23, 2024
02d1939
refactor: pipe streams
ZhongpinWang Oct 24, 2024
7a00fc3
refactor
ZhongpinWang Oct 24, 2024
dd02651
refactor: change streamString to streamContent
ZhongpinWang Oct 24, 2024
50142e2
fix: lint
ZhongpinWang Oct 24, 2024
c8611d8
refactor
ZhongpinWang Oct 25, 2024
7386dc5
feat: demo streaming in sample-code
ZhongpinWang Oct 25, 2024
a7ec23a
fix: end res in sample code when finish
ZhongpinWang Oct 25, 2024
2a1d3be
Merge branch 'main' into feat-streaming-azure-openai
ZhongpinWang Oct 28, 2024
bc03fed
fix: lint
ZhongpinWang Oct 28, 2024
c399f09
refactor
ZhongpinWang Oct 28, 2024
b3f4e71
fix: check public-api
ZhongpinWang Oct 28, 2024
fa91209
chore: add tests for stream chunk response
ZhongpinWang Oct 28, 2024
56e6197
fix: Changes from lint
Oct 28, 2024
6297626
fix: chunk type inference
ZhongpinWang Oct 29, 2024
f22bed7
refactor: change some types
ZhongpinWang Oct 30, 2024
1348b97
wip
ZhongpinWang Oct 30, 2024
8086b70
fix: internal.js.map issue
ZhongpinWang Oct 30, 2024
40ad3d2
chore: add tests for chat completion stream
ZhongpinWang Oct 30, 2024
dcb6d54
refactor: move stream files
ZhongpinWang Oct 30, 2024
4bde96b
fix: remove duplicated file
ZhongpinWang Oct 30, 2024
3d5554c
refactor: rename stream
ZhongpinWang Oct 30, 2024
0b79c66
refactor: openai stream
ZhongpinWang Oct 30, 2024
7104fc5
chore: add tests for sse-stream (copied from openai)
ZhongpinWang Oct 30, 2024
2c5247a
refactor: rename test responses
ZhongpinWang Nov 4, 2024
3ff4c9e
Merge branch 'main' into feat-streaming-azure-openai
ZhongpinWang Nov 4, 2024
6570bd2
refactor: replace streamContent with a method
ZhongpinWang Nov 11, 2024
9187988
feat: support multiple choices
ZhongpinWang Nov 11, 2024
0bd1c92
fix: Changes from lint
Nov 11, 2024
0510c2a
fix: add abortcontroler and fix sample code
ZhongpinWang Nov 11, 2024
2bf0e7e
fix: add controller signal to axios
ZhongpinWang Nov 11, 2024
050d0db
fix: Changes from lint
Nov 11, 2024
1399a91
chore: add unit test for stream()
ZhongpinWang Nov 11, 2024
ad65518
fix: Changes from lint
Nov 11, 2024
2a940b1
fix: stream finish reason index 0
ZhongpinWang Nov 11, 2024
8bc6364
lint
ZhongpinWang Nov 11, 2024
841d452
fix: type test
ZhongpinWang Nov 11, 2024
39675b5
fix: make toContentStream return AzureOpenAiChatCompletionStream
ZhongpinWang Nov 11, 2024
658d1bc
fix: lint
ZhongpinWang Nov 11, 2024
d5d817a
Merge branch 'main' into feat-streaming-azure-openai
ZhongpinWang Nov 11, 2024
df6ee3f
feat: throw if sse payload invalid
ZhongpinWang Nov 11, 2024
d3ba1d8
fix: Changes from lint
Nov 11, 2024
819692f
refactor: interface
ZhongpinWang Nov 12, 2024
a06cd03
refactor
ZhongpinWang Nov 12, 2024
6a6e403
Merge branch 'main' into feat-streaming-azure-openai
ZhongpinWang Nov 12, 2024
862ff0f
chore: add changeset
ZhongpinWang Nov 12, 2024
195053f
Merge branch 'main' into feat-streaming-azure-openai
jjtang1985 Nov 13, 2024
18f40b6
chore: improve sample code for streaming
ZhongpinWang Nov 13, 2024
347753f
fix: Changes from lint
Nov 13, 2024
b933d2c
docs
ZhongpinWang Nov 13, 2024
83db52b
refactor: get by index
ZhongpinWang Nov 13, 2024
31fc14e
fix: lint
ZhongpinWang Nov 13, 2024
bdc18d5
chore: small changes
ZhongpinWang Nov 14, 2024
120c0e9
fix: Changes from lint
Nov 14, 2024
d4ef790
Merge branch 'main' into feat-streaming-azure-openai
jjtang1985 Nov 14, 2024
5 changes: 5 additions & 0 deletions .changeset/seven-chairs-change.md
@@ -0,0 +1,5 @@
---

'@sap-ai-sdk/foundation-models': minor
---

[New Functionality] Support streaming for Azure OpenAI chat completion in `foundation-models`.
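
For orientation, here is a minimal consumer sketch of the new streaming API, assembled from the client and tests in this diff; the package entry point, constructor argument, and model name are assumptions not confirmed by this excerpt:

```ts
import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models';

// Assumption: the client accepts a model name; adjust to the real constructor.
const client = new AzureOpenAiChatClient('gpt-4o');
const controller = new AbortController();

const response = await client.stream(
  {
    messages: [
      { role: 'user', content: 'Where is the deepest place on earth located' }
    ]
  },
  controller
);

for await (const chunk of response.stream) {
  // getDeltaContent() may be undefined for metadata-only chunks.
  process.stdout.write(chunk.getDeltaContent() ?? '');
  // Calling controller.abort() here would cancel the request mid-stream.
}
```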
azure-openai-chat-client.test.ts
@@ -3,11 +3,12 @@ import {
mockClientCredentialsGrantCall,
mockDeploymentsList,
mockInference,
parseFileToString,
parseMockResponse
} from '../../../../test-util/mock-http.js';
import { AzureOpenAiChatClient } from './azure-openai-chat-client.js';
import { apiVersion } from './model-types.js';
-import type { AzureOpenAiCreateChatCompletionResponse } from './client/inference/schema';
+import type { AzureOpenAiCreateChatCompletionResponse } from './client/inference/schema/index.js';

describe('Azure OpenAI chat client', () => {
const chatCompletionEndpoint = {
@@ -159,4 +160,46 @@ describe('Azure OpenAI chat client', () => {
const response = await clientWithResourceGroup.run(prompt);
expect(response.data).toEqual(mockResponse);
});

it('executes a streaming request with correct chunk response', async () => {
const prompt = {
messages: [
{
role: 'user' as const,
content: 'Where is the deepest place on earth located'
}
],
stream: true,
stream_options: {
include_usage: true
}
};

const mockResponse = await parseFileToString(
'foundation-models',
'azure-openai-chat-completion-stream-chunks.txt'
);

mockInference(
{
data: prompt
},
{
data: mockResponse,
status: 200
},
chatCompletionEndpoint
);

const initialResponse = await parseFileToString(
'foundation-models',
'azure-openai-chat-completion-stream-chunk-response-initial.json'
);

const response = await client.stream(prompt);
for await (const chunk of response.stream) {
expect(JSON.stringify(chunk.data)).toEqual(initialResponse);
break;
}
});
});
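
The fixture azure-openai-chat-completion-stream-chunks.txt is not part of this excerpt. For orientation, Azure OpenAI delivers streamed completions as server-sent events, roughly in the shape sketched below; the values are illustrative, not taken from the fixture:

```ts
// Illustrative SSE body for a streamed chat completion; each event is a
// `data:` line carrying one JSON chunk, terminated by `data: [DONE]`.
const exampleChunks = `
data: {"choices":[{"index":0,"delta":{"content":" is"},"finish_reason":null}]}

data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: {"choices":[],"usage":{"prompt_tokens":9,"completion_tokens":7,"total_tokens":16}}

data: [DONE]
`;
```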
azure-openai-chat-client.ts
Expand Up @@ -6,6 +6,10 @@ import {
} from '@sap-ai-sdk/ai-api/internal.js';
import { apiVersion, type AzureOpenAiChatModel } from './model-types.js';
import { AzureOpenAiChatCompletionResponse } from './azure-openai-chat-completion-response.js';
import { AzureOpenAiChatCompletionStreamResponse } from './azure-openai-chat-completion-stream-response.js';
import { AzureOpenAiChatCompletionStream } from './azure-openai-chat-completion-stream.js';
import type { AzureOpenAiChatCompletionStreamChunkResponse } from './azure-openai-chat-completion-stream-chunk-response.js';
import type { HttpResponse } from '@sap-cloud-sdk/http-client';
import type { AzureOpenAiCreateChatCompletionRequest } from './client/inference/schema/index.js';

/**
@@ -28,12 +32,43 @@ export class AzureOpenAiChatClient {
data: AzureOpenAiCreateChatCompletionRequest,
requestConfig?: CustomRequestConfig
): Promise<AzureOpenAiChatCompletionResponse> {
const response = await this.executeRequest(data, requestConfig);
return new AzureOpenAiChatCompletionResponse(response);
}

/**
* Creates a completion stream for the chat messages.
* @param data - The input parameters for the chat completion.
* @param controller - The abort controller.
* @param requestConfig - The request configuration.
* @returns The completion stream.
*/
async stream(
data: AzureOpenAiCreateChatCompletionRequest,
controller = new AbortController(),
requestConfig?: CustomRequestConfig
): Promise<
AzureOpenAiChatCompletionStreamResponse<AzureOpenAiChatCompletionStreamChunkResponse>
> {
const response =
new AzureOpenAiChatCompletionStreamResponse<AzureOpenAiChatCompletionStreamChunkResponse>();
response.stream = (await this.createStream(data, controller, requestConfig))
._pipe(AzureOpenAiChatCompletionStream._processChunk)
._pipe(AzureOpenAiChatCompletionStream._processFinishReason, response)
._pipe(AzureOpenAiChatCompletionStream._processTokenUsage, response);
return response;
}

private async executeRequest(
data: AzureOpenAiCreateChatCompletionRequest,
requestConfig?: CustomRequestConfig
): Promise<HttpResponse> {
const deploymentId = await getDeploymentId(
this.modelDeployment,
'azure-openai'
);
const resourceGroup = getResourceGroup(this.modelDeployment);
-    const response = await executeRequest(
+    return executeRequest(
{
url: `/inference/deployments/${deploymentId}/chat/completions`,
apiVersion,
@@ -42,6 +77,27 @@ export class AzureOpenAiChatClient {
data,
requestConfig
);
-    return new AzureOpenAiChatCompletionResponse(response);
}

private async createStream(
data: AzureOpenAiCreateChatCompletionRequest,
controller: AbortController,
requestConfig?: CustomRequestConfig
): Promise<AzureOpenAiChatCompletionStream<any>> {
const response = await this.executeRequest(
{
...data,
stream: true,
stream_options: {
include_usage: true
}
},
{
...requestConfig,
responseType: 'stream',
signal: controller.signal
}
);
return AzureOpenAiChatCompletionStream._create(response, controller);
}
}
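
The `_pipe` chain in `stream()` composes transform steps over the SSE stream: `_processChunk` appears to wrap each raw JSON chunk, while `_processFinishReason` and `_processTokenUsage` record metadata onto the stream response as chunks pass through. Since `azure-openai-chat-completion-stream.ts` is not shown in this diff, here is a minimal sketch of the pattern with simplified, hypothetical names:

```ts
// Minimal sketch of the async-iterator pipe pattern; names are stand-ins,
// not the SDK's actual implementation.
type Transform<T, U> = (
  stream: AsyncIterable<T>,
  response?: any
) => AsyncGenerator<U>;

class SimpleStream<T> implements AsyncIterable<T> {
  constructor(private readonly source: () => AsyncIterator<T>) {}

  [Symbol.asyncIterator](): AsyncIterator<T> {
    return this.source();
  }

  // Each _pipe wraps the current stream in a new one that lazily applies
  // `transform` when the consumer starts iterating.
  _pipe<U>(transform: Transform<T, U>, response?: any): SimpleStream<U> {
    return new SimpleStream<U>(() => transform(this, response));
  }
}

// Example transform: forward chunks unchanged while recording the finish
// reason of choice 0 onto the enclosing response object.
async function* processFinishReason(
  stream: AsyncIterable<any>,
  response?: { finishReason?: string }
): AsyncGenerator<any> {
  for await (const chunk of stream) {
    const reason = chunk.choices?.[0]?.finish_reason;
    if (reason && response) {
      response.finishReason = reason;
    }
    yield chunk;
  }
}
```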
azure-openai-chat-completion-response.ts
@@ -32,9 +32,7 @@ export class AzureOpenAiChatCompletionResponse {
* @param choiceIndex - The index of the choice to parse.
* @returns The finish reason.
*/
-  getFinishReason(
-    choiceIndex = 0
-  ): this['data']['choices'][0]['finish_reason'] {
+  getFinishReason(choiceIndex = 0): string | undefined | null {
this.logInvalidChoiceIndex(choiceIndex);
return this.data.choices[choiceIndex]?.finish_reason;
}
azure-openai-chat-completion-stream-chunk-response.test.ts
@@ -0,0 +1,77 @@
import { parseMockResponse } from '../../../../test-util/mock-http.js';
import { AzureOpenAiChatCompletionStreamChunkResponse } from './azure-openai-chat-completion-stream-chunk-response.js';

describe('OpenAI chat completion stream chunk response', () => {
let mockResponses: {
tokenUsageResponse: any;
finishReasonResponse: any;
deltaContentResponse: any;
};
let azureOpenAiChatCompletionStreamChunkResponses: {
tokenUsageResponse: AzureOpenAiChatCompletionStreamChunkResponse;
finishReasonResponse: AzureOpenAiChatCompletionStreamChunkResponse;
deltaContentResponse: AzureOpenAiChatCompletionStreamChunkResponse;
};

beforeAll(async () => {
mockResponses = {
tokenUsageResponse: await parseMockResponse<any>(
'foundation-models',
'azure-openai-chat-completion-stream-chunk-response-token-usage.json'
),
finishReasonResponse: await parseMockResponse<any>(
'foundation-models',
'azure-openai-chat-completion-stream-chunk-response-finish-reason.json'
),
deltaContentResponse: await parseMockResponse<any>(
'foundation-models',
'azure-openai-chat-completion-stream-chunk-response-delta-content.json'
)
};
azureOpenAiChatCompletionStreamChunkResponses = {
tokenUsageResponse: new AzureOpenAiChatCompletionStreamChunkResponse(
mockResponses.tokenUsageResponse
),
finishReasonResponse: new AzureOpenAiChatCompletionStreamChunkResponse(
mockResponses.finishReasonResponse
),
deltaContentResponse: new AzureOpenAiChatCompletionStreamChunkResponse(
mockResponses.deltaContentResponse
)
};
});

it('should return the chat completion stream chunk response', () => {
expect(
azureOpenAiChatCompletionStreamChunkResponses.tokenUsageResponse.data
).toStrictEqual(mockResponses.tokenUsageResponse);
expect(
azureOpenAiChatCompletionStreamChunkResponses.finishReasonResponse.data
).toStrictEqual(mockResponses.finishReasonResponse);
expect(
azureOpenAiChatCompletionStreamChunkResponses.deltaContentResponse.data
).toStrictEqual(mockResponses.deltaContentResponse);
});

it('should get token usage', () => {
expect(
azureOpenAiChatCompletionStreamChunkResponses.tokenUsageResponse.getTokenUsage()
).toMatchObject({
completion_tokens: expect.any(Number),
prompt_tokens: expect.any(Number),
total_tokens: expect.any(Number)
});
});

it('should return finish reason', () => {
expect(
azureOpenAiChatCompletionStreamChunkResponses.finishReasonResponse.getFinishReason()
).toBe('stop');
});

it('should return delta content with default index 0', () => {
expect(
azureOpenAiChatCompletionStreamChunkResponses.deltaContentResponse.getDeltaContent()
).toBe(' is');
});
});
azure-openai-chat-completion-stream-chunk-response.ts
@@ -0,0 +1,39 @@
import type { AzureOpenAiCompletionUsage } from './client/inference/schema/index.js';

/**
* Azure OpenAI chat completion stream chunk response.
*/
export class AzureOpenAiChatCompletionStreamChunkResponse {
constructor(public readonly data: any) {
// TODO: Change `any` to `CreateChatCompletionStreamResponse` once the preview spec becomes stable.
this.data = data;
}

/**
* Usage of tokens in the chunk response.
* @returns Token usage.
*/
getTokenUsage(): AzureOpenAiCompletionUsage {
return this.data.usage;
}

/**
* Reason for stopping the completion stream chunk.
* @param choiceIndex - The index of the choice to parse.
* @returns The finish reason.
*/
getFinishReason(choiceIndex = 0): string | undefined | null {
return this.data.choices.find((c: any) => c.index === choiceIndex)
?.finish_reason;
}

/**
* Parses the chunk response and returns the delta content.
* @param choiceIndex - The index of the choice to parse.
* @returns The message delta content.
*/
getDeltaContent(choiceIndex = 0): string | undefined | null {
return this.data.choices.find((c: any) => c.index === choiceIndex)?.delta
.content;
}
}
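
Note that a single chunk rarely carries every field: with `stream_options.include_usage` enabled, the token usage typically arrives only in a final chunk with an empty `choices` array, and the finish reason only in the last content chunk of a choice, so each getter can return `undefined` or `null` for most chunks. A consumption sketch under that assumption:

```ts
for await (const chunk of response.stream) {
  const delta = chunk.getDeltaContent(); // undefined on the usage-only chunk
  if (delta) {
    process.stdout.write(delta);
  }
  const finishReason = chunk.getFinishReason();
  if (finishReason) {
    console.log('\nFinished with reason:', finishReason);
  }
}
```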
azure-openai-chat-completion-stream-response.ts
@@ -0,0 +1,54 @@
import type { AzureOpenAiCompletionUsage } from './client/inference/schema/index.js';
import type { AzureOpenAiChatCompletionStream } from './azure-openai-chat-completion-stream.js';

/**
* Azure OpenAI chat completion stream response.
*/
export class AzureOpenAiChatCompletionStreamResponse<T> {
private _usage: AzureOpenAiCompletionUsage | undefined;
private _finishReasons: Map<number, string> = new Map();
private _stream: AzureOpenAiChatCompletionStream<T> | undefined;

public getTokenUsage(): AzureOpenAiCompletionUsage | undefined {
return this._usage;
}

/**
* @internal
*/
_setTokenUsage(usage: AzureOpenAiCompletionUsage): void {
this._usage = usage;
}

public getFinishReason(choiceIndex = 0): string | undefined | null {
return this._finishReasons.get(choiceIndex);
}

/**
* @internal
*/
_getFinishReasons(): Map<number, string> {
return this._finishReasons;
}

/**
* @internal
*/
_setFinishReasons(finishReasons: Map<number, string>): void {
this._finishReasons = finishReasons;
}

get stream(): AzureOpenAiChatCompletionStream<T> {
if (!this._stream) {
throw new Error('Response stream is undefined.');
}
return this._stream;
}

/**
* @internal
*/
set stream(stream: AzureOpenAiChatCompletionStream<T>) {
this._stream = stream;
}
}
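
Because `_setTokenUsage` and `_setFinishReasons` are invoked by the pipe transforms as chunks flow through, the public getters here only return data once the relevant chunks have been consumed; reading them before or during iteration may yield `undefined`. Continuing the consumer sketch from above:

```ts
const response = await client.stream({
  messages: [{ role: 'user', content: 'Hello' }]
});

console.log(response.getTokenUsage()); // undefined: stream not consumed yet

for await (const chunk of response.stream) {
  // ...handle each chunk...
}

// After the stream ends, metadata recorded by the transforms is available.
console.log(response.getFinishReason(0)); // e.g. 'stop'
console.log(response.getTokenUsage()); // e.g. { total_tokens: 16, ... }
```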