feat: add token usage tracking to OpenAI adapter #7900
base: main
Conversation
- Modified OpenAI adapter to properly handle and emit usage chunks in streaming responses
- Added logic to store usage chunks and emit them at the end of the stream
- Verified Anthropic and Gemini adapters already have complete token usage implementations
- Added comprehensive tests for token usage tracking across all three providers
- All tests passing with provided API keys

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
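The store-then-emit behavior the description lists can be sketched as an async generator. This is a hedged sketch, not the adapter's actual code: `Chunk` and `emitUsageLast` are hypothetical names standing in for the real stream types.

```typescript
// Hypothetical chunk shape: content deltas, plus an optional usage payload
// that the provider may send at any point in the stream.
interface Chunk {
  content?: string;
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}

// Hold back any chunk carrying `usage` and yield it only after all
// content chunks have been forwarded, so usage always arrives last.
async function* emitUsageLast(
  stream: AsyncIterable<Chunk>,
): AsyncGenerator<Chunk> {
  let usageChunk: Chunk | undefined;
  for await (const chunk of stream) {
    if (chunk.usage) {
      usageChunk = chunk; // store instead of yielding immediately
    } else {
      yield chunk;
    }
  }
  if (usageChunk) {
    yield usageChunk; // emit usage at the very end of the stream
  }
}
```

Deferring the usage chunk this way gives consumers a stable contract: the last event of a stream, when present, is the token accounting for the whole response.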
1 issue found across 3 files

Prompt for AI agents (1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/openai-adapters/src/test/token-usage.test.ts">
<violation number="1" location="packages/openai-adapters/src/test/token-usage.test.ts:115">
Overwriting `global.fetch` without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use `vi.spyOn`) so other suites keep the real implementation.</violation>
</file>
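The fix the reviewer asks for is a save-and-restore pattern around the mock. A minimal sketch in plain TypeScript (the real suite would wire this into vitest's `afterEach`, or use `vi.spyOn` instead; `fakeFetch` and its response body are hypothetical stand-ins for the test's mock):

```typescript
// Save the real implementation before installing the mock.
const originalFetch = globalThis.fetch;

// Hypothetical stand-in for vi.fn().mockResolvedValue(mockResponse).
const fakeFetch = (async () =>
  new Response(JSON.stringify({ usage: { total_tokens: 42 } }))) as typeof fetch;

globalThis.fetch = fakeFetch;
try {
  // ...test body runs against the mock here...
} finally {
  // Restore even if the test throws, so later suites see the real fetch.
  globalThis.fetch = originalFetch;
}
```

With vitest, `vi.spyOn(globalThis, "fetch").mockResolvedValue(...)` plus `vi.restoreAllMocks()` in an `afterEach` hook achieves the same guarantee without manual bookkeeping.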
  }),
};

global.fetch = vi.fn().mockResolvedValue(mockResponse);
Prompt for AI agents
Address the following comment on packages/openai-adapters/src/test/token-usage.test.ts at line 115:
<comment>Overwriting `global.fetch` without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use `vi.spyOn`) so other suites keep the real implementation.</comment>
<file context>
@@ -0,0 +1,353 @@
+ }),
+ };
+
+ global.fetch = vi.fn().mockResolvedValue(mockResponse);
+
+ const api = new AnthropicApi({ apiKey: "test", provider: "anthropic" });
</file context>
Summary

Changes
- Updated the `chatCompletionStream` method to handle usage chunks that arrive at the end of the stream
- Added tests using an `expectUsage: true` flag

Test Plan

Linear Issue
CON-3935

🤖 Generated with Claude Code
Summary by cubic
Adds token usage tracking to the OpenAI adapter and defers the usage event until after all streamed content. Aligns OpenAI with Anthropic and Gemini so all providers report prompt, completion, and total tokens (CON-3935).