
Conversation

@sestinj (Contributor) commented Sep 22, 2025

Summary

  • Added token usage tracking to the OpenAI adapter, matching the existing implementations in the Anthropic and Gemini adapters
  • Modified the streaming response handler to properly collect and emit usage information
  • Verified all three providers (OpenAI, Anthropic, Gemini) now consistently track and report token usage

Changes

  • OpenAI Adapter: Updated chatCompletionStream method to handle usage chunks that arrive at the end of the stream
  • Tests: Added comprehensive test coverage for token usage tracking across all three providers
  • Verification: All existing tests pass with the expectUsage: true flag
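
The deferred-usage behavior described above can be sketched roughly as follows. This is an illustrative sketch, not the adapter's actual code: the chunk shape is a simplified assumption based on the OpenAI streaming format, where the usage data arrives in a trailing chunk with an empty `choices` array.

```typescript
// Sketch: buffer the usage chunk and emit it after all content chunks.
// StreamChunk is a simplified stand-in for the adapter's real chunk type.
interface StreamChunk {
  choices: { delta?: { content?: string } }[];
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}

async function* chatCompletionStream(
  upstream: AsyncIterable<StreamChunk>,
): AsyncGenerator<StreamChunk> {
  let usageChunk: StreamChunk | undefined;
  for await (const chunk of upstream) {
    if (chunk.usage) {
      // Hold back the usage chunk so content is always emitted first.
      usageChunk = chunk;
    } else {
      yield chunk;
    }
  }
  if (usageChunk) {
    // Emit usage as the final event of the stream.
    yield usageChunk;
  }
}
```

Buffering (rather than emitting usage wherever it appears) guarantees consumers see a consistent ordering across providers: all content deltas, then exactly one usage event.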

Test Plan

  • Run existing test suite with API keys
  • Verify OpenAI adapter properly tracks usage in streaming responses
  • Verify Anthropic adapter continues to track usage correctly
  • Verify Gemini adapter continues to track usage correctly
  • Add unit tests for token usage tracking

Linear Issue

CON-3935

🤖 Generated with Claude Code


Summary by cubic

Adds token usage tracking to the OpenAI adapter and defers the usage event until after all streamed content. Aligns OpenAI with Anthropic and Gemini so all providers report prompt, completion, and total tokens (CON-3935).

  • New Features
    • Update OpenAI chatCompletionStream to buffer the usage chunk and emit it last; non-streaming responses pass usage through as-is.
    • Add tests for usage tracking across OpenAI, Anthropic, and Gemini (streaming and non-streaming).

- Modified OpenAI adapter to properly handle and emit usage chunks in streaming responses
- Added logic to store usage chunks and emit them at the end of the stream
- Verified Anthropic and Gemini adapters already have complete token usage implementations
- Added comprehensive tests for token usage tracking across all three providers
- All tests passing with provided API keys

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
@sestinj sestinj marked this pull request as ready for review September 29, 2025 18:40
@sestinj sestinj requested a review from a team as a code owner September 29, 2025 18:40
@sestinj sestinj requested review from Patrick-Erichsen and removed request for a team September 29, 2025 18:40
@cubic-dev-ai (bot) left a comment

1 issue found across 3 files

Prompt for AI agents (1 issue)

Understand the root cause of the following issue and fix it.


<file name="packages/openai-adapters/src/test/token-usage.test.ts">

<violation number="1" location="packages/openai-adapters/src/test/token-usage.test.ts:115">
Overwriting `global.fetch` without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use `vi.spyOn`) so other suites keep the real implementation.</violation>
</file>


Inline review context (packages/openai-adapters/src/test/token-usage.test.ts):

```ts
  }),
};

global.fetch = vi.fn().mockResolvedValue(mockResponse);
```
@cubic-dev-ai (bot) commented Sep 29, 2025


Overwriting global.fetch without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use vi.spyOn) so other suites keep the real implementation.

Prompt for AI agents
Address the following comment on packages/openai-adapters/src/test/token-usage.test.ts at line 115:

<comment>Overwriting `global.fetch` without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use `vi.spyOn`) so other suites keep the real implementation.</comment>

<file context>
@@ -0,0 +1,353 @@
+      }),
+    };
+
+    global.fetch = vi.fn().mockResolvedValue(mockResponse);
+
+    const api = new AnthropicApi({ apiKey: "test", provider: "anthropic" });
</file context>
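
One way to apply the reviewer's suggestion, sketched framework-free below. With vitest itself, the idiomatic route would be `vi.stubGlobal("fetch", ...)` with `vi.unstubAllGlobals()` in `afterEach`, or `vi.spyOn(globalThis, "fetch")` restored via `vi.restoreAllMocks()`; the helper names here are illustrative, not from the PR.

```typescript
// Sketch of the suggested fix: remember the original fetch and put it back
// after each test, so the stub never leaks into other suites.

const originalFetch = globalThis.fetch;

// Install a stub that resolves to a canned response object.
// (A simplified stand-in for vi.fn().mockResolvedValue(mockResponse).)
function installFetchStub(response: unknown): void {
  globalThis.fetch = (async () => response) as unknown as typeof fetch;
}

// Restore the real fetch; call this from afterEach/afterAll.
function restoreFetch(): void {
  globalThis.fetch = originalFetch;
}
```

In the actual test file this amounts to capturing `global.fetch` once before the suite runs and calling the restore step in `afterEach`, so assertions in later suites hit the real implementation.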

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Sep 29, 2025
@github-project-automation github-project-automation bot moved this from Todo to In Progress in Issues and PRs Oct 1, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Oct 1, 2025
Labels
lgtm This PR has been approved by a maintainer size:L This PR changes 100-499 lines, ignoring generated files.
Projects
Status: In Progress