An unofficial Go client for the OpenRouter API.
This library provides a comprehensive, `go-openai`-inspired interface for interacting with the OpenRouter API, giving you access to a multitude of LLMs through a single, unified client.
This library's design and structure are heavily inspired by the excellent go-openai library.
While this library maintains a familiar, `go-openai`-style interface, it includes several key features and fields specifically tailored for the OpenRouter API:

- **Multi-Provider Models**: Seamlessly switch between models from different providers (e.g., Anthropic, Google, Mistral) by changing the `Model` string.
- **Cost Tracking**: The `Usage` object in responses includes a `Cost` field, providing direct access to the dollar cost of a generation.
- **Native Token Counts**: The `GetGeneration` endpoint provides access to `NativePromptTokens` and `NativeCompletionTokens`, giving you precise, provider-native tokenization data.
- **Advanced Routing**: Use `Models` for fallback chains and `Route` for custom routing logic.
- **Reasoning Parameters**: Control and request "thinking" tokens from supported models using the `Reasoning` parameters.
- **Provider-Specific `ExtraBody`**: Pass custom, provider-specific parameters through the `ExtraBody` field for fine-grained control.
- **Client Utilities**: Includes built-in `ListModels`, `CheckCredits`, and `GetGeneration` methods for fetching model lists, credit status, and generation stats directly from the client.
Install the library:

`go get github.com/iamwavecut/gopenrouter`
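As a quickstart, the sketch below creates a client, sends a single chat completion, and reads the OpenRouter-specific `Cost` field from the response usage. Because the package follows a `go-openai`-style interface, the constructor, request type, and method names shown here (`NewClient`, `ChatCompletionRequest`, `CreateChatCompletion`, and the message/usage fields) are assumptions based on that style rather than verified signatures; the `examples/` directory is the authoritative reference.

```go
// Quickstart sketch. NOTE: type and method names below are assumed from the
// go-openai-style interface this package advertises; check examples/ and the
// package docs for the exact API before copying.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/iamwavecut/gopenrouter"
)

func main() {
	// Assumed constructor: takes an OpenRouter API key.
	client := gopenrouter.NewClient(os.Getenv("OPENROUTER_API_KEY"))

	resp, err := client.CreateChatCompletion(context.Background(), gopenrouter.ChatCompletionRequest{
		// Switch providers by changing the model slug.
		Model: "anthropic/claude-3.5-sonnet",
		Messages: []gopenrouter.ChatCompletionMessage{
			{Role: "user", Content: "Say hello in one short sentence."},
		},
	})
	if err != nil {
		log.Fatalf("chat completion failed: %v", err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
	// OpenRouter reports the dollar cost of the generation in the usage block.
	fmt.Printf("cost: $%.6f\n", resp.Usage.Cost)
}
```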
For complete, runnable examples, please see the `examples/` directory. A summary of the available examples is below:
| Feature | Description |
|---|---|
| Basic Chat | Demonstrates the standard chat completion flow. |
| Streaming Chat | Shows how to stream responses for real-time output (see the sketch after this table). |
| Vision (Images) | Illustrates how to send image data using the `MultiContent` field for vision-enabled models. |
| File Attachments | Shows how to attach files (e.g., PDFs) for models that support file-based inputs. |
| Prompt Caching | Reduces cost and latency by using OpenRouter's explicit `CacheControl` for supported providers. |
| Automatic Caching (OpenAI) | Demonstrates OpenAI's automatic caching for large prompts, a cost-saving feature on OpenRouter. |
| Structured Outputs | Enforces a specific JSON schema for model outputs, a powerful OpenRouter feature. |
| Reasoning Tokens | Shows how to request and inspect the model's "thinking" process, unique to OpenRouter. |
| Provider Extras | Uses the `ExtraBody` field to pass provider-specific parameters for fine-grained control. |
| List Models | A client utility to fetch the list of all models available on OpenRouter. |
| Check Credits | A client utility to check your API key's usage, limit, and free-tier status on OpenRouter. |
| Get Generation | Fetches detailed post-generation statistics, including cost and native token counts. |
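Streaming follows the usual `go-openai` pattern of reading incremental deltas from a stream object. The sketch below assumes a `CreateChatCompletionStream` method returning a stream with `Recv` and `Close`; those names are inferred from the `go-openai`-style interface and are not confirmed, so refer to the Streaming Chat example in `examples/` for the exact API.

```go
// Streaming sketch. NOTE: CreateChatCompletionStream, Recv, Close, and the
// delta fields are assumed go-openai-style names, not verified against this package.
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/iamwavecut/gopenrouter"
)

func main() {
	client := gopenrouter.NewClient(os.Getenv("OPENROUTER_API_KEY"))

	stream, err := client.CreateChatCompletionStream(context.Background(), gopenrouter.ChatCompletionRequest{
		Model: "openai/gpt-4o-mini",
		Messages: []gopenrouter.ChatCompletionMessage{
			{Role: "user", Content: "Stream a two-sentence greeting."},
		},
		Stream: true,
	})
	if err != nil {
		log.Fatalf("stream setup failed: %v", err)
	}
	defer stream.Close()

	// Print each content delta as it arrives.
	for {
		chunk, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatalf("stream error: %v", err)
		}
		fmt.Print(chunk.Choices[0].Delta.Content)
	}
	fmt.Println()
}
```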
Details on specific features and client utility methods are available in the examples listed above.
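For the client utilities, a rough sketch of how `ListModels` and `CheckCredits` might be called is shown below. The method names come from the feature list above, but the signatures and the result fields used here (`Usage`, `Limit`, and a slice return from `ListModels`) are assumptions; the List Models and Check Credits examples are the reliable reference.

```go
// Client-utility sketch. NOTE: only the method names ListModels and
// CheckCredits come from this README; signatures and result fields are
// assumptions and may differ from the actual package.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/iamwavecut/gopenrouter"
)

func main() {
	client := gopenrouter.NewClient(os.Getenv("OPENROUTER_API_KEY"))
	ctx := context.Background()

	// List every model currently available on OpenRouter.
	models, err := client.ListModels(ctx)
	if err != nil {
		log.Fatalf("list models failed: %v", err)
	}
	fmt.Printf("OpenRouter exposes %d models\n", len(models))

	// Check the API key's spend and limit.
	credits, err := client.CheckCredits(ctx)
	if err != nil {
		log.Fatalf("check credits failed: %v", err)
	}
	fmt.Printf("usage: $%.4f, limit: %v\n", credits.Usage, credits.Limit)
}
```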