
Conversation

TeddyAmkie
Contributor

📝 Summary

This PR improves the LiteLLM documentation and the layout of the model configuration file for a better developer experience.

🔧 Changes

1. Documentation Updates

  • docs/my-website/docs/completion/input.md: Added a concise explanation of the relationship between max_tokens (legacy) and max_output_tokens (current standard): the two fields are equivalent, and LiteLLM uses max_output_tokens internally
  • docs/my-website/docs/completion/token_usage.md: Clarified that get_max_tokens() returns the max_output_tokens value (a sketch of both behaviors follows this list)

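A minimal sketch of the behavior the updated docs describe, assuming a recent litellm install; the model name and printed values are illustrative examples, not part of this PR:

```python
import litellm

# get_max_tokens() resolves to the model's max_output_tokens value
max_out = litellm.get_max_tokens("gpt-3.5-turbo")
print(max_out)  # e.g. 4096

# get_model_info() exposes both fields; max_tokens (legacy) mirrors
# max_output_tokens (the current standard) for this model
info = litellm.get_model_info("gpt-3.5-turbo")
print(info["max_tokens"], info["max_output_tokens"])  # equal values
```
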
2. Model Configuration Schema

  • model_prices_and_context_window.json: Moved sample_spec from the end of the file to the top, so the schema reference is visible as soon as the file is opened (see the sketch below)

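For reference, a heavily abridged, illustrative sketch of the sample_spec entry that now sits at the top of the file; the field names follow the file's schema, but the values below are placeholders rather than the actual file contents:

```json
{
  "sample_spec": {
    "max_tokens": "LEGACY parameter, set to the same value as max_output_tokens",
    "max_input_tokens": "max input tokens the model accepts",
    "max_output_tokens": "max output tokens the model can return",
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "one of the providers listed at https://docs.litellm.ai/docs/providers",
    "mode": "one of: chat, embedding, completion, image_generation, audio_transcription"
  }
}
```
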
- Maintains all existing configuration and documentation; nothing is removed or renamed
- Improves the developer experience when referencing the model configuration schema

vercel bot commented Oct 8, 2025

@TeddyAmkie is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.
