feat(cache): add LLM metadata caching for model and provider information
Extends the cache system to store and restore LLM metadata (model name
and provider name) alongside cache entries. This allows cached results
to maintain provenance information about which model and provider
generated the original response.
- Added LLMMetadataDict and LLMCacheData TypedDict definitions for type
safety
- Extended CacheEntry to include optional llm_metadata field
- Implemented extract_llm_metadata_for_cache() to capture model and
provider info from context
- Implemented restore_llm_metadata_from_cache() to restore metadata
when retrieving cached results
- Updated get_from_cache_and_restore_stats() to handle metadata
extraction and restoration
- Added comprehensive test coverage for metadata caching functionality
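The shape described above can be sketched roughly as follows. The type and function names come from the commit message, but the field names (`model_name`, `provider_name`), the `CacheEntry` internals, and the dict-based context are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional, TypedDict


class LLMMetadataDict(TypedDict):
    # Assumed field names; the commit only says "model name and provider name".
    model_name: str
    provider_name: str


@dataclass
class CacheEntry:
    value: str
    # Optional metadata field added alongside the cached value.
    llm_metadata: Optional[LLMMetadataDict] = None


def extract_llm_metadata_for_cache(context: dict) -> Optional[LLMMetadataDict]:
    """Capture model and provider info from the context, if both are present."""
    model = context.get("model_name")
    provider = context.get("provider_name")
    if model is None or provider is None:
        return None
    return {"model_name": model, "provider_name": provider}


def restore_llm_metadata_from_cache(entry: CacheEntry, context: dict) -> None:
    """On a cache hit, restore the stored provenance info onto the context."""
    if entry.llm_metadata is not None:
        context.update(entry.llm_metadata)
```

A round trip under these assumptions: extract metadata from the context at write time, store it on the entry, and restore it into a fresh context on a cache hit, so the cached result keeps its provenance.

```python
write_ctx = {"model_name": "gpt-4", "provider_name": "openai"}
entry = CacheEntry(
    value="cached response",
    llm_metadata=extract_llm_metadata_for_cache(write_ctx),
)

read_ctx: dict = {}
restore_llm_metadata_from_cache(entry, read_ctx)
# read_ctx now carries the original model/provider provenance
```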