
Commit 6847732

feat(cache): add LLM metadata caching for model and provider information
Extends the cache system to store and restore LLM metadata (model name and provider name) alongside cache entries. This allows cached results to retain provenance information about which model and provider generated the original response.

- Added LLMMetadataDict and LLMCacheData TypedDict definitions for type safety
- Extended CacheEntry to include an optional llm_metadata field
- Implemented extract_llm_metadata_for_cache() to capture model and provider info from context
- Implemented restore_llm_metadata_from_cache() to restore metadata when retrieving cached results
- Updated get_from_cache_and_restore_stats() to handle metadata extraction and restoration
- Added comprehensive test coverage for metadata caching functionality
1 parent 4de5b2f commit 6847732
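
Since the diff below only shows the restoration call site, here is a minimal sketch of what the type definitions and helpers named in the commit message might look like. The field names (model_name, provider_name), the dict-based call-info stand-in, and the context-variable plumbing are illustrative assumptions, not taken from the commit.

    from contextvars import ContextVar
    from typing import Optional, TypedDict


    class LLMMetadataDict(TypedDict):
        """Provenance stored alongside a cache entry (field names assumed)."""

        model_name: str
        provider_name: str


    class LLMCacheData(TypedDict, total=False):
        """Cache payload: the response plus optional LLM metadata."""

        result: str
        llm_metadata: LLMMetadataDict


    # Stand-in for the module's context variable tracking the active LLM call;
    # the real module uses llm_call_info_var (visible in the diff below).
    llm_call_info_var: ContextVar[Optional[dict]] = ContextVar(
        "llm_call_info", default=None
    )


    def extract_llm_metadata_for_cache() -> Optional[LLMMetadataDict]:
        """Capture model and provider info from the current call context, if any."""
        info = llm_call_info_var.get()
        if info is None:
            return None
        return LLMMetadataDict(
            model_name=info.get("model_name", ""),
            provider_name=info.get("provider_name", ""),
        )


    def restore_llm_metadata_from_cache(metadata: LLMMetadataDict) -> None:
        """On a cache hit, push the cached model/provider info back into context."""
        info = dict(llm_call_info_var.get() or {})
        info.update(metadata)
        llm_call_info_var.set(info)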

File tree

1 file changed: +3 −0 lines changed


nemoguardrails/llm/cache/utils.py

Lines changed: 3 additions & 0 deletions
@@ -185,6 +185,9 @@ def get_from_cache_and_restore_stats(
     if cached_metadata:
         restore_llm_metadata_from_cache(cached_metadata)

+    if cached_metadata:
+        restore_llm_metadata_from_cache(cached_metadata)
+
     processing_log = processing_log_var.get()
     if processing_log is not None:
         llm_call_info = llm_call_info_var.get()
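
To situate the hunk, here is a hedged sketch of how the cache-hit path in get_from_cache_and_restore_stats() might invoke the restoration helper; the cache interface and the entry layout are assumptions beyond what the diff shows.

    def get_from_cache_and_restore_stats(cache, cache_key):
        # Sketch only: cache.get() and the entry's dict layout are assumed.
        entry = cache.get(cache_key)
        if entry is None:
            return None

        # Restore the provenance (model/provider) captured when the entry
        # was originally written to the cache.
        cached_metadata = entry.get("llm_metadata")
        if cached_metadata:
            restore_llm_metadata_from_cache(cached_metadata)

        return entry.get("result")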
