With LLMs going into preprocessors, handlers are starting to use their output near-directly (e.g., displaying it with context or putting it through TTS). This may pose problems for users in the future. The conceptual model for IMAGE assumes that handlers will represent preprocessor data in different ways, which is not the case with LLM text. Users then run the risk of receiving multiple representations from handlers that repeat the same information from the LLM over and over.
We should find some workable solution to this, either through server management or by making the handlers/orchestrator "smarter" so that redundant information is less likely to be displayed.
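One possible orchestrator-side mitigation is to filter handler renderings by textual similarity before presenting them, keeping only the first of any near-duplicate set. A minimal sketch, assuming the orchestrator has access to each rendering's plain text; the function name and threshold here are illustrative, not part of IMAGE:

```python
from difflib import SequenceMatcher

def filter_redundant(renderings, threshold=0.8):
    """Hypothetical filter: keep a rendering only if its text is not
    too similar to one already accepted for display."""
    accepted = []
    for text in renderings:
        # Compare against every rendering we've already kept.
        if all(SequenceMatcher(None, text.lower(), kept.lower()).ratio() < threshold
               for kept in accepted):
            accepted.append(text)
    return accepted

renderings = [
    "The image shows a red car parked on a street.",
    "The image shows a red car parked on the street.",  # near-duplicate
    "Audio description: a busy intersection.",
]
print(filter_redundant(renderings))
```

A similarity threshold like this is crude (it can drop renderings that differ in an important detail), so a real solution might compare per-handler provenance or only suppress text that originated from the same preprocessor output.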