Even though there is a "view model" set up on the Go side, we're also setting one up client-side in JavaScript. This can be used to process session event data into timeline events that are easier to iterate over. However, we'll probably eventually want to do most of this server-side so that a text "transcript" of the session can easily be passed to other agents/services.
Fine-grained events need to be grouped together into a bigger logical chunk for display. This is currently happening client-side in JavaScript, but there may be advantages to doing more of that pre-processing on the backend. This could simplify the JavaScript client, and allow for other clients that want a higher-level view of the data to use that instead.
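To make the grouping idea concrete, here is a minimal sketch of what that pre-processing could look like on the Go side. The `Event`/`Chunk` types and field names are assumptions for illustration, not the project's actual schema; the real logic currently lives in the JavaScript client.

```go
package main

import "fmt"

// Event is a hypothetical fine-grained session event.
type Event struct {
	Kind string // e.g. "text_delta", "tool_call"
	Text string
}

// Chunk is one logical timeline entry for display.
type Chunk struct {
	Kind string
	Text string
}

// chunkEvents coalesces consecutive events of the same kind into a
// single chunk, which is the basic shape of the grouping done client-side.
func chunkEvents(events []Event) []Chunk {
	var chunks []Chunk
	for _, e := range events {
		if n := len(chunks); n > 0 && chunks[n-1].Kind == e.Kind {
			chunks[n-1].Text += e.Text
			continue
		}
		chunks = append(chunks, Chunk{Kind: e.Kind, Text: e.Text})
	}
	return chunks
}

func main() {
	events := []Event{
		{"text_delta", "Hel"},
		{"text_delta", "lo"},
		{"tool_call", "ls"},
		{"text_delta", "done"},
	}
	// Four fine-grained events collapse into three display chunks.
	fmt.Println(len(chunkEvents(events)))
}
```

Doing this once on the backend would let any client (not just the JavaScript one) consume the higher-level chunks directly.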
There's also a question of how to send data to the client. We're currently serializing the full session on each update. This could become large for long-running sessions, so we could consider whether to switch to sending just the event updates instead.
Right now there is a "View" on the backend that includes more than just the session (currently just the list of other sessions), so the backend will have at least an initial view model it sends to the frontend. But I think most of the translation of events into logical chunks of messages and timeline events could still be done on the frontend. This would also make it easier to just push session events to the frontend after an initial sync.
There is a similar chunking step that has to happen on the backend to produce the text "transcript" used for working with LLMs, but that could be done entirely separately. Going to make an issue for that.
Yeah, keeping the chunking on the client does make it simpler to send each event as an update after the initial load.
I guess one option for chunking on the backend would be to make that its own event. So you'd have an agent that listens for transcription events, and can generate a new chunked text event or update an existing chunk to add that content.
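For the sake of argument, that agent could look something like this: a loop that consumes fine-grained transcription events and emits chunk events, reusing a stable ID so downstream consumers can update a chunk in place rather than appending a new one. Every name here is hypothetical.

```go
package main

import "fmt"

// TranscriptionEvent is a hypothetical fine-grained input event.
type TranscriptionEvent struct {
	Final bool   // closes the current chunk
	Text  string
}

// ChunkEvent carries a stable ID so consumers can update in place.
type ChunkEvent struct {
	ID   int
	Text string
}

// chunkAgent listens for transcription events and, for each one, emits
// either a new chunk or an updated copy of the open chunk (same ID).
func chunkAgent(in <-chan TranscriptionEvent, out chan<- ChunkEvent) {
	id, text := 0, ""
	for ev := range in {
		text += ev.Text
		out <- ChunkEvent{ID: id, Text: text}
		if ev.Final {
			id++
			text = ""
		}
	}
	close(out)
}

func main() {
	in := make(chan TranscriptionEvent, 3)
	out := make(chan ChunkEvent, 3)
	in <- TranscriptionEvent{Text: "hel"}
	in <- TranscriptionEvent{Text: "lo", Final: true}
	in <- TranscriptionEvent{Text: "world"}
	close(in)
	go chunkAgent(in, out)
	for ev := range out {
		fmt.Printf("chunk %d: %q\n", ev.ID, ev.Text)
	}
}
```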
Though I'm not sure that's necessary. Like you said, the transcripts for LLMs might be different enough anyway that it makes sense to keep the logic separate.
(split from #63)