Network usage investigation #792

Closed

pmarsh-scottlogic opened this issue Jan 22, 2024 · 4 comments

@pmarsh-scottlogic
Contributor

Interact with different parts of the app. Make a note of network calls and identify where we could cut down on the number of calls and the content contained within them. Can we combine several calls into one? Is any of the information not needed?

Write a list of all the improvements that could be made, and prioritise in order of what will make the most impact. Write tickets to implement.

AC:
No code changes, therefore no testing needed.

@pmarsh-scottlogic
Contributor Author

pmarsh-scottlogic commented Jan 22, 2024

Network investigation writeup

I go through the crucial interactions in the app and check how the network is used for API calls. Each interaction has a section, and the header includes zero to three wrench emoji 🛠 indicating the priority of fixing the issues. At the bottom of each section I suggest a way to improve the interaction.

Tip: use npm run preview to run the app outside strict mode (this stops the double renders)

Refresh 🛠🛠🛠

Here's what happens if I refresh on level 1:

  • health/ always returns with 200 (if the backend is up).
  • email/get?level=0 returns all the sent emails in the session for given level.
  • openai/history?level=0 returns the chat history in the session for given level.
  • defence/status?level=0 returns the defences as configured for given level.
  • openai/validModels returns all the available models from openAI for the api key configured in the backend.
  • systemRoles/ returns the default system role values for levels 1, 2 and 3.

If we refresh on sandbox we get an additional 4 calls to openai/model, all of which return the currently selected model and its config values.

🔧 These should be combined.
🔧 Stop the additional calls to openai/model on sandbox.
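
For illustration, a single combined call on refresh could return everything in one payload, something like the sketch below. The endpoint name (GET /start) and field names are assumptions, not the current API:

// Hypothetical combined response for one GET /start?level=0 call on refresh.
// Endpoint name and field names are assumptions for illustration only.
interface StartResponse {
    emails: unknown[];           // what email/get returns today
    chatHistory: unknown[];      // what openai/history returns today
    defences: unknown[];         // what defence/status returns today
    availableModels: string[];   // what openai/validModels returns today
    systemRoles: unknown[];      // what systemRoles/ returns today
    chatModel?: unknown;         // sandbox only; replaces the four openai/model calls
}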

Ticket: #300

Change level 🛠🛠🛠

Here's what happens when I change from sandbox to level 1:

  • email/get?level=0 returns all the sent emails in the session for given level.
  • openai/history?level=0 returns the chat history in the session for given level.
  • defence/status?level=0 returns the defences as configured for given level.

These three are also called when you refresh.

🔧 These should be combined.
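
On the frontend this would mean one service call per level change instead of three. A minimal sketch, assuming a hypothetical /level/state path and response shape:

// Hypothetical frontend service: one request on level change instead of three.
// Path and response shape are assumptions for illustration only.
async function getLevelState(level: number) {
    const response = await fetch(`/level/state?level=${level}`);
    if (!response.ok) {
        throw new Error(`Failed to load level state: ${response.status}`);
    }
    // emails, chat history and defences arrive together in one payload
    return (await response.json()) as {
        emails: unknown[];
        chatHistory: unknown[];
        defences: unknown[];
    };
}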

Ticket #824

Send a chat message 🛠

  • openai/chat takes a message like
{
    "message": "hello",
    "currentLevel": 0
}

and returns a response like:

{
    "reply": "Hello! How can I assist you today?",
    "defenceReport": {
        "blockedReason": "",
        "isBlocked": false,
        "alertedDefences": [],
        "triggeredDefences": []
    },
    "wonLevel": false,
    "isError": false,
    "openAIErrorMessage": null,
    "sentEmails": []
}

🔧 This all looks pretty good. Possibly we could choose not to send empty/default values.
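
If we did want to trim it, one option is to drop fields that hold their default value before sending, with the client treating a missing field as the default. A sketch of such a helper (hypothetical, not existing code):

// Sketch: strip empty/default fields from a response body before sending it.
// The helper is hypothetical; the field names in the usage comment mirror the example above.
function stripDefaults(body: Record<string, unknown>): Record<string, unknown> {
    const result: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(body)) {
        const isEmptyArray = Array.isArray(value) && value.length === 0;
        const isDefault = value === '' || value === false || value === null || isEmptyArray;
        if (!isDefault) {
            result[key] = value;
        }
    }
    return result;
}

// stripDefaults({ reply: 'Hello!', wonLevel: false, sentEmails: [] })
// -> { reply: 'Hello!' }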

Ticket #825

Loading user avatar 🛠

If sending a message causes the user avatar to appear in the chat for the first time (including the first time since a reset), we see a new network call to get the avatar asset assets/UserAvatar-b2331cdf.svg.

🔧 Could stop this maybe? Cache it in the frontend somewhere? Presumably it's already being cached because it doesn't get loaded for subsequent chat messages.
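
If we did want to avoid that first on-demand fetch, one option would be to preload the asset when the chat mounts. A sketch, assuming a Vite-style asset import (the import path is a guess at where the SVG lives):

// Sketch: preload the user avatar so the browser has it cached before the first
// user message appears. With Vite, importing the asset yields its hashed URL.
import userAvatarUrl from './assets/UserAvatar.svg';

function preloadUserAvatar(): void {
    const link = document.createElement('link');
    link.rel = 'preload';
    link.as = 'image';
    link.href = userAvatarUrl;
    document.head.appendChild(link);
}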

Ticket #826

Reset Level 🛠🛠

  • openai/clear clears the chat history for given level.
  • email/clear clears the emails for given level.
  • openai/addHistory adds the "Level progress reset" info message to given level's chat history

🔧 Could be combined
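
A sketch of what a single combined route could look like on the backend (Express-style; the path, session shape and message text are assumptions, not existing code):

// Sketch of a combined reset-level endpoint (Express-style).
// Route path, session shape and message text are illustrative assumptions.
import { Router, Request, Response } from 'express';

const resetRouter = Router();

resetRouter.post('/level/reset', (req: Request, res: Response) => {
    const level = Number(req.query.level);
    // session typing omitted for brevity; assumes a levelState array as shown under "Reset all progress"
    const levelState = (req as any).session.levelState[level];

    levelState.chatHistory = []; // replaces openai/clear
    levelState.sentEmails = [];  // replaces email/clear

    // replaces the frontend's follow-up openai/addHistory call
    const resetMessage = { infoMessage: 'Level progress reset' };
    levelState.chatHistory.push(resetMessage);

    res.send({ resetMessage });
});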

Ticket: #645

Reset all progress 🛠🛠🛠

Here's what happens when I reset on Sandbox:

  • reset resets and then returns the entire levelState property of the session, which looks like:
[
    {
        "level": "LEVEL_1",
        "chatHistory": [],
        "defences": [
            {
                "id": "CHARACTER_LIMIT",
                "config": [
                    {
                        "id": "MAX_MESSAGE_LENGTH",
                        "value": "280"
                    }
                ],
                "isActive": false,
                "isTriggered": false
            },
...

with details of every defence type for every level, which is large (442 lines, 26,570 characters).

  • 4 calls to openai/model, all of which return the currently selected model and its config values [notably, these aren't returned in the main reset call].
  • health returns 200 (when backend is up).
  • email/get?level=3 returns the emails for level 3. But obviously they've just been reset, so it returns an empty array.
  • openai/history?level=3 returns the chat history for level 3. But obviously they've just been reset, so it returns an empty array.
  • defence/status?level=3 returns the configured values for level 3 defences. But these are already contained in the response to reset.

If resetting progress on level 1, you get the above calls but without the calls to openai/model.

🔧 This should all be combined into one request without all the repetition.
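
For comparison, a trimmed reset response might only carry what the UI needs for the level it is currently on, something like the sketch below (endpoint and field names are hypothetical):

// Hypothetical trimmed response for a combined reset-all call.
// Endpoint and field names are assumptions for illustration only.
interface ResetAllResponse {
    currentLevel: number;
    chatHistory: unknown[]; // empty after reset, but explicit
    sentEmails: unknown[];  // empty after reset, but explicit
    defences: unknown[];    // defences for the current level only, not all levels
    chatModel?: unknown;    // sandbox only, instead of four openai/model calls
}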

Ticket #827

Looking at the documents 🛠

Each time I open the view documents overlay we have:

  • documents, which simply returns the array of document metas.

🔧 Could load this once either at the start of the game, or when we load a level, rather than each time we open the overlay.
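
A minimal sketch of fetching the metas once per session using a module-level cache (the caching wrapper is hypothetical; only the documents path comes from the call above):

// Sketch: fetch the document metas once and reuse them for every overlay open.
let documentMetas: Promise<unknown[]> | null = null;

function getDocumentMetas(): Promise<unknown[]> {
    if (!documentMetas) {
        documentMetas = fetch('/documents').then((response) => response.json());
    }
    return documentMetas;
}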

Adding chat messages from the front end 🛠

I note that there is an endpoint to add a message to the backend's chat history. We only ever do this as a direct result of something that happened in the backend, so this endpoint and all its associated network calls don't need to exist.

🔧 Remove the endpoint /openai/addHistory and add messages to history in the backend instead
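
Once that endpoint is gone, the chat response could carry any info messages the backend has added to history, so the frontend only renders what it receives. A sketch of the shape (field names are assumptions, not the current API):

// Sketch: the chat response reports info messages the backend already stored,
// removing the need for the frontend to call openai/addHistory at all.
interface ChatResponseWithInfoMessages {
    reply: string;
    wonLevel: boolean;
    // e.g. a "defence activated" or "level progress reset" message, saved server-side
    addedInfoMessages: { infoMessage: string }[];
}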

Ticket #828

@pmarsh-scottlogic
Contributor Author

pmarsh-scottlogic commented Jan 22, 2024

Question:
Reducing API calls is hard, because there's a balance between minimising calls and minimising the amount of stuff we're storing in the frontend. At the extreme, we could do one API call on loading the app, and then exactly one further API call for each distinct interaction the user makes. But that would require the client to hold information about every level at once, so most of the stored information would be irrelevant at any point. Therefore: is there a good philosophy of how far to take it, or should I just use common sense?

@chriswilty
Member

chriswilty commented Feb 6, 2024

Great writeup 🏆

I see you'd already spotted the double message on Activating a defence - the second one adds an info message stating the defence had been activated, like

Hey backend, activate this defence for me would ya?
"Sure, done!"
Hey backend, that defence is now active ok?
"Erm ... yeah I know mate" 👀

Removing all the redundant "add history message" calls from the UI will require some working out, presumably tackling them one at a time until no more remain, then squishing the backend endpoint and frontend service. The ones that are conditionally sent in the ChatBox component's (colossal) processChatResponse function will require some careful work, though probably the main effort will be ensuring all (successful) responses contain the new messages to add to the chat panel, as some of those requests don't appear to have any response body.

The avatars are fine: they are requested the first time they are needed, then the browser (and actually CloudFront as well, in remote deployment) will cache them.

Document Metas: they are presumably only needed on Sandbox level, and as you say, we could load them once in that initial switch level request.

For the chat messages, I think the empty arrays are fine as they are teeny tiny, plus they are explicit: I am returning you all the sent emails, of which there are none.

Reset Progress is an interesting one. We don't expect it very often, so 25K of config data seems acceptable, but ... do we really need it all? Can we not (or do we not already) fetch the level config when we switch levels?

And finally, we could of course cache everything in the UI as you suggest, but when a user has been chatting to the bot for a while, chat data are likely to grow significantly. As you say, it's a trade-off between using browser memory (or local storage) and sending the same data over the network more than once. We are storing all this data server-side anyway, so it's (computationally) cheap for the backend to send what the UI needs on demand, e.g. when a user switches level or refreshes the page. We don't anticipate users doing that very often, so it feels like an acceptable trade-off.

@pmarsh-scottlogic
Contributor Author

Tickets made and prioritised, so I'm marking this as done
