3 changes: 2 additions & 1 deletion biome.json
@@ -22,6 +22,7 @@
   "javascript": {
     "formatter": {
       "quoteStyle": "double"
-    }
+    },
+    "globals": ["__HTTP_LOGGING_ENABLED__"]
   }
 }
17 changes: 17 additions & 0 deletions docs/CONTRIBUTING.md
@@ -43,6 +43,23 @@ Please ensure that the dev-production separation still works when adding new functionality.
In particular, when adding new top-level buttons, make them distinguishable for dev and production versions
(see [webpack-config](../webpack.config.js) for how this is currently handled for the manifest.json).

### Log request-response pairs

For tests, it can be useful to capture the full request-response interaction with an LLM.
For this, we added an extra development mode, which can be enabled by running
```shell
npm run with-http-logging
```
instead of `npm start`.

With this, each request made to an LLM opens a download modal in Thunderbird, where you can download a JSON
file containing the request you made (minus the `Authorization` header) and the response from the backend.

This request-response JSON can be reused in tests to mock the backend.
See also [mockResponses](../src/__tests__/mockResponses).

Make sure that the content of these examples is safe for publication (see also: Test mails below).
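
A minimal sketch of how a recorded pair could back such a test, assuming a vitest setup; the file name `login.json`, the stub shape, and the assertions are illustrative, not part of this repository:

```typescript
// Hypothetical test sketch: "login.json" and the expectations are made up.
import { expect, test, vi } from "vitest";
import rawPair from "./mockResponses/login.json";

// Loose typing keeps the sketch independent of the exact recorded fields.
const pair = rawPair as {
  httpRequest: { method: string; path: string; body?: unknown };
  httpResponse: { statusCode: number; body?: unknown };
};

test("mocks the backend from a recorded request-response pair", async () => {
  // Stub global fetch with the recorded response. The stub implements text(),
  // since the connection code reads the raw body before parsing it as JSON.
  vi.stubGlobal(
    "fetch",
    vi.fn().mockResolvedValue({
      ok: pair.httpResponse.statusCode < 400,
      status: pair.httpResponse.statusCode,
      statusText: "",
      text: () => Promise.resolve(JSON.stringify(pair.httpResponse.body ?? "")),
    } as unknown as Response),
  );

  const response = await fetch("https://llm.example/login", {
    method: pair.httpRequest.method,
    body: JSON.stringify(pair.httpRequest.body),
  });

  expect(response.status).toBe(pair.httpResponse.statusCode);
});
```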

### Build the plugin locally

- Build the addon package:
2 changes: 1 addition & 1 deletion manifest.json
@@ -11,7 +11,7 @@
       "update_url": "https://raw.githubusercontent.com/TNG/tb-llm-composer/refs/heads/main/updates.json"
     }
   },
-  "permissions": ["menus", "compose", "storage", "notifications", "messagesRead", "accountsRead"],
+  "permissions": ["menus", "compose", "storage", "notifications", "messagesRead", "accountsRead", "downloads"],
   "icons": {
     "64": "icons/icon-64px.png",
     "32": "icons/icon-32px.png",
1 change: 1 addition & 0 deletions package.json
@@ -5,6 +5,7 @@
   "main": "background.js",
   "scripts": {
     "start": "webpack --watch",
+    "with-http-logging": "webpack --env HTTP_LOGGING=true --watch",
     "build": "webpack --mode production",
     "zip": "run-script-os",
     "zip:darwin:linux": "cd build && zip -r ../llm-thunderbird.xpi ./*",
32 changes: 32 additions & 0 deletions src/__tests__/mockResponses/README.md
@@ -0,0 +1,32 @@
# Mock LLM responses

Real-world request-response pairs recorded from interactions with the LLM, for use as mocks.

The format is inspired by [mock-server](https://www.mock-server.com/mock_server/creating_expectations.html).
However, request matching should mostly be based on the request body, since, for example, the path is configurable.
This makes it easy to copy-paste examples from wherever you obtain them.

A very basic example of a JSON file in this folder looks like this:
```json
{
  "httpRequest": {
    "method": "POST",
    "path": "/login",
    "body": {
      "username": "foo",
      "password": "bar"
    }
  },
  "httpResponse": {
    "statusCode": 302,
    "headers": {
      "Location": [
        "https://www.mock-server.com"
      ]
    },
    "cookies": {
      "sessionId": "2By8LOhBmaW5nZXJwcmludCIlMDAzMW"
    }
  }
}
```
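
A minimal sketch of body-based matching, assuming a hypothetical helper that picks the recorded pair whose `httpRequest.body` equals the outgoing request body (the helper and its types are illustrative, not part of this folder):

```typescript
// Hypothetical matcher: compares canonical JSON of the request bodies, so
// key order must match between the recorded pair and the outgoing request.
interface RecordedPair {
  httpRequest: { method: string; path: string; body?: unknown };
  httpResponse: { statusCode: number; headers?: Record<string, string[]>; body?: unknown };
}

export function findPairByBody(pairs: RecordedPair[], requestBody: unknown): RecordedPair | undefined {
  const needle = JSON.stringify(requestBody);
  return pairs.find((pair) => JSON.stringify(pair.httpRequest.body) === needle);
}
```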
27 changes: 21 additions & 6 deletions src/llmConnection.ts
@@ -1,3 +1,4 @@
+import { logRequestResponseToFile } from "./logRequestResponseToFile";
 import { getPluginOptions, type LlmParameters } from "./optionsParams";

 export enum LlmRoles {
@@ -84,6 +85,8 @@
   return callLlmApi(options.model, requestBody, abortSignal, options.api_token);
 }

+export type LlmResponseBodyType = string | LlmTextCompletionResponse | TgiErrorResponse;
+
 async function callLlmApi(
   url: string,
   requestBody: LlmApiRequestBody,
@@ -98,20 +101,32 @@
   }

   console.log(`LLM-CONNECTION: Sending request to LLM: POST ${url} with body:\n`, JSON.stringify(requestBody));
-  const response = await fetch(url, {
+  const fetchOptions = {
     signal: signal,
     method: "POST",
     headers: headers,
     body: JSON.stringify(requestBody),
-  });
+  };
+  const response = await fetch(url, fetchOptions);
+  const responseBody = await safeParseBody(response);
+  if (process.env.NODE_ENV === "development" && __HTTP_LOGGING_ENABLED__) {
+    logRequestResponseToFile(url, fetchOptions, requestBody, response, responseBody);
+  }
   if (!response.ok) {
-    const errorResponseBody = await response.text();
-    throw Error(`LLM-CONNECTION: Error response from ${url}: ${errorResponseBody}`);
+    throw Error(`LLM-CONNECTION: Error response from ${url}: ${JSON.stringify(responseBody)}`);
   }
-  const responseBody = (await response.json()) as LlmTextCompletionResponse | TgiErrorResponse;
   console.log("LLM-CONNECTION: LLM responded with:", response.status, responseBody);
-  return responseBody as LlmTextCompletionResponse | TgiErrorResponse;
+  return responseBody;
 }
+
+async function safeParseBody(response: Response): Promise<LlmResponseBodyType> {
+  const responseBody = await response.text();

[Check failure on line 123 in src/llmConnection.ts: GitHub Actions / build]
src/__tests__/llmConnection.test.ts > Testing sentContentToLlm > with token, ok response
src/__tests__/llmConnection.test.ts > Testing sentContentToLlm > without token, ok response
TypeError: response.text is not a function ❯ safeParseBody src/llmConnection.ts:123:39 ❯ callLlmApi src/llmConnection.ts:111:30

+  try {
+    return JSON.parse(responseBody);
+  } catch (e) {
+    console.warn("Could not parse response body", responseBody, e);
+    return responseBody;
+  }
+}

 export function isLlmTextCompletionResponse(response: LlmTextCompletionResponse | TgiErrorResponse) {
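
The check failures above indicate that the existing test doubles for `fetch` do not implement `text()`, which the new `safeParseBody` calls unconditionally. A minimal sketch of a stub that would satisfy it, assuming the tests hand a fake `Response` to the connection code (illustrative, not part of this diff):

```typescript
// Illustrative stub: any fake Response now needs text(), because the body is
// read as raw text first and only then parsed as JSON.
function stubResponse(status: number, body: unknown): Response {
  return {
    ok: status < 400,
    status,
    statusText: "",
    headers: new Headers(),
    text: () => Promise.resolve(JSON.stringify(body)),
  } as unknown as Response;
}
```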
42 changes: 42 additions & 0 deletions src/logRequestResponseToFile.ts
@@ -0,0 +1,42 @@
import type { LlmResponseBodyType } from "./llmConnection";

declare global {
  const __HTTP_LOGGING_ENABLED__: boolean;
}

export function logRequestResponseToFile(
  url: string,
  requestOptions: RequestInit,
  requestBody: object,
  response: Response,
  responseBody: LlmResponseBodyType,
) {
  const headers = { ...requestOptions.headers } as { [key: string]: string };
  headers.Authorization = "***";
  const json = {
    request: {
      url: new URL(url).pathname,
      method: requestOptions.method,
      headers: headers,
      body: requestBody,
    },
    response: {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
      body: responseBody,
    },
  };
  const date = new Date();
  const blob = new Blob([JSON.stringify(json, null, 2)], { type: "text/plain;charset=utf-8" });
  browser.downloads
    .download({
      url: URL.createObjectURL(blob),
      filename:
        `request-response-${date.getFullYear()}${date.getMonth() + 1}${date.getDate()}-` +
        `${date.getHours()}${date.getMinutes()}${date.getSeconds()}.json`,
    })
    .catch((e) => {
      console.warn("Failed to trigger download of request-response", e);
    });
}
4 changes: 4 additions & 0 deletions webpack.config.js
@@ -1,3 +1,4 @@
+const { DefinePlugin } = require("webpack");
 const CopyWebpackPlugin = require("copy-webpack-plugin");
 const path = require("node:path");
 const TerserPlugin = require("terser-webpack-plugin");
@@ -62,6 +63,9 @@ module.exports = (_env, argv) => {
         },
       ],
     }),
+    new DefinePlugin({
+      __HTTP_LOGGING_ENABLED__: _env.HTTP_LOGGING === "true",
+    }),
   ],
   optimization: {
     minimize: isProductionMode,
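
For context on the flag plumbing: `npm run with-http-logging` passes `--env HTTP_LOGGING=true`, so `_env.HTTP_LOGGING === "true"` evaluates to `true` and `DefinePlugin` inlines `__HTTP_LOGGING_ENABLED__` as a compile-time constant. A sketch of what the guarded call in `llmConnection.ts` effectively compiles to (illustrative, post-substitution view):

```typescript
// With HTTP_LOGGING=true, the bundle effectively contains:
if (process.env.NODE_ENV === "development" && true) {
  logRequestResponseToFile(url, fetchOptions, requestBody, response, responseBody);
}
// In a production build the constant is inlined as false, so the minifier can
// drop the whole branch.
```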