feat: Add Generative AI plugin (#313)
niallthomson authored Dec 9, 2024
1 parent 66ffd3b commit cf19564
Showing 83 changed files with 7,411 additions and 227 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -15,6 +15,7 @@ For detailed documentation regarding each plugin please see below:
| AWS CodePipeline | [Link](./plugins/codepipeline/README.md) | Show the status of AWS CodePipeline pipelines on the entity page. |
| AWS CodeBuild | [Link](./plugins/codebuild/README.md) | Show the status of AWS CodeBuild projects on the entity page. |
| AWS Config catalog module | [Link](./plugins/core/catalog-config/README.md) | Module that implements an entity provider to ingest AWS resources in to the Backstage catalog. |
| Generative AI | [Link](./plugins/genai/README.md) | Build assistants powered by Generative AI |
| Cost Insights for AWS | [Link](./plugins/cost-insights/README.md) | An implementation of the Cost Insights plugin that provides AWS cost information |

## Security
1 change: 1 addition & 0 deletions packages/app/package.json
@@ -18,6 +18,7 @@
"@aws/aws-codebuild-plugin-for-backstage": "workspace:^",
"@aws/aws-codepipeline-plugin-for-backstage": "workspace:^",
"@aws/cost-insights-plugin-for-backstage": "workspace:^",
"@aws/genai-plugin-for-backstage": "workspace:^",
"@backstage-community/plugin-cost-insights": "^0.12.25",
"@backstage-community/plugin-github-actions": "^0.6.16",
"@backstage-community/plugin-tech-radar": "^0.7.4",
2 changes: 2 additions & 0 deletions packages/app/src/App.tsx
@@ -39,6 +39,7 @@ import { RequirePermission } from '@backstage/plugin-permission-react';
import { catalogEntityCreatePermission } from '@backstage/plugin-catalog-common/alpha';
import { CostInsightsPage } from '@backstage-community/plugin-cost-insights';
import { costInsightsAwsPlugin } from '@aws/cost-insights-plugin-for-backstage';
import { AgentChatPage } from '@aws/genai-plugin-for-backstage';

const app = createApp({
apis,
@@ -113,6 +114,7 @@ const routes = (
<Route path="/settings" element={<UserSettingsPage />} />
<Route path="/catalog-graph" element={<CatalogGraphPage />} />
<Route path="/cost-insights" element={<CostInsightsPage />} />
<Route path="/assistant/:agentName" element={<AgentChatPage />} />
</FlatRoutes>
);

6 changes: 6 additions & 0 deletions packages/app/src/components/Root/Root.tsx
@@ -23,6 +23,7 @@ import {
SidebarSpace,
useSidebarOpenState,
Link,
ChatIcon,
} from '@backstage/core-components';
import MenuIcon from '@material-ui/icons/Menu';
import SearchIcon from '@material-ui/icons/Search';
@@ -70,6 +71,11 @@ export const Root = ({ children }: PropsWithChildren<{}>) => (
<SidebarItem icon={ExtensionIcon} to="api-docs" text="APIs" />
<SidebarItem icon={LibraryBooks} to="docs" text="Docs" />
<SidebarItem icon={CreateComponentIcon} to="create" text="Create..." />
<SidebarItem
icon={ChatIcon}
to="assistant/general"
text="Chat Assistant"
/>
{/* End global nav */}
<SidebarDivider />
<SidebarScrollWrapper>
2 changes: 2 additions & 0 deletions packages/backend/package.json
@@ -21,6 +21,8 @@
"@aws/aws-codepipeline-plugin-for-backstage-backend": "workspace:^",
"@aws/aws-core-plugin-for-backstage-scaffolder-actions": "workspace:^",
"@aws/cost-insights-plugin-for-backstage-backend": "workspace:^",
"@aws/genai-plugin-for-backstage-backend": "workspace:^",
"@aws/genai-plugin-langgraph-agent-for-backstage": "workspace:^",
"@backstage/backend-defaults": "^0.5.3",
"@backstage/backend-plugin-api": "^1.0.2",
"@backstage/catalog-client": "^1.8.0",
3 changes: 3 additions & 0 deletions packages/backend/src/index.ts
@@ -20,4 +20,7 @@ backend.add(import('@aws/aws-core-plugin-for-backstage-scaffolder-actions'));

backend.add(import('@aws/cost-insights-plugin-for-backstage-backend'));

backend.add(import('@aws/genai-plugin-for-backstage-backend'));
backend.add(import('@aws/genai-plugin-langgraph-agent-for-backstage'));

backend.start();
187 changes: 187 additions & 0 deletions plugins/genai/README.md
@@ -0,0 +1,187 @@
# Generative AI plugin for Backstage (Experimental)

This experimental Backstage plugin helps build generative AI assistants in a manner that can leverage the broader Backstage plugin ecosystem. It relies on "tool use" to provide LLMs with access to existing Backstage backend plugins so that the models can access data via Backstage such as the catalog, TechDocs, CI/CD, Kubernetes resources etc.

Features:

- Simple conversational chat interface
- Configure multiple AI "agents" for specific purposes
- Modular approach to providing agent implementations
- Provide "tools" to agents through Backstage extensions

## Before you begin

Considerations before you explore this plugin:

1. It's experimental.
1. Using this plugin will incur costs from your LLM provider; you are responsible for these costs.
1. This plugin does not build in guardrails or other protective mechanisms against prompt injection, leaking of sensitive information, etc.; you are responsible for providing these.

## Pre-requisites

This plugin relies on external LLMs, and will generally require models that support tool-use/function-calling. Some examples of models that support this include:

1. Anthropic Claude >= 3 (Haiku, Sonnet, Opus)
1. OpenAI
1. Meta Llama (certain models)

The example LangGraph implementation provided can use:

1. [Amazon Bedrock](https://aws.amazon.com/bedrock/)
1. [OpenAI](https://openai.com/)

To explore support for other models/providers please raise a GitHub issue.

## Installation

NOTE: This guide will use the provided LangGraph implementation. To implement your own agent type see [Extending](#extending).

This guide assumes that you are familiar with the general [Getting Started](../../docs/getting-started.md) documentation and that you have an existing Backstage application.

### Backend package

Install the backend package in your Backstage app:

```shell
yarn workspace backend add @aws/genai-plugin-for-backstage-backend @aws/genai-plugin-langgraph-agent-for-backstage
```

Add the plugin to the `packages/backend/src/index.ts`:

```typescript
const backend = createBackend();
// ...
backend.add(import('@aws/genai-plugin-for-backstage-backend'));
backend.add(import('@aws/genai-plugin-langgraph-agent-for-backstage'));
// ...
backend.start();
```

Verify that the backend plugin is running in your Backstage app. You should receive `{"status":"ok"}` when accessing this URL:

`http://<your backstage app>/api/aws-genai/health`.
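
For example, assuming the backend is running locally on the default Backstage backend port (7007), you could check it with:

```shell
curl http://localhost:7007/api/aws-genai/health
# {"status":"ok"}
```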

### Frontend package

Install the frontend package in your Backstage app:

```shell
yarn workspace app add @aws/genai-plugin-for-backstage
```

Edit `packages/app/src/App.tsx` to add a route for the chat UI page:

```tsx
import { AgentChatPage } from '@aws/genai-plugin-for-backstage';

{
/* ... */
}

const routes = (
<FlatRoutes>
    {/* ... */}
<Route path="/assistant/:agentName" element={<AgentChatPage />} />
</FlatRoutes>
);
```

Now edit `packages/app/src/components/Root/Root.tsx` to add a menu item:

```tsx
import { ChatIcon } from '@backstage/core-components';

{
/* ... */
}
export const Root = ({ children }: PropsWithChildren<{}>) => (
<SidebarPage>
<Sidebar>
{/* ... */}
<SidebarGroup label="Menu" icon={<MenuIcon />}>
{/* ... */}
<SidebarItem
icon={ChatIcon}
to="assistant/general"
text="Chat Assistant"
/>
{/* ... */}
</SidebarGroup>
{/* ... */}
</Sidebar>
{/* ... */}
</SidebarPage>
);
```

The URL `assistant/general` means we're going to be using an agent named `general`, which we'll configure below.

### Creating your first agent

This plugin is built around the notion of creating one or more "agents" that can be invoked. These are defined by configuration, so let's configure our first agent.

Add this to your Backstage configuration file (for example `app-config.yaml`):

```yaml
genai:
agents:
general: # This matches the URL in the frontend
description: General chat assistant
prompt: >
You are an expert in platform engineering and answer questions in a succinct and easy to understand manner.
Answers should always be well-structured and use well-formed Markdown.
The current user is {username} and you can provide that information if asked.
langgraph:
        messagesMaxTokens: 150000 # Set based on the context window of the chosen model; message history is pruned to this number of tokens
# Use appropriate snippet for your model provider
bedrock:
modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0'
region: us-west-2
# openai:
# apiKey: ${OPENAI_API_KEY}
```

See the [LangGraph agent documentation](./agent-langgraph/) for the full configuration reference.

Start the Backstage application:

```shell
yarn dev
```

Access the application in your browser and select the "Chat Assistant" option in the menu. Ask a general question like "What is Terraform?".

### Adding tools

We can provide tools/functions that can be called by agents to retrieve context or perform actions. Tools can be added to the agent using a Backstage extension point and packaged as NPM packages.
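
As a rough sketch of how a custom tool might be packaged, the backend module below registers a hypothetical `teamOnCall` tool. The `agentToolsExtensionPoint` import, the `addTool` signature, and the tool itself are illustrative assumptions rather than the plugin's confirmed API; see the [tools documentation](./docs/tools.md) for the actual contract.

```typescript
// Illustrative sketch only: the extension point name and tool shape are
// assumptions and may not match the real plugin API (see docs/tools.md).
import { createBackendModule } from '@backstage/backend-plugin-api';
// Assumed export and package name for the GenAI tool extension point.
import { agentToolsExtensionPoint } from '@aws/genai-plugin-for-backstage-node';

export const exampleToolModule = createBackendModule({
  pluginId: 'aws-genai', // Matches the GenAI backend plugin ID
  moduleId: 'example-tool',
  register(reg) {
    reg.registerInit({
      deps: { tools: agentToolsExtensionPoint },
      async init({ tools }) {
        // The name and description are what the model uses to decide
        // whether (and how) to call the tool.
        tools.addTool({
          name: 'teamOnCall',
          description: 'Returns the current on-call engineer for a given team',
          handler: async ({ team }: { team: string }) => {
            // Placeholder implementation; a real tool would call a backend
            // service or another Backstage plugin here.
            return `On-call information for ${team} would be returned here`;
          },
        });
      },
    });
  },
});
```

A module like this would be added to the backend with `backend.add(...)` in `packages/backend/src/index.ts`, and the tool name could then be listed in an agent's `tools` configuration alongside the built-in tools described below.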

There are several tools built in to the plugin related to core Backstage functionality. The `backstageCatalogSearch`, `backstageEntity`, and `backstageTechdocsSearch` tools give the model basic access to the Backstage catalog and TechDocs documentation.

Update the previous agent definition to add the `tools` field:

```yaml
genai:
agents:
general:
description: [...]
prompt: [...]
langgraph: [...]
      tools:
- backstageCatalogSearch
- backstageEntity
- backstageTechdocsSearch
```

Restart Backstage to reload the configuration and try asking the chat assistant a question related to information in your Backstage catalog, for example "Summarize <component name> from the Backstage catalog".

NOTE: After Backstage starts locally there can be a delay while the catalog and TechDocs are indexed for search. You will not receive search results until the index is built.

## Further reading

You can view the rest of the documentation to understand how to evolve your chat assistant:

1. Prompting tips: Various tips on how to configure the agent system prompt. [See here](./docs/prompting-tips.md).
1. Tools: Provide tools/functions that can be called by agents to retrieve context or perform actions. [See here](./docs/tools.md).
1. Agent implementation: Provide an implementation for how an agent responds to prompts. [See here](./docs/agent-types.md).
1 change: 1 addition & 0 deletions plugins/genai/agent-langgraph/.eslintrc.js
@@ -0,0 +1 @@
module.exports = require('@backstage/cli/config/eslint-factory')(__dirname);
56 changes: 56 additions & 0 deletions plugins/genai/agent-langgraph/README.md
@@ -0,0 +1,56 @@
# Generative AI plugin for Backstage - LangGraph Agent Type

This package implements an agent for the Generative AI plugin for Backstage based on [LangGraph.js](https://github.com/langchain-ai/langgraphjs).

Features:

1. Use the [ReAct pattern](https://react-lm.github.io/) to call available tools to answer prompts
1. Choose between Amazon Bedrock or OpenAI as the model provider
1. Integrate with [LangFuse](https://github.com/langfuse/langfuse) for observability

Limitations:

1. In-memory persistence only: chat sessions are persisted only in memory and are lost when the backend restarts

## Configuration

This agent can be configured at two levels: global and per-agent.

### Global

Global configuration values apply to all agents; all of this is optional:

```yaml
genai:
langgraph:
langfuse: # (Optional) Configuration for LangFuse observability
baseUrl: http://localhost:3001 # (Required) LangFuse URL
publicKey: pk-aaa # (Required) Public key
secretKey: sk-bbb # (Required) Secret key
```

### Per-agent

Per-agent configuration applies only to the agent to which it corresponds. The available parameters are:

```yaml
genai:
agents:
general:
description: [...]
prompt: [...]
langgraph:
        messagesMaxTokens: 100000 # (Required) Prune message history to a maximum of this number of tokens
temperature: 0 # (Optional) Model temperature
maxTokens: 4000 # (Optional) Maximum output tokens
topP: 0.9 # (Optional) Model topP
        # Only include the section below that matches your model provider
# Bedrock only
bedrock:
modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0' # (Required) Bedrock model ID
region: us-west-2 # (Required) Bedrock AWS region
# OpenAI only
openai:
          apiKey: ${OPENAI_API_KEY} # (Required) OpenAI API key
modelName: 'gpt-3.5-turbo-instruct' # (Optional) OpenAI model name
```
67 changes: 67 additions & 0 deletions plugins/genai/agent-langgraph/config.d.ts
@@ -0,0 +1,67 @@
/**
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* You may obtain a copy of the License at
* http://www.apache.org/licenses/LICENSE-2.0
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

export interface Config {
genai?: {
agents?: {
[name: string]: {
langgraph?: {
/**
* (Required) Maximum tokens to retain in context sent to the model
*/
messagesMaxTokens: number;
/**
* (Optional) Maximum tokens that will be returned by the model
*/
maxTokens?: number;
/**
* (Optional) Model temperature
*/
temperature?: number;
/**
* (Optional) Model topP
*/
topP?: number;
/**
* (Optional) Specific configuration for Amazon Bedrock
*/
bedrock?: {
/**
* (Required) Region to use to access Amazon Bedrock API
*/
region: string;
/**
* (Required) Amazon Bedrock model ID to use
* @see https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
*/
modelId: string;
};
/**
* (Optional) Specific configuration for OpenAI
*/
openai?: {
/**
* (Required) OpenAI API key for authentication
* @visibility secret
*/
apiKey: string;
/**
* (Optional) Name of the OpenAI model to use
*/
modelName?: string;
};
};
};
};
};
}