Appilot [ˈæpaɪlət] stands for "application pilot". It is an experimental project that helps you operate applications using GPT-like LLMs.
- Application management: deploy, upgrade, rollback, etc.
- Environment management: clone an environment, view its topology, etc.
- Diagnosis: view logs, find flaws, and propose fixes.
- Safeguard: any action involving state changes requires human approval.
- Hybrid infrastructure: works on Kubernetes, VMs, cloud, and on-prem.
- Multi-language support: choose the natural language you're comfortable with.
- Pluggable backends: supports multiple backends, including Walrus and Kubernetes, and is extensible.
Chat to deploy llama-2 on AWS (demo video: `appilot-llama2.mov`).
Other use cases:

- Deploy from source code
- Manage environments
- Manage applications in Kubernetes using Helm charts
- Operate native Kubernetes resources
- Diagnose and fix issues
Prerequisites:

- Clone the repository:

  ```shell
  git clone https://github.com/seal-io/appilot && cd appilot
  ```

- Run the following command to get the envfile:

  ```shell
  cp .env.example .env
  ```

- Edit the `.env` file and fill in `OPENAI_API_KEY`.
- Run the following command to install. It will create a venv and install the required dependencies:

  ```shell
  make install
  ```

- Run the following command to get started:

  ```shell
  make run
  ```

- Ask Appilot to deploy an application, e.g.:

  ```
  > Deploy a jupyterhub.
  ...
  > Get url of the jupyterhub.
  ```
Appilot is configurable via environment variables or the envfile:

| Parameter | Description | Default |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key; access to the gpt-4 model is required. | "" |
| OPENAI_API_BASE | Custom OpenAI API base. You can integrate with other LLMs as long as they serve in the same API style. | "" |
| TOOLKITS | Toolkits to enable. Currently supports Kubernetes and Walrus. Case insensitive. | "kubernetes" |
| NATURAL_LANGUAGE | Natural language used to interact with you, e.g., Chinese, Japanese, etc. | "English" |
| SHOW_REASONING | Show AI reasoning steps. | True |
| VERBOSE | Output in verbose mode. | False |
| WALRUS_URL | URL of Walrus; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_API_KEY | API key of Walrus; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_SKIP_TLS_VERIFY | Skip TLS verification for the Walrus API. Use when testing with self-signed certificates. Valid when the Walrus toolkit is enabled. | True |
| WALRUS_DEFAULT_PROJECT | Project name for the default context; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_DEFAULT_ENVIRONMENT | Environment name for the default context; valid when the Walrus toolkit is enabled. | "" |
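To illustrate how these parameters relate, here is a hypothetical helper (not part of Appilot) that checks a configuration the way the table describes: `OPENAI_API_KEY` is always required, while the Walrus settings matter only when the Walrus toolkit is enabled via `TOOLKITS`.

```python
# Hypothetical sketch, not Appilot's actual validation logic.
ALWAYS_REQUIRED = ["OPENAI_API_KEY"]
WALRUS_REQUIRED = ["WALRUS_URL", "WALRUS_API_KEY"]

def missing_settings(env: dict) -> list:
    """Return the names of required settings that are unset or empty."""
    # TOOLKITS is case insensitive and defaults to "kubernetes".
    toolkits = env.get("TOOLKITS", "kubernetes").lower()
    required = list(ALWAYS_REQUIRED)
    if "walrus" in toolkits:
        required += WALRUS_REQUIRED
    return [name for name in required if not env.get(name)]

print(missing_settings({"OPENAI_API_KEY": "sk-..."}))  # []
print(missing_settings({"TOOLKITS": "walrus"}))
# ['OPENAI_API_KEY', 'WALRUS_URL', 'WALRUS_API_KEY']
```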
Follow the steps in the quickstart above to run with the Kubernetes backend.
Prerequisites: Install Walrus.
Walrus serves as the application management engine. It provides features like hybrid infrastructure support, environment management, etc. To enable Walrus backend, edit the envfile:
- Set `TOOLKITS=walrus`
- Fill in `OPENAI_API_KEY`, `WALRUS_URL` and `WALRUS_API_KEY`

Then you can run Appilot to get started:

```shell
make run
```
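Putting the settings above together, a Walrus-backend envfile might look like the sketch below; the URL and keys are placeholders, substitute your own values.

```shell
# Example .env fragment for the Walrus backend (placeholder values).
TOOLKITS=walrus
OPENAI_API_KEY=sk-...
WALRUS_URL=https://walrus.example.com
WALRUS_API_KEY=...
```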
You can run Appilot in a Docker container when using the Walrus backend.

Prerequisites: Install Docker.

- Get an envfile by running the following command:

  ```shell
  cp .env.example .env
  ```

- Configure the `.env` file:
  - Set `TOOLKITS=walrus`
  - Fill in `OPENAI_API_KEY`, `WALRUS_URL` and `WALRUS_API_KEY`
- Run the following command:

  ```shell
  docker run -it --env-file .env sealio/appilot:main
  ```
You can use other LLMs as the reasoning engine of Appilot, as long as they serve inference APIs in an OpenAI-compatible way.
- Configure the `.env` file, then set `OPENAI_API_BASE=https://your-api-base`.
- Run Appilot as normal.
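For example, assuming you run an OpenAI-compatible inference server yourself (the endpoint below is a placeholder, not a real URL), the relevant envfile settings could look like:

```shell
# Placeholder endpoint; replace with your own OpenAI-compatible server.
OPENAI_API_BASE=https://your-api-base
# Some self-hosted servers accept any non-empty key; check your server's docs.
OPENAI_API_KEY=...
```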
The following is the architecture diagram of Appilot:
Copyright (c) 2023 Seal, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.