## Getting Started
Running Kai consists of:

- Launching a postgres database and seeding it with application analysis data
- Launching the backend Kai REST API service
  - This is the component that works with the database, constructs prompts, talks to Large Language Models (LLMs), and generates code fixes
- Launching a client that parses analysis information from analyzer-lsp and then issues requests to the Kai backend
  - The primary client will be an IDE plugin
    - For IDE setup see: Install the Kai VSCode Plugin
  - It's also possible to issue API requests directly, and we have a python script that does this to aid demonstrations. See example/README.md
The easiest way to run Kai is to leverage the prebuilt container images we publish to quay.io/konveyor/kai; you can learn more about early builds at docs/evaluation_builds.md.

This is the simplest configuration: it limits configuration choices and uses cached LLM results, so that you may evaluate Kai without having your own API keys.
- The cached data uses the `KAI__DEMO_MODE=TRUE` mode for running the backend. See docs/contrib/configuration.md for more information.
- Follow the guided scenario at docs/scenarios/demo.md to evaluate Kai.
This will run Kai using sample analysis reports that simulate the analysis data that would be obtained from Konveyor. Additionally, it will default to using cached LLM responses, as explained in docs/contrib/configuration.md.
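As a minimal sketch, demo mode can be toggled by exporting the `KAI__DEMO_MODE` variable (documented in docs/contrib/configuration.md) before starting the backend:

```shell
# Sketch: enable demo mode so cached LLM responses are served.
# The echo simply confirms the variable is set in this shell.
export KAI__DEMO_MODE=TRUE
echo "KAI__DEMO_MODE=${KAI__DEMO_MODE}"
```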
Steps:

```sh
git clone https://github.com/konveyor/kai.git
cd kai
```
- Optional configuration changes (OK to skip and use the defaults if using cached responses)
  - Make changes to `kai/config.toml` to select your desired provider and model
  - Export `GENAI_KEY` or `OPENAI_API_KEY` as appropriate, per docs/llm_selection.md
  - Note: By default the `stable` image tag will be used by podman compose.yaml. If you want to run with an alternate tag you can export the environment variable `TAG="stable"` with any tag you would like to use.
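A sketch of the optional exports described above; both values below are placeholders, not real credentials, and should be substituted with your own:

```shell
# Sketch: choose an alternate image tag and provide an LLM API key.
export TAG="stable"                    # any tag published at quay.io/konveyor/kai
export OPENAI_API_KEY="sk-placeholder" # or GENAI_KEY, per docs/llm_selection.md
echo "TAG=${TAG}"
```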
- Run `podman compose up`. The first time this is run it will take several minutes to download images and to populate sample data.
  - After the first run the DB will be populated, and subsequent starts will be much faster as long as the `kai_kai_db_data` volume is not deleted.
  - To clean up all resources run `podman compose down && podman volume rm kai_kai_db_data`.
- The Kai backend is now running and ready to serve requests.
For an initial evaluation, the recommended path is to follow the guided walkthrough we have created at docs/scenarios/demo.md, which walks through a scenario of using Kai to complete a migration of a Java EE app to Quarkus:
- We launch VSCode with our Kai VS Code extension from konveyor-ecosystem/kai-vscode-plugin
- We open a git checkout of a sample application: coolstore
- We run Kantra inside of VSCode to do an analysis of the application to learn what issues are present that need to be addressed before migrating to Quarkus
- We view the analysis information in VSCode
- We look at the impacted files and choose what files/issues we want to fix
- We click 'Generate Fix' in VSCode on a given file/issue and wait ~45 seconds for the Kai backend to generate a fix
- We view the suggested fix as a 'Diff' in VSCode
- We accept the generated fix
- The file in question has now been updated
- We move onto the next file/issue and repeat
Konveyor integration is still under development and is not yet complete.
```sh
git clone https://github.com/konveyor-ecosystem/kai.git
cd kai
```
- Make changes to `kai/config.toml` to select your desired provider and model
- Export `GENAI_KEY` or `OPENAI_API_KEY` as appropriate
- Run `USE_HUB_IMPORTER=True HUB_URL=https://tackle-konveyor-tackle.apps.cluster.example.com/hub IMPORTER_ARGS=-k podman compose --profile use_hub_importer up`
  - Note: you will want to update the value of `HUB_URL` to match your Konveyor cluster
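The inline variables from the command above can equivalently be exported first; a sketch (the `HUB_URL` value is the example hostname from this document, not a real cluster):

```shell
# Sketch: hub-importer settings as exported variables.
# Replace HUB_URL with your Konveyor cluster's hub URL.
export USE_HUB_IMPORTER=True
export HUB_URL="https://tackle-konveyor-tackle.apps.cluster.example.com/hub"
export IMPORTER_ARGS="-k"   # omit if your cluster does not use self-signed certificates
echo "USE_HUB_IMPORTER=${USE_HUB_IMPORTER}"
```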
You may also run the Kai server from a python virtual environment to test local changes without needing to build a container image.
- See docs/example_cli_script.md for an alternative method the development team uses to exercise the Kai REST API from a python script
- You may modify the analysis information Kai consumes via docs/custom_apps.md
Note that you need to use podman >= 1.1.0 to use the `--profile` option. podman does not currently support the alternative `COMPOSE_PROFILES` environment variable.
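One portable way to check a version string against that minimum is `sort -V`; a sketch, with the version hard-coded for illustration (in practice you would capture it from `podman --version`):

```shell
# Sketch: compare a version string against the 1.1.0 minimum using sort -V.
version="4.9.3"   # placeholder value for illustration
minimum="1.1.0"
# sort -V orders version strings; if the minimum sorts first, version >= minimum.
if [ "$(printf '%s\n' "$minimum" "$version" | sort -V | head -n1)" = "$minimum" ]; then
  echo "ok: ${version} >= ${minimum}"
else
  echo "too old: ${version} < ${minimum}"
fi
```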
If your Konveyor instance does not use self-signed certificates, you may omit `IMPORTER_ARGS=-k`.
To clean up all resources run `podman compose --profile use_hub_importer down && podman volume rm kai_kai_db_data`.