
Getting Started

Overview

Running Kai consists of:

  1. Launching a Postgres database and seeding it with application analysis data
  2. Launching the backend Kai REST API Service
    • This is the component that works with the database, constructs prompts, talks to Large Language Models (LLMs), and generates code fixes
  3. Launching a client that parses analysis information from analyzer-lsp and issues requests to the Kai backend
    • The primary client will be an IDE plugin
    • It's also possible to issue API requests directly; we provide a python script that does this to aid demonstrations. See example/README.md
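For a rough sense of what a direct API call looks like, here is a hypothetical sketch. The endpoint path, port, and payload fields below are illustrative assumptions, not the documented API; the real request shape is what the script described in example/README.md sends.

```shell
# HYPOTHETICAL example only: the endpoint name, port, and JSON fields here are
# placeholders for illustration. Consult example/README.md for the real client.
curl -s -X POST "http://localhost:8080/generate_fix" \
  -H "Content-Type: application/json" \
  -d '{"application_name": "coolstore", "file_path": "path/to/File.java"}'
```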

Recommended path - use podman compose up with cached LLM responses

The easiest way to run Kai is to use the prebuilt container images we publish to quay.io/konveyor/kai. You can learn more about early builds at docs/evaluation_builds.md.

This is the simplest configuration: it limits configuration choices and uses cached LLM results, so you can evaluate Kai without supplying your own API keys.

Launch Kai with sample data and cached LLM responses

This runs Kai against sample analysis reports that simulate the analysis data that would be obtained from Konveyor. By default it uses cached LLM responses, as explained in docs/contrib/configuration.md.

Steps:

  1. git clone https://github.com/konveyor/kai.git
  2. cd kai
  3. Optional configuration changes (OK to skip and use the defaults if using cached responses)
    1. Edit kai/config.toml to select your desired provider and model
    2. Export GENAI_KEY or OPENAI_API_KEY as appropriate, per docs/llm_selection.md
    3. Note: by default podman compose uses the stable image tag. To run with an alternate tag, export the TAG environment variable set to the tag you want, e.g. TAG="stable"
  4. Run podman compose up. The first run takes several minutes to download images and populate the sample data.
    • After the first run the DB will be populated, and subsequent starts will be much faster as long as the kai_kai_db_data volume is not deleted.
    • To clean up all resources, run podman compose down && podman volume rm kai_kai_db_data.
  5. The Kai backend is now running and ready to serve requests
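The steps above, using the defaults with cached LLM responses, boil down to the following sequence:

```shell
# Clone the repo and launch Kai with sample data and cached LLM responses.
git clone https://github.com/konveyor/kai.git
cd kai
podman compose up   # first run downloads images and seeds the sample DB

# Later, to tear everything down and remove the seeded data:
#   podman compose down && podman volume rm kai_kai_db_data
```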

Next interact with Kai via a Guided Walkthrough Scenario

For an initial evaluation, the recommended path is to follow the guided walkthrough at docs/scenarios/demo.md, which uses Kai to complete a migration of a Java EE app to Quarkus.

What are the general steps of Kai's current evaluation demo?

  1. We launch VSCode with our Kai VS Code extension from konveyor-ecosystem/kai-vscode-plugin
  2. We open a git checkout of a sample application: coolstore
  3. We run Kantra inside of VSCode to do an analysis of the application to learn what issues are present that need to be addressed before migrating to Quarkus
  4. We view the analysis information in VSCode
  5. We look at the impacted files and choose what files/issues we want to fix
  6. We click 'Generate Fix' in VSCode on a given file/issue and wait ~45 seconds for the Kai backend to generate a fix
  7. We view the suggested fix as a 'Diff' in VSCode
  8. We accept the generated fix
  9. The file in question has now been updated
  10. We move on to the next file/issue and repeat

Alternative methods of running

With data from Konveyor Hub

Konveyor integration is still being developed and is not yet fully integrated.

  1. git clone https://github.com/konveyor-ecosystem/kai.git
  2. cd kai
  3. Make changes to kai/config.toml to select your desired provider and model
  4. Export GENAI_KEY or OPENAI_API_KEY as appropriate
  5. Run USE_HUB_IMPORTER=True HUB_URL=https://tackle-konveyor-tackle.apps.cluster.example.com/hub IMPORTER_ARGS=-k podman compose --profile use_hub_importer up
    • Note: update the value of HUB_URL to match your Konveyor cluster
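The one-line command above can also be written with exported variables, which makes the HUB_URL substitution easier to see (the cluster URL shown is the placeholder from the step above; replace it with your own):

```shell
# Launch Kai with the Hub importer profile enabled.
export USE_HUB_IMPORTER=True
export HUB_URL="https://tackle-konveyor-tackle.apps.cluster.example.com/hub"  # your cluster
export IMPORTER_ARGS=-k   # only needed when Konveyor uses self-signed certificates
podman compose --profile use_hub_importer up
```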

Development Environment

You may also run the Kai server from a python virtual environment, which makes it easier to test local changes without building a container image.
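A minimal sketch of setting up such an environment follows. The exact install command is an assumption (the repo may use a requirements file, an editable install, or something else entirely; check the repo's contributor docs), and no server entry point is shown here:

```shell
# Sketch: create and activate a virtual environment inside the kai checkout.
cd kai
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies; the actual command depends on the repo layout,
# e.g. one of the following (check the contributor docs):
#   pip install -r requirements.txt
#   pip install -e .
```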

Example CLI Script in Python

  • See docs/example_cli_script.md for an alternative method the development team uses to exercise the Kai REST API from a python script

Misc notes

Extending the data Kai consumes

Misc notes with podman compose

Note that you need to use podman >= 1.1.0 to use the --profile option. podman does not currently support the alternative COMPOSE_PROFILES environment variable.

If your Konveyor instance does not use self-signed certificates, you may omit IMPORTER_ARGS=-k.

To clean up all resources run podman compose --profile use_hub_importer down && podman volume rm kai_kai_db_data.