A service for provisioning and managing fleets of Kafka instances.
For more information on how the service works, see the implementation documentation.
- Golang 1.19+
- Docker - to create the database container
- ocm cli - ocm command line tool
- Node.js v14.17+ and npm
There are some additional prerequisites required for running kas-fleet-manager due to its interaction with external services which are described below.
- Follow the populating configuration guide to prepare Fleet Manager with its needed configurations
- Compile the Fleet Manager binary
make binary
- Create and set up the Fleet Manager database
- Create and set up the database container and the initial database schemas
make db/setup && make db/migrate
- Optional - Verify tables and records are created
# Login to the database to get a SQL prompt
make db/login

# List all the tables
serviceapitests# \dt

# Verify that the `migrations` table contains multiple records
serviceapitests# select * from migrations;
- Start the Fleet Manager service in your local environment
./kas-fleet-manager serve
This starts the Fleet Manager server, which exposes its API on port 8000 by default.
NOTE: The service has numerous feature flags which can be used to enable/disable certain features of the service. Please see the feature flag documentation for more information.
- Verify the local service is working
curl -H "Authorization: Bearer $(ocm token)" http://localhost:8000/api/kafkas_mgmt/v1/kafkas
{"kind":"KafkaRequestList","page":1,"size":0,"total":0,"items":[]}
NOTE: Make sure you are logged in to OCM through the CLI before running this command. Details on that can be found here.
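If you have not logged in yet, a typical OCM CLI login looks like the sketch below; the --url value and the offline token source are assumptions, so adjust them to the OCM environment your setup targets.

# Hypothetical example: log in to OCM with an offline token before calling the API
ocm login --token=<your-offline-token> --url=staging

# Print the access token that the curl command above passes in the Authorization header
ocm token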
Kas-fleet-manager can be started without a dataplane OSD cluster; however, no Kafkas will be placed or provisioned. To manually set up a data plane OSD cluster, please follow the "Using an existing OSD cluster with manual scaling enabled" option in the data-plane-osd-cluster-options.md guide.
Follow this guide on how to deploy the KAS Fleet Manager service to an OpenShift cluster.
See the Interacting with Fleet Manager document.
# Start Swagger UI container
make run/docs
# Launch Swagger UI and Verify from a browser: http://localhost:8082
# Remove the Swagger UI container
make run/docs/teardown
In addition to starting and running a Fleet Manager server, the Fleet Manager binary provides additional commands to interact with the service (e.g. running data migrations).
To use these commands, run make binary to create the ./fleet-manager binary. Then run ./fleet-manager -h for information on the additional available commands.
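For example, the data migrations mentioned above can be run directly through the binary; the migrate sub-command name below is an assumption based on the db/migrate Makefile target, so check ./fleet-manager -h for the exact name.

# Build the binary and list its sub-commands
make binary
./fleet-manager -h

# Assumed sub-command: apply the database migrations directly via the binary
./fleet-manager migrate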
The service can be run in a number of different environments. Environments are essentially bespoke sets of configuration that the service uses to make it function differently. Environments can be set using the OCM_ENV environment variable. Below is the list of known environments and their details.
- development - The staging OCM environment is used. Sentry is disabled. Debugging utilities are enabled. This should be used in local development. This is the default environment used when directly running the Fleet Manager binary and the OCM_ENV variable has not been set.
- testing - The OCM API is mocked/stubbed out, meaning network calls to OCM will fail. The auth service is mocked. This should be used for unit testing.
- integration - Identical to testing but using an emulated OCM API server to respond to OCM API calls, instead of a basic mock. This can be used for integration testing to mock OCM behaviour.
- production - Debugging utilities are disabled, Sentry is enabled. This environment can be ignored in most development and is only used when the service is deployed.
The OCM_ENV environment variable should be set before running any Fleet Manager binary command or Makefile target.
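For example, assuming the serve command shown earlier, an environment can be selected by setting OCM_ENV inline or exporting it before running Makefile targets:

# Start the server in the testing environment instead of the default development one
OCM_ENV=testing ./kas-fleet-manager serve

# Or export it for the whole shell session before running Makefile targets
export OCM_ENV=testing
make db/migrate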
See the testing document.
See the contributing guide for general guidelines.
The https://github.com/bf2fc6cc711aee1a0c2a/cos-fleet-manager repository is used to build the cos-fleet-manager binary, which is a fleet manager for connectors, similar to how kas-fleet-manager is a fleet manager for Kafka instances. The cos-fleet-manager imports most of its code from the kas-fleet-manager, enabling only the connector APIs that are in this repo's internal/connector package.
Connector integration tests require most of the security and access configuration listed in the populating configuration guide document. The connector service uses AWS Secrets Manager as a connector-specific vault service for storing connector secret properties such as usernames, passwords, etc.
Before running integration tests, the required AWS secrets MUST be configured in the following files in the secrets/vault directory:
- secrets/vault/aws_access_key_id
- secrets/vault/aws_secret_access_key
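A minimal sketch of populating these files, assuming you substitute your own AWS credentials for the placeholders:

# Write the AWS credentials into the vault secret files (placeholders shown)
echo -n "<aws-access-key-id>" > secrets/vault/aws_access_key_id
echo -n "<aws-secret-access-key>" > secrets/vault/aws_secret_access_key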
Additional documentation can be found in the docs directory.
Some relevant documents are:
- Running the Service on an OpenShift cluster
- Adding new endpoint
- Adding new CLI flag
- Automated testing
- Requesting credentials and accounts
- Data Plane Setup
- Access Control
- Quota Management
- Fleet Manager Admin API endpoints overview
- Explanation of JWT token claims used across the fleet-manager
- kas-fleet-manager implementation information
- Data Plane Cluster dynamic scaling architecture