Design
The core of what we are trying to build is a complete enterprise solution: OpenStack designed with resiliency, upgrades, and operations at the forefront of our minds, plus fully functioning baremetal deployment support that accounts for complex networking, shared storage support, and CI/CD pipelines, all part of the same simple-to-use, completely open kit, and all buildable from scratch. We are trying to be an all-in-one solution. It may not be a fit for everyone, but where it does fit, it will work well.
Given that helm is a nascent Kubernetes package manager, there are concerns about letting it manipulate the entire deployment for each small change. Keeping each component in its own chart ensures that when the keystone chart content changes, for instance, we target only that piece of infrastructure and nothing else. This design decision has certain costs.
First, it means that leveraging a single values file or environmental values file that spans multiple charts is simply not an option today, although consideration is being given in this issue. To support that paradigm in the interim, we provide a values.py script that chunks a single environment definition file into per-chart pieces, allowing an operator to feed one environmental definition into the various openstack-helm subcharts as helm --values input.
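As an illustration of that workflow, here is a minimal sketch; the file layout and key names are hypothetical, not the actual values.py schema. A single environment definition is keyed by chart name, and one chunk per chart is emitted for use as helm --values input:

```yaml
# env.yaml -- a single environment definition spanning multiple charts
# (hypothetical layout; key names are illustrative)
keystone:
  replicas: 3
  database:
    address: mariadb
mariadb:
  volume:
    size: 20Gi
```

```yaml
# keystone.yaml -- the chunk values.py would emit for the keystone chart,
# consumed as: helm install keystone --values keystone.yaml
replicas: 3
database:
  address: mariadb
```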
Second, using helm dependencies as an actual feature is out of the question in its current state. When multiple charts share a dependency on the same child charts, and those child charts deploy real Kubernetes infrastructure, unintended things start to happen. First, helm has no knowledge of what other charts have installed, so it attempts to reinstall those same child charts and generally fails on existing Kubernetes artifacts. Second, even if that somehow succeeds, removing a chart also removes its dependencies, so an operator may inadvertently remove shared infrastructure still in use by other charts. For this reason, dependencies today need to be managed outside of the charts by a larger installation and upgrade orchestrator.
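To make the failure mode concrete, consider a hedged example (chart names and versions are illustrative): if both the keystone and glance charts declared the same child chart in their requirements.yaml, installing both releases would try to create mariadb's Kubernetes objects twice, and deleting either release would tear down the mariadb the other still needs:

```yaml
# requirements.yaml in the keystone chart; glance would carry an identical entry
dependencies:
  - name: mariadb
    version: 0.1.0
    repository: http://localhost:8879/charts
```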
Lastly, this dependency limitation does not preclude the paradigm of a common chart that provides shared template definitions and common variables across all charts. As of helm 2.0.0, references from a parent chart to defines in its child charts work reliably, so openstack-helm makes every chart depend on a common chart. Unlike the scenario above, this works because the common chart never instantiates any Kubernetes resources; it only defines common template snippets.
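A minimal sketch of the pattern (the template and file names here are hypothetical): the common chart ships only a named template in a non-rendered .tpl file, and any chart that declares common as a dependency can reference it:

```yaml
# common/templates/_common.tpl -- defines a snippet, creates no resources
{{- define "common.fullname" -}}
{{ .Release.Name }}-{{ .Chart.Name }}
{{- end -}}
```

```yaml
# keystone/templates/deployment.yaml -- a consuming chart references the define
metadata:
  name: {{ template "common.fullname" . }}
```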
The project seeks to leverage native kolla images, produced by a kolla-builder process, which is simply a container that helps with kolla-build and, ultimately, the image push. We publish this as a Dockerfile today and are producing a chart for it to fulfill our ultimate goal of an end-to-end CI/CD solution. This ensures there are no magic images or build processes that other organizations cannot reproduce.
The project seeks to work with the helm community (mostly by bugging @technosophos in #helm on slack) to accomplish what we can with helm natively, without leveraging external tools. This means we do not wish to wrap helm templates in another layer of templating, and we want to minimize external tools (even our own, such as values.py) as helm expands its functionality.
The project in many places assumes some sort of persistent storage mechanism. A ceph chart is provided in openstack-helm, and we will continue to operationalize it so that it is usable in a production setting. We will strive to support other storage types and, over time, expand charts to support non-persistent storage to ease development requirements.
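As a generic sketch of what "assumes persistent storage" means in practice (this is illustrative, not the actual ceph chart interface; the values keys are hypothetical), a chart typically templates a PersistentVolumeClaim and lets the operator point its storage class at ceph RBD or another backend:

```yaml
# templates/pvc.yaml -- illustrative persistent volume claim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ .Values.volume.class }}
  resources:
    requests:
      storage: {{ .Values.volume.size }}
```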
This means we will focus on the telemetry, storage, monitoring, and networking needs of enterprise-grade clouds as we fill out our OpenStack chart portfolio. To be clear, we acknowledge that OpenStack is part of typical datacenter control planes now, but it is not the only component. It is simply another piece of software, albeit an important one, running next to many others in a typical cloud datacenter.