
WIP: Istio Cluster config


The install separates the 'cluster-wide'/'cluster-admin' components from 'namespace'-scoped components that can be deployed without cluster-admin credentials.

It is expected that the namespace-scoped components may be managed by CI/CD or by teams with reduced permissions. We want to allow minimally scoped credentials to be used for managing each Istio component, and to reduce the use of cluster-wide permissions.
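For example, a CI/CD service account could be granted only namespace-scoped permissions for the namespace it manages. This is a minimal sketch; the namespace and account names below are placeholders for illustration, not resources created by the installer:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-component-deployer
  namespace: istio-control
rules:
# enough to manage a typical namespace-scoped component
- apiGroups: ["", "apps"]
  resources: ["deployments", "services", "configmaps", "secrets", "serviceaccounts"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-component-deployer
  namespace: istio-control
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-component-deployer
subjects:
- kind: ServiceAccount
  name: ci-cd-deployer
  namespace: istio-control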

For a long time we had a 2-step install process: the first step installed the CRDs (istio-init), and the second step installed the components mixed with related cluster-wide resources.

We now provide the option for the first step to include all cluster-wide configs - CRDs, cluster roles, cluster role bindings, and the required namespaces and service accounts. The latter are needed in order to create the cluster role bindings.
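For illustration, the cluster-wide bundle contains resources of roughly this shape; the bindings reference service accounts in specific namespaces, which is why the namespaces and service accounts are created in the same step. The names below are placeholders rather than the exact manifests in the installer, and the istio=cluster label is assumed from the prune selector used in the install command:

apiVersion: v1
kind: Namespace
metadata:
  name: istio-control
  labels:
    istio: cluster
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-pilot-service-account
  namespace: istio-control
  labels:
    istio: cluster
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-pilot-istio-control
  labels:
    istio: cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-pilot
subjects:
- kind: ServiceAccount
  name: istio-pilot-service-account
  namespace: istio-control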

Installation of the cluster-wide resources should be the first step, replacing the CRD install:

kubectl apply -k github.com/istio/installer/kustomize/cluster --prune -l istio=cluster
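Because the apply uses --prune with the istio=cluster label selector, the cluster-wide resources are expected to carry that label; re-running the command will then garbage-collect labeled resources that are no longer part of the manifests. Assuming that labeling convention, the created resources can be reviewed with:

kubectl get crds,clusterroles,clusterrolebindings -l istio=cluster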

A major benefit of this model is that it allows multiple deployments of each component (pilot, etc.) in each namespace, and eliminates the need to use a separate namespace per version.

Component or bundle requirements

After installing the 'cluster-wide' resources, the admin has the choice of a full a-la-carte mode, where each microservice is managed independently, or a 'bundled' mode, where a set of components is installed at once.
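For example (the kustomize paths below are placeholders for illustration; the actual directory names in the installer may differ), an a-la-carte install of just the control plane component could look like:

kubectl apply -k github.com/istio/installer/kustomize/istio-control

while a bundle would install a pre-assembled set of components in one step:

kubectl apply -k github.com/istio/installer/kustomize/default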

In either case, the installed components:

  • must not define cluster-wide resources (cluster roles, CRDs, cluster role bindings)
  • must not share resources with the same name - config maps, etc. must be qualified

This is similar to a Knative Revision.
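One way to satisfy the naming requirement is to qualify every resource per deployment, for example with a kustomize nameSuffix, so that two versions of the same component can coexist in a namespace. This is only a sketch; the base path and suffix are placeholders:

# kustomization.yaml for one 'revision' of a component
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: istio-control
nameSuffix: -v1-2-0
resources:
- github.com/istio/installer/kustomize/istio-control

With distinct suffixes, the config maps, services and deployments of two revisions do not collide, much like each Knative Revision gets its own uniquely named resources.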

Future changes

  • Require that cluster-wide resources remain stable across a TLS cycle, so that the 'admin-creds' step can be done manually, including a review of the permissions and cluster-wide resources (see the example below).
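When the cluster-wide resources are stable, that review can be done with the normal tools before applying, for example:

kubectl diff -k github.com/istio/installer/kustomize/cluster

which shows the pending changes to the cluster-wide resources without modifying anything.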