Commit ce6a0cd: Merge pull request #319 from tetratelabs/update-istio-charts-1.26.4+2 ("Update Istio charts 1.26.4+2"; 2 parents: 3196a23 + fd5fad0)

227 files changed: 29539 additions, 0 deletions
# Istio Installer

## WARNING: Do not use the files in this directory to install Istio

This directory contains the Helm chart _sources_, which are versioned, built, and pushed to the following Helm repositories with each Istio release. If you want to make changes to the Istio Helm charts, you're in the right place.

If you want to _install_ Istio with Helm, please instead [follow the Helm installation docs here](https://istio.io/latest/docs/setup/install/helm/).

Charts in this folder are published to the following Helm repos:
- `https://istio-release.storage.googleapis.com/charts` (charts for official release versions)
- `oci://gcr.io/istio-release/charts/` (charts for official release versions and dev build versions)

Chart publishing is handled by [release builder](https://github.com/istio/release-builder).
---

Note: if you are making any changes to the charts or values.yaml in this directory, first read [UPDATING-CHARTS.md](UPDATING-CHARTS.md).

The Istio installer is a modular, 'a-la-carte' installer for Istio. It is based on a fork of the Istio Helm templates, refactored to increase modularity and isolation.
Goals:

- Improve the upgrade experience: users should be able to roll out upgrades gradually, with proper canary deployments for Istio components. It should be possible to deploy a new version while keeping the stable version in place, and gradually migrate apps to the new version.

- More flexibility: the new installer allows multiple 'environments', letting applications select a set of control plane settings and components. While the entire mesh respects the same APIs and config, apps may target different 'environments' that contain different instances and variants of Istio.

- Better security: separate Istio components reside in different namespaces, allowing different teams or roles to manage different parts of Istio. For example, a security team would maintain the root CA and policy, a telemetry team may only have access to Prometheus, and a different team may maintain the control plane components (which are highly security sensitive).
The install is organized in 'environments': each environment consists of a set of components in different namespaces that are configured to work together. Regardless of environment, workloads can talk with each other and obey the Istio configuration resources, but each environment can use different Istio versions and different configuration defaults.
`istioctl kube-inject` or the automatic sidecar injector are used to select the environment. In the case of the sidecar injector, the namespace label `istio-env: <NAME_OF_ENV>` is used instead of the conventional `istio-injected: true`. The name of the environment is defined as the namespace where the corresponding control plane components (config, discovery, auto-injection) are running. In the examples below, this is the `istio-control` namespace by default. Pod annotations can also be used to select a different environment.
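As an illustration, a namespace opting all of its workloads into the `istio-control` environment might be labeled like this (the `myapp` namespace name is hypothetical; only the `istio-env` label comes from the text above):

```yaml
# Hypothetical example: select the 'istio-control' environment for the
# 'myapp' namespace via the istio-env label described above.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    istio-env: istio-control
```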
## Installing

The new installer is intended to be modular and very explicit about what is installed. It has far more steps than the Istio installer, but each step is smaller, focused on a specific feature, and can be performed by different people/teams at different times.
It is strongly recommended that different namespaces are used, with different service accounts. In particular, access to the security-critical production components (root CA, policy, control) should be locked down and restricted. The new installer allows multiple instances of policy/control/telemetry, so testing/staging of new settings and versions can be performed by a different role than the prod version.

The intended users of this repo are users running Istio in production who want to select, tune, and understand each binary that gets deployed, and select which combination to use.
64+
65+
Note: each component can be installed in parallel with an existing Istio 1.0 or 1.1 installation in
66+
`istio-system`. The new components will not interfere with existing apps, but can interoperate,
67+
and it is possible to gradually move apps from Istio 1.0/1.1 to the new environments and
68+
across environments ( for example canary -> prod )
69+
70+
Note: there are still some cluster roles that may need to be fixed, most likely cluster permissions
71+
will need to move to the security component.
72+
73+
## Everything is Optional

Each component in the new installer is optional. Users can install the component defined in the new installer, use the equivalent component in `istio-system` configured with the official installer, or use a different version or implementation.

For example, you may use your own Prometheus and Grafana installs, or you may use a specialized/custom certificate provisioning tool, or use components that are centrally managed and running in a different cluster.

This is a work in progress, building on top of the multi-cluster installer.

As an extreme, the goal is to make it possible to run Istio workloads in a cluster without installing any Istio component in that cluster. Currently, the minimum we require is the security provider (node agent or citadel).
### Install Istio CRDs

This is the first step of the installation. Please do not remove or edit any CRD: config currently requires all CRDs to be present. On each upgrade it is recommended to reapply the file, to make sure you get all CRDs. CRDs are separated by release and by component type in the CRD directory.

Istio has strong integration with certmanager. Some operators may want to keep their current certmanager CRDs in place and not have Istio modify them. In this case, it is necessary to apply the CRD files individually.
```bash
kubectl apply -k github.com/istio/installer/base
```

or

```bash
kubectl apply -f base/files
```
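For the certmanager case above, a per-file loop along these lines could work (a sketch only: the `apply_crds` helper and the assumption that certmanager CRD files are identifiable by name are illustrative, not from the repo):

```shell
# Hedged sketch: apply CRD files one at a time so that existing
# certmanager CRDs can be left untouched. The assumption that
# certmanager files contain "certmanager" in their name is illustrative.
apply_crds() {
  local dir="$1"
  for f in "$dir"/*.yaml; do
    case "$f" in
      *certmanager*) echo "skipping $f" ;;   # keep externally managed CRDs
      *) kubectl apply -f "$f" ;;
    esac
  done
}

# e.g. apply_crds base/files
```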
### Install Istio-CNI

This is an optional step. CNI must run in a dedicated namespace; it is a 'singleton' and extremely security sensitive. Access to the CNI namespace must be highly restricted.

**NOTE:** The environment variable `ISTIO_CLUSTER_ISGKE` is assumed to be set to `true` if the cluster is a GKE cluster.
```bash
ISTIO_CNI_ARGS=
# TODO: What k8s data can we use for this check for whether GKE?
if [[ "${ISTIO_CLUSTER_ISGKE}" == "true" ]]; then
    ISTIO_CNI_ARGS="--set cni.cniBinDir=/home/kubernetes/bin"
fi
iop kube-system istio-cni $IBASE/istio-cni/ ${ISTIO_CNI_ARGS}
```

TODO. It is possible to add Istio-CNI later, and gradually migrate.
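The `iop` helper used in the commands above and below is not defined in this excerpt. A minimal sketch of what such a wrapper might look like, assuming Helm 3 and kubectl on the PATH and the calling convention `iop <namespace> <release> <chart> [extra flags]`, is:

```shell
# Hypothetical sketch of the `iop` helper (not from this repo): install
# or upgrade a chart release into a namespace, creating it if needed.
# Usage: iop <namespace> <release-name> <chart-path> [extra helm flags...]
iop() {
  local ns="$1" release="$2" chart="$3"
  shift 3
  kubectl get namespace "$ns" >/dev/null 2>&1 || kubectl create namespace "$ns"
  helm upgrade --install "$release" "$chart" -n "$ns" "$@"
}
```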
### Install Control plane

This can run in any cluster. A mesh should have at least one cluster running Pilot or an equivalent XDS server, and it is recommended to have Pilot running in each region and in multiple availability zones for multicluster.
```bash
iop istio-control istio-discovery $IBASE/istio-control/istio-discovery \
    --set global.istioNamespace=istio-system

# Second istio-discovery, using the master version of Istio
TAG=latest HUB=gcr.io/istio-testing iop istio-master istio-discovery-master $IBASE/istio-control/istio-discovery \
    --set policy.enable=false \
    --set global.istioNamespace=istio-master
```
### Gateways

A cluster may use multiple gateways, each with a different load balancer IP, domains, and certificates.

Since the domain certificates are stored in the gateway namespace, it is recommended to keep each gateway in a dedicated namespace and restrict access to it.

For large-scale gateways, it is optionally possible to use a dedicated Pilot in the gateway namespace.

### Additional test templates

A number of Helm test setups are general-purpose and should be installable in any cluster, to confirm Istio works properly and to allow testing of the specific installation.
<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
# Table of Contents

- [Updating charts and values.yaml](#updating-charts-and-valuesyaml)
- [Acceptable Pull Requests](#acceptable-pull-requests)
- [Making changes](#making-changes)
- [Value deprecation](#value-deprecation)

<!-- markdown-toc end -->

# Updating charts and values.yaml

## Acceptable Pull Requests
Helm charts' `values.yaml` represent a complex user-facing API that tends to grow uncontrollably over time due to design choices in Helm. The underlying Kubernetes resources we configure have thousands of fields; given enough users and bespoke use cases, eventually someone will want to customize every one of those fields. If all fields are exposed in `values.yaml`, we end up with a massive API that is also likely worse than just using the Kubernetes API directly.

To avoid this, the project attempts to minimize additions to the `values.yaml` API where possible.
- Helm is for configuration that is expected to be set at install time only. If the change is a dynamic runtime configuration, it probably belongs in the [MeshConfig API](https://github.com/istio/api/blob/master/mesh/v1alpha1/config.proto). This allows configuration without re-installing or restarting deployments.

- Adding new `global` values is discouraged as a general rule. The only exceptions are values that are frequently and consistently consumed across at least 2 charts (things like image tags, common labels, etc.), but these would only be accepted on a strict case-by-case basis.

- If the change is to a Kubernetes field (such as modifying a Deployment attribute), it will likely need to be install-time configuration. However, that doesn't necessarily mean a PR to add a value will be accepted. The `values.yaml` API is intended to maintain a *minimal core set of configuration* that most users will use. For bespoke use cases, [Helm Chart Customization](https://istio.io/latest/docs/setup/additional-setup/customize-installation-helm/#advanced-helm-chart-customization) can be used to allow arbitrary customizations.

- Avoid exposing a single subkey of a multi-value field if it would be more flexible to expose the entire field as arbitrary YAML. If the change truly is general-purpose, broader APIs are generally preferred. For example, instead of providing direct access to each of the complex fields in [affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/), just provide a single `affinity` field that is passed through as-is to the Kubernetes resource. This provides maximum flexibility with minimal API surface overhead.

- All value additions or removals are user-facing and must come with a release note.

- If you find yourself writing the same templating logic across several charts or needing to craft complex conditionals, consider using a shared Helm template for consistency rather than inlining.
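The `affinity` pass-through described above might look roughly like this inside a chart template (a sketch; the surrounding Deployment context and indentation are illustrative):

```yaml
# Illustrative Deployment template fragment: pass the whole
# .Values.affinity value through untouched instead of exposing subkeys.
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

With an empty default (`affinity: {}` in values.yaml), users can then set arbitrary affinity rules without the chart enumerating every field.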
## Making changes

### Step 1. Make changes in charts and values.yaml in the `manifests` directory

Be sure to provide sufficient documentation and example usage in values.yaml.

- If the chart has a `values.schema.json`, that should be updated as well.
### Step 2. Update the istioctl/Operator values

If you are modifying the `gateway` chart, you can stop here. All other charts, however, are exposed by `istioctl` and need to follow the steps below.

- The charts in the `manifests` directory are used by istioctl to generate an installation manifest.

- If `values.yaml` is changed, be sure to make the corresponding values changes in [../profiles/default.yaml](../profiles/default.yaml).
### Step 3. Update the istioctl schema

Istioctl uses a [schema](../../operator/pkg/apis/values_types.proto) to validate the values. Any changes to the values must also be added to this schema; otherwise istioctl users will see errors. Once the schema file is updated, run:

```bash
make operator-proto
```

This will regenerate the Go structs used for schema validation.
### Step 4. Update the generated manifests

Tests of istioctl use the auto-generated manifests to ensure that the istioctl binary has the correct version of the charts. To regenerate the manifests, run:

```bash
make copy-templates update-golden
```
### Step 5. Create a PR using outputs from Steps 1 to 4

Your PR should pass all the checks if you followed these steps.
## Value deprecation

- Values may be marked as deprecated, but may not be removed until a minimum of 2 releases after the PR marking them as such is merged.

- If you are _marking_ a value as `deprecated`, the PR doing so **must** add a [release note](../../releasenotes/README.md) mentioning the value being deprecated and any replacements/alternatives.

- When _removing_ a value that has been marked as `deprecated` for a minimum of 2 releases, **both** the `releaseNote` and `upgradeNote` fields must be populated in the release note in the removal PR, mentioning the value being removed and any replacements/alternatives.
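A removal PR's release note might then look roughly like this (a sketch only: the exact schema lives in [../../releasenotes/README.md](../../releasenotes/README.md), and everything here other than the `releaseNote`/`upgradeNote` field names is illustrative):

```yaml
# Illustrative release-note entry for removing a deprecated value.
# Only the releaseNote/upgradeNote field names come from the text above;
# the value names and any other fields the schema requires are hypothetical.
releaseNote: |
  Removed the deprecated `global.exampleValue` value. Use `example.value` instead.
upgradeNote: |
  If you set `global.exampleValue`, switch to `example.value` before upgrading.
```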
apiVersion: v2
appVersion: 1.26.4-tetrate2
description: Helm chart for deploying Istio cluster resources and CRDs
icon: https://istio.io/latest/favicons/android-192x192.png
keywords:
- istio
name: base
sources:
- https://github.com/istio/istio
version: 1.26.4+tetrate2
# Istio base Helm Chart

This chart installs resources shared by all Istio revisions. This includes Istio CRDs.

## Setup Repo Info

```console
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._

## Installing the Chart

To install the chart with the release name `istio-base`:

```console
kubectl create namespace istio-system
helm install istio-base istio/base -n istio-system
```
### Profiles

Istio Helm charts have a concept of a `profile`, which is a bundled collection of value presets. These can be set with `--set profile=<profile>`. For example, the `demo` profile offers a preset configuration to try out Istio in a test environment, with additional features enabled and lowered resource requirements.

For consistency, the same profiles are used across each chart, even if they do not impact a given chart.

Explicitly set values have the highest priority, then profile settings, then chart defaults.

As an implementation detail of profiles, the default values for the chart are all nested under `defaults`. When configuring the chart, you should not include this prefix. That is, pass `--set some.field=true`, not `--set defaults.some.field=true`.
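A sketch of how that nesting might look inside the chart's own values.yaml (the `some.field` key is a made-up example; only the `defaults` nesting comes from the text above):

```yaml
# Illustrative chart values.yaml: chart defaults nested under `defaults`,
# as described above. `some.field` is a hypothetical key.
defaults:
  some:
    field: false
# Users override it without the prefix: --set some.field=true
```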
