
Spelling and Formatting
jeffgbutler committed Sep 13, 2022
1 parent 24b3b55 commit 10dd825
Showing 12 changed files with 231 additions and 125 deletions.
68 changes: 34 additions & 34 deletions 07-CustomSupplyChain/01-ClusterSourceTemplate.md
# Create a Cluster Source Template

The out-of-the-box supply chain includes a fully functioning `ClusterSourceTemplate` that we can reuse. However, it is fairly complex
because it includes functionality for secrets and labels. We won't need this for a simple example, so let's recreate it as an exercise.

What we need is a template that will stamp out a `GitRepository` for the first step in our supply chain.
Templates can receive inputs from several places:

1. They can access the standard values of the workload they are associated with
2. They can access the values of parameters specified in the workload (and use default values if not supplied)
3. They can access the output values of other templates they rely on

For the `ClusterSourceTemplate` we will only interact with the first two.
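As a sketch of how those first two inputs surface in a template body (assuming Cartographer's `$(...)$` marker syntax for simple templates; the exact paths depend on your workload):

```yaml
# Hypothetical fragment of a template body showing input access:
spec:
  # 1. A standard workload value:
  url: $(workload.spec.source.git.url)$
  # 2. A parameter, with a default supplied in spec.params:
  gitImplementation: $(params.git_implementation)$
```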

Here is the spec for a simple `GitRepository` resource:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: # MUST BE UNIQUE
  labels:
    app.kubernetes.io/component: source # HARDCODED
spec:
  interval: 1m0s
  url: # MUST BE UNIQUE
  ref: # MUST BE UNIQUE
  gitImplementation: # SET A DEFAULT THAT CAN BE OVERRIDDEN
  ignore: '!.git' # HARDCODED
```
(for full details about this CRD see https://fluxcd.io/docs/components/source/gitrepositories/)
Some items in this spec are optional, but we will use them to demonstrate capabilities in Cartographer.
This is a template - which means we will need to supply some values to make each stamped out resource unique.
For a particular `GitRepository`, the name and location of the Git repository should be unique for each resource stamped
out by Cartographer.
A `ClusterSourceTemplate` needs two main things:

1. It needs to know how to stamp out a resource that will supply source code in the cluster (i.e. it needs a resource template)
2. It needs to know where to find the resulting source code

In Cartographer, templates can be coded in two ways: as a simple Kubernetes template - very like the template definitions
you see in other Kubernetes objects like deployments, or as a YTT based template. YTT offers additional flexibility to
handle conditionals, loops, and other programmatic constructs.
A few important things to notice here:

1. The template name is `cartographer-workshop-git-repository-template` - we will need this when building a supply chain
2. The `spec.params` section defines a default value for the parameter `git_implementation`
3. The `spec.template` section contains the `GitRepository` template we showed above and contains parameter
markers for the various values that can change with every workload.
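Putting those pieces together, the simple template looks roughly like this (a sketch: the `urlPath`/`revisionPath` values are assumptions based on the Flux `GitRepository` status fields, and the stamped resource's name marker is illustrative):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSourceTemplate
metadata:
  name: cartographer-workshop-git-repository-template
spec:
  params:
  - name: git_implementation
    default: go-git
  # Where downstream templates find the source output (assumed paths):
  urlPath: .status.artifact.url
  revisionPath: .status.artifact.revision
  template:
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: $(workload.metadata.name)$
      labels:
        app.kubernetes.io/component: source
    spec:
      interval: 1m0s
      url: $(workload.spec.source.git.url)$
      ref: $(workload.spec.source.git.ref)$
      gitImplementation: $(params.git_implementation)$
      ignore: '!.git'
```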

Notice in particular the parameter markers in the template. When using a simple template like this, parameter
markers are the only way to inject values that vary with each workload.

The same template can also be written with YTT. Several important things to notice:
1. The basic structure of the template and parameters are the same. But instead of `spec.template` we now
use `spec.ytt`. In `spec.ytt` we can specify any YTT script we want and use any of the YTT functionality.
Since this is a simple template, there are no conditionals or loops here.
2. The format of the variables has changed - now we are using the YTT variable format `#@ data.values ...`
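For comparison, the same `url` and `gitImplementation` lines in the YTT flavor would look something like this (a sketch of the marker style only, not the full template):

```yaml
#@ load("@ytt:data", "data")
url: #@ data.values.workload.spec.source.git.url
gitImplementation: #@ data.values.params.git_implementation
```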

## Building a Supply Chain

This is a very simple supply chain. Some important things to notice:

1. The workload type is `source-to-ingress`. We will use this in the workload definition to specify the supply chain
we want to run
2. The `spec.resources` section includes a reference to the simple template-based `ClusterSourceTemplate` we created above
3. We haven't discussed security yet, so this supply chain may not function as is in your cluster
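A minimal version of such a supply chain might look like this (a sketch; the selector uses the standard workload-type label, and the metadata name is illustrative):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSupplyChain
metadata:
  name: source-to-ingress
spec:
  selector:
    apps.tanzu.vmware.com/workload-type: source-to-ingress
  resources:
  - name: source-provider
    templateRef:
      kind: ClusterSourceTemplate
      name: cartographer-workshop-git-repository-template
```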

## Note about YTT and Kapp

We are going to use ytt and kapp to create the supply chain. There are a couple of reasons for this:

1. The supply chain will be composed of many resources and kapp is a natural tool to use for managing
multiple resources
2. We will add to the supply chain in a few iterations. Again, kapp is a natural tool for incrementally implementing
an application.
3. We will provide some sensible defaults for many values in the supply chain, but also provide a method for overriding the
defaults. YTT is a natural fit for this.

In the [solution](./solution/) directory there is a `values.yaml` file with sensible defaults. We will discuss each value
as we get to it. Also in that directory are subdirectories with the different stages of the supply chain. We have taken the
typical approach with kapp where each resource is in its own yaml file, and kapp will deploy things in the correct order.

We will also run each file through ytt before sending the yaml to kapp.

Supply chains all run with a service account. By default, the "default" service account in the namespace where the
workload resides is used. A developer can also specify a service account when creating a workload.

This can be difficult to manage because the service account will need permission to create every kind of resource
stamped out by the supply chain. When we installed TAP/TCE some of this was set up for us and we will reuse it as much
as possible.

In [solution/values.yaml](./solution/values.yaml) we provide a default name for a namespace. We will use "default" on TCE

This is a ytt template that will bind the ClusterRole to the default service account in the namespace we will use.
Note that the out-of-the-box supply chains have already given this permission to the default service account - we're
including it here for clarity and also to set up for future permissions we will add in the following sections.

Let's create the supply chain. We're going to use kapp to create and update the supply chain because it is a simple way to
deploy many things as a single "application". In this case, the "application" is the supply chain.

Notice that the `tanzu` command maps the Git parameter values as follows:

| Tanzu CLI Parameter | Resulting Spec Value |
|---------------------|------------------------------|
| `--git-repo`        | `spec.source.git.url`        |
| `--git-branch`      | `spec.source.git.ref.branch` |

The mapped values exactly match the values we need in the `ClusterSourceTemplate`.
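For reference, a workload created with `--git-repo` and `--git-branch` would produce a spec shaped roughly like this (the repository URL is a placeholder; the workload-type label matches our supply chain's selector):

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: java-payment-calculator
  labels:
    apps.tanzu.vmware.com/workload-type: source-to-ingress
spec:
  source:
    git:
      url: https://github.com/example/java-payment-calculator  # placeholder
      ref:
        branch: main
```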

This supply chain should resolve fairly quickly since it is only downloading source code from GitHub. You can check the status with this command:

```shell
tanzu apps workload get java-payment-calculator
```
33 changes: 25 additions & 8 deletions 07-CustomSupplyChain/02-ReusingTheClusterImageTemplate.md
For this exercise, we will update the supply chain so that it has two steps:

1. It will retrieve source code from Git using the template we created in the last step
2. It will build and publish an image based on that source code using the existing `ClusterImageTemplate`
in the out-of-the-box supply chain

In this exercise, we will learn how Cartographer choreographs the interactions between stamped out resources.
Notice that there are now two items under `spec.resources`: our `ClusterSourceTemplate` as before, and a reference
to a `ClusterImageTemplate` whose name we retrieve from ytt configuration. We've coded it this way because the template
name is different on TAP and TCE. Where did that `ClusterImageTemplate` come from? The answer is that it is
provided by the out-of-the-box supply chain. You can see it with one of the following commands (these show the different template
names on TAP and TCE):

<details><summary>ClusterImageTemplate on TAP</summary>
<p>

```shell
kubectl describe ClusterImageTemplate kpack-template
```

</p>
</details>

<details><summary>ClusterImageTemplate on TCE</summary>
<p>

```shell
kubectl describe ClusterImageTemplate image
```

</p>
</details>

If you look closely, you will see that this template is configured with YTT and is significantly more complex than the
simple `ClusterSourceTemplate` we created in the last exercise. But the truth is, we don't really care about that.
We know it works, so we can simply reuse it.

You will also notice that this bit of YAML is a YTT template - you will need to run this through YTT before
sending it to the cluster. The reason for this is the parameter named `registry`. The `ClusterImageTemplate` supplied
with the out-of-the-box supply chain requires this parameter - it needs to know where to publish the image! Using this
YTT template, we can use the registry information from an external configuration. You might ask
how I learned about this parameter. The answer is simple - trial and error. I could also have inspected the
configuration of the `ClusterImageTemplate` and found it there.

## Template Dependencies and Choreography

Take a closer look at the definition of the `ClusterImageTemplate`:
- name: image-builder
  templateRef:
    kind: ClusterImageTemplate
    name: #@ data.values.image_template
  sources:
  - resource: source-provider
    name: source
This illustrates a major distinction in how Cartographer works compared with traditional CI/CD systems:
> Cartographer works by creating Kubernetes resources and then forwarding the output of one resource to another.
> Cartographer depends on Kubernetes resources to react to changes in configuration and reconcile appropriately.
> This makes Cartographer compatible with virtually any Kubernetes resource. Any provider that embraces the CRD model in
> Kubernetes is compatible with Cartographer out of the box.
>
> Cartographer does not implement any kind of reconciliation system. Cartographer also does not implement the
> Kubernetes resources that do the actual work. This makes it different from something like
8 changes: 4 additions & 4 deletions 07-CustomSupplyChain/README.md
In this exercise, we will create a custom supply chain that:

1. Retrieves source code from Git
2. Builds and publishes an image with Kpack
3. Uses a Kubernetes deployment, service, and ingress to deploy the application

This is similar to the out-of-the-box supply chain except that it does not use Knative to deploy the application.
Our supply chain will be simpler than the out-of-the-box supply chain - at the loss of some flexibility.
We will also reuse one part of the existing supply chain to show how Cartographer can compose supply
chains from reusable parts.

2 changes: 1 addition & 1 deletion 90-Carvel/README.md
Understanding the Carvel tools is fundamental to success with Tanzu. Most of the Carvel tools are command line tools that do
relatively simple things. Used together, they form a very capable toolchain for working with Kubernetes.

Some Carvel tools are installed into Kubernetes clusters as controllers. In fact, the definition of a "Tanzu" cluster
is just a plain old Kubernetes cluster with two specific Carvel tools installed: the secretgen-controller and the kapp-controller.

For some historical context, the Carvel tools were previously called simply "Kubernetes Tools", which was shortened to
"k14s" in the same style that "Kubernetes" is shortened to "k8s".
20 changes: 10 additions & 10 deletions 90-Carvel/kapp-controller/README.md
The kapp-controller is a Kubernetes controller that has two main functions:

1. It can run kapp in a cluster - this is the usage we will discuss in this workshop
2. It can be used to package software for easy installation in a cluster, and it understands how to install and modify
software packages. All the software available for TCE from VMware is packaged for easy use with the kapp-controller.
We have already seen the package repositories and installed packages available in the TCE cluster.

Full details about the Kapp controller are here: https://carvel.dev/kapp-controller/
The kapp-controller runs kapp in a cluster. But what does that mean exactly?
When we ran kapp on a workstation, we saw that:

1. We supply input files to kapp that contain Kubernetes YAML
2. We might want to run those input files through ytt or kbld (or both!) before we send them to kapp
3. Ultimately we want kapp to create resources in Kubernetes based on these, possibly transformed, input files

The kapp-controller does exactly this. When we install the kapp-controller in a cluster, we enable a new CRD
of kind `App` with API version `kappctrl.k14s.io/v1alpha1`. This allows us to define input sources for kapp and
instructions for how those inputs should be processed and deployed.

Configuration of the App CRD contains three major sections:
- From a Helm chart
- From an arbitrary URL
- others
2. `spec.template` where we define the transforms we want to apply to the input YAML. Valid templating engines include:
- ytt
- kbld
- helmTemplate
- others
3. `spec.deploy` where we specify options for kapp on the deployment

This may seem a bit abstract, so we will walk through a simple example. But first we need to look at security.
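As a preview, the three sections fit together in an App spec shaped like this (a minimal sketch; the inline deployment YAML is elided and the names are placeholders):

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: simple-app
  namespace: default
spec:
  serviceAccountName: default  # service account the kapp-controller will use
  fetch:
  - inline:
      paths:
        deployment.yaml: ""    # placeholder for inline Kubernetes YAML
  template:
  - kbld: {}                   # run the inputs through kbld
  deploy:
  - kapp: {}                   # deploy the result with kapp
```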

Important things to notice in this spec:
1. `spec.fetch` has a single hardcoded deployment spec. It's in a "file" named "deployment.yaml" but that is really
just to keep it separate from other hardcoded entries. There can be as many inline entries as you wish.
2. The kapp-controller will run this input through kbld before applying it. This is specified by the following YAML:

```yaml
template:
- kbld: {}
```
We can specify options for kbld if desired that roughly correspond to options on the kbld CLI. We don't need to specify anything in this case, so we just
supply an empty map (`{}`)

3. `spec.deploy` has `kapp` as its only value. Again we can specify kapp command line parameters if desired, but we don't need
any here, so we supply an empty map

This spec creates an App resource that runs kapp and deploys the application. The App resource will check for updates every 30 seconds
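Assuming the App CRD's `syncPeriod` field, the reconciliation interval can be tuned per App (a sketch):

```yaml
spec:
  syncPeriod: 1m0s  # override the default reconciliation interval
```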
The next example is a more complex application deployed with the kapp-controller. It has the following characteristics:
1. There is inline YAML for a namespace, deployment, and service. These are the same files we used from the
kapp exercise - they are ytt templates that accept configuration values. There is also inline YAML for the ytt
schema with default values. This is basically copy/paste of existing YAML templates into an App spec.
2. There is a reference to a Git repository to obtain configuration values. In this case, the reference directory has a single
file named "values.yaml" that looks something like this:

```yaml
namespace: kuard-app-ns
replicas: 3
```
3. The kapp-controller is configured to run both ytt and kbld on the input files before deploying the application with kapp

If you want to experiment with this, then we suggest you change the Git reference to a repo where you can commit. Then
deploy the application and watch it create all the resources you expect. If you make a change to the configuration in Git,
the kapp-controller will notice the change and update the application accordingly.