diff --git a/docs/pipelineruns.md b/docs/pipelineruns.md
index a5fd92693f3..c40de5b801a 100644
--- a/docs/pipelineruns.md
+++ b/docs/pipelineruns.md
@@ -24,6 +24,8 @@ weight: 204
- [Mapping ServiceAccount credentials to Tasks](#mapping-serviceaccount-credentials-to-tasks)
- [Specifying a Pod template](#specifying-a-pod-template)
- [Specifying taskRunSpecs](#specifying-taskrunspecs)
+ - [Parameter Substitution in taskRunSpecs](#parameter-substitution-in-taskrunspecs)
+ - [Matrix Support with taskRunSpecs](#matrix-support-with-taskrunspecs)
- [Specifying Workspaces](#specifying-workspaces)
- [Propagated Workspaces](#propagated-workspaces)
- [Referenced TaskRuns within Embedded PipelineRuns](#referenced-taskruns-within-embedded-pipelineruns)
@@ -1008,6 +1010,60 @@ spec:
If a metadata key is present in different levels, the value that will be used in the `PipelineRun` is determined using this precedence order: `PipelineRun.spec.taskRunSpec.metadata` > `PipelineRun.metadata` > `Pipeline.spec.tasks.taskSpec.metadata`.
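+
+For example, with the same annotation key set at two of these levels, the `taskRunSpecs` value wins (a sketch; the annotation key is illustrative):
+
+```yaml
+metadata:
+  annotations:
+    scheduler.example.com/priority: "default"
+spec:
+  taskRunSpecs:
+    - pipelineTaskName: build-task
+      metadata:
+        annotations:
+          scheduler.example.com/priority: "high"  # this value is used in the TaskRun
+```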
+#### Parameter Substitution in taskRunSpecs
+
+The `taskRunSpecs` field supports parameter substitution in its `podTemplate` fields. This allows you to dynamically configure pod templates based on pipeline parameters, including those from [`Matrix`](matrix.md) tasks.
+
+For example, you can use parameter substitution to configure node selectors based on architecture parameters:
+
+```yaml
+spec:
+ taskRunSpecs:
+ - pipelineTaskName: build-task
+ podTemplate:
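+ # $(params.arch) and $(params.env) are resolved from the Pipeline's parameters at runtime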
+ nodeSelector:
+ kubernetes.io/arch: $(params.arch)
+ tolerations:
+ - key: "environment"
+ operator: "Equal"
+ value: "$(params.env)"
+ effect: "NoSchedule"
+```
+
+#### Matrix Support with taskRunSpecs
+
+When using [`Matrix`](matrix.md) to fan out `PipelineTasks`, `taskRunSpecs` can reference matrix parameters to configure the pod template dynamically. Each matrix combination creates a separate `TaskRun` with the corresponding parameter value substituted into its pod template.
+
+Here's an example showing how to use `taskRunSpecs` with matrix parameters:
+
+```yaml
+spec:
+ taskRunSpecs:
+ - pipelineTaskName: build-and-push-manifest
+ podTemplate:
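+ # $(params.arch) is the matrix parameter declared below; each combination gets its own TaskRun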
+ nodeSelector:
+ kubernetes.io/arch: $(params.arch)
+ pipelineSpec:
+ tasks:
+ - name: build-and-push-manifest
+ matrix:
+ params:
+ - name: arch
+ value: ["amd64", "arm64"]
+ taskSpec:
+ params:
+ - name: arch
+ steps:
+ - name: build-and-push
+ image: ubuntu
+ script: |
+ echo "building on $(params.arch)"
+```
+
+In this example, the matrix creates two `TaskRuns`: one for `amd64` and one for `arm64`. Each pod is scheduled onto a node of the matching architecture via the `nodeSelector` with the substituted parameter value.
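+
+For illustration, the two fanned-out `TaskRuns` would receive pod templates roughly like the following (a sketch; only the substituted field is shown):
+
+```yaml
+# TaskRun created for arch=amd64
+podTemplate:
+  nodeSelector:
+    kubernetes.io/arch: amd64
+---
+# TaskRun created for arch=arm64
+podTemplate:
+  nodeSelector:
+    kubernetes.io/arch: arm64
+```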
+
+For a complete example, see [`pipelinerun-with-matrix-and-taskrunspecs-param-substitution.yaml`](../examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-taskrunspecs-param-substitution.yaml).
+
### Specifying `Workspaces`
If your `Pipeline` specifies one or more `Workspaces`, you must map those `Workspaces` to
diff --git a/docs/podtemplates.md b/docs/podtemplates.md
index 53bb70ca6e3..15ca621bd14 100644
--- a/docs/podtemplates.md
+++ b/docs/podtemplates.md
@@ -23,6 +23,29 @@ See the following for examples of specifying a Pod template:
- [Specifying a Pod template for a `TaskRun`](./taskruns.md#specifying-a-pod-template)
- [Specifying a Pod template for a `PipelineRun`](./pipelineruns.md#specifying-a-pod-template)
+## Parameter Substitution in Pod Templates
+
+When using Pod templates within `PipelineRun` [`taskRunSpecs`](./pipelineruns.md#specifying-taskrunspecs), you can use parameter substitution to dynamically configure Pod template fields based on pipeline parameters. This is particularly useful when working with [`Matrix`](./matrix.md) tasks that fan out with different parameter values.
+
+Parameter substitution uses the standard Tekton syntax `$(params.paramName)` and is supported in Pod template fields that accept string values; see [variables.md](./variables.md) for the list of substitutable fields.
+
+Example with parameter substitution:
+```yaml
+taskRunSpecs:
+ - pipelineTaskName: build-task
+ podTemplate:
+ nodeSelector:
+ kubernetes.io/arch: $(params.arch)
+ environment: $(params.env)
+ tolerations:
+ - key: "workload-type"
+ operator: "Equal"
+ value: "$(params.workload)"
+ effect: "NoSchedule"
+```
+
+When combined with Matrix tasks, each matrix combination creates a separate `TaskRun` with the parameter values substituted into its Pod template. For more information and examples, see [Matrix Support with taskRunSpecs](./pipelineruns.md#matrix-support-with-taskrunspecs).
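+
+For instance, if a matrix fans out a `workload` parameter over `["cpu", "gpu"]`, the toleration above resolves per `TaskRun` roughly as follows (a sketch; only the substituted value differs between the two runs):
+
+```yaml
+tolerations:
+  - key: "workload-type"
+    operator: "Equal"
+    value: "gpu"   # "cpu" in the sibling TaskRun
+    effect: "NoSchedule"
+```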
+
## Supported fields
Pod templates support fields listed in the table below.
diff --git a/docs/variables.md b/docs/variables.md
index 02450fd0989..23ba55812e8 100644
--- a/docs/variables.md
+++ b/docs/variables.md
@@ -176,3 +176,21 @@ For instructions on using variable substitutions see the relevant section of [th
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.items[].path` |
| `PipelineRun` | `spec.workspaces[].csi.driver` |
| `PipelineRun` | `spec.workspaces[].csi.nodePublishSecretRef.name` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.nodeSelector.*` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.tolerations[].key` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.tolerations[].operator` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.tolerations[].value` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.tolerations[].effect` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.affinity.*` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.securityContext.*` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.volumes[].name` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.volumes[].configMap.name` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.volumes[].secret.secretName` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.runtimeClassName` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.schedulerName` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.priorityClassName` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.imagePullSecrets[].name` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.hostAliases[].ip` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.hostAliases[].hostnames[*]` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.topologySpreadConstraints[].topologyKey` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.topologySpreadConstraints[].whenUnsatisfiable` |
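+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.env[].value` |
+| `PipelineRun` | `spec.taskRunSpecs[].podTemplate.dnsConfig.*` |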
diff --git a/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-taskrunspecs-param-substitution.yaml b/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-taskrunspecs-param-substitution.yaml
new file mode 100644
index 00000000000..d51d40fbbd8
--- /dev/null
+++ b/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-taskrunspecs-param-substitution.yaml
@@ -0,0 +1,43 @@
+apiVersion: tekton.dev/v1
+kind: PipelineRun
+metadata:
+ generateName: matrix-podtemplate-taskrunspecs-test-
+spec:
+ taskRunSpecs:
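+ # node-type is a matrix parameter of build-and-push-manifest; each fanned-out
+ # TaskRun gets the substituted value, pinning it to the matching kind node.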
+ - pipelineTaskName: build-and-push-manifest
+ podTemplate:
+ nodeSelector:
+ node-type: $(params.node-type)
+ - pipelineTaskName: create-manifest-list
+ podTemplate:
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ pipelineSpec:
+ tasks:
+ - name: build-and-push-manifest
+ matrix:
+ params:
+ - name: node-type
+ value: ["worker-1", "worker-2"]
+ taskSpec:
+ results:
+ - name: manifest
+ type: string
+ params:
+ - name: node-type
+ steps:
+ - name: build-and-push
+ image: ubuntu
+ script: |
+ echo "building on $(params.node-type)"
+ echo "testmanifest-$(params.node-type)" | tee $(results.manifest.path)
+ - name: create-manifest-list
+ params:
+ - name: manifest
+ value: $(tasks.build-and-push-manifest.results.manifest[*])
+ taskSpec:
+ steps:
+ - name: echo-manifests
+ image: ubuntu
+ args: ["$(params.manifest[*])"]
+ script: echo "$@"
diff --git a/hack/setup-kind.sh b/hack/setup-kind.sh
index 6f680361511..ba494c54c85 100755
--- a/hack/setup-kind.sh
+++ b/hack/setup-kind.sh
@@ -125,6 +125,8 @@ kind: Cluster
nodes:
- role: control-plane
image: "${KIND_IMAGE}"
+ labels:
+ node-type: "control-plane"
EOF
for i in $(seq 1 1 "${NODE_COUNT}");
@@ -132,6 +134,8 @@ do
cat >> kind.yaml <<EOF
- role: worker
  image: "${KIND_IMAGE}"
+  labels:
+    node-type: "worker-${i}"
EOF
diff --git a/pkg/reconciler/taskrun/taskrun.go b/pkg/reconciler/taskrun/taskrun.go
--- a/pkg/reconciler/taskrun/taskrun.go
+++ b/pkg/reconciler/taskrun/taskrun.go
@@ ... @@
+	// Substitute parameter values into the pod template before the pod is built.
+	if tr.Spec.PodTemplate != nil {
+		var defaults []v1.ParamSpec
+		if len(ts.Params) > 0 {
+			defaults = append(defaults, ts.Params...)
+		}
+		updatedPodTemplate := resources.ApplyPodTemplateReplacements(tr.Spec.PodTemplate, tr, defaults...)
+		if updatedPodTemplate != nil {
+			trCopy := tr.DeepCopy()
+			trCopy.Spec.PodTemplate = updatedPodTemplate
+			tr = trCopy
+		}
+	}
+
podbuilder := podconvert.Builder{
Images: c.Images,
KubeClient: c.KubeClientSet,
diff --git a/pkg/reconciler/taskrun/taskrun_test.go b/pkg/reconciler/taskrun/taskrun_test.go
index f8875b085c1..8fa2f189173 100644
--- a/pkg/reconciler/taskrun/taskrun_test.go
+++ b/pkg/reconciler/taskrun/taskrun_test.go
@@ -79,6 +79,7 @@ import (
ktesting "k8s.io/client-go/testing"
"k8s.io/client-go/tools/record"
clock "k8s.io/utils/clock/testing"
+ "k8s.io/utils/ptr"
"knative.dev/pkg/apis"
duckv1 "knative.dev/pkg/apis/duck/v1"
cminformer "knative.dev/pkg/configmap/informer"
@@ -777,8 +778,7 @@ spec:
taskruns := []*v1.TaskRun{
taskRunSuccess, taskRunWithSaSuccess, taskRunSubstitution,
- taskRunWithTaskSpec,
- taskRunWithLabels, taskRunWithAnnotations, taskRunWithPod,
+ taskRunWithTaskSpec, taskRunWithLabels, taskRunWithAnnotations, taskRunWithPod,
taskRunWithCredentialsVariable, taskRunBundle,
}
@@ -7172,3 +7172,421 @@ status:
})
}
}
+
+// TestReconcile_PodTemplateParameterSubstitution tests that PodTemplate parameters
+// are properly substituted when a TaskRun is reconciled
+func TestReconcile_PodTemplateParameterSubstitution(t *testing.T) {
+ task := parse.MustParseV1Task(t, `
+metadata:
+ name: test-task
+ namespace: foo
+spec:
+ params:
+ - name: arch
+ type: string
+ default: amd64
+ - name: region
+ type: string
+ default: us-west-1
+ - name: selinuxuser
+ type: string
+ default: myuser
+ - name: selinuxrole
+ type: string
+ default: myrole
+ - name: gmsacredential
+ type: string
+ default: mycredential
+ - name: apparmor
+ type: string
+ default: localhost/myprofile
+ - name: hostname
+ type: string
+ default: example.com
+ - name: volumename
+ type: string
+ default: my-volume
+ - name: disktype
+ type: string
+ default: hdd
+ steps:
+ - name: echo
+ image: busybox
+ script: echo hello
+`)
+
+ tr := parse.MustParseV1TaskRun(t, `
+metadata:
+ name: test-taskrun
+ namespace: foo
+spec:
+ params:
+ - name: arch
+ value: arm64
+ - name: region
+ value: us-east-1
+ - name: selinuxuser
+ value: customuser
+ - name: selinuxrole
+ value: customrole
+ - name: gmsacredential
+ value: customcredential
+ - name: apparmor
+ value: localhost/customprofile
+ - name: hostname
+ value: custom.example.com
+ - name: volumename
+ value: custom-volume
+ - name: disktype
+ value: nvme
+ taskRef:
+ name: test-task
+ podTemplate:
+ nodeSelector:
+ kubernetes.io/arch: $(params.arch)
+ region: $(params.region)
+ tolerations:
+ - key: arch
+ operator: Equal
+ value: $(params.arch)
+ effect: NoSchedule
+ runtimeClassName: "gvisor-$(params.arch)"
+ schedulerName: "custom-scheduler-$(params.region)"
+ priorityClassName: "priority-$(params.arch)"
+ imagePullSecrets:
+ - name: "secret-$(params.region)"
+ env:
+ - name: ARCH
+ value: $(params.arch)
+ - name: REGION
+ value: $(params.region)
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: disktype
+ operator: In
+ values:
+ - $(params.disktype)
+ dnsPolicy: ClusterFirst
+ securityContext:
+ seLinuxOptions:
+ user: $(params.selinuxuser)
+ role: $(params.selinuxrole)
+ type: container_t
+ level: s0:c123,c456
+ windowsOptions:
+ gmsaCredentialSpecName: $(params.gmsacredential)
+ runAsUserName: $(params.arch)-user
+ appArmorProfile:
+ type: Localhost
+ localhostProfile: $(params.apparmor)
+ sysctls:
+ - name: kernel.$(params.arch)
+ value: $(params.region)
+ hostAliases:
+ - ip: "192.168.1.1"
+ hostnames:
+ - "$(params.hostname)"
+ - "alias.$(params.hostname)"
+ topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: zone-$(params.region)
+ whenUnsatisfiable: DoNotSchedule
+ labelSelector:
+ matchLabels:
+ app: myapp-$(params.arch)
+ dnsConfig:
+ nameservers:
+ - "8.8.8.8"
+ - "$(params.arch).dns.example.com"
+ searches:
+ - "$(params.region).local"
+ options:
+ - name: ndots
+ value: "2"
+ volumes:
+ - name: $(params.volumename)
+ configMap:
+ name: config-$(params.region)
+ - name: secret-volume
+ secret:
+ secretName: secret-$(params.arch)
+ items:
+ - key: $(params.region)
+ path: secret/$(params.arch)
+ - name: projected-volume
+ projected:
+ sources:
+ - configMap:
+ name: projected-config-$(params.region)
+ - secret:
+ name: projected-secret-$(params.arch)
+ - serviceAccountToken:
+ audience: audience-$(params.region)
+ - name: csi-volume
+ csi:
+ driver: csi.example.com
+ nodePublishSecretRef:
+ name: csi-secret-$(params.arch)
+ volumeAttributes:
+ foo: $(params.region)
+`)
+
+ d := test.Data{
+ TaskRuns: []*v1.TaskRun{tr},
+ Tasks: []*v1.Task{task},
+ }
+ testAssets, cancel := getTaskRunController(t, d)
+ defer cancel()
+ createServiceAccount(t, testAssets, "default", tr.Namespace)
+
+ err := testAssets.Controller.Reconciler.Reconcile(testAssets.Ctx, getRunName(tr))
+ if err == nil {
+ t.Errorf("Expected reconcile to return a requeue indicating the pod was created, but got nil")
+ } else if ok, _ := controller.IsRequeueKey(err); !ok {
+ t.Errorf("Expected a requeue error, got: %v", err)
+ }
+
+ // Get the created pod and verify parameter substitution
+ pods, err := testAssets.Clients.Kube.CoreV1().Pods(tr.Namespace).List(testAssets.Ctx, metav1.ListOptions{})
+ if err != nil {
+ t.Fatalf("Error listing pods: %v", err)
+ }
+
+ if len(pods.Items) != 1 {
+ t.Fatalf("Expected 1 pod to be created, got %d", len(pods.Items))
+ }
+
+ pod := pods.Items[0]
+
+ // Verify nodeSelector substitution
+ expectedNodeSelector := map[string]string{
+ "kubernetes.io/arch": "arm64",
+ "region": "us-east-1",
+ }
+ if d := cmp.Diff(expectedNodeSelector, pod.Spec.NodeSelector); d != "" {
+ t.Errorf("NodeSelector mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify tolerations substitution
+ expectedTolerations := []corev1.Toleration{{
+ Key: "arch",
+ Operator: corev1.TolerationOpEqual,
+ Value: "arm64",
+ Effect: corev1.TaintEffectNoSchedule,
+ }}
+ if d := cmp.Diff(expectedTolerations, pod.Spec.Tolerations); d != "" {
+ t.Errorf("Tolerations mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify runtime class substitution
+ expectedRuntimeClassName := "gvisor-arm64"
+ if d := cmp.Diff(&expectedRuntimeClassName, pod.Spec.RuntimeClassName); d != "" {
+ t.Errorf("RuntimeClassName mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify scheduler name substitution
+ expectedSchedulerName := "custom-scheduler-us-east-1"
+ if d := cmp.Diff(expectedSchedulerName, pod.Spec.SchedulerName); d != "" {
+ t.Errorf("SchedulerName mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify priority class name substitution
+ expectedPriorityClassName := "priority-arm64"
+ if d := cmp.Diff(expectedPriorityClassName, pod.Spec.PriorityClassName); d != "" {
+ t.Errorf("PriorityClassName mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify image pull secrets substitution
+ expectedImagePullSecrets := []corev1.LocalObjectReference{{
+ Name: "secret-us-east-1",
+ }}
+ if d := cmp.Diff(expectedImagePullSecrets, pod.Spec.ImagePullSecrets); d != "" {
+ t.Errorf("ImagePullSecrets mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify environment variables substitution in all containers
+ expectedEnvVars := []corev1.EnvVar{
+ {Name: "ARCH", Value: "arm64"},
+ {Name: "REGION", Value: "us-east-1"},
+ }
+
+ for _, container := range pod.Spec.Containers {
+ // Find our added env vars
+ var actualEnvVars []corev1.EnvVar
+ for _, env := range container.Env {
+ if env.Name == "ARCH" || env.Name == "REGION" {
+ actualEnvVars = append(actualEnvVars, env)
+ }
+ }
+
+ if d := cmp.Diff(expectedEnvVars, actualEnvVars); d != "" {
+ t.Errorf("Environment variables mismatch in container %s: %s", container.Name, diff.PrintWantGot(d))
+ }
+ }
+
+ // Verify affinity substitution
+ expectedAffinity := &corev1.Affinity{
+ NodeAffinity: &corev1.NodeAffinity{
+ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
+ NodeSelectorTerms: []corev1.NodeSelectorTerm{{
+ MatchExpressions: []corev1.NodeSelectorRequirement{{
+ Key: "disktype",
+ Operator: corev1.NodeSelectorOpIn,
+ Values: []string{"nvme"},
+ }},
+ }},
+ },
+ },
+ }
+ if d := cmp.Diff(expectedAffinity, pod.Spec.Affinity); d != "" {
+ t.Errorf("Affinity mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify dnsPolicy substitution
+ expectedDNSPolicy := corev1.DNSClusterFirst
+ if d := cmp.Diff(expectedDNSPolicy, pod.Spec.DNSPolicy); d != "" {
+ t.Errorf("DNSPolicy mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify securityContext substitution (string fields only, excluding int/bool fields)
+ expectedSecurityContext := &corev1.PodSecurityContext{
+ SELinuxOptions: &corev1.SELinuxOptions{
+ User: "customuser",
+ Role: "customrole",
+ Type: "container_t",
+ Level: "s0:c123,c456",
+ },
+ WindowsOptions: &corev1.WindowsSecurityContextOptions{
+ GMSACredentialSpecName: ptr.To("customcredential"),
+ RunAsUserName: ptr.To("arm64-user"),
+ },
+ AppArmorProfile: &corev1.AppArmorProfile{
+ Type: corev1.AppArmorProfileTypeLocalhost,
+ LocalhostProfile: ptr.To("localhost/customprofile"),
+ },
+ Sysctls: []corev1.Sysctl{{
+ Name: "kernel.arm64",
+ Value: "us-east-1",
+ }},
+ }
+ if d := cmp.Diff(expectedSecurityContext, pod.Spec.SecurityContext); d != "" {
+ t.Errorf("SecurityContext mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify hostAliases substitution
+ expectedHostAliases := []corev1.HostAlias{{
+ IP: "192.168.1.1",
+ Hostnames: []string{"custom.example.com", "alias.custom.example.com"},
+ }}
+ if d := cmp.Diff(expectedHostAliases, pod.Spec.HostAliases); d != "" {
+ t.Errorf("HostAliases mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify topologySpreadConstraints substitution
+ expectedTopologySpreadConstraints := []corev1.TopologySpreadConstraint{{
+ MaxSkew: 1,
+ TopologyKey: "zone-us-east-1",
+ WhenUnsatisfiable: corev1.DoNotSchedule,
+ LabelSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app": "myapp-arm64",
+ },
+ },
+ }}
+ if d := cmp.Diff(expectedTopologySpreadConstraints, pod.Spec.TopologySpreadConstraints); d != "" {
+ t.Errorf("TopologySpreadConstraints mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify dnsConfig substitution
+ expectedDNSConfig := &corev1.PodDNSConfig{
+ Nameservers: []string{"8.8.8.8", "arm64.dns.example.com"},
+ Searches: []string{"us-east-1.local"},
+ Options: []corev1.PodDNSConfigOption{{
+ Name: "ndots",
+ Value: ptr.To("2"),
+ }},
+ }
+ if d := cmp.Diff(expectedDNSConfig, pod.Spec.DNSConfig); d != "" {
+ t.Errorf("DNSConfig mismatch: %s", diff.PrintWantGot(d))
+ }
+
+ // Verify volumes substitution
+ expectedVolumes := []corev1.Volume{{
+ Name: "custom-volume",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "config-us-east-1",
+ },
+ },
+ },
+ }, {
+ Name: "secret-volume",
+ VolumeSource: corev1.VolumeSource{
+ Secret: &corev1.SecretVolumeSource{
+ SecretName: "secret-arm64",
+ Items: []corev1.KeyToPath{{
+ Key: "us-east-1",
+ Path: "secret/arm64",
+ }},
+ },
+ },
+ }, {
+ Name: "projected-volume",
+ VolumeSource: corev1.VolumeSource{
+ Projected: &corev1.ProjectedVolumeSource{
+ Sources: []corev1.VolumeProjection{{
+ ConfigMap: &corev1.ConfigMapProjection{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "projected-config-us-east-1",
+ },
+ },
+ }, {
+ Secret: &corev1.SecretProjection{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "projected-secret-arm64",
+ },
+ },
+ }, {
+ ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
+ Audience: "audience-us-east-1",
+ },
+ }},
+ },
+ },
+ }, {
+ Name: "csi-volume",
+ VolumeSource: corev1.VolumeSource{
+ CSI: &corev1.CSIVolumeSource{
+ Driver: "csi.example.com",
+ NodePublishSecretRef: &corev1.LocalObjectReference{
+ Name: "csi-secret-arm64",
+ },
+ VolumeAttributes: map[string]string{
+ "foo": "us-east-1",
+ },
+ },
+ },
+ }}
+
+ // Filter out system volumes (like tekton volumes) to focus on our custom volumes
+ var actualCustomVolumes []corev1.Volume
+ customVolumeNames := map[string]bool{
+ "custom-volume": true,
+ "secret-volume": true,
+ "projected-volume": true,
+ "csi-volume": true,
+ }
+ for _, vol := range pod.Spec.Volumes {
+ if customVolumeNames[vol.Name] {
+ actualCustomVolumes = append(actualCustomVolumes, vol)
+ }
+ }
+
+ if d := cmp.Diff(expectedVolumes, actualCustomVolumes); d != "" {
+ t.Errorf("Custom volumes mismatch: %s", diff.PrintWantGot(d))
+ }
+}