
driver cannot parse tag values containing a comma #1809

Open
RomanBednar opened this issue Aug 19, 2024 · 4 comments · May be fixed by #1810
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@RomanBednar
Contributor

The driver cannot parse a tag value containing a comma:

$ oc -n openshift-cluster-csi-drivers get deployment.apps/gcp-pd-csi-driver-controller -o json | jq '.spec.template.spec.containers[0].args[-1]'
"--extra-tags=parent1/key1/value,with,coma/parent2/key2/value2"
$ oc -n openshift-cluster-csi-drivers logs pod/gcp-pd-csi-driver-controller-654845fb89-dqlrp
...
I0815 11:34:35.519878       1 main.go:125] Driver vendor version v1.1.0-rc1-1347-gc770b412-dirty
F0815 11:34:35.524193       1 main.go:166] Bad extra tags: tag "with" is invalid, correct format: 'parent_id1/key1/value1,parent_id2/key2/value2'
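
For context, this failure is consistent with the flag value being split on commas before the parent/key/value triples are parsed, so a comma inside a value starts a new (and invalid) tag. A minimal sketch of that kind of comma-first parsing (an assumption for illustration, not the driver's actual code):

```go
// Sketch only: comma-first splitting of the --extra-tags value, which is the
// kind of behaviour that would produce the `tag "with" is invalid` error above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	extraTags := "parent1/key1/value,with,comma,parent2/key2/value2"
	for _, tag := range strings.Split(extraTags, ",") {
		parts := strings.Split(tag, "/")
		if len(parts) != 3 {
			fmt.Printf("tag %q is invalid\n", tag) // e.g. tag "with" is invalid
			continue
		}
		fmt.Printf("parent=%s key=%s value=%s\n", parts[0], parts[1], parts[2])
	}
}
```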

Escaping does not work either:

$ oc -n openshift-cluster-csi-drivers get deployment.apps/gcp-pd-csi-driver-controller -o json | jq '.spec.template.spec.containers[0].args[-1]'
"--extra-tags=parent1/key1/value\\,with\\,coma/parent2/key2/value2"
$ oc -n openshift-cluster-csi-drivers logs pod/gcp-pd-csi-driver-controller-78495f87c7-qjvj5
Defaulted container "csi-driver" out of: csi-driver, csi-provisioner, provisioner-kube-rbac-proxy, csi-attacher, attacher-kube-rbac-proxy, csi-resizer, resizer-kube-rbac-proxy, csi-snapshotter, snapshotter-kube-rbac-proxy, csi-liveness-probe
I0815 11:41:27.848600       1 main.go:111] Operating compute environment set to: production and computeEndpoint is set to: <nil>
I0815 11:41:27.848714       1 main.go:120] Sys info: NumCPU: 6 MAXPROC: 1
I0815 11:41:27.848721       1 main.go:125] Driver vendor version v1.1.0-rc1-1347-gc770b412-dirty
F0815 11:41:27.851228       1 main.go:166] Bad extra tags: tag value "value\\" for tag "parent1/key1/value\\" is invalid. Tag value can have a maximum of 63 characters and cannot be empty. Tag value must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters `_-.@%=+:,*#&(){}[]` and spaces

According to the GCP documentation, commas are allowed in tag values: https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing#adding_tag_values

OpenShift recently added support for user-defined GCP labels and tags during installation: https://github.com/openshift/enhancements/blob/master/enhancements/api-review/gcp_user_defined_labels.md#apply-user-defined-labels-to-all-gcp-resources-created-by-openshift

And the allowed values are documented to match the GCP documentation: https://docs.openshift.com/container-platform/4.16/installing/installing_gcp/installing-gcp-customizations.html#installing-gcp-cluster-creation_installing-gcp-customizations

So in OpenShift we don't prevent commas in tag values either, and if users attempt to use them the driver will fail. Either we need to add some escaping support to the driver, or disallow commas in tag values and update the documentation accordingly.
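
As a rough illustration of the escaping option, the comma split could honour a backslash escape so that `\,` inside a value is kept as a literal comma. This is only a sketch under that assumed syntax; the actual escape character and behaviour would need to be agreed on and documented, and the splitEscaped helper below is hypothetical, not part of the driver:

```go
// Hypothetical escape-aware split: "\," yields a literal comma inside a value,
// and any other escaped character is passed through unchanged.
package main

import (
	"fmt"
	"strings"
)

func splitEscaped(s string, sep byte) []string {
	var out []string
	var cur strings.Builder
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '\\' && i+1 < len(s):
			cur.WriteByte(s[i+1]) // keep the escaped character literally
			i++
		case s[i] == sep:
			out = append(out, cur.String())
			cur.Reset()
		default:
			cur.WriteByte(s[i])
		}
	}
	return append(out, cur.String())
}

func main() {
	extraTags := `parent1/key1/value\,with\,comma,parent2/key2/value2`
	fmt.Printf("%q\n", splitEscaped(extraTags, ','))
	// ["parent1/key1/value,with,comma" "parent2/key2/value2"]
}
```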

NOTE: This can be reproduced easily by creating a tag in GCP manually, passing it to the --extra-tags arg of the driver, and then dynamically provisioning a volume.

RomanBednar linked a pull request Aug 19, 2024 that will close this issue
@RomanBednar
Contributor Author

cc @mattcary @sunnylovestiramisu

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) Dec 2, 2024
@mattcary
Contributor

mattcary commented Dec 2, 2024

It's not just commas: extra slashes will also cause problems, and the validation regexp doesn't allow UTF-8/Unicode as described at https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing#creating_tag.
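
For illustration, Go's regexp package (RE2) supports Unicode character classes, so a value check along these lines could admit non-ASCII letters and digits while keeping the 63-character limit and the special-character set quoted in the error above. This is a hypothetical sketch, not the driver's actual validation:

```go
// Hypothetical Unicode-aware tag value check: begins and ends with a letter or
// digit (\p{L}/\p{N}), is at most 63 runes long, and allows the special
// characters and spaces listed in the driver's error message in between.
package main

import (
	"fmt"
	"regexp"
)

var tagValueRE = regexp.MustCompile(
	`^[\p{L}\p{N}]([\p{L}\p{N}_\-.@%=+:,*#&(){}\[\] ]{0,61}[\p{L}\p{N}])?$`)

func main() {
	fmt.Println(tagValueRE.MatchString("value,with,comma")) // true
	fmt.Println(tagValueRE.MatchString("värde-ü1"))         // true: non-ASCII letters accepted
	fmt.Println(tagValueRE.MatchString(`value\`))           // false: trailing backslash
}
```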

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label Jan 1, 2025