
kubectl diff requires update / patch permission #981

Open
mbrancato opened this issue Nov 19, 2020 · 26 comments
Labels: kind/bug, priority/backlog, triage/accepted, lifecycle/rotten

Comments

@mbrancato commented Nov 19, 2020

What happened:

Normally, using diff to compare two things does not require write access. Here, however, kubectl diff seems to require write access just to compare the current and future state. This may be an upstream problem if it is caused by server-side apply.

When doing kubectl diff on a Deployment:

Error from server (Forbidden): deployments.extensions "my-super-deployment" is forbidden: 
User "myuser" cannot patch resource "deployments" in API group "extensions" in the namespace 
"mynamespace": requires one of ["container.deployments.update"] permission(s).

It seems to require PATCH / PUT permission on /apis/apps/v1/namespaces/{namespace}/deployments/* or the older /apis/extensions/v1beta1/namespaces/{namespace}/deployments/*.
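
For context, a server-side dry run reuses the normal write path with a dryRun=All query parameter, so RBAC authorizes it exactly like a real write. A rough sketch of the kind of request kubectl diff ends up making (assuming kubectl proxy is running locally; resource names taken from the error above):

$ kubectl proxy --port=8001 &
# Same verb as a real patch; nothing is persisted, but RBAC still demands "patch".
$ curl -X PATCH \
    -H "Content-Type: application/merge-patch+json" \
    -d '{"spec":{"replicas":3}}' \
    "http://127.0.0.1:8001/apis/apps/v1/namespaces/mynamespace/deployments/my-super-deployment?dryRun=All"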

What you expected to happen:

Diff output to be shown.

How to reproduce it (as minimally and precisely as possible):

Attempt to diff a Deployment as a user who lacks write access to it. The permission name in the error above (container.deployments.update) is GKE's IAM mapping of the Kubernetes patch/update verb.
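
A quick way to confirm which verb is being checked (a standard RBAC probe; assumes you are allowed to impersonate myuser):

$ kubectl auth can-i get deployments -n mynamespace --as=myuser
yes
$ kubectl auth can-i patch deployments -n mynamespace --as=myuser
no
# kubectl diff fails with Forbidden whenever the patch check answers "no",
# even though displaying a diff conceptually only needs read access.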

Anything else we need to know?:

Environment:

  • Kubernetes client and server versions (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-gke.20", GitCommit:"0ac5f81eecab42bff5ef74f18b99d8896ba7b89b", GitTreeState:"clean", BuildDate:"2020-09-09T00:48:20Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: Google, GKE
  • OS (e.g: cat /etc/os-release): n/a
@mbrancato added the kind/bug label Nov 19, 2020
@k8s-ci-robot added the needs-triage label Nov 19, 2020
@eddiezane (Member) commented:

kubectl supports ±1 minor version of the API server, so your v1.19.3 client against a v1.15 server is well outside our version skew policy.

You could try using an older kubectl version or upgrading your cluster to a supported version.

Please reopen if you run into this with a supported version.

/close

@k8s-ci-robot (Contributor) commented:

@eddiezane: Closing this issue.

@mbrancato (Author) commented:

Hi @eddiezane

Here is a minimal example, using https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as test.yaml (test-modified.yaml changes the containerPort from 80 to 88):

$ kubectl apply -f test.yaml 
deployment.apps/nginx-deployment created
$ kubectl get deployment
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-deployment         3/3     3            3           14s
$ kubectl create clusterrolebinding view --group=view  --clusterrole=view
clusterrolebinding.rbac.authorization.k8s.io/view created
$ kubectl get deployment --as=kubernetes-admin --as-group=view 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           20m
$ kubectl apply -f test-modified.yaml --as=kubernetes-admin --as-group=view 
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":88}]}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"nginx"}],"containers":[{"$setElementOrder/ports":[{"containerPort":88}],"name":"nginx","ports":[{"containerPort":88},{"$patch":"delete","containerPort":80}]}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "default"
for: "test.yaml": deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl diff -f test-modified.yaml --as=kubernetes-admin --as-group=view 
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

/reopen

@k8s-ci-robot (Contributor) commented:

@mbrancato: Reopened this issue.

@julianvmodesto (Contributor) commented:

Yes, this is the current expected behavior:

https://kubernetes.io/docs/reference/using-api/api-concepts/#dry-run-authorization

This issue is a duplicate of kubernetes/kubernetes#95449
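
To illustrate the linked rule: a dry-run mutation is authorized exactly like the real one. A minimal sketch reusing the names from the repro above; the expected failure mirrors the diff error:

$ kubectl patch deployment nginx-deployment -n default \
    --as=kubernetes-admin --as-group=view \
    --type=merge -p '{"spec":{"replicas":4}}' --dry-run=server
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"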

@eddiezane (Member) commented:

Thanks @julianvmodesto. Let's track this there.

/close

@k8s-ci-robot (Contributor) commented:

@eddiezane: Closing this issue.

@mbrancato (Author) commented:

I'll move the discussion there, but as the title says, my issue is the patch permission requirement specifically for kubectl diff, which I think is a layer above that other issue. I tried not to prescribe a solution, but falling back to a client-side dry run may be one.

@tonybenchsci commented:
Relates to kubernetes/kubernetes#95449

@mbrancato (Author) commented:

Reopening this; the other discussion went nowhere.

To make it clearer: comparing the deployed state against a local manifest should not rely on write or patch permissions on the cluster, and I think that is where the misunderstanding lies. At a minimum, diff should fall back to a client-side comparison, as sketched below.
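
A crude read-only approximation of such a fallback, using only get (a sketch; server-populated fields such as status and managedFields will show up as noise in the output):

$ kubectl get -f test-modified.yaml -o yaml --as=kubernetes-admin --as-group=view \
    | diff -u - test-modified.yaml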

kubectl diff issuing a PATCH on the server (even with --server-side=false) also breaks other things in surprising ways, such as diffing an Ingress behind an admission controller: the dry-run PATCH still triggers the admission webhook and can cause the diff to fail.

e.g.

% kubectl diff --server-side=false -f ingress-site.yaml
Error from server (BadRequest): admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "site.company.com" and path "/" is already defined in ingress dev/ingress-site

/reopen

@k8s-ci-robot (Contributor) commented:

@mbrancato: Reopened this issue.

@k8s-ci-robot reopened this Aug 19, 2021
@k8s-triage-robot commented:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 17, 2021
@k8s-triage-robot commented:

(Same lifecycle notice as above, for the rotten stage.)

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Dec 17, 2021
@eddiezane (Member) commented:

@mbrancato apologies for the friction here.

You are correct; this appears to be a valid issue. A client-side-only diff should not be making a PATCH request. It does so because the command is currently hard-coded to use a server dry run. This is a bug and should be fixed.

https://github.com/kubernetes/kubernetes/blob/23ee308ed7ed7c80f7e4d0437f38400f8df6faf8/staging/src/k8s.io/kubectl/pkg/cmd/diff/diff.go#L350
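
One way to observe the hard-coded dry run from the outside (a sketch; the exact log format varies by kubectl version):

$ kubectl diff -f test.yaml -v=8 2>&1 | grep 'PATCH'
# Expect a request line resembling:
#   PATCH https://<server>/apis/apps/v1/namespaces/default/deployments/nginx-deployment?dryRun=All
# i.e. a dry-run PATCH is sent even when --server-side=false.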

/triage accepted
/remove-lifecycle rotten
/priority backlog

@k8s-ci-robot added the triage/accepted and priority/backlog labels and removed the lifecycle/rotten and needs-triage labels Jan 5, 2022
@julianvmodesto (Contributor) commented:

One point of friction is that the --server-side flag name is misleading. The flag indicates whether the new patch is created with server-side apply or client-side apply. Ideally this would be clearer for kubectl diff, with something like --server-side-apply.

I don't think this is a bug because diff is intended to only diff a live object against the new patched object.

It sounds like this request is for a client-side version of diff, with the live object retrieved from the server diffed against a patched object that's constructed locally. I think this sounds close to #1147.
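
For reference, the flag only changes how the merged object is computed; both modes currently go through a server dry run (a sketch, assuming a kubectl version that supports both paths):

$ kubectl diff -f test.yaml                 # client-side apply patch, evaluated via server dry run
$ kubectl diff -f test.yaml --server-side   # server-side apply, also evaluated via server dry run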

@sklirg commented Mar 25, 2022

I don't think this is a bug because diff is intended to only diff a live object against the new patched object.

If that is the case, I think the help text from kubectl diff --help is misleading:

$ kubectl diff --help
Diff configurations specified by file name or stdin between the current online configuration, and the configuration as
it would be if applied.
...

I interpret this as being able to diff the current online configuration against some offline configuration, to see what would change if the offline configuration were applied. As long as I have permission to get the current online configuration, I would expect to be able to see what would change as well.

@k8s-triage-robot commented:

(Same lifecycle notice as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 23, 2022
@k8s-triage-robot commented:

(Same lifecycle notice as above, for the rotten stage.)

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jul 24, 2022
@k8s-triage-robot commented:

(Same lifecycle notice as above; after 30d of rotten inactivity the issue is closed. Reopen with /reopen.)

/close

@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closing this issue.

@blurpy commented Apr 21, 2023

Can this be reopened again? A read-only diff would be extremely useful, especially in pull request pipelines.

@mbrancato (Author) commented:

/reopen

@k8s-ci-robot (Contributor) commented:

@mbrancato: Reopened this issue.

@k8s-ci-robot reopened this Apr 21, 2023
@jgogstad commented May 26, 2023

If you landed here looking for a way to incorporate kubectl diff into CI with read-only access, see https://github.com/weaveworks/kubediff. It performs the diff by reading all the objects in a Kubernetes namespace and comparing them to a set of local manifests; no PATCH permission is necessary.

edit: the project looks dead, the markdown diff output is rough (e.g. it reports "Error: Missing" when something is missing on the server), and the JSON output is limited.

@paololazzari commented:
Any updates?

@imans777 commented:
It's odd that such a useful and obvious command doesn't work as expected and that its help text is misleading. Any updates would be appreciated.
