
cloud-controller-manager - CrashLoopBackOff #17347

@RizwanaVyoma

Description


I am using kOps on a GCE cluster.

A recent cluster update automatically changed the cloud-controller-manager image.

Change log:

```
ManagedFile/cluster.k8s.local-addons-gcp-cloud-controller.addons.k8s.io-k8s-1.23
  Contents
            name: KUBERNETES_SERVICE_HOST
            value: 127.0.0.1
+         image: gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:master@sha256:b3ac9d2d9cff8d736473ab0297c57dfb1924b50758e5cc75a80bacd9d6568f8a
-         image: gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:master@sha256:f575cc54d0ac3abf0c4c6e8306d6d809424e237e51f4a9f74575502be71c607c
          imagePullPolicy: IfNotPresent
          livenessProbe:
```

Because of the newly updated image (`gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:master@sha256:b3ac9d2d9cff8d736473ab0297c57dfb1924b50758e5cc75a80bacd9d6568f8a`), the cloud-controller-manager pod is crashing.

Log message in the pod:

```
flag provided but not defined: -allocate-node-cidrs
Usage of /go-runner:
  -also-stdout
        useful with log-file, log to standard output as well as the log file
  -log-file string
        If non-empty, save stdout to this file
  -redirect-stderr
        treat stderr same as stdout (default true)
```

Checking the Docker image directly:

```shell
docker run --rm gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:master@sha256:b3ac9d2d9cff8d736473ab0297c57dfb1924b50758e5cc75a80bacd9d6568f8a --help
```

`--help` did not list any of the flags below, all of which are set in the cloud-controller-manager DaemonSet:

```yaml
args:
  - --allocate-node-cidrs=true
  - --cidr-allocator-type=CloudAllocator
  - --cluster-cidr=************
  - --cluster-name=*************
  - --controllers=*
  - --leader-elect=true
  - --v=2
  - --cloud-provider=gce
  - --use-service-account-credentials=true
  - --cloud-config=/etc/kubernetes/cloud.config
```

Please let us know how to fix this issue, and how to prevent these automatic image version updates. The new image is breaking the cluster.
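As a possible workaround while this is investigated: if your kOps version supports overriding the cloud-controller-manager image in the cluster spec (the `spec.cloudControllerManager.image` field; please verify against the docs for your kOps release, this is a hedged sketch, not a confirmed fix), pinning the previously working digest might look like:

```yaml
# Sketch only: pin the cloud-controller-manager image by digest in the kOps
# cluster spec so `kops update cluster` stops rolling to a new :master build.
# The field path is assumed from the kOps cluster spec; the digest below is
# the pre-update image from the change log above.
spec:
  cloudControllerManager:
    image: gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:master@sha256:f575cc54d0ac3abf0c4c6e8306d6d809424e237e51f4a9f74575502be71c607c
```

If the field exists, it would typically be edited with `kops edit cluster` and applied with `kops update cluster --yes`.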

Metadata

Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)