[FEATURE_REQUEST] Argo Rollouts support #848

Open

romancin opened this issue Oct 8, 2024 · 0 comments

romancin commented Oct 8, 2024

Description of the problem/feature request
Currently, we are starting to migrate our applications from Kubernetes Deployments to Argo Rollouts, which provides advanced deployment strategies such as blue-green or canary. As stated in the Argo Rollouts docs:

A Rollout is a Kubernetes workload resource which is equivalent to a Kubernetes Deployment object. It is intended to replace a Deployment object in scenarios when more advanced deployment or progressive delivery functionality is needed.

Description of the existing behavior vs. expected behavior
When we delete our Deployment, the dangling-service check included in kube-linter starts failing: the Service's selector no longer matches any pods known to kube-linter, because the pods are now controlled by an Argo Rollout.
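
For context, what the check does boils down to comparing the Service's spec.selector against the pod template labels of the workload kinds kube-linter knows how to extract templates from. Below is a minimal sketch of that matching logic in Go, using hypothetical names rather than kube-linter's actual internals; supporting Rollouts would essentially mean feeding the Rollout's spec.template labels into the same comparison:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

// serviceMatchesTemplate reports whether a Service selector would select
// pods created from a workload's pod template labels. Hypothetical helper
// for illustration, not kube-linter's actual code.
func serviceMatchesTemplate(svcSelector, templateLabels map[string]string) bool {
	return labels.SelectorFromSet(labels.Set(svcSelector)).Matches(labels.Set(templateLabels))
}

func main() {
	svcSelector := map[string]string{"run": "my-app"}
	// Labels taken from the Rollout's spec.template.metadata.labels in the
	// input file below; today the check never sees them, because Rollout is
	// not among the kinds kube-linter extracts pod templates from.
	rolloutTemplateLabels := map[string]string{"run": "my-app"}
	fmt.Println(serviceMatchesTemplate(svcSelector, rolloutTemplateLabels)) // prints: true
}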

Input file:

apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-app
  name: my-app
  namespace: default
spec:
  ports:
  - name: http
    nodePort: 30395
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-app
  type: NodePort
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  annotations:
    ignore-check.kube-linter.io/no-read-only-root-fs: Pending to validate if it works with a read-only root filesystem
    ignore-check.kube-linter.io/run-as-non-root: Pending to validate if the pod works with other user than root
    owner: platform
  name: my-app
  namespace: default
spec:
  minReadySeconds: 3
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  strategy:
    canary:
      steps:
      - setWeight: 50
      - pause:
          duration: 10s
  template:
    metadata:
      annotations:
        owner: platform
      labels:
        run: my-app
    spec:
      containers:
      - image: docker.io/my-app:1.0.0
        imagePullPolicy: Always
        name: my-app
        ports:
        - containerPort: 80
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            httpHeaders:
            - name: Host
              value: health
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "1.5"
            memory: 800Mi
          requests:
            cpu: "0.3"
            memory: 200Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    owner: platform
  name: my-app
  namespace: default
spec:
  maxReplicas: 15
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 70
        type: Utilization
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: my-app

kube-linter execution:

kube-linter lint app.yml
KubeLinter 0.6.8

/Users/romanmartin/DeveloperCorner/my-app/deploy/my-app/app.yml: (object: default/my-app /v1, Kind=Service) no pods found matching service labels (map[run:my-app]) (check: dangling-service, remediation: Confirm that your service's selector correctly matches the labels on one of your deployments.)

Error: found 1 lint errors
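
Until Rollouts are recognized natively, one possible stopgap is the same per-object ignore annotation already used on the Rollout above (keys follow the pattern ignore-check.kube-linter.io/<check-name>), applied here to the Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    ignore-check.kube-linter.io/dangling-service: Pods are managed by an Argo Rollout, which kube-linter does not yet match against
  labels:
    run: my-app
  name: my-app
  namespace: default

Alternatively, the check can be excluded for the whole run via a .kube-linter.yaml config file, though that drops dangling-service coverage for every object, not just this Service:

checks:
  exclude:
    - "dangling-service"

Both are workarounds rather than a fix; native Rollout support would keep the check meaningful.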

