
Update openstack-k8s-operators (main) #485

Merged: 1 commit into main on Sep 29, 2023

Conversation

@openstack-k8s-ci-robot commented Sep 22, 2023

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| github.com/openstack-k8s-operators/cinder-operator/api | require | digest | 1b9a7de -> 42169dd |
| github.com/openstack-k8s-operators/dataplane-operator/api | require | digest | e89d5ef -> a955424 |
| github.com/openstack-k8s-operators/glance-operator/api | require | digest | 8916408 -> adaea00 |
| github.com/openstack-k8s-operators/heat-operator/api | require | digest | e84784b -> 63f4c93 |
| github.com/openstack-k8s-operators/horizon-operator/api | require | digest | 511d89a -> e0a30ad |
| github.com/openstack-k8s-operators/infra-operator/apis | require | digest | 2c76cd2 -> 98de8aa |
| github.com/openstack-k8s-operators/ironic-operator/api | require | digest | 27e7523 -> 8c5a9c4 |
| github.com/openstack-k8s-operators/keystone-operator/api | require | digest | 92ae026 -> 11cb6a6 |
| github.com/openstack-k8s-operators/lib-common/modules/common | require | digest | 7dcb605 -> 4f614f3 |
| github.com/openstack-k8s-operators/lib-common/modules/openstack | require | digest | d74c2f3 -> 4f614f3 |
| github.com/openstack-k8s-operators/lib-common/modules/storage | require | digest | d74c2f3 -> 4f614f3 |
| github.com/openstack-k8s-operators/manila-operator/api | require | digest | 996d4e3 -> c19d2f6 |
| github.com/openstack-k8s-operators/mariadb-operator/api | require | digest | 8999b3b -> 6539555 |
| github.com/openstack-k8s-operators/neutron-operator/api | require | digest | 537b5af -> a320910 |
| github.com/openstack-k8s-operators/nova-operator/api | require | digest | 4a535c8 -> 8734257 |
| github.com/openstack-k8s-operators/octavia-operator/api | require | digest | dece63b -> f032aca |
| github.com/openstack-k8s-operators/openstack-ansibleee-operator/api | require | digest | 6c12757 -> 96a16f0 |
| github.com/openstack-k8s-operators/openstack-baremetal-operator/api | require | digest | ecb378f -> 9064ac2 |
| github.com/openstack-k8s-operators/ovn-operator/api | require | digest | aab3078 -> 2b25ed7 |
| github.com/openstack-k8s-operators/placement-operator/api | require | digest | 3c99d09 -> d73f6b7 |
| github.com/openstack-k8s-operators/swift-operator/api | require | digest | a37c476 -> 52a9fee |
| github.com/openstack-k8s-operators/telemetry-operator/api | require | digest | fe2794a -> cf7134b |

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone America/New_York, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@stuggi commented Sep 22, 2023

Don't update them this week; we have to update them via #457.

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/c3d9899eae5d458885f98685cef6e62d

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 16m 05s
podified-multinode-edpm-deployment-crc FAILURE in 56m 24s
cifmw-crc-podified-edpm-baremetal FAILURE in 1h 00m 02s

@rabi commented Sep 23, 2023

recheck

We need to merge this to bump dataplane-operator

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/394c043212b946629d7070dfe9df3559

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 05m 28s
podified-multinode-edpm-deployment-crc FAILURE in 51m 48s
cifmw-crc-podified-edpm-baremetal FAILURE in 53m 57s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from f3c4df8 to f010bf5 on September 23, 2023 04:16
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/60e9d3b3e4bf431090b71de0f92bd571

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 07m 57s
podified-multinode-edpm-deployment-crc FAILURE in 55m 22s
cifmw-crc-podified-edpm-baremetal FAILURE in 52m 20s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from f010bf5 to 46f7108 on September 23, 2023 07:59
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/b3d1689bb2584cff9a97975528d9a478

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 05m 22s
podified-multinode-edpm-deployment-crc FAILURE in 52m 26s
cifmw-crc-podified-edpm-baremetal FAILURE in 27m 19s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from 46f7108 to 23a3fac on September 23, 2023 11:28
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/912ed0aceb5a4b1db7ebd6fa97734cdb

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 06m 52s
podified-multinode-edpm-deployment-crc FAILURE in 49m 33s
cifmw-crc-podified-edpm-baremetal FAILURE in 53m 50s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from 23a3fac to ace7dbb on September 23, 2023 13:51
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/21250fe2850a4db58a84bc2afa1c11b7

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 06m 03s
podified-multinode-edpm-deployment-crc FAILURE in 52m 44s
cifmw-crc-podified-edpm-baremetal FAILURE in 50m 41s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from ace7dbb to 56773c1 on September 23, 2023 17:20
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/db248da28ef442c5b7ef8a1d8738db60

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 07m 10s
podified-multinode-edpm-deployment-crc FAILURE in 52m 00s
cifmw-crc-podified-edpm-baremetal FAILURE in 54m 11s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from 56773c1 to 20dd48c on September 24, 2023 18:12
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/2aacd65427a242fe98046547d14a5846

openstack-k8s-operators-content-provider NODE_FAILURE Node request 200-0006422042 failed in 0s
⚠️ podified-multinode-edpm-deployment-crc SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
⚠️ cifmw-crc-podified-edpm-baremetal SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider

@stuggi commented Sep 25, 2023

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/8a59572b8b8e44afa6f9d4f3842f41aa

openstack-k8s-operators-content-provider NODE_FAILURE Node request 200-0006423483 failed in 0s
⚠️ podified-multinode-edpm-deployment-crc SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
⚠️ cifmw-crc-podified-edpm-baremetal SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider

@stuggi commented Sep 25, 2023

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/fa8a9f7e1cfc4e31b1b680fde19af901

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 39m 06s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 46m 43s
cifmw-crc-podified-edpm-baremetal FAILURE in 1h 16m 18s

@rabi force-pushed the renovate/main-openstack-k8s-operators branch from 20dd48c to a94b365 on September 26, 2023 04:36
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/e5d56abe40724b8aa5342131377a530d

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 23m 25s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 46m 12s
cifmw-crc-podified-edpm-baremetal FAILURE in 1h 10m 37s

@rabi commented Sep 26, 2023

Looks like the edpm jobs here are not using the latest ansible-runner image, even though the openstack-ansibleee-operator image is the latest; that's why the edpm baremetal job is failing. @dprince Is this related to the Disconnected Environment / RELATED_IMAGE changes in openstack-ansibleee-operator? (A quick way to check the operator's default from a live cluster is sketched after the snippet below.)

                env:
                - name: RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT
                  value: quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:a1bb61a788f9ec9f82db1a81fa0c7692aa3197af022639bfa94276690203952e
                image: quay.io/openstack-k8s-operators/openstack-ansibleee-operator@sha256:abfd44cfe8e2c26d6d1a2d80baa6473d4ffe2b833772dd94c9faca7165606f3e
                livenessProbe:
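
As a sanity check, here is a minimal sketch that reads the env var above from a live cluster, assuming the kubernetes.core collection is installed; the Deployment name openstack-ansibleee-operator-controller-manager and namespace openstack-operators are assumptions, so adjust them for your install:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Read the ansibleee operator Deployment (name/namespace are assumptions)
      kubernetes.core.k8s_info:
        api_version: apps/v1
        kind: Deployment
        namespace: openstack-operators
        name: openstack-ansibleee-operator-controller-manager
      register: aee_deploy

    - name: Print the default runner image advertised via RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT
      ansible.builtin.debug:
        msg: >-
          {{ aee_deploy.resources[0].spec.template.spec.containers
             | map(attribute='env', default=[]) | flatten
             | selectattr('name', 'equalto', 'RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT')
             | map(attribute='value') | list }}

This only reads the default the operator advertises; the running jobs could still be pinning a different runner image elsewhere.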

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from a94b365 to f9540e0 on September 27, 2023 18:51
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/c8e434d1b86640b6a46636feb6401bc8

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 34m 53s
podified-multinode-edpm-deployment-crc FAILURE in 1h 16m 37s
cifmw-crc-podified-edpm-baremetal FAILURE in 1h 13m 23s

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from f9540e0 to 2b9b4af on September 28, 2023 08:33
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/7379289a30274808835f141d513d370c

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 32m 19s
podified-multinode-edpm-deployment-crc FAILURE in 1h 19m 36s
cifmw-crc-podified-edpm-baremetal FAILURE in 13m 03s

@rabi commented Sep 28, 2023

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/fafcbabbf98f4996b89677158bba10f8

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 35m 07s
podified-multinode-edpm-deployment-crc FAILURE in 1h 19m 20s
cifmw-crc-podified-edpm-baremetal FAILURE in 1h 13m 38s

@rabi commented Sep 28, 2023

Looks like the issue is that containers.podman.podman_image can't pull images referenced by SHA digest. With openstack-operator we default some images to SHA digests, and the module can't pull those; that's why the download job is failing. Note in the failure below that the @sha256:... digest reference ends up being treated as a :sha256:... tag.

[ramishra@fedora ~]$ cat playbook.yaml 
- hosts: localhost
  tasks:
    - name: Download needed container image
      containers.podman.podman_image:
        name: "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f22187e01fdc7c026d62f28470ccf1ea812a62f9ecbf00af166cd0544641e2ee"
      become: true
[ramishra@fedora ~]$ ansible-playbook playbook.yaml 
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'

PLAY [localhost] *******************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]

TASK [Download needed container image] *********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to pull image quay.io/podified-antelope-centos9/openstack-nova-compute:sha256:f22187e01fdc7c026d62f28470ccf1ea812a62f9ecbf00af166cd0544641e2ee"}

PLAY RECAP *************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

[ramishra@fedora ~]$ cat playbook.yaml 
- hosts: localhost
  tasks:
    - name: Download needed container image
      containers.podman.podman_image:
        name: "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
      become: true
[ramishra@fedora ~]$ ansible-playbook playbook.yaml 
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'

PLAY [localhost] *******************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]

TASK [Download needed container image] *********************************************************************************
ok: [localhost]

PLAY RECAP *************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
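
Podman itself can pull by digest, so one hedged workaround sketch (not necessarily what the eventual edpm-ansible fix does) is to shell out to podman pull for digest-pinned references instead of going through podman_image:

- hosts: localhost
  tasks:
    - name: Download needed container image (digest-pinned, via podman pull)
      ansible.builtin.command:
        cmd: podman pull quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f22187e01fdc7c026d62f28470ccf1ea812a62f9ecbf00af166cd0544641e2ee
      become: true

This sidesteps the digest-rewritten-as-tag behaviour seen in the first run above, at the cost of the idempotence and change reporting that podman_image provides.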

@openstack-k8s-ci-robot force-pushed the renovate/main-openstack-k8s-operators branch from 2b9b4af to d210f7a on September 28, 2023 23:45
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/8a44e018b08a49fe8dee926a3038f309

openstack-k8s-operators-content-provider FAILURE in 9m 18s
⚠️ podified-multinode-edpm-deployment-crc SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
⚠️ cifmw-crc-podified-edpm-baremetal SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider

@rabi commented Sep 29, 2023

recheck

@rabi commented Sep 29, 2023

/test openstack-operator-build-deploy-kuttl

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/b1292351bc164a0780909fa069d4bf1e

openstack-k8s-operators-content-provider FAILURE in 10m 03s
⚠️ podified-multinode-edpm-deployment-crc SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
⚠️ cifmw-crc-podified-edpm-baremetal SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider

@fao89 commented Sep 29, 2023

/test openstack-operator-build-deploy-kuttl

@fao89 commented Sep 29, 2023

recheck

1 similar comment
@rabi commented Sep 29, 2023

recheck

@rabi commented Sep 29, 2023

/test openstack-operator-build-deploy-kuttl

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/9249c891ac96449eb64385e98eb61366

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 31m 20s
podified-multinode-edpm-deployment-crc TIMED_OUT in 3h 08m 40s
cifmw-crc-podified-edpm-baremetal FAILURE in 40m 28s

@rabi commented Sep 29, 2023

recheck

@rabi commented Sep 29, 2023

/test openstack-operator-build-deploy-kuttl

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/08a751231ca94e789b798eeb579d274b

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 22m 45s
podified-multinode-edpm-deployment-crc TIMED_OUT in 3h 09m 56s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 52m 47s

@rabi commented Sep 29, 2023

recheck

Looks like the multinode edpm job has been fixed.

@abays left a comment

/lgtm

@openshift-ci bot commented Sep 29, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abays, openstack-k8s-ci-robot

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@danpawlik

It will require "recheck".

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/86761ce1bd4f47aab742e7a1ffab7fbd

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 11m 21s
podified-multinode-edpm-deployment-crc RETRY_LIMIT in 5m 31s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 52m 23s

@rabi commented Sep 29, 2023

recheck

@slagle commented Sep 29, 2023

> Looks like the issue is that containers.podman.podman_image can't pull images referenced by SHA digest. [...]

I believe this was fixed with openstack-k8s-operators/edpm-ansible#381. Just for recording purposes.

@openshift-merge-robot merged commit 1433aa8 into main on Sep 29, 2023