kola-openstack fails due to exceeding quota #889

Open
marmijo opened this issue Jul 14, 2023 · 2 comments
Labels
jira For syncing to JIRA

Comments

marmijo (Member) commented Jul 14, 2023

The kola-openstack job has been failing lately on every stream due to exceeding quota. We get the following errors:

[2023-07-11T17:57:50.354Z]         harness.go:1704: Cluster failed starting machines: waiting for instance to run: Server 
reported ERROR status: {500 2023-07-11 17:57:40 +0000 UTC  Build of instance 97deb130-396d-438d-87e3-706114227adc
 aborted: VolumeSizeExceedsAvailableQuota: Requested volume or snapshot exceeds allowed gigabytes quota. Requested
 10G, quota is 1600G and 1600G has been consumed.}

[2023-07-11T17:57:52.851Z] 2023-07-11T17:57:52Z kola: retryloop: failed to bring up machines: waiting for instance to run: 
Server reported ERROR status: {500 2023-07-11 17:57:47 +0000 UTC  Build of instance 897854a4-7a4d-4b72-953c-
8b95b0c387de aborted: VolumeSizeExceedsAvailableQuota: Requested volume or snapshot exceeds allowed gigabytes quota. 
Requested 10G, quota is 1600G and 1600G has been consumed.}

or

[2023-07-14T19:35:14.192Z] 2023-07-14T19:35:13Z kola: Flight failed: Request forbidden: [POST
https://compute.public.mtl1.vexxhost.net/v2.1/360b86ddb1994b55a7c5757f8adc3637/os-keypairs], error message: 
{"forbidden": {"code": 403, "message": "Quota exceeded, too many key pairs."}}

Volumes are being created but not cleaned up, which causes the quota-exceeded errors. We have been able to delete them manually and get a few tests to pass, but since the volumes never get cleaned up, tests eventually start failing again. As a result, the entire kola-openstack job fails.
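
For reference, the manual cleanup we've been doing could be scripted; here is a rough openstacksdk sketch (not the actual tooling, and the "vexxhost" cloud name and the "available"-status filter are assumptions):

    # Rough cleanup sketch using openstacksdk; assumes a clouds.yaml entry
    # named "vexxhost" pointing at the tenant the tests run in (assumption).
    import openstack

    conn = openstack.connect(cloud="vexxhost")

    # Delete volumes left behind by test runs; only touch volumes that are
    # not attached to any instance (status "available").
    for volume in conn.block_storage.volumes(details=True):
        if volume.status == "available":
            print(f"deleting leftover volume {volume.id} ({volume.size}G)")
            conn.block_storage.delete_volume(volume, ignore_missing=True)

This only works around the 1600G quota; the underlying leak still needs to be fixed.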

marmijo added a commit to marmijo/fedora-coreos-pipeline that referenced this issue Jul 14, 2023
Snooze the job on all streams for the remainder of July while we
investigate coreos#889.
marmijo (Member, Author) commented Jul 14, 2023

Let's snooze kola-openstack to unblock the FCOS pipeline while we investigate this. I put up a PR to snooze the cloud test until the end of July. #890

dustymabe (Member) commented:

For the "too many key pairs" error, I figured out what the problem was. The web interface was telling me there were no keypairs, but the CLI told me a different story (there were a lot). I cleaned them up and created this issue: coreos/coreos-assembler#3550
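
For posterity, that audit/cleanup can also be done with openstacksdk instead of the CLI; a minimal sketch (the "vexxhost" cloud name and the "kola-" name-prefix filter are assumptions, not necessarily what kola actually names its keypairs):

    # Minimal keypair audit sketch using openstacksdk; the cloud name and the
    # "kola-" prefix filter below are assumptions for illustration only.
    import openstack

    conn = openstack.connect(cloud="vexxhost")

    keypairs = list(conn.compute.keypairs())
    print(f"{len(keypairs)} keypairs registered")  # what the CLI sees

    for kp in keypairs:
        if kp.name.startswith("kola-"):  # only remove test-generated keypairs
            print(f"deleting keypair {kp.name}")
            conn.compute.delete_keypair(kp, ignore_missing=True)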
