hosts.update: Reconfigure CentOS 8 repositories to vault #106
Conversation
Force-pushed d2fd887 to 2bfbb2b
/retest centos-ci/cephfs.vfs
lgtm, thanks.
The GlusterFS job failed because of an extra CentOS Storage SIG repository enabled (via installation of centos-release-gluster) later, during execution of backend-specific tasks for installing some dependencies (maybe only for glusterfs-selinux). We'll have to add a similar change exclusively inside playbooks/ansible/roles/sit.glusterfs/tasks/common. I'll update the PR.
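For illustration, a minimal sketch of the kind of task list that could go under playbooks/ansible/roles/sit.glusterfs/tasks/common — the repo file glob, task names, and vault URL here are assumptions for the sketch, not the actual change in this PR:

```yaml
# Hypothetical sketch: repoint the Storage SIG repo files dropped in by
# centos-release-gluster at vault.centos.org after CentOS Stream 8 EOL.
# The CentOS-Gluster-*.repo glob and URLs are assumptions.

- name: Find Storage SIG repo files installed by centos-release-gluster
  ansible.builtin.find:
    paths: /etc/yum.repos.d
    patterns: "CentOS-Gluster-*.repo"
  register: gluster_repos

- name: Comment out the dead mirrorlist entries
  ansible.builtin.replace:
    path: "{{ item.path }}"
    regexp: '^mirrorlist='
    replace: '#mirrorlist='
  loop: "{{ gluster_repos.files }}"

- name: Point baseurl at vault.centos.org instead of mirror.centos.org
  ansible.builtin.replace:
    path: "{{ item.path }}"
    regexp: '^#?baseurl=http://mirror\.centos\.org'
    replace: 'baseurl=http://vault.centos.org'
  loop: "{{ gluster_repos.files }}"
```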
Force-pushed 2bfbb2b to 805a26a
CentOS Stream 8 reached EOL but our GPFS and GlusterFS environment setup still has some dependencies that are not yet fulfilled by CentOS Stream 9. In the case of GlusterFS, it lacks a few gluster-ansible related RPMs, whereas GPFS is not yet ready to be compiled against newer kernel versions from CentOS Stream 9. Therefore we stick to CentOS Stream 8 until the requirements are met. See samba-in-kubernetes#102 for further developments.

Signed-off-by: Anoop C S <[email protected]>
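For context, the usual post-EOL fix this PR title refers to is commenting out the dead mirrorlist entries and pointing baseurl at vault.centos.org. A minimal Ansible sketch of that idea follows — the CentOS-Stream-*.repo glob and URLs are assumptions, not necessarily what hosts.update actually does:

```yaml
# Hypothetical sketch: repoint the base CentOS Stream 8 repos at the
# vault so dnf keeps working after EOL. Glob and URLs are assumptions.
- name: Reconfigure CentOS Stream 8 base repos to use the vault
  ansible.builtin.shell: |
    sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
           -e 's|^#\?baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
           /etc/yum.repos.d/CentOS-Stream-*.repo
```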
Force-pushed 805a26a to 6352ec2