Added a flag, mark_masters_schedulable, to mark master nodes schedulable #10181

Merged

Conversation

@amr1ta (Contributor) commented Jul 25, 2024

Added a flag, mark_masters_schedulable, to mark master nodes schedulable if required, and added a CSV major-version check between the native client and the provider.

…le if required and verify csv version of native client

Signed-off-by: Amrita Mahapatra <[email protected]>
@amr1ta force-pushed the update-provider-client-deploy branch from a828513 to c69a73a on July 26, 2024 05:53
"replica"
] = no_of_worker_nodes

if self.platform == constants.BAREMETAL_PLATFORM:
Contributor:

Please change to: if self.platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS:
We should use only the following config when deploying a new cluster:
ENV_DATA:
  platform: "hci_baremetal"
Full deployment config:
https://docs.google.com/document/d/1RfDNQi4B3x4kv9PXx2lqGSD2V2UGideTCIXqLYJFg0Y/edit#bookmark=id.5lrwe0xopvrx

Contributor Author:

Updated in the latest commit.
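For illustration, a minimal sketch of the condition the reviewer asks for, assuming the ocs_ci.ocs.constants import path and that self.platform in the deployment code carries ENV_DATA["platform"] (e.g. "hci_baremetal"); the helper name is hypothetical, not part of the merged change:

from ocs_ci.ocs import constants  # assumed import path for the ocs-ci constants module


def is_hci_provider_client_platform(platform):
    # Hypothetical helper showing the suggested check: treat the whole family of
    # HCI provider/client platforms the same way instead of comparing against
    # constants.BAREMETAL_PLATFORM only, so "hci_baremetal" deployments are covered.
    return platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS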

"replica"
] = no_of_worker_nodes

if self.platform == constants.BAREMETAL_PLATFORM:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

please change to if self.platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS:
We should use only the next config when deploying a new cluster.
ENV_DATA: platform: "hci_baremetal"
Full deployment config:
https://docs.google.com/document/d/1RfDNQi4B3x4kv9PXx2lqGSD2V2UGideTCIXqLYJFg0Y/edit#bookmark=id.5lrwe0xopvrx

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Updated in latest commit.

namespace (str): namespace where the oc_debug command will be executed

Returns:
disk_names_available_for_cleanup (int): No of disks avoid to cleanup on a node
Contributor:

A small change here:
disk_names_available_for_cleanup (list): The disk names available for cleanup on a node

Contributor Author:

Updated in the latest commit.
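As an illustration of the agreed wording, a hypothetical stub carrying the corrected docstring; the function name and the namespace argument follow the snippets under review, while the exact signature and body are assumptions rather than the merged code:

def disks_available_to_cleanup(node_obj, namespace=None):
    """
    Collect the disks on a node that are available for cleanup.

    Args:
        node_obj: the worker node object to inspect (assumed parameter)
        namespace (str): namespace where the oc_debug command will be executed

    Returns:
        disk_names_available_for_cleanup (list): The disk names available for cleanup on a node
    """
    disk_names_available_for_cleanup = []
    # ... inspect the node via oc debug and append eligible disk names here ...
    return disk_names_available_for_cleanup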

namespace (str): namespace where the oc_debug command will be executed

Returns:
disks_cleaned (int): No of disks cleaned on a node
Contributor:

I think this function doesn't return a value.

Contributor Author:

I have removed this in the latest commit.

Comment on lines +141 to +143
disks_available_on_worker_nodes_for_cleanup = disks_available_to_cleanup(
    worker_node_objs[0]
)
Contributor:

Shouldn't we go over all the worker nodes (and not just one) to take all the available disks to clean up?

Contributor Author:

I have executed this for one worker node because I am setting that value as ["spec"]["storageDeviceSets"][0]["count"] in the storage_cluster yaml.
I have seen that, in general, the disk count is the same for all the worker nodes, so I thought I could collect this value from any one of them.
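For comparison, a hypothetical sketch of the reviewer's alternative: query every worker node and size the device set from the node with the fewest cleanup-eligible disks, so the count never exceeds what any single node can provide. disks_available_to_cleanup and worker_node_objs follow the snippet under review; the aggregation and the storage_cluster_data name are assumptions, not the merged change:

def min_cleanup_disk_count(worker_node_objs):
    # Hypothetical helper: smallest number of cleanup-eligible disks across all
    # worker nodes, instead of reading only worker_node_objs[0].
    return min(
        len(disks_available_to_cleanup(node_obj)) for node_obj in worker_node_objs
    )


# The resulting value would then feed the same field the author mentions:
# storage_cluster_data["spec"]["storageDeviceSets"][0]["count"] = min_cleanup_disk_count(worker_node_objs)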

Signed-off-by: Amrita Mahapatra <[email protected]>
@amr1ta (Contributor, Author) commented Aug 5, 2024

Created a vSphere provider-client cluster with this PR:
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/40540/console

DanielOsypenko previously approved these changes Aug 5, 2024
yitzhak12 previously approved these changes Aug 5, 2024
openshift-ci bot removed the lgtm label Aug 7, 2024
Signed-off-by: Amrita Mahapatra <[email protected]>
@amr1ta dismissed stale reviews from yitzhak12 and DanielOsypenko via c1a7594 on August 7, 2024 06:28
openshift-ci bot added the lgtm label Aug 7, 2024
openshift-ci bot commented Aug 7, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: amr1ta, dahorak, yitzhak12

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@dahorak merged commit a196690 into red-hat-storage:master Aug 7, 2024
5 of 6 checks passed
@amr1ta (Contributor, Author) commented Sep 6, 2024

/cherry-pick release-4.16

@openshift-cherrypick-robot (Collaborator):

@amr1ta: new pull request created: #10437

In response to this:

/cherry-pick release-4.16

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels: lgtm, size/L (PR that changes 100-499 lines)
Projects: none
5 participants