Added a flag, mark_masters_schedulable to mark master nodes schedulable #10181
Conversation
"replica" | ||
] = no_of_worker_nodes | ||
|
||
if self.platform == constants.BAREMETAL_PLATFORM: |
Please change this to if self.platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS:
When deploying a new cluster we should use only the following config:
ENV_DATA:
  platform: "hci_baremetal"
Full deployment config:
https://docs.google.com/document/d/1RfDNQi4B3x4kv9PXx2lqGSD2V2UGideTCIXqLYJFg0Y/edit#bookmark=id.5lrwe0xopvrx
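To illustrate, a minimal sketch of the suggested check, assuming constants.HCI_PROVIDER_CLIENT_PLATFORMS is a collection of platform names exposed by the ocs-ci constants module; the function name and the dictionary path here are only illustrative stand-ins for the surrounding deployment code:

from ocs_ci.ocs import constants


def apply_platform_specific_config(platform, storage_cluster_data, no_of_worker_nodes):
    # replica count follows the number of worker nodes, as in the diff above
    storage_cluster_data["spec"]["storageDeviceSets"][0]["replica"] = no_of_worker_nodes

    # reviewer's suggestion: match any HCI provider/client platform,
    # not only the plain bare-metal platform constant
    if platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS:
        pass  # HCI-specific deployment handling would continue here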
Updated in latest commit.
"replica" | ||
] = no_of_worker_nodes | ||
|
||
if self.platform == constants.BAREMETAL_PLATFORM: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
please change to if self.platform in constants.HCI_PROVIDER_CLIENT_PLATFORMS:
We should use only the next config when deploying a new cluster.
ENV_DATA: platform: "hci_baremetal"
Full deployment config:
https://docs.google.com/document/d/1RfDNQi4B3x4kv9PXx2lqGSD2V2UGideTCIXqLYJFg0Y/edit#bookmark=id.5lrwe0xopvrx
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated in latest commit.
ocs_ci/deployment/baremetal.py (Outdated)
namespace (str): namespace where the oc_debug command will be executed

Returns:
    disk_names_available_for_cleanup (int): No of disks avoid to cleanup on a node
A small change here:
disk_names_available_for_cleanup (list): The disk names available for cleanup on a node
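A sketch of how the corrected docstring could read; the node_obj parameter and the one-line summary are assumptions, while the namespace argument and the corrected Returns line come from the review above:

# Sketch only: node_obj and the summary line are illustrative.
def disks_available_to_cleanup(node_obj, namespace=None):
    """
    Find the disks on a node that are available for cleanup.

    Args:
        node_obj (OCS): the worker node object to inspect
        namespace (str): namespace where the oc_debug command will be executed

    Returns:
        disk_names_available_for_cleanup (list): The disk names available for cleanup on a node

    """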
Updated in latest commit.
ocs_ci/deployment/baremetal.py (Outdated)
namespace (str): namespace where the oc_debug command will be executed

Returns:
    disks_cleaned (int): No of disks cleaned on a node
I think this function doesn't return a value.
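Under that reading, a brief sketch of the docstring with the Returns section dropped, since the helper would clean the disks as a side effect; the function name and node_obj parameter are illustrative:

# Sketch only: no Returns section, because the cleanup happens in place.
def clean_disks_on_node(node_obj, namespace=None):
    """
    Clean the disks identified for cleanup on the given node.

    Args:
        node_obj (OCS): the worker node whose disks are cleaned
        namespace (str): namespace where the oc_debug command will be executed
    """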
Have removed this in latest commit.
disks_available_on_worker_nodes_for_cleanup = disks_available_to_cleanup(
    worker_node_objs[0]
)
Shouldn't we go over all the worker nodes (and not just one) to take all the available disks to clean up?
I have executed this for one worker node because I am setting that value as ["spec"]["storageDeviceSets"][0]["count"] in the storage_cluster yaml.
I have seen that, in general, the disk count is the same for all the worker nodes, so I thought I could collect this value from any one of them.
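For comparison, a sketch of the reviewer's alternative, which inspects every worker node and takes the smallest per-node disk count so the value is achievable everywhere; it reuses disks_available_to_cleanup and worker_node_objs from the snippet above, while storage_cluster_data and device_set_count are illustrative names:

# Collect the cleanup-eligible disks from every worker node, not just the first.
disks_per_node = [
    len(disks_available_to_cleanup(node_obj)) for node_obj in worker_node_objs
]

# Use the minimum so the count set below is valid on every node.
device_set_count = min(disks_per_node)
storage_cluster_data["spec"]["storageDeviceSets"][0]["count"] = device_set_count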
Created a vSphere provider-client cluster with this PR.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: amr1ta, dahorak, yitzhak12. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/cherry-pick release-4.16
@amr1ta: new pull request created: #10437 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Added a flag, mark_masters_schedulable, to mark master nodes schedulable if required, and a CSV major version check between the native client and the provider.