
feat: Add support for rke2_cni: none #169

Merged 2 commits into lablabs:main on Nov 10, 2023
Conversation


@lukapetrovic-git (Contributor) commented Nov 10, 2023

Description

Added a different readiness check for when the rke2_cni option is set to none.
The current behavior when rke2_cni: none is set is that the playbook hangs indefinitely while checking whether the nodes are Ready, since nodes cannot report a Ready state without an initialized CNI. A sketch of the adjusted check is shown below.
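
A minimal sketch of what such a check could look like, assuming a kubectl-based task; the task name, retry values, and the k8s_cluster inventory group are illustrative, not the role's actual code:

```yaml
# Illustrative sketch: with rke2_cni set to none, wait only until every
# node has registered with the API server, instead of waiting for Ready,
# because nodes can never become Ready without an initialized CNI.
- name: Wait for all nodes to be registered (rke2_cni is none)
  ansible.builtin.command:
    cmd: >
      /var/lib/rancher/rke2/bin/kubectl
      --kubeconfig /etc/rancher/rke2/rke2.yaml
      get nodes --no-headers
  register: nodes_registered
  retries: 100
  delay: 15
  until: nodes_registered.stdout_lines | length == (groups['k8s_cluster'] | length)
  changed_when: false
  when: rke2_cni == 'none'
```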

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Small minor change not affecting the Ansible role code (GitHub Actions workflow, documentation, etc.)

How Has This Been Tested?

Tested the following rke2_cni: none scenarios using Vagrant and Ansible (a sketch of the test variables follows this list):

  • Single node cluster: Works
  • Single master with multiple workers: Works
  • Multi-master cluster with High Availability mode:
    1. With disable_kube_proxy: false, keepalived, kube-vip, and a preconfigured LB (HAProxy was used) all work
    2. With disable_kube_proxy: true, keepalived and a preconfigured LB (HAProxy was used) work; kube-vip does not, but I think this cannot be made to work, since kube-vip is dependent on kube-proxy
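
For reference, a minimal sketch of the variables such a test could use; rke2_cni is the role variable this PR targets, while the HA and kube-proxy settings shown are illustrative assumptions:

```yaml
# group_vars/all.yml - illustrative test setup, not a verified configuration
rke2_cni: none            # skip the bundled CNI; nodes stay NotReady until one is installed
rke2_ha_mode: true        # multi-master (HA) scenario
disable_kube_proxy: false # kube-vip depends on kube-proxy, so keep it enabled when using kube-vip
```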


@lukapetrovic-git marked this pull request as ready for review November 10, 2023 13:18
@lukapetrovic-git changed the title from "Add support for rke2_cni: none" to "feat: Add support for rke2_cni: none" Nov 10, 2023
@MonolithProjects self-assigned this Nov 10, 2023
@MonolithProjects (Collaborator) left a comment

Thanks, it looks good.

@MonolithProjects (Collaborator) left a comment

LGTM

@MonolithProjects merged commit 41d613b into lablabs:main Nov 10, 2023
5 checks passed
@MonolithProjects added the enhancement (New feature or request) label Nov 10, 2023
@pratik705 commented

In my lab, if I set the CNI to none (rke2_cni: none), the nodes don't move to the Ready state. I can see the following error:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Am I missing anything here?

@lukapetrovic-git (Contributor, Author) commented Dec 29, 2023

@pratik705 Hi,
judging by what you wrote, this seems like expected behavior to me.
When you set rke2_cni: none, the cluster should be established, but the nodes (both control plane and workers) will remain in the NotReady state, since there is no CNI initialized.

As an extra step, I install a CNI using another method that is easier for me to maintain separately (in my case Cilium, via the Cilium CLI): https://github.com/cilium/cilium-cli/ A sketch of that step is shown below.
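
A minimal sketch of that extra step as an Ansible task, assuming the cilium CLI binary is already installed on the host and RKE2 wrote its kubeconfig to the default location; the task name and its placement are illustrative:

```yaml
# Illustrative only: install Cilium after the role has brought the
# cluster up with rke2_cni: none. Assumes the cilium CLI is on PATH
# and the default RKE2 kubeconfig path.
- name: Install Cilium via the Cilium CLI
  ansible.builtin.command:
    cmd: cilium install
  environment:
    KUBECONFIG: /etc/rancher/rke2/rke2.yaml
  run_once: true
  changed_when: true
```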
