This repo is a collection of configs to explore OpenShift Virtualization and Networking.
We’ll reuse the code from the GitOps catalog to install the operator, and then provide our own Hyperconverged resource to instantiate, or configure, the operator.
oc apply -k virtualization/operator/base
oc apply -k virtualization/instance/base
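The instance overlay ultimately provides the HyperConverged resource mentioned above. A minimal sketch of what that resource looks like (assuming the operator's default name and namespace):

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}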
Virtual machines can use the default pod network and expose ports using Kubernetes resources like Services, Ingresses, and Routes. However, you may need to connect virtual machines directly to existing enterprise networks outside the cluster.
This can be done by plumbing those networks to the Nodes and using the NMState operator, which is programmed with NodeNetworkConfigurationPolicy (NNCP) resources.
An OpenShift Template can be used to make writing NNCPs more repeatable and fault tolerant. As with most things in life, being organized and consistent is very important in networking. Please don't write a one-off NNCP; nothing (well, very little) is ever truly "one-off".
Template to create a node network configuration policy.
Some generated NNCPs can be found in networking/components.
Because it is common to allocate different cluster Nodes to different workloads, this template uses NODE_SELECTOR_KEY and NODE_SELECTOR_VALUE parameters to target the nodes that are expected to provide networking for virtual machines. This example selects on the MachineSet label used to provision the hypervisor nodes.
Attachment to a network is gated by a namespace-scoped NetworkAttachmentDefinition (NAD) resource, which Multus consumes via annotations on pods.
Network attachment definitions in the 'default' namespace are visible to all other namespaces by default. As a cluster administrator, you may wish to restrict access by defining attachments in specific user namespaces. Because only an admin may create a NAD, it may make sense to place them all in 'default'.
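For example, a pod requests a secondary interface with an annotation like this (a sketch; the ipvlan-1924 NAD and the dale namespace come from the test case later in this document):

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: dale
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name": "ipvlan-1924", "namespace": "dale"}]'
spec:
  containers:
    - name: shell
      image: registry.redhat.io/openshift4/ose-cli
      command: ["sleep", "infinity"]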
Associating a network to an ovs-bridge requires a mapping defined via an NNCP. The template above creates an NNCP and a NAD for each VLAN.
Take a peek at this networking diagram for a sense of how the NNCP and NAD will fit together.
Details on migration from VMware to OpenShift Virtualization.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/node-maintenance-operator.openshift-operators: ""
  name: node-maintenance-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: node-maintenance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# necessary?
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: node-maintenance-operator
  namespace: openshift-operators
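# Note: openshift-operators already ships with a cluster-wide OperatorGroup
# (global-operators), and a second OperatorGroup in the same namespace can
# block operator installs there, so this one is most likely not needed.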
Create a NodeMaintenance resource and the running VMs are live migrated off the node. Great!
Also install the Self Node Remediation Operator:
- Create a NodeHealthCheck (NHC):
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: cnv-nodehealthcheck
spec:
  minHealthy: 51%
  remediationTemplate:
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    name: self-node-remediation-resource-deletion-template
    namespace: openshift-operators
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: hub-q4jtr-cnv
  unhealthyConditions:
    # - duration: 60s
    - duration: 300s
      status: 'False'
      type: Ready
    - duration: 300s
      status: Unknown
      type: Ready
In the VirtualMachine, set runStrategy: Always so the VM is restarted after remediation.
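A minimal sketch of where that field lives (the VM name and sizing are hypothetical; only runStrategy matters here):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always  # mutually exclusive with spec.running; keeps the VMI running
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi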
apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediationConfig
metadata:
  name: self-node-remediation-config
  namespace: openshift-operators
spec:
  safeTimeToAssumeNodeRebootedSeconds: 180
  watchdogFilePath: /dev/watchdog
  isSoftwareRebootEnabled: true
  apiServerTimeout: 15s
  apiCheckInterval: 5s
  maxApiErrorThreshold: 3
  peerApiServerTimeout: 5s
  peerDialTimeout: 5s
  peerRequestTimeout: 5s
  peerUpdateInterval: 15m
  selector: machine.openshift.io/cluster-api-machineset=hub-q4jtr-cnv
Additional networks can be configured with these CNI plugins:
- bridge: Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host.
  - CNV uses cnv-bridge. What is the big difference? (See the sketch after this list.)
- host-device: Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system.
- ipvlan: Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the MAC address of the parent physical network interface.
- macvlan: Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address.
- SR-IOV: Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system.
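To partly answer the cnv-bridge question: it is a thin wrapper around the bridge plugin that OpenShift Virtualization ships for VM workloads, adding VM-oriented features such as MAC spoof checking. A sketch of a cnv-bridge NAD over the br-1924 bridge from the test case below (the NAD name vlan-1924 is hypothetical):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-1924
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-1924
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "vlan-1924",
      "type": "cnv-bridge",
      "bridge": "br-1924",
      "macspoofchk": true
    }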
Here is a test case that worked, using ipvlan:
---
# oc process -p VLAN=1924 -p NODE_SELECTOR_KEY=kubernetes.io/hostname -p NODE_SELECTOR_VALUE=hub-q4jtr-cnv-dn79n \
#   -f templates/template-nncp.yaml -o yaml | yq e '.items[0]' - > nncp-hub-q4jtr-cnv-dn79n.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  annotations:
    description: Network Config for adding vlan 1924 and bridge on ens224
  name: ens224-v1924-dn79n
spec:
  nodeSelector:
    kubernetes.io/hostname: hub-q4jtr-cnv-dn79n
  desiredState:
    interfaces:
      - ipv4:
          enabled: false
        ipv6:
          enabled: false
        name: ens224
        state: up
        type: ethernet
      - ipv4:
          enabled: false
        ipv6:
          enabled: false
        name: ens224.1924
        state: up
        type: vlan
        vlan:
          base-iface: ens224
          id: 1924
      - bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens224.1924
              vlan: {}
        ipv4:
          enabled: false
        ipv6:
          enabled: false
        name: br-1924
        state: up
        type: linux-bridge
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    description: IPVLAN will assign the MAC of the 'master' interface
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-1924
  name: ipvlan-1924
spec:
  # Without the tuning plugin below, pods get an "address in use" error.
  # (JSON does not allow comments, so the note lives here at the YAML level.)
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "ipvlan-1924",
      "plugins": [{
        "type": "ipvlan",
        "master": "br-1924",
        "mode": "l2",
        "ipam": {
          "type": "static",
          "addresses": [
            { "address": "192.168.4.213/24" }
          ]
        }
      },
      {
        "type": "tuning",
        "capabilities": { "mac": true }
      }]
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"static:latest"},"fieldPath":"spec.template.spec.containers[?(@.name==\"static\")].image"}]'
  labels:
    app: static
    app.kubernetes.io/component: static
    app.kubernetes.io/instance: static
  name: static-ipvlan
  namespace: dale
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      deployment: static-ipvlan
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{"interface":"net1","name":"ipvlan-1924","namespace":"dale"}]'
        openshift.io/generated-by: OpenShiftNewApp
      labels:
        deployment: static-ipvlan
    spec:
      containers:
        - command:
            - /bin/bash
            - -c
            - sleep 2000000000000
          image: registry.redhat.io/openshift4/ose-cli
          imagePullPolicy: Always
          name: ose-cli
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/hostname: hub-q4jtr-cnv-dn79n
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
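To check that the NNCP applied and the pod got its secondary interface, something like this should work (a sketch; names match the example above, and it assumes the ip utility is present in the image):

oc get nncp ens224-v1924-dn79n
oc get nnce
oc -n dale exec deploy/static-ipvlan -- ip addr show net1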
- https://github.com/openshift/network-tools/blob/master/docs/user.md
- How to create a VLAN interface for VMs in OpenShift Virtualization? (when using a Linux Bridge as opposed to an OVS Bridge; see the sketch below)
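One way to do the Linux Bridge variant, rather than creating a VLAN subinterface plus a bridge per VLAN as in the NNCP above, is to let the bridge tag each port: the bridge/cnv-bridge plugin accepts a vlan parameter that makes the VM's port an access port on that VLAN (the bridge needs VLAN filtering enabled). A hypothetical NAD, assuming a node bridge named br0 and VLAN 100:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "vlan-100",
      "type": "cnv-bridge",
      "bridge": "br0",
      "vlan": 100
    }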