Deploy the istio-tcpip-bypass plugin:
kubectl apply -f https://raw.githubusercontent.com/intel/istio-tcpip-bypass/main/bypass-tcpip-daemonset.yaml
Enter the perf client container again to run the performance test:
# kubectl exec -it perf-7697bc6ddf-p2xpt sh
/ # qperf -t 60 100.64.0.3 -ub -oo msg_size:1:16K:*4 -vu tcp_lat tcp_bw
According to the test results, TCP latency drops by 40% to 60% across packet sizes, and throughput improves by 40% to 80% for packets of 1024 bytes and larger.
Packet Size (byte) | eBPF tcp_lat (us) | Default tcp_lat (us) | eBPF tcp_bw (Mb/s) | Default tcp_bw(Mb/s) |
---|---|---|---|---|
1 | 20.2 | 44.5 | 1.36 | 4.27 |
4 | 20.2 | 48.7 | 5.48 | 16.7 |
16 | 19.6 | 41.6 | 21.7 | 63.5 |
64 | 18.8 | 41.3 | 96.8 | 201 |
256 | 19.2 | 36 | 395 | 539 |
1024 | 18.3 | 42.4 | 1360 | 846 |
4096 | 16.5 | 62.6 | 4460 | 2430 |
16384 | 20.2 | 58.8 | 9600 | 6900 |
On the tested hardware, for packets smaller than 512 bytes, throughput with the eBPF optimization is lower than with the default configuration. This may be related to TCP aggregation optimizations enabled on the NIC under the default configuration. If your application is sensitive to small-packet throughput, test in the target environment before deciding whether to enable the eBPF optimization. We will also continue to optimize eBPF throughput for small-packet scenarios.
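The percentage deltas quoted above can be recomputed directly from the qperf table; a quick sketch (table values copied in, percentages derived, not independently measured):

```python
# Recompute latency/throughput deltas from the qperf results table above.
rows = {
    # size: (ebpf_lat_us, default_lat_us, ebpf_bw_mbps, default_bw_mbps)
    1:     (20.2, 44.5, 1.36, 4.27),
    4:     (20.2, 48.7, 5.48, 16.7),
    16:    (19.6, 41.6, 21.7, 63.5),
    64:    (18.8, 41.3, 96.8, 201),
    256:   (19.2, 36.0, 395, 539),
    1024:  (18.3, 42.4, 1360, 846),
    4096:  (16.5, 62.6, 4460, 2430),
    16384: (20.2, 58.8, 9600, 6900),
}

for size, (el, dl, eb, db) in rows.items():
    lat_drop = (dl - el) / dl  # fraction by which eBPF lowers latency
    bw_gain = eb / db - 1      # positive means eBPF improves throughput
    print(f"{size:>6} B: latency -{lat_drop:.0%}, throughput {bw_gain:+.0%}")
```

For example, at 4096 bytes latency falls by about 74%, while at 64 bytes throughput is roughly halved, matching the small-packet caveat above.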
Scan the QR code below to follow our official account for the latest updates:
Kube-OVN, a CNCF Sandbox Project, bridges the SDN into Cloud Native. It offers an advanced Container Network Fabric for Enterprises with the most functions, extreme performance and the easiest operation.
Most Functions:
If you miss the rich networking capabilities of the SDN age but are struggling to find them in the cloud-native age, Kube-OVN should be your best choice.
Leveraging the proven capabilities of OVS/OVN in the SDN, Kube-OVN brings the rich capabilities of network virtualization to the cloud-native space. It currently supports Subnet Management, Static IP Allocation, Distributed/Centralized Gateways, Underlay/Overlay Hybrid Networks, VPC Multi-Tenant Networks, Cross-Cluster Interconnect, QoS Management, Multi-NIC Management, ACL, Traffic Mirroring, ARM Support, Windows Support, and many more.
Extreme Performance:
If you're concerned about the additional performance loss associated with container networks, then take a look at How Kube-OVN is doing everything it can to optimize performance.
In the data plane, through a series of careful flow-table and kernel optimizations, and with emerging technologies such as eBPF, DPDK and SmartNIC Offload, Kube-OVN can approach or even exceed host network performance in terms of latency and throughput.
In the control plane, Kube-OVN can support large-scale clusters of thousands of nodes and tens of thousands of Pods through the tailoring of OVN upstream flow tables and the use and tuning of various caching techniques.
In addition, Kube-OVN is continuously optimizing the usage of resources such as CPU and memory to accommodate resource-limited scenarios such as the edge.
Easiest Operation:
If you're worried about container network operations, Kube-OVN has a number of built-in tools to help you simplify your operations.
Kube-OVN provides one-click installation scripts to help users quickly build production-ready container networks. Built-in rich monitoring metrics and Grafana dashboards help users quickly set up a monitoring system.
Powerful command line tools simplify daily operations and maintenance for users. By combining with Cilium, users can enhance the observability of their networks with eBPF capabilities. In addition, the ability to mirror traffic makes it easy to customize traffic monitoring and interface with traditional NPM systems.
This document describes the general architecture of Kube-OVN, the functionality of each component and how they interact with each other.
Overall, Kube-OVN serves as a bridge between Kubernetes and OVN, combining proven SDN with Cloud Native. This means that Kube-OVN not only implements network specifications under Kubernetes, such as CNI, Service and NetworkPolicy, but also brings a large number of SDN domain capabilities to cloud-native, such as logical switches, logical routers, VPCs, gateways, QoS, ACLs and traffic mirroring.
Kube-OVN also maintains a good openness to integrate with many technology solutions, such as Cilium, Submariner, Prometheus, KubeVirt, etc.
The components of Kube-OVN can be broadly divided into three categories.
This type of component comes from the OVN/OVS community with specific modifications for Kube-OVN usage scenarios. OVN/OVS itself is a mature SDN system for managing virtual machines and containers, and we strongly recommend that users interested in the Kube-OVN implementation read ovn-architecture(7) first to understand what OVN is and how to integrate with it. Kube-OVN uses the northbound interface of OVN to create and coordinate virtual networks and map the network concepts into Kubernetes.
All OVN/OVS-related components have been packaged into images and are ready to run in Kubernetes.
The `ovn-central` Deployment runs the control plane components of OVN, including `ovn-nb`, `ovn-sb`, and `ovn-northd`.

- `ovn-nb`: Saves the virtual network configuration and provides an API for virtual network management. `kube-ovn-controller` mainly interacts with `ovn-nb` to configure the virtual network.
- `ovn-sb`: Holds the logical flow table generated from the logical network of `ovn-nb`, as well as the actual physical network state of each node.
- `ovn-northd`: Translates the virtual network of `ovn-nb` into logical flow tables in `ovn-sb`.

Multiple instances of `ovn-central` synchronize data via the Raft protocol to ensure high availability.

`ovs-ovn` runs as a DaemonSet on each node, with `openvswitch`, `ovsdb`, and `ovn-controller` running inside the Pod. These components act as agents for `ovn-central`, translating logical flow tables into actual network configurations.
This part contains the core components of Kube-OVN, which bridge OVN and Kubernetes and translate network concepts between the two systems. Most of the core functions are implemented in these components.
This component performs the translation of all resources within Kubernetes to OVN resources and acts as the control plane for the entire Kube-OVN system. `kube-ovn-controller` listens for events on all resources related to network functionality and updates the logical network within OVN based on resource changes. The main resources watched include: Pod, Service, Endpoint, Node, NetworkPolicy, VPC, Subnet, Vlan, and ProviderNetwork.
Taking the Pod event as an example, `kube-ovn-controller` listens to the Pod creation event, allocates an address via the built-in in-memory IPAM function, and calls `ovn-central` to create logical ports, static routes, and possible ACL rules. Next, `kube-ovn-controller` writes the assigned address and subnet information, such as CIDR, gateway, and routes, to the annotations of the Pod. These annotations are then read by `kube-ovn-cni` and used to configure the local network.
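As a concrete illustration of this handoff, the annotations written to the Pod look roughly like the following (the `ovn.kubernetes.io/*` keys are the ones Kube-OVN uses; all values here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # hypothetical Pod
  annotations:
    # Written by kube-ovn-controller after IPAM allocation,
    # then read by kube-ovn-cni to configure the local network.
    ovn.kubernetes.io/allocated: "true"
    ovn.kubernetes.io/ip_address: 10.16.0.15
    ovn.kubernetes.io/mac_address: 00:00:00:53:CA:E7
    ovn.kubernetes.io/cidr: 10.16.0.0/16
    ovn.kubernetes.io/gateway: 10.16.0.1
    ovn.kubernetes.io/logical_switch: ovn-default
```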
This component runs on each node as a DaemonSet, implements the CNI interface, and operates the local OVS to configure the local network.
This DaemonSet copies the `kube-ovn` binary to each machine as a tool for interaction between `kubelet` and `kube-ovn-cni`. The binary sends the corresponding CNI requests to `kube-ovn-cni` for further processing, and is copied to the `/opt/cni/bin` directory by default.
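The CNI configuration that wires `kubelet` to this binary typically looks like the following sketch (the socket path and plugin list are illustrative and may differ per installation):

```json
{
  "name": "kube-ovn",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock"
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```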
`kube-ovn-cni` configures the specific network to perform the appropriate traffic operations. Its main tasks include:

- Configuring `ovn-controller` and `vswitchd`.
- Configuring the `ovn0` NIC to connect the container network and the host network.

These components provide monitoring, diagnostics, and operations tools, as well as external interfaces, to extend the core network capabilities of Kube-OVN and simplify daily operations and maintenance.
This component is a DaemonSet running on nodes with a specific label; it publishes routes to the external network, allowing external access to containers directly through the Pod IP.
For more information on how to use it, please refer to BGP Support.
This component is a DaemonSet running on each node to collect OVS status information, node network quality, network latency, etc. The monitoring metrics collected can be found in Metrics.
This component collects OVN status information and monitoring metrics; all metrics can be found in Metrics.
This component is a kubectl plugin that can quickly run common operations. For more usage, please refer to [kubectl plugin](../ops/kubectl-ko.en.md).
In Kube-OVN, feature stages are classified into Alpha, Beta, and GA, based on the degree of feature usage, documentation, and test coverage.
For Alpha stage functions:
For Beta stage functions:
For GA stage functions:
This list records the feature stages from the 1.8 release.
Feature | Default | Stage | Since | Until |
---|---|---|---|---|
Namespaced Subnet | true | GA | 1.8 | |
Distributed Gateway | true | GA | 1.8 | |
Active-backup Centralized Gateway | true | GA | 1.8 | |
ECMP Centralized Gateway | false | Beta | 1.8 | |
Subnet ACL | true | Alpha | 1.9 | |
Subnet Isolation (Will be replaced by ACL later) | true | Beta | 1.8 | |
Underlay Subnet | true | GA | 1.8 | |
Multiple Pod Interface | true | Beta | 1.8 | |
Subnet DHCP | false | Alpha | 1.10 | |
Subnet with External Gateway | false | Alpha | 1.8 | |
Cluster Inter-Connection with OVN-IC | false | Beta | 1.8 | |
Cluster Inter-Connection with Submariner | false | Alpha | 1.9 | |
VIP Reservation | true | Alpha | 1.10 | |
Create Custom VPC | true | Beta | 1.8 | |
Custom VPC Floating IP/SNAT/DNAT | true | Alpha | 1.10 | |
Custom VPC Static Route | true | Alpha | 1.10 | |
Custom VPC Policy Route | true | Alpha | 1.10 | |
Custom VPC Security Group | true | Alpha | 1.10 | |
Container Bandwidth QoS | true | GA | 1.8 | |
linux-netem QoS | true | Alpha | 1.9 | |
Prometheus Integration | false | GA | 1.8 | |
Grafana Integration | false | GA | 1.8 | |
IPv4/v6 DualStack | false | GA | 1.8 | |
Default VPC EIP/SNAT | false | Beta | 1.8 | |
Traffic Mirroring | false | GA | 1.8 | |
NetworkPolicy | true | Beta | 1.8 | |
Webhook | false | Alpha | 1.10 | |
Performance Tuning | false | Beta | 1.8 | |
Interconnection with Routes in Overlay Mode | false | Alpha | 1.8 | |
BGP Support | false | Alpha | 1.9 | |
Cilium Integration | false | Alpha | 1.10 | |
Custom VPC Peering | false | Alpha | 1.10 | |
Mellanox Offload | false | Alpha | 1.8 | |
Corigine Offload | false | Alpha | 1.10 | |
Windows Support | false | Alpha | 1.10 | |
DPDK Support | false | Alpha | 1.10 | |
OpenStack Integration | false | Alpha | 1.9 | |
Single Pod Fixed IP/Mac | true | GA | 1.8 | |
Workload with Fixed IP | true | GA | 1.8 | |
StatefulSet with Fixed IP | true | GA | 1.8 | |
VM with Fixed IP | false | Beta | 1.9 | |
Load Balancer Type Service in Default VPC | false | Alpha | 1.11 | |
Load Balance in Custom VPC | false | Alpha | 1.11 | |
DNS in Custom VPC | false | Alpha | 1.11 | |
Underlay and Overlay Interconnection | false | Alpha | 1.11 |
Kube-OVN uses `ipset` and `iptables` to implement gateway NAT functionality in the default VPC overlay Subnets.

The ipsets used are shown in the following table:
Name (IPv4/IPv6) | Type | Usage |
---|---|---|
ovn40services/ovn60services | hash:net | Service CIDR |
ovn40subnets/ovn60subnets | hash:net | Overlay Subnet CIDR and NodeLocal DNS IP address |
ovn40subnets-nat/ovn60subnets-nat | hash:net | Overlay Subnet CIDRs that enable NatOutgoing |
ovn40subnets-distributed-gw/ovn60subnets-distributed-gw | hash:net | Overlay Subnet CIDRs that use distributed gateway |
ovn40other-node/ovn60other-node | hash:net | Internal IP addresses for other Nodes |
ovn40local-pod-ip-nat/ovn60local-pod-ip-nat | hash:ip | Deprecated |
ovn40subnets-nat-policy | hash:net | All subnet cidrs configured with natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-dst | hash:net | The dstIPs corresponding to the rule in natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-src | hash:net | The srcIPs corresponding to the rule in natOutgoingPolicyRules |
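These sets encode CIDR membership that the NAT rules then test. A minimal sketch (not Kube-OVN code; set contents are illustrative) of the decision the NatOutgoing rule makes, i.e. SNAT when the source is in `ovn40subnets-nat` and the destination is not in `ovn40subnets`:

```python
import ipaddress

def in_set(ip: str, cidrs: list[str]) -> bool:
    """Rough equivalent of an ipset hash:net membership test."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c) for c in cidrs)

ovn40subnets = ["10.16.0.0/16"]      # example overlay subnet CIDR
ovn40subnets_nat = ["10.16.0.0/16"]  # same subnet with natOutgoing enabled

def needs_snat(src: str, dst: str) -> bool:
    # Mirrors: -m set --match-set ovn40subnets-nat src
    #          -m set ! --match-set ovn40subnets dst -j MASQUERADE
    return in_set(src, ovn40subnets_nat) and not in_set(dst, ovn40subnets)

print(needs_snat("10.16.0.5", "8.8.8.8"))    # Pod -> external: True
print(needs_snat("10.16.0.5", "10.16.0.9"))  # Pod -> Pod: False
```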
The iptables rules (IPv4) used are shown in the following table:
Table | Chain | Rule | Usage | Note |
---|---|---|---|---|
filter | INPUT | -m set --match-set ovn40services src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40services dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40subnets src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40subnets dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40services src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40services dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40subnets src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40subnets dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -s 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Used to count packets from the subnet to the external network | "10.16.0.0/16" is the cidr of the subnet, the "ovn-subnet-gateway" before the "," in comment is used to identify the iptables rule used to count the subnet inbound and outbound gateway packets, and the "ovn-default" after the "," is the name of the subnet |
filter | FORWARD | -d 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Used to count packets from the external network accessing the subnet | "10.16.0.0/16" is the cidr of the subnet, the "ovn-subnet-gateway" before the "," in comment is used to identify the iptables rule used to count the subnet inbound and outbound gateway packets, and the "ovn-default" after the "," is the name of the subnet |
filter | OUTPUT | -p udp -m udp --dport 6081 -j MARK --set-xmark 0x0 | Clear traffic tag to prevent SNAT | UDP: bad checksum on VXLAN interface |
nat | PREROUTING | -m comment --comment "kube-ovn prerouting rules" -j OVN-PREROUTING | Enter OVN-PREROUTING chain processing | -- |
nat | POSTROUTING | -m comment --comment "kube-ovn postrouting rules" -j OVN-POSTROUTING | Enter OVN-POSTROUTING chain processing | -- |
nat | OVN-PREROUTING | -i ovn0 -m set --match-set ovn40subnets src -m set --match-set ovn40services dst -j MARK --set-xmark 0x4000/0x4000 | Adding masquerade tags to Pod access service traffic | Used when the built-in LB is turned off |
nat | OVN-PREROUTING | -p tcp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-TCP dst -j MARK --set-xmark 0x80000/0x80000 | Add a specific mark to traffic for Services with ExternalTrafficPolicy: Local (TCP) | Only used when kube-proxy is in ipvs mode |
nat | OVN-PREROUTING | -p udp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-UDP dst -j MARK --set-xmark 0x80000/0x80000 | Add a specific mark to traffic for Services with ExternalTrafficPolicy: Local (UDP) | Only used when kube-proxy is in ipvs mode |
nat | OVN-POSTROUTING | -m set --match-set ovn40services src -m set --match-set ovn40subnets dst -m mark --mark 0x4000/0x4000 -j SNAT --to-source | Use the node IP as the source address for access from the node to overlay Pods via the service IP. | Works only when kube-proxy is in ipvs mode |
nat | OVN-POSTROUTING | -m mark --mark 0x4000/0x4000 -j MASQUERADE | Perform SNAT for specific tagged traffic | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets src -m set --match-set ovn40subnets dst -j MASQUERADE | Perform SNAT for Service traffic between Pods passing through the node | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -m set --match-set ovn40subnets-distributed-gw dst -j RETURN | For Service traffic where ExternalTrafficPolicy is Local, if the Endpoint uses a distributed gateway, SNAT is not required. | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -j MASQUERADE | For Service traffic where ExternalTrafficPolicy is Local, if the Endpoint uses a centralized gateway, SNAT is required. | -- |
nat | OVN-POSTROUTING | -p tcp -m tcp --tcp-flags SYN NONE -m conntrack --ctstate NEW -j RETURN | No SNAT is performed when the Pod IP is exposed to the outside world | -- |
nat | OVN-POSTROUTING | -s 10.16.0.0/16 -m set ! --match-set ovn40subnets dst -j SNAT --to-source 192.168.0.101 | When the Pod accesses the network outside the cluster, if the subnet enables NatOutgoing and uses a centralized gateway with a specified IP, perform SNAT | 10.16.0.0/16 is the subnet CIDR; 192.168.0.101 is the specified IP of the gateway node |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat src -m set ! --match-set ovn40subnets dst -j MASQUERADE | When the Pod accesses the network outside the cluster, if NatOutgoing is enabled on the subnet, perform SNAT | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat-policy src -m set ! --match-set ovn40subnets dst -j OVN-NAT-POLICY | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | ovn40subnets-nat-policy is all subnet segments configured with natOutgoingPolicyRules |
nat | OVN-POSTROUTING | -m mark --mark 0x90001/0x90001 -j MASQUERADE --random-fully | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | After coming out of OVN-NAT-POLICY, if it is tagged with 0x90001/0x90001, it will do SNAT |
nat | OVN-POSTROUTING | -m mark --mark 0x90002/0x90002 -j RETURN | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | After coming out of OVN-NAT-POLICY, if it is tagged with 0x90002/0x90002, it will not do SNAT |
nat | OVN-NAT-POLICY | -s 10.0.11.0/24 -m comment --comment natPolicySubnet-net1 -j OVN-NAT-PSUBNET-aa98851157c5 | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | 10.0.11.0/24 represents the CIDR of the subnet net1, and the rules under the OVN-NAT-PSUBNET-aa98851157c5 chain correspond to the natOutgoingPolicyRules configuration of this subnet |
nat | OVN-NAT-PSUBNET-xxxxxxxxxxxx | -m set --match-set ovn40natpr-418e79269dc5-src src -m set --match-set ovn40natpr-418e79269dc5-dst dst -j MARK --set-xmark 0x90002/0x90002 | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | 418e79269dc5 is the ID of a rule in natOutgoingPolicyRules, which can be viewed through status.natOutgoingPolicyRules[index].RuleID; packets whose srcIPs match ovn40natpr-418e79269dc5-src and whose dstIPs match ovn40natpr-418e79269dc5-dst are marked with 0x90002 |
mangle | OVN-OUTPUT | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x90003/0x90003 | Mark kubelet's probe traffic so it can be redirected to tproxy | |
mangle | OVN-PREROUTING | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j TPROXY --on-port 8102 --on-ip 172.18.0.3 --tproxy-mark 0x90004/0x90004 | Redirect kubelet's probe traffic, matched by the mark, to tproxy |
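Several rules above match on marks like `0x4000/0x4000` or `0x90002/0x90002`. In iptables, `--mark value/mask` matches when `(packet_mark & mask) == value`, so each hexadecimal bit acts as an independent flag. A short sketch of that semantics:

```python
# How "iptables -m mark --mark value/mask" matching works:
# a packet matches when its mark, ANDed with the mask, equals the value.
def mark_matches(pkt_mark: int, value: int, mask: int) -> bool:
    return (pkt_mark & mask) == value

# A packet tagged for masquerade (0x4000/0x4000) matches the MASQUERADE rule:
assert mark_matches(0x4000, 0x4000, 0x4000)

# The natOutgoingPolicyRules marks use distinct bits, so a packet carrying
# 0x90002 (RETURN, no SNAT) does not match the 0x90001 MASQUERADE rule:
assert mark_matches(0x90002, 0x90002, 0x90002)
assert not mark_matches(0x90002, 0x90001, 0x90001)
print("mark/mask semantics verified")
```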
Kube-OVN uses ipset
and iptables
to implement gateway NAT functionality in the default VPC overlay Subnets.
The ipset used is shown in the following table:
Name(IPv4/IPv6) | Type | Usage |
---|---|---|
ovn40services/ovn60services | hash:net | Service CIDR |
ovn40subnets/ovn60subnets | hash:net | Overlay Subnet CIDR and NodeLocal DNS IP address |
ovn40subnets-nat/ovn60subnets-nat | hash:net | Overlay Subnet CIDRs that enable NatOutgoing |
ovn40subnets-distributed-gw/ovn60subnets-distributed-gw | hash:net | Overlay Subnet CIDRs that use distributed gateway |
ovn40other-node/ovn60other-node | hash:net | Internal IP addresses for other Nodes |
ovn40local-pod-ip-nat/ovn60local-pod-ip-nat | hash:ip | Deprecated |
ovn40subnets-nat-policy | hash:net | All subnet cidrs configured with natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-dst | hash:net | The dstIPs corresponding to the rule in natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-src | hash:net | The srcIPs corresponding to the rule in natOutgoingPolicyRules |
The iptables rules (IPv4) used are shown in the following table:
Table | Chain | Rule | Usage | Note |
---|---|---|---|---|
filter | INPUT | -m set --match-set ovn40services src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40services dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40subnets src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | INPUT | -m set --match-set ovn40subnets dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40services src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40services dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40subnets src -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -m set --match-set ovn40subnets dst -j ACCEPT | Allow k8s service and pod traffic to pass through | -- |
filter | FORWARD | -s 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Used to count packets from the subnet to the external network | "10.16.0.0/16" is the cidr of the subnet, the "ovn-subnet-gateway" before the "," in comment is used to identify the iptables rule used to count the subnet inbound and outbound gateway packets, and the "ovn-default" after the "," is the name of the subnet |
filter | FORWARD | -d 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Used to count packets from the external network accessing the subnet | "10.16.0.0/16" is the cidr of the subnet, the "ovn-subnet-gateway" before the "," in comment is used to identify the iptables rule used to count the subnet inbound and outbound gateway packets, and the "ovn-default" after the "," is the name of the subnet |
filter | OUTPUT | -p udp -m udp --dport 6081 -j MARK --set-xmark 0x0 | Clear the traffic mark to prevent SNAT | Works around "UDP: bad checksum" errors on the tunnel interface |
nat | PREROUTING | -m comment --comment "kube-ovn prerouting rules" -j OVN-PREROUTING | Enter OVN-PREROUTING chain processing | -- |
nat | POSTROUTING | -m comment --comment "kube-ovn postrouting rules" -j OVN-POSTROUTING | Enter OVN-POSTROUTING chain processing | -- |
nat | OVN-PREROUTING | -i ovn0 -m set --match-set ovn40subnets src -m set --match-set ovn40services dst -j MARK --set-xmark 0x4000/0x4000 | Add a masquerade mark to traffic from Pods accessing Services | Used when the built-in LB is turned off |
nat | OVN-PREROUTING | -p tcp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-TCP dst -j MARK --set-xmark 0x80000/0x80000 | Add specific tags to ExternalTrafficPolicy for Local's Service traffic (TCP) | Only used when kube-proxy is using ipvs mode |
nat | OVN-PREROUTING | -p udp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-UDP dst -j MARK --set-xmark 0x80000/0x80000 | Add specific tags to ExternalTrafficPolicy for Local's Service traffic (UDP) | Only used when kube-proxy is using ipvs mode |
nat | OVN-POSTROUTING | -m set --match-set ovn40services src -m set --match-set ovn40subnets dst -m mark --mark 0x4000/0x4000 -j SNAT --to-source | Use the node IP as the source address for node-to-overlay-Pod access via a service IP | Works only when kube-proxy is using ipvs mode |
nat | OVN-POSTROUTING | -m mark --mark 0x4000/0x4000 -j MASQUERADE | Perform SNAT for specific tagged traffic | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets src -m set --match-set ovn40subnets dst -j MASQUERADE | Perform SNAT for Service traffic between Pods passing through the node | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -m set --match-set ovn40subnets-distributed-gw dst -j RETURN | For Service traffic where ExternalTrafficPolicy is Local, if the Endpoint uses a distributed gateway, SNAT is not required. | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -j MASQUERADE | For Service traffic where ExternalTrafficPolicy is Local, if the Endpoint uses a centralized gateway, SNAT is required. | -- |
nat | OVN-POSTROUTING | -p tcp -m tcp --tcp-flags SYN NONE -m conntrack --ctstate NEW -j RETURN | No SNAT is performed when the Pod IP is exposed to the outside world | -- |
nat | OVN-POSTROUTING | -s 10.16.0.0/16 -m set ! --match-set ovn40subnets dst -j SNAT --to-source 192.168.0.101 | When the Pod accesses the network outside the cluster, if the subnet is NatOutgoing and a centralized gateway with the specified IP is used, perform SNAT | 10.16.0.0/16 is the Subnet CIDR,192.168.0.101 is the specified IP of gateway node |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat src -m set ! --match-set ovn40subnets dst -j MASQUERADE | When the Pod accesses the network outside the cluster, if NatOutgoing is enabled on the subnet, perform SNAT | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat-policy src -m set ! --match-set ovn40subnets dst -j OVN-NAT-POLICY | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | ovn40subnets-nat-policy is all subnet segments configured with natOutgoingPolicyRules |
nat | OVN-POSTROUTING | -m mark --mark 0x90001/0x90001 -j MASQUERADE --random-fully | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | After coming out of OVN-NAT-POLICY, if it is tagged with 0x90001/0x90001, it will do SNAT |
nat | OVN-POSTROUTING | -m mark --mark 0x90002/0x90002 -j RETURN | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | After coming out of OVN-NAT-POLICY, if it is tagged with 0x90002/0x90002, it will not do SNAT |
nat | OVN-NAT-POLICY | -s 10.0.11.0/24 -m comment --comment natPolicySubnet-net1 -j OVN-NAT-PSUBNET-aa98851157c5 | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | 10.0.11.0/24 represents the CIDR of the subnet net1, and the rules under the OVN-NAT-PSUBNET-aa98851157c5 chain correspond to the natOutgoingPolicyRules configuration of this subnet |
nat | OVN-NAT-PSUBNET-xxxxxxxxxxxx | -m set --match-set ovn40natpr-418e79269dc5-src src -m set --match-set ovn40natpr-418e79269dc5-dst dst -j MARK --set-xmark 0x90002/0x90002 | When Pod accesses the network outside the cluster, if natOutgoingPolicyRules is enabled on the subnet, the packet with the specified policy will perform SNAT | 418e79269dc5 is the ID of a rule in natOutgoingPolicyRules, which can be viewed through status.natOutgoingPolicyRules[index].RuleID; packets whose srcIPs match ovn40natpr-418e79269dc5-src and whose dstIPs match ovn40natpr-418e79269dc5-dst will be marked with 0x90002 |
mangle | OVN-OUTPUT | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x90003/0x90003 | Mark kubelet's probe traffic so it can be steered to tproxy | -- |
mangle | OVN-PREROUTING | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j TPROXY --on-port 8102 --on-ip 172.18.0.3 --tproxy-mark 0x90004/0x90004 | Redirect kubelet's probe traffic to the local tproxy listener | -- |
Based on Kube-OVN v1.12.0, this document lists the CRD resources supported by Kube-OVN, describing the type and meaning of each field in the CRD definitions for reference.
Property Name | Type | Description |
---|---|---|
type | String | Type of status |
status | String | The value of status, one of True , False or Unknown |
reason | String | The reason for the status change |
message | String | The specific message of the status change |
lastUpdateTime | Time | The last time the status was updated |
lastTransitionTime | Time | Time of last status type change |
In each CRD definition, the Condition field in Status follows the above format, so we explain it in advance.
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Subnet |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SubnetSpec | Subnet specific configuration information |
status | SubnetStatus | Subnet status information |
Property Name | Type | Description |
---|---|---|
default | Bool | Whether this subnet is the default subnet |
vpc | String | The vpc which the subnet belongs to, default is ovn-cluster |
protocol | String | IP protocol, the value is in the range of IPv4 , IPv6 or Dual |
namespaces | []String | The list of namespaces bound to this subnet |
cidrBlock | String | The range of the subnet, e.g. 10.16.0.0/16 |
gateway | String | The gateway address of the subnet, the default value is the first available address under the CIDRBlock of the subnet |
excludeIps | []String | The range of addresses under this subnet that will not be automatically assigned |
provider | String | Default value is ovn . In the case of multiple NICs, the value is <name>.<namespace> of the NetworkAttachmentDefinition, Kube-OVN will use this information to find the corresponding subnet resource |
gatewayType | String | The gateway type in overlay mode, either distributed or centralized |
gatewayNode | String | The gateway node when the gateway mode is centralized, node names can be comma-separated |
natOutgoing | Bool | Whether the outgoing traffic is NAT |
externalEgressGateway | String | The address of the external gateway. This parameter and the natOutgoing parameter cannot be set at the same time |
policyRoutingPriority | Uint32 | Policy route priority. Used to control the forwarding of traffic to the external gateway address after the subnet gateway |
policyRoutingTableID | Uint32 | The TableID of the local policy routing table, should be different for each subnet to avoid conflicts |
private | Bool | Whether the subnet is a private subnet, which denies access to addresses inside the subnet if the subnet is private |
allowSubnets | []String | If the subnet is a private subnet, the set of addresses that are allowed to access the subnet |
vlan | String | The name of vlan to which the subnet is bound |
vips | []String | The virtual-ip parameter information for virtual type lsp on the subnet |
logicalGateway | Bool | Whether to enable logical gateway |
disableGatewayCheck | Bool | Whether to skip the gateway connectivity check when creating a pod |
disableInterConnection | Bool | Whether to enable subnet interconnection across clusters |
enableDHCP | Bool | Whether to configure dhcp configuration options for lsps belong this subnet |
dhcpV4Options | String | The DHCP_Options record associated with lsp dhcpv4_options on the subnet |
dhcpV6Options | String | The DHCP_Options record associated with lsp dhcpv6_options on the subnet |
enableIPv6RA | Bool | Whether to configure the ipv6_ra_configs parameter for the lrp port of the router connected to the subnet |
ipv6RAConfigs | String | The ipv6_ra_configs parameter configuration for the lrp port of the router connected to the subnet |
acls | []Acl | The acls record associated with the logical-switch of the subnet |
u2oInterconnection | Bool | Whether to enable interconnection mode for Overlay/Underlay |
enableLb | *Bool | Whether the logical-switch of the subnet is associated with load-balancer records |
enableEcmp | Bool | Centralized subnet, whether to enable ECMP routing |
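Putting the spec fields above together, a minimal Subnet manifest might look like the following sketch; the subnet name, namespace, and address values are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: demo-subnet          # hypothetical name
spec:
  vpc: ovn-cluster           # attach to the default vpc
  protocol: IPv4
  cidrBlock: 10.17.0.0/16
  gateway: 10.17.0.1         # defaults to the first address of cidrBlock if omitted
  excludeIps:
    - 10.17.0.1..10.17.0.10  # reserved range, not auto-assigned
  namespaces:
    - demo-ns                # hypothetical namespace bound to this subnet
  gatewayType: distributed
  natOutgoing: true          # SNAT traffic leaving the cluster
```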
Property Name | Type | Description |
---|---|---|
direction | String | Restrict the direction of acl, which value is from-lport or to-lport |
priority | Int | Acl priority, in the range 0 to 32767 |
match | String | Acl rule match expression |
action | String | The action of the rule, which value is in the range of allow-related , allow-stateless , allow , drop , reject |
Property Name | Type | Description |
---|---|---|
conditions | []SubnetCondition | Subnet status change information, refer to the beginning of the document for the definition of Condition |
v4AvailableIPs | Float64 | Number of available IPv4 IPs |
v4availableIPrange | String | The available range of IPv4 addresses on the subnet |
v4UsingIPs | Float64 | Number of used IPv4 IPs |
v4usingIPrange | String | Used IPv4 address ranges on the subnet |
v6AvailableIPs | Float64 | Number of available IPv6 IPs |
v6availableIPrange | String | The available range of IPv6 addresses on the subnet |
v6UsingIPs | Float64 | Number of used IPv6 IPs |
v6usingIPrange | String | Used IPv6 address ranges on the subnet |
activateGateway | String | The currently working gateway node in centralized subnet of master-backup mode |
dhcpV4OptionsUUID | String | The DHCP_Options record identifier associated with the lsp dhcpv4_options on the subnet |
dhcpV6OptionsUUID | String | The DHCP_Options record identifier associated with the lsp dhcpv6_options on the subnet |
u2oInterconnectionIP | String | The IP address used for interconnection when Overlay/Underlay interconnection mode is enabled |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IP |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IPSpec | IP specific configuration information |
Property Name | Type | Description |
---|---|---|
podName | String | Pod name which assigned with this IP |
namespace | String | The name of the namespace where the pod is bound |
subnet | String | The subnet which the ip belongs to |
attachSubnets | []String | The name of the other subnets attached to this primary IP (field deprecated) |
nodeName | String | The name of the node where the pod is bound |
ipAddress | String | IP address, in v4IP,v6IP format for dual-stack cases |
v4IPAddress | String | IPv4 IP address |
v6IPAddress | String | IPv6 IP address |
attachIPs | []String | Other IP addresses attached to this primary IP (field is deprecated) |
macAddress | String | The Mac address of the bound pod |
attachMacs | []String | Other Mac addresses attached to this primary IP (field deprecated) |
containerID | String | The Container ID corresponding to the bound pod |
podType | String | Special workload pod, can be StatefulSet , VirtualMachine or empty |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all instances of this resource will be kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vlan |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VlanSpec | Vlan specific configuration information |
status | VlanStatus | Vlan status information |
Property Name | Type | Description |
---|---|---|
id | Int | Vlan tag number, in the range of 0~4095 |
provider | String | The name of the ProviderNetwork to which the vlan is bound |
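As a sketch of the two spec fields, a Vlan manifest could look like this; the names are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan100    # hypothetical name
spec:
  id: 100          # vlan tag
  provider: net1   # name of the ProviderNetwork this vlan binds to
```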
Property Name | Type | Description |
---|---|---|
subnets | []String | The list of subnets to which the vlan is bound |
conditions | []VlanCondition | Vlan status change information, refer to the beginning of the document for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value ProviderNetwork |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | ProviderNetworkSpec | ProviderNetwork specific configuration information |
status | ProviderNetworkStatus | ProviderNetwork status information |
Property Name | Type | Description |
---|---|---|
defaultInterface | String | The name of the NIC interface used by default for this bridge network |
customInterfaces | []CustomInterface | The special NIC configuration used by this bridge network |
excludeNodes | []String | The names of the nodes that will not be bound to this bridge network |
exchangeLinkName | Bool | Whether to exchange the bridge NIC and the corresponding OVS bridge name |
Property Name | Type | Description |
---|---|---|
interface | String | NIC interface name used for underlay |
nodes | []String | List of nodes using the custom NIC interface |
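Combining the ProviderNetworkSpec and CustomInterface fields above, a minimal manifest might look like the following sketch; the interface and node names are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: net1               # hypothetical name
spec:
  defaultInterface: eth1   # NIC used on most nodes
  customInterfaces:
    - interface: eth2      # override NIC on specific nodes
      nodes:
        - node-a
  excludeNodes:
    - node-b               # node-b is not bound to this bridge network
```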
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the current bridge network is in the ready state |
readyNodes | []String | The name of the node whose bridge network is ready |
notReadyNodes | []String | The name of the node whose bridge network is not ready |
vlans | []String | The name of the vlan to which the bridge network is bound |
conditions | []ProviderNetworkCondition | ProviderNetwork status change information, refer to the beginning of the document for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vpc |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcSpec | Vpc specific configuration information |
status | VpcStatus | Vpc status information |
Property Name | Type | Description |
---|---|---|
namespaces | []String | List of namespaces bound by Vpc |
staticRoutes | []*StaticRoute | The static route information configured under Vpc |
policyRoutes | []*PolicyRoute | The policy route information configured under Vpc |
vpcPeerings | []*VpcPeering | Vpc interconnection information |
enableExternal | Bool | Whether vpc is connected to an external switch |
Property Name | Type | Description |
---|---|---|
policy | String | Routing policy, takes the value of policySrc or policyDst |
cidr | String | Routing cidr value |
nextHopIP | String | The next hop information of the route |
Property Name | Type | Description |
---|---|---|
priority | Int32 | Priority for policy route |
match | String | Match expression for policy route |
action | String | Action for policy route, the value is in the range of allow , drop , reroute |
nextHopIP | String | The next hop of the policy route, separated by commas in the case of ECMP routing |
Property Name | Type | Description |
---|---|---|
remoteVpc | String | Name of the interconnected peering vpc |
localConnectIP | String | The local IP used by the vpc to connect to the peer vpc |
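The VpcSpec, StaticRoute, and PolicyRoute fields above can be combined into a manifest like the following sketch; the vpc name, namespace, and addresses are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc-demo              # hypothetical name
spec:
  namespaces:
    - ns1                     # hypothetical namespace bound to this vpc
  staticRoutes:
    - policy: policyDst
      cidr: 0.0.0.0/0
      nextHopIP: 10.0.1.254   # default route via a gateway address
  policyRoutes:
    - priority: 100
      match: ip4.src==10.0.1.0/24
      action: reroute
      nextHopIP: 10.0.1.252
```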
Property Name | Type | Description |
---|---|---|
conditions | []VpcCondition | Vpc status change information, refer to the beginning of the documentation for the definition of Condition |
standby | Bool | Whether the vpc creation is complete; subnets under the vpc must wait until the vpc creation has finished before proceeding |
default | Bool | Whether it is the default vpc |
defaultLogicalSwitch | String | The default subnet under vpc |
router | String | The logical-router name for the vpc |
tcpLoadBalancer | String | TCP LB information for vpc |
udpLoadBalancer | String | UDP LB information for vpc |
tcpSessionLoadBalancer | String | TCP Session Hold LB Information for Vpc |
udpSessionLoadBalancer | String | UDP session hold LB information for Vpc |
subnets | []String | List of subnets for vpc |
vpcPeerings | []String | List of peer vpcs for vpc interconnection |
enableExternal | Bool | Whether the vpc is connected to an external switch |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value VpcNatGateway |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcNatSpec | Vpc gateway specific configuration information |
Property Name | Type | Description |
---|---|---|
vpc | String | Vpc name which the vpc gateway belongs to |
subnet | String | The name of the subnet to which the gateway pod belongs |
lanIp | String | The IP address assigned to the gateway pod |
selector | []String | Standard Kubernetes selector match information |
tolerations | []VpcNatToleration | Standard Kubernetes tolerance information |
Property Name | Type | Description |
---|---|---|
key | String | The key information of the taint tolerance |
operator | String | Takes the value of Exists or Equal |
value | String | The value information of the taint tolerance |
effect | String | The effect of the taint tolerance, takes the value of NoExecute , NoSchedule , or PreferNoSchedule |
tolerationSeconds | Int64 | The amount of time the pod can continue to run on the node after the taint is added |
The meaning of the above toleration fields can be found in the official Kubernetes documentation on Taints and Tolerations.
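A VpcNatGateway manifest using the spec and toleration fields above might look like the following sketch; the gateway, vpc, subnet, and taint names are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: VpcNatGateway
metadata:
  name: gw1            # hypothetical name
spec:
  vpc: vpc-demo        # vpc the gateway serves
  subnet: net1         # subnet the gateway pod joins
  lanIp: 10.0.1.254    # fixed IP assigned to the gateway pod
  selector:
    - "kubernetes.io/os: linux"
  tolerations:
    - key: node-role   # hypothetical taint key
      operator: Exists
      effect: NoSchedule
```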
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesEIP |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesEipSpec | IptablesEIP specific configuration information used by vpc gateway |
status | IptablesEipStatus | IptablesEIP status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
v4ip | String | IptablesEIP v4 address |
v6ip | String | IptablesEIP v6 address |
macAddress | String | The assigned mac address, not actually used |
natGwDp | String | Vpc gateway name |
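A minimal IptablesEIP manifest based on the fields above might look like this sketch; the names are hypothetical, and the address can be left to automatic allocation:

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesEIP
metadata:
  name: eip-demo           # hypothetical name
spec:
  natGwDp: gw1             # vpc gateway that holds this EIP
  # v4ip: 172.18.0.100     # optionally pin a specific external address
```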
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether IptablesEIP is configured complete |
ip | String | The IP address used by IptablesEIP, currently only IPv4 addresses are supported |
redo | String | IptablesEIP crd creation or update time |
nat | String | The type of IptablesEIP, either fip , snat , or dnat |
conditions | []IptablesEIPCondition | IptablesEIP status change information, refer to the beginning of the documentation for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesFIPRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesFIPRuleSpec | The IptablesFIPRule specific configuration information used by vpc gateway |
status | IptablesFIPRuleStatus | IptablesFIPRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used for IptablesFIPRule |
internalIp | String | The corresponding internal IP address |
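The two spec fields above can be sketched as the following IptablesFIPRule manifest; the names and address are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesFIPRule
metadata:
  name: fip-demo           # hypothetical name
spec:
  eip: eip-demo            # IptablesEIP to map one-to-one
  internalIp: 10.0.1.5     # vpc-internal address behind the EIP
```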
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether IptablesFIPRule is configured or not |
v4ip | String | The v4 IP address used by IptablesEIP |
v6ip | String | The v6 IP address used by IptablesEIP |
natGwDp | String | Vpc gateway name |
redo | String | IptablesFIPRule crd creation or update time |
conditions | []IptablesFIPRuleCondition | IptablesFIPRule status change information, refer to the beginning of the documentation for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesSnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesSnatRuleSpec | The IptablesSnatRule specific configuration information used by the vpc gateway |
status | IptablesSnatRuleStatus | IptablesSnatRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used by IptablesSnatRule |
internalIp | String | IptablesSnatRule's corresponding internal IP address |
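An IptablesSnatRule manifest built from the fields above might look like this sketch; the names are hypothetical, and the assumption that internalIp may be a CIDR covering the whole internal range should be verified against your deployment:

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesSnatRule
metadata:
  name: snat-demo          # hypothetical name
spec:
  eip: eip-demo            # IptablesEIP used as the SNAT address
  internalIp: 10.0.1.0/24  # internal range translated to the EIP on egress
```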
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the configuration is complete |
v4ip | String | The v4 IP address used by IptablesSnatRule |
v6ip | String | The v6 IP address used by IptablesSnatRule |
natGwDp | String | Vpc gateway name |
redo | String | IptablesSnatRule crd creation or update time |
conditions | []IptablesSnatRuleCondition | IptablesSnatRule status change information, refer to the beginning of the documentation for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesDnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesDnatRuleSpec | The IptablesDnatRule specific configuration information used by vpc gateway |
status | IptablesDnatRuleStatus | IptablesDnatRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of IptablesEIP used by IptablesDnatRule |
externalPort | String | External port used by IptablesDnatRule |
protocol | String | Vpc gateway dnat protocol type |
internalIp | String | Internal IP address used by IptablesDnatRule |
internalPort | String | Internal port used by IptablesDnatRule |
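Combining the spec fields above, an IptablesDnatRule manifest might look like the following sketch; the names, ports, and address are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesDnatRule
metadata:
  name: dnat-demo          # hypothetical name
spec:
  eip: eip-demo            # IptablesEIP exposing the service
  externalPort: "8888"     # port exposed on the EIP
  protocol: tcp
  internalIp: 10.0.1.5     # vpc-internal destination
  internalPort: "80"       # port on the internal address
```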
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the configuration is complete |
v4ip | String | The v4 IP address used by IptablesDnatRule |
v6ip | String | The v6 IP address used by IptablesDnatRule |
natGwDp | String | Vpc gateway name |
redo | String | IptablesDnatRule crd creation or update time |
conditions | []IptablesDnatRuleCondition | IptablesDnatRule Status change information, refer to the beginning of the documentation for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value VpcDns |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcDnsSpec | VpcDns specific configuration information |
status | VpcDnsStatus | VpcDns status information |
Property Name | Type | Description |
---|---|---|
vpc | String | Name of the vpc where VpcDns is located |
subnet | String | The subnet name of the address assigned to the VpcDns pod |
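The two VpcDnsSpec fields above can be sketched as the following manifest; the names are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: VpcDns
metadata:
  name: dns-demo    # hypothetical name
spec:
  vpc: vpc-demo     # vpc the dns pod serves
  subnet: net1      # subnet the dns pod gets its address from
```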
Property Name | Type | Description |
---|---|---|
conditions | []VpcDnsCondition | VpcDns status change information, refer to the beginning of the document for the definition of Condition |
active | Bool | Whether VpcDns is in use |
For detailed documentation on the use of VpcDns, see Customizing VPC DNS.
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value SwitchLBRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SwitchLBRuleSpec | SwitchLBRule specific configuration information |
status | SwitchLBRuleStatus | SwitchLBRule status information |
Property Name | Type | Description |
---|---|---|
vip | String | Vip address of SwitchLBRule |
namespace | String | SwitchLBRule's namespace |
selector | []String | Standard Kubernetes selector match information |
sessionAffinity | String | Standard Kubernetes service sessionAffinity value |
ports | []SlrPort | List of SwitchLBRule ports |
For detailed configuration information of SwitchLBRule, you can refer to Customizing VPC Internal Load Balancing.
Property Name | Type | Description |
---|---|---|
name | String | Port name |
port | Int32 | Port number |
targetPort | Int32 | Target port of SwitchLBRule |
protocol | String | Protocol type |
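Combining the SwitchLBRuleSpec and SlrPort fields above, a manifest might look like the following sketch; the rule name, namespace, vip, and label are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: SwitchLBRule
metadata:
  name: lb-demo              # hypothetical name
spec:
  vip: 10.0.1.100            # virtual IP inside the vpc
  namespace: ns1
  selector:
    - "app: nginx"           # backend pods matched by label
  sessionAffinity: ClientIP
  ports:
    - name: http
      port: 80               # port the vip listens on
      targetPort: 8080       # port on the backend pods
      protocol: TCP
```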
Property Name | Type | Description |
---|---|---|
conditions | []SwitchLBRuleCondition | SwitchLBRule status change information, refer to the beginning of the document for the definition of Condition |
ports | String | Port information |
service | String | Name of the service |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have a value of SecurityGroup |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SecurityGroupSpec | Security Group specific configuration information |
status | SecurityGroupStatus | Security group status information |
Property Name | Type | Description |
---|---|---|
ingressRules | []*SgRule | Inbound security group rules |
egressRules | []*SgRule | Outbound security group rules |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can interoperate and whether traffic rules need to be updated |
Property Name | Type | Description |
---|---|---|
ipVersion | String | IP version number, ipv4 or ipv6 |
protocol | String | The value of icmp , tcp , or udp |
priority | Int | Acl priority, in the range 1 to 200; the smaller the value, the higher the priority |
remoteType | String | The value is either address or securityGroup |
remoteAddress | String | The address of the other side |
remoteSecurityGroup | String | The name of security group on the other side |
portRangeMin | Int | The starting value of the port range, the minimum value is 1. |
portRangeMax | Int | The ending value of the port range, the maximum value is 65535. |
policy | String | The value is allow or drop |
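A SecurityGroup manifest combining the SecurityGroupSpec and SgRule fields above might look like the following sketch; the group name and addresses are hypothetical:

```yaml
apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-demo                 # hypothetical name
spec:
  allowSameGroupTraffic: true
  ingressRules:
    - ipVersion: ipv4
      protocol: tcp
      priority: 1               # 1 is the highest priority
      remoteType: address
      remoteAddress: 192.168.0.0/24
      portRangeMin: 80
      portRangeMax: 80
      policy: allow             # allow HTTP from this range
  egressRules:
    - ipVersion: ipv4
      protocol: tcp
      priority: 200             # 200 is the lowest priority
      remoteType: address
      remoteAddress: 0.0.0.0/0
      portRangeMin: 1
      portRangeMax: 65535
      policy: drop              # drop all other outbound tcp
```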
Property Name | Type | Description |
---|---|---|
portGroup | String | The name of the port-group for the security group |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can interoperate, and whether the security group traffic rules need to be updated |
ingressMd5 | String | The MD5 value of the inbound security group rule |
egressMd5 | String | The MD5 value of the outbound security group rule |
ingressLastSyncSuccess | Bool | Whether the last synchronization of the inbound rule was successful |
egressLastSyncSuccess | Bool | Whether the last synchronization of the outbound rule was successful |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VipSpec | Vip specific configuration information |
status | VipStatus | Vip status information |
Property Name | Type | Description |
---|---|---|
namespace | String | Vip's namespace |
subnet | String | Vip's subnet |
v4ip | String | Vip IPv4 ip address |
v6ip | String | Vip IPv6 ip address |
macAddress | String | Vip mac address |
parentV4ip | String | Not currently in use |
parentV6ip | String | Not currently in use |
parentMac | String | Not currently in use |
attachSubnets | []String | This field is deprecated and no longer used |
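A minimal Vip manifest using the non-deprecated fields above might look like this sketch; the name and namespace are hypothetical, and the addresses can be left to automatic allocation:

```yaml
apiVersion: kubeovn.io/v1
kind: Vip
metadata:
  name: vip-demo        # hypothetical name
spec:
  namespace: ns1
  subnet: ovn-default   # subnet the vip address is reserved from
```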
Property Name | Type | Description |
---|---|---|
conditions | []VipCondition | Vip status change information, refer to the beginning of the documentation for the definition of Condition |
ready | Bool | Vip is ready or not |
v4ip | String | Vip IPv4 ip address, should be the same as the spec field |
v6ip | String | Vip IPv6 ip address, should be the same as the spec field |
mac | String | The vip mac address, which should be the same as the spec field |
pv4ip | String | Not currently used |
pv6ip | String | Not currently used |
pmac | String | Not currently used |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnEip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnEipSpec | OvnEip specific configuration information for default vpc |
status | OvnEipStatus | OvnEip status information for default vpc |
Property Name | Type | Description |
---|---|---|
externalSubnet | String | OvnEip's subnet name |
v4ip | String | OvnEip IP address |
macAddress | String | OvnEip Mac address |
type | String | OvnEip use type, the value can be fip , snat or lrp |
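An OvnEip manifest built from the fields above might look like the following sketch; the names are hypothetical, and the address can be left to automatic allocation from the external subnet:

```yaml
apiVersion: kubeovn.io/v1
kind: OvnEip
metadata:
  name: ovn-eip-demo             # hypothetical name
spec:
  externalSubnet: external204    # underlay subnet providing external addresses
  type: fip                      # this EIP will be used for a floating IP
```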
Property Name | Type | Description |
---|---|---|
conditions | []OvnEipCondition | OvnEip status change information, refer to the beginning of the documentation for the definition of Condition |
v4ip | String | The IPv4 ip address used by ovnEip |
macAddress | String | Mac address used by ovnEip |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnFip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnFipSpec | OvnFip specific configuration information in default vpc |
status | OvnFipStatus | OvnFip status information in default vpc |
Property Name | Type | Description |
---|---|---|
ovnEip | String | Name of the bound ovnEip |
ipName | String | The IP crd name corresponding to the bound Pod |
Property Name | Type | Description |
---|---|---|
ready | Bool | OvnFip is ready or not |
v4Eip | String | Name of the ovnEip to which ovnFip is bound |
v4Ip | String | The ovnEip address currently in use |
macAddress | String | OvnFip's configured mac address |
vpc | String | The name of the vpc where ovnFip is located |
conditions | []OvnFipCondition | OvnFip status change information, refer to the beginning of the document for the definition of Condition |
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnSnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnSnatRuleSpec | OvnSnatRule specific configuration information in default vpc |
status | OvnSnatRuleStatus | OvnSnatRule status information in default vpc |
Property Name | Type | Description |
---|---|---|
ovnEip | String | Name of the ovnEip to which ovnSnatRule is bound |
vpcSubnet | String | The name of the subnet configured by ovnSnatRule |
ipName | String | The IP crd name corresponding to the ovnSnatRule bound Pod |
Property Name | Type | Description |
---|---|---|
ready | Bool | OvnSnatRule is ready or not |
v4Eip | String | The ovnEip address to which ovnSnatRule is bound |
v4IpCidr | String | The cidr address used to configure snat in the logical-router |
vpc | String | The name of the vpc where ovnSnatRule is located |
conditions | []OvnSnatRuleCondition | OvnSnatRule status change information, refer to the beginning of the document for the definition of Condition |
Based on Kube-OVN v1.12.0, we have compiled the CRD resources supported by Kube-OVN, listing the type and meaning of each field in the CRD definitions for reference.
Property Name | Type | Description |
---|---|---|
type | String | Type of status |
status | String | The value of status, in the range of True , False or Unknown |
reason | String | The reason for the status change |
message | String | The specific message of the status change |
lastUpdateTime | Time | The last time the status was updated |
lastTransitionTime | Time | Time of last status type change |
In each CRD definition, the Condition field in Status follows the above format, so we explain it in advance.
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Subnet |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SubnetSpec | Subnet specific configuration information |
status | SubnetStatus | Subnet status information |
Property Name | Type | Description |
---|---|---|
default | Bool | Whether this subnet is the default subnet |
vpc | String | The vpc which the subnet belongs to, default is ovn-cluster |
protocol | String | IP protocol, the value is in the range of IPv4 , IPv6 or Dual |
namespaces | []String | The list of namespaces bound to this subnet |
cidrBlock | String | The range of the subnet, e.g. 10.16.0.0/16 |
gateway | String | The gateway address of the subnet, the default value is the first available address under the CIDRBlock of the subnet |
excludeIps | []String | The range of addresses under this subnet that will not be automatically assigned |
provider | String | Default value is ovn . In the case of multiple NICs, the value is <name>.<namespace> of the NetworkAttachmentDefinition, Kube-OVN will use this information to find the corresponding subnet resource |
gatewayType | String | The gateway type in overlay mode, either distributed or centralized |
gatewayNode | String | The gateway node when the gateway mode is centralized, node names can be comma-separated |
natOutgoing | Bool | Whether the outgoing traffic is NAT |
externalEgressGateway | String | The address of the external gateway. This parameter and the natOutgoing parameter cannot be set at the same time |
policyRoutingPriority | Uint32 | Policy route priority. Used to control the forwarding of traffic to the external gateway address after the subnet gateway |
policyRoutingTableID | Uint32 | The TableID of the local policy routing table, should be different for each subnet to avoid conflicts |
private | Bool | Whether the subnet is a private subnet, which denies access to addresses inside the subnet if the subnet is private |
allowSubnets | []String | If the subnet is a private subnet, the set of addresses that are allowed to access the subnet |
vlan | String | The name of vlan to which the subnet is bound |
vips | []String | The virtual-ip parameter information for virtual type lsp on the subnet |
logicalGateway | Bool | Whether to enable logical gateway |
disableGatewayCheck | Bool | Whether to skip the gateway connectivity check when creating a pod |
disableInterConnection | Bool | Whether to enable subnet interconnection across clusters |
enableDHCP | Bool | Whether to configure dhcp configuration options for lsps belong this subnet |
dhcpV4Options | String | The DHCP_Options record associated with lsp dhcpv4_options on the subnet |
dhcpV6Options | String | The DHCP_Options record associated with lsp dhcpv6_options on the subnet |
enableIPv6RA | Bool | Whether to configure the ipv6_ra_configs parameter for the lrp port of the router connected to the subnet |
ipv6RAConfigs | String | The ipv6_ra_configs parameter configuration for the lrp port of the router connected to the subnet |
acls | []Acl | The acls record associated with the logical-switch of the subnet |
u2oInterconnection | Bool | Whether to enable interconnection mode for Overlay/Underlay |
enableLb | *Bool | Whether the logical-switch of the subnet is associated with load-balancer records |
enableEcmp | Bool | Centralized subnet, whether to enable ECMP routing |
Property Name | Type | Description |
---|---|---|
direction | String | Restrict the direction of acl, which value is from-lport or to-lport |
priority | Int | Acl priority, in the range 0 to 32767 |
match | String | Acl rule match expression |
action | String | The action of the rule, which value is in the range of allow-related , allow-stateless , allow , drop , reject |
Property Name | Type | Description |
---|---|---|
conditions | []SubnetCondition | Subnet status change information, refer to the beginning of the document for the definition of Condition |
v4AvailableIPs | Float64 | Number of available IPv4 IPs |
v4availableIPrange | String | The available range of IPv4 addresses on the subnet |
v4UsingIPs | Float64 | Number of used IPv4 IPs |
v4usingIPrange | String | Used IPv4 address ranges on the subnet |
v6AvailableIPs | Float64 | Number of available IPv6 IPs |
v6availableIPrange | String | The available range of IPv6 addresses on the subnet |
v6UsingIPs | Float64 | Number of used IPv6 IPs |
v6usingIPrange | String | Used IPv6 address ranges on the subnet |
activateGateway | String | The currently working gateway node in centralized subnet of master-backup mode |
dhcpV4OptionsUUID | String | The DHCP_Options record identifier associated with the lsp dhcpv4_options on the subnet |
dhcpV6OptionsUUID | String | The DHCP_Options record identifier associated with the lsp dhcpv6_options on the subnet |
u2oInterconnectionIP | String | The IP address used for interconnection when Overlay/Underlay interconnection mode is enabled |
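Putting the Subnet fields above together, a minimal manifest might look like the following sketch (the name, CIDR, and namespace are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1              # hypothetical name
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16
  gateway: 10.66.0.1         # defaults to the first available address in cidrBlock
  excludeIps:
    - 10.66.0.1..10.66.0.10  # addresses that will not be auto-assigned
  gatewayType: distributed
  natOutgoing: true
  namespaces:
    - ns1                    # pods in this namespace get addresses from this subnet
```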
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IP |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IPSpec | IP specific configuration information |
Property Name | Type | Description |
---|---|---|
podName | String | Pod name which assigned with this IP |
namespace | String | The name of the namespace where the pod is bound |
subnet | String | The subnet which the ip belongs to |
attachSubnets | []String | The name of the other subnets attached to this primary IP (field deprecated) |
nodeName | String | The name of the node where the pod is bound |
ipAddress | String | IP address, in v4IP,v6IP format for dual-stack cases |
v4IPAddress | String | IPv4 IP address |
v6IPAddress | String | IPv6 IP address |
attachIPs | []String | Other IP addresses attached to this primary IP (field is deprecated) |
macAddress | String | The Mac address of the bound pod |
attachMacs | []String | Other Mac addresses attached to this primary IP (field deprecated) |
containerID | String | The Container ID corresponding to the bound pod |
podType | String | Special workload pod, can be StatefulSet , VirtualMachine or empty |
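IP objects are normally created by the controller during pod address allocation rather than written by users; a recorded object might look like the sketch below (all values hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: IP
metadata:
  name: nginx-0.default      # hypothetical: <pod name>.<namespace>
spec:
  podName: nginx-0
  namespace: default
  subnet: ovn-default
  nodeName: node1
  ipAddress: 10.16.0.15
  v4IPAddress: 10.16.0.15
  macAddress: 00:00:00:ED:8E:C7
  podType: StatefulSet
```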
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vlan |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VlanSpec | Vlan specific configuration information |
status | VlanStatus | Vlan status information |
Property Name | Type | Description |
---|---|---|
id | Int | Vlan tag number, in the range of 0~4095 |
provider | String | The name of the ProviderNetwork to which the vlan is bound |
Property Name | Type | Description |
---|---|---|
subnets | []String | The list of subnets to which the vlan is bound |
conditions | []VlanCondition | Vlan status change information, refer to the beginning of the document for the definition of Condition |
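A minimal Vlan manifest based on the two spec fields above might look like this sketch (names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan100              # hypothetical name
spec:
  id: 100                    # vlan tag
  provider: net1             # name of an existing ProviderNetwork
```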
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value ProviderNetwork |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | ProviderNetworkSpec | ProviderNetwork specific configuration information |
status | ProviderNetworkStatus | ProviderNetwork status information |
Property Name | Type | Description |
---|---|---|
defaultInterface | String | The name of the NIC interface used by default for this bridge network |
customInterfaces | []CustomInterface | The special NIC configuration used by this bridge network |
excludeNodes | []String | The names of the nodes that will not be bound to this bridge network |
exchangeLinkName | Bool | Whether to exchange the bridge NIC and the corresponding OVS bridge name |
Property Name | Type | Description |
---|---|---|
interface | String | NIC interface name used for underlay |
nodes | []String | List of nodes using the custom NIC interface |
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the current bridge network is in the ready state |
readyNodes | []String | The names of the nodes whose bridge network is ready |
notReadyNodes | []String | The names of the nodes whose bridge network is not ready |
vlans | []String | The name of the vlan to which the bridge network is bound |
conditions | []ProviderNetworkCondition | ProviderNetwork status change information, refer to the beginning of the document for the definition of Condition |
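A ProviderNetwork combining the default interface with per-node overrides might be sketched as follows (interface and node names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: net1                 # hypothetical name
spec:
  defaultInterface: eth1     # NIC used on most nodes
  customInterfaces:
    - interface: eth2        # override for specific nodes
      nodes:
        - node1
  excludeNodes:
    - node2                  # node2 will not join this bridge network
```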
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vpc |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcSpec | Vpc specific configuration information |
status | VpcStatus | Vpc status information |
Property Name | Type | Description |
---|---|---|
namespaces | []String | List of namespaces bound by Vpc |
staticRoutes | []*StaticRoute | The static route information configured under Vpc |
policyRoutes | []*PolicyRoute | The policy route information configured under Vpc |
vpcPeerings | []*VpcPeering | Vpc interconnection information |
enableExternal | Bool | Whether vpc is connected to an external switch |
Property Name | Type | Description |
---|---|---|
policy | String | Routing policy, takes the value of policySrc or policyDst |
cidr | String | Routing cidr value |
nextHopIP | String | The next hop information of the route |
Property Name | Type | Description |
---|---|---|
priority | Int32 | Priority for policy route |
match | String | Match expression for policy route |
action | String | Action for policy route, the value is in the range of allow , drop , reroute |
nextHopIP | String | The next hop of the policy route, separated by commas in the case of ECMP routing |
Property Name | Type | Description |
---|---|---|
remoteVpc | String | Name of the interconnected peering vpc |
localConnectIP | String | The local ip for vpc used to connect to peer vpc |
Property Name | Type | Description |
---|---|---|
conditions | []VpcCondition | Vpc status change information, refer to the beginning of the documentation for the definition of Condition |
standby | Bool | Whether the vpc creation is complete; subnets under the vpc need to wait for the vpc creation to complete before proceeding |
default | Bool | Whether it is the default vpc |
defaultLogicalSwitch | String | The default subnet under vpc |
router | String | The logical-router name for the vpc |
tcpLoadBalancer | String | TCP LB information for vpc |
udpLoadBalancer | String | UDP LB information for vpc |
tcpSessionLoadBalancer | String | TCP Session Hold LB Information for Vpc |
udpSessionLoadBalancer | String | UDP session hold LB information for Vpc |
subnets | []String | List of subnets for vpc |
vpcPeerings | []String | List of peer vpcs for vpc interconnection |
enableExternal | Bool | Whether the vpc is connected to an external switch |
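A Vpc manifest combining static and policy routes from the tables above might be sketched as follows (names, CIDRs, and next-hop addresses are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc1                 # hypothetical name
spec:
  namespaces:
    - ns1
  staticRoutes:
    - policy: policyDst
      cidr: 0.0.0.0/0
      nextHopIP: 10.0.1.254
  policyRoutes:
    - priority: 100
      match: ip4.src==10.0.1.0/24
      action: reroute
      nextHopIP: 10.0.1.252
```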
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value VpcNatGateway |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcNatSpec | Vpc gateway specific configuration information |
Property Name | Type | Description |
---|---|---|
vpc | String | Vpc name which the vpc gateway belongs to |
subnet | String | The name of the subnet to which the gateway pod belongs |
lanIp | String | The IP address assigned to the gateway pod |
selector | []String | Standard Kubernetes selector match information |
tolerations | []VpcNatToleration | Standard Kubernetes tolerance information |
Property Name | Type | Description |
---|---|---|
key | String | The key information of the taint tolerance |
operator | String | Takes the value of Exists or Equal |
value | String | The value information of the taint tolerance |
effect | String | The effect of the taint tolerance, takes the value of NoExecute , NoSchedule , or PreferNoSchedule |
tolerationSeconds | Int64 | The amount of time the pod can continue to run on the node after the taint is added |
The meaning of the above toleration fields can be found in the official Kubernetes documentation on Taints and Tolerations.
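Combining the VpcNatSpec and toleration fields, a gateway manifest might be sketched as follows (names and the lanIp are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: VpcNatGateway
metadata:
  name: gw1                  # hypothetical name
spec:
  vpc: vpc1                  # vpc this gateway serves
  subnet: net1               # subnet providing the gateway pod address
  lanIp: 10.0.1.254          # fixed address assigned to the gateway pod
  selector:
    - "kubernetes.io/os: linux"
  tolerations:
    - key: node-role         # hypothetical taint key
      operator: Exists
      effect: NoSchedule
```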
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesEIP |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesEipSpec | IptablesEIP specific configuration information used by vpc gateway |
status | IptablesEipStatus | IptablesEIP status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
v4ip | String | IptablesEIP v4 address |
v6ip | String | IptablesEIP v6 address |
macAddress | String | The assigned mac address, not actually used |
natGwDp | String | Vpc gateway name |
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesEIP configuration is complete |
ip | String | The IP address used by IptablesEIP, currently only IPv4 addresses are supported |
redo | String | IptablesEIP crd creation or update time |
nat | String | The type of IptablesEIP, either fip , snat , or dnat |
conditions | []IptablesEIPCondition | IptablesEIP status change information, refer to the beginning of the documentation for the definition of Condition |
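A minimal IptablesEIP manifest based on the spec fields above might look like this sketch (names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesEIP
metadata:
  name: eip01                # hypothetical name
spec:
  natGwDp: gw1               # the vpc gateway that holds this EIP
```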
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesFIPRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesFIPRuleSpec | The IptablesFIPRule specific configuration information used by vpc gateway |
status | IptablesFIPRuleStatus | IptablesFIPRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used for IptablesFIPRule |
internalIp | String | The corresponding internal IP address |
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesFIPRule configuration is complete |
v4ip | String | The v4 IP address used by IptablesEIP |
v6ip | String | The v6 IP address used by IptablesEIP |
natGwDp | String | Vpc gateway name |
redo | String | IptablesFIPRule crd creation or update time |
conditions | []IptablesFIPRuleCondition | IptablesFIPRule status change information, refer to the beginning of the documentation for the definition of Condition |
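An IptablesFIPRule binding an EIP to an internal address might be sketched as follows (names and the address are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesFIPRule
metadata:
  name: fip01                # hypothetical name
spec:
  eip: eip01                 # an existing IptablesEIP
  internalIp: 10.0.1.5       # internal address the EIP maps to
```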
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesSnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesSnatRuleSpec | The IptablesSnatRule specific configuration information used by the vpc gateway |
status | IptablesSnatRuleStatus | IptablesSnatRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used by IptablesSnatRule |
internalIp | String | IptablesSnatRule's corresponding internal IP address |
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the configuration is complete |
v4ip | String | The v4 IP address used by IptablesSnatRule |
v6ip | String | The v6 IP address used by IptablesSnatRule |
natGwDp | String | Vpc gateway name |
redo | String | IptablesSnatRule crd creation or update time |
conditions | []IptablesSnatRuleCondition | IptablesSnatRule status change information, refer to the beginning of the documentation for the definition of Condition |
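Following the field names in the table above, an IptablesSnatRule might be sketched as below (names and the internal address are hypothetical; note the status records the effective CIDR in v4IpCidr):

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesSnatRule
metadata:
  name: snat01               # hypothetical name
spec:
  eip: eip01                 # an existing IptablesEIP
  internalIp: 10.0.1.0/24    # internal address/range to be SNATed
```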
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource have the value IptablesDnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | IptablesDnatRuleSpec | The IptablesDnatRule specific configuration information used by vpc gateway |
status | IptablesDnatRuleStatus | IptablesDnatRule status information used by vpc gateway |
Property Name | Type | Description |
---|---|---|
eip | String | Name of IptablesEIP used by IptablesDnatRule |
externalPort | String | External port used by IptablesDnatRule |
protocol | String | Vpc gateway dnat protocol type |
internalIp | String | Internal IP address used by IptablesDnatRule |
internalPort | String | Internal port used by IptablesDnatRule |
Property Name | Type | Description |
---|---|---|
ready | Bool | Whether the configuration is complete |
v4ip | String | The v4 IP address used by IptablesDnatRule |
v6ip | String | The v6 IP address used by IptablesDnatRule |
natGwDp | String | Vpc gateway name |
redo | String | IptablesDnatRule crd creation or update time |
conditions | []IptablesDnatRuleCondition | IptablesDnatRule Status change information, refer to the beginning of the documentation for the definition of Condition |
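An IptablesDnatRule forwarding an external port to an internal address might be sketched as follows (names, ports, and addresses are hypothetical; the ports are string fields):

```yaml
apiVersion: kubeovn.io/v1
kind: IptablesDnatRule
metadata:
  name: dnat01               # hypothetical name
spec:
  eip: eip01                 # an existing IptablesEIP
  externalPort: "8888"
  protocol: tcp
  internalIp: 10.0.1.10
  internalPort: "80"
```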
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value VpcDns |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VpcDnsSpec | VpcDns specific configuration information |
status | VpcDnsStatus | VpcDns status information |
Property Name | Type | Description |
---|---|---|
vpc | String | Name of the vpc where VpcDns is located |
subnet | String | The subnet name of the address assigned to the VpcDns pod |
Property Name | Type | Description |
---|---|---|
conditions | []VpcDnsCondition | VpcDns status change information, refer to the beginning of the document for the definition of Condition |
active | Bool | Whether VpcDns is in use |
For detailed documentation on the use of VpcDns, see Customizing VPC DNS.
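A minimal VpcDns manifest based on the two spec fields above might look like this sketch (names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: VpcDns
metadata:
  name: dns1                 # hypothetical name
spec:
  vpc: vpc1                  # vpc where the dns component runs
  subnet: subnet1            # subnet providing the dns pod address
```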
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have this value as kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value SwitchLBRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SwitchLBRuleSpec | SwitchLBRule specific configuration information |
status | SwitchLBRuleStatus | SwitchLBRule status information |
Property Name | Type | Description |
---|---|---|
vip | String | Vip address of SwitchLBRule |
namespace | String | SwitchLBRule's namespace |
selector | []String | Standard Kubernetes selector match information |
sessionAffinity | String | Standard Kubernetes service sessionAffinity value |
ports | []SlrPort | List of SwitchLBRule ports |
For detailed configuration information of SwitchLBRule, you can refer to Customizing VPC Internal Load Balancing.
Property Name | Type | Description |
---|---|---|
name | String | Port name |
port | Int32 | Port number |
targetPort | Int32 | Target port of SwitchLBRule |
protocol | String | Protocol type |
Property Name | Type | Description |
---|---|---|
conditions | []SwitchLBRuleCondition | SwitchLBRule status change information, refer to the beginning of the document for the definition of Condition |
ports | String | Port information |
service | String | Name of the service |
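A SwitchLBRule combining the spec and port fields above might be sketched as follows (the vip, selector, and ports are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: SwitchLBRule
metadata:
  name: slr01                # hypothetical name
spec:
  vip: 10.0.1.100            # vip address inside the vpc
  namespace: default
  sessionAffinity: ClientIP
  selector:
    - app:nginx              # label match for backend pods
  ports:
    - name: http
      port: 8888
      targetPort: 80
      protocol: TCP
```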
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have a value of SecurityGroup |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | SecurityGroupSpec | Security Group specific configuration information |
status | SecurityGroupStatus | Security group status information |
Property Name | Type | Description |
---|---|---|
ingressRules | []*SgRule | Inbound security group rules |
egressRules | []*SgRule | Outbound security group rules |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can interoperate and whether traffic rules need to be updated |
Property Name | Type | Description |
---|---|---|
ipVersion | String | IP version number, ipv4 or ipv6 |
protocol | String | The value of icmp , tcp , or udp |
priority | Int | Acl priority, in the range 1-200; a smaller value means a higher priority |
remoteType | String | The value is either address or securityGroup |
remoteAddress | String | The address of the other side |
remoteSecurityGroup | String | The name of security group on the other side |
portRangeMin | Int | The starting value of the port range, the minimum value is 1. |
portRangeMax | Int | The ending value of the port range, the maximum value is 65535. |
policy | String | The value is allow or drop |
Property Name | Type | Description |
---|---|---|
portGroup | String | The name of the port-group for the security group |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can interoperate, and whether the security group traffic rules need to be updated |
ingressMd5 | String | The MD5 value of the inbound security group rule |
egressMd5 | String | The MD5 value of the outbound security group rule |
ingressLastSyncSuccess | Bool | Whether the last synchronization of the inbound rule was successful |
egressLastSyncSuccess | Bool | Whether the last synchronization of the outbound rule was successful |
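A SecurityGroup with a single ingress rule, following the SgRule fields above, might be sketched like this (the name and addresses are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg1                  # hypothetical name
spec:
  allowSameGroupTraffic: true
  ingressRules:
    - ipVersion: ipv4
      protocol: tcp
      priority: 1            # smaller value, higher priority
      remoteType: address
      remoteAddress: 10.16.0.13
      portRangeMin: 80
      portRangeMax: 80
      policy: allow
```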
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value Vip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | VipSpec | Vip specific configuration information |
status | VipStatus | Vip status information |
Property Name | Type | Description |
---|---|---|
namespace | String | Vip's namespace |
subnet | String | Vip's subnet |
v4ip | String | Vip IPv4 ip address |
v6ip | String | Vip IPv6 ip address |
macAddress | String | Vip mac address |
parentV4ip | String | Not currently in use |
parentV6ip | String | Not currently in use |
parentMac | String | Not currently in use |
attachSubnets | []String | This field is deprecated and no longer used |
Property Name | Type | Description |
---|---|---|
conditions | []VipCondition | Vip status change information, refer to the beginning of the documentation for the definition of Condition |
ready | Bool | Vip is ready or not |
v4ip | String | Vip IPv4 ip address, should be the same as the spec field |
v6ip | String | Vip IPv6 ip address, should be the same as the spec field |
mac | String | The vip mac address, which should be the same as the spec field |
pv4ip | String | Not currently used |
pv6ip | String | Not currently used |
pmac | String | Not currently used |
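A minimal Vip manifest based on the spec fields above might look like this sketch (the name is hypothetical; the address is allocated from the subnet if not set):

```yaml
apiVersion: kubeovn.io/v1
kind: Vip
metadata:
  name: vip01                # hypothetical name
spec:
  namespace: default
  subnet: ovn-default        # the vip address is reserved from this subnet
```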
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnEip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnEipSpec | OvnEip specific configuration information for default vpc |
status | OvnEipStatus | OvnEip status information for default vpc |
Property Name | Type | Description |
---|---|---|
externalSubnet | String | OvnEip's subnet name |
v4ip | String | OvnEip IP address |
macAddress | String | OvnEip Mac address |
type | String | OvnEip use type, the value can be fip , snat or lrp |
Property Name | Type | Description |
---|---|---|
conditions | []OvnEipCondition | OvnEip status change information, refer to the beginning of the documentation for the definition of Condition |
v4ip | String | The IPv4 ip address used by ovnEip |
macAddress | String | Mac address used by ovnEip |
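An OvnEip manifest based on the spec fields above might be sketched as follows (the names are hypothetical; the address is allocated from the external subnet if not set):

```yaml
apiVersion: kubeovn.io/v1
kind: OvnEip
metadata:
  name: eip-static           # hypothetical name
spec:
  externalSubnet: external1  # hypothetical underlay subnet name
  type: fip                  # use type: fip, snat or lrp
```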
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources are kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnFip |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnFipSpec | OvnFip specific configuration information in default vpc |
status | OvnFipStatus | OvnFip status information in default vpc |
Property Name | Type | Description |
---|---|---|
ovnEip | String | Name of the bound ovnEip |
ipName | String | The IP crd name corresponding to the bound Pod |
Property Name | Type | Description |
---|---|---|
ready | Bool | OvnFip is ready or not |
v4Eip | String | Name of the ovnEip to which ovnFip is bound |
v4Ip | String | The ovnEip address currently in use |
macAddress | String | OvnFip's configured mac address |
vpc | String | The name of the vpc where ovnFip is located |
conditions | []OvnFipCondition | OvnFip status change information, refer to the beginning of the document for the definition of Condition |
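An OvnFip binding an OvnEip to a pod address might be sketched as follows (names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: OvnFip
metadata:
  name: fip01                # hypothetical name
spec:
  ovnEip: eip-static         # an existing OvnEip
  ipName: nginx-0.default    # IP crd name of the target pod
```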
Property Name | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes version information field, all custom resources have kubeovn.io/v1 |
kind | String | Standard Kubernetes resource type field, all instances of this resource will have the value OvnSnatRule |
metadata | ObjectMeta | Standard Kubernetes resource metadata information |
spec | OvnSnatRuleSpec | OvnSnatRule specific configuration information in default vpc |
status | OvnSnatRuleStatus | OvnSnatRule status information in default vpc |
Property Name | Type | Description |
---|---|---|
ovnEip | String | Name of the ovnEip to which ovnSnatRule is bound |
vpcSubnet | String | The name of the subnet configured by ovnSnatRule |
ipName | String | The IP crd name corresponding to the ovnSnatRule bound Pod |
Property Name | Type | Description |
---|---|---|
ready | Bool | OvnSnatRule is ready or not |
v4Eip | String | The ovnEip address to which ovnSnatRule is bound |
v4IpCidr | String | The cidr address used to configure snat in the logical-router |
vpc | String | The name of the vpc where ovnSnatRule is located |
conditions | []OvnSnatRuleCondition | OvnSnatRule status change information, refer to the beginning of the document for the definition of Condition |
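An OvnSnatRule applying an OvnEip to a whole subnet might be sketched as follows (names are hypothetical):

```yaml
apiVersion: kubeovn.io/v1
kind: OvnSnatRule
metadata:
  name: snat01               # hypothetical name
spec:
  ovnEip: eip-static         # an existing OvnEip
  vpcSubnet: subnet1         # subnet whose CIDR will be SNATed
```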
Based on Kube-OVN v1.12.0, we have compiled the arguments supported by kube-ovn-pinger, listing the type, meaning, and default value of each for reference.
Arg Name | Type | Description | Default Value |
---|---|---|---|
port | Int | metrics port | 8080 |
kubeconfig | String | Path to kubeconfig file with authorization and master location information. If not set use the inCluster token. | "" |
ds-namespace | String | kube-ovn-pinger daemonset namespace | "kube-system" |
ds-name | String | kube-ovn-pinger daemonset name | "kube-ovn-pinger" |
interval | Int | interval seconds between consecutive pings | 5 |
mode | String | server or job Mode | "server" |
exit-code | Int | exit code when failure happens | 0 |
internal-dns | String | check dns from pod | "kubernetes.default" |
external-dns | String | check external dns resolve from pod | "" |
external-address | String | check ping connection to an external address | "114.114.114.114" |
network-mode | String | The cni plugin current cluster used | "kube-ovn" |
enable-metrics | Bool | Whether to support metrics query | true |
ovs.timeout | Int | Timeout on JSON-RPC requests to OVS. | 2 |
system.run.dir | String | OVS default run directory. | "/var/run/openvswitch" |
database.vswitch.name | String | The name of OVS db. | "Open_vSwitch" |
database.vswitch.socket.remote | String | JSON-RPC unix socket to OVS db. | "unix:/var/run/openvswitch/db.sock" |
database.vswitch.file.data.path | String | OVS db file. | "/etc/openvswitch/conf.db" |
database.vswitch.file.log.path | String | OVS db log file. | "/var/log/openvswitch/ovsdb-server.log" |
database.vswitch.file.pid.path | String | OVS db process id file. | "/var/run/openvswitch/ovsdb-server.pid" |
database.vswitch.file.system.id.path | String | OVS system id file. | "/etc/openvswitch/system-id.conf" |
service.vswitchd.file.log.path | String | OVS vswitchd daemon log file. | "/var/log/openvswitch/ovs-vswitchd.log" |
service.vswitchd.file.pid.path | String | OVS vswitchd daemon process id file. | "/var/run/openvswitch/ovs-vswitchd.pid" |
service.ovncontroller.file.log.path | String | OVN controller daemon log file. | "/var/log/ovn/ovn-controller.log" |
service.ovncontroller.file.pid.path | String | OVN controller daemon process id file. | "/var/run/ovn/ovn-controller.pid" |
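As a concrete illustration of these parameters, here is a minimal sketch of a kube-ovn-pinger container spec that overrides a few defaults. The flag names come from the table above; the image, binary path and chosen values are illustrative assumptions:

```yaml
# Excerpt of a kube-ovn-pinger container spec; flag names are from the
# parameter table above, everything else is illustrative.
containers:
  - name: pinger
    image: kubeovn/kube-ovn:v1.12.0          # assumed image name/tag
    command: ["/kube-ovn/kube-ovn-pinger"]   # assumed binary path
    args:
      - --port=8080
      - --interval=10                  # ping every 10s instead of the default 5s
      - --external-address=8.8.8.8     # replace the default 114.114.114.114
      - --external-dns=kube-ovn.io     # enable the external DNS check
      - --mode=job                     # run once and exit instead of serving
      - --exit-code=1                  # return non-zero when a check fails
```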
This document lists all the monitoring metrics provided by Kube-OVN.
OVN status metrics:
Type | Metric | Description |
---|---|---|
Gauge | kube_ovn_ovn_status | OVN Health Status. The values are: (2) for standby or follower, (1) for active or leader, (0) for unhealthy. |
Gauge | kube_ovn_failed_req_count | The number of failed requests to OVN stack. |
Gauge | kube_ovn_log_file_size | The size of a log file associated with an OVN component. |
Gauge | kube_ovn_db_file_size | The size of a database file associated with an OVN component. |
Gauge | kube_ovn_chassis_info | Whether the OVN chassis is up (1) or down (0), together with additional information about the chassis. |
Gauge | kube_ovn_db_status | The status of OVN NB/SB DB, (1) for healthy, (0) for unhealthy. |
Gauge | kube_ovn_logical_switch_info | The information about OVN logical switch. This metric is always up (1). |
Gauge | kube_ovn_logical_switch_external_id | Provides the external IDs and values associated with OVN logical switches. This metric is always up (1). |
Gauge | kube_ovn_logical_switch_port_binding | Provides the association between a logical switch and a logical switch port. This metric is always up (1). |
Gauge | kube_ovn_logical_switch_tunnel_key | The value of the tunnel key associated with the logical switch. |
Gauge | kube_ovn_logical_switch_ports_num | The number of logical switch ports connected to the OVN logical switch. |
Gauge | kube_ovn_logical_switch_port_info | The information about OVN logical switch port. This metric is always up (1). |
Gauge | kube_ovn_logical_switch_port_tunnel_key | The value of the tunnel key associated with the logical switch port. |
Gauge | kube_ovn_cluster_enabled | Is OVN clustering enabled (1) or not (0). |
Gauge | kube_ovn_cluster_role | A metric with a constant '1' value labeled by server role. |
Gauge | kube_ovn_cluster_status | A metric with a constant '1' value labeled by server status. |
Gauge | kube_ovn_cluster_term | The current raft term known by this server. |
Gauge | kube_ovn_cluster_leader_self | Whether this server considers itself a leader (1) or not (0). |
Gauge | kube_ovn_cluster_vote_self | Whether this server has voted for itself as leader (1) or not (0). |
Gauge | kube_ovn_cluster_election_timer | The current election timer value. |
Gauge | kube_ovn_cluster_log_not_committed | The number of log entries not yet committed by this server. |
Gauge | kube_ovn_cluster_log_not_applied | The number of log entries not yet applied by this server. |
Gauge | kube_ovn_cluster_log_index_start | The log entry index start value associated with this server. |
Gauge | kube_ovn_cluster_log_index_next | The log entry index next value associated with this server. |
Gauge | kube_ovn_cluster_inbound_connections_total | The total number of inbound connections to the server. |
Gauge | kube_ovn_cluster_outbound_connections_total | The total number of outbound connections from the server. |
Gauge | kube_ovn_cluster_inbound_connections_error_total | The total number of failed inbound connections to the server. |
Gauge | kube_ovn_cluster_outbound_connections_error_total | The total number of failed outbound connections from the server. |
ovsdb
and vswitchd
status metrics:
Type | Metric | Description |
---|---|---|
Gauge | ovs_status | OVS Health Status. The values are: healthy(1), unhealthy(0). |
Gauge | ovs_info | This metric provides basic information about OVS. It is always set to 1. |
Gauge | failed_req_count | The number of failed requests to OVS stack. |
Gauge | log_file_size | The size of a log file associated with an OVS component. |
Gauge | db_file_size | The size of a database file associated with an OVS component. |
Gauge | datapath | Represents an existing datapath. This metric is always 1. |
Gauge | dp_total | Represents total number of datapaths on the system. |
Gauge | dp_if | Represents an existing datapath interface. This metric is always 1. |
Gauge | dp_if_total | Represents the number of ports connected to the datapath. |
Gauge | dp_flows_total | The number of flows in a datapath. |
Gauge | dp_flows_lookup_hit | The number of incoming packets in a datapath matching existing flows in the datapath. |
Gauge | dp_flows_lookup_missed | The number of incoming packets in a datapath not matching any existing flow in the datapath. |
Gauge | dp_flows_lookup_lost | The number of incoming packets in a datapath destined for userspace process but subsequently dropped before reaching userspace. |
Gauge | dp_masks_hit | The total number of masks visited for matching incoming packets. |
Gauge | dp_masks_total | The number of masks in a datapath. |
Gauge | dp_masks_hit_ratio | The average number of masks visited per packet. It is the ratio between mask hits and the total number of packets processed by a datapath. |
Gauge | interface | Represents an OVS interface. This is the primary metric for all other interface metrics. This metric is always 1. |
Gauge | interface_admin_state | The administrative state of the physical network link of OVS interface. The values are: down(0), up(1), other(2). |
Gauge | interface_link_state | The state of the physical network link of OVS interface. The values are: down(0), up(1), other(2). |
Gauge | interface_mac_in_use | The MAC address in use by OVS interface. |
Gauge | interface_mtu | The currently configured MTU for OVS interface. |
Gauge | interface_of_port | Represents the OpenFlow port ID associated with OVS interface. |
Gauge | interface_if_index | Represents the interface index associated with OVS interface. |
Gauge | interface_tx_packets | Represents the number of transmitted packets by OVS interface. |
Gauge | interface_tx_bytes | Represents the number of transmitted bytes by OVS interface. |
Gauge | interface_rx_packets | Represents the number of received packets by OVS interface. |
Gauge | interface_rx_bytes | Represents the number of received bytes by OVS interface. |
Gauge | interface_rx_crc_err | Represents the number of CRC errors for the packets received by OVS interface. |
Gauge | interface_rx_dropped | Represents the number of input packets dropped by OVS interface. |
Gauge | interface_rx_errors | Represents the total number of packets with errors received by OVS interface. |
Gauge | interface_rx_frame_err | Represents the number of frame alignment errors on the packets received by OVS interface. |
Gauge | interface_rx_missed_err | Represents the number of packets with RX missed received by OVS interface. |
Gauge | interface_rx_over_err | Represents the number of packets with RX overrun received by OVS interface. |
Gauge | interface_tx_dropped | Represents the number of output packets dropped by OVS interface. |
Gauge | interface_tx_errors | Represents the total number of transmit errors by OVS interface. |
Gauge | interface_collisions | Represents the number of collisions on OVS interface. |
Network quality related metrics:
Type | Metric | Description |
---|---|---|
Gauge | pinger_ovs_up | If the ovs on the node is up |
Gauge | pinger_ovs_down | If the ovs on the node is down |
Gauge | pinger_ovn_controller_up | If the ovn_controller on the node is up |
Gauge | pinger_ovn_controller_down | If the ovn_controller on the node is down |
Gauge | pinger_inconsistent_port_binding | The number of mismatch port bindings between ovs and ovn-sb |
Gauge | pinger_apiserver_healthy | If the apiserver request is healthy on this node |
Gauge | pinger_apiserver_unhealthy | If the apiserver request is unhealthy on this node |
Histogram | pinger_apiserver_latency_ms | The latency histogram (ms) for requests from this node to the apiserver |
Gauge | pinger_internal_dns_healthy | If the internal dns request is healthy on this node |
Gauge | pinger_internal_dns_unhealthy | If the internal dns request is unhealthy on this node |
Histogram | pinger_internal_dns_latency_ms | The latency histogram (ms) for internal dns requests from this node |
Gauge | pinger_external_dns_health | If the external dns request is healthy on this node |
Gauge | pinger_external_dns_unhealthy | If the external dns request is unhealthy on this node |
Histogram | pinger_external_dns_latency_ms | The latency histogram (ms) for external dns requests from this node |
Histogram | pinger_pod_ping_latency_ms | The latency ms histogram for pod peer ping |
Gauge | pinger_pod_ping_lost_total | The lost count for pod peer ping |
Gauge | pinger_pod_ping_count_total | The total count for pod peer ping |
Histogram | pinger_node_ping_latency_ms | The latency ms histogram for pod ping node |
Gauge | pinger_node_ping_lost_total | The lost count for pod ping node |
Gauge | pinger_node_ping_count_total | The total count for pod ping node |
Histogram | pinger_external_ping_latency_ms | The latency ms histogram for pod ping external address |
Gauge | pinger_external_lost_total | The lost count for pod ping external address |
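These network-quality metrics map naturally onto Prometheus alerting. Below is a hedged sketch of alerting rules built only on metrics from the table above; the thresholds, durations and rule names are arbitrary illustrations, not recommendations:

```yaml
groups:
  - name: kube-ovn-network-quality   # illustrative group name
    rules:
      - alert: NodeOvsDown
        expr: pinger_ovs_down == 1   # OVS reported down on a node
        for: 5m
      - alert: PodPingPacketLoss
        # ratio of lost to total pod-peer pings over the last 10 minutes
        expr: >
          increase(pinger_pod_ping_lost_total[10m])
          / increase(pinger_pod_ping_count_total[10m]) > 0.05
      - alert: ApiserverLatencyHigh
        # 95th percentile taken from the latency histogram's _bucket series
        expr: >
          histogram_quantile(0.95,
            rate(pinger_apiserver_latency_ms_bucket[5m])) > 100
```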
kube-ovn-controller
status metrics:
Type | Metric | Description |
---|---|---|
Histogram | rest_client_request_latency_seconds | Request latency in seconds. Broken down by verb and URL |
Counter | rest_client_requests_total | Number of HTTP requests, partitioned by status code, method, and host |
Counter | lists_total | Total number of API lists done by the reflectors |
Summary | list_duration_seconds | How long an API list takes to return and decode for the reflectors |
Summary | items_per_list | How many items an API list returns to the reflectors |
Counter | watches_total | Total number of API watches done by the reflectors |
Counter | short_watches_total | Total number of short API watches done by the reflectors |
Summary | watch_duration_seconds | How long an API watch takes to return and decode for the reflectors |
Summary | items_per_watch | How many items an API watch returns to the reflectors |
Gauge | last_resource_version | Last resource version seen for the reflectors |
Histogram | ovs_client_request_latency_milliseconds | The latency histogram for ovs request |
Gauge | subnet_available_ip_count | The available num of ip address in subnet |
Gauge | subnet_used_ip_count | The used num of ip address in subnet |
kube-ovn-cni
status metrics:
Type | Metric | Description |
---|---|---|
Histogram | cni_op_latency_seconds | The latency seconds for cni operations |
Counter | cni_wait_address_seconds_total | Total seconds the CNI waited for the controller to assign an address |
Counter | cni_wait_connectivity_seconds_total | Total seconds the CNI waited for the address to become reachable in the overlay network |
Counter | cni_wait_route_seconds_total | Total seconds the CNI waited for the controller to add the routed annotation to the pod |
Histogram | rest_client_request_latency_seconds | Request latency in seconds. Broken down by verb and URL |
Counter | rest_client_requests_total | Number of HTTP requests, partitioned by status code, method, and host |
Counter | lists_total | Total number of API lists done by the reflectors |
Summary | list_duration_seconds | How long an API list takes to return and decode for the reflectors |
Summary | items_per_list | How many items an API list returns to the reflectors |
Counter | watches_total | Total number of API watches done by the reflectors |
Counter | short_watches_total | Total number of short API watches done by the reflectors |
Summary | watch_duration_seconds | How long an API watch takes to return and decode for the reflectors |
Summary | items_per_watch | How many items an API watch returns to the reflectors |
Gauge | last_resource_version | Last resource version seen for the reflectors |
Histogram | ovs_client_request_latency_milliseconds | The latency histogram for ovs request |
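To collect the metrics above, Prometheus needs to scrape each component. A minimal static-target sketch follows; the pinger's port 8080 comes from its parameter table, while the controller and CNI ports and the target IPs are assumptions to verify against your deployment:

```yaml
scrape_configs:
  - job_name: kube-ovn-pinger
    static_configs:
      - targets: ["10.16.0.10:8080"]    # pod IP : pinger's default metrics port
  - job_name: kube-ovn-controller
    static_configs:
      - targets: ["192.168.0.2:10660"]  # assumed controller metrics port
  - job_name: kube-ovn-cni
    static_configs:
      - targets: ["192.168.0.2:10665"]  # assumed CNI metrics port
```

In a real cluster these targets would normally be discovered via kubernetes_sd_configs rather than listed statically.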
Upstream OVN/OVS was originally designed as a general-purpose SDN controller and data plane. Because Kubernetes networking uses only part of these capabilities, Kube-OVN focuses on a subset of the features. To achieve better performance, stability, and specific features, Kube-OVN has made some modifications to upstream OVN/OVS. Users who run their own OVN/OVS with the Kube-OVN controller need to be aware of the possible impact of the following changes:

Modifications not merged upstream:

- Change the hash algorithm from dp_hash to hash to avoid a hash error problem in some kernels.

Modifications merged upstream:
Kube-OVN uses OVN/OVS as its data plane implementation and currently supports the Geneve, Vxlan and STT tunnel encapsulation protocols. These three protocols differ in functionality, performance and ease of use. This document describes the differences among them so that users can choose according to their situation.
The Geneve protocol is the default tunneling protocol selected during Kube-OVN deployment and is also the default tunneling protocol recommended by OVN. It is widely supported in the kernel and can be accelerated by the generic offload capabilities of modern NICs. Since Geneve has a variable-length header, a 24-bit space can be used to mark different datapaths, so users can create a larger number of virtual networks.
If you are using Mellanox or Corigine SmartNIC OVS offload, Geneve requires a newer kernel: an upstream kernel of 5.4 or higher, or another compatible kernel that backports this feature.
Due to the use of UDP encapsulation, this protocol does not make good use of the TCP-related offloads of modern NICs when handling TCP over UDP, and consumes more CPU resources when handling large packets.
Vxlan is a recently supported protocol in upstream OVN. It is widely supported in the kernel and can be accelerated by the common offload capabilities of modern NICs. Because of the limited header length and the additional space required for OVN orchestration, the number of datapaths that can be created is limited, with a maximum of 4096 datapaths and a maximum of 4096 ports under each datapath. Also, inport-based ACLs are not supported due to the header length limitation.
Vxlan offloading is supported by common kernels when using Mellanox or Corigine SmartNICs.
Due to the use of UDP encapsulation, this protocol does not make good use of the TCP-related offloads of modern NICs when handling TCP over UDP, and consumes more CPU resources when handling large packets.
The STT protocol is an early tunneling protocol supported by OVN. It uses TCP-like headers to take advantage of the TCP offload capabilities common in modern NICs, significantly increasing TCP throughput. The protocol also has a header long enough to support the full OVN capabilities and large-scale datapaths.
This protocol is not supported in the kernel. To use it, you need to compile an additional OVS kernel module, and recompile that module whenever the kernel is upgraded.
This protocol is not currently supported by SmartNICs, so OVS hardware offloading cannot be used.
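The tunnel protocol is normally chosen at installation time. Below is a hedged sketch of how the choice might appear in the ovs-ovn DaemonSet; the TUNNEL_TYPE variable name and its placement are assumptions based on the install script, so check your install.sh for the exact knob:

```yaml
# Illustrative excerpt only: selecting the tunnel encapsulation at install time.
env:
  - name: TUNNEL_TYPE   # assumed variable name from install.sh
    value: vxlan        # one of: geneve (default), vxlan, stt
```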
Kube-OVN uses OVN/OVS as the data plane implementation and currently supports Geneve
, Vxlan
and STT
tunnel encapsulation protocols. These three protocols differ in terms of functionality, performance and ease of use. This document will describe the differences in the use of the three protocols so that users can choose according to their situation.
The Geneve
protocol is the default tunneling protocol selected during Kube-OVN deployment and is also the default recommended tunneling protocol for OVN. This protocol is widely supported in the kernel and can be accelerated using the generic offload capability of modern NICs. Since Geneve
has a variable header, it is possible to use 24bit space to mark different datapaths users can create a larger number of virtual networks.
If you are using Mellanox or Corigine SmartNIC OVS offload, Geneve
requires a higher kernel version. Upstream kernel of 5.4 or higher, or other compatible kernels that backports this feature.
Due to the use of UDP encapsulation, this protocol does not make good use of the TCP-related offloads of modern NICs when handling TCP over UDP, and consumes more CPU resources when handling large packets.
Vxlan
is a recently supported protocol in the upstream OVN, which is widely supported in the kernel and can be accelerated using the common offload capabilities of modern NICs. Due to the limited length of the protocol header and the additional space required for OVN orchestration, there is a limit to the number of datapaths that can be created, with a maximum of 4096 datapaths and a maximum of 4096 ports under each datapath. Also, inport
-based ACLs are not supported due to header length limitations.
Vxlan
offloading is supported in common kernels if using Mellanox or Corigine SmartNIC.
Due to the use of UDP encapsulation, this protocol does not make good use of the TCP-related offloads of modern NICs when handling TCP over UDP, and consumes more CPU resources when handling large packets.
The STT
protocol is an early tunneling protocol supported by the OVN that uses TCP-like headers to take advantage of the TCP offload capabilities common to modern NICs and significantly increase TCP throughput. The protocol also has a long header to support full OVN capabilities and large-scale datapaths.
This protocol is not supported in the kernel. To use it, you need to compile an additional OVS kernel module, and recompile that module whenever the kernel is upgraded.
This protocol is currently not supported by SmartNICs, so it cannot use OVS offloading capabilities.
This document describes the forwarding path of traffic in Underlay mode under different scenarios.
Internal logical switches exchange packets directly, without access to the external network.
Packets enter the physical switch via the node NIC and are exchanged by the physical switch.
Packets enter the physical network via the node NIC and are exchanged, routed, and forwarded by physical switches and routers.
Here br-provider-1 and br-provider-2 can be the same OVS bridge; multiple subnets can share one Provider Network.
Packets enter the physical network via the node NIC and are exchanged, routed, and forwarded by physical switches and routers.
Packets enter the physical network via the node NIC and are exchanged, routed, and forwarded by physical switches and routers.
The communication between nodes and Pods follows the same logic.
Kube-OVN configures load balancing for each Kubernetes Service on the logical switch of each subnet. When a Pod accesses another Pod via a Service IP, a network packet is constructed with the Service IP as the destination address and the MAC address of the gateway as the destination MAC address. After the packet enters the logical switch, the load balancer intercepts it and performs DNAT, rewriting the destination IP and port to those of one of the Endpoints of the Service. Since the logical switch does not modify the Layer 2 destination MAC address, the packet is still delivered to the physical gateway after entering the physical switch, and the physical gateway is required to forward it.
Kube-OVN is a CNI-compliant network system that depends on the Kubernetes environment and the corresponding kernel network module for its operation. Below are the operating system and software versions tested, the environment configuration and the ports that need to be opened.
Check that the geneve, openvswitch, ip_tables, and iptable_nat kernel modules exist. Attention: some kernels ship netfilter modules with bugs that cause Kube-OVN's embedded NAT and LB to fail; please update the kernel and check "Floating IPs broken after kernel upgrade to Centos/RHEL 7.5 - DNAT not working". Some kernel versions have conntrack issues in the openvswitch module; please update the kernel or manually compile the openvswitch kernel module.
Check the kernel boot parameters with cat /proc/cmdline. See "Geneve tunnels don't work when ipv6 is disabled" for the detailed bug info. If the parameters contain ipv6.disable=1, it should be set to 0.
kube-proxy works, and Kube-OVN can reach kube-apiserver via its Service ClusterIP.
kubelet is configured to use CNI and looks for cni-bin and cni-conf in the default directories; the kubelet bootstrap options should contain --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d.
Make sure no configuration files of other network plugins are left in /etc/cni/net.d/.
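The module and boot-parameter checks above can be scripted. A pre-flight sketch (illustrative only, not part of Kube-OVN; the module names and the ipv6.disable check come from the requirements listed above):

```shell
# Fail the cmdline check if ipv6 is disabled at boot, which breaks Geneve tunnels.
cmdline_ok() { # $1: kernel command line string
  case "$1" in
    *ipv6.disable=1*) return 1 ;;
    *) return 0 ;;
  esac
}

# Required kernel modules.
for m in geneve openvswitch ip_tables iptable_nat; do
  grep -qw "^$m" /proc/modules 2>/dev/null \
    && echo "module $m: loaded" \
    || echo "module $m: not loaded (try: modprobe $m)"
done

if cmdline_ok "$(cat /proc/cmdline 2>/dev/null)"; then
  echo "kernel cmdline: ok"
else
  echo "kernel cmdline: ipv6.disable=1 is set, Geneve tunnels will fail"
fi
```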
.Component | Port | Usage |
---|---|---|
ovn-central | 6641/tcp, 6642/tcp, 6643/tcp, 6644/tcp | ovn-db and raft server listen ports |
ovs-ovn | Geneve 6081/udp, STT 7471/tcp, Vxlan 4789/udp | tunnel ports |
kube-ovn-controller | 10660/tcp | metrics port |
kube-ovn-daemon | 10665/tcp | metrics port |
kube-ovn-monitor | 10661/tcp | metrics port |
Kube-OVN is an enterprise-grade, CNCF-hosted cloud-native network orchestration system that combines SDN capabilities with cloud native, providing rich functionality, extreme performance, and good operability.
Rich Functionality:
If you miss the rich networking capabilities of the SDN era but struggle to find them in the cloud-native world, Kube-OVN should be your best choice.
Leveraging OVS/OVN's mature SDN capabilities, Kube-OVN brings the rich features of network virtualization into the cloud-native domain. It currently supports subnet management, static IP allocation, distributed/centralized gateways, Underlay/Overlay hybrid networks, VPC multi-tenant networks, cross-cluster interconnection, QoS management, multi-NIC management, ACL-based network control, traffic mirroring, ARM support, Windows support, and many more features.
Extreme Performance:
If you worry that a container network introduces extra performance overhead, take a look at how Kube-OVN optimizes performance to the extreme.
On the data plane, through careful optimization of flow tables and the kernel, and with emerging technologies such as eBPF, DPDK, and SmartNIC offloading, Kube-OVN can achieve latency and throughput close to or exceeding host network performance. On the control plane, by trimming upstream OVN flow tables and by using and tuning various caching techniques, Kube-OVN can support large clusters with thousands of nodes and tens of thousands of Pods.
Kube-OVN is also continuously optimizing the usage of resources such as CPU and memory to suit resource-constrained scenarios such as the edge.
Good Operability:
If you are concerned about operating a container network, Kube-OVN ships a large set of built-in tools to help simplify operations.
Kube-OVN provides a one-click installation script to help users quickly build a production-ready container network. The rich built-in monitoring metrics and Grafana dashboards help users establish a complete monitoring system. Powerful command-line tools simplify daily operations. By integrating with Cilium, users can enhance network observability with eBPF. In addition, traffic mirroring makes it easy for users to customize traffic monitoring and connect to traditional NPM systems.
This document describes the overall architecture of Kube-OVN, the functions of each component, and the interactions between them.
Overall, Kube-OVN serves as a bridge between Kubernetes and OVN, combining proven SDN with cloud native. This means that Kube-OVN not only implements the Kubernetes network specifications on top of OVN, such as CNI, Service, and NetworkPolicy, but also brings a large number of SDN capabilities into cloud native, such as logical switches, logical routers, VPCs, gateways, QoS, ACLs, and traffic mirroring.
Kube-OVN also maintains good openness and can integrate with many other solutions, such as Cilium, Submariner, Prometheus, and KubeVirt.
The components of Kube-OVN can be broadly divided into three categories:
These components come from the OVN/OVS community, with specific modifications for Kube-OVN's use cases. OVN/OVS itself is a mature SDN system for managing VMs and containers. We strongly recommend that users interested in the Kube-OVN implementation first read ovn-architecture(7) to understand what OVN is and how to integrate with it. Kube-OVN uses the OVN northbound interface to create and adjust virtual networks, and maps the network concepts therein into Kubernetes.
All OVN/OVS-related components have been packaged into corresponding images and can run in Kubernetes.
The ovn-central Deployment runs OVN's management-plane components: ovn-nb, ovn-sb, and ovn-northd.
ovn-nb: stores the virtual network configuration and provides an API for virtual network management. kube-ovn-controller mainly interacts with ovn-nb to configure the virtual network.
ovn-sb: stores the logical flow tables generated from the logical network in ovn-nb, as well as the actual physical network state of each node.
ovn-northd: translates the virtual network in ovn-nb into logical flow tables in ovn-sb.
Multiple ovn-central instances synchronize data via the Raft protocol to ensure high availability.
ovs-ovn runs as a DaemonSet on every node, with openvswitch, ovsdb, and ovn-controller running inside the Pod. These components act as agents of ovn-central and translate the logical flow tables into the actual network configuration.
These are the core components of Kube-OVN. They serve as a bridge between OVN and Kubernetes, connecting the two systems and translating network concepts between them. Most of the core functionality is implemented in these components.
This component is a Deployment that performs all translation from Kubernetes resources to OVN resources, acting as the control plane of the entire Kube-OVN system. kube-ovn-controller watches events on all resources related to network functionality and updates the logical network in OVN according to resource changes. The main resources watched include: Pod, Service, Endpoint, Node, NetworkPolicy, VPC, Subnet, Vlan, and ProviderNetwork.
Taking a Pod event as an example: after kube-ovn-controller observes a Pod creation event, it allocates an address through the built-in in-memory IPAM and calls ovn-central to create the logical port, static routes, and possibly ACL rules. Next, kube-ovn-controller writes the allocated address and subnet information, such as the CIDR, gateway, and routes, back into the Pod's annotations. These annotations are later read by kube-ovn-cni to configure the local network.
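For example, a Pod in the default subnet ends up with annotations like the following (the values are illustrative, the keys come from Kube-OVN):

```yaml
# Annotations written back to the Pod by kube-ovn-controller
metadata:
  annotations:
    ovn.kubernetes.io/allocated: "true"
    ovn.kubernetes.io/ip_address: 10.16.0.15
    ovn.kubernetes.io/mac_address: 00:00:00:53:6B:B6
    ovn.kubernetes.io/cidr: 10.16.0.0/16
    ovn.kubernetes.io/gateway: 10.16.0.1
    ovn.kubernetes.io/logical_switch: ovn-default
```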
This component is a DaemonSet running on every node. It implements the CNI interface and operates the local OVS to configure the node-local network.
The DaemonSet copies the kube-ovn binary to each machine as the interaction tool between kubelet and kube-ovn-cni, sending the corresponding CNI requests to kube-ovn-cni for execution. By default, the binary is copied to the /opt/cni/bin directory.
kube-ovn-cni configures the specific network to carry out the corresponding traffic operations. Its main tasks include:
Starting ovn-controller and vswitchd.
Connecting the container network and the host network through the ovn0 NIC.
These components mainly provide monitoring, diagnostics, operations tooling, and integration with external systems, extending Kube-OVN's core network capabilities and simplifying daily operations.
This component is a DaemonSet running on nodes with a specific label. It publishes routes to the container network so that external systems can access Pods directly via Pod IP.
For more usage, refer to BGP support.
This component is a DaemonSet running on every node that collects OVS runtime information, node network quality, network latency, and so on. For the collected metrics, refer to Kube-OVN monitoring metrics.
This component is a Deployment that collects OVN runtime information. For the collected metrics, refer to Kube-OVN monitoring metrics.
This component is a kubectl plugin for quickly running common operations. For more usage, refer to kubectl plugin usage.
In Kube-OVN, feature maturity is classified into three stages, Alpha, Beta, and GA, based on how widely the feature is used, how complete its documentation is, and its test coverage.
For Alpha features:
For Beta features:
For GA features:
This list covers the maturity of the features included since v1.8.
Feature | Enabled by Default | Status | Since | Until |
---|---|---|---|---|
Namespaced Subnet | true | GA | 1.8 | |
Distributed Gateway | true | GA | 1.8 | |
Active-Standby Centralized Gateway | true | GA | 1.8 | |
ECMP Centralized Gateway | false | Beta | 1.8 | |
Subnet ACL | true | Alpha | 1.9 | |
Subnet Isolation (will be merged with Subnet ACL) | true | Beta | 1.8 | |
Underlay Subnet | true | GA | 1.8 | |
Multiple NIC Management | true | Beta | 1.8 | |
Subnet DHCP | false | Alpha | 1.10 | |
Subnet External Gateway | false | Alpha | 1.8 | |
Cluster Interconnection with OVN-IC | false | Beta | 1.8 | |
Cluster Interconnection with Submariner | false | Alpha | 1.9 | |
Subnet VIP Reservation | true | Alpha | 1.10 | |
Custom VPC Creation | true | Beta | 1.8 | |
Custom VPC Floating IP/SNAT/DNAT | true | Alpha | 1.10 | |
Custom VPC Static Routes | true | Alpha | 1.10 | |
Custom VPC Policy Routes | true | Alpha | 1.10 | |
Custom VPC Security Groups | true | Alpha | 1.10 | |
Container Max Bandwidth QoS | true | GA | 1.8 | |
linux-netem QoS | true | Alpha | 1.9 | |
Prometheus Integration | false | GA | 1.8 | |
Grafana Integration | false | GA | 1.8 | |
Dual-Stack Network | false | GA | 1.8 | |
Default VPC EIP/SNAT | false | Beta | 1.8 | |
Traffic Mirroring | false | GA | 1.8 | |
NetworkPolicy | true | Beta | 1.8 | |
Webhook | false | Alpha | 1.10 | |
Performance Tuning | false | Beta | 1.8 | |
Overlay Subnet Static Route Exposure | false | Alpha | 1.8 | |
Overlay Subnet BGP Exposure | false | Alpha | 1.9 | |
Cilium Integration | false | Alpha | 1.10 | |
Custom VPC Interconnection | false | Alpha | 1.10 | |
Mellanox Offload | false | Alpha | 1.8 | |
Corigine Offload | false | Alpha | 1.10 | |
Windows Support | false | Alpha | 1.10 | |
DPDK Support | false | Alpha | 1.10 | |
OpenStack Integration | false | Alpha | 1.9 | |
Fixed IP/Mac for a Single Pod | true | GA | 1.8 | |
Fixed IP for Workloads | true | GA | 1.8 | |
Fixed IP for StatefulSet | true | GA | 1.8 | |
Fixed IP for VM | false | Beta | 1.9 | |
Load Balancer Type Service in Default VPC | false | Alpha | 1.11 | |
Custom VPC Internal Load Balancing | false | Alpha | 1.11 | |
Custom VPC DNS | false | Alpha | 1.11 | |
Underlay and Overlay Interconnection | false | Alpha | 1.11 |
Kube-OVN uses ipset and iptables to help implement gateway NAT for the container network (Overlay) under the default VPC.
The ipsets used are listed below:
Name (IPv4/IPv6) | Type | Contents |
---|---|---|
ovn40services/ovn60services | hash:net | Service CIDRs |
ovn40subnets/ovn60subnets | hash:net | Overlay subnet CIDRs and the NodeLocal DNS IP addresses |
ovn40subnets-nat/ovn60subnets-nat | hash:net | Overlay subnet CIDRs with NatOutgoing enabled |
ovn40subnets-distributed-gw/ovn60subnets-distributed-gw | hash:net | Overlay subnet CIDRs with distributed gateway enabled |
ovn40other-node/ovn60other-node | hash:net | Internal IP addresses of the other nodes |
ovn40local-pod-ip-nat/ovn60local-pod-ip-nat | hash:ip | Deprecated |
ovn40subnets-nat-policy | hash:net | CIDRs of all subnets configured with natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-dst | hash:net | dstIPs of the corresponding rule in natOutgoingPolicyRules |
ovn40natpr-418e79269dc5-src | hash:net | srcIPs of the corresponding rule in natOutgoingPolicyRules |
The iptables rules (IPv4) used are listed below:
Table | Chain | Rule | Purpose | Notes |
---|---|---|---|---|
filter | INPUT | -m set --match-set ovn40services src -j ACCEPT | Allow traffic related to k8s Services and Pods | -- |
filter | INPUT | -m set --match-set ovn40services dst -j ACCEPT | Same as above | -- |
filter | INPUT | -m set --match-set ovn40subnets src -j ACCEPT | Same as above | -- |
filter | INPUT | -m set --match-set ovn40subnets dst -j ACCEPT | Same as above | -- |
filter | FORWARD | -m set --match-set ovn40services src -j ACCEPT | Same as above | -- |
filter | FORWARD | -m set --match-set ovn40services dst -j ACCEPT | Same as above | -- |
filter | FORWARD | -m set --match-set ovn40subnets src -j ACCEPT | Same as above | -- |
filter | FORWARD | -m set --match-set ovn40subnets dst -j ACCEPT | Same as above | -- |
filter | FORWARD | -s 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Counts packets going from the subnet to external networks | 10.16.0.0/16 is the subnet CIDR; in the comment, ovn-subnet-gateway before the comma marks this iptables rule as a subnet gateway packet counter, and ovn-default after the comma is the subnet name |
filter | FORWARD | -d 10.16.0.0/16 -m comment --comment "ovn-subnet-gateway,ovn-default" | Counts packets coming from external networks into the subnet | Same as above |
filter | OUTPUT | -p udp -m udp --dport 6081 -j MARK --set-xmark 0x0 | Clear the traffic mark to avoid performing SNAT | UDP: bad checksum on VXLAN interface |
nat | PREROUTING | -m comment --comment "kube-ovn prerouting rules" -j OVN-PREROUTING | Jump to the OVN-PREROUTING chain | -- |
nat | POSTROUTING | -m comment --comment "kube-ovn postrouting rules" -j OVN-POSTROUTING | Jump to the OVN-POSTROUTING chain | -- |
nat | OVN-PREROUTING | -i ovn0 -m set --match-set ovn40subnets src -m set --match-set ovn40services dst -j MARK --set-xmark 0x4000/0x4000 | Add a masquerade mark for Pod-to-Service traffic | Applies when the built-in LB is disabled |
nat | OVN-PREROUTING | -p tcp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-TCP dst -j MARK --set-xmark 0x80000/0x80000 | Add a specific mark to Service traffic (TCP) whose ExternalTrafficPolicy is Local | Only present when kube-proxy uses ipvs mode |
nat | OVN-PREROUTING | -p udp -m addrtype --dst-type LOCAL -m set --match-set KUBE-NODE-PORT-LOCAL-UDP dst -j MARK --set-xmark 0x80000/0x80000 | Add a specific mark to Service traffic (UDP) whose ExternalTrafficPolicy is Local | Same as above |
nat | OVN-POSTROUTING | -m set --match-set ovn40services src -m set --match-set ovn40subnets dst -m mark --mark 0x4000/0x4000 -j SNAT --to-source | Keep the source IP as the node IP when the node accesses an Overlay Pod via a Service IP | Only effective when kube-proxy uses ipvs mode |
nat | OVN-POSTROUTING | -m mark --mark 0x4000/0x4000 -j MASQUERADE | Perform SNAT for marked traffic | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets src -m set --match-set ovn40subnets dst -j MASQUERADE | Perform SNAT for Service traffic between Pods that passes through the node | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -m set --match-set ovn40subnets-distributed-gw dst -j RETURN | For Service traffic with ExternalTrafficPolicy Local, no SNAT is needed if the Endpoint uses a distributed gateway | -- |
nat | OVN-POSTROUTING | -m mark --mark 0x80000/0x80000 -j MASQUERADE | For Service traffic with ExternalTrafficPolicy Local, perform SNAT if the Endpoint uses a centralized gateway | -- |
nat | OVN-POSTROUTING | -p tcp -m tcp --tcp-flags SYN NONE -m conntrack --ctstate NEW -j RETURN | Skip SNAT when Pod IPs are exposed externally | -- |
nat | OVN-POSTROUTING | -s 10.16.0.0/16 -m set ! --match-set ovn40subnets dst -j SNAT --to-source 192.168.0.101 | When Pods access networks outside the cluster, perform SNAT if the subnet has NatOutgoing enabled and uses a centralized gateway with a specified IP | 10.16.0.0/16 is the subnet CIDR and 192.168.0.101 the specified gateway node IP |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat src -m set ! --match-set ovn40subnets dst -j MASQUERADE | When Pods access networks outside the cluster, perform SNAT if the subnet has NatOutgoing enabled | -- |
nat | OVN-POSTROUTING | -m set --match-set ovn40subnets-nat-policy src -m set ! --match-set ovn40subnets dst -j OVN-NAT-POLICY | When Pods access networks outside the cluster, perform SNAT for packets matching the policies if the subnet has natOutgoingPolicyRules configured | Entry chain OVN-NAT-POLICY for the outbound packets of subnets configured with natOutgoingPolicyRules |
nat | OVN-POSTROUTING | -m mark --mark 0x90001/0x90001 -j MASQUERADE --random-fully | Same as above | After leaving OVN-NAT-POLICY, packets tagged 0x90001/0x90001 are SNATed |
nat | OVN-POSTROUTING | -m mark --mark 0x90002/0x90002 -j RETURN | Same as above | After leaving OVN-NAT-POLICY, packets tagged 0x90002/0x90002 are not SNATed |
nat | OVN-NAT-POLICY | -s 10.0.11.0/24 -m comment --comment natPolicySubnet-net1 -j OVN-NAT-PSUBNET-aa98851157c5 | Same as above | 10.0.11.0/24 is the CIDR of subnet net1; the rules in chain OVN-NAT-PSUBNET-aa98851157c5 correspond to that subnet's natOutgoingPolicyRules configuration |
nat | OVN-NAT-PSUBNET-xxxxxxxxxxxx | -m set --match-set ovn40natpr-418e79269dc5-src src -m set --match-set ovn40natpr-418e79269dc5-dst dst -j MARK --set-xmark 0x90002/0x90002 | Same as above | 418e79269dc5 is the ID of one rule in natOutgoingPolicyRules, visible in status.natOutgoingPolicyRules[index].RuleID; packets whose srcIPs match ovn40natpr-418e79269dc5-src and whose dstIPs match ovn40natpr-418e79269dc5-dst are tagged 0x90002 |
mangle | OVN-OUTPUT | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x90003/0x90003 | Mark kubelet probe traffic so it is steered into tproxy | |
mangle | OVN-PREROUTING | -d 10.241.39.2/32 -p tcp -m tcp --dport 80 -j TPROXY --on-port 8102 --on-ip 172.18.0.3 --tproxy-mark 0x90004/0x90004 | Mark kubelet probe traffic so it is steered into tproxy |
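The core NatOutgoing decision above is simple: SNAT a packet only if its source is in a NAT-enabled overlay subnet (ovn40subnets-nat) and its destination is outside all known subnets (ovn40subnets). A minimal shell sketch of that predicate, with the ipsets modeled as space-separated /16 prefixes for brevity (illustrative only, not how ipset actually matches):

```shell
NAT_SRC="10.16"           # models ovn40subnets-nat
ALL_SUBNETS="10.16 10.17" # models ovn40subnets

prefix_of() { echo "$1" | cut -d. -f1-2; }

in_set() { # $1: IP, $2: space-separated /16 prefixes
  p=$(prefix_of "$1")
  for q in $2; do
    [ "$p" = "$q" ] && return 0
  done
  return 1
}

# Mirrors: -m set --match-set ovn40subnets-nat src \
#          -m set ! --match-set ovn40subnets dst -j MASQUERADE
should_snat() { # $1: src IP, $2: dst IP
  in_set "$1" "$NAT_SRC" && ! in_set "$2" "$ALL_SUBNETS"
}

should_snat 10.16.0.5 8.8.8.8   && echo "10.16.0.5 -> 8.8.8.8: SNAT"
should_snat 10.16.0.5 10.17.0.9 || echo "10.16.0.5 -> 10.17.0.9: no SNAT"
```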
Based on Kube-OVN v1.12.0, this page lists the CRD resources supported by Kube-OVN, with the type and meaning of each field in the CRD definitions, for reference.
Field | Type | Description |
---|---|---|
type | String | Condition type |
status | String | Condition status; one of True, False, or Unknown |
reason | String | Reason for the condition change |
message | String | Details of the condition change |
lastUpdateTime | Time | Time of the last status update |
lastTransitionTime | Time | Time of the last transition of the condition type |
In each CRD definition, the Condition field in Status follows the format above, so it is described here up front.
Field | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; Subnet for all instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | SubnetSpec | Subnet configuration |
status | SubnetStatus | Subnet status |
Field | Type | Description |
---|---|---|
default | Bool | Whether this subnet is the default subnet |
vpc | String | The VPC the subnet belongs to; defaults to ovn-cluster |
protocol | String | IP protocol; one of IPv4, IPv6, or Dual |
namespaces | []String | List of namespaces bound to this subnet |
cidrBlock | String | CIDR of the subnet, e.g. 10.16.0.0/16 |
gateway | String | Gateway address of the subnet; defaults to the first available address in the subnet's CIDR |
excludeIps | []String | Address ranges in this subnet that will not be allocated automatically |
provider | String | Defaults to ovn. For multi-NIC setups it can be set to that of the corresponding NetworkAttachmentDefinition |
gatewayType | String | Gateway type in Overlay mode; distributed or centralized |
gatewayNode | String | Gateway node(s) when the gateway type is centralized; may be a comma-separated list of nodes |
natOutgoing | Bool | Whether outgoing traffic is NATed. Cannot be set together with externalEgressGateway. |
externalEgressGateway | String | External gateway address; must be in the same L2-reachable domain as the subnet's gateway nodes. Cannot be set together with natOutgoing |
policyRoutingPriority | Uint32 | Policy-routing priority; used when adding policy routes so that traffic, after passing the subnet gateway, is forwarded to the external gateway address |
policyRoutingTableID | Uint32 | TableID of the local policy routing table used; must differ per subnet to avoid conflicts |
private | Bool | Whether the subnet is private; a private subnet denies access from addresses outside the subnet by default |
allowSubnets | []String | CIDRs allowed to access the subnet when it is private |
vlan | String | Name of the Vlan the subnet is bound to |
vips | []String | virtual-ip parameters of virtual-type lsps in the subnet |
logicalGateway | Bool | Whether to enable the logical gateway |
disableGatewayCheck | Bool | Whether to skip the gateway connectivity check when creating a Pod |
disableInterConnection | Bool | Controls whether cross-cluster subnet interconnection is enabled |
enableDHCP | Bool | Controls whether DHCP options are configured for lsps in the subnet |
dhcpV4Options | String | DHCP_Options record associated with lsp dhcpv4_options in the subnet |
dhcpV6Options | String | DHCP_Options record associated with lsp dhcpv6_options in the subnet |
enableIPv6RA | Bool | Controls whether the ipv6_ra_configs parameter is configured on the lrp port connecting the subnet to the router |
ipv6RAConfigs | String | ipv6_ra_configs parameter of the lrp port connecting the subnet to the router |
acls | []Acl | acls records associated with the subnet's logical-switch |
u2oInterconnection | Bool | Whether to enable Overlay/Underlay interconnection |
enableLb | *Bool | Controls whether the subnet's logical-switch is associated with load-balancer records |
enableEcmp | Bool | For centralized gateways, whether ECMP routing is enabled |
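Putting the spec fields together, a typical Subnet manifest looks like the following (a sketch; the name, CIDR, and namespace are illustrative):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16
  excludeIps:
  - 10.66.0.1..10.66.0.10
  gateway: 10.66.0.1
  gatewayType: distributed
  natOutgoing: true
  namespaces:
  - ns1
```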
Field | Type | Description |
---|---|---|
direction | String | Acl direction; from-lport or to-lport |
priority | Int | Acl priority; range 0 to 32767 |
match | String | Acl match expression |
action | String | Acl action; one of allow-related, allow-stateless, allow, drop, reject |
Field | Type | Description |
---|---|---|
conditions | []SubnetCondition | Subnet status change information; see the Condition definition at the beginning of this page |
v4AvailableIPs | Float64 | Number of IPv4 addresses currently available in the subnet |
v4availableIPrange | String | IPv4 address ranges currently available in the subnet |
v4UsingIPs | Float64 | Number of IPv4 addresses currently in use in the subnet |
v4usingIPrange | String | IPv4 address ranges currently in use in the subnet |
v6AvailableIPs | Float64 | Number of IPv6 addresses currently available in the subnet |
v6availableIPrange | String | IPv6 address ranges currently available in the subnet |
v6UsingIPs | Float64 | Number of IPv6 addresses currently in use in the subnet |
v6usingIPrange | String | IPv6 address ranges currently in use in the subnet |
activateGateway | String | For a centralized subnet in active-standby mode, the gateway node currently in service |
dhcpV4OptionsUUID | String | Identifier of the DHCP_Options record associated with lsp dhcpv4_options in the subnet |
dhcpV6OptionsUUID | String | Identifier of the DHCP_Options record associated with lsp dhcpv6_options in the subnet |
u2oInterconnectionIP | String | IP address occupied for the interconnection when Overlay/Underlay interconnection is enabled |
Field | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; IP for all instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IPSpec | IP configuration |
Field | Type | Description |
---|---|---|
podName | String | Name of the bound Pod |
namespace | String | Namespace of the bound Pod |
subnet | String | Subnet the IP belongs to |
attachSubnets | []String | Names of other attached subnets under this primary IP (deprecated) |
nodeName | String | Name of the node the bound Pod runs on |
ipAddress | String | IP address; in dual-stack this is in v4IP,v6IP format |
v4IPAddress | String | IPv4 address |
v6IPAddress | String | IPv6 address |
attachIPs | []String | Other attached IP addresses under this primary IP (deprecated) |
macAddress | String | Mac address of the bound Pod |
attachMacs | []String | Other attached Mac addresses under this primary IP (deprecated) |
containerID | String | Container ID of the bound Pod |
podType | String | Special workload type; StatefulSet, VirtualMachine, or empty |
Field | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; Vlan for all instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VlanSpec | Vlan configuration |
status | VlanStatus | Vlan status |
Field | Type | Description |
---|---|---|
id | Int | Vlan tag number; range 0~4096 |
provider | String | Name of the ProviderNetwork the Vlan is bound to |
Field | Type | Description |
---|---|---|
subnets | []String | List of subnets bound to the Vlan |
conditions | []VlanCondition | Vlan status change information; see the Condition definition at the beginning of this page |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 ProviderNetwork |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | ProviderNetworkSpec | ProviderNetwork 具体配置信息字段 |
status | ProviderNetworkStatus | ProviderNetwork 状态信息字段 |
属性名称 | 类型 | 描述 |
---|---|---|
defaultInterface | String | 该桥接网络默认使用的网卡接口名称 |
customInterfaces | []CustomInterface | 该桥接网络特殊使用的网卡配置 |
excludeNodes | []String | 该桥接网络不会绑定的节点名称 |
exchangeLinkName | Bool | 是否交换桥接网卡和对应 OVS 网桥名称 |
属性名称 | 类型 | 描述 |
---|---|---|
interface | String | Underlay 使用网卡接口名称 |
nodes | []String | 使用自定义网卡接口的节点列表 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | 当前桥接网络是否进入就绪状态 |
readyNodes | []String | 桥接网络进入就绪状态的节点名称 |
notReadyNodes | []String | 桥接网络未进入就绪状态的节点名称 |
vlans | []String | 桥接网络绑定的 Vlan 名称 |
conditions | []ProviderNetworkCondition | ProviderNetwork 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 Vpc |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | VpcSpec | Vpc 具体配置信息字段 |
status | VpcStatus | Vpc 状态信息字段 |
属性名称 | 类型 | 描述 |
---|---|---|
namespaces | []String | Vpc 绑定的命名空间列表 |
staticRoutes | []*StaticRoute | Vpc 下配置的静态路由信息 |
policyRoutes | []*PolicyRoute | Vpc 下配置的策略路由信息 |
vpcPeerings | []*VpcPeering | Vpc 互联信息 |
enableExternal | Bool | Vpc 是否连接到外部交换机 |
属性名称 | 类型 | 描述 |
---|---|---|
policy | String | 路由策略,取值为 policySrc 或者 policyDst |
cidr | String | 路由 Cidr 网段 |
nextHopIP | String | 路由下一跳信息 |
属性名称 | 类型 | 描述 |
---|---|---|
priority | Int32 | 策略路由优先级 |
match | String | 策略路由匹配条件 |
action | String | 策略路由动作,取值为 allow 、drop 或者 reroute |
nextHopIP | String | 策略路由下一跳信息,ECMP 路由情况下下一跳地址使用逗号隔开 |
属性名称 | 类型 | 描述 |
---|---|---|
remoteVpc | String | Vpc 互联对端 Vpc 名称 |
localConnectIP | String | Vpc 互联本端 IP 地址 |
属性名称 | 类型 | 描述 |
---|---|---|
conditions | []VpcCondition | Vpc 状态变化信息,具体字段参考文档开头 Condition 定义 |
standby | Bool | 标识 Vpc 是否创建完成,Vpc 下的 Subnet 需要等 Vpc 创建完成转换再继续处理 |
default | Bool | 是否是默认 Vpc |
defaultLogicalSwitch | String | Vpc 下的默认子网 |
router | String | Vpc 对应的 logical-router 名称 |
tcpLoadBalancer | String | Vpc 下的 TCP LB 信息 |
udpLoadBalancer | String | Vpc 下的 UDP LB 信息 |
tcpSessionLoadBalancer | String | Vpc 下的 TCP 会话保持 LB 信息 |
udpSessionLoadBalancer | String | Vpc 下的 UDP 会话保持 LB 信息 |
subnets | []String | Vpc 下的子网列表 |
vpcPeerings | []String | Vpc 互联的对端 Vpc 列表 |
enableExternal | Bool | Vpc 是否连接到外部交换机 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 VpcNatGateway |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | VpcNatSpec | Vpc 网关具体配置信息字段 |
属性名称 | 类型 | 描述 |
---|---|---|
vpc | String | Vpc 网关 Pod 所在的 Vpc 名称 |
subnet | String | Vpc 网关 Pod 所属的子网名称 |
lanIp | String | Vpc 网关 Pod 指定分配的 IP 地址 |
selector | []String | 标准 Kubernetes Selector 匹配信息 |
tolerations | []VpcNatToleration | 标准 Kubernetes 容忍信息 |
属性名称 | 类型 | 描述 |
---|---|---|
key | String | 容忍污点的 key 信息 |
operator | String | 取值为 Exists 或者 Equal |
value | String | 容忍污点的 value 信息 |
effect | String | 容忍污点的作用效果,取值为 NoExecute 、NoSchedule 或者 PreferNoSchedule |
tolerationSeconds | Int64 | 添加污点后,Pod 还能继续在节点上运行的时间 |
以上容忍字段的含义,可以参考 Kubernetes 官方文档 污点和容忍度。
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 IptablesEIP |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | IptablesEipSpec | Vpc 网关使用的 IptablesEIP 具体配置信息字段 |
status | IptablesEipStatus | Vpc 网关使用的 IptablesEIP 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
v4ip | String | IptablesEIP v4 地址 |
v6ip | String | IptablesEIP v6 地址 |
macAddress | String | IptablesEIP crd 记录分配的 mac 地址,没有实际使用 |
natGwDp | String | Vpc 网关名称 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | IptablesEIP 是否配置完成 |
ip | String | IptablesEIP 使用的 IP 地址,目前只支持了 IPv4 地址 |
redo | String | IptablesEIP crd 创建或者更新时间 |
nat | String | IptablesEIP 的使用类型,取值为 fip 、snat 或者 dnat |
conditions | []IptablesEIPCondition | IptablesEIP 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 IptablesFIPRule |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | IptablesFIPRuleSpec | Vpc 网关使用的 IptablesFIPRule 具体配置信息字段 |
status | IptablesFIPRuleStatus | Vpc 网关使用的 IptablesFIPRule 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
eip | String | IptablesFIPRule 使用的 IptablesEIP 名称 |
internalIp | String | IptablesFIPRule 对应的内部的 IP 地址 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | IptablesFIPRule 是否配置完成 |
v4ip | String | IptablesEIP 使用的 v4 IP 地址 |
v6ip | String | IptablesEIP 使用的 v6 IP 地址 |
natGwDp | String | Vpc 网关名称 |
redo | String | IptablesFIPRule crd 创建或者更新时间 |
conditions | []IptablesFIPRuleCondition | IptablesFIPRule 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 IptablesSnatRule |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | IptablesSnatRuleSpec | Vpc 网关使用的 IptablesSnatRule 具体配置信息字段 |
status | IptablesSnatRuleStatus | Vpc 网关使用的 IptablesSnatRule 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
eip | String | IptablesSnatRule 使用的 IptablesEIP 名称 |
internalIp | String | IptablesSnatRule 对应的内部的 IP 地址 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | IptablesSnatRule 是否配置完成 |
v4ip | String | IptablesSnatRule 使用的 v4 IP 地址 |
v6ip | String | IptablesSnatRule 使用的 v6 IP 地址 |
natGwDp | String | Vpc 网关名称 |
redo | String | IptablesSnatRule crd 创建或者更新时间 |
conditions | []IptablesSnatRuleCondition | IptablesSnatRule 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 IptablesDnatRule |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | IptablesDnatRuleSpec | Vpc 网关使用的 IptablesDnatRule 具体配置信息字段 |
status | IptablesDnatRuleStatus | Vpc 网关使用的 IptablesDnatRule 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
eip | Sting | Vpc 网关配置 IptablesDnatRule 使用的 IptablesEIP 名称 |
externalPort | Sting | Vpc 网关配置 IptablesDnatRule 使用的外部端口 |
protocol | Sting | Vpc 网关配置 IptablesDnatRule 的协议类型 |
internalIp | Sting | Vpc 网关配置 IptablesDnatRule 使用的内部 IP 地址 |
internalPort | Sting | Vpc 网关配置 IptablesDnatRule 使用的内部端口 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | IptablesDnatRule 是否配置完成 |
v4ip | String | IptablesDnatRule 使用的 v4 IP 地址 |
v6ip | String | IptablesDnatRule 使用的 v6 IP 地址 |
natGwDp | String | Vpc 网关名称 |
redo | String | IptablesDnatRule crd 创建或者更新时间 |
conditions | []IptablesDnatRuleCondition | IptablesDnatRule 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 VpcDns |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | VpcDnsSpec | VpcDns 具体配置信息字段 |
status | VpcDnsStatus | VpcDns 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
vpc | String | VpcDns 所在的 Vpc 名称 |
subnet | String | VpcDns Pod 分配地址的 Subnet 名称 |
属性名称 | 类型 | 描述 |
---|---|---|
conditions | []VpcDnsCondition | VpcDns 状态变化信息,具体字段参考文档开头 Condition 定义 |
active | Bool | VpcDns 是否正在使用 |
VpcDns 的详细使用文档,可以参考 自定义 VPC DNS。
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 SwitchLBRule |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | SwitchLBRuleSpec | SwitchLBRule 具体配置信息字段 |
status | SwitchLBRuleStatus | SwitchLBRule 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
vip | String | SwitchLBRule 配置的 vip 地址 |
namespace | String | SwitchLBRule 的命名空间 |
selector | []String | 标准 Kubernetes Selector 匹配信息 |
sessionAffinity | String | 标准 Kubernetes Service 中 sessionAffinity 取值 |
ports | []SlrPort | SwitchLBRule 端口列表 |
SwitchLBRule 的详细配置信息,可以参考 自定义 VPC 内部负载均衡。
属性名称 | 类型 | 描述 |
---|---|---|
name | String | 端口名称 |
port | Int32 | 端口号 |
targetPort | Int32 | 目标端口号 |
protocol | String | 协议类型 |
属性名称 | 类型 | 描述 |
---|---|---|
conditions | []SwitchLBRuleCondition | SwitchLBRule 状态变化信息,具体字段参考文档开头 Condition 定义 |
ports | String | SwitchLBRule 端口信息 |
service | String | SwitchLBRule 提供服务的 service 名称 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 SecurityGroup |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | SecurityGroupSpec | 安全组具体配置信息字段 |
status | SecurityGroupStatus | 安全组状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
ingressRules | []*SgRule | 入方向安全组规则 |
egressRules | []*SgRule | 出方向安全组规则 |
allowSameGroupTraffic | Bool | 同一安全组内的 lsp 是否可以互通,以及流量规则是否需要更新 |
属性名称 | 类型 | 描述 |
---|---|---|
ipVersion | String | IP 版本号,取值为 ipv4 或者 ipv6 |
protocol | String | 取值为 all 、icmp 、tcp 或者 udp |
priority | Int | Acl 优先级,取值范围为 1-200,数值越小,优先级越高 |
remoteType | String | 取值为 address 或者 securityGroup |
remoteAddress | String | 对端地址 |
remoteSecurityGroup | String | 对端安全组 |
portRangeMin | Int | 端口范围起始值,最小取值为 1 |
portRangeMax | Int | 端口范围最大值,最大取值为 65535 |
policy | String | 取值为 allow 或者 drop |
属性名称 | 类型 | 描述 |
---|---|---|
portGroup | String | 安全组对应的 port-group 名称 |
allowSameGroupTraffic | Bool | 同一安全组内的 lsp 是否可以互通,以及安全组的流量规则是否需要更新 |
ingressMd5 | String | 入方向安全组规则 MD5 取值 |
egressMd5 | String | 出方向安全组规则 MD5 取值 |
ingressLastSyncSuccess | Bool | 入方向规则上一次同步是否成功 |
egressLastSyncSuccess | Bool | 出方向规则上一次同步是否成功 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 Vip |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | VipSpec | Vip 具体配置信息字段 |
status | VipStatus | Vip 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
namespace | String | Vip 所在命名空间 |
subnet | String | Vip 所属子网 |
v4ip | String | Vip v4 IP 地址 |
v6ip | String | Vip v6 IP 地址 |
macAddress | String | Vip mac 地址 |
parentV4ip | String | 目前没有使用 |
parentV6ip | String | 目前没有使用 |
parentMac | String | 目前没有使用 |
attachSubnets | []String | 该字段废弃,不再使用 |
属性名称 | 类型 | 描述 |
---|---|---|
conditions | []VipCondition | Vip 状态变化信息,具体字段参考文档开头 Condition 定义 |
ready | Bool | Vip 是否准备好 |
v4ip | String | Vip v4 IP 地址,应该和 spec 字段取值一致 |
v6ip | String | Vip v6 IP 地址,应该和 spec 字段取值一致 |
mac | String | Vip mac 地址,应该和 spec 字段取值一致 |
pv4ip | String | 目前没有使用 |
pv6ip | String | 目前没有使用 |
pmac | String | 目前没有使用 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 OvnEip |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | OvnEipSpec | 默认 Vpc 使用 OvnEip 具体配置信息字段 |
status | OvnEipStatus | 默认 Vpc 使用 OvnEip 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
externalSubnet | String | OvnEip 所在的子网名称 |
v4ip | String | OvnEip IP 地址 |
macAddress | String | OvnEip Mac 地址 |
type | String | OvnEip 使用类型,取值有 fip 、snat 或者 lrp |
属性名称 | 类型 | 描述 |
---|---|---|
conditions | []OvnEipCondition | 默认 Vpc OvnEip 状态变化信息,具体字段参考文档开头 Condition 定义 |
v4ip | String | OvnEip 使用的 v4 IP 地址 |
macAddress | String | OvnEip 使用的 Mac 地址 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 OvnFip |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | OvnFipSpec | 默认 Vpc 使用 OvnFip 具体配置信息字段 |
status | OvnFipStatus | 默认 Vpc 使用 OvnFip 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
ovnEip | String | OvnFip 绑定的 OvnEip 名称 |
ipName | String | OvnFip 绑定 Pod 对应的 IP crd 名称 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | OvnFip 是否配置完成 |
v4Eip | String | OvnFip 绑定的 OvnEip 名称 |
v4Ip | String | OvnFip 当前使用的 OvnEip 地址 |
macAddress | String | OvnFip 配置的 Mac 地址 |
vpc | String | OvnFip 所在的 Vpc 名称 |
conditions | []OvnFipCondition | OvnFip 状态变化信息,具体字段参考文档开头 Condition 定义 |
属性名称 | 类型 | 描述 |
---|---|---|
apiVersion | String | 标准 Kubernetes 版本信息字段,所有自定义资源该值均为 kubeovn.io/v1 |
kind | String | 标准 Kubernetes 资源类型字段,本资源所有实例该值均为 OvnSnatRule |
metadata | ObjectMeta | 标准 Kubernetes 资源元数据信息 |
spec | OvnSnatRuleSpec | 默认 Vpc OvnSnatRule 具体配置信息字段 |
status | OvnSnatRuleStatus | 默认 Vpc OvnSnatRule 状态信息 |
属性名称 | 类型 | 描述 |
---|---|---|
ovnEip | String | OvnSnatRule 绑定的 OvnEip 名称 |
vpcSubnet | String | OvnSnatRule 配置的子网名称 |
ipName | String | OvnSnatRule 绑定 Pod 对应的 IP crd 名称 |
属性名称 | 类型 | 描述 |
---|---|---|
ready | Bool | OvnSnatRule 是否配置完成 |
v4Eip | String | OvnSnatRule 绑定的 OvnEip 地址 |
v4IpCidr | String | 在 logical-router 中配置 snat 转换使用的 cidr 地址 |
vpc | String | OvnSnatRule 所在的 Vpc 名称 |
conditions | []OvnSnatRuleCondition | OvnSnatRule 状态变化信息,具体字段参考文档开头 Condition 定义 |
Based on Kube-OVN v1.12.0, this reference lists the CRD resources supported by Kube-OVN, together with the type and meaning of each field in the CRD definitions.
Attribute | Type | Description |
---|---|---|
type | String | Condition type |
status | String | Condition status; one of True, False, or Unknown |
reason | String | Reason for the last condition change |
message | String | Detailed message for the last condition change |
lastUpdateTime | Time | Time of the last status update |
lastTransitionTime | Time | Time at which the condition type last changed |
The Condition field in the Status of every CRD follows the format above, so it is described here once up front.
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always Subnet for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | SubnetSpec | Subnet configuration fields |
status | SubnetStatus | Subnet status fields |
Attribute | Type | Description |
---|---|---|
default | Bool | Whether this subnet is the default subnet |
vpc | String | VPC the subnet belongs to; defaults to ovn-cluster |
protocol | String | IP protocol; one of IPv4, IPv6, or Dual |
namespaces | []String | List of namespaces bound to this subnet |
cidrBlock | String | CIDR range of the subnet, e.g. 10.16.0.0/16 |
gateway | String | Gateway address of the subnet; defaults to the first usable address of the subnet CIDR |
excludeIps | []String | Address ranges within this subnet that will not be allocated automatically |
provider | String | Defaults to ovn. For multi-NIC setups it can be set to the <name>.<namespace> of a NetworkAttachmentDefinition |
gatewayType | String | Gateway type in overlay mode; distributed or centralized |
gatewayNode | String | Gateway node(s) when gatewayType is centralized; may be a comma-separated list of nodes |
natOutgoing | Bool | Whether outgoing traffic is NATed. Mutually exclusive with externalEgressGateway. |
externalEgressGateway | String | External gateway address. Must be in the same L2-reachable domain as the subnet gateway nodes; mutually exclusive with natOutgoing |
policyRoutingPriority | Uint32 | Policy-routing priority. Used when adding policy routes so that traffic passing the subnet gateway is forwarded to the external gateway address |
policyRoutingTableID | Uint32 | TableID of the local policy-routing table; must differ per subnet to avoid conflicts |
private | Bool | Whether the subnet is private; a private subnet rejects access from addresses outside the subnet by default |
allowSubnets | []String | CIDRs allowed to access the subnet when it is private |
vlan | String | Name of the Vlan the subnet is bound to |
vips | []String | virtual-ip parameters of virtual-type lsps under the subnet |
logicalGateway | Bool | Whether to enable the logical gateway |
disableGatewayCheck | Bool | Whether to skip the gateway connectivity check when creating a Pod |
disableInterConnection | Bool | Controls whether cross-cluster interconnection is enabled for the subnet |
enableDHCP | Bool | Controls whether DHCP option sets are configured for lsps under the subnet |
dhcpV4Options | String | DHCP_Options record associated with dhcpv4_options of lsps under the subnet |
dhcpV6Options | String | DHCP_Options record associated with dhcpv6_options of lsps under the subnet |
enableIPv6RA | Bool | Controls whether the ipv6_ra_configs parameter is set on the lrp port connecting the subnet to the router |
ipv6RAConfigs | String | ipv6_ra_configs parameter settings of the lrp port connecting the subnet to the router |
acls | []Acl | acls records associated with the logical-switch of the subnet |
u2oInterconnection | Bool | Whether to enable Overlay/Underlay interconnection mode |
enableLb | *Bool | Controls whether the logical-switch of the subnet is associated with load-balancer records |
enableEcmp | Bool | For centralized gateways, whether ECMP routing is enabled |
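As an illustration of how the spec fields above fit together, a minimal Subnet manifest can be assembled as a plain dictionary. This is a sketch, not from the source; the name, CIDR, and addresses are hypothetical:

```python
# Illustrative sketch only: a minimal Subnet manifest using the spec fields
# documented above. The name, CIDR, and addresses are hypothetical.
subnet = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "Subnet",
    "metadata": {"name": "demo-subnet"},
    "spec": {
        "vpc": "ovn-cluster",               # the default VPC
        "protocol": "IPv4",
        "cidrBlock": "10.16.0.0/16",
        "gateway": "10.16.0.1",             # first usable address of the CIDR
        "excludeIps": ["10.16.0.1..10.16.0.10"],
        "gatewayType": "distributed",
        "natOutgoing": True,                # mutually exclusive with externalEgressGateway
    },
}

# natOutgoing and externalEgressGateway must not be set at the same time.
assert not ("externalEgressGateway" in subnet["spec"]
            and subnet["spec"].get("natOutgoing"))
```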
Attribute | Type | Description |
---|---|---|
direction | String | ACL direction; from-lport or to-lport |
priority | Int | ACL priority, in the range 0 to 32767 |
match | String | ACL match expression |
action | String | ACL action; one of allow-related, allow-stateless, allow, drop, reject |
Attribute | Type | Description |
---|---|---|
conditions | []SubnetCondition | Subnet status change information; see the Condition definition at the beginning of this document |
v4AvailableIPs | Float64 | Number of IPv4 addresses currently available in the subnet |
v4availableIPrange | String | IPv4 address ranges currently available in the subnet |
v4UsingIPs | Float64 | Number of IPv4 addresses currently in use in the subnet |
v4usingIPrange | String | IPv4 address ranges currently in use in the subnet |
v6AvailableIPs | Float64 | Number of IPv6 addresses currently available in the subnet |
v6availableIPrange | String | IPv6 address ranges currently available in the subnet |
v6UsingIPs | Float64 | Number of IPv6 addresses currently in use in the subnet |
v6usingIPrange | String | IPv6 address ranges currently in use in the subnet |
activateGateway | String | For a centralized subnet in active/standby mode, the gateway node currently active |
dhcpV4OptionsUUID | String | Identifier of the DHCP_Options record associated with dhcpv4_options of lsps under the subnet |
dhcpV6OptionsUUID | String | Identifier of the DHCP_Options record associated with dhcpv6_options of lsps under the subnet |
u2oInterconnectionIP | String | IP address occupied for interconnection when Overlay/Underlay interconnection mode is enabled |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always IP for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IPSpec | IP configuration fields |
Attribute | Type | Description |
---|---|---|
podName | String | Name of the bound Pod |
namespace | String | Namespace of the bound Pod |
subnet | String | Subnet the IP belongs to |
attachSubnets | []String | Names of other attached subnets under this primary IP (deprecated, no longer used) |
nodeName | String | Name of the node where the bound Pod runs |
ipAddress | String | IP address; in dual-stack mode in v4IP,v6IP format |
v4IPAddress | String | IPv4 address |
v6IPAddress | String | IPv6 address |
attachIPs | []String | Other attached IP addresses under this primary IP (deprecated, no longer used) |
macAddress | String | MAC address of the bound Pod |
attachMacs | []String | Other attached MAC addresses under this primary IP (deprecated, no longer used) |
containerID | String | Container ID of the bound Pod |
podType | String | Special workload type of the Pod; StatefulSet, VirtualMachine, or empty |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always Vlan for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VlanSpec | Vlan configuration fields |
status | VlanStatus | Vlan status fields |
Attribute | Type | Description |
---|---|---|
id | Int | VLAN tag, in the range 0~4095 |
provider | String | Name of the ProviderNetwork the Vlan is bound to |
Attribute | Type | Description |
---|---|---|
subnets | []String | List of subnets bound to the Vlan |
conditions | []VlanCondition | Vlan status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always ProviderNetwork for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | ProviderNetworkSpec | ProviderNetwork configuration fields |
status | ProviderNetworkStatus | ProviderNetwork status fields |
Attribute | Type | Description |
---|---|---|
defaultInterface | String | Name of the NIC interface used by this bridge network by default |
customInterfaces | []CustomInterface | Node-specific NIC configuration for this bridge network |
excludeNodes | []String | Names of nodes this bridge network will not be bound to |
exchangeLinkName | Bool | Whether to exchange the names of the bridge NIC and the corresponding OVS bridge |
Attribute | Type | Description |
---|---|---|
interface | String | Name of the NIC interface used by the underlay |
nodes | []String | List of nodes using this custom NIC interface |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the bridge network is currently ready |
readyNodes | []String | Nodes on which the bridge network is ready |
notReadyNodes | []String | Nodes on which the bridge network is not ready |
vlans | []String | Names of the Vlans bound to the bridge network |
conditions | []ProviderNetworkCondition | ProviderNetwork status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always Vpc for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VpcSpec | Vpc configuration fields |
status | VpcStatus | Vpc status fields |
Attribute | Type | Description |
---|---|---|
namespaces | []String | List of namespaces bound to the Vpc |
staticRoutes | []*StaticRoute | Static routes configured under the Vpc |
policyRoutes | []*PolicyRoute | Policy routes configured under the Vpc |
vpcPeerings | []*VpcPeering | Vpc peering information |
enableExternal | Bool | Whether the Vpc is connected to an external switch |
Attribute | Type | Description |
---|---|---|
policy | String | Routing policy; policySrc or policyDst |
cidr | String | Route CIDR |
nextHopIP | String | Next hop of the route |
Attribute | Type | Description |
---|---|---|
priority | Int32 | Policy route priority |
match | String | Policy route match expression |
action | String | Policy route action; allow, drop, or reroute |
nextHopIP | String | Next hop of the policy route; for ECMP routes, next-hop addresses are comma-separated |
Attribute | Type | Description |
---|---|---|
remoteVpc | String | Name of the peer Vpc |
localConnectIP | String | Local IP address of the Vpc peering |
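Putting the VpcSpec, StaticRoute, and PolicyRoute tables together, a custom Vpc manifest might look like the following sketch. The names and addresses are hypothetical, not from the source:

```python
# Illustrative sketch only: a Vpc manifest combining the spec fields above.
# All names and addresses are hypothetical.
vpc = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "Vpc",
    "metadata": {"name": "vpc-demo"},
    "spec": {
        "namespaces": ["ns-demo"],
        "staticRoutes": [
            # Default route for the whole VPC.
            {"policy": "policyDst", "cidr": "0.0.0.0/0", "nextHopIP": "10.0.1.254"},
        ],
        "policyRoutes": [
            # For ECMP, nextHopIP would be a comma-separated list of addresses.
            {"priority": 10, "match": "ip4.src == 10.0.1.0/24",
             "action": "reroute", "nextHopIP": "10.0.1.252"},
        ],
    },
}
```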
Attribute | Type | Description |
---|---|---|
conditions | []VpcCondition | Vpc status change information; see the Condition definition at the beginning of this document |
standby | Bool | Indicates whether the Vpc has finished being created; Subnets under a Vpc must wait for the Vpc to finish creation before they are processed |
default | Bool | Whether this is the default Vpc |
defaultLogicalSwitch | String | Default subnet under the Vpc |
router | String | Name of the logical-router corresponding to the Vpc |
tcpLoadBalancer | String | TCP LB information of the Vpc |
udpLoadBalancer | String | UDP LB information of the Vpc |
tcpSessionLoadBalancer | String | TCP session-affinity LB information of the Vpc |
udpSessionLoadBalancer | String | UDP session-affinity LB information of the Vpc |
subnets | []String | List of subnets under the Vpc |
vpcPeerings | []String | List of peer Vpcs |
enableExternal | Bool | Whether the Vpc is connected to an external switch |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always VpcNatGateway for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VpcNatSpec | Vpc gateway configuration fields |
Attribute | Type | Description |
---|---|---|
vpc | String | Name of the Vpc where the Vpc gateway Pod resides |
subnet | String | Name of the subnet the Vpc gateway Pod belongs to |
lanIp | String | Specified IP address allocated to the Vpc gateway Pod |
selector | []String | Standard Kubernetes selector |
tolerations | []VpcNatToleration | Standard Kubernetes tolerations |
Attribute | Type | Description |
---|---|---|
key | String | Key of the tolerated taint |
operator | String | Exists or Equal |
value | String | Value of the tolerated taint |
effect | String | Effect of the tolerated taint; NoExecute, NoSchedule, or PreferNoSchedule |
tolerationSeconds | Int64 | How long the Pod may keep running on the node after the taint is added |
For the meaning of the toleration fields above, see the Kubernetes documentation on taints and tolerations.
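As a sketch of the VpcNatSpec and toleration fields above (all names, addresses, and taint keys are hypothetical, not from the source):

```python
# Illustrative sketch only: a VpcNatGateway manifest using the spec and
# toleration fields documented above. All values are hypothetical.
nat_gw = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "VpcNatGateway",
    "metadata": {"name": "gw-demo"},
    "spec": {
        "vpc": "vpc-demo",
        "subnet": "subnet-demo",
        "lanIp": "10.0.1.10",          # fixed IP for the gateway Pod
        "selector": ["kubernetes.io/os: linux"],
        "tolerations": [
            # Tolerate a hypothetical taint so the Pod can land on tainted nodes.
            {"key": "gateway-node", "operator": "Exists", "effect": "NoSchedule"},
        ],
    },
}
```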
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always IptablesEIP for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IptablesEipSpec | Configuration fields of the IptablesEIP used by the Vpc gateway |
status | IptablesEipStatus | Status of the IptablesEIP used by the Vpc gateway |
Attribute | Type | Description |
---|---|---|
v4ip | String | IptablesEIP v4 address |
v6ip | String | IptablesEIP v6 address |
macAddress | String | MAC address allocated for the IptablesEIP crd record; not actually used |
natGwDp | String | Name of the Vpc gateway |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesEIP has been configured |
ip | String | IP address used by the IptablesEIP; currently only IPv4 is supported |
redo | String | Creation or update time of the IptablesEIP crd |
nat | String | Usage type of the IptablesEIP; fip, snat, or dnat |
conditions | []IptablesEIPCondition | IptablesEIP status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always IptablesFIPRule for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IptablesFIPRuleSpec | Configuration fields of the IptablesFIPRule used by the Vpc gateway |
status | IptablesFIPRuleStatus | Status of the IptablesFIPRule used by the Vpc gateway |
Attribute | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used by the IptablesFIPRule |
internalIp | String | Internal IP address corresponding to the IptablesFIPRule |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesFIPRule has been configured |
v4ip | String | v4 IP address used by the IptablesEIP |
v6ip | String | v6 IP address used by the IptablesEIP |
natGwDp | String | Name of the Vpc gateway |
redo | String | Creation or update time of the IptablesFIPRule crd |
conditions | []IptablesFIPRuleCondition | IptablesFIPRule status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always IptablesSnatRule for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IptablesSnatRuleSpec | Configuration fields of the IptablesSnatRule used by the Vpc gateway |
status | IptablesSnatRuleStatus | Status of the IptablesSnatRule used by the Vpc gateway |
Attribute | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used by the IptablesSnatRule |
internalIp | String | Internal IP address corresponding to the IptablesSnatRule |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesSnatRule has been configured |
v4ip | String | v4 IP address used by the IptablesSnatRule |
v6ip | String | v6 IP address used by the IptablesSnatRule |
natGwDp | String | Name of the Vpc gateway |
redo | String | Creation or update time of the IptablesSnatRule crd |
conditions | []IptablesSnatRuleCondition | IptablesSnatRule status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always IptablesDnatRule for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | IptablesDnatRuleSpec | Configuration fields of the IptablesDnatRule used by the Vpc gateway |
status | IptablesDnatRuleStatus | Status of the IptablesDnatRule used by the Vpc gateway |
Attribute | Type | Description |
---|---|---|
eip | String | Name of the IptablesEIP used by the IptablesDnatRule configured on the Vpc gateway |
externalPort | String | External port used by the IptablesDnatRule configured on the Vpc gateway |
protocol | String | Protocol type of the IptablesDnatRule configured on the Vpc gateway |
internalIp | String | Internal IP address used by the IptablesDnatRule configured on the Vpc gateway |
internalPort | String | Internal port used by the IptablesDnatRule configured on the Vpc gateway |
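A DNAT rule combining the spec fields above could be sketched as follows; the EIP name, ports, and internal address are hypothetical, not from the source:

```python
# Illustrative sketch only: an IptablesDnatRule manifest forwarding an
# external port to an internal Pod address. All values are hypothetical.
dnat = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "IptablesDnatRule",
    "metadata": {"name": "dnat-demo"},
    "spec": {
        "eip": "eip-demo",           # IptablesEIP providing the external address
        "externalPort": "8888",
        "protocol": "tcp",
        "internalIp": "10.0.1.10",
        "internalPort": "80",
    },
}
```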
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the IptablesDnatRule has been configured |
v4ip | String | v4 IP address used by the IptablesDnatRule |
v6ip | String | v6 IP address used by the IptablesDnatRule |
natGwDp | String | Name of the Vpc gateway |
redo | String | Creation or update time of the IptablesDnatRule crd |
conditions | []IptablesDnatRuleCondition | IptablesDnatRule status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always VpcDns for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VpcDnsSpec | VpcDns configuration fields |
status | VpcDnsStatus | VpcDns status |
Attribute | Type | Description |
---|---|---|
vpc | String | Name of the Vpc where the VpcDns resides |
subnet | String | Name of the Subnet from which the VpcDns Pod is allocated addresses |
Attribute | Type | Description |
---|---|---|
conditions | []VpcDnsCondition | VpcDns status change information; see the Condition definition at the beginning of this document |
active | Bool | Whether the VpcDns is in use |
For detailed usage of VpcDns, see Custom VPC DNS.
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always SwitchLBRule for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | SwitchLBRuleSpec | SwitchLBRule configuration fields |
status | SwitchLBRuleStatus | SwitchLBRule status |
Attribute | Type | Description |
---|---|---|
vip | String | vip address configured by the SwitchLBRule |
namespace | String | Namespace of the SwitchLBRule |
selector | []String | Standard Kubernetes selector |
sessionAffinity | String | sessionAffinity value as in a standard Kubernetes Service |
ports | []SlrPort | List of SwitchLBRule ports |
For detailed configuration of SwitchLBRule, see Custom VPC Internal Load Balancing.
Attribute | Type | Description |
---|---|---|
name | String | Port name |
port | Int32 | Port number |
targetPort | Int32 | Target port number |
protocol | String | Protocol type |
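Combining the SwitchLBRuleSpec and SlrPort tables above, a rule manifest might be sketched like this (vip, namespace, and selector are hypothetical):

```python
# Illustrative sketch only: a SwitchLBRule manifest using the spec and
# SlrPort fields documented above. All values are hypothetical.
slr = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "SwitchLBRule",
    "metadata": {"name": "slr-demo"},
    "spec": {
        "vip": "10.0.1.100",
        "namespace": "ns-demo",
        "selector": ["app: nginx"],
        "sessionAffinity": "ClientIP",   # as in a standard Kubernetes Service
        "ports": [
            {"name": "http", "port": 80, "targetPort": 8080, "protocol": "TCP"},
        ],
    },
}
```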
Attribute | Type | Description |
---|---|---|
conditions | []SwitchLBRuleCondition | SwitchLBRule status change information; see the Condition definition at the beginning of this document |
ports | String | SwitchLBRule port information |
service | String | Name of the service backing the SwitchLBRule |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always SecurityGroup for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | SecurityGroupSpec | Security group configuration fields |
status | SecurityGroupStatus | Security group status |
Attribute | Type | Description |
---|---|---|
ingressRules | []*SgRule | Ingress security-group rules |
egressRules | []*SgRule | Egress security-group rules |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can communicate with each other, and whether the traffic rules need to be updated |
Attribute | Type | Description |
---|---|---|
ipVersion | String | IP version; ipv4 or ipv6 |
protocol | String | One of all, icmp, tcp, or udp |
priority | Int | ACL priority, in the range 1-200; the smaller the value, the higher the priority |
remoteType | String | address or securityGroup |
remoteAddress | String | Remote address |
remoteSecurityGroup | String | Remote security group |
portRangeMin | Int | Start of the port range; minimum 1 |
portRangeMax | Int | End of the port range; maximum 65535 |
policy | String | allow or drop |
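As a sketch of the SecurityGroupSpec and SgRule fields above (the group name and remote CIDR are hypothetical, not from the source):

```python
# Illustrative sketch only: a SecurityGroup manifest with one ingress rule.
# The name and remote CIDR are hypothetical.
sg = {
    "apiVersion": "kubeovn.io/v1",
    "kind": "SecurityGroup",
    "metadata": {"name": "sg-demo"},
    "spec": {
        "allowSameGroupTraffic": True,
        "ingressRules": [
            {"ipVersion": "ipv4", "protocol": "tcp", "priority": 1,
             "remoteType": "address", "remoteAddress": "192.168.0.0/24",
             "portRangeMin": 80, "portRangeMax": 80, "policy": "allow"},
        ],
        "egressRules": [],
    },
}

# A smaller priority value means a higher priority (valid range 1-200).
assert 1 <= sg["spec"]["ingressRules"][0]["priority"] <= 200
```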
Attribute | Type | Description |
---|---|---|
portGroup | String | Name of the port-group corresponding to the security group |
allowSameGroupTraffic | Bool | Whether lsps in the same security group can communicate with each other, and whether the group's traffic rules need to be updated |
ingressMd5 | String | MD5 of the ingress security-group rules |
egressMd5 | String | MD5 of the egress security-group rules |
ingressLastSyncSuccess | Bool | Whether the last sync of the ingress rules succeeded |
egressLastSyncSuccess | Bool | Whether the last sync of the egress rules succeeded |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always Vip for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | VipSpec | Vip configuration fields |
status | VipStatus | Vip status |
Attribute | Type | Description |
---|---|---|
namespace | String | Namespace of the Vip |
subnet | String | Subnet the Vip belongs to |
v4ip | String | Vip v4 IP address |
v6ip | String | Vip v6 IP address |
macAddress | String | Vip MAC address |
parentV4ip | String | Not currently used |
parentV6ip | String | Not currently used |
parentMac | String | Not currently used |
attachSubnets | []String | Deprecated, no longer used |
Attribute | Type | Description |
---|---|---|
conditions | []VipCondition | Vip status change information; see the Condition definition at the beginning of this document |
ready | Bool | Whether the Vip is ready |
v4ip | String | Vip v4 IP address; should match the spec value |
v6ip | String | Vip v6 IP address; should match the spec value |
mac | String | Vip MAC address; should match the spec value |
pv4ip | String | Not currently used |
pv6ip | String | Not currently used |
pmac | String | Not currently used |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always OvnEip for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | OvnEipSpec | Configuration fields of an OvnEip used by the default Vpc |
status | OvnEipStatus | Status of an OvnEip used by the default Vpc |
Attribute | Type | Description |
---|---|---|
externalSubnet | String | Name of the subnet the OvnEip belongs to |
v4ip | String | OvnEip IP address |
macAddress | String | OvnEip MAC address |
type | String | OvnEip usage type; fip, snat, or lrp |
Attribute | Type | Description |
---|---|---|
conditions | []OvnEipCondition | Default-Vpc OvnEip status change information; see the Condition definition at the beginning of this document |
v4ip | String | v4 IP address used by the OvnEip |
macAddress | String | MAC address used by the OvnEip |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always OvnFip for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | OvnFipSpec | Configuration fields of an OvnFip used by the default Vpc |
status | OvnFipStatus | Status of an OvnFip used by the default Vpc |
Attribute | Type | Description |
---|---|---|
ovnEip | String | Name of the OvnEip bound to the OvnFip |
ipName | String | Name of the IP crd of the Pod bound to the OvnFip |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the OvnFip has been configured |
v4Eip | String | Name of the OvnEip bound to the OvnFip |
v4Ip | String | OvnEip address currently used by the OvnFip |
macAddress | String | MAC address configured for the OvnFip |
vpc | String | Name of the Vpc the OvnFip resides in |
conditions | []OvnFipCondition | OvnFip status change information; see the Condition definition at the beginning of this document |
Attribute | Type | Description |
---|---|---|
apiVersion | String | Standard Kubernetes API version field; always kubeovn.io/v1 for all custom resources |
kind | String | Standard Kubernetes resource type field; always OvnSnatRule for instances of this resource |
metadata | ObjectMeta | Standard Kubernetes resource metadata |
spec | OvnSnatRuleSpec | Configuration fields of an OvnSnatRule in the default Vpc |
status | OvnSnatRuleStatus | Status of an OvnSnatRule in the default Vpc |
Attribute | Type | Description |
---|---|---|
ovnEip | String | Name of the OvnEip bound to the OvnSnatRule |
vpcSubnet | String | Name of the subnet configured by the OvnSnatRule |
ipName | String | Name of the IP crd of the Pod bound to the OvnSnatRule |
Attribute | Type | Description |
---|---|---|
ready | Bool | Whether the OvnSnatRule has been configured |
v4Eip | String | OvnEip address bound to the OvnSnatRule |
v4IpCidr | String | cidr address used for snat translation in the logical-router |
vpc | String | Name of the Vpc the OvnSnatRule resides in |
conditions | []OvnSnatRuleCondition | OvnSnatRule status change information; see the Condition definition at the beginning of this document |
Based on Kube-OVN v1.12.0, this reference lists the arguments supported by kube-ovn-pinger, together with the type, meaning, and default value of each argument.
Attribute | Type | Description | Default |
---|---|---|---|
port | Int | Metrics port | 8080 |
kubeconfig | String | Path to a kubeconfig file with authorization information; if unset, the in-cluster token is used | "" |
ds-namespace | String | Namespace of the kube-ovn-pinger DaemonSet | "kube-system" |
ds-name | String | Name of the kube-ovn-pinger DaemonSet | "kube-ovn-pinger" |
interval | Int | Seconds between consecutive pings | 5 |
mode | String | Server or job mode | "server" |
exit-code | Int | Exit code on failure | 0 |
internal-dns | String | Internal DNS name to resolve from inside the pod | "kubernetes.default" |
external-dns | String | External DNS name to resolve from inside the pod | "" |
external-address | String | External address to check ping connectivity against | "114.114.114.114" |
network-mode | String | CNI plugin used by the current cluster | "kube-ovn" |
enable-metrics | Bool | Whether metrics queries are supported | true |
ovs.timeout | Int | Timeout for JSON-RPC requests to OVS | 2 |
system.run.dir | String | OVS default run directory | "/var/run/openvswitch" |
database.vswitch.name | String | Name of the OVS database | "Open_vSwitch" |
database.vswitch.socket.remote | String | JSON-RPC unix socket to the OVS database | "unix:/var/run/openvswitch/db.sock" |
database.vswitch.file.data.path | String | OVS database file | "/etc/openvswitch/conf.db" |
database.vswitch.file.log.path | String | OVS database log file | "/var/log/openvswitch/ovsdb-server.log" |
database.vswitch.file.pid.path | String | OVS database process ID file | "/var/run/openvswitch/ovsdb-server.pid" |
database.vswitch.file.system.id.path | String | OVS system ID file | "/etc/openvswitch/system-id.conf" |
service.vswitchd.file.log.path | String | OVS vswitchd daemon log file | "/var/log/openvswitch/ovs-vswitchd.log" |
service.vswitchd.file.pid.path | String | OVS vswitchd daemon process ID file | "/var/run/openvswitch/ovs-vswitchd.pid" |
service.ovncontroller.file.log.path | String | OVN controller daemon log file | "/var/log/ovn/ovn-controller.log" |
service.ovncontroller.file.pid.path | String | OVN controller daemon process ID file | "/var/run/ovn/ovn-controller.pid" |
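The parameters above map to command-line flags of the kube-ovn-pinger binary. As a sketch of assembling a container command with a few defaults overridden (the binary path and override values are assumptions, not from the source):

```python
# Illustrative sketch only: build a kube-ovn-pinger command line from the
# parameter table above. The binary path and override values are hypothetical.
defaults = {"port": 8080, "interval": 5, "mode": "server",
            "external-address": "114.114.114.114"}
overrides = {"interval": 10, "external-dns": "example.com"}  # hypothetical

args = {**defaults, **overrides}
cmd = ["/kube-ovn/kube-ovn-pinger"] + [
    f"--{k}={v}" for k, v in sorted(args.items())
]
```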
This document lists the monitoring metrics exposed by Kube-OVN.

OVN status metrics:

Type | Metric | Description |
---|---|---|
Gauge | kube_ovn_ovn_status | OVN role status: (2) follower, (1) leader, (0) abnormal. |
Gauge | kube_ovn_failed_req_count | Number of failed OVN requests. |
Gauge | kube_ovn_log_file_size_bytes | Size of the OVN component log files. |
Gauge | kube_ovn_db_file_size_bytes | Size of the OVN component database files. |
Gauge | kube_ovn_chassis_info | OVN chassis status: (1) running, (0) stopped. |
Gauge | kube_ovn_db_status | OVN database status: (1) healthy, (0) unhealthy. |
Gauge | kube_ovn_logical_switch_info | OVN logical switch information; the value is always (1), with the logical switch name in the labels. |
Gauge | kube_ovn_logical_switch_external_id | OVN logical switch external_id information; the value is always (1), with the external-id contents in the labels. |
Gauge | kube_ovn_logical_switch_port_binding | Association between an OVN logical switch and its logical switch ports; the value is always (1), with the association expressed through labels. |
Gauge | kube_ovn_logical_switch_tunnel_key | Tunnel key associated with the OVN logical switch. |
Gauge | kube_ovn_logical_switch_ports_num | Number of logical ports on the OVN logical switch. |
Gauge | kube_ovn_logical_switch_port_info | OVN logical switch port information; the value is always (1), with the details in the labels. |
Gauge | kube_ovn_logical_switch_port_tunnel_key | Tunnel key associated with the OVN logical switch port. |
Gauge | kube_ovn_cluster_enabled | (1) the OVN database runs in clustered mode; (0) it does not. |
Gauge | kube_ovn_cluster_role | Role of each database instance; the value is always (1), with the role in the labels. |
Gauge | kube_ovn_cluster_status | Status of each database instance; the value is always (1), with the status in the labels. |
Gauge | kube_ovn_cluster_term | RAFT term. |
Gauge | kube_ovn_cluster_leader_self | Whether the current database instance is the leader: (1) yes, (0) no. |
Gauge | kube_ovn_cluster_vote_self | Whether the current database instance voted for itself as leader: (1) yes, (0) no. |
Gauge | kube_ovn_cluster_election_timer | Current election timer value. |
Gauge | kube_ovn_cluster_log_not_committed | Number of RAFT log entries not yet committed. |
Gauge | kube_ovn_cluster_log_not_applied | Number of RAFT log entries not yet applied. |
Gauge | kube_ovn_cluster_log_index_start | Start index of the current RAFT log. |
Gauge | kube_ovn_cluster_log_index_next | Next index of the RAFT log. |
Gauge | kube_ovn_cluster_inbound_connections_total | Number of inbound connections to the current instance. |
Gauge | kube_ovn_cluster_outbound_connections_total | Number of outbound connections from the current instance. |
Gauge | kube_ovn_cluster_inbound_connections_error_total | Number of inbound connection errors on the current instance. |
Gauge | kube_ovn_cluster_outbound_connections_error_total | Number of outbound connection errors on the current instance. |

`ovsdb` and `vswitchd` status metrics:

Type | Metric | Description |
---|---|---|
Gauge | ovs_status | OVS health status: (1) healthy, (0) unhealthy. |
Gauge | ovs_info | Basic OVS information; the value is always (1), with the details in the labels. |
Gauge | failed_req_count | Number of failed OVS requests. |
Gauge | log_file_size | Size of the OVS component log files. |
Gauge | db_file_size | Size of the OVS component database files. |
Gauge | datapath | Basic datapath information; the value is always (1), with the details in the labels. |
Gauge | dp_total | Number of datapaths in the current OVS. |
Gauge | dp_if | Basic datapath interface information; the value is always (1), with the details in the labels. |
Gauge | dp_if_total | Number of ports in the current datapath. |
Gauge | dp_flows_total | Number of flows in the datapath. |
Gauge | dp_flows_lookup_hit | Number of packets that hit an existing flow in the datapath. |
Gauge | dp_flows_lookup_missed | Number of packets that missed all existing flows in the datapath. |
Gauge | dp_flows_lookup_lost | Number of packets in the datapath that had to be sent to userspace for processing. |
Gauge | dp_masks_hit | Number of packets that hit an existing mask in the datapath. |
Gauge | dp_masks_total | Number of masks in the datapath. |
Gauge | dp_masks_hit_ratio | Ratio of packets hitting masks in the datapath. |
Gauge | interface | Basic OVS interface information; the value is always (1), with the details in the labels. |
Gauge | interface_admin_state | Interface administrative state: (0) down, (1) up, (2) other. |
Gauge | interface_link_state | Interface link state: (0) down, (1) up, (2) other. |
Gauge | interface_mac_in_use | MAC address used by the OVS interface. |
Gauge | interface_mtu | MTU used by the OVS interface. |
Gauge | interface_of_port | OpenFlow port ID associated with the OVS interface. |
Gauge | interface_if_index | Index associated with the OVS interface. |
Gauge | interface_tx_packets | Number of packets transmitted by the OVS interface. |
Gauge | interface_tx_bytes | Number of bytes transmitted by the OVS interface. |
Gauge | interface_rx_packets | Number of packets received by the OVS interface. |
Gauge | interface_rx_bytes | Number of bytes received by the OVS interface. |
Gauge | interface_rx_crc_err | Number of received packets with checksum errors on the OVS interface. |
Gauge | interface_rx_dropped | Number of received packets dropped by the OVS interface. |
Gauge | interface_rx_errors | Number of receive errors on the OVS interface. |
Gauge | interface_rx_frame_err | Number of receive frame errors on the OVS interface. |
Gauge | interface_rx_missed_err | Number of received packets missed by the OVS interface. |
Gauge | interface_rx_over_err | Number of receive overruns on the OVS interface. |
Gauge | interface_tx_dropped | Number of transmitted packets dropped by the OVS interface. |
Gauge | interface_tx_errors | Number of transmit errors on the OVS interface. |
Gauge | interface_collisions | Number of collisions on the OVS interface. |

Network-quality metrics:

Type | Metric | Description |
---|---|---|
Gauge | pinger_ovs_up | OVS on the node is running. |
Gauge | pinger_ovs_down | OVS on the node is stopped. |
Gauge | pinger_ovn_controller_up | ovn-controller on the node is running. |
Gauge | pinger_ovn_controller_down | ovn-controller on the node is stopped. |
Gauge | pinger_inconsistent_port_binding | Number of port bindings in OVN-SB that are inconsistent with the host's OVS interfaces. |
Gauge | pinger_apiserver_healthy | kube-ovn-pinger can reach the apiserver. |
Gauge | pinger_apiserver_unhealthy | kube-ovn-pinger cannot reach the apiserver. |
Histogram | pinger_apiserver_latency_ms | Latency of kube-ovn-pinger requests to the apiserver. |
Gauge | pinger_internal_dns_healthy | kube-ovn-pinger can resolve the internal DNS name. |
Gauge | pinger_internal_dns_unhealthy | kube-ovn-pinger cannot resolve the internal DNS name. |
Histogram | pinger_internal_dns_latency_ms | Latency of kube-ovn-pinger internal DNS resolution. |
Gauge | pinger_external_dns_health | kube-ovn-pinger can resolve the external DNS name. |
Gauge | pinger_external_dns_unhealthy | kube-ovn-pinger cannot resolve the external DNS name. |
Histogram | pinger_external_dns_latency_ms | Latency of kube-ovn-pinger external DNS resolution. |
Histogram | pinger_pod_ping_latency_ms | Latency of kube-ovn-pinger pings to Pods. |
Gauge | pinger_pod_ping_lost_total | Number of kube-ovn-pinger pings to Pods that were lost. |
Gauge | pinger_pod_ping_count_total | Number of kube-ovn-pinger pings to Pods. |
Histogram | pinger_node_ping_latency_ms | Latency of kube-ovn-pinger pings to Nodes. |
Gauge | pinger_node_ping_lost_total | Number of kube-ovn-pinger pings to Nodes that were lost. |
Gauge | pinger_node_ping_count_total | Number of kube-ovn-pinger pings to Nodes. |
Histogram | pinger_external_ping_latency_ms | Latency of kube-ovn-pinger pings to the external address. |
Gauge | pinger_external_lost_total | Number of kube-ovn-pinger pings to the external address that were lost. |

`kube-ovn-controller` metrics:

Type | Metric | Description |
---|---|---|
Histogram | rest_client_request_latency_seconds | Latency of requests to the apiserver. |
Counter | rest_client_requests_total | Number of requests to the apiserver. |
Counter | lists_total | Number of API list requests. |
Summary | list_duration_seconds | Latency of API list requests. |
Summary | items_per_list | Number of items returned per API list request. |
Counter | watches_total | Number of API watch requests. |
Counter | short_watches_total | Number of short API watch requests. |
Summary | watch_duration_seconds | Duration of API watches. |
Summary | items_per_watch | Number of items returned per API watch. |
Gauge | last_resource_version | Latest resource version. |
Histogram | ovs_client_request_latency_milliseconds | Latency of requests to OVN components. |
Gauge | subnet_available_ip_count | Number of available IPs in the subnet. |
Gauge | subnet_used_ip_count | Number of used IPs in the subnet. |

`kube-ovn-cni` metrics:

Type | Metric | Description |
---|---|---|
Histogram | cni_op_latency_seconds | Latency of CNI operations. |
Counter | cni_wait_address_seconds_total | Time the CNI spent waiting for addresses to become ready. |
Counter | cni_wait_connectivity_seconds_total | Time the CNI spent waiting for connectivity to become ready. |
Counter | cni_wait_route_seconds_total | Time the CNI spent waiting for routes to become ready. |
Histogram | rest_client_request_latency_seconds | Latency of requests to the apiserver. |
Counter | rest_client_requests_total | Number of requests to the apiserver. |
Counter | lists_total | Number of API list requests. |
Summary | list_duration_seconds | Latency of API list requests. |
Summary | items_per_list | Number of items returned per API list request. |
Counter | watches_total | Number of API watch requests. |
Counter | short_watches_total | Number of short API watch requests. |
Summary | watch_duration_seconds | Duration of API watches. |
Summary | items_per_watch | Number of items returned per API watch. |
Gauge | last_resource_version | Latest resource version. |
Histogram | ovs_client_request_latency_milliseconds | Latency of requests to OVN components. |
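All of the metrics above are exposed in the Prometheus text exposition format, so they can be scraped and inspected without any special client. A minimal parsing sketch in Python; the sample payload is fabricated for illustration:

```python
# Minimal parser for the Prometheus text exposition format, applied to a
# fabricated sample of the kube_ovn_* metrics listed above.
import re

SAMPLE = """\
# TYPE kube_ovn_ovn_status gauge
kube_ovn_ovn_status 1
# TYPE kube_ovn_cluster_leader_self gauge
kube_ovn_cluster_leader_self{server_id="a1b2"} 1
kube_ovn_db_status 1
"""

# One sample line: metric name, optional {labels}, whitespace, value.
LINE_RE = re.compile(r'^(\w+)(?:\{[^}]*\})?\s+(\S+)$')

def parse_metrics(text):
    """Return {metric_name: float_value}, ignoring comments and labels."""
    out = {}
    for line in text.splitlines():
        if line.startswith('#') or not line.strip():
            continue
        m = LINE_RE.match(line)
        if m:
            out[m.group(1)] = float(m.group(2))
    return out

metrics = parse_metrics(SAMPLE)
# kube_ovn_ovn_status: (2) follower, (1) leader, (0) abnormal
assert metrics['kube_ovn_ovn_status'] == 1.0   # this instance is the leader
assert metrics['kube_ovn_db_status'] == 1.0    # database healthy
```

In a cluster, the same function could be fed the body of an HTTP GET against a component's metrics port (e.g. 10661/tcp for kube-ovn-monitor).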
Upstream OVN/OVS was originally designed as a general-purpose SDN controller and data plane. Because Kubernetes networking has some special usage patterns, and Kube-OVN relies on only a subset of the functionality, Kube-OVN carries some modifications to upstream OVN/OVS in order to achieve better performance, stability, and specific features. Users who run their own OVN/OVS together with the Kube-OVN controller should be aware of the possible impact of the changes below.

Modifications not yet merged upstream:

Modifications already merged upstream:
Kube-OVN uses OVN/OVS as its data plane and currently supports three tunnel encapsulation protocols: `Geneve`, `Vxlan`, and `STT`. The three differ in functionality, performance, and ease of use. This document describes how they differ in practice so that users can choose according to their own situation.

`Geneve` is the default tunnel protocol selected when deploying Kube-OVN, and is also the tunnel protocol OVN recommends by default. It is widely supported in the kernel and can be accelerated by the generic offload capabilities of modern NICs. Because `Geneve` has a variable-length header, it can use a 24-bit space to identify different datapaths, allowing users to create a larger number of virtual networks.

For OVS offload with Mellanox or Corigine SmartNICs, `Geneve` requires a relatively recent kernel: an upstream kernel above 5.4, or another compatible kernel with the feature backported.

Because it encapsulates in UDP, the protocol cannot make good use of modern NICs' TCP-related offloads when handling TCP over UDP, and it consumes more CPU when handling large packets.

`Vxlan` is a protocol recently supported by upstream OVN. It is widely supported in the kernel and can be accelerated by the generic offload capabilities of modern NICs. Because the protocol's header length is limited and OVN needs extra space for orchestration, the number of datapaths is limited: at most 4096 datapaths can be created, with at most 4096 ports under each datapath. Also, due to the limited space, `inport`-based ACLs are not supported.

For OVS offload with Mellanox or Corigine SmartNICs, `Vxlan` offload is already supported in common kernels.

Because it encapsulates in UDP, the protocol cannot make good use of modern NICs' TCP-related offloads when handling TCP over UDP, and it consumes more CPU when handling large packets.

`STT` is a tunnel protocol that OVN has supported since early on. It uses a TCP-like header, which can take full advantage of the generic TCP offload capabilities of modern NICs and significantly increase TCP throughput. Its longer header also supports the complete set of OVN capabilities and large-scale datapaths.

The protocol is not supported in the kernel; to use it you must compile the OVS kernel module separately, and recompile the module against each new kernel version when upgrading.

The protocol is currently not supported by SmartNICs, so OVS offload capabilities cannot be used with it.
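The datapath-count difference between the two UDP protocols comes straight from the header arithmetic: Geneve's 24-bit ID space is flat, while (per the limits quoted above) OVN effectively splits the VXLAN VNI into a 12-bit datapath part and a 12-bit port part. A quick sanity check of those numbers, assuming that split:

```python
# Geneve: full 24-bit identifier space available for datapaths.
geneve_datapaths = 2 ** 24

# VXLAN as used by OVN (per the limits quoted above): the 24-bit VNI is
# effectively split, leaving 4096 datapaths with 4096 ports each.
vxlan_datapaths = 2 ** 12
vxlan_ports_per_datapath = 2 ** 12

assert geneve_datapaths == 16_777_216
assert vxlan_datapaths == 4096
assert vxlan_ports_per_datapath == 4096

# The practical point: under this split Geneve allows 4096x more
# virtual networks than VXLAN.
assert geneve_datapaths // vxlan_datapaths == 4096
```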
This document describes the forwarding paths that traffic takes in different scenarios under Underlay mode.

Packets are switched directly by the internal logical switch and never enter the external network.

Packets enter the external switch through the node NIC and are switched by the external switch.

Packets enter the external network through the node NIC and are switched and routed by the external switch and router.

Here br-provider-1 and br-provider-2 can be the same OVS bridge; that is, multiple different subnets can share one Provider Network.

Packets enter the external network through the node NIC and are switched and routed by the external switch and router.

Packets enter the external network through the node NIC and are switched and routed by the external switch and router.

Communication between nodes and Pods largely follows the same logic.

Kube-OVN configures a load balancer for every Kubernetes Service on each subnet's logical switch. When a Pod accesses another Pod through a Service IP, it constructs a packet whose destination address is the Service IP and whose destination MAC is the gateway's MAC address. After the packet enters the logical switch, the load balancer intercepts it and applies DNAT, rewriting the destination IP and port to those of one of the Service's Endpoints. Since the logical switch does not modify the packet's L2 destination MAC, after entering the external switch the packet is still delivered to the external gateway, which must then forward it.
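The DNAT step described above can be pictured as a function from (Service VIP, port) to one concrete Endpoint. The following is a conceptual sketch only, not Kube-OVN's actual OVN load-balancer implementation; the VIP, endpoints, and hash choice are all illustrative:

```python
# Conceptual model of the logical-switch load balancer's DNAT step:
# rewrite the destination of a packet addressed to a Service VIP to one
# of the Service's endpoints. Real OVN selects endpoints in the datapath;
# the hash used here is only for illustration.
import hashlib

SERVICE_BACKENDS = {
    ("10.96.0.10", 53): [("10.16.0.5", 53), ("10.16.0.6", 53)],
}

def dnat(src, dst):
    """Return the (possibly rewritten) destination for a packet src -> dst."""
    backends = SERVICE_BACKENDS.get(dst)
    if not backends:
        return dst  # not a Service VIP: leave the packet untouched
    # Hash the source so one client consistently reaches one endpoint.
    h = int(hashlib.sha256(repr(src).encode()).hexdigest(), 16)
    return backends[h % len(backends)]

client = ("10.16.0.9", 40000)
chosen = dnat(client, ("10.96.0.10", 53))
assert chosen in SERVICE_BACKENDS[("10.96.0.10", 53)]
# The same client always maps to the same endpoint:
assert dnat(client, ("10.96.0.10", 53)) == chosen
# Note: only the L3/L4 destination is rewritten; as the text explains, the
# destination MAC still points at the gateway, which is why the external
# gateway must forward the packet in Underlay mode.
```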
Kube-OVN is a CNI-compliant network component; it depends on a Kubernetes environment and the corresponding kernel networking modules. Below are the tested operating systems and software versions, the environment configuration, and the ports that need to be open.

The kernel must have the `geneve`, `openvswitch`, `ip_tables`, and `iptable_nat` modules loaded; Kube-OVN depends on these modules to work properly.

Notes:

- On CentOS, a bug in the kernel `netfilter` module can break the load balancer built into Kube-OVN. The kernel must be upgraded; we recommend the latest kernel of the corresponding official CentOS release to keep the system secure. See the kernel bug report Floating IPs broken after kernel upgrade to Centos/RHEL 7.5 - DNAT not working.
- If the `openvswitch` module shipped with the kernel has problems, upgrade the kernel or manually compile a newer `openvswitch` module to replace it.
- Check the kernel boot parameters with `cat /proc/cmdline`; see the kernel bug report Geneve tunnels don't work when ipv6 is disabled. If `ipv6.disable=1` is set, it must be changed to 0.
- `kube-proxy` must be working properly so that Kube-OVN can reach `kube-apiserver` via its Service ClusterIP.
- The kubelet must be configured with `--network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d`.
- There must be no other network plugin configuration files under `/etc/cni/net.d/`. If another network plugin was installed previously, it is recommended to delete its configuration and reboot the machine to clean up residual network resources.

Component | Port | Purpose |
---|---|---|
ovn-central | 6641/tcp, 6642/tcp, 6643/tcp, 6644/tcp | ovn-db and raft server listening ports |
ovs-ovn | Geneve 6081/udp, STT 7471/tcp, Vxlan 4789/udp | Tunnel ports |
kube-ovn-controller | 10660/tcp | Metrics listening port |
kube-ovn-daemon | 10665/tcp | Metrics listening port |
kube-ovn-monitor | 10661/tcp | Metrics listening port |
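Whether the required kernel modules are loaded can be checked by parsing `lsmod`-style output (`/proc/modules` uses the same first column). A small sketch; the sample output below is fabricated:

```python
# Check that the kernel modules Kube-OVN depends on are loaded, given
# lsmod-style text (first whitespace-separated column is the module name).
REQUIRED = {"geneve", "openvswitch", "ip_tables", "iptable_nat"}

def missing_modules(lsmod_output, required=REQUIRED):
    loaded = {line.split()[0] for line in lsmod_output.splitlines() if line.split()}
    return sorted(required - loaded)

SAMPLE_LSMOD = """\
Module                  Size  Used by
geneve                 28672  0
openvswitch           155648  4
ip_tables              28672  2
"""

# iptable_nat is absent from the fabricated sample, so it is reported.
assert missing_modules(SAMPLE_LSMOD) == ["iptable_nat"]
```

On a real node, the same check could be run as `missing_modules(open("/proc/modules").read())`; an empty result means all four modules are loaded.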