
Cluster Inter-Connection with OVN-IC

Kube-OVN supports interconnecting the Pod networks of two Kubernetes clusters via OVN-IC, so that Pods in the two clusters can communicate directly using Pod IPs. Kube-OVN encapsulates cross-cluster traffic in tunnels, so the container networks can be interconnected as long as there is a set of IP-reachable machines between the two clusters.

This multi-cluster interconnection mode is an Overlay network feature; for Underlay networks, the underlying infrastructure must provide the inter-cluster connectivity.

Prerequisites

  1. In auto-interconnect mode, the subnet CIDRs of the different clusters must not overlap; the default subnets need to be configured with non-overlapping ranges at installation time. If overlaps exist, refer to the manual interconnection process below, which can only connect the non-overlapping subnets (see the check sketched after this list).
  2. A set of machines that can be reached by the kube-ovn-controller of every cluster via IP is required, to deploy the controller for cross-cluster interconnection.
  3. Each cluster needs a set of machines that can reach each other across clusters via IP, to act as the gateway nodes.
  4. This feature only works for the default VPC; custom VPCs cannot use the interconnection feature.
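
A quick way to compare the subnet CIDRs of the two clusters is to list them from the Kube-OVN Subnet CRD in each cluster. This is only a sketch; it assumes the standard spec.cidrBlock field and must be run against each cluster's kubeconfig:

# Run in each cluster and compare the CIDR columns manually.
kubectl get subnets.kubeovn.io -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidrBlock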

Deploy a single-node OVN-IC DB

Single-node deployment solution 1

Solution 1 is recommended; it is supported since Kube-OVN v1.11.16.

This method does not distinguish between a "single-node" and a "multi-node high-availability" deployment: the controller runs as a Deployment on the master nodes. One master node gives a single-node deployment; multiple master nodes give a multi-node high-availability deployment.

First download the install-ic-server.sh script with the following command:

wget https://raw.githubusercontent.com/kubeovn/kube-ovn/master/dist/images/install-ic-server.sh

Run the installation, where TS_NUM is the number of ECMP paths between the interconnected clusters:

sed 's/VERSION=.*/VERSION=v1.13.0/' install-ic-server.sh | TS_NUM=3 bash

On success, the output looks like this:

deployment.apps/ovn-ic-server created
Waiting for deployment spec update to be observed...
Waiting for deployment "ovn-ic-server" rollout to finish: 0 out of 3 new replicas have been updated...
Waiting for deployment "ovn-ic-server" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "ovn-ic-server" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "ovn-ic-server" rollout to finish: 2 of 3 updated replicas are available...
deployment "ovn-ic-server" successfully rolled out
OVN IC Server installed Successfully

You can check the status of the interconnection controller with the kubectl ko icsbctl show command:

kubectl ko icsbctl show
availability-zone az0
    gateway 059b5c54-c540-4d77-b009-02d65f181a02
        hostname: kube-ovn-worker
        type: geneve
            ip: 172.18.0.3
        port ts-az0
            transit switch: ts
            address: ["00:00:00:B4:8E:BE 169.254.100.97/24"]
    gateway 74ee4b9a-ba48-4a07-861e-1a8e4b9f905f
        hostname: kube-ovn-worker2
        type: geneve
            ip: 172.18.0.2
        port ts1-az0
            transit switch: ts1
            address: ["00:00:00:19:2E:F7 169.254.101.90/24"]
    gateway 7e2428b6-344c-4dd5-a0d5-972c1ccec581
        hostname: kube-ovn-control-plane
        type: geneve
            ip: 172.18.0.4
        port ts2-az0
            transit switch: ts2
            address: ["00:00:00:EA:32:BA 169.254.102.103/24"]
availability-zone az1
    gateway 034da7cb-3826-4318-81ce-6a877a9bf285
        hostname: kube-ovn1-worker
        type: geneve
            ip: 172.18.0.6
        port ts-az1
            transit switch: ts
            address: ["00:00:00:25:3A:B9 169.254.100.51/24"]
    gateway 2531a683-283e-4fb8-a619-bdbcb33539b8
        hostname: kube-ovn1-worker2
        type: geneve
            ip: 172.18.0.5
        port ts1-az1
            transit switch: ts1
            address: ["00:00:00:52:87:F4 169.254.101.118/24"]
    gateway b0efb0be-e5a7-4323-ad4b-317637a757c4
        hostname: kube-ovn1-control-plane
        type: geneve
            ip: 172.18.0.8
        port ts2-az1
            transit switch: ts2
            address: ["00:00:00:F6:93:1A 169.254.102.17/24"]

Single-node deployment solution 2

Deploy the OVN-IC DB on a machine that the kube-ovn-controller of each cluster can reach via IP; this node will store the network configuration synchronized from each cluster.

In an environment with docker, the OVN-IC DB can be started with the following command:

docker run --name=ovn-ic-db -d --env "ENABLE_OVN_LEADER_CHECK="false"" --network=host --privileged  -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh

For environments that use containerd instead of docker, use the following command:

ctr -n k8s.io run -d --env "ENABLE_OVN_LEADER_CHECK="false"" --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 

Automatic Routing Mode

With automatic routing, each cluster synchronizes the Subnet CIDRs under its own default VPC to OVN-IC, so make sure the Subnet CIDRs of the two clusters do not overlap.

Create the ovn-ic-config ConfigMap in the kube-system Namespace:

apiVersion: v1
 kind: ConfigMap
 metadata:
         addresses: ["00:00:00:FB:2A:F7 169.254.100.79/24"]        
 

From the output above, the remote address from cluster az1 to cluster az2 is 169.254.100.31, and the remote address from az2 to az1 is 169.254.100.79.

Next, set the routes manually. In this example, the subnet CIDR inside cluster az1 is 10.16.0.0/24 and the subnet CIDR inside cluster az2 is 10.17.0.0/24.

In cluster az1, set the route to cluster az2:

kubectl ko nbctl lr-route-add ovn-cluster 10.17.0.0/24 169.254.100.31
 

In cluster az2, set the route to cluster az1:

kubectl ko nbctl lr-route-add ovn-cluster 10.16.0.0/24 169.254.100.79
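
To verify that the static routes were added, you can list the routes on the ovn-cluster logical router in each cluster, for example:

kubectl ko nbctl lr-route-list ovn-cluster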


Highly Available OVN-IC DB Deployment

High availability deployment solution 1

Solution 1 is recommended; it is supported since Kube-OVN v1.11.16.

The method is the same as single-node deployment solution 1.

High availability deployment solution 2

The OVN-IC DBs can form a highly available cluster via the Raft protocol; this deployment mode requires at least 3 nodes.

First start the leader of the OVN-IC DB on the first node.

Users with a docker environment can use the following command:

docker run --name=ovn-ic-db -d --env "ENABLE_OVN_LEADER_CHECK="false"" --network=host --privileged -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="192.168.65.3"  -e NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"   kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh

If you are using containerd, you can use the following command:

ctr -n k8s.io run -d --env "ENABLE_OVN_LEADER_CHECK="false"" --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw"  --env="NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"" --env="LOCAL_IP="192.168.65.3"" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 
  • LOCAL_IP: the IP address of the node where the current container runs.
  • NODE_IPS: the IP addresses of the three nodes running the OVN-IC database, separated by commas.

Next, deploy the followers of the OVN-IC DB on the other two nodes.

Users with a docker environment can use the following command:

docker run --name=ovn-ic-db -d --network=host --privileged -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="192.168.65.2"  -e NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1" -e LEADER_IP="192.168.65.3"  kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh
 

If you are using containerd, you can use the following command:

ctr -n k8s.io run -d --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw"  --env="NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"" --env="LOCAL_IP="192.168.65.2"" --env="LEADER_IP="192.168.65.3"" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 
  • LOCAL_IP: the IP address of the node where the current container runs.
  • NODE_IPS: the IP addresses of the three nodes running the OVN-IC database, separated by commas.
  • LEADER_IP: the IP address of the node running the OVN-IC DB leader.
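
After starting the three containers, it is worth confirming that the databases actually came up. A minimal check, assuming the ovn-ic-db container name used above (the ovn-ic-sbctl call relies on the default local socket inside the container and may need adjusting for your image):

# Inspect the startup logs of the IC database container
docker logs ovn-ic-db
# Optionally query the IC southbound DB from inside the container
docker exec ovn-ic-db ovn-ic-sbctl show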

When creating ovn-ic-config in each cluster, specify multiple OVN-IC database node addresses:

apiVersion: v1
   ic-sb-port: "6646"
   gw-nodes: "az1-gw"
   auto-route: "true"
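
For reference, a complete ovn-ic-config for the HA case might look roughly like the following sketch. The keys follow the ones shown in this document (ic-sb-port, gw-nodes, auto-route) plus the commonly documented interconnection keys; the az-name, gateway node, and IP values are examples taken from the commands above, so double-check the keys and values against your Kube-OVN version:

kubectl -n kube-system apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic-config
  namespace: kube-system
data:
  enable-ic: "true"
  az-name: "az1"                                        # must be unique per cluster
  ic-db-host: "192.168.65.3,192.168.65.2,192.168.65.1"  # all OVN-IC DB node IPs
  ic-nb-port: "6645"
  ic-sb-port: "6646"
  gw-nodes: "az1-gw"                                    # gateway node(s) of this cluster
  auto-route: "true"
EOF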


Support for inter-cluster ECMP

This requires that the controller was deployed following single-node deployment solution 1.

That solution supports inter-cluster ECMP by default, with 3 ECMP paths. The number of paths can be changed with the following command:

kubectl edit deployment ovn-ic-server -n kube-system

Modify the value of the TS_NUM environment variable; TS_NUM is the number of ECMP paths used between the two clusters.
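
The same change can also be made non-interactively; for example, to use 5 ECMP paths (5 is only an illustrative value):

kubectl -n kube-system set env deployment/ovn-ic-server TS_NUM=5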

Manual reset

In some cases, the whole interconnection configuration needs to be cleaned up because of configuration errors; follow the steps below to clean up the environment.

Delete the current ovn-ic-config ConfigMap:

kubectl -n kube-system delete cm ovn-ic-config
 

Delete the ts logical switch:

kubectl ko nbctl ls-del ts
 

Repeat the same steps in the peer cluster.

Modify the az-name

The az-name field in the ovn-ic-config ConfigMap can be changed directly with kubectl edit. However, the following command must then be run in every ovn-cni Pod, otherwise cross-cluster traffic may be interrupted for up to 10 minutes.

ovn-appctl -t ovn-controller inc-engine/recompute
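
One way to run this on every CNI Pod is a small loop over kubectl exec. This is only a sketch: it assumes the CNI DaemonSet Pods run in kube-system and carry the app=kube-ovn-cni label, which may differ in your deployment:

for pod in $(kubectl -n kube-system get pods -l app=kube-ovn-cni -o name); do
  kubectl -n kube-system exec "$pod" -- ovn-appctl -t ovn-controller inc-engine/recompute
done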
 

Clean up the cluster interconnection

Delete the ovn-ic-config ConfigMap in all clusters:

kubectl -n kube-system delete cm ovn-ic-config

Delete the ts logical switches of all clusters:

kubectl ko nbctl ls-del ts

If the controller was deployed with docker, run:

docker stop ovn-ic-db
 docker rm ovn-ic-db
 

If the controller was deployed with containerd, run:

ctr -n k8s.io task kill ovn-ic-db
 ctr -n k8s.io containers rm ovn-ic-db

Cluster Inter-Connection with OVN-IC

Kube-OVN supports interconnecting the Pod networks of two Kubernetes clusters via OVN-IC, so that Pods in the two clusters can communicate directly using Pod IPs. Kube-OVN uses tunnels to encapsulate cross-cluster traffic, allowing the container networks of the two clusters to interconnect as long as there is a set of IP-reachable machines between them.

This mode of multi-cluster interconnection applies to Overlay networks; for Underlay networks, the underlying infrastructure must provide the inter-cluster connectivity.

Prerequisites

  1. The subnet CIDRs of the different clusters cannot overlap with each other in auto-interconnect mode; the default subnets must be configured with non-overlapping ranges at installation time. If there is overlap, you need to refer to the subsequent manual interconnection process, which can only connect non-overlapping Subnets (see the check sketched after this list).
  2. A set of machines needs to exist that can be reached by the kube-ovn-controller of each cluster via IP, used to deploy the controller that interconnects the clusters.
  3. Each cluster needs to have a set of machines that can access each other across clusters via IP as the gateway nodes.
  4. This solution only applies to the Kubernetes default VPC; custom VPCs cannot use the interconnection feature.
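
To compare the subnet CIDRs of the two clusters, you can list them from the Kube-OVN Subnet CRD in each cluster. This is only a sketch; it assumes the standard spec.cidrBlock field and must be run against each cluster's kubeconfig:

# Run in each cluster and compare the CIDR columns manually.
kubectl get subnets.kubeovn.io -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidrBlock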

Deploy a single-node OVN-IC DB

Single node deployment solution 1

Solution 1 is recommended; it is supported since Kube-OVN v1.11.16.

This method does not distinguish between a "single-node" and a "multi-node high-availability" deployment: the controller is deployed as a Deployment on the master nodes. With one master node it is a single-node deployment; with multiple master nodes it is a multi-node high-availability deployment.

First download the install-ic-server.sh script with the following command:

wget https://raw.githubusercontent.com/kubeovn/kube-ovn/master/dist/images/install-ic-server.sh

Run the installation, where TS_NUM is the number of ECMP paths between the interconnected clusters:

sed 's/VERSION=.*/VERSION=v1.13.0/' install-ic-server.sh | TS_NUM=3 bash

The output of successful execution is as follows:

deployment.apps/ovn-ic-server created
Waiting for deployment spec update to be observed...
Waiting for deployment "ovn-ic-server" rollout to finish: 0 out of 3 new replicas have been updated...
Waiting for deployment "ovn-ic-server" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "ovn-ic-server" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "ovn-ic-server" rollout to finish: 2 of 3 updated replicas are available...
deployment "ovn-ic-server" successfully rolled out
OVN IC Server installed Successfully

You can check the status of the interconnection controller with the kubectl ko icsbctl show command:

kubectl ko icsbctl show
availability-zone az0
    gateway 059b5c54-c540-4d77-b009-02d65f181a02
        hostname: kube-ovn-worker
        type: geneve
            ip: 172.18.0.3
        port ts-az0
            transit switch: ts
            address: ["00:00:00:B4:8E:BE 169.254.100.97/24"]
    gateway 74ee4b9a-ba48-4a07-861e-1a8e4b9f905f
        hostname: kube-ovn-worker2
        type: geneve
            ip: 172.18.0.2
        port ts1-az0
            transit switch: ts1
            address: ["00:00:00:19:2E:F7 169.254.101.90/24"]
    gateway 7e2428b6-344c-4dd5-a0d5-972c1ccec581
        hostname: kube-ovn-control-plane
        type: geneve
            ip: 172.18.0.4
        port ts2-az0
            transit switch: ts2
            address: ["00:00:00:EA:32:BA 169.254.102.103/24"]
availability-zone az1
    gateway 034da7cb-3826-4318-81ce-6a877a9bf285
        hostname: kube-ovn1-worker
        type: geneve
            ip: 172.18.0.6
        port ts-az1
            transit switch: ts
            address: ["00:00:00:25:3A:B9 169.254.100.51/24"]
    gateway 2531a683-283e-4fb8-a619-bdbcb33539b8
        hostname: kube-ovn1-worker2
        type: geneve
            ip: 172.18.0.5
        port ts1-az1
            transit switch: ts1
            address: ["00:00:00:52:87:F4 169.254.101.118/24"]
    gateway b0efb0be-e5a7-4323-ad4b-317637a757c4
        hostname: kube-ovn1-control-plane
        type: geneve
            ip: 172.18.0.8
        port ts2-az1
            transit switch: ts2
            address: ["00:00:00:F6:93:1A 169.254.102.17/24"]

Single node deployment solution 2

Deploy the OVN-IC DB on a machine that the kube-ovn-controller of each cluster can reach via IP; this DB will store the network configuration synchronized from each cluster.

An environment with docker can start the OVN-IC DB with the following command:

docker run --name=ovn-ic-db -d --env "ENABLE_OVN_LEADER_CHECK="false"" --network=host --privileged  -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh

For environments that use containerd instead of docker, you can use the following command:

ctr -n k8s.io run -d --env "ENABLE_OVN_LEADER_CHECK="false"" --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 

Automatic Routing Mode

In auto-routing mode, each cluster synchronizes the CIDR information of the Subnet under its own default VPC to OVN-IC, so make sure there is no overlap between the Subnet CIDRs of the two clusters.

Create ovn-ic-config ConfigMap in kube-system Namespace:

apiVersion: v1
 kind: ConfigMap
 metadata:
         addresses: ["00:00:00:FB:2A:F7 169.254.100.79/24"]        
 

The output above shows that the remote address from cluster az1 to cluster az2 is 169.254.100.31 and the remote address from az2 to az1 is 169.254.100.79.

In this example, the subnet CIDR within cluster az1 is 10.16.0.0/24 and the subnet CIDR within cluster az2 is 10.17.0.0/24.

Set up a route from cluster az1 to cluster az2 in cluster az1:

kubectl ko nbctl lr-route-add ovn-cluster 10.17.0.0/24 169.254.100.31
 

Set up a route to cluster az1 in cluster az2:

kubectl ko nbctl lr-route-add ovn-cluster 10.16.0.0/24 169.254.100.79
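
To verify that the static routes were added, you can list the routes on the ovn-cluster logical router in each cluster, for example:

kubectl ko nbctl lr-route-list ovn-cluster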


Highly Available OVN-IC DB Installation

High availability deployment solution 1

Solution 1 is recommended; it is supported since Kube-OVN v1.11.16.

The method is the same as Single node deployment solution 1.

High availability deployment solution 2

The OVN-IC DBs can form a highly available cluster via the Raft protocol; this deployment mode requires at least 3 nodes.

First start the leader of the OVN-IC DB on the first node.

Users deploying a docker environment can use the following command:

docker run --name=ovn-ic-db -d --env "ENABLE_OVN_LEADER_CHECK="false"" --network=host --privileged -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="192.168.65.3"  -e NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"   kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh

If you are using containerd you can use the following command:

ctr -n k8s.io run -d --env "ENABLE_OVN_LEADER_CHECK="false"" --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw"  --env="NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"" --env="LOCAL_IP="192.168.65.3"" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 
  • LOCAL_IP: The IP address of the node where the current container is located.
  • NODE_IPS: The IP addresses of the three nodes running the OVN-IC database, separated by commas.

Next, deploy the follower of the OVN-IC DB on the other two nodes.

A docker environment can use the following command:

docker run --name=ovn-ic-db -d --network=host --privileged -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="192.168.65.2"  -e NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1" -e LEADER_IP="192.168.65.3"  kubeovn/kube-ovn:v1.13.0 bash start-ic-db.sh
 

If using containerd you can use the following command:

ctr -n k8s.io run -d --net-host --privileged --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw"  --env="NODE_IPS="192.168.65.3,192.168.65.2,192.168.65.1"" --env="LOCAL_IP="192.168.65.2"" --env="LEADER_IP="192.168.65.3"" docker.io/kubeovn/kube-ovn:v1.13.0 ovn-ic-db bash start-ic-db.sh
 
  • LOCAL_IP: The IP address of the node where the current container is located.
  • NODE_IPS: The IP addresses of the three nodes running the OVN-IC database, separated by commas.
  • LEADER_IP: The IP address of the OVN-IC DB leader node.
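
After starting the three containers, it is worth confirming that the databases actually came up. A minimal check, assuming the ovn-ic-db container name used above (the ovn-ic-sbctl call relies on the default local socket inside the container and may need adjusting for your image):

# Inspect the startup logs of the IC database container
docker logs ovn-ic-db
# Optionally query the IC southbound DB from inside the container
docker exec ovn-ic-db ovn-ic-sbctl show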

Specify multiple OVN-IC database node addresses when creating ovn-ic-config for each cluster:

apiVersion: v1
   ic-sb-port: "6646"
   gw-nodes: "az1-gw"
   auto-route: "true"
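
For reference, a complete ovn-ic-config for the HA case might look roughly like the following sketch. The keys follow the ones shown in this document (ic-sb-port, gw-nodes, auto-route) plus the commonly documented interconnection keys; the az-name, gateway node, and IP values are examples taken from the commands above, so double-check the keys and values against your Kube-OVN version:

kubectl -n kube-system apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic-config
  namespace: kube-system
data:
  enable-ic: "true"
  az-name: "az1"                                        # must be unique per cluster
  ic-db-host: "192.168.65.3,192.168.65.2,192.168.65.1"  # all OVN-IC DB node IPs
  ic-nb-port: "6645"
  ic-sb-port: "6646"
  gw-nodes: "az1-gw"                                    # gateway node(s) of this cluster
  auto-route: "true"
EOF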


Support cluster interconnection ECMP

This assumes the controller was deployed following Single node deployment solution 1.

That solution supports inter-cluster ECMP by default, with 3 ECMP paths. The number of ECMP paths can be changed with the following command:

kubectl edit deployment ovn-ic-server -n kube-system

Modify the value of the TS_NUM environment variable; TS_NUM is the number of ECMP paths used between the two clusters.
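
The same change can also be made non-interactively; for example, to use 5 ECMP paths (5 is only an illustrative value):

kubectl -n kube-system set env deployment/ovn-ic-server TS_NUM=5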

Manual Reset

In some cases, the entire interconnection configuration needs to be cleaned up due to configuration errors; you can refer to the following steps to clean up your environment.

Delete the current ovn-ic-config Configmap:

kubectl -n kube-system delete cm ovn-ic-config
 

Delete ts logical switch:

kubectl ko nbctl ls-del ts
 

Repeat the same steps at the peer cluster.

Clean OVN-IC

Delete the ovn-ic-config Configmap for all clusters:

kubectl -n kube-system delete cm ovn-ic-config
 

Delete all clusters' ts logical switches:

kubectl ko nbctl ls-del ts
If the controller was deployed with docker, run:

docker stop ovn-ic-db
 docker rm ovn-ic-db
 

If the controller was deployed with containerd, run:

ctr -n k8s.io task kill ovn-ic-db
 ctr -n k8s.io containers rm ovn-ic-db