



Cluster Inter-Connection with OVN-IC

Kube-OVN supports interconnecting the Pod networks of two Kubernetes clusters via OVN-IC; after interconnection, Pods in the two clusters can communicate with each other directly by Pod IP. Kube-OVN encapsulates cross-cluster traffic in tunnels, so the container networks of the two clusters can interconnect as long as there is a set of mutually IP-reachable machines between them.

This mode of multi-cluster interconnection applies to Overlay networks only; for Underlay networks, the underlying infrastructure must provide the interconnection.

Prerequisites

1. Clusters deployed with versions after 1.11.16 have the cluster-interconnection switch turned off by default. You need to set the following variable in the deployment script install.sh:

   ENABLE_IC=true

   After enabling the switch and deploying the cluster, the Deployment ovn-ic-controller will appear.

2. In auto-interconnect mode, the subnet CIDRs of the different clusters cannot overlap with each other; the default subnets must be configured with non-overlapping ranges at installation time. If overlaps exist, refer to the manual interconnection process described later, which can only connect the non-overlapping subnets.

3. A set of machines must exist that the kube-ovn-controller of each cluster can reach via IP; they are used to deploy the controller that interconnects the clusters.

4. Each cluster needs a set of machines that can reach each other across clusters via IP; these will serve as the gateway nodes.

5. This feature only works for the default VPC; user-defined VPCs cannot use the interconnection.
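For prerequisite 2, overlap between two IPv4 CIDRs can be checked mechanically before installation. The helper below is a minimal pure-shell sketch (the two example CIDRs are placeholders; substitute your clusters' default subnet CIDRs):

```shell
# Pure-shell IPv4 CIDR overlap check (sketch; example CIDRs are placeholders).

ip2int() {            # dotted-quad IPv4 -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidrs_overlap() {     # exit 0 if the two CIDRs share any address
  local n1=${1%/*} l1=${1#*/} n2=${2%/*} l2=${2#*/}
  local len=$(( l1 < l2 ? l1 : l2 ))
  local mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}

if cidrs_overlap "10.16.0.0/16" "10.17.0.0/16"; then
  echo "CIDRs overlap: reconfigure the default subnets before interconnecting"
else
  echo "CIDRs do not overlap"
fi
```

The comparison masks both networks with the shorter of the two prefix lengths; if the masked networks match, the ranges intersect.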

Deploy a single-node OVN-IC DB

Single node deployment solution 1

Solution 1 is recommended; it is supported since Kube-OVN v1.11.16.

This method does not distinguish between "single-node" and "multi-node high-availability" deployments: the controller is deployed as a Deployment on the master nodes. With one master node it is a single-node deployment; with multiple master nodes it is a multi-node high-availability deployment.

First download the script install-ic-server.sh with the following command:

wget https://raw.githubusercontent.com/kubeovn/kube-ovn/release-1.12/dist/images/install-ic-server.sh

Run the installation, where TS_NUM specifies the number of ECMP paths for the cluster interconnection:

sed 's/VERSION=.*/VERSION=v1.12.12/' dist/images/install-ic-server.sh | TS_NUM=3 bash

The output of a successful run looks like this:

deployment.apps/ovn-ic-server created
Waiting for deployment spec update to be observed...
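The install one-liner above does two things: sed pins the script's VERSION variable, and TS_NUM reaches the script through the environment of the bash process it is piped into. A stand-in illustration of that mechanism (the demo script below is hypothetical, not the real install-ic-server.sh):

```shell
# Demonstrate the version-pinning + env-var mechanics of the install
# one-liner on a tiny stand-in script (not the real install-ic-server.sh).
cat > /tmp/demo-install.sh <<'EOF'
VERSION=v1.12.0
echo "installing ovn-ic ${VERSION} with ${TS_NUM:-1} ECMP paths"
EOF

# Same shape as the documented command: rewrite VERSION, pass TS_NUM.
sed 's/VERSION=.*/VERSION=v1.12.12/' /tmp/demo-install.sh | TS_NUM=3 bash
# prints: installing ovn-ic v1.12.12 with 3 ECMP paths
```

Because sed rewrites the stream rather than the file, the downloaded script itself is left untouched.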

If the controller was deployed as the Deployment ovn-ic-server, delete it:

kubectl delete deployment ovn-ic-server -n kube-system

Then clean up the interconnection-related DB files on each master node with the following commands:

rm -f /etc/origin/ovn/ovn_ic_nb_db.db
rm -f /etc/origin/ovn/ovn_ic_sb_db.db
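A quick way to confirm the cleanup took effect on the current node (same paths as in the commands above):

```shell
# Report whether the OVN-IC interconnect DB files are gone from this node.
for f in /etc/origin/ovn/ovn_ic_nb_db.db /etc/origin/ovn/ovn_ic_sb_db.db; do
  if [ -e "$f" ]; then
    echo "still present: $f"
  else
    echo "removed: $f"
  fi
done
```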
