diff --git a/docs/usage/install/cloud/get-started-alibaba-zh_CN.md b/docs/usage/install/cloud/get-started-alibaba-zh_CN.md index 1bd5ef0ae8..6bc0478b8a 100644 --- a/docs/usage/install/cloud/get-started-alibaba-zh_CN.md +++ b/docs/usage/install/cloud/get-started-alibaba-zh_CN.md @@ -28,11 +28,11 @@ Spiderpool 有节点拓扑、解决 MAC 地址合法性、对接基于 `spec.ext - 准备一套阿里云环境,给虚拟机分配 2 个网卡,每张网卡均分配一些辅助私网 IP,如图: -![alicloud-web-network](../../../images/alicloud-network-web.png) + ![alicloud-web-network](../../../images/alicloud-network-web.png) - 使用上述配置的虚拟机,搭建一套 Kubernetes 集群,节点的可用 IP 及集群网络拓扑图如下: -![网络拓扑](../../../images/alicloud-k8s-network.png) + ![网络拓扑](../../../images/alicloud-k8s-network.png) ### 安装 Spiderpool @@ -115,9 +115,9 @@ ipvlan-eth1 10m Spiderpool 的 CRD:`SpiderIPPool` 提供了 `nodeName`、`multusName` 与 `ips` 字段: -- `nodeName`:当 nodeName 不为空时,Pod 在某个节点上启动,并尝试从 SpiderIPPool 分配 IP 地址, 若 Pod 所在节点符合该 nodeName ,则能从该 SpiderIPPool 中成功分配出 IP,若 Pod 所在节点不符合 nodeName,则无法从该 SpiderIPPool 中分配出 IP。当 nodeName 为空时,Spiderpool 对 Pod 不实施任何分配限制。 +- `nodeName`:当 `nodeName` 不为空时,Pod 在某个节点上启动,并尝试从 SpiderIPPool 分配 IP 地址, 若 Pod 所在节点符合该 `nodeName`,则能从该 SpiderIPPool 中成功分配出 IP,若 Pod 所在节点不符合 `nodeName`,则无法从该 SpiderIPPool 中分配出 IP。当 `nodeName` 为空时,Spiderpool 对 Pod 不实施任何分配限制。 -- `multusName`:Spiderpool 通过该字段与 Multus CNI 深度结合以应对多网卡场景。当 multusName 不为空时,SpiderIPPool 会使用对应的 Multus CR 实例为 Pod 配置网络,若 multusName 对应的 Multus CR 不存在,那么 Spiderpool 将无法为 Pod 指定 Multus CR。当 multusName 为空时,Spiderpool 对 Pod 所使用的 Multus CR 不作限制。 +- `multusName`:Spiderpool 通过该字段与 Multus CNI 深度结合以应对多网卡场景。当 `multusName` 不为空时,SpiderIPPool 会使用对应的 Multus CR 实例为 Pod 配置网络,若 `multusName` 对应的 Multus CR 不存在,那么 Spiderpool 将无法为 Pod 指定 Multus CR。当 `multusName` 为空时,Spiderpool 对 Pod 所使用的 Multus CR 不作限制。 - `spec.ips`:该字段的值必须设置。由于阿里云限制了节点可使用的 IP 地址,故该值的范围必须在 `nodeName` 对应主机的辅助私网 IP 范围内,您可以从阿里云的弹性网卡界面获取。 @@ -189,7 +189,7 @@ EOF ### 创建应用 -以下的示例 Yaml 中,会创建 2 组 daemonSet 应用和 1 个 `type` 为 ClusterIP 的 service ,其中: +以下的示例 Yaml 中,会创建 2 组 DaemonSet 应用和 1 个 `type` 为 ClusterIP 的 service ,其中: - `v1.multus-cni.io/default-network`:用于指定应用所使用的子网,示例中的应用分别使用了不同的子网。 @@ -293,62 +293,62 @@ worker-192 4 192.168.0.0/24 1 5 t - 测试 Pod 与宿主机的通讯情况: -```bash -~# kubectl get nodes -owide -NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -master Ready control-plane 2d12h v1.27.3 172.31.199.183 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 -worker Ready 2d12h v1.27.3 172.31.199.184 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 - -~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.183 -c 2 -PING 172.31.199.183 (172.31.199.183): 56 data bytes -64 bytes from 172.31.199.183: seq=0 ttl=64 time=0.088 ms -64 bytes from 172.31.199.183: seq=1 ttl=64 time=0.054 ms - ---- 172.31.199.183 ping statistics --- -2 packets transmitted, 2 packets received, 0% packet loss -round-trip min/avg/max = 0.054/0.071/0.088 ms -``` + ```bash + ~# kubectl get nodes -owide + NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME + master Ready control-plane 2d12h v1.27.3 172.31.199.183 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 + worker Ready 2d12h v1.27.3 172.31.199.184 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 + + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.183 -c 2 + PING 172.31.199.183 (172.31.199.183): 56 data bytes + 64 bytes from 172.31.199.183: seq=0 ttl=64 time=0.088 ms + 64 bytes from 172.31.199.183: seq=1 ttl=64 time=0.054 ms + 
+ --- 172.31.199.183 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.054/0.071/0.088 ms + ``` - 测试 Pod 与跨节点、跨子网 Pod 的通讯情况 -```shell -~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.193 -c 2 -PING 172.31.199.193 (172.31.199.193): 56 data bytes -64 bytes from 172.31.199.193: seq=0 ttl=64 time=0.460 ms -64 bytes from 172.31.199.193: seq=1 ttl=64 time=0.210 ms - ---- 172.31.199.193 ping statistics --- -2 packets transmitted, 2 packets received, 0% packet loss -round-trip min/avg/max = 0.210/0.335/0.460 ms - -~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 192.168.0.161 -c 2 -PING 192.168.0.161 (192.168.0.161): 56 data bytes -64 bytes from 192.168.0.161: seq=0 ttl=64 time=0.408 ms -64 bytes from 192.168.0.161: seq=1 ttl=64 time=0.194 ms - ---- 192.168.0.161 ping statistics --- -2 packets transmitted, 2 packets received, 0% packet loss -round-trip min/avg/max = 0.194/0.301/0.408 ms -``` + ```shell + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.193 -c 2 + PING 172.31.199.193 (172.31.199.193): 56 data bytes + 64 bytes from 172.31.199.193: seq=0 ttl=64 time=0.460 ms + 64 bytes from 172.31.199.193: seq=1 ttl=64 time=0.210 ms -- 测试 Pod 与 ClusterIP 的通讯情况: + --- 172.31.199.193 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.210/0.335/0.460 ms -```bash -~# kubectl get svc test-svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -test-svc ClusterIP 10.233.23.194 80/TCP 26s + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 192.168.0.161 -c 2 + PING 192.168.0.161 (192.168.0.161): 56 data bytes + 64 bytes from 192.168.0.161: seq=0 ttl=64 time=0.408 ms + 64 bytes from 192.168.0.161: seq=1 ttl=64 time=0.194 ms -~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl 10.233.23.194 -I -HTTP/1.1 200 OK -Server: nginx/1.10.1 -Date: Fri, 21 Jul 2023 06:45:56 GMT -Content-Type: text/html -Content-Length: 4086 -Last-Modified: Fri, 21 Jul 2023 06:38:41 GMT -Connection: keep-alive -ETag: "64ba27f1-ff6" -Accept-Ranges: bytes -``` + --- 192.168.0.161 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.194/0.301/0.408 ms + ``` + +- 测试 Pod 与 ClusterIP 的通讯情况: + + ```bash + ~# kubectl get svc test-svc + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + test-svc ClusterIP 10.233.23.194 80/TCP 26s + + ~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl 10.233.23.194 -I + HTTP/1.1 200 OK + Server: nginx/1.10.1 + Date: Fri, 21 Jul 2023 06:45:56 GMT + Content-Type: text/html + Content-Length: 4086 + Last-Modified: Fri, 21 Jul 2023 06:38:41 GMT + Connection: keep-alive + ETag: "64ba27f1-ff6" + Accept-Ranges: bytes + ``` ### 测试集群南北向连通性 @@ -360,20 +360,20 @@ Accept-Ranges: bytes - 测试集群内 Pod 的流量出口访问 -```bash -~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl www.baidu.com -I -HTTP/1.1 200 OK -Accept-Ranges: bytes -Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform -Connection: keep-alive -Content-Length: 277 -Content-Type: text/html -Date: Fri, 21 Jul 2023 08:42:17 GMT -Etag: "575e1f60-115" -Last-Modified: Mon, 13 Jun 2016 02:50:08 GMT -Pragma: no-cache -Server: bfe/1.0.8.18 -``` + ```bash + ~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl www.baidu.com -I + HTTP/1.1 200 OK + Accept-Ranges: bytes + Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform + Connection: keep-alive + Content-Length: 277 + Content-Type: text/html + Date: Fri, 21 Jul 2023 
08:42:17 GMT + Etag: "575e1f60-115" + Last-Modified: Mon, 13 Jun 2016 02:50:08 GMT + Pragma: no-cache + Server: bfe/1.0.8.18 + ``` #### 负载均衡流量入口访问 @@ -383,7 +383,7 @@ CCM(Cloud Controller Manager)是阿里云提供的一个用于 Kubernetes 1. 集群节点配置 `providerID` - 务必在集群中的每个节点上,分别执行如下命令,从而获取每个节点各自的 `providerID`。`http://100.100.100.200/latest/meta-data` 是阿里云 CLI 提供获取实例元数据的 API 入口,在下列示例中无需修改它。更多用法可参考[实例元数据](https://help.aliyun.com/document_detail/49150.html?spm=a2c4g.170249.0.0.3ffc59d7JhEqHl) + 务必在集群中的每个节点上,分别执行如下命令,从而获取每个节点各自的 `providerID`。 是阿里云 CLI 提供获取实例元数据的 API 入口,在下列示例中无需修改它。更多用法可参考[实例元数据](https://help.aliyun.com/document_detail/49150.html?spm=a2c4g.170249.0.0.3ffc59d7JhEqHl) ```bash ~# META_EP=http://100.100.100.200/latest/meta-data diff --git a/docs/usage/install/cloud/get-started-alibaba.md b/docs/usage/install/cloud/get-started-alibaba.md index 3e0138615c..20c097df12 100644 --- a/docs/usage/install/cloud/get-started-alibaba.md +++ b/docs/usage/install/cloud/get-started-alibaba.md @@ -1,3 +1,549 @@ # Running On Alibaba Cloud **English** | [**简体中文**](./get-started-alibaba-zh_CN.md) + +## Introduction + +With a multitude of public cloud providers available, such as Alibaba Cloud, Huawei Cloud, Tencent Cloud, AWS, and more, it can be challenging to use mainstream open-source CNI plugins to operate on these platforms using Underlay networks. Instead, one has to rely on proprietary CNI plugins provided by each cloud vendor, leading to a lack of standardized Underlay solutions for public clouds. This page introduces [Spiderpool](../../../README.md), an Underlay networking solution designed to work seamlessly in any public cloud environment. A unified CNI solution offers easier management across multiple clouds, particularly in hybrid cloud scenarios. + +## Features + +Spiderpool offers features such as node topology, MAC address validation, and integration with services that use `spec.externalTrafficPolicy` set to Local mode. Spiderpool can run on public cloud environments using the IPVlan Underlay CNI. Here's an overview of its implementation: + +1. When using Underlay networks in a public cloud environment, each network interface of a cloud server can only be assigned a limited number of IP addresses. To enable communication when an application runs on a specific cloud server, it needs to obtain the valid IP addresses allocated to different network interfaces within the VPC network. To address this IP allocation requirement, Spiderpool introduces a CRD named `SpiderIPPool`. By configuring the nodeName and multusName fields in `SpiderIPPool`, it enables node topology functionality. Spiderpool leverages the affinity between the IP pool and nodes, as well as the affinity between the IP pool and IPvlan Multus, facilitating the utilization and management of available IP addresses on the nodes. This ensures that applications are assigned valid IP addresses, enabling seamless communication within the VPC network, including communication between Pods and also between Pods and cloud servers. + +2. In a public cloud VPC network, network security controls and packet forwarding principles dictate that when network data packets contain MAC and IP addresses unknown to the VPC network, correct forwarding becomes unattainable. This issue arises in scenarios where Macvlan or OVS based Underlay CNI plugins generate new MAC addresses for Pod NICs, resulting in communication failures among Pods. 
To address this challenge, Spiderpool offers a solution in conjunction with [IPVlan CNI](https://www.cni.dev/plugins/current/main/ipvlan/). The IPVlan CNI operates at the L3 of the network, eliminating the reliance on L2 broadcasts and avoiding the generation of new MAC addresses. Instead, it maintains consistency with the parent interface. By incorporating IPVlan, the legitimacy of MAC addresses in a public cloud environment can be effectively resolved. + +3. When the `.spec.externalTrafficPolicy` of a service is set to `Local`, the client's source IP can be reserved. However, in self-managed public cloud clusters, using the platform's LoadBalancer component for nodePort forwarding in this mode will lead to access failures. To tackle this problem, Spiderpool offers the coordinator plugin. This plugin utilizes iptables to apply packet marking, ensuring that response packets coming from veth0 are still forwarded through veth0. This resolves the problem of nodePort access in this mode. + +## Prerequisites + +1. The system kernel version must be greater than 4.2 when using IPVlan as the cluster's CNI. + +2. [Helm](https://helm.sh/docs/intro/install/) is installed. + +## Steps + +### Alibaba Cloud Environment + +- Prepare an Alibaba Cloud environment with virtual machines that have 2 network interfaces. Assign a set of auxiliary private IP addresses to each network interface, as shown in the picture: + + ![alicloud-web-network](../../../images/alicloud-network-web.png) + +- Utilize the configured VMs to build a Kubernetes cluster. The available IP addresses for the nodes and the network topology of the cluster are depicted below: + + ![网络拓扑](../../../images/alicloud-k8s-network.png) + +### Install Spiderpool + +Install Spiderpool via helm: + +```bash +helm repo add spiderpool https://spidernet-io.github.io/spiderpool + +helm repo update spiderpool + +helm install spiderpool spiderpool/spiderpool --namespace kube-system --set ipam.enableStatefulSet=false --set multus.multusCNI.defaultCniCRName="ipvlan-eth0" +``` + +> If you are using a cloud server from a Chinese mainland cloud provider, you can enhance image pulling speed by specifying the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io`. +> +> Spiderpool allows for fixed IP addresses for application replicas with a controller type of `StatefulSet`. However, in the underlay network scenario of public clouds, cloud instances are limited to using specific IP addresses. When StatefulSet replicas migrate to different nodes, the original fixed IP becomes invalid and unavailable on the new node, causing network unavailability for the new Pods. To address this issue, set `ipam.enableStatefulSet` to `false` to disable this feature. +> +> Specify the Multus clusterNetwork for the cluster using `multus.multusCNI.defaultCniCRName`. `clusterNetwork` is a specific field within the Multus plugin used to define the default network interface for Pods. + +### Install CNI + +To simplify the creation of JSON-formatted Multus CNI configurations, Spiderpool offers the SpiderMultusConfig CR to automatically manage Multus NetworkAttachmentDefinition CRs. 
Here is an example of creating an IPvlan SpiderMultusConfig configuration: + +```shell +IPVLAN_MASTER_INTERFACE0="eth0" +IPVLAN_MULTUS_NAME0="ipvlan-$IPVLAN_MASTER_INTERFACE0" +IPVLAN_MASTER_INTERFACE1="eth1" +IPVLAN_MULTUS_NAME1="ipvlan-$IPVLAN_MASTER_INTERFACE1" + +cat < +test-app-1-b7765b8d8-qjgpj 1/1 Running 0 16s 172.31.199.193 worker +test-app-2-7c56876fc6-7brhf 1/1 Running 0 12s 192.168.0.160 master +test-app-2-7c56876fc6-zlxxt 1/1 Running 0 12s 192.168.0.161 worker +``` + +Spiderpool automatically assigns IP addresses to the applications, ensuring that the assigned IPs are within the expected IP pool. + +```bash +~# kubectl get spiderippool +NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT +master-172 4 172.31.192.0/20 1 5 true +master-192 4 192.168.0.0/24 1 5 true +worker-172 4 172.31.192.0/20 1 5 true +worker-192 4 192.168.0.0/24 1 5 true +``` + +### Test East-West Connectivity + +- Test communication between Pods and their hosts: + + ```bash + ~# kubectl get nodes -owide + NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME + master Ready control-plane 2d12h v1.27.3 172.31.199.183 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 + worker Ready 2d12h v1.27.3 172.31.199.184 CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 + + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.183 -c 2 + PING 172.31.199.183 (172.31.199.183): 56 data bytes + 64 bytes from 172.31.199.183: seq=0 ttl=64 time=0.088 ms + 64 bytes from 172.31.199.183: seq=1 ttl=64 time=0.054 ms + + --- 172.31.199.183 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.054/0.071/0.088 ms + ``` + +- Test communication between Pods across different nodes and subnets: + + ```shell + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 172.31.199.193 -c 2 + PING 172.31.199.193 (172.31.199.193): 56 data bytes + 64 bytes from 172.31.199.193: seq=0 ttl=64 time=0.460 ms + 64 bytes from 172.31.199.193: seq=1 ttl=64 time=0.210 ms + + --- 172.31.199.193 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.210/0.335/0.460 ms + + ~# kubectl exec -ti test-app-1-b7765b8d8-422sb -- ping 192.168.0.161 -c 2 + PING 192.168.0.161 (192.168.0.161): 56 data bytes + 64 bytes from 192.168.0.161: seq=0 ttl=64 time=0.408 ms + 64 bytes from 192.168.0.161: seq=1 ttl=64 time=0.194 ms + + --- 192.168.0.161 ping statistics --- + 2 packets transmitted, 2 packets received, 0% packet loss + round-trip min/avg/max = 0.194/0.301/0.408 ms + ``` + +- Test communication between Pods and ClusterIP services: + + ```bash + ~# kubectl get svc test-svc + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + test-svc ClusterIP 10.233.23.194 80/TCP 26s + + ~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl 10.233.23.194 -I + HTTP/1.1 200 OK + Server: nginx/1.10.1 + Date: Fri, 21 Jul 2023 06:45:56 GMT + Content-Type: text/html + Content-Length: 4086 + Last-Modified: Fri, 21 Jul 2023 06:38:41 GMT + Connection: keep-alive + ETag: "64ba27f1-ff6" + Accept-Ranges: bytes + ``` + +### Test North-South Connectivity + +#### Test egress traffic from Pods to external destinations + +- Alibaba Cloud's NAT Gateway provides an ingress and egress gateway for public or private network traffic within a VPC environment. By utilizing NAT Gateway, the cluster can have egress connectivity. 
Please refer to [the NAT Gateway documentation](https://www.alibabacloud.com/help/en/nat-gateway?spm=a2c63.p38356.0.0.1b111b76Rn9rPa) for creating a NAT Gateway as depicted in the picture:
+
+![alicloud-natgateway](../../../images/alicloud-natgateway.png)
+
+- Test egress traffic from Pods
+
+    ```bash
+    ~# kubectl exec -ti test-app-2-7c56876fc6-7brhf -- curl www.baidu.com -I
+    HTTP/1.1 200 OK
+    Accept-Ranges: bytes
+    Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform
+    Connection: keep-alive
+    Content-Length: 277
+    Content-Type: text/html
+    Date: Fri, 21 Jul 2023 08:42:17 GMT
+    Etag: "575e1f60-115"
+    Last-Modified: Mon, 13 Jun 2016 02:50:08 GMT
+    Pragma: no-cache
+    Server: bfe/1.0.8.18
+    ```
+
+#### Load Balancer Traffic Ingress Access
+
+##### Deploy Cloud Controller Manager
+
+Cloud Controller Manager (CCM) is a component provided by Alibaba Cloud that enables integration between Kubernetes and Alibaba Cloud services. We will use CCM along with Alibaba Cloud infrastructure to facilitate load balancer traffic ingress access. Follow the steps below and refer to [the CCM documentation](https://github.com/kubernetes/cloud-provider-alibaba-cloud/blob/master/docs/getting-started.md) for deploying CCM.
+
+1. Configure `providerID` on Cluster Nodes
+
+    On each node in the cluster, run the following command to obtain that node's `providerID`. `http://100.100.100.200/latest/meta-data` is the API entry point that Alibaba Cloud provides for retrieving instance metadata and does not need to be modified in the example below. For more information, please refer to [ECS instance metadata](https://www.alibabacloud.com/help/en/ecs/user-guide/overview-of-ecs-instance-metadata?spm=a2c63.p38356.0.0.1c3c592dPUwXMN).
+
+    ```bash
+    ~# META_EP=http://100.100.100.200/latest/meta-data
+    ~# provider_id=`curl -s $META_EP/region-id`.`curl -s $META_EP/instance-id`
+    ```
+
+    On the `master` node of the cluster, use the `kubectl patch` command to add the `providerID` to each node in the cluster, substituting `${NODE_NAME}` with the node name and using the `provider_id` value collected on that node. This step is required; without it, the CCM Pod on the corresponding node cannot run correctly.
+
+    ```bash
+    ~# kubectl get nodes
+    ~# kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"providerID\": \"${provider_id}\"}}"
+    ```
+
+2. Create an Alibaba Cloud RAM user and grant authorization.
+
+    A RAM user is an entity within Alibaba Cloud's Resource Access Management (RAM) that represents individuals or applications requiring access to Alibaba Cloud resources. Refer to [Overview of RAM users](https://www.alibabacloud.com/help/en/ram/user-guide/overview-of-ram-users?spm=a2c63.p38356.0.0.435a7b9fxy619R) to create a RAM user and assign the necessary permissions for accessing resources.
+
+    To ensure that the RAM user used in the subsequent steps has sufficient privileges, grant the `AdministratorAccess` and `AliyunSLBFullAccess` permissions to the RAM user by following the RAM documentation linked above.
+
+3. Obtain the AccessKey & AccessKeySecret for the RAM user.
+
+    Log in with the RAM user account and go to [User Center](https://account.alibabacloud.com/login/login.htm?spm=5176.12901015-2.0.0.36cb525cXk2SG0) to retrieve the AccessKey & AccessKeySecret for that RAM user.
+
+4. Create the Cloud ConfigMap for CCM.
+
+    Write the AccessKey & AccessKeySecret obtained in step 3 into environment variables as shown below.
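+
+    Because the AccessKey pair created above carries broad permissions, you may want to keep its values out of your shell history before exporting them. The optional precaution below is a minimal sketch that assumes a Bash shell; the actual `export` commands follow it.
+
+    ```bash
+    # Optional: stop recording commands for this shell session; re-enable later with `set -o history`.
+    ~# set +o history
+    ```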
+ + ```bash + ~# export ACCESS_KEY_ID=LTAI******************** + ~# export ACCESS_KEY_SECRET=HAeS************************** + ``` + + Run the following command to create cloud-config: + + ```bash + accessKeyIDBase64=`echo -n "$ACCESS_KEY_ID" |base64 -w 0` + accessKeySecretBase64=`echo -n "$ACCESS_KEY_SECRET"|base64 -w 0` + + cat <>` with the actual cluster CIDR. + + ```bash + ~# wget https://raw.githubusercontent.com/spidernet-io/spiderpool/main/docs/example/alicloud/cloud-controller-manager.yaml + ~# kubectl apply -f cloud-controller-manager.yaml + ``` + +6. Verify if CCM is installed. + + ```bash + ~# kubectl get po -n kube-system | grep cloud-controller-manager + NAME READY STATUS RESTARTS AGE + cloud-controller-manager-72vzr 1/1 Running 0 27s + cloud-controller-manager-k7jpn 1/1 Running 0 27s + ``` + +##### Create Load Balancer Ingress for Applications + +The following YAML will create two sets of services, one for TCP (layer 4 load balancing) and one for HTTP (layer 7 load balancing), with `spec.type` set to `LoadBalancer`. + +- `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port`: this annotation provided by CCM allows you to customize the exposed ports for layer 7 load balancing. For more information, refer to [the CCM Usage Documentation](https://github.com/kubernetes/cloud-provider-alibaba-cloud/blob/master/docs/usage.md). + +- `.spec.externalTrafficPolicy`: indicates whether the service prefers to route external traffic to local or cluster-wide endpoints. It has two options: Cluster (default) and Local. Setting `.spec.externalTrafficPolicy` to `Local` preserves the client source IP. + +```bash +~# cat <