What version of KubeKey has the issue?
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.7", GitCommit:"da475c670813fc8a4dd3b1312aaa36e96ff01a1f", GitTreeState:"clean", BuildDate:"2024-10-30T09:41:20Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
What is your OS environment?
Kylin V10 SP3
KubeKey config file

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node155, address: 192.168.10.58, internalAddress: 192.168.10.58, user: root, password: "", arch: arm64}
  roleGroups:
    etcd:
    - node155
    control-plane:
    - node155
    worker:
    - node155
    registry:
    - node155
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.15
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    domain: dockerhub.kubekey.local
    tls:
      selfSigned: true
      certCommonName: dockerhub.kubekey.local
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
      # resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600
```
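For reference, a cluster is normally created from a config like the one above with the kk CLI; the exact config file name used here is an assumption:

```shell
# Create the cluster from the config above (file name assumed):
./kk create cluster -f config-sample.yaml
```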
A clear and concise description of what happened.
Docker's default runtime is ascend-docker-runtime. KubeKey created the user kube and changed the owner of /usr/local/bin/runc and several other Docker-related directories and files to kube. As a result, when a container starts, ascend-docker-runtime reports that the user account is abnormal and the container fails to start.

Is it possible to have the Kubernetes cluster installed by KubeKey run as root instead of the kube user?
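A quick way to confirm the two facts in this report, plus a possible manual workaround; the chown step is a sketch of a hand-applied fix, not an official KubeKey option:

```shell
# Confirm Docker's default runtime really is the Ascend runtime:
docker info --format '{{.DefaultRuntime}}'

# Confirm KubeKey re-owned the runc binary to the 'kube' user:
ls -l /usr/local/bin/runc

# Possible workaround (assumption, not an official fix): restore root
# ownership so ascend-docker-runtime's account check passes, then
# restart Docker. Other re-owned files would need the same treatment.
chown root:root /usr/local/bin/runc
systemctl restart docker
```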
Relevant log output
No response
Additional information
No response