couldn't create scheduler from policy: extenders[0].managedResources[13].name: Invalid value: "huawei.com/Ascend310P": duplicate extender managed resource name #485
Labels: kind/bug
What happened:
Installing HAMi failed when following the installation steps in the README:
env:
logs:
[root@master1 ~]# kubectl logs hami-scheduler-6c5bcdf467-62fvm -n kube-system
Defaulted container "kube-scheduler" out of: kube-scheduler, vgpu-scheduler-extender
I0910 07:17:38.643767 1 flags.go:59] FLAG: --add-dir-header="false"
I0910 07:17:38.643782 1 flags.go:59] FLAG: --address="0.0.0.0"
I0910 07:17:38.643785 1 flags.go:59] FLAG: --algorithm-provider=""
I0910 07:17:38.643786 1 flags.go:59] FLAG: --alsologtostderr="false"
I0910 07:17:38.643787 1 flags.go:59] FLAG: --authentication-kubeconfig=""
I0910 07:17:38.643788 1 flags.go:59] FLAG: --authentication-skip-lookup="false"
I0910 07:17:38.643790 1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0910 07:17:38.643791 1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure="true"
I0910 07:17:38.643792 1 flags.go:59] FLAG: --authorization-always-allow-paths="[/healthz]"
I0910 07:17:38.643795 1 flags.go:59] FLAG: --authorization-kubeconfig=""
I0910 07:17:38.643796 1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0910 07:17:38.643797 1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0910 07:17:38.643798 1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0910 07:17:38.643800 1 flags.go:59] FLAG: --cert-dir=""
I0910 07:17:38.643801 1 flags.go:59] FLAG: --client-ca-file=""
I0910 07:17:38.643803 1 flags.go:59] FLAG: --config=""
I0910 07:17:38.643804 1 flags.go:59] FLAG: --contention-profiling="true"
I0910 07:17:38.643806 1 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I0910 07:17:38.643807 1 flags.go:59] FLAG: --feature-gates=""
I0910 07:17:38.643810 1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight="1"
I0910 07:17:38.643813 1 flags.go:59] FLAG: --help="false"
I0910 07:17:38.643814 1 flags.go:59] FLAG: --http2-max-streams-per-connection="0"
I0910 07:17:38.643816 1 flags.go:59] FLAG: --kube-api-burst="100"
I0910 07:17:38.643817 1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0910 07:17:38.643819 1 flags.go:59] FLAG: --kube-api-qps="50"
I0910 07:17:38.643821 1 flags.go:59] FLAG: --kubeconfig=""
I0910 07:17:38.643822 1 flags.go:59] FLAG: --leader-elect="true"
I0910 07:17:38.643823 1 flags.go:59] FLAG: --leader-elect-lease-duration="15s"
I0910 07:17:38.643825 1 flags.go:59] FLAG: --leader-elect-renew-deadline="10s"
I0910 07:17:38.643826 1 flags.go:59] FLAG: --leader-elect-resource-lock="leases"
I0910 07:17:38.643827 1 flags.go:59] FLAG: --leader-elect-resource-name="hami-scheduler"
I0910 07:17:38.643829 1 flags.go:59] FLAG: --leader-elect-resource-namespace="kube-system"
I0910 07:17:38.643830 1 flags.go:59] FLAG: --leader-elect-retry-period="2s"
I0910 07:17:38.643831 1 flags.go:59] FLAG: --lock-object-name="hami-scheduler"
I0910 07:17:38.643832 1 flags.go:59] FLAG: --lock-object-namespace="kube-system"
I0910 07:17:38.643833 1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0910 07:17:38.643835 1 flags.go:59] FLAG: --log-dir=""
I0910 07:17:38.643836 1 flags.go:59] FLAG: --log-file=""
I0910 07:17:38.643838 1 flags.go:59] FLAG: --log-file-max-size="1800"
I0910 07:17:38.643839 1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0910 07:17:38.643840 1 flags.go:59] FLAG: --logging-format="text"
I0910 07:17:38.643841 1 flags.go:59] FLAG: --logtostderr="true"
I0910 07:17:38.643842 1 flags.go:59] FLAG: --master=""
I0910 07:17:38.643843 1 flags.go:59] FLAG: --one-output="false"
I0910 07:17:38.643844 1 flags.go:59] FLAG: --permit-port-sharing="false"
I0910 07:17:38.643846 1 flags.go:59] FLAG: --policy-config-file="/config/config.json"
I0910 07:17:38.643847 1 flags.go:59] FLAG: --policy-configmap=""
I0910 07:17:38.643848 1 flags.go:59] FLAG: --policy-configmap-namespace="kube-system"
I0910 07:17:38.643849 1 flags.go:59] FLAG: --port="10251"
I0910 07:17:38.643850 1 flags.go:59] FLAG: --profiling="true"
I0910 07:17:38.643851 1 flags.go:59] FLAG: --requestheader-allowed-names="[]"
I0910 07:17:38.643853 1 flags.go:59] FLAG: --requestheader-client-ca-file=""
I0910 07:17:38.643854 1 flags.go:59] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0910 07:17:38.643856 1 flags.go:59] FLAG: --requestheader-group-headers="[x-remote-group]"
I0910 07:17:38.643857 1 flags.go:59] FLAG: --requestheader-username-headers="[x-remote-user]"
I0910 07:17:38.643859 1 flags.go:59] FLAG: --scheduler-name="hami-scheduler"
I0910 07:17:38.643860 1 flags.go:59] FLAG: --secure-port="10259"
I0910 07:17:38.643862 1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0910 07:17:38.643863 1 flags.go:59] FLAG: --skip-headers="false"
I0910 07:17:38.643864 1 flags.go:59] FLAG: --skip-log-headers="false"
I0910 07:17:38.643865 1 flags.go:59] FLAG: --stderrthreshold="2"
I0910 07:17:38.643866 1 flags.go:59] FLAG: --tls-cert-file=""
I0910 07:17:38.643867 1 flags.go:59] FLAG: --tls-cipher-suites="[]"
I0910 07:17:38.643868 1 flags.go:59] FLAG: --tls-min-version=""
I0910 07:17:38.643869 1 flags.go:59] FLAG: --tls-private-key-file=""
I0910 07:17:38.643870 1 flags.go:59] FLAG: --tls-sni-cert-key="[]"
I0910 07:17:38.643872 1 flags.go:59] FLAG: --use-legacy-policy-config="false"
I0910 07:17:38.643873 1 flags.go:59] FLAG: --v="4"
I0910 07:17:38.643875 1 flags.go:59] FLAG: --version="false"
I0910 07:17:38.643877 1 flags.go:59] FLAG: --vmodule=""
I0910 07:17:38.643878 1 flags.go:59] FLAG: --write-config-to=""
I0910 07:17:39.120613 1 serving.go:331] Generated self-signed cert in-memory
I0910 07:17:39.349390 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
W0910 07:17:39.349651 1 options.go:332] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
I0910 07:17:39.349681 1 merged_client_builder.go:121] Using in-cluster configuration
I0910 07:17:39.353250 1 factory.go:210] Creating scheduler from configuration: {{ } [] [] [{https://127.0.0.1:443 filter 1 bind true 0xc00075a2d0 {30s} true [{nvidia.com/gpu true} {nvidia.com/gpumem true} {nvidia.com/gpucores true} {nvidia.com/gpumem-percentage true} {nvidia.com/priority true} {cambricon.com/vmlu true} {hygon.com/dcunum true} {hygon.com/dcumem true} {hygon.com/dcucores true} {iluvatar.ai/vgpu true} {huawei.com/Ascend910-memory true} {huawei.com/Ascend910 true} {huawei.com/Ascend310P true} {huawei.com/Ascend310P true}] false}] 0 false}
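The dump above is the parsed scheduler policy from --policy-config-file=/config/config.json. Reconstructed from that log line, the extender section looks roughly like the sketch below (resource names are taken verbatim from the log; the exact field layout of the surrounding JSON is an assumption based on the kube-scheduler Policy extender format). Note that huawei.com/Ascend310P appears twice at the end of managedResources:

```json
{
  "extenders": [
    {
      "urlPrefix": "https://127.0.0.1:443",
      "filterVerb": "filter",
      "bindVerb": "bind",
      "enableHttps": true,
      "weight": 1,
      "nodeCacheCapable": true,
      "managedResources": [
        {"name": "nvidia.com/gpu", "ignoredByScheduler": true},
        {"name": "nvidia.com/gpumem", "ignoredByScheduler": true},
        {"name": "nvidia.com/gpucores", "ignoredByScheduler": true},
        {"name": "nvidia.com/gpumem-percentage", "ignoredByScheduler": true},
        {"name": "nvidia.com/priority", "ignoredByScheduler": true},
        {"name": "cambricon.com/vmlu", "ignoredByScheduler": true},
        {"name": "hygon.com/dcunum", "ignoredByScheduler": true},
        {"name": "hygon.com/dcumem", "ignoredByScheduler": true},
        {"name": "hygon.com/dcucores", "ignoredByScheduler": true},
        {"name": "iluvatar.ai/vgpu", "ignoredByScheduler": true},
        {"name": "huawei.com/Ascend910-memory", "ignoredByScheduler": true},
        {"name": "huawei.com/Ascend910", "ignoredByScheduler": true},
        {"name": "huawei.com/Ascend310P", "ignoredByScheduler": true},
        {"name": "huawei.com/Ascend310P", "ignoredByScheduler": true}
      ]
    }
  ]
}
```

kube-scheduler validates that managed resource names within an extender are unique, so the duplicate entry at index 13 is what triggers the error below; removing one of the two huawei.com/Ascend310P entries from the generated policy should let the scheduler start.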
couldn't create scheduler from policy: extenders[0].managedResources[13].name: Invalid value: "huawei.com/Ascend310P": duplicate extender managed resource name
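For anyone debugging a similar failure, here is a minimal sketch (not part of HAMi, and the JSON shape is assumed from the log above) that scans a scheduler policy for duplicate managed resource names before redeploying:

```python
import json
from collections import Counter


def duplicate_managed_resources(policy: dict) -> list[str]:
    """Return managed resource names listed more than once in any extender."""
    dupes: list[str] = []
    for ext in policy.get("extenders", []):
        names = [res["name"] for res in ext.get("managedResources", [])]
        dupes.extend(name for name, count in Counter(names).items() if count > 1)
    return dupes


if __name__ == "__main__":
    # Reduced to the entries relevant to this report; in practice, load the
    # file passed via --policy-config-file, e.g. json.load(open(path)).
    policy = {
        "extenders": [{
            "managedResources": [
                {"name": "huawei.com/Ascend910", "ignoredByScheduler": True},
                {"name": "huawei.com/Ascend310P", "ignoredByScheduler": True},
                {"name": "huawei.com/Ascend310P", "ignoredByScheduler": True},
            ]
        }]
    }
    print(duplicate_managed_resources(policy))  # -> ['huawei.com/Ascend310P']
```

An empty result means the policy should pass this particular validation check.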