
chore(deps): update aks/aks-gpu-cuda docker tag to v550.144.03-20250123200851 #5595

Open · wants to merge 1 commit into master
chore(deps): update aks/aks-gpu-cuda docker tag to v550.144.03-202501…

90cafaf
Azure Pipelines / AKS Linux VHD Build - PR check-in gate failed Feb 8, 2025 in 59m 12s

Build #20250208.107_merge_114665672 had test failures

Tests

  • Failed: 1 (2.86%)
  • Passed: 34 (97.14%)
  • Other: 0 (0.00%)
  • Total: 35

Annotations

Check failure on line 4990 in Build log

azure-pipelines / AKS Linux VHD Build - PR check-in gate

Build log #L4990

Bash exited with code '1'.

Check failure on line 1 in Test_MarinerV2_ChronyRestarts_Scriptless

azure-pipelines / AKS Linux VHD Build - PR check-in gate

Test_MarinerV2_ChronyRestarts_Scriptless

Failed
Raw output
    scenario_helpers_test.go:190: VHD: "/subscriptions/c4c3550e-a965-4993-a50c-628fd38cd3e1/resourceGroups/aksvhdtestbuildrg/providers/Microsoft.Compute/galleries/PackerSigGalleryEastUS/images/CBLMarinerV2gen2/versions/1.1739001419.28662", TAGS {Name:Test_MarinerV2_ChronyRestarts_Scriptless ImageName:CBLMarinerV2gen2 OS:mariner Arch:amd64 Airgap:false NonAnonymousACR:false GPU:false WASM:false ServerTLSBootstrapping:false KubeletCustomConfig:false}
    vmss.go:39: creating VMSS "r2ua-2025-02-08-marinerv2chronyrestartsscriptless" in resource group "MC_abe2e-westus3_abe2e-kubenet-331fc_westus3"
    scenario_helpers_test.go:145: vmss r2ua-2025-02-08-marinerv2chronyrestartsscriptless creation succeeded
    kube.go:147: waiting for node r2ua-2025-02-08-marinerv2chronyrestartsscriptless to be ready
    kube.go:168: node r2ua-2025-02-08-marinerv2chronyrestartsscriptless000000 is tainted. Taints: [{"key":"node.kubernetes.io/network-unavailable","effect":"NoSchedule","timeAdded":"2025-02-08T08:45:39Z"}] Conditions: [{"type":"NetworkUnavailable","status":"True","lastHeartbeatTime":"2025-02-08T08:45:39Z","lastTransitionTime":"2025-02-08T08:45:39Z","reason":"NodeInitialization","message":"Waiting for cloud routes"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:26Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:26Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"PIDPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:26Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},{"type":"Ready","status":"True","lastHeartbeatTime":"2025-02-08T08:45:26Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletReady","message":"kubelet is posting ready status"}]
    kube.go:168: node r2ua-2025-02-08-marinerv2chronyrestartsscriptless000000 is tainted. Taints: [{"key":"node.kubernetes.io/network-unavailable","effect":"NoSchedule","timeAdded":"2025-02-08T08:45:39Z"}] Conditions: [{"type":"NetworkUnavailable","status":"True","lastHeartbeatTime":"2025-02-08T08:45:39Z","lastTransitionTime":"2025-02-08T08:45:39Z","reason":"NodeInitialization","message":"Waiting for cloud routes"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:57Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:57Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"PIDPressure","status":"False","lastHeartbeatTime":"2025-02-08T08:45:57Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},{"type":"Ready","status":"True","lastHeartbeatTime":"2025-02-08T08:45:57Z","lastTransitionTime":"2025-02-08T08:45:26Z","reason":"KubeletReady","message":"kubelet is posting ready status"}]
    kube.go:188: failed to wait for "r2ua-2025-02-08-marinerv2chronyrestartsscriptless" (r2ua-2025-02-08-marinerv2chronyrestartsscriptless000000) to be ready {Capacity:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} ephemeral-storage:{i:{value:52172304384 scale:0} d:{Dec:<nil>} s:50949516Ki Format:BinarySI} hugepages-1Gi:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} hugepages-2Mi:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} memory:{i:{value:8056201216 scale:0} d:{Dec:<nil>} s: Format:BinarySI} pods:{i:{value:110 scale:0} d:{Dec:<nil>} s:110 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:1900 scale:-3} d:{Dec:<nil>} s:1900m Format:DecimalSI} ephemeral-storage
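
For context, the failure above is the harness timing out while polling the node for readiness: kubelet reports Ready=True throughout, but the node.kubernetes.io/node network-unavailable NoSchedule taint ("Waiting for cloud routes") never clears, so the node never becomes schedulable. A minimal client-go sketch of that kind of wait loop (not the actual kube.go implementation; the timeout, poll interval, and structure here are illustrative, and the node name is taken from the failing run) looks like:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // readyAndUntainted reports whether the node has Ready=True and carries no
    // NoSchedule taints -- the gate the failing node above never passed, since
    // the network-unavailable taint kept it unschedulable despite KubeletReady.
    func readyAndUntainted(node *corev1.Node) bool {
        for _, t := range node.Spec.Taints {
            if t.Effect == corev1.TaintEffectNoSchedule {
                return false
            }
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func waitForNode(ctx context.Context, client kubernetes.Interface, name string) error {
        tick := time.NewTicker(10 * time.Second)
        defer tick.Stop()
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil && readyAndUntainted(node) {
                return nil
            }
            if err == nil {
                // Surface taints so a node stuck "Waiting for cloud routes"
                // is visible in the test log, as in the output above.
                fmt.Printf("node %s not ready, taints: %v\n", name, node.Spec.Taints)
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("failed to wait for %q to be ready: %w", name, ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        // Assumes a kubeconfig at the default location; the node name is the
        // one from the failing run above.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()
        nodeName := "r2ua-2025-02-08-marinerv2chronyrestartsscriptless000000"
        if err := waitForNode(ctx, kubernetes.NewForConfigOrDie(cfg), nodeName); err != nil {
            panic(err)
        }
    }

Under this reading, the failure is environmental rather than related to the CUDA tag bump itself: the cloud-provider route controller never installed routes for the node, so the NetworkUnavailable condition stayed True until the wait timed out.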