fleet apply logs from the fleet job Pod do not indicate the source of "no chart name found" errors in a GitRepo containing multiple bundles. If one or more charts defined by a fleet.yaml bundle in the GitRepo do not exist in the index.yaml of their configured repository, only a generic level=fatal msg="no chart name found" is logged in the Job Pod, with no indication of which chart/bundle the error originates from.
Business impact:
Makes troubleshooting the issue a heavy, manual task
Repro steps:
Provision a Rancher v2.9.4 instance with a single all-role node custom RKE cluster (I used github.com/axeal/tf-do-rancher2)
In the fleet-default workspace of Fleet, add a GitRepo with the repository https://github.com/axeal/fleet-test.git and branch 01563984
Observe successful deployment of the two charts/bundles
Update the GitRepo branch to 01563984-invalid, in which the rancher-monitoring-crd chart name is changed to the invalid rancher-monitoring-crds (the choice of the rancher-logging-crd and rancher-monitoring-crd charts was arbitrary for the purpose of reproduction; in the customer's environment they deploy their own internal applications)
Observe after a short time that the GitRepo goes into an error state with:
Job Failed. failed: 3/1
time="2024-12-18T13:24:13Z" level=fatal msg="no chart name found"
Observe that the fleet Job pods for the GitRepo contain only the generic level=fatal msg="no chart name found" log line, with no indication of its source
Enable debug logging for fleet by upgrading the fleet app in the local cluster and setting the value debug to true, then Force Update the GitRepo
Observe there are no additional log messages in the new Job pod logs for the GitRepo
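For reference, the GitRepo from step 2 can be sketched as the following manifest. This is an illustrative sketch assuming the standard Fleet GitRepo fields (apiVersion fleet.cattle.io/v1alpha1, spec.repo, spec.branch); the metadata name here is arbitrary.

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: fleet-test          # arbitrary name for illustration
  namespace: fleet-default  # the workspace used in the repro
spec:
  repo: https://github.com/axeal/fleet-test.git
  branch: "01563984"        # updated to "01563984-invalid" in step 4 to trigger the error
```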
Workaround:
Is a workaround available and implemented? Yes
What is the workaround: Manually investigate each fleet.yaml bundle within the GitRepo to validate whether the specified helm chart is present within the index.yaml of the defined helm repository.
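The manual workaround above can be sketched as a small script. This is a rough, stdlib-only illustration, not part of Fleet: it scans the `entries:` block of a Helm repository index.yaml for top-level chart names and reports whether each chart referenced by a fleet.yaml is present. A real check would fetch the repository's index.yaml over HTTP and parse it with a proper YAML library; the sample index content below is invented for demonstration.

```python
def chart_names(index_yaml: str) -> set:
    """Return the chart names listed under 'entries:' in a Helm index.yaml."""
    names = set()
    in_entries = False
    for line in index_yaml.splitlines():
        if line.rstrip() == "entries:":
            in_entries = True
            continue
        if in_entries:
            if line and not line.startswith(" "):
                break  # next top-level key ends the entries block
            # chart names sit at exactly two-space indent and end with ':'
            if line.startswith("  ") and not line.startswith("   ") and line.rstrip().endswith(":"):
                names.add(line.strip().rstrip(":"))
    return names


# Hypothetical sample index.yaml content for demonstration only
SAMPLE_INDEX = """\
apiVersion: v1
entries:
  rancher-logging-crd:
    - version: 103.1.0
  rancher-monitoring-crd:
    - version: 103.1.0
generated: "2024-12-18T13:24:13Z"
"""

available = chart_names(SAMPLE_INDEX)
for chart in ("rancher-monitoring-crd", "rancher-monitoring-crds"):
    status = "found" if chart in available else "MISSING"
    print(f"{chart}: {status}")
```

Run against the real index.yaml of each repository referenced by the GitRepo's fleet.yaml files, this pinpoints which bundle references a chart name that does not exist, which is exactly the information the fatal log omits.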
Actual behavior:
fleet apply logs from fleet job Pod do not indicate source of "no chart name found" errors in GitRepo containing multiple bundles
Expected behavior:
fleet apply logs from fleet job Pod indicate the source chart/bundle of "no chart name found" errors in a GitRepo containing multiple bundles
SURE-9542