-
HPA resources are always OutOfSync.

k8s version

yaml in repo:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: xxxx
  namespace: xxxx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: xxxx
  minReplicas: 2
  maxReplicas: 4
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 3600 # 1 hour
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 25
          type: Utilization
      type: Resource
    - external:
        metric:
          name: "prometheus.googleapis.com|k8s_hpa_replicas_max|gauge"
          selector:
            matchLabels:
              resource.labels.cluster: "xxxx"
              metric.labels.hpa_name: "hpa-bulk-scale"
        target:
          value: 10
          type: Value
      type: External
```

yaml in cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: xxxx
    pipecd.dev/application: xxxx
    pipecd.dev/commit-hash: xxxx
    pipecd.dev/managed-by: piped
    pipecd.dev/original-api-version: autoscaling/v2
    pipecd.dev/piped: xxxx
    pipecd.dev/resource-key: autoscaling/v2:HorizontalPodAutoscaler:xxxx:xxxx
    pipecd.dev/variant: primary
  creationTimestamp: "2023-01-05T06:26:44Z"
  labels:
    app.kubernetes.io/env: dev
    app.kubernetes.io/group: xxxx
    app.kubernetes.io/managed-by: pipecd
    app.kubernetes.io/name: xxxx
    app.kubernetes.io/owner: backend
  name: xxxx
  namespace: xxxx
  resourceVersion: "1571250293"
  uid: be3815e6-a830-4799-bdeb-40fa538bb2a1
spec:
  behavior:
    scaleDown:
      policies:
      - periodSeconds: 15
        type: Percent
        value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 3600
    scaleUp:
      policies:
      - periodSeconds: 15
        type: Pods
        value: 4
      - periodSeconds: 15
        type: Percent
        value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 0
  maxReplicas: 4
  metrics:
  - external:
      metric:
        name: prometheus.googleapis.com|k8s_hpa_replicas_max|gauge
        selector:
          matchLabels:
            metric.labels.hpa_name: hpa-bulk-scale
            resource.labels.cluster: xxxx
      target:
        type: Value
        value: "10"
    type: External
  - resource:
      name: cpu
      target:
        averageUtilization: 25
        type: Utilization
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: xxxx
status:
  conditions:
  - lastTransitionTime: "2023-12-19T06:12:48Z"
    message: recommended size matches current size
    reason: ReadyForNewScale
    status: "True"
    type: AbleToScale
  - lastTransitionTime: "2023-12-19T05:12:39Z"
    message: xxxx
    reason: ValidMetricFound
    status: "True"
    type: ScalingActive
  - lastTransitionTime: "2023-12-19T06:12:48Z"
    message: the desired replica count is less than the minimum replica count
    reason: TooFewReplicas
    status: "True"
    type: ScalingLimited
  currentMetrics:
  - external:
      current:
        value: "2"
      metric:
        name: prometheus.googleapis.com|k8s_hpa_replicas_max|gauge
        selector:
          matchLabels:
            metric.labels.hpa_name: hpa-bulk-scale
            resource.labels.cluster: xxxx
    type: External
  - resource:
      current:
        averageUtilization: 1
        averageValue: 13m
      name: cpu
    type: Resource
  currentReplicas: 2
  desiredReplicas: 2
  lastScaleTime: "2023-10-11T13:24:37Z"
```
Answered by ffjlabo on Dec 20, 2023
-
This is a problem with k8s HPA resource reordering: kubernetes/kubernetes#74099. According to this issue, `.spec.metrics` (an array) in the HPA is reordered after applying the manifest. So the solution is to fix the YAML in the repo so it lists the metrics in the same order as the one in the cluster.
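Concretely, that means moving the external metric ahead of the CPU metric in the repo manifest. Below is a minimal sketch of the reordered `spec.metrics` block, assuming the order shown in the in-cluster YAML above (external first, then resource); the `xxxx` values are the same placeholders as in the question:

```yaml
# Sketch: repo-side spec.metrics reordered to match the order the API server
# returns (external metric first, then the CPU resource metric), so the drift
# check should no longer report a diff caused only by ordering.
spec:
  metrics:
    - external:
        metric:
          name: "prometheus.googleapis.com|k8s_hpa_replicas_max|gauge"
          selector:
            matchLabels:
              resource.labels.cluster: "xxxx"
              metric.labels.hpa_name: "hpa-bulk-scale"
        target:
          value: 10
          type: Value
      type: External
    - resource:
        name: cpu
        target:
          averageUtilization: 25
          type: Utilization
      type: Resource
```

After this change, the repo manifest and the live object should serialize the metrics in the same order, so the HPA should stop showing as OutOfSync for this reason.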
-
Also, we are considering whether to allow HPA reordering :)