What version of redis operator are you using?
redis-operator version: ghcr.io/ot-container-kit/redis-operator/redis-operator:v0.18.1
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (kubectl version)?
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.30.3+rke2r1
What did you do?
redis-replication 0.16.4, clusterSize = 2
redis-sentinel 0.16.6, clusterSize = 3
The charts correctly spawn 2 Redis pods, one with redis-role=slave and one with redis-role=master. However, after killing the master pod, 2 slaves are left.
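For reference, a rough reproduction sketch. The ot-helm repository URL is the published chart repo for this operator; the release names, namespace, and the redisReplication.clusterSize / redisSentinel.clusterSize value keys are assumptions and may differ between chart versions:
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm install redis-replication ot-helm/redis-replication --version 0.16.4 --set redisReplication.clusterSize=2 -n test
helm install redis-sentinel ot-helm/redis-sentinel --version 0.16.6 --set redisSentinel.clusterSize=3 -n test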
Operator logs:
{"level":"error","ts":"2024-11-26T23:52:32Z","logger":"controllers.RedisSentinel","msg":"","Request.Namespace":"test","Request.Name":"redis","error":"no real master pod found","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/pkg/k8sutils.getRedisReplicationMasterIP\n\t/workspace/pkg/k8sutils/redis-sentinel.go:349\ngithub.com/OT-CONTAINER-KIT/redis-operator/pkg/k8sutils.IsRedisReplicationReady\n\t/workspace/pkg/k8sutils/redis-replication.go:230\ngithub.com/OT-CONTAINER-KIT/redis-operator/pkg/controllers/redissentinel.(*RedisSentinelReconciler).Reconcile\n\t/workspace/pkg/controllers/redissentinel/redissentinel_controller.go:57\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
All the sentinels have the same logs:
1:X 26 Nov 2024 23:45:32.315 * Running mode=sentinel, port=26379.
1:X 26 Nov 2024 23:45:32.320 * Sentinel new configuration saved on disk
1:X 26 Nov 2024 23:45:32.320 # Sentinel ID is 0fb2f2583a21c2cdb08625fd8e0bdd7d3f459971
1:X 26 Nov 2024 23:45:32.320 # +monitor master myMaster 10.42.2.248 6379 quorum 2
1:X 26 Nov 2024 23:46:02.314 # +sdown master myMaster 10.42.2.248 6379
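One way to compare what the operator and the sentinels see (the namespace and master name are taken from the logs above; the sentinel pod name is an assumption based on the command further down, and -a <password> may be needed if sentinel auth is enabled) is to list the replication pods with their redis-role label and ask a sentinel which master it currently tracks:
kubectl get pods -n test -L redis-role
kubectl exec -it redis-sentinel-sentinel-0 -n test -- redis-cli -p 26379 SENTINEL get-master-addr-by-name myMaster
If no pod shows redis-role=master in the first output, that would be consistent with the "no real master pod found" error above.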
What did you expect to see?
A successful failover, with one of the remaining pods promoted to master.
What did you see instead?
No master; both pods remain slaves.
This helped me, but after the pod is restarted the same error occurs:
kubectl exec -it redis-sentinel-sentinel-1 -n ***** -- redis-cli -p 26379 -a redispass SENTINEL set myMaster auth-pass redispass
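If that works, the same setting presumably has to be applied on every sentinel replica (clusterSize = 3); a rough sketch, assuming the pods follow the redis-sentinel-sentinel-<n> naming shown above:
for i in 0 1 2; do kubectl exec -it redis-sentinel-sentinel-$i -n ***** -- redis-cli -p 26379 -a redispass SENTINEL set myMaster auth-pass redispass; done
A setting applied this way typically does not survive pod recreation if the sentinel config is regenerated at startup, which would match the behaviour after restart described above.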