We are using Jedis (5.2.0) to, amongst other things, ping Redis cluster nodes, both primaries and replicas. We have configured Jedis with a topologyRefreshPeriod (introduced in Jedis 5.1.0) so that the in-memory topology is refreshed automatically every few minutes. Everything works fine until there is a primary node failover, at which point the in-memory Map of Redis node -> connection pool held by the Jedis client suddenly loses data: the Map drops the entries for one or more nodes in the shard in which the failover occurred. Once the app detects the missing entries, it calls ClusterConnectionProvider.renewSlotCache to refresh the Jedis in-memory topology, but the issue nonetheless persists for up to 8 minutes.
Thanks to topologyRefreshPeriod and ClusterConnectionProvider.renewSlotCache, the issue eventually self-resolves: the in-memory Map is eventually repopulated with the node entries for all the shards, after approximately 8 minutes. (Note: before we used topologyRefreshPeriod and renewSlotCache, the issue never self-resolved; we had to cycle the EC2 instances to clear it.)
Have you seen anything like this before? Why does Jedis take approximately 8 minutes to repopulate the in-memory Map with all the nodes?
Note: We are using Amazon ElastiCache and Java 17.
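For reference, the missing-entry check the app performs looks roughly like this. This is a minimal sketch, not our production code: the seed hostname and expected node count are placeholders, and we assume the `ClusterConnectionProvider` constructor and `getNodes()` accessor available in Jedis 5.x.

```java
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.ConnectionPool;
import redis.clients.jedis.DefaultJedisClientConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisClientConfig;
import redis.clients.jedis.providers.ClusterConnectionProvider;

public class TopologyCheck {
    public static void main(String[] args) {
        // Placeholder seed node; in our case this is the ElastiCache
        // cluster configuration endpoint.
        Set<HostAndPort> seeds =
                Set.of(new HostAndPort("redis-cluster.example.com", 6379));
        JedisClientConfig clientConfig = DefaultJedisClientConfig.builder().build();

        try (ClusterConnectionProvider provider =
                new ClusterConnectionProvider(seeds, clientConfig)) {
            // The node -> connection pool Map that loses entries after a
            // primary failover.
            Map<String, ConnectionPool> nodes = provider.getNodes();

            // Placeholder: e.g. 3 shards x (1 primary + 1 replica).
            int expectedNodeCount = 6;
            if (nodes.size() < expectedNodeCount) {
                // Force a topology refresh when entries go missing; in our
                // case the Map still takes ~8 minutes to fully repopulate.
                provider.renewSlotCache();
            }
        }
    }
}
```

This runs alongside the periodic refresh driven by topologyRefreshPeriod, so the renewSlotCache call here is an extra, on-demand refresh triggered by the missing-entry detection described above.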