Describe the bug
We faced an issue in our Pinot production setup where the controller leader node went down but re-election was not triggered. This led to segment upload errors for tables trying to commit a segment, instances being marked as unavailable for the segment, and finally queries failing for those tables with a segments-unavailable error.
Timeline for this was as follows:
A GET call for a large table's segment metadata (tables/<tablename>/segments/<segmentName>/metadata?columns=*) spawned ~75k threads. This caused a huge memory spike and the heap (128GB) to run out of memory, possibly crashing the node. We suspect it was triggered by the reload status button, which issues the segment metadata call.
The node's 2 ZK sessions timed out at 17:19:56. The node tried to re-establish the connection, but it kept emitting metrics as the leader until 17:29:x.
The health check for the node started failing around 17:20:x, but the standby controller nodes kept polling and getting the failed leader's session ID as the leader until 18:10, when we triggered a force replacement of the errored node. Logs showed:
Instance 123 is not leader of cluster production-cluster due to current session 702147796290178 does not match leader session 702147796290171
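To make that failure mode concrete, here is a minimal sketch (not Helix's actual code; the znode path, the parseSessionId helper, and the exact comparison are assumptions for illustration) of a check that reads the leader session from the content stored in the LEADER znode and would keep emitting the log line above for as long as that content stays stale:

```java
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ContentBasedLeaderCheck {
  // Hypothetical path: Helix keeps the controller leader record under
  // /<clusterName>/CONTROLLER/LEADER.
  private static final String LEADER_PATH = "/production-cluster/CONTROLLER/LEADER";

  /**
   * Problematic pattern: the "leader session" is read from the record stored
   * inside the znode. If that record goes stale (the old leader's session has
   * expired but the content was never cleaned up), every poll reports the
   * mismatch and leadership never moves.
   */
  static boolean isLeader(ZooKeeper zk, String instanceName, String clusterName)
      throws KeeperException, InterruptedException {
    byte[] content = zk.getData(LEADER_PATH, false, new Stat());
    String leaderSession = parseSessionId(content);            // e.g. 702147796290171 (stale)
    String currentSession = Long.toString(zk.getSessionId());  // e.g. 702147796290178 (new session)
    if (!currentSession.equals(leaderSession)) {
      System.out.printf(
          "Instance %s is not leader of cluster %s due to current session %s does not match leader session %s%n",
          instanceName, clusterName, currentSession, leaderSession);
      return false;
    }
    return true;
  }

  // Placeholder for deserializing the session id field out of the stored record.
  private static String parseSessionId(byte[] content) {
    return new String(content).trim();
  }
}
```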
Expected behavior
Ideally, re-election should have been triggered around 17:20.
Additional context
Helix version - 1.0.4
Pinot version - 1.0.x
Pinot issue - apache/pinot#13990
I remember this was fixed by Jiajun before. The root cause was Helix reading the session data from the content stored in the ZNode instead of from the stat of the LEADER ZNode itself.
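If I read that correctly, the fix direction is roughly the following sketch (an illustration of the ZooKeeper API involved, not the actual Helix patch; the znode path is an assumption): take the owner session from the znode's Stat (ephemeralOwner) rather than from a session id serialized into its content:

```java
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class StatBasedLeaderCheck {
  // Hypothetical path: Helix keeps the controller leader record under
  // /<clusterName>/CONTROLLER/LEADER as an ephemeral znode owned by the leader.
  private static final String LEADER_PATH = "/production-cluster/CONTROLLER/LEADER";

  /**
   * Read the owner session from the znode's Stat (ephemeralOwner), which
   * ZooKeeper maintains itself, instead of a session id stored in the znode
   * content. Once the owner's session expires, ZooKeeper removes the ephemeral
   * znode, so a stale leader cannot linger in this check.
   */
  static boolean isLeader(ZooKeeper zk) throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    try {
      zk.getData(LEADER_PATH, false, stat);
    } catch (KeeperException.NoNodeException e) {
      // LEADER znode is gone: the previous leader's session expired,
      // so a new election should be triggered.
      return false;
    }
    // ephemeralOwner is the session id of the client that created the znode.
    return stat.getEphemeralOwner() == zk.getSessionId();
  }
}
```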