## Table of Contents
- Introduction
- openEquella Network Failure
- Unique IDs for openEquella Nodes
- openEquella Session Failover
- Failure to Connect to Zookeeper on Startup
- Rolling Restart of openEquella
- Rolling Restart of Zookeeper
- Running Zookeeper in a Degraded State
- Appendix: Healthy Cluster State
## Introduction

Zookeeper failover: Zookeeper requires a majority of its quorum nodes to be responsive in order to be available to openEquella. For the following scenarios, we’ll assume the Zookeeper quorum has 3 nodes.

openEquella failover: openEquella does not have a majority requirement for active nodes in its cluster. It does, however, have a delay of roughly 30 seconds before user sessions / state (such as wizards) are persisted to the database. Note that Bulk Operations will not fail over to other openEquella nodes. For the following scenarios, we’ll assume the openEquella cluster has 2 nodes.
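For reference, a minimal sketch of a 3-node quorum configuration. The hostnames (`zk1`–`zk3`) and file paths are placeholders, not values from this guide - adjust for your environment:

```bash
# Minimal sketch: write a 3-node Zookeeper quorum config.
# Hostnames (zk1-zk3) and the config path are placeholders.
cat > /etc/zookeeper/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# One line per quorum member: server.<id>=<host>:<peer-port>:<election-port>
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
EOF
```

With 3 members, the quorum tolerates the loss of any single node; losing 2 leaves no majority and the ensemble becomes unavailable to openEquella.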
## openEquella Network Failure

For a short network failure, no visible change should occur in openEquella.
- Identify the openEquella node to create a network failure on. For example, on Linux you can simulate this by pausing the openEquella Java process with `sudo kill -STOP [PID]` and resuming it with `sudo kill -CONT [PID]` (see the sketch after this list).
- Take note of the Active Tasks details.
- Login to openEquella and ensure you’re on the targeted node.
- Start a Bulk Op on the targeted node that will go for at least 30 seconds.
- Simulate the network failure on the targeted openEquella node for less than 10 seconds.
- Ensure the Active Tasks details remain as before, and that the Bulk Op task is on the list.
- Ensure the Bulk Op dialog finishes successfully.
- Ensure the Active Tasks details no longer show the Bulk Op.
- Check the openEquella logs - ensure no task failover was logged.
- Verify cluster health.
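A minimal sketch of the pause/resume simulation; the `pgrep` pattern is an assumption - adjust it to match your install's JVM:

```bash
# Minimal sketch: briefly pause the openEquella JVM to simulate a short
# network failure. The pgrep pattern is an assumption - adjust to your install.
PID=$(pgrep -f 'equella' | head -n 1)
sudo kill -STOP "$PID"   # freeze the process; heartbeats to Zookeeper stop
sleep 8                  # stay under the ~10 second threshold for this test
sudo kill -CONT "$PID"   # resume, as if the network recovered
```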
For a longer network failure, any node that had trouble communicating with Zookeeper will refresh its ‘Active Tasks’ state. To do so, any tasks on the affected node(s) will be stopped (after the current execution completes, provided the openEquella node stays healthy). Then only primary tasks (such as the Scheduler Supervisor, Migration, and Institution tasks) will be restarted, as per the state in Zookeeper.
- Identify the openEquella node to create a network failure on (on Linux, simulate this with `sudo kill -STOP [PID]` and resume with `sudo kill -CONT [PID]`, as above).
- Take note of the Active Tasks details.
- Login to openEquella and ensure you’re on the targeted node.
- Start a Bulk Op on the targeted node that will go for at least 2 minutes.
- Simulate the network failure on the targeted openEquella node for ~45 seconds.
- Quite possibly, the Bulk Op dialog will appear to stop responding. Check the thread dumps of both servers - the Bulk Op should still be running (see the sketch after this list).
- Ensure the Active Tasks details no longer show the Bulk Op.
- Let the Bulk Op complete - the logs will indicate this.
- Check the openEquella logs - ensure they show that the primary Active Tasks were failed over. They may be failed over to the same node; this is expected behavior.
- Verify cluster health.
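A minimal sketch for the thread-dump check, using the JDK’s `jstack`; the `pgrep` pattern and the `bulk` thread-name match are assumptions:

```bash
# Minimal sketch: confirm the Bulk Op worker thread is still present.
# The pgrep pattern and the 'bulk' thread-name match are assumptions.
PID=$(pgrep -f 'equella' | head -n 1)
jstack "$PID" | grep -i -B 1 -A 3 'bulk'
```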
## Unique IDs for openEquella Nodes

Covered in the openEquella Clustering Guide. Note that in 6.3+, the first part of the cluster ID can be specified in optional-config.
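Purely as an illustration (the property key below is a placeholder, not the real name - consult the Clustering Guide for the exact key), the prefix would be set along these lines:

```bash
# Illustration only: 'cluster.node.prefix' is a placeholder key, not the
# real property name - see the Clustering Guide for the exact key.
echo 'cluster.node.prefix=prod-node' >> /path/to/learningedge-config/optional-config.properties
```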
## openEquella Session Failover

Covered in the openEquella Clustering Guide.
## Failure to Connect to Zookeeper on Startup

openEquella should wait for Zookeeper to start up, then proceed as normal once connected.
- Turn off both openEquella nodes (assuming a 2 node cluster).
- Turn off a majority of the Zookeeper nodes.
- Turn on both openEquella nodes (they should not fully start up - i.e. the website will not be accessible).
- Turn on a majority of the Zookeeper nodes.
- Verify the logs show the Zookeeper connection state is now CONNECTED (see the sketch after this list).
- Verify cluster health.
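A minimal sketch for checking quorum status and the reconnection, assuming hostnames `zk1`–`zk3` and a placeholder log path. Note that on Zookeeper 3.5+, four-letter-word commands such as `srvr` must be whitelisted via `4lw.commands.whitelist`:

```bash
# Minimal sketch: report each Zookeeper node's quorum role, then look for
# the CONNECTED state in the openEquella logs. Hostnames/paths are assumptions.
for h in zk1 zk2 zk3; do
  printf '%s: ' "$h"
  echo srvr | nc -w 2 "$h" 2181 | grep Mode || echo 'not responding'
done
grep -i 'CONNECTED' /path/to/equella/logs/*.log | tail -n 5
```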
## Rolling Restart of openEquella

openEquella should operate as normal during rolling restarts for the majority of use cases, including maintaining cluster health.
- Assuming a 2 node cluster, take note of the Active Tasks details.
- Stop node A, wait 15 seconds, start node A, wait 30 seconds (scripted in the sketch after this list).
- Verify the Active Tasks are now running on node B.
- Stop node B, wait 15 seconds, start node B, wait 30 seconds.
- Verify the Active Tasks are now running on node A.
- Verify cluster health.
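A minimal sketch of the restart loop, assuming SSH access and a hypothetical systemd unit named `equella` on each node:

```bash
# Minimal sketch: rolling restart across a 2 node cluster.
# Hostnames and the 'equella' systemd unit name are assumptions.
for node in nodeA nodeB; do
  ssh "$node" 'sudo systemctl stop equella'
  sleep 15    # let the cluster notice the node is gone
  ssh "$node" 'sudo systemctl start equella'
  sleep 30    # give Active Tasks time to settle back onto this node
done
```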
## Rolling Restart of Zookeeper

Zookeeper is highly robust in terms of failover, and as such, rolling restarts are appropriate. As long as a majority of the Zookeeper nodes are always active, openEquella should operate as normal, though it will need to reconnect to Zookeeper whenever its current Zookeeper node is bounced. This is similar to the scenario of a network failure lasting less than 10 seconds.
- Take note of the Active Tasks details.
- Login to openEquella and start a Bulk Op that will go for at least 30 seconds longer than the rolling restart of Zookeeper will take.
- Perform the rolling restart of Zookeeper (bounce a Zookeeper node, wait 30 seconds, continue). For testing failover, try to bounce the Zookeeper leader node if possible (see the sketch after this list).
- Ensure the Active Tasks details remain as before, and that the Bulk Op task is on the list.
- Ensure the Bulk Op dialog completes.
- Ensure the Active Tasks details no longer show the Bulk Op, and that the rest of the tasks are the same as at the start of the test.
- Check the openEquella logs - ensure no task failover was logged, and no Zookeeper connections were LOST (only SUSPENDED and RECONNECTED).
- Verify cluster health.
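A minimal sketch of the Zookeeper rolling restart, assuming hostnames `zk1`–`zk3` and a hypothetical `zookeeper` systemd unit. Printing each node’s role first makes it easy to see when the leader is being bounced:

```bash
# Minimal sketch: bounce each Zookeeper node in turn, letting the quorum
# settle in between. Hostnames and the unit name are assumptions.
for h in zk1 zk2 zk3; do
  echo srvr | nc -w 2 "$h" 2181 | grep Mode   # leader or follower?
  ssh "$h" 'sudo systemctl restart zookeeper'
  sleep 30                                    # allow re-election before continuing
done
```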
## Running Zookeeper in a Degraded State

Zookeeper needs a majority of its nodes responding for the quorum to be available to openEquella. Assuming a 3 node Zookeeper configuration, this means 2 nodes must be responsive. openEquella should run as normal as long as the quorum is available, even if not all Zookeeper nodes are responsive.
- Start with the Zookeeper quorum and openEquella cluster fully operational.
- Turn off the Zookeeper node that is the ‘leader’ (see the sketch after this list).
- Verify cluster health - let enough time pass that the hourly scheduled tasks run.
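A minimal sketch for finding and stopping the current leader, under the same hostname and unit-name assumptions as above:

```bash
# Minimal sketch: stop only the quorum leader.
# Hostnames and the 'zookeeper' unit name are assumptions.
for h in zk1 zk2 zk3; do
  if echo srvr | nc -w 2 "$h" 2181 | grep -q 'Mode: leader'; then
    echo "leader is $h - stopping it"
    ssh "$h" 'sudo systemctl stop zookeeper'
  fi
done
```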
## Appendix: Healthy Cluster State

To verify a ‘healthy’ openEquella cluster (assuming a 2 node cluster):
- Verify scheduled tasks are running (via the logs) every hour.
- Perform the add-item test on each node:
  - Add an item (it can be in draft) with an attachment on the first node.
  - Access ‘Manage Resources’ from another node - ensure the item shows up immediately after it’s created.
- Verify disabling / enabling an institution works on both nodes.
- Verify the logged-in users shown on each node are roughly equivalent (there can be up to a 60 second delay).
- Run a bulk action (it doesn’t have to be large - two items is sufficient). Ensure it completes, that the Bulk Op task is removed from the Active Tasks details, and that it no longer appears in the thread dumps.
- Verify the ‘Active Tasks’ section on the cluster health page is as expected.
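To supplement the checks above, a minimal sketch that probes each node directly, bypassing any load balancer; hostnames, port, and path are assumptions:

```bash
# Minimal sketch: hit each openEquella node directly and expect HTTP 200.
# Hostnames, port, and path are assumptions - adjust for your deployment.
for node in oeq-node1 oeq-node2; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$node:8080/")
  echo "$node -> HTTP $code"
done
```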
Monitor the logs:
- Ensure openEquella isn’t frequently reconnecting to Zookeeper or to other openEquella nodes (openEquella should rarely need to reconnect unless a network or application failure occurs) - see the sketch after this list.
- Ensure the Zookeeper logs are clean (we’ve seen cases where the Zookeeper quorum had insufficient resources or configuration, and was performing frequent inter-quorum reconnections).
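A minimal sketch for spotting connection churn in the openEquella logs; the log path is an assumption, and the state names are the connection states referenced earlier in this guide:

```bash
# Minimal sketch: count Zookeeper connection-state transitions in the logs.
# Frequent SUSPENDED/LOST entries suggest an unhealthy cluster. Path assumed.
grep -hoE 'SUSPENDED|LOST|RECONNECTED' /path/to/equella/logs/*.log | sort | uniq -c
```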