At our last meeting we drafted a way to distribute client nodes (hereinafter referred to as clients) among supernodes.
Basic idea
Supernodes get an option called "weight", an integer in the range of 1 to 100 that is set manually by the operator according to the resources available on that supernode. The weight is used to derive the relative number of clients each supernode aims to handle.
Example:
Initially the network consists of supernode A and supernode B. A's weight is 20, B's weight is 30, and there are 1000 clients in the network. The sum of A's and B's weights is 20 + 30 = 50. 1000 clients divided by a weight sum of 50 equals 20 clients per weight unit. Therefore a fair share of clients for supernode A would be 20 * 20 = 400 clients and for B 30 * 20 = 600 clients.
If A and B currently have 500 clients each, this means that A needs to "transfer" 500 - 400 = 100 clients to supernode B.
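A minimal sketch of this fair-share calculation (the function name and structure are purely illustrative, not part of any existing implementation):

```python
def fair_shares(weights, total_clients):
    """Distribute the total number of clients proportionally to each supernode's weight."""
    clients_per_weight_unit = total_clients / sum(weights.values())
    return {name: round(weight * clients_per_weight_unit)
            for name, weight in weights.items()}

# The example above: A (weight 20) and B (weight 30) with 1000 clients in the network.
print(fair_shares({"A": 20, "B": 30}, 1000))   # {'A': 400, 'B': 600}
```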
To mitigate oscillations and limit the number of client reconnects and the accompanying outages, it is required that supernodes coordinate the transfer process.
Example of such a situation:
Supernodes A and B are already balanced as in the example above: A has 400 clients and B has 600 clients.
Now a new supernode C with weight 50 is added to the network. The new sum of weights is therefore 20 + 30 + 50 = 100. 1000 clients divided by 100 equals 10 clients per weight unit. Therefore a fair share for supernode A would be 20 * 10 = 200 clients, for B 30 * 10 = 300 clients and for C 50 * 10 = 500 clients.
This means A needs to transfer 400 - 200 = 200 clients and B needs to transfer 600 - 300 = 300 clients to supernode C.
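Reusing the hypothetical fair_shares() sketch from above, the number of clients each supernode has to hand over is simply its current count minus its new fair share:

```python
# C joins with weight 50; A and B still hold the clients from the previous balancing.
current = {"A": 400, "B": 600, "C": 0}
shares = fair_shares({"A": 20, "B": 30, "C": 50}, 1000)   # {'A': 200, 'B': 300, 'C': 500}

# Clients each supernode has to transfer away (its excess above the fair share).
to_transfer = {name: max(0, current[name] - shares[name]) for name in current}
print(to_transfer)   # {'A': 200, 'B': 300, 'C': 0}
```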
Problem: A would start "kicking" clients to lower the number it handles, but some of them would then connect to supernode B (which is already "saturated") instead of connecting to supernode C.
Discussion
A and B (or, in the general case, the saturated supernodes) must roughly coordinate when they stop accepting new connections so that the clients they kick are forced to connect to supernode C. Meanwhile the not yet saturated supernodes (think of an additional supernode D) must make sure they limit the number of newly connecting clients so that they do not exceed their newly calculated fair share.
To achieve this, a distributed approach in which every supernode knows every other supernode's weight and client count is desirable.
Additionally, some form of delay is needed when an oversaturation situation is detected: if an oversaturated supernode started kicking clients immediately after a new imbalance arises, those clients could connect to a supernode that is saturated under the new situation but has not yet recognized it, which would lead to unnecessary kicks shortly afterwards. Therefore a grace period has to elapse before kicking starts, to reduce the probability that kicked clients end up on uninformed supernodes which have not yet switched to the mode in which they decline new clients.
The kicking process is achieved by temporarily removing the clients to be kicked from WireGuard's configuration.
To reduce the time a client needs to detect that it has been kicked and to reconnect, it helps to understand how the algorithm on the client's device (wgpeerselector) works. wgpeerselector uses the time of the last received WireGuard handshake to detect a faulty connection. WireGuard sends a handshake packet at least every two minutes as long as there is data (which is always the case when using BATMAN, due to the regular transfer of OGMs). wgpeerselector assumes a faulty connection if the last handshake was received more than 150 seconds ago. Therefore it makes sense to kick a client right before the next handshake would be sent: at that point the client's last handshake is already almost two minutes old, so its 150-second timeout expires shortly after the kick.
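A sketch of how such a kick could be timed, assuming the supernode uses the standard wg CLI and an interface named wg0 (both assumptions); the timing window follows the description above:

```python
import subprocess
import time

WG_IF = "wg0"               # assumed interface name
KICK_WINDOW = (90, 105)     # handshake age in seconds, i.e. 15-30 s before the next handshake

def latest_handshakes():
    """Map peer public key -> seconds since its last handshake."""
    out = subprocess.run(["wg", "show", WG_IF, "latest-handshakes"],
                         capture_output=True, text=True, check=True).stdout
    now = time.time()
    return {pubkey: now - int(ts)
            for pubkey, ts in (line.split("\t") for line in out.splitlines())}

def kick(pubkey):
    """Temporarily remove the peer so the client sees the connection as faulty."""
    subprocess.run(["wg", "set", WG_IF, "peer", pubkey, "remove"], check=True)

def kick_if_due(pubkey):
    age = latest_handshakes().get(pubkey)
    # Kick only shortly before the next handshake would be sent, so the client's
    # 150 s timeout expires as soon as possible after the kick.
    if age is not None and KICK_WINDOW[0] <= age <= KICK_WINDOW[1]:
        kick(pubkey)
        return True
    return False
```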
To avoid excluding clients from the network which are "pinned" to a specific subset of supernodes (meaning that some supernodes are disabled or missing in the client's configuration), it is desirable to maintain an up-to-date whitelist of those clients and to exclude them from the kicking process on the supernodes that are included in the client's configuration.
Creating this whitelist is a problem of its own. The original idea was to introduce a new field in respondd which reflects the configured supernodes. However, respondd's data size is already close to the single-packet size limit, and situations can occur where e.g. an ISP's firewall blocks a specific supernode, so this idea needs to be reconciled with the idea of empirically checking which supernodes a client can actually connect to.
For the latter approach we found a workaround comparable to the children's game musical chairs (German: Reise nach Jerusalem).
The idea is to create this whitelist on the fly on saturated supernodes by using the supernodes' synchronized clocks to pause the blocking part of the balancing process in a synchronized way, e.g. every 5 minutes, after a settling period (which gives recently kicked clients that are able to choose an unsaturated supernode time to reconnect there). Clients which connect within this pause are assumed to have been unable to connect to an unsaturated supernode and are excluded from the kicking process until the balancing process has succeeded.
Furthermore, it must be ensured that the redistribution actually works before a large number of clients is kicked, since it can happen e.g. that a newly added supernode does not accept clients due to misconfiguration or external network issues. To detect this case, the balancing process is stretched out, starting with a small number of clients to be kicked that increases step by step. If the vast majority of the kicked clients reconnects within the pause, or the number of clients on the undersaturated supernodes does not increase accordingly, it must be assumed that the target supernode has a problem. In this case the operators are notified and the supernode is blacklisted, which excludes it from the weighting/balancing process.
Solution idea
The solution is based on the children's game musical chairs (German: Reise nach Jerusalem).
A supernode is considered saturated if its number of clients exceeds its fair share by 5% and by at least 50 clients.
The basic idea is to divide every hour into 6 terms, each consisting of a 6-minute balancing period followed by a 4-minute pause.
During the pause, clients which were unable to choose an unsaturated supernode can reconnect to a saturated one and will be temporarily whitelisted. If balancing is needed again later (e.g. at 2 AM), the whitelisted clients are removed from the whitelist and kicked with a higher priority than other clients to prevent false positives.
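A minimal sketch of the synchronized term schedule and the pause-time whitelisting, assuming all supernodes have NTP-synchronized clocks (all names are illustrative):

```python
import time

TERM_LENGTH = 10 * 60       # 6 terms per hour
BALANCING_LENGTH = 6 * 60   # 6-minute balancing period, then a 4-minute pause

def in_pause(now=None):
    """All supernodes agree on the current phase because their clocks are synchronized."""
    now = time.time() if now is None else now
    return (now % TERM_LENGTH) >= BALANCING_LENGTH

whitelist = set()

def on_client_connect(pubkey, saturated):
    # A client that (re)connects to a saturated supernode during the pause apparently
    # could not reach an unsaturated one and is exempted from further kicks.
    if saturated and in_pause():
        whitelist.add(pubkey)
```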
Calculating the number of clients to kick per term on oversaturated supernodes: the excess number of clients is remembered for the last 6 terms. The largest of these values, divided by the number of terms (6), gives the number of clients to kick per term (at least 10). This way a full balancing should be achieved within an hour.
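The saturation check and the per-term kick count could be sketched as follows (the 5%/50-client threshold and the minimum of 10 kicks per term are taken from the text; everything else is illustrative):

```python
TERMS_PER_HOUR = 6
MIN_KICKS_PER_TERM = 10

def is_saturated(clients, fair_share):
    """Saturated = more than 5 % and at least 50 clients above the fair share."""
    excess = clients - fair_share
    return excess > 0.05 * fair_share and excess >= 50

def kicks_per_term(excess_history):
    """excess_history holds the excess client counts of the last 6 terms.
    Only meant to be called while the supernode is actually saturated."""
    return max(MIN_KICKS_PER_TERM, max(excess_history) // TERMS_PER_HOUR)
```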
Example:
16:00:00 Term starts
- If at least one supernode is saturated, a balancing period starts.
- If this supernode is saturated:
  - Block new connections.
  - Wait until 16:00:30 (until all saturated supernodes are blocking new connections).
  - Kick the excess number of clients according to the calculation above, spread over a period of 120 seconds (each client at least 15 seconds and at most 30 seconds before its next handshake would be sent).
- Else:
  - Carefully watch new connections to prevent oversaturation.
16:02:30
- Clients now have 3 minutes and 30 seconds in the worst case to connect to an unsaturated supernode.
16:06:00
- Unblock new connections for 3 minutes and 30 seconds.
- Clients which were unable to connect to unsaturated supernodes (e.g. because they are pinned or there is a network issue) can return to saturated supernodes and will be whitelisted as stated above.
16:09:30
- Determine the number of likely successful transfers of clients to unsaturated supernodes and the number of clients which returned to already saturated supernodes.
- If less than e.g. 10 percent of the clients expected to choose a specific unsaturated supernode actually chose that supernode, it is considered broken: it is excluded from the calculations, a technician is informed, the above-mentioned list with the numbers of clients to kick is cleared, and the clients whitelisted in the previous step are removed from the whitelist again, as the problem is considered to lie with that supernode.
16:10:00 Next term starts
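A rough sketch of one such term from the point of view of a single supernode. All methods on the supernode object (block_new_connections, kick_clients, evaluate_transfers, ...) are hypothetical placeholders that only mirror the timeline above; kicks_per_term() refers to the earlier sketch:

```python
import time

def sleep_until(t):
    time.sleep(max(0.0, t - time.time()))

def run_term(supernode):
    term_start = time.time()

    if not supernode.any_supernode_saturated():
        return                                    # no balancing period needed this term

    if supernode.is_saturated():
        supernode.block_new_connections()
        sleep_until(term_start + 30)              # 16:00:30: all saturated supernodes are blocking
        n = kicks_per_term(supernode.excess_history)
        supernode.kick_clients(n, deadline=term_start + 150)   # spread over 120 seconds
    else:
        supernode.watch_new_connections()         # admit clients only up to the fair share

    sleep_until(term_start + 360)                 # 16:06:00: end of the balancing period
    supernode.unblock_new_connections()           # pause: returning clients get whitelisted

    sleep_until(term_start + 570)                 # 16:09:30: evaluate the transfers
    supernode.evaluate_transfers()                # e.g. the 10 % check described above
```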
@lemoer @AiyionPrime @1977er It would be nice if you could have a look at it.