Load active peers from previous session saved into file. #2230
Conversation
Force-pushed from 8be6a43 to a1e85d5
rskj-core/src/main/java/org/ethereum/net/server/ChannelManagerImpl.java (outdated; resolved)
Good job!
I just have a few recommendations. Let me know what you think about them.
rskj-core/src/test/java/co/rsk/config/RskSystemPropertiesTest.java (outdated; resolved)
rskj-core/src/test/java/co/rsk/net/discovery/DiscoveredPeersPersistenceServiceTest.java (outdated; resolved)
Force-pushed from 666a418 to 87a3e6a
rskj-core/src/test/java/co/rsk/net/discovery/DiscoveredPeersPersistenceServiceTest.java (fixed)
Good job! I see that all recommendations were addressed, so I'm approving. Well done! :)
@@ -145,7 +145,11 @@ public synchronized void stop() {
     logger.info("Shutting down RSK node");

     for (int i = internalServices.size() - 1; i >= 0; i--) {
-        internalServices.get(i).stop();
+        try {
+            internalServices.get(i).stop();
Could you elaborate a bit more on why you added this try-catch? Any specific need?
OK, I see - so this is not required by the feature itself.
I guess it's a tradeoff: an improperly stopped service may sometimes result in data inconsistency in subsequent services. We should be very careful with such changes in the shutdown process.
(previous deleted reply) If the stop of any of the services fails by throwing an exception, execution would stop and the remaining services wouldn't be stopped cleanly.
Huh, if that could happen I think it is something we should fix. A wrong/different ending of one service shouldn't affect the others.
If that happens then there's a bug in the code which has to be fixed. We don't expect service.stop() to throw any checked exception. Only unchecked exceptions can happen here (e.g. RuntimeException), and if that happens we should propagate it up the call stack. I'm not sure it's a good idea to catch an exception and continue stopping other services, as the node is already in an inconsistent state - the best we can do here is to catch it at an upper level and log it. So I'd rather not add this try-catch in here.
OK, got it, I'll remove it. But in my opinion the services should be independent, and where there are dependencies we could create inner groups and close them together. Maybe that's something we can try in the future.
As an example, take this service: if it throws an error while persisting the file, or while processing unexpected input data or the like, the other services shouldn't be affected and should still be stopped properly.
Yeah, that was my point. If a service throws an unchecked exception for any reason, it means the node most probably cannot recover from that state. If the service can recover from an exception thrown in the stop() method, then the exception should not be propagated up but rather handled inside it.
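To make the tradeoff concrete, here is a minimal sketch of the two shutdown strategies discussed above. InternalService and NodeShutdown are illustrative names for this sketch, not the actual rskj types:

    import java.util.List;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Illustrative stand-in for whatever service abstraction the node uses.
    interface InternalService {
        void stop();
    }

    class NodeShutdown {
        private static final Logger logger = LoggerFactory.getLogger(NodeShutdown.class);

        // Option A (the change under review): catch per service, so one
        // failing stop() does not prevent the remaining services from stopping.
        static void stopCatching(List<InternalService> internalServices) {
            for (int i = internalServices.size() - 1; i >= 0; i--) {
                try {
                    internalServices.get(i).stop();
                } catch (RuntimeException e) {
                    logger.error("Failed to stop service at index {}", i, e);
                }
            }
        }

        // Option B (the reviewer's preference): let unchecked exceptions
        // propagate up the call stack and catch/log them at a higher level.
        static void stopPropagating(List<InternalService> internalServices) {
            for (int i = internalServices.size() - 1; i >= 0; i--) {
                internalServices.get(i).stop();
            }
        }
    }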
@@ -61,6 +61,7 @@ public class RskSystemProperties extends SystemProperties {
     private static final int CHUNK_SIZE = 192;

     public static final String PROPERTY_SYNC_TOP_BEST = "sync.topBest";
+    public static final String USE_PEERS_FROM_LAST_SESSION = "peer.usePeersFromLastSession";
What about keeping it under the peer.discovery config section?
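For illustration, a minimal sketch of what the suggestion might look like. The key name and the accessor class below are assumptions, not the actual rskj config plumbing:

    import com.typesafe.config.Config;

    public class PeerDiscoveryConfig {
        // Hypothetical key name, nested under peer.discovery as suggested.
        public static final String USE_PEERS_FROM_LAST_SESSION =
                "peer.discovery.usePeersFromLastSession";

        private final Config config;

        public PeerDiscoveryConfig(Config config) {
            this.config = config;
        }

        // Typical Typesafe Config accessor pattern; defaults to false
        // when the key is absent from the config file.
        public boolean usePeersFromLastSession() {
            return config.hasPath(USE_PEERS_FROM_LAST_SESSION)
                    && config.getBoolean(USE_PEERS_FROM_LAST_SESSION);
        }
    }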
rskj-core/src/main/java/co/rsk/net/discovery/DiscoveredPeersPersistenceService.java (outdated; resolved)
rskj-core/src/main/java/org/ethereum/config/SystemProperties.java (outdated; resolved)
Force-pushed from b9ed487 to e155aa7
rskj-core/src/test/java/co/rsk/net/discovery/DiscoveredPeersPersistenceServiceTest.java (fixed)
pipeline:run
rskj-core/src/main/java/co/rsk/net/discovery/KnownPeersSaver.java (outdated; resolved)
rskj-core/src/main/java/co/rsk/net/discovery/KnownPeersSaver.java (outdated; resolved)
rskj-core/src/main/java/org/ethereum/util/SimpleFileWriter.java (outdated; resolved)
rskj-core/src/main/java/org/ethereum/net/server/ChannelManagerImpl.java (outdated; resolved)
import static org.junit.jupiter.api.Assertions.assertEquals;

class SimpleFileWriterTest {

Check notice - Code scanning / CodeQL: Unused classes and interfaces (note, on test code)
}

public void savePropertiesIntoFile(Properties properties, Path filePath) throws IOException {
    File tempFile = File.createTempFile(filePath.toString(), TMP);

Check warning - Code scanning / CodeQL: Local information disclosure in a temporary directory (Medium)
}

public void saveDataIntoFile(String data, Path filePath) throws IOException {
    File tempFile = File.createTempFile(filePath.toString(), TMP);

Check warning - Code scanning / CodeQL: Local information disclosure in a temporary directory (Medium)
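The warning fires because java.io.File.createTempFile places a world-readable file in the shared system temp directory. A minimal sketch of one common way to address it: create the temp file next to the destination via java.nio (which applies owner-only permissions on POSIX) and move it into place atomically. The class and method names are illustrative, not necessarily what SimpleFileWriter ends up doing:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public final class AtomicFileWriter {

        // Writes data to a temp file in the same directory as the target,
        // then moves it into place. Files.createTempFile restricts the file
        // to the owner on POSIX systems, unlike File.createTempFile, and
        // staying out of the shared temp dir avoids the disclosure issue.
        public static void saveDataIntoFile(String data, Path filePath) throws IOException {
            Path tempFile = Files.createTempFile(
                    filePath.getParent(), filePath.getFileName().toString(), ".tmp");
            try {
                Files.writeString(tempFile, data);
                // ATOMIC_MOVE makes the rename all-or-nothing on POSIX;
                // REPLACE_EXISTING documents the intent to overwrite.
                Files.move(tempFile, filePath,
                        StandardCopyOption.REPLACE_EXISTING,
                        StandardCopyOption.ATOMIC_MOVE);
            } finally {
                Files.deleteIfExists(tempFile); // no-op when the move succeeded
            }
        }
    }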
Quality Gate passed
LGTM
Force-pushed from 3d37038 to 8e827eb
…ed into peerExplorer
Updating known peers origin, adding tests and small fixes
Removing unnecessary log to avoid log injection
Load active peers from previous session saved into file.
…lorer dispose method
Force-pushed from 8e827eb to aa6bce5
Quality Gate passed
A client node will be able to reconnect to previously connected peers on restart.
Description
Persist discovered peers across node restarts
Motivation and Context
As of now, the RSKj node keeps discovered peers in a so-called distance table in memory, so they are lost after a node restart. The next time the node starts it has to run the peer discovery process from scratch, having only the list of boot nodes.
To improve, and most importantly speed up, the process, we want to preserve the list of already discovered peers on disk.
Note: it looks like we could make use of the org.ethereum.util.MapSnapshot<> class for that purpose.
Expected behaviour:
when the node stops, it should save all discovered peers to a file
when the node starts, it should check whether the file with discovered peers exists in the database directory and, if so, load the peer list from there; otherwise, do nothing and proceed as before (a sketch of this save/load cycle follows the notes below)
Notes:
this functionality is part of the peer discovery protocol (see the PeerExplorer class)
the node should continue using bootstrap nodes the same way as it does now
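As referenced above, a minimal sketch of the save-on-stop / load-on-start cycle. The class name, the one-"host:port"-per-line file format, and the helpers are assumptions for illustration, not the actual KnownPeersSaver / PeerExplorer implementation:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    public final class KnownPeersFile {

        // Called on shutdown: dump every discovered peer to the file,
        // one "host:port" entry per line (hypothetical format).
        public static void save(List<InetSocketAddress> peers, Path file) throws IOException {
            List<String> lines = peers.stream()
                    .map(p -> p.getHostString() + ":" + p.getPort())
                    .collect(Collectors.toList());
            Files.write(file, lines);
        }

        // Called on startup: if the file exists, load the peer list from it;
        // otherwise return an empty list and proceed as before.
        public static List<InetSocketAddress> load(Path file) throws IOException {
            if (!Files.exists(file)) {
                return List.of();
            }
            return Files.readAllLines(file).stream()
                    .map(line -> {
                        int sep = line.lastIndexOf(':');
                        return InetSocketAddress.createUnresolved(
                                line.substring(0, sep),
                                Integer.parseInt(line.substring(sep + 1)));
                    })
                    .collect(Collectors.toList());
        }
    }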
How Has This Been Tested?
Types of changes
Checklist: