Reset parallel.loadmanager in EngineTestRunner after every data loading #517
This PR resets `parallel.loadmanager` after a test run through `EngineTestRunner` is completed, ensuring that the sub-division of data is consistent among tests.

This is necessary because, when the data is divided into blocks, the same instance of `parallel.loadmanager` is used, and the calculation of `partition` depends on `self.load`, which has been modified in place by the previous loading. This is fine for normal reconstruction (although not strictly for stochastic-type engines), but not for comparing among tests, where consistency is desired.

This small script using the `MoonFlowerScan` scan illustrates the difference:

When executing with 4 MPI ranks, you would get something similar to this:
Note that the number 41, the number of active pods, belongs to a different rank when this is executed sequentially in a for-loop. This should not happen in testing. Changing to `reset=True` in the above script gives:
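For reference, here is a minimal, hedged sketch of the underlying mechanism (this is not the `MoonFlowerScan` script from this PR, and it assumes the `LoadManager` class, the module-level `loadmanager` instance, and its `assign()` method in `ptypy.utils.parallel`):

```python
# Hedged sketch (assumed ptypy API, not the PR's test script):
# the module-level loadmanager accumulates per-rank load in self.load,
# so repeated assignments of the same items need not partition them the same way.
from ptypy.utils import parallel

items = list(range(10))

first = parallel.loadmanager.assign(items)   # partition for the first "test"
second = parallel.loadmanager.assign(items)  # may differ: self.load was updated in place

# Re-creating the load manager restores a clean starting point; the actual
# reset performed by EngineTestRunner in this PR may be implemented differently.
parallel.loadmanager = parallel.LoadManager()
third = parallel.loadmanager.assign(items)   # consistent with `first` again

if parallel.master:
    print(first, second, third, sep="\n")
```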