Fix/ parse port spec port as sender or recvr #2727
Signed-off-by: Jan Vesely <[email protected]>
…ound Signed-off-by: Jan Vesely <[email protected]>
Imported in core/rpc/graph_pb2.py Signed-off-by: Jan Vesely <[email protected]>
…2624) Drop grpcio-tools and add protobuf instead. Use a tighter bound on dill.
…rt np.all(==) Test all logged matrices, not just one arbitrary index. Drop explicit testing of shape. Signed-off-by: Jan Vesely <[email protected]>
…(==) Checks the result of the comparison. Signed-off-by: Jan Vesely <[email protected]>
Assert on the return value of equality.
Using the fixture adds the correct marks. Skip the compiled variant on fp32, since Python execution uses fp64. Signed-off-by: Jan Vesely <[email protected]>
Fix selection of expected results when running in Python and --fp-precision=fp32 Signed-off-by: Jan Vesely <[email protected]>
autodiff_mode includes the LLVMRun and PyTorch execution modes, so a comparison to ExecutionMode.Python is false for both. Signed-off-by: Jan Vesely <[email protected]>
Switch fp32 run to python3.8 to get better coverage of python versions. This should help catch compiled tests that are not correctly marked. Signed-off-by: Jan Vesely <[email protected]>
Only check fp32 results if running the compiled variant of a test. Use correct execution mode checks in test_optimizer_specs. Convert the PyTorch vs. LLVM identicalness test to the autodiff_mode fixture. Enable running the entire test suite with --fp-precision=fp32 in CI.
…t of learning results The 'benchmark' fixture runs an arbitrary number of iterations. Because of this, it is overridden in conftest.py to preserve the return value of the first invocation. However, this only applies to explicitly returned values, not stateful variables. Add a helper function that returns 'learning_results' to work around this restriction. Fixes --benchmark-enable test in AutodiffComposition Fixes: 521a4a2 ("Tentative learning branch (#2623)") Signed-off-by: Jan Vesely <[email protected]>
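The workaround described above can be sketched as follows. This is a simplified stand-in, not the actual PsyNeuLink test code: `Counter` plays the role of the composition, and `caching_benchmark` mimics the conftest.py override that preserves only the first invocation's return value.

```python
class Counter:
    """Stand-in for a component whose internal state grows on every run."""
    def __init__(self):
        self.results = []

    def run(self):
        self.results.append(len(self.results))

def run_and_return(comp):
    # Explicitly returning a copy of the stateful value is the trick:
    # the caching fixture can only preserve returned values, not the
    # component's mutable state.
    comp.run()
    return list(comp.results)

def caching_benchmark(fn, *args):
    """Mimics the overridden 'benchmark' fixture: run several
    iterations, but keep only the first invocation's return value."""
    first = fn(*args)
    for _ in range(4):  # extra benchmark iterations mutate the state
        fn(*args)
    return first

comp = Counter()
results = caching_benchmark(run_and_return, comp)
print(results)  # [0] -- unaffected by the four extra iterations
```

Without the helper's explicit return, an assertion against `comp.results` would see the state after all benchmark iterations, not after the first one.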
This is not monitoring performance, just making sure the benchmark variants pass without error. Signed-off-by: Jan Vesely <[email protected]>
This does not monitor performance, it just checks that the tests pass if the benchmark fixture runs multiple iterations of the benchmarked call.
• tests/composition/test_autodiffcomposition.py:
  - TestTrainingCorrectness::test_pytorch_equivalence_with_autodiff_composition: fixed with atol=1e-6
• tests/composition/test_composition.py:
  - TestRun::test_execute_no_inputs: corrected shape of expected result
  - TestRun::test_run_2_mechanisms_with_multiple_trials_of_input_values: corrected shape of expected result
  - TestRun::test_run_recurrent_transfer_mechanism: corrected shape of expected results
• tests/composition/test_interfaces.py:
  - TestConnectCompositionsViaCIMS::test_compositions_as_origin_nodes: corrected shape of expected result
• tests/functions/test_transfer.py:
  - test_transfer_derivative_out: FAILS FOR Python, func=SoftMax, and kw.PER_ITEM:True (see comment)
• tests/llvm/test_multiple_executions.py:
  - test_nested_composition_run_trials_inputs: FAILS FOR LLVM WITH NUM EXECUTIONS = 1: EXPECTS 3D BUT GETS 4D (see comment)
• tests/mechanisms/test_control_mechanism.py:
  - TestControlMechanism::test_control_of_all_output_ports: corrected expected shape of control_allocation
• tests/mechanisms/test_integrator_mechanism.py:
  - TestIntegratorFunctions::test_FitzHughNagumo_simple_scalar: FAILS FOR PYTHON: ELEMENTS OF val ARE 2D NOT 1D ARRAYS (see comment)
• tests/mechanisms/test_recurrent_transfer_mechanism.py:
  - TestCustomCombinationFunction::test_max_executions_before_finished: FAILS FOR PYTHON: results IS 2D BUT EXPECTED TO BE 3D (see comment)
• tests/mechanisms/test_transfer_mechanism.py:
  - TestTransferMechanismFunctions::test_transfer_mech_func: FAILS FOR PYTHON: val IS LIST WITH 1 ITEM OF LEN 4, BUT EXPECTS LIST WITH 4 1D LISTS
• tests/models/test_greedy_agent.py:
  - test_predator_prey: CAN FIX MANY BY results[0] -> results, but still fails some
• tests/scheduling/test_scheduler.py:
  - TestFeedback::test_scheduler_conditions: got rid of hack, now passes
Fixes for tests:
- test_xor_training_correctness
- test_run_5_mechanisms_2_origins_1_terminal
- test_execute_composition
- test_LPP_two_origins_one_terminal
- test_compositions_as_origin_nodes_multiple_trials

All of these seem straightforward; each needed its expected output shape changed from 1D to 2D. All of them check the return value of Composition.run: they were expecting 1D arrays, but the documentation states that the return will be 2D.
• tests/mechanisms/test_recurrent_transfer_mechanism.py:
  - TestCustomCombinationFunction::test_max_executions_before_finished: ADD DIMENSIONS TO result UNLESS ExecutionMode.LLVM
• tests/models/test_greedy_agent.py:
  - test_predator_prey: PARTIALLY FIXED, REQUIRES MODIFIED TOLERANCES (1e-6 and 1e-6) (see comments)
• test_control.py:
  - test_model_based_num_estimates: specify retain_old_simulation_data=True and revise results to match those generated
This looks like a case of the expected shape being wrong. Composition.run should return a 2D list, and each inner list should have length 2 because the processing mechanism has size=2. The expected value was instead specified as a list of two single-element lists.
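The mismatch can be illustrated with hypothetical values (not the actual test data):

```python
import numpy as np

# Composition.run returns a 2D list: one inner array per output, and a
# size-2 processing mechanism produces an inner array of length 2.
actual = [[0.5, 0.5]]            # shape (1, 2) -- what run() returns

# The test instead expected a list of two single-element lists:
wrong_expected = [[0.5], [0.5]]  # shape (2, 1)

assert np.array(actual).shape == (1, 2)
assert np.array(wrong_expected).shape == (2, 1)
```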
Fixed the following tests:
- test_nested_transfer_mechanism_composition
- test_nested_transfer_mechanism_composition_parallel
- test_connect_compositions_with_complicated_states

Each of these tests was expecting 3D output from Composition.run. Based on my reading of Composition.run's documentation, I think this is incorrect and the outputs should be 2D. All I did was modify the expected arrays to drop the singleton third dimension.
• The only test marked with "control" that doesn't pass is test_model_based_num_estimates.
• test_control.py:
  - test_multilevel_ocm_gridsearch_conflicting_directions
  - test_multilevel_ocm_gridsearch_maximize
  - test_multilevel_ocm_gridsearch_minimize
  - test_two_tier_ocm: correct expected results to include both output nodes
  - test_two_tier_ocm: call with atol=1e-8
  - test_multilevel_control: correct shape of result
Add Optuna support to PEC
Reduces the number of test variants by half. Signed-off-by: Jan Vesely <[email protected]>
Use the new helper to test all combinations of debug flags. Signed-off-by: Jan Vesely <[email protected]>
…ebug options (#2707) Add power_set helper and register it with pytest helpers Remove duplicate debug flag from the list of tested options for test_debug_comp. Use the new helper to generate all test variants of test_debug_comp
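A power-set helper of the kind described can be built from itertools. This is a sketch under that assumption, not necessarily the registered implementation, and the flag names in the usage line are illustrative:

```python
from itertools import chain, combinations

def power_set(elements):
    """Yield every subset of 'elements', including the empty set."""
    s = list(elements)
    return chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))

# Generating all test variants for two hypothetical debug flags:
print(list(power_set(["const_params", "const_data"])))
# [(), ('const_params',), ('const_data',), ('const_params', 'const_data')]
```

Parametrizing a test over this iterable covers every combination of flags, which is how duplicated hand-written variants can be replaced.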
Use "--benchmark-only" in addition to "-m benchmark". The former will fail if benchmarking is not available/possible. The latter works around a bug in pytest test collection: ionelmc/pytest-benchmark#243 Fixes: d6f8e35 ("tests: Use worksteal xdist balancer (#2670)") Signed-off-by: Jan Vesely <[email protected]>
We don't need integer operations. Signed-off-by: Jan Vesely <[email protected]>
Clarify skip message. Clarify comment. Add newlines between subtests. Signed-off-by: Jan Vesely <[email protected]>
…ions of costs Signed-off-by: Jan Vesely <[email protected]>
…tures Drop the context and component parameters from writeback: the first should match the context used by Execution, and the second is neither used nor implemented. Write back the most recent stateful value. Replace empty structures with None. Only try to match the original shape in data writeback if there is one. Signed-off-by: Jan Vesely <[email protected]>
Except for memory functions which use special 'ring_memory' parameter. Signed-off-by: Jan Vesely <[email protected]>
…t calculation No support for combination cost function yet, blocked by: #2712 CUDA testing needs synchronization of stateful params. Signed-off-by: Jan Vesely <[email protected]>
…da_execute Enable TransferWithCosts CUDA tests Signed-off-by: Jan Vesely <[email protected]>
Use Flag instead of IntFlag for cost flags. Add a test covering all cost combinations. Implement invocation of cost functions. Write back stateful parameters after execution of compiled functions and mechanisms, and write them back to CPU memory after GPU execution.
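The Flag-based cost flags might look like the sketch below (member names are illustrative, not PsyNeuLink's actual definitions). Unlike IntFlag, a plain Flag does not silently combine with arbitrary integers, which catches accidental misuse:

```python
from enum import Flag, auto

class CostFlags(Flag):
    NONE = 0
    INTENSITY = auto()
    ADJUSTMENT = auto()
    DURATION = auto()

# Any combination of cost flags can be built with bitwise or:
combo = CostFlags.INTENSITY | CostFlags.DURATION
assert CostFlags.INTENSITY in combo
assert CostFlags.ADJUSTMENT not in combo

# Enumerating every combination, as an exhaustive test might:
all_combos = [CostFlags(bits) for bits in range(8)]
assert len(all_combos) == 8
```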
…cting power set (#2714) It is automatically excluded in Python 3.11, but earlier versions need explicit exclusion. Signed-off-by: Jan Vesely <[email protected]>
win32 wheel is no longer provided Signed-off-by: Jan Vesely <[email protected]>
Compiled execution uses custom structure to represent random state. These need conversion to Python RandomState first. Fixes: 3cb0ce4 ("llvm/execution: Improve writeback handling of history and empty structures") Signed-off-by: Jan Vesely <[email protected]>
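The conversion can be sketched with numpy's legacy RandomState: given the raw Mersenne Twister key array and position, which is roughly what a compiled random-state structure holds (the actual layout in PsyNeuLink may differ), an equivalent Python RandomState can be rebuilt with set_state:

```python
import numpy as np

# Source generator standing in for the compiled execution's state.
src = np.random.RandomState(42)
_, keys, pos, _, _ = src.get_state()

# Rebuild an equivalent Python RandomState from the raw fields.
dst = np.random.RandomState()
dst.set_state(('MT19937', keys, pos))

# Both generators now produce identical draws.
assert src.randint(1000) == dst.randint(1000)
```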
They won't be used anyway. Fixes: 3cb0ce4 ("llvm/execution: Improve writeback handling of history and empty structures") Signed-off-by: Jan Vesely <[email protected]>
"random_state" needs conversion to Python RandomState or Generator structure. Do not create numpy arrays of empty structures.
- Replace the default_allocation Parameter with assignment as a constructor_argument for the control_allocation parameter.
- Refactor default_allocation to use defaults.control_allocation.
- Revert expected Nieuwenhuis model results to before #2636. These were affected by a change in `ControlMechanism._instantiate_control_signal_type`, which checked the length of `defaults.control_allocation` (prior to #2636) or `defaults.value` (#2636 and on); that caused the global `defaultControlAllocation` to be used for LCControlMechanism instead of `defaults.control_allocation`. `defaults.control_allocation` is the more correct determination.
• port.py and projection.py:
  - fix bug in which a specification using a deferred-init MappingProjection to specify an InputPort failed
• test_input_state_spec.py:
  - rename as test_input_port_spec.py
  - add test for the above
• port.py:
  - _parse_port_spec(): fix assignment of port if port_spec[PORT_SPEC_ARG] is a dict with a projection specified