Nengolib #1611
Conversation
force-pushed e5083bf to fd8394a
force-pushed c2ff7b2 to 9e8a73a
force-pushed 004f0b7 to c361792
Still reviewing, but I cherry-picked the automatic nengo-bones update from master to fix the build. I did this rather than rebasing so that I wouldn't be force-pushing. Just FYI.
I added a few more commits:

- Cherry-picked commit from `bones-autoupdate`
- Fixed docstring for `LinearFilter`
- Fixed docstring for `LinearSystem`
- Fixed docstring for `Process`
- Updated LMU notebook to use `LegendreDelay`
- Added `ScatteredHypersphere` to the cache whitelist
- Added `h(t)` equation for `DoubleExp`
- Couple of sanity tests for `Alpha` and `DoubleExp`
- Added `repr` tests for `ScatteredHypersphere`
- Removed unneeded copy of `get_activities` from `nengo<2.3.0` (that was in the nengolib code just to help with compatibility across older versions of nengo)
- Added a test that `_make_betaincinv22_table` creates the same file as `_betaincinv22_file`
I left a few other questions/comments that should hopefully be straightforward to resolve or fix, so I'll mark this PR as approved. A few things to surface:

- Is the goal to keep codecov at 100%? It dropped due to `linear_system.py`, although the codecov website isn't showing me which lines: https://codecov.io/gh/nengo/nengo/pull/1611/tree
- Could my two requests above that fall outside the scope of this PR (Nengolib #1611 (comment)) be tracked somewhere so they aren't forgotten?
- Similarly, my comment that https://www.nengo.ai/nengo-dl/examples/lmu.html can now be updated to use `LegendreDelay` (the same way that I did for the LMU example in this repo) could be tracked somewhere.
Thanks for this! A lot of really valuable and expressive additions I think! :)
nengo/synapses.py (outdated):

    ``Ensemble`` and ``solver.weights`` is set, in which case ``n_synapses`` equals
    the number of neurons in the post ``Ensemble``.

    .. versionadded:: 3.1.0
So these are both parameters from base classes, which is why I didn't document them here. I can understand the desire to document them here, particularly because they're passed explicitly rather than as part of `**kwargs`. This is especially helpful because we've got diamond inheritance here, so it's extra confusing for a user where to look if they want documentation for these arguments. The downside, though, is that we then have duplication of docstrings. I'm wondering if it's possible to make some sort of note (ideally both here and in the original docstrings) so that if one of them changes, we know to update the other location. I'm not sure how best to do that, though.
Could something like https://sublime-and-sphinx-guide.readthedocs.io/en/latest/reuse.html be used to define and reuse the same description in multiple places? There could also be some sort of 'inherit params' magic annotation. Seems like a fairly general problem that we wouldn't be the first to encounter.
nengo/tests/test_processes.py (outdated):

    t, y2 = self.run_sim(
        Simulator, sys.discrete_ss(dt), analog=False, dt=dt, plt=plt, f_in=f_in
    )
    assert np.allclose(y1, y0)
Are these and other `np.allclose` calls intentionally not using the `allclose` fixture?
No, just an oversight I think. I'd definitely change the ones in this test to `allclose`, because it uses `Simulator`. The others don't matter so much because they don't use `Simulator`, so they aren't for backends. But there's no harm in using `allclose`, I don't think.
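For context, the fixture pattern being discussed might look like the following sketch (this is an assumed shape, not nengo's actual conftest): a factory builds the closeness check, and a backend's test suite can override the fixture to loosen tolerances without editing individual tests.

```python
# Sketch (assumed, not nengo's actual conftest) of a backend-overridable
# `allclose` fixture: tests take the fixture instead of calling np.allclose
# directly, so a backend can swap in a looser check.
import numpy as np
import pytest

def make_allclose(rtol=1e-5, atol=1e-8):
    # Factory so a backend conftest can construct a looser check.
    def _allclose(a, b):
        return np.allclose(a, b, rtol=rtol, atol=atol)
    return _allclose

@pytest.fixture
def allclose():
    return make_allclose()

def test_outputs_match(allclose):
    x = np.linspace(0, 1, 10)
    assert allclose(x, x + 1e-9)
```

A backend would then redefine the `allclose` fixture in its own conftest, e.g. returning `make_allclose(rtol=1e-3)`, and every `Simulator` test picks up the looser tolerance automatically.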
force-pushed 3bb2b39 to e1ed1ac
As we discussed offline, I split this up into the more straightforward improvements (this PR) and the LinearSystem stuff (#1650). I still have a few things left to look into in this PR, but had a few questions that I wanted to talk about at tomorrow's scrum, so I pushed what I have so far.
nengo/dists.py (outdated):

    elif self.method == "tfww":
        mapped = self.spherical_transform_tfww(samples)
    else:
        raise NotImplementedError(self.method)
Pretty sure this is impossible to test without some crazy hacks, so I'm going to make it an assert.
Don't we not care about coverage for `NotImplementedError`s?
Ah yeah, that's a good point. I guess it feels weird though because it's not like we have any intention of implementing more methods later?
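For illustration, the difference in spirit between the two options can be sketched as follows (the method names and transform bodies are stand-ins, not the real code): with a fixed, validated set of methods, the final branch is unreachable by design, so an `assert` documents that invariant rather than promising a future implementation.

```python
# Illustrative sketch only; the transform bodies are stand-ins. Since the
# set of methods is closed (no intention of adding more), the unknown-method
# case is an internal invariant violation, which an assert expresses better
# than NotImplementedError.
VALID_METHODS = ("sct", "tfww")

def spherical_map(samples, method):
    assert method in VALID_METHODS, f"unexpected method: {method}"
    if method == "sct":
        return [s * 0.5 for s in samples]  # stand-in for the SCT transform
    return [s * 2.0 for s in samples]      # stand-in for the TFWW transform
```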
force-pushed edc9553 to 2cfcc6d
force-pushed 9846ab6 to 5832b24
Alright, I encountered a few minor things, but I think I've got it all worked out now, and this all looks good to me. I made some changes, though, that would be good for @hunse and/or @arvoelke to sign off on.

The big thing is changing the behavior of `ScatteredHypersphere` when `d == 1`. I believe the references that are linked in various docstrings don't give any guidance as to what we should do for `d == 1`, so I did what felt right after chatting about it with Eric (but if I'm mistaken and those references do have things to say, please point me to that).
So here's what happens:

- For `d == 1 and surface is True`, we alternate between 1 and -1. The only randomness is whether we start with 1 or -1; so, if you're sampling 4 numbers it'll be `[-1, 1, -1, 1]` or `[1, -1, 1, -1]`. I went with this for simplicity and because Eric didn't want it to clump such that you have long runs of 1s or -1s.

  One downside of this is that two ensembles with the same `n_neurons` and `dimensions` will have the same encoders half of the time. We have a test that ensures that changing seeds changes all attributes of an ensemble, and this fails for 50% of seeds because encoders will be the same when changing the seed. This reflects there being little diversity in encoders for 1D populations by default. I'm not sure if this is an issue or not, and if it is an issue, how it should be resolved.

- For `d == 1 and surface is False`, we draw `n` samples from `self.base` and then map from the `[0, 1)` range to `(-1, -min_mag] U [min_mag, 1)` by scaling to the `[min_mag, 1)` range and then changing the sign of alternating samples in the same manner as we change the sign of encoders. When you inspect the generated samples, you get what you would expect in that it optimally tiles the `(-1, 1)` space with the `(-min_mag, min_mag)` cutout with a random offset.
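The two cases above can be sketched as follows. This is a hypothetical illustration of the described behavior, not nengo's actual implementation; `rng.uniform` stands in for draws from `self.base` on `[0, 1)`.

```python
# Hypothetical sketch of the d == 1 sampling described above; the real
# implementation may differ in details. rng.uniform stands in for
# self.base.sample(n), which yields values in [0, 1).
import numpy as np

def sample_1d(n, min_mag=0.0, surface=False, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Alternating signs; the only randomness is the starting sign.
    signs = rng.choice([-1.0, 1.0]) * (-1.0) ** np.arange(n)
    if surface:
        # e.g. [1, -1, 1, -1] or [-1, 1, -1, 1]
        return signs
    # Scale [0, 1) draws into [min_mag, 1), then alternate signs so the
    # samples tile (-1, -min_mag] U [min_mag, 1).
    base = rng.uniform(size=n)  # stand-in for self.base.sample(n)
    return signs * (min_mag + (1 - min_mag) * base)
```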
Does this seem like an appropriate thing to do for `d == 1`? Keep in mind that right now we're only doing this for `eval_points` and `encoders`. We could, in the future, do something similar for `Uniform` and use this for more things (like `intercepts`, `max_rates`, etc.), but that's a bigger change, so it's out of the scope of this PR.
The only other potentially controversial change that I made was to add `import matplotlib.pyplot as plt` to our `doctest_setup`. I'm not sure if this has a significant performance impact, but I don't think it should. The bigger change would be to enable `numpydoc_use_plots`, but that is also outside of the scope of this PR, so I'll track that separately.
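If the docs build uses `sphinx.ext.doctest`, the change might look like this `conf.py` fragment. This is a sketch under that assumption; nengo's actual build may use a different mechanism or setting name for its doctest setup.

```python
# Hypothetical conf.py fragment; nengo's actual docs configuration may
# differ. doctest_global_setup runs before every doctest block, so examples
# can assume np and plt already exist.
extensions = ["sphinx.ext.doctest"]

doctest_global_setup = """
import numpy as np
import matplotlib.pyplot as plt
"""
```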
Oh, and I also removed the force-learning example from this branch because it depended on `Bandpass`. It's still in #1650, but for now I instead added `nengo.RLS` to the `learn-product` example.
Currently, we use ScatteredHypersphere for encoders and eval_points.

Co-authored-by: Aaron Voelker <[email protected]>
Co-authored-by: Trevor Bekolay <[email protected]>
Co-authored-by: Eric Hunsberger <[email protected]>
After some discussion with @hunse, I made the following changes:

The lack of encoder diversity was the major issue from the previous implementation IMO, so I think with those things this is good to go. There are some lines in

The uncovered lines were because vendorizing the filter design stuff (3bb2b39) made it such that it was impossible to make a
Motivation and context:

Migrate some key features from https://github.com/arvoelke/nengolib:

- `LegendreDelay`
- `ScatteredHypersphere` distribution for more evenly distributed spacing of points on the hypersphere, and use this to generate encoders and eval points.

Interactions with other PRs:

Based on #1609.

How has this been tested?

- `ScatteredHypersphere`: the large new test produces plots that give a good idea of how the point distribution compares with `UniformHypersphere`.

How long should this take to review?

Where should a reviewer start?

- `LinearSystem` is best understood in the context of `LinearFilter`, and how it's used there.
- For `ScatteredHypersphere`, the math is a bit complicated. @arvoelke or I can send you the referenced book pages if you want to look at them. Probably best to start by looking at the test plots to understand the result, then looking more at the code.

Types of changes:

- `make_neuron_state`; they just have to make sure the state names are consistent.

Checklist: