
Nengolib #1611

Merged — 3 commits merged into master from nengolib on Nov 3, 2020

Conversation

@hunse (Collaborator) commented May 7, 2020

Motivation and context:

Migrate some key features from https://github.com/arvoelke/nengolib.

Interactions with other PRs:

Based on #1609.

How has this been tested?

  • Added many unit tests
  • For ScatteredHypersphere, the large new test produces plots that give a good idea how the point distribution compares with UniformHypersphere

How long should this take to review?

  • Lengthy (more than 150 lines changed or changes are complicated)

Where should a reviewer start?

  • The new features can be reviewed relatively independently.
  • The new LinearSystem is best understood in the context of LinearFilter, and how it's used there.
  • Setting the ensemble voltages has been done in two commits, but should be understood all together (I left the second part separate because the breaking change to identifying states by name may be controversial).
  • For ScatteredHypersphere, the math is a bit complicated. @arvoelke or I can send you the referenced book pages if you want to look at them. Probably best to start by looking at the test plots to understand the result, then looking more at the code.

Types of changes:

  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that causes existing functionality to change)
    • We now store neurons' state as (key, value) pairs. This allows the user to not worry about order if they're using make_neuron_state; they just have to make sure the state names are consistent.
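To illustrate the breaking change, here is a minimal sketch of the keyed-state idea; the `make_neuron_state` signature shown is hypothetical, not Nengo's actual API:

```python
import numpy as np

def make_neuron_state(n_neurons, **initial):
    """Hypothetical helper in the spirit of make_neuron_state: build named
    per-neuron state arrays from (key, value) pairs instead of a positional
    list, so callers only need consistent names, not a fixed order."""
    return {
        name: np.full(n_neurons, value, dtype=float)
        for name, value in initial.items()
    }

# Keyword order no longer matters; both produce equivalent state.
state_a = make_neuron_state(3, voltage=-65.0, adaptation=0.0)
state_b = make_neuron_state(3, adaptation=0.0, voltage=-65.0)
```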

Checklist:

  • I have read the CONTRIBUTING.rst document.
  • I have updated the documentation accordingly.
  • I have included a changelog entry.
  • I have added tests to cover my changes.
  • I have run the test suite locally and all tests passed.

@hunse force-pushed the nengolib branch 5 times, most recently from e5083bf to fd8394a on May 14, 2020.
@hunse force-pushed the nengolib branch 2 times, most recently from c2ff7b2 to 9e8a73a on July 22, 2020.
@hunse requested a review from arvoelke on July 22, 2020.
Resolved review threads (outdated): nengo/connection.py (×2), nengo/builder/processes.py, nengo/utils/numpy.py, nengo/synapses.py, nengo/linear_system.py (×4)
@hunse force-pushed the nengolib branch 3 times, most recently from 004f0b7 to c361792 on August 17, 2020.
Resolved review thread (outdated): CHANGES.rst
@arvoelke (Contributor) commented:

Still reviewing, but I cherry-picked the automatic nengo-bones update from master to fix the build. I did this rather than rebasing so that I wouldn't be force-pushing. Just FYI.

@arvoelke (Contributor) left a review:

I added a few more commits:

  • Cherry-picked commit from bones-autoupdate
  • Fixed docstring for LinearFilter
  • Fixed docstring for LinearSystem
  • Fixed docstring for Process
  • Updated LMU notebook to use LegendreDelay
  • Added ScatteredHypersphere to the cache whitelist
  • Added h(t) equation for DoubleExp
  • Couple sanity tests for Alpha and DoubleExp
  • Added repr tests for ScatteredHypersphere
  • Removed unneeded copy of get_activities from nengo<2.3.0 (that was in the nengolib code just to help with compatibility across older versions of nengo)
  • Added a test that _make_betaincinv22_table creates the same file as _betaincinv22_file

I left a few other questions/comments that should hopefully be straightforward to resolve or fix, so I'll mark this PR as approved. A few things to surface:

Thanks for this! A lot of really valuable and expressive additions I think! :)

Quoted docstring context:

    ``Ensemble`` and ``solver.weights`` is set, in which case ``n_synapses`` equals
    the number of neurons in the post ``Ensemble``.

    .. versionadded:: 3.1.0
@hunse (Collaborator, author) replied:

These are both parameters from base classes, which is why I didn't document them here. I can understand the desire to document them here, particularly because they're passed explicitly rather than as part of **kwargs. That's especially helpful because we have diamond inheritance here, so it's extra confusing for a user to know where to look for documentation of these arguments. The downside, though, is that we then duplicate docstrings. I'm wondering if it's possible to add some sort of note (ideally both here and in the original docstrings) so that if one of them changes, we know to update the other location. I'm not sure how best to do that, though.

@arvoelke (Contributor) commented Aug 31, 2020:

Could something like https://sublime-and-sphinx-guide.readthedocs.io/en/latest/reuse.html be used to define and reuse the same description in multiple places? There could also be some sort of 'inherit params' magic annotation. Seems like a fairly general problem that we wouldn't be the first to encounter.
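One lightweight way to approximate the "reuse the same description" idea at the Python level (a hypothetical sketch, not Nengo's or Sphinx's actual mechanism) is to keep the shared parameter documentation in one string and append it to each docstring, so the copies cannot drift apart:

```python
# Hypothetical shared parameter description, maintained in exactly one place.
_SHARED_PARAM_DOCS = """\
default_dt : float
    Illustrative shared parameter description."""

def append_shared_params(cls):
    """Class decorator: append the shared parameter docs to the docstring."""
    cls.__doc__ = (
        (cls.__doc__ or "") + "\n\nParameters\n----------\n" + _SHARED_PARAM_DOCS
    )
    return cls

@append_shared_params
class Process:
    """Base class for processes (illustrative stub)."""

@append_shared_params
class LinearFilter:
    """A linear filter (illustrative stub)."""
```

A Sphinx-level solution (rst substitutions or an includes file, as in the linked guide) would keep this out of the source entirely; the sketch above just shows that the duplication can be centralized either way.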

Quoted test code context:

    t, y2 = self.run_sim(
        Simulator, sys.discrete_ss(dt), analog=False, dt=dt, plt=plt, f_in=f_in
    )
    assert np.allclose(y1, y0)
Review comment (Member):

Are these and other np.allclose calls intentionally not using the allclose fixture?

@hunse (Collaborator, author) replied:

No, just an oversight I think. I'd definitely change the ones in this test to allclose, because it uses Simulator. The others don't matter so much because they don't use Simulator, so they aren't run for backends. But there's no harm in using allclose there too, I don't think.

Resolved review thread (outdated): nengo/dists.py
@tbekolay force-pushed the nengolib branch 2 times, most recently from 3bb2b39 to e1ed1ac on October 23, 2020.
@tbekolay (Member) left a review:

As we discussed offline, I split this up into the more straightforward improvements (this PR) and the LinearSystem stuff (#1650). I still have a few things left to look into in this PR, but had a few questions that I wanted to talk about at tomorrow's scrum, so I pushed what I have so far.

Resolved review threads (outdated): nengo/dists.py (×3)
Review thread on nengo/dists.py (outdated), quoted context:

    elif self.method == "tfww":
        mapped = self.spherical_transform_tfww(samples)
    else:
        raise NotImplementedError(self.method)
Comment (Member):
Pretty sure this is impossible to test without some crazy hacks, so I'm going to make it an assert.

@hunse (Collaborator, author) replied:

Don't we not care about coverage for NotImplementedErrors?

Reply (Member):

Ah yeah, that's a good point. I guess it feels weird though because it's not like we have any intention of implementing more methods later?

Resolved review threads (outdated): nengo/dists.py (×2)
@tbekolay force-pushed the nengolib branch 5 times, most recently from edc9553 to 2cfcc6d on November 3, 2020.
@tbekolay force-pushed the nengolib branch 2 times, most recently from 9846ab6 to 5832b24 on November 3, 2020.
@tbekolay (Member) left a review:

Alright, I encountered a few minor things, but I think I've got it all worked out now, and this all looks good to me. I made some changes, though, that it would be good for @hunse and/or @arvoelke to sign off on.

The big thing is changing the behavior of ScatteredHypersphere when d == 1. I believe the references linked in various docstrings don't give any guidance as to what we should do for d == 1, so I did what felt right after chatting about it with Eric (but if I'm mistaken and those references do have something to say, please point me to it).

So here's what happens:

  • For d == 1 and surface is True, we alternate between 1 and -1. The only randomness is whether we start with 1 or -1; so, if you're sampling 4 numbers it'll be [-1, 1, -1, 1] or [1, -1, 1, -1]. I went with this for simplicity and because Eric didn't want it to clump such that you have long runs of 1s or -1s.

    One downside of this is that two ensembles with the same n_neurons and dimensions will have the same encoders half of the time. We have a test that ensures that changing seeds changes all attributes of an ensemble, and this fails for 50% of seeds because the encoders come out the same when the seed changes. This reflects there being little diversity in encoders for 1D populations by default. I'm not sure whether this is an issue, and if it is, how it should be resolved.

  • For d == 1 and surface is False, we draw n samples from self.base and then map from the [0, 1) range to (-1, min_mag] U [min_mag, 1) by scaling to the [min_mag, 1) range and then changing the sign of alternating samples in the same manner as we change the sign of encoders. When you inspect the generated samples, you get what you would expect in that it optimally tiles the (-1, 1) space with the (-min_mag, min_mag) cutout with a random offset.

Does this seem like an appropriate thing to do for d == 1? Keep in mind that right now we're only doing this for eval_points and encoders. We could, in the future, do something similar for Uniform and use this for more things (like intercepts, max_rates, etc) but that's a bigger change so it's out of the scope of this PR.
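The two d == 1 cases above could be sketched roughly like this; the function name is hypothetical and the low-discrepancy sequence is a stand-in for `self.base`, so this is an illustration of the described behavior, not the actual implementation:

```python
import numpy as np

def sample_1d(n, surface, min_mag=0.0, rng=np.random):
    """Rough sketch of the d == 1 behavior described above (not Nengo's code)."""
    signs = np.ones(n)
    signs[rng.randint(2)::2] = -1.0  # alternate signs; only the start is random
    if surface:
        # e.g. [-1, 1, -1, 1] or [1, -1, 1, -1]
        return signs
    # Stand-in for self.base: a randomly offset low-discrepancy sequence on [0, 1)
    base = (rng.rand() + np.arange(n) / n) % 1.0
    mags = min_mag + (1.0 - min_mag) * base  # scale [0, 1) -> [min_mag, 1)
    return signs * mags
```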


The only other potentially controversial change that I made was to add import matplotlib.pyplot as plt to our doctest_setup. I'm not sure if this has a significant performance impact, but I don't think it should. The bigger change would be to enable numpydoc_use_plots, but that is also outside of the scope of this PR so I'll track that separately.

Oh, and I also removed the force-learning example from this branch because it depended on Bandpass. It's still in #1650, but for now I instead added nengo.RLS to the learn-product example.

hunse and others added 2 commits November 3, 2020 14:09
Currently, we use ScatteredHypersphere for encoders and eval_points.

Co-authored-by: Aaron Voelker <[email protected]>
Co-authored-by: Trevor Bekolay <[email protected]>
Co-authored-by: Eric Hunsberger <[email protected]>
@tbekolay (Member) commented Nov 3, 2020:

After some discussion with @hunse, I made the following changes:

  1. For the d == 1 and surface is False case, we sample and scale twice, and on the second sampling we make all those values negative.
  2. For both d == 1 and surface is False and all QuasirandomSequence cases, we shuffle the samples before returning them to break any unwanted correlations. This means that in the d == 1 and surface is True case I could simplify to just using np.sign as the samples we get from self.base are in shuffled order, which also means that we have far more diversity in encoders.
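A rough sketch of the revised scheme (the function name is hypothetical and the sequence below is a stand-in for the QuasirandomSequence base; not the actual implementation):

```python
import numpy as np

def sample_1d_v2(n, surface, min_mag=0.0, rng=np.random):
    """Rough sketch of the revised d == 1 scheme (not Nengo's actual code)."""
    def draw(k):
        # Stand-in for the QuasirandomSequence base on [0, 1)
        return (rng.rand() + np.arange(k) / max(k, 1)) % 1.0

    half = n // 2
    # Sample and scale twice; the second batch is made negative.
    pos = min_mag + (1.0 - min_mag) * draw(n - half)
    neg = -(min_mag + (1.0 - min_mag) * draw(half))
    samples = np.concatenate([pos, neg])
    rng.shuffle(samples)  # break unwanted correlations from the construction
    # For surface sampling, the shuffled signs alone are the encoders.
    return np.sign(samples) if surface else samples
```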

The lack of encoder diversity was the major issue from the previous implementation IMO, so I think with those things this is good to go. There are some lines in synapses.py that are now uncovered for some reason, so I'm going to look into that and make sure they're covered before merging. Hopefully that won't take too long though so if anyone else wants to give some thoughts before merging, better act fast!

@tbekolay (Member) commented Nov 3, 2020:

The uncovered lines were because vendorizing the filter design stuff (3bb2b39) made it impossible to make a LinearFilter.NoX step function; I'm not sure whether that's because of something that changed in the newer functions, or whether we did something custom there that we missed (or intentionally changed) when vendorizing. Whatever the case may be, I just cut that commit out for now and we'll deal with it in #1650. Merging once CI finishes!

@tbekolay merged commit ae189ba into master on Nov 3, 2020.
@tbekolay deleted the nengolib branch on November 3, 2020.