deprecate Learner2D #56
Comments
originally posted by Bas Nijholt (@basnijholt) at 2018-07-09T16:40:11.612Z on GitLab

If good enough == better than.

originally posted by Jorn Hoofwijk (@Jorn) at 2018-07-13T07:59:33.461Z on GitLab

I think good enough == better than the Learner2D.

originally posted by Jorn Hoofwijk (@Jorn) at 2018-07-13T08:22:29.208Z on GitLab

I take it you didn't want to refer to issue 1? In the future I would like to implement a way to maintain locality and still be able to take the second derivative into account (like taking the loss over a simplex and its neighbours, or something like that). But to me this sounds like a non-trivial task.
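A rough sketch of what "loss over a simplex and its neighbours" could mean (entirely hypothetical: the function name and arguments are invented here, and wiring it into a learner would require the learner to supply neighbour information to the loss):

```python
import numpy as np


def neighbour_aware_loss(simplex_values, neighbour_values):
    """Hypothetical curvature proxy from a simplex plus its neighbours.

    A second-derivative estimate needs function values from outside the
    simplex itself, which is why neighbour values enter the loss.
    """
    center = np.mean(simplex_values)
    # Large when the function bends sharply across the simplex boundary.
    return float(np.max(np.abs(np.asarray(neighbour_values) - center)))
```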
Hi,

Recently, I have been developing a custom loss function that takes all points into account, which I was planning to generalize to the LearnerND. However, I recently realized that the LearnerND only considers a single simplex, or a simplex and its neighbors, which really bummed me out. That's how I got to this issue.

Why do you plan to remove the ability to calculate the loss with the context of all of the simplices? I feel that 1) it removes the ability to write a vectorized loss function, and 2) there are plenty of loss functions that may want to consider global properties, like the maximum and minimum value. For example, I wrote a Boltzmann loss for the 2D case; see below:

```python
import numpy as np

from adaptive.learner.learner2D import areas, default_loss


def boltzmann_loss(ip, kt=0.59):
    """Loss function that combines `default_loss` and Boltzmann probabilities.

    Applies higher loss to lower values.
    Works with `~adaptive.Learner2D` only.

    Parameters
    ----------
    ip : `scipy.interpolate.LinearNDInterpolator` instance
    kt : float
        Thermal energy scale; smaller values focus sampling near the minimum.

    Returns
    -------
    losses : numpy.ndarray
        Loss per triangle in ``ip.tri``.
    """
    # Lowest function value among each triangle's vertices.
    vs = np.squeeze(np.min(ip.values[ip.tri.vertices], axis=1))
    # Boltzmann weights: largest where the function is lowest.
    zs = areas(ip) * np.exp((vs.min() - vs) / kt)
    bps = zs / np.sum(zs)
    bps /= np.median(bps) or 1.0
    return bps * default_loss(ip)
```

It would be great if we could generalize this to N dimensions. It is amazing for molecular conformer/ensemble sampling, which is what I am developing it for.

Adam
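For completeness, a loss like this plugs into Learner2D via its loss_per_triangle argument; a minimal usage sketch (the function f and its bounds are invented for illustration):

```python
import adaptive


def f(xy):
    # Toy function with a single minimum; purely illustrative.
    x, y = xy
    return (x - 0.3) ** 2 + (y + 0.2) ** 2


learner = adaptive.Learner2D(
    f, bounds=[(-1, 1), (-1, 1)], loss_per_triangle=boltzmann_loss
)
adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 500)
```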
Hi Adam,

Personally I don't think we should deprecate the Learner2D; I thought that we could have 2 learners coexisting. Actually, there isn't much in the Learner2D that is really tied to 2D, just the loss function AFAIR. Of course, there might be some things like the bounds, but those are trivial things to change. If you have the energy to generalize the Learner2D to ND, I will be more than willing to accept the PR 😄

We should probably think about renaming them though, to distinguish the global and local losses.
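For context (a summary added here, with the caveat that the exact signatures vary between adaptive versions): the two learners expose custom losses differently, which is what this thread turns on. Learner2D hands the loss the whole interpolator, so global properties are visible; LearnerND hands it a single simplex:

```python
# Learner2D: the loss sees the full interpolator (global view), so it
# can use quantities like ip.values.min() across all triangles.
def my_loss_per_triangle(ip):  # ip: scipy.interpolate.LinearNDInterpolator
    ...


# LearnerND: the loss sees one simplex at a time (local view); this
# (simplex, values, value_scale) signature follows recent adaptive
# versions and may differ in others.
def my_loss_per_simplex(simplex, values, value_scale):
    ...
```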
Hey @atom-moyer, your Boltzmann loss function looks really interesting and useful for minimization. Can you give the unnormalized Boltzmann loss a shot? If it works, it would be a really useful addition to the collection of loss functions that we have. One may also consider dynamically adjusting kt.
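One way to read "dynamically adjusting kt" (a hypothetical sketch, not something from the thread: the helper name and the kt_fraction parameter are invented) is to tie kt to the currently observed spread of values, so the Boltzmann weighting keeps a comparable effect as data accumulates:

```python
import numpy as np

from adaptive.learner.learner2D import areas, default_loss


def dynamic_kt_boltzmann_loss(ip, kt_fraction=0.1):
    """Hypothetical variant of `boltzmann_loss` where `kt` is a fraction
    of the observed value range instead of a fixed constant."""
    vs = np.squeeze(np.min(ip.values[ip.tri.vertices], axis=1))
    kt = kt_fraction * np.ptp(vs) or 1.0  # guard against kt == 0 early on
    zs = areas(ip) * np.exp((vs.min() - vs) / kt)
    bps = zs / np.sum(zs)
    bps /= np.median(bps) or 1.0
    return bps * default_loss(ip)
```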
The reason why we consider making the loss depend only on local properties is the scaling of performance. If one only needs to update the loss of the few simplices affected by adding a point, the cost of a loss update stays roughly constant as the data grows; if instead every loss depends on global properties, each new point forces recomputing all losses, which scales with the total number of points.

Right now the overhead of local operations in LearnerND is already significant. At the same time, global losses would not have to be recomputed on every insertion: they could be updated in an amortized fashion, with a full recomputation only once enough points have accumulated. Such amortized updates would allow one to e.g. define a loss that depends on global quantities, such as the minimum value observed so far.

We currently don't have a general interface for global amortized data updates, and AFAIR this was one of the open questions that arose in discussions of #220.
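To make the idea of amortized updates concrete, here is a hypothetical sketch (this interface does not exist in adaptive; all names are invented): the learner would notify the loss of simplex changes, and a full O(N) recomputation would run only once every refresh_every points, keeping the amortized per-point cost bounded:

```python
class AmortizedGlobalLoss:
    """Hypothetical: maintain per-simplex losses with cheap local updates,
    plus a periodic full O(N) refresh for the global component."""

    def __init__(self, compute_all_losses, refresh_every=64):
        self.compute_all_losses = compute_all_losses  # full O(N) pass
        self.refresh_every = refresh_every
        self.losses = {}  # simplex -> loss
        self.n_added = 0

    def point_added(self, new_simplices, removed_simplices, data):
        for s in removed_simplices:
            self.losses.pop(s, None)
        self.n_added += 1
        if self.n_added % self.refresh_every == 0:
            # Amortized: one O(N) refresh per `refresh_every` insertions.
            self.losses = self.compute_all_losses(data)
        else:
            # O(1) placeholder for new simplices until the next refresh.
            placeholder = max(self.losses.values(), default=1.0)
            for s in new_simplices:
                self.losses[s] = placeholder
```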
(original issue on GitLab)
opened by Joseph Weston (@jbweston) at 2018-07-09T14:51:40.135Z
Once LearnerND becomes good enough we should remove Learner2D, as it will no longer be needed.