New library: Reverse-mode automatic differentiation #1302
Conversation
Working differentiation
Added linear regression example
Integrate b2 build
Remove documentation for z-test function which doesn't exist
Looks like the MSVC errors are gone. Not sure why CI fails, but looking inside, it doesn't seem to be related to any of my code.
The failure(s) in CI seem harmless, as you mentioned. The job that failed CI is on Drone. In fact, very many jobs run on various older runner images on Drone: most of the tests for older compilers such as GCC 7, 8, or old Clang, and for non-x86_64 architectures (insofar as these are available), run on Drone. But sometimes Drone has a hard time acquiring a VM or setting it up properly. Your failed jobs ran for about 3 minutes - too short for a normal run, as these mostly take 7 - 30 minutes. It looks like all the Drone failures were failures to properly set up Linux. It gets annoying, but Drone sometimes has to be judged manually to see whether failures were real or bogus. This gets doubly annoying when there is actually a mixture of real and bogus errors, which is what happens when we write new code. Sigh, that's just how it goes. On the upside, GHA is super-reliable nowadays.
Hi @demroz, I will be taking a dedicated look at this, hopefully in the next few days. I intend to extend one of your examples and really get my hands on this code and learn more about it. I'll also be looking into top-level quality aspects such as retaining code coverage and compiler warnings (if there are any). I might have questions along the way. Please give me a few days on this. We could also benefit from some feedback later from John and Matt. Let me get into this thing for a few days.
A lot of my comments could be repeated in many places (e.g. constants, using statements, etc.), but I didn't repeat them so as not to completely overload this review.
Just as a quick suggestion: test_functions_for_optimization.hpp might be a very nice place to do unit tests for this, as most of those functions are well suited for reverse-mode AD, e.g., mappings from ℝⁿ → ℝ. As an aside: the gradient-based optimizers are a great idea, and would make a nice addition to the black-box optimizers we currently have.
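A minimal sketch of what such a gradient unit test could look like, checking the reverse-mode result of an ℝⁿ → ℝ function against its analytic gradient. The member names `value()`, `adjoint()`, and `backward()` are assumptions; the PR description only confirms the `rvar` type and a backward pass over the graph:

```cpp
// Hypothetical unit test: check the reverse-mode gradient of
// f(x) = sum_i x_i^2 against the analytic gradient 2 * x_i.
// rvar/value()/adjoint()/backward() are assumed names, not the
// library's confirmed interface.
#include <array>
#include <cassert>
#include <cmath>

template <typename RVar>
void test_sum_of_squares_gradient()
{
    std::array<RVar, 3> x = {RVar(1.0), RVar(-2.0), RVar(0.5)};

    // Build the computational graph for f(x) = x0^2 + x1^2 + x2^2.
    RVar f = x[0] * x[0] + x[1] * x[1] + x[2] * x[2];

    // One reverse sweep populates the adjoints of all inputs.
    f.backward();

    for (auto const& xi : x)
    {
        // Analytic gradient of the sum of squares: df/dx_i = 2 * x_i.
        assert(std::abs(xi.adjoint() - 2.0 * xi.value()) < 1e-12);
    }
}
```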
This pull request introduces a new library for reverse-mode automatic differentiation. It's a tape-based reverse-mode autodiff: the idea is to build a computational graph and then call backward on the entire graph to compute all the gradients at once.
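A minimal sketch of that workflow, with `value()`, `adjoint()`, and `backward()` as assumed member names (only `rvar` and the backward pass are confirmed by the description):

```cpp
// Sketch of the build-graph-then-backward workflow described above.
// rvar, adjoint(), and backward() are assumed API names.
template <typename RVar>
void reverse_mode_sketch()
{
    RVar x0(2.0), x1(3.0);

    // Forward pass: records every operation on the tape and builds
    // the computational graph for y = x0 * x1 + x0.
    RVar y = x0 * x1 + x0;

    // Single reverse sweep: propagates dy/dy = 1 back through the
    // graph, filling in dy/dx0 and dy/dx1 in one pass.
    y.backward();

    // Expected: dy/dx0 = x1 + 1 = 4, dy/dx1 = x0 = 2.
    double g0 = x0.adjoint();
    double g1 = x1.adjoint();
    (void)g0; (void)g1;
}
```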
Currently it supports all the basic operations (+, -, *, /), everything listed in the conceptual requirements for real number types, and Boost calls to erf, erfc, erf_inv, and erfc_inv. Everything is tested up to the 4th derivative. The list of tests:
test_reverse_mode_autodiff_basic_math_ops.cpp
test_reverse_mode_autodiff_comparison_operators.cpp
test_reverse_mode_autodiff_constructors.cpp
test_reverse_mode_autodiff_error_functions.cpp
test_reverse_mode_autodiff_flat_linear_allocator.cpp
test_reverse_mode_autodiff_stl_support.cpp
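Since the error functions are wired in, a first-derivative check makes a compact illustration: d/dx erf(x) = (2/√π)·e^(−x²). The `adjoint()` and `backward()` names below are again assumptions:

```cpp
// Sketch: check d/dx erf(x) = (2/sqrt(pi)) * exp(-x^2) via reverse
// mode. The PR states that boost::math::erf accepts the rvar type;
// adjoint()/backward() are assumed names.
#include <boost/math/constants/constants.hpp>
#include <boost/math/special_functions/erf.hpp>
#include <cmath>

template <typename RVar>
double erf_derivative_error(double x0)
{
    RVar x(x0);

    RVar y = boost::math::erf(x);  // records erf on the tape
    y.backward();

    double expected = 2.0 / std::sqrt(boost::math::constants::pi<double>())
                      * std::exp(-x0 * x0);
    return x.adjoint() - expected; // should be ~0
}
```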
There are also two examples in the example directory:
reverse_mode_linear_regression_example.cpp -> simple linear regression that demonstrates how this library can be used for optimization
autodiff_reverse_black_scholes.cpp -> a rewrite of the forward-mode equivalent
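A rough sketch of the optimization pattern such an example would follow: rebuild the graph each iteration, run one backward sweep, and step the parameters along the negative gradient. All API names are assumptions, not the example's actual code:

```cpp
// Hypothetical gradient-descent loop for fitting y = w*x + b by
// least squares with reverse-mode AD. rvar/adjoint()/backward() and
// the mixed rvar/double arithmetic are assumptions.
#include <cstddef>
#include <vector>

template <typename RVar>
void fit_line(std::vector<double> const& xs,
              std::vector<double> const& ys,
              double learning_rate, std::size_t iterations)
{
    double w = 0.0, b = 0.0;

    for (std::size_t it = 0; it < iterations; ++it)
    {
        RVar rw(w), rb(b);

        // Forward pass: build the graph for the squared-error loss.
        RVar loss(0.0);
        for (std::size_t i = 0; i < xs.size(); ++i)
        {
            RVar r = rw * xs[i] + rb - ys[i];
            loss = loss + r * r;
        }

        // One reverse sweep yields d(loss)/dw and d(loss)/db at once.
        loss.backward();

        // Plain gradient-descent update on the underlying doubles.
        w -= learning_rate * rw.adjoint();
        b -= learning_rate * rb.adjoint();
    }
}
```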
Important notes:
The result of an expression like f = a + b * c is not actually of type rvar, but add_expr<rvar, mult_expr<rvar, rvar>> (expression templates).
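A sketch of what this means for callers. Only the expression-template types above are from the PR; the evaluate-on-assignment behavior is an assumption:

```cpp
// Sketch: with expression templates, `auto` captures the expression
// type, not rvar. The names add_expr/mult_expr come from the note
// above; the conversion behavior is an assumption.
template <typename RVar>
void expression_template_sketch()
{
    RVar a(1.0), b(2.0), c(3.0);

    // f has some type like add_expr<RVar, mult_expr<RVar, RVar>>,
    // not RVar, so `auto` can be surprising here.
    auto f = a + b * c;

    // Presumably assigning to rvar evaluates the expression and
    // records it on the tape (an assumption).
    RVar g = a + b * c;

    (void)f; (void)g;
}
```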
The library uses placement new for memory allocations. This is a deliberate design choice: the flat_linear_allocator destructor explicitly calls the destructors of the individual elements, so explicit calls to delete shouldn't be needed here.
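To illustrate the general pattern (a generic arena sketch, not the PR's actual flat_linear_allocator): placement new constructs elements into a pre-allocated flat buffer, and the owner's destructor runs the element destructors itself, so delete never enters the picture:

```cpp
// Generic illustration of the placement-new + explicit-destructor
// pattern; not the PR's flat_linear_allocator implementation.
#include <cstddef>
#include <new>
#include <utility>

template <typename T, std::size_t Capacity>
class arena
{
    alignas(T) unsigned char storage_[Capacity * sizeof(T)];
    std::size_t size_ = 0;

public:
    arena() = default;
    arena(arena const&) = delete;
    arena& operator=(arena const&) = delete;

    template <typename... Args>
    T* emplace(Args&&... args)
    {
        // Capacity checking omitted for brevity. Placement new
        // constructs T inside the flat buffer; no heap call happens.
        T* p = ::new (storage_ + size_ * sizeof(T))
                   T(std::forward<Args>(args)...);
        ++size_;
        return p;
    }

    ~arena()
    {
        // The arena owns the element lifetimes: destroy explicitly,
        // in reverse construction order. No delete, because the
        // memory was never obtained from operator new.
        while (size_ > 0)
        {
            --size_;
            reinterpret_cast<T*>(storage_ + size_ * sizeof(T))->~T();
        }
    }
};
```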
Thank you, and looking forward to your feedback.