Add adjoint hessian called tfq.math.inner_product_hessian #530
base: master
Conversation
Commits dec4520 to 0141262
FYI: this op is 20x-100x faster than the Cirq Hessian calculation used in the unit test file.
Hi Jae, thanks for writing all of this! I have a few high level questions/comments:

I think we might want to have it be a part of `tf.GradientTape()`, so that nested tapes:

```python
with tf.GradientTape():
    with tf.GradientTape():
        <second order stuff>
```

or https://www.tensorflow.org/api_docs/python/tf/hessians could just be used with this op and have it just work. I think this means you might need to register another gradient.
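For reference, a minimal sketch of the nested-tape pattern described above, using `tf.sin` as a stand-in for this PR's op (which would need its gradient's gradient registered to participate the same way):

```python
import tensorflow as tf

x = tf.Variable(0.5)

# Nested tapes yield a second derivative; a custom op joins in only if
# gradients are registered for both differentiation levels.
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = tf.sin(x)               # stand-in for the inner-product op
    dy_dx = inner.gradient(y, x)    # cos(x), recorded on the outer tape
d2y_dx2 = outer.gradient(dy_dx, x)  # equals -sin(0.5)
```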
Nice! Did you try using the analytic form of its gradient gate instead of finite differences?
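For context on the analytic-versus-finite-difference point, a generic sketch (my notation, not taken from the PR): for a gate generated by a fixed Hermitian G, the derivative of the unitary has a closed form, whereas the central-difference estimate, presumably what is applied to `cirq.PhasedXPowGate` here, carries truncation error that compounds at second order:

```latex
U(\theta) = e^{-i\theta G/2}
\;\Longrightarrow\;
\frac{\partial U}{\partial\theta} = -\frac{iG}{2}\,U(\theta)
\quad \text{(exact)},
\qquad
\frac{\partial U}{\partial\theta}
\approx \frac{U(\theta+\varepsilon) - U(\theta-\varepsilon)}{2\varepsilon}
+ O(\varepsilon^{2})
\quad \text{(finite difference)}.
```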
This PR adds `tfq.math.inner_product_hessian()`, based on an adjoint reverse-mode Hessian calculation. It's independent of TensorFlow's Jacobian routine, so you can get the Hessian directly without `tf.GradientTape`.

Note: due to the large numerical error from the 2nd-order finite differencing on `cirq.PhasedXPowGate`, the op will complain if any input circuit contains that gate.

Instead of taking gradient values, it accepts float weights on `programs[i]` and `other_programs[i][j]`, which can be used for any linear combination of the Hessian terms (see the sketch below). You can pass just `tf.ones()` for the bare values.
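A usage sketch based on the description above; the argument names and order (in particular `programs_coeffs` and `other_programs_coeffs`) are assumptions for illustration, not the confirmed signature:

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

q = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')

# One parameterized program and one fixed "other" program per row.
# Avoids cirq.PhasedXPowGate per the note above.
programs = tfq.convert_to_tensor([cirq.Circuit(cirq.X(q) ** theta)])
other_programs = tfq.convert_to_tensor([[cirq.Circuit(cirq.H(q))]])
symbol_names = tf.constant(['theta'])
symbol_values = tf.constant([[0.5]])

# Weights on programs[i] and other_programs[i][j]; tf.ones() gives the
# bare (unweighted) Hessian terms, per the description.
programs_coeffs = tf.ones([1])
other_programs_coeffs = tf.ones([1, 1])

hessian = tfq.math.inner_product_hessian(
    programs, symbol_names, symbol_values, other_programs,
    programs_coeffs, other_programs_coeffs)
```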