[Feature, Performance] Optimize HamEvo for commuting Hamiltonians #346
Comments
@vytautas-a like we discussed, I suspect a first implementation of this may only require dealing with the `True` case.
From what I can see, HamEvo checks if the qubit Hamiltonian is made of commuting terms and generates a gate representation of it. So why does it take different execution times when the generator is specified as one chunk versus separate commuting chunks, given that both have an identical digital decomposition to be implemented in the end?
Actually, it currently only does that if you call [...]. But the suggestion here is exactly to make the commutation check happen automatically and to optimize based on that.
Cool, understood.
Found a paper about distributing a Pauli decomposition into collections of commuting sets: https://arxiv.org/pdf/1908.06942.pdf. Apparently there is a lot of work on this, mostly in the direction of doing efficient observable estimation with minimal shots, but I think it can be used here as well.
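The grouping idea can be sketched in a few lines without any library support: two Pauli strings commute exactly when they anticommute on an even number of qubit positions, which allows a simple greedy partition into commuting collections. This is a minimal illustration of the approach, not the paper's (more sophisticated) algorithm, and the function names are hypothetical:

```python
def strings_commute(p: str, q: str) -> bool:
    # Two Pauli strings commute iff they anticommute (differ, both non-identity)
    # on an even number of qubit positions.
    mismatches = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return mismatches % 2 == 0

def greedy_commuting_groups(paulis: list[str]) -> list[list[str]]:
    # Greedily place each string into the first group it fully commutes with,
    # opening a new group when none fits.
    groups: list[list[str]] = []
    for p in paulis:
        for g in groups:
            if all(strings_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(greedy_commuting_groups(["XX", "YY", "ZZ", "XI", "IZ"]))
# → [['XX', 'YY', 'ZZ'], ['XI', 'IZ']]
```

Each resulting group could then be exponentiated (or measured) as a unit.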
Nice @rajaiitp! Seems relevant, yes.
Also came across this one: https://arxiv.org/pdf/1907.09040.pdf
Closing after opening in PyQTorch: pasqal-io/pyqtorch#177
Currently HamEvo of some generator is always exponentiated fully, without checking for commutation relations. However, generators composed of several commuting parts can be exponentiated separately. We can make some of those checks automatic in the instantiation of HamEvo and then optimize the calculation in the backend. Below is an example script doing it manually to showcase the potential:
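A minimal manual sketch of the idea, in plain NumPy/SciPy rather than the qadence API: for a generator H = A + B with [A, B] = 0, exp(-iHt) = exp(-iAt) exp(-iBt), so each commuting chunk can be exponentiated separately.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Commuting two-qubit generator terms: A = Z ⊗ I, B = I ⊗ Z
A = np.kron(Z, I2)
B = np.kron(I2, Z)
assert np.allclose(A @ B, B @ A)  # the terms commute

t = 0.7
U_full = expm(-1j * t * (A + B))                  # exponentiate the full generator
U_split = expm(-1j * t * A) @ expm(-1j * t * B)   # exponentiate each chunk separately

print(np.allclose(U_full, U_split))  # → True
```

For commuting chunks acting on small subsets of qubits, the split version works with much smaller matrices than the full 2^n × 2^n exponential.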
To implement this, there is already a `block_is_commuting_hamiltonian` function in `qadence.blocks.utils` that would be useful. It seems efficient, but should be reviewed. This function just returns `True` or `False`, so a first implementation could be based on that: essentially, if `True`, every term in the `AddBlock` is separated into its own matrix to be exponentiated. A next level would be to look for optimizations even when the function returns `False`, by aggregating the terms into groups of mutually commuting ones. However, I suspect this would not be straightforward.
Related to #134, since this would reduce to calling `block_to_tensor` on each of the smaller commuting terms.