Code Cleanup analysis transition #8
Organized TODO: A metric should contain an AnalysisPass, a property_set key, and specify which averaging function to use.
Some options:
One reason this is awkward is that we have to give the main benchmark class the metric classes, but the current version requires that the metric class has already put the appropriate key into the pass manager's property set. What we could do instead is, for every metric given to the benchmarker, modify each of the runners' post-process pass managers (see the sketch below).
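One possible shape for this, sketched below with hypothetical names (Metric, register_metrics, and the post_process_pms list are not existing classes in this repo): the metric object carries its AnalysisPass, its property_set key, and its averaging function, and the benchmarker appends each metric's pass to every runner's post-process pass manager.

```python
# Sketch only: Metric bundles the analysis pass that computes a value, the
# property_set key that pass writes, and the averaging function used across
# repeated trials (e.g. statistics.geometric_mean for depth-like metrics,
# statistics.mean for rates such as accepted_subs).
from dataclasses import dataclass
from typing import Callable, List

from qiskit.transpiler import PassManager
from qiskit.transpiler.basepasses import AnalysisPass


@dataclass
class Metric:
    analysis_pass: AnalysisPass       # computes the value during post-processing
    property_key: str                 # key written into the property_set
    average: Callable[[list], float]  # aggregation over repeated trials


def register_metrics(metrics: List[Metric], post_process_pms: List[PassManager]) -> None:
    """Append each metric's analysis pass to every runner's post-process pass manager."""
    for pm in post_process_pms:
        for metric in metrics:
            pm.append(metric.analysis_pass)
```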
Another task: each of the metrics should be able to specify how it is averaged. Currently this is hardcoded, and the results code is generally far too verbose; a sketch of a more generic version follows the snippet:

```python
# TODO fix this
def add_result(self, metric_name, circuit_name, transpiler_name, result):
    """Add a result to the container."""
    # Check the metric name and set the use_gmean attribute
    if metric_name == "monodromy_depth":
        self.results[metric_name][circuit_name][transpiler_name].use_gmean = True
    elif metric_name == "accepted_subs":
        self.results[metric_name][circuit_name][transpiler_name].use_gmean = False
    else:
        raise ValueError(f"Unknown metric name {metric_name}")
    self.results[metric_name][circuit_name][transpiler_name].add_result(result)
```
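With something like the Metric object above, the hardcoded branch could collapse into a lookup on the metric itself. A hedged sketch, assuming a hypothetical self.metrics mapping from metric name to Metric and an average_fn attribute on the result entry in place of use_gmean:

```python
# Sketch only: self.metrics is a hypothetical dict of metric_name -> Metric,
# and entry.average_fn replaces the hardcoded use_gmean flag.
def add_result(self, metric_name, circuit_name, transpiler_name, result):
    """Add a result to the container, deferring averaging to the metric."""
    if metric_name not in self.metrics:
        raise ValueError(f"Unknown metric name {metric_name}")
    entry = self.results[metric_name][circuit_name][transpiler_name]
    entry.average_fn = self.metrics[metric_name].average
    entry.add_result(result)
```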
Another task: percent improvements should be handled in the results analysis of this repo. Currently this is some messy code I have put together in virtual-swap, but we should make it more generic as a function over each transpiler and each metric (see the sketch after the snippet):

```python
# find average improvement from 'Qiskit-$\\texttt{CNOT}$' to 'SABREVS-$\\texttt{CNOT}$'
# find average improvement from 'Qiskit-$\\sqrt{\\texttt{iSWAP}}$' to 'SABREVS-$\\sqrt{\\texttt{iSWAP}}$'
key = "monodromy_depth"
# key = 'accepted_subs'
l1 = [
    x["Qiskit-$\\texttt{CNOT}$"].average
    for x in benchmark.results.results[key].values()
]
l2 = [
    x["SABREVS-$\\texttt{CNOT}$"].average
    for x in benchmark.results.results[key].values()
]
l3 = [
    x["Qiskit-$\\sqrt{\\texttt{iSWAP}}$"].average
    for x in benchmark.results.results[key].values()
]
l4 = [
    x["SABREVS-$\\sqrt{\\texttt{iSWAP}}$"].average
    for x in benchmark.results.results[key].values()
]
# average element-wise percent improvement
print(sum((x - y) / x for x, y in zip(l1, l2)) / len(l1))
print(sum((x - y) / x for x, y in zip(l3, l4)) / len(l3))
```
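A hedged sketch of the generic version: one helper that takes a metric name and a (baseline, candidate) transpiler pair, using the same results[metric][circuit][transpiler].average access pattern as the snippet above. The transpiler labels in the usage comment are just the ones from that snippet.

```python
def average_improvement(results, metric, baseline, candidate):
    """Element-wise mean of (baseline - candidate) / baseline across circuits."""
    pairs = [
        (per_transpiler[baseline].average, per_transpiler[candidate].average)
        for per_transpiler in results[metric].values()
    ]
    return sum((b - c) / b for b, c in pairs) / len(pairs)


# for metric in ("monodromy_depth", "accepted_subs"):
#     for baseline, candidate in [
#         ("Qiskit-$\\texttt{CNOT}$", "SABREVS-$\\texttt{CNOT}$"),
#         ("Qiskit-$\\sqrt{\\texttt{iSWAP}}$", "SABREVS-$\\sqrt{\\texttt{iSWAP}}$"),
#     ]:
#         print(metric, baseline, "->", candidate,
#               average_improvement(benchmark.results.results, metric, baseline, candidate))
```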
From #2: currently, metrics are computed by taking circuit inputs post-transpilation.
On a call to run(), since the metric class is the same, it would just keep adding results into the same metric instance. That could actually be fine; we just need to think through potential conflicts. Either we could empty/reinitialize all the metrics on every call to run(), or, if run() is called again, save the new results into each metric instance, or skip running when the result already exists in the results container (see the sketch below).
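A minimal sketch of the "skip if the result already exists" option; self.circuits, self.runners, self.results.has, and runner.property_set are hypothetical placeholders for whatever the benchmark class actually exposes, and the reset path is left to a rerun_existing flag.

```python
# Hypothetical names throughout; the point is only where the skip/reset
# decision happens relative to transpilation.
def run(self, rerun_existing: bool = False):
    for circuit_name, circuit in self.circuits.items():
        for runner in self.runners:
            needed = [
                m for m in self.metrics
                if rerun_existing
                or not self.results.has(m.property_key, circuit_name, runner.name)
            ]
            if not needed:
                continue  # everything for this (circuit, runner) pair is already saved
            runner.run(circuit)  # assumed to populate runner.property_set via the analysis passes
            for metric in needed:
                self.results.add_result(
                    metric.property_key, circuit_name, runner.name,
                    runner.property_set[metric.property_key],
                )
```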
Feature Description and Why
A partial transition from circuit analysis to DAG analysis in the metrics class was done previously. That refactor could be improved, and code cleanup is needed, specifically in various parts of the runner and benchmark classes.
Additional Details / Helpful Resources