
Code Cleanup analysis transition #8

Open
R-G-1 opened this issue Jun 26, 2023 · 5 comments
Labels: enhancement (New feature or request)

Comments

@R-G-1 (Contributor) commented Jun 26, 2023

Feature Description and Why

A partial transition from circuit analysis to DAG analysis was previously done in the metrics class. The refactor could be improved, and code cleanup can be done in various parts of the runner and benchmark classes.

Additional Details / Helpful Resources

R-G-1 added the enhancement (New feature or request) label on Jun 26, 2023
@evmckinney9 (Owner) commented Jun 29, 2023

Organized TODO

A metric should contain an AnalysisPass, a property_set key, and specify which averaging function to use (a rough sketch of such a wrapper follows the list below).

  • Metrics are computed as AnalysisPasses
  • Metric passes should automatically be added to the runners(?)
    • This is not straightforward to do cleanly, because some runners will add an analysis pass with custom parameters. For example, MonodromyDepth needs to have a basis gate specified; if we add the metric from the runner, it doesn't know which settings to use.
  • Specify the averaging function
  • Move to its own class with statistics utilities
  • Handle/clean metric data on re-run
  • Update README.md documentation following all these changes
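
A minimal sketch of such a metric wrapper, assuming made-up names (MetricInterface, property_key, avg_fn) that do not exist in the repo yet:

from statistics import geometric_mean, mean

from qiskit.transpiler.basepasses import AnalysisPass


class MetricInterface:
    """Hypothetical wrapper bundling an AnalysisPass, its property_set key,
    and the averaging function used to aggregate its results."""

    def __init__(self, analysis_pass: AnalysisPass, property_key: str, use_gmean: bool = False):
        self.analysis_pass = analysis_pass
        self.property_key = property_key
        # Each metric declares its own averaging function instead of the
        # results container hardcoding it per metric name.
        self.avg_fn = geometric_mean if use_gmean else mean

    def average(self, values):
        """Aggregate raw results with this metric's chosen averaging function."""
        return self.avg_fn(values)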

Some options:

  • Keep separate metrics as a way to verify that each runner has included them in the post-processing stage as a pass. This acts only as a confirmation that each runner will have the metrics computed.
  • The runner sets some class attribute that can be referenced when appending the metric to post-process. (no, this is messy)
  • We get rid of metrics and the runners handle it instead. (messy)
  • The runners have a method for appending metric passes. This makes it easy to compartmentalize the logic for checking whether a metric's AnalysisPass requires class attributes from the runner before initializing/appending it to the post-process pass manager (sketched after the next paragraph).

One reason this is annoying is that we have to give the main benchmark class the metric classes, but the current version requires that each metric class has already put the appropriate key into the pass manager's property set. What we could do instead is, for every metric given to the benchmarker, modify each runner's post-process pass manager.
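
A rough sketch of the last option above, where each runner exposes a method that knows which of its own attributes a metric's AnalysisPass needs before appending it to the post-process pass manager. ExampleRunner, pass_cls, and requires_basis_gate are illustrative placeholders, not the actual API:

from qiskit.transpiler import PassManager


class ExampleRunner:
    """Stand-in for a runner; the real runners hold their own settings
    (e.g. a basis gate) and a post-process PassManager."""

    def __init__(self, basis_gate=None):
        self.basis_gate = basis_gate
        self.post_process = PassManager()

    def append_metric_pass(self, metric):
        """Initialize a metric's AnalysisPass with any runner-specific
        settings it needs, then append it to the post-process stage."""
        if getattr(metric, "requires_basis_gate", False):  # assumed flag on the metric
            analysis = metric.pass_cls(basis_gate=self.basis_gate)  # e.g. MonodromyDepth
        else:
            analysis = metric.pass_cls()
        self.post_process.append(analysis)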

@evmckinney9
Copy link
Owner

Another task: each of the metrics should be able to specify how it is averaged.

Currently this is hardcoded, and the code for results is generally way too verbose:

    # TODO fix this
    def add_result(self, metric_name, circuit_name, transpiler_name, result):
        """Add a result to the container."""
        # Check the metric name and set the use_gmean attribute
        if metric_name == "monodromy_depth":
            self.results[metric_name][circuit_name][transpiler_name].use_gmean = True
        elif metric_name == "accepted_subs":
            self.results[metric_name][circuit_name][transpiler_name].use_gmean = False
        else:
            raise ValueError(f"Unknown metric name {metric_name}")

        self.results[metric_name][circuit_name][transpiler_name].add_result(result)
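
If each metric carried its own averaging function (as in the wrapper sketch above), add_result could drop the hardcoded branching. A possible shape, assuming a hypothetical self.metrics registry on the container and per-entry objects that accept an avg_fn callable:

    def add_result(self, metric_name, circuit_name, transpiler_name, result):
        """Add a result, deferring the averaging choice to the metric itself."""
        if metric_name not in self.metrics:  # hypothetical registry of metric objects
            raise ValueError(f"Unknown metric name {metric_name}")
        entry = self.results[metric_name][circuit_name][transpiler_name]
        # The metric carries its own averaging function, so no per-name branching here.
        entry.avg_fn = self.metrics[metric_name].avg_fn
        entry.add_result(result)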

@evmckinney9 (Owner) commented:

Another task: percent improvements should be handled in the results analysis of this repo.

Currently this is some messy code I put together in virtual-swap; we should make it more generic as a function over each transpiler and each metric.

# find average improvement from 'Qiskit-$\\texttt{CNOT}$' to 'SABREVS-$\\texttt{CNOT}$'
# find average improvement from 'Qiskit-$\\sqrt{\\texttt{iSWAP}}$' to 'SABREVS-$\\sqrt{\\texttt{iSWAP}}$'
key = "monodromy_depth"
# key = 'accepted_subs'
l1 = list(
    x["Qiskit-$\\texttt{CNOT}$"].average
    for x in benchmark.results.results[key].values()
)
l2 = list(
    x["SABREVS-$\\texttt{CNOT}$"].average
    for x in benchmark.results.results[key].values()
)

l3 = list(
    x["Qiskit-$\\sqrt{\\texttt{iSWAP}}$"].average
    for x in benchmark.results.results[key].values()
)
l4 = list(
    x["SABREVS-$\\sqrt{\\texttt{iSWAP}}$"].average
    for x in benchmark.results.results[key].values()
)

# average element-wise percent improvement
print(sum((x - y) / x for x, y in zip(l1, l2)) / len(l1))
print(sum((x - y) / x for x, y in zip(l3, l4)) / len(l3))
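
A more generic take on the same calculation, written as a function over any metric and any baseline/candidate transpiler pair. The function name is made up; it assumes the same benchmark.results.results layout and .average attribute used above:

def average_percent_improvement(benchmark, metric_key, baseline, candidate):
    """Mean element-wise percent improvement of `candidate` over `baseline`
    for one metric, across all benchmarked circuits."""
    per_circuit = benchmark.results.results[metric_key].values()
    baseline_avgs = [entry[baseline].average for entry in per_circuit]
    candidate_avgs = [entry[candidate].average for entry in per_circuit]
    return sum((b - c) / b for b, c in zip(baseline_avgs, candidate_avgs)) / len(baseline_avgs)


# e.g. average_percent_improvement(benchmark, "monodromy_depth",
#                                  "Qiskit-$\\texttt{CNOT}$", "SABREVS-$\\texttt{CNOT}$")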

@evmckinney9 (Owner) commented:

From #2: currently, metrics are computed by taking circuit inputs post-transpilation.
I think it would be better to instead have metrics computed on the DAG as a final set of AnalysisPasses.
That avoids needing another circuit -> DAG conversion.
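
As a toy illustration (not one of the repo's actual metric passes), a metric written as an AnalysisPass reads the DAG directly inside the pass manager and writes its value into the property set, so no extra conversion is needed:

from qiskit import QuantumCircuit
from qiskit.transpiler import PassManager
from qiskit.transpiler.basepasses import AnalysisPass


class TwoQubitGateCount(AnalysisPass):
    """Toy metric: reads the DAG directly and writes into the property set."""

    def run(self, dag):
        self.property_set["two_qubit_gate_count"] = len(dag.two_qubit_ops())


pm = PassManager()              # stands in for a runner's post-process stage
pm.append(TwoQubitGateCount())  # metric runs as the final AnalysisPass
circuit = QuantumCircuit(2)
circuit.cx(0, 1)
pm.run(circuit)
print(pm.property_set["two_qubit_gate_count"])  # -> 1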

@evmckinney9 (Owner) commented:

On a call to run(), since the metric class is the same, it would just keep adding results into the same metric instance. That could actually be fine; we just need to think through conflicts and such. Either we empty/reinitialize all the metrics on every call to run(), or on a repeated run() we simply save new results into each metric class, or we skip running when the result already exists in results (sketched below).
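
One way the skip-if-exists option could look inside run(); every attribute and helper name here (self.circuits, self.runners, self.metrics, results.contains, transpile_and_measure) is a placeholder, not existing code:

def run(self):
    """Sketch of the skip-if-exists behavior on repeated calls to run()."""
    for circuit_name, circuit in self.circuits.items():
        for transpiler_name, runner in self.runners.items():
            for metric_name in self.metrics:
                # Skip work already recorded by a previous run();
                # the alternative is to clear self.results up front and recompute.
                if self.results.contains(metric_name, circuit_name, transpiler_name):
                    continue
                result = runner.transpile_and_measure(circuit, metric_name)  # placeholder
                self.results.add_result(metric_name, circuit_name, transpiler_name, result)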
