This repository is for benchmarking the performance of different tensor network contraction order optimizers in OMEinsumContractionOrders.
The following figure shows the results of the contraction order optimizers on the `examples/quantumcircuit/codes/sycamore_53_20_0.json` instance.
- Version: [email protected]
- Platform: Ubuntu 24.04 LTS
- Device: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
Check the full report in `report.pdf`; benchmark results are available in the `examples/*/results` folders.
To install the dependencies for all examples, run

```bash
make init # install all dependencies for all examples
```

If you want to benchmark with the development version of OMEinsumContractionOrders, run

```bash
make dev # develop the master branch of OMEinsumContractionOrders for all examples
```

To switch back to the released version of OMEinsumContractionOrders, run

```bash
make free # switch back to the released version of OMEinsumContractionOrders
```

To update the dependencies of all examples, run

```bash
make update
```

Examples are defined in the `examples` folder. To generate contraction codes for all examples, run

```bash
make generate-codes
```

It will generate a file in the `codes` folder of each example, named `*.json`.
These instances are defined in the main.jl file of each example.
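For reference, a generated code file can be loaded back into an `EinCode` for inspection. The following is a minimal sketch, assuming the JSON3 package (not necessarily a dependency of this repository), integer index labels, and the JSON schema shown in the "add a new example" section below:

```julia
using JSON3, OMEinsumContractionOrders

# Read a generated instance back into an EinCode for inspection.
data = JSON3.read(read("examples/quantumcircuit/codes/sycamore_53_20_0.json", String))
ixs = [collect(Int, ix) for ix in data.einsum.ixs]   # input index labels
iy = collect(Int, data.einsum.iy)                    # output index labels
code = OMEinsumContractionOrders.EinCode(ixs, iy)
sizes = Dict(parse(Int, String(k)) => Int(v) for (k, v) in pairs(data.size))
```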
There is also a script to generate the contraction codes for the einsumorg example; run

```bash
make generate-einsumorg-codes
```

It will generate a file in the `codes` folder of the einsumorg example, named `*.json`. It requires:
- Having a working Python interpreter in your terminal.
- Downloading the `instances` dataset from here and unpacking it in the `examples/einsumorg/instances` folder.
To run benchmarks, run

```bash
optimizer="Treewidth(alg=MF())" make run
optimizer="Treewidth(alg=MMD())" make run
optimizer="Treewidth(alg=AMF())" make run
optimizer="KaHyParBipartite(; sc_target=25)" make run
optimizer="KaHyParBipartite(; sc_target=25, imbalances=0.0:0.1:0.8)" make run
optimizer="HyperND()" make run
optimizer="HyperND(; dis=METISND(), width=50, imbalances=100:10:800)" make run
optimizer="HyperND(; dis=KaHyParND(), width=50, imbalances=100:10:800)" make run
```

It will read the `*.json` files in the `codes` folder of each example, and run the benchmarks (twice by default, to avoid just-in-time compilation overhead).
The runner script is defined in the runner.jl file.
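Conceptually, each run builds the optimizer named in the `optimizer` variable and optimizes the contraction order of every instance. A minimal sketch of that step (not the actual `runner.jl`, which also handles timing and writes result files) might look like:

```julia
using OMEinsumContractionOrders

# `code` and `sizes` as constructed in the loading sketch above.
optimizer = TreeSA(niters = 10)                  # any optimizer, e.g. GreedyMethod(), HyperND()
optcode = optimize_code(code, sizes, optimizer)  # optimize the contraction order

# Inspect the time/space/read-write complexity of the optimized order.
println(contraction_complexity(optcode, sizes))
```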
If you want to run a batch of jobs, just run

```bash
for niters in 1 2 4 6 8 10 20 30 40 50; do optimizer="TreeSA(niters=$niters)" make run; done
for niters in {0..10}; do optimizer="GreedyMethod(α=$niters * 0.1)" make run; done
```

If you want to overwrite the existing results, run with the argument `overwrite=true`. To remove existing results of all benchmarks, run
```bash
make clean-results
```

To summarize the results (a necessary step for visualization), run

```bash
make summary
```

It will generate a file named `summary.json` in the root folder, which contains the results of all benchmarks.
To visualize the results, Typst >= 0.13 is required. After installing Typst, run

```bash
make report
```

It will generate a file named `report.pdf` in the root folder, which contains the report of the benchmarks.
Alternatively, you can use VS Code with the Tinymist Typst extension to preview it directly.
The examples are defined in the examples folder. To add a new example, you need to:
- Add a new folder in the `examples` folder, named after the problem.
- Set up an independent environment in the new folder, and add the dependencies to the `Project.toml` file.
- Add a new `main.jl` file in the new folder, which should contain the function `main(folder::String)`: the main function to generate the contraction codes to the target folder (see the sketch after this list). The sample JSON file is as follows:

  ```json
  {
    "einsum": {
      "ixs": [[1, 2], [2, 3], [3, 4]],
      "iy": []
    },
    "size": { "1": 2, "2": 2, "3": 2, "4": 2 }
  }
  ```

  The `einsum` field is the contraction code with two fields: `ixs` (input labels) and `iy` (output labels), and `size` gives the size of each tensor index.
- Edit the `config.toml` file to add the new example in the `instances` section.
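For orientation, here is a minimal sketch of such a `main.jl`, assuming JSON3 for writing and a hypothetical instance name; the existing examples may structure this differently:

```julia
using JSON3  # assumed to be listed in the example's Project.toml

# Write one contraction code to `folder` in the JSON layout described above.
function main(folder::String)
    ixs = [[1, 2], [2, 3], [3, 4]]             # input index labels
    iy = Int[]                                 # output index labels (empty: scalar result)
    sizes = Dict(string(l) => 2 for l in 1:4)  # size of every index
    instance = Dict("einsum" => Dict("ixs" => ixs, "iy" => iy), "size" => sizes)
    open(joinpath(folder, "chain_4.json"), "w") do io  # "chain_4.json" is a hypothetical name
        JSON3.write(io, instance)
    end
end
```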