
AerEstimatorV2 fails to execute small circuits transpiled against larger backends #2249

Open
nonhermitian opened this issue Nov 3, 2024 · 2 comments · May be fixed by #2265
@nonhermitian
Contributor

Information

  • Qiskit Aer version: 0.15.1
  • Python version:
  • Operating system:

What is the current behavior?

AerEstimatorV2 gives a memory error when trying to execute small circuits that have been transpiled against larger backends. Essentially, it is the same issue as #2084.

Steps to reproduce the problem

Try this circuit and operator, transpiled against an Eagle (127-qubit) backend:

from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp

N = 12
qc = QuantumCircuit(N)

qc.x(range(N))
qc.h(range(N))

for kk in range(N // 2, 0, -1):
    qc.ch(kk, kk - 1)
for kk in range(N // 2, N - 1):
    qc.ch(kk, kk + 1)

op = SparsePauliOp('Z'*12)

I currently get

QiskitError: 'ERROR: [Experiment 0] Insufficient memory to run circuit circuit-205 using the statevector simulator. Required memory: 18446744073709551615M, max memory: 127903M , ERROR: Insufficient memory to run circuit circuit-205 using the statevector simulator. Required memory: 18446744073709551615M, max memory: 127903M'

But the number of active qubits is only 12 (the reported required memory, 18446744073709551615M, is 2^64 - 1, which suggests the required-memory computation overflowed).

What is the expected behavior?

The simulation should work if the number of active qubits is small enough to fit in memory.
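
For scale, a dense statevector needs roughly 2^n complex double-precision amplitudes, i.e. 2^n * 16 bytes. A quick back-of-the-envelope check (illustrative only, not Aer's internal accounting):

# Rough statevector memory estimate: 2**n amplitudes, 16 bytes each.
def statevector_mem_mib(num_qubits):
    return (2 ** num_qubits) * 16 / 2 ** 20

print(statevector_mem_mib(12))   # ~0.06 MiB -- trivially fits for 12 active qubits
print(statevector_mem_mib(127))  # ~2.6e33 MiB -- impossible if all 127 device qubits are simulated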

Suggested solutions

@nonhermitian nonhermitian added the bug Something isn't working label Nov 3, 2024
@gadial gadial self-assigned this Nov 7, 2024
@gadial
Collaborator

gadial commented Nov 11, 2024

Before attempting a fix, I want to make sure I understand the problem. I reproduced it with the following code:

from qiskit import QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit.quantum_info import SparsePauliOp
from qiskit_aer.primitives import EstimatorV2
from qiskit_ibm_runtime.fake_provider import FakeWashingtonV2

N = 12
qc = QuantumCircuit(N)

qc.x(range(N))
qc.h(range(N))

for kk in range(N // 2, 0, -1):
    qc.ch(kk, kk - 1)
for kk in range(N // 2, N - 1):
    qc.ch(kk, kk + 1)

op = SparsePauliOp('Z'*12)
pm = generate_preset_pass_manager(backend=FakeWashingtonV2(), optimization_level=1)
isa_circuit = pm.run(qc)
mapped_observable = op.apply_layout(isa_circuit.layout)

est = EstimatorV2()
res = est.run(pubs=[(isa_circuit, mapped_observable)]).result()
print(res)

If I understand correctly, the current handling for circuits transpiled against large backends exists only in Estimator (not in EstimatorV2), and it simply copies the number of qubits from the circuit prior to transpilation.

That approach does not help in a use case like the example above, where the circuit is transpiled first and only afterwards sent to the simulator (with its qubit count now reported as 127).

@nonhermitian, am I getting this right? In this case Aer has to actively go over the circuit and count the number of qubits used in practice, correct?
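
For illustration, counting the qubits actually touched by instructions could look roughly like this (a sketch only, not a proposal for Aer's internals; isa_circuit is the transpiled circuit from the snippet above):

# Illustrative sketch: a qubit is "active" if it appears in at least one
# instruction of the transpiled circuit; idle ancillas are not counted.
def count_active_qubits(circuit):
    active = set()
    for instruction in circuit.data:
        for qubit in instruction.qubits:
            active.add(circuit.find_bit(qubit).index)
    return len(active)

print(count_active_qubits(isa_circuit))  # a handful of qubits, not 127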

@nonhermitian
Contributor Author

The root issue is that the number of qubits that needs to be simulated is the number of active qubits in the circuit. As the example here shows, this need not equal the total number of device qubits in a transpiled circuit. The Aer backend.run interface already performs this truncation, so the primitives should as well.
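
For comparison, a minimal sketch of the backend.run path on the same transpiled circuit (assuming the isa_circuit from the reproduction above); AerSimulator truncates idle qubits by default, so this runs without the memory error:

from qiskit import ClassicalRegister
from qiskit_aer import AerSimulator

# Measure only the physical qubits that carry the 12 virtual qubits;
# AerSimulator's default truncation then drops the remaining idle qubits,
# and the simulation fits comfortably in memory.
physical = isa_circuit.layout.final_index_layout()
measured = isa_circuit.copy()
creg = ClassicalRegister(len(physical), "meas")
measured.add_register(creg)
measured.measure(physical, creg)

counts = AerSimulator().run(measured, shots=1024).result().get_counts()
print(counts)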

@gadial gadial linked a pull request Nov 24, 2024 that will close this issue