Qiskit Machine Learning v0.8 Migration Guide
============================================

This tutorial will guide you through the process of migrating your code
from V1 to V2 primitives.

Introduction
------------

The Qiskit Machine Learning 0.8 release focuses on transitioning from V1 to V2 primitives.
This release also incorporates selected algorithms from the now deprecated `qiskit_algorithms` repository.


Contents:

- Overview of the primitives
- Transpilation and Pass Managers
- Algorithms from `qiskit_algorithms`
- 🔪 The Sharp Bits: Common Pitfalls

Overview of the primitives
--------------------------

With the launch of `Qiskit 1.0`, V1 primitives are deprecated and replaced by V2 primitives. Further details
are available in the
`V2 primitives migration guide <https://docs.quantum.ibm.com/migration-guides/v2-primitives>`__.

The Qiskit Machine Learning 0.8 update aligns with the Qiskit IBM Runtime’s Primitive Unified Block (PUB)
requirements and the constraints of the instruction set architecture (ISA) for circuits and observables.

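In V2 terms, each primitive call takes a list of PUBs, where a PUB bundles a circuit with its observables
and/or parameter values. The snippet below is a minimal, illustrative sketch of this interface using the
reference `StatevectorEstimator`; the circuit, observable and parameter values are made up for the example,
and circuits sent to hardware primitives must additionally be transpiled to the backend's ISA (see the
pass manager section below).

.. code:: ipython3

    from qiskit import QuantumCircuit
    from qiskit.circuit import Parameter
    from qiskit.quantum_info import SparsePauliOp
    from qiskit.primitives import StatevectorEstimator as Estimator

    # Illustrative one-qubit parametrized circuit and observable
    theta = Parameter("theta")
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    observable = SparsePauliOp.from_list([("Z", 1.0)])

    # A V2 estimator consumes a list of PUBs: (circuit, observables, parameter_values)
    estimator = Estimator()
    job = estimator.run([(qc, observable, [[0.5]])])
    expectation_values = job.result()[0].data.evs

The classes shown in the rest of this guide build these PUBs internally, so users typically only need to
pass a V2 primitive instance (and, for hardware backends, a pass manager).
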
Users can switch between `V1` and `V2` primitives from version `0.8`.

**Warning**: V1 primitives are deprecated and will be removed in version `0.9`. To ensure full compatibility
with V2 primitives, review the transpilation and pass managers section if your primitives require transpilation,
such as those from `qiskit-ibm-runtime`.

Usage of V2 primitives is as straightforward as using V1:

- For kernel-based methods:

.. code:: ipython3

    from qiskit.circuit.library import ZZFeatureMap
    from qiskit.primitives import StatevectorSampler as Sampler
    from qiskit_machine_learning.state_fidelities import ComputeUncompute
    from qiskit_machine_learning.kernels import FidelityQuantumKernel
    ...
    sampler = Sampler()
    fidelity = ComputeUncompute(sampler=sampler)
    feature_map = ZZFeatureMap(num_qubits)
    qk = FidelityQuantumKernel(feature_map=feature_map, fidelity=fidelity)
    ...

- For Estimator-based neural network methods:

.. code:: ipython3

    from qiskit.primitives import StatevectorEstimator as Estimator
    from qiskit_machine_learning.neural_networks import EstimatorQNN
    from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient
    ...
    estimator = Estimator()
    estimator_gradient = ParamShiftEstimatorGradient(estimator=estimator)

    estimator_qnn = EstimatorQNN(
        circuit=circuit,
        observables=observables,
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        estimator=estimator,
        gradient=estimator_gradient,
    )
    ...

- For Sampler-based neural network methods:

.. code:: ipython3

    from qiskit.primitives import StatevectorSampler as Sampler
    from qiskit_machine_learning.neural_networks import SamplerQNN
    from qiskit_machine_learning.gradients import ParamShiftSamplerGradient
    ...
    sampler = Sampler()
    sampler_gradient = ParamShiftSamplerGradient(sampler=sampler)

    sampler_qnn = SamplerQNN(
        circuit=circuit,
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        interpret=parity,
        output_shape=output_shape,
        sampler=sampler,
        gradient=sampler_gradient,
    )
    ...


Transpilation and Pass Managers
-------------------------------

If your primitives require transpiled (ISA) circuits, e.g. the primitives from `qiskit-ibm-runtime`,
pass a `pass_manager` to the `qiskit-machine-learning` classes so that circuits and observables are
transpiled to the backend's ISA for you.

- For kernel-based methods:

.. code:: ipython3

    from qiskit.circuit.library import ZZFeatureMap
    from qiskit_ibm_runtime import Session, SamplerV2
    from qiskit.providers.fake_provider import GenericBackendV2
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

    from qiskit_machine_learning.state_fidelities import ComputeUncompute
    from qiskit_machine_learning.kernels import FidelityQuantumKernel

    ...
    backend = GenericBackendV2(num_qubits=num_qubits)
    session = Session(backend=backend)
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)

    sampler = SamplerV2(mode=session)
    fidelity = ComputeUncompute(sampler=sampler, pass_manager=pass_manager)

    feature_map = ZZFeatureMap(num_qubits)
    qk = FidelityQuantumKernel(feature_map=feature_map, fidelity=fidelity)
    ...

- For Estimator-based neural network methods:

.. code:: ipython3

    from qiskit_ibm_runtime import Session, EstimatorV2
    from qiskit.providers.fake_provider import GenericBackendV2
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

    from qiskit_machine_learning.neural_networks import EstimatorQNN
    from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient

    ...
    backend = GenericBackendV2(num_qubits=num_qubits)
    session = Session(backend=backend)

    estimator = EstimatorV2(mode=session)
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    estimator_qnn = EstimatorQNN(
        circuit=qc,
        observables=[observables],
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        estimator=estimator,
        pass_manager=pass_manager,
    )

or with more details:

.. code:: ipython3

    backend = GenericBackendV2(num_qubits=num_qubits)
    session = Session(backend=backend)

    estimator = EstimatorV2(mode=session)
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    estimator_gradient = ParamShiftEstimatorGradient(
        estimator=estimator, pass_manager=pass_manager
    )

    isa_qc = pass_manager.run(qc)
    observables = SparsePauliOp.from_list(...)
    isa_observables = observables.apply_layout(isa_qc.layout)
    estimator_qnn = EstimatorQNN(
        circuit=isa_qc,
        observables=[isa_observables],
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        estimator=estimator,
        gradient=estimator_gradient,
    )

- For Sampler-based neural network methods:

.. code:: ipython3

    from qiskit_ibm_runtime import Session, SamplerV2
    from qiskit.providers.fake_provider import GenericBackendV2
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

    from qiskit_machine_learning.neural_networks import SamplerQNN
    from qiskit_machine_learning.gradients import ParamShiftSamplerGradient

    ...
    backend = GenericBackendV2(num_qubits=num_qubits)
    session = Session(backend=backend)
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    sampler = SamplerV2(mode=session)

    sampler_qnn = SamplerQNN(
        circuit=qc,
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        interpret=parity,
        output_shape=output_shape,
        sampler=sampler,
        pass_manager=pass_manager,
    )

or with more details:

.. code:: ipython3

    backend = GenericBackendV2(num_qubits=num_qubits)
    session = Session(backend=backend)
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)

    sampler = SamplerV2(mode=session)
    sampler_gradient = ParamShiftSamplerGradient(sampler=sampler, pass_manager=pass_manager)
    isa_qc = pass_manager.run(qc)
    sampler_qnn = SamplerQNN(
        circuit=isa_qc,
        input_params=feature_map.parameters,
        weight_params=ansatz.parameters,
        interpret=parity,
        output_shape=output_shape,
        sampler=sampler,
        gradient=sampler_gradient,
    )
    ...


Algorithms from `qiskit_algorithms`
-----------------------------------

Essential features of Qiskit Algorithms have been integrated into Qiskit Machine Learning,
so Qiskit Machine Learning no longer depends on Qiskit Algorithms.
This migration requires Qiskit 1.0 or higher and may necessitate updating Qiskit Aer.
Be cautious during updates to avoid breaking changes at critical production stages.

Users must update their imports and code references in code that uses both Qiskit Machine Learning and Qiskit Algorithms:

- Change `qiskit_algorithms.gradients` to `qiskit_machine_learning.gradients`
- Change `qiskit_algorithms.optimizers` to `qiskit_machine_learning.optimizers`
- Change `qiskit_algorithms.state_fidelities` to `qiskit_machine_learning.state_fidelities`
- Update utilities as needed, since the merge is only partial.

To continue using sub-modules and functionalities of Qiskit Algorithms that **have not been transferred**,
you may keep importing them from Qiskit Algorithms as before. However, be aware that Qiskit Algorithms
is no longer officially supported and some of its functionalities may not work in your use case. For any problems
directly related to Qiskit Algorithms, please open a GitHub issue at
`qiskit-algorithms <https://github.com/qiskit-community/qiskit-algorithms>`__.
Should you want to include a Qiskit Algorithms functionality that has not been incorporated in Qiskit Machine
Learning, please open a feature-request issue at
`qiskit-machine-learning <https://github.com/qiskit-community/qiskit-machine-learning>`__,
explaining why this change would be useful for you and other users.

Four examples of upgrading the code can be found below.

Gradients:

.. code:: ipython3

    # Before:
    from qiskit_algorithms.gradients import SPSAEstimatorGradient, ParamShiftEstimatorGradient
    # After:
    from qiskit_machine_learning.gradients import SPSAEstimatorGradient, ParamShiftEstimatorGradient
    # Usage (a V2 Estimator primitive instance is required)
    spsa_gradient = SPSAEstimatorGradient(estimator=estimator)
    param_shift_gradient = ParamShiftEstimatorGradient(estimator=estimator)

Optimizers:

.. code:: ipython3

    # Before:
    from qiskit_algorithms.optimizers import COBYLA, ADAM
    # After:
    from qiskit_machine_learning.optimizers import COBYLA, ADAM
    # Usage
    cobyla = COBYLA()
    adam = ADAM()

Quantum state fidelities:

.. code:: ipython3

    # Before:
    from qiskit_algorithms.state_fidelities import ComputeUncompute
    # After:
    from qiskit_machine_learning.state_fidelities import ComputeUncompute
    # Usage (a Sampler primitive instance is required)
    fidelity = ComputeUncompute(sampler=sampler)


Algorithm globals (used to fix the random seed):

.. code:: ipython3

    # Before:
    from qiskit_algorithms.utils import algorithm_globals
    # After:
    from qiskit_machine_learning.utils import algorithm_globals
    algorithm_globals.random_seed = 1234


🔪 The Sharp Bits: Common Pitfalls
-----------------------------------

- 🔪 Transpiling without measurements:

.. code:: ipython3

    # Before:
    qc = QuantumCircuit(1)
    qc.h(0)
    qc.ry(params[0], 0)
    qc.rx(params[1], 0)
    pass_manager.run(qc)

This causes issues in the transpiled circuit: when measurements are added afterwards (e.g. via
`measure_all()` on the ISA circuit), they cover all physical qubits instead of only the original virtual
qubits, which matters whenever the backend has more physical qubits than the circuit has virtual ones.
Always add measurements before transpilation:

.. code:: ipython3

    # After:
    qc = QuantumCircuit(1)
    qc.h(0)
    qc.ry(params[0], 0)
    qc.rx(params[1], 0)
    qc.measure_all()
    pass_manager.run(qc)

- 🔪 Dynamic attribute naming in Qiskit v1.x:

In Qiskit v1.x, the attribute that holds the measurement data of a SamplerV2 result is named dynamically
after the classical register, which can introduce subtle bugs.
Use the register names `meas` (created by `measure_all()`) or `c` to avoid issues with SamplerV2;
a complete sketch follows the two snippets below.

.. code:: ipython3

    # for measure_all():
    dist = result[0].data.meas.get_counts()

.. code:: ipython3

    # for a classical register named "c":
    dist = result[0].data.c.get_counts()

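As an end-to-end illustration (a minimal sketch using the reference `StatevectorSampler` as a stand-in for
any V2 sampler; the circuits and register names are chosen for the example), the attribute on
`result[0].data` simply mirrors the name of the classical register holding the measurements:

.. code:: ipython3

    from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister
    from qiskit.primitives import StatevectorSampler as Sampler

    sampler = Sampler()

    # measure_all() adds a classical register named "meas"
    qc = QuantumCircuit(1)
    qc.h(0)
    qc.measure_all()
    counts = sampler.run([qc]).result()[0].data.meas.get_counts()

    # an explicit classical register named "c" is exposed as .data.c
    qc2 = QuantumCircuit(QuantumRegister(1), ClassicalRegister(1, "c"))
    qc2.h(0)
    qc2.measure(0, 0)
    counts = sampler.run([qc2]).result()[0].data.c.get_counts()
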
- 🔪 Adapting observables for transpiled circuits:

.. code:: ipython3

    # Wrong:
    ...
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    isa_qc = pass_manager.run(qc)
    observables = SparsePauliOp.from_list(...)
    estimator_qnn = EstimatorQNN(
        circuit=isa_qc,
        observables=[observables],
    ...


    # Correct:
    ...
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    isa_qc = pass_manager.run(qc)
    observables = SparsePauliOp.from_list(...)
    isa_observables = observables.apply_layout(isa_qc.layout)
    estimator_qnn = EstimatorQNN(
        circuit=isa_qc,
        observables=[isa_observables],
    ...


- 🔪 Passing gradients without a pass manager:

Some gradient algorithms may require the creation of new circuits, and primitives from
`qiskit-ibm-runtime` require transpilation, so make sure a pass manager is also provided to the gradient.

.. code:: ipython3

    # Wrong:
    ...
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    gradient = ParamShiftEstimatorGradient(estimator=estimator)
    ...

    # Correct:
    ...
    pass_manager = generate_preset_pass_manager(optimization_level=0, backend=backend)
    gradient = ParamShiftEstimatorGradient(
        estimator=estimator, pass_manager=pass_manager
    )
    ...

- 🔪 Don't forget to migrate: when using V2 primitives, import the migrated functionality from `qiskit_machine_learning` rather than from `qiskit_algorithms`.
- 🔪 Some gradients, such as the SPSA and LCU gradients in `qiskit_machine_learning.gradients`, can be very sensitive to noise, so treat the computed gradient values with caution; see the sketch below for one way to reduce their variance.
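
For example, assuming `SPSAEstimatorGradient` keeps the `epsilon`, `batch_size` and `seed` arguments it had in
`qiskit_algorithms` (an assumption to verify against your installed version), averaging more SPSA resamplings
and fixing the seed can reduce the variance of the estimated gradients:

.. code:: ipython3

    from qiskit.primitives import StatevectorEstimator as Estimator
    from qiskit_machine_learning.gradients import SPSAEstimatorGradient

    estimator = Estimator()
    spsa_gradient = SPSAEstimatorGradient(
        estimator=estimator,
        epsilon=0.01,   # size of the random perturbation (assumed argument)
        batch_size=10,  # number of resamplings averaged per gradient call (assumed argument)
        seed=42,        # fixes the perturbation directions for reproducibility (assumed argument)
    )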