The original version is here. This code can be run on IBM Quantum Lab through its web interface, but it can also be run on a local computer with access to Qiskit Runtime.
Below is an example of setting up the Qiskit Runtime service:
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService(
    channel='ibm_quantum',
    instance='ibm-q-utokyo/internal/cs-slecture8',
)
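If you run the code under your own account, the channel and instance values above will differ. Continuing from the snippet above, you can optionally check which devices your instance can access:

# optional sanity check: list the backends visible to this instance
print([backend.name for backend in service.backends()])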
You can change the experimental settings with the following code in main.ipynb:
Q = 2 # number of qubits per quantum circuit
L = 2 # number of Fraxis layers, i.e., encoding and ansatz combined
W = 100 # number of quantum nodes. Thus, Q*W=200 is the total number of qubits
N = 63 # number of training instances
M = 63 # number of testing instances
U = 5 # number of sweep updates
atomNo = 45 # number of amino acids, used to read the data for training and testing
backend = service.backend('ibm_washington') # the name of the device
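For intuition, these settings describe W independent Q-qubit circuits placed side by side on a single device, which is why the total qubit count is Q*W. The following is only a rough sketch of that layout (the gates are placeholders and the qubit assignment is an assumption; the actual Fraxis circuits are built in the notebooks):

from qiskit import QuantumCircuit

Q, W = 2, 100
wide = QuantumCircuit(Q * W)  # one wide circuit holding all quantum nodes
for w in range(W):
    first = w * Q              # first qubit of the w-th quantum node (assumed layout)
    wide.h(first)              # placeholder gate
    wide.cx(first, first + 1)  # placeholder entangler within the node
print(wide.num_qubits)  # 200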
The main Python notebook files are the following (they are basically the same):
main_seattle_molecules.ipynb # this is to run on simulator_mps
main_seattle_molecules-Copy1.ipynb # this is to run on ibm_seattle
You may want to look at the data_loader() function defined in training_molecules.py to understand how instances are provided for training and testing. Basically, you have to place the data files under the directory data/IBM.
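The exact behaviour of data_loader() is defined in training_molecules.py; the sketch below is only a hypothetical illustration of such a loader (the file names, arguments, and return format are assumptions, not the repository's actual interface):

import os
import numpy as np

def data_loader(atom_no, n_train, n_test, data_dir="data/IBM"):
    # hypothetical: read pre-computed feature/label arrays placed under data/IBM
    X = np.load(os.path.join(data_dir, f"features_{atom_no}.npy"))
    y = np.load(os.path.join(data_dir, f"labels_{atom_no}.npy"))
    train = (X[:n_train], y[:n_train])
    test = (X[n_train:n_train + n_test], y[n_train:n_train + n_test])
    return train, test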
When using a CSV file, execute the script main_tomihari.py.
The following parameters can be adjusted in your quantum computation (a usage sketch follows the list):

update: specifies the update mode. Set as update = "inorder".
trainrate: specifies the training rate. Set as trainrate = 1.0.
preprocessing: preprocessing method for the data, if any. Set as preprocessing = None.
label: label column of the data. In this example, it is set as label = "Survived".
feat: features to be used from the data. In this example, feat = ["Age", "Fare"].
CSVpath: path to the CSV file with the data. Set as CSVpath = "data/titanic/train.csv".
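As a minimal sketch of how these settings map onto the CSV data (not the repository's actual loading code; treating preprocessing as an optional callable is an assumption), the label and features are simply columns of the file:

import pandas as pd

CSVpath = "data/titanic/train.csv"
label = "Survived"
feat = ["Age", "Fare"]
preprocessing = None

df = pd.read_csv(CSVpath)
X = df[feat].to_numpy()   # feature matrix with the "Age" and "Fare" columns
y = df[label].to_numpy()  # binary "Survived" labels
if preprocessing is not None:
    X = preprocessing(X)  # apply the optional preprocessing step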
The quantum computation backend is set with backend = service.backend("simulator_mps").
Parameters can be initialized using random values:
import numpy as np

params = np.random.rand(L, Q, 3)
params /= np.linalg.norm(params, axis=2, keepdims=True)
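The shape (L, Q, 3) gives one 3-vector per layer and qubit, and the normalization along the last axis turns each of them into a unit vector, which the Fraxis ansatz uses as the rotation axis of the corresponding single-qubit gate. A quick check, continuing from the snippet above:

# every (layer, qubit) entry should now have unit length
assert np.allclose(np.linalg.norm(params, axis=2), 1.0)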
For additional details on configuration and usage, refer to the parallel_train() function in the training_tomihari.py file.