diff --git a/tutorials/Tutorial 0 - Getting Started.ipynb b/tutorials/Tutorial 0 - Getting Started.ipynb
index 34871f22..b725ba17 100644
--- a/tutorials/Tutorial 0 - Getting Started.ipynb
+++ b/tutorials/Tutorial 0 - Getting Started.ipynb
@@ -2,6 +2,7 @@
"cells": [
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"# Tutorial 0: Getting Started\n",
"\n",
@@ -14,15 +15,16 @@
"\n",
"Authors:\n",
"- Ayoub Benaissa - Twitter: [@y0uben11](https://twitter.com/y0uben11)"
- ],
- "metadata": {}
+ ]
},
{
+ "attachments": {},
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Homomorphic Encryption\n",
"\n",
- "__Definition__ : Homomorphic encription (HE) is an encryption technique that allows computations to be made on ciphertexts and generates results that when decrypted, correspond to the results of the same computations made on plaintexts.\n",
+ "__Definition__ : Homomorphic encryption (HE) is an encryption technique that allows computations to be made on ciphertexts and generates results that when decrypted, correspond to the results of the same computations made on plaintexts.\n",
"\n",
"\n",
"\n",
@@ -43,73 +45,60 @@
"\n",
"```\n",
"\n",
- "Many details are hidden in this Python script, things like key generation doesn't appear, and that `+` operation over encrypted numbers isn't the usual `+` over integers, but a special evaluation algorithm that can evaluate addition over encrypted numbers. TenSEAL supports addition, substraction and multiplication of encrypted vectors of either integers (using BFV) or real numbers (using CKKS).\n",
+ "Many details are hidden in this Python script, things like key generation doesn't appear, and that `+` operation over encrypted numbers isn't the usual `+` over integers, but a special evaluation algorithm that can evaluate addition over encrypted numbers. TenSEAL supports addition, subtraction and multiplication of encrypted vectors of either integers (using BFV) or real numbers (using CKKS).\n",
"\n",
"Next we will look at the most important object of the library, the TenSEALContext."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## TenSEALContext\n",
"\n",
"The TenSEALContext is a special object that holds different encryption keys and parameters for you, so that you only need to use a single object to make your encrypted computation instead of managing all the keys and the HE details. Basically, you will want to create a single TenSEALContext before doing your encrypted computation. Let's see how to create one !"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 1,
- "source": [
- "import tenseal as ts\n",
- "\n",
- "context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)\n",
- "context"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "execute_result",
"data": {
"text/plain": [
"<_tenseal_cpp.TenSEALContext at 0x7fcb980c71f0>"
]
},
+ "execution_count": 1,
"metadata": {},
- "execution_count": 1
+ "output_type": "execute_result"
}
],
- "metadata": {}
+ "source": [
+ "import tenseal as ts\n",
+ "\n",
+ "context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)\n",
+ "context"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"That's it ! We need to specify the HE scheme (BFV here) that we want to use, as well as its parameters. Don't worry about the parameters now, you will learn more about them in upcoming tutorials.\n",
"\n",
"An important thing to note is that the TenSEALContext is now holding the secret key and you can decrypt without the need to provide it, however, you can choose to manage it as a separate object and you will need to pass it to functions that require the secret key. Let's see how this translates into Python!"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 2,
- "source": [
- "public_context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)\n",
- "print(\"Is the context private?\", (\"Yes\" if public_context.is_private() else \"No\"))\n",
- "print(\"Is the context public?\", (\"Yes\" if public_context.is_public() else \"No\"))\n",
- "\n",
- "sk = public_context.secret_key()\n",
- "\n",
- "# the context will drop the secret-key at this point\n",
- "public_context.make_context_public()\n",
- "print(\"Secret-key dropped\")\n",
- "print(\"Is the context private?\", (\"Yes\" if public_context.is_private() else \"No\"))\n",
- "print(\"Is the context public?\", (\"Yes\" if public_context.is_public() else \"No\"))"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Is the context private? Yes\n",
"Is the context public? No\n",
@@ -119,184 +108,208 @@
]
}
],
- "metadata": {}
+ "source": [
+ "public_context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)\n",
+ "print(\"Is the context private?\", (\"Yes\" if public_context.is_private() else \"No\"))\n",
+ "print(\"Is the context public?\", (\"Yes\" if public_context.is_public() else \"No\"))\n",
+ "\n",
+ "sk = public_context.secret_key()\n",
+ "\n",
+ "# the context will drop the secret-key at this point\n",
+ "public_context.make_context_public()\n",
+ "print(\"Secret-key dropped\")\n",
+ "print(\"Is the context private?\", (\"Yes\" if public_context.is_private() else \"No\"))\n",
+ "print(\"Is the context public?\", (\"Yes\" if public_context.is_public() else \"No\"))"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"You can now try to fetch the secret key from the `public_context` and see that it raises an error. We will now continue using our first created TenSEALContext `context` which is still holding the secret key."
- ],
- "metadata": {}
+ ]
},
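+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative cell (not part of the original notebook): a minimal sketch of the\n",
+ "# check described above -- fetching the secret key from a context that has been\n",
+ "# made public is expected to raise an error.\n",
+ "try:\n",
+ "    public_context.secret_key()\n",
+ "except Exception as e:\n",
+ "    print(\"Fetching the secret key failed as expected:\", e)"
+ ]
+ },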
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Encryption and Evaluation"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"The next step after creating our TenSEALContext is to start doing some encrypted computation. First, we create an encrypted vector of integers."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 3,
- "source": [
- "plain_vector = [60, 66, 73, 81, 90]\n",
- "encrypted_vector = ts.bfv_vector(context, plain_vector)\n",
- "print(\"We just encrypted our plaintext vector of size:\", encrypted_vector.size())\n",
- "encrypted_vector"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"We just encrypted our plaintext vector of size: 5\n"
]
},
{
- "output_type": "execute_result",
"data": {
"text/plain": [
"<_tenseal_cpp.BFVVector at 0x7fcb980bc330>"
]
},
+ "execution_count": 3,
"metadata": {},
- "execution_count": 3
+ "output_type": "execute_result"
}
],
- "metadata": {}
+ "source": [
+ "plain_vector = [60, 66, 73, 81, 90]\n",
+ "encrypted_vector = ts.bfv_vector(context, plain_vector)\n",
+ "print(\"We just encrypted our plaintext vector of size:\", encrypted_vector.size())\n",
+ "encrypted_vector"
+ ]
},
{
+ "attachments": {},
"cell_type": "markdown",
+ "metadata": {},
"source": [
- "Here we encrypted a vector of integers into a BFVVector, a vector type that uses the BFV scheme. Now we can do both addition, substraction and multiplication in an element-wise fashion with other encrypted or plain vectors."
- ],
- "metadata": {}
+ "Here we encrypted a vector of integers into a BFVVector, a vector type that uses the BFV scheme. Now we can do both addition, subtraction and multiplication in an element-wise fashion with other encrypted or plain vectors."
+ ]
},
{
"cell_type": "code",
"execution_count": 4,
- "source": [
- "add_result = encrypted_vector + [1, 2, 3, 4, 5]\n",
- "print(add_result.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[61, 68, 76, 85, 95]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "add_result = encrypted_vector + [1, 2, 3, 4, 5]\n",
+ "print(add_result.decrypt())"
+ ]
},
{
"cell_type": "code",
"execution_count": 5,
- "source": [
- "sub_result = encrypted_vector - [1, 2, 3, 4, 5]\n",
- "print(sub_result.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[59, 64, 70, 77, 85]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "sub_result = encrypted_vector - [1, 2, 3, 4, 5]\n",
+ "print(sub_result.decrypt())"
+ ]
},
{
"cell_type": "code",
"execution_count": 6,
- "source": [
- "mul_result = encrypted_vector * [1, 2, 3, 4, 5]\n",
- "print(mul_result.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[60, 132, 219, 324, 450]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "mul_result = encrypted_vector * [1, 2, 3, 4, 5]\n",
+ "print(mul_result.decrypt())"
+ ]
},
{
"cell_type": "code",
"execution_count": 7,
- "source": [
- "encrypted_add = add_result + sub_result\n",
- "print(encrypted_add.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[120, 132, 146, 162, 180]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "encrypted_add = add_result + sub_result\n",
+ "print(encrypted_add.decrypt())"
+ ]
},
{
"cell_type": "code",
"execution_count": 8,
- "source": [
- "encrypted_sub = encrypted_add - encrypted_vector\n",
- "print(encrypted_sub.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[60, 66, 73, 81, 90]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "encrypted_sub = encrypted_add - encrypted_vector\n",
+ "print(encrypted_sub.decrypt())"
+ ]
},
{
"cell_type": "code",
"execution_count": 9,
- "source": [
- "encrypted_mul = encrypted_add * encrypted_sub\n",
- "print(encrypted_mul.decrypt())"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"[7200, 8712, 10658, 13122, 16200]\n"
]
}
],
- "metadata": {}
+ "source": [
+ "encrypted_mul = encrypted_add * encrypted_sub\n",
+ "print(encrypted_mul.decrypt())"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"We just made both ciphertext to plaintext (c2p) and ciphertext to ciphertext (c2c) evaluations (add, sub and mul). An important thing to note is that you should never encrypt your plaintext values to evaluate them with ciphertexts if they don't need to be kept private. That's because c2p evaluations are more efficient than c2c. Look at the below script to see how much faster a c2p multiplication is compared to a c2c one."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "c2c multiply time: 18.739938735961914 ms\n",
+ "c2p multiply time: 1.5423297882080078 ms\n"
+ ]
+ }
+ ],
"source": [
"from time import time\n",
"\n",
@@ -309,40 +322,26 @@
"_ = encrypted_add * [1, 2, 3, 4, 5]\n",
"t_end = time()\n",
"print(\"c2p multiply time: {} ms\".format((t_end - t_start) * 1000))"
- ],
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "c2c multiply time: 18.739938735961914 ms\n",
- "c2p multiply time: 1.5423297882080078 ms\n"
- ]
- }
- ],
- "metadata": {}
+ ]
},
{
+ "attachments": {},
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## More about TenSEALContext\n",
"\n",
- "TenSEALContext is holding more attributes than what we have seen so far, so it's worth mentioning some other interesting ones. The coolest attributes (at least to me) are the ones for setting automatic relinearization, rescaling (for CKKS only) and modulus switching. These features are enabled by defaut as you can see below:"
- ],
- "metadata": {}
+ "TenSEALContext is holding more attributes than what we have seen so far, so it's worth mentioning some other interesting ones. The coolest attributes (at least to me) are the ones for setting automatic relinearization, rescaling (for CKKS only) and modulus switching. These features are enabled by default as you can see below:"
+ ]
},
{
"cell_type": "code",
"execution_count": 11,
- "source": [
- "print(\"Automatic relinearization is:\", (\"on\" if context.auto_relin else \"off\"))\n",
- "print(\"Automatic rescaling is:\", (\"on\" if context.auto_rescale else \"off\"))\n",
- "print(\"Automatic modulus switching is:\", (\"on\" if context.auto_mod_switch else \"off\"))"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Automatic relinearization is: on\n",
"Automatic rescaling is: on\n",
@@ -350,20 +349,35 @@
]
}
],
- "metadata": {}
+ "source": [
+ "print(\"Automatic relinearization is:\", (\"on\" if context.auto_relin else \"off\"))\n",
+ "print(\"Automatic rescaling is:\", (\"on\" if context.auto_rescale else \"off\"))\n",
+ "print(\"Automatic modulus switching is:\", (\"on\" if context.auto_mod_switch else \"off\"))"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"Experienced users can choose to disable one or more of these features and manage for themselves when and how to do these operations.\n",
"\n",
"TenSEALContext can also hold a `global_scale` (only used when using CKKS), which is used as a default scale value when the user doesn't provide one. As most often users will define a single value to be used as scale during the entire HE computation, defining it globally can be more straight forward compared to passing it to every function call."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 12,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The global_scale isn't defined yet\n",
+ "global_scale: 1048576.0\n"
+ ]
+ }
+ ],
"source": [
"# this should throw an error as the global_scale isn't defined yet\n",
"try:\n",
@@ -374,21 +388,11 @@
"# you can define it to 2 ** 20 for instance\n",
"context.global_scale = 2 ** 20\n",
"print(\"global_scale:\", context.global_scale)"
- ],
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "The global_scale isn't defined yet\n",
- "global_scale: 1048576.0\n"
- ]
- }
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"# Congratulations!!! - Time to Join the Community!\n",
"\n",
@@ -409,8 +413,7 @@
"If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go towards our web hosting and other community expenses such as hackathons and meetups!\n",
"\n",
"[OpenMined's Open Collective Page](https://opencollective.com/openmined)\n"
- ],
- "metadata": {}
+ ]
}
],
"metadata": {
diff --git a/tutorials/Tutorial 1 - Training and Evaluation of Logistic Regression on Encrypted Data.ipynb b/tutorials/Tutorial 1 - Training and Evaluation of Logistic Regression on Encrypted Data.ipynb
index 2546dd7e..2743a20a 100644
--- a/tutorials/Tutorial 1 - Training and Evaluation of Logistic Regression on Encrypted Data.ipynb
+++ b/tutorials/Tutorial 1 - Training and Evaluation of Logistic Regression on Encrypted Data.ipynb
@@ -2,6 +2,7 @@
"cells": [
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"# Tutorial 1: Training and Evaluation of Logistic Regression on Encrypted Data\n",
"\n",
@@ -13,21 +14,22 @@
"\n",
"Authors:\n",
"- Ayoub Benaissa - Twitter: [@y0uben11](https://twitter.com/y0uben11)"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Setup\n",
"\n",
"All modules are imported here. Make sure everything is installed by running the cell below:"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 1,
+ "metadata": {},
+ "outputs": [],
"source": [
"import torch\n",
"import tenseal as ts\n",
@@ -38,22 +40,35 @@
"# those are optional and are not necessary for training\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"We now prepare the training and test data. The dataset was downloaded from Kaggle [here](https://www.kaggle.com/dileep070/heart-disease-prediction-using-logistic-regression). This dataset includes patients' information along with a 10-year risk of future coronary heart disease (CHD) as a label. The goal is to build a model that can predict this 10-year CHD risk based on patients' information. You can read more about the dataset in the link provided. \n",
"\n",
"Alternatively, we also provide the `random_data()` function below that generates random, linearly separable points. You can use it instead of the dataset from Kaggle, for those who just want to see how things work. The rest of the tutorial should work in the same way."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "############# Data summary #############\n",
+ "x_train has shape: torch.Size([780, 9])\n",
+ "y_train has shape: torch.Size([780, 1])\n",
+ "x_test has shape: torch.Size([334, 9])\n",
+ "y_test has shape: torch.Size([334, 1])\n",
+ "#######################################\n"
+ ]
+ }
+ ],
"source": [
"torch.random.manual_seed(73)\n",
"random.seed(73)\n",
@@ -105,35 +120,22 @@
"print(f\"x_test has shape: {x_test.shape}\")\n",
"print(f\"y_test has shape: {y_test.shape}\")\n",
"print(\"#######################################\")"
- ],
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "############# Data summary #############\n",
- "x_train has shape: torch.Size([780, 9])\n",
- "y_train has shape: torch.Size([780, 1])\n",
- "x_test has shape: torch.Size([334, 9])\n",
- "y_test has shape: torch.Size([334, 1])\n",
- "#######################################\n"
- ]
- }
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Training a Logistic Regression Model\n",
"\n",
"We will start by training a logistic regression model (without any encryption), which can be viewed as a single layer neural network with a single node. We will be using this model as a means of comparison against encrypted training and evaluation."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 3,
+ "metadata": {},
+ "outputs": [],
"source": [
"class LR(torch.nn.Module):\n",
"\n",
@@ -144,13 +146,13 @@
" def forward(self, x):\n",
" out = torch.sigmoid(self.lr(x))\n",
" return out"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 4,
+ "metadata": {},
+ "outputs": [],
"source": [
"n_features = x_train.shape[1]\n",
"model = LR(n_features)\n",
@@ -158,13 +160,25 @@
"optim = torch.optim.SGD(model.parameters(), lr=1)\n",
"# use Binary Cross Entropy Loss\n",
"criterion = torch.nn.BCELoss()"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loss at epoch 1: 0.8504332900047302\n",
+ "Loss at epoch 2: 0.6863385438919067\n",
+ "Loss at epoch 3: 0.635811448097229\n",
+ "Loss at epoch 4: 0.6193529367446899\n",
+ "Loss at epoch 5: 0.6124349236488342\n"
+ ]
+ }
+ ],
"source": [
"# define the number of epochs for both plain and encrypted training\n",
"EPOCHS = 5\n",
@@ -180,25 +194,21 @@
" return model\n",
"\n",
"model = train(model, optim, criterion, x_train, y_train)"
- ],
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
- "Loss at epoch 1: 0.8504332900047302\n",
- "Loss at epoch 2: 0.6863385438919067\n",
- "Loss at epoch 3: 0.635811448097229\n",
- "Loss at epoch 4: 0.6193529367446899\n",
- "Loss at epoch 5: 0.6124349236488342\n"
+ "Accuracy on plain test_set: 0.703592836856842\n"
]
}
],
- "metadata": {}
- },
- {
- "cell_type": "code",
- "execution_count": 6,
"source": [
"def accuracy(model, x, y):\n",
" out = model(x)\n",
@@ -207,37 +217,29 @@
"\n",
"plain_accuracy = accuracy(model, x_test, y_test)\n",
"print(f\"Accuracy on plain test_set: {plain_accuracy}\")"
- ],
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Accuracy on plain test_set: 0.703592836856842\n"
- ]
- }
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"It is worth to remember that a high accuracy isn't our goal. We just want to see that training on encrypted data doesn't affect the final result, so we will be comparing accuracies over encrypted data against the `plain_accuracy` we got here."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Encrypted Evaluation\n",
"\n",
"In this part, we will just focus on evaluating the logistic regression model with plain parameters (optionally encrypted parameters) on the encrypted test set. We first create a PyTorch-like LR model that can evaluate encrypted data:"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 7,
+ "metadata": {},
+ "outputs": [],
"source": [
"class EncryptedLR:\n",
" \n",
@@ -272,20 +274,20 @@
" \n",
"\n",
"eelr = EncryptedLR(model)"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"We now create a TenSEALContext for specifying the scheme and the parameters we are going to use. Here we choose small and secure parameters that allow us to make a single multiplication. That's enough for evaluating a logistic regression model, however, we will see that we need larger parameters when doing training on encrypted data."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 8,
+ "metadata": {},
+ "outputs": [],
"source": [
"# parameters\n",
"poly_mod_degree = 4096\n",
@@ -296,57 +298,67 @@
"ctx_eval.global_scale = 2 ** 20\n",
"# this key is needed for doing dot-product operations\n",
"ctx_eval.generate_galois_keys()"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"We will encrypt the whole test set before the evaluation:"
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 9,
- "source": [
- "t_start = time()\n",
- "enc_x_test = [ts.ckks_vector(ctx_eval, x.tolist()) for x in x_test]\n",
- "t_end = time()\n",
- "print(f\"Encryption of the test-set took {int(t_end - t_start)} seconds\")"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Encryption of the test-set took 1 seconds\n"
]
}
],
- "metadata": {}
+ "source": [
+ "t_start = time()\n",
+ "enc_x_test = [ts.ckks_vector(ctx_eval, x.tolist()) for x in x_test]\n",
+ "t_end = time()\n",
+ "print(f\"Encryption of the test-set took {int(t_end - t_start)} seconds\")"
+ ]
},
{
"cell_type": "code",
"execution_count": 10,
+ "metadata": {},
+ "outputs": [],
"source": [
"# (optional) encrypt the model's parameters\n",
"# eelr.encrypt(ctx_eval)"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"As you may have already noticed when we built the EncryptedLR class, we don't compute the sigmoid function on the encrypted output of the linear layer, simply because it's not needed, and computing sigmoid over encrypted data will increase the computation time and require larger encryption parameters. However, we will use sigmoid for the encrypted training part. We now proceed with the evaluation of the encrypted test set and compare the accuracy to the one on the plain test set."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Evaluated test_set of 334 entries in 1 seconds\n",
+ "Accuracy: 225/334 = 0.6736526946107785\n",
+ "Difference between plain and encrypted accuracies: 0.029940128326416016\n"
+ ]
+ }
+ ],
"source": [
"def encrypted_evaluation(model, enc_x_test, y_test):\n",
" t_start = time()\n",
@@ -355,7 +367,7 @@
" for enc_x, y in zip(enc_x_test, y_test):\n",
" # encrypted evaluation\n",
" enc_out = model(enc_x)\n",
- " # plain comparaison\n",
+ " # plain comparison\n",
" out = enc_out.decrypt()\n",
" out = torch.tensor(out)\n",
" out = torch.sigmoid(out)\n",
@@ -373,29 +385,19 @@
"print(f\"Difference between plain and encrypted accuracies: {diff_accuracy}\")\n",
"if diff_accuracy < 0:\n",
" print(\"Oh! We got a better accuracy on the encrypted test-set! The noise was on our side...\")"
- ],
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Evaluated test_set of 334 entries in 1 seconds\n",
- "Accuracy: 225/334 = 0.6736526946107785\n",
- "Difference between plain and encrypted accuracies: 0.029940128326416016\n"
- ]
- }
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"We saw that evaluating on the encrypted test set doesn't affect the accuracy that much. I've even seen examples where the encrypted evaluation performs better."
- ],
- "metadata": {}
+ ]
},
{
+ "attachments": {},
"cell_type": "markdown",
+ "metadata": {},
"source": [
"## Training an Encrypted Logistic Regression Model on Encrypted Data\n",
"\n",
@@ -425,13 +427,14 @@
"\n",
"#### Homomorphic Encryption Parameters\n",
"\n",
- "From the input data to the parameter update, a ciphertext will need a multiplicative depth of 6, 1 for the dot product operation, 2 for the sigmoid approximation, and 3 for the backprobagation phase (one is actually hidden in the `self._delta_w += enc_x * out_minus_y` operation in the `backward()` function, which is multiplying a 1-sized vector with an n-sized one, which requires masking the first slot and replicating it n times in the first vector). With a scale of around 20 bits, we need 6 coefficients modulus with the same bit-size as the scale, plus the last coeffcient, which needs more bits, we are already out of the 4096 polynomial modulus degree (which requires < 109 total bit count of the coefficients modulus, if we consider 128-bit security), so we will use 8192. This will allow us to batch up to 4096 values in a single ciphertext, but we are far away from this limitation, so we shouldn't even think about it.\n"
- ],
- "metadata": {}
+ "From the input data to the parameter update, a ciphertext will need a multiplicative depth of 6, 1 for the dot product operation, 2 for the sigmoid approximation, and 3 for the backpropagation phase (one is actually hidden in the `self._delta_w += enc_x * out_minus_y` operation in the `backward()` function, which is multiplying a 1-sized vector with an n-sized one, which requires masking the first slot and replicating it n times in the first vector). With a scale of around 20 bits, we need 6 coefficients modulus with the same bit-size as the scale, plus the last coefficient, which needs more bits, we are already out of the 4096 polynomial modulus degree (which requires < 109 total bit count of the coefficients modulus, if we consider 128-bit security), so we will use 8192. This will allow us to batch up to 4096 values in a single ciphertext, but we are far away from this limitation, so we shouldn't even think about it.\n"
+ ]
},
{
"cell_type": "code",
"execution_count": 12,
+ "metadata": {},
+ "outputs": [],
"source": [
"class EncryptedLR:\n",
" \n",
@@ -494,13 +497,13 @@
" \n",
" def __call__(self, *args, **kwargs):\n",
" return self.forward(*args, **kwargs)\n"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 13,
+ "metadata": {},
+ "outputs": [],
"source": [
"# parameters\n",
"poly_mod_degree = 8192\n",
@@ -509,45 +512,84 @@
"ctx_training = ts.context(ts.SCHEME_TYPE.CKKS, poly_mod_degree, -1, coeff_mod_bit_sizes)\n",
"ctx_training.global_scale = 2 ** 21\n",
"ctx_training.generate_galois_keys()"
- ],
- "outputs": [],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 14,
- "source": [
- "t_start = time()\n",
- "enc_x_train = [ts.ckks_vector(ctx_training, x.tolist()) for x in x_train]\n",
- "enc_y_train = [ts.ckks_vector(ctx_training, y.tolist()) for y in y_train]\n",
- "t_end = time()\n",
- "print(f\"Encryption of the training_set took {int(t_end - t_start)} seconds\")"
- ],
+ "metadata": {},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Encryption of the training_set took 26 seconds\n"
]
}
],
- "metadata": {}
+ "source": [
+ "t_start = time()\n",
+ "enc_x_train = [ts.ckks_vector(ctx_training, x.tolist()) for x in x_train]\n",
+ "enc_y_train = [ts.ckks_vector(ctx_training, y.tolist()) for y in y_train]\n",
+ "t_end = time()\n",
+ "print(f\"Encryption of the training_set took {int(t_end - t_start)} seconds\")"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {},
"source": [
"Below we study the distribution of `x.dot(weight) + bias` in both plain and encrypted domains. Making sure that it falls into the range $[-5,5]$, which is where our sigmoid approximation is good at, and we don't want to feed it data that is out of this range so that we don't get erroneous output, which can make our training behave unpredictably. But the weights will change during the training process, and we should try to keep them as small as possible while still learning. A technique often used with logistic regression, and we do exactly this (but serving another purpose which is *generalization*), is known as *regularization*, and you might already have spotted the additional term `self.weight * 0.05` in the `update_parameters()` function, which is the result of doing regularization.\n",
"\n",
"To recap, since our sigmoid approximation is only good in the range $[-5,5]$, we want to have all its inputs in that range. In order to do this, we need to keep our logistic regression parameters as small as possible, so we apply regularization.\n",
"\n",
"**Note:** Keeping the parameters small certainly reduces the magnitude of the output, but we can also get out of range if the data wasn't standardized. You may have spotted that we standardized the data with a mean of 0 and std of 1, this was both for better performance, as well as to keep the inputs to the sigmoid in the desired range."
- ],
- "metadata": {}
+ ]
},
{
"cell_type": "code",
"execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Distribution on plain data:\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD4CAYAAAD8Zh1EAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3daXAc553f8e8fAwxAHCQAAjzEWzJlmbKtC0t7vdbaqbVjSrtr2nt4JW/WTmyXSkmUxC82ZbpccZxV5YVjbzaVtbxc2VHZ2XJZScUX10uv5GzF97GkbOqgJEo0RYngCRAXcQ6Of15MDzgaDoAG0IPpbvw+VSjOdD+Y+bMB/PDg6X6eNndHRESSr6baBYiISDQU6CIiKaFAFxFJCQW6iEhKKNBFRFKitlpv3NHR4Tt37qzW24uIJNITTzzR6+6d5fZVLdB37tzJ0aNHq/X2IiKJZGYvz7VPQy4iIimhQBcRSQkFuohISijQRURSQoEuIpISCnQRkZRQoIuIpIQCXVatZ88N8Tc/Pc345HS1SxGJRNUmFolUU99Ijnu/8DMGxyZ58dIwf7b/9dUuSWTZ1EOXVelbx84yODbJG7eu438fPcPQ+GS1SxJZtlCBbmb7zOyEmZ00swNl9v97MzsWfDxjZtNm1h59uSLR+PZT57n5urV84u7XMT45w09/dbnaJYks24KBbmYZ4CHgLmAPcK+Z7Slu4+6fcfdb3f1W4OPA9929rxIFiyzXxNQ0T3cP8tbXdHDr9lYa6moU6JIKYXroe4GT7n7K3XPAo8D+edrfC3w1iuJEKuHZc0Pkpme4bXsr9bUZ7tjRxtGX1f+Q5AsT6FuAM0XPu4Nt1zCzRmAf8LU59t9nZkfN7GhPT89iaxWJxC9fGQDgtu1tANx83TpeuDjM1PRMNcsSWbYwgW5ltvkcbX8X+PFcwy3u/rC7d7l7V2dn2eV8RSru+QtDdDRn2bi2AYDXbW4hNzXDS70jVa5MZHnCBHo3sK3o+Vbg3Bxt70HDLRJzL/WOcH1H8+zzmzatBeDZ80PVKkkkEmEC/Qiw28x2mVmWfGgfKm1kZuuAtwHfirZEkWid6hnh+s6m2ee7Opowg9O9o1WsSmT5FpxY5O5TZvYA8BiQAR5x9+Nmdn+w/2DQ9L3A4+6uv1sltgZHJ7k8kmNXx9VAb6jLsHltAy9f1reuJFuomaLufhg4XLLtYMnzLwFfiqowkUp4KQjt4kAH2L6+kZf71EOXZNNMUVlVTvUMA3B9Z/Ortu9c36QeuiSeAl1WlVeCXvi29jWv2r5jfRO9wzmGJ6aqUZZIJBTosqqcHxino7me+trMq7Zvb28E4IyGXSTBFOiyqpwfGue61oZrtm9al992YXB8pUsSiYwCXVaV8wNjbF53baAXQv68Al0STIEuq8r5wXE2r1tzzfbO5npqDM4PjlWhKpFoKNBl1Rgan2R4YqrskEttpoYNLQ3qoUuiKdBl1Tg/kA/rTWV66ACbWxs0hi6JpkCXVaMwnHJdmTF0gM3rGjTkIommQJdVozCcsrm1fA9909o1nB8cx32uxURF4k2BLqvG+cFxzGBjS33Z/ZvXNTCam2ZoXJOLJJkU6LJq9FyZYH1TltpM+W/7ziDoe4cnVrIskcgo0GXV6B2eoKO5fO8cmN3Xe0WBLsmkQJdVY8FAb8kG7XIrVZJIpBTosmrkAz075/7OIOx7rujSRUkmBbqsGr1XcvP20Nsas2RqTD10SSwFuqwKIxNTjE1O0zHHFS4ANTVGe1NWJ0UlsRTosioUQnp909xDLpA/MapAl6RSoMuqUAjp+XrokL90sUdXuUhChQp0M9tnZifM7KSZHZijzdvN7JiZHTez70dbpsjy9FzJj4t3zjOGDtDRnNUYuiTWgjeJNrMM8BDwTqAbOGJmh9z92aI2rcDngX3u/oqZbahUwSJLMdtDXyDQO5vr6RmewN0xs5UoTSQyYXroe4GT7n7K3XPAo8D+kjbvB77u7q8AuPulaMsUWZ7ZMfR5LluEfODnpma4onuLSgKFCfQtwJmi593BtmI3Am1m9j0ze8LMPlDuhczsPjM7amZHe3p6llaxyBL0Dk/Q2lhH3RzT/gtmp/9rHF0SKEygl/u7s3Q5ulrgDuC3gXcB/8HMbrzmk9wfdvcud+/q7OxcdLEiS7XQNegFHbOTixTokjwLjqGT75FvK3q+FThXpk2vu48AI2b2A+AW4IVIqhRZpssj888SLSgMyfSP6sSoJE+YHvoRYLeZ7TKzLHAPcKikzbeAO82s1swagTcBz0VbqsjS9Q6H66G3NeYDvW9kstIliURuwR66u0+Z2QPAY0AGeMTdj5vZ/cH+g+7+nJn9PfAUMAN80d2fqWThIotxeXhiwUlFAK2NdYB66JJMYYZccPfDwOGSbQdLnn8G+Ex0pYlEY2p6hqHxKVobFw70hroMTdkMfSMKdEkezRSV1BsYyw+ftIfooQO0NWXpV6BLAinQJfUGguGTwnDKQtqbsvRpyEUSSIEuqdc/mu+ht4UYcim0Uw9dkkiBLqlXGA8PO+TS3pSd/SUgkiQKdEm9xQ65tDbWqYcuiaRAl9Rb7JBLe2OWKxNT5KZmKlmWSOQU6JJ6/SM5srU1NGYzodq3BUMzAzoxKgmjQJfU6x/N0dZYF3o53MJYu650kaRRoEvq9Y9Ohh5ugeLp/wp0SRYFuqRe/0huUYFe6KH3az0XSRgFuqRe/2iOtqZwV7gAtAVXw2jIRZJGgS6pN7DIIZfCmi+6dFGSRoEuqTYz48FJ0fCBnq2toaW+VisuSuIo0CXVroxPMePhJxUVaIEuSSIFuqRaoZcddtp/QVtTlj5N/5eEUaBLqhVObC5myCXfvo6+Ed1XVJJFgS6ptth1XAryKy6qhy7JokCXVCvcG3TRQy6NWU39l8QJFehmts/MTpjZSTM7UGb/281s0MyOBR+fjL5UkcW72kNf/JDLSG5aC3RJoix4T1EzywAPAe8EuoEjZnbI3Z8tafpDd/+dCtQosmT9ozkyNcbahlC3z53VWrRA14a1DZUoTSRyYXroe4GT7n7K3XPAo8D+ypYlEo2+kclFLcxVUJgtqhtdSJKECfQtwJmi593BtlK/bmZPmtl3zOzmci9kZveZ2VEzO9rT07OEckUWZ2A0t+jhFrh6VYwmF0mShAn0cl0bL3n+C2CHu98C/CXwzXIv5O4Pu3uXu3d1dnYurlKRJegfzdG+hEAvXBWjE6OSJGECvRvYVvR8K3CuuIG7D7n7cPD4MFBnZh2RVSmyRP0jk4u+ZBGKe+gacpHkCBPoR4DdZrbLzLLAPcCh4gZmtsmCQUoz2xu87uWoixVZrIGx3DIDXT10SY4FT/27+5SZPQA8BmSAR9z9uJndH+w/CPwB8C/NbAoYA+5x99JhGZEV5e6LvrlFwZpshvraGgbUQ5cECXUtVzCMcrhk28Gix58DPhdtaSLLMz45Q25qZkknRaEwW1Q9dEkOzRSV1BoYW9q0/4K2pqyGXCRRFOiSWoX
hktY1Swz0xjqdFJVEUaBLahV61+uW2kNvVA9dkkWBLqk1GPSul3JSFPJDNTopKkmiQJfUGhgLhlyW0UMfGM0xM6MLtiQZFOiSWlfH0JfeQ5/x/G3sRJJAgS6pNTCaI1tbQ0Pd0r7NNblIkkaBLqk1MLq0lRYL2poKKy4q0CUZFOiSWgNjuSUPt8DVm2LoxKgkhQJdUqt/dHLJlyyChlwkeRToklqDwZDLUrVrxUVJGAW6pNZyh1xaGmqpMbSeiySGAl1Sa2B0aWuhF9TUGK2aLSoJokCXVBrLTTMxNbOsMXTQbFFJFgW6pFJhpcWlTvsv0HoukiQKdEml5a60WKAVFyVJFOiSSoVAX/6QS1Y3ipbEUKBLKhVCeDlXuUChh65Al2RQoEsqFVZaLEzfX6rWxizjkzOMT05HUZZIRYUKdDPbZ2YnzOykmR2Yp92vmdm0mf1BdCWKLN5yV1os0GxRSZIFA93MMsBDwF3AHuBeM9szR7tPA49FXaTIYg2MLW+lxYL2wgJdIzoxKvEX5rt9L3DS3U+5ew54FNhfpt2/Ab4GXIqwPpElGRiZpHXN0ldaLLi6QJd66BJ/YQJ9C3Cm6Hl3sG2WmW0B3gscnO+FzOw+MztqZkd7enoWW6tIaANjuWVfgw5Xh1z6FOiSAGECvVwXp/SeXP8N+Ji7z3vmyN0fdvcud+/q7OwMW6PIog0sc6XFgsLiXroWXZKgNkSbbmBb0fOtwLmSNl3Ao8Gftx3A3WY25e7fjKRKkUUaGJ1kx/rGZb/O7JCLFuiSBAgT6EeA3Wa2CzgL3AO8v7iBu+8qPDazLwHfVphLNQ2M5bilcd2yXydbW0NTNqMeuiTCgoHu7lNm9gD5q1cywCPuftzM7g/2zztuLlIN+dvPLX8MHTRbVJIjTA8ddz8MHC7ZVjbI3f2fL78skaUbn4xmpcWCtibNFpVk0ExRSZ3+iKb9F+RXXNSQi8SfAl1SpzBLdDm3nyumIRdJCgW6pE5UKy0WtGsJXUkIBbqkzuBYtEMurY1ZhsYnmZ4pnX4hEi8KdEmdQm96OfcTLdbWWIc7DI6ply7xpkCX1Lk6hh7RSdGmYPq/JhdJzCnQJXWiWmmxQAt0SVIo0CV1BkejWWmxQOu5SFIo0CV1+kdzkY2fg25yIcmhQJfUGRidnB0miULhl4OGXCTuFOiSOoNj+SGXqDTX11JbYxpykdhToEvqRD3kYmaaLSqJoECXVHF3+kcnZy81jEp7U53uKyqxp0CXVBnNTZObmqE9wjF0yF+6qJOiEncKdEmVwuSfqHvobY11sxOWROJKgS6pUuhFR91Db1MPXRJAgS6pUqkeemHIxV0LdEl8KdAlVWZ76BUYcpmcdkZy05G+rkiUQgW6me0zsxNmdtLMDpTZv9/MnjKzY2Z21MzeGn2pIgvrC65EqcSQC0C/FuiSGFsw0M0sAzwE3AXsAe41sz0lzf4BuMXdbwU+BHwx6kJFwugfyZGpMVoaQt0uN7Srs0V1YlTiK0wPfS9w0t1PuXsOeBTYX9zA3Yf96uBiE6CBRqmKvtEcbY111NREszBXQWFMXidGJc7CBPoW4EzR8+5g26uY2XvN7Hng78j30q9hZvcFQzJHe3p6llKvyLz6R3KRrYNe7OqKiwp0ia8wgV6uq3NND9zdv+HuNwHvAR4s90Lu/rC7d7l7V2dn5+IqFQmhbyQX+RUucHUMXUMuEmdhAr0b2Fb0fCtwbq7G7v4D4AYz61hmbSKL1j+ai/yEKMC6NeqhS/yFCfQjwG4z22VmWeAe4FBxAzN7jQV3EzCz24EscDnqYkUW0jcS/TouALWZGtY21KqHLrG24KUA7j5lZg8AjwEZ4BF3P25m9wf7DwK/D3zAzCaBMeCPXDMwZIXlF+bK0d4U3UqLxdqaNFtU4i3UtV3ufhg4XLLtYNHjTwOfjrY0kcUZGp9iesYrclIUCrNF1UOX+NJMUUmNwqSfqGeJFrQ11mlikcSaAl1So2+0Muu4FGiBLok7BbqkRt9wZVZaLGjVEroScwp0SY2+Ci3MVdDWmGV4Yorc1ExFXl9kuRTokhr9FVo6t6AwW3RgTMMuEk8KdEmNvtEc2UwNTdlMRV6/8ItCwy4SVwp0SY2+4RxtTXUEc9wipyV0Je4U6JIavcMTdDTXV+z1W2cX6FIPXeJJgS6pcXkkV9FAv7pAl3roEk8KdEmN3iuV7aHPDrmohy4xpUCXVHB3eodzdDRX5goXgDXZDPW1NeqhS2wp0CUVhsanyE3PVLSHDpotKvGmQJdUuDw8AUBHS+V66JA/MVq4EbVI3CjQJRV6g2n/le6hr2/O0jcyUdH3EFkqBbqkQm+hh17hQN/Q0kDPsAJd4kmBLqlQCPT1FTwpCtDZUs+loQl0/xaJIwW6pELvcA6zyq20WLChpZ6JqRmuTExV9H1ElkKBLqnQOzxBe2OW2kxlv6U7W/JDOpeGNOwi8aNAl1So9KSigkKg91xRoEv8hAp0M9tnZifM7KSZHSiz/4/N7Kng4ydmdkv0pYrMrXd4ouLj55AfcgG4dGW84u8lslgLBrqZZYCHgLuAPcC9ZranpNlLwNvc/Y3Ag8DDURcqMp9Kr+NS0NnSAKiHLvEUpoe+Fzjp7qfcPQc8CuwvbuDuP3H3/uDpz4Ct0ZYpMr+VGnJZ21BLtrZGgS6xFCbQtwBnip53B9vm8mHgO+V2mNl9ZnbUzI729PSEr1JkHsMTU4zkptm4tvKBbmZ0Ntcr0CWWwgR6ubsFlL0I18z+CflA/1i5/e7+sLt3uXtXZ2dn+CpF5nFxKD+evXFtw4q834a19VxSoEsMhQn0bmBb0fOtwLnSRmb2RuCLwH53vxxNeSILuzi4soGuHrrEVZhAPwLsNrNdZpYF7gEOFTcws+3A14E/cfcXoi9TZG4XrxQCvfJDLlDooesqF4mf2oUauPuUmT0APAZkgEfc/biZ3R/sPwh8ElgPfD64n+OUu3dVrmyRqy4M5nvLK9dDb6B/dJLc1AzZWk3lkPhYMNAB3P0wcLhk28Gixx8BPhJtaSLhXBwap6W+lqb6UN/Oy7Yh+Eugd3iC61rXrMh7ioSh7oUk3sWh8dmQXQmdzYXJRRpHl3hRoEviXRwaX7HhFrg6tHNhUOPoEi8KdEm8i0MTbFrBQL+uNf9e5wfHVuw9RcJQoEuizcw4l66Ms2EFA729KUt9bQ3n1UOXmFGgS6L1j+aYnPYVu2QR8rNFr2tdw9kB9dAlXhTokmiFXvJKDrkAbF7XwHkFusSMAl0SrdBL3tK2spcPbl63hnMDGnKReFGgS6Kd7Q8CfYWvB9/S2sClK+NMTc+s6PuKzEeBLol2dmCMhroa2psqf3OLYptb1zDjcFHXokuMKNAl0c72j7G1rZFgyYkVU5ghek7j6BIjCnRJtLMDYys+3AJw3br8SVgFusSJAl0S7ezA2IqfEIX8kAugE6MSKwp0SazR3BR9I7mq9NCb62tZ21CrHrrEigJdEqtwhcvWKvTQ8+/bSHf/aFXeW6QcBbokVvdAdQN9Z0cjL19WoEt8KNAlsbpnr0
FvrMr771jfxJn+UV2LLrGhQJfEOt07wpq6zIqu41Js5/pGJqddi3RJbCjQJbFe6h1hZ0fTil+DXrBjfRMApy+PVOX9RUqFCnQz22dmJ8zspJkdKLP/JjP7qZlNmNmfRl+myLVO946wq6M6wy0AO2cDXePoEg8LBrqZZYCHgLuAPcC9ZranpFkf8G+Bz0ZeoUgZU9MzvNI3Ohuq1bChpZ6Guhpe7lUPXeIhTA99L3DS3U+5ew54FNhf3MDdL7n7EWCyAjWKXKO7f4ypGWdXR/UCvabG2NHepB66xEaYQN8CnCl63h1sWzQzu8/MjprZ0Z6enqW8hAgALwXj1tUMdIAd6xs1hi6xESbQy51x8qW8mbs/7O5d7t7V2dm5lJcQAeClnnyI7qxyoO/e2Mzp3hFyU7p0UaovTKB3A9uKnm8FzlWmHJFwXrh4hfamLOtXeNncUjdubGFqxjnVO1zVOkQgXKAfAXab2S4zywL3AIcqW5bI/J67cIXXbmyp2iWLBTdtWgvAiQtXqlqHCEDtQg3cfcrMHgAeAzLAI+5+3MzuD/YfNLNNwFFgLTBjZh8F9rj7UAVrl1VqZsZ58eIV3te1beHGFbaro4naGlOgSywsGOgA7n4YOFyy7WDR4wvkh2JEKu5M/yijuWlu2tRS7VLI1tZwQ2ezAl1iQTNFJXGeO58Pz5s2r61yJXk3bmrhxEUFulSfAl0S5/kLQ5jBjRubq10KADdtaqG7f4yhcU3DkOpSoEviPNU9yGs6m2nMhhoxrLg3bFkHwNPdg1WuRFY7Bbokirvzy1f6uW17a7VLmXXLtnwtv3ylv8qVyGqnQJdEefnyKP2jk9y2va3apcxat6aOGzqb+OUrA9UuRVY5Bbokyi/P5HvBceqhA9y2vY1jZwZwX9IkapFIKNAlUZ54uZ+mbIbdG6p/yWKx27a3cnkkp4W6pKoU6JIoPzl5mb272snUVHeGaKm33NABwI9O9la5ElnNFOiSGGcHxjjVO8Jbd8dvYbed6xvZ2raGH76gVUSlehTokhg/ejEflnfu7qhyJdcyM+7c3clPf3VZN42WqlGgS2I8fvwiW1rXsHtDPCYUlXrbjR1cmZjiH0/3VbsUWaUU6JIIg2OT/ODFHu56/aaqr7A4l7fduIHGbIa/ffJ8tUuRVUqBLonw2PELTE47v/3GzdUuZU5rshnedfMmDj99Xje8kKpQoEsifOXnr3BDZxO3bovX9eel3n3rdQyOTfL4sxeqXYqsQgp0ib1jZwZ48swAf/LmHbEdbin4zd2d7FzfyBd+cEqTjGTFKdAl9v7iuy+wbk0dv39H/Jfcz9QYH77zep7sHuTHJy9XuxxZZRToEmvfO3GJ77/Qw796+w20NNRVu5xQ/vCOrWxtW8N/+tvjTOoSRllBCnSJrcvDExz42tO8ZkMzH3zLzmqXE1pDXYZP/e7NvHhpmM8+fqLa5cgqokCXWOofyfGhLx+lfzTHX7zvVhrqMtUuaVHesWcjf/ym7fz190/x5Z+crnY5skqECnQz22dmJ8zspJkdKLPfzOy/B/ufMrPboy9VVosfvdjLez7/Y547N8Tn3n87b9i6rtolLcmn3n0z73jdBv7joeN87P88Rf9IrtolScoteMsXM8sADwHvBLqBI2Z2yN2fLWp2F7A7+HgT8FfBvyLzGstN0zs8wcuXRzl2pp+/P36BZ84Osb29ka/e9ybu2NFe7RKXrC5Tw8F/dgefffwFvvDDU3zrybPsu3kTv/GaDl63eS1bWtfQ2lgX+yt3JDnC3MNrL3DS3U8BmNmjwH6gOND3A//T89dp/czMWs1ss7tHPmXu+y/08OC3r751uUvDrtni8+8P8xqlTbykRbkr1Ba6aq30fcs1X+z7hnmN0lbhXmP+Wq95jRDHdGraGZucftW2129Zy5/tv5n3dW1L3DBLObWZGg7cdRO/d/sWvvyT0/zd0+f55rFzs/trDNbUZViTraWhrgYzMCz4N79GjAGUPpdE+6Nf28ZH7rw+8tcNE+hbgDNFz7u5tvddrs0W4FWBbmb3AfcBbN++fbG1AtBcX8trN5ashV3mO7x0U2kv6Nr9y3+N8nWUfI6V7g9TxwKvEaKQxb5vudhY+DUWjpriJhkz2pqydDbXc13rGt6wZR3rGpNxJcti3bixhf/83jfw4P7Xc7JnmFM9w3T3jzEwOsnY5DRjk9OMT06D53/xzbjjwWP34Ne4X/sLXZKpo7m+Iq8bJtDL/ZSWfleFaYO7Pww8DNDV1bWk78w7drRxx4743H5MZDFqaowbN7ZwY2mnRCQCYU6KdgPbip5vBc4toY2IiFRQmEA/Auw2s11mlgXuAQ6VtDkEfCC42uXNwGAlxs9FRGRuCw65uPuUmT0APAZkgEfc/biZ3R/sPwgcBu4GTgKjwL+oXMkiIlJOmDF03P0w+dAu3naw6LED/zra0kREZDE0U1REJCUU6CIiKaFAFxFJCQW6iEhKWLXuqmJmPcDLS/z0DqA3wnKiEte6IL61qa7FUV2Lk8a6drh7Z7kdVQv05TCzo+7eVe06SsW1LohvbaprcVTX4qy2ujTkIiKSEgp0EZGUSGqgP1ztAuYQ17ogvrWprsVRXYuzqupK5Bi6iIhcK6k9dBERKaFAFxFJidgGupn9oZkdN7MZM+sq2ffx4IbUJ8zsXXN8fruZfdfMXgz+jfyuGGb2v8zsWPBx2syOzdHutJk9HbQ7GnUdZd7vU2Z2tqi2u+doN+/NvytU22fM7PngZuLfMLPWOdpV/JjF8ebnZrbNzP6fmT0XfP//uzJt3m5mg0Vf309Wuq6i957361KlY/baomNxzMyGzOyjJW1W5JiZ2SNmdsnMninaFiqLIvl5dPdYfgCvA14LfA/oKtq+B3gSqAd2Ab8CMmU+/78AB4LHB4BPV7jePwc+Oce+00DHCh67TwF/ukCbTHDsrgeywTHdswK1/VOgNnj86bm+LpU+ZmH+/+SXhP4O+TtyvRn4+Qocn83A7cHjFuCFMnW9Hfj2Sn0/LebrUo1jVubreoH85JsVP2bAbwK3A88UbVswi6L6eYxtD93dn3P3E2V27QcedfcJd3+J/Brse+do9+Xg8ZeB91Sm0nyvBHgf8NVKvUcFzN78291zQOHm3xXl7o+7+1Tw9Gfk725VDWH+/7M3P3f3nwGtZra5kkW5+3l3/0Xw+ArwHPn78ybFih+zEr8F/MrdlzoLfVnc/QdAX8nmMFkUyc9jbAN9HnPdkLrURg/umhT8u6GCNd0JXHT3F+fY78DjZvZEcKPslfBA8CfvI3P8iRf2OFbSh8j35sqp9DEL8/+v6jEys53AbcDPy+z+dTN70sy+Y2Y3r1RNLPx1qfb31T3M3bGq1jELk0WRHLdQN7ioFDP7v8CmMrs+4e7fmuvTymyr2LWXIWu8l/l757/h7ufMbAPwXTN7PvhNXpG6gL8CHiR/XB4kPxz0odKXKPO5kRzHMMfMzD4BTAFfmeNlIj9mpWWW2bakm59Xgpk1A18DPuruQyW7f0F+SGE4OD/yTWD3StTFwl+Xah6zLPBu4
ONldlfzmIURyXGraqC7+zuW8Glhb0h90cw2u/v54E++S5Wo0cxqgd8D7pjnNc4F/14ys2+Q//NqWeEU9tiZ2ReAb5fZVbEbe4c4Zh8Efgf4LQ8GEMu8RuTHrERsb35uZnXkw/wr7v710v3FAe/uh83s82bW4e4VX4QqxNelmjeMvwv4hbtfLN1RzWNGuCyK5LglccjlEHCPmdWb2S7yv2X/cY52HwwefxCYq8e/XO8Annf37nI7zazJzFoKj8mfFHymXNuolIxZvneO9wtz8+9K1LYP+BjwbncfnaPNShyzWN78PDgf8z+A59z9v87RZlPQDjPbS/7n+HIl6wreK8zXpZo3jJ/zL+VqHbNAmCyK5uex0md9l/pBPoi6gQngIvBY0b5PkD8jfAK4q2j7FwmuiAHWA/8AvBj8216hOr8E3F+y7TrgcPD4evJnrJ8EjpMfdqj0sfsb4GngqeCbYrTYGwwAAACgSURBVHNpXcHzu8lfRfGrlagreM+T5McKjwUfB6t1zMr9/4H7C19P8n8GPxTsf5qiq60qeHzeSv5P7aeKjtHdJXU9EByXJ8mfWH7LCn3tyn5dqn3MgvdtJB/Q64q2rfgxI/8L5TwwGeTXh+fKokr8PGrqv4hISiRxyEVERMpQoIuIpIQCXUQkJRToIiIpoUAXEUkJBbqISEoo0EVEUuL/A3Ar/hj8WM8fAAAAAElFTkSuQmCC",
+ "text/plain": [
+ "