From c8cf7a33ffa3aac032055e43574a3490762c6148 Mon Sep 17 00:00:00 2001 From: Otavio Napoli Date: Fri, 2 Feb 2024 11:27:12 -0300 Subject: [PATCH 1/3] Updated notebooks 1 and 2 --- notebooks/01_structuring_input.ipynb | 140 +- notebooks/02_training_model.ipynb | 4609 ++++++++++++++++++++++++-- 2 files changed, 4502 insertions(+), 247 deletions(-) diff --git a/notebooks/01_structuring_input.ipynb b/notebooks/01_structuring_input.ipynb index bce7443..fc5e0da 100644 --- a/notebooks/01_structuring_input.ipynb +++ b/notebooks/01_structuring_input.ipynb @@ -6,36 +6,47 @@ "source": [ "# 1. Structuring the input\n", "\n", - "In order to train and test models, we need to structure the input data in a way that is compatible with the model and framweork's experiments.\n", + "In order to train and test models, we need to structure the input data in a way that is compatible with the model and the framework's experiments.\n", "\n", - "This framework is designed to work with time-series data and Pytorch Lightning. \n", - "Thus, it provides the necessary tools to create a `Dataset` object and a `LightningDataModule` object.\n", + "For now, this framework is designed to work with time-series data and Pytorch Lightning. \n", + "Thus, it provides the necessary tools to create `Dataset` and `LightningDataModule` objects, required by Pytorch Lightning to train and test models.\n", "\n", - "The `Dataset` object is responsible for loading the data. It is a Pytorch object that is used to load the data and make it available to the model. \n", - "Every `Dataset` class must implement two methods: `__len__` and `__getitem__`.\n", - "The `__len__` method returns the number of samples in the dataset, and the `__getitem__`, given an integer from 0 to `__len__` - 1, returns the corresponding sample from the dataset.\n", - "The returned type of the `__getitem__` method is not specified, but it is usually a 2-element tuple with the input and the target. The input is the data that will be used to make the predictions, and the target is the data that the model will try to predict.\n", - "\n", - "For now, this framework provide implementations for the `Dataset` objects for time-series data, where data is organized in two different ways:\n", + "In this notebook, we explain the default data pipeline, which includes:\n", + "1. Creating `Dataset` objects, that are responsible for loading the data.\n", + "2. Creating `DataLoader` objects, that are responsible for batched loading of the data. It encapsulates the `Dataset` object and provides an iterator to iterate over the data in batches.\n", + "3. Creating `LightningDataModule` objects, that are responsible for loading the data and creating the `Dataset` and encapsulate it into `DataLoader` objects for training, validation, and test sets.\n", "\n", - "- A directory with several CSV files, where each file contains a time-series. Each row in a CSV file is a time-step, each column is a feature, and the whole file is a time-series. This is handled by the `SeriesFolderCSVDataset` class.\n", - "- A single CSV file with a windowed time-series. Each row in the CSV file is a window, and each column is a feature. This is handled by the `MultiModalSeriesCSVDataset` class.\n", - "\n", - "We explain both classes in detais nextly." + "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Time-series dataset implementations" + "## Time-series dataset implementations\n", + "\n", + "The `Dataset` object is responsible for loading the data. 
\n", + "It is a Pytorch object that is used to load the data and make it available to the model. \n", + "\n", + "Every `Dataset` class must implement two methods: `__len__` and `__getitem__`.\n", + "The `__len__` method returns the number of samples in the dataset, and the `__getitem__`, given an integer from 0 to `__len__` - 1, returns the corresponding sample from the dataset.\n", + "The returned type of the `__getitem__` method is not specified, but it is usually a 2-element tuple with the input and the target. The input is the data that will be used to make the predictions, and the target is the data that the model will try to predict.\n", + "\n", + "The first step when creating a `Dataset` object is to identify the layout of the data directory and choose the appropriate class to handle it.\n", + "For now, this framework provides default `Dataset` classes for time-series data, which are the `SeriesFolderCSVDataset` and `MultiModalSeriesCSVDataset` classes.\n", + "Both classes assumes that data are stored in CSV files, but with different layouts, to know:\n", + "\n", + "- A directory with several CSV files, where each file contains a time-series. Each row in a CSV file is a time-step, each column is a feature. Thus, the whole file is a single multi-modal time-series. Also, if you want to use labels, it must be in a separated column of the CSV file and it should exists to all rows (time-steps). This layout is handled by the `SeriesFolderCSVDataset` class.\n", + "- A single CSV file with a windowed time-series. Each row contains different modalities of the same windowed time-series. The prefix of the column names is used to identify the modalities. For instance, if the is `accel-x`, all columns that start with this prefix, like `accel-x-1`, `accel-x-2`, `accel-x-3`, are considered time-steps from the same modality (`accel-x`). Also, if you want to use labels, it must be in a separated column and it should exists to all rows, that is, for each windowed multimodal time-series. This layout is handled by the `MultiModalSeriesCSVDataset` class.\n", + "\n", + "We will show how to use these classes in the next sections." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### SeriesFolderCSVDataset\n", + "### `SeriesFolderCSVDataset`\n", "\n", "The `SeriesFolderCSVDataset` class is designed to work with a directory containing several CSV files, where each file represent a time-series. \n", "Each row in a CSV file is a time-step, and each column is a feature. \n", @@ -50,7 +61,7 @@ " ...\n", "```\n", "\n", - "Where each CSV file represents a time-series. \n", + "Where each CSV file represents a time-series, similar to the one below: \n", "\n", "| accel-x | accel-y | accel-z | gyro-x | gyro-y | gyro-z | class |\n", "|---------|---------|---------|---------|---------|---------|---------|\n", @@ -59,7 +70,8 @@ "| 0.498217| 0.00001 | 0.12312 | 0.12312 | 0.12312 | 0.12312 | 1 |\n", "\n", "\n", - "Note that the CSV must have a header with the column names." + "Note that the CSV must have a header with the column names.\n", + "Also, columns that are not used as features or labels are ignored." ] }, { @@ -67,14 +79,13 @@ "metadata": {}, "source": [ "To handle this kind of data, we use the `SeriesFolderCSVDataset` class. 
This class is a Pytorch `Dataset` object that loads the data from the CSV files and makes it available to the model.\n", - "For this class, we must specify the path to the directory containing the CSV files, the name of the columns that will be used as features, and the name of the column that will be used as the target.\n", - "Note that, each feature (column) represent a dimension of the time-series, while the rows represent the time-steps.\n", + "Note that, each feature (column) represent a dimension of the time-series, while the rows represent the time-steps. The sample is a numpy array.\n", "\n", - "Thus, the `SeriesFolderCSVDataset` class minimally requires:\n", + "For this class, we must specify the following parameters:\n", "\n", - "- `data_path`: the path to the directory containing the CSV files\n", - "- `features`: a list of strings with the names of the features columns, e.g. `['accel-x', 'accel-y', 'accel-z', 'gyro-x', 'gyro-y', 'gyro-z']`\n", - "- `label`: a string with the name of the label column, e.g. `'class'`" + "- `data_path`: the path to the directory containing the CSV files;\n", + "- `features`: a list of strings with the names of the features columns, *e.g.* `['accel-x', 'accel-y', 'accel-z', 'gyro-x', 'gyro-y', 'gyro-z']`;\n", + "- `label`: a string with the name of the label column, *e.g.* `'class'`." ] }, { @@ -83,11 +94,10 @@ "metadata": {}, "outputs": [ { - "name": "stderr", + "name": "stdout", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", - " from .autonotebook import tqdm as notebook_tqdm\n" + "[1706883997.242541] [aae107fc745c:2264626:f] vfs_fuse.c:281 UCX ERROR inotify_add_watch(/tmp) failed: No space left on device\n" ] }, { @@ -122,10 +132,10 @@ "metadata": {}, "source": [ "We can get the number of samples in the dataset with the `len` function, and we can retrive a sample with the `__getitem__` method, that is, using `[]`, such as `dataset[0]`.\n", - "The dataset may return:\n", + "The dataset return type is different depending on the `label` parameter.\n", "\n", - "- A 2-element tuple, where the first element is a 2D numpy array with shape `(num_features, time_steps)`, and the second element is a 1D tensor with shape `(time_steps,)`.\n", - "- A 2D numpy array with shape `(num_features, time_steps)`, if `label` is `None`, at the time of the dataset object's creation.\n", + "- If `label` is speficied, the return type is a 2-element tuple, where the first element is a 2D numpy array with shape `(num_features, time_steps)`, and the second element is a 1D tensor with shape `(time_steps,)`.\n", + "- If `label` is not speficied, the return type is a single 2D numpy array with shape `(num_features, time_steps)`.\n", "\n", "Let's check the number of samples and access the first sample and its label." ] @@ -163,7 +173,7 @@ } ], "source": [ - "# Get the first sample\n", + "# Get the first sample. We can go from 0 to length_of_dataset - 1 (56)\n", "sample = dataset[0]\n", "type_of_sample = type(sample).__name__\n", "print(f\"Type of sample: {type_of_sample} with {len(sample)} elements\")" @@ -201,12 +211,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### MultiModalSeriesCSVDataset\n", + "### `MultiModalSeriesCSVDataset`\n", "\n", "\n", - "The `MultiModalSeriesCSVDataset` class is designed to work with a single CSV file containing a windowed time-series. 
\n", - "The CSV is a multi-modal time-series, where each row is a sample and each column is a feature at a given time-step. \n", - "Features are organized in a way that each group of columns represent a different modality.\n", + "The `MultiModalSeriesCSVDataset` class is designed to work with a single CSV file containing a windowed time-series.\n", + "Each row contains different modalities of the same windowed time-series. \n", + "The prefix of the column names is used to identify the modalities. \n", + "For instance, if the prefix is `accel-x`, all columns that start with this prefix, like `accel-x-1`, `accel-x-2`, `accel-x-3`, are considered time-steps from the same modality (`accel-x`). \n", + "Also, if you want to use labels, it must be in a separated column and it should exists to all rows, that is, for each windowed multimodal time-series. \n", "\n", "The CSV file looks like this:\n", "\n", @@ -217,24 +229,19 @@ "| 0.6820123 | 0.02123 | 0.502123 | 0.502123 | 1 |\n", "| 0.498217 | 0.00001 | 1.414141 | 3.141592 | 1 |\n", "\n", - "In the example, columns `accel-x-0` and `accel-x-1` are the `accel-x` feature at time `0` and time `1`, respectively. The same goes for the `accel-y` feature. Finally, the `class` column is the label. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To handle this kind of data, we use the `MultiModalSeriesCSVDataset` class.\n", - "For this class, we must specify the path to the CSV file, the prefix of the columns that will be used as features, and the columns that will be used as label.\n", - "Note that, each feature (column) represent a dimension of the time-series, while the rows represent the samples.\n", + "In the example, columns `accel-x-0` and `accel-x-1` are the `accel-x` feature at time `0` and time `1`, respectively. \n", + "The same goes for the `accel-y` feature. Finally, the `class` column is the label. \n", + "Columns that are not used as features or labels are ignored.\n", "\n", - "The `MultiModalSeriesCSVDataset` class minimally requires:\n", + "To use `MultiModalSeriesCSVDataset`, we must specify the following parameters:\n", "\n", "- `data_path`: the path to the CSV file\n", "- `feature_prefixes`: a list of strings with the prefixes of the feature columns, e.g. `['accel-x', 'accel-y']`. The class will look for columns with these prefixes and will consider them as features of a modality.\n", "- `label`: a string with the name of the label column, e.g. `'class'`\n", "- `features_as_channels`: a boolean indicating if the features should be treated as channels, that is, if each prefix will become a channel. If ``True``, the data will be returned as a vector of shape `(C, T)`, where C is the number of channels (features/prefixes) and `T` is the number of time steps. Else, the data will be returned as a vector of shape `T*C` (a single vector with all the features).\n", "\n", + "Note that, each feature (column) represent a dimension of the time-series, while the rows represent the samples.\n", + "\n", "Let's show how to read this data and create a `MultiModalSeriesCSVDataset` object." ] }, @@ -343,15 +350,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Loading batches of data\n", + "## Loading batches of data using DataLoader\n", "\n", "Pytorch models are trained using batches of data. Thus, we do not feed the model with a single sample at a time, but with a batch of samples.\n", "If we see the last example, the `MultiModalSeriesCSVDataset` object returns a single sample at a time. 
Each sample is a 2-element tuple, where first element is a `(6, 60)` numpy array and the second is an integer, representing the label.\n", "\n", - "A batch of samples add an extra dimension to the data. Thus, in our case, a batch of samples is a 3D tensor, where the first dimension is the batch size (`B`), the second dimension is the number of features, or channels (`C`), and the third dimension is the number of time-steps (`T`).\n", - "Thus, if the data have the shape `(6, 60)`, a batch of samples will have the shape `(B, 6, 60)`. The same happens to `label`, which gains an extra dimension. \n", + "A batch of samples add an extra dimension to the data. \n", + "Thus, in our case, a batch of samples would be a 3D tensor, where the first dimension is the batch size (`B`), the second dimension is the number of features, or channels (`C`), and the third dimension is the number of time-steps (`T`).\n", + "Thus, if the data have the shape `(6, 60)`, a batch of 32 samples will be a tensor with shape `(32, 6, 60)`. \n", + "The same happens to `label`, which gains an extra dimension, and would be an 1D tensor with shape `(32,)`.\n", "\n", - "The batching of samples is done using a `DataLoader` object. This object is a Pytorch object that takes a `Dataset` object and returns batches of samples. The `DataLoader` object is responsible for shuffling the data, dividing it into batches, and loading the data in parallel.\n", + "The batching of samples is done using a `DataLoader` object. \n", + "This object is a Pytorch object that takes a `Dataset` object and returns batches of samples. \n", + "The `DataLoader` object is responsible for shuffling the data, dividing it into batches, and loading the data in parallel.\n", "Thus, given a `Dataset` object, we can easilly create a `DataLoader` object using the `torch.utils.data.DataLoader` class." ] }, @@ -363,7 +374,7 @@ { "data": { "text/plain": [ - "" + "" ] }, "execution_count": 9, @@ -383,8 +394,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can fetch a batch of samples from the `DataLoader` object using the `__iter__` method, that is, using a `for` loop. Each iteration returns a batch of samples.\n", - "In our case, each batch is a 2-element tuple, where the first element is a 3D tensor with shape `(B, C, T)`, and the second element is a 1D tensor with shape `(B,)`." + "Datasets implement the `__len__` and `__getitem__` methods. \n", + "However, the `DataLoader` object implements the iterable protocol, that is, it implements the `__iter__` method, which returns an iterator to iterate over the data in batches.\n", + "Thus, to fetch a batch of samples from the `DataLoader` object, we can use a `for` loop, as we do with any other iterable object in Python, like lists and tuples (*e.g.* `for batch in dataloader: ...`).\n", + "We can also use the `next` function to fetch a single batch of samples, such as `batch = next(iter(dataloader))`.\n", + "Let's fetch a single sample from the `DataLoader` object and check its shape." ] }, { @@ -401,24 +415,30 @@ } ], "source": [ - "for batch in dataloader:\n", - " inputs, labels = batch\n", - " print(f\"Inputs shape: {inputs.shape}, labels shape: {labels.shape}\")\n", - " break" + "batch = next(iter(dataloader))\n", + "# Batch is a tuple with two elements: inputs and labels. 
\n", + "# Let's extract it to two different variables\n", + "inputs, labels = batch\n", + "# Print the shape of the inputs and labels\n", + "print(f\"Inputs shape: {inputs.shape}, labels shape: {labels.shape}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Handling data splits (train, validation, and test)\n", + "## Handling data splits (train, validation, and test) using `LightningDataModule`\n", "\n", "Usually, we create a `DataLoader` object for the training data, another for the validation data, and another for the test data. \n", - "A simple way to do this is to create a `LightningDataModule` object, which is a Pytorch Lightning object that is responsible for creating the `DataLoader` objects for the training, validation, and test data.\n", + "We can encapsulate the `DataLoader` creation logic in a single place, and make it easy to use the same data processing logic across different experiments.\n", + "A simple way to do this is to create a `LightningDataModule` object.\n", + "\n", + "A `LightningDataModule` object is responsible for splitting the data into training, validation, and test sets, and creating the `DataLoader` objects for each set. \n", + "This object may also be responsible for setting up the data, such as downloading the data from the internet, checking the data, and add the augmentations. \n", "\n", - "A `LightningDataModule` object is responsible for splitting the data into training, validation, and test sets, and creating the `DataLoader` objects for each set. This object may also be responsible for setting up the data, such as downloading the data from the internet, checking the data, and add the augmentations. This module is used to encapsulate all the data loading and processing logic in a single place, and to make it easy to use the same data processing logic across different experiments.\n", + "The `LightningDataModule` object must implement four methods: `setup`, `train_dataloader`, `val_dataloader`, and `test_dataloader`. The `setup` is optional, and is responsible for splitting the data into training, validation, and test sets, and `train_dataloader`, `val_dataloader` and `test_dataloader` methods are responsible for creating the `DataLoader` objects for the training, validation and test sets, respectively.\n", "\n", - "The `LightningDataModule` object must implement three methods: `setup`, `train_dataloader`, and `val_dataloader`. The `setup` is optional, and is responsible for splitting the data into training, validation, and test sets, and the `train_dataloader` and `val_dataloader` methods are responsible for creating the `DataLoader` objects for the training and validation sets, respectively." + "A data module may be implemented as shown below." ] }, { diff --git a/notebooks/02_training_model.ipynb b/notebooks/02_training_model.ipynb index e7260ef..4ce9a64 100644 --- a/notebooks/02_training_model.ipynb +++ b/notebooks/02_training_model.ipynb @@ -7,7 +7,7 @@ "# 2. Training a Pytorch Lighning model\n", "\n", "In this notebook, we show the training of a simple CNN model using Pytorch Lightning. \n", - "We first start with data, then we define the model, and finally we train it." + "We first start with data, then define the model, and finally train it for a HAR task." ] }, { @@ -16,8 +16,8 @@ "source": [ "## Creating KuHar LightningDataModule\n", "\n", - "In order to train a model, we must first create a LightningDataModule.\n", - "In this work, we will use the Standartized KuHar HAR data. 
Our data folder looks like this:\n", + "In order to train a model, we must first create a `LightningDataModule`, that will define the data loaders for training, validation and test.\n", + "Here, we will use the Standartized KuHar data. Therefore, the data directory may looks like this:\n", "\n", "```\n", "KuHar/\n", @@ -28,35 +28,47 @@ "\n", "The `train.csv` file may look like this:\n", "\n", - "| accel-x-0 | accel-x-1 | accel-y-0 | accel-y-1 | class |\n", - "|-----------|-----------|-----------|-----------|--------|\n", - "| 0.502123 | 0.02123 | 0.502123 | 0.502123 | 0 |\n", - "| 0.6820123 | 0.02123 | 0.502123 | 0.502123 | 1 |\n", - "| 0.498217 | 0.00001 | 1.414141 | 3.141592 | 1 |\n", + "| accel-x-0 | accel-x-1 | accel-y-0 | accel-y-1 | ... | standard activity code |\n", + "|-----------|-----------|-----------|-----------|------|------------------------|\n", + "| 0.502123 | 0.02123 | 0.502123 | 0.502123 | ... | 0 |\n", + "| 0.6820123 | 0.02123 | 0.502123 | 0.502123 | ... | 0 |\n", + "| 0.498217 | 0.00001 | 1.414141 | 3.141592 | ... | 1 |\n", "\n", - "As each CSV file contains time-windows signals of two 3-axis sensors (accelerometer and gyroscope), we must use the `MultiModalSeriesCSVDataset` class. After it, we must create a LightningDataModule, that will define the data loaders for training, validation and test. \n", + "As each CSV file contains windowed time signals of two 3-axial sensors, we may use the `MultiModalSeriesCSVDataset` class to handle this data structure.\n", + "After it, we must create a `LightningDataModule`, that will define the data loaders for training, validation and test. \n", + "The implementation of `LightningDataModule` may look like the snippet below:\n", "\n", - "### Faciliting the creation of the LightningDataModule with MultiModalHARSeriesDataModule\n", - "\n", - "In order to facilitate the `Dataset` and `DataLoader` creation, we will use the `MultiModalHARSeriesDataModule`. If:\n", - "\n", - "1. Your directory is organized like the one above; and \n", - "2. Each CSV file is a collection os time-windows of signals (that possibly would be used as a dataset wrapping `MultiModalSeriesCSVDataset`).\n", - "\n", - "Then, you can use the `The `train.csv` file may look like this:\n", + "```python\n", + "import lightning as L\n", + "from torch.utils.data import DataLoader\n", + "from ssl_tools.data.datasets import MultiModalSeriesCSVDataset\n", "\n", - "| accel-x-0 | accel-x-1 | accel-y-0 | accel-y-1 | class |\n", - "|-----------|-----------|-----------|-----------|--------|\n", - "| 0.502123 | 0.02123 | 0.502123 | 0.502123 | 0 |\n", - "| 0.6820123 | 0.02123 | 0.502123 | 0.502123 | 1 |\n", - "| 0.498217 | 0.00001 | 1.414141 | 3.141592 | 1 |\n", + "class HARDataModule(L.LightningDataModule):\n", + " def __init__(self, data_path: Path, batch_size: int):\n", + " super().__init__()\n", + " self.data_path = data_path\n", + " self.batch_size = batch_size\n", + " \n", + " def train_dataloader(self):\n", + " dataset = MultiModalSeriesCSVDataset(self.data_path / 'train.csv')\n", + " return DataLoader(dataset, batch_size=self.batch_size, shuffle=True)\n", + " \n", + " ...\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Faciliting the creation of the LightningDataModule with MultiModalHARSeriesDataModule\n", "\n", - "As each CSV file contains time-windows signals of two 3-axis sensors (accelerometer and gyroscope), we must use the `MultiModalSeriesCSVDataset` class. 
After it, we must create a LightningDataModule, that will define the data loaders for training, validation and test. ` to create a `LightningDataModule`, easily. \n", - "The `train_dataloader` method will use `train.csv`, `val_dataloader` will use `validation.csv` and `test_dataloader` will use `test.csv`.\n", + "If your directory is organized like the one above, the CSVs are a collection of time-windows of signals, and the `LightningDataModule` implementation may looks like the one above, you can use the `MultiModalHARSeriesDataModule` to create a `LightningDataModule` easily for you.\n", + "The `train_dataloader` method will use `train.csv`, `val_dataloader` will use `validation.csv` and `test_dataloader` will use `test.csv` to create the `MultiModalSeriesCSVDataset` and encapsulate into `DataLoader`.\n", "\n", "To create a `MultiModalHARSeriesDataModule`, we must pass:\n", "\n", - "- `data_path`: the path to the `KuHar` folder;\n", + "- `data_path`: the path to the directory containing the CSV files (`train.csv`, `validation.csv` and `test.csv`). We use `standardized_balanced/KuHar` in this case;\n", "- `feature_prefixes`: the prefixes of the features in the CSV files. In this case, we have `accel-x`, `accel-y`, `accel-z`, `gyro-x`, `gyro-y` and `gyro-z`;\n", "- `batch_size`: the batch size for the data loaders; and\n", "- `num_workers`: the number of workers for the data loaders. Essentially, the number of parallel processes to load the data.\n", @@ -69,18 +81,10 @@ "execution_count": 1, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/usr/local/lib/python3.10/dist-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", - " from .autonotebook import tqdm as notebook_tqdm\n" - ] - }, { "data": { "text/plain": [ - "" + "MultiModalHARSeriesDataModule(data_path=/workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/KuHar, batch_size=64)" ] }, "execution_count": 1, @@ -99,7 +103,6 @@ " label=\"standard activity code\",\n", " features_as_channels=True,\n", " batch_size=64,\n", - " num_workers=0, # Sequential, for notebook compatibility\n", ")\n", "data_module" ] @@ -108,12 +111,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can test the dataloaders by getting the first batch of each one. Let's do it, but just for the `train_dataloader`. Note that the `.setup()` method must be called before getting the data loaders. If you don't call it, the data loaders will not be created. However, when used to train a model, the Pytorch Lightning `.fit()` method will call the `.setup()` method for you. So, we put it here just to show how to use it." + "We can test the dataloaders by getting the first batch of each one. Let's do it (only for`train_dataloader`)!. \n", + "\n", + "> **NOTE**: We use the data_module.train_dataloader() method to get the data loader for the training set. Note that the `.setup()` method must be called before getting the data loaders. If you don't call it, the data loaders will not be created. However, when used to train a model, the Pytorch Lightning `Trainer.fit()` method will automatically call the `.setup()` method for you. So, we put it here just to show how to fetch a data from `train_dataloader` and check if it is working." 
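The same quick check can also be done for the other splits. A sketch, assuming the usual Lightning stage convention (`setup("fit")` prepares the train/validation datasets and `setup("test")` prepares the test dataset):

```python
# Sketch: inspecting the validation and test splits in the same way
data_module.setup("fit")    # prepares train/validation datasets
val_batch = next(iter(data_module.val_dataloader()))

data_module.setup("test")   # prepares the test dataset
test_batch = next(iter(data_module.test_dataloader()))

print(val_batch[0].shape, test_batch[0].shape)
```

The cell below runs this check for the training split.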
] }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 4, "metadata": {}, "outputs": [ { @@ -126,14 +131,21 @@ ], "source": [ "data_module.setup(\"fit\") # We just put it here to test.\n", - " # When training a model, the Trainer will call this method.\n", + " # When training a model, the Trainer will \n", + " # call this method.\n", + "\n", "train_dataloader = data_module.train_dataloader()\n", "\n", - "# Pick the first batch to inspect. The batch size is 64, so we have 64 samples.\n", + "# Pick the first batch to inspect. As batch size is 64, we will have 64 samples.\n", + "# Note that dataloader only implement iterator protocol, \n", + "# so we can use next() to fetch one batch.\n", "batch = next(iter(train_dataloader))\n", - "# Each batch is a 2-element tuple with the first element being the 64 sample input and the second the 64 sample target.\n", + "# Each batch is a 2-element tuple:\n", + "# First element is a Tensor with 64 input samples\n", + "# and the second is a Tensor with 64 labels.\n", "inputs, targets = batch\n", "\n", + "# (B, C, T) = (Batch size, Channels, Time steps) = (64, 6, 60)\n", "print(f\"Inputs shape: {inputs.shape}, Targets shape: {targets.shape}\")" ] }, @@ -143,22 +155,23 @@ "source": [ "## Training a simple model\n", "\n", - "We will create a simple 1D CNN Pytorch Lightning model using the `Simple1DConvNetwork`. The model will be trained to classify the activities in the KuHar dataset. \n", + "We will create a simple 1D CNN Pytorch Lightning model using the `Simple1DConvNetwork`. The model will be trained to classify the activities in KuHar dataset. \n", "\n", - "Pytorch Lightning models must implement the `forward` method, `training_step` and `configure_optimizers` methods. Also, the `__init__` method is used to define the model.\n", + "Pytorch Lightning models must implement the `forward` method, `training_step` and `configure_optimizers` methods. \n", + "Also, the `__init__` method is used to define the model.\n", "The `forward` method is the same as the Pytorch `forward` method. \n", - "The `training_step` method is the method that will be called for each batch of data during the training. \n", + "The `training_step` method is the method that will be called for each batch of data during the training. It should return the loss of the batch.\n", "The `configure_optimizers` method is the method that will define the optimizer to be used during the training.\n", "\n", - "The `Simple1DConvNetwork` is a simple 1D CNN model that will be used to classify the activities in the KuHar dataset. It has 3 convolutional layers and 2 fully connected layers. It is trained using the `Adam` optimizer and the `CrossEntropyLoss` loss function.\n", + "The `Simple1DConvNetwork` is a simple 1D CNN model, that has 3 convolutional layers and 2 fully connected layers. \n", + "It is trained using the `Adam` optimizer and the `CrossEntropyLoss` loss function.\n", "\n", - "Besides that, Lightning models implemented in this framework, usually logs the training and validation losses.\n", - "Also, the `test` usually implement common metrics, such as accuracy." + "Besides that, Lightning models implemented in this framework, usually logs the training and validation losses." 
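To make the three required methods concrete, a minimal LightningModule for this kind of input could be sketched as below. This is not the actual `Simple1DConvNetwork` implementation (the layer sizes and names here are made up for illustration), but it shows where `forward`, `training_step` and `configure_optimizers` fit:

```python
import lightning as L
import torch
from torch import nn


class TinyConv1DClassifier(L.LightningModule):
    """Illustrative model with the three required Lightning methods."""

    def __init__(self, input_channels=6, time_steps=60, num_classes=6, learning_rate=1e-3):
        super().__init__()
        self.learning_rate = learning_rate
        self.backbone = nn.Sequential(
            nn.Conv1d(input_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * time_steps, num_classes)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        # x: (B, C, T) -> logits: (B, num_classes)
        return self.classifier(self.backbone(x))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)
        return loss  # Lightning backpropagates this and steps the optimizer

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
```

The `Simple1DConvNetwork` used below follows this same structure, with a deeper architecture.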
] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 5, "metadata": {}, "outputs": [ { @@ -186,7 +199,7 @@ ")" ] }, - "execution_count": 3, + "execution_count": 5, "metadata": {}, "output_type": "execute_result" } @@ -195,10 +208,10 @@ "from ssl_tools.models.nets.convnet import Simple1DConvNetwork\n", "\n", "model = Simple1DConvNetwork(\n", - " input_channels=6, # The number of input channels (accel-x, accel-y, accel-z, gyro-x, gyro-y, gyro-z)\n", - " num_classes=6, # The number of output classes\n", - " time_steps=60, # Used to automatically calculate the input size of the linear layer\n", - " learning_rate=1e-3, # The learning rate for the optimizer\n", + " input_channels=6, # The number of input channels (accel-x, accel-y, ...)\n", + " num_classes=6, # The number of output classes\n", + " time_steps=60, # Used to auto calculate the input size of FC layers\n", + " learning_rate=1e-3, # The learning rate of the Adam optimizer\n", ")\n", "\n", "model" @@ -208,16 +221,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "To train a Lightning model using Pytorch Lightning, we must create a `Trainer` and call the `fit` method. The `Trainer` is responsible for training the model. It has several parameters, such as the number of epochs, the number of GPUs to use, the number of TPU cores to use, etc. \n", + "To train a Lightning model using Pytorch Lightning, we must create a `Trainer` and call the `fit` method. The `Trainer` is responsible for training the model. \n", + "It has several parameters, such as the number of epochs, the number of GPUs/CPUs to use, *etc*. \n", "\n", - "We will train our model using the already defined dataloader. The `fit` method will be responsible for training the model using the training and validation data loaders. After the training, we will test the model using the test data loader.\n", + "We will train our model using the already defined dataloader. \n", + "The `fit` method will be responsible for training the model using the training and validation data loaders. \n", + "After training, we will test the model using the test data loader and Trainer's `test` method.\n", "\n", - "The training will run for 300 epochs (`max_epochs`) and will use 1 (`devices`) GPU only (`accelerator`)." + "Here, the training will run for 300 epochs (`max_epochs`) and will use only 1 (`devices`) GPU (`accelerator`)." 
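In outline, this boils down to a few lines (a sketch of what the following cells do):

```python
import lightning as L

# Trainer configured as described above: 300 epochs on a single GPU
trainer = L.Trainer(max_epochs=300, accelerator="gpu", devices=1)

# Fit using the train/validation dataloaders provided by the data module
trainer.fit(model, data_module)

# Evaluate on the test dataloader; returns a list with the logged metrics
results = trainer.test(model, data_module)
print(results)
```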
] }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 6, "metadata": {}, "outputs": [ { @@ -227,13 +243,7 @@ "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", - "HPU available: False, using: 0 HPUs\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ + "HPU available: False, using: 0 HPUs\n", "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", "\n", " | Name | Type | Params\n", @@ -249,114 +259,133 @@ ] }, { - "name": "stdout", - "output_type": "stream", - "text": [ - "Sanity Checking DataLoader 0: 0%| | 0/2 [00:00┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", - "┃ Test metric DataLoader 0 ┃\n", - "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│ test_acc 0.9027777910232544 │\n", - "│ test_loss 0.626140832901001 │\n", - "└───────────────────────────┴───────────────────────────┘\n", - "\n" - ], + "application/vnd.jupyter.widget-view+json": { + "model_id": "22489dabd4ac414597c2e5684f91780b", + "version_major": 2, + "version_minor": 0 + }, "text/plain": [ - "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", - "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", - "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.9027777910232544 \u001b[0m\u001b[35m \u001b[0m│\n", - "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.626140832901001 \u001b[0m\u001b[35m \u001b[0m│\n", - "└───────────────────────────┴───────────────────────────┘\n" + "Validation: | | 0/? [00:00┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", + "┃ Test metric DataLoader 0 ┃\n", + "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", + "│ test_acc 0.8333333134651184 │\n", + "│ test_loss 1.9901254177093506 │\n", + "└───────────────────────────┴───────────────────────────┘\n", + "\n" + ], + "text/plain": [ + "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", + "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", + "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.8333333134651184 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.9901254177093506 \u001b[0m\u001b[35m \u001b[0m│\n", + "└───────────────────────────┴───────────────────────────┘\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "[{'test_loss': 1.9901254177093506, 'test_acc': 0.8333333134651184}]" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "trainer.test(model, data_module)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Using any other set from data module" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And if we want to test the model using the validation data loader, we also can use the `trainer.test` method, but passing the `val_dataloader`. 
\n", + "Remember that as we are not passing a `LightningDataModule` to the `test` method, but a `DataLoader`, we must call `setup` method." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "bd31957d8c5a40bfa0624948e306396e", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Testing: | | 0/? [00:00┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", "┃ Test metric DataLoader 0 ┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│ test_acc 0.5680751204490662 │\n", - "│ test_loss 13.804328918457031 │\n", + "│ test_acc 0.5962441563606262 │\n", + "│ test_loss 14.916933059692383 │\n", "└───────────────────────────┴───────────────────────────┘\n", "\n" ], @@ -419,8 +4653,8 @@ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.5680751204490662 \u001b[0m\u001b[35m \u001b[0m│\n", - "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 13.804328918457031 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.5962441563606262 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 14.916933059692383 \u001b[0m\u001b[35m \u001b[0m│\n", "└───────────────────────────┴───────────────────────────┘\n" ] }, @@ -430,17 +4664,18 @@ { "data": { "text/plain": [ - "[{'test_loss': 13.804328918457031, 'test_acc': 0.5680751204490662}]" + "[{'test_loss': 14.916933059692383, 'test_acc': 0.5962441563606262}]" ] }, - "execution_count": 6, + "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data_module.setup(\"fit\")\n", - "trainer.test(model, data_module.val_dataloader())" + "validation_dataloader = data_module.val_dataloader()\n", + "trainer.test(model, validation_dataloader)" ] } ], From 2864d30dee04ba774eb4e1732d78fc5e07a0cecb Mon Sep 17 00:00:00 2001 From: Otavio Napoli Date: Fri, 2 Feb 2024 11:43:35 -0300 Subject: [PATCH 2/3] Updated notebooks 3 and 4 --- notebooks/03_training_ssl_model.ipynb | 128 +++--- notebooks/04_using_experiments.ipynb | 555 +++++++++++++++++++++----- 2 files changed, 533 insertions(+), 150 deletions(-) diff --git a/notebooks/03_training_ssl_model.ipynb b/notebooks/03_training_ssl_model.ipynb index 88886ea..0442211 100644 --- a/notebooks/03_training_ssl_model.ipynb +++ b/notebooks/03_training_ssl_model.ipynb @@ -4,20 +4,20 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# 3. Training a self-supervised model (CPC)\n", + "# 3. Training a self-supervised model: Contrastive Predictive Coding (CPC)\n", "\n", "In this notebook, we will train a self-supervised model using the Contrastive Predictive Coding (CPC) method. 
\n", "This method is based on the idea of predicting future tokens in a sequence, and it has been shown to be very effective in learning useful representations for downstream tasks.\n", "This framework already provides an implementation of CPC, so we will use it to train the model.\n", "\n", - "We will pre-train the model using KuHar dataset, and then we will use the learned representations to train a classifier for the downstream task. \n", + "We will pre-train the model using KuHar dataset, and then we will use the learned representations to train a classifier for the downstream task (fine tuning). \n", "For both stages of training, as the last notebook, we will:\n", "\n", "1. Create a `Dataset` and then `LightningDataModule` to load the data;\n", "2. Instantiate the CPC model; and\n", "3. Train the model using PyTorch Lightning.\n", "\n", - "We can instantiate the model in two ways:\n", + "Every SSL model in this framework can instantiate in two ways:\n", "\n", "1. Instantiate each element, such as the encoder, the autoregressive model, and the CPC model, and then pass them to the CPC model; or\n", "2. Using builder methods to instantiate the model. In this case, we do not need to instantiate each element separately, but we can still customize the model by passing the desired parameters to the builder methods. This is the approach we will use in this notebook.\n", @@ -31,7 +31,9 @@ "source": [ "## Pre-training the model\n", "\n", - "We will pre-train the model using the KuHar dataset. CPC is a self-supervised method, so we do not need labels to train the model. However, CPC assumes that the input data is sequential, that is, an input is a sequence of time-steps comprising different acitivities. Thus, for HAR, usually, one sample (a multi-modal time-series) correspond to the whole time-series of a single user.\n", + "We will pre-train the model using the KuHar dataset. CPC is a self-supervised method, so we do not need labels to train the model. \n", + "However, CPC assumes that the input data is sequential, that is, an input is a sequence of time-steps comprising different acitivities. 
\n", + "Thus, for HAR, usually, one sample is a multi-modal time-series correspond to the whole time-series of a single user.\n", "\n", "### Creating the LightningDataModule\n", "\n", @@ -53,7 +55,7 @@ " ...\n", "```\n", "\n", - "And the content of each file should be something like:\n", + "And the content of each CSV file should be something like:\n", "\n", "| timestamp | accel-x | accel-y | accel-z | gyro-x | gyro-y | gyro-z | activity |\n", "|-----------|---------|---------|---------|--------|--------|--------|-----------|\n", @@ -66,16 +68,16 @@ "In this way, we should use the `SeriesFolderCSVDataset` to load the data.\n", "This will create a `Dataset` for us, where each CSV file is a sample, and each row of the CSV file is a time-step, and the columns are the features.\n", "\n", - "> **NOTE**: The samples may have different lengths, so, for this method, the `batch_size` must be 1.\n", - "\n", "If your data is organized as above, where inside the root folder (`data/` in this case) there are sub-folders for each split (`train/`, `validation/`, and `test/`), and inside each split folder there are the CSV files, you can use the `UserActivityFolderDataModule` to create a `LightningDataModule` for you.\n", "This class will create `DataLoader` of `SeriesFolderCSVDataset` for each split (train, validation, and test), and will setup data correctly.\n", "\n", - "In this notebook, we will use the `UserActivityFolderDataModule` to create the `LightningDataModule` for us. This class minimally requires:\n", + "In this notebook, we will use the `UserActivityFolderDataModule` to create the `LightningDataModule` for us. This class requires the following parameters:\n", "\n", "- `data_path`: the root directory of the data;\n", "- `features`: the name of the features columns;\n", - "- `pad`: a boolean indicating if the samples should be padded to the same length, that is, the length of the longest sample in the dataset. The padding scheme will replicate the samples, from the beginning, until the length of the longest sample is reached. " + "- `pad`: a boolean indicating if the samples should be padded to the same length, that is, the length of the longest sample in the dataset. The padding scheme will replicate the samples, from the beginning, until the length of the longest sample is reached. \n", + " \n", + "> **NOTE**: The samples may have different lengths, so, for this method, the `batch_size` must be 1." ] }, { @@ -83,6 +85,13 @@ "execution_count": 1, "metadata": {}, "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[1706884475.353781] [aae107fc745c:2265333:f] vfs_fuse.c:281 UCX ERROR inotify_add_watch(/tmp) failed: No space left on device\n" + ] + }, { "data": { "text/plain": [ @@ -120,9 +129,10 @@ "### Pre-training the model\n", "\n", "Here we will use the builder method `build_cpc` to instantiate the CPC model.\n", - "This will instantiate an CPC self-supervised model, with the default encoder (`ssl_tools.models.layers.gru.GRUEncoder`), that is an GRU+Linear, and the default autoregressive model (`torch.nn.GRU`), a linear layer.\n", + "This will instantiate an CPC self-supervised model, with the default encoder (`ssl_tools.models.layers.gru.GRUEncoder`), that is an GRU+Linear, and the default autoregressive model (`torch.nn.GRU`).\n", "\n", - "We can parametrize the creation of the model by passing the desired parameters to the builder method. 
The `build_cpc` method can be parametrized the following parameters:\n", + "We can parametrize the creation of the model by passing the desired parameters to the builder method. T\n", + "he `build_cpc` method can be parametrized the following parameters:\n", "\n", "- `encoding_size`: the size of the encoded representation;\n", "- `in_channels`: number of input features;\n", @@ -131,9 +141,10 @@ "- `learning_rate`: the learning rate of the optimizer;\n", "- `window_size` : size of the input windows (`X_t`) to be fed to the encoder (GRU).\n", "\n", - "All parameters are optional, and have default values. You may want to consult the documentation of the method to see the default values and additional parameters.\n", + "All parameters are optional, and have default values. \n", + "You may want to consult the documentation of the method to see the default values and additional parameters.\n", "\n", - "Note that the `LightningModule` returned by the `build_cpc` method is already configured to use the `CPC` loss, and the `Adam` optimizer." + "Note that the `LightningModule` returned by the `build_cpc` method is already configured to use the CPC loss, and the `Adam` optimizer." ] }, { @@ -217,7 +228,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "ae5e2aa76b724bd28d4a6a30a24741b6", + "model_id": "8e3634f46d0440d78d1cc4df789b6f63", "version_major": 2, "version_minor": 0 }, @@ -231,7 +242,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "09c22282fde24cfd9de81b97f95e86fd", + "model_id": "640b0f0a79794c2d97ca00a311e4b08d", "version_major": 2, "version_minor": 0 }, @@ -245,7 +256,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "bef77ce96a814d7285754998c34e9ac9", + "model_id": "36aa3a12d53f47e39962e445c39d2af3", "version_major": 2, "version_minor": 0 }, @@ -259,7 +270,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "1937e63c20084b50bb692146e61413a1", + "model_id": "9b41ae7b312341bb8872f7b4e56a753e", "version_major": 2, "version_minor": 0 }, @@ -273,7 +284,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "f69d85b4c6d740a69283266cec51d423", + "model_id": "dbcc4c7c3da648fa94d7a3f2663ffe5e", "version_major": 2, "version_minor": 0 }, @@ -287,7 +298,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "b646a368e4604032afa6c76ad58e442c", + "model_id": "7f89209f2f5b44488be9b5d002fb949f", "version_major": 2, "version_minor": 0 }, @@ -301,7 +312,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "34378bbe5c29451b8e8699f5d5e5c5f3", + "model_id": "52322530fcc641b2b3f2da9e76d23a5d", "version_major": 2, "version_minor": 0 }, @@ -315,7 +326,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "7eb66332ab514500beab85981de2cc86", + "model_id": "268860b5d4104d1f83692fe48f173f7c", "version_major": 2, "version_minor": 0 }, @@ -329,7 +340,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "c497708ecea642399c3972d6588834dc", + "model_id": "96c45fd34de246cea3b0321434c8205a", "version_major": 2, "version_minor": 0 }, @@ -343,7 +354,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "81d632bcc570499580819d00fad1f48c", + "model_id": "66aa47a3e2e34cd383f03c62dde9bbaf", "version_major": 2, "version_minor": 0 }, @@ -357,7 +368,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "5f64cf4dbb41406da82a5a8028f50080", + "model_id": 
"e6ce5719fcd540dfad7536ac5ac75baf", "version_major": 2, "version_minor": 0 }, @@ -371,7 +382,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "2c930a3f8e18484bb2fc18b2a4773d1a", + "model_id": "e472b12bf71c4b2f909f481385a2b371", "version_major": 2, "version_minor": 0 }, @@ -404,7 +415,9 @@ "source": [ "This finishes the pre-training stage. \n", "\n", - "To obtain the latent representations of the data, we must use cpc `forward` method on the data. In this framework, the `forward` method of the SSL models returns the latent representations of the input data. Usually this is the output of the encoder, as in this case, but it may vary depending on the model.\n", + "To obtain the latent representations of the data, we must use `model.forward()` method on the data. \n", + "In this framework, the `forward` method of the SSL models returns the latent representations of the input data. \n", + "Usually this is the output of the encoder, as in this case, but it may vary depending on the model.\n", "\n", "We will use the encoder to obtain the latent representations of the data, and then we will use these representations to train a classifier for the downstream task." ] @@ -429,7 +442,8 @@ "Human acivity recognition is a supervised classification task, that usually receives multi-modal windowed time-series as input, diferently from the self-supervised task, that receives the whole time-series of a single user.\n", "Thus, we cannot use the same `LightningDataModule` to load the data for the downstream task. \n", "\n", - "In this notebook, we will use the windowed time-series version of the KuHar dataset, that each split is a single CSV file, containing windowed time-series of the users. The content of the file should be something like:\n", + "In this notebook, we will use the windowed time-series version of the KuHar dataset, that each split is a single CSV file, containing windowed time-series of the users. \n", + "The content of the file should be something like:\n", "\n", "```\n", "KuHar/\n", @@ -438,7 +452,7 @@ " test.csv\n", "```\n", "\n", - "The `train.csv` file may look like this:\n", + "The CSVs file may look like this:\n", "\n", "| accel-x-0 | accel-x-1 | accel-y-0 | accel-y-1 | class |\n", "|-----------|-----------|-----------|-----------|--------|\n", @@ -494,18 +508,24 @@ "To handle the fine-tune process, we can design a new model, that is composed of the pre-trained backbone and the prediction head, and then train this new model with the labeled data. \n", "In order to facilitate this process, this framework provides the `SSLDiscriminator` class, that receives the backbone model and the prediction head, and then trains the classifier with the labeled data.\n", "\n", - "In summary, the `SSLDiscriminator` class is a `LightningModule` that generate the representations of the input data using the backbone model, that is, using the `forward` method of the backbone model, and then uses the prediction head to output the predictions. The predictions and labels are then used to compute the loss and train the model. 
\n", - "By default, the `SSLDiscriminator` is trained using the `Adam` optimizer with the `learning_rate` defined by the user (1e-3 by default).\n", + "In summary, the `SSLDiscriminator` class is a `LightningModule` that generate the representations of the input data using the backbone model, that is, using the `forward` method of the pre-trained backbone model, and then uses the prediction head to output the predictions, something like `y_hat = prediction_head(backbone(sample))`. \n", + "The predictions and labels are then used to compute the loss and train the model. \n", + "By default, the `SSLDiscriminator` is trained using the `Adam` optimizer with parametrizable `learning_rate`.\n", "\n", - "It worth to mention that the `SSLDiscriminator` class `forward` method receives the input data and the labels, and returns the predictions. This is different from the `forward` method of the self-supervised models, that receives only the input data and returns the latent representations of the input data.\n", + "It worth to mention that the `SSLDiscriminator` class `forward` method receives the input data and the labels, and returns the predictions. \n", + "This is different from the `forward` method of the self-supervised models, that receives only the input data and returns the latent representations of the input data.\n", "\n", "It worth to notice that the fine-tune train process can be done in two ways: \n", "\n", "1. Fine-tuning the whole model, that is, backbone (encoder) and classifier, with the labeled data; or \n", "2. Fine-tuning only the classifier, with the labeled data.\n", - "The `SSLDisriminator` class can handle both cases, with the `update_backbone` parameter. If `update_backbone` is `True`, the whole model is fine-tuned (case 1, above), otherwise, only the classifier is fine-tuned (case 2, above).\n", "\n", - "Let's create our prediction head and `SSLDisriminator` model and train it with the labeled data. Prediction heads for most popular tasks are already implemented in the `ssl_tools.models.ssl.modules.heads` module. In this notebook, we will use the `CPCPredictionHead` prediction head, that is a MLP with 3 hidden layers and dropout." + "The `SSLDisriminator` class can handle both cases, with the `update_backbone` parameter. \n", + "If `update_backbone` is `True`, the whole model is fine-tuned (case 1, above), otherwise, only the classifier is fine-tuned (case 2, above).\n", + "\n", + "Let's create our prediction head and `SSLDisriminator` model and train it with the labeled data. \n", + "Prediction heads for most popular tasks are already implemented in the `ssl_tools.models.ssl.modules.heads` module. \n", + "In this notebook, we will use the `CPCPredictionHead` prediction head, that is a MLP with 3 hidden layers and dropout." ] }, { @@ -556,14 +576,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will create the `SSLDisriminator` model. \n", - "The `SSLDisriminator` minimally requires:\n", + "Now we create the `SSLDisriminator` model. This class requires the following parameters:\n", "\n", "- `backbone`: the backbone model, that is, the pre-trained model;\n", "- `head`: the prediction head model;\n", "- `loss_fn`: the loss function to be used to train the model;\n", "\n", - "Also, we can attach metrics that will be calculated with for every batch of `validation` and `test` sets. 
The metrics is passed using the `metrics` parameter of the `SSLDisriminator` class, that receives a dictionary with the name of the metric as key and the `torchmetrics.Metric` as value.\n", + "Also, we can attach metrics that will be calculated with for every batch of `validation` and `test` sets. \n", + "The metrics is passed using the `metrics` parameter of the `SSLDisriminator` class, that receives a dictionary with the name of the metric as key and the `torchmetrics.Metric` as value.\n", "\n", "Let's create the `SSLDiscriminator` and attach the `Accuracy` metric to the model, to check the validation accuracy per epoch." ] @@ -667,7 +687,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "2cf48079d50145c7afc3309882108058", + "model_id": "b4a9bc62bb02433c814bb266ad167222", "version_major": 2, "version_minor": 0 }, @@ -690,7 +710,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "e2ae4d6c11b94de5995bb8a7ef1c495b", + "model_id": "5a315d4c45dc43baac89162331a17467", "version_major": 2, "version_minor": 0 }, @@ -704,7 +724,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "b5eb6ffc4ee4423a93c60c73a1edfbf1", + "model_id": "a43e85394ce34b1f987664c15e50fd30", "version_major": 2, "version_minor": 0 }, @@ -718,7 +738,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "c66686d98e9d4f6daf3e11ad6dbba580", + "model_id": "ae46f2bf6c324fe7b442a2f88f27a28e", "version_major": 2, "version_minor": 0 }, @@ -732,7 +752,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "f5ddf90cc6cb45c698b80b613d67ba43", + "model_id": "242bcdc871da408eaf38a5a56334a8b1", "version_major": 2, "version_minor": 0 }, @@ -746,7 +766,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "a2e606bf41644aa3ba73f1dd1af1f0ba", + "model_id": "1970a89b26fd4cf39c0ca393b9b6c027", "version_major": 2, "version_minor": 0 }, @@ -760,7 +780,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "5225a29cc4734fe2a51e0a672c017e44", + "model_id": "6bdbd86f03074702a869b386d177fb7d", "version_major": 2, "version_minor": 0 }, @@ -774,7 +794,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "1daec8bdfe244e98a0506f59cadce3e8", + "model_id": "d2167f1510bb4d64a54da935335ccc21", "version_major": 2, "version_minor": 0 }, @@ -788,7 +808,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "0bf6a6c07005402aa725b3711cfc1e8a", + "model_id": "1f3e63d7f34f4dd2b0dc31467dd464dd", "version_major": 2, "version_minor": 0 }, @@ -802,7 +822,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "6f8c6f1f5b7e42c5b4f7a94e3ab5c929", + "model_id": "69e4bfc1a324439a8ecb59e82b9e810f", "version_major": 2, "version_minor": 0 }, @@ -816,7 +836,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "e378c409ea68429b80f4e26858a472dd", + "model_id": "220928c965124f8fa5d5feeb1ec05f2d", "version_major": 2, "version_minor": 0 }, @@ -830,7 +850,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "e6535154a526421997abf0f6c82cda9f", + "model_id": "6e0696c0f6a144909060b0ab7d53b45a", "version_major": 2, "version_minor": 0 }, @@ -862,7 +882,7 @@ "metadata": {}, "source": [ "Let's evaluate the model using the test set. 
If we have added the `Accuracy` metric to the model, it will calculate the accuracy of the model on the test set.\n", - "All logged metrics will be returnet by `.test()` method, as a dictionary." + "All logged metrics will be returnet by `.test()` method, as a list of dictionaries." ] }, { @@ -881,7 +901,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "0075d6e46e5243eca4724a1a3ead2011", + "model_id": "96b0536249cb4290abbe1fb6367f6de6", "version_major": 2, "version_minor": 0 }, @@ -899,7 +919,7 @@ "┃ Test metric DataLoader 0 ┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", "│ test_acc 0.5277777910232544 │\n", - "│ test_loss 1.5032936334609985 │\n", + "│ test_loss 1.4903016090393066 │\n", "└───────────────────────────┴───────────────────────────┘\n", "\n" ], @@ -908,7 +928,7 @@ "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.5277777910232544 \u001b[0m\u001b[35m \u001b[0m│\n", - "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.5032936334609985 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.4903016090393066 \u001b[0m\u001b[35m \u001b[0m│\n", "└───────────────────────────┴───────────────────────────┘\n" ] }, @@ -918,7 +938,7 @@ { "data": { "text/plain": [ - "[{'test_loss': 1.5032936334609985, 'test_acc': 0.5277777910232544}]" + "[{'test_loss': 1.4903016090393066, 'test_acc': 0.5277777910232544}]" ] }, "execution_count": 8, @@ -957,7 +977,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "c344339416884a01b84c3bc5dace1252", + "model_id": "64924b90def24c998374ee4e9ebd8478", "version_major": 2, "version_minor": 0 }, diff --git a/notebooks/04_using_experiments.ipynb b/notebooks/04_using_experiments.ipynb index f16629a..4ece8ba 100644 --- a/notebooks/04_using_experiments.ipynb +++ b/notebooks/04_using_experiments.ipynb @@ -6,9 +6,12 @@ "source": [ "# 4. Using Experiments\n", "\n", - "Although the process of training and evaluating models becomes easier due to the abstractions and facilities provided by this framework and Pytorch Lightning, we also standarize the way we conduct experiments, in order to allow for a more systematic and organized approach to the development of models.\n", + "Although the process of training and evaluating models becomes easier due to the abstractions and facilities provided by this framework and Pytorch Lightning, we also aims to standarize the way we conduct experiments, in order to allow for a more systematic and organized approach to the development of models.\n", "\n", - "The `LightningExperiment` class aims to standartize the way we conduct experiments, including: default callacks and loggers, the directory structure for the logs and checkpoints, logging of hyperparameters, and the way we handle the training and evaluation of models and data modules.\n", + "The `LightningExperiment` implements a default pipeline (similar to ones used in previous notebooks) and provides a set of default configurations and settings for the experiments. 
\n", + "This includes the default configurations for the Lightning Trainer, Logger, and Callbacks, as well as model and data module configurations.\n", + "Also, it standardize the ouputs of the experiments, and the way we log the hyperparameters and results.\n", + "However, it also provides flexibility as the user can customize the experiment by overriding the default configurations and settings.\n", "\n", "In this notebook, we will demonstrate how to use the `LightningExperiment` class to conduct experiments in a systematic and organized way." ] @@ -20,55 +23,86 @@ "\n", "## Experiment Structure\n", "\n", - "The `LightningExperiment` follows the structure below. The first box is the name of the class, the second box is the name of the attributes and their type, and the third box is the methods of that class, the input parameters and return type. \n", - "The arrows represent the inheritance relationship between the classes. \n", - "Derived classes inherit the attributes and methods of their parent classes, that is, it have access to all the attributes and methods of the parent class. \n", - "Methods named in italic are abstract methods, that is, they must be implemented by the derived class. Some methods are not abstract, or it may already e implemented in some childs (overriden). \n", + "The `LightningExperiment` follows the structure below. Each rectangle (vertex of the graph) corresponds to a class, and the arrows (edges of the graph) correspond to the inheritance relationship between the classes. Inside each rectangle, there are three boxes. The first box is the name of the class, the second box is the name of the attributes and their type, and the third box is the methods of that class, the input parameters and return type. \n", "\n", + "As derived classes inherit the attributes and methods of their parent classes, they have access to all the attributes and methods of the parent class. \n", + "Methods named in italic are abstract methods, that is, they must be implemented in some of the derived classes (below him). 
\n", + "Some methods are not abstract, thus they have a default implementation, but can be overriden by the user to customize the experiment.\n", + "You may want to check some useful material, if you are not familiar with the concept of inheritance in object-oriented programming, such as [this one from Real Python](https://realpython.com/inheritance-composition-python/), that gives a comprehensive overview of inheritance in Python, or [this one from Geeks For Geeks](https://www.geeksforgeeks.org/inheritance-in-python/).\n", "\n", - "![Experiment Structure](experiment_classes.svg)\n", "\n", + "![Experiment Structure](experiment_classes.svg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "### The `Experiment` class\n", "\n", "The `Experiment` class is the base class for all experiments and includes the `experiment_dir` (where logs, checkpoints, and outputs are saved), the `name` and `run_id` (tipically, the time).\n", "The experiment directory is created when the experiment is instantiated, and the `experiment_dir` attribute is set to the path of the created directory.\n", "The experiment consist in 3 stages: `setup`, `run` and `teardown`.\n", - "You can use the `execute` method to run the experiment, that will call the `setup`, `run` and `teardown` methods in sequence.\n", + "The `run` method is an abstract method, and must be implemented in the derived classes.\n", "\n", + "You can use the `execute` method to run the experiment, that will call the `setup`, `run` and `teardown` methods in sequence." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "### The `LightningExperiment` class\n", "\n", - "The `LightningExperiment` adds common parameters for train and test models using Pytorch Lightning. Usually this is the base class for any experiment that uses Pytorch Lightning.\n", - "This class also implements the `run` method, that execute a generic Pytorch Lightning pipeline, and calls the `get_callbacks`, `get_logger`, `get_data_module`, `get_model`, `get_trainer`, `load_checkpoint`, `run_model` and `log_hyperparameter` methods. \n", - "The pseudo-code for the `run` method is:\n", + "The `LightningExperiment` adds common parameters when using Pytorch Lightning for training or testing. \n", + "Usually this is the base class for any experiment that uses Pytorch Lightning.\n", + "This class also implements the `run` method, that execute a generic Pytorch Lightning pipeline, similar to ones used in previous notebooks. \n", + "This pipeline calls some methods that it defines.\n", + "In fact, the pseudo-code for pipeline implemented by the `run` method is:\n", "\n", "1. Get the model and data module using `get_model` and `get_data_module` methods.\n", - "2. If `self.load` is provided, load the checkpoint using the `load_checkpoint` method.\n", + "2. If `self.load` is provided (path to the checkpoint), load the checkpoint using the `load_checkpoint` method.\n", "3. Get the callbacks and logger using `get_callbacks` and `get_logger` methods.\n", - "4. Log the hyperparameters using the `log_hyperparameters` method.\n", - "5. Get the trainer using the `get_trainer` method.\n", + "4. Log the hyperparameters of experiment and model using the `log_hyperparameters` method.\n", + "5. Get the trainer using the `get_trainer` method and attach the logger and callbacks.\n", "6. Run the model using the `run_model` method.\n", "\n", - "The user can override these methods to customize the experiment. 
By default, `get_callbacks`, `get_logger`, `load_checkpoint`, and `log_hyperparameters` have default implementations, and `get_data_module`, `get_model`, `get_trainer`, and `run_model` are abstract methods that must be implemented by the derived class.\n", - "\n", - "\n", + "The user can override these methods to customize the experiment. By default, `get_callbacks`, `get_logger`, `load_checkpoint`, and `log_hyperparameters` have default implementations, and `get_data_module`, `get_model`, `get_trainer`, and `run_model` are abstract methods that must be implemented by the derived class." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "### The `LightningTrain` and `LightningTest` classes\n", "\n", - "The `LightningTrain` and `LightningTest` classes are derived from `LightningExperiment` and are used to train and test models, respectively. These classes adds more specific parameters for training and testing models using Pytorch Lightning and implements specific `get_callbacks`, `get_trainer`, and `run_model` methods, that are specific for training and testing models, respectively. This standardizes the way we train and test models, logging the same information and using the same callbacks and loggers. Thus, it allows the user to focus on the model and data module, and not on the training and testing process, that is already standardized (and can be customized) and can be reused in different experiments.\n", - "In fact, `get_model` and `get_data_module` are abstract methods that must be implemented by the derived class, that varies according to the model and data module used in the experiment.\n", - "\n", + "The `LightningTrain` and `LightningTest` classes are derived from `LightningExperiment` and are used to train and test models, respectively. \n", + "These classes adds more specific parameters for different contexts, such as training and testing. \n", + "For instance, training usually requires the number of epochs, learning rate, and other parameters.\n", + "Both classes implement parent's `get_callbacks`, `get_trainer` and `run_model` methods. \n", "\n", - "### The `LightningSSLTrain` class\n", + "This standardizes the way we train and test models, logging the same information and using the same callbacks and loggers. \n", + "It allows the user to focus on the model and data module, and not on the training and testing process, that is already standardized (and can be customized) and can be reused in different experiments.\n", + "In fact, `get_model` and `get_data_module` are abstract methods that must be implemented by the derived class, as it varies according to the model and data module used in the experiment." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### `LightningSSLTrain` class\n", "\n", - "The `LightningTrain` class allow to train arbitrary models. \n", - "The `LightningSSLTrain` class is a derived class that is used to train models using self-supervised learning. It adds 4 new methods: \n", + "While `LightningTrain` class allows training arbitrary models, the `LightningSSLTrain` class is a derived class used to train models using self-supervised learning. 
This class adds more 4 new abstract methods: \n", "\n", "* `get_pretrain_model` and `get_pretrain_data_module`: the user must return the model and data module used to pretrain the model.\n", "* `get_finetune_model` and `get_finetune_data_module`: the user must return the model and data module used to finetune the model.\n", "\n", - "The `training_mode` variable is used to indicate if the model is in pretrain or finetune mode. In fact, the `get_model` and `get_data_module` methods will call the `get_pretrain_model` and `get_pretrain_data_module` methods if `training_mode` is `pretrain`, and the `get_finetune_model` and `get_finetune_data_module` methods if `training_mode` is `finetune`. \n", + "The `training_mode` variable is also introduced and it is used to indicate if the model is in pretrain or finetune mode. \n", + "Then, the `get_model` and `get_data_module` methods will call the `get_pretrain_model` and `get_pretrain_data_module` methods if `training_mode` is `pretrain`, and the `get_finetune_model` and `get_finetune_data_module` methods if `training_mode` is `finetune`. \n", "\n", "One important thing to note is about `load` parameter. \n", - "If it is provided, the `load_checkpoint` method will load the checkpoint for the model, in order to resume the training. The `get_finetune_model` receives an additional parameter, the `load_backbone` parameter. After the backbone is loaded, the `load` parameter is used to resume the finetuning, that is, load the checkpoint for the finetune model (`SSLDiscriminator`)." + "If it is provided, it will load the checkpoint for the model, in order to resume the training. This is valid both for pretrain and finetune modes. \n", + "The `load_backbone` parameter is only used in finetune mode, in order to load the checkpoint for the backbone model, that is, load a model that was pretrained using self-supervised learning. This is usually used to start a finetuning from a model that was pretrained using self-supervised learning. If you want to resume a finetuning from a checkpoint, you should use the `load` parameter." ] }, { @@ -77,18 +111,24 @@ "source": [ "## Running CPC Experiment\n", "\n", - "In this notebook, we will demonstrate how to run a CPC experiment, from pretrain to finetune. The `CPCTrain` class derives from `LightningSSLTrain` and implements the `get_pretrain_model`, `get_pretrain_data_module`, `get_finetune_model` and `get_finetune_data_module` methods, while the `CPCTest` class derives from `LightningTest` and implements the `get_model` and `get_data_module` methods.\n", - "Both classes add specific parameters to create CPC model and instantiate the data module.\n", + "In this notebook, we will demonstrate how to run a CPC experiment, from pretrain to finetune and, finally, test. \n", + "The `CPCTrain` class derives from `LightningSSLTrain` and implements the `get_pretrain_model`, `get_pretrain_data_module`, `get_finetune_model` and `get_finetune_data_module` methods, while the `CPCTest` class derives from `LightningTest` and implements the `get_model` and `get_data_module` methods.\n", + "Both classes add specific parameters to instantiate CPC model and the respective data module.\n", "\n", - "Let's first start by pretraining the CPC model, using KuHAR dataset, as in previous notebooks.\n", - "\n", - "### Experiment of Pretraining CPC\n", + "Let's first start by pretraining the CPC model, using KuHAR dataset, as in previous notebooks." 
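Before running it, the `training_mode` dispatch described above can be pictured with a small, purely illustrative sketch (this is not the framework's actual code; the class and method bodies below are only stand-ins for the behaviour of `LightningSSLTrain`):

```python
# Illustrative sketch only: how a LightningSSLTrain-style experiment routes
# model and data module creation according to `training_mode`.
class SSLTrainSketch:
    def __init__(self, training_mode="pretrain", load=None, load_backbone=None):
        self.training_mode = training_mode   # "pretrain" or "finetune"
        self.load = load                     # checkpoint to resume training (either mode)
        self.load_backbone = load_backbone   # pre-trained backbone weights (finetune only)

    # Abstract in the real class; implemented by concrete experiments such as CPCTrain.
    def get_pretrain_model(self):
        raise NotImplementedError

    def get_pretrain_data_module(self):
        raise NotImplementedError

    def get_finetune_model(self):
        raise NotImplementedError

    def get_finetune_data_module(self):
        raise NotImplementedError

    def get_model(self):
        if self.training_mode == "pretrain":
            return self.get_pretrain_model()
        return self.get_finetune_model()

    def get_data_module(self):
        if self.training_mode == "pretrain":
            return self.get_pretrain_data_module()
        return self.get_finetune_data_module()
```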
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Experiment: Pretraining CPC\n", "\n", "The `CPCTrain` class will encapsuate the default code for creating models and data modules from previous notebooks into the `get_pretrain_model` and `get_pretrain_data_module` methods. \n", - "Thus, we just need to pass the required parameters to the `CPCTrain` class and call the `execute` method to run the experiment.\n", - "As `CPCTrain` is a derived class, we can pass the parameters from all parent classes (`epochs`, `accelerator`, `batch_size`, *etc.*), as well as the parameters from the `CPCTrain` class (`window_size`, `num_classes`, *etc.*) in the class constructor.\n", + "Thus, we just need to pass the required parameters to the `CPCTrain` class constructor and call the `execute` method to run the experiment.\n", + "As `CPCTrain` is a derived class, we can pass the parameters from all parent classes (`epochs`, `accelerator`, `seed`, *etc.*), as well as the parameters from the `CPCTrain` class (`window_size`, `num_classes`, *etc.*) in the class constructor.\n", "\n", - "The `CPCTrain` includes parameters to create the model as well as the data module. These parameters include:\n", + "These main parameters include for `CPCTrain` class are:\n", "\n", "* `data`: the path to the dataset folder. For pretrain, the data must be the path to a dataset where the samples are the whole time-series of an user. For finetune, the data must be the path to a dataset where the samples are the windows of the time-series, as in previous notebooks.\n", "* `encoding_size`: the size of the latent representation of the CPC model.\n", @@ -98,25 +138,28 @@ "* `num_classes`: number of classes in the dataset.\n", "* `update_backbone`: boolean indicating if the backbone should be updated during finetuning (only useful for fine-tuning process).\n", "\n", - "Only the `data` parameter is required, the others have default values. Please check the documentation of the `CPCTrain` class for more details.\n", + "Only the `data` parameter is required, the others have default values. \n", + "Please check the documentation of the `CPCTrain` class for more details.\n", "\n", "Let's create the `CPCTrain` class and run the pretraining experiment." ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [] - }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[1706883882.274114] [aae107fc745c:2257856:f] vfs_fuse.c:281 UCX ERROR inotify_add_watch(/tmp) failed: No space left on device\n" + ] + }, { "data": { "text/plain": [ - "LightningExperiment(experiment_dir=logs/pretrain/CPC/2024-02-01_23-52-39, model=CPC, run_id=2024-02-01_23-52-39, finished=False)" + "LightningExperiment(experiment_dir=logs/pretrain/CPC/2024-02-02_11-24-49, model=CPC, run_id=2024-02-02_11-24-49, finished=False)" ] }, "execution_count": 1, @@ -141,7 +184,6 @@ " num_classes=6,\n", " # Trainer params\n", " epochs=10,\n", - " num_workers=12,\n", " batch_size=1,\n", " accelerator=\"gpu\",\n", " devices=1,\n", @@ -159,7 +201,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/pretrain/CPC/2024-02-01_23-52-39 exists and is not empty. 
Previous log files in this directory will be deleted when the new ones are saved!\n", + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/pretrain/CPC/2024-02-02_11-24-49 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", @@ -175,7 +217,7 @@ "Setting up experiment: CPC...\n", "Running experiment: CPC...\n", "Training will start\n", - "\tExperiment path: logs/pretrain/CPC/2024-02-01_23-52-39\n" + "\tExperiment path: logs/pretrain/CPC/2024-02-02_11-24-49\n" ] }, { @@ -234,7 +276,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "d3d79b4d1870487a9225f9f9afdc262c", + "model_id": "84a17deffcaa4fa992ad2c2e009290d2", "version_major": 2, "version_minor": 0 }, @@ -255,11 +297,11 @@ { "data": { "text/html": [ - "
--> Overall fit time: 19.928 seconds\n",
+       "
--> Overall fit time: 31.599 seconds\n",
        "
\n" ], "text/plain": [ - "--> Overall fit time: 19.928 seconds\n" + "--> Overall fit time: 31.599 seconds\n" ] }, "metadata": {}, @@ -293,7 +335,7 @@ "output_type": "stream", "text": [ "Training finished\n", - "Last checkpoint saved at: logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt\n", + "Last checkpoint saved at: logs/pretrain/CPC/2024-02-02_11-24-49/checkpoints/last.ckpt\n", "Teardown experiment: CPC...\n" ] } @@ -307,13 +349,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Once the experiment finished, we may have a directory structure like this:\n", + "Once the experiment finished, we may have a directory structure similar to this:\n", "\n", "```\n", "logs/\n", " pretrain/\n", " CPC/\n", - " 2024-02-01_22-01-31/\n", + " 2024-02-02_10-53-31/\n", " checkpoints/\n", " epoch=9-step=570.ckpt\n", " last.ckpt\n", @@ -321,16 +363,14 @@ " metrics.csv\n", "```\n", "\n", - "This is the default directory structure for experiments, where the experiment directory is `logs/pretrain/CPC/2024-02-01_22-01-31/`. The `checkpoints directory` contains the saved checkpoints and inside it we may have a `last.ckpt` file which is the last checkpoint saved.\n", - "The `hparams.yaml` file contains the hyperparameters, and the `metrics.csv` file contains the metrics logged during training.\n", - "\n", + "This is the default directory structure for experiments. The experiment directory is `logs/pretrain/CPC/2024-02-01_22-01-31/`, and can be accessed using the `cpc_experiment.experiment_dir` attribute. The `checkpoints directory` contains the saved checkpoints and inside it we may have a `last.ckpt` file which is the last checkpoint saved. It can be accessed using the `cpc_experiment.checkpoint_dir` attribute. The `hparams.yaml` file contains the hyperparameters, and the `metrics.csv` file contains the metrics logged during training.\n", "\n", "We can obtain the experiment's model, data module, logger, checkpoint directory, callbacks, trianer, and hyperparameters using the `cpc_experiment.model`, `cpc_experiment.data_module`, `cpc_experiment.logger`, `cpc_experiment.checkpoint_dir`, `cpc_experiment.callbacks`, `cpc_experiment.trainer`, and `cpc_experiment.hyperparameters` attributes, respectively. \n", "These objects are cached in the `cpc_experiment` object, thus, it is instantiated only once, and can be accessed multiple times.\n", "Also, the `cpc_experiment.finished` attribute is a boolean indicating if the experiment has finished sucessfuly or not.\n", "\n", - "We will need this checkpoint to load the weights of the backbone for the finetuning process.\n", - "Let's obtain the checkpoint file and the experiment's model and data module, and then run the finetuning experiment." + "For fine-tunning, we will need this checkpoint to load the weights of the backbone.\n", + "Let's obtain the checkpoint file and then run the finetuning experiment." ] }, { @@ -341,7 +381,7 @@ { "data": { "text/plain": [ - "PosixPath('logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt')" + "PosixPath('logs/pretrain/CPC/2024-02-02_11-24-49/checkpoints/last.ckpt')" ] }, "execution_count": 3, @@ -358,16 +398,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Experiment of Fine-tune CPC\n", + "### Experiment: Fine-tunning CPC\n", "\n", "The `CPCTrain` class also encapsuate the default code for creating models and data modules from previous notebooks into the `get_finetune_model` and `get_finetune_data_module` methods. 
\n", "The behaviour of these methods is similar to the `get_pretrain_model` and `get_pretrain_data_module` methods, but they are used to create the model and data module for the finetuning process.\n", "In fact, the `get_finetune_model` will encapsulate the CPC code inside `SSLDisriminator` class, as seen in previous notebooks.\n", "\n", - "As we use the same class for pretrain and finetune, we just need to set the `training_mode` attribute to `finetune` and set the `load_backbone` parameter to the checkpoint file obtained in the pretrain process. \n", + "As we use the same class for pretrain and finetune, we just need to set the `training_mode` attribute to `finetune` and set the `load_backbone` parameter to the checkpoint file obtained in the pretrain process, in order to load the weights of the backbone model. \n", "Then, we can call the `execute` method to run the experiment.\n", "\n", - "However, it worth to notice that fine tune is an supervised learning process and uses windowed time-series as input. Thus, the `data` parameter must be the path to a dataset where the samples are the windows of the time-series, as in previous notebooks. In our case, we will use the standardized balanced view of the KuHar dataset." + "However, it worth to notice that fine tune is an supervised learning process and uses windowed time-series as input. \n", + "Thus, the `data` parameter must be the path to a dataset where the samples are the windows of the time-series, as in previous notebooks. \n", + "In our case, we will use the standardized balanced view of the KuHar dataset." ] }, { @@ -378,7 +420,7 @@ { "data": { "text/plain": [ - "LightningExperiment(experiment_dir=logs/finetune/CPC/2024-02-02_00-03-12, model=CPC, run_id=2024-02-02_00-03-12, finished=False)" + "LightningExperiment(experiment_dir=logs/finetune/CPC/2024-02-02_11-31-12, model=CPC, run_id=2024-02-02_11-31-12, finished=False)" ] }, "execution_count": 4, @@ -420,7 +462,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/finetune/CPC/2024-02-02_00-03-12 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/finetune/CPC/2024-02-02_11-31-12 exists and is not empty. 
Previous log files in this directory will be deleted when the new ones are saved!\n", "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", @@ -435,10 +477,10 @@ "text": [ "Setting up experiment: CPC...\n", "Running experiment: CPC...\n", - "Loading model from: logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt...\n", + "Loading model from: logs/pretrain/CPC/2024-02-02_11-24-49/checkpoints/last.ckpt...\n", "Model loaded successfully\n", "Training will start\n", - "\tExperiment path: logs/finetune/CPC/2024-02-02_00-03-12\n" + "\tExperiment path: logs/finetune/CPC/2024-02-02_11-31-12\n" ] }, { @@ -495,7 +537,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "f478640308374e5894df3302aa127e92", + "model_id": "92081a68a29343e18b11f8ae49a8f37c", "version_major": 2, "version_minor": 0 }, @@ -518,6 +560,58 @@ }, "metadata": {}, "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "`Trainer.fit` stopped: `max_epochs=10` reached.\n" + ] + }, + { + "data": { + "text/html": [ + "
--> Overall fit time: 12.987 seconds\n",
+       "
\n" + ], + "text/plain": [ + "--> Overall fit time: 12.987 seconds\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n"
+      ],
+      "text/plain": []
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "text/html": [
+       "
\n",
+       "
\n" + ], + "text/plain": [ + "\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Training finished\n", + "Last checkpoint saved at: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt\n", + "Teardown experiment: CPC...\n" + ] } ], "source": [ @@ -534,13 +628,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "PosixPath('logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt')" + "PosixPath('logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt')" ] }, "execution_count": 6, @@ -557,26 +651,27 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### CPC performance evaluation experiment\n", + "## Experiment: Evaluating CPC Performance\n", "\n", - "Finally, we can evaluate the performance of the CPC model using the `CPCTest` class. This class inherits from `LightningTest` and encapsulate the default code for creating models and data modules from previous notebooks into the `get_model` and `get_data_module` methods.\n", + "Finally, we can evaluate the performance of the CPC model using the `CPCTest` class. \n", + "This class inherits from `LightningTest` and encapsulate the default code for creating models and data modules from previous notebooks into the `get_model` and `get_data_module` methods.\n", "\n", - "The signature of the `CPCTest` class is very similar to the `CPCTrain` class. Also, we will use the same data module used in the finetuning process. However, differently from the train process the test process uses the `.test` method in the trainer and not the `.fit` method.\n", - "Also, the `load` parameter is used to load the checkpoint obtained in the finetuning process (that load the weights from `SSLDiscriminator`, backbone and prediction haad).\n", + "The signature of the `CPCTest` class is very similar to the `CPCTrain` class. However, differently from the train process the test process uses the `.test()` method in the Trainer and not the `.fit()` method.\n", + "Also, the `load` parameter is used to load the checkpoint obtained in the finetuning process (that load the weights from `SSLDiscriminator`, backbone and prediction head).\n", "\n", "Let's create experiments to test the CPC model, using the test set from different datasets besides KuHAR." ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-01_23-01-24 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-02_11-32-55 exists and is not empty. 
Previous log files in this directory will be deleted when the new ones are saved!\n", "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", @@ -590,17 +685,17 @@ "output_type": "stream", "text": [ "Dataset at: /workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/KuHar\n", - "Loading model from logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt and executing test using dataset at KuHar...\n", + "Loading model from logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt and executing test using dataset at KuHar...\n", "Setting up experiment: CPC...\n", "Running experiment: CPC...\n", - "Loading model from: logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt...\n", + "Loading model from: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt...\n", "Model loaded successfully\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "f580fe5d3fa9441a846ad0962953d3c3", + "model_id": "d2398c86ef214c4d96783616d573caf7", "version_major": 2, "version_minor": 0 }, @@ -617,8 +712,8 @@ "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
        "┃        Test metric               DataLoader 0        ┃\n",
        "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
-       "│         test_acc              0.4583333432674408     │\n",
-       "│         test_loss              1.576676845550537     │\n",
+       "│         test_acc              0.4652777910232544     │\n",
+       "│         test_loss             1.5481091737747192     │\n",
        "└───────────────────────────┴───────────────────────────┘\n",
        "
\n" ], @@ -626,8 +721,8 @@ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.4583333432674408 \u001b[0m\u001b[35m \u001b[0m│\n", - "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.576676845550537 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.4652777910232544 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.5481091737747192 \u001b[0m\u001b[35m \u001b[0m│\n", "└───────────────────────────┴───────────────────────────┘\n" ] }, @@ -661,13 +756,12 @@ "name": "stderr", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-01_23-01-28 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-02_11-32-57 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", "HPU available: False, using: 0 HPUs\n", - "`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..\n", - "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + "`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..\n" ] }, { @@ -675,19 +769,26 @@ "output_type": "stream", "text": [ "Teardown experiment: CPC...\n", - "Test on dataset KuHar finished !\n", + "Test on dataset KuHar finished!\n", "Dataset at: /workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/MotionSense\n", - "Loading model from logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt and executing test using dataset at MotionSense...\n", + "Loading model from logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt and executing test using dataset at MotionSense...\n", "Setting up experiment: CPC...\n", "Running experiment: CPC...\n", - "Loading model from: logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt...\n", + "Loading model from: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt...\n", "Model loaded successfully\n" ] }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "483322b4b438455c8455b6417a625b5e", + "model_id": "813175344f71488aaf2c634d00c5f080", "version_major": 2, "version_minor": 0 }, @@ -704,8 +805,8 @@ "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
        "┃        Test metric               DataLoader 0        ┃\n",
        "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
-       "│         test_acc              0.3860640227794647     │\n",
-       "│         test_loss             1.6338088512420654     │\n",
+       "│         test_acc              0.43879473209381104    │\n",
+       "│         test_loss             1.5955957174301147     │\n",
        "└───────────────────────────┴───────────────────────────┘\n",
        "
\n" ], @@ -713,8 +814,8 @@ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", - "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.3860640227794647 \u001b[0m\u001b[35m \u001b[0m│\n", - "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.6338088512420654 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.43879473209381104 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.5955957174301147 \u001b[0m\u001b[35m \u001b[0m│\n", "└───────────────────────────┴───────────────────────────┘\n" ] }, @@ -748,7 +849,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-01_23-01-39 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-02_11-32-59 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", "GPU available: True (cuda), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", @@ -761,12 +862,105 @@ "output_type": "stream", "text": [ "Teardown experiment: CPC...\n", - "Test on dataset MotionSense finished !\n", + "Test on dataset MotionSense finished!\n", "Dataset at: /workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/RealWorld_thigh\n", - "Loading model from logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt and executing test using dataset at RealWorld_thigh...\n", + "Loading model from logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt and executing test using dataset at RealWorld_thigh...\n", + "Setting up experiment: CPC...\n", + "Running experiment: CPC...\n", + "Loading model from: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt...\n", + "Model loaded successfully\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "612ec0d361884c7399217888425f69d8", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Output()" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
+       "┃        Test metric               DataLoader 0        ┃\n",
+       "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
+       "│         test_acc              0.40372908115386963    │\n",
+       "│         test_loss              1.643248438835144     │\n",
+       "└───────────────────────────┴───────────────────────────┘\n",
+       "
\n" + ], + "text/plain": [ + "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", + "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", + "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.40372908115386963 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.643248438835144 \u001b[0m\u001b[35m \u001b[0m│\n", + "└───────────────────────────┴───────────────────────────┘\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n"
+      ],
+      "text/plain": []
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "text/html": [
+       "
\n",
+       "
\n" + ], + "text/plain": [ + "\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-02_11-33-01 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "IPU available: False, using: 0 IPUs\n", + "HPU available: False, using: 0 HPUs\n", + "`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Teardown experiment: CPC...\n", + "Test on dataset RealWorld_thigh finished!\n", + "Dataset at: /workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/RealWorld_waist\n", + "Loading model from logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt and executing test using dataset at RealWorld_waist...\n", "Setting up experiment: CPC...\n", "Running experiment: CPC...\n", - "Loading model from: logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt...\n", + "Loading model from: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt...\n", "Model loaded successfully\n" ] }, @@ -780,7 +974,7 @@ { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "bc9753a4e8d549daad98bc8a70185c97", + "model_id": "1d507d05b32d4f5e9563d6393d53b022", "version_major": 2, "version_minor": 0 }, @@ -790,6 +984,147 @@ }, "metadata": {}, "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
+       "┃        Test metric               DataLoader 0        ┃\n",
+       "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
+       "│         test_acc              0.4031635820865631     │\n",
+       "│         test_loss             1.6421446800231934     │\n",
+       "└───────────────────────────┴───────────────────────────┘\n",
+       "
\n" + ], + "text/plain": [ + "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", + "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", + "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.4031635820865631 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.6421446800231934 \u001b[0m\u001b[35m \u001b[0m│\n", + "└───────────────────────────┴───────────────────────────┘\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n"
+      ],
+      "text/plain": []
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "text/html": [
+       "
\n",
+       "
\n" + ], + "text/plain": [ + "\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/test/CPC/2024-02-02_11-33-03 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "IPU available: False, using: 0 IPUs\n", + "HPU available: False, using: 0 HPUs\n", + "`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Teardown experiment: CPC...\n", + "Test on dataset RealWorld_waist finished!\n", + "Dataset at: /workspaces/hiaac-m4/ssl_tools/data/standartized_balanced/UCI\n", + "Loading model from logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt and executing test using dataset at UCI...\n", + "Setting up experiment: CPC...\n", + "Running experiment: CPC...\n", + "Loading model from: logs/finetune/CPC/2024-02-02_11-31-12/checkpoints/last.ckpt...\n", + "Model loaded successfully\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "a48db052fd0141dea8947f2ffebf7fbf", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Output()" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
+       "┃        Test metric               DataLoader 0        ┃\n",
+       "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n",
+       "│         test_acc              0.35652172565460205    │\n",
+       "│         test_loss             1.6707664728164673     │\n",
+       "└───────────────────────────┴───────────────────────────┘\n",
+       "
\n" + ], + "text/plain": [ + "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n", + "┃\u001b[1m \u001b[0m\u001b[1m Test metric \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m DataLoader 0 \u001b[0m\u001b[1m \u001b[0m┃\n", + "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n", + "│\u001b[36m \u001b[0m\u001b[36m test_acc \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 0.35652172565460205 \u001b[0m\u001b[35m \u001b[0m│\n", + "│\u001b[36m \u001b[0m\u001b[36m test_loss \u001b[0m\u001b[36m \u001b[0m│\u001b[35m \u001b[0m\u001b[35m 1.6707664728164673 \u001b[0m\u001b[35m \u001b[0m│\n", + "└───────────────────────────┴───────────────────────────┘\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n"
+      ],
+      "text/plain": []
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "text/html": [
+       "
\n",
+       "
\n" + ], + "text/plain": [ + "\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Teardown experiment: CPC...\n", + "Test on dataset UCI finished!\n" + ] } ], "source": [ @@ -803,8 +1138,7 @@ " \"MotionSense\",\n", " \"RealWorld_thigh\",\n", " \"RealWorld_waist\",\n", - " \"UCI\"\n", - " \"WISDM\"\n", + " \"UCI\",\n", "]\n", "\n", "results = dict()\n", @@ -822,6 +1156,7 @@ " in_channel=6,\n", " num_classes=6,\n", " # Trainer params\n", + " batch_size=256,\n", " accelerator=\"gpu\",\n", " devices=1,\n", " )\n", @@ -830,6 +1165,34 @@ " print(f\"Test on dataset {dataset} finished!\")" ] }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'KuHar': [{'test_loss': 1.5481091737747192, 'test_acc': 0.4652777910232544}],\n", + " 'MotionSense': [{'test_loss': 1.5955957174301147,\n", + " 'test_acc': 0.43879473209381104}],\n", + " 'RealWorld_thigh': [{'test_loss': 1.643248438835144,\n", + " 'test_acc': 0.40372908115386963}],\n", + " 'RealWorld_waist': [{'test_loss': 1.6421446800231934,\n", + " 'test_acc': 0.4031635820865631}],\n", + " 'UCI': [{'test_loss': 1.6707664728164673, 'test_acc': 0.35652172565460205}]}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# We can acess the results\n", + "results" + ] + }, { "cell_type": "markdown", "metadata": {}, From 868dcfee6bc2599673426b1ed6be8d8c9d4eca1c Mon Sep 17 00:00:00 2001 From: Otavio Napoli Date: Fri, 2 Feb 2024 11:44:00 -0300 Subject: [PATCH 3/3] Removed unused Figure Signed-off-by: Otavio Napoli --- notebooks/experiment_classes.pdf | Bin 25391 -> 0 bytes 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 notebooks/experiment_classes.pdf diff --git a/notebooks/experiment_classes.pdf b/notebooks/experiment_classes.pdf deleted file mode 100644 index 73a79b2e23595efeab12a047a6b39e8640424620..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 25391 zcmd42WmFx_)-8-{fZ!V3-QC??g6qbDySuwfu;A_z+#N!2_u%d>A0#K|JnwkV9p4!D z*QMEAU8|&5t*YHU=N_|2G7=%%wf2>@#&-iRwfRn_$+TG zMHqa1d^%Ax3kQJRTWX>201yTkS{ngiczI#$9qa)5mM|{q$5=~NOQJWP?FP6K-wbw! 