diff --git a/how-to/1_Read_and_visualise/TIFFStackReader.ipynb b/how-to/1_Read_and_visualise/TIFFStackReader.ipynb
new file mode 100644
index 0000000..6355600
--- /dev/null
+++ b/how-to/1_Read_and_visualise/TIFFStackReader.ipynb
@@ -0,0 +1,299 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# -*- coding: utf-8 -*-\n",
+    "# Copyright 2021 - 2024 United Kingdom Research and Innovation\n",
+    "# Copyright 2021 - 2024 The University of Manchester\n",
+    "#\n",
+    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+    "# you may not use this file except in compliance with the License.\n",
+    "# You may obtain a copy of the License at\n",
+    "#\n",
+    "# http://www.apache.org/licenses/LICENSE-2.0\n",
+    "#\n",
+    "# Unless required by applicable law or agreed to in writing, software\n",
+    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+    "# See the License for the specific language governing permissions and\n",
+    "# limitations under the License.\n",
+    "#\n",
+    "# Authored by: Mariam Demir (UKRI-STFC)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Load and Visualise Data Using TIFFStackReader\n",
+    "This example shows how to use the `TIFFStackReader` to load data from .tiff files and quickly visualise the data and geometry."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from cil.io import TIFFStackReader\n",
+    "from cil.framework import AcquisitionGeometry, AcquisitionData\n",
+    "from cil.utilities import dataexample\n",
+    "from cil.utilities.display import show_geometry\n",
+    "from cil.utilities.display import show2D\n",
+    "import numpy as np\n",
+    "import os"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Get the example dataset `dataexample.SANDSTONE` using `download_data()`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "dataexample.SANDSTONE.download_data(data_dir='../data', prompt=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can load the .tif files from the dataset using the `TIFFStackReader`. The reader can take a directory or a list of .tiff or .tif files as an argument (see the short sketch after the next cell). \n",
+    "Here, we create a list of the .tif files, excluding the dark- and flat-field files (see the how-to `3_Processors/FlatDarkFieldNormaliser` notebook for more information on flat and dark fields and how to normalise the data). \n",
+    "\n",
+    "We specify the files to load using `file_name=tiff_files`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "data_dir = '../data/sandstone/proj'\n",
+    "# sort the directory listing so the projections stay in acquisition order\n",
+    "tiff_files = [os.path.join(data_dir, file) for file in sorted(os.listdir(data_dir))\n",
+    "              if \".tif\" in file\n",
+    "              and file not in [\"BBii_0001.tif\", \"BBii_0002.tif\", \"BBii_0031.tif\",\n",
+    "                               \"BBii_0032.tif\", \"BBii_1632.tif\", \"BBii_1633.tif\"]]\n",
+    "\n",
+    "data_reader = TIFFStackReader(file_name=tiff_files)\n",
+    "data = data_reader.read()\n",
+    "\n",
+    "print(data.shape)"
+   ]
+  },
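+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As noted above, the reader can also be pointed at a directory instead of a file list. A minimal sketch (uncomment to try) - here it would also pick up the six dark- and flat-field .tif files, which is why we built an explicit list:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# data_reader = TIFFStackReader(file_name=data_dir)\n",
+    "# data_all = data_reader.read()  # all .tif files in the folder, not just the 8 projections\n",
+    "# print(data_all.shape)"
+   ]
+  },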
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can see the data contains 8 projections, with a panel of size 2160 by 2560 pixels. \n",
+    "For this dataset, the 8 projections are taken at uniform intervals over 0-180 degrees and there are 2160 pixels in the vertical direction, 2560 in the horizontal direction. \n",
+    "\n",
+    "To use CIL's visualisation and reconstruction tools, we need to store this array in an `AcquisitionData` object. This object holds both the pixel data, and the `AcquisitionGeometry`.\n",
+    "\n",
+    "First, we will manually create the `AcquisitionGeometry` object based on information about the experimental setup. This dataset has parallel-beam geometry, so we create a Parallel3D `AcquisitionGeometry` object:\n",
+    "* We know the first axis is angle, the second is vertical, and the third is horizontal, so we set `dimension_labels` to `('angle', 'vertical', 'horizontal')`. \n",
+    "\n",
+    "* We set the `num_pixels` to a tuple containing the number of horizontal and vertical pixels, `(data.shape[2], data.shape[1])`. \n",
+    "\n",
+    "* The `angles` are set to an array of the projection angles."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "parallel_geom = AcquisitionGeometry.create_Parallel3D() \\\n",
+    "                                   .set_labels(['angle', 'vertical', 'horizontal']) \\\n",
+    "                                   .set_panel(num_pixels=(data.shape[2], data.shape[1])) \\\n",
+    "                                   .set_angles(angles=np.linspace(0, 180, 8, endpoint=False))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Check that the geometry shape and the source/detector positions look reasonable using `show_geometry()`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "show_geometry(parallel_geom)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we use the loaded data and `AcquisitionGeometry` to create the `AcquisitionData` object:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sandstone = AcquisitionData(array=data, geometry=parallel_geom)\n",
+    "print(sandstone)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now the data has been loaded, and we are able to use CIL's visualisation and reconstruction tools on the dataset. \n",
+    "We can view a central projection of the data with `show2D()`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "show2D(sandstone)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Uncomment the cell below to delete the dataset and its folder:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# import shutil\n",
+    "# shutil.rmtree('../data/sandstone')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To save the `sandstone` `AcquisitionData` object with IPython's `%store` magic, uncomment and run the cell below. This means that `sandstone` can be restored in other notebooks, such as `3_Processors/FlatDarkFieldNormaliser.ipynb`, by running `%store -r sandstone`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# %store sandstone"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Using TIFFStackReader's Additional Arguments"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Use the `roi` argument when reading the file to load a subset of the data. The `roi` argument should be passed as a dictionary, e.g. \n",
+    "`{'axis_1': (start, end, step), 'axis_2': (start, end, step)}` with axis labels `'axis_0'` (angle), `'axis_1'` (vertical), or `'axis_2'` (horizontal).\n",
+    "\n",
+    "To load a cropped subset of the data, change the start and end values. Note that setting `'axis_label': -1` is a shortcut to load all elements along that axis."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "roi = {'axis_1': (100, 800, 1), 'axis_2': -1}\n",
+    "data_reader = TIFFStackReader(file_name=tiff_files, roi=roi)\n",
+    "data = data_reader.read()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Update the `AcquisitionGeometry` to the new panel size and create the updated `AcquisitionData`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "parallel_geom.set_panel(num_pixels=(data.shape[2], data.shape[1]))\n",
+    "sandstone = AcquisitionData(array=data, geometry=parallel_geom)\n",
+    "\n",
+    "print(sandstone)\n",
+    "show2D(sandstone)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To load a binned subset of the data, change the step value. \n",
+    "Here we use different binning for the vertical (`axis_1`) and horizontal (`axis_2`) dimensions, which results in a different aspect ratio:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "roi = {'axis_1': (None, None, 2), 'axis_2': (None, None, 4)}\n",
+    "data_reader = TIFFStackReader(file_name=tiff_files, roi=roi)\n",
+    "data = data_reader.read()\n",
+    "\n",
+    "parallel_geom.set_panel(num_pixels=(data.shape[2], data.shape[1]))\n",
+    "sandstone = AcquisitionData(array=data, geometry=parallel_geom)\n",
+    "\n",
+    "print(sandstone)\n",
+    "show2D(sandstone)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "cil",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.15"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/how-to/2_Geometry/CreateCustomGeometry.ipynb b/how-to/2_Geometry/CreateCustomGeometry.ipynb
new file mode 100644
index 0000000..15e7785
--- /dev/null
+++ b/how-to/2_Geometry/CreateCustomGeometry.ipynb
@@ -0,0 +1,315 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# -*- coding: utf-8 -*-\n",
+    "# Copyright 2021 - 2024 United Kingdom Research and Innovation\n",
+    "# Copyright 2021 - 2024 The University of Manchester\n",
+    "#\n",
+    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+    "# you may not use this file except in compliance with the License.\n",
+    "# You may obtain a copy of the License at\n",
+    "#\n",
+    "# http://www.apache.org/licenses/LICENSE-2.0\n",
+    "#\n",
+    "# Unless required by applicable law or agreed to in writing, software\n",
+    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+    "# See the License for the specific language governing permissions and\n",
+    "# limitations under the License.\n",
+    "#\n",
+    "# Authored by: Mariam Demir (UKRI-STFC)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create AcquisitionGeometry Using Dataset Metadata\n",
+    "This example shows how to use a dataset's metadata to manually create and visualise an `AcquisitionGeometry`, and thus define an `AcquisitionData`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from cil.io import TIFFStackReader\n",
+    "from cil.framework import AcquisitionGeometry, AcquisitionData\n",
+    "from cil.utilities.display import show_geometry\n",
+    "from cil.utilities.display import show2D\n",
+    "from zipfile import ZipFile\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Download and extract the file `SparseBeads_B12_L1` from Zenodo [here](https://zenodo.org/records/290117/files/SparseBeads_B12_L1.zip)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!wget -P ../data https://zenodo.org/records/290117/files/SparseBeads_B12_L1.zip\n",
+    "\n",
+    "# extract the archive, then remove it\n",
+    "z = ZipFile(os.path.expanduser(\"../data/SparseBeads_B12_L1.zip\"))\n",
+    "z.extractall(os.path.expanduser(\"../data/\"))\n",
+    "!rm -rf ../data/SparseBeads_B12_L1.zip"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First let's load the raw data file and look at its properties:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "file_name = '../data/SparseBeads_B12_L1/CentreSlice/Sinograms/SparseBeads_B12_L1_0001.tif'\n",
+    "data_reader = TIFFStackReader(file_name)\n",
+    "data = data_reader.read()\n",
+    "\n",
+    "print(\"Data shape: \", data.shape)\n",
+    "print(\"Data array: \", data)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The data is an array of pixel intensities of 2520 projections, containing 2000 horizontal pixels and a single vertical slice."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We know this dataset has fan-beam geometry, so we create a Cone2D (a.k.a. fan-beam) `AcquisitionGeometry` object.\n",
+    "\n",
+    "We need to specify the `source_position`, `detector_position`, and `detector_direction_x` used in the experiment. For this dataset, this metadata is stored in the file `SparseBeads_B12_L1.xtek2dct`. The location of the metadata may vary across systems and data-saving methods.\n",
+    "\n",
+    "The following coordinates describe the source and detector positions:\n",
+    "* `SrcToObject = 121.932688713074` \n",
+    "* `SrcToDetector = 1400.207` \n",
+    "\n",
+    "In CIL, the object (sample) position is treated as the **0 position**. \n",
+    "The source position is therefore 0 - `SrcToObject` and the detector position is `SrcToDetector` - `SrcToObject`.\n",
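+    "With the values above, that places the source at -121.93 and the detector at 1278.27 (1400.207 - 121.933) along the beam axis, in the units of the metadata file (assumed here to be millimetres).\n",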
\n", + " The source position is therefore 0 - `SrcToObject` and the detector position is `SrcToDetector` - `SrcToObject`.\n", + "\n", + "Finally, we set the `num_pixels` to the number of horizontal pixels, and the `pixel_size`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "SrcToObject=121.932688713074\n", + "SrcToDetector=1400.207\n", + "\n", + "src_coord = 0 - SrcToObject\n", + "detec_coord = SrcToDetector - SrcToObject\n", + "\n", + "cone_geom = AcquisitionGeometry.create_Cone2D(source_position= [0, src_coord], \n", + " detector_position= [0, detec_coord], \n", + " detector_direction_x= [1, 0]) \\\n", + " .set_panel(num_pixels=data.shape[1], pixel_size=[0.2, 0.2])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To complete the geometry information, we generate a list of angles based on the number of projections and the `AngularStep`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Projections=2520\n", + "AngularStep=0.142857142857143\n", + "\n", + "angles = np.linspace(0, Projections*AngularStep, Projections, endpoint=False)\n", + "\n", + "cone_geom.set_angles(angles=angles)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we have created our geometry. We can visualise it to check that it looks accurate, with the correct shape and source/detector positions:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "show_geometry(cone_geom)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In CIL, we store the data and the `AcquisitionGeometry` in an `AcquisitionData` object, which is needed to use many of CIL's reconstruction and visualisation tools:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "sparse_beads = AcquisitionData(array=data, geometry=cone_geom)\n", + "print(sparse_beads)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Using `show2D()`, we can view a central projection of the data:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "show2D(sparse_beads)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Checking The Reconstruction\n", + "##### Below we use the FDK algorithm to reconstruct this dataset, and check that the geometry is correct:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from cil.recon import FDK\n", + "from cil.processors import TransmissionAbsorptionConverter\n", + "\n", + "# Convert data to absorption data\n", + "sparse_beads = TransmissionAbsorptionConverter()(sparse_beads)\n", + "\n", + "# Perform reconstruction\n", + "recon = FDK(sparse_beads).run()\n", + "\n", + "# Apply a mask to show the beads only\n", + "recon.apply_circular_mask(0.9)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "show2D(recon)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Above we can see that there are double edges over each bead, which indicates that the centre of rotation is slightly off.\n", + "\n", + "We can use the `CentreOfRotationCorrector` processor to correct the centre of rotation offset, and perform the reconstruction again:" + ] + }, + { + 
"cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from cil.processors import CentreOfRotationCorrector\n", + "\n", + "processor = CentreOfRotationCorrector.image_sharpness()\n", + "processor.set_input(sparse_beads)\n", + "centred_data = processor.get_output()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Perform reconstruction\n", + "recon = FDK(centred_data).run()\n", + "\n", + "# Apply a mask to show the beads only\n", + "recon.apply_circular_mask(0.9)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now the geometry is more accurate, and results in a more reasonable reconstruction without the double edges:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "show2D(recon)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "cil", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.15" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to/3_Processors/FlatDarkFieldNormaliser.ipynb b/how-to/3_Processors/FlatDarkFieldNormaliser.ipynb new file mode 100644 index 0000000..ddd8c68 --- /dev/null +++ b/how-to/3_Processors/FlatDarkFieldNormaliser.ipynb @@ -0,0 +1,197 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# -*- coding: utf-8 -*-\n", + "# Copyright 2021 - 2024 United Kingdom Research and Innovation\n", + "# Copyright 2021 - 2024 The University of Manchester\n", + "#\n", + "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# http://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License.\n", + "#\n", + "# Authored by: Mariam Demir (UKRI-STFC)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Normalise Data Using Flat and Dark Field Projections\n", + " Detectors can contain fixed pattern noise and varying pixel-to-pixel sensitivies, this can result in artifacts in the tomogram. \n", + " To correct for detector artifacts, we can take dark and flat field images. Dark field images are taken without any signal or sample. Flat field images are taken with the signal, but no sample. The intensities in these images can be subtracted from the sample projections. 
\n", + " \n", + "This example shows how to use the `Normaliser` processor to perform flat and dark field corrections on projections" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from cil.io import TIFFStackReader\n", + "from cil.framework import AcquisitionGeometry, AcquisitionData\n", + "from cil.processors import Normaliser\n", + "from cil.utilities import dataexample\n", + "from cil.utilities.display import show_geometry\n", + "from cil.utilities.display import show2D\n", + "import numpy as np\n", + "import os" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a previous How-To, we showed how to load the `sandstone` dataset from .tif files. Shown below is a central slice from this dataset - Notice the horizontal lines running through the image, as well as the pixel intensities (right) are not within 0 and 1.\n", + "\n", + "This is data that needs flat and dark field correcting. In this How-To, we will be using the `Normaliser` processor." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note: The `%store -r sandstone` command allows us to use the `sandstone` `AcquisitionData` we saved in the `1_Read_and_visualise/TIFFStackReader.ipynb` notebook. Please run the TIFFStackReader notebook (without deleting the `data/sandstone` folder), and save the `sandstone` variable before uncommenting and running the cell below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %store -r sandstone\n", + "# show2D(sandstone)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, load the dark and flat field projections separately. We create a list of paths to the known dark and flat field files:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "datapath = '../data/sandstone/proj'\n", + "\n", + "darkfiles = [\"BBii_0001.tif\", \"BBii_0002.tif\"]\n", + "flatfiles = [\"BBii_0031.tif\", \"BBii_0032.tif\", \"BBii_1632.tif\", \"BBii_1633.tif\"]\n", + "\n", + "darkfiles = [os.path.join(datapath, file) for file in darkfiles]\n", + "flatfiles = [os.path.join(datapath, file) for file in flatfiles]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `TIFFStackReader` is used to read in the file data:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "tiff_reader = TIFFStackReader(file_name=flatfiles)\n", + "flat = tiff_reader.read()\n", + "\n", + "tiff_reader = TIFFStackReader(file_name=darkfiles)\n", + "dark = tiff_reader.read()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can take a look at a slice from the dark and flat field projections:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "show2D([dark, flat], title=['Flatfield', 'Darkfield'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now use the `Normaliser` processor on the `sandstone` dataset. We compute a mean of pixel intensities from the `flat` and `dark` projections, which are passed as `flat_field` and `dark_field`." 
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "norm = Normaliser(flat_field=np.mean(flat, axis=0),\n",
+    "                  dark_field=np.mean(dark, axis=0))\n",
+    "norm.set_input(dataset=sandstone)\n",
+    "\n",
+    "sandstone_norm = norm.get_output()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Below we can see a comparison of the raw `sandstone` dataset vs. the normalised `sandstone_norm` data. In the corrected image, the pixel intensities are now between 0 and 1, and the horizontal lines have been corrected."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "show2D([sandstone, sandstone_norm], title=['Sandstone', 'Normalised Sandstone'])"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "cil",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.15"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/how-to/data/.foo b/how-to/data/.foo
new file mode 100644
index 0000000..e69de29