# HDRM subpackage (CHAP.hdrm)

The HDRM subpackage contains the modules that are unique to High Dynamic Range Mapping (HDRM) data processing workflows. This document describes how to run the tools that stack the raw data, integrate the data azimuthally, find Bragg peaks, and obtain the orientation matrix (calibrating the detector and beamline and getting the HKLs are not yet implemented in CHAP).

## Activating the HDRM conda environment

### From the CHESS Compute Farm

Log in to the CHESS Compute Farm and activate the `CHAP_hdrm` environment:
```bash
source /nfs/chess/sw/miniforge3_chap/bin/activate
conda activate CHAP_hdrm
```

### From a local CHAP clone

1. Create and activate a base conda environment, e.g. with [Miniforge](https://github.com/conda-forge/miniforge).
1. Install a local version of the CHAP package according to the [instructions](/docs/installation.md).
1. Create the HDRM conda environment:
   ```bash
   mamba env create -f <path_to_CHAP_clone_dir>/CHAP/hdrm/environment.yml
   ```
1. Activate the `CHAP_hdrm` environment:
   ```bash
   conda activate CHAP_hdrm
   ```
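
In either case, you can check that the CHAP command-line tool is available in the active environment; a quick sanity check, assuming the `CHAP` console script was installed along with the package:
```bash
# Print the CHAP command-line usage and available options
CHAP --help
```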

## Running an HDRM workflow

1. Navigate to your work directory.
1. Create the required CHAP pipeline file for the workflow (see below) and any additional workflow-specific input files.
1. Run the workflow:
   ```bash
   CHAP <pipelinefilename>
   ```

## Inspecting output

The output consists of a single NeXus (`.nxs`) file containing the results of the analysis as well as all metadata pertaining to it. Additionally, optional output figures (`.png`) may be saved to an output directory specified in the pipeline file.

Any of the optional output figures can be viewed directly with any PNG image viewer. The data in the NeXus output file can be viewed in [NeXpy](https://nexpy.github.io/nexpy/), a high-level Python interface to HDF5 files, particularly those stored as [NeXus data](http://www.nexusformat.org):

1. Open the NeXpy GUI by entering in your terminal:
   ```bash
   nexpy &
   ```
1. After the GUI pops up, click File -> Open, navigate to the folder where your output `.nxs` file was saved, and select it.
1. Navigate the file tree in the "NeXus Data" panel to inspect any output or metadata field.
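
Since NeXus files are regular HDF5 files, you can also get a quick overview of a file's structure from the command line. A minimal sketch, assuming the standard HDF5 command-line tools are available in your environment:
```bash
# Recursively list all groups and datasets in the output file
h5ls -r orm.nxs
```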

## Creating the pipeline file

Create a workflow `pipeline.yaml` file according to the [instructions](/docs/pipeline.md). A generic pipeline input file is as follows (note that spaces and indentation are important in `.yaml` files):
```yaml
config:
  root: . # Change as desired
  inputdir: . # Change as desired
  outputdir: . # Change as desired
  interactive: false # None of these tools have interactive parts
  log_level: INFO
  profile: false

map:

  # Stack the raw detector image files
  - common.MapProcessor:
      config:
        station: id4b
        experiment_type: HDRM
        spec_scans: # Edit: spec.log path and scan numbers
          # Path can be relative to inputdir or absolute
          - spec_file: <your_raw_data_directory>/spec.log
            scan_numbers: 1 # Change as desired
        independent_dimensions:
          - label: phi
            units: degrees
            data_type: scan_column
            name: phi
        scalar_data:
          - label: chi
            units: degrees
            data_type: spec_motor
            name: chi
          - label: mu
            units: degrees
            data_type: spec_motor
            name: mu
          - label: eta
            units: degrees
            data_type: spec_motor
            name: th
        detectors:
          - id: PIL10
  - common.NexusWriter:
      filename: map.nxs # Change as desired
      # will be placed in 'outputdir'
      force_overwrite: true # Do not set to false!
      # Rename an existing file if you want to prevent
      # it from being overwritten

integrate:

  # Integrate the raw detector image data
  - common.NexusReader:
      filename: map.nxs
  - giwaxs.PyfaiIntegrationProcessor:
      config:
        azimuthal_integrators: # Edit: PONI and mask file paths
          - id: PIL10
            poni_file: <path_to_poni_file_location>/basename.poni
            # Path can be relative to inputdir or absolute
            mask_file: <path_to_mask_file_location>/mask.edf
            # Path can be relative to inputdir or absolute
        integrations:
          - name: azimuthal
            integration_method: integrate1d
            integration_params:
              ais: PIL10
              azimuth_range: null
              radial_range: null
              unit: q_A^-1
              npt: 8000
            sum_axes: true # This will sum the data over the independent dimension
        save_figures: false
  - common.NexusWriter:
      filename: integrated.nxs # Change as desired
      # will be placed in 'outputdir'
      force_overwrite: true # Do not set to false!
      # Rename an existing file if you want to prevent
      # it from being overwritten

peaks:

  # Find the Bragg peaks
  - common.NexusReader:
      filename: map.nxs
  - hdrm.HdrmPeakfinderProcessor:
      config:
        peak_cutoff: 0.95 # Change as desired
  - common.NexusWriter:
      filename: peaks.nxs # Change as desired
      # will be placed in 'outputdir'
      force_overwrite: true # Do not set to false!
      # Rename an existing file if you want to prevent
      # it from being overwritten

orm:

  # Solve for the orientation matrix
  - common.NexusReader:
      filename: peaks.nxs
  - hdrm.HdrmOrmfinderProcessor:
      config:
        azimuthal_integrators: # Edit: PONI file path
          - id: PIL10
            poni_file: <path_to_poni_file_location>/basename.poni
            # Path can be relative to inputdir or absolute
        materials:
          - material_name: FeNiCo # Change as desired
            sgnum: 225 # Change as desired
            lattice_parameters: 3.569 # Change as desired
  - common.NexusWriter:
      filename: orm.nxs # Change as desired
      # will be placed in 'outputdir'
      force_overwrite: true # Do not set to false!
      # Rename an existing file if you want to prevent
      # it from being overwritten
```

The "config" block defines the CHAP generic configuration parameters (an example follows the list):

- `root`: The work directory, defaults to the current directory (where `CHAP <pipelinefilename>` is executed). Must be an absolute path or relative to the current directory.

- `inputdir`: The default directory for files read by any CHAP reader (must have read access), defaults to `root`. Must be an absolute path or relative to `root`.

- `outputdir`: The default directory for files written by any CHAP writer (must have write access, will be created if it does not exist), defaults to `root`. Must be an absolute path or relative to `root`.

- `interactive`: Allows for user interactions, defaults to `false`.

- `log_level`: The [Python logging level](https://docs.python.org/3/library/logging.html#levels).

- `profile`: Runs the pipeline in a [Python profiler](https://docs.python.org/3/library/profile.html).
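
For example, a config block that reads raw data from a shared input directory and collects all output in a separate subdirectory might look as follows (all paths here are hypothetical):
```yaml
config:
  root: /nfs/chess/user/<netid>/hdrm_analysis # Hypothetical absolute work directory
  inputdir: /nfs/chess/raw/<cycle>/<proposal> # Hypothetical raw data location (read access required)
  outputdir: reduced # Relative to root; created if it does not exist
  interactive: false
  log_level: INFO
  profile: false
```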

The remaining blocks create the actual workflow pipeline. In this example it consists of four top-level sub-workflows, each bracketed by its (optional) CHAP readers and writers. These sub-workflows can be executed individually, in combination, or all four successively as a single workflow.

- Stacking the raw detector image files consists of one processor and a writer:

  - `common.MapProcessor`: A CHAP processor that reads the raw detector image data and collects everything in a single CHAP-style map.

  - `common.NexusWriter`: A CHAP writer that writes the stacked data map to a NeXus file.

- Integrating the raw detector image data consists of one processor, optionally bracketed by a reader and writer:

  - `common.NexusReader`: A CHAP reader that reads the stacked data map from a NeXus file.

  - `giwaxs.PyfaiIntegrationProcessor`: A CHAP processor that performs azimuthal integration of the image data.

  - `common.NexusWriter`: A CHAP writer that writes the integrated image data to a NeXus file.

- Finding the Bragg peaks consists of one processor, optionally bracketed by a reader and writer:

  - `common.NexusReader`: A CHAP reader that reads the stacked data map from a NeXus file.

  - `hdrm.HdrmPeakfinderProcessor`: A CHAP processor that finds the Bragg peaks in the stacked image data.

  - `common.NexusWriter`: A CHAP writer that writes the peak information to a NeXus file.

- Solving for the orientation matrix consists of one processor, optionally bracketed by a reader and writer:

  - `common.NexusReader`: A CHAP reader that reads the peak information from a NeXus file.

  - `hdrm.HdrmOrmfinderProcessor`: A CHAP processor that obtains the orientation matrix from the Bragg peaks in the stacked image data.

  - `common.NexusWriter`: A CHAP writer that writes the orientation matrix to a NeXus file.

## Executing the pipeline file

The workflow pipeline can be executed as a single workflow, but, as mentioned above, the four top-level sub-workflows can also be executed individually or in a certain combination. When the entire pipeline or several consecutive sub-workflows are executed, the intermediate pairs of CHAP writers and readers between them can be commented out or removed from the pipeline file. In that case each processor's output is piped directly to the next processor as its input, which can greatly reduce the processing time and/or the storage required for intermediate results.
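
For instance, a minimal sketch of the `map` and `peaks` sub-workflows with the intermediate writer and reader removed, so that the stacked data map is piped directly into the peak finder (the `common.MapProcessor` configuration is abbreviated here):
```yaml
map:

  # Stack the raw detector image files
  - common.MapProcessor:
      config:
        # ... same configuration as in the full example above ...

peaks:

  # No common.NexusReader here: the stacked data map is piped in
  # directly from the previous processor
  - hdrm.HdrmPeakfinderProcessor:
      config:
        peak_cutoff: 0.95
  # Keep the final writer so the peak information is saved to file
  - common.NexusWriter:
      filename: peaks.nxs
      force_overwrite: true
```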

Running the entire workflow pipeline is described above under `Running an HDRM workflow`. To create only the stacked data map, run:
```bash
CHAP <pipelinefilename> -p map
```
Do not remove or comment out the NeXus writer that writes the stacked data to file in this case!

If instead you would like to create the orientation matrix from the raw image data files, without performing the azimuthal integration, you can run:
```bash
CHAP <pipelinefilename> -p map peaks orm
```
In this case the individual sub-workflows are added to the `-p` flag, separated by spaces and in the correct order of processing. You can now comment out or remove the NeXus writer and reader for the stacked data map, as well as those for the Bragg peak information, but do not remove the final orientation matrix writer!

To create the azimuthally integrated data, run either:
```bash
CHAP <pipelinefilename> -p map integrate
```
in which case you can optionally skip the intermediate writing of the stacked data map, or run:
```bash
CHAP <pipelinefilename> -p integrate
```
in which case the `common.NexusReader` in the `integrate` sub-workflow has to load the stacked data map from a previously created map NeXus file.

Note that each processor adds data and metadata to the loaded NeXus file. However, to reduce the total file size, and since the orientation matrix processor only needs the Bragg peaks, the processor that finds the Bragg peaks strips the raw data from the existing NeXus file.

Finally, note that the actual sub-workflow labels `map`, `integrate`, `peaks`, and `orm` are irrelevant to CHAP; the user is free to choose any single string of characters followed by a colon as the label for a sub-workflow, as in the sketch below. Just one label is required after the `config` block, i.e., at the start of the actual workflow pipeline.
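
For example, a pipeline file could label the first sub-workflow `stack_raw_data` instead of `map` (a hypothetical label, with the configuration abbreviated):
```yaml
stack_raw_data:

  # Stack the raw detector image files
  - common.MapProcessor:
      config:
        # ... same configuration as in the full example above ...
  - common.NexusWriter:
      filename: map.nxs
      force_overwrite: true
```

This sub-workflow would then be run with `CHAP <pipelinefilename> -p stack_raw_data`.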