v0.2.1 (12-07-2023) (#153)
### v0.2.1 (12-07-2023)

- Added Documentation
      - Added documentation for the detection and labelling widget
      - Added instructions for installation using Python venv

- New features
      - Installation and Running .bat scripts
      - Manipulator positions calibration for TESCAN
      - Microscope positions available in the movement widget
      - Added minimap of microscope positions
      - Added a fibsem version number for development tracking
      - Live chat (experimental)
      - Autoliftout utils
      - GIS Widget for cryo-control of gas injection 
      - Embedded detection widget

- Fixed bugs
      - Fixed an issue where parameters were passed incorrectly for milling
      - Fixed eucentric movement where the z-direction was flipped

- Updated Functionality / Improved Processes
      - system/model yaml files can now be modified from the system widget
      - demo log paths now in fibsem base directory
      - scan/image rotation now saved to microscope state
      - An option to click to move multiple milling stages together is now available
      - Added a crosshair to the images
      - movement of milling pattern now emits a pyqt signal (backend)
      - Manufacturer / model / serial number info can now be accessed/saved
      - Manipulator UI adapts based on whether the manipulator is retracted or inserted
      - Enabled granular hardware control for stage and manipulator (backend), e.g. disable rotation only
LucileNaegele authored Jul 12, 2023
1 parent cdca980 commit f832c65
Showing 107 changed files with 7,373 additions and 2,171 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -47,3 +47,5 @@ media/*
fibsem/segmentation/models/**/*.pt*
fibsem/segmentation/models/*
example/**/*.tif
example/demo/*
fibsem/chat/secret.txt
34 changes: 33 additions & 1 deletion CHANGES.md
@@ -1,5 +1,38 @@
## Changes

### 12/07/2023

- Added Documentation
- Added documentation for the detection and labelling widget
- Added instructions for installation using Python venv

- New features
- Installation and Running .bat scripts
- Manipulator positions calibration for TESCAN
- Microscope positions available in the movement widget
- Added minimap of microscope positions
- Added a fibsem version number for development tracking
- Live chat (experimental)
- Autoliftout utils
- GIS Widget for cryo-control of gas injection
- Embedded detection widget

- Fixed bugs
- Fixed an issue where parameters were passed incorrectly for milling
- Fixed eucentric movement where the z-direction was flipped

- Updated Functionality / Improved Processes
- system/model yaml files can now be modified from the system widget
- demo log paths now in fibsem base directory
- scan/image rotation now saved to microscope state
- An option to click to move multiple milling stages together is now available
- Added a crosshair to the images
- movement of milling pattern now emits a pyqt signal (backend)
- Manufacturer / model / serial number info can now be accessed/saved
- Manipulator UI adapts based on whether the manipulator is retracted or inserted
- Enabled granular hardware control for stage and manipulator (backend), e.g. disable rotation only


### 24/05/2023

- Added new features
@@ -18,4 +51,3 @@
- Import TESCAN image files



22 changes: 21 additions & 1 deletion INSTALLATION.md
@@ -23,7 +23,25 @@ cd fibsem
conda env create -f environment.yml
conda activate fibsem
pip install -e .
```
```

### Installation through Python virtualenv

As an alternative to Conda, you can use Python's built-in venv tool to create a virtual environment for the project.

First, install Python 3.9+ on your system.
In a terminal window, move to the directory where you would like to place the virtual environment and create it using the following command:
```
python -m venv fibsem
```
Once the environment is created, activate it using the following command:
```
fibsem\Scripts\activate.bat
```
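*Note: on macOS or Linux, the equivalent activation command is `source fibsem/bin/activate`.*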
Once activated, move to the fibsem root directory and install fibsem:
```
pip install -e .
```

## Installing Microscope Hardware APIs

@@ -59,6 +77,8 @@ Typically, you can expect the environment is named 'Autoscript', and its install
2. Find the conda environment location you just made called `fibsem`.
`...conda/envs/fibsem/Lib/site-packages/`

*Note: if you used Python venv to create the virtual environment, the `Lib/site-packages` directory is located inside the directory where the virtual environment was created. Wherever this document mentions the site-packages directory, it refers to the site-packages directory of your virtual environment.*
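
A quick way to print the active environment's site-packages location (conda or venv) from a terminal:
```
python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"
```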

***Troubleshooting:** If you're having trouble finding the conda environment location for `fibsem`*
*you can open the *Anaconda terminal* on your machine and type `where python` (Windows) or `which python` (Unix).*
*The result will be something like `C:\Users\yourusername\.conda\envs\fibsem\python.exe`*
45 changes: 45 additions & 0 deletions docs/features.md
@@ -0,0 +1,45 @@
# Features

## Beam Current Alignment

The beam current alignment feature is a calibration tool for aligning the beam across different currents. It can be accessed from the Tools menu.

![Beam Current Alignment](img/features/select_current_alignment.png)

This opens a new window that allows the user to align each current to a reference.

![Beam Current Alignment](img/features/current_alignment_window.png)

To begin, the user takes images by clicking the button labelled "Take Images". This acquires two images: one for the reference and one for aligning.

The images are labelled as reference and aligning, which allows the user to visually compare them while aligning the beam. The reference and aligning currents can be selected using the drop-down menus on the right.

To align the beam, the user simply clicks "Align Beam". When the alignment is complete, a pop-up window reports the shift needed to align the images.

![Beam Current Alignment](img/features/alignment_shift.png)

Clicking 'OK' closes the window, and the user can continue to align the beam at other currents. Below the buttons, a reference list shows which currents have been aligned.

Checking the "Overlay Images" checkbox overlays the two images for comparison, providing another visual check of the alignment.

![Beam Current Alignment](img/features/current_alignment_overlaid.png)

Once the user is satisfied with the alignment, they can close the window and return to the main OpenFIBSEM window. The tool can be run again whenever required.

## Measurement Tools

OpenFIBSEM has a built-in, napari-based ruler for measuring distances on an image. The ruler can be toggled on and off in the Image tab of the main window.

![Ruler](img/features/ruler.png)

The checkbox enables or disables the ruler. Please note that while the ruler is **enabled**, the user will **not** be able to interact with the image. To interact with the image, the ruler must be **disabled**. Standard image interactions such as panning, zooming, and clicking to move are only available when the ruler is disabled.

To measure a distance, drag either the left or the right point. Dragging a point updates the measured distance automatically. The line itself **cannot** be moved, only the points.

Next to the checkbox, the measured distance is displayed in microns, along with the dX and dY values. The measured values are a simple pixelwise calculation of the distance, scaled by the horizontal field width (HFW).

The ruler can be used on either the ION or ELECTRON image, and a point can be dragged to either side to measure a distance within an image. If the two images do not share the same HFW, the HFW of the image containing the left point is used to calculate the distance, so ensure that both points lie wholly within a single image.
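
As a rough illustration of this calculation, the sketch below converts a pixel distance to microns using the HFW; the function name and signature are illustrative, not the OpenFIBSEM API.
```
import math

def ruler_distance_um(p1, p2, hfw_m, image_width_px):
    """Illustrative pixel-to-micron conversion for the ruler (not the OpenFIBSEM API).

    p1, p2: (x, y) pixel coordinates of the two ruler points.
    hfw_m: horizontal field width of the image in metres.
    image_width_px: image width in pixels.
    """
    pixel_size_um = (hfw_m * 1e6) / image_width_px  # microns per pixel
    dx_um = (p2[0] - p1[0]) * pixel_size_um
    dy_um = (p2[1] - p1[1]) * pixel_size_um
    return math.hypot(dx_um, dy_um), dx_um, dy_um

# e.g. a 150 um HFW image that is 1536 px wide -> ~0.098 um per pixel
print(ruler_distance_um((100, 200), (612, 200), hfw_m=150e-6, image_width_px=1536))
```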

There is also a static scale bar and a crosshair that can be enabled or disabled in the Image tab. The scale bar updates automatically if the HFW is changed and a new image is taken.

![Scale Bar](img/features/scale_bar.png)
Binary file added docs/img/features/alignment_shift.png
Binary file added docs/img/features/current_alignment_overlaid.png
Binary file added docs/img/features/current_alignment_window.png
Binary file added docs/img/features/ruler.png
Binary file added docs/img/features/scale_bar.png
Binary file added docs/img/features/select_current_alignment.png
Binary file added docs/img/ml/detect_widget_1.png
Binary file added docs/img/ml/detect_widget_2.png
Binary file added docs/img/ml/load_model.png
Binary file added docs/img/ml/model_assissted_label.png
Binary file added docs/img/ml/select_path.png
Binary file added docs/img/ml/standard_labelling.png
Binary file added docs/img/ml/train_data.png
Binary file added docs/img/ml/train_model.png
Binary file added docs/img/ml/train_param.png
Binary file added docs/img/ml/train_progress.png
Binary file added docs/img/ml/train_wandb.png
2 changes: 1 addition & 1 deletion docs/ml.md
@@ -53,7 +53,7 @@ We have developed an integrated labelling tool for segmentation datasets.
### Labelling UI
The Labelling UI allows users to draw the labels (masks) for training a segmentation model.
The Labelling UI allows users to draw the labels (masks) for training a segmentation model. For more detailed instructions, see the guide on the labelling and detection widgets [here](ml_details.md)
### Model Assisted Labelling
77 changes: 77 additions & 0 deletions docs/ml_details.md
@@ -0,0 +1,77 @@
# Labelling UI Guide

## Labelling UI widget

The labelling widget is built into OpenFIBSEM and provides an image-processing tool for preparing labels to train a segmentation machine learning model such as U-Net. The model can run on a CPU; however, a CUDA-enabled GPU is recommended for speed and efficiency.

![Labelling UI](img/ml/ui_label_step.png)

## Labelling UI workflow

To begin, launch the UI and set the directory containing the images to be labelled and the directory in which the labels are to be saved. Clicking the button with the three dots opens a file explorer window to select a directory. Once the paths are set, click the "Load Data" button to load the images and labels.

The number of classes refers to the number of unique object types to be segmented. In our example, the needle and the lamella give two classes. The number of classes can be changed at any time and the UI will update accordingly.

![Labelling UI](img/ml/select_path.png)

Once the data has been loaded, the first image is displayed in the main napari window. The image can be manipulated as normal using the napari controls.

![Labelling UI](img/ml/standard_labelling.png)

To label the images manually, select the mask layer and use the brush tool. The brush tool can be selected and adjusted as standard in napari; the standard napari toolbar can be seen at the top left of the figure above.

The images in the dataset can be cycled through by clicking Previous or Next. The current image index is displayed above these buttons.

### Model assisted labelling

The model-assisted labelling tool reduces the effort of manually labelling the data with the brush tool. The model provides a prediction for each image, which the user can then correct by brushing over it.

A CUDA-enabled GPU is not required, but it is recommended for efficiency.

To enable model-assisted labelling, first load a model in the Model tab. Ensure that the correct checkpoint and encoder are selected. The model path can be set using the three-dots button, which opens a file explorer. The model is then loaded by clicking the Load Model button.

![Labelling UI](img/ml/load_model.png)

Once the model is loaded, check the Model Assisted checkbox if it is not already checked. The images will now be semi-labelled automatically and can be adjusted as necessary with the brush tool. Model-assisted labelling can be toggled on and off at any time. The labels are saved automatically, and switching between images updates the labels automatically.

![Labelling UI](img/ml/model_assissted_label.png)

## Training a new model

A new model can also be trained on the images and labels created within this UI. To train a new model, ensure you have an image set with labels. The models are created and trained using PyTorch, saved as .pt files, and can be loaded into the UI for model-assisted labelling or for any other purpose.

To set up, go to the **Train** tab and set the paths to the images and labels. The output path is the location where the newly trained model will be saved.

![Labelling UI](img/ml/train_data.png)

**Ensure that the number of images and the number of labels are the same.**



In the **Model** tab, you can set parameters for the model (see the sketch after the figure below):
- Encoder: the encoder used for the segmentation model (e.g. resnet34)
- Checkpoint: the name for the checkpoint when training the new model
- Number of classes: the number of classes in the data set

![Labelling UI](img/ml/train_model.png)
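
As an illustration only, the sketch below shows how these parameters typically map onto a PyTorch segmentation model, assuming the segmentation-models-pytorch package; the actual OpenFIBSEM model definition may differ.
```
# Minimal sketch, assuming the segmentation-models-pytorch package (smp);
# the actual OpenFIBSEM model definition may differ.
import torch
import segmentation_models_pytorch as smp

encoder = "resnet34"        # "Encoder" field in the Model tab
num_classes = 2             # "Number of classes" field (needle and lamella in the example above)
checkpoint = "my_model.pt"  # "Checkpoint" name used when saving the trained model

device = "cuda" if torch.cuda.is_available() else "cpu"
model = smp.Unet(encoder_name=encoder, in_channels=1, classes=num_classes).to(device)
```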



In the **Training** tab, you can set parameters for the training (see the training loop sketch after the figure below):
- Batch size: the batch size used for training
- Number of epochs: the number of epochs to train for
- Learning rate: the learning rate used for training

![labelling UI](img/ml/train_param.png)
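
Continuing the sketch above, these parameters are typically consumed by a standard PyTorch training loop along the following lines; `SegmentationDataset`, `image_dir`, and `label_dir` are placeholders rather than OpenFIBSEM names.
```
# Illustrative PyTorch training loop; "SegmentationDataset", "image_dir" and
# "label_dir" are placeholders, not part of OpenFIBSEM.
import torch
from torch.utils.data import DataLoader

batch_size = 4        # "Batch size" field
num_epochs = 50       # "Number of epochs" field
learning_rate = 1e-4  # "Learning rate" field

train_loader = DataLoader(SegmentationDataset(image_dir, label_dir),
                          batch_size=batch_size, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), checkpoint)  # saved as a .pt file
```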

In the **Logging** tab, the use of Weights and Biases (WANDB) can be set up if desired. WANDB is a tool for logging and visualising training runs. To use WANDB, you will need to create an account and log in. The WANDB API key can be found in your account settings.

Click the "Use Weights And Biases" checkbox if using WANDB.

![Labelling UI](img/ml/train_wandb.png)
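
Continuing the training sketch above, the following is a minimal illustration of what WANDB logging typically looks like; the project name and logged keys are placeholders, not necessarily what OpenFIBSEM records.
```
# Illustrative WANDB logging; project name and keys are placeholders.
import wandb

wandb.init(project="fibsem-segmentation",
           config={"encoder": encoder, "batch_size": batch_size,
                   "epochs": num_epochs, "lr": learning_rate})

# inside the training loop:
wandb.log({"epoch": epoch, "train_loss": loss.item()})
```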

Once everything is set up, press Train Model and the training will begin. The training progress is displayed in the terminal and visualised in the main window; the WANDB dashboard can also be used to monitor the training run.

![Labelling UI](img/ml/train_progress.png)

Once training is complete, the model can be accessed from the location where it was saved and loaded for use in assisted labelling or for any other purpose.
17 changes: 17 additions & 0 deletions docs/ml_feature_detection.md
@@ -0,0 +1,17 @@
# Feature Detection and Correction Widget

The feature detection widget is a supplementary tool used to detect and correct features in an image. The widget is built into OpenFIBSEM and uses a napari pop-up window to correct or make changes to features that have been detected by the model. The model can run on a CPU; however, a CUDA-enabled GPU is recommended for speed and efficiency.

One of the current use cases for this widget is correcting the needle and lamella features detected by the model. If the detection is inaccurate, the user can adjust the feature positions in a simple UI. The positions are then used for needle movement.

Ideally, this widget is used in conjunction with an existing workflow to make changes on the fly when necessary, for example correcting the location of a feature for milling or for needle movement.

![Feature Detection UI](img/ml/detect_widget_1.png)

This is an example of the feature detection widget. In this scenario, the task is to identify the locations of the needle tip and the lamella centre. The features detected by the model and their corresponding locations are shown. Here the needle tip location is accurate, but the lamella centre appears to be incorrect, so the user can correct it easily.

![Feature Detection UI](img/ml/detect_widget_2.png)

The user can simply move the point labelled lamella centre to a more accurate location to properly label the detection. The change is reflected on the right-hand side, with information in pixel coordinates. The user-corrected flag is set to true to inform the system that the user has changed the feature location. If necessary, clicking "Run Feature Detection" will run the model again to detect the features, and the user can then make further corrections.

Once complete, clicking "Continue" passes the relevant information to the next step in the workflow; in this case, the needle tip and lamella centre coordinates are passed to the needle movement widget.
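
For context, the information handed to the next step can be pictured as a structure along the following lines; the names are purely illustrative and are not the OpenFIBSEM API.
```
# Purely illustrative structure for a corrected detection; not the OpenFIBSEM API.
from dataclasses import dataclass

@dataclass
class DetectedFeature:
    name: str             # e.g. "NeedleTip" or "LamellaCentre"
    px: tuple[int, int]   # pixel coordinates in the image
    user_corrected: bool  # True if the user moved the point

detections = [
    DetectedFeature("NeedleTip", (612, 404), user_corrected=False),
    DetectedFeature("LamellaCentre", (498, 350), user_corrected=True),
]
```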