
Commit 1f33eb0: remove old inference module

ziw-liu committed May 27, 2023 · 1 parent f4a2c14
Showing 11 changed files with 19 additions and 858 deletions.
2 changes: 1 addition & 1 deletion micro_dl/cli/curator_script.py
@@ -9,7 +9,7 @@
import iohub.ngff as ngff
import argparse

-import micro_dl.inference.evaluation_metrics as metrics
+import micro_dl.evaluation.evaluation_metrics as metrics
import micro_dl.utils.aux_utils as aux_utils
# from waveorder.focus import focus_from_transverse_band

2 changes: 1 addition & 1 deletion micro_dl/cli/metrics_script.py
@@ -8,7 +8,7 @@
import argparse
import pandas as pd

-import micro_dl.inference.evaluation_metrics as metrics
+import micro_dl.evaluation.evaluation_metrics as metrics
import micro_dl.utils.aux_utils as aux_utils

# %% read the below details from the config file
15 changes: 15 additions & 0 deletions micro_dl/cli/readme.md
@@ -0,0 +1,15 @@
# CLI

## Exporting models to ONNX

If you wish to run inference with ONNX Runtime, models can be exported to ONNX using `micro_dl/cli/onnx_export_script.py`. See below for example usage of this script with a 5-input-stack model:

```bash
python micro_dl/cli/onnx_export_script.py --model_path path/to/your/pt_model.pt --stack_depth 5 --export_path intended/path/to/model/export.onnx --test_input path/to/test/input.npy
```
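
Once exported, the model can be loaded and run with ONNX Runtime. The following is a minimal sketch, not part of this script: the file name `export.onnx`, the single-output assumption, and the `(batch, channel, depth, height, width)` input shape of `(1, 1, 5, 512, 512)` are illustrative assumptions that should be replaced with the values from your model config:

```python
# Minimal ONNX Runtime inference sketch. The input shape below is an
# assumption for a 5-slice stack; check your model config for the real one.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("export.onnx", providers=["CPUExecutionProvider"])

# Query the input name from the session and feed a float32 stack.
input_name = session.get_inputs()[0].name
stack = np.random.rand(1, 1, 5, 512, 512).astype(np.float32)

# Assumes the model has a single output tensor.
(prediction,) = session.run(None, {input_name: stack})
print(prediction.shape)
```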

**Some notes:**

* For CPU-sharing reasons, running an ONNX model requires a dedicated node on HPC or a non-distributed system (for example, a personal laptop or other device).
* Test inputs are optional, but they help verify that the exported model runs correctly when exporting from the intended usage device (see the sketch after this list).
* Models must be located in a Lightning training-logs directory with a valid `config.yaml` in order to be initialized. This can be "hacked" by placing the model checkpoint in a directory called `checkpoints` beneath a valid config's directory.
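
The `--test_input` flag expects a NumPy array saved as `.npy`. A sketch of generating a dummy one, again assuming the same illustrative shape rather than one this script prescribes:

```python
# Save a dummy test input for --test_input; the shape is an assumption
# and must match the model's expected stack depth and image size.
import numpy as np

test_stack = np.random.rand(1, 1, 5, 512, 512).astype(np.float32)
np.save("input.npy", test_stack)
```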
201 changes: 0 additions & 201 deletions micro_dl/cli/torch_inference_script.py

This file was deleted.

2 changes: 1 addition & 1 deletion micro_dl/evaluation/evaluation.py
@@ -1,5 +1,5 @@
import numpy as np
-import micro_dl.inference.evaluation_metrics as inference_metrics
+import micro_dl.evaluation.evaluation_metrics as inference_metrics
from torch.utils.tensorboard import SummaryWriter


1 change: 0 additions & 1 deletion micro_dl/inference/__init__.py

This file was deleted.

