Commit: update examples readme

Signed-off-by: hsj576 <[email protected]>
hsj576 committed Dec 7, 2023
1 parent 0e6fd8c commit abe40ec
Showing 350 changed files with 162 additions and 127 deletions.
File renamed without changes.
@@ -47,7 +47,7 @@ Next, download pretrained model via [[google]](https://drive.google.com/file/d/1
We are now ready to run Ianvs for benchmarking pedestrian tracking on the MOT17 dataset.

```shell
-ianvs -f ./examples/pedestrian_tracking/multiedge_inference_bench/tracking_job.yaml
+ianvs -f ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/tracking_job.yaml
```

The benchmarking process takes a few minutes and varies depending on the device.
@@ -78,7 +78,7 @@ Next, download pretrained model via [[google]](https://drive.google.com/drive/fo
We are now ready to run Ianvs for benchmarking pedestrian re-identification on the MOT17 dataset.

```shell
-ianvs -f ./examples/pedestrian_tracking/multiedge_inference_bench/reid_job.yaml
+ianvs -f ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/reid_job.yaml
```

The benchmarking process takes a few minutes and varies depending on the device.
@@ -93,9 +93,9 @@ The final output might look like this:
## Step 3. Generate test report

```shell
-python ./examples/pedestrian_tracking/multiedge_inference_bench/generate_reports.py \
-    -t ./examples/pedestrian_tracking/multiedge_inference_bench/tracking_job.yaml \
-    -r ./examples/pedestrian_tracking/multiedge_inference_bench/reid_job.yaml
+python ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/generate_reports.py \
+    -t ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/tracking_job.yaml \
+    -r ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/reid_job.yaml
```

Finally, the report is generated under <Ianvs_HOME>/examples/pedestrian_tracking/multiedge_inference_bench/reports. You can also check the sample report in the current directory.
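
To confirm that the run produced output, listing the reports directory is enough. This is a minimal sketch assuming the path quoted above, with `IANVS_HOME` standing in for `<Ianvs_HOME>`:

```shell
# List the generated benchmark reports; point IANVS_HOME at your Ianvs checkout.
IANVS_HOME=/ianvs/project/ianvs
ls "$IANVS_HOME/examples/pedestrian_tracking/multiedge_inference_bench/reports"
```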
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -64,7 +64,7 @@ we have done that for you and the interested readers can refer to [testenv.yaml]

The related algorithm is also ready in this quick start.
``` shell
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/RFNet
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/bdd/lifelong_learning_bench/curb-detection/testalgorithms/rfnet/RFNet
```
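
Before filling in ``algorithm.yaml`` below, it can help to confirm that the exported module path actually exists; this is a minimal check against the renamed path above:

``` shell
# Sanity check: the RFNet directory referenced by PYTHONPATH should exist.
ls /ianvs/project/examples/bdd/lifelong_learning_bench/curb-detection/testalgorithms/rfnet/RFNet
```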

The URL address of this algorithm then should be filled in the configuration file ``algorithm.yaml``. In this quick start, we have done that for you, and interested readers can refer to ``algorithm.yaml`` for more details.

@@ -115,7 +115,7 @@ We are now ready to run Ianvs for benchmarking.

``` shell
cd /ianvs/project
-ianvs -f examples/bdd/lifelong_learning_bench/benchmarkingjob.yaml
+ianvs -f examples/bdd/lifelong_learning_bench/curb-detection/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`).
@@ -1,106 +1,106 @@

The only visible change in this hunk is the appended `gpu_ids = range(0, 1)` line; the updated config file is shown once below.
```python
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNet',
        depth=18,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='MultiLabelLinearClsHead',
        num_classes=20,
        in_channels=512,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)))
dataset_type = 'BDD_Performance'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=(256, -1)),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=(256, -1)),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='Collect', keys=['img'])
]
data = dict(
    samples_per_gpu=64,
    workers_per_gpu=2,
    train=dict(
        type='BDD_Performance',
        data_prefix='',
        ann_file=
        '/home/liyunzhe/Mobile-Inference/algorithm/labels/0129_real_world_multi_label_remo_xyxy_bdd_train.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', size=(256, -1)),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='ToTensor', keys=['gt_label']),
            dict(type='Collect', keys=['img', 'gt_label'])
        ]),
    val=dict(
        type='BDD_Performance',
        data_prefix='',
        ann_file=
        '/home/liyunzhe/Mobile-Inference/algorithm/labels/0129_real_world_multi_label_remo_xyxy_bdd_val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', size=(256, -1)),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ]),
    test=dict(
        type='BDD_Performance',
        data_prefix='',
        ann_file=
        '/home/liyunzhe/Mobile-Inference/algorithm/labels/0129_real_world_multi_label_remo_xyxy_bdd_val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', size=(256, -1)),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ]))
evaluation = dict(interval=1, metric='mAP')
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[30, 60, 90])
runner = dict(type='EpochBasedRunner', max_epochs=100)
checkpoint_config = dict(interval=1)
log_config = dict(interval=100, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = 'work_dirs/220208-bdd-best'
gpu_ids = range(0, 1)
```
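
A quick way to sanity-check a config like this before launching training is to let mmcv parse it. This is a minimal sketch assuming mmcv is installed and the config above is saved as `bdd_config.py` (an illustrative filename, not part of the commit):

```shell
# Parse the config with mmcv and pretty-print it; a clean parse catches syntax errors early.
# 'bdd_config.py' is an assumed filename for the config shown above.
python -c "from mmcv import Config; print(Config.fromfile('bdd_config.py').pretty_text)"
```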
@@ -68,7 +68,7 @@ we have done that for you and the interested readers can refer to [testenv.yaml]

The related algorithm is also ready in this quick start.
``` shell
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/RFNet
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/cityscapes-synthia/lifelong_learning_bench/curb-detection/testalgorithms/rfnet/RFNet
```

The URL address of this algorithm then should be filled in the configuration file ``algorithm.yaml``. In this quick start, we have done that for you, and interested readers can refer to ``algorithm.yaml`` for more details.

@@ -80,7 +80,7 @@ We are now ready to run Ianvs for benchmarking.

``` shell
cd /ianvs/project/ianvs
-ianvs -f examples/curb-detection/lifelong_learning_bench/benchmarkingjob.yaml
+ianvs -f examples/cityscapes-synthia/lifelong_learning_bench/curb-detection/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`).
@@ -13,6 +13,7 @@ Before using Ianvs, you might want to have the device ready:
- Internet connection for GitHub and pip, etc.
- Python 3.6+ installed


In this example, we are using the Linux platform with Python 3.8.5. If you are using Windows, most steps should still apply, though a few details such as commands and package requirements might differ.
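
A quick way to verify these prerequisites; the commands are typical for Linux and may differ on other platforms:

``` shell
# Sanity-check the prerequisites listed above.
python3 --version     # expect Python 3.6 or newer
pip3 --version        # pip must be available for installing dependencies
ping -c 1 github.com  # confirms Internet connectivity for GitHub and pip
```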

## Step 1. Ianvs Preparation
@@ -65,15 +66,15 @@ we have done that for you and the interested readers can refer to [testenv.yaml]
Copy the index files of the dataset.

``` shell
-cp /ianvs/project/ianvs/examples/semantic_segmentation/lifelong_learning_bench/indexes/* /root/data/semantic_segmentation_dataset/
+cp /ianvs/project/ianvs/examples/cityscapes-synthia/lifelong_learning_bench/semantic-segmentation/indexes/* /root/data/semantic_segmentation_dataset/
```

<!-- Please put the downloaded dataset on the above dataset path, e.g., `/ianvs/dataset`. One can transfer the dataset to the path, e.g., on a remote Linux system using [XFTP]. -->


The related algorithm is also ready in this quick start.
``` shell
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/semantic_segmentation/lifelong_learning_bench/testalgorithms/rfnet/RFNet
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/cityscapes-synthia/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/RFNet
```

The URL address of this algorithm then should be filled in the configuration file ``algorithm.yaml``. In this quick start, we have done that for you, and interested readers can refer to ``algorithm.yaml`` for more details.

@@ -85,7 +86,7 @@ We are now ready to run Ianvs for benchmarking.

``` shell
cd /ianvs/project/ianvs
-ianvs -f examples/semantic_segmentation/lifelong_learning_bench/benchmarkingjob-smalltest.yaml
+ianvs -f examples/cityscapes-synthia/lifelong_learning_bench/semantic-segmentation/benchmarkingjob-smalltest.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`).
@@ -59,7 +59,7 @@ unzip dataset.zip
The URL address of this dataset then should be filled in the configuration file `testenv.yaml`. In this quick start, we have done that for you and interested readers can refer to [testenv.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
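
If you want to double-check which dataset paths the prepared test environment points to, a recursive search works; the `url` key name is an assumption based on typical Ianvs test-environment configs:

```shell
# Show url entries in every testenv.yaml under examples/ (key name assumed).
grep -rn --include=testenv.yaml "url" examples/
```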

```shell
-cd /ianvs/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet
+cd /ianvs/examples/cityscapes-synthia/scene-based-unknown-task-recognition/curb-detection/testalgorithms/rfnet
mkdir results
```

@@ -68,8 +68,8 @@ Put the model into `results`. Download the [model](https://pan.baidu.com/s/18MA8Gaw7ptpip
The related algorithm is also ready for this quick start.

```shell
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/RFNet
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/cityscapes-synthia/scene-based-unknown-task-recognition/curb-detection/testalgorithms/rfnet/RFNet
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/cityscapes-synthia/scene-based-unknown-task-recognition/curb-detection/testalgorithms/rfnet/
```

The URL address of this algorithm then should be filled in the configuration file `algorithm.yaml`. In this quick start, we have done that for you and interested readers can refer to [algorithm.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
@@ -80,7 +80,7 @@ We are now ready to run Ianvs for benchmarking.

```shell
cd /ianvs/project
-ianvs -f examples/scene-based-unknown-task-recognition/lifelong_learning_bench/benchmarkingjob.yaml
+ianvs -f examples/cityscapes-synthia/scene-based-unknown-task-recognition/curb-detection/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`). In this quick start, we have done all configurations for you and the interested readers can refer to [benchmarkingJob.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
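
To inspect those artifacts afterwards, listing the workspace is enough; the path below is the example default quoted above and may differ if you changed it in `benchmarkingjob.yaml`:

```shell
# List benchmarking output under the default workspace path from this guide.
ls /ianvs/lifelong_learning_bench/workspace
```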
34 changes: 34 additions & 0 deletions examples/how-to-contribute-examples.md
@@ -0,0 +1,34 @@
# How to contribute examples

## Overall contribution workflow

1. Apply for a topic.
   Once you have a new example, you can apply for a topic to discuss it at the [SIG AI weekly meeting](http://github.com/kubeedge/ianvs.git).
2. Submit a proposal.
   After the idea is fully discussed, a formal proposal PR needs to be submitted to the [Ianvs repository](http://github.com/kubeedge/ianvs.git).
3. Fix proposal review comments.
   If other Ianvs maintainers leave review comments on the PR, you need to address them and collect at least 2 reviewers' `/lgtm` and 1 approver's `/approve`.
4. Submit code.
   You can then implement your code; a good code style is encouraged.
5. Fix code review comments.
   Besides the merge requirements of the proposal, CI must pass before this step is reviewed.

## Add a new example

The new example should be stored in the following path:

~~~bash
examples/dataset_name/algorithm_name/task_name/
~~~

Here is an example:

~~~bash
examples/robot/lifelong_learning_bench/semantic-segmentation/
~~~

For contributing a new example, you can follow these steps to determine its storage path (a directory-creation sketch follows the list):

1. Under the examples directory, choose the dataset you used in this example, such as cityscapes, robot, and so on. Only when you use a new dataset can you create a new folder under the examples directory to store the example related to that dataset.
2. After determining the dataset, select the algorithm paradigm you used, such as lifelong learning, single-task learning, and so on. If you used a new algorithm paradigm, you can create a new folder under the dataset directory to store examples of that type of algorithm.
3. After determining the algorithm paradigm, select the task for your example, such as semantic segmentation, curb detection, and so on. If you used a new task, you can create a new folder under the algorithm paradigm directory to store examples of that type of task.
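
As a sketch of this convention, the command below creates the folder for a hypothetical new example; the dataset (`kitti`), paradigm (`single_task_learning_bench`), and task (`lane-detection`) names are illustrative placeholders rather than existing examples.

~~~bash
# Hypothetical layout following examples/dataset_name/algorithm_name/task_name/;
# all three path segments are illustrative placeholders.
mkdir -p examples/kitti/single_task_learning_bench/lane-detection
~~~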
@@ -64,7 +64,7 @@ we have done that for you and the interested readers can refer to [testenv.yaml]
The related algorithm is also ready in this quick start.

``` shell
-export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/class_increment_semantic_segmentation/lifelong_learning_bench/testalgorithms/erfnet/ERFNet
+export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/robot-cityscapes-synthia/lifelong_learning_bench/semantic-segmentation/testalgorithms/erfnet/ERFNet
```

The URL address of this algorithm then should be filled in the configuration file ``algorithm.yaml``. In this quick start, we have done that for you, and interested readers can refer to ``algorithm.yaml`` for more details.

@@ -77,7 +77,7 @@ We are now ready to run Ianvs for benchmarking.

``` shell
cd /ianvs/project
-ianvs -f examples/class_increment_semantic_segmentation/lifelong_learning_bench/benchmarkingjob.yaml
+ianvs -f examples/robot-cityscapes-synthia/lifelong_learning_bench/semantic-segmentation/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`).
File renamed without changes.
File renamed without changes.
