[Refactor] Fix spelling (open-mmlab#1681)
* [Refactor] Fix spelling

fanqiNO1 authored Jul 5, 2023
1 parent feb0814 commit 7cbfb36
Showing 59 changed files with 68 additions and 68 deletions.
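A repository-wide spelling fix like this one is easiest to apply mechanically rather than file by file. The sketch below is a hypothetical helper (not part of the commit) that walks a source tree and rewrites every Markdown or Python file containing the typo:

```python
import pathlib


def fix_spelling(root: str, typo: str = "reprodcue", fix: str = "reproduced") -> int:
    """Replace `typo` with `fix` in every .md/.py file under `root`.

    Returns the number of files rewritten.
    """
    changed = 0
    for path in pathlib.Path(root).rglob("*"):
        # Only touch the file types this commit actually covers:
        # README files and the readme-generation script.
        if not path.is_file() or path.suffix not in {".md", ".py"}:
            continue
        text = path.read_text(encoding="utf-8")
        if typo in text:
            path.write_text(text.replace(typo, fix), encoding="utf-8")
            changed += 1
    return changed
```

In practice a dedicated checker such as `codespell` catches far more than one hard-coded typo; the helper above only illustrates the shape of the change across the 59 files.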
2 changes: 1 addition & 1 deletion .dev_scripts/generate_readme.py
@@ -301,7 +301,7 @@ def generate_model_table(models,
if any('Converted From' in model.data for model in models):
table_string += (
f"\n*Models with \* are converted from the [official repo]({converted_from['Code']}). "
-"The config files of these models are only for inference. We haven't reprodcue the training results.*\n"
+"The config files of these models are only for inference. We haven't reproduced the training results.*\n"
)

return table_string
2 changes: 1 addition & 1 deletion configs/beit/README.md
@@ -74,7 +74,7 @@ python tools/test.py configs/beit/benchmarks/beit-base-p16_8xb128-coslr-100e_in1
| `beit-base-p16_beit-pre_8xb128-coslr-100e_in1k` | [BEIT](https://download.openmmlab.com/mmselfsup/1.x/beit/beit_vit-base-p16_8xb256-amp-coslr-300e_in1k/beit_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221128-ab79e626.pth) | 86.53 | 17.58 | 83.10 | N/A | [config](benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/beit/beit_vit-base-p16_8xb256-amp-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k_20221128-0ca393e9.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/beit/beit_vit-base-p16_8xb256-amp-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k_20221128-0ca393e9.json) |
| `beit-base-p16_beit-in21k-pre_3rdparty_in1k`\* | BEIT ImageNet-21k | 86.53 | 17.58 | 85.28 | 97.59 | [config](benchmarks/beit-base-p16_8xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/beit/beit-base_3rdparty_in1k_20221114-c0a4df23.pth) |

-*Models with * are converted from the [official repo](https://github.com/microsoft/unilm/tree/master/beit). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/microsoft/unilm/tree/master/beit). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/beitv2/README.md
@@ -74,7 +74,7 @@ python tools/test.py configs/beitv2/benchmarks/beit-base-p16_8xb128-coslr-100e_i
| `beit-base-p16_beitv2-pre_8xb128-coslr-100e_in1k` | [BEITV2](https://download.openmmlab.com/mmselfsup/1.x/beitv2/beitv2_vit-base-p16_8xb256-amp-coslr-300e_in1k/beitv2_vit-base-p16_8xb256-amp-coslr-300e_in1k_20221212-a157be30.pth) | 86.53 | 17.58 | 85.00 | N/A | [config](benchmarks/beit-base-p16_8xb128-coslr-100e_in1k.py) | [model](https://download.openmmlab.com/mmselfsup/1.x/beitv2/beitv2_vit-base-p16_8xb256-amp-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k_20221212-d1c0789e.pth) \| [log](https://download.openmmlab.com/mmselfsup/1.x/beitv2/beitv2_vit-base-p16_8xb256-amp-coslr-300e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k/vit-base-p16_ft-8xb128-coslr-100e_in1k_20221212-d1c0789e.json) |
| `beit-base-p16_beitv2-in21k-pre_3rdparty_in1k`\* | BEITV2 ImageNet-21k | 86.53 | 17.58 | 86.47 | 97.99 | [config](benchmarks/beit-base-p16_8xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/beit/beitv2-base_3rdparty_in1k_20221114-73e11905.pth) |

-*Models with * are converted from the [official repo](https://github.com/microsoft/unilm/tree/master/beit2). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/microsoft/unilm/tree/master/beit2). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/blip/README.md
@@ -112,7 +112,7 @@ python tools/test.py configs/blip/blip-base_8xb32_caption.py https://download.op
| :-------------------------- | :--------: | :-------: | :---------------------------------: | :------------------------------------------------------------------------------------------------------------: |
| `blip-base_3rdparty_nlvr`\* | 259.37 | 82.33 | [config](./blip-base_8xb32_nlvr.py) | [model](https://download.openmmlab.com/mmclassification/v1/blip/blip-base_3rdparty_nlvr_20230427-3b14d33f.pth) |

-*Models with * are converted from the [official repo](https://github.com/salesforce/LAVIS). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/salesforce/LAVIS). The config files of these models are only for inference. We haven't reproduced the training results.*

*Results with # denote zero-shot evaluation. The corresponding model hasn't been finetuned on that dataset.*

2 changes: 1 addition & 1 deletion configs/blip2/README.md
@@ -58,7 +58,7 @@ python tools/test.py configs/blip2/blip2_8xb32_retrieval.py https://download.ope
| :--------------------------- | :--------: | :------: | :----------------------------------: | :-------------------------------------------------------------------------------------------------------------: |
| `blip2_3rdparty_retrieval`\* | 1173.19 | 85.40 | [config](./blip2_8xb32_retrieval.py) | [model](https://download.openmmlab.com/mmclassification/v1/blip2/blip2_3rdparty_pretrain_20230505-f7ef4390.pth) |

-*Models with * are converted from the [official repo](https://github.com/salesforce/LAVIS). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/salesforce/LAVIS). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/chinese_clip/README.md
@@ -55,7 +55,7 @@ python tools/test.py configs/chinese_clip/cn-clip_resnet50_zeroshot-cls_cifar100
| `cn-clip_vit-large-p14_zeroshot-cls_cifar100`\* | 406.00 | 74.80 | [config](cn-clip_vit-large-p14_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-large-p14_3rdparty_20230519-3f844503.pth) |
| `cn-clip_vit-huge-p14_zeroshot-cls_cifar100`\* | 958.00 | 79.10 | [config](cn-clip_vit-huge-p14_zeroshot-cls_cifar100.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/chinese_clip/cn-clip_vit-huge-p14_3rdparty_20230519-e4f49b00.pth) |

-*Models with * are converted from the [official repo](https://github.com/OFA-Sys/Chinese-CLIP). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/OFA-Sys/Chinese-CLIP). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/clip/README.md
@@ -74,7 +74,7 @@ python tools/test.py configs/clip/vit-base-p32_pt-64xb64_in1k.py https://downloa
| `vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k-384px`\* | CLIP OPENAI ImageNet-12k | 86.57 | 49.37 | 86.87 | 98.05 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-in12k-pre_3rdparty_in1k-384px_20221220-8df86b74.pth) |
| `vit-base-p16_clip-openai-pre_3rdparty_in1k-384px`\* | CLIP OPENAI | 86.57 | 49.37 | 86.25 | 97.90 | [config](vit-base-p16_pt-64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/clip/clip-vit-base-p16_openai-pre_3rdparty_in1k-384px_20221220-eb012e87.pth) |

-*Models with * are converted from the [timm](https://github.com/rwightman/pytorch-image-models). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [timm](https://github.com/rwightman/pytorch-image-models). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/conformer/README.md
@@ -70,7 +70,7 @@ python tools/test.py configs/conformer/conformer-tiny-p16_8xb128_in1k.py https:/
| `conformer-small-p32_8xb128_in1k` | From scratch | 38.85 | 7.09 | 81.96 | 96.02 | [config](conformer-small-p32_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/conformer/conformer-small-p32_8xb128_in1k_20211206-947a0816.pth) |
| `conformer-base-p16_3rdparty_in1k`\* | From scratch | 83.29 | 22.89 | 83.82 | 96.59 | [config](conformer-base-p16_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/conformer/conformer-base-p16_3rdparty_8xb128_in1k_20211206-bfdf8637.pth) |

-*Models with * are converted from the [official repo](https://github.com/pengzhiliang/Conformer/blob/main/models.py#L89). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/pengzhiliang/Conformer/blob/main/models.py#L89). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/convmixer/README.md
@@ -63,7 +63,7 @@ python tools/test.py configs/convmixer/convmixer-768-32_10xb64_in1k.py https://d
| `convmixer-1024-20_3rdparty_in1k`\* | From scratch | 24.38 | 5.55 | 76.94 | 93.36 | [config](convmixer-1024-20_10xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convmixer/convmixer-1024-20_3rdparty_10xb64_in1k_20220323-48f8aeba.pth) |
| `convmixer-1536-20_3rdparty_in1k`\* | From scratch | 51.63 | 48.71 | 81.37 | 95.61 | [config](convmixer-1536-20_10xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convmixer/convmixer-1536_20_3rdparty_10xb64_in1k_20220323-ea5786f3.pth) |

-*Models with * are converted from the [official repo](https://github.com/locuslab/convmixer). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/locuslab/convmixer). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

4 changes: 2 additions & 2 deletions configs/convnext/README.md
@@ -81,7 +81,7 @@ python tools/test.py configs/convnext/convnext-tiny_32xb128_in1k.py https://down
| `convnext-large_3rdparty_in21k`\* | 197.77 | 34.37 | [config](convnext-large_64xb64_in21k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-large_3rdparty_in21k_20220124-41b5a79f.pth) |
| `convnext-xlarge_3rdparty_in21k`\* | 350.20 | 60.93 | [config](convnext-xlarge_64xb64_in21k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-xlarge_3rdparty_in21k_20220124-f909bad7.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt). The config files of these models are only for inference. We haven't reproduced the training results.*

### Image Classification on ImageNet-1k

@@ -109,7 +109,7 @@ python tools/test.py configs/convnext/convnext-tiny_32xb128_in1k.py https://down
| `convnext-xlarge_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 350.20 | 60.93 | 86.97 | 98.20 | [config](convnext-xlarge_64xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-xlarge_in21k-pre-3rdparty_64xb64_in1k_20220124-76b6863d.pth) |
| `convnext-xlarge_in21k-pre-3rdparty_in1k-384px`\* | From scratch | 350.20 | 179.20 | 87.76 | 98.55 | [config](convnext-xlarge_64xb64_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-xlarge_in21k-pre-3rdparty_in1k-384px_20221219-b161bc14.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

4 changes: 2 additions & 2 deletions configs/convnext_v2/README.md
@@ -68,7 +68,7 @@ python tools/test.py configs/convnext_v2/convnext-v2-atto_32xb32_in1k.py https:/
| `convnext-v2-large_3rdparty-fcmae_in1k`\* | 197.96 | 34.40 | [config](convnext-v2-large_32xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-large_3rdparty-fcmae_in1k_20230104-bf38df92.pth) |
| `convnext-v2-huge_3rdparty-fcmae_in1k`\* | 660.29 | 115.00 | [config](convnext-v2-huge_32xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-huge_3rdparty-fcmae_in1k_20230104-fe43ae6c.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt-V2). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt-V2). The config files of these models are only for inference. We haven't reproduced the training results.*

### Image Classification on ImageNet-1k

@@ -93,7 +93,7 @@ python tools/test.py configs/convnext_v2/convnext-v2-atto_32xb32_in1k.py https:/
| `convnext-v2-huge_fcmae-in21k-pre_3rdparty_in1k-384px`\* | FCMAE ImageNet-21k | 660.29 | 337.96 | 88.68 | 98.73 | [config](convnext-v2-huge_32xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-huge_fcmae-in21k-pre_3rdparty_in1k-384px_20230104-02a4eb35.pth) |
| `convnext-v2-huge_fcmae-in21k-pre_3rdparty_in1k-512px`\* | FCMAE ImageNet-21k | 660.29 | 600.81 | 88.86 | 98.74 | [config](convnext-v2-huge_32xb32_in1k-512px.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-huge_fcmae-in21k-pre_3rdparty_in1k-512px_20230104-ce32e63c.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt-V2). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/ConvNeXt-V2). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/cspnet/README.md
@@ -63,7 +63,7 @@ python tools/test.py configs/cspnet/cspdarknet50_8xb32_in1k.py https://download.
| `cspresnet50_3rdparty_8xb32_in1k`\* | From scratch | 21.62 | 3.48 | 79.55 | 94.68 | [config](cspresnet50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/cspnet/cspresnet50_3rdparty_8xb32_in1k_20220329-dd6dddfb.pth) |
| `cspresnext50_3rdparty_8xb32_in1k`\* | From scratch | 20.57 | 3.11 | 79.96 | 94.96 | [config](cspresnext50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/cspnet/cspresnext50_3rdparty_8xb32_in1k_20220329-2cc84d21.pth) |

-*Models with * are converted from the [official repo](https://github.com/rwightman/pytorch-image-models). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/rwightman/pytorch-image-models). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/davit/README.md
@@ -63,7 +63,7 @@ python tools/test.py configs/davit/davit-tiny_4xb256_in1k.py https://download.op
| `davit-small_3rdparty_in1k`\* | From scratch | 49.75 | 8.80 | 83.61 | 96.75 | [config](davit-small_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/davit/davit-small_3rdparty_in1k_20221116-51a849a6.pth) |
| `davit-base_3rdparty_in1k`\* | From scratch | 87.95 | 15.51 | 84.09 | 96.82 | [config](davit-base_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/davit/davit-base_3rdparty_in1k_20221116-19e0d956.pth) |

-*Models with * are converted from the [official repo](https://github.com/dingmyu/davit/blob/main/mmdet/mmdet/models/backbones/davit.py#L355). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/dingmyu/davit/blob/main/mmdet/mmdet/models/backbones/davit.py#L355). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/deit/README.md
@@ -75,7 +75,7 @@ python tools/test.py configs/deit/deit-tiny_4xb256_in1k.py https://download.open
| `deit-base_224px-pre_3rdparty_in1k-384px`\* | 224px | 86.86 | 55.54 | 83.04 | 96.31 | [config](deit-base_16xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_ft-16xb32_in1k-384px_20211124-822d02f2.pth) |
| `deit-base-distilled_224px-pre_3rdparty_in1k-384px`\* | 224px | 87.63 | 55.65 | 85.55 | 97.35 | [config](deit-base-distilled_16xb32_in1k-384px.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base-distilled_3rdparty_ft-16xb32_in1k-384px_20211216-e48d6000.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L168). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/f5123946205daf72a88783dae94cabff98c49c55/models.py#L168). The config files of these models are only for inference. We haven't reproduced the training results.*

```{warning}
MMPretrain doesn't support training the distilled version DeiT.
2 changes: 1 addition & 1 deletion configs/deit3/README.md
@@ -76,7 +76,7 @@ python tools/test.py configs/deit3/deit3-small-p16_64xb64_in1k.py https://downlo
| `deit3-huge-p14_3rdparty_in1k`\* | From scratch | 632.13 | 167.40 | 85.21 | 97.36 | [config](deit3-huge-p14_64xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-huge-p14_3rdparty_in1k_20221009-e107bcb7.pth) |
| `deit3-huge-p14_in21k-pre_3rdparty_in1k`\* | ImageNet-21k | 632.13 | 167.40 | 87.19 | 98.26 | [config](deit3-huge-p14_64xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit3/deit3-huge-p14_in21k-pre_3rdparty_in1k_20221009-19b8a535.pth) |

-*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/main/models_v2.py#L171). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/facebookresearch/deit/blob/main/models_v2.py#L171). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation

2 changes: 1 addition & 1 deletion configs/densenet/README.md
@@ -64,7 +64,7 @@ python tools/test.py configs/densenet/densenet121_4xb256_in1k.py https://downloa
| `densenet201_3rdparty_in1k`\* | From scratch | 20.01 | 4.37 | 77.32 | 93.64 | [config](densenet201_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/densenet/densenet201_4xb256_in1k_20220426-05cae4ef.pth) |
| `densenet161_3rdparty_in1k`\* | From scratch | 28.68 | 7.82 | 77.61 | 93.83 | [config](densenet161_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/densenet/densenet161_4xb256_in1k_20220426-ee6a80a9.pth) |

-*Models with * are converted from the [official repo](https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py). The config files of these models are only for inference. We haven't reproduced the training results.*

## Citation
