diff --git a/CONTRIBUTING_CN.md b/CONTRIBUTING_CN.md new file mode 100644 index 000000000..82e7ee39f --- /dev/null +++ b/CONTRIBUTING_CN.md @@ -0,0 +1,138 @@ +# MindCV 贡献指南 + +欢迎贡献，我们将不胜感激！点滴之力皆有裨益，每一份贡献都会得到署名致谢。 + +## 贡献者许可协议 + +首次向 MindCV 社区提交代码前需签署 CLA。 + +个人贡献者请参考 [ICLA 在线文档](https://www.mindspore.cn/icla) 了解详细信息。 + +## 贡献类型 + +### 报告错误 + +请在 https://github.com/mindspore-lab/mindcv/issues 报告错误。 + +如果您要报告错误，请包括： + +* 您的操作系统名称和版本。 +* 任何可能有助于故障排除的本地设置详细信息。 +* 重现错误的详细步骤。 + +### 修复Bugs + +查阅GitHub issues以了解待修复的Bug。任何带有“bug”和“help wanted”标签的issue都对想要解决它的人开放。 + +### 实现features + +查阅GitHub issues以了解待实现的特性。任何标有“enhancement”和“help wanted”的issue都对想要实现它的人开放。 + +### 编写文档 + +MindCV的文档总是欢迎补充：既可以写在官方MindCV文档或docstrings中，也可以发表在网络上的博客、文章中。 + +### 提交反馈 + +发送反馈的最佳方式是在 https://github.com/mindspore-lab/mindcv/issues 上提交问题。 + +如果您要提出一项功能： + +* 详细说明它将如何工作。 +* 尽可能缩小范围，使其更易于实施。 +* 请记住，这是一个志愿者驱动的项目，欢迎贡献 :) + +## 入门 + +准备好贡献了吗？以下是设置 `mindcv` 本地开发环境的步骤。 + +1. 在 [GitHub](https://github.com/mindspore-lab/mindcv) 上 fork `mindcv` 代码仓。 +2. 在本地克隆您的 fork： + +```shell +git clone git@github.com:your_name_here/mindcv.git +``` + +之后，您应该将官方代码仓添加为upstream代码仓： + +```shell +git remote add upstream git@github.com:mindspore-lab/mindcv +``` + +3. 将本地副本配置到 conda 环境中。假设您已安装 conda，您可以按照以下方式设置 fork 以进行本地开发： + +```shell +conda create -n mindcv python=3.8 +conda activate mindcv +cd mindcv +pip install -e . +``` + +4. 为本地开发创建一个分支： + +```shell +git checkout -b name-of-your-bugfix-or-feature +``` + +现在您可以在本地进行更改。 + +5. 完成更改后，检查您的更改是否通过了linters和tests检查： + +```shell +pre-commit run --show-diff-on-failure --color=always --all-files +pytest +``` + +如果所有静态 linting 都通过，您将获得如下输出： + +![pre-commit-succeed](https://user-images.githubusercontent.com/74176172/221346245-ea868015-bb09-4e53-aa56-73b015e1e336.png) + +否则，您需要根据输出修复警告： + +![pre-commit-failed](https://user-images.githubusercontent.com/74176172/221346251-7d8f531f-9094-474b-97f0-fd5a55e6d3de.png) + +要获取 pre-commit 和 pytest，只需使用 pip 将它们安装到您的 conda 环境中即可。 + +6. 提交您的更改并将您的分支推送到 GitHub： + +```shell +git add . +git commit -m "您对更改的详细描述。" +git push origin name-of-your-bugfix-or-feature +``` + +7. 通过 GitHub 网站提交pull request。 + +## pull request指南 + +在提交pull request之前，请检查它是否符合以下指南： + +1. pull request应包括测试。 +2. 如果pull request添加了功能，则应更新文档。将新功能放入带有docstring的函数中，并将特性添加到 README.md 中的列表中。 +3. pull request应适用于 Python 3.7、3.8 和 3.9 以及 PyPy。检查 + https://github.com/mindspore-lab/mindcv/actions + 并确保所有受支持的 Python 版本的测试都通过。 + +## 提示 + +您可以安装 git hook脚本，这样就无需每次手动运行 `pre-commit run -a` 进行 linting。 + +运行以下命令来设置 git hook脚本： + +```shell +pre-commit install +``` + +现在 `pre-commit` 将在 `git commit` 时自动运行！
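+
+补充说明：pull request 指南要求附带测试。下面给出一个最小的 pytest 用例草图，仅作示意：文件名、函数名与断言均为本文的假设，并非仓库中的既有测试；`mindcv.create_model` 与 `resnet50` 取自本项目文档中的示例。
+
+```python
+# test_create_model.py: 最小 pytest 用例草图(假设已按上文完成本地安装)
+import mindcv
+
+
+def test_create_model_resnet50():
+    # 使用 pretrained=False，避免在单元测试中下载预训练权重
+    network = mindcv.create_model("resnet50", pretrained=False)
+    assert network is not None
+```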
+ +## 发布 + +提醒维护者如何部署。确保提交所有更改(包括 HISTORY.md 中的条目),然后运行: + +```shell +bump2version patch # possible: major / minor / patch +git push +git push --tags +``` + +如果测试通过,GitHub Action 将部署到 PyPI。 diff --git a/README_CN.md b/README_CN.md index 7977d4987..9a420de20 100644 --- a/README_CN.md +++ b/README_CN.md @@ -25,7 +25,8 @@ ## 简介 -MindCV是一个基于 [MindSpore](https://www.mindspore.cn/) 开发的,致力于计算机视觉相关技术研发的开源工具箱。它提供大量的计算机视觉领域的经典模型和SoTA模型以及它们的预训练权重和训练策略。同时,还提供了自动增强等SoTA算法来提高模型性能。通过解耦的模块设计,您可以轻松地将MindCV应用到您自己的CV任务中。 +MindCV是一个基于 [MindSpore](https://www.mindspore.cn/) +开发的,致力于计算机视觉相关技术研发的开源工具箱。它提供大量的计算机视觉领域的经典模型和SoTA模型以及它们的预训练权重和训练策略。同时,还提供了自动增强等SoTA算法来提高模型性能。通过解耦的模块设计,您可以轻松地将MindCV应用到您自己的CV任务中。 主分支代码目前支持 **MindSpore 1.8+** 以上的版本,包含 **MindSpore 2.0🔥** 版本。 @@ -41,7 +42,7 @@ MindCV是一个基于 [MindSpore](https://www.mindspore.cn/) 开发的,致力 >>> network = mindcv.create_model('resnet50', pretrained=True) ``` - 用户可通过预定义的训练和微调脚本,快速配置并完成训练或迁移学习任务。 + 用户可通过预定义的训练和微调脚本,快速配置并完成训练或迁移学习任务。 ```shell # 配置和启动迁移学习任务 @@ -54,7 +55,8 @@ MindCV是一个基于 [MindSpore](https://www.mindspore.cn/) 开发的,致力 ## 模型支持 -基于MindCV进行模型实现和重训练的汇总结果详见[模型仓库](https://mindspore-lab.github.io/mindcv/zh/modelzoo/), 所用到的训练策略和训练后的模型权重均可通过表中链接获取。 +基于MindCV进行模型实现和重训练的汇总结果详见[模型仓库](https://mindspore-lab.github.io/mindcv/zh/modelzoo/), +所用到的训练策略和训练后的模型权重均可通过表中链接获取。 各模型讲解和训练说明详见[configs](configs)目录。 @@ -113,11 +115,12 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg' python train.py --model resnet50 --dataset cifar10 --dataset_download ``` - 以上代码是在CIFAR10数据集上单卡(CPU/GPU/Ascend)训练ResNet的示例,通过`model`和`dataset`参数分别指定需要训练的模型和数据集。 + 以上代码是在CIFAR10数据集上单卡(CPU/GPU/Ascend)训练ResNet的示例,通过`model`和`dataset`参数分别指定需要训练的模型和数据集。 - 分布式训练 - 对于像ImageNet这样的大型数据集,有必要在多个设备上以分布式模式进行训练。基于MindSpore对分布式相关功能的良好支持,用户可以使用`mpirun`来进行模型的分布式训练。 + 对于像ImageNet这样的大型数据集,有必要在多个设备上以分布式模式进行训练。基于MindSpore对分布式相关功能的良好支持,用户可以使用`mpirun` + 来进行模型的分布式训练。 ```shell # 分布式训练 @@ -126,26 +129,27 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg' --model densenet121 --dataset imagenet --data_dir ./datasets/imagenet ``` - 完整的参数列表及说明在`config.py`中定义,可运行`python train.py --help`快速查看。 + 完整的参数列表及说明在`config.py`中定义,可运行`python train.py --help`快速查看。 - 如需恢复训练,请指定`--ckpt_path`和`--ckpt_save_dir`参数,脚本将加载路径中的模型权重和优化器状态,并恢复中断的训练进程。 + 如需恢复训练,请指定`--ckpt_path`和`--ckpt_save_dir`参数,脚本将加载路径中的模型权重和优化器状态,并恢复中断的训练进程。 - 超参配置和预训练策略 - 您可以编写yaml文件或设置外部参数来指定配置数据、模型、优化器等组件及其超参数。以下是使用预设的训练策略(yaml文件)进行模型训练的示例。 + 您可以编写yaml文件或设置外部参数来指定配置数据、模型、优化器等组件及其超参数。以下是使用预设的训练策略(yaml文件)进行模型训练的示例。 ```shell mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet_1.0_gpu.yaml ``` - **预定义的训练策略** - MindCV目前提供了超过20种模型训练策略,在ImageNet取得SoTA性能。 - 具体的参数配置和详细精度性能汇总请见[`configs`](configs)文件夹。 - 您可以便捷地将这些训练策略用于您的模型训练中以提高性能(复用或修改相应的yaml文件即可)。 + **预定义的训练策略** + MindCV目前提供了超过20种模型训练策略,在ImageNet取得SoTA性能。 + 具体的参数配置和详细精度性能汇总请见[`configs`](configs)文件夹。 + 您可以便捷地将这些训练策略用于您的模型训练中以提高性能(复用或修改相应的yaml文件即可)。 - 在ModelArts/OpenI平台上训练 - 在[ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html)或[OpenI](https://openi.pcl.ac.cn/)云平台上进行训练,需要执行以下操作: + 在[ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html)或[OpenI](https://openi.pcl.ac.cn/) + 云平台上进行训练,需要执行以下操作: ```text 1、在云平台上创建新的训练任务。 @@ -156,12 +160,16 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg' **静态图和动态图模式** -在默认情况下,模型训练(`train.py`)在MindSpore上以[图模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/mode.html) 运行,该模式对使用静态图编译对性能和并行计算进行了优化。 
-相比之下，[pynative模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/mode.html#%E5%8A%A8%E6%80%81%E5%9B%BE)的优势在于灵活性和易于调试。为了方便调试，您可以将参数`--mode`设为1以将运行模式设置为调试模式。 +在默认情况下，模型训练(`train.py` +)在MindSpore上以[图模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/mode.html) +运行，该模式使用静态图编译，对性能和并行计算进行了优化。 +相比之下，[pynative模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/mode.html#%E5%8A%A8%E6%80%81%E5%9B%BE) +的优势在于灵活性和易于调试。为了方便调试，您可以将参数`--mode`设为1以将运行模式设置为调试模式。 **混合模式** -[基于mindspore.jit的混合模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/combine.html) 是兼顾了MindSpore的效率和灵活的混合模式。用户可通过使用`train_with_func.py`文件来使用该混合模式进行训练。 +[基于mindspore.jit的混合模式](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/combine.html) +是兼顾了MindSpore的效率和灵活性的混合模式。用户可通过`train_with_func.py`脚本使用该混合模式进行训练。 ```shell python train_with_func.py --model=resnet50 --dataset=cifar10 --dataset_download --epoch_size=10 ``` @@ -287,8 +295,8 @@ python train.py --model=resnet50 --dataset=cifar10 \ * Stochastic Depth (depends on networks) * Dropout (depends on networks) * 损失函数 - * Cross Entropy (w/ class weight and auxiliary logit support) - * Binary Cross Entropy (w/ class weight and auxiliary logit support) + * Cross Entropy (w/ class weight and auxiliary logit support) + * Binary Cross Entropy (w/ class weight and auxiliary logit support) * Soft Cross Entropy Loss (automatically enabled if mixup or label smoothing is used) * Soft Binary Cross Entropy Loss (automatically enabled if mixup or label smoothing is used) * 模型融合 @@ -303,54 +311,58 @@ python train.py --model=resnet50 --dataset=cifar10 \ 新版本`0.3.0`发布。我们将在未来的发布中丢弃对MindSpore1.x版本的支持 1. 新模型: - - [RegNet](configs/regnet)的Y-16GF规格 - - [SwinTransformerV2](configs/swintransformerv2) - - [VOLO](configs/volo) - - [CMT](configs/cmt) - - [HaloNet](configs/halonet) - - [SSD](examples/det/ssd) - - [DeepLabV3](examples/seg/deeplabv3) - - [CLIP](examples/clip) & [OpenCLIP](examples/open_clip) + - [RegNet](configs/regnet)的Y-16GF规格 + - [SwinTransformerV2](configs/swintransformerv2) + - [VOLO](configs/volo) + - [CMT](configs/cmt) + - [HaloNet](configs/halonet) + - [SSD](examples/det/ssd) + - [DeepLabV3](examples/seg/deeplabv3) + - [CLIP](examples/clip) & [OpenCLIP](examples/open_clip) 2. 特性: - - 损失函数AsymmetricLoss及JSDCrossEntropy - - 数据增强分离(Augmentations Split) - - 自定义混合精度策略 + - 损失函数AsymmetricLoss及JSDCrossEntropy + - 数据增强分离(Augmentations Split) + - 自定义混合精度策略 3. 错误修复: - - 由于分类器参数未完全弹出，您在新建预训练模型时传入参数`num_classes`可能会导致错误。 + - 由于分类器参数未完全弹出，您在新建预训练模型时传入参数`num_classes`可能会导致错误。 4. 重构: - - 许多模型的名字发生了变更，以便更好的理解。 - - `VisionTransformer`的模型定义[脚本](mindcv/models/vit.py)。 - - 混合模式(PyNative+jit)的训练[脚本](train_with_func.py)。 + - 许多模型的名字发生了变更，以便更好地理解。 + - `VisionTransformer`的模型定义[脚本](mindcv/models/vit.py)。 + - 混合模式(PyNative+jit)的训练[脚本](train_with_func.py)。 5. 文档: - - 如何提取多尺度特征的教程指引。 - - 如何在自定义数据集上微调预训练模型的教程指引。 + - 如何提取多尺度特征的教程指引。 + - 如何在自定义数据集上微调预训练模型的教程指引。 6. BREAKING CHANGES: - - 我们将在此小版本的未来发布中丢弃对MindSpore1.x的支持。 - - 配置项`filter_bias_and_bn`将被移除并更名为`weight_decay_filter`。 - 我们会对已有训练策略进行迁移，但函数`create_optimizer`的签名变更将是不兼容的，且未迁移旧版本的训练策略也将变得不兼容。详见[PR/752](https://github.com/mindspore-lab/mindcv/pull/752)。 + - 我们将在此小版本的未来发布中放弃对MindSpore1.x的支持。 + - 配置项`filter_bias_and_bn`将被移除并更名为`weight_decay_filter`。 + 我们会对已有训练策略进行迁移，但函数`create_optimizer` + 的签名变更将是不兼容的，且未迁移旧版本的训练策略也将变得不兼容。详见[PR/752](https://github.com/mindspore-lab/mindcv/pull/752)。 - 2023/6/16 + + 1. 新版本 `0.2.2` 发布啦！我们将`MindSpore`升级到了2.0版本，同时保持了对1.8版本的兼容 2. 
新模型: - - [ConvNextV2](configs/convnextv2) - - [CoAT](configs/coat)的mini规格 - - [MnasNet](configs/mnasnet)的1.3规格 - - [ShuffleNetV2](configs/shufflenetv2)的混合精度(O3)版本 + - [ConvNextV2](configs/convnextv2) + - [CoAT](configs/coat)的mini规格 + - [MnasNet](configs/mnasnet)的1.3规格 + - [ShuffleNetV2](configs/shufflenetv2)的混合精度(O3)版本 3. 新特性: - - 梯度累加 - - 自定义[TrainStep](mindcv/utils/train_step.py)支持了动态损失缩放 - - `OneCycleLR`和`CyclicLR`学习率调度器 - - 更好的日志打印与记录 - - 金字塔特征抽取 + - 梯度累加 + - 自定义[TrainStep](mindcv/utils/train_step.py)支持了动态损失缩放 + - `OneCycleLR`和`CyclicLR`学习率调度器 + - 更好的日志打印与记录 + - 金字塔特征抽取 4. 错误修复: - - `Serving`部署教程(mobilenet_v3在昇腾后端的MindSpore1.8版本上不支持) - - 文档网站上的损坏链接 + - `Serving`部署教程(mobilenet_v3在昇腾后端的MindSpore1.8版本上不支持) + - 文档网站上的损坏链接 - 2023/6/2 + 1. 新版本:`0.2.1` 发布 2. 新[文档](https://mindspore-lab.github.io/mindcv/zh/)上线 - 2023/5/30 + 1. 新模型: - [VGG](configs/vgg)混合精度(O2)版本 - [GhostNet](configs/ghostnet) @@ -365,6 +377,7 @@ python train.py --model=resnet50 --dataset=cifar10 \ - ViT 池化模式 - 2023/04/28 + 1. 增添了一些新模型，列出如下: - [VGG](configs/vgg) - [DPN](configs/dpn) @@ -387,6 +400,7 @@ python train.py --model=resnet50 --dataset=cifar10 \ - 修正了优化器`Adan`中标志变量不为`Tensor`的错误 - 2023/03/25 + 1. 更新ResNet网络预训练权重，现在预训练权重有更高Top1精度 - ResNet18精度从70.09提升到70.31 - ResNet34精度从73.69提升到74.15 @@ -396,6 +410,7 @@ python train.py --model=resnet50 --dataset=cifar10 \ 2. 按照规则(model_scale-sha256sum.ckpt)更新预训练权重名字和相应下载URL链接 - 2023/03/05 + 1. 增加Lion (EvoLved Sign Momentum)优化器，论文 https://arxiv.org/abs/2302.06675 - Lion所使用的学习率一般比AdamW小3到10倍，而权重衰减(weight_decay)要大3到10倍 2. 增加6个模型及其训练策略、预训练权重: @@ -408,36 +423,42 @@ python train.py --model=resnet50 --dataset=cifar10 \ 3. 支持梯度裁剪 - 2023/01/10 + 1. MindCV v0.1发布! 支持通过PyPI安装 (`pip install mindcv`) 2. 新增4个模型的预训练权重及其策略: googlenet, inception_v3, inception_v4, xception - 2022/12/09 + 1. 支持在所有学习率策略中添加学习率预热操作，除cosine decay策略外 2. 支持`Repeated Augmentation`操作，可以通过`--aug_repeats`对其进行设置，设置值应大于1(通常为3或4) 3. 支持EMA 4. 通过支持mixup和cutmix操作进一步优化BCE损失函数 - 2022/11/21 + 1. 支持模型损失和正确率的可视化 2. 支持轮次维度的cosine decay策略的学习率预热操作(之前仅支持步维度) - 2022/11/09 + 1. 支持2个ViT预训练模型 2. 支持RandAugment augmentation操作 3. 提高了CutMix操作的可用性，CutMix和Mixup目前可以一起使用 4. 解决了学习率画图的bug - 2022/10/12 + 1. BCE和CE损失函数目前都支持class-weight config操作、label smoothing操作、auxiliary logit input操作(适用于类似Inception的模型) - 2022/09/13 + 1. 
支持Adan优化器（试用版） ## 贡献方式 欢迎开发者用户提issue或提交代码PR，或贡献更多的算法和模型，一起让MindCV变得更好。 -有关贡献指南，请参阅[CONTRIBUTING.md](CONTRIBUTING.md)。 +有关贡献指南，请参阅[CONTRIBUTING_CN.md](CONTRIBUTING_CN.md)。 请遵循[模型编写指南](docs/zh/how_to_guides/write_a_new_model.md)所规定的规则来贡献模型接口:) ## 许可证 diff --git a/benchmark_results_CN.md b/benchmark_results_CN.md new file mode 100644 index 000000000..1148483f6 --- /dev/null +++ b/benchmark_results_CN.md @@ -0,0 +1,163 @@ +| Model | Context | Top-1 (%) | Top-5 (%) | Params(M) | Recipe | Download | +|------------------------|-----------|-----------|-----------|-----------|---------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------| +| bit_resnet50 | D910x8-G | 76.81 | 93.17 | 25.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) | +| bit_resnet50x3 | D910x8-G | 80.63 | 95.12 | 217.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50x3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50x3-a960f91f.ckpt) | +| bit_resnet101 | D910x8-G | 77.93 | 93.75 | 44.54 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet101_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet101-2efa9106.ckpt) | +| cmt_small | D910x8-G | 83.24 | 96.41 | 26.09 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) | +| coat_lite_tiny | D910x8-G | 77.35 | 93.43 | 5.72 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_lite_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_lite_tiny-fa7bf894.ckpt) | +| coat_lite_mini | D910x8-G | 78.51 | 93.84 | 11.01 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_lite_mini_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_lite_mini-55a52f05.ckpt) | +| coat_tiny | D910x8-G | 79.67 | 94.88 | 5.50 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) | +| coat_mini | D910x8-G | 81.08 | 95.34 | 10.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_mini_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_mini-57c5bce7.ckpt) | +| convit_tiny | D910x8-G | 73.66 | 91.72 | 5.71 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) | +| convit_tiny_plus | D910x8-G | 77.00 | 93.60 | 9.97 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_plus_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny_plus-e9d7fb92.ckpt) | +| convit_small | D910x8-G | 81.63 | 95.59 | 27.78 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_small-ba858604.ckpt) | +| convit_small_plus | D910x8-G | 81.80 | 95.42 | 48.98 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_small_plus_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_small_plus-2352b9f7.ckpt) | +| convit_base | D910x8-G | 82.10 | 95.52 | 86.54 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_base_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_base-c61b808c.ckpt) | +| convit_base_plus | D910x8-G | 81.96 | 95.04 | 153.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_base_plus_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_base_plus-5c61c9ce.ckpt) | +| convnext_tiny | D910x64-G | 81.91 | 95.79 | 28.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) | +| convnext_small | D910x64-G | 83.40 | 96.36 | 50.22 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_small-e23008f3.ckpt) | +| convnext_base | D910x64-G | 83.32 | 96.24 | 88.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_base_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_base-ee3544b8.ckpt) | +| convnextv2_tiny | D910x8-G | 82.43 | 95.98 | 28.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) | +| crossvit_9 | D910x8-G | 73.56 | 91.79 | 8.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) | +| crossvit_15 | D910x8-G | 81.08 | 95.33 | 27.27 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_15_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_15-eaa43c02.ckpt) | +| crossvit_18 | D910x8-G | 81.93 | 95.75 | 43.27 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_18-ca0a2e43.ckpt) | +| densenet121 | D910x8-G | 75.64 | 92.84 | 8.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) | +| densenet161 | D910x8-G | 79.09 | 94.66 | 28.90 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_161_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet161-120_5004_Ascend.ckpt) | +| densenet169 | D910x8-G | 77.26 | 93.71 | 14.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_169_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet169-120_5004_Ascend.ckpt) | +| densenet201 | D910x8-G | 78.14 | 94.08 | 20.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_201_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet201-120_5004_Ascend.ckpt) | +| dpn92 | D910x8-G | 79.46 | 94.49 | 37.79 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) | +| dpn98 | D910x8-G | 79.94 | 94.57 | 61.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn98_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn98-119a8207.ckpt) | +| dpn107 | D910x8-G | 80.05 | 94.74 | 87.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn107_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn107-7d7df07b.ckpt) | +| dpn131 | D910x8-G | 80.07 | 94.72 | 79.48 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn131_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn131-47f084b3.ckpt) | +| edgenext_xx_small | D910x8-G | 71.02 | 89.99 | 1.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) | +| edgenext_x_small | D910x8-G | 75.14 | 92.50 | 2.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_x_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_x_small-a200c6fc.ckpt) | +| edgenext_small | D910x8-G | 79.15 | 94.39 | 5.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_small-f530c372.ckpt) | +| edgenext_base | D910x8-G | 82.24 | 95.94 | 18.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_base_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_base-4335e9dc.ckpt) | +| efficientnet_b0 | D910x64-G | 76.89 | 93.16 | 5.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b0-103ec70c.ckpt) | +| efficientnet_b1 | D910x64-G | 78.95 | 94.34 | 7.86 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b1-f8c6b13f.ckpt) | +| ghostnet_050 | D910x8-G | 66.03 | 86.64 | 2.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) | +| ghostnet_100 | D910x8-G | 73.78 | 91.66 | 5.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_100_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_100-bef8025a.ckpt) | +| ghostnet_130 | D910x8-G | 75.50 | 92.56 | 7.39 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_130_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_130-cf4c235c.ckpt) | +| googlenet | D910x8-G | 72.68 | 90.89 | 6.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) | +| halonet_50t | D910X8-G | 79.53 | 94.79 | 22.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | 
[weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) | +| hrnet_w32 | D910x8-G | 80.64 | 95.44 | 41.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w32-cc4fbd91.ckpt) | +| hrnet_w48 | D910x8-G | 81.19 | 95.69 | 77.57 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w48_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w48-2e3399cd.ckpt) | +| inception_v3 | D910x8-G | 79.11 | 94.40 | 27.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) | +| inception_v4 | D910x8-G | 80.88 | 95.34 | 42.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) | +| mixnet_s | D910x8-G | 75.52 | 92.52 | 4.17 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) | +| mixnet_m | D910x8-G | 76.64 | 93.05 | 5.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_m_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_m-74cc4cb1.ckpt) | +| mixnet_l | D910x8-G | 78.73 | 94.31 | 7.38 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_l_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_l-978edf2b.ckpt) | +| mnasnet_050 | D910x8-G | 68.07 | 88.09 | 2.14 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_050-7d8bf4db.ckpt) | +| mnasnet_075 | D910x8-G | 71.81 | 90.53 | 3.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) | +| mnasnet_100 | D910x8-G | 74.28 | 91.70 | 4.42 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_100-1bcf43f8.ckpt) | +| mnasnet_130 | D910x8-G | 75.65 | 92.64 | 6.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_1.3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_130-a43a150a.ckpt) | +| mnasnet_140 | D910x8-G | 76.01 | 92.83 | 7.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_1.4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_140-7e20bb30.ckpt) | +| mobilenet_v1_025 | D910x8-G | 53.87 | 77.66 | 0.47 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) | +| mobilenet_v1_050 | D910x8-G | 65.94 | 86.51 | 1.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_050-23e9ddbe.ckpt) | +| 
mobilenet_v1_075 | D910x8-G | 70.44 | 89.49 | 2.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_075-5bed0c73.ckpt) | +| mobilenet_v1_100 | D910x8-G | 72.95 | 91.01 | 4.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_100-91c7b206.ckpt) | +| mobilenet_v2_075 | D910x8-G | 69.98 | 89.32 | 2.66 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) | +| mobilenet_v2_100 | D910x8-G | 72.27 | 90.72 | 3.54 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_100-d5532038.ckpt) | +| mobilenet_v2_140 | D910x8-G | 75.56 | 92.56 | 6.15 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_1.4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_140-98776171.ckpt) | +| mobilenet_v3_small_100 | D910x8-G | 68.10 | 87.86 | 2.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) | +| mobilenet_v3_large_100 | D910x8-G | 75.23 | 92.31 | 5.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) | +| mobilevit_xx_small | D910x8-G | 68.91 | 88.91 | 1.27 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) | +| mobilevit_x_small | D910x8-G | 74.99 | 92.32 | 2.32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_x_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_x_small-673fc6f2.ckpt) | +| mobilevit_small | D910x8-G | 78.47 | 94.18 | 5.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_small-caf79638.ckpt) | +| nasnet_a_4x1056 | D910x8-G | 73.65 | 91.25 | 5.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-0fbb5cdd.ckpt) | +| pit_ti | D910x8-G | 72.96 | 91.33 | 4.85 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) | +| pit_xs | D910x8-G | 78.41 | 94.06 | 10.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_xs_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_xs-fea0d37e.ckpt) | +| pit_s | D910x8-G | 80.56 | 94.80 | 23.46 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_s-3c1ba36f.ckpt) | +| pit_b | D910x8-G | 81.87 | 95.04 | 73.76 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_b_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_b-2411c9b6.ckpt) | +| poolformer_s12 | D910x8-G | 77.33 | 93.34 | 11.92 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) | +| pvt_tiny | D910x8-G | 74.81 | 92.18 | 13.23 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) | +| pvt_small | D910x8-G | 79.66 | 94.71 | 24.49 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_small-213c2ed1.ckpt) | +| pvt_medium | D910x8-G | 81.82 | 95.81 | 44.21 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_medium_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_medium-469e6802.ckpt) | +| pvt_large | D910x8-G | 81.75 | 95.70 | 61.36 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_large-bb6895d7.ckpt) | +| pvt_v2_b0 | D910x8-G | 71.50 | 90.60 | 3.67 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) | +| pvt_v2_b1 | D910x8-G | 78.91 | 94.49 | 14.01 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b1-3ceb171a.ckpt) | +| pvt_v2_b2 | D910x8-G | 81.99 | 95.74 | 25.35 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b2_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b2-0565d18e.ckpt) | +| pvt_v2_b3 | D910x8-G | 82.84 | 96.24 | 45.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b3-feaae3fc.ckpt) | +| pvt_v2_b4 | D910x8-G | 83.14 | 96.27 | 62.56 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b4-1cf4bc03.ckpt) | +| regnet_x_200mf | D910x8-G | 68.74 | 88.38 | 2.68 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_200mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_200mf-0c2b1eb5.ckpt) | +| regnet_x_400mf | D910x8-G | 73.16 | 91.35 | 5.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_400mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_400mf-4848837d.ckpt) | +| regnet_x_600mf | D910x8-G | 74.34 | 92.00 | 6.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_600mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_600mf-ccd76c94.ckpt) | +| regnet_x_800mf | D910x8-G | 76.04 | 92.97 | 
7.26 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) | +| regnet_y_200mf | D910x8-G | 70.30 | 89.61 | 3.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_y_200mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_y_200mf-76a2f720.ckpt) | +| regnet_y_400mf | D910x8-G | 73.91 | 91.84 | 4.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_y_400mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_y_400mf-d496799d.ckpt) | +| regnet_y_600mf | D910x8-G | 75.69 | 92.50 | 6.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_y_600mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_y_600mf-a84e19b2.ckpt) | +| regnet_y_800mf | D910x8-G | 76.52 | 93.10 | 6.26 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_y_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_y_800mf-9b5211bd.ckpt) | +| regnet_y_16gf | D910x8-G | 82.92 | 96.29 | 83.71 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_y_16gf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_y_16gf-c30a856f.ckpt) | +| repmlp_t224 | D910x8-G | 76.71 | 93.30 | 38.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) | +| repvgg_a0 | D910x8-G | 72.19 | 90.75 | 9.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a0-6e71139d.ckpt) | +| repvgg_a1 | D910x8-G | 74.19 | 91.89 | 14.12 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a1-539513ac.ckpt) | +| repvgg_a2 | D910x8-G | 76.63 | 93.42 | 28.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a2_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a2-cdc90b11.ckpt) | +| repvgg_b0 | D910x8-G | 74.99 | 92.40 | 15.85 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b0-54d5862c.ckpt) | +| repvgg_b1 | D910x8-G | 78.81 | 94.37 | 57.48 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b1-4673797.ckpt) | +| repvgg_b2 | D910x64-G | 79.29 | 94.66 | 89.11 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b2_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b2-7c91ccd4.ckpt) | +| repvgg_b3 | D910x64-G | 80.46 | 95.34 | 123.19 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b3-30b35f52.ckpt) | +| repvgg_b1g2 | D910x8-G | 78.03 | 94.09 | 45.85 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b1g2_ascend.yaml) | 
[weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b1g2-f0dc714f.ckpt) | +| repvgg_b1g4 | D910x8-G | 77.64 | 94.03 | 40.03 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b1g4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b1g4-bd93230e.ckpt) | +| repvgg_b2g4 | D910x8-G | 78.8 | 94.36 | 61.84 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_b2g4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_b2g4-e79eeadd.ckpt) | +| res2net50 | D910x8-G | 79.35 | 94.64 | 25.76 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50-f42cf71b.ckpt) | +| res2net101 | D910x8-G | 79.56 | 94.70 | 45.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_101_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net101-8ae60132.ckpt) | +| res2net50_v1b | D910x8-G | 80.32 | 95.09 | 25.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_v1b_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50_v1b-99304e92.ckpt) | +| res2net101_v1b | D910x8-G | 81.14 | 95.41 | 45.35 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_101_v1b_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net101_v1b-7e6db001.ckpt) | +| resnest50 | D910x8-G | 80.81 | 95.16 | 27.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest50-f2e7fc9c.ckpt) | +| resnest101 | D910x8-G | 82.90 | 96.12 | 48.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest101_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest101-7cc5c258.ckpt) | +| resnet18 | D910x8-G | 70.21 | 89.62 | 11.70 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet18-1e65cd21.ckpt) | +| resnet34 | D910x8-G | 74.15 | 91.98 | 21.81 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_34_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet34-f297d27e.ckpt) | +| resnet50 | D910x8-G | 76.69 | 93.50 | 25.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) | +| resnet101 | D910x8-G | 78.24 | 94.09 | 44.65 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_101_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet101-689c5e77.ckpt) | +| resnet152 | D910x8-G | 78.72 | 94.45 | 60.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_152_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet152-beb689d8.ckpt) | +| resnetv2_50 | D910x8-G | 76.90 | 93.37 | 25.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_50-3c2f143b.ckpt) | +| resnetv2_101 | D910x8-G | 78.48 | 94.23 | 44.55 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_101_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_101-5d4c49a1.ckpt) | +| resnext50_32x4d | D910x8-G | 78.53 | 94.10 | 25.10 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext50_32x4d-af8aba16.ckpt) | +| resnext101_32x4d | D910x8-G | 79.83 | 94.80 | 44.32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext101_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext101_32x4d-3c1e9c51.ckpt) | +| resnext101_64x4d | D910x8-G | 80.30 | 94.82 | 83.66 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext101_64x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext101_64x4d-8929255b.ckpt) | +| resnext152_64x4d | D910x8-G | 80.52 | 95.00 | 115.27 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext152_64x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext152_64x4d-3aba275c.ckpt) | +| rexnet_09 | D910x8-G | 77.06 | 93.41 | 4.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_09-da498331.ckpt) | +| rexnet_10 | D910x8-G | 77.38 | 93.60 | 4.84 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x10_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_10-c5fb2dc7.ckpt) | +| rexnet_13 | D910x8-G | 79.06 | 94.28 | 7.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_13-a49c41e5.ckpt) | +| rexnet_15 | D910x8-G | 79.95 | 94.74 | 9.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x15_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_15-37a931d3.ckpt) | +| rexnet_20 | D910x8-G | 80.64 | 94.99 | 16.45 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x20_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_20-c5810914.ckpt) | +| seresnet18 | D910x8-G | 71.81 | 90.49 | 11.80 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet18-7880643b.ckpt) | +| seresnet34 | D910x8-G | 75.38 | 92.50 | 21.98 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet34_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet34-8179d3c9.ckpt) | +| seresnet50 | D910x8-G | 78.32 | 94.07 | 28.14 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet50-ff9cd214.ckpt) | +| seresnext26_32x4d | D910x8-G | 77.17 | 93.42 | 16.83 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnext26_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnext26_32x4d-5361f5b6.ckpt) | +| seresnext50_32x4d | D910x8-G | 78.71 | 94.36 | 27.63 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnext50_32x4d_ascend.yaml) 
| [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnext50_32x4d-fdc35aca.ckpt) | +| shufflenet_v1_g3_05 | D910x8-G | 57.05 | 79.73 | 0.73 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-42cfe109.ckpt) | +| shufflenet_v1_g3_10 | D910x8-G | 67.77 | 87.73 | 1.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_10-245f0ccf.ckpt) | +| shufflenet_v2_x0_5 | D910x8-G | 60.53 | 82.11 | 1.37 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-8c841061.ckpt) | +| shufflenet_v2_x1_0 | D910x8-G | 69.47 | 88.88 | 2.29 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x1_0-0da4b7fa.ckpt) | +| shufflenet_v2_x1_5 | D910x8-G | 72.79 | 90.93 | 3.53 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_1.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x1_5-00b56131.ckpt) | +| shufflenet_v2_x2_0 | D910x8-G | 75.07 | 92.08 | 7.44 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_2.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x2_0-ed8e698d.ckpt) | +| skresnet18 | D910x8-G | 73.09 | 91.20 | 11.97 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet18-868228e5.ckpt) | +| skresnet34 | D910x8-G | 76.71 | 93.10 | 22.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet34_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet34-d668b629.ckpt) | +| skresnext50_32x4d | D910x8-G | 79.08 | 94.60 | 37.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnext50_32x4d-395413a2.ckpt) | +| squeezenet1_0 | D910x8-G | 59.01 | 81.01 | 1.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0-e2d78c4a.ckpt) | +| squeezenet1_0 | GPUx8-G | 58.83 | 81.08 | 1.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_gpu.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0_gpu-685f5941.ckpt) | +| squeezenet1_1 | D910x8-G | 58.44 | 80.84 | 1.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_1-da256d3a.ckpt) | +| squeezenet1_1 | GPUx8-G | 59.18 | 81.41 | 1.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.1_gpu.yaml) | 
[weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_1_gpu-0e33234a.ckpt) | +| swin_tiny | D910x8-G | 80.82 | 94.80 | 33.38 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swin/swin_tiny-0ff2f96d.ckpt) | +| swinv2_tiny_window8 | D910x8-G | 81.42 | 95.43 | 28.78 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-3ef8b787.ckpt) | +| vgg11 | D910x8-G | 71.86 | 90.50 | 132.86 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg11_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg11-ef31d161.ckpt) | +| vgg13 | D910x8-G | 72.87 | 91.02 | 133.04 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg13-da805e6e.ckpt) | +| vgg16 | D910x8-G | 74.61 | 91.87 | 138.35 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg16_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg16-95697531.ckpt) | +| vgg19 | D910x8-G | 75.21 | 92.56 | 143.66 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg19-bedee7b6.ckpt) | +| visformer_tiny | D910x8-G | 78.28 | 94.15 | 10.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny-daee0322.ckpt) | +| visformer_tiny_v2 | D910x8-G | 78.82 | 94.41 | 9.38 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_v2_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny_v2-6711a758.ckpt) | +| visformer_small | D910x8-G | 81.76 | 95.88 | 40.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_small-6c83b6db.ckpt) | +| visformer_small_v2 | D910x8-G | 82.17 | 95.90 | 23.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_small_v2_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_small_v2-63674ade.ckpt) | +| vit_b_32_224 | D910x8-G | 75.86 | 92.08 | 87.46 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vit/vit_b32_224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vit/vit_b_32_224-7553218f.ckpt) | +| vit_l_16_224 | D910x8-G | 76.34 | 92.79 | 303.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vit/vit_l16_224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vit/vit_l_16_224-f02b2487.ckpt) | +| vit_l_32_224 | D910x8-G | 73.71 | 90.92 | 305.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vit/vit_l32_224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vit/vit_l_32_224-3a961018.ckpt) | +| volo_d1 | D910x8-G | 82.59 | 95.99 | 27 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/volo/volo_d1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/volo/volo_d1-c7efada9.ckpt) | +| xception | D910x8-G | 79.01 | 94.25 | 22.91 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xception/xception_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xception/xception-2c1e711df.ckpt) | +| xcit_tiny_12_p16_224 | D910x8-G | 77.67 | 93.79 | 7.00 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-1b1c9301.ckpt) | + +#### 说明 + +- Context：训练环境（Training context），表示为{设备}x{数量}-{MS模式}。其中 MS 模式可以是 G（graph 静态图模式）或 F（pynative 动态图模式，搭配 ms function 加速）。例如，D910x8-G 表示使用 graph 模式在8块 Ascend 910 NPU 上进行训练。 +- Top-1 和 Top-5：在 ImageNet-1K 验证集上报告的准确率（Accuracy）。 diff --git a/docs/en/index.md b/docs/en/index.md index 5bf194b33..b2faee12f 100644 --- a/docs/en/index.md +++ b/docs/en/index.md @@ -22,11 +22,16 @@ hide: ## Introduction -MindCV is an open-source toolbox for computer vision research and development based on [MindSpore](https://www.mindspore.cn/en). It collects a series of classic and SoTA vision models, such as ResNet and SwinTransformer, along with their pre-trained weights and training strategies. SoTA methods such as auto augmentation are also provided for performance improvement. With the decoupled module design, it is easy to apply or adapt MindCV to your own CV tasks. +MindCV is an open-source toolbox for computer vision research and development based +on [MindSpore](https://www.mindspore.cn/en). It collects a series of classic and SoTA vision models, such as ResNet and +SwinTransformer, along with their pre-trained weights and training strategies. SoTA methods such as auto augmentation +are also provided for performance improvement. With the decoupled module design, it is easy to apply or adapt MindCV to +your own CV tasks. ### Major Features -- **Easy-to-Use.** MindCV decomposes the vision framework into various configurable components. It is easy to customize your data pipeline, models, and learning pipeline with MindCV: +- **Easy-to-Use.** MindCV decomposes the vision framework into various configurable components. It is easy to customize + your data pipeline, models, and learning pipeline with MindCV: ```pycon >>> import mindcv @@ -36,22 +41,27 @@ MindCV is an open-source toolbox for computer vision research and development ba >>> network = mindcv.create_model('resnet50', pretrained=True) ``` - Users can customize and launch their transfer learning or training task in one command line. + Users can customize and launch their transfer learning or training task in one command line. ```shell # transfer learning in one command line python train.py --model=swin_tiny --pretrained --opt=adamw --lr=0.001 --data_dir=/path/to/data ``` -- **State-of-The-Art.** MindCV provides various CNN-based and Transformer-based vision models including SwinTransformer. Their pretrained weights and performance reports are provided to help users select and reuse the right model: +- **State-of-The-Art.** MindCV provides various CNN-based and Transformer-based vision models including SwinTransformer. + Their pretrained weights and performance reports are provided to help users select and reuse the right model: -- **Flexibility and efficiency.** MindCV is built on MindSpore which is an efficient DL framework that can be run on different hardware platforms (GPU/CPU/Ascend). It supports both graph mode for high efficiency and pynative mode for flexibility. 
+ +- **Flexibility and efficiency.** MindCV is built on MindSpore which is an efficient DL framework that can be run on + different hardware platforms (GPU/CPU/Ascend). It supports both graph mode for high efficiency and pynative mode for + flexibility. ## Model Zoo -The performance of the models trained with MindCV is summarized in [here](./modelzoo.md), where the training recipes and weights are both available. +The performance of the models trained with MindCV is summarized in [here](./modelzoo.md), where the training recipes and +weights are both available. -Model introduction and training details can be viewed in each sub-folder under [configs](https://github.com/mindspore-lab/mindcv/tree/main/configs). +Model introduction and training details can be viewed in each sub-folder +under [configs](https://github.com/mindspore-lab/mindcv/tree/main/configs). ## Installation @@ -61,7 +71,8 @@ See [Installation](./installation.md) for details. ### Hands-on Tutorial -To get started with MindCV, please see the [Quick Start](./tutorials/quick_start.md), which will give you a quick tour of each key component and the train/validate/predict pipelines. +To get started with MindCV, please see the [Quick Start](./tutorials/quick_start.md), which will give you a quick tour +of each key component and the train/validate/predict pipelines. Below are a few code snippets for your taste. @@ -96,7 +107,8 @@ Below are a few code snippets for your taste. ### Training -It is easy to train your model on a standard or customized dataset using `train.py`, where the training strategy (e.g., augmentation, LR scheduling) can be configured with external arguments or a yaml config file. +It is easy to train your model on a standard or customized dataset using `train.py`, where the training strategy (e.g., +augmentation, LR scheduling) can be configured with external arguments or a yaml config file. - Standalone Training ```shell python train.py --model=resnet50 --dataset=cifar10 --dataset_download ``` - Above is an example of training ResNet50 on CIFAR10 dataset on a CPU/GPU/Ascend device + Above is an example of training ResNet50 on the CIFAR10 dataset on a CPU/GPU/Ascend device. - Distributed Training - For large datasets like ImageNet, it is necessary to do training in distributed mode on multiple devices. This can be achieved with `mpirun` and parallel features supported by MindSpore. + For large datasets like ImageNet, it is necessary to do training in distributed mode on multiple devices. This can be + achieved with `mpirun` and parallel features supported by MindSpore. ```shell # distributed training # assume you have 4 GPUs/NPUs mpirun -n 4 python train.py --distribute \ --model=densenet121 --dataset=imagenet --data_dir=/path/to/imagenet ``` - > Notes: If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`. + > Notes: If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`. - Detailed parameter definitions can be seen in `config.py` and checked by running `python train.py --help'. + Detailed parameter definitions can be seen in `config.py` and checked by running `python train.py --help`. - To resume training, please set the `--ckpt_path` and `--ckpt_save_dir` arguments. 

 - Config and Training Strategy

-    You can configure your model and other components either by specifying external parameters or by writing a yaml config file. Here is an example of training using a preset yaml file.
+    You can configure your model and other components either by specifying external parameters or by writing a yaml config
+    file. Here is an example of training using a preset yaml file.

     ```shell
     mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet_1.0_gpu.yaml
     ```

-    !!! tip "Pre-defined Training Strategies"
-    We provide more than 20 training recipes that achieve SoTA results on ImageNet currently.
-    Please look into the [`configs`](https://github.com/mindspore-lab/mindcv/tree/main/configs) folder for details.
-    Please feel free to adapt these training strategies to your own model for performance improvement, which can be easily done by modifying the yaml file.
+    !!! tip "Pre-defined Training Strategies"
+        We provide more than 20 training recipes that achieve SoTA results on ImageNet currently.
+        Please look into the [`configs`](https://github.com/mindspore-lab/mindcv/tree/main/configs) folder for details.
+        Please feel free to adapt these training strategies to your own model for performance improvement, which can be
+        easily done by modifying the yaml file.

 - Train on ModelArts/OpenI Platform

-    To run training on the [ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html) or [OpenI](https://openi.pcl.ac.cn/) cloud platform:
+    To run training on the [ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html)
+    or [OpenI](https://openi.pcl.ac.cn/) cloud platform:

     ```text
     1. Create a new training task on the cloud platform.
-    2. Add the parameter `config` and specify the path to the yaml config file on the website UI interface.
-    3. Add the parameter `enable_modelarts` and set True on the website UI interface.
+    2. Add the parameter 'config' and specify the path to the yaml config file on the website UI interface.
+    3. Add the parameter 'enable_modelarts' and set True on the website UI interface.
     4. Fill in other blanks on the website and launch the training task.
     ```

@@ -191,7 +208,7 @@ We provide the following jupyter notebook tutorials to help users learn to use M
 - [Learn about configs](./tutorials/configuration.md)
 - [Inference with a pretrained model](./tutorials/inference.md)
 - [Finetune a pretrained model on custom datasets](./tutorials/finetune.md)
-- [Customize your model]() //coming soon
+- [Customize your model](./how_to_guides/write_a_new_model.md)
 - [Optimizing performance for vision transformer]() //coming soon
 - [Deployment demo](./tutorials/deployment.md)

@@ -245,15 +262,18 @@ We provide the following jupyter notebook tutorials to help users learn to use M

 We appreciate all kinds of contributions including issues and PRs to make MindCV better.

 Please refer to [CONTRIBUTING](./notes/contributing.md) for the contributing guideline.
-Please follow the [Model Template and Guideline](./how_to_guides/write_a_new_model.md) for contributing a model that fits the overall interface :) +Please follow the [Model Template and Guideline](./how_to_guides/write_a_new_model.md) for contributing a model that +fits the overall interface :) ## License -This project follows the [Apache License 2.0](https://github.com/mindspore-lab/mindcv/blob/main/LICENSE.md) open-source license. +This project follows the [Apache License 2.0](https://github.com/mindspore-lab/mindcv/blob/main/LICENSE.md) open-source +license. ## Acknowledgement -MindCV is an open-source project jointly developed by the MindSpore team, Xidian University, and Xi'an Jiaotong University. +MindCV is an open-source project jointly developed by the MindSpore team, Xidian University, and Xi'an Jiaotong +University. Sincere thanks to all participating researchers and developers for their hard work on this project. We also acknowledge the computing resources provided by [OpenI](https://openi.pcl.ac.cn/). diff --git a/docs/en/reference/models.md b/docs/en/reference/models.md new file mode 100644 index 000000000..98041f1c1 --- /dev/null +++ b/docs/en/reference/models.md @@ -0,0 +1,256 @@ +# Models + + +## Create Model + +### ::: mindcv.models.model_factory.create_model + + +## bit + +### ::: mindcv.models.bit + + +## cait + +### ::: mindcv.models.cait + + +## cmt + +### ::: mindcv.models.cmt + + +## coat + +### ::: mindcv.models.coat + + +## convit + +### ::: mindcv.models.convit + + +## convnext + +### ::: mindcv.models.convnext + + +## crossvit + +### ::: mindcv.models.crossvit + + +## densenet + +### ::: mindcv.models.densenet + + +## dpn + +### ::: mindcv.models.dpn + + +## edgenext + +### ::: mindcv.models.edgenext + + +## efficientnet + +### ::: mindcv.models.efficientnet + + +## features + +### ::: mindcv.models.features + + +## ghostnet + +### ::: mindcv.models.ghostnet + + +## halonet + +### ::: mindcv.models.halonet + + +## hrnet + +### ::: mindcv.models.hrnet + + +## inceptionv3 + +### ::: mindcv.models.inceptionv3 + + +## inceptionv4 + +### ::: mindcv.models.inceptionv4 + + +## mae + +### ::: mindcv.models.mae + + +## mixnet + +### ::: mindcv.models.mixnet + + +## mlpmixer + +### ::: mindcv.models.mlpmixer + + +## mnasnet + +### ::: mindcv.models.mnasnet + + +## mobilenetv1 + +### ::: mindcv.models.mobilenetv1 + + +## mobilenetv2 + +### ::: mindcv.models.mobilenetv2 + + +## mobilenetv3 + +### ::: mindcv.models.mobilenetv3 + + +## mobilevit + +### ::: mindcv.models.mobilevit + + +## nasnet + +### ::: mindcv.models.nasnet + + +## pit + +### ::: mindcv.models.pit + + +## poolformer + +### ::: mindcv.models.poolformer + + +## pvt + +### ::: mindcv.models.pvt + + +## pvtv2 + +### ::: mindcv.models.pvtv2 + + +## regnet + +### ::: mindcv.models.regnet + + +## repmlp + +### ::: mindcv.models.repmlp + + +## repvgg + +### ::: mindcv.models.repvgg + + +## res2net + +### ::: mindcv.models.res2net + + +## resnest + +### ::: mindcv.models.resnest + + +## resnet + +### ::: mindcv.models.resnet + + +## resnetv2 + +### ::: mindcv.models.resnetv2 + + +## rexnet + +### ::: mindcv.models.rexnet + + +## senet + +### ::: mindcv.models.senet + + +## shufflenetv1 + +### ::: mindcv.models.shufflenetv1 + + +## shufflenetv2 + +### ::: mindcv.models.shufflenetv2 + + +## sknet + +### ::: mindcv.models.sknet + + +## squeezenet + +### ::: mindcv.models.squeezenet + + +## swintransformer + +### ::: mindcv.models.swintransformer + + +## swintransformerv2 + +### ::: mindcv.models.swintransformerv2 + + 
+## vgg
+
+### ::: mindcv.models.vgg
+
+
+## visformer
+
+### ::: mindcv.models.visformer
+
+
+## vit
+
+### ::: mindcv.models.vit
+
+
+## volo
+
+### ::: mindcv.models.volo
+
+
+## xcit
+
+### ::: mindcv.models.xcit
diff --git a/docs/gen_ref_pages.py b/docs/gen_ref_pages.py
index 9eeb4f9d3..10b01f4d8 100644
--- a/docs/gen_ref_pages.py
+++ b/docs/gen_ref_pages.py
@@ -36,9 +36,7 @@ def _gen_page(lang):
         try:
             print(f"\n\n## {parts[-1]}", file=fd)
             identifier = ".".join(parts)  # eg: mindcv.models.resnet
-            mod = importlib.import_module(identifier)
-            for mem in sorted(set(mod.__all__)):
-                print(f"\n### ::: {identifier}.{mem}", file=fd)
+            print(f"\n### ::: {identifier}", file=fd)  # one mkdocstrings directive per module instead of per member
         except Exception as err:
             _logger.warning(f"Cannot generate reference of {identifier}, error: {err}.")

@@ -57,3 +55,7 @@ def on_startup(command, dirty):
 def on_shutdown():
     for lang in _langs:
         _del_page(lang)
+
+
+if __name__ == '__main__':
+    _gen_page('en')  # allow running this hook script directly to regenerate the English reference pages
diff --git a/docs/zh/installation.md b/docs/zh/installation.md
index 92d291d87..cba3d43f9 100644
--- a/docs/zh/installation.md
+++ b/docs/zh/installation.md
@@ -53,24 +53,30 @@ MindCV被发布为一个[Python包]并能够通过`pip`进行安装。我们推

 我们强烈推荐您通过[官方指引](https://www.mindspore.cn/install)来安装[MindSpore]。

 [Python包]: https://pypi.org/project/mindcv/
+
 [虚拟环境]: https://realpython.com/what-is-pip/#using-pip-in-a-python-virtual-environment
+
 [MindSpore]: https://www.mindspore.cn/
+
 [OpenMPI]: https://www.open-mpi.org/
+
 [NumPy]: https://numpy.org/
+
 [PyYAML]: https://pyyaml.org/
+
 [tqdm]: https://tqdm.github.io/
-[使用Python的pip来管理您的项目的依赖关系]: https://realpython.com/what-is-pip/
+[使用Python的pip来管理您的项目的依赖关系]: https://realpython.com/what-is-pip/

 ## 源码安装 (未经测试版本)

-### from VSC
+### VCS源码安装

 ```shell
 pip install git+https://github.com/mindspore-lab/mindcv.git
 ```

-### from local src
+### 本地源码安装

 !!!
tip diff --git a/docs/zh/modelzoo.md b/docs/zh/modelzoo.md index 21936b0bf..84d91771b 100644 --- a/docs/zh/modelzoo.md +++ b/docs/zh/modelzoo.md @@ -6,4 +6,4 @@ hide: # 模型仓库 -{% include-markdown "../../benchmark_results.md" %} +{% include-markdown "../../benchmark_results_CN.md" %} diff --git a/docs/zh/notes/contributing.md b/docs/zh/notes/contributing.md index b3865f3ec..93792ae86 100644 --- a/docs/zh/notes/contributing.md +++ b/docs/zh/notes/contributing.md @@ -1 +1 @@ -{% include-markdown "../../../CONTRIBUTING.md" %} +{% include-markdown "../../../CONTRIBUTING_CN.md" %} diff --git a/docs/zh/reference/models.md b/docs/zh/reference/models.md new file mode 100644 index 000000000..98041f1c1 --- /dev/null +++ b/docs/zh/reference/models.md @@ -0,0 +1,256 @@ +# Models + + +## Create Model + +### ::: mindcv.models.model_factory.create_model + + +## bit + +### ::: mindcv.models.bit + + +## cait + +### ::: mindcv.models.cait + + +## cmt + +### ::: mindcv.models.cmt + + +## coat + +### ::: mindcv.models.coat + + +## convit + +### ::: mindcv.models.convit + + +## convnext + +### ::: mindcv.models.convnext + + +## crossvit + +### ::: mindcv.models.crossvit + + +## densenet + +### ::: mindcv.models.densenet + + +## dpn + +### ::: mindcv.models.dpn + + +## edgenext + +### ::: mindcv.models.edgenext + + +## efficientnet + +### ::: mindcv.models.efficientnet + + +## features + +### ::: mindcv.models.features + + +## ghostnet + +### ::: mindcv.models.ghostnet + + +## halonet + +### ::: mindcv.models.halonet + + +## hrnet + +### ::: mindcv.models.hrnet + + +## inceptionv3 + +### ::: mindcv.models.inceptionv3 + + +## inceptionv4 + +### ::: mindcv.models.inceptionv4 + + +## mae + +### ::: mindcv.models.mae + + +## mixnet + +### ::: mindcv.models.mixnet + + +## mlpmixer + +### ::: mindcv.models.mlpmixer + + +## mnasnet + +### ::: mindcv.models.mnasnet + + +## mobilenetv1 + +### ::: mindcv.models.mobilenetv1 + + +## mobilenetv2 + +### ::: mindcv.models.mobilenetv2 + + +## mobilenetv3 + +### ::: mindcv.models.mobilenetv3 + + +## mobilevit + +### ::: mindcv.models.mobilevit + + +## nasnet + +### ::: mindcv.models.nasnet + + +## pit + +### ::: mindcv.models.pit + + +## poolformer + +### ::: mindcv.models.poolformer + + +## pvt + +### ::: mindcv.models.pvt + + +## pvtv2 + +### ::: mindcv.models.pvtv2 + + +## regnet + +### ::: mindcv.models.regnet + + +## repmlp + +### ::: mindcv.models.repmlp + + +## repvgg + +### ::: mindcv.models.repvgg + + +## res2net + +### ::: mindcv.models.res2net + + +## resnest + +### ::: mindcv.models.resnest + + +## resnet + +### ::: mindcv.models.resnet + + +## resnetv2 + +### ::: mindcv.models.resnetv2 + + +## rexnet + +### ::: mindcv.models.rexnet + + +## senet + +### ::: mindcv.models.senet + + +## shufflenetv1 + +### ::: mindcv.models.shufflenetv1 + + +## shufflenetv2 + +### ::: mindcv.models.shufflenetv2 + + +## sknet + +### ::: mindcv.models.sknet + + +## squeezenet + +### ::: mindcv.models.squeezenet + + +## swintransformer + +### ::: mindcv.models.swintransformer + + +## swintransformerv2 + +### ::: mindcv.models.swintransformerv2 + + +## vgg + +### ::: mindcv.models.vgg + + +## visformer + +### ::: mindcv.models.visformer + + +## vit + +### ::: mindcv.models.vit + + +## volo + +### ::: mindcv.models.volo + + +## xcit + +### ::: mindcv.models.xcit diff --git a/mkdocs.yml b/mkdocs.yml index 02fe96dac..07314a85f 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -8,13 +8,13 @@ nav: - Home: index.md - Installation: installation.md - Model Zoo: modelzoo.md - - Tutorials: # Learning 
Oriented + - Tutorials: # Learning Oriented - Quick Start: tutorials/quick_start.md - Configuration: tutorials/configuration.md - Finetune: tutorials/finetune.md - Inference: tutorials/inference.md - Deployment: tutorials/deployment.md - - How-To Guides: # Problem Oriented + - How-To Guides: # Problem Oriented - Write A New Model: how_to_guides/write_a_new_model.md - Multi-Scale Feature Extraction: how_to_guides/feature_extraction.md - Fine-tune with A Custom Dataset: how_to_guides/finetune_with_a_custom_dataset.md @@ -139,8 +139,13 @@ plugins: Finetune: 微调 Inference: 推理 Deployment: 部署 + How-To Guides: 操作指南 + Write A New Model: 编写一个新模型 + Multi-Scale Feature Extraction: 多尺度特征提取 + Fine-tune with A Custom Dataset: 自定义数据集的模型微调指南 Notes: 说明 Change Log: 更新日志 + Contributing: 贡献 Code of Conduct: 行为准则 FAQ: 常见问题