From 2045f7d3800253a6a2e17ac0a8982b60bea36b67 Mon Sep 17 00:00:00 2001 From: Junwon Lee <63298243+cpprhtn@users.noreply.github.com> Date: Fri, 7 Jul 2023 16:51:53 +0900 Subject: [PATCH 01/15] =?UTF-8?q?fix:=20=EC=9B=B9=ED=8E=98=EC=9D=B4?= =?UTF-8?q?=EC=A7=80=EC=97=90=EC=84=9C=20table=EA=B0=80=20=EA=B9=A8?= =?UTF-8?q?=EC=A7=80=EB=8A=94=20=ED=98=84=EC=83=81?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- pytorch_vision_wide_resnet.md | 1 + 1 file changed, 1 insertion(+) diff --git a/pytorch_vision_wide_resnet.md b/pytorch_vision_wide_resnet.md index d0e7e76..bb5513f 100644 --- a/pytorch_vision_wide_resnet.md +++ b/pytorch_vision_wide_resnet.md @@ -91,6 +91,7 @@ Wide Residual 네트워크는 ResNet에 비해 단순히 채널 수가 증가했 `wide_resnet50_2` 및 `wide_resnet101_2` 모델은 [Warm Restarts가 있는 SGD(SGDR)](https://arxiv.org/abs/1608.03983)를 사용하여 혼합 정밀도(Mixed Precision) 방식으로 학습되었습니다. 체크 포인트는 크기가 작은 경우 절반 정밀도(batch norm 제외)의 가중치를 가지며 FP32 모델에서도 사용할 수 있습니다. + | Model structure | Top-1 error | Top-5 error | # parameters | | ----------------- | :---------: | :---------: | :----------: | | wide_resnet50_2 | 21.49 | 5.91 | 68.9M | From 757ed043069d077936a502092a9767cc2ed60b0e Mon Sep 17 00:00:00 2001 From: Junwon Lee <63298243+cpprhtn@users.noreply.github.com> Date: Fri, 7 Jul 2023 16:53:24 +0900 Subject: [PATCH 02/15] =?UTF-8?q?fix:=20=EB=A7=81=ED=81=AC=20=EC=97=B0?= =?UTF-8?q?=EA=B2=B0=EC=9D=B4=20=EC=A0=9C=EB=8C=80=EB=A1=9C=20=EB=90=98?= =?UTF-8?q?=EC=96=B4=EC=9E=88=EC=A7=80=EC=95=8A=EB=8A=94=EB=B6=80=EB=B6=84?= =?UTF-8?q?=20=EC=88=98=EC=A0=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- pytorch_vision_resnext.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pytorch_vision_resnext.md b/pytorch_vision_resnext.md index 3daadff..5417e4a 100644 --- a/pytorch_vision_resnext.md +++ b/pytorch_vision_resnext.md @@ -85,7 +85,7 @@ for i in range(top5_prob.size(0)): ### 모델 설명 -Resnext 모델은 논문 [Aggregated Residual Transformations for Deep Neural Networks]에서 제안되었습니다. (https://arxiv.org/abs/1611.05431). +Resnext 모델은 논문 ["Aggregated Residual Transformations for Deep Neural Networks"](https://arxiv.org/abs/1611.05431) 에서 제안되었습니다. 여기서는 50개의 계층과 101개의 계층을 가지는 2개의 resnet 모델을 제공하고 있습니다. resnet50과 resnext50의 아키텍처 차이는 논문의 Table 1을 참고하십시오. ImageNet 데이터셋에 대한 사전훈련된 모델의 에러(성능)은 아래 표와 같습니다. From 77b3cd77a69756b9495e20211c00dfce3fa81533 Mon Sep 17 00:00:00 2001 From: Junwon Lee <63298243+cpprhtn@users.noreply.github.com> Date: Fri, 7 Jul 2023 16:57:08 +0900 Subject: [PATCH 03/15] =?UTF-8?q?fix:=20=EB=8B=A8=EC=96=B4=20=EC=98=A4?= =?UTF-8?q?=ED=83=80=20=EC=88=98=EC=A0=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- pytorch_vision_vgg.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pytorch_vision_vgg.md b/pytorch_vision_vgg.md index 5855d03..c379f3a 100644 --- a/pytorch_vision_vgg.md +++ b/pytorch_vision_vgg.md @@ -90,12 +90,12 @@ for i in range(top5_prob.size(0)): ### 모델 설명 -각 구성 및 bachnorm 버전에 대해서 [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)에서 제안한 모델에 대한 구현이 있습니다. +각 구성 및 BatchNorm 버전에 대해서 [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)에서 제안한 모델에 대한 구현이 있습니다. 예를 들어, 논문에 제시된 구성 `A`는 `vgg11`, `B`는 `vgg13`, `D`는 `vgg16`, `E`는 `vgg19`입니다. batchnorm 버전은 `_bn`이 접미사로 붙어있습니다. -사전 훈련된 모델이 있는 imagenet 데이터 세트의 1-crop 오류율은 아래에 나열되어 있습니다. 
+사전 훈련된 모델이 있는 ImageNet 데이터 세트의 1-crop 오류율은 아래에 나열되어 있습니다.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |

From 468fd663b7633777600bf7134167893a804ddcfe Mon Sep 17 00:00:00 2001
From: Junwon Lee <63298243+cpprhtn@users.noreply.github.com>
Date: Fri, 7 Jul 2023 17:03:05 +0900
Subject: [PATCH 04/15] Update pytorch_vision_vgg.md

---
 pytorch_vision_vgg.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pytorch_vision_vgg.md b/pytorch_vision_vgg.md
index c379f3a..56b5003 100644
--- a/pytorch_vision_vgg.md
+++ b/pytorch_vision_vgg.md
@@ -90,7 +90,7 @@ for i in range(top5_prob.size(0)):
 
 ### 모델 설명
 
-각 구성 및 BatchNorm 버전에 대해서 [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)에서 제안한 모델에 대한 구현이 있습니다.
+각 구성 및 batchnorm 버전에 대해서 [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)에서 제안한 모델에 대한 구현이 있습니다.
 
 예를 들어, 논문에 제시된 구성 `A`는 `vgg11`, `B`는 `vgg13`, `D`는 `vgg16`, `E`는 `vgg19`입니다. batchnorm 버전은 `_bn`이 접미사로 붙어있습니다.

From 4f001158cd2aaf5a0bb6d5554db1879a5700a31c Mon Sep 17 00:00:00 2001
From: Junwon Lee
Date: Fri, 18 Aug 2023 00:36:37 +0900
Subject: [PATCH 05/15] =?UTF-8?q?fix:=20=EC=83=9D=EB=9E=B5=EB=90=9C=20?=
 =?UTF-8?q?=EC=9D=B8=EC=9A=A9=EB=A7=81=ED=81=AC=EC=99=80=20=EC=9D=B8?=
 =?UTF-8?q?=EC=9A=A9=EB=AC=B8=20=EC=B6=94=EA=B0=80?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 hustvl_yolop.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/hustvl_yolop.md b/hustvl_yolop.md
index 7d677d0..2fe373f 100644
--- a/hustvl_yolop.md
+++ b/hustvl_yolop.md
@@ -127,5 +127,15 @@ det_out, da_seg_out,ll_seg_out = model(img)
 
 See for more detail in [github code](https://github.com/hustvl/YOLOP) and [arxiv paper](https://arxiv.org/abs/2108.11250).
 
-본 논문과 코드가 여러분의 연구에 유용하다고 판단되면, GitHub star를 주는 것과 본 논문을 인용하는 것을 고려해 주세요:
-
+본 [논문](https://arxiv.org/abs/2108.11250) 과 [코드](https://github.com/hustvl/YOLOP) 가 여러분의 연구에 유용하다고 판단되면, GitHub star를 주는 것과 본 논문을 인용하는 것을 고려해 주세요:
+
+```BibTeX
+@article{wu2022yolop,
+  title={Yolop: You only look once for panoptic driving perception},
+  author={Wu, Dong and Liao, Man-Wen and Zhang, Wei-Tian and Wang, Xing-Gang and Bai, Xiang and Cheng, Wen-Qing and Liu, Wen-Yu},
+  journal={Machine Intelligence Research},
+  pages={1--13},
+  year={2022},
+  publisher={Springer}
+}
+```
\ No newline at end of file

From 6fd55bd985fd5924c8aab5825f6ed657ef77ecdf Mon Sep 17 00:00:00 2001
From: Junwon Lee
Date: Fri, 18 Aug 2023 00:37:57 +0900
Subject: [PATCH 06/15] =?UTF-8?q?fix:=20=EB=AF=B8=EB=B2=88=EC=97=AD=20?=
 =?UTF-8?q?=EB=AC=B8=EC=9E=A5=20=EC=A0=9C=EA=B1=B0?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 hustvl_yolop.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hustvl_yolop.md b/hustvl_yolop.md
index 2fe373f..3332cf0 100644
--- a/hustvl_yolop.md
+++ b/hustvl_yolop.md
@@ -125,8 +125,6 @@ det_out, da_seg_out,ll_seg_out = model(img)
 
 ### 인용(Citation)
 
-See for more detail in [github code](https://github.com/hustvl/YOLOP) and [arxiv paper](https://arxiv.org/abs/2108.11250).
- 본 [논문](https://arxiv.org/abs/2108.11250) 과 [코드](https://github.com/hustvl/YOLOP) 가 여러분의 연구에 유용하다고 판단되면, GitHub star를 주는 것과 본 논문을 인용하는 것을 고려해 주세요: ```BibTeX From 3ae9a8b2a9eed4b667df2a63a64ef7b62285564f Mon Sep 17 00:00:00 2001 From: Junwon Lee Date: Fri, 18 Aug 2023 00:41:03 +0900 Subject: [PATCH 07/15] =?UTF-8?q?fix:=20=EC=9B=B9=ED=8E=98=EC=9D=B4?= =?UTF-8?q?=EC=A7=80=EC=97=90=EC=84=9C=20table=EA=B0=80=20=EA=B9=A8?= =?UTF-8?q?=EC=A7=80=EB=8A=94=20=ED=98=84=EC=83=81?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- facebookresearch_pytorchvideo_resnet.md | 1 + 1 file changed, 1 insertion(+) diff --git a/facebookresearch_pytorchvideo_resnet.md b/facebookresearch_pytorchvideo_resnet.md index 46b24a2..f9d324e 100644 --- a/facebookresearch_pytorchvideo_resnet.md +++ b/facebookresearch_pytorchvideo_resnet.md @@ -159,6 +159,7 @@ print("Top 5 predicted labels: %s" % ", ".join(pred_class_names)) ### 모델 설명 모델 아키텍처는 Kinetics 데이터셋의 8x8 설정을 사용하여 사전 훈련된 가중치가 있는 참고문헌 [1]을 기반으로 합니다. + | arch | depth | frame length x sample rate | top 1 | top 5 | Flops (G) | Params (M) | | --------------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | | Slow | R50 | 8x8 | 74.58 | 91.63 | 54.52 | 32.45 | From 281628460bf6293432700987f1b2ba76cb4faac8 Mon Sep 17 00:00:00 2001 From: Junwon Lee Date: Fri, 18 Aug 2023 00:45:43 +0900 Subject: [PATCH 08/15] =?UTF-8?q?fix:=20=EB=AC=B8=EC=9E=A5=EA=B0=9C?= =?UTF-8?q?=EC=84=A0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- facebookresearch_pytorchvideo_resnet.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/facebookresearch_pytorchvideo_resnet.md b/facebookresearch_pytorchvideo_resnet.md index f9d324e..81b7510 100644 --- a/facebookresearch_pytorchvideo_resnet.md +++ b/facebookresearch_pytorchvideo_resnet.md @@ -18,7 +18,7 @@ demo-model-link: https://huggingface.co/spaces/pytorch/3D_ResNet ### 사용 예시 -#### Imports +#### 불러오기 모델 불러오기: @@ -58,8 +58,7 @@ model = model.eval() model = model.to(device) ``` -토치 허브 모델이 훈련된 Kinetics 400 데이터셋에 대해 ID에서의 레이블과 맞는 정보를 다운로드합니다. 이는 예측된 클래스 ID에서 카테고리 레이블 이름을 가져오는데 사용됩니다. - +토치 허브 모델이 훈련된 Kinetics 400 데이터셋을 위한 id-레이블 매핑 정보를 다운로드합니다. 이는 예측된 클래스 id에 카테고리 레이블 이름을 붙이는 데 사용됩니다. ```python json_url = "https://dl.fbaipublicfiles.com/pyslowfast/dataset/class_names/kinetics_classnames.json" json_filename = "kinetics_classnames.json" From f1f37cfcfca50a6ac532d43717dd456ba9d8abc7 Mon Sep 17 00:00:00 2001 From: Junwon Lee Date: Fri, 18 Aug 2023 00:53:37 +0900 Subject: [PATCH 09/15] =?UTF-8?q?fix:=20=EB=8B=A4=EB=A5=B8=20facebookresea?= =?UTF-8?q?rch=20=ED=8C=8C=EC=9D=BC=EA=B3=BC=20=EC=96=91=EC=8B=9D=20?= =?UTF-8?q?=EB=8F=99=EC=9D=BC=ED=99=94?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- facebookresearch_pytorchvideo_resnet.md | 5 +++-- facebookresearch_pytorchvideo_x3d.md | 2 +- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/facebookresearch_pytorchvideo_resnet.md b/facebookresearch_pytorchvideo_resnet.md index 81b7510..627f3d3 100644 --- a/facebookresearch_pytorchvideo_resnet.md +++ b/facebookresearch_pytorchvideo_resnet.md @@ -47,7 +47,7 @@ from pytorchvideo.transforms import ( ) ``` -#### 환경설정 +#### 셋업 모델을 평가 모드로 설정하고 원하는 디바이스 방식을 선택합니다. @@ -58,7 +58,8 @@ model = model.eval() model = model.to(device) ``` -토치 허브 모델이 훈련된 Kinetics 400 데이터셋을 위한 id-레이블 매핑 정보를 다운로드합니다. 이는 예측된 클래스 id에 카테고리 레이블 이름을 붙이는 데 사용됩니다. 
+토치 허브 모델이 훈련된 Kinetics 400 데이터셋에 대해 ID에서의 레이블 매핑 정보를 다운로드합니다. 이는 예측된 클래스 ID에서 카테고리 레이블 이름을 가져오는데 사용됩니다. + ```python json_url = "https://dl.fbaipublicfiles.com/pyslowfast/dataset/class_names/kinetics_classnames.json" json_filename = "kinetics_classnames.json" diff --git a/facebookresearch_pytorchvideo_x3d.md b/facebookresearch_pytorchvideo_x3d.md index cc43463..d0585ca 100644 --- a/facebookresearch_pytorchvideo_x3d.md +++ b/facebookresearch_pytorchvideo_x3d.md @@ -18,7 +18,7 @@ demo-model-link: https://huggingface.co/spaces/pytorch/X3D ### 사용 예시 -#### Imports +#### 불러오기 모델 불러오기: From c8ddd2cb34855e06cbbad7b76fb4939cb31e5909 Mon Sep 17 00:00:00 2001 From: Junwon Lee Date: Fri, 18 Aug 2023 00:55:39 +0900 Subject: [PATCH 10/15] =?UTF-8?q?fix:=20=EB=8B=A4=EB=A5=B8=20facebookresea?= =?UTF-8?q?rch=20=ED=8C=8C=EC=9D=BC=EA=B3=BC=20=EC=96=91=EC=8B=9D=20?= =?UTF-8?q?=EB=8F=99=EC=9D=BC=ED=99=94?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- facebookresearch_pytorchvideo_slowfast.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/facebookresearch_pytorchvideo_slowfast.md b/facebookresearch_pytorchvideo_slowfast.md index e971993..a51e031 100644 --- a/facebookresearch_pytorchvideo_slowfast.md +++ b/facebookresearch_pytorchvideo_slowfast.md @@ -78,7 +78,7 @@ for k, v in kinetics_classnames.items(): kinetics_id_to_classname[v] = str(k).replace('"', "") ``` -#### 입력 변환에 대한 정의 +#### 입력 형태에 대한 정의 ```python side_size = 256 From dfc2285ba8d918d3b89294d7546effdb7812ba6c Mon Sep 17 00:00:00 2001 From: Junwon Lee Date: Sat, 19 Aug 2023 14:55:50 +0900 Subject: [PATCH 11/15] =?UTF-8?q?summary=20=EB=B2=88=EC=97=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- facebookresearch_WSL-Images_resnext.md | 2 +- facebookresearch_pytorch-gan-zoo_pgan.md | 2 +- facebookresearch_pytorchvideo_resnet.md | 2 +- facebookresearch_pytorchvideo_slowfast.md | 2 +- facebookresearch_pytorchvideo_x3d.md | 2 +- ...bookresearch_semi-supervised-ImageNet1K-models_resnext.md | 2 +- huggingface_pytorch-transformers.md | 2 +- hustvl_yolop.md | 2 +- intelisl_midas_v2.md | 2 +- mateuszbuda_brain-segmentation-pytorch_unet.md | 2 +- nicolalandro_ntsnet-cub200_ntsnet.md | 2 +- nvidia_deeplearningexamples_efficientnet.md | 2 +- nvidia_deeplearningexamples_resnet50.md | 2 +- nvidia_deeplearningexamples_resnext.md | 2 +- nvidia_deeplearningexamples_ssd.md | 5 ++--- nvidia_deeplearningexamples_tacotron2.md | 2 +- pytorch_vision_alexnet.md | 2 +- 17 files changed, 18 insertions(+), 19 deletions(-) diff --git a/facebookresearch_WSL-Images_resnext.md b/facebookresearch_WSL-Images_resnext.md index b9a63b4..53716ac 100644 --- a/facebookresearch_WSL-Images_resnext.md +++ b/facebookresearch_WSL-Images_resnext.md @@ -3,7 +3,7 @@ layout: hub_detail background-class: hub-background body-class: hub title: ResNext WSL -summary: ResNext models trained with billion scale weakly-supervised data. +summary: 10억 규모의 약한 지도(weakly-supervised) 데이터셋을 사용한 ResNext 모델. 
 category: researchers
 image: wsl-image.png
 author: Facebook AI
diff --git a/facebookresearch_pytorch-gan-zoo_pgan.md b/facebookresearch_pytorch-gan-zoo_pgan.md
index 4e0abf5..0238fb5 100644
--- a/facebookresearch_pytorch-gan-zoo_pgan.md
+++ b/facebookresearch_pytorch-gan-zoo_pgan.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Progressive Growing of GANs (PGAN)
-summary: High-quality image generation of fashion, celebrity faces
+summary: 패션, 연예인 얼굴의 고품질 이미지 생성
 category: researchers
 image: pganlogo.png
 author: FAIR HDGAN
diff --git a/facebookresearch_pytorchvideo_resnet.md b/facebookresearch_pytorchvideo_resnet.md
index 627f3d3..71a3a9b 100644
--- a/facebookresearch_pytorchvideo_resnet.md
+++ b/facebookresearch_pytorchvideo_resnet.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: 3D ResNet
-summary: Resnet Style Video classification networks pretrained on the Kinetics 400 dataset
+summary: Kinetics 400 데이터셋에서 사전 학습된 Resnet 스타일 비디오 분류 네트워크
 image: slowfast.png
 author: FAIR PyTorchVideo
 tags: [vision]
diff --git a/facebookresearch_pytorchvideo_slowfast.md b/facebookresearch_pytorchvideo_slowfast.md
index a51e031..c021619 100644
--- a/facebookresearch_pytorchvideo_slowfast.md
+++ b/facebookresearch_pytorchvideo_slowfast.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: SlowFast
-summary: SlowFast networks pretrained on the Kinetics 400 dataset
+summary: Kinetics 400 데이터셋에서 사전 학습된 SlowFast 네트워크
 image: slowfast.png
 author: FAIR PyTorchVideo
 tags: [vision]
diff --git a/facebookresearch_pytorchvideo_x3d.md b/facebookresearch_pytorchvideo_x3d.md
index d0585ca..0c403c9 100644
--- a/facebookresearch_pytorchvideo_x3d.md
+++ b/facebookresearch_pytorchvideo_x3d.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: X3D
-summary: X3D networks pretrained on the Kinetics 400 dataset
+summary: Kinetics 400 데이터셋에서 사전 학습된 X3D 네트워크
 image: x3d.png
 author: FAIR PyTorchVideo
 tags: [vision]
diff --git a/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md b/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
index 2966b11..847b7a8 100644
--- a/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
+++ b/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Semi-supervised and semi-weakly supervised ImageNet Models
-summary: Billion scale semi-supervised learning for image classification 에서 제안된 ResNet, ResNext 모델
+summary: Billion scale semi-supervised learning for image classification 논문에서 제안된 ResNet, ResNext 모델
 category: researchers
 image: ssl-image.png
 author: Facebook AI
diff --git a/huggingface_pytorch-transformers.md b/huggingface_pytorch-transformers.md
index d338bf6..d960caa 100644
--- a/huggingface_pytorch-transformers.md
+++ b/huggingface_pytorch-transformers.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: PyTorch-Transformers
-summary: PyTorch implementations of popular NLP Transformers
+summary: 널리 사용되는 NLP Transformers의 PyTorch 구현
 category: researchers
 image: huggingface-logo.png
 author: HuggingFace Team
diff --git a/hustvl_yolop.md b/hustvl_yolop.md
index 3332cf0..e908052 100644
--- a/hustvl_yolop.md
+++ b/hustvl_yolop.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: YOLOP
-summary: YOLOP pretrained on the BDD100K dataset
+summary: BDD100K 데이터 세트에서 사전 훈련된 YOLOP
 image: yolop.png
 author: Hust Visual Learning Team
 tags: [vision]
diff --git a/intelisl_midas_v2.md b/intelisl_midas_v2.md
index dbc2fde..2e118d3 100644
--- a/intelisl_midas_v2.md
+++ b/intelisl_midas_v2.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: MiDaS
-summary: MiDaS models for computing relative depth from a single image.
+summary: 단일 이미지에서 상대적인 깊이를 계산하기 위한 MiDaS 모델.
 image: intel-logo.png
 author: Intel ISL
 tags: [vision]
diff --git a/mateuszbuda_brain-segmentation-pytorch_unet.md b/mateuszbuda_brain-segmentation-pytorch_unet.md
index 9442b86..3998433 100644
--- a/mateuszbuda_brain-segmentation-pytorch_unet.md
+++ b/mateuszbuda_brain-segmentation-pytorch_unet.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: U-Net for brain MRI
-summary: U-Net with batch normalization for biomedical image segmentation with pretrained weights for abnormality segmentation in brain MRI
+summary: 뇌 MRI 이상 분할을 위해 사전 훈련된 가중치를 사용한 의료 이미지 분할을 위한 배치 정규화를 사용하는 U-Net
 image: unet_tcga_cs_4944.png
 author: mateuszbuda
 tags: [vision]
diff --git a/nicolalandro_ntsnet-cub200_ntsnet.md b/nicolalandro_ntsnet-cub200_ntsnet.md
index 98fce0d..acd4b6d 100644
--- a/nicolalandro_ntsnet-cub200_ntsnet.md
+++ b/nicolalandro_ntsnet-cub200_ntsnet.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: ntsnet
-summary: classify birds using this fine-grained image classifier
+summary: fine-grained 이미지 분류기를 사용한 새 분류
 image: Cub200Dataset.png
 author: Moreno Caraffini and Nicola Landro
 tags: [vision]
diff --git a/nvidia_deeplearningexamples_efficientnet.md b/nvidia_deeplearningexamples_efficientnet.md
index c306b34..52810b2 100644
--- a/nvidia_deeplearningexamples_efficientnet.md
+++ b/nvidia_deeplearningexamples_efficientnet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: EfficientNet
-summary: EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, being an order-of-magnitude smaller and faster. Trained with mixed precision using Tensor Cores.
+summary: EfficientNets는 최첨단 정확도를 달성하는 이미지 분류 모델 계열로 크기가 작고 빠릅니다. 텐서 코어를 사용하여 혼합 정밀도로 훈련되었습니다.
 category: researchers
 image: nvidia_logo.png
 author: NVIDIA
diff --git a/nvidia_deeplearningexamples_resnet50.md b/nvidia_deeplearningexamples_resnet50.md
index c13848f..143d09d 100644
--- a/nvidia_deeplearningexamples_resnet50.md
+++ b/nvidia_deeplearningexamples_resnet50.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ResNet50
-summary: ResNet50 model trained with mixed precision using Tensor Cores.
+summary: 텐서 코어를 사용하여 혼합 정밀도로 훈련된 ResNet50 모델.
 category: researchers
 image: nvidia_logo.png
 author: NVIDIA
diff --git a/nvidia_deeplearningexamples_resnext.md b/nvidia_deeplearningexamples_resnext.md
index ea8d6f8..be01abc 100644
--- a/nvidia_deeplearningexamples_resnext.md
+++ b/nvidia_deeplearningexamples_resnext.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ResNeXt101
-summary: ResNet with bottleneck 3x3 Convolutions substituted by 3x3 Grouped Convolutions, trained with mixed precision using Tensor Cores.
+summary: 병목(bottleneck)의 3x3 합성곱을 3x3 그룹 합성곱(Grouped Convolution)으로 대체한 ResNet 모델로, 텐서 코어를 사용하여 혼합 정밀도로 훈련되었습니다.
category: researchers image: nvidia_logo.png author: NVIDIA diff --git a/nvidia_deeplearningexamples_ssd.md b/nvidia_deeplearningexamples_ssd.md index aa6dc8b..9c90aee 100644 --- a/nvidia_deeplearningexamples_ssd.md +++ b/nvidia_deeplearningexamples_ssd.md @@ -3,7 +3,7 @@ layout: hub_detail background-class: hub-background body-class: hub title: SSD -summary: Single Shot MultiBox Detector model for object detection +summary: 객체 탐지를 위한 Single Shot MultiBox Detector 모델 category: researchers image: nvidia_logo.png author: NVIDIA @@ -30,8 +30,7 @@ SSD300 모델은 "단일 심층 신경망을 사용하여 이미지에서 물체 * conv4_x의 모든 strides는 1x1로 설정됩니다. 백본 뒤에는 5개의 합성곱 레이어가 추가됩니다. 또한 합성곱 레이어 외에도 6개의 detection heads를 추가했습니다. -The backbone is followed by 5 additional convolutional layers. -In addition to the convolutional layers, we attached 6 detection heads: + * 첫 번째 detection head는 마지막 conv4_x 레이어에 연결됩니다. * 나머지 5개의 detection head는 추가되는 5개의 합성곱 레이어에 부착됩니다. diff --git a/nvidia_deeplearningexamples_tacotron2.md b/nvidia_deeplearningexamples_tacotron2.md index 4e82bd3..21a6e8e 100644 --- a/nvidia_deeplearningexamples_tacotron2.md +++ b/nvidia_deeplearningexamples_tacotron2.md @@ -3,7 +3,7 @@ layout: hub_detail background-class: hub-background body-class: hub title: Tacotron 2 -summary: The Tacotron 2 model for generating mel spectrograms from text +summary: 텍스트에서 멜 스펙트로그램(mel spectrogram)을 생성하는 Tacotron 2 모델 category: researchers image: nvidia_logo.png author: NVIDIA diff --git a/pytorch_vision_alexnet.md b/pytorch_vision_alexnet.md index 05e0fc1..b956e9c 100644 --- a/pytorch_vision_alexnet.md +++ b/pytorch_vision_alexnet.md @@ -3,7 +3,7 @@ layout: hub_detail background-class: hub-background body-class: hub title: AlexNet -summary: The 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner up. +summary: 2012년 ImageNet 우승자는 15.3%의 top-5 에러율을 달성하여 준우승자보다 10.8%P 이상 낮았습니다. 
 category: researchers
 image: alexnet2.png
 author: Pytorch Team

From f98b5b33b910cf2bc0c8143c48f7ede1026be473 Mon Sep 17 00:00:00 2001
From: Junwon Lee
Date: Wed, 30 Aug 2023 01:06:09 +0900
Subject: [PATCH 12/15] =?UTF-8?q?summary=20=EB=B2=88=EC=97=AD?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 nvidia_deeplearningexamples_waveglow.md | 2 +-
 pytorch_vision_deeplabv3_resnet101.md   | 2 +-
 pytorch_vision_densenet.md              | 2 +-
 pytorch_vision_fcn_resnet101.md         | 2 +-
 pytorch_vision_ghostnet.md              | 2 +-
 pytorch_vision_googlenet.md             | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/nvidia_deeplearningexamples_waveglow.md b/nvidia_deeplearningexamples_waveglow.md
index 65d3bf0..a1996ce 100644
--- a/nvidia_deeplearningexamples_waveglow.md
+++ b/nvidia_deeplearningexamples_waveglow.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: WaveGlow
-summary: WaveGlow model for generating speech from mel spectrograms (generated by Tacotron2)
+summary: (Tacotron2로 생성된) 멜 스펙트로그램으로부터 음성을 생성하기 위한 WaveGlow 모델
 category: researchers
 image: nvidia_logo.png
 author: NVIDIA
diff --git a/pytorch_vision_deeplabv3_resnet101.md b/pytorch_vision_deeplabv3_resnet101.md
index 4cffbb3..8fbe1c0 100644
--- a/pytorch_vision_deeplabv3_resnet101.md
+++ b/pytorch_vision_deeplabv3_resnet101.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Deeplabv3
-summary: DeepLabV3 models with ResNet-50, ResNet-101 and MobileNet-V3 backbones
+summary: ResNet-50, ResNet-101 또는 MobileNet-V3 백본이 포함된 DeepLabV3 모델
 category: researchers
 image: deeplab2.png
 author: Pytorch Team
diff --git a/pytorch_vision_densenet.md b/pytorch_vision_densenet.md
index 22cdc50..5bace88 100644
--- a/pytorch_vision_densenet.md
+++ b/pytorch_vision_densenet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Densenet
-summary: Dense Convolutional Network (DenseNet), connects each layer to every other layer in a feed-forward fashion.
+summary: DenseNet(Dense Convolutional Network)은 피드포워드 방식으로 각 계층을 다른 모든 계층에 연결합니다.
 category: researchers
 image: densenet1.png
 author: Pytorch Team
diff --git a/pytorch_vision_fcn_resnet101.md b/pytorch_vision_fcn_resnet101.md
index 9aeedee..fa81783 100644
--- a/pytorch_vision_fcn_resnet101.md
+++ b/pytorch_vision_fcn_resnet101.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: FCN
-summary: Fully-Convolutional Network model with ResNet-50 and ResNet-101 backbones
+summary: ResNet-50 및 ResNet-101 백본을 사용하는 완전 컨볼루션 네트워크 모델
 category: researchers
 image: fcn2.png
 author: Pytorch Team
diff --git a/pytorch_vision_ghostnet.md b/pytorch_vision_ghostnet.md
index 06223f6..e80b8a5 100644
--- a/pytorch_vision_ghostnet.md
+++ b/pytorch_vision_ghostnet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: GhostNet
-summary: Efficient networks by generating more features from cheap operations
+summary: 적은 연산으로 더 많은 특징을 생성하는 효율적인 네트워크
 category: researchers
 image: ghostnet.png
 author: Huawei Noah's Ark Lab
diff --git a/pytorch_vision_googlenet.md b/pytorch_vision_googlenet.md
index 3e5a910..faa8abd 100644
--- a/pytorch_vision_googlenet.md
+++ b/pytorch_vision_googlenet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: GoogLeNet
-summary: GoogLeNet was based on a deep convolutional neural network architecture codenamed "Inception" which won ImageNet 2014.
+summary: GoogLeNet은 "Inception"이라는 심층 컨볼루션 신경망 아키텍처를 기반으로 하여 ImageNet 2014에서 수상했습니다.
 category: researchers
 image: googlenet1.png
 author: Pytorch Team

From 9e26b7098ef4c30cb9e6cfff386ed4cb36957d0b Mon Sep 17 00:00:00 2001
From: Junwon Lee
Date: Tue, 19 Sep 2023 00:20:07 +0900
Subject: [PATCH 13/15] =?UTF-8?q?summary=20=EB=B2=88=EC=97=AD?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 pytorch_vision_ibnnet.md       | 2 +-
 pytorch_vision_inception_v3.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/pytorch_vision_ibnnet.md b/pytorch_vision_ibnnet.md
index 37bccf6..034b62d 100644
--- a/pytorch_vision_ibnnet.md
+++ b/pytorch_vision_ibnnet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: IBN-Net
-summary: Networks with domain/appearance invariance
+summary: 도메인/외관 불변성을 갖는 네트워크
 category: researchers
 image: ibnnet.png
 author: Xingang Pan
diff --git a/pytorch_vision_inception_v3.md b/pytorch_vision_inception_v3.md
index 3d61234..625ed00 100644
--- a/pytorch_vision_inception_v3.md
+++ b/pytorch_vision_inception_v3.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Inception_v3
-summary: Also called GoogleNetv3, a famous ConvNet trained on Imagenet from 2015
+summary: GoogleNetv3이라고도 불리는, 2015년 ImageNet으로 훈련된 유명한 ConvNet
 category: researchers
 image: inception_v3.png
 author: Pytorch Team

From fece7baa4386b55a3825e955840f27ecdee031a2 Mon Sep 17 00:00:00 2001
From: Junwon Lee
Date: Wed, 20 Sep 2023 21:46:51 +0900
Subject: [PATCH 14/15] =?UTF-8?q?summary=20=EB=B2=88=EC=97=AD?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 pytorch_vision_meal_v2.md       | 2 +-
 pytorch_vision_proxylessnas.md  | 2 +-
 pytorch_vision_resnest.md       | 2 +-
 pytorch_vision_resnext.md       | 2 +-
 pytorch_vision_shufflenet_v2.md | 2 +-
 pytorch_vision_squeezenet.md    | 2 +-
 pytorch_vision_vgg.md           | 2 +-
 snakers4_silero-models_stt.md   | 2 +-
 snakers4_silero-models_tts.md   | 2 +-
 snakers4_silero-vad_language.md | 2 +-
 10 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/pytorch_vision_meal_v2.md b/pytorch_vision_meal_v2.md
index 90f289b..cdab4cf 100644
--- a/pytorch_vision_meal_v2.md
+++ b/pytorch_vision_meal_v2.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: MEAL_V2
-summary: Boosting Tiny and Efficient Models using Knowledge Distillation.
+summary: Knowledge Distillation을 사용한 작고 효율적인 모델의 성능 향상
 category: researchers
 image: MEALV2.png
 author: Carnegie Mellon University
diff --git a/pytorch_vision_proxylessnas.md b/pytorch_vision_proxylessnas.md
index cd9dea0..2d35203 100644
--- a/pytorch_vision_proxylessnas.md
+++ b/pytorch_vision_proxylessnas.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ProxylessNAS
-summary: Proxylessly specialize CNN architectures for different hardware platforms.
+summary: 다양한 하드웨어 플랫폼을 위해 프록시 없이 전문화시킨 CNN 아키텍처
 category: researchers
 image: proxylessnas.png
 author: MIT Han Lab
diff --git a/pytorch_vision_resnest.md b/pytorch_vision_resnest.md
index 9419761..55f04a4 100644
--- a/pytorch_vision_resnest.md
+++ b/pytorch_vision_resnest.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ResNeSt
-summary: A new ResNet variant.
+summary: 새로운 ResNet 변형 모델
 category: researchers
 image: resnest.jpg
 author: Hang Zhang
diff --git a/pytorch_vision_resnext.md b/pytorch_vision_resnext.md
index 5417e4a..bb236c1 100644
--- a/pytorch_vision_resnext.md
+++ b/pytorch_vision_resnext.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ResNext
-summary: Next generation ResNets, more efficient and accurate
+summary: 보다 효율적이고 정확한 차세대 ResNets
 category: researchers
 image: resnext.png
 author: Pytorch Team
diff --git a/pytorch_vision_shufflenet_v2.md b/pytorch_vision_shufflenet_v2.md
index a54dd32..11a3802 100644
--- a/pytorch_vision_shufflenet_v2.md
+++ b/pytorch_vision_shufflenet_v2.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ShuffleNet v2
-summary: An efficient ConvNet optimized for speed and memory, pre-trained on Imagenet
+summary: ImageNet에서 사전 훈련된 속도와 메모리에 최적화된 효율적인 ConvNet
 category: researchers
 image: shufflenet_v2_1.png
 author: Pytorch Team
diff --git a/pytorch_vision_squeezenet.md b/pytorch_vision_squeezenet.md
index b235ab6..04bdbe5 100644
--- a/pytorch_vision_squeezenet.md
+++ b/pytorch_vision_squeezenet.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: SqueezeNet
-summary: Alexnet-level accuracy with 50x fewer parameters.
+summary: 50배 적은 파라미터로 Alexnet 수준의 정확도 제공
 category: researchers
 image: squeezenet.png
 author: Pytorch Team
diff --git a/pytorch_vision_vgg.md b/pytorch_vision_vgg.md
index 56b5003..1ef4330 100644
--- a/pytorch_vision_vgg.md
+++ b/pytorch_vision_vgg.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: vgg-nets
-summary: Award winning ConvNets from 2014 Imagenet ILSVRC challenge
+summary: 2014 Imagenet ILSVRC 챌린지에서 ConvNets부분 수상
 category: researchers
 image: vgg.png
 author: Pytorch Team
diff --git a/snakers4_silero-models_stt.md b/snakers4_silero-models_stt.md
index e477d71..6728b88 100644
--- a/snakers4_silero-models_stt.md
+++ b/snakers4_silero-models_stt.md
@@ -4,7 +4,7 @@ background-class: hub-background
 body-class: hub
 category: researchers
 title: Silero Speech-To-Text Models
-summary: A set of compact enterprise-grade pre-trained STT Models for multiple languages.
+summary: 여러 언어에 대해 사전 훈련된 소형 엔터프라이즈급 STT 모델 세트 image: silero_logo.jpg author: Silero AI Team tags: [audio, scriptable] diff --git a/snakers4_silero-models_tts.md b/snakers4_silero-models_tts.md index 0490cb9..ae0d78a 100644 --- a/snakers4_silero-models_tts.md +++ b/snakers4_silero-models_tts.md @@ -4,7 +4,7 @@ background-class: hub-background body-class: hub category: researchers title: Silero Text-To-Speech Models -summary: A set of compact enterprise-grade pre-trained TTS Models for multiple languages +summary: 여러 언어에 대해 사전 훈련된 소형 엔터프라이즈급 TTS 모델 세트 image: silero_logo.jpg author: Silero AI Team tags: [audio, scriptable] diff --git a/snakers4_silero-vad_language.md b/snakers4_silero-vad_language.md index 2b4c17b..5ce24fd 100644 --- a/snakers4_silero-vad_language.md +++ b/snakers4_silero-vad_language.md @@ -4,7 +4,7 @@ background-class: hub-background body-class: hub category: researchers title: Silero Language Classifier -summary: Pre-trained Spoken Language Classifier +summary: 사전 훈련된 음성 언어 분류기 image: silero_logo.jpg author: Silero AI Team tags: [audio, scriptable] From 515d33c38bd5b8c600bf2fda31e9487f036ed243 Mon Sep 17 00:00:00 2001 From: cpprhtn Date: Sat, 30 Sep 2023 15:40:51 +0900 Subject: [PATCH 15/15] refactor: unify the words into one --- facebookresearch_WSL-Images_resnext.md | 2 +- nvidia_deeplearningexamples_efficientnet.md | 2 +- nvidia_deeplearningexamples_resnet50.md | 2 +- nvidia_deeplearningexamples_resnext.md | 2 +- nvidia_deeplearningexamples_se-resnext.md | 2 +- pytorch_vision_alexnet.md | 4 ++-- pytorch_vision_densenet.md | 2 +- pytorch_vision_inception_v3.md | 2 +- pytorch_vision_mobilenet_v2.md | 2 +- pytorch_vision_resnet.md | 4 ++-- pytorch_vision_vgg.md | 4 ++-- 11 files changed, 14 insertions(+), 14 deletions(-) diff --git a/facebookresearch_WSL-Images_resnext.md b/facebookresearch_WSL-Images_resnext.md index 53716ac..b93fc66 100644 --- a/facebookresearch_WSL-Images_resnext.md +++ b/facebookresearch_WSL-Images_resnext.md @@ -64,7 +64,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# Imagenet의 1000개 클래스에 대한 신뢰도 점수를 가진, shape이 1000인 텐서 출력 +# ImageNet의 1000개 클래스에 대한 신뢰도 점수를 가진, shape이 1000인 텐서 출력 print(output[0]) # 출력값은 정규화되지 않은 형태입니다. Softmax를 실행하면 확률을 얻을 수 있습니다. print(torch.nn.functional.softmax(output[0], dim=0)) diff --git a/nvidia_deeplearningexamples_efficientnet.md b/nvidia_deeplearningexamples_efficientnet.md index 52810b2..89348d2 100644 --- a/nvidia_deeplearningexamples_efficientnet.md +++ b/nvidia_deeplearningexamples_efficientnet.md @@ -55,7 +55,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp print(f'Using {device} for inference') ``` -Load the model pretrained on IMAGENET dataset. +Load the model pretrained on ImageNet dataset. You can choose among the following models: diff --git a/nvidia_deeplearningexamples_resnet50.md b/nvidia_deeplearningexamples_resnet50.md index 143d09d..f25ffb4 100644 --- a/nvidia_deeplearningexamples_resnet50.md +++ b/nvidia_deeplearningexamples_resnet50.md @@ -57,7 +57,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp print(f'Using {device} for inference') ``` -IMAGENET 데이터셋에서 사전 훈련된 모델을 로드합니다. +ImageNet 데이터셋에서 사전 훈련된 모델을 로드합니다. 
```python resnet50 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resnet50', pretrained=True) utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils') diff --git a/nvidia_deeplearningexamples_resnext.md b/nvidia_deeplearningexamples_resnext.md index be01abc..290ecc9 100644 --- a/nvidia_deeplearningexamples_resnext.md +++ b/nvidia_deeplearningexamples_resnext.md @@ -64,7 +64,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp print(f'Using {device} for inference') ``` -IMAGENET 데이터셋으로 사전 학습된 모델을 불러옵니다. +ImageNet 데이터셋으로 사전 학습된 모델을 불러옵니다. ```python resneXt = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resneXt') utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils') diff --git a/nvidia_deeplearningexamples_se-resnext.md b/nvidia_deeplearningexamples_se-resnext.md index 27bc1f7..a2537b3 100644 --- a/nvidia_deeplearningexamples_se-resnext.md +++ b/nvidia_deeplearningexamples_se-resnext.md @@ -64,7 +64,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp print(f'Using {device} for inference') ``` -Load the model pretrained on IMAGENET dataset. +Load the model pretrained on ImageNet dataset. ```python resneXt = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_se_resnext101_32x4d') utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils') diff --git a/pytorch_vision_alexnet.md b/pytorch_vision_alexnet.md index b956e9c..fead253 100644 --- a/pytorch_vision_alexnet.md +++ b/pytorch_vision_alexnet.md @@ -58,7 +58,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# Imagenet 1000개 클래스의 신뢰 점수를 나타내는 텐서 +# ImageNet 1000개 클래스의 신뢰 점수를 나타내는 텐서 print(output[0]) @@ -90,7 +90,7 @@ AlexNet은 2012년도 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) | 모델 구조 | Top-1 에러 | Top-5 에러 | | --------------- | ----------- | ----------- | -| alexnet | 43.45 | 20.91 | --> +| AlexNet | 43.45 | 20.91 | --> ### 참고문헌 diff --git a/pytorch_vision_densenet.md b/pytorch_vision_densenet.md index 5bace88..f5752d2 100644 --- a/pytorch_vision_densenet.md +++ b/pytorch_vision_densenet.md @@ -88,7 +88,7 @@ for i in range(top5_prob.size(0)): Dense Convolutional Network (DenseNet)는 순전파(feed-forward) 방식으로 각 레이어를 다른 모든 레이어과 연결합니다. L 계층의 기존 합성곱 신경망이 L개의 연결 - 각 층과 다음 층 사이의 하나 - 인 반면 우리의 신경망은 L(L+1)/2 직접 연결을 가집니다. 각 계층에, 모든 선행 계층의 (feature-map)형상 맵은 입력으로 사용되며, 자체 형상 맵은 모든 후속 계층에 대한 입력으로 사용됩니다. DenseNets는 몇 가지 강력한 장점을 가집니다: 그레디언트가 사라지는 문제를 완화시키고, 특징 전파를 강화하며, 특징 재사용을 권장하며, 매개 변수의 수를 크게 줄입니다. -사전 학습된 모델을 사용한 imagenet 데이터셋의 1-crop 오류율은 다음 표와 같습니다. +사전 학습된 모델을 사용한 ImageNet 데이터셋의 1-crop 오류율은 다음 표와 같습니다. 
| Model structure | Top-1 error | Top-5 error | | --------------- | ----------- | ----------- | diff --git a/pytorch_vision_inception_v3.md b/pytorch_vision_inception_v3.md index 625ed00..725545e 100644 --- a/pytorch_vision_inception_v3.md +++ b/pytorch_vision_inception_v3.md @@ -57,7 +57,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# output은 shape가 [1000]인 Tensor 자료형이며, 이는 Imagenet 데이터셋의 1000개의 각 클래스에 대한 모델의 확신도(confidence)를 나타냄 +# output은 shape가 [1000]인 Tensor 자료형이며, 이는 ImageNet 데이터셋의 1000개의 각 클래스에 대한 모델의 확신도(confidence)를 나타냄 print(output[0]) # output은 정규화되지 않았으므로, 확률화하기 위해 softmax 함수를 처리 probabilities = torch.nn.functional.softmax(output[0], dim=0) diff --git a/pytorch_vision_mobilenet_v2.md b/pytorch_vision_mobilenet_v2.md index 73100c0..3c7e070 100644 --- a/pytorch_vision_mobilenet_v2.md +++ b/pytorch_vision_mobilenet_v2.md @@ -58,7 +58,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# output은 1000개의 Tensor 형태이며, 이는 Imagenet 데이터 셋의 1000개 클래스에 대한 신뢰도 점수를 나타내는 결과 +# output은 1000개의 Tensor 형태이며, 이는 ImageNet 데이터 셋의 1000개 클래스에 대한 신뢰도 점수를 나타내는 결과 print(output[0]) # output 결과는 정규화되지 않은 결과. 확률을 얻기 위해선 softmax를 거쳐야 함. probabilities = torch.nn.functional.softmax(output[0], dim=0) diff --git a/pytorch_vision_resnet.md b/pytorch_vision_resnet.md index a64244a..2ab97bd 100644 --- a/pytorch_vision_resnet.md +++ b/pytorch_vision_resnet.md @@ -64,7 +64,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes +# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes print(output[0]) # The output has unnormalized scores. To get probabilities, you can run a softmax on it. probabilities = torch.nn.functional.softmax(output[0], dim=0) @@ -91,7 +91,7 @@ for i in range(top5_prob.size(0)): Resnet models were proposed in "Deep Residual Learning for Image Recognition". Here we have the 5 versions of resnet models, which contains 18, 34, 50, 101, 152 layers respectively. Detailed model architectures can be found in Table 1. -Their 1-crop error rates on imagenet dataset with pretrained models are listed below. +Their 1-crop error rates on ImageNet dataset with pretrained models are listed below. | Model structure | Top-1 error | Top-5 error | | --------------- | ----------- | ----------- | diff --git a/pytorch_vision_vgg.md b/pytorch_vision_vgg.md index 1ef4330..fc65b18 100644 --- a/pytorch_vision_vgg.md +++ b/pytorch_vision_vgg.md @@ -3,7 +3,7 @@ layout: hub_detail background-class: hub-background body-class: hub title: vgg-nets -summary: 2014 Imagenet ILSVRC 챌린지에서 ConvNets부분 수상 +summary: 2014 ImageNet ILSVRC 챌린지에서 ConvNets부분 수상 category: researchers image: vgg.png author: Pytorch Team @@ -66,7 +66,7 @@ if torch.cuda.is_available(): with torch.no_grad(): output = model(input_batch) -# Imagenet의 1000개 클래스에 대한 신뢰도 점수가 있는 1000개의 Tensor입니다. +# ImageNet의 1000개 클래스에 대한 신뢰도 점수가 있는 1000개의 Tensor입니다. print(output[0]) # 출력에 정규화되지 않은 점수가 있습니다. 확률을 얻으려면 소프트맥스를 실행할 수 있습니다. probabilities = torch.nn.functional.softmax(output[0], dim=0)
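
For reviewers, a minimal sketch of the shared torch.hub flow that the pages touched by this series document: load one of the hub entry points, run a single batch, and print the top-5 probabilities. The `pytorch/vision:v0.10.0` tag, the `wide_resnet50_2` entry point, and the random input tensor are stand-ins — substitute the exact model name and preprocessing from the page under review.

```python
import torch

# Any entry point covered by this series works here; wide_resnet50_2 is one example.
model = torch.hub.load('pytorch/vision:v0.10.0', 'wide_resnet50_2', pretrained=True)
model.eval()

# Random tensor standing in for a preprocessed 224x224 RGB image batch.
input_batch = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    output = model(input_batch)

# Unnormalized scores over ImageNet's 1000 classes; softmax turns them into probabilities.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(top5_catid[i].item(), top5_prob[i].item())
```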