diff --git a/README.md b/README.md
index 483135e..43143b7 100644
--- a/README.md
+++ b/README.md
@@ -82,9 +82,9 @@
 | Recorded | Year | Name | Summary | Citations |
 | ------ | ---- | ------------------------------------------------------------ | -------------------- | -----------------------------------------------------------: |
 | ✅ | 2017 | [Transformer](https://arxiv.org/abs/1706.03762) | The fourth major architecture family, after MLP, CNN, and RNN | 26029 ([link](https://www.semanticscholar.org/paper/Attention-is-All-you-Need-Vaswani-Shazeer/204e3073870fae3d05bcbc2f6a8e263d9b72e776)) |
-| | 2018 | [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) | Pre-training with a Transformer | 2752 ([link](https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035)) |
+| | 2018 | [GPT](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) | Pre-training with a Transformer | 2752 ([link](https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035)) |
 | ✅ | 2018 | [BERT](https://arxiv.org/abs/1810.04805) | The start of the Transformer taking over NLP | 25340 ([link](https://www.semanticscholar.org/paper/BERT%3A-Pre-training-of-Deep-Bidirectional-for-Devlin-Chang/df2b0e26d0599ce3e70df8a9da02e51594e0e992)) |
-| | 2019 | [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) | | 4534 ([link](https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe)) |
+| | 2019 | [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) | | 4534 ([link](https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe)) |
 | | 2020 | [GPT-3](https://arxiv.org/abs/2005.14165) | A big step toward zero-shot learning | 2548 ([link](https://www.semanticscholar.org/paper/Language-Models-are-Few-Shot-Learners-Brown-Mann/6b85b63579a916f705a8e10a49bd8d849d91b1fc)) |