add Qwen2.5-Math-7B-Instruct-jp-EZO_OREO & AIdeaLab VideoJP (#432)
* add Qwen2.5-Math-7B-Instruct-jp-EZO_OREO & AIdeaLab VideoJP

* add ignore url
kaisugi authored Jan 27, 2025
1 parent ae6b674 commit 058e6c7
Showing 5 changed files with 25 additions and 1 deletion.
3 changes: 2 additions & 1 deletion .404-links.yml
@@ -6,4 +6,5 @@ ignore:
- https://gitlab.llm-jp.nii.ac.jp/datasets/*
- https://llm-jp.nii.ac.jp/blog/2024/02/09/v1.1-tuning.html
- https://ojs.aaai.org/*
- https://www.anlp.jp/*
- https://www.informatix.co.jp/pr-roberta/
7 changes: 7 additions & 0 deletions README.md
@@ -196,6 +196,7 @@
| | Domain | Base LLM | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [JMedLoRA](https://arxiv.org/pdf/2310.10083.pdf)<br>([llama2-jmedlora-6.89ep](https://huggingface.co/AIgroup-CVM-utokyohospital/llama2-jmedlora-6.89ep)) | Medicine | Llama 2 (**70b**) | University of Tokyo Hospital Department of Cardiovascular Medicine AI Group | CC BY-NC 4.0 |
| [AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO](https://huggingface.co/AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO) | Mathematics | Qwen2.5-Math-7B-Instruct (**7b**) | Axcxept | Apache 2.0 |
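The EZO_OREO entry above is a math-specialized instruction model, so a quick way to try it is an ordinary causal-LM chat call. The sketch below is a minimal example using the Hugging Face transformers library; it assumes the checkpoint ships the standard Qwen2.5 chat template, and the Japanese prompt and generation settings are illustrative placeholders rather than values documented on the model card.

```python
# Minimal sketch (assumption: the model follows the standard Qwen2.5 chat
# interface). Prompt and generation parameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Japanese prompt: "Solve the quadratic equation x^2 - 5x + 6 = 0."
messages = [{"role": "user", "content": "2次方程式 x^2 - 5x + 6 = 0 を解いてください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Strip the prompt tokens and decode only the newly generated answer.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In bfloat16 a 7B model needs roughly 15 GB of GPU memory, so `device_map="auto"` (which requires the accelerate package) is used here to let transformers place the weights automatically.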

<a id="merged-models"></a>
### Merged models
@@ -378,6 +379,12 @@
| [Evo-Nishikie](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Nishikie-v1)) | Stable Diffusion (ControlNet) | Ukiyo-e | Sakana AI | Apache 2.0[^14] |
| [Evo-Ukiyoe](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1)) | Stable Diffusion | Ukiyo-e | Sakana AI | Apache 2.0[^14] |

### Text to Video

| | Architecture | Training Data | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [AIdeaLab VideoJP](https://aidealab.com/news/QSvdcQfA)<br>([AIdeaLab-VideoJP](https://huggingface.co/aidealab/AIdeaLab-VideoJP)) | CogVideoX | Pixabay, FineVideo | AIdeaLab | Apache 2.0 |
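Because the entry lists CogVideoX as the architecture, one plausible way to try AIdeaLab-VideoJP locally is through the CogVideoXPipeline in Hugging Face diffusers, sketched below. Whether the checkpoint loads directly with this pipeline, and the frame count, step count, and fps chosen here, are assumptions for illustration rather than settings documented by the developer.

```python
# Minimal sketch (assumption: aidealab/AIdeaLab-VideoJP is loadable with
# diffusers' CogVideoXPipeline). All generation parameters are illustrative.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# bfloat16 keeps memory use manageable on a single GPU.
pipe = CogVideoXPipeline.from_pretrained(
    "aidealab/AIdeaLab-VideoJP", torch_dtype=torch.bfloat16
).to("cuda")

# Japanese prompt: "a person cycling along a river with cherry blossoms falling".
prompt = "桜が舞う川沿いを自転車で走る人"
result = pipe(
    prompt=prompt,
    num_frames=49,            # illustrative clip length
    num_inference_steps=50,   # illustrative denoising steps
    guidance_scale=6.0,       # illustrative CFG scale
)
export_to_video(result.frames[0], "videojp_sample.mp4", fps=8)
```

`export_to_video` writes the decoded frames to an MP4; on GPUs with less memory, `pipe.enable_model_cpu_offload()` is a common diffusers option to trade speed for a smaller footprint.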

<a id="multimodal-others"></a>
### Others

7 changes: 7 additions & 0 deletions en/README.md
@@ -195,6 +195,7 @@ Please point out any errors on the [issues page](https://github.com/llm-jp/aweso
| | Domain | Base Model | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [JMedLoRA](https://arxiv.org/pdf/2310.10083.pdf)<br>([llama2-jmedlora-6.89ep](https://huggingface.co/AIgroup-CVM-utokyohospital/llama2-jmedlora-6.89ep)) | Medicine | Llama 2 (**70b**) | University of Tokyo Hospital Department of Cardiovascular Medicine AI Group | CC BY-NC 4.0 |
| [AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO](https://huggingface.co/AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO) | Mathematics | Qwen2.5-Math-7B-Instruct (**7b**) | Axcxept | Apache 2.0 |

<a id="merged-models"></a>
### Merged models
@@ -376,6 +377,12 @@ Please point out any errors on the [issues page](https://github.com/llm-jp/aweso
| [Evo-Nishikie](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Nishikie-v1)) | Stable Diffusion (ControlNet) | Ukiyo-e | Sakana AI | Apache 2.0[^14] |
| [Evo-Ukiyoe](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1)) | Stable Diffusion | Ukiyo-e | Sakana AI | Apache 2.0[^14] |

### Text to Video

| | Architecture | Training Data | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [AIdeaLab VideoJP](https://aidealab.com/news/QSvdcQfA)<br>([AIdeaLab-VideoJP](https://huggingface.co/aidealab/AIdeaLab-VideoJP)) | CogVideoX | Pixabay, FineVideo | AIdeaLab | Apache 2.0 |

<a id="multimodal-others"></a>
### Others

7 changes: 7 additions & 0 deletions fr/README.md
@@ -195,6 +195,7 @@ Please report any errors on the [issues](https://github.com/l
| | Domain | Base Model | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [JMedLoRA](https://arxiv.org/pdf/2310.10083.pdf)<br>([llama2-jmedlora-6.89ep](https://huggingface.co/AIgroup-CVM-utokyohospital/llama2-jmedlora-6.89ep)) | Medicine | Llama 2 (**70b**) | University of Tokyo Hospital Department of Cardiovascular Medicine AI Group | CC BY-NC 4.0 |
| [AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO](https://huggingface.co/AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO) | Mathematics | Qwen2.5-Math-7B-Instruct (**7b**) | Axcxept | Apache 2.0 |

<a id="merged-models"></a>
### Merged models
@@ -376,6 +377,12 @@ Please report any errors on the [issues](https://github.com/l
| [Evo-Nishikie](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Nishikie-v1)) | Stable Diffusion (ControlNet) | Ukiyo-e | Sakana AI | Apache 2.0[^14] |
| [Evo-Ukiyoe](https://sakana.ai/evo-ukiyoe/)<br>([v1](https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1)) | Stable Diffusion | Ukiyo-e | Sakana AI | Apache 2.0[^14] |

### Text to Video

| | Architecture | Training Data | Developer | License |
|:---|:---:|:---:|:---:|:---:|
| [AIdeaLab VideoJP](https://aidealab.com/news/QSvdcQfA)<br>([AIdeaLab-VideoJP](https://huggingface.co/aidealab/AIdeaLab-VideoJP)) | CogVideoX | Pixabay, FineVideo | AIdeaLab | Apache 2.0 |

<a id="multimodal-others"></a>
### Others

2 changes: 2 additions & 0 deletions parts/references_model.md
@@ -34,6 +34,7 @@
| GPT-NeoX | 2022.04.14 | BigScience Research Workshop at ACL 2022 | [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://aclanthology.org/2022.bigscience-1.9/) |
| DiffCSE | 2022.04.21 | NAACL 2022 | [DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings](https://aclanthology.org/2022.naacl-main.311/) |
| GIT | 2022.05.27 | TMLR 2022 | [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) |
| CogVideo | 2022.05.29 | ICLR 2023 | [CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers](https://arxiv.org/abs/2205.15868) |
| Whisper | 2022.12.06 | ICML 2023 | [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) |
| BLIP-2 | 2023.01.30 | ICML 2023 | [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) |
| ControlNet | 2023.02.10 | ICCV 2023 | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) |
@@ -74,4 +75,5 @@
| LLM-jp-13B | 2024.07.04 | - | [LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs](https://arxiv.org/abs/2407.03963) |
| Llama 3.1 | 2024.07.23 | - | [The Llama 3 Herd of Models](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/) |
| Gemma 2 | 2024.07.31 | - | [Gemma 2: Improving Open Language Models at a Practical Size](https://arxiv.org/abs/2408.00118) |
| CogVideoX | 2024.08.12 | - | [CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://arxiv.org/abs/2408.06072) |
| PLaMo-100B | 2024.10.10 | - | [PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency](https://arxiv.org/abs/2410.07563) |
