From 0c5d213870b25a94ef4261f3381ff9dfb0f72f18 Mon Sep 17 00:00:00 2001 From: mdingemanse Date: Fri, 19 Apr 2024 15:13:54 +0000 Subject: [PATCH] Apply automatic changes --- csv_file_path | 1 + docs/figure.html | 78 ++++-------------------------------------------- docs/index.html | 4 ++- 3 files changed, 10 insertions(+), 73 deletions(-) diff --git a/csv_file_path b/csv_file_path index df0d49d..171c68d 100644 --- a/csv_file_path +++ b/csv_file_path @@ -1,6 +1,7 @@ project.link,project.notes,project.llmbase,project.rlbase,project.license,org.name,org.link,org.notes,opencode.class,opencode.link,opencode.notes,llmdata.class,llmdata.link,llmdata.notes,llmweights.class,llmweights.link,llmweights.notes,rldata.class,rldata.link,rldata.notes,rlweights.class,rlweights.link,rlweights.notes,license.class,license.link,license.notes,code.class,code.link,code.notes,architecture.class,architecture.link,architecture.notes,preprint.class,preprint.link,preprint.notes,paper.class,paper.link,paper.notes,modelcard.class,modelcard.link,modelcard.notes,datasheet.class,datasheet.link,datasheet.notes,package.class,package.link,package.notes,api.class,api.link,api.notes,source.file,openness https://huggingface.co/bigscience/bloomz,,"BLOOMZ, mT0",xP3,Apache 2.0 and RAIL (responsible AI license),bigscience-workshop,https://github.com/bigscience-workshop,,open,https://github.com/bigscience-workshop/xmtf,Repository provides a guided overview to all components,open,https://github.com/bigscience-workshop/xmtf#data,Data made available & documented in detail in repo and preprint,open,https://github.com/bigscience-workshop/xmtf#models,Model made available on github,open,https://huggingface.co/datasets/bigscience/xP3all,From the documentation 'xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks',partial,https://huggingface.co/bigscience/bloomz-optimizer-states/tree/main,Fine-tuned checkpoint available for download,partial,https://bigscience.huggingface.co/blog/the-bigscience-rail-license,"Code licensed under Apache 2.0, model under bespoke 'Responsible AI License' which imposes some limitations",open,https://github.com/bigscience-workshop/xmtf,Code well documented and actively maintained,open,https://github.com/bigscience-workshop/xmtf#create-xp3x,"Architecture described in preprint, code available in github repo, recipe on HuggingFace",open,https://arxiv.org/abs/2211.05100,Preprint (updated June 2023) of 65 pages + 10 page appendix,open,https://aclanthology.org/2023.acl-long.891/,Peer-reviewed paper of 9 pages + 114 page appendix describes the multitask finetuning (instruction tuning) of BLOOM (see preprint) to form BLOOMZ,open,https://huggingface.co/bigscience/bloomz,Model card,open,https://huggingface.co/datasets/bigscience/xP3,Dataset documented in dataset card at HuggingFace,closed,,No packages published,open,https://huggingface.co/spaces/bigscience/petals-api,Petals API via HuggingFace not always available ('not enough hardware capacity'),/projects/bloomz.yaml,12.0 https://huggingface.co/LLM360/AmberChat,,Amber,ShareGPT + Evol-Instruct (synthetic),Apache 2.0,LLM360,https://www.llm360.ai/index.html,,open,https://github.com/LLM360/amber-train/tree/main,amber-train repository includes code for training and finetuning.,open,https://huggingface.co/datasets/LLM360/AmberDatasets,data well-documented and openly available,open,https://huggingface.co/LLM360/Amber,360 model checkpoints 
released,open,https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k,RL and fine-tuning data shared and documented,open,https://huggingface.co/LLM360/AmberChat,Finetuned model available for download.,open,https://huggingface.co/LLM360/AmberChat,Everything licensed under Apache 2.0,partial,https://github.com/LLM360,Code documented in helpful readme.md files but only partly inline.,partial,https://arxiv.org/abs/2312.06550,"Architecture described in preprint, but not all details documented.",open,https://arxiv.org/abs/2312.06550,"Preprint describes architecture, design choices, training and fine-tuning.",closed,,No peer-reviewed paper yet.,partial,https://huggingface.co/LLM360/AmberChat,Model card doesn't specify use or limitations,partial,https://huggingface.co/datasets/LLM360/AmberDatasets,"Concise description (better than most), but doesn't specify funders, purposes, representativeness, legal status as prescribed by datasheets industry standard",closed,,No released package found,open,https://huggingface.co/LLM360/AmberChat,Free Huggingface inference API.,/projects/amber.yaml,10.0 +https://blog.allenai.org/olmo-open-language-model-87ccfc95f580,,OLMo 7B,OpenInstruct,Apache 2.0,AllenAI,https://allenai.org/allennlp,,open,https://github.com/allenai/OLMo,"Multiple repos with training, architecture and fine-tuning code available",open,https://huggingface.co/datasets/allenai/dolma,Dolma training data released and documented in exemplary way,open,https://huggingface.co/collections/allenai/olmo-suite-65aeaae8fe5b6b2122b46778,OLMo 7B and many training checkpoints available,open,https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned,Instruction tuning datasets documented and made available in exemplary ways,open,https://huggingface.co/allenai/OLMo-7B-Instruct/tree/main,Full model weights made available,,,,,,,,,,open,https://arxiv.org/abs/2402.00838,"Preprint describes model architecture, training and fine-tuning data, and training and SFT pipelines",closed,,No peer-reviewed paper found,open,https://huggingface.co/allenai/OLMo-7B-Instruct,Model card provides broad overview and links to full details,open,https://huggingface.co/datasets/allenai/dolma,"Data sheets and documentation available for the datasets used, linked here is Dolma",open,https://pypi.org/project/ai2-olmo/,No separate package made available,partial,https://huggingface.co/allenai/OLMo-7B-hf,Available through HuggingFace though model is,/projects/olmo-7b-instruct.yaml,9.5 https://open-assistant.io/,,Pythia 12B,OpenAssistant Conversations,Apache 2.0,LAION-AI,https://open-assistant.io/,,open,https://github.com/LAION-AI/Open-Assistant,Code includes guide for developers,open,https://github.com/LAION-AI/Open-Assistant/tree/main/data/datasets,Datasets documented in detail and recipes for cleaning up and downloading provided in code notebooks.,open,https://huggingface.co/OpenAssistant,Model weights in several variants downloadable through HuggingFace,open,https://huggingface.co/datasets/OpenAssistant/oasst1,"OpenAssistant Conversations is 'a human-generated, human-annotated assistant-style conversation corpus consisting of 161443 messages distributed across 66497 conversation trees, in 35 different languages, annotated with 461292 quality ratings' (preprint)",closed,,RLHF weights not separately released,open,https://projects.laion.ai/Open-Assistant/docs/faq#what-license-does-open-assistant-use,Apache 2.0,open,https://projects.laion.ai/Open-Assistant/docs/intro,Separate website provides entry point to 
comprehensive documentation,open,https://github.com/LAION-AI/Open-Assistant/tree/main/model,Instructions to tune the pipeline on training data,partial,https://arxiv.org/abs//2304.07327,"Preprint describes creation of OpenAssistant Conversations corpus for instruction tuning, but not the base LLM, hence partial.",closed,,No peer-reviewed paper or published data audit found,closed,,,closed,,,open,,,open,https://projects.laion.ai/Open-Assistant/api,,/projects/Open-Assistant.yaml,9.5 https://github.com/imoneoi/openchat,,Mistral 7B,ShareGPT with C-RLFT,Apache 2.0,Tshinghua University,https://github.com/imoneoi,OpenChat notes 'We are a student team from Tsinghua University',open,https://github.com/imoneoi/openchat/tree/master/ochat,Repository offers a large amount of fairly well-organized code for data curation and model,closed,,Pretraining data for Mistral is nowhere disclosed or documented,open,https://github.com/mistralai/mistral-src#download-the-model,Mistral 7B weights available via Mistral repository,closed,,Preprint says shareGPT dataset 'collected from sharegpt.com' but not disclosed or made available by this project,open,https://huggingface.co/openchat/openchat_3.5/tree/main,Instruction-tuned model weights made available via HuggingFace,open,https://github.com/imoneoi/openchat/blob/master/LICENSE,Code and model released under Apache 2.0,partial,https://github.com/imoneoi/openchat/tree/master/ochat,There is plenty of code in the github repository but only some of it is documented,open,https://arxiv.org/abs/2309.11235,Architecture quite well described in preprint,open,https://arxiv.org/abs/2309.11235,"Preprint describes the model architecture and instruction tuning approach, though is hampered by building on notoriously closed Llama2",open,https://openreview.net/forum?id=AOJyfhWYHf,Paper reviewed and accepted for ICLR 2024,partial,https://huggingface.co/openchat/openchat_v3.2,There is a model card that provides some details on architecture and evaluation,closed,,Datasheet not provided.,open,https://github.com/imoneoi/openchat/tree/master#installation,Python package 'ochat' provided through pip,partial,,"Model too large to load onto HuggingFace free inference API, so only available through Inference Endpoints or package",/projects/OpenChat.yaml,9.5 https://huggingface.co/togethercomputer/Pythia-Chat-Base-7B,,EleutherAI pythia,OIG,Apache 2.0 license,togethercomputer,https://github.com/togethercomputer,,open,,,open,https://github.com/togethercomputer/OpenDataHub,Training data curated and shared in separate repository,open,https://huggingface.co/togethercomputer/Pythia-Chat-Base-7B/tree/main,Model weights available via HuggingFace,open,https://huggingface.co/datasets/laion/OIG,From the documentation 'This is our attempt to create a large instruction dataset of medium quality along with a smaller high quality instruciton dataset (OIG-small-chip2).',closed,,RL weights not separately made available,open,https://huggingface.co/togethercomputer/Pythia-Chat-Base-7B#model-details,Apache 2.0,open,https://github.com/togethercomputer/OpenChatKit,Actively maintained repository,open,https://github.com/togethercomputer/OpenChatKit#reproducing-pythia-chat-base-7b,Architecture and recipe for reproducing model provided,partial,https://arxiv.org/abs/2304.01373,Preprint describes LM base (Pythia) but not instruction tuning details,closed,,No peer-reviewed paper or data audit found,partial,https://huggingface.co/togethercomputer/Pythia-Chat-Base-7B,Model card partially available but fairly minimally 
specified,partial,https://huggingface.co/datasets/laion/OIG,OIG instruction dataset documented,open,,,closed,,,/projects/pythia-chat-base-7B.yaml,9.5
diff --git a/docs/figure.html b/docs/figure.html
index 47f6a58..5297fbc 100644
--- a/docs/figure.html
+++ b/docs/figure.html
@@ -9,11 +9,9 @@
-
-

Liesenfeld, A., Lopez, A. & Dingemanse, M. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. July 19-21, Eindhoven. doi: 10.1145/3571884.3604316 (PDF).

-

There is a growing number of instruction-tuned text generators billing themselves as 'open source'. How open are they really? 🔗ACM paper 🔗PDF 🔗repo

+

@@ -21,117 +19,53 @@

- - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ProjectAvailabilityDocumentationAccess
BLOOMZ✔︎✔︎✔︎✔︎~~✔︎✔︎✔︎✔︎✔︎✔︎✘✔︎
bigscience-workshopLLM base: BLOOMZ, mT0RL base: xP3
AmberChat✔︎✔︎✔︎✔︎✔︎✔︎~~✔︎✘~~✘✔︎
LLM360LLM base: AmberRL base: ShareGPT + Evol-Instruct (synthetic)
OLMo 7B Instruct✔︎✔︎✔︎✔︎✔︎???✔︎✘✔︎✔︎✔︎~
AllenAILLM base: OLMo 7BRL base: OpenInstruct
Open Assistant✔︎✔︎✔︎✔︎✘✔︎✔︎✔︎~✘✘✘✔︎✔︎
LAION-AILLM base: Pythia 12BRL base: OpenAssistant Conversations
OpenChat 3.5 7B✔︎✘✔︎✘✔︎✔︎~✔︎✔︎✔︎~✘✔︎~
Tsinghua UniversityLLM base: Mistral 7BRL base: ShareGPT with C-RLFT
Pythia-Chat-Base-7B-v0.16✔︎✔︎✔︎✔︎✘✔︎✔︎✔︎~✘~~✔︎✘
togethercomputerLLM base: EleutherAI pythiaRL base: OIG
RedPajama-INCITE-Instruct-7B~✔︎✔︎✔︎✔︎~~~✘✘✔︎✔︎✘~
TogetherComputerLLM base: RedPajama-INCITE-7B-BaseRL base: various (GPT-JT recipe)
dolly✔︎✔︎✔︎✔︎✘✔︎✔︎✔︎~✘✘✘✔︎✘
databricksLLM base: EleutherAI pythiaRL base: databricks-dolly-15k
Tulu V2 DPO 70B✔︎✘~✔︎✔︎~~~✔︎✘~~✘✔︎
AllenAILLM base: Llama2RL base: Tulu SFT, Ultrafeedback
MPT-30B Instruct✔︎~✔︎~✘✔︎✔︎~✘✘~✘✔︎~
MosaicMLLLM base: MosaicMLRL base: dolly, anthropic
MPT-7B Instruct✔︎~✔︎~✘✔︎✔︎~✘✘✔︎✘✔︎✘
MosaicMLLLM base: MosaicMLRL base: dolly, anthropic
trlx✔︎✔︎✔︎~✘✔︎✔︎~✘✘✘✘~✔︎
carperaiLLM base: various (pythia, flan, OPT)RL base: various
Vicuna 13B v 1.3✔︎~✔︎✘✘~✔︎✘✔︎✘~✘✔︎~
LMSYSLLM base: LLaMARL base: ShareGPT
minChatGPT✔︎✔︎✔︎~✘✔︎✔︎~✘✘✘✘✘✔︎
ethanyanjialiLLM base: GPT2RL base: anthropic
Cerebras-GPT-111M✔︎✔︎✔︎✔︎✘✔︎✘✔︎~✘✘✘✘✘
Cerebras + SchrammLLM base: RL base: Alpaca (synthetic)
ChatRWKV✔︎~✔︎✘✘✔︎~~~✘✘✘✔︎~
BlinkDL/RWKVLLM base: RWKV-LMRL base: alpaca, shareGPT (synthetic)
BELLE✔︎~~~~✘~✔︎✔︎✘✘~✘✘
KE TechnologiesLLM base: LLaMA & BLOOMZRL base: alpaca, shareGPT, Belle (synthetic)
WizardLM 13B v1.2~✘~✔︎✔︎~~✔︎✔︎✘✘✘✘✘
Microsoft & Peking UniversityLLM base: LLaMA2-13BRL base: Evol-Instruct (synthetic)
Airoboros L2 70B GPT4~✘~✔︎✔︎~~~✘✘~~✘✘
Jon DurbinLLM base: Llama2RL base: Airoboros (synthetic)
ChatGLM-6B~~✔︎✘✘✔︎~~✘~✘✘✘✔︎
THUDMLLM base: GLM (own)RL base: Unspecified
Mistral 7B-Instruct~✘✔︎✘~✔︎✘~~✘✘✘~✔︎
Mistral AILLM base: unclearRL base: unspecified
WizardLM-7B~~✘✔︎~~~✔︎✔︎✘✘✘✘✘
Microsoft & Peking UniversityLLM base: LLaMA-7BRL base: Evol-Instruct (synthetic)
Qwen 1.5~✘✔︎✘✔︎✘~~✘✘✘✘~✔︎
Alibaba CloudLLM base: QwenLMRL base: Unspecified
StableVicuna-13B~✘~~~~~~~✘~✘✘~
CarperAILLM base: LLaMARL base: OASST1 (human), GPT4All (human), Alpaca (synthetic)
Falcon-40B-instruct✘~✔︎~✘✔︎✘~~✘~✘✘✘
Technology Innovation InstituteLLM base: Falcon 40BRL base: Baize (synthetic)
UltraLM✘✘~✔︎~✘✘~✔︎✘~~✘✘
OpenBMBLLM base: LLaMA2RL base: UltraFeedback (part synthetic)
Yi 34B Chat~✘✔︎✘✔︎~✘✘✔︎✘✘✘✘~
01.AILLM base: Yi 34BRL base: unspecified
Koala 13B✔︎~~~✘~~~✘✘✘✘✘✘
BAIRLLM base: LLaMA 13BRL base: HC3, ShareGPT, alpaca (synthetic)
Mixtral 8x7B Instruct✘✘✔︎✘~✔︎✘~~✘✘✘~✘
Mistral AILLM base: MistralRL base: Unspecified
Stable Beluga 2✘✘~✘✔︎~✘~~✘~✘✘~
Stability AILLM base: LLaMA2RL base: Orca-style (synthetic)
Stanford Alpaca✔︎✘~~~✘~✔︎✘✘✘✘✘✘
Stanford University CRFMLLM base: LLaMARL base: Self-Instruct (synthetic)
Falcon-180B-chat✘~~~~✘✘~~✘~✘✘✘
Technology Innovation InstituteLLM base: Falcon 180BRL base: OpenPlatypus, Ultrachat, Airoboros (synthetic)
Orca 2✘✘~✘✔︎✘✘~~✘~✘✘~
Microsoft ResearchLLM base: LLaMA2RL base: FLAN, Math, undisclosed (synthetic)
Command R+✘✘✘✔︎✔︎~✘✘✘✘~✘✘✘
Cohere AILLM base: RL base: Aya Collection
Gemma 7B Instruct~✘~✘~✘✘~✘✘✔︎✘✘✘
Google DeepMindLLM base: GemmaRL base: Unspecified
LLaMA2 Chat✘✘~✘~✘✘~~✘~✘✘~
Facebook ResearchLLM base: LLaMA2RL base: Meta, StackExchange, Anthropic
Nanbeige2-Chat✔︎✘✘✘✔︎~✘✘✘✘✘✘✘~
Nanbeige LLM labLLM base: UnknownRL base: Unknown
Llama 3 Instruct✘✘~✘~✘✘~✘✘~✘✘~
Facebook ResearchLLM base: Meta Llama 3RL base: Meta, undocumented
Solar 70B✘✘~✘~✘✘✘✘✘~✘✘~
Upstage AILLM base: LLaMA2RL base: Orca-style, Alpaca-style
Xwin-LM✘✘~✘✘✘✘✘✘✘✘✘✘~
Xwin-LMLLM base: LLaMA2RL base: unknown
ChatGPT✘✘✘✘✘✘✘✘~✘✘✘✘✘
OpenAILLM base: GPT 3.5RL base: Instruct-GPT

How to use this table. Every cell records a three-level openness judgement (✔︎ open, ~ partial or ✘ closed) with a direct link to the available evidence; on hover, the cell will display the notes we have on file for that judgement. The name of each project is a direct link to source data. The table is sorted by cumulative openness, where ✔︎ is 1, ~ is 0.5 and ✘ is 0 points. Note that RL may refer to RLHF or other forms of fine-tuning aimed at fostering instruction-following behaviour.
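For readers who want to recompute these scores, here is a minimal Python sketch (not part of the project's own tooling). It assumes the exported CSV shown in this patch, named csv_file_path, where the fourteen features are the columns ending in .class holding open, partial or closed; empty cells are treated as unrated and score 0.

```python
# Sketch only: recompute cumulative openness from the exported CSV,
# using the weights given in the table legend (open = 1, partial = 0.5, closed = 0).
import csv

WEIGHTS = {"open": 1.0, "partial": 0.5, "closed": 0.0}

with open("csv_file_path", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # The openness features are the *.class columns; empty cells
        # (unrated features, shown as "?" in the table) contribute 0.
        score = sum(WEIGHTS.get(v, 0.0)
                    for k, v in row.items() if k.endswith(".class"))
        print(f"{row['project.link']}: {score} (listed: {row['openness']})")
```

Run over the rows above, this reproduces the listed totals, e.g. 12.0 for BLOOMZ, 10.0 for AmberChat and 9.5 for OLMo 7B Instruct.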

-

Why is openness important?

-

Open research is the lifeblood of cumulative progress in science and engineering. Openness is key for fundamental research, for fostering critical computational literacy, and for making informed choices for or against deployment of instruction-tuned LLM architectures. The closed & proprietary nature of ChatGPT and kin makes them fundamentally unfit for responsible use in research and education.

-

Open alternatives provide ways to build reproducible workflows, chart resource costs, and lessen reliance on corporate whims. One aim of our work here is to provide tools to track openness, transparency and accountability in the fast-evolving landscape of instruction-tuned text generators. Read more in the paper (PDF) or contribute to the repo.

-

If you know a model that should be listed here or a data point that needs updating, please see guidelines for contributors. We welcome any contribution, whether it's a quick addition to our awesomelist or a more detail-oriented contribution to the metadata for a specific project.
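As a purely hypothetical illustration of what a metadata contribution covers, the sketch below drafts a per-project record in Python. The field names are copied from the csv_file_path header in this patch and the /projects/ location from its source.file column; the repository's actual YAML layout and required fields may differ.

```python
# Hypothetical sketch only: field names mirror the CSV columns exported in this
# patch; the real /projects/*.yaml schema may nest or name them differently.
import yaml  # PyYAML

entry = {
    "project.link": "https://example.org/my-model",       # placeholder URL
    "project.llmbase": "ExampleBase-7B",                   # placeholder
    "org.name": "Example Lab",                             # placeholder
    "opencode.class": "partial",    # one of: open / partial / closed
    "opencode.link": "https://github.com/example/model",   # placeholder
    "opencode.notes": "Training code released; no data pipeline.",
}

# Print a YAML draft that could be adapted into a /projects/ entry.
print(yaml.safe_dump(entry, sort_keys=False, allow_unicode=True))
```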

-

TL;DR

-

Our paper makes the following contributions:

- -

We find the following recurrent patterns:

- -

We conclude as follows:

-
Openness is not the full solution to the scientific and ethical challenges of conversational text generators. Open data will not mitigate the harmful consequences of thoughtless deployment of large language models, nor the questionable copyright implications of scraping all publicly available data from the internet. However, openness does make original research possible, including efforts to build reproducible workflows and understand the fundamentals of instruction-tuned LLM architectures. Openness also enables checks and balances, fostering a culture of accountability for data and its curation, and for models and their deployment. We hope that our work provides a small step in this direction. -
-

Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. July 19-21, Eindhoven. doi: 10.1145/3571884.3604316 (PDF).

-
+
diff --git a/docs/index.html b/docs/index.html
index 405dbc3..8f8fe8d 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -24,6 +24,8 @@

bigscience-workshopLLM base: BLOOMZ, mT0RL base: xP3§
AmberChat✔︎✔︎✔︎✔︎✔︎✔︎~~✔︎✘~~✘✔︎
LLM360LLM base: AmberRL base: ShareGPT + Evol-Instruct (synthetic)§
+OLMo 7B Instruct✔︎✔︎✔︎✔︎✔︎???✔︎✘✔︎✔︎✔︎~
+AllenAILLM base: OLMo 7BRL base: OpenInstruct§
Open Assistant✔︎✔︎✔︎✔︎✘✔︎✔︎✔︎~✘✘✘✔︎✔︎
LAION-AILLM base: Pythia 12BRL base: OpenAssistant Conversations§
OpenChat 3.5 7B✔︎✘✔︎✘✔︎✔︎~✔︎✔︎✔︎~✘✔︎~
@@ -131,7 +133,7 @@

TL;DR