Missing model card / data sheet with info on pretraining and RLHF datasets #9
At opening-up-chatgpt.github.io we're documenting data sources and degrees of openness along several dimensions for instruction-tuned LLMs. I am looking for information about (1) the pretraining dataset and (2) the RLHF datasets, but have not found any details. The HuggingFace model card says

The release blog post provides no information on this at present.

Comments
Information on the language composition of the pretraining dataset would also be welcome, as there is no mention of the model's multilingual capabilities in the linked blog post.
I would like to work on this project!
Upvote thread
diegolascasas added a commit that referenced this issue on Dec 12, 2023:
* Add MoE and Pipelining support
* Update readme
* Update requirements
* Add faster loading
* Make sliding window optional and add rope_theta with smart default

Co-authored-by: devendrachaplot <[email protected]>
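For context on the last bullet, here is a minimal sketch of what "make sliding window optional" and "rope_theta with smart default" could look like at the configuration level. The field names, the constants, and the helper `effective_rope_theta` are illustrative assumptions, not code taken from the commit or the repository.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelArgs:
    # Hypothetical subset of the model hyperparameters; names and
    # defaults are assumptions for illustration only.
    dim: int = 4096
    n_layers: int = 32
    sliding_window: Optional[int] = None  # None disables sliding-window attention
    rope_theta: Optional[float] = None    # None triggers the "smart default" below

def effective_rope_theta(args: ModelArgs) -> float:
    """Pick a RoPE base frequency when none is given explicitly.

    Assumed heuristic: a model without a sliding window gets a larger
    base (longer effective context), while a sliding-window model keeps
    the conventional 10,000. The exact constants are guesses.
    """
    if args.rope_theta is not None:
        return args.rope_theta
    return 1_000_000.0 if args.sliding_window is None else 10_000.0
```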