
FastText training data #209

Open
msaebi1993 opened this issue Sep 25, 2024 · 0 comments

@msaebi1993

Hi, I was wondering if you could share more details about the training data mix for your fastText model. In your blog post (https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d), you mention using the following sources:

  • Wikipedia
  • web pages cited in Wikipedia (through MegaWika)
  • Small Web RSS feeds (through Kagi)
  • OpenHermes 2.5
  • Semantic Scholar
  • Project Gutenberg
  • OpenWebMath

Specifically, I have the following questions:

  1. Can you please elaborate on the percentage of each source in the final training data mix? Any chance you could share the training data as well?
  2. I see in a recent commit that you're using a new fastText model. Does it use the same training data mix as the one described in the blog? Can you please elaborate on the differences between the two?
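
For context, my current understanding of the setup is something like the sketch below: a binary quality classifier trained with the `fasttext` Python package, where the curated sources above serve as positive examples and random web text as negatives. The labeling scheme, file path, and hyperparameters are my assumptions, not something stated in the blog post — please correct me if the actual setup differs.

```python
# Minimal sketch of a binary "quality" classifier in fastText.
# Assumptions (mine, not from the blog post): positives come from the
# curated sources listed above, negatives from random web text; the
# training file path and hyperparameters are illustrative only.
import fasttext

# fastText supervised format: one example per line, label first, e.g.
#   __label__hq <text from Wikipedia / OpenHermes 2.5 / ...>
#   __label__lq <text sampled from random web pages>
model = fasttext.train_supervised(
    input="quality_train.txt",  # hypothetical path
    lr=0.1,
    epoch=5,
    wordNgrams=2,
    dim=100,
)

# Score a new document; returns the top label and its probability.
labels, probs = model.predict("Example document text to score.", k=1)
print(labels, probs)
```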