Add the major Falcon Models 7b and 40b instruct #37
base: main
Conversation
Change schema.json to reflect falcon 40B models
Sorry for the delay. Thanks for this! Will come back here after the next version comes out (hopefully tomorrow)
"datePublished": "2023-10-22T03:04:42", | ||
"name": "falcon-40b-instruct", | ||
"description": "Falcon-40B-Instruct, based on Falcon-40B, has been fine-tuned on chat and instruct datasets. It offers outstanding performance and is an excellent choice for chat and instruct applications. Falcon-7B-Instruct is part of the Falcon family of language models, known for their exceptional capabilities and openness.", | ||
"author": { |
The author field is intended for the model creator, so in this case it should refer to tiiuae. See other files for examples.
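As a rough sketch of the suggested fix (the name, url, and blurb wording here are illustrative assumptions, and the comments are annotations only, not part of the JSON):

```json
"author": {
  "name": "Technology Innovation Institute",  // the model creator (tiiuae), not the quantizer
  "url": "https://huggingface.co/tiiuae",
  "blurb": "TII is the research institute behind the Falcon family of models."  // illustrative wording
},
```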
"url": "https://huggingface.co/maddes8cht", | ||
"blurb": "Maddes8cht Passionate about Open Source and AI. On Hugging Face he is advocating for real open source AI models with OSI compliant licenses" | ||
}, | ||
"numParameters": "40B", |
Let's actually remove the 40B model from here: people might not realize the resources this model requires, and they won't understand why it isn't working. As a rule of thumb, catalog models should be 13B and below.
"_descriptorVersion": "0.0.1", | ||
"datePublished": "2023-10-31T16:01:50", | ||
"name": "falcon-7b-instruct", | ||
"description": "Falcon-7B-Instruct, based on Falcon-7B, has been fine-tuned on chat and instruct datasets. It offers outstanding performance and is an excellent choice for chat and instruct applications. Falcon-7B-Instruct is part of the Falcon family of language models, known for their exceptional capabilities and openness.", |
I typically generate descriptions by following this process, more or less:
- copy the parts of the original model card with information about the model, the dataset, the training process, prompting nuances, and the license
- feed this information to GPT-4 along with a prompt that looks something like:
"Please generate a short summary of this information. The audience is AI model users with varying technical backgrounds, from non-technical users to ML researchers. The summary must include every important detail in the text above. Use a scientific and dispassionate tone."
Feel free to tweak it.
"files": { | ||
"highlighted": { | ||
"economical": { | ||
"name": "tiiuae-falcon-7b-instruct-Q4_K_M" |
These need to be the same as the filename to be downloaded.
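For example, a minimal sketch in which the highlighted pick carries the full filename of one of the entries under "files", including the .gguf extension (comment is an annotation only):

```json
"highlighted": {
  "economical": {
    "name": "tiiuae-falcon-7b-instruct-Q4_K_M.gguf"  // must match a filename listed under "files"
  }
}
```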
"name": "tiiuae-falcon-7b-instruct-Q2_K.gguf", | ||
"url": "https://huggingface.co/maddes8cht/tiiuae-falcon-7b-instruct-gguf/blob/main/tiiuae-falcon-7b-instruct-Q2_K.gguf", | ||
"sizeBytes": 4025162688, | ||
"quantization": "Q2_K", |
Economical should be Q4_K_S in most cases (it should also match the filename under "files" above).
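A sketch of the matching Q4_K_S file entry, trimmed to the relevant fields; the filename and URL are assumptions that follow the same pattern as the Q2_K file above, and the same name would then be used as the highlighted "economical" pick:

```json
{
  "name": "tiiuae-falcon-7b-instruct-Q4_K_S.gguf",
  "url": "https://huggingface.co/maddes8cht/tiiuae-falcon-7b-instruct-gguf/blob/main/tiiuae-falcon-7b-instruct-Q4_K_S.gguf",
  "quantization": "Q4_K_S"
}
```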
"repositoryUrl": "https://huggingface.co/maddes8cht/tiiuae-falcon-7b-instruct-gguf" | ||
}, | ||
{ | ||
"name": "tiiuae-falcon-7b-instruct-Q4_K_M.gguf", |
No need to include files here that aren't in the highlighted section. The max should be 2.
Add the major Falcon Models, 7b instruct and 40b instruct.
Change schema.json to reflect the Falcon 40B models:
In schema.json, one quantization type is missing from the ModelFile properties definition: there is a Q3_K_L type, so I added it to the enum. The Apache-licensed Falcon models come in 7B and 40B flavours. A 40B option is missing from the numParameters enumeration, so I added "40B" to that enum as well (see the sketch below).
I have a bunch of other interesting and powerful Falcon-based models, notably the WizardLM uncensored versions, that I would like to add to this list.
I will do so if you accept these changes.
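For reference, a rough sketch of what those two enum additions in schema.json could look like. The property names come from the description above; the surrounding enum values are illustrative placeholders, not the full lists from the actual schema, and the comments are annotations only:

```json
"quantization": {
  "type": "string",
  "enum": ["Q2_K", "Q3_K_S", "Q3_K_M", "Q3_K_L", "Q4_K_S", "Q4_K_M"]  // "Q3_K_L" added
},
"numParameters": {
  "type": "string",
  "enum": ["3B", "7B", "13B", "30B", "40B", "65B"]  // "40B" added
}
```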