Attempting to load a vision model with the mllama MLX architecture: https://huggingface.co/mlx-community/Llama-3.2-11B-Vision-Instruct-8bit

Is this meant to be supported on my machine/version? I can't seem to find any docs about it.

🥲 Failed to load the model
Error when loading model: ValueError: Model type mllama not supported.

macOS 15.1, M4 Pro
LM Studio 0.3.5
Runtime: LM Studio MLX 0.0.14

I also have Metal llama.cpp 1.2.0, but I don't believe it is being used.
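For context on the error above: loaders in this ecosystem typically dispatch on the `model_type` field of the model's Hugging Face `config.json` and raise a `ValueError` when no implementation exists for that architecture. A minimal sketch of that kind of check (the supported-type set below is illustrative only, not LM Studio's or mlx-engine's actual list):

```python
import json

# Illustrative set only -- NOT the actual architectures supported by any
# LM Studio / mlx-engine release.
SUPPORTED_MODEL_TYPES = {"llama", "mistral", "qwen2", "phi3"}

def check_model_type(config_json: str) -> str:
    """Read model_type from a config.json payload and raise ValueError
    for architectures the engine has no implementation for."""
    model_type = json.loads(config_json)["model_type"]
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise ValueError(f"Model type {model_type} not supported.")
    return model_type

# mllama (Llama 3.2 Vision) fails this check, mirroring the error above.
try:
    check_model_type('{"model_type": "mllama"}')
except ValueError as err:
    print(err)  # Model type mllama not supported.
```

So the failure is independent of the machine or OS version: until an mllama implementation is added to the engine, the load is rejected up front.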
@chronick This model is not yet supported in the current stable version of LM Studio. See: lmstudio-ai/mlx-engine#5 (comment)

NB: it's also not supported in llama.cpp, so GGUFs of it won't load.
@YorkieDev Thanks for the response. Is there a plan to support this model, or similar ones such as deepseek-vl2, mold, SmolVLM, etc.?