Add complete Megrez-MoE support: GGUF conversion + inference. #17141
base: master
Conversation
Implements complete support for Megrez-MoE (Mixture of Experts) models:
- Add LLM_ARCH_MEGREZ_MOE architecture enum and mappings
- Implement build_mergez_moe_ffn() with sigmoid+bias gating
- Add llm_build_megrez_moe class for full model graph construction
- Support the 31-layer architecture (layer 0: dense FFN, layers 1-30: MoE)
- Implement the expert sharing pattern with 64 experts, 6 used per token, 4 shared
- Load all model hyperparameters and 372 tensors correctly
- Configure the NEOX RoPE type for proper positional encoding

Tested with the Megrez2-3x7B-A3B_Q4_K_M.gguf model. All 39 llama.cpp tests pass, and the output matches the infinigence/llama.cpp reference implementation.

Note: use the --no-warmup flag to avoid a warmup memory allocation issue.
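The hyperparameters listed above map onto llama.cpp's existing MoE metadata keys. As a rough, non-compiling sketch of what the hparams loading could look like (the KV keys follow the conventions used by other MoE architectures such as DeepSeek2; whether this PR uses exactly these keys and fields is an assumption):

```cpp
// Sketch only: loading the Megrez-MoE hyperparameters described above (llama-model.cpp style).
case LLM_ARCH_MEGREZ_MOE:
    {
        ml.get_key(LLM_KV_EXPERT_COUNT,               hparams.n_expert);           // 64 experts
        ml.get_key(LLM_KV_EXPERT_USED_COUNT,          hparams.n_expert_used);      // 6 used per token
        ml.get_key(LLM_KV_EXPERT_SHARED_COUNT,        hparams.n_expert_shared);    // 4 shared
        ml.get_key(LLM_KV_LEADING_DENSE_BLOCK_COUNT,  hparams.n_layer_dense_lead); // layer 0 is dense
        ml.get_key(LLM_KV_EXPERT_FEED_FORWARD_LENGTH, hparams.n_ff_exp);           // per-expert FFN width
    } break;
```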
Megrez-MoE creates many intermediate tensors during MoE FFN construction:
- sigmoid, add, reshape (3x), get_rows, sum_rows, div, view_2d, mul_mat operations
- ggml_top_k internally calls ggml_argsort + ggml_view_4d (2 more tensors per layer)
- each of the 30 MoE layers creates ~35 intermediate tensors during graph construction

During warmup, the graph is built 3 times with different batch sizes, which requires enough memory pool space for all intermediate tensors. Add a 4096-node overhead for LLM_ARCH_MEGREZ_MOE to accommodate them (30 layers × 35 tensors/layer ≈ 1050 nodes, doubled for a safety margin).

This fixes the "not enough space in the context's memory pool" error during warmup, allowing Megrez-MoE to work without the --no-warmup flag.

Tested:
- All 39 tests pass
- Megrez-MoE works with warmup enabled (no crashes)
- Other models (e.g., Gemma-2) are unaffected
- Verified with outputs up to 100 tokens
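For concreteness, here is a minimal sketch of the fixed overhead this commit describes, as it might sit in llama_context::graph_max_nodes() (the model.arch member access and exact placement are assumptions; the 4096 constant and node estimate come from the commit message). Note that the review discussion below ultimately replaces this fixed overhead with a per-architecture multiplier on n_tensors().

```cpp
// Sketch only: reserve extra graph nodes for Megrez-MoE so the warmup graph builds
// fit in the memory pool. 30 MoE layers * ~35 intermediate tensors ≈ 1050 nodes,
// doubled for a safety margin and rounded up to 4096.
uint32_t llama_context::graph_max_nodes() const {
    uint32_t base_nodes = std::max<uint32_t>(1024u, 8u*model.n_tensors());
    if (model.arch == LLM_ARCH_MEGREZ_MOE) {
        base_nodes += 4096u;
    }
    return base_nodes;
}
```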
- Move llm_build_megrez_moe from llama-model.cpp to src/models/megrez-moe.cpp
- Add declaration to src/models/models.h
- Update CMakeLists.txt to include megrez-moe.cpp in the build
- Resolve merge conflicts in llama-arch.cpp and llama-model.cpp
- Fix PANGU_EMBED case statement closing braces

The model loads successfully, all tests pass (40/40), and inference works correctly.
…oe_ffn
- Remove the custom build_mergez_moe_ffn implementation (100+ lines)
- Use the existing build_moe_ffn with LLAMA_EXPERT_GATING_FUNC_TYPE_SIGMOID
- Pre-compute gate logits from pre_gate_hidden (Megrez-MoE's unique gating)
- Pass the pre-computed logits via the probs_in parameter
- Maintain exactly the same behavior and output quality

This addresses review feedback to reuse the existing MoE infrastructure instead of duplicating code. The sigmoid gating with bias applied after activation is already supported by build_moe_ffn.
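To make the "sigmoid gating + bias after activation" concrete, here is a small self-contained C++ sketch of the gating math that build_moe_ffn applies in this mode. Selecting experts by the biased score while weighting them by the unbiased sigmoid probability mirrors how build_moe_ffn treats exp_probs_b for other models; the placeholder values and the final normalization over the selected experts are illustrative assumptions, not Megrez-MoE specifics.

```cpp
// Standalone sketch of sigmoid + bias expert gating (64 experts, 6 used per token,
// per the PR description). Not the actual ggml graph code.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    const int n_expert      = 64;
    const int n_expert_used = 6;

    // Pre-computed gate logits for one token (in the PR these come from pre_gate_hidden).
    std::vector<float> logits(n_expert);
    for (int e = 0; e < n_expert; ++e) {
        logits[e] = std::sin(0.37f * e);  // placeholder values
    }
    // Per-expert score-correction bias (exp_probs_b / e_score_correction_bias).
    std::vector<float> bias(n_expert, 0.0f);
    bias[3] = 0.5f;  // placeholder: nudge expert 3 toward being selected

    // 1) sigmoid gating
    std::vector<float> probs(n_expert);
    for (int e = 0; e < n_expert; ++e) {
        probs[e] = 1.0f / (1.0f + std::exp(-logits[e]));
    }

    // 2) the bias is added *after* the activation and used only for expert selection
    std::vector<int> order(n_expert);
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + n_expert_used, order.end(),
        [&](int a, int b) { return probs[a] + bias[a] > probs[b] + bias[b]; });

    // 3) routing weights use the unbiased probabilities, normalized over the selected experts
    float sum = 0.0f;
    for (int i = 0; i < n_expert_used; ++i) sum += probs[order[i]];
    for (int i = 0; i < n_expert_used; ++i) {
        printf("expert %2d  weight %.4f\n", order[i], probs[order[i]] / sum);
    }
    return 0;
}
```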
- Restore PANGU_EMBED and COGVLM tensor mappings in llama-arch.cpp
- Remove extra blank line in llama-context.cpp
Restore HunYuanMoE code to upstream version - no modifications needed
```diff
 uint32_t llama_context::graph_max_nodes() const {
-    return std::max<uint32_t>(1024u, 8u*model.n_tensors());
+    uint32_t base_nodes = std::max<uint32_t>(1024u, 8u*model.n_tensors());
```
```diff
-    uint32_t base_nodes = std::max<uint32_t>(1024u, 8u*model.n_tensors());
+    uint32_t factor = model.arch == LLM_ARCH_MEGREZ_MOE ? 9u : 8u;
+    uint32_t base_nodes = std::max<uint32_t>(1024u, factor * model.n_tensors());
```
increase the 9u if needed
Applied your suggestion.
Thanks!
I fixed it.
replace your code with the one I suggested
```cpp
    // Layer 0
    {
```
prevent duplicating this code block if possible, merge it into the for loop
I kept the Layer 0 code separate for now. While merging would reduce duplication slightly, the current structure is clearer and separates the dense layer (layer 0) from the MoE layers (1-30). The duplication is minimal, and the readability benefit outweighs the gain from consolidation.
I don't see why this can't be merged with the loop below.
Your code below has ((uint32_t) il < hparams.n_layer_dense_lead), which literally translates to "the first n_layer_dense_lead layers are dense, non-MoE layers".
Unless you think otherwise, you should refactor your code to make it clearer.
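For reference, the merged structure being suggested would look roughly like the pattern other MoE models in llama.cpp follow. This is a structural sketch only; the attention block and the build_ffn/build_moe_ffn arguments are elided, and the exact call signatures are assumptions.

```cpp
// Sketch only: fold the dense layer 0 into the main layer loop and branch on
// n_layer_dense_lead, instead of keeping a separate "Layer 0" block.
for (int il = 0; il < n_layer; ++il) {
    // ... attention block for layer il ...

    if ((uint32_t) il < hparams.n_layer_dense_lead) {
        // leading dense layer(s): plain FFN
        // cur = build_ffn(cur, model.layers[il].ffn_up, /* ... */, il);
    } else {
        // MoE layers: 64 experts, 6 used per token, plus the shared experts
        // cur = build_moe_ffn(cur, model.layers[il].ffn_gate_inp, /* ... */, il);
    }

    // ... residual add, output of layer il ...
}
```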
- Use an explicit expert_layer_stride variable instead of a hard-coded expression
- Apply clang-format to ensure consistent code style
- Fix trailing whitespace issues
ngxson left a comment
Honestly, I think this PR is taking a lot of maintainer time to review.
Many parts of the code are not clean; I suggest you take a deeper look at how other models structure their code and follow the existing pattern.
Architecture

Changes
- Conversion (convert_hf_to_gguf.py): MegrezMoEModel class
  - hidden_size × 2.75 = 5632
  - e_score_correction_bias → exp_probs_b
- Architecture (constants.py): MODEL_TENSORS.MEGREZ_MOE
- Inference (llama.cpp): MoE FFN + graph memory fix

Testing
- Real model: Infinigence/Megrez2-3x7B-A3B (13.92 GB → 14 GB GGUF)
- Conversion: 372 tensors, all parameters correct
- Inference: Coherent output, 5.77 tok/s
- All 40 tests pass
Example
```sh
python3 convert_hf_to_gguf.py models/Megrez2-3x7B-A3B/
./build/bin/llama-cli -m megrez2-3x7b-a3b-f16.gguf -p "Hello" -n 50
```