
@bradhilton
Collaborator

- Changed the base model in `yes-no-maybe.ipynb` to `Qwen/Qwen3-30B-A3B-Instruct`.
- Disabled loading in 4-bit mode in `get_model_config.py` for better compatibility.
- Adjusted target modules for Qwen3 MoE models to avoid unsupported LoRA weights.
- Enhanced the `openai_server_task` in `server.py` to ensure missing fields in LoRA requests are handled gracefully.
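The last point — tolerating missing fields in incoming LoRA requests — can be sketched roughly as below. This is a minimal illustration, not the actual `server.py` code: the field names (`lora_name`, `lora_path`, `base_model_name`) and the `normalize_lora_request` helper are assumptions made up for the example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LoraRequest:
    # Hypothetical request shape; field names are illustrative, not the
    # real schema used by server.py.
    lora_name: str
    lora_path: str = ""
    base_model_name: Optional[str] = None


def normalize_lora_request(payload: dict) -> LoraRequest:
    """Build a LoraRequest from a raw payload, filling in missing fields
    with safe defaults instead of raising KeyError."""
    return LoraRequest(
        lora_name=payload.get("lora_name", "default"),
        lora_path=payload.get("lora_path", ""),
        base_model_name=payload.get("base_model_name"),
    )
```

With this pattern, a request that omits optional fields still produces a valid object rather than a server error.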