
[QUESTION] Support for Heterogeneous Parallelism in Multimodal Training #1375

Open
swiftomkar opened this issue Feb 4, 2025 · 0 comments

@swiftomkar

I have been using Megatron-LM to train multimodal models and have successfully followed the example under examples/multimodal. However, for efficient training, multimodal models often require a different parallelism strategy for each component, since the vision encoder is typically much smaller than the LLM in such setups.

Does Megatron-LM support heterogeneous parallelism strategies, where different models within a multimodal system can use distinct parallelization techniques? If not, are there any recommended workarounds? A rough sketch of the kind of setup I have in mind is below.
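For context, here is a minimal sketch of the workaround I have been considering, built on plain torch.distributed process groups rather than any Megatron-LM API. The world size, group sizes, and layout are just illustrative assumptions: the LLM is sharded with tensor parallelism of size 4 while the vision encoder stays purely data-parallel.

```python
import torch.distributed as dist

def build_groups(world_size: int = 8, llm_tp_size: int = 4):
    """Hypothetical sketch: separate process groups per submodule so the
    LLM and the vision encoder can use different parallelism layouts."""
    rank = dist.get_rank()

    # LLM tensor-parallel groups, e.g. ranks [0-3] and [4-7] for world_size=8.
    # Every rank must call new_group() for every group, in the same order.
    llm_tp_group = None
    for start in range(0, world_size, llm_tp_size):
        ranks = list(range(start, start + llm_tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            llm_tp_group = group

    # Vision encoder: no tensor parallelism, one data-parallel group
    # spanning all ranks (the encoder is replicated everywhere).
    vision_dp_group = dist.new_group(list(range(world_size)))

    return llm_tp_group, vision_dp_group

if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=8 this_script.py
    dist.init_process_group(backend="nccl")
    llm_tp_group, vision_dp_group = build_groups(dist.get_world_size())
```

Of course, creating the groups is the easy part; each submodule's forward/backward collectives and optimizer state would also have to be routed through the right group, which is why first-class support in Megatron-LM would be much preferable to hand-rolling this.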
