I have been using MegatronLM to train multimodal models and successfully followed the example under examples/multimodal. However, for efficient training, multimodal models often require different parallelism strategies for each component, as vision models are typically smaller than the LLM in such setups.
Does MegatronLM support heterogeneous parallelism strategies, where different models within a multimodal system can use distinct parallelization techniques? If not, are there any recommended workarounds?
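For context on what "heterogeneous parallelism" would look like here: recent Megatron-LM versions expose encoder-specific parallelism arguments that, if available in your checkout, let the vision encoder use smaller TP/PP sizes than the LLM. The flag names below are an assumption based on those encoder-specific arguments, not something verified against this repo's examples; check `megatron/training/arguments.py` in your version before relying on them.

```shell
# Hypothetical launch fragment (flag names are an assumption): the LLM uses
# TP=4 / PP=2 while the smaller vision encoder runs with TP=1 / PP=1.
torchrun --nproc-per-node 8 examples/multimodal/train.py \
    --tensor-model-parallel-size 4 \
    --pipeline-model-parallel-size 2 \
    --encoder-tensor-model-parallel-size 1 \
    --encoder-pipeline-model-parallel-size 1 \
    # ...remaining model/data arguments as in the example scripts
```

If your Megatron-LM version lacks these arguments, the usual workaround is to wrap the vision encoder so it is replicated (data-parallel only) while the LLM keeps its TP/PP layout.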