While running on the Colab free tier with a T4 GPU, I got the following error:

ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla T4 GPU has compute capability 7.5. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.

I tried using `--dtype`, but it's not working, and I cannot locate this argument in any of the scripts either.
Hi, I ran into the same issue. As a temporary fix, you can go to math_eval.py and, at line 90, pass an additional dtype="float16" argument to the LLM constructor, as shown below.
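For reference, this is what the patched call looks like — a minimal sketch that assumes the `args.model_name_or_path` and `available_gpus` variables already defined around that point in math_eval.py:

```python
from vllm import LLM

# Force float16 so vLLM does not fall back to the model's default bfloat16,
# which the T4 (compute capability 7.5) does not support.
llm = LLM(
    model=args.model_name_or_path,
    tensor_parallel_size=len(available_gpus),
    trust_remote_code=True,
    dtype="float16",  # vLLM also accepts "half" as an alias
)
```

One caveat: float16 has a narrower exponent range than bfloat16, so scores may differ slightly from runs on GPUs that use the model's native precision.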