Commit 194fb44 — linting error fixes and rebase fix (1 parent: 8cf5d71)
2 files changed: 0 additions, 4 deletions

py/torch_tensorrt/dynamo/_compiler.py (0 additions, 3 deletions)

```diff
@@ -516,11 +516,8 @@ def compile(
         enable_weight_streaming (bool): Enable weight streaming.
         tiling_optimization_level (str): The optimization level of tiling strategies. A higher level allows TensorRT to spend more time searching for better tiling strategy. We currently support ["none", "fast", "moderate", "full"].
         l2_limit_for_tiling (int): The target L2 cache usage limit (in bytes) for tiling optimization (default is -1 which means no limit).
-<<<<<<< HEAD
         offload_module_to_cpu (bool): Offload the module to CPU. This is useful when we need to minimize GPU memory usage.
-=======
         use_distributed_mode_trace (bool): Using aot_autograd to trace the graph. This is enabled when DTensors or distributed tensors are present in distributed model
->>>>>>> c3b62d239 (TensorRT-LLM import fix and aot_joint_export specify as explicit setting in dynamo.compile)
         **kwargs: Any,
     Returns:
         torch.fx.GraphModule: Compiled FX Module, when run it will execute via TensorRT
```
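The docstring lines above document keyword arguments of `torch_tensorrt.dynamo.compile`. As a minimal sketch (not a runnable compile call, which needs a GPU and TensorRT), here is a hypothetical settings dict mirroring those documented parameters; the values shown are illustrative defaults, not taken from the source:

```python
# Hypothetical settings mirroring the keyword arguments documented in the
# compile() docstring above. Values are illustrative assumptions, not the
# library's actual defaults.
compile_kwargs = {
    "enable_weight_streaming": False,        # enable weight streaming
    "tiling_optimization_level": "none",     # one of "none", "fast", "moderate", "full"
    "l2_limit_for_tiling": -1,               # target L2 cache usage in bytes; -1 = no limit
    "offload_module_to_cpu": False,          # offload module to CPU to minimize GPU memory
    "use_distributed_mode_trace": False,     # use aot_autograd tracing for DTensor models
}

# Basic sanity check on the tiling level, per the docstring's allowed values.
assert compile_kwargs["tiling_optimization_level"] in ("none", "fast", "moderate", "full")
```

In an actual call these would be passed as `torch_tensorrt.dynamo.compile(module, inputs, **compile_kwargs)`, assuming that entry point is available in the installed version.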

py/torch_tensorrt/dynamo/conversion/converter_utils.py (0 additions, 1 deletion)

```diff
@@ -1048,4 +1048,3 @@ def promote_trt_tensors_to_same_dtype(
     rhs_cast = cast_trt_tensor(ctx, rhs, promoted_dtype, f"{name_prefix}rhs_cast")

     return lhs_cast, rhs_cast
-
```
