cuTENSOR: Destroy plan description and preference after construction. #2794
Conversation
Your PR requires formatting changes to meet the project's style guidelines.

```diff
diff --git a/lib/cutensor/src/operations.jl b/lib/cutensor/src/operations.jl
index 0c4f3d276..6dea1325b 100644
--- a/lib/cutensor/src/operations.jl
+++ b/lib/cutensor/src/operations.jl
@@ -104,7 +104,7 @@ function plan_elementwise_trinary(
     plan_pref = Ref{cutensorPlanPreference_t}()
     cutensorCreatePlanPreference(handle(), plan_pref, algo, jit)
-    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref=workspace)
+    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref = workspace)
     cutensorDestroyOperationDescriptor(desc[])
     cutensorDestroyPlanPreference(plan_pref[])
     return plan
@@ -189,7 +189,7 @@ function plan_elementwise_binary(
     plan_pref = Ref{cutensorPlanPreference_t}()
     cutensorCreatePlanPreference(handle(), plan_pref, algo, jit)
-    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref=workspace)
+    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref = workspace)
     cutensorDestroyOperationDescriptor(desc[])
     cutensorDestroyPlanPreference(plan_pref[])
     return plan
@@ -259,7 +259,7 @@ function plan_permutation(
     plan_pref = Ref{cutensorPlanPreference_t}()
     cutensorCreatePlanPreference(handle(), plan_pref, algo, jit)
-    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref=workspace)
+    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref = workspace)
     cutensorDestroyOperationDescriptor(desc[])
     cutensorDestroyPlanPreference(plan_pref[])
     return plan
@@ -349,7 +349,7 @@ function plan_contraction(
     plan_pref = Ref{cutensorPlanPreference_t}()
     cutensorCreatePlanPreference(handle(), plan_pref, algo, jit)
-    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref=workspace)
+    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref = workspace)
     cutensorDestroyOperationDescriptor(desc[])
     cutensorDestroyPlanPreference(plan_pref[])
     return plan
@@ -427,7 +427,7 @@ function plan_reduction(
     plan_pref = Ref{cutensorPlanPreference_t}()
     cutensorCreatePlanPreference(handle(), plan_pref, algo, jit)
-    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref=workspace)
+    plan = CuTensorPlan(desc[], plan_pref[]; workspacePref = workspace)
     cutensorDestroyOperationDescriptor(desc[])
     cutensorDestroyPlanPreference(plan_pref[])
     return plan
```
This should fix things, but I wonder if we should add an option to pass a plan in, so that users can reuse a contraction plan?
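A rough sketch of what such an option could look like. The `plan` keyword, the `contract!` wrapper name, and the argument list here are all hypothetical, not part of the current API:

```julia
# Hypothetical sketch: an optional `plan` keyword so callers can reuse a
# previously constructed CuTensorPlan across identical contractions.
function contract!(α, A, Ainds, B, Binds, β, C, Cinds;
                   plan::Union{CuTensorPlan, Nothing} = nothing, kwargs...)
    # Only build a fresh plan (descriptor + preference churn) when the
    # caller did not supply one.
    actual_plan = plan === nothing ?
        plan_contraction(A, Ainds, B, Binds, C, Cinds; kwargs...) : plan
    # ... launch the contraction using `actual_plan` ...
    # Free the plan only if we created it; caller-owned plans stay alive.
    plan === nothing && CUDA.unsafe_free!(actual_plan)
    return C
end
```

The key design point is ownership: a caller-supplied plan is never freed by the wrapper, so it can be reused across many calls with the same shapes and modes.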
Probably; the API is currently quite happy to re-create a bunch of state over and over. For now this is the quick fix, though. Feel free to add options; maybe @lkdvos can suggest some based on his usage in TensorOperations.jl.
ITensor uses these a lot as well, cc @mtfishman
I don't think we're using that right now.
Thanks for investigating and fixing this. I ran the script again and the memory differences now look more like reasonable fluctuations caused by other processes on my machine (note the difference in magnitude as well). At least for TensorOperations, plans are not something I ever made use of either. I would argue that if you really want the control of reusing plans, you could technically do that already, possibly having to re-wrap the implementations a little. Anyway, looks resolved to me, thank you!
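To illustrate the "re-wrap the implementations" route with today's building blocks: the sketch below assumes the `plan_contraction`/`contraction!` entry points in cuTENSOR.jl accept a precomputed plan; the exact argument lists and keyword names are assumptions and may differ from the real API:

```julia
# Sketch (assumed signatures): build one plan up front, then reuse it for
# repeated contractions over operands with identical shapes and modes.
plan = plan_contraction(A, Ainds, opA, B, Binds, opB, C, Cinds, opC, opOut)
for step in 1:nsteps
    # Reusing the plan skips the per-call descriptor/preference
    # construction that this PR now cleans up after.
    contraction!(α, A, Ainds, opA, B, Binds, opB, β, C, Cinds, opC, opOut;
                 plan = plan)
end
CUDA.unsafe_free!(plan)  # release the plan once the loop is done
```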
Should fix https://github.com/JuliaGPU/CUDA.jl/issues
@lkdvos Can you confirm?