From b5c01981e68dbeb38e0f7e6c52609b875b567c4e Mon Sep 17 00:00:00 2001
From: MRIDUL JAIN <105979087+Spinachboul@users.noreply.github.com>
Date: Fri, 3 May 2024 22:01:09 +0530
Subject: [PATCH 1/5] Create SimpleNonlinearSolve_Kernel_Tutorial.md
---
.../SimpleNonlinearSolve_Kernel_Tutorial.md | 83 +++++++++++++++++++
1 file changed, 83 insertions(+)
create mode 100644 docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
diff --git a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
new file mode 100644
index 000000000..8f9761909
--- /dev/null
+++ b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
@@ -0,0 +1,83 @@
+# Using SimpleNonlinearSolve with KernelAbstractions.jl
+
+We'll demonstrate how to leverage [SimpleNonlinearSolve.jl](https://github.com/SciML/SimpleNonlinearSolve.jl) inside kernels using [KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl). This allows for efficient solving of very small nonlinear systems on GPUs by avoiding allocations and dynamic dispatch overhead. We'll use the generalized Rosenbrock problem as an example and solve it for multiple initial conditions on various GPU architectures.
+
+### Prerequisites
+Ensure the following packages are installed:
+- Julia (v1.6 or later)
+- NonlinearSolve.jl
+- StaticArrays.jl
+- KernelAbstractions.jl
+- CUDA.jl (for NVIDIA GPUs)
+- AMDGPU.jl (for AMD GPUs)
+
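+The exact setup depends on your hardware. As a rough sketch (you only need the GPU package for your own vendor), the environment used below could be created with:
+
+```julia
+using Pkg
+Pkg.add(["NonlinearSolve", "StaticArrays", "KernelAbstractions", "CUDA", "AMDGPU"])
+```
+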
+## Writing the Kernel
+Define a kernel using '@kernel' from 'KernelAbstractions.jl' to solve a single initial condition.
+
+```@example kenel
+using NonlinearSolve, StaticArrays
+using KernelAbstractions, CUDA, AMDGPU
+
+@kernel function parallel_nonlinearsolve_kernel!(result, @Const(prob), @Const(alg))
+    # Each work-item solves the problem for one initial condition.
+    i = @index(Global)
+    # `remake` swaps in the i-th initial guess without rebuilding the problem.
+    prob_i = remake(prob; u0 = prob.u0[i])
+    sol = solve(prob_i, alg)
+    # Store only the solution vector; `sol.u` is a statically sized SVector.
+    @inbounds result[i] = sol.u
+end
+```
+
+## Vectorized Solving
+Define a function that allocates the result array on the chosen backend, launches the kernel over all initial conditions in parallel, and synchronizes before returning the solutions.
+
+```@example kernel
+function vectorized_solve(prob, alg; backend = CPU())
+    # Allocate the output on the chosen backend (CPU, CUDA, ROCm, ...),
+    # one solution slot per initial condition.
+    result = KernelAbstractions.allocate(backend, eltype(prob.u0), length(prob.u0))
+    groupsize = min(length(prob.u0), 1024)
+    # Instantiate the kernel for this backend, workgroup size, and ndrange, then launch it.
+    kernel! = parallel_nonlinearsolve_kernel!(backend, groupsize, length(prob.u0))
+    kernel!(result, prob, alg)
+    # Kernel launches are asynchronous; wait for completion before returning.
+    KernelAbstractions.synchronize(backend)
+    return result
+end
+```
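+
+If the instantiate-then-launch pattern above is unfamiliar, here is a minimal, self-contained sketch of the same KernelAbstractions.jl workflow on a trivial doubling kernel (the kernel and array names are purely illustrative, not part of the solver code):
+
+```@example kernel
+@kernel function double_kernel!(y, @Const(x))
+    i = @index(Global)
+    @inbounds y[i] = 2 * x[i]
+end
+
+x = KernelAbstractions.allocate(CPU(), Float64, 16)
+fill!(x, 1.0)
+y = similar(x)
+# Instantiate for a backend, workgroup size, and ndrange, then launch and synchronize.
+double_kernel!(CPU(), 16, length(x))(y, x)
+KernelAbstractions.synchronize(CPU())
+y
+```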
+
+## Define the Rosenbrock Function
+Define the generalized Rosenbrock residual as a `@generated` function so that, for an `SVector` of length `N`, the computation unrolls into statically sized, allocation-free code that can run inside a GPU kernel.
+
+```@example kernel
+@generated function generalized_rosenbrock(x::SVector{N}, p) where {N}
+    # Unroll the residual into N scalar assignments so the result is a
+    # statically sized SVector with no runtime loops or allocations.
+    vals = ntuple(i -> gensym(string(i)), N)
+    expr = []
+    push!(expr, :($(vals[1]) = oneunit(x[1]) - x[1]))
+    for i in 2:N
+        push!(expr, :($(vals[i]) = 10.0 * (x[$i] - x[$i - 1] * x[$i - 1])))
+    end
+    push!(expr, :(@SVector [$(vals...)]))
+    return Expr(:block, expr...)
+end
+```
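+
+As a quick sanity check (a sketch, not part of the original code): every residual component vanishes at the vector of all ones, so that is the root the solvers should recover.
+
+```@example kernel
+# f1 = 1 - x1 and fi = 10 * (xi - x(i-1)^2) are all zero at x = (1, ..., 1).
+generalized_rosenbrock(@SVector(ones(4)), nothing)
+```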
+
+## Define the Problem
+Create the nonlinear problem from the generalized Rosenbrock function and a batch of 1024 random 10-dimensional initial conditions, stored as an `SVector` of `SVector`s so everything stays statically sized.
+
+```@example kernel
+u0 = @SVector [@SVector(rand(10)) for _ in 1:1024]
+prob = NonlinearProblem(generalized_rosenbrock, u0)
+```
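+
+Here `u0` is a statically sized vector of 1024 ten-dimensional initial guesses, so each `prob.u0[i]` accessed inside the kernel is itself an `SVector`:
+
+```@example kernel
+length(u0), typeof(u0[1])
+```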
+
+## Solve the Problem
+Solve the problem using SimpleNonlinearSolve.jl on different GPU architectures.
+
+```@example kernel
+# Threaded CPU
+vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
+
+# AMD ROCM GPU
+vectorized_solve(prob, SimpleNewtonRaphson(); backend = ROCBackend())
+
+# NVIDIA CUDA GPU
+vectorized_solve(prob, SimpleNewtonRaphson(); backend = CUDABackend())
+```
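+
+As a rough correctness check (a sketch, assuming all of the Newton iterations converged), the CPU results should all sit close to the known root at the vector of ones:
+
+```@example kernel
+sols = vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
+# Largest deviation of any solution component from 1.
+maximum(maximum(abs, s .- 1) for s in sols)
+```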
+
+## Conclusion
+This tutorial illustrated how to utilize SimpleNonlinearSolve.jl inside kernels using KernelAbstractions.jl, enabling efficient solving of small nonlinear systems on GPUs for applications requiring parallel processing and high performance.
From 52b08b1023deb7820055bd44f41d7a8ad6b48c7b Mon Sep 17 00:00:00 2001
From: MRIDUL JAIN <105979087+Spinachboul@users.noreply.github.com>
Date: Fri, 3 May 2024 22:55:16 +0530
Subject: [PATCH 2/5] Update SimpleNonlinearSolve_Kernel_Tutorial.md
---
docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
index 8f9761909..6fad635df 100644
--- a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
+++ b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
@@ -14,7 +14,7 @@ Ensure the following packages are installed:
## Writing the Kernel
Define a kernel using '@kernel' from 'KernelAbstractions.jl' to solve a single initial condition.
-```@example kenel
+```@example kernel
using NonlinearSolve, StaticArrays
using KernelAbstractions, CUDA, AMDGPU
From 964b8d1900848e42a05b05d59b77e4d6de93fc1a Mon Sep 17 00:00:00 2001
From: MRIDUL JAIN <105979087+Spinachboul@users.noreply.github.com>
Date: Fri, 3 May 2024 23:01:52 +0530
Subject: [PATCH 3/5] Update SimpleNonlinearSolve_Kernel_Tutorial.md
---
docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
index 6fad635df..a0fc96b13 100644
--- a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
+++ b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
@@ -12,7 +12,7 @@ Ensure the following packages are installed:
- AMDGPU.jl (for AMD GPUs)
## Writing the Kernel
-Define a kernel using '@kernel' from 'KernelAbstractions.jl' to solve a single initial condition.
+Define a kernel using **`@kernel`** from **`KernelAbstractions.jl`** to solve a single initial condition.
```@example kernel
using NonlinearSolve, StaticArrays
@@ -66,7 +66,7 @@ prob = NonlinearProblem(generalized_rosenbrock, u0)
```
## Solve the Problem
-Solve the problem using SimpleNonlinearSolve.jl on different GPU architectures.
+Solve the problem with **SimpleNonlinearSolve.jl** on the CPU and on different GPU backends.
```@example kernel
# Threaded CPU
@@ -80,4 +80,4 @@ vectorized_solve(prob, SimpleNewtonRaphson(); backend = CUDABackend())
```
## Conclusion
-This tutorial illustrated how to utilize SimpleNonlinearSolve.jl inside kernels using KernelAbstractions.jl, enabling efficient solving of small nonlinear systems on GPUs for applications requiring parallel processing and high performance.
+This tutorial showed how to call **SimpleNonlinearSolve.jl** inside **KernelAbstractions.jl** kernels, solving many small nonlinear systems in parallel on the CPU and on GPUs with no allocations or dynamic dispatch overhead.
From cbbb82e17cfd89717620623d20081473c3320d41 Mon Sep 17 00:00:00 2001
From: MRIDUL JAIN <105979087+Spinachboul@users.noreply.github.com>
Date: Sat, 4 May 2024 00:41:15 +0530
Subject: [PATCH 4/5] Update SimpleNonlinearSolve_Kernel_Tutorial.md
---
.../tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
index a0fc96b13..34ce2c001 100644
--- a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
+++ b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
@@ -1,10 +1,10 @@
-# Using SimpleNonlinearSolve with KernelAbstractions.jl
+# Using Nonlinear Solvers inside GPU Kernels
We'll demonstrate how to leverage [SimpleNonlinearSolve.jl](https://github.com/SciML/SimpleNonlinearSolve.jl) inside kernels using [KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl). This allows for efficient solving of very small nonlinear systems on GPUs by avoiding allocations and dynamic dispatch overhead. We'll use the generalized Rosenbrock problem as an example and solve it for multiple initial conditions on various GPU architectures.
### Prerequisites
Ensure the following packages are installed:
-- Julia (v1.6 or later)
+- Julia (v1.10 or later)
- NonlinearSolve.jl
- StaticArrays.jl
- KernelAbstractions.jl
@@ -70,13 +70,13 @@ Solve the problem using **SimpleNonlinearSolve.jl** on different GPU architectur
```@example kernel
# Threaded CPU
-vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
+# vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
# AMD ROCM GPU
-vectorized_solve(prob, SimpleNewtonRaphson(); backend = ROCBackend())
+# vectorized_solve(prob, SimpleNewtonRaphson(); backend = ROCBackend())
# NVIDIA CUDA GPU
-vectorized_solve(prob, SimpleNewtonRaphson(); backend = CUDABackend())
+# vectorized_solve(prob, SimpleNewtonRaphson(); backend = CUDABackend())
```
## Conclusion
From b3443ffb70fca7fcccb3f2110b3ffb678da4809a Mon Sep 17 00:00:00 2001
From: MRIDUL JAIN <105979087+Spinachboul@users.noreply.github.com>
Date: Sat, 4 May 2024 07:30:57 +0530
Subject: [PATCH 5/5] Update SimpleNonlinearSolve_Kernel_Tutorial.md
---
docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
index 34ce2c001..07761d7a9 100644
--- a/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
+++ b/docs/src/tutorials/SimpleNonlinearSolve_Kernel_Tutorial.md
@@ -70,7 +70,7 @@ Solve the problem using **SimpleNonlinearSolve.jl** on different GPU architectur
```@example kernel
# Threaded CPU
-# vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
+vectorized_solve(prob, SimpleNewtonRaphson(); backend = CPU())
# AMD ROCM GPU
# vectorized_solve(prob, SimpleNewtonRaphson(); backend = ROCBackend())