diff --git a/previews/PR551/.documenter-siteinfo.json b/previews/PR551/.documenter-siteinfo.json
index 301aa192..71d3ffc2 100644
--- a/previews/PR551/.documenter-siteinfo.json
+++ b/previews/PR551/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2025-01-08T12:43:39","documenter_version":"1.8.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2025-01-09T12:16:49","documenter_version":"1.8.0"}}
\ No newline at end of file
diff --git a/previews/PR551/api/index.html b/previews/PR551/api/index.html
index f1e86892..d131095c 100644
--- a/previews/PR551/api/index.html
+++ b/previews/PR551/api/index.html
@@ -7,16 +7,16 @@
A = ones(1024)
B = rand(1024)
vecadd(CPU(), 64)(A, B, ndrange=size(A))
synchronize(backend)
@kernel config function f(args) end
This allows for two different configurations:
cpu={true, false}: Disables code-generation of the CPU function. This relaxes semantics such that KernelAbstractions primitives can be used in non-kernel functions.
inbounds={false, true}: Forces an @inbounds macro around the function definition, for the case where the user already has many @inbounds annotations in their kernel. Note that this can lead to incorrect results, crashes, etc., and is fundamentally unsafe. Be careful!
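A minimal sketch of the configuration syntax (vecadd! is an illustrative kernel name, not taken from this page):

```julia
using KernelAbstractions

# cpu=false skips CPU code generation; inbounds=true forces @inbounds
# around the whole body -- unsafe unless every access is known to be in range.
@kernel cpu=false inbounds=true function vecadd!(a, @Const(b))
    I = @index(Global)
    a[I] += b[I]
end
```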
@Const is an argument annotation that asserts that the memory referenced by A is not written to as part of the kernel and does not alias any other memory in the kernel.
Danger
Violating those constraints will lead to arbitrary behaviour.
As an example, given a kernel signature kernel(A, @Const(B)), you are not allowed to call the kernel with kernel(A, A) or kernel(A, view(A, :)).
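A sketch of a valid and two forbidden invocations (kernel and array names are illustrative):

```julia
using KernelAbstractions

@kernel function vecadd!(a, @Const(b))
    I = @index(Global)
    a[I] += b[I]
end

A = ones(1024)
B = rand(1024)
k = vecadd!(CPU(), 64)
k(A, B, ndrange=size(A))            # OK: A and B are distinct allocations
# k(A, A, ndrange=size(A))          # forbidden: A aliases the @Const argument
# k(A, view(A, :), ndrange=size(A)) # forbidden: the view aliases A
```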
The @index macro can be used to give you the index of a workitem within a kernel function. It supports both the production of a linear index or a cartesian index. A cartesian index is a general N-dimensional index that is derived from the iteration space.
Index granularity
Global: Used to access global memory.
Group: The index of the workgroup.
Local: The index within the workgroup.
Index kind
Linear: Produces an Int64 that can be used to linearly index into memory.
Cartesian: Produces a CartesianIndex{N} that can be used to index into memory.
NTuple: Produces an NTuple{N} that can be used to index into memory.
If the index kind is not provided, it defaults to Linear; this is subject to change.
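The three index kinds side by side, in a hypothetical 2-D copy kernel (names are illustrative):

```julia
using KernelAbstractions

@kernel function copy2d!(dst, @Const(src))
    L    = @index(Global, Linear)     # Int64 linear index
    CI   = @index(Global, Cartesian)  # CartesianIndex{2} for this 2-D ndrange
    i, j = @index(Global, NTuple)     # NTuple{2} of integers
    dst[CI] = src[i, j]               # CI and (i, j) address the same element
end
```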
Declare storage that is local to each item in the workgroup. This can be safely used across @synchronize statements. On a CPU, this will allocate additional implicit dimensions to ensure correct localization.
For storage that only persists between @synchronize statements, an MArray can be used instead.
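Assuming this passage describes the @private macro, a sketch of per-workitem storage surviving a @synchronize barrier (kernel and variable names are illustrative):

```julia
using KernelAbstractions

@kernel function stash!(out, @Const(a))
    p = @private eltype(a) (1,)  # one slot per workitem
    I = @index(Global, Linear)
    p[1] = a[I]
    @synchronize                 # p keeps its value across the barrier
    out[I] = p[1]
end
```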
After a @synchronize statement all reads and writes to global and local memory from each thread in the workgroup are visible to all other threads in the workgroup.
After a @synchronize statement all reads and writes to global and local memory from each thread in the workgroup are visible to all other threads in the workgroup. cond is not allowed to have any visible side effects.
Platform differences
GPU: This synchronization will only occur if cond evaluates to true.
expr is evaluated outside the workitem scope. This is useful for variable declarations that span workitems, or are reused across @synchronize statements.
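Assuming this passage describes the @uniform macro, a sketch of hoisting a value out of workitem scope (names are illustrative):

```julia
using KernelAbstractions

@kernel function normalize!(a, @Const(b))
    N = @uniform length(b)      # evaluated once, outside the workitem scope
    I = @index(Global, Linear)
    a[I] = b[I] / N
end
```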
Query the workgroupsize on the backend. This function returns a tuple corresponding to the kernel configuration. To obtain the total size, use prod(@groupsize()).
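For example, a kernel can recover the flat number of workitems per workgroup as suggested above (kernel name is illustrative):

```julia
using KernelAbstractions

@kernel function tag_groups!(out)
    I = @index(Global, Linear)
    per_group = prod(@groupsize())  # total workitems in one workgroup
    out[I] = per_group
end
```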