## Description

This issue lists functionalities and feature requests for `BlockSparseArray`.

```julia
using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray
a = BlockSparseArray{Float64}([2, 3], [2, 3]);
```
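
As context for the items below, here is a minimal usage sketch of the array `a` defined above. It only uses operations that are discussed elsewhere in this issue (blockwise assignment, broadcasted assignment into a block, and block views); treat it as illustrative rather than as a specification of the API.

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3])
a[Block(1, 1)] = randn(2, 2)  # store the (1, 1) block (the sizes must match: 2×2)
a[Block(2, 2)] .= 1           # store and fill the (2, 2) block
b = @view a[Block(2, 1)]      # view of a block that has not been stored yet
```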

## Bugs

- `LinearAlgebra.Adjoint{T,<:BlockSparseArray{T}}` returns a `BlockedArray` in certain slicing operations ([BlockSparseArrays] BlockSparseArray functionality #2).
- Fix `cat(::BlockSparseArray, ::BlockSparseArray)` for dual axes (followup to [BlockSparseArrays] Direct sum/cat ITensors.jl#1579); for the non-dual case that already works, see the sketch after this list.
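
For reference on the `cat` item above, the following is a sketch of the non-dual case that [BlockSparseArrays] Direct sum/cat ITensors.jl#1579 is described as implementing via `Base.cat` overloads. The keyword form and the block structure noted in the comment are my reading of that description, not a confirmed API contract.

```julia
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3])
b = BlockSparseArray{Float64}([2, 3], [2, 3])

# Blockwise direct sum: expected to give block sizes ([2, 3, 2, 3], [2, 3, 2, 3]),
# with the stored blocks of `a` and `b` placed on the block diagonal.
c = cat(a, b; dims=(1, 2))
```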

## Feature requests

- Allow the syntax `BlockSparseArray{Float64}([U1(0) => 2, U1(1) => 3], [U1(0) => 2, U1(1) => 3])`, which could implicitly create axes with `GradedUnitRange` internally.
- Use `NestedPermutedDimsArray` instead of `SparsePermutedDimsArrayBlocks` (similar to how we are removing `SparseAdjointBlocks`/`SparseTransposeBlocks` in [BlockSparseArrays] Simplifications of `blocks` for block sparse `Adjoint` and `Transpose` ITensors.jl#1580). Started in [NDTensors] Introduce `NestedPermutedDimsArrays` submodule ITensors.jl#1589 and [SparseArrayInterface] `NestedPermutedDimsArray` support ITensors.jl#1590.
- An alternative design to [NDTensors] Introduce `NestedPermutedDimsArrays` submodule ITensors.jl#1589 would be to redefine `NestedPermutedDimsArray` as a `PermutedDimsArray` wrapping a `MappedArray`, where the map and inverse map convert to `PermutedDimsArray`. That would be good to explore so we don't have to support all of the `NestedPermutedDimsArrays` code, which is mostly just a copy of `Base.PermutedDimsArrays` anyway.
- If slices are purely blockwise, like `b = @view a[Block.(1:2), [Block(2), Block(1)]]`, define `blocks(b)` as `@view blocks(a)[1:2, [2, 1]]`, as opposed to using the more general `SparseSubArrayBlocks` in those cases. Like the new `NestedPermutedDimsArray`, in principle `SparseSubArrayBlocks` could be replaced by a `NestedSubArray` type that defines the slicing behavior of the array storing the blocks as well as the slicing of the blocks themselves, but that might be overkill since the concept is very particular to block arrays. Even so, a `SubArray` of the `blocks` could still be used to simplify the code logic in `SparseSubArrayBlocks`.
- Constructor from `Dictionary` should check block sizes ([BlockSparseArrays] BlockSparseArray functionality #2).
- Blockwise linear algebra operations like `svd`, `qr`, etc. See [ENHANCEMENT] Blockwise matrix factorizations #3. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of the sparse array of arrays `blocks(a)`) corresponding to a generalized permutation matrix; see the sketch after this list. They should probably be called something like `block_svd`, `block_eigen`, `block_qr`, etc. to make clear that they are meant to be used on block sparse matrices with those structures (and error if the matrix doesn't have that structure). See [^1] for a prototype of a blockwise QR. See also BlockDiagonals.jl for an example of blockwise factorizations in Julia; it uses the naming scheme `svd_blockwise`, `eigen_blockwise`, etc. The slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks ITensors.jl#1489 will be useful for performing block-wise truncated factorizations.
- Define `storedblockview(a, ::Block)` to get the view of a stored block, which would allow avoiding wrapping the block in a view. Also define `getstoredindex(a, ::Block)`.
- Change the behavior of slicing with non-blocked ranges (such as `a[1:2, 1:2]`) to output non-blocked arrays, and define `@blocked a[1:2, 1:2]` to explicitly preserve blocking. See the discussion in Functionality for slicing with unit ranges that preserves block information JuliaArrays/BlockArrays.jl#347.
- Reconsider the design of how duality is stored in graded unit ranges (graded axes), for example storing it at the level of the sector labels with a new `SectorDual` type and/or as a boolean flag.
- Display non-initialized blocks differently from zero-initialized blocks; currently they both print as zeros.
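
To make the blockwise-factorization item above concrete, here is a minimal sketch in plain `LinearAlgebra` (deliberately not using the `BlockSparseArrays` API) of why a QR factorization is well defined blockwise when the stored blocks form a block-diagonal pattern, a special case of a generalized block permutation matrix. The name `block_qr` in the comment refers to the hypothetical function proposed above.

```julia
using LinearAlgebra

# Two stored blocks arranged in a block-diagonal sparsity pattern.
A11 = randn(2, 2)
A22 = randn(3, 3)

# Factorize each stored block independently.
Q11, R11 = qr(A11)
Q22, R22 = qr(A22)

# Assembling the blockwise factors reproduces a factorization of the full
# matrix, which is why a hypothetical `block_qr` would only need to visit
# the stored blocks of `blocks(a)`.
A = [A11 zeros(2, 3); zeros(3, 2) A22]
Q = [Matrix(Q11) zeros(2, 3); zeros(3, 2) Matrix(Q22)]
R = [R11 zeros(2, 3); zeros(3, 2) R22]
@assert Q * R ≈ A
```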

## Fixed

- Rename `SparseArrayInterface` to `SparseArraysBase` and move `SparseArrayDOKs` into the new `SparseArraysBase`. (Fixed in [SparseArraysBase] Rename `SparseArrayInterface` to `SparseArraysBase` ITensors.jl#1591 and [SparseArraysBase] Absorb `SparseArrayDOKs` ITensors.jl#1592.)
- Constructors like `BlockSparseArray{Float64,2,Matrix{Float64}}([2, 3], [2, 3])`, `BlockSparseArray{Float64,2}([2, 3], [2, 3])`, and `BlockSparseMatrix{Float64}([2, 3], [2, 3])` are not defined. (Fixed by [BlockSparseArrays] Define more constructors ITensors.jl#1586.)
- Rename `block_nstored` to `block_stored_length` and `nstored` to `stored_length`. (Fixed by [SparseArrayInterface] [BlockSparseArrays] Rename nstored to stored_length ITensors.jl#1585.)
- Rename `BlockSparseArrayLike` to `AnyAbstractBlockSparseArray`, which is the naming convention used in other Julia packages for a similar concept.[^2]
- Implement direct sums/concatenations of block sparse arrays that preserve and make use of block sparsity. This could be done by overloading `Base.cat` and related functions. (Implemented in [BlockSparseArrays] Direct sum/cat ITensors.jl#1579.)
- Fix issues with nested slices like `@views a[[Block(2), Block(1)], [Block(2), Block(1)]][2:4, 2:4]` in Julia 1.11 (see tests marked as broken in [NDTensors] [ITensors] Update tests to use Julia version 1.10 and 1.11 ITensors.jl#1539). (Fixed in [BlockSparseArrays] Fix nested slicing in Julia 1.11 ITensors.jl#1575.)
- Support for blocks that are on GPU. (Partially addressed in [BlockSparseArrays] Initial support for more general blocks, such as GPU blocks ITensors.jl#1560, but more work is needed; we can track remaining issues individually from now on.)
- Better support for blocks that are not `Array`, for example `DiagonalArrays.DiagonalArray`, `SparseArrayDOKs.SparseArrayDOK`, `LinearAlgebra.Diagonal`, etc. `BlockSparseArray` can have blocks that are `AbstractArray` subtypes, however some operations don't preserve those types properly (i.e. they implicitly convert to `Array` blocks) or don't work. (Partially addressed in [BlockSparseArrays] Initial support for more general blocks, such as GPU blocks ITensors.jl#1560, but more work is needed; we can track remaining issues individually from now on.)
- Cannot create a zero dimensional array (`BlockSparseArray{Float64}()` fails). (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `permutedims` crashes for some block sparse arrays ([BlockSparseArrays] BlockSparseArray functionality #2). (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `copy(adjoint)` does not preserve dual axes. (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `block_stored_indices(::LinearAlgebra.Adjoint{T,BlockSparseArray})` does not transpose its indices. (Matt: I don't see an issue here; the values of `block_stored_indices(a')` are the nonzero/stored block locations, while the `keys` of `block_stored_indices(a')` are an implementation detail and should not be used.)
- `LinearAlgebra.norm(a)` crashes when `a` contains `NaN`.
- Slicing a `BlockSparseArray` that has `GradedUnitRange` axes (as opposed to `GradedOneTo`) fails.
- Sub-slices of multiple blocks sometimes fail with a dual axis.
- `a[:, :]` creates an array with ill-behaved axes (it should just be equivalent to `copy(a)`). It also triggers a display error.
- Printing a `BlockSparseArray` triggers a `BoundsError` in some rare contexts ([BlockSparseArrays] BlockSparseArray functionality #2).
- Automatically initializing a block when only a slice is used (this should work if you make sure to use a view; see [BlockSparseArrays] BlockSparseArray functionality #2).
- Implement slicing syntax for merging blocks, i.e. More convenient syntax for merging blocks JuliaArrays/BlockArrays.jl#359.
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); @view a[Block(1, 1)]` returns a `SubArray` whose last type parameter, which marks whether or not the slice supports fast linear indexing, is `false`, while it should be `true` if that is the case for that block of `a`. (This is addressed by [BlockSparseArrays] Redesign block views again ITensors.jl#1513: `@view a[Block(1, 1)]` no longer outputs a `SubArray`, but rather either the block data directly or a `BlockView` object if the block doesn't exist yet.)
- `TensorAlgebra.contract` fails when called with `view(::BlockSparseArray, ::Block)` or `reshape(view(::BlockSparseArray, ::Block), ...)`. As a workaround, it works if you use `view!`/`@view!` introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks ITensors.jl#1498.
- Fix tests that are marked as broken in https://github.com/ITensor/ITensors.jl/blob/main/NDTensors/src/lib/BlockSparseArrays/test/test_basics.jl.
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); b = @view a[Block.(1:2), Block.(1:2)]; b[Block(1, 1)] = randn(2, 2)` doesn't set the block `Block(1, 1)` (it remains uninitialized, i.e. structurally zero). I think the issue is that `@view b[Block(1, 1)]` makes two layers of `SubArray` wrappers instead of flattening down to a single layer, and those two layers are not being dispatched on properly (in general we only catch the case where something is a `BlockSparseArray` or a `BlockSparseArray` wrapped in a single wrapper layer).
- Compatibility issues with `BlockArrays` v1.1; see CI for [SymmetrySectors] Non-abelian fusion ITensors.jl#1363. Fixed by [BlockSparseArrays] Update to BlockArrays v1.1, fix some issues with nested views ITensors.jl#1503.
- Rewrite GradedAxes using BlockArrays v1.[^3]
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); size(view(a, Block(1, 1))[1:1, 1:1])` returns a tuple of `LabelledInteger` instead of `Int` (see discussion; keep it that way, at least for now).
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(dual(r), r); @view(a[Block(1, 1)])[1:1, 1:1]` and other combinations of `dual` lead to method ambiguity errors.
- Implement slicing syntax for slicing multiple subsets of blocks, i.e. Indexing with `Vector{<:BlockIndexRange{1}}` JuliaArrays/BlockArrays.jl#358.
- `dual` is not preserved when adding/subtracting `BlockSparseArray`s, i.e. for `g = gradedrange([U1(0) => 1]); m = BlockSparseArray{Float64}(dual(g), g)`, `isdual(axes(m + m, 1))` should be `true` but is `false`.
- Error printing views of blocks of a `BlockSparseArray` with `GradedUnitRange` axes, i.e. `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); @view a[Block(1, 1)]`.
- Change the design of slicing with unit ranges, i.e. `a[2:4, 2:4]`, by using `BlockArrays.BlockSlice`.
- Fix `a[Block(2), Block(2)] = randn(3, 3)`.
- Fix `a[Block(2, 2)] .= 1`.
- Fix `@view(a[Block(1, 1)])[1:1, 1:1] = 1`.
- Throw an error for `a[Block(1, 1)] = b` if `size(a[Block(1, 1)]) != size(b)`.
- Fix matrix multiplication of `BlockSparseMatrix` involving dual axes.
- Fix matrix multiplication involving an adjointed `BlockSparseMatrix`, i.e. `a' * a` and `a * a'`, with and without dual axes.
- Take the dual of the axes in `adjoint(::BlockSparseMatrix)`. Can be implemented by overloading `axes(::Adjoint{<:Any,<:AbstractBlockSparseMatrix})`; see the sketch after this list.
- `show(::Adjoint{<:Any,<:BlockSparseMatrix})` and `show(::Transpose{<:Any,<:BlockSparseMatrix})` are broken.
- Fix `eachindex(::BlockSparseArray)` involving dual axes.
- Fix [Hermitian] transposed `BlockSparseMatrix`, i.e. `a'` (in progress in [^4]).
- `Base.similar(a::BlockSparseArray, eltype::Type)` and `Base.similar(a::BlockSparseArray, eltype::Type, size::NTuple{N,AbstractUnitRange})` do not set the `eltype`.
- Make `copy(::BlockSparseArray)` copy the blocks.
- Slicing with `a[1:2, 1:2]` is not implemented yet and needs to be implemented (in progress in [^5]).
- Function for accessing a list of initialized blocks. Currently you can use `stored_indices(blocks(a))` to get a list of `Block`s corresponding to initialized/stored blocks. Ideally there would be a shorthand for this like `block_stored_indices(a)` (in progress in [^5]).
- Function for accessing the number of initialized blocks. Currently you can use `nstored(blocks(a))` to get the number of initialized/stored blocks. Ideally there would be a shorthand for this like `block_nstored(a)` (in progress in [^5]).
- In-place operations `.*=` and `./=`, such as `a .*= 2`, are broken (in progress in [^1]).
- `Base.:*(::BlockSparseArray, x::Number)` and `Base.:/(::BlockSparseArray, x::Number)` are not defined.
- `Base.:*(::ComplexF64, ::BlockSparseArray{Float64})` does not change the data type for an empty array and crashes if `a` contains data.
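
For the item above about taking the dual of the axes in `adjoint(::BlockSparseMatrix)`, here is a sketch of the overload pattern that item describes. The import paths and the method body are assumptions for illustration; the actual fix in the package may differ.

```julia
using LinearAlgebra: Adjoint
using NDTensors.BlockSparseArrays: AbstractBlockSparseMatrix
using NDTensors.GradedAxes: dual

# Sketch only: `Adjoint` swaps the parent's two axes, and each axis is
# replaced by its dual.
function Base.axes(a::Adjoint{<:Any,<:AbstractBlockSparseMatrix})
  return map(dual, reverse(axes(parent(a))))
end
```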

## Footnotes

[^1]: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl
[^2]: https://github.com/JuliaGPU/GPUArrays.jl/blob/v11.1.0/lib/GPUArraysCore/src/GPUArraysCore.jl#L27, https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396
[^3]: https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255
[^4]: [BlockSparseArrays] Fix adjoint and transpose ITensors.jl#1470
[^5]: [BlockSparseArrays] More general broadcasting and slicing ITensors.jl#1332