implemented expm_multiply (#585)
* Refactor Expr
1) moving Hamiltonian/StepHamiltonian and ThreadedMatrix to the Operator module
2) moving all the linalg associated with 1) to the Operator module

* BloqadeExpr Change
1) move linalg related to the elementary structs (PermMatrix, Diagonal, SparseMatrixCSR/CSC) to Lowlevel/backends/
2) rename module Operator -> Lowlevel to avoid confusion with Yao
3) rename test/linalg.jl to test/linalg_mul.jl
4) [FIX] mul! testing case for ParallelMergeCSR was not properly dispatched
5) add trace() and bind it to LinearAlgebra.tr() for PermMatrix, Diagonal and SparseMatrixCSR/CSC
6) add test/linalg_tr.jl for testing tr()

* Get type info on Hamiltonian
1) add precision_type() for Hamiltonian/StepHamiltonian
2) add highest_type() for Hamiltonian/StepHamiltonian

* add overload for SparseMatrixCSR, and remove redundant print

* Move ValHamiltonian from BloqadeKrylov to BloqadeExpr.Lowlevel

* 1) Add more testing cases for ValHamiltonian
2) move linalg associated with Hamiltonian etc. to linalg.jl

* fix non-ASCII '-' character causing an error on Julia v1.6

* Simplify Lowlevel data structure
1) Remove StepHamiltonian
2) Rename ValHamiltonian -> SumOfLinop

* temp up

* expm_multiply first version
1) Add expm_multiply; tested, but this version consumes more memory
2) Add get_optimal_sm() to get the optimal s and m_star.
   Currently the 1-norm of the power p is computed exactly, not estimated (need to implement a 1-norm estimator!). A simplified sketch of the scheme follows below.
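For context, the scheme behind expm_multiply is the scaling/truncated-Taylor approach of Al-Mohy and Higham: split t into s sub-steps and apply a degree-m Taylor polynomial of (t/s)A at each step. The sketch below is a simplified, assumption-laden version with fixed s and m and no tolerance or norm logic; the committed code instead chooses s and m_star via get_optimal_sm().

    using LinearAlgebra

    # Compute F ≈ exp(t*A) * B without ever forming exp(t*A).
    function expm_multiply_sketch(t, A, B; s::Int = 10, m::Int = 20)
        F = copy(B)
        for _ in 1:s
            term = copy(F)                          # j = 0 Taylor term for this sub-step
            acc  = copy(F)
            for j in 1:m
                term = ((t / s) / j) * (A * term)   # next Taylor term
                acc .+= term
            end
            F = acc                                 # feed the result into the next sub-step
        end
        return F
    end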

* Add onenormest
1) add onenormest(), using the block algorithm for 1-norm estimation (a simplified single-vector sketch follows below)
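The block algorithm mentioned here generalizes Hager's estimator, which bounds the 1-norm using only a handful of products with A and its adjoint. A simplified single-vector sketch of that idea (not the committed block version):

    using LinearAlgebra, SparseArrays

    # Estimate opnorm(A, 1) using only A*x and A'*x products (Hager-style iteration).
    function onenormest_sketch(A; maxiter::Int = 5)
        n = size(A, 1)
        x = fill(1.0 / n, n)
        est = 0.0
        for _ in 1:maxiter
            y = A * x
            est = norm(y, 1)
            z = A' * sign.(y)
            j = argmax(abs.(z))
            abs(z[j]) <= dot(z, x) && break    # no more promising column to probe
            x = zeros(n); x[j] = 1.0           # probe column j next
        end
        return est
    end

    A = sprandn(200, 200, 0.05)
    onenormest_sketch(A), opnorm(Matrix(A), 1)   # estimate vs. exact 1-norm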

* update onenorm for expm_multiply
1) swap out the exact 1-norm in expm_multiply for onenormest()
2) reinstate all the testing cases
3) check the testing of expm_multiply

* fix bug in tA

* 1) modify SumOfLinop to take a tuple of static terms instead of a Hamiltonian
2) change the test, linalg, size, and types code to accommodate the change

* 1) modify onenormest and expm_multiply to accept a generic type instead of AbstractMatrix
2) modify other parts to accommodate the change to SumOfLinop

* refactor Expr
1) additional type parameters for SumOfLinop: Hermitian, SkewHermitian and RegularLinop
2) adjoint, Base.:*, add_I for lazy evaluation when the input is Hermitian/SkewHermitian (see the illustration below)
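The classification encoded by these type parameters follows ordinary linear algebra: a real scalar keeps a Hermitian operator Hermitian, a purely imaginary scalar makes it skew-Hermitian, and a generic complex scalar yields a regular operator. A self-contained dense-matrix illustration (not the SumOfLinop API itself):

    using LinearAlgebra

    H = [1.0 2.0; 2.0 3.0]        # Hermitian
    S = im * H                    # purely imaginary scale -> skew-Hermitian
    R = (1 + im) * H              # generic complex scale -> regular operator
    ishermitian(H), S' == -S, ishermitian(R)   # (true, true, false)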

* fix the bug in onenormest
1) this is a dirty version; it needs to be cleaned up
2) fix the bug in calculating h in onenormest

* clean up the code

* integrated expm_multiply into the integrators
1) fix bug in precision_type/highest_type/eltype for SumOfLinop by also considering the dtype of fvals
2) add testing case for the expm_multiply backend
3) add expmv_backend as an additional option, and update the docstring
4) fix bug in get_optimal_sm when t is negative

* add AD for Hamiltonian
1) add ForwardDiff to BloqadeExpr
2) add derivative(H, t) for calculating the Hamiltonian derivative H'(t) (see the sketch below)
3) add testing case
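The new derivative(H, t) differentiates each time-dependent coefficient with ForwardDiff and reuses the static terms. A standalone sketch of the core mechanism (the SumOfLinop construction in the committed code is omitted here):

    using ForwardDiff

    fs = (t -> 2.2 * sin(2π * t), t -> 0.5 * t^2)    # example coefficient functions
    t0 = 0.3
    fvals  = map(f -> f(t0), fs)                     # coefficients of H(t0)
    dfvals = ForwardDiff.derivative.(fs, t0)         # coefficients of H'(t0)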

* start development of adaptive CFET

* Bump patch version

* Refactor Expr (#572)

* Refactor Expr
1) moving Hamiltonian/StepHamiltonian and ThreadedMatrix to the Operator module
2) moving all the linalg associated with 1) to the Operator module

* BloqadeExpr Change
1) move linalg related to the elementary structs (PermMatrix, Diagonal, SparseMatrixCSR/CSC) to Lowlevel/backends/
2) rename module Operator -> Lowlevel to avoid confusion with Yao
3) rename test/linalg.jl to test/linalg_mul.jl
4) [FIX] mul! testing case for ParallelMergeCSR was not properly dispatched
5) add trace() and bind it to LinearAlgebra.tr() for PermMatrix, Diagonal and SparseMatrixCSR/CSC
6) add test/linalg_tr.jl for testing tr()

* Get type info on Hamiltonian
1) add precision_type() for Hamiltonian/StepHamiltonian
2) add highest_type() for Hamiltonian/StepHamiltonian

* add overload for SparseMatrixCSR, and remove redundant print

* Move ValHamiltonian from BloqadeKrylov to BloqadeExpr.Lowlevel

* 1) Add more testing cases for ValHamiltonian
2) move linalg associated with Hamiltonian etc. to linalg.jl

* fix non-ASCII '-' character causing an error on Julia v1.6

* Simplify Lowlevel data structure
1) Remove StepHamiltonian
2) Rename ValHamiltonian -> SumOfLinop

* 1) remove BloqadeKrylov from toml
2) remove redundant comments

---------

Co-authored-by: Kai-Hsin Wu <[email protected]>
Co-authored-by: Kai-Hsin Wu <[email protected]>

* remove redundant

* fix missing end in runtest.jl after merge

* add compat for ForwardDiff

* add eltype() for ThreadedMatrix

* remove hard constraint on the fixed-time check for CFET clocks

* re-removing tests.

---------

Co-authored-by: Kai-Hsin Wu <[email protected]>
Co-authored-by: Kai-Hsin Wu <[email protected]>
Co-authored-by: Phillip Weinberg <[email protected]>
4 people authored Sep 8, 2023
1 parent 6c0e573 commit c219ebf
Showing 49 changed files with 1,820 additions and 52 deletions.
2 changes: 2 additions & 0 deletions lib/BloqadeExpr/Project.toml
@@ -6,6 +6,7 @@ version = "0.1.14"
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
BitBasis = "50ba71b6-fa0f-514d-ae9a-0916efc90dcf"
BloqadeLattices = "bd27d05e-4ce1-5e79-84dd-c5d7d508bbe4"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
@@ -28,6 +29,7 @@ LaTeXStrings = "1"
LuxurySparse = "0.7"
MLStyle = "0.4"
ParallelMergeCSR = "1.0.2"
ForwardDiff="0.10"
Polyester = "0.7.3"
Preferences = "1.3"
SparseMatricesCSR = "0.6.7"
13 changes: 11 additions & 2 deletions lib/BloqadeExpr/src/BloqadeExpr.jl
@@ -19,7 +19,9 @@ using BloqadeLattices: BoundedLattice, rydberg_interaction_matrix


include("Lowlevel/Lowlevel.jl")
using .Lowlevel: Hamiltonian, SumOfLinop, ThreadedMatrix, storage_size, to_matrix, precision_type, highest_type

using .Lowlevel: Hamiltonian, SumOfLinop, ThreadedMatrix, storage_size, to_matrix, precision_type, highest_type, add_I, derivative, RegularLinop, isskewhermitian


export rydberg_h,
rydberg_h_3,
@@ -47,7 +49,14 @@ export rydberg_h,
emulate!,
precision_type,
highest_type,
to_matrix

to_matrix,
add_I,
derivative,
RegularLinop, # abstype
SkewHermitian, # abstype
isskewhermitian


include("assert.jl")
include("space.jl")
5 changes: 5 additions & 0 deletions lib/BloqadeExpr/src/Lowlevel/Lowlevel.jl
@@ -9,12 +9,17 @@ using Preferences
using Adapt
using LaTeXStrings
using LinearAlgebra
using ForwardDiff
using Base.Threads: nthreads


export Hamiltonian, SumOfLinop
export ThreadedMatrix
export set_backend
export storage_size, to_matrix
export precision_type, highest_type
export add_I, isskewhermitian

#export ValH, get_f # convert StepHamiltonian to ValHamiltonian

include("types.jl")
150 changes: 140 additions & 10 deletions lib/BloqadeExpr/src/Lowlevel/linalg.jl
@@ -1,6 +1,6 @@
#**************************************************************
# Here, one can find the linalg API for
# 1. Hamiltonian/SumOfLinop
# 1. Hamiltonian/SumOfLinop
# 2. Binding of standard LinearAlgebra functions
# to the backend of choice for ThreadedMatrix
#**************************************************************
@@ -10,32 +10,50 @@
#--------------------------------
function LinearAlgebra.mul!(C::AbstractVecOrMat, A::SumOfLinop, B::AbstractVecOrMat)
fill!(C, zero(eltype(C)))
for (f, term) in zip(A.fvals, A.h.ts)
for (f, term) in zip(A.fvals, A.ts)
mul!(C, term, B, f, one(f))
end
return C
end

## additionals, maybe we don't need this.
function Base.:*(a::Number, b::SumOfLinop)
return SumOfLinop(b.fvals .* a, b.h)

function Base.:*(a::Number,b::SumOfLinop)
return SumOfLinop{RegularLinop}(b.fvals .* a, b.ts)
end

function Base.:*(a::Complex,b::SumOfLinop{T}) where {T}
if real(a) ≈ 0
return SumOfLinop{anti_type(T)}(b.fvals .* a, b.ts)
elseif imag(a) ≈ 0
return SumOfLinop{T}(b.fvals .* a, b.ts)
else
return SumOfLinop{RegularLinop}(b.fvals .* a, b.ts)
end
end

function Base.:*(a::Real,b::SumOfLinop{T}) where {T}
return SumOfLinop{T}(b.fvals .* a, b.ts)
end

Base.:*(n, m::T) where {T <: ThreadedMatrix} = n * m.matrix


#=
function Base.:+(a::SumOfLinop, b::SumOfLinop)
if !(a === b)
if !(a.ts === b.ts)
error("two SumOfLinop must share the same static terms ")
end
return SumOfLinop(a.fvals + b.fvals, a.h)
return SumOfLinop(a.fvals + b.fvals, a.ts)
end
function Base.:-(a::SumOfLinop, b::SumOfLinop)
if !(a === b)
if !(a.ts === b.ts)
error("two SumOfLinop must share the same static terms ")
end
return SumOfLinop(a.fvals - b.fvals, a.h)
return SumOfLinop(a.fvals - b.fvals, a.ts)
end

=#


# if BloqadeExpr was the backend of choice, then ThreadedMatrix will have the SparseMatrixCSC type
@@ -47,6 +65,7 @@ LinearAlgebra.mul!(C, A::ThreadedMatrix, B, α, β) = bmul!(C, A.matrix, B, α,

##-------------------------------- mul!


## opnorm()
# --------------------------------
function LinearAlgebra.opnorm(h::SumOfLinop, p = 2)
@@ -59,7 +78,7 @@ end
## tr()
# --------------------------------
function LinearAlgebra.tr(A::SumOfLinop)
return sum(zip(A.fvals, A.h.ts)) do (f, t)
return sum(zip(A.fvals, A.ts)) do (f, t)
return f * tr(t)
end
end
@@ -77,3 +96,114 @@
LinearAlgebra.tr(A::ThreadedMatrix) = tr(A.matrix)

##-------------------------------- tr()


## check if is hermitian.
# --------------------------------
LinearAlgebra.ishermitian(A::SumOfLinop{<: LinearAlgebra.Hermitian}) = true
LinearAlgebra.ishermitian(A::SumOfLinop) = false

isskewhermitian(A::SumOfLinop{<: SkewHermitian}) = true
isskewhermitian(A::SumOfLinop) = false


## adjoint()
function LinearAlgebra.adjoint(A::SumOfLinop{<:LinearAlgebra.Hermitian})
return A
end
function LinearAlgebra.adjoint(A::SumOfLinop{<:SkewHermitian})
return SumOfLinop{SkewHermitian}(A.fvals.*(-1), A.ts)
end
function LinearAlgebra.adjoint(A::SumOfLinop{OPTYPE}) where {OPTYPE}
return SumOfLinop{OPTYPE}(conj.(A.fvals), map(adjoint,A.ts))
end

## add constant identity term into SumOfLinop
# [NOTE] this does not check the type consistency of c w.r.t. A.fvals.
function add_I(A,c::Number)
Iop = LinearAlgebra.I(size(A,1))
return A + c*Iop
end

function add_I(A::SumOfLinop, c::Number)
Iop = LinearAlgebra.I(size(A,1))

if nthreads() > 1
return SumOfLinop{RegularLinop}((A.fvals...,c),(A.ts...,ThreadedMatrix(Iop)))
else
return SumOfLinop{RegularLinop}((A.fvals...,c),(A.ts...,Iop))
end

end

function add_I(A::SumOfLinop{<:LinearAlgebra.Hermitian}, c::Real)
# check backend:
Iop = LinearAlgebra.I(size(A,1))

if nthreads() > 1
return SumOfLinop{LinearAlgebra.Hermitian}((A.fvals...,c),(A.ts...,ThreadedMatrix(Iop)))
else
return SumOfLinop{LinearAlgebra.Hermitian}((A.fvals...,c),(A.ts...,Iop))
end

end
function add_I(A::SumOfLinop{<:LinearAlgebra.Hermitian}, c::Complex)
# check backend:
Iop = LinearAlgebra.I(size(A,1))

OPTYPE=RegularLinop
if imag(c) ≈ 0
OPTYPE = LinearAlgebra.Hermitian
end

if nthreads() > 1
return SumOfLinop{OPTYPE}((A.fvals...,c),(A.ts...,ThreadedMatrix(Iop)))
else
return SumOfLinop{OPTYPE}((A.fvals...,c),(A.ts...,Iop))
end

end

function add_I(A::SumOfLinop{<:SkewHermitian}, c::Real)
# check backend:
Iop = LinearAlgebra.I(size(A,1))

if nthreads() > 1
return SumOfLinop{RegularLinop}((A.fvals...,c),(A.ts...,ThreadedMatrix(Iop)))
else
return SumOfLinop{RegularLinop}((A.fvals...,c),(A.ts...,Iop))
end

end
function add_I(A::SumOfLinop{<:SkewHermitian}, c::Complex)
# check backend:
Iop = LinearAlgebra.I(size(A,1))

OPTYPE=RegularLinop
if real(c) ≈ 0
OPTYPE = SkewHermitian
end

if nthreads() > 1
return SumOfLinop{OPTYPE}((A.fvals...,c),(A.ts...,ThreadedMatrix(Iop)))
else
return SumOfLinop{OPTYPE}((A.fvals...,c),(A.ts...,Iop))
end

end



## taking derivative of Hamiltonian and evaluate at time
## H'(t)
## this returns a SumOfLinop with fvals being the derivative
#function derivative(h::Hamiltonian, t::Real)
# return SumOfLinop{Hermitian}(ForwardDiff.derivative.(h.fs,t), h.ts)
#end

function derivative(h::Hamiltonian, t::Real)
## remove terms that are zero
fvals = ForwardDiff.derivative.(h.fs,t)
mask = collect(fvals .!= 0)
return SumOfLinop{Hermitian}(fvals[mask],h.ts[mask])
end
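As an editorial aside, the add_I promotion rules above mirror plain matrix identities; this small dense-matrix check (which does not exercise the SumOfLinop code path) illustrates them:

    using LinearAlgebra

    H = [0.0 1.0; 1.0 0.0]                    # Hermitian
    ishermitian(H + 2.5 * I)                  # true: a real shift keeps it Hermitian
    S = [0.0 1.0; -1.0 0.0]                   # skew-Hermitian
    (S + 0.7im * I)' == -(S + 0.7im * I)      # true: an imaginary shift keeps it skew-Hermitian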
50 changes: 41 additions & 9 deletions lib/BloqadeExpr/src/Lowlevel/types.jl
@@ -24,8 +24,11 @@ end

Base.size(m::ThreadedMatrix) = size(m.matrix)
Base.size(m::ThreadedMatrix, i) = size(m.matrix)[i]
Base.eltype(m::ThreadedMatrix) = eltype(m.matrix)
Base.pointer(m::T) where {T <: Diagonal} = pointer(m.diag)


precision_type(m::T) where {T <: Number} = real(typeof(m))
precision_type(m::T) where {T <: Diagonal} = real(eltype(m))
precision_type(m::T) where {T <: PermMatrix} = real(eltype(m))
precision_type(m::T) where {T <: SparseMatrixCSR} = real(eltype(m))
@@ -61,27 +64,56 @@ function highest_type(h::Hamiltonian)
return promote_type(tp...)
end


Base.eltype(h::Hamiltonian) = highest_type(h)


Adapt.@adapt_structure Hamiltonian




abstract type RegularLinop end
abstract type SkewHermitian end

anti_type(::Type{LinearAlgebra.Hermitian}) = SkewHermitian
anti_type(::Type{SkewHermitian}) = LinearAlgebra.Hermitian
anti_type(::Type{RegularLinop}) = RegularLinop


"""
struct SumOfLinop
A low-level linear-map object that explicitly evaluates the time-dependent
coefficients of a Hamiltonian at a given time `t`, i.e. `fvals = fs(t)`.
This object supports the linear map interface `mul!(Y, H, X)`.
"""
struct SumOfLinop{VS,FS,TS}

struct SumOfLinop{OPTYPE, VS,TS}
fvals::VS
h::Hamiltonian{FS,TS}
ts::TS
function SumOfLinop{OPTYPE}(fvals::VS, ts::TS) where {OPTYPE, VS, TS}
return new{OPTYPE,VS,TS}(fvals, ts)
end
end

Base.size(h::SumOfLinop, idx::Int) = size(h.h, idx)
Base.size(h::SumOfLinop) = size(h.h)
precision_type(h::SumOfLinop) = precision_type(h.h)
highest_type(h::SumOfLinop) = highest_type(h.h)
Base.size(h::SumOfLinop, idx::Int) = size(h.ts[1], idx)
Base.size(h::SumOfLinop) = size(h.ts[1])
function precision_type(h::SumOfLinop)
tp = unique(precision_type.(h.ts))
tp2 = unique(precision_type.(h.fvals))
tp = unique((tp...,tp2...))
return Union{tp...}
end
function highest_type(h::SumOfLinop)
tp = unique(eltype.(h.ts))
tp2 = unique(typeof.(h.fvals))
return promote_type(tp...,tp2...)
end
Base.eltype(h::SumOfLinop) = highest_type(h)

function to_matrix(h::SumOfLinop)
return sum(zip(h.fvals, h.h.ts)) do (f, t)
return sum(zip(h.fvals, h.ts)) do (f, t)
return f * t
end
end
@@ -93,9 +125,9 @@ function _getf(h::Hamiltonian,t)
)
end

(h::Hamiltonian)(t::Real) = SumOfLinop(_getf(h,t), h)


## lowered from a Hamiltonian, so it carries the Hermitian type
(h::Hamiltonian)(t::Real) = SumOfLinop{LinearAlgebra.Hermitian}(_getf(h,t), h.ts)



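Tying the pieces together: evaluating a Hamiltonian at a time now lowers it to a SumOfLinop tagged as Hermitian, and precision_type/highest_type fold in the coefficient values as well. A hedged usage sketch, assuming the usual rydberg_h constructor and not verified against this exact revision:

    using BloqadeExpr

    atoms = [(0.0, 0.0), (0.0, 6.1)]
    h = rydberg_h(atoms; Ω = t -> 2.2 * sin(t), Δ = 1.0)
    A = h(0.1)                      # SumOfLinop{LinearAlgebra.Hermitian}, fvals evaluated at t = 0.1
    precision_type(A), eltype(A)    # now account for the types of the fvals too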
