HybridACPowerFlow #97

Open · wants to merge 35 commits into base: main

Changes from all commits (35 commits)
11d5c48
added klu lin solver cache
luke-kiernan Feb 7, 2025
dbdcec1
copy the file from jd/polar_nr
Feb 7, 2025
5168045
abstract type added
Feb 7, 2025
6ad51aa
solver cache abstract class; replaced klu calls with klu cache struct…
luke-kiernan Feb 7, 2025
b87ec7d
tests for KLULinSolveCache
luke-kiernan Feb 11, 2025
8fc3e4f
comments on polar_nr_ac_powerflow
luke-kiernan Feb 11, 2025
bf3474a
nr pf using J, PolarPowerFlow functors, and KLULinSolveCache
luke-kiernan Feb 11, 2025
bd0d454
Int32 indexing in LinSolverCache, to match Jacobian functor
luke-kiernan Feb 11, 2025
5a6c66c
clean up new hybrid nr pf
luke-kiernan Feb 12, 2025
9a9fc38
debug, optimize hybrid nr pf; add new type to tests
luke-kiernan Feb 13, 2025
479c33a
Apply suggestions from code review
luke-kiernan Feb 13, 2025
61a4a11
formatting: end with newline
luke-kiernan Feb 14, 2025
95bd687
formatting: tests
luke-kiernan Feb 14, 2025
cf58844
index type for LinearSolverCache is template parameter
luke-kiernan Feb 14, 2025
20ac49c
typo/spelling: hyrbid -> hybrid
luke-kiernan Feb 17, 2025
8458a23
hybrid ac pf: singular fallback
luke-kiernan Feb 17, 2025
103eb04
last hyrbid -> hybrid fixes
luke-kiernan Feb 17, 2025
aaf9480
perf: dispatch on Val of bus type in jsp!
luke-kiernan Feb 17, 2025
d51813e
solve! takes StridedVecOrMat
luke-kiernan Feb 18, 2025
e22ae07
formatting
luke-kiernan Feb 18, 2025
ddae98e
Use KLULinSolveCache in dc powerflows
luke-kiernan Feb 18, 2025
4c34a7c
perf: do vPTDF via broadcasting, instead of iterating thru timesteps
luke-kiernan Feb 18, 2025
1e39bb5
Merge branch 'main' of https://github.com/NREL-Sienna/PowerFlows.jl i…
Feb 18, 2025
87c4a09
use Int32 for indexing in matrices
Feb 18, 2025
2eda04d
iterative refinement
luke-kiernan Feb 20, 2025
2d8241a
Gabriel code quality/style changes
luke-kiernan Feb 20, 2025
445bd9b
nr iteration in its own function
luke-kiernan Feb 20, 2025
fa4e013
relax iterative refinement test
luke-kiernan Feb 20, 2025
a082bf3
formatting
luke-kiernan Feb 20, 2025
e1bbbdd
add ProgressMeter for multiperiod AC PF and brute-force loss factor c…
Feb 21, 2025
5b84e13
Merge branch 'lin-solve-cache' of https://github.com/NREL-Sienna/Powe…
Feb 21, 2025
18becd9
small fix
Feb 21, 2025
d9848b6
change 0.54 p.u. to 0.84 p.u. for PSSE roundtrip testing
Feb 21, 2025
cc8e3ed
increase tolerance for testing loss factors
Feb 21, 2025
3d59c5f
Merge pull request #105 from NREL-Sienna/rb/lin_solve_cache_new_J
luke-kiernan Feb 21, 2025
2 changes: 2 additions & 0 deletions Project.toml
@@ -15,6 +15,7 @@ Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
NLsolve = "2774e3e8-f4cf-5e23-947b-6d7e65073b56"
PowerNetworkMatrices = "bed98974-b02a-5e2f-9fe0-a103f5c450dd"
PowerSystems = "bcd98974-b02a-5e2f-9ee0-a103f5c450dd"
ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"

[compat]
@@ -29,5 +30,6 @@ Logging = "1"
NLsolve = "4"
PowerNetworkMatrices = "^0.12.1"
PowerSystems = "^4.6"
ProgressMeter = "1.10.2"
SparseArrays = "1"
julia = "^1.6"
227 changes: 227 additions & 0 deletions src/LinearSolverCache/klu_linear_solver.jl
@@ -0,0 +1,227 @@
using LinearAlgebra
"""A cached linear solver using KLU. Carefully written so as to minimize
allocations: solve! and numeric_refactor! are completely non-allocating.
# Fields:
- `K`: the underlying KLU object.
- `reuse_symbolic::Bool`: reuse the symbolic factorization. Defaults to true.
- `check_pattern::Bool`: if true, `numeric_refactor!` verifies that the new
matrix has the same sparsity structure. Defaults to true.
- `rf_common`, `rf_symbolic`, `rf_numeric`: internal usage. Stored to avoid allocations."""
mutable struct KLULinSolveCache{T} <: LinearSolverCache{T}
K::KLU.KLUFactorization{Float64, T}
reuse_symbolic::Bool
check_pattern::Bool
# condest::Float64 # condition number. could be useful for iterative refinement.
rf_common::Union{Base.RefValue{KLU.klu_common}, Base.RefValue{KLU.klu_l_common}}
# klu_free_{numeric/symbolic} requires these. store them to avoid allocating.
rf_symbolic::Union{
Base.RefValue{Ptr{KLU.klu_symbolic}},
Base.RefValue{Ptr{KLU.klu_l_symbolic}},
}
rf_numeric::Union{
Base.RefValue{Ptr{KLU.klu_numeric}},
Base.RefValue{Ptr{KLU.klu_l_numeric}},
}
end

function KLULinSolveCache(
A::SparseMatrixCSC{Float64, T},
reuse_symbolic::Bool = true,
check_pattern::Bool = true,
) where {T <: TIs}
symType = T <: Int32 ? KLU.klu_symbolic : KLU.klu_l_symbolic
numType = T <: Int32 ? KLU.klu_numeric : KLU.klu_l_numeric
K = KLU.KLUFactorization(A)
# KLU.kluerror(K.common) # necessary?
return KLULinSolveCache(K, reuse_symbolic, check_pattern,
# NaN,
Ref(K.common), Ref(Ptr{symType}(K._symbolic)),
Ref(Ptr{numType}(K._numeric)))
end

get_reuse_symbolic(cache::KLULinSolveCache) = cache.reuse_symbolic

# only does the symbolic, not the numeric.
function symbolic_factor!(
cache::KLULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
if !(size(A, 1) == cache.K.n && size(A, 2) == cache.K.n)
throw(
LinearAlgebra.DimensionMismatch(
"Can't factor: matrix has different dimensions."),
)
end
# cache.condest = NaN # will need to re-calculate condition number.
if cache.K._symbolic != C_NULL
freeFcn = T <: Int32 ? KLU.klu_free_symbolic : KLU.klu_l_free_symbolic
@assert cache.rf_symbolic != Ref(C_NULL)
freeFcn(cache.rf_symbolic, cache.rf_common)
cache.K._symbolic = C_NULL
@assert cache.rf_symbolic[] == C_NULL
end
if cache.K._numeric != C_NULL
Contributor:
Is accessing the _symbolic and _numeric fields of K part of KLU's public interface? If not (and my guess is not), find a way to avoid

Collaborator Author:
This was a subject of discussion with @rbolgaryn. We're doing things this way (directly calling C wrappers, using private fields) for performance reasons: KLU's public interface was not written in a non-allocating manner. That being said, I have not actually measured the performance difference.

Contributor (@GabrielKS, Feb 20, 2025):
Maybe @jd-lara and @daniel-thom want to weigh in here but I'd be veeery wary of doing this. I guess you could check the Git history to find out how stable in practice some of these things are, but typically when a field starts with an underscore it's to explicitly signal that it should not be relied upon externally….

Contributor:
Alternatively, how hard would it be to write our own wrapper to replace KLU.jl?

Contributor:
This is how it looks in the KLU.jl package:

function klu_factor(Ap, Ai, Ax, Symbolic, Common)
    ccall((:klu_factor, libklu), Ptr{klu_numeric}, (Ptr{Int32}, Ptr{Int32}, Ptr{Cdouble}, Ptr{klu_symbolic}, Ptr{klu_common}), Ap, Ai, Ax, Symbolic, Common)
end


function klu_solve(Symbolic, Numeric, ldim, nrhs, B, Common)
    ccall((:klu_solve, libklu), Cint, (Ptr{klu_symbolic}, Ptr{klu_numeric}, Int32, Int32, Ptr{Cdouble}, Ptr{klu_common}), Symbolic, Numeric, ldim, nrhs, B, Common)
end

function klu_refactor(Ap, Ai, Ax, Symbolic, Numeric, Common)
    ccall((:klu_refactor, libklu), Cint, (Ptr{Int32}, Ptr{Int32}, Ptr{Cdouble}, Ptr{klu_symbolic}, Ptr{klu_numeric}, Ptr{klu_common}), Ap, Ai, Ax, Symbolic, Numeric, Common)
end

freeFcn = T <: Int32 ? KLU.klu_free_numeric : KLU.klu_l_free_numeric
@assert cache.rf_numeric != Ref(C_NULL)
freeFcn(cache.rf_numeric, cache.rf_common)
cache.K._numeric = C_NULL
@assert cache.rf_numeric[] == C_NULL
Contributor:
In general, I'm not fully sold on the use of C_NULL as a null value except in cases where a (public) interface with a library requires it. rf_numeric is already a Union; does it really impact performance to put the option of Nothing in there? (If it really does, I could get on board with this.)

end

# copy over new info in a minimally-allocating way.
resize!(cache.K.colptr, length(A.colptr))
copyto!(cache.K.colptr, A.colptr)
KLU.decrement!(cache.K.colptr)

resize!(cache.K.rowval, length(A.rowval))
copyto!(cache.K.rowval, A.rowval)
KLU.decrement!(cache.K.rowval)

resize!(cache.K.nzval, length(A.nzval))
@assert cache.K._symbolic == C_NULL

analyzeFcn = T <: Int32 ? KLU.klu_analyze : KLU.klu_l_analyze
sym = analyzeFcn(cache.K.n, cache.K.colptr, cache.K.rowval, cache.rf_common)
if sym != C_NULL
symType = T <: Int32 ? KLU.klu_symbolic : KLU.klu_l_symbolic
cache.K._symbolic = sym
cache.rf_symbolic[] = Ptr{symType}(cache.K._symbolic)
# we do need the above line: without it, this assert triggers
@assert cache.rf_symbolic[] == cache.K._symbolic
end
return
end

function symbolic_refactor!(
cache::KLULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
# case 1: reuse_symbol && !check_pattern => assume pattern hasn't changed.
# case 2: reuse_symbol && check_pattern => check pattern
# case 3: !reuse_symbol => refactor.
if cache.reuse_symbolic && cache.check_pattern
if !(size(A, 1) == cache.K.n && size(A, 2) == cache.K.n)
throw(
LinearAlgebra.DimensionMismatch(
"Can't refactor: new matrix has different dimensions."),
)
end
# KLU does this same increment-compare-decrement pattern.
KLU.increment!(cache.K.rowval)
KLU.increment!(cache.K.colptr)
shouldErr::Bool = A.colptr != cache.K.colptr || A.rowval != cache.K.rowval
KLU.decrement!(cache.K.rowval)
KLU.decrement!(cache.K.colptr)
shouldErr && throw(
ArgumentError(
"Matrix has different sparse structure. " *
"Either make cache with reuse_symbolic = false, " *
"or call symbolic_factor! [instead of symbolic_refactor!].",
),
)
elseif !cache.reuse_symbolic
symbolic_factor!(cache, A) # this also sets rf_symbolic.
@assert cache.rf_symbolic[] == cache.K._symbolic
end
return
end

function numeric_refactor!(
cache::KLULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
if cache.K._symbolic == C_NULL
@error("Need to call symbolic_factor! first.")
end
# cache.condest = NaN # will need to re-calculate condition number.
if cache.K._numeric == C_NULL
factorFcn = T <: Int32 ? KLU.klu_factor : KLU.klu_l_factor
cache.K._numeric = factorFcn(cache.K.colptr,
cache.K.rowval,
A.nzval,
cache.K._symbolic,
cache.rf_common)
if cache.K._numeric == C_NULL
@warn("factor failed")
KLU.kluerror(cache.K.common)
end
else
if cache.check_pattern
# 0 vs 1-indexing. If mismatch, put things back before throwing error.
KLU.increment!(cache.K.rowval)
KLU.increment!(cache.K.colptr)
shouldErr::Bool = A.colptr != cache.K.colptr || A.rowval != cache.K.rowval
KLU.decrement!(cache.K.rowval)
KLU.decrement!(cache.K.colptr)
shouldErr && throw(
ArgumentError(
"Cannot numeric_refactor: " *
"matrix has different sparse structure.",
),
)
end
refactorFcn = T <: Int32 ? KLU.klu_refactor : KLU.klu_l_refactor
ok = refactorFcn(cache.K.colptr,
cache.K.rowval,
A.nzval,
cache.K._symbolic,
cache.K._numeric,
cache.rf_common,
)
if (ok != 1)
@warn("refactor failed")
KLU.kluerror(cache.K.common)
end
numType = T <: Int32 ? KLU.klu_numeric : KLU.klu_l_numeric
cache.rf_numeric[] = Ptr{numType}(cache.K._numeric)
# we do need the above line: without it, the below assert triggers.
@assert cache.rf_numeric[] == Ptr{numType}(cache.K._numeric)
end
return
end

function solve!(cache::KLULinSolveCache{T}, B::StridedVecOrMat{Float64}) where {T <: TIs}
size(B, 1) == cache.K.n || throw(
LinearAlgebra.DimensionMismatch(
"Need size(B, 1) to equal $(cache.K.n), but got $(size(B, 1))."),
)
stride(B, 1) == 1 || throw(ArgumentError("B must have unit strides"))
solveFcn = T <: Int32 ? KLU.klu_solve : KLU.klu_l_solve
isok = solveFcn(cache.K._symbolic, cache.K._numeric,
size(B, 1), size(B, 2), B, cache.rf_common)
isok == 0 && KLU.kluerror(cache.K.common)
return B
end

function solve_w_refinement(cache::KLULinSolveCache{T}, A::SparseMatrixCSC{Float64, T},
B::StridedVecOrMat{Float64}, tol::Float64 = 1e-6) where {T <: TIs}
cache.K._numeric == C_NULL && @error("not factored yet")
# update condition number, if needed
#=if isnan(cache.condest)
condestFcn = T <: Int32 ? KLU.klu_condest : KLU.klu_l_condest
ok = condestFcn(cache.K.colptr, cache.K.nzval, cache.K._symbolic, cache.K._numeric, cache.rf_common)
if ok == 0
KLU.kluerror(cache.K.common)
end
cache.condest = K.common.condest
end=#
bNorm = norm(B, 1)
XB = zeros(size(B))
r = B - A * XB
MAX_ITERS = 10
iters = 0
while iters < MAX_ITERS && norm(r, 1) >= bNorm * tol #*cache.condest
lastError = norm(r, 1)
solve!(cache, r)
XB .+= r
r .= B - A * XB
iters += 1
if norm(r, 1) > lastError
@error("Iterative refinement failed: error is getting worse.")
return XB
end
end
@debug("Iterative refinement converged in $iters iterations.")
return XB
end
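For review context, a minimal usage sketch of the API defined in this file. The matrix and right-hand-side values are illustrative, and the snippet assumes this PR's module (with `KLULinSolveCache`, `full_factor!`, `solve!`, `numeric_refactor!`, and `solve_w_refinement`) is loaded:

```julia
using SparseArrays

# Illustrative 3x3 diagonal system.
A = sparse([1, 2, 3], [1, 2, 3], [2.0, 3.0, 4.0])
cache = KLULinSolveCache(A)   # defaults: reuse_symbolic = true, check_pattern = true
full_factor!(cache, A)        # symbolic_factor! followed by numeric_refactor!

b = [2.0, 6.0, 12.0]
solve!(cache, b)              # in place: b now holds A \ [2.0, 6.0, 12.0]

# If A's values change but its sparsity pattern does not,
# only a (non-allocating) numeric refactorization is needed:
A.nzval .= [4.0, 6.0, 8.0]
numeric_refactor!(cache, A)

# Iterative refinement variant: returns the solution without overwriting B.
x = solve_w_refinement(cache, A, [4.0, 12.0, 24.0])
```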
23 changes: 23 additions & 0 deletions src/LinearSolverCache/linear_solver_cache.jl
@@ -0,0 +1,23 @@
const TIs = Union{Int32, Int64}
"""Abstract supertype for all cached linear solvers.
Subtypes must implement: `symbolic_factor!`, `symbolic_refactor!`,
`numeric_refactor!` (which doubles as `numeric_factor!`), and `solve!`."""
abstract type LinearSolverCache{T <: TIs} end

function full_refactor!(
cache::LinearSolverCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
symbolic_refactor!(cache, A)
numeric_refactor!(cache, A)
return
end

function full_factor!(
cache::LinearSolverCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
symbolic_factor!(cache, A)
numeric_refactor!(cache, A)
return
end
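Since both concrete caches subtype `LinearSolverCache`, downstream code can be written once against the abstract type. A sketch under that assumption (the function name `factor_and_solve!` is illustrative, not part of this PR):

```julia
using SparseArrays

# Accepts either a KLULinSolveCache or an LULinSolveCache.
function factor_and_solve!(cache::LinearSolverCache{T},
    A::SparseMatrixCSC{Float64, T},
    b::Vector{Float64}) where {T}
    full_factor!(cache, A)  # symbolic + numeric factorization
    return solve!(cache, b) # overwrites b with A \ b
end

A = sparse([1, 2], [1, 2], [2.0, 4.0])
factor_and_solve!(KLULinSolveCache(A), A, [2.0, 8.0])
factor_and_solve!(LULinSolveCache(A), A, [2.0, 8.0])
```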
37 changes: 37 additions & 0 deletions src/LinearSolverCache/lu_linear_solver.jl
@@ -0,0 +1,37 @@
mutable struct LULinSolveCache{T} <: LinearSolverCache{T}
lu_fact::SparseArrays.UMFPACK.UmfpackLU{Float64, T}
reuse_symbolic::Bool
check_pattern::Bool
end

function LULinSolveCache(A::SparseMatrixCSC{Float64, T},
reuse_symbolic::Bool = true,
check_pattern::Bool = true) where {T <: TIs}
LULinSolveCache(LinearAlgebra.lu(A), reuse_symbolic, check_pattern)
end

function numeric_refactor!(
cache::LULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
LinearAlgebra.lu!(cache.lu_fact, A; check = cache.check_pattern,
reuse_symbolic = cache.reuse_symbolic)
end

function symbolic_refactor!(
cache::LULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
LinearAlgebra.lu!(cache.lu_fact, A; reuse_symbolic = false)
end

function symbolic_factor!(
cache::LULinSolveCache{T},
A::SparseMatrixCSC{Float64, T},
) where {T <: TIs}
symbolic_refactor!(cache, A)
end

function solve!(cache::LULinSolveCache{T}, B::StridedVecOrMat{Float64}) where {T <: TIs}
LinearAlgebra.ldiv!(cache.lu_fact, B)
end
5 changes: 5 additions & 0 deletions src/PowerFlows.jl
@@ -5,6 +5,7 @@ export solve_powerflow!
export PowerFlowData
export DCPowerFlow
export NLSolveACPowerFlow
export HybridACPowerFlow
export KLUACPowerFlow
export ACPowerFlow
export ACPowerFlowSolverType
@@ -32,6 +33,7 @@ import SparseArrays: SparseMatrixCSC, sparse
import JSON3
import DataStructures: OrderedDict
import Dates
import ProgressMeter

const IS = InfrastructureSystems
const PSY = PowerSystems
@@ -42,6 +44,9 @@ include("definitions.jl")
include("powerflow_types.jl")
include("PowerFlowData.jl")
include("psse_export.jl")
include("LinearSolverCache/linear_solver_cache.jl")
include("LinearSolverCache/klu_linear_solver.jl")
include("LinearSolverCache/lu_linear_solver.jl")
include("solve_dc_powerflow.jl")
include("ac_power_flow.jl")
include("ac_power_flow_jacobian.jl")
8 changes: 4 additions & 4 deletions src/ac_power_flow.jl
@@ -137,8 +137,8 @@ function polar_pf!(
θ = data.bus_angles[:, time_step]
# F is active and reactive power balance equations at all buses
for ix_f in 1:n_buses
S_re = -P_net[ix_f]
S_im = -Q_net[ix_f]
S_re = 0.0
S_im = 0.0
for ix_t in data.neighbors[ix_f]
gb = real(Yb[ix_f, ix_t])
bb = imag(Yb[ix_f, ix_t])
@@ -156,8 +156,8 @@
(gb * sin(θ[ix_f] - θ[ix_t]) - bb * cos(θ[ix_f] - θ[ix_t]))
end
end
F[2 * ix_f - 1] = S_re
F[2 * ix_f] = S_im
F[2 * ix_f - 1] = S_re - P_net[ix_f]
F[2 * ix_f] = S_im - Q_net[ix_f]
end
return
end
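The change above moves the net-injection terms out of the accumulation loop without altering the residual. In the notation of the surrounding code, with $N(i)$ denoting `data.neighbors[i]`, the entries of $F$ remain

$$F_{2i-1} = \sum_{t \in N(i)} V_i V_t \left( g_{it}\cos(\theta_i - \theta_t) + b_{it}\sin(\theta_i - \theta_t) \right) - P^{\mathrm{net}}_i$$

$$F_{2i} = \sum_{t \in N(i)} V_i V_t \left( g_{it}\sin(\theta_i - \theta_t) - b_{it}\cos(\theta_i - \theta_t) \right) - Q^{\mathrm{net}}_i$$

so the rewrite only changes where the subtraction happens, which keeps the per-neighbor loop body free of the constant terms.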