SummationByPartsOperatorsExtra.jl API
SummationByPartsOperatorsExtra.SummationByPartsOperatorsExtra
— Module
SummationByPartsOperatorsExtra.jl is a Julia package that implements some extra functionality for the package SummationByPartsOperators.jl. SummationByPartsOperatorsExtra.jl is still in an early stage of development and is meant to be used for research purposes. Some parts of the package may be moved to SummationByPartsOperators.jl in the future. So far, the package focuses on the implementation of function space summation-by-parts operators in one and multiple dimensions and on subcell summation-by-parts operators.
See also: SummationByPartsOperatorsExtra.jl
SummationByPartsOperatorsExtra.AnalysisCallback
— Type
AnalysisCallback(semi; interval = 0, dt = nothing)

Analyze the numerical solution either every `interval` accepted time steps or every `dt` in terms of integration time. You can only pass either `interval` or `dt`, but not both at the same time. The analyzed quantities are computed by `analyze_quantities` defined for each equation type. The resulting quantities can be accessed via the `quantities` function, and the corresponding time values via the `tstops` function.
SummationByPartsOperatorsExtra.GlaubitzIskeLampertÖffner2025
— Type
GlaubitzIskeLampertÖffner2025()

Multidimensional function space SBP (MFSBP) operators given in
- Glaubitz, Iske, Lampert, Öffner (2025): Efficient construction and application of multi-dimensional summation-by-parts operators to global radial basis function methods. TODO
SummationByPartsOperatorsExtra.GlaubitzLampertWintersNordström2025
— Type
GlaubitzLampertWintersNordström2025()

Sub-cell SBP operators given in
- Glaubitz, Lampert, Winters, Nordström (2025): Towards provable energy stable overset grid methods using sub-cell summation-by-parts operators. TODO

See `subcell_operator`.
SummationByPartsOperatorsExtra.MultidimensionalLinearAdvectionNonperiodicSemidiscretization
— Type
MultidimensionalLinearAdvectionNonperiodicSemidiscretization(D, a, bc)

A semidiscretization of the linear advection equation $\partial_t u(x, t) + a\cdot \nabla u(x, t) = 0$ with boundary conditions `bc(x, t)`. `D` is a multidimensional SBP derivative operator, and `a` is a tuple of the constant coefficients.
SummationByPartsOperatorsExtra.PolynomialBasesDerivativeOperator
— Type
PolynomialBasesDerivativeOperator{T<:Real, BasisType <: NodalBasis{PolynomialBases.Line}}

A derivative operator on a nonperiodic grid with scalar type `T` based on a basis from PolynomialBases.jl.
SummationByPartsOperatorsExtra.PolynomialBasesDerivativeOperator
— Method
PolynomialBasesDerivativeOperator(basis_type, xmin::T, xmax::T, N::Int) where {T<:Real}

Construct the `PolynomialBasesDerivativeOperator` on a grid between `xmin` and `xmax` using `N` nodes and `N-1` modes defined by `basis_type` from PolynomialBases.jl.
SummationByPartsOperatorsExtra.SubcellOperator
— Type
SubcellOperator{T}

SubcellOperator(nodes::Vector{T}, x_M::T,
                weights_left::Vector{T}, weights_right::Vector{T},
                Q_left::QType, Q_right::QType,
                B_left::BType, B_right::BType,
                accuracy_order::Int,
                source::SourceOfCoefficients) where {T <: Real,
                                                     QType <: AbstractMatrix{T},
                                                     BType <: AbstractMatrix{T},
                                                     SourceOfCoefficients}

A sub-cell derivative operator on a non-periodic grid with scalar type `T`. A sub-cell operator consists of two parts, a left and a right part, which are defined on the left and right sub-cells of the grid. Each of the two parts satisfies a summation-by-parts property on its respective sub-cell, and the whole operator satisfies a summation-by-parts property on the whole grid.
The whole operator follows the general interface of a derivative operator, e.g., implementing matrix-vector multiplication, integration, and the mass matrix. To obtain the derivative matrix $D = P^{-1}(Q_L + Q_R)$ associated to the sub-cell operator, use the function `derivative_matrix` or `Matrix`. The left and right mass matrices can be obtained with the functions `mass_matrix_left` and `mass_matrix_right`, respectively. Similarly, the boundary mass matrices can be obtained with the functions `mass_matrix_boundary_left` and `mass_matrix_boundary_right`.

See also `subcell_operator` and `GlaubitzLampertWintersNordström2025`.
References:
- Glaubitz, Lampert, Winters, Nordström (2025): Towards provable energy stable overset grid methods using sub-cell summation-by-parts operators. TODO
PolynomialBases.derivative_matrix
— Method
derivative_matrix(Dop::SubcellOperator)

Returns the derivative matrix $D = P^{-1}(Q_L + Q_R)$ associated to the sub-cell operator `Dop`.
SummationByPartsOperatorsExtra.compute_moments_boundary
— Method
compute_moments_boundary(functions, nodes, normals)
compute_moments_boundary(functions, D::AbstractDerivativeOperator)
compute_moments_boundary(functions, geometry::Meshes.Geometry)

Compute the moments, i.e., the integrals of the product of two basis functions weighted by the normal direction. For each direction, it computes a $K \times K$ matrix, where $K$ is the number of `functions`, and returns a tuple of these matrices. In one dimension, `nodes` and `normals` can be passed. You can also pass a derivative operator `D` or a `Geometry` object from Meshes.jl. Note that the latter is defined in a package extension of MeshIntegrals.jl and therefore requires loading that package first.
SummationByPartsOperatorsExtra.couple_subcell
— Method
couple_subcell(D_left::AbstractNonperiodicDerivativeOperator,
               D_right::AbstractNonperiodicDerivativeOperator,
               x_M)

Construct a `SubcellOperator` from two non-periodic derivative operators `D_left` and `D_right` from SummationByPartsOperators.jl. `D_left` is defined on the left sub-cell, which is the interval $[x_L, x_M]$, and `D_right` is defined on the right sub-cell, which is the interval $[x_M, x_R]$, where `x_L` and `x_R` are the left and right boundaries of the grid of `D_left` and `D_right`, respectively. Note that `x_M` must be between the right boundary of `D_left` and the left boundary of `D_right`.

See also `GlaubitzLampertWintersNordström2025`.
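A minimal sketch of coupling two operators from SummationByPartsOperators.jl; the Lobatto-Legendre operators, grids, and interface location below are illustrative choices, not the only supported ones:

```julia
using SummationByPartsOperatorsExtra, SummationByPartsOperators

# Two non-periodic operators on adjacent sub-cells sharing the point x_M = 0.5.
D_left  = legendre_derivative_operator(0.0, 0.5, 5)
D_right = legendre_derivative_operator(0.5, 1.0, 5)

D = couple_subcell(D_left, D_right, 0.5)

# The result is a SubcellOperator, so the accessors documented above apply, e.g.
Dmat = derivative_matrix(D)
P_L  = mass_matrix_left(D)
```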
SummationByPartsOperatorsExtra.function_space_operator
— Function
function_space_operator(basis_functions, nodes, source;
                        derivative_order = 1, accuracy_order = 0,
                        basis_functions_weights = ones(length(basis_functions)),
                        bandwidth = length(nodes) - 1, size_boundary = 2 * bandwidth,
                        different_values = true, sparsity_pattern = nothing,
                        opt_alg = Optim.LBFGS(), options = Optim.Options(g_tol = 1e-14, iterations = 10000),
                        autodiff = :forward, x0 = nothing, verbose = false)

Construct an operator that represents a first-derivative operator in a function space spanned by the `basis_functions`, which is an iterable of functions. The operator is constructed on the interval `[x_min, x_max]` with the nodes `nodes`, where `x_min` is taken as the minimal value in `nodes` and `x_max` as the maximal value. Note that the `nodes` will be sorted internally. The `accuracy_order` is the order of accuracy of the operator; it can optionally be passed, but does not have any effect on the operator.
The operator is constructed by solving an optimization problem with Optim.jl. You can specify the optimization algorithm, the options for the optimization problem, and the `autodiff` mode with the keyword arguments `opt_alg`, `options`, and `autodiff`, respectively; see also the documentation of Optim.jl about configurable options and automatic differentiation. In this case, reverse mode automatic differentiation is usually significantly faster than forward mode. We recommend using `autodiff = ADTypes.AutoMooncake(; config = nothing)` or `autodiff = ADTypes.AutoEnzyme(; mode = Enzyme.Reverse, function_annotation = Enzyme.Duplicated)`. Note that you need to import the package `ADTypes` as well as the corresponding autodiff package (i.e., `Mooncake` or `Enzyme`) to use these modes.
The initial guess for the optimization problem can be passed with the optional keyword argument `x0`. If `nothing` is passed, a default initial guess (zeros for the entries of the differentiation matrix and equal values for all the weights) is used.
You can weight each basis function with the keyword argument `basis_functions_weights`, which is a vector of weights for each basis function. The default is a vector of ones, meaning that all basis functions are equally weighted. This can be used, e.g., to enforce exactness for certain basis functions (high weights) while allowing non-exactness for others by only minimizing their error (low weights).
There are two alternative ways to enforce sparsity of the resulting operator. The first is by passing a matrix `sparsity_pattern` of zeros and ones, where the ones indicate the non-zero entries of the operator. This matrix should be symmetric or `UpperTriangular` and have zeros on the diagonal.

The second way is to use a banded-block structure for the operator, as is common, e.g., in finite difference methods. The keyword arguments `bandwidth` and `size_boundary` specify the bandwidth and the size of the boundary blocks of the operator. The default `bandwidth` is `length(nodes) - 1`, i.e., a dense operator (in this case `size_boundary` is ignored). To construct a sparse operator, you can set the bandwidth to a smaller value such that `2 * size_boundary + bandwidth < length(nodes)`, which is a requirement for the boundary blocks in the upper left and lower right of the resulting operator. If `different_values` is set to `true`, all the entries in the upper right triangle of S (the skew-symmetric part of D) are different, which is generally meaningful for non-equidistant nodes and general bases. If it is `false`, the entries of the stencil are repeated in the central part and the two boundary closures share their values, which makes sense for uniformly distributed nodes and, e.g., a polynomial basis. The keyword argument `different_values` is ignored for dense operators.
The keyword argument `verbose` can be set to `true` to print information about the optimization process.

The operator that is returned follows the general interface. Currently, it is wrapped in a `SummationByPartsOperators.MatrixDerivativeOperator`, but this might change in the future. In order to use this function, the package `Optim` must be loaded.
See also `SummationByPartsOperators.GlaubitzNordströmÖffner2023`.
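A minimal usage sketch, assuming `Optim` is loaded and `GlaubitzNordströmÖffner2023` from SummationByPartsOperators.jl is used as the source of coefficients; the basis functions and nodes are purely illustrative:

```julia
using SummationByPartsOperatorsExtra, SummationByPartsOperators, Optim

basis_functions = [x -> one(x), x -> x, x -> exp(x)]
nodes = collect(range(0.0, 1.0, length = 10))

# Dense operator that is exact (up to the optimization tolerance) for the basis.
D = function_space_operator(basis_functions, nodes,
                            GlaubitzNordströmÖffner2023())

# D follows the general derivative operator interface, e.g.
# D * exp.(grid(D)) approximates the derivative of exp at the nodes.
```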
SummationByPartsOperatorsExtra.get_multidimensional_optimization_entries
— Method
get_multidimensional_optimization_entries(D;
                                          bandwidth = div(accuracy_order(D), 2),
                                          size_boundary = SummationByPartsOperators.lower_bandwidth(D) + 1,
                                          different_values = false,
                                          sparsity_pattern = nothing)

Get the entries to optimize for in an optimization-based construction procedure of multidimensional summation-by-parts operators from a derivative operator `D`. It contains the entries of the skew-symmetric part of the operator $D$, the entries of the diagonal mass matrix $M$, and the entries of the diagonal boundary mass matrix. For more details, see `multidimensional_function_space_operator`. The output can be passed as initial values to the optimization problem via `x0`.
SummationByPartsOperatorsExtra.get_nsigma
— Method
get_nsigma(N; bandwidth = N - 1,
           size_boundary = 2 * bandwidth, different_values = true,
           sparsity_pattern = nothing)

Get the number of unique non-zero entries in a skew-symmetric matrix. If `bandwidth` is `N - 1`, the whole upper right triangle is used. If `bandwidth` is smaller, a block-banded structure with boundary blocks of size `size_boundary` and a banded matrix with bandwidth `bandwidth` in the middle is used. If `different_values` is `false`, the stencils are repeating. If `sparsity_pattern` is given, the number of non-zero entries in the sparsity pattern is returned. The sparsity pattern is assumed to be an `UpperTriangular` matrix with zeros on the diagonal.
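For illustration (values follow from the description above): with the dense default, all entries of the strict upper triangle are counted, i.e., `N * (N - 1) / 2`; with a banded-block structure, the count is correspondingly smaller.

```julia
get_nsigma(10)   # dense default (bandwidth = 9): 10 * 9 / 2 = 45 unique entries

# Banded-block structure with repeating interior stencil (illustrative parameters
# satisfying 2 * size_boundary + bandwidth < N).
get_nsigma(20; bandwidth = 2, size_boundary = 4, different_values = false)
```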
SummationByPartsOperatorsExtra.get_optimization_entries
— Method
get_optimization_entries(D;
                         bandwidth = div(accuracy_order(D), 2),
                         size_boundary = SummationByPartsOperators.lower_bandwidth(D) + 1,
                         different_values = false,
                         sparsity_pattern = nothing)

Get the entries to optimize for in an optimization-based construction procedure of summation-by-parts operators from a derivative operator `D`. It contains the entries of the skew-symmetric part of the operator `D` and the entries of the diagonal mass matrix `M`. For more details, see `function_space_operator`. The output can be passed as initial values to the optimization problem via `x0`.
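A sketch of warm-starting the optimization from an existing operator; the finite-difference SBP operator below (from SummationByPartsOperators.jl) is only an illustrative starting point:

```julia
using SummationByPartsOperatorsExtra, SummationByPartsOperators

D_init = derivative_operator(MattssonNordström2004();
                             derivative_order = 1, accuracy_order = 4,
                             xmin = 0.0, xmax = 1.0, N = 20)

# Entries of the skew-symmetric part and the mass matrix of D_init, in the
# layout expected by the optimization.
x0 = get_optimization_entries(D_init)
# These can be passed as `x0` to `function_space_operator` (with matching
# bandwidth/size_boundary/different_values settings).
```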
SummationByPartsOperatorsExtra.get_sparsity_pattern
— Method
get_sparsity_pattern(S)
get_sparsity_pattern(D::AbstractNonperiodicDerivativeOperator)
get_sparsity_pattern(D::AbstractMultidimensionalMatrixDerivativeOperator{2})

If `S` is a (skew-symmetric) matrix, this function returns the sparsity pattern of `S` as an `UpperTriangular` matrix. If `D` is a one-dimensional derivative operator, this function returns the sparsity pattern of the skew-symmetric part of `D`. If `D` is a two-dimensional derivative operator, this function returns a tuple of the sparsity patterns of the skew-symmetric parts of `D` in each direction.
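A small sketch for the one-dimensional case, using a classical finite-difference SBP operator from SummationByPartsOperators.jl as an example input:

```julia
using SummationByPartsOperatorsExtra, SummationByPartsOperators

D = derivative_operator(MattssonNordström2004();
                        derivative_order = 1, accuracy_order = 2,
                        xmin = 0.0, xmax = 1.0, N = 20)

# UpperTriangular pattern of the skew-symmetric part of D.
sp = get_sparsity_pattern(D)
```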
SummationByPartsOperatorsExtra.grid_left
— Method
grid_left(D::SubcellOperator)

Returns the grid associated to the left part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.grid_right
— Method
grid_right(D::SubcellOperator)

Returns the grid associated to the right part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.left_projection_left
— Method
left_projection_left(D::SubcellOperator)

Returns the left projection operator $e_L$ associated to the left part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.left_projection_right
— Method
left_projection_right(D::SubcellOperator)

Returns the right projection operator $e_{M_L}$ associated to the left part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.mass_matrix_boundary_left
— Method
mass_matrix_boundary_left(D::SubcellOperator)

Returns the mass matrix associated to the left boundary of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.mass_matrix_boundary_right
— Method
mass_matrix_boundary_right(D::SubcellOperator)

Returns the mass matrix associated to the right boundary of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.mass_matrix_left
— Method
mass_matrix_left(D::SubcellOperator)

Returns the mass matrix associated to the left part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.mass_matrix_right
— Method
mass_matrix_right(D::SubcellOperator)

Returns the mass matrix associated to the right part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.multidimensional_function_space_operator
— Function
multidimensional_function_space_operator(basis_functions, nodes, boundary_indices, normals, moments, vol, source;
                                         derivative_order = 1, accuracy_order = 0,
                                         bandwidth = length(nodes) - 1, size_boundary = 2 * bandwidth,
                                         different_values = true, sparsity_pattern = nothing,
                                         opt_alg = Optim.LBFGS(), options = Optim.Options(g_tol = 1e-14, iterations = 10000),
                                         autodiff = :forward, x0 = nothing, verbose = false)

Construct a `SummationByPartsOperators.MultidimensionalMatrixDerivativeOperator` that represents a first-derivative operator in a function space spanned by the `basis_functions`, which is an iterable of functions. The operator is constructed on the scattered nodes `nodes`, which should be provided as an iterable of `SVector{Dim, T}`. The `boundary_indices` is a vector of indices indicating which nodes are on the boundary. `normals` is a vector of `SVector{Dim, T}` that contains the normal vectors of the boundary nodes. The `moments` are a `Tuple` of matrices that represent the moments of the basis functions in each direction. The total volume of the domain is given by `vol`.
The `accuracy_order` is the order of accuracy of the operator; it can optionally be passed, but does not have any effect on the operator.
The operator is constructed by solving an optimization problem with Optim.jl. You can specify the optimization algorithm, the options for the optimization problem, and the `autodiff` mode with the keyword arguments `opt_alg`, `options`, and `autodiff`, respectively; see also the documentation of Optim.jl about configurable options and automatic differentiation. In this case, reverse mode automatic differentiation is usually significantly faster than forward mode. We recommend using `autodiff = ADTypes.AutoMooncake(; config = nothing)` or `autodiff = ADTypes.AutoEnzyme(; mode = Enzyme.Reverse, function_annotation = Enzyme.Duplicated)`. Note that you need to import the package `ADTypes` as well as the corresponding autodiff package (i.e., `Mooncake` or `Enzyme`) to use these modes.
The initial guess for the optimization problem can be passed with the optional keyword argument `x0`. If `nothing` is passed, a default initial guess (zeros for the entries of the differentiation matrix and equal values for all the weights and boundary weights) is used.
There are two alternative ways to enforce sparsity of the resulting operator. The first is by passing a matrix `sparsity_pattern` of zeros and ones, where the ones indicate the non-zero entries of the operator. This matrix should be symmetric or `UpperTriangular` and have zeros on the diagonal.

The second way is to use a banded-block structure for the operator, as is common, e.g., in finite difference methods. The keyword arguments `bandwidth` and `size_boundary` specify the bandwidth and the size of the boundary blocks of the differentiation matrices in each direction. The default `bandwidth` is `length(nodes) - 1`, i.e., dense operators (in this case `size_boundary` is ignored). To construct sparse operators, you can set the bandwidth to a smaller value such that `2 * size_boundary + bandwidth < length(nodes)`, which is a requirement for the boundary blocks in the upper left and lower right of the resulting operator. If `different_values` is set to `true`, all the entries in the upper right triangle of all matrices S (the skew-symmetric parts of the differentiation matrices D) are different, which is generally meaningful for non-equidistant nodes and general bases. If it is `false`, the entries of the stencil are repeated in the central part and the two boundary closures share their values, which makes sense for uniformly distributed nodes and, e.g., a polynomial basis. The keyword argument `different_values` is ignored for dense operators. The parameters `bandwidth`, `size_boundary`, and `different_values` are only used if `sparsity_pattern` is not provided.
The keyword argument `verbose` can be set to `true` to print information about the optimization process.

In order to use this function, the package `Optim` must be loaded.
See also `GlaubitzIskeLampertÖffner2025`.
SummationByPartsOperatorsExtra.neighborhood_sparsity_pattern
— Method
neighborhood_sparsity_pattern(nodes, lengths)

For a given set of `nodes` in a multi-dimensional space, this function computes the sparsity pattern of the differentiation matrices, which only includes non-zero entries for nodes that lie within a certain ellipsoidal neighborhood of each other. `lengths` is a tuple of length `d` (the dimension) containing the lengths of the ellipsoid that determines which nodes are counted as neighbors. For example, for a differentiation matrix in the x direction, it makes sense to use a larger length in the x direction than in the y direction.
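A minimal sketch in two dimensions; the node set and the ellipsoid lengths are illustrative:

```julia
using SummationByPartsOperatorsExtra, StaticArrays

# Scattered (here: tensor-product) nodes on the unit square.
nodes = vec([SVector(x, y) for x in range(0.0, 1.0, length = 5),
                               y in range(0.0, 1.0, length = 5)])

# Neighborhood stretched in the x direction, e.g., for the x-derivative matrix.
pattern_x = neighborhood_sparsity_pattern(nodes, (0.6, 0.3))
```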
SummationByPartsOperatorsExtra.plot_nodes
— Function
plot_nodes(nodes_inner, nodes_boundary; corners = nothing, kwargs...)
plot_nodes(nodes, boundary_indices::Vector{Int}; corner_indices = nothing,
           kwargs...)
plot_nodes(D; kwargs...)

Plot the nodes of a multidimensional derivative operator `D`. The interior nodes `nodes_inner` and the boundary nodes `nodes_boundary` are plotted in different colors. If `corner_indices` are provided, the corners are also plotted in a different color. Additional keyword arguments are passed to `viz` from Meshes.jl, see the documentation. The function returns the current figure.
SummationByPartsOperatorsExtra.plot_normals
— Function
plot_normals(nodes_boundary, normals; kwargs...)
plot_normals(D; kwargs...)

Plot the normals of a multidimensional derivative operator `D`. The boundary nodes `nodes_boundary` are plotted and the normals are plotted as arrows. Additional keyword arguments are passed to `viz` from Meshes.jl, see the documentation. The function returns the current figure.
SummationByPartsOperatorsExtra.plot_sparsity_pattern
— Function
plot_sparsity_pattern(sparsity_pattern, nodes, node_index)

Plot the `sparsity_pattern`, which is a boolean matrix of size `length(nodes) x length(nodes)`, where `nodes` is a set of nodes. The `node_index` is the index of the node in `nodes` around which the sparsity pattern is plotted: this node is plotted in red, all neighboring nodes according to the `sparsity_pattern` are plotted as green dots, and the remaining nodes are plotted as blue dots.
SummationByPartsOperatorsExtra.polynomialbases_derivative_operator
— Method
polynomialbases_derivative_operator(basis_type, xmin::Real, xmax::Real, N::Integer)
polynomialbases_derivative_operator(basis_type; xmin::Real, xmax::Real, N::Integer)

Construct the `PolynomialBasesDerivativeOperator` on a uniform grid between `xmin` and `xmax` using `N` nodes and `N-1` Legendre modes.
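A minimal sketch, assuming the basis type from PolynomialBases.jl (e.g., `LobattoLegendre`) is passed directly as `basis_type`:

```julia
using SummationByPartsOperatorsExtra, PolynomialBases

# Illustrative parameters; the keyword form of the constructor is used here.
D = polynomialbases_derivative_operator(LobattoLegendre; xmin = -1.0, xmax = 1.0, N = 6)
```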
SummationByPartsOperatorsExtra.quantities
— Method
quantities(analysis_callback)

Return the computed quantities for each time step.
SummationByPartsOperatorsExtra.right_projection_left
— Method
right_projection_left(D::SubcellOperator)

Returns the left projection operator $e_{M_R}$ associated to the right part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.right_projection_right
— Method
right_projection_right(D::SubcellOperator)

Returns the right projection operator $e_R$ associated to the right part of the sub-cell operator `D`.
SummationByPartsOperatorsExtra.subcell_operator
— Function
subcell_operator(basis_functions, nodes, x_M, source;
                 derivative_order = 1, accuracy_order = 0,
                 bandwidths = [N_L - 1, N_R - 1], size_boundaries = 2 .* bandwidths,
                 different_values = [true, true], sparsity_patterns = [nothing, nothing],
                 M_local_approximation = [N_L, N_R],
                 opt_alg = Optim.LBFGS(), options = Optim.Options(g_tol = 1e-14, iterations = 10000),
                 autodiff = :forward, x0 = nothing, verbose = false)

Construct a sub-cell operator in a function space spanned by the `basis_functions`, which is an iterable of functions. The operator is constructed on the interval `[x_min, x_max]` with the nodes `nodes`, where `x_min` is taken as the minimal value in `nodes` and `x_max` as the maximal value. Note that the `nodes` will be sorted internally. The left part of the sub-cell operator consists of the `nodes` that are smaller than `x_M` and the right part of the `nodes` that are bigger than `x_M`. The `accuracy_order` is the order of accuracy of the operator; it can optionally be passed, but does not have any effect on the operator.
The operator is constructed by solving an optimization problem with Optim.jl. You can specify the optimization algorithm, the options for the optimization problem, and the `autodiff` mode with the keyword arguments `opt_alg`, `options`, and `autodiff`, respectively; see also the documentation of Optim.jl about configurable options and automatic differentiation. In this case, reverse mode automatic differentiation is usually significantly faster than forward mode. We recommend using `autodiff = ADTypes.AutoMooncake(; config = nothing)` or `autodiff = ADTypes.AutoEnzyme(; mode = Enzyme.Reverse, function_annotation = Enzyme.Duplicated)`. Note that you need to import the package `ADTypes` as well as the corresponding autodiff package (i.e., `Mooncake` or `Enzyme`) to use these modes.

The initial guess for the optimization problem can be passed with the optional keyword argument `x0`. If `nothing` is passed, a default initial guess (zeros for the entries of the differentiation matrix and equal values for all the weights) is used.
There are two alternative ways to enforce sparsity of the resulting left and right operator. The first is by passing matrices `sparsity_patterns` of zeros and ones each, where the ones indicate the non-zero entries of the left and right operator, respectively. The matrices should be symmetric or `UpperTriangular` and have zeros on the diagonal.

The second way is to use a banded-block structure for the parts of the operator, as is common, e.g., in finite difference methods. The keyword arguments `bandwidths` and `size_boundaries` specify the bandwidth and the size of the boundary blocks of the operators. The default `bandwidths` is set to the number of nodes in the left and right sub-cell minus one, i.e., a dense operator (in this case `size_boundaries` is ignored). To construct a sparse operator, you can set the bandwidth to a smaller value such that `2 * size_boundaries[i] + bandwidths[i] < N_{L/R}`, which is a requirement for the boundary blocks in the upper left and lower right of the resulting operator. If `different_values` is set to `true`, all the entries in the upper right triangle of S (the skew-symmetric parts of the differentiation matrix blocks) are different, which is generally meaningful for non-equidistant nodes and general bases. If it is `false`, the entries of the stencil are repeated in the central part and the two boundary closures share their values, which makes sense for uniformly distributed nodes and, e.g., a polynomial basis. The keyword argument `different_values` is ignored for dense operators.
You can use the keyword argument `M_local_approximation` to specify the number of points used for local approximations of the discrete projections. The default is to use the number of nodes in the left and right sub-cell, respectively. To use an interpolation, you can set `M_local_approximation` to `[K, K]`, where `K` is the number of basis functions.

The keyword argument `verbose` can be set to `true` to print information about the optimization process.

Returns a `SubcellOperator` object.

See also `GlaubitzLampertWintersNordström2025`.
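A minimal usage sketch, assuming `Optim` is loaded; the basis functions, nodes, and interface location `x_M` are illustrative:

```julia
using SummationByPartsOperatorsExtra, Optim

basis_functions = [x -> one(x), x -> x, x -> x^2]
nodes = collect(range(0.0, 1.0, length = 10))  # does not contain 0.5 itself

# Split the nodes at x_M = 0.5 into a left and a right sub-cell.
D = subcell_operator(basis_functions, nodes, 0.5,
                     GlaubitzLampertWintersNordström2025())
```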
SummationByPartsOperatorsExtra.tstops
— Method
tstops(analysis_callback)

Return the time values that correspond to the saved values of the `quantities`.