sbplib_julia: changeset 1049:3bb94ce74697 (branch feature/variable_derivatives)
Merge default
author | Jonatan Werpers <jonatan@werpers.com>
---|---
date | Wed, 23 Mar 2022 12:54:45 +0100
parents | 86aa69ad3304 (current diff) c16116e403e2 (diff)
children | eeecdf135912
files | Notes.md TODO.md src/LazyTensors/lazy_tensor_operations.jl src/SbpOperators/SbpOperators.jl src/SbpOperators/boundaryops/boundary_operator.jl src/SbpOperators/readoperator.jl src/SbpOperators/stencil_set.jl src/SbpOperators/volumeops/constant_interior_scaling_operator.jl src/SbpOperators/volumeops/derivatives/second_derivative_variable.jl src/SbpOperators/volumeops/volume_operator.jl test/SbpOperators/boundaryops/boundary_operator_test.jl test/SbpOperators/readoperator_test.jl test/SbpOperators/stencil_set_test.jl test/SbpOperators/volumeops/derivatives/second_derivative_variable_test.jl
diffstat | 42 files changed, 1238 insertions(+), 1180 deletions(-)
--- a/Notes.md	Mon Mar 21 13:21:48 2022 +0100
+++ b/Notes.md	Wed Mar 23 12:54:45 2022 +0100
@@ -132,20 +132,17 @@
   * When doing specialised computations for different parts of the range/domain?
   * More?
 
-  Maybe if we should have dynamic sizing it could be only for the range. `domain_size` would not be implemented. And the `range_size` would be a function of a vector that the TensorMapping is applied to.
+  Maybe if we should have dynamic sizing it could be only for the range. `domain_size` would not be implemented. And the `range_size` would be a function of a vector that the LazyTensor is applied to.
 
 ## Research and thinking
- - [ ] Use a trait to indicate that a TensorMapping has the same range and domain?
- - [ ] Rename all the Tensor stuff to just LazyOperator, LazyApplication and so on?
+ - [ ] Use a trait to indicate that a LazyTensor has the same range and domain?
  - [ ] Check how the native julia doc generator works
  - [ ] Check if Vidars design docs fit in there
  - [ ] Create a macro @lazy which replaces a binary op (+,-) by its lazy equivalent? Would be a neat way to indicate which evaluations are lazy without cluttering/confusing with special characters.
- - [ ] Specify operators in TOML or something similar? (H.., H_gamma etc.)
  - [ ] Dispatch on Lower() instead of the type Lower, so `::Lower` instead of `::Type{Lower}`??? Seems better unless there is some specific reason to use the type instead of the value.
  - [ ] How do we handle mixes of periodic and non-periodic grids? Seems it should be supported on the grid level and on the 1d operator level. Between there it should be transparent.
- - [ ] Can we have a trait to tell if a TensorMapping is transposable?
+ - [ ] Can we have a trait to tell if a LazyTensor is transposable?
  - [ ] Is it ok to have "Constructors" for abstract types which create subtypes? For example a Grids() function that gives different kinds of grids based on input?
  - [ ] Figure out how to treat the borrowing parameters of operators. Include them in the struct? Expose via a function dispatched on the operator type and grid?
@@ -153,10 +150,10 @@
 
 The identifiers (`Upper`, `Lower`, `Interior`) used for region indices should probably be included in the grid module. This allows new grid types to come with their own regions.
 
 ## Regions and tensormappings
-- [ ] Use a trait to indicate if a TensorMapping uses indices with regions.
+- [ ] Use a trait to indicate if a LazyTensor uses indices with regions.
  The default should be that they do NOT.
  - [ ] What to name this trait? Can we call it IndexStyle but not export it to avoid conflicts with Base.IndexStyle?
- - [ ] Figure out repeated application of regioned TensorMappings. Maybe an instance of a tensor mapping needs to know the exact size of the range and domain for this to work?
+ - [ ] Figure out repeated application of regioned LazyTensors. Maybe an instance of a tensor mapping needs to know the exact size of the range and domain for this to work?
 
 ### Ideas for information sharing functions
 ```julia
@@ -301,16 +298,16 @@
 We can have tensor operators that act on a scalar field and produce a vector field or a tensor field.
 We can also have tensor operators that act on a vector field or a tensor field and produce a scalar field.
 
-TBD: Right now `apply_transpose` assumes that the domain type is the same as the range type. We need to break that assumption somehow. One alternative is to let a TensorMapping have `T_domain` and `T_range` instead of just `T`. That feels a bit messy, though. Another alternative would be some kind of trait for the transpose? It could contain the type that the transpose acts on? Not sure that works, though.
+TBD: Right now `apply_transpose` assumes that the domain type is the same as the range type. We need to break that assumption somehow. One alternative is to let a LazyTensor have `T_domain` and `T_range` instead of just `T`. That feels a bit messy, though. Another alternative would be some kind of trait for the transpose? It could contain the type that the transpose acts on? Not sure that works, though.
 
-TBD: What is the goal of the `T` parameter for a TensorMapping? If we want to be able to apply a difference operator to anything, we cannot assume that a `TensorMapping{T}` only acts on instances of `T`.
+TBD: What is the goal of the `T` parameter for a LazyTensor? If we want to be able to apply a difference operator to anything, we cannot assume that a `LazyTensor{T}` only acts on instances of `T`.
 
 One could implement `∇` as a tensor mapping that acts on `T` and returns `StaticVector{N,T} where N`.
 (One could possibly also let it act on `StaticMatrix{N,T,D} where N` and return `StaticMatrix{M,T,D+1}`. The question is whether anything is gained by that...)
 
-Could have a function `range_type(::TensorMapping, ::Type{domain_type})`
+Could have a function `range_type(::LazyTensor, ::Type{domain_type})`
 
-Perhaps one could implement `⋅(tm::TensorMapping{R,D}, v::AbstractArray{T,D})` where `T` is an AbstractArray and `tm` somehow has components, as many as `T` has elements.
+Perhaps one could implement `⋅(tm::LazyTensor{R,D}, v::AbstractArray{T,D})` where `T` is an AbstractArray and `tm` somehow has components, as many as `T` has elements.
 
 ### Rejected alternatives:
@@ -342,7 +339,7 @@
 An important operation for vector fields is being able to extract components as grid functions. This probably needs to be possible lazily. There are a couple of different solutions:
 * Implement our own kind of view that takes care of this. Or Accessors.jl?
-* Use a TensorMapping
+* Use a LazyTensor
 * Some kind of lazy broadcast
 * A lazy array that applies a function to each element.
@@ -389,4 +386,4 @@
 
 ## Name of the `VolumeOperator` type for constant stencils
 
-It seems that the name is too general. The name of the method `volume_operator` makes sense. It should return different types of `TensorMapping` specialized for the grid. A suggestion for a better name is `ConstantStencilVolumeOperator`
+It seems that the name is too general. The name of the method `volume_operator` makes sense. It should return different types of `LazyTensor` specialized for the grid. A suggestion for a better name is `ConstantStencilVolumeOperator`
--- a/TODO.md	Mon Mar 21 13:21:48 2022 +0100
+++ b/TODO.md	Wed Mar 23 12:54:45 2022 +0100
@@ -9,10 +9,7 @@
  - [ ] Use `@inferred` in a lot of tests.
  - [ ] Replace `@inferred` tests with a benchmark suite that automatically tests for regressions.
  - [ ] Make sure we are setting tolerances in tests in a consistent way
- - [ ] Add check for correct domain sizes to lazy tensor operations using SizeMismatch
  - [ ] Write down some coding guidelines or a checklist for code conventions. For example i,j,... for indices and I for multi-index
- - [ ] Add boundschecking in TensorMappingApplication
- - [ ] Start renaming things in LazyTensors
  - [ ] Clean up RegionIndices
    1. [ ] Write tests for how things should work
    2. [ ] Update RegionIndices accordingly
@@ -23,7 +20,6 @@
    See (https://docs.julialang.org/en/v1/manual/types/#man-custom-pretty-printing)
  - [ ] Collect accuracy and SBP-ness tests for all operators in one place
  - [ ] Move export statements to the top of each module
- - [ ] Add a type StencilSet for easier dispatch
  - [ ] Go through all type parameters and check whether they are justified, both in signatures and in types, e.g. D in VariableSecondDerivative. Can we use promote instead?
--- a/src/LazyTensors/LazyTensors.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/LazyTensors/LazyTensors.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,17 +1,36 @@
 module LazyTensors
 
-export LazyTensorMappingApplication
-export LazyTensorMappingTranspose
-export TensorMappingComposition
-export LazyLinearMap
-export IdentityMapping
-export InflatedTensorMapping
+export TensorApplication
+export TensorTranspose
+export TensorComposition
+export DenseTensor
+export IdentityTensor
+export ScalingTensor
+export InflatedTensor
 export LazyOuterProduct
 export ⊗
-export SizeMismatch
+export DomainSizeMismatch
+export RangeSizeMismatch
 
-include("tensor_mapping.jl")
+include("lazy_tensor.jl")
+include("tensor_types.jl")
 include("lazy_array.jl")
 include("lazy_tensor_operations.jl")
+include("tuple_manipulation.jl")
+
+# Applying lazy tensors to vectors
+Base.:*(a::LazyTensor, v::AbstractArray) = TensorApplication(a,v)
+Base.:*(a::LazyTensor, b::LazyTensor) = throw(MethodError(Base.:*,(a,b)))
+Base.:*(a::LazyTensor, args::Union{LazyTensor, AbstractArray}...) = foldr(*,(a,args...))
+
+# Addition and subtraction of lazy tensors
+Base.:+(s::LazyTensor, t::LazyTensor) = ElementwiseTensorOperation{:+}(s,t)
+Base.:-(s::LazyTensor, t::LazyTensor) = ElementwiseTensorOperation{:-}(s,t)
+
+# Composing lazy tensors
+Base.:∘(s::LazyTensor, t::LazyTensor) = TensorComposition(s,t)
+
+# Outer products of tensors
+⊗(a::LazyTensor, b::LazyTensor) = LazyOuterProduct(a,b)
 
 end # module
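For orientation (not part of the changeset), a minimal sketch of how these new overloads combine; it assumes the `ScalingTensor` and `IdentityTensor` types introduced in `tensor_types.jl` further down:

```julia
using Sbplib.LazyTensors

s = ScalingTensor(2.0, (3,))  # scales length-3 vectors by 2
t = ScalingTensor(0.5, (3,))
v = [1.0, 2.0, 3.0]

(s*v)[2]               # == 4.0; lazy application, computed on indexing
((s ∘ t)*v)[2]         # == 2.0; lazy composition
((s + t)*v)[2]         # == 5.0; elementwise sum of tensors
s ⊗ IdentityTensor(4)  # outer product: a LazyTensor acting on 3×4 arrays
```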
--- a/src/LazyTensors/lazy_array.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/LazyTensors/lazy_array.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -36,7 +36,7 @@
 
 function Base.getindex(lfa::LazyFunctionArray{F,T,D}, I::Vararg{Int,D}) where {F,T,D}
     @boundscheck checkbounds(lfa, I...)
-    return lfa.f(I...)
+    return @inbounds lfa.f(I...)
 end
 
@@ -61,25 +61,17 @@
 end
 LazyElementwiseOperation{T,D,Op}(a::AbstractArray{T,D},b::T) where {T,D,Op} = LazyElementwiseOperation{T,D,Op}(a, LazyConstantArray(b, size(a)))
 LazyElementwiseOperation{T,D,Op}(a::T,b::AbstractArray{T,D}) where {T,D,Op} = LazyElementwiseOperation{T,D,Op}(LazyConstantArray(a, size(b)), b)
-# TODO: Move Op to be the first parameter? Compare to Binary operations
 
 Base.size(v::LazyElementwiseOperation) = size(v.a)
 
-evaluate(leo::LazyElementwiseOperation{T,D,:+}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] + leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:-}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] - leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:*}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] * leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:/}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] / leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:+}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] + leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:-}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] - leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:*}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] * leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:/}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] / leo.b[I...]
 
-# TODO: Make sure boundschecking is done properly and that the lenght of the vectors are equal
-# NOTE: Boundschecking in getindex functions now assumes that the size of the
-# vectors in the LazyElementwiseOperation are the same size. If we remove the
-# size assertion in the constructor we might have to handle
-# boundschecking differently.
-Base.@propagate_inbounds @inline function Base.getindex(leo::LazyElementwiseOperation{T,D}, I::Vararg{Int,D}) where {T,D}
-    @boundscheck if !checkbounds(Bool, leo.a, I...)
-        throw(BoundsError([leo], I...))
-    end
-    return evaluate(leo, I...)
+function Base.getindex(leo::LazyElementwiseOperation{T,D}, I::Vararg{Int,D}) where {T,D}
+    @boundscheck checkbounds(leo.a, I...)
+    return @inbounds evaluate(leo, I...)
 end
 
 # Define lazy operations for AbstractArrays. Operations constructs a LazyElementwiseOperation which
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/lazy_tensor.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -0,0 +1,79 @@
+export LazyTensor
+export apply
+export apply_transpose
+export range_dim, domain_dim
+export range_size, domain_size
+
+"""
+    LazyTensor{T,R,D}
+
+Describes a mapping of a `D`-dimensional tensor to an `R`-dimensional tensor.
+The action of the mapping is implemented through the method
+```julia
+    apply(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
+```
+
+The size of the range and domain that the operator works with should be returned by
+the functions
+```julia
+    range_size(::LazyTensor)
+    domain_size(::LazyTensor)
+```
+to allow querying for one or the other.
+
+Optionally the action of the transpose may be defined through
+```julia
+    apply_transpose(t::LazyTensor{T,R,D}, v::AbstractArray{T,D}, I::Vararg) where {R,D,T}
+```
+"""
+abstract type LazyTensor{T,R,D} end
+
+"""
+    apply(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
+
+Return the result of the mapping for a given index.
+"""
+function apply end
+
+"""
+    apply_transpose(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg) where {R,D,T}
+
+Return the result of the transposed mapping for a given index.
+"""
+function apply_transpose end
+
+"""
+    range_dim(::LazyTensor)
+
+Return the dimension of the range space of a given mapping.
+"""
+range_dim(::LazyTensor{T,R,D}) where {T,R,D} = R
+
+"""
+    domain_dim(::LazyTensor)
+
+Return the dimension of the domain space of a given mapping.
+"""
+domain_dim(::LazyTensor{T,R,D}) where {T,R,D} = D
+
+
+"""
+    range_size(M::LazyTensor)
+
+Return the range size for the mapping.
+"""
+function range_size end
+
+"""
+    domain_size(M::LazyTensor)
+
+Return the domain size for the mapping.
+"""
+function domain_size end
+
+
+"""
+    eltype(::LazyTensor{T})
+
+The type of elements the LazyTensor acts on.
+"""
+Base.eltype(::LazyTensor{T}) where T = T
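To make the interface concrete, here is a hedged sketch of a user-defined `LazyTensor`; the `ReverseTensor` type is invented for illustration and is not part of the package:

```julia
using Sbplib.LazyTensors

# A 1D lazy tensor that reverses its input (illustrative only).
struct ReverseTensor{T} <: LazyTensor{T,1,1}
    size::Int
end

LazyTensors.range_size(r::ReverseTensor) = (r.size,)
LazyTensors.domain_size(r::ReverseTensor) = (r.size,)
LazyTensors.apply(r::ReverseTensor, v::AbstractArray{<:Any,1}, i) = v[r.size-i+1]
LazyTensors.apply_transpose(r::ReverseTensor, v::AbstractArray{<:Any,1}, i) = v[r.size-i+1]

R = ReverseTensor{Float64}(3)
(R*[1.0, 2.0, 3.0])[1]  # == 3.0; `*` builds a lazy TensorApplication
```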
--- a/src/LazyTensors/lazy_tensor_operations.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/LazyTensors/lazy_tensor_operations.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,206 +1,139 @@
 using Sbplib.RegionIndices
 
 """
-    LazyTensorMappingApplication{T,R,D} <: LazyArray{T,R}
+    TensorApplication{T,R,D} <: LazyArray{T,R}
 
-Struct for lazy application of a TensorMapping. Created using `*`.
+Struct for lazy application of a LazyTensor. Created using `*`.
 
-Allows the result of a `TensorMapping` applied to a vector to be treated as an `AbstractArray`.
-With a mapping `m` and a vector `v` the LazyTensorMappingApplication object can be created by `m*v`.
+Allows the result of a `LazyTensor` applied to a vector to be treated as an `AbstractArray`.
+With a mapping `m` and a vector `v` the TensorApplication object can be created by `m*v`.
 The actual result will be calculated when indexing into `m*v`.
 """
-struct LazyTensorMappingApplication{T,R,D, TM<:TensorMapping{<:Any,R,D}, AA<:AbstractArray{<:Any,D}} <: LazyArray{T,R}
+struct TensorApplication{T,R,D, TM<:LazyTensor{<:Any,R,D}, AA<:AbstractArray{<:Any,D}} <: LazyArray{T,R}
     t::TM
     o::AA
 
-    function LazyTensorMappingApplication(t::TensorMapping{<:Any,R,D}, o::AbstractArray{<:Any,D}) where {R,D}
+    function TensorApplication(t::LazyTensor{<:Any,R,D}, o::AbstractArray{<:Any,D}) where {R,D}
+        @boundscheck check_domain_size(t, size(o))
         I = ntuple(i->1, range_dim(t))
         T = typeof(apply(t,o,I...))
         return new{T,R,D,typeof(t), typeof(o)}(t,o)
     end
 end
-# TODO: Do boundschecking on creation!
 
-Base.getindex(ta::LazyTensorMappingApplication{T,R}, I::Vararg{Any,R}) where {T,R} = apply(ta.t, ta.o, I...)
-Base.getindex(ta::LazyTensorMappingApplication{T,1}, I::CartesianIndex{1}) where {T} = apply(ta.t, ta.o, I.I...) # Would otherwise be caught in the previous method.
-Base.size(ta::LazyTensorMappingApplication) = range_size(ta.t)
-# TODO: What else is needed to implement the AbstractArray interface?
-
-Base.:*(a::TensorMapping, v::AbstractArray) = LazyTensorMappingApplication(a,v)
-Base.:*(a::TensorMapping, b::TensorMapping) = throw(MethodError(Base.:*,(a,b)))
-Base.:*(a::TensorMapping, args::Union{TensorMapping, AbstractArray}...) = foldr(*,(a,args...))
+function Base.getindex(ta::TensorApplication{T,R}, I::Vararg{Any,R}) where {T,R}
+    @boundscheck checkbounds(ta, Int.(I)...)
+    return @inbounds apply(ta.t, ta.o, I...)
+end
+Base.@propagate_inbounds Base.getindex(ta::TensorApplication{T,1} where T, I::CartesianIndex{1}) = ta[Tuple(I)...] # Would otherwise be caught in the previous method.
+Base.size(ta::TensorApplication) = range_size(ta.t)
 
-# # We need the associativity to be a→b→c = a→(b→c), which is the case for '→'
-# # Should we overload some other infix binary opesrator?
-# →(tm::TensorMapping{T,R,D}, o::AbstractArray{T,D}) where {T,R,D} = LazyTensorMappingApplication(tm,o)
-# TODO: We need to be really careful about good error messages.
-# For example what happens if you try to multiply LazyTensorMappingApplication with a TensorMapping(wrong order)?
 
 """
-    LazyTensorMappingTranspose{T,R,D} <: TensorMapping{T,D,R}
+    TensorTranspose{T,R,D} <: LazyTensor{T,D,R}
 
-Struct for lazy transpose of a TensorMapping.
+Struct for lazy transpose of a LazyTensor.
 
 If a mapping implements the `apply_transpose` method this allows working with
-the transpose of mapping `m` by using `m'`. `m'` will work as a regular TensorMapping lazily calling
+the transpose of mapping `m` by using `m'`. `m'` will work as a regular LazyTensor lazily calling
 the appropriate methods of `m`.
 """
-struct LazyTensorMappingTranspose{T,R,D, TM<:TensorMapping{T,R,D}} <: TensorMapping{T,D,R}
+struct TensorTranspose{T,R,D, TM<:LazyTensor{T,R,D}} <: LazyTensor{T,D,R}
     tm::TM
 end
 
 # # TBD: Should this be implemented on a type by type basis or through a trait to provide earlier errors?
-# Jonatan 2020-09-25: Is the problem that you can take the transpose of any TensorMapping even if it doesn't implement `apply_transpose`?
-Base.adjoint(tm::TensorMapping) = LazyTensorMappingTranspose(tm)
-Base.adjoint(tmt::LazyTensorMappingTranspose) = tmt.tm
+# Jonatan 2020-09-25: Is the problem that you can take the transpose of any LazyTensor even if it doesn't implement `apply_transpose`?
+Base.adjoint(tm::LazyTensor) = TensorTranspose(tm)
+Base.adjoint(tmt::TensorTranspose) = tmt.tm
 
-apply(tmt::LazyTensorMappingTranspose{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D} = apply_transpose(tmt.tm, v, I...)
-apply_transpose(tmt::LazyTensorMappingTranspose{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmt.tm, v, I...)
+apply(tmt::TensorTranspose{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D} = apply_transpose(tmt.tm, v, I...)
+apply_transpose(tmt::TensorTranspose{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmt.tm, v, I...)
 
-range_size(tmt::LazyTensorMappingTranspose) = domain_size(tmt.tm)
-domain_size(tmt::LazyTensorMappingTranspose) = range_size(tmt.tm)
+range_size(tmt::TensorTranspose) = domain_size(tmt.tm)
+domain_size(tmt::TensorTranspose) = range_size(tmt.tm)
 
 
-struct LazyTensorMappingBinaryOperation{Op,T,R,D,T1<:TensorMapping{T,R,D},T2<:TensorMapping{T,R,D}} <: TensorMapping{T,D,R}
+struct ElementwiseTensorOperation{Op,T,R,D,T1<:LazyTensor{T,R,D},T2<:LazyTensor{T,R,D}} <: LazyTensor{T,D,R}
     tm1::T1
     tm2::T2
 
-    @inline function LazyTensorMappingBinaryOperation{Op,T,R,D}(tm1::T1,tm2::T2) where {Op,T,R,D, T1<:TensorMapping{T,R,D},T2<:TensorMapping{T,R,D}}
+    function ElementwiseTensorOperation{Op,T,R,D}(tm1::T1,tm2::T2) where {Op,T,R,D, T1<:LazyTensor{T,R,D},T2<:LazyTensor{T,R,D}}
+        @boundscheck check_domain_size(tm2, domain_size(tm1))
+        @boundscheck check_range_size(tm2, range_size(tm1))
        return new{Op,T,R,D,T1,T2}(tm1,tm2)
     end
 end
-# TODO: Boundschecking in constructor.
 
-apply(tmBinOp::LazyTensorMappingBinaryOperation{:+,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) + apply(tmBinOp.tm2, v, I...)
-apply(tmBinOp::LazyTensorMappingBinaryOperation{:-,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) - apply(tmBinOp.tm2, v, I...)
+ElementwiseTensorOperation{Op}(s,t) where Op = ElementwiseTensorOperation{Op,eltype(s), range_dim(s), domain_dim(s)}(s,t)
 
-range_size(tmBinOp::LazyTensorMappingBinaryOperation) = range_size(tmBinOp.tm1)
-domain_size(tmBinOp::LazyTensorMappingBinaryOperation) = domain_size(tmBinOp.tm1)
+apply(tmBinOp::ElementwiseTensorOperation{:+,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) + apply(tmBinOp.tm2, v, I...)
+apply(tmBinOp::ElementwiseTensorOperation{:-,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) - apply(tmBinOp.tm2, v, I...)
 
-Base.:+(tm1::TensorMapping{T,R,D}, tm2::TensorMapping{T,R,D}) where {T,R,D} = LazyTensorMappingBinaryOperation{:+,T,R,D}(tm1,tm2)
-Base.:-(tm1::TensorMapping{T,R,D}, tm2::TensorMapping{T,R,D}) where {T,R,D} = LazyTensorMappingBinaryOperation{:-,T,R,D}(tm1,tm2)
+range_size(tmBinOp::ElementwiseTensorOperation) = range_size(tmBinOp.tm1)
+domain_size(tmBinOp::ElementwiseTensorOperation) = domain_size(tmBinOp.tm1)
+
 
 """
-    TensorMappingComposition{T,R,K,D}
+    TensorComposition{T,R,K,D}
 
-Lazily compose two `TensorMapping`s, so that they can be handled as a single `TensorMapping`.
+Lazily compose two `LazyTensor`s, so that they can be handled as a single `LazyTensor`.
 """
-struct TensorMappingComposition{T,R,K,D, TM1<:TensorMapping{T,R,K}, TM2<:TensorMapping{T,K,D}} <: TensorMapping{T,R,D}
+struct TensorComposition{T,R,K,D, TM1<:LazyTensor{T,R,K}, TM2<:LazyTensor{T,K,D}} <: LazyTensor{T,R,D}
     t1::TM1
     t2::TM2
 
-    @inline function TensorMappingComposition(t1::TensorMapping{T,R,K}, t2::TensorMapping{T,K,D}) where {T,R,K,D}
+    function TensorComposition(t1::LazyTensor{T,R,K}, t2::LazyTensor{T,K,D}) where {T,R,K,D}
        @boundscheck check_domain_size(t1, range_size(t2))
        return new{T,R,K,D, typeof(t1), typeof(t2)}(t1,t2)
     end
 end
 
-range_size(tm::TensorMappingComposition) = range_size(tm.t1)
-domain_size(tm::TensorMappingComposition) = domain_size(tm.t2)
+range_size(tm::TensorComposition) = range_size(tm.t1)
+domain_size(tm::TensorComposition) = domain_size(tm.t2)
 
-function apply(c::TensorMappingComposition{T,R,K,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,K,D}
+function apply(c::TensorComposition{T,R,K,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,K,D}
     apply(c.t1, c.t2*v, I...)
 end
 
-function apply_transpose(c::TensorMappingComposition{T,R,K,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,K,D}
+function apply_transpose(c::TensorComposition{T,R,K,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,K,D}
     apply_transpose(c.t2, c.t1'*v, I...)
 end
 
-Base.@propagate_inbounds Base.:∘(s::TensorMapping, t::TensorMapping) = TensorMappingComposition(s,t)
-
-"""
-    LazyLinearMap{T,R,D,...}(A, range_indicies, domain_indicies)
-
-TensorMapping defined by the AbstractArray A. `range_indicies` and `domain_indicies` define which indicies of A should
-be considerd the range and domain of the TensorMapping. Each set of indices must be ordered in ascending order.
-
-For instance, if A is a m x n matrix, and range_size = (1,), domain_size = (2,), then the LazyLinearMap performs the
-standard matrix-vector product on vectors of size n.
-"""
-struct LazyLinearMap{T,R,D, RD, AA<:AbstractArray{T,RD}} <: TensorMapping{T,R,D}
-    A::AA
-    range_indicies::NTuple{R,Int}
-    domain_indicies::NTuple{D,Int}
-
-    function LazyLinearMap(A::AA, range_indicies::NTuple{R,Int}, domain_indicies::NTuple{D,Int}) where {T,R,D, RD, AA<:AbstractArray{T,RD}}
-        if !issorted(range_indicies) || !issorted(domain_indicies)
-            throw(DomainError("range_indicies and domain_indicies must be sorted in ascending order"))
-        end
-
-        return new{T,R,D,RD,AA}(A,range_indicies,domain_indicies)
-    end
-end
-
-range_size(llm::LazyLinearMap) = size(llm.A)[[llm.range_indicies...]]
-domain_size(llm::LazyLinearMap) = size(llm.A)[[llm.domain_indicies...]]
-
-function apply(llm::LazyLinearMap{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
-    view_index = ntuple(i->:,ndims(llm.A))
-    for i ∈ 1:R
-        view_index = Base.setindex(view_index, Int(I[i]), llm.range_indicies[i])
-    end
-    A_view = @view llm.A[view_index...]
-    return sum(A_view.*v)
-end
-
-function apply_transpose(llm::LazyLinearMap{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
-    apply(LazyLinearMap(llm.A, llm.domain_indicies, llm.range_indicies), v, I...)
-end
-
 """
-    IdentityMapping{T,D} <: TensorMapping{T,D,D}
-
-The lazy identity TensorMapping for a given size. Usefull for building up higher dimensional tensor mappings from lower
-dimensional ones through outer products. Also used in the Implementation for InflatedTensorMapping.
-"""
-struct IdentityMapping{T,D} <: TensorMapping{T,D,D}
-    size::NTuple{D,Int}
-end
-
-IdentityMapping{T}(size::NTuple{D,Int}) where {T,D} = IdentityMapping{T,D}(size)
-IdentityMapping{T}(size::Vararg{Int,D}) where {T,D} = IdentityMapping{T,D}(size)
-IdentityMapping(size::Vararg{Int,D}) where D = IdentityMapping{Float64,D}(size)
+    TensorComposition(tm, tmi::IdentityTensor)
+    TensorComposition(tmi::IdentityTensor, tm)
 
-range_size(tmi::IdentityMapping) = tmi.size
-domain_size(tmi::IdentityMapping) = tmi.size
-
-apply(tmi::IdentityMapping{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
-apply_transpose(tmi::IdentityMapping{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
-
+Composes a `LazyTensor` `tm` with an `IdentityTensor` `tmi`, by returning `tm`
 """
-    Base.:∘(tm, tmi)
-    Base.:∘(tmi, tm)
-
-Composes a `TensorMapping` `tm` with an `IdentityMapping` `tmi`, by returning `tm`
-"""
-@inline function Base.:∘(tm::TensorMapping{T,R,D}, tmi::IdentityMapping{T,D}) where {T,R,D}
+function TensorComposition(tm::LazyTensor{T,R,D}, tmi::IdentityTensor{T,D}) where {T,R,D}
     @boundscheck check_domain_size(tm, range_size(tmi))
     return tm
 end
 
-@inline function Base.:∘(tmi::IdentityMapping{T,R}, tm::TensorMapping{T,R,D}) where {T,R,D}
+function TensorComposition(tmi::IdentityTensor{T,R}, tm::LazyTensor{T,R,D}) where {T,R,D}
     @boundscheck check_domain_size(tmi, range_size(tm))
     return tm
 end
 
-# Specialization for the case where tm is an IdentityMapping. Required to resolve ambiguity.
-@inline function Base.:∘(tm::IdentityMapping{T,D}, tmi::IdentityMapping{T,D}) where {T,D}
+# Specialization for the case where tm is an IdentityTensor. Required to resolve ambiguity.
+function TensorComposition(tm::IdentityTensor{T,D}, tmi::IdentityTensor{T,D}) where {T,D}
    @boundscheck check_domain_size(tm, range_size(tmi))
    return tmi
 end
 
 
 """
-    InflatedTensorMapping{T,R,D} <: TensorMapping{T,R,D}
+    InflatedTensor{T,R,D} <: LazyTensor{T,R,D}
 
-An inflated `TensorMapping` with dimensions added before and after its actual dimensions.
+An inflated `LazyTensor` with dimensions added before and after its actual dimensions.
 """
-struct InflatedTensorMapping{T,R,D,D_before,R_middle,D_middle,D_after, TM<:TensorMapping{T,R_middle,D_middle}} <: TensorMapping{T,R,D}
-    before::IdentityMapping{T,D_before}
+struct InflatedTensor{T,R,D,D_before,R_middle,D_middle,D_after, TM<:LazyTensor{T,R_middle,D_middle}} <: LazyTensor{T,R,D}
+    before::IdentityTensor{T,D_before}
     tm::TM
-    after::IdentityMapping{T,D_after}
+    after::IdentityTensor{T,D_after}
 
-    function InflatedTensorMapping(before, tm::TensorMapping{T}, after) where T
+    function InflatedTensor(before, tm::LazyTensor{T}, after) where T
        R_before = range_dim(before)
       R_middle = range_dim(tm)
       R_after = range_dim(after)
@@ -213,36 +146,37 @@
        return new{T,R,D,D_before,R_middle,D_middle,D_after, typeof(tm)}(before, tm, after)
     end
 end
+
 """
-    InflatedTensorMapping(before, tm, after)
-    InflatedTensorMapping(before,tm)
-    InflatedTensorMapping(tm,after)
+    InflatedTensor(before, tm, after)
+    InflatedTensor(before,tm)
+    InflatedTensor(tm,after)
 
-The outer product of `before`, `tm` and `after`, where `before` and `after` are `IdentityMapping`s.
+The outer product of `before`, `tm` and `after`, where `before` and `after` are `IdentityTensor`s.
 
-If one of `before` or `after` is left out, a 0-dimensional `IdentityMapping` is used as the default value.
+If one of `before` or `after` is left out, a 0-dimensional `IdentityTensor` is used as the default value.
 
-If `tm` already is an `InflatedTensorMapping`, `before` and `after` will be extended instead of
-creating a nested `InflatedTensorMapping`.
+If `tm` already is an `InflatedTensor`, `before` and `after` will be extended instead of
+creating a nested `InflatedTensor`.
 """
-InflatedTensorMapping(::IdentityMapping, ::TensorMapping, ::IdentityMapping)
+InflatedTensor(::IdentityTensor, ::LazyTensor, ::IdentityTensor)
 
-function InflatedTensorMapping(before, itm::InflatedTensorMapping, after)
-    return InflatedTensorMapping(
-        IdentityMapping(before.size..., itm.before.size...),
+function InflatedTensor(before, itm::InflatedTensor, after)
+    return InflatedTensor(
+        IdentityTensor(before.size..., itm.before.size...),
        itm.tm,
-        IdentityMapping(itm.after.size..., after.size...),
+        IdentityTensor(itm.after.size..., after.size...),
    )
 end
 
-InflatedTensorMapping(before::IdentityMapping, tm::TensorMapping{T}) where T = InflatedTensorMapping(before,tm,IdentityMapping{T}())
-InflatedTensorMapping(tm::TensorMapping{T}, after::IdentityMapping) where T = InflatedTensorMapping(IdentityMapping{T}(),tm,after)
+InflatedTensor(before::IdentityTensor, tm::LazyTensor{T}) where T = InflatedTensor(before,tm,IdentityTensor{T}())
+InflatedTensor(tm::LazyTensor{T}, after::IdentityTensor) where T = InflatedTensor(IdentityTensor{T}(),tm,after)
 # Resolve ambiguity between the two previous methods
-InflatedTensorMapping(I1::IdentityMapping{T}, I2::IdentityMapping{T}) where T = InflatedTensorMapping(I1,I2,IdentityMapping{T}())
+InflatedTensor(I1::IdentityTensor{T}, I2::IdentityTensor{T}) where T = InflatedTensor(I1,I2,IdentityTensor{T}())
 
-# TODO: Implement some pretty printing in terms of ⊗. E.g. InflatedTensorMapping(I(3),B,I(2)) -> I(3)⊗B⊗I(2)
+# TODO: Implement some pretty printing in terms of ⊗. E.g. InflatedTensor(I(3),B,I(2)) -> I(3)⊗B⊗I(2)
 
-function range_size(itm::InflatedTensorMapping)
+function range_size(itm::InflatedTensor)
    return flatten_tuple(
        range_size(itm.before),
        range_size(itm.tm),
@@ -250,7 +184,7 @@
    )
 end
 
-function domain_size(itm::InflatedTensorMapping)
+function domain_size(itm::InflatedTensor)
    return flatten_tuple(
        domain_size(itm.before),
        domain_size(itm.tm),
@@ -258,7 +192,7 @@
    )
 end
 
-function apply(itm::InflatedTensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
+function apply(itm::InflatedTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
    dim_before = range_dim(itm.before)
    dim_domain = domain_dim(itm.tm)
    dim_range = range_dim(itm.tm)
@@ -270,7 +204,7 @@
    return apply(itm.tm, v_inner, inner_index...)
 end
 
-function apply_transpose(itm::InflatedTensorMapping{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
+function apply_transpose(itm::InflatedTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
    dim_before = range_dim(itm.before)
    dim_domain = domain_dim(itm.tm)
    dim_range = range_dim(itm.tm)
@@ -283,87 +217,10 @@
 end
 
 
-"""
-    split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...)
-
-Splits the multi-index `I` into two parts. One part which is expected to be
-used as a view, and one which is expected to be used as an index.
-Eg.
-```
-split_index(Val(1),Val(3),Val(2),Val(1),(1,2,3,4)) -> (1,:,:,:,4), (2,3)
-```
-
-`dim_view` controls how many colons are in the view, and `dim_index` controls
-how many elements are extracted from the middle.
-`dim_before` and `dim_after` decides the length of the index parts before and after the colons in the view index.
-
-Arguments should satisfy `length(I) == dim_before+B_domain+dim_after`.
-
-The returned values satisfy
- * `length(view_index) == dim_before + dim_view + dim_after`
- * `length(I_middle) == dim_index`
-"""
-function split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...) where {dim_before,dim_view, dim_index,dim_after}
-    I_before, I_middle, I_after = split_tuple(I, Val(dim_before), Val(dim_index))
-
-    view_index = (I_before..., ntuple((i)->:, dim_view)..., I_after...)
-
-    return view_index, I_middle
-end
-
-# TODO: Can this be replaced by something more elegant while still being type stable? 2020-10-21
-# See:
-# https://github.com/JuliaLang/julia/issues/34884
-# https://github.com/JuliaLang/julia/issues/30386
-"""
-    slice_tuple(t, Val(l), Val(u))
-
-Get a slice of a tuple in a type stable way.
-Equivalent to `t[l:u]` but type stable.
-"""
-function slice_tuple(t,::Val{L},::Val{U}) where {L,U}
-    return ntuple(i->t[i+L-1], U-L+1)
-end
-
-"""
-    split_tuple(t::Tuple{...}, ::Val{M}) where {N,M}
-
-Split the tuple `t` into two parts. the first part is `M` long.
-E.g
-```julia
-split_tuple((1,2,3,4),Val(3)) -> (1,2,3), (4,)
-```
-"""
-function split_tuple(t::NTuple{N,Any},::Val{M}) where {N,M}
-    return slice_tuple(t,Val(1), Val(M)), slice_tuple(t,Val(M+1), Val(N))
-end
-
-"""
-    split_tuple(t::Tuple{...},::Val{M},::Val{K}) where {N,M,K}
-
-Same as `split_tuple(t::NTuple{N},::Val{M})` but splits the tuple in three parts. With the first
-two parts having lenght `M` and `K`.
-"""
-function split_tuple(t::NTuple{N,Any},::Val{M},::Val{K}) where {N,M,K}
-    p1, tail = split_tuple(t, Val(M))
-    p2, p3 = split_tuple(tail, Val(K))
-    return p1,p2,p3
-end
-
-
-"""
-    flatten_tuple(t)
-
-Takes a nested tuple and flattens the whole structure
-"""
-flatten_tuple(t::NTuple{N, Number} where N) = t
-flatten_tuple(t::Tuple) = ((flatten_tuple.(t)...)...,) # simplify?
-flatten_tuple(ts::Vararg) = flatten_tuple(ts)
-
 @doc raw"""
     LazyOuterProduct(tms...)
 
-Creates a `TensorMappingComposition` for the outer product of `tms...`.
+Creates a `TensorComposition` for the outer product of `tms...`.
 This is done by separating the outer product into regular products of outer products involving only identity mappings and one non-identity mapping.
 
 First let
@@ -399,38 +256,53 @@
 """
 function LazyOuterProduct end
 
-function LazyOuterProduct(tm1::TensorMapping{T}, tm2::TensorMapping{T}) where T
-    itm1 = InflatedTensorMapping(tm1, IdentityMapping{T}(range_size(tm2)))
-    itm2 = InflatedTensorMapping(IdentityMapping{T}(domain_size(tm1)),tm2)
+function LazyOuterProduct(tm1::LazyTensor{T}, tm2::LazyTensor{T}) where T
+    itm1 = InflatedTensor(tm1, IdentityTensor{T}(range_size(tm2)))
+    itm2 = InflatedTensor(IdentityTensor{T}(domain_size(tm1)),tm2)
 
    return itm1∘itm2
 end
 
-LazyOuterProduct(t1::IdentityMapping{T}, t2::IdentityMapping{T}) where T = IdentityMapping{T}(t1.size...,t2.size...)
-LazyOuterProduct(t1::TensorMapping, t2::IdentityMapping) = InflatedTensorMapping(t1, t2)
-LazyOuterProduct(t1::IdentityMapping, t2::TensorMapping) = InflatedTensorMapping(t1, t2)
+LazyOuterProduct(t1::IdentityTensor{T}, t2::IdentityTensor{T}) where T = IdentityTensor{T}(t1.size...,t2.size...)
+LazyOuterProduct(t1::LazyTensor, t2::IdentityTensor) = InflatedTensor(t1, t2)
+LazyOuterProduct(t1::IdentityTensor, t2::LazyTensor) = InflatedTensor(t1, t2)
 
-LazyOuterProduct(tms::Vararg{TensorMapping}) = foldl(LazyOuterProduct, tms)
-
-⊗(a::TensorMapping, b::TensorMapping) = LazyOuterProduct(a,b)
+LazyOuterProduct(tms::Vararg{LazyTensor}) = foldl(LazyOuterProduct, tms)
 
-function check_domain_size(tm::TensorMapping, sz)
+function check_domain_size(tm::LazyTensor, sz)
    if domain_size(tm) != sz
-        throw(SizeMismatch(tm,sz))
+        throw(DomainSizeMismatch(tm,sz))
+    end
+end
+
+function check_range_size(tm::LazyTensor, sz)
+    if range_size(tm) != sz
+        throw(RangeSizeMismatch(tm,sz))
    end
 end
 
-struct SizeMismatch <: Exception
-    tm::TensorMapping
+struct DomainSizeMismatch <: Exception
+    tm::LazyTensor
    sz
 end
 
-function Base.showerror(io::IO, err::SizeMismatch)
-    print(io, "SizeMismatch: ")
-    print(io, "domain size $(domain_size(err.tm)) of TensorMapping not matching size $(err.sz)")
+function Base.showerror(io::IO, err::DomainSizeMismatch)
+    print(io, "DomainSizeMismatch: ")
+    print(io, "domain size $(domain_size(err.tm)) of LazyTensor not matching size $(err.sz)")
 end
 
+struct RangeSizeMismatch <: Exception
+    tm::LazyTensor
+    sz
+end
+
+function Base.showerror(io::IO, err::RangeSizeMismatch)
+    print(io, "RangeSizeMismatch: ")
+    print(io, "range size $(range_size(err.tm)) of LazyTensor not matching size $(err.sz)")
+end
+
+
 function apply_with_region(op, v, boundary_width::Integer, dim_size::Integer, i)
    if 0 < i <= boundary_width
        return LazyTensors.apply(op,v,Index(i,Lower))
@@ -454,4 +326,4 @@
    else
        error("Bounds error") # TODO: Make this more standard
    end
-end
+end
\ No newline at end of file
--- a/src/LazyTensors/tensor_mapping.jl	Mon Mar 21 13:21:48 2022 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,80 +0,0 @@
-export TensorMapping
-export apply
-export apply_transpose
-export range_dim, domain_dim
-export range_size, domain_size
-
-"""
-    TensorMapping{T,R,D}
-
-Describes a mapping of a `D` dimension tensor to an `R` dimension tensor.
-The action of the mapping is implemented through the method
-```julia
-    apply(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
-```
-
-The size of the range and domain that the operator works with should be returned by
-the functions
-```julia
-    range_size(::TensorMapping)
-    domain_size(::TensorMapping)
-```
-to allow querying for one or the other.
-
-Optionally the action of the transpose may be defined through
-```julia
-    apply_transpose(t::TensorMapping{T,R,D}, v::AbstractArray{T,D}, I::Vararg) where {R,D,T}
-```
-"""
-abstract type TensorMapping{T,R,D} end
-
-"""
-    apply(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
-
-Return the result of the mapping for a given index.
-"""
-function apply end
-
-"""
-    apply_transpose(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg) where {R,D,T}
-
-Return the result of the transposed mapping for a given index.
-"""
-function apply_transpose end
-
-"""
-    range_dim(::TensorMapping)
-Return the dimension of the range space of a given mapping
-"""
-range_dim(::TensorMapping{T,R,D}) where {T,R,D} = R
-
-"""
-    domain_dim(::TensorMapping)
-Return the dimension of the domain space of a given mapping
-"""
-domain_dim(::TensorMapping{T,R,D}) where {T,R,D} = D
-
-
-"""
-    range_size(M::TensorMapping)
-
-Return the range size for the mapping.
-"""
-function range_size end
-
-"""
-    domain_size(M::TensorMapping)
-
-Return the domain size for the mapping.
-"""
-function domain_size end
-
-
-"""
-    eltype(::TensorMapping{T})
-
-The type of elements the TensorMapping acts on.
-"""
-Base.eltype(::TensorMapping{T}) where T = T
-
-# TODO: Think about boundschecking!
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/tensor_types.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -0,0 +1,76 @@
+"""
+    IdentityTensor{T,D} <: LazyTensor{T,D,D}
+
+The lazy identity LazyTensor for a given size. Useful for building up higher dimensional tensor mappings from lower
+dimensional ones through outer products. Also used in the implementation of InflatedTensor.
+"""
+struct IdentityTensor{T,D} <: LazyTensor{T,D,D}
+    size::NTuple{D,Int}
+end
+
+IdentityTensor{T}(size::NTuple{D,Int}) where {T,D} = IdentityTensor{T,D}(size)
+IdentityTensor{T}(size::Vararg{Int,D}) where {T,D} = IdentityTensor{T,D}(size)
+IdentityTensor(size::Vararg{Int,D}) where D = IdentityTensor{Float64,D}(size)
+
+range_size(tmi::IdentityTensor) = tmi.size
+domain_size(tmi::IdentityTensor) = tmi.size
+
+apply(tmi::IdentityTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
+apply_transpose(tmi::IdentityTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
+
+
+"""
+    ScalingTensor{T,D} <: LazyTensor{T,D,D}
+
+A lazy tensor that scales its input with `λ`.
+"""
+struct ScalingTensor{T,D} <: LazyTensor{T,D,D}
+    λ::T
+    size::NTuple{D,Int}
+end
+
+LazyTensors.apply(tm::ScalingTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = tm.λ*v[I...]
+LazyTensors.apply_transpose(tm::ScalingTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = tm.λ*v[I...]
+
+LazyTensors.range_size(m::ScalingTensor) = m.size
+LazyTensors.domain_size(m::ScalingTensor) = m.size
+
+
+"""
+    DenseTensor{T,R,D,...}(A, range_indicies, domain_indicies)
+
+LazyTensor defined by the AbstractArray A. `range_indicies` and `domain_indicies` define which indices of A should
+be considered the range and domain of the LazyTensor. Each set of indices must be ordered in ascending order.
+
+For instance, if A is an m x n matrix, and `range_indicies = (1,)`, `domain_indicies = (2,)`, then the DenseTensor performs the
+standard matrix-vector product on vectors of size n.
+"""
+struct DenseTensor{T,R,D, RD, AA<:AbstractArray{T,RD}} <: LazyTensor{T,R,D}
+    A::AA
+    range_indicies::NTuple{R,Int}
+    domain_indicies::NTuple{D,Int}
+
+    function DenseTensor(A::AA, range_indicies::NTuple{R,Int}, domain_indicies::NTuple{D,Int}) where {T,R,D, RD, AA<:AbstractArray{T,RD}}
+        if !issorted(range_indicies) || !issorted(domain_indicies)
+            throw(DomainError("range_indicies and domain_indicies must be sorted in ascending order"))
+        end
+
+        return new{T,R,D,RD,AA}(A,range_indicies,domain_indicies)
+    end
+end
+
+range_size(llm::DenseTensor) = size(llm.A)[[llm.range_indicies...]]
+domain_size(llm::DenseTensor) = size(llm.A)[[llm.domain_indicies...]]
+
+function apply(llm::DenseTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
+    view_index = ntuple(i->:,ndims(llm.A))
+    for i ∈ 1:R
+        view_index = Base.setindex(view_index, Int(I[i]), llm.range_indicies[i])
+    end
+    A_view = @view llm.A[view_index...]
+    return sum(A_view.*v)
+end
+
+function apply_transpose(llm::DenseTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
+    apply(DenseTensor(llm.A, llm.domain_indicies, llm.range_indicies), v, I...)
+end
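A small usage sketch of `DenseTensor` (illustrative, not part of the changeset):

```julia
using Sbplib.LazyTensors

A = [1.0 2.0; 3.0 4.0]
M = DenseTensor(A, (1,), (2,))  # dimension 1 of A is the range, dimension 2 the domain

v = [1.0, 1.0]
(M*v)[1]                       # == 3.0, the first entry of A*v
(M'*v)[1]                      # == 4.0, the first entry of A'v
range_size(M), domain_size(M)  # ((2,), (2,))
```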
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/tuple_manipulation.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -0,0 +1,76 @@
+"""
+    split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...)
+
+Splits the multi-index `I` into two parts. One part which is expected to be
+used as a view, and one which is expected to be used as an index.
+E.g.
+```
+split_index(Val(1),Val(3),Val(2),Val(1),1,2,3,4) -> (1,:,:,:,4), (2,3)
+```
+
+`dim_view` controls how many colons are in the view, and `dim_index` controls
+how many elements are extracted from the middle.
+`dim_before` and `dim_after` decide the length of the index parts before and after the colons in the view index.
+
+Arguments should satisfy `length(I) == dim_before+dim_index+dim_after`.
+
+The returned values satisfy
+ * `length(view_index) == dim_before + dim_view + dim_after`
+ * `length(I_middle) == dim_index`
+"""
+function split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...) where {dim_before,dim_view, dim_index,dim_after}
+    I_before, I_middle, I_after = split_tuple(I, Val(dim_before), Val(dim_index))
+
+    view_index = (I_before..., ntuple((i)->:, dim_view)..., I_after...)
+
+    return view_index, I_middle
+end
+
+# TODO: Can this be replaced by something more elegant while still being type stable? 2020-10-21
+# See:
+# https://github.com/JuliaLang/julia/issues/34884
+# https://github.com/JuliaLang/julia/issues/30386
+"""
+    slice_tuple(t, Val(l), Val(u))
+
+Get a slice of a tuple in a type stable way.
+Equivalent to `t[l:u]` but type stable.
+"""
+function slice_tuple(t,::Val{L},::Val{U}) where {L,U}
+    return ntuple(i->t[i+L-1], U-L+1)
+end
+
+"""
+    split_tuple(t::Tuple{...}, ::Val{M}) where {N,M}
+
+Split the tuple `t` into two parts. The first part is `M` long.
+E.g.
+```julia
+split_tuple((1,2,3,4),Val(3)) -> (1,2,3), (4,)
+```
+"""
+function split_tuple(t::NTuple{N,Any},::Val{M}) where {N,M}
+    return slice_tuple(t,Val(1), Val(M)), slice_tuple(t,Val(M+1), Val(N))
+end
+
+"""
+    split_tuple(t::Tuple{...},::Val{M},::Val{K}) where {N,M,K}
+
+Same as `split_tuple(t::NTuple{N},::Val{M})` but splits the tuple in three parts, with the first
+two parts having length `M` and `K`.
+"""
+function split_tuple(t::NTuple{N,Any},::Val{M},::Val{K}) where {N,M,K}
+    p1, tail = split_tuple(t, Val(M))
+    p2, p3 = split_tuple(tail, Val(K))
+    return p1,p2,p3
+end
+
+
+"""
+    flatten_tuple(t)
+
+Takes a nested tuple and flattens the whole structure.
+"""
+flatten_tuple(t::NTuple{N, Number} where N) = t
+flatten_tuple(t::Tuple) = ((flatten_tuple.(t)...)...,) # simplify?
+flatten_tuple(ts::Vararg) = flatten_tuple(ts)
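REPL sketch of the helpers above; they are internal to the module, hence the qualified names:

```julia
using Sbplib.LazyTensors

LazyTensors.split_tuple((1,2,3,4), Val(3))            # ((1, 2, 3), (4,))
LazyTensors.split_tuple((1,2,3,4,5), Val(2), Val(2))  # ((1, 2), (3, 4), (5,))
LazyTensors.slice_tuple((1,2,3,4), Val(2), Val(3))    # (2, 3)
LazyTensors.flatten_tuple(((1,2), (3,(4,5))))         # (1, 2, 3, 4, 5)
LazyTensors.split_index(Val(1), Val(3), Val(2), Val(1), 1, 2, 3, 4)
# ((1, Colon(), Colon(), Colon(), 4), (2, 3))
```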
--- a/src/SbpOperators/SbpOperators.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/SbpOperators.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,5 +1,16 @@
 module SbpOperators
 
+# Stencil set
+export StencilSet
+export read_stencil_set
+export get_stencil_set
+export parse_stencil
+export parse_nested_stencil
+export parse_scalar
+export parse_tuple
+export sbp_operators_path
+
+# Operators
 export boundary_quadrature
 export boundary_restriction
 export inner_product
@@ -22,7 +33,7 @@
 export closure_size
 
 include("stencil.jl")
-include("readoperator.jl")
+include("stencil_set.jl")
 include("volumeops/volume_operator.jl")
 include("volumeops/constant_interior_scaling_operator.jl")
 include("volumeops/derivatives/first_derivative.jl")
--- a/src/SbpOperators/boundaryops/boundary_operator.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/boundaryops/boundary_operator.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -6,7 +6,7 @@
 
 When `Dim=1`, the corresponding `BoundaryOperator` tensor mapping is returned.
 When `Dim>1`, the `BoundaryOperator` `op` is inflated by the outer product
-of `IdentityMappings` in orthogonal coordinate directions, e.g. for `Dim=3`,
+of `IdentityTensors` in orthogonal coordinate directions, e.g. for `Dim=3`,
 the boundary restriction operator in the y-direction is `Ix⊗op⊗Iz`.
 """
 function boundary_operator(grid::EquidistantGrid, closure_stencil, boundary::CartesianBoundary)
@@ -17,9 +17,9 @@
     d = dim(boundary)
     op = BoundaryOperator(restrict(grid, d), closure_stencil, r)
 
-    # Create 1D IdentityMappings for each coordinate direction
+    # Create 1D IdentityTensors for each coordinate direction
     one_d_grids = restrict.(Ref(grid), Tuple(1:dimension(grid)))
-    Is = IdentityMapping{eltype(grid)}.(size.(one_d_grids))
+    Is = IdentityTensor{eltype(grid)}.(size.(one_d_grids))
 
     # Formulate the correct outer product sequence of the identity mappings and
     # the boundary operator
@@ -28,15 +28,15 @@
 end
 
 """
-    BoundaryOperator{T,R,N} <: TensorMapping{T,0,1}
+    BoundaryOperator{T,R,N} <: LazyTensor{T,0,1}
 
-Implements the boundary operator `op` for 1D as a `TensorMapping`
+Implements the boundary operator `op` for 1D as a `LazyTensor`
 
 `op` is the restriction of a grid function to the boundary using some closure
 `Stencil{T,N}`. The boundary to restrict to is determined by `R`. `op'` is the
 prolongation of a zero dimensional array to the whole grid using the same
 closure stencil.
 """
-struct BoundaryOperator{T,R<:Region,N} <: TensorMapping{T,0,1}
+struct BoundaryOperator{T,R<:Region,N} <: LazyTensor{T,0,1}
     stencil::Stencil{T,N}
     size::Int
 end
--- a/src/SbpOperators/boundaryops/boundary_restriction.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/boundaryops/boundary_restriction.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,10 +1,7 @@
-# TODO: The type parameter closure_stencil::Stencil is required since there isnt any suitable type
-# for stencil_set. We should consider adding type ::StencilSet and dispatch on that instead.
-# The same goes for other operators
 """
     boundary_restriction(grid, closure_stencil::Stencil, boundary)
 
-Creates boundary restriction operators `e` as `TensorMapping`s on `boundary`
+Creates boundary restriction operators `e` as `LazyTensor`s on `boundary`
 
 `e` is the restriction of a grid function to `boundary` using a `Stencil` `closure_stencil`.
 `e'` is the prolongation of a grid function on `boundary` to the whole grid using the same `closure_stencil`.
@@ -13,7 +10,7 @@
 
 See also: [`boundary_operator`](@ref).
 """
-function boundary_restriction(grid, closure_stencil::Stencil, boundary)
+function boundary_restriction(grid, closure_stencil, boundary)
     converted_stencil = convert(Stencil{eltype(grid)}, closure_stencil)
     return SbpOperators.boundary_operator(grid, converted_stencil, boundary)
 end
@@ -21,7 +18,6 @@
 """
     boundary_restriction(grid, stencil_set, boundary)
 
-Creates a `boundary_restriction` operator on `grid` given a parsed TOML
-`stencil_set`.
+Creates a `boundary_restriction` operator on `grid` given a `stencil_set`.
 """
-boundary_restriction(grid, stencil_set, boundary) = boundary_restriction(grid, parse_stencil(stencil_set["e"]["closure"]), boundary)
+boundary_restriction(grid, stencil_set::StencilSet, boundary) = boundary_restriction(grid, parse_stencil(stencil_set["e"]["closure"]), boundary)
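An illustrative sketch of building a boundary restriction from a stencil set; the `CartesianBoundary{1,Lower}()` identifier, the grid constructor, and `evalOn` follow the conventions of the `Grids` and `RegionIndices` modules, and the operator file name is an assumption:

```julia
using Sbplib.Grids
using Sbplib.RegionIndices
using Sbplib.SbpOperators

g = EquidistantGrid(11, 0.0, 1.0)  # 1D grid on [0,1]
stencil_set = read_stencil_set(sbp_operators_path()*"standard_diagonal.toml"; order = 2)

e_l = boundary_restriction(g, stencil_set, CartesianBoundary{1,Lower}())
v = evalOn(g, x -> x^2)
e_l*v  # lazy 0-dimensional result holding the boundary value
e_l'   # prolongation back to the grid, via apply_transpose
```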
--- a/src/SbpOperators/boundaryops/normal_derivative.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/boundaryops/normal_derivative.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,7 +1,7 @@
 """
     normal_derivative(grid, closure_stencil::Stencil, boundary)
 
-Creates the normal derivative boundary operator `d` as a `TensorMapping`
+Creates the normal derivative boundary operator `d` as a `LazyTensor`
 
 `d` computes the normal derivative of a grid function on `boundary` using a `Stencil` `closure_stencil`.
 `d'` is the prolongation of the normal derivative of a grid function to the whole grid using the same `closure_stencil`.
@@ -10,7 +10,7 @@
 
 See also: [`boundary_operator`](@ref).
 """
-function normal_derivative(grid, closure_stencil::Stencil, boundary)
+function normal_derivative(grid, closure_stencil, boundary)
     direction = dim(boundary)
     h_inv = inverse_spacing(grid)[direction]
     return SbpOperators.boundary_operator(grid, scale(closure_stencil,h_inv), boundary)
@@ -19,7 +19,6 @@
 """
     normal_derivative(grid, stencil_set, boundary)
 
-Creates a `normal_derivative` operator on `grid` given a parsed TOML
-`stencil_set`.
+Creates a `normal_derivative` operator on `grid` given a `stencil_set`.
 """
-normal_derivative(grid, stencil_set, boundary) = normal_derivative(grid, parse_stencil(stencil_set["d1"]["closure"]), boundary)
+normal_derivative(grid, stencil_set::StencilSet, boundary) = normal_derivative(grid, parse_stencil(stencil_set["d1"]["closure"]), boundary)
--- a/src/SbpOperators/readoperator.jl	Mon Mar 21 13:21:48 2022 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,188 +0,0 @@
-using TOML
-
-export read_stencil_set
-export get_stencil_set
-
-export parse_stencil
-export parse_nested_stencil
-export parse_scalar
-export parse_tuple
-
-export sbp_operators_path
-
-
-"""
-    read_stencil_set(filename; filters)
-
-Picks out a stencil set from a TOML file based on some key-value
-filters. If more than one set matches the filters an error is raised. The
-returned stencil set contains parsed TOML intended for functions like
-`parse_scalar` and `parse_stencil`.
-
-The stencil set is not parsed beyond the inital TOML parse. To get usable
-stencils use the `parse_stencil` functions on the fields of the stencil set.
-
-The reason for this is that since stencil sets are intended to be very
-general, and currently do not include any way to specify how to parse a given
-section, the exact parsing is left to the user.
-
-For more information see [Operator file format](@ref) in the documentation.
-
-See also [`sbp_operators_path`](@ref), [`get_stencil_set`](@ref), [`parse_stencil`](@ref), [`parse_scalar`](@ref), [`parse_tuple`](@ref),.
-"""
-read_stencil_set(filename; filters...) = get_stencil_set(TOML.parsefile(filename); filters...)
-
-"""
-    get_stencil_set(parsed_toml; filters...)
-
-Picks out a stencil set from an already parsed TOML based on some key-value
-filters.
-
-See also [`read_stencil_set`](@ref).
-"""
-function get_stencil_set(parsed_toml; filters...)
-    matches = findall(parsed_toml["stencil_set"]) do set
-        for (key, val) ∈ filters
-            if set[string(key)] != val
-                return false
-            end
-        end
-
-        return true
-    end
-
-    if length(matches) != 1
-        throw(ArgumentError("filters must pick out a single stencil set"))
-    end
-
-    i = matches[1]
-    return parsed_toml["stencil_set"][i]
-end
-
-"""
-    parse_stencil(parsed_toml)
-
-Accepts parsed TOML and reads it as a stencil.
-
-See also [`read_stencil_set`](@ref), [`parse_scalar`](@ref), [`parse_tuple`](@ref).
-"""
-function parse_stencil(parsed_toml)
-    check_stencil_toml(parsed_toml)
-
-    if parsed_toml isa Array
-        weights = parse_rational.(parsed_toml)
-        return CenteredStencil(weights...)
-    end
-
-    weights = parse_rational.(parsed_toml["s"])
-    return Stencil(weights..., center = parsed_toml["c"])
-end
-
-"""
-    parse_stencil(T, parsed_toml)
-
-Parses the input as a stencil with element type `T`.
-"""
-parse_stencil(T, parsed_toml) = Stencil{T}(parse_stencil(parsed_toml))
-
-function check_stencil_toml(parsed_toml)
-    if !(parsed_toml isa Dict || parsed_toml isa Vector{String})
-        throw(ArgumentError("the TOML for a stencil must be a vector of strings or a table."))
-    end
-
-    if parsed_toml isa Vector{String}
-        return
-    end
-
-    if !(haskey(parsed_toml, "s") && haskey(parsed_toml, "c"))
-        throw(ArgumentError("the table form of a stencil must have fields `s` and `c`."))
-    end
-
-    if !(parsed_toml["s"] isa Vector{String})
-        throw(ArgumentError("a stencil must be specified as a vector of strings."))
-    end
-
-    if !(parsed_toml["c"] isa Int)
-        throw(ArgumentError("the center of a stencil must be specified as an integer."))
-    end
-end
-
-
-"""
-    parse_nested_stencil(parsed_toml)
-
-Accept parsed TOML and read it as a nested tuple.
-
-See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref).
-"""
-function parse_nested_stencil(parsed_toml)
-    if parsed_toml isa Array
-        weights = parse_stencil.(parsed_toml)
-        return CenteredNestedStencil(weights...)
-    end
-
-    center = parsed_toml["c"]
-    weights = parse_tuple.(parsed_toml["s"])
-    return NestedStencil(weights...; center)
-end
-
-"""
-    parse_nested_stencil(T, parsed_toml)
-
-Parse the input as a nested stencil with element type `T`.
-"""
-parse_nested_stencil(T, parsed_toml) = NestedStencil{T}(parse_nested_stencil(parsed_toml))
-
-
-"""
-    parse_scalar(parsed_toml)
-
-Parse a scalar, represented as a string or a number in the TOML, and return it as a `Rational`
-
-See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref) [`parse_tuple`](@ref).
-"""
-function parse_scalar(parsed_toml)
-    try
-        return parse_rational(parsed_toml)
-    catch e
-        throw(ArgumentError("must be a number or a string representing a number."))
-    end
-end
-
-"""
-    parse_tuple(parsed_toml)
-
-Parse an array as a tuple of scalars.
-
-See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref), [`parse_scalar`](@ref).
-"""
-function parse_tuple(parsed_toml)
-    if !(parsed_toml isa Array)
-        throw(ArgumentError("argument must be an array"))
-    end
-    return Tuple(parse_scalar.(parsed_toml))
-end
-
-
-"""
-    parse_rational(parsed_toml)
-
-Parse a string or a number as a rational.
-"""
-function parse_rational(parsed_toml)
-    if parsed_toml isa String
-        expr = Meta.parse(replace(parsed_toml, "/"=>"//"))
-        return eval(:(Rational($expr)))
-    else
-        return Rational(parsed_toml)
-    end
-end
-
-"""
-    sbp_operators_path()
-
-Calculate the path for the operators folder with included stencil sets.
-
-See also [`read_stencil_set`](@ref)
-"""
-sbp_operators_path() = (@__DIR__) * "/operators/"
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/SbpOperators/stencil_set.jl Wed Mar 23 12:54:45 2022 +0100 @@ -0,0 +1,191 @@ +using TOML + + +""" + StencilSet + +A `StencilSet` contains a set of associated stencils. The stencils +are are stored in a table, and can be accesed by indexing into the `StencilSet`. +""" +struct StencilSet + table +end +Base.getindex(set::StencilSet,I...) = set.table[I...] + + +""" +read_stencil_set(filename; filters) + +Creates a `StencilSet` from a TOML file based on some key-value +filters. If more than one set matches the filters an error is raised. The +table of the `StencilSet` is a parsed TOML intended for functions like +`parse_scalar` and `parse_stencil`. + +The `StencilSet` table is not parsed beyond the inital TOML parse. To get usable +stencils use the `parse_stencil` functions on the fields of the stencil set. + +The reason for this is that since stencil sets are intended to be very +general, and currently do not include any way to specify how to parse a given +section, the exact parsing is left to the user. + +For more information see [Operator file format](@ref) in the documentation. + +See also [`StencilSet`](@ref), [`sbp_operators_path`](@ref), [`get_stencil_set`](@ref), [`parse_stencil`](@ref), [`parse_scalar`](@ref), [`parse_tuple`](@ref). +""" +read_stencil_set(filename; filters...) = StencilSet(get_stencil_set(TOML.parsefile(filename); filters...)) + + +""" + get_stencil_set(parsed_toml; filters...) + +Picks out a stencil set from an already parsed TOML based on some key-value +filters. + +See also [`read_stencil_set`](@ref). +""" +function get_stencil_set(parsed_toml; filters...) + matches = findall(parsed_toml["stencil_set"]) do set + for (key, val) ∈ filters + if set[string(key)] != val + return false + end + end + + return true + end + + if length(matches) != 1 + throw(ArgumentError("filters must pick out a single stencil set")) + end + + i = matches[1] + return parsed_toml["stencil_set"][i] +end + +""" + parse_stencil(parsed_toml) + +Accepts parsed TOML and reads it as a stencil. + +See also [`read_stencil_set`](@ref), [`parse_scalar`](@ref), [`parse_tuple`](@ref). +""" +function parse_stencil(parsed_toml) + check_stencil_toml(parsed_toml) + + if parsed_toml isa Array + weights = parse_rational.(parsed_toml) + return CenteredStencil(weights...) + end + + weights = parse_rational.(parsed_toml["s"]) + return Stencil(weights..., center = parsed_toml["c"]) +end + +""" + parse_stencil(T, parsed_toml) + +Parses the input as a stencil with element type `T`. +""" +parse_stencil(T, parsed_toml) = Stencil{T}(parse_stencil(parsed_toml)) + +function check_stencil_toml(parsed_toml) + if !(parsed_toml isa Dict || parsed_toml isa Vector{String}) + throw(ArgumentError("the TOML for a stencil must be a vector of strings or a table.")) + end + + if parsed_toml isa Vector{String} + return + end + + if !(haskey(parsed_toml, "s") && haskey(parsed_toml, "c")) + throw(ArgumentError("the table form of a stencil must have fields `s` and `c`.")) + end + + if !(parsed_toml["s"] isa Vector{String}) + throw(ArgumentError("a stencil must be specified as a vector of strings.")) + end + + if !(parsed_toml["c"] isa Int) + throw(ArgumentError("the center of a stencil must be specified as an integer.")) + end +end + + +""" + parse_nested_stencil(parsed_toml) + +Accept parsed TOML and read it as a nested tuple. + +See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref). 
+""" +function parse_nested_stencil(parsed_toml) + if parsed_toml isa Array + weights = parse_stencil.(parsed_toml) + return CenteredNestedStencil(weights...) + end + + center = parsed_toml["c"] + weights = parse_tuple.(parsed_toml["s"]) + return NestedStencil(weights...; center) +end + +""" + parse_nested_stencil(T, parsed_toml) + +Parse the input as a nested stencil with element type `T`. +""" +parse_nested_stencil(T, parsed_toml) = NestedStencil{T}(parse_nested_stencil(parsed_toml)) + + +""" + parse_scalar(parsed_toml) + +Parse a scalar, represented as a string or a number in the TOML, and return it as a `Rational` + +See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref) [`parse_tuple`](@ref). +""" +function parse_scalar(parsed_toml) + try + return parse_rational(parsed_toml) + catch e + throw(ArgumentError("must be a number or a string representing a number.")) + end +end + +""" + parse_tuple(parsed_toml) + +Parse an array as a tuple of scalars. + +See also [`read_stencil_set`](@ref), [`parse_stencil`](@ref), [`parse_scalar`](@ref). +""" +function parse_tuple(parsed_toml) + if !(parsed_toml isa Array) + throw(ArgumentError("argument must be an array")) + end + return Tuple(parse_scalar.(parsed_toml)) +end + + +""" + parse_rational(parsed_toml) + +Parse a string or a number as a rational. +""" +function parse_rational(parsed_toml) + if parsed_toml isa String + expr = Meta.parse(replace(parsed_toml, "/"=>"//")) + return eval(:(Rational($expr))) + else + return Rational(parsed_toml) + end +end + +""" + sbp_operators_path() + +Calculate the path for the operators folder with included stencil sets. + +See also [`StencilSet`](@ref), [`read_stencil_set`](@ref). +""" +sbp_operators_path() = (@__DIR__) * "/operators/"
--- a/src/SbpOperators/volumeops/constant_interior_scaling_operator.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/src/SbpOperators/volumeops/constant_interior_scaling_operator.jl Wed Mar 23 12:54:45 2022 +0100 @@ -1,10 +1,10 @@ """ - ConstantInteriorScalingOperator{T,N} <: TensorMapping{T,1,1} + ConstantInteriorScalingOperator{T,N} <: LazyTensor{T,1,1} A one-dimensional operator scaling a vector. The first and last `N` points are scaled with individual weights while all interior points are scaled the same. """ -struct ConstantInteriorScalingOperator{T,N} <: TensorMapping{T,1,1} +struct ConstantInteriorScalingOperator{T,N} <: LazyTensor{T,1,1} interior_weight::T closure_weights::NTuple{N,T} size::Int
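For reference, a sketch of what this operator does. The positional constructor is assumed from the field order shown in the struct and is not confirmed by this diff:

```julia
using Sbplib.SbpOperators: ConstantInteriorScalingOperator

# Hypothetical positional construction (interior_weight, closure_weights, size):
# the two points at each boundary are scaled with 1/2 and 3/4, the interior with 1.
H = ConstantInteriorScalingOperator(1.0, (1/2, 3/4), 10)
v = ones(10)
H*v  # lazily: [0.5, 0.75, 1.0, ..., 1.0, 0.75, 0.5]
```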
--- a/src/SbpOperators/volumeops/derivatives/first_derivative.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/derivatives/first_derivative.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,14 +1,14 @@
 """
     first_derivative(grid::EquidistantGrid, inner_stencil, closure_stencils, direction)
 
-Creates the first-derivative operator `D1` as a `TensorMapping`
+Creates the first-derivative operator `D1` as a `LazyTensor`
 
 `D1` approximates the first-derivative d/dξ on `grid` along the coordinate dimension specified by
 `direction`, using the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
 for the points in the closure regions.
 
 On a one-dimensional `grid`, `D1` is a `VolumeOperator`. On a multi-dimensional `grid`, `D1` is the outer product of the
-one-dimensional operator with the `IdentityMapping`s in orthogonal coordinate dirrections.
+one-dimensional operator with the `IdentityTensor`s in orthogonal coordinate directions.
 
 See also: [`volume_operator`](@ref).
 """
@@ -16,16 +16,31 @@
     h_inv = inverse_spacing(grid)[direction]
     return SbpOperators.volume_operator(grid, scale(inner_stencil,h_inv), scale.(closure_stencils,h_inv), odd, direction)
 end
-first_derivative(grid::EquidistantGrid{1}, inner_stencil::Stencil, closure_stencils) = first_derivative(grid,inner_stencil,closure_stencils,1)
+
 
 """
-    first_derivative(grid, stencil_set, direction)
+    first_derivative(grid, inner_stencil, closure_stencils)
+
+Creates a `first_derivative` operator on a 1D `grid` given `inner_stencil` and `closure_stencils`.
+"""
+first_derivative(grid::EquidistantGrid{1}, inner_stencil::Stencil, closure_stencils) = first_derivative(grid, inner_stencil, closure_stencils, 1)
+
 
-Creates a `first_derivative` operator on `grid` along coordinate dimension `direction` given a parsed TOML
-`stencil_set`.
 """
-function first_derivative(grid::EquidistantGrid, stencil_set, direction)
+    first_derivative(grid, stencil_set::StencilSet, direction)
+
+Creates a `first_derivative` operator on `grid` along coordinate dimension `direction` given a `stencil_set`.
+"""
+function first_derivative(grid::EquidistantGrid, stencil_set::StencilSet, direction)
    inner_stencil = parse_stencil(stencil_set["D1"]["inner_stencil"])
    closure_stencils = parse_stencil.(stencil_set["D1"]["closure_stencils"])
    first_derivative(grid,inner_stencil,closure_stencils,direction);
 end
+
+
+"""
+    first_derivative(grid, stencil_set)
+
+Creates a `first_derivative` operator on a 1D `grid` given a `stencil_set`.
+"""
+first_derivative(grid::EquidistantGrid{1}, stencil_set::StencilSet) = first_derivative(grid, stencil_set, 1)
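A hedged sketch of the new convenience methods. The `EquidistantGrid` constructor signature and `evalOn` are assumptions based on how they appear elsewhere in the test suite:

```julia
using Sbplib.Grids
using Sbplib.SbpOperators

stencil_set = read_stencil_set(sbp_operators_path()*"standard_diagonal.toml"; order = 4)

g = EquidistantGrid(101, 0., 1.)       # hypothetical constructor: (size, x_left, x_right)
D1 = first_derivative(g, stencil_set)  # new 1D method; same as first_derivative(g, stencil_set, 1)

v = D1*evalOn(g, x -> sin(x))          # `evalOn` assumed from the test suite
v[1]                                   # lazy; elements are computed on indexing
```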
--- a/src/SbpOperators/volumeops/derivatives/second_derivative.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/derivatives/second_derivative.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,14 +1,14 @@
 """
     second_derivative(grid::EquidistantGrid, inner_stencil, closure_stencils, direction)
 
-Creates the second-derivative operator `D2` as a `TensorMapping`
+Creates the second-derivative operator `D2` as a `LazyTensor`
 
 `D2` approximates the second-derivative d²/dξ² on `grid` along the coordinate dimension specified by
 `direction`, using the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
 for the points in the closure regions.
 
 On a one-dimensional `grid`, `D2` is a `VolumeOperator`. On a multi-dimensional `grid`, `D2` is the outer product of the
-one-dimensional operator with the `IdentityMapping`s in orthogonal coordinate dirrections.
+one-dimensional operator with the `IdentityTensor`s in orthogonal coordinate directions.
 
 See also: [`volume_operator`](@ref).
 """
@@ -16,16 +16,31 @@
     h_inv = inverse_spacing(grid)[direction]
     return SbpOperators.volume_operator(grid, scale(inner_stencil,h_inv^2), scale.(closure_stencils,h_inv^2), even, direction)
 end
-second_derivative(grid::EquidistantGrid{1}, inner_stencil::Stencil, closure_stencils) = second_derivative(grid,inner_stencil,closure_stencils,1)
+
+
+"""
+    second_derivative(grid, inner_stencil, closure_stencils)
+
+Creates a `second_derivative` operator on a 1D `grid` given `inner_stencil` and `closure_stencils`.
+"""
+second_derivative(grid::EquidistantGrid{1}, inner_stencil::Stencil, closure_stencils) = second_derivative(grid, inner_stencil, closure_stencils, 1)
+
 
 """
     second_derivative(grid, stencil_set, direction)
 
-Creates a `second_derivative` operator on `grid` along coordinate dimension `direction` given a parsed TOML
-`stencil_set`.
+Creates a `second_derivative` operator on `grid` along coordinate dimension `direction` given a `stencil_set`.
 """
-function second_derivative(grid::EquidistantGrid, stencil_set, direction)
+function second_derivative(grid::EquidistantGrid, stencil_set::StencilSet, direction)
    inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"])
    closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"])
    second_derivative(grid,inner_stencil,closure_stencils,direction);
-end
+end
+
+
+"""
+    second_derivative(grid, stencil_set)
+
+Creates a `second_derivative` operator on a 1D `grid` given a `stencil_set`.
+"""
+second_derivative(grid::EquidistantGrid{1}, stencil_set::StencilSet) = second_derivative(grid, stencil_set, 1)
\ No newline at end of file
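The same pattern for `D2`, continuing the sketch above; the 2D grid constructor is again an assumed signature:

```julia
g2 = EquidistantGrid((51,51), (0., 0.), (1., 1.))  # hypothetical 2D constructor
D2x = second_derivative(g2, stencil_set, 1)        # d²/dx²: the 1D operator inflated with identities
D2y = second_derivative(g2, stencil_set, 2)        # d²/dy²

D2 = second_derivative(EquidistantGrid(101, 0., 1.), stencil_set)  # new 1D convenience method
```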
--- a/src/SbpOperators/volumeops/derivatives/second_derivative_variable.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/derivatives/second_derivative_variable.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -2,11 +2,11 @@
 # REVIEW: Fix the docs
 """
-    SecondDerivativeVariable{Dir,T,D,...} <: TensorMapping{T,D,D}
+    SecondDerivativeVariable{Dir,T,D,...} <: LazyTensor{T,D,D}
 
 A second derivative operator in direction `Dir` with a variable coefficient.
 """
-struct SecondDerivativeVariable{Dir,T,D,M,IStencil<:NestedStencil{T},CStencil<:NestedStencil{T},TArray<:AbstractArray} <: TensorMapping{T,D,D}
+struct SecondDerivativeVariable{Dir,T,D,M,IStencil<:NestedStencil{T},CStencil<:NestedStencil{T},TArray<:AbstractArray} <: LazyTensor{T,D,D}
     inner_stencil::IStencil
     closure_stencils::NTuple{M,CStencil}
     size::NTuple{D,Int}
@@ -36,7 +36,7 @@
 @doc raw"""
     SecondDerivativeVariable(grid::EquidistantGrid, coeff::AbstractArray, stencil_set, dir)
 
-Create a `TensorMapping` for the second derivative with a variable coefficient
+Create a `LazyTensor` for the second derivative with a variable coefficient
 `coeff` on `grid` from the stencils in `stencil_set`. The direction is
 determined by `dir`.
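The constructor signature is given by the docstring above; a sketch with the same assumed grid helpers as in the earlier examples:

```julia
using Sbplib.SbpOperators: SecondDerivativeVariable

# `g` and `stencil_set` as in the sketches above; the coefficient is any
# AbstractArray matching the grid.
c = evalOn(g, x -> 1 + x^2)
D2c = SecondDerivativeVariable(g, c, stencil_set, 1)
w = D2c*evalOn(g, x -> sin(x))  # lazily applies the variable-coefficient operator
```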
--- a/src/SbpOperators/volumeops/inner_products/inner_product.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/inner_products/inner_product.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,7 +1,7 @@
 """
     inner_product(grid::EquidistantGrid, interior_weight, closure_weights)
 
-Creates the discrete inner product operator `H` as a `TensorMapping` on an
+Creates the discrete inner product operator `H` as a `LazyTensor` on an
 equidistant grid, defined as `(u,v) = u'Hv` for grid functions `u,v`.
 
 `inner_product` creates `H` on `grid` using the `interior_weight` for the
@@ -11,7 +11,7 @@
 On a 1-dimensional grid, `H` is a `ConstantInteriorScalingOperator`. On an
 N-dimensional grid, `H` is the outer product of the 1-dimensional inner
 product operators for each coordinate direction. On a 0-dimensional grid,
-`H` is a 0-dimensional `IdentityMapping`.
+`H` is a 0-dimensional `IdentityTensor`.
 
 See also: [`ConstantInteriorScalingOperator`](@ref).
 """
@@ -32,15 +32,14 @@
     return H
 end
 
-inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityMapping{eltype(grid)}()
+inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityTensor{eltype(grid)}()
 
 """
     inner_product(grid, stencil_set)
 
-Creates a `inner_product` operator on `grid` given a parsed TOML
-`stencil_set`.
+Creates an `inner_product` operator on `grid` given a `stencil_set`.
 """
-function inner_product(grid, stencil_set)
+function inner_product(grid, stencil_set::StencilSet)
    inner_stencil = parse_scalar(stencil_set["H"]["inner"])
    closure_stencils = parse_tuple(stencil_set["H"]["closure"])
    return inner_product(grid, inner_stencil, closure_stencils)
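A sketch of `H` used as a quadrature, with `g`, `stencil_set` and `evalOn` as assumed above:

```julia
H = inner_product(g, stencil_set)
u = evalOn(g, x -> sin(x))
w = evalOn(g, x -> cos(x))

# (u,w) = u'Hw; applying H lazily and reducing gives the discrete inner product.
uw = sum(u .* (H*w))
```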
--- a/src/SbpOperators/volumeops/inner_products/inverse_inner_product.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/inner_products/inverse_inner_product.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,14 +1,14 @@
 """
     inverse_inner_product(grid::EquidistantGrid, interior_weight, closure_weights)
 
-Constructs the inverse inner product operator `H⁻¹` as a `TensorMapping` using
+Constructs the inverse inner product operator `H⁻¹` as a `LazyTensor` using
 the weights of `H`, `interior_weight`, `closure_weights`. `H⁻¹` is the inverse
 of the inner product operator `H`.
 
 On a 1-dimensional grid, `H⁻¹` is a `ConstantInteriorScalingOperator`. On
 an N-dimensional grid, `H⁻¹` is the outer product of the 1-dimensional
 inverse inner product operators for each coordinate direction. On a 0-dimensional
-`grid`, `H⁻¹` is a 0-dimensional `IdentityMapping`.
+`grid`, `H⁻¹` is a 0-dimensional `IdentityTensor`.
 
 See also: [`ConstantInteriorScalingOperator`](@ref).
 """
@@ -28,15 +28,14 @@
     return H⁻¹
 end
 
-inverse_inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityMapping{eltype(grid)}()
+inverse_inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityTensor{eltype(grid)}()
 
 """
     inverse_inner_product(grid, stencil_set)
 
-Creates a `inverse_inner_product` operator on `grid` given a parsed TOML
-`stencil_set`.
+Creates an `inverse_inner_product` operator on `grid` given a `stencil_set`.
 """
-function inverse_inner_product(grid, stencil_set)
+function inverse_inner_product(grid, stencil_set::StencilSet)
    inner_stencil = parse_scalar(stencil_set["H"]["inner"])
    closure_stencils = parse_tuple(stencil_set["H"]["closure"])
    return inverse_inner_product(grid, inner_stencil, closure_stencils)
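Since `H⁻¹` is defined as the inverse of `H`, composing the two should round-trip a grid function, which gives a quick sanity check (same assumptions as above):

```julia
H   = inner_product(g, stencil_set)
H⁻¹ = inverse_inner_product(g, stencil_set)

v = rand(size(g)...)  # `size(g)` gives the grid dimensions, as used in the tests
H⁻¹*(H*v) ≈ v         # should hold up to floating point noise
```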
--- a/src/SbpOperators/volumeops/laplace/laplace.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/laplace/laplace.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -1,22 +1,23 @@
 """
-    Laplace{T, Dim, TM} <: TensorMapping{T, Dim, Dim}
+    Laplace{T, Dim, TM} <: LazyTensor{T, Dim, Dim}
 
 Implements the Laplace operator, approximating ∑d²/dxᵢ², i = 1,...,`Dim` as a
-`TensorMapping`. Additionally `Laplace` stores the stencil set (parsed from TOML)
-used to construct the `TensorMapping`.
+`LazyTensor`. Additionally `Laplace` stores the `StencilSet`
+used to construct the `LazyTensor`.
 """
-struct Laplace{T, Dim, TM<:TensorMapping{T, Dim, Dim}} <: TensorMapping{T, Dim, Dim}
+struct Laplace{T, Dim, TM<:LazyTensor{T, Dim, Dim}} <: LazyTensor{T, Dim, Dim}
     D::TM # Difference operator
-    stencil_set # Stencil set of the operator
+    stencil_set::StencilSet # Stencil set of the operator
 end
 
 """
     Laplace(grid::EquidistantGrid, stencil_set)
 
-Creates the `Laplace` operator `Δ` on `grid` given a parsed TOML
-`stencil_set`. See also [`laplace`](@ref).
+Creates the `Laplace` operator `Δ` on `grid` given a `stencil_set`.
+
+See also [`laplace`](@ref).
 """
-function Laplace(grid::EquidistantGrid, stencil_set)
+function Laplace(grid::EquidistantGrid, stencil_set::StencilSet)
    inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"])
    closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"])
    Δ = laplace(grid, inner_stencil,closure_stencils)
@@ -27,13 +28,13 @@
 LazyTensors.domain_size(L::Laplace) = LazyTensors.domain_size(L.D)
 LazyTensors.apply(L::Laplace, v::AbstractArray, I...) = LazyTensors.apply(L.D,v,I...)
 
-# TODO: Implement pretty printing of Laplace once pretty printing of TensorMappings is implemented.
+# TODO: Implement pretty printing of Laplace once pretty printing of LazyTensors is implemented.
 # Base.show(io::IO, L::Laplace) = ...
 
 """
     laplace(grid::EquidistantGrid, inner_stencil, closure_stencils)
 
-Creates the Laplace operator operator `Δ` as a `TensorMapping`
+Creates the Laplace operator `Δ` as a `LazyTensor`
 
 `Δ` approximates the Laplace operator ∑d²/dxᵢ², i = 1,...,`Dim` on `grid`, using
 the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
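A sketch contrasting the wrapping constructor with the free function, reusing the assumed 2D grid `g2` from above:

```julia
Δ = Laplace(g2, stencil_set)  # stores the StencilSet alongside the difference operator
u = evalOn(g2, (x,y) -> sin(x) + cos(y))
Δu = Δ*u                      # application forwards to the wrapped LazyTensor

# The free function `laplace` takes explicit stencils instead of a StencilSet:
inner    = parse_stencil(stencil_set["D2"]["inner_stencil"])
closures = parse_stencil.(stencil_set["D2"]["closure_stencils"])
Δ′ = laplace(g2, inner, closures)
```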
--- a/src/SbpOperators/volumeops/volume_operator.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/src/SbpOperators/volumeops/volume_operator.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -6,7 +6,7 @@
 the stencils `inner_stencil` and `closure_stencils`. When `Dim=1`, the
 corresponding `VolumeOperator` tensor mapping is returned. When `Dim>1`, the
 returned operator is the appropriate outer product of a one-dimensional
-operators and `IdentityMapping`s, e.g for `Dim=3` the volume operator in the
+operator and `IdentityTensor`s, e.g. for `Dim=3` the volume operator in the
 y-direction is `I⊗op⊗I`.
 """
 function volume_operator(grid::EquidistantGrid, inner_stencil, closure_stencils, parity, direction)
@@ -14,9 +14,9 @@
     # Create the 1D volume operator along the coordinate direction
     op = VolumeOperator(restrict(grid, direction), inner_stencil, closure_stencils, parity)
 
-    # Create 1D IdentityMappings for each coordinate direction
+    # Create 1D IdentityTensors for each coordinate direction
     one_d_grids = restrict.(Ref(grid), Tuple(1:dimension(grid)))
-    Is = IdentityMapping{eltype(grid)}.(size.(one_d_grids))
+    Is = IdentityTensor{eltype(grid)}.(size.(one_d_grids))
 
     # Formulate the correct outer product sequence of the identity mappings and
     # the volume operator
     parts = Base.setindex(Is, op, direction)
@@ -27,7 +27,7 @@
     VolumeOperator{T,N,M,K} <: LazyTensor{T,1,1}
 Implements a one-dimensional constant-coefficient volume operator.
 """
-struct VolumeOperator{T,N,M,K} <: TensorMapping{T,1,1}
+struct VolumeOperator{T,N,M,K} <: LazyTensor{T,1,1}
     inner_stencil::Stencil{T,N}
     closure_stencils::NTuple{M,Stencil{T,K}}
     size::NTuple{1,Int}
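The `I⊗op⊗I` structure described above can be written directly with `⊗`, which the tests confirm is equivalent to `InflatedTensor`; a small sketch with made-up sizes and `ScalingTensor` standing in for the 1D operator:

```julia
using Sbplib.LazyTensors

# A 1D operator acting along y on a (nx,ny,nz) grid, inflated with identities
# in the orthogonal directions:
nx, ny, nz = 4, 5, 6
op = ScalingTensor(2.0, (ny,))
Dy = IdentityTensor{Float64}(nx) ⊗ op ⊗ IdentityTensor{Float64}(nz)
# Dy == InflatedTensor(IdentityTensor{Float64}(nx), op, IdentityTensor{Float64}(nz))
```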
--- a/test/DiffOps/DiffOps_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/DiffOps/DiffOps_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -22,8 +22,8 @@ # v[:,2] = [7, 8, 9, 10] # v[:,1] = [10, 11, 12, 13] # -# @test e_w isa TensorMapping{T,2,1} where T -# @test e_w' isa TensorMapping{T,1,2} where T +# @test e_w isa LazyTensor{T,2,1} where T +# @test e_w' isa LazyTensor{T,1,2} where T # # @test domain_size(e_w, (3,2)) == (2,) # @test domain_size(e_e, (3,2)) == (2,) @@ -81,8 +81,8 @@ # v∂x = evalOn(g, (x,y)-> 2*x + y) # v∂y = evalOn(g, (x,y)-> 2*(y-1) + x) # -# @test d_w isa TensorMapping{T,2,1} where T -# @test d_w' isa TensorMapping{T,1,2} where T +# @test d_w isa LazyTensor{T,2,1} where T +# @test d_w' isa LazyTensor{T,1,2} where T # # @test domain_size(d_w, (3,2)) == (2,) # @test domain_size(d_e, (3,2)) == (2,)
--- a/test/LazyTensors/lazy_array_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/LazyTensors/lazy_array_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -59,8 +59,6 @@ end @test_throws BoundsError (v1 +̃ v2)[4] v2 = [1., 2, 3, 4] - # Test that size of arrays is asserted when not specified inbounds - # TODO: Replace these errors with SizeMismatch @test_throws DimensionMismatch v1 +̃ v2 # Test operations on LazyArray @@ -76,8 +74,6 @@ end @test_throws BoundsError (v1 + v2)[4] v2 = [1., 2, 3, 4] - # Test that size of arrays is asserted when not specified inbounds - # TODO: Replace these errors with SizeMismatch @test_throws DimensionMismatch v1 + v2 end
--- a/test/LazyTensors/lazy_tensor_operations_test.jl	Mon Mar 21 13:21:48 2022 +0100
+++ b/test/LazyTensors/lazy_tensor_operations_test.jl	Wed Mar 23 12:54:45 2022 +0100
@@ -4,17 +4,28 @@
 using Tullio
 
 
-@testset "Mapping transpose" begin
-    struct DummyMapping{T,R,D} <: TensorMapping{T,R,D} end
+struct DummyMapping{T,R,D} <: LazyTensor{T,R,D} end
 
-    LazyTensors.apply(m::DummyMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = :apply
-    LazyTensors.apply_transpose(m::DummyMapping{T,R,D}, v, I::Vararg{Any,D}) where {T,R,D} = :apply_transpose
+LazyTensors.apply(m::DummyMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = :apply
+LazyTensors.apply_transpose(m::DummyMapping{T,R,D}, v, I::Vararg{Any,D}) where {T,R,D} = :apply_transpose
+
+LazyTensors.range_size(m::DummyMapping) = :range_size
+LazyTensors.domain_size(m::DummyMapping) = :domain_size
+
 
-    LazyTensors.range_size(m::DummyMapping) = :range_size
-    LazyTensors.domain_size(m::DummyMapping) = :domain_size
+struct SizeDoublingMapping{T,R,D} <: LazyTensor{T,R,D}
+    domain_size::NTuple{D,Int}
+end
+LazyTensors.apply(m::SizeDoublingMapping{T,R}, v, i::Vararg{Any,R}) where {T,R} = (:apply,v,i)
+LazyTensors.range_size(m::SizeDoublingMapping) = 2 .* m.domain_size
+LazyTensors.domain_size(m::SizeDoublingMapping) = m.domain_size
+
+
+
+@testset "Mapping transpose" begin
     m = DummyMapping{Float64,2,3}()
-    @test m' isa TensorMapping{Float64, 3,2}
+    @test m' isa LazyTensor{Float64, 3,2}
     @test m'' == m
     @test apply(m',zeros(Float64,(0,0)), 0, 0, 0) == :apply_transpose
     @test apply(m'',zeros(Float64,(0,0,0)), 0, 0) == :apply
@@ -24,91 +35,91 @@
     @test domain_size(m') == :range_size
 end
 
+
 @testset "TensorApplication" begin
-    struct SizeDoublingMapping{T,R,D} <: TensorMapping{T,R,D}
-        domain_size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::SizeDoublingMapping{T,R}, v, i::Vararg{Any,R}) where {T,R} = (:apply,v,i)
-    LazyTensors.range_size(m::SizeDoublingMapping) = 2 .* m.domain_size
-    LazyTensors.domain_size(m::SizeDoublingMapping) = m.domain_size
-
-    m = SizeDoublingMapping{Int, 1, 1}((3,))
+    m = SizeDoublingMapping{Int, 1, 1}((3,))
+    mm = SizeDoublingMapping{Int, 1, 1}((6,))
     v = [0,1,2]
     @test size(m*v) == 2 .*size(v)
-    @test (m*v)[0] == (:apply,v,(0,))
-    @test (m*m*v)[1] == (:apply,m*v,(1,))
-    @test (m*m*v)[3] == (:apply,m*v,(3,))
-    @test (m*m*v)[6] == (:apply,m*v,(6,))
-    @test_broken BoundsError == (m*m*v)[0]
-    @test_broken BoundsError == (m*m*v)[7]
+    @test (m*v)[1] == (:apply,v,(1,))
+    @test (mm*m*v)[1] == (:apply,m*v,(1,))
+    @test (mm*m*v)[3] == (:apply,m*v,(3,))
+    @test (mm*m*v)[6] == (:apply,m*v,(6,))
    @test_throws MethodError m*m
 
    @test (m*v)[CartesianIndex(2)] == (:apply,v,(2,))
-    @test (m*m*v)[CartesianIndex(2)] == (:apply,m*v,(2,))
-
-    m = SizeDoublingMapping{Int, 2, 1}((3,))
-    @test_throws MethodError m*ones(Int,2,2)
-    @test_throws MethodError m*m*v
+    @test (mm*m*v)[CartesianIndex(2)] == (:apply,m*v,(2,))
 
    m = SizeDoublingMapping{Float64, 2, 2}((3,3))
+    mm = SizeDoublingMapping{Float64, 2, 2}((6,6))
    v = ones(3,3)
    @test size(m*v) == 2 .*size(v)
    @test (m*v)[1,2] == (:apply,v,(1,2))
    @test (m*v)[CartesianIndex(2,3)] == (:apply,v,(2,3))
-    @test (m*m*v)[CartesianIndex(4,3)] == (:apply,m*v,(4,3))
+    @test (mm*m*v)[CartesianIndex(4,3)] == (:apply,m*v,(4,3))
 
-    struct ScalingOperator{T,D} <: TensorMapping{T,D,D}
-        λ::T
-        size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...]
- LazyTensors.range_size(m::ScalingOperator) = m.size - LazyTensors.domain_size(m::ScalingOperator) = m.size - - m = ScalingOperator{Int,1}(2,(3,)) + m = ScalingTensor(2,(3,)) v = [1,2,3] @test m*v isa AbstractVector @test m*v == [2,4,6] - m = ScalingOperator{Int,2}(2,(2,2)) + m = ScalingTensor(2,(2,2)) v = [[1 2];[3 4]] @test m*v == [[2 4];[6 8]] @test (m*v)[2,1] == 6 + @testset "Error on index out of bounds" begin + m = SizeDoublingMapping{Int, 1, 1}((3,)) + v = [0,1,2] + + @test_throws BoundsError (m*v)[0] + @test_throws BoundsError (m*v)[7] + end + + @testset "Error on unmatched dimensions" begin + v = [0,1,2] + m = SizeDoublingMapping{Int, 2, 1}((3,)) + @test_throws MethodError m*ones(Int,2,2) + @test_throws MethodError m*m*v + end + + @testset "Error on unmatched sizes" begin + @test_throws DomainSizeMismatch ScalingTensor(2,(2,))*ones(3) + @test_throws DomainSizeMismatch ScalingTensor(2,(2,))*ScalingTensor(2,(3,))*ones(3) + end + + @testset "Type calculation" begin - m = ScalingOperator{Int,1}(2,(3,)) + m = ScalingTensor(2,(3,)) v = [1.,2.,3.] @test m*v isa AbstractVector{Float64} @test m*v == [2.,4.,6.] @inferred m*v @inferred (m*v)[1] - m = ScalingOperator{Int,2}(2,(2,2)) + m = ScalingTensor(2,(2,2)) v = [[1. 2.];[3. 4.]] @test m*v == [[2. 4.];[6. 8.]] @test (m*v)[2,1] == 6. @inferred m*v @inferred (m*v)[1] - m = ScalingOperator{ComplexF64,1}(2. +2. *im,(3,)) + m = ScalingTensor(2. +2. *im,(3,)) v = [1.,2.,3.] @test m*v isa AbstractVector{ComplexF64} @test m*v == [2. + 2. *im, 4. + 4. *im, 6. + 6. *im] @inferred m*v @inferred (m*v)[1] - m = ScalingOperator{ComplexF64,1}(1,(3,)) + m = ScalingTensor(1,(3,)) v = [2. + 2. *im, 4. + 4. *im, 6. + 6. *im] @test m*v isa AbstractVector{ComplexF64} @test m*v == [2. + 2. *im, 4. + 4. *im, 6. + 6. *im] @inferred m*v @inferred (m*v)[1] - m = ScalingOperator{Float64,1}(2., (3,)) + m = ScalingTensor(2., (3,)) v = [[1,2,3], [3,2,1],[1,3,1]] @test m*v isa AbstractVector{Vector{Float64}} @test m*v == [[2.,4.,6.], [6.,4.,2.],[2.,6.,2.]] @@ -117,19 +128,10 @@ end end -@testset "TensorMapping binary operations" begin - struct ScalarMapping{T,R,D} <: TensorMapping{T,R,D} - λ::T - range_size::NTuple{R,Int} - domain_size::NTuple{D,Int} - end - LazyTensors.apply(m::ScalarMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = m.λ*v[I...] 
- LazyTensors.range_size(m::ScalarMapping) = m.domain_size - LazyTensors.domain_size(m::ScalarMapping) = m.range_size - - A = ScalarMapping{Float64,1,1}(2.0, (3,), (3,)) - B = ScalarMapping{Float64,1,1}(3.0, (3,), (3,)) +@testset "LazyTensor binary operations" begin + A = ScalingTensor(2.0, (3,)) + B = ScalingTensor(3.0, (3,)) v = [1.1,1.2,1.3] for i ∈ eachindex(v) @@ -140,24 +142,34 @@ @test ((A-B)*v)[i] == 2*v[i] - 3*v[i] end + @test range_size(A+B) == range_size(A) == range_size(B) @test domain_size(A+B) == domain_size(A) == domain_size(B) @test ((A+B)*ComplexF64[1.1,1.2,1.3])[3] isa ComplexF64 + + @testset "Error on unmatched sizes" begin + @test_throws Union{DomainSizeMismatch, RangeSizeMismatch} ScalingTensor(2.0, (3,)) + ScalingTensor(2.0, (4,)) + + @test_throws DomainSizeMismatch ScalingTensor(2.0, (4,)) + SizeDoublingMapping{Float64,1,1}((2,)) + @test_throws DomainSizeMismatch SizeDoublingMapping{Float64,1,1}((2,)) + ScalingTensor(2.0, (4,)) + @test_throws RangeSizeMismatch ScalingTensor(2.0, (2,)) + SizeDoublingMapping{Float64,1,1}((2,)) + @test_throws RangeSizeMismatch SizeDoublingMapping{Float64,1,1}((2,)) + ScalingTensor(2.0, (2,)) + end end -@testset "TensorMappingComposition" begin +@testset "TensorComposition" begin A = rand(2,3) B = rand(3,4) - à = LazyLinearMap(A, (1,), (2,)) - B̃ = LazyLinearMap(B, (1,), (2,)) + à = DenseTensor(A, (1,), (2,)) + B̃ = DenseTensor(B, (1,), (2,)) - @test Ã∘B̃ isa TensorMappingComposition + @test Ã∘B̃ isa TensorComposition @test range_size(Ã∘B̃) == (2,) @test domain_size(Ã∘B̃) == (4,) - @test_throws SizeMismatch B̃∘à + @test_throws DomainSizeMismatch B̃∘à # @test @inbounds B̃∘à # Should not error even though dimensions don't match. (Since ]test runs with forced boundschecking this is currently not testable 2020-10-16) @@ -171,203 +183,102 @@ @test ((Ã∘B̃)'*ComplexF64[1.,2.])[1] isa ComplexF64 end -@testset "LazyLinearMap" begin - # Test a standard matrix-vector product - # mapping vectors of size 4 to vectors of size 3. 
- A = rand(3,4) - à = LazyLinearMap(A, (1,), (2,)) - v = rand(4) - w = rand(3) - @test à isa LazyLinearMap{T,1,1} where T - @test à isa TensorMapping{T,1,1} where T - @test range_size(Ã) == (3,) - @test domain_size(Ã) == (4,) - - @test Ã*ones(4) ≈ A*ones(4) atol=5e-13 - @test Ã*v ≈ A*v atol=5e-13 - @test Ã'*w ≈ A'*w - - A = rand(2,3,4) - @test_throws DomainError LazyLinearMap(A, (3,1), (2,)) - - # Test more exotic mappings - B = rand(3,4,2) - # Map vectors of size 2 to matrices of size (3,4) - B̃ = LazyLinearMap(B, (1,2), (3,)) - v = rand(2) - - @test range_size(B̃) == (3,4) - @test domain_size(B̃) == (2,) - @test B̃ isa TensorMapping{T,2,1} where T - @test B̃*ones(2) ≈ B[:,:,1] + B[:,:,2] atol=5e-13 - @test B̃*v ≈ B[:,:,1]*v[1] + B[:,:,2]*v[2] atol=5e-13 - - # Map matrices of size (3,2) to vectors of size 4 - B̃ = LazyLinearMap(B, (2,), (1,3)) - v = rand(3,2) - - @test range_size(B̃) == (4,) - @test domain_size(B̃) == (3,2) - @test B̃ isa TensorMapping{T,1,2} where T - @test B̃*ones(3,2) ≈ B[1,:,1] + B[2,:,1] + B[3,:,1] + - B[1,:,2] + B[2,:,2] + B[3,:,2] atol=5e-13 - @test B̃*v ≈ B[1,:,1]*v[1,1] + B[2,:,1]*v[2,1] + B[3,:,1]*v[3,1] + - B[1,:,2]v[1,2] + B[2,:,2]*v[2,2] + B[3,:,2]*v[3,2] atol=5e-13 - - - # TODO: - # @inferred (B̃*v)[2] -end - - -@testset "IdentityMapping" begin - @test IdentityMapping{Float64}((4,5)) isa IdentityMapping{T,2} where T - @test IdentityMapping{Float64}((4,5)) isa TensorMapping{T,2,2} where T - @test IdentityMapping{Float64}((4,5)) == IdentityMapping{Float64}(4,5) - - @test IdentityMapping(3,2) isa IdentityMapping{Float64,2} - - for sz ∈ [(4,5),(3,),(5,6,4)] - I = IdentityMapping{Float64}(sz) - v = rand(sz...) - @test I*v == v - @test I'*v == v - - v = rand(ComplexF64,sz...) - @test I*v == v - @test I'*v == v - - @test range_size(I) == sz - @test domain_size(I) == sz - end - - I = IdentityMapping{Float64}((4,5)) - v = rand(4,5) - @inferred (I*v)[3,2] - @inferred (I'*v)[3,2] - @inferred range_size(I) - - @inferred range_dim(I) - @inferred domain_dim(I) - - à = rand(4,2) - A = LazyLinearMap(Ã,(1,),(2,)) - I1 = IdentityMapping{Float64}(2) - I2 = IdentityMapping{Float64}(4) - @test A∘I1 == A - @test I2∘A == A - @test I1∘I1 == I1 - @test_throws SizeMismatch I1∘A - @test_throws SizeMismatch A∘I2 - @test_throws SizeMismatch I1∘I2 -end - -@testset "InflatedTensorMapping" begin - I(sz...) = IdentityMapping(sz...) +@testset "InflatedTensor" begin + I(sz...) = IdentityTensor(sz...) 
à = rand(4,2) B̃ = rand(4,2,3) C̃ = rand(4,2,3) - A = LazyLinearMap(Ã,(1,),(2,)) - B = LazyLinearMap(B̃,(1,2),(3,)) - C = LazyLinearMap(C̃,(1,),(2,3)) + A = DenseTensor(Ã,(1,),(2,)) + B = DenseTensor(B̃,(1,2),(3,)) + C = DenseTensor(C̃,(1,),(2,3)) @testset "Constructors" begin - @test InflatedTensorMapping(I(3,2), A, I(4)) isa TensorMapping{Float64, 4, 4} - @test InflatedTensorMapping(I(3,2), B, I(4)) isa TensorMapping{Float64, 5, 4} - @test InflatedTensorMapping(I(3), C, I(2,3)) isa TensorMapping{Float64, 4, 5} - @test InflatedTensorMapping(C, I(2,3)) isa TensorMapping{Float64, 3, 4} - @test InflatedTensorMapping(I(3), C) isa TensorMapping{Float64, 2, 3} - @test InflatedTensorMapping(I(3), I(2,3)) isa TensorMapping{Float64, 3, 3} + @test InflatedTensor(I(3,2), A, I(4)) isa LazyTensor{Float64, 4, 4} + @test InflatedTensor(I(3,2), B, I(4)) isa LazyTensor{Float64, 5, 4} + @test InflatedTensor(I(3), C, I(2,3)) isa LazyTensor{Float64, 4, 5} + @test InflatedTensor(C, I(2,3)) isa LazyTensor{Float64, 3, 4} + @test InflatedTensor(I(3), C) isa LazyTensor{Float64, 2, 3} + @test InflatedTensor(I(3), I(2,3)) isa LazyTensor{Float64, 3, 3} end @testset "Range and domain size" begin - @test range_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,4,4) - @test domain_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,2,4) + @test range_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,4,4) + @test domain_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,2,4) - @test range_size(InflatedTensorMapping(I(3,2), B, I(4))) == (3,2,4,2,4) - @test domain_size(InflatedTensorMapping(I(3,2), B, I(4))) == (3,2,3,4) + @test range_size(InflatedTensor(I(3,2), B, I(4))) == (3,2,4,2,4) + @test domain_size(InflatedTensor(I(3,2), B, I(4))) == (3,2,3,4) - @test range_size(InflatedTensorMapping(I(3), C, I(2,3))) == (3,4,2,3) - @test domain_size(InflatedTensorMapping(I(3), C, I(2,3))) == (3,2,3,2,3) + @test range_size(InflatedTensor(I(3), C, I(2,3))) == (3,4,2,3) + @test domain_size(InflatedTensor(I(3), C, I(2,3))) == (3,2,3,2,3) - @inferred range_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,4,4) - @inferred domain_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,2,4) + @inferred range_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,4,4) + @inferred domain_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,2,4) end @testset "Application" begin # Testing regular application and transposed application with inflation "before", "after" and "before and after". # The inflated tensor mappings are chosen to preserve, reduce and increase the dimension of the result compared to the input. 
- tests = [ + cases = [ ( - InflatedTensorMapping(I(3,2), A, I(4)), + InflatedTensor(I(3,2), A, I(4)), (v-> @tullio res[a,b,c,d] := Ã[c,i]*v[a,b,i,d]), # Expected result of apply (v-> @tullio res[a,b,c,d] := Ã[i,c]*v[a,b,i,d]), # Expected result of apply_transpose ), ( - InflatedTensorMapping(I(3,2), B, I(4)), + InflatedTensor(I(3,2), B, I(4)), (v-> @tullio res[a,b,c,d,e] := B̃[c,d,i]*v[a,b,i,e]), (v-> @tullio res[a,b,c,d] := B̃[i,j,c]*v[a,b,i,j,d]), ), ( - InflatedTensorMapping(I(3,2), C, I(4)), + InflatedTensor(I(3,2), C, I(4)), (v-> @tullio res[a,b,c,d] := C̃[c,i,j]*v[a,b,i,j,d]), (v-> @tullio res[a,b,c,d,e] := C̃[i,c,d]*v[a,b,i,e]), ), ( - InflatedTensorMapping(I(3,2), A), + InflatedTensor(I(3,2), A), (v-> @tullio res[a,b,c] := Ã[c,i]*v[a,b,i]), (v-> @tullio res[a,b,c] := Ã[i,c]*v[a,b,i]), ), ( - InflatedTensorMapping(I(3,2), B), + InflatedTensor(I(3,2), B), (v-> @tullio res[a,b,c,d] := B̃[c,d,i]*v[a,b,i]), (v-> @tullio res[a,b,c] := B̃[i,j,c]*v[a,b,i,j]), ), ( - InflatedTensorMapping(I(3,2), C), + InflatedTensor(I(3,2), C), (v-> @tullio res[a,b,c] := C̃[c,i,j]*v[a,b,i,j]), (v-> @tullio res[a,b,c,d] := C̃[i,c,d]*v[a,b,i]), ), ( - InflatedTensorMapping(A,I(4)), + InflatedTensor(A,I(4)), (v-> @tullio res[a,b] := Ã[a,i]*v[i,b]), (v-> @tullio res[a,b] := Ã[i,a]*v[i,b]), ), ( - InflatedTensorMapping(B,I(4)), + InflatedTensor(B,I(4)), (v-> @tullio res[a,b,c] := B̃[a,b,i]*v[i,c]), (v-> @tullio res[a,b] := B̃[i,j,a]*v[i,j,b]), ), ( - InflatedTensorMapping(C,I(4)), + InflatedTensor(C,I(4)), (v-> @tullio res[a,b] := C̃[a,i,j]*v[i,j,b]), (v-> @tullio res[a,b,c] := C̃[i,a,b]*v[i,c]), ), ] - @testset "apply" begin - for i ∈ 1:length(tests) - tm = tests[i][1] - v = rand(domain_size(tm)...) - true_value = tests[i][2](v) - @test tm*v ≈ true_value rtol=1e-14 - end - end + @testset "$tm" for (tm, true_apply, true_apply_transpose) ∈ cases + v = rand(domain_size(tm)...) + @test tm*v ≈ true_apply(v) rtol=1e-14 - @testset "apply_transpose" begin - for i ∈ 1:length(tests) - tm = tests[i][1] - v = rand(range_size(tm)...) - true_value = tests[i][3](v) - @test tm'*v ≈ true_value rtol=1e-14 - end + v = rand(range_size(tm)...) + @test tm'*v ≈ true_apply_transpose(v) rtol=1e-14 end @testset "application to other type" begin - tm = InflatedTensorMapping(I(3,2), A, I(4)) + tm = InflatedTensor(I(3,2), A, I(4)) v = rand(ComplexF64, domain_size(tm)...) @test (tm*v)[1,2,3,1] isa ComplexF64 @@ -377,16 +288,7 @@ end @testset "Inference of application" begin - struct ScalingOperator{T,D} <: TensorMapping{T,D,D} - λ::T - size::NTuple{D,Int} - end - - LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...] - LazyTensors.range_size(m::ScalingOperator) = m.size - LazyTensors.domain_size(m::ScalingOperator) = m.size - - tm = InflatedTensorMapping(I(2,3),ScalingOperator(2.0, (3,2)),I(3,4)) + tm = InflatedTensor(I(2,3),ScalingTensor(2.0, (3,2)),I(3,4)) v = rand(domain_size(tm)...) 
@inferred apply(tm,v,1,2,3,2,2,4) @@ -394,94 +296,24 @@ end end - @testset "InflatedTensorMapping of InflatedTensorMapping" begin - A = ScalingOperator(2.0,(2,3)) - itm = InflatedTensorMapping(I(3,2), A, I(4)) - @test InflatedTensorMapping(I(4), itm, I(2)) == InflatedTensorMapping(I(4,3,2), A, I(4,2)) - @test InflatedTensorMapping(itm, I(2)) == InflatedTensorMapping(I(3,2), A, I(4,2)) - @test InflatedTensorMapping(I(4), itm) == InflatedTensorMapping(I(4,3,2), A, I(4)) + @testset "InflatedTensor of InflatedTensor" begin + A = ScalingTensor(2.0,(2,3)) + itm = InflatedTensor(I(3,2), A, I(4)) + @test InflatedTensor(I(4), itm, I(2)) == InflatedTensor(I(4,3,2), A, I(4,2)) + @test InflatedTensor(itm, I(2)) == InflatedTensor(I(3,2), A, I(4,2)) + @test InflatedTensor(I(4), itm) == InflatedTensor(I(4,3,2), A, I(4)) - @test InflatedTensorMapping(I(2), I(2), I(2)) isa InflatedTensorMapping # The constructor should always return its type. + @test InflatedTensor(I(2), I(2), I(2)) isa InflatedTensor # The constructor should always return its type. end end -@testset "split_index" begin - @test LazyTensors.split_index(Val(2),Val(1),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,5,6),(3,4)) - @test LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,:,:,5,6),(3,4)) - @test LazyTensors.split_index(Val(3),Val(1),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,5,6),(4,)) - @test LazyTensors.split_index(Val(3),Val(2),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,:,5,6),(4,)) - @test LazyTensors.split_index(Val(1),Val(1),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,4,5,6),(2,3)) - @test LazyTensors.split_index(Val(1),Val(2),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,:,4,5,6),(2,3)) - - @test LazyTensors.split_index(Val(0),Val(1),Val(3),Val(3),1,2,3,4,5,6) == ((:,4,5,6),(1,2,3)) - @test LazyTensors.split_index(Val(3),Val(1),Val(3),Val(0),1,2,3,4,5,6) == ((1,2,3,:),(4,5,6)) - - @inferred LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,2,2,4) -end - -@testset "slice_tuple" begin - @test LazyTensors.slice_tuple((1,2,3),Val(1), Val(3)) == (1,2,3) - @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(2), Val(5)) == (2,3,4,5) - @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(1), Val(3)) == (1,2,3) - @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(4), Val(6)) == (4,5,6) -end - -@testset "split_tuple" begin - @testset "2 parts" begin - @test LazyTensors.split_tuple((),Val(0)) == ((),()) - @test LazyTensors.split_tuple((1,),Val(0)) == ((),(1,)) - @test LazyTensors.split_tuple((1,),Val(1)) == ((1,),()) - - @test LazyTensors.split_tuple((1,2,3,4),Val(0)) == ((),(1,2,3,4)) - @test LazyTensors.split_tuple((1,2,3,4),Val(1)) == ((1,),(2,3,4)) - @test LazyTensors.split_tuple((1,2,3,4),Val(2)) == ((1,2),(3,4)) - @test LazyTensors.split_tuple((1,2,3,4),Val(3)) == ((1,2,3),(4,)) - @test LazyTensors.split_tuple((1,2,3,4),Val(4)) == ((1,2,3,4),()) - - @test LazyTensors.split_tuple((1,2,true,4),Val(3)) == ((1,2,true),(4,)) - - @inferred LazyTensors.split_tuple((1,2,3,4),Val(3)) - @inferred LazyTensors.split_tuple((1,2,true,4),Val(3)) - end - - @testset "3 parts" begin - @test LazyTensors.split_tuple((),Val(0),Val(0)) == ((),(),()) - @test LazyTensors.split_tuple((1,2,3),Val(1), Val(1)) == ((1,),(2,),(3,)) - @test LazyTensors.split_tuple((1,true,3),Val(1), Val(1)) == ((1,),(true,),(3,)) - - @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(1),Val(2)) == ((1,),(2,3),(4,5,6)) - @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) == ((1,2,3),(4,5),(6,)) - - @inferred LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) - @inferred 
LazyTensors.split_tuple((1,true,3),Val(1), Val(1))
-    end
-end
-
-@testset "flatten_tuple" begin
-    @test LazyTensors.flatten_tuple((1,)) == (1,)
-    @test LazyTensors.flatten_tuple((1,2,3,4,5,6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple((1,2,(3,4),5,6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple((1,2,(3,(4,5)),6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple(((1,2),(3,4),(5,),6)) == (1,2,3,4,5,6)
-end
-
-
 @testset "LazyOuterProduct" begin
-    struct ScalingOperator{T,D} <: TensorMapping{T,D,D}
-        λ::T
-        size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...]
-    LazyTensors.range_size(m::ScalingOperator) = m.size
-    LazyTensors.domain_size(m::ScalingOperator) = m.size
-
-    A = ScalingOperator(2.0, (5,))
-    B = ScalingOperator(3.0, (3,))
-    C = ScalingOperator(5.0, (3,2))
+    A = ScalingTensor(2.0, (5,))
+    B = ScalingTensor(3.0, (3,))
+    C = ScalingTensor(5.0, (3,2))
 
    AB = LazyOuterProduct(A,B)
-    @test AB isa TensorMapping{T,2,2} where T
+    @test AB isa LazyTensor{T,2,2} where T
    @test range_size(AB) == (5,3)
    @test domain_size(AB) == (5,3)
 
@@ -490,7 +322,7 @@
 
    ABC = LazyOuterProduct(A,B,C)
 
-    @test ABC isa TensorMapping{T,4,4} where T
+    @test ABC isa LazyTensor{T,4,4} where T
    @test range_size(ABC) == (5,3,3,2)
    @test domain_size(ABC) == (5,3,3,2)
 
@@ -503,8 +335,8 @@
    v₁ = rand(2,4,3)
    v₂ = rand(4,3,2)
 
-    Ã = LazyLinearMap(A,(1,),(2,))
-    B̃ = LazyLinearMap(B,(1,),(2,3))
+    Ã = DenseTensor(A,(1,),(2,))
+    B̃ = DenseTensor(B,(1,),(2,3))
    ÃB̃ = LazyOuterProduct(Ã,B̃)
 
    @tullio ABv[i,k] := A[i,j]*B[k,l,m]*v₁[j,l,m]
@@ -515,15 +347,14 @@
    @test B̃Ã*v₂ ≈ BAv
 
    @testset "Identity mapping arguments" begin
-        @test LazyOuterProduct(IdentityMapping(3,2), IdentityMapping(1,2)) == IdentityMapping(3,2,1,2)
+        @test LazyOuterProduct(IdentityTensor(3,2), IdentityTensor(1,2)) == IdentityTensor(3,2,1,2)
 
-        Ã = LazyLinearMap(A,(1,),(2,))
-        @test LazyOuterProduct(IdentityMapping(3,2), Ã) == InflatedTensorMapping(IdentityMapping(3,2),Ã)
-        @test LazyOuterProduct(Ã, IdentityMapping(3,2)) == InflatedTensorMapping(Ã,IdentityMapping(3,2))
+        Ã = DenseTensor(A,(1,),(2,))
+        @test LazyOuterProduct(IdentityTensor(3,2), Ã) == InflatedTensor(IdentityTensor(3,2),Ã)
+        @test LazyOuterProduct(Ã, IdentityTensor(3,2)) == InflatedTensor(Ã,IdentityTensor(3,2))
 
-        I1 = IdentityMapping(3,2)
-        I2 = IdentityMapping(4)
-        @test I1⊗Ã⊗I2 == InflatedTensorMapping(I1, Ã, I2)
+        I1 = IdentityTensor(3,2)
+        I2 = IdentityTensor(4)
+        @test I1⊗Ã⊗I2 == InflatedTensor(I1, Ã, I2)
    end
-
 end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/LazyTensors/lazy_tensor_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -0,0 +1,12 @@ +using Test +using Sbplib.LazyTensors + +@testset "Generic Mapping methods" begin + struct DummyMapping{T,R,D} <: LazyTensor{T,R,D} end + LazyTensors.apply(m::DummyMapping{T,R,D}, v, I::Vararg{Any,R}) where {T,R,D} = :apply + @test range_dim(DummyMapping{Int,2,3}()) == 2 + @test domain_dim(DummyMapping{Int,2,3}()) == 3 + @test apply(DummyMapping{Int,2,3}(), zeros(Int, (0,0,0)),0,0) == :apply + @test eltype(DummyMapping{Int,2,3}()) == Int + @test eltype(DummyMapping{Float64,2,3}()) == Float64 +end
--- a/test/LazyTensors/tensor_mapping_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,12 +0,0 @@ -using Test -using Sbplib.LazyTensors - -@testset "Generic Mapping methods" begin - struct DummyMapping{T,R,D} <: TensorMapping{T,R,D} end - LazyTensors.apply(m::DummyMapping{T,R,D}, v, I::Vararg{Any,R}) where {T,R,D} = :apply - @test range_dim(DummyMapping{Int,2,3}()) == 2 - @test domain_dim(DummyMapping{Int,2,3}()) == 3 - @test apply(DummyMapping{Int,2,3}(), zeros(Int, (0,0,0)),0,0) == :apply - @test eltype(DummyMapping{Int,2,3}()) == Int - @test eltype(DummyMapping{Float64,2,3}()) == Float64 -end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/LazyTensors/tensor_types_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -0,0 +1,109 @@ +using Test +using Sbplib.LazyTensors + +@testset "IdentityTensor" begin + @test IdentityTensor{Float64}((4,5)) isa IdentityTensor{T,2} where T + @test IdentityTensor{Float64}((4,5)) isa LazyTensor{T,2,2} where T + @test IdentityTensor{Float64}((4,5)) == IdentityTensor{Float64}(4,5) + + @test IdentityTensor(3,2) isa IdentityTensor{Float64,2} + + for sz ∈ [(4,5),(3,),(5,6,4)] + I = IdentityTensor{Float64}(sz) + v = rand(sz...) + @test I*v == v + @test I'*v == v + + v = rand(ComplexF64,sz...) + @test I*v == v + @test I'*v == v + + @test range_size(I) == sz + @test domain_size(I) == sz + end + + I = IdentityTensor{Float64}((4,5)) + v = rand(4,5) + @inferred (I*v)[3,2] + @inferred (I'*v)[3,2] + @inferred range_size(I) + + @inferred range_dim(I) + @inferred domain_dim(I) + + à = rand(4,2) + A = DenseTensor(Ã,(1,),(2,)) + I1 = IdentityTensor{Float64}(2) + I2 = IdentityTensor{Float64}(4) + @test A∘I1 == A + @test I2∘A == A + @test I1∘I1 == I1 + @test_throws DomainSizeMismatch I1∘A + @test_throws DomainSizeMismatch A∘I2 + @test_throws DomainSizeMismatch I1∘I2 +end + + +@testset "ScalingTensor" begin + st = ScalingTensor(2.,(3,4)) + @test st isa LazyTensor{Float64, 2, 2} + @test range_size(st) == (3,4) + @test domain_size(st) == (3,4) + + v = rand(3,4) + @test st*v == 2.0 .* v + @test st'*v == 2.0 .* v + + @inferred (st*v)[2,2] + @inferred (st'*v)[2,2] +end + + +@testset "DenseTensor" begin + # Test a standard matrix-vector product + # mapping vectors of size 4 to vectors of size 3. + A = rand(3,4) + à = DenseTensor(A, (1,), (2,)) + v = rand(4) + w = rand(3) + + @test à isa DenseTensor{T,1,1} where T + @test à isa LazyTensor{T,1,1} where T + @test range_size(Ã) == (3,) + @test domain_size(Ã) == (4,) + + @test Ã*ones(4) ≈ A*ones(4) atol=5e-13 + @test Ã*v ≈ A*v atol=5e-13 + @test Ã'*w ≈ A'*w + + A = rand(2,3,4) + @test_throws DomainError DenseTensor(A, (3,1), (2,)) + + # Test more exotic mappings + B = rand(3,4,2) + # Map vectors of size 2 to matrices of size (3,4) + B̃ = DenseTensor(B, (1,2), (3,)) + v = rand(2) + + @test range_size(B̃) == (3,4) + @test domain_size(B̃) == (2,) + @test B̃ isa LazyTensor{T,2,1} where T + @test B̃*ones(2) ≈ B[:,:,1] + B[:,:,2] atol=5e-13 + @test B̃*v ≈ B[:,:,1]*v[1] + B[:,:,2]*v[2] atol=5e-13 + + # Map matrices of size (3,2) to vectors of size 4 + B̃ = DenseTensor(B, (2,), (1,3)) + v = rand(3,2) + + @test range_size(B̃) == (4,) + @test domain_size(B̃) == (3,2) + @test B̃ isa LazyTensor{T,1,2} where T + @test B̃*ones(3,2) ≈ B[1,:,1] + B[2,:,1] + B[3,:,1] + + B[1,:,2] + B[2,:,2] + B[3,:,2] atol=5e-13 + @test B̃*v ≈ B[1,:,1]*v[1,1] + B[2,:,1]*v[2,1] + B[3,:,1]*v[3,1] + + B[1,:,2]v[1,2] + B[2,:,2]*v[2,2] + B[3,:,2]*v[3,2] atol=5e-13 + + + # TODO: + # @inferred (B̃*v)[2] +end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/LazyTensors/tuple_manipulation_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -0,0 +1,62 @@ +using Test +using Sbplib.LazyTensors + +@testset "split_index" begin + @test LazyTensors.split_index(Val(2),Val(1),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,5,6),(3,4)) + @test LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,:,:,5,6),(3,4)) + @test LazyTensors.split_index(Val(3),Val(1),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,5,6),(4,)) + @test LazyTensors.split_index(Val(3),Val(2),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,:,5,6),(4,)) + @test LazyTensors.split_index(Val(1),Val(1),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,4,5,6),(2,3)) + @test LazyTensors.split_index(Val(1),Val(2),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,:,4,5,6),(2,3)) + + @test LazyTensors.split_index(Val(0),Val(1),Val(3),Val(3),1,2,3,4,5,6) == ((:,4,5,6),(1,2,3)) + @test LazyTensors.split_index(Val(3),Val(1),Val(3),Val(0),1,2,3,4,5,6) == ((1,2,3,:),(4,5,6)) + + @inferred LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,2,2,4) +end + +@testset "slice_tuple" begin + @test LazyTensors.slice_tuple((1,2,3),Val(1), Val(3)) == (1,2,3) + @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(2), Val(5)) == (2,3,4,5) + @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(1), Val(3)) == (1,2,3) + @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(4), Val(6)) == (4,5,6) +end + +@testset "split_tuple" begin + @testset "2 parts" begin + @test LazyTensors.split_tuple((),Val(0)) == ((),()) + @test LazyTensors.split_tuple((1,),Val(0)) == ((),(1,)) + @test LazyTensors.split_tuple((1,),Val(1)) == ((1,),()) + + @test LazyTensors.split_tuple((1,2,3,4),Val(0)) == ((),(1,2,3,4)) + @test LazyTensors.split_tuple((1,2,3,4),Val(1)) == ((1,),(2,3,4)) + @test LazyTensors.split_tuple((1,2,3,4),Val(2)) == ((1,2),(3,4)) + @test LazyTensors.split_tuple((1,2,3,4),Val(3)) == ((1,2,3),(4,)) + @test LazyTensors.split_tuple((1,2,3,4),Val(4)) == ((1,2,3,4),()) + + @test LazyTensors.split_tuple((1,2,true,4),Val(3)) == ((1,2,true),(4,)) + + @inferred LazyTensors.split_tuple((1,2,3,4),Val(3)) + @inferred LazyTensors.split_tuple((1,2,true,4),Val(3)) + end + + @testset "3 parts" begin + @test LazyTensors.split_tuple((),Val(0),Val(0)) == ((),(),()) + @test LazyTensors.split_tuple((1,2,3),Val(1), Val(1)) == ((1,),(2,),(3,)) + @test LazyTensors.split_tuple((1,true,3),Val(1), Val(1)) == ((1,),(true,),(3,)) + + @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(1),Val(2)) == ((1,),(2,3),(4,5,6)) + @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) == ((1,2,3),(4,5),(6,)) + + @inferred LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) + @inferred LazyTensors.split_tuple((1,true,3),Val(1), Val(1)) + end +end + +@testset "flatten_tuple" begin + @test LazyTensors.flatten_tuple((1,)) == (1,) + @test LazyTensors.flatten_tuple((1,2,3,4,5,6)) == (1,2,3,4,5,6) + @test LazyTensors.flatten_tuple((1,2,(3,4),5,6)) == (1,2,3,4,5,6) + @test LazyTensors.flatten_tuple((1,2,(3,(4,5)),6)) == (1,2,3,4,5,6) + @test LazyTensors.flatten_tuple(((1,2),(3,4),(5,),6)) == (1,2,3,4,5,6) +end
--- a/test/SbpOperators/boundaryops/boundary_operator_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/boundaryops/boundary_operator_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -18,18 +18,18 @@ op_l = BoundaryOperator{Lower}(closure_stencil,size(g_1D)[1]) @test op_l == BoundaryOperator(g_1D,closure_stencil,Lower()) @test op_l == boundary_operator(g_1D,closure_stencil,CartesianBoundary{1,Lower}()) - @test op_l isa TensorMapping{T,0,1} where T + @test op_l isa LazyTensor{T,0,1} where T op_r = BoundaryOperator{Upper}(closure_stencil,size(g_1D)[1]) @test op_r == BoundaryOperator(g_1D,closure_stencil,Upper()) @test op_r == boundary_operator(g_1D,closure_stencil,CartesianBoundary{1,Upper}()) - @test op_r isa TensorMapping{T,0,1} where T + @test op_r isa LazyTensor{T,0,1} where T end @testset "2D" begin e_w = boundary_operator(g_2D,closure_stencil,CartesianBoundary{1,Upper}()) - @test e_w isa InflatedTensorMapping - @test e_w isa TensorMapping{T,1,2} where T + @test e_w isa InflatedTensor + @test e_w isa LazyTensor{T,1,2} where T end end op_l, op_r = boundary_operator.(Ref(g_1D), Ref(closure_stencil), boundary_identifiers(g_1D))
--- a/test/SbpOperators/boundaryops/boundary_restriction_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/boundaryops/boundary_restriction_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -4,7 +4,7 @@ using Sbplib.Grids using Sbplib.LazyTensors using Sbplib.RegionIndices -import Sbplib.SbpOperators.BoundaryOperator +using Sbplib.SbpOperators: BoundaryOperator, Stencil @testset "boundary_restriction" begin stencil_set = read_stencil_set(sbp_operators_path()*"standard_diagonal.toml"; order = 4) @@ -18,20 +18,20 @@ @test e_l == boundary_restriction(g_1D,stencil_set,CartesianBoundary{1,Lower}()) @test e_l == BoundaryOperator(g_1D,Stencil{Float64}(e_closure),Lower()) @test e_l isa BoundaryOperator{T,Lower} where T - @test e_l isa TensorMapping{T,0,1} where T + @test e_l isa LazyTensor{T,0,1} where T e_r = boundary_restriction(g_1D,e_closure,CartesianBoundary{1,Upper}()) @test e_r == boundary_restriction(g_1D,stencil_set,CartesianBoundary{1,Upper}()) @test e_r == BoundaryOperator(g_1D,Stencil{Float64}(e_closure),Upper()) @test e_r isa BoundaryOperator{T,Upper} where T - @test e_r isa TensorMapping{T,0,1} where T + @test e_r isa LazyTensor{T,0,1} where T end @testset "2D" begin e_w = boundary_restriction(g_2D,e_closure,CartesianBoundary{1,Upper}()) @test e_w == boundary_restriction(g_2D,stencil_set,CartesianBoundary{1,Upper}()) - @test e_w isa InflatedTensorMapping - @test e_w isa TensorMapping{T,1,2} where T + @test e_w isa InflatedTensor + @test e_w isa LazyTensor{T,1,2} where T end end
--- a/test/SbpOperators/boundaryops/normal_derivative_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/boundaryops/normal_derivative_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -16,20 +16,20 @@ d_l = normal_derivative(g_1D, d_closure, CartesianBoundary{1,Lower}()) @test d_l == normal_derivative(g_1D, stencil_set, CartesianBoundary{1,Lower}()) @test d_l isa BoundaryOperator{T,Lower} where T - @test d_l isa TensorMapping{T,0,1} where T + @test d_l isa LazyTensor{T,0,1} where T end @testset "2D" begin d_w = normal_derivative(g_2D, d_closure, CartesianBoundary{1,Lower}()) d_n = normal_derivative(g_2D, d_closure, CartesianBoundary{2,Upper}()) - Ix = IdentityMapping{Float64}((size(g_2D)[1],)) - Iy = IdentityMapping{Float64}((size(g_2D)[2],)) + Ix = IdentityTensor{Float64}((size(g_2D)[1],)) + Iy = IdentityTensor{Float64}((size(g_2D)[2],)) d_l = normal_derivative(restrict(g_2D,1),d_closure,CartesianBoundary{1,Lower}()) d_r = normal_derivative(restrict(g_2D,2),d_closure,CartesianBoundary{1,Upper}()) @test d_w == normal_derivative(g_2D, stencil_set, CartesianBoundary{1,Lower}()) @test d_w == d_l⊗Iy @test d_n == Ix⊗d_r - @test d_w isa TensorMapping{T,1,2} where T - @test d_n isa TensorMapping{T,1,2} where T + @test d_w isa LazyTensor{T,1,2} where T + @test d_n isa LazyTensor{T,1,2} where T end end @testset "Accuracy" begin
--- a/test/SbpOperators/readoperator_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,188 +0,0 @@ -using Test - -using TOML -using Sbplib.SbpOperators - -import Sbplib.SbpOperators.Stencil -import Sbplib.SbpOperators.NestedStencil - -@testset "readoperator" begin - toml_str = """ - [meta] - authors = "Ken Mattson" - description = "Standard operators for equidistant grids" - type = "equidistant" - cite = "A paper a long time ago in a galaxy far far away." - - [[stencil_set]] - - order = 2 - test = 2 - - H.inner = ["1"] - H.closure = ["1/2"] - - D1.inner_stencil = ["-1/2", "0", "1/2"] - D1.closure_stencils = [ - {s = ["-1", "1"], c = 1}, - ] - - D2.inner_stencil = ["1", "-2", "1"] - D2.closure_stencils = [ - {s = ["1", "-2", "1"], c = 1}, - ] - - e.closure = ["1"] - d1.closure = {s = ["-3/2", "2", "-1/2"], c = 1} - - [[stencil_set]] - - order = 4 - test = 1 - H.inner = ["1"] - H.closure = ["17/48", "59/48", "43/48", "49/48"] - - D2.inner_stencil = ["-1/12","4/3","-5/2","4/3","-1/12"] - D2.closure_stencils = [ - {s = [ "2", "-5", "4", "-1", "0", "0"], c = 1}, - {s = [ "1", "-2", "1", "0", "0", "0"], c = 2}, - {s = [ "-4/43", "59/43", "-110/43", "59/43", "-4/43", "0"], c = 3}, - {s = [ "-1/49", "0", "59/49", "-118/49", "64/49", "-4/49"], c = 4}, - ] - - e.closure = ["1"] - d1.closure = {s = ["-11/6", "3", "-3/2", "1/3"], c = 1} - - [[stencil_set]] - order = 4 - test = 2 - - H.closure = ["-1/49", "0", "59/49", "-118/49", "64/49", "-4/49"] - """ - - parsed_toml = TOML.parse(toml_str) - - @testset "get_stencil_set" begin - @test get_stencil_set(parsed_toml; order = 2) isa Dict - @test get_stencil_set(parsed_toml; order = 2) == parsed_toml["stencil_set"][1] - @test get_stencil_set(parsed_toml; test = 1) == parsed_toml["stencil_set"][2] - @test get_stencil_set(parsed_toml; order = 4, test = 2) == parsed_toml["stencil_set"][3] - - @test_throws ArgumentError get_stencil_set(parsed_toml; test = 2) - @test_throws ArgumentError get_stencil_set(parsed_toml; order = 4) - end - - @testset "parse_stencil" begin - toml = """ - s1 = ["-1/12","4/3","-5/2","4/3","-1/12"] - s2 = {s = ["2", "-5", "4", "-1", "0", "0"], c = 1} - s3 = {s = ["1", "-2", "1", "0", "0", "0"], c = 2} - s4 = "not a stencil" - s5 = [-1, 4, 3] - s6 = {k = ["1", "-2", "1", "0", "0", "0"], c = 2} - s7 = {s = [-1, 4, 3], c = 2} - s8 = {s = ["1", "-2", "1", "0", "0", "0"], c = [2,2]} - """ - - @test parse_stencil(TOML.parse(toml)["s1"]) == CenteredStencil(-1//12, 4//3, -5//2, 4//3, -1//12) - @test parse_stencil(TOML.parse(toml)["s2"]) == Stencil(2//1, -5//1, 4//1, -1//1, 0//1, 0//1; center=1) - @test parse_stencil(TOML.parse(toml)["s3"]) == Stencil(1//1, -2//1, 1//1, 0//1, 0//1, 0//1; center=2) - - @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s4"]) - @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s5"]) - @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s6"]) - @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s7"]) - @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s8"]) - - stencil_set = get_stencil_set(parsed_toml; order = 4, test = 1) - - @test parse_stencil.(stencil_set["D2"]["closure_stencils"]) == [ - Stencil( 2//1, -5//1, 4//1, -1//1, 0//1, 0//1; center=1), - Stencil( 1//1, -2//1, 1//1, 0//1, 0//1, 0//1; center=2), - Stencil(-4//43, 59//43, -110//43, 59//43, -4//43, 0//1; center=3), - Stencil(-1//49, 0//1, 59//49, -118//49, 64//49, -4//49; center=4), - ] - - - @test parse_stencil(Float64, TOML.parse(toml)["s1"]) == CenteredStencil(-1/12, 4/3, 
-5/2, 4/3, -1/12) - @test parse_stencil(Float64, TOML.parse(toml)["s2"]) == Stencil(2/1, -5/1, 4/1, -1/1, 0/1, 0/1; center=1) - @test parse_stencil(Float64, TOML.parse(toml)["s3"]) == Stencil(1/1, -2/1, 1/1, 0/1, 0/1, 0/1; center=2) - end - - @testset "parse_scalar" begin - toml = TOML.parse(""" - a1 = 1 - a2 = 1.5 - a3 = 1.0 - a4 = 10 - a5 = "1/2" - a6 = "1.5" - - e1 = [1,2,3] - e2 = "a string value" - """) - - @test parse_scalar(toml["a1"]) == 1//1 - @test parse_scalar(toml["a2"]) == 3//2 - @test parse_scalar(toml["a3"]) == 1//1 - @test parse_scalar(toml["a4"]) == 10//1 - @test parse_scalar(toml["a5"]) == 1//2 - @test parse_scalar(toml["a6"]) == 3//2 - - @test_throws ArgumentError parse_scalar(toml["e1"]) - @test_throws ArgumentError parse_scalar(toml["e2"]) - end - - @testset "parse_tuple" begin - toml = TOML.parse(""" - t1 = [1,3,4] - t2 = ["1/2","3/4","2/1"] - - e1 = "not a tuple" - e2.a="1" - e3 = 1 - e4 = ["1/2","3/4","not a number"] - """) - - @test parse_tuple(toml["t1"]) == (1//1,3//1,4//1) - @test parse_tuple(toml["t2"]) == (1//2,3//4,2//1) - - @test_throws ArgumentError parse_tuple(toml["e1"]) - @test_throws ArgumentError parse_tuple(toml["e2"]) - @test_throws ArgumentError parse_tuple(toml["e3"]) - @test_throws ArgumentError parse_tuple(toml["e4"]) - end -end - -@testset "parse_rational" begin - @test SbpOperators.parse_rational("1") isa Rational - @test SbpOperators.parse_rational("1") == 1//1 - @test SbpOperators.parse_rational("1/2") isa Rational - @test SbpOperators.parse_rational("1/2") == 1//2 - @test SbpOperators.parse_rational("37/13") isa Rational - @test SbpOperators.parse_rational("37/13") == 37//13 - - @test SbpOperators.parse_rational(0.5) isa Rational - @test SbpOperators.parse_rational(0.5) == 1//2 - - @test SbpOperators.parse_rational("0.5") isa Rational - @test SbpOperators.parse_rational("0.5") == 1//2 - - @test SbpOperators.parse_rational(2) isa Rational - @test SbpOperators.parse_rational(2) == 2//1 -end - -@testset "parse_nested_stencil" begin - toml = TOML.parse(""" - s1 = [["1/2", "1/2", "0"],[ "-1/2", "-1", "-1/2"],["0", "1/2", "1/2"]] - s2 = {s = [[ "2", "-1", "0"],[ "-3", "1", "0"],["1", "0", "0"]], c = 1} - s3 = {s = [[ "2", "-1", "0"],[ "-3", "1", "0"],["1", "0", "0"]], c = 2} - """) - - @test parse_nested_stencil(toml["s1"]) == CenteredNestedStencil((1//2, 1//2, 0//1),( -1//2, -1//1, -1//2),(0//1, 1//2, 1//2)) - @test parse_nested_stencil(toml["s2"]) == NestedStencil((2//1, -1//1, 0//1),( -3//1, 1//1, 0//1),(1//1, 0//1, 0//1), center = 1) - @test parse_nested_stencil(toml["s3"]) == NestedStencil((2//1, -1//1, 0//1),( -3//1, 1//1, 0//1),(1//1, 0//1, 0//1), center = 2) - - @test parse_nested_stencil(Float64, toml["s1"]) == CenteredNestedStencil((1/2, 1/2, 0.),( -1/2, -1., -1/2),(0., 1/2, 1/2)) - @test parse_nested_stencil(Int, toml["s2"]) == NestedStencil((2, -1, 0),( -3, 1, 0),(1, 0, 0), center = 1) -end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/SbpOperators/stencil_set_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -0,0 +1,188 @@ +using Test + +using TOML +using Sbplib.SbpOperators + +import Sbplib.SbpOperators.Stencil +import Sbplib.SbpOperators.NestedStencil + +@testset "readoperator" begin + toml_str = """ + [meta] + authors = "Ken Mattson" + description = "Standard operators for equidistant grids" + type = "equidistant" + cite = "A paper a long time ago in a galaxy far far away." + + [[stencil_set]] + + order = 2 + test = 2 + + H.inner = ["1"] + H.closure = ["1/2"] + + D1.inner_stencil = ["-1/2", "0", "1/2"] + D1.closure_stencils = [ + {s = ["-1", "1"], c = 1}, + ] + + D2.inner_stencil = ["1", "-2", "1"] + D2.closure_stencils = [ + {s = ["1", "-2", "1"], c = 1}, + ] + + e.closure = ["1"] + d1.closure = {s = ["-3/2", "2", "-1/2"], c = 1} + + [[stencil_set]] + + order = 4 + test = 1 + H.inner = ["1"] + H.closure = ["17/48", "59/48", "43/48", "49/48"] + + D2.inner_stencil = ["-1/12","4/3","-5/2","4/3","-1/12"] + D2.closure_stencils = [ + {s = [ "2", "-5", "4", "-1", "0", "0"], c = 1}, + {s = [ "1", "-2", "1", "0", "0", "0"], c = 2}, + {s = [ "-4/43", "59/43", "-110/43", "59/43", "-4/43", "0"], c = 3}, + {s = [ "-1/49", "0", "59/49", "-118/49", "64/49", "-4/49"], c = 4}, + ] + + e.closure = ["1"] + d1.closure = {s = ["-11/6", "3", "-3/2", "1/3"], c = 1} + + [[stencil_set]] + order = 4 + test = 2 + + H.closure = ["-1/49", "0", "59/49", "-118/49", "64/49", "-4/49"] + """ + + parsed_toml = TOML.parse(toml_str) + + @testset "get_stencil_set" begin + @test get_stencil_set(parsed_toml; order = 2) isa Dict + @test get_stencil_set(parsed_toml; order = 2) == parsed_toml["stencil_set"][1] + @test get_stencil_set(parsed_toml; test = 1) == parsed_toml["stencil_set"][2] + @test get_stencil_set(parsed_toml; order = 4, test = 2) == parsed_toml["stencil_set"][3] + + @test_throws ArgumentError get_stencil_set(parsed_toml; test = 2) + @test_throws ArgumentError get_stencil_set(parsed_toml; order = 4) + end + + @testset "parse_stencil" begin + toml = """ + s1 = ["-1/12","4/3","-5/2","4/3","-1/12"] + s2 = {s = ["2", "-5", "4", "-1", "0", "0"], c = 1} + s3 = {s = ["1", "-2", "1", "0", "0", "0"], c = 2} + s4 = "not a stencil" + s5 = [-1, 4, 3] + s6 = {k = ["1", "-2", "1", "0", "0", "0"], c = 2} + s7 = {s = [-1, 4, 3], c = 2} + s8 = {s = ["1", "-2", "1", "0", "0", "0"], c = [2,2]} + """ + + @test parse_stencil(TOML.parse(toml)["s1"]) == CenteredStencil(-1//12, 4//3, -5//2, 4//3, -1//12) + @test parse_stencil(TOML.parse(toml)["s2"]) == Stencil(2//1, -5//1, 4//1, -1//1, 0//1, 0//1; center=1) + @test parse_stencil(TOML.parse(toml)["s3"]) == Stencil(1//1, -2//1, 1//1, 0//1, 0//1, 0//1; center=2) + + @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s4"]) + @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s5"]) + @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s6"]) + @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s7"]) + @test_throws ArgumentError parse_stencil(TOML.parse(toml)["s8"]) + + stencil_set = get_stencil_set(parsed_toml; order = 4, test = 1) + + @test parse_stencil.(stencil_set["D2"]["closure_stencils"]) == [ + Stencil( 2//1, -5//1, 4//1, -1//1, 0//1, 0//1; center=1), + Stencil( 1//1, -2//1, 1//1, 0//1, 0//1, 0//1; center=2), + Stencil(-4//43, 59//43, -110//43, 59//43, -4//43, 0//1; center=3), + Stencil(-1//49, 0//1, 59//49, -118//49, 64//49, -4//49; center=4), + ] + + + @test parse_stencil(Float64, TOML.parse(toml)["s1"]) == CenteredStencil(-1/12, 4/3, -5/2, 4/3, -1/12) + @test parse_stencil(Float64, TOML.parse(toml)["s2"]) == Stencil(2/1, -5/1, 4/1, -1/1, 0/1, 0/1; center=1) + @test parse_stencil(Float64, TOML.parse(toml)["s3"]) == Stencil(1/1, -2/1, 1/1, 0/1, 0/1, 0/1; center=2) + end + + @testset "parse_scalar" begin + toml = TOML.parse(""" + a1 = 1 + a2 = 1.5 + a3 = 1.0 + a4 = 10 + a5 = "1/2" + a6 = "1.5" + + e1 = [1,2,3] + e2 = "a string value" + """) + + @test parse_scalar(toml["a1"]) == 1//1 + @test parse_scalar(toml["a2"]) == 3//2 + @test parse_scalar(toml["a3"]) == 1//1 + @test parse_scalar(toml["a4"]) == 10//1 + @test parse_scalar(toml["a5"]) == 1//2 + @test parse_scalar(toml["a6"]) == 3//2 + + @test_throws ArgumentError parse_scalar(toml["e1"]) + @test_throws ArgumentError parse_scalar(toml["e2"]) + end + + @testset "parse_tuple" begin + toml = TOML.parse(""" + t1 = [1,3,4] + t2 = ["1/2","3/4","2/1"] + + e1 = "not a tuple" + e2.a="1" + e3 = 1 + e4 = ["1/2","3/4","not a number"] + """) + + @test parse_tuple(toml["t1"]) == (1//1,3//1,4//1) + @test parse_tuple(toml["t2"]) == (1//2,3//4,2//1) + + @test_throws ArgumentError parse_tuple(toml["e1"]) + @test_throws ArgumentError parse_tuple(toml["e2"]) + @test_throws ArgumentError parse_tuple(toml["e3"]) + @test_throws ArgumentError parse_tuple(toml["e4"]) + end +end + +@testset "parse_rational" begin + @test SbpOperators.parse_rational("1") isa Rational + @test SbpOperators.parse_rational("1") == 1//1 + @test SbpOperators.parse_rational("1/2") isa Rational + @test SbpOperators.parse_rational("1/2") == 1//2 + @test SbpOperators.parse_rational("37/13") isa Rational + @test SbpOperators.parse_rational("37/13") == 37//13 + + @test SbpOperators.parse_rational(0.5) isa Rational + @test SbpOperators.parse_rational(0.5) == 1//2 + + @test SbpOperators.parse_rational("0.5") isa Rational + @test SbpOperators.parse_rational("0.5") == 1//2 + + @test SbpOperators.parse_rational(2) isa Rational + @test SbpOperators.parse_rational(2) == 2//1 +end + +@testset "parse_nested_stencil" begin + toml = TOML.parse(""" + s1 = [["1/2", "1/2", "0"],[ "-1/2", "-1", "-1/2"],["0", "1/2", "1/2"]] + s2 = {s = [[ "2", "-1", "0"],[ "-3", "1", "0"],["1", "0", "0"]], c = 1} + s3 = {s = [[ "2", "-1", "0"],[ "-3", "1", "0"],["1", "0", "0"]], c = 2} + """) + + @test parse_nested_stencil(toml["s1"]) == CenteredNestedStencil((1//2, 1//2, 0//1),( -1//2, -1//1, -1//2),(0//1, 1//2, 1//2)) + @test parse_nested_stencil(toml["s2"]) == NestedStencil((2//1, -1//1, 0//1),( -3//1, 1//1, 0//1),(1//1, 0//1, 0//1), center = 1) + @test parse_nested_stencil(toml["s3"]) == NestedStencil((2//1, -1//1, 0//1),( -3//1, 1//1, 0//1),(1//1, 0//1, 0//1), center = 2) + + @test parse_nested_stencil(Float64, toml["s1"]) == CenteredNestedStencil((1/2, 1/2, 0.),( -1/2, -1., -1/2),(0., 1/2, 1/2)) + @test parse_nested_stencil(Int, toml["s2"]) == NestedStencil((2, -1, 0),( -3, 1, 0),(1, 0, 0), center = 1) +end
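The renamed test file above exercises the TOML-driven operator format end to end. As a minimal sketch of that workflow, assuming the same `Sbplib.SbpOperators` exports the tests import (the TOML table here is a made-up fixture, not the one shipped with the package):

```julia
using TOML
using Sbplib.SbpOperators

# Hypothetical one-entry stencil-set table, mirroring the test fixture.
toml_str = """
[[stencil_set]]
order = 2
D2.inner_stencil = ["1", "-2", "1"]
D2.closure_stencils = [{s = ["1", "-2", "1"], c = 1}]
"""

parsed = TOML.parse(toml_str)

# Select a stencil set by its metadata keys; ambiguous or empty
# matches throw an ArgumentError, as the tests assert.
set = get_stencil_set(parsed; order = 2)

# Stencils parse to Rational coefficients by default; passing an
# eltype such as Float64 converts them.
inner    = parse_stencil(set["D2"]["inner_stencil"])
closures = parse_stencil.(set["D2"]["closure_stencils"])
inner_f  = parse_stencil(Float64, set["D2"]["inner_stencil"])
```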
--- a/test/SbpOperators/volumeops/derivatives/first_derivative_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/derivatives/first_derivative_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -27,15 +27,17 @@ g₁ = EquidistantGrid(11, 0., 1.) g₂ = EquidistantGrid((11,14), (0.,1.), (1.,3.)) - @test first_derivative(g₁, stencil_set, 1) isa TensorMapping{Float64,1,1} - @test first_derivative(g₂, stencil_set, 2) isa TensorMapping{Float64,2,2} + @test first_derivative(g₁, stencil_set, 1) isa LazyTensor{Float64,1,1} + @test first_derivative(g₂, stencil_set, 2) isa LazyTensor{Float64,2,2} + @test first_derivative(g₁, stencil_set, 1) == first_derivative(g₁, stencil_set) interior_stencil = CenteredStencil(-1,0,1) closure_stencils = [Stencil(-1,1, center=1)] - @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa TensorMapping{Float64,1,1} + @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa LazyTensor{Float64,1,1} @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa VolumeOperator - @test first_derivative(g₂, interior_stencil, closure_stencils, 2) isa TensorMapping{Float64,2,2} + @test first_derivative(g₁, interior_stencil, closure_stencils, 1) == first_derivative(g₁, interior_stencil, closure_stencils) + @test first_derivative(g₂, interior_stencil, closure_stencils, 2) isa LazyTensor{Float64,2,2} end @testset "Accuracy conditions" begin
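The new equality tests above pin down a convenience added in this changeset: on a one-dimensional grid the direction argument may be omitted. A short sketch, assuming `g₁`, `stencil_set`, `interior_stencil` and `closure_stencils` are set up as in the test file:

```julia
# The short form defaults the direction, so on a 1D grid these
# construct the same LazyTensor{Float64,1,1}:
D₁ = first_derivative(g₁, stencil_set)
D₁ == first_derivative(g₁, stencil_set, 1)   # true, per the new test

# The same default applies to the raw-stencil constructor:
first_derivative(g₁, interior_stencil, closure_stencils) ==
    first_derivative(g₁, interior_stencil, closure_stencils, 1)
```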
--- a/test/SbpOperators/volumeops/derivatives/second_derivative_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/derivatives/second_derivative_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -21,15 +21,16 @@ Dₓₓ = second_derivative(g_1D,inner_stencil,closure_stencils,1) @test Dₓₓ == second_derivative(g_1D,inner_stencil,closure_stencils) @test Dₓₓ == second_derivative(g_1D,stencil_set,1) + @test Dₓₓ == second_derivative(g_1D,stencil_set) @test Dₓₓ isa VolumeOperator end @testset "2D" begin Dₓₓ = second_derivative(g_2D,inner_stencil,closure_stencils,1) - D2 = second_derivative(g_1D,inner_stencil,closure_stencils) - I = IdentityMapping{Float64}(size(g_2D)[2]) + D2 = second_derivative(g_1D,inner_stencil,closure_stencils,1) + I = IdentityTensor{Float64}(size(g_2D)[2]) @test Dₓₓ == D2⊗I @test Dₓₓ == second_derivative(g_2D,stencil_set,1) - @test Dₓₓ isa TensorMapping{T,2,2} where T + @test Dₓₓ isa LazyTensor{T,2,2} where T end end @@ -51,9 +52,7 @@ # implies that L*v should be exact for monomials up to order 2. @testset "2nd order" begin stencil_set = read_stencil_set(operator_path; order=2) - inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"]) - closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"]) - Dₓₓ = second_derivative(g_1D,inner_stencil,closure_stencils) + Dₓₓ = second_derivative(g_1D,stencil_set) @test Dₓₓ*monomials[1] ≈ zeros(Float64,size(g_1D)...) atol = 5e-10 @test Dₓₓ*monomials[2] ≈ zeros(Float64,size(g_1D)...) atol = 5e-10 @test Dₓₓ*monomials[3] ≈ monomials[1] atol = 5e-10 @@ -64,9 +63,7 @@ # implies that L*v should be exact for monomials up to order 3. @testset "4th order" begin stencil_set = read_stencil_set(operator_path; order=4) - inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"]) - closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"]) - Dₓₓ = second_derivative(g_1D,inner_stencil,closure_stencils) + Dₓₓ = second_derivative(g_1D,stencil_set) # NOTE: high tolerances for checking the "exact" differentiation # due to accumulation of round-off errors/cancellation errors? @test Dₓₓ*monomials[1] ≈ zeros(Float64,size(g_1D)...) atol = 5e-10 @@ -92,9 +89,7 @@ # implies that L*v should be exact for binomials up to order 2. @testset "2nd order" begin stencil_set = read_stencil_set(operator_path; order=2) - inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"]) - closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"]) - Dyy = second_derivative(g_2D,inner_stencil,closure_stencils,2) + Dyy = second_derivative(g_2D,stencil_set,2) @test Dyy*binomials[1] ≈ zeros(Float64,size(g_2D)...) atol = 5e-9 @test Dyy*binomials[2] ≈ zeros(Float64,size(g_2D)...) atol = 5e-9 @test Dyy*binomials[3] ≈ evalOn(g_2D,(x,y)->1.) atol = 5e-9 @@ -105,9 +100,7 @@ # implies that L*v should be exact for binomials up to order 3. @testset "4th order" begin stencil_set = read_stencil_set(operator_path; order=4) - inner_stencil = parse_stencil(stencil_set["D2"]["inner_stencil"]) - closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"]) - Dyy = second_derivative(g_2D,inner_stencil,closure_stencils,2) + Dyy = second_derivative(g_2D,stencil_set,2) # NOTE: high tolerances for checking the "exact" differentiation # due to accumulation of round-off errors/cancellation errors? @test Dyy*binomials[1] ≈ zeros(Float64,size(g_2D)...) atol = 5e-9
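The repeated hunks above all make the same simplification: instead of parsing the `D2` stencils by hand, the tests now pass the whole stencil set to `second_derivative`. Roughly, with `operator_path` standing in for the TOML file the test file reads and `g_1D` the 1D grid it defines:

```julia
stencil_set = read_stencil_set(operator_path; order = 2)

# Before: extract and parse the D2 stencils manually.
inner_stencil    = parse_stencil(stencil_set["D2"]["inner_stencil"])
closure_stencils = parse_stencil.(stencil_set["D2"]["closure_stencils"])
Dₓₓ = second_derivative(g_1D, inner_stencil, closure_stencils, 1)

# After: the constructor looks up D2 in the stencil set itself,
# and the direction again defaults on a 1D grid.
Dₓₓ == second_derivative(g_1D, stencil_set)   # true
```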
--- a/test/SbpOperators/volumeops/derivatives/second_derivative_variable_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/derivatives/second_derivative_variable_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -18,7 +18,7 @@ g = EquidistantGrid(11, 0., 1.) c = [ 1., 3., 6., 10., 15., 21., 28., 36., 45., 55., 66.] @testset "Constructors" begin - @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils) isa TensorMapping + @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils) isa LazyTensor D₂ᶜ = SecondDerivativeVariable(g, c, interior_stencil, closure_stencils) @test range_dim(D₂ᶜ) == 1 @@ -82,8 +82,8 @@ g = EquidistantGrid((11,9), (0.,0.), (10.,8.)) # h = 1 c = evalOn(g, (x,y)->x+y) @testset "Constructors" begin - @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils,1) isa TensorMapping - @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils,2) isa TensorMapping + @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils,1) isa LazyTensor + @test SecondDerivativeVariable(g, c, interior_stencil, closure_stencils,2) isa LazyTensor D₂ᶜ = SecondDerivativeVariable(g, c, interior_stencil, closure_stencils,1) @test range_dim(D₂ᶜ) == 2
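For reference, a sketch of how the variable-coefficient operator under test is built; `interior_stencil` and `closure_stencils` are the parsed nested stencils defined earlier in the test file, outside this hunk:

```julia
g = EquidistantGrid((11,9), (0.,0.), (10.,8.))   # h = 1 in both directions
c = evalOn(g, (x,y) -> x + y)                    # coefficient sampled on the grid

# One operator per direction; both are LazyTensors with 2D range and domain.
D₂ᶜx = SecondDerivativeVariable(g, c, interior_stencil, closure_stencils, 1)
D₂ᶜy = SecondDerivativeVariable(g, c, interior_stencil, closure_stencils, 2)
range_dim(D₂ᶜx)   # 2
```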
--- a/test/SbpOperators/volumeops/inner_products/inner_product_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/inner_products/inner_product_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -21,14 +21,14 @@ @testset "0D" begin H = inner_product(EquidistantGrid{Float64}(), quadrature_interior, quadrature_closure) @test H == inner_product(EquidistantGrid{Float64}(), stencil_set) - @test H == IdentityMapping{Float64}() - @test H isa TensorMapping{T,0,0} where T + @test H == IdentityTensor{Float64}() + @test H isa LazyTensor{T,0,0} where T end @testset "1D" begin H = inner_product(g_1D, quadrature_interior, quadrature_closure) @test H == inner_product(g_1D, stencil_set) @test H isa ConstantInteriorScalingOperator - @test H isa TensorMapping{T,1,1} where T + @test H isa LazyTensor{T,1,1} where T end @testset "2D" begin H = inner_product(g_2D, quadrature_interior, quadrature_closure) @@ -36,7 +36,7 @@ H_y = inner_product(restrict(g_2D,2), quadrature_interior, quadrature_closure) @test H == inner_product(g_2D, stencil_set) @test H == H_x⊗H_y - @test H isa TensorMapping{T,2,2} where T + @test H isa LazyTensor{T,2,2} where T end end
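The 2D assertions above encode the tensor-product structure of the quadrature. As a sketch, assuming `restrict(g_2D, 1)` gives the x-restriction, symmetric to the `restrict(g_2D, 2)` used in the hunk:

```julia
H   = inner_product(g_2D, stencil_set)
H_x = inner_product(restrict(g_2D, 1), stencil_set)
H_y = inner_product(restrict(g_2D, 2), stencil_set)

# The 2D inner product factors into 1D quadratures, and
# degenerates to IdentityTensor{Float64}() on a 0D grid.
H == H_x ⊗ H_y   # true
```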
--- a/test/SbpOperators/volumeops/inner_products/inverse_inner_product_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/inner_products/inverse_inner_product_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -18,14 +18,14 @@ @testset "0D" begin Hi = inverse_inner_product(EquidistantGrid{Float64}(), quadrature_interior, quadrature_closure) @test Hi == inverse_inner_product(EquidistantGrid{Float64}(), stencil_set) - @test Hi == IdentityMapping{Float64}() - @test Hi isa TensorMapping{T,0,0} where T + @test Hi == IdentityTensor{Float64}() + @test Hi isa LazyTensor{T,0,0} where T end @testset "1D" begin Hi = inverse_inner_product(g_1D, quadrature_interior, quadrature_closure) @test Hi == inverse_inner_product(g_1D, stencil_set) @test Hi isa ConstantInteriorScalingOperator - @test Hi isa TensorMapping{T,1,1} where T + @test Hi isa LazyTensor{T,1,1} where T end @testset "2D" begin Hi = inverse_inner_product(g_2D, quadrature_interior, quadrature_closure) @@ -33,7 +33,7 @@ Hi_y = inverse_inner_product(restrict(g_2D,2), quadrature_interior, quadrature_closure) @test Hi == inverse_inner_product(g_2D, stencil_set) @test Hi == Hi_x⊗Hi_y - @test Hi isa TensorMapping{T,2,2} where T + @test Hi isa LazyTensor{T,2,2} where T end end
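Not asserted in this hunk, but the property that motivates pairing the two constructors is that `inverse_inner_product` undoes `inner_product`. A hedged sketch, reusing the grid and stencil set from the tests:

```julia
H  = inner_product(g_2D, stencil_set)
Hi = inverse_inner_product(g_2D, stencil_set)

v = evalOn(g_2D, (x,y) -> sin(x) + y^2)   # arbitrary grid function

# Hi should invert the diagonal-interior scaling applied by H.
Hi*(H*v) ≈ v
```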
--- a/test/SbpOperators/volumeops/laplace/laplace_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/laplace/laplace_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -17,12 +17,12 @@ @testset "1D" begin Δ = laplace(g_1D, inner_stencil, closure_stencils) @test Laplace(g_1D, stencil_set) == Laplace(Δ, stencil_set) - @test Laplace(g_1D, stencil_set) isa TensorMapping{T,1,1} where T + @test Laplace(g_1D, stencil_set) isa LazyTensor{T,1,1} where T end @testset "3D" begin Δ = laplace(g_3D, inner_stencil, closure_stencils) @test Laplace(g_3D, stencil_set) == Laplace(Δ,stencil_set) - @test Laplace(g_3D, stencil_set) isa TensorMapping{T,3,3} where T + @test Laplace(g_3D, stencil_set) isa LazyTensor{T,3,3} where T end end @@ -69,17 +69,17 @@ @testset "laplace" begin @testset "1D" begin Δ = laplace(g_1D, inner_stencil, closure_stencils) - @test Δ == second_derivative(g_1D, inner_stencil, closure_stencils) - @test Δ isa TensorMapping{T,1,1} where T + @test Δ == second_derivative(g_1D, inner_stencil, closure_stencils, 1) + @test Δ isa LazyTensor{T,1,1} where T end @testset "3D" begin Δ = laplace(g_3D, inner_stencil, closure_stencils) - @test Δ isa TensorMapping{T,3,3} where T + @test Δ isa LazyTensor{T,3,3} where T Dxx = second_derivative(g_3D, inner_stencil, closure_stencils, 1) Dyy = second_derivative(g_3D, inner_stencil, closure_stencils, 2) Dzz = second_derivative(g_3D, inner_stencil, closure_stencils, 3) @test Δ == Dxx + Dyy + Dzz - @test Δ isa TensorMapping{T,3,3} where T + @test Δ isa LazyTensor{T,3,3} where T end end
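The 3D testset above fixes the definition used throughout: the discrete Laplacian is the lazy sum of the per-direction second derivatives, and in 1D it reduces to `second_derivative(g, ..., 1)`. Restated as a sketch with the names from the test file:

```julia
Δ   = laplace(g_3D, inner_stencil, closure_stencils)
Dxx = second_derivative(g_3D, inner_stencil, closure_stencils, 1)
Dyy = second_derivative(g_3D, inner_stencil, closure_stencils, 2)
Dzz = second_derivative(g_3D, inner_stencil, closure_stencils, 3)

Δ == Dxx + Dyy + Dzz   # true: a lazy sum, no matrices are assembled
```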
--- a/test/SbpOperators/volumeops/volume_operator_test.jl Mon Mar 21 13:21:48 2022 +0100 +++ b/test/SbpOperators/volumeops/volume_operator_test.jl Wed Mar 23 12:54:45 2022 +0100 @@ -22,31 +22,31 @@ op = VolumeOperator(inner_stencil,closure_stencils,(11,),even) @test op == VolumeOperator(g_1D,inner_stencil,closure_stencils,even) @test op == volume_operator(g_1D,inner_stencil,closure_stencils,even,1) - @test op isa TensorMapping{T,1,1} where T + @test op isa LazyTensor{T,1,1} where T end @testset "2D" begin op_x = volume_operator(g_2D,inner_stencil,closure_stencils,even,1) op_y = volume_operator(g_2D,inner_stencil,closure_stencils,even,2) - Ix = IdentityMapping{Float64}((11,)) - Iy = IdentityMapping{Float64}((12,)) + Ix = IdentityTensor{Float64}((11,)) + Iy = IdentityTensor{Float64}((12,)) @test op_x == VolumeOperator(inner_stencil,closure_stencils,(11,),even)⊗Iy @test op_y == Ix⊗VolumeOperator(inner_stencil,closure_stencils,(12,),even) - @test op_x isa TensorMapping{T,2,2} where T - @test op_y isa TensorMapping{T,2,2} where T + @test op_x isa LazyTensor{T,2,2} where T + @test op_y isa LazyTensor{T,2,2} where T end @testset "3D" begin op_x = volume_operator(g_3D,inner_stencil,closure_stencils,even,1) op_y = volume_operator(g_3D,inner_stencil,closure_stencils,even,2) op_z = volume_operator(g_3D,inner_stencil,closure_stencils,even,3) - Ix = IdentityMapping{Float64}((11,)) - Iy = IdentityMapping{Float64}((12,)) - Iz = IdentityMapping{Float64}((10,)) + Ix = IdentityTensor{Float64}((11,)) + Iy = IdentityTensor{Float64}((12,)) + Iz = IdentityTensor{Float64}((10,)) @test op_x == VolumeOperator(inner_stencil,closure_stencils,(11,),even)⊗Iy⊗Iz @test op_y == Ix⊗VolumeOperator(inner_stencil,closure_stencils,(12,),even)⊗Iz @test op_z == Ix⊗Iy⊗VolumeOperator(inner_stencil,closure_stencils,(10,),even) - @test op_x isa TensorMapping{T,3,3} where T - @test op_y isa TensorMapping{T,3,3} where T - @test op_z isa TensorMapping{T,3,3} where T + @test op_x isa LazyTensor{T,3,3} where T + @test op_y isa LazyTensor{T,3,3} where T + @test op_z isa LazyTensor{T,3,3} where T end end
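The pattern the 2D and 3D testsets verify, stated once: `volume_operator` lifts a 1D `VolumeOperator` to a higher-dimensional grid by tensoring with `IdentityTensor`s on the remaining directions. A sketch, with `even` the parity flag and the stencils as defined earlier in the test file:

```julia
# g_2D has size (11,12); lifting along direction 2:
Ix   = IdentityTensor{Float64}((11,))
op_y = volume_operator(g_2D, inner_stencil, closure_stencils, even, 2)

op_y == Ix ⊗ VolumeOperator(inner_stencil, closure_stencils, (12,), even)   # true
```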