changeset 1043:c16116e403e2

Merge refactor/lazy_tensors
author Jonatan Werpers <jonatan@werpers.com>
date Tue, 22 Mar 2022 14:33:13 +0100
parents 696a3307b6a4 (current diff) b16706b4e996 (diff)
children f857057e61e6 d12ab8120d29 3bb94ce74697 396278072f18
diffstat 35 files changed, 775 insertions(+), 749 deletions(-)
--- a/Notes.md	Tue Mar 22 14:14:31 2022 +0100
+++ b/Notes.md	Tue Mar 22 14:33:13 2022 +0100
@@ -132,28 +132,25 @@
  * When doing specialised computations for different parts of the range/domain?
  * More?
 
- Maybe, if we have dynamic sizing, it could be only for the range. `domain_size` would not be implemented, and `range_size` would be a function of the vector that the TensorMapping is applied to.
+ Maybe, if we have dynamic sizing, it could be only for the range. `domain_size` would not be implemented, and `range_size` would be a function of the vector that the LazyTensor is applied to.
 
 ## Research and thinking
- - [ ] Use a trait to indicate that a TensorMapping has the same range and domain?
- - [ ] Rename all the Tensor stuff to just LazyOperator, LazyApplication and so on?
+ - [ ] Use a trait to indicate that a LazyTensor has the same range and domain?
  - [ ] Check how the native julia doc generator works
    - [ ] Check if Vidar's design docs fit in there
  - [ ] Create a macro @lazy which replaces a binary op (+,-) by its lazy equivalent? Would be a neat way to indicate which evaluations are lazy without cluttering/confusing with special characters.
- - [ ] Specify operators in TOML or something similar?
-   (H, H_gamma, etc.)
  - [ ] Dispatch on Lower() instead of the type Lower so `::Lower` instead of `::Type{Lower}` ???
  	Seems better unless there is some specific reason to use the type instead of the value.
  - [ ] How do we handle mixes of periodic and non-periodic grids? Seems it should be supported on the grid level and on the 1d operator level. Between there it should be transparent.
- - [ ] Can we have a trait to tell if a TensorMapping is transposable?
+ - [ ] Can we have a trait to tell if a LazyTensor is transposable?
 - [ ] Is it ok to have "Constructors" for abstract types which create subtypes? For example, a Grids() function that gives different kinds of grids based on input?
 - [ ] Figure out how to treat the borrowing parameters of operators. Include them in the struct? Expose via a function dispatched on the operator type and grid?
 
 ## Regions and tensormappings
-- [ ] Use a trait to indicate if a TensorMapping uses indices with regions.
+- [ ] Use a trait to indicate if a LazyTensor uses indices with regions.
     The default should be that they do NOT.
         - [ ] What to name this trait? Can we call it IndexStyle but not export it to avoid conflicts with Base.IndexStyle?
- - [ ] Figure out repeated application of regioned TensorMappings. Maybe an instance of a tensor mapping needs to know the exact size of the range and domain for this to work?
+ - [ ] Figure out repeated application of regioned LazyTensors. Maybe an instance of a tensor mapping needs to know the exact size of the range and domain for this to work?
 
 ## Boundschecking and dimension checking
 Does it make sense to have boundschecking only in getindex methods?
@@ -261,16 +258,16 @@
 We can have tensor operators that act on a scalar field and yield a vector field or tensor field.
 We can also have tensor operators that act on a vector field or tensor field and yield a scalar field.
 
-TBD: Right now `apply_transpose` assumes that the domain type is the same as the range type. We need to break that somehow. One alternative is to let a TensorMapping have `T_domain` and `T_range` instead of just `T`. That feels a bit messy, though. Another alternative would be some kind of trait for transpose? It could contain the type that the transpose acts on? Not sure that works, though.
+TBD: Right now `apply_transpose` assumes that the domain type is the same as the range type. We need to break that somehow. One alternative is to let a LazyTensor have `T_domain` and `T_range` instead of just `T`. That feels a bit messy, though. Another alternative would be some kind of trait for transpose? It could contain the type that the transpose acts on? Not sure that works, though.
 
-TBD: What is the goal of the `T` parameter for a TensorMapping? If we want to be able to apply a difference operator to anything, we cannot assume that a `TensorMapping{T}` only acts on instances of `T`.
+TBD: What is the goal of the `T` parameter for a LazyTensor? If we want to be able to apply a difference operator to anything, we cannot assume that a `LazyTensor{T}` only acts on instances of `T`.
 
 One could implement `∇` as a tensor mapping that acts on T and returns `StaticVector{N,T} where N`.
 (One could possibly also let it act on `StaticMatrix{N,T,D} where N` and return `StaticMatrix{M,T,D+1}`. The question is whether anything is gained by that...)
 
-We could have a function `range_type(::TensorMapping, ::Type{domain_type})`
+We could have a function `range_type(::LazyTensor, ::Type{domain_type})`
 
-Maybe one could implement `⋅(tm::TensorMapping{R,D}, v::AbstractArray{T,D})` where T is an AbstractArray and tm somehow has components, as many as T has elements.
+Maybe one could implement `⋅(tm::LazyTensor{R,D}, v::AbstractArray{T,D})` where T is an AbstractArray and tm somehow has components, as many as T has elements.
 
 ### Rejected alternatives:
 
@@ -302,7 +299,7 @@
 An important operation for vector fields is being able to extract components as grid functions. This probably needs to be possible lazily.
 There are a couple of different solutions:
 * Implement our own kind of view that takes care of this. Or Accessors.jl?
-* Use a TensorMapping
+* Use a LazyTensor
 * Some kind of lazy broadcast
 * A lazy array that applies a function to each element.
 
@@ -349,4 +346,4 @@
 
 
 ## Name of the `VolumeOperator` type for constant stencils
-It seems that the name is too general. The name of the method `volume_operator` makes sense. It should return different types of `TensorMapping` specialized for the grid. A suggestion for a better name is `ConstantStencilVolumeOperator`.
+It seems that the name is too general. The name of the method `volume_operator` makes sense. It should return different types of `LazyTensor` specialized for the grid. A suggestion for a better name is `ConstantStencilVolumeOperator`.
--- a/TODO.md	Tue Mar 22 14:14:31 2022 +0100
+++ b/TODO.md	Tue Mar 22 14:33:13 2022 +0100
@@ -10,10 +10,7 @@
  - [ ] Replace getindex hack for flattening tuples with flatten_tuple. (eg. `getindex.(range_size.(L.D2),1)`)
  - [ ] Use `@inferred` in a lot of tests.
  - [ ] Make sure we are setting tolerances in tests in a consistent way
- - [ ] Add check for correct domain sizes to lazy tensor operations using SizeMismatch
  - [ ] Write down some coding guideline or checklist for code conventions. For example i,j,... for indices and I for multi-index
- - [ ] Add boundschecking in TensorMappingApplication
- - [ ] Start renaming things in LazyTensors
  - [ ] Clean up RegionIndices
     1. [ ] Write tests for how things should work
     2. [ ] Update RegionIndices accordingly
--- a/src/LazyTensors/LazyTensors.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/LazyTensors/LazyTensors.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,17 +1,36 @@
 module LazyTensors
 
-export LazyTensorMappingApplication
-export LazyTensorMappingTranspose
-export TensorMappingComposition
-export LazyLinearMap
-export IdentityMapping
-export InflatedTensorMapping
+export TensorApplication
+export TensorTranspose
+export TensorComposition
+export DenseTensor
+export IdentityTensor
+export ScalingTensor
+export InflatedTensor
 export LazyOuterProduct
 export ⊗
-export SizeMismatch
+export DomainSizeMismatch
+export RangeSizeMismatch
 
-include("tensor_mapping.jl")
+include("lazy_tensor.jl")
+include("tensor_types.jl")
 include("lazy_array.jl")
 include("lazy_tensor_operations.jl")
+include("tuple_manipulation.jl")
+
+# Applying lazy tensors to vectors
+Base.:*(a::LazyTensor, v::AbstractArray) = TensorApplication(a,v)
+Base.:*(a::LazyTensor, b::LazyTensor) = throw(MethodError(Base.:*,(a,b)))
+Base.:*(a::LazyTensor, args::Union{LazyTensor, AbstractArray}...) = foldr(*,(a,args...))
+
+# Addition and subtraction of lazy tensors
+Base.:+(s::LazyTensor, t::LazyTensor) = ElementwiseTensorOperation{:+}(s,t)
+Base.:-(s::LazyTensor, t::LazyTensor) = ElementwiseTensorOperation{:-}(s,t)
+
+# Composing lazy tensors
+Base.:∘(s::LazyTensor, t::LazyTensor) = TensorComposition(s,t)
+
+# Outer products of tensors
+⊗(a::LazyTensor, b::LazyTensor) = LazyOuterProduct(a,b)
 
 end # module
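The operator overloads added above make `t * v` build a lazy wrapper instead of computing anything. The following standalone sketch (not the package code itself; the names `Scaling` and `Application` are illustrative stand-ins for `ScalingTensor` and `TensorApplication`) shows the pattern under those assumptions:

```julia
# Minimal sketch of the lazy-application pattern: `t * v` returns a thin
# wrapper and elements are computed only when indexed.
abstract type LazyTensor{T,R,D} end

# A diagonal scaling tensor: applying it at index i yields λ * v[i].
struct Scaling{T} <: LazyTensor{T,1,1}
    λ::T
    size::Int
end
apply(t::Scaling, v, i) = t.λ * v[i]
range_size(t::Scaling) = (t.size,)

# Lazy result of `t * v`; behaves as an AbstractVector.
struct Application{T,TM,V} <: AbstractVector{T}
    t::TM
    v::V
end
Base.:*(t::Scaling{T}, v::AbstractVector{T}) where T =
    Application{T,typeof(t),typeof(v)}(t, v)
Base.size(a::Application) = range_size(a.t)
Base.getindex(a::Application, i::Int) = apply(a.t, a.v, i)

a = Scaling(2.0, 3) * [1.0, 2.0, 3.0]  # no work done yet
collect(a)                             # materializes: [2.0, 4.0, 6.0]
```

The same mechanism generalizes to `R`- and `D`-dimensional tensors, which is what `TensorApplication` does in the package.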
--- a/src/LazyTensors/lazy_array.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/LazyTensors/lazy_array.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -36,7 +36,7 @@
 
 function Base.getindex(lfa::LazyFunctionArray{F,T,D}, I::Vararg{Int,D}) where {F,T,D}
     @boundscheck checkbounds(lfa, I...)
-    return lfa.f(I...)
+    return @inbounds lfa.f(I...)
 end
 
 
@@ -61,25 +61,17 @@
 end
 LazyElementwiseOperation{T,D,Op}(a::AbstractArray{T,D},b::T) where {T,D,Op} = LazyElementwiseOperation{T,D,Op}(a, LazyConstantArray(b, size(a)))
 LazyElementwiseOperation{T,D,Op}(a::T,b::AbstractArray{T,D}) where {T,D,Op} = LazyElementwiseOperation{T,D,Op}(LazyConstantArray(a,  size(b)), b)
-# TODO: Move Op to be the first parameter? Compare to Binary operations
 
 Base.size(v::LazyElementwiseOperation) = size(v.a)
 
-evaluate(leo::LazyElementwiseOperation{T,D,:+}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] + leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:-}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] - leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:*}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] * leo.b[I...]
-evaluate(leo::LazyElementwiseOperation{T,D,:/}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] / leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:+}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] + leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:-}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] - leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:*}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] * leo.b[I...]
+Base.@propagate_inbounds evaluate(leo::LazyElementwiseOperation{T,D,:/}, I::Vararg{Int,D}) where {T,D} = leo.a[I...] / leo.b[I...]
 
-# TODO: Make sure boundschecking is done properly and that the lenght of the vectors are equal
-# NOTE: Boundschecking in getindex functions now assumes that the size of the
-# vectors in the LazyElementwiseOperation are the same size. If we remove the
-# size assertion in the constructor we might have to handle
-# boundschecking differently.
-Base.@propagate_inbounds @inline function Base.getindex(leo::LazyElementwiseOperation{T,D}, I::Vararg{Int,D}) where {T,D}
-    @boundscheck if !checkbounds(Bool, leo.a, I...)
-        throw(BoundsError([leo], I...))
-    end
-    return evaluate(leo, I...)
+function Base.getindex(leo::LazyElementwiseOperation{T,D}, I::Vararg{Int,D}) where {T,D}
+    @boundscheck checkbounds(leo.a, I...)
+    return @inbounds evaluate(leo, I...)
 end
 
 # Define lazy operations for AbstractArrays. Operations constructs a LazyElementwiseOperation which
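The bounds-checking convention introduced above (check once in `getindex`, then evaluate with `@inbounds`, with `Base.@propagate_inbounds` on the inner helper) can be illustrated with a small self-contained sketch; `LazyPlus` and `evaluate` here are illustrative, not the package's types:

```julia
# Sketch of the getindex bounds-checking pattern: the @boundscheck in
# getindex is the single check; @inbounds lets the inlined inner call
# (marked @propagate_inbounds) skip its own indexing checks.
struct LazyPlus{T,A,B} <: AbstractVector{T}
    a::A
    b::B
end
LazyPlus(a::AbstractVector{T}, b::AbstractVector{T}) where T =
    (size(a) == size(b) || throw(DimensionMismatch("sizes must match"));
     LazyPlus{T,typeof(a),typeof(b)}(a, b))

Base.size(l::LazyPlus) = size(l.a)

# Inner evaluation; inherits the caller's @inbounds context.
Base.@propagate_inbounds evaluate(l::LazyPlus, i::Int) = l.a[i] + l.b[i]

function Base.getindex(l::LazyPlus, i::Int)
    @boundscheck checkbounds(l.a, i)
    return @inbounds evaluate(l, i)
end

v = LazyPlus([1, 2, 3], [10, 20, 30])
v[2]  # computed on demand: 2 + 20 = 22
```

Because the constructor asserts equal sizes, checking bounds against `l.a` alone is sound, which is exactly the assumption the removed NOTE comment described.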
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/lazy_tensor.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,79 @@
+export LazyTensor
+export apply
+export apply_transpose
+export range_dim, domain_dim
+export range_size, domain_size
+
+"""
+    LazyTensor{T,R,D}
+
+Describes a mapping from a `D`-dimensional tensor to an `R`-dimensional tensor.
+The action of the mapping is implemented through the method
+```julia
+    apply(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
+```
+
+The size of the range and domain that the operator works with should be returned by
+the functions
+```julia
+    range_size(::LazyTensor)
+    domain_size(::LazyTensor)
+```
+to allow querying for one or the other.
+
+Optionally the action of the transpose may be defined through
+```julia
+    apply_transpose(t::LazyTensor{T,R,D}, v::AbstractArray{T,D}, I::Vararg) where {R,D,T}
+```
+"""
+abstract type LazyTensor{T,R,D} end
+
+"""
+    apply(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
+
+Return the result of the mapping for a given index.
+"""
+function apply end
+
+"""
+    apply_transpose(t::LazyTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg) where {R,D,T}
+
+Return the result of the transposed mapping for a given index.
+"""
+function apply_transpose end
+
+"""
+    range_dim(::LazyTensor)
+Return the dimension of the range space of a given mapping.
+"""
+range_dim(::LazyTensor{T,R,D}) where {T,R,D} = R
+
+"""
+    domain_dim(::LazyTensor)
+Return the dimension of the domain space of a given mapping.
+"""
+domain_dim(::LazyTensor{T,R,D}) where {T,R,D} = D
+
+
+"""
+    range_size(M::LazyTensor)
+
+Return the range size for the mapping.
+"""
+function range_size end
+
+"""
+    domain_size(M::LazyTensor)
+
+Return the domain size for the mapping.
+"""
+function domain_size end
+
+
+"""
+    eltype(::LazyTensor{T})
+
+The type of elements the LazyTensor acts on.
+"""
+Base.eltype(::LazyTensor{T}) where T = T
+
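The interface documented in the new `lazy_tensor.jl` (implement `apply`, `range_size`, and `domain_size` for a subtype) can be exercised with a hypothetical toy subtype; `Shift` below is an illustrative example, not part of the package:

```julia
# Hypothetical minimal LazyTensor implementation following the interface
# above: a 1D tensor of size n whose apply reads a neighbouring element.
abstract type LazyTensor{T,R,D} end  # mirrors the definition in lazy_tensor.jl

struct Shift{T} <: LazyTensor{T,1,1}
    n::Int
end

# apply returns the mapped value at a single index, no allocation.
apply(t::Shift, v::AbstractArray{<:Any,1}, i) = i < t.n ? v[i+1] : v[i]
range_size(t::Shift) = (t.n,)
domain_size(t::Shift) = (t.n,)

s = Shift{Float64}(3)
apply(s, [1.0, 2.0, 3.0], 1)  # reads v[2]: 2.0
```

With these three methods defined, the generic machinery (`TensorApplication`, transposes, compositions) works on `Shift` without further code, which is the point of the abstract interface.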
--- a/src/LazyTensors/lazy_tensor_operations.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/LazyTensors/lazy_tensor_operations.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,204 +1,137 @@
 """
-    LazyTensorMappingApplication{T,R,D} <: LazyArray{T,R}
+    TensorApplication{T,R,D} <: LazyArray{T,R}
 
-Struct for lazy application of a TensorMapping. Created using `*`.
+Struct for lazy application of a LazyTensor. Created using `*`.
 
-Allows the result of a `TensorMapping` applied to a vector to be treated as an `AbstractArray`.
-With a mapping `m` and a vector `v` the LazyTensorMappingApplication object can be created by `m*v`.
+Allows the result of a `LazyTensor` applied to a vector to be treated as an `AbstractArray`.
+With a mapping `m` and a vector `v` the TensorApplication object can be created by `m*v`.
 The actual result will be calculated when indexing into `m*v`.
 """
-struct LazyTensorMappingApplication{T,R,D, TM<:TensorMapping{<:Any,R,D}, AA<:AbstractArray{<:Any,D}} <: LazyArray{T,R}
+struct TensorApplication{T,R,D, TM<:LazyTensor{<:Any,R,D}, AA<:AbstractArray{<:Any,D}} <: LazyArray{T,R}
     t::TM
     o::AA
 
-    function LazyTensorMappingApplication(t::TensorMapping{<:Any,R,D}, o::AbstractArray{<:Any,D}) where {R,D}
+    function TensorApplication(t::LazyTensor{<:Any,R,D}, o::AbstractArray{<:Any,D}) where {R,D}
+        @boundscheck check_domain_size(t, size(o))
         I = ntuple(i->1, range_dim(t))
         T = typeof(apply(t,o,I...))
         return new{T,R,D,typeof(t), typeof(o)}(t,o)
     end
 end
-# TODO: Do boundschecking on creation!
-
-Base.getindex(ta::LazyTensorMappingApplication{T,R}, I::Vararg{Any,R}) where {T,R} = apply(ta.t, ta.o, I...)
-Base.getindex(ta::LazyTensorMappingApplication{T,1}, I::CartesianIndex{1}) where {T} = apply(ta.t, ta.o, I.I...) # Would otherwise be caught in the previous method.
-Base.size(ta::LazyTensorMappingApplication) = range_size(ta.t)
-# TODO: What else is needed to implement the AbstractArray interface?
 
-Base.:*(a::TensorMapping, v::AbstractArray) = LazyTensorMappingApplication(a,v)
-Base.:*(a::TensorMapping, b::TensorMapping) = throw(MethodError(Base.:*,(a,b)))
-Base.:*(a::TensorMapping, args::Union{TensorMapping, AbstractArray}...) = foldr(*,(a,args...))
+function Base.getindex(ta::TensorApplication{T,R}, I::Vararg{Any,R}) where {T,R}
+    @boundscheck checkbounds(ta, Int.(I)...)
+    return @inbounds apply(ta.t, ta.o, I...)
+end
+Base.@propagate_inbounds Base.getindex(ta::TensorApplication{T,1} where T, I::CartesianIndex{1}) = ta[Tuple(I)...] # Would otherwise be caught in the previous method.
+Base.size(ta::TensorApplication) = range_size(ta.t)
 
-# # We need the associativity to be a→b→c = a→(b→c), which is the case for '→'
-# # Should we overload some other infix binary opesrator?
-# →(tm::TensorMapping{T,R,D}, o::AbstractArray{T,D}) where {T,R,D} = LazyTensorMappingApplication(tm,o)
-# TODO: We need to be really careful about good error messages.
-# For example what happens if you try to multiply LazyTensorMappingApplication with a TensorMapping(wrong order)?
 
 """
-    LazyTensorMappingTranspose{T,R,D} <: TensorMapping{T,D,R}
+    TensorTranspose{T,R,D} <: LazyTensor{T,D,R}
 
-Struct for lazy transpose of a TensorMapping.
+Struct for lazy transpose of a LazyTensor.
 
 If a mapping implements the `apply_transpose` method, this allows working with
-the transpose of mapping `m` by using `m'`. `m'` will work as a regular TensorMapping lazily calling
+the transpose of mapping `m` by using `m'`. `m'` will work as a regular LazyTensor lazily calling
 the appropriate methods of `m`.
 """
-struct LazyTensorMappingTranspose{T,R,D, TM<:TensorMapping{T,R,D}} <: TensorMapping{T,D,R}
+struct TensorTranspose{T,R,D, TM<:LazyTensor{T,R,D}} <: LazyTensor{T,D,R}
     tm::TM
 end
 
 # # TBD: Should this be implemented on a type by type basis or through a trait to provide earlier errors?
-# Jonatan 2020-09-25: Is the problem that you can take the transpose of any TensorMapping even if it doesn't implement `apply_transpose`?
-Base.adjoint(tm::TensorMapping) = LazyTensorMappingTranspose(tm)
-Base.adjoint(tmt::LazyTensorMappingTranspose) = tmt.tm
+# Jonatan 2020-09-25: Is the problem that you can take the transpose of any LazyTensor even if it doesn't implement `apply_transpose`?
+Base.adjoint(tm::LazyTensor) = TensorTranspose(tm)
+Base.adjoint(tmt::TensorTranspose) = tmt.tm
 
-apply(tmt::LazyTensorMappingTranspose{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D} = apply_transpose(tmt.tm, v, I...)
-apply_transpose(tmt::LazyTensorMappingTranspose{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmt.tm, v, I...)
+apply(tmt::TensorTranspose{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D} = apply_transpose(tmt.tm, v, I...)
+apply_transpose(tmt::TensorTranspose{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmt.tm, v, I...)
 
-range_size(tmt::LazyTensorMappingTranspose) = domain_size(tmt.tm)
-domain_size(tmt::LazyTensorMappingTranspose) = range_size(tmt.tm)
+range_size(tmt::TensorTranspose) = domain_size(tmt.tm)
+domain_size(tmt::TensorTranspose) = range_size(tmt.tm)
 
 
-struct LazyTensorMappingBinaryOperation{Op,T,R,D,T1<:TensorMapping{T,R,D},T2<:TensorMapping{T,R,D}} <: TensorMapping{T,D,R}
+struct ElementwiseTensorOperation{Op,T,R,D,T1<:LazyTensor{T,R,D},T2<:LazyTensor{T,R,D}} <: LazyTensor{T,D,R}
     tm1::T1
     tm2::T2
 
-    @inline function LazyTensorMappingBinaryOperation{Op,T,R,D}(tm1::T1,tm2::T2) where {Op,T,R,D, T1<:TensorMapping{T,R,D},T2<:TensorMapping{T,R,D}}
+    function ElementwiseTensorOperation{Op,T,R,D}(tm1::T1,tm2::T2) where {Op,T,R,D, T1<:LazyTensor{T,R,D},T2<:LazyTensor{T,R,D}}
+        @boundscheck check_domain_size(tm2, domain_size(tm1))
+        @boundscheck check_range_size(tm2, range_size(tm1))
         return new{Op,T,R,D,T1,T2}(tm1,tm2)
     end
 end
-# TODO: Boundschecking in constructor.
 
-apply(tmBinOp::LazyTensorMappingBinaryOperation{:+,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) + apply(tmBinOp.tm2, v, I...)
-apply(tmBinOp::LazyTensorMappingBinaryOperation{:-,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) - apply(tmBinOp.tm2, v, I...)
+ElementwiseTensorOperation{Op}(s,t) where Op = ElementwiseTensorOperation{Op,eltype(s), range_dim(s), domain_dim(s)}(s,t)
 
-range_size(tmBinOp::LazyTensorMappingBinaryOperation) = range_size(tmBinOp.tm1)
-domain_size(tmBinOp::LazyTensorMappingBinaryOperation) = domain_size(tmBinOp.tm1)
+apply(tmBinOp::ElementwiseTensorOperation{:+,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) + apply(tmBinOp.tm2, v, I...)
+apply(tmBinOp::ElementwiseTensorOperation{:-,T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D} = apply(tmBinOp.tm1, v, I...) - apply(tmBinOp.tm2, v, I...)
 
-Base.:+(tm1::TensorMapping{T,R,D}, tm2::TensorMapping{T,R,D}) where {T,R,D} = LazyTensorMappingBinaryOperation{:+,T,R,D}(tm1,tm2)
-Base.:-(tm1::TensorMapping{T,R,D}, tm2::TensorMapping{T,R,D}) where {T,R,D} = LazyTensorMappingBinaryOperation{:-,T,R,D}(tm1,tm2)
+range_size(tmBinOp::ElementwiseTensorOperation) = range_size(tmBinOp.tm1)
+domain_size(tmBinOp::ElementwiseTensorOperation) = domain_size(tmBinOp.tm1)
+
 
 """
-    TensorMappingComposition{T,R,K,D}
+    TensorComposition{T,R,K,D}
 
-Lazily compose two `TensorMapping`s, so that they can be handled as a single `TensorMapping`.
+Lazily compose two `LazyTensor`s, so that they can be handled as a single `LazyTensor`.
 """
-struct TensorMappingComposition{T,R,K,D, TM1<:TensorMapping{T,R,K}, TM2<:TensorMapping{T,K,D}} <: TensorMapping{T,R,D}
+struct TensorComposition{T,R,K,D, TM1<:LazyTensor{T,R,K}, TM2<:LazyTensor{T,K,D}} <: LazyTensor{T,R,D}
     t1::TM1
     t2::TM2
 
-    @inline function TensorMappingComposition(t1::TensorMapping{T,R,K}, t2::TensorMapping{T,K,D}) where {T,R,K,D}
+    function TensorComposition(t1::LazyTensor{T,R,K}, t2::LazyTensor{T,K,D}) where {T,R,K,D}
         @boundscheck check_domain_size(t1, range_size(t2))
         return new{T,R,K,D, typeof(t1), typeof(t2)}(t1,t2)
     end
 end
 
-range_size(tm::TensorMappingComposition) = range_size(tm.t1)
-domain_size(tm::TensorMappingComposition) = domain_size(tm.t2)
+range_size(tm::TensorComposition) = range_size(tm.t1)
+domain_size(tm::TensorComposition) = domain_size(tm.t2)
 
-function apply(c::TensorMappingComposition{T,R,K,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,K,D}
+function apply(c::TensorComposition{T,R,K,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,K,D}
     apply(c.t1, c.t2*v, I...)
 end
 
-function apply_transpose(c::TensorMappingComposition{T,R,K,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,K,D}
+function apply_transpose(c::TensorComposition{T,R,K,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,K,D}
     apply_transpose(c.t2, c.t1'*v, I...)
 end
 
-Base.@propagate_inbounds Base.:∘(s::TensorMapping, t::TensorMapping) = TensorMappingComposition(s,t)
-
-"""
-    LazyLinearMap{T,R,D,...}(A, range_indicies, domain_indicies)
-
-TensorMapping defined by the AbstractArray A. `range_indicies` and `domain_indicies` define which indicies of A should
-be considerd the range and domain of the TensorMapping. Each set of indices must be ordered in ascending order.
-
-For instance, if A is a m x n matrix, and range_size = (1,), domain_size = (2,), then the LazyLinearMap performs the
-standard matrix-vector product on vectors of size n.
-"""
-struct LazyLinearMap{T,R,D, RD, AA<:AbstractArray{T,RD}} <: TensorMapping{T,R,D}
-    A::AA
-    range_indicies::NTuple{R,Int}
-    domain_indicies::NTuple{D,Int}
-
-    function LazyLinearMap(A::AA, range_indicies::NTuple{R,Int}, domain_indicies::NTuple{D,Int}) where {T,R,D, RD, AA<:AbstractArray{T,RD}}
-        if !issorted(range_indicies) || !issorted(domain_indicies)
-            throw(DomainError("range_indicies and domain_indicies must be sorted in ascending order"))
-        end
-
-        return new{T,R,D,RD,AA}(A,range_indicies,domain_indicies)
-    end
-end
-
-range_size(llm::LazyLinearMap) = size(llm.A)[[llm.range_indicies...]]
-domain_size(llm::LazyLinearMap) = size(llm.A)[[llm.domain_indicies...]]
-
-function apply(llm::LazyLinearMap{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
-    view_index = ntuple(i->:,ndims(llm.A))
-    for i ∈ 1:R
-        view_index = Base.setindex(view_index, Int(I[i]), llm.range_indicies[i])
-    end
-    A_view = @view llm.A[view_index...]
-    return sum(A_view.*v)
-end
-
-function apply_transpose(llm::LazyLinearMap{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
-    apply(LazyLinearMap(llm.A, llm.domain_indicies, llm.range_indicies), v, I...)
-end
-
 
 """
-    IdentityMapping{T,D} <: TensorMapping{T,D,D}
-
-The lazy identity TensorMapping for a given size. Usefull for building up higher dimensional tensor mappings from lower
-dimensional ones through outer products. Also used in the Implementation for InflatedTensorMapping.
-"""
-struct IdentityMapping{T,D} <: TensorMapping{T,D,D}
-    size::NTuple{D,Int}
-end
-
-IdentityMapping{T}(size::NTuple{D,Int}) where {T,D} = IdentityMapping{T,D}(size)
-IdentityMapping{T}(size::Vararg{Int,D}) where {T,D} = IdentityMapping{T,D}(size)
-IdentityMapping(size::Vararg{Int,D}) where D = IdentityMapping{Float64,D}(size)
+    TensorComposition(tm, tmi::IdentityTensor)
+    TensorComposition(tmi::IdentityTensor, tm)
 
-range_size(tmi::IdentityMapping) = tmi.size
-domain_size(tmi::IdentityMapping) = tmi.size
-
-apply(tmi::IdentityMapping{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
-apply_transpose(tmi::IdentityMapping{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
-
+Composes a `LazyTensor` `tm` with an `IdentityTensor` `tmi` by returning `tm`.
 """
-    Base.:∘(tm, tmi)
-    Base.:∘(tmi, tm)
-
-Composes a `Tensormapping` `tm` with an `IdentityMapping` `tmi`, by returning `tm`
-"""
-@inline function Base.:∘(tm::TensorMapping{T,R,D}, tmi::IdentityMapping{T,D}) where {T,R,D}
+function TensorComposition(tm::LazyTensor{T,R,D}, tmi::IdentityTensor{T,D}) where {T,R,D}
     @boundscheck check_domain_size(tm, range_size(tmi))
     return tm
 end
 
-@inline function Base.:∘(tmi::IdentityMapping{T,R}, tm::TensorMapping{T,R,D}) where {T,R,D}
+function TensorComposition(tmi::IdentityTensor{T,R}, tm::LazyTensor{T,R,D}) where {T,R,D}
     @boundscheck check_domain_size(tmi, range_size(tm))
     return tm
 end
-# Specialization for the case where tm is an IdentityMapping. Required to resolve ambiguity.
-@inline function Base.:∘(tm::IdentityMapping{T,D}, tmi::IdentityMapping{T,D}) where {T,D}
+# Specialization for the case where tm is an IdentityTensor. Required to resolve ambiguity.
+function TensorComposition(tm::IdentityTensor{T,D}, tmi::IdentityTensor{T,D}) where {T,D}
     @boundscheck check_domain_size(tm, range_size(tmi))
     return tmi
 end
 
 
 """
-    InflatedTensorMapping{T,R,D} <: TensorMapping{T,R,D}
+    InflatedTensor{T,R,D} <: LazyTensor{T,R,D}
 
-An inflated `TensorMapping` with dimensions added before and after its actual dimensions.
+An inflated `LazyTensor` with dimensions added before and after its actual dimensions.
 """
-struct InflatedTensorMapping{T,R,D,D_before,R_middle,D_middle,D_after, TM<:TensorMapping{T,R_middle,D_middle}} <: TensorMapping{T,R,D}
-    before::IdentityMapping{T,D_before}
+struct InflatedTensor{T,R,D,D_before,R_middle,D_middle,D_after, TM<:LazyTensor{T,R_middle,D_middle}} <: LazyTensor{T,R,D}
+    before::IdentityTensor{T,D_before}
     tm::TM
-    after::IdentityMapping{T,D_after}
+    after::IdentityTensor{T,D_after}
 
-    function InflatedTensorMapping(before, tm::TensorMapping{T}, after) where T
+    function InflatedTensor(before, tm::LazyTensor{T}, after) where T
         R_before = range_dim(before)
         R_middle = range_dim(tm)
         R_after = range_dim(after)
@@ -211,36 +144,37 @@
         return new{T,R,D,D_before,R_middle,D_middle,D_after, typeof(tm)}(before, tm, after)
     end
 end
+
 """
-    InflatedTensorMapping(before, tm, after)
-    InflatedTensorMapping(before,tm)
-    InflatedTensorMapping(tm,after)
+    InflatedTensor(before, tm, after)
+    InflatedTensor(before,tm)
+    InflatedTensor(tm,after)
 
-The outer product of `before`, `tm` and `after`, where `before` and `after` are `IdentityMapping`s.
+The outer product of `before`, `tm` and `after`, where `before` and `after` are `IdentityTensor`s.
 
-If one of `before` or `after` is left out, a 0-dimensional `IdentityMapping` is used as the default value.
+If one of `before` or `after` is left out, a 0-dimensional `IdentityTensor` is used as the default value.
 
-If `tm` is already an `InflatedTensorMapping`, `before` and `after` will be extended instead of
-creating a nested `InflatedTensorMapping`.
+If `tm` is already an `InflatedTensor`, `before` and `after` will be extended instead of
+creating a nested `InflatedTensor`.
 """
-InflatedTensorMapping(::IdentityMapping, ::TensorMapping, ::IdentityMapping)
+InflatedTensor(::IdentityTensor, ::LazyTensor, ::IdentityTensor)
 
-function InflatedTensorMapping(before, itm::InflatedTensorMapping, after)
-    return InflatedTensorMapping(
-        IdentityMapping(before.size...,  itm.before.size...),
+function InflatedTensor(before, itm::InflatedTensor, after)
+    return InflatedTensor(
+        IdentityTensor(before.size...,  itm.before.size...),
         itm.tm,
-        IdentityMapping(itm.after.size..., after.size...),
+        IdentityTensor(itm.after.size..., after.size...),
     )
 end
 
-InflatedTensorMapping(before::IdentityMapping, tm::TensorMapping{T}) where T = InflatedTensorMapping(before,tm,IdentityMapping{T}())
-InflatedTensorMapping(tm::TensorMapping{T}, after::IdentityMapping) where T = InflatedTensorMapping(IdentityMapping{T}(),tm,after)
+InflatedTensor(before::IdentityTensor, tm::LazyTensor{T}) where T = InflatedTensor(before,tm,IdentityTensor{T}())
+InflatedTensor(tm::LazyTensor{T}, after::IdentityTensor) where T = InflatedTensor(IdentityTensor{T}(),tm,after)
 # Resolve ambiguity between the two previous methods
-InflatedTensorMapping(I1::IdentityMapping{T}, I2::IdentityMapping{T}) where T = InflatedTensorMapping(I1,I2,IdentityMapping{T}())
+InflatedTensor(I1::IdentityTensor{T}, I2::IdentityTensor{T}) where T = InflatedTensor(I1,I2,IdentityTensor{T}())
 
-# TODO: Implement some pretty printing in terms of ⊗. E.g InflatedTensorMapping(I(3),B,I(2)) -> I(3)⊗B⊗I(2)
+# TODO: Implement some pretty printing in terms of ⊗. E.g InflatedTensor(I(3),B,I(2)) -> I(3)⊗B⊗I(2)
 
-function range_size(itm::InflatedTensorMapping)
+function range_size(itm::InflatedTensor)
     return flatten_tuple(
         range_size(itm.before),
         range_size(itm.tm),
@@ -248,7 +182,7 @@
     )
 end
 
-function domain_size(itm::InflatedTensorMapping)
+function domain_size(itm::InflatedTensor)
     return flatten_tuple(
         domain_size(itm.before),
         domain_size(itm.tm),
@@ -256,7 +190,7 @@
     )
 end
 
-function apply(itm::InflatedTensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
+function apply(itm::InflatedTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
     dim_before = range_dim(itm.before)
     dim_domain = domain_dim(itm.tm)
     dim_range = range_dim(itm.tm)
@@ -268,7 +202,7 @@
     return apply(itm.tm, v_inner, inner_index...)
 end
 
-function apply_transpose(itm::InflatedTensorMapping{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
+function apply_transpose(itm::InflatedTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
     dim_before = range_dim(itm.before)
     dim_domain = domain_dim(itm.tm)
     dim_range = range_dim(itm.tm)
@@ -281,87 +215,10 @@
 end
 
 
-"""
-    split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...)
-
-Splits the multi-index `I` into two parts. One part which is expected to be
-used as a view, and one which is expected to be used as an index.
-Eg.
-```
-split_index(Val(1),Val(3),Val(2),Val(1),(1,2,3,4)) -> (1,:,:,:,4), (2,3)
-```
-
-`dim_view` controls how many colons are in the view, and `dim_index` controls
-how many elements are extracted from the middle.
-`dim_before` and `dim_after` decides the length of the index parts before and after the colons in the view index.
-
-Arguments should satisfy `length(I) == dim_before+B_domain+dim_after`.
-
-The returned values satisfy
- * `length(view_index) == dim_before + dim_view + dim_after`
- * `length(I_middle) == dim_index`
-"""
-function split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...) where {dim_before,dim_view, dim_index,dim_after}
-    I_before, I_middle, I_after = split_tuple(I, Val(dim_before), Val(dim_index))
-
-    view_index = (I_before..., ntuple((i)->:, dim_view)..., I_after...)
-
-    return view_index, I_middle
-end
-
-# TODO: Can this be replaced by something more elegant while still being type stable? 2020-10-21
-# See:
-# https://github.com/JuliaLang/julia/issues/34884
-# https://github.com/JuliaLang/julia/issues/30386
-"""
-    slice_tuple(t, Val(l), Val(u))
-
-Get a slice of a tuple in a type stable way.
-Equivalent to `t[l:u]` but type stable.
-"""
-function slice_tuple(t,::Val{L},::Val{U}) where {L,U}
-    return ntuple(i->t[i+L-1], U-L+1)
-end
-
-"""
-    split_tuple(t::Tuple{...}, ::Val{M}) where {N,M}
-
-Split the tuple `t` into two parts. the first part is `M` long.
-E.g
-```julia
-split_tuple((1,2,3,4),Val(3)) -> (1,2,3), (4,)
-```
-"""
-function split_tuple(t::NTuple{N,Any},::Val{M}) where {N,M}
-    return slice_tuple(t,Val(1), Val(M)), slice_tuple(t,Val(M+1), Val(N))
-end
-
-"""
-    split_tuple(t::Tuple{...},::Val{M},::Val{K}) where {N,M,K}
-
-Same as `split_tuple(t::NTuple{N},::Val{M})` but splits the tuple in three parts. With the first
-two parts having lenght `M` and `K`.
-"""
-function split_tuple(t::NTuple{N,Any},::Val{M},::Val{K}) where {N,M,K}
-    p1, tail = split_tuple(t, Val(M))
-    p2, p3 = split_tuple(tail, Val(K))
-    return p1,p2,p3
-end
-
-
-"""
-    flatten_tuple(t)
-
-Takes a nested tuple and flattens the whole structure
-"""
-flatten_tuple(t::NTuple{N, Number} where N) = t
-flatten_tuple(t::Tuple) = ((flatten_tuple.(t)...)...,) # simplify?
-flatten_tuple(ts::Vararg) = flatten_tuple(ts)
-
 @doc raw"""
     LazyOuterProduct(tms...)
 
-Creates a `TensorMappingComposition` for the outerproduct of `tms...`.
+Creates a `TensorComposition` for the outer product of `tms...`.
 This is done by separating the outer product into regular products of outer products involving only identity mappings and one non-identity mapping.
 
 First let
@@ -397,34 +254,49 @@
 """
 function LazyOuterProduct end
 
-function LazyOuterProduct(tm1::TensorMapping{T}, tm2::TensorMapping{T}) where T
-    itm1 = InflatedTensorMapping(tm1, IdentityMapping{T}(range_size(tm2)))
-    itm2 = InflatedTensorMapping(IdentityMapping{T}(domain_size(tm1)),tm2)
+function LazyOuterProduct(tm1::LazyTensor{T}, tm2::LazyTensor{T}) where T
+    itm1 = InflatedTensor(tm1, IdentityTensor{T}(range_size(tm2)))
+    itm2 = InflatedTensor(IdentityTensor{T}(domain_size(tm1)),tm2)
 
     return itm1∘itm2
 end
 
-LazyOuterProduct(t1::IdentityMapping{T}, t2::IdentityMapping{T}) where T = IdentityMapping{T}(t1.size...,t2.size...)
-LazyOuterProduct(t1::TensorMapping, t2::IdentityMapping) = InflatedTensorMapping(t1, t2)
-LazyOuterProduct(t1::IdentityMapping, t2::TensorMapping) = InflatedTensorMapping(t1, t2)
+LazyOuterProduct(t1::IdentityTensor{T}, t2::IdentityTensor{T}) where T = IdentityTensor{T}(t1.size...,t2.size...)
+LazyOuterProduct(t1::LazyTensor, t2::IdentityTensor) = InflatedTensor(t1, t2)
+LazyOuterProduct(t1::IdentityTensor, t2::LazyTensor) = InflatedTensor(t1, t2)
 
-LazyOuterProduct(tms::Vararg{TensorMapping}) = foldl(LazyOuterProduct, tms)
-
-⊗(a::TensorMapping, b::TensorMapping) = LazyOuterProduct(a,b)
+LazyOuterProduct(tms::Vararg{LazyTensor}) = foldl(LazyOuterProduct, tms)
 
 
-function check_domain_size(tm::TensorMapping, sz)
+function check_domain_size(tm::LazyTensor, sz)
     if domain_size(tm) != sz
-        throw(SizeMismatch(tm,sz))
+        throw(DomainSizeMismatch(tm,sz))
+    end
+end
+
+function check_range_size(tm::LazyTensor, sz)
+    if range_size(tm) != sz
+        throw(RangeSizeMismatch(tm,sz))
     end
 end
 
-struct SizeMismatch <: Exception
-    tm::TensorMapping
+struct DomainSizeMismatch <: Exception
+    tm::LazyTensor
     sz
 end
 
-function Base.showerror(io::IO, err::SizeMismatch)
-    print(io, "SizeMismatch: ")
-    print(io, "domain size $(domain_size(err.tm)) of TensorMapping not matching size $(err.sz)")
+function Base.showerror(io::IO, err::DomainSizeMismatch)
+    print(io, "DomainSizeMismatch: ")
+    print(io, "domain size $(domain_size(err.tm)) of LazyTensor not matching size $(err.sz)")
 end
+
+
+struct RangeSizeMismatch <: Exception
+    tm::LazyTensor
+    sz
+end
+
+function Base.showerror(io::IO, err::RangeSizeMismatch)
+    print(io, "RangeSizeMismatch: ")
+    print(io, "range size $(range_size(err.tm)) of LazyTensor not matching size $(err.sz)")
+end
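
For context, a minimal sketch of how the split error types introduced here might surface (the `Diag` tensor is hypothetical; only `check_domain_size`/`check_range_size` and the exception types come from this diff):

```julia
struct Diag{T} <: LazyTensor{T,1,1}
    d::Vector{T}
end
LazyTensors.range_size(t::Diag) = (length(t.d),)
LazyTensors.domain_size(t::Diag) = (length(t.d),)

t = Diag([1.0, 2.0, 3.0])
v = ones(4)

check_domain_size(t, size(v))  # throws DomainSizeMismatch: domain size (3,) vs (4,)
check_range_size(t, size(v))   # throws RangeSizeMismatch: range size (3,) vs (4,)
```

Splitting the old `SizeMismatch` into `DomainSizeMismatch` and `RangeSizeMismatch` lets call sites report which side of the mapping failed to match.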
--- a/src/LazyTensors/tensor_mapping.jl	Tue Mar 22 14:14:31 2022 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,80 +0,0 @@
-export TensorMapping
-export apply
-export apply_transpose
-export range_dim, domain_dim
-export range_size, domain_size
-
-"""
-    TensorMapping{T,R,D}
-
-Describes a mapping of a `D` dimension tensor to an `R` dimension tensor.
-The action of the mapping is implemented through the method
-```julia
-    apply(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
-```
-
-The size of the range and domain that the operator works with should be returned by
-the functions
-```julia
-    range_size(::TensorMapping)
-    domain_size(::TensorMapping)
-```
-to allow querying for one or the other.
-
-Optionally the action of the transpose may be defined through
-```julia
-    apply_transpose(t::TensorMapping{T,R,D}, v::AbstractArray{T,D}, I::Vararg) where {R,D,T}
-```
-"""
-abstract type TensorMapping{T,R,D} end
-
-"""
-    apply(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg) where {R,D,T}
-
-Return the result of the mapping for a given index.
-"""
-function apply end
-
-"""
-    apply_transpose(t::TensorMapping{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg) where {R,D,T}
-
-Return the result of the transposed mapping for a given index.
-"""
-function apply_transpose end
-
-"""
-    range_dim(::TensorMapping)
-Return the dimension of the range space of a given mapping
-"""
-range_dim(::TensorMapping{T,R,D}) where {T,R,D} = R
-
-"""
-    domain_dim(::TensorMapping)
-Return the dimension of the domain space of a given mapping
-"""
-domain_dim(::TensorMapping{T,R,D}) where {T,R,D} = D
-
-
-"""
-    range_size(M::TensorMapping)
-
-Return the range size for the mapping.
-"""
-function range_size end
-
-"""
-    domain_size(M::TensorMapping)
-
-Return the domain size for the mapping.
-"""
-function domain_size end
-
-
-"""
-    eltype(::TensorMapping{T})
-
-The type of elements the TensorMapping acts on.
-"""
-Base.eltype(::TensorMapping{T}) where T = T
-
-# TODO: Think about boundschecking!
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/tensor_types.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,76 @@
+"""
+    IdentityTensor{T,D} <: LazyTensor{T,D,D}
+
+The lazy identity tensor for a given size. Useful for building up higher-dimensional
+lazy tensors from lower-dimensional ones through outer products. Also used in the implementation of `InflatedTensor`.
+"""
+struct IdentityTensor{T,D} <: LazyTensor{T,D,D}
+    size::NTuple{D,Int}
+end
+
+IdentityTensor{T}(size::NTuple{D,Int}) where {T,D} = IdentityTensor{T,D}(size)
+IdentityTensor{T}(size::Vararg{Int,D}) where {T,D} = IdentityTensor{T,D}(size)
+IdentityTensor(size::Vararg{Int,D}) where D = IdentityTensor{Float64,D}(size)
+
+range_size(tmi::IdentityTensor) = tmi.size
+domain_size(tmi::IdentityTensor) = tmi.size
+
+apply(tmi::IdentityTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
+apply_transpose(tmi::IdentityTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = v[I...]
+
+
+"""
+    ScalingTensor{T,D} <: LazyTensor{T,D,D}
+
+A lazy tensor that scales its input with `λ`.
+"""
+struct ScalingTensor{T,D} <: LazyTensor{T,D,D}
+    λ::T
+    size::NTuple{D,Int}
+end
+
+LazyTensors.apply(tm::ScalingTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = tm.λ*v[I...]
+LazyTensors.apply_transpose(tm::ScalingTensor{T,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,D}) where {T,D} = tm.λ*v[I...]
+
+LazyTensors.range_size(m::ScalingTensor) = m.size
+LazyTensors.domain_size(m::ScalingTensor) = m.size
+
+
+"""
+    DenseTensor{T,R,D,...}(A, range_indicies, domain_indicies)
+
+LazyTensor defined by the AbstractArray `A`. `range_indicies` and `domain_indicies` define which indices of `A` should
+be considered the range and domain of the LazyTensor. Each set of indices must be sorted in ascending order.
+
+For instance, if `A` is an m×n matrix and `range_indicies = (1,)`, `domain_indicies = (2,)`, then the DenseTensor performs the
+standard matrix-vector product on vectors of size n.
+"""
+struct DenseTensor{T,R,D, RD, AA<:AbstractArray{T,RD}} <: LazyTensor{T,R,D}
+    A::AA
+    range_indicies::NTuple{R,Int}
+    domain_indicies::NTuple{D,Int}
+
+    function DenseTensor(A::AA, range_indicies::NTuple{R,Int}, domain_indicies::NTuple{D,Int}) where {T,R,D, RD, AA<:AbstractArray{T,RD}}
+        if !issorted(range_indicies) || !issorted(domain_indicies)
+            throw(DomainError("range_indicies and domain_indicies must be sorted in ascending order"))
+        end
+
+        return new{T,R,D,RD,AA}(A,range_indicies,domain_indicies)
+    end
+end
+
+range_size(llm::DenseTensor) = size(llm.A)[[llm.range_indicies...]]
+domain_size(llm::DenseTensor) = size(llm.A)[[llm.domain_indicies...]]
+
+function apply(llm::DenseTensor{T,R,D}, v::AbstractArray{<:Any,D}, I::Vararg{Any,R}) where {T,R,D}
+    view_index = ntuple(i->:,ndims(llm.A))
+    for i ∈ 1:R
+        view_index = Base.setindex(view_index, Int(I[i]), llm.range_indicies[i])
+    end
+    A_view = @view llm.A[view_index...]
+    return sum(A_view.*v)
+end
+
+function apply_transpose(llm::DenseTensor{T,R,D}, v::AbstractArray{<:Any,R}, I::Vararg{Any,D}) where {T,R,D}
+    apply(DenseTensor(llm.A, llm.domain_indicies, llm.range_indicies), v, I...)
+end
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/LazyTensors/tuple_manipulation.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,76 @@
+"""
+    split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...)
+
+Splits the multi-index `I` into two parts. One part which is expected to be
+used as a view, and one which is expected to be used as an index.
+Eg.
+```
+split_index(Val(1),Val(3),Val(2),Val(1),1,2,3,4) -> (1,:,:,:,4), (2,3)
+```
+
+`dim_view` controls how many colons are in the view, and `dim_index` controls
+how many elements are extracted from the middle.
+`dim_before` and `dim_after` decide the length of the index parts before and after the colons in the view index.
+
+Arguments should satisfy `length(I) == dim_before + dim_index + dim_after`.
+
+The returned values satisfy
+ * `length(view_index) == dim_before + dim_view + dim_after`
+ * `length(I_middle) == dim_index`
+"""
+function split_index(::Val{dim_before}, ::Val{dim_view}, ::Val{dim_index}, ::Val{dim_after}, I...) where {dim_before,dim_view, dim_index,dim_after}
+    I_before, I_middle, I_after = split_tuple(I, Val(dim_before), Val(dim_index))
+
+    view_index = (I_before..., ntuple((i)->:, dim_view)..., I_after...)
+
+    return view_index, I_middle
+end
+
+# TODO: Can this be replaced by something more elegant while still being type stable? 2020-10-21
+# See:
+# https://github.com/JuliaLang/julia/issues/34884
+# https://github.com/JuliaLang/julia/issues/30386
+"""
+    slice_tuple(t, Val(l), Val(u))
+
+Get a slice of a tuple in a type stable way.
+Equivalent to `t[l:u]` but type stable.
+"""
+function slice_tuple(t,::Val{L},::Val{U}) where {L,U}
+    return ntuple(i->t[i+L-1], U-L+1)
+end
+
+"""
+    split_tuple(t::Tuple{...}, ::Val{M}) where {N,M}
+
+Split the tuple `t` into two parts, where the first part has length `M`.
+E.g
+```julia
+split_tuple((1,2,3,4),Val(3)) -> (1,2,3), (4,)
+```
+"""
+function split_tuple(t::NTuple{N,Any},::Val{M}) where {N,M}
+    return slice_tuple(t,Val(1), Val(M)), slice_tuple(t,Val(M+1), Val(N))
+end
+
+"""
+    split_tuple(t::Tuple{...},::Val{M},::Val{K}) where {N,M,K}
+
+Same as `split_tuple(t::NTuple{N},::Val{M})` but splits the tuple into three parts, with the first
+two parts having lengths `M` and `K`.
+"""
+function split_tuple(t::NTuple{N,Any},::Val{M},::Val{K}) where {N,M,K}
+    p1, tail = split_tuple(t, Val(M))
+    p2, p3 = split_tuple(tail, Val(K))
+    return p1,p2,p3
+end
+
+
+"""
+    flatten_tuple(t)
+
+Takes a nested tuple and flattens the whole structure.
+"""
+flatten_tuple(t::NTuple{N, Number} where N) = t
+flatten_tuple(t::Tuple) = ((flatten_tuple.(t)...)...,) # simplify?
+flatten_tuple(ts::Vararg) = flatten_tuple(ts)
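A few worked calls to the tuple helpers above (these are the same functions as before the move, so the behavior shown is unchanged):

```julia
slice_tuple((10, 20, 30, 40), Val(2), Val(4))    # (20, 30, 40)
split_tuple((1, 2, 3, 4), Val(3))                # ((1, 2, 3), (4,))
split_tuple((1, 2, 3, 4, 5), Val(2), Val(2))     # ((1, 2), (3, 4), (5,))
flatten_tuple(((1, 2), (3, (4, 5))))             # (1, 2, 3, 4, 5)

# Split a multi-index into a view part and a middle index part:
split_index(Val(1), Val(3), Val(2), Val(1), 1, 2, 3, 4)  # ((1, :, :, :, 4), (2, 3))
```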
--- a/src/SbpOperators/boundaryops/boundary_operator.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/boundaryops/boundary_operator.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -6,7 +6,7 @@
 
 When `Dim=1`, the corresponding `BoundaryOperator` tensor mapping is returned.
 When `Dim>1`, the `BoundaryOperator` `op` is inflated by the outer product
-of `IdentityMappings` in orthogonal coordinate directions, e.g for `Dim=3`,
+of `IdentityTensor`s in orthogonal coordinate directions, e.g. for `Dim=3`,
 the boundary restriction operator in the y-direction direction is `Ix⊗op⊗Iz`.
 """
 function boundary_operator(grid::EquidistantGrid, closure_stencil, boundary::CartesianBoundary)
@@ -17,9 +17,9 @@
     d = dim(boundary)
     op = BoundaryOperator(restrict(grid, d), closure_stencil, r)
 
-    # Create 1D IdentityMappings for each coordinate direction
+    # Create 1D IdentityTensors for each coordinate direction
     one_d_grids = restrict.(Ref(grid), Tuple(1:dimension(grid)))
-    Is = IdentityMapping{eltype(grid)}.(size.(one_d_grids))
+    Is = IdentityTensor{eltype(grid)}.(size.(one_d_grids))
 
     # Formulate the correct outer product sequence of the identity mappings and
     # the boundary operator
@@ -28,15 +28,15 @@
 end
 
 """
-    BoundaryOperator{T,R,N} <: TensorMapping{T,0,1}
+    BoundaryOperator{T,R,N} <: LazyTensor{T,0,1}
 
-Implements the boundary operator `op` for 1D as a `TensorMapping`
+Implements the boundary operator `op` for 1D as a `LazyTensor`
 
 `op` is the restriction of a grid function to the boundary using some closure `Stencil{T,N}`.
 The boundary to restrict to is determined by `R`.
 `op'` is the prolongation of a zero dimensional array to the whole grid using the same closure stencil.
 """
-struct BoundaryOperator{T,R<:Region,N} <: TensorMapping{T,0,1}
+struct BoundaryOperator{T,R<:Region,N} <: LazyTensor{T,0,1}
     stencil::Stencil{T,N}
     size::Int
 end
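The inflation pattern described in `boundary_operator` above can be sketched as follows (the grid sizes are illustrative, and a `ScalingTensor` stands in for the 1D `BoundaryOperator`; `Base.setindex` and `LazyOuterProduct` are used exactly as in the surrounding code):

```julia
# Sketch: inflate a 1D operator acting along dimension 2 of a 4×5×6 grid
# into the outer product Ix ⊗ op ⊗ Iz.
Is = (IdentityTensor{Float64}(4), IdentityTensor{Float64}(5), IdentityTensor{Float64}(6))
op = ScalingTensor(2.0, (5,))       # stand-in for the 1D boundary operator
parts = Base.setindex(Is, op, 2)    # replace the identity in slot 2 with `op`
e = foldl(LazyOuterProduct, parts)  # the inflated operator Ix ⊗ op ⊗ Iz
```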
--- a/src/SbpOperators/boundaryops/boundary_restriction.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/boundaryops/boundary_restriction.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,7 +1,7 @@
 """
     boundary_restriction(grid, closure_stencil::Stencil, boundary)
 
-Creates boundary restriction operators `e` as `TensorMapping`s on `boundary`
+Creates boundary restriction operators `e` as `LazyTensor`s on `boundary`
 
 `e` is the restriction of a grid function to `boundary` using a `Stencil` `closure_stencil`.
 `e'` is the prolongation of a grid function on `boundary` to the whole grid using the same `closure_stencil`.
--- a/src/SbpOperators/boundaryops/normal_derivative.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/boundaryops/normal_derivative.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,7 +1,7 @@
 """
     normal_derivative(grid, closure_stencil::Stencil, boundary)
 
-Creates the normal derivative boundary operator `d` as a `TensorMapping`
+Creates the normal derivative boundary operator `d` as a `LazyTensor`
 
 `d` computes the normal derivative of a grid function  on `boundary` a `Stencil` `closure_stencil`.
 `d'` is the prolongation of the normal derivative of a grid function to the whole grid using the same `closure_stencil`.
--- a/src/SbpOperators/volumeops/constant_interior_scaling_operator.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/constant_interior_scaling_operator.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,10 +1,10 @@
 """
-    ConstantInteriorScalingOperator{T,N} <: TensorMapping{T,1,1}
+    ConstantInteriorScalingOperator{T,N} <: LazyTensor{T,1,1}
 
 A one-dimensional operator scaling a vector. The first and last `N` points are
 scaled with individual weights while all interior points are scaled the same.
 """
-struct ConstantInteriorScalingOperator{T,N} <: TensorMapping{T,1,1}
+struct ConstantInteriorScalingOperator{T,N} <: LazyTensor{T,1,1}
     interior_weight::T
     closure_weights::NTuple{N,T}
     size::Int
--- a/src/SbpOperators/volumeops/derivatives/first_derivative.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/derivatives/first_derivative.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,14 +1,14 @@
 """
     first_derivative(grid::EquidistantGrid, inner_stencil, closure_stencils, direction)
 
-Creates the first-derivative operator `D1` as a `TensorMapping`
+Creates the first-derivative operator `D1` as a `LazyTensor`
 
 `D1` approximates the first-derivative d/dξ on `grid` along the coordinate dimension specified by
 `direction`, using the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
 for the points in the closure regions.
 
 On a one-dimensional `grid`, `D1` is a `VolumeOperator`. On a multi-dimensional `grid`, `D1` is the outer product of the
-one-dimensional operator with the `IdentityMapping`s in orthogonal coordinate dirrections.
+one-dimensional operator with the `IdentityTensor`s in orthogonal coordinate directions.
 
 See also: [`volume_operator`](@ref).
 """
--- a/src/SbpOperators/volumeops/derivatives/second_derivative.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/derivatives/second_derivative.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,14 +1,14 @@
 """
     second_derivative(grid::EquidistantGrid, inner_stencil, closure_stencils, direction)
 
-Creates the second-derivative operator `D2` as a `TensorMapping`
+Creates the second-derivative operator `D2` as a `LazyTensor`
 
 `D2` approximates the second-derivative d²/dξ² on `grid` along the coordinate dimension specified by
 `direction`, using the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
 for the points in the closure regions.
 
 On a one-dimensional `grid`, `D2` is a `VolumeOperator`. On a multi-dimensional `grid`, `D2` is the outer product of the
-one-dimensional operator with the `IdentityMapping`s in orthogonal coordinate dirrections.
+one-dimensional operator with the `IdentityTensor`s in orthogonal coordinate directions.
 
 See also: [`volume_operator`](@ref).
 """
--- a/src/SbpOperators/volumeops/inner_products/inner_product.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/inner_products/inner_product.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,7 +1,7 @@
 """
     inner_product(grid::EquidistantGrid, interior_weight, closure_weights)
 
-Creates the discrete inner product operator `H` as a `TensorMapping` on an
+Creates the discrete inner product operator `H` as a `LazyTensor` on an
 equidistant grid, defined as `(u,v)  = u'Hv` for grid functions `u,v`.
 
 `inner_product` creates `H` on `grid` using the `interior_weight` for the
@@ -11,7 +11,7 @@
 On a 1-dimensional grid, `H` is a `ConstantInteriorScalingOperator`. On a
 N-dimensional grid, `H` is the outer product of the 1-dimensional inner
 product operators for each coordinate direction. On a 0-dimensional grid,
-`H` is a 0-dimensional `IdentityMapping`.
+`H` is a 0-dimensional `IdentityTensor`.
 
 See also: [`ConstantInteriorScalingOperator`](@ref).
 """
@@ -32,7 +32,7 @@
     return H
 end
 
-inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityMapping{eltype(grid)}()
+inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityTensor{eltype(grid)}()
 
 """
     inner_product(grid, stencil_set)
--- a/src/SbpOperators/volumeops/inner_products/inverse_inner_product.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/inner_products/inverse_inner_product.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,14 +1,14 @@
 """
     inverse_inner_product(grid::EquidistantGrid, interior_weight, closure_weights)
 
-Constructs the inverse inner product operator `H⁻¹` as a `TensorMapping` using
+Constructs the inverse inner product operator `H⁻¹` as a `LazyTensor` using
 the weights of `H`, `interior_weight`, `closure_weights`. `H⁻¹` is inverse of
 the inner product operator `H`.
 
 On a 1-dimensional grid, `H⁻¹` is a `ConstantInteriorScalingOperator`. On an
 N-dimensional grid, `H⁻¹` is the outer product of the 1-dimensional inverse
 inner product operators for each coordinate direction. On a 0-dimensional
-`grid`, `H⁻¹` is a 0-dimensional `IdentityMapping`. 
+`grid`, `H⁻¹` is a 0-dimensional `IdentityTensor`.
 
 See also: [`ConstantInteriorScalingOperator`](@ref).
 """
@@ -28,7 +28,7 @@
     return H⁻¹
 end
 
-inverse_inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityMapping{eltype(grid)}()
+inverse_inner_product(grid::EquidistantGrid{0}, interior_weight, closure_weights) = IdentityTensor{eltype(grid)}()
 
 """
     inverse_inner_product(grid, stencil_set)
--- a/src/SbpOperators/volumeops/laplace/laplace.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/laplace/laplace.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -1,11 +1,11 @@
 """
-    Laplace{T, Dim, TM} <: TensorMapping{T, Dim, Dim}
+    Laplace{T, Dim, TM} <: LazyTensor{T, Dim, Dim}
 
 Implements the Laplace operator, approximating ∑d²/xᵢ² , i = 1,...,`Dim` as a
-`TensorMapping`. Additionally `Laplace` stores the `StencilSet`
-used to construct the `TensorMapping`.
+`LazyTensor`. Additionally `Laplace` stores the `StencilSet`
+used to construct the `LazyTensor`.
 """
-struct Laplace{T, Dim, TM<:TensorMapping{T, Dim, Dim}} <: TensorMapping{T, Dim, Dim}
+struct Laplace{T, Dim, TM<:LazyTensor{T, Dim, Dim}} <: LazyTensor{T, Dim, Dim}
     D::TM       # Difference operator
     stencil_set::StencilSet # Stencil set of the operator
 end
@@ -28,13 +28,13 @@
 LazyTensors.domain_size(L::Laplace) = LazyTensors.domain_size(L.D)
 LazyTensors.apply(L::Laplace, v::AbstractArray, I...) = LazyTensors.apply(L.D,v,I...)
 
-# TODO: Implement pretty printing of Laplace once pretty printing of TensorMappings is implemented. 
+# TODO: Implement pretty printing of Laplace once pretty printing of LazyTensors is implemented.
 # Base.show(io::IO, L::Laplace) = ...
 
 """
     laplace(grid::EquidistantGrid, inner_stencil, closure_stencils)
 
-Creates the Laplace operator operator `Δ` as a `TensorMapping`
+Creates the Laplace operator `Δ` as a `LazyTensor`
 
 `Δ` approximates the Laplace operator ∑d²/xᵢ² , i = 1,...,`Dim` on `grid`, using
 the stencil `inner_stencil` in the interior and a set of stencils `closure_stencils`
--- a/src/SbpOperators/volumeops/volume_operator.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/src/SbpOperators/volumeops/volume_operator.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -6,7 +6,7 @@
 the stencils `inner_stencil` and `closure_stencils`. When `Dim=1`, the
 corresponding `VolumeOperator` tensor mapping is returned. When `Dim>1`, the
 returned operator is the appropriate outer product of a one-dimensional
-operators and `IdentityMapping`s, e.g for `Dim=3` the volume operator in the
+operators and `IdentityTensor`s, e.g. for `Dim=3` the volume operator in the
 y-direction is `I⊗op⊗I`.
 """
 function volume_operator(grid::EquidistantGrid, inner_stencil, closure_stencils, parity, direction)
@@ -14,9 +14,9 @@
 
     # Create 1D volume operator in along coordinate direction
     op = VolumeOperator(restrict(grid, direction), inner_stencil, closure_stencils, parity)
-    # Create 1D IdentityMappings for each coordinate direction
+    # Create 1D IdentityTensors for each coordinate direction
     one_d_grids = restrict.(Ref(grid), Tuple(1:dimension(grid)))
-    Is = IdentityMapping{eltype(grid)}.(size.(one_d_grids))
+    Is = IdentityTensor{eltype(grid)}.(size.(one_d_grids))
     # Formulate the correct outer product sequence of the identity mappings and
     # the volume operator
     parts = Base.setindex(Is, op, direction)
@@ -27,7 +27,7 @@
     VolumeOperator{T,N,M,K} <: TensorOperator{T,1}
 Implements a one-dimensional constant coefficients volume operator
 """
-struct VolumeOperator{T,N,M,K} <: TensorMapping{T,1,1}
+struct VolumeOperator{T,N,M,K} <: LazyTensor{T,1,1}
     inner_stencil::Stencil{T,N}
     closure_stencils::NTuple{M,Stencil{T,K}}
     size::NTuple{1,Int}
--- a/test/DiffOps/DiffOps_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/DiffOps/DiffOps_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -22,8 +22,8 @@
 #     v[:,2] = [7, 8, 9, 10]
 #     v[:,1] = [10, 11, 12, 13]
 #
-#     @test e_w  isa TensorMapping{T,2,1} where T
-#     @test e_w' isa TensorMapping{T,1,2} where T
+#     @test e_w  isa LazyTensor{T,2,1} where T
+#     @test e_w' isa LazyTensor{T,1,2} where T
 #
 #     @test domain_size(e_w, (3,2)) == (2,)
 #     @test domain_size(e_e, (3,2)) == (2,)
@@ -81,8 +81,8 @@
 #     v∂x = evalOn(g, (x,y)-> 2*x + y)
 #     v∂y = evalOn(g, (x,y)-> 2*(y-1) + x)
 #
-#     @test d_w  isa TensorMapping{T,2,1} where T
-#     @test d_w' isa TensorMapping{T,1,2} where T
+#     @test d_w  isa LazyTensor{T,2,1} where T
+#     @test d_w' isa LazyTensor{T,1,2} where T
 #
 #     @test domain_size(d_w, (3,2)) == (2,)
 #     @test domain_size(d_e, (3,2)) == (2,)
--- a/test/LazyTensors/lazy_array_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/LazyTensors/lazy_array_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -59,8 +59,6 @@
     end
     @test_throws BoundsError (v1 +̃  v2)[4]
     v2 = [1., 2, 3, 4]
-    # Test that size of arrays is asserted when not specified inbounds
-    # TODO: Replace these errors with SizeMismatch
     @test_throws DimensionMismatch v1 +̃ v2
 
     # Test operations on LazyArray
@@ -76,8 +74,6 @@
     end
     @test_throws BoundsError (v1 + v2)[4]
     v2 = [1., 2, 3, 4]
-    # Test that size of arrays is asserted when not specified inbounds
-    # TODO: Replace these errors with SizeMismatch
     @test_throws DimensionMismatch v1 + v2
 end
 
--- a/test/LazyTensors/lazy_tensor_operations_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/LazyTensors/lazy_tensor_operations_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -4,17 +4,28 @@
 
 using Tullio
 
-@testset "Mapping transpose" begin
-    struct DummyMapping{T,R,D} <: TensorMapping{T,R,D} end
+struct DummyMapping{T,R,D} <: LazyTensor{T,R,D} end
 
-    LazyTensors.apply(m::DummyMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = :apply
-    LazyTensors.apply_transpose(m::DummyMapping{T,R,D}, v, I::Vararg{Any,D}) where {T,R,D} = :apply_transpose
+LazyTensors.apply(m::DummyMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = :apply
+LazyTensors.apply_transpose(m::DummyMapping{T,R,D}, v, I::Vararg{Any,D}) where {T,R,D} = :apply_transpose
+
+LazyTensors.range_size(m::DummyMapping) = :range_size
+LazyTensors.domain_size(m::DummyMapping) = :domain_size
+
 
-    LazyTensors.range_size(m::DummyMapping) = :range_size
-    LazyTensors.domain_size(m::DummyMapping) = :domain_size
+struct SizeDoublingMapping{T,R,D} <: LazyTensor{T,R,D}
+    domain_size::NTuple{D,Int}
+end
 
+LazyTensors.apply(m::SizeDoublingMapping{T,R}, v, i::Vararg{Any,R}) where {T,R} = (:apply,v,i)
+LazyTensors.range_size(m::SizeDoublingMapping) = 2 .* m.domain_size
+LazyTensors.domain_size(m::SizeDoublingMapping) = m.domain_size
+
+
+
+@testset "Mapping transpose" begin
     m = DummyMapping{Float64,2,3}()
-    @test m' isa TensorMapping{Float64, 3,2}
+    @test m' isa LazyTensor{Float64, 3,2}
     @test m'' == m
     @test apply(m',zeros(Float64,(0,0)), 0, 0, 0) == :apply_transpose
     @test apply(m'',zeros(Float64,(0,0,0)), 0, 0) == :apply
@@ -24,91 +35,91 @@
     @test domain_size(m') == :range_size
 end
 
+
 @testset "TensorApplication" begin
-    struct SizeDoublingMapping{T,R,D} <: TensorMapping{T,R,D}
-        domain_size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::SizeDoublingMapping{T,R}, v, i::Vararg{Any,R}) where {T,R} = (:apply,v,i)
-    LazyTensors.range_size(m::SizeDoublingMapping) = 2 .* m.domain_size
-    LazyTensors.domain_size(m::SizeDoublingMapping) = m.domain_size
-
-
     m = SizeDoublingMapping{Int, 1, 1}((3,))
+    mm = SizeDoublingMapping{Int, 1, 1}((6,))
     v = [0,1,2]
     @test size(m*v) == 2 .*size(v)
-    @test (m*v)[0] == (:apply,v,(0,))
-    @test (m*m*v)[1] == (:apply,m*v,(1,))
-    @test (m*m*v)[3] == (:apply,m*v,(3,))
-    @test (m*m*v)[6] == (:apply,m*v,(6,))
-    @test_broken BoundsError == (m*m*v)[0]
-    @test_broken BoundsError == (m*m*v)[7]
+    @test (m*v)[1] == (:apply,v,(1,))
+    @test (mm*m*v)[1] == (:apply,m*v,(1,))
+    @test (mm*m*v)[3] == (:apply,m*v,(3,))
+    @test (mm*m*v)[6] == (:apply,m*v,(6,))
     @test_throws MethodError m*m
 
     @test (m*v)[CartesianIndex(2)] == (:apply,v,(2,))
-    @test (m*m*v)[CartesianIndex(2)] == (:apply,m*v,(2,))
-
-    m = SizeDoublingMapping{Int, 2, 1}((3,))
-    @test_throws MethodError m*ones(Int,2,2)
-    @test_throws MethodError m*m*v
+    @test (mm*m*v)[CartesianIndex(2)] == (:apply,m*v,(2,))
 
     m = SizeDoublingMapping{Float64, 2, 2}((3,3))
+    mm = SizeDoublingMapping{Float64, 2, 2}((6,6))
     v = ones(3,3)
     @test size(m*v) == 2 .*size(v)
     @test (m*v)[1,2] == (:apply,v,(1,2))
 
     @test (m*v)[CartesianIndex(2,3)] == (:apply,v,(2,3))
-    @test (m*m*v)[CartesianIndex(4,3)] == (:apply,m*v,(4,3))
+    @test (mm*m*v)[CartesianIndex(4,3)] == (:apply,m*v,(4,3))
 
-    struct ScalingOperator{T,D} <: TensorMapping{T,D,D}
-        λ::T
-        size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...]
-    LazyTensors.range_size(m::ScalingOperator) = m.size
-    LazyTensors.domain_size(m::ScalingOperator) = m.size
-
-    m = ScalingOperator{Int,1}(2,(3,))
+    m = ScalingTensor(2,(3,))
     v = [1,2,3]
     @test m*v isa AbstractVector
     @test m*v == [2,4,6]
 
-    m = ScalingOperator{Int,2}(2,(2,2))
+    m = ScalingTensor(2,(2,2))
     v = [[1 2];[3 4]]
     @test m*v == [[2 4];[6 8]]
     @test (m*v)[2,1] == 6
 
+    @testset "Error on index out of bounds" begin
+        m = SizeDoublingMapping{Int, 1, 1}((3,))
+        v = [0,1,2]
+
+        @test_throws BoundsError (m*v)[0]
+        @test_throws BoundsError (m*v)[7]
+    end
+
+    @testset "Error on unmatched dimensions" begin
+        v = [0,1,2]
+        m = SizeDoublingMapping{Int, 2, 1}((3,))
+        @test_throws MethodError m*ones(Int,2,2)
+        @test_throws MethodError m*m*v
+    end
+
+    @testset "Error on unmatched sizes" begin
+        @test_throws DomainSizeMismatch ScalingTensor(2,(2,))*ones(3)
+        @test_throws DomainSizeMismatch ScalingTensor(2,(2,))*ScalingTensor(2,(3,))*ones(3)
+    end
+
+
     @testset "Type calculation" begin
-        m = ScalingOperator{Int,1}(2,(3,))
+        m = ScalingTensor(2,(3,))
         v = [1.,2.,3.]
         @test m*v isa AbstractVector{Float64}
         @test m*v == [2.,4.,6.]
         @inferred m*v
         @inferred (m*v)[1]
 
-        m = ScalingOperator{Int,2}(2,(2,2))
+        m = ScalingTensor(2,(2,2))
         v = [[1. 2.];[3. 4.]]
         @test m*v == [[2. 4.];[6. 8.]]
         @test (m*v)[2,1] == 6.
         @inferred m*v
         @inferred (m*v)[1]
 
-        m = ScalingOperator{ComplexF64,1}(2. +2. *im,(3,))
+        m = ScalingTensor(2. +2. *im,(3,))
         v = [1.,2.,3.]
         @test m*v isa AbstractVector{ComplexF64}
         @test m*v == [2. + 2. *im, 4. + 4. *im, 6. + 6. *im]
         @inferred m*v
         @inferred (m*v)[1]
 
-        m = ScalingOperator{ComplexF64,1}(1,(3,))
+        m = ScalingTensor(1,(3,))
         v = [2. + 2. *im, 4. + 4. *im, 6. + 6. *im]
         @test m*v isa AbstractVector{ComplexF64}
         @test m*v == [2. + 2. *im, 4. + 4. *im, 6. + 6. *im]
         @inferred m*v
         @inferred (m*v)[1]
 
-        m = ScalingOperator{Float64,1}(2., (3,))
+        m = ScalingTensor(2., (3,))
         v = [[1,2,3], [3,2,1],[1,3,1]]
         @test m*v isa AbstractVector{Vector{Float64}}
         @test m*v == [[2.,4.,6.], [6.,4.,2.],[2.,6.,2.]]
@@ -117,19 +128,10 @@
     end
 end
 
-@testset "TensorMapping binary operations" begin
-    struct ScalarMapping{T,R,D} <: TensorMapping{T,R,D}
-        λ::T
-        range_size::NTuple{R,Int}
-        domain_size::NTuple{D,Int}
-    end
 
-    LazyTensors.apply(m::ScalarMapping{T,R}, v, I::Vararg{Any,R}) where {T,R} = m.λ*v[I...]
-    LazyTensors.range_size(m::ScalarMapping) = m.domain_size
-    LazyTensors.domain_size(m::ScalarMapping) = m.range_size
-
-    A = ScalarMapping{Float64,1,1}(2.0, (3,), (3,))
-    B = ScalarMapping{Float64,1,1}(3.0, (3,), (3,))
+@testset "LazyTensor binary operations" begin
+    A = ScalingTensor(2.0, (3,))
+    B = ScalingTensor(3.0, (3,))
 
     v = [1.1,1.2,1.3]
     for i ∈ eachindex(v)
@@ -140,24 +142,34 @@
         @test ((A-B)*v)[i] == 2*v[i] - 3*v[i]
     end
 
+
     @test range_size(A+B) == range_size(A) == range_size(B)
     @test domain_size(A+B) == domain_size(A) == domain_size(B)
 
     @test ((A+B)*ComplexF64[1.1,1.2,1.3])[3] isa ComplexF64
+
+    @testset "Error on unmatched sizes" begin
+        @test_throws Union{DomainSizeMismatch, RangeSizeMismatch} ScalingTensor(2.0, (3,)) + ScalingTensor(2.0, (4,))
+
+        @test_throws DomainSizeMismatch ScalingTensor(2.0, (4,)) + SizeDoublingMapping{Float64,1,1}((2,))
+        @test_throws DomainSizeMismatch SizeDoublingMapping{Float64,1,1}((2,)) + ScalingTensor(2.0, (4,))
+        @test_throws RangeSizeMismatch ScalingTensor(2.0, (2,)) + SizeDoublingMapping{Float64,1,1}((2,))
+        @test_throws RangeSizeMismatch SizeDoublingMapping{Float64,1,1}((2,)) + ScalingTensor(2.0, (2,))
+    end
 end
 
 
-@testset "TensorMappingComposition" begin
+@testset "TensorComposition" begin
     A = rand(2,3)
     B = rand(3,4)
 
-    Ã = LazyLinearMap(A, (1,), (2,))
-    B̃ = LazyLinearMap(B, (1,), (2,))
+    Ã = DenseTensor(A, (1,), (2,))
+    B̃ = DenseTensor(B, (1,), (2,))
 
-    @test Ã∘B̃ isa TensorMappingComposition
+    @test Ã∘B̃ isa TensorComposition
     @test range_size(Ã∘B̃) == (2,)
     @test domain_size(Ã∘B̃) == (4,)
-    @test_throws SizeMismatch B̃∘Ã
+    @test_throws DomainSizeMismatch B̃∘Ã
 
     # @test @inbounds B̃∘Ã # Should not error even though dimensions don't match. (Since ]test runs with forced boundschecking this is currently not testable 2020-10-16)
 
@@ -171,203 +183,102 @@
     @test ((Ã∘B̃)'*ComplexF64[1.,2.])[1] isa ComplexF64
 end
 
-@testset "LazyLinearMap" begin
-    # Test a standard matrix-vector product
-    # mapping vectors of size 4 to vectors of size 3.
-    A = rand(3,4)
-    Ã = LazyLinearMap(A, (1,), (2,))
-    v = rand(4)
-    w = rand(3)
 
-    @test Ã isa LazyLinearMap{T,1,1} where T
-    @test Ã isa TensorMapping{T,1,1} where T
-    @test range_size(Ã) == (3,)
-    @test domain_size(Ã) == (4,)
-
-    @test Ã*ones(4) ≈ A*ones(4) atol=5e-13
-    @test Ã*v ≈ A*v atol=5e-13
-    @test Ã'*w ≈ A'*w
-
-    A = rand(2,3,4)
-    @test_throws DomainError LazyLinearMap(A, (3,1), (2,))
-
-    # Test more exotic mappings
-    B = rand(3,4,2)
-    # Map vectors of size 2 to matrices of size (3,4)
-    B̃ = LazyLinearMap(B, (1,2), (3,))
-    v = rand(2)
-
-    @test range_size(B̃) == (3,4)
-    @test domain_size(B̃) == (2,)
-    @test B̃ isa TensorMapping{T,2,1} where T
-    @test B̃*ones(2) ≈ B[:,:,1] + B[:,:,2] atol=5e-13
-    @test B̃*v ≈ B[:,:,1]*v[1] + B[:,:,2]*v[2] atol=5e-13
-
-    # Map matrices of size (3,2) to vectors of size 4
-    B̃ = LazyLinearMap(B, (2,), (1,3))
-    v = rand(3,2)
-
-    @test range_size(B̃) == (4,)
-    @test domain_size(B̃) == (3,2)
-    @test B̃ isa TensorMapping{T,1,2} where T
-    @test B̃*ones(3,2) ≈ B[1,:,1] + B[2,:,1] + B[3,:,1] +
-                        B[1,:,2] + B[2,:,2] + B[3,:,2] atol=5e-13
-    @test B̃*v ≈ B[1,:,1]*v[1,1] + B[2,:,1]*v[2,1] + B[3,:,1]*v[3,1] +
-                B[1,:,2]v[1,2] + B[2,:,2]*v[2,2] + B[3,:,2]*v[3,2] atol=5e-13
-
-
-    # TODO:
-    # @inferred (B̃*v)[2]
-end
-
-
-@testset "IdentityMapping" begin
-    @test IdentityMapping{Float64}((4,5)) isa IdentityMapping{T,2} where T
-    @test IdentityMapping{Float64}((4,5)) isa TensorMapping{T,2,2} where T
-    @test IdentityMapping{Float64}((4,5)) == IdentityMapping{Float64}(4,5)
-
-    @test IdentityMapping(3,2) isa IdentityMapping{Float64,2}
-
-    for sz ∈ [(4,5),(3,),(5,6,4)]
-        I = IdentityMapping{Float64}(sz)
-        v = rand(sz...)
-        @test I*v == v
-        @test I'*v == v
-
-        v = rand(ComplexF64,sz...)
-        @test I*v == v
-        @test I'*v == v
-
-        @test range_size(I) == sz
-        @test domain_size(I) == sz
-    end
-
-    I = IdentityMapping{Float64}((4,5))
-    v = rand(4,5)
-    @inferred (I*v)[3,2]
-    @inferred (I'*v)[3,2]
-    @inferred range_size(I)
-
-    @inferred range_dim(I)
-    @inferred domain_dim(I)
-
-    Ã = rand(4,2)
-    A = LazyLinearMap(Ã,(1,),(2,))
-    I1 = IdentityMapping{Float64}(2)
-    I2 = IdentityMapping{Float64}(4)
-    @test A∘I1 == A
-    @test I2∘A == A
-    @test I1∘I1 == I1
-    @test_throws SizeMismatch I1∘A
-    @test_throws SizeMismatch A∘I2
-    @test_throws SizeMismatch I1∘I2
-end
-
-@testset "InflatedTensorMapping" begin
-    I(sz...) = IdentityMapping(sz...)
+@testset "InflatedTensor" begin
+    I(sz...) = IdentityTensor(sz...)
 
     Ã = rand(4,2)
     B̃ = rand(4,2,3)
     C̃ = rand(4,2,3)
 
-    A = LazyLinearMap(Ã,(1,),(2,))
-    B = LazyLinearMap(B̃,(1,2),(3,))
-    C = LazyLinearMap(C̃,(1,),(2,3))
+    A = DenseTensor(Ã,(1,),(2,))
+    B = DenseTensor(B̃,(1,2),(3,))
+    C = DenseTensor(C̃,(1,),(2,3))
 
     @testset "Constructors" begin
-        @test InflatedTensorMapping(I(3,2), A, I(4)) isa TensorMapping{Float64, 4, 4}
-        @test InflatedTensorMapping(I(3,2), B, I(4)) isa TensorMapping{Float64, 5, 4}
-        @test InflatedTensorMapping(I(3), C, I(2,3)) isa TensorMapping{Float64, 4, 5}
-        @test InflatedTensorMapping(C, I(2,3)) isa TensorMapping{Float64, 3, 4}
-        @test InflatedTensorMapping(I(3), C) isa TensorMapping{Float64, 2, 3}
-        @test InflatedTensorMapping(I(3), I(2,3)) isa TensorMapping{Float64, 3, 3}
+        @test InflatedTensor(I(3,2), A, I(4)) isa LazyTensor{Float64, 4, 4}
+        @test InflatedTensor(I(3,2), B, I(4)) isa LazyTensor{Float64, 5, 4}
+        @test InflatedTensor(I(3), C, I(2,3)) isa LazyTensor{Float64, 4, 5}
+        @test InflatedTensor(C, I(2,3)) isa LazyTensor{Float64, 3, 4}
+        @test InflatedTensor(I(3), C) isa LazyTensor{Float64, 2, 3}
+        @test InflatedTensor(I(3), I(2,3)) isa LazyTensor{Float64, 3, 3}
     end
 
     @testset "Range and domain size" begin
-        @test range_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,4,4)
-        @test domain_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,2,4)
+        @test range_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,4,4)
+        @test domain_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,2,4)
 
-        @test range_size(InflatedTensorMapping(I(3,2), B, I(4))) == (3,2,4,2,4)
-        @test domain_size(InflatedTensorMapping(I(3,2), B, I(4))) == (3,2,3,4)
+        @test range_size(InflatedTensor(I(3,2), B, I(4))) == (3,2,4,2,4)
+        @test domain_size(InflatedTensor(I(3,2), B, I(4))) == (3,2,3,4)
 
-        @test range_size(InflatedTensorMapping(I(3), C, I(2,3))) == (3,4,2,3)
-        @test domain_size(InflatedTensorMapping(I(3), C, I(2,3))) == (3,2,3,2,3)
+        @test range_size(InflatedTensor(I(3), C, I(2,3))) == (3,4,2,3)
+        @test domain_size(InflatedTensor(I(3), C, I(2,3))) == (3,2,3,2,3)
 
-        @inferred range_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,4,4)
-        @inferred domain_size(InflatedTensorMapping(I(3,2), A, I(4))) == (3,2,2,4)
+        @inferred range_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,4,4)
+        @inferred domain_size(InflatedTensor(I(3,2), A, I(4))) == (3,2,2,4)
     end
 
     @testset "Application" begin
         # Testing regular application and transposed application with inflation "before", "after" and "before and after".
         # The inflated tensor mappings are chosen to preserve, reduce and increase the dimension of the result compared to the input.
-        tests = [
+        cases = [
             (
-                InflatedTensorMapping(I(3,2), A, I(4)),
+                InflatedTensor(I(3,2), A, I(4)),
                 (v-> @tullio res[a,b,c,d] := Ã[c,i]*v[a,b,i,d]), # Expected result of apply
                 (v-> @tullio res[a,b,c,d] := Ã[i,c]*v[a,b,i,d]), # Expected result of apply_transpose
             ),
             (
-                InflatedTensorMapping(I(3,2), B, I(4)),
+                InflatedTensor(I(3,2), B, I(4)),
                 (v-> @tullio res[a,b,c,d,e] := B̃[c,d,i]*v[a,b,i,e]),
                 (v-> @tullio res[a,b,c,d] := B̃[i,j,c]*v[a,b,i,j,d]),
             ),
             (
-                InflatedTensorMapping(I(3,2), C, I(4)),
+                InflatedTensor(I(3,2), C, I(4)),
                 (v-> @tullio res[a,b,c,d] := C̃[c,i,j]*v[a,b,i,j,d]),
                 (v-> @tullio res[a,b,c,d,e] := C̃[i,c,d]*v[a,b,i,e]),
             ),
             (
-                InflatedTensorMapping(I(3,2), A),
+                InflatedTensor(I(3,2), A),
                 (v-> @tullio res[a,b,c] := Ã[c,i]*v[a,b,i]),
                 (v-> @tullio res[a,b,c] := Ã[i,c]*v[a,b,i]),
             ),
             (
-                InflatedTensorMapping(I(3,2), B),
+                InflatedTensor(I(3,2), B),
                 (v-> @tullio res[a,b,c,d] := B̃[c,d,i]*v[a,b,i]),
                 (v-> @tullio res[a,b,c] := B̃[i,j,c]*v[a,b,i,j]),
             ),
             (
-                InflatedTensorMapping(I(3,2), C),
+                InflatedTensor(I(3,2), C),
                 (v-> @tullio res[a,b,c] := C̃[c,i,j]*v[a,b,i,j]),
                 (v-> @tullio res[a,b,c,d] := C̃[i,c,d]*v[a,b,i]),
             ),
             (
-                InflatedTensorMapping(A,I(4)),
+                InflatedTensor(A,I(4)),
                 (v-> @tullio res[a,b] := Ã[a,i]*v[i,b]),
                 (v-> @tullio res[a,b] := Ã[i,a]*v[i,b]),
             ),
             (
-                InflatedTensorMapping(B,I(4)),
+                InflatedTensor(B,I(4)),
                 (v-> @tullio res[a,b,c] := B̃[a,b,i]*v[i,c]),
                 (v-> @tullio res[a,b] := B̃[i,j,a]*v[i,j,b]),
             ),
             (
-                InflatedTensorMapping(C,I(4)),
+                InflatedTensor(C,I(4)),
                 (v-> @tullio res[a,b] := C̃[a,i,j]*v[i,j,b]),
                 (v-> @tullio res[a,b,c] := C̃[i,a,b]*v[i,c]),
             ),
         ]
 
-        @testset "apply" begin
-            for i ∈ 1:length(tests)
-                tm = tests[i][1]
-                v = rand(domain_size(tm)...)
-                true_value = tests[i][2](v)
-                @test tm*v ≈ true_value rtol=1e-14
-            end
-        end
+        @testset "$tm" for (tm, true_apply, true_apply_transpose) ∈ cases
+            v = rand(domain_size(tm)...)
+            @test tm*v ≈ true_apply(v) rtol=1e-14
 
-        @testset "apply_transpose" begin
-            for i ∈ 1:length(tests)
-                tm = tests[i][1]
-                v = rand(range_size(tm)...)
-                true_value = tests[i][3](v)
-                @test tm'*v ≈ true_value rtol=1e-14
-            end
+            v = rand(range_size(tm)...)
+            @test tm'*v ≈ true_apply_transpose(v) rtol=1e-14
         end
 
         @testset "application to other type" begin
-            tm = InflatedTensorMapping(I(3,2), A, I(4))
+            tm = InflatedTensor(I(3,2), A, I(4))
 
             v = rand(ComplexF64, domain_size(tm)...)
             @test (tm*v)[1,2,3,1] isa ComplexF64
@@ -377,16 +288,7 @@
         end
 
         @testset "Inference of application" begin
-            struct ScalingOperator{T,D} <: TensorMapping{T,D,D}
-                λ::T
-                size::NTuple{D,Int}
-            end
-
-            LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...]
-            LazyTensors.range_size(m::ScalingOperator) = m.size
-            LazyTensors.domain_size(m::ScalingOperator) = m.size
-
-            tm = InflatedTensorMapping(I(2,3),ScalingOperator(2.0, (3,2)),I(3,4))
+            tm = InflatedTensor(I(2,3),ScalingTensor(2.0, (3,2)),I(3,4))
             v = rand(domain_size(tm)...)
 
             @inferred apply(tm,v,1,2,3,2,2,4)
@@ -394,94 +296,24 @@
         end
     end
 
-    @testset "InflatedTensorMapping of InflatedTensorMapping" begin
-        A = ScalingOperator(2.0,(2,3))
-        itm = InflatedTensorMapping(I(3,2), A, I(4))
-        @test  InflatedTensorMapping(I(4), itm, I(2)) == InflatedTensorMapping(I(4,3,2), A, I(4,2))
-        @test  InflatedTensorMapping(itm, I(2)) == InflatedTensorMapping(I(3,2), A, I(4,2))
-        @test  InflatedTensorMapping(I(4), itm) == InflatedTensorMapping(I(4,3,2), A, I(4))
+    @testset "InflatedTensor of InflatedTensor" begin
+        A = ScalingTensor(2.0,(2,3))
+        itm = InflatedTensor(I(3,2), A, I(4))
+        @test  InflatedTensor(I(4), itm, I(2)) == InflatedTensor(I(4,3,2), A, I(4,2))
+        @test  InflatedTensor(itm, I(2)) == InflatedTensor(I(3,2), A, I(4,2))
+        @test  InflatedTensor(I(4), itm) == InflatedTensor(I(4,3,2), A, I(4))
 
-        @test InflatedTensorMapping(I(2), I(2), I(2)) isa InflatedTensorMapping # The constructor should always return its type.
+        @test InflatedTensor(I(2), I(2), I(2)) isa InflatedTensor # The constructor should always return its type.
     end
 end
 
-@testset "split_index" begin
-    @test LazyTensors.split_index(Val(2),Val(1),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,5,6),(3,4))
-    @test LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,:,:,5,6),(3,4))
-    @test LazyTensors.split_index(Val(3),Val(1),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,5,6),(4,))
-    @test LazyTensors.split_index(Val(3),Val(2),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,:,5,6),(4,))
-    @test LazyTensors.split_index(Val(1),Val(1),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,4,5,6),(2,3))
-    @test LazyTensors.split_index(Val(1),Val(2),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,:,4,5,6),(2,3))
-
-    @test LazyTensors.split_index(Val(0),Val(1),Val(3),Val(3),1,2,3,4,5,6) == ((:,4,5,6),(1,2,3))
-    @test LazyTensors.split_index(Val(3),Val(1),Val(3),Val(0),1,2,3,4,5,6) == ((1,2,3,:),(4,5,6))
-
-    @inferred LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,2,2,4)
-end
-
-@testset "slice_tuple" begin
-    @test LazyTensors.slice_tuple((1,2,3),Val(1), Val(3)) == (1,2,3)
-    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(2), Val(5)) == (2,3,4,5)
-    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(1), Val(3)) == (1,2,3)
-    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(4), Val(6)) == (4,5,6)
-end
-
-@testset "split_tuple" begin
-    @testset "2 parts" begin
-        @test LazyTensors.split_tuple((),Val(0)) == ((),())
-        @test LazyTensors.split_tuple((1,),Val(0)) == ((),(1,))
-        @test LazyTensors.split_tuple((1,),Val(1)) == ((1,),())
-
-        @test LazyTensors.split_tuple((1,2,3,4),Val(0)) == ((),(1,2,3,4))
-        @test LazyTensors.split_tuple((1,2,3,4),Val(1)) == ((1,),(2,3,4))
-        @test LazyTensors.split_tuple((1,2,3,4),Val(2)) == ((1,2),(3,4))
-        @test LazyTensors.split_tuple((1,2,3,4),Val(3)) == ((1,2,3),(4,))
-        @test LazyTensors.split_tuple((1,2,3,4),Val(4)) == ((1,2,3,4),())
-
-        @test LazyTensors.split_tuple((1,2,true,4),Val(3)) == ((1,2,true),(4,))
-
-        @inferred LazyTensors.split_tuple((1,2,3,4),Val(3))
-        @inferred LazyTensors.split_tuple((1,2,true,4),Val(3))
-    end
-
-    @testset "3 parts" begin
-        @test LazyTensors.split_tuple((),Val(0),Val(0)) == ((),(),())
-        @test LazyTensors.split_tuple((1,2,3),Val(1), Val(1)) == ((1,),(2,),(3,))
-        @test LazyTensors.split_tuple((1,true,3),Val(1), Val(1)) == ((1,),(true,),(3,))
-
-        @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(1),Val(2)) == ((1,),(2,3),(4,5,6))
-        @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) == ((1,2,3),(4,5),(6,))
-
-        @inferred LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2))
-        @inferred LazyTensors.split_tuple((1,true,3),Val(1), Val(1))
-    end
-end
-
-@testset "flatten_tuple" begin
-    @test LazyTensors.flatten_tuple((1,)) == (1,)
-    @test LazyTensors.flatten_tuple((1,2,3,4,5,6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple((1,2,(3,4),5,6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple((1,2,(3,(4,5)),6)) == (1,2,3,4,5,6)
-    @test LazyTensors.flatten_tuple(((1,2),(3,4),(5,),6)) == (1,2,3,4,5,6)
-end
-
-
 @testset "LazyOuterProduct" begin
-    struct ScalingOperator{T,D} <: TensorMapping{T,D,D}
-        λ::T
-        size::NTuple{D,Int}
-    end
-
-    LazyTensors.apply(m::ScalingOperator{T,D}, v, I::Vararg{Any,D}) where {T,D} = m.λ*v[I...]
-    LazyTensors.range_size(m::ScalingOperator) = m.size
-    LazyTensors.domain_size(m::ScalingOperator) = m.size
-
-    A = ScalingOperator(2.0, (5,))
-    B = ScalingOperator(3.0, (3,))
-    C = ScalingOperator(5.0, (3,2))
+    A = ScalingTensor(2.0, (5,))
+    B = ScalingTensor(3.0, (3,))
+    C = ScalingTensor(5.0, (3,2))
 
     AB = LazyOuterProduct(A,B)
-    @test AB isa TensorMapping{T,2,2} where T
+    @test AB isa LazyTensor{T,2,2} where T
     @test range_size(AB) == (5,3)
     @test domain_size(AB) == (5,3)
 
@@ -490,7 +322,7 @@
 
     ABC = LazyOuterProduct(A,B,C)
 
-    @test ABC isa TensorMapping{T,4,4} where T
+    @test ABC isa LazyTensor{T,4,4} where T
     @test range_size(ABC) == (5,3,3,2)
     @test domain_size(ABC) == (5,3,3,2)
 
@@ -503,8 +335,8 @@
     v₁ = rand(2,4,3)
     v₂ = rand(4,3,2)
 
-    Ã = LazyLinearMap(A,(1,),(2,))
-    B̃ = LazyLinearMap(B,(1,),(2,3))
+    Ã = DenseTensor(A,(1,),(2,))
+    B̃ = DenseTensor(B,(1,),(2,3))
 
     ÃB̃ = LazyOuterProduct(Ã,B̃)
     @tullio ABv[i,k] := A[i,j]*B[k,l,m]*v₁[j,l,m]
@@ -515,15 +347,14 @@
     @test B̃Ã*v₂ ≈ BAv
 
     @testset "Identity mapping arguments" begin
-        @test LazyOuterProduct(IdentityMapping(3,2), IdentityMapping(1,2)) == IdentityMapping(3,2,1,2)
+        @test LazyOuterProduct(IdentityTensor(3,2), IdentityTensor(1,2)) == IdentityTensor(3,2,1,2)
 
-        Ã = LazyLinearMap(A,(1,),(2,))
-        @test LazyOuterProduct(IdentityMapping(3,2), Ã) == InflatedTensorMapping(IdentityMapping(3,2),Ã)
-        @test LazyOuterProduct(Ã, IdentityMapping(3,2)) == InflatedTensorMapping(Ã,IdentityMapping(3,2))
+        Ã = DenseTensor(A,(1,),(2,))
+        @test LazyOuterProduct(IdentityTensor(3,2), Ã) == InflatedTensor(IdentityTensor(3,2),Ã)
+        @test LazyOuterProduct(Ã, IdentityTensor(3,2)) == InflatedTensor(Ã,IdentityTensor(3,2))
 
-        I1 = IdentityMapping(3,2)
-        I2 = IdentityMapping(4)
-        @test I1⊗Ã⊗I2 == InflatedTensorMapping(I1, Ã, I2)
+        I1 = IdentityTensor(3,2)
+        I2 = IdentityTensor(4)
+        @test I1⊗Ã⊗I2 == InflatedTensor(I1, Ã, I2)
     end
-
 end
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/test/LazyTensors/lazy_tensor_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,12 @@
+using Test
+using Sbplib.LazyTensors
+
+@testset "Generic Mapping methods" begin
+    struct DummyMapping{T,R,D} <: LazyTensor{T,R,D} end
+    LazyTensors.apply(m::DummyMapping{T,R,D}, v, I::Vararg{Any,R}) where {T,R,D} = :apply
+    @test range_dim(DummyMapping{Int,2,3}()) == 2
+    @test domain_dim(DummyMapping{Int,2,3}()) == 3
+    @test apply(DummyMapping{Int,2,3}(), zeros(Int, (0,0,0)),0,0) == :apply
+    @test eltype(DummyMapping{Int,2,3}()) == Int
+    @test eltype(DummyMapping{Float64,2,3}()) == Float64
+end
--- a/test/LazyTensors/tensor_mapping_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,12 +0,0 @@
-using Test
-using Sbplib.LazyTensors
-
-@testset "Generic Mapping methods" begin
-    struct DummyMapping{T,R,D} <: TensorMapping{T,R,D} end
-    LazyTensors.apply(m::DummyMapping{T,R,D}, v, I::Vararg{Any,R}) where {T,R,D} = :apply
-    @test range_dim(DummyMapping{Int,2,3}()) == 2
-    @test domain_dim(DummyMapping{Int,2,3}()) == 3
-    @test apply(DummyMapping{Int,2,3}(), zeros(Int, (0,0,0)),0,0) == :apply
-    @test eltype(DummyMapping{Int,2,3}()) == Int
-    @test eltype(DummyMapping{Float64,2,3}()) == Float64
-end
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/test/LazyTensors/tensor_types_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,109 @@
+using Test
+using Sbplib.LazyTensors
+
+@testset "IdentityTensor" begin
+    @test IdentityTensor{Float64}((4,5)) isa IdentityTensor{T,2} where T
+    @test IdentityTensor{Float64}((4,5)) isa LazyTensor{T,2,2} where T
+    @test IdentityTensor{Float64}((4,5)) == IdentityTensor{Float64}(4,5)
+
+    @test IdentityTensor(3,2) isa IdentityTensor{Float64,2}
+
+    for sz ∈ [(4,5),(3,),(5,6,4)]
+        I = IdentityTensor{Float64}(sz)
+        v = rand(sz...)
+        @test I*v == v
+        @test I'*v == v
+
+        v = rand(ComplexF64,sz...)
+        @test I*v == v
+        @test I'*v == v
+
+        @test range_size(I) == sz
+        @test domain_size(I) == sz
+    end
+
+    I = IdentityTensor{Float64}((4,5))
+    v = rand(4,5)
+    @inferred (I*v)[3,2]
+    @inferred (I'*v)[3,2]
+    @inferred range_size(I)
+
+    @inferred range_dim(I)
+    @inferred domain_dim(I)
+
+    Ã = rand(4,2)
+    A = DenseTensor(Ã,(1,),(2,))
+    I1 = IdentityTensor{Float64}(2)
+    I2 = IdentityTensor{Float64}(4)
+    @test A∘I1 == A
+    @test I2∘A == A
+    @test I1∘I1 == I1
+    @test_throws DomainSizeMismatch I1∘A
+    @test_throws DomainSizeMismatch A∘I2
+    @test_throws DomainSizeMismatch I1∘I2
+end
+
+
+@testset "ScalingTensor" begin
+    st = ScalingTensor(2.,(3,4))
+    @test st isa LazyTensor{Float64, 2, 2}
+    @test range_size(st) == (3,4)
+    @test domain_size(st) == (3,4)
+
+    v = rand(3,4)
+    @test st*v == 2.0 .* v
+    @test st'*v == 2.0 .* v
+
+    @inferred (st*v)[2,2]
+    @inferred (st'*v)[2,2]
+end
+
+
+@testset "DenseTensor" begin
+    # Test a standard matrix-vector product
+    # mapping vectors of size 4 to vectors of size 3.
+    A = rand(3,4)
+    Ã = DenseTensor(A, (1,), (2,))
+    v = rand(4)
+    w = rand(3)
+
+    @test Ã isa DenseTensor{T,1,1} where T
+    @test Ã isa LazyTensor{T,1,1} where T
+    @test range_size(Ã) == (3,)
+    @test domain_size(Ã) == (4,)
+
+    @test Ã*ones(4) ≈ A*ones(4) atol=5e-13
+    @test Ã*v ≈ A*v atol=5e-13
+    @test Ã'*w ≈ A'*w
+
+    A = rand(2,3,4)
+    @test_throws DomainError DenseTensor(A, (3,1), (2,))
+
+    # Test more exotic mappings
+    B = rand(3,4,2)
+    # Map vectors of size 2 to matrices of size (3,4)
+    B̃ = DenseTensor(B, (1,2), (3,))
+    v = rand(2)
+
+    @test range_size(B̃) == (3,4)
+    @test domain_size(B̃) == (2,)
+    @test B̃ isa LazyTensor{T,2,1} where T
+    @test B̃*ones(2) ≈ B[:,:,1] + B[:,:,2] atol=5e-13
+    @test B̃*v ≈ B[:,:,1]*v[1] + B[:,:,2]*v[2] atol=5e-13
+
+    # Map matrices of size (3,2) to vectors of size 4
+    B̃ = DenseTensor(B, (2,), (1,3))
+    v = rand(3,2)
+
+    @test range_size(B̃) == (4,)
+    @test domain_size(B̃) == (3,2)
+    @test B̃ isa LazyTensor{T,1,2} where T
+    @test B̃*ones(3,2) ≈ B[1,:,1] + B[2,:,1] + B[3,:,1] +
+                        B[1,:,2] + B[2,:,2] + B[3,:,2] atol=5e-13
+    @test B̃*v ≈ B[1,:,1]*v[1,1] + B[2,:,1]*v[2,1] + B[3,:,1]*v[3,1] +
+                B[1,:,2]*v[1,2] + B[2,:,2]*v[2,2] + B[3,:,2]*v[3,2] atol=5e-13
+
+
+    # TODO:
+    # @inferred (B̃*v)[2]
+end
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/test/LazyTensors/tuple_manipulation_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -0,0 +1,62 @@
+using Test
+using Sbplib.LazyTensors
+
+@testset "split_index" begin
+    @test LazyTensors.split_index(Val(2),Val(1),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,5,6),(3,4))
+    @test LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,4,5,6) == ((1,2,:,:,:,5,6),(3,4))
+    @test LazyTensors.split_index(Val(3),Val(1),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,5,6),(4,))
+    @test LazyTensors.split_index(Val(3),Val(2),Val(1),Val(2),1,2,3,4,5,6) == ((1,2,3,:,:,5,6),(4,))
+    @test LazyTensors.split_index(Val(1),Val(1),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,4,5,6),(2,3))
+    @test LazyTensors.split_index(Val(1),Val(2),Val(2),Val(3),1,2,3,4,5,6) == ((1,:,:,4,5,6),(2,3))
+
+    @test LazyTensors.split_index(Val(0),Val(1),Val(3),Val(3),1,2,3,4,5,6) == ((:,4,5,6),(1,2,3))
+    @test LazyTensors.split_index(Val(3),Val(1),Val(3),Val(0),1,2,3,4,5,6) == ((1,2,3,:),(4,5,6))
+
+    @inferred LazyTensors.split_index(Val(2),Val(3),Val(2),Val(2),1,2,3,2,2,4)
+end
+
+@testset "slice_tuple" begin
+    @test LazyTensors.slice_tuple((1,2,3),Val(1), Val(3)) == (1,2,3)
+    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(2), Val(5)) == (2,3,4,5)
+    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(1), Val(3)) == (1,2,3)
+    @test LazyTensors.slice_tuple((1,2,3,4,5,6),Val(4), Val(6)) == (4,5,6)
+end
+
+@testset "split_tuple" begin
+    @testset "2 parts" begin
+        @test LazyTensors.split_tuple((),Val(0)) == ((),())
+        @test LazyTensors.split_tuple((1,),Val(0)) == ((),(1,))
+        @test LazyTensors.split_tuple((1,),Val(1)) == ((1,),())
+
+        @test LazyTensors.split_tuple((1,2,3,4),Val(0)) == ((),(1,2,3,4))
+        @test LazyTensors.split_tuple((1,2,3,4),Val(1)) == ((1,),(2,3,4))
+        @test LazyTensors.split_tuple((1,2,3,4),Val(2)) == ((1,2),(3,4))
+        @test LazyTensors.split_tuple((1,2,3,4),Val(3)) == ((1,2,3),(4,))
+        @test LazyTensors.split_tuple((1,2,3,4),Val(4)) == ((1,2,3,4),())
+
+        @test LazyTensors.split_tuple((1,2,true,4),Val(3)) == ((1,2,true),(4,))
+
+        @inferred LazyTensors.split_tuple((1,2,3,4),Val(3))
+        @inferred LazyTensors.split_tuple((1,2,true,4),Val(3))
+    end
+
+    @testset "3 parts" begin
+        @test LazyTensors.split_tuple((),Val(0),Val(0)) == ((),(),())
+        @test LazyTensors.split_tuple((1,2,3),Val(1), Val(1)) == ((1,),(2,),(3,))
+        @test LazyTensors.split_tuple((1,true,3),Val(1), Val(1)) == ((1,),(true,),(3,))
+
+        @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(1),Val(2)) == ((1,),(2,3),(4,5,6))
+        @test LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2)) == ((1,2,3),(4,5),(6,))
+
+        @inferred LazyTensors.split_tuple((1,2,3,4,5,6),Val(3),Val(2))
+        @inferred LazyTensors.split_tuple((1,true,3),Val(1), Val(1))
+    end
+end
+
+@testset "flatten_tuple" begin
+    @test LazyTensors.flatten_tuple((1,)) == (1,)
+    @test LazyTensors.flatten_tuple((1,2,3,4,5,6)) == (1,2,3,4,5,6)
+    @test LazyTensors.flatten_tuple((1,2,(3,4),5,6)) == (1,2,3,4,5,6)
+    @test LazyTensors.flatten_tuple((1,2,(3,(4,5)),6)) == (1,2,3,4,5,6)
+    @test LazyTensors.flatten_tuple(((1,2),(3,4),(5,),6)) == (1,2,3,4,5,6)
+end
--- a/test/SbpOperators/boundaryops/boundary_operator_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/boundaryops/boundary_operator_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -18,18 +18,18 @@
             op_l = BoundaryOperator{Lower}(closure_stencil,size(g_1D)[1])
             @test op_l == BoundaryOperator(g_1D,closure_stencil,Lower())
             @test op_l == boundary_operator(g_1D,closure_stencil,CartesianBoundary{1,Lower}())
-            @test op_l isa TensorMapping{T,0,1} where T
+            @test op_l isa LazyTensor{T,0,1} where T
 
             op_r = BoundaryOperator{Upper}(closure_stencil,size(g_1D)[1])
             @test op_r == BoundaryOperator(g_1D,closure_stencil,Upper())
             @test op_r == boundary_operator(g_1D,closure_stencil,CartesianBoundary{1,Upper}())
-            @test op_r isa TensorMapping{T,0,1} where T
+            @test op_r isa LazyTensor{T,0,1} where T
         end
 
         @testset "2D" begin
             e_w = boundary_operator(g_2D,closure_stencil,CartesianBoundary{1,Upper}())
-            @test e_w isa InflatedTensorMapping
-            @test e_w isa TensorMapping{T,1,2} where T
+            @test e_w isa InflatedTensor
+            @test e_w isa LazyTensor{T,1,2} where T
         end
     end
     op_l, op_r = boundary_operator.(Ref(g_1D), Ref(closure_stencil), boundary_identifiers(g_1D))
--- a/test/SbpOperators/boundaryops/boundary_restriction_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/boundaryops/boundary_restriction_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -18,20 +18,20 @@
             @test e_l == boundary_restriction(g_1D,stencil_set,CartesianBoundary{1,Lower}())
             @test e_l == BoundaryOperator(g_1D,Stencil{Float64}(e_closure),Lower())
             @test e_l isa BoundaryOperator{T,Lower} where T
-            @test e_l isa TensorMapping{T,0,1} where T
+            @test e_l isa LazyTensor{T,0,1} where T
 
             e_r = boundary_restriction(g_1D,e_closure,CartesianBoundary{1,Upper}())
             @test e_r == boundary_restriction(g_1D,stencil_set,CartesianBoundary{1,Upper}())
             @test e_r == BoundaryOperator(g_1D,Stencil{Float64}(e_closure),Upper())
             @test e_r isa BoundaryOperator{T,Upper} where T
-            @test e_r isa TensorMapping{T,0,1} where T
+            @test e_r isa LazyTensor{T,0,1} where T
         end
 
         @testset "2D" begin
             e_w = boundary_restriction(g_2D,e_closure,CartesianBoundary{1,Upper}())
             @test e_w == boundary_restriction(g_2D,stencil_set,CartesianBoundary{1,Upper}())
-            @test e_w isa InflatedTensorMapping
-            @test e_w isa TensorMapping{T,1,2} where T
+            @test e_w isa InflatedTensor
+            @test e_w isa LazyTensor{T,1,2} where T
         end
     end
 
--- a/test/SbpOperators/boundaryops/normal_derivative_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/boundaryops/normal_derivative_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -16,20 +16,20 @@
             d_l = normal_derivative(g_1D, d_closure, CartesianBoundary{1,Lower}())
             @test d_l == normal_derivative(g_1D, stencil_set, CartesianBoundary{1,Lower}())
             @test d_l isa BoundaryOperator{T,Lower} where T
-            @test d_l isa TensorMapping{T,0,1} where T
+            @test d_l isa LazyTensor{T,0,1} where T
         end
         @testset "2D" begin
             d_w = normal_derivative(g_2D, d_closure, CartesianBoundary{1,Lower}())
             d_n = normal_derivative(g_2D, d_closure, CartesianBoundary{2,Upper}())
-            Ix = IdentityMapping{Float64}((size(g_2D)[1],))
-            Iy = IdentityMapping{Float64}((size(g_2D)[2],))
+            Ix = IdentityTensor{Float64}((size(g_2D)[1],))
+            Iy = IdentityTensor{Float64}((size(g_2D)[2],))
             d_l = normal_derivative(restrict(g_2D,1),d_closure,CartesianBoundary{1,Lower}())
             d_r = normal_derivative(restrict(g_2D,2),d_closure,CartesianBoundary{1,Upper}())
             @test d_w == normal_derivative(g_2D, stencil_set, CartesianBoundary{1,Lower}())
             @test d_w ==  d_l⊗Iy
             @test d_n ==  Ix⊗d_r
-            @test d_w isa TensorMapping{T,1,2} where T
-            @test d_n isa TensorMapping{T,1,2} where T
+            @test d_w isa LazyTensor{T,1,2} where T
+            @test d_n isa LazyTensor{T,1,2} where T
         end
     end
     @testset "Accuracy" begin
--- a/test/SbpOperators/volumeops/derivatives/first_derivative_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/derivatives/first_derivative_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -27,17 +27,17 @@
         g₁ = EquidistantGrid(11, 0., 1.)
         g₂ = EquidistantGrid((11,14), (0.,1.), (1.,3.))
         
-        @test first_derivative(g₁, stencil_set, 1) isa TensorMapping{Float64,1,1}
-        @test first_derivative(g₂, stencil_set, 2) isa TensorMapping{Float64,2,2}
+        @test first_derivative(g₁, stencil_set, 1) isa LazyTensor{Float64,1,1}
+        @test first_derivative(g₂, stencil_set, 2) isa LazyTensor{Float64,2,2}
         @test first_derivative(g₁, stencil_set, 1) == first_derivative(g₁, stencil_set)
 
         interior_stencil = CenteredStencil(-1,0,1)
         closure_stencils = [Stencil(-1,1, center=1)]
 
-        @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa TensorMapping{Float64,1,1}
+        @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa LazyTensor{Float64,1,1}
         @test first_derivative(g₁, interior_stencil, closure_stencils, 1) isa VolumeOperator
         @test first_derivative(g₁, interior_stencil, closure_stencils, 1) == first_derivative(g₁, interior_stencil, closure_stencils)
-        @test first_derivative(g₂, interior_stencil, closure_stencils, 2) isa TensorMapping{Float64,2,2}
+        @test first_derivative(g₂, interior_stencil, closure_stencils, 2) isa LazyTensor{Float64,2,2}
     end
 
     @testset "Accuracy conditions" begin
--- a/test/SbpOperators/volumeops/derivatives/second_derivative_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/derivatives/second_derivative_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -27,10 +27,10 @@
         @testset "2D" begin
             Dₓₓ = second_derivative(g_2D,inner_stencil,closure_stencils,1)
             D2 = second_derivative(g_1D,inner_stencil,closure_stencils,1)
-            I = IdentityMapping{Float64}(size(g_2D)[2])
+            I = IdentityTensor{Float64}(size(g_2D)[2])
             @test Dₓₓ == D2⊗I
             @test Dₓₓ == second_derivative(g_2D,stencil_set,1)
-            @test Dₓₓ isa TensorMapping{T,2,2} where T
+            @test Dₓₓ isa LazyTensor{T,2,2} where T
         end
     end
 
--- a/test/SbpOperators/volumeops/inner_products/inner_product_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/inner_products/inner_product_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -21,14 +21,14 @@
         @testset "0D" begin
             H = inner_product(EquidistantGrid{Float64}(), quadrature_interior, quadrature_closure)
             @test H == inner_product(EquidistantGrid{Float64}(), stencil_set)
-            @test H == IdentityMapping{Float64}()
-            @test H isa TensorMapping{T,0,0} where T
+            @test H == IdentityTensor{Float64}()
+            @test H isa LazyTensor{T,0,0} where T
         end
         @testset "1D" begin
             H = inner_product(g_1D, quadrature_interior, quadrature_closure)
             @test H == inner_product(g_1D, stencil_set)
             @test H isa ConstantInteriorScalingOperator
-            @test H isa TensorMapping{T,1,1} where T
+            @test H isa LazyTensor{T,1,1} where T
         end
         @testset "2D" begin
             H = inner_product(g_2D, quadrature_interior, quadrature_closure)
@@ -36,7 +36,7 @@
             H_y = inner_product(restrict(g_2D,2), quadrature_interior, quadrature_closure)
             @test H == inner_product(g_2D, stencil_set)
             @test H == H_x⊗H_y
-            @test H isa TensorMapping{T,2,2} where T
+            @test H isa LazyTensor{T,2,2} where T
         end
     end
 
--- a/test/SbpOperators/volumeops/inner_products/inverse_inner_product_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/inner_products/inverse_inner_product_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -18,14 +18,14 @@
         @testset "0D" begin
             Hi = inverse_inner_product(EquidistantGrid{Float64}(), quadrature_interior, quadrature_closure)
             @test Hi == inverse_inner_product(EquidistantGrid{Float64}(), stencil_set)
-            @test Hi == IdentityMapping{Float64}()
-            @test Hi isa TensorMapping{T,0,0} where T
+            @test Hi == IdentityTensor{Float64}()
+            @test Hi isa LazyTensor{T,0,0} where T
         end
         @testset "1D" begin
             Hi = inverse_inner_product(g_1D,  quadrature_interior, quadrature_closure)
             @test Hi == inverse_inner_product(g_1D, stencil_set)
             @test Hi isa ConstantInteriorScalingOperator
-            @test Hi isa TensorMapping{T,1,1} where T
+            @test Hi isa LazyTensor{T,1,1} where T
         end
         @testset "2D" begin
             Hi = inverse_inner_product(g_2D, quadrature_interior, quadrature_closure)
@@ -33,7 +33,7 @@
             Hi_y = inverse_inner_product(restrict(g_2D,2), quadrature_interior, quadrature_closure)
             @test Hi == inverse_inner_product(g_2D, stencil_set)
             @test Hi == Hi_x⊗Hi_y
-            @test Hi isa TensorMapping{T,2,2} where T
+            @test Hi isa LazyTensor{T,2,2} where T
         end
     end
 
--- a/test/SbpOperators/volumeops/laplace/laplace_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/laplace/laplace_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -17,12 +17,12 @@
         @testset "1D" begin
             Δ = laplace(g_1D, inner_stencil, closure_stencils)
             @test Laplace(g_1D, stencil_set) == Laplace(Δ, stencil_set)
-            @test Laplace(g_1D, stencil_set) isa TensorMapping{T,1,1}  where T
+            @test Laplace(g_1D, stencil_set) isa LazyTensor{T,1,1}  where T
         end
         @testset "3D" begin
             Δ = laplace(g_3D, inner_stencil, closure_stencils)
             @test Laplace(g_3D, stencil_set) == Laplace(Δ,stencil_set)
-            @test Laplace(g_3D, stencil_set) isa TensorMapping{T,3,3} where T
+            @test Laplace(g_3D, stencil_set) isa LazyTensor{T,3,3} where T
         end
     end
 
@@ -70,16 +70,16 @@
     @testset "1D" begin
         Δ = laplace(g_1D, inner_stencil, closure_stencils)
         @test Δ == second_derivative(g_1D, inner_stencil, closure_stencils, 1)
-        @test Δ isa TensorMapping{T,1,1}  where T
+        @test Δ isa LazyTensor{T,1,1}  where T
     end
     @testset "3D" begin
         Δ = laplace(g_3D, inner_stencil, closure_stencils)
-        @test Δ isa TensorMapping{T,3,3} where T
+        @test Δ isa LazyTensor{T,3,3} where T
         Dxx = second_derivative(g_3D, inner_stencil, closure_stencils, 1)
         Dyy = second_derivative(g_3D, inner_stencil, closure_stencils, 2)
         Dzz = second_derivative(g_3D, inner_stencil, closure_stencils, 3)
         @test Δ == Dxx + Dyy + Dzz
-        @test Δ isa TensorMapping{T,3,3} where T
+        @test Δ isa LazyTensor{T,3,3} where T
     end
 end
 
--- a/test/SbpOperators/volumeops/volume_operator_test.jl	Tue Mar 22 14:14:31 2022 +0100
+++ b/test/SbpOperators/volumeops/volume_operator_test.jl	Tue Mar 22 14:33:13 2022 +0100
@@ -22,31 +22,31 @@
             op = VolumeOperator(inner_stencil,closure_stencils,(11,),even)
             @test op == VolumeOperator(g_1D,inner_stencil,closure_stencils,even)
             @test op == volume_operator(g_1D,inner_stencil,closure_stencils,even,1)
-            @test op isa TensorMapping{T,1,1} where T
+            @test op isa LazyTensor{T,1,1} where T
         end
         @testset "2D" begin
             op_x = volume_operator(g_2D,inner_stencil,closure_stencils,even,1)
             op_y = volume_operator(g_2D,inner_stencil,closure_stencils,even,2)
-            Ix = IdentityMapping{Float64}((11,))
-            Iy = IdentityMapping{Float64}((12,))
+            Ix = IdentityTensor{Float64}((11,))
+            Iy = IdentityTensor{Float64}((12,))
             @test op_x == VolumeOperator(inner_stencil,closure_stencils,(11,),even)⊗Iy
             @test op_y == Ix⊗VolumeOperator(inner_stencil,closure_stencils,(12,),even)
-            @test op_x isa TensorMapping{T,2,2} where T
-            @test op_y isa TensorMapping{T,2,2} where T
+            @test op_x isa LazyTensor{T,2,2} where T
+            @test op_y isa LazyTensor{T,2,2} where T
         end
         @testset "3D" begin
             op_x = volume_operator(g_3D,inner_stencil,closure_stencils,even,1)
             op_y = volume_operator(g_3D,inner_stencil,closure_stencils,even,2)
             op_z = volume_operator(g_3D,inner_stencil,closure_stencils,even,3)
-            Ix = IdentityMapping{Float64}((11,))
-            Iy = IdentityMapping{Float64}((12,))
-            Iz = IdentityMapping{Float64}((10,))
+            Ix = IdentityTensor{Float64}((11,))
+            Iy = IdentityTensor{Float64}((12,))
+            Iz = IdentityTensor{Float64}((10,))
             @test op_x == VolumeOperator(inner_stencil,closure_stencils,(11,),even)⊗Iy⊗Iz
             @test op_y == Ix⊗VolumeOperator(inner_stencil,closure_stencils,(12,),even)⊗Iz
             @test op_z == Ix⊗Iy⊗VolumeOperator(inner_stencil,closure_stencils,(10,),even)
-            @test op_x isa TensorMapping{T,3,3} where T
-            @test op_y isa TensorMapping{T,3,3} where T
-            @test op_z isa TensorMapping{T,3,3} where T
+            @test op_x isa LazyTensor{T,3,3} where T
+            @test op_y isa LazyTensor{T,3,3} where T
+            @test op_z isa LazyTensor{T,3,3} where T
         end
     end