diff Notes.md @ 1785:96f8cad255b4 feature/sbp_operators/laplace_curvilinear

Merge feature/grids/manifolds
author Jonatan Werpers <jonatan@werpers.com>
date Mon, 16 Sep 2024 10:33:47 +0200
parents fe058a0ebd97
children
--- a/Notes.md	Wed Sep 11 16:26:19 2024 +0200
+++ b/Notes.md	Mon Sep 16 10:33:47 2024 +0200
@@ -118,19 +118,10 @@
 Maybe, if we should have dynamic sizing, it could be only for the range. `domain_size` would not be implemented, and `range_size` would be a function of the vector that the LazyTensor is applied to.
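+
+A loose sketch of that idea (all names hypothetical, not part of the package):
+```julia
+struct DynamicallySizedTensor end
+
+# `range_size` takes the vector the tensor is applied to as an argument,
+# so the range is only known at application time.
+range_size(::DynamicallySizedTensor, v::AbstractVector) = (2length(v),)  # e.g. an upsampling operator
+
+# `domain_size(::DynamicallySizedTensor)` is intentionally left unimplemented.
+```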
 
 ## Research and thinking
- - [ ] Use a trait to indicate that a LazyTensor has the same range and domain?
  - [ ] Check how the native julia doc generator works
     - [ ] Check if Vidars design docs fit in there
 - [ ] Create a macro @lazy which replaces a binary op (+,-) by its lazy equivalent? It would be a neat way to indicate which evaluations are lazy without cluttering or confusing the code with special characters. (A rough sketch follows after this list.)
- - [ ] Dispatch on Lower() instead of the type Lower so `::Lower` instead of `::Type{Lower}` ???
- 	Seems better unless there is some specific reason to use the type instead of the value.
  - [ ] Can we have a trait to tell if a LazyTensor is transposable?
- - [ ] Is it ok to have "Constructors" for abstract types which create subtypes? For example a Grids() functions that gives different kind of grids based on input?
- - [ ] Figure out how to treat the borrowing parameters of operators. Include in into the struct? Expose via function dispatched on the operator type and grid?
-
-## Identifiers for regions
-The identifiers (`Upper`, `Lower`, `Interior`) used for region indices should probably be included in the grid module. This allows new grid types to come with their own regions.
-We implement this by refactoring RegionIndices to be agnostic to the region types and then moving the actual types to Grids.
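+
+A rough sketch of the `@lazy` idea from the list above; `lazy_add` and
+`lazy_sub` are hypothetical stand-ins for whatever the lazy equivalents of
+`+` and `-` end up being called:
+```julia
+_lazyify(x) = x
+function _lazyify(ex::Expr)
+    ex = Expr(ex.head, map(_lazyify, ex.args)...)  # rewrite sub-expressions first
+    if ex.head == :call && ex.args[1] == :+
+        return Expr(:call, :lazy_add, ex.args[2:end]...)
+    elseif ex.head == :call && ex.args[1] == :-
+        return Expr(:call, :lazy_sub, ex.args[2:end]...)
+    end
+    return ex
+end
+
+macro lazy(ex)
+    return esc(_lazyify(ex))
+end
+
+# @lazy u + v - w  expands to  lazy_sub(lazy_add(u, v), w)
+```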
 
 ## Regions and tensormappings
 - [ ] Use a trait to indicate if a LazyTensor uses indices with regions.
@@ -182,15 +173,14 @@
 Preferably dimensions and sizes should be checked when lazy objects are created, for example TensorApplication, TensorComposition and so on. If dimension checks decrease performance we can make them skippable later.
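+
+A minimal sketch of such a check (a simplified stand-in type, assuming a
+`domain_size` function exists for the tensor):
+```julia
+struct CheckedTensorApplication{TM,AA<:AbstractArray}
+    tm::TM
+    v::AA
+    function CheckedTensorApplication(tm, v)
+        # Fail loudly at construction time instead of deep inside an apply loop.
+        domain_size(tm) == size(v) ||
+            throw(DimensionMismatch("domain size of tm does not match size(v)"))
+        return new{typeof(tm),typeof(v)}(tm, v)
+    end
+end
+```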
 
 ## Changes to `eval_on`
-There are reasons to replace `eval_on` with regular `map` from Base, and implement a kind of lazy map perhaps `lmap` that work on indexable collections.
-
-The benefit of doing this is that we can treat grids as gridfunctions for the coordinate function, and get a more flexible tool. For example `map`/`lmap` can then be used both to evaluate a function on the grid but also get a component of a vector valued grid function or similar.
+There are reasons to replace `eval_on` with regular `map` from Base, and to
+implement a kind of lazy map, perhaps `lmap`, that works on indexable
+collections.
 
-A question is how and if we should implement `map`/`lmap` for functions like `(x,y)->x*y` or stick to just using vector inputs. There are a few options.
-
-* use `Base.splat((x,y)->x*y)` with the single argument `map`/`lmap`.
-* implement a kind of `unzip` function to get iterators for each component, which can then be used with the multiple-iterators-version of `map`/`lmap`.
-* Inspect the function in the `map`/`lmap` function to determine which matches.
+The benefit of doing this is that we can treat grids as grid functions for
+the coordinate function, and get a more flexible tool. For example,
+`map`/`lmap` can then be used both to evaluate a function on the grid and to
+extract a component of a vector valued grid function, or similar.
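+
+A sketch of the kind of usage this would enable, using eager `Base.map` and a
+plain array of coordinate tuples as a stand-in for a grid:
+```julia
+g  = [(x, y) for x in 0:0.5:1, y in 0:0.5:1]  # the "grid" as a grid function of coordinates
+gf = map(x̄ -> (sin(x̄[1]), cos(x̄[2])), g)      # evaluate a vector valued function on the grid
+v1 = map(v -> v[1], gf)                        # extract the first component as a grid function
+```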
 
 Below is a partial implementation of `lmap` with some ideas
 ```julia
@@ -220,7 +210,10 @@
 lmap(f,  I) = LazyIndexableMap(f,I)
 ```
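+
+Hypothetical usage of the partial `lmap` above, assuming `LazyIndexableMap`
+forwards indexing so that `gf[i...] == f(I[i...])`:
+```julia
+I  = [(x, y) for x in 0:0.25:1, y in 0:0.25:1]  # any indexable collection
+gf = lmap(x̄ -> x̄[1]*x̄[2], I)                    # nothing is evaluated here
+gf[2, 3]                                         # evaluates f at I[2, 3] only
+```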
 
-The interaction of the map methods with the probable design of multiblock functions involving nested indecies complicate the picture slightly. It's clear at the time of writing how this would work with `Base.map`. Perhaps we want to implement our own versions of both eager and lazy map.
+The interaction of the map methods with the probable design of multiblock
+functions involving nested indices complicates the picture slightly. It is
+unclear at the time of writing how this would work with `Base.map`. Perhaps we
+want to implement our own versions of both eager and lazy map.
 
 
 ### 2024-04
@@ -291,16 +284,6 @@
 Grid functions have the type `AbstractArray{T,2} where T`.
 `T` can be more or less anything, for example an SVector, an Array, or a Tuple. The tensor operators do not care exactly what it is, beyond that the type must support the operations the operator uses.
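+
+For example, with StaticArrays a vector valued grid function can simply be an
+`AbstractArray{SVector{2,Float64},2}`:
+```julia
+using StaticArrays
+
+gf = [SVector(x, y) for x in range(0, 1, length=5), y in range(0, 1, length=5)]
+gf[2, 3]     # the full vector at one grid point
+gf[2, 3][2]  # its second component
+```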
 
-A drawback can be how one would get the grid function for, e.g., the second component.
-
-Syntax:
-```
-f(x̄) = x̄
-gf = evalOn(g, f)
-gf[2,3] # x̄ at a given grid point
-gf[2,3][2] # x̄[2] at a given grid point
-```
-
 ### Tensor operators
 We can have tensor operators that act on a scalar field and give a vector field or tensor field.
 We can also have tensor operators that act on a vector field or tensor field and give a scalar field.
@@ -316,16 +299,6 @@
 
 Perhaps one could implement `⋅(tm::LazyTensor{R,D}, v::AbstractArray{T,D})` where T is an AbstractArray and tm somehow has components, as many as T has elements.
 
-### Components as grid functions
-An important operation for vector fields is to be able to get the components out as grid functions. This probably needs to be possible lazily.
-There are a few different solutions:
-* Use map or a lazy map (see the discussion about eval_on)
-* Implement our own kind of view that takes care of this. Or Accessors.jl?
-* Use a LazyTensor
-* Some kind of lazy broadcast
-* A lazy array that applies a function to each element.
-
-
 ### Performance aspects
 [Vidar, Discord, 2023-03-03]
 Typically there are two ways to represent vector-valued grid functions: AbstractArray{T,Dim} where T is a vector over the components. In 1D one would thus have
@@ -392,3 +365,38 @@
 stencil application on the bugfix/sbp_operators/stencil_return_type branch
 there seemed to be some strange results where such errors could be the
 culprit.
+
+
+## Tiled loops and regions in apply
+There should be easy ways to split the application of a lazy array into regions and to use tiled iteration. This could make the application more efficient by reducing branching and improving cache usage in the tight loops. On commit f215ac2a5c66 and earlier there were some early tests of this in a DiffOp submodule.
+
+The main ideas were:
+```julia
+function apply_region!(D::DiffOpCartesian{2}, u::AbstractArray{T,2}, v::AbstractArray{T,2}) where T
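+    # Split the index space into 3×3 regions (boundary closure / interior in
+    # each dimension) so that each inner loop is free of boundary branches.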
+    apply_region!(D, u, v, Lower, Lower)
+    apply_region!(D, u, v, Lower, Interior)
+    apply_region!(D, u, v, Lower, Upper)
+    apply_region!(D, u, v, Interior, Lower)
+    apply_region!(D, u, v, Interior, Interior)
+    apply_region!(D, u, v, Interior, Upper)
+    apply_region!(D, u, v, Upper, Lower)
+    apply_region!(D, u, v, Upper, Interior)
+    apply_region!(D, u, v, Upper, Upper)
+    return nothing
+end
+```
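+
+The per-region method is not shown above; a sketch of what it might have
+looked like, inferred from the tiled version below:
+```julia
+function apply_region!(D::DiffOpCartesian{2}, u::AbstractArray{T,2}, v::AbstractArray{T,2}, r1::Type{<:Region}, r2::Type{<:Region}) where T
+    for I ∈ regionindices(D.grid.size, closuresize(D.op), (r1,r2))
+        u[I] = apply(D, v, (Index{r1}(I[1]), Index{r2}(I[2])))
+    end
+    return nothing
+end
+```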
+
+```julia
+using TiledIteration
+function apply_region_tiled!(D::DiffOpCartesian{2}, u::AbstractArray{T,2}, v::AbstractArray{T,2}, r1::Type{<:Region}, r2::Type{<:Region}) where T
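+    # The indices that belong to the (r1, r2) region of the grid.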
+    ri = regionindices(D.grid.size, closuresize(D.op), (r1,r2))
+    # TODO: Pass Tilesize to function
+    for tileaxs ∈ TileIterator(axes(ri), padded_tilesize(T, (5,5), 2))
+        for j ∈ tileaxs[2], i ∈ tileaxs[1]
+            I = ri[i,j]
+            u[I] = apply(D, v, (Index{r1}(I[1]), Index{r2}(I[2])))
+        end
+    end
+    return nothing
+end
+```