changeset 1342:c0c1189c5f2e refactor/grids

Clean up grid_refactor.md
author Jonatan Werpers <jonatan@werpers.com>
date Fri, 12 May 2023 15:50:09 +0200
parents 5761f4060f2b
children fa3695f634de
files Notes.md TODO.md grid_refactor.md src/Grids/Grids.jl src/Grids/grid.jl
diffstat 5 files changed, 83 insertions(+), 100 deletions(-) [+]
--- a/Notes.md	Mon May 08 17:37:24 2023 +0200
+++ b/Notes.md	Fri May 12 15:50:09 2023 +0200
@@ -199,6 +199,77 @@
 
 Preferably, dimensions and sizes should be checked when lazy objects are created, for example TensorApplication, TensorComposition and so on. If dimension checks decrease performance we can make them skippable later.
 
+## Changes to `eval_on`
+There are reasons to replace `eval_on` with regular `map` from Base, and to implement a kind of lazy map, perhaps `lmap`, that works on indexable collections.
+
+The benefit of doing this is that we can treat grids as grid functions for the coordinate function and get a more flexible tool. For example, `map`/`lmap` can then be used both to evaluate a function on the grid and to get a component of a vector valued grid function, or similar.
+
+A question is whether and how we should implement `map`/`lmap` for functions like `(x,y)->x*y`, or stick to just using vector inputs. There are a few options (the first two are sketched below):
+
+* Use `Base.splat((x,y)->x*y)` with the single-argument `map`/`lmap`.
+* Implement a kind of `unzip` function to get iterators for each component, which can then be used with the multiple-iterators version of `map`/`lmap`.
+* Inspect the function in `map`/`lmap` to determine which calling convention matches.
+
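+A rough sketch of the first two options using plain `Base.map` on a collection of coordinate tuples. The coordinate data and the `unzip` helper are made up for illustration:
+```julia
+coords = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]  # stand-in for grid coordinates
+
+# Option 1: splat each coordinate tuple into the multi-argument function.
+map(Base.splat((x, y) -> x * y), coords)        # [2.0, 12.0, 30.0]
+
+# Option 2: unzip into one iterator per component and use the multi-iterator map.
+unzip(itr) = map(i -> getindex.(itr, i), 1:length(first(itr)))
+xs, ys = unzip(coords)
+map((x, y) -> x * y, xs, ys)                    # [2.0, 12.0, 30.0]
+```
+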
+Below is a partial implementation of `lmap` with some ideas:
+```julia
+struct LazyMapping{T,IT,F}
+    f::F
+    indexable_iterator::IT # the underlying indexable collection
+end
+
+function LazyMapping(f, I)
+    IT = typeof(I)
+    T = typeof(f(first(I)))
+    F = typeof(f)
+
+    return LazyMapping{T,IT,F}(f, I)
+end
+
+Base.getindex(lm::LazyMapping, I...) = lm.f(lm.indexable_iterator[I...])
+Base.eltype(::Type{<:LazyMapping{T}}) where T = T
+Base.length(lm::LazyMapping) = length(lm.indexable_iterator)
+# TODO: rest of the indexable interface
+# TODO: handle iterables with shape
+
+Base.iterate(lm::LazyMapping) = _lazy_mapping_iterate(lm, iterate(lm.indexable_iterator))
+Base.iterate(lm::LazyMapping, state) = _lazy_mapping_iterate(lm, iterate(lm.indexable_iterator, state))
+
+_lazy_mapping_iterate(lm, ::Nothing) = nothing
+_lazy_mapping_iterate(lm, (next, state)) = lm.f(next), state
+
+lmap(f, I) = LazyMapping(f, I)
+```
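+
+A quick usage sketch, assuming the definitions above:
+```julia
+xs = [1.0, 2.0, 3.0]
+squares = lmap(x -> x^2, xs)
+
+squares[3]        # 9.0, computed lazily on access
+collect(squares)  # [1.0, 4.0, 9.0]
+```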
+
+The interaction of the map methods with the probable design of multiblock grid functions, which involves nested indices, complicates the picture slightly. It is not clear at the time of writing how this would work with `Base.map`. Perhaps we want to implement our own versions of both an eager and a lazy map.
+
+## Multiblock implementation
+We want multiblock things to work very similarly to regular one block things.
+
+### Grid functions
+Should probably support nested indexing so that we first have an index for the subgrid and then an index for the nodes on that grid, e.g. `g[1,2][2,3]` or `g[3][43,21]`.
+
+We could also possibly provide a combined indexing style `g[1,2,3,4]` where the first group of indices is for the subgrid and the remaining indices are for the nodes.
+
+We should make sure the underlying buffers for grid functions are stored contiguously and are easy to convert to, so that interaction with, for example, DifferentialEquations is simple and does not require much boilerplate.
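+
+A minimal sketch of how nested and combined indexing could sit on top of a contiguous buffer, using a single linear subgrid index for simplicity. The type name `MultiblockGridFunction` and its layout are hypothetical, not part of the current code:
+```julia
+struct MultiblockGridFunction{T,N,BT<:AbstractArray{T,N}}
+    buffer::Vector{T}   # contiguous storage, easy to hand to e.g. DifferentialEquations
+    blocks::Vector{BT}  # reshaped views into `buffer`, one per subgrid
+end
+
+# Nested indexing: gf[b][i,j]
+Base.getindex(gf::MultiblockGridFunction, b::Integer) = gf.blocks[b]
+
+# Combined indexing: gf[b, i, j]
+Base.getindex(gf::MultiblockGridFunction, b::Integer, I...) = gf.blocks[b][I...]
+```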
+
+#### `map` and `collect` and nested indexing
+We need to make sure `collect`, `map` and a potential lazy map work correctly through the nested indexing.
+
+### Tensor applications
+Should behave like grid functions.
+
+### LazyTensors
+Could be built as a tuple or array of LazyTensors, one for each grid, together with a simple apply function.
+
+Nested indexing for these is probably not needed unless it simplifies their own implementation.
+
+It is possibly useful to provide a simple type that doesn't know about connections between the grids. Another type could include knowledge of the connections.
+
+We have at least two options for how to implement them:
+* A matrix of LazyTensors (a sketch of the simple diagonal case is shown below).
+* Looking at the grid and determining what the apply should do.
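+
+A minimal sketch of the connection-unaware variant: a wrapper holding one tensor per grid block and an apply that works blockwise. The names and the apply signature here are illustrative assumptions, not the existing LazyTensors API:
+```julia
+struct BlockDiagonalTensor{TS<:Tuple}
+    blocks::TS  # one tensor per subgrid; no knowledge of grid connections
+end
+
+# Apply block i to block i of a multiblock grid function (here a tuple of arrays).
+apply(t::BlockDiagonalTensor, gf::Tuple) = map((b, v) -> b(v), t.blocks, gf)
+
+# Usage, with plain functions standing in for per-grid tensors:
+D = BlockDiagonalTensor((v -> 2v, v -> v .+ 1))
+apply(D, ([1.0, 2.0], [3.0, 4.0]))  # ([2.0, 4.0], [4.0, 5.0])
+```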
+
+### Overall design implications of nested indices
+If some grids accept nested indexing there might be a clash with how LazyArrays work. It would be nice if the grid functions and lazy arrays that actually are arrays could be AbstractArrays, while the requirements are relaxed for nested index types.
+
 ## Vector valued grid functions
 
 ### Test applications
@@ -225,8 +296,6 @@
 gf[2,3][2] # x̄[2] for a given grid point
 ```
 
-Note: Need to decide whether `eval_on` passes `x̄` or `x̄...` to `f`. Or whether both can be supported.
-
 ### Tensor operators
 We can have tensor operators that act on a scalar field and produce a vector field or tensor field.
 We can also have tensor operators that act on a vector field or tensor field and produce a scalar field.
--- a/TODO.md	Mon May 08 17:37:24 2023 +0200
+++ b/TODO.md	Fri May 12 15:50:09 2023 +0200
@@ -1,5 +1,7 @@
 # TODO
 
+## Organization
+ - [ ] Split up Notes.md into several files
 
 ## Coding
 - [ ] Rename variables and functions so that they follow the style guide
@@ -26,3 +28,10 @@
 - [ ] Check that we do bounds checks everywhere and that they are marked with @boundscheck
 - [ ] Check that we have @inline in the right places
 - [ ] Profile
+
+
+### Grids
+
+ - [ ] Multiblock grids
+ - [ ] Periodic grids
+ - [ ] Grids with modified boundary closures
--- a/grid_refactor.md	Mon May 08 17:37:24 2023 +0200
+++ b/grid_refactor.md	Fri May 12 15:50:09 2023 +0200
@@ -23,101 +23,8 @@
 * Clean out Notes.md of any solved issues
 * Delete this document, move remaining notes to Notes.md
 
-## Remaining work for feature branches
-* Multi-block grids
-* Periodic grids
-* Grids with modified boundary nodes
-* Unstructured grids?
-
 ## Questions
 
-### Should we move utility functions to their own file?
-
-### Should `Grid` be an AbstractArray?
-Since all grids are supposed to act as a grid function for the coordinates, we must
-at the same time answer the question of how we handle general grid functions.
-
-Some things to take into account:
-  - Multiblock grids?
-  - Unstructured?
-  - Triangular structured grids?
-  - Non-simply connected?
-  - CG/DG grids
-
-If all grids are some kind of AbstractArray then, for example, a multiblock grid could be an AbstractArray{<:Grid, 1} and the corresponding grid functions AbstractArray{<:AbstractArray,1}.
-That is, you first index to select which grid you want and then index into that grid, e.g. `mg[2][32,12]`.
-
-An unstructured grid made of, for example, triangles could be viewed in the same way as a multiblock grid. The two kinds of grid presumably have different underlying data structures adapted to their purpose.
-
-How do the ideas above work if we have, for example, the tensor product of an unstructured grid and an equidistant grid?
-```julia
-m = Mesh2DTriangle(10)
-e = EquidistantGrid(1:10)
-
-e[4] # fourth point
-
-m[3][5] # Fifth node in third triangle
-m[3,5] # Fifth node in third triangle # Only works if all grids are the same (not true in the multiblock case)
-
-g = TensorGrid(m, e)
-
-g[3,4][5] # ??
-g[3,4] # ??
-
-g[3,5,4] # ??
-
-
-
-```
-
-Maybe not all grids are AbstractArrays? Do they have to be rectangular? That becomes difficult for structured triangle grids and NSC grids. But they should at least be indexable?
-
-One could exclude MultiblockGrid from tensor grids
-
-CG grids and DG grids turn out different.
-On CG grids you might want to index both nodes and triangles, depending on what you are doing?
-
-
-If the grids are not going to be AbstractArrays, many other places become strange if they are AbstractArrays. TensorApplication?! LazyArrays?! Are all the things we work with really more general objects? Is there some way to express the code so that one can choose?
-
-
-What we are after is perhaps that the grids satisfy the Iteration and Indexing interfaces.
-
-#### Attempt at conclusions
- * Multiblock grids are indexed in two levels, e.g. `g[3][3,4]`
-     * We skip implementing multiblock grids as part of a tensor grid to begin with.
- * A grid cannot always be an AbstractArray since, for example, an NSC grid or a structured triangle grid does not have the right shape.
- * If we have node-indexed unstructured grids, they should work with TensorGrid.
- * Grids should satisfy the Indexing and Iteration interfaces
-
-### Should Grid have a function for the target manifold dimension?
-Where would it be used?
-    In the constructor for TensorGrid
-    In `eval_on` if we want to allow multi-argument functions
-    Elsewhere?
-
-An alternative is to analyze T in Grid{T,D} to find the answer. (See combined_coordinate_vector_type in tensor_grid.jl)
-
-### Lazy version of map for our needs?
-Could be used to
- * evaluate functions on grids
- * pick out components of grid functions
- * More?
-
-Maybe this:
-```julia
-struct LazyMappedArray{F,AT} <: LazyArray
-    f::F
-    v::AT
-end
-```
-
-Could allow us to remove eval_on.
-
-### Do we need functions like `getcomponent`?
-Perhaps this can be more cleanly solved using map or a lazy version of map?
-That approach would be more flexible and more general, requiring fewer specialized functions.
-
-(see "Lazy version of map for our needs?" above)
-
-## Implement the tensor product operator for grids?
+### Implement the tensor product operator for grids?
+Yes!
+This could be a useful way to create grids from mixes of different kinds of 1D grids. An example could be a grid which is periodic in one direction and bounded in the other.
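+
+A hypothetical usage sketch of what this could enable. `PeriodicGrid` does not exist yet and the constructor signatures are assumptions, not taken from the current code:
+```julia
+# Hypothetical sketch; PeriodicGrid and the exact constructor signatures are assumed.
+periodic = PeriodicGrid(range(0, 2π, length=21))    # periodic direction
+bounded  = EquidistantGrid(range(0, 1, length=11))  # bounded direction
+g = TensorGrid(periodic, bounded)                   # mixed 2D grid
+```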
--- a/src/Grids/Grids.jl	Mon May 08 17:37:24 2023 +0200
+++ b/src/Grids/Grids.jl	Fri May 12 15:50:09 2023 +0200
@@ -42,5 +42,4 @@
 include("equidistant_grid.jl")
 include("zero_dim_grid.jl")
 
-
 end # module
--- a/src/Grids/grid.jl	Mon May 08 17:37:24 2023 +0200
+++ b/src/Grids/grid.jl	Fri May 12 15:50:09 2023 +0200
@@ -92,7 +92,6 @@
         return LazyTensors.LazyFunctionArray((I...)->f(g[I...]...), size(g))
     end
 end
-# TBD: How does `eval_on` relate to `map`. Should the be closer in name?
 
 _ncomponents(::Type{<:Number}) = 1
 _ncomponents(T::Type{<:SVector}) = length(T)