diff Notes.md @ 1766:fe058a0ebd97 cleanup

Delete old notes and todos
author Jonatan Werpers <jonatan@werpers.com>
date Fri, 13 Sep 2024 22:10:46 +0200
parents 2311f33b6bd3
children
line wrap: on
line diff
--- a/Notes.md	Fri Sep 13 12:57:54 2024 +0200
+++ b/Notes.md	Fri Sep 13 22:10:46 2024 +0200
@@ -118,19 +118,10 @@
  Maybe, if we do have dynamic sizing, it could be only for the range. `domain_size` would not be implemented, and `range_size` would be a function of a vector that the LazyTensor is applied to.
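A minimal sketch of that idea (hypothetical; `Restriction`, `range_size` and `apply` are illustrative names here, not the package API):

```julia
# Hypothetical sketch: a tensor with no static domain_size, whose
# range_size is a function of the vector it is applied to.
struct Restriction
    keep::Vector{Int}   # which input indices are kept
end

# domain_size is deliberately not implemented; the range size is
# computed from the argument vector instead.
range_size(t::Restriction, v::AbstractVector) = (length(t.keep),)

apply(t::Restriction, v::AbstractVector) = v[t.keep]

t = Restriction([1, 3])
v = [1.0, 2.0, 3.0, 4.0]
range_size(t, v)   # (2,)
apply(t, v)        # [1.0, 3.0]
```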
 
 ## Research and thinking
- - [ ] Use a trait to indicate that a LazyTensor has the same range and domain?
  - [ ] Check how the native julia doc generator works
     - [ ] Check if Vidars design docs fit in there
  - [ ] Create a macro @lazy which replaces a binary op (+,-) by its lazy equivalent? Would be a neat way to indicate which evaluations are lazy without cluttering/confusing with special characters.
- - [ ] Dispatch on Lower() instead of the type Lower so `::Lower` instead of `::Type{Lower}` ???
- 	Seems better unless there is some specific reason to use the type instead of the value.
  - [ ] Can we have a trait to tell if a LazyTensor is transposable?
- - [ ] Is it ok to have "Constructors" for abstract types which create subtypes? For example a `Grids()` function that gives different kinds of grids based on input?
- - [ ] Figure out how to treat the borrowing parameters of operators. Include them in the struct? Expose via a function dispatched on the operator type and grid?
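The `@lazy` idea in the list above could look roughly like this (a hypothetical sketch; `⊕` and `⊖` are illustrative stand-ins for the lazy operations, not existing package functions):

```julia
# Hypothetical sketch of a @lazy macro: it rewrites binary + and - into
# lazy stand-ins ⊕ and ⊖ that defer evaluation to a thunk.
⊕(a, b) = () -> a .+ b   # stand-in for a lazy elementwise sum
⊖(a, b) = () -> a .- b   # stand-in for a lazy elementwise difference

function lazify(ex)
    ex isa Expr || return ex
    args = map(lazify, ex.args)
    if ex.head == :call && length(args) == 3
        args[1] == :+ && return Expr(:call, :⊕, args[2], args[3])
        args[1] == :- && return Expr(:call, :⊖, args[2], args[3])
    end
    return Expr(ex.head, args...)
end

macro lazy(ex)
    return esc(lazify(ex))
end

s = @lazy [1, 2] + [3, 4]  # nothing is computed yet
s()                        # [4, 6]
```

This keeps ordinary operator syntax at the call site while making laziness explicit at a single annotation point.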
-
-## Identifiers for regions
-The identifiers (`Upper`, `Lower`, `Interior`) used for region indices should probably be included in the grid module. This allows new grid types to come with their own regions.
-We implement this by refactoring RegionIndices to be agnostic to the region types and then moving the actual types to Grids.
 
 ## Regions and tensormappings
 - [ ] Use a trait to indicate if a LazyTensor uses indices with regions.
@@ -182,15 +173,14 @@
 Preferably dimensions and sizes should be checked when lazy objects are created, for example TensorApplication, TensorComposition and so on. If dimension checks decrease performance we can make them skippable later.
 
 ## Changes to `eval_on`
-There are reasons to replace `eval_on` with regular `map` from Base, and implement a kind of lazy map, perhaps `lmap`, that works on indexable collections.
-
-The benefit of doing this is that we can treat grids as gridfunctions for the coordinate function, and get a more flexible tool. For example `map`/`lmap` can then be used both to evaluate a function on the grid and to get a component of a vector-valued grid function or similar.
+There are reasons to replace `eval_on` with regular `map` from Base, and
+implement a kind of lazy map, perhaps `lmap`, that works on indexable
+collections.
 
-A question is whether and how we should implement `map`/`lmap` for functions like `(x,y)->x*y`, or stick to just using vector inputs. There are a few options.
-
-* use `Base.splat((x,y)->x*y)` with the single-argument `map`/`lmap`.
-* implement a kind of `unzip` function to get iterators for each component, which can then be used with the multiple-iterators version of `map`/`lmap`.
-* Inspect the function passed to `map`/`lmap` to determine which case applies.
+The benefit of doing this is that we can treat grids as gridfunctions for the
+coordinate function, and get a more flexible tool. For example `map`/`lmap`
+can then be used both to evaluate a function on the grid and to get a
+component of a vector-valued grid function or similar.
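As a sketch of that flexibility (hypothetical; the tuple-of-coordinates `grid` below is a stand-in for a real grid type):

```julia
# Hypothetical sketch: if a grid iterates over its coordinates, plain map
# serves both for evaluating functions and for extracting components.
grid = [(x, y) for x in 0.0:0.5:1.0, y in 0.0:0.5:1.0]  # stand-in grid

gf = map(x̄ -> x̄[1] + x̄[2], grid)  # evaluate a scalar function on the grid
vf = map(identity, grid)           # vector-valued grid function
c2 = map(x̄ -> x̄[2], vf)           # its second component, as a grid function
```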
 
 Below is a partial implementation of `lmap` with some ideas
 ```julia
@@ -220,7 +210,10 @@
 lmap(f,  I) = LazyIndexableMap(f,I)
 ```
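One possible shape of the parts elided above (a hypothetical sketch consistent with the `lmap`/`LazyIndexableMap` names, not the actual implementation):

```julia
# Hypothetical sketch: a lazy map that applies f on indexing rather
# than eagerly over the whole collection.
struct LazyIndexableMap{F,T}
    f::F
    I::T
end

Base.size(m::LazyIndexableMap) = size(m.I)
Base.getindex(m::LazyIndexableMap, i...) = m.f(m.I[i...])

lmap(f, I) = LazyIndexableMap(f, I)

m = lmap(x -> 2x, [1, 2, 3])
m[2]  # 4, computed at access time
```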
 
-The interaction of the map methods with the probable design of multiblock functions involving nested indices complicates the picture slightly. It's clear at the time of writing how this would work with `Base.map`. Perhaps we want to implement our own versions of both eager and lazy map.
+The interaction of the map methods with the probable design of multiblock
+functions involving nested indices complicates the picture slightly. It's
+unclear at the time of writing how this would work with `Base.map`. Perhaps we
+want to implement our own versions of both eager and lazy map.
 
 
 ### 2024-04
@@ -291,16 +284,6 @@
 Grid functions have the type `AbstractArray{T,2} where T`.
 `T` can be more or less anything, for example an SVector, an Array, or a Tuple. The tensor operators do not care exactly what it is, beyond that the type must support the operations the operator uses.
 
-A drawback could be how to extract the grid function for, e.g., the second component.
-
-Syntax:
-```
-f(x̄) = x̄
-gf = evalOn(g, f)
-gf[2,3] # x̄ for a given grid point
-gf[2,3][2] # x̄[2] for a given grid point
-```
-
 ### Tensor operators
 We can have tensor operators that act on a scalar field and give a vector field or tensor field.
 We can also have tensor operators that act on a vector field or tensor field and give a scalar field.
@@ -316,16 +299,6 @@
 
 Perhaps one could implement `⋅(tm::LazyTensor{R,D}, v::AbstractArray{T,D})` where T is an AbstractArray and tm in some way has components, as many as T has elements.
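A rough sketch of that idea (hypothetical names throughout; `Scale` stands in for a per-component operator and `apply` for the package's application function):

```julia
# Hypothetical sketch: ⋅ between a tuple of per-component operators and a
# vector-valued grid function, summing the per-component applications.
struct Scale          # stand-in for a component operator
    a::Float64
end
apply(s::Scale, v) = s.a .* v

function ⋅(ops::Tuple, vf::AbstractArray)
    # one operator per component of the elements of vf
    sum(apply(op, map(x̄ -> x̄[k], vf)) for (k, op) in enumerate(ops))
end

vf = [(1.0, 2.0), (3.0, 4.0)]
(Scale(1.0), Scale(10.0)) ⋅ vf  # [21.0, 43.0]
```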
 
-### Components as grid functions
-An important operation for vector fields is to be able to extract components as grid functions. This probably needs to be possible lazily.
-There are a couple of different solutions:
-* Use map or a lazy map (see the discussion of eval_on)
-* Implement our own kind of view that takes care of this. Or Accessors.jl?
-* Use a LazyTensor
-* Some kind of lazy broadcast
-* A lazy array that applies a function to each element.
-
-
 ### Performance aspects
 [Vidar, Discord, 2023-03-03]
 Typically there are two ways to represent vector-valued grid functions: `AbstractArray{T,Dim}` where T is a vector over the components. In 1D one would thus have