Notes.md @ 1829:871f3f1decea (branch: refactor/grids/iterable_boundary_indices)

Merge default

author:  Jonatan Werpers <jonatan@werpers.com>
date:    Sun, 20 Oct 2024 21:38:09 +0200
parents: fe058a0ebd97
```julia
difference_operator(::Laplace, grid, stencil_set) = ... # Returns a plain LazyTensor. Replaces the current `laplace()` function.
sat_tensors(::Laplace, grid, stencil_set, bc) = ...

sat(::DifferentialOperator, grid, stencil_set, bc) = ...
```

### Update 2024-06-26
We will run into trouble if we start assuming things about the coupling
between the continuous and discrete setting. We could add representations of
continuous operators, but we will also need representations of discrete
operators. Ideally it should be possible to ignore the continuous
representations and only work with the discrete operators without losing
functionality. The discrete representations do not have to be LazyTensors.
They could be used as inputs to methods for `sat`, `difference_operator` and
so on.

To see the need for a fully functional discrete layer we can consider the
optimization of material parameters or something similar. In this case we do
not necessarily want to handle continuous objects.
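
To make this concrete, here is a minimal sketch of the idea. All names below
(`DiscreteLaplace`, `discretize`, the one-argument `difference_operator` and
`sat` methods) are hypothetical illustrations, not existing sbplib_julia API:

```julia
# Hypothetical sketch only; none of these names are existing sbplib_julia API.
struct Laplace end                # continuous description of the operator

struct DiscreteLaplace{G}         # discrete representation; need not be a LazyTensor
    grid::G
    stencil_set::String           # stand-in for whatever stencil data is needed
end

# The discrete representation can be derived from the continuous one...
discretize(::Laplace, grid, stencil_set) = DiscreteLaplace(grid, stencil_set)

# ...but it can also be used directly as input to the methods discussed above,
# so e.g. a material-parameter optimization never has to touch the continuous layer.
difference_operator(d::DiscreteLaplace) = "lazy tensor for Δ on $(d.grid)"   # placeholder return
sat(d::DiscreteLaplace, bc) = "SAT tensors for $(bc) on $(d.grid)"           # placeholder return

d = discretize(Laplace(), "equidistant grid", "d2_variable_4th")
difference_operator(d)
sat(d, :neumann)
```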

## Reading operators

Jonatan's suggestion is to add methods to `Laplace`, `SecondDerivative` and
similar functions that take in a filename from which to read stencils. These
[...]

* More?

Maybe, if we are to have dynamic sizing, it could be only for the range:
`domain_size` would not be implemented, and `range_size` would be a function
of the vector that the LazyTensor is applied to.
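
As a sketch of what range-only dynamic sizing could look like (`Restriction`
and the two-argument `range_size` are invented here purely to illustrate the
idea, not existing API):

```julia
# Hypothetical sketch: `range_size` depends on the argument, `domain_size` is never defined.
struct Restriction end   # keeps every second element of whatever vector it is applied to

range_size(::Restriction, v::AbstractVector) = (cld(length(v), 2),)
# domain_size(::Restriction) is deliberately not implemented.

apply(::Restriction, v::AbstractVector) = v[1:2:end]

v = collect(1.0:10.0)
range_size(Restriction(), v)   # (5,)
apply(Restriction(), v)        # [1.0, 3.0, 5.0, 7.0, 9.0]
```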

## Research and thinking
- [ ] Use a trait to indicate that a LazyTensor has the same range and domain?
- [ ] Check how the native Julia doc generator works
- [ ] Check if Vidar's design docs fit in there
- [ ] Create a macro @lazy which replaces a binary op (+,-) by its lazy equivalent? Would be a neat way to indicate which evaluations are lazy without cluttering/confusing with special characters.
- [ ] Dispatch on Lower() instead of the type Lower, i.e. `::Lower` instead of `::Type{Lower}`?
      Seems better unless there is some specific reason to use the type instead of the value; see the sketch after this list.
- [ ] Can we have a trait to tell if a LazyTensor is transposable?
- [ ] Is it ok to have "Constructors" for abstract types which create subtypes? For example a Grids() function that gives different kinds of grids based on input?
- [ ] Figure out how to treat the borrowing parameters of operators. Include them in the struct? Expose via a function dispatched on the operator type and grid?
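
Regarding the dispatch question in the list above, here is a small
self-contained comparison of dispatching on the value `Lower()` versus on the
type `Lower` (simplified stand-ins for the region types):

```julia
struct Lower end
struct Upper end

# Dispatching on the singleton values:
flip(::Lower) = Upper()
flip(::Upper) = Lower()

# Dispatching on the type objects:
flip(::Type{Lower}) = Upper
flip(::Type{Upper}) = Lower

flip(Lower())   # Upper()
flip(Lower)     # Upper
```

The value form tends to read more naturally at call sites, which matches the
preference expressed in the list item.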

## Identifiers for regions
The identifiers (`Upper`, `Lower`, `Interior`) used for region indices should probably be included in the grid module. This allows new grid types to come with their own regions.
We implement this by refactoring RegionIndices to be agnostic to the region types and then moving the actual types to Grids.
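
A minimal sketch of the refactoring described above, assuming an `Index` type
that is agnostic to which region types exist (names simplified; not the actual
RegionIndices code):

```julia
# The index type does not know about any particular regions...
struct Index{R,T<:Integer}
    i::T
end
Index{R}(i::T) where {R,T<:Integer} = Index{R,T}(i)

# ...so a grid module can define whatever region identifiers it needs.
struct Interior end
struct CornerRegion end   # a hypothetical region a new grid type might introduce

Index{Interior}(4)       # Index{Interior, Int64}(4)
Index{CornerRegion}(1)   # Index{CornerRegion, Int64}(1)
```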

## Regions and tensormappings
- [ ] Use a trait to indicate if a LazyTensor uses indices with regions.
      The default should be that they do NOT.
- [ ] What to name this trait? Can we call it IndexStyle but not export it to avoid conflicts with Base.IndexStyle? (A possible shape of the trait is sketched after this list.)
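
A sketch of what such a trait could look like, modeled on the
`Base.IndexStyle` pattern; the names `RegionIndexing`, `WithRegions` and
`WithoutRegions` are hypothetical:

```julia
abstract type RegionIndexing end
struct WithRegions    <: RegionIndexing end
struct WithoutRegions <: RegionIndexing end

# Default: LazyTensors do NOT use indices with regions.
region_indexing(x) = region_indexing(typeof(x))
region_indexing(::Type) = WithoutRegions()

# A tensor type that does use region indices opts in:
struct SomeRegionAwareTensor end
region_indexing(::Type{SomeRegionAwareTensor}) = WithRegions()

region_indexing(SomeRegionAwareTensor())   # WithRegions()
region_indexing([1.0, 2.0])                # WithoutRegions()
```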

[...]

This would mean no bounds checking in the `apply` methods; however, any
indexing that they do would be bounds checked. The only loss would be
readability of errors. But users aren't really supposed to call `apply`
directly anyway.

Preferably, dimensions and sizes should be checked when lazy objects are
created, for example TensorApplication, TensorComposition and so on. If
dimension checks decrease performance we can make them skippable later.
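
A sketch of checking sizes at construction time rather than inside `apply`
(`SketchTensorApplication` is a made-up stand-in for the real lazy types):

```julia
struct SketchTensorApplication{T,AT<:AbstractArray{T}}
    domain_size::Tuple{Vararg{Int}}
    v::AT

    function SketchTensorApplication(domain_size::Tuple{Vararg{Int}}, v::AbstractArray{T}) where T
        # Fail early, when the lazy object is created, not when it is applied.
        size(v) == domain_size || throw(DimensionMismatch(
            "domain size $domain_size does not match argument size $(size(v))"))
        return new{T,typeof(v)}(domain_size, v)
    end
end

SketchTensorApplication((3,), [1.0, 2.0, 3.0])     # ok
# SketchTensorApplication((4,), [1.0, 2.0, 3.0])   # throws DimensionMismatch immediately
```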

## Changes to `eval_on`
There are reasons to replace `eval_on` with regular `map` from Base, and
implement a kind of lazy map, perhaps `lmap`, that works on indexable
collections.

The benefit of doing this is that we can treat grids as gridfunctions for the
coordinate function, and get a more flexible tool. For example, `map`/`lmap`
can then be used both to evaluate a function on the grid and to get a
component of a vector valued grid function or similar.

A question is how and if we should implement `map`/`lmap` for functions like
`(x,y)->x*y`, or stick to just using vector inputs. There are a few options:

* use `Base.splat((x,y)->x*y)` with the single-argument `map`/`lmap` (sketched below).
* implement a kind of `unzip` function to get iterators for each component, which can then be used with the multiple-iterators version of `map`/`lmap`.
* Inspect the function inside the `map`/`lmap` implementation to determine which form matches.
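
As a small illustration of the splat option above, with plain `Base.map` over
a vector of coordinate tuples standing in for a grid (an `lmap` would be used
the same way):

```julia
coords = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]   # stand-in for grid coordinates

# Base.splat turns a function of (x, y) into a function of a single tuple.
map(Base.splat((x, y) -> x * y), coords)         # [2.0, 12.0, 30.0]
```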

Below is a partial implementation of `lmap` with some ideas
```julia
struct LazyMapping{T,IT,F}
    f::F
    # ...

_lazy_mapping_iterate(lm, (next, state)) = lm.f(next), state

lmap(f, I) = LazyIndexableMap(f,I)
```

The interaction of the map methods with the probable design of multiblock
functions involving nested indices complicates the picture slightly. It's
unclear at the time of writing how this would work with `Base.map`. Perhaps we
want to implement our own versions of both eager and lazy map.


### 2024-04
MappedArrays.jl provides a simple array type and function like the description
of LazyMapping above. One option is to remove `eval_on` completely and rely on
[...]

### Grid functions
Grid functions have the type `AbstractArray{T,2} where T`.
`T` can be more or less anything, for example an SVector or an Array, or a Tuple. The tensor operators do not care exactly what it is, beyond that the type must support the operations the operator uses.

A drawback can be how to get out the grid function for, e.g., the second component.

Syntax:
```
f(x̄) = x̄
gf = evalOn(g, f)
gf[2,3] # x̄ at a given grid point
gf[2,3][2] # x̄[2] at a given grid point
```

### Tensor operators
We can have tensor operators that act on a scalar field and return a vector field or tensor field.
We can also have tensor operators that act on a vector field or tensor field and return a scalar field.

TBD: Right now `apply_transpose` assumes that the domain type is the same as the range type. We need to break that assumption somehow. One alternative is to let a LazyTensor have `T_domain` and `T_range` instead of just `T`, but that feels a bit messy. Another alternative would be some kind of trait for transpose? It could contain the type that the transpose acts on? Not sure whether that works, though.

[...]

(One could possibly also let it act on `StaticMatrix{N,T,D} where N` and return `StaticMatrix{M,T,D+1}`. The question is whether anything is gained by that...)

We could have a function `range_type(::LazyTensor, ::Type{domain_type})`.

Maybe one could implement `⋅(tm::LazyTensor{R,D}, v::AbstractArray{T,D})` where T is an AbstractArray and tm somehow has components, as many as T has elements.
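
A sketch of what such a `range_type` function could look like (the operator
types here are invented placeholders, not sbplib_julia types):

```julia
using StaticArrays

struct SketchDivergence end   # maps vector-valued elements to scalars
struct SketchGradient end     # maps scalars to vector-valued elements

range_type(::SketchDivergence, ::Type{SVector{D,T}}) where {D,T} = T
range_type(::SketchGradient, ::Type{T}) where {T<:Number} = SVector{2,T}

range_type(SketchDivergence(), SVector{2,Float64})   # Float64
range_type(SketchGradient(), Float64)                # SVector{2, Float64}
```

Something like this would let e.g. `apply_transpose` work with a range element
type that differs from the domain element type.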

### Components as grid functions
An important operation for vector fields is to be able to get out components as grid functions. This probably needs to be possible lazily.
There are a couple of different solutions:
* Use map or a lazy map (see the discussion about `eval_on`)
* Implement our own kind of view which takes care of this. Or Accessors.jl?
* Use a LazyTensor
* Some kind of lazy broadcast
* A lazy array which applies a function to each element (one version of this is sketched below).

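As one possible sketch of the last option, MappedArrays.jl (mentioned earlier)
gives a lazy array that applies a function to each element; the helper
`component` below is a made-up name:

```julia
using MappedArrays, StaticArrays

gf = [SVector(x, 2x) for x in 0.0:0.25:1.0]     # stand-in vector-valued grid function

component(gf, i) = mappedarray(v -> v[i], gf)   # lazy view of one component, no copy

c2 = component(gf, 2)
c2[3]          # 1.0, computed on access
collect(c2)    # materialize if an eager array is needed
```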

### Performance aspects
[Vidar, Discord, 2023-03-03]
Typically there are two ways to represent vector-valued grid functions. One is `AbstractArray{T,Dim}` where `T` is a vector over the components; in 1D one would then have u = [[u1[x1], u2[x1]], [u1[x2], u2[x2]], ..., [u1[xN], u2[xN]]]. This is usually called array of structs (AoS). The alternative is struct of arrays (SoA), where one stores all grid points for a given component: u = [[u1[x1], u1[x2], ..., u1[xN]], [u2[x1], u2[x2], ..., u2[xN]]].
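
In Julia terms the two layouts could look like this (1D, two components;
illustrative only):

```julia
using StaticArrays

N = 4
x = range(0.0, 1.0, length=N)

# Array of structs (AoS): one SVector holding both components per grid point.
u_aos = [SVector(sin(xi), cos(xi)) for xi in x]   # Vector{SVector{2,Float64}}

# Struct of arrays (SoA): one plain vector per component.
u_soa = (sin.(x), cos.(x))                        # two Vector{Float64}

u_aos[2][1] ≈ u_soa[1][2]    # same value, different memory layout
```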

[...]

We should make sure that `@inbounds` and `Base.@propagate_inbounds` are
applied correctly throughout the stack. When testing the performance of
stencil application on the bugfix/sbp_operators/stencil_return_type branch
there seemed to be some strange results where such errors could be the
culprit.
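
As a reminder of how the two macros are meant to interact, here is a generic
Julia illustration (not code from the package; `Shifted` and `value_at` are
invented names):

```julia
struct Shifted{V<:AbstractVector}
    v::V
end

# Base.@propagate_inbounds lets an `@inbounds` at the call site elide this
# function's @boundscheck block as well.
Base.@propagate_inbounds function value_at(s::Shifted, i::Int)
    @boundscheck checkbounds(s.v, i + 1)
    return @inbounds s.v[i + 1]
end

s = Shifted(collect(1.0:5.0))
value_at(s, 2)                       # bounds are checked here
unsafe_value(s, i) = @inbounds value_at(s, i)
unsafe_value(s, 2)                   # caller opts out; the check can be removed
```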

## Tiled loops and regions in apply
There should be easy ways to use functionality for splitting the application of a lazy array into regions and using tiled iteration. This could make the application more efficient by reducing branching and improving cache usage in the tight loop. On commit f215ac2a5c66 and before there were some early tests regarding this in a DiffOp submodule.

The main ideas were:
```julia
function apply_region!(D::DiffOpCartesian{2}, u::AbstractArray{T,2}, v::AbstractArray{T,2}) where T
    # Apply the operator separately on each of the 3×3 region combinations of a 2D grid.
    apply_region!(D, u, v, Lower, Lower)
    apply_region!(D, u, v, Lower, Interior)
    apply_region!(D, u, v, Lower, Upper)
    apply_region!(D, u, v, Interior, Lower)
    apply_region!(D, u, v, Interior, Interior)
    apply_region!(D, u, v, Interior, Upper)
    apply_region!(D, u, v, Upper, Lower)
    apply_region!(D, u, v, Upper, Interior)
    apply_region!(D, u, v, Upper, Upper)
    return nothing
end
```

```julia
using TiledIteration
function apply_region_tiled!(D::DiffOpCartesian{2}, u::AbstractArray{T,2}, v::AbstractArray{T,2}, r1::Type{<:Region}, r2::Type{<:Region}) where T
    ri = regionindices(D.grid.size, closuresize(D.op), (r1,r2))
    # TODO: Pass tile size to the function
    for tileaxs ∈ TileIterator(axes(ri), padded_tilesize(T, (5,5), 2))
        for j ∈ tileaxs[2], i ∈ tileaxs[1]
            I = ri[i,j]
            u[I] = apply(D, v, (Index{r1}(I[1]), Index{r2}(I[2])))
        end
    end
    return nothing
end
```