Notes.md @ 1342:c0c1189c5f2e refactor/grids

Clean up grid_refactor.md
author Jonatan Werpers <jonatan@werpers.com>
date Fri, 12 May 2023 15:50:09 +0200
Does it make sense to have bounds checking only in `getindex` methods?
This would mean no bounds checking in `apply` methods; however, any indexing they do would still be bounds checked. The only loss would be the readability of errors. But users aren't really supposed to call `apply` directly anyway.

Preferably, dimensions and sizes should be checked when lazy objects are created, for example `TensorApplication`, `TensorComposition`, and so on. If dimension checks decrease performance we can make them skippable later.

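A minimal sketch of such eager checking at construction, using a simplified stand-in for `TensorApplication` (the type, its fields, and the check message are illustrative, not the real implementation):

```julia
# Sketch only: a simplified lazy matrix-vector application that validates
# sizes eagerly in its constructor, so errors surface at creation time.
struct EagerlyCheckedApplication{A<:AbstractMatrix,V<:AbstractVector}
    t::A
    v::V

    function EagerlyCheckedApplication(t::A, v::V) where {A<:AbstractMatrix,V<:AbstractVector}
        # Fail at construction time, not at first index access.
        size(t, 2) == length(v) ||
            throw(DimensionMismatch("operator expects length $(size(t, 2)), got $(length(v))"))
        return new{A,V}(t, v)
    end
end

# Lazy element access; no size checks are needed here anymore.
Base.getindex(ta::EagerlyCheckedApplication, i::Integer) =
    sum(ta.t[i, j] * ta.v[j] for j in axes(ta.t, 2))
```

With this layout, a size mismatch throws immediately while element access stays lazy and unchecked.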
## Changes to `eval_on`
There are reasons to replace `eval_on` with the regular `map` from Base, and to implement a lazy counterpart, perhaps `lmap`, that works on indexable collections.

The benefit of doing this is that we can treat grids as grid functions for the coordinate function, and get a more flexible tool. For example, `map`/`lmap` can then be used both to evaluate a function on the grid and to get a component of a vector valued grid function, or similar.

A question is if and how we should implement `map`/`lmap` for functions like `(x,y)->x*y`, or whether to stick to just using vector inputs. There are a few options:

* Use `Base.splat((x,y)->x*y)` with the single-argument `map`/`lmap`.
* Implement a kind of `unzip` function to get iterators for each component, which can then be used with the multiple-iterators version of `map`/`lmap`.
* Inspect the function inside `map`/`lmap` to determine which calling convention matches.

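The first option can be sketched as follows, here with `Base.map`; an `lmap` would look the same at the call site:

```julia
# Coordinates as tuples, the way a grid iterator might yield them.
coords = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]

# Base.splat adapts a multi-argument function to a single tuple argument,
# so the one-collection version of map can be used directly.
f = (x, y) -> x * y
map(Base.splat(f), coords)  # [2.0, 12.0, 30.0]
```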
Below is a partial implementation of `lmap` with some ideas:
```julia
struct LazyMapping{T,IT,F}
    f::F
    indexable_iterator::IT # the underlying indexable collection
end

function LazyMapping(f, I)
    IT = typeof(I)
    # Infer the element type from a zero input (assumes `zero` is defined for it).
    T = typeof(f(zero(eltype(I))))
    F = typeof(f)

    return LazyMapping{T,IT,F}(f, I)
end

Base.getindex(lm::LazyMapping, I...) = lm.f(lm.indexable_iterator[I...])
# TODO: rest of the indexable interface
# TODO: iterable has shape

Base.iterate(lm::LazyMapping) = _lazy_mapping_iterate(lm, iterate(lm.indexable_iterator))
Base.iterate(lm::LazyMapping, state) = _lazy_mapping_iterate(lm, iterate(lm.indexable_iterator, state))

_lazy_mapping_iterate(lm, ::Nothing) = nothing
_lazy_mapping_iterate(lm, (next, state)) = lm.f(next), state

lmap(f, I) = LazyMapping(f, I)
```

The interaction of the map methods with the probable design of multiblock grid functions, involving nested indices, complicates the picture slightly. It is not clear at the time of writing how this would work with `Base.map`. Perhaps we want to implement our own versions of both an eager and a lazy map.

## Multiblock implementation
We want multiblock things to work very similarly to regular one-block things.

### Grid functions
These should probably support nested indexing, so that we first have an index for the subgrid and then an index for the nodes on that grid, e.g. `g[1,2][2,3]` or `g[3][43,21]`.

We could also provide a combined indexing style `g[1,2,3,4]`, where the first group of indices is for the subgrid and the remaining ones are for the nodes.

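A hypothetical sketch of the two indexing styles, with a single subgrid index for simplicity; the type and field names are illustrative, not a settled design:

```julia
# Illustrative only: a multiblock grid function backed by one array per subgrid.
struct MultiblockGridFunction{T}
    blocks::Vector{Matrix{T}} # one array of node values per subgrid
end

# Nested style: g[i] returns the grid function on subgrid i,
# so g[i][j,k] then indexes nodes on that subgrid.
Base.getindex(g::MultiblockGridFunction, i::Integer) = g.blocks[i]

# Combined style: the first index selects the subgrid,
# the remaining indices select the node, e.g. g[1, 2, 3].
Base.getindex(g::MultiblockGridFunction, i::Integer, I::Integer...) = g.blocks[i][I...]
```

With this, `g[2][1, 3]` and `g[2, 1, 3]` address the same node; Julia's dispatch picks the non-vararg method for the single-index form.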
We should make sure the underlying buffers for grid functions are stored contiguously and are easy to convert to and from plain arrays, so that interaction with, for example, DifferentialEquations is simple and does not need much boilerplate.

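One way to get this, sketched under the assumption that every block is a dense array: back each block by a view into a single flat vector, so the raw buffer can be handed to a solver without copying. The sizes below are just examples.

```julia
# One flat buffer holding a 2×2 block and a 3×3 block.
data = zeros(2*2 + 3*3)

# Each block is a reshaped view; mutating a block mutates the flat buffer.
block1 = reshape(view(data, 1:4), 2, 2)
block2 = reshape(view(data, 5:13), 3, 3)

block1 .= 1.0 # write through the view
data[1:4]     # now all ones; `data` itself is what an ODE solver would see
```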
#### `map`, `collect`, and nested indexing
We need to make sure that `collect`, `map`, and a potential lazy map work correctly through the nested indexing.

### Tensor applications
These should behave as grid functions.

### LazyTensors
These could be built as a tuple or array of LazyTensors, one per grid, with a simple apply function.

Nested indexing for these is probably not needed unless it simplifies their own implementation.

It is possibly useful to provide a simple type that doesn't know about connections between the grids. Another type can then include knowledge of them.

We have at least two options for how to implement them:
* A matrix of LazyTensors.
* Looking at the grid and determining what the apply should do.

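The first option can be sketched like this; `apply` and the wrapper type are illustrative names, and the per-block operators are plain functions here rather than real LazyTensors:

```julia
# Sketch: a multiblock operator as a tuple of per-block operators,
# applied blockwise to a tuple of per-block grid function data.
struct BlockDiagonalOperator{TT<:Tuple}
    per_block::TT
end

# Apply each block's operator to the corresponding block of the grid function.
apply(op::BlockDiagonalOperator, blocks::Tuple) =
    map((f, b) -> f(b), op.per_block, blocks)
```

This variant knows nothing about connections between the grids; a coupled operator would be a second type, or the second option above.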
### Overall design implications of nested indices
If some grids accept nested indexing, there might be a clash with how LazyArrays work. It would be nice if the grid functions and lazy arrays that actually are arrays could be `AbstractArray`s, with the requirements relaxed for nested index types.

## Vector valued grid functions

### Test applications
div and grad operations

```julia
f(x̄) = x̄
gf = evalOn(g, f)
gf[2,3]    # x̄ at a given grid point
gf[2,3][2] # x̄[2] at a given grid point
```

Note: We need to decide whether `eval_on` passes `x̄` or `x̄...` to `f`, or whether both could be supported.

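The "support both" option could be sketched with a method check; `_apply_coord` is a hypothetical helper, and `hasmethod` carries some runtime cost, so this is only one possible resolution:

```julia
# Call f(x̄) if f accepts the coordinate tuple directly,
# otherwise splat the components as f(x̄...).
_apply_coord(f, x̄) = hasmethod(f, Tuple{typeof(x̄)}) ? f(x̄) : f(x̄...)

_apply_coord(x̄ -> x̄[1] + x̄[2], (1.0, 2.0)) # 3.0, tuple form
_apply_coord((x, y) -> x + y, (1.0, 2.0))   # 3.0, splatted form
```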
### Tensor operators
We can have tensor operators that act on a scalar field and give a vector field or a tensor field.
We can also have tensor operators that act on a vector field or a tensor field and give a scalar field.
