"Custom" PRange and/or partitioning #59
-
From the examples, and reading the code, it seems to be assumed that the rows are split evenly among the processes, e.g. here: PartitionedArrays.jl/test/test_fem_sa.jl, line 68 at d5224f7. Is it possible to control the partitioning? Maybe my mental model is the problem here, but let's say we have a FE problem split across two processors: wouldn't it be better (i.e. less data movement) if I could control the range myself? For example, one processor owns 20 rows and the other 10. Perhaps it is better for other operations, such as matmul, that the data is evenly distributed.
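For what it's worth, the index bookkeeping behind such an uneven split is straightforward. Here is a minimal sketch in plain Julia (this is *not* the PartitionedArrays.jl API; it only shows the arithmetic, using the hypothetical 20/10 split from above) that turns per-process owned-row counts into global index ranges:

```julia
# Plain-Julia sketch: per-process owned-row counts -> global index ranges.
# This is only the bookkeeping an uneven partition needs, not PartitionedArrays API.
own_rows = [20, 10]                 # process 1 owns 20 rows, process 2 owns 10
stops  = cumsum(own_rows)           # last global row of each process: [20, 30]
starts = [1; stops[1:end-1] .+ 1]   # first global row of each process: [1, 21]
ranges = [starts[p]:stops[p] for p in eachindex(own_rows)]
# ranges == [1:20, 21:30]
```

Whatever constructor the package exposes would have to be fed equivalent information (either the counts or the ranges themselves).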
Replies: 5 comments
-
There are different types of `PRange` constructors. The library should be flexible enough to implement FEM efficiently on distributed-memory machines. See e.g. https://github.com/gridap/GridapDistributed.jl/blob/700f9980aa9edccad5bc7ffa5e29c18cc353cf84/src/FESpaces.jl#L233
BTW, if you plan to use this for FEM applications, perhaps we can join efforts, e.g. by improving linear solver support for systems represented with `PSparseMatrix`. Check also the package https://github.com/gridap/GridapDistributed.jl
-
Great, thanks.
Yeah, definitely. Are there other solvers than …?
-
Yes, but `cg` needs preconditioners in practice, and AlgebraicMultigrid.jl does not work in parallel with PartitionedArrays.jl. In fact, this could be a nice student project: extend AlgebraicMultigrid.jl to work in parallel. We also have an inefficient implementation of … In practice, we are currently using solvers from PETSc via the wrappers in https://github.com/gridap/GridapPETSc.jl. Most of these wrappers could be factored out to isolate them from the Gridap-related parts.
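As a serial illustration of why `cg` wants a preconditioner in practice (and of the kind of AMG support that, per the above, does not yet exist for `PSparseMatrix`), here is a hedged sketch using IterativeSolvers.jl and AlgebraicMultigrid.jl on an ordinary `SparseMatrixCSC`; the 1D Laplacian test problem and size are made up for the example:

```julia
# Serial-only sketch: CG with and without an AMG preconditioner on a plain
# SparseMatrixCSC. The thread's point is that this does NOT yet work on PSparseMatrix.
using SparseArrays, IterativeSolvers, AlgebraicMultigrid

n = 100
# 1D Laplacian: symmetric positive definite, a standard CG test problem.
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = ones(n)

x_plain = cg(A, b)                           # unpreconditioned CG
ml = ruge_stuben(A)                          # classical (Ruge-Stuben) AMG hierarchy
x_amg = cg(A, b; Pl = aspreconditioner(ml))  # CG with AMG as left preconditioner
```

Extending this so the AMG setup and the preconditioned solve both run on distributed data is exactly the gap the student project would fill.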
-
Perhaps worth putting up as a GSoC suggestion (https://discourse.julialang.org/t/seeking-julia-mentors-and-projects-for-gsoc-2022/74328) and see if anyone bites?
Yeah, I noticed that package too, but just to clarify: those solvers work with their own matrices, not `PSparseMatrix`. It would definitely be good to factor that out, though; I guess it should live in PETSc.jl?
-
We have conversions from … This is one of the next things I believe the package needs, and it is not so difficult. I believe that we need to consider several possible data layouts, not just one as now. Each solver might want a different one.