High-performance computing in geosciences and adjoint-based optimisation with Julia on GPUs
The computational resources for this workshop are provided by the GLiCID mesocentre.
- Login to the GLiCID Jupyter Hub (and troubleshooting)
- Brief intro to Julia for HPC 📖
- Performance, CPUs, GPUs, array and kernel programming
- Presentation of the challenge of the workshop 📖
- Optimising injection/extraction from a heterogeneous reservoir
- Hands-on I - solving the forward problem 💻
- Steady-state diffusion problem
- The accelerated pseudo-transient method
- From CPU to GPU array programming
- Kernel programming (performance)
- CPU "kernel" programming -> multi-threading
- GPU kernel programming
- Presentation of the optimisation problem 📖
- The adjoint method
- Julia and the automatic differentiation (AD) tools
- Hands-on II - HPC GPU-based inversions 💻
- The adjoint problem and AD
- GPU-based adjoint solver using Enzyme.jl
- Sensitivity analysis
- Gradient-based inversion (Gradient descent - GD)
- Optional exercises 💻
- Use Optim.jl as optimiser
- Going for 3D
- Make combined loss (pressure + flux)
- Wrapping up & outlook 🍺
The goal of this workshop is to develop a fast iterative GPU-based solver for elliptic equations and use it to:
- Solve a steady-state subsurface flow problem (geothermal operations: injection and extraction of fluids)
- Invert for the subsurface permeability given a sparse array of fluid pressure observations
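In a minimal sketch (the symbols below are our notation for illustration, not necessarily the one used in the notebooks), the forward problem finds the fluid pressure p for a given permeability field K and source/sink term q_f, and the inversion minimises the misfit to the sparse pressure observations:

```math
\nabla\cdot\left(K\,\nabla p\right) = q_f ,\qquad
\min_{K}\; J(K) = \frac{1}{2}\sum_{i\,\in\,\mathrm{obs}} \left( p(\mathbf{x}_i; K) - p_i^{\mathrm{obs}} \right)^2
```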
(Animation: anim.mp4)
We will not use any "black-box" tooling, but will instead try to develop concise and performant codes (300 lines of code, max) that execute on graphics processing units (GPUs). We will also use the automatic differentiation (AD) capabilities of the differentiable Julia language to automate the calculation of the adjoint solutions in the gradient-based inversion procedure.
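To give a flavour of the array-programming style we will start from, here is a minimal sketch of computing Darcy fluxes with CUDA.jl; the grid size, fields and flux expressions are illustrative only and not the workshop's actual setup:

```julia
using CUDA

# Illustrative grid and fields (not the workshop's actual parameters)
nx, ny = 128, 128
dx = dy = 1.0
K = CUDA.ones(Float64, nx, ny)   # permeability field
P = CUDA.rand(Float64, nx, ny)   # fluid pressure field

# Darcy fluxes on interior cell faces, written with broadcasting.
# The very same lines run on the CPU if K and P are plain Arrays.
qx = -0.5 .* (K[1:end-1, :] .+ K[2:end, :]) .* (P[2:end, :] .- P[1:end-1, :]) ./ dx
qy = -0.5 .* (K[:, 1:end-1] .+ K[:, 2:end]) .* (P[:, 2:end] .- P[:, 1:end-1]) ./ dy
```

Because broadcasting dispatches on the array type, moving from CPU to GPU array programming largely amounts to switching Array for CuArray.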
The main Julia packages we will rely on are:
- CUDA.jl for GPU computing on Nvidia GPUs
- Enzyme.jl for AD on GPUs
- CairoMakie.jl for plotting
- Optim.jl to extend the "vanilla" gradient-descent procedure
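As a quick taste of Enzyme.jl, here is a toy CPU-only reverse-mode AD example with a made-up loss function, just to show the call pattern (the workshop's actual loss and its GPU usage will differ):

```julia
using Enzyme

# Toy scalar loss: sum of squared misfits against a target value of 1.0
loss(x) = sum(abs2, x .- 1.0)

x  = rand(4)
dx = zero(x)   # "shadow" array: the gradient gets accumulated into it

# Reverse-mode AD: after this call, dx holds ∂loss/∂x
autodiff(Reverse, loss, Active, Duplicated(x, dx))

dx ≈ 2 .* (x .- 1.0)   # true
```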
Most of the workshop is hands-on. Changes to the scripts (Jupyter notebooks for this workshop) are incremental and should allow you to build up complexity throughout the 2 days. Blanked-out notebooks for most of the steps are available in the notebooks folder. Solution notebooks (following the s_xxx.ipynb pattern) will be shared at some point in the notebooks_solutions folder. (Script versions of the notebooks are available in the corresponding scripts and scripts_solutions folders.)
Useful resources:
- The Julia language: https://julialang.org
- PDE on GPUs ETH Zurich course: https://pde-on-gpu.vaw.ethz.ch
- Julia Discourse (Julia Q&A): https://discourse.julialang.org
- Julia Slack (Julia dev chat): https://julialang.org/slack/
To start, let's make sure that everyone can connect to the GLiCID Jupyter Hub in order to access GPU resources: https://nuts-workshop.glicid.fr/
To log in to the GLiCID Jupyter Hub, use the credentials you received in the second email (subject: Your account has been validated) after following the account creation procedure (see the PDF attached to the info mail from Friday 17.05.24).
If all went smoothly, you should be able to open and execute the notebooks/visu_2D.ipynb notebook, which produces a 2D visualisation figure.
Note
Deploy: The Jupyter notebooks are generated automatically through Literate.jl-powered literate programming in Julia, by running using Pkg; Pkg.add("Literate") and then the deploy_notebooks.jl script.
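For reference, a minimal sketch of that deploy step, assuming deploy_notebooks.jl sits at the repository root and is meant to be included from a Julia session (running it as julia deploy_notebooks.jl from a shell should work equally well):

```julia
using Pkg; Pkg.add("Literate")    # one-off: install Literate.jl
include("deploy_notebooks.jl")    # regenerate the .ipynb notebooks from the literate scripts
```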
Warning
The current ipynb notebooks will fail on plotting if run in the VS Code native notebook environment, because IJulia.clear_output(true) is not supported there.