Sync meeting 2023-07-07 with CernVM-FS developers on Best Practices for CernVM-FS on HPC tutorial
- https://github.com/multixscale/cvmfs-tutorial-hpc-best-practices
- online tutorial, focused on (Euro)HPC system administrators
- aiming for Fall 2023 (Sept-Oct-Nov)
- collaboration between MultiXscale/EESSI partners and CernVM-FS developers
- tutorial + improvements to CernVM-FS docs
- similar approach to introductory tutorial by Kenneth & Bob in 2021, see https://cvmfs-contrib.github.io/cvmfs-tutorial-2021/
- format: tutorial website (+ CVMFS docs) + accompanying slide deck
Attending:
- CernVM-FS: Laura, Jakob, Valentin
- EESSI/MultiXscale: Kenneth, Bob, Lara, Alan
- Current status
- repository set up: https://github.com/multixscale/cvmfs-tutorial-hpc-best-practices
- tutorial website rendered at https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices
- is automatically updated when updates are pushed to the `main` branch, via GitHub Actions
- preliminary structure in place: https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/#tutorial-contents
- To discuss
- CVMFS is being installed on some EuroHPC systems through MultiXscale
- already on Vega (Slovenia), see https://doc.vega.izum.si/cvmfs
- working on Karolina (Czech Republic), see https://docs.it4i.cz/karolina/hardware-overview
- reached out to Meluxina (Luxembourg), no answer yet
- will also target Deucalion (Portugal)
- see https://eurohpc-ju.europa.eu/supercomputers/our-supercomputers_en
- organise tutorial along with CASTIEL2, to get attention of EuroHPC centres?
- we don't expect to get help from them on the tutorial itself
- set up an issue per section to discuss how to flesh out that section, what should be covered, references to existing material + docs, etc.?
- EESSI as a running example
- should we also briefly cover other "flagship" CVMFS repos?
- TODO
- plan next sync meeting (Sept'23?)
- sync up with Dave, perhaps once outline of tutorial (subsections) is a bit more worked out?
- Tue 5 Sept 2023 14:00 CEST + (CVMFS coord meeting on Mon 11 Sept 17:00 CEST)
- give more people write access to repository (Laura, Jakob, Valentin, ...)
- AP: Kenneth
- add EESSI to CernVM-FS configuration repo, once switch to eessi.io domain is done (work-in-progress)
- write tutorial contents
- split up the work by section, in pairs?
- [Kenneth, Jakob] What is CernVM-FS?
- incl. terminology
- "on-demand streaming software installations"
- [Lara, Bart/Ryan, Jakob] European Environment for Scientific Software (EESSI)
- maybe rename as "flagship CernVM-FS repositories"?
- mainly to show which communities are using it
- incl. LCG stacks, CERN container repo, ComputeCanada, ...
- [Kenneth, Valentin] Accessing a CernVM-FS repository
- mounting with autofs vs manual mount: advantages & disadvantages (see the mount sketch at the end of these notes)
- [Kenneth/Bob, Laura, Dave? (or someone else from US)] Configuring CernVM-FS on HPC infrastructure
- controlling which domains can be accessed
- network settings, timeouts, etc. (see the configuration sketch at the end of these notes)
- turn-key configuration for EESSI, can be customized
- should there be a central place to share sensible CernVM-FS configurations for HPC systems, e.g. a standard EuroHPC configuration?
- makes sense to discuss in next CVMFS coordination meeting
- may be split up in multiple pages
- [Bob, Laura] Troubleshooting and debugging CernVM-FS
- current troubleshooting section in CVMFS docs is a bit lacking
- half a dozen "steps" that can be used to help figure things out (a rough sketch is included at the end of these notes)
- see also https://github.com/EESSI/filesystem-layer#verification-and-usage
- indicate which steps require root access
- [Alan?, Laura] Performance aspects of CernVM-FS
- not in CVMFS docs, covered in publications (see Laura's CHEP paper, incl. TensorFlow - https://indico.jlab.org/event/459/contributions/11483/)
- OpenFOAM for OS jitter
- should we cover cache reuse?
- caching layer setup that Matt has (see CernVM workshop 2022)
- troubleshooting performance issues?
- [Bob, Jakob] Different storage backends for CernVM-FS
- rename to "Using S3 as a storage backend for CernVM-FS" (see the S3 sketch at the end of these notes)
- [Bob, Valentin] Containers and CernVM-FS (see the bind-mount sketch at the end of these notes)
- see:
- https://indico.cern.ch/event/1079490/contributions/4949462/attachments/2507133/4308160/slides-cvmfs-csi.pdf
- https://indico.cern.ch/event/1079490/contributions/4940923/attachments/2507377/4309143/ATLAS%20Cloud%20R&D,%20Kubernetes%20and%20CVMFS%20(1).pdf
- Getting started with CernVM-FS (from scratch)
- schedule date for online tutorial
- stay away from Nov'23 (clashes with SC'23)
- Oct'23 may be too ambitious?
- early Dec'23, or after mid Jan'24?
Bob, Lara, Kenneth
- next sync meeting early Sept'23
- test tutorial with SURF?
- split up in subsections, based on notes in https://github.com/multixscale/meetings/wiki/Sync-meeting-2023-05-25-with-CernVM-FS-developers-on-Best-Practices-for-CernVM-FS-on-HPC-tutorial
  - (Kenneth) What is CernVM-FS?
  - (Lara) EESSI
  - (Kenneth) Accessing a repository
  - (Kenneth) Configuration on HPC systems
  - (Bob) Troubleshooting and debugging
  - (Alan?) Performance aspects
  - (Bob) Storage backends
  - (Bob) Containers
  - (Getting started with CernVM-FS)
Notes available at https://github.com/multixscale/meetings/wiki/Sync-meeting-2023-05-25-with-CernVM-FS-developers-on-Best-Practices-for-CernVM-FS-on-HPC-tutorial
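
To make some of the planned sections a bit more concrete, a few hedged sketches follow (referenced from the TODO items above); none of this is agreed tutorial content, and all hostnames, paths, and values are placeholders. First, for "Accessing a CernVM-FS repository": a minimal sketch contrasting autofs-based mounting with a manual mount, assuming the `cvmfs` and `cvmfs-config-default` client packages are already installed and using `cvmfs-config.cern.ch` only as an example repository.

```
# via autofs (default client setup): repositories are mounted on first access
sudo cvmfs_config setup                 # configure autofs for /cvmfs (requires root)
ls /cvmfs/cvmfs-config.cern.ch          # first access triggers the automount

# manual mount (typically with autofs disabled for /cvmfs): more control over
# when and where a repository is mounted, but has to be done per repository
sudo mkdir -p /cvmfs/cvmfs-config.cern.ch
sudo mount -t cvmfs cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch
```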
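For "Configuring CernVM-FS on HPC infrastructure": a sketch of site-level client settings in `/etc/cvmfs/default.local` (editing requires root); the repository list, proxy URL, cache path, quota, and timeouts are placeholders that would differ per system.

```
# /etc/cvmfs/default.local -- site-specific client settings (placeholders)

# restrict which repositories may be mounted on this system
CVMFS_REPOSITORIES=cvmfs-config.cern.ch
CVMFS_STRICT_MOUNT=yes            # only allow repositories listed above

# network: point clients at a local Squid proxy where available
CVMFS_HTTP_PROXY="http://proxy.example.org:3128"
# CVMFS_CLIENT_PROFILE=single     # alternative for single machines: direct connection, no proxy

# local cache: size limit (in MB) and location (ideally node-local storage)
CVMFS_QUOTA_LIMIT=20000
CVMFS_CACHE_BASE=/local/cvmfs-cache

# network timeouts (in seconds), with and without a proxy
CVMFS_TIMEOUT=10
CVMFS_TIMEOUT_DIRECT=30
```

Changes can be checked with `cvmfs_config showconfig <repo>` and `cvmfs_config probe`.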
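For "Troubleshooting and debugging CernVM-FS": a rough sketch of the "half a dozen steps" idea, indicating which ones need root; the repository name is again just an example.

```
sudo cvmfs_config chksetup                           # sanity-check the client installation (root)
cvmfs_config probe                                   # try to mount all configured repositories
cvmfs_config stat -v cvmfs-config.cern.ch            # per-repository status: cache, proxy, revision, ...
cvmfs_config showconfig cvmfs-config.cern.ch         # effective configuration for one repository
sudo cvmfs_talk -i cvmfs-config.cern.ch cache size   # query the running client process (root)
sudo tail /var/log/messages                          # client errors go to syslog (path varies per distro)
```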
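For "Using S3 as a storage backend for CernVM-FS": a hedged sketch of the server-side setup; endpoint, bucket, keys, and repository name are all placeholders.

```
# /etc/cvmfs/s3.conf -- S3 endpoint and credentials (placeholders)
CVMFS_S3_HOST=s3.example.org
CVMFS_S3_BUCKET=cvmfs-example
CVMFS_S3_ACCESS_KEY=<access key>
CVMFS_S3_SECRET_KEY=<secret key>
```

The repository would then be created on the publisher node with something like `sudo cvmfs_server mkfs -s /etc/cvmfs/s3.conf -w http://s3.example.org/cvmfs-example example.repo.org`, where the `-w` URL is the location clients (or a stratum 1) fetch data from.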
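For "Containers and CernVM-FS": the simplest case is probably bind-mounting an already-mounted `/cvmfs` from the host into a container; a sketch assuming Apptainer (or Docker) is available and `/cvmfs` is mounted on the host.

```
# expose the host's /cvmfs inside an Apptainer container via a bind mount
apptainer shell --bind /cvmfs:/cvmfs docker://ubuntu:22.04

# similar idea with Docker (shared mount propagation so autofs mounts show up)
docker run -it --volume /cvmfs:/cvmfs:shared ubuntu:22.04 ls /cvmfs
```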