
Meeting on CernVM-FS, Sept 18th 2020


Slides

  • Current CernVM-FS Software Installation Practices at CERN (Jakob) (slides)
  • Key4HEP, a Case Study for Software Deployment (Valentin) (slides)
  • CernVM-FS in EESSI (Kenneth & Bob) (slides)

Attendees (20)

  • Walther Blom (Dell Technologies)
  • Jakob Blomer (CERN)
  • Kees de Jong (SURF)
  • Bob Droge (Univ. of Groningen)
  • Gerardo Ganis (CERN)
  • Kenneth Hoste (HPC-UGent)
  • Terje Kvernes (Univ. of Oslo)
  • Maxim Masterov (SURF)
  • Pere Mato (CERN)
  • Simone Mosciatti (CERN)
  • Oriol Mula Valls (HPCNow!)
  • Alan O'Cais (Jülich Supercomputing Centre)
  • Thomas Röblitz (Univ. of Bergen)
  • Ryan Taylor (Compute Canada)
  • Peter Stol (VU Amsterdam)
  • Andrea Valenzuela (CERN)
  • Jaco van Dijk (Dell Technologies)
  • Caspar van Leeuwen (SURF)
  • Davide Vanzo (Microsoft)
  • Valentin Volkl (CERN)

Notes

(by Kenneth Hoste)

  • HEP software stack (see Jakob's slides)
    • does "opt" means just "-O2" generic builds, or also optimized for different processor families?
      • some optimizations are also being done for specific processors like Haswell
      • but only where it matters, stack is "layered" (optimized binaries on top of generic binaries where performance is irrelevant)
    • software is made relocatable afterwards via a relocation script (see the relocation sketch below)
      • part of the LCG CMake tooling (see sft.cern.ch)
  • Key4HEP software stack
    • cern.ch/key4hep
    • built with Spack
      • EasyBuild has not really been considered as an alternative so far
    • mix of optimized and generic binaries?
    • no relocation
    • builds for specific operating systems
      • relying on $LD_LIBRARY_PATH
    • aimed at supporting developers
    • question by Caspar: where is this software used?
      • mostly grid
    • question by Kenneth: build isolation from host OS?
      • mostly relying on Spack's compiler wrappers for this (see the compiler wrapper sketch below)
  • Q&A
    • cvmfs enter
      • coming up in future CernVM-FS release
      • very close to what's being done in the EESSI pilot
        • Singularity container + fuse-overlayfs to get the illusion of write access to /cvmfs (see the container sketch below)
      • relies on modern OS features (user namespaces) to avoid the need for root permissions
    • there are some corner cases that won't work well with the overlay + tarball approach for publishing software
      • deleting files, overwriting existing files, changing permissions, ...
      • advice from Jakob: create the tarball from within the overlay (the merged view), rather than tarring the upper overlay directory outside of the overlay (see the tarball sketch below)
    • (Alan) use of proxy for client nodes that don't have internet access
      • alien caches seemed like a better solution?
        • only needed because there's no Stratum-1 within the HPC network
        • Ryan doesn't recommend preloading it
          • no size management (no garbage collection)
          • preload scripts need to be run again to pick up changes
          • the alien cache may get hammered; a single file + loop device, or a 2-level cache, helps there (see the alien cache sketch below)
        • also needs direct access to the Stratum-0?
    • this helps with cleanup: files can safely be removed from the cache, since clients will re-request them if needed
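
Sketches

Relocation sketch: the relocation script mentioned in the HEP stack notes is part of the LCG CMake tooling (see sft.cern.ch). As a rough illustration of the idea only (the prefixes and paths below are hypothetical, not taken from the actual LCG script), a post-install relocation pass can boil down to rewriting the build-time prefix in the installed files:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a post-install relocation pass; the real script is
# part of the LCG CMake tooling (see sft.cern.ch).
import os

OLD_PREFIX = b"/scratch/build/install"           # hypothetical build-time prefix
NEW_PREFIX = b"/cvmfs/sft.cern.ch/lcg/releases"  # hypothetical final location

def relocate_tree(root):
    """Replace the build-time prefix in all regular files under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # symlink targets would need separate handling
            with open(path, "rb") as f:
                data = f.read()
            if OLD_PREFIX not in data:
                continue
            # Fine for text files (scripts, .pc and .cmake files); binaries
            # need same-length prefix padding or RPATH patching instead.
            with open(path, "wb") as f:
                f.write(data.replace(OLD_PREFIX, NEW_PREFIX))

if __name__ == "__main__":
    relocate_tree("/tmp/install-tree")  # hypothetical install tree
```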
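Compiler wrapper sketch: Spack's build isolation from the host OS mostly comes from compiler wrappers that sit in front of the real compiler and inject or sanitize flags. This is only a minimal illustration of that idea, not Spack's actual wrapper (which is far more elaborate); the environment variables here are made up for the sketch:

```python
#!/usr/bin/env python3
# Minimal illustration of the compiler-wrapper idea behind Spack's build
# isolation; NOT Spack's actual wrapper. Installed in place of the compiler.
import os
import sys

REAL_CC = os.environ.get("WRAPPED_CC", "/usr/bin/gcc")  # hypothetical variable

# Hypothetical list of dependency prefixes the build should see, instead of
# whatever happens to be installed on the host OS.
dep_prefixes = [p for p in os.environ.get("WRAPPED_DEPS", "").split(":") if p]

injected = []
for prefix in dep_prefixes:
    injected += [
        "-I" + os.path.join(prefix, "include"),
        "-L" + os.path.join(prefix, "lib"),
        "-Wl,-rpath," + os.path.join(prefix, "lib"),
    ]

# Drop flags that would pull in host locations behind the build's back.
args = [a for a in sys.argv[1:] if a not in ("-I/usr/include", "-L/usr/lib")]

os.execv(REAL_CC, [REAL_CC] + injected + args)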
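Container sketch: `cvmfs enter` is close to what the EESSI pilot does with a Singularity container plus fuse-overlayfs stacked on a read-only CernVM-FS mount, with user namespaces avoiding the need for root. The repository, image, and paths below are examples in the spirit of the EESSI pilot setup, not a verbatim EESSI command:

```python
#!/usr/bin/env python3
# Sketch in the spirit of the EESSI pilot: a Singularity container that
# mounts a repository read-only with cvmfs2 and stacks fuse-overlayfs on
# top, giving the illusion of write access to /cvmfs without root
# permissions (user namespaces do the heavy lifting). Repository, image,
# and paths are examples only.
import os
import subprocess

repo = "pilot.eessi-hpc.org"                   # example repository
image = "docker://eessi/client-pilot:centos7"  # example container image
upper, work = "/tmp/overlay/upper", "/tmp/overlay/work"
for d in (upper, work):
    os.makedirs(d, exist_ok=True)

subprocess.run([
    "singularity", "shell",
    # read-only CernVM-FS mount inside the container ...
    "--fusemount", f"container:cvmfs2 {repo} /cvmfs_ro/{repo}",
    # ... with a writable overlay stacked on top of it
    "--fusemount", ("container:fuse-overlayfs"
                    f" -o lowerdir=/cvmfs_ro/{repo}"
                    f" -o upperdir={upper} -o workdir={work}"
                    f" /cvmfs/{repo}"),
    image,
], check=True)
```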
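Tarball sketch: in the raw upper overlay directory, deletions, overwrites, and permission changes are encoded as overlayfs artifacts (whiteout device nodes, opaque-directory attributes) that a plain tar of that directory does not translate into the intended end state; hence Jakob's advice to create the tarball from within the overlay, i.e. from the merged view, which already shows the end result. Paths below are hypothetical:

```python
#!/usr/bin/env python3
# Sketch of Jakob's advice: tar the merged overlay view (which already shows
# the intended end state), not the raw upper directory, where deletions and
# permission changes are encoded as overlayfs artifacts (whiteout character
# devices, opaque-dir xattrs). Paths are hypothetical.
import os
import tarfile

MERGED = "/cvmfs/pilot.eessi-hpc.org"  # merged overlay view (lower + upper)
NEW_SUBTREE = "software/2020.09"       # example: the subtree just installed

with tarfile.open("/tmp/new-software.tar.gz", "w:gz") as tar:
    tar.add(os.path.join(MERGED, NEW_SUBTREE), arcname=NEW_SUBTREE)
```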
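Alien cache sketch: an alien cache is an externally managed cache directory, typically on shared storage; CernVM-FS does not manage its size (no garbage collection), which is part of Ryan's caution. A minimal client-side configuration sketch, written here as Python generating the config file to keep it self-contained; the paths and proxy are examples, while the parameter names come from the CernVM-FS client documentation:

```python
#!/usr/bin/env python3
# Sketch of a client configuration using an alien cache on shared storage.
# Paths and proxy are examples. To avoid hammering the shared filesystem,
# the cache directory can itself live on a loopback-mounted single-file
# filesystem image, as suggested in the notes.
CONFIG = """\
CVMFS_ALIEN_CACHE=/shared/cvmfs-cache  # externally managed, no garbage collection
CVMFS_SHARED_CACHE=no                  # required when an alien cache is used
CVMFS_WORKSPACE=/var/lib/cvmfs         # node-local workspace (locks etc.)
CVMFS_HTTP_PROXY=http://proxy.cluster.local:3128  # example site proxy
"""

with open("/etc/cvmfs/default.local", "w") as f:  # standard client config file
    f.write(CONFIG)
```

The 2-level cache mentioned in the notes (a node-local cache in front of the shared alien cache) is also supported by CernVM-FS via its tiered cache configuration; see the CernVM-FS documentation for the details.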