
Flexible tensor reads and optional double-precision inference

@rouson rouson released this 29 Oct 23:35
· 9 commits to main since this release
9a277f2

This release offers

  • A new version of demo/app/train-cloud-microphysics.f90 that reads tensor component names from demo/training_configuration.json and then reads the named variables from training data files written by Berkeley Lab's ICAR fork. ☁️
  • A switch to LLVM flang as the supported compiler. (We have submitted bug reports to other compiler vendors.) 🪃
  • Optional double-precision inference, as demonstrated in demo/app/infer-aerosol.f90. 🔢
  • Non_overridable inference and training procedures. We are collaborating with LLVM flang developers at AMD on leveraging this feature to automatically offload parallel inference and training to graphics processing units (GPUs). 📈
  • A global renaming of this software from Inference-Engine to Fiats in all source code and documentation. 🌐
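
For illustration, the tensor-name lookup might be driven by a JSON block like the one below. The key and variable names here are assumptions chosen for the sketch, not the actual schema; consult demo/training_configuration.json in the repository for the real layout.

```json
{
  "tensor names": {
    "inputs": ["pressure", "potential_temperature", "water_vapor"],
    "outputs": ["precipitation"]
  }
}
```

Reading the variable names from configuration rather than hard-coding them is what lets the same training app consume data files whose variable naming differs between ICAR runs.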
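
The non_overridable attribute mentioned above is a standard Fortran binding attribute for type-bound procedures. A minimal sketch, using a hypothetical engine_t type rather than the actual Fiats source, shows the idea: the binding cannot be overridden in extended types, so the compiler can resolve the call statically, which helps when offloading to GPUs.

```fortran
! Minimal sketch of a non_overridable type-bound procedure.
! engine_t and infer are hypothetical names, not the Fiats API.
module engine_m
  implicit none

  type :: engine_t
  contains
    ! non_overridable forbids overriding this binding in any
    ! extension of engine_t, permitting static dispatch.
    procedure, non_overridable :: infer
  end type

contains

  pure function infer(self, x) result(y)
    class(engine_t), intent(in) :: self
    real, intent(in) :: x
    real :: y
    y = 2*x  ! placeholder computation for the sketch
  end function

end module
```

Because the call target is known at compile time, a parallel loop over inferences contains no virtual dispatch, which is one prerequisite for the automatic GPU offload work with AMD's LLVM Flang developers described above.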

What's Changed

  • Replace gfortran in CI with flang-new by @ktras in #200
  • Support building cloud-microphysics with LLVM Flang by @ktras in #203
  • Update filenames being read in by infer_aerosol by @ktras in #204
  • Remove unallowed whitespace from project name in demo/fpm.toml by @ktras in #209
  • Fix GELU & sigmoid activation precision by @rouson in #214
  • Make all procedures involved in inference and training non_overridable by @rouson in #215
  • Simplify class relationships by @rouson in #217
  • Rename Inference-Engine to Fiats by @rouson in #218
  • Merge multi-precision support into main by @rouson in #213
  • Generalize train cloud microphysics by @rouson in #220
  • doc(README): "tensor names" in JSON configuration by @rouson in #221

Full Changelog: 0.14.0...0.15.0