Releases · LLNL/magpie
Magpie 1.56
- Use --time instead of SBATCH_TIMELIMIT in sbatch scripts (see the sketch below).
- Fix a corner case in the MAGPIE_TIMELIMIT_MINUTES calculation when the job time limit is under one hour.
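A minimal sketch of the --time change, assuming a typical Magpie sbatch submission script; the time value is illustrative:

    # Before 1.56: the limit was passed through Slurm's SBATCH_TIMELIMIT
    # input environment variable, e.g.
    #   export SBATCH_TIMELIMIT=60
    # From 1.56 on, the limit is declared directly in the sbatch script:
    #SBATCH --time=60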
Magpie 1.55
Various minor fixes.
Magpie 1.54
- Add basic acl checks for Hadoop.
- Set the default HDFS umask to 077 (see the check below).
- Add shared secret to Spark.
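A rough way to check the 077 umask default (owner-only permissions on new files) against a running HDFS; the path is hypothetical, and in stock Hadoop the umask is usually carried by the fs.permissions.umask-mode property:

    # Create an empty file and inspect its mode; under umask 077 expect
    # owner-only permissions (-rw-------), with group/other bits cleared.
    hdfs dfs -touchz /user/${USER}/umask-probe
    hdfs dfs -ls /user/${USER}/umask-probe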
Magpie 1.53
- Update various path defaults in scripts and submission-scripts.
- Remove MAGPIE_USERNAME, just use USER instead.
- Use HOME instead of /home/${USER} in various locations.
- Documentation updates.
Magpie 1.52
- Support MAGPIE_USERNAME environment variable (see the sketch after this list).
- Default most paths to use MAGPIE_USERNAME instead of the generic 'username'.
- No longer require users to set MAGPIE_TIMELIMIT_MINUTES when using Moab.
- Make default SPARK_LOCAL_SCRATCH_DIR a Lustre path.
- In the Spark w/o HDFS submission script, default to setting up network-based scratch space.
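A sketch of how the 1.52 defaults could fit together in a submission script; the Lustre path is illustrative, not the exact default shipped in the scripts:

    # MAGPIE_USERNAME replaces the generic 'username' placeholder in paths
    export MAGPIE_USERNAME="${USER}"
    # e.g. the Lustre-backed Spark scratch directory keyed off that name
    export SPARK_LOCAL_SCRATCH_DIR="/lustre/${MAGPIE_USERNAME}/sparkscratch"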
Magpie 1.51
- Support Storm 0.9.3.
- Update primary Hadoop support to 2.6.0.
- Update primary Pig to 0.14.0.
- Update primary Spark to 1.2.0. Add appropriate patches for support.
- Update primary HBase support to 0.98.9.
- Add new magpie-apache-download-and-setup.sh convenience script.
- Various reorganization of script files and script directories.
Magpie 1.50
Support SPARK_DEPLOY_SPREADOUT option.
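This presumably maps onto Spark standalone's spark.deploy.spreadOut setting, which controls whether applications are spread across nodes or packed onto as few as possible. A hedged sketch of setting it in a submission script; the yes/no value format is an assumption:

    # Pack executors onto as few nodes as possible instead of spreading
    # them across the cluster (assumed value format).
    export SPARK_DEPLOY_SPREADOUT=no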
Magpie 1.49
- Support the Moab scheduler w/ the Torque resource manager through the msubtorque submission type (see the new submission scripts in script-msub-torque and the sketch below).
- Fix scripts/magpie-gather-config-files-and-logs-script.sh to gather all stderr/stdout from the Spark work directories.
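A minimal sketch of submitting under this new mode, assuming a filled-in script from the script-msub-torque directory; the script filename is hypothetical:

    # Submit a Magpie job through Moab's msub with Torque as the
    # underlying resource manager (hypothetical script name).
    msub script-msub-torque/magpie.msub-torque-hadoop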
Magpie 1.48
Fix build for correct release.
Magpie 1.47
- Support HADOOP_SLAVE_CORE_COUNT and SPARK_SLAVE_CORE_COUNT environment variables.
- Support 'hdfsonly' option to HADOOP_MODE for clarity (see the sketch after this list).
- Support convenience environment variables HADOOP_NAMENODE & HADOOP_NAMENODE_PORT.
- Add additional environment variables into MAGPIE_ENVIRONMENT_VARIABLE_SCRIPT.
- Support HDFS federation (experimental).
- Support configuration of the Spark storage memory fraction.
- Support configuration of the Spark shuffle memory fraction.
- Default Akka threads to the core count in Spark setup.
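A sketch combining several of the 1.47 additions in a submission script; the core counts are illustrative:

    # Run only the HDFS daemons, without starting a MapReduce job
    export HADOOP_MODE="hdfsonly"
    # Override per-slave core counts instead of relying on the defaults
    export HADOOP_SLAVE_CORE_COUNT=16
    export SPARK_SLAVE_CORE_COUNT=16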