From 65340489228b85b83cba4de7d7a8dff75e0d015b Mon Sep 17 00:00:00 2001
From: Tony Craig
Date: Mon, 7 Mar 2022 16:41:21 -0800
Subject: [PATCH] Update cgridDEV to main #8d7314f (CICE6.3.1 release) (#64)

* update icepack, rename snwITDrdg to snwitdrdg (#658)
* Change max_blocks for rake tests on izumi (nothread). (#665)
* Fix some rake tests for izumi
* fix some rake tests
* Makefile: make Fortran object files depend on their dependency files (#667)

When 'make' is invoked on the CICE Makefile, the first thing it does is try to
make the included dependency files (*.d) (which are in fact Makefiles
themselves) [1], in alphabetical order. The rule that makes the dep files has
the dependency generator, 'makdep', as a prerequisite, so when processing the
first dep file, make notices 'makdep' does not exist and proceeds to build it.
If for whatever reason this compilation fails, make will then proceed to the
second dep file, notice that it recently tried and failed to build its
dependency 'makdep', give up on the second dep file, proceed to the third, and
so on. In the end, no dep file is produced. Make then restarts itself and
proceeds to build the code, which fails catastrophically because, with the
dependency files missing, the Fortran source files are not compiled in the
right order.

To avoid that, add a dependency on the dep file to the rules that make the
object files out of the Fortran source files. Since old-fashioned suffix rules
cannot have their own prerequisites [2], migrate the rules for the Fortran
source files to pattern rules [3] instead. While at it, also migrate the rule
for the C source files. With this new dependency, the build aborts early,
before trying to compile the Fortran sources, making it easier to understand
what has gone wrong.

Since we do not use suffix rules anymore, remove the '.SUFFIXES' line that
indicates which extensions to use suffix rules for (but keep the line that
eliminates all default suffix rules).
[1] https://www.gnu.org/software/make/manual/html_node/Remaking-Makefiles.html
[2] https://www.gnu.org/software/make/manual/html_node/Suffix-Rules.html
[3] https://www.gnu.org/software/make/manual/html_node/Pattern-Rules.html#Pattern-Rules

* Fix multi-pe advection=none bug (#664)
* update parsing scripts to improve robustness, fix multi-pe advection=none
* Update cice script to improve performance, including minor refactoring of
  parse_namelist and parse_settings to reduce cost, and the ability to use an
  already-set-up ice_in file from a prior case in the suite. Added
  commented-out timing ability in cice.setup. Change test default to PEND
  from FAIL.
* fix cice.setup for case
* add sedbak implementation to support Mac sed
* s/spend/spent
* nuopc/cmeps driver updates (#668)
* add debug_model feature
* add required variables and calls for tr_snow
* Main namelist debug (#671)
* Adding method to write erroneous namelist options
* Remove erroneous comma in abort_ice for namelist check
* Added check for zbgc_nml. I missed that namelist in this file.
* Added space and colons for namelist error output
* Added space and colons for namelist error output

Co-authored-by: David A. Hebert

* NUOPC/CMEPS cap updates (#670)
* updated orbital calculations needed for cesm
* fixed problems in updated orbital calculations needed for cesm
* update CICE6 to support coupling with UFS
* put in changes so that both ufsatm and cesm requirements for potential
  temperature and density are satisfied
* update icepack submodule
* Revert "update icepack submodule"

This reverts commit e70d1abcbeb4351195a2b81c6ce3f623c936426c.
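The pattern-rule change described in (#667) can be sketched roughly as follows. This is an illustrative fragment only — the variable names, flags, and the makdep invocation are assumptions, not the actual CICE Makefile:

```make
# Pattern rules instead of suffix rules, because suffix rules cannot
# carry extra prerequisites.  Each object now also depends on its
# generated dependency file, so a failure to build makdep aborts the
# build early instead of letting the Fortran sources compile in the
# wrong order.
%.o: %.F90 %.d
	$(FC) $(FFLAGS) -c $< -o $@

%.o: %.c %.d
	$(CC) $(CFLAGS) -c $< -o $@

# Dependency files are themselves made by makdep (invocation assumed).
%.d: %.F90 makdep
	./makdep $< > $@

# Keep the bare .SUFFIXES line that disables all built-in suffix
# rules; the '.SUFFIXES: <extensions>' list is no longer needed.
.SUFFIXES:
```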
* update comp_ice.backend with temporary ice_timers fix
* Fix threading problem in init_bgc
* Fix additional OMP problems
* changes for coldstart running
* Move the forapps directory
* remove cesmcoupled ifdefs
* Fix logging issues for NUOPC
* removal of many cpp-ifdefs
* fix compile errors
* fixes to get cesm working
* fixed white space issue
* Add restart_coszen namelist option
* Update NUOPC cap to work with latest CICE6 master
* nuopc,cmeps or s2s build updates
* fixes for dbug_flag
* Update nuopc2 to latest CICE master
* Fix some merge problems
* Fix dbug variable
* Manual merge of UFS changes
* fixes to get CESM B1850 compset working
* refactored ice_prescribed_mod.F90 to work with cdeps rather than the mct
  data models
* Fix use_restart_time
* changes for creating masks at runtime
* added ice_mesh_mod
* implemented area correction factors as option
* more cleanup
* Fix dragio
* Fix mushy bug
* updates to nuopc cap to resolve inconsistency between driver inputs and
  cice namelists
* changed error message
* added icepack_warnings_flush
* updates to get ice categories working
* updates to have F compset almost working with cice6 - still problems in
  polar regions - need to resolve 253K/cice6 versus 273K/cice5 differences
* changed tolerance of mesh/grid comparison
* added issues raised in PR
* Update CESM-CICE sync with new time manager
* Add back in latlongrid
* Add new advanced snow physics to driver
* Fix restart issue with land blocks
* Update mesh check in cap
* fix scam problems
* reintroduced imesh_eps check
* Put dragio in the namelist instead
* Remove redundant code
* Fix some indents

Co-authored-by: Mariana Vertenstein
Co-authored-by: apcraig
Co-authored-by: Denise Worthen

* Add CESM1_PIO for fill value check (#675)
* Add CESM1_PIO for fill value check
* Revert PIO_FILL_DOUBLE change for now
* Update the namelist read to make the group order flexible. (#677)
  - Remove recent update to namelist read that traps bad lines; it conflicts
    with the flexibility to read groups in random order, picked up by NAG.
  - change print* statements to write(nu_diag,*)
* Port to Narwhal and add perf_suite (#678)
* Add narwhal intel, gnu, cray, aocc. Add perf_suite.ts
* update narwhal_cray and perf_suite
* Update OMP (#680)
* Add narwhal intel, gnu, cray, aocc. Add perf_suite.ts
* update narwhal_cray and perf_suite
* Review and update OMP implementation
  - Fix call to timers in block loops
  - Fix some OMP Private variables
  - Test OMP Scheduling, add SCHEDULE(runtime) to some OMP loops
  - Review column and advection OMP implementation
  - Add OMP_TIMERS CPP option (temporary) to time threaded sections
  - Add timer_tmp timers (temporary)
  - Add omp_suite.ts test suite
  - Add ability to set OMP_SCHEDULE via options (ompscheds, ompscheds1,
    ompschedd1)
* Review diagnostics OMP implementation
  - Add timer_stats namelist to turn on extra timer output information
  - Add ICE_BFBTYPE and update bit-for-bit comparison logic in scripts
  - Update qc and logbfb testing
  - Remove logbfb and qcchkf tests; add cmplog, cmplogrest, cmprest set_env
    files to set ICE_BFBTYPE
  - Update documentation
* Update EVP OMP implementation
* Refactor puny/pi scalars in eap dynamics to improve performance
  - Update OMP in evp and eap
* Clean up
* Comment out temporary timers
* Update OMP env variables on Narwhal
* Update gaffney OMP_STACKSIZE
* update OMP_STACKSIZE on cori
* Update Onyx OMP_STACKSIZE. Update documentation
* Update OMP_STACKSIZE on mustang
* Update Tsfc values on land in various places in the code; this was
  affecting testing. Specifically fix upwind advection.
  - Comment out OMP in ice_dyn_evp_1d_kernel; it was producing non
    bit-for-bit results with different thread counts
* updating LICENSE.pdf for 2022
* seabed stress - remove if statements (#673)
* refactor seabed_stress. Bit for bit
* Removed if statement from stepu. Results are binary identical; however,
  taubx and tauby are now updated on all iterations instead of just the last
  one. Not used within iteration.
* changed capping from logical to numeric in order to remove if statement.
  Moved call to deformation out of loop
* clean dyn_finish, correct intent(inout) to intent(in) for Cw, resse Cb in
  stepu, remove if from seabed_stress_LKD
* Resolve conflicts after updating main
* modified environment for Freya to accommodate additional OMP commands
* Requested changes after review. Only changed in seabed stress; not bit for
  bit if cor=0.0. Added legacy comment in ice_dyn_finish
* move deformation to subcycling
* Update version and copyright. (#691)
  - Remove gordon and conrad machines.
  - Add setenv OMP_STACKSIZE commented out in env files
  - Update Icepack to fc4b809
* add OMP_STACKSIZE for koehr (#693)
* Update C/CD deformations calls to be consistent with main B changes.
  Update tauxbx, tauxby calculations on C/CD to be consistent with main B
  changes.
* Update OpenMP in C/CD implementation. Extend omp_suite to include C/CD
  tests
* reconcile recent merge problem
* set default value of capping to 0. in vp cases for backwards compatibility
* Set capping to 1.0 in vp consistent with evp; changes answers for vp
  configurations

Co-authored-by: David A. Bailey
Co-authored-by: Philippe Blain
Co-authored-by: Denise Worthen
Co-authored-by: daveh150
Co-authored-by: David A. Hebert
Co-authored-by: Mariana Vertenstein
Co-authored-by: Elizabeth Hunke
Co-authored-by: TRasmussen <33480590+TillRasmussen@users.noreply.github.com>
---
 LICENSE.pdf | Bin 80898 -> 113509 bytes
 .../cicedynB/analysis/ice_diagnostics.F90 | 13 +-
 cicecore/cicedynB/analysis/ice_history.F90 | 30 +-
 .../cicedynB/analysis/ice_history_bgc.F90 | 31 +-
 .../cicedynB/analysis/ice_history_drag.F90 | 30 +-
 .../cicedynB/analysis/ice_history_fsd.F90 | 29 +-
 .../cicedynB/analysis/ice_history_mechred.F90 | 29 +-
 .../cicedynB/analysis/ice_history_pond.F90 | 29 +-
 .../cicedynB/analysis/ice_history_snow.F90 | 46 ++-
 cicecore/cicedynB/dynamics/ice_dyn_eap.F90 | 212 ++++------
 cicecore/cicedynB/dynamics/ice_dyn_evp.F90 | 383 ++++++++----------
 cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90 | 23 +-
 cicecore/cicedynB/dynamics/ice_dyn_shared.F90 | 101 ++---
 cicecore/cicedynB/dynamics/ice_dyn_vp.F90 | 64 ++-
 .../dynamics/ice_transport_driver.F90 | 95 +++--
 .../cicedynB/dynamics/ice_transport_remap.F90 | 60 ++-
 cicecore/cicedynB/general/ice_init.F90 | 196 +++++++--
 cicecore/cicedynB/general/ice_step_mod.F90 | 26 +-
 .../infrastructure/comm/mpi/ice_timers.F90 | 36 +-
 .../infrastructure/comm/serial/ice_timers.F90 | 34 +-
 .../cicedynB/infrastructure/ice_domain.F90 | 27 +-
 cicecore/cicedynB/infrastructure/ice_grid.F90 | 31 ++
 .../infrastructure/ice_restart_driver.F90 | 11 +
 .../cicedynB/infrastructure/ice_restoring.F90 | 6 +-
 .../infrastructure/io/io_pio2/ice_restart.F90 | 11 +-
 cicecore/drivers/direct/hadgem3/CICE.F90 | 4 +-
 cicecore/drivers/mct/cesm1/CICE_RunMod.F90 | 1 -
 cicecore/drivers/mct/cesm1/CICE_copyright.txt | 4 +-
 .../drivers/mct/cesm1/ice_prescribed_mod.F90 | 27 +-
 cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 | 53 ++-
 cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 | 100 ++++-
 .../drivers/nuopc/cmeps/CICE_copyright.txt | 4 +-
 .../drivers/nuopc/cmeps/ice_comp_nuopc.F90 | 203 ++++----
 .../drivers/nuopc/cmeps/ice_import_export.F90 | 6 +-
 cicecore/drivers/nuopc/cmeps/ice_mesh_mod.F90 | 7 +-
 cicecore/drivers/nuopc/dmi/CICE.F90 | 4 +-
 cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 | 4 +-
 cicecore/drivers/standalone/cice/CICE.F90 | 4 +-
 .../drivers/standalone/cice/CICE_FinalMod.F90 | 5 +-
 .../drivers/standalone/cice/CICE_RunMod.F90 | 28 +-
 cicecore/shared/ice_init_column.F90 | 27 +-
 cicecore/version.txt | 2 +-
 configuration/scripts/cice.batch.csh | 20 +-
 configuration/scripts/cice.launch.csh | 9 +-
 configuration/scripts/cice.run.setup.csh | 6 +-
 configuration/scripts/cice.settings | 2 +
 configuration/scripts/ice_in | 1 +
 .../scripts/machines/Macros.conrad_cray | 57 ---
 .../scripts/machines/Macros.conrad_intel | 56 ---
 .../scripts/machines/Macros.conrad_pgi | 55 ---
 .../scripts/machines/Macros.gordon_cray | 57 ---
 ...{Macros.gordon_pgi => Macros.narwhal_aocc} | 22 +-
 ...{Macros.conrad_gnu => Macros.narwhal_cray} | 21 +-
 .../{Macros.gordon_gnu => Macros.narwhal_gnu} | 8 +-
 ...cros.gordon_intel => Macros.narwhal_intel} | 8 +-
 .../scripts/machines/env.banting_gnu | 3 +
 .../scripts/machines/env.banting_intel | 3 +
 .../scripts/machines/env.cesium_intel | 3 +
 .../scripts/machines/env.cheyenne_gnu | 6 +-
 .../scripts/machines/env.cheyenne_intel | 6 +-
 .../scripts/machines/env.cheyenne_pgi | 6 +-
 .../scripts/machines/env.compy_intel | 3 +
 .../scripts/machines/env.conda_linux | 3 +
 .../scripts/machines/env.conda_macos | 3 +
 configuration/scripts/machines/env.conrad_gnu | 77 ----
 .../scripts/machines/env.conrad_intel | 59 ---
 configuration/scripts/machines/env.conrad_pgi | 57 ---
 configuration/scripts/machines/env.cori_intel | 1 +
 configuration/scripts/machines/env.daley_gnu | 3 +
 .../scripts/machines/env.daley_intel | 3 +
 configuration/scripts/machines/env.fram_intel | 3 +
 configuration/scripts/machines/env.freya_gnu | 3 +-
 .../scripts/machines/env.freya_intel | 1 +
 configuration/scripts/machines/env.gaea_intel | 3 +
 .../scripts/machines/env.gaffney_gnu | 1 +
 .../scripts/machines/env.gaffney_intel | 1 +
 .../scripts/machines/env.gordon_intel | 59 ---
 configuration/scripts/machines/env.gordon_pgi | 57 ---
 configuration/scripts/machines/env.hera_intel | 3 +
 .../scripts/machines/env.high_Sierra_gnu | 3 +
 .../scripts/machines/env.hobart_intel | 3 +
 configuration/scripts/machines/env.hobart_nag | 3 +
 .../scripts/machines/env.koehr_intel | 3 +
 .../scripts/machines/env.millikan_intel | 3 +
 .../scripts/machines/env.mustang_intel18 | 2 +-
 .../scripts/machines/env.mustang_intel19 | 2 +-
 .../scripts/machines/env.mustang_intel20 | 2 +-
 .../scripts/machines/env.narwhal_aocc | 54 +++
 .../{env.conrad_cray => env.narwhal_cray} | 43 +-
 .../{env.gordon_gnu => env.narwhal_gnu} | 43 +-
 .../{env.gordon_cray => env.narwhal_intel} | 45 +-
 configuration/scripts/machines/env.onyx_cray | 1 +
 configuration/scripts/machines/env.onyx_gnu | 1 +
 configuration/scripts/machines/env.onyx_intel | 1 +
 .../scripts/machines/env.orion_intel | 3 +
 .../scripts/machines/env.phase3_intel | 3 +
 .../scripts/machines/env.testmachine_intel | 3 +
 .../scripts/machines/env.travisCI_gnu | 3 +
 configuration/scripts/options/set_env.cmplog | 1 +
 .../scripts/options/set_env.cmplogrest | 1 +
 configuration/scripts/options/set_env.cmprest | 1 +
 .../scripts/options/set_env.ompschedd1 | 1 +
 .../scripts/options/set_env.ompscheds | 1 +
 .../scripts/options/set_env.ompscheds1 | 1 +
 configuration/scripts/options/set_env.qcchk | 1 +
 configuration/scripts/options/set_env.qcchkf | 1 +
 configuration/scripts/options/set_nml.dt3456s | 1 +
 .../scripts/options/set_nml.dynanderson | 2 +
 .../scripts/options/set_nml.dynpicard | 1 +
 .../scripts/options/set_nml.qcnonbfb | 16 -
 .../scripts/options/set_nml.timerstats | 1 +
 configuration/scripts/tests/baseline.script | 54 ++-
 configuration/scripts/tests/first_suite.ts | 2 +-
 configuration/scripts/tests/nothread_suite.ts | 14 +-
 configuration/scripts/tests/omp_suite.ts | 141 +++++++
 configuration/scripts/tests/perf_suite.ts | 30 ++
 configuration/scripts/tests/prod_suite.ts | 8 +-
 configuration/scripts/tests/reprosum_suite.ts | 20 +-
 .../scripts/tests/test_logbfb.script | 33 --
 .../scripts/tests/test_qcchkf.script | 36 --
 doc/source/cice_index.rst | 1 +
 doc/source/conf.py | 6 +-
 doc/source/intro/copyright.rst | 2 +-
 doc/source/user_guide/ug_case_settings.rst | 27 +-
 doc/source/user_guide/ug_implementation.rst | 61 ++-
 icepack | 2 +-
 126 files changed, 1753 insertions(+), 1659 deletions(-)
 mode change 100755 => 100644 cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90
 mode change 100755 => 100644 cicecore/cicedynB/dynamics/ice_dyn_shared.F90
 delete mode 100644 configuration/scripts/machines/Macros.conrad_cray
 delete mode 100644 configuration/scripts/machines/Macros.conrad_intel
 delete mode 100644 configuration/scripts/machines/Macros.conrad_pgi
 delete mode 100644 configuration/scripts/machines/Macros.gordon_cray
 rename configuration/scripts/machines/{Macros.gordon_pgi => Macros.narwhal_aocc} (70%)
 rename configuration/scripts/machines/{Macros.conrad_gnu => Macros.narwhal_cray} (75%)
 rename configuration/scripts/machines/{Macros.gordon_gnu => Macros.narwhal_gnu} (87%)
 rename configuration/scripts/machines/{Macros.gordon_intel => Macros.narwhal_intel} (82%)
 delete mode 100755 configuration/scripts/machines/env.conrad_gnu
 delete mode 100755 configuration/scripts/machines/env.conrad_intel
 delete mode 100755 configuration/scripts/machines/env.conrad_pgi
 delete mode 100755 configuration/scripts/machines/env.gordon_intel
 delete mode 100755 configuration/scripts/machines/env.gordon_pgi
 create mode 100755 configuration/scripts/machines/env.narwhal_aocc
 rename configuration/scripts/machines/{env.conrad_cray => env.narwhal_cray} (53%)
 rename configuration/scripts/machines/{env.gordon_gnu => env.narwhal_gnu} (51%)
 rename configuration/scripts/machines/{env.gordon_cray => env.narwhal_intel} (50%)
 create mode 100644 configuration/scripts/options/set_env.cmplog
 create mode 100644 configuration/scripts/options/set_env.cmplogrest
 create mode 100644 configuration/scripts/options/set_env.cmprest
 create mode 100644 configuration/scripts/options/set_env.ompschedd1
 create mode 100644 configuration/scripts/options/set_env.ompscheds
 create mode 100644 configuration/scripts/options/set_env.ompscheds1
 create mode 100644 configuration/scripts/options/set_env.qcchk
 create mode 100644 configuration/scripts/options/set_env.qcchkf
 create mode 100644 configuration/scripts/options/set_nml.dt3456s
 delete mode 100644 configuration/scripts/options/set_nml.qcnonbfb
 create mode 100644 configuration/scripts/options/set_nml.timerstats
 create mode 100644 configuration/scripts/tests/omp_suite.ts
 create mode 100644 configuration/scripts/tests/perf_suite.ts
 delete mode 100644 configuration/scripts/tests/test_logbfb.script
 delete mode 100644 configuration/scripts/tests/test_qcchkf.script

diff --git a/LICENSE.pdf b/LICENSE.pdf
index 5d6b29280111f197c9f13fd29e59e3011d268a74..d98d3da80d4be814224a113d781d15426d4bbf6d 100644
GIT binary patch
delta 28961
zDtO}0+F!%CI?2b@WR*SpjqgNL_7TA{N6nlHQH4WAJ)a&CkurH%F#-v^++3md&_I#m zji+kGU+&>0xT^n=o z)cdoW^fQ*Ht)z?F*QnYV&7LZq*e<=-#!#cz#^6QAhvw|%((+*)_q>b8*Jg9_INNfc zX5HM8-^B~-ID3>ZdCppGX#ZKrqE3DB!W%Y03m#;bxp*Ca7dDV{TTwKUKGTbSzG7=~ zbyReBS&Z(2j2)3TzE?NzSJ>CIe5D16Z@IuY~$d&D^!p`hqOGrcx{a;_5ku z)Z)A)_gCjIZgb95AE(*h)bZ2`)D-ZzrkO4se)1#ZtcZP>`1TpX%iXOveI3U-rB4_i zF-^*g(L499rf;6y2Sb^8gs#O=-=A+qQfHn%G<*M~sh)c;b?7HfJUI1wzEj8Ip&P_=8d}Qi~Yy+Pxc$!D$AQRqhRXYi?%9V7kRT7u{P`PW@NS8&t2E= z_r}j|;s%!u@%e?BlPIR?3KOoi*eue%lD6%rb@;IBo(t3K;x_ubEckd`VwM+SB{{5- znfKiOQfj@hs6fZ6Blirft4mccYx~=qzw(K_u=mUJPPJo;o?KBhKI~*Dzh@~utN7R1 z+)nnF)5Y@#YBXMJa4jF6D{5cnv1URPoi2ZvxozO&)vjKp?%egVSE7#Z{b_be$XHn}^rSjG{v2Wkh9Q-#N@E*lEG zvX%4YMeOpWXDXt{L{xwIaJ;%d{M7JD!DtgbnRmj&Yf1Tk4m+jr)Pac zCU94xGZ7>3yk2>G%fTGQDUbHKM9b>|X7%lRs`1xM zyb|Z`c~aT*y{gwlnDNVS%QgR_VUKOS>mnx+(<6V(J=}CMw5V6J*CqLD)!;0jj+@Gh z>b@@4zZYcKWI~kayp(F}-+9bRV97dJWr29#rOG?^Rz1?Wb*Z$trREC9-g-xP%qk^m z(TpWcg()ZW^d;w-EccmtUN-sQMS2Tmigc)oj_d2PAmxqnPW=xAhraQoj|jc;@fKgc zH;toA1Z$1Eb~U!$K>197MsZBW`1htQ>>ibtBlT*IQ_$H%1B23ndYZ?3;tw{GXy`r^7PHx0^TLJc(1 ziCNElcFw+Eq*J_L$E6)(1bV}NT)LNFWb>fhcW8AC@#}c?O0)ImWjsp^VL6b?? 
zWOhy5WS^M#LhHh}w6ODpZLTJ|=bL7XFJ+pXdRcpNX3A=*-c4m&nHGa9q&|lkS02w% zAKR|4@=LEV`Ng&$EmhY3Yh9v+yi7zJMKd>~M* zHRoQq;Z>`ShEt1FTHQWp1rB*^D2dQxSyz3kP`W%x;VNssqs`Hak@d$kC<-Pv}c)Zr;*YC%W9WXTU}T~C&qPYelL zBRsG+MPRyW|F#X+Z&~F|E-&z{N}2@JF4H1`+J(zp{HE&U`M-kCSr!u z-k9CcT=0PC*Xuqs$jW|DoN_<7?nY7jpnv@rzwZ-m#CAQ8^geOOQAGK(l`eJN42M-M zTKfgYT&!`4{u4(8-}@s`GvYu0@1E#d?R7>z?jANy+RnRNWSHoC8!AGhv5S`WI^CUn z{&GbX*O|#s!Se=Z)7b<4S2z1T9xiS&Od2@_+*E?}Q(-QSK);kmI<4dcMKg1XIhzgi zi{9v5z^5T#_-q-88(w_VZ(lGJg*F!L!yGL!tpr( z9e)ysX-?q62sk2!!;yg+F2@DL!`Zo1Km)&lP9lZD0c_#6P%;@*9+xA-r66 zb5SWYAUi+;N2gMlGE^#)1e8i;L7m(S$V34eQ3!H%DxGT1paA*kFoDhoj!I{m(~;y% zAPdXa{gjWq#BJv#qgG7R2l{No&pC${-*#xfOz0OG>{J@IYFbs$!Wj?IDp0=nGQJx zH$%QaCPc`Q>Ns%}P%Lo_0t>OO3=M>Y6gD6c4ah==ej1R526Ndccnl_q{?RpXKbQ~l zhnv8uag&jfM`JQUTFJ$sWRpUV6LHL+B?}^Gd>;k!&k_$ozQgIUq>s$~^8job!9Nix z-T={P`RD2XT8K6xs()bO&kYecCW;#X#0GW3q%mX|Y``9HIuo$M0Fp34BuUJ%0plPQ zT!O{r(fPUxN)km-g07+RH58r7HRl2Y;i8CM7NVWS;L%}mI*keMSOTel0ofGL6Kp#0 z0%{Q$gA5IGxyVPrfOKFyay&2|h$t`>g$d{9QrSE@2dNF`L$>08z5w`HNC0F_kO>lL z(1%+vL3`seG>~KlfQtH;0o&k~ct!@W9GwYgrvNu0cu*fD+CK;_!y-o5>z{}u&-(|W z|LvV*TR{51mcU&|K$y>CAQ2e+!!Q_p{Ys2<@=u7+bD4Z(|FQYMamBv@HQJ6EoxuWi z1MH1$4Avi-7~dHTg49>I!RRCe2qu#6f0R51P>;bGF*qCs3KO(}Vnxy+2{>$0z<@)b z9kmK@1dxoug+A`c7%rQyDS)gTASHuJkVa011Dcr)){F)^4FN%bSPVLkiS(xMwE*w| zIT<&@F>GMJQ9VbF1zunR+mZ(Xwk5R#vK(4E7|#SPL-v7|VS~27{petSMjQEp0)r3; zBm-{DKr0I>3MV6zVILEi3Qx|Y18RX88K4D_K^b6*=x|b;OKKC43rNlVu^x^AnSnOs zH#W=#`-|Ehm;x+1lLb=C;%jFni*6383EE(D7Mv6WjLBk|vq86!n?ur{#{{}E;Y8#@92SW>k^~4PkbuplfNKE2 zAx#mLUL#sHC601UJPiEw2G7-0rj7|0OI5De5}prvWZTzF>C z& zKmR0u9)wO(`oA9n4=@5e2{h;*fb>5=i`c-0$f0tbOaKrfmn;CHfx%QR=vIXpH97aM#~ejE6HsI<|q0}qG;${d{!4wxej;ulup zP(VkrfS&*i4&aDGh4V8Z0^k6GIp7+h;^cri#yS@$MJMeaNdG_HVRH~zG`CD12TU;s z^%-se49o$&1~uFoF@UWs8mX>1pi4NwFJv2tGKUNNk7(g=K_$XK7O3G7M1nI042d2a zjmzW0dAMktfG4;V@Z9+AAeS7%lcP4_&lJt`Ry>50Oq2DD&d0EqqRcd@{(*%6kxcl{1-I(&c7Ou32@44ajLSt{0bb(b@jz=Z 
zfg!oT30wl$0IA6ZE6s%gbf72~G&&b8HcsGTA3zF#aX}qpG)+3tPRA47~j2w^(3`L~^`EU%lRii#4+ALh~|6@t~wctNJ;Q!b6e|o^=$wx8puO;Ze z|2PmW^(gB8r}EE*`6wPm^}m<6-sn7Hv@%j6YDZ7<9|8Nv_`epyn3)E`1P_vFXy6T@ z&xOto`e10MNk<5!5^Xz_umR*Bzzg`doxsNeP{FStOFpVbiBIkfsu5MCdWcz|Pp z+7kmAcpxO?8;%6^MvQdgGGOXPI^jGFP*MCoRBQ|&GrtdX6ubdd(bz!^*8xJKRf7uo zKy}LRBuAh}g6&9X1`rx;6;$9%Kvm)os|V>aBr!y(;Bw&sz%Y&)G{~eO*yyy9U65w@ zqYD@wedduaP?#g;5n6E_penzMgR;Q4oX`q~U=Tpokv?!q{_Z35b|8EttRT9eu>oo- zoD^y}5p5%hd~gN-;hn)b&{Cp+qE&+m`2dRnli&@eV+cw15fD1yE8p5e1vbW@cVr?i z0}6yckxKpNPXEm>{0AieTR)FVkW^)&tAwY)=l~v|QAj{QHNhLkJ!Buk2IhRE4`Cx+ zbL;~)g9qQ2fC{bx)aB0uHiI{a2FNvm%ixXgQBcYJBVctzSuxQJl5sm&RH)!k44_f` znV>SLOkg5(MW6!V?x?t61zKP~6c>n=M;#L$yx6;u-#=#nan2|`T#H~)kQ z?hzG(UH+~}b_jPzW`Ljkmm>r5E_RJ*5+)=;|LVdl8I^^OKlnLNVW2jJge}qRpQZZsCO#ynw01lZ+0E&;6m~_HrFuufXh!Hmn z)j3LS;2$&d+)ziz^DmD_s^Mp zVnO_am47;$xC)|dwES}-saF9!>_afI{AWL&70H0>;yS1yAj|aEIHVrthMDO9jc$&M z{0YqHqW=|c+qGy82 zjSWYD6f4Q8Oi;&=;ez={3%rY^qZbAF2y?JR*bH<;T|k5E1ymr5guns|(9j})9tXb_ zh9U8}02hfY%sjvwS`0Wk7bs07F!6w<3-mpl4^s{Np#TpR#fHQZW*gv*&+Aa(0G0uP z=!gKkc&A1KW`|W_M{+H2Wa#MMKyCpZ;5G3V1+R(Z6)sYZ%o`Bou0R7A4wKQz#ascJ za;R`en$aX;c7Wdu9uI#s7(g<}@Fnzc^YA3R)#{akzCIX+}`+hMR1L zJ8ph6#Cwo;9R-Cu0t!I^wubgmw7>-ke>gbMG|V!Pt3Z~4-we43Y#spxjQU73_=q$} zEsTr?-i7aIQKdrxEQ*d7G5}QY0tf_cL=|$lqwzOb8dAdlP|{?Sig$;AC^9wx4;$kE z@@^6Qe$;oUI526*1mR$T>Ok&-+gvyj4fZ2pr@{9Oil^@){!ukl% zi1`4?hxs6!@POS&;2Dra0yU0G0f(WKQ^5w3Hv^={0OF6yG#2O_Qga};VycG&Y%&^p z!*xF-wqPnc4UohzWqa`18G-q_rZ%ADItyqX{F$z=x1VN0+-g1?j;jNe*pm6 z5TSy&8UTk(8>CBd02Fu;if$3#7a?^UvK(whx@YL4P^gf91zM1(#)JUk1XL>r(2Oo0 z9D^V5kON?wze+#|-hi+{*`S}v_eNkM&;woV(GsaXiUOR=gk4!M1_KllIs(8O1@o^( z&~PI8M*8r4I0nm6-{7A%b99qY^kO+WZ*-l%%0E;*fp=fzy$muncKoCK3n-*G{2%q} zIKbbS-A8SX*B981Mga9aUaX<+M1@WQ4(%)uF%|{53!`xwng<{fgHBLMDj}#Oevmqd zr+<`x9pw+uNwock_fgdRZzbuY{B{5M9~Szz4M$`N zzb*KG+m0tg433tFQ3Uosl)(SUX)uBGn*W#b-%c{x{(lSL-{V04Bcy*`bi;=qpuf;# z#9>e&@4ER?@&}c0Pm*o^QQ~rl_0c_W47|fPCFr7~M~5F~Km`mob5#FA4LgBH+58G9 
z65v+S6NCzIEPAVW?*$dmjx5w>XbV`NJZXFljDx@#fd^0&oI0rB19L)XcA5PzX&f*N*$0mb`Mr~owhaSM4E7I>EM21i2J$S@G460Znoa07}Dyh}pZK+tHU zM1xX;3M4F;Tpm<#Q*=x)kq7Q;Bp5mI2$}4 z5`!#oaNrFNfEPwAi0a`1AcNk73ONr~Am`x&Z7Bc+ynJX#e;(+}R|B9%;R7--k0dmZ z0wFI+p&Q+L4k${P#G{dsE?kC=Ok%_RAGD32iH%p#cmX=1Sm2vDn1^04v?3sY;*l<( zJZ8-hgK$-gTLV(js6YiVg}yMq53vX}KFE+0hgbkO7x;_|o)sJJpTKoRlZ@<0UYX)u zDTyb%EajjUM}r8y1$l1^--JM@3tOO>=1-!6e*vEmK??wb;X0}rsKBQtBVtHi*Nlod z$SsV8L<%wk6Z~&{gY#hI1_yz-4IfB9Knuie{Ba{14I6PDejjgkCjJKzV8J0Ue~EL*r9im?}ZiMKR5-ZvB?YvXdwoOC}#OF?+6)W=mV|; z5u}sBCA?#<6x#voO!#gAoG`#H23jCtSOI?-oMSq0EPiQ8r&0+R1O6jC0HFXZRN(A* z`%8X22-m;nbYM?Ne!*yra*0osQWC(%z!d-!nhpFwi2Qg6PQ^l35lsMo?MFdu;iWBE z@FR9KZ1DXZb^@RjJRZgw6i8m8{8MS51c48rpD%1!A-Ntf2Oc``HS_}gQB2@upkesF z7wKXG;rZWhpqIyIS(F6+dO$EbUeKZmh6+Yl{15^%*RTspA_Ytm!pek*mkXabQs6}J z;425Hz;_rR74ZET!iP$f0vZMW6}13L>`F*%3C{unPST1;jgOGi2R(J!VF~I&p4VOXppWlhgAUC00QQ&0oH4PA+ z>_fgHBSx|huncdbz@P|XAjzKx!U%7pz<~Gx!;b(1M(MBIIRVYE3dWU~Jcr8PA6LK& zEJ8+`{4QJscmO#IDmabIl90`?IeuP^S4L1F2LL~T9)-96;rj-#za%^UBMpt;q~g?( z@CPb?-}k@^ZU!QcCI~8j-S?0d^?&+o1}(=Q{jduG{tE~;9tyYyt~c-jct-((L=Qet z3due^6e=KoA8rlmk^Eu<68HE9!i{u+3_Cv17Qe1q=`l{Jt9=qoaX) za4ozQ!Ra`K+!%5em_Xy_7NG({hFe1Z0n7q^O$^Eg9-wUyFwCjJgAWE&fPVlE>;kx8 zFxW46V7d(lgLWD5u}QG;{&Qp~oC${`Cy_Y`Bqih};C2Y53w~NbgZwKt!QsH6h6=J0 z4&rNkKq-s?)e8^Q(>NAB5P{4cHUp(GQ$d;r$X0;S$AEw&5OxEB$dBaF4A9^^K>!r= z3BG`DfgTJTnaYLlUr9qjo)~pCbb?C)xd%KWsvuwhsK75g0Lcg(xS`mM7y`aQ1dM!E zNSZh_VR$<9$-q8ACpi$vKq3tivFKoo1Y*z)_rI_LP#rxAI!=QJm=>TjQ~+93hUm&c z1qFc&H=qyH0zUX|9#p`c0)*hLCAtB)K5mG&7^890C~J|x$-E)m|JFnZ6KW;HW)hmqyQfHgDxoeZXWONpRj`}`Rk7WY;v=>s%+6$r@hOuzLp?qpgHJS0S$X665&7vNLL%Gkjz7_uU3>iamV-q``iq&& zDZv}(^c88S?vis@dGJzlaA@Q1r5-l)T}%7c$(JWIUM`G|5a-(^phx(7Mq2SJxiXo=u%nNs=Q4tw>74;Dd@HcZdTm&#kMgmRYgiLb1B_h zyWqz8ZrcN_Q~TFH+py_ubyA~)Kv|FQs_vI{VRMGPgabT_i_DYyw)k;o@S<0_n2K9u z&lX%UpDb)Pe)s1*EKBvG)-~!TcSbiikgJtZ>haCljH`J$Btwete&7ptz7SaGple~*lm$;8EK6Bj?y6hG@K@Y-O1>-~@! 
zi$BjyYwF-cOqZTY$>=+G*fJ$PM_D9@$65V=AtK@^dz{kpSpF_EEonhs9A(L6B!t$-B&T!`vM^!jF+K!knK@^}UmA)^ zn5^X*k}Y@iiH}y(-l4Tr{XDwY@yNQ9+ZMdLDci5oYtviWUU*f=wB+9B`9Vke4qyI0 zswC`PMsZ z?Z)b*a{G_?j9WCd{A=N_$j9#)LBAU4lar>pTo&5R*ZVWa_PV(xby~k8H;wsH z8kgeJtz42Rni`lKqnhy4BPy(1)VESd`Bub8p>lIPXJj7{yz)BIz*ex8(Q*K3cimiQP{&3<8> za`@MTh4WQnnm6ov5aXV7uyc}aoyERT7uVdFR{xn-xb^ zyEb054`IY=ZwXnmRj)%|BuD2~^PydibvY(PSz&5)=Qzn7aS59J$DCjLKh|Ec*w+2R z#5p3CV>%dmQ^l1Up_q9 zNW|ep>}{G!gwl-`lS96@2UCq~o8A2LUcH~`q@Dg{Gi!@#%EwgY~_{8m`#cRp8mgUcSpO`hao{D-Ql5$j=W_Bmyn#cI+NUzNGOK9dL z)KxYI_bzTKx9(grZoQi1B6`x~z?=D^ovT0U*D$m{UYgOZ>c61zIcKN+)^+iwH8KZ2eHO z#fwg<702a_&l@z{UnyBv>k-aMa)`SSar5k6lc^UROg~zp5dS|{~tfKc3tArDR8g$WPp#!Ye&mu)|0?eomA&@*uJ61|;r+Q+|q{8d{T z`e5JKqIHzru|(wCFAR;xViH?Ar0-8>%B4G(CB)gTvYxwCX6D_a^+!Kvxt^nnb}X1Y zfiodkM2@vHW{Y0RmR_xtfXQu_=dU{M931rii%i z^P06WLN0%5%F~TnOP9tjuc+17#qPIE&=ZODoG$uO+%F`z&iALvse8_{bqiY!WPa7L zti*mu-=2PH-8tKD^DRw@n3V60dxiEK-sL8m6R7;L;r^1Mtv|iOMVL#~8{LmM>E@UcGl8EQ6M8Ixb1;OHdzcHU4ME zyNpd99oH=)L{y$W^*!u0eUk9xw#JFyGX^~UlXaT(6#J_7DS6q|-ud>!iP);v)bPIO zsYCgO$!pHMjyU1qJuV^B_{1VAHT%Z5_4Q$?wVqcp+diK9t|c9*; zrIm9^D{GZ+B>K-7e@+M7xlF4)jD47dF{Pawd390>EDAuen-y&MP0#GfrQ0)Y9h8YX1;qrtL?pU>o>ZXtCPh|GrQ7@YznIn%8(#({ajhJL zl(*95%w)B)Uw0o|lzZ%e&iJ9|9}d&QZYtH+s`S?6Nc{K|sO;sk<}1&cJ=JkphnCEf z(@7#nFB}+?MUBv%ShPE8_RX|JanBjsF1#suQ(iHlJ|t|Wt?!}Al}d|}tF{jB_r?= zo;MP=)9x(KGuas)XLI4uZkMFb+OHnRHg;OoD;GJQ3EI_{F#D&QU$XSs>Pw9t1##iN zkpo%2Z#>#wjC(S)%CS~jl z)@yyF)sgKq@o>1ahGUk|k82D2A||?Y)HufZ>hBpKKAe1TPgf-^NKTb`+D#)?VQp># z`~`-7f1QhdRRZDRpT^7HbUW02WO!~$=E7wA+*`)O6MhuL<#?vOuBwPgofYGunx6P= z(FX27_CbZaa@?_k?G6U(9Ad_{B(|(~8#m;eny>Asr>JrBu~x<uvk3b+4bemS0Cw~YZrHI_bT$;zDH5+j(73h=jWs) zwpdUuDL=_M88SI2&9}U&CgS-=%9DzNPnSp9ySAO)sPy_=cfa5K4<0nFw^w3cIBvQ! 
z_^s6UZbzc0c~3&pQQe|_`sQVI>RRt#v*w!h+-_el#q5kpyGE$^Y+iPWo5RVMpZ)UN z^Ur=V$IivBIjE-I?W&Yp`h7!|rT#LPqwLP*4o9tmwBOyUd%0+N^^T;j*fpk<3(|*p zCvI-Mw9w|3+o~g0jnTShXH8?9&YEs>x};I|Qhx8o_&IXnQCgk)1@<2Kra^b|JTpVY zPAu4SLtvKh*Ryun_vD6%B$1WQQ;!r@bL8h2KCLS})A{yoWXMvhn9h45M?YlPOFsUv zM)Gsw<8uwG3RQW(H@-J&xI6oyVbA#?i?8g@;cKLiY;3!Ar#&OPH-FvlbXZ}*8m8$n(PbtI7bonQU}7RJy48M;^K8+4c^zU-x0sy9 z7SCPTd2eBP;B55AC;+oM$TO7!sbh}NyH znayn*Qf_q=-w|!fDv7lfDM&HP>L+67@V-e*{a!Ozw>rY@_SeCM>OB7&mCaqr7Yn+Q zLnoIld$BEYQLp)cLZy+7ZkF!gkatl{jH+()fT@_LT=M1+%j4oPx(q+l9QuMC9QVc- zagT$i&wXM(n-ifFig zZ?jKVLy*O9uiEnM!|g%dHNSrEewTa3Rxf^G>3H7!N-y8)qHe3$l|qRZ_67S5)n{(M zx9>w=$@$_{ciZW5n*%LHs^q&OgF>FA2_@2-yGnFqb+^!en<>ecJ^P(EC2{&b#~V6! z(VDZzrH>OXf3i>Y3W{{B(4+N{D{Ms>%hHsW)1`B3PU*$k zv1-I=^@mg63vQ2IT@%u8exY_^)TK?4c0XCG6@vm6SsDrF2(7X_C*gatSkH#2 zaT)p92`AD%&au)nR5Nxs75}Nbf35hNsvCxP26)TsJ%7XwGLwhfUO!ZyGQghC9bQ~% z=31R`cXh}U%`Y+?Z`b}xQ&k9i?bUt3TR3yp^(#We)4mf0R{pm8H4~#(TrL@q*thMf z^u_nfCLBl-3jf7Wxq1E?$1(ZQtWUa%3bSADJoAk=v-Wc4>L}+Yz9KHM6KjoLsOFH`xJ4z_*80i<&o)h1--A?zqPOyABzqsMWlse(6 zvBrYa`c(ZH;m_^le;nzJ@NeAqDLKac#k}oLEaIC6=UK?QJDedrevap6yijkfpm2{p zC^ZZ%W^SChmJ!4~8=R(?Fnv;qR>`F8{X=^Kism0M&M+KT7}i@I9k60@myE{FtS!Sw zonKvjt<-JyiIDwizMyHLYH88@-`fVL=jLRspS$+W{+ur#{T-tAo!l9xpS~k|LEb%! zReGDWeb#I&(_5CQ`pN$$G5zQSx9{Db-5609<3g8M-Ty`;4ihq}WfQ&%e+ zTxpb+m^x?58l((=%)IuF`K&L4t-0i4M<(_Q*GTA*JlGc%0b~qLGkY%dB{v2 z`0&eJsEN{Zwj*rN!uFHJT_avQ;c0nJZF*ava#POorM`!btYU7UciCER5-5+SPJV+TH^OKuDo`xC8fsYZDGT_DE+$z8$uTZm5WiQv>q&I*11lI`mYZw zbdq@xHaOkxlWhDr9fJakUrn?Q|4%bd3C`*+-6NH#adz$Sm)zfnZr#7h38-n$_1B&% zv}OhCMBWU+!?Tad7=#E+77$F061=)zBWWRBZ~TS^t!s(L-(1|GV5M_zj?s}P#@5T! 
zJo-#-q_3VZZ|fwJGo~{nCu?{F)?NQSm$2)qD1DG6A$9ifi8;faFZVUJ<@gbC-n}*M zx$l0zvz%3EroD+b;kEYypCidL6Hb2HY$%>8$MSEyAojv{##p0@h0k8D_1x%v@_ovd ztEDm>4qMZw8q~EW#Qh3>x$SM&k<KTJ5S|e$F}F*I#(Xn&F*x*`ZMJnz2s2a zty{`2oywP|4=p_B>D;095}U{mt`?q&Mao5G|E1K?$qfEiEN?pIIPz zBI4#E6SnYS$D-wuMAVbl>OAk-g2{AN2j+v|7VjBl};GSoa*=}xnjL}A2=ie{nN11z7rX=^M3#S-gNbIhfVIeqs^ zxpR4u@wLWkrz2D}>A+@K51vSCdjcz9Tw?*1)S9spqDVi;s ztE16me?#@uUbaiGXM^My@uR%&Uh?HVB{Su0bKS1J`jA*j49f)V&)V!ZGk9@fyRN_L z;KGKz?7EM_&Gy0G4;12^g&y9E@|+()AMV^$o^tEgFVj%(l<)pxPd)0N`qU-AU;B#M z6Xq>wKFcpCJLkma$x#Ca?KAJ#_MN* ztqV@N!@^&5%2IA^{_s4%>}LRlDs5Piw(Z^9J3nfJYGr?~Ebll(JU!=oCh&Y>G_B3L z?c7COm6UQ7$;2sE+B!}PZ7<(WqYlS^YRjH9;gk1p>jTRt7DouZ9SnyfA#M{cb9Wz( ztLpQM2%PUI)+H!=_ds^l-l(Pt7owVYZxWxZES%{ee)5gq7ZHi9s^=OHgI7!_SYPsE zfMszoXpkztUG&%YO#)MX4^7)7P@5rQQ;{UedNk+yrA?d?dX)5&Ty>uiHPf-Bp&gHD zbElUMq<_{f>ur9~d8dVlnA3hS<4wBP@ck_kHS{$f+Jwuc%w3IoGP5-M20z^U%)2yh zcg2G5wYr^6f!=Xt_lQ?U>N;XKt{O`wZe@i9m=0E2ch{IYIY#e@E8Ar+Z2awZKT~z$ z%eLAd8;sfyU4D`sA#QFZX3g|C=zJhC;iYK1Qg`0@6_za;L$x}S8^Va#cS)Zl8@a*T zdNb1>OI?Z!%9-1kN=au0Cp&#fHRn0Ig#Dh#_1y6&WMbmmpna-4ZqIy~|1BZ$<6wll zk33h2xrzAbcBX0VtDvIQOQ;_|B(C$cr%RS;K2SLKp~lVFJ2h*5`I4R74Jp0lIbtm_ z+nzfubZf5)u2G1Se3wdmT}%9&&pwxN;e~kUh9jJ^pg0r3)}vxy3B~F6>wV>K3^;E+ zGokfO5HQc_d8WR?wa9vHBKXD^Ebl(>&egO zOkPFa|GjO;5ssSPf?eSin>$uAoo?G_Cpl zjg)X9FT3BhOBIPHl2`o~M`mvOE$_ON*dRC6X#38E9qs#bdyKla_$~FHr)Da(K@GB=Fr$@L!Z=40Y+*%$Zh$52UC z$TYVYp>*2sQUl!!L*;kN#OwT3JZqA+*U5f)Rd{7WtNT)BbjDSu+o3fg1zTI2d==ks zBo4hvy30E0zEu+{5n4vZd9l zr_6I&<5xKEQO&$hyb2XfpDO|K)jGS>5@ocM+@c3X=Z$@O?i=mn(8Z8C;xseOM`ZhE z^QxANjtk;z%Li?`Pg^>2e;(M7zh%*>eNTVxFKHtV(CUj#dP4h83z*PqVyqf}l`^&x*H z6^{Aa9s#-6!@GwT#U6LboJ%NH{x~W&)PE-3InnsRz^943??v@J)NredkC@-<{Gf14 z0TD2}>pGzp7{-(PRVL4RGho%0adNtAe*e8MAKc-G1Nq0kMM!UM+Hv`}c+V60JzI`3 z%O=Ei=(sEke11HQYWi%zJ!#qAk{|ImBo?lk_IPfR>bNSO;nY=KIag`DvkPhi!)N*I zlFFJjuu#xnYN9@`#-umYU7FhBtEqYMZk2$+bu;t%ORf{8p=)xgL=3J=mmic)Oxyi3 zyrRWV<6`bRh3}WsT!Sy^Z}%8GT%dI^VBf0Wc24K~G%w~hP5XUj#*zL>25Lt)5r^JA zh(2F1Gp~P6+s7Qb8WoxGSeQ_yXY@amFcOMcIprVq))k;`S#T{{Tg 
z$-+;0Y4>Fc&pB?f-`;h{L3sV_GxOi)%dx3u&phYJJ#d&$C1!Jp8)X5enM#T?9tLdK zwrb_Pl5HZqZH{isJ6VIfzebmQjlBEH<>nVzi^!VJt{MGHtvPmx_G^I3$XU|-h0r|~pHOqM{WIYveXUlnd45%h*p-D&%wNl*dyB1&`@L>> zb)I7mYkTiC->hHe;F`#)%NZ{?lo*sAi>FzcJDiwa|ZF|-Qf)rqEu3NdZ|Z@)|!q}L73ELc5l#f2eWwP{%S#Js$>Ty*NmX+k%d)4P@_=tj)of}LCIgOX;q*V4Q_h0&gp&I94X zGC48NC|klaf95pQc^s0vp(MJyc&PI;C4Kw5A05&!h2O@TrB*)u9&voON?XfwgBAU& z3%P`A_w6?~v?W!Z)v3HrNRO=G-ffFF@tH#mh!-Be=(XBx`*RC}hneD=zoiTfvW&M+ zPaf>|-JA8BHV8BHu63>T9GK`8w^i2OHRDX4IKy6j&hCcUe%~$(yw)$SwGONQ*}A!J zW@Dt)%S#E19tRZe-8~~CNWFQW`1@}O>g+3`)5d!TmWCV_lU+(LkdpFIJi#elHb38f zJe7!=8$MU+*SDYrzx-aQ>I?DuvLz;Q-_0BQeb8!g_EDjYRYAwU?l{?cep}&}%*BMa zTki8-*}}mqwVjax(YLy@v||01S4O%Vtd%?abSm4WN4+EM)BL{ZH!HH^uPt|jJk8_Q zPtwPYW`>zx7gKyXp=N(=!p{dUruTo9Klz3I*dmG$pA@0|ZBDj3y|-!evih9^Nv-aJ zKjej64$bJ#mhI5Gol}##@_v(2WQf0&p8DPU$)CoDDOjy<@4V2h<6!LDasO37>CSku zB#SlvrzJY^plm+COy57y7wsVisFeWWVZ^XG89t zxdZCquRgGeo628XJ&OmLyym+(8Omsd&He~mtULuW`nNlwMt4-M!wPP858Df3F=x`AL?l?JNRRV|D({9 zdGlp|@%j`JOvl+AQ(R4S&O6fkeQw3JD0Q)84d36n_2ql^5!D;z0|wN1>sD2+^C_tR zH1pMf{*XXL#^h$9zB&7)n9~nli?EnFAX_PwdVs|`^o%ZH&=@qR^i^oy%7SrX?xK$4 zR@5vhc^atBy|4M+EhE0usgid=H_cpA>W7H_!BlClrEQzkC;9AEuFtqRcrS{zUBa&Z zz^+X$S2QY(f?u^CA+`seTU6k){NCyE&HG+Y?HwxfewCB4UNYZ1WOCB1U+-QBy``Qh zu0APQB0?2i*4k-!F74Qy&0?+{ekYxieU}xxPbhioZFit)SFEMem9vx*vBxS?e>ZjX zIZs)qkTB1W5L!zn2PJ3#n!QJKLnUXbgB|M4%=LWV`{dP zMF`wDckh9)ztj%@g-lk-KA+svugp#b%0JHT8+zvVne%&zr2m{-gIRMQHZARuPZ;}d z|JYsfosw>0F|G5d^1trL4AivNXrG(bK3MTUqJF5%{b(btV4%cn=@R1bgDRg^fsfM8 zV}G1D@9iU1OS>}nCojkRt6l*S8}9Xj_Vu~s$9IDzmm6|K8KSEh?BX$06D4~$ zw4Y`B*8Jwysh-GXb$2NT7=ew?(kiz7WPb3<-R^-OrvlWte%2qWwd%QnI*A3f0r5Ws z$Ep>r4GA!?PxEQ?YI|q4tnJ*IdXWKVuhM|L4Pm@#M-P?#cF3OGdM2d7ir`+o7UF-* z^VZMOqqCx1rz#l~aO~9LLc7lAiEp+bLIsvo!h!2dc9j{P_o2w#X+vUP9587blQ)=ee*0wO1;Y#KPXU&pWsJ z^rV=Ol&C^e?-0DZG6zNq{FrA{DLrp%9M$os&Ub)Zp^N)5jZ{Oa80g` zvt!$bmjR8M<_-`0&jd$G8!aao;-QVSsh2{`n^l%yl3f+>O`ZhO3_*C;4uzQQs)e_OfEVu=`WO9>Ml|_acf5X`~9bx*IiC0uueU_ z6P6%V26)9vHsi@=@bd>g;b%8T5RkZEEwrpWLu7h4$vhMEOqLiVeek_jA++ 
zdxo{n2b^)pTQ!OLR837;JzKXp-=}KH(0bbg8t<1q*K|;|)2I@VYP!yeUZyZjRBiey zx{OmtYV*aX+sem^`!JTTR-=u&xu%i&XOg|sRLf@h%*K=lqB@TY`+Rp z^5?mpzWN|%=4-)=-cRQ(qvVY*qD}2G3hz^q6j~}8%%1rxe#$pr?cd_vcaNEMxbT-s0iCxwR~&?w&W&7T!6b5$z;$Lzkbd&M@-7Tshd_H}&enBaTg ze;!(+|9A;u`LHtFXK3zV%J&sQ24&M$toW=Nb@%hN%0;HHr+v-Z;&^#d()8fOSM7l> zkND4Q@JUizbN<`PQ^Nu)-gW3VgfEI8JK%34z4q7Q%5Tq7{EZtvJj)7O@#KmBy- zES=h8r&{xhJWn3E8D@P}I79cq*VSi*Q=8$T8oh36M{sA%F)v&OYDbT>_qXOIja?K+*n0%n4@>~lfD6|+2YXO_0m$KQsFj_dk0s|VEUf(`7d;I`C^BKWt>h9JOlh05Qz$I1ERn<;1ZU> zs;~s>60A$456A&-j96li7qD9OG5ZYnu@>_SkNHKM_@Z;DvoF?*M{W+70Yv0pgnyAo zS6a*5Zi6?ty{{TS%k3(7dyI$&+z@}v4PpBY;o(0-a=?EC;3n(hAbuPMS!S>bOaRM( zOB}*yP#hGN@L3cV#fYUG*r}3TyYNz5zeD!N{6a0(fI{Ep{A13SI7c}*S#`b2gI?uV zy~=lbmHlW4&G=DP=npxEEN@J?7E}6SN=r;Zw$G6eQG}hGCC2n`oS)|$u^K|k$06l@ zNck|Nyd8g1&V`gOhS<2>*t?3*z}d@~zQMVd^FGU~D!)~gK~?Eem6$@WQdB2#&U-mG zGNvD0^Cf)bshjj8l0YWvm@R5qAYAB{W5yu=am-_oe>>(?$S00@MZHBoru-+3K5&}u zsj;2(AGF8C`uD5)A?+ax&`s!B=sYp00sVzzUSNOm?_m6!=>ME(GK>Ei>F48SXb-o) zT^;vRCo+PnZ#j`~qUxI@!mIwsiQI$riW9j6`GFI88FJY%*aW}kn9r!O9rP?|5?Jgg zF$6Y~bhYA_kojfkvC8126JetgtU#G|JNg00em2WnR44tsoa*TOB=-=Vr`Sz&KIukc zaIJr%K0bMcXk2ASzkn>#HRGQ8FY_ilaq>^tLiaDU$5~MSu*xo%qpnBF z)VQwxUcagSrm0L%IqF)ZEW>d-QWofYY6%zFMyx=utILtI>XOdAb2_}Z)iY+R`j$SV z{=`7vQ7=Spv1yR6MD)F^*bxGfXKtQz%@{tfjxm=Bg|&&q;&psCEJ z(vBPp*VSiGai7ko^5w1@!n4Ffi&n%l=^62y@*MXZ_H=q$JweYN&u&kHTrd0OI=NP^ zk!4wuU9un(xuLwZVTB2HVT0snBDom5c-t>9X8%O-)EB5MVDH#Zizy*B5T|y1ill!| z#_g_fs_fZ1VIK^qY`Om{nG&Vn&!T4w-z`&eGJ^%mOcxvN`tHmsp|tJd>y1pmb@BCV zmZt2rQIZ-7+7AYB9yP~@Y_}c{kX^4t1JU{;JDxk5cmn0C9DZ~J!k;|?d+aw-0~!05 zd$M*1Yi#YwrtD(_!PA*l;hZp@Osszj6HKz1Rr;!ME_s64>8pwC!!V)=69^+F6Bf2Y zG!{lQy28WKJPgZJOD0O1Rtb8a_OKu583-sDUEIC!$gLZd>12t)>lif4>rj9CD5kj* zgH=$C5oE-K|DVTL{C^0hE5mosj3&qQWL{5>0eSm-ue=zr&yNIyr89S#BWR2J@*|@! 
puny) & -! print*,'afsdn not normal', & +! write(nu_diag,*) 'afsdn not normal', & ! sum(trcrn(i,j,nt_fsd:nt_fsd+nfsd-1,n,iblk)), & ! trcrn(i,j,nt_fsd:nt_fsd+nfsd-1,n,iblk) ! endif diff --git a/cicecore/cicedynB/analysis/ice_history.F90 b/cicecore/cicedynB/analysis/ice_history.F90 index d1fce0d67..94ee4f956 100644 --- a/cicecore/cicedynB/analysis/ice_history.F90 +++ b/cicecore/cicedynB/analysis/ice_history.F90 @@ -103,6 +103,7 @@ subroutine init_hist (dt) cstr_gat, cstr_gau, cstr_gav, & ! mask area name for t, u, v atm grid (ga) cstr_got, cstr_gou, cstr_gov ! 
mask area name for t, u, v ocn grid (go) character(len=char_len) :: description + character(len=*), parameter :: subname = '(init_hist)' !----------------------------------------------------------------- @@ -224,25 +225,27 @@ subroutine init_hist (dt) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif ! histfreq options ('1','h','d','m','y') @@ -2259,7 +2262,6 @@ subroutine accum_hist (dt) ! increment field !--------------------------------------------------------------- -! MHRI: CHECK THIS OMP ... 
Maybe ok after "dfresh,dfsalt" added !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block, & !$OMP k,n,qn,ns,sn,rho_ocn,rho_ice,Tice,Sbr,phi,rhob,dfresh,dfsalt, & !$OMP worka,workb,worka3,Tinz4d,Sinz4d,Tsnz4d) diff --git a/cicecore/cicedynB/analysis/ice_history_bgc.F90 b/cicecore/cicedynB/analysis/ice_history_bgc.F90 index fdb8c4393..8802cf431 100644 --- a/cicecore/cicedynB/analysis/ice_history_bgc.F90 +++ b/cicecore/cicedynB/analysis/ice_history_bgc.F90 @@ -282,7 +282,8 @@ subroutine init_hist_bgc_2D tr_bgc_N, tr_bgc_C, tr_bgc_chl, & tr_bgc_DON, tr_bgc_Fe, tr_bgc_hum, & skl_bgc, solve_zsal, z_tracers - character(len=*), parameter :: subname = '(init_hist_bgc_2D)' + + character(len=*), parameter :: subname = '(init_hist_bgc_2D)' call icepack_query_parameters(skl_bgc_out=skl_bgc, & solve_zsal_out=solve_zsal, z_tracers_out=z_tracers) @@ -303,25 +304,27 @@ subroutine init_hist_bgc_2D ! read namelist !----------------------------------------------------------------- - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_bgc_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_bgc_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_bgc_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_bgc_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_bgc_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif if (.not. 
tr_iso) then diff --git a/cicecore/cicedynB/analysis/ice_history_drag.F90 b/cicecore/cicedynB/analysis/ice_history_drag.F90 index 31a92158b..c0a1f99bd 100644 --- a/cicecore/cicedynB/analysis/ice_history_drag.F90 +++ b/cicecore/cicedynB/analysis/ice_history_drag.F90 @@ -68,6 +68,7 @@ subroutine init_hist_drag_2D integer (kind=int_kind) :: ns integer (kind=int_kind) :: nml_error ! namelist i/o error flag logical (kind=log_kind) :: formdrag + character(len=*), parameter :: subname = '(init_hist_drag_2D)' call icepack_query_parameters(formdrag_out=formdrag) @@ -79,26 +80,27 @@ subroutine init_hist_drag_2D ! read namelist !----------------------------------------------------------------- - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_drag_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_drag_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_drag_nml,iostat=nml_error) - if (nml_error > 0) read(nu_nml,*) ! 
for Nagware compiler end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_drag_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_drag_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif call broadcast_scalar (f_Cdn_atm, master_task) diff --git a/cicecore/cicedynB/analysis/ice_history_fsd.F90 b/cicecore/cicedynB/analysis/ice_history_fsd.F90 index 7ad81e7d2..c64ecbefa 100644 --- a/cicecore/cicedynB/analysis/ice_history_fsd.F90 +++ b/cicecore/cicedynB/analysis/ice_history_fsd.F90 @@ -81,6 +81,7 @@ subroutine init_hist_fsd_2D integer (kind=int_kind) :: nml_error ! namelist i/o error flag real (kind=dbl_kind) :: secday logical (kind=log_kind) :: tr_fsd, wave_spec + character(len=*), parameter :: subname = '(init_hist_fsd_2D)' call icepack_query_tracer_flags(tr_fsd_out=tr_fsd) @@ -95,25 +96,27 @@ subroutine init_hist_fsd_2D ! 
read namelist !----------------------------------------------------------------- - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_fsd_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_fsd_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_fsd_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_fsd_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_fsd_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif call broadcast_scalar (f_afsd, master_task) diff --git a/cicecore/cicedynB/analysis/ice_history_mechred.F90 b/cicecore/cicedynB/analysis/ice_history_mechred.F90 index a20df5fb0..920a83b47 100644 --- a/cicecore/cicedynB/analysis/ice_history_mechred.F90 +++ b/cicecore/cicedynB/analysis/ice_history_mechred.F90 @@ -89,6 +89,7 @@ subroutine init_hist_mechred_2D integer (kind=int_kind) :: nml_error ! namelist i/o error flag real (kind=dbl_kind) :: secday logical (kind=log_kind) :: tr_lvl + character(len=*), parameter :: subname = '(init_hist_mechred_2D)' call icepack_query_parameters(secday_out=secday) @@ -101,25 +102,27 @@ subroutine init_hist_mechred_2D ! 
read namelist !----------------------------------------------------------------- - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_mechred_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_mechred_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_mechred_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_mechred_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_mechred_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif if (.not. tr_lvl) then diff --git a/cicecore/cicedynB/analysis/ice_history_pond.F90 b/cicecore/cicedynB/analysis/ice_history_pond.F90 index 182865fec..365bd4410 100644 --- a/cicecore/cicedynB/analysis/ice_history_pond.F90 +++ b/cicecore/cicedynB/analysis/ice_history_pond.F90 @@ -73,6 +73,7 @@ subroutine init_hist_pond_2D integer (kind=int_kind) :: ns integer (kind=int_kind) :: nml_error ! namelist i/o error flag logical (kind=log_kind) :: tr_pond + character(len=*), parameter :: subname = '(init_hist_pond_2D)' call icepack_query_tracer_flags(tr_pond_out=tr_pond) @@ -84,25 +85,27 @@ subroutine init_hist_pond_2D ! 
read namelist !----------------------------------------------------------------- - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading icefields_pond_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: icefields_pond_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=icefields_pond_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice(subname//'ERROR: reading icefields_pond_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_pond_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif if (.not. tr_pond) then diff --git a/cicecore/cicedynB/analysis/ice_history_snow.F90 b/cicecore/cicedynB/analysis/ice_history_snow.F90 index 5a590af2b..090759759 100644 --- a/cicecore/cicedynB/analysis/ice_history_snow.F90 +++ b/cicecore/cicedynB/analysis/ice_history_snow.F90 @@ -87,30 +87,32 @@ subroutine init_hist_snow_2D (dt) if (tr_snow) then - !----------------------------------------------------------------- - ! read namelist - !----------------------------------------------------------------- - - call get_fileunit(nu_nml) - if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) - if (nml_error /= 0) then - nml_error = -1 - else + !----------------------------------------------------------------- + ! 
read namelist + !----------------------------------------------------------------- + + if (my_task == master_task) then + write(nu_diag,*) subname,' Reading icefields_snow_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_snow_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=icefields_snow_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: icefields_snow_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif - do while (nml_error > 0) - read(nu_nml, nml=icefields_snow_nml,iostat=nml_error) - end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - close (nu_nml) - call abort_ice('ice: error reading icefields_snow_nml') - endif else ! .not. tr_snow f_smassice = 'x' diff --git a/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 b/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 index 2b735b71c..f3bb7a935 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 @@ -25,6 +25,11 @@ module ice_dyn_eap p001, p027, p055, p111, p166, p222, p25, p333 use ice_fileunits, only: nu_diag, nu_dump_eap, nu_restart_eap use ice_exit, only: abort_ice +! use ice_timers, only: & +! ice_timer_start, ice_timer_stop, & +! timer_tmp1, timer_tmp2, timer_tmp3, timer_tmp4, & +! timer_tmp5, timer_tmp6, timer_tmp7, timer_tmp8, timer_tmp9 + use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted use icepack_intfc, only: icepack_query_parameters use icepack_intfc, only: icepack_ice_strength @@ -61,6 +66,11 @@ module ice_dyn_eap a11 , & ! components of structure tensor () a12 + ! 
private for reuse, set in init_eap + + real (kind=dbl_kind) :: & + puny, pi, pi2, piq, pih + !======================================================================= contains @@ -138,9 +148,6 @@ subroutine eap (dt) grid_atm_dynu, grid_atm_dynv, grid_ocn_dynu, grid_ocn_dynv use ice_state, only: aice, vice, vsno, uvel, vvel, divu, shear, & aice_init, aice0, aicen, vicen, strength -! use ice_timers, only: timer_dynamics, timer_bound, & -! ice_timer_start, ice_timer_stop, & -! timer_tmp1, timer_tmp2, timer_tmp3 use ice_timers, only: timer_dynamics, timer_bound, & ice_timer_start, ice_timer_stop @@ -211,7 +218,7 @@ subroutine eap (dt) ! This call is needed only if dt changes during runtime. ! call set_evp_parameters (dt) - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) + !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block do i = 1, nx_block @@ -285,10 +292,7 @@ subroutine eap (dt) call grid_average_X2Y('F',strairyT,'T',strairy,'U') endif -! tcraig, tcx, turned off this threaded region, in evp, this block and -! the icepack_ice_strength call seems to not be thread safe. more -! debugging needed - !$TCXOMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,ij,i,j) SCHEDULE(runtime) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -375,7 +379,7 @@ subroutine eap (dt) strength = strength(i,j, iblk) ) enddo ! ij enddo ! iblk - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & @@ -406,30 +410,28 @@ subroutine eap (dt) !----------------------------------------------------------------- if (seabed_stress) then - - ! 
tcraig, evp omp causes abort on cheyenne with pgi, turn off here too - !$TCXOMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - - if ( seabed_stress_method == 'LKD' ) then - - call seabed_stress_factor_LKD (nx_block, ny_block, & - icellu (iblk), & - indxui(:,iblk), indxuj(:,iblk), & - vice(:,:,iblk), aice(:,:,iblk), & - hwater(:,:,iblk), Tbu(:,:,iblk)) - - elseif ( seabed_stress_method == 'probabilistic' ) then - - call seabed_stress_factor_prob (nx_block, ny_block, & - icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & - icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & - aicen(:,:,:,iblk), vicen(:,:,:,iblk), & - hwater(:,:,iblk), Tbu(:,:,iblk)) - endif - - enddo - !$TCXOMP END PARALLEL DO + if ( seabed_stress_method == 'LKD' ) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call seabed_stress_factor_LKD (nx_block, ny_block, & + icellu (iblk), & + indxui(:,iblk), indxuj(:,iblk), & + vice(:,:,iblk), aice(:,:,iblk), & + hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + !$OMP END PARALLEL DO + + elseif ( seabed_stress_method == 'probabilistic' ) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call seabed_stress_factor_prob (nx_block, ny_block, & + icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & + icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & + aicen(:,:,:,iblk), vicen(:,:,:,iblk), & + hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + !$OMP END PARALLEL DO + endif endif do ksub = 1,ndte ! subcycling @@ -438,10 +440,10 @@ subroutine eap (dt) ! stress tensor equation, total surface stress !----------------------------------------------------------------- - !$TCXOMP PARALLEL DO PRIVATE(iblk,strtmp) + !$OMP PARALLEL DO PRIVATE(iblk,strtmp) SCHEDULE(runtime) do iblk = 1, nblocks -! call ice_timer_start(timer_tmp1) ! dynamics +! call ice_timer_start(timer_tmp1,iblk) call stress_eap (nx_block, ny_block, & ksub, ndte, & icellt(iblk), & @@ -474,16 +476,16 @@ subroutine eap (dt) ! 
rdg_conv (:,:,iblk), rdg_shear (:,:,iblk), & rdg_conv (:,:,iblk), & strtmp (:,:,:)) -! call ice_timer_stop(timer_tmp1) ! dynamics +! call ice_timer_stop(timer_tmp1,iblk) !----------------------------------------------------------------- ! momentum equation !----------------------------------------------------------------- +! call ice_timer_start(timer_tmp2,iblk) call stepu (nx_block, ny_block, & icellu (iblk), Cdn_ocn (:,:,iblk), & indxui (:,iblk), indxuj (:,iblk), & - ksub, & aiu (:,:,iblk), strtmp (:,:,:), & uocnU (:,:,iblk), vocnU (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & @@ -495,12 +497,13 @@ subroutine eap (dt) uvel_init(:,:,iblk), vvel_init(:,:,iblk),& uvel (:,:,iblk), vvel (:,:,iblk), & Tbu (:,:,iblk)) +! call ice_timer_stop(timer_tmp2,iblk) !----------------------------------------------------------------- ! evolution of structure tensor A !----------------------------------------------------------------- -! call ice_timer_start(timer_tmp3) ! dynamics +! call ice_timer_start(timer_tmp3,iblk) if (mod(ksub,10) == 1) then ! only called every 10th timestep call stepa (nx_block, ny_block, & dtei, icellt (iblk), & @@ -517,9 +520,9 @@ subroutine eap (dt) stress12_1(:,:,iblk), stress12_2(:,:,iblk), & stress12_3(:,:,iblk), stress12_4(:,:,iblk)) endif -! call ice_timer_stop(timer_tmp3) ! dynamics +! call ice_timer_stop(timer_tmp3,iblk) enddo - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO call stack_velocity_field(uvel, vvel, fld2) call ice_timer_start(timer_bound) @@ -542,7 +545,7 @@ subroutine eap (dt) ! 
ice-ocean stress !----------------------------------------------------------------- - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call dyn_finish & @@ -552,8 +555,6 @@ subroutine eap (dt) uvel (:,:,iblk), vvel (:,:,iblk), & uocnU (:,:,iblk), vocnU (:,:,iblk), & aiu (:,:,iblk), fm (:,:,iblk), & - strintx (:,:,iblk), strinty (:,:,iblk), & - strairx (:,:,iblk), strairy (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk)) enddo @@ -620,17 +621,19 @@ subroutine init_eap real (kind=dbl_kind) :: & ainit, xinit, yinit, zinit, & da, dx, dy, dz, & - pi, pih, piq, phi + phi character(len=*), parameter :: subname = '(init_eap)' - call icepack_query_parameters(pi_out=pi, pih_out=pih, piq_out=piq) + call icepack_query_parameters(puny_out=puny, & + pi_out=pi, pi2_out=pi2, piq_out=piq, pih_out=pih) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) + phi = pi/c12 ! diamond shaped floe smaller angle (default phi = 30 deg) - !$OMP PARALLEL DO PRIVATE(iblk,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block do i = 1, nx_block @@ -784,14 +787,9 @@ FUNCTION s11kr(x,y,z,phi) d11, d12, d22, & IIn1t2, IIn2t1, & ! IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s11kr)' + Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s11kr)' p = phi @@ -855,14 +853,9 @@ FUNCTION s12kr(x,y,z,phi) d11, d12, d22, & IIn1t2, IIn2t1, & ! 
IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s12kr)' + Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s12kr)' p = phi @@ -926,14 +919,9 @@ FUNCTION s22kr(x,y,z,phi) d11, d12, d22, & IIn1t2, IIn2t1, & ! IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s22kr)' + Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s22kr)' p = phi @@ -996,14 +984,9 @@ FUNCTION s11ks(x,y,z,phi) ! t2t1i12, t2t1i21, t2t1i22, & d11, d12, d22, & IIn1t2, IIn2t1, IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s11ks)' + Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s11ks)' p = phi @@ -1065,14 +1048,9 @@ FUNCTION s12ks(x,y,z,phi) t2t1i12, t2t1i21, & d11, d12, d22, & IIn1t2, IIn2t1, IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s12ks)' + Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s12ks)' p =phi @@ -1136,14 +1114,9 @@ FUNCTION s22ks(x,y,z,phi) t2t1i22, & d11, d12, d22, & IIn1t2, IIn2t1, IIt1t2, & - Hen1t2, Hen2t1, & - pih, puny - character(len=*), parameter :: subname = '(s22ks)' + 
Hen1t2, Hen2t1 - call icepack_query_parameters(pih_out=pih, puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + character(len=*), parameter :: subname = '(s22ks)' p = phi @@ -1225,11 +1198,6 @@ subroutine stress_eap (nx_block, ny_block, & rdg_conv, & strtmp) -!echmod tmp -! use ice_timers, only: & -! ice_timer_start, ice_timer_stop, & -! timer_tmp1, timer_tmp2, timer_tmp3 - integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions ksub , & ! subcycling step @@ -1307,7 +1275,7 @@ subroutine stress_eap (nx_block, ny_block, & csigmne, csigmnw, csigmse, csigmsw , & csig12ne, csig12nw, csig12se, csig12sw , & str12ew, str12we, str12ns, str12sn , & - strp_tmp, strm_tmp, puny + strp_tmp, strm_tmp real (kind=dbl_kind) :: & alpharne, alpharnw, alpharsw, alpharse, & @@ -1319,11 +1287,6 @@ subroutine stress_eap (nx_block, ny_block, & ! Initialize !----------------------------------------------------------------- - call icepack_query_parameters(puny_out=puny) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) - strtmp(:,:,:) = c0 do ij = 1, icellt @@ -1367,7 +1330,6 @@ subroutine stress_eap (nx_block, ny_block, & !----------------------------------------------------------------- ! Stress updated depending on strain rate and structure tensor !----------------------------------------------------------------- -! call ice_timer_start(timer_tmp2) ! dynamics ! ne call update_stress_rdg (ksub, ndte, divune, tensionne, & @@ -1394,7 +1356,6 @@ subroutine stress_eap (nx_block, ny_block, & stress12tmp_4, strength(i,j), & alpharse, alphasse) -! call ice_timer_stop(timer_tmp2) ! dynamics !----------------------------------------------------------------- ! 
on last subcycle, save quantities for mechanical redistribution !----------------------------------------------------------------- @@ -1646,10 +1607,14 @@ subroutine update_stress_rdg (ksub, ndte, divu, tension, & Angle_denom_gamma, Angle_denom_alpha, & Tany_1, Tany_2, & x, y, dx, dy, da, & - invdx, invdy, invda, invsin, & dtemp1, dtemp2, atempprime, & - kxw, kyw, kaw, & - puny, pi, pi2, piq, pih + kxw, kyw, kaw + + real (kind=dbl_kind), save :: & + invdx, invdy, invda, invsin + + logical (kind=log_kind), save :: & + first_call = .true. real (kind=dbl_kind), parameter :: & kfriction = 0.45_dbl_kind @@ -1661,17 +1626,13 @@ subroutine update_stress_rdg (ksub, ndte, divu, tension, & character(len=*), parameter :: subname = '(update_stress_rdg)' - call icepack_query_parameters(puny_out=puny, & - pi_out=pi, pi2_out=pi2, piq_out=piq, pih_out=pih) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) - ! Factor to maintain the same stress as in EVP (see Section 3) ! Can be set to 1 otherwise - invstressconviso = c1/(c1+kfriction*kfriction) - invsin = c1/sin(pi2/c12) * invstressconviso + if (first_call) then + invstressconviso = c1/(c1+kfriction*kfriction) + invsin = c1/sin(pi2/c12) * invstressconviso + endif ! compute eigenvalues, eigenvectors and angles for structure tensor, strain rates @@ -1679,7 +1640,7 @@ subroutine update_stress_rdg (ksub, ndte, divu, tension, & a22 = c1-a11 -! gamma: angle between general coordiantes and principal axis of A +! gamma: angle between general coordinates and principal axis of A ! here Tan2gamma = 2 a12 / (a11 - a22) Q11Q11 = c1 @@ -1770,12 +1731,14 @@ subroutine update_stress_rdg (ksub, ndte, divu, tension, & if (y < 0) y = y + pi ! 
Now calculate updated stress tensor - dx = pi/real(nx_yield-1,kind=dbl_kind) - dy = pi/real(ny_yield-1,kind=dbl_kind) - da = p5/real(na_yield-1,kind=dbl_kind) - invdx = c1/dx - invdy = c1/dy - invda = c1/da + if (first_call) then + dx = pi/real(nx_yield-1,kind=dbl_kind) + dy = pi/real(ny_yield-1,kind=dbl_kind) + da = p5/real(na_yield-1,kind=dbl_kind) + invdx = c1/dx + invdy = c1/dy + invda = c1/da + endif if (interpolate_stress_rdg) then @@ -1913,6 +1876,8 @@ subroutine update_stress_rdg (ksub, ndte, divu, tension, & + rotstemp22s*dtemp22 endif + first_call = .false. + end subroutine update_stress_rdg !======================================================================= @@ -2043,7 +2008,7 @@ subroutine calc_ffrac (stressp, stressm, & real (kind=dbl_kind) :: & sigma11, sigma12, sigma22, & - gamma, sigma_1, sigma_2, pih, & + gamma, sigma_1, sigma_2, & Q11, Q12, Q11Q11, Q11Q12, Q12Q12 real (kind=dbl_kind), parameter :: & @@ -2052,11 +2017,6 @@ subroutine calc_ffrac (stressp, stressm, & character(len=*), parameter :: subname = '(calc_ffrac)' - call icepack_query_parameters(pih_out=pih) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) - sigma11 = p5*(stressp+stressm) sigma12 = stress12 sigma22 = p5*(stressp-stressm) @@ -2219,7 +2179,7 @@ subroutine read_restart_eap() ! 
Ensure unused values in west and south ghost cells are 0 !----------------------------------------------------------------- - !$OMP PARALLEL DO PRIVATE(iblk,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, nghost do i = 1, nx_block diff --git a/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 b/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 index 0a48783c4..360781a79 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 @@ -114,7 +114,7 @@ subroutine evp (dt) use ice_dyn_evp_1d, only: ice_dyn_evp_1d_copyin, ice_dyn_evp_1d_kernel, & ice_dyn_evp_1d_copyout use ice_dyn_shared, only: evp_algorithm, stack_velocity_field, unstack_velocity_field, DminTarea - + use ice_dyn_shared, only: deformations, deformations_T real (kind=dbl_kind), intent(in) :: & dt ! time step @@ -246,7 +246,7 @@ subroutine evp (dt) ! field_loc_center, field_type_scalar) ! call ice_timer_stop(timer_bound) - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) + !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block @@ -339,9 +339,7 @@ subroutine evp (dt) endif endif -! tcraig, tcx, threading here leads to some non-reproducbile results and failures in icepack_ice_strength -! need to do more debugging - !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,ij,i,j) SCHEDULE(runtime) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -433,11 +431,11 @@ subroutine evp (dt) enddo ! ij enddo ! iblk - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO if (grid_ice == 'CD' .or. 
grid_ice == 'C') then - !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,ij,i,j) SCHEDULE(runtime) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -528,7 +526,7 @@ subroutine evp (dt) enddo enddo enddo ! iblk - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO endif ! grid_ice @@ -592,61 +590,67 @@ subroutine evp (dt) if (seabed_stress) then - ! tcraig, causes abort with pgi compiler on cheyenne - !$TCXOMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - - select case (trim(grid_ice)) - case('B') - - if ( seabed_stress_method == 'LKD' ) then + select case (trim(grid_ice)) + case('B') + if ( seabed_stress_method == 'LKD' ) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks call seabed_stress_factor_LKD (nx_block, ny_block, & icellu (iblk), & indxui(:,iblk), indxuj(:,iblk), & vice(:,:,iblk), aice(:,:,iblk), & hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + !$OMP END PARALLEL DO - elseif ( seabed_stress_method == 'probabilistic' ) then - + elseif ( seabed_stress_method == 'probabilistic' ) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks call seabed_stress_factor_prob (nx_block, ny_block, & icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & aicen(:,:,:,iblk), vicen(:,:,:,iblk), & hwater(:,:,iblk), Tbu(:,:,iblk)) - endif + enddo + !$OMP END PARALLEL DO + endif case('CD','C') if ( seabed_stress_method == 'LKD' ) then - - call seabed_stress_factor_LKD (nx_block, ny_block, & - icelle (iblk), & - indxei(:,iblk), indxej(:,iblk), & - vice(:,:,iblk), aice(:,:,iblk), & - hwater(:,:,iblk), TbE(:,:,iblk)) - call seabed_stress_factor_LKD (nx_block, ny_block, & - icelln (iblk), & - indxni(:,iblk), indxnj(:,iblk), & - vice(:,:,iblk), aice(:,:,iblk), & - hwater(:,:,iblk), TbN(:,:,iblk)) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call 
seabed_stress_factor_LKD (nx_block, ny_block, & + icelle (iblk), & + indxei(:,iblk), indxej(:,iblk), & + vice(:,:,iblk), aice(:,:,iblk), & + hwater(:,:,iblk), TbE(:,:,iblk)) + call seabed_stress_factor_LKD (nx_block, ny_block, & + icelln (iblk), & + indxni(:,iblk), indxnj(:,iblk), & + vice(:,:,iblk), aice(:,:,iblk), & + hwater(:,:,iblk), TbN(:,:,iblk)) + enddo + !$OMP END PARALLEL DO elseif ( seabed_stress_method == 'probabilistic' ) then - - call seabed_stress_factor_prob (nx_block, ny_block, & - icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & - icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & - aicen(:,:,:,iblk), vicen(:,:,:,iblk), & - hwater(:,:,iblk), Tbu(:,:,iblk), & - TbE(:,:,iblk), TbN(:,:,iblk), & - icelle(iblk), indxei(:,iblk), indxej(:,iblk), & - icelln(iblk), indxni(:,iblk), indxnj(:,iblk) ) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call seabed_stress_factor_prob (nx_block, ny_block, & + icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & + icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & + aicen(:,:,:,iblk), vicen(:,:,:,iblk), & + hwater(:,:,iblk), Tbu(:,:,iblk), & + TbE(:,:,iblk), TbN(:,:,iblk), & + icelle(iblk), indxei(:,iblk), indxej(:,iblk), & + icelln(iblk), indxni(:,iblk), indxnj(:,iblk) ) + enddo + !$OMP END PARALLEL DO endif end select - enddo - !$TCXOMP END PARALLEL DO endif call ice_timer_start(timer_evp_2d) @@ -687,25 +691,24 @@ subroutine evp (dt) do ksub = 1,ndte ! subcycling - !----------------------------------------------------------------- - ! stress tensor equation, total surface stress - !----------------------------------------------------------------- - select case (grid_ice) case('B') - !$TCXOMP PARALLEL DO PRIVATE(iblk,strtmp) - do iblk = 1, nblocks - + !$OMP PARALLEL DO PRIVATE(iblk,strtmp) SCHEDULE(runtime) + do iblk = 1, nblocks + + !----------------------------------------------------------------- + ! 
stress tensor equation, total surface stress + !----------------------------------------------------------------- call stress (nx_block, ny_block, & - ksub, icellt(iblk), & + icellt(iblk), & indxti (:,iblk), indxtj (:,iblk), & uvel (:,:,iblk), vvel (:,:,iblk), & dxt (:,:,iblk), dyt (:,:,iblk), & dxhy (:,:,iblk), dyhx (:,:,iblk), & cxp (:,:,iblk), cyp (:,:,iblk), & cxm (:,:,iblk), cym (:,:,iblk), & - tarear (:,:,iblk), DminTarea(:,:,iblk), & + DminTarea (:,:,iblk), & strength (:,:,iblk), & stressp_1 (:,:,iblk), stressp_2 (:,:,iblk), & stressp_3 (:,:,iblk), stressp_4 (:,:,iblk), & @@ -713,17 +716,30 @@ subroutine evp (dt) stressm_3 (:,:,iblk), stressm_4 (:,:,iblk), & stress12_1(:,:,iblk), stress12_2(:,:,iblk), & stress12_3(:,:,iblk), stress12_4(:,:,iblk), & - shear (:,:,iblk), divu (:,:,iblk), & - rdg_conv (:,:,iblk), rdg_shear (:,:,iblk), & strtmp (:,:,:) ) - !----------------------------------------------------------------- - ! momentum equation - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! on last subcycle, save quantities for mechanical redistribution + !----------------------------------------------------------------- + if (ksub == ndte) then + call deformations (nx_block , ny_block , & + icellt (iblk), & + indxti (:,iblk), indxtj (:,iblk), & + uvel (:,:,iblk), vvel (:,:,iblk), & + dxt (:,:,iblk), dyt (:,:,iblk), & + cxp (:,:,iblk), cyp (:,:,iblk), & + cxm (:,:,iblk), cym (:,:,iblk), & + tarear (:,:,iblk), & + shear (:,:,iblk), divu (:,:,iblk), & + rdg_conv(:,:,iblk), rdg_shear(:,:,iblk) ) + endif + + !----------------------------------------------------------------- + ! 
momentum equation + !----------------------------------------------------------------- call stepu (nx_block, ny_block, & icellu (iblk), Cdn_ocn (:,:,iblk), & indxui (:,iblk), indxuj (:,iblk), & - ksub, & aiu (:,:,iblk), strtmp (:,:,:), & uocnU (:,:,iblk), vocnU (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & @@ -736,44 +752,58 @@ subroutine evp (dt) uvel (:,:,iblk), vvel (:,:,iblk), & Tbu (:,:,iblk)) - enddo - !$TCXOMP END PARALLEL DO + + + enddo ! iblk + !$OMP END PARALLEL DO case('CD','C') - !$TCXOMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks - call stress_T (nx_block, ny_block, & - ksub, icellt(iblk), & + icellt(iblk), & indxti (:,iblk), indxtj (:,iblk), & uvelE (:,:,iblk), vvelE (:,:,iblk), & uvelN (:,:,iblk), vvelN (:,:,iblk), & dxN (:,:,iblk), dyE (:,:,iblk), & dxT (:,:,iblk), dyT (:,:,iblk), & - tarear (:,:,iblk), DminTarea (:,:,iblk), & + DminTarea (:,:,iblk), & strength (:,:,iblk), & zetax2T (:,:,iblk), etax2T (:,:,iblk), & stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12T (:,:,iblk), & - shear (:,:,iblk), divu (:,:,iblk), & - rdg_conv (:,:,iblk), rdg_shear (:,:,iblk) ) - + stress12T (:,:,iblk) ) + + !----------------------------------------------------------------- + ! on last subcycle, save quantities for mechanical redistribution + !----------------------------------------------------------------- + if (ksub == ndte) then + call deformations_T (nx_block , ny_block , & + icellt (iblk), & + indxti (:,iblk), indxtj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + dxN (:,:,iblk), dyE (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + tarear (:,:,iblk), & + shear (:,:,iblk), divu (:,:,iblk), & + rdg_conv(:,:,iblk), rdg_shear(:,:,iblk)) + endif enddo + !$OMP END PARALLEL DO ! 
Need to update the halos for the stress components call ice_timer_start(timer_bound) call ice_HaloUpdate (zetax2T, halo_info, & - field_loc_center, field_type_scalar) - call ice_HaloUpdate (etax2T, halo_info, & - field_loc_center, field_type_scalar) + field_loc_center, field_type_scalar) + call ice_HaloUpdate (etax2T, halo_info, & + field_loc_center, field_type_scalar) call ice_timer_stop(timer_bound) - !$TCXOMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks - call stress_U (nx_block, ny_block, & - ksub, icellu(iblk), & + icellu(iblk), & indxui (:,iblk), indxuj (:,iblk), & uvelE (:,:,iblk), vvelE (:,:,iblk), & uvelN (:,:,iblk), vvelN (:,:,iblk), & @@ -789,9 +819,8 @@ subroutine evp (dt) strength (:,:,iblk), & stresspU (:,:,iblk), stressmU (:,:,iblk), & stress12U (:,:,iblk)) - enddo - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO ! Need to update the halos for the stress components call ice_timer_start(timer_bound) @@ -809,11 +838,11 @@ subroutine evp (dt) field_loc_NEcorner, field_type_scalar) call ice_timer_stop(timer_bound) - !$TCXOMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks call div_stress (nx_block, ny_block, & ! E point - ksub, icelle(iblk), & + icelle(iblk), & indxei (:,iblk), indxej (:,iblk), & dxE (:,:,iblk), dyE (:,:,iblk), & dxU (:,:,iblk), dyT (:,:,iblk), & @@ -825,8 +854,8 @@ subroutine evp (dt) strintxE (:,:,iblk), strintyE (:,:,iblk), & 'E') - call div_stress (nx_block, ny_block, & ! N point - ksub, icelln(iblk), & + call div_stress (nx_block, ny_block, & ! N point + icelln(iblk), & indxni (:,iblk), indxnj (:,iblk), & dxN (:,:,iblk), dyN (:,:,iblk), & dxT (:,:,iblk), dyU (:,:,iblk), & @@ -839,17 +868,17 @@ subroutine evp (dt) 'N') enddo - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO if (grid_ice == 'CD') then - !$TCXOMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks call step_vel (nx_block, ny_block, & ! 
E point icelle (iblk), Cdn_ocn (:,:,iblk), & indxei (:,iblk), indxej (:,iblk), & - ksub, aiE (:,:,iblk), & + aiE (:,:,iblk), & uocnE (:,:,iblk), vocnE (:,:,iblk), & waterxE (:,:,iblk), wateryE (:,:,iblk), & forcexE (:,:,iblk), forceyE (:,:,iblk), & @@ -863,7 +892,7 @@ subroutine evp (dt) call step_vel (nx_block, ny_block, & ! N point icelln (iblk), Cdn_ocn (:,:,iblk), & indxni (:,iblk), indxnj (:,iblk), & - ksub, aiN (:,:,iblk), & + aiN (:,:,iblk), & uocnN (:,:,iblk), vocnN (:,:,iblk), & waterxN (:,:,iblk), wateryN (:,:,iblk), & forcexN (:,:,iblk), forceyN (:,:,iblk), & @@ -875,17 +904,17 @@ subroutine evp (dt) TbN (:,:,iblk)) enddo - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO elseif (grid_ice == 'C') then - !$TCXOMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks call stepu_Cgrid (nx_block, ny_block, & ! u, E point icelle (iblk), Cdn_ocn (:,:,iblk), & indxei (:,iblk), indxej (:,iblk), & - ksub, aiE (:,:,iblk), & + aiE (:,:,iblk), & uocnE (:,:,iblk), vocnE (:,:,iblk), & waterxE (:,:,iblk), forcexE (:,:,iblk), & emassdti (:,:,iblk), fmE (:,:,iblk), & @@ -897,7 +926,7 @@ subroutine evp (dt) call stepv_Cgrid (nx_block, ny_block, & ! v, N point icelln (iblk), Cdn_ocn (:,:,iblk), & indxni (:,iblk), indxnj (:,iblk), & - ksub, aiN (:,:,iblk), & + aiN (:,:,iblk), & uocnN (:,:,iblk), vocnN (:,:,iblk), & wateryN (:,:,iblk), forceyN (:,:,iblk), & nmassdti (:,:,iblk), fmN (:,:,iblk), & @@ -907,7 +936,7 @@ subroutine evp (dt) TbN (:,:,iblk)) enddo - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO endif @@ -1037,7 +1066,7 @@ subroutine evp (dt) ! 
ice-ocean stress !----------------------------------------------------------------- - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call dyn_finish & @@ -1047,8 +1076,6 @@ subroutine evp (dt) uvel (:,:,iblk), vvel (:,:,iblk), & uocnU (:,:,iblk), vocnU (:,:,iblk), & aiu (:,:,iblk), fm (:,:,iblk), & - strintx (:,:,iblk), strinty (:,:,iblk), & - strairx (:,:,iblk), strairy (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk)) enddo @@ -1056,7 +1083,7 @@ subroutine evp (dt) if (grid_ice == 'CD' .or. grid_ice == 'C') then - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call dyn_finish & @@ -1066,8 +1093,6 @@ subroutine evp (dt) uvelN (:,:,iblk), vvelN (:,:,iblk), & uocnN (:,:,iblk), vocnN (:,:,iblk), & aiN (:,:,iblk), fmN (:,:,iblk), & - strintxN(:,:,iblk), strintyN(:,:,iblk), & - strairxN(:,:,iblk), strairyN(:,:,iblk), & strocnxN(:,:,iblk), strocnyN(:,:,iblk)) call dyn_finish & @@ -1077,8 +1102,6 @@ subroutine evp (dt) uvelE (:,:,iblk), vvelE (:,:,iblk), & uocnE (:,:,iblk), vocnE (:,:,iblk), & aiE (:,:,iblk), fmE (:,:,iblk), & - strintxE(:,:,iblk), strintyE(:,:,iblk), & - strairxE(:,:,iblk), strairyE(:,:,iblk), & strocnxE(:,:,iblk), strocnyE(:,:,iblk)) enddo @@ -1092,7 +1115,7 @@ subroutine evp (dt) ! conservation requires aiu be divided before averaging work1 = c0 work2 = c0 - !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij,iblk) @@ -1127,14 +1150,14 @@ end subroutine evp ! author: Elizabeth C. 
Hunke, LANL subroutine stress (nx_block, ny_block, & - ksub, icellt, & + icellt, & indxti, indxtj, & uvel, vvel, & dxt, dyt, & dxhy, dyhx, & cxp, cyp, & cxm, cym, & - tarear, DminTarea, & + DminTarea, & strength, & stressp_1, stressp_2, & stressp_3, stressp_4, & @@ -1142,16 +1165,13 @@ subroutine stress (nx_block, ny_block, & stressm_3, stressm_4, & stress12_1, stress12_2, & stress12_3, stress12_4, & - shear, divu, & - rdg_conv, rdg_shear, & str ) - use ice_dyn_shared, only: strain_rates, deformations, & - viscous_coeffs_and_rep_pressure_T, capping + use ice_dyn_shared, only: strain_rates, viscous_coeffs_and_rep_pressure_T, & + capping integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - ksub , & ! subcycling step icellt ! no. of cells where icetmask = 1 integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & @@ -1170,7 +1190,6 @@ subroutine stress (nx_block, ny_block, & cxp , & ! 1.5*HTN - 0.5*HTS cym , & ! 0.5*HTE - 1.5*HTW cxm , & ! 0.5*HTN - 1.5*HTS - tarear , & ! 1/tarea DminTarea ! deltaminEVP*tarea real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & @@ -1178,12 +1197,6 @@ subroutine stress (nx_block, ny_block, & stressm_1, stressm_2, stressm_3, stressm_4 , & ! sigma11-sigma22 stress12_1,stress12_2,stress12_3,stress12_4 ! sigma12 - real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & - shear , & ! strain rate II component (1/s) - divu , & ! strain rate I component, velocity divergence (1/s) - rdg_conv , & ! convergence term for ridging (1/s) - rdg_shear ! shear term for ridging (1/s) - real (kind=dbl_kind), dimension(nx_block,ny_block,8), intent(out) :: & str ! stress combinations @@ -1433,23 +1446,6 @@ subroutine stress (nx_block, ny_block, & enddo ! ij - !----------------------------------------------------------------- - ! 
on last subcycle, save quantities for mechanical redistribution - !----------------------------------------------------------------- - if (ksub == ndte) then - call deformations (nx_block , ny_block , & - icellt , & - indxti , indxtj , & - uvel , vvel , & - dxt , dyt , & - cxp , cyp , & - cxm , cym , & - tarear , & - shear , divu , & - rdg_conv , rdg_shear ) - - endif - end subroutine stress !======================================================================= @@ -1460,26 +1456,23 @@ end subroutine stress ! Nov 2021 subroutine stress_T (nx_block, ny_block, & - ksub, icellt, & + icellt, & indxti, indxtj, & uvelE, vvelE, & uvelN, vvelN, & dxN, dyE, & dxT, dyT, & - tarear, DminTarea, & + DminTarea, & strength, & zetax2T, etax2T, & stresspT, stressmT, & - stress12T, & - shear, divu, & - rdg_conv, rdg_shear ) + stress12T ) - use ice_dyn_shared, only: strain_rates_T, deformations_T, & - viscous_coeffs_and_rep_pressure_T, capping + use ice_dyn_shared, only: strain_rates_T, capping, & + viscous_coeffs_and_rep_pressure_T integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - ksub , & ! subcycling step icellt ! no. of cells where icetmask = 1 integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & @@ -1496,7 +1489,6 @@ subroutine stress_T (nx_block, ny_block, & dxT , & ! width of T-cell through the middle (m) dyT , & ! height of T-cell through the middle (m) strength , & ! ice strength (N/m) - tarear , & ! 1/tarea DminTarea ! deltaminEVP*tarea real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & @@ -1506,12 +1498,6 @@ subroutine stress_T (nx_block, ny_block, & stressmT , & ! sigma11-sigma22 stress12T ! sigma12 - real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & - shear , & ! strain rate II component (1/s) - divu , & ! strain rate I component, velocity divergence (1/s) - rdg_conv , & ! convergence term for ridging (1/s) - rdg_shear ! shear term for ridging (1/s) - ! 
local variables integer (kind=int_kind) :: & @@ -1532,10 +1518,10 @@ subroutine stress_T (nx_block, ny_block, & i = indxti(ij) j = indxtj(ij) - !----------------------------------------------------------------- - ! strain rates at T point - ! NOTE these are actually strain rates * area (m^2/s) - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! strain rates at T point + ! NOTE these are actually strain rates * area (m^2/s) + !----------------------------------------------------------------- call strain_rates_T (nx_block, ny_block, & i, j, & @@ -1546,20 +1532,20 @@ subroutine stress_T (nx_block, ny_block, & divT, tensionT, & shearT, DeltaT ) - !----------------------------------------------------------------- - ! viscous coefficients and replacement pressure at T point - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! viscous coefficients and replacement pressure at T point + !----------------------------------------------------------------- call viscous_coeffs_and_rep_pressure_T (strength(i,j), & DminTarea(i,j), DeltaT, & zetax2T(i,j),etax2T(i,j),& rep_prsT, capping ) - !----------------------------------------------------------------- - ! the stresses ! kg/s^2 - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! the stresses ! kg/s^2 + !----------------------------------------------------------------- - ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code + ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code stresspT(i,j) = (stresspT(i,j)*(c1-arlx1i*revp) + & arlx1i*(zetax2T(i,j)*divT - rep_prsT)) * denom1 @@ -1572,25 +1558,7 @@ subroutine stress_T (nx_block, ny_block, & enddo ! ij - !----------------------------------------------------------------- - ! 
on last subcycle, save quantities for mechanical redistribution - !----------------------------------------------------------------- - if (ksub == ndte) then - - call deformations_T (nx_block , ny_block , & - icellt , & - indxti , indxtj , & - uvelE, vvelE, & - uvelN, vvelN, & - dxN, dyE, & - dxT, dyT, & - tarear , & - shear , divu , & - rdg_conv , rdg_shear ) - - endif - - end subroutine stress_T + end subroutine stress_T !======================================================================= @@ -1600,7 +1568,7 @@ end subroutine stress_T ! Nov 2021 subroutine stress_U (nx_block, ny_block, & - ksub, icellu, & + icellu, & indxui, indxuj, & uvelE, vvelE, & uvelN, vvelN, & @@ -1623,7 +1591,6 @@ subroutine stress_U (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - ksub , & ! subcycling step icellu ! no. of cells where iceumask = 1 integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & @@ -1676,10 +1643,10 @@ subroutine stress_U (nx_block, ny_block, & i = indxui(ij) j = indxuj(ij) - !----------------------------------------------------------------- - ! strain rates at U point - ! NOTE these are actually strain rates * area (m^2/s) - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! strain rates at U point + ! NOTE these are actually strain rates * area (m^2/s) + !----------------------------------------------------------------- call strain_rates_U (nx_block, ny_block, & i, j, & @@ -1694,43 +1661,40 @@ subroutine stress_U (nx_block, ny_block, & divU, tensionU, & shearU, DeltaU ) - !----------------------------------------------------------------- - ! viscous coefficients and replacement pressure at U point - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! 
viscous coefficients and replacement pressure at U point + !----------------------------------------------------------------- if (visc_coeff_method == 'avg_zeta') then - - call viscous_coeffs_and_rep_pressure_T2U (zetax2T(i ,j ), zetax2T(i ,j+1), & - zetax2T(i+1,j+1), zetax2T(i+1,j ), & - etax2T (i ,j ), etax2T (i ,j+1), & - etax2T (i+1,j+1), etax2T (i+1,j ), & - hm (i ,j ), hm (i ,j+1), & - hm (i+1,j+1), hm (i+1,j ), & - tarea (i ,j ), tarea (i ,j+1), & - tarea (i+1,j+1), tarea (i+1,j ), & - DeltaU,zetax2U, etax2U, rep_prsU) - + call viscous_coeffs_and_rep_pressure_T2U (zetax2T(i ,j ), zetax2T(i ,j+1), & + zetax2T(i+1,j+1), zetax2T(i+1,j ), & + etax2T (i ,j ), etax2T (i ,j+1), & + etax2T (i+1,j+1), etax2T (i+1,j ), & + hm (i ,j ), hm (i ,j+1), & + hm (i+1,j+1), hm (i+1,j ), & + tarea (i ,j ), tarea (i ,j+1), & + tarea (i+1,j+1), tarea (i+1,j ), & + DeltaU,zetax2U, etax2U, rep_prsU) elseif (visc_coeff_method == 'avg_strength') then - DminUarea = deltaminEVP*uarea(i,j) - - call viscous_coeffs_and_rep_pressure_U (strength(i ,j ), strength(i ,j+1), & - strength(i+1,j+1), strength(i+1,j ), & - hm (i ,j ) , hm (i ,j+1), & - hm (i+1,j+1) , hm (i+1,j ), & - tarea (i ,j ) , tarea (i ,j+1), & - tarea (i+1,j+1) , tarea (i+1,j ), & - DminUarea, & - DeltaU , capping, & - zetax2U, etax2U, rep_prsU) + DminUarea = deltaminEVP*uarea(i,j) + call viscous_coeffs_and_rep_pressure_U (strength(i ,j ), strength(i ,j+1), & + strength(i+1,j+1), strength(i+1,j ), & + hm (i ,j ) , hm (i ,j+1), & + hm (i+1,j+1) , hm (i+1,j ), & + tarea (i ,j ) , tarea (i ,j+1), & + tarea (i+1,j+1) , tarea (i+1,j ), & + DminUarea, & + DeltaU , capping, & + zetax2U, etax2U, rep_prsU) endif - !----------------------------------------------------------------- - ! the stresses ! kg/s^2 - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! the stresses ! 
kg/s^2 + !----------------------------------------------------------------- - ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code + ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code stresspU(i,j) = (stresspU(i,j)*(c1-arlx1i*revp) + & arlx1i*(zetax2U*divU - rep_prsU)) * denom1 @@ -1743,7 +1707,7 @@ subroutine stress_U (nx_block, ny_block, & enddo ! ij - end subroutine stress_U + end subroutine stress_U !======================================================================= @@ -1753,7 +1717,7 @@ end subroutine stress_U ! Nov 2021 subroutine div_stress (nx_block, ny_block, & - ksub, icell, & + icell, & indxi, indxj, & dxE_N, dyE_N, & dxT_U, dyT_U, & @@ -1768,7 +1732,6 @@ subroutine div_stress (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - ksub , & ! subcycling step icell ! no. of cells where epm (or npm) = 1 integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & diff --git a/cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90 b/cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90 old mode 100755 new mode 100644 index b896bdfe4..1ca41898d --- a/cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_evp_1d.F90 @@ -857,10 +857,8 @@ subroutine stepu_last(NA_len, rhow, lb, ub, Cw, aiu, uocn, vocn, & vvel(iw) = (cca * cc2 - ccb * cc1) / ab2 ! calculate seabed stress component for outputs - if (seabed_stress) then - taubx(iw) = -uvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - tauby(iw) = -vvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - end if + taubx(iw) = -uvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) + tauby(iw) = -vvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) end do #ifdef _OPENACC @@ -1294,7 +1292,10 @@ subroutine ice_dyn_evp_1d_kernel if (ndte < 2) call abort_ice(subname & // ' ERROR: ndte must be 2 or higher for this kernel') - !$OMP PARALLEL PRIVATE(ksub) + ! 
tcraig, turn off the OMP directives here, Jan, 2022 + ! This produces non bit-for-bit results with different thread counts. + ! Seems like there isn't an opportunity for safe threading here ??? + !$XXXOMP PARALLEL PRIVATE(ksub) do ksub = 1, ndte - 1 call evp1d_stress(NA_len, ee, ne, se, 1, NA_len, uvel, & vvel, dxt, dyt, hte, htn, htem1, htnm1, strength, & @@ -1302,15 +1303,15 @@ subroutine ice_dyn_evp_1d_kernel stressm_2, stressm_3, stressm_4, stress12_1, & stress12_2, stress12_3, stress12_4, str1, str2, str3, & str4, str5, str6, str7, str8, skiptcell) - !$OMP BARRIER + !$XXXOMP BARRIER call evp1d_stepu(NA_len, rhow, 1, NA_len, cdn_ocn, aiu, & uocn, vocn, forcex, forcey, umassdti, fm, uarear, Tbu, & uvel_init, vvel_init, uvel, vvel, str1, str2, str3, & str4, str5, str6, str7, str8, nw, sw, sse, skipucell) - !$OMP BARRIER + !$XXXOMP BARRIER call evp1d_halo_update(NAVEL_len, 1, NA_len, uvel, vvel, & halo_parent) - !$OMP BARRIER + !$XXXOMP BARRIER end do call evp1d_stress(NA_len, ee, ne, se, 1, NA_len, uvel, vvel, & @@ -1319,16 +1320,16 @@ subroutine ice_dyn_evp_1d_kernel stressm_3, stressm_4, stress12_1, stress12_2, stress12_3, & stress12_4, str1, str2, str3, str4, str5, str6, str7, & str8, skiptcell, tarear, divu, rdg_conv, rdg_shear, shear) - !$OMP BARRIER + !$XXXOMP BARRIER call evp1d_stepu(NA_len, rhow, 1, NA_len, cdn_ocn, aiu, uocn, & vocn, forcex, forcey, umassdti, fm, uarear, Tbu, & uvel_init, vvel_init, uvel, vvel, str1, str2, str3, str4, & str5, str6, str7, str8, nw, sw, sse, skipucell, strintx, & strinty, taubx, tauby) - !$OMP BARRIER + !$XXXOMP BARRIER call evp1d_halo_update(NAVEL_len, 1, NA_len, uvel, vvel, & halo_parent) - !$OMP END PARALLEL + !$XXXOMP END PARALLEL end if ! 
master task diff --git a/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 b/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 old mode 100755 new mode 100644 index 5fe524b32..8301a967b --- a/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 @@ -201,7 +201,7 @@ subroutine init_dyn (dt) allocate(fcorN_blk(nx_block,ny_block,max_blocks)) endif - !$OMP PARALLEL DO PRIVATE(iblk,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block do i = 1, nx_block @@ -696,7 +696,6 @@ end subroutine dyn_prep2 subroutine stepu (nx_block, ny_block, & icellu, Cw, & indxui, indxuj, & - ksub, & aiu, str, & uocn, vocn, & waterx, watery, & @@ -711,8 +710,7 @@ subroutine stepu (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - icellu, & ! total count when iceumask is true - ksub ! subcycling iteration + icellu ! total count when iceumask is true integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & indxui , & ! compressed index in i-direction @@ -746,7 +744,7 @@ subroutine stepu (nx_block, ny_block, & taubx , & ! seabed stress, x-direction (N/m^2) tauby ! seabed stress, y-direction (N/m^2) - real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & + real (kind=dbl_kind), dimension (nx_block,ny_block), intent(in) :: & Cw ! ocean-ice neutral drag coefficient ! local variables @@ -810,14 +808,10 @@ subroutine stepu (nx_block, ny_block, & uvel(i,j) = (cca*cc1 + ccb*cc2) / ab2 ! m/s vvel(i,j) = (cca*cc2 - ccb*cc1) / ab2 - ! calculate seabed stress component for outputs - if (ksub == ndte) then ! on last subcycling iteration - if ( seabed_stress ) then - taubx(i,j) = -uvel(i,j)*Tbu(i,j) / (sqrt(uold**2 + vold**2) + u0) - tauby(i,j) = -vvel(i,j)*Tbu(i,j) / (sqrt(uold**2 + vold**2) + u0) - endif - endif - + ! calculate seabed stress component for outputs + ! only needed on last iteration. 
+ taubx(i,j) = -uvel(i,j)*Cb + tauby(i,j) = -vvel(i,j)*Cb enddo ! ij end subroutine stepu @@ -829,7 +823,7 @@ end subroutine stepu subroutine step_vel (nx_block, ny_block, & icell, Cw, & indxi, indxj, & - ksub, aiu, & + aiu, & uocn, vocn, & waterx, watery, & forcex, forcey, & @@ -842,8 +836,7 @@ subroutine step_vel (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - icell, & ! total count when ice[en]mask is true - ksub ! subcycling iteration + icell ! total count when ice[en]mask is true integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & indxi , & ! compressed index in i-direction @@ -933,10 +926,9 @@ subroutine step_vel (nx_block, ny_block, & vvel(i,j) = (cca*cc2 - ccb*cc1) / ab2 ! calculate seabed stress component for outputs - if (ksub == ndte .and. seabed_stress) then ! on last subcycling iteration - taubx(i,j) = -uvel(i,j)*Tb(i,j) / ccc - tauby(i,j) = -vvel(i,j)*Tb(i,j) / ccc - endif + ! only needed on last iteration. + taubx(i,j) = -uvel(i,j)*Cb + tauby(i,j) = -vvel(i,j)*Cb enddo ! ij @@ -949,7 +941,7 @@ end subroutine step_vel subroutine stepu_Cgrid (nx_block, ny_block, & icell, Cw, & indxi, indxj, & - ksub, aiu, & + aiu, & uocn, vocn, & waterx, forcex, & massdti, fm, & @@ -960,8 +952,7 @@ subroutine stepu_Cgrid (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - icell, & ! total count when ice[en]mask is true - ksub ! subcycling iteration + icell ! total count when ice[en]mask is true integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & indxi , & ! compressed index in i-direction @@ -1036,9 +1027,8 @@ subroutine stepu_Cgrid (nx_block, ny_block, & uvel(i,j) = (ccb*vold + cc1) / cca ! m/s ! calculate seabed stress component for outputs - if (ksub == ndte .and. seabed_stress) then ! on last subcycling iteration - taubx(i,j) = -uvel(i,j)*Tb(i,j) / ccc - endif + ! only needed on last iteration. 
+ taubx(i,j) = -uvel(i,j)*Cb enddo ! ij @@ -1051,7 +1041,7 @@ end subroutine stepu_Cgrid subroutine stepv_Cgrid (nx_block, ny_block, & icell, Cw, & indxi, indxj, & - ksub, aiu, & + aiu, & uocn, vocn, & watery, forcey, & massdti, fm, & @@ -1062,8 +1052,7 @@ subroutine stepv_Cgrid (nx_block, ny_block, & integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - icell, & ! total count when ice[en]mask is true - ksub ! subcycling iteration + icell ! total count when ice[en]mask is true integer (kind=int_kind), dimension (nx_block*ny_block), intent(in) :: & indxi , & ! compressed index in i-direction @@ -1138,9 +1127,8 @@ subroutine stepv_Cgrid (nx_block, ny_block, & vvel(i,j) = (-ccb*uold + cc2) / cca ! calculate seabed stress component for outputs - if (ksub == ndte .and. seabed_stress) then ! on last subcycling iteration - tauby(i,j) = -vvel(i,j)*Tb(i,j) / ccc - endif + ! only needed on last iteration. + tauby(i,j) = -vvel(i,j)*Cb enddo ! ij @@ -1159,9 +1147,7 @@ subroutine dyn_finish (nx_block, ny_block, & uvel, vvel, & uocn, vocn, & aiu, fm, & - strintx, strinty, & - strairx, strairy, & - strocnx, strocny) + strocnx, strocny) integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions @@ -1177,24 +1163,21 @@ subroutine dyn_finish (nx_block, ny_block, & uocn , & ! ocean current, x-direction (m/s) vocn , & ! ocean current, y-direction (m/s) aiu , & ! ice fraction on u-grid - fm , & ! Coriolis param. * mass in U-cell (kg/s) - strintx , & ! divergence of internal ice stress, x (N/m^2) - strinty , & ! divergence of internal ice stress, y (N/m^2) - strairx , & ! stress on ice by air, x-direction - strairy ! stress on ice by air, y-direction + fm ! Coriolis param. * mass in U-cell (kg/s) real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & strocnx , & ! ice-ocean stress, x-direction strocny ! ice-ocean stress, y-direction + real (kind=dbl_kind), dimension (nx_block,ny_block), intent(in) :: & + Cw ! 
ocean-ice neutral drag coefficient + ! local variables integer (kind=int_kind) :: & i, j, ij real (kind=dbl_kind) :: vrel, rhow - real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & - Cw ! ocean-ice neutral drag coefficient character(len=*), parameter :: subname = '(dyn_finish)' @@ -1277,13 +1260,14 @@ subroutine seabed_stress_factor_LKD (nx_block, ny_block, & Tbu ! seabed stress factor at 'grid_location' (N/m^2) character(len=*), optional, intent(inout) :: & - grid_location ! grid location (U, E, N), U assumed if not present + grid_location ! grid location (U, E, N), U assumed if not present real (kind=dbl_kind) :: & - au, & ! concentration of ice at 'grid_location' - hu, & ! volume per unit area of ice at 'grid_location' (mean thickness, m) - hwu, & ! water depth at 'grid_location' (m) - hcu ! critical thickness at 'grid_location' (m) + au , & ! concentration of ice at u location + hu , & ! volume per unit area of ice at u location (mean thickness, m) + hwu , & ! water depth at u location (m) + docalc_tbu, & ! logical as real (c0,c1) that decides whether Tbu is calculated + hcu ! critical thickness at u location (m) integer (kind=int_kind) :: & i, j, ij @@ -1308,18 +1292,17 @@ subroutine seabed_stress_factor_LKD (nx_block, ny_block, & hwu = grid_neighbor_min(hwater, i, j, l_grid_location) - if (hwu < threshold_hw) then + docalc_tbu = merge(c1,c0,hwu < threshold_hw) + - au = grid_neighbor_max(aice, i, j, l_grid_location) - hu = grid_neighbor_max(vice, i, j, l_grid_location) + au = grid_neighbor_max(aice, i, j, l_grid_location) + hu = grid_neighbor_max(vice, i, j, l_grid_location) - ! 1- calculate critical thickness - hcu = au * hwu / k1 + ! 1- calculate critical thickness + hcu = au * hwu / k1 - ! 2- calculate seabed stress factor - Tbu(i,j) = k2 * max(c0,(hu - hcu)) * exp(-alphab * (c1 - au)) - - endif + ! 2- calculate seabed stress factor + Tbu(i,j) = docalc_tbu*k2 * max(c0,(hu - hcu)) * exp(-alphab * (c1 - au)) enddo ! 
ij @@ -2166,7 +2149,7 @@ subroutine viscous_coeffs_and_rep_pressure_T (strength, DminTarea, & rep_prs = (c1-Ktens)*tmpcalc*Delta etax2 = epp2i*zetax2 - end subroutine viscous_coeffs_and_rep_pressure_T + end subroutine viscous_coeffs_and_rep_pressure_T subroutine viscous_coeffs_and_rep_pressure_T2U (zetax2T_00, zetax2T_01, & @@ -2291,7 +2274,7 @@ subroutine stack_velocity_field(uvel, vvel, fld2) character(len=*), parameter :: subname = '(stack_velocity_field)' ! load velocity into array for boundary updates - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks fld2(:,:,1,iblk) = uvel(:,:,iblk) fld2(:,:,2,iblk) = vvel(:,:,iblk) @@ -2323,7 +2306,7 @@ subroutine unstack_velocity_field(fld2, uvel, vvel) character(len=*), parameter :: subname = '(unstack_velocity_field)' ! Unload velocity from array after boundary updates - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks uvel(:,:,iblk) = fld2(:,:,1,iblk) vvel(:,:,iblk) = fld2(:,:,2,iblk) diff --git a/cicecore/cicedynB/dynamics/ice_dyn_vp.F90 b/cicecore/cicedynB/dynamics/ice_dyn_vp.F90 index 61720d2eb..5d414c204 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_vp.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_vp.F90 @@ -328,7 +328,7 @@ subroutine implicit_solver (dt) ! tcraig, tcx, threading here leads to some non-reproducbile results and failures in icepack_ice_strength ! need to do more debugging - !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) + !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,ij,i,j) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -425,34 +425,34 @@ subroutine implicit_solver (dt) !----------------------------------------------------------------- ! seabed stress factor Tbu (Tbu is part of Cb coefficient) !----------------------------------------------------------------- - if (seabed_stress) then - - ! 
tcraig, evp omp causes abort on cheyenne with pgi, turn off here too - !$TCXOMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - - if ( seabed_stress_method == 'LKD' ) then - + if ( seabed_stress_method == 'LKD' ) then + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks call seabed_stress_factor_LKD (nx_block, ny_block, & icellu (iblk), & indxui(:,iblk), indxuj(:,iblk), & vice(:,:,iblk), aice(:,:,iblk), & hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + !$OMP END PARALLEL DO - elseif ( seabed_stress_method == 'probabilistic' ) then + elseif ( seabed_stress_method == 'probabilistic' ) then + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks call seabed_stress_factor_prob (nx_block, ny_block, & icellt(iblk), indxti(:,iblk), indxtj(:,iblk), & icellu(iblk), indxui(:,iblk), indxuj(:,iblk), & aicen(:,:,:,iblk), vicen(:,:,:,iblk), & hwater(:,:,iblk), Tbu(:,:,iblk)) - endif + enddo + !$OMP END PARALLEL DO - enddo - !$TCXOMP END PARALLEL DO + endif endif + !----------------------------------------------------------------- ! calc size of problem (ntot) and allocate solution vector !----------------------------------------------------------------- @@ -627,8 +627,8 @@ subroutine implicit_solver (dt) uvel (:,:,iblk), vvel (:,:,iblk), & uocnU (:,:,iblk), vocnU (:,:,iblk), & aiu (:,:,iblk), fm (:,:,iblk), & - strintx (:,:,iblk), strinty (:,:,iblk), & - strairx (:,:,iblk), strairy (:,:,iblk), & +! strintx (:,:,iblk), strinty (:,:,iblk), & +! strairx (:,:,iblk), strairy (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk)) enddo @@ -830,9 +830,9 @@ subroutine anderson_solver (icellt , icellu, & !----------------------------------------------------------------- ! 
Calc zetax2, etax2, dPr/dx, dPr/dy, Cb and vrel = f(uprev_k, vprev_k) !----------------------------------------------------------------- - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,stress_Pr) do iblk = 1, nblocks - + if (use_mean_vrel) then ulin(:,:,iblk) = p5*uprev_k(:,:,iblk) + p5*uvel(:,:,iblk) vlin(:,:,iblk) = p5*vprev_k(:,:,iblk) + p5*vvel(:,:,iblk) @@ -928,7 +928,7 @@ subroutine anderson_solver (icellt , icellu, & ! Prepare diagonal for preconditioner if (precond == 'diag' .or. precond == 'pgmres') then - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,diag_rheo) do iblk = 1, nblocks ! first compute diagonal contributions due to rheology term call formDiag_step1 (nx_block , ny_block , & @@ -1207,8 +1207,6 @@ subroutine calc_zeta_dPr (nx_block, ny_block, & character(len=*), parameter :: subname = '(calc_zeta_dPr)' - ! Initialize - ! Initialize stPr, zetax2 and etax2 to zero ! (for cells where icetmask is false) stPr = c0 @@ -2864,7 +2862,7 @@ subroutine fgmres (zetax2 , etax2 , & ! Normalize the first Arnoldi vector inverse_norm = c1 / norm_residual - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -2960,7 +2958,7 @@ subroutine fgmres (zetax2 , etax2 , & if (.not. almost_zero( hessenberg(nextit,initer) ) ) then ! Normalize next Arnoldi vector inverse_norm = c1 / hessenberg(nextit,initer) - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3026,7 +3024,7 @@ subroutine fgmres (zetax2 , etax2 , & ! 
Form linear combination to get new solution iterate do it = 1, initer t = rhs_hess(it) - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3070,7 +3068,7 @@ subroutine fgmres (zetax2 , etax2 , & workspace_x = c0 workspace_y = c0 do it = 1, nextit - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3257,7 +3255,7 @@ subroutine pgmres (zetax2 , etax2 , & ! Normalize the first Arnoldi vector inverse_norm = c1 / norm_residual - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3342,7 +3340,7 @@ subroutine pgmres (zetax2 , etax2 , & if (.not. almost_zero( hessenberg(nextit,initer) ) ) then ! Normalize next Arnoldi vector inverse_norm = c1 / hessenberg(nextit,initer) - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3410,7 +3408,7 @@ subroutine pgmres (zetax2 , etax2 , & workspace_y = c0 do it = 1, initer t = rhs_hess(it) - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3466,7 +3464,7 @@ subroutine pgmres (zetax2 , etax2 , & workspace_x = c0 workspace_y = c0 do it = 1, nextit - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3547,7 +3545,7 @@ subroutine precondition(zetax2 , etax2, & wx = vx wy = vy elseif (precond_type == 'diag') then ! 
Jacobi preconditioner (diagonal) - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3630,7 +3628,7 @@ subroutine orthogonalize(ortho_type , initer , & do it = 1, initer local_dot = c0 - !$OMP PARALLEL DO PRIVATE(iblk, ij, i, j) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3650,7 +3648,7 @@ subroutine orthogonalize(ortho_type , initer , & ! Second loop of Gram-Schmidt (orthonormalize) do it = 1, initer - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3669,7 +3667,7 @@ subroutine orthogonalize(ortho_type , initer , & do it = 1, initer local_dot = c0 - !$OMP PARALLEL DO PRIVATE(iblk, ij, i, j) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) @@ -3684,7 +3682,7 @@ subroutine orthogonalize(ortho_type , initer , & hessenberg(it,initer) = global_sum(sum(local_dot), distrb_info) - !$OMP PARALLEL DO PRIVATE(iblk, ij, i, j) + !$OMP PARALLEL DO PRIVATE(iblk,ij,i,j) do iblk = 1, nblocks do ij = 1, icellu(iblk) i = indxui(ij, iblk) diff --git a/cicecore/cicedynB/dynamics/ice_transport_driver.F90 b/cicecore/cicedynB/dynamics/ice_transport_driver.F90 index dc6425adb..38650459f 100644 --- a/cicecore/cicedynB/dynamics/ice_transport_driver.F90 +++ b/cicecore/cicedynB/dynamics/ice_transport_driver.F90 @@ -17,6 +17,7 @@ module ice_transport_driver use ice_constants, only: c0, c1, p5, & field_loc_center, & field_type_scalar, field_type_vector, & + field_loc_NEcorner, & field_loc_Nface, field_loc_Eface use ice_fileunits, only: nu_diag use ice_exit, only: abort_ice @@ -355,7 +356,7 @@ subroutine transport_remap (dt) ! Here we assume that aice0 is up to date. !------------------------------------------------------------------- -! !$OMP PARALLEL DO PRIVATE(i,j,iblk) +! 
!$OMP PARALLEL DO PRIVATE(i,j,iblk) SCHEDULE(runtime) ! do iblk = 1, nblocks ! do j = 1, ny_block ! do i = 1, nx_block @@ -397,8 +398,7 @@ subroutine transport_remap (dt) ! call ice_timer_stop(timer_bound) -! MHRI: CHECK THIS OMP ... maybe ok: Were trcrn(:,:,1:ntrcr,:,iblk) in my testcode - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks !------------------------------------------------------------------- @@ -471,7 +471,7 @@ subroutine transport_remap (dt) tmin(:,:,:,:,:) = c0 tmax(:,:,:,:,:) = c0 - !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n) SCHEDULE(runtime) do iblk = 1, nblocks this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -516,7 +516,7 @@ subroutine transport_remap (dt) field_loc_center, field_type_scalar) call ice_timer_stop(timer_bound) - !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n) SCHEDULE(runtime) do iblk = 1, nblocks this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -561,8 +561,7 @@ subroutine transport_remap (dt) ! Given new fields, recompute state variables. !------------------------------------------------------------------- -! MHRI: CHECK THIS OMP ... 
maybe ok: Were trcrn(:,:,1:ntrcr,:,iblk) in my testcode - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call tracers_to_state (nx_block, ny_block, & @@ -666,7 +665,7 @@ subroutine transport_remap (dt) !------------------------------------------------------------------- if (l_monotonicity_check) then - !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n,ckflag,istop,jstop) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n,ckflag,istop,jstop) SCHEDULE(runtime) do iblk = 1, nblocks this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -720,7 +719,7 @@ subroutine transport_upwind (dt) use ice_state, only: aice0, aicen, vicen, vsnon, trcrn, & uvel, vvel, trcr_depend, bound_state, trcr_base, & n_trcr_strata, nt_strata, uvelE, vvelN - use ice_grid, only: HTE, HTN, tarea, grid_ice + use ice_grid, only: HTE, HTN, tarea, tmask, grid_ice use ice_timers, only: ice_timer_start, ice_timer_stop, & timer_bound, timer_advect @@ -768,38 +767,46 @@ subroutine transport_upwind (dt) ! vicen, vsnon, & ! ntrcr, trcrn) +! call ice_timer_start(timer_bound) +! call ice_HaloUpdate (uvel, halo_info, & +! field_loc_NEcorner, field_type_vector) +! call ice_HaloUpdate (vvel, halo_info, & +! field_loc_NEcorner, field_type_vector) +! call ice_timer_stop(timer_bound) + !------------------------------------------------------------------- ! Average corner velocities to edges. !------------------------------------------------------------------- if (grid_ice == 'CD' .or. 
grid_ice == 'C') then - uee(:,:,:)=uvelE(:,:,:) - vnn(:,:,:)=vvelN(:,:,:) + uee(:,:,:)=uvelE(:,:,:) + vnn(:,:,:)=vvelN(:,:,:) else - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) - do iblk = 1, nblocks - this_block = get_block(blocks_ice(iblk),iblk) - ilo = this_block%ilo - ihi = this_block%ihi - jlo = this_block%jlo - jhi = this_block%jhi - - do j = jlo, jhi - do i = ilo, ihi - uee(i,j,iblk) = p5*(uvel(i,j,iblk) + uvel(i,j-1,iblk)) - vnn(i,j,iblk) = p5*(vvel(i,j,iblk) + vvel(i-1,j,iblk)) - enddo - enddo - enddo - !$OMP END PARALLEL DO - - call ice_timer_start(timer_bound) - call ice_HaloUpdate (uee, halo_info, & - field_loc_Eface, field_type_vector) - call ice_HaloUpdate (vnn, halo_info, & - field_loc_Nface, field_type_vector) - call ice_timer_stop(timer_bound) + !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) SCHEDULE(runtime) + do iblk = 1, nblocks + this_block = get_block(blocks_ice(iblk),iblk) + ilo = this_block%ilo + ihi = this_block%ihi + jlo = this_block%jlo + jhi = this_block%jhi + + do j = jlo, jhi + do i = ilo, ihi + uee(i,j,iblk) = p5*(uvel(i,j,iblk) + uvel(i,j-1,iblk)) + vnn(i,j,iblk) = p5*(vvel(i,j,iblk) + vvel(i-1,j,iblk)) + enddo + enddo + enddo + !$OMP END PARALLEL DO + + call ice_timer_start(timer_bound) + call ice_HaloUpdate (uee, halo_info, & + field_loc_Eface, field_type_vector) + call ice_HaloUpdate (vnn, halo_info, & + field_loc_Nface, field_type_vector) + call ice_timer_stop(timer_bound) endif - !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) + + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) SCHEDULE(runtime) do iblk = 1, nblocks this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -839,6 +846,7 @@ subroutine transport_upwind (dt) ntrcr, narr, & trcr_depend(:), trcr_base(:,:), & n_trcr_strata(:), nt_strata(:,:), & + tmask(:,:, iblk), & aicen(:,:, :,iblk), trcrn (:,:,:,:,iblk), & vicen(:,:, :,iblk), vsnon (:,:, :,iblk), & aice0(:,:, iblk), works (:,:, :,iblk)) @@ -1643,6 
+1651,7 @@ subroutine work_to_state (nx_block, ny_block, & trcr_base, & n_trcr_strata, & nt_strata, & + tmask, & aicen, trcrn, & vicen, vsnon, & aice0, works) @@ -1665,6 +1674,9 @@ subroutine work_to_state (nx_block, ny_block, & integer (kind=int_kind), dimension (ntrcr,2), intent(in) :: & nt_strata ! indices of underlying tracer layers + logical (kind=log_kind), intent (in) :: & + tmask (nx_block,ny_block) + real (kind=dbl_kind), intent (in) :: & works (nx_block,ny_block,narr) @@ -1684,6 +1696,7 @@ subroutine work_to_state (nx_block, ny_block, & integer (kind=int_kind) :: & i, j, ij, n ,&! counting indices narrays ,&! counter for number of state variable arrays + nt_Tsfc ,&! Tsfc tracer number icells ! number of ocean/ice cells integer (kind=int_kind), dimension (nx_block*ny_block) :: & @@ -1694,6 +1707,11 @@ subroutine work_to_state (nx_block, ny_block, & character(len=*), parameter :: subname = '(work_to_state)' + call icepack_query_tracer_indices(nt_Tsfc_out=nt_Tsfc) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & + file=__FILE__, line=__LINE__) + ! for call to compute_tracers icells = 0 do j = 1, ny_block @@ -1736,7 +1754,14 @@ subroutine work_to_state (nx_block, ny_block, & n_trcr_strata = n_trcr_strata(:), & nt_strata = nt_strata(:,:), & trcrn = trcrn(i,j,:,n)) + + ! tcraig, don't let land points get non-zero Tsfc + if (.not.tmask(i,j)) then + trcrn(i,j,nt_Tsfc,n) = c0 + endif + enddo + narrays = narrays + ntrcr enddo ! ncat diff --git a/cicecore/cicedynB/dynamics/ice_transport_remap.F90 b/cicecore/cicedynB/dynamics/ice_transport_remap.F90 index 922b3f06b..6f35b2da8 100644 --- a/cicecore/cicedynB/dynamics/ice_transport_remap.F90 +++ b/cicecore/cicedynB/dynamics/ice_transport_remap.F90 @@ -270,7 +270,7 @@ subroutine init_remap ! Note: On a rectangular grid, the integral of any odd function ! of x or y = 0. 
- !$OMP PARALLEL DO PRIVATE(iblk,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block do i = 1, nx_block @@ -462,10 +462,9 @@ subroutine horizontal_remap (dt, ntrace, & !---! Remap the open water area (without tracers). !---!------------------------------------------------------------------- - !--- tcraig, tcx, this omp loop leads to a seg fault in gnu - !--- need to check private variables and debug further - !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n,m, & - !$TCXOMP indxinc,indxjnc,mmask,tmask,istop,jstop,l_stop) + !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block,n, & + !$OMP indxinc,indxjnc,mmask,tmask,istop,jstop,l_stop) & + !$OMP SCHEDULE(runtime) do iblk = 1, nblocks l_stop = .false. @@ -567,7 +566,7 @@ subroutine horizontal_remap (dt, ntrace, & endif enddo ! iblk - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO !------------------------------------------------------------------- ! Ghost cell updates @@ -595,7 +594,7 @@ subroutine horizontal_remap (dt, ntrace, & ! tracer fields if (maskhalo_remap) then halomask(:,:,:) = 0 - !$OMP PARALLEL DO PRIVATE(iblk,this_block,ilo,ihi,jlo,jhi,n,m,j,i) + !$OMP PARALLEL DO PRIVATE(iblk,this_block,ilo,ihi,jlo,jhi,n,m,j,i) SCHEDULE(runtime) do iblk = 1, nblocks this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -639,12 +638,11 @@ subroutine horizontal_remap (dt, ntrace, & endif ! 
nghost - !--- tcraig, tcx, this omp loop leads to a seg fault in gnu - !--- need to check private variables and debug further - !$TCXOMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n,m, & - !$TCXOMP edgearea_e,edgearea_n,edge,iflux,jflux, & - !$TCXOMP xp,yp,indxing,indxjng,mflxe,mflxn, & - !$TCXOMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) + !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n, & + !$OMP edgearea_e,edgearea_n,edge,iflux,jflux, & + !$OMP xp,yp,indxing,indxjng,mflxe,mflxn, & + !$OMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) & + !$OMP SCHEDULE(runtime) do iblk = 1, nblocks l_stop = .false. @@ -867,7 +865,7 @@ subroutine horizontal_remap (dt, ntrace, & enddo ! n enddo ! iblk - !$TCXOMP END PARALLEL DO + !$OMP END PARALLEL DO end subroutine horizontal_remap @@ -2984,17 +2982,17 @@ subroutine locate_triangles (nx_block, ny_block, & i = indxid(ij) j = indxjd(ij) if (abs(areasum(i,j) - edgearea(i,j)) > eps13*areafac_c(i,j)) then - print*, '' - print*, 'Areas do not add up: m, i, j, edge =', & + write(nu_diag,*) '' + write(nu_diag,*) 'Areas do not add up: m, i, j, edge =', & my_task, i, j, trim(edge) - print*, 'edgearea =', edgearea(i,j) - print*, 'areasum =', areasum(i,j) - print*, 'areafac_c =', areafac_c(i,j) - print*, '' - print*, 'Triangle areas:' + write(nu_diag,*) 'edgearea =', edgearea(i,j) + write(nu_diag,*) 'areasum =', areasum(i,j) + write(nu_diag,*) 'areafac_c =', areafac_c(i,j) + write(nu_diag,*) '' + write(nu_diag,*) 'Triangle areas:' do ng = 1, ngroups ! not vector friendly if (abs(triarea(i,j,ng)) > eps16*abs(areafact(i,j,ng))) then - print*, ng, triarea(i,j,ng) + write(nu_diag,*) ng, triarea(i,j,ng) endif enddo endif @@ -3051,18 +3049,18 @@ subroutine locate_triangles (nx_block, ny_block, & do i = ib, ie if (abs(triarea(i,j,ng)) > puny) then if (abs(xp(i,j,nv,ng)) > p5+puny) then - print*, '' - print*, 'WARNING: xp =', xp(i,j,nv,ng) - print*, 'm, i, j, ng, nv =', my_task, i, j, ng, nv -! 
print*, 'yil,xdl,xcl,ydl=',yil,xdl,xcl,ydl -! print*, 'yir,xdr,xcr,ydr=',yir,xdr,xcr,ydr -! print*, 'ydm=',ydm + write(nu_diag,*) '' + write(nu_diag,*) 'WARNING: xp =', xp(i,j,nv,ng) + write(nu_diag,*) 'm, i, j, ng, nv =', my_task, i, j, ng, nv +! write(nu_diag,*) 'yil,xdl,xcl,ydl=',yil,xdl,xcl,ydl +! write(nu_diag,*) 'yir,xdr,xcr,ydr=',yir,xdr,xcr,ydr +! write(nu_diag,*) 'ydm=',ydm ! stop endif if (abs(yp(i,j,nv,ng)) > p5+puny) then - print*, '' - print*, 'WARNING: yp =', yp(i,j,nv,ng) - print*, 'm, i, j, ng, nv =', my_task, i, j, ng, nv + write(nu_diag,*) '' + write(nu_diag,*) 'WARNING: yp =', yp(i,j,nv,ng) + write(nu_diag,*) 'm, i, j, ng, nv =', my_task, i, j, ng, nv endif endif ! triarea enddo diff --git a/cicecore/cicedynB/general/ice_init.F90 b/cicecore/cicedynB/general/ice_init.F90 index e6a27c96f..4c7817a3d 100644 --- a/cicecore/cicedynB/general/ice_init.F90 +++ b/cicecore/cicedynB/general/ice_init.F90 @@ -120,6 +120,7 @@ subroutine input_data damping_andacc, start_andacc, use_mean_vrel, ortho_type use ice_transport_driver, only: advection, conserv_check use ice_restoring, only: restore_ice + use ice_timers, only: timer_stats #ifdef CESMCOUPLED use shr_file_mod, only: shr_file_setIO #endif @@ -178,7 +179,7 @@ subroutine input_data print_global, print_points, latpnt, lonpnt, & debug_forcing, histfreq, histfreq_n, hist_avg, & history_dir, history_file, history_precision, cpl_bgc, & - histfreq_base, dumpfreq_base, & + histfreq_base, dumpfreq_base, timer_stats, & conserv_check, debug_model, debug_model_step, & debug_model_i, debug_model_j, debug_model_iblk, debug_model_task, & year_init, month_init, day_init, sec_init, & @@ -298,6 +299,7 @@ subroutine input_data debug_model_task = -1 ! debug model local task number print_points = .false. ! if true, print point data print_global = .true. ! if true, print global diagnostic data + timer_stats = .false. ! if true, print out detailed timer statistics bfbflag = 'off' ! 
off = optimized diag_type = 'stdout' diag_file = 'ice_diag.d' @@ -565,53 +567,154 @@ subroutine input_data nml_filename = 'ice_in'//trim(inst_suffix) #endif - call get_fileunit(nu_nml) - if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 - endif + call abort_ice(subname//'ERROR: open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) + endif + write(nu_diag,*) subname,' Reading setup_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: setup_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 do while (nml_error > 0) - print*,'Reading setup_nml' - read(nu_nml, nml=setup_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading grid_nml' - read(nu_nml, nml=grid_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading tracer_nml' - read(nu_nml, nml=tracer_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading thermo_nml' - read(nu_nml, nml=thermo_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading dynamics_nml' - read(nu_nml, nml=dynamics_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading shortwave_nml' - read(nu_nml, nml=shortwave_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading ponds_nml' - read(nu_nml, nml=ponds_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading snow_nml' - read(nu_nml, nml=snow_nml,iostat=nml_error) - if (nml_error /= 0) exit - print*,'Reading forcing_nml' - read(nu_nml, nml=forcing_nml,iostat=nml_error) - if (nml_error /= 0) exit + read(nu_nml, nml=setup_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - call abort_ice(subname//'ERROR: reading namelist', & 
- file=__FILE__, line=__LINE__) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: setup_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading grid_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: grid_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=grid_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: grid_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading tracer_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: tracer_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=tracer_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: tracer_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading thermo_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: thermo_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=thermo_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: thermo_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading dynamics_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: dynamics_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=dynamics_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: dynamics_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading shortwave_nml' + rewind(unit=nu_nml, 
iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: shortwave_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=shortwave_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: shortwave_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading ponds_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: ponds_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=ponds_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: ponds_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading snow_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: snow_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=snow_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: snow_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + write(nu_diag,*) subname,' Reading forcing_nml' + rewind(unit=nu_nml, iostat=nml_error) + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: forcing_nml rewind ', & + file=__FILE__, line=__LINE__) + endif + nml_error = 1 + do while (nml_error > 0) + read(nu_nml, nml=forcing_nml,iostat=nml_error) + end do + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: forcing_nml reading ', & + file=__FILE__, line=__LINE__) + endif + + close(nu_nml) + call release_fileunit(nu_nml) endif - call release_fileunit(nu_nml) !----------------------------------------------------------------- ! 
set up diagnostics output and resolve conflicts @@ -676,6 +779,7 @@ subroutine input_data call broadcast_scalar(debug_model_task, master_task) call broadcast_scalar(print_points, master_task) call broadcast_scalar(print_global, master_task) + call broadcast_scalar(timer_stats, master_task) call broadcast_scalar(bfbflag, master_task) call broadcast_scalar(diag_type, master_task) call broadcast_scalar(diag_file, master_task) @@ -2037,6 +2141,7 @@ subroutine input_data write(nu_diag,1021) ' debug_model_i = ', debug_model_j write(nu_diag,1021) ' debug_model_iblk = ', debug_model_iblk write(nu_diag,1021) ' debug_model_task = ', debug_model_task + write(nu_diag,1011) ' timer_stats = ', timer_stats write(nu_diag,1031) ' bfbflag = ', trim(bfbflag) write(nu_diag,1021) ' numin = ', numin write(nu_diag,1021) ' numax = ', numax @@ -2466,7 +2571,6 @@ subroutine init_state ! Set state variables !----------------------------------------------------------------- -!MHRI: CHECK THIS OMP !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block, & !$OMP iglob,jglob) do iblk = 1, nblocks @@ -2703,7 +2807,11 @@ subroutine set_state_var (nx_block, ny_block, & aicen(i,j,n) = c0 vicen(i,j,n) = c0 vsnon(i,j,n) = c0 - trcrn(i,j,nt_Tsfc,n) = Tf(i,j) ! surface temperature + if (tmask(i,j)) then + trcrn(i,j,nt_Tsfc,n) = Tf(i,j) ! surface temperature + else + trcrn(i,j,nt_Tsfc,n) = c0 ! at land grid cells (for clean history/restart files) + endif if (ntrcr >= 2) then do it = 2, ntrcr trcrn(i,j,it,n) = c0 diff --git a/cicecore/cicedynB/general/ice_step_mod.F90 b/cicecore/cicedynB/general/ice_step_mod.F90 index 46a9c9389..3b0201cbf 100644 --- a/cicecore/cicedynB/general/ice_step_mod.F90 +++ b/cicecore/cicedynB/general/ice_step_mod.F90 @@ -121,7 +121,7 @@ subroutine prep_radiation (iblk) character(len=*), parameter :: subname = '(prep_radiation)' - call ice_timer_start(timer_sw) ! shortwave + call ice_timer_start(timer_sw,iblk) ! 
shortwave alvdr_init(:,:,iblk) = c0 alvdf_init(:,:,iblk) = c0 @@ -169,7 +169,7 @@ subroutine prep_radiation (iblk) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - call ice_timer_stop(timer_sw) ! shortwave + call ice_timer_stop(timer_sw,iblk) ! shortwave end subroutine prep_radiation @@ -739,7 +739,7 @@ subroutine update_state (dt, daidt, dvidt, dagedt, offset) use ice_state, only: aicen, trcrn, vicen, vsnon, & aice, trcr, vice, vsno, aice0, trcr_depend, & bound_state, trcr_base, nt_strata, n_trcr_strata - use ice_timers, only: ice_timer_start, ice_timer_stop, timer_bound + use ice_timers, only: ice_timer_start, ice_timer_stop, timer_bound, timer_updstate real (kind=dbl_kind), intent(in) :: & dt ! time step @@ -763,6 +763,7 @@ subroutine update_state (dt, daidt, dvidt, dagedt, offset) character(len=*), parameter :: subname='(update_state)' + call ice_timer_start(timer_updstate) call icepack_query_tracer_flags(tr_iage_out=tr_iage) call icepack_query_tracer_sizes(ntrcr_out=ntrcr) call icepack_query_tracer_indices(nt_iage_out=nt_iage) @@ -780,7 +781,7 @@ subroutine update_state (dt, daidt, dvidt, dagedt, offset) ntrcr, trcrn) call ice_timer_stop(timer_bound) - !$OMP PARALLEL DO PRIVATE(iblk,i,j) + !$OMP PARALLEL DO PRIVATE(iblk,i,j) SCHEDULE(runtime) do iblk = 1, nblocks do j = 1, ny_block do i = 1, nx_block @@ -834,6 +835,7 @@ subroutine update_state (dt, daidt, dvidt, dagedt, offset) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) + call ice_timer_stop(timer_updstate) end subroutine update_state @@ -1009,8 +1011,8 @@ subroutine step_dyn_ridge (dt, ndtd, iblk) ! 
Ridging !----------------------------------------------------------------- - call ice_timer_start(timer_column) - call ice_timer_start(timer_ridge) + call ice_timer_start(timer_column,iblk) + call ice_timer_start(timer_ridge,iblk) call icepack_query_tracer_sizes(ntrcr_out=ntrcr, nbtrcr_out=nbtrcr) call icepack_warnings_flush(nu_diag) @@ -1079,8 +1081,8 @@ subroutine step_dyn_ridge (dt, ndtd, iblk) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - call ice_timer_stop(timer_ridge) - call ice_timer_stop(timer_column) + call ice_timer_stop(timer_ridge,iblk) + call ice_timer_stop(timer_column,iblk) end subroutine step_dyn_ridge @@ -1267,7 +1269,7 @@ subroutine step_radiation (dt, iblk) character(len=*), parameter :: subname = '(step_radiation)' - call ice_timer_start(timer_sw) ! shortwave + call ice_timer_start(timer_sw,iblk) ! shortwave call icepack_query_tracer_sizes(ntrcr_out=ntrcr, & nbtrcr_out=nbtrcr, nbtrcr_sw_out=nbtrcr_sw) @@ -1386,7 +1388,7 @@ subroutine step_radiation (dt, iblk) deallocate(ztrcr_sw) deallocate(rsnow) - call ice_timer_stop(timer_sw) ! shortwave + call ice_timer_stop(timer_sw,iblk) ! shortwave end subroutine step_radiation @@ -1614,7 +1616,7 @@ subroutine biogeochemistry (dt, iblk) if (tr_brine .or. skl_bgc) then - call ice_timer_start(timer_bgc) ! biogeochemistry + call ice_timer_start(timer_bgc,iblk) ! biogeochemistry this_block = get_block(blocks_ice(iblk),iblk) ilo = this_block%ilo @@ -1707,7 +1709,7 @@ subroutine biogeochemistry (dt, iblk) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - call ice_timer_stop(timer_bgc) ! biogeochemistry + call ice_timer_stop(timer_bgc,iblk) ! biogeochemistry endif ! tr_brine .or. 
skl_bgc diff --git a/cicecore/cicedynB/infrastructure/comm/mpi/ice_timers.F90 b/cicecore/cicedynB/infrastructure/comm/mpi/ice_timers.F90 index 046cf9336..bc14e30d3 100644 --- a/cicecore/cicedynB/infrastructure/comm/mpi/ice_timers.F90 +++ b/cicecore/cicedynB/infrastructure/comm/mpi/ice_timers.F90 @@ -32,6 +32,9 @@ module ice_timers ice_timer_print_all, & ice_timer_check + logical(log_kind), public :: & + timer_stats ! controls printing of timer statistics + !----------------------------------------------------------------------- ! public timers !----------------------------------------------------------------------- @@ -62,8 +65,18 @@ module ice_timers timer_bgc, &! biogeochemistry timer_forcing, &! forcing timer_evp_1d, &! timer only loop - timer_evp_2d ! timer including conversion 1d/2d -! timer_tmp ! for temporary timings + timer_evp_2d, &! timer including conversion 1d/2d + timer_updstate ! update state +! timer_updstate, &! update state +! timer_tmp1, &! for temporary timings +! timer_tmp2, &! for temporary timings +! timer_tmp3, &! for temporary timings +! timer_tmp4, &! for temporary timings +! timer_tmp5, &! for temporary timings +! timer_tmp6, &! for temporary timings +! timer_tmp7, &! for temporary timings +! timer_tmp8, &! for temporary timings +! timer_tmp9 ! for temporary timings !----------------------------------------------------------------------- ! @@ -173,7 +186,7 @@ subroutine init_ice_timers ! call get_ice_timer(timer_ponds, 'Meltponds',nblocks,distrb_info%nprocs) call get_ice_timer(timer_ridge, 'Ridging', nblocks,distrb_info%nprocs) ! 
call get_ice_timer(timer_catconv, 'Cat Conv', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_fsd, 'Floe size',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_fsd, 'FloeSize', nblocks,distrb_info%nprocs) call get_ice_timer(timer_couple, 'Coupling', nblocks,distrb_info%nprocs) call get_ice_timer(timer_readwrite,'ReadWrite',nblocks,distrb_info%nprocs) call get_ice_timer(timer_diags, 'Diags ',nblocks,distrb_info%nprocs) @@ -187,9 +200,18 @@ subroutine init_ice_timers call get_ice_timer(timer_cplsend, 'Cpl-Send', nblocks,distrb_info%nprocs) call get_ice_timer(timer_sndrcv, 'Snd->Rcv', nblocks,distrb_info%nprocs) #endif - call get_ice_timer(timer_evp_1d, '1d-evp', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_evp_2d, '2d-evp', nblocks,distrb_info%nprocs) -! call get_ice_timer(timer_tmp, ' ',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp_1d, '1d-evp', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp_2d, '2d-evp', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_updstate, 'UpdState', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp1, 'tmp1', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp2, 'tmp2', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp3, 'tmp3', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp4, 'tmp4', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp5, 'tmp5', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp6, 'tmp6', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp7, 'tmp7', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp8, 'tmp8', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp9, 'tmp9', nblocks,distrb_info%nprocs) !----------------------------------------------------------------------- @@ -333,6 +355,7 @@ subroutine ice_timer_start(timer_id, block_id) character(len=*), parameter :: subname = '(ice_timer_start)' +! 
if (my_task == master_task) write(nu_diag,*) subname,trim(all_timers(timer_id)%name) !----------------------------------------------------------------------- ! ! if timer is defined, start it up @@ -433,6 +456,7 @@ subroutine ice_timer_stop(timer_id, block_id) character(len=*), parameter :: subname = '(ice_timer_stop)' +! if (my_task == master_task) write(nu_diag,*) subname,trim(all_timers(timer_id)%name) !----------------------------------------------------------------------- ! ! get end cycles diff --git a/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 b/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 index 4599de42e..b18c35040 100644 --- a/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 +++ b/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 @@ -30,6 +30,9 @@ module ice_timers ice_timer_print_all, & ice_timer_check + logical(log_kind), public :: & + timer_stats ! controls printing of timer statistics + !----------------------------------------------------------------------- ! public timers !----------------------------------------------------------------------- @@ -54,8 +57,18 @@ module ice_timers timer_bgc, &! biogeochemistry timer_forcing, &! forcing timer_evp_1d, &! timer only loop - timer_evp_2d ! timer including conversion 1d/2d -! timer_tmp ! for temporary timings + timer_evp_2d, &! timer including conversion 1d/2d + timer_updstate ! update state +! timer_updstate, &! update state +! timer_tmp1, &! for temporary timings +! timer_tmp2, &! for temporary timings +! timer_tmp3, &! for temporary timings +! timer_tmp4, &! for temporary timings +! timer_tmp5, &! for temporary timings +! timer_tmp6, &! for temporary timings +! timer_tmp7, &! for temporary timings +! timer_tmp8, &! for temporary timings +! timer_tmp9 ! for temporary timings !----------------------------------------------------------------------- ! @@ -187,7 +200,7 @@ subroutine init_ice_timers ! 
call get_ice_timer(timer_ponds, 'Meltponds',nblocks,distrb_info%nprocs) call get_ice_timer(timer_ridge, 'Ridging', nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_catconv, 'Cat Conv', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_fsd, 'Floe size',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_fsd, 'FloeSize', nblocks,distrb_info%nprocs) call get_ice_timer(timer_couple, 'Coupling', nblocks,distrb_info%nprocs) call get_ice_timer(timer_readwrite,'ReadWrite',nblocks,distrb_info%nprocs) call get_ice_timer(timer_diags, 'Diags ',nblocks,distrb_info%nprocs) @@ -197,7 +210,16 @@ subroutine init_ice_timers call get_ice_timer(timer_forcing, 'Forcing', nblocks,distrb_info%nprocs) call get_ice_timer(timer_evp_1d, '1d-evp', nblocks,distrb_info%nprocs) call get_ice_timer(timer_evp_2d, '2d-evp', nblocks,distrb_info%nprocs) -! call get_ice_timer(timer_tmp, ' ',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_updstate, 'UpdState', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp1, 'tmp1', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp2, 'tmp2', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp3, 'tmp3', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp4, 'tmp4', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp5, 'tmp5', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp6, 'tmp6', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp7, 'tmp7', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp8, 'tmp8', nblocks,distrb_info%nprocs) +! call get_ice_timer(timer_tmp9, 'tmp9', nblocks,distrb_info%nprocs) !----------------------------------------------------------------------- @@ -341,6 +363,8 @@ subroutine ice_timer_start(timer_id, block_id) character(len=*), parameter :: subname = '(ice_timer_start)' +! if (my_task == master_task) write(nu_diag,*) subname,trim(all_timers(timer_id)%name) + !----------------------------------------------------------------------- ! ! 
if timer is defined, start it up @@ -444,6 +468,8 @@ subroutine ice_timer_stop(timer_id, block_id) character(len=*), parameter :: subname = '(ice_timer_stop)' +! if (my_task == master_task) write(nu_diag,*) subname,trim(all_timers(timer_id)%name) + !----------------------------------------------------------------------- ! ! get end cycles diff --git a/cicecore/cicedynB/infrastructure/ice_domain.F90 b/cicecore/cicedynB/infrastructure/ice_domain.F90 index ee7d98b50..6f8fee49a 100644 --- a/cicecore/cicedynB/infrastructure/ice_domain.F90 +++ b/cicecore/cicedynB/infrastructure/ice_domain.F90 @@ -164,24 +164,27 @@ subroutine init_domain_blocks ny_global = -1 ! NYGLOB, j-axis size landblockelim = .true. ! on by default - call get_fileunit(nu_nml) if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading domain_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: domain_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=domain_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - call abort_ice(subname//'ERROR: error reading domain_nml') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: domain_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif call broadcast_scalar(nprocs, master_task) diff --git a/cicecore/cicedynB/infrastructure/ice_grid.F90 b/cicecore/cicedynB/infrastructure/ice_grid.F90 index 0b174a408..80e876571 100644 --- a/cicecore/cicedynB/infrastructure/ice_grid.F90 +++ b/cicecore/cicedynB/infrastructure/ice_grid.F90 @@ -437,6 +437,9 @@ subroutine 
init_grid2 field_loc_center, field_loc_NEcorner, field_loc_Nface, field_loc_Eface, & field_type_scalar, field_type_vector, field_type_angle use ice_domain_size, only: max_blocks +#if defined (_OPENMP) + use OMP_LIB +#endif integer (kind=int_kind) :: & i, j, iblk, & @@ -455,6 +458,11 @@ subroutine init_grid2 type (block) :: & this_block ! block information for current block +#if defined (_OPENMP) + integer(kind=omp_sched_kind) :: ompsk ! openmp schedule + integer(kind=int_kind) :: ompcs ! openmp schedule count +#endif + character(len=*), parameter :: subname = '(init_grid2)' !----------------------------------------------------------------- @@ -485,6 +493,29 @@ subroutine init_grid2 call rectgrid ! regular rectangular grid endif + !----------------------------------------------------------------- + ! Diagnose OpenMP thread schedule, force order in output + !----------------------------------------------------------------- + +#if defined (_OPENMP) + !$OMP PARALLEL DO ORDERED PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + if (my_task == master_task) then + !$OMP ORDERED + if (iblk == 1) then + call omp_get_schedule(ompsk,ompcs) + write(nu_diag,*) '' + write(nu_diag,*) subname,' OpenMP runtime thread schedule:' + write(nu_diag,*) subname,' omp schedule = ',ompsk,ompcs + endif + write(nu_diag,*) subname,' block, thread = ',iblk,OMP_GET_THREAD_NUM() + call flush_fileunit(nu_diag) + !$OMP END ORDERED + endif + enddo + !$OMP END PARALLEL DO +#endif + !----------------------------------------------------------------- ! T-grid cell and U-grid cell quantities ! Fill halo data locally where possible to avoid missing diff --git a/cicecore/cicedynB/infrastructure/ice_restart_driver.F90 b/cicecore/cicedynB/infrastructure/ice_restart_driver.F90 index b3cd413a9..474ed892e 100644 --- a/cicecore/cicedynB/infrastructure/ice_restart_driver.F90 +++ b/cicecore/cicedynB/infrastructure/ice_restart_driver.F90 @@ -555,6 +555,17 @@ subroutine restartfile (ice_ic) endif + ! 
set Tsfcn to c0 on land + !$OMP PARALLEL DO PRIVATE(iblk,i,j) + do iblk = 1, nblocks + do j = 1, ny_block + do i = 1, nx_block + if (.not. tmask(i,j,iblk)) trcrn(i,j,nt_Tsfc,:,iblk) = c0 + enddo + enddo + enddo + !$OMP END PARALLEL DO + ! for mixed layer model if (oceanmixed_ice) then diff --git a/cicecore/cicedynB/infrastructure/ice_restoring.F90 b/cicecore/cicedynB/infrastructure/ice_restoring.F90 index c7254cd80..f21e50513 100644 --- a/cicecore/cicedynB/infrastructure/ice_restoring.F90 +++ b/cicecore/cicedynB/infrastructure/ice_restoring.F90 @@ -394,7 +394,11 @@ subroutine set_restore_var (nx_block, ny_block, & aicen(i,j,n) = c0 vicen(i,j,n) = c0 vsnon(i,j,n) = c0 - trcrn(i,j,nt_Tsfc,n) = Tf(i,j) ! surface temperature + if (tmask(i,j)) then + trcrn(i,j,nt_Tsfc,n) = Tf(i,j) ! surface temperature + else + trcrn(i,j,nt_Tsfc,n) = c0 ! on land gridcells + endif if (ntrcr >= 2) then do it = 2, ntrcr trcrn(i,j,it,n) = c0 diff --git a/cicecore/cicedynB/infrastructure/io/io_pio2/ice_restart.F90 b/cicecore/cicedynB/infrastructure/io/io_pio2/ice_restart.F90 index 24a5b75be..74638e45a 100644 --- a/cicecore/cicedynB/infrastructure/io/io_pio2/ice_restart.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_pio2/ice_restart.F90 @@ -745,7 +745,8 @@ subroutine read_restart_field(nu,nrec,work,atype,vname,ndim3,diag, & status = pio_inq_varid(File,trim(vname),vardesc) if (status /= PIO_noerr) then - call abort_ice(subname//"ERROR: CICE restart? Missing variable: "//trim(vname)) + call abort_ice(subname// & + "ERROR: CICE restart? Missing variable: "//trim(vname)) endif status = pio_inq_varndims(File, vardesc, ndims) @@ -755,6 +756,10 @@ subroutine read_restart_field(nu,nrec,work,atype,vname,ndim3,diag, & ! if (ndim3 == ncat .and. ncat>1) then if (ndim3 == ncat .and. ndims == 3) then call pio_read_darray(File, vardesc, iodesc3d_ncat, work, status) +!#ifndef CESM1_PIO +!! This only works for PIO2 +! 
where (work == PIO_FILL_DOUBLE) work = c0 +!#endif if (present(field_loc)) then do n=1,ndim3 call ice_HaloUpdate (work(:,:,n,:), halo_info, & @@ -764,6 +769,10 @@ subroutine read_restart_field(nu,nrec,work,atype,vname,ndim3,diag, & ! elseif (ndim3 == 1) then elseif (ndim3 == 1 .and. ndims == 2) then call pio_read_darray(File, vardesc, iodesc2d, work, status) +!#ifndef CESM1_PIO +!! This only works for PIO2 +! where (work == PIO_FILL_DOUBLE) work = c0 +!#endif if (present(field_loc)) then call ice_HaloUpdate (work(:,:,1,:), halo_info, & field_loc, field_type) diff --git a/cicecore/drivers/direct/hadgem3/CICE.F90 b/cicecore/drivers/direct/hadgem3/CICE.F90 index b2314240c..b0176e801 100644 --- a/cicecore/drivers/direct/hadgem3/CICE.F90 +++ b/cicecore/drivers/direct/hadgem3/CICE.F90 @@ -1,8 +1,8 @@ !======================================================================= -! Copyright (c) 2021, Triad National Security, LLC +! Copyright (c) 2022, Triad National Security, LLC ! All rights reserved. ! -! Copyright 2021. Triad National Security, LLC. This software was +! Copyright 2022. Triad National Security, LLC. This software was ! produced under U.S. Government contract DE-AC52-06NA25396 for Los ! Alamos National Laboratory (LANL), which is operated by Triad ! National Security, LLC for the U.S. Department of Energy. The U.S. diff --git a/cicecore/drivers/mct/cesm1/CICE_RunMod.F90 b/cicecore/drivers/mct/cesm1/CICE_RunMod.F90 index 365322dde..6ede4411d 100644 --- a/cicecore/drivers/mct/cesm1/CICE_RunMod.F90 +++ b/cicecore/drivers/mct/cesm1/CICE_RunMod.F90 @@ -287,7 +287,6 @@ subroutine ice_step call ice_timer_start(timer_column) ! column physics call ice_timer_start(timer_thermo) ! 
thermodynamics -!MHRI: CHECK THIS OMP !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks diff --git a/cicecore/drivers/mct/cesm1/CICE_copyright.txt b/cicecore/drivers/mct/cesm1/CICE_copyright.txt index e10da1e77..6eb3c9cca 100644 --- a/cicecore/drivers/mct/cesm1/CICE_copyright.txt +++ b/cicecore/drivers/mct/cesm1/CICE_copyright.txt @@ -1,7 +1,7 @@ -! Copyright (c) 2021, Triad National Security, LLC +! Copyright (c) 2022, Triad National Security, LLC ! All rights reserved. ! -! Copyright 2021. Triad National Security, LLC. This software was +! Copyright 2022. Triad National Security, LLC. This software was ! produced under U.S. Government contract DE-AC52-06NA25396 for Los ! Alamos National Laboratory (LANL), which is operated by Triad ! National Security, LLC for the U.S. Department of Energy. The U.S. diff --git a/cicecore/drivers/mct/cesm1/ice_prescribed_mod.F90 b/cicecore/drivers/mct/cesm1/ice_prescribed_mod.F90 index e068a2892..0868ef2fa 100644 --- a/cicecore/drivers/mct/cesm1/ice_prescribed_mod.F90 +++ b/cicecore/drivers/mct/cesm1/ice_prescribed_mod.F90 @@ -168,23 +168,28 @@ subroutine ice_prescribed_init(compid, gsmap, dom) prescribed_ice_fill = .false. ! true if pice data fill required ! 
read from input file - call get_fileunit(nu_nml) + if (my_task == master_task) then - open (nu_nml, file=nml_filename, status='old',iostat=nml_error) + write(nu_diag,*) subname,' Reading ice_prescribed_nml' + + call get_fileunit(nu_nml) + open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 + call abort_ice(subname//'ERROR: ice_prescribed_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) endif + + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=ice_prescribed_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call release_fileunit(nu_nml) - call broadcast_scalar(nml_error,master_task) - if (nml_error /= 0) then - call abort_ice (subname//' ERROR: Namelist read error in ice_prescribed_mod') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: ice_prescribed_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif call broadcast_scalar(prescribed_ice,master_task) diff --git a/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 b/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 index cfca994c3..338b25050 100644 --- a/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 +++ b/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 @@ -8,6 +8,7 @@ module CICE_InitMod use icepack_intfc, only: icepack_aggregate use icepack_intfc, only: icepack_init_itd, icepack_init_itd_hist use icepack_intfc, only: icepack_init_fsd_bounds, icepack_init_wave + use icepack_intfc, only: icepack_init_snow use icepack_intfc, only: icepack_configure use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted use icepack_intfc, only: icepack_query_parameters, icepack_query_tracer_flags @@ -83,7 +84,7 @@ subroutine cice_init2() use ice_dyn_vp , only: init_vp use ice_flux , only: init_coupler_flux, init_history_therm use ice_flux , only: init_history_dyn, init_flux_atm, init_flux_ocn - use ice_forcing , only: 
init_forcing_ocn + use ice_forcing , only: init_snowtable use ice_forcing_bgc , only: get_forcing_bgc, get_atm_bgc use ice_forcing_bgc , only: faero_default, faero_optics, alloc_forcing_bgc, fiso_default use ice_history , only: init_hist, accum_hist @@ -95,7 +96,8 @@ subroutine cice_init2() use ice_transport_driver , only: init_transport logical(kind=log_kind) :: tr_aero, tr_zaero, skl_bgc, z_tracers - logical(kind=log_kind) :: tr_iso, tr_fsd, wave_spec + logical(kind=log_kind) :: tr_iso, tr_fsd, wave_spec, tr_snow + character(len=char_len) :: snw_aging_table character(len=*), parameter :: subname = '(cice_init2)' !---------------------------------------------------- @@ -137,15 +139,12 @@ subroutine cice_init2() call calendar() ! determine the initial date - !TODO: - why is this being called when you are using CMEPS? - call init_forcing_ocn(dt) ! initialize sss and sst from data - call init_state ! initialize the ice state call init_transport ! initialize horizontal transport call ice_HaloRestore_init ! restored boundary conditions call icepack_query_parameters(skl_bgc_out=skl_bgc, z_tracers_out=z_tracers, & - wave_spec_out=wave_spec) + wave_spec_out=wave_spec, snw_aging_table_out=snw_aging_table) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(trim(subname), & file=__FILE__,line= __LINE__) @@ -158,7 +157,7 @@ subroutine cice_init2() call init_history_dyn ! initialize dynamic history variables call icepack_query_tracer_flags(tr_aero_out=tr_aero, tr_zaero_out=tr_zaero) - call icepack_query_tracer_flags(tr_iso_out=tr_iso) + call icepack_query_tracer_flags(tr_iso_out=tr_iso, tr_snow_out=tr_snow) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(trim(subname), & file=__FILE__,line= __LINE__) @@ -167,6 +166,17 @@ subroutine cice_init2() call faero_optics !initialize aerosol optical property tables end if + ! snow aging lookup table initialization + if (tr_snow) then ! 
advanced snow physics + call icepack_init_snow() + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & + file=__FILE__, line=__LINE__) + if (snw_aging_table(1:4) /= 'test') then + call init_snowtable() + endif + endif + ! Initialize shortwave components using swdn from previous timestep ! if restarting. These components will be scaled to current forcing ! in prep_radiation. @@ -199,12 +209,12 @@ subroutine init_restart() use ice_calendar, only: calendar use ice_constants, only: c0 use ice_domain, only: nblocks - use ice_domain_size, only: ncat, n_iso, n_aero, nfsd + use ice_domain_size, only: ncat, n_iso, n_aero, nfsd, nslyr use ice_dyn_eap, only: read_restart_eap use ice_dyn_shared, only: kdyn use ice_grid, only: tmask use ice_init, only: ice_ic - use ice_init_column, only: init_age, init_FY, init_lvl, & + use ice_init_column, only: init_age, init_FY, init_lvl, init_snowtracers, & init_meltponds_cesm, init_meltponds_lvl, init_meltponds_topo, & init_isotope, init_aerosol, init_hbrine, init_bgc, init_fsd use ice_restart_column, only: restart_age, read_restart_age, & @@ -212,6 +222,7 @@ subroutine init_restart() restart_pond_cesm, read_restart_pond_cesm, & restart_pond_lvl, read_restart_pond_lvl, & restart_pond_topo, read_restart_pond_topo, & + restart_snow, read_restart_snow, & restart_fsd, read_restart_fsd, & restart_iso, read_restart_iso, & restart_aero, read_restart_aero, & @@ -226,12 +237,13 @@ subroutine init_restart() iblk ! 
block index logical(kind=log_kind) :: & tr_iage, tr_FY, tr_lvl, tr_pond_cesm, tr_pond_lvl, & - tr_pond_topo, tr_fsd, tr_iso, tr_aero, tr_brine, & + tr_pond_topo, tr_fsd, tr_iso, tr_aero, tr_brine, tr_snow, & skl_bgc, z_tracers, solve_zsal integer(kind=int_kind) :: & ntrcr integer(kind=int_kind) :: & nt_alvl, nt_vlvl, nt_apnd, nt_hpnd, nt_ipnd, & + nt_smice, nt_smliq, nt_rhos, nt_rsnw, & nt_iage, nt_FY, nt_aero, nt_fsd, nt_isosno, nt_isoice character(len=*), parameter :: subname = '(init_restart)' @@ -247,10 +259,12 @@ subroutine init_restart() call icepack_query_tracer_flags(tr_iage_out=tr_iage, tr_FY_out=tr_FY, & tr_lvl_out=tr_lvl, tr_pond_cesm_out=tr_pond_cesm, tr_pond_lvl_out=tr_pond_lvl, & tr_pond_topo_out=tr_pond_topo, tr_aero_out=tr_aero, tr_brine_out=tr_brine, & - tr_fsd_out=tr_fsd, tr_iso_out=tr_iso) + tr_snow_out=tr_snow, tr_fsd_out=tr_fsd, tr_iso_out=tr_iso) call icepack_query_tracer_indices(nt_alvl_out=nt_alvl, nt_vlvl_out=nt_vlvl, & nt_apnd_out=nt_apnd, nt_hpnd_out=nt_hpnd, nt_ipnd_out=nt_ipnd, & nt_iage_out=nt_iage, nt_FY_out=nt_FY, nt_aero_out=nt_aero, nt_fsd_out=nt_fsd, & + nt_smice_out=nt_smice, nt_smliq_out=nt_smliq, & + nt_rhos_out=nt_rhos, nt_rsnw_out=nt_rsnw, & nt_isosno_out=nt_isosno, nt_isoice_out=nt_isoice) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & @@ -347,6 +361,21 @@ subroutine init_restart() enddo ! iblk endif ! .not. restart_pond endif + ! snow redistribution/metamorphism + if (tr_snow) then + if (trim(runtype) == 'continue') restart_snow = .true. + if (restart_snow) then + call read_restart_snow + else + do iblk = 1, nblocks + call init_snowtracers(trcrn(:,:,nt_smice:nt_smice+nslyr-1,:,iblk), & + trcrn(:,:,nt_smliq:nt_smliq+nslyr-1,:,iblk), & + trcrn(:,:,nt_rhos :nt_rhos +nslyr-1,:,iblk), & + trcrn(:,:,nt_rsnw :nt_rsnw +nslyr-1,:,iblk)) + enddo ! iblk + endif + endif + ! floe size distribution if (tr_fsd) then if (trim(runtype) == 'continue') restart_fsd = .true. 
@@ -356,7 +385,6 @@ subroutine init_restart() call init_fsd(trcrn(:,:,nt_fsd:nt_fsd+nfsd-1,:,:)) endif endif - ! isotopes if (tr_iso) then if (trim(runtype) == 'continue') restart_iso = .true. @@ -441,7 +469,6 @@ subroutine init_restart() call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - end subroutine init_restart !======================================================================= diff --git a/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 b/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 index d4b100518..6f145ab0e 100644 --- a/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 +++ b/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 @@ -56,9 +56,9 @@ subroutine CICE_Run tr_iso, tr_aero, tr_zaero, skl_bgc, z_tracers, wave_spec, tr_fsd character(len=*), parameter :: subname = '(CICE_Run)' - !-------------------------------------------------------------------- - ! initialize error code and step timer - !-------------------------------------------------------------------- + !-------------------------------------------------------------------- + ! initialize error code and step timer + !-------------------------------------------------------------------- call ice_timer_start(timer_step) ! start timing entire run @@ -73,13 +73,13 @@ subroutine CICE_Run if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) - !-------------------------------------------------------------------- - ! timestep loop - !-------------------------------------------------------------------- + !-------------------------------------------------------------------- + ! timestep loop + !-------------------------------------------------------------------- call ice_timer_start(timer_couple) ! atm/ocn coupling - call advance_timestep() ! advance timestep and update calendar data + call advance_timestep() ! 
advance timestep and update calendar data if (z_tracers) call get_atm_bgc ! biogeochemistry @@ -90,9 +90,9 @@ subroutine CICE_Run call ice_step - !-------------------------------------------------------------------- - ! end of timestep loop - !-------------------------------------------------------------------- + !-------------------------------------------------------------------- + ! end of timestep loop + !-------------------------------------------------------------------- call ice_timer_stop(timer_step) ! end timestepping loop timer @@ -110,7 +110,7 @@ subroutine ice_step use ice_boundary, only: ice_HaloUpdate use ice_calendar, only: dt, dt_dyn, ndtd, diagfreq, write_restart, istep use ice_calendar, only: idate, msec - use ice_diagnostics, only: init_mass_diags, runtime_diags + use ice_diagnostics, only: init_mass_diags, runtime_diags, debug_model, debug_ice use ice_diagnostics_bgc, only: hbrine_diags, zsal_diags, bgc_diags use ice_domain, only: halo_info, nblocks use ice_dyn_eap, only: write_restart_eap @@ -123,12 +123,13 @@ subroutine ice_step use ice_restart_column, only: write_restart_age, write_restart_FY, & write_restart_lvl, write_restart_pond_cesm, write_restart_pond_lvl, & write_restart_pond_topo, write_restart_aero, write_restart_fsd, & - write_restart_iso, write_restart_bgc, write_restart_hbrine + write_restart_iso, write_restart_bgc, write_restart_hbrine, & + write_restart_snow use ice_restart_driver, only: dumpfile use ice_restoring, only: restore_ice, ice_HaloRestore use ice_step_mod, only: prep_radiation, step_therm1, step_therm2, & update_state, step_dyn_horiz, step_dyn_ridge, step_radiation, & - biogeochemistry, step_prep, step_dyn_wave + biogeochemistry, save_init, step_dyn_wave, step_snow use ice_timers, only: ice_timer_start, ice_timer_stop, & timer_diags, timer_column, timer_thermo, timer_bound, & timer_hist, timer_readwrite @@ -144,19 +145,28 @@ subroutine ice_step offset ! 
d(age)/dt time offset logical (kind=log_kind) :: & - tr_iage, tr_FY, tr_lvl, tr_fsd, & + tr_iage, tr_FY, tr_lvl, tr_fsd, tr_snow, & tr_pond_cesm, tr_pond_lvl, tr_pond_topo, tr_brine, tr_iso, tr_aero, & calc_Tsfc, skl_bgc, solve_zsal, z_tracers, wave_spec character(len=*), parameter :: subname = '(ice_step)' + character (len=char_len) :: plabeld + + if (debug_model) then + plabeld = 'beginning time step' + do iblk = 1, nblocks + call debug_ice (iblk, plabeld) + enddo + endif + call icepack_query_parameters(calc_Tsfc_out=calc_Tsfc, skl_bgc_out=skl_bgc, & solve_zsal_out=solve_zsal, z_tracers_out=z_tracers, ktherm_out=ktherm, & wave_spec_out=wave_spec) call icepack_query_tracer_flags(tr_iage_out=tr_iage, tr_FY_out=tr_FY, & tr_lvl_out=tr_lvl, tr_pond_cesm_out=tr_pond_cesm, tr_pond_lvl_out=tr_pond_lvl, & tr_pond_topo_out=tr_pond_topo, tr_brine_out=tr_brine, tr_aero_out=tr_aero, & - tr_iso_out=tr_iso, tr_fsd_out=tr_fsd) + tr_iso_out=tr_iso, tr_fsd_out=tr_fsd, tr_snow_out=tr_snow) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) @@ -201,15 +211,33 @@ subroutine ice_step !----------------------------------------------------------------- if (calc_Tsfc) call prep_radiation (iblk) + if (debug_model) then + plabeld = 'post prep_radiation' + call debug_ice (iblk, plabeld) + endif !----------------------------------------------------------------- ! thermodynamics and biogeochemistry !----------------------------------------------------------------- call step_therm1 (dt, iblk) ! vertical thermodynamics + if (debug_model) then + plabeld = 'post step_therm1' + call debug_ice (iblk, plabeld) + endif + call biogeochemistry (dt, iblk) ! biogeochemistry + if (debug_model) then + plabeld = 'post biogeochemistry' + call debug_ice (iblk, plabeld) + endif + if (.not.prescribed_ice) & call step_therm2 (dt, iblk) ! 
ice thickness distribution thermo + if (debug_model) then + plabeld = 'post step_therm2' + call debug_ice (iblk, plabeld) + endif endif ! ktherm > 0 @@ -237,6 +265,12 @@ subroutine ice_step ! momentum, stress, transport call step_dyn_horiz (dt_dyn) + if (debug_model) then + plabeld = 'post step_dyn_horiz' + do iblk = 1, nblocks + call debug_ice (iblk, plabeld) + enddo ! iblk + endif ! ridging !$OMP PARALLEL DO PRIVATE(iblk) @@ -244,12 +278,24 @@ subroutine ice_step if (kridge > 0) call step_dyn_ridge (dt_dyn, ndtd, iblk) enddo !$OMP END PARALLEL DO + if (debug_model) then + plabeld = 'post step_dyn_ridge' + do iblk = 1, nblocks + call debug_ice (iblk, plabeld) + enddo ! iblk + endif ! clean up, update tendency diagnostics offset = c0 call update_state (dt_dyn, daidtd, dvidtd, dagedtd, offset) enddo + if (debug_model) then + plabeld = 'post dynamics' + do iblk = 1, nblocks + call debug_ice (iblk, plabeld) + enddo + endif endif ! not prescribed ice @@ -260,18 +306,36 @@ subroutine ice_step call ice_timer_start(timer_column) ! column physics call ice_timer_start(timer_thermo) ! thermodynamics + !----------------------------------------------------------------- + ! snow redistribution and metamorphosis + !----------------------------------------------------------------- + + if (tr_snow) then ! advanced snow physics + do iblk = 1, nblocks + call step_snow (dt, iblk) + enddo + call update_state (dt) ! clean up + endif + !MHRI: CHECK THIS OMP !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks if (ktherm >= 0) call step_radiation (dt, iblk) + if (debug_model) then + plabeld = 'post step_radiation' + call debug_ice (iblk, plabeld) + endif !----------------------------------------------------------------- ! get ready for coupling and the next time step !----------------------------------------------------------------- call coupling_prep (iblk) - + if (debug_model) then + plabeld = 'post coupling_prep' + call debug_ice (iblk, plabeld) + endif enddo ! 
iblk !$OMP END PARALLEL DO @@ -309,6 +373,7 @@ subroutine ice_step if (tr_pond_cesm) call write_restart_pond_cesm if (tr_pond_lvl) call write_restart_pond_lvl if (tr_pond_topo) call write_restart_pond_topo + if (tr_snow) call write_restart_snow if (tr_fsd) call write_restart_fsd if (tr_iso) call write_restart_iso if (tr_aero) call write_restart_aero @@ -634,11 +699,12 @@ subroutine sfcflux_to_ocn(nx_block, ny_block, & real (kind=dbl_kind) :: & puny, & ! + Lsub, & ! rLsub ! 1/Lsub character(len=*), parameter :: subname = '(sfcflux_to_ocn)' - call icepack_query_parameters(puny_out=puny) + call icepack_query_parameters(puny_out=puny, Lsub_out=Lsub) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) diff --git a/cicecore/drivers/nuopc/cmeps/CICE_copyright.txt b/cicecore/drivers/nuopc/cmeps/CICE_copyright.txt index e10da1e77..6eb3c9cca 100644 --- a/cicecore/drivers/nuopc/cmeps/CICE_copyright.txt +++ b/cicecore/drivers/nuopc/cmeps/CICE_copyright.txt @@ -1,7 +1,7 @@ -! Copyright (c) 2021, Triad National Security, LLC +! Copyright (c) 2022, Triad National Security, LLC ! All rights reserved. ! -! Copyright 2021. Triad National Security, LLC. This software was +! Copyright 2022. Triad National Security, LLC. This software was ! produced under U.S. Government contract DE-AC52-06NA25396 for Los ! Alamos National Laboratory (LANL), which is operated by Triad ! National Security, LLC for the U.S. Department of Energy. The U.S. diff --git a/cicecore/drivers/nuopc/cmeps/ice_comp_nuopc.F90 b/cicecore/drivers/nuopc/cmeps/ice_comp_nuopc.F90 index a832e7bdf..a9d71e479 100644 --- a/cicecore/drivers/nuopc/cmeps/ice_comp_nuopc.F90 +++ b/cicecore/drivers/nuopc/cmeps/ice_comp_nuopc.F90 @@ -88,6 +88,7 @@ module ice_comp_nuopc integer :: nthrds ! Number of threads to use in this component integer :: dbug = 0 + logical :: profile_memory = .false. integer , parameter :: debug_import = 0 ! 
internal debug level integer , parameter :: debug_export = 0 ! internal debug level character(*), parameter :: modName = "(ice_comp_nuopc)" @@ -157,6 +158,10 @@ subroutine InitializeP0(gcomp, importState, exportState, clock, rc) type(ESMF_State) :: importState, exportState type(ESMF_Clock) :: clock integer, intent(out) :: rc + + logical :: isPresent, isSet + character(len=64) :: value + character(len=char_len_long) :: logmsg !-------------------------------- rc = ESMF_SUCCESS @@ -166,6 +171,14 @@ subroutine InitializeP0(gcomp, importState, exportState, clock, rc) acceptStringList=(/"IPDv01p"/), rc=rc) if (ChkErr(rc,__LINE__,u_FILE_u)) return + profile_memory = .false. + call NUOPC_CompAttributeGet(gcomp, name="ProfileMemory", value=value, & + isPresent=isPresent, isSet=isSet, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + if (isPresent .and. isSet) profile_memory=(trim(value)=="true") + write(logmsg,*) profile_memory + call ESMF_LogWrite('CICE_cap:ProfileMemory = '//trim(logmsg), ESMF_LOGMSG_INFO) + end subroutine InitializeP0 !=============================================================================== @@ -224,6 +237,10 @@ subroutine InitializeAdvertise(gcomp, importState, exportState, clock, rc) integer :: ilo, ihi, jlo, jhi ! 
beginning and end of physical domain character(len=char_len_long) :: diag_filename = 'unset' character(len=char_len_long) :: logmsg + character(len=char_len_long) :: single_column_lnd_domainfile + real(dbl_kind) :: scol_lon + real(dbl_kind) :: scol_lat + real(dbl_kind) :: scol_spval character(len=*), parameter :: subname=trim(modName)//':(InitializeAdvertise) ' !-------------------------------- @@ -363,8 +380,7 @@ subroutine InitializeAdvertise(gcomp, importState, exportState, clock, rc) depressT_in = 0.054_dbl_kind, & Tocnfrz_in = -34.0_dbl_kind*0.054_dbl_kind, & pi_in = SHR_CONST_PI, & - snowpatch_in = 0.005_dbl_kind, & - dragio_in = 0.00536_dbl_kind) + snowpatch_in = 0.005_dbl_kind) call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & @@ -493,12 +509,67 @@ subroutine InitializeAdvertise(gcomp, importState, exportState, clock, rc) ! First cice initialization phase - before initializing grid info !---------------------------------------------------------------------------- +#ifdef CESMCOUPLED + ! Determine if single column + + call NUOPC_CompAttributeGet(gcomp, name='scol_lon', value=cvalue, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scmlon + call NUOPC_CompAttributeGet(gcomp, name='scol_lat', value=cvalue, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scmlat + call NUOPC_CompAttributeGet(gcomp, name='scol_spval', value=cvalue, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scol_spval + + if (scmlon > scol_spval .and. scmlat > scol_spval) then + call NUOPC_CompAttributeGet(gcomp, name='single_column_lnd_domainfile', & + value=single_column_lnd_domainfile, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + if (trim(single_column_lnd_domainfile) /= 'UNSET') then + single_column = .true. 
+ else + call abort_ice('single_column_domainfile cannot be UNSET for single column mode') + end if + call NUOPC_CompAttributeGet(gcomp, name='scol_ocnmask', value=cvalue, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scol_mask + call NUOPC_CompAttributeGet(gcomp, name='scol_ocnfrac', value=cvalue, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scol_frac + call NUOPC_CompAttributeGet(gcomp, name='scol_ni', value=cvalue, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scol_ni + call NUOPC_CompAttributeGet(gcomp, name='scol_nj', value=cvalue, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + read(cvalue,*) scol_nj + + call ice_mesh_create_scolumn(scmlon, scmlat, ice_mesh, rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + + scol_valid = (scol_mask == 1) + if (.not. scol_valid) then + write(nu_diag,*) subname,' single column point is not a valid ocn point' + ! Advertise fields + call ice_advertise_fields(gcomp, importState, exportState, flds_scalar_name, rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + + call t_stopf ('cice_init_total') + + ! ******************* + ! *** RETURN HERE *** + ! ******************* + RETURN + end if + end if + ! Read the cice namelist as part of the call to cice_init1 + ! Note that if single_column is true and scol_valid is false, we never get here + call t_startf ('cice_init1') + call cice_init1 + call t_stopf ('cice_init1') -#ifdef CESMCOUPLED ! Form of ocean freezing temperature ! 'minus1p8' = -1.8 C ! 'linear_salt' = -depressT * sss @@ -546,13 +617,20 @@ subroutine InitializeAdvertise(gcomp, importState, exportState, clock, rc) ' must be the same as natmiter from cice namelist ',natmiter call abort_ice(trim(errmsg)) endif + +#else + + ! Read the cice namelist as part of the call to cice_init1 + call t_startf ('cice_init1') + call cice_init1 + call t_stopf ('cice_init1') + #endif + !---------------------------------------------------------------------------- !
Initialize grid info !---------------------------------------------------------------------------- - ! Initialize cice mesh and mask if appropriate - if (single_column .and. scol_valid) then call ice_mesh_init_tlon_tlat_area_hm() else @@ -737,82 +815,43 @@ subroutine InitializeRealize(gcomp, importState, exportState, clock, rc) if (dbug > 5) call ESMF_LogWrite(subname//' called', ESMF_LOGMSG_INFO) #ifdef CESMCOUPLED - call NUOPC_CompAttributeGet(gcomp, name='scol_lon', value=cvalue, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scmlon - call NUOPC_CompAttributeGet(gcomp, name='scol_lat', value=cvalue, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scmlat - call NUOPC_CompAttributeGet(gcomp, name='scol_spval', value=cvalue, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scol_spval - - if (scmlon > scol_spval .and. scmlat > scol_spval) then - call NUOPC_CompAttributeGet(gcomp, name='single_column_lnd_domainfile', & - value=single_column_lnd_domainfile, rc=rc) + ! if single column is not valid - set all export state fields to zero and return + if (single_column .and. .not. scol_valid) then + write(nu_diag,'(a)')' (ice_comp_nuopc) single column mode point does not contain any ocn/ice '& + //' - setting all export data to 0' + call ice_realize_fields(gcomp, mesh=ice_mesh, & + flds_scalar_name=flds_scalar_name, flds_scalar_num=flds_scalar_num, rc=rc) if (ChkErr(rc,__LINE__,u_FILE_u)) return - if (trim(single_column_lnd_domainfile) /= 'UNSET') then - single_column = .true. 
- else - call abort_ice('single_column_domainfile cannot be null for single column mode') - end if - call NUOPC_CompAttributeGet(gcomp, name='scol_ocnmask', value=cvalue, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scol_mask - call NUOPC_CompAttributeGet(gcomp, name='scol_ocnfrac', value=cvalue, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scol_frac - call NUOPC_CompAttributeGet(gcomp, name='scol_ni', value=cvalue, rc=rc) + call ESMF_StateGet(exportState, itemCount=fieldCount, rc=rc) if (chkerr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scol_ni - call NUOPC_CompAttributeGet(gcomp, name='scol_nj', value=cvalue, rc=rc) + allocate(lfieldnamelist(fieldCount)) + call ESMF_StateGet(exportState, itemNameList=lfieldnamelist, rc=rc) if (chkerr(rc,__LINE__,u_FILE_u)) return - read(cvalue,*) scol_nj - - call ice_mesh_create_scolumn(scmlon, scmlat, ice_mesh, rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - - scol_valid = (scol_mask == 1) - if (.not. scol_valid) then - ! 
if single column is not valid - set all export state fields to zero and return - write(nu_diag,'(a)')' (ice_comp_nuopc) single column mode point does not contain any ocn/ice '& - //' - setting all export data to 0' - call ice_realize_fields(gcomp, mesh=ice_mesh, & - flds_scalar_name=flds_scalar_name, flds_scalar_num=flds_scalar_num, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - call ESMF_StateGet(exportState, itemCount=fieldCount, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - allocate(lfieldnamelist(fieldCount)) - call ESMF_StateGet(exportState, itemNameList=lfieldnamelist, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - do n = 1, fieldCount - if (trim(lfieldnamelist(n)) /= flds_scalar_name) then - call ESMF_StateGet(exportState, itemName=trim(lfieldnamelist(n)), field=lfield, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - call ESMF_FieldGet(lfield, rank=rank, rc=rc) - if (chkerr(rc,__LINE__,u_FILE_u)) return - if (rank == 2) then - call ESMF_FieldGet(lfield, farrayPtr=fldptr2d, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - fldptr2d(:,:) = 0._dbl_kind - else - call ESMF_FieldGet(lfield, farrayPtr=fldptr1d, rc=rc) - if (ChkErr(rc,__LINE__,u_FILE_u)) return - fldptr1d(:) = 0._dbl_kind - end if + do n = 1, fieldCount + if (trim(lfieldnamelist(n)) /= flds_scalar_name) then + call ESMF_StateGet(exportState, itemName=trim(lfieldnamelist(n)), field=lfield, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + call ESMF_FieldGet(lfield, rank=rank, rc=rc) + if (chkerr(rc,__LINE__,u_FILE_u)) return + if (rank == 2) then + call ESMF_FieldGet(lfield, farrayPtr=fldptr2d, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + fldptr2d(:,:) = 0._dbl_kind + else + call ESMF_FieldGet(lfield, farrayPtr=fldptr1d, rc=rc) + if (ChkErr(rc,__LINE__,u_FILE_u)) return + fldptr1d(:) = 0._dbl_kind end if - enddo - deallocate(lfieldnamelist) - ! ******************* - ! *** RETURN HERE *** - ! 
******************* - RETURN - else - write(nu_diag,'(a,3(f10.5,2x))')' (ice_comp_nuopc) single column mode lon/lat/frac is ',& - scmlon,scmlat,scol_frac - end if + end if + enddo + deallocate(lfieldnamelist) + ! ******************* + ! *** RETURN HERE *** + ! ******************* + RETURN else - single_column = .false. + write(nu_diag,'(a,3(f10.5,2x))')' (ice_comp_nuopc) single column mode lon/lat/frac is ',& + scmlon,scmlat,scol_frac end if #endif @@ -902,6 +941,16 @@ subroutine ModelAdvance(gcomp, rc) !-------------------------------- rc = ESMF_SUCCESS + + call ESMF_LogWrite(subname//' called', ESMF_LOGMSG_INFO) + + if (single_column .and. .not. scol_valid) then + ! ******************* + ! *** RETURN HERE *** + ! ******************* + RETURN + end if + if (dbug > 5) call ESMF_LogWrite(subname//' called', ESMF_LOGMSG_INFO) ! query the Component for its clock, importState and exportState @@ -1049,7 +1098,9 @@ subroutine ModelAdvance(gcomp, rc) ! Advance cice and timestep update !-------------------------------- + if(profile_memory) call ESMF_VMLogMemInfo("Entering CICE_Run : ") call CICE_Run() + if(profile_memory) call ESMF_VMLogMemInfo("Leaving CICE_Run : ") !-------------------------------- ! Create export state diff --git a/cicecore/drivers/nuopc/cmeps/ice_import_export.F90 b/cicecore/drivers/nuopc/cmeps/ice_import_export.F90 index 0f7f1ebd4..dbdf5c07d 100644 --- a/cicecore/drivers/nuopc/cmeps/ice_import_export.F90 +++ b/cicecore/drivers/nuopc/cmeps/ice_import_export.F90 @@ -131,7 +131,9 @@ subroutine ice_advertise_fields(gcomp, importState, exportState, flds_scalar_nam write(nu_diag,*)'send_i2x_per_cat = ',send_i2x_per_cat end if if (.not.send_i2x_per_cat) then - deallocate(fswthrun_ai) + if (allocated(fswthrun_ai)) then + deallocate(fswthrun_ai) + end if end if ! 
Determine if the following attributes are sent by the driver and if so read them in @@ -583,7 +585,7 @@ subroutine ice_import( importState, rc ) rhoa(i,j,iblk) = inst_pres_height_lowest / & (287.058_ESMF_KIND_R8*(1._ESMF_KIND_R8+0.608_ESMF_KIND_R8*Qa(i,j,iblk))*Tair(i,j,iblk)) else - rhoa(i,j,iblk) = 0._ESMF_KIND_R8 + rhoa(i,j,iblk) = 1.2_ESMF_KIND_R8 endif end do !i end do !j diff --git a/cicecore/drivers/nuopc/cmeps/ice_mesh_mod.F90 b/cicecore/drivers/nuopc/cmeps/ice_mesh_mod.F90 index a0d18c5fd..e7fb5f632 100644 --- a/cicecore/drivers/nuopc/cmeps/ice_mesh_mod.F90 +++ b/cicecore/drivers/nuopc/cmeps/ice_mesh_mod.F90 @@ -639,16 +639,13 @@ subroutine ice_mesh_check(gcomp, ice_mesh, rc) diff_lon = abs(mod(lonMesh(n) - tmplon,360.0)) if (diff_lon > eps_imesh ) then write(6,100)n,lonMesh(n),tmplon, diff_lon - call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + !call abort_ice(error_message=subname, file=__FILE__, line=__LINE__) end if diff_lat = abs(latMesh(n) - lat(n)) if (diff_lat > eps_imesh) then write(6,101)n,latMesh(n),lat(n), diff_lat - call abort_ice(error_message=subname, & - file=__FILE__, line=__LINE__) + !call abort_ice(error_message=subname, file=__FILE__, line=__LINE__) end if - enddo enddo enddo diff --git a/cicecore/drivers/nuopc/dmi/CICE.F90 b/cicecore/drivers/nuopc/dmi/CICE.F90 index 7056e0e5b..d6a28c3ba 100644 --- a/cicecore/drivers/nuopc/dmi/CICE.F90 +++ b/cicecore/drivers/nuopc/dmi/CICE.F90 @@ -1,8 +1,8 @@ !======================================================================= -! Copyright (c) 2021, Triad National Security, LLC +! Copyright (c) 2022, Triad National Security, LLC ! All rights reserved. ! -! Copyright 2021. Triad National Security, LLC. This software was +! Copyright 2022. Triad National Security, LLC. This software was ! produced under U.S. Government contract DE-AC52-06NA25396 for Los ! Alamos National Laboratory (LANL), which is operated by Triad ! National Security, LLC for the U.S. Department of Energy. 
The U.S. diff --git a/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 b/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 index 7da73db1d..1aaee77f4 100644 --- a/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 +++ b/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 @@ -496,7 +496,7 @@ subroutine coupling_prep (iblk) enddo enddo - call ice_timer_start(timer_couple) ! atm/ocn coupling + call ice_timer_start(timer_couple,iblk) ! atm/ocn coupling if (oceanmixed_ice) & call ocean_mixed_layer (dt,iblk) ! ocean surface fluxes and sst @@ -663,7 +663,7 @@ subroutine coupling_prep (iblk) endif !echmod #endif - call ice_timer_stop(timer_couple) ! atm/ocn coupling + call ice_timer_stop(timer_couple,iblk) ! atm/ocn coupling end subroutine coupling_prep diff --git a/cicecore/drivers/standalone/cice/CICE.F90 b/cicecore/drivers/standalone/cice/CICE.F90 index 7056e0e5b..d6a28c3ba 100644 --- a/cicecore/drivers/standalone/cice/CICE.F90 +++ b/cicecore/drivers/standalone/cice/CICE.F90 @@ -1,8 +1,8 @@ !======================================================================= -! Copyright (c) 2021, Triad National Security, LLC +! Copyright (c) 2022, Triad National Security, LLC ! All rights reserved. ! -! Copyright 2021. Triad National Security, LLC. This software was +! Copyright 2022. Triad National Security, LLC. This software was ! produced under U.S. Government contract DE-AC52-06NA25396 for Los ! Alamos National Laboratory (LANL), which is operated by Triad ! National Security, LLC for the U.S. Department of Energy. The U.S. 
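The `ice_import_export.F90` hunk above changes the `rhoa` fallback from 0 to 1.2 kg/m^3 when no lowest-level pressure is imported. As a quick sanity check of the moist-air density formula used in that hunk (ideal gas law with virtual temperature, dry-air gas constant 287.058 as in the patch), here is a small Python sketch; the 101325 Pa pressure, 288 K temperature, and humidity value below are illustrative assumptions, not values from the patch:

```python
# Sanity check of the air-density fallback introduced in ice_import_export.F90:
#   rhoa = p / (R_dry * (1 + 0.608*Qa) * Tair)
# i.e. the ideal gas law evaluated with the virtual temperature.

R_DRY = 287.058  # dry-air gas constant [J/(kg K)], same constant as the patch

def air_density(pres, tair, qa):
    """Moist-air density from pressure [Pa], temperature [K], specific humidity [kg/kg]."""
    return pres / (R_DRY * (1.0 + 0.608 * qa) * tair)

rho = air_density(101325.0, 288.0, 0.005)
print(round(rho, 3))  # ~1.222, close to the new 1.2 kg/m^3 fallback
```

Typical surface values land near 1.2 kg/m^3, which is why a constant 1.2 is a far safer fallback than 0 (a zero density would zero out the turbulent flux calculations downstream).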
diff --git a/cicecore/drivers/standalone/cice/CICE_FinalMod.F90 b/cicecore/drivers/standalone/cice/CICE_FinalMod.F90 index a59c210aa..28811c3cd 100644 --- a/cicecore/drivers/standalone/cice/CICE_FinalMod.F90 +++ b/cicecore/drivers/standalone/cice/CICE_FinalMod.F90 @@ -31,7 +31,8 @@ module CICE_FinalMod subroutine CICE_Finalize use ice_restart_shared, only: runid - use ice_timers, only: ice_timer_stop, ice_timer_print_all, timer_total + use ice_timers, only: ice_timer_stop, ice_timer_print_all, timer_total, & + timer_stats character(len=*), parameter :: subname = '(CICE_Finalize)' @@ -40,7 +41,7 @@ subroutine CICE_Finalize !------------------------------------------------------------------- call ice_timer_stop(timer_total) ! stop timing entire run - call ice_timer_print_all(stats=.false.) ! print timing information + call ice_timer_print_all(stats=timer_stats) ! print timing information call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & diff --git a/cicecore/drivers/standalone/cice/CICE_RunMod.F90 b/cicecore/drivers/standalone/cice/CICE_RunMod.F90 index 27d61db84..0b4326c0a 100644 --- a/cicecore/drivers/standalone/cice/CICE_RunMod.F90 +++ b/cicecore/drivers/standalone/cice/CICE_RunMod.F90 @@ -218,10 +218,9 @@ subroutine ice_step call step_prep - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - - if (ktherm >= 0) then + if (ktherm >= 0) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks !----------------------------------------------------------------- ! scale radiation fields @@ -237,7 +236,7 @@ subroutine ice_step !----------------------------------------------------------------- ! thermodynamics and biogeochemistry !----------------------------------------------------------------- - + call step_therm1 (dt, iblk) ! vertical thermodynamics if (debug_model) then @@ -259,10 +258,9 @@ subroutine ice_step call debug_ice (iblk, plabeld) endif - endif ! ktherm > 0 - - enddo ! 
iblk - !$OMP END PARALLEL DO + enddo + !$OMP END PARALLEL DO + endif ! ktherm > 0 ! clean up, update tendency diagnostics offset = dt @@ -292,7 +290,7 @@ subroutine ice_step endif ! ridging - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks if (kridge > 0) call step_dyn_ridge (dt_dyn, ndtd, iblk) enddo @@ -326,14 +324,15 @@ subroutine ice_step !----------------------------------------------------------------- if (tr_snow) then ! advanced snow physics + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call step_snow (dt, iblk) enddo + !$OMP END PARALLEL DO call update_state (dt) ! clean up endif -!MHRI: CHECK THIS OMP - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -405,7 +404,6 @@ subroutine ice_step if (kdyn == 2) call write_restart_eap call final_restart endif - call ice_timer_stop(timer_readwrite) ! reading/writing end subroutine ice_step @@ -488,7 +486,7 @@ subroutine coupling_prep (iblk) enddo enddo - call ice_timer_start(timer_couple) ! atm/ocn coupling + call ice_timer_start(timer_couple,iblk) ! atm/ocn coupling if (oceanmixed_ice) & call ocean_mixed_layer (dt,iblk) ! ocean surface fluxes and sst @@ -655,7 +653,7 @@ subroutine coupling_prep (iblk) endif !echmod #endif - call ice_timer_stop(timer_couple) ! atm/ocn coupling + call ice_timer_stop(timer_couple,iblk) ! atm/ocn coupling end subroutine coupling_prep diff --git a/cicecore/shared/ice_init_column.F90 b/cicecore/shared/ice_init_column.F90 index eff39a464..5643b4277 100644 --- a/cicecore/shared/ice_init_column.F90 +++ b/cicecore/shared/ice_init_column.F90 @@ -1301,27 +1301,28 @@ subroutine input_zbgc ! 
read from input file !----------------------------------------------------------------- - call get_fileunit(nu_nml) - if (my_task == master_task) then + write(nu_diag,*) subname,' Reading zbgc_nml' + + call get_fileunit(nu_nml) open (nu_nml, file=trim(nml_filename), status='old',iostat=nml_error) if (nml_error /= 0) then - nml_error = -1 - else - nml_error = 1 - endif + call abort_ice(subname//'ERROR: zbgc_nml open file '// & + trim(nml_filename), & + file=__FILE__, line=__LINE__) + endif - print*,'Reading zbgc_nml' + nml_error = 1 do while (nml_error > 0) read(nu_nml, nml=zbgc_nml,iostat=nml_error) end do - if (nml_error == 0) close(nu_nml) - endif - call broadcast_scalar(nml_error, master_task) - if (nml_error /= 0) then - call abort_ice(subname//'ERROR: reading zbgc namelist') + if (nml_error /= 0) then + call abort_ice(subname//'ERROR: zbgc_nml reading ', & + file=__FILE__, line=__LINE__) + endif + close(nu_nml) + call release_fileunit(nu_nml) endif - call release_fileunit(nu_nml) !----------------------------------------------------------------- ! 
broadcast diff --git a/cicecore/version.txt b/cicecore/version.txt index 04a90ef1a..9e5f9f3e1 100644 --- a/cicecore/version.txt +++ b/cicecore/version.txt @@ -1 +1 @@ -CICE 6.3.0 +CICE 6.3.1 diff --git a/configuration/scripts/cice.batch.csh b/configuration/scripts/cice.batch.csh index 04f397034..7a1334532 100755 --- a/configuration/scripts/cice.batch.csh +++ b/configuration/scripts/cice.batch.csh @@ -91,7 +91,25 @@ cat >> ${jobfile} << EOFB #PBS -l walltime=${batchtime} EOFB -else if (${ICE_MACHINE} =~ gordon* || ${ICE_MACHINE} =~ conrad* || ${ICE_MACHINE} =~ gaffney* || ${ICE_MACHINE} =~ koehr* || ${ICE_MACHINE} =~ mustang) then +else if (${ICE_MACHINE} =~ gaffney* || ${ICE_MACHINE} =~ koehr* || ${ICE_MACHINE} =~ mustang*) then cat >> ${jobfile} << EOFB #PBS -N ${shortcase} #PBS -q ${queue} #PBS -A ${acct} #PBS -l select=${nnodes}:ncpus=${maxtpn}:mpiprocs=${taskpernode} #PBS -l walltime=${batchtime} #PBS -j oe #PBS -W umask=022 ###PBS -M username@domain.com ###PBS -m be EOFB + +else if (${ICE_MACHINE} =~ narwhal*) then +if (${runlength} <= 0) then + set batchtime = "00:29:59" +else + set queue = "standard" +endif cat >> ${jobfile} << EOFB #PBS -N ${shortcase} #PBS -q ${queue} diff --git a/configuration/scripts/cice.launch.csh b/configuration/scripts/cice.launch.csh index 904a0b636..9a557ec44 100755 --- a/configuration/scripts/cice.launch.csh +++ b/configuration/scripts/cice.launch.csh @@ -69,15 +69,8 @@ mpirun -np ${ntasks} ./cice >&! \$ICE_RUNLOG_FILE EOFR endif - -#======= -else if (${ICE_MACHINE} =~ onyx*) then -cat >> ${jobfile} << EOFR -aprun -n ${ntasks} -N ${taskpernodelimit} -d ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE -EOFR - #======= -else if (${ICE_MACHINE} =~ gordon* || ${ICE_MACHINE} =~ conrad*) then +else if (${ICE_MACHINE} =~ onyx* || ${ICE_MACHINE} =~ narwhal*) then cat >> ${jobfile} << EOFR aprun -n ${ntasks} -N ${taskpernodelimit} -d ${nthrds} ./cice >&!
\$ICE_RUNLOG_FILE EOFR diff --git a/configuration/scripts/cice.run.setup.csh b/configuration/scripts/cice.run.setup.csh index aa578b5ca..58c4ebe66 100755 --- a/configuration/scripts/cice.run.setup.csh +++ b/configuration/scripts/cice.run.setup.csh @@ -9,8 +9,6 @@ echo "running cice.run.setup.csh" set jobfile = cice.run set subfile = cice.submit -set nthrds = ${ICE_NTHRDS} - #========================================== # Write the batch code into the job file @@ -43,7 +41,9 @@ set ICE_RUNLOG_FILE = "cice.runlog.\${stamp}" #-------------------------------------------- cd \${ICE_RUNDIR} -setenv OMP_NUM_THREADS ${nthrds} +setenv OMP_NUM_THREADS \${ICE_NTHRDS} +setenv OMP_SCHEDULE "\${ICE_OMPSCHED}" +#setenv OMP_DISPLAY_ENV TRUE cp -f \${ICE_CASEDIR}/ice_in \${ICE_RUNDIR} cp -f \${ICE_CASEDIR}/env.${ICE_MACHCOMP} \${ICE_RUNDIR} diff --git a/configuration/scripts/cice.settings b/configuration/scripts/cice.settings index 1faf2c5be..9b57aab3f 100755 --- a/configuration/scripts/cice.settings +++ b/configuration/scripts/cice.settings @@ -21,6 +21,7 @@ setenv ICE_QUIETMODE false setenv ICE_GRID undefined setenv ICE_NTASKS undefined setenv ICE_NTHRDS undefined +setenv ICE_OMPSCHED "static,1" setenv ICE_TEST undefined setenv ICE_TESTNAME undefined setenv ICE_TESTID undefined @@ -28,6 +29,7 @@ setenv ICE_BASELINE undefined setenv ICE_BASEGEN undefined setenv ICE_BASECOM undefined setenv ICE_BFBCOMP undefined +setenv ICE_BFBTYPE restart setenv ICE_SPVAL undefined setenv ICE_RUNLENGTH -1 setenv ICE_ACCOUNT undefined diff --git a/configuration/scripts/ice_in b/configuration/scripts/ice_in index 11fa9b5ca..095822640 100644 --- a/configuration/scripts/ice_in +++ b/configuration/scripts/ice_in @@ -38,6 +38,7 @@ debug_forcing = .false. print_global = .true. print_points = .true. + timer_stats = .false. conserv_check = .false. latpnt(1) = 90. lonpnt(1) = 0. 
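The `ice_step` loops above switch from a fixed `!$OMP PARALLEL DO` to `SCHEDULE(runtime)`, so the schedule is chosen at run time via `OMP_SCHEDULE`, which `cice.run.setup.csh` now seeds from the new `ICE_OMPSCHED` setting (default `static,1`). As a rough sketch of what `static,1` means for block-to-thread assignment — chunks of one iteration dealt to threads round-robin, per the OpenMP static schedule — here is an illustrative Python model (the function name is hypothetical, not part of CICE):

```python
# Illustration of OpenMP "static,chunk" scheduling: iterations (here, CICE
# blocks) are grouped into chunks, and chunks are assigned to threads
# round-robin. With chunk=1 each thread gets every nthreads-th block, which
# helps balance load when block costs vary smoothly across the grid.

def static_schedule(niter, nthreads, chunk=1):
    """Return {thread_id: [iterations]} under an OpenMP static,chunk schedule."""
    owner = {t: [] for t in range(nthreads)}
    for start in range(0, niter, chunk):
        tid = (start // chunk) % nthreads  # chunks dealt round-robin
        owner[tid].extend(range(start, min(start + chunk, niter)))
    return owner

print(static_schedule(8, 4))  # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

With `SCHEDULE(runtime)` in the source, switching between `static,1`, `static`, or `dynamic` becomes a run-script change (`ICE_OMPSCHED`) rather than a recompile.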
diff --git a/configuration/scripts/machines/Macros.conrad_cray b/configuration/scripts/machines/Macros.conrad_cray deleted file mode 100644 index 19ddcb8f5..000000000 --- a/configuration/scripts/machines/Macros.conrad_cray +++ /dev/null @@ -1,57 +0,0 @@ -#============================================================================== -# Macros file for NAVYDSRC conrad, cray compiler -#============================================================================== - -CPP := ftn -e P -CPPDEFS := -DFORTRANUNDERSCORE ${ICE_CPPDEFS} -CFLAGS := -c -O2 -h fp0 - -FIXEDFLAGS := -132 -FREEFLAGS := -FFLAGS := -h fp0 -h byteswapio -FFLAGS_NOOPT:= -O0 -LDFLAGS := -h byteswapio - -ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Rbcdps -# FFLAGS += -O0 -g -Rbcdps -ei -else - FFLAGS += -O2 -endif - -SCC := cc -SFC := ftn -MPICC := cc -MPIFC := ftn - -ifeq ($(ICE_COMMDIR), mpi) - FC := $(MPIFC) - CC := $(MPICC) -else - FC := $(SFC) - CC := $(SCC) -endif -LD:= $(FC) - -# defined by module -#NETCDF_PATH := $(NETCDF) -#PNETCDF_PATH := $(PNETCDF) -#PNETCDF_PATH := /glade/apps/opt/pnetcdf/1.3.0/intel/default -#LAPACK_LIBDIR := /glade/apps/opt/lapack/3.4.2/intel/12.1.5/lib - -#PIO_CONFIG_OPTS:= --enable-filesystem-hints=gpfs - -INCLDIR := $(INCLDIR) -#INCLDIR += -I$(NETCDF_PATH)/include - -#LIB_NETCDF := $(NETCDF_PATH)/lib -#LIB_PNETCDF := $(PNETCDF_PATH)/lib -#LIB_MPI := $(IMPILIBDIR) -#SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff - -ifeq ($(ICE_THREADED), false) - LDFLAGS += -hnoomp - CFLAGS += -hnoomp - FFLAGS += -hnoomp -endif - diff --git a/configuration/scripts/machines/Macros.conrad_intel b/configuration/scripts/machines/Macros.conrad_intel deleted file mode 100644 index 74a36304d..000000000 --- a/configuration/scripts/machines/Macros.conrad_intel +++ /dev/null @@ -1,56 +0,0 @@ -#============================================================================== -# Macros file for NAVYDSRC conrad, intel compiler 
-#============================================================================== - -CPP := fpp -CPPDEFS := -DFORTRANUNDERSCORE ${ICE_CPPDEFS} -CFLAGS := -c -O2 -fp-model precise -xHost - -FIXEDFLAGS := -132 -FREEFLAGS := -FR -FFLAGS := -fp-model precise -convert big_endian -assume byterecl -ftz -traceback -xHost -FFLAGS_NOOPT:= -O0 - -ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created -# FFLAGS += -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created -init=snan,arrays -else - FFLAGS += -O2 -endif - -SCC := cc -SFC := ftn -MPICC := cc -MPIFC := ftn - -ifeq ($(ICE_COMMDIR), mpi) - FC := $(MPIFC) - CC := $(MPICC) -else - FC := $(SFC) - CC := $(SCC) -endif -LD:= $(FC) - -# defined by module -#NETCDF_PATH := $(NETCDF) -#PNETCDF_PATH := $(PNETCDF) -#PNETCDF_PATH := /glade/apps/opt/pnetcdf/1.3.0/intel/default -#LAPACK_LIBDIR := /glade/apps/opt/lapack/3.4.2/intel/12.1.5/lib - -#PIO_CONFIG_OPTS:= --enable-filesystem-hints=gpfs - -INCLDIR := $(INCLDIR) -#INCLDIR += -I$(NETCDF_PATH)/include - -#LIB_NETCDF := $(NETCDF_PATH)/lib -#LIB_PNETCDF := $(PNETCDF_PATH)/lib -#LIB_MPI := $(IMPILIBDIR) -#SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff - -ifeq ($(ICE_THREADED), true) - LDFLAGS += -qopenmp - CFLAGS += -qopenmp - FFLAGS += -qopenmp -endif - diff --git a/configuration/scripts/machines/Macros.conrad_pgi b/configuration/scripts/machines/Macros.conrad_pgi deleted file mode 100644 index ef0a25ab4..000000000 --- a/configuration/scripts/machines/Macros.conrad_pgi +++ /dev/null @@ -1,55 +0,0 @@ -#============================================================================== -# Macros file for NAVYDSRC conrad, pgi compiler -#============================================================================== - -CPP := pgcc -E -CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 ${ICE_CPPDEFS} -CFLAGS := -c -O2 -Kieee - -FIXEDFLAGS := -Mextend -FREEFLAGS := -Mfree -FFLAGS := -Kieee -Mbyteswapio -traceback 
-FFLAGS_NOOPT:= -O0 - -ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Mbounds -Mchkptr -else - FFLAGS += -O -g -endif - -SCC := cc -SFC := ftn -MPICC := cc -MPIFC := ftn - -ifeq ($(ICE_COMMDIR), mpi) - FC := $(MPIFC) - CC := $(MPICC) -else - FC := $(SFC) - CC := $(SCC) -endif -LD:= $(FC) - -# defined by module -#NETCDF_PATH := $(NETCDF) -#PNETCDF_PATH := $(PNETCDF) -#PNETCDF_PATH := /glade/apps/opt/pnetcdf/1.3.0/intel/default -#LAPACK_LIBDIR := /glade/apps/opt/lapack/3.4.2/intel/12.1.5/lib - -#PIO_CONFIG_OPTS:= --enable-filesystem-hints=gpfs - -INCLDIR := $(INCLDIR) -#INCLDIR += -I$(NETCDF_PATH)/include - -#LIB_NETCDF := $(NETCDF_PATH)/lib -#LIB_PNETCDF := $(PNETCDF_PATH)/lib -#LIB_MPI := $(IMPILIBDIR) -#SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff - -ifeq ($(ICE_THREADED), true) - LDFLAGS += -mp - CFLAGS += -mp - FFLAGS += -mp -endif - diff --git a/configuration/scripts/machines/Macros.gordon_cray b/configuration/scripts/machines/Macros.gordon_cray deleted file mode 100644 index 6c5032e0d..000000000 --- a/configuration/scripts/machines/Macros.gordon_cray +++ /dev/null @@ -1,57 +0,0 @@ -#============================================================================== -# Macros file for NAVYDSRC gordon, cray compiler -#============================================================================== - -CPP := ftn -e P -CPPDEFS := -DFORTRANUNDERSCORE ${ICE_CPPDEFS} -CFLAGS := -c -O2 -h fp0 - -FIXEDFLAGS := -132 -FREEFLAGS := -FFLAGS := -h fp0 -h byteswapio -FFLAGS_NOOPT:= -O0 -LDFLAGS := -h byteswapio - -ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Rbcdps -# FFLAGS += -O0 -g -Rbcdps -ei -else - FFLAGS += -O2 -endif - -SCC := cc -SFC := ftn -MPICC := cc -MPIFC := ftn - -ifeq ($(ICE_COMMDIR), mpi) - FC := $(MPIFC) - CC := $(MPICC) -else - FC := $(SFC) - CC := $(SCC) -endif -LD:= $(FC) - -# defined by module -#NETCDF_PATH := $(NETCDF) -#PNETCDF_PATH := $(PNETCDF) -#PNETCDF_PATH := /glade/apps/opt/pnetcdf/1.3.0/intel/default -#LAPACK_LIBDIR := 
/glade/apps/opt/lapack/3.4.2/intel/12.1.5/lib - -#PIO_CONFIG_OPTS:= --enable-filesystem-hints=gpfs - -INCLDIR := $(INCLDIR) -#INCLDIR += -I$(NETCDF_PATH)/include - -#LIB_NETCDF := $(NETCDF_PATH)/lib -#LIB_PNETCDF := $(PNETCDF_PATH)/lib -#LIB_MPI := $(IMPILIBDIR) -#SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff - -ifeq ($(ICE_THREADED), false) - LDFLAGS += -hnoomp - CFLAGS += -hnoomp - FFLAGS += -hnoomp -endif - diff --git a/configuration/scripts/machines/Macros.gordon_pgi b/configuration/scripts/machines/Macros.narwhal_aocc similarity index 70% rename from configuration/scripts/machines/Macros.gordon_pgi rename to configuration/scripts/machines/Macros.narwhal_aocc index 1190f6eca..44b1dc2f6 100644 --- a/configuration/scripts/machines/Macros.gordon_pgi +++ b/configuration/scripts/machines/Macros.narwhal_aocc @@ -1,20 +1,20 @@ #============================================================================== -# Macros file for NAVYDSRC gordon, pgi compiler +# Macros file for NAVYDSRC narwhal, aocc compiler #============================================================================== -CPP := pgcc -Mcpp -CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 ${ICE_CPPDEFS} -CFLAGS := -c -O2 -Kieee +CPP := ftn -E +CPPDEFS := -DNO_R16 -DFORTRANUNDERSCORE ${ICE_CPPDEFS} +CFLAGS := -c -O2 -FIXEDFLAGS := -Mextend -FREEFLAGS := -Mfree -FFLAGS := -Kieee -Mbyteswapio -traceback +FIXEDFLAGS := -ffixed-form +FREEFLAGS := -ffree-form +FFLAGS := -byteswapio FFLAGS_NOOPT:= -O0 ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Mbounds -Mchkptr + FFLAGS += -O0 -g -fsanitize=integer-divide-by-zero,float-divide-by-zero,bounds else - FFLAGS += -O -g + FFLAGS += -O2 endif SCC := cc @@ -51,5 +51,9 @@ ifeq ($(ICE_THREADED), true) LDFLAGS += -mp CFLAGS += -mp FFLAGS += -mp +else + LDFLAGS += -nomp +# CFLAGS += -nomp + FFLAGS += -nomp endif diff --git a/configuration/scripts/machines/Macros.conrad_gnu b/configuration/scripts/machines/Macros.narwhal_cray similarity index 75% rename from 
configuration/scripts/machines/Macros.conrad_gnu rename to configuration/scripts/machines/Macros.narwhal_cray index 5459d9b6b..ab0e6378e 100644 --- a/configuration/scripts/machines/Macros.conrad_gnu +++ b/configuration/scripts/machines/Macros.narwhal_cray @@ -1,20 +1,21 @@ #============================================================================== -# Macros file for NAVYDSRC conrad, gnu compiler +# Macros file for NAVYDSRC narwhal, cray compiler #============================================================================== -CPP := ftn -E +CPP := ftn -e P CPPDEFS := -DFORTRANUNDERSCORE ${ICE_CPPDEFS} CFLAGS := -c -O2 -FIXEDFLAGS := -ffixed-line-length-132 -FREEFLAGS := -ffree-form -FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none +FIXEDFLAGS := -132 +FREEFLAGS := +FFLAGS := -hbyteswapio FFLAGS_NOOPT:= -O0 - +LDFLAGS := -hbyteswapio + ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow + FFLAGS += -O0 -hfp0 -g -Rbcdps -Ktrap=fp else - FFLAGS += -O2 + FFLAGS += -O2 -hfp0 # -eo endif SCC := cc @@ -51,5 +52,9 @@ ifeq ($(ICE_THREADED), true) LDFLAGS += -fopenmp CFLAGS += -fopenmp FFLAGS += -fopenmp +else + LDFLAGS += -hnoomp +# CFLAGS += -hnoomp + FFLAGS += -hnoomp endif diff --git a/configuration/scripts/machines/Macros.gordon_gnu b/configuration/scripts/machines/Macros.narwhal_gnu similarity index 87% rename from configuration/scripts/machines/Macros.gordon_gnu rename to configuration/scripts/machines/Macros.narwhal_gnu index 8c3e277ab..e980c1e29 100644 --- a/configuration/scripts/machines/Macros.gordon_gnu +++ b/configuration/scripts/machines/Macros.narwhal_gnu @@ -1,5 +1,5 @@ #============================================================================== -# Macros file for NAVYDSRC gordon, gnu compiler +# Macros file for NAVYDSRC narwhal, gnu compiler #============================================================================== CPP := ftn -E @@ -8,12 +8,12 @@ CFLAGS 
:= -c FIXEDFLAGS := -ffixed-line-length-132 FREEFLAGS := -ffree-form -FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none +FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none -fallow-argument-mismatch FFLAGS_NOOPT:= -O0 ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow - CFLAGS += -O0 + FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow + CFLAGS += -O0 endif ifeq ($(ICE_COVERAGE), true) diff --git a/configuration/scripts/machines/Macros.gordon_intel b/configuration/scripts/machines/Macros.narwhal_intel similarity index 82% rename from configuration/scripts/machines/Macros.gordon_intel rename to configuration/scripts/machines/Macros.narwhal_intel index 84659d00a..c7c103b24 100644 --- a/configuration/scripts/machines/Macros.gordon_intel +++ b/configuration/scripts/machines/Macros.narwhal_intel @@ -1,18 +1,20 @@ #============================================================================== -# Macros file for NAVYDSRC gordon, intel compiler +# Macros file for NAVYDSRC narwhal, intel compiler #============================================================================== CPP := fpp CPPDEFS := -DFORTRANUNDERSCORE ${ICE_CPPDEFS} -CFLAGS := -c -O2 -fp-model precise -xHost +CFLAGS := -c -O2 -fp-model precise -fcommon FIXEDFLAGS := -132 FREEFLAGS := -FR -FFLAGS := -fp-model precise -convert big_endian -assume byterecl -ftz -traceback -xHost +FFLAGS := -fp-model precise -convert big_endian -assume byterecl -ftz -traceback +# -mcmodel medium -shared-intel FFLAGS_NOOPT:= -O0 ifeq ($(ICE_BLDDEBUG), true) FFLAGS += -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created +# FFLAGS += -O0 -g -check all -fpe0 -ftrapuv -fp-model except -check noarg_temp_created -init=snan,arrays else FFLAGS += -O2 endif diff --git a/configuration/scripts/machines/env.banting_gnu 
b/configuration/scripts/machines/env.banting_gnu index 0c0a4abce..997816a9d 100755 --- a/configuration/scripts/machines/env.banting_gnu +++ b/configuration/scripts/machines/env.banting_gnu @@ -19,6 +19,9 @@ module load cray-netcdf # NetCDF module load cray-hdf5 # HDF5 setenv HDF5_USE_FILE_LOCKING FALSE # necessary since data is on an NFS filesystem +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME banting diff --git a/configuration/scripts/machines/env.banting_intel b/configuration/scripts/machines/env.banting_intel index ac01e4d72..0beeb2618 100755 --- a/configuration/scripts/machines/env.banting_intel +++ b/configuration/scripts/machines/env.banting_intel @@ -14,6 +14,9 @@ module load cray-netcdf # NetCDF module load cray-hdf5 # HDF5 setenv HDF5_USE_FILE_LOCKING FALSE # necessary since data is on an NFS filesystem +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME banting diff --git a/configuration/scripts/machines/env.cesium_intel b/configuration/scripts/machines/env.cesium_intel index 19209919e..8dabe1645 100755 --- a/configuration/scripts/machines/env.cesium_intel +++ b/configuration/scripts/machines/env.cesium_intel @@ -6,6 +6,9 @@ source /fs/ssm/main/opt/intelcomp/intelcomp-2016.1.156/intelcomp_2016.1.156_mult source $ssmuse -d /fs/ssm/main/opt/openmpi/openmpi-1.6.5/intelcomp-2016.1.156 # openmpi source $ssmuse -d /fs/ssm/hpco/tmp/eccc/201402/04/intel-2016.1.150 # netcdf (and openmpi) +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME cesium setenv ICE_MACHINE_ENVNAME intel setenv ICE_MACHINE_MAKE colormake-short diff --git a/configuration/scripts/machines/env.cheyenne_gnu b/configuration/scripts/machines/env.cheyenne_gnu index c68a87d5c..f580cc354 100755 --- a/configuration/scripts/machines/env.cheyenne_gnu +++ b/configuration/scripts/machines/env.cheyenne_gnu @@ -29,8 +29,8 @@ if ($ICE_IOTYPE =~ pio*) then endif endif -if 
($?ICE_TEST) then -if ($ICE_TEST =~ qcchk*) then +if ($?ICE_BFBTYPE) then +if ($ICE_BFBTYPE =~ qcchk*) then module load python source /glade/u/apps/opt/ncar_pylib/ncar_pylib.csh default endif @@ -40,6 +40,8 @@ endif limit coredumpsize unlimited limit stacksize unlimited +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M setenv ICE_MACHINE_MACHNAME cheyenne setenv ICE_MACHINE_MACHINFO "SGI ICE XA Xeon E5-2697V4 Broadwell" diff --git a/configuration/scripts/machines/env.cheyenne_intel b/configuration/scripts/machines/env.cheyenne_intel index d6eeb67ea..ef12df914 100755 --- a/configuration/scripts/machines/env.cheyenne_intel +++ b/configuration/scripts/machines/env.cheyenne_intel @@ -29,8 +29,8 @@ if ($ICE_IOTYPE =~ pio*) then endif endif -if ($?ICE_TEST) then -if ($ICE_TEST =~ qcchk*) then +if ($?ICE_BFBTYPE) then +if ($ICE_BFBTYPE =~ qcchk*) then module load python source /glade/u/apps/opt/ncar_pylib/ncar_pylib.csh default endif @@ -40,6 +40,8 @@ endif limit coredumpsize unlimited limit stacksize unlimited +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M setenv ICE_MACHINE_MACHNAME cheyenne setenv ICE_MACHINE_MACHINFO "SGI ICE XA Xeon E5-2697V4 Broadwell" diff --git a/configuration/scripts/machines/env.cheyenne_pgi b/configuration/scripts/machines/env.cheyenne_pgi index 9c559b90c..cbd486c29 100755 --- a/configuration/scripts/machines/env.cheyenne_pgi +++ b/configuration/scripts/machines/env.cheyenne_pgi @@ -29,8 +29,8 @@ if ($ICE_IOTYPE =~ pio*) then endif endif -if ($?ICE_TEST) then -if ($ICE_TEST =~ qcchk*) then +if ($?ICE_BFBTYPE) then +if ($ICE_BFBTYPE =~ qcchk*) then module load python source /glade/u/apps/opt/ncar_pylib/ncar_pylib.csh default endif @@ -40,6 +40,8 @@ endif limit coredumpsize unlimited limit stacksize unlimited +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M setenv ICE_MACHINE_MACHNAME cheyenne setenv ICE_MACHINE_MACHINFO "SGI ICE XA Xeon E5-2697V4 Broadwell" diff --git 
a/configuration/scripts/machines/env.compy_intel b/configuration/scripts/machines/env.compy_intel index fe3511aa6..6fc273204 100755 --- a/configuration/scripts/machines/env.compy_intel +++ b/configuration/scripts/machines/env.compy_intel @@ -23,6 +23,9 @@ setenv I_MPI_ADJUST_ALLREDUCE 1 limit coredumpsize unlimited limit stacksize unlimited +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME compy diff --git a/configuration/scripts/machines/env.conda_linux b/configuration/scripts/machines/env.conda_linux index 08cf27724..ae6ea1b79 100755 --- a/configuration/scripts/machines/env.conda_linux +++ b/configuration/scripts/machines/env.conda_linux @@ -24,6 +24,9 @@ if $status then exit 1 endif +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME conda diff --git a/configuration/scripts/machines/env.conda_macos b/configuration/scripts/machines/env.conda_macos index e33eee710..3b3537bf7 100755 --- a/configuration/scripts/machines/env.conda_macos +++ b/configuration/scripts/machines/env.conda_macos @@ -24,6 +24,9 @@ if $status then exit 1 endif +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME conda diff --git a/configuration/scripts/machines/env.conrad_gnu b/configuration/scripts/machines/env.conrad_gnu deleted file mode 100755 index f14ee33a5..000000000 --- a/configuration/scripts/machines/env.conrad_gnu +++ /dev/null @@ -1,77 +0,0 @@ -#!/bin/csh -f - -set inp = "undefined" -if ($#argv == 1) then - set inp = $1 -endif - -if ("$inp" != "-nomodules") then - -source /opt/modules/default/init/csh - -module unload PrgEnv-cray -module unload PrgEnv-gnu -module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-gnu/5.2.82 - -module unload gcc -module load gcc/6.3.0 - -module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 - -module unload netcdf -module unload cray-netcdf -module unload cray-hdf5 
-module unload cray-hdf5-parallel -module unload cray-netcdf-hdf5parallel -module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell - -setenv NETCDF_PATH ${NETCDF_DIR} -limit coredumpsize unlimited -limit stacksize unlimited - -endif - -setenv ICE_MACHINE_MACHNAME conrad -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME gnu -setenv ICE_MACHINE_ENVINFO "GNU Fortran (GCC) 6.3.0 20161221, mpich 7.5.3, netcdf 4.4.1.1" -setenv ICE_MACHINE_MAKE gmake -setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium -setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE -setenv ICE_MACHINE_SUBMIT "qsub " -setenv ICE_MACHINE_ACCT P00000000 -setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 -setenv ICE_MACHINE_QSTAT "qstat " - -# For lcov -set lcovpath = "/p/home/apcraig/bin" -set lcovp5l = "/p/home/apcraig/usr/lib/perl5/site_perl/5.10.0/x86_64-linux-thread-multi" - -if ($?PATH) then - if ("$PATH" !~ "*${lcovpath}*") then - setenv PATH ${PATH}:$lcovpath - endif -else - setenv PATH $lcovpath -endif - -if ($?PERL5LIB) then - if ("$PERL5LIB" !~ "*${lcovp5l}*") then - setenv PERL5LIB ${PERL5LIB}:$lcovp5l - endif -else - setenv PERL5LIB $lcovp5l -endif diff --git a/configuration/scripts/machines/env.conrad_intel b/configuration/scripts/machines/env.conrad_intel deleted file mode 100755 index e37ce4b1f..000000000 --- a/configuration/scripts/machines/env.conrad_intel +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/csh -f - -set inp = "undefined" -if ($#argv == 1) then - set inp = $1 -endif - -if ("$inp" != "-nomodules") then - -source /opt/modules/default/init/csh - -module unload PrgEnv-cray -module unload PrgEnv-gnu -module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-intel/5.2.40 - -module unload intel -module load 
intel/17.0.2.174 - -module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.3.2 - -module unload netcdf -module unload cray-netcdf -module unload cray-hdf5 -module unload cray-hdf5-parallel -module unload cray-netcdf-hdf5parallel -module unload cray-parallel-netcdf -module load cray-netcdf/4.3.2 -module load cray-hdf5/1.8.13 - -module unload cray-libsci - -module load craype-haswell - -setenv NETCDF_PATH ${NETCDF_DIR} -limit coredumpsize unlimited -limit stacksize unlimited - -endif - -setenv ICE_MACHINE_MACHNAME conrad -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME intel -setenv ICE_MACHINE_ENVINFO "ifort 17.0.2 20170213, mpich 7.3.2, netcdf 4.3.2" -setenv ICE_MACHINE_MAKE gmake -setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium -setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE -setenv ICE_MACHINE_SUBMIT "qsub " -setenv ICE_MACHINE_ACCT P00000000 -setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_MAXPES 8000 # maximum total pes (tasks * threads) available -setenv ICE_MACHINE_MAXRUNLENGTH 168 # maximum batch wall time limit in hours (integer) -setenv ICE_MACHINE_BLDTHRDS 4 -setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.conrad_pgi b/configuration/scripts/machines/env.conrad_pgi deleted file mode 100755 index 2e82ea34f..000000000 --- a/configuration/scripts/machines/env.conrad_pgi +++ /dev/null @@ -1,57 +0,0 @@ -#!/bin/csh -f - -set inp = "undefined" -if ($#argv == 1) then - set inp = $1 -endif - -if ("$inp" != "-nomodules") then - -source /opt/modules/default/init/csh - -module unload PrgEnv-cray -module unload PrgEnv-gnu -module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-pgi/5.2.82 - -module unload pgi -module load pgi/16.10.0 - -module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 - -module unload 
netcdf -module unload cray-netcdf -module unload cray-hdf5 -module unload cray-hdf5-parallel -module unload cray-netcdf-hdf5parallel -module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell - -setenv NETCDF_PATH ${NETCDF_DIR} -limit coredumpsize unlimited -limit stacksize unlimited - -endif - -setenv ICE_MACHINE_MACHNAME conrad -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME pgi -setenv ICE_MACHINE_ENVINFO "pgf90 16.10-0, mpich 7.5.3, netcdf 4.4.1.1" -setenv ICE_MACHINE_MAKE gmake -setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium -setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE -setenv ICE_MACHINE_SUBMIT "qsub " -setenv ICE_MACHINE_ACCT ARLAP96070PET -setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 -setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.cori_intel b/configuration/scripts/machines/env.cori_intel index ed01928f4..734b2edf3 100755 --- a/configuration/scripts/machines/env.cori_intel +++ b/configuration/scripts/machines/env.cori_intel @@ -39,6 +39,7 @@ module load craype/2.6.2 setenv NETCDF_PATH ${NETCDF_DIR} setenv OMP_PROC_BIND true setenv OMP_PLACES threads +setenv OMP_STACKSIZE 32M limit coredumpsize unlimited limit stacksize unlimited diff --git a/configuration/scripts/machines/env.daley_gnu b/configuration/scripts/machines/env.daley_gnu index b1e379eb0..25b438e8e 100755 --- a/configuration/scripts/machines/env.daley_gnu +++ b/configuration/scripts/machines/env.daley_gnu @@ -19,6 +19,9 @@ module load cray-netcdf # NetCDF module load cray-hdf5 # HDF5 setenv HDF5_USE_FILE_LOCKING FALSE # necessary since data is on an NFS filesystem +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME daley diff --git 
a/configuration/scripts/machines/env.daley_intel b/configuration/scripts/machines/env.daley_intel index 502c71037..49178365b 100755 --- a/configuration/scripts/machines/env.daley_intel +++ b/configuration/scripts/machines/env.daley_intel @@ -14,6 +14,9 @@ module load cray-netcdf # NetCDF module load cray-hdf5 # HDF5 setenv HDF5_USE_FILE_LOCKING FALSE # necessary since data is on an NFS filesystem +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME daley diff --git a/configuration/scripts/machines/env.fram_intel b/configuration/scripts/machines/env.fram_intel index a7b141479..98edb3a66 100755 --- a/configuration/scripts/machines/env.fram_intel +++ b/configuration/scripts/machines/env.fram_intel @@ -7,6 +7,9 @@ source /fs/ssm/main/opt/intelcomp/intelcomp-2016.1.156/intelcomp_2016.1.156_mult source $ssmuse -d /fs/ssm/main/opt/openmpi/openmpi-1.6.5/intelcomp-2016.1.156 # openmpi source $ssmuse -d /fs/ssm/hpco/tmp/eccc/201402/04/intel-2016.1.150 # netcdf (and openmpi) +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME fram setenv ICE_MACHINE_ENVNAME intel setenv ICE_MACHINE_MAKE make diff --git a/configuration/scripts/machines/env.freya_gnu b/configuration/scripts/machines/env.freya_gnu index b655d6dd0..2681e1318 100755 --- a/configuration/scripts/machines/env.freya_gnu +++ b/configuration/scripts/machines/env.freya_gnu @@ -8,7 +8,7 @@ endif if ("$inp" != "-nomodules") then source /opt/modules/default/init/csh # Initialize modules for csh - Clear environment +# Clear environment module rm PrgEnv-intel module rm PrgEnv-cray module rm PrgEnv-gnu @@ -37,3 +37,4 @@ setenv ICE_MACHINE_ACCT P0000000 setenv ICE_MACHINE_QUEUE "development" setenv ICE_MACHINE_BLDTHRDS 18 setenv ICE_MACHINE_QSTAT "qstat " +setenv OMP_STACKSIZE 64M diff --git a/configuration/scripts/machines/env.freya_intel b/configuration/scripts/machines/env.freya_intel index dcbc1f8ba..4b45cd9e7 100755 --- 
a/configuration/scripts/machines/env.freya_intel +++ b/configuration/scripts/machines/env.freya_intel @@ -36,3 +36,4 @@ setenv ICE_MACHINE_ACCT P0000000 setenv ICE_MACHINE_QUEUE "development" setenv ICE_MACHINE_BLDTHRDS 18 setenv ICE_MACHINE_QSTAT "qstat " +setenv OMP_STACKSIZE 64M diff --git a/configuration/scripts/machines/env.gaea_intel b/configuration/scripts/machines/env.gaea_intel index d143270d7..e204c6fff 100755 --- a/configuration/scripts/machines/env.gaea_intel +++ b/configuration/scripts/machines/env.gaea_intel @@ -16,6 +16,9 @@ module load cray-netcdf module load PrgEnv-intel/6.0.5 module list +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME gaea diff --git a/configuration/scripts/machines/env.gaffney_gnu b/configuration/scripts/machines/env.gaffney_gnu index a63ee2ae4..dd889c5af 100755 --- a/configuration/scripts/machines/env.gaffney_gnu +++ b/configuration/scripts/machines/env.gaffney_gnu @@ -24,6 +24,7 @@ setenv MPI_DSM_DISTRIBUTE 0 setenv KMP_AFFINITY disabled limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 64M endif diff --git a/configuration/scripts/machines/env.gaffney_intel b/configuration/scripts/machines/env.gaffney_intel index 9fa11d16e..c7fd0f6b3 100755 --- a/configuration/scripts/machines/env.gaffney_intel +++ b/configuration/scripts/machines/env.gaffney_intel @@ -24,6 +24,7 @@ setenv MPI_DSM_DISTRIBUTE 0 setenv KMP_AFFINITY disabled limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 64M endif diff --git a/configuration/scripts/machines/env.gordon_intel b/configuration/scripts/machines/env.gordon_intel deleted file mode 100755 index 67aaa9c69..000000000 --- a/configuration/scripts/machines/env.gordon_intel +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/csh -f - -set inp = "undefined" -if ($#argv == 1) then - set inp = $1 -endif - -if ("$inp" != "-nomodules") then - -source /opt/modules/default/init/csh - -module unload PrgEnv-cray -module unload 
PrgEnv-gnu -module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-intel/5.2.40 - -module unload intel -module load intel/17.0.2.174 - -module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.3.2 - -module unload netcdf -module unload cray-netcdf -module unload cray-hdf5 -module unload cray-hdf5-parallel -module unload cray-netcdf-hdf5parallel -module unload cray-parallel-netcdf -module load cray-netcdf/4.3.2 -module load cray-hdf5/1.8.13 - -module unload cray-libsci - -module load craype-haswell - -setenv NETCDF_PATH ${NETCDF_DIR} -limit coredumpsize unlimited -limit stacksize unlimited - -endif - -setenv ICE_MACHINE_MACHNAME gordon -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME intel -setenv ICE_MACHINE_ENVINFO "ifort 17.0.2 20170213, mpich 7.3.2, netcdf 4.3.2" -setenv ICE_MACHINE_MAKE gmake -setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium -setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE -setenv ICE_MACHINE_SUBMIT "qsub " -setenv ICE_MACHINE_ACCT P00000000 -setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_MAXPES 8000 # maximum total pes (tasks * threads) available -setenv ICE_MACHINE_MAXRUNLENGTH 168 # maximum batch wall time limit in hours (integer) -setenv ICE_MACHINE_BLDTHRDS 4 -setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.gordon_pgi b/configuration/scripts/machines/env.gordon_pgi deleted file mode 100755 index 5885afb4b..000000000 --- a/configuration/scripts/machines/env.gordon_pgi +++ /dev/null @@ -1,57 +0,0 @@ -#!/bin/csh -f - -set inp = "undefined" -if ($#argv == 1) then - set inp = $1 -endif - -if ("$inp" != "-nomodules") then - -source /opt/modules/default/init/csh - -module unload PrgEnv-cray -module unload PrgEnv-gnu -module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-pgi/5.2.82 - -module unload 
pgi -module load pgi/16.10.0 - -module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 - -module unload netcdf -module unload cray-netcdf -module unload cray-hdf5 -module unload cray-hdf5-parallel -module unload cray-netcdf-hdf5parallel -module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell - -setenv NETCDF_PATH ${NETCDF_DIR} -limit coredumpsize unlimited -limit stacksize unlimited - -endif - -setenv ICE_MACHINE_MACHNAME gordon -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME pgi -setenv ICE_MACHINE_ENVINFO "pgf90 16.10-0, mpich 7.5.3, netcdf 4.4.1.1" -setenv ICE_MACHINE_MAKE gmake -setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium -setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE -setenv ICE_MACHINE_SUBMIT "qsub " -setenv ICE_MACHINE_ACCT ARLAP96070PET -setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 -setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.hera_intel b/configuration/scripts/machines/env.hera_intel index 7330c3937..a9cf59516 100755 --- a/configuration/scripts/machines/env.hera_intel +++ b/configuration/scripts/machines/env.hera_intel @@ -15,6 +15,9 @@ module load impi/2018.0.4 module load netcdf/4.7.0 #module list +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME hera diff --git a/configuration/scripts/machines/env.high_Sierra_gnu b/configuration/scripts/machines/env.high_Sierra_gnu index 3845a91aa..0bd31181b 100755 --- a/configuration/scripts/machines/env.high_Sierra_gnu +++ b/configuration/scripts/machines/env.high_Sierra_gnu @@ -1,5 +1,8 @@ #!/bin/csh -f +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME high_Sierra setenv 
ICE_MACHINE_ENVNAME gnu setenv ICE_MACHINE_MAKE make diff --git a/configuration/scripts/machines/env.hobart_intel b/configuration/scripts/machines/env.hobart_intel index 2ab7a3c53..0b6c5b12c 100755 --- a/configuration/scripts/machines/env.hobart_intel +++ b/configuration/scripts/machines/env.hobart_intel @@ -12,6 +12,9 @@ source /usr/share/Modules/init/csh module purge module load compiler/intel/18.0.3 +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME hobart diff --git a/configuration/scripts/machines/env.hobart_nag b/configuration/scripts/machines/env.hobart_nag index cae8c0fd8..6d22beca9 100755 --- a/configuration/scripts/machines/env.hobart_nag +++ b/configuration/scripts/machines/env.hobart_nag @@ -12,6 +12,9 @@ source /usr/share/Modules/init/csh module purge module load compiler/nag/6.2 +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME hobart diff --git a/configuration/scripts/machines/env.koehr_intel b/configuration/scripts/machines/env.koehr_intel index f4d7cada2..21f124b5f 100755 --- a/configuration/scripts/machines/env.koehr_intel +++ b/configuration/scripts/machines/env.koehr_intel @@ -25,6 +25,9 @@ setenv KMP_AFFINITY disabled limit coredumpsize unlimited limit stacksize unlimited +# May be needed for OpenMP memory +setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME koehr diff --git a/configuration/scripts/machines/env.millikan_intel b/configuration/scripts/machines/env.millikan_intel index 63913166d..c0a7356ad 100755 --- a/configuration/scripts/machines/env.millikan_intel +++ b/configuration/scripts/machines/env.millikan_intel @@ -6,6 +6,9 @@ source /fs/ssm/main/opt/intelcomp/intelcomp-2016.1.156/intelcomp_2016.1.156_mult source $ssmuse -d /fs/ssm/main/opt/openmpi/openmpi-1.6.5/intelcomp-2016.1.156 # openmpi source $ssmuse -d /fs/ssm/hpco/tmp/eccc/201402/04/intel-2016.1.150 # netcdf (and openmpi) +# May be needed for OpenMP memory +#setenv 
OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME millikan setenv ICE_MACHINE_ENVNAME intel setenv ICE_MACHINE_MAKE make diff --git a/configuration/scripts/machines/env.mustang_intel18 b/configuration/scripts/machines/env.mustang_intel18 index f420ec7ff..45e5b6518 100755 --- a/configuration/scripts/machines/env.mustang_intel18 +++ b/configuration/scripts/machines/env.mustang_intel18 @@ -21,7 +21,7 @@ module load netcdf-fortran/intel/4.4.2 setenv NETCDF_PATH /app/COST/netcdf-fortran/4.4.2/intel -#setenv OMP_STACKSIZE 256M +setenv OMP_STACKSIZE 64M #setenv MP_LABELIO yes #setenv MP_INFOLEVEL 2 #setenv MP_SHARED_MEMORY yes diff --git a/configuration/scripts/machines/env.mustang_intel19 b/configuration/scripts/machines/env.mustang_intel19 index 0fc0458fd..438bc1111 100755 --- a/configuration/scripts/machines/env.mustang_intel19 +++ b/configuration/scripts/machines/env.mustang_intel19 @@ -21,7 +21,7 @@ module load netcdf-fortran/intel/4.4.2 setenv NETCDF_PATH /app/COST/netcdf-fortran/4.4.2/intel -#setenv OMP_STACKSIZE 256M +setenv OMP_STACKSIZE 64M #setenv MP_LABELIO yes #setenv MP_INFOLEVEL 2 #setenv MP_SHARED_MEMORY yes diff --git a/configuration/scripts/machines/env.mustang_intel20 b/configuration/scripts/machines/env.mustang_intel20 index 00c4a250d..cca0b3019 100755 --- a/configuration/scripts/machines/env.mustang_intel20 +++ b/configuration/scripts/machines/env.mustang_intel20 @@ -21,7 +21,7 @@ module load netcdf-fortran/intel/4.4.2 setenv NETCDF_PATH /app/COST/netcdf-fortran/4.4.2/intel -#setenv OMP_STACKSIZE 256M +setenv OMP_STACKSIZE 64M #setenv MP_LABELIO yes #setenv MP_INFOLEVEL 2 #setenv MP_SHARED_MEMORY yes diff --git a/configuration/scripts/machines/env.narwhal_aocc b/configuration/scripts/machines/env.narwhal_aocc new file mode 100755 index 000000000..6d6822f46 --- /dev/null +++ b/configuration/scripts/machines/env.narwhal_aocc @@ -0,0 +1,54 @@ +#!/bin/csh -f + +set inp = "undefined" +if ($#argv == 1) then + set inp = $1 +endif + +if ("$inp" != "-nomodules") 
then + +source ${MODULESHOME}/init/csh + +module unload PrgEnv-aocc +module unload PrgEnv-cray +module unload PrgEnv-gnu +module unload PrgEnv-intel +module unload PrgEnv-nvidia +module load PrgEnv-aocc/8.1.0 +module load cray-pals/1.0.17 +module load bct-env/0.1 +module unload aocc +module load aocc/2.2.0.1 +module unload cray-mpich +module load cray-mpich/8.1.5 + +module unload cray-hdf5 +module unload cray-hdf5-parallel +module unload cray-netcdf-hdf5parallel +module unload cray-parallel-netcdf +module unload netcdf +module load cray-netcdf/4.7.4.4 +module load cray-hdf5/1.12.0.4 + +setenv NETCDF_PATH ${NETCDF_DIR} +limit coredumpsize unlimited +limit stacksize unlimited +setenv OMP_STACKSIZE 128M +setenv OMP_WAIT_POLICY PASSIVE + +endif + +setenv ICE_MACHINE_MACHNAME narwhal +setenv ICE_MACHINE_MACHINFO "Cray EX AMD EPYC 7H12" +setenv ICE_MACHINE_ENVNAME aocc +setenv ICE_MACHINE_ENVINFO "aocc_3.0.0-Build#78 2020_12_10 clang/flang 12.0.0, cray-mpich/8.1.9, netcdf/4.7.4.4" +setenv ICE_MACHINE_MAKE gmake +setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS +setenv ICE_MACHINE_INPUTDATA /p/work1/projects/RASM/cice_consortium +setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE +setenv ICE_MACHINE_SUBMIT "qsub " +setenv ICE_MACHINE_ACCT P00000000 +setenv ICE_MACHINE_QUEUE "debug" +setenv ICE_MACHINE_TPNODE 128 # tasks per node +setenv ICE_MACHINE_BLDTHRDS 12 +setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.conrad_cray b/configuration/scripts/machines/env.narwhal_cray similarity index 53% rename from configuration/scripts/machines/env.conrad_cray rename to configuration/scripts/machines/env.narwhal_cray index 62549a738..d0fcc9ba7 100755 --- a/configuration/scripts/machines/env.conrad_cray +++ b/configuration/scripts/machines/env.narwhal_cray @@ -7,51 +7,48 @@ endif if ("$inp" != "-nomodules") then -source /opt/modules/default/init/csh +source ${MODULESHOME}/init/csh +module unload PrgEnv-aocc module unload PrgEnv-cray module unload
PrgEnv-gnu module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-cray/5.2.82 - -module unload cce -module load cce/8.5.8 - +module unload PrgEnv-nvidia +module load PrgEnv-cray/8.1.0 +module load cray-pals/1.0.17 +module load bct-env/0.1 +module unload cce +module load cce/12.0.3 module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 +module load cray-mpich/8.1.9 -module unload netcdf -module unload cray-netcdf module unload cray-hdf5 module unload cray-hdf5-parallel module unload cray-netcdf-hdf5parallel module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell +module unload netcdf +module load cray-netcdf/4.7.4.4 +module load cray-hdf5/1.12.0.4 setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 128M +setenv OMP_WAIT_POLICY PASSIVE endif -setenv ICE_MACHINE_MACHNAME conrad -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" +setenv ICE_MACHINE_MACHNAME narwhal +setenv ICE_MACHINE_MACHINFO "Cray EX AMD EPYC 7H12" setenv ICE_MACHINE_ENVNAME cray -setenv ICE_MACHINE_ENVINFO "cce 8.5.8, mpich 7.5.3, netcdf 4.4.1.1" +setenv ICE_MACHINE_ENVINFO "cce 12.0.3, cray-mpich/8.1.9, netcdf/4.7.4.4" setenv ICE_MACHINE_MAKE gmake setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium +setenv ICE_MACHINE_INPUTDATA /p/work1/projects/RASM/cice_consortium setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE setenv ICE_MACHINE_SUBMIT "qsub " setenv ICE_MACHINE_ACCT P00000000 setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 +setenv ICE_MACHINE_TPNODE 128 # tasks per node +setenv ICE_MACHINE_BLDTHRDS 12 setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.gordon_gnu b/configuration/scripts/machines/env.narwhal_gnu similarity index 
51% rename from configuration/scripts/machines/env.gordon_gnu rename to configuration/scripts/machines/env.narwhal_gnu index d17923bd3..51a272f4e 100755 --- a/configuration/scripts/machines/env.gordon_gnu +++ b/configuration/scripts/machines/env.narwhal_gnu @@ -7,51 +7,48 @@ endif if ("$inp" != "-nomodules") then -source /opt/modules/default/init/csh +source ${MODULESHOME}/init/csh +module unload PrgEnv-aocc module unload PrgEnv-cray module unload PrgEnv-gnu module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-gnu/5.2.82 - +module unload PrgEnv-nvidia +module load PrgEnv-gnu/8.1.0 +module load cray-pals/1.0.17 +module load bct-env/0.1 module unload gcc -module load gcc/6.3.0 - +module load gcc/11.2.0 module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 +module load cray-mpich/8.1.9 -module unload netcdf -module unload cray-netcdf module unload cray-hdf5 module unload cray-hdf5-parallel module unload cray-netcdf-hdf5parallel module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell +module unload netcdf +module load cray-netcdf/4.7.4.4 +module load cray-hdf5/1.12.0.4 setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 128M +setenv OMP_WAIT_POLICY PASSIVE endif -setenv ICE_MACHINE_MACHNAME gordon -setenv ICE_MACHINE_MACHINFO "Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME gnu -setenv ICE_MACHINE_ENVINFO "GNU Fortran (GCC) 6.3.0 20161221, mpich 7.5.3, netcdf 4.4.1.1" +setenv ICE_MACHINE_MACHNAME narwhal +setenv ICE_MACHINE_MACHINFO "Cray EX AMD EPYC 7H12" +setenv ICE_MACHINE_ENVNAME gnu +setenv ICE_MACHINE_ENVINFO "gnu fortran/c 11.2.0, cray-mpich/8.1.9, netcdf/4.7.4.4" setenv ICE_MACHINE_MAKE gmake setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium +setenv ICE_MACHINE_INPUTDATA
/p/work1/projects/RASM/cice_consortium setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE setenv ICE_MACHINE_SUBMIT "qsub " setenv ICE_MACHINE_ACCT P00000000 setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 +setenv ICE_MACHINE_TPNODE 128 # tasks per node +setenv ICE_MACHINE_BLDTHRDS 12 setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.gordon_cray b/configuration/scripts/machines/env.narwhal_intel similarity index 50% rename from configuration/scripts/machines/env.gordon_cray rename to configuration/scripts/machines/env.narwhal_intel index d8c392d60..f79d962ff 100755 --- a/configuration/scripts/machines/env.gordon_cray +++ b/configuration/scripts/machines/env.narwhal_intel @@ -7,51 +7,48 @@ endif if ("$inp" != "-nomodules") then -source /opt/modules/default/init/csh +source ${MODULESHOME}/init/csh +module unload PrgEnv-aocc module unload PrgEnv-cray module unload PrgEnv-gnu module unload PrgEnv-intel -module unload PrgEnv-pgi -module load PrgEnv-cray/5.2.82 - -module unload cce -module load cce/8.5.8 - +module unload PrgEnv-nvidia +module load PrgEnv-intel/8.0.0 +module load cray-pals/1.0.17 +module load bct-env/0.1 +module unload intel +module load intel/2021.1 module unload cray-mpich -module unload cray-mpich2 -module load cray-mpich/7.5.3 +module load cray-mpich/8.1.9 -module unload netcdf -module unload cray-netcdf module unload cray-hdf5 module unload cray-hdf5-parallel module unload cray-netcdf-hdf5parallel module unload cray-parallel-netcdf -module load cray-netcdf/4.4.1.1 -module load cray-hdf5/1.10.0.1 - -module unload cray-libsci - -module load craype-haswell +module unload netcdf +module load cray-netcdf/4.7.4.4 +module load cray-hdf5/1.12.0.4 setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 128M +setenv OMP_WAIT_POLICY PASSIVE endif -setenv ICE_MACHINE_MACHNAME gordon -setenv ICE_MACHINE_MACHINFO 
"Cray XC40 Xeon E5-2698v3 Haswell" -setenv ICE_MACHINE_ENVNAME cray -setenv ICE_MACHINE_ENVINFO "cce 8.5.8, mpich 7.5.3, netcdf 4.4.1.1" +setenv ICE_MACHINE_MACHNAME narwhal +setenv ICE_MACHINE_MACHINFO "Cray EX AMD EPYC 7H12" +setenv ICE_MACHINE_ENVNAME intel +setenv ICE_MACHINE_ENVINFO "ifort 2021.1 Beta 20201112, cray-mpich/8.1.9, netcdf/4.7.4.4" setenv ICE_MACHINE_MAKE gmake setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS -setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium +setenv ICE_MACHINE_INPUTDATA /p/work1/projects/RASM/cice_consortium setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE setenv ICE_MACHINE_SUBMIT "qsub " setenv ICE_MACHINE_ACCT P00000000 setenv ICE_MACHINE_QUEUE "debug" -setenv ICE_MACHINE_TPNODE 32 # tasks per node -setenv ICE_MACHINE_BLDTHRDS 4 +setenv ICE_MACHINE_TPNODE 128 # tasks per node +setenv ICE_MACHINE_BLDTHRDS 12 setenv ICE_MACHINE_QSTAT "qstat " diff --git a/configuration/scripts/machines/env.onyx_cray b/configuration/scripts/machines/env.onyx_cray index 38785a27d..e696d1b98 100755 --- a/configuration/scripts/machines/env.onyx_cray +++ b/configuration/scripts/machines/env.onyx_cray @@ -39,6 +39,7 @@ module load craype-broadwell setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 64M endif diff --git a/configuration/scripts/machines/env.onyx_gnu b/configuration/scripts/machines/env.onyx_gnu index 699c01559..80ebb8e43 100755 --- a/configuration/scripts/machines/env.onyx_gnu +++ b/configuration/scripts/machines/env.onyx_gnu @@ -39,6 +39,7 @@ module load craype-broadwell setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 64M endif diff --git a/configuration/scripts/machines/env.onyx_intel b/configuration/scripts/machines/env.onyx_intel index 39f25e8e5..362454dd4 100755 --- a/configuration/scripts/machines/env.onyx_intel +++ b/configuration/scripts/machines/env.onyx_intel @@ -39,6 +39,7 @@ module load 
craype-broadwell setenv NETCDF_PATH ${NETCDF_DIR} limit coredumpsize unlimited limit stacksize unlimited +setenv OMP_STACKSIZE 64M endif diff --git a/configuration/scripts/machines/env.orion_intel b/configuration/scripts/machines/env.orion_intel index 95850b6bb..bdfccdd60 100755 --- a/configuration/scripts/machines/env.orion_intel +++ b/configuration/scripts/machines/env.orion_intel @@ -22,6 +22,9 @@ echo " module load netcdf/4.7.2" #module load netcdf/4.7.2 ##module list +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + endif setenv ICE_MACHINE_MACHNAME orion diff --git a/configuration/scripts/machines/env.phase3_intel b/configuration/scripts/machines/env.phase3_intel index af8dd3e5f..f5e3e4584 100755 --- a/configuration/scripts/machines/env.phase3_intel +++ b/configuration/scripts/machines/env.phase3_intel @@ -13,6 +13,9 @@ module load NetCDF/4.5.0 module load ESMF/7_1_0r module list +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME phase3 setenv ICE_MACHINE_ENVNAME intel setenv ICE_MACHINE_MAKE gmake diff --git a/configuration/scripts/machines/env.testmachine_intel b/configuration/scripts/machines/env.testmachine_intel index 5b52f1b07..b6f7c329e 100755 --- a/configuration/scripts/machines/env.testmachine_intel +++ b/configuration/scripts/machines/env.testmachine_intel @@ -1,5 +1,8 @@ #!/bin/csh -f +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME testmachine setenv ICE_MACHINE_MACHINFO "Undefined" setenv ICE_MACHINE_ENVNAME intel diff --git a/configuration/scripts/machines/env.travisCI_gnu b/configuration/scripts/machines/env.travisCI_gnu index b7a1b6176..aa3c1eec7 100755 --- a/configuration/scripts/machines/env.travisCI_gnu +++ b/configuration/scripts/machines/env.travisCI_gnu @@ -1,5 +1,8 @@ #!/bin/csh -f +# May be needed for OpenMP memory +#setenv OMP_STACKSIZE 64M + setenv ICE_MACHINE_MACHNAME travisCI setenv ICE_MACHINE_MACHINFO "Cloud Computing" setenv 
ICE_MACHINE_ENVNAME gnu diff --git a/configuration/scripts/options/set_env.cmplog b/configuration/scripts/options/set_env.cmplog new file mode 100644 index 000000000..b59c1cb6d --- /dev/null +++ b/configuration/scripts/options/set_env.cmplog @@ -0,0 +1 @@ +setenv ICE_BFBTYPE log diff --git a/configuration/scripts/options/set_env.cmplogrest b/configuration/scripts/options/set_env.cmplogrest new file mode 100644 index 000000000..118986199 --- /dev/null +++ b/configuration/scripts/options/set_env.cmplogrest @@ -0,0 +1 @@ +setenv ICE_BFBTYPE logrest diff --git a/configuration/scripts/options/set_env.cmprest b/configuration/scripts/options/set_env.cmprest new file mode 100644 index 000000000..7258fa058 --- /dev/null +++ b/configuration/scripts/options/set_env.cmprest @@ -0,0 +1 @@ +setenv ICE_BFBTYPE restart diff --git a/configuration/scripts/options/set_env.ompschedd1 b/configuration/scripts/options/set_env.ompschedd1 new file mode 100644 index 000000000..a4d255f48 --- /dev/null +++ b/configuration/scripts/options/set_env.ompschedd1 @@ -0,0 +1 @@ +setenv ICE_OMPSCHED "dynamic,1" diff --git a/configuration/scripts/options/set_env.ompscheds b/configuration/scripts/options/set_env.ompscheds new file mode 100644 index 000000000..b9a4f58b0 --- /dev/null +++ b/configuration/scripts/options/set_env.ompscheds @@ -0,0 +1 @@ +setenv ICE_OMPSCHED "static" diff --git a/configuration/scripts/options/set_env.ompscheds1 b/configuration/scripts/options/set_env.ompscheds1 new file mode 100644 index 000000000..a9ca4874d --- /dev/null +++ b/configuration/scripts/options/set_env.ompscheds1 @@ -0,0 +1 @@ +setenv ICE_OMPSCHED "static,1" diff --git a/configuration/scripts/options/set_env.qcchk b/configuration/scripts/options/set_env.qcchk new file mode 100644 index 000000000..9b9fbbd2e --- /dev/null +++ b/configuration/scripts/options/set_env.qcchk @@ -0,0 +1 @@ +setenv ICE_BFBTYPE qcchk diff --git a/configuration/scripts/options/set_env.qcchkf b/configuration/scripts/options/set_env.qcchkf 
new file mode 100644 index 000000000..589e60772 --- /dev/null +++ b/configuration/scripts/options/set_env.qcchkf @@ -0,0 +1 @@ +setenv ICE_BFBTYPE qcchkf diff --git a/configuration/scripts/options/set_nml.dt3456s b/configuration/scripts/options/set_nml.dt3456s new file mode 100644 index 000000000..74e5482d7 --- /dev/null +++ b/configuration/scripts/options/set_nml.dt3456s @@ -0,0 +1 @@ +dt = 3456.0 diff --git a/configuration/scripts/options/set_nml.dynanderson b/configuration/scripts/options/set_nml.dynanderson index 566c53a09..2e8e13659 100644 --- a/configuration/scripts/options/set_nml.dynanderson +++ b/configuration/scripts/options/set_nml.dynanderson @@ -1,3 +1,5 @@ kdyn = 3 algo_nonlin = 'anderson' use_mean_vrel = .false. +capping = 1. + diff --git a/configuration/scripts/options/set_nml.dynpicard b/configuration/scripts/options/set_nml.dynpicard index b81f4d4e6..05efb3526 100644 --- a/configuration/scripts/options/set_nml.dynpicard +++ b/configuration/scripts/options/set_nml.dynpicard @@ -1,3 +1,4 @@ kdyn = 3 algo_nonlin = 'picard' use_mean_vrel = .true. +capping = 1. diff --git a/configuration/scripts/options/set_nml.qcnonbfb b/configuration/scripts/options/set_nml.qcnonbfb deleted file mode 100644 index a965b863c..000000000 --- a/configuration/scripts/options/set_nml.qcnonbfb +++ /dev/null @@ -1,16 +0,0 @@ -dt = 3456.0 -npt_unit = 'y' -npt = 5 -year_init = 2005 -month_init = 1 -day_init = 1 -sec_init = 0 -use_leap_years = .false. -fyear_init = 2005 -ycycle = 1 -dumpfreq = 'm' -dumpfreq_n = 12 -diagfreq = 24 -histfreq = 'd','x','x','x','x' -f_hi = 'd' -hist_avg = .false. diff --git a/configuration/scripts/options/set_nml.timerstats b/configuration/scripts/options/set_nml.timerstats new file mode 100644 index 000000000..723891b7b --- /dev/null +++ b/configuration/scripts/options/set_nml.timerstats @@ -0,0 +1 @@ +timer_stats = .true. 
diff --git a/configuration/scripts/tests/baseline.script b/configuration/scripts/tests/baseline.script index ac69d49a0..bb8f50a1f 100644 --- a/configuration/scripts/tests/baseline.script +++ b/configuration/scripts/tests/baseline.script @@ -36,7 +36,7 @@ if (${ICE_BASECOM} != ${ICE_SPVAL}) then ${ICE_CASEDIR}/casescripts/comparelog.csh ${base_file} ${test_file} notcicefile set bfbstatus = $status - else if (${ICE_TEST} =~ qcchk*) then + else if (${ICE_BFBTYPE} =~ qcchk*) then set test_dir = ${ICE_RUNDIR} set base_dir = ${ICE_BASELINE}/${ICE_BASECOM}/${ICE_TESTNAME} ${ICE_SANDBOX}/configuration/scripts/tests/QC/cice.t-test.py ${base_dir} ${test_dir} @@ -151,7 +151,7 @@ if (${ICE_BFBCOMP} != ${ICE_SPVAL}) then endif endif - if (${ICE_TEST} == "logbfb") then + if (${ICE_BFBTYPE} == "log") then set test_file = `ls -1t ${ICE_RUNDIR}/cice.runlog* | head -1` set base_file = `ls -1t ${ICE_RUNDIR}/../${ICE_BFBCOMP}.${ICE_TESTID}/cice.runlog* | head -1` @@ -163,21 +163,61 @@ if (${ICE_BFBCOMP} != ${ICE_SPVAL}) then ${ICE_CASEDIR}/casescripts/comparelog.csh ${base_file} ${test_file} set bfbstatus = $status - else if (${ICE_TEST} =~ qcchk*) then + else if (${ICE_BFBTYPE} == "logrest") then + set test_file = `ls -1t ${ICE_RUNDIR}/cice.runlog* | head -1` + set base_file = `ls -1t ${ICE_RUNDIR}/../${ICE_BFBCOMP}.${ICE_TESTID}/cice.runlog* | head -1` + + echo "" + echo "bfb Log Compare Mode:" + echo "base_file: ${base_file}" + echo "test_file: ${test_file}" + + ${ICE_CASEDIR}/casescripts/comparelog.csh ${base_file} ${test_file} + set bfbstatusl = $status + + set test_dir = ${ICE_RUNDIR}/restart + set base_dir = ${ICE_RUNDIR}/../${ICE_BFBCOMP}.${ICE_TESTID}/restart + + echo "" + echo "bfb Restart Compare Mode:" + echo "base_dir: ${base_dir}" + echo "test_dir: ${test_dir}" + + ${ICE_CASEDIR}/casescripts/comparebfb.csh ${base_dir} ${test_dir} + set bfbstatusr = $status + + if ({$bfbstatusl} == ${bfbstatusr}) then + set bfbstatus = ${bfbstatusl} + else if (${bfbstatusl} == 1 || 
${bfbstatusr} == 1) then + set bfbstatus = 1 + else if ({$bfbstatusl} > ${bfbstatusr}) then + set bfbstatus = ${bfbstatusl} + else + set bfbstatus = ${bfbstatusr} + endif + + echo "bfb log, rest, combined status = ${bfbstatusl},${bfbstatusr},${bfbstatus}" + + else if (${ICE_BFBTYPE} =~ qcchk*) then set test_dir = ${ICE_RUNDIR} set base_dir = ${ICE_RUNDIR}/../${ICE_BFBCOMP}.${ICE_TESTID} + echo "" + echo "qcchk Compare Mode:" + echo "base_dir: ${base_dir}" + echo "test_dir: ${test_dir}" ${ICE_SANDBOX}/configuration/scripts/tests/QC/cice.t-test.py ${base_dir} ${test_dir} set bfbstatus = $status # expecting failure, so switch value - if (${ICE_TEST} =~ qcchkf*) then + if (${ICE_BFBTYPE} == "qcchkf") then @ bfbstatus = 1 - $bfbstatus endif + else set test_dir = ${ICE_RUNDIR}/restart set base_dir = ${ICE_RUNDIR}/../${ICE_BFBCOMP}.${ICE_TESTID}/restart echo "" - echo "bfb Compare Mode:" + echo "bfb Restart Compare Mode:" echo "base_dir: ${base_dir}" echo "test_dir: ${test_dir}" @@ -190,10 +230,10 @@ if (${ICE_BFBCOMP} != ${ICE_SPVAL}) then rm -f ${ICE_CASEDIR}/test_output.prev if (${bfbstatus} == 0) then echo "PASS ${ICE_TESTNAME} bfbcomp ${ICE_BFBCOMP}" >> ${ICE_CASEDIR}/test_output - echo "bfb baseline and test dataset are identical" + echo "bfbcomp baseline and test dataset pass" else if (${bfbstatus} == 1) then echo "FAIL ${ICE_TESTNAME} bfbcomp ${ICE_BFBCOMP} different-data" >> ${ICE_CASEDIR}/test_output - echo "bfbcomp and test dataset are different" + echo "bfbcomp baseline and test dataset fail" else if (${bfbstatus} == 2) then echo "MISS ${ICE_TESTNAME} bfbcomp ${ICE_BFBCOMP} missing-data" >> ${ICE_CASEDIR}/test_output echo "Missing data" diff --git a/configuration/scripts/tests/first_suite.ts b/configuration/scripts/tests/first_suite.ts index 31eba9fb7..b42d917ea 100644 --- a/configuration/scripts/tests/first_suite.ts +++ b/configuration/scripts/tests/first_suite.ts @@ -2,5 +2,5 @@ smoke gx3 8x2 diag1,run5day restart gx3 4x2x25x29x4 dslenderX2 smoke gx3 
4x2x25x29x4 debug,run2day,dslenderX2 -logbfb gx3 4x2x25x29x4 dslenderX2,diag1,reprosum +smoke gx3 4x2x25x29x4 dslenderX2,diag1,reprosum,cmplog smoke gx3 1x2 run2day diff --git a/configuration/scripts/tests/nothread_suite.ts b/configuration/scripts/tests/nothread_suite.ts index 616741aa2..12fd03662 100644 --- a/configuration/scripts/tests/nothread_suite.ts +++ b/configuration/scripts/tests/nothread_suite.ts @@ -1,7 +1,7 @@ # Test Grid PEs Sets BFB-compare restart gx3 8x1x25x29x2 dslenderX2 -logbfb gx3 8x1x25x29x2 dslenderX2,diag1,reprosum +smoke gx3 8x1x25x29x2 dslenderX2,diag1,reprosum smoke gx3 16x1 diag1,run5day smoke gx3 1x1 debug,diag1,run2day @@ -70,9 +70,9 @@ restart gx3 32x1x5x10x12 drakeX2 restart_gx3_8x1x25x29x2_ restart gx3 16x1x8x10x10 droundrobin,maskhalo restart_gx3_8x1x25x29x2_dslenderX2 restart gx3 4x1x25x29x4 droundrobin restart_gx3_8x1x25x29x2_dslenderX2 -logbfb gx3 1x1x50x58x4 droundrobin,diag1,maskhalo,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum -logbfb gx3 4x1x25x116x1 dslenderX1,diag1,maskhalo,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum -logbfb gx3 20x1x5x29x20 dsectrobin,diag1,short,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum -logbfb gx3 16x1x8x10x10 droundrobin,diag1,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum -logbfb gx3 6x1x50x58x1 droundrobin,diag1,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum -logbfb gx3 12x1x4x29x9 dspacecurve,diag1,maskhalo,reprosum logbfb_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 1x1x50x58x4 droundrobin,diag1,maskhalo,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 4x1x25x116x1 dslenderX1,diag1,maskhalo,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 20x1x5x29x20 dsectrobin,diag1,short,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 16x1x8x10x10 droundrobin,diag1,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 6x1x50x58x1 
droundrobin,diag1,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum +smoke gx3 12x1x4x29x9 dspacecurve,diag1,maskhalo,reprosum,cmplog smoke_gx3_8x1x25x29x2_diag1_dslenderX2_reprosum diff --git a/configuration/scripts/tests/omp_suite.ts b/configuration/scripts/tests/omp_suite.ts new file mode 100644 index 000000000..8ac499c2f --- /dev/null +++ b/configuration/scripts/tests/omp_suite.ts @@ -0,0 +1,141 @@ +# Test Grid PEs Sets BFB-compare + +smoke gx3 8x4 diag1,reprosum,run10day +smoke gx3 6x2 alt01,reprosum,run10day +smoke gx3 8x2 alt02,reprosum,run10day +smoke gx3 12x2 alt03,droundrobin,reprosum,run10day +smoke gx3 4x4 alt04,reprosum,run10day +smoke gx3 4x4 alt05,reprosum,run10day +smoke gx3 8x2 alt06,reprosum,run10day +smoke gx3 8x2 bgcz,reprosum,run10day +smoke gx1 15x2 seabedprob,reprosum,run10day +smoke gx3 14x2 fsd12,reprosum,run10day +smoke gx3 11x2 isotope,reprosum,run10day +smoke gx3 8x4 snwitdrdg,snwgrain,icdefault,reprosum,run10day +smoke gx3 6x4 dynpicard,reprosum,run10day +smoke gx3 8x3 zsal,reprosum,run10day + +smoke gbox128 8x2 reprosum,run10day +smoke gbox128 12x2 boxnodyn,reprosum,run10day +smoke gbox128 9x2 boxadv,reprosum,run10day +smoke gbox128 14x2 boxrestore,reprosum,run10day +smoke gbox80 4x5 box2001,reprosum,run10day +smoke gbox80 11x3 boxslotcyl,reprosum,run10day + +smoke gx3 4x2 diag1,reprosum,run10day,cmplogrest smoke_gx3_8x4_diag1_reprosum_run10day +smoke gx3 4x1 diag1,reprosum,run10day,cmplogrest,thread smoke_gx3_8x4_diag1_reprosum_run10day +smoke gx3 8x1 alt01,reprosum,run10day,cmplogrest,thread smoke_gx3_6x2_alt01_reprosum_run10day +smoke gx3 16x1 alt02,reprosum,run10day,cmplogrest,thread smoke_gx3_8x2_alt02_reprosum_run10day +smoke gx3 24x1 alt03,reprosum,run10day,cmplogrest,thread smoke_gx3_12x2_alt03_droundrobin_reprosum_run10day +smoke gx3 24x1 alt04,reprosum,run10day,cmplogrest,thread smoke_gx3_4x4_alt04_reprosum_run10day +smoke gx3 14x1 alt05,reprosum,run10day,cmplogrest,thread smoke_gx3_4x4_alt05_reprosum_run10day 
+smoke gx3 24x1 alt06,reprosum,run10day,cmplogrest,thread smoke_gx3_8x2_alt06_reprosum_run10day +smoke gx3 12x1 bgcz,reprosum,run10day,cmplogrest,thread smoke_gx3_8x2_bgcz_reprosum_run10day +smoke gx1 28x1 seabedprob,reprosum,run10day,cmplogrest,thread smoke_gx1_15x2_reprosum_run10day_seabedprob +smoke gx3 30x1 fsd12,reprosum,run10day,cmplogrest,thread smoke_gx3_14x2_fsd12_reprosum_run10day +smoke gx3 16x1 isotope,reprosum,run10day,cmplogrest,thread smoke_gx3_11x2_isotope_reprosum_run10day +smoke gx3 28x1 snwitdrdg,snwgrain,icdefault,reprosum,run10day,cmplogrest,thread smoke_gx3_8x4_icdefault_reprosum_run10day_snwitdrdg_snwgrain +smoke gx3 18x1 dynpicard,reprosum,run10day,cmplogrest,thread smoke_gx3_6x4_dynpicard_reprosum_run10day +smoke gx3 20x1 zsal,reprosum,run10day,cmplogrest,thread smoke_gx3_8x3_reprosum_run10day_zsal + +smoke gbox128 20x1 reprosum,run10day,cmplogrest,thread smoke_gbox128_8x2_reprosum_run10day +smoke gbox128 16x1 boxnodyn,reprosum,run10day,cmplogrest,thread smoke_gbox128_12x2_boxnodyn_reprosum_run10day +smoke gbox128 14x1 boxadv,reprosum,run10day,cmplogrest,thread smoke_gbox128_9x2_boxadv_reprosum_run10day +smoke gbox128 24x1 boxrestore,reprosum,run10day,cmplogrest,thread smoke_gbox128_14x2_boxrestore_reprosum_run10day +smoke gbox80 19x1 box2001,reprosum,run10day,cmplogrest,thread smoke_gbox80_4x5_box2001_reprosum_run10day +smoke gbox80 8x4 boxslotcyl,reprosum,run10day,cmplogrest,thread smoke_gbox80_11x3_boxslotcyl_reprosum_run10day + +#gridC + +smoke gx3 8x4 diag1,reprosum,run10day,gridc +smoke gx3 6x2 alt01,reprosum,run10day,gridc +smoke gx3 8x2 alt02,reprosum,run10day,gridc +smoke gx3 12x2 alt03,droundrobin,reprosum,run10day,gridc +smoke gx3 4x4 alt04,reprosum,run10day,gridc +smoke gx3 4x4 alt05,reprosum,run10day,gridc +smoke gx3 8x2 alt06,reprosum,run10day,gridc +smoke gx3 8x2 bgcz,reprosum,run10day,gridc +smoke gx1 15x2 seabedprob,reprosum,run10day,gridc +smoke gx3 14x2 fsd12,reprosum,run10day,gridc +smoke gx3 11x2 
isotope,reprosum,run10day,gridc +smoke gx3 8x4 snwitdrdg,snwgrain,icdefault,reprosum,run10day,gridc +smoke gx3 6x4 dynpicard,reprosum,run10day,gridc +smoke gx3 8x3 zsal,reprosum,run10day,gridc + +smoke gbox128 8x2 reprosum,run10day,gridc +smoke gbox128 12x2 boxnodyn,reprosum,run10day,gridc +smoke gbox128 9x2 boxadv,reprosum,run10day,gridc +smoke gbox128 14x2 boxrestore,reprosum,run10day,gridc +smoke gbox80 4x5 box2001,reprosum,run10day,gridc +smoke gbox80 11x3 boxslotcyl,reprosum,run10day,gridc + +smoke gx3 4x2 diag1,reprosum,run10day,cmplogrest,gridc smoke_gx3_8x4_gridc_diag1_reprosum_run10day +smoke gx3 4x1 diag1,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x4_gridc_diag1_reprosum_run10day +smoke gx3 8x1 alt01,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_6x2_alt01_gridc_reprosum_run10day +smoke gx3 16x1 alt02,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x2_alt02_gridc_reprosum_run10day +smoke gx3 24x1 alt03,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_12x2_alt03_droundrobin_gridc_reprosum_run10day +smoke gx3 24x1 alt04,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_4x4_alt04_gridc_reprosum_run10day +smoke gx3 14x1 alt05,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_4x4_alt05_gridc_reprosum_run10day +smoke gx3 24x1 alt06,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x2_alt06_gridc_reprosum_run10day +smoke gx3 12x1 bgcz,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x2_bgcz_gridc_reprosum_run10day +smoke gx1 28x1 seabedprob,reprosum,run10day,cmplogrest,thread,gridc smoke_gx1_15x2_gridc_reprosum_run10day_seabedprob +smoke gx3 30x1 fsd12,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_14x2_fsd12_gridc_reprosum_run10day +smoke gx3 16x1 isotope,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_11x2_gridc_isotope_reprosum_run10day +smoke gx3 28x1 snwitdrdg,snwgrain,icdefault,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x4_gridc_icdefault_reprosum_run10day_snwitdrdg_snwgrain +smoke gx3 18x1 
dynpicard,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_6x4_dynpicard_gridc_reprosum_run10day +smoke gx3 20x1 zsal,reprosum,run10day,cmplogrest,thread,gridc smoke_gx3_8x3_gridc_reprosum_run10day_zsal + +smoke gbox128 20x1 reprosum,run10day,cmplogrest,thread,gridc smoke_gbox128_8x2_gridc_reprosum_run10day +smoke gbox128 16x1 boxnodyn,reprosum,run10day,cmplogrest,thread,gridc smoke_gbox128_12x2_boxnodyn_gridc_reprosum_run10day +smoke gbox128 14x1 boxadv,reprosum,run10day,cmplogrest,thread,gridc smoke_gbox128_9x2_boxadv_gridc_reprosum_run10day +smoke gbox128 24x1 boxrestore,reprosum,run10day,cmplogrest,thread,gridc smoke_gbox128_14x2_boxrestore_gridc_reprosum_run10day +smoke gbox80 19x1 box2001,reprosum,run10day,cmplogrest,thread,gridc smoke_gbox80_4x5_box2001_gridc_reprosum_run10day +smoke gbox80 8x4 boxslotcyl,reprosum,run10day,cmplogrest,thread,gridc smoke_gbox80_11x3_boxslotcyl_gridc_reprosum_run10day + +#gridCD + +smoke gx3 8x4 diag1,reprosum,run10day,gridcd +smoke gx3 6x2 alt01,reprosum,run10day,gridcd +smoke gx3 8x2 alt02,reprosum,run10day,gridcd +smoke gx3 12x2 alt03,droundrobin,reprosum,run10day,gridcd +smoke gx3 4x4 alt04,reprosum,run10day,gridcd +smoke gx3 4x4 alt05,reprosum,run10day,gridcd +smoke gx3 8x2 alt06,reprosum,run10day,gridcd +smoke gx3 8x2 bgcz,reprosum,run10day,gridcd +smoke gx1 15x2 seabedprob,reprosum,run10day,gridcd +smoke gx3 14x2 fsd12,reprosum,run10day,gridcd +smoke gx3 11x2 isotope,reprosum,run10day,gridcd +smoke gx3 8x4 snwitdrdg,snwgrain,icdefault,reprosum,run10day,gridcd +smoke gx3 6x4 dynpicard,reprosum,run10day,gridcd +smoke gx3 8x3 zsal,reprosum,run10day,gridcd + +smoke gbox128 8x2 reprosum,run10day,gridcd +smoke gbox128 12x2 boxnodyn,reprosum,run10day,gridcd +smoke gbox128 9x2 boxadv,reprosum,run10day,gridcd +smoke gbox128 14x2 boxrestore,reprosum,run10day,gridcd +smoke gbox80 4x5 box2001,reprosum,run10day,gridcd +smoke gbox80 11x3 boxslotcyl,reprosum,run10day,gridcd + +smoke gx3 4x2 diag1,reprosum,run10day,cmplogrest,gridcd 
smoke_gx3_8x4_gridcd_diag1_reprosum_run10day +smoke gx3 4x1 diag1,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x4_gridcd_diag1_reprosum_run10day +smoke gx3 8x1 alt01,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_6x2_alt01_gridcd_reprosum_run10day +smoke gx3 16x1 alt02,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x2_alt02_gridcd_reprosum_run10day +smoke gx3 24x1 alt03,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_12x2_alt03_droundrobin_gridcd_reprosum_run10day +smoke gx3 24x1 alt04,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_4x4_alt04_gridcd_reprosum_run10day +smoke gx3 14x1 alt05,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_4x4_alt05_gridcd_reprosum_run10day +smoke gx3 24x1 alt06,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x2_alt06_gridcd_reprosum_run10day +smoke gx3 12x1 bgcz,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x2_bgcz_gridcd_reprosum_run10day +smoke gx1 28x1 seabedprob,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx1_15x2_gridcd_reprosum_run10day_seabedprob +smoke gx3 30x1 fsd12,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_14x2_fsd12_gridcd_reprosum_run10day +smoke gx3 16x1 isotope,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_11x2_gridcd_isotope_reprosum_run10day +smoke gx3 28x1 snwitdrdg,snwgrain,icdefault,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x4_gridcd_icdefault_reprosum_run10day_snwitdrdg_snwgrain +smoke gx3 18x1 dynpicard,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_6x4_dynpicard_gridcd_reprosum_run10day +smoke gx3 20x1 zsal,reprosum,run10day,cmplogrest,thread,gridcd smoke_gx3_8x3_gridcd_reprosum_run10day_zsal + +smoke gbox128 20x1 reprosum,run10day,cmplogrest,thread,gridcd smoke_gbox128_8x2_gridcd_reprosum_run10day +smoke gbox128 16x1 boxnodyn,reprosum,run10day,cmplogrest,thread,gridcd smoke_gbox128_12x2_boxnodyn_gridcd_reprosum_run10day +smoke gbox128 14x1 boxadv,reprosum,run10day,cmplogrest,thread,gridcd 
smoke_gbox128_9x2_boxadv_gridcd_reprosum_run10day +smoke gbox128 24x1 boxrestore,reprosum,run10day,cmplogrest,thread,gridcd smoke_gbox128_14x2_boxrestore_gridcd_reprosum_run10day +smoke gbox80 19x1 box2001,reprosum,run10day,cmplogrest,thread,gridcd smoke_gbox80_4x5_box2001_gridcd_reprosum_run10day +smoke gbox80 8x4 boxslotcyl,reprosum,run10day,cmplogrest,thread,gridcd smoke_gbox80_11x3_boxslotcyl_gridcd_reprosum_run10day + diff --git a/configuration/scripts/tests/perf_suite.ts b/configuration/scripts/tests/perf_suite.ts new file mode 100644 index 000000000..9a17d8a55 --- /dev/null +++ b/configuration/scripts/tests/perf_suite.ts @@ -0,0 +1,30 @@ +# Test Grid PEs Sets BFB-compare +smoke gx1 1x1x320x384x1 run2day,droundrobin +smoke gx1 64x1x16x16x8 run2day,droundrobin,thread +sleep 180 +# +smoke gx1 1x1x320x384x1 run2day,droundrobin +smoke gx1 1x1x160x192x4 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 1x1x80x96x16 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 1x1x40x48x64 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 1x1x20x24x256 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +# +smoke gx1 1x1x16x16x480 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 2x1x16x16x240 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 4x1x16x16x120 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 8x1x16x16x60 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 16x1x16x16x30 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 32x1x16x16x15 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 64x1x16x16x8 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +smoke gx1 128x1x16x16x4 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day +# +smoke gx1 64x1x16x16x8 run2day,droundrobin smoke_gx1_1x1x320x384x1_droundrobin_run2day 
+smoke gx1 64x1x16x16x8 run2day,droundrobin,thread +smoke gx1 32x2x16x16x16 run2day,droundrobin smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 16x4x16x16x32 run2day,droundrobin smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 8x8x16x16x64 run2day,droundrobin smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 4x16x16x16x128 run2day,droundrobin smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 32x2x16x16x16 run2day,droundrobin,ompscheds smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 32x2x16x16x16 run2day,droundrobin,ompschedd1 smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +smoke gx1 32x2x16x16x16 run2day,droundrobin,ompscheds1 smoke_gx1_64x1x16x16x8_droundrobin_run2day_thread +# diff --git a/configuration/scripts/tests/prod_suite.ts b/configuration/scripts/tests/prod_suite.ts index 04982adb1..877fa1ce6 100644 --- a/configuration/scripts/tests/prod_suite.ts +++ b/configuration/scripts/tests/prod_suite.ts @@ -1,6 +1,6 @@ # Test Grid PEs Sets BFB-compare -qcchk gx3 72x1 qc,medium qcchk_gx3_72x1_medium_qc -qcchk gx1 144x1 qc,medium +qcchk gx3 72x1 qc,qcchk,medium qcchk_gx3_72x1_medium_qc_qcchk +qcchk gx1 144x1 qc,qcchk,medium smoke gx1 144x2 gx1prod,long,run10year -qcchkf gx3 72x1 qc,medium,alt02 qcchk_gx3_72x1_medium_qc -qcchk gx3 72x1 qcnonbfb,medium qcchk_gx3_72x1_medium_qc +qcchk gx3 72x1 qc,qcchkf,medium,alt02 qcchk_gx3_72x1_medium_qc_qcchk +qcchk gx3 72x1 qc,qcchk,dt3456s,medium qcchk_gx3_72x1_medium_qc_qcchk diff --git a/configuration/scripts/tests/reprosum_suite.ts b/configuration/scripts/tests/reprosum_suite.ts index a7f3fe5bc..417a7de2e 100644 --- a/configuration/scripts/tests/reprosum_suite.ts +++ b/configuration/scripts/tests/reprosum_suite.ts @@ -1,11 +1,11 @@ # Test Grid PEs Sets BFB-compare -logbfb gx3 4x2x25x29x4 dslenderX2,diag1,reprosum -#logbfb gx3 4x2x25x29x4 dslenderX2,diag1 -logbfb gx3 1x1x50x58x4 droundrobin,diag1,thread,maskhalo,reprosum 
logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 4x1x25x116x1 dslenderX1,diag1,thread,maskhalo,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 1x20x5x29x80 dsectrobin,diag1,short,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 8x2x8x10x20 droundrobin,diag1,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 6x2x50x58x1 droundrobin,diag1,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 6x2x4x29x18 dspacecurve,diag1,maskhalo,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -logbfb gx3 17x2x1x1x800 droundrobin,diag1,maskhalo,reprosum logbfb_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum -#logbfb gx3 8x2x8x10x20 droundrobin,diag1 logbfb_gx3_4x2x25x29x4_diag1_dslenderX2 +smoke gx3 4x2x25x29x4 dslenderX2,diag1,reprosum +#smoke gx3 4x2x25x29x4 dslenderX2,diag1 +smoke gx3 1x1x50x58x4 droundrobin,diag1,thread,maskhalo,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 4x1x25x116x1 dslenderX1,diag1,thread,maskhalo,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 1x20x5x29x80 dsectrobin,diag1,short,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 8x2x8x10x20 droundrobin,diag1,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 6x2x50x58x1 droundrobin,diag1,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 6x2x4x29x18 dspacecurve,diag1,maskhalo,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +smoke gx3 17x2x1x1x800 droundrobin,diag1,maskhalo,reprosum,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2_reprosum +#smoke gx3 8x2x8x10x20 droundrobin,diag1,cmplog smoke_gx3_4x2x25x29x4_diag1_dslenderX2 diff --git a/configuration/scripts/tests/test_logbfb.script b/configuration/scripts/tests/test_logbfb.script deleted file mode 100644 index 0ac1ed224..000000000 --- a/configuration/scripts/tests/test_logbfb.script +++ /dev/null @@ -1,33 +0,0 @@ -# This is 
identical to a smoke test, but triggers bfbcompare with log files instead of restarts -#---------------------------------------------------- -# Run the CICE model -# cice.run returns -1 if run did not complete successfully - -./cice.run -set res="$status" - -set log_file = `ls -t1 ${ICE_RUNDIR}/cice.runlog* | head -1` -set ttimeloop = `grep TimeLoop ${log_file} | grep Timer | cut -c 22-32` -set tdynamics = `grep Dynamics ${log_file} | grep Timer | cut -c 22-32` -set tcolumn = `grep Column ${log_file} | grep Timer | cut -c 22-32` -if (${ttimeloop} == "") set ttimeloop = -1 -if (${tdynamics} == "") set tdynamics = -1 -if (${tcolumn} == "") set tcolumn = -1 - -mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output -mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test" >! ${ICE_CASEDIR}/test_output -rm -f ${ICE_CASEDIR}/test_output.prev - -set grade = PASS -if ( $res != 0 ) then - set grade = FAIL - echo "$grade ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output - echo "$grade ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output - exit 99 -endif - -echo "$grade ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output -echo "$grade ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output - diff --git a/configuration/scripts/tests/test_qcchkf.script b/configuration/scripts/tests/test_qcchkf.script deleted file mode 100644 index 81b5f05fc..000000000 --- a/configuration/scripts/tests/test_qcchkf.script +++ /dev/null @@ -1,36 +0,0 @@ - -cp ${ICE_SANDBOX}/configuration/scripts/tests/QC/CICE_t_critical_p0.8.nc . -cp ${ICE_SANDBOX}/configuration/scripts/tests/QC/CICE_Lookup_Table_p0.8_n1825.nc . 
- -#---------------------------------------------------- -# Run the CICE model -# cice.run returns -1 if run did not complete successfully - -./cice.run -set res="$status" - -set log_file = `ls -t1 ${ICE_RUNDIR}/cice.runlog* | head -1` -set ttimeloop = `grep TimeLoop ${log_file} | grep Timer | cut -c 22-32` -set tdynamics = `grep Dynamics ${log_file} | grep Timer | cut -c 22-32` -set tcolumn = `grep Column ${log_file} | grep Timer | cut -c 22-32` -if (${ttimeloop} == "") set ttimeloop = -1 -if (${tdynamics} == "") set tdynamics = -1 -if (${tcolumn} == "") set tcolumn = -1 - -mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output -mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test" >! ${ICE_CASEDIR}/test_output -rm -f ${ICE_CASEDIR}/test_output.prev - -set grade = PASS -if ( $res != 0 ) then - set grade = FAIL - echo "$grade ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output - echo "$grade ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output - exit 99 -endif - -echo "$grade ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output -echo "$grade ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output - diff --git a/doc/source/cice_index.rst b/doc/source/cice_index.rst index 38f38b6b1..8ec9c8f4a 100644 --- a/doc/source/cice_index.rst +++ b/doc/source/cice_index.rst @@ -675,6 +675,7 @@ either Celsius or Kelvin units). "Tffresh", "freezing temp of fresh ice", "273.15 K" "tfrz_option", "form of ocean freezing temperature", "" "thinS", "minimum ice thickness for brine tracer", "" + "timer_stats", "logical to turn on extra timer statistics", ".false." 
"timesecs", "total elapsed time in seconds", "s" "time_beg", "beginning time for history averages", "" "time_bounds", "beginning and ending time for history averages", "" diff --git a/doc/source/conf.py b/doc/source/conf.py index 099f65403..8b9aecaa6 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -54,7 +54,7 @@ # General information about the project. project = u'CICE' -copyright = u'2021, Triad National Security, LLC (code) and National Center for Atmospheric Research (documentation)' +copyright = u'2022, Triad National Security, LLC (code) and National Center for Atmospheric Research (documentation)' author = u'CICE-Consortium' # The version info for the project you're documenting, acts as replacement for @@ -62,9 +62,9 @@ # built documents. # # The short X.Y version. -version = u'6.3.0' +version = u'6.3.1' # The full version, including alpha/beta/rc tags. -release = u'6.3.0' +release = u'6.3.1' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/doc/source/intro/copyright.rst b/doc/source/intro/copyright.rst index f09f6c58d..86b15b8d2 100644 --- a/doc/source/intro/copyright.rst +++ b/doc/source/intro/copyright.rst @@ -5,7 +5,7 @@ Copyright ============================= -© Copyright 2021, Triad National Security LLC. All rights reserved. +© Copyright 2022, Triad National Security LLC. All rights reserved. This software was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department diff --git a/doc/source/user_guide/ug_case_settings.rst b/doc/source/user_guide/ug_case_settings.rst index aa63facf7..51437ae1e 100644 --- a/doc/source/user_guide/ug_case_settings.rst +++ b/doc/source/user_guide/ug_case_settings.rst @@ -81,25 +81,34 @@ can be modified as needed.
"ICE_TARGET", "string", "build target", "set by cice.setup" "ICE_IOTYPE", "string", "I/O format", "set by cice.setup" " ", "netcdf", "serial netCDF" - " ", "pio", "parallel netCDF" " ", "none", "netCDF library is not available" + " ", "pio", "parallel netCDF" "ICE_CLEANBUILD", "true, false", "automatically clean before building", "true" "ICE_CPPDEFS", "user defined preprocessor macros for build", "null" "ICE_QUIETMODE", "true, false", "reduce build output to the screen", "false" "ICE_GRID", "string (see below)", "grid", "set by cice.setup" - " ", "gx3", "3-deg displace-pole (Greenland) global grid", " " - " ", "gx1", "1-deg displace-pole (Greenland) global grid", " " - " ", "tx1", "1-deg tripole global grid", " " " ", "gbox80", "80x80 box", " " " ", "gbox128", "128x128 box", " " - "ICE_NTASKS", "integer", "number of tasks, must be set to 1", "set by cice.setup" - "ICE_NTHRDS", "integer", "number of threads per task, must be set to 1", "set by cice.setup" + " ", "gbox180", "180x180 box", " " + " ", "gx1", "1-deg displace-pole (Greenland) global grid", " " + " ", "gx3", "3-deg displace-pole (Greenland) global grid", " " + " ", "tx1", "1-deg tripole global grid", " " + "ICE_NTASKS", "integer", "number of MPI tasks", "set by cice.setup" + "ICE_NTHRDS", "integer", "number of threads per task", "set by cice.setup" + "ICE_OMPSCHED", "string", "OpenMP SCHEDULE env setting", "static,1" "ICE_TEST", "string", "test setting if using a test", "set by cice.setup" "ICE_TESTNAME", "string", "test name if using a test", "set by cice.setup" - "ICE_BASELINE", "string", "baseline directory name, associated with cice.setup -bdir ", "set by cice.setup" + "ICE_TESTID", "string", "test name testid", "set by cice.setup" + "ICE_BASELINE", "string", "baseline directory name, associated with cice.setup --bdir ", "set by cice.setup" "ICE_BASEGEN", "string", "baseline directory name for regression generation, associated with cice.setup -bgen ", "set by cice.setup" "ICE_BASECOM", "string", 
"baseline directory name for regression comparison, associated with cice.setup -bcmp ", "set by cice.setup" - "ICE_BFBCOMP", "string", "location of case for comparison, associated with cice.setup -td", "set by cice.setup" + "ICE_BFBCOMP", "string", "location of case for comparison, associated with cice.setup --bcmp", "set by cice.setup" + "ICE_BFBTYPE", "string", "type and files used in BFBCOMP", "restart" + " ", "log", "log file comparison for bit for bit", " " + " ", "logrest", "log and restart files for bit for bit", " " + " ", "qcchk", "QC test for same climate", " " + " ", "qcchkf", "QC test for different climate", " " + " ", "restart", "restart files for bit for bit", " " "ICE_SPVAL", "string", "special value for cice.settings strings", "set by cice.setup" "ICE_RUNLENGTH", "integer (see below)", "batch run length default", "set by cice.setup" " ", "-1", "15 minutes (default)", " " @@ -111,6 +120,7 @@ can be modified as needed. "ICE_ACCOUNT", "string", "batch account number", "set by cice.setup, .cice_proj or by default" "ICE_QUEUE", "string", "batch queue name", "set by cice.setup or by default" "ICE_THREADED", "true, false", "force threading in compile, will always compile threaded if ICE_NTHRDS :math:`> 1`", "false" + "ICE_COMMDIR", "mpi, serial", "specify infrastructure comm version", "set by ICE_NTASKS" "ICE_BLDDEBUG", "true, false", "turn on compile debug flags", "false" "ICE_COVERAGE", "true, false", "turn on code coverage flags", "false" @@ -214,6 +224,7 @@ setup_nml "``runtype``", "``continue``", "restart using ``pointer_file``", "``initial``" "", "``initial``", "start from ``ice_ic``", "" "``sec_init``", "integer", "the initial second if not using restart", "0" + "``timer_stats``", "logical", "controls extra timer output", "``.false.``" "``use_leap_years``", "logical", "include leap days", "``.false.``" "``use_restart_time``", "logical", "set initial date using restart file on initial runtype only", "``.false.``" "``version_name``", "string", "model 
version", "'unknown_version_name'" diff --git a/doc/source/user_guide/ug_implementation.rst b/doc/source/user_guide/ug_implementation.rst index a838f887b..624d135c3 100644 --- a/doc/source/user_guide/ug_implementation.rst +++ b/doc/source/user_guide/ug_implementation.rst @@ -214,7 +214,8 @@ and chooses a block size ``block_size_x`` :math:`\times`\ ``block_size_y``, and ``distribution_type`` in **ice\_in**. That information is used to determine how the blocks are distributed across the processors, and how the processors are -distributed across the grid domain. The model is parallelized over blocks +for both MPI and OpenMP. Some suggested combinations for these parameters for best performance are given in Section :ref:`performance`. The script **cice.setup** computes some default decompositions and layouts but the user can overwrite the defaults by manually changing the values in @@ -553,7 +554,8 @@ The user specifies the total number of tasks and threads in **cice.settings** and the block size and decomposition in the namelist file. The main trade-offs are the relative efficiency of large square blocks versus model internal load balance -as CICE computation cost is very small for ice-free blocks. +as CICE computation cost is very small for ice-free blocks. The code +is parallelized over blocks for both MPI and OpenMP. Smaller, more numerous blocks provide an opportunity for better load balance by allocating each processor both ice-covered and ice-free blocks. But smaller, more numerous blocks become @@ -564,6 +566,18 @@ volume-to-surface ratio important for communication cost. Often 3 to 8 blocks per processor provide the decomposition flexibility to create reasonable load balance configurations. +Like MPI, load balance +of blocks across threads is important for efficient performance.
Most of the OpenMP +threading is implemented with ``SCHEDULE(runtime)``, so the OMP_SCHEDULE env +variable can be used to set the OpenMP schedule. The default ``OMP_SCHEDULE`` +setting is defined by the +variable ``ICE_OMPSCHED`` in **cice.settings**. ``OMP_SCHEDULE`` values of "STATIC,1" +and "DYNAMIC,1" are worth testing. The OpenMP implementation in +CICE is constantly under review, but users should validate results and +performance on their machine. CICE should be bit-for-bit with different block sizes, +different decompositions, different MPI task counts, and different OpenMP threads. +Finally, we recommend that the ``OMP_STACKSIZE`` env variable be set to 32M or greater. + The ``distribution_type`` options allow standard cartesian distributions of blocks, redistribution via a ‘rake’ algorithm for improved load balancing across processors, and redistribution based on space-filling @@ -1227,15 +1241,18 @@ Timers are declared and initialized in **ice\_timers.F90**, and the code to be timed is wrapped with calls to *ice\_timer\_start* and *ice\_timer\_stop*. Finally, *ice\_timer\_print* writes the results to the log file. The optional “stats" argument (true/false) prints -additional statistics. Calling *ice\_timer\_print\_all* prints all of +additional statistics. The "stats" argument can be set by the ``timer_stats`` +namelist. Calling *ice\_timer\_print\_all* prints all of the timings at once, rather than having to call each individually. Currently, the timers are set up as in :ref:`timers`. Section :ref:`addtimer` contains instructions for adding timers. The timings provided by these timers are not mutually exclusive. For -example, the column timer (5) includes the timings from 6–10, and -subroutine *bound* (timer 15) is called from many different places in -the code, including the dynamics and advection routines.
+example, the Column timer includes the timings from several other +timers, while timer Bound is called from many different places in +the code, including the dynamics and advection routines. The +Dynamics, Advection, and Column timers do not overlap and represent +most of the overall model work. The timers use *MPI\_WTIME* for parallel runs and the F90 intrinsic *system\_clock* for single-processor runs. @@ -1251,35 +1268,41 @@ The timers use *MPI\_WTIME* for parallel runs and the F90 intrinsic +--------------+-------------+----------------------------------------------------+ | 1 | Total | the entire run | +--------------+-------------+----------------------------------------------------+ - | 2 | Step | total minus initialization and exit | + | 2 | Timeloop | total minus initialization and exit | +--------------+-------------+----------------------------------------------------+ - | 3 | Dynamics | EVP | + | 3 | Dynamics | dynamics | +--------------+-------------+----------------------------------------------------+ | 4 | Advection | horizontal transport | +--------------+-------------+----------------------------------------------------+ | 5 | Column | all vertical (column) processes | +--------------+-------------+----------------------------------------------------+ - | 6 | Thermo | vertical thermodynamics | + | 6 | Thermo | vertical thermodynamics, part of Column timer | + +--------------+-------------+----------------------------------------------------+ + | 7 | Shortwave | SW radiation and albedo, part of Thermo timer | + +--------------+-------------+----------------------------------------------------+ + | 8 | Ridging | mechanical redistribution, part of Column timer | + +--------------+-------------+----------------------------------------------------+ + | 9 | FloeSize | floe size, part of Column timer | +--------------+-------------+----------------------------------------------------+ - | 7 | Shortwave | SW radiation and albedo | + | 10 | Coupling |
sending/receiving coupler messages | +--------------+-------------+----------------------------------------------------+ - | 8 | Meltponds | melt ponds | + | 11 | ReadWrite | reading/writing files | +--------------+-------------+----------------------------------------------------+ - | 9 | Ridging | mechanical redistribution | + | 12 | Diags | diagnostics (log file) | +--------------+-------------+----------------------------------------------------+ - | 10 | Cat Conv | transport in thickness space | + | 13 | History | history output | +--------------+-------------+----------------------------------------------------+ - | 11 | Coupling | sending/receiving coupler messages | + | 14 | Bound | boundary conditions and subdomain communications | +--------------+-------------+----------------------------------------------------+ - | 12 | ReadWrite | reading/writing files | + | 15 | BGC | biogeochemistry, part of Thermo timer | +--------------+-------------+----------------------------------------------------+ - | 13 | Diags | diagnostics (log file) | + | 16 | Forcing | forcing | +--------------+-------------+----------------------------------------------------+ - | 14 | History | history output | + | 17 | 1d-evp | 1d evp, part of Dynamics timer | +--------------+-------------+----------------------------------------------------+ - | 15 | Bound | boundary conditions and subdomain communications | + | 18 | 2d-evp | 2d evp, part of Dynamics timer | +--------------+-------------+----------------------------------------------------+ - | 16 | BGC | biogeochemistry | + | 19 | UpdState | update state | +--------------+-------------+----------------------------------------------------+ .. _restartfiles: diff --git a/icepack b/icepack index 152bd701e..76ecd418d 160000 --- a/icepack +++ b/icepack @@ -1 +1 @@ -Subproject commit 152bd701e0cf3ec4385e5ce81918ba94e7a791cb +Subproject commit 76ecd418d2efad7e74fe35c4ec85f0830923bda6
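The deleted `test_qcchk`/`test_qcchkf` scripts above share one pattern: run the model, scrape the TimeLoop/Dynamics/Column timer values out of the newest `cice.runlog*` with `grep`/`cut -c 22-32`, fall back to `-1` when a timer line is missing, and append a PASS/FAIL line to `test_output`. A minimal sketch of that scraping step in POSIX shell (the originals are csh); the sample log line is fabricated so the value lands in columns 22-32, since the real log layout is not shown in this patch:

```shell
# Sketch of the timer-scraping pattern from the CICE test scripts.
# Assumption: run logs contain "Timer" lines whose elapsed seconds
# sit in character columns 22-32; this sample line is fabricated.
log=$(mktemp)
printf 'Timer   2: TimeLoop     12.345\n' > "$log"

# Extract the value the way the scripts do; tr strips the padding that
# a fixed column range leaves behind.
ttimeloop=$(grep TimeLoop "$log" | grep Timer | cut -c 22-32 | tr -d ' ')
# Fall back to -1 when no timer line exists (e.g. the run aborted).
[ -z "$ttimeloop" ] && ttimeloop=-1

echo "timeloop=${ttimeloop}"
rm -f "$log"
```

The `-1` sentinel is what lets a downstream grader distinguish "run produced no timers" from a genuine timing, which is why the scripts set it before writing the PASS/FAIL line.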