Update CICE to latest Consortium master #24
Conversation
…sed. The current construction did not use the calculated values as they were defined private and overwritten at each i/j (CICE-Consortium#507)
* deprecate upwind advection
* fix diagnostic info
* remove single quotes
* error message if upwind is used
The documentation states that this test case sets 'coriolis' to zero, but 'configuration/scripts/options/set_nml.box2001' sets 'coriolis' to 'constant'. Fix the documentation to be in line with the code. While at it, add the correct unit for the Coriolis parameter (in both places where it appears in the documentation). Closes CICE-Consortium#509 Reported-by: Jean-François Lemieux <jean-francois.lemieux@canada.ca>
Ice pressure is different from ice strength, so remove 'pressure' from the description of 'ice strength'.
* initial coding of ice_dyn_vp.F90
* code (matvec, bvec and residual) now compiles after debugging
* introduced calc_vrel_Cb and calc_visc_coeff subroutines and precond.F90 file
* after debugging...now compiles
* in process of adding precondD
* precondD is in progress
* I finished coding precondD...it compiles
* added calculation for L2norm
* modif to ice_step_mod.F90 for call to imp_solver
* now form complete vectors by alternating u(i,j) and v(i,j) and so on
* added simple routine to form a vector
* corrected bug for vectors of size ntot
* added vectors of size ntot to bvec and matvec
* step back...not so easy to form all the vectors for the blocks...
* new subroutine for assembling ntot vectors
* In the process of adding the call to fgmres
* new subroutine to convert vector to the max_blocks arrays
* ice_dyn_vp compiles...I now need to make fgmres.F90 compile...
* added BLAS routines needed by fgmres
* added the fgmres.F90 routine...does not compile yet...
* minor modif to fgmres.F90
* bug in call to arrays_to_vec instead of vec_to_arrays
* Remove fgmres.F90 (MPI version, will re-add GEM version later) This file 'include's mpif.h and so breaks serial compilation. I will copy the FGMRES solver from GEM later on and modify it to call the CICE communication routines.
* added serial fgmres for testing
* modified code for serial tests
* after debugging...
* added code to put sol vector in uvel and vvel arrays at the end of fgmres
* modified tinyarea calc for VP (punyVP=2e-9)
* modified location of allocation for some variables...
* small bug corrected in allocating variables
* introduced capping of the viscous coeff following Kreyscher
* added a relaxation to uvel,vvel to try to get nonlinear conv
* modif to relaxation calc...minor
* I found a bug in my 1st implementation. The dsig/dx term related to the rep pressure should be on the RHS...in the process of correcting this
* Pr is now calc with zeta
* dPrdx is now in calc_bvec
* removed Pr part from stress_vp calc
* minor modifs to improve print of res norm in fgmres
* added linear conv criterion based on a gamma
* added NL conv criterion
* in the process of adding precond diag
* now forms the vector diagvec
* precond diag is completed and compiles
* diagvec was not allocated...now ok
* diagvec was not deallocated...now ok
* fixed bug in formDiag_step2
* small modif
* small modif...again
* modified formation of diagonal for precond
* just removed a comment
* minor modif to formDiag_step1
* added an option for precond (free drift diag or complete diag)
* minor modif
* minor modif
* removed OLDformDiag_step1
* Ok I think I know what was wrong with the precond diag...I have to use icellu and not icellt
* precond 3 works but not as good as precond 2 (free drift)...removed OLD code
* ok found the problem in step2...now fixed
* added routine to calc deformations for mech redistribution...for the evp this is done for k=ndte in stress...here it is done once we have NL convergence
* changed names of evp_prep1 to dyn_prep1...
* in process of adding gmres as precond to fgmres...
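The nonlinear (Picard/fixed-point) iteration with a relaxation on (uvel,vvel) mentioned above can be sketched schematically in Python. This is an illustrative sketch only, not the Fortran model code: `A_of`, `b_of`, the relaxation factor, and the direct solve standing in for the FGMRES linear solve are all assumptions.

```python
import numpy as np

def picard(x0, A_of, b_of, reltol=1e-8, maxits=100, relax=1.0):
    """Schematic Picard (fixed-point) iteration for A(x).x = b(x).

    Each iterate solves the linearized system A(x_k).x_{k+1} = b(x_k),
    optionally blended with the previous iterate through a relaxation
    factor (the 'krelax' idea mentioned above).  Illustrative only.
    """
    x = x0.copy()
    r0 = np.linalg.norm(b_of(x) - A_of(x) @ x)   # initial nonlinear residual
    for _ in range(maxits):
        x_new = np.linalg.solve(A_of(x), b_of(x))  # stands in for FGMRES
        x = relax * x_new + (1.0 - relax) * x      # relaxation step
        if np.linalg.norm(b_of(x) - A_of(x) @ x) < reltol * r0:
            break
    return x
```

On a mildly nonlinear system the iteration converges in a handful of steps; the stopping test uses the nonlinear residual relative to its initial value, as in the convergence criteria discussed in these commits.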
* matvec is now done in one routine instead of in two steps before
* in the process of adding gmres precond
* finalizing implementation of gmres for precond
* pgmres seems to work...I used it as a solver and it works fine
* minor modif: introduction of maxits_pgmres
* modif to comments...very minor
* something was inconsistent for precond choice...now fixed and removed one precond option (free drift)
* removed krelax...not needed as a more complex method will be added
* in the process of adding the RRE1 acceleration method
* fixed problem in halo update (spherical grid...wraps on itself) before matvec
* cicecore: add puny_dyn variable to initialize tinyarea, remove temporary punyVP variable
* ice_dyn_vp: move solver algorithm and output parameters to namelist Also, since 'yield_curve' and 'e_ratio' are used for both VP and EVP dynamics, print their values for both solvers.
* {f,}gmres: use nu_diag for output
* dynamics: correct 2 bugs found using debugging flags - eps1 was printed before being initialized in pgmres - a loop on iblk was missing around the call to deformations in ice_dyn_vp
* ice_dyn_vp: rename VP namelist variables to be more descriptive Also, add namelist flag to monitor nonlinear convergence and document VP namelist variables in runtime log.
* pgmres: streamlined monitoring output Bring the PGMRES output more in line with the FGMRES and nonlinear solvers.
* ice_dyn_vp: add computation of nonlinear and fixed point residuals
* ice_dyn_vp: remove references to RRE acceleration method We will not use this acceleration method, so remove remaining references to it.
* ice_dyn_vp: remove unused 'ulin', 'vlin' variables
* ice_dyn_vp: move Picard iteration to separate subroutine Make the code more modular by moving the Picard solver to its own subroutine. Also, rename 'kmax' to 'maxits_nonlin' to bring it in line with 'maxits_{f,p}gmres'.
* ice_dyn_vp: add Anderson acceleration for Picard iteration
* Remove nonlinear convergence logic from fgmresD.F90 (move to picard_solver in ice_dyn_vp.F90)
* Add output of residual norm to fgmresD.F90
* Remove unused variables (fgmresD.F90, pgmres.F90, ice_dyn_vp.F90)
* Fix warning for format 1008 (ice_init.F90)
* Move FGMRES solver to separate subroutine (ice_dyn_vp.F90)
* Remove unused use statements in picard_solver (ice_dyn_vp.F90)
* options: add options files for implicit solver - diag_imp sets all monitoring options (nonlin, {f,p}gmres) to on - dynanderson sets the implicit solver to on and the nonlinear algorithm to Anderson acceleration - dynpicard sets the implicit solver to 'on' and the nonlinear algorithm to Picard iteration - run3dt runs the model for three time steps
* ice_dyn_vp: correct residual computations if max_blocks > 1 The way we compute the nonlinear residual norm does not take into account that `L2norm` is an array of size `max_blocks`. Fix that by summing the components, squared, and taking the square root (we need to square the components because the subroutine 'calc_L2norm' returns the square root).
* ice_dyn_vp: differentiate fixed point and progress residuals For Anderson acceleration, the fixed point residual (g(x) - x) is not the same as the 'progress residual' (u-u_old). Make this distinction clear in the code and in the monitoring output.
* ice_dyn_vp: fix convergence issues in Picard and Anderson solvers Both solvers currently converge, but the solution (uvel,vvel) is not qualitatively the same as the EVP solution; velocities are too small. For picard_solver, it is due to the fact that the initial residual stored and used to check for convergence is not the initial residual at the start of the nonlinear iteration, but the last residual at the end of the first FGMRES iteration. Since this residual is smaller, convergence is "faster" but the solution is not the same; it is not converged enough in the sense of the nonlinear residual. This is an implementation error that was introduced when the nonlinear convergence logic was removed from the fgmres solver. For anderson_solver, it is due to using the fixed point residual norm to check for convergence. Similarly, this residual is smaller than the nonlinear residual and thus leads to a "faster" convergence, but to a different solution (not converged enough in the sense of the nonlinear residual). Fix both errors.
* ice_dyn_vp: add (ulin,vlin) to compute vrel based on 2 previous iterates Initial test shows that using the mean of the 2 previous iterates to compute vrel accelerates convergence for Picard iteration (this is robust). For Anderson it is sometimes better, sometimes worse, so by default it is set to false in the namelist. This also removes the oscillation produced by Picard iteration, which is in line with results in doi:10.1029/2008JC005017
* machines: cesium: add LAPACK and BLAS library to Macros
* ice_dyn_vp: add 'subname' variable to each subroutine
* ice_dyn_vp: cleanup 'icepack_warnings_*' calls Some calls to Icepack warning interfaces are not needed; remove them. Another call is misplaced; move it to the right place.
* ice_dyn_vp, pgmres: remove unused arguments, variables and 'use' statements
* ice_dyn_vp: refactor 'puny' treatment for implicit solver The way 'puny_dyn' is set in the 'cice' driver, and in 'ice_constants' and 'ice_grid' is kind of hacky. Revert those changes. Instead, add an 'init_vp' subroutine that is called from the drivers and that recomputes 'tinyarea' with the new 'puny_vp' value. Call 'init_evp' from 'init_vp', mimicking what is done in 'init_eap'.
* ice_dyn_vp: make 'calc_bvec' use already computed 'vrel' Pass the already computed 'vrel' as an argument instead of recomputing it.
* ice_dyn_vp: remove references to fgmres2 subroutine We still reference 'fgmres2', an MPI implementation of FGMRES that was never completely integrated with the implicit solver.
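The `max_blocks > 1` residual fix described above — square the per-block norms, sum, then take the square root — can be illustrated with a short Python sketch (illustrative only; the actual code is Fortran and sums across distributed blocks):

```python
import math

def global_norm(block_norms):
    """Combine per-block L2 norms into one global L2 norm.

    'calc_L2norm' returns sqrt(sum(v**2)) for each block, so the
    blockwise results must be squared before being summed; naively
    summing the norms themselves would overestimate the global norm.
    """
    return math.sqrt(sum(n * n for n in block_norms))
```

For example, two blocks with local norms 3 and 4 give a global norm of 5, not 7.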
Remove references to it as well as its arguments.
* dynamics: remove 'BLAS_routines.F90' It is a best practice to use the BLAS library natively available on a system, or install a high performance implementation like ATLAS, OpenBLAS, etc. Remove the file 'BLAS_routines.F90', which contains BLAS routines copied from the reference NETLIB implementation.
* ice_dyn_vp: synchronize 'imp_solver' with ice_dyn_evp::evp The 'imp_solver' subroutine is the main driver of the implicit VP solver and was copied from a very old version of the 'evp' EVP driver in ice_dyn_evp.F90. Bring 'imp_solver' in line with changes in 'evp', and move some statements around to keep related statements together.
* dynamics: refactor strain rates and deformation calculations Create subroutines for strain rates and deformations computations, add them to ice_dyn_shared and call them in ice_dyn_evp and ice_dyn_vp. This reduces code duplication.
* dynamics: add sol_fgmres2d from GEM as ice_krylov.F90 (does not compile) Copy (verbatim) the 2D FGMRES solver from the GEM atmospheric model as 'ice_krylov.F90'.
* ice_krylov: adapt FGMRES algo up to inner loop (WIP)
* ice_krylov: adapt FGMRES algo up to preconditioner call (WIP)
* ice_krylov: adapt FGMRES algo up to before CGS (WIP)
* ice_krylov: adapt FGMRES algo up to end of Arnoldi (except 1st GS loop) (WIP)
* ice_krylov: adapt FGMRES algo up to updating of solution (WIP)
* ice_krylov: adapt FGMRES algo up to residual calculation (WIP)
* ice_krylov: correct CICE kinds in function almost_zero
* comm: add 'global_sums' module procedure Add a 'global_sums' module procedure that computes the global sum of several scalars, held in a 1D array ('vector'). Make use of the existing 'compute_sums_dbl' subroutine.
* ice_krylov: adapt FGMRES including Gram-Schmidt (WIP)
* ice_krylov: add 'pgmres' subroutine (mostly the same as 'fgmres') The PGMRES algorithm is the same as using FGMRES with the same preconditioner at each iteration. However, in order to reuse the FGMRES code the 'fgmres' subroutine would have to be a recursive subroutine, which might have performance implications. For now, add it as a separate subroutine.
* ice_krylov: move subroutines to ice_dyn_vp to work around circular deps 'ice_krylov' and 'ice_dyn_vp' modules each 'use' variables from each other, which makes them impossible to compile in a clean build. Transfer the subroutines in 'ice_krylov' to 'ice_dyn_vp'. While at it, update the public interface of module ice_dyn_vp to only expose what is used in other modules. Also, make 'icellu', 'icellt' and 'indxtij' et al. module variables, to reduce subroutine argument counts. In the same spirit, add use statements.
* ice_dyn_vp: add 'subname' to subroutine 'almost_zero' and correct indentation
* ice_dyn_vp: add 'ice_HaloUpdate_vel' subroutine Introduce the subroutine 'ice_HaloUpdate_vel', which can be used to do halo updates for vector fields, such as the ice velocity (uvel,vvel).
* ice_dyn_vp: make 'fld2' a module variable Reduce subroutine argument counts by making 'fld2' a module variable, and allocate it in 'init_vp'.
* ice_dyn_vp: move 'ice_halo' to module 'use' statement
* ice_dyn_vp: add workspace initialization to 0 and haloupdate before matvec in fgmres (WIP)
* ice_dyn_vp: add choice between classical and modified Gram-Schmidt orthogonalization The FGMRES solver from GEM only has classical Gram-Schmidt (CGS), but modified Gram-Schmidt (MGS) is more robust. Preliminary tests show that the convergence of the non-linear solver is seriously degraded when CGS is used instead of MGS inside FGMRES. This was tested by changing MGS to CGS in the Saad FGMRES implementation:

  ```diff
  diff --git a/cicecore/cicedynB/dynamics/fgmresD.F90 b/cicecore/cicedynB/dynamics/fgmresD.F90
  index 5a93e68..c7d49fa 100644
  --- a/cicecore/cicedynB/dynamics/fgmresD.F90
  +++ b/cicecore/cicedynB/dynamics/fgmresD.F90
  @@ -192,12 +192,13 @@ subroutine fgmres (n,im,rhs,sol,i,vv,w,wk1, wk2, &
   !       if (icode .eq. 3) goto 11
          call dcopy (n, wk2, 1, vv(1,i1), 1)   !jfl modification
   !
  -!     modified gram - schmidt...
  +!     classical gram - schmidt...
   !
          do j=1, i
  -          t = ddot(n, vv(1,j), 1, vv(1,i1), 1)   !jfl modification
  -          hh(j,i) = t
  -          call daxpy(n, -t, vv(1,j), 1, vv(1,i1), 1)   !jfl modification
  +          hh(j,i) = ddot(n, vv(1,j), 1, vv(1,i1), 1)   !jfl modification
  +       enddo
  +       do j=1, i
  +          call daxpy(n, -hh(j,i), vv(1,j), 1, vv(1,i1), 1)   !jfl modification
          enddo
          t = sqrt(ddot(n, vv(1,i1), 1, vv(1,i1), 1))   !jfl modification
          hh(i1,i) = t
  ```

  Add an 'orthogonalize' subroutine that can do either CGS or MGS to abstract away the orthogonalization procedure. Add a namelist parameter 'ortho_type' (defaulting to 'mgs') so that the user can change at run time if the FGMRES solver should use CGS or MGS as an orthogonalization method. Note: at the moment the 'pgmres' subroutine is still hard-coded to use CGS.
* ice_dyn_vp: change order of arguments for 'precondition' subroutine The arguments with 'intent(out)' and 'intent(inout)' are usually listed last in the argument list of subroutines and functions. This is in line with [1]. Change 'precondition' to be in line with this standard. [1] European Standards for Writing and Documenting Exchangeable Fortran 90 Code, https://wiki.c2sm.ethz.ch/pub/COSMO/CosmoRules/europ_sd.pdf
* ice_dyn_vp: fgmres: correctly initialize workspace_[xy] and arnoldi_basis_[xy] A previous commit only initialized them in pgmres. This should be squashed with 1c1b5cf (Add workspace initialization to 0 and haloupdate before matvec in fgmres (WIP), 2019-07-24)
* ice_dyn_vp: matvec: change Au,Av from 'intent(out)' to 'intent(inout)' According to The Fortran 2003 Handbook [1], the Fortran standard states that arguments with 'intent(out)' are left uninitialized if the subroutine/function does not set them. This is dangerous here since the 'Au', 'Av' arguments are set in 'matvec' only for grid cells where there is ice, so any grid cells with no ice that were previously initialized (e.g. to zero) in the actual arguments could be left with garbage values after the call to 'matvec' (this does not happen with the Intel compiler but it's still better to follow the standard). [1] J. C. Adams, W. S. Brainerd, R. A. Hendrickson, R. E. Maine, J. T. Martin, and B. T. Smith, The Fortran 2003 Handbook. London: Springer London, 2009.
* ice_dyn_vp: clarify descriptions of FGMRES variables Use 'Arnoldi' and 'restarts' in addition to 'inner' and 'outer'.
* ice_dyn_vp: add subroutines description and clarify existing ones
* ice_dyn_vp: fix comments in FGMRES algorithm The next block does not only compute the new Givens rotation, it applies it, so tweak the comment to be in line with the code.
* ice_dyn_vp: fgmres: fix misleading check for return Using `outiter > maxouter` means that the algorithm would always perform at most `maxouter+1` outer (restarts) iterations, since at the end of the `maxouter`-th iteration we would have `maxouter > maxouter`, which is false, so the algorithm would do a last outer iteration before returning. Fix this by using ">=" so that the `maxouter` value is really the maximum number of outer iterations permitted. This error was in the GEM version.
* ice_dyn_vp: fgmres: fix error in the last loop computing the residual workspace_[xy] was rewritten at each iteration instead of being incremented. Fix that.
* ice_dyn_vp: change the definition of F(x) from 'A.x - b' to 'b - A.x' This makes the result of the 'residual_vec' subroutine conform with the definition of the residual in the FGMRES algorithm (r := b - A.x). This should have no other impact than making the new fgmres subroutine actually work.
* ice_dyn_vp: fgmres: comment out early return (WIP) This is different from the legacy FGMRES and thus needs to be commented to compare both versions of the algorithm.
* ice_dyn_vp: pgmres: synchronize with fgmres (WIP) Also remove unnecessary return statement at the end of fgmres.
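The CGS/MGS distinction behind the 'ortho_type' namelist parameter can be sketched in Python. This is an illustrative stand-in for the Fortran 'orthogonalize' subroutine (the function name and signature here are assumptions, not the model's API):

```python
import numpy as np

def orthogonalize(basis, w, method="mgs"):
    """Orthogonalize w against the orthonormal columns of 'basis'.

    - 'cgs' (classical Gram-Schmidt): all projection coefficients are
      computed from the original w, then subtracted in a second loop
      (the two-loop structure shown in the diff above).
    - 'mgs' (modified Gram-Schmidt): each projection is subtracted
      immediately, so later coefficients see the partially
      orthogonalized vector, which is more robust in finite precision.
    """
    h = np.empty(basis.shape[1])
    if method == "cgs":
        for j in range(basis.shape[1]):
            h[j] = basis[:, j] @ w
        for j in range(basis.shape[1]):
            w = w - h[j] * basis[:, j]
    else:  # mgs
        for j in range(basis.shape[1]):
            h[j] = basis[:, j] @ w
            w = w - h[j] * basis[:, j]
    return w, h
```

In exact arithmetic both variants agree; the difference appears only through rounding, which is why CGS can degrade the convergence of the nonlinear solver as reported above.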
* ice_dyn_vp: picard_solver: re-enable legacy solver Rename the subroutines fgmres and pgmres in standalone files with a '_legacy' suffix.
* cicecore: remove remaining 'SVN:$Id:' lines
* ice_dyn_vp: remove 'picard_solver' subroutine The 'picard_solver' subroutine has been abandoned for quite a while and the Picard algorithm can be used by using the Anderson solver with `im_andacc = 0`, i.e. not saving any residuals. Remove the subroutine and move the logic around the namelist variable 'algo_nonlin' from 'imp_solver' to 'anderson_solver'. This way, the new developments made in 'anderson_solver' (using the FGMRES solver from GEM, choosing the orthogonalization method, the addition of the precondition subroutine) can also be used with `algo_nonlin = 1`.
* ice_dyn_vp: remove legacy FGMRES solver The new FGMRES solver adapted from GEM was already verified to give the same results as the legacy implementation. Remove the legacy implementation.
* ice_dyn_vp: pgmres: use Classical Gram-Schmidt The legacy PGMRES preconditioner uses Classical Gram-Schmidt for orthogonalization, so in order to compare the new solver with the old one we must use CGS in PGMRES. A following commit will add a namelist variable to control this behavior.
* ice_in: change maxits_{f,p}gmres to be in line with new meaning Change the reference namelist to conform with the new meaning of `maxits_{f,p}gmres`, which is different between the old and the new {F,P}GMRES solvers: - in the old solvers 'maxits_*' is the total number of inner (Arnoldi) iterations - in the new solvers 'maxits_*' is given as the 'maxouter' argument, i.e. the total number of outer (restarts) iterations and so 'maxits' from the old solvers correspond to 'maxouter*maxinner' This means that in the namelist 'maxits_*' must be set to 1 if it was previously set to the same thing as 'im_*' (i.e., do not do any restarts).
* ice_dyn_vp: uniformize naming of namelist parameters for solver tolerances 'gammaNL', 'gamma' and 'epsprecond' refer respectively to the relative tolerances of the nonlinear solver, linear solver (FGMRES) and preconditioner (PGMRES). For consistency and to use meaningful names in the code, rename them to 'reltol_nonlin', 'reltol_fgmres' and 'reltol_pgmres'.
* ice_dyn_vp: anderson_solver: adapt residual norm computations for MPI The norm of the different residuals needs to be computed using the squared value of all components across MPI processes. We keep the 'res' and 'fpfunc' vectors as 1D vectors for now, to minimize code changes. This will be revised when the Anderson solver is parallelized.
* ice_dyn_vp: write solver diagnostics only for master task Note that care must be taken not to call 'global_sum' inside an `if (my_task == master_task)` construct, as this will create an incorrect communication pattern and ultimately an MPI failure.
* ice_dyn_vp: remove references to EVP Remove leftover references from the initial copy of ice_dyn_evp.F90
* ice_dyn_vp: rename 'puny_vp' to 'min_strain_rate' This value really is a minimum strain rate, so name it as such.
* ice_dyn_vp: remove "signature" comments from JF These comments do not add value to the code, remove them.
* ice_dyn_vp: write stresses at end of time step In the EVP solver, stresses are subcycled using the global 'stress{p,m,12}_[1-4]' arrays, so these arrays contain the subcycled stresses at the end of the EVP iterations. These global arrays are used to write the stresses to the history tape if the stress variables are enabled in the namelist. In the VP solver, stresses are not explicitly computed to solve the momentum equation, so they need to be computed separately at the end of the time step so that they can be properly written to the history tape. Use the existing, but unused, 'stress_vp' subroutine to compute the stresses, and call it at the end of the 'imp_solver' driver. Remove unneeded computations from 'stress_vp'. Move the definition of the 'zetaD' array to 'imp_solver' so it can be passed to 'stress_vp'.
* ice_dyn_vp: cleanup 'intent's - move 'intent(out)' and 'intent(inout)' arguments to the end of the argument list (ice_HaloUpdate_vel is an exception, for consistency with other ice_HaloUpdate subroutines). - move 'intent(in)' before any 'intent(out)' or 'intent(inout)' - change 'intent(inout)' to 'intent(out)' if the argument is rewritten inside the subroutine
* dynamics: correct the definition of 'shearing strain rate' in comments All occurrences of 'shearing strain rate' as a comment before the computation of the quantities `shear{n,s}{e,w}` define these quantities as shearing strain rate = e_12 However, the correct definition of these quantities is 2*e_12. Bring the code in line with the definition of the shearing strain rate in the documentation (see the 'Internal stress' section in doc/source/science_guide/sg_dynamics.rst), which defines D_S = 2\dot{\epsilon}_{12} Correct these.
* ice_dyn_vp: correctly use 'subname' in 'abort_ice' calls
* ice_dyn_vp: add 'calc_seabed_stress' subroutine Add a subroutine to compute the diagnostic seabed stress ('taubx' and 'tauby'), and call it from the 'imp_solver' driver. Note that in EVP, this is done in the 'stepu' subroutine in ice_dyn_shared.
* dynamics: move basal stress residual velocity ('u0') to ice_dyn_shared Make 'u0' a module variable in ice_dyn_shared, and use it for both EVP and VP without duplicating the hard-coded value in the code.
* ice_dyn_vp: use 'deformations' from ice_dyn_shared The 'deformations' subroutine is used in ice_dyn_evp, but not in ice_dyn_vp.
* ice_dyn_vp: simplify 'calc_vrel_Cb' subroutine Remove the unneeded variables 'utp', 'vtp' and directly use 'uvel' and 'vvel' instead.
* ice_dyn_vp: rename 'Diag[uv]' to 'diag[xy]' for consistency The rest of the code uses 'x' ('y') as a suffix for the x- and y- components of "vector field"-type arrays. Bring 'Diag[uv]' in line with this convention, and lose the capitalization.
* ice_dyn_vp: introduce 'CICE_USE_LAPACK' preprocessor macro The 'anderson_solver' subroutine uses calls from the LAPACK library, but these are strictly needed only for the Anderson solver, and not for the Picard solver. Introduce a preprocessor macro, 'CICE_USE_LAPACK', that can be used to "preprocess out" any code that uses LAPACK calls. Refactor the code so that the Picard solver works without LAPACK (this is easy since we only have one call to 'dnrm2' to remove, and an alternative implementation already exists, but is commented out). Uncomment it and add an 'ifdef' so that this code is used if 'CICE_USE_LAPACK' is not defined. Also, make the Anderson solver abort if CICE was compiled without LAPACK. These two changes make the model compile if LAPACK is not available. Add an option file 'set_env.lapack' to automatically configure the model compilation upon 'cice.setup' to use LAPACK (note that Macros and env files for the chosen machine need to be modified to add LAPACK support; at the moment only 'cesium' is supported).
* ice_dyn_vp: convert 'algo_nonlin' to string Make the reference namelist easier to use by changing the type of the 'algo_nonlin' variable to a string. This way, we can use 'picard' and 'anderson' instead of '1' and '2'.
* ice_dyn_vp: convert 'precond' to string Make the namelist easier to use by converting the 'precond' namelist variable to a string. This way we can use 'ident', 'diag' or 'pgmres' instead of '1', '2' or '3'.
* ice_dyn_vp: convert 'monitor_{f,p}gmres' to logical Convert the namelist variables 'monitor_{f,p}gmres', which are used as logical values, to actual logical values.
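The change in meaning of `maxits_{f,p}gmres` — from "total inner (Arnoldi) iterations" in the old solvers to "number of outer restarts (`maxouter`)" in the new ones — can be illustrated with a schematic restart loop (illustrative Python only; the names mirror the commit messages, not the Fortran code):

```python
def total_arnoldi_iterations(maxouter, maxinner, converged_at=None):
    """Count the inner (Arnoldi) iterations of a restarted GMRES loop.

    With 'maxits_*' interpreted as 'maxouter', the worst-case work is
    maxouter * maxinner inner iterations, so a legacy setting of
    'maxits = im' corresponds to maxouter = 1 (no restarts).
    """
    total = 0
    for _outer in range(maxouter):
        for _inner in range(maxinner):
            total += 1
            if converged_at is not None and total >= converged_at:
                return total  # early exit on convergence
    return total
```

For example, a legacy `maxits = 50` with `im = 50` maps to `maxouter = 1`, `maxinner = 50`; setting `maxouter = 5` instead allows up to 250 inner iterations.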
* ice_dyn_vp: remove unimplemented 'fpfunc_andacc' from the namelist In the future, we might add a second fixed point function, g_2(x), instead of the regular Picard fixed point function currently implemented, g_1(x) = FGMRES(A(x), x0, b(x)) - x Remove the namelist flag 'fpfunc_andacc' from the reference namelist, and make the model abort if this flag is used, since only g_1(x) is currently implemented. * ice_dyn_vp: add input validation for string namelist variables Add input validation in ice_init for 'algo_nonlin', 'precond' and 'ortho_type'. Currently, the code that resets 'im_andacc' to zero if the Picard solver is chosen (algo_nonlin = 1) is inside the 'anderson_solver' subroutine, which is not clean because 'im_andacc' is used as a dimension for several local variables. These variables would thus have whatever size 'im_andacc' is set to in the namelist, even if the Picard solver is chosen. Fix that by moving the code that sets 'im_andacc' to zero if the Picard solver is chosen (algo_nonlin = 1) to ice_init. * ice_dyn_vp: abort if Anderson solver is used in parallel Since the Anderson solver is not yet parallelized, abort the model if users attempt to run it on more than one proc. * ice_dyn_vp: reimplement monitor_{f,p}gmres The ability to monitor the residual norm of the FGMRES solver and the PGMRES preconditioner was lost when we transitioned to the new solver implementation. Re-add this capability. * ice_dyn_vp: use 'ice_HaloUpdate_vel' everywhere By performing the halo update after creating the halo_info_mask in imp_solver, we can use the ice_HaloUpdate_vel subroutine, increasing code reuse. * ice_dyn_vp: move ice_HaloUpdate_vel subroutine to ice_dyn_shared Move the subroutine to ice_dyn_shared so it can be used in ice_dyn_evp and ice_dyn_eap. * ice_dyn_evp: use ice_HaloUpdate_vel Now that the subroutine is in ice_dyn_shared, use it to increase core reuse. 
* ice_dyn_eap: use ice_HaloUpdate_vel Now that the subroutine is in ice_dyn_shared, use it to increase code reuse. * options: set 'use_mean_vrel' in 'dynpicard' Preliminary testing suggests convergence is improved when using the mean of the two previous estimates to compute 'vrel' when the Picard solver is used. Set this as the default with option 'dynpicard'. * conda: add 'libapack' to environment spec and Macros Add the necessary conda package to build CICE with LAPACK, so that the Anderson solver can be tested in the conda environment. * ice_dyn_vp: initialize 'L2norm' and 'norm_squared' to zero If we don't initialize thess arrays, using any MPI decomposition for which nblocks /= max_blocks on some process will cause uninitialized values to be summed when computing the local sum for each block, like in nlres_norm = sqrt(global_sum(sum(L2norm), distrb_info)) Initialize them to zero. * ice_dyn_vp: initialize 'stPr' and 'zetaD' to zero Since 'stPr' and 'zetaD' are initialized with a loop over 'icellt', but are used on loops over 'icellu', it is possible that these latter loops use uninitialized values since 'iceumask' and 'icetmask' can differ at some grid points since they are not defined using the same criteria (see ice_dyn_shared::dyn_prep1 and ice_dyn_shared::dyn_prep2). To avoid using unitialized values, initialize the 'stPr' and 'zetaD' arrays to zero, so that any index corresponding to a cell with no ice is set to zero. This mimics what is done for EVP, where the 'str' array is initialized to zero in ice_dyn_evp::stress. * doc: document implicit solver * ice_dyn_vp: default 'use_mean_vrel' to 'true' Previous testing reveals it's better to set 'use_mean_vrel' to true (i.e., use the mean of the 2 previous nonlinear iterates to compute `vrel`) for the Picard solver, but the picture is less clear for the Anderson solver. Since we are only advertising the Picard solver for now, make this flag default to 'true'. 
* ice_in: remove Anderson solver parameters Since we are only advertising the Picard solver for now, remove the namelist parameters for the Anderson solver from the reference namelist. * ice_dyn_vp: add article references * ice_dyn_vp: remove unused subroutines Remove subroutines 'stress_prime_vpOLD', 'matvecOLD' and 'precond_diag'. 'stress_prime_vpOLD' and 'matvecOLD' have been unused since the creation of the 'matvec' subroutine. 'precond_diag' has been unused since we deactivated the legacy FGMRES solver. * ice_dyn_vp: remove unused local variables * ice_dyn_vp: calc_bvec: remove unused arguments This completes 56d55c7 (ice_dyn_vp: make 'calc_bvec' use already computed 'vrel', 2019-06-10) * ice_dyn_vp: anderson_solver: remove unused arguments This completes 4616628 (ice_dyn_vp: remove legacy FGMRES solver, 2020-02-17). * ice_dyn_vp: remove unused 'use'-imported variables in subroutines * ice_dyn_vp: fix trailing whitespace errors * ice_dyn_vp: uniformize indentation and whitespace * ice_dyn_vp: put 'intent's on same lines as types * ice_dyn_vp: use '==' instead of '.eq.' * ice_dyn_vp: uniformize whitespace in subroutine arguments * ice_dyn_vp: calc_bvec: remove unneeded arguments * ice_dyn_vp: clean up and clarify code comments * ice_dyn_vp: matvec: remove unneeded local variable * ice_dyn_vp: rename 'stPrtmp' to 'stress_Pr' and 'Dstrtmp' to 'diag_rheo' Also rename 'Dstr' to 'Drheo' in formDiag_step[12] * ice_dyn_vp: rename 'calc_zeta_Pr' to 'calc_zeta_dPr' The subroutine 'calc_zeta_Pr' computes the derivatives of the replacement pressure, so rename it to 'calc_zeta_dPr' to bring its name in line with its computation. * ice_dyn_vp: rename 'ww[xy]' to 'orig_basis_[xy]' The arrays 'ww_[xy]' are used to keep the values of the preconditioned vectors, in order to update the solution at the end of the iteration. As such, rename them to 'orig_basis_[xy]', since they form a basis of the solution space and this name describes their purpose better. 
* ice_dyn_vp: [fp]gmres: remove unused 'r0' variable and 'conv' argument
  Both of these are leftovers of the conversion from the legacy FGMRES implementation or of the GEM implementation, and are unused in our implementation. Remove them. Move the declaration of the BLAS functions into their own type declaration to avoid having to deal with the trailing `, &`.
* ice_dyn_vp: make 'maxits_nonlin' default to 4
  Let's use a small number of nonlinear iterations by default, since other models with VP solvers are often used with a small number of iterations. Add an option file 'set_nml.nonlin5000' to set 'maxits_nonlin' to 5000, effectively letting the VP solver iterate until 'reltol_nonlin' is reached.
* ice_dyn_vp: rename 'im_{fgmres,pgmres,andacc}' to 'dim_*'
  Since these three parameters denote dimensions (of the FGMRES, PGMRES and Anderson acceleration solution spaces), name them as such.
* tests: add 'dynpicard' test to base_suite
  Add a test using the VP implicit solver to the base_suite. Let's use the gx3 grid because it is less computationally intensive, and use a 4x1 decomposition to test the solver in parallel.
* dynamics: remove proprietary vector directives
* ice_init: validate implicit solver arguments only if solver is active
  'algo_nonlin', 'precond' and 'ortho_type' are only active if 'kdyn == 3', so wrap the code validating the values of these flags in an if block.
* ice_dyn_vp: rename 'imp_solver' to 'implicit_solver'
  Use a more explicit subroutine name, which additionally does not have any negative connotations.
* doc: add caveat for VP solver (tx1 grid, threading)
* ice_dyn_vp: rename 'CICE_USE_LAPACK' macro to 'USE_LAPACK'
  Bring the name of the new macro 'CICE_USE_LAPACK' more in line with the changes to the CPP macros in 819eedd (Update CPP implementation (CICE-Consortium#490), 2020-07-31) by renaming it to 'USE_LAPACK'.
* comm: rename 'global_sums' to 'global_allreduce_sum'
  Make the purpose of the 'global_sums' interface, i.e. reducing a distributed variable to a variable of the same shape (MPI_ALLREDUCE), clearer by renaming it to 'global_allreduce_sum'. This makes it clearer that, in contrast to the 'global_sum' interface, the resulting variable is of the same shape as the distributed variable.
* dynamics: remove 'ice_HaloUpdate_vel' and introduce '(un)stack_velocity_field'
  The 'ice_HaloUpdate_vel' subroutine feels out of place in 'ice_dyn_shared', and moving it to 'ice_boundary' introduces other problems, including circular dependencies. Remove it, and introduce two simple subroutines, 'stack_velocity_field' and 'unstack_velocity_field', that are responsible for loading the 'uvel' and 'vvel' arrays into the 'fld2' array used for the halo updates. Use these new subroutines in ice_dyn_{evp,eap,vp}.
* dynamics: rename 'init_evp' to 'init_dyn'
  The 'init_evp' subroutine initializes arrays that are used in the EVP, EAP or VP dynamics. Rename it to 'init_dyn' to emphasize that it is not specific to EVP.
* drivers: call 'init_dyn' unconditionally
  'init_dyn' is called in both 'init_eap' and 'init_vp', so it makes more sense for this subroutine to be called unconditionally in the drivers, and then call 'init_eap' or 'init_vp' if the EAP or VP dynamics are chosen.
Co-authored-by: JFLemieux73 <jean-francois.lemieux@canada.ca>
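The VP solver discussed in these commits is a Picard iteration: each nonlinear iteration freezes the viscous coefficients at the previous iterate and solves the resulting linear system (CICE uses FGMRES for that step). A minimal Python sketch of this control flow, reduced to a scalar toy problem; `picard_solve`, `A_of_u` and `b_of_u` are illustrative names, not CICE routines:

```python
def picard_solve(A_of_u, b_of_u, u0, maxits_nonlin=4, reltol_nonlin=1e-8):
    """Picard (fixed-point) loop: linearize A(u) u = b(u) about the
    previous iterate, then do a linear solve (FGMRES in CICE; a plain
    division stands in here for the scalar toy problem)."""
    u = u0
    tol = None
    for _ in range(maxits_nonlin):
        A = A_of_u(u)                  # "viscosities" at previous iterate
        b = b_of_u(u)
        res = abs(b - A * u)           # nonlinear residual
        if tol is None:
            tol = reltol_nonlin * res  # tolerance relative to initial residual
        if res <= tol:
            break
        u = b / A                      # linear solve step
    return u
```

With the new default `maxits_nonlin = 4` the loop simply stops early; the `set_nml.nonlin5000` option effectively lets it run until `reltol_nonlin` is met.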
…R#479) in all drivers (CICE-Consortium#515)
… decompositions. In CICE-Consortium#491, an unmasked halo update was changed to a masked halo update. This affects only padded decompositions with maskhalo_dyn=true and was picked up by an exact restart failure (CICE-Consortium#517)
Add this option instructing the Intel compiler not to wrap output at 80 characters. Reference: https://software.intel.com/content/www/us/en/develop/documentation/fortran-compiler-developer-guide-and-reference/top/compiler-reference/compiler-options/compiler-option-details/language-options/wrap-margin.html
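For reference, the option in question is Intel's `-no-wrap-margin`. A hypothetical Macros fragment showing where it would go (the file name and flag variable are illustrative; CICE machine files define their own flag variables):

```make
# Macros.<machine> fragment (illustrative):
# keep list-directed output on one line instead of wrapping at 80 columns
FFLAGS += -no-wrap-margin
```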
…sortium#528) * Add DMI Cray XC50 hpc (freya) to machines. Based on Banting * separation of Banting and Freya * Removed deprecated CPP and added gnu to freya
…ICE-Consortium#532) * Fix minor issues in documentation, key_ CPPs, bfbcomp return codes: quickstart documentation points to porting (CICE-Consortium#529); check additional return codes in the bfbcomp tool (CICE-Consortium#524); fix undefined variable in ice_init output (CICE-Consortium#520); add documentation about aliases (CICE-Consortium#523); remove key_ CPPs, which can be handled by passing the communicator through the interface (CICE-Consortium#498) * update alias documentation
…rtium#535) * Revert "deprecate upwind advection (CICE-Consortium#508)"
  This reverts commit 164fcfc.
* Deprecate ktransport and update advection="none" implementation
  Update documentation as needed. ktransport still exists as a namelist variable only; ktransport<=0 will override the advection setting, set advection='none', and print out a warning message. advection can only be set to remap, upwind, or none.
* update documentation
* update documentation
* un-deprecate ktransport and deprecate advection=none
  advection valid options are only remap and upwind; ktransport turns advection on/off via advection=none internally
* update documentation
* update documentation
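The ktransport/advection precedence described above can be sketched in a few lines of Python (`resolve_advection` is an illustrative helper mirroring the stated rule, not a CICE routine):

```python
def resolve_advection(ktransport, advection):
    """A non-positive ktransport disables transport regardless of the
    advection setting; otherwise only 'remap' and 'upwind' are valid."""
    if ktransport <= 0:
        # CICE also prints a warning in this case
        return "none"
    if advection not in ("remap", "upwind"):
        raise ValueError("advection must be 'remap' or 'upwind'")
    return advection
```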
…m#536) Add floediam and hfrazilmin as CICE namelist variables. Requires the latest version of Icepack. Update documentation. Icepack has two non-bit-for-bit changes, including a fix to multiple declarations of floeshape (CICE-Consortium/Icepack#342) and a fix to the convergence scheme in icepack_atmo (CICE-Consortium/Icepack#341).
…rtium#539) * Update the timeseries plots to accurately extract data from log files. Also add an option to plot SST from the python plotting script. This addresses issue CICE-Consortium#538 * Fix 2 typos * Remove SST and arwt tot mass from list of fields to plot
…ICE-Consortium#540) * Move sw_redist, sw_frac, sw_dtemp from thermo to shortwave namelist Update verbose diagnostic output Update icepack to also include sw_redist, sw_frac, sw_dtemp changes in icepack driver (not required) Update version number in prep for next minor release * update verbose output in ice_init.F90 * update verbose output * update close_boundaries documentation
* Update test suite, add complete set of jra55 tests
  jra55 tests for different grids: gx3 uses jra55_gx3, gx1 uses jra55_gx1. tx1 is already using jra55 by default (no changes to tests required). gbox* uses internal forcing, no jra55 data (no changes to tests required).
  Remove ocn_data_dir='default' and bgc_data_dir='default' in set_nml.jra55* files because they cause the bgc tests to fail; the bgc tests work with WOA bgc forcing data and jra55 physics forcing data.
  iobinary tests must use the old gx3 binary forcing; the jra55_gx3 netcdf data causes an abort because iobinary is tested with netcdf turned off. The old gx3 data is NCAR_bulk in binary format.
  alt01 with jra55 uses the alt01a setting; the difference is that use_leap_years is set to false in alt01a. It's set to true by default in the jra55 datasets.
* add wget of JRA55_gx3 forcing to travis yml
* add set_nml.alt01a
* Update options and tests to make JRA55 the default
  set_nml.gx3 and set_nml.gx1 are now JRA55 by default. Get rid of set_nml.jra55* files; add set_nml.gx3ncarbulk and set_nml.gx1coreii, which invoke the prior forcing datasets. Modify the test suites to do identical tests but with new options; compare to the prior implementation by creating a renamed regression baseline via a script and links to generate the new names. Use zz* for the new set_nml options temporarily just to make it easy to generate the new links (the zz ensures the new options are last in the list alphabetically). chmod -x all set_* files for consistency.
* rename zz test options
  This is done as a clean separate step so new baselines can be established to continue to evolve the test cases and allow regression testing.
* remove most old forcing test cases
* rename alt01a to alt01 in test options and suite
* zsal option for testing zsalinity code * add zsal tests to base_suite * fix history conditionals for fields outside main namelist; adjust zsal tests in base_suite
* solve_zsal bug fix and update to zsal test option * update icepack include zsal changes (not required) Co-authored-by: apcraig <anthony.p.craig@gmail.com>
* add Github Actions test-cice.yml script * Update README.md Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
…m#553) Change emissivity default to 0.985 (from 0.95) Change nblyr default to 1 (from 7) Add nblyr=7 to set_nml.zsal Update Icepack to include emissivity default changes Update documentation as needed
…ICE-Consortium#561) * Add DMI Cray XC50 hpc (freya) to machines. Based on Banting * Removed deprecated CPP and added gnu to freya * added flag to test evp1d * OMP private variables are not used in this loop and should be removed
* Update JRA55 forcing implementation
  Several bug fixes addressed:
  - Reading two timesteps instead of 1 now (was reading the same timestep into both slots)
  - Advancing files by year (was just cycling the initial year in a multi-year run)
  - Time interpolation updated (was not working in multi-year runs)
  - Interpolate 4 input states in time, but hold 3 input fluxes constant over the 3-hour forcing period
  - Additional output diagnosing the forcing filename as it evolves
* Add forcing documentation to user guide
* add jra55data schematic figure
* update forcing documentation, add Hunke07 reference
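The interpolation rule in the bullet list above (states interpolated in time, fluxes held constant over the 3-hour forcing period) can be sketched as follows. This is illustrative Python, not the CICE code; `interp_forcing` and its arguments are hypothetical names:

```python
def interp_forcing(rec1, rec2, sec_elapsed, period=10800.0, is_flux=False):
    """Blend the two forcing records bracketing the model time.
    State fields (winds, temperature, humidity, ...) are linearly
    interpolated; flux fields are piecewise constant over the
    3-hour (10800 s) forcing period."""
    if is_flux:
        return rec1                 # hold the flux constant
    w2 = sec_elapsed / period       # weight of the later record
    return (1.0 - w2) * rec1 + w2 * rec2
```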
* In the process of adding seabed2 parameterization
* Added calls for icepack_query_parameters
* Modified modules used in subroutine to match latest code (it now compiles)
* Changed expression 'basal stress' to 'seabed stress'
* Replaced 'basal' by the better expression 'seabed' in many places
* seabed everywhere... basalstress logical replaced by seabedstress
* Added kseabed in namelist for choosing seabed stress method
* Changed gacc to gravit (from icepack) and use of alphab from ice_in
* Cleaned up the seabed2 code, added comments
* Changing the doc for new grounding scheme
* Almost done with the doc
* Minor changes to the doc
* Modifs to the doc
* Value of sigmab was not the right one with respect to Dupont et al. (in prep.)
* For some reason the rebase did not work completely... replaced basalstress logical by seabedstress
* Other issue with the rebase for ice_init... fixed
* Minor mistake corrected... it compiles
* changed basalstress to seabedstress in options for tests
* Addressed some of the changes required for the PR
* Replace kseabed by character string for choice of method
* Updated the doc
* Cosmetic change ice_init
* Update doc/source/user_guide/ug_testing.rst
  Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
* Cosmetic changes to ice_init to align prints
* Some modifs requested for PR: same taub after 4 months (gx1)
* Done with requested changes to new grounding method: same taub after 4 months (gx1)
* Modifs to doc and new test of prob method Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
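For context, the grounding idea behind these seabed stress commits can be sketched with the linear-keel-depth flavor of the scheme, after Lemieux et al. (2016): ice grounds where its mean thickness exceeds a critical thickness set by the water depth, and the stress decays with the open-water fraction. This Python sketch is an assumption-laden illustration (function name, exact form, and parameter values k1, k2, alphab are illustrative, not necessarily what the PR implements):

```python
import math

def seabed_stress_coeff(hice, aice, hwater, k1=8.0, k2=15.0, alphab=20.0):
    """Illustrative linear-keel-depth seabed stress factor:
    zero in deep water, growing once the mean ice thickness 'hice'
    exceeds the critical thickness aice*hwater/k1, and damped by
    exp(-alphab*(1 - aice)) as the ice concentration 'aice' drops."""
    hcrit = aice * hwater / k1   # critical thickness for grounding
    return k2 * max(0.0, hice - hcrit) * math.exp(-alphab * (1.0 - aice))
```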
* Modified k1 value and some of the doc * Update doc/source/science_guide/sg_dynamics.rst Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com> Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
@SMoorthi-emc The latest CICE updated the default emissivity for snow and ice from 0.95 to 0.985. This is a namelist parameter so we do not have to update the emissivity we are currently using (0.95). However, since this PR will require new baselines (because of a bugfix in icepack), if we want to change the value used in our ice_in, it would be a good time. Are either (or neither) of these values more consistent w/ ATM? |
Denise,
Neither of them is consistent with ATM. There is an emissivity defined in the radiation in ATM, and we are making other parts consistent within the ATM. I made some updates to handle the fractional grid and passed them on to Michael Barlage, who is making additional updates.
Moorthi.
|
It is great, though, that we can specify it through the namelist. Can we have a different emissivity for ice and for snow on ice?
|
I will need to check whether CICE uses this single value for both, or whether there is another value set somewhere. I suspect that, since the namelist contains a single emissivity setting, it uses the same value for both. |
So the emissivity is for both snow and ice. I will leave the namelist at our current value (0.95) unless you suggest otherwise. |
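For reference, keeping the old value means setting it explicitly in ice_in rather than relying on the new default. A sketch of the relevant namelist fragment (the group name is quoted from memory and may differ by CICE version; verify against your ice_in):

```fortran
&thermo_nml
  emissivity = 0.95   ! keep pre-existing value; new CICE default is 0.985
/
```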
Issue #25
This update includes the following features and bugfixes:
adds non-b4b bugfix in icepack #341
implements updated scheme for landfast ice
adds min ice thickness for new frazil ice (hfrazilmin) as a namelist variable
changes default emissivity from 0.95 to 0.985