
dynamics: add implicit VP solver #491

Merged
merged 196 commits on Sep 22, 2020

Conversation

phil-blain
Member

@phil-blain phil-blain commented Jul 20, 2020

PR checklist

  • Short (1 sentence) summary of your PR:
    Add an implicit solver for the VP rheology based on Picard iteration

  • Developer(s):
    P. Blain, J-F Lemieux.

  • Suggest PR reviewers from list in the column to the right.

  • Please copy the PR test results link or provide a summary of testing completed below.
    Here are the results from the base_suite (Test-Results link):

    249 measured results of 249 total results
    216 of 249 tests PASSED
    0 of 249 tests PENDING
    2 of 249 tests MISSING data
    31 of 249 tests FAILED

    This is comparing the tip of this branch to c22c6d5 (clarify CICE citation info in doc (#487), 2020-07-13).

    The two missing data tests are:

    MISS daley_intel_smoke_gx3_4x1_diag1_run5day_thread bfbcomp daley_intel_smoke_gx3_8x2_diag1_run5day missing-data
    MISS daley_intel_smoke_gx3_4x1_dynpicard_medium compare c22c6d5  -1 -1 -1 missing-data

    The second one is expected: it's the test I add in this branch, so it does not yet exist on master. The first one is unexpected; I'll look into that. EDIT 24/07: see #491 (comment), this was simply a timing issue between the two tests.

    The 31 failing tests are all "different-data". I expected this branch to not be bit for bit since I did some refactoring of the dynamics code to increase code reuse. I've not yet investigated whether the answers change a lot or just at machine precision level. I'll update the PR when that's done.

    EDIT 22/09/20: The failing tests seem to be those where both OpenMP (threading) and compiler optimization are turned on, see #491 (comment).

    I've also run the QC test, comparing an EVP run and a VP run (with 20 nonlinear iterations instead of the default of 4). The QC test passes:

    $ ./configuration/scripts/tests/QC/cice.t-test.py ~/data/site4/cice/runs/daley_intel_smoke_gx1_32x1_medium_qc.qc/     ~/data/site4/cice/runs/daley_intel_smoke_gx1_160x1_dynpicard_medium_nonlin20_qc.qc/
    INFO:__main__:Running QC test on the following directories:
    INFO:__main__:  /home/phb001/data/site4/cice/runs/daley_intel_smoke_gx1_32x1_medium_qc.qc/
    INFO:__main__:  /home/phb001/data/site4/cice/runs/daley_intel_smoke_gx1_160x1_dynpicard_medium_nonlin20_qc.qc/
    INFO:__main__:Number of files: 1825
    INFO:__main__:2 Stage Test Passed
    INFO:__main__:Quadratic Skill Test Passed for Northern Hemisphere
    INFO:__main__:Quadratic Skill Test Passed for Southern Hemisphere
    INFO:__main__:Creating map of the data (ice_thickness_daley_intel_smoke_gx1_32x1_medium_qc.qc.png)
    INFO:__main__:Creating map of the data (ice_thickness_daley_intel_smoke_gx1_160x1_dynpicard_medium_nonlin20_qc.qc.png)
    INFO:__main__:Creating map of the data (ice_thickness_daley_intel_smoke_gx1_32x1_medium_qc.qc_minus_daley_intel_smoke_gx1_160x1_dynpicard_medium_nonlin20_qc.qc.png)
    INFO:__main__:
    INFO:__main__:Quality Control Test PASSED

    EDIT 13/08/2020: The QC test is bit for bit in the default decomposition (-p 44x1). The QC test passes in a 160x2 decomposition.

  • How much do the PR code changes differ from the unmodified code?

    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on Icepack or any other models?

    • Yes
    • No
  • Does this PR add any new test cases?

    • Yes: I've added a test to the base_suite with the setting dynpicard (i.e. using the implicit solver).
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/. A test build of the technical docs will be performed as part of the PR testing.)

  • Please provide any additional information or relevant details below:

We've been talking about it for a long time, and it's finally ready for review!

This PR adds the implicit VP solver that JF and I have been working on for the last 2 years.

The VP solver is implemented in a new module in dynamics/, ice_dyn_vp.F90. Here are some points that I would like to highlight:

  • The subroutine imp_solver is the main driver (equivalent to subroutine evp for EVP), and should be read side-by-side with evp for ease of reviewing. We are aware that there is a lot of repetition between imp_solver and evp, but there was also already a lot of repetition between evp and eap in ice_dyn_eap.F90. If everyone is on board, I could work on refactoring this to increase code reuse (although that would be in a subsequent PR).
  • The nonlinear equations resulting from the VP rheology discretization are solved using Picard iteration (see the sketch after this list). This is done in subroutine anderson_solver, which also implements Anderson acceleration, an acceleration method for fixed-point iterations (Picard iteration is a fixed-point iteration); Picard iteration is implemented as a special case of Anderson acceleration. At the moment the Anderson solver is not parallelized, so it is not documented; the Picard solver is parallelized and is the only one described in the doc. I've also added an abort in the Anderson code if this method is used in parallel. The namelist setting algo_nonlin can be used to change from 'picard' to 'anderson'; it is likewise not documented and defaults to 'picard'.
  • The namelist setting use_mean_vrel is also not documented because it should always be set to .true. for the Picard solver for faster convergence. It is present in the namelist because it has to be set to false for the Anderson solver.
  • We set a default of 4 nonlinear iterations (maxits_nonlin), 1E-2 for the tolerance of the FGMRES solver (reltol_fgmres) and 5 PGMRES iterations (dim_pgmres, maxits_pgmres). @JFLemieux73 tells me this is in line with how implicit VP solvers are used in other models (e.g. MITgcm). This means that by default, the code does not iterate until the solution is "converged". For numerics studies, I've added an options file set_nml.nonlin5000 that sets maxits_nonlin to 5000 so that the code iterates until the solution reaches the desired tolerance (reltol_nonlin).
  • We are aware there is a lot of code duplication between some subroutines in ice_dyn_vp, and also between the stress and stress_eap subroutines in ice_dyn_[ea]vp, relating to the different computations for rheology. This is documented (briefly) in Refactor rheology (stress) computations phil-blain/CICE#36. I plan to refactor the code to reduce code duplication in a subsequent PR.
  • The subroutines fgmres and pgmres are also very similar. fgmres implements the FGMRES linear solver, and pgmres implements the right-preconditioned GMRES linear solver, which is used as a preconditioner for FGMRES. In theory GMRES is a special case of FGMRES, with a fixed preconditioner, so the code could be refactored so that they are both implemented in the same subroutine. However, to have only one subroutine would mean that this subroutine would have to be a recursive subroutine (since fgmres calls pgmres). I'm not sure of the performance implication of that, so as a first step I used separate subroutines.
  • The Anderson solver needs LAPACK at the moment, but the Picard solver does not. The code compiles out of the box without LAPACK since I've protected the LAPACK calls with a preprocessor macro, CICE_USE_LAPACK. This preprocessor macro can be activated using the setting set_env.lapack, although this is not documented (since the Anderson solver itself is not documented).
  • I've tried to make clear commit messages that explain why I made the changes I made in each commit. If you want to know why I made a particular change, you can use the following commands to read commit messages for changes touching specific files:
    git fetch https://github.com/phil-blain/CICE.git parallel-picard
    git log upstream/master..FETCH_HEAD <path/to/file>
    Each message can also be read in the "Commits" tab of the PR by clicking the ... button next to the summary line.
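
In a nutshell, the VP discretization yields a nonlinear system A(u)u = b(u) for the ice velocity u, and Picard iteration simply lags the nonlinear coefficients (a sketch in generic notation, not taken verbatim from the code or the doc):

    A(u^{k-1}) u^k = b(u^{k-1}),    k = 1, 2, ..., maxits_nonlin

where each linearized system is solved with FGMRES (itself preconditioned with PGMRES). Equivalently, u^k = F(u^{k-1}) with F(u) = A(u)^{-1} b(u); Anderson acceleration speeds up this fixed-point iteration by combining a history of previous iterates and residuals F(u) - u.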

@phil-blain
Member Author

phil-blain commented Jul 20, 2020

I've updated the PR description with links to the specific parts of the doc that I modified.

@phil-blain
Member Author

I've updated the PR description with the result of the base_suite.

Contributor

@eclare108213 eclare108213 left a comment

Awesome!

ice_HaloUpdate_vel is much more elegant than the loading/unloading of fld2. Is this new routine usable for any vector quantity? If so, then it could be named ice_HaloUpdate_vec and put in the boundary module as a generic routine.

Are you sure about the factors of 2 in the comment lines for shearing strain rate? I would think that someone else would have noticed that, after all this time, if it were wrong. (But it might be wrong!)

Rather than imp_solver, please spell it out as implicit_solver. ('imp' means devil or rascal or troublemaker, etc, not exactly what I'd want in the code. This reminds me of @njeffery's flag called 'solve_sin' which at least had a positive connotation...)

Now I'm going to be a bit lazy: what is the difference between global_sum and global_sums in ice_global_reductions.F90?

If the Anderson solver isn't working yet, then why add -llapack to the macros now?

Contributor

@eclare108213 eclare108213 left a comment

Would the set_nml.diagimp and set_nml.dynpicard and set_nml.nonlin5000 options ever be used with kdyn /= 3? Should they have kdyn=3?

@phil-blain
Member Author

phil-blain commented Jul 21, 2020

Awesome!

ice_HaloUpdate_vel is much more elegant than the loading/unloading of fld2. Is this new routine usable for any vector quantity? If so, then it could be named ice_HaloUpdate_vec and put in the boundary module as a generic routine.

Good point. I think it depends on what kind of halo updates are needed. I replaced all occurrences of fld2 by calls to ice_HaloUpdate_vel, and they were all for the velocity field. One reason I prefer to keep it in ice_dyn_shared instead of ice_boundary is that this way the code is not duplicated between the serial and MPI versions.
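
For context, here is roughly what the new routine does (a simplified sketch, not the exact code):

subroutine ice_HaloUpdate_vel(uvel, vvel, halo_info_mask)
   ! gather u and v into one array, do a single (possibly masked)
   ! halo update, then scatter back
   use ice_boundary, only: ice_halo, ice_HaloUpdate
   use ice_domain, only: halo_info, maskhalo_dyn, nblocks
   real (kind=dbl_kind), dimension(:,:,:), intent(inout) :: uvel, vvel
   type (ice_halo), intent(in) :: halo_info_mask
   real (kind=dbl_kind), dimension(nx_block,ny_block,2,max_blocks) :: fld2
   integer (kind=int_kind) :: iblk

   do iblk = 1, nblocks
      fld2(:,:,1,iblk) = uvel(:,:,iblk)
      fld2(:,:,2,iblk) = vvel(:,:,iblk)
   enddo
   if (maskhalo_dyn) then
      call ice_HaloUpdate(fld2, halo_info_mask, field_loc_NEcorner, field_type_vector)
   else
      call ice_HaloUpdate(fld2, halo_info, field_loc_NEcorner, field_type_vector)
   endif
   do iblk = 1, nblocks
      uvel(:,:,iblk) = fld2(:,:,1,iblk)
      vvel(:,:,iblk) = fld2(:,:,2,iblk)
   enddo
end subroutine ice_HaloUpdate_vel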

Are you sure about the factors of 2 in the comment lines for shearing strain rate? I would think that someone else would have noticed that, after all this time, if it were wrong. (But it might be wrong!)

I think my change is correct. This is in commit fb4a49c:

All occurrences of 'shearing strain rate' as a comment before the
computation of the quantities `shear{n,s}{e,w}` define these quantities
as

    shearing strain rate  =  e_12

However, the correct definition of these quantities is 2*e_12.

Bring the code in line with the definition of the shearing strain rate
in the documentation (see the 'Internal stress' section in
doc/source/science_guide/sg_dynamics.rst), which defines

    D_S = 2\dot{\epsilon}_{12}

The formatted doc that I refer to is here: https://cice-consortium-cice.readthedocs.io/en/latest/science_guide/sg_dynamics.html#internal-stress
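
For reference, with u and v the velocity components, the standard strain-rate definition gives

    \dot{\epsilon}_{12} = \frac{1}{2}\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right),  so that  D_S = 2\dot{\epsilon}_{12} = \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}

i.e. a quantity computed as du/dy + dv/dx is the shearing strain rate D_S = 2*e_12, not e_12 itself.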

Rather than imp_solver, please spell it out as implicit_solver. ('imp' means devil or rascal or troublemaker, etc, not exactly what I'd want in the code.

Will gladly change that, 'imp' was not in my English vocabulary :P
EDIT 23/07: done, just pushed a new commit.

This reminds me of @njeffery's flag called 'solve_sin' which at least had a positive connotation...)

Haha that made me smile :P

Now I'm going to be a bit lazy: what is the difference between global_sum and global_sums in ice_global_reductions.F90?

global_sum is for a single scalar, whereas global_sums is for several scalars, i.e. on each proc i I have an array [a_i, b_i, c_i, ...] and I want to compute the distributed sums of all members of this "vector": [\sum_i a_i, \sum_i b_i, \sum_i c_i, ...]

If the Anderson solver isn't working yet, then why add -llapack to the macros now?

The Anderson solver is working well in serial mode, but it's not parallelized yet. I'm just adding it to the macros for my own workstation at work, and for the conda port. Adding it to the conda port does not have any downside that I can see, since it's just an additional package that gets installed in the conda environment. Plus, the code is protected by a CPP, so even if -llapack is there at link time, the linker won't link any subroutines from the library if the CPP was not activated.
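
The guard looks like this (illustrative sketch; the actual LAPACK routine called in ice_dyn_vp.F90 may differ):

#ifdef CICE_USE_LAPACK
      ! solve the small least-squares problem for the Anderson weights
      call dgels('N', m, n, 1, A, m, b, m, work, lwork, info)
#else
      call abort_ice(subname//': CICE was not compiled with LAPACK '// &
                     '(algo_nonlin = anderson requires LAPACK)')
#endif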

Would the set_nml.diagimp and set_nml.dynpicard and set_nml.nonlin5000 options ever be used with kdyn /= 3? Should they have kdyn=3?

set_nml.dynpicard does set kdyn=3. As for set_nml.diagimp and set_nml.nonlin5000, it's true that they only make sense with kdyn=3, but I preferred to create "narrow-purpose" options that can then be combined, i.e.

./cice.setup ... -s dynpicard             # default : 4 nonlin. iterations, no convergence diagnostics
./cice.setup ... -s dynpicard,diagimp     # add convergence diagnostics
./cice.setup ... -s dynpicard,nonlin5000  # max 5000 nonlin. iterations
# etc

I'm open to changing that however.

Thanks for this review!

@dabail10
Contributor

Dumb question. Does this restart on a tripole grid? There are sometimes issues with vector halo updates and the tripole seam.

Dave

@phil-blain
Member Author

phil-blain commented Jul 21, 2020

@dabail10 good point! So far I have only tested it on the gx3 and gx1 grids. It is on my todo list to test it with the tx1 grid.

I will add a caveat to the documentation.

EDIT 23/07: I've just pushed a commit adding a caveat for the tx1 grid and threading (which I've not yet tested with this solver either).

@@ -392,6 +372,9 @@ subroutine eap (dt)
call ice_HaloMask(halo_info_mask, halo_info, halomask)
endif

! velocities may have changed in dyn_prep2
call ice_HaloUpdate_vel(uvel, vvel, halo_info_mask)
Contributor

halo_info_mask is not defined unless maskhalo_dyn is true. I find it a little awkward that halo_info_mask is passed in here, and then in ice_HaloUpdate_vel, halo_info_mask is only used if maskhalo_dyn is true; otherwise halo_info is used (which is accessed inside the subroutine via a "use" statement).

I'm wondering if a clearer implementation might not be

if (maskhalo_dyn) then
  call ice_HaloUpdate_vel(uvel,vvel,halo_info_mask)
else
  call ice_HaloUpdate_vel(uvel,vvel,halo_info)
endif

and then remove the if check inside ice_HaloUpdate_vel. Also, would it make sense to generalize the ice_HaloUpdate_vel so it looks more like

call ice_HaloUpdate_vel(f1, f2, halo_info, field_loc)

which would make it more usable and more closely match the other HaloUpdate interfaces. I'm also torn about whether this should be in ice_boundary or not. If I had a choice, I'd put it in ice_boundary now and then create a separate issue that targeted improved reusability of (or even merging of) the mpi and serial comm directories.

Member Author

I agree this is a little awkward. Today I've been working on refactoring that; it's not as easy as it looks.

I tried just moving the subroutine to module ice_boundary, but that creates a circular dependency between ice_domain and ice_boundary (because I have to use ice_domain in the subroutine to get access to halo_info, maskhalo_dyn, and nblocks).

So I'll try a different approach tomorrow, keeping the subroutine in ice_dyn_shared but generalizing the interface a bit, as you suggest.

Member Author

I tried another approach in af8d03e. Since, as I explained above, moving the ice_HaloUpdate_vel subroutine to ice_boundary does not appear feasible, I think that leaving it in ice_dyn_shared makes more sense. Then it is clear that it's only for the velocity, so field_loc_NEcorner stays hardcoded.

As I explain in the commit message of af8d03e, I chose to move the declaration of halo_info_mask to ice_dyn_shared as a module variable. I'm open to undoing that if we feel it's not something we want. It does simplify the interfaces in ice_dyn_vp since halo_info_mask does not have to be passed down the call stack.

Contributor

I had a quick look at af8d03e. I am pretty uncomfortable with halo_info_mask becoming reusable module data. halo_info is set at initialization and is static; halo_info_mask is very dynamic. In fact, the correct version may vary in different parts of the code depending on what you want to mask. It certainly varies in time. My preference continues to be that halo_info_mask be local: computed when needed locally, used locally, passed into other routines, and then destroyed. There is too much risk someone is going to use/reuse that module data when it's going to contain stale data or nothing at all. I also don't like the disconnect between computing halo_info_mask in one place and how delicately that interacts with the halo update in another part of the code.

I continue to believe that if we are going to want an ice_HaloUpdate_vel, it should be more general (call ice_HaloUpdate_vel(uvel, vvel, halo_info, field_loc)) and in ice_boundary.F90. If there are circular dependency issues, then maybe we need to think about moving some stuff out of ice_domain.F90 so it can be used in ice_boundary.F90.

I understand that aggregating uvel and vvel into a single variable might be part of the problem. We can always haloUpdate each variable separately if we don't like the look of it. All of this (creating the maskhalo and copying the two fields into one) is done for performance. I feel like the new implementation is creating more confusion/problems than it fixes. I'm not sure what problem we're trying to fix.

The other thing is maybe we could create a method that creates "f2" (call copy2to1(f2,uvel,vvel)), then call ice_HaloUpdate as before with an if test on the maskhalo, then a copy1to2(f2,uvel,vvel). Is the problem that you don't like having the copy in/out of f2 outside a reused subroutine?

Having said all that, I think this can maybe be fixed by changing some names. I think the use of ice_HaloUpdate_vel and halo_info_mask is part of my problem. Those are way too generic and are giving me heartburn. If ice_HaloUpdate_vel were renamed to something like ice_dyn_velhalo and the module data halo_info_mask were renamed to something like ice_dyn_velhalomask, I could handle this a lot better. I still think ice_dyn_velhalomask should not be module data, and prefer an implementation more like

if (maskhalo_dyn) then
  call ice_dyn_velhalo(uvel,vvel,halo_info_mask,field_loc_NEcorner)
else
  call ice_dyn_velhalo(uvel,vvel,halo_info,field_loc_NEcorner)
endif

which means we could keep the name halo_info_mask, as it's no longer module data and just local. We also get rid of the maskhalo_dyn check inside the "ice_HaloUpdate_vel" call.

I guess in general, I still prefer to see a more generic interface created in this case. If we want to continue down this path, I think at least we need to come up with names that don't feel out of place.

Member Author

I am pretty uncomfortable with halo_info_mask becoming reusable module data.

I had a feeling you would not be comfortable with this change. In fact, I was unsure myself if I liked my solution, because as you say it makes it easy for callers of "ice_HaloUpdate_vel" to forget that halo_info_mask has to have been correctly created beforehand. I'll revert that change.

I feel like the new implementation is creating more confusion/problems than it fixes. I'm not sure what problem we're trying to fix.

My main goal was reducing code duplication.

I continue to believe that if we are going to want an ice_HaloUpdate_vel, it should be more general (call ice_HaloUpdate_vel(uvel, vvel, halo_info, field_loc)) and in ice_boundary.F90. If there are circular dependency issues, then maybe we need to think about moving some stuff out of ice_domain.F90 so it can be used in ice_boundary.F90.

OK. I'll try to work on that. The other thing that felt awkward to me when I tried this solution is that the fld2 array used to combine uvel and vvel needs to be allocated somewhere (and I think for performance we wouldn't want to allocate it inside the subroutine each time it is called, but I could be wrong). If we move the subroutine to ice_boundary, then it makes more sense for fld2 to be private module data in ice_boundary, but there is no existing alloc_boundary subroutine, so it would have to be added and all drivers updated to call it, which I wanted to avoid...

Contributor

I think there are a number of issues to get this refactoring right. The allocation issue is a good point. It's better if fld2 is local and we can reuse it during the loop. I understand there could be a bit more reuse, and I'm supportive of that. Sorry for being a pain, I just want to make sure the implementation is relatively clean if we do change it.

Member Author

Yes I understand. I'll come back to this the week after next.

Member Author

I re-immersed myself in this today and came to the conclusion that a big refactor would involve spending more time on this than I want to/can invest. So I reverted the changes I made (removing ice_HaloUpdate_vel) and simply introduced two new subroutines, stack_velocity_field and unstack_velocity_field, like you suggested above.

There is still a lot of copied code but at least the amount of code reuse is increased a bit.

The changes are here: 6afd6f4.
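
In essence, the helpers just do the copies (a simplified sketch of the new subroutines; the actual code lives in ice_dyn_shared):

subroutine stack_velocity_field(uvel, vvel, fld2)
   ! load velocity components into array for boundary updates
   real (kind=dbl_kind), dimension(:,:,:),   intent(in)  :: uvel, vvel
   real (kind=dbl_kind), dimension(:,:,:,:), intent(out) :: fld2
   integer (kind=int_kind) :: iblk

   do iblk = 1, nblocks
      fld2(:,:,1,iblk) = uvel(:,:,iblk)
      fld2(:,:,2,iblk) = vvel(:,:,iblk)
   enddo
end subroutine stack_velocity_field

unstack_velocity_field does the reverse copies after the halo update, so callers keep the explicit ice_HaloUpdate call (with the maskhalo_dyn if test) but without hand-written load/unload loops.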

@@ -55,6 +56,12 @@ module ice_global_reductions
global_sum_scalar_int
end interface

interface global_sums
Contributor

I too find this interface a little perplexing. All the public interfaces in this module are for fields. This one is for a vector of scalars, where all scalars are added together across all pes (if I understand correctly). I would like to suggest that a global_scalar_sum interface would be better, with the local vector sum passed into it; i.e., call it global_scalar_sum instead of global_sums. Or if it remains a vector, then we should be able to clearly differentiate an interface where the sum of the vectors produces a vector vs a scalar. I think we want to differentiate these not just through different module procedure interfaces, but actually via clearer interface names. It may be that we never implement the global sum of vectors to vector, but it makes sense to me to put in place a naming convention that makes it clear how to do so.

Contributor

One other comment. Is the "global_sums" used for diagnostics or is it needed in the solver? In either case, I think we need to target the ability to get "bit-for-bit" sums on different decompositions and pe counts using the bfbflag or some other way if possible. This bit-for-bit capability is important at least for testing. I haven't looked closely at the branch, but I'd be happy to see if we can leverage the reprosum algorithm to do this. I think what it also implies is that the per pe values in global_sums cannot already be a local reduction of a field.

Member Author

@phil-blain phil-blain Aug 13, 2020

OK, I'll try to explain what global_sums does in a clearer way (I don't mind changing the name):
in fact it is simply an MPI_ALLREDUCE. Referencing the interface:

function global_sums_dbl(vector, dist) &
result(globalSums)

Here is the computation that's taking place:

processes |      "vector"      |  "globalSums"
----------+--------------------+-------------------------------------------------
p1        |  a1, b1, ..., x1   |  a1+a2+...+an, b1+b2+...+bn, ..., x1+x2+...+xn
p2        |  a2, b2, ..., x2   |  same as above
...       |  ...               |  same as above
pn        |  an, bn, ..., xn   |  same as above

I already use the compute_sums_dbl subroutine, so any bfbflag option would already work.
I looked at the code of compute_sums_dbl and understood that it receives a 2D array, reduces it to a 1D vector by summing along columns, and passes the resulting vector to MPI_ALLREDUCE. So this is exactly what I want, and I simply create a 2D array "work" of shape (1,m) in global_sums_dbl to accommodate the interface of compute_sums_dbl:

numElem = size(vector)
allocate(work(1,numElem))
work(1,:) = vector
globalSums = c0
call compute_sums_dbl(work,globalSums,communicator,numProcs)

To answer your question, this subroutine is used in the solver if one chooses classical Gram-Schmidt as the orthogonalization method.

As for naming, there are already subroutines global_sum_scalar_{dbl,real,int} (in the interface global_sum) that do a global reduction for scalar values (i.e. a single scalar value on each proc, all summed together across procs to obtain the same scalar value on all procs). So I would suggest that I rename the subroutine to global_sum_vector_dbl and the interface to global_sum_vector so that everything is clear. Would that work?

Contributor

Sounds fine. I think it definitely needs a more descriptive interface name. Looking at what we have now,

global_sum_dbl ! sums a field across all pes to form a single scalar
global_sum_scalar_dbl ! sums a scalar across all pes to form a single scalar

global_sum_vector_dbl might suggest that it is summing a vector across all pes to form a single scalar, but it's not. What it's really doing is global_sum_scalar_dbl but for multiple scalars at the same time. In other words, summing a vector across all pes to form a vector. How about if we call it global_sum_vector_vector_dbl and put it under the generic "global_sum" interface? Would there be a problem with uniqueness in the interface if we tried to do that? At the same time, you could implement the global_sum_vector_dbl to acknowledge the difference.

Member Author

My thinking is that since it performs a different kind of action, it makes sense for it to be a separate interface... that's why I'm suggesting not putting it in the generic global_sum (there would be no interface conflict if we did, though). Maybe global_allreduce_sum (this is closer to MPI_ALLREDUCE)? This way we would have:

global_sum -> interface to reduce a distributed something (presently, a 2D array or a scalar) to a single scalar
global_allreduce_sum -> interface to reduce a distributed something (in this case a 1D vector) to something of the same shape (a 1D vector)

and then the subroutine itself could be global_allreduce_sum_vector_dbl.
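
i.e. something like (a hypothetical sketch, reusing the existing compute_sums_dbl helper):

interface global_allreduce_sum
   module procedure global_allreduce_sum_vector_dbl
end interface

function global_allreduce_sum_vector_dbl(vector, dist) &
         result(globalSums)
   ! reduce a distributed 1D vector to the vector of its global sums
   real (kind=dbl_kind), dimension(:), intent(in) :: vector
   type (distrb), intent(in) :: dist
   real (kind=dbl_kind), dimension(size(vector)) :: globalSums
   real (kind=dbl_kind), dimension(:,:), allocatable :: work

   allocate(work(1,size(vector)))
   work(1,:) = vector
   globalSums = c0
   ! communicator and numProcs are obtained from 'dist' (details omitted)
   call compute_sums_dbl(work, globalSums, communicator, numProcs)
   deallocate(work)
end function global_allreduce_sum_vector_dbl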

What do you think?

Contributor

I could be convinced that there should be a separate generic interface. One other thing we could do is update the internal module interface names (not global_sum) to make them clearer. We could come up with a subroutine naming scheme that is more general and extensible. Maybe

global_sum_dbl would become global_sum_field_scalar_dbl
global_sum_scalar_dbl would become global_sum_scalar_scalar_dbl

and the new method could be called

global_sum_vector_vector_dbl

with room to add

global_sum_vector_scalar_dbl

in the future if needed. This should only affect the module and nothing else. I guess we'd do it in both the serial and mpi versions. Otherwise, the global_allreduce_sum seems OK for the generic interface name if we keep those methods separate.

Member Author

I went for a separate interface since I feel it's simplest: e30da2b. We can always revisit this in the future.

@phil-blain
Member Author

Thanks @apcraig. I'll look into your comments in the coming days.

I'm investigating the non b4b tests in the base_suite, trying to find a pattern. I suspect this is due to the refactoring I did in 6af142e, but I'm not 100% sure.

I noticed all failing tests are threaded (i.e. -p NxM, M>1, except daley_intel_smoke_gx3_4x1_diag1_run5day_thread, but this job also sets ICE_THREADED to true.)
However, some passing tests are also threaded (./results.csh | grep -E 'PASS .*[0-9]+x[2-9]_.* compare'). Among those, only the following have kdyn=1 (and kevp_kernel unset):

$ ./results.csh | \grep -E 'PASS .*[0-9]+x[2-9]_.* compare'|\grep -v boxadv |\grep -v boxdyn | \
\grep -v alt01 |\grep -v alt03 |\grep -v alt04
PASS daley_intel_smoke_gx3_1x4_debug_diag1_run2day compare c22c6d5 118.90 63.76 28.25
PASS daley_intel_restart_gx3_8x2_debug compare c22c6d5 114.49 59.09 26.38
PASS daley_intel_restart_gx3_8x2_alt02_debug_short compare c22c6d5 84.69 69.08 7.89
PASS daley_intel_smoke_gx3_4x4_alt05_debug_short compare c22c6d5 71.00 43.55 12.86
PASS daley_intel_smoke_gbox128_4x4_boxrestore_debug_short compare c22c6d5 521.69 480.85 10.54
PASS daley_intel_smoke_gx3_8x2_bgcz_debug compare c22c6d5 61.68 9.87 20.30
PASS daley_intel_smoke_gx3_4x2_debug_diag24_fsd1_run5day compare c22c6d5 165.32 86.84 45.15
PASS daley_intel_restart_gx3_4x2_debug_fsd12_short compare c22c6d5 195.67 85.97 64.70

and these are all compiled with debug flags. None of the failing tests are compiled in debug mode. However, some passing tests with EVP dynamics turned on are compiled in non-debug mode ($ ./results.csh | \grep -E 'PASS .* compare'|\grep -v boxadv|\grep -v boxdyn |\grep -v alt01 |\grep -v alt03 |\grep -v alt04 | \grep -v boxslotcyl |\grep -v debug).

So I'm thinking of a compiler optimization that is only triggered when threading is on... but maybe my logic is wrong.

I ran the QC test with the decomposition suggested in the doc (-p 44x1) and it was bit for bit. I'm re-running it with -p 40x4 to see if this passes QC (I'm expecting this one to not be b4b, according to the above).

@phil-blain
Member Author

I checked the unexpected missing-data test, MISS daley_intel_smoke_gx3_4x1_diag1_run5day_thread bfbcomp daley_intel_smoke_gx3_8x2_diag1_run5day missing-data. This test does a bit for bit comparison (bfbcomp) with another test, and this other test was not finished when this one ran, and so it was indeed "missing-data". I re-ran the test and it passed.

@apcraig
Contributor

apcraig commented Jul 24, 2020

This test does a bit for bit comparison (bfbcomp) with another test, and this other test was not finished when this one ran, and so it was indeed "missing-data".

This happens more than I'd like. I've tried to introduce delays and other things to try to make sure the first test is finished before other tests that need the results, but haven't gotten this to work well yet. I may need to rethink the implementation at some point.

@phil-blain
Member Author

This happens more than I'd like. I've tried to introduce delays and other things to try to make sure the first test is finished before other tests that need the results, but haven't gotten this to work well yet. I may need to rethink the implementation at some point.

I think PBS has a syntax for jobs that depend on other jobs, and other schedulers probably do too. But whether we want to invest time in developing our own meta-syntax to abstract away the implementation differences is another question...
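
(For PBS that would be something like qsub -W depend=afterok:<jobid>; the Slurm equivalent is sbatch --dependency=afterok:<jobid>.)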

@phil-blain
Member Author

About the non bit-for-bit-ness: I did a QC test with a 160x2 configuration, and it passes. I'll gladly test other decompositions if we feel it is necessary.

@phil-blain phil-blain force-pushed the parallel-picard branch 2 times, most recently from a29fdda to 09b2e2f Compare August 24, 2020 15:40
@phil-blain phil-blain marked this pull request as ready for review August 24, 2020 20:59
@apcraig apcraig mentioned this pull request Sep 25, 2020
@apcraig
Contributor

apcraig commented Sep 25, 2020

#517 addresses the latest bug. An unmasked halo update was recoded to a masked halo update and this seems to have created the issue. This was fixed in evp, eap, and vp implementations.

@apcraig
Contributor

apcraig commented Sep 25, 2020

One other comment. I did run the problematic case (padded blocks with maskhalo) with kdyn=3, and it aborted with a "bad departure points" error. I suspect there are edge configurations that we are testing with evp and/or eap that have not yet been tested with vp. It might be good to run a full test suite (all suites) with kdyn=3 to see what happens, or to otherwise expand/add the test coverage with kdyn=3. It raises questions about how best to test big new options. Running the current (even full) test suite may not be adequate in general. For something like this, kdyn=3 probably needs to be run under many different configurations (maybe technical and science), and at least a few new test cases probably need to be added. And this highlights again the question about how comprehensive we are in our current testing. We test kdyn=1 (evp) with lots of configurations (like padded blocks), but we don't do that with kdyn=2 (eap). We have identified some missing tests now (testing with additional decomps with eap and vp); I will try to create a new PR to add those tests as well.

@apcraig
Contributor

apcraig commented Sep 25, 2020

I ran a decomp test suite with evp, eap, and vp-picard. I did not test vp-anderson as that is not ready to use out of the box. Below are the results. evp and eap pass all tests with the various decomps. However, vp-picard is more of a mixed bag. Some configurations don't run at all and some don't restart exactly. The test suite is running the same tests, just changing the dynamics option. I will create a new issue, and that issue can be closed if it's not of interest. I also understand that I may not have set up the tests properly, or that vp-picard may still be under development, but this gives us a data point. Looking at one of the failed runs, I see

(abort_ice)ABORTED: 
(abort_ice) error = (horizontal_remap)ERROR: bad departure points
PASS cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2 run 10.46 2.72 5.35
PASS cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2 test 
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_thread run 38.01 8.49 21.03
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_thread test 
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_thread run 12.50 2.70 6.91
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_thread test 
PASS cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve run 12.03 4.06 4.37
PASS cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve test 
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin run 7.95 2.58 2.51
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin test 
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin run 11.20 2.46 6.08
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin test 
PASS cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin run 11.25 3.11 4.81
PASS cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin test 
PASS cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_short run 30.55 13.72 7.23
PASS cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_short test 
PASS cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2 run 4.87 1.67 1.78
PASS cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2 test 
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_maskhalo run 7.32 2.31 2.42
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_maskhalo test 
PASS cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin run 28.46 9.31 17.47
PASS cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin test 

PASS cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2_dyneap run 32.14 24.53 5.23
PASS cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2_dyneap test 
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_dyneap_thread run 110.39 80.04 21.20
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_dyneap_thread test 
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_dyneap_thread run 36.50 26.83 6.77
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_dyneap_thread test 
PASS cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve_dyneap run 46.75 38.86 4.30
PASS cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve_dyneap test 
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dyneap run 24.54 19.22 2.75
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dyneap test 
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin_dyneap run 33.14 24.53 5.95
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin_dyneap test 
PASS cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin_dyneap run 36.26 28.27 4.64
PASS cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin_dyneap test 
PASS cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_dyneap_short run 116.90 100.69 7.68
PASS cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_dyneap_short test 
PASS cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2_dyneap run 14.25 11.05 1.84
PASS cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2_dyneap test 
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dyneap_maskhalo run 23.86 18.80 2.37
PASS cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dyneap_maskhalo test 
PASS cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin_dyneap run 102.08 82.70 38.28
PASS cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin_dyneap test 

PASS cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2_dynpicard run 15.13 7.46 5.30
FAIL cheyenne_intel_restart_gx3_4x2x25x29x4_dslenderX2_dynpicard test 
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_dynpicard_thread run 47.71 18.42 20.95
PASS cheyenne_intel_restart_gx3_1x1x50x58x4_droundrobin_dynpicard_thread test 
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_dynpicard_thread run 15.70 6.00 6.92
PASS cheyenne_intel_restart_gx3_4x1x25x116x1_dslenderX1_dynpicard_thread test 
FAIL cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve_dynpicard run
FAIL cheyenne_intel_restart_gx3_6x2x4x29x18_dspacecurve_dynpicard test 
FAIL cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dynpicard run
FAIL cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dynpicard test 
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin_dynpicard run 14.13 8.56 6.04
PASS cheyenne_intel_restart_gx3_6x2x50x58x1_droundrobin_dynpicard test 
FAIL cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin_dynpicard run 
FAIL cheyenne_intel_restart_gx3_4x2x19x19x10_droundrobin_dynpicard test 
FAIL cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_dynpicard_short run
FAIL cheyenne_intel_restart_gx3_1x20x5x29x80_dsectrobin_dynpicard_short test 
PASS cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2_dynpicard run 15.44 12.28 2.09
FAIL cheyenne_intel_restart_gx3_16x2x5x10x20_drakeX2_dynpicard test 
FAIL cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dynpicard_maskhalo run
FAIL cheyenne_intel_restart_gx3_8x2x8x10x20_droundrobin_dynpicard_maskhalo test 
FAIL cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin_dynpicard run
FAIL cheyenne_intel_restart_gx3_1x4x25x29x16_droundrobin_dynpicard test 

@phil-blain
Member Author

You are right that I did limited testing with the various options. It's on my list to do more.

apcraig added a commit that referenced this pull request Sep 25, 2020
… decompositions. In #491, an unmasked halo update was changed to a masked halo update.  This affects only padded decompositions with maskhalo_dyn=true and was picked up by an exact restart failure (#517)
DeniseWorthen added a commit to NOAA-EMC/CICE that referenced this pull request Nov 10, 2020
updates include:

* deprecate upwind advection (CICE-Consortium#508)
* add implicit VP solver (CICE-Consortium#491)
phil-blain added a commit to phil-blain/CICE that referenced this pull request Jul 12, 2022
phil-blain added a commit to phil-blain/CICE that referenced this pull request Jul 12, 2022
The VP solver uses a linear solver, FGMRES, as part of the non-linear
iteration. The FGMRES algorithm involves computing the norm of a
distributed vector field, thus performing global sums.

These norms are computed by first summing the squared X and Y components
of a vector field in subroutine 'calc_L2norm_squared', summing these
over the local blocks, and then doing a global (MPI) sum using
'global_sum'.

This approach does not lead to reproducible results when the MPI
distribution, or the number of local blocks, is changed, for reasons
explained in the "Reproducible sums" section of the Developer Guide
(mostly, floating point addition is not associative). This was partly
pointed out in [1] but I failed to realize it at the time.

Make the results of the VP solver more reproducible by using two calls
to 'global_sum_prod' to individually compute the squares of the X and Y
components when computing norms, and then summing these two reproducible
scalars.

The same pattern appears in the FGMRES solver (subroutine 'fgmres'), the
preconditioner 'pgmres' which uses the same algorithm, and the
Classical and Modified Gram-Schmidt algorithms in 'orthogonalize'.

These changes result in twice the number of global sums for fgmres,
pgmres and the MGS algorithm. For the CGS algorithm, the performance
impact is higher as 'global_sum_prod' is called inside the loop, whereas
previously we called 'global_allreduce_sum' after the loop to compute
all 'initer' sums at the same time.

To keep that optimization, we would have to implement a new interface
'global_allreduce_sum_prod' which would take two arrays of shape
(nx_block,ny_block,max_blocks,k) and sum these over their first three
dimensions before performing the global reduction over the k dimension.

We choose to not go that route for now mostly because anyway the CGS
algorithm is (by default) only used for the PGMRES preconditioner, and
so the cost should be relatively low as 'initer' corresponds to
'dim_pgmres' in the namelist, which should be kept low for efficiency
(default 5).

These changes lead to bit-for-bit reproducibility (the decomp_suite
passes) when using 'precond=ident' and 'precond=diag'. 'precond=pgmres'
is still not bit-for-bit because some halo updates are skipped for
efficiency. This will be addressed in a following commit.

[1] CICE-Consortium#491 (comment)
phil-blain added a commit to phil-blain/CICE that referenced this pull request Aug 9, 2022
phil-blain added a commit to phil-blain/CICE that referenced this pull request Aug 25, 2022
When the implicit VP solver was added in f7fd063 (dynamics: add implicit
VP solver (CICE-Consortium#491), 2020-09-22), it had not yet been tested with OpenMP
enabled.

The OpenMP implementation was carefully reviewed and then fixed in
d1e972a (Update OMP (CICE-Consortium#680), 2022-02-18), which led to all runs of the
'decomp' suite completing and all restart tests passing. The 'bfbcomp'
tests are still failing, but this is due to the code not using the CICE
global sum implementation correctly, which will be fixed in the next
commits.

Update the documentation accordingly.
phil-blain added a commit to phil-blain/CICE that referenced this pull request Aug 25, 2022
The VP solver uses a linear solver, FGMRES, as part of the non-linear
iteration. The FGMRES algorithm involves computing the norm of a
distributed vector field, thus performing global sums.

These norms are computed by first summing the squared X and Y components
of a vector field in subroutine 'calc_L2norm_squared', summing these
over the local blocks, and then doing a global (MPI) sum using
'global_sum'.

This approach does not lead to reproducible results when the MPI
distribution, or the number of local blocks, is changed, for reasons
explained in the "Reproducible sums" section of the Developer Guide
(mostly, floating point addition is not associative). This was partly
pointed out in [1] but I failed to realize it at the time.

Make the results of the VP solver more reproducible by using two calls
to 'global_sum_prod' to individually compute the squares of the X and Y
components when computing norms, and then summing these two reproducible
scalars.

The same pattern appears in the FGMRES solver (subroutine 'fgmres'), the
preconditioner 'pgmres' which uses the same algorithm, and the
Classical and Modified Gram-Schmidt algorithms in 'orthogonalize'.

These changes result in twice the number of global sums for fgmres,
pgmres and the MGS algorithm. For the CGS algorithm, the performance
impact is higher as 'global_sum_prod' is called inside the loop, whereas
previously we called 'global_allreduce_sum' after the loop to compute
all 'initer' sums at the same time.

To keep that optimization, we would have to implement a new interface
'global_allreduce_sum_prod' which would take two arrays of shape
(nx_block,ny_block,max_blocks,k) and sum these over their first three
dimensions before performing the global reduction over the k dimension.

We choose to not go that route for now mostly because anyway the CGS
algorithm is (by default) only used for the PGMRES preconditioner, and
so the cost should be relatively low as 'initer' corresponds to
'dim_pgmres' in the namelist, which should be kept low for efficiency
(default 5).

These changes lead to bit-for-bit reproducibility (the decomp_suite
passes) when using 'precond=ident' and 'precond=diag' along with
'bfbflag=reprosum'. 'precond=pgmres' is still not bit-for-bit because
some halo updates are skipped for efficiency. This will be addressed in
a following commit.

Note that calc_bvec loops only over ice points to compute b[xy], so
zero-initialize b[xy] since global_sum_prod loops over the whole array.
The arnoldi_basis_[xy] arrays are already zero-initialized in fgmres and
pgmres.

[1] CICE-Consortium#491 (comment)
phil-blain added a commit to phil-blain/CICE that referenced this pull request Oct 5, 2022
The "Table of namelist options" in the user guide lists 'maxits_nonlin'
as having a default value of 1000, whereas its actual default is 4, both
in the namelist and in 'ice_init.F90'. This has been the case since the
original implementation of the implicit solver in f7fd063 (dynamics: add
implicit VP solver (CICE-Consortium#491), 2020-09-22).

Fix the documentation.
phil-blain added a commit to phil-blain/CICE that referenced this pull request Oct 5, 2022
phil-blain added a commit to phil-blain/CICE that referenced this pull request Oct 5, 2022
…oducibility

Make the results of the VP solver reproducible if desired by refactoring
the code to use the subroutines 'global_norm' and 'global_dot_product'
added in the previous commit.

The same pattern appears in the FGMRES solver (subroutine 'fgmres'), the
preconditioner 'pgmres' which uses the same algorithm, and the
Classical and Modified Gram-Schmidt algorithms in 'orthogonalize'.

These modifications do not change the number of global sums in the fgmres,
pgmres and the MGS algorithm. For the CGS algorithm, there is a slight
performance impact as 'global_dot_product' is called inside the loop,
whereas previously we called 'global_allreduce_sum' after the loop to
compute all 'initer' sums at the same time.

To keep that optimization, we would have to implement a new interface
'global_allreduce_sum' which would take an array of shape
(nx_block,ny_block,max_blocks,k) and sum over their first three
dimensions before performing the global reduction over the k dimension.

We choose to not go that route for now mostly because anyway the CGS
algorithm is (by default) only used for the PGMRES preconditioner, and
so the cost should be relatively low as 'initer' corresponds to
'dim_pgmres' in the namelist, which should be kept low for efficiency
(default 5).

These changes lead to bit-for-bit reproducibility (the decomp_suite
passes) when using 'precond=ident' and 'precond=diag' along with
'bfbflag=reprosum'. 'precond=pgmres' is still not bit-for-bit because
some halo updates are skipped for efficiency. This will be addressed in
a following commit.

[1] CICE-Consortium#491 (comment)
phil-blain added 3 commits to phil-blain/CICE that referenced this pull request Oct 17, 2022
apcraig pushed a commit that referenced this pull request Oct 20, 2022
* doc: fix typo in index (bfbflag)

* doc: correct default value of 'maxits_nonlin'

The "Table of namelist options" in the user guide lists 'maxits_nonlin'
as having a default value of 1000, whereas its actual default is 4, both
in the namelist and in 'ice_init.F90'. This has been the case since the
original implementation of the implicit solver in f7fd063 (dynamics: add
implicit VP solver (#491), 2020-09-22).

Fix the documentation.

* doc: VP solver is validated with OpenMP

When the implicit VP solver was added in f7fd063 (dynamics: add implicit
VP solver (#491), 2020-09-22), it had not yet been tested with OpenMP
enabled.

The OpenMP implementation was carefully reviewed and then fixed in
d1e972a (Update OMP (#680), 2022-02-18), which led to all runs of the
'decomp' suite completing and all restart tests passing. The 'bfbcomp'
tests are still failing, but this is due to the code not using the CICE
global sum implementation correctly, which will be fixed in the next
commits.

Update the documentation accordingly.

* ice_dyn_vp: activate OpenMP in 'dyn_prep2' loop

When the OpenMP implementation was reviewed and fixed in d1e972a (Update
OMP (#680), 2022-02-18), the 'PRIVATE' clause of the OpenMP directive
for the loop where 'dyn_prep2' is called in 'implicit_solver' was
corrected in line with what was done in 'ice_dyn_evp', but OpenMP was
left unactivated for this loop (the 'TCXOMP' was not changed to a real
'OMP' directive).

Activate OpenMP for this loop. All runs and restart tests of the
'decomp_suite' still pass with this change.

* machines: eccc: add ICE_MACHINE_MAXRUNLENGTH to ppp[56]

* machines: eccc: use PBS-enabled OpenMPI for 'ppp6_gnu'

The system installation of OpenMPI at /usr/mpi/gcc/openmpi-4.1.2a1/ is
not compiled with support for PBS. This leads to failures as the MPI
runtime does not have the same view of the number of available processors
as the job scheduler.

Use our own build of OpenMPI, compiled with PBS support, for the
'ppp6_gnu' environment, which uses OpenMPI.

* machines: eccc: set I_MPI_FABRICS=ofi

Intel MPI 2021.5.1, which comes with oneAPI 2022.1.2, seems to have an
intermittent bug where a call to 'MPI_Waitall' fails with:

    Abort(17) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Waitall: See the MPI_ERROR field in MPI_Status for the error code

and no core dump is produced. This affects at least these cases of the
'decomp' suite:

- *_*_restart_gx3_16x2x1x1x800_droundrobin
- *_*_restart_gx3_16x2x2x2x200_droundrobin

This was reported to Intel and they suggested setting the variable
'I_MPI_FABRICS' to 'ofi' (the default being 'shm:ofi' [1]). This
disables shared memory transport and indeed fixes the failures.

Set this variable for all ECCC machine files using Intel MPI.

[1] https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/environment-variables-for-fabrics-control/communication-fabrics-control.html

* machines: eccc: set I_MPI_CBWR for BASEGEN/BASECOM runs

Intel MPI, in contrast to OpenMPI (as far as I was able to test, and see
[1], [2]), does not (by default) guarantee that repeated runs of the same
code on the same machine with the same number of MPI ranks yield the
same results when collective operations (e.g. 'MPI_ALLREDUCE') are used.

Since the VP solver uses MPI_ALLREDUCE in its algorithm, this leads to
repeated runs of the code giving different answers, and baseline
comparing runs with code built from the same commit failing.

When generating a baseline or comparing against an existing baseline,
set the environment variable 'I_MPI_CBWR' to 1 for ECCC machine files
using Intel MPI [3], so that (processor) topology-aware collective
algorithms are not used and results are reproducible.

Note that we do not need to set this variable on robert or underhill, on
which jobs have exclusive node access and thus job placement (on
processors) is guaranteed to be reproducible.

[1] https://stackoverflow.com/a/45916859/
[2] https://scicomp.stackexchange.com/a/2386/
[3] https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/i-mpi-adjust-family-environment-variables.html#i-mpi-adjust-family-environment-variables_GUID-A5119508-5588-4CF5-9979-8D60831D1411

* ice_dyn_vp: fgmres: exit early if right-hand-side vector is zero

If starting a run with "ice_ic='none'" (no ice), the linearized
problem for the ice velocity A x = b will have b = 0, since all terms in
the right hand side vector will be zero:

- strint[xy] is zero because the velocity is zero
- tau[xy] is zero because the ocean velocity is also zero
- [uv]vel_init is zero
- strair[xy] is zero because the concentration is zero
- strtlt[xy] is zero because the ocean velocity is zero

We thus have a linear system A x = b with b=0, so we
must have x=0.

In the FGMRES linear solver, this special case is not taken into
account, and so we end up with an all-zero initial residual since
workspace_[xy] is also zero because of the all-zero initial guess
'sol[xy]', which corresponds to the initial ice velocity. This then
leads to a division by zero when normalizing the first Arnoldi vector.

Fix this special case by computing the norm of the right-hand-side
vector before starting the iterations, and exiting early if it is zero.
This is in line with the GMRES implementation in SciPy [1].

[1] https://github.com/scipy/scipy/blob/651a9b717deb68adde9416072c1e1d5aa14a58a1/scipy/sparse/linalg/_isolve/iterative.py#L620-L628
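
A minimal sketch of the guard, with illustrative names (not the actual
CICE interface): if ||b|| vanishes, set the solution to zero and return
before the first Arnoldi vector is normalized, which would otherwise
divide by zero.

    ! Hypothetical sketch: 'bx, by' hold the right-hand side b, and
    ! 'solx, soly' the solution x. In CICE the norm would come from a
    ! global (MPI) reduction rather than a plain 'sum'.
    subroutine fgmres_rhs_guard(bx, by, solx, soly, done)
       implicit none
       double precision, intent(in)  :: bx(:,:), by(:,:)
       double precision, intent(out) :: solx(:,:), soly(:,:)
       logical, intent(out) :: done
       double precision :: rhs_norm
       rhs_norm = sqrt(sum(bx**2) + sum(by**2))  ! ||b||
       done = (rhs_norm <= 0.0d0)
       if (done) then
          solx = 0.0d0   ! A x = b with b = 0 implies x = 0
          soly = 0.0d0
       endif
    end subroutine fgmres_rhs_guard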

Close: phil-blain#42

* ice_dyn_vp: add global_norm, global_dot_product functions

The VP solver uses a linear solver, FGMRES, as part of the non-linear
iteration. The FGMRES algorithm involves computing the norm of a
distributed vector field, thus performing global sums.

These norms are computed by first summing the squared X and Y components
of a vector field in subroutine 'calc_L2norm_squared', summing these
over the local blocks, and then doing a global (MPI) sum using
'global_sum'.

This approach does not lead to reproducible results when the MPI
distribution, or the number of local blocks, is changed, for reasons
explained in the "Reproducible sums" section of the Developer Guide
(mostly, floating point addition is not associative). This was partly
pointed out in [1] but I failed to realize it at the time.

Introduce a new function, 'global_dot_product', to encapsulate the
computation of the dot product of two grid vectors, each split into two
arrays (for the X and Y components).

Compute the reduction locally as is done in 'calc_L2norm_squared', but
throw away the result and use the existing 'global_sum' function when
'bfbflag' is active, passing it the temporary array used to compute the
element-by-element product.

This approach avoids a performance regression from the added work done
in 'global_sum', such that non-bfbflag runs are as fast as before.

Note that since 'global_sum' loops over the whole array (and not just ice
points, as 'global_dot_product' does), make sure to zero-initialize the 'prod'
local array.

Also add a 'global_norm' function implemented using
'global_dot_product'. Both functions will be used in subsequent commits
to ensure bit-for-bit reproducibility.
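
To make the non-associativity point concrete: in IEEE double precision,
(1e16 + 1.0) + 1.0 evaluates to 1e16, while 1e16 + (1.0 + 1.0) evaluates
to 1e16 + 2, so the grouping of partial sums (and hence the block / MPI
decomposition) can change the result. Below is a simplified serial
sketch of the pattern described above; all names are hypothetical, and
the real routines operate on blocked (nx_block,ny_block,max_blocks)
arrays and call the bfbflag-aware 'global_sum' instead of 'sum'.

    module vp_norms_sketch
       implicit none
    contains
       function dot_product_sketch(vx1, vy1, vx2, vy2, icemask) result(dot)
          double precision, intent(in) :: vx1(:,:), vy1(:,:), vx2(:,:), vy2(:,:)
          logical,          intent(in) :: icemask(:,:)
          double precision :: prod(size(vx1,1),size(vx1,2)), dot
          prod = 0.0d0   ! zero-init: the sum below visits ALL points
          where (icemask) prod = vx1*vx2 + vy1*vy2   ! only ice points contribute
          dot = sum(prod)   ! stand-in for the reproducible 'global_sum'
       end function dot_product_sketch

       function norm_sketch(vx, vy, icemask) result(norm)
          double precision, intent(in) :: vx(:,:), vy(:,:)
          logical,          intent(in) :: icemask(:,:)
          double precision :: norm
          norm = sqrt(dot_product_sketch(vx, vy, vx, vy, icemask))
       end function norm_sketch
    end module vp_norms_sketch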

* ice_dyn_vp: use global_{norm,dot_product} for bit-for-bit output reproducibility

Make the results of the VP solver reproducible if desired by refactoring
the code to use the subroutines 'global_norm' and 'global_dot_product'
added in the previous commit.

The same pattern appears in the FGMRES solver (subroutine 'fgmres'), the
preconditioner 'pgmres' which uses the same algorithm, and the
Classical and Modified Gram-Schmidt algorithms in 'orthogonalize'.

These modifications do not change the number of global sums in the
fgmres and pgmres solvers or in the MGS algorithm. For the CGS algorithm, there is
(in theory) a slight performance impact as 'global_dot_product' is
called inside the loop, whereas previously we called
'global_allreduce_sum' after the loop to compute all 'initer' sums at
the same time.

To keep that optimization, we would have to implement a new interface
'global_allreduce_sum' which would take an array of shape
(nx_block,ny_block,max_blocks,k) and sum over their first three
dimensions before performing the global reduction over the k dimension.

We choose not to go that route for now, mostly because the CGS
algorithm is (by default) only used for the PGMRES preconditioner, and
so the cost should be relatively low as 'initer' corresponds to
'dim_pgmres' in the namelist, which should be kept low for efficiency
(default 5).

These changes lead to bit-for-bit reproducibility (the decomp_suite
passes) when using 'precond=ident' and 'precond=diag' along with
'bfbflag=reprosum'. 'precond=pgmres' is still not bit-for-bit because
some halo updates are skipped for efficiency. This will be addressed in
a following commit.

[1] #491 (comment)
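
As a sketch of what the CGS change amounts to (hypothetical names;
'dot_product_sketch' is the serial stand-in from the sketch above): each
of the 'initer' projection coefficients now costs one reduction inside
the loop, where previously the 'initer' local sums were reduced together
in a single 'global_allreduce_sum' call after the loop.

    ! Classical Gram-Schmidt with one reduction per basis vector.
    ! 'basis_[xy]' hold the Arnoldi basis; 'wx, wy' is the vector being
    ! orthogonalized against the first 'initer' basis vectors.
    subroutine cgs_sketch(initer, basis_x, basis_y, wx, wy, icemask, coeff)
       use vp_norms_sketch, only: dot_product_sketch
       implicit none
       integer, intent(in) :: initer
       double precision, intent(in)    :: basis_x(:,:,:), basis_y(:,:,:)
       double precision, intent(inout) :: wx(:,:), wy(:,:)
       logical,          intent(in)    :: icemask(:,:)
       double precision, intent(out)   :: coeff(:)
       integer :: it
       do it = 1, initer   ! one global reduction per iteration
          coeff(it) = dot_product_sketch(basis_x(:,:,it), basis_y(:,:,it), &
                                         wx, wy, icemask)
       enddo
       do it = 1, initer   ! subtract the projections onto the basis
          wx = wx - coeff(it)*basis_x(:,:,it)
          wy = wy - coeff(it)*basis_y(:,:,it)
       enddo
    end subroutine cgs_sketch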

* ice_dyn_vp: do not skip halo updates in 'pgmres' under 'bfbflag'

The 'pgmres' subroutine implements a separate GMRES solver and is used
as a preconditioner for the FGMRES linear solver. Since it is only a
preconditioner, it was decided to skip the halo updates after computing
the matrix-vector product (in 'matvec'), for efficiency.

This leads to non-reproducibility since the content of the non-updated
halos depend on the block / MPI distribution.

Add the required halo updates, but only perform them when we are
explicitly asking for bit-for-bit global sums, i.e. when 'bfbflag' is
set to something other than 'not'.

Adjust the interfaces of 'pgmres' and 'precondition' (from which
'pgmres' is called) to accept 'halo_info_mask', since it is needed for
masked updates.
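
A sketch of the shape of the fix ('workspace_[xy]', 'halo_info_mask' and
'bfbflag' follow the text above; the exact calls in CICE may differ):

    ! Inside 'pgmres', after the matrix-vector product (sketch, not verbatim):
    if (trim(bfbflag) /= 'not') then
       ! reproducible mode: refresh the halos so results do not depend on
       ! the block / MPI distribution (masked update via halo_info_mask)
       call ice_HaloUpdate(workspace_x, halo_info_mask, &
                           field_loc_NEcorner, field_type_vector)
       call ice_HaloUpdate(workspace_y, halo_info_mask, &
                           field_loc_NEcorner, field_type_vector)
    endif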

Closes #518

* ice_dyn_vp: use global_{norm,dot_product} for bit-for-bit log reproducibility

In the previous commits we ensured bit-for-bit reproducibility of the
outputs when using the VP solver.

Some global norms computed during the nonlinear iteration still use the
same non-reproducible pattern of summing over blocks locally before
performing the reduction. However, these norms are used only to monitor
the convergence in the log file, as well as to exit the iteration when
the required convergence level is reached ('nlres_norm'). Only
'nlres_norm' could (in theory) influence the output, but it is unlikely
that a difference due to floating point errors would influence the 'if
(nlres_norm < tol_nl)' condition used to exit the nonlinear iteration.

Change these remaining cases to also use 'global_norm', leading to
bit-for-bit log reproducibility.
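
A sketch of the resulting monitoring pattern ('nlres_norm' and 'tol_nl'
as above; 'norm_sketch' is the serial stand-in from an earlier sketch,
and the residual arrays 'resx, resy' are hypothetical):

    ! once per nonlinear (Picard/Anderson) iteration:
    nlres_norm = norm_sketch(resx, resy, icemask)   ! reproducible residual norm
    write(*,'(a,i4,a,es13.6)') ' it_nl = ', it_nl, '  nlres_norm = ', nlres_norm
    if (nlres_norm < tol_nl) exit   ! converged: leave the nonlinear loop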

* ice_dyn_vp: remove unused subroutine and cleanup interfaces

The previous commit removed the last caller of 'calc_L2norm_squared'.
Remove the subroutine.

Also, do not compute 'sum_squared' in 'residual_vec', since the variable
'L2norm' which receives this value is also unused in 'anderson_solver'
since the previous commit. Remove that variable, and adjust the
interface of 'residual_vec' accordingly.

* ice_global_reductions: remove 'global_allreduce_sum'

In a previous commit, we removed the sole caller of
'global_allreduce_sum' (in ice_dyn_vp::orthogonalize). We do not
anticipate that function to be used elsewhere in the code, so remove it
from ice_global_reductions. Update the 'sumchk' unit test accordingly.

* doc: mention VP solver is only reproducible using 'bfbflag'

The previous commits made sure that the model outputs as well as the log
file output are bit-for-bit reproducible when using the VP solver by
refactoring the code to use the existing 'global_sum' subroutine.

Add a note in the documentation mentioning that 'bfbflag' is required to
get bit-for-bit reproducible results under different decompositions /
MPI counts when using the VP solver.

Also, adjust the doc about 'bfbflag=lsum8' being the same as
'bfbflag=off' since this is not the case for the VP solver: in the first
case we use the scalar version of 'global_sum', in the second case we
use the array version.

* ice_dyn_vp: improve default parameters for VP solver

During QC testing of the previous commit, the 5-year QC test with the
updated VP solver failed twice with "bad departure points" after a few
years of simulation. Simply bumping the number of nonlinear iterations
(maxits_nonlin) from 4 to 5 makes these failures disappear and allows the
simulations to run to completion, suggesting the solution is not
converged enough with 4 iterations.

We also noticed that in these failing cases, the relative tolerance for
the linear solver (reltol_fgmres = 1E-2) is too small to be reached in
less than 50 iterations (maxits_fgmres), and that's the case at each
nonlinear iteration. Other papers mention a relative tolerance of 1E-1
for the linear solver, and using this value also allows both cases to
run to completion (even without changing maxits_nonlin).

Let's set the default tolerance for the linear solver to 1E-1, and let's
be conservative and bump the number of nonlinear iterations to 10. This
should give us a more converged solution and add robustness to the
default settings.
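
For reference, a namelist sketch of the defaults described above
(parameter names as used by the VP solver; the 'dynamics_nml' group name
and 'kdyn' value are assumptions here):

    &dynamics_nml
      kdyn          = 3      ! implicit VP solver
      maxits_nonlin = 10     ! was 4: more converged nonlinear solution
      reltol_fgmres = 1e-1   ! was 1e-2: reachable within maxits_fgmres
    /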
apcraig added a commit that referenced this pull request Nov 8, 2022
* merge latest master (#4)

* Isotopes for CICE (#423)

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

* updated orbital calculations needed for cesm

* fixed problems in updated orbital calculations needed for cesm

* update CICE6 to support coupling with UFS

* put in changes so that both ufsatm and cesm requirements for potential temperature and density are satisfied

* Convergence on ustar for CICE. (#452) (#5)

* Add atmiter_conv to CICE

* Add documentation

* trigger build the docs

Co-authored-by: David A. Bailey <dbailey@ucar.edu>

* update icepack submodule

* Revert "update icepack submodule"

This reverts commit e70d1ab.

* update comp_ice.backend with temporary ice_timers fix

* Fix threading problem in init_bgc

* Fix additional OMP problems

* changes for coldstart running

* Move the forapps directory

* remove cesmcoupled ifdefs

* Fix logging issues for NUOPC

* removal of many cpp-ifdefs

* fix compile errors

* fixes to get cesm working

* fixed white space issue

* Add restart_coszen namelist option

* update icepack submodule

* change Orion to orion in backend

remove duplicate print lines from ice_transport_driver

* add -link_mpi=dbg to debug flags (#8)

* cice6 compile (#6)

* enable debug build. fix to remove errors

* fix an error in comp_ice.backend.libcice

* change Orion to orion for machine identification

* changes for consistency w/ current emc-cice5 (#13)

Update the emc/develop fork to the current CICE Consortium master

Co-authored-by: David A. Bailey <dbailey@ucar.edu>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

* Fixcommit (#14)

Align commit history between emc/develop and cice-consortium/master

* Update CICE6 for integration to S2S


* add wcoss_dell_p3 compiler macro

* update to icepack w/ debug fix

* replace SITE with MACHINE_ID

* update compile scripts

* Support TACC stampede (#19)

* update icepack

* add ice_dyn_vp module to CICE_InitMod

* update gitmodules, update icepack

* Update CICE to consortium master (#23)

updates include:

* deprecate upwind advection (#508)
* add implicit VP solver (#491)

* update icepack

* switch icepack branches

* update to icepack master but set abort flag in ITD routine
to false

* update icepack

* Update CICE to latest Consortium master (#26)


update CICE and Icepack

* changes the criteria for aborting ice for thermo-conservation errors
* updates the time manager
* fixes two bugs in ice_therm_mushy
* updates Icepack to Consortium master w/ flip of abort flag for troublesome IC cases

* add cice changes for zlvs (#29)

* update icepack and pointer

* update icepack and revert gitmodules

* Fix history features

- Fix bug in history time axis when sec_init is not zero.
- Fix issue with time_beg and time_end uninitialized values.
- Add support for averaging with histfreq='1' by allowing histfreq_n to be any value
  in that case.  Extend and clean up construct_filename for history files.  More could
  be done, but wanted to preserve backwards compatibility.
- Add new calendar_sec2hms that converts daily seconds to hh:mm:ss; see the
  sketch after this list.  Update the calchk calendar unit tester to check this method
- Remove abort test in bcstchk, this was just causing problems in regression testing
- Remove known problems documentation about problems writing when istep=1.  This issue
  does not exist anymore with the updated time manager.
- Add new tests with hist_avg = false.  Add set_nml.histinst.
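
A minimal sketch of what such a seconds-to-hh:mm:ss conversion does
(illustrative only; the actual routine added here is 'calendar_sec2hms'
in the time manager):

    subroutine sec2hms_sketch(seconds, hh, mm, ss)
       implicit none
       integer, intent(in)  :: seconds   ! elapsed seconds in the current day
       integer, intent(out) :: hh, mm, ss
       hh = seconds/3600                 ! whole hours
       mm = mod(seconds, 3600)/60        ! remaining whole minutes
       ss = mod(seconds, 60)             ! remaining seconds
    end subroutine sec2hms_sketch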

* revert set_nml.histall

* fix implementation error

* update model log output in ice_init

* Fix QC issues

- Add netcdf status checks and aborts in ice_read_write.F90
- Check for end of file when reading records in ice_read_write.F90 for
  ice_read_nc methods
- Update set_nml.qc to better specify the test, turn off leap years since we're cycling
  2005 data
- Add check in cice.t-test.py to make sure there are at least 1825 files, 5 years of data
- Add QC run to base_suite.ts to verify qc runs to completion, and the possibility to use
  those results directly for QC validation
- Clean up error messages and some indentation in ice_read_write.F90

* Update testing

- Add prod suite including 10 year gx1prod and qc test
- Update unit test compare scripts

* update documentation

* reset calchk to 100000 years

* update evp1d test

* update icepack

* update icepack

* add memory profiling (#36)


* add profile_memory calls to CICE cap

* update icepack

* fix rhoa when lowest_temp is 0.0

* provide default value for rhoa when imported temp_height_lowest
(Tair) is 0.0
* resolves seg fault when frac_grid=false and do_ca=true

* update icepack submodule

* Update CICE for latest Consortium master (#38)


    * Implement advanced snow physics in icepack and CICE
    * Fix time-stamping of CICE history files
    * Fix CICE history file precision

* Use CICE-Consortium/Icepack master (#40)

* switch to icepack master at consortium

* recreate cap update branch (#42)


* add debug_model feature
* add required variables and calls for tr_snow

* remove 2 extraneous lines

* remove two log print lines that were removed prior to
merge of driver updates to consortium

* duplicate gitmodule style for icepack

* Update CICE to latest Consortium/main (#45)

* Update CICE to Consortium/main (#48)


Update OpenMP directives as needed including validation via new omp_suite. Fixed OpenMP in dynamics.
Refactored eap puny/pi lookups to improve scalar performance
Update Tsfc implementation to make sure land blocks don't set Tsfc to freezing temp
Update for sea bed stress calculations

* fix comment, fix env for orion and hera

* replace save_init with step_prep in CICE_RunMod

* fixes for cgrid repro

* remove added haloupdates

* baselines pass with these extra halo updates removed

* change F->S for ocean velocities and tilts

* fix debug failure when grid_ice=C

* compiling in debug mode using -init=snan,arrays requires
initialization of variables

* respond to review comments

* remove inserted whitespace for uvelE,N and vvelE,N

* Add wave-cice coupling; update to Consortium main (#51)


* add wave-ice fields
* initialize aicen_init, which turns up as NaN in calc of floediam
export
* add call to icepack_init_wave to initialize wavefreq and dwavefreq
* update to latest consortium main (PR 752)

* add initializations in ice_state

* initialize vsnon/vsnon_init and vicen/vicen_init

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
apcraig added a commit that referenced this pull request Aug 28, 2023
…856)

* merge latest master (#4)

* Isotopes for CICE (#423)

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

* updated orbital calculations needed for cesm

* fixed problems in updated orbital calculations needed for cesm

* update CICE6 to support coupling with UFS

* put in changes so that both ufsatm and cesm requirements for potential temperature and density are satisfied

* Convergence on ustar for CICE. (#452) (#5)

* Add atmiter_conv to CICE

* Add documentation

* trigger build the docs

Co-authored-by: David A. Bailey <dbailey@ucar.edu>

* update icepack submodule

* Revert "update icepack submodule"

This reverts commit e70d1ab.

* update comp_ice.backend with temporary ice_timers fix

* Fix threading problem in init_bgc

* Fix additional OMP problems

* changes for coldstart running

* Move the forapps directory

* remove cesmcoupled ifdefs

* Fix logging issues for NUOPC

* removal of many cpp-ifdefs

* fix compile errors

* fixes to get cesm working

* fixed white space issue

* Add restart_coszen namelist option

* update icepack submodule

* change Orion to orion in backend

remove duplicate print lines from ice_transport_driver

* add -link_mpi=dbg to debug flags (#8)

* cice6 compile (#6)

* enable debug build. fix to remove errors

* fix an error in comp_ice.backend.libcice

* change Orion to orion for machine identification

* changes for consistency w/ current emc-cice5 (#13)

Update the emc/develop fork to the current CICE Consortium master

Co-authored-by: David A. Bailey <dbailey@ucar.edu>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

* Fixcommit (#14)

Align commit history between emc/develop and cice-consortium/master

* Update CICE6 for integration to S2S


* add wcoss_dell_p3 compiler macro

* update to icepack w/ debug fix

* replace SITE with MACHINE_ID

* update compile scripts

* Support TACC stampede (#19)

* update icepack

* add ice_dyn_vp module to CICE_InitMod

* update gitmodules, update icepack

* Update CICE to consortium master (#23)

updates include:

* deprecate upwind advection (#508)
* add implicit VP solver (#491)

* update icepack

* switch icepack branches

* update to icepack master but set abort flag in ITD routine
to false

* update icepack

* Update CICE to latest Consortium master (#26)


update CICE and Icepack

* changes the criteria for aborting ice for thermo-conservation errors
* updates the time manager
* fixes two bugs in ice_therm_mushy
* updates Icepack to Consortium master w/ flip of abort flag for troublesome IC cases

* add cice changes for zlvs (#29)

* update icepack and pointer

* update icepack and revert gitmodules

* Fix history features

- Fix bug in history time axis when sec_init is not zero.
- Fix issue with time_beg and time_end uninitialized values.
- Add support for averaging with histfreq='1' by allowing histfreq_n to be any value
  in that case.  Extend and clean up construct_filename for history files.  More could
  be done, but wanted to preserve backwards compatibility.
- Add new calendar_sec2hms that converts daily seconds to hh:mm:ss.  Update the
  calchk calendar unit tester to check this method
- Remove abort test in bcstchk, this was just causing problems in regression testing
- Remove known problems documentation about problems writing when istep=1.  This issue
  does not exist anymore with the updated time manager.
- Add new tests with hist_avg = false.  Add set_nml.histinst.

* revert set_nml.histall

* fix implementation error

* update model log output in ice_init

* Fix QC issues

- Add netcdf status checks and aborts in ice_read_write.F90
- Check for end of file when reading records in ice_read_write.F90 for
  ice_read_nc methods
- Update set_nml.qc to better specify the test, turn off leap years since we're cycling
  2005 data
- Add check in cice.t-test.py to make sure there are at least 1825 files, 5 years of data
- Add QC run to base_suite.ts to verify qc runs to completion, and the possibility to use
  those results directly for QC validation
- Clean up error messages and some indentation in ice_read_write.F90

* Update testing

- Add prod suite including 10 year gx1prod and qc test
- Update unit test compare scripts

* update documentation

* reset calchk to 100000 years

* update evp1d test

* update icepack

* update icepack

* add memory profiling (#36)


* add profile_memory calls to CICE cap

* update icepack

* fix rhoa when lowest_temp is 0.0

* provide default value for rhoa when imported temp_height_lowest
(Tair) is 0.0
* resolves seg fault when frac_grid=false and do_ca=true

* update icepack submodule

* Update CICE for latest Consortium master (#38)


    * Implement advanced snow physics in icepack and CICE
    * Fix time-stamping of CICE history files
    * Fix CICE history file precision

* Use CICE-Consortium/Icepack master (#40)

* switch to icepack master at consortium

* recreate cap update branch (#42)


* add debug_model feature
* add required variables and calls for tr_snow

* remove 2 extraneous lines

* remove two log print lines that were removed prior to
merge of driver updates to consortium

* duplicate gitmodule style for icepack

* Update CICE to latest Consortium/main (#45)

* Update CICE to Consortium/main (#48)


Update OpenMP directives as needed including validation via new omp_suite. Fixed OpenMP in dynamics.
Refactored eap puny/pi lookups to improve scalar performance
Update Tsfc implementation to make sure land blocks don't set Tsfc to freezing temp
Update for sea bed stress calculations

* fix comment, fix env for orion and hera

* replace save_init with step_prep in CICE_RunMod

* fixes for cgrid repro

* remove added haloupdates

* baselines pass with these extra halo updates removed

* change F->S for ocean velocities and tilts

* fix debug failure when grid_ice=C

* compiling in debug mode using -init=snan,arrays requires
initialization of variables

* respond to review comments

* remove inserted whitespace for uvelE,N and vvelE,N

* Add wave-cice coupling; update to Consortium main (#51)


* add wave-ice fields
* initialize aicen_init, which turns up as NaN in calc of floediam
export
* add call to icepack_init_wave to initialize wavefreq and dwavefreq
* update to latest consortium main (PR 752)

* add initializations in ice_state

* initialize vsnon/vsnon_init and vicen/vicen_init

* Update CICE (#54)


* update to include recent PRs to Consortium/main

* fix for nudiag_set

allow nudiag_set to be available outside of cesm; may prefer
to fix in coupling interface

* Update CICE for latest Consortium/main (#56)

* add run time info

* change real(8) to real(dbl_kind)

* fix syntax

* fix write unit

* use cice_wrapper for ufs timer functionality

* add elapsed model time for logtime

* tidy up the wrapper

* fix case for 'time since' at the first advance

* add timer and forecast log

* write timer values to timer log, not nu_diag
* write log.ice.fXXX

* only one time is needed

* modify message written for log.ice.fXXX

* change info in fXXX log file

* Update CICE from Consortium/main (#62)


* Fix CESMCOUPLED compile issue in icepack. (#823)
* Update global reduction implementation to improve performance, fix VP bug (#824)
* Update VP global sum to exclude local implementation with tripole grids
* Add functionality to change hist_avg for each stream (#827)
* Update Icepack to #6703bc533c968 May 22, 2023 (#829)
* Fix for mesh check in CESM driver (#830)
* Namelist option for time axis position. (#839)

* reset timer after Advance to retrieve "wait time"

* add logical control for enabling runtime info

* remove zsal items from cap

* fix typo

---------

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
Co-authored-by: Jun.Wang <Jun.Wang@noaa.gov>
TillRasmussen pushed a commit to TillRasmussen/CICE that referenced this pull request Sep 16, 2023
…ICE-Consortium#856)

* merge latest master (#4)

* Isotopes for CICE (CICE-Consortium#423)

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

* updated orbital calculations needed for cesm

* fixed problems in updated orbital calculations needed for cesm

* update CICE6 to support coupling with UFS

* put in changes so that both ufsatm and cesm requirements for potential temperature and density are satisfied

* Convergence on ustar for CICE. (CICE-Consortium#452) (#5)

* Add atmiter_conv to CICE

* Add documentation

* trigger build the docs

Co-authored-by: David A. Bailey <dbailey@ucar.edu>

* update icepack submodule

* Revert "update icepack submodule"

This reverts commit e70d1ab.

* update comp_ice.backend with temporary ice_timers fix

* Fix threading problem in init_bgc

* Fix additional OMP problems

* changes for coldstart running

* Move the forapps directory

* remove cesmcoupled ifdefs

* Fix logging issues for NUOPC

* removal of many cpp-ifdefs

* fix compile errors

* fixes to get cesm working

* fixed white space issue

* Add restart_coszen namelist option

* update icepack submodule

* change Orion to orion in backend

remove duplicate print lines from ice_transport_driver

* add -link_mpi=dbg to debug flags (#8)

* cice6 compile (#6)

* enable debug build. fix to remove errors

* fix an error in comp_ice.backend.libcice

* change Orion to orion for machine identification

* changes for consistency w/ current emc-cice5 (#13)

Update the emc/develop fork to the current CICE Consortium master

Co-authored-by: David A. Bailey <dbailey@ucar.edu>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

* Fixcommit (#14)

Align commit history between emc/develop and cice-consortium/master

* Update CICE6 for integration to S2S


* add wcoss_dell_p3 compiler macro

* update to icepack w/ debug fix

* replace SITE with MACHINE_ID

* update compile scripts

* Support TACC stampede (#19)

* update icepack

* add ice_dyn_vp module to CICE_InitMod

* update gitmodules, update icepack

* Update CICE to consortium master (CICE-Consortium#23)

updates include:

* deprecate upwind advection (CICE-Consortium#508)
* add implicit VP solver (CICE-Consortium#491)

* update icepack

* switch icepack branches

* update to icepack master but set abort flag in ITD routine
to false

* update icepack

* Update CICE to latest Consortium master (CICE-Consortium#26)


update CICE and Icepack

* changes the criteria for aborting ice for thermo-conservation errors
* updates the time manager
* fixes two bugs in ice_therm_mushy
* updates Icepack to Consortium master w/ flip of abort flag for troublesome IC cases

* add cice changes for zlvs (CICE-Consortium#29)

* update icepack and pointer

* update icepack and revert gitmodules

* Fix history features

- Fix bug in history time axis when sec_init is not zero.
- Fix issue with time_beg and time_end uninitialized values.
- Add support for averaging with histfreq='1' by allowing histfreq_n to be any value
  in that case.  Extend and clean up construct_filename for history files.  More could
  be done, but wanted to preserve backwards compatibility.
- Add new calendar_sec2hms that converts daily seconds to hh:mm:ss.  Update the
  calchk calendar unit tester to check this method
- Remove abort test in bcstchk, this was just causing problems in regression testing
- Remove known problems documentation about problems writing when istep=1.  This issue
  does not exist anymore with the updated time manager.
- Add new tests with hist_avg = false.  Add set_nml.histinst.

* revert set_nml.histall

* fix implementation error

* update model log output in ice_init

* Fix QC issues

- Add netcdf status checks and aborts in ice_read_write.F90
- Check for end of file when reading records in ice_read_write.F90 for
  ice_read_nc methods
- Update set_nml.qc to better specify the test, turn off leap years since we're cycling
  2005 data
- Add check in cice.t-test.py to make sure there are at least 1825 files, 5 years of data
- Add QC run to base_suite.ts to verify qc runs to completion, and the possibility to use
  those results directly for QC validation
- Clean up error messages and some indentation in ice_read_write.F90

* Update testing

- Add prod suite including 10 year gx1prod and qc test
- Update unit test compare scripts

* update documentation

* reset calchk to 100000 years

* update evp1d test

* update icepack

* update icepack

* add memory profiling (CICE-Consortium#36)


* add profile_memory calls to CICE cap

* update icepack

* fix rhoa when lowest_temp is 0.0

* provide default value for rhoa when imported temp_height_lowest
(Tair) is 0.0
* resolves seg fault when frac_grid=false and do_ca=true

* update icepack submodule

* Update CICE for latest Consortium master (CICE-Consortium#38)


    * Implement advanced snow physics in icepack and CICE
    * Fix time-stamping of CICE history files
    * Fix CICE history file precision

* Use CICE-Consortium/Icepack master (CICE-Consortium#40)

* switch to icepack master at consortium

* recreate cap update branch (CICE-Consortium#42)


* add debug_model feature
* add required variables and calls for tr_snow

* remove 2 extraneous lines

* remove two log print lines that were removed prior to
merge of driver updates to consortium

* duplicate gitmodule style for icepack

* Update CICE to latest Consortium/main (CICE-Consortium#45)

* Update CICE to Consortium/main (CICE-Consortium#48)


Update OpenMP directives as needed including validation via new omp_suite. Fixed OpenMP in dynamics.
Refactored eap puny/pi lookups to improve scalar performance
Update Tsfc implementation to make sure land blocks don't set Tsfc to freezing temp
Update for sea bed stress calculations

* fix comment, fix env for orion and hera

* replace save_init with step_prep in CICE_RunMod

* fixes for cgrid repro

* remove added haloupdates

* baselines pass with these extra halo updates removed

* change F->S for ocean velocities and tilts

* fix debug failure when grid_ice=C

* compiling in debug mode using -init=snan,arrays requires
initialization of variables

* respond to review comments

* remove inserted whitespace for uvelE,N and vvelE,N

* Add wave-cice coupling; update to Consortium main (CICE-Consortium#51)


* add wave-ice fields
* initialize aicen_init, which turns up as NaN in calc of floediam
export
* add call to icepack_init_wave to initialize wavefreq and dwavefreq
* update to latest consortium main (PR 752)

* add initializations in ice_state

* initialize vsnon/vsnon_init and vicen/vicen_init

* Update CICE (CICE-Consortium#54)


* update to include recent PRs to Consortium/main

* fix for nudiag_set

allow nudiag_set to be available outside of cesm; may prefer
to fix in coupling interface

* Update CICE for latest Consortium/main (CICE-Consortium#56)

* add run time info

* change real(8) to real(dbl_kind)

* fix syntax

* fix write unit

* use cice_wrapper for ufs timer functionality

* add elapsed model time for logtime

* tidy up the wrapper

* fix case for 'time since' at the first advance

* add timer and forecast log

* write timer values to timer log, not nu_diag
* write log.ice.fXXX

* only one time is needed

* modify message written for log.ice.fXXX

* change info in fXXX log file

* Update CICE from Consortium/main (CICE-Consortium#62)


* Fix CESMCOUPLED compile issue in icepack. (CICE-Consortium#823)
* Update global reduction implementation to improve performance, fix VP bug (CICE-Consortium#824)
* Update VP global sum to exclude local implementation with tripole grids
* Add functionality to change hist_avg for each stream (CICE-Consortium#827)
* Update Icepack to #6703bc533c968 May 22, 2023 (CICE-Consortium#829)
* Fix for mesh check in CESM driver (CICE-Consortium#830)
* Namelist option for time axis position. (CICE-Consortium#839)

* reset timer after Advance to retrieve "wait time"

* add logical control for enabling runtime info

* remove zsal items from cap

* fix typo

---------

Co-authored-by: apcraig <anthony.p.craig@gmail.com>
Co-authored-by: David Bailey <dbailey@ucar.edu>
Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
Co-authored-by: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
Co-authored-by: Jun.Wang <Jun.Wang@noaa.gov>
DeniseWorthen added a commit to DeniseWorthen/CICE that referenced this pull request Sep 18, 2023
commit 2ed3c05
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Sep 7 15:00:21 2023 -0400

    Update CICE from Consortium/main, add run-time and history-write logging (NOAA-EMC#65)

commit d41c61d
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Aug 2 17:09:26 2023 -0400

    Update CICE from Consortium/main (NOAA-EMC#62)

    * Fix CESMCOUPLED compile issue in icepack. (CICE-Consortium#823)
    * Update global reduction implementation to improve performance, fix VP bug (CICE-Consortium#824)
    * Update VP global sum to exclude local implementation with tripole grids
    * Add functionality to change hist_avg for each stream (CICE-Consortium#827)
    * Update Icepack to #6703bc533c968 May 22, 2023 (CICE-Consortium#829)
    * Fix for mesh check in CESM driver (CICE-Consortium#830)
    * Namelist option for time axis position. (CICE-Consortium#839)

commit 5840cd1
Merge: 6671e32 7df80ba
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Mar 22 07:43:35 2023 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit 6671e32
Merge: ee68d3f d73bb8b
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Mar 7 12:56:52 2023 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit ee68d3f
Merge: dd25b0f e628a9a
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Mar 7 12:56:33 2023 -0500

    Merge branch 'emc/develop' into feature/updcice

commit e628a9a
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Jan 23 07:58:06 2023 -0500

    Update CICE for latest Consortium/main (NOAA-EMC#56)

commit dd25b0f
Merge: ed472ab 506614d
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Sat Jan 14 12:00:15 2023 -0500

    Merge branch 'test' into feature/updcice

commit 506614d
Merge: 7757945 0bf0fdc
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Sat Jan 14 11:58:19 2023 -0500

    Merge remote-tracking branch 'CICE-Consortium/main' into test

commit ed472ab
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jan 9 12:53:05 2023 +0000

    fix for nudiag_set

    allow nudiag_set to be available outside of cesm; may prefer
    to fix in coupling interface

commit ce2298e
Merge: ad8d577 0bf0fdc
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Jan 4 14:50:34 2023 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit ad8d577
Merge: 90a8b62 b16d7fd
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Dec 6 14:22:06 2022 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit 90a8b62
Merge: fe16051 7757945
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Dec 6 13:37:51 2022 -0500

    Merge branch 'NOAA-EMC:emc/develop' into feature/updcice

commit 7757945
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Fri Nov 25 08:58:08 2022 -0500

    Update CICE (NOAA-EMC#54)

    * update to include recent PRs to Consortium/main

commit fe16051
Merge: b11bfb4 9808b51
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Nov 16 07:59:58 2022 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit b11bfb4
Merge: b893ee9 251ca48
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Nov 8 09:07:10 2022 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit b893ee9
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Nov 2 15:45:10 2022 -0600

    add initializations in ice_state

    * initialize vsnon/vsnon_init and vicen/vicen_init

commit 2e68b9e
Merge: d6d081a 3820cde
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Nov 2 12:14:34 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit d6d081a
Merge: 1f70caf 968a0ed
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Nov 2 12:13:52 2022 -0400

    Merge branch 'emc/develop' into feature/updcice

commit 968a0ed
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Aug 30 12:51:07 2022 -0400

    Add wave-cice coupling; update to Consortium main (NOAA-EMC#51)

    * add wave-ice fields
    * initialize aicen_init, which turns up as NaN in calc of floediam
    export
    * add call to icepack_init_wave to initialize wavefreq and dwavefreq
    * update to latest consortium main (PR 752)

commit 1f70caf
Merge: 73cc18c fea412a
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat Aug 20 10:52:22 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit 73cc18c
Merge: cc0f89c 471c010
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jun 23 17:42:49 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/addCgridfixes

commit cc0f89c
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Jun 22 14:13:25 2022 -0600

    remove inserted whitespace for uvelE,N and vvelE,N

commit 9e2dd69
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Jun 22 13:07:56 2022 -0600

    respond to review comments

commit 26498db
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 20 15:27:46 2022 -0600

    fix debug failure when grid_ice=C

    * compiling in debug mode using -init=snan,arrays requires
    initialization of variables

commit 2d5487a
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 13 11:56:40 2022 -0600

    change F->S for ocean velocities and tilts

commit a38df37
Merge: ab95d2d 7705e13
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 13 09:34:45 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/addCgridfixes

commit ab95d2d
Merge: cbc6046 c334aee
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jun 2 15:48:31 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/addCgridfixes

commit cbc6046
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon May 23 19:07:39 2022 -0600

    remove added haloupdates

    * baselines pass with these extra halo updates removed

commit ae50efe
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sun May 22 16:30:00 2022 -0600

    fixes for cgrid repro

commit dd158e2
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat May 14 13:59:26 2022 +0000

    replace save_init with step_prep in CICE_RunMod

commit 247dc1d
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat May 14 09:01:12 2022 -0400

    fix comment, fix env for orion and hera

commit 4b28dfe
Merge: c660075 078aab4
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat May 14 08:58:48 2022 -0400

    Merge remote-tracking branch 'Consortium/main' into feature/addCgrid

commit c660075
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue May 10 15:28:44 2022 -0400

    Update CICE to Consortium/main (NOAA-EMC#48)

    Update OpenMP directives as needed including validation via new omp_suite. Fixed OpenMP in dynamics.
    Refactored eap puny/pi lookups to improve scalar performance
    Update Tsfc implementation to make sure land blocks don't set Tsfc to freezing temp
    Update for sea bed stress calculations

commit 27dfd1b
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Feb 24 11:19:10 2022 -0500

    Update CICE to latest Consortium/main (NOAA-EMC#45)

commit 8ff0fb2
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Tue Nov 30 05:11:57 2021 -0500

    duplicate gitmodule style for icepack

commit abbebab
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Tue Nov 30 05:10:24 2021 -0500

    remove 2 extraneous lines

    * remove two log print lines that were removed prior to
    merge of driver updates to consortium

commit 7a0b65e
Merge: 55bf9f4 8d4a3c6
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Mon Nov 29 19:06:29 2021 -0500

    Merge branch 'emc/develop' into feature/updcice

commit 55bf9f4
Merge: d83c67b 2b85126
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Mon Nov 29 18:50:34 2021 -0500

    Merge remote-tracking branch 'Consortium/main' into feature/updcice

commit 8d4a3c6
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Nov 22 09:06:56 2021 -0500

    recreate cap update branch (NOAA-EMC#42)

    * add debug_model feature
    * add required variables and calls for tr_snow

commit d83c67b
Merge: 8a88024 d95bd51
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Oct 12 14:16:31 2021 -0400

    Merge branch 'NOAA-EMC:emc/develop' into feature/updcice

commit d95bd51
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Fri Oct 1 09:42:49 2021 -0400

    Use CICE-Consortium/Icepack master (NOAA-EMC#40)

    * switch to icepack master at consortium

commit 8a88024
Merge: d0a45a2 2540695
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Sep 20 09:28:26 2021 -0400

    Merge branch 'NOAA-EMC:emc/develop' into feature/updcice

commit 2540695
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Sep 16 08:28:30 2021 -0400

    Update CICE for latest Consortium master (NOAA-EMC#38)

        * Implement advanced snow physics in icepack and CICE
        * Fix time-stamping of CICE history files
        * Fix CICE history file precision

commit d0a45a2
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Thu Sep 16 07:44:22 2021 -0400

    update icepack submodule

commit 5cb78cd
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Wed Sep 15 09:15:42 2021 -0400

    fix rhoa when lowest_temp is 0.0

    * provide default value for rhoa when imported temp_height_lowest
    (Tair) is 0.0
    * resolves seg fault when frac_grid=false and do_ca=true

commit cd021b5
Merge: 7d2139c 6e89728
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Sep 1 19:39:06 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit 7d2139c
Merge: a1b3375 cb7d616
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Aug 31 14:08:06 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit a1b3375
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Aug 31 14:07:50 2021 -0400

    update icepack

commit 397b4bd
Merge: aade124 7f089d0
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Aug 26 17:20:25 2021 -0400

    Merge branch 'NOAA-EMC:emc/develop' into feature/updcice

commit 7f089d0
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Aug 23 08:19:25 2021 -0400

    add memory profiling (NOAA-EMC#36)

    * add profile_memory calls to CICE cap

commit aade124
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Aug 20 08:14:44 2021 -0400

    update icepack

commit aeb473a
Merge: 71f4fe6 26d917a
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Aug 20 08:03:58 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit 71f4fe6
Merge: 4373d3d 3fd897e
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sun Aug 15 11:04:41 2021 -0400

    Merge remote-tracking branch 'TCraig/tmB' into feature/updcice

commit 4373d3d
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sun Aug 15 11:04:01 2021 -0400

    update icepack

commit 3fd897e
Merge: 83068c7 2a692af
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Fri Aug 13 16:24:01 2021 -0600

    Merge branch 'master' of https://github.com/cice-consortium/cice into tmB

commit 83068c7
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Fri Aug 13 09:24:49 2021 -0600

    update evp1d test

commit e31ce7e
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Fri Aug 13 09:00:01 2021 -0600

    reset calchk to 100000 years

commit eaa3c3a
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Thu Aug 12 23:53:55 2021 -0600

    update documentation

commit c5794b4
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Thu Aug 12 23:49:05 2021 -0600

    Update testing

    - Add prod suite including 10 year gx1prod and qc test
    - Update unit test compare scripts

commit 7b5c2b4
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Thu Aug 12 16:57:18 2021 -0600

    Fix QC issues

    - Add netcdf status checks and aborts in ice_read_write.F90
    - Check for end of file when reading records in ice_read_write.F90 for
      ice_read_nc methods
    - Update set_nml.qc to better specify the test, turn off leap years since we're cycling
      2005 data
    - Add check in cice.t-test.py to make sure there are at least 1825 files, 5 years of data
    - Add QC run to base_suite.ts to verify qc runs to completion, and the possibility to use
      those results directly for QC validation
    - Clean up error messages and some indentation in ice_read_write.F90

commit 96d5851
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Wed Aug 11 13:26:01 2021 -0600

    update model log output in ice_init

commit b3364a6
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Tue Aug 10 22:21:33 2021 -0600

    fix implementation error

commit 15763d8
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Tue Aug 10 17:14:25 2021 -0600

    revert set_nml.histall

commit 441f693
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Tue Aug 10 17:04:56 2021 -0600

    Fix history features

    - Fix bug in history time axis when sec_init is not zero.
    - Fix issue with time_beg and time_end uninitialized values.
    - Add support for averaging with histfreq='1' by allowing histfreq_n to be any value
      in that case.  Extend and clean up construct_filename for history files.  More could
      be done, but wanted to preserve backwards compatibility.
    - Add new calendar_sec2hms that converts daily seconds to hh:mm:ss.  Update the
      calchk calendar unit tester to check this method
    - Remove abort test in bcstchk, this was just causing problems in regression testing
    - Remove known problems documentation about problems writing when istep=1.  This issue
      does not exist anymore with the updated time manager.
    - Add new tests with hist_avg = false.  Add set_nml.histinst.

commit 55586f7
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jul 20 11:21:12 2021 -0400

    update icepack and revert gitmodules

commit 1721728
Merge: 9057817 85531cf
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Fri Jul 2 09:49:24 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit 9057817
Merge: f3b2652 995f3af
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Thu Jun 24 08:34:26 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit f3b2652
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Thu Jun 24 08:32:44 2021 -0400

    update icepack and pointer

commit 0c39047
Merge: 9a76541 d1f2d15
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Thu Jun 24 08:30:26 2021 -0400

    Merge branch 'emc/develop' into feature/updcice

commit d1f2d15
Merge: 74e7b58 9d88d92
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Jun 11 09:13:53 2021 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit 9d88d92
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Jun 10 18:08:12 2021 -0400

    add cice changes for zlvs (NOAA-EMC#29)

commit 74e7b58
Merge: b52e91c 519d339
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat Jun 5 17:35:52 2021 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit 519d339
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Fri Jun 4 16:41:08 2021 -0400

    Update CICE to latest Consortium master (NOAA-EMC#26)

    update CICE and Icepack

    * changes the criteria for aborting ice for thermo-conservation errors
    * updates the time manager
    * fixes two bugs in ice_therm_mushy
    * updates Icepack to Consortium master w/ flip of abort flag for troublesome IC cases

commit 9a76541
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Jun 4 16:01:59 2021 -0400

    update icepack

commit d8fb6d9
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Jun 2 16:57:19 2021 -0400

    switch icepack branches

    * update to icepack master but set abort flag in ITD routine
    to false

commit 51db2f9
Merge: b52e91c bd512d4
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jun 1 09:17:15 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit b52e91c
Merge: 840e931 2eca569
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Apr 5 08:40:40 2021 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit 2eca569
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Apr 5 08:29:35 2021 -0400

    update icepack

commit 66546ae
Merge: 1e4d393 5a0a559
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Sun Mar 14 09:27:54 2021 -0400

    Merge remote-tracking branch 'Consortium/master' into feature/updcice

commit 1e4d393
Merge: 2a0f332 f773ef3
Author: denise.worthen <denise.worthen@noaa.gov>
Date:   Sun Mar 14 09:19:52 2021 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into feature/updcice

commit 840e931
Merge: 23cdee7 f773ef3
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Nov 10 10:44:16 2020 -0500

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit f773ef3
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Nov 10 10:37:11 2020 -0500

    Update CICE to consortium master (NOAA-EMC#23)

    updates include:

    * deprecate upwind advection (CICE-Consortium#508)
    * add implicit VP solver (CICE-Consortium#491)

commit 2a0f332
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Nov 10 10:29:03 2020 -0500

    update gitmodules, update icepack

commit 41afe74
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Oct 30 17:47:51 2020 +0000

    add ice_dyn_vp module to CICE_InitMod

commit 2515f77
Merge: 1e4f42b 12fdb47
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Oct 30 11:14:19 2020 -0400

    Merge remote-tracking branch 'consortium/master' into feature/updcice

commit 1e4f42b
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Oct 30 11:13:17 2020 -0400

    update icepack

commit 23cdee7
Merge: 8129aab ac617cd
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Oct 8 07:17:42 2020 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit ac617cd
Author: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
Date:   Thu Oct 8 07:13:14 2020 -0400

    Support TACC stampede (NOAA-EMC#19)

commit 8129aab
Merge: 6d30789 c0a2e2d
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Aug 31 13:29:35 2020 -0400

    Merge remote-tracking branch 'consortium/master' into emc/develop

commit 6d30789
Merge: 5dcfca8 285985c
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Aug 31 16:59:48 2020 +0000

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit 285985c
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Aug 31 12:53:02 2020 -0400

    Update CICE6 for integration to S2S

    * add wcoss_dell_p3 compiler macro

    * update to icepack w/ debug fix

    * replace SITE with MACHINE_ID

    * update compile scripts

commit 5dcfca8
Merge: 5ecde75 4d7ba5b
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Aug 13 13:06:16 2020 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit 4d7ba5b
Merge: d81a834 eb77517
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Aug 13 12:57:55 2020 -0400

    Merge remote-tracking branch 'upstream/master' into emc/develop

commit d81a834
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Aug 13 09:40:18 2020 -0400

    Fixcommit (NOAA-EMC#14)

    Align commit history between emc/develop and cice-consortium/master

commit 5ecde75
Merge: 88cc2fd bdf1a1f
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Aug 12 16:24:04 2020 -0400

    Merge remote-tracking branch 'upstream/emc/develop' into emc/develop

commit bdf1a1f
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Aug 12 15:55:21 2020 -0400

    changes for consistency w/ current emc-cice5 (NOAA-EMC#13)

    Update to emc/develop fork to current CICE consortium

    Co-authored-by: David A. Bailey <dbailey@ucar.edu>
    Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
    Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
    Co-authored-by: apcraig <anthony.p.craig@gmail.com>
    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

commit 88cc2fd
Merge: c084de4 003aae0
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Aug 5 18:35:50 2020 -0400

    Merge remote-tracking branch 'upstream/master' into emc/develop

commit c084de4
Merge: 86b8dab b055c7f
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Jul 17 17:24:15 2020 -0400

    Merge remote-tracking branch 'upstream/master' into emc/develop

commit 86b8dab
Merge: 9bdb9ad 8f37bfc
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Fri Jul 17 13:15:25 2020 +0000

    Merge remote-tracking branch 'upstream/emc/develop' into HEAD

commit 8f37bfc
Author: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
Date:   Fri Jul 17 09:05:06 2020 -0400

    cice6 compile (NOAA-EMC#6)

    * enable debug build. fix to remove errors

    * fix an error in comp_ice.backend.libcice

    * change Orion to orion for machine identification

commit 9bdb9ad
Merge: 916c6af c22c6d5
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Wed Jul 15 13:15:02 2020 -0400

    Merge remote-tracking branch 'CICE-Consortium/master' into emc/develop

commit 916c6af
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Jul 14 08:18:33 2020 -0400

    add -link_mpi=dbg to debug flags (NOAA-EMC#8)

commit 8ff4ee0
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jul 7 15:46:57 2020 -0400

    change Orion to orion in backend

    remove duplicate print lines from ice_transport_driver

commit 4e8cc79
Merge: f92bef3 93f0e86
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jul 7 15:32:13 2020 -0400

    Merge remote-tracking branch 'upstream/nuopc' into emc/develop

commit f92bef3
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jul 7 15:20:45 2020 -0400

    update icepack submodule

commit 93f0e86
Merge: 8ebdda9 6b4a277
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Jul 7 13:19:36 2020 -0600

    Merge pull request NOAA-EMC#5 from ESCOMP/coszen

    Add restart_coszen namelist option

commit 6b4a277
Merge: 27dd3b7 8ebdda9
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Jul 7 13:19:22 2020 -0600

    Merge branch 'nuopc' into coszen

commit 50bf856
Merge: 3bb3694 fcf8989
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jul 7 15:03:30 2020 -0400

    Merge remote-tracking branch 'upstream/master'

commit 27dd3b7
Author: David Bailey <dbailey@ucar.edu>
Date:   Tue Jul 7 12:40:33 2020 -0600

    Add restart_coszen namelist option

commit 8ebdda9
Merge: e4c989c 30a81cc
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Jul 7 12:39:20 2020 -0600

    Merge pull request NOAA-EMC#4 from mvertens/nuopc

    cleanup changes to nuopc branch

commit 30a81cc
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Tue Jul 7 12:17:21 2020 -0600

    fixed white space issue

commit e4c989c
Merge: 178693a fcf8989
Author: David Bailey <dbailey@ucar.edu>
Date:   Mon Jul 6 10:24:24 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit aea1aa8
Merge: 178693a 41855fd
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Sat Jul 4 15:04:13 2020 -0600

    update to latest nuopc changes

commit 41855fd
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Sat Jul 4 14:29:12 2020 -0600

    fixes to get cesm working

commit 3a1b88b
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Sat Jul 4 13:25:48 2020 -0600

    fix compile errors

commit b4afd2e
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Sat Jul 4 11:59:19 2020 -0600

    removal of many cpp-ifdefs

commit 178693a
Merge: 902e883 c762336
Author: David Bailey <dbailey@ucar.edu>
Date:   Thu Jul 2 15:28:07 2020 -0600

    Merge branch 'nuopc' of https://github.com/ESCOMP/CICE into nuopc

commit 902e883
Author: David Bailey <dbailey@ucar.edu>
Date:   Thu Jul 2 15:27:55 2020 -0600

    Fix logging issues for NUOPC

commit 6bccf71
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jul 2 13:19:25 2020 -0600

    remove cesmcoupled ifdefs

commit c762336
Author: David Bailey <dbailey@ucar.edu>
Date:   Thu Jul 2 11:36:49 2020 -0600

    Move the forapps directory

commit 46fcfba
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jul 2 05:24:54 2020 -0600

    changes for coldstart running

commit 73e7774
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 14:41:27 2020 -0600

    Fix additional OMP problems

commit ad03424
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 12:52:00 2020 -0600

    Fix threading problem in init_bgc

commit 239c7de
Merge: b4da8a6 415df0e
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 12:50:38 2020 -0600

    Merge branch 'nuopc' of https://github.com/ESCOMP/CICE into nuopc

commit b4da8a6
Merge: 6affdcf 55ca18b
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 12:50:14 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit 415df0e
Merge: b5a6058 6affdcf
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 10:08:07 2020 -0600

    Merge branch 'nuopc' of https://github.com/ESCOMP/CICE into nuopc

commit b5a6058
Merge: 7848fdf 55ca18b
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jul 1 10:07:31 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit 6affdcf
Merge: 7848fdf c6c20bf
Author: David Bailey <dbailey@ucar.edu>
Date:   Mon Jun 29 15:31:28 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit 089f60f
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jun 25 15:18:56 2020 +0000

    update comp_ice.backend with temporary ice_timers fix

commit 6982ee4
Merge: 308a1d4 7848fdf
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Thu Jun 25 14:03:59 2020 +0000

    Merge remote-tracking branch 'upstream/nuopc' into HEAD

commit 7848fdf
Merge: 7e43703 f532dd9
Author: David Bailey <dbailey@ucar.edu>
Date:   Wed Jun 24 15:46:15 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit 308a1d4
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 22 15:05:32 2020 -0600

    Revert "update icepack submodule"

    This reverts commit e70d1ab.

commit e70d1ab
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 22 14:58:13 2020 -0600

    update icepack submodule

commit f41f1e9
Merge: 7ac0e3d 7e43703
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Mon Jun 22 20:52:39 2020 +0000

    Merge remote-tracking branch 'upstream/nuopc' into HEAD

commit 3bb3694
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Fri Jun 5 13:32:44 2020 -0400

    Convergence on ustar for CICE. (CICE-Consortium#452) (NOAA-EMC#5)

    * Add atmiter_conv to CICE

    * Add documentation

    * trigger build the docs

    Co-authored-by: David A. Bailey <dbailey@ucar.edu>

commit 397e588
Merge: d46d691 2054d09
Author: denise.worthen <Denise.Worthen@noaa.gov>
Date:   Tue Jun 2 12:31:56 2020 -0400

    Merge remote-tracking branch 'upstream/master'

commit 7e43703
Merge: 80c9e6e 53715ea
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue May 26 09:15:51 2020 -0600

    Merge pull request NOAA-EMC#3 from mvertens/mvertens/nuopc

    changes to satisfy ufsatm and cesm requirements for pot temp and density from atm

commit 53715ea
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Sun May 24 18:06:06 2020 -0600

    put in changes so that both ufsatm and cesm requirements for potential temperature and density are satisfied

commit 80c9e6e
Merge: 8f0b5ee bce31c2
Author: David Bailey <dbailey@ucar.edu>
Date:   Tue May 19 09:09:00 2020 -0600

    Merge branch 'master' of https://github.com/CICE-Consortium/CICE into nuopc

commit 7ac0e3d
Merge: 10e7c20 8f0b5ee
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue May 12 10:02:58 2020 -0400

    Merge pull request #1 from ESCOMP/nuopc

commit 8f0b5ee
Merge: 10e7c20 ce8e5a9
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Sun May 10 08:24:19 2020 -0600

    Merge pull request #2 from apcraig/ufs01

    Update CICE for coupling with UFS

commit ce8e5a9
Author: apcraig <anthony.p.craig@gmail.com>
Date:   Sat May 9 21:29:22 2020 -0600

    update CICE6 to support coupling with UFS

commit 10e7c20
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Wed Apr 29 16:36:09 2020 -0600

    fixed problems in updated orbital calculations needed for cesm

commit 183218a
Author: Mariana Vertenstein <mvertens@ucar.edu>
Date:   Thu Apr 23 17:43:35 2020 -0600

    updated orbital calculations needed for cesm

commit d46d691
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Apr 22 14:48:08 2020 -0400

    merge latest master (NOAA-EMC#4)

    * Isotopes for CICE (CICE-Consortium#423)

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>
    Co-authored-by: David Bailey <dbailey@ucar.edu>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

commit 71d2ded
Merge: 99470ed 9ac1863
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Thu Apr 2 15:13:02 2020 -0400

    Merge pull request NOAA-EMC#3 from CICE-Consortium/master

commit 99470ed
Merge: 0338d04 7e2a1d9
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Sat Mar 7 10:03:29 2020 -0500

    Merge pull request #2 from CICE-Consortium/master

commit 0338d04
Merge: b5134ad 7e11a34
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Tue Feb 25 08:43:10 2020 -0500

    Merge pull request #1 from CICE-Consortium/master
Merge branch 'emc/develop' into feature/main6.4.2

DeniseWorthen added a commit to DeniseWorthen/CICE that referenced this pull request Sep 22, 2023

updates include:

* deprecate upwind advection (CICE-Consortium#508)
* add implicit VP solver (CICE-Consortium#491)

DeniseWorthen added a commit to DeniseWorthen/CICE that referenced this pull request Apr 7, 2024

commit f36559256eb08272cdfe0706c45e0824e00fb37b
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Sun Apr 7 09:50:39 2024 -0400

    fix bad merge

    * this fixes a block of code w/in a CESMCOUPLED ifdef block

commit cbac04dad1f79eb51900dab8bf6aaa7cddbe82a1
Merge: 5a56c38 7d4e5de
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Apr 3 16:05:31 2024 -0400

    Merge branch 'emc/develop' into feature/pio_options

commit 5a56c38a0d73bf16ddf2024a23f3f3fe4432007a
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Wed Apr 3 15:44:33 2024 -0400

    update with last emc/develop change

commit aca835755aa82ead50040ea7e43ec63619667054
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Feb 22 08:55:44 2024 -0800

    Update IO formats and add new IO namelist controls (#928)

    This provides new features for CICE IO through both netCDF and PIO.  New namelist options are added to control history and restart format, hdf5 compression and chunking, the PIO rearranger, and PIO IO task control.  Separate controls are provided for history and restart files.  The namelist changes are for

      history_format, restart_format
      history_rearranger, restart_rearranger
      history_iotasks, history_root, history_stride, restart_iotasks, restart_root, and restart_stride
      history_chunksize, history_deflate, restart_chunksize, restart_deflate.

    In particular,

    - Update restart_format and history_format options to 'cdf1', 'cdf2', 'cdf5', 'hdf5', 'pnetcdf1', 'pnetcdf2', 'pnetcdf5', 'default'.  The old options, 'default', 'pio_netcdf', and 'pio_pnetcdf' are still supported and backwards compatible with lcdf64, but are deprecated and no longer documented.  The old options and old namelist lcdf64 are covered by the new options.  Support of the old options should be removed in the future.  Note that some problems were discovered when opening files with hdf5 format but reading non-hdf5 files with a spack built PIO/netCDF.  As a result, the format specified for the restart read is always 'cdf1' which provides flexibility and robustness across software installs, although it may result in serial reads of hdf5 files when a parallel read could be done.
    - Deprecate lcdf64 namelist.  This namelist is no longer needed and is covered by the new restart_format and history_format options.  The namelist still exists and is backwards compatible with the old 'default', 'pio_netcdf', and 'pio_pnetcdf' format options, but is no longer documented.  This should be removed in the future.
    - Add new namelist to control PIO pe/task setup (iotasks, root, stride) for history and restart.  These settings control the PIO IO tasks.  The root, stride, and iotasks are consistent with the MPI communicator.  root=0 is the first MPI task.  These control PIO IO performance and are usually a function of things like the IO and node hardware.  See PIO for more information.  CICE computes PIO iotask, root, and stride defaults for cases where -99 is passed in for some or all of these namelist.  Those defaults are somewhat constrained by a bug in PIO, https://github.com/NCAR/ParallelIO/issues/1986.  The current implementation avoids the bug by limiting the iotasks for some MPI task counts.  This is noted in ice_pio.F90.
    - Add new namelist to control PIO rearranger (rearranger) for history and restart.  Supports 'box', 'subset', and 'default'.  These control how PIO rearrangement is carried out.  default is equivalent to box and the box generally performs better.  See PIO for more information.
    - Add new namelist to support hdf5 compression and chunking (deflate, chunksize) for history and restart.  The deflate controls file compression and is an integer between 0 and 9 where 0 means no compression and 9 is maximum compression.   Generally, the higher the number, the slower the IO and the smaller the file, but the optimal setting depends on the contents of the file.  Chunksize provides a performance control for the hdf5 parallel writes.  It is a 2d array and is associated with the size of the piece of the array written by hdf5.  hdf5 can be read and written in parallel, but that depends on how netCDF and PIO are built.  Note that prior versions of PIO, including PIO1, do not support the hdf5 compression and chunking through the PIO interface.
    - Add new namelist settings (set_nml files) and update the io_suite to cover the new IO options.  Remove old namelist settings associated with the deprecated format options and the lcdf64 namelist.  These deprecated features are no longer tested.
    - Update documentation to add new namelist and IO features.
    - Update the nuopc/cmeps driver code to support the new features.
    - Update the default ice_in to add the new namelist.
    - Update the derecho netcdf module to a version that supports hdf5.
    - Clean up some code formatting (indentation)

    ---------

    Co-authored-by: Anton Steketee <anton.steketee@anu.edu.au>
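
    For illustration, a minimal namelist sketch of the new IO controls listed
    above (the group name setup_nml and all values shown are assumptions, not
    documented defaults):

      &setup_nml
        history_format     = 'hdf5'      ! one of 'cdf1','cdf2','cdf5','hdf5','pnetcdf1','pnetcdf2','pnetcdf5','default'
        history_deflate    = 4           ! hdf5 compression level, 0 (none) to 9 (maximum)
        history_chunksize  = 128, 128    ! 2d chunk sizes for hdf5 parallel writes
        history_rearranger = 'box'       ! PIO rearranger: 'box', 'subset', or 'default'
        history_iotasks    = -99         ! -99 asks CICE to compute PIO defaults
        history_root       = -99
        history_stride     = -99
        restart_format     = 'cdf1'      ! restarts have separate, analogous controls
      /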

commit 9e9e5b3fabd88c429c4632baa8235187129e2dd7
Author: Philippe Blain <philippe.blain@canada.ca>
Date:   Mon Feb 19 14:08:37 2024 -0500

    ug_testing.rst: also mention checking the base suite results (#934)

    In the "End-To-End Testing Procedure" section of the user guide, we
    instruct users to run a base suite and a test suite, but only mention
    checking the results of the test suite.

    Also mention checking the results of the base suite first, to make sure
    everything passes before checking the test suite.

    Suggested-by: Jean-Francois Lemieux <jean-francois.lemieux@canada.ca>

commit 095e62a9342df74261b90fcb7a20d2ecdae2c5bc
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Mon Feb 12 14:49:02 2024 -0800

    Update PULL_REQUEST_TEMPLATE to request detailed information (#931)

    Update PULL_REQUEST_TEMPLATE to request detailed information about changes associated with the PR. This will be useful for the commit log when squash merging the PR.
    ---------

    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

commit 1a00e5e4e967c8429a7753ac3597f9c1476cf6b7
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Mon Feb 5 16:22:11 2024 -0700

    Fix for ice_mesh_mod with grid variables removed (#929)

commit 7a4b95e6deec0ec72c1da35a23ae1eb3ffe3d077
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Mon Jan 22 11:12:13 2024 -0800

    Update pio and netcdf error checks (#927)

    Update pio and netcdf error checks

    ---------

    Co-authored-by: anton-climate <anton.steketee@anu.edu.au>
    Co-authored-by: Anton Steketee <79179784+anton-seaice@users.noreply.github.com>

commit 6449f40c41aa1a5c00096696202d7bd7ebd2a69a
Author: JFLemieux73 <31927797+JFLemieux73@users.noreply.github.com>
Date:   Thu Jan 11 19:17:20 2024 +0000

    Add vorticity as a diagnostic output (#924)

    * Added new variable vort for vorticity output

    * Added calc of diag vorticity for evp, vp and eap for B, C and CD grids

    * updated doc and ice_in file for new vorticity variable

    * Changed output frequency of vorticity from m to x

    * Added f_vort to set_nml.histall and set_nml.histdbg

    * Specified location of divu, shear and vort in ice_history.F90

    ---------

    Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
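
    For example, the new field is requested per stream like any other history
    variable; a hedged icefields_nml excerpt (group name assumed):

      &icefields_nml
        f_vort = 'm'    ! monthly vorticity output; the shipped default is 'x' (off)
      /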

commit a20bfddf7a1260dbb61241e0838c678d2eecf972
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Thu Jan 11 11:07:36 2024 -0700

    scamn bugfix for nuopc driver (#926)

    Co-authored-by: John Truesdale <jet@ucar.edu>

commit 1314e17b4213c6ce9424eab80763edf4b2ae867f
Author: TRasmussen <33480590+TillRasmussen@users.noreply.github.com>
Date:   Thu Jan 11 18:25:52 2024 +0100

    First round of housekeeping on ice_grid (#921)

    * removal of unused variables.

    * moved xav to transport. Could remove commented code. Could remove xav and yav as they are zero

    * Move derived parameters and only allocate if needed

    * bugfixes for cxp, cyp...

    * fix index and remove commented code in ice_grid

    * new version of transport_remap. xav, yav array where needed. xxav, yyav parameter

    * Removed comments from ice_transport_remap and arrays for nonuniform grids

commit 37f9a98b1b6529bc957fb888bd00348ab61c8b32
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Dec 21 07:15:02 2023 -0800

    Fix single channel debug failure, Update github actions testing (#922)

    * update ghactions testing

    * refactor min/max global reductions; move code away from huge, which was giving MPI some problems.

commit b14cedfaed8b81500fc5422cfc44b6d80e5893ef
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Nov 28 15:10:16 2023 -0700

    ice_history: allow per-stream suffix for history filenames (#912)

    * Add capability for h extension

    * Update documentation for hist_str

    * Change hist_str to hist_suffix

    * Change in default namelist

    * Update doc/source/cice_index.rst

    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

    * One more hist_str

    ---------

    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

commit 21fab166fd2b8e903df366dbc1c518dabd08c23f
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Tue Nov 28 10:04:36 2023 -0800

    Update Icepack to #f6ff8f7c4d4cb6f (#913)

    * Update Icepack to #f6ff8f7c4d4cb6f

    Split the developer guide infrastructure section from the dynamics documentation

    Add a coding standard section to the documentation

    Add a couple sentences about the state of the parameter nghost to the documentation

    Update opticep to use the latest main code for the unit test

    * update documentation

commit 509e2c33e95e3a2370dc406fb2fe4d06192420a6
Author: Philippe Blain <philippe.blain@canada.ca>
Date:   Thu Nov 23 13:09:04 2023 -0500

    ice_history: refactor CMIP history variables (#906)

    * ice_flux: zero-initialize divu and shear in init_history_dyn

    'divu' and 'shear' are accessed in 'accum_hist' when writing the initial
    condition before they are initialized at the start of {eap, evp,
    implicit_solver}. This leads to runtime error when compiling with NaN
    initialization.

    Zero-initialize 'divu' and 'shear' in init_history_dyn, where the
    related variable 'strength' is already zero-initialized.

    * ice_history_shared: disallow 'x' in history frequency variables f_*

    In the current code, nothing prevents users from leaving 'x' along with
    active frequencies in the individual namelist history frequency
    variables, for example:

        f_aice = 'xmd'

    This configuration does not work correctly, however. The corresponding
    history fields are correctly defined in
    ice_history_shared::define_hist_field, but since the calls to
    ice_history_shared::accum_hist_field in ice_history::accum_hist are only
    done after checking that the first element of each frequency variable is
    not 'x', the corresponding variables in the history files are all zero.

    Prevent that behaviour by actually disallowing 'x' in history frequency
    variables if any other frequencies are active. To implement that, add a
    check in the loop in define_hist_field, which loops through vhistfreq,
    (corresponding to f_aice, etc. in ice_history). Since this subroutine
    initializes 'id(:)' to zero and then writes a (non-zero) index in 'id'
    for any active frequency, it suffices to check that all previous indices
    are non-zero.

    * ice_history: remove unneeded conditions around CMIP history variables

    In ice_history::accum_hist, after the calls to accum_hist, we loop on
    the different output streams, and on the history variables in the
    avail_hist_fields array, to mask out land points and convert units for
    each output variable.

    Since 3c99e106 (Update CICE with CMIP changes. (#191), 2018-09-27), we
    also use this loop to do a special treatment for some CMIP variables
    (namely, averaging them only for time steps where ice is present, and
    masking points where ice is absent).

    This adjustment is done if the corresponding output frequency variable
    (f_sithick, etc.) does not have 'x' as its first element, and if the
    corresponding index in avail_hist_field for that variable/frequency
    (n_sithick(ns)) is not zero. Both conditions are in fact unneeded since
    they are always true.

    The first condition is always true because if the variable is found in
    the avail_hist_field array, which is ensured by the condition on line
    3645, then necessarily its corresponding namelist output frequency won't
    have 'x' as its first character (since this is enforced in
    ice_history_shared::define_hist_field).

    The second condition is always true because if the variable is found in
    the avail_hist_field array, then necessarily its index in that array,
    n_<var>(ns), is non-zero (see ice_history_shared::define_hist_field).

    Remove these unneeded conditions. This commit is best viewed with

        git show --color-moved --color-moved-ws=allow-indentation-change

    * ice_history: use loop index directly for CMIP variables

    In ice_history::accum_hist, there is a special treatment for some CMIP
    variables where they are averaged only for time steps where ice is
    present, and points where there is no ice are masked. This is done on
    the loop on output streams (with loop index n).

    This special averaging is done by accessing a2D and a3Dc using the
    variable n_<var>(ns), which corresponds to the index in the
    avail_hist_field array where this history variable/frequency is defined.
    By construction, this index corresponds to the loop index 'n', for both
    the 2D and the 3D loops. Simplify the code by using 'n' directly.

    * ice_history_shared: add two logical components to ice_hist_field

    At the end of ice_history::accum_hist, we do a special processing for
    some CMIP variables: we average them only for time steps where ice is
    present, and also mask ice-free points. The code to do that is repeated
    for each variable to which it applies.

    In order to reduce code duplication, let's introduce two new logical
    components to our 'ice_hist_field' type, defaulting them to .false., and make them optional
    arguments in ice_history_shared::define_hist_field. This allows us to
    avoid defining them for each output variable. We'll set them for CMIP
    variables in a following commit.

    * ice_history: set avg_ice_present, mask_ice_free_points for relevant CMIP variables

    In the previous commit, we added two components to type ice_hist_field
    (avg_ice_present and mask_ice_free_points), relating to some special
    treatment for CMIP variables (whether to average only for time steps
    where the ice is present and to mask ice-free points).

    Set these to .true. in the call to 'define_hist_field' for the relevant
    2D variables [1], and set only 'avg_ice_present' to .true. for the 3D
    variables siitdthick and siitdsnthick, corresponding to the code under
    the "Mask out land points and convert units" loop in
    ice_history::accum_hist.

    [1]
    sithick
    siage
    sisnthick
    sitemptop
    sitempsnic
    sitempbot
    siu
    siv
    sidmasstranx
    sistrxdtop
    sistrydtop
    sistrxubot
    sistryubot
    sicompstren
    sispeed
    sidir
    sialb
    sihc
    siflswdtop
    siflswutop
    siflswdbot
    sifllwdtop
    sifllwutop
    siflsenstop
    siflsensupbot
    sifllatstop
    siflcondtop
    siflcondbot
    sipr
    sifb
    siflsaltbot
    siflfwbot
    siflfwdrain
    sidragtop
    sirdgthick
    siforcetiltx
    siforcetilty
    siforcecoriolx
    siforcecorioly
    siforceintstrx
    siforceintstry

    * ice_history: use avg_ice_present, mask_ice_free_points to reduce duplication

    Some CMIP variables are processed differently in
    ice_history::accum_hist: they are averaged only for time steps when ice
    is present, and points where ice is absent are masked. This processing
    is repeated for each of these variables in the 2D and 3Dc loops.

    To reduce code duplication, use the new components avg_ice_present and
    mask_ice_free_points of ice_hist_field to perform this processing only
    for variables that were defined accordingly. The relevant variables
    already have those components defined as of the previous commit.

    Note that we still need a separate loop for the variable 'sialb' (sea
    ice albedo) to mask points below the horizon.
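
    As a hedged illustration of the history-frequency rule enforced above
    (icefields_nml group name assumed): 'x' may no longer be combined with
    active frequencies, so

      &icefields_nml
        f_aice = 'md'    ! valid: monthly and daily streams
     !  f_aice = 'xmd'   ! now rejected: 'x' mixed with active frequencies
      /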

commit 1cf109b7c350f119e8e3cd8bd918fa31e61d829c
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Mon Nov 20 14:03:59 2023 -0700

    Change to dealloc_grid in CICE_InitMod.F90 (#911)

commit d14bb694f2f8df4e74361a9df999e82eaa44fc8b
Author: Mads Hvid Ribergaard <38077893+mhrib@users.noreply.github.com>
Date:   Fri Nov 17 16:39:01 2023 +0100

    Add missing logical "timer_stats" (#910)

    Co-authored-by: Mads Hvid Ribergaard <mhri@3vsrvp2.usr.dmi.dk>

commit 8573ba8ab196c1e357a101462b16bd92128461b1
Author: TRasmussen <33480590+TillRasmussen@users.noreply.github.com>
Date:   Thu Nov 16 22:12:07 2023 +0100

    New 1d evp solver (#895)

    * New 1d evp solver

    * Small changes incl. timer names and included private/public in ice_dyn_core1d

    * fixed bug on gnu debug

    * moved halo update to evp1d, added deallocation, fixed bug

    * fixed deallocation dyn_evp1d

    * bugfix deallocate

    * Remove gather strintx and strinty

    * removed 4 tests with evp1d and c/cd grid

    * Update of evp1d implementation

    - Rename halo_HTE_HTN to global_ext_halo and move into ice_grid.F90
    - Generalize global_ext_halo to work with any nghost size (was hardcoded for nghost=1)
    - Remove argument from dyn_evp1d_init, change to "use" of global grid variables
    - rename pgl_global_ext to save_ghte_ghtn
    - Update allocation of G_HTE, G_HTN
    - Add dealloc_grid to deallocate G_HTE and G_HTN at end of initialization
    - Add calls to dealloc_grid to all CICE_InitMod.F90 subroutines
    - Make dimension of evp1d arguments implicit size more consistently
    - Clean up indentation and formatting a bit

    * Clean up trailing blanks

    * resolved name conflicts

    * 1d grid var name change

    ---------

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>

commit 5d09123865b5e8b47ba9d3c389b23743d84908c1
Author: Mads Hvid Ribergaard <38077893+mhrib@users.noreply.github.com>
Date:   Fri Nov 10 01:17:24 2023 +0100

    Rename sum to asum, as "sum" is also a generic Fortran function (#905)

    Co-authored-by: Mads Hvid Ribergaard <mhri@3vsrvp2.usr.dmi.dk>

commit 4450a3e8c64bc07d1173eb3e341cd8dea91d5068
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Oct 27 22:22:43 2023 -0700

    Update Icepack to latest version, does not affect CICE (#903)

commit ea241fa81a53b614f54cf5c2dad93bda20b72a78
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Oct 27 16:27:15 2023 -0700

    Update version, remove trailing blanks (#901)

commit 32f233d9728b4e453c0f02fb79a188517a8d5ed4
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Oct 27 16:27:01 2023 -0700

    Update Icepack, add snicar and snicartest tests (#902)

commit 0484dcd1410920f26375b7c280500a5bd16173e9
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Oct 27 09:24:52 2023 -0700

    Split N/E grid computation out of Tlonlat, create NElonlat subroutine. (#899)

    * Split N/E grid computation out of Tlonlat, create NElonlat subroutine.

    See https://github.com/CICE-Consortium/CICE/issues/897

    When TLON, TLAT, ANGLET are on the CICE grid, Tlonlat is NOT called.  This
    meant N and E grid info was never computed.  This would fail during history
    writing with invalid values in N and E grid arrays.  And it would also
    cause problems if the C-grid were run with this type of CICE grid.

    There are no test grids that have TLON, TLAT, ANGLET on them, so this
    error was not found in standard test suites.  This was detected by
    users.

    * Add gx3 grid/kmt files with TLON, TLAT, ANGLET netcdf grid test.

    The grid and kmt files were produced from a gx3 history file.  Results
    are not bit-for-bit with the standard gx3 runs, but seem to be roundoff
    different initially (as expected).

commit 0b5ca0911edaf6081ba891f4287af14ceb201c9f
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Oct 26 19:33:19 2023 -0700

    Revert "Add 5-band dEdd shortwave tests (#896)" (#900)

    This reverts commit b4abca479cd548c3e600a6c645447d5ba9464422.

commit 2e13606558f7ce71633274bc38630caa23de3392
Author: Philippe Blain <philippe.blain@canada.ca>
Date:   Thu Oct 26 13:24:37 2023 -0400

    doc: update histfreq_base and hist_avg descriptions (#898)

    * doc: ug_implementation.rst: do not use curly quotes

    The namelist excerpt in section 'History' of the Implementation part of
    the user guide uses curly quotes (’) instead of regular straight quotes
    ('). This is probably a remnant of the LaTeX version of the doc. These
    quotes can't be used in Fortran and so copy pasting from the doc to the
    namelist causes runtime failures. Use straight quotes instead.

    * doc: ug_implementation.rst: align histfreq_n with histfreq

    Align frequencies with their respective streams, which makes the example
    clearer.

    * doc: ug_implementation.rst: avoid "now" and "still"

    The documentation talks about the current version of the code, so it is
    unnecessary to use words like "now" and "still" to talk about the model
    features. Remove them.

    * doc: ug_implementation.rst: mention histfreq_base and hist_avg are per-stream

    In 35ec167d (Add functionality to change hist_avg for each stream
    (#827), 2023-05-17), hist_avg was made into an array, allowing each
    stream to individually be set to instantaneous or averaged mode. The
    first paragraph of the "History" section of the user guide was updated,
    but another paragraph a little below was not.

    In 933b148c (Extend restart output controls, provide multiple frequency
    options (#850), 2023-08-24), histfreq_base was also made into an array,
    but the "History" section of the user guide was not updated.

    Adjust the wording of the doc to reflect the fact that both hist_avg and
    histfreq_base are per-stream. Also adjust the namelist excerpt to make
    histfreq_base an array, and align hist_avg with it.

    * doc: ug_implementation.rst: refer to 'timemanager' after mentioning histfreq_base

    In 34dc6670 (Namelist option for time axis position. (#839),
    2023-07-06), the namelist option hist_time_axis was added, and the
    "History" section of the user guide updated to mention it.

    The added sentence, however, separates the mention of 'histfreq_base'
    and the reference to the "Time manager" section, which explains the
    different allowed values for that variable. Move the reference up so
    both are next to each other.
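
    A sketch of the per-stream controls described above (stream count and
    values are illustrative assumptions):

      &setup_nml
        histfreq      = 'm'   , 'd'    , 'x'   , 'x'   , 'x'
        histfreq_n    =  1    ,  1     ,  1    ,  1    ,  1
        histfreq_base = 'zero', 'zero' , 'init', 'init', 'init'
        hist_avg      = .true., .false., .true., .true., .true.
      /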

commit b4abca479cd548c3e600a6c645447d5ba9464422
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Oct 26 08:54:40 2023 -0700

    Add 5-band dEdd shortwave tests (#896)

commit 624c28b19b443c031ea862e3e5d2c16387777ddc
Author: Philippe Blain <philippe.blain@canada.ca>
Date:   Thu Oct 26 11:52:26 2023 -0400

    ice_dyn_evp: pass 'grid_location' for LKD seabed stress on C grid (#893)

    When the C grid support was added in 078aab48 (Merge cgridDEV branch
    including C grid implementation and other fixes (#715), 2022-05-10),
    subroutine ice_dyn_shared::seabed_stress_factor_LKD gained a
    'grid_location' optional argument to indicate where to compute
    intermediate quantities and the seabed stress itself (originally added
    in 0f9f48b9 (ice_dyn_shared: add optional 'grid_location' argument to
    seabed_stress_factor_LKD, 2021-11-17)). This argument was however
    forgotten in ice_dyn_evp::evp when this subroutine was adapted for the C
    grid in 48c07c66 (ice_dyn_evp: compute seabed stress factor at CD-grid
    locations, 2021-11-17), such that currently the seabed stress is not
    computed at the correct grid location for the C and CD grids.

    Fix that by correctly passing the 'grid_location' argument. Note that
    the dummy argument is incorrectly declared as 'intent(inout)' in the
    subroutine, so change that to 'intent(in)' so we can pass in character
    constants.

    Closes: https://github.com/CICE-Consortium/CICE/issues/891
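
    A minimal, self-contained Fortran sketch of the intent fix (names are
    hypothetical, not the actual CICE interface):

      program intent_demo
        implicit none
        call stress_factor('E')   ! character constant: legal only for intent(in)
      contains
        subroutine stress_factor(grid_location)
          ! declared intent(inout), this dummy could not receive a constant
          character(len=*), intent(in) :: grid_location
          print *, 'computing seabed stress at grid location ', grid_location
        end subroutine stress_factor
      end program intent_demo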

commit d3698fb46fc23a81b1df8dba676a5a74d7e96a39
Author: daveh150 <david.hebert@nrlssc.navy.mil>
Date:   Wed Oct 25 16:34:35 2023 -0500

    Add atm_data_version to allow JRA55 forcing filenames to have a unique version string (#876)

    * Add jra55date to allow JRA55 forcing to have creation date in file name

    * Changed jra55_date to atm_data_date. Added atm_data_date to docs.

    * Change jra55_date to atm_data_date. Update JRA55_files to include atm_data_date in file. Update case scripts/namelist.

    * change atm_data_date to atm_data_version. Update set_nml.tx1 default to corrected forcing version

    * Update doc to have atm_data_version in proper alphabetical order

    * Re-add set_nml.jra55. Deleted accidentally

    * Fix typo in atm_data_dir documentation

    * Add atm_data_version to set_nml.jra55

    * fix spacing after changing atm_data_date to atm_data_version

    * Change atm_data_date to atm_data_version

    * Comment out JRA55 file debugging

    * Update dg_forcing docs to describe atm_data_version string

    * Uncomment JRA55 filename check. Added check for debug_forcing before writing output

    * Correct doc format/links in dg_forcing.rst

commit 8916b9ff2c58a3a095235bb5b4ce7e8a68f76e87
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Oct 18 14:08:21 2023 -0700

    Update update_ocn_f implementation, Add cpl_frazil namelist (#889)

    * Update update_ocn_f implementation

    Add cpl_frazil namelist

    Add update_ocn_f and cpl_frazil to icepack_init_parameters call, set these
    values inside Icepack at initialization.

    Remove update_ocn_f argument from icepack_step_therm2 call

    Update runtime_diags and accum_hist to account for new Icepack and
    cpl_frazil implementation.  These may need an addition update later.

    * Update documentation

commit 6ba070f7e7027f9fd2cc32f2dbe10c9854511d93
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Oct 18 12:35:08 2023 -0700

    Update Documentation to clarify Namelist Inputs (#888)

    * Update Documentation to clarify Namelist Inputs

    * Update documentation

commit a9d6dc75f47a2898f1800ad4ddd96c4992e3bed0
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Oct 18 10:47:01 2023 -0700

    Update input data area for Derecho, switch to campaign (#890)

commit 5ddb74dfb8724ff90aa7e806d5bfcfb4a0990762
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Oct 18 10:46:40 2023 -0700

    Remove cicedynB link (#887)

    Update documentation

commit 96b43fb458fe00696d9532e547a3c5bff113f9f9
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Oct 18 10:46:25 2023 -0700

    Update Icepack CPP USE_SNICARHC to NO_SNICARHC and update logic (#886)

    Update Icepack to version #0c548120ce44382 Oct 16, 2023 includes NO_SNICARHC

commit 48a92ef6dd6bf7884ec8a16b2c082345accae385
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Oct 13 14:22:03 2023 -0700

    Remove use of the deprecated "_old" tfrz_options in set_nml files.  This (#883)

    changes answers for some test cases, as expected.

    Update tfrz_option implementation to not allow _old options.

commit 276563041ea6a2b6b4c70cbfa5173fb85db7b47f
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Oct 12 12:41:17 2023 -0700

    Add perlmutter gnu, intel, cray port (#882)

commit deb247bcec381615d01006bfa15dddf3a4f068fd
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Oct 5 12:50:32 2023 -0700

    Update CICE for E3SM Icepack modifications (#879)

    * Update CICE to run with eclare108213/Icepack branch snicar (#100)

    * Update CICE to run with eclare108213/Icepack branch snicar

    - including https://github.com/eclare108213/Icepack/pull/13, Sept 11, 2022
    - Passes full CICE test suite on cheyenne with 3 compilers except alt04 changes
      answers for all compilers and all tests.  CICE #fea412a55f was baseline.
    - Icepack submodule still points to standard version on main, need to be
      swapped manually to appropriate development version.

    * Remove faero_optics

    * update ciceexe string to account for USE_SNICARHC CPP

    * Update documentation

    * Update test suite to add modal testing

    * Point Icepack submodule to cice-consortium/E3SM-icepack-initial-integration

    Update to snicar branch merge, #8aef3f785ce

    * Add E3SM namelists for CICE. (#101)

    * New e3sm and e3smbgc namelist options

    * Update E3SM test options

    * Add a simple e3sm test suite

    * atmbndy is not actually different

    * Additional changes

    * add Tliquidus_max namelist parameter to CICE

    * Add Tf argument to icepack interfaces

    * Add constant option for tfrz_option

    * Fix some diagnostic prints and add to additional drivers

    * Update messages and change option in alt01

    * Update implementation for latest version of Icepack

    - Update tfrz_option, add _old options for backwards bit-for-bit
    - Fix unittests
    - Add hi_min to namelist and tests

    * Update Icepack

    * Update to E3SM-Project/Icepack/cice-consortium/E3SM-icepack-initial-integration including Icepack1.3.3 release, Dec 15, 2022.

    * Update Icepack to E3SM-Project/Icepack #87db73ba6d93747a9, current head of cice-consortium/E3SM-icepack-initial-integration Feb 3, 2023

    * Update boxchan1e and boxchan1n tests to tfrz_option = 'mushy_old' to recover Consortium main results

    Update Icepack to the latest hash on E3SM-Project Icepack cice-consortium/E3SM-icepack-initial-integration, #96f2fc707fc743d7

    Prior commit was a merge from CICE Consortium Main, #d466031001cf447bcd64220c842dcd2707f61e9, Sept 29, 2023

    * remove icepack

    * update icepack

    ---------

    Co-authored-by: David A. Bailey <dbailey@ucar.edu>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

commit d466031001cf447bcd64220c842dcd2707f61e90
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Sep 29 12:08:53 2023 -0700

    Add single grid channel capability and test for C-grid (#875)

    * Added code for transport in one grid cell wide channels

    * Update remap advection to support transport in single gridcell channels

    Add single grid east and north channel configurations and tests

    * Update documentation

    * Remove temporary code comments

    ---------

    Co-authored-by: Jean-Francois Lemieux <jean-francois.lemieux@canada.ca>

commit 55342ca7cb4a1be511ade6249e349cb8a8095881
Author: Dougie Squire <42455466+dougiesquire@users.noreply.github.com>
Date:   Tue Sep 26 03:49:09 2023 +1000

    Fix mesh mask check in nuopc/cmeps cap (#873)

commit a5bb4f9a0c180e325e2a5480832f588dbfdd25ec
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Fri Sep 15 16:01:00 2023 -0400

    switch to cesm-style field names (#869)

commit 01ed4db7c4e5857768a37e8b1fd7472ab5121827
Author: JFLemieux73 <31927797+JFLemieux73@users.noreply.github.com>
Date:   Fri Sep 15 19:59:55 2023 +0000

    More accurate calculation of areafact in remapping (#849)

    * Modified doc to specify that l_fixed_area is T for C-grid

    * Initial modifs to calc areafact based on linear interpolation of left and right values

    * put back l_fixed_area = .true. for C-grid

    * added temporary comments for PR review

    * Modified areafac calc for case 1 and case 2

    * Corrected minor compilation issues

    * Corrected conditions for case 1 to make sure areas add up

    * Small modif in l_fixed_area section to ensure only one condition is true

    * Modified conditions in locate triangle to be consistent with previous changes for case 1

    * Use other edge areafac_c for TL, BL, TR and BR triangles

    * Some comments removed

    * Fixed out of bounds areafac_ce and now use earea and narea

    * Replaced ib,ie,jb,je in locate_triangle using ilo,ihi,jlo,jhi

    * Modified areafac for TL1, BL2, TR1 and BR2 for area flux consistency

    * Cosmetic changes

    * Added comment to explain latest change

    * Modification of bugcheck condition for l_fixed_area=T

    * update areafac_c, areafac_ce in halo in dynamics

    ---------

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>

commit 06282a538e03599aed27bc3c5506ccc31a590069
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Fri Sep 8 11:26:45 2023 -0700

    Update version to 6.4.2 (#864)

    Update License and Copyright

    Update Icepack for version/copyright

commit 714bab97540e5b75c0f2b6c11cd061277cdb322d
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Sep 7 14:20:29 2023 -0700

    Update Cheyenne and Derecho ports (#863)

    * Update cheyenne and derecho ports

    cheyenne_intel updated to intel/19/1/1, mpt/2.25
    cheyenne_gnu updated to gnu/8.3.0, mpt/2.25
    cheyenne_pgi updated to pgi/19.9, mpt/2.22
    derecho_intel minor updates
    derecho_intelclassic added
    derecho_inteloneapi added (not working)
    derecho_gnu added
    derecho_cray added
    derecho_nvhpc added

    cheyenne_pgi changed answers

    derecho_inteloneapi is not working, compiler issues

    fixes automated qc testing on cheyenne

    * Update permissions on env.chicoma_intel

commit cbbac74cd9073dce8eb44fa23cabb573913aa44f
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Sep 5 14:22:59 2023 -0600

    Only print messages in CAP on master task (#861)

commit 32dc48eae101749b437bd777c18830e3c397b17a
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Aug 31 13:05:54 2023 -0700

    Update Icepack to #23b6c1272b50d42ca, Aug 30, 2023 (#857)

    Includes thin ice enthalpy fix, not bit-for-bit.

commit e8a69abde90b99fc6528d469b8698506a99f6e2a
Author: Denise Worthen <denise.worthen@noaa.gov>
Date:   Mon Aug 28 16:00:41 2023 -0400

    Add logging features to nuopc/cmeps cap; deprecates zsalinity in cap (#856)

    * merge latest master (#4)

    * Isotopes for CICE (#423)

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>
    Co-authored-by: David Bailey <dbailey@ucar.edu>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>

    * updated orbital calculations needed for cesm

    * fixed problems in updated orbital calculations needed for cesm

    * update CICE6 to support coupling with UFS

    * put in changes so that both ufsatm and cesm requirements for potential temperature and density are satisfied

    * Convergence on ustar for CICE. (#452) (#5)

    * Add atmiter_conv to CICE

    * Add documentation

    * trigger build the docs

    Co-authored-by: David A. Bailey <dbailey@ucar.edu>

    * update icepack submodule

    * Revert "update icepack submodule"

    This reverts commit e70d1abcbeb4351195a2b81c6ce3f623c936426c.

    * update comp_ice.backend with temporary ice_timers fix

    * Fix threading problem in init_bgc

    * Fix additional OMP problems

    * changes for coldstart running

    * Move the forapps directory

    * remove cesmcoupled ifdefs

    * Fix logging issues for NUOPC

    * removal of many cpp-ifdefs

    * fix compile errors

    * fixes to get cesm working

    * fixed white space issue

    * Add restart_coszen namelist option

    * update icepack submodule

    * change Orion to orion in backend

    remove duplicate print lines from ice_transport_driver

    * add -link_mpi=dbg to debug flags (#8)

    * cice6 compile (#6)

    * enable debug build. fix to remove errors

    * fix an error in comp_ice.backend.libcice

    * change Orion to orion for machine identification

    * changes for consistency w/ current emc-cice5 (#13)

    Update to emc/develop fork to current CICE consortium

    Co-authored-by: David A. Bailey <dbailey@ucar.edu>
    Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
    Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
    Co-authored-by: apcraig <anthony.p.craig@gmail.com>
    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>

    * Fixcommit (#14)

    Align commit history between emc/develop and cice-consortium/master

    * Update CICE6 for integration to S2S

    * add wcoss_dell_p3 compiler macro

    * update to icepack w/ debug fix

    * replace SITE with MACHINE_ID

    * update compile scripts

    * Support TACC stampede (#19)

    * update icepack

    * add ice_dyn_vp module to CICE_InitMod

    * update gitmodules, update icepack

    * Update CICE to consortium master (#23)

    updates include:

    * deprecate upwind advection (CICE-Consortium#508)
    * add implicit VP solver (CICE-Consortium#491)

    * update icepack

    * switch icepack branches

    * update to icepack master but set abort flag in ITD routine
    to false

    * update icepack

    * Update CICE to latest Consortium master (#26)

    update CICE and Icepack

    * changes the criteria for aborting ice for thermo-conservation errors
    * updates the time manager
    * fixes two bugs in ice_therm_mushy
    * updates Icepack to Consortium master w/ flip of abort flag for troublesome IC cases

    * add cice changes for zlvs (#29)

    * update icepack and pointer

    * update icepack and revert gitmodules

    * Fix history features

    - Fix bug in history time axis when sec_init is not zero.
    - Fix issue with time_beg and time_end uninitialized values.
    - Add support for averaging with histfreq='1' by allowing histfreq_n to be any value
      in that case.  Extend and clean up construct_filename for history files.  More could
      be done, but wanted to preserve backwards compatibility.
    - Add new calendar_sec2hms to converts daily seconds to hh:mm:ss.  Update the
      calchk calendar unit tester to check this method
    - Remove abort test in bcstchk, this was just causing problems in regression testing
    - Remove known problems documentation about problems writing when istep=1.  This issue
      does not exist anymore with the updated time manager.
    - Add new tests with hist_avg = false.  Add set_nml.histinst.

    * revert set_nml.histall

    * fix implementation error

    * update model log output in ice_init

    * Fix QC issues

    - Add netcdf ststus checks and aborts in ice_read_write.F90
    - Check for end of file when reading records in ice_read_write.F90 for
      ice_read_nc methods
    - Update set_nml.qc to better specify the test, turn off leap years since we're cycling
      2005 data
    - Add check in cice.t-test.py to make sure there are at least 1825 files, 5 years of data
    - Add QC run to base_suite.ts to verify qc runs to completion and possibility to use
      those results directly for QC validation
    - Clean up error messages and some indentation in ice_read_write.F90

    * Update testing

    - Add prod suite including 10 year gx1prod and qc test
    - Update unit test compare scripts

    * update documentation

    * reset calchk to 100000 years

    * update evp1d test

    * update icepack

    * update icepack

    * add memory profiling (#36)

    * add profile_memory calls to CICE cap

    * update icepack

    * fix rhoa when lowest_temp is 0.0

    * provide default value for rhoa when imported temp_height_lowest
    (Tair) is 0.0
    * resolves seg fault when frac_grid=false and do_ca=true

    * update icepack submodule

    * Update CICE for latest Consortium master (#38)

        * Implement advanced snow physics in icepack and CICE
        * Fix time-stamping of CICE history files
        * Fix CICE history file precision

    * Use CICE-Consortium/Icepack master (#40)

    * switch to icepack master at consortium

    * recreate cap update branch (#42)

    * add debug_model feature
    * add required variables and calls for tr_snow

    * remove 2 extraneous lines

    * remove two log print lines that were removed prior to
    merge of driver updates to consortium

    * duplicate gitmodule style for icepack

    * Update CICE to latest Consortium/main (#45)

    * Update CICE to Consortium/main (#48)

    Update OpenMP directives as needed including validation via new omp_suite. Fixed OpenMP in dynamics.
    Refactored eap puny/pi lookups to improve scalar performance
    Update Tsfc implementation to make sure land blocks don't set Tsfc to freezing temp
    Update for sea bed stress calculations

    * fix comment, fix env for orion and hera

    * replace save_init with step_prep in CICE_RunMod

    * fixes for cgrid repro

    * remove added haloupdates

    * baselines pass with these extra halo updates removed

    * change F->S for ocean velocities and tilts

    * fix debug failure when grid_ice=C

    * compiling in debug mode using -init=snan,arrays requires
    initialization of variables

    * respond to review comments

    * remove inserted whitespace for uvelE,N and vvelE,N

    * Add wave-cice coupling; update to Consortium main (#51)

    * add wave-ice fields
    * initialize aicen_init, which turns up as NaN in calc of floediam
    export
    * add call to icepack_init_wave to initialize wavefreq and dwavefreq
    * update to latest consortium main (PR 752)

    * add initializations in ice_state

    * initialize vsnon/vsnon_init and vicen/vicen_init

    * Update CICE (#54)

    * update to include recent PRs to Consortium/main

    * fix for nudiag_set

    allow nudiag_set to be available outside of cesm; may prefer
    to fix in coupling interface

    * Update CICE for latest Consortium/main (#56)

    * add run time info

    * change real(8) to real(dbl_kind)

    * fix syntax

    * fix write unit

    * use cice_wrapper for ufs timer functionality

    * add elapsed model time for logtime

    * tidy up the wrapper

    * fix case for 'time since' at the first advance

    * add timer and forecast log

    * write timer values to timer log, not nu_diag
    * write log.ice.fXXX

    * only one time is needed

    * modify message written for log.ice.fXXX

    * change info in fXXX log file

    * Update CICE from Consortium/main (#62)

    * Fix CESMCOUPLED compile issue in icepack. (#823)
    * Update global reduction implementation to improve performance, fix VP bug (#824)
    * Update VP global sum to exclude local implementation with tripole grids
    * Add functionality to change hist_avg for each stream (#827)
    * Update Icepack to #6703bc533c968 May 22, 2023 (#829)
    * Fix for mesh check in CESM driver (#830)
    * Namelist option for time axis position. (#839)

    * reset timer after Advance to retrieve "wait time"

    * add logical control for enabling runtime info

    * remove zsal items from cap

    * fix typo

    ---------

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>
    Co-authored-by: David Bailey <dbailey@ucar.edu>
    Co-authored-by: Elizabeth Hunke <eclare@lanl.gov>
    Co-authored-by: Mariana Vertenstein <mvertens@ucar.edu>
    Co-authored-by: Minsuk Ji <57227195+MinsukJi-NOAA@users.noreply.github.com>
    Co-authored-by: Tony Craig <apcraig@users.noreply.github.com>
    Co-authored-by: Philippe Blain <levraiphilippeblain@gmail.com>
    Co-authored-by: Jun.Wang <Jun.Wang@noaa.gov>

commit 933b148cb141a16d74615092af62c3e8d36777a2
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Aug 24 10:23:56 2023 -0700

    Extend restart output controls, provide multiple frequency options (#850)

    * Extend restart output controls, provide multiple streams for possible
    output frequencies.  Convert dumpfreq, dumpfreq_n, dumpfreq_base to
    arrays.

    Modify histfreq_base to make it an array as well.  Now each history stream
    can have its own base time (init or zero).

    Update documentation.

    * Clean up implementation and documentation

    * Update PR to check github actions

commit 357103a2df0428089d54bdacf9eab621a5e1f710
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Tue Aug 22 11:27:28 2023 -0700

    Deprecate zsalinity (#851)

    * Deprecate zsalinity, mostly with ifdef and comments first for testing

    * Deprecate zsalinity, remove code

    * Add warning message for deprecated zsalinity

    * Update Icepack to #f5e093f5148554674 (deprecate zsalinity)

commit 8322416793ae2b76c2bafa9c7b9b108c289ede9d
Author: Elizabeth Hunke <eclare@lanl.gov>
Date:   Fri Aug 18 17:34:24 2023 -0600

    Updates to advanced snow physics implementation (#852)

    * Replace tr_snow flag with snwredist, snwgrain in some places (tr_snow is still used more generally).  Fix intent(out) compile issue in ice_read_write.F90. Replace badger with chicoma machine files.

    * update icepack to 86cae16d1b7c4c4f8

    ---------

    Co-authored-by: apcraig <anthony.p.craig@gmail.com>

commit 7e8dc5b2aeffe98a6a7fd91dbb8e93ced1e3369c
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Aug 10 13:06:41 2023 -0700

    Update conda_macos to fix problems with Github Actions testing (#853)

    * test ghactions

    * update master to main in github actions

commit 4cb296c4003014fe57d6d00f86868a78a532fc95
Author: JFLemieux73 <31927797+JFLemieux73@users.noreply.github.com>
Date:   Tue Jul 25 16:11:33 2023 +0000

    Modification of edge mask computation when l_fixed_area=T in horizontal remapping (#833)

    * Use same method whether l_fixed_area=T or F to compute masks for edge fluxes

    * Corrected typo in comment

    * Cosmetic (indentation) change in ice_transport_remap.F90

    * Set l_fixed_area value depending of grid type

    * Modifs to the doc for l_fixed_area

    * Use umask for uvel,vvel initialization for boxslotcyl and change grid avg type from S to A in init_state

    * Temporary changes before next PR: l_fixed_area=F for B and C grid

    * Temporary changes before next PR: remove paragraph in the doc

    * Small modifications: l_fixed_area and grid_ice are defined in module ice_transport_remap (see the sketch below)
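
    For illustration, a hedged sketch of the kind of grid-dependent default
    described above; the exact condition is an assumption, and this commit's
    temporary change forces l_fixed_area=F for the B and C grids:

        ! sketch, in module ice_transport_remap (actual logic may differ)
        if (grid_ice == 'CD') then
           l_fixed_area = .true.    ! fixed edge areas planned for the CD grid
        else
           l_fixed_area = .false.   ! B and C grids, per the temporary change
        endif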

commit 9f42a620e9e642c637d8f04441bacb5835ebf0b7
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Jul 20 14:59:42 2023 -0700

    Update Icepack to Consortium main #4728746, July 18 2023 (#846)

    - fix optional argument issues
    - fix hsn_new(1) bug

    Update optargs unit test, add new test cases

    Add opticep unit test, to test CICE calls to Icepack without optional arguments.
    Add new comparison option to comparelog.csh to compare a unit test with
    a standard CICE test.

    Update unittest_suite

    Update documentation about optional arguments and unit tests

commit f9d3002c86e11ca18b06382fc2d0676c9a945223
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Thu Jul 13 16:01:26 2023 -0700

    Add support for JRA55do (#843)

    * updating paths for local nrlssc builds

    * Add jra55do forcing option (see the namelist sketch below)

    * Updated env.nrlssc_gnu for new local directory structure

    * Added JRA55do to file names. Added comments for each variable name at top of JRA55do_???_files subroutine

    * Make JRA55 forcing use common subroutines; search atm_data_type for specific cases

    * remove extraneous 'i' variable in JRA55_files

    * Changed JRA55 filenames to use JRA55_grid instead of grid at the end of the filename

    * Add jra55do tests to base_suite and quick_suite. This is done via set_nml options.

    * Update forcing implementation to provide a little more flexibility for
    JRA55, JRA55do, and ncar bulk atm forcing files.

    * Update documentation

    * update Onyx port

    * Update forcing documentation

    Initial port to derecho_intel

    * clean up blank spaces

    ---------

    Co-authored-by: daveh150 <david.hebert@nrlssc.navy.mil>
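
    For illustration, a minimal namelist sketch of selecting the new forcing
    option; the exact spelling of the value and the directory shown are
    assumptions based on the existing JRA55 option:

        &forcing_nml
          atm_data_type = 'JRA55do'           ! new option alongside 'JRA55', 'ncar'
          atm_data_dir  = '/path/to/JRA55do'  ! hypothetical path
        /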

commit 766ff8d9606ae08bdd34ac2b36b6b068464c7e71
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Tue Jul 11 07:53:22 2023 -0700

    Update Icepack to #d024340f19676b July 6, 2023 (#841)

    Remove deprecated COREII LYq forcing

    Remove deprecated print_points_state

    Update links in rst documentation to point to main, not master

commit 34dc66707f6b691b1689bf36689591af3e8df270
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Thu Jul 6 21:46:58 2023 -0600

    Namelist option for time axis position. (#839)

    * Add option to change location in interval of time axis

    * Only use hist_time_axis when hist_avg is true

    * Add more comments and information in the documentation

    * Add a check on hist_time_axis as well as a global attribute

    * Abort if hist_time_axis is not set correctly (see the namelist sketch below).
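
    For illustration, a minimal namelist sketch of the new option; the valid
    values shown ('begin', 'middle', 'end') are assumptions about this
    commit's implementation:

        &setup_nml
          ! hist_time_axis only applies to averaged history output (hist_avg)
          hist_time_axis = 'middle'   ! stamp output at the middle of the interval
        /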

commit 7eb4dd7e7e2796c5718061d06b86ff602b9d29cc
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Tue Jun 20 09:40:55 2023 -0700

    Update .readthedocs.yaml, add pdf (#837)

    * update readthedocs.yaml, turn on pdf

    * update readthedocs.yaml, turn on pdf

    * update readthedocs.yaml, turn on pdf

    * update readthedocs.yaml, turn on pdf

commit 8e2aab217ece5fae933a1f2ad6e0d6ab81ecad8a
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Tue Jun 20 08:54:25 2023 -0600

    Fix for mesh check in CESM driver (#830)

    * Fix for mesh check in CESM driver

    * Slightly different way to evaluate longitude difference

    * Slightly different way to evaluate longitude difference

    * Put the abs inside the mod (see the sketch below)

    * Add abort calls back in
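
    For illustration, a sketch of the kind of seam-safe longitude comparison
    described above; all names here are hypothetical:

        ! compare longitudes modulo 360 so that, e.g., -179.9 and 180.1 match
        logical function lons_match(lon1, lon2, tol)
           real(kind=8), intent(in) :: lon1, lon2, tol   ! degrees
           real(kind=8) :: d
           d = mod(abs(lon1 - lon2), 360.0d0)      ! abs inside the mod keeps d >= 0
           lons_match = min(d, 360.0d0 - d) <= tol ! shorter way around the circle
        end function lons_match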

commit b98b8ae899fb2a1af816105e05470b829f8b3294
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed May 24 09:56:10 2023 -0700

    Update Icepack to #6703bc533c968 May 22, 2023 (#829)

    Remove trailing blanks via automated tool in some Fortran files

commit 35ec167dc6beee685a6e9485b8a1db3604d566bd
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Wed May 17 14:56:26 2023 -0600

    Add functionality to change hist_avg for each stream (#827)

    * Add functionality to change hist_avg for each stream (see the namelist sketch below)

    * Fix some documentation

    * Try to fix sphinx problem

    * Fix hist_avg documentation

    * Add some metadata changes to time and time_bounds
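
    For illustration, a minimal namelist sketch of per-stream averaging; the
    five streams match the standard histfreq setup, but the values shown are
    assumptions:

        &setup_nml
          histfreq = 'm', 'd', 'h', 'x', 'x'                  ! 'x' = stream unused
          hist_avg = .true., .false., .true., .true., .true.  ! stream 2: snapshots
        /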

commit 5b0418a9f6d181d668ddebdc2c540566529e4125
Author: Tony Craig <apcraig@users.noreply.github.com>
Date:   Wed Apr 5 13:29:21 2023 -0700

    Update global reduction implementation to improve performance, fix VP bug (#824)

    * Update global reduction implementation to improve performance, fix VP bug

    This was mainly done for situations like VP that need a fast global sum.
    The VP-local global sum is still slightly faster than the one computed in
    the infrastructure, so that implementation was kept (see the sketch below).
    A bug in the workspace_y calculation in VP was found and fixed.  The
    haloupdate call in the preconditioning step was also found to generally
    improve VP performance, so the option to skip that haloupdate was removed.

    Separately, fixed a bug in the tripoleT global sum implementation, added
    a tripoleT global sum unit test, and resynced ice_exit.F90, ice_reprosum.F90,
    and ice_global_reductions.F90 between serial and mpi versions.

    - Refactor global sums to improve performance, move if checks outside do loops
    - Fix bug in tripoleT global sums, tripole seam masking
    - Update VP solver, use local global sum more often
    - Update VP solver, fix bug in workspace_y calculation
    - Update VP solver, always call haloupdate during precondition
    - Refactor ice_exit.F90 and sync serial and mpi versions
    - Sync ice_reprosum.F90 between serial and mpi versions
    - Update sumchk unit test to handle grids better
    - Add tripoleT sumchk test

    * Update VP global sum to exclude local implementation with tripole grids
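
    For illustration, a minimal sketch of the "local" global-sum pattern
    referenced above (sum the owned cells, then one reduction); the routine,
    argument names, and the MPI_COMM_ICE communicator are assumptions, not
    code from ice_global_reductions.F90:

        subroutine local_global_sum(work, ilo, ihi, jlo, jhi, gsum)
           use mpi
           use ice_communicate, only: MPI_COMM_ICE    ! assumed communicator name
           real(kind=8), intent(in)  :: work(:,:)     ! one local block, with halo
           integer,      intent(in)  :: ilo, ihi, jlo, jhi  ! owned-cell bounds
           real(kind=8), intent(out) :: gsum
           real(kind=8) :: lsum
           integer :: ierr
           ! sum only owned cells so halo/ghost cells (and the tripole seam)
           ! are not double counted, then reduce once across all tasks
           lsum = sum(work(ilo:ihi, jlo:jhi))
           call MPI_Allreduce(lsum, gsum, 1, MPI_REAL8, MPI_SUM, MPI_COMM_ICE, ierr)
        end subroutine local_global_sum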

commit 942449751275ebd884abb5752d03d7ea64b72664
Author: David A. Bailey <dbailey@ucar.edu>
Date:   Thu Mar 23 16:46:05 2023 -0600

    Fix CESMCOUPLED compile issue in icepack. (#823)

    * Fix CESMCOUPLED compile problem in icepack

DeniseWorthen added a commit to DeniseWorthen/CICE that referenced this pull request May 10, 2024
updates include:

* deprecate upwind advection (CICE-Consortium#508)
* add implicit VP solver (CICE-Consortium#491)

update icepack

add ice_dyn_vp module to CICE_InitMod

update gitmodules, update icepack

switch icepack branches

* update to icepack master but set abort flag in ITD routine
to false

update icepack
DeniseWorthen added a commit to DeniseWorthen/CICE that referenced this pull request May 10, 2024
updates include:

* deprecate upwind advection (CICE-Consortium#508)
* add implicit VP solver (CICE-Consortium#491)

update icepack

update gitmodules, update icepack

switch icepack branches

* update to icepack master but set abort flag in ITD routine
to false

update icepack