
Bug fix for the loop extents that cause NaNs #739

Merged

Conversation

nikizadehgfdl
Contributor

- No answer changes
- Closes issue mom-ocean#734 and hopefully issue mom-ocean#262
- The MEKE%Ku array is allocated on the data domain and has an mpp_domain_update
  following the loop, so it only needs to be calculated on the compute domain.
- The problem with the larger loop extents is that the Lmixscale array is
  initialized/calculated only on the compute domain and is NaN beyond the
  compute-domain extents, causing NaNs to leak into MEKE%Ku and the model.
  (A sketch of the fix follows this list.)
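A self-contained toy sketch of the bug and the fix, with plain local arrays standing in for MEKE%Ku and Lmixscale, made-up index extents, and an illustrative loop body (none of this is the actual MOM_MEKE code):

```fortran
program loop_extents_sketch
  implicit none
  integer, parameter :: isd=0, ied=5, jsd=0, jed=5  ! data domain (incl. halos)
  integer, parameter :: is=1,  ie=4,  js=1,  je=4   ! compute domain
  real :: Lmixscale(isd:ied,jsd:jed), Ku(isd:ied,jsd:jed)
  integer :: i, j

  Lmixscale = -999.0            ! stand-in for uninitialized (NaN) halo data
  Lmixscale(is:ie,js:je) = 2.0  ! only the compute domain holds valid values

  ! The buggy loops ran over the data domain (isd:ied, jsd:jed) and read
  ! Lmixscale at halo points, letting NaNs into Ku.  The fix restricts
  ! the loops to the compute domain:
  do j=js,je ; do i=is,ie
    Ku(i,j) = 0.5 * Lmixscale(i,j)  ! illustrative calculation
  enddo ; enddo

  ! In MOM6 the halo update that follows the loop (the mpp_domain_update
  ! mentioned above) fills Ku's data-domain points from neighboring PEs,
  ! so computing them inside the loop was never necessary.
  print *, 'Ku on compute domain:', Ku(is:ie,js:je)
end program loop_extents_sketch
```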
adcroft (Collaborator) merged commit 592264a into mom-ocean:dev/gfdl on Mar 30, 2018
marshallward added a commit to marshallward/MOM6 that referenced this pull request Nov 4, 2024
This patch replaces the `CS%[uv]_file < 0` checks with
`CS%[uv]_file == -1`.  FMS1 returns negative file handles for missing or
otherwise error-prone files, but the FMS2 IO framework relies on
`newunit=` to autogenerate handle IDs, which are always negative, so a
test for a negative value can no longer distinguish an error from a
valid handle.

The check is therefore replaced with an equality test against -1.
`newunit` is guaranteed never to return -1 for a valid file, so this is
a reliable check for a missing file.  It also lets us continue to use
-1 as the initial (unopened) value.

Behavior is compatible with `mpp_open()` output, so this can also be
used with the FMS1 API.

A better solution would be to introduce some validation function which
is defined by each API, but there is not yet any need for such
sophistication.
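A self-contained sketch of why the equality test works, using an illustrative local variable and file name in place of `CS%[uv]_file` and the real output files:

```fortran
program unit_check_sketch
  implicit none
  integer :: u_file = -1   ! initial (unopened) value; safe because
                           ! newunit= never assigns -1 to a valid file

  ! FMS2 path: newunit= hands back a negative unit number, so a
  ! `u_file < 0` test can no longer signal "missing or failed file".
  open(newunit=u_file, file='u_truncation_report.txt', status='replace')

  ! The equality test against -1 still detects the never-opened case:
  if (u_file == -1) then
    print *, 'file was never opened; skip writing and closing it'
  else
    print *, 'file open on unit', u_file   ! a negative unit, but never -1
    close(u_file)
  end if
end program unit_check_sketch
```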