diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index a22e24ef90..2bbe76335b 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -57,7 +57,7 @@ Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## -See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. +See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\**. Branch name: `bugfix__main__` diff --git a/.github/ISSUE_TEMPLATE/enhancement_request.md b/.github/ISSUE_TEMPLATE/enhancement_request.md index f84aa44202..0c7f6c1331 100644 --- a/.github/ISSUE_TEMPLATE/enhancement_request.md +++ b/.github/ISSUE_TEMPLATE/enhancement_request.md @@ -46,7 +46,7 @@ Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## -See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. +See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature__` diff --git a/.github/ISSUE_TEMPLATE/new_feature_request.md b/.github/ISSUE_TEMPLATE/new_feature_request.md index 5fa488ead3..c76da8ce50 100644 --- a/.github/ISSUE_TEMPLATE/new_feature_request.md +++ b/.github/ISSUE_TEMPLATE/new_feature_request.md @@ -50,7 +50,7 @@ Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## New Feature Checklist ## -See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. +See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. 
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature__` diff --git a/.github/ISSUE_TEMPLATE/task.md b/.github/ISSUE_TEMPLATE/task.md index 88dfc5c9d3..93e889017d 100644 --- a/.github/ISSUE_TEMPLATE/task.md +++ b/.github/ISSUE_TEMPLATE/task.md @@ -46,7 +46,7 @@ Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Task Checklist ## -See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. +See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature__` diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index f3d05479d2..450355440e 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -14,7 +14,7 @@ If **yes**, describe the new output and/or changes to the existing output:
- [ ] Please complete this pull request review by **[Fill in date]**.
## Pull Request Checklist ## -See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. +See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the PR definition above. - [ ] Ensure the PR title matches the feature or bugfix branch name. - [ ] Define the PR metadata, as permissions allow. diff --git a/.github/workflows/documentation.yml b/.github/workflows/documentation.yml index 62c284aded..4c09b72413 100644 --- a/.github/workflows/documentation.yml +++ b/.github/workflows/documentation.yml @@ -24,7 +24,7 @@ jobs: - name: Install dependencies run: | python -m pip install --upgrade python-dateutil requests sphinx \ - sphinx-gallery Pillow sphinx_rtd_theme + sphinx-gallery Pillow sphinx_rtd_theme sphinx-panels - name: Build docs run: ./.github/jobs/build_documentation.sh - uses: actions/upload-artifact@v2 diff --git a/met/data/config/GridStatConfig_default b/met/data/config/GridStatConfig_default index e79a3e42b1..89dc90e156 100644 --- a/met/data/config/GridStatConfig_default +++ b/met/data/config/GridStatConfig_default @@ -202,6 +202,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/met/data/config/PointStatConfig_default b/met/data/config/PointStatConfig_default index 048ff2cb4a..4cdaefaf6e 100644 --- a/met/data/config/PointStatConfig_default +++ b/met/data/config/PointStatConfig_default @@ -280,6 +280,7 @@ output_flag = { pjc = NONE; prc = NONE; ecnt = NONE; // Only for HiRA. + orank = NONE; // Only for HiRA. rps = NONE; // Only for HiRA. eclv = NONE; mpr = NONE; diff --git a/met/data/config/TCStatConfig_default b/met/data/config/TCStatConfig_default index 3857a8501f..a9ea169154 100644 --- a/met/data/config/TCStatConfig_default +++ b/met/data/config/TCStatConfig_default @@ -17,8 +17,7 @@ // the analyses will be performed on their union. // // Each configuration filtering option may be overridden by a corresponding -// job command option of the same name, as described in the MET User's Guide: -// https://dtcenter.github.io/MET/latest/Users_Guide/tc-stat.html +// job command option of the same name, as described in the MET User's Guide. 
// // diff --git a/met/data/table_files/met_header_columns_V10.1.txt b/met/data/table_files/met_header_columns_V10.1.txt index d538d74c92..87b5eba258 100644 --- a/met/data/table_files/met_header_columns_V10.1.txt +++ b/met/data/table_files/met_header_columns_V10.1.txt @@ -10,7 +10,7 @@ V10.1 : STAT : NBRCNT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID V10.1 : STAT : NBRCTC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY_OY FY_ON FN_OY FN_ON V10.1 : STAT : NBRCTS : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL BASER BASER_NCL BASER_NCU BASER_BCL BASER_BCU FMEAN FMEAN_NCL FMEAN_NCU FMEAN_BCL FMEAN_BCU ACC ACC_NCL ACC_NCU ACC_BCL ACC_BCU FBIAS FBIAS_BCL FBIAS_BCU PODY PODY_NCL PODY_NCU PODY_BCL PODY_BCU PODN PODN_NCL PODN_NCU PODN_BCL PODN_BCU POFD POFD_NCL POFD_NCU POFD_BCL POFD_BCU FAR FAR_NCL FAR_NCU FAR_BCL FAR_BCU CSI CSI_NCL CSI_NCU CSI_BCL CSI_BCU GSS GSS_BCL GSS_BCU HK HK_NCL HK_NCU HK_BCL HK_BCU HSS HSS_BCL HSS_BCU ODDS ODDS_NCL ODDS_NCU ODDS_BCL ODDS_BCU LODDS LODDS_NCL LODDS_NCU LODDS_BCL LODDS_BCU ORSS ORSS_NCL ORSS_NCU ORSS_BCL ORSS_BCU EDS EDS_NCL EDS_NCU EDS_BCL EDS_BCU SEDS SEDS_NCL SEDS_NCU SEDS_BCL SEDS_BCU EDI EDI_NCL EDI_NCU EDI_BCL EDI_BCU SEDI SEDI_NCL SEDI_NCU SEDI_BCL SEDI_BCU BAGSS BAGSS_BCL BAGSS_BCU V10.1 : STAT : GRAD : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FGBAR OGBAR MGBAR EGBAR S1 S1_OG FGOG_RATIO DX DY -V10.1 : STAT : DMAP : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY OY FBIAS BADDELEY HAUSDORFF MED_FO MED_OF MED_MIN MED_MAX MED_MEAN FOM_FO FOM_OF FOM_MIN FOM_MAX FOM_MEAN ZHU_FO ZHU_OF ZHU_MIN ZHU_MAX ZHU_MEAN +V10.1 : STAT : DMAP : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY OY FBIAS BADDELEY HAUSDORFF MED_FO MED_OF MED_MIN MED_MAX MED_MEAN FOM_FO FOM_OF FOM_MIN FOM_MAX FOM_MEAN ZHU_FO ZHU_OF ZHU_MIN ZHU_MAX ZHU_MEAN G GBETA BETA_VALUE V10.1 : STAT : ORANK : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL INDEX OBS_SID OBS_LAT OBS_LON OBS_LVL OBS_ELV OBS PIT RANK N_ENS_VLD (N_ENS) ENS_[0-9]* OBS_QC ENS_MEAN CLIMO_MEAN SPREAD ENS_MEAN_OERR SPREAD_OERR SPREAD_PLUS_OERR CLIMO_STDEV V10.1 : STAT : PCT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL (N_THRESH) 
THRESH_[0-9]* OY_[0-9]* ON_[0-9]*
V10.1 : STAT : PJC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL (N_THRESH) THRESH_[0-9]* OY_TP_[0-9]* ON_TP_[0-9]* CALIBRATION_[0-9]* REFINEMENT_[0-9]* LIKELIHOOD_[0-9]* BASER_[0-9]*
diff --git a/met/docs/Users_Guide/appendixA.rst b/met/docs/Users_Guide/appendixA.rst
index f681f48361..7271ecb1de 100644
--- a/met/docs/Users_Guide/appendixA.rst
+++ b/met/docs/Users_Guide/appendixA.rst
@@ -6,97 +6,1847 @@ Appendix A FAQs & How do I ... ?
 
 Frequently Asked Questions
 __________________________
 
-**Q. Why was the MET written largely in C++ instead of FORTRAN?**
+File-IO
+~~~~~~~
+
+**Q. File-IO - How do I improve the speed of MET tools using Gen-Vx-Mask?**
+
+A.
+The main reason to run gen_vx_mask is to make the MET
+statistics tools (e.g. point_stat, grid_stat, or ensemble_stat) run
+faster. The verification masking regions in those tools can be specified
+as Lat/Lon polyline files or the NetCDF output of gen_vx_mask. However,
+determining which grid points are inside/outside a polyline region can be
+slow if the polyline contains many points or the grid is dense. Running
+gen_vx_mask once to create a binary mask is much more efficient than
+recomputing the mask when each MET statistics tool is run. If the polyline
+only contains a small number of points or the grid is sparse, running
+gen_vx_mask first would only save a second or two.
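+
+For example, a polyline mask can be generated once and then referenced in
+the "poly" entry of each statistics tool's configuration file. This is a
+minimal sketch; the file names are illustrative:
+
+.. code-block:: none
+
+  # Generate the binary mask once
+  gen_vx_mask fcst.grb CONUS.poly CONUS_mask.nc -type poly
+
+  # Reference the NetCDF mask in the Point-Stat or Grid-Stat config file
+  mask = {
+     grid = [];
+     poly = [ "CONUS_mask.nc" ];
+  }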
+
+**Q. File-IO - How do I use map_data?**
+
+A.
+The MET repository includes several map data files. Users can control which
+map datasets are included in the plots by modifying the
+configuration files for those tools. The default map datasets are defined
+by the map_data dictionary in the ConfigMapData file.
+
+.. code-block:: none
+
+  map_data = {
+
+     line_color = [ 25, 25, 25 ]; // rgb triple values, 0-255
+     line_width = 0.5;
+     line_dash  = "";
+
+     source = [
+        { file_name = "MET_BASE/map/country_data"; },
+        { file_name = "MET_BASE/map/usa_state_data"; },
+        { file_name = "MET_BASE/map/major_lakes_data"; }
+     ];
+  }
+
+Users can modify the ConfigMapData contents prior to running 'make install'.
+This will change the default map data for all of the MET tools which create
+plots. Alternatively, users can copy/paste/modify the map_data dictionary into
+the configuration file for a MET tool. For example, you could add map_data to
+the end of the MODE configuration file to customize plots created by MODE.
+
+Here is an example of running plot_data_plane and specifying the map_data
+in the configuration string on the command line:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane \
+  sample.grib china_tmp_2m_admin.ps \
+  'name="TMP"; level="Z2"; \
+  map_data = { source = [ { file_name = \
+  "${MET_BUILD_BASE}/data/map/admin_by_country/admin_China_data"; } \
+  ]; }'
+
+**Q. File-IO - How can I understand the number of matched pairs?**
+
+A.
+Statistics are computed on matched forecast/observation pairs.
+For example, if the dimension of the grid is 37x37, up to
+1369 matched pairs are possible. However, if the forecast or
+observation contains bad data at a point, that matched pair would
+not be included in the calculations. There are a number of reasons that
+observations could be rejected - mismatches in station id, variable names,
+valid times, bad values, data off the grid, etc.
+For example, if the forecast field contains missing data around the
+edge of the domain, then that is a reason there may be 992 matched pairs
+instead of 1369. Users can use the ncview tool to look at an example
+netCDF file or run their files through plot_data_plane to help identify
+any potential issues.
+
+One common support question is "Why am I getting 0 matched pairs from
+Point-Stat?". As mentioned above, there are many reasons why point
+observations can be excluded from your analysis. If running point_stat with
+at least verbosity level 2 (-v 2, the default value), zero matched pairs
+will result in log messages like the following being printed:
+
+.. code-block:: none
+
+  DEBUG 2: Processing TMP/Z2 versus TMP/Z2, for observation type ADPSFC, over region FULL, for interpolation method UW_MEAN(1), using 0 pairs.
+  DEBUG 2: Number of matched pairs = 0
+  DEBUG 2: Observations processed = 1166
+  DEBUG 2: Rejected: station id = 0
+  DEBUG 2: Rejected: obs var name = 1166
+  DEBUG 2: Rejected: valid time = 0
+  DEBUG 2: Rejected: bad obs value = 0
+  DEBUG 2: Rejected: off the grid = 0
+  DEBUG 2: Rejected: topography = 0
+  DEBUG 2: Rejected: level mismatch = 0
+  DEBUG 2: Rejected: quality marker = 0
+  DEBUG 2: Rejected: message type = 0
+  DEBUG 2: Rejected: masking region = 0
+  DEBUG 2: Rejected: bad fcst value = 0
+  DEBUG 2: Rejected: bad climo mean = 0
+  DEBUG 2: Rejected: bad climo stdev = 0
+  DEBUG 2: Rejected: mpr filter = 0
+  DEBUG 2: Rejected: duplicates = 0
+
+The list of rejection reason counts above matches the order in
+which the filtering logic is applied in the code. In this example,
+none of the point observations match the variable name requested
+in the configuration file. So all of the 1166 observations are rejected
+for the same reason.
+
+**Q. File-IO - What types of NetCDF files can MET read?**
+
+A.
+There are three flavors of NetCDF that MET can read directly.
+
+1. Gridded NetCDF output from one of the MET tools
+
+2. Output from the WRF model that has been post-processed using the wrf_interp utility
+
+3. NetCDF data following the `climate-forecast (CF) convention
+   `_
+
+Lastly, users can write Python scripts to pass gridded data to the
+MET tools in memory. If the data doesn't fall into one of those categories,
+then it's not a gridded dataset that MET can handle directly. Satellite data,
+in general, will not be gridded. Typically it contains a dense mesh of data at
+lat/lon points, but those lat/lon points are not evenly spaced onto
+a regular grid.
+
+While MET's point2grid tool does support some satellite data inputs, it is
+limited. Using python embedding is another option for handling new datasets
+not supported natively by MET.
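+
+For example, python embedding can be tested with plot_data_plane by
+replacing the input file with the PYTHON_NUMPY keyword and naming an
+embedding script in the configuration string. This minimal sketch uses
+the read_ascii_numpy.py sample script distributed with MET; the data
+file name is illustrative:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane PYTHON_NUMPY fcst.ps \
+  'name="${MET_BUILD_BASE}/scripts/python/read_ascii_numpy.py fcst.txt FCST";'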
+
+**Q. File-IO - How do I choose a time slice in a NetCDF file?**
+
+A.
+When processing NetCDF files, the level information needs to be
+specified to tell MET which 2D slice of data to use. There is
+currently no way to explicitly define which time slice to use
+other than selecting the time index.
+
+Let's use plot_data_plane as an example:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane \
+  MERGE_20161201_20170228.nc \
+  obs.ps \
+  'name="APCP"; level="(5,*,*)";'
+
+Assuming that the first dimension is time, this will select the 6th
+time slice of the APCP data and plot it, since these indices are 0-based.
+
+**Q. File-IO - How do I use the UNIX time conversion?**
+
+A.
+Regarding the timing information in the NetCDF variable attributes:
+
+.. code-block:: none
+
+  APCP_24:init_time_ut = 1306886400 ;
+
+"ut" stands for UNIX time, which is the number of seconds
+since Jan 1, 1970. It is a convenient way of storing timing
+information since it is easy to add/subtract. The UNIX date command
+can be used to convert back/forth between unix time and time strings:
+
+To convert unix time to ymd_hms date:
+
+.. code-block:: none
+
+  date -ud '1970-01-01 UTC '1306886400' seconds' +%Y%m%d_%H%M%S
+  20110601_000000
+
+To convert ymd_hms to unix date:
+
+.. code-block:: none
+
+  date -ud ''2011-06-01' UTC '00:00:00'' +%s
+  1306886400
+
+Regarding TRMM data, it may be easier to work with the binary data and
+use the trmm2nc.R script described on this
+`page `_
+under observation datasets.
+
+Follow the TRMM binary links to either the 3 or 24-hour accumulations,
+save the files, and run them through that script. That is faster
+and easier than trying to get an ASCII dump. That Rscript can also
+subset the TRMM data if needed. Look for the section of it titled
+"Output domain specification" and define the lat/lons that need
+to be included in the output.
+
+**Q. File-IO - Does MET use a fixed-width output format for its ASCII output files?**
+
+A.
+MET does not use the Fortran-like fixed width format in its
+ASCII output files. Instead, the column widths are adjusted for each
+run to insert at least one space between adjacent columns. The header
+columns of the MET output contain user-defined strings which may be
+of arbitrary length. For example, columns such as MODEL, OBTYPE, and
+DESC may be set by the user to any string value. Additionally, the
+amount of precision written is also configurable. The
+"output_precision" config file entry can be changed from its default
+value of 5 decimal places up to 12 decimal places, which would also
+impact the column widths of the output.
+
+Due to these issues, it is not possible to select a reasonable fixed
+width for each column ahead of time. The AsciiTable class in MET does
+a lot of work to line up the output columns, to make sure there is
+at least one space between them.
+
+If a fixed-width format is needed, the easiest option would be
+writing a script to post-process the MET output into the fixed-width
+format that is needed or that the code expects.
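+
+For example, a simple awk command could pad every column of a MET output
+file to a common fixed width. This is a minimal sketch; the column width
+and file names are illustrative:
+
+.. code-block:: none
+
+  awk '{ for (i=1; i<=NF; i++) printf("%-15s", $i); printf("\n"); }' \
+  point_stat_output.stat > point_stat_fixed_width.txt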
+
+**Q. File-IO - Do the ASCII output files created by MET use scientific notation?**
+
+A.
+By default, the ASCII output files created by MET make use of
+scientific notation when appropriate. The formatting of the
+numbers that the AsciiTable class writes is handled by a call
+to printf. The "%g" formatting option can result in
+scientific notation:
+http://www.cplusplus.com/reference/cstdio/printf/
+
+It has been recommended that a configuration option be added to
+MET to disable the use of scientific notation. That enhancement
+is planned for a future release.
+
+Gen-Vx-Mask
+~~~~~~~~~~~
+
+**Q. Gen-Vx-Mask - I have a list of stations to use for verification.
+I also have a poly region defined. If I specify both of these should
+the result be a union of them?**
+
+A.
+These settings are defined in the "mask" section of the Point-Stat
+configuration file. You can define masking regions in one of 3 ways,
+as a "grid", a "poly" line file, or a "sid" list of station ID's.
+
+If you specify one entry for "poly" and one entry for "sid", you
+should see output for those two different masks. Note that each of
+these settings is an array of values, as indicated by the square
+brackets "[]" in the default config file. If you specify 5 grids,
+3 poly's, and 2 SID lists, you'd get output for those 10 separate
+masking regions. Point-Stat does not compute unions or intersections
+of masking regions. Instead, they are each processed separately.
+
+Is it true that you really want to use a polyline to define an area
+and then use a SID list to capture additional points outside of
+that polyline?
+
+If so, your options are:
+
+1. Define one single SID list which includes all the points currently
+   inside the polyline as well as the extra ones outside.
+
+2. Continue verifying using one polyline and one SID list and
+   write partial sums and contingency table counts.
+
+Then aggregate the results together by running a Stat-Analysis job.
+
+**Q. Gen-Vx-Mask - How do I define a masking region with a GFS file?**
+
+A.
+Grab a sample GFS file:
+
+.. code-block:: none
+
+  wget \
+  http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/gfs/2016102512/gfs.t12z.pgrb2.0p50.f000
+
+Use the MET regrid_data_plane tool to put some data on a
+lat/lon grid over Europe:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/regrid_data_plane gfs.t12z.pgrb2.0p50.f000 \
+  'latlon 100 100 25 0 0.5 0.5' gfs_euro.nc -field 'name="TMP"; level="Z2";'
+
+Run the MET gen_vx_mask tool to apply your polyline to the European domain:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/gen_vx_mask gfs_euro.nc POLAND.poly POLAND_mask.nc
+
+Run the MET plot_data_plane tool to display the resulting mask field:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane POLAND_mask.nc POLAND_mask.ps 'name="POLAND"; level="(*,*)";'
+
+In this example, the mask is in roughly the right spot, but there
+are obvious problems with the latitude and longitude values used
+to define that mask for Poland.
+
+Grid-Stat
+~~~~~~~~~
+
+**Q. Grid-Stat - How do I define a complex masking region?**
+
+A.
+A user can define intersections and unions of multiple fields to define masks.
+Prior to running Grid-Stat, the user can run the Gen-Vx-Mask tool one or
+more times to define a more complex masking area by thresholding multiple
+fields.
+
+For example, consider a forecast GRIB file (fcst.grb) which contains 2 records,
+one for 2-m temperature and a second for 6-hr accumulated precip. The only
+grid points that are desired are grid points below freezing with non-zero
+precip. The user should run Gen-Vx-Mask twice - once to define the
+temperature mask and a second time to intersect that with the precip mask:
+
+.. code-block:: none
+
+  gen_vx_mask fcst.grb fcst.grb tmp_mask.nc \
+  -type data \
+  -mask_field 'name="TMP"; level="Z2"' -thresh le273
+
+  gen_vx_mask tmp_mask.nc fcst.grb tmp_and_precip_mask.nc \
+  -type data \
+  -input_field 'name="TMP_Z2"; level="(*,*)";' \
+  -mask_field 'name="APCP"; level="A6";' -thresh gt0 \
+  -intersection -name "FREEZING_PRECIP"
+
+The first one is pretty straightforward.
+
+1. The input field (fcst.grb) defines the domain for the mask.
+
+2. Since we're doing data masking and the data we want lives in
+   fcst.grb, we pass it in again as the mask_file.
+
+3. Lastly, "-mask_field" specifies the data we want from the mask file
+   and "-thresh" specifies the event threshold.
+
+The second call is a bit tricky.
+
+1. Do data masking (-type data)
+
+2. Read the NetCDF variable named "TMP_Z2" from the input file (tmp_mask.nc)
+
+3. Define the mask by reading 6-hour precip from the mask file
+   (fcst.grb) and looking for values > 0 (-mask_field)
+
+4. Apply intersection logic when combining the "input" value with
+   the "mask" value (-intersection).
+
+5. Name the output NetCDF variable as "FREEZING_PRECIP" (-name).
+   This is totally optional, but convenient.
+
+A user can write a script with multiple calls to Gen-Vx-Mask to
+apply complex masking logic and then pass the output mask file
+to Grid-Stat in its configuration file.
+
+**Q. Grid-Stat - How do I use neighborhood methods to compute fraction
+skill score?**
+
+A.
+A common application of fraction skill score (FSS) is comparing forecast
+and observed thunderstorms. When computing FSS, first threshold the fields
+to define events and non-events. Then look at successively larger and
+larger areas around each grid point to see how the forecast event frequency
+compares to the observed event frequency.
+
+Applying this method to rainfall (and monsoons) is also reasonable.
+Keep in mind that Grid-Stat is the tool that computes FSS. Grid-Stat will
+need to be run once for each evaluation time. As an example, evaluating
+once per day, run Grid-Stat 122 times for the 122 days of a monsoon season.
+This will result in 122 FSS values. These can be viewed as a time series,
+or the Stat-Analysis tool could be used to aggregate them together into
+a single FSS value, like this:
+
+.. code-block:: none
+
+  stat_analysis -job aggregate -line_type NBRCNT \
+  -lookin out/grid_stat
+
+Be sure to pick thresholds (e.g. for the thunderstorms and monsoons)
+that capture the "events" that are of interest in the study.
+
+Also be aware that MET uses the "vld_thresh" setting in the configuration
+file to decide how to handle data along the edge of the domain. Let us say
+it is computing a fractional coverage field using a 5x5 neighborhood
+and it is at the edge of the domain. 15 points contain valid data and
+10 points are outside the domain. Grid-Stat computes the valid data ratio
+as 15/25 = 0.6. Then it applies the valid data threshold. Suppose
+vld_thresh = 0.5. Since 0.6 > 0.5, MET will compute a fractional coverage
+value for that point using the 15 valid data points. Next suppose
+vld_thresh = 1.0. Since 0.6 is less than 1.0, MET will just skip that
+point by setting it to bad data.
+
+Setting vld_thresh = 1.0 will ensure that FSS will only be computed at
+points where all NxN values contain valid data. Setting it to 0.5 only
+requires half of them.
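+
+The "vld_thresh" entry lives in the "nbrhd" dictionary of the Grid-Stat
+configuration file. A minimal sketch for the 5x5 neighborhood example
+above (the coverage threshold shown is illustrative):
+
+.. code-block:: none
+
+  nbrhd = {
+     width      = [ 5 ];
+     cov_thresh = [ >=0.5 ];
+     vld_thresh = 1.0;
+  }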
+
+**Q. Grid-Stat - Is there an example of verifying forecast probabilities?**
+
+A.
+There is an example of verifying probabilities in the test scripts
+included with the MET release. Take a look in:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/scripts/config/GridStatConfig_POP_12
+
+The config file should look something like this:
+
+.. code-block:: none
+
+  fcst = {
+     wind_thresh = [ NA ];
+     field = [
+        {
+          name       = "LCDC";
+          level      = [ "L0" ];
+          prob       = TRUE;
+          cat_thresh = [ >=0.0, >=0.1, >=0.2, >=0.3, >=0.4, >=0.5, >=0.6, >=0.7, >=0.8, >=0.9];
+        }
+     ];
+  };
+
+  obs = {
+     wind_thresh = [ NA ];
+     field = [
+        {
+          name       = "WIND";
+          level      = [ "Z2" ];
+          cat_thresh = [ >=34 ];
+        }
+     ];
+  };
+
+The prob flag is set to TRUE to tell grid_stat to process this as
+probability data. The cat_thresh is set to partition the probability
+values between 0 and 1. Note that if the probability data contains
+values from 0 to 100, MET automatically divides by 100 to rescale to
+the 0 to 1 range.
+
+**Q. Grid-Stat - What is an example of using Grid-Stat with regridding and masking
+turned on?**
+
+A.
+Run Grid-Stat using the following commands and the following config file:
+
+.. code-block:: none
+
+  mkdir out
+  ${MET_BUILD_BASE}/bin/grid_stat \
+  gfs_4_20160220_0000_012.grb2 \
+  ST4.2016022012.06h \
+  GridStatConfig \
+  -outdir out
+
+Note the following two sections of the Grid-Stat config file:
+
+.. code-block:: none
+
+  regrid = {
+     to_grid    = OBS;
+     vld_thresh = 0.5;
+     method     = BUDGET;
+     width      = 2;
+  }
+
+This tells Grid-Stat to do verification on the "observation" grid.
+Grid-Stat reads the GFS and Stage4 data and then automatically regrids
+the GFS data to the Stage4 domain using budget interpolation.
+Use "FCST" to verify on the forecast domain. And use either a named
+grid or a grid specification string to regrid both the forecast and
+observation to a common grid. For example, to_grid = "G212"; will
+regrid both to NCEP Grid 212 before comparing them.
+
+.. code-block:: none
+
+  mask = { grid = [ "FULL" ];
+           poly = [ "MET_BASE/poly/CONUS.poly" ]; }
+
+This will compute statistics over the FULL model domain as well
+as the CONUS masking area.
+
+To demonstrate that Grid-Stat worked as expected, run the following
+commands to plot its NetCDF matched pairs output file:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane \
+  out/grid_stat_120000L_20160220_120000V_pairs.nc \
+  out/DIFF_APCP_06_A06_APCP_06_A06_CONUS.ps \
+  'name="DIFF_APCP_06_A06_APCP_06_A06_CONUS"; level="(*,*)";'
+
+Examine the resulting plot of that difference field.
+
+Lastly, there is another option for defining that masking region.
+Rather than passing the ascii CONUS.poly file to grid_stat, run the
+gen_vx_mask tool and pass the NetCDF output of that tool to grid_stat.
+The advantage to gen_vx_mask is that it will make grid_stat run a
+bit faster. It can be used to construct much more complex masking areas.
+
+**Q. Grid-Stat - How do I use one mask for the forecast field and a different
+mask for the observation field?**
+
+A.
+You can't define different
+masks for the forecast and observation fields in MET tools. MET only lets you
+define a single mask (a masking grid or polyline) and then you choose
+whether you want to apply it to the FCST, OBS, or BOTH of them.
+
+Nonetheless, there is a way you can accomplish this logic using the
+gen_vx_mask tool. You run it once to pre-process the forecast field
+and a second time to pre-process the observation field. And then pass
+those output files to your desired MET tool.
+
+Below is an example using sample data that is included with the MET
+release tarball. To illustrate, this command will read 3-hour
+precip and 2-meter temperature, and reset the precip at any grid
+point where the temperature is less than 290 K to a value of 0:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/gen_vx_mask \
+  data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 \
+  data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 \
+  APCP_03_where_2m_TMPge290.nc \
+  -type data \
+  -input_field 'name="APCP"; level="A3";' \
+  -mask_field 'name="TMP"; level="Z2";' \
+  -thresh 'lt290&&ne-9999' -v 4 -value 0
+
+So this is a bit confusing. Here's what is happening:
+
+* The first argument is the input file which defines the grid.
+
+* The second argument is used to define the masking region and
+  since I'm reading data from the same input file, I've listed
+  that file twice.
+
+* The third argument is the output file name.
+
+* The type of masking is "data" masking where we read a 2D field of
+  data and apply a threshold.
+
+* By default, gen_vx_mask initializes each grid point to a value
+  of 0. Specifying "-input_field" tells it to initialize each grid
+  point to the value of that field (in my example 3-hour precip).
+
+* The "-mask_field" option defines the data field that should be
+  thresholded.
+
+* The "-thresh" option defines the threshold to be applied.
+
+* The "-value" option tells it what "mask" value to write to the
+  output, and I've chosen 0.
+
+The example threshold is less than 290 and not -9999 (which is MET's
+internal missing data value). So any grid point where the 2 meter
+temperature is less than 290 K and is not bad data will be replaced
+by a value of 0.
+
+To more easily demonstrate this, I changed to using "-value 10" and ran
+the output through plot_data_plane:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane \
+  APCP_03_where_2m_TMPge290.nc APCP_03_where_2m_TMPge290.ps \
+  'name="data_mask"; level="(*,*)";'
+
+In the resulting plot, anywhere you see the pink value of 10, that's
+where gen_vx_mask has masked out the grid point.
+
+Pcp-Combine
+~~~~~~~~~~~
+
+**Q. Pcp-Combine - How do I add and subtract with Pcp-Combine?**
+
+A.
+An example of running the MET pcp_combine tool to put NAM 3-hourly
+precipitation accumulations into user-desired 3-hour intervals is
+provided below.
+
+If the user wanted a 0-3 hour accumulation, this is already available
+in the 03 UTC file. Run this file
+through pcp_combine as a pass-through to put it into NetCDF format:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -add 03_file.grb 03 APCP_00_03.nc
+
+If the user wanted the 3-6 hour accumulation, they would subtract the
+0-3 accumulation from the 0-6 accumulation:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -subtract 06_file.grb 06 03_file.grb 03 APCP_03_06.nc
+
+Similarly, if they wanted the 6-9 hour accumulation, they would
+subtract the 0-6 accumulation from the 0-9 accumulation:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -subtract 09_file.grb 09 06_file.grb 06 APCP_06_09.nc
+
+And so on.
+
+Run the 0-3 and 12-15 through pcp_combine even though they already have
+the 3-hour accumulation. That way, all of the NAM files will be in the
+same file format, and can use the same configuration file settings for
+the other MET tools (grid_stat, mode, etc.). If the NAM files are a mix
+of GRIB and NetCDF, the logic would need to be a bit more complicated.
+
+**Q. Pcp-Combine - How do I combine 12-hour accumulated precipitation
+from two different initialization times?**
+
+A.
+The "-sum" command assumes the same initialization time. Use the "-add"
+option instead.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -add \
+  WRFPRS_1997-06-03_APCP_A12.nc 'name="APCP_12"; level="(*,*)";' \
+  WRFPRS_d01_1997-06-04_00_APCP_A12.grb 12 \
+  Sum.nc
+
+For the first file, list the file name followed by a config string
+describing the field to use from the NetCDF file. For the second file,
+list the file name followed by the accumulation interval to use
+(12 for 12 hours). The output file, Sum.nc, will contain the
+combined 12-hour accumulated precipitation.
+
+Here is a small excerpt from the pcp_combine usage statement:
+
+Note: For "-add" and "-subtract", the accumulation intervals may be
+substituted with config file strings. For that first file, we replaced
+the accumulation interval with a config file string.
+
+Here are 3 commands you could use to plot these data files:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane WRFPRS_1997-06-03_APCP_A12.nc \
+  WRFPRS_1997-06-03_APCP_A12.ps 'name="APCP_12"; level="(*,*)";'
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane WRFPRS_d01_1997-06-04_00_APCP_A12.grb \
+  WRFPRS_d01_1997-06-04_00_APCP_A12.ps 'name="APCP"; level="A12";'
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane Sum.nc sum.ps 'name="APCP_24"; level="(*,*)";'
+
+**Q. Pcp-Combine - How do I correct a precipitation time range?**
+
+A.
+Typically, accumulated precipitation is stored in GRIB files using an
+accumulation interval with a "time range" indicator value of 4. Here is
+a description of the different time range indicator values and
+meanings: http://www.nco.ncep.noaa.gov/pmb/docs/on388/table5.html
+
+For example, take a look at the APCP in the GRIB files included in the
+MET tarball:
+
+.. code-block:: none
+
+  wgrib ${MET_BUILD_BASE}/data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 | grep APCP
+  1:0:d=05080700:APCP:kpds5=61:kpds6=1:kpds7=0:TR=4:P1=0: \
+  P2=12:TimeU=1:sfc:0-12hr acc:NAve=0
+  2:31408:d=05080700:APCP:kpds5=61:kpds6=1:kpds7=0:TR=4: \
+  P1=9:P2=12:TimeU=1:sfc:9-12hr acc:NAve=0
+
+The "TR=4" indicates that these records contain an accumulation
+between times P1 and P2. In the first record, the precip is accumulated
+between 0 and 12 hours. In the second record, the precip is accumulated
+between 9 and 12 hours.
+
+However, the GRIB data in question uses a time range indicator of 5, not 4.
+
+.. code-block:: none
+
+  wgrib rmf_gra_2016040600.24 | grep APCP
+  291:28360360:d=16040600:APCP:kpds5=61:kpds6=1:kpds7=0: \
+  TR=5:P1=0:P2=24:TimeU=1:sfc:0-24hr diff:NAve=0
+
+pcp_combine is looking in "rmf_gra_2016040600.24" for a 24 hour
+*accumulation*, but since the time range indicator is not 4, it doesn't
+find a match.
+
+If possible, switch the time range indicator to 4 on the GRIB files. If
+this is not possible, there is another workaround. Instead of telling
+pcp_combine to look for a particular accumulation interval, give it a
+more complete description of the chosen field to use from each file.
+Here is an example:
+
+.. code-block:: none
+
+  pcp_combine -add rmf_gra_2016040600.24 'name="APCP"; level="L0-24";' \
+  rmf_gra_2016040600_APCP_00_24.nc
+
+The resulting file should have the accumulation listed at 24h rather than 0-24.
+
+**Q. Pcp-Combine - How do I use Pcp-Combine as a pass-through to simply reformat
+from GRIB to NetCDF or to change the output variable name?**
+
+A.
+The pcp_combine tool is typically used to modify the accumulation interval
+of precipitation amounts in model and/or analysis datasets. For example,
+when verifying model output in GRIB format containing runtime accumulations
+of precipitation, run the pcp_combine -subtract option every 6 hours to
+create 6-hourly precipitation amounts. In this example, it is not really
+necessary to run pcp_combine on the 6-hour GRIB forecast file since the
+model output already contains the 0 to 6 hour accumulation. However, the
+output of pcp_combine is typically passed to point_stat, grid_stat, or mode
+for verification. Having the 6-hour forecast in GRIB format and all other
+forecast hours in NetCDF format (output of pcp_combine) makes the logic
+for configuring the other MET tools messy. To make the configuration
+consistent for all forecast hours, one option is to run
+pcp_combine as a pass-through to simply reformat from GRIB to NetCDF.
+Listed below is an example of passing a single record to the
+pcp_combine -add option to do the reformatting:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -add forecast_F06.grb \
+  'name="APCP"; level="A6";' \
+  forecast_APCP_06_F06.nc -name APCP_06
+
+Reformatting from GRIB to NetCDF may be done for any other reason the
+user may have. For example, the -name option can be used to define the
+NetCDF output variable name. Presuming this file is then passed to
+another MET tool, the new variable name (CompositeReflectivity) will
+appear in the output of downstream tools:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -add forecast.grb \
+  'name="REFC"; level="L0"; GRIB1_ptv=129; lead_time="120000";' \
+  forecast.nc -name CompositeReflectivity
+
+**Q. Pcp-Combine - How do I use "-pcprx" to run a project faster?**
+
+A.
+To run a project faster, the "-pcprx" option may be used to narrow the
+search down to whatever regular expression you provide. Here are two
+examples:
+
+.. code-block:: none
+
+  # Only use Stage IV data (ST4)
+  ${MET_BUILD_BASE}/bin/pcp_combine -sum 00000000_000000 06 \
+  20161015_18 12 ST4.2016101518.APCP_12_SUM.nc -pcprx "ST4.*.06h"
+
+  # Specify that files starting with pgbq[number][number] be used:
+  ${MET_BUILD_BASE}/bin/pcp_combine \
+  -sum 20160221_18 06 20160222_18 24 \
+  gfs_APCP_24_20160221_18_F00_F24.nc \
+  -pcpdir /scratch4/BMC/shout/ptmp/Andrew.Kren/pre2016c3_corr/temp \
+  -pcprx 'pgbq[0-9][0-9].gfs.2016022118' -v 3
+
+**Q. Pcp-Combine - How do I enter the time format correctly?**
+
+A.
+Here is an **incorrect example** of running pcp_combine with sub-hourly
+accumulation intervals:
+
+.. code-block:: none
+
+  # incorrect example:
+  pcp_combine -subtract forecast.grb 0055 \
+  forecast2.grb 0005 forecast.nc -field APCP
+
+The time signature is entered incorrectly. Let's assume that "0055"
+meant 0 hours and 55 minutes and "0005" meant 0 hours and 5 minutes.
+
+Looking at the usage statement for pcp_combine (just type pcp_combine with
+no arguments): "accum1" indicates the accumulation interval to be used
+from in_file1 in HH[MMSS] format (required).
+
+The time format listed "HH[MMSS]" means specifying hours or
+hours/minutes/seconds. The incorrect example is using hours/minutes.
+
+Below is the **correct example**. Add the seconds to the end of the
+time strings, like this:
+
+.. code-block:: none
+
+  # correct example:
+  pcp_combine -subtract forecast.grb 005500 \
+  forecast2.grb 000500 forecast.nc -field APCP
+
+**Q. Pcp-Combine - How do I use Pcp-Combine when my GRIB data doesn't have the
+appropriate accumulation interval time range indicator?**
+
+A.
+Run wgrib on the data files and the output is listed below:
+
+.. code-block:: none
+
+  279:503477484:d=15062313:APCP:kpds5=61:kpds6=1:kpds7=0:TR=10:P1=3:P2=247:TimeU=0:sfc:1015min \
+  fcst:NAve=0
+  279:507900854:d=15062313:APCP:kpds5=61:kpds6=1:kpds7=0:TR=10:P1=3:P2=197:TimeU=0:sfc:965min \
+  fcst:NAve=0
+
+Notice the output which says "TR=10". TR means time range indicator and
+a value of 10 means that the level information contains an instantaneous
+forecast time, not an accumulation interval.
+
+Here's a table describing the TR values:
+http://www.nco.ncep.noaa.gov/pmb/docs/on388/table5.html
+
+The default logic for pcp_combine is to look for GRIB code 61 (i.e. APCP)
+defined with an accumulation interval (TR = 4). Since the data doesn't
+meet that criteria, the default logic of pcp_combine won't work. The
+arguments need to be more specific to tell pcp_combine exactly what to do.
+
+Try the command:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -subtract \
+  forecast.grb 'name="APCP"; level="L0"; lead_time="165500";' \
+  forecast2.grb 'name="APCP"; level="L0"; lead_time="160500";' \
+  forecast.nc -name APCP_A005000
+
+Some things to point out here:
+
+1. Notice in the wgrib output that the forecast times are 1015 min and
+   965 min. In HHMMSS format, that's "165500" and "160500".
+
+2. An accumulation interval can't be specified since the data isn't stored
+   that way. Instead, use a config file string to describe the data to use.
+
+3. The config file string specifies a "name" (APCP) and "level" string. APCP
+   is defined at the surface, so a level value of 0 (L0) was specified.
+
+4. Technically, the "lead_time" doesn't need to be specified at all;
+   pcp_combine would find the single APCP record in each input GRIB file
+   and use them. But just in case, the lead_time option was included to be
+   extra certain to get exactly the data that is needed.
+
+5. The default output variable name pcp_combine would write is
+   "APCP_L0". However, to indicate that it's a 50-minute
+   "accumulation interval", use a
+   different output variable name (APCP_A005000). Any string name is
+   possible. Maybe "Precip50Minutes" or "RAIN50". But whatever string is
+   chosen will be used in the Grid-Stat, Point-Stat, or MODE config file to
+   tell that tool what variable to process.
+
+**Q. Pcp-Combine - How do I use "-sum", "-add", and "-subtract" to achieve
+the same accumulation interval?**
+
+A.
+Here is an example of using pcp_combine to put GFS into 24-hour intervals
+for comparison against 24-hourly StageIV precipitation. Be aware that the
+24-hour StageIV data is
+defined as an accumulation from 12Z on one day to 12Z on the next day:
+http://www.emc.ncep.noaa.gov/mmb/ylin/pcpanl/stage4/
+
+Therefore, only the 24-hour StageIV data can be used to evaluate 12Z to
+12Z accumulations from the model. Alternatively, the 6-hour StageIV
+accumulations could be used to evaluate any 24 hour accumulation from
+the model. For the latter, run the 6-hour StageIV files through pcp_combine
+to generate the desired 24-hour accumulation.
+
+Here is an example. Run pcp_combine to compute 24-hour accumulations for
+GFS. In this example, process the 20150220 00Z initialization of GFS.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/pcp_combine \
+  -sum 20150220_00 06 20150221_00 24 \
+  gfs_APCP_24_20150220_00_F00_F24.nc \
+  -pcprx "gfs_4_20150220_00.*grb2" \
+  -pcpdir /d1/model_data/20150220
+
+pcp_combine is looking in the */d1/model_data/20150220* directory
+at files which match the regular expression "gfs_4_20150220_00.*grb2".
+That directory contains data for 00, 06, 12, and 18 hour initializations,
+but the "-pcprx" option narrows the search down to the 00 hour
+initialization, which makes it run faster. It inspects all the matching
+files, looking for 6-hour APCP data to sum up to a 24-hour accumulation
+valid at 20150221_00. This results in a 24-hour accumulation between
+forecast hours 0 and 24.
-A. MET relies upon the object-oriented aspects of C++, particularly in using the MODE tool. Due to time and budget constraints, it also makes use of a pre-existing forecast verification library that was developed at NCAR.
+
+The following command will compute the 24-hour accumulation between forecast
+hours 12 and 36:
-**Q. Why is PrepBUFR used?**
+
+.. code-block:: none
+
-A. The first goal of MET was to replicate the capabilities of existing verification packages and make these capabilities available to both the DTC and the public.
+  ${MET_BUILD_BASE}/bin/pcp_combine \
+  -sum 20150220_00 06 20150221_12 24 \
+  gfs_APCP_24_20150220_00_F12_F36.nc \
+  -pcprx "gfs_4_20150220_00.*grb2" \
+  -pcpdir /d1/model_data/20150220
-**Q. Why is GRIB used?**
+
+The "-sum" command is meant to make things easier by searching the
+directory. But instead of using "-sum", another option would be the
+"-add" command. Explicitly list the 4 files that need to be extracted
+from the 6-hour APCP and add them up to a 24-hour accumulation. In the
+directory structure, the previous "-sum" job could be rewritten with
+"-add" like this:
-A. Forecast data from both WRF cores can be processed into GRIB format, and it is a commonly accepted output format for many NWP models.
+
+.. code-block:: none
-**Q. Is GRIB2 supported?**
+
+  ${MET_BUILD_BASE}/bin/pcp_combine -add \
+  /d1/model_data/20150220/gfs_4_20150220_0000_018.grb2 06 \
+  /d1/model_data/20150220/gfs_4_20150220_0000_024.grb2 06 \
+  /d1/model_data/20150220/gfs_4_20150220_0000_030.grb2 06 \
+  /d1/model_data/20150220/gfs_4_20150220_0000_036.grb2 06 \
+  gfs_APCP_24_20150220_00_F12_F36_add_option.nc
-A. Yes, forecast output in GRIB2 format can be read by MET. Be sure to compile the GRIB2 code by setting the appropriate configuration file options (see Chapter 2).
+
+This example explicitly tells pcp_combine which files to read and
+what accumulation interval (6 hours) to extract from them. The resulting
+output should be identical to the output of the "-sum" command.
-**Q. How does MET differ from the previously mentioned existing verification packages?**
+
+**Q. Pcp-Combine - What is the difference between "-sum" and "-add"?**
-A. MET is an actively maintained, evolving software package that is being made freely available to the public through controlled version releases.
+
+A.
+The -sum and -add options both do the same thing. It's just that
+'-sum' could find files more quickly with the use of the -pcprx flag.
+This could also be accomplished by using a calling script.
-**Q. How does the MODE tool differ from the Grid-Stat tool?**
+
+**Q. Pcp-Combine - How do I select a specific GRIB record?**
-A. They offer different ways of viewing verification. The Grid-Stat tool provides traditional verification statistics, while MODE provides specialized spatial statistics.
+
+A.
+In this example, record 735 needs to be selected.
+
+.. code-block:: none
+
+  pcp_combine -add 20160101_i12_f015_HRRR_wrfnat.grb2 \
+  'name="APCP"; level="R735";' \
+  -name "APCP_01" HRRR_wrfnat.20160101_i12_f015.nc
+
+Instead of having the level as "L0", tell it to use "R735" to select
+GRIB record 735.
+
+Plot-Data-Plane
+~~~~~~~~~~~~~~~
+
+**Q. Plot-Data-Plane - How do I inspect Gen-Vx-Mask output?**
+
+A.
+Use Plot-Data-Plane to check that the call to Gen-Vx-Mask actually
+created good output.
+Try running the following command from the top-level ${MET_BUILD_BASE}
+directory.
+
+.. code-block:: none
+
+  bin/plot_data_plane \
+  out/gen_vx_mask/CONUS_poly.nc \
+  out/gen_vx_mask/CONUS_poly.ps \
+  'name="CONUS"; level="(*,*)";'
+
+View that postscript output file, using something like "gv"
+for ghostview:
+
+.. code-block:: none
+
+  gv out/gen_vx_mask/CONUS_poly.ps
+
+Review the map of 0's and 1's over the USA to determine if the output
+file is what the user expects.
+It is always a good idea to start with
+plot_data_plane when working with data to make sure MET
+is plotting the data correctly and in the expected location.
+
+**Q. Plot-Data-Plane - How do I specify the GRIB version?**
+
+A.
+When MET reads gridded data files, it must determine the type of
+file it's reading. The first thing it checks is the suffix of the file.
+The following are all interpreted as GRIB1: .grib, .grb, and .gb,
+while these mean GRIB2: .grib2, .grb2, and .gb2.
+
+There are 2 ways to control how MET interprets a GRIB file: rename
+the files to use a particular suffix, or keep the names as they are and
+explicitly tell MET to interpret them as GRIB1 or GRIB2 using
+the "file_type" configuration option.
+
+The examples below use the plot_data_plane tool to plot the data. Set
+
+.. code-block:: none
+
+  "file_type = GRIB2;"
+
+To keep the files named as they are, add "file_type = GRIB2;" to all the
+MET configuration files (i.e. Grid-Stat, MODE, and so on) that you use:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane \
+  test_2.5_prog.grib \
+  test_2.5_prog.ps \
+  'name="TSTM"; level="A0"; file_type=GRIB2;' \
+  -plot_range 0 100
+
+**Q. Plot-Data-Plane - How do I test the variable naming convention?
+(Record number example.)**
+
+A.
+Make sure MET can read GRIB2 data. Plot the data from that GRIB2 file
+by running:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane LTIA98_KWBR_201305180600.grb2 tmp_z2.ps 'name="TMP"; level="R2";'
+
+"R2" tells MET to plot record number 2. Record numbers 1 and 2 both
+contain temperature data at 2-meters. Here's some wgrib2 output:
+
+.. code-block:: none
+
+  1:0:d=2013051806:TMP:2 m above ground:anl:analysis/forecast error
+  2:3323062:d=2013051806:TMP:2 m above ground:anl:
+
+The GRIB id info is the same between records 1 and 2.
+
+**Q. Plot-Data-Plane - How do I compute and verify wind speed?**
+
+A.
+Here's how to compute and verify wind speed using MET. Good news, MET
+already includes logic for deriving wind speed on the fly. The GRIB
+abbreviation for wind speed is WIND. To request WIND from a GRIB1 or
+GRIB2 file, MET first checks to see if it already exists in the current
+file. If so, it'll use it as is. If not, it'll search for the corresponding
+U and V records and derive wind speed to use on the fly.
+
+In this example the RTMA file is named rtma.grb2 and the UPP file is
+named wrf.grb. Try running the following commands to plot wind speed:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/plot_data_plane wrf.grb wrf_wind.ps \
+  'name="WIND"; level="Z10";' -v 3
+  ${MET_BUILD_BASE}/bin/plot_data_plane rtma.grb2 rtma_wind.ps \
+  'name="WIND"; level="Z10";' -v 3
+
+In the first call, the log message should be similar to this:
+
+.. code-block:: none
+
+  DEBUG 3: MetGrib1DataFile::data_plane_array() ->
+  Attempt to derive winds from U and V components.
+
+In the second one, this won't appear since wind speed already exists
+in the RTMA file.
+
+Stat-Analysis
+~~~~~~~~~~~~~
+
+**Q. Stat-Analysis - How does '-aggregate_stat' work?**
+
+A.
+In Stat-Analysis, there is a "-vx_mask" job filtering option. That option
+reads the VX_MASK column from the input STAT lines and applies string
+matching with the values in that column. Presumably, all of the MPR lines
+will have the value of "FULL" in the VX_MASK column.
+
+Stat-Analysis has the ability to read MPR lines and recompute statistics
+from them using the same library code that the other MET tools use.
+The job command options which begin with "-out" are used to specify settings
+to be applied to the output of that process. For example, the "-fcst_thresh"
+option filters strings from the input "FCST_THRESH" header column. The
+"-out_fcst_thresh" option defines the threshold to be applied to the output
+of Stat-Analysis. So reading MPR lines and applying a threshold to define
+contingency table statistics (CTS) would be done using the
+"-out_fcst_thresh" option.
+
+Stat-Analysis does have the ability to filter MPR lat/lon locations
+using the "-mask_poly" option for a lat/lon polyline and the "-mask_grid"
+option to define a retention grid.
+
+However, there is currently no "-mask_sid" option.
+
+With met-5.2 and later versions, one option is to apply column string
+matching using the "-column_str" option to define the list of station
+ID's you would like to aggregate. That job would look something like this:
+
+.. code-block:: none
+
+  stat_analysis -lookin path/to/mpr/directory \
+  -job aggregate_stat -line_type MPR -out_line_type CNT \
+  -column_str OBS_SID SID1,SID2,SID3,...,SIDN \
+  -set_hdr VX_MASK SID_GROUP_NAME \
+  -out_stat mpr_to_cnt.stat
+
+Where SID1...SIDN is a comma-separated list of the station ID's in the
+group. Notice that the "-set_hdr" option specifies a value for the output
+VX_MASK column. Otherwise, the output would show a list
+of the unique values found in that column. Presumably, all the input
+VX_MASK columns say "FULL" so that's what the output would say. Use
+"-set_hdr" to explicitly set the output value.
+
+**Q. Stat-Analysis - What is the best way to average the FSS scores
+across several days or even several months using
+'Aggregate to Average Scores'?**
+
+A.
+Below is the best way to aggregate the Neighborhood Continuous
+(NBRCNT) lines across multiple days, specifically the fractions skill
+score (FSS). The Stat-Analysis tool is designed to do this. This example
+aggregates scores for the accumulated precipitation (APCP) field.
+
+Run the "aggregate" job type in stat_analysis to do this:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/stat_analysis -lookin directory/file*_nbrcnt.txt \
+  -job aggregate -line_type NBRCNT \
+  -by FCST_VAR,FCST_LEAD,FCST_THRESH,INTERP_MTHD,INTERP_PNTS \
+  -out_stat agg_nbrcnt.txt
+
+This job reads all the files that are passed to it on the command line with
+the "-lookin" option. List explicit filenames to read them directly.
+Listing a top-level directory name will search that directory for files
+ending in ".stat".
+
+In this case, the job being run will "aggregate" the "NBRCNT" line type.
+
+The "-by" option lists several header
+columns. Stat-Analysis will run this job separately for each unique
+combination of those header column entries.
+
+The output is printed to the screen, or use the "-out_stat" option to
+also write the aggregated output to a file named "agg_nbrcnt.txt".
+
+**Q. Stat-Analysis - How do I use '-by' to capture unique entries?**
+
+A.
+Here is a Stat-Analysis job that reads the MPR lines, defines the
+probabilistic forecast thresholds, defines the single observation
+threshold, and computes a PSTD output line. Using "-by FCST_VAR" tells it
+to run the job separately for each unique entry found in the FCST_VAR column.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/stat_analysis \
+  -lookin point_stat_model2_120000L_20160501_120000V.stat \
+  -job aggregate_stat -line_type MPR -out_line_type PSTD \
+  -out_fcst_thresh ge0,ge0.1,ge0.2,ge0.3,ge0.4,ge0.5,ge0.6,ge0.7,ge0.8,ge0.9,ge1.0 \
+  -out_obs_thresh eq1.0 \
+  -by FCST_VAR \
+  -out_stat out_pstd.txt
+
+The output statistics are written to "out_pstd.txt".
+
+**Q. Stat-Analysis - How do I use '-filter' to refine my output?**
+
+A.
+Here is an example of running a Stat-Analysis filter job to discard any
+CTS lines (contingency table statistics) where the forecast rate and
+observation rate are less than 0.05. This is an alternative way of tossing
+out those cases without having to modify the source code.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/stat_analysis \
+  -lookin out/grid_stat/grid_stat_120000L_20050807_120000V.stat \
+  -job filter -dump_row filter_cts.txt -line_type CTS \
+  -column_min BASER 0.05 -column_min FMEAN 0.05
+
+  DEBUG 2: STAT Lines read = 436
+  DEBUG 2: STAT Lines retained = 36
+  DEBUG 2:
+  DEBUG 2: Processing Job 1: -job filter -line_type CTS \
+  -column_min BASER 0.05 -column_min FMEAN 0.05 -dump_row filter_cts.txt
+  DEBUG 1: Creating STAT output file "filter_cts.txt"
+  FILTER: -job filter -line_type CTS \
+  -column_min BASER 0.05 -column_min FMEAN 0.05 -dump_row filter_cts.txt
+  DEBUG 2: Job 1 used 36 out of 36 STAT lines.
+
+This job reads 56 CTS lines, but only keeps the 36 of them where both
+the BASER and FMEAN columns are at least 0.05.
+
+**Q. Stat-Analysis - How do I use the "-by" flag to stratify results?**
+
+A.
+Adding "-by FCST_VAR" is a great way to associate a single value,
+of say RMSE, with each of the forecast variables (UGRD, VGRD and WIND).
+
+Run the following job on the output from Grid-Stat generated when the
+"make test" command is run:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/stat_analysis -lookin out/grid_stat \
+  -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
+  -by FCST_VAR,FCST_LEV \
+  -out_stat cnt.txt
+
+The resulting cnt.txt file includes separate output for 6 different
+FCST_VAR values at different levels.
+
+**Q. Stat-Analysis - How do I speed up run times?**
+
+A.
+By default, Stat-Analysis has two options enabled which slow it down.
+Disabling these two options will create quicker run times:
+
+1. The computation of rank correlation statistics, Spearman's Rank
+   Correlation and Kendall's Tau. Disable them using "-rank_corr_flag FALSE".
+
+2. The computation of bootstrap confidence intervals. Disable them using
+   "-n_boot_rep 0".
+
+Here are two more suggestions for faster run times:
+
+1. Instead of using "-fcst_var u", use "-by fcst_var". This will compute
+   statistics separately for each unique entry found in the FCST_VAR column.
+
+2. Instead of using "-out" to write the output to a text file, use "-out_stat"
+   which will write a full STAT output file, including all the header columns.
+   This will create a long list of values in the OBTYPE column. To avoid the
+   long OBTYPE column value, manually set the output using
+   "-set_hdr OBTYPE ALL_TYPES". Or set its value to whatever is needed.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/stat_analysis \
+  -lookin diag_conv_anl.2015060100.stat \
+  -job aggregate_stat -line_type MPR -out_line_type CNT -by FCST_VAR \
+  -out_stat diag_conv_anl.2015060100_cnt.txt -set_hdr OBTYPE ALL_TYPES \
+  -n_boot_rep 0 -rank_corr_flag FALSE -v 4
+
+Adding the "-by FCST_VAR" option computes stats for all variables and
+runs quickly.
+
+TC-Stat
+~~~~~~~
+
+**Q. TC-Stat - How do I use the “-by” flag to stratify results?**
+
+A.
+To perform tropical cyclone evaluations for multiple models, use the
+"-by AMODEL" option with the tc_stat tool. Here is an example.
+
+In this case, the tc_stat job looked at the 48 hour lead time for the HWFI
+and H3WI models. Without the “-by AMODEL” option, the output would be
+all grouped together.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_stat \
+  -lookin d2014_vx_20141117_reset/al/tc_pairs/tc_pairs_H3WI_* \
+  -lookin d2014_vx_20141117_reset/al/tc_pairs/tc_pairs_HWFI_* \
+  -job summary -lead 480000 -column TRACK -amodel HWFI,H3WI \
+  -by AMODEL -out sample.out
+
+This results in all 48 hour HWFI and H3WI track forecasts being
+aggregated (statistics and scores computed) for each model separately.
+
+**Q. TC-Stat - How do I verify rapid intensification?**
+
+A.
+To get the most output, run something like this:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_stat \
+  -lookin path/to/tc_pairs/output \
+  -job rirw -dump_row test \
+  -out_line_type CTC,CTS,MPR
+
+By default, rapid intensification (RI) is defined as a 24-hour exact
+change exceeding 30 kts. To define RI differently, modify that definition
+for the ADECK, BDECK, or both using the -rirw_time, -rirw_exact,
+and -rirw_thresh options. Set -rirw_window to something larger than 0
+to enable false alarms to be considered hits when they were "close enough"
+in time.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_stat \
+  -lookin path/to/tc_pairs/output \
+  -job rirw -dump_row test \
+  -rirw_time 36 -rirw_window 12 \
+  -out_line_type CTC,CTS,MPR
+
+To evaluate rapid weakening (RW), set "-rirw_thresh <=-30".
+To stratify the results by lead time, add the "-by LEAD" option.
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_stat \
+  -lookin path/to/tc_pairs/output \
+  -job rirw -dump_row test \
+  -rirw_time 36 -rirw_window 12 \
+  -rirw_thresh <=-30 -by LEAD \
+  -out_line_type CTC,CTS,MPR
+
+Utilities
+~~~~~~~~~
+
+**Q. Utilities - What would be an example of scripting to call MET?**
+
+A.
+The following is an example of how to call MET from a bash script,
+including passing in variables. The shell script below runs Grid-Stat,
+calls Plot-Data-Plane to plot the resulting difference field, and calls
+convert to reformat from PostScript to PNG.
+
+.. code-block:: none
+
+  #!/bin/sh
+  for case in FCST OBS; do
+    export TO_GRID=${case}
+    /usr/local/${MET_BUILD_BASE}/bin/grid_stat gfs.t00z.pgrb2.0p25.f000 \
+      nam.t00z.conusnest.hiresf00.tm00.grib2 GridStatConfig
+    /usr/local/${MET_BUILD_BASE}/bin/plot_data_plane \
+      *TO_GRID_${case}*_pairs.nc TO_GRID_${case}.ps \
+      'name="DIFF_TMP_P500_TMP_P500_FULL"; level="(*,*)";'
+    convert -rotate 90 -background white -flatten \
+      TO_GRID_${case}.ps TO_GRID_${case}.png
+  done
+
+
+**Q. Utility - How do I convert TRMM data files?**
+
+A.
+Here is an example of NetCDF data that the MET software is not expecting.
+Below is an option for accessing that same TRMM data, following links from
+the MET website:
+http://dtcenter.org/community-code/model-evaluation-tools-met/input-data
+
+.. code-block:: none
+
+  # Pull binary 3-hourly TRMM data file
+  wget ftp://disc2.nascom.nasa.gov/data/TRMM/Gridded/3B42_V7/201009/3B42.100921.00z.7.precipitation.bin
+  # Pull Rscript from MET website
+  wget http://dtcenter.org/sites/default/files/community-code/met/r-scripts/trmmbin2nc.R
+  # Edit that Rscript by setting
+  out_lat_ll = -50
+  out_lon_ll = 0
+  out_lat_ur = 50
+  out_lon_ur = 359.75
+  # Run the Rscript
+  Rscript trmmbin2nc.R 3B42.100921.00z.7.precipitation.bin \
+  3B42.100921.00z.7.precipitation.nc
+  # Plot the result
+  ${MET_BUILD_BASE}/bin/plot_data_plane 3B42.100921.00z.7.precipitation.nc \
+  3B42.100921.00z.7.precipitation.ps 'name="APCP_03"; level="(*,*)";'
+
+It may be that the domain of the data needed is smaller. Here are some options:
+
+1. In that Rscript, choose different boundaries (i.e. out_lat/lon_ll/ur)
+   to specify the tile of data to be selected.
+
+2. As of version 5.1, MET includes support for regridding the data it reads.
+   Keep TRMM on its native domain and use the MET tools to do the regridding.
+   For example, the Regrid-Data-Plane tool reads a NetCDF file, regrids
+   the data, and writes a NetCDF file. Alternatively, the "regrid" section
+   of the configuration files for the MET tools may be used to do the
+   regridding on the fly. For example, run Grid-Stat to compare the model
+   output to TRMM and set
+
+.. code-block:: none
+
+  "regrid = { to_grid = FCST;
+  ...}"
+
+That tells Grid-Stat to automatically regrid the TRMM observations to
+the model domain.
+
+**Q. Other Utilities - How do I convert a PostScript file to png?**
+
+A.
+Use the Linux “convert” tool to convert a Plot-Data-Plane PostScript
+file to a png:
+
+.. code-block:: none
+
+  convert -rotate 90 -background white plot_dbz.ps plot_dbz.png
+
+To convert a MODE PostScript file to png:
+
+.. code-block:: none
+
+  convert mode_out.ps mode_out.png
+
+This will result in all 6-7 pages of the PostScript file being written
+to separate .png files with the following naming convention:
+
+mode_out-0.png, mode_out-1.png, mode_out-2.png, etc.
+
+**Q. Utility - How do pairwise differences using plot_tcmpr.R work?**
+
+A.
+One necessary step in computing pairwise differences is "event equalizing"
+the data. This means extracting a subset of cases that are common to
+both models.
+
+While the tc_stat tool does not compute pairwise differences, it can apply
+the "event_equalization" logic to extract the cases common to two models.
+This is done using the config file "event_equal = TRUE;" option or
+setting "-event_equal true" on the command line.
+
+Most of the hurricane track analysis and plotting is done using the
+plot_tcmpr.R Rscript. It makes a call to the tc_stat tool to filter the
+track data down to the desired subset, compute pairwise differences if
+needed, and then plot the result.
+
+.. code-block:: none
+
+  setenv MET_BUILD_BASE `pwd`
+  Rscript scripts/Rscripts/plot_tcmpr.R \
+  -lookin tc_pairs_output.tcst \
+  -filter '-amodel AHWI,GFSI' \
+  -series AMODEL AHWI,GFSI,AHWI-GFSI \
+  -plot MEAN,BOXPLOT
+
+The resulting plots include three series - one for AHWI, one for GFSI,
+and one for their pairwise difference.
+
+Understanding all the available options takes some effort, but this script
+can be very useful. If nothing else, it could be adapted to dump out the
+pairwise differences that are needed.
+
+
+Miscellaneous
+~~~~~~~~~~~~~
+
+**Q. Regrid-Data-Plane - How do I define a LatLon grid?**
+
+A.
+Here is an example of the NetCDF global attributes that MET uses to
+define a LatLon grid:
+
+..
code-block:: none
+
+  :Projection = "LatLon" ;
+  :lat_ll = "25.063000 degrees_north" ;
+  :lon_ll = "-124.938000 degrees_east" ;
+  :delta_lat = "0.125000 degrees" ;
+  :delta_lon = "0.125000 degrees" ;
+  :Nlat = "224 grid_points" ;
+  :Nlon = "464 grid_points" ;
+
+This can be created by running the Regrid-Data-Plane tool to regrid
+some GFS data to a LatLon grid:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/regrid_data_plane \
+  gfs_2012040900_F012.grib G110 \
+  gfs_g110.nc -field 'name="TMP"; level="Z2";'
+
+Use ncdump to look at the attributes. As an exercise, try defining
+these global attributes (and removing the other projection-related ones)
+and then try again.
+
+**Q. Pre-processing - How do I use wgrib2 and pcp_combine to regrid and
+reformat data into NetCDF files?**
+
+A.
+If you are extracting only one or two fields from a file, MET's
+Regrid-Data-Plane tool can be used to generate a Lat-Lon projection. If
+regridding all fields, the wgrib2 utility may be more useful. Here's an
+example of using wgrib2 and pcp_combine to generate NetCDF files
+MET can read:
+
+.. code-block:: none
+
+  wgrib2 gfsrain06.grb -new_grid latlon 112:131:0.1 \
+  25:121:0.1 gfsrain06_regrid.grb2
+
+Then run that GRIB2 file through pcp_combine using the "-add" option
+with only one file provided:
+
+.. code-block:: none
+
+  pcp_combine -add gfsrain06_regrid.grb2 'name="APCP"; \
+  level="A6";' gfsrain06_regrid.nc
+
+The output NetCDF file then has well-defined lower-left grid attributes:
+
+.. code-block:: none
+
+  ncdump -h 2a_wgrib2_regrid.nc | grep "_ll"
+  :lat_ll = "25.000000 degrees_north" ;
+  :lon_ll = "112.000000 degrees_east" ;
+
+**Q. TC-Pairs - How do I get rid of WARNING: TrackInfo messages by
+specifying a model suffix?**
+
+A.
+Below is an example command to run:
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_pairs \
+  -adeck aep142014.h4hw.dat \
+  -bdeck bep142014.dat \
+  -config TCPairsConfig_v5.0 \
+  -out tc_pairs_v5.0_patch \
+  -log tc_pairs_v5.0_patch.log \
+  -v 3
+
+Below is the resulting warning message:
+
+.. code-block:: none
+
+  WARNING: TrackInfo::add(const ATCFLine &) ->
+  skipping ATCFLine since the valid time is not
+  increasing (20140801_000000 < 20140806_060000):
+  WARNING: AL, 03, 2014080100, 03, H4HW, 000,
+  120N, 547W, 38, 1009, XX, 34, NEQ, 0084, 0000,
+  0000, 0083, -99, -99, 59, 0, 0, , 0, , 0, 0,
+
+As a sanity check, the MET-TC code makes sure that the valid time of
+the track data doesn't go backwards in time. This warning indicates that
+this is occurring. The likely reason is that duplicate track data are
+being passed to tc_pairs.
+
+Using grep, notice that the same track data shows up in
+"aal032014.h4hw.dat" and "aal032014_hfip_d2014_BERTHA.dat". Try this:
+
+.. code-block:: none
+
+  grep H4HW aal*.dat | grep 2014080100 | grep ", 000,"
+  aal032014.h4hw.dat:AL, 03, 2014080100, 03, H4HW, 000,
+  120N, 547W, 38, 1009, XX, 34, NEQ, 0084,
+  0000, 0000, 0083, -99, -99, 59, 0, 0, ,
+  0, , 0, 0, , , , , 0, 0, 0, 0, THERMO PARAMS,
+  -9999, -9999, -9999, Y, 10, DT, -999
+  aal032014_hfip_d2014_BERTHA.dat:AL, 03, 2014080100,
+  03, H4HW, 000, 120N, 547W, 38, 1009, XX, 34, NEQ,
+  0084, 0000, 0000, 0083, -99, -99, 59, 0, 0, , 0, , 0,
+  0, , , , , 0, 0, 0, 0, THERMOPARAMS, -9999 ,-9999 ,
+  -9999 ,Y ,10 ,DT ,-999
+
+Those two lines are nearly identical, except for the spelling of
+"THERMO PARAMS" with a space vs "THERMOPARAMS" with no space.
+
+Passing tc_pairs duplicate track data results in this sort of warning.
+The DTC had the same sort of problem when setting up a real-time
+verification system. The same track data was making its way into
+multiple ATCF files.
+
+If this really is duplicate track data, fix the logic for where/how
+the track data are stored. However, if the H4HW data in the first file
+actually differs from that in the second file, there is another option.
+You can specify a model suffix to be used for each ADECK source, as in
+this example (suffix=_EXP):
+
+.. code-block:: none
+
+  ${MET_BUILD_BASE}/bin/tc_pairs \
+  -adeck aal032014.h4hw.dat suffix=_EXP \
+  -adeck aal032014_hfip_d2014_BERTHA.dat \
+  -bdeck bal032014.dat \
+  -config TCPairsConfig_match \
+  -out tc_pairs_v5.0_patch \
+  -log tc_pairs_v5.0_patch.log -v 3
+
+Any model names found in "aal032014.h4hw.dat" will now have _EXP tacked
+onto the end. Note that if a list of model names is specified in the
+TCPairsConfig file, include the _EXP variants; otherwise, they will not
+show up in the output.
+
+That will get rid of the warnings because tc_pairs will store the track
+data from the first source using a slightly different model name. This
+feature was added for users who are testing multiple versions of a
+model on the same set of storms. They might be using the same ATCF ID
+in all their output, and this enables them to distinguish the output
+in tc_pairs.
+
+**Q. Why is the grid upside down?**
+
+A.
+The user provides a gridded data file to MET and it runs without error,
+but the data is packed upside down.
+
+Try using the "file_type" entry. The "file_type" entry specifies the
+input file type (e.g. GRIB1, GRIB2, NETCDF_MET, NETCDF_PINT, NETCDF_NCCF)
+rather than letting the code determine it itself. For valid file_type
+values, see "File types" in the *data/config/ConfigConstants* file. This
+entry should be defined within the "fcst" or "obs" dictionaries.
+Sometimes, directly specifying the type of file will help MET figure
+out how to properly handle the data.
+
+Another option is to use the Regrid-Data-Plane tool. The Regrid-Data-Plane
+tool may be run to read data from any gridded data file MET supports
+(i.e. GRIB1, GRIB2, and a variety of NetCDF formats), interpolate to a
+user-specified grid, and write the field(s) out in NetCDF format. See the
+Regrid-Data-Plane tool section (:numref:`regrid-data-plane`) in the MET
+User's Guide for more
+detailed information. While the Regrid-Data-Plane tool is useful as a
+stand-alone tool, the capability is also included to automatically regrid
+data in most of the MET tools that handle gridded data. This "regrid"
+entry is a dictionary containing information about how to handle input
+gridded data files. The "regrid" entry specifies regridding logic and
+has a "to_grid" entry that can be set to NONE, FCST, OBS, a named grid,
+the path to a gridded data file defining the grid, or an explicit grid
+specification string. See the :ref:`regrid` entry in
+the Configuration File Overview in the MET User's Guide for a more detailed
+description of the configuration file entries that control automated
+regridding.
+
+A single model level can be plotted using the plot_data_plane utility.
+This tool can assist the user by showing the data to be verified to
+ensure that times and locations match up as expected.
+
+**Q. Why was MET written largely in C++ instead of FORTRAN?**
+
+A.
+MET relies upon the object-oriented aspects of C++, particularly in
+using the MODE tool.
Due to time and budget constraints, it also makes
+use of a pre-existing forecast verification library that was developed
+at NCAR.
+
+**Q. How does MET differ from the previously mentioned existing
+verification packages?**
+
+A.
+MET is an actively maintained, evolving software package that is being
+made freely available to the public through controlled version releases.

**Q. Will the MET work on data in native model coordinates?**

-A. No - it will not. In the future, we may add options to allow additional model grid coordinate systems.
+A.
+No - it will not. In the future, we may add options to allow additional
+model grid coordinate systems.

**Q. How do I get help if my questions are not answered in the User's Guide?**

-A. First, look on our `MET User's Guide website `_. If that doesn't answer your question, create a post in the `METplus GitHub Discussions Forum `_.
+A.
+First, look on our
+`MET User's Guide website `_.
+If that doesn't answer your question, create a post in the
+`METplus GitHub Discussions Forum `_.
+

-**Q. Where are the graphics?**
+**Q. What graphical features does MET provide?**

-A. Currently, very few graphics are included. The plotting tools (plot_point_obs, plot_data_plane, and plot_mode_field) can help you visualize your raw data. Also, ncview can be used with the NetCDF output from MET tools to visualize results. Further graphics support will be made available in the future on the MET website.
+A.
+MET provides some :ref:`plotting and graphics support`. The plotting
+tools, including plot_point_obs, plot_data_plane, and plot_mode_field, can
+help users visualize the data.
+
+MET is intended to be a set of command line tools for evaluating forecast
+quality. So, the development effort is focused on providing the latest,
+state-of-the-art verification approaches, rather than on providing nice
+plotting features. However, the ASCII output statistics of MET may be plotted
+with a wide variety of plotting packages, including R, NCL, IDL, and GNUPlot.
+METViewer is also currently being developed and used by the DTC and NOAA.
+It creates basic plots of MET output verification statistics. The types of
+plots include series plots with confidence intervals, box plots, x-y scatter
+plots and histograms.
+
+R is a language and environment for statistical computing and graphics.
+It's a free package that runs on most operating systems and provides nice
+plotting features and a wide array of powerful statistical analysis tools.
+There are sample scripts on the
+`MET website `_
+that you can use and modify to perform the type of analysis you need. If
+you create your own scripts, we encourage you to submit them to us through the
+`METplus GitHub Discussions Forum `_
+so that we can post them for other users.

**Q. How do I find the version of the tool I am using?**

-A. Type the name of the tool followed by **-version**. For example, type "pb2nc **-version**".
+A.
+Type the name of the tool followed by **-version**. For example,
+type “pb2nc **-version**”.

-**Q. What are MET's conventions for latitude, longitude, azimuth and bearing angles?**
+**Q. What are MET's conventions for latitude, longitude, azimuth and
+bearing angles?**

-A. MET considers north latitude and east longitude positive. Latitudes have range from :math:`-90^\circ` to :math:`+90^\circ`. Longitudes have range from :math:`-180^\circ` to :math:`+180^\circ`.
Plane angles such as azimuths and bearing (example: horizontal wind direction) have range :math:`0^\circ` to :math:`360^\circ` and are measured clockwise from the north.
+A.
+MET considers north latitude and east longitude positive. Latitudes
+range from :math:`-90^\circ` to :math:`+90^\circ`. Longitudes
+range from :math:`-180^\circ` to :math:`+180^\circ`. Plane angles such
+as azimuths and bearings (for example, horizontal wind direction)
+range from :math:`0^\circ` to :math:`360^\circ` and are measured clockwise
+from the north.

.. _Troubleshooting:

Troubleshooting
_______________

-The first place to look for help with individual commands is this user's guide or the usage statements that are provided with the tools. Usage statements for the individual MET tools are available by simply typing the name of the executable in MET's *bin/* directory. Example scripts available in the MET's *scripts/* directory show examples of how one might use these commands on example datasets. Here are suggestions on other things to check if you are having problems installing or running MET.
+The first place to look for help with individual commands is this
+User's Guide or the usage statements that are provided with the tools.
+Usage statements for the individual MET tools are available by simply
+typing the name of the executable in MET's *bin/* directory. Example
+scripts available in the MET's *scripts/* directory show examples of how
+one might use these commands on example datasets. Here are suggestions
+on other things to check if you are having problems installing or running MET.

**MET won't compile**

-* Have you specified the locations of NetCDF, GNU Scientific Library, and BUFRLIB, and optional additional libraries using corresponding MET\_ environment variables prior to running configure?
+* Have you specified the locations of NetCDF, GNU Scientific Library,
+  and BUFRLIB, and optional additional libraries using corresponding
+  MET\_ environment variables prior to running configure?
+
+* Have these libraries been compiled and installed using the same set
+  of compilers used to build MET?
+
+* Are you using NetCDF version 3.4 or version 4? Currently, only NetCDF
+  version 3.6 can be used with MET.
+
+**BUFRLIB Errors during MET installation**
+
+.. code-block:: none
+
+  error message: /usr/bin/ld: cannot find -lbufr
+  The linker cannot find the BUFRLIB library archive file it needs.
+
+  export MET_BUFRLIB=/home/username/BUFRLIB_v10.2.3:$MET_BUFRLIB
+
+It isn't making its way into the configuration because BUFRLIB_v10.2.3
+isn't showing up in the output of make. This may indicate the wrong shell
+type. The .bashrc file sets the environment for the Bourne shell, but
+the above error could indicate that the C shell is being used instead.

-* Have these libraries been compiled and installed using the same set of compilers used to build MET?
+Try the following two things:

-* Are you using NetCDF version 3.4 or version 4? Currently, only NetCDF version 3.6 can be used with MET.
+1. Check to make sure this file exists:

-**Grid_stat won't run**
+   .. code-block:: none

-* Are both the observational and forecast datasets on the same grid?
+      ls /home/username/BUFRLIB_v10.2.3/libbufr.a

-**MODE won't run**
+2. Rerun the MET configure command using the following option on the
+   command line:

-* If using precipitation, do you have the same accumulation periods for both the forecast and observations? (If you aren't sure, run pcp_combine.)
+   ..
code-block:: none
+
+      MET_BUFRLIB=/home/username/BUFRLIB_v10.2.3

-* Are both the observation and forecast datasets on the same grid?
+After doing that, please try recompiling MET. If it fails,
+please send met_help@ucar.edu the following log files:
+"make_install.log" and "config.log".

-**Point-Stat won't run**
+**Command line double quotes**

-* Have you run pb2nc first on your PrepBUFR observation data?
+Single quotes, double quotes, and escape characters can be difficult for
+MET to parse. If there are problems, especially in Python code, try
+breaking the command up as in the example below.
+
+.. code-block:: none
+
+  ['/h/WXQC/{MET_BUILD_BASE}/bin/regrid_data_plane',
+  '/h/data/global/WXQC/data/umm/1701150006',
+  'G003', '/h/data/global/WXQC/data/met/nc_mdl/umm/1701150006', '- field',
+  '\'name="HGT"; level="P500";\'', '-v', '6']
+
+**Environment variable settings**
+
+In the incorrect example below, many environment variables have both
+the main variable and the INC and LIB variables set:
+
+.. code-block:: none
+
+  export MET_GSL=$MET_LIB_DIR/gsl
+  export MET_GSLINC=$MET_LIB_DIR/gsl/include/gsl
+  export MET_GSLLIB=$MET_LIB_DIR/gsl/lib
+
+Only MET_GSL **OR** MET_GSLINC **AND** MET_GSLLIB need to be set.
+So, for example, either set:
+
+.. code-block:: none
+
+  export MET_GSL=$MET_LIB_DIR/gsl
+
+or set:
+
+.. code-block:: none
+
+  export MET_GSLINC=$MET_LIB_DIR/gsl/include/gsl
+  export MET_GSLLIB=$MET_LIB_DIR/gsl/lib
+
+Additionally, MET does not use MET_HDF5INC and MET_HDF5LIB.
+It only uses MET_HDF5.
+
+Our online tutorial can help figure out what should be set and what the
+value should be:
+https://met.readthedocs.io/en/latest/Users_Guide/installation.html
+
+**NetCDF install issues**
+
+This example shows a problem with NetCDF in the make_install.log file:
+
+.. code-block:: none
+
+  /usr/bin/ld: warning: libnetcdf.so.11,
+  needed by /home/zzheng25/metinstall/lib/libnetcdf_c++4.so,
+  may conflict with libnetcdf.so.7
+
+Below is an example of setting too many MET_NETCDF options:
+
+.. code-block:: none
+
+  MET_NETCDF='/home/username/metinstall/'
+  MET_NETCDFINC='/home/username/local/include'
+  MET_NETCDFLIB='/home/username/local/lib'
+
+
+Either MET_NETCDF **OR** MET_NETCDFINC **AND** MET_NETCDFLIB need to be set.
+If the NetCDF include files are in */home/username/local/include* and the
+NetCDF library files are in */home/username/local/lib*, unset the
+MET_NETCDF environment variable, then run "make clean", reconfigure,
+and then run "make install" and "make test" again.

**Error while loading shared libraries**

-* Add the lib dir to your LD_LIBRARY_PATH. For example, if you receive the following error: "./mode_analysis: error while loading shared libraries: libgsl.so.19: cannot open shared object file: No such file or directory", you should add the path to the gsl lib (for example, */home/user/MET/gsl-2.1/lib*) to your LD_LIBRARY_PATH.
+
+* Add the lib dir to your LD_LIBRARY_PATH. For example, if you receive
+  the following error: "./mode_analysis: error while loading shared
+  libraries: libgsl.so.19: cannot open shared object file:
+  No such file or directory", you should add the path to the
+  gsl lib (for example, */home/user/MET/gsl-2.1/lib*)
+  to your LD_LIBRARY_PATH.

**General troubleshooting**

-* For configuration files used, make certain to use empty square brackets (e.g. [ ]) to indicate no stratification is desired. Do NOT use empty double quotation marks inside square brackets (e.g. [""]).
+* For configuration files used, make certain to use empty square brackets + (e.g. [ ]) to indicate no stratification is desired. Do NOT use empty + double quotation marks inside square brackets (e.g. [""]). * Have you designated all the required command line arguments? -* Try rerunning with a higher verbosity level. Increasing the verbosity level to 4 or 5 prints much more diagnostic information to the screen. +* Try rerunning with a higher verbosity level. Increasing the verbosity + level to 4 or 5 prints much more diagnostic information to the screen. Where to get help _________________ -If none of the above suggestions have helped solve your problem, help is available through the `METplus GitHub Discussions Forum `_. +If none of the above suggestions have helped solve your problem, help +is available through the +`METplus GitHub Discussions Forum `_. + How to contribute code ______________________ -If you have code you would like to contribute, we will gladly consider your contribution. Please create a post in the `METplus GitHub Discussions Forum `_. +If you have code you would like to contribute, we will gladly consider +your contribution. Please create a post in the +`METplus GitHub Discussions Forum `_. + diff --git a/met/docs/Users_Guide/appendixC.rst b/met/docs/Users_Guide/appendixC.rst index 3285d19e7d..d6523fb1c2 100644 --- a/met/docs/Users_Guide/appendixC.rst +++ b/met/docs/Users_Guide/appendixC.rst @@ -924,7 +924,7 @@ MET produces hit rate (POD) and false alarm rate (POFD) values for each user-spe A ROC plot is shown for an example set of forecasts, with a solid line connecting the points for six user-specified thresholds (0.25, 0.35, 0.55, 0.65, 0.75, 0.85). The diagonal dashed line indicates no skill while the dash-dot line shows the ROC for a perfect forecast. -An ROC curve shows how well the forecast discriminates between two outcomes, so it is a measure of resolution. The ROC is invariant to linear transformations of the forecast, and is thus unaffected by bias. An unbiased (i.e., well-calibrated) forecast can have the same ROC as a biased forecast, though most would agree that an unbiased forecast is "better". Since the ROC is conditioned on the observations, it is often paired with the reliability diagram, which is conditioned on the forecasts. +A ROC curve shows how well the forecast discriminates between two outcomes, so it is a measure of resolution. The ROC is invariant to linear transformations of the forecast, and is thus unaffected by bias. An unbiased (i.e., well-calibrated) forecast can have the same ROC as a biased forecast, though most would agree that an unbiased forecast is "better". Since the ROC is conditioned on the observations, it is often paired with the reliability diagram, which is conditioned on the forecasts. .. _appendixC-roc_example: @@ -1134,7 +1134,7 @@ A mathematical metric, :math:`m(A,B)\geq 0`, must have the following three prope The first establishes that a perfect score is zero and that the only way to obtain a perfect score is if the two sets are identical according to the metric. The second requirement ensures that the order by which the two sets are evaluated will not change the result. The third property ensures that if *C* is closer to *A* than *B* is to *A*, then :math:`m(A,C) < m(A,B)`. 
-It has been argued in :ref:`Gilleland (2019) ` that the second property of symmetry is not necessarily an important quality to have for a summary measure for verification purposes because lack of symmetry allows for information about false alarms and misses.
+It has been argued in :ref:`Gilleland (2017) ` that the second property of symmetry is not necessarily an important quality to have for a summary measure for verification purposes because lack of symmetry allows for information about false alarms and misses.

The results of the distance map verification approaches that are included in the Grid-Stat tool are summarized using a variety of measures. These measures include Baddeley's :math:`\Delta` Metric, the Hausdorff Distance, the Mean-error Distance, Pratt's Figure of Merit, and Zhu's Measure. Their equations are listed below.

@@ -1205,6 +1205,29 @@ where MED *(A,B)* is as in the Mean-error distance, *N* is the total number of g

The range for ZHU is 0 to infinity, with a score of 0 indicating a perfect forecast.

+.. _App_C-gbeta:
+
+:math:`G` and :math:`G_\beta`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Called "G" and "GBETA" in the DMAP output (:numref:`table_GS_format_info_DMAP`).
+
+See :numref:`grid-stat_gbeta` for a description.
+
+Let :math:`y = y_1 y_2` where :math:`y_1 = n_A + n_B - 2n_{AB}`, and :math:`y_2 = MED(A,B) \cdot n_B + MED(B,A) \cdot n_A`, with the mean-error distance (:math:`MED`) as described above, and where :math:`n_{A}`, :math:`n_{B}`, and :math:`n_{AB}` are the number of events within event areas *A*, *B*, and the intersection of *A* and *B*, respectively.
+
+The :math:`G` performance measure is given by
+
+.. math:: G(A,B) = y^{1/3}
+
+and the :math:`G_\beta` performance measure is given by
+
+.. math:: G_\beta(A,B) = \max\{1-\frac{y}{\beta}, 0\}
+
+where :math:`\beta > 0` is a user-chosen parameter with a default value of :math:`n^2 / 2.0` with :math:`n` equal to the number of points in the domain. The square-root of :math:`G` will give units of grid points, where :math:`y^{1/3}` gives units of grid points squared.
+
+The range for :math:`G_\beta` is 0 to 1, with a score of 1 indicating a perfect forecast.
+

Calculating Percentiles
_______________________

diff --git a/met/docs/Users_Guide/appendixD.rst b/met/docs/Users_Guide/appendixD.rst
index a90c71b620..66c578e91a 100644
--- a/met/docs/Users_Guide/appendixD.rst
+++ b/met/docs/Users_Guide/appendixD.rst
@@ -77,8 +77,9 @@ All other verification scores with CIs in MET must be obtained through bootstrap

5. Calculate CIs for the parameters directly from the sample (see text below for more details)

-Typically, a simple random sample is taken for step 2, and that is how it is done in MET. As an example of what happens in this step, suppose our sample is :math:`X_1,X_2,X_3,X_4`. Then, one possible replicate might be :math:`X_2,X_2,X_2,X_4`. Usually one samples :math:`m = n` points in this step, but there are cases where one should use :math:`m < n`. For example, when the underlying distribution is heavy-tailed, one should use a smaller size m than n (e.g., the closest integer value to the square root of the original sample size).
+Typically, a simple random sample is taken for step 2, and that is how it is done in MET. As an example of what happens in this step, suppose our sample is :math:`X_1,X_2,X_3,X_4`. Then, one possible replicate might be :math:`X_2,X_2,X_2,X_4`. Usually one samples :math:`m = n` points in this step, but there are cases where one should use :math:`m < n`.
For example, when the underlying distribution is heavy-tailed, one should use a smaller size m than n (e.g., the closest integer value to the square root of the original sample size). See :ref:`Gilleland (2020, part II) ` for considerably more information about the issues with estimators that follow a heavy tailed distribution and the closely related issue of bootstrapping extreme-valued estimators, such as the maximum, in the atmospheric science domain.

There are numerous ways to construct CIs from the sample obtained in step 4. MET allows for two of these procedures: the percentile and the BCa. The percentile is the most commonly known method, and the simplest to understand. It is merely the :math:`\alpha / 2` and :math:`1 - \alpha / 2` percentiles from the sample of statistics. Unfortunately, however, it has been shown that this interval is too optimistic in practice (i.e., it doesn't have accurate coverage). One solution is to use the BCa method, which is very accurate, but it is also computationally intensive. This method adjusts for bias and non-constant variance, and yields the percentile interval in the event that the sample is unbiased with constant variance.

-If there is dependency in the sample, then it is prudent to account for this dependency in some way. One method that does not make a lot of assumptions is circular block bootstrapping. This is not currently implemented in MET, but will be available in a future release. At that time, the method will be explained more fully here, but until then consult :ref:`Gilleland (2010) ` for more details.
+If there is dependency in the sample, then it is prudent to account for this dependency in some way (see :ref:`Gilleland (2020, part I) ` for an in-depth discussion of bootstrapping in the competing forecast verification domain). :ref:`Gilleland (2010) ` describes the bootstrap procedure, along with the above-mentioned parametric methods, in more detail specifically for the verification application. One method that is particularly appropriate for serially dependent data is the circular block resampling procedure for step 2.
+

diff --git a/met/docs/Users_Guide/config_options.rst b/met/docs/Users_Guide/config_options.rst
index a490ea1e27..5c3149b03f 100644
--- a/met/docs/Users_Guide/config_options.rst
+++ b/met/docs/Users_Guide/config_options.rst
@@ -413,7 +413,7 @@ e.g. model = "GFS";

model = "WRF";

-.._desc:
+.. _desc:

:ref:`desc `

@@ -1615,136 +1615,6 @@ This dictionary may include the following entries:

]; }

-.. _nbrhd:
-
-:ref:`nbrhd `
-
-The "nbrhd" entry is a dictionary that is very similar to the "interp"
-entry. It specifies information for computing neighborhood statistics in
-Grid-Stat. This dictionary may include the following entries:
-
-* The "field" entry specifies to which field(s) the computation of
-  fractional coverage should be applied. Grid-Stat processes each
-  combination of categorical threshold and neighborhood width to
-  derive the fractional coverage fields from which neighborhood
-  statistics are calculated. Users who have computed fractional
-  coverage fields outside of MET can use this option to disable
-  these computations. Instead, the raw input values will be
-  used directly to compute neighborhood statistics:
-
-  * "BOTH" to compute fractional coverage for both the forecast
-    and the observation fields (default).
-
-  * "FCST" to only process the forecast field.
- - * "OBS" to only process the observation field. - - * "NONE" to process neither field. - -* The "vld_thresh" entry is described above. - -* The "shape" entry defines the shape of the neighborhood. - Valid values are "SQUARE" or "CIRCLE" - -* The "width" entry is as described above, and must be odd. - -* The "cov_thresh" entry is an array of thresholds to be used when - computing categorical statistics for the neighborhood fractional - coverage field. - -.. code-block:: none - - nbrhd = { - field = BOTH; - vld_thresh = 1.0; - shape = SQUARE; - width = [ 1 ]; - cov_thresh = [ >=0.5 ]; - } - -.. _fourier: - -:ref:`fourier ` - -The "fourier" entry is a dictionary which specifies the application of the -Fourier decomposition method. It consists of two arrays of the same length -which define the beginning and ending wave numbers to be included. If the -arrays have length zero, no Fourier decomposition is applied. For each array -entry, the requested Fourier decomposition is applied to the forecast and -observation fields. The beginning and ending wave numbers are indicated in -the MET ASCII output files by the INTERP_MTHD column (e.g. WV1_0-3 for waves -0 to 3 or WV1_10 for only wave 10). This 1-dimensional Fourier decomposition -is computed along the Y-dimension only (i.e. the columns of data). It is only -defined when each grid point contains valid data. If either input field -contains missing data, no Fourier decomposition is computed. - -The available wave numbers start at 0 (the mean across each row of data) -and end at (Nx+1)/2 (the finest level of detail), where Nx is the X-dimension -of the verification grid: - -* The "wave_1d_beg" entry is an array of integers specifying the first - wave number to be included. - -* The "wave_1d_end" entry is an array of integers specifying the last - wave number to be included. - -.. code-block:: none - - fourier = { - wave_1d_beg = [ 0, 4, 10 ]; - wave_1d_end = [ 3, 9, 20 ]; - } - -.. _gradient: - -:ref:`gradient ` - -The "gradient" entry is a dictionary which specifies the number and size of -gradients to be computed. The "dx" and "dy" entries specify the size of the -gradients in grid units in the X and Y dimensions, respectively. dx and dy -are arrays of integers (positive or negative) which must have the same -length, and the GRAD output line type will be computed separately for each -entry. When computing gradients, the value at the (x, y) grid point is -replaced by the value at the (x+dx, y+dy) grid point minus the value at -(x, y). - -This configuration option may be set separately in each "obs.field" entry. - -.. code-block:: none - - gradient = { - dx = [ 1 ]; - dy = [ 1 ]; - } - -.. _distance_map: - -:ref:`distance_map ` - -The "distance_map" entry is a dictionary containing options related to the -distance map statistics in the DMAP output line type. The "baddeley_p" entry -is an integer specifying the exponent used in the Lp-norm when computing the -Baddeley Delta metric. The "baddeley_max_dist" entry is a floating point -number specifying the maximum allowable distance for each distance map. Any -distances larger than this number will be reset to this constant. A value of -NA indicates that no maximum distance value should be used. The "fom_alpha" -entry is a floating point number specifying the scaling constant to be used -when computing Pratt's Figure of Merit. The "zhu_weight" specifies a value -between 0 and 1 to define the importance of the RMSE of the binary fields -(i.e. amount of overlap) versus the mean-error distance (MED). 
The default
-value of 0.5 gives equal weighting.
-
-This configuration option may be set separately in each "obs.field" entry.
-
-.. code-block:: none
-
-  distance_map = {
-     baddeley_p        = 2;
-     baddeley_max_dist = NA;
-     fom_alpha         = 0.1;
-     zhu_weight        = 0.5;
-  }
-

.. _land_mask:

:ref:`land_mask `

diff --git a/met/docs/Users_Guide/figure/appendixC-roc_example.jpg b/met/docs/Users_Guide/figure/appendixC-roc_example.jpg
index 5f3e25ba8e..d0680257f7 100644
Binary files a/met/docs/Users_Guide/figure/appendixC-roc_example.jpg and b/met/docs/Users_Guide/figure/appendixC-roc_example.jpg differ
diff --git a/met/docs/Users_Guide/figure/grid-stat_fig6.png b/met/docs/Users_Guide/figure/grid-stat_fig6.png
new file mode 100644
index 0000000000..e0916152f7
Binary files /dev/null and b/met/docs/Users_Guide/figure/grid-stat_fig6.png differ
diff --git a/met/docs/Users_Guide/grid-stat.rst b/met/docs/Users_Guide/grid-stat.rst
index f626f77ef8..ef1a465763 100644
--- a/met/docs/Users_Guide/grid-stat.rst
+++ b/met/docs/Users_Guide/grid-stat.rst
@@ -120,7 +120,30 @@ While :numref:`grid-stat_fig1` and :numref:`grid-stat_fig2` are helpful in illus

The absolute difference between the distance maps in the bottom row of :numref:`grid-stat_fig3` (top left), the shortest distances from every grid point in B to the nearest grid point in A (top right), and the shortest distances from every grid point in A to the nearest grid points in B (bottom left). The latter two do not have axes in order to emphasize that the distances are now only considered from within the respective event sets. The top right graphic is the distance map of A conditioned on the presence of an event from B, and that in the bottom left is the distance map of B conditioned on the presence of an event from A.

-The statistics derived from these distance maps are described in :numref:`Appendix C, Section %s `. For each combination of input field and categorical threshold requested in the configuration file, Grid-Stat applies that threshold to define events in the forecast and observation fields and computes distance maps for those binary fields. Statistics for all requested masking regions are derived from those distance maps. Note that the distance maps are computed only once over the full verification domain, not separately for each masking region. Events occurring outside of a masking region can affect the distance map values inside that masking region and, therefore, can also affect the distance maps statistics for that region.
+The statistics derived from these distance maps are described in :numref:`Appendix C, Section %s `. To make fair comparisons, any grid point containing bad data in either the forecast or observation field is set to bad data in both fields. For each combination of input field and categorical threshold requested in the configuration file, Grid-Stat applies that threshold to define events in the forecast and observation fields and computes distance maps for those binary fields. Statistics for all requested masking regions are derived from those distance maps. Note that the distance maps are computed only once over the full verification domain, not separately for each masking region. Events occurring outside of a masking region can affect the distance map values inside that masking region and, therefore, can also affect the distance map statistics for that region.
+
+.. _grid-stat_gbeta:
+
+:math:`\beta` and :math:`G_\beta`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+See :numref:`App_C-gbeta` for the :math:`G` and :math:`G_\beta` equations.
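+
+For intuition, here is a small worked example of those equations; the
+numbers below are hypothetical and chosen only for illustration. Assume a
+10x10 domain, so :math:`n = 100` and the default
+:math:`\beta = n^2/2 = 5000`. Suppose the thresholded fields give
+:math:`n_A = 12`, :math:`n_B = 10`, :math:`n_{AB} = 4`,
+:math:`MED(A,B) = 2`, and :math:`MED(B,A) = 3`. Then
+:math:`y_1 = 12 + 10 - 2(4) = 14`,
+:math:`y_2 = 2(10) + 3(12) = 56`, and :math:`y = 14 \cdot 56 = 784`, so
+
+.. math:: G_\beta = \max\{1 - \frac{784}{5000}, 0\} \approx 0.84
+
+A smaller, more stringent choice, say :math:`\beta = n = 100`, would
+instead give :math:`\max\{1 - 784/100, 0\} = 0` for the same fields.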
+
+:math:`G_\beta` provides a summary measure of forecast quality for each user-defined threshold chosen. It falls into a range from zero to one where one is a perfect forecast and zero is considered to be a very poor forecast as determined by the user through the value of :math:`\beta`. Values of :math:`G_\beta` closer to one represent better forecasts, and values closer to zero represent worse forecasts. Although a particular value cannot be universally compared against any forecast, when applied with the same choice of :math:`\beta` for the same variable and on the same domain, it is highly effective at ranking such forecasts.
+
+:math:`G_\beta` is sensitive to the choice of :math:`\beta`, which depends on the (i) specific domain, (ii) variable, and (iii) user’s needs. Smaller values make :math:`G_\beta` more stringent and larger values make it more lenient. :numref:`grid-stat_fig6` shows an example of applying :math:`G_\beta` over a range of :math:`\beta` values to a precipitation verification set where the binary fields are created by applying a threshold of :math:`2.1 mmh^{-1}`. Color choice and human bias can make it difficult to determine the quality of the forecast for a human observer looking at the raw images in the top row of the figure (:ref:`Ahijevych et al., 2009 `). The bottom left panel of the figure displays the differences in their binary fields, which highlights that the forecast captured the overall shapes of the observed rain areas but suffers from a spatial displacement error (perhaps really a timing error).
+
+Whether or not the forecast from :numref:`grid-stat_fig6` is “good” depends on the specific user. Is it sufficient that the forecast came as close as it did to the observation field? If the answer is yes for the user, then a higher choice of :math:`\beta`, such as :math:`N^2/2`, with :math:`N` equal to the number of points in the domain, will correctly inform this user that it is a “good” forecast as it will lead to a :math:`G_\beta` value near one. If the user requires the forecast to be much better aligned spatially with the observation field, then a lower choice, perhaps :math:`\beta = N`, will correctly inform that the forecast suffers from spatial displacement errors that are too large for this user to be pleased. If the goal is to rank a series of ensemble forecasts, for example, then a choice of :math:`\beta` that falls in the steep part of the curve shown in the lower right panel of the figure should be preferred, say somewhere between :math:`\beta = N` and :math:`\beta = N^2/2`. Such a choice will ensure that each member is differentiated by the measure.
+
+.. _grid-stat_fig6:
+
+.. figure:: figure/grid-stat_fig6.png
+
+   Top left is an example of an accumulated precipitation (mm/h) forecast with the corresponding observed field on the top right. Bottom left shows the difference in binary fields, where the binary fields are created by setting all values in the original fields that fall above :math:`2.1 mmh^{-1}` to one and the rest to zero. Bottom right shows the results for :math:`G_\beta` calculated on the binary fields using the threshold of :math:`2.1 mmh^{-1}` over a range of choices for :math:`\beta`.
+
+In some cases, a user may be interested in a much higher threshold than the :math:`2.1 mmh^{-1}` of the above example. :ref:`Gilleland, 2021 (Fig. 4) `, for example, shows this same forecast using a threshold of :math:`40 mmh^{-1}`. Only a small area in Mississippi has such extreme rain predicted at this valid time, yet none was observed.
Small spatial areas of extreme rain in the observed field, however, did occur in a location far away from Mississippi that was not predicted. Generally, for this type of verification, the Hausdorff metric is a good choice of measure. However, a small choice of :math:`\beta` will provide similar results to the Hausdorff distance (:ref:`Gilleland, 2021 `). The user should think about the average size of storm areas and multiply this value by the displacement distance they are comfortable with in order to get a good initial choice for :math:`\beta`, and may have to increase or decrease its value by trial-and-error using one or two example cases from their verification set.
+
+Since :math:`G_\beta` is so sensitive to the choice of :math:`\beta`, which is defined relative to the number of points in the verification domain, :math:`G_\beta` is only computed for the full verification domain. :math:`G_\beta` is reported as a bad data value for any masking region subsets of the full verification domain.

Practical information
_____________________

@@ -234,6 +257,10 @@ The configuration options listed above are common to multiple MET tools and are

___________________________

+.. _nbrhd:
+
+:ref:`nbrhd `
+

.. code-block:: none

  nbrhd = {
     field      = BOTH;
     vld_thresh = 1.0;
     shape      = SQUARE;
     width      = [ 1 ];
     cov_thresh = [ >=0.5 ];
  }

-
+
The **nbrhd** dictionary contains a list of values to be used in defining the neighborhood to be used when computing neighborhood verification statistics. The neighborhood **shape** is a **SQUARE** or **CIRCLE** centered on the current point, and the **width** value specifies the width of the square or diameter of the circle as an odd integer. The **field** entry is set to **BOTH, FCST, OBS**, or **NONE** to indicate the fields to which the fractional coverage derivation logic should be applied. This should always be set to **BOTH** unless you have already computed the fractional coverage field(s) with numbers between 0 and 1 outside of MET.

@@ -255,6 +282,10 @@ The **cov_thresh** entry contains a comma separated list of thresholds to be app

___________________

+.. _fourier:
+
+:ref:`fourier `
+

.. code-block:: none

  fourier = {
     wave_1d_beg = [ 0, 4, 10 ];
     wave_1d_end = [ 3, 9, 20 ];
  }

-The **fourier** entry is a dictionary which specifies the application of the Fourier decomposition method. It consists of two arrays of the same length which define the beginning and ending wave numbers to be included.
If the arrays have length zero, no Fourier decomposition is applied. For each array entry, the requested Fourier decomposition is applied to the forecast and observation fields. The beginning and ending wave numbers are indicated in the MET ASCII output files by the INTERP_MTHD column (e.g. WV1_0-3 for waves 0 to 3 or WV1_10 for only wave 10). This 1-dimensional Fourier decomposition is computed along the Y-dimension only (i.e. the columns of data). It is applied to the forecast and observation fields as well as the climatological mean field, if specified. It is only defined when each grid point contains valid data. If any input field contains missing data, no Fourier decomposition is computed. + +The available wave numbers start at 0 (the mean across each row of data) and end at (Nx+1)/2 (the finest level of detail), where Nx is the X-dimension of the verification grid: + +* The **wave_1d_beg** entry is an array of integers specifying the first wave number to be included. -The **wave_1d_beg** entry is an array of integers specifying the first wave number to be included. The **wave_1d_end** entry is an array of integers specifying the last wave number to be included. +* The **wave_1d_end** entry is an array of integers specifying the last wave number to be included. _____________________ +.. _gradient: + +:ref:`gradient ` + .. code-block:: none - grad = { + gradient = { dx = [ 1 ]; dy = [ 1 ]; } - The **gradient** entry is a dictionary which specifies the number and size of gradients to be computed. The **dx** and **dy** entries specify the size of the gradients in grid units in the X and Y dimensions, respectively. **dx** and **dy** are arrays of integers (positive or negative) which must have the same length, and the GRAD output line type will be computed separately for each entry. When computing gradients, the value at the (x, y) grid point is replaced by the value at the (x+dx, y+dy) grid point minus the value at (x, y). This configuration option may be set separately in each **obs.field** entry. ____________________ +.. _distance_map: + +:ref:`distance_map ` + .. code-block:: none distance_map = { @@ -289,9 +331,10 @@ ____________________ baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } -The **distance_map** entry is a dictionary containing options related to the distance map statistics in the **DMAP** output line type. The **baddeley_p** entry is an integer specifying the exponent used in the Lp-norm when computing the Baddeley Delta metric. The **baddeley_max_dist** entry is a floating point number specifying the maximum allowable distance for each distance map. Any distances larger than this number will be reset to this constant. A value of **NA** indicates that no maximum distance value should be used. The **fom_alpha** entry is a floating point number specifying the scaling constant to be used when computing Pratt's Figure of Merit. The **zhu_weight** specifies a value between 0 and 1 to define the importance of the RMSE of the binary fields (i.e. amount of overlap) versus the mean-error distance (MED). The default value of 0.5 gives equal weighting. This configuration option may be set separately in each **obs.field** entry. +The **distance_map** entry is a dictionary containing options related to the distance map statistics in the **DMAP** output line type. The **baddeley_p** entry is an integer specifying the exponent used in the Lp-norm when computing the Baddeley :math:`\Delta` metric. 
The **baddeley_max_dist** entry is a floating point number specifying the maximum allowable distance for each distance map. Any distances larger than this number will be reset to this constant. A value of **NA** indicates that no maximum distance value should be used. The **fom_alpha** entry is a floating point number specifying the scaling constant to be used when computing Pratt's Figure of Merit. The **zhu_weight** specifies a value between 0 and 1 to define the importance of the RMSE of the binary fields (i.e. amount of overlap) versus the mean-error distance (MED). The default value of 0.5 gives equal weighting. This configuration option may be set separately in each **obs.field** entry. The **beta_value** entry is defined as a function of n, where n is the total number of grid points in the full verification domain containing valid data in both the forecast and observation fields. The resulting beta_value is used to compute the :math:`G_\beta` statistic. The default function, :math:`N^2 / 2`, is recommended in :ref:`Gilleland, 2021 ` but can be modified as needed. _____________________ @@ -345,7 +388,7 @@ The **output_flag** array controls the type of output that the Grid-Stat tool ge 10. **VAL1L2** for Vector Anomaly L1L2 Partial Sums when climatological data is supplied -11. **VCNT** for Vector Contingency Table Statistics +11. **VCNT** for Vector Continuous Statistics 12. **PCT** for Contingency Table Counts for Probabilistic forecasts @@ -755,7 +798,7 @@ The format of the STAT and ASCII output of the Grid-Stat tool are the same as th - Frequency Bias * - 29 - BADDELEY - - Baddeley's Delta Metric + - Baddeley's :math:`\Delta` Metric * - 30 - HAUSDORFF - Hausdorff Distance @@ -804,6 +847,15 @@ The format of the STAT and ASCII output of the Grid-Stat tool are the same as th * - 45 - ZHU_MEAN - Mean of ZHU_FO and ZHU_OF + * - 46 + - G + - :math:`G` distance measure + * - 47 + - GBETA + - :math:`G_\beta` distance measure + * - 48 + - BETA_VALUE + - Beta value used to compute :math:`G_\beta` If requested using the **nc_pairs_flag** dictionary in the configuration file, a NetCDF file containing the matched pair and forecast minus observation difference fields for each combination of variable type/level and masking region applied will be generated. The contents of this file are determined by the contents of the nc_pairs_flag dictionary. The output NetCDF file is named similarly to the other output files: **grid_stat_PREFIX_ HHMMSSL_YYYYMMDD_HHMMSSV_pairs.nc**. Commonly available NetCDF utilities such as ncdump or ncview may be used to view the contents of the output file. diff --git a/met/docs/Users_Guide/point-stat.rst b/met/docs/Users_Guide/point-stat.rst index 162f86926c..97b93b4b40 100644 --- a/met/docs/Users_Guide/point-stat.rst +++ b/met/docs/Users_Guide/point-stat.rst @@ -121,7 +121,7 @@ The forecast value at P is chosen as the grid point inside the interpolation are HiRA framework ~~~~~~~~~~~~~~ -The Point-Stat tool has been enhanced to include the High Resolution Assessment (HiRA) verification logic (:ref:`Mittermaier, 2014 `). HiRA is analogous to neighborhood verification but for point observations. The HiRA logic interprets the forecast values surrounding each point observation as an ensemble forecast. These ensemble values are processed in two ways. First, the ensemble continuous statistics (ECNT) and the ranked probability score (RPS) line types are computed directly from the ensemble values. 
Second, for each categorical threshold specified, a fractional coverage value is computed as the ratio of the nearby forecast values that meet the threshold criteria. Point-Stat evaluates those fractional coverage values as if they were a probability forecast. When applying HiRA, users should enable the matched pair (MPR), probabilistic (PCT, PSTD, PJC, or PRC), continuous ensemble statistics (ECNT), or ranked probability score (RPS) line types in the **output_flag** dictionary. The number of probabilistic HiRA output lines is determined by the number of categorical forecast thresholds and HiRA neighborhood widths chosen.
+The Point-Stat tool has been enhanced to include the High Resolution Assessment (HiRA) verification logic (:ref:`Mittermaier, 2014 `). HiRA is analogous to neighborhood verification but for point observations. The HiRA logic interprets the forecast values surrounding each point observation as an ensemble forecast. These ensemble values are processed in two ways. First, the ensemble continuous statistics (ECNT), the observation rank statistics (ORANK), and the ranked probability score (RPS) line types are computed directly from the ensemble values. Second, for each categorical threshold specified, a fractional coverage value is computed as the ratio of the nearby forecast values that meet the threshold criteria. Point-Stat evaluates those fractional coverage values as if they were a probability forecast. When applying HiRA, users should enable the matched pair (MPR), probabilistic (PCT, PSTD, PJC, or PRC), continuous ensemble statistics (ECNT), observation rank statistics (ORANK), or ranked probability score (RPS) line types in the **output_flag** dictionary. The number of probabilistic HiRA output lines is determined by the number of categorical forecast thresholds and HiRA neighborhood widths chosen.

The HiRA framework provides a unique method for evaluating models in the neighborhood of point observations, allowing for some spatial and temporal uncertainty in the forecast and/or the observations. Additionally, the HiRA framework can be used to compare deterministic forecasts to ensemble forecasts. In MET, the neighborhood is a circle or square centered on the grid point closest to the observation location. An event is defined, then the proportion of points with events in the neighborhood is calculated. This proportion is treated as an ensemble probability, though it is likely to be uncalibrated.

@@ -425,6 +425,7 @@ ________________________

   pjc    = BOTH;
   prc    = BOTH;
   ecnt   = BOTH;   // Only for HiRA
+   orank  = BOTH;   // Only for HiRA
   rps    = BOTH;   // Only for HiRA
   eclv   = BOTH;
   mpr    = BOTH;

@@ -450,9 +451,9 @@ The **output_flag** array controls the type of output that the Point-Stat tool g

9. **VL1L2** for Vector L1L2 Partial Sums

-10. **VCNT** for Vector Continuous Statistics (Note that bootstrap confidence intervals are not currently calculated for this line type.)
+10. **VAL1L2** for Vector Anomaly L1L2 Partial Sums when climatological data is supplied

-11. **VAL1L2** for Vector Anomaly L1L2 Partial Sums when climatological data is supplied
+11. **VCNT** for Vector Continuous Statistics

12. **PCT** for Contingency Table counts for Probabilistic forecasts

@@ -464,13 +465,15 @@ The **output_flag** array controls the type of output that the Point-Stat tool g

16. **ECNT** for Ensemble Continuous Statistics is only computed for the HiRA methodology

-17. **RPS** for Ranked Probability Score is only computed for the HiRA methodology
+17.
@@ -425,6 +425,7 @@ ________________________ pjc = BOTH; prc = BOTH; ecnt = BOTH; // Only for HiRA + orank = BOTH; // Only for HiRA rps = BOTH; // Only for HiRA eclv = BOTH; mpr = BOTH; @@ -450,9 +451,9 @@ The **output_flag** array controls the type of output that the Point-Stat tool g 9. **VL1L2** for Vector L1L2 Partial Sums -10. **VCNT** for Vector Continuous Statistics (Note that bootstrap confidence intervals are not currently calculated for this line type.) +10. **VAL1L2** for Vector Anomaly L1L2 Partial Sums when climatological data is supplied -11. **VAL1L2** for Vector Anomaly L1L2 Partial Sums when climatological data is supplied +11. **VCNT** for Vector Continuous Statistics 12. **PCT** for Contingency Table counts for Probabilistic forecasts @@ -464,13 +465,15 @@ The **output_flag** array controls the type of output that the Point-Stat tool g 16. **ECNT** for Ensemble Continuous Statistics is only computed for the HiRA methodology -17. **RPS** for Ranked Probability Score is only computed for the HiRA methodology +17. **ORANK** for Ensemble Matched Pair Information when point observations are supplied for the HiRA methodology -18. **ECLV** for Economic Cost/Loss Relative Value +18. **RPS** for Ranked Probability Score is only computed for the HiRA methodology -19. **MPR** for Matched Pair data +19. **ECLV** for Economic Cost/Loss Relative Value -Note that the first two line types are easily derived from each other. Users are free to choose which measures are most desired. The output line types are described in more detail in :numref:`point_stat-output`. +20. **MPR** for Matched Pair data + +Note that the FHO and CTC line types are easily derived from each other. Users are free to choose which measures are most desired. The output line types are described in more detail in :numref:`point_stat-output`. Note that writing out matched pair data (MPR lines) for a large number of cases is generally not recommended. The MPR lines create very large output files and are only intended for use on a small set of cases. @@ -489,9 +492,9 @@ point_stat_PREFIX_HHMMSSL_YYYYMMDD_HHMMSSV.stat where PREFIX indicates the user- The output ASCII files are named similarly: -point_stat_PREFIX_HHMMSSL_YYYYMMDD_HHMMSSV_TYPE.txt where TYPE is one of mpr, fho, ctc, cts, cnt, mctc, mcts, pct, pstd, pjc, prc, ecnt, rps, eclv, sl1l2, sal1l2, vl1l2, vcnt or val1l2 to indicate the line type it contains. +point_stat_PREFIX_HHMMSSL_YYYYMMDD_HHMMSSV_TYPE.txt where TYPE is one of mpr, fho, ctc, cts, cnt, mctc, mcts, pct, pstd, pjc, prc, ecnt, orank, rps, eclv, sl1l2, sal1l2, vl1l2, vcnt or val1l2 to indicate the line type it contains. -The first set of header columns are common to all of the output files generated by the Point-Stat tool. Tables describing the contents of the header columns and the contents of the additional columns for each line type are listed in the following tables. The ECNT line type is described in :numref:`table_ES_header_info_es_out_ECNT`. The RPS line type is described in :numref:`table_ES_header_info_es_out_RPS`. +The first set of header columns are common to all of the output files generated by the Point-Stat tool. Tables describing the contents of the header columns and the contents of the additional columns for each line type are listed in the following tables. The ECNT line type is described in :numref:`table_ES_header_info_es_out_ECNT`. The ORANK line type is described in :numref:`table_ES_header_info_es_out_ORANK`. The RPS line type is described in :numref:`table_ES_header_info_es_out_RPS`. .. _table_PS_header_info_point-stat_out: diff --git a/met/docs/Users_Guide/reformat_grid.rst b/met/docs/Users_Guide/reformat_grid.rst index 5975de88d1..191ef4e1a4 100644 --- a/met/docs/Users_Guide/reformat_grid.rst +++ b/met/docs/Users_Guide/reformat_grid.rst @@ -201,6 +201,7 @@ Each NetCDF file generated by the Pcp-Combine tool contains the dimensions and v - lat, lon - Data value (i.e. accumulated precipitation) for each point in the grid. The name of the variable describes the name and level and any derivation logic that was applied. +.. _regrid-data-plane: Regrid-Data-Plane tool ______________________ diff --git a/met/docs/Users_Guide/refs.rst b/met/docs/Users_Guide/refs.rst index 118258f15c..647e3591bf 100644 --- a/met/docs/Users_Guide/refs.rst +++ b/met/docs/Users_Guide/refs.rst @@ -9,6 +9,13 @@ References | Atlantic Basin. *Weather & Forecasting*, 13, 1005-1015. | +.. _Ahijevych-2009: + +| Ahijevych, D., E. Gilleland, B.G. Brown, and E.E. Ebert, 2009: 
Application of +| spatial verification methods to idealized and NWP-gridded precipitation forecasts. +| *Weather Forecast.*, 24 (6), 1485-1497, doi: 10.1175/2009WAF2222298.1. +| + .. _Barker-1991: @@ -108,19 +115,43 @@ References .. _Epstein-1969: | Epstein, E. S., 1969: A scoring system for probability forecasts of ranked categories. -| *J. Appl. Meteor.*, 8, 985–987, 10.1175/1520-0450(1969)008<0985:ASSFPF>2.0.CO;2. +| *J. Appl. Meteor.*, 8, 985-987, 10.1175/1520-0450(1969)008<0985:ASSFPF>2.0.CO;2. | .. _Gilleland-2010: | Gilleland, E., 2010: Confidence intervals for forecast verification. | *NCAR Technical Note* NCAR/TN-479+STR, 71pp. +| + +.. _Gilleland-2017: + +| Gilleland, E., 2017: A new characterization in the spatial verification +| framework for false alarms, misses, and overall patterns. +| *Weather Forecast.*, 32 (1), 187-198, doi: 10.1175/WAF-D-16-0134.1. +| + + +.. _Gilleland_PartI-2020: + +| Gilleland, E., 2020: Bootstrap methods for statistical inference. +| Part I: Comparative forecast verification for continuous variables. +| *Journal of Atmospheric and Oceanic Technology*, 37 (11), 2117-2134, +| doi: 10.1175/JTECH-D-20-0069.1. +| + +.. _Gilleland_PartII-2020: + +| Gilleland, E., 2020: Bootstrap methods for statistical inference. +| Part II: Extreme-value analysis. *Journal of Atmospheric and Oceanic* +| *Technology*, 37 (11), 2135-2144, doi: 10.1175/JTECH-D-20-0070.1. +| -.. _Gilleland-2019: +.. _Gilleland-2021: -| Gilleland, E., 2019: Bootstrap methods for statistical inference. Part II: -| Extreme-value analysis. Submitted to the Journal of Atmospheric and -| Oceanic Technology on 2 December 2019. Re-submitted on 12 May 2020 +| Gilleland, E., 2021: Novel measures for summarizing high-resolution forecast +| performance. *Advances in Statistical Climatology, Meteorology and Oceanography*, +| 7 (1), 13-34, doi: 10.5194/ascmo-7-13-2021. | .. _Gneiting-2004: @@ -167,7 +198,7 @@ References .. _Mason-2008: | Mason, S. J., 2008: Understanding forecast verification statistics. -| *Meteor. Appl.*, 15, 31–40, doi: 10.1002/met.51. +| *Meteor. Appl.*, 15, 31-40, doi: 10.1002/met.51. | @@ -186,7 +217,7 @@ References .. _Murphy-1969: | Murphy, A.H., 1969: On the ranked probability score. *Journal of Applied* -| *Meteorology and Climatology*, 8 (6), 988 – 989, +| *Meteorology and Climatology*, 8 (6), 988-989, | doi: 10.1175/1520-0450(1969)008<0988:OTPS>2.0.CO;2. | @@ -248,7 +279,7 @@ References | Tödter, J. and B. Ahrens, 2012: Generalization of the Ignorance Score: | Continuous ranked version and its decomposition. *Mon. Wea. Rev.*, -| 140 (6), 2005 – 2017, doi: 10.1175/MWR-D-11-00266.1. +| 140 (6), 2005-2017, doi: 10.1175/MWR-D-11-00266.1. | .. _Weniger-2016: diff --git a/met/docs/conf.py b/met/docs/conf.py index 16bb9bcd99..2b646244a6 100644 --- a/met/docs/conf.py +++ b/met/docs/conf.py @@ -32,7 +32,8 @@ # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. -extensions = ['sphinx.ext.autodoc','sphinx.ext.intersphinx'] +# Adding 'sphinx_panels' to use drop-down menus in Appendix A. 
+extensions = ['sphinx.ext.autodoc','sphinx.ext.intersphinx','sphinx_panels',] # settings for ReadTheDocs PDF creation latex_engine = 'pdflatex' @@ -47,11 +48,11 @@ 'papersize': 'letterpaper', 'releasename':"{version}", 'fncychap': '\\usepackage{fncychap}', - 'fontpkg': '\\usepackage{amsmath,amsfonts,amssymb,amsthm}', + 'fontpkg': '\\usepackage{amsmath,amsfonts,amssymb,amsthm,float}', 'inputenc': '\\usepackage[utf8]{inputenc}', 'fontenc': '\\usepackage[LGR,T1]{fontenc}', - 'figure_align':'htbp', + 'figure_align':'H', 'pointsize': '11pt', 'preamble': r''' diff --git a/met/docs/requirements.txt b/met/docs/requirements.txt index f6b879164e..14529e46cf 100644 --- a/met/docs/requirements.txt +++ b/met/docs/requirements.txt @@ -7,4 +7,5 @@ sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==1.0.3 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 -sphinxcontrib-serializinghtml==1.1.4 \ No newline at end of file +sphinxcontrib-serializinghtml==1.1.4 +sphinx-panels==0.6.0 diff --git a/met/scripts/Rscripts/plot_tcmpr.R b/met/scripts/Rscripts/plot_tcmpr.R index f42ba17c94..cad729c484 100644 --- a/met/scripts/Rscripts/plot_tcmpr.R +++ b/met/scripts/Rscripts/plot_tcmpr.R @@ -377,7 +377,7 @@ if(length(file_list) == 0) { } # Expand any wildcards in the input file list -file_list = system(paste("ls -1", paste(file_list, collapse=" ")), +file_list = system(paste("ls -1d", paste(file_list, collapse=" ")), intern=TRUE); # Read the plotting configuration file, if specified diff --git a/met/scripts/config/GridStatConfig_APCP_12 b/met/scripts/config/GridStatConfig_APCP_12 index e63c3a3d23..76747e7908 100644 --- a/met/scripts/config/GridStatConfig_APCP_12 +++ b/met/scripts/config/GridStatConfig_APCP_12 @@ -158,6 +158,8 @@ distance_map = { baddeley_p = 2; baddeley_max_dist = NA; fom_alpha = 0.1; + zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/met/scripts/config/GridStatConfig_APCP_24 b/met/scripts/config/GridStatConfig_APCP_24 index 9ad9e8bb53..bca332cdde 100644 --- a/met/scripts/config/GridStatConfig_APCP_24 +++ b/met/scripts/config/GridStatConfig_APCP_24 @@ -168,6 +168,8 @@ distance_map = { baddeley_p = 2; baddeley_max_dist = NA; fom_alpha = 0.1; + zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/met/scripts/config/GridStatConfig_POP_12 b/met/scripts/config/GridStatConfig_POP_12 index 20c48f59a6..2df3bc3914 100644 --- a/met/scripts/config/GridStatConfig_POP_12 +++ b/met/scripts/config/GridStatConfig_POP_12 @@ -168,6 +168,8 @@ distance_map = { baddeley_p = 2; baddeley_max_dist = NA; fom_alpha = 0.1; + zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/met/scripts/config/GridStatConfig_all b/met/scripts/config/GridStatConfig_all index 901ec29565..13b7f601f3 100644 --- a/met/scripts/config/GridStatConfig_all +++ b/met/scripts/config/GridStatConfig_all @@ -118,7 +118,7 @@ climo_mean = { // Verification masking regions // mask = { - grid = [ "DTC165", "DTC166" ]; + grid = [ "FULL", "DTC165", "DTC166" ]; poly = [ "${TEST_OUT_DIR}/gen_vx_mask/CONUS_poly.nc", "MET_BASE/poly/LMV.poly" ]; } @@ -199,6 +199,8 @@ distance_map = { baddeley_p = 2; baddeley_max_dist = NA; fom_alpha = 0.1; + zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git 
a/met/scripts/config/PointStatConfig b/met/scripts/config/PointStatConfig index 6df8e67daf..513c956ab6 100644 --- a/met/scripts/config/PointStatConfig +++ b/met/scripts/config/PointStatConfig @@ -199,6 +199,7 @@ output_flag = { pjc = NONE; prc = NONE; ecnt = NONE; // Only for HiRA + orank = NONE; // Only for HiRA rps = NONE; // Only for HiRA eclv = BOTH; mpr = BOTH; diff --git a/met/src/basic/vx_config/calculator.cc b/met/src/basic/vx_config/calculator.cc index bd7813f48f..0ce9d6cb6b 100644 --- a/met/src/basic/vx_config/calculator.cc +++ b/met/src/basic/vx_config/calculator.cc @@ -671,7 +671,6 @@ while ( pos < (v.length()) ) { << celltype_to_string(cell.type) << "\"\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_config/config_constants.h b/met/src/basic/vx_config/config_constants.h index d3b8773e58..34fe74c978 100644 --- a/met/src/basic/vx_config/config_constants.h +++ b/met/src/basic/vx_config/config_constants.h @@ -692,6 +692,7 @@ static const char conf_key_baddeley_p[] = "baddeley_p"; static const char conf_key_baddeley_max_dist[] = "baddeley_max_dist"; static const char conf_key_fom_alpha[] = "fom_alpha"; static const char conf_key_zhu_weight[] = "zhu_weight"; +static const char conf_key_beta_value[] = "beta_value"; // // Wavelet-Stat specific parameter key names diff --git a/met/src/basic/vx_config/config_util.cc b/met/src/basic/vx_config/config_util.cc index c55fd819ae..5e8e8593b9 100644 --- a/met/src/basic/vx_config/config_util.cc +++ b/met/src/basic/vx_config/config_util.cc @@ -1493,6 +1493,9 @@ void ClimoCDFInfo::set_cdf_ta(int n_bin, bool ¢er) { exit(1); } + // Initialize + cdf_ta.clear(); + // Even number of bins cannot be centered if(n_bin%2 == 0 && center) { mlog << Warning << "\nClimoCDFInfo::set_cdf_ta() -> " @@ -2450,7 +2453,6 @@ ConcatString fieldtype_to_string(FieldType type) { mlog << Error << "\nfieldtype_to_string() -> " << "Unexpected FieldType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2520,7 +2522,6 @@ ConcatString setlogic_to_string(SetLogic type) { mlog << Error << "\nsetlogic_to_string() -> " << "Unexpected SetLogic value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2541,7 +2542,6 @@ ConcatString setlogic_to_abbr(SetLogic type) { mlog << Error << "\nsetlogic_to_abbr() -> " << "Unexpected SetLogic value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2562,7 +2562,6 @@ ConcatString setlogic_to_symbol(SetLogic type) { mlog << Error << "\nsetlogic_to_symbol() -> " << "Unexpected SetLogic value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2642,7 +2641,6 @@ ConcatString tracktype_to_string(TrackType type) { mlog << Error << "\ntracktype_to_string() -> " << "Unexpected TrackType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2698,7 +2696,6 @@ ConcatString interp12type_to_string(Interp12Type type) { mlog << Error << "\ninterp12type_to_string() -> " << "Unexpected Interp12Type value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2738,7 +2735,6 @@ ConcatString mergetype_to_string(MergeType type) { mlog << Error << "\nmergetype_to_string() -> " << "Unexpected MergeType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2765,7 +2761,6 @@ ConcatString obssummary_to_string(ObsSummary type, int perc_val) { mlog << Error << "\nobssummary_to_string() -> " << "Unexpected ObsSummary value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2805,7 +2800,6 @@ ConcatString matchtype_to_string(MatchType type) { mlog << Error << 
"\nmatchtype_to_string() -> " << "Unexpected MatchType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2873,7 +2867,6 @@ ConcatString disttype_to_string(DistType type) { mlog << Error << "\ndisttype_to_string() -> " << "Unexpected DistType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2915,7 +2908,6 @@ ConcatString griddecomptype_to_string(GridDecompType type) { mlog << Error << "\ngriddecomptype_to_string() -> " << "Unexpected GridDecompType value of " << type << ".\n\n"; exit(1); - break; } return(s); @@ -2939,7 +2931,6 @@ ConcatString wavelettype_to_string(WaveletType type) { mlog << Error << "\nwavlettype_to_string() -> " << "Unexpected WaveletType value of " << type << ".\n\n"; exit(1); - break; } return(s); diff --git a/met/src/basic/vx_config/dictionary.cc b/met/src/basic/vx_config/dictionary.cc index cbe3013cba..55ef59a0ac 100644 --- a/met/src/basic/vx_config/dictionary.cc +++ b/met/src/basic/vx_config/dictionary.cc @@ -219,7 +219,6 @@ switch ( entry.Type ) { << "\n\n DictionaryEntry::assign(const DictionaryEntry &) -> bad object type ... \"" << configobjecttype_to_string(entry.Type) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -300,7 +299,6 @@ switch ( Type ) { << "bad object type ... \"" << configobjecttype_to_string(Type) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -377,7 +375,6 @@ switch ( Type ) { mlog << Error << "DictionaryEntry::dump_config_format() -> bad threshold type ... " << Thresh->get_type() << "\n"; exit ( 1 ); - break; } // switch if ( Thresh->get_type() != thresh_na ) out << Thresh->get_value(); @@ -402,7 +399,6 @@ switch ( Type ) { << "bad object type ... \"" << configobjecttype_to_string(Type) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -1313,7 +1309,7 @@ for (j=0; j<(scope.n_elements() - 1); ++j) { // try current dictionary // - const char * stub = scope[scope.n_elements() - 1].c_str(); +const string stub = scope[scope.n_elements() - 1].c_str(); E = D->lookup_simple(stub); diff --git a/met/src/basic/vx_config/icode.cc b/met/src/basic/vx_config/icode.cc index a45736daf1..fda6db8f01 100644 --- a/met/src/basic/vx_config/icode.cc +++ b/met/src/basic/vx_config/icode.cc @@ -262,7 +262,6 @@ switch ( type ) { default: cerr << "\n\n IcodeCell::as_double() const -> bad type ... \"" << celltype_to_string(type) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -289,7 +288,6 @@ switch ( type ) { default: cerr << "\n\n IcodeCell::as_int() const -> bad type ... \"" << celltype_to_string(type) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -392,7 +390,6 @@ switch ( type ) { default: cerr << "\n\n IcodeCell::dump() -> unrecognized type ... 
\"" << celltype_to_string(type) << "\"\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_config/my_config_scanner.cc b/met/src/basic/vx_config/my_config_scanner.cc index 1acae0582b..288952689b 100644 --- a/met/src/basic/vx_config/my_config_scanner.cc +++ b/met/src/basic/vx_config/my_config_scanner.cc @@ -253,27 +253,27 @@ switch ( c ) { // single character tokens // - case '[': { do_single_char_token(lexeme[0]); is_lhs = false; dict_stack->push_array(); return ( token(lexeme[0]) ); } break; - case '{': { do_single_char_token(lexeme[0]); is_lhs = true; dict_stack->push(); return ( token(lexeme[0]) ); } break; + case '[': { do_single_char_token(lexeme[0]); is_lhs = false; dict_stack->push_array(); return ( token(lexeme[0]) ); } + case '{': { do_single_char_token(lexeme[0]); is_lhs = true; dict_stack->push(); return ( token(lexeme[0]) ); } - case ']': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; - case '}': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + case ']': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } + case '}': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } - case '(': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; - case ')': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + case '(': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } + case ')': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } - case '+': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + case '+': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } - case '-': { if ( ! need_number ) { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } } break; + case '-': { if ( ! need_number ) { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } } break; - case '*': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; - case '^': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + case '*': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } + case '^': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } - // case '=': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + // case '=': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } - case ';': { do_single_char_token(lexeme[0]); is_lhs = true; return ( token( ';' ) ); } break; - case ',': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } break; + case ';': { do_single_char_token(lexeme[0]); is_lhs = true; return ( token( ';' ) ); } + case ',': { do_single_char_token(lexeme[0]); return ( token(lexeme[0]) ); } case '\"': { do_quoted_string(); return ( token ( QUOTED_STRING ) ); } diff --git a/met/src/basic/vx_config/threshold.cc b/met/src/basic/vx_config/threshold.cc index bb1dfaa49a..cf3e9dd26a 100644 --- a/met/src/basic/vx_config/threshold.cc +++ b/met/src/basic/vx_config/threshold.cc @@ -857,7 +857,6 @@ switch ( op ) { mlog << Error << "\nSimple_Node::check(double, double, double) const -> " << "bad op ... 
" << op << "\n\n"; exit ( 1 ); - break; } // switch @@ -1202,7 +1201,6 @@ if ( Ptype == perc_thresh_climo_dist ) { << "threshold to a probability!\n\n"; exit ( 1 ); - break; } // switch } diff --git a/met/src/basic/vx_log/concat_string.cc b/met/src/basic/vx_log/concat_string.cc index 9fea23870b..d325f0ad8c 100644 --- a/met/src/basic/vx_log/concat_string.cc +++ b/met/src/basic/vx_log/concat_string.cc @@ -834,7 +834,6 @@ switch ( c ) { mlog << Error << "\noperator<<(ostream &, CSInlineCommand) -> " << "bad CSInlineCommand value\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_math/affine.cc b/met/src/basic/vx_math/affine.cc index b7094024e1..974eb32aa4 100644 --- a/met/src/basic/vx_math/affine.cc +++ b/met/src/basic/vx_math/affine.cc @@ -1144,7 +1144,6 @@ switch ( g ) { mlog << Error << "\nConformalAffine::set() -> " << "bad gravity ... " << viewgravity_to_string(g) << "\n\n"; exit ( 1 ); - break; } // switch @@ -1421,7 +1420,6 @@ switch ( g ) { << "\n\n viewgravity_to_uv() -> bad gravity ... " << viewgravity_to_string(g) << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_math/so3.cc b/met/src/basic/vx_math/so3.cc index 41e3dbc82b..3f3c93ab73 100644 --- a/met/src/basic/vx_math/so3.cc +++ b/met/src/basic/vx_math/so3.cc @@ -780,7 +780,6 @@ switch ( k ) { default: cerr << "\n\n SO3::operator()(int, int) const -> range check error (2)\n\n"; exit ( 1 ); - break; } diff --git a/met/src/basic/vx_util/GridTemplate.cc b/met/src/basic/vx_util/GridTemplate.cc index 85926d8ae5..f3b83b3252 100644 --- a/met/src/basic/vx_util/GridTemplate.cc +++ b/met/src/basic/vx_util/GridTemplate.cc @@ -182,7 +182,8 @@ GridPoint *GridTemplate::getFirst(const int &base_x, const int &base_y, GridPoint *GridTemplate::getNext(void) const { - while (_pointInGridIterator != _offsetList.end()) + GridPoint *next_point = (GridPoint *)NULL; + if (_pointInGridIterator != _offsetList.end()) { GridOffset *offset = *_pointInGridIterator; @@ -191,11 +192,11 @@ GridPoint *GridTemplate::getNext(void) const _pointInGridReturn.x = _pointInGridBase.x + offset->x_offset; _pointInGridReturn.y = _pointInGridBase.y + offset->y_offset; - return &_pointInGridReturn; + next_point = &_pointInGridReturn; } - return (GridPoint *)NULL; + return next_point; } /********************************************************************** diff --git a/met/src/basic/vx_util/data_line.cc b/met/src/basic/vx_util/data_line.cc index 0a155a8eb2..155e7a449e 100644 --- a/met/src/basic/vx_util/data_line.cc +++ b/met/src/basic/vx_util/data_line.cc @@ -522,6 +522,8 @@ bool DataLine::read_single_text_line(LineDataFile * ldf) { +if ( !ldf ) return ( false ); + #ifdef WITH_PYTHON PyLineDataFile * pldf = dynamic_cast(ldf); @@ -530,7 +532,7 @@ if ( pldf ) { const bool status = read_py_single_text_line(pldf); - return ( status ); + return ( status ); } diff --git a/met/src/basic/vx_util/data_plane.cc b/met/src/basic/vx_util/data_plane.cc index c855b2a1d5..c49c360918 100644 --- a/met/src/basic/vx_util/data_plane.cc +++ b/met/src/basic/vx_util/data_plane.cc @@ -297,6 +297,22 @@ bool DataPlane::is_all_bad_data() const { /////////////////////////////////////////////////////////////////////////////// +int DataPlane::n_good_data() const { + int j, n; + + // + // Count number of good data values + // + + for(j=0,n=0; j " << "unsupported interpolation method encountered: " << interpmthd_to_string(mthd) << "(" << mthd << ")\n\n"; - exit(1); - break; } delete gt; @@ -1187,7 +1185,6 @@ double compute_horz_interp(const DataPlane &dp, << 
"unsupported interpolation method encountered: " << interpmthd_to_string(mthd) << "(" << mthd << ")\n\n"; exit(1); - break; } delete gt; @@ -1331,7 +1328,6 @@ DataPlane valid_time_interp(const DataPlane &in1, const DataPlane &in2, << "unsupported interpolation method encountered: " << interpmthd_to_string(mthd) << "(" << mthd << ")\n\n"; exit(1); - break; } // Initialize diff --git a/met/src/basic/vx_util/ordinal.cc b/met/src/basic/vx_util/ordinal.cc index 415a9e2634..b99768acde 100644 --- a/met/src/basic/vx_util/ordinal.cc +++ b/met/src/basic/vx_util/ordinal.cc @@ -94,7 +94,6 @@ switch ( n ) { // mlog << Error << "\nordinal_suffix() -> totally confused!\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_util/read_fortran_binary.cc b/met/src/basic/vx_util/read_fortran_binary.cc index 2c37d4b16a..2d2edf38b0 100644 --- a/met/src/basic/vx_util/read_fortran_binary.cc +++ b/met/src/basic/vx_util/read_fortran_binary.cc @@ -71,7 +71,6 @@ switch ( rec_pad_length ) { mlog << Error << "\n\n read_fortran_binary() -> bad record pad size ... " << rec_pad_length << "\n\n"; exit ( 1 ); - break; } @@ -179,7 +178,6 @@ switch ( rec_pad_length ) { mlog << Error << "\n\n read_fortran_binary() -> bad record pad size ... " << rec_pad_length << "\n\n"; exit ( 1 ); - break; } @@ -316,7 +314,6 @@ switch ( rec_pad_length ) { default: mlog << Error << "\n\n peek_record_size() -> bad record pad length\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_util/roman_numeral.cc b/met/src/basic/vx_util/roman_numeral.cc index 61af948566..5ce42c0109 100644 --- a/met/src/basic/vx_util/roman_numeral.cc +++ b/met/src/basic/vx_util/roman_numeral.cc @@ -178,7 +178,6 @@ switch ( n/modulus ) { default: // shouldn't ever happen mlog << Error << "\nrn_add() -> can't handle integer " << n << "\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/basic/vx_util/stat_column_defs.h b/met/src/basic/vx_util/stat_column_defs.h index 02f5648ede..84653915fc 100644 --- a/met/src/basic/vx_util/stat_column_defs.h +++ b/met/src/basic/vx_util/stat_column_defs.h @@ -247,7 +247,8 @@ static const char * dmap_columns [] = { "FBIAS", "BADDELEY", "HAUSDORFF", "MED_FO", "MED_OF", "MED_MIN", "MED_MAX", "MED_MEAN", "FOM_FO", "FOM_OF", "FOM_MIN", "FOM_MAX", "FOM_MEAN", - "ZHU_FO", "ZHU_OF", "ZHU_MIN", "ZHU_MAX", "ZHU_MEAN" + "ZHU_FO", "ZHU_OF", "ZHU_MIN", "ZHU_MAX", "ZHU_MEAN", + "G", "GBETA", "BETA_VALUE" }; static const char * isc_columns [] = { diff --git a/met/src/basic/vx_util/two_to_one.cc b/met/src/basic/vx_util/two_to_one.cc index 3eb7142a18..584d53a17c 100644 --- a/met/src/basic/vx_util/two_to_one.cc +++ b/met/src/basic/vx_util/two_to_one.cc @@ -643,7 +643,6 @@ switch ( k ) { mlog << Error << "\nget_two_to_one() -> " << "bad input values\n\n"; exit ( 1 ); - break; } // switch @@ -696,7 +695,6 @@ switch ( k ) { mlog << Error << "\nget_one_to_two() -> " << "bad input values\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_afm/afm.cc b/met/src/libcode/vx_afm/afm.cc index 23df3616e3..a36bb6b8e3 100644 --- a/met/src/libcode/vx_afm/afm.cc +++ b/met/src/libcode/vx_afm/afm.cc @@ -1412,7 +1412,6 @@ while ( (*in) >> line ) { mlog << Error << "\nAfm::read() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -1614,7 +1613,6 @@ while ( (*in) >> line ) { mlog << Error << "\nAfm::do_startfontmetrics() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -1623,10 +1621,6 @@ while ( (*in) >> line ) { } // while - - - - return; } @@ -1681,21 +1675,14 @@ while ( 
(*in) >> line ) { mlog << Error << "\nAfm::do_startcharmetrics() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch - if ( tok.keyword == afm_keyword_EndCharMetrics ) break; - } // while - - - - return; } @@ -1746,7 +1733,6 @@ while ( (*in) >> line ) { mlog << Error << "\nAfm::do_startkerndata() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -1757,10 +1743,6 @@ while ( (*in) >> line ) { } // while - - - - return; } @@ -1821,23 +1803,14 @@ while ( (*in) >> line ) { mlog << Error << "\nAfm::do_startkernpairs() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch if ( tok.keyword == afm_keyword_EndKernPairs ) break; - } // while - - - - - - - return; } @@ -1894,7 +1867,6 @@ while ( (*in) >> line ) { mlog << Error << "\nAfm::do_startcomposites() -> bad keyword\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -1902,11 +1874,6 @@ while ( (*in) >> line ) { } // while - - - - - return; } @@ -1992,7 +1959,6 @@ while ( 1 ) { mlog << Error << "\nAfm::do_c(AfmLine &) -> bad token (2)\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -2096,7 +2062,6 @@ while ( 1 ) { mlog << Error << "\nAfm::do_c(AfmLine &) -> bad token (2)\n\n"; tok.dump(cerr); exit ( 1 ); - break; } // switch @@ -2104,12 +2069,6 @@ while ( 1 ) { } // while - - - - - - return; } diff --git a/met/src/libcode/vx_afm/afm_token.cc b/met/src/libcode/vx_afm/afm_token.cc index 45d59be0f3..9dbf1926b4 100644 --- a/met/src/libcode/vx_afm/afm_token.cc +++ b/met/src/libcode/vx_afm/afm_token.cc @@ -239,7 +239,6 @@ switch ( type ) { mlog << Error << "\nAfmToken::as_double() const -> bad token type!\n\n"; dump(cerr); exit ( 1 ); - break; }; diff --git a/met/src/libcode/vx_analysis_util/stat_job.cc b/met/src/libcode/vx_analysis_util/stat_job.cc index dcd03688dc..c6cfafddea 100644 --- a/met/src/libcode/vx_analysis_util/stat_job.cc +++ b/met/src/libcode/vx_analysis_util/stat_job.cc @@ -1974,7 +1974,6 @@ void STATAnalysisJob::setup_stat_file(int n_row, int n) { << "unexpected stat line type \"" << statlinetype_to_string(cur_lt) << "\"!\n\n"; exit(1); - break; } if(c > n_col) n_col = c; } @@ -2052,7 +2051,6 @@ void STATAnalysisJob::setup_stat_file(int n_row, int n) { << "unexpected stat line type \"" << statlinetype_to_string(out_lt) << "\"!\n\n"; exit(1); - break; } // diff --git a/met/src/libcode/vx_data2d_factory/data2d_factory.cc b/met/src/libcode/vx_data2d_factory/data2d_factory.cc index 75664a0d1a..8f2b4fcda8 100644 --- a/met/src/libcode/vx_data2d_factory/data2d_factory.cc +++ b/met/src/libcode/vx_data2d_factory/data2d_factory.cc @@ -108,7 +108,6 @@ MetPythonDataFile * p = 0; << "Support for Python has not been compiled!\n" << "To run Python scripts, recompile with the --enable-python option.\n\n"; exit(1); - break; #endif @@ -118,7 +117,6 @@ MetPythonDataFile * p = 0; << "Support for GrdFileType = \"" << grdfiletype_to_string(type) << "\" not yet implemented!\n\n"; exit(1); - break; case FileType_Bufr: @@ -126,7 +124,6 @@ MetPythonDataFile * p = 0; << "cannot use this factory to read files of type \"" << grdfiletype_to_string(type) << "\"\n\n"; exit(1); - break; case FileType_None: // For FileType_None, silently return a NULL pointer @@ -138,7 +135,6 @@ MetPythonDataFile * p = 0; << "unsupported gridded data file type \"" << grdfiletype_to_string(type) << "\"\n\n"; exit(1); - break; } // end switch diff --git a/met/src/libcode/vx_data2d_factory/var_info_factory.cc b/met/src/libcode/vx_data2d_factory/var_info_factory.cc index 354be867c0..5deb64d629 100644 --- 
a/met/src/libcode/vx_data2d_factory/var_info_factory.cc +++ b/met/src/libcode/vx_data2d_factory/var_info_factory.cc @@ -71,13 +71,13 @@ VarInfo * VarInfoFactory::new_var_info(GrdFileType type) case FileType_Gb2: #ifdef WITH_GRIB2 vi = new VarInfoGrib2; + break; #else mlog << Error << "\nVarInfoFactory::new_var_info() -> " << "Support for GRIB2 has not been compiled!\n" << "To read GRIB2 files, recompile with the --enable-grib2 option.\n\n"; exit(1); #endif - break; case FileType_NcMet: vi = new VarInfoNcMet; @@ -94,31 +94,29 @@ VarInfo * VarInfoFactory::new_var_info(GrdFileType type) p->set_file_type(type); vi = p; p = 0; + break; #else mlog << Error << "\nVarInfoFactory::new_var_info() -> " << "Support for Python has not been compiled!\n" << "To run Python scripts, recompile with the --enable-python option.\n\n"; exit(1); #endif - break; case FileType_NcCF: - vi = new VarInfoNcCF; - break; + vi = new VarInfoNcCF; + break; case FileType_HdfEos: mlog << Error << "\nVarInfoFactory::new_var_info() -> " << "Support for GrdFileType = " << grdfiletype_to_string(type) << " not yet implemented!\n\n"; exit(1); - break; default: mlog << Error << "\nVarInfoFactory::new_var_info() -> " << "unsupported gridded data file type \"" << grdfiletype_to_string(type) << "\"\n\n"; exit(1); - break; } // end switch mlog << Debug(4) diff --git a/met/src/libcode/vx_data2d_grib/data2d_grib_utils.cc b/met/src/libcode/vx_data2d_grib/data2d_grib_utils.cc index 8e6a694bdc..337526cc4c 100644 --- a/met/src/libcode/vx_data2d_grib/data2d_grib_utils.cc +++ b/met/src/libcode/vx_data2d_grib/data2d_grib_utils.cc @@ -664,7 +664,6 @@ void read_pds(const GribRecord &r, int &bms_flag, << "unexpected time unit of " << (int) pds->fcst_unit << ".\n\n"; exit(1); - break; } // @@ -735,7 +734,6 @@ void read_pds(const GribRecord &r, int &bms_flag, << "unexpected time range indicator of " << (int) pds->tri << ".\n\n"; exit(1); - break; } return; diff --git a/met/src/libcode/vx_data2d_grib/grib_strings.cc b/met/src/libcode/vx_data2d_grib/grib_strings.cc index 2380f10c2f..37a0152c7c 100644 --- a/met/src/libcode/vx_data2d_grib/grib_strings.cc +++ b/met/src/libcode/vx_data2d_grib/grib_strings.cc @@ -124,7 +124,6 @@ ConcatString get_grib_level_list_str(int k, int grib_level) << "unexpected value for k: " << k << "\n\n"; exit(1); - break; } // switch diff --git a/met/src/libcode/vx_data2d_grib/grib_utils.cc b/met/src/libcode/vx_data2d_grib/grib_utils.cc index 64c49234b4..3fbbdd23c5 100644 --- a/met/src/libcode/vx_data2d_grib/grib_utils.cc +++ b/met/src/libcode/vx_data2d_grib/grib_utils.cc @@ -565,7 +565,7 @@ double decode_lat_lon(const unsigned char * p, int n) int i, parity; double answer; -unsigned char c[3]; +unsigned char c[n]; // // For all of the lat/lon parameters, the leftmost bit indicates the diff --git a/met/src/libcode/vx_data2d_grib/var_info_grib.cc b/met/src/libcode/vx_data2d_grib/var_info_grib.cc index de4d685846..a75b699674 100644 --- a/met/src/libcode/vx_data2d_grib/var_info_grib.cc +++ b/met/src/libcode/vx_data2d_grib/var_info_grib.cc @@ -212,12 +212,7 @@ void VarInfoGrib::add_grib_code (Dictionary &dict) } int field_center = dict.lookup_int (conf_key_GRIB1_center, false); int field_subcenter = dict.lookup_int (conf_key_GRIB1_subcenter, false); - Grib1TableEntry tab; - - // if not specified, fill others with default values - if(field_ptv == bad_data_int) field_ptv = default_grib1_ptv; - if(field_center == bad_data_int) field_center = default_grib1_center; - if(field_subcenter == bad_data_int) field_subcenter = 
default_grib1_subcenter; + Grib1TableEntry tab; // if the name is specified, use it if( !field_name.empty() ){ @@ -229,9 +224,13 @@ void VarInfoGrib::add_grib_code (Dictionary &dict) if( !GribTable.lookup_grib1(field_name.c_str(), default_grib1_ptv, field_code, default_grib1_center, default_grib1_subcenter, tab, tab_match) ) { mlog << Error << "\nVarInfoGrib::add_grib_code() -> " - << "unrecognized GRIB1 field abbreviation '" - << field_name << "' for table version " << field_ptv - << "\n\n"; + << "unrecognized GRIB1 field abbreviation '" << field_name + << "' for table version (" << field_ptv + << "), center (" << field_center + << "), and subcenter (" << field_subcenter + << ") or default table version (" << default_grib1_ptv + << "), center (" << default_grib1_center + << "), and subcenter (" << default_grib1_subcenter << ").\n\n"; exit(1); } } diff --git a/met/src/libcode/vx_data2d_python/data2d_python.cc b/met/src/libcode/vx_data2d_python/data2d_python.cc index 97fab35c6a..3aca4f0119 100644 --- a/met/src/libcode/vx_data2d_python/data2d_python.cc +++ b/met/src/libcode/vx_data2d_python/data2d_python.cc @@ -208,7 +208,6 @@ switch ( Type ) { // assumes Type is already set << "MetPythonDataFile::open(const char * script_filename) -> bad file type: " << grdfiletype_to_string(Type) << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_grid/goes_grid.cc b/met/src/libcode/vx_grid/goes_grid.cc index eed67fbaaa..f7cc8e96fa 100644 --- a/met/src/libcode/vx_grid/goes_grid.cc +++ b/met/src/libcode/vx_grid/goes_grid.cc @@ -522,14 +522,14 @@ void GoesImagerData::compute_lat_lon() } } } + + mlog << Debug(4) << method_name << " lon: " << lon_min << " to " << lon_max + << ", lat: " << lat_min << " to " << lat_max << " at index " + << idx_lon_min << " & " << idx_lon_max << ", " + << idx_lat_min << " & " << idx_lat_max << " from " + << " x: " << x_values[0] << " to " << x_values[nx-1] + << " y: " << y_values[0] << " to " << y_values[ny-1] << "\n"; } - - mlog << Debug(4) << method_name << " lon: " << lon_min << " to " << lon_max - << ", lat: " << lat_min << " to " << lat_max << " at index " - << idx_lon_min << " & " << idx_lon_max << ", " - << idx_lat_min << " & " << idx_lat_max << " from " - << " x: " << x_values[0] << " to " << x_values[nx-1] - << " y: " << y_values[0] << " to " << y_values[ny-1] << "\n"; } //////////////////////////////////////////////////////////////////////// diff --git a/met/src/libcode/vx_grid/st_grid.cc b/met/src/libcode/vx_grid/st_grid.cc index 64ed1a062d..65246ae5fc 100644 --- a/met/src/libcode/vx_grid/st_grid.cc +++ b/met/src/libcode/vx_grid/st_grid.cc @@ -96,7 +96,6 @@ switch ( data.hemisphere ) { mlog << Error << "\nStereographicGrid::StereographicGrid(const StereographicData &) -> " << "bad hemisphere ...\"" << (data.hemisphere) << "\"\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_nc_obs/nc_point_obs_out.cc b/met/src/libcode/vx_nc_obs/nc_point_obs_out.cc index b758df0093..29e204672f 100644 --- a/met/src/libcode/vx_nc_obs/nc_point_obs_out.cc +++ b/met/src/libcode/vx_nc_obs/nc_point_obs_out.cc @@ -602,6 +602,7 @@ bool MetNcPointObsOut::write_to_netcdf(StringArray obs_names, StringArray obs_un << "variable names are not added because of empty names\n\n"; } else mlog << Debug(7) << method_name << "use_var_id is false\n"; + return true; } //////////////////////////////////////////////////////////////////////// diff --git a/met/src/libcode/vx_nc_util/nc_utils.cc b/met/src/libcode/vx_nc_util/nc_utils.cc index 14c473702c..50c3ff778e 
100644 --- a/met/src/libcode/vx_nc_util/nc_utils.cc +++ b/met/src/libcode/vx_nc_util/nc_utils.cc @@ -2855,7 +2855,6 @@ void copy_nc_att(NcFile *nc_from, NcVar *var_to, const ConcatString attr_name) { mlog << Error << "\ncopy_nc_att(NcFile, NcVar, attr_name) -> " << "Does not copy this type \"" << dataType << "\" global NetCDF attribute.\n\n"; exit(1); - break; } } if(from_att) delete from_att; @@ -2891,7 +2890,6 @@ void copy_nc_att(NcVar *var_from, NcVar *var_to, const ConcatString attr_name) { << "Does not copy this type \"" << dataType << "\" NetCDF attributes from \"" << GET_NC_TYPE_NAME_P(var_from) << "\".\n\n"; exit(1); - break; } } if(from_att) delete from_att; @@ -2931,7 +2929,6 @@ void copy_nc_atts(NcFile *nc_from, NcFile *nc_to, const bool all_attrs) { mlog << Error << "\ncopy_nc_atts(NcFile) -> " << "Does not copy this type \"" << dataType << "\" global NetCDF attributes.\n\n"; exit(1); - break; } } } @@ -2983,7 +2980,6 @@ void copy_nc_atts(NcVar *var_from, NcVar *var_to, const bool all_attrs) { << "Does not copy this type \"" << dataType << "\" NetCDF attributes from \"" << GET_NC_TYPE_NAME_P(var_from) << "\".\n\n"; exit(1); - break; } } } @@ -3083,7 +3079,6 @@ void copy_nc_var_data(NcVar *var_from, NcVar *var_to) { << "Does not copy this type \"" << dataType << "\" NetCDF data from \"" << GET_NC_TYPE_NAME_P(var_from) << "\".\n\n"; exit(1); - break; } } diff --git a/met/src/libcode/vx_pb_util/do_blocking.cc b/met/src/libcode/vx_pb_util/do_blocking.cc index 814da8beb5..1e3d48d6dc 100644 --- a/met/src/libcode/vx_pb_util/do_blocking.cc +++ b/met/src/libcode/vx_pb_util/do_blocking.cc @@ -131,7 +131,6 @@ switch ( padsize ) { mlog << Error << "\nwrite_pad() -> " << "bad pad size\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_pb_util/do_unblocking.cc b/met/src/libcode/vx_pb_util/do_unblocking.cc index f4d5af6cf2..6e8b6b250f 100644 --- a/met/src/libcode/vx_pb_util/do_unblocking.cc +++ b/met/src/libcode/vx_pb_util/do_unblocking.cc @@ -99,7 +99,6 @@ switch ( padsize ) { mlog << Error << "\nread_pad() -> " << "bad pad size\n\n"; exit ( 1 ); - break; } // switch @@ -132,7 +131,6 @@ switch ( padsize ) { mlog << Error << "\nread_pad() -> " << "bad pad size\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_pb_util/pblock.cc b/met/src/libcode/vx_pb_util/pblock.cc index 26196a8421..643dc06540 100644 --- a/met/src/libcode/vx_pb_util/pblock.cc +++ b/met/src/libcode/vx_pb_util/pblock.cc @@ -79,7 +79,6 @@ void pblock(const char *infile, const char *outfile, Action action) { << "unexpected action requested!\n\n"; exit(1); - break; } // diff --git a/met/src/libcode/vx_ps/vx_ps.cc b/met/src/libcode/vx_ps/vx_ps.cc index 68d311077b..668efdbaba 100644 --- a/met/src/libcode/vx_ps/vx_ps.cc +++ b/met/src/libcode/vx_ps/vx_ps.cc @@ -290,7 +290,6 @@ switch ( Orientation ) { mlog << Error << "\nvoid PSfile::do_prolog() -> bad document orientation ... " << documentorientation_to_string(Orientation) << "\n\n"; exit ( 1 ); - break; } @@ -303,7 +302,6 @@ switch ( Media ) { mlog << Error << "\nvoid PSfile::do_prolog() -> bad document media ... " << documentmedia_to_string(Media) << "\n\n"; exit ( 1 ); - break; } @@ -708,7 +706,6 @@ switch ( fill_flag ) { mlog << Error << "\nPSfile::write_single_node() -> " << "unrecognized fill flag: \"" << fill_flag << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -1154,7 +1151,6 @@ switch ( Media ) { mlog << Error << "\nPSfile::set_media(DocumentMedia) -> bad media size ... 
" << documentmedia_to_string(Media) << "\n\n"; exit ( 1 ); - break; } @@ -1469,7 +1465,6 @@ switch ( f ) { mlog << Error << "\n\n PSfile::set_family(FontFamily) -> bad font family ... " << fontfamily_to_string(f) << "\n\n"; exit ( 1 ); - break; } // switch @@ -1971,7 +1966,6 @@ switch ( f ) { mlog << Error << "\n\n ff_to_roman() -> bad font family ... " << fontfamily_to_string(f) << "\n\n"; exit ( 1 ); - break; } // switch @@ -2005,7 +1999,6 @@ switch ( f ) { mlog << Error << "\n\n ff_to_italic() -> bad font family ... " << fontfamily_to_string(f) << "\n\n"; exit ( 1 ); - break; } // switch @@ -2039,7 +2032,6 @@ switch ( f ) { mlog << Error << "\n\n ff_to_bold() -> bad font family ... " << fontfamily_to_string(f) << "\n\n"; exit ( 1 ); - break; } // switch @@ -2073,7 +2065,6 @@ switch ( f ) { mlog << Error << "\n\n ff_to_bolditalic() -> bad font family ... " << fontfamily_to_string(f) << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/libcode/vx_regrid/vx_regrid.cc b/met/src/libcode/vx_regrid/vx_regrid.cc index 1094fd6c86..0a7da68e3e 100644 --- a/met/src/libcode/vx_regrid/vx_regrid.cc +++ b/met/src/libcode/vx_regrid/vx_regrid.cc @@ -64,7 +64,6 @@ switch ( info.method ) { << "bad interpolation method ... " << interpmthd_to_string(info.method) << "\n\n"; exit(1); - break; } // switch info.method diff --git a/met/src/libcode/vx_render/render_pbm.cc b/met/src/libcode/vx_render/render_pbm.cc index af98e535fd..979c3bb12c 100644 --- a/met/src/libcode/vx_render/render_pbm.cc +++ b/met/src/libcode/vx_render/render_pbm.cc @@ -102,7 +102,6 @@ for (j=0; j<(info.n_filters()); ++j) { default: mlog << Error << "\nrender()(pbm) -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -161,7 +160,6 @@ for (j=(info.n_filters() - 1); j>= 0; --j) { default: mlog << Error << "\nrender() -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // swtich diff --git a/met/src/libcode/vx_render/render_pcm.cc b/met/src/libcode/vx_render/render_pcm.cc index 3246800645..eb8e82bb4b 100644 --- a/met/src/libcode/vx_render/render_pcm.cc +++ b/met/src/libcode/vx_render/render_pcm.cc @@ -90,7 +90,6 @@ for (j=0; j<(info.n_filters()); ++j) { default: mlog << Error << "\nrender_color_24() -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -150,7 +149,6 @@ for (j=(info.n_filters() - 1); j>= 0; --j) { default: mlog << Error << "\nrender() -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // swtich diff --git a/met/src/libcode/vx_render/render_pgm.cc b/met/src/libcode/vx_render/render_pgm.cc index 9a199c6d47..76f2d4a33e 100644 --- a/met/src/libcode/vx_render/render_pgm.cc +++ b/met/src/libcode/vx_render/render_pgm.cc @@ -86,7 +86,6 @@ for (j=0; j<(info.n_filters()); ++j) { default: mlog << Error << "\nrender()(pgm) -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -145,7 +144,6 @@ for (j=(info.n_filters() - 1); j>= 0; --j) { default: mlog << Error << "\nrender() -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // swtich diff --git a/met/src/libcode/vx_render/render_ppm.cc b/met/src/libcode/vx_render/render_ppm.cc index 2b1a63ff6c..20d9f2e1ad 100644 --- a/met/src/libcode/vx_render/render_ppm.cc +++ b/met/src/libcode/vx_render/render_ppm.cc @@ -77,7 +77,6 @@ for (j=0; j<(info.n_filters()); ++j) { default: mlog << Error << "\nrender()(ppm) -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // switch @@ -140,7 +139,6 @@ for 
(j=(info.n_filters() - 1); j>= 0; --j) { default: mlog << Error << "\nrender() -> bad filter: \"" << (info.filter(j)) << "\"\n\n"; exit ( 1 ); - break; } // swtich diff --git a/met/src/libcode/vx_render/rle_filter.cc b/met/src/libcode/vx_render/rle_filter.cc index 39607b27b7..951c974d3a 100644 --- a/met/src/libcode/vx_render/rle_filter.cc +++ b/met/src/libcode/vx_render/rle_filter.cc @@ -76,7 +76,6 @@ switch ( mode ) { default: mlog << Error << "\nRunLengthEncodeFilter::eat(unsigned char) -> bad mode\n\n"; exit ( 1 ); - break; } diff --git a/met/src/libcode/vx_shapedata/shapedata.cc b/met/src/libcode/vx_shapedata/shapedata.cc index 4a0016ea59..319aa69e81 100644 --- a/met/src/libcode/vx_shapedata/shapedata.cc +++ b/met/src/libcode/vx_shapedata/shapedata.cc @@ -1801,7 +1801,6 @@ void boundary_step(const ShapeData &sd, int &xn, int &yn, int &direction) { mlog << Error << "\nboundary_step() -> " << "bad direction: " << direction << "\n\n"; exit(1); - break; } // @@ -1836,7 +1835,6 @@ void boundary_step(const ShapeData &sd, int &xn, int &yn, int &direction) { << "bad step case: " << get_step_case(lr, ur, ul, ll) << "\n\n"; exit(1); - break; } return; diff --git a/met/src/libcode/vx_stat_out/stat_columns.cc b/met/src/libcode/vx_stat_out/stat_columns.cc index b38c2e2992..3f9f8a364d 100644 --- a/met/src/libcode/vx_stat_out/stat_columns.cc +++ b/met/src/libcode/vx_stat_out/stat_columns.cc @@ -3481,7 +3481,8 @@ void write_dmap_cols(const DMAPInfo &dmap_info, // FBIAS, BADDELEY, HAUSDORFF, // MED_FO, MED_OF, MED_MIN, MED_MAX, MED_MEAN, // FOM_FO, FOM_OF, FOM_MIN, FOM_MAX, FOM_MEAN, - // ZHU_FO, ZHU_OF, ZHU_MIN, ZHU_MAX, ZHU_MEAN + // ZHU_FO, ZHU_OF, ZHU_MIN, ZHU_MAX, ZHU_MEAN, + // G, GBETA, BETA_VALUE // at.set_entry(r, c+0, // TOTAL dmap_info.total); @@ -3546,6 +3547,15 @@ void write_dmap_cols(const DMAPInfo &dmap_info, at.set_entry(r, c+20, // ZHU_MEAN dmap_info.zhu_mean); + at.set_entry(r, c+21, // G + dmap_info.g); + + at.set_entry(r, c+22, // GBETA + dmap_info.gbeta); + + at.set_entry(r, c+23, // BETA_VALUE + dmap_info.get_beta_value()); + return; } diff --git a/met/src/libcode/vx_statistics/met_stats.cc b/met/src/libcode/vx_statistics/met_stats.cc index 8541ba23db..a1aa7870bf 100644 --- a/met/src/libcode/vx_statistics/met_stats.cc +++ b/met/src/libcode/vx_statistics/met_stats.cc @@ -2742,11 +2742,12 @@ void DMAPInfo::clear() { fthresh.clear(); othresh.clear(); - total = fy = oy = 0; + total = fy = oy = foy = 0; baddeley = hausdorff = bad_data_double; med_fo = med_of = med_min = med_max = med_mean = bad_data_double; fom_fo = fom_of = fom_min = fom_max = fom_mean = bad_data_double; zhu_fo = zhu_of = zhu_min = zhu_max = zhu_mean = bad_data_double; + g = gbeta = bad_data_double; return; } @@ -2754,10 +2755,12 @@ void DMAPInfo::clear() { //////////////////////////////////////////////////////////////////////// void DMAPInfo::reset_options() { - baddeley_p = 2; // Exponent for lp-norm - baddeley_max_dist = 5.0; // Maximum distance constant - fom_alpha = 0.1; // FOM Alpha - zhu_weight = 0.5; // Zhu Weight + baddeley_p = 2; // Exponent for lp-norm + baddeley_max_dist = bad_data_double; // Maximum distance constant + fom_alpha = 0.1; // FOM Alpha + zhu_weight = 0.5; // Zhu Weight + beta_value = bad_data_double; // G-Beta Value + n_full_points = bad_data_int; // Number of FULL domain points return; } @@ -2774,6 +2777,7 @@ void DMAPInfo::assign(const DMAPInfo &c) { total = c.total; fy = c.fy; oy = c.oy; + foy = c.foy; baddeley = c.baddeley; hausdorff = c.hausdorff; @@ -2796,10 +2800,14 @@ void 
DMAPInfo::assign(const DMAPInfo &c) { zhu_max = c.zhu_max; zhu_mean = c.zhu_mean; + g = c.g; + gbeta = c.gbeta; + baddeley_p = c.baddeley_p; baddeley_max_dist = c.baddeley_max_dist; fom_alpha = c.fom_alpha; zhu_weight = c.zhu_weight; + beta_value = c.beta_value; return; } @@ -2816,7 +2824,7 @@ double DMAPInfo::fbias() const { } //////////////////////////////////////////////////////////////////////// - + void DMAPInfo::set(const SingleThresh &fthr, const SingleThresh &othr, const NumArray &fdmap_na, const NumArray &odmap_na, const NumArray &fthr_na, const NumArray &othr_na) { @@ -2832,6 +2840,14 @@ void DMAPInfo::set(const SingleThresh &fthr, const SingleThresh &othr, exit(1); } + // Check that beta_value has been set + if(is_bad_data(beta_value) || beta_value <= 0.0) { + mlog << Error << "\nDMAPInfo::set() -> " + << "the beta_value (" << beta_value + << ") must be greater than 0!\n\n"; + exit(1); + } + // Initialize clear(); @@ -2852,28 +2868,29 @@ void DMAPInfo::set(const SingleThresh &fthr, const SingleThresh &othr, non_zero_count = 0; - mlog << Debug(4) << " DMAP.Options: baddeley_p=" << baddeley_p - << ", baddeley_max_dist=" << baddeley_max_dist - << ", fom_alpha=" << fom_alpha - << ", zhu_weight=" << zhu_weight << "\n"; - for (int i=0; i 0) { fy++; med_of_sum += odmap_na[i]; fom_of_sum += 1 / (1 + odmap_na[i] * odmap_na[i] * fom_alpha); } + + // Observation if (othr_na[i] > 0) { oy++; med_fo_sum += fdmap_na[i]; fom_fo_sum += 1 / (1 + fdmap_na[i] * fdmap_na[i] * fom_alpha); } + // Forecast and observation event + if (fthr_na[i] > 0 && othr_na[i] > 0) foy++; + sum_event_diff += (fthr_na[i] - othr_na[i]) * (fthr_na[i] - othr_na[i]); f_distance = (!is_bad_data(baddeley_max_dist) && @@ -2937,26 +2954,67 @@ void DMAPInfo::set(const SingleThresh &fthr, const SingleThresh &othr, zhu_mean = (zhu_fo + zhu_of) / 2; } - mlog << Debug(4) << " DMAP: nf=" << fy << ", no=" << oy << ", total=" << total - << "\tbaddeley=" << baddeley << ", hausdorff=" << hausdorff - << "\n\tmed_fo=" << med_fo << ", med_of=" << med_of - << ", med_min=" << med_min << ", med_max=" << med_max << ", med_mean=" << med_mean - << "\n\tfom_fo=" << fom_fo << ", fom_of=" << fom_of - << ", fom_min=" << fom_min << ", fom_max=" << fom_max << ", fom_mean=" << fom_mean - << "\n\tzhu_fo=" << zhu_fo << ", zhu_of=" << zhu_of - << ", zhu_min=" << zhu_min << ", zhu_max=" << zhu_max << ", zhu_mean=" << zhu_mean + // G and G-Beta + // Reference: + // Gilleland, E.: Novel measures for summarizing high-resolution forecast performance, + // Adv. Stat. Clim. Meteorol. Oceanogr., 7, 13–34, + // https://doi.org/10.5194/ascmo-7-13-2021, 2021. + + // If not set by the user, default maximum distance to the number of pairs + double max_dist = (is_bad_data(baddeley_max_dist) ? + (double) total : baddeley_max_dist); + + double g_med_fo = (oy == 0 ? max_dist : med_fo); + double g_med_of = (fy == 0 ? max_dist : med_of); + int g_y1 = fy + oy - 2 * foy; + double g_y2 = g_med_fo * oy + g_med_of * fy; + double g_y = g_y1 * g_y2; + g = pow(g_y, 1.0 / 3.0); + + // Only compute GBETA over the full verification domain. + // Report bad data for masking regions. 
+ if(total == n_full_points) { + gbeta = max(1.0 - g_y / beta_value, 0.0); + } + else { + gbeta = beta_value = bad_data_double; + } + + // Dump debug distance map info + mlog << Debug(4) << " DMAP.Options: baddeley_p=" << baddeley_p + << ", baddeley_max_dist=" << baddeley_max_dist + << ", fom_alpha=" << fom_alpha + << ", zhu_weight=" << zhu_weight + << ", beta_value=" << beta_value + << ", n_full_points=" << n_full_points << "\n"; + + mlog << Debug(4) << " DMAP: nf=" << fy << ", no=" << oy << ", nfo=" << foy << ", total=" << total + << "\n\tbaddeley=" << baddeley << ", hausdorff=" << hausdorff + << "\n\tmed_fo=" << med_fo << ", med_of=" << med_of + << ", med_min=" << med_min << ", med_max=" << med_max << ", med_mean=" << med_mean + << "\n\tfom_fo=" << fom_fo << ", fom_of=" << fom_of + << ", fom_min=" << fom_min << ", fom_max=" << fom_max << ", fom_mean=" << fom_mean + << "\n\tzhu_fo=" << zhu_fo << ", zhu_of=" << zhu_of + << ", zhu_min=" << zhu_min << ", zhu_max=" << zhu_max << ", zhu_mean=" << zhu_mean + << "\n\ty1=" << g_y1 << ", y2=" << g_y2 << ", y=" << g_y + << "\n\tg=" << g << ", gbeta=" << gbeta + << "\n"; + return; } //////////////////////////////////////////////////////////////////////// void DMAPInfo::set_options(const int _baddeley_p, const double _baddeley_max_dist, - const double _fom_alpha, const double _zhu_weight) { + const double _fom_alpha, const double _zhu_weight, + const double _beta_value, const int _n_full_points) { baddeley_p = _baddeley_p; baddeley_max_dist = _baddeley_max_dist; fom_alpha = _fom_alpha; zhu_weight = _zhu_weight; + beta_value = _beta_value; + n_full_points = _n_full_points; } //////////////////////////////////////////////////////////////////////// diff --git a/met/src/libcode/vx_statistics/met_stats.h b/met/src/libcode/vx_statistics/met_stats.h index 9f29299333..06b6af3fcd 100644 --- a/met/src/libcode/vx_statistics/met_stats.h +++ b/met/src/libcode/vx_statistics/met_stats.h @@ -609,6 +609,8 @@ class DMAPInfo { double baddeley_max_dist; // Maximum distance constant double fom_alpha; // FOM Alpha double zhu_weight; // Zhu Weight + double beta_value; // G-Beta Value + int n_full_points; // Number of FULL domain points public: @@ -622,7 +624,7 @@ class DMAPInfo { SingleThresh othresh; // Counts - int total, fy, oy; + int total, fy, oy, foy; // Distance metrics double baddeley, hausdorff; @@ -636,6 +638,9 @@ class DMAPInfo { // Zhu Metric double zhu_fo, zhu_of, zhu_min, zhu_max, zhu_mean; + // G and G-Beta + double g, gbeta; + // Compute statistics double fbias() const; // fbias = fy / oy @@ -645,12 +650,20 @@ class DMAPInfo { const NumArray &fthr_na, const NumArray &othr_na); void set_options(const int _baddeley_p, const double _baddeley_max_dist, - const double _fom_alpha, const double _zhu_weight); + const double _fom_alpha, const double _zhu_weight, + const double _beta_value, const int _n_full_points); + + // Get functions + double get_beta_value() const; void clear(); void reset_options(); }; +//////////////////////////////////////////////////////////////////////// + +inline double DMAPInfo::get_beta_value() const { return(beta_value); } + //////////////////////////////////////////////////////////////////////// // // Utility functions for parsing data from configuration files diff --git a/met/src/libcode/vx_statistics/pair_data_point.cc b/met/src/libcode/vx_statistics/pair_data_point.cc index 361939a010..daa40d1e9d 100644 --- a/met/src/libcode/vx_statistics/pair_data_point.cc +++ b/met/src/libcode/vx_statistics/pair_data_point.cc @@ -1476,7 +1476,6 
@@ bool check_fo_thresh(double f, double o, double cmn, double csd, mlog << Error << "\ncheck_fo_thresh() -> " << "Unexpected SetLogic value of " << type << ".\n\n"; exit(1); - break; } return(status); diff --git a/met/src/libcode/vx_tc_util/track_point.cc b/met/src/libcode/vx_tc_util/track_point.cc index 17946957f0..c3ae422777 100644 --- a/met/src/libcode/vx_tc_util/track_point.cc +++ b/met/src/libcode/vx_tc_util/track_point.cc @@ -270,7 +270,6 @@ void QuadInfo::set_quad_vals(QuadrantType ref_quad, << "unexpected quadrant type encountered \"" << quadranttype_to_string(ref_quad) << "\".\n\n"; exit(1); - break; } return; diff --git a/met/src/tools/core/ensemble_stat/ensemble_stat_conf_info.cc b/met/src/tools/core/ensemble_stat/ensemble_stat_conf_info.cc index ff21bcd27d..9dcf7154d3 100644 --- a/met/src/tools/core/ensemble_stat/ensemble_stat_conf_info.cc +++ b/met/src/tools/core/ensemble_stat/ensemble_stat_conf_info.cc @@ -691,7 +691,7 @@ void EnsembleStatVxOpt::process_config(GrdFileType ftype, Dictionary &fdict, vx_pd.obs_info->set_dict(odict); // Set the GRIB code for point observations - if(!use_var_id) vx_pd.obs_info->add_grib_code(odict); + if(point_vx && !use_var_id) vx_pd.obs_info->add_grib_code(odict); // Dump the contents of the current VarInfo if(mlog.verbosity_level() >= 5) { @@ -1023,7 +1023,6 @@ int EnsembleStatVxOpt::n_txt_row(int i_txt_row) const { << "unexpected output type index value: " << i_txt_row << "\n\n"; exit(1); - break; } return(n); diff --git a/met/src/tools/core/grid_stat/grid_stat.cc b/met/src/tools/core/grid_stat/grid_stat.cc index 1d75923bf3..242a3de5e3 100644 --- a/met/src/tools/core/grid_stat/grid_stat.cc +++ b/met/src/tools/core/grid_stat/grid_stat.cc @@ -1210,6 +1210,13 @@ void process_scores() { DataPlane fcst_dp_dmap, obs_dp_dmap; pd.extend(grid.nx()*grid.ny()); + // Mask out missing data between the fields for a fair comparison + DataPlane fcst_dp_mm = fcst_dp; + DataPlane obs_dp_mm = obs_dp; + mask_bad_data(fcst_dp_mm, obs_dp_mm); + mask_bad_data(obs_dp_mm, fcst_dp_mm); + int n_good_data = obs_dp_mm.n_good_data(); + // Loop over the categorical thresholds for(k=0; kn_obs != pd_u_ptr->n_obs) { + if(pd_u_ptr->n_obs != pd_v_ptr->n_obs) { mlog << Error << "\ndo_vl1l2() -> " << "unequal number of UGRD and VGRD pairs (" << pd_u_ptr->n_obs << " != " << pd_v_ptr->n_obs diff --git a/met/src/tools/core/grid_stat/grid_stat_conf_info.cc b/met/src/tools/core/grid_stat/grid_stat_conf_info.cc index 0bae27039c..96a68fe653 100644 --- a/met/src/tools/core/grid_stat/grid_stat_conf_info.cc +++ b/met/src/tools/core/grid_stat/grid_stat_conf_info.cc @@ -529,6 +529,12 @@ void GridStatVxOpt::clear() { grad_dx.clear(); grad_dy.clear(); + baddeley_p = bad_data_int; + baddeley_max_dist = bad_data_double; + fom_alpha = bad_data_double; + zhu_weight = bad_data_double; + beta_value_fx.clear(); + hss_ec_value = bad_data_double; rank_corr_flag = false; @@ -811,6 +817,14 @@ void GridStatVxOpt::process_config( exit(1); } + beta_value_fx.set(d->lookup(conf_key_beta_value)); + if(!beta_value_fx.is_set()) { + mlog << Error << "\nGridStatVxOpt::process_config() -> " + << "The \"" << conf_key_beta_value + << "\" function is not set!\n\n"; + exit(1); + } + // Conf: hss_ec_value hss_ec_value = odict.lookup_double(conf_key_hss_ec_value); @@ -1108,7 +1122,6 @@ int GridStatVxOpt::n_txt_row(int i_txt_row) const { << "unexpected output type index value: " << i_txt_row << "\n\n"; exit(1); - break; } return(n); diff --git a/met/src/tools/core/grid_stat/grid_stat_conf_info.h 
diff --git a/met/src/tools/core/grid_stat/grid_stat_conf_info.h b/met/src/tools/core/grid_stat/grid_stat_conf_info.h index ff731985ac..73bc476124 100644 --- a/met/src/tools/core/grid_stat/grid_stat_conf_info.h +++ b/met/src/tools/core/grid_stat/grid_stat_conf_info.h @@ -186,7 +186,8 @@ class GridStatVxOpt { int baddeley_p; // Exponent for lp-norm double baddeley_max_dist; // Maximum distance constant double fom_alpha; // FOM Alpha - double zhu_weight; // Zhu Weight + double zhu_weight; // Zhu Weight + UserFunc_1Arg beta_value_fx; // G-Beta Value Function double hss_ec_value; // MCTS HSS expected correct value bool rank_corr_flag; // Flag for computing rank correlations diff --git a/met/src/tools/core/pcp_combine/pcp_combine.cc b/met/src/tools/core/pcp_combine/pcp_combine.cc index 6ff36272aa..56d24002b5 100644 --- a/met/src/tools/core/pcp_combine/pcp_combine.cc +++ b/met/src/tools/core/pcp_combine/pcp_combine.cc @@ -1320,6 +1320,12 @@ void write_nc_data(unixtime nc_init, unixtime nc_valid, int nc_accum, StringArray sa; NcVar nc_var; + if (!var_info) { + mlog << Error << "\nwrite_nc_data() -> " + << "var_info is null.\n\n"; + exit(1); + } + // // Write to the -name command line argument, if specified. // diff --git a/met/src/tools/core/point_stat/point_stat.cc b/met/src/tools/core/point_stat/point_stat.cc index 9f49049fdb..36c1a3baf6 100644 --- a/met/src/tools/core/point_stat/point_stat.cc +++ b/met/src/tools/core/point_stat/point_stat.cc @@ -95,6 +95,7 @@ // 045 03/28/21 Halley Gotway Add mpr_column and mpr_thresh // filtering options. // 046 05/28/21 Halley Gotway Add MCTS HSS_EC output. +// 047 08/23/21 Seth Linden Add ORANK line type for HiRA. // //////////////////////////////////////////////////////////////////////// @@ -327,7 +328,9 @@ void setup_first_pass(const DataPlane &dp, const Grid &data_grid) { //////////////////////////////////////////////////////////////////////// void setup_txt_files() { - int i, max_col, max_prob_col, max_mctc_col, n_prob, n_cat, n_eclv; + int i, j; + int max_col, max_prob_col, max_mctc_col, max_orank_col; + int n_prob, n_cat, n_eclv, n_ens; ConcatString base_name; // Create output file names for the stat file and optional text files @@ -340,23 +343,20 @@ void setup_txt_files() { ///////////////////////////////////////////////////////////////////// // Get the maximum number of data columns - n_prob = conf_info.get_max_n_fprob_thresh(); + n_prob = max(conf_info.get_max_n_fprob_thresh(), + conf_info.get_max_n_hira_prob()); n_cat = conf_info.get_max_n_cat_thresh() + 1; n_eclv = conf_info.get_max_n_eclv_points(); + n_ens = conf_info.get_max_n_hira_ens(); - // Check for HiRA output - for(i=0; i<conf_info.n_vx; i++) { ... - max_col = ( max_prob_col > max_stat_col ? max_prob_col : max_stat_col ); - max_col = ( max_mctc_col > max_col ? max_mctc_col : max_col ); + max_col = (max_prob_col > max_stat_col ? max_prob_col : max_stat_col); + max_col = (max_mctc_col > max_col ? max_mctc_col : max_col);
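
The n_ens value introduced above sizes the new ORANK output for HiRA: each point in a square HiRA neighborhood acts as one pseudo-ensemble member, so the widest configured neighborhood determines the number of ensemble columns. A small sketch of that sizing rule (names are illustrative; the corresponding MET logic is the get_n_hira_ens()/get_max_n_hira_ens() pair added later in this patch):

#include <algorithm>
#include <vector>

// One pseudo-ensemble member per point in a w x w HiRA neighborhood.
int hira_ens_size(int width) { return width*width; }

// The ORANK buffers are sized for the largest configured width.
int max_hira_ens_size(const std::vector<int> &widths) {
   int n = 0;
   for (int w : widths) n = std::max(n, hira_ens_size(w));
   return n;
}
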
max_orank_col : max_col); // Add the header columns max_col += n_header_columns + 1; @@ -429,6 +429,10 @@ void setup_txt_files() { max_col = get_n_eclv_columns(n_eclv) + n_header_columns + 1; break; + case(i_orank): + max_col = get_n_orank_columns(n_ens) + n_header_columns + 1; + break; + default: max_col = n_txt_columns[i] + n_header_columns + 1; break; @@ -465,6 +469,10 @@ void setup_txt_files() { write_eclv_header_row(1, n_eclv, txt_at[i], 0, 0); break; + case(i_orank): + write_orank_header_row(1, n_ens, txt_at[i], 0, 0); + break; + default: write_header_row(txt_columns[i], n_txt_columns[i], 1, txt_at[i], 0, 0); @@ -1749,6 +1757,22 @@ void do_hira_ens(int i_vx, const PairDataPoint *pd_ptr) { txt_at[i_ecnt], i_txt_row[i_ecnt]); } // end if ECNT + // Write out the ORANK line + if(conf_info.vx_opt[i_vx].output_flag[i_orank] != STATOutputType_None) { + + // Compute ensemble statistics + hira_pd.compute_pair_vals(rng_ptr); + + write_orank_row(shc, &hira_pd, + conf_info.vx_opt[i_vx].output_flag[i_orank], + stat_at, i_stat_row, + txt_at[i_orank], i_txt_row[i_orank]); + + // Reset the observation valid time + shc.set_obs_valid_beg(conf_info.vx_opt[i_vx].vx_pd.beg_ut); + shc.set_obs_valid_end(conf_info.vx_opt[i_vx].vx_pd.end_ut); + } // end if ORANK + // Write out the RPS line if(conf_info.vx_opt[i_vx].output_flag[i_rps] != STATOutputType_None) { diff --git a/met/src/tools/core/point_stat/point_stat.h b/met/src/tools/core/point_stat/point_stat.h index dc232a52de..9b2b823d2a 100644 --- a/met/src/tools/core/point_stat/point_stat.h +++ b/met/src/tools/core/point_stat/point_stat.h @@ -70,8 +70,8 @@ static const char **txt_columns[n_txt] = { sl1l2_columns, sal1l2_columns, vl1l2_columns, val1l2_columns, pct_columns, pstd_columns, pjc_columns, prc_columns, ecnt_columns, - rps_columns, eclv_columns, mpr_columns, - vcnt_columns + orank_columns, rps_columns, eclv_columns, + mpr_columns, vcnt_columns }; // Length of header columns @@ -81,8 +81,8 @@ static const int n_txt_columns[n_txt] = { n_sl1l2_columns, n_sal1l2_columns, n_vl1l2_columns, n_val1l2_columns, n_pct_columns, n_pstd_columns, n_pjc_columns, n_prc_columns, n_ecnt_columns, - n_rps_columns, n_eclv_columns, n_mpr_columns, - n_vcnt_columns + n_orank_columns, n_rps_columns, n_eclv_columns, + n_mpr_columns, n_vcnt_columns }; // Text file abbreviations @@ -92,8 +92,8 @@ static const char *txt_file_abbr[n_txt] = { "sl1l2", "sal1l2", "vl1l2", "val1l2", "pct", "pstd", "pjc", "prc", "ecnt", - "rps", "eclv", "mpr", - "vcnt" + "orank", "rps", "eclv", + "mpr", "vcnt" }; //////////////////////////////////////////////////////////////////////// diff --git a/met/src/tools/core/point_stat/point_stat_conf_info.cc b/met/src/tools/core/point_stat/point_stat_conf_info.cc index 02cb465771..dd0d13dee1 100644 --- a/met/src/tools/core/point_stat/point_stat_conf_info.cc +++ b/met/src/tools/core/point_stat/point_stat_conf_info.cc @@ -523,6 +523,26 @@ int PointStatConfInfo::get_max_n_eclv_points() const { //////////////////////////////////////////////////////////////////////// +int PointStatConfInfo::get_max_n_hira_ens() const { + int i, n; + + for(i=0,n=0; i ECLV lines = // Message Types * Masks * Interpolations * Thresholds * @@ -1241,7 +1275,6 @@ int PointStatVxOpt::n_txt_row(int i_txt_row) const { << "unexpected output type index value: " << i_txt_row << "\n\n"; exit(1); - break; } return(n); @@ -1283,3 +1316,16 @@ int PointStatVxOpt::get_n_oprob_thresh() const { } //////////////////////////////////////////////////////////////////////// + +int 
PointStatVxOpt::get_n_hira_ens() const { + int n = (hira_info.flag ? hira_info.width.max() : 0); + return(n*n); +} + +//////////////////////////////////////////////////////////////////////// + +int PointStatVxOpt::get_n_hira_prob() const { + return(hira_info.flag ? hira_info.cov_ta.n() : 0); +} + +//////////////////////////////////////////////////////////////////////// diff --git a/met/src/tools/core/point_stat/point_stat_conf_info.h b/met/src/tools/core/point_stat/point_stat_conf_info.h index f9d0d9fe0f..5bc964069d 100644 --- a/met/src/tools/core/point_stat/point_stat_conf_info.h +++ b/met/src/tools/core/point_stat/point_stat_conf_info.h @@ -46,12 +46,13 @@ static const int i_pjc = 12; static const int i_prc = 13; static const int i_ecnt = 14; -static const int i_rps = 15; -static const int i_eclv = 16; -static const int i_mpr = 17; -static const int i_vcnt = 18; +static const int i_orank = 15; +static const int i_rps = 16; +static const int i_eclv = 17; +static const int i_mpr = 18; +static const int i_vcnt = 19; -static const int n_txt = 19; +static const int n_txt = 20; // Text file type static const STATLineType txt_file_type[n_txt] = { @@ -74,10 +75,11 @@ static const STATLineType txt_file_type[n_txt] = { stat_prc, // 13 stat_ecnt, // 14 - stat_rps, // 14 - stat_eclv, // 15 - stat_mpr, // 16 - stat_vcnt, // 17 + stat_orank, // 15 + stat_rps, // 16 + stat_eclv, // 17 + stat_mpr, // 18 + stat_vcnt, // 19 }; @@ -178,19 +180,21 @@ class PointStatVxOpt { int get_n_oprob_thresh() const; int get_n_eclv_points() const; + int get_n_hira_ens() const; + int get_n_hira_prob() const; int get_n_cdf_bin() const; int get_n_ci_alpha() const; }; //////////////////////////////////////////////////////////////////////// -inline int PointStatVxOpt::get_n_msg_typ() const { return(msg_typ.n_elements()); } -inline int PointStatVxOpt::get_n_mask() const { return(mask_name.n_elements()); } -inline int PointStatVxOpt::get_n_interp() const { return(interp_info.n_interp); } +inline int PointStatVxOpt::get_n_msg_typ() const { return(msg_typ.n()); } +inline int PointStatVxOpt::get_n_mask() const { return(mask_name.n()); } +inline int PointStatVxOpt::get_n_interp() const { return(interp_info.n_interp); } -inline int PointStatVxOpt::get_n_eclv_points() const { return(eclv_points.n_elements()); } -inline int PointStatVxOpt::get_n_cdf_bin() const { return(cdf_info.n_bin); } -inline int PointStatVxOpt::get_n_ci_alpha() const { return(ci_alpha.n_elements()); } +inline int PointStatVxOpt::get_n_eclv_points() const { return(eclv_points.n()); } +inline int PointStatVxOpt::get_n_cdf_bin() const { return(cdf_info.n_bin); } +inline int PointStatVxOpt::get_n_ci_alpha() const { return(ci_alpha.n()); } //////////////////////////////////////////////////////////////////////// @@ -266,6 +270,8 @@ class PointStatConfInfo { int get_max_n_fprob_thresh() const; int get_max_n_oprob_thresh() const; int get_max_n_eclv_points() const; + int get_max_n_hira_ens() const; + int get_max_n_hira_prob() const; // Check for any verification of vectors bool get_vflag() const; diff --git a/met/src/tools/core/series_analysis/series_analysis.cc b/met/src/tools/core/series_analysis/series_analysis.cc index 8218363687..b89eeff0bb 100644 --- a/met/src/tools/core/series_analysis/series_analysis.cc +++ b/met/src/tools/core/series_analysis/series_analysis.cc @@ -479,7 +479,6 @@ void get_series_data(int i_series, << "unexpected SeriesType value: " << series_type << "\n\n"; exit(1); - break; } // Setup the verification grid diff --git 
a/met/src/tools/core/stat_analysis/aggr_stat_line.cc b/met/src/tools/core/stat_analysis/aggr_stat_line.cc index e6f621feb0..a26a3df46b 100644 --- a/met/src/tools/core/stat_analysis/aggr_stat_line.cc +++ b/met/src/tools/core/stat_analysis/aggr_stat_line.cc @@ -765,6 +765,9 @@ void aggr_summary_lines(LineDataFile &f, STATAnalysisJob &job, aggr.wgt[req_stat[i]] = empty_na; } m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // @@ -952,6 +955,9 @@ void aggr_ctc_lines(LineDataFile &f, STATAnalysisJob &job, aggr.cts_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment counts in the existing map entry @@ -1136,6 +1142,9 @@ void aggr_mctc_lines(LineDataFile &f, STATAnalysisJob &job, aggr.mcts_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment counts in the existing map entry @@ -1313,6 +1322,9 @@ void aggr_pct_lines(LineDataFile &f, STATAnalysisJob &job, aggr.pct_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment counts in the existing map entry @@ -1538,6 +1550,9 @@ void aggr_psum_lines(LineDataFile &f, STATAnalysisJob &job, aggr.nbrcnt_info = cur_nbrcnt; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment sums in the existing map entry @@ -1693,6 +1708,9 @@ void aggr_grad_lines(LineDataFile &f, STATAnalysisJob &job, aggr.grad_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment counts in the existing map entry @@ -1801,6 +1819,9 @@ void aggr_wind_lines(LineDataFile &f, STATAnalysisJob &job, aggr.vl1l2_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment sums in the existing map entry @@ -1926,6 +1947,9 @@ void aggr_mpr_wind_lines(LineDataFile &f, STATAnalysisJob &job, // aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Otherwise, add data to existing map entry @@ -2204,6 +2228,10 @@ void aggr_mpr_lines(LineDataFile &f, STATAnalysisJob &job, aggr.hdr.clear(); m[key] = aggr; + + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment sums in the existing map entry @@ -2304,6 +2332,9 @@ void aggr_isc_lines(LineDataFile &ldf, STATAnalysisJob &job, aggr.oen_na = aggr.baser_na = aggr.fbias_na = (NumArray *) 0; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // @@ -2549,6 +2580,9 @@ void aggr_ecnt_lines(LineDataFile &f, STATAnalysisJob &job, if(m.count(key) == 0) { aggr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // @@ -2684,6 +2718,9 @@ void aggr_rps_lines(LineDataFile &f, STATAnalysisJob &job, aggr.rps_info = cur; aggr.hdr.clear(); m[key] = aggr; + mlog << Debug(3) << "[Case " << m.size() + << "] Added new case for key \"" + << key << "\".\n"; } // // Increment counts in the existing map 
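
All of the aggr_stat_line.cc hunks in this file add the same Debug(3) message at the point where a STAT line maps to a case key that has not been seen before. The shared idiom, sketched with standard containers (Aggr and the logging stream are stand-ins for MET's aggregation structs and mlog):

#include <iostream>
#include <map>
#include <string>

struct Aggr { int count = 0; };   // stand-in for the per-case aggregation info

void add_case(std::map<std::string, Aggr> &m, const std::string &key) {
   // First time this case key is seen: create the entry and log it,
   // numbering the case by the new size of the map.
   if (m.count(key) == 0) {
      m[key] = Aggr();
      std::cout << "DEBUG 3: [Case " << m.size()
                << "] Added new case for key \"" << key << "\".\n";
   }
   m[key].count++;   // then increment counts in the existing entry
}
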
entry @@ -2773,6 +2810,9 @@ void aggr_rhist_lines(LineDataFile &f, STATAnalysisJob &job, aggr.clear(); for(i=0; i< ... mlog << Error << "\nWaveletStatConfInfo::process_config() -> " << "Unsupported wavelet type value of " << wvlt_type << ".\n\n"; exit(1); - break; } // Conf: wavelet.member @@ -389,7 +388,6 @@ void WaveletStatConfInfo::process_config(GrdFileType ftype, mlog << Error << "\nWaveletStatConfInfo::process_config() -> " << "Unsupported wavelet type value of " << wvlt_type << ".\n\n"; exit(1); - break; } // Initialize the requested wavelet @@ -569,7 +567,6 @@ void WaveletStatConfInfo::process_tiles(const Grid &grid) { << "Unsupported grid decomposition type of " << grid_decomp_flag << ".\n\n"; exit(1); - break; } // end switch // Compute n_scale based on tile_dim diff --git a/met/src/tools/dev_utils/insitu_nc_file.cc b/met/src/tools/dev_utils/insitu_nc_file.cc index 753f44204c..82e0d255ea 100644 --- a/met/src/tools/dev_utils/insitu_nc_file.cc +++ b/met/src/tools/dev_utils/insitu_nc_file.cc @@ -1,5 +1,3 @@ - - // *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=* // ** Copyright UCAR (c) 1992 - 2021 // ** University Corporation for Atmospheric Research (UCAR) @@ -140,7 +138,11 @@ bool InsituNcFile::open(const char * filename) if (!(IS_INVALID_NC_P(_ncFile))) { - close(); + if (_ncFile) // close() is called already + { + delete _ncFile; + _ncFile = (NcFile *)0; + } return false; } @@ -150,7 +152,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(num_recs_dim)) { mlog << Error << "\n" << method_name << " -> " - << "recNum dimension not found in file\n"; + << "recNum dimension not found in file\n"; return false; } @@ -166,7 +168,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(aircraft_id_len_dim)) { mlog << Error << "\n" << method_name << " -> " - << "aircraftIdLen dimension not found in file\n"; + << "aircraftIdLen dimension not found in file\n"; return false; } @@ -177,7 +179,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(aircraft_id_var)) { mlog << Error << "\n" << method_name << " -> " - << "aircraftId variable not found in file\n"; + << "aircraftId variable not found in file\n"; return false; } @@ -187,7 +189,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&aircraft_id_var, aircraft_id)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving aircraftId values from file\n"; + << "error retrieving aircraftId values from file\n"; if(aircraft_id) delete[] aircraft_id; return false; } @@ -197,7 +199,7 @@ bool InsituNcFile::open(const char * filename) for (int i = 0; i < _numRecords; ++i) _aircraftId[i] = &aircraft_id[i * aircraft_id_len]; - if(aircraft_id) delete[] aircraft_id; + if(aircraft_id) { delete[] aircraft_id; aircraft_id = 0; } // timeObs @@ -205,7 +207,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(time_obs_var)) { mlog << Error << "\n" << method_name << " -> " - << "timeObs variable not found in file\n"; + << "timeObs variable not found in file\n"; return false; } @@ -216,7 +218,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&time_obs_var, _timeObs)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving timeObs variable from file\n"; + << "error retrieving timeObs variable from file\n"; return false; } @@ -227,7 +229,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(latitude_var)) { mlog << Error << "\n" << method_name << " -> " - << "latitude variable not found in file\n"; + << "latitude variable not found in file\n"; return false; } @@ -237,7
+239,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&latitude_var, _latitude, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving latitude values from file\n"; + << "error retrieving latitude values from file\n"; return false; } @@ -248,7 +250,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(longitude_var)) { mlog << Error << "\n" << method_name << " -> " - << "longitude variable not found in file\n"; + << "longitude variable not found in file\n"; return false; } @@ -258,7 +260,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&longitude_var, _longitude, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving longitude values from file\n"; + << "error retrieving longitude values from file\n"; return false; } @@ -269,7 +271,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(altitude_var)) { mlog << Error << "\n" << method_name << " -> " - << "altitude variable not found in file\n"; + << "altitude variable not found in file\n"; return false; } @@ -279,7 +281,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&altitude_var, _altitude, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "retrieving altitude values from file\n"; + << "retrieving altitude values from file\n"; return false; } @@ -290,7 +292,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(qc_confidence_var)) { mlog << Error << "\n" << method_name << " -> " - << "QCconfidence variable not found in file\n"; + << "QCconfidence variable not found in file\n"; return false; } @@ -300,7 +302,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&qc_confidence_var, _QCconfidence, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving QCconfidence values from file\n"; + << "error retrieving QCconfidence values from file\n"; return false; } @@ -311,7 +313,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(med_edr_var)) { mlog << Error << "\n" << method_name << " -> " - << "medEDR variable not found in file\n"; + << "medEDR variable not found in file\n"; return false; } @@ -321,7 +323,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&med_edr_var, _medEDR, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving medEDR values from file\n"; + << "error retrieving medEDR values from file\n"; return false; } @@ -332,7 +334,7 @@ bool InsituNcFile::open(const char * filename) if (IS_INVALID_NC(max_edr_var)) { mlog << Error << "\n" << method_name << " -> " - << "maxEDR variable not found in file\n"; + << "maxEDR variable not found in file\n"; return false; } @@ -342,7 +344,7 @@ bool InsituNcFile::open(const char * filename) if (!get_nc_data(&max_edr_var, _maxEDR, _numRecords)) { mlog << Error << "\n" << method_name << " -> " - << "error retrieving maxEDR values from file\n"; + << "error retrieving maxEDR values from file\n"; return false; } @@ -355,9 +357,9 @@ bool InsituNcFile::open(const char * filename) bool InsituNcFile::getNextRecord(string &aircraftId, time_t &timeObs, - double &latitude, double &longitude, - double &altitude, double &QCconfidence, - double &medEDR, double &maxEDR) + double &latitude, double &longitude, + double &altitude, double &QCconfidence, + double &medEDR, double &maxEDR) { // If we don't have any more records, return diff --git a/met/src/tools/dev_utils/met_nc_file.cc b/met/src/tools/dev_utils/met_nc_file.cc 
index 664ee7fea4..b1e35a2ab6 100644 --- a/met/src/tools/dev_utils/met_nc_file.cc +++ b/met/src/tools/dev_utils/met_nc_file.cc @@ -184,7 +184,6 @@ bool MetNcFile::readFile(const int desired_grib_code, long lengths[2] = { 1, 1 }; lengths[0] = hdr_buf_size; - lengths[1] = strl_count; // // Get the corresponding header message type diff --git a/met/src/tools/other/gen_vx_mask/gen_vx_mask.cc b/met/src/tools/other/gen_vx_mask/gen_vx_mask.cc index 18eabfd993..3b20c7ad60 100644 --- a/met/src/tools/other/gen_vx_mask/gen_vx_mask.cc +++ b/met/src/tools/other/gen_vx_mask/gen_vx_mask.cc @@ -391,7 +391,6 @@ void process_mask_file(DataPlane &dp) { mlog << Error << "\nprocess_mask_file() -> " << "Unexpected MaskType value (" << mask_type << ")\n\n"; exit(1); - break; } // Clean up diff --git a/met/src/tools/other/gis_utils/gis_dump_shp.cc b/met/src/tools/other/gis_utils/gis_dump_shp.cc index a9ecb7c350..b41841931b 100644 --- a/met/src/tools/other/gis_utils/gis_dump_shp.cc +++ b/met/src/tools/other/gis_utils/gis_dump_shp.cc @@ -144,7 +144,6 @@ switch ( shape_type ) { << "\n\n " << program_name << ": shape file type \"" << shapetype_to_string(shape_type) << "\" is not supported\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/gsi_tools/gsi_record.cc b/met/src/tools/other/gsi_tools/gsi_record.cc index 23739a1ea9..88f2e6c543 100644 --- a/met/src/tools/other/gsi_tools/gsi_record.cc +++ b/met/src/tools/other/gsi_tools/gsi_record.cc @@ -133,9 +133,13 @@ gsi_clear(); if ( !(g.Buf) ) return; -extend(g.Nalloc); +if (g.Nalloc > 0) { -memcpy(Buf, g.Buf, Nalloc); + extend(g.Nalloc); + + memcpy(Buf, g.Buf, Nalloc); + +} Shuffle = g.Shuffle; diff --git a/met/src/tools/other/lidar2nc/hdf_utils.cc b/met/src/tools/other/lidar2nc/hdf_utils.cc index d7cd7d5740..165d500bc3 100644 --- a/met/src/tools/other/lidar2nc/hdf_utils.cc +++ b/met/src/tools/other/lidar2nc/hdf_utils.cc @@ -152,7 +152,6 @@ switch ( type ) { mlog << Error << "sizeof_hdf_type() -> unrecognized hdf data type\n\n"; exit ( 1 ); - break; } // switch
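
Many of the small edits throughout this patch delete a break statement that directly follows exit(1) in a switch default case. Since exit() never returns, the break is unreachable and some compilers warn about it. A minimal illustration of the resulting shape (hypothetical function, not MET source):

#include <cstdlib>
#include <iostream>

void check_type(int type) {
   switch (type) {
      case 0:
         std::cout << "supported\n";
         break;
      default:
         std::cerr << "unsupported type " << type << "\n";
         exit(1);   // exit() does not return, so no break is needed here
   }
}
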
" << hdf_type << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/madis2nc/madis2nc.cc b/met/src/tools/other/madis2nc/madis2nc.cc index ba85dbffd0..29c5479f40 100644 --- a/met/src/tools/other/madis2nc/madis2nc.cc +++ b/met/src/tools/other/madis2nc/madis2nc.cc @@ -335,7 +335,6 @@ void process_madis_file(const char *madis_file) { << "MADIS type (" << my_mtype << ") not currently supported.\n\n"; exit(1); - break; } // Close the input NetCDF file @@ -418,28 +417,38 @@ static bool get_filtered_nc_data(NcVar var, float *data, const long dim, const long cur, const char *var_name) { - bool status; + bool status = false; float in_fill_value; const char *method_name = "get_filtered_nc_data(float) "; if (IS_VALID_NC(var)) { - if(!(status = get_nc_data(&var, data, dim, cur))) return status; - - get_nc_att_value(&var, (string)in_fillValue_str, in_fill_value); - mlog << Debug(5) << " " << method_name << GET_NC_NAME(var) << " " - << in_fillValue_str << "=" << in_fill_value << "\n"; - for (int idx=0; idx fill_flag " << fill_flag << " is not supported\n\n"; exit ( 1 ); - break; } // switch @@ -1461,7 +1459,6 @@ switch ( k ) { default: mlog << Error << "\n\n CgraphBase::setlinecap(int) -> bad value ... " << k << "\n\n"; exit ( 1 ); - break; } // switch @@ -1495,7 +1492,6 @@ switch ( k ) { default: mlog << Error << "\n\n CgraphBase::setlinejoin(int) -> bad value ... " << k << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/mode_graphics/mode_nc_output_file.cc b/met/src/tools/other/mode_graphics/mode_nc_output_file.cc index 708ece18cf..641bf1b408 100644 --- a/met/src/tools/other/mode_graphics/mode_nc_output_file.cc +++ b/met/src/tools/other/mode_graphics/mode_nc_output_file.cc @@ -725,7 +725,6 @@ for (x=0; x bad field\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/mode_graphics/plot_mode_field.cc b/met/src/tools/other/mode_graphics/plot_mode_field.cc index 51516be279..23f12ba4bc 100644 --- a/met/src/tools/other/mode_graphics/plot_mode_field.cc +++ b/met/src/tools/other/mode_graphics/plot_mode_field.cc @@ -472,10 +472,9 @@ switch ( plot_field ) { n_obs = mode_in.n_obs_clus_objs(); break; - default: + default: mlog << Error << "\n\n " << program_name << ": do_plot() -> bad field selected\n\n"; exit ( 1 ); - break; } // switch @@ -1329,7 +1328,6 @@ switch ( plot_field ) { default: mlog << Error << "\n\n " << program_name << ": annotate() -> bad plot field\n\n"; exit ( 1 ); - break; } diff --git a/met/src/tools/other/mode_time_domain/fo_graph.cc b/met/src/tools/other/mode_time_domain/fo_graph.cc index 9c22800a4f..d2032d3f66 100644 --- a/met/src/tools/other/mode_time_domain/fo_graph.cc +++ b/met/src/tools/other/mode_time_domain/fo_graph.cc @@ -141,7 +141,7 @@ N_nodes = N*N; TheGraph = new FO_Node [N_nodes]; -memcpy(TheGraph, g.TheGraph, N*sizeof(FO_Node)); +memcpy(TheGraph, g.TheGraph, N_nodes*sizeof(FO_Node)); // // done diff --git a/met/src/tools/other/mode_time_domain/mtd_file_int.cc b/met/src/tools/other/mode_time_domain/mtd_file_int.cc index 1d885d85bf..0668e26396 100644 --- a/met/src/tools/other/mode_time_domain/mtd_file_int.cc +++ b/met/src/tools/other/mode_time_domain/mtd_file_int.cc @@ -275,11 +275,9 @@ Nx = _nx; Ny = _ny; Nt = _nt; -int * d = Data; - -for (j=0; j bad value ... " << r << "\n\n"; + mlog << Error << "\n MtdIntFile::set_radius(int) -> bad value ... 
" << r << "\n\n"; exit ( 1 ); @@ -324,7 +322,7 @@ void MtdIntFile::set_time_window(int beg, int end) if ( end < beg ) { - mlog << Error << "\n\n MtdIntFile::set_time_window(int) -> bad values ... " << beg << " and " << end << "\n\n"; + mlog << Error << "\n MtdIntFile::set_time_window(int) -> bad values ... " << beg << " and " << end << "\n\n"; exit ( 1 ); @@ -348,7 +346,7 @@ void MtdIntFile::set_threshold(double t) // if ( t < 0.0 ) { // -// mlog << Error << "\n\n MtdIntFile::set_threshold(double) -> bad value ... " << t << "\n\n"; +// mlog << Error << "\n MtdIntFile::set_threshold(double) -> bad value ... " << t << "\n\n"; // // exit ( 1 ); // @@ -444,7 +442,7 @@ var = get_nc_var(&f, data_field_name); //if ( !(var->set_cur(0, 0, 0)) ) { // -// mlog << Error << "\n\n MtdIntFile::read() -> trouble setting corner\n\n"; +// mlog << Error << "\n MtdIntFile::read() -> trouble setting corner\n\n"; // // exit ( 1 ); // @@ -454,7 +452,7 @@ var = get_nc_var(&f, data_field_name); // //if ( ! (var->get(Data, Nt, Ny, Nx)) ) { // -// mlog << Error << "\n\n MtdIntFile::read(const char *) -> trouble getting data\n\n"; +// mlog << Error << "\n MtdIntFile::read(const char *) -> trouble getting data\n\n"; // // exit ( 1 ); // @@ -466,7 +464,7 @@ long lengths[3] = {Nt, Ny, Nx}; //if ( ! get_nc_data(&var, Data, (long *){Nt, Ny, Nx}, (long *){0,0,0}) ) { if ( ! get_nc_data(&var, Data, lengths, offsets) ) { - mlog << Error << "\n\n MtdIntFile::read(const char *) -> trouble getting data\n\n"; + mlog << Error << "\n MtdIntFile::read(const char *) -> trouble getting data\n\n"; exit ( 1 ); @@ -474,7 +472,7 @@ if ( ! get_nc_data(&var, Data, lengths, offsets) ) { // const time_t t_stop = time(0); // for timing the data read operation -// mlog << Debug(5) << "\n\n MtdIntFile::read(): Time to read data = " << (t_stop - t_start) << " seconds\n\n" << flush; +// mlog << Debug(5) << "\n MtdIntFile::read(): Time to read data = " << (t_stop - t_start) << " seconds\n\n" << flush; // // done @@ -570,7 +568,7 @@ long lengths[3] = {Nt, Ny, Nx}; if ( ! put_nc_data(&data_var, Data, lengths, offsets) ) { - mlog << Error << "\n\n MtdIntFile::write(const char *) -> trouble getting data\n\n"; + mlog << Error << "\n MtdIntFile::write(const char *) -> trouble getting data\n\n"; exit ( 1 ); @@ -578,7 +576,7 @@ if ( ! put_nc_data(&data_var, Data, lengths, offsets) ) { //if ( !(data_var->set_cur(0, 0, 0)) ) { // -// mlog << Error << "\n\n MtdIntFile::write() -> trouble setting corner on data field\n\n"; +// mlog << Error << "\n MtdIntFile::write() -> trouble setting corner on data field\n\n"; // // exit ( 1 ); // @@ -588,7 +586,7 @@ if ( ! 
put_nc_data(&data_var, Data, lengths, offsets) ) { // //if ( !(data_var->put(Data, Nt, Ny, Nx)) ) { // -// mlog << Error << "\n\n MtdIntFile::write() -> trouble writing data field\n\n"; +// mlog << Error << "\n MtdIntFile::write() -> trouble writing data field\n\n"; // // exit ( 1 ); // @@ -606,7 +604,7 @@ if ( is_split ) { if ( !(put_nc_data(&volumes_var, ObjVolume, Nobjects, 0)) ) { - mlog << Error << "\n\n MtdIntFile::write() -> trouble writing object volumes\n\n"; + mlog << Error << "\n MtdIntFile::write() -> trouble writing object volumes\n\n"; exit ( 1 ); @@ -616,7 +614,7 @@ if ( is_split ) { // const time_t t_stop = time(0); // for timing the data write operation -// mlog << Debug(5) << "\n\n MtdIntFile::write(): Time to write data = " << (t_stop - t_start) << " seconds\n\n" << flush; +// mlog << Debug(5) << "\n MtdIntFile::write(): Time to write data = " << (t_stop - t_start) << " seconds\n\n" << flush; // // done @@ -638,7 +636,7 @@ NcFile f(_filename, NcFile::replace); if ( IS_INVALID_NC(f) ) { - mlog << Error << "\n\n MtdIntFile::write(const char *) -> unable to open netcdf output file \"" << _filename << "\"\n\n"; + mlog << Error << "\n MtdIntFile::write(const char *) -> unable to open netcdf output file \"" << _filename << "\"\n\n"; // exit ( 1 ); @@ -666,7 +664,7 @@ MtdIntFile MtdIntFile::const_t_slice(const int t) const if ( (t < 0) || (t >= Nt) ) { - mlog << Error << "\n\n MtdIntFile MtdIntFile::const_t_slice(int) const -> range check error\n\n"; + mlog << Error << "\n MtdIntFile MtdIntFile::const_t_slice(int) const -> range check error\n\n"; exit ( 1 ); @@ -735,7 +733,7 @@ MtdIntFile MtdIntFile::const_t_mask(const int t, const int obj_num) const // if ( (t < 0) || (t >= Nt) ) { - mlog << Error << "\n\n MtdIntFile MtdIntFile::const_t_mask(int) const -> range check error\n\n"; + mlog << Error << "\n MtdIntFile MtdIntFile::const_t_mask(int) const -> range check error\n\n"; exit ( 1 ); @@ -921,7 +919,7 @@ void MtdIntFile::zero_border(int n) if ( !Data ) { - mlog << Error << "\n\n MtdIntFile::zero_border(int) -> no data field!\n\n"; + mlog << Error << "\n MtdIntFile::zero_border(int) -> no data field!\n\n"; exit ( 1 ); @@ -929,7 +927,7 @@ if ( !Data ) { if ( 2*n >= min(Nx, Ny) ) { - mlog << Error << "\n\n MtdIntFile::zero_border(int) -> border size too large!\n\n"; + mlog << Error << "\n MtdIntFile::zero_border(int) -> border size too large!\n\n"; exit ( 1 ); @@ -982,7 +980,7 @@ void MtdIntFile::set_to_zeroes() if ( !Data ) { - mlog << Error << "\n\n MtdIntFile::set_to_zeroes() -> no data!\n\n"; + mlog << Error << "\n MtdIntFile::set_to_zeroes() -> no data!\n\n"; exit ( 1 ); @@ -1008,7 +1006,7 @@ MtdIntFile MtdIntFile::split_const_t(int & n_shapes) const if ( Nt != 1 ) { - mlog << Error << "\n\n split_const_t(int &) -> not const-time slice!\n\n"; + mlog << Error << "\n split_const_t(int &) -> not const-time slice!\n\n"; exit ( 1 ); @@ -1244,7 +1242,7 @@ int MtdIntFile::volume(int k) const if ( !ObjVolume ) { - mlog << Error << "\n\n MtdIntFile::volume(int) -> field not split!\n\n"; + mlog << Error << "\n MtdIntFile::volume(int) -> field not split!\n\n"; exit ( 1 ); @@ -1252,7 +1250,7 @@ if ( !ObjVolume ) { if ( (k < 0) || (k >= Nobjects) ) { - mlog << Error << "\n\n MtdIntFile::volume(int) -> range check error!\n\n"; + mlog << Error << "\n MtdIntFile::volume(int) -> range check error!\n\n"; exit ( 1 ); @@ -1273,7 +1271,7 @@ int MtdIntFile::total_volume() const if ( !ObjVolume ) { - mlog << Error << "\n\n MtdIntFile::total_volume() -> field not split!\n\n"; + mlog << Error << 
"\n MtdIntFile::total_volume() -> field not split!\n\n"; exit ( 1 ); @@ -1343,7 +1341,7 @@ int * d = Data; // if ( n_new == 0 ) { // -// mlog << Error << "\n\n MtdIntFile::sift_objects() -> no objects left!\n\n"; +// mlog << Error << "\n MtdIntFile::sift_objects() -> no objects left!\n\n"; // // exit ( 1 ); // @@ -1522,7 +1520,7 @@ for (x=0; x empty object!\n\n"; + mlog << Error << "\n MtdIntFile::calc_3d_centroid() const -> empty object!\n\n"; exit ( 1 ); @@ -1572,7 +1570,7 @@ for (x=0; x empty object!\n\n"; + // mlog << Error << "\n MtdIntFile::calc_2d_centroid_at_t() const -> empty object!\n\n"; // exit ( 1 ); @@ -1624,7 +1622,7 @@ MtdIntFile MtdIntFile::select(int n) const // 1-based if ( (n < 1) || (n > Nobjects) ) { - mlog << Error << "\n\n MtdIntFile::select(int) -> range check error on n ... " + mlog << Error << "\n MtdIntFile::select(int) -> range check error on n ... " << "NObjects = " << Nobjects << " ... " << "n = " << n << "\n\n"; @@ -1676,7 +1674,7 @@ MtdIntFile MtdIntFile::select_cluster(const IntArray & a) const // 1-based if ( (a.min() < 0) || (a.max() > Nobjects) ) { - mlog << Error << "\n\n MtdIntFile::select_cluster(const IntArray &) -> range check error\n\n"; + mlog << Error << "\n MtdIntFile::select_cluster(const IntArray &) -> range check error\n\n"; exit ( 1 ); @@ -1737,7 +1735,7 @@ int MtdIntFile::x_left(const int y) const if ( (y < 0) || (y >= Ny) ) { - mlog << Error << "\n\n MtdIntFile::x_left(int) -> range check error\n\n"; + mlog << Error << "\n MtdIntFile::x_left(int) -> range check error\n\n"; exit ( 1 ); @@ -1766,7 +1764,7 @@ int MtdIntFile::x_right(const int y) const if ( (y < 0) || (y >= Ny) ) { - mlog << Error << "\n\n MtdIntFile::x_right(int) -> range check error\n\n"; + mlog << Error << "\n MtdIntFile::x_right(int) -> range check error\n\n"; exit ( 1 ); @@ -1855,7 +1853,7 @@ Mtd_2D_Moments MtdIntFile::calc_2d_moments() const if ( Nt != 1 ) { - mlog << Error << "\n\n MtdIntFile::calc_2d_moments() const -> not a 2D object!\n\n"; + mlog << Error << "\n MtdIntFile::calc_2d_moments() const -> not a 2D object!\n\n"; exit ( 1 ); @@ -1997,7 +1995,7 @@ for (t=0; t<(mask.nt()); ++t) { if ( nc < 0 ) { - mlog << Error << "\n\n split(const MtdIntFile &, int &) -> can't find cell!\n\n"; + mlog << Error << "\n split(const MtdIntFile &, int &) -> can't find cell!\n\n"; exit ( 1 ); @@ -2031,7 +2029,7 @@ void adjust_obj_numbers(MtdIntFile & s, int delta) if ( s.nt() != 1 ) { - mlog << Error << "\n\n adjust_obj_numbers() -> not const-time slice!\n\n"; + mlog << Error << "\n adjust_obj_numbers() -> not const-time slice!\n\n"; exit ( 1 ); diff --git a/met/src/tools/other/modis_regrid/cloudsat_swath_file.cc b/met/src/tools/other/modis_regrid/cloudsat_swath_file.cc index 49d6fc5d4d..c3d60ab1ab 100644 --- a/met/src/tools/other/modis_regrid/cloudsat_swath_file.cc +++ b/met/src/tools/other/modis_regrid/cloudsat_swath_file.cc @@ -401,7 +401,6 @@ for (j=0; j bad number type ... " << numbertype_to_string(Numbertype) << "\n\n"; exit ( 1 ); - break; } // switch @@ -573,7 +572,6 @@ switch ( Numbertype ) { << "\n\n SatAttribute::set_value() -> bad numbertype ... 
" << numbertype_to_string(Numbertype) << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/modis_regrid/modis_file.cc b/met/src/tools/other/modis_regrid/modis_file.cc index c3825f8d85..33327eed82 100644 --- a/met/src/tools/other/modis_regrid/modis_file.cc +++ b/met/src/tools/other/modis_regrid/modis_file.cc @@ -369,9 +369,11 @@ status = get_double_data(sst, n0, n1, dt); if ( !status || (dt < 0.0) ) { - mlog << Error - << "\n\n ModisFile::open(const char *) -> bad scan start time (" - << dt << ") in file \"" << _filename << "\"\n\n"; + if ( status ) { + mlog << Error + << "\n\n ModisFile::open(const char *) -> bad scan start time (" + << dt << ") in file \"" << _filename << "\"\n\n"; + } close(); @@ -794,12 +796,10 @@ double ModisFile::lat(int n0, int n1) const { -double v; +double v = bad_data_double; float f[2]; -(void) get_float_data(Latitude, n0, n1, f[0]); - -v = f[0]; +if (get_float_data(Latitude, n0, n1, f[0])) v = f[0]; return ( v ); @@ -813,14 +813,16 @@ double ModisFile::lon(int n0, int n1) const { -double v; +double v = bad_data_double; float f[2]; -(void) get_float_data(Longitude, n0, n1, f[0]); +if (get_float_data(Longitude, n0, n1, f[0])) { + + v = f[0]; -v = f[0]; + v = -v; // west longitude positive -v = -v; // west longitude positive +} return ( v ); @@ -881,22 +883,22 @@ switch ( NumberType ) { case nt_int_8: status = get_int8_data (Field, n0, n1, c); - value = (double) c; + if (status) value = (double) c; break; case nt_int_16: status = get_int16_data (Field, n0, n1, s); - value = (double) s; + if (status) value = (double) s; break; case nt_float_32: status = get_float_data (Field, n0, n1, f); - value = (double) f; + if (status) value = (double) f; break; case nt_float_64: status = get_double_data (Field, n0, n1, d); - value = d; + if (status) value = d; break; @@ -907,7 +909,6 @@ switch ( NumberType ) { << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/other/wwmca_tool/af_file.cc b/met/src/tools/other/wwmca_tool/af_file.cc index 8e42f35045..1a60ecd0f8 100644 --- a/met/src/tools/other/wwmca_tool/af_file.cc +++ b/met/src/tools/other/wwmca_tool/af_file.cc @@ -163,7 +163,6 @@ switch ( Hemisphere ) { default: mlog << Error << "\nAFDataFile::assign(const AFDataFile &) -> bad hemisphere ... " << Hemisphere << "\n\n"; exit ( 1 ); - break; } diff --git a/met/src/tools/other/wwmca_tool/wwmca_ref.cc b/met/src/tools/other/wwmca_tool/wwmca_ref.cc index 828e517485..ab52792d8b 100644 --- a/met/src/tools/other/wwmca_tool/wwmca_ref.cc +++ b/met/src/tools/other/wwmca_tool/wwmca_ref.cc @@ -384,7 +384,6 @@ if ( Width > 1 ) { << "\n\n WwmcaRegridder::set_config(MetConfig & wc, const char * config_filename) -> " << "bad interpolation method ... " << interpmthd_to_string(Method) << "\n\n"; exit ( 1 ); - break; } // switch @@ -479,7 +478,6 @@ switch ( Hemi ) { mlog << Error << "\nWwmcaRegridder::get_interpolated_data(DataPlane &) const -> " << "bad hemisphere ... 
" << junk << "\n\n"; exit ( 1 ); - break; } // switch diff --git a/met/src/tools/tc_utils/tc_dland/tc_dland.cc b/met/src/tools/tc_utils/tc_dland/tc_dland.cc index e7c378549c..92195d7af1 100644 --- a/met/src/tools/tc_utils/tc_dland/tc_dland.cc +++ b/met/src/tools/tc_utils/tc_dland/tc_dland.cc @@ -290,7 +290,7 @@ void process_distances() { // Write the computed distances to the output file mlog << Debug(3) << "Writing distance to land variable.\n"; if(!put_nc_data_with_dims(&dland_var, &dland[0], grid.ny(), grid.nx())) { - if(dland) { delete dland; dland = (float *) 0; } + if(dland) { delete [] dland; dland = (float *) 0; } mlog << Error << "\nprocess_distances() -> " << "error with dland_var->put\n\n"; exit(1); diff --git a/met/src/tools/tc_utils/tc_gen/tc_gen.cc b/met/src/tools/tc_utils/tc_gen/tc_gen.cc index 21ecc6269f..bfb3960226 100644 --- a/met/src/tools/tc_utils/tc_gen/tc_gen.cc +++ b/met/src/tools/tc_utils/tc_gen/tc_gen.cc @@ -431,6 +431,12 @@ void do_genesis_ctc(const TCGenVxOpt &vx_opt, const GenesisInfo *fgi = gpd.fcst_gen(i); const GenesisInfo *bgi = gpd.best_gen(i); + if(!fgi && !bgi) { + mlog << Error << "\ndo_genesis_ctc() -> " + << "Both the forecast and the best track are null at index " << i << ".\n\n"; + exit(1); + } + // Initialize diff.clear(); diff --git a/met/src/tools/tc_utils/tc_stat/tc_stat_job.cc b/met/src/tools/tc_utils/tc_stat/tc_stat_job.cc index 59e4dd9616..ed0e22757c 100644 --- a/met/src/tools/tc_utils/tc_stat/tc_stat_job.cc +++ b/met/src/tools/tc_utils/tc_stat/tc_stat_job.cc @@ -84,7 +84,6 @@ TCStatJob *TCStatJobFactory::new_tc_stat_job_type(const char *type_str) { mlog << Error << "\nTCStatJobFactory::new_tc_stat_job_type() -> " << "unsupported job type \"" << type_str << "\"\n\n"; exit(1); - break; } // end switch return(job); @@ -3614,6 +3613,7 @@ StringArray TCStatJobProbRIRW::parse_job_command(const char *jobstring) { //////////////////////////////////////////////////////////////////////// void TCStatJobProbRIRW::close_dump_file() { + const char *method_name = "TCStatJobProbRIRW::do_job() -> "; // Close the current output dump file stream if(DumpOut) { @@ -3644,7 +3644,7 @@ void TCStatJobProbRIRW::close_dump_file() { // Open the dump file back up for reading if(!f.open(DumpFile.c_str())) { - mlog << Error << "\nTCStatJobProbRIRW::close_dump_file() -> " + mlog << Error << "\n" << method_name << "can't open the dump file \"" << DumpFile << "\" for reading!\n\n"; exit(1); @@ -3666,7 +3666,10 @@ void TCStatJobProbRIRW::close_dump_file() { TCStatJob::open_dump_file(); // Write the reformatted AsciiTable - *DumpOut << out_at; + + if(DumpOut) *DumpOut << out_at; + else mlog << Warning << "\n" << method_name + << "can't write the reformatted AsciiTable because DumpOut is null\n\n"; // Call parent to close the dump file TCStatJob::close_dump_file(); @@ -3958,6 +3961,48 @@ void TCStatJobProbRIRW::do_output(ostream &out) { //////////////////////////////////////////////////////////////////////// +TCLineCounts::TCLineCounts() { + // Read and keep counts + NRead = 0; + NKeep = 0; + + // Checking entire track + RejTrackWatchWarn = 0; + RejInitThresh = 0; + RejInitStr = 0; + + // Filtering on track attributes + RejRIRW = 0; + RejLandfall = 0; + + // Checking track point attributes + RejAModel = 0; + RejBModel = 0; + RejDesc = 0; + RejStormId = 0; + RejBasin = 0; + RejCyclone = 0; + RejStormName = 0; + RejInit = 0; + RejInitHour = 0; + RejLead = 0; + RejValid = 0; + RejValidHour = 0; + RejInitMask = 0; + RejValidMask = 0; + RejLineType = 0; + RejWaterOnly = 0; + 
RejColumnThresh = 0; + RejColumnStr = 0; + RejMatchPoints = 0; + RejEventEqual = 0; + RejOutInitMask = 0; + RejOutValidMask = 0; + RejLeadReq = 0; +} + +//////////////////////////////////////////////////////////////////////// + void setup_table(AsciiTable &at, int n_hdr_cols, int prec) { int i; diff --git a/met/src/tools/tc_utils/tc_stat/tc_stat_job.h b/met/src/tools/tc_utils/tc_stat/tc_stat_job.h index 111c6a3564..8eeb816241 100644 --- a/met/src/tools/tc_utils/tc_stat/tc_stat_job.h +++ b/met/src/tools/tc_utils/tc_stat/tc_stat_job.h @@ -158,6 +158,8 @@ struct TCLineCounts { int RejOutInitMask; int RejOutValidMask; int RejLeadReq; + + TCLineCounts(); }; //////////////////////////////////////////////////////////////////////// diff --git a/scripts/regression/test_nightly.sh b/scripts/regression/test_nightly.sh index f63653d155..992981f702 100755 --- a/scripts/regression/test_nightly.sh +++ b/scripts/regression/test_nightly.sh @@ -21,7 +21,7 @@ #======================================================================= # Constants -EMAIL_LIST="johnhg@ucar.edu hsoh@ucar.edu jpresto@ucar.edu" +EMAIL_LIST="johnhg@ucar.edu hsoh@ucar.edu jpresto@ucar.edu linden@ucar.edu" KEEP_DAYS=5 # Usage statement diff --git a/test/config/GridStatConfig_APCP_regrid b/test/config/GridStatConfig_APCP_regrid index dfcfbf73a7..af00db9f28 100644 --- a/test/config/GridStatConfig_APCP_regrid +++ b/test/config/GridStatConfig_APCP_regrid @@ -150,6 +150,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_GRIB_lvl_typ_val b/test/config/GridStatConfig_GRIB_lvl_typ_val index 1b3a539f57..8ace03905d 100644 --- a/test/config/GridStatConfig_GRIB_lvl_typ_val +++ b/test/config/GridStatConfig_GRIB_lvl_typ_val @@ -247,6 +247,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_GRIB_set_attr b/test/config/GridStatConfig_GRIB_set_attr index dae59b6282..917a0caf66 100644 --- a/test/config/GridStatConfig_GRIB_set_attr +++ b/test/config/GridStatConfig_GRIB_set_attr @@ -179,6 +179,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_GTG_latlon b/test/config/GridStatConfig_GTG_latlon index 0f18b04d2d..36356d9b48 100644 --- a/test/config/GridStatConfig_GTG_latlon +++ b/test/config/GridStatConfig_GTG_latlon @@ -158,6 +158,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_GTG_lc b/test/config/GridStatConfig_GTG_lc index 531a3bb9cf..c32de7a1b7 100644 --- a/test/config/GridStatConfig_GTG_lc +++ b/test/config/GridStatConfig_GTG_lc @@ -158,6 +158,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_apply_mask b/test/config/GridStatConfig_apply_mask index 1c62a9a929..9142054e30 100644 --- a/test/config/GridStatConfig_apply_mask +++ 
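
The TCLineCounts constructor added above removes reliance on uninitialized struct members: every read/keep and rejection counter now starts at zero before any filtering job runs. A compact sketch of the same guarantee using C++11 default member initializers (illustrative struct with a shortened field list, not MET's):

// Illustrative alternative: default member initializers zero each counter
// at its declaration, so a counter added later cannot be forgotten in a
// separate constructor body.
struct LineCounts {
   int NRead       = 0;   // lines read
   int NKeep       = 0;   // lines kept
   int RejAModel   = 0;   // rejected by model filter
   int RejLineType = 0;   // rejected by line type filter
};
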
b/test/config/GridStatConfig_apply_mask @@ -159,6 +159,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_climo_WMO b/test/config/GridStatConfig_climo_WMO index 39e64895ce..e23f74a2be 100644 --- a/test/config/GridStatConfig_climo_WMO +++ b/test/config/GridStatConfig_climo_WMO @@ -219,6 +219,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_climo_prob b/test/config/GridStatConfig_climo_prob index b5f4629bf1..a97291af88 100644 --- a/test/config/GridStatConfig_climo_prob +++ b/test/config/GridStatConfig_climo_prob @@ -229,6 +229,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_fourier b/test/config/GridStatConfig_fourier index 6b2ba521f3..ad8c121b23 100644 --- a/test/config/GridStatConfig_fourier +++ b/test/config/GridStatConfig_fourier @@ -185,6 +185,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_grid_weight b/test/config/GridStatConfig_grid_weight index af74b784c7..4cd850faf1 100644 --- a/test/config/GridStatConfig_grid_weight +++ b/test/config/GridStatConfig_grid_weight @@ -170,6 +170,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_interp_shape b/test/config/GridStatConfig_interp_shape index f6a69c5e35..66d1658886 100644 --- a/test/config/GridStatConfig_interp_shape +++ b/test/config/GridStatConfig_interp_shape @@ -152,6 +152,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_mpr_thresh b/test/config/GridStatConfig_mpr_thresh index 4e95022a48..268e24cbc6 100644 --- a/test/config/GridStatConfig_mpr_thresh +++ b/test/config/GridStatConfig_mpr_thresh @@ -217,6 +217,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_no_leap b/test/config/GridStatConfig_no_leap index 3b80dc0330..b33a52c222 100644 --- a/test/config/GridStatConfig_no_leap +++ b/test/config/GridStatConfig_no_leap @@ -159,6 +159,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_prob_as_scalar b/test/config/GridStatConfig_prob_as_scalar index f497641518..8a462ce48a 100644 --- a/test/config/GridStatConfig_prob_as_scalar +++ b/test/config/GridStatConfig_prob_as_scalar @@ -180,6 +180,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = 
n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_python b/test/config/GridStatConfig_python index 9bb128ac92..3d9059219b 100644 --- a/test/config/GridStatConfig_python +++ b/test/config/GridStatConfig_python @@ -156,6 +156,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_python_mixed b/test/config/GridStatConfig_python_mixed index 38a5f9fa90..9b868a2745 100644 --- a/test/config/GridStatConfig_python_mixed +++ b/test/config/GridStatConfig_python_mixed @@ -164,6 +164,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_rtma b/test/config/GridStatConfig_rtma index b49db75a73..178a1269b1 100644 --- a/test/config/GridStatConfig_rtma +++ b/test/config/GridStatConfig_rtma @@ -160,6 +160,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_rtma_perc_thresh b/test/config/GridStatConfig_rtma_perc_thresh index 55b42b4456..3a739645ad 100644 --- a/test/config/GridStatConfig_rtma_perc_thresh +++ b/test/config/GridStatConfig_rtma_perc_thresh @@ -163,6 +163,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_st4 b/test/config/GridStatConfig_st4 index 00dbf1d8a0..5f210db851 100644 --- a/test/config/GridStatConfig_st4 +++ b/test/config/GridStatConfig_st4 @@ -164,6 +164,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/GridStatConfig_st4_censor b/test/config/GridStatConfig_st4_censor index d0ae97d7d9..1942cfe107 100644 --- a/test/config/GridStatConfig_st4_censor +++ b/test/config/GridStatConfig_st4_censor @@ -173,6 +173,7 @@ distance_map = { baddeley_max_dist = NA; fom_alpha = 0.1; zhu_weight = 0.5; + beta_value(n) = n * n / 2.0; } //////////////////////////////////////////////////////////////////////////////// diff --git a/test/config/PointStatConfig_APCP b/test/config/PointStatConfig_APCP index 003f9f50a0..ea8b55a5b4 100644 --- a/test/config/PointStatConfig_APCP +++ b/test/config/PointStatConfig_APCP @@ -120,6 +120,7 @@ output_flag = { pjc = NONE; prc = NONE; ecnt = NONE; + orank = NONE; rps = NONE; eclv = BOTH; mpr = NONE; diff --git a/test/config/PointStatConfig_APCP_HIRA b/test/config/PointStatConfig_APCP_HIRA index eefbcfdda0..412d2d8fe7 100644 --- a/test/config/PointStatConfig_APCP_HIRA +++ b/test/config/PointStatConfig_APCP_HIRA @@ -123,6 +123,7 @@ output_flag = { pjc = STAT; prc = STAT; ecnt = STAT; + orank = STAT; rps = STAT; eclv = STAT; mpr = STAT; diff --git a/test/config/PointStatConfig_GTG_latlon b/test/config/PointStatConfig_GTG_latlon index 7e6e2c53d2..1ac24d927e 100644 --- a/test/config/PointStatConfig_GTG_latlon +++ b/test/config/PointStatConfig_GTG_latlon @@ -142,6 +142,7 @@ output_flag = { pjc = NONE; prc = NONE; 
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_GTG_lc b/test/config/PointStatConfig_GTG_lc
index 4b6df47f87..5efd47662c 100644
--- a/test/config/PointStatConfig_GTG_lc
+++ b/test/config/PointStatConfig_GTG_lc
@@ -150,6 +150,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_INTERP_OPTS b/test/config/PointStatConfig_INTERP_OPTS
index 4923dab75b..c255c5b565 100644
--- a/test/config/PointStatConfig_INTERP_OPTS
+++ b/test/config/PointStatConfig_INTERP_OPTS
@@ -133,6 +133,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = STAT;
diff --git a/test/config/PointStatConfig_LAND_TOPO_MASK b/test/config/PointStatConfig_LAND_TOPO_MASK
index 74d7e28be1..4d1ab530fc 100644
--- a/test/config/PointStatConfig_LAND_TOPO_MASK
+++ b/test/config/PointStatConfig_LAND_TOPO_MASK
@@ -173,6 +173,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_MASK_SID b/test/config/PointStatConfig_MASK_SID
index 95c30a9d16..e4174e9e9a 100644
--- a/test/config/PointStatConfig_MASK_SID
+++ b/test/config/PointStatConfig_MASK_SID
@@ -128,6 +128,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_PHYS b/test/config/PointStatConfig_PHYS
index 4a5640ddcb..7a767a54da 100644
--- a/test/config/PointStatConfig_PHYS
+++ b/test/config/PointStatConfig_PHYS
@@ -129,6 +129,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_PHYS_pint b/test/config/PointStatConfig_PHYS_pint
index 9102584bff..4149345038 100644
--- a/test/config/PointStatConfig_PHYS_pint
+++ b/test/config/PointStatConfig_PHYS_pint
@@ -124,6 +124,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_WINDS b/test/config/PointStatConfig_WINDS
index 4f17fff75a..2b3ff9988e 100644
--- a/test/config/PointStatConfig_WINDS
+++ b/test/config/PointStatConfig_WINDS
@@ -145,6 +145,7 @@ output_flag = {
    prc    = NONE;
    ecnt   = NONE;
    rps    = NONE;
+   orank  = NONE;
    eclv   = NONE;
    mpr    = NONE;
 }
diff --git a/test/config/PointStatConfig_aeronet b/test/config/PointStatConfig_aeronet
index b36dd7f39c..58579bdf68 100644
--- a/test/config/PointStatConfig_aeronet
+++ b/test/config/PointStatConfig_aeronet
@@ -193,6 +193,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = STAT;
diff --git a/test/config/PointStatConfig_airnow b/test/config/PointStatConfig_airnow
index cbf3c76c11..7420b10455 100644
--- a/test/config/PointStatConfig_airnow
+++ b/test/config/PointStatConfig_airnow
@@ -223,6 +223,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = STAT;
diff --git a/test/config/PointStatConfig_climo b/test/config/PointStatConfig_climo
index cd81aa6fbf..cb6eaea84b 100644
--- a/test/config/PointStatConfig_climo
+++ b/test/config/PointStatConfig_climo
@@ -262,6 +262,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_climo_WMO b/test/config/PointStatConfig_climo_WMO
index fa7499ab84..201e794890 100644
--- a/test/config/PointStatConfig_climo_WMO
+++ b/test/config/PointStatConfig_climo_WMO
@@ -210,6 +210,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = BOTH;
+   orank  = NONE;
    rps    = BOTH;
    eclv   = NONE;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_climo_prob b/test/config/PointStatConfig_climo_prob
index 18ac41ccdf..2447701f95 100644
--- a/test/config/PointStatConfig_climo_prob
+++ b/test/config/PointStatConfig_climo_prob
@@ -212,6 +212,7 @@ output_flag = {
    pjc    = BOTH;
    prc    = BOTH;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_dup b/test/config/PointStatConfig_dup
index 4dfe9f5c11..9298bb58ec 100644
--- a/test/config/PointStatConfig_dup
+++ b/test/config/PointStatConfig_dup
@@ -146,6 +146,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = STAT;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_mpr_thresh b/test/config/PointStatConfig_mpr_thresh
index daf7a11d2f..66bde341ef 100644
--- a/test/config/PointStatConfig_mpr_thresh
+++ b/test/config/PointStatConfig_mpr_thresh
@@ -204,6 +204,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = NONE;
diff --git a/test/config/PointStatConfig_obs_summary b/test/config/PointStatConfig_obs_summary
index 908f86e44c..b8129521ec 100644
--- a/test/config/PointStatConfig_obs_summary
+++ b/test/config/PointStatConfig_obs_summary
@@ -135,6 +135,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = STAT;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_obs_summary_all b/test/config/PointStatConfig_obs_summary_all
index 636ed3d001..080ba56dff 100644
--- a/test/config/PointStatConfig_obs_summary_all
+++ b/test/config/PointStatConfig_obs_summary_all
@@ -204,6 +204,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = STAT;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_prob b/test/config/PointStatConfig_prob
index b1fe365e2b..e2ed42b77f 100644
--- a/test/config/PointStatConfig_prob
+++ b/test/config/PointStatConfig_prob
@@ -131,6 +131,7 @@ output_flag = {
    pjc    = BOTH;
    prc    = BOTH;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = NONE;
diff --git a/test/config/PointStatConfig_python b/test/config/PointStatConfig_python
index 0c66c8c4f0..752fb7a928 100644
--- a/test/config/PointStatConfig_python
+++ b/test/config/PointStatConfig_python
@@ -201,6 +201,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = STAT;
diff --git a/test/config/PointStatConfig_qty b/test/config/PointStatConfig_qty
index 98ad597e46..6f7db1b826 100644
--- a/test/config/PointStatConfig_qty
+++ b/test/config/PointStatConfig_qty
@@ -153,6 +153,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = BOTH;
    mpr    = BOTH;
diff --git a/test/config/PointStatConfig_sid_inc_exc b/test/config/PointStatConfig_sid_inc_exc
index 62e6e9b43f..9dc3e7fdcf 100644
--- a/test/config/PointStatConfig_sid_inc_exc
+++ b/test/config/PointStatConfig_sid_inc_exc
@@ -136,6 +136,7 @@ output_flag = {
    pjc    = NONE;
    prc    = NONE;
    ecnt   = NONE;
+   orank  = NONE;
    rps    = NONE;
    eclv   = NONE;
    mpr    = BOTH;
diff --git a/test/config/ref_config/GridStatConfig_03h b/test/config/ref_config/GridStatConfig_03h
index 70f885dd08..dc40220c7b 100644
--- a/test/config/ref_config/GridStatConfig_03h
+++ b/test/config/ref_config/GridStatConfig_03h
@@ -158,6 +158,7 @@ distance_map = {
    baddeley_max_dist = NA;
    fom_alpha         = 0.1;
    zhu_weight        = 0.5;
+   beta_value(n)     = n * n / 2.0;
 }
 
 ////////////////////////////////////////////////////////////////////////////////
diff --git a/test/config/ref_config/GridStatConfig_24h b/test/config/ref_config/GridStatConfig_24h
index a66ffcf276..450b7117aa 100644
--- a/test/config/ref_config/GridStatConfig_24h
+++ b/test/config/ref_config/GridStatConfig_24h
@@ -158,6 +158,7 @@ distance_map = {
    baddeley_max_dist = NA;
    fom_alpha         = 0.1;
    zhu_weight        = 0.5;
+   beta_value(n)     = n * n / 2.0;
 }
 
 ////////////////////////////////////////////////////////////////////////////////
diff --git a/test/hdr/met_10_1.hdr b/test/hdr/met_10_1.hdr
index 245eac4203..ba80ba12b8 100644
--- a/test/hdr/met_10_1.hdr
+++ b/test/hdr/met_10_1.hdr
@@ -10,7 +10,7 @@ NBRCNT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_L
 NBRCTC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY_OY FY_ON FN_OY FN_ON
 NBRCTS : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL BASER BASER_NCL BASER_NCU BASER_BCL BASER_BCU FMEAN FMEAN_NCL FMEAN_NCU FMEAN_BCL FMEAN_BCU ACC ACC_NCL ACC_NCU ACC_BCL ACC_BCU FBIAS FBIAS_BCL FBIAS_BCU PODY PODY_NCL PODY_NCU PODY_BCL PODY_BCU PODN PODN_NCL PODN_NCU PODN_BCL PODN_BCU POFD POFD_NCL POFD_NCU POFD_BCL POFD_BCU FAR FAR_NCL FAR_NCU FAR_BCL FAR_BCU CSI CSI_NCL CSI_NCU CSI_BCL CSI_BCU GSS GSS_BCL GSS_BCU HK HK_NCL HK_NCU HK_BCL HK_BCU HSS HSS_BCL HSS_BCU ODDS ODDS_NCL ODDS_NCU ODDS_BCL ODDS_BCU LODDS LODDS_NCL LODDS_NCU LODDS_BCL LODDS_BCU ORSS ORSS_NCL ORSS_NCU ORSS_BCL ORSS_BCU EDS EDS_NCL EDS_NCU EDS_BCL EDS_BCU SEDS SEDS_NCL SEDS_NCU SEDS_BCL SEDS_BCU EDI EDI_NCL EDI_NCU EDI_BCL EDI_BCU SEDI SEDI_NCL SEDI_NCU SEDI_BCL SEDI_BCU BAGSS BAGSS_BCL BAGSS_BCU
 GRAD : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FGBAR OGBAR MGBAR EGBAR S1 S1_OG FGOG_RATIO DX DY
-DMAP : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY OY FBIAS BADDELEY HAUSDORFF MED_FO MED_OF MED_MIN MED_MAX MED_MEAN FOM_FO FOM_OF FOM_MIN FOM_MAX FOM_MEAN ZHU_FO ZHU_OF ZHU_MIN ZHU_MAX ZHU_MEAN
+DMAP : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FY OY FBIAS BADDELEY HAUSDORFF MED_FO MED_OF MED_MIN MED_MAX MED_MEAN FOM_FO FOM_OF FOM_MIN FOM_MAX FOM_MEAN ZHU_FO ZHU_OF ZHU_MIN ZHU_MAX ZHU_MEAN G GBETA BETA_VALUE
 ORANK : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL INDEX OBS_SID OBS_LAT OBS_LON OBS_LVL OBS_ELV OBS PIT RANK N_ENS_VLD N_ENS _VAR_
 PCT : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL N_THRESH _VAR_
 PJC : VERSION MODEL DESC FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_UNITS FCST_LEV OBS_VAR OBS_UNITS OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL N_THRESH _VAR_
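
Note on the beta_value(n) = n * n / 2.0 entries added to the distance_map blocks above: they set the default beta for the G_beta distance-map statistic, reported in the new G, GBETA, and BETA_VALUE columns of the DMAP line type, where n is the number of grid points in the verification domain. As a rough illustration only, the sketch below computes G_beta = max(1 - y/beta, 0) following Gilleland (2021) as summarized in the MET User's Guide; the function and variable names are ours, not MET code, and the exact y1/y2 construction should be checked against the documentation before any real use.

import numpy as np
from scipy.ndimage import distance_transform_edt

def g_beta(fcst, obs, beta=None):
    # fcst, obs: 2-D boolean arrays marking the forecast and observed
    # event areas on the same verification grid.
    fcst = np.asarray(fcst, dtype=bool)
    obs  = np.asarray(obs,  dtype=bool)

    n = fcst.size                    # total number of grid points
    if beta is None:
        beta = n * n / 2.0           # default from beta_value(n) = n * n / 2.0

    n_f  = int(fcst.sum())
    n_o  = int(obs.sum())
    n_fo = int((fcst & obs).sum())
    y1   = n_f + n_o - 2 * n_fo      # points in the symmetric difference

    # Mean error distances: average distance from each event point in one
    # field to the nearest event point in the other field.
    dist_to_fcst = distance_transform_edt(~fcst)
    dist_to_obs  = distance_transform_edt(~obs)
    med_fo = dist_to_obs[fcst].mean() if n_f else 0.0
    med_of = dist_to_fcst[obs].mean() if n_o else 0.0

    y = y1 * (med_fo * n_f + med_of * n_o)
    return max(1.0 - y / beta, 0.0)  # G_beta in [0, 1]; 1 is a perfect match

For example, g_beta(fcst_precip > 25.4, obs_precip > 25.4) would score how closely a forecast one-inch precipitation area matches the observed one, with small displacements penalized far less than under point-by-point thresholded statistics.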