Update develop-ref after #2803 and #2806 #2808

Merged
merged 91 commits on Feb 2, 2024
Changes from all commits
Commits
ac9f454
Store change of sonar.login to sonar.token.
Nov 20, 2023
c431c85
Store change of sonar.login to sonar.token.
Nov 20, 2023
16c058a
Hotfix to develop, updating aclocal.m4 and config.h.in to what is cre…
Nov 27, 2023
ba0f3b2
Adding -lnetcdf to configure_lib_args for NetCDF-CXX compilation
jprestop Nov 28, 2023
c7e3951
Added eckit and atlas loads and paths
jprestop Nov 29, 2023
60bfc0f
Added atlas and eckit loads and paths
jprestop Nov 29, 2023
0481f05
Changing -j to "-j 5" as the recommended value
jprestop Nov 30, 2023
80bf1c5
Update ensemble-stat.rst
CPKalb Nov 30, 2023
f99f626
Updated values for GRIB2CLIB_NAME and BUFRLIB_NAME
jprestop Dec 4, 2023
9ee3b88
Updated for proj, eckit, and atlas
jprestop Dec 8, 2023
4135e27
Feature #2761 develop seneca (#2762)
JohnHalleyGotway Dec 12, 2023
8f99ef0
Per #2761, define runtime python version for testing rather than usi…
Dec 12, 2023
40ae2fb
Per #2761, fix setting ci-skip-all
Dec 13, 2023
7c6df3c
Per #2761, patching test_util.R to use the -C command line option for…
Dec 13, 2023
59d3afa
#2652 Added find_var_by_standard_name and separated common codes to f…
Dec 13, 2023
2d65071
Merge remote-tracking branch 'origin/develop' into bugfix_2652_polar_CF
Dec 13, 2023
50bcea9
#2757 Get the email list from the environment variable MET_CRON_EMAIL…
Dec 13, 2023
7ff8131
#2757 The SonarQube token and URL are replaced by using the environm…
Dec 13, 2023
dde1255
#2757 The SonarQube token and URL are replaced with the pre-defined s…
Dec 13, 2023
ea9e543
Merge pull request #2764 from dtcenter/bugfix_2652_polar_CF
hsoh-u Dec 14, 2023
c8e4a17
Merge pull request #2765 from dtcenter/feature_2757_SonarQube_token
hsoh-u Dec 14, 2023
79654f0
Bugfix #2670 develop --enable-python (#2768)
JohnHalleyGotway Dec 15, 2023
6af1d2f
#2755 Added a header count and checking header count instead of using…
Dec 15, 2023
bca2c83
Update install_met_env.acorn_py3.10
jprestop Dec 19, 2023
9e2825c
Update install_met_env.wcoss2_py3.10
jprestop Dec 19, 2023
7615f67
Feature #2776 cleanup (#2777)
JohnHalleyGotway Jan 10, 2024
07cacb0
Bugfix #2782 develop MASSDEN (#2783)
JohnHalleyGotway Jan 11, 2024
ce97b05
Merge remote-tracking branch 'origin/develop-ref' into develop
metplus-bot Jan 11, 2024
65b2663
Feature #2701 ismn (#2758)
JohnHalleyGotway Jan 16, 2024
96fa912
Merge remote-tracking branch 'origin/develop-ref' into develop
metplus-bot Jan 16, 2024
53338a0
Feature 2753 comp script config (#2771)
jprestop Jan 16, 2024
e7b1f0e
#2697 Share the temporary file for blocking
Jan 17, 2024
5e6ed6f
#2697 Reduced the temporary files for pb2nc
Jan 17, 2024
d7f5d43
Moved typedef unixtime from vx_cal.h to time_array.h
Jan 17, 2024
72c0293
#2697 Changed static const to constexpr for SonarQube (code smell)
Jan 17, 2024
51227b9
Removing ${MAKE_ARGS} in some locations
jprestop Jan 17, 2024
2e02ea0
#2673 Removed unused variable grid_x and grid_y
Jan 18, 2024
2663290
#2673 Renamed the shadowed variable n to nd
Jan 18, 2024
4231886
#2673 Use nullptr instead of literal 0
Jan 18, 2024
1b6f4dc
#2673 Moved down namespace declarations. Removed unused variables
Jan 18, 2024
f3bfbe9
#2673 Reduced the scope of variables
Jan 18, 2024
01f721b
#2673 Use nullptr instead of literal 0
Jan 18, 2024
fc4ea74
#2673 Removed unused variables
Jan 18, 2024
52a40ac
#2673 Renamed ex to ex2 which becomes shadowed variable
Jan 18, 2024
21360e4
#2673 Removed always true condition
Jan 18, 2024
714c749
#2673 Moved down namespace declarations. Use nullptr instead of liter…
Jan 18, 2024
4f0b1c0
#2673 Use nullptr instead of literal 0. Removed unused variables
Jan 18, 2024
6c4cf76
#2673 Moved down namespace declarations.
Jan 18, 2024
db0300b
#2673 Added namespace std to string
Jan 18, 2024
d187809
#2673 Removed unused variables. Check nullptr of gt
Jan 18, 2024
1789317
#2673 Moved down namespace declarations.
Jan 18, 2024
fd9bad6
#2673 Added namespace std to string
Jan 18, 2024
72f2469
#2772 Added quit_msg
Jan 19, 2024
464f9a7
Feature #2547 Read WRF output files natively (#2790)
georgemccabe Jan 19, 2024
a5cc010
Merge remote-tracking branch 'origin/develop-ref' into develop
metplus-bot Jan 19, 2024
d5b4550
#2772 Use JSON for attributes and numpy serialization for 2D data ins…
Jan 19, 2024
73d4464
Hotfix to the develop branch after PR #2790 merged the feature_2547_w…
JohnHalleyGotway Jan 19, 2024
791efe1
Merge branch 'develop' into feature_2697_pb2nc_temp_file
Jan 19, 2024
edeb415
Merge pull request #2792 from dtcenter/feature_2697_pb2nc_temp_file
hsoh-u Jan 19, 2024
185fa45
Merge remote-tracking branch 'origin/develop' into feature_2772_pytho…
Jan 20, 2024
50f228a
Feature 2588 install rewrite (#2791)
lisagoodrich Jan 22, 2024
0232c14
Merge pull request #2774 from dtcenter/bugfix_2755_python_emb_for_sin…
hsoh-u Jan 22, 2024
cc38310
Minor hotfix to develop to fix a typo in the comments of the PB2NC co…
JohnHalleyGotway Jan 23, 2024
2367d9a
Feature #2796 develop gha node20 (#2797)
JohnHalleyGotway Jan 24, 2024
f7d1bb5
Feature #2796 develop gha_node20, fix artifact names (#2799)
JohnHalleyGotway Jan 25, 2024
7874116
#2772 Change nan and inf to -9999 on reading ASCII input if failed to…
Jan 26, 2024
6ea6416
Merge pull request #2800 from dtcenter/feature_2772_python_embedding_nan
hsoh-u Jan 29, 2024
d15dc3a
Feature #2801 warnings (#2802)
JohnHalleyGotway Jan 29, 2024
6ca1511
#2772 Initial release, Separated from point.py
Jan 29, 2024
355b384
#2772 Added point_nc.py
Jan 29, 2024
118a2be
#2772 Changed write_tmp_nc and read_tmp_nc to write_tmp_py and read_t…
Jan 29, 2024
1cba5ec
#2772 Removed python_key_point_data & python_key_point_data_list and …
Jan 29, 2024
d64d127
#2772 Renamed tmp_nc_base_name, tmp_nc_file_var_name & tmp_nc_point_v…
Jan 29, 2024
de19d84
#2772 More log messages for error
Jan 29, 2024
df63d2a
#2772 Changed API (log_msg to log_message)
Jan 29, 2024
be02853
#2772 Use met_point_nc_tools instead of met_point_tools
Jan 29, 2024
549ff3d
#2772 Changed APIs
Jan 29, 2024
858a484
#2772 Changed API
Jan 29, 2024
6a54b1b
#2772 Changed default temp output format to JSON and numpy serializ…
Jan 29, 2024
1600963
#2772 Allow keeping the temporary files by using the environment vari…
Jan 29, 2024
6264c6d
Added log message if the temporary file was not deleted
Jan 29, 2024
0defee8
Feature #2745 mvmode enhancements (#2779)
davidalbo Jan 30, 2024
88ee7fa
Merge remote-tracking branch 'origin/develop-ref' into develop
metplus-bot Jan 30, 2024
c1532dd
Per #2772, added MET_PYTHON_EXE to various test cases and removed wha…
jprestop Jan 31, 2024
06498d7
Per #2772, add documentation about 3 new environment variables.
JohnHalleyGotway Jan 31, 2024
c0c3d69
Per #2772, tweak the wording.
JohnHalleyGotway Jan 31, 2024
e3343f7
Fixing typo
jprestop Feb 1, 2024
06d7825
Merge pull request #2803 from dtcenter/feature_2772_python_embedding_…
hsoh-u Feb 1, 2024
4647a35
Feature #2772 python_embedding_tmp_file (#2807)
hsoh-u Feb 2, 2024
3c3b57c
Feature #2805 filter_set_hdr (#2806)
JohnHalleyGotway Feb 2, 2024
f1cab6e
Merge remote-tracking branch 'origin/develop-ref' into develop
metplus-bot Feb 2, 2024
110 changes: 84 additions & 26 deletions docs/Users_Guide/config_options.rst
@@ -494,6 +494,49 @@ Where code is running in a production context, it is worth being familiar with
the binding / affinitization method on the particular system and building it
into any relevant scripting.

.. _met_keep_temp_file:

MET_KEEP_TEMP_FILE
------------------

The MET_KEEP_TEMP_FILE environment variable can be set to control the runtime
behavior of the MET tools. The MET tools write temporary files in several places
in the application and library code. By default, those temporary files are deleted
when they are no longer needed. However, it can be useful for development, testing,
and debugging to keep them for further inspection. Setting this environment variable
to a value of :code:`yes` or :code:`true` instructs the MET tools to retain temporary
files instead of deleting them.

Note that doing so may fill up the temporary directory. It is the responsibility of
the user to monitor the temporary directory usage and remove temporary files that
are no longer needed.

When running with this option, users are advised to refer to section
:numref:`config_tmp_dir` and write temporary files to a personal location rather than
the default shared :code:`/tmp` directory.
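
As an illustration only, a user working in a bash-like shell could retain the
temporary files for a debugging session as follows:

.. code-block:: none

   export MET_KEEP_TEMP_FILE=true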

.. _met_python_debug:

MET_PYTHON_DEBUG
----------------

The MET_PYTHON_DEBUG environment variable can be set to enable debugging log messages
related to Python embedding. These log messages are disabled by default. The environment
variable can be set to a value of :code:`all` for all log messages, :code:`dataplane`
for log messages when reading gridded data, or :code:`point` for log messages when
reading point data.
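
As an illustration only, the following bash-like shell settings enable either all
Python embedding debug messages or only those related to reading point data:

.. code-block:: none

   export MET_PYTHON_DEBUG=all
   export MET_PYTHON_DEBUG=point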

.. _met_python_tmp_format:

MET_PYTHON_TMP_FORMAT
---------------------

The MET_PYTHON_TMP_FORMAT environment variable defines whether temporary files for
Python embedding should be written as NetCDF files or using JSON/NumPy serialization.
By default, they are written using JSON for attributes and NumPy serialization for data
to avoid NetCDF library conflicts between MET and Python. Setting this environment
variable to :code:`netcdf` enables the use of temporary NetCDF files instead.
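
As an illustration only, the following bash-like shell setting switches the Python
embedding temporary files from the default JSON/NumPy serialization to NetCDF:

.. code-block:: none

   export MET_PYTHON_TMP_FORMAT=netcdf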

Settings Common to Multiple Tools
=================================

@@ -3770,13 +3813,22 @@ Where "job_name" is set to one of the following:

* "filter"

To filter out the STAT or TCMPR lines matching the job filtering
criteria specified below and using the optional arguments below.
To filter out the STAT lines matching the job filtering criteria
specified below and using the optional arguments below.
The output STAT lines are written to the file specified using the
"-dump_row" argument.

Required Args: -dump_row

|
Optional Args:

.. code-block:: none

-set_hdr column_name value
May be used multiple times to override data written to the
output dump_row file.

|
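
For example, a filter job sketched along these lines (the column names and values
shown are purely illustrative) writes the matching MPR lines to a dump_row file and
overrides the DESC header column in that output:

.. code-block:: none

   -job filter -line_type MPR -fcst_var TMP \
   -set_hdr DESC FILTERED_TMP -dump_row filter_tmp_mpr.stat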

* "summary"

Expand Down Expand Up @@ -3805,8 +3857,8 @@ Where "job_name" is set to one of the following:

* Format the -column option as LINE_TYPE:COLUMN.

|
|

Use the -derive job command option to automatically derive
statistics on the fly from input contingency tables and partial
sums.
@@ -3832,10 +3884,14 @@ Where "job_name" is set to one of the following:

.. code-block:: none

-by column_name to specify case information
-out_alpha to override default alpha value of 0.05
-derive to derive statistics on the fly
-column_union to summarize multiple columns
-by column_name
To specify case information.
-out_alpha
To override the default alpha value.
-derive
To derive statistics on the fly.
-column_union
To summarize multiple columns.
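
As a sketch (the line type, column, and case names are illustrative), a summary job
combining these options might look like:

.. code-block:: none

   -job summary -line_type CNT -column RMSE \
   -by FCST_VAR,VX_MASK -out_alpha 0.10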

* "aggregate"

@@ -3852,8 +3908,8 @@ Where "job_name" is set to one of the following:
ISC, ECNT, RPS, RHIST, PHIST, RELP, SSVAR

Required Args: -line_type
|

|

* "aggregate_stat"

@@ -3887,8 +3943,8 @@ Where "job_name" is set to one of the following:
.. code-block:: none

-out_thresh or -out_fcst_thresh and -out_obs_thresh
When -out_line_type FHO, CTC, CTS, MCTC, MCTS,
PCT, PSTD, PJC, PRC
When -out_line_type FHO, CTC, CTS, MCTC, MCTS,
PCT, PSTD, PJC, PRC

Additional Optional Args for -line_type MPR:

@@ -3901,14 +3957,14 @@ Where "job_name" is set to one of the following:
-out_obs_wind_thresh
-out_wind_logic
When -out_line_type WDIR

Additional Optional Arg for:

.. code-block:: none

-line_type ORANK -out_line_type PHIST, SSVAR ...
-out_bin_size

Additional Optional Args for:

.. code-block:: none
@@ -3917,14 +3973,14 @@ Where "job_name" is set to one of the following:
-out_eclv_points

* "ss_index"

The skill score index job can be configured to compute a weighted
average of skill scores derived from a configurable set of
variables, levels, lead times, and statistics. The skill score
index is computed using two models, a forecast model and a
reference model. For each statistic in the index, a skill score
is computed as:

SS = 1 - (S[model]*S[model])/(S[reference]*S[reference])

Where S is the statistic.
@@ -4135,17 +4191,19 @@ Where "job_name" is set to one of the following:
"-rank_corr_flag value"
"-vif_flag value"

For aggregate and aggregate_stat job types:

.. code-block:: none

"-out_stat path" to write a .stat output file for the job
including the .stat header columns. Multiple
values for each header column are written as
a comma-separated list.
"-set_hdr col_name value" may be used multiple times to explicity
specify what should be written to the header
columns of the output .stat file.
-out_stat path
To write a .stat output file for aggregate and aggregate_stat jobs
including the .stat header columns. Multiple input values for each
header column are written to the output as a comma-separated list
of unique values.

-set_hdr col_name value
May be used multiple times to explicitly specify what should be
written to the header columns of the output .stat file for
aggregate and aggregate_stat jobs or output dump_row file
for filter jobs.

When using the "-by" job command option, you may reference those columns
in the "-set_hdr" job command options. For example, when computing statistics
14 changes: 7 additions & 7 deletions docs/Users_Guide/stat-analysis.rst
@@ -604,7 +604,7 @@ The Stat-Analysis tool supports several additional job command options which may
This job command option is extremely useful. It can be used multiple times to specify a list of STAT header column names. When reading each input line, the Stat-Analysis tool concatenates together the entries in the specified columns and keeps track of the unique cases. It applies the logic defined for that job to each unique subset of data. For example, if your output was run over many different model names and masking regions, specify **-by MODEL,VX_MASK** to get output for each unique combination rather than having to run many very similar jobs.

.. code-block:: none

-column_min col_name value
-column_max col_name value
-column_eq col_name value
@@ -615,30 +615,30 @@ This job command option is extremely useful. It can be used multiple times to sp
The column filtering options may be used when the **-line_type** has been set to a single value. These options take two arguments, the name of the data column to be used followed by a value, string, or threshold to be applied. If multiple column_min/max/eq/thresh/str options are listed, the job will be performed on their intersection. Each input line is only retained if its value meets the numeric filtering criteria defined, matches one of the strings defined by the **-column_str** option, or does not match any of the strings defined by the **-column_str_exc** option. Multiple filtering strings may be listed using commas. Defining thresholds in MET is described in :numref:`config_options`.

.. code-block:: none

-dump_row file

Each analysis job is performed over a subset of the input data. Filtering the input data down to a desired subset is often an iterative process. The **-dump_row** option may be used for each job to specify the name of an output file to which the exact subset of data used for that job will be written. When initially constructing Stat-Analysis jobs, users are strongly encouraged to use the option and check its contents to ensure that the analysis was actually done over the intended subset.

.. code-block:: none

-out_line_type name

This option specifies the desired output line type(s) for the **aggregate_stat** job type.

.. code-block:: none

-out_stat file
-set_hdr col_name string

The Stat-Analysis tool writes its output to either the log file or the file specified using the **-out** command line option. However the **aggregate** and **aggregate_stat** jobs create STAT output lines and the standard output written lacks the full set of STAT header columns. The **-out_stat** job command option may be used for these jobs to specify the name of an output file to which full STAT output lines should be written. When the **-out_stat** job command option is used for **aggregate** and **aggregate_stat** jobs the output is sent to the **-out_stat** file instead of the log or **-out** file.

Jobs will often combine output with multiple entries in the header columns. For example, a job may aggregate output with three different values in the **VX_MASK** column, such as "mask1", "mask2", and "mask3". The output **VX_MASK** column will contain the unique values encountered concatenated together with commas: "mask1,mask2,mask3". Alternatively, the **-set_hdr** option may be used to specify what should be written to the output header columns, such as "-set_hdr VX_MASK all_three_masks".
Jobs will often combine output with multiple entries in the header columns. For example, a job may aggregate output with three different values in the **VX_MASK** column, such as "mask1", "mask2", and "mask3". The output **VX_MASK** column will contain the unique values encountered concatenated together with commas: "mask1,mask2,mask3". Alternatively, the **-set_hdr** option may be used to specify what should be written to the output header columns, such as "-set_hdr VX_MASK all_three_masks". When **-set_hdr** is specified for **filter** jobs, it controls what is written to the **-dump_row** output file.
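
For example, a filter job along the following lines (the values shown are
illustrative) overrides the DESC header column in the **-dump_row** output file:

.. code-block:: none

   -job filter -line_type MPR -vx_mask DTC165 \
   -set_hdr DESC FILTERED -dump_row filtered_mpr.stat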

When using the "-out_stat" option to create a .stat output file and stratifying results using one or more "-by" job command options, those columns may be referenced in the "-set_hdr" option. When using multiple "-by" options, use "CASE" to reference the full case information string:

.. code-block:: none

-job aggregate_stat -line_type MPR -out_line_type CNT -by FCST_VAR,OBS_SID \
-set_hdr VX_MASK OBS_SID -set_hdr DESC CASE

@@ -662,7 +662,7 @@ When processing input MPR lines, these options may be used to define a masking g
When processing input MPR lines, these options are used to define the forecast, observation, or both thresholds to be applied when computing statistics. For categorical output line types (FHO, CTC, CTS, MCTC, MCTS) these define the categorical thresholds. For continuous output line types (SL1L2, SAL1L2, CNT), these define the continuous filtering thresholds and **-out_cnt_logic** defines how the forecast and observed logic should be combined.

.. code-block:: none

-out_fcst_wind_thresh thresh
-out_obs_wind_thresh thresh
-out_wind_thresh thresh
5 changes: 4 additions & 1 deletion internal/test_unit/R_test/test_util.R
@@ -383,7 +383,10 @@ compareStatLty = function(stat1, stat2, lty, verb=0, strict=0){
# compare the information in the header columns
for(intCol in 2:21){
listMatch = apply(data.frame(dfV1[,intCol], dfV2[,intCol]), 1,
function(a){ a[1] == a[2] });
function(a){
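# values match when both entries are equal or both are NA;
# a lone NA is treated as a difference below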
same = (a[1] == a[2]) | (is.na(a[1]) & is.na(a[2]));
same[is.na(same)] = FALSE;
return(same); });
intNumDiff = sum( !listMatch[ !is.na(listMatch) ] );
if( 0 < intNumDiff ){
if( 1 <= verb ){
25 changes: 10 additions & 15 deletions internal/test_unit/xml/unit_python.xml
@@ -206,6 +206,7 @@
<test name="python_numpy_point_stat">
<exec>&MET_BIN;/point_stat</exec>
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
<pair><name>FCST_COMMAND</name> <value>&MET_BASE;/python/examples/read_ascii_numpy.py &DATA_DIR_PYTHON;/fcst.txt FCST</value></pair>
<pair><name>OBS_COMMAND</name> <value>&MET_BASE;/python/examples/read_ascii_numpy.py &DATA_DIR_PYTHON;/obs.txt OBS</value></pair>
</env>
@@ -480,21 +481,6 @@
</output>
</test>

<test name="python_point2grid_pb2nc_TMP">
<exec>&MET_BIN;/point2grid</exec>
<param> \
'PYTHON_NUMPY=&MET_BASE;/python/examples/read_met_point_obs.py &OUTPUT_DIR;/pb2nc/ndas.20120409.t12z.prepbufr.tm00.nc' \
G212 \
&OUTPUT_DIR;/python/pb2nc_TMP.nc \
-field 'name="TMP"; level="*"; valid_time="20120409_120000"; censor_thresh=[ &lt;0 ]; censor_val=[0];' \
-name TEMP \
-v 1
</param>
<output>
<grid_nc>&OUTPUT_DIR;/python/pb2nc_TMP.nc</grid_nc>
</output>
</test>

<!-- Invokes user-python logic to read a point obs -->
<test name="python_point2grid_pb2nc_TMP_user_python">
<exec>&MET_BIN;/point2grid</exec>
@@ -535,6 +521,7 @@
<test name="python_plot_point_obs_CONFIG">
<exec>&MET_BIN;/plot_point_obs</exec>
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
<pair><name>TO_GRID</name> <value>NONE</value></pair>
</env>
<param> \
@@ -561,6 +548,7 @@
> &OUTPUT_DIR;/python/ensemble_stat/input_file_list; \
&MET_BIN;/ensemble_stat</exec>
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
<pair><name>DESC</name> <value>NA</value></pair>
<pair><name>OBS_ERROR_FLAG</name> <value>FALSE</value></pair>
<pair><name>SKIP_CONST</name> <value>FALSE</value></pair>
@@ -587,6 +575,7 @@
<test name="python_point_stat_GRIB1_NAM_GDAS_WINDS">
<exec>&MET_BIN;/point_stat</exec>
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
<pair><name>BEG_DS</name> <value>-1800</value></pair>
<pair><name>END_DS</name> <value>1800</value></pair>
<pair><name>OUTPUT_PREFIX</name> <value>GRIB1_NAM_GDAS_WINDS</value></pair>
@@ -605,6 +594,9 @@
</test>

<test name="python_plot_data_plane_SEMILATLON_ZONAL_MEAN">
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
</env>
<exec>&MET_BIN;/plot_data_plane</exec>
<param> \
PYTHON_NUMPY \
@@ -619,6 +611,9 @@
</test>

<test name="python_pcp_combine_SEMILATLON_MERIDIONAL_MEAN">
<env>
<pair><name>MET_PYTHON_EXE</name> <value>&MET_PYTHON_EXE;</value></pair>
</env>
<exec>&MET_BIN;/pcp_combine</exec>
<param> \
-add PYTHON_NUMPY \
1 change: 1 addition & 0 deletions internal/test_unit/xml/unit_stat_analysis_ps.xml
@@ -76,6 +76,7 @@
-job filter -line_type MPR -fcst_var TMP -fcst_lev Z2 -vx_mask DTC165 \
-column_str OBS_SID KDLN,KDHT,KDEN,KDLS,KDMA,KDMN,KDVT,KDEW \
-column_str_exc OBS_SID KDLN,KDHT \
-set_hdr DESC FILTER_OBS_SID \
-dump_row &OUTPUT_DIR;/stat_analysis_ps/POINT_STAT_FILTER_OBS_SID.stat \
-v 1
</param>
4 changes: 2 additions & 2 deletions scripts/python/examples/read_ascii_numpy.py
@@ -5,7 +5,7 @@
###########################################

def log(msg):
dataplane.log_msg(msg)
dataplane.log_message(msg)

def set_dataplane_attrs():
# attrs is a dictionary which contains attributes describing the dataplane.
@@ -95,5 +95,5 @@ def set_dataplane_attrs():
attrs = set_dataplane_attrs()
log("Attributes:\t" + repr(attrs))

# Sets fill_value if it exists
# Sets fill_value if it exists at the dataplane
#attrs['fill_value'] = 255 # for letter.txt
7 changes: 5 additions & 2 deletions scripts/python/examples/read_ascii_numpy_grid.py
@@ -27,8 +27,11 @@
met_data = dataplane.read_2d_text_input(input_file)
print("Data Shape:\t" + repr(met_data.shape))
print("Data Type:\t" + repr(met_data.dtype))
except NameError:
print("Can't find the input file")
except NameError as ex:
print(" === ERROR from read_ascii_numpy_grid.py")
print(f" Exception: {type(ex)} {ex}")
print(f" sys.argv: {sys.argv}")
print(" Can't find the input file")

# attrs is a dictionary which contains attributes describing the dataplane.
# attrs should have 9 items, each of data type string:
2 changes: 1 addition & 1 deletion scripts/python/examples/read_ascii_xarray.py
@@ -6,7 +6,7 @@
###########################################

def log(msg):
dataplane.log_msg(msg)
dataplane.log_message(msg)

log("Python Script:\t" + repr(sys.argv[0]))
