
Make python friendly CLI scripts #262

Merged
merged 4 commits into develop from make-python-friendly-CLI-scripts
Jun 28, 2024

Conversation

BaptisteVandecrux
Member

No description provided.

BaptisteVandecrux added a commit that referenced this pull request Jun 24, 2024
@BaptisteVandecrux BaptisteVandecrux changed the base branch from add-historical-data-to-l3 to develop June 26, 2024 19:30
BaptisteVandecrux added a commit that referenced this pull request Jun 27, 2024
This update:
- cleans up the SHF/LHF calculation
- finds breaks in the GPS coordinates (indicative of station relocation)
- for each interval between breaks, smooths, interpolates and extrapolates the GPS coordinates

New required package: statsmodels
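The break-detection and per-interval smoothing described above can be sketched as follows. This is an illustration, not the pypromice implementation: the function name and the jump threshold are assumptions.

```python
import pandas as pd

def smooth_gps_segments(coord: pd.Series, jump_threshold: float = 0.001) -> pd.Series:
    """Split a GPS coordinate series at breaks (large jumps, indicative of a
    station relocation), then interpolate/extrapolate within each segment.
    `jump_threshold` (in degrees) is an illustrative value."""
    # a break is a sample-to-sample jump larger than the threshold
    breaks = coord.diff().abs() > jump_threshold
    segment_id = breaks.cumsum()
    out = coord.copy()
    for _, idx in coord.groupby(segment_id).groups.items():
        # fill gaps inside the segment and extend values to its edges
        out.loc[idx] = coord.loc[idx].interpolate(limit_direction="both")
    return out
```

Treating each relocation interval separately avoids interpolating across the jump itself, which would smear the two station positions together.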
@PennyHow
Member

All looks good, @BaptisteVandecrux. These are all fairly straightforward and logical changes. Feel free to merge.

@PennyHow
Member

Just so you know, I removed the requirement for the process_l3_test action to pass before the branch can be merged, as we are not using it for now.

@BaptisteVandecrux BaptisteVandecrux merged commit f79edbf into develop Jun 28, 2024
4 checks passed
@BaptisteVandecrux BaptisteVandecrux deleted the make-python-friendly-CLI-scripts branch June 28, 2024 12:05
ladsmund pushed a commit that referenced this pull request Jul 2, 2024
BaptisteVandecrux added a commit that referenced this pull request Jul 5, 2024
* implemented GPS postprocessing on top of #262

This update:
- cleans up the SHF/LHF calculation
- reads dates of station relocations (when station coordinates are discontinuous) from `aws-l0/metadata/station_configurations`
- for each interval between station relocations, a linear function is fitted to the GPS observations of latitude, longitude and altitude and is used to interpolate and extrapolate the GPS observations
- these new smoothed and gap-free coordinates are the variables `lat, lon, alt`
- for bedrock stations (like KAN_B), static coordinates are used to build `lat, lon, alt`
- finally, `lat_avg`, `lon_avg` and `alt_avg` are calculated from `lat, lon, alt` and added as attributes to the netCDF files.

Several minor fixes were also made, such as:
- better removal of encoding info when reading netCDF files
- writing variables full of NaNs to the `L2` and `L3/stations` files but not to the `L3/sites` files
- recalculating dirWindSpd if needed for historical data
- due to the xarray version, new columns need to be added manually before concatenating different datasets in join_l3
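The per-interval linear fit could look like the sketch below. The function and parameter names are assumptions for illustration, not the pypromice code.

```python
import numpy as np
import pandas as pd

def fit_coordinate(time: pd.DatetimeIndex, gps: pd.Series,
                   relocations: list) -> pd.Series:
    """Fit a linear function to the GPS observations within each interval
    between station relocations, then use it to fill and extrapolate the
    coordinate. `relocations` is a list of pd.Timestamp break dates."""
    # seconds since epoch, used as the regression abscissa
    t = time.values.astype("datetime64[s]").astype(float)
    edges = [time[0]] + list(relocations) + [time[-1] + pd.Timedelta(seconds=1)]
    out = pd.Series(np.nan, index=time)
    for start, end in zip(edges[:-1], edges[1:]):
        mask = (time >= start) & (time < end)
        obs = gps[mask].dropna()
        if len(obs) < 2:
            continue  # not enough observations to fit a line
        t_obs = obs.index.values.astype("datetime64[s]").astype(float)
        slope, intercept = np.polyfit(t_obs, obs.values, 1)
        out[mask] = slope * t[mask] + intercept
    return out
```

Because the fitted line is evaluated over the whole interval, it both interpolates gaps and extrapolates past the last GPS fix, which is what yields gap-free `lat, lon, alt`.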
BaptisteVandecrux added a commit that referenced this pull request Aug 20, 2024
* Update .gitignore

* L2 split from L3 CLI processing

* unit tests moved to separate module

* file writing functions moved to separate module

* Loading functions moved to separate module

* Handling and reformatting functions moved

* resampling functions moved

* aws module updated with structure changes

* get_l2 and l2_to_l3 process test added

* data prep and write function moved out of AWS class

* stations for testing changed

* creating folder before writing files; writing hourly, daily and monthly files out in L2toL3; trying not to re-write sorted tx file if already sorted

* update get_l3 to add historical data

* resampling frequency specified

* renamed join_levels to join_l2 because join_l3 will have a different merging function and attribute management, and will use site_id and list_station_id

* skipping resample after join_l2, fixed setup.py for join_l2

* fixing test

* fixed function names

* update get_l3 to add historical data

* update get_l3 to add historical data

* Create get_l3_new.py

* further work on join_l3, variable_aliases in ressource folder

* cleaning up debug code in join_l3

* small fix in join_l3

* working version

* delete encoding info after reading netcdf, debug of getColNames

* delete get_l3.py

* removing variables and output files metadata

* new variable description files

* added back ressource files, use level attributes for output definition

* make list of sites from station_config, switched print to logger.info

* removing get_l3, remove inst. values from averaged files, fixes on logging, attributes and tests,

* Updates to numpy dependency version and pandas deprecation warnings (#258)

* numpy dependency <2.0

* resample rules updated (deprecation warning)

* fillna replaced with ffill (deprecation warning)

* get_l3 called directly rather than from file

* process action restructured

* small changes following review, restored variable.csv history

renamed new variable.csv

moved old variable.csv

renamed new variables.csv

recreate variables.csv

* building a station list instead of a station_dict

* renamed l3m to l3_merged, reintroduced getVars and getMeta

* moving gcnet_postprocessing as part of readNead

* sorting out the station_list in reverse chronological order

* using tilde notation in setup.py

* better initialisation of station_attributes attribute

* moved addMeta, addVars, roundValues, reformatTime, reformatLon to write.py

* Inline comment describing encoding attribute removal when reading a netCDF

* loading toml file as dictionary within join_l3

instead of just reading the stid to join

* ressources renamed to resources (#261)

* using the project attribute of a station to locate the AWS file and specify whether it's a NEAD file

* update test after moving addVars and addMeta

* fixed logger message in resample

* better definition of monthly sample rates in addMeta

* dummy dataset built in unit test now has 'level' attribute

* not storing timestamp_max for each station but pulling the info directly from the dataset when sorting

* removing unnecessary import of addMeta, roundValues

* make CLI scripts usable within python

* return result in join_l2 and join_l3

* removing args from join_l2 function

* proper removal of encoding info when reading netcdf

* Refactored and Organized Test Modules

- Moved test modules and data from the package directory to the root-level tests directory.
- Updated directory structure to ensure clear separation of source code and tests.
- Updated import statements in test modules to reflect new paths.
- Restructured the tests module:
  - Renamed original automatic tests to `e2e` as they primarily test the main CLI scripts.
  - Added `unit` directory for unit tests.
  - Created `data` directory for shared test data files.

This comprehensive refactoring improves project organization by clearly separating test code from application code. It facilitates easier test discovery and enhances maintainability by following common best practices.

* Limited the ci tests to only run e2e

* naming conventions changed

* Feature/smoothing and extrapolating gps coordinates (#268)

* implemented GPS postprocessing on top of #262

This update:
- cleans up the SHF/LHF calculation
- reads dates of station relocations (when station coordinates are discontinuous) from `aws-l0/metadata/station_configurations`
- for each interval between station relocations, a linear function is fitted to the GPS observations of latitude, longitude and altitude and is used to interpolate and extrapolate the GPS observations
- these new smoothed and gap-free coordinates are the variables `lat, lon, alt`
- for bedrock stations (like KAN_B), static coordinates are used to build `lat, lon, alt`
- finally, `lat_avg`, `lon_avg` and `alt_avg` are calculated from `lat, lon, alt` and added as attributes to the netCDF files.

Several minor fixes were also made, such as:
- better removal of encoding info when reading netCDF files
- writing variables full of NaNs to the `L2` and `L3/stations` files but not to the `L3/sites` files
- recalculating dirWindSpd if needed for historical data
- due to the xarray version, new columns need to be added manually before concatenating different datasets in join_l3

* Updated persistence.py to use explicit variable thresholds

Avoided applying the persistence filter to the averaged pressure variables (`p_u` and `p_l`), because their 0-decimal precision often led to incorrect filtering.
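A minimal sketch of a persistence filter with explicit per-variable thresholds. The variable names and threshold values below are illustrative assumptions, not the actual pypromice configuration.

```python
import pandas as pd

# Illustrative per-variable thresholds: maximum number of consecutive repeated
# values allowed before flagging (these numbers are assumptions).
persistence_thresholds = {"t_u": 24, "rh_u": 24, "wspd_u": 24}

def flag_persistence(series: pd.Series, max_repeats: int) -> pd.Series:
    """Flag values that have repeated unchanged more than max_repeats times."""
    repeating = series.diff() == 0   # True where value equals the previous one
    run_id = (~repeating).cumsum()   # label each run of repeats
    run_len = repeating.astype(int).groupby(run_id).cumsum()
    return run_len > max_repeats

# Averaged pressure (`p_u`, `p_l`) is deliberately excluded from the threshold
# dict: with 0-decimal precision it often repeats legitimately, so the filter
# would flag valid data.
```

A usage sketch: `mask = flag_persistence(df["t_u"], persistence_thresholds["t_u"])`, then set the flagged values to NaN.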

* Fixed bug in persistence QC where initial repetitions were ignored

* Relocated unit persistence tests
* Added explicit test for `get_duration_consecutive_true`
* Renamed `duration_consecutive_true` to `get_duration_consecutive_true` for imperative clarity
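A possible vectorized form of such a helper, assuming a boolean series on a DatetimeIndex; the real pypromice signature may differ.

```python
import pandas as pd

def get_duration_consecutive_true(flags: pd.Series) -> pd.Series:
    """Return, for each timestamp, the number of hours the boolean series
    has been continuously True (0 where it is False)."""
    # hours elapsed since the previous sample
    hours = flags.index.to_series().diff().dt.total_seconds().div(3600).fillna(0)
    # only accumulate elapsed time while the flag is True
    step = hours.where(flags, 0.0)
    # reset the running sum at every False
    return step.groupby((~flags).cumsum()).cumsum()
```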

* Updated python version in unittest

* Fixed bug in get_bufr

Configuration variables were too strictly validated.
* Made bufr_integration_test explicit

* Added __all__ to get_bufr.py

* Applied black code formatting

* Made bufr_to_csv a CLI script in setup.py

* Updated read_bufr_file to use wmo_id as index

* Added script to recreate bufr files

* Added corresponding unit tests
* Added flag to raise exceptions on errors
* Added create_bufr_files.py to setup

* Updated tests parameters

Updated station config:
* Added sonic_ranger_from_gps
* Changed height_of_gps_from_station_ground from 0 to 1

* Added test for missing data in get_bufr

- Ensure get_bufr_variables raises AttributeError when station dimensions are missing

* Updated get_bufr to support static GPS heights.

* Bedrock stations shouldn’t depend on the noisy GPS signal for elevation.
* Added station dimension values for WEG_B
* Added corresponding unittest

* Updated github/workflow to run unittests

Added eccodes installation

* Updated get_bufr to support station config files in folder

* Removed station_configurations.toml from repository
* Updated bufr_utilities.set_station to validate wmo id
* Implemented StationConfig io tests
* Extracted StationConfiguration utils from get_bufr
* Added support for loading multiple station configuration files

Other
* Made ArgumentParser instantiation inline

* Updated BUFRVariables with scales and descriptions

* Added detailed descriptions with references to the attributes in BUFRVariables
* Changed the attribute order to align with the exported schema
* Changed variable roundings to align with the scales defined in the BUFR schemas:
  * latitude and longitude are set to 5 (was 6)
  * heightOfStationGroundAboveMeanSeaLevel is set to 1 (was 2)
  * heightOfBarometerAboveMeanSeaLevel is set to 1 (was 2)
  * pressure is set to -1 (was 1). Note: the BUFRVariables unit is Pa, not hPa
  * airTemperature is set to 2 (was 1)
  * heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformTempRH is set to 2 (was 4)
  * heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformWSPD is set to 2 (was 4)
  * Added unit tests to test the roundings
* Updated existing unit tests to align with the corrected precision
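The scale values above map directly onto Python's `round()`, which accepts negative digit counts; a minimal sketch:

```python
def round_to_scale(value: float, scale: int) -> float:
    """Round to a BUFR scale: positive scales are decimal places, negative
    scales round to powers of ten (e.g. scale -1 rounds pressure in Pa to
    the nearest 10 Pa)."""
    return round(value, scale)

# Examples matching the scales listed above:
#   latitude/longitude at scale 5, pressure (in Pa, not hPa) at scale -1
```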

* Increased the real_time_utilities rounding precisions

* Updated get_bufr to separate station position from bufr

* The station position determination (AWS_latest_locations) is separated from the bufr file export
* Updated the unit tests

Corrected minimum data check to allow p_i or t_i to be nan

Renamed process_station parameters for readability
* Rename now_timestamp -> target_timestamp
* Rename time_limit -> linear_regression_time_limit

Applied black

* Minor cleanup

* Updated StationConfiguration IO to handle unknown attributes from input

* Updated docstring in create_bufr_files.py

* Renamed e2e unittest methods

Added missing "test" prefix required by the unittest framework.

* Feature/surface heights and thermistor depths (#278)

* processes surface heights variables: `z_surf_combined`, `z_ice_surf`, `snow_height`, and thermistors' depths: `d_t_i_1-11`
* `variable.csv` was updated accordingly
* some clean-up of turbulent fluxes calculation, including renaming functions
* handling empty station configuration files and making errors understandable
* updated join_l3 so that surface height and thermistor depths in historical data are no longer ignored, and so that the surface height is adjusted between the merged datasets

* static values called `latitude`, `longitude` and `altitude`, calculated either from `gps_lat, gps_lon, gps_alt` or from `lat, lon, alt`, are saved as attributes, along with `latitude_origin`, `longitude_origin` and `altitude_origin`, which state whether they come from the gappy observations `gps_lat, gps_lon, gps_alt` or from the gap-filled, postprocessed `lat, lon, alt`
* changed "H" to "h" in pandas and added ".iloc" where necessary to remove deprecation warnings

* made `make_metadata_csv.py` update the latest locations in `aws-l3/AWS_station_metadata.csv` and `aws-l3/AWS_sites_metadata.csv`
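The surface-height adjustment between merged datasets can be sketched as an offset applied over their overlap. This is an illustration of the idea, with assumed names, not the pypromice implementation.

```python
import pandas as pd

def adjust_surface_height(historical: pd.Series, recent: pd.Series) -> pd.Series:
    """Shift a historical surface-height series so it lines up with the more
    recent series, using the median offset over their overlap, or the values
    at the junction when there is no overlap."""
    overlap = historical.index.intersection(recent.index)
    if len(overlap) > 0:
        offset = (recent.loc[overlap] - historical.loc[overlap]).median()
    else:
        # no overlap: match the last historical value to the first recent one
        offset = recent.dropna().iloc[0] - historical.dropna().iloc[-1]
    return historical + offset
```

Using a median over the overlap makes the junction robust to individual noisy height readings in either dataset.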

---------

Co-authored-by: Penny How <pho@geus.dk>

* L2toL3 test added (#282)

* 3.8 and 3.9 tests removed, tests only for 3.10 and 3.11

* echo syntax changed

* updated input file paths
---------

* better adjustment of surface height in join_l3, also adjusting z_ice_surf (#289)

* different decoding of GPS data if "L" is in GPS string (#288)

* Updated pressure field for BUFR output files

* Updated get_l2 to use aws.vars and aws.meta

get_l2 was previously loading vars and meta in addition to AWS.
AWS populates meta with source information during instantiation.

* Removed static processing level attribute from file_attributes

* Run black on write.py

* Implemented alternative helper functions for reading variables and metadata files

* Refactor getVar getMeta
* Use pypromice.resources instead of pkg_resources

* Select format from multiple L0 input files

The format string was previously selected from the last L0 file only.

* Updated attribute metadata

* Added test case for output meta data
* Added json formatted source string to attributes
* Added title string to attributes
* Updated ID string to include level
* Added utility function for fetching git commit id
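Such a git-commit-id helper can be implemented with subprocess. A sketch; the actual function name and behavior in pypromice may differ.

```python
import subprocess

def get_git_commit_id(repo_dir: str = ".") -> str:
    """Return the short commit id of the checkout at repo_dir, or 'unknown'
    when git is unavailable or the directory is not a repository."""
    try:
        out = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"
```

Embedding the commit id in the output attributes makes every produced file traceable to the exact processing code that wrote it.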

* Updated test_process with full pipeline test

* Added test station configuration
* Cleanup test data files

* Removed station configuration generation

* Renamed folder name in temporary test directory

* Added data issues repository path as an explicit parameter to AWS

* Added data issues path to process_test.yml

* Applied black on join_l3

* Updated join_l3 to generate source attribute for sites

Validate attribute keys in e2e test

* job name changed

* Bugfix/passing adj dir to l3 processing plus attribute fix (#292)

* passing adjustment_dir to L2toL3.py

* fixing attributes in join_l3

- station_attribute containing info from merged dataset was lost when concatenating the datasets
- The key "source" is not present in the attributes of the old GC-Net files so `station_source = json.loads(station_attributes["source"])` was throwing an error

* give data_issues_path to get_l2tol3 in test_process

* using data_adjustments_dir as input in AWS.getL3

* adding path to dummy data_issues folder to process_test

* making sure data_issues_path is a Path in get_l2tol3

---------

Co-authored-by: PennyHow <pho@geus.dk>
Co-authored-by: Mads Christian Lund <maclu@geus.dk>
BaptisteVandecrux added a commit that referenced this pull request Aug 20, 2024