Update docks #409

Merged · 15 commits · Nov 18, 2024
47 changes: 0 additions & 47 deletions docs/code_ref/util.rst
@@ -7,50 +7,3 @@ specifically.
.. automodapi:: stixcore.util

.. automodapi:: stixcore.util.logging

stix-pipline CLI
================

usage: stix-pipline [-h] [-t TM_DIR] [-f FITS_DIR] [-s SPICE_DIR] [-p SOOP_DIR] [--idl_enabled] [--idl_disabled]
[--idl_gsw_path IDL_GSW_PATH] [--idl_batchsize IDL_BATCHSIZE] [--stop_on_error]
[--continue_on_error] [-o OUT_FILE] [-l LOG_FILE]
[--log_level {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}] [-b {TM,LB,L0,L1,L2,ALL}]
[-e {TM,LB,L0,L1,L2,ALL}] [--filter FILTER] [-c [CLEAN]]

Runs the stix pipeline

optional arguments:
-h, --help show this help message and exit
-t TM_DIR, --tm_dir TM_DIR
input directory of the (TM XML) files
-f FITS_DIR, --fits_dir FITS_DIR
output directory for the FITS files
-s SPICE_DIR, --spice_dir SPICE_DIR
directory of the SPICE kernel files
-p SOOP_DIR, --soop_dir SOOP_DIR
directory to the SOOP files
--idl_enabled IDL is set up to interact with the pipeline
--idl_disabled IDL is not set up to interact with the pipeline
--idl_gsw_path IDL_GSW_PATH
directory where the IDL gsw is installed
--idl_batchsize IDL_BATCHSIZE
batch size: how many TM products are batched by the IDL bridge
--stop_on_error the pipeline stops on any error
--continue_on_error the pipeline reports any error and continues processing
-o OUT_FILE, --out_file OUT_FILE
file all processed files will be logged into
-l LOG_FILE, --log_file LOG_FILE
an optional file to which all logging is appended
--log_level LLEVEL
LLEVEL: {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}
the level of logging
-b LEVEL, --start_level LEVEL
LEVEL: {TM,LB,L0,L1,L2,ALL}
the processing level where to start
-e LEVEL, --end_level LEVEL
LEVEL: {TM,LB,L0,L1,L2,ALL}
the processing level where to stop the pipeline
--filter FILTER, -F FILTER
filter expression applied to all input files, e.g. '*sci*.fits'
-c, --clean
clean all files from <fits_dir> first
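An illustrative invocation combining the flags above (all directory paths are hypothetical examples, not defaults):

```shell
# Run the full pipeline from raw TM up to L2, continuing past errors,
# restricting inputs to science products (paths are placeholders).
stix-pipline \
    -t /data/tm \
    -f /data/fits \
    -s /data/spice \
    -b TM -e L2 \
    --continue_on_error \
    --filter '*sci*.fits' \
    --log_level INFO -l pipeline.log
```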
30 changes: 30 additions & 0 deletions docs/developers.rst
@@ -130,3 +130,33 @@ Documentation is built using `Sphinx <https://www.sphinx-doc.org/en/master/>`_ s
tests above this can be run manually or through tox. To run manually, cd to the docs directory and
run `'make html'`; to run via tox, use `'tox -e build_docs'`. There is a known dependency on Graphviz.
If you have any problems (on Windows), follow `these <https://bobswift.atlassian.net/wiki/spaces/GVIZ/pages/20971549/How+to+install+Graphviz+software>`_ instructions.

End-to-End Testing
------------------

Changing the code base might change the FITS data products that the processing pipeline generates. This can happen on purpose or unintentionally, both while enhancing data definitions (structural changes in the FITS extensions and header keywords) and in the data itself due to changed number-crunching methods. To avoid unnoticed changes in the generated FITS files, an end-to-end testing hook is in place. While many of the number-crunching methods are covered by unit tests, this additional test ensures overall data integrity. If a change to a FITS product was intentional, a new version of that product has to be released, reprocessed and delivered to SOAR.

This additional test step can be triggered locally but is also integrated into the CI GitHub Actions. In order to merge new PRs, the tests have to pass or the failure has to be manually approved.

Provide test data to compare to
*******************************
A predefined set of FITS products, together with the original TM data that was used to create them, is publicly available on the processing server as a zip file: https://pub099.cs.technik.fhnw.ch/data/end2end/data/head.zip . This TM data is used to generate new FITS products with the latest code base, which are afterwards compared to the originals for completeness and identical data.
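The comparison step can be pictured with a minimal sketch (the directory layout and byte-level checksum are illustrative assumptions; the actual pipeline compares FITS structure and data, not just raw bytes):

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def compare_products(reference_dir: Path, generated_dir: Path) -> dict:
    """Compare FITS files by name: report missing, extra and differing ones."""
    ref = {p.name: p for p in reference_dir.glob("*.fits")}
    gen = {p.name: p for p in generated_dir.glob("*.fits")}
    return {
        "missing": sorted(ref.keys() - gen.keys()),    # in reference only
        "extra": sorted(gen.keys() - ref.keys()),      # newly generated only
        "differing": sorted(n for n in ref.keys() & gen.keys()
                            if file_digest(ref[n]) != file_digest(gen[n])),
    }
```

A byte-level digest flags any difference at all; a structural comparison (per extension, header keyword and data array) additionally tells you *where* the products diverge.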

Running the tests
*****************

The end-to-end tests are defined as normal unit tests in stixcore/processing/tests/test_end2end.py but marked with @pytest.mark.end2end. The CI runs two separate test jobs: one for the end-to-end tests and one for all other unit tests.

Run them with `pytest -v -m end2end`.
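A marked test looks like the following sketch (the function name and body are hypothetical; only the marker name matches the one used in the repository):

```python
import pytest


@pytest.mark.end2end  # selected by `pytest -m end2end`, excluded by `-m "not end2end"`
def test_products_match_reference():
    # Hypothetical body: regenerate products and compare them to the reference set.
    generated = {"a.fits"}
    reference = {"a.fits"}
    assert generated == reference
```

Custom markers should be registered (e.g. in setup.cfg or pytest.ini under `markers`) so pytest does not warn about unknown marks.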

Manually approve failed end to end tests
****************************************

Before a PR can be merged, a set of tests has to pass in the CI, including the end-to-end tests. If you have changed the code base in a way that your generated test FITS products are no longer identical to the original test files, you will be notified by a failed test result.

If your changes were intended and you are happy with the reported differences between the original and current test FITS products, a repo admin can merge the PR by bypassing the test in the GitHub UI. If you did not expect any changes at all, rework your code until you can explain the reported matching errors.

Update the original test data
*****************************

On each merge to the git master branch, a web hook (https://pub099.cs.technik.fhnw.ch/end2end/rebuild_hook.cgi - credentials stored as git secrets) is triggered to regenerate the original test data and TM source data, and https://pub099.cs.technik.fhnw.ch/data/end2end/data/head.zip is replaced with the latest data. For that regeneration, a dedicated STIXCore environment runs on pub099 (/data/stix/end2en). That STIXCore environment always pulls the latest code updates from the master branch and reprocesses the data. That way, each new PR has to generate data identical to the last approved merged PR, or the detected changes need manual approval.
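Triggering the hook by hand could look like the sketch below (USER and PASS are placeholders standing in for the stored secrets; whether the hook uses HTTP basic auth is an assumption):

```shell
# Hypothetical manual trigger of the rebuild hook; credentials are placeholders.
curl -u "$USER:$PASS" https://pub099.cs.technik.fhnw.ch/end2end/rebuild_hook.cgi
```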