There are three quotes I would like to remind you of:
Significantly, the last one is painfully true. If you do not have automated regression tests, you will not become aware of problems caused by 3rd-party software: the JDK, the Java libraries required as Maven dependencies in the pom.xml file, the Maven plugins themselves (also declared in pom.xml), or errors in the incoming/upstream data (whether introduced by UN/CEFACT editors or by GEFEG-FX export changes).
In addition, developers often look at automated tests as the first place to find code examples of how to use the software, and debugging a test shows how the data flows through it. Therefore, I would suggest adding an automated full-feature test for the creation of the complete (data of the) website: https://vocabulary.uncefact.org/
Some suggestions on implementation details of the Maven build system:
During build & testing (e.g. mvn install), the Java build system Maven copies all resources from /src/test/resources and /src/main/resources into /target/test-classes, so that tests can access these resources via the classloader.
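For illustration, a minimal sketch of such classloader-based access (the resource file name is a hypothetical example):

```java
// Minimal sketch: reading a test resource via the classloader. Maven copies
// everything below src/test/resources into target/test-classes, so the
// resource name is relative to that folder.
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public final class TestResourceAccess {

    public static String readResourceAsString(String resourceName) throws IOException {
        try (InputStream in = TestResourceAccess.class.getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IOException("Resource not found on classpath: " + resourceName);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g. a file placed at src/test/resources/test-input/example.xsd
        System.out.println(readResourceAsString("test-input/example.xsd"));
    }
}
```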
I find it helpful to separate the test resources into two subfolders, "test-input" and "test-reference".
It is useful not only to generate the data but also to compare it with the previously generated data (otherwise there is no real regression test). A third, newly created resource folder in the /target/test-classes directory would be "test-output".
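A minimal JUnit 5 sketch of such a comparison (the generator entry point and the file names are hypothetical placeholders, not this project's actual API):

```java
// Regression sketch: newly generated data in target/test-classes/test-output
// is compared against the checked-in reference in test-reference.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;

class DeliverableRegressionTest {

    @Test
    void generatedDataMatchesReference() throws Exception {
        Path outputDir = Path.of("target", "test-classes", "test-output");
        Files.createDirectories(outputDir);
        Path output = outputDir.resolve("vocabulary.jsonld"); // hypothetical file name

        // Hypothetical generator call - replace with the project's real entry point:
        // DeliverableGenerator.generate(Path.of("src/test/resources/test-input"), output);

        Path reference = Path.of("src/test/resources/test-reference/vocabulary.jsonld");
        assertEquals(Files.readString(reference), Files.readString(output),
                "Generated data differs from the reference; review the diff and, "
                        + "if the change is intended, update the reference file.");
    }
}
```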
(If you are using the Apache License 2.0 (AL2), you may in the end simply copy this functionality (and the access to the test files) from an ODF Toolkit test.)
I would suggest not only generating the deliverables into the /target directory but also copying them into the source tree via pom.xml.
For example, in the ODF Toolkit I am generating Java sources from the XML grammar (a typed DOM) and copying these files into the source directory via pom.xml. Please note the variables I am using.
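As a sketch of what such a copy step could look like with the standard maven-resources-plugin (the paths and the execution id are my assumptions, not the ODF Toolkit's actual configuration):

```xml
<!-- Sketch: copy generated deliverables from target/ back into the source
     tree after packaging; goes into the <build><plugins> section. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-deliverables-into-source-tree</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- destination inside the source tree, so changes become visible in Git -->
        <outputDirectory>${project.basedir}/src/main/resources/deliverables</outputDirectory>
        <resources>
          <resource>
            <!-- hypothetical directory where the build generates the deliverables -->
            <directory>${project.build.directory}/generated-deliverables</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Binding the copy to a late phase such as package keeps the /target output authoritative while making the result reviewable in Git.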
Having the generated human-readable data in a Git repository, showing changes caused by the tooling (or even aligned with the sources that created them), seems of great advantage to me.
Nevertheless, 3rd-party libraries (JARs) are usually downloaded by Maven from a Maven repository and, once downloaded, cached locally in the <USER>/.m2 directory to avoid downloading the same JAR files again and again.
We might ask ourselves: why do we not download the UN/CEFACT artefacts required as input from a Maven repository as well?
A JAR is basically a ZIP file with an additional manifest file. We could download the XLS, XSD, etc. for each version we would like to generate deliverables for. It is an upstream problem that not all deliverables are on the website and that they are not accessible by tooling at all (e.g. via Maven). I plan the same for the OASIS ODF TC: uploading the XML RNG grammar (and all deliverables), as from my perspective the information of a data/software specification has to be machine-accessible to enable digitalization.
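To sketch the idea: if such artefacts were ever published, consuming a versioned input package could be as simple as declaring a dependency (the coordinates below are purely hypothetical; no such artefacts exist today, which is exactly the upstream gap described above):

```xml
<!-- Hypothetical dependency on a versioned UN/CEFACT input package -->
<dependency>
  <groupId>org.uncefact</groupId>
  <artifactId>uncefact-ccl-artefacts</artifactId>
  <version>D23A</version>
  <type>zip</type>
</dependency>
```

The standard maven-dependency-plugin (e.g. its unpack-dependencies goal) could then extract the XLS/XSD files into /target before the generation step runs.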
In the future, the half-yearly UN/CEFACT releases should be done with as much automation as possible, to avoid human errors, save time, and allow reproducibility (for example, I have no idea where the sources of the CCL regression tests exist that create the PDF validation reports from https://unece.org/trade/uncefact/unccl)...
The complete last paragraph is mostly an upstream UN/CEFACT problem, but I found it worth mentioning here!