Some ideas for more modern conda build recipes for ska packages #1
Conversation
chandra.time/meta.yaml
@@ -0,0 +1,47 @@
{% set data = load_setup_py_data() %}
It looks like this doesn't work on Py3, I think because load_setup_py_data is trying to JSON-serialize the cythonize portion of the setup:
ext_modules=cythonize(extensions),
Not sure why this seems to work on Py2. A workaround is to just specify the version explicitly in the recipe until we patch load_setup_py_data().
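A minimal sketch of what I mean (the version string here is only a placeholder, not an actual release):

package:
  name: chandra.time
  version: "1.0"   # placeholder: hard-coded until load_setup_py_data() is fixed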
It looks like conda-build 2.1.15 works fine with this (and ignores the bits that don't JSON serialize). conda-build 2.1.16 and conda-build 3.0.6 seem to break on the same recipe, however. Oy.
# Packages required to run the package. These are the dependencies that
# will be installed automatically whenever the package is installed.
run:
  - python
Also probably needs astropy.
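That is, something along these lines (whether astropy needs a version pin is still to be decided):

requirements:
  run:
    - python
    - astropy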
@@ -0,0 +1,2 @@
import Chandra.cmd_states
Chandra.cmd_states.test()
@jzuhone Regarding cmd_states, it looks like the last two releases didn't get git tags, so I've just pushed a tag for the last one. Also, when building for conda I get test errors from tests looking for "SKA" (which is not set). Are those test failures unique to my setup? I don't see anything in your build or test scripts setting SKA. It looks like conda-build ignores the test failures and continues, though.
> raise KeyError(key) from None
E KeyError: 'SKA'
../../os.py:669: KeyError
=============================== 3 failed in 7.33 seconds ===============================
===== cmd_states-3.11-py36_0 OK =====
import: 'Chandra.cmd_states'
TEST END: /data/fido/c3/conda-bld/linux-64/cmd_states-3.11-py36_0.tar.bz2
INFO:conda_build.config:--dirty flag not specified. Removing build folder after successful build/test.
I didn't see this error, but I also have the SKA environment variable set.
It seems like a reasonable requirement to have SKA set appropriately in order to build, but maybe we need to discuss? I'm wondering how this interacts with potential automated remote builds. And pardon my ignorance, but does conda-build run tests by default in the build process? From the discussion that seems to be the case. We can certainly skip tests that require SKA data or other particular interface requirements (like a .netrc file).
Right, regarding SKA I also wasn't sure if we needed a convention.
Regarding conda-build running tests by default, there is really no "default". I just started the convention of adding a run_test.py file that runs the module tests when they seem doable. The big advantage I saw in running tests from the package build process is that they run in the environment defined by the recipe requirements, so the tests are a great way to check that the recipe requirements/dependencies are really correct (assuming some appropriate level of test coverage). For other kinds of system/regression/environment tests it obviously makes more sense to run ska_testr in the full, built environment.
Regarding test conventions and skipping: the conda recipe can run basically anything in a run_test.py or run_test.sh, so we could customize the called tests there, or skip them altogether by not defining a run_test* at all, or we could update the module tests themselves to skip the SKA-dependent tests from that side (see the sketch below).
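As a concrete sketch, the gating could also live in run_test.py itself, something like the following (just one possible approach; whether we'd want to skip silently like this is part of the discussion):

import os
import Chandra.cmd_states

# Only run the module tests when the Ska root is actually available;
# on a bare build machine, skip rather than report spurious failures.
if os.environ.get('SKA'):
    Chandra.cmd_states.test()
else:
    print('SKA not set; skipping Chandra.cmd_states tests')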
Also, right now there are no tests run on the modules that are dependencies of testr (Ska.Shell, Ska.File, pyyaks, ska_path?), as those modules wanted testr to run their tests, but testr hadn't been built yet, and I didn't see a great way to bootstrap that.
Also, regarding potential automated remote builds, we'd have a bunch more hurdles there. Right now these recipes expect to find git repositories in a local directory, etc.
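To make that concrete, the source sections currently look roughly like this (the local path and the remote URL below are illustrative, not the actual values):

# current: build from a local checkout
source:
  git_url: /data/git/cmd_states

# what an automated remote build would want instead, roughly:
# source:
#   git_url: https://github.com/sot/cmd_states.git
#   git_tag: "3.11"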