Make sure optional dependencies are optional for tests #260
Conversation
self.tuple = (1, 2, "a", A())
self.list = [1, 1.1, "a", 1j, self.inst]
self.dict = {"a": 1, "b": 2, "ref": self.inst}
self.ref = self.numeric
This was a bit awkward, but these classes should only be defined if we have numpy, so I moved them inside the above if statement. There is probably a cleaner way to do this
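The wrap-in-a-condition approach described above can be sketched as follows. The class and attribute names here are illustrative, not the actual test module: the fixture classes are defined only when NumPy imports successfully, and the test case is skipped otherwise.

```python
import unittest

try:
    import numpy
except ImportError:
    numpy = None

if numpy is not None:
    # Fixture classes that depend on NumPy are only defined when it is
    # available, so importing this module never fails.
    class A:
        pass

    class HasState:
        def __init__(self):
            self.numeric = numpy.arange(4)
            self.ref = self.numeric


@unittest.skipIf(numpy is None, "NumPy not installed")
class TestStatePickler(unittest.TestCase):
    def test_ref_is_shared(self):
        obj = HasState()
        self.assertIs(obj.ref, obj.numeric)
```

Note that the module itself always imports cleanly; only the tests are skipped when the dependency is missing.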
Since all of the tests in this module require NumPy, this is a case where one may consider raising SkipTest at the module level, like this: https://github.com/enthought/traitsui/blob/464b20200eef30f23317ffd3f6a125520329f7e2/traitsui/tests/ui_editors/test_data_frame_editor.py#L11-L14
With that, when we run the tests with discover, the entire module will be skipped. And if one wants to run the test module on its own, e.g. python -m unittest apptools.persistence.tests.test_state_pickler, they ought to have NumPy, because the subject state_pickler requires NumPy. It is understandable to get an exception there.
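The module-level skip suggested here can be sketched like so (the test case name is illustrative). Raising SkipTest at import time makes unittest discovery record the whole module as skipped rather than erroring out:

```python
import unittest

try:
    import numpy
except ImportError:
    # Raised at import time: "unittest discover" records the entire
    # module as skipped instead of reporting an import error.
    raise unittest.SkipTest("NumPy not installed; skipping test module")


class TestStatePickler(unittest.TestCase):
    def test_arange(self):
        self.assertEqual(numpy.arange(3).tolist(), [0, 1, 2])
```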
I think I flip-flop between the two (wrapping code in a condition vs. raising SkipTest at the module level) depending on the situation (e.g. how optional the dependency is, how much code needs to be wrapped in an if-block, ...). This is an option for consideration.
The approach here means that in both run scenarios, unittest discover and unittest <module-name>, the tests are consistently skipped in the absence of NumPy.
In this particular case I went with the approach you described as it is cleaner IMO. For the rest I left them as is.
I am curious to test the new news fragment system with this scenario, as this is a PR that will need to be backported for the 5.0.0 release and it has a news fragment.
LGTM. Thank you.
There is an alternative way to skip the entire test module for you to consider. (I don't have a particular preference over which one.)
@kitchoi still looking okay to merge?
Yes. Thank you!
* make sure optional dependencies are optional for tests
* add a news fragment
* apply suggestion from code review
* Make sure optional dependencies are optional for tests (#260)
* make sure optional dependencies are optional for tests
* add a news fragment
* apply suggestion from code review
* Add extras_require to setup.py for optional dependencies (#257)
* add optional dependencies for specific sub packages to extras_require
* add details to README about optional dependencies
* list io imports as standard library imports
* add preferences/configobj to extras_require as well (leave configobj in install_requires too)
* add specific install instructions
* formatting
* remove unneeded back ticks
* move install instructions into a separate section
* add news fragment
* add PR number to news fragment
* Remove image license files (#262)
* remove image license files
* move internal image license file to root directory
* add image_LICENSE_CP.txt and include it in manifest
* Apply suggestions from code review
* Add a new category for build system related changes (#261)
* update changelog

Co-authored-by: Poruri Sai Rahul <rporuri@enthought.com>
Co-authored-by: Kit Choi <kitchoi@users.noreply.github.com>
Fixes #99

This PR uses the existing apptools._testing.optional_dependencies to ensure that optional dependencies are optional for tests. The test suite should no longer fail if certain optional dependencies are not installed.

Checklist
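For context, the kind of helper a module like apptools._testing.optional_dependencies provides can be approximated with the small sketch below. This is a hypothetical reconstruction of the general pattern, not the actual apptools API; the function name optional_import is an assumption for illustration.

```python
import importlib
import unittest


def optional_import(name):
    """Hypothetical helper: return (module_or_None, skip_decorator)
    for an optional dependency, so tests can share one import check."""
    try:
        module = importlib.import_module(name)
    except ImportError:
        module = None
    skip = unittest.skipIf(module is None, "%s not installed" % name)
    return module, skip


# Tests then import these names instead of importing numpy directly:
numpy, requires_numpy = optional_import("numpy")
```

A test module would decorate its test cases with requires_numpy, keeping the optional-import logic in one place instead of repeating try/except blocks.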