
Missing tests #354

Closed · DavidBAEpstein opened this issue May 30, 2022 · 6 comments

DavidBAEpstein (Contributor) commented May 30, 2022
Description

Both Issues #351 and #352 indicate that our testing regime is inadequate.
For now, I will discuss only a test associated with #352. I would do this in bash as follows.

  1. Produce a random name for an environment, for example ran357.
  2. Make a copy of requirements.dev.conda.yml named ran357.yml.
  3. Use sed to replace tiatoolbox.dev with ran357.
  4. Save the error output from 'conda env create --file ran357.yml'.
  5. Assert that the error output is empty (sketched below).

Please let me know how to proceed, as I have never written a test before. Our OneNote notebook assumes that only Python files are being tested.
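A minimal bash sketch of these five steps might look like the following. It assumes GNU sed, that it is run from the repository root, and that the environment name appearing in requirements.dev.conda.yml is tiatoolbox.dev, as described in step 3; it is a sketch, not a tested implementation.

```bash
#!/usr/bin/env bash
set -euo pipefail

env_name="test$RANDOM"                                 # 1. random environment name
cp requirements.dev.conda.yml "$env_name.yml"          # 2. copy the requirements file
sed -i "s/tiatoolbox.dev/$env_name/" "$env_name.yml"   # 3. rename the environment

# 4. create the environment, capturing the error output
conda env create --file "$env_name.yml" 2> err.log || true

# 5. assert the error output is empty
if [ -s err.log ]; then
    echo "conda env create produced errors:" >&2
    cat err.log >&2
    exit 1
fi

conda env remove --name "$env_name"                    # clean up
rm -f "$env_name.yml" err.log
```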

The other requirements files could possibly be tested in the same way.

The other failure of requirements.dev.conda.yml that I experienced was the failure of 'make html'. In this case there is not much point in checking the error output; instead, one could check that docs/_build/html exists and is not empty, as in the sketch below.
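For example, assuming the documentation is built from the repository root with make run against the docs directory:

```bash
# Build the documentation.
make -C docs html

# Assert that the HTML output directory exists and is not empty.
if [ ! -d docs/_build/html ] || [ -z "$(ls -A docs/_build/html)" ]; then
    echo "docs/_build/html is missing or empty" >&2
    exit 1
fi
```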

shaneahmed (Member) commented May 30, 2022

Hi David. Re #352: our tests run using the pip installation file. The conda installation takes a lot of time on Read the Docs and Travis, and is therefore not tested automatically. However, we should have tested the conda requirements file before making those changes manually. I have made a note of this, and in future any update to the conda requirements will need to be checked manually on Windows/Mac/Linux.

To resolve #352, I have created #353. Please check this file, as it should fix the problem.

shaneahmed (Member) commented

Re #351, we wanted to set up automated testing for Jupyter notebooks, but we haven't added those tests yet. We can discuss this in the meeting.
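For example, a single notebook could be smoke-tested by executing it end-to-end with nbconvert, which fails if any cell raises an error; the notebook path here is only a placeholder:

```bash
# Execute the notebook in place and discard the converted output;
# a non-zero exit code means some cell failed.
jupyter nbconvert --to notebook --execute --stdout examples/some_notebook.ipynb > /dev/null
```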

DavidBAEpstein (Contributor, Author) commented

Good. If we test Jupyter notebooks, we need to take into account that several of them invite the user to change the source code (switching ON_GPU between True and False). Both versions need to be tested. The default should be ON_GPU = True, and this is what the master branch should hold.

The ON_GPU = False version should be generated automatically and then checked separately (see the sketch below); however, the False version could take a long time to run.
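For instance, the False variants could be generated along these lines; the examples/ path and the exact "ON_GPU = True" spelling inside the notebook source are assumptions:

```bash
# Generate an ON_GPU = False copy of each notebook for separate testing.
for nb in examples/*.ipynb; do
    sed 's/ON_GPU = True/ON_GPU = False/' "$nb" > "${nb%.ipynb}_cpu.ipynb"
done
```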
Another problem, this time with ON_GPU = True: @vqdang tells me that, in certain circumstances, the mechanism Python uses when the number of workers is changed can lead to overflow on some systems but not on others. I believe that several members of the TIAToolbox team are aware of this.

John-P (Contributor) commented Jul 4, 2022

PR #392 may help to address some of these issues.

DavidBAEpstein (Contributor, Author) commented Jul 4, 2022 via email

shaneahmed linked a pull request on Jul 8, 2022 that will close this issue.
shaneahmed (Member) commented Aug 26, 2022

We have added tests for Jupyter notebooks where it was possible.
