
Bug? in 3.2.0 for tests using parallel mclapply #335

Closed
BenoitLondon opened this issue Sep 23, 2018 · 7 comments

BenoitLondon commented Sep 23, 2018

I have a function that uses parallel::mclapply, and in the tests I call it with ncores = 2.
It passed the tests and coverage worked fine until version 3.2.0 was released.
Now coverage fails, claiming that the parallel test crashes, even though the test itself does not break.

If I change the argument in the tests to ncores = 1, coverage works again...

It is hard to give my example here, but if necessary I can try to recreate the bug (it looks like parallel breaks something in covr?). Maybe it is intentional, as in "if in covr mode we should not use multicore", but I didn't know that at the time...
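For context, a minimal sketch of the kind of function involved is below; the function name, signature, and body are hypothetical, not the reporter's actual code. Note that mclapply forks worker processes on Unix-alikes only; on Windows mc.cores must be 1.

```r
library(parallel)

# Hypothetical example: a function that fans work out over forked
# workers via parallel::mclapply (the pattern the report describes).
par_sum_squares <- function(xs, ncores = 1) {
  # With ncores = 1, mclapply behaves like a plain lapply;
  # with ncores = 2 it forks two child processes (Unix-alikes only).
  parts <- mclapply(xs, function(x) x^2, mc.cores = ncores)
  sum(unlist(parts))
}

par_sum_squares(1:10, ncores = 2)  # 385
```

Calling such a function with ncores = 1 in the tests, as the reporter notes, avoided the failure.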

@BenoitLondon BenoitLondon changed the title Bug in new version for tests using parallel mclapply Bug? in 3.2.0 for tests using parallel mclapply Sep 23, 2018
BenoitLondon (Author) commented:

Actually, I just found this thread saying that parallel code is not tracked by covr:
https://stat.ethz.ch/pipermail/bioc-devel/2015-August/007905.html

BenoitLondon (Author) commented:

I guess you can close the issue; maybe a warning when encountering parallel code would be handy :)
Thanks for this great package!

jimhester (Member) commented:

If you give me the package you are trying to run I can look at it.


rdiaz02 commented Sep 24, 2018

I just ran into the same situation (package OncoSimulR's tests were exhausting the memory on both my laptop and Travis CI --- https://api.travis-ci.org/v3/job/432504775/log.txt; this did not happen with the same code under covr 3.1.0).

The following test.something.R file (in, say, the package's tests/testthat directory) shows the problem.

library(parallel)

test_that("1", {
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2)
})

test_that("2", {
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2)
})

test_that("3", {
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2)
})

test_that("4", {
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2)
})

test_that("5", {
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2)
})

If one runs this under covr 3.2.0, one can see more than 20 R processes running. Under covr 3.1.0 one only ever sees the expected three (parent + 2 children).

It looks like something gets confused and multiple mclapply calls (each respecting mc.cores = 2, though) are being run recursively. If one adds mc.allow.recursive = FALSE to the mclapply call, the problem disappears.

I am not sure whether this is an issue with covr or with mclapply. It certainly did not happen with covr 3.1.0 (or earlier).
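Applied to one of the repro tests above, the workaround looks like this (a sketch assuming the same testthat setup; expect_length is just a convenient assertion):

```r
library(parallel)
library(testthat)

test_that("no recursive forking", {
    # mc.allow.recursive = FALSE prevents an already-forked child from
    # forking again, which suppressed the runaway process tree seen
    # under covr 3.2.0.
    u <- mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
                  mc.cores = 2, mc.allow.recursive = FALSE)
    expect_length(u, 100)
})
```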

These are two screenshots of the output of ps -A -f --forest | grep "exec/R -f testthat.R". The first is with covr 3.2.0 (notice the multiple nested pairs of processes):

[screenshot: process tree under covr 3.2.0]

and the second with covr 3.1.0, or with covr 3.2.0 plus mc.allow.recursive = FALSE:

[screenshot: process tree under covr 3.1.0]


rdiaz02 commented Sep 24, 2018

It is not just mc.allow.recursive = FALSE, though; something else seems to be going on. With covr 3.2.0, even with mc.allow.recursive = FALSE, the code above takes much longer to run than with version 3.1.0 (elapsed time more than doubles).

(Trying this "for real" with OncoSimulR, with the mclapply call fixed to use mc.allow.recursive = FALSE, package coverage runs in about 5 minutes with covr 3.1.0, but had not finished after 20 minutes with covr 3.2.0.)
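One way to quantify that slowdown is to time a single repro call directly with system.time (a sketch; in the reports above the same code was additionally run under covr::package_coverage, which is where the extra elapsed time appears):

```r
library(parallel)

# Time one of the repro calls outside covr: 100 tasks of ~0.1 s
# spread over 2 cores should take roughly 5 s of elapsed time.
t <- system.time(
    mclapply(1:100, function(i) {Sys.sleep(0.1); runif(i)},
             mc.cores = 2, mc.allow.recursive = FALSE)
)
t["elapsed"]
```

Comparing this baseline against the same call run under coverage is what makes the more-than-doubled elapsed time visible.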

jimhester (Member) commented:

@rdiaz02, thank you for providing a package that illustrates the issue; I was able to reproduce it and fix it yesterday. There was an issue with how the mcexit patch was being applied.


rdiaz02 commented Sep 25, 2018

@jimhester thank you for the fast fix (and the package).
