Running out of memory when using future_lapply within another software #670
-
Hi, I am creating a land-use change model that links R with a dedicated environmental modelling software called Dinamica EGO. Dinamica EGO can run either R or Python code from inside its models by launching its own local sessions. I have a section of R code that optionally runs a parallel process using a `future_lapply()` loop (see the code chunk below) or can perform the same task using sequential processing. I have obviously tested the code in isolation in R and the parallel processing seems to work fine in terms of clearing the memory of the workers (I include …). However, when I run this code as part of my model in Dinamica, it seems like the memory of the workers is not being cleared, because I receive an error along the lines of: "No process exists with this PID, i.e. the localhost worker is no longer alive", which I believe can indicate that a memory limit has been reached? This problem could be related to how the Dinamica EGO software creates its local R sessions, and I have reached out to the software team to ask. Another important detail is that the Dinamica model itself is being run using a ….

As I said, a hyper-specific use case, but I thought I would create a topic in case someone has some advice for me. The code chunk used in the R session started by Dinamica EGO:
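A minimal sketch of the kind of `future_lapply()` loop being described, with the actual model step replaced by a placeholder. Only `Model_lookup`, `packs`, `Trans_name`, and `Trans_ID` are names that appear in this thread; the sample data, worker count, and function body are assumptions:

```r
library(future.apply)  # attaches future as well

# Local background R workers, like the sessions described above
plan(multisession, workers = 2)

# Hypothetical stand-in for the real lookup table of transitions
Model_lookup <- data.frame(
  Trans_name = c("forest_to_crop", "crop_to_urban"),
  Trans_ID   = c(1L, 2L)
)

results <- future_lapply(seq_len(nrow(Model_lookup)), function(i) {
  Trans_name <- Model_lookup[i, "Trans_name"]
  Trans_ID   <- Model_lookup[i, "Trans_ID"]
  out <- paste0(Trans_name, " (ID ", Trans_ID, ")")  # placeholder for the model step
  gc()  # encourage the worker to release memory before the next chunk
  out
})

plan(sequential)  # shut down the background workers
```

Note that because the anonymous function references `Model_lookup` directly, the whole table is exported to every worker, which is relevant to the reply below.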
-
I am a novice in futureverse, but I have a few questions that might help others help you:
-
Looking at your code snippet, each `future_lapply()` iteration uses only one row of `Model_lookup`, e.g. `Model_lookup[i, "Trans_name"]`. This means you can avoid exporting the full `Model_lookup` object. I recommend using `future_by()` instead, which has the capability of iterating over rows, e.g.

```r
results <- future.apply::future_by(
  Model_lookup,
  INDICES = seq_len(nrow(Model_lookup)),
  future.packages = packs,
  FUN = function(row) {
    # vector trans_name
    Trans_name <- row[1, "Trans_name"]
    Trans_ID <- row[1, "Trans_ID"]
    ...
  }
)
```

Because future.apply chunks up the iterations, this construct will export only the chunk of rows that are needed, instead of the full `Model_lookup` object. This scales much better.

By using …

You could also try to use …

BTW, instead of …

PS. @blenback, please see https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks for how to format your code blocks using Markdown. You can edit your existing comments to fix this.
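For reference, the snippet above can be completed into a self-contained, runnable form. The `plan()` call, the sample `Model_lookup`, the contents of `packs`, and the function body are assumptions standing in for the real model step:

```r
library(future.apply)  # attaches future as well

plan(multisession, workers = 2)

# Hypothetical packages each worker should attach
packs <- "stats"

# Hypothetical stand-in for the real lookup table of transitions
Model_lookup <- data.frame(
  Trans_name = c("forest_to_crop", "crop_to_urban"),
  Trans_ID   = c(1L, 2L)
)

# future_by() splits the data frame by INDICES, so each worker receives
# only its own chunk of rows rather than the whole Model_lookup object.
results <- future.apply::future_by(
  Model_lookup,
  INDICES = seq_len(nrow(Model_lookup)),
  future.packages = packs,
  FUN = function(row) {
    Trans_name <- row[1, "Trans_name"]
    Trans_ID <- row[1, "Trans_ID"]
    paste0(Trans_name, " (ID ", Trans_ID, ")")  # placeholder for the model step
  }
)

plan(sequential)  # shut down the background workers
```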