This is more or less a repeat of my comment in this issue.
RoadRunner currently suffers from slow build times, which is noticeable when building lots of models. While #921 most likely mitigates some of this, RoadRunner still needs to compile every model, which is an expensive operation. The new LLJit has a 'number of threads' option, but despite setting this to 15, it seems that we are still not using the available computational resources effectively:
The problem is that although we may be using many threads, we are still using only one core. One way to make better use of the available resources is to build a RoadRunner container with an API for batch loading/simulating models. Under the hood, we would do the work in parallel, using a queue to limit the number of active cores to a user-specified number. Moreover, this would lower the "activation energy" required for natively supporting HPC scheduling systems like SGE or Slurm.
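To illustrate the "queue limiting active cores" idea, here is a minimal sketch of a fixed-size worker pool draining a job queue. This is not RoadRunner code: buildAll and the "built:" placeholder stand in for the expensive model-compilation step, and the real implementation would construct RoadRunner instances instead of strings.

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Hypothetical sketch: build models from SBML files with at most
// numWorkers running concurrently. A shared queue hands out jobs;
// each worker loops until the queue is empty.
std::vector<std::string> buildAll(const std::vector<std::string>& sbmlFiles,
                                  unsigned numWorkers) {
    std::queue<std::string> jobs;
    for (const auto& f : sbmlFiles) jobs.push(f);

    std::mutex m;
    std::vector<std::string> results;  // stands in for built models

    auto worker = [&] {
        for (;;) {
            std::string file;
            {
                std::lock_guard<std::mutex> lk(m);
                if (jobs.empty()) return;  // no work left, worker exits
                file = jobs.front();
                jobs.pop();
            }
            // Placeholder for the expensive compile/load step,
            // done outside the lock so workers run in parallel.
            std::string model = "built:" + file;
            {
                std::lock_guard<std::mutex> lk(m);
                results.push_back(std::move(model));
            }
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < numWorkers; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    return results;
}
```

Because the pool size is an argument, the same mechanism caps core usage to whatever the user (or an HPC scheduler allocation) specifies.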
A rough API (assuming we choose a vector/array-like container) would be something like:
#include "RoadRunnerVector.h"
using namespace rr;
RoadRunnerVector rrv("path/to/sbml/directory");
// OR
RoadRunnerVector rrv({sbmlFile1, sbmlFile2});
// access and simulate a single model, by index or by name
rrv[0]->simulate(...);
rrv["MyModel"]->simulate(...);
// batch operations across all models, run in parallel
rrv.simulate(...);
rrv.steadyState(...);