
🐕 Batch: Running the same model in parallel #1223

Open
miquelduranfrigola opened this issue Aug 5, 2024 · 2 comments

@miquelduranfrigola
Member

Summary

As we work on running more than one Ersilia model in parallel, @JHlozek highlighted the scenario where we want to run the same model in multiple processes/terminals. This would be a very interesting case to support in repositories like Olinda, where we need to run precalculations across a large set of inputs.

Objective(s)

  1. Run the same model in multiple terminals/processes (i.e. sessions).
  2. Optionally, make sure parallelization works both in docker and in conda serving modes.
  3. Ideally, parallelization should work both from the CLI (i.e. ersilia run -i ...) and from the Python API (mdl.run(...)). Parallelizing via the Python API may be more difficult, and it is less critical.
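To make the CLI scenario concrete, here is a minimal sketch of how a user might dispatch the same model over a large input set from several processes at once. It assumes the CLI shape mentioned above (`ersilia run -i ... -o ...`); the chunking helper, file names, and model identifier are hypothetical, and each worker is expected to operate in its own session.

```python
# Sketch: split a large input set into chunks and launch one `ersilia run`
# process per chunk. The `chunk` helper and file naming are illustrative only.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def chunk(items, n):
    """Split `items` into at most `n` roughly equal, non-empty chunks."""
    k, r = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        out.append(items[start:end])
        start = end
    return [c for c in out if c]

def run_chunk(idx, inputs):
    """Write one chunk to disk and run the model on it in a subprocess.

    Hypothetical: assumes each invocation gets its own session, which is
    exactly what this issue sets out to verify.
    """
    in_path, out_path = f"chunk_{idx}.csv", f"out_{idx}.csv"
    with open(in_path, "w") as f:
        f.write("\n".join(inputs))
    return subprocess.run(
        ["ersilia", "run", "-i", in_path, "-o", out_path],
        capture_output=True, text=True,
    )

if __name__ == "__main__":
    # Placeholder SMILES inputs; in practice this would be the Olinda input set.
    smiles = ["CCO", "CCN", "CCC", "c1ccccc1", "CO"]
    parts = chunk(smiles, 2)
    print(parts)  # → [['CCO', 'CCN', 'CCC'], ['c1ccccc1', 'CO']]
    # To actually dispatch the workers (requires ersilia installed and served):
    # with ThreadPoolExecutor(max_workers=len(parts)) as pool:
    #     pool.map(lambda p: run_chunk(*p), enumerate(parts))
```

Whether the parallel `ersilia run` invocations interfere with each other is precisely what needs testing; the chunking itself is model-agnostic.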

Documentation

No specific documentation is available for this yet; we should cover parallelization in our main documentation on GitBook and in the README file.

@miquelduranfrigola
Member Author

@DhanshreeA what's your take on this? Is this something that may be relatively easy to address?

@DhanshreeA
Member

I am hoping this works out of the box, since all session artifacts go into their respective folders. I'll test this.
