Tackle distributed computing limitation with 50<x<600 nodes #82

Open
asier-gutierrez opened this issue May 3, 2021 · 1 comment
Labels: bug, enhancement, help wanted

Comments

@asier-gutierrez
Contributor

Memory and/or IO overflows when computing with large numbers of nodes.

We should discuss the safest and most sensible way of fixing this issue.

Examples of fixes:

  • Like in the Horovod library: a “... bandwidth-optimal ring ...”.
  • Multiple masters...

However, the enhancement should take into account that the distribution strategy depends on the size of the files. For instance, distributing computation over 600 nodes (1 master with 599 slaves) is not the same with 1 MB files as with 100 MB files: the 1m/599s strategy may work for 1 MB files but not for the other case. We have to find a strategy that distributes the work robustly and efficiently in all possible scenarios.
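As a starting point for that discussion, here is a minimal sketch of what size-aware strategy selection could look like. The function `choose_strategy`, the thresholds, and the strategy names are all hypothetical and only meant to illustrate the decision, not an actual implementation:

```python
import os

# Hypothetical thresholds; the real values would have to be measured on the cluster.
MAX_MASTER_BYTES = 50 * 1024**3   # data volume a single master can stream without overflowing
MAX_FANOUT = 50                   # number of slaves a single master can feed concurrently

def choose_strategy(paths, n_nodes):
    """Pick a distribution strategy from the total data volume and the node count."""
    total_bytes = sum(os.path.getsize(p) for p in paths)
    if n_nodes - 1 <= MAX_FANOUT and total_bytes <= MAX_MASTER_BYTES:
        return "single-master"   # current 1 master / N-1 slaves scheme
    if total_bytes <= MAX_MASTER_BYTES:
        return "multi-master"    # several masters, each serving a subset of slaves
    return "ring"                # bandwidth-optimal ring, as in Horovod
```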

@asier-gutierrez added the bug and enhancement labels on May 3, 2021
@asier-gutierrez added the help wanted label on May 6, 2021
@asier-gutierrez
Contributor Author

It is a problem of both IO and the master's RAM. Depending on the data, one problem will show up sooner than the other. In the case of BNE, when scaling the number of nodes, IO is the first problem to appear and RAM the second.

Proposed fixes:

  1. Change the architecture: M masters, S slaves, and D data nodes (the ones that will load the data and distribute it over the InfiniBand network).
  2. According to the documentation, a 200 GB local SSD is available as temporary storage during jobs ($TMPDIR=/scratch/tmp/[jobid]). We could have the master node stage data on the 200 GB SSD and actively add and remove it as it is consumed (see the sketch after the list of inconveniences). This is a little bit messy.
  3. Distribute the data that each node has to operate with equally across the nodes' local storage (tmpfs).

And the inconveniences:
2. A BSC-specific fix that will limit the number of concurrent nodes.
3. A less-BSC-specific (but still BSC-specific) fix that is likely to be slow, because tmpfs is HD-backed rather than SSD-backed.
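To make option 2 more concrete, here is a rough sketch of staging batches of files on the scratch SSD and removing them once dispatched. It assumes the $TMPDIR scratch area from the documentation; `stage_and_dispatch`, the `dispatch` callback, and the 180 GB budget are hypothetical placeholders, not existing project code:

```python
import os
import shutil

# $TMPDIR points at the 200 GB local scratch SSD during a job (per the docs quoted above).
SCRATCH = os.environ.get("TMPDIR", "/tmp")
SCRATCH_BUDGET = 180 * 1024**3  # keep some headroom below the 200 GB limit

def stage_and_dispatch(input_files, dispatch):
    """Copy input files onto the scratch SSD in batches, hand each batch to
    `dispatch` (hypothetical: e.g. send it to the slaves), then delete the
    staged copies so the scratch budget is never exceeded."""
    staged, used = [], 0
    for path in input_files:
        size = os.path.getsize(path)
        if staged and used + size > SCRATCH_BUDGET:
            dispatch(staged)
            for p in staged:
                os.remove(p)      # actively free SSD space before the next batch
            staged, used = [], 0
        local = shutil.copy(path, SCRATCH)
        staged.append(local)
        used += size
    if staged:
        dispatch(staged)
        for p in staged:
            os.remove(p)
```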
