Tackle distributed computing limitation with 50<x<600 nodes #82
Labels: bug, enhancement, help wanted
Memory and/or I/O overflows occur when computing with a large number of nodes.
We should discuss the safest and most robust way to fix this issue.
Examples of fixes:
However, the enhancement should take the distribution strategy into account depending on file size. For instance, distributing computation across 600 nodes (1 master with 599 slaves) is not the same for 1 MB files as for 100 MB files: the 1m599s strategy may work for 1 MB files but not for the larger ones. We have to find a strategy that distributes the work robustly and efficiently in all possible scenarios.
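One possible shape for such a strategy: pick the number of workers from the file size rather than always fanning out to the full pool. The sketch below is purely illustrative; the function name, the 8 MB target chunk size, and the 599-worker cap are assumptions, not anything from this repository.

```python
def plan_distribution(file_size_bytes, max_workers=599,
                      target_chunk_bytes=8 * 1024 * 1024):
    """Hypothetical heuristic: size the worker pool so each worker
    handles roughly `target_chunk_bytes`, so small files do not fan
    out to hundreds of slaves while large files use the full pool."""
    # Ceiling division: how many chunks of the target size we need.
    needed = -(-file_size_bytes // target_chunk_bytes)
    workers = max(1, min(needed, max_workers))
    # Bytes each worker actually ends up handling (ceiling division).
    chunk = -(-file_size_bytes // workers)
    return workers, chunk

# A 1 MB file stays on a single worker; a 100 MB file fans out,
# but only to as many workers as the chunk-size target justifies.
small = plan_distribution(1 * 1024 * 1024)
large = plan_distribution(100 * 1024 * 1024)
```

The point is that the 1m599s layout becomes one point on a spectrum chosen at runtime, instead of a fixed topology that overflows memory or I/O for some file sizes.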