deprecate or improve `@parallel for` (#19578)
Comments
cc: @andreasnoack
I'd vote for keeping one paradigm that works well. So +1 to deprecating `@parallel for`.
Our …
Considering @ararslan changed his vote, it is currently 4 to 1 in favor of removing `@parallel for`.
I agree with deprecating this. "Parallel for" syntax of some kind, I think, only makes sense with shared memory, since then there is at least hope of matching serial behavior. At the very least it would have to be renamed, since we have two forms of parallelism now. Certainly the reducing form can be deprecated to a parallel mapreduce. We already have most of the machinery there with the batching support in `pmap`.
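
A minimal sketch of that migration, written against the post-rename `Distributed` names; the batch size and worker count are illustrative:

```julia
# Hypothetical sketch: replacing the reducing form of @parallel with an
# explicit distributed mapreduce built on pmap.
using Distributed
addprocs(4)

# The reducing form under discussion:
#     s = @parallel (+) for i in 1:1_000_000
#         i^2
#     end

# pmap over explicit batches: each worker reduces its batch locally,
# and the partial sums are reduced once more on the master.
batches = collect(Iterators.partition(1:1_000_000, 100_000))
partials = pmap(b -> sum(i -> i^2, b), batches)
s = sum(partials)
```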
Now that you mention it, usage in the context of SharedArrays is most appropriate. Suggest we deprecate it for one cycle, and plan on moving it to SharedArrays (just the file for now, under a SharedArrays module when we have it). The documentation can be updated right away to restrict its mention to SharedArrays only.
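
For context, a minimal sketch of the SharedArrays pattern being described, assuming the 1.0-era spelling (`@distributed` rather than `@parallel`):

```julia
# Each iteration writes a disjoint slot of a SharedArray visible to all
# local workers, so the loop has a real chance of matching serial behavior.
using Distributed
addprocs(4)
using SharedArrays   # loaded after addprocs so the workers get it too

a = SharedArray{Float64}(100)
@sync @distributed for i in 1:length(a)
    a[i] = i^2       # disjoint writes: no two iterations touch the same slot
end
sum(a)               # == sum((1:100).^2)
```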
This doesn't seem quite ready or release blocking. Sorry, @amitmurthy!
Please don't deprecate `@parallel for`.
We should sort out which names refer to distributed vs. shared memory for 1.0.
I suggest simply renaming `@parallel` to `@distributed`.
Hi, I am a PhD student in parallel and distributed computing, and just recently found out about the effort of including native parallel and distributed support in the language. I think it is great. I do suggest, though, that you keep distributed/parallel macros (OpenMP style) and functional-programming map/reduce, since they are part of two different programming styles. But I think that the reduce operation should also be handled by a specialization of the reduce function; a map_reduce function would be inconsistent. I also think that using local parallelism, with coroutines or threads, should be simpler and more straightforward than distributed parallelism, because the need for local parallelism is much more common. What I think is missing from multi-threading are critical sections and atomic instructions. Do you intend to implement them for 1.0?
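
Both exist in the 1.0-era `Base.Threads` API. A minimal sketch (run with `JULIA_NUM_THREADS` set above 1; the loop body is illustrative):

```julia
using Base.Threads

counter = Atomic{Int}(0)        # atomic integer, no lock required
lk = Threads.SpinLock()         # lock for an explicit critical section
results = Float64[]

@threads for i in 1:1000
    atomic_add!(counter, 1)     # atomic instruction
    lock(lk)                    # critical section guarding the push!
    try
        push!(results, sqrt(i))
    finally
        unlock(lk)
    end
end

counter[]                       # == 1000
```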
And the resolution is? Neither `@parallel` nor `@distributed` works for me.
It's called `@distributed` now.
Got it. Didn't realize I had to use `using Distributed` to get it.
1.0 moved a lot of stuff out of `Base` and into standard libraries; the release notes list the moves.
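
Concretely, the post-1.0 spelling looks like this (a sketch; the worker count is arbitrary):

```julia
# @parallel became @distributed and now lives in the Distributed stdlib,
# so it has to be loaded explicitly.
using Distributed
addprocs(2)

# Reducing form:
total = @distributed (+) for i in 1:10_000
    i^2
end

# Non-reducing form: @sync makes it block and surface worker errors.
@sync @distributed for i in 1:10
    println("iteration $i on worker $(myid())")
end
```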
Thanks, schnaltterm. Lots of good info in those release notes.
Could it be possible to add …? Also, just to share some user experience: my only shaky awareness of parallelism is through the ubiquitous `@parallel`.
I'll close this as the renaming to `@distributed` has been done.
This issue is to debate changes to `@parallel for`, if any.

Options:

1. Deprecate `@parallel for`. The syntax, while convenient, differs from a typical `for` loop w.r.t. the iterables that can be supported, local variables, error handling, etc. The unstated assumption that an `@parallel for` is just a `for` loop executed in parallel is not always correct. `pmap` with batching can be used in scenarios where `@parallel for` was previously used, with the additional benefits of retrying on errors and elastic nodes (since `pmap` uses worker pools).

2. Keep `@parallel for`, with some changes (see the sketch after this list):

   a. Make the non-reducer case blocking by default. Today you need to prefix a `@sync` to detect errors which would otherwise be discarded silently. See "Optimization to ALS-WR" abhijithch/RecSys.jl#36 (comment).
   b. Generalize it to support more types of iterators.
   c. Any other improvements?
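
A sketch of the behavior option 2a describes, plus the `pmap` alternative from option 1 (the retry delays and worker count are illustrative):

```julia
# The non-reducing @parallel for returns immediately; worker errors are
# silently dropped unless the caller remembers @sync:
#
#     @parallel for i in 1:4
#         error("boom")    # lost
#     end
#
#     @sync @parallel for i in 1:4
#         error("boom")    # raised on the caller
#     end
#
# pmap surfaces errors by default and can retry failed elements:
using Distributed
addprocs(4)

result = pmap(1:4; retry_delays = fill(1.0, 3)) do i
    rand() < 0.1 && error("simulated transient failure on $i")
    i^2
end
```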