Replace @async mentions in manual with Threads.@spawn (#55315)
Satvik authored Aug 5, 2024
1 parent 40ecf69 commit 4200203
Showing 9 changed files with 28 additions and 28 deletions.
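For context on why the rename matters: `@async` creates a *sticky* task pinned to the thread that spawned it, while `Threads.@spawn` creates a non-sticky task the scheduler may run on any thread. A minimal sketch of the distinction, plus the equivalence the manual states below (illustrative, not part of the diff):

```julia
# `@async` schedules a sticky task: it stays on the spawning thread.
t1 = @async 1 + 1
# `Threads.@spawn` schedules a non-sticky task: it may run on any thread.
t2 = Threads.@spawn 1 + 1

fetch(t1)     # 2
fetch(t2)     # 2
t1.sticky     # true  -- pinned to the spawning thread
t2.sticky     # false -- free to migrate across threads

# The manual's stated equivalence for `Threads.@spawn x`, roughly:
t3 = @task 1 + 1
t3.sticky = false
schedule(t3)
fetch(t3)     # 2
```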
2 changes: 1 addition & 1 deletion doc/src/base/parallel.md
@@ -138,7 +138,7 @@ end
 ev = OneWayEvent()
 @sync begin
-    @async begin
+    Threads.@spawn begin
         wait(ev)
         println("done")
     end
6 changes: 3 additions & 3 deletions doc/src/devdocs/probes.md
@@ -206,7 +206,7 @@ Now we can start `bpftrace` and have it monitor `rt__new__task` for *only* this

 And if we spawn a single task:

-`@async 1+1`
+`Threads.@spawn 1+1`

 we see this task being created:

@@ -215,8 +215,8 @@ we see this task being created:
 However, if we spawn a bunch of tasks from that newly-spawned task:

 ```julia
-@async for i in 1:10
-    @async 1+1
+Threads.@spawn for i in 1:10
+    Threads.@spawn 1+1
 end
 ```

10 changes: 5 additions & 5 deletions doc/src/manual/asynchronous-programming.md
@@ -64,8 +64,8 @@ the next input prompt appears. That is because the REPL is waiting for `t`
 to finish before proceeding.

 It is common to want to create a task and schedule it right away, so the
-macro [`@async`](@ref) is provided for that purpose --- `@async x` is
-equivalent to `schedule(@task x)`.
+macro [`Threads.@spawn`](@ref) is provided for that purpose --- `Threads.@spawn x` is
+equivalent to `task = @task x; task.sticky = false; schedule(task)`.

 ## Communicating with Channels

@@ -186,7 +186,7 @@ A channel can be visualized as a pipe, i.e., it has a write end and a read end :

 # we can schedule `n` instances of `foo` to be active concurrently.
 for _ in 1:n
-    errormonitor(@async foo())
+    errormonitor(Threads.@spawn foo())
 end
 ```
 * Channels are created via the `Channel{T}(sz)` constructor. The channel will only hold objects
@@ -264,10 +264,10 @@ julia> function make_jobs(n)
 julia> n = 12;

-julia> errormonitor(@async make_jobs(n)); # feed the jobs channel with "n" jobs
+julia> errormonitor(Threads.@spawn make_jobs(n)); # feed the jobs channel with "n" jobs

 julia> for i in 1:4 # start 4 tasks to process requests in parallel
-           errormonitor(@async do_work())
+           errormonitor(Threads.@spawn do_work())
        end

 julia> @elapsed while n > 0 # print out results
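The jobs/results hunks above are collapsed in this view; for orientation, the full pattern they edit can be sketched end-to-end like this (the `make_jobs`/`do_work` bodies are illustrative reconstructions, not the manual's exact code):

```julia
const jobs = Channel{Int}(32)
const results = Channel{Tuple}(32)

function do_work()
    for job_id in jobs
        exec_time = rand()            # simulate a unit of work
        put!(results, (job_id, exec_time))
    end
end

function make_jobs(n)
    for i in 1:n
        put!(jobs, i)
    end
end

n = 12
errormonitor(Threads.@spawn make_jobs(n))   # feed the jobs channel with "n" jobs
for i in 1:4                                # 4 tasks process requests in parallel
    errormonitor(Threads.@spawn do_work())
end
while n > 0                                 # drain the results
    job_id, exec_time = take!(results)
    n -= 1
end
close(jobs)                                 # let the worker loops terminate
```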
8 changes: 4 additions & 4 deletions doc/src/manual/distributed-computing.md
@@ -123,7 +123,7 @@ An important thing to remember is that, once fetched, a [`Future`](@ref Distribu
 locally. Further [`fetch`](@ref) calls do not entail a network hop. Once all referencing [`Future`](@ref Distributed.Future)s
 have fetched, the remote stored value is deleted.

-[`@async`](@ref) is similar to [`@spawnat`](@ref), but only runs tasks on the local process. We
+[`Threads.@spawn`](@ref) is similar to [`@spawnat`](@ref), but only runs tasks on the local process. We
 use it to create a "feeder" task for each process. Each task picks the next index that needs to
 be computed, then waits for its process to finish, then repeats until we run out of indices. Note
 that the feeder tasks do not begin to execute until the main task reaches the end of the [`@sync`](@ref)
@@ -657,7 +657,7 @@ julia> function make_jobs(n)
 julia> n = 12;

-julia> errormonitor(@async make_jobs(n)); # feed the jobs channel with "n" jobs
+julia> errormonitor(Threads.@spawn make_jobs(n)); # feed the jobs channel with "n" jobs

 julia> for p in workers() # start tasks on the workers to process requests in parallel
            remote_do(do_work, p, jobs, results)
@@ -896,7 +896,7 @@ conflicts. For example:
 ```julia
 @sync begin
     for p in procs(S)
-        @async begin
+        Threads.@spawn begin
             remotecall_wait(fill!, p, S, p)
         end
     end
@@ -978,7 +978,7 @@ and one that delegates in chunks:
 julia> function advection_shared!(q, u)
            @sync begin
                for p in procs(q)
-                   @async remotecall_wait(advection_shared_chunk!, p, q, u)
+                   Threads.@spawn remotecall_wait(advection_shared_chunk!, p, q, u)
                end
            end
            q
6 changes: 3 additions & 3 deletions doc/src/manual/faq.md
@@ -943,7 +943,7 @@ Consider the printed output from the following:

 ```jldoctest
 julia> @sync for i in 1:3
-           @async write(stdout, string(i), " Foo ", " Bar ")
+           Threads.@spawn write(stdout, string(i), " Foo ", " Bar ")
        end
 123 Foo Foo Foo Bar Bar Bar
 ```
@@ -956,7 +956,7 @@ in the above example results in:

 ```jldoctest
 julia> @sync for i in 1:3
-           @async println(stdout, string(i), " Foo ", " Bar ")
+           Threads.@spawn println(stdout, string(i), " Foo ", " Bar ")
        end
 1 Foo Bar
 2 Foo Bar
@@ -969,7 +969,7 @@ You can lock your writes with a `ReentrantLock` like this:

 julia> l = ReentrantLock();

 julia> @sync for i in 1:3
-           @async begin
+           Threads.@spawn begin
               lock(l)
               try
                  write(stdout, string(i), " Foo ", " Bar ")
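The tail of this `ReentrantLock` example is collapsed by the diff; the complete pattern being edited looks roughly like this (the `finally`/`unlock` tail is reconstructed and may differ in detail from the manual):

```julia
l = ReentrantLock()
@sync for i in 1:3
    Threads.@spawn begin
        lock(l)
        try
            # the lock makes each three-part write atomic with
            # respect to the other tasks
            write(stdout, string(i), " Foo ", " Bar ")
        finally
            unlock(l)
        end
    end
end
```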
4 changes: 2 additions & 2 deletions doc/src/manual/methods.md
@@ -614,7 +614,7 @@ Start some other operations that use `f(x)`:
 julia> g(x) = f(x)
 g (generic function with 1 method)

-julia> t = @async f(wait()); yield();
+julia> t = Threads.@spawn f(wait()); yield();
 ```

 Now we add some new methods to `f(x)`:
@@ -639,7 +639,7 @@ julia> g(1)
 julia> fetch(schedule(t, 1))
 "original definition"

-julia> t = @async f(wait()); yield();
+julia> t = Threads.@spawn f(wait()); yield();

 julia> fetch(schedule(t, 1))
 "definition for Int"
14 changes: 7 additions & 7 deletions doc/src/manual/networking-and-streams.md
@@ -233,7 +233,7 @@ Let's first create a simple server:
 ```julia-repl
 julia> using Sockets

-julia> errormonitor(@async begin
+julia> errormonitor(Threads.@spawn begin
           server = listen(2000)
           while true
               sock = accept(server)
@@ -305,11 +305,11 @@ printed the message and waited for the next client. Reading and writing works in
 To see this, consider the following simple echo server:

 ```julia-repl
-julia> errormonitor(@async begin
+julia> errormonitor(Threads.@spawn begin
           server = listen(2001)
           while true
               sock = accept(server)
-              @async while isopen(sock)
+              Threads.@spawn while isopen(sock)
                   write(sock, readline(sock, keep=true))
               end
           end
@@ -319,7 +319,7 @@ Task (runnable) @0x00007fd31dc12e60
 julia> clientside = connect(2001)
 TCPSocket(RawFD(28) open, 0 bytes waiting)

-julia> errormonitor(@async while isopen(clientside)
+julia> errormonitor(Threads.@spawn while isopen(clientside)
           write(stdout, readline(clientside, keep=true))
       end)
 Task (runnable) @0x00007fd31dc11870
@@ -357,10 +357,10 @@ ip"74.125.226.225"

 All I/O operations exposed by [`Base.read`](@ref) and [`Base.write`](@ref) can be performed
 asynchronously through the use of [coroutines](@ref man-tasks). You can create a new coroutine to
-read from or write to a stream using the [`@async`](@ref) macro:
+read from or write to a stream using the [`Threads.@spawn`](@ref) macro:

 ```julia-repl
-julia> task = @async open("foo.txt", "w") do io
+julia> task = Threads.@spawn open("foo.txt", "w") do io
           write(io, "Hello, World!")
       end;
@@ -379,7 +379,7 @@ your program to block until all of the coroutines it wraps around have exited:
 julia> using Sockets

 julia> @sync for hostname in ("google.com", "github.com", "julialang.org")
-           @async begin
+           Threads.@spawn begin
               conn = connect(hostname, 80)
               write(conn, "GET / HTTP/1.1\r\nHost:$(hostname)\r\n\r\n")
               readline(conn, keep=true)
2 changes: 1 addition & 1 deletion doc/src/manual/performance-tips.md
@@ -1723,7 +1723,7 @@ using Distributed
 responses = Vector{Any}(undef, nworkers())
 @sync begin
     for (idx, pid) in enumerate(workers())
-        @async responses[idx] = remotecall_fetch(foo, pid, args...)
+        Threads.@spawn responses[idx] = remotecall_fetch(foo, pid, args...)
     end
 end
 ```
4 changes: 2 additions & 2 deletions doc/src/manual/running-external-programs.md
@@ -332,8 +332,8 @@ will attempt to store the data in the kernel's buffers while waiting for a reade
 Another common solution is to separate the reader and writer of the pipeline into separate [`Task`](@ref)s:

 ```julia
-writer = @async write(process, "data")
-reader = @async do_compute(read(process, String))
+writer = Threads.@spawn write(process, "data")
+reader = Threads.@spawn do_compute(read(process, String))
 wait(writer)
 fetch(reader)
 ```
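A runnable variant of the writer/reader split above, assuming a Unix `cat` on `PATH` and substituting `uppercase` for the manual's unspecified `do_compute`:

```julia
process = open(`cat`, "r+")       # bidirectional pipe to an external process

writer = Threads.@spawn begin
    write(process, "data")
    close(process.in)             # send EOF so the read below can complete
end
reader = Threads.@spawn uppercase(read(process, String))

wait(writer)
fetch(reader)                     # "DATA"
```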
