
Event Loop #14996

Open · wants to merge 56 commits into master

Conversation

@ysbaddaden (Contributor) commented Sep 12, 2024

Almost identical to #14959, but with a cleaner history and more documentation, which led me to identify issues in the arena.

Overall design

The logic of the event loop doesn't change much from libevent: we try to execute an operation (e.g. read, write, connect) on a nonblocking file descriptor; if the operation would block (EAGAIN), we create an event that references the operation along with the current fiber; we then rely on the polling system (epoll or kqueue) to report when the fd is ready, at which point we dequeue a pending event and resume its associated fiber (one at a time).
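As a rough sketch of that protocol (hypothetical names, not the PR's literal code; wait_readable stands in for the machinery that creates the event and suspends the fiber):

# Assumes fd was set to O_NONBLOCK, and that the hypothetical
# `wait_readable` enqueues an event {fd, :read, Fiber.current} and suspends
# the fiber until epoll/kqueue reports the fd as readable.
def evented_read(fd : Int32, slice : Bytes) : Int32
  loop do
    ret = LibC.read(fd, slice, slice.size)
    return ret.to_i32 if ret >= 0

    if Errno.value == Errno::EAGAIN
      wait_readable(fd) # would block: suspend the current fiber
    else
      raise IO::Error.from_errno("read")
    end
  end
end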

Unlike libevent, which adds and removes the fd to and from the polling system every time we'd block, this event loop adds it once and only removes it when closing the fd. The theory is that being notified for readiness (once, thanks to edge-triggered notifications) is less expensive than constantly modifying the polling system. In practice, in a purely IO-bound benchmark with long-running sockets, we measured up to a 20% performance improvement. Actual applications should see smaller gains.
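For illustration, the one-time registration on Linux could look like this (a sketch assuming LibC epoll bindings; type and constant names are assumptions):

# given: epoll_fd, fd, and the arena index of the fd's poll descriptor.
data = LibC::EpollDataT.new # union; field and type names assumed
data.u64 = arena_index.to_u64
event = LibC::EpollEvent.new(
  events: LibC::EPOLLIN | LibC::EPOLLOUT | LibC::EPOLLRDHUP | LibC::EPOLLET,
  data: data,
)
# Added once, edge-triggered; only removed (EPOLL_CTL_DEL) when the fd is
# closed, never modified between waits.
if LibC.epoll_ctl(epoll_fd, LibC::EPOLL_CTL_ADD, fd, pointerof(event)) == -1
  raise RuntimeError.from_errno("epoll_ctl")
end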

Implementation details

Unlike the previous attempts to integrate epoll and kqueue directly, which required a global events object and a global lock to protect it, this PR only needs fine-grained locks on each IO object and operation (read, write) to add events to the waiting lists. In practice, the locks should never be contended (unless you share a fd for the same read or write operation across multiple fibers).

Caveat: timers are still global to the event loop, and we need a global lock to protect them. This means that IO with a timeout will still see contention. Improving the timers data structure will come later (e.g. lock-free, more precise, faster operations).

To avoid keeping pointers to the IO object that could prevent the GC from collecting lost IO objects, this PR introduces "poll descriptor" objects (the name comes from Go's netpoll) that keep the lists of readers and writers and don't point back to the IO object. The GC collecting an IO object is fine: the finalizer will close the fd and tell the event loop to clean up the poll descriptor associated with the fd (so we can safely reuse the fd).

To avoid pushing raw pointers into kernel data structures, to quickly retrieve the poll descriptor from a mere fd, and to avoid programming errors that would segfault the program, this PR introduces a generational arena to store the poll descriptors, so we only store an index into the polling system. Another benefit is that we can reuse the existing allocation when a fd is reused. If we try to retrieve an outdated index (the allocation was freed or reallocated), the arena raises an explicit exception.
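A condensed sketch of the idea (simplified: the real arena also has to deal with locking and capacity):

class Arena(T)
  # An index packs a slot and a generation into a UInt64, so it fits in
  # epoll_event.data or kevent udata instead of a raw pointer.
  record Index, slot : Int32, generation : UInt32 do
    def to_u64 : UInt64
      (generation.to_u64 << 32) | slot.to_u64
    end
  end

  def initialize(capacity : Int32)
    @values = Array(T?).new(capacity, nil)
    @generations = Array(UInt32).new(capacity, 0_u32)
  end

  # Allocating at the fd's own slot lets us reuse the allocation when a fd
  # number is reused after close.
  def allocate_at(slot : Int32, value : T) : Index
    @values[slot] = value
    Index.new(slot, @generations[slot])
  end

  # An outdated index (slot freed or reallocated) raises explicitly instead
  # of returning stale data or segfaulting.
  def get(index : Index) : T
    if @generations[index.slot] == index.generation && (value = @values[index.slot])
      value
    else
      raise RuntimeError.new("arena: invalid index")
    end
  end

  def free(index : Index) : Nil
    return if @generations[index.slot] != index.generation
    @values[index.slot] = nil
    @generations[index.slot] &+= 1 # invalidates every outstanding index
  end
end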

The poll descriptors associate a fd to an event loop instance, so we can still have multiple event loops per process, yet make sure that a fd is only ever in one event loop. When a fd would block on another event loop instance, it is transferred automatically (i.e. removed from the old one and added to the new one). The benefits are numerous: it avoids multiple event loops being notified at the same time; it avoids having to close/remove the fd from each event loop instance; and it avoids cross event loop enqueues, which are much slower than local enqueues in RFC 2.

A limitation is that trying to move a fd from one evloop to another while there are pending waiters will raise an exception. We can't move timeout events along with the fd from one event loop instance to another, and doing so would also break the "always local enqueues" benefit.

Most applications shouldn't notice any impact from this design choice, since a fd is usually not shared across fibers (concurrency issues), except maybe a server socket with multiple accepting fibers. In that case you'll need to make sure the fibers are on the same thread (preview_mt) or in the same context (RFC 2).

Availability

We may want a compile time flag to enable the new event loop before we merge? For the time being this PR uses the shiny new evloop by default (to get CI runs) and introduces the :evloop_libevent flag to fall back to libevent.

TODO

A couple of changes before merge (but not blocking review):

  • Compile time flag(s):
    • the epoll/kqueue evloop is enabled by default on FreeBSD, Linux and macOS (tested and working), but disabled on the other BSDs (slow, broken or untested) and on Solaris (untested);
    • -Devloop=libevent to return to libevent (in case of regressions or to test/benchmark);
    • -Devloop=epoll to use epoll (e.g. Solaris);
    • -Devloop=kqueue to use kqueue (on *BSD);
  • EINTR: automatically retry epoll_wait or kevent with an infinite timeout (see the sketch after this list);
  • Remove the #try_run? and #try_lock? methods that are no longer needed.
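For the EINTR item, a rough sketch (assuming LibC epoll bindings; the kevent case is analogous):

# Retry epoll_wait when interrupted by a signal, but only when we asked for
# an infinite timeout (-1); otherwise the remaining time must be recomputed.
private def wait(epoll_fd : Int32, events : Slice(LibC::EpollEvent), timeout : Int32) : Int32
  loop do
    count = LibC.epoll_wait(epoll_fd, events, events.size, timeout)
    return count if count >= 0
    unless Errno.value == Errno::EINTR && timeout == -1
      raise RuntimeError.from_errno("epoll_wait")
    end
  end
end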

REGRESSIONS/ISSUES

  • Timers are noticeably slower than libevent (especially Fiber.yield): we should consider a minheap (4-heap) or a skiplist;

  • DragonFlyBSD: running std_spec is eons slower than with libevent; it regularly hangs on evloop.run until the stack pool collector timeout kicks in (i.e. loops on 5s pauses);

  • OpenBSD: running std_spec is noticeably slower (4:15 minutes) compared to libevent (1:16 minutes); it appears that the main fiber keeps re-enqueueing itself from the evloop run (10us on each run);

  • NetBSD: the evloop doesn't work: kevent returns ENOENT for the signal loop fd and EINVAL when trying to set an EVFILT_TIMER.

For the reasons above, the kqueue evloop is disabled by default on DragonFlyBSD, OpenBSD and NetBSD.

@ysbaddaden (Contributor, Author) commented Sep 19, 2024

While trying to integrate the evloop with the experimental shard for RFC 2, I noticed a race condition between #process_timers and #evented_close with MT, since we can close a fd from any thread while another is processing timers.

I solved it with the requirement that, to resume an IO event with a timeout, we must successfully dequeue the event from both queues (poll descriptor waiters and timers), plus a bias on timers: they always win. This allowed keeping the Event allocated on the stack. It involves some locks, but the race condition should be rare enough that I don't expect it to be a contention point (I could be wrong).

Notes:

  1. the bias on timers only means that the timer wins and must be the one to resume the fiber; it doesn't mean that we must time out the event. If #process_timer fails to dequeue the IO event from the waiters list, we can consider the IO event to be ready and not call event.timed_out!;
  2. the fix allows processing timers independently from the event loop (which might give more weight to the above note); we may be able to find a way to run timers in a more timely manner in the future, for example from the schedulers' run loop?
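A sketch combining the rule and the notes above (Event, Timers and the dequeue? helpers are hypothetical stand-ins for the internal types):

# The timer side claims the event from the timers queue first; it then owns
# the resume no matter what (timers always win), and only flags a timeout
# when it also managed to dequeue the event from the waiters list.
private def process_timer(event : Event, timers : Timers) : Nil
  return unless timers.dequeue?(event) # lost the race: the IO side owns it
  if waiters = event.waiters
    if waiters.dequeue?(event)
      event.timed_out! # dequeued from both queues: a real timeout
    end
    # else: the IO side dequeued it first, so the IO is ready; the timer
    # still performs the resume, just without marking a timeout (note 1).
  end
  Crystal::Scheduler.enqueue(event.fiber)
end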

@crysbot commented Sep 24, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/new-event-loop-unix-call-for-reviews-tests/7207/1

@ysbaddaden (Contributor, Author)

I just noticed one difference in IO::Evented (for libevent): its #evented_read and #evented_write methods resume the pending readers or writers when something raises inside (e.g. IO::Error, IO::TimeoutError, ...).

I'm not sure if this has any impact on this evloop. If the fd has any error, then epoll_wait and kevent shall report the error and the evloop will resume the waiting fibers accordingly.

@GeopJr (Contributor) commented Sep 29, 2024

I'm unable to build this on Debian Sid: it fails with mmap failed with Cannot allocate memory (ENOMEM).

Full log:
$ make clean clean_cache clean_crystal crystal
rm -rf .build
rm -rf ./docs
rm -rf src/llvm/ext/llvm_ext.o
rm -rf man/*.gz
rm -rf /home/geopjr/.cache/crystal
make: Nothing to be done for 'clean_crystal'.
Using /usr/bin/llvm-config-17 [version=17.0.6]
g++ -c  -o src/llvm/ext/llvm_ext.o src/llvm/ext/llvm_ext.cc -I/usr/lib/llvm-17/include -std=c++17   -fno-exceptions -funwind-tables -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
CRYSTAL_CONFIG_BUILD_COMMIT="15093e1" CRYSTAL_CONFIG_PATH='$ORIGIN/../share/crystal/src' SOURCE_DATE_EPOCH="1727195906" CC="cc -fuse-ld=lld" CRYSTAL_CONFIG_LIBRARY_PATH='$ORIGIN/../lib/crystal' ./bin/crystal build -D strict_multi_assign -D preview_overload_order -Dwithout_interpreter  -o .build/crystal src/compiler/crystal.cr -D without_openssl -D without_zlib -D use_pcre2 --error-trace
In src/compiler/crystal.cr:11:18

 11 | Crystal::Command.run
                       ^--
Error: instantiating 'Crystal::Command.run()'


In src/compiler/crystal/command.cr:54:3

 54 | def self.run(options = ARGV)
      ^--
Error: instantiating 'run(Array(String))'


In src/compiler/crystal/command.cr:55:18

 55 | new(options).run
                   ^--
Error: instantiating 'Crystal::Command#run()'


In src/compiler/crystal/command.cr:75:7

 75 | init
      ^---
Error: instantiating 'init()'


In src/compiler/crystal/command.cr:214:10

 214 | Init.run(options)
            ^--
Error: instantiating 'Crystal::Init.run(Array(String))'


In src/compiler/crystal/tools/init.cr:25:31

 25 | InitProject.new(config).run
                              ^--
Error: instantiating 'Crystal::Init::InitProject#run()'


In src/compiler/crystal/tools/init.cr:251:22

 251 | views.each &.render
                    ^-----
Error: instantiating 'Crystal::Init::View+#render()'


In src/compiler/crystal/tools/init.cr:195:31

 195 | File.write(full_path, to_s)
                             ^---
Error: instantiating 'to_s()'


In src/object.cr:97:12

 97 | String.build do |io|
             ^----
Error: instantiating 'String.build()'


In src/string.cr:295:21

 295 | String::Builder.build(capacity) do |builder|
                       ^----
Error: instantiating 'String::Builder.build(Int32)'


In src/string.cr:295:21

 295 | String::Builder.build(capacity) do |builder|
                       ^----
Error: instantiating 'String::Builder.build(Int32)'


In src/object.cr:97:12

 97 | String.build do |io|
             ^----
Error: instantiating 'String.build()'


In src/object.cr:98:7

 98 | to_s io
      ^---
Error: instantiating 'to_s(String::Builder)'


There was a problem expanding macro 'def_to_s'

Called macro defined in src/ecr/macros.cr:35:3

 35 | macro def_to_s(filename)

Which expanded to:

 > 1 |     def to_s(__io__ : IO) : Nil
 > 2 |       ::ECR.embed "/home/geopjr/Projects/crystal/src/compiler/crystal/tools/init/template/example_spec.cr.ecr", "__io__"
 > 3 |     end
 > 4 |   
Error: expanding macro


There was a problem expanding macro 'embed'

Called macro defined in src/ecr/macros.cr:69:3

 69 | macro embed(filename, io_name)

Which expanded to:

 > 1 |     {{ run("ecr/process", "/home/geopjr/Projects/crystal/src/compiler/crystal/tools/init/template/example_spec.cr.ecr", "__io__") }}
 > 2 |   
Error: expanding macro


There was a problem expanding macro 'def_to_s'

Called macro defined in src/ecr/macros.cr:35:3

 35 | macro def_to_s(filename)

Which expanded to:

 > 1 |     def to_s(__io__ : IO) : Nil
 > 2 |       ::ECR.embed "/home/geopjr/Projects/crystal/src/compiler/crystal/tools/init/template/example_spec.cr.ecr", "__io__"
 > 3 |     end
 > 4 |   
Error: expanding macro


There was a problem expanding macro 'embed'

Called macro defined in src/ecr/macros.cr:69:3

 69 | macro embed(filename, io_name)

Which expanded to:

 > 1 |     {{ run("ecr/process", "/home/geopjr/Projects/crystal/src/compiler/crystal/tools/init/template/example_spec.cr.ecr", "__io__") }}
 > 2 |   
Error: Error executing run (exit code: 1): ecr/process /home/geopjr/Projects/crystal/src/compiler/crystal/tools/init/template/example_spec.cr.ecr __io__


stderr:

    mmap failed with Cannot allocate memory (ENOMEM)

make: *** [Makefile:227: .build/crystal] Error 1

FWIW, I can build up to 2821fbb just fine, and I have plenty of available memory while building this branch.

Crystal 1.12.1 (2024-05-17)

LLVM: 17.0.6
Default target: x86_64-pc-linux-gnu

libevent-dev 2.1.12-stable-10
libgc-dev 1:8.2.8-1

edit: (it builds successfully with -Devloop_libevent, as expected)

@ysbaddaden (Contributor, Author) commented Sep 30, 2024

Sadly the log doesn't tell much because this is a runtime error in a macro run (to generate Crystal code from ECR).

The only new mmap is the one for the arena... which is sized to the hard limit of open fds. What does ulimit -Hn report? On my Ubuntu it's 1048576 open files. Maybe it's infinite and I have an issue capping to Int32::MAX.

@GeopJr (Contributor) commented Sep 30, 2024

$ ulimit -Hn
1073741816

Can't remember if I set it that high or if it was the default :/

I did some debugging of the macros if it helps at all: new ev, evloop_libevent

edit: here's the diff (though it'd be easier to view with Meld)

@ysbaddaden (Contributor, Author)

The large number of fds is the problem: the new evloop tries to reserve ~40GB of virtual memory and mmap refuses. What's weird is that this is only reserved memory, not allocated memory 🤔

It's also working on macOS despite the hard limit being infinite (capped to Int32::MAX).

What's the soft limit, ulimit -Sn? It's 1024 on my system (Ubuntu). Maybe for starters I should only consider the soft limit, then add support for blocks to the arena and use the hard limit (in a subsequent PR).

@GeopJr (Contributor) commented Sep 30, 2024

$ ulimit -Sn
1024

@ysbaddaden (Contributor, Author)

@GeopJr I made a change to select the soft limit instead of the hard limit. That should work around the mmap issue, until we improve the arena to allocate individual blocks dynamically instead of a single region at once.
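A sketch of that change (assuming LibC bindings for getrlimit and RLIMIT_NOFILE):

# Size the arena from the soft fd limit (rlim_cur) rather than the hard
# limit (rlim_max), capped to Int32::MAX, so a huge or unlimited hard limit
# no longer makes mmap reserve tens of gigabytes.
def arena_capacity : Int32
  limit = uninitialized LibC::Rlimit
  if LibC.getrlimit(LibC::RLIMIT_NOFILE, pointerof(limit)) == -1
    raise RuntimeError.from_errno("getrlimit")
  end
  {limit.rlim_cur.to_u64, Int32::MAX.to_u64}.min.to_i32
end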

@GeopJr (Contributor) commented Oct 1, 2024

Can confirm that it compiles successfully, thanks! (Can also confirm that I didn't face any issues with the new ev + GTK)

@ysbaddaden (Contributor, Author)

While integrating with the execution_context shard, I identified a race condition in Waiters#add when called in parallel with Waiters#consume_each. We call the latter on close, error or hup, after which the fd will never block again; just setting @ready wouldn't be sufficient (we reset it), so I introduced a @closed variable so that trying to add never adds. The actual syscall (read, write, connect, etc.) will eventually report an error and the IO will be properly closed.
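A simplified sketch of that guard (Waiters here is a stand-in for the internal type, which stores pointers to stack-allocated events, and which a later commit reworks to drop the lock):

class Waiters(T)
  @list = Deque(T).new
  @lock = Crystal::SpinLock.new
  @ready = false
  @closed = false

  # Returns false when the caller must not wait: the fd is already ready,
  # or it was closed (the retried syscall will then report the error).
  def add(event : T) : Bool
    @lock.sync do
      return false if @closed
      if @ready
        @ready = false
        return false
      end
      @list.push(event)
    end
    true
  end

  # Called on close, error or hup, after which the fd can never block
  # again: mark the list closed so a parallel #add never adds.
  def consume_each(& : T ->) : Nil
    @lock.sync do
      @closed = true
      @list.each { |event| yield event }
      @list.clear
    end
  end
end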

  • Keeps information about the event that a fiber is waiting on; it can be a time event and/or an IO event.
  • Keeps the waiting reader and writer events for an IO. The events themselves keep the information about the event and the associated fiber.
  • A simple, unoptimized data structure to keep a list of timed events (IO timeout, sleep, or select timeout).
  • The foundation for the system-specific epoll and kqueue event loops.
  • Specific to Linux and Android. It might work on Solaris too, through its Linux compatibility layer.
  • For BSD and Darwin.
  • This is now only required for the libevent event loop and the wasi pseudo event loop.
  • Also includes the :evloop_libevent flag to fall back to libevent.
  • We can merely return the dequeued variable, which is the intent of the method: to report whether the timer was dequeued (and thus owned) or not.
@ysbaddaden (Contributor, Author) commented Oct 15, 2024

While trying the evloop with execution contexts on FreeBSD, I noticed that EventLoop#interrupt didn't interrupt a blocking kqueue. That was because I misunderstood how EVFILT_USER works. The man pages are pretty scarce, but https://habr.com/en/articles/600123/#user-triggered-events-with-kqueue was very helpful!

Now this is working. In FreeBSD at least.
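For reference, the mechanism boils down to this (a sketch assuming kqueue/kevent bindings and the EVFILT_USER constants in LibC):

kq = LibC.kqueue

# Register once: a user-triggered filter that auto-resets after delivery.
kev = LibC::Kevent.new
kev.ident = 0
kev.filter = LibC::EVFILT_USER
kev.flags = LibC::EV_ADD | LibC::EV_CLEAR
LibC.kevent(kq, pointerof(kev), 1, nil, 0, nil)

# Interrupt (from any thread): NOTE_TRIGGER fires the filter, waking up a
# kevent call blocking in the run loop.
kev.flags = 0
kev.fflags = LibC::NOTE_TRIGGER
LibC.kevent(kq, pointerof(kev), 1, nil, 0, nil)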

@ysbaddaden (Contributor, Author)

We got a report from LavinMQ that an evloop run can crash with an "arena: invalid index" exception.

There's a race condition to understand. It is likely that T1 gets events or a timer about a fd, then T2 closes the fd and deallocates it from the arena; finally T1 tries to get the fd from the arena, and oops: not allocated (or a new generation).

Fixes race conditions in the evloop when another thread closes the fd,
and thus frees the poll descriptor (waitlists):

- After waiting for events and processing the events: processing events
  and timers would fail to get the index and raise an exception.

- After trying to read or write but before we could actually get the
  poll descriptor: we'd fail to get the index and raise an exception.

- After trying to read/write and after we allocated the poll descriptor,
  but before we could wait: we'd be adding to a freed poll descriptor.

- Resetting the arena index on IO (__evloop_data) allowed a parallel
  wait to re-allocate the poll descriptor (despite the IO being closed).

- ...

Preventing parallel accesses to an arena object may be a bit drastic,
but at least we know for sure that we don't have any parallelism issue.

It also allows dropping the lock on the poll descriptor as well as on each
waiting list: we don't need them anymore. A disadvantage is that parallel
read/write/close and evloop process event/timeout may conflict... though
they already competed for the 3 aforementioned locks.
@ysbaddaden (Contributor, Author) commented Oct 21, 2024

The last issue was fixed in 6985080. Please see the commit description for details.

The patch looks more complex than it really is: it's mostly indentation changes, because all the public arena methods now yield instead of returning a value, plus the locks got removed from Waiters and PollDescriptor.
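Illustrative before/after of that API change (names assumed):

# Before: the arena returned the object; a parallel close could free the
# slot while we were still using it.
#
#   pd = arena.get(index)
#   pd.process_events
#
# After: the arena yields the object while holding the slot, so it cannot
# be freed or reallocated during the block.
arena.get(index) do |pd|
  pd.process_events
end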

  • Avoids an issue with the timecop shard that overrides Time.monotonic.
@ysbaddaden (Contributor, Author)

@straight-shoota the check format CI action fails, but I can't reproduce locally (neither with Crystal 1.14.0 nor with a compiler from this branch) 😕

@straight-shoota (Member)

The failure happens on the pull_request trigger, which runs on the merge branch (pull/14996/merge), i.e. the PR branch merged with the target branch (master). The PR is currently based on an old commit from master, and a compiler built from that misses #14718.
If you merge current master into this branch (or use a recent compiler built from master, such as nightly), you should be able to reproduce.

@straight-shoota added this to the 1.15.0 milestone Nov 8, 2024