Feature: requestIdleCallback(cb) #2543
cc @ofrobots |
Why not userland? What information about server load isn't available to userland, and if it isn't available, wouldn't it be better to expose that instead and let userland solutions evolve (at least at first)? |
@benjamingr By definition, it's impossible for userland JS to know when V8 isn't busy doing anything, because when it's running userland JS, it's busy doing that. You could sort of fake it with setTimeouts or by writing a C++ addon that uses the |
Well, if you schedule a macrotask (setTimeout), wouldn't it run after Node is "no longer busy"? How do we define idle? Idle in a UI setting is different from idle on a server. I definitely see the use for "running low-priority background work" in Node, but what do we consider idle here? |
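To make the "sort of fake it" idea above concrete, here is a hypothetical userland shim (the name `requestIdleCallbackShim` and the fixed 50 ms budget are illustrative assumptions, not anything Node provides). It defers work with setImmediate, which runs after the poll phase's I/O callbacks, and simulates the browser's deadline object since Node exposes no real idle-time measurement:

```javascript
// Illustrative userland approximation of requestIdleCallback for Node.
// setImmediate runs on the "check" phase, after this iteration's I/O
// callbacks -- the closest cheap stand-in for "the loop has drained".
function requestIdleCallbackShim(callback, { timeout = 50 } = {}) {
  const start = Date.now();
  setImmediate(() => {
    callback({
      didTimeout: false,
      // Budget modeled on the browser's ~50 ms cap; purely an approximation,
      // since Node cannot tell us how long the loop will actually stay quiet.
      timeRemaining: () => Math.max(0, timeout - (Date.now() - start)),
    });
  });
}
```

A caller would then loop on `deadline.timeRemaining()` to do small slices of work, exactly as with the browser API. The key limitation is the one discussed in this thread: the shim cannot know whether the loop is genuinely idle, only that it has reached the check phase.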
@isaacs What is |
@benjamingr I searched the code in vain for a ready-to-use definition of
Therefore I think we need to first extend |
A browser's event loop works at discrete intervals, so there is a useful definition of "idle": no more work for this quantum, the next quantum hasn't started yet, and the remaining time is idle time. Node's event loop, however, is continuous, and it's only truly idle when it's waiting for timers to expire and nothing else. As soon as there is any I/O involved, it can go from idle to busy at any time without warning. How do browsers process |
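The "predictable quantum" definition above is what the W3C API exposes through its deadline object. Here is an illustrative chunked-work helper in that style; the setTimeout fallback (with a zero time budget and `didTimeout: true`) is an assumption for environments like Node that lack requestIdleCallback, and `processInIdleTime` is a made-up name:

```javascript
// Sketch of the W3C idle-callback pattern: do work only while the
// quantum's remaining time is positive, then reschedule for the next
// idle period. Falls back to a plain macrotask where the API is absent.
const scheduleIdle = globalThis.requestIdleCallback
  ?? ((cb) => setTimeout(() => cb({ didTimeout: true, timeRemaining: () => 0 }), 0));

function processInIdleTime(tasks, onDone) {
  scheduleIdle((deadline) => {
    // Run queued tasks until this idle quantum is exhausted.
    while (tasks.length > 0 && (deadline.timeRemaining() > 0 || deadline.didTimeout)) {
      tasks.shift()();
    }
    if (tasks.length > 0) processInIdleTime(tasks, onDone);
    else if (onDone) onDone();
  });
}
```

In a browser this cooperatively yields between quanta; under the fallback it simply drains everything in one deferred batch, which illustrates the thread's point that Node has no natural quantum boundary to yield at.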
@bnoordhuis I think there is one more idle situation on a server: when I/O is in progress, i.e. when the event loop polls for I/O but there are no pending callbacks yet. |
@pmq20 Waiting for I/O is not idleness in the "predictable quantum of time" definition from the W3C document. Outside of the "timers only" scenario, node is very much like Chuck Norris in that it never sleeps, it just waits. (Yes, I slipped a Chuck Norris joke in there. I apologize for nothing.) Tangential: the concept of idle callbacks has some relation with the idle GC feature that was removed three years ago in commit d607d85. The idea was to run the garbage collector proactively at quiescent times but it never worked all that well and for pretty much the same reasons that make |
@bnoordhuis That's a good point. Doing GC at those times turned out to be a bad idea, I do recall. I'd just as soon tell people to do "long-running work" using child procs (or web workers on browsers). It makes more sense on browsers, since you do have some kinds of work that must block the same thread as UI (updating the dom, etc) and doing that when idle is less likely to impact the user. If there's no way to do this in a way that makes sense, then that's reasonable. But I can expect that someone will ask for it at some point. |
Is it related to what has later been described here? http://v8project.blogspot.dk/2015/08/getting-garbage-collection-for-free.html |
Closing since it doesn't seem very feasible. Maybe we can re-evaluate in the future. |
Let me describe my use case, and maybe that can inform a plausible design. Marshaling between workers is too expensive and adds too much overhead, especially when you have lots of little memoized data shared between multiple parts of the app. It is also difficult to break the app apart at a reasonable seam that is stable over time. The easiest way to get good scheduling is to break work into small slices, which is why the I/O model of Node is good to begin with. This is why we rearchitected React (Fiber) to use cooperative scheduling on the client, powered by requestIdleCallback. It lets us gather all input from all the I/O at the beginning of the frame and then use the rest of the available time to prioritize and do the highest-priority work.

In my Node use case I have an HTTP request/response model where I need to fetch some data from the backend on demand. Different segments of the response have dependencies on different data from the backend. The segments can be computed out of order, but each segment has a priority associated with it. If I have data available for one of the lower-priority segments but not a higher-priority segment, I don't want to block; I want to use the idle CPU time to compute some of the lower-priority segments. E.g.: Request data for Segment 1. If the data for Segment 1 returns just after I've started processing Segment 2, then I've probably made the wrong choice, because now Segment 1 gets delayed a bit even though it has higher priority. In most cases, however, it's a better use of time; my goal is to minimize the bad cases.

If I understand the Node architecture well enough, it is possible to get into situations like this: Request data for Segment 1. In this case we have already received data for Segment 1 by the time we start processing Segment 3, but because the callbacks run in the order the I/O returned, I can't change the priority. I'd like to have a way to wait for all my I/O processing to be done and build up my own priority queue. 
Request data for Segment 1. So, for my use case it's not quite EDIT: I've been told |
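The "wait for my I/O, then drain my own priority queue" idea described above can be sketched in userland. All names here (`onSegmentData`, the priority numbers) are illustrative, and deferring to setImmediate is only an approximation of "all I/O for this loop iteration has been processed":

```javascript
// Buffer segment results as they arrive and drain them by priority on the
// check phase, so every I/O callback that fired in this loop iteration has
// had a chance to enqueue its segment before we pick what to render.
const pending = [];
let drainScheduled = false;

function onSegmentData(priority, render) {
  pending.push({ priority, render });
  if (drainScheduled) return;
  drainScheduled = true;
  setImmediate(() => {
    drainScheduled = false;
    pending.sort((a, b) => a.priority - b.priority); // lower number = higher priority
    while (pending.length > 0) pending.shift().render();
  });
}
```

Each I/O completion would call `onSegmentData(priority, fn)` instead of doing its work inline. This fixes the ordering problem within one loop iteration, but it cannot see work that arrives in a later iteration, which is the gap a real idle/quiescence signal would close.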
What you're asking for seems to be higher-level than just I do not think that |
Thing is, that isn't indicative of I/O being idle, or really of whether the CPU would be idle if an Immediate wasn't called. It is entirely possible for the event loop to fire practically, or entirely, consecutively. To be a bit more clear, all realistic "idle time" is when libuv is waiting during You can do pseudo priority queuing by utilizing the call order of the following, which run in this order:
*Note: I think it is possible for an add-on to fire "immediate-like" events after Immediates in the "immediate ("check") phase" |
Do keep in mind there is no regularly scheduled "active" work in Node.js, like a render frame, to differentiate from "idle" time. Work may come in at any moment, possibly while we are in the middle of calling an "idle" callback. |
@Fishrock123 What if, if there are pending "idle" callbacks, That way if the queue is empty, that's considered an "idle" period. It may only be idle for a fraction of time but it would otherwise be waiting. |
The following scenario interests me: I have some Kafka consumers that do work like database queries. Because of their reactive nature there is no need to run them immediately, especially if other work, like processing HTTP requests, can/should be done. I think the very good definition of |
I see waiting for I/O, timers, and so on not as "direct" callbacks. They may happen in the future, but they are not in the processing queue right now. So, if the queue is empty, I would like to execute my low-priority callback. I would probably consider every callback/promise inside that callback to be low-priority work too, and execute it only if the queue is empty. If some timer fires, or an I/O wait completes, or new work comes in while my consumer callback is executing, well, I don't care about the new situation; the new event should be scheduled after my callback, as usual. It would be nice if there were some way to preempt my callback, but I'm not sure the complexity is worth it, or that it's possible at all. |
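Node has no API for detecting an empty callback queue, but the weaker version of the idea above can be approximated in userland. This is a hypothetical sketch (`lowPriority` is a made-up name): wrap a handler so it is deferred with setImmediate, letting any callbacks already queued, such as HTTP request handlers, run first.

```javascript
// Wrap a handler so its execution is pushed to the check phase, after
// the callbacks already pending in this event-loop iteration. This does
// not detect a truly empty queue; it only yields one loop iteration.
function lowPriority(handler) {
  return (...args) =>
    new Promise((resolve) => {
      setImmediate(() => resolve(handler(...args)));
    });
}
```

A consumer would register `lowPriority(processMessage)` instead of `processMessage` itself. As the earlier comments note, this is only a heuristic: the loop may be about to receive more I/O the instant the deferred handler starts.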
@Bessonov Suggestion: rather than resurrecting old closed issues, open a new issue and reference this one; you will probably have more success that way 🙂 |
Maybe someone will find this useful: a class that visualizes Rapier physics debug rendering in THREE.js, with 2D/3D support and an update method for the animation loop: 
import * as THREE from 'three';
export class DebugController {
  #scene;
  #physicsWorld;
  #lines = null;
  #dimensions;

  /**
   * Constructor for the DebugController class.
   *
   * @param {RAPIER.World} physics - The Rapier physics world to visualize.
   * @param {THREE.Scene} scene - The THREE.js scene to draw debug lines into.
   * @param {2 | 3} dimensions - The dimensionality of the physics world (default is 2).
   */
  constructor({
    physics,
    scene,
    dimensions = 2,
  }) {
    this.#dimensions = dimensions;
    this.#scene = scene;
    this.#physicsWorld = physics;
    this.update();
  }

  /**
   * Initialize debug lines.
   */
  #init = () => {
    const material = new THREE.LineBasicMaterial({
      color: 0xffffff,
      vertexColors: true
    });
    const geometry = new THREE.BufferGeometry();
    this.#lines = new THREE.LineSegments(geometry, material);
    this.#scene.add(this.#lines);
  }

  /**
   * Update debug lines.
   * You will need to call this method every frame in your scene loop.
   */
  update = () => {
    if (this.#lines === null) this.#init();
    const { vertices, colors } = this.#physicsWorld.debugRender();
    // Rapier returns flat arrays: 2 or 3 position components per vertex
    // (depending on dimensionality) and 4 color components per vertex.
    this.#lines.geometry.setAttribute('position', new THREE.BufferAttribute(vertices, this.#dimensions));
    this.#lines.geometry.setAttribute('color', new THREE.BufferAttribute(colors, 4));
  }

  /**
   * Remove debug lines in a non-blocking way.
   * Call this method when you want to switch off debug mode.
   */
  clean = () => {
    if (this.#lines === null) return; // nothing to clean up
    this.#lines.material.visible = false;
    if (typeof requestIdleCallback === 'function') {
      requestIdleCallback(this.#removeHandler);
    } else {
      setTimeout(this.#removeHandler, 1);
    }
  }

  /**
   * Remove debug lines.
   */
  #removeHandler = () => {
    this.#removeElement(this.#lines);
    this.#lines = null;
  }

  /**
   * Remove the given object3D and its associated resources from the scene.
   *
   * @param {THREE.Object3D} object3D - The object3D to be removed.
   * @return {boolean} Returns `true` if the object3D was removed, `false` otherwise.
   */
  #removeElement(object3D) {
    if (!(object3D instanceof THREE.Object3D)) return false;
    if (object3D.geometry) object3D.geometry.dispose();
    if (object3D.material) {
      if (object3D.material instanceof Array) {
        object3D.material.forEach(material => material.dispose());
      } else {
        object3D.material.dispose();
      }
    }
    object3D.removeFromParent();
    return true;
  }
} |
This seems interesting and useful, and probably wouldn't be terrifically hard to add to node: https://w3c.github.io/requestidlecallback/
It just landed in Chrome Canary: https://plus.google.com/+IlyaGrigorik/posts/bPNjgMwcMKs
Could be worth waiting a bit to see how people find it in browsers, or if V8 has any sort of special support for it in upcoming versions.