Use a time-based limit in Terminal._innerWrite #1818
Conversation
The idea is that it should run for a bit and then let the renderer draw a frame so that the terminal looks responsive. The existing approach limits the work done using a fixed number of elements from the write buffer, so the duration of a frame can vary widely. This approach looks at the clock to determine when to stop: we basically allocate an amount of time each frame to writing, while the rest can be used for rendering. From my tests this change makes the terminal feel a lot smoother.
I could also put this behind a flag so users can choose between smoothness and finishing quickly.
@juancampa Yeah, the current buffer write time estimation needs some refinement; I like the idea of doing it time-based. Problem is that there are 3 major timing factors, which are under heavy changes atm:
Input flow hiccups occur whenever the sum of the 3 factors differs too much from the average sum for one rendered frame. I phrased it this vaguely because users' perception works more with differences than with absolute numbers - a constantly low framerate is typically seen as a better experience than a very high framerate with random hiccups. It's all about difference: the smoother the flow, the better. To sum this up: I think we need a tiny benchmark that tracks runtimes of typical input data chunks. Based on that data we can then adjust the chunk size to keep the fps at a certain level. We would have to leave some room for "random" GC events, something like this:
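A rough TypeScript sketch of such a benchmark - every name and constant below is illustrative, not actual xterm.js API:

```typescript
// Illustrative sketch: keep a running throughput estimate from measured
// chunk runtimes and size the next batch so one frame's write work fits
// in the frame budget, with headroom left for "random" GC events.
const TARGET_FRAME_MS = 1000 / 60; // aim for ~60 fps
const GC_HEADROOM_MS = 2;          // slack for GC pauses

let bytesPerMs = 1024;             // running throughput estimate

function recordSample(bytesProcessed: number, elapsedMs: number): void {
  // Browser timers are coarse (Spectre mitigations), so very short runs
  // may report 0 ms and must be skipped.
  if (elapsedMs <= 0) return;
  const sample = bytesProcessed / elapsedMs;
  bytesPerMs = 0.9 * bytesPerMs + 0.1 * sample; // exponential moving average
}

function nextBatchSizeBytes(): number {
  const writeBudgetMs = TARGET_FRAME_MS - GC_HEADROOM_MS;
  return Math.max(1, Math.floor(bytesPerMs * writeBudgetMs));
}
```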
Now what happens if the incoming data chunks are rather small and many? Yeah, this is a bigger problem than it might seem - it's actually what
Thanks for the PR, I've been wanting this for a while. We may need to hold off on merging for a bit as it will probably mess up all our test numbers for the big PRs, but I definitely want this to go in 😃
const data = writeBatch.shift();
const time = Date.now();
while (this.writeBuffer.length > 0) {
  const data = this.writeBuffer.shift();
When #1796 goes in we can tweak this further to avoid the shift call (pass in code(s) or a start/offset).
👍 for holding it off a bit until we are done with the big PRs. Imho we should also establish some way to split data chunks to avoid the hiccup problem indicated above. Currently a really big chunk would simply "stall" the terminal.
Agreed. Big chunks are uninterruptible. We should probably tackle that in a follow-up PR.
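A minimal sketch of the chunk-splitting idea just discussed - the cap value and helper name are made up for illustration, and a real fix would need to tune the cap against throughput:

```typescript
// Illustrative only: cap the size of any single queued chunk so the write
// loop can yield to the renderer between pieces instead of stalling on one
// giant, uninterruptible chunk.
const MAX_CHUNK_SIZE = 64 * 1024; // hypothetical cap, in characters

function splitChunk(data: string): string[] {
  if (data.length <= MAX_CHUNK_SIZE) {
    return [data];
  }
  const pieces: string[] = [];
  for (let i = 0; i < data.length; i += MAX_CHUNK_SIZE) {
    pieces.push(data.substring(i, i + MAX_CHUNK_SIZE));
  }
  return pieces;
}
```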
Also, @jerch thanks for the detailed explanation! I agree with your proposal. This solution is not final, as it doesn't achieve the desired FPS perfectly, but it does take us most of the way there. I'm down for refining it as a follow-up task.
@jerch can you clarify what you mean here? Doesn't this PR solve this, provided rendering is fast enough?
@Tyriar We can easily avoid this by inspecting the chunk sizes and doing the estimate based on that instead of the number of chunks.
@juancampa I had a playground branch in summer that did the mini benchmark approach with good results (it was not consuming much time of its own). It basically shimmed in from time to time, tracked the runtime for the bunch of chunks at hand, and created a throughput number from it. The only tricky thing was the time resolution of the browser due to Spectre; most chunks were too short/quickly handled to get a reliable number.
@jerch Mind sharing it? :)
@juancampa Sorry, can't find it anymore; it was around the testbed PRs (#1528, #1529, #1530) but did not end up in one of them.
@juancampa Slightly offtopic: Since you come from hyper - you might want to check whether hyper would benefit from a tiny buffer between pty and browser engine. We had the same problem in the demo app: the event loop got bombed by tons of small chunks by the websocket and made testing totally unreliable. #1744 solved it. (Hint: It will double the throughput speed of hyper, but fps will go down slightly without a big chunk fix as proposed above.)
I smiled while reading this because it's *exactly* the change I have on my branch. I'm buffering for 16ms (at most 200k) before sending it to the renderer process (I tested multiple values and these seemed to produce good results on my laptop). Glad to have your validation.
A system that benchmarks and dynamically adjusts these values might be, again, a follow-up improvement.
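A minimal sketch of the buffering scheme described above, using the 16ms/200k values from the comment; sendToRenderer is a hypothetical stand-in for whatever IPC call the app actually uses:

```typescript
// Sketch of pty-side buffering: accumulate pty output and flush it to the
// renderer process every 16 ms, or immediately once ~200k has piled up.
const FLUSH_INTERVAL_MS = 16;
const MAX_BUFFER_SIZE = 200 * 1024;

let buffer = '';
let flushTimer: ReturnType<typeof setTimeout> | undefined;

// Hypothetical stand-in for the app's real IPC call to the renderer.
declare function sendToRenderer(data: string): void;

function onPtyData(data: string): void {
  buffer += data;
  if (buffer.length >= MAX_BUFFER_SIZE) {
    flush(); // big backlog: don't wait for the timer
  } else if (flushTimer === undefined) {
    flushTimer = setTimeout(flush, FLUSH_INTERVAL_MS);
  }
}

function flush(): void {
  if (flushTimer !== undefined) {
    clearTimeout(flushTimer);
    flushTimer = undefined;
  }
  if (buffer.length === 0) return;
  sendToRenderer(buffer);
  buffer = '';
}
```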
Use a time-based limit in Terminal._innerWrite
Most of the buffer-related changes are in, and WebGL will probably be an addon when it gets merged (which we can re-test). I say let's push this in for 3.12.0 🎉
Created #1955 to follow up the removal of
I'm opening up this PR mostly to hear what you guys think of this proposal.
Problem
Currently WRITE_BATCH_SIZE limits the number of writes per batch to allow the renderer to draw a frame. This allows the terminal to update, but the framerate varies widely.
Solution
The approach I'm proposing looks at the clock to determine when to stop as opposed to a fixed number of iterations. Effectively this allocates some time for writing, while the rest can be used for rendering.
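A simplified sketch of how such a time-based loop could look, assuming a Terminal-like class; _parse is a stand-in for the real chunk processing, and only the WRITE_TIMEOUT_MS name comes from the actual PR:

```typescript
// Simplified sketch of the time-based limit: process queued chunks until
// the per-frame write budget is spent, then yield so a frame can render,
// rescheduling the remaining work on the next tick.
const WRITE_TIMEOUT_MS = 12; // one of the budgets tested below

class TerminalSketch {
  private writeBuffer: string[] = [];

  public write(data: string): void {
    const shouldSchedule = this.writeBuffer.length === 0;
    this.writeBuffer.push(data);
    if (shouldSchedule) {
      setTimeout(() => this._innerWrite(), 0);
    }
  }

  private _innerWrite(): void {
    const start = Date.now();
    while (this.writeBuffer.length > 0) {
      const data = this.writeBuffer.shift()!;
      this._parse(data);
      if (Date.now() - start >= WRITE_TIMEOUT_MS) {
        break; // budget spent for this frame; let the renderer draw
      }
    }
    if (this.writeBuffer.length > 0) {
      // Work remains: continue after the renderer has had a chance to draw.
      setTimeout(() => this._innerWrite(), 0);
    }
  }

  private _parse(data: string): void {
    // Stand-in for the real escape-sequence parsing and buffer updates.
  }
}
```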
Tests run
From my tests this change makes the terminal feel a lot smoother when processing a lot of output with the drawback that it takes a bit longer to print it all. I ran tests with different limits and here are the results:
Test setup: find ~
- WRITE_TIMEOUT_MS=12ms
- WRITE_TIMEOUT_MS=24ms
- WRITE_TIMEOUT_MS=30ms
- WRITE_TIMEOUT_MS=MAX_INT
As you can see, it takes a bit longer to display everything, which is expected because more frames are being drawn, but IMO it actually feels faster because you see a lot more of the output, and the terminal doesn't look like it's choking.
Caveats
This was only tested with the WebGL renderer by @Tyriar, which is still WIP (and also working amazingly, btw).