proposal: revise usage of property getters #1402
Comments
@jerch interesting 😮. One way we could remove the getters and still keep many of their benefits: for properties where we only expose a readonly view, we can mark the property as `readonly` on the interface:

```ts
interface IBufferSet {
  // Usages via IBufferSet will not allow assignment
  readonly activeBuffer: IBuffer;
}

class BufferSet implements IBufferSet {
  public activeBuffer: IBuffer;
}

// Instead of
class BufferSet {
  private _activeBuffer: IBuffer;
  public get activeBuffer(): IBuffer { return this._activeBuffer; }
}
```

What do you mean by an additional debounce buffer?
Nice observation! I've seen the performance issue with the getters when experimenting with xterm.js a while ago. Regarding the debounce buffer: @Tyriar
Gonna close this since most should be addressed with #1403. About the debounce thing:
Yes, an option for this at the very least seems like a good idea for remote use cases. I have doubts that this would be faster in an Electron environment, as there will be additional work/GC maintaining a buffer of strings, flushing it and concatenating them (unless there was a new API with an option which fired
@Tyriar I am not aware of a string[] data event. Imho this also depends on how the data is generated in the slave program - if a C program generates single-byte writes to stdout and the system is almost idle, those bytes will most likely not be buffered up by the OS itself, but instead be propagated to the pty one by one, with many expensive context switches, at least under Linux. That's what I encountered when I tested the alternative poller implementation for node-pty.

At the higher level inside the JS part of node-pty it boils down to the question whether many callback invocations are more expensive than a tiny buffer with a small delay plus the expensive additional string/buffer copying and GC'ing. For websockets this seems to be the case, I guess because of the wrapping/unwrapping of a single byte into a much bigger frame (almost no data payload) and the transport abstraction in between (even locally).

When I moved the escape sequence parser to JS it originally used function callbacks for the transition actions and was 10x slower than with the switch it has now. This was mainly due to function calls with almost no data payload and non-local state handling (attribute lookups). The parser was busy with handling its object state and not with the data lol.
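[Editor's note] The callbacks-vs-switch point above can be sketched roughly as follows. This is an illustrative toy, not the actual xterm.js parser: `classify`, `PRINT` and `EXECUTE` are made-up names, and the real parser's state machine is far richer. The point is only that the switch variant inlines the per-byte action while the callback variant pays an indirect function call per byte:

```typescript
// Toy byte classifier: C0 control codes "execute", everything else "prints".
const PRINT = 0;
const EXECUTE = 1;

function classify(code: number): number {
  return code < 0x20 ? EXECUTE : PRINT;
}

// Variant 1: one indirect callback invocation per byte.
function parseWithCallbacks(data: string, actions: Array<(code: number) => void>): void {
  for (let i = 0; i < data.length; i++) {
    const code = data.charCodeAt(i);
    actions[classify(code)](code);  // function call per byte, little payload
  }
}

// Variant 2: actions inlined into a switch inside the hot loop.
function parseWithSwitch(data: string): { printed: number; executed: number } {
  let printed = 0;
  let executed = 0;
  for (let i = 0; i < data.length; i++) {
    switch (classify(data.charCodeAt(i))) {
      case PRINT: printed++; break;    // action body inlined, no call
      case EXECUTE: executed++; break;
    }
  }
  return { printed, executed };
}

const result = parseWithSwitch('ab\ncd');
console.log(result);
```

Both variants do identical classification work; any measured difference in a hot loop comes from the call overhead and the non-local state the callbacks touch, which matches the 10x observation described above.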
@jerch I mean if we need to send 3 data events, we could fire
Hmm interesting idea. I think only tests can tell if there is anything to save by an approach like this. |
Due to some performance tests for the new parser stuff I encountered the internally used property getters being a major bottleneck. Here are some numbers for `ls -lh /usr/lib` on my ubuntu machine, after changing the `.buffer` getter access to a direct attribute in the `actionPrint` method (see new parser #1399): ~6s in total, ~3.4s for Javascript --> 1.5x as fast just by avoiding the `.buffer` getter? What about all those other getters 😉

@Tyriar It seems the property accessors are not optimized by most JS engines; at least Firefox and Chrome show the same bad performance for them, while direct access via a public attribute is much faster. I am aware that getters/setters provide a nice way of encapsulation while being able to change the underlying implementation, but with such a huge performance impact I propose to at least revise their widespread usage in the core parts. Maybe the access could be realized by attributes that are handled especially carefully.
Somewhat offtopic:

Btw, with an additional debounce buffer in the demo app server script I was able to get `ls -lh /usr/lib` with the changes in 2. down to under 3s in total. That is only 300% of the time the native xterm needs for the same command - this is really amazing, congrats for getting the rendering part this fast!

Without the buffer the websocket eats much time itself sending single- or two-byte frames (that's the greyish thing in the Chrome summary pie). While not being a "bug" of xterm.js itself, it still might be worth a note in the docs, since people may just copy the demo over to their apps.
Update: fixed timings and added some pics:

- master branch
- changes from 2. without debounce buffer
- changes from 2. with debounce buffer