Benchmark results #89
It doesn't look like
And
Others are quite slow indeed.
The reason for using 26 bits is that we have only 52 bits of precision in JavaScript.
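To make the constraint concrete (an added illustration assuming standard IEEE-754 Number semantics, i.e. a 53-bit significand; not code from bn.js):

```js
// The product of two 26-bit words needs at most 52 bits,
// so it is still represented exactly by a Number.
var a = (1 << 26) - 1;                         // largest 26-bit word, 0x3ffffff
console.log(a * a <= Number.MAX_SAFE_INTEGER); // true -- exact

// With 32-bit words the product can need up to 64 bits and precision is lost.
var c = 0xffffffff;                            // largest 32-bit word
console.log(c * c > Number.MAX_SAFE_INTEGER);  // true -- no longer exact
```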
@indutny sorry, I don't get the 26 vs. 52 (53). That means words[i] as a Number could store up to 53 bits. Is there any function doing in-place multiplication where it is beneficial to have that overhead? I'm probably missing something here.
@axic in
@indutny that was my guess. As a long-term change, how about using 32 bits across the entire library and changing mulTo to:
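(The snippet that originally followed this comment was not preserved. Below is only a hypothetical sketch, with made-up names, of what a 32-bit multiplication step has to do; it is not the proposed change. Each 32-bit word must be split into 16-bit halves so that no intermediate product overflows the 53-bit significand of a Number.)

```js
// Multiply two 32-bit words exactly, returning the 64-bit result
// as two 32-bit halves. Every intermediate value stays below 2^53.
function mulWord32(a, b) {
  var aHi = a >>> 16, aLo = a & 0xffff;
  var bHi = b >>> 16, bLo = b & 0xffff;

  var ll = aLo * bLo;                       // each partial product < 2^32, exact
  var lh = aLo * bHi;
  var hl = aHi * bLo;
  var hh = aHi * bHi;

  var mid = lh + hl + (ll >>> 16);          // < 2^34, still exact in a double
  var lo = (((mid & 0xffff) << 16) | (ll & 0xffff)) >>> 0;
  var hi = hh + Math.floor(mid / 0x10000);  // top 32 bits of the 64-bit product
  return { hi: hi, lo: lo };
}
```

The extra splitting and carry propagation per word is exactly the overhead that 26-bit words avoid.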
Working on 32 bits would, I would say, speed up the rest considerably. Is there any analysis of which methods are most frequently used in common use cases of this library? ECC, serialization, what else?
@axic it will make things slower. The most expensive operation throughout the library is multiplication, so the whole library is optimized for it. Additions are quite fast already (see the benchmark results), and there is simply no crypto based solely on additions, so the bottleneck will be multiplication. Example:
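(The example in the original comment was lost in extraction. As a stand-in, here is a minimal micro-benchmark sketch, not the author's example, using bn.js's public add and mul on 256-bit operands to show why multiplication is the operation worth optimizing.)

```js
var BN = require('bn.js');

// Two 256-bit operands, roughly the size of ECC field elements.
var a = new BN('fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210', 16);
var b = new BN('0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef', 16);

console.time('add x 1e6');
for (var i = 0; i < 1e6; i++) a.add(b);
console.timeEnd('add x 1e6');

console.time('mul x 1e6');
for (var j = 0; j < 1e6; j++) a.mul(b);
console.timeEnd('mul x 1e6');
```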
@indutny I do get that mul is crucial; that's why I asked whether there is any analysis of how much time is spent in each method for typical use cases. Informed decisions can be made based on such data.
Reran the benchmarks with 4.6.6. There are some improvements, but we're still behind on:
Full output:
JFYI @axic I got lower performance with 16-bit words (instead of 26-bit) in secp256k1-node's embedded pure-JS implementation.
How are we still behind on
Sorry, but I still don't see what value this PR provides, except making benchmarks run slower. I'm actually thinking about removing some of these libraries from them, because it has become too cumbersome.
Ah, sorry about that! |
I don't think this issue needs to remain open. Long-lived issues are not easy to read/track.
I have run the benchmarks (including #88) and the results could be better. I'm not sure whether they are representative at all in terms of commonly used methods.
Where bn.js failed:
Regarding toString(10): a dedicated case might speed things up.
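One possible shape for such a dedicated base-10 case (a hypothetical sketch, not bn.js's actual implementation) is to peel off six decimal digits per step with a single small-number divmod, since 10^6 fits comfortably in a 26-bit word:

```js
var BN = require('bn.js');

function toDecimal(num) {
  var out = '';
  var n = num.clone();
  while (!n.isZero()) {
    var chunk = n.modn(1000000);   // low six decimal digits as a plain Number
    n.idivn(1000000);              // in-place divide by 10^6
    out = n.isZero()
      ? chunk + out                          // most significant chunk: no padding
      : ('000000' + chunk).slice(-6) + out;  // interior chunks: pad to 6 digits
  }
  return out || '0';
}

console.log(toDecimal(new BN('123456789123456789123456789', 10)));
```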
Re: toString(16): the fact that 26 bits are stored in a word makes it a lengthier process. Storing 24 or 32 bits would make this significantly faster. What is the actual reason behind going for 26 bits?
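For contrast, an added illustration (a hypothetical helper over a little-endian array of 26-bit words, as bn.js stores them; not bn.js's actual code): because 26 is not a multiple of 4, a hex digit regularly straddles two words and leftover bits have to be carried along, whereas 24- or 32-bit words would map cleanly to exactly 6 or 8 hex digits each.

```js
function toHex26(words) {
  var out = '';
  var acc = 0, accBits = 0;               // leftover bits carried between words
  for (var i = 0; i < words.length; i++) {
    acc |= words[i] << accBits;           // splice the new word above the leftover
    accBits += 26;
    while (accBits >= 4) {                // emit complete hex digits, low to high
      out = '0123456789abcdef'[acc & 0xf] + out;
      acc >>>= 4;
      accBits -= 4;
    }
  }
  if (accBits > 0) out = '0123456789abcdef'[acc & 0xf] + out;
  return out.replace(/^0+/, '') || '0';
}

console.log(toHex26([0, 1]));  // value 2^26 -> '4000000'
```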
The full output: