
Show current benchmarking results #123

Open
jcready opened this issue Jun 24, 2021 · 16 comments
Labels
documentation Improvements or additions to documentation help wanted Extra attention is needed

Comments

@jcready
Contributor

jcready commented Jun 24, 2021

The manual currently provides a comparison of code size vs. speed, but it only shows the resulting size of the generated code, so we don't know the runtime difference. There is already a benchmarks package which provides code to perform benchmarks, but it leaves something to be desired. Namely: the current results. :)

It would be helpful to show this information (especially the perf.ts results) somewhere. Ideally it could be included in the code size vs speed section of the manual.

P.S. A comparison against protobuf.js would be very nice to compare the performance against since that library tends to be the fastest out there at the moment.

P.P.S. I've already taken a crack at adding protobuf.js to the perf.ts benchmarks locally and it seems like protobuf.js can decode/encode the binary about twice as fast as protobuf-ts@2.0.0-alpha.27. Any ideas how that gap could be closed? Is protobuf.js taking shortcuts that aren't conformant to the proto spec? Are there any techniques that could be copied from protobuf.js?

Thank you again for this wonderful project!

@timostamm
Owner

Coincidentally, I just updated the code size benchmarks this morning, cleaning up the code as well.

It's easy to measure code size, not so much the performance. For example, google-protobuf deserializes into an intermediate state. So a simple roundtrip might show different results, compared to when all fields are set with the getter / setter methods.
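To illustrate the point about intermediate state, here is a minimal sketch of a benchmark harness that defeats lazy decoding by touching every field after decode. Everything here is invented for illustration (`Msg`, `decode`, the JSON stand-in for wire parsing); it is not any library's actual API.

```typescript
// Hypothetical sketch: a roundtrip alone can hide lazy decoding, so a
// fairer benchmark also reads every field after decode. `decode` is a
// stand-in that parses JSON instead of the real protobuf wire format.

interface Msg { id: number; name: string; tags: string[] }

function decode(bytes: Uint8Array): Msg {
  const text = new TextDecoder().decode(bytes);
  return JSON.parse(text) as Msg;
}

function touchAllFields(m: Msg): number {
  // force materialization of every field, defeating intermediate state
  let acc = m.id + m.name.length;
  for (const t of m.tags) acc += t.length;
  return acc;
}

function benchOps(fn: () => void, iterations = 1000): number {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedSec = Math.max(Date.now() - start, 1) / 1000;
  return iterations / elapsedSec; // ops/s
}

const payload = new TextEncoder().encode(
  JSON.stringify({ id: 1, name: "demo", tags: ["a", "bb"] })
);
const opsDecodeOnly = benchOps(() => decode(payload));
const opsDecodeTouch = benchOps(() => touchAllFields(decode(payload)));
console.log(opsDecodeOnly > 0 && opsDecodeTouch > 0);
```

Comparing `opsDecodeOnly` against `opsDecodeTouch` separates the cost of parsing from the cost of actually materializing fields, which is where libraries with lazy intermediate state can look misleadingly fast.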

That being said, having performance benchmarks for protobuf.js would be great.

jcready added a commit to jcready/protobuf-ts that referenced this issue Jun 24, 2021
Partially address timostamm#123

Results of the perf benchmark on my machine:

### read binary
google-protobuf             :     413.662 ops/s
ts-proto                    :   1,324.736 ops/s
protobuf-ts (speed)         :   1,461.452 ops/s
protobuf-ts (speed, bigint) :   1,475.889 ops/s
protobuf-ts (size)          :   1,250.677 ops/s
protobuf-ts (size, bigint)  :   1,255.167 ops/s
protobufjs                  :   1,732.049 ops/s
### write binary
google-protobuf             :     906.883 ops/s
ts-proto                    :   3,805.993 ops/s
protobuf-ts (speed)         :     430.632 ops/s
protobuf-ts (speed, bigint) :     448.063 ops/s
protobuf-ts (size)          :     378.682 ops/s
protobuf-ts (size, bigint)  :     392.511 ops/s
protobufjs                  :   1,539.768 ops/s
### from partial
ts-proto                    :   4,503.332 ops/s
protobuf-ts (speed)         :   1,568.577 ops/s
protobuf-ts (size)          :   1,555.881 ops/s
### read json
ts-proto                    :   3,719.366 ops/s
protobuf-ts (speed)         :     889.613 ops/s
protobuf-ts (size)          :     890.034 ops/s
protobufjs                  :   4,120.232 ops/s
### write json
ts-proto                    :  13,200.842 ops/s
protobuf-ts (speed)         :   1,865.668 ops/s
protobuf-ts (size)          :   1,862.537 ops/s
protobufjs                  :   4,199.372 ops/s
### read json string
ts-proto                    :     957.057 ops/s
protobuf-ts (speed)         :     416.284 ops/s
protobuf-ts (size)          :     421.48  ops/s
protobufjs                  :     910.256 ops/s
### write json string
ts-proto                    :   1,000.572 ops/s
protobuf-ts (speed)         :     943.338 ops/s
protobuf-ts (size)          :     949.784 ops/s
protobufjs                  :   1,446.891 ops/s
@timostamm
Owner

Thanks for the PR! The makefile bug from this comment is fixed in commit 326299f, making sure the performance benchmarks run with the same payload every time. I'm not saying benchmarks should only run with the large payload; this was just fixing the obvious makefile bug.

All testees are in the same ballpark with the large payload size:

### read binary
google-protobuf             : 11.26 ops/s
ts-proto                    : 25.34 ops/s
protobuf-ts (speed)         : 27.98 ops/s
protobuf-ts (speed, bigint) : 25.9 ops/s
protobuf-ts (size)          : 24.43 ops/s
protobuf-ts (size, bigint)  : 23.38 ops/s
### write binary
google-protobuf             : 16.15 ops/s
ts-proto                    : 13.11 ops/s
protobuf-ts (speed)         : 12.12 ops/s
protobuf-ts (speed, bigint) : 11.8 ops/s
protobuf-ts (size)          : 10.1 ops/s
protobuf-ts (size, bigint)  : 9.76 ops/s
### from partial
ts-proto                    : 25.39 ops/s
protobuf-ts (speed)         : 22.25 ops/s
protobuf-ts (size)          : 21.27 ops/s
### read json
ts-proto                    : 41.19 ops/s
protobuf-ts (speed)         : 16.16 ops/s
protobuf-ts (size)          : 16.55 ops/s
### write json
ts-proto                    : 138.19 ops/s
protobuf-ts (speed)         : 23.38 ops/s
protobuf-ts (size)          : 23.19 ops/s
### read json string
ts-proto                    : 11.74 ops/s
protobuf-ts (speed)         : 7.78 ops/s
protobuf-ts (size)          : 7.77 ops/s
### write json string
ts-proto                    : 16.13 ops/s
protobuf-ts (speed)         : 16.15 ops/s
protobuf-ts (size)          : 16.82 ops/s

I think the benchmarks should run on several payload sizes. There are some factor-10 gaps with the smaller payload you were measuring that are worth closer investigation.

@timostamm
Owner

For reference:

Large payload: 1.2MiB - FileDescriptorSet for packages/test-fixtures/**/*.proto
Small payload: 49KiB - FileDescriptorSet just for google/protobuf/descriptor.proto

@fenos
Contributor

fenos commented Jul 12, 2021

Great results!

One thing that caught my eye is the massive difference in writing JSON between ts-proto and protobuf-ts.
Do you know why protobuf-ts is so much slower?

@timostamm
Owner

ts-proto uses protobuf.js for JSON, which doesn't fully implement the official JSON format, see protobufjs/protobuf.js#1304. I am sure they are skipping a few things.

But look closely at the numbers. They are put into perspective when you realize that they measure turning the internal representation into a JSON object. What you need in practice is a JSON string:

### read json string
ts-proto                    : 11.74 ops/s
protobuf-ts (size)          : 7.77 ops/s
### write json string
ts-proto                    : 16.13 ops/s
protobuf-ts (size)          : 16.82 ops/s
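To make the distinction concrete, here is a minimal sketch of what the two measurement targets cover. `toJsonObject` is a stand-in for a generated toJson method, not a real protobuf-ts API; the point is that the "string" rows additionally pay for `JSON.stringify`, which is usually the dominant cost.

```typescript
// Sketch: "write json object" stops at a plain JS object, while
// "write json string" includes JSON.stringify on top of it.

interface Msg { seconds: bigint; nanos: number }

function toJsonObject(m: Msg): Record<string, unknown> {
  // proto3 JSON represents 64-bit integers as decimal strings
  return { seconds: m.seconds.toString(), nanos: m.nanos };
}

function toJsonString(m: Msg): string {
  return JSON.stringify(toJsonObject(m)); // the extra step the "string" rows pay for
}

const msg: Msg = { seconds: 1701854312n, nanos: 4992885 };
console.log(toJsonString(msg));
// a "write json object" benchmark would time only toJsonObject(msg)
```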

@timostamm timostamm added the documentation Improvements or additions to documentation label Jul 17, 2021
@timostamm
Owner

I think the manual deserves a performance comparison table at the end of the section Code size vs speed. It should just show numbers for binary I/O and JSON (string) I/O. It should show the generator version number and parameters, preferably in one simple table. It should be mentioned how and where this is measured, and with what payload size. The table should be generated by a script, similar to the code size report.

These are the results including protobuf.js:

### read binary
google-protobuf             :       9.938 ops/s
ts-proto                    :      23.604 ops/s
protobuf-ts (speed)         :      23.742 ops/s
protobuf-ts (speed, bigint) :      23.066 ops/s
protobuf-ts (size)          :      24.891 ops/s
protobuf-ts (size, bigint)  :      23.829 ops/s
protobufjs                  :      28.464 ops/s
### write binary
google-protobuf             :      15.118 ops/s
ts-proto                    :      13.626 ops/s
protobuf-ts (speed)         :      12.078 ops/s
protobuf-ts (speed, bigint) :      12.036 ops/s
protobuf-ts (size)          :      10.554 ops/s
protobuf-ts (size, bigint)  :      10.672 ops/s
protobufjs                  :      12.305 ops/s
### from partial
ts-proto                    :      40.744 ops/s
protobuf-ts (speed)         :      26.53  ops/s
protobuf-ts (size)          :      27.213 ops/s
### read json string
ts-proto                    :      14.237 ops/s
protobuf-ts (speed)         :       8.307 ops/s
protobuf-ts (size)          :       8.469 ops/s
protobufjs                  :      15.367 ops/s
### write json string
ts-proto                    :      18.328 ops/s
protobuf-ts (speed)         :      18.403 ops/s
protobuf-ts (size)          :      18.34  ops/s
protobufjs                  :      23.837 ops/s
### read json object
ts-proto                    :      34.747 ops/s
protobuf-ts (speed)         :      17.509 ops/s
protobuf-ts (size)          :      17.0   ops/s
protobufjs                  :      46.793 ops/s
### write json object
ts-proto                    :     182.47  ops/s
protobuf-ts (speed)         :      30.375 ops/s
protobuf-ts (size)          :      30.049 ops/s
protobufjs                  :      47.009 ops/s

@timostamm timostamm added the help wanted Extra attention is needed label Jul 18, 2021
@timostamm
Owner

Looks like there has been a regression in v2.0.0-alpha.9: we stopped generating create for speed-optimized code. Thanks to @odashevskii-plaid, this is fixed. It bumps up the performance of the read and create methods a bit:

### read binary
google-protobuf             :      11.525 ops/s
ts-proto                    :      26.28  ops/s
protobuf-ts (speed)         :      31.584 ops/s
protobuf-ts (speed, bigint) :      33.79  ops/s
protobuf-ts (size)          :      24.935 ops/s
protobuf-ts (size, bigint)  :      25.073 ops/s
protobufjs                  :      32.129 ops/s
### write binary
google-protobuf             :      16.832 ops/s
ts-proto                    :      14.168 ops/s
protobuf-ts (speed)         :      12.636 ops/s
protobuf-ts (speed, bigint) :      12.769 ops/s
protobuf-ts (size)          :      10.969 ops/s
protobuf-ts (size, bigint)  :      11.045 ops/s
protobufjs                  :      12.902 ops/s
### from partial
ts-proto                    :      40.707 ops/s
protobuf-ts (speed)         :      29.767 ops/s
protobuf-ts (size)          :      27.98  ops/s
### read json string
ts-proto                    :      14.963 ops/s
protobuf-ts (speed)         :       8.485 ops/s
protobuf-ts (size)          :       8.272 ops/s
protobufjs                  :      15.59  ops/s
### write json string
ts-proto                    :      18.633 ops/s
protobuf-ts (speed)         :      19.347 ops/s
protobuf-ts (size)          :      18.997 ops/s
protobufjs                  :      27.291 ops/s
### read json object
ts-proto                    :      37.691 ops/s
protobuf-ts (speed)         :      18.774 ops/s
protobuf-ts (size)          :      16.267 ops/s
protobufjs                  :      44.944 ops/s
### write json object
ts-proto                    :     200.92  ops/s
protobuf-ts (speed)         :      31.027 ops/s
protobuf-ts (size)          :      32.502 ops/s
protobufjs                  :      45.461 ops/s
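The regression above was about generated create methods. As a hedged sketch of why generating create can matter (the field names and metadata shape below are made up, not protobuf-ts's actual internals): a generated create returns a monomorphic object literal the JIT can optimize, while a reflective fallback walks field metadata on every call.

```typescript
// Illustrative only: contrasting a reflective create with a generated one.

interface FieldInfo { name: string; default: unknown }

const fields: FieldInfo[] = [
  { name: "id", default: 0 },
  { name: "name", default: "" },
];

// reflective fallback: builds defaults from metadata on each call
function createReflective(): Record<string, unknown> {
  const msg: Record<string, unknown> = {};
  for (const f of fields) msg[f.name] = f.default;
  return msg;
}

// generated, speed-optimized version: a plain object literal with a
// stable shape, which engines can represent with a single hidden class
function createGenerated(): { id: number; name: string } {
  return { id: 0, name: "" };
}
```

Both produce the same message defaults; the generated form simply gives the engine more to work with, which is consistent with the read/create improvement in the numbers above.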

@jcready
Contributor Author

jcready commented Aug 31, 2021

Was this discovery made in some off-GitHub discussion? I'm curious more than anything, since I can't find any issue or PR mentioning it. Glad it was spotted, though!

Completely unrelated, but what on Earth is going on with ts-proto's "write json object" benchmark? Being nearly an order of magnitude faster than the underlying library it uses (protobufjs) seems odd.

@timostamm
Owner

Was this discovery made in some off-GitHub discussion?

See #147 (comment) and #147 (comment)

Completely unrelated, but what on Earth is going on with ts-proto's "write json object" benchmark?

It's impressive, right? I don't think ts-proto is sharing any code with protobufjs for JSON.

@osadi

osadi commented Oct 31, 2024

Hello.
I'm sorry if this is the wrong place for questions. It looked like it could belong here, though, so I'll try.

When I run make perf with a 'small' (27 fields, ~12K .proto file) proto3 message, I get some numbers that I wasn't expecting:

pbtsSpeed                                 : 1,248,300.903 ops/s
pbtsSize                                  : 623,115.493 ops/s
pbjs                                      : 2,287,120.467 ops/s

Where protobuf.js is about twice as fast. Is this what is referenced above?

There are some factor-10 gaps with the smaller payload you were measuring that are worth closer investigation.

Am I in the smaller payload segment? Am I holding it wrong?

protoc --version
libprotoc 25.1

    "@protobuf-ts/plugin": "^2.9.4",
    "@protobuf-ts/runtime": "^2.9.4",
    "protobufjs": "^7.4.0",
    "protobufjs-cli": "^1.1.3",

I'm happy to provide additional information, just tell me what you need.

All the best

@jcready
Contributor Author

jcready commented Oct 31, 2024

Without seeing your benchmark code it's hard to say. It depends on what types of fields you're using. Are you using int32/int64/uint32/uint64 fields? In the case of string fields: how long are they? Are the strings all ASCII? Are you including multi-byte UTF-8 characters? How many? How many multi-byte UTF-8 characters do you have in a row? What JS runtime and version are you using? All of these things will affect the results.

You can see some differences just based on the length of the string being decoded here, where Node v18.13 and higher starts performing faster using TextDecoder.decode() (which is what protobuf-ts uses) once the string is longer than 16 characters.

And here's another case where different browsers start performing better using TextDecoder.decode() instead of the protobufjs decoder once the string length gets above 8-32 characters.

So yeah, it depends on what you're trying to encode/decode.
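The length-based tradeoff discussed above can be sketched roughly as follows. The threshold of 16 is illustrative (the real crossover varies by runtime and version), and this is not protobuf-ts's actual code: for short ASCII strings a hand-rolled loop tends to win, while TextDecoder pulls ahead as strings grow.

```typescript
// Sketch of a length-based UTF-8 decoding strategy. SHORT_THRESHOLD is
// an assumed crossover point, not measured or taken from any library.

const SHORT_THRESHOLD = 16;

function decodeAsciiManual(bytes: Uint8Array): string {
  let out = "";
  for (let i = 0; i < bytes.length; i++) {
    out += String.fromCharCode(bytes[i]); // correct only for 1-byte code points
  }
  return out;
}

const sharedDecoder = new TextDecoder();

function decodeUtf8(bytes: Uint8Array): string {
  if (bytes.length <= SHORT_THRESHOLD) {
    // fast path is only valid for pure ASCII; fall back to a full
    // UTF-8 decode as soon as a multi-byte lead byte appears
    let ascii = true;
    for (let i = 0; i < bytes.length; i++) {
      if (bytes[i] > 0x7f) { ascii = false; break; }
    }
    if (ascii) return decodeAsciiManual(bytes);
  }
  return sharedDecoder.decode(bytes);
}

const ascii = new TextEncoder().encode("hello");
const emoji = new TextEncoder().encode("héllo 🌍");
console.log(decodeUtf8(ascii)); // "hello"
console.log(decodeUtf8(emoji)); // "héllo 🌍"
```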

@osadi

osadi commented Oct 31, 2024

Thanks for your reply. I'll address what I can, and if it's helpful I could attach the protos; although hard to read (employer code), they might shed some insight.

I've only looked at decoding.

  • Without seeing your benchmark code:
    I ran make perf from packages/benchmarks in this repository, but with my proto files.

  • Are you using int32/int64/uint32/uint64 fields
    int32/uint32/double

  • Are the strings all ascii?
    Yes

  • Are you including multi-byte UTF-8 characters
    No

  • What JS runtime and version are you using?

node --version
v22.9.0

Not running in the browser. Tested with supplying TextEncoder as well as Buffer. Buffer was slower in this case.

This is what I'm using as the payload for toBinary to get a message, which I then try to decode:

{
  a: "AAAAAAA-1", // string
  aa: "AAAAAAAAAAAAAAAAA", // string
  somethingTimestamp: {
    seconds: 1701854312n, // int64
    nanos: 4992885, // int32
  },
  something: {
    timestamp: {
      seconds: 1701854311n, // int64
      nanos: 959000000, // int32
    },
    a: 1, // optional double
    aa: 0, // optional uint32
    aaa: 0, // optional uint32
    aaaa: 1, // enum
    aaaaa: 1, // enum
    aaaaaa: 2, // enum
    aaaaaaa: 60, // optional uint32
    aaaaaaaaaaaaa: 0, // optional double
    aaaaaaaaaaaaaaaaaaaaaaaaa: 0, // enum
  },
}

I added a repro repo here https://github.com/osadi/pbbench which is using benny, and not the code from this repo, but the numbers looked similar when I tried the same code in both.

@jcready
Contributor Author

jcready commented Oct 31, 2024

So yeah, protobufjs is likely to always be faster with pure-ASCII strings. Its UTF-8 string decoder takes some shortcuts that a conformant one cannot.
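To show the kind of shortcut a non-conformant UTF-8 decoder can take, here is a deliberately naive decoder that skips validation entirely. It is a sketch for illustration, not protobuf.js's actual implementation: a spec-conformant decoder must reject invalid sequences or substitute U+FFFD, while a shortcut decoder just emits whatever falls out of its bit math.

```typescript
// Naive UTF-8 decoder with no validation (and no 4-byte handling):
// it happily accepts overlong encodings that the spec forbids.
function naiveUtf8Decode(bytes: Uint8Array): string {
  let out = "";
  for (let i = 0; i < bytes.length; ) {
    const b = bytes[i];
    if (b < 0x80) {
      out += String.fromCharCode(b);
      i += 1;
    } else if (b < 0xe0) {
      out += String.fromCharCode(((b & 0x1f) << 6) | (bytes[i + 1] & 0x3f));
      i += 2;
    } else {
      out += String.fromCharCode(
        ((b & 0x0f) << 12) | ((bytes[i + 1] & 0x3f) << 6) | (bytes[i + 2] & 0x3f)
      );
      i += 3;
    }
  }
  return out;
}

const invalid = new Uint8Array([0x61, 0xc0, 0xaf, 0x62]); // overlong encoding of '/'
const strict = new TextDecoder("utf-8"); // conformant: substitutes U+FFFD
console.log(naiveUtf8Decode(invalid)); // "a/b" — accepts the overlong form
console.log(strict.decode(invalid));   // "a\uFFFD\uFFFDb"
```

Skipping these checks saves branches per byte, which is one way a decoder can be fast but non-conformant.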

@osadi

osadi commented Nov 1, 2024

I'm probably misunderstanding something.

I've removed the only two string fields from my example and protobuf.js seems to do even better? I really want to use protobuf-ts; I just need to figure out if I'm doing it wrong, and try to sell it to the team.

I'm not decoding the object as JSON or a JSON string, if that's what you're referring to. It's base64 which is fed to Buffer, which is then the input to pbts.decode(). That operation happens outside of the performance-checking loop, so it shouldn't affect anything?

Thank you. And again, if I should remove my comments derailing your actual question just let me know. No problems.

@jcready
Contributor Author

jcready commented Nov 1, 2024

Out of curiosity, is there a reason you're looking to use protobuf-ts over protobuf-es? I only ask because the latter is A) much more actively maintained, B) supports more/newer protobuf features, and C) led by the same person who made protobuf-ts, but with protobuf-es they're actually getting paid to work on it :)

FWIW I think protobuf.js is going to be faster in most cases than either protobuf-ts or protobuf-es (but not like an order of magnitude faster, we're talking less than twice as fast). If you care about protobuf conformance then it's probably wise to stay away from protobuf.js, but if speed is paramount it's really hard to beat protobuf.js.

@osadi

osadi commented Nov 1, 2024

:) No, there's no reason. I was just tasked with working on an app that used protobuf.js, and I wasn't too keen on using that, so I looked at alternatives.
I found this, ts-proto, and pbf, but not protobuf-es. So it's only for lack of discoverability and search-fu on my end that I ended up here.

I'm happy you took the time to answer, and provided alternatives. Much appreciated.
