We pass "dynamic" messages between nodes in Proto.Actor. We tried doing this using the `Any` type, which resulted in a huge perf drop: from 1.85 million msg/sec to 1.1 million msg/sec. I suspect this is simply due to the longer type URLs and more data to allocate.
`ptypes.UnmarshalAny`, however, also relies on reflection to create the inner `proto.Message` value: `return reflect.New(t.Elem()).Interface().(proto.Message), nil`
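That reflection path can be sketched as follows. This is a self-contained illustration, not the library's actual code: `MyMessage` and the `typeRegistry` map are stand-ins for a generated message type and the table that `proto.RegisterType` populates.

```go
package main

import (
	"fmt"
	"reflect"
)

// MyMessage stands in for a generated proto message type.
type MyMessage struct{ Value string }

// typeRegistry maps a proto type name to the pointer type of its Go
// struct, similar to what proto.RegisterType makes available.
var typeRegistry = map[string]reflect.Type{
	"messages.MyMessage": reflect.TypeOf((*MyMessage)(nil)),
}

// newByReflection mirrors the line quoted above: allocate a fresh
// value of the element type and return a pointer to it.
func newByReflection(name string) (interface{}, error) {
	t, ok := typeRegistry[name]
	if !ok {
		return nil, fmt.Errorf("unknown message type %q", name)
	}
	return reflect.New(t.Elem()).Interface(), nil
}

func main() {
	m, err := newByReflection("messages.MyMessage")
	_, isMyMessage := m.(*MyMessage)
	fmt.Println(err == nil, isMyMessage)
}
```

Every deserialized message pays for a `reflect.New` call here, which is the cost the factory approach below avoids.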
We did a spike where we replaced this with a `map[string]func() proto.Message`. So instead of using reflection, we simply resolve the factory func from the proto type name, where each factory func itself would be something like `func() proto.Message { return &messages.MyMessage{} }`.
This moved us from our original 1.85 million msg/sec to 2+ million msg/sec, roughly a 10% increase. That is for a full roundtrip, so measuring deserialization in isolation would yield a much higher increase.
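A minimal sketch of the factory-map approach under the same assumptions as above: a local `Message` interface stands in for `proto.Message`, and `MyMessage` is an illustrative type, not the real generated one.

```go
package main

import "fmt"

// Message stands in for proto.Message in this sketch.
type Message interface{ Reset() }

// MyMessage is an illustrative message type.
type MyMessage struct{ Value string }

func (m *MyMessage) Reset() { *m = MyMessage{} }

// factories resolves a proto type name to a direct constructor,
// avoiding reflect.New entirely.
var factories = map[string]func() Message{
	"messages.MyMessage": func() Message { return &MyMessage{} },
}

// newByFactory replaces the reflection-based allocation with a map
// lookup plus a plain function call.
func newByFactory(name string) (Message, error) {
	f, ok := factories[name]
	if !ok {
		return nil, fmt.Errorf("unknown message type %q", name)
	}
	return f(), nil
}

func main() {
	m, err := newByFactory("messages.MyMessage")
	fmt.Println(err == nil, m != nil)
}
```

The message allocation itself is unchanged; only the reflective type lookup and `reflect.New` are replaced by a map access and a direct call.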
This would be fairly easy to emit in the generated proto->Go files; you already register type names and types there.
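For illustration, the generated registration might look like the following sketch. `RegisterMessageFactory` is a hypothetical name, not an existing API, and local stubs stand in for the proto package's registration functions.

```go
package main

import "fmt"

// Message stands in for proto.Message in this sketch.
type Message interface{ Reset() }

// MyMessage is an illustrative message type.
type MyMessage struct{ Value string }

func (m *MyMessage) Reset() { *m = MyMessage{} }

var (
	registeredTypes = map[string]Message{}
	factories       = map[string]func() Message{}
)

// RegisterType stands in for the existing proto.RegisterType call.
func RegisterType(m Message, name string) { registeredTypes[name] = m }

// RegisterMessageFactory is the hypothetical new call the generator
// would emit alongside RegisterType.
func RegisterMessageFactory(name string, f func() Message) { factories[name] = f }

// init mimics what a generated messages.pb.go file would contain.
func init() {
	RegisterType((*MyMessage)(nil), "messages.MyMessage")
	RegisterMessageFactory("messages.MyMessage", func() Message { return &MyMessage{} })
}

func main() {
	fmt.Println(len(registeredTypes), len(factories))
}
```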
Is this out of scope or something that could be of interest?
How many CPUs is your program running on, and have you profiled the source of the slowness?
We noticed significant lock contention in `reflect.ptrTo` (golang/go#17973), which should hopefully be fixed in Go 1.9 (golang/go#18177). Once that is addressed, there isn't any obvious reason why the reflect version should be a particular bottleneck.
Going to close this, as the bottleneck seems to be not in the Go protobuf code but rather in the reflect package, which may well have improved since Go 1.9.
As discussed here:
gogo/protobuf#255 (comment)