
Use sync.Pool to share per-connection transport write buffer. #6309

Merged
merged 19 commits into from
Jul 20, 2023

Conversation

Contributor

@s-matyukevich commented May 23, 2023

This is an attempt to fix #5759.

The use case where this fix could be especially useful is a server with a lot of mostly idle connections (for example, connections that only handle healthchecks). In our infrastructure this use case is very common: we have servers that handle requests from thousands of clients, each of which generates a relatively small volume of requests but has an active healthcheck configured, which forces every client to connect to every server. Some kind of subsetting would help in this scenario, but gRPC doesn't really implement subsetting, and grpc-go isn't great at handling memory in this situation. Right now a lot of our users put Envoy in front of their gRPC servers just because it handles per-connection memory a lot more efficiently (in one case we reduced an app's memory consumption from 1.5 GB to 500 MB simply by putting Envoy in front of the gRPC server).

This PR adds a sync.Pool to the transport layer to share the write buffer between connections (I plan to provide a similar fix for the read buffer if this PR is accepted). The buffer is claimed in the Write method and released on Flush, with an additional optimization that preserves the buffer if a single Write results in multiple Flushes.
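
To make the idea concrete, here is a minimal sketch of the approach. The names and structure are simplified and hypothetical; the real change integrates this into the existing bufWriter in internal/transport/http_util.go.

```go
package transport // illustrative sketch only, not the actual PR code

import (
	"io"
	"sync"
)

// pooledBufWriter borrows its backing slice from a pool shared by all
// connections and holds it only between the first Write and the final Flush.
type pooledBufWriter struct {
	pool *sync.Pool // shared across connections; stores *[]byte of a fixed size
	buf  []byte     // nil while the buffer is parked back in the pool
	end  int        // number of bytes currently buffered
	out  io.Writer  // the underlying connection
}

func (w *pooledBufWriter) Write(p []byte) (n int, err error) {
	if w.buf == nil {
		// Claim a buffer lazily on the first Write after a release.
		w.buf = *(w.pool.Get().(*[]byte))
	}
	for len(p) > 0 {
		c := copy(w.buf[w.end:], p)
		w.end += c
		n += c
		p = p[c:]
		if w.end == len(w.buf) {
			// Buffer is full: write it out but keep holding it, since a
			// single Write may fill and drain the buffer several times.
			if err = w.flushKeepBuffer(); err != nil {
				return n, err
			}
		}
	}
	return n, nil
}

func (w *pooledBufWriter) flushKeepBuffer() error {
	if w.end == 0 {
		return nil
	}
	_, err := w.out.Write(w.buf[:w.end])
	w.end = 0
	return err
}

// Flush is called when the loopy writer has nothing left to send; only then
// is the buffer returned to the pool, so idle connections hold no buffer.
func (w *pooledBufWriter) Flush() error {
	if w.buf == nil {
		return nil
	}
	err := w.flushKeepBuffer()
	buf := w.buf
	w.buf = nil
	w.pool.Put(&buf)
	return err
}
```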

Testing

I submitted #6299 so that I can reproduce the case described above with benchmarks. I used the following command to demonstrate the impact of this change:

go run benchmark/benchmain/main.go -workloads all -kbps 0 -latency 0s -maxConcurrentCalls 1000 -reqSizeBytes 1 -respSizeBytes 1 -benchtime 1m -shareConnection=false -sleepBetweenRPCs=20s -resultFile /tmp/master1 -memProfile /tmp/master1-mem

As you can see, I used tiny requests combined with large delays between requests to minimize the impact of other allocations on the memory stats.

Here are the results that I got:

Without the optimization:

root@traffic-grpc-go-benchmark-manual:/tmp/grpc-go# go tool pprof -top -inuse_space ../master3-mem
File: main
Type: inuse_space
Time: May 23, 2023 at 12:36am (UTC)
Showing nodes accounting for 104.04MB, 95.85% of 108.55MB total
Dropped 24 nodes (cum <= 0.54MB)
      flat  flat%   sum%        cum   cum%
   59.04MB 54.39% 54.39%    59.04MB 54.39%  google.golang.org/grpc/internal/transport.newBufWriter (inline)
   32.49MB 29.94% 84.33%    32.49MB 29.94%  bufio.NewReaderSize (inline)
    5.50MB  5.07% 89.40%     5.50MB  5.07%  runtime.malg
       2MB  1.84% 91.24%        2MB  1.84%  google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders
    1.50MB  1.38% 92.62%     1.50MB  1.38%  sync.(*Pool).pinSlow
       1MB  0.92% 93.55%        1MB  0.92%  bytes.growSlice
       1MB  0.92% 94.47%        1MB  0.92%  golang.org/x/net/http2/hpack.(*headerFieldTable).addEntry (inline)
    0.50MB  0.46% 94.93%    94.04MB 86.63%  google.golang.org/grpc/internal/transport.NewServerTransport
    0.50MB  0.46% 95.39%        1MB  0.92%  golang.org/x/net/http2/hpack.NewDecoder
    0.50MB  0.46% 95.85%     1.50MB  1.38%  bytes.(*Buffer).grow
         0     0% 95.85%        1MB  0.92%  bytes.(*Buffer).ReadFrom
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2.(*Framer).ReadFrame
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2.(*Framer).readMetaFrame
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2/hpack.(*Decoder).Write
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2/hpack.(*Decoder).parseFieldLiteral
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2/hpack.(*Decoder).parseHeaderFieldRepr
         0     0% 95.85%        1MB  0.92%  golang.org/x/net/http2/hpack.(*dynamicTable).add
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc.(*Server).Serve
         0     0% 95.85%    94.04MB 86.63%  google.golang.org/grpc.(*Server).Serve.func3
         0     0% 95.85%    94.04MB 86.63%  google.golang.org/grpc.(*Server).handleRawConn
         0     0% 95.85%        5MB  4.61%  google.golang.org/grpc.(*Server).handleRawConn.func1
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc.(*Server).handleStream
         0     0% 95.85%    94.04MB 86.63%  google.golang.org/grpc.(*Server).newHTTP2Transport
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc.(*Server).processStreamingRPC
         0     0% 95.85%        5MB  4.61%  google.golang.org/grpc.(*Server).serveStreams
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc.(*Server).serveStreams.func1.1
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc/benchmark/latency.(*conn).Read
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc/benchmark/latency.(*listener).Accept
         0     0% 95.85%     1.50MB  1.38%  google.golang.org/grpc/internal/transport.(*bufferPool).get (inline)
         0     0% 95.85%        5MB  4.61%  google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams
         0     0% 95.85%     1.50MB  1.38%  google.golang.org/grpc/internal/transport.(*http2Server).handleData
         0     0% 95.85%        1MB  0.92%  google.golang.org/grpc/internal/transport.NewServerTransport.func2
         0     0% 95.85%    92.54MB 85.25%  google.golang.org/grpc/internal/transport.newFramer
         0     0% 95.85%        1MB  0.92%  io.Copy (inline)
         0     0% 95.85%        1MB  0.92%  io.CopyN
         0     0% 95.85%        1MB  0.92%  io.ReadAtLeast
         0     0% 95.85%        1MB  0.92%  io.ReadFull (inline)
         0     0% 95.85%        1MB  0.92%  io.copyBuffer
         0     0% 95.85%        1MB  0.92%  net.(*TCPListener).Accept
         0     0% 95.85%        1MB  0.92%  net.(*TCPListener).accept
         0     0% 95.85%     5.50MB  5.07%  runtime.newproc.func1
         0     0% 95.85%     5.50MB  5.07%  runtime.newproc1
         0     0% 95.85%     5.50MB  5.07%  runtime.systemstack
         0     0% 95.85%     1.50MB  1.38%  sync.(*Pool).Get
         0     0% 95.85%     1.50MB  1.38%  sync.(*Pool).pin

With the optimization:

root@traffic-grpc-go-benchmark-manual:/tmp/grpc-go# go tool pprof -top -inuse_space ../fork3-mem
File: main
Type: inuse_space
Time: May 23, 2023 at 12:42am (UTC)
Showing nodes accounting for 55.53MB, 100% of 55.53MB total
      flat  flat%   sum%        cum   cum%
   31.98MB 57.59% 57.59%    31.98MB 57.59%  bufio.NewReaderSize (inline)
    8.50MB 15.31% 72.90%     8.50MB 15.31%  runtime.malg
    2.50MB  4.51% 77.41%     2.50MB  4.51%  sync.(*Pool).pinSlow
       2MB  3.60% 81.01%        2MB  3.60%  golang.org/x/net/http2/hpack.(*headerFieldTable).addEntry (inline)
    1.50MB  2.70% 83.71%        3MB  5.40%  google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders
       1MB  1.80% 85.52%        1MB  1.80%  bytes.growSlice
       1MB  1.80% 87.32%        1MB  1.80%  context.(*cancelCtx).Done
       1MB  1.80% 89.12%    35.48MB 63.89%  google.golang.org/grpc/internal/transport.NewServerTransport
       1MB  1.80% 90.92%        1MB  1.80%  google.golang.org/grpc/internal/transport.newControlBuffer (inline)
    0.53MB  0.96% 91.88%     0.53MB  0.96%  google.golang.org/grpc/internal/transport.newFramer.func1
    0.50MB  0.91% 92.79%     0.50MB  0.91%  runtime.doaddtimer
    0.50MB  0.91% 93.70%     0.50MB  0.91%  google.golang.org/protobuf/reflect/protoregistry.(*Files).RegisterFile.func2
    0.50MB   0.9% 94.60%     0.50MB   0.9%  google.golang.org/grpc.(*Server).processStreamingRPC
    0.50MB   0.9% 95.50%     0.50MB   0.9%  golang.org/x/net/http2.NewFramer
    0.50MB   0.9% 96.40%     0.50MB   0.9%  golang.org/x/net/http2.NewFramer.func2
    0.50MB   0.9% 97.30%     0.50MB   0.9%  net.newFD (inline)
    0.50MB   0.9% 98.20%     0.50MB   0.9%  google.golang.org/grpc/internal/transport.(*loopyWriter).registerStreamHandler (inline)
    0.50MB   0.9% 99.10%     0.50MB   0.9%  google.golang.org/grpc/internal/transport.newLoopyWriter
    0.50MB   0.9%   100%     0.50MB   0.9%  context.WithValue
         0     0%   100%        1MB  1.80%  bytes.(*Buffer).ReadFrom
         0     0%   100%        1MB  1.80%  bytes.(*Buffer).grow
         0     0%   100%     1.50MB  2.70%  golang.org/x/net/http2.(*Framer).ReadFrame
         0     0%   100%     0.53MB  0.96%  golang.org/x/net/http2.(*Framer).WriteHeaders
         0     0%   100%     0.53MB  0.96%  golang.org/x/net/http2.(*Framer).endWrite
         0     0%   100%        1MB  1.80%  golang.org/x/net/http2.(*Framer).readMetaFrame
         0     0%   100%        1MB  1.80%  golang.org/x/net/http2/hpack.(*Decoder).Write
         0     0%   100%        1MB  1.80%  golang.org/x/net/http2/hpack.(*Decoder).parseFieldLiteral
         0     0%   100%        1MB  1.80%  golang.org/x/net/http2/hpack.(*Decoder).parseHeaderFieldRepr
         0     0%   100%        1MB  1.80%  golang.org/x/net/http2/hpack.(*Encoder).WriteField
         0     0%   100%        2MB  3.60%  golang.org/x/net/http2/hpack.(*dynamicTable).add
         0     0%   100%     0.50MB   0.9%  google.golang.org/grpc.(*Server).Serve
         0     0%   100%    35.48MB 63.89%  google.golang.org/grpc.(*Server).Serve.func3
         0     0%   100%    35.48MB 63.89%  google.golang.org/grpc.(*Server).handleRawConn
         0     0%   100%        6MB 10.81%  google.golang.org/grpc.(*Server).handleRawConn.func1
         0     0%   100%     0.50MB   0.9%  google.golang.org/grpc.(*Server).handleStream
         0     0%   100%    35.48MB 63.89%  google.golang.org/grpc.(*Server).newHTTP2Transport
         0     0%   100%        6MB 10.81%  google.golang.org/grpc.(*Server).serveStreams
         0     0%   100%     0.50MB   0.9%  google.golang.org/grpc.(*Server).serveStreams.func1.1
         0     0%   100%        1MB  1.80%  google.golang.org/grpc/benchmark/latency.(*conn).Read
         0     0%   100%     0.50MB   0.9%  google.golang.org/grpc/benchmark/latency.(*listener).Accept
         0     0%   100%     0.53MB  0.96%  google.golang.org/grpc/internal/transport.(*bufWriter).Write
         0     0%   100%     2.50MB  4.51%  google.golang.org/grpc/internal/transport.(*bufferPool).get (inline)
         0     0%   100%        1MB  1.80%  google.golang.org/grpc/internal/transport.(*http2Client).handleData
         0     0%   100%        1MB  1.80%  google.golang.org/grpc/internal/transport.(*http2Client).reader
         0     0%   100%        6MB 10.81%  google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams
         0     0%   100%     1.50MB  2.70%  google.golang.org/grpc/internal/transport.(*http2Server).handleData
         0     0%   100%     2.03MB  3.66%  google.golang.org/grpc/internal/transport.(*loopyWriter).handle
         0     0%   100%     1.53MB  2.76%  google.golang.org/grpc/internal/transport.(*loopyWriter).headerHandler
         0     0%   100%     0.53MB  0.96%  google.golang.org/grpc/internal/transport.(*loopyWriter).originateStream
         0     0%   100%     2.03MB  3.66%  google.golang.org/grpc/internal/transport.(*loopyWriter).run
         0     0%   100%     1.53MB  2.76%  google.golang.org/grpc/internal/transport.(*loopyWriter).writeHeader
         0     0%   100%        2MB  3.60%  google.golang.org/grpc/internal/transport.NewServerTransport.func2
         0     0%   100%    32.48MB 58.49%  google.golang.org/grpc/internal/transport.newFramer
         0     0%   100%     0.53MB  0.96%  google.golang.org/grpc/internal/transport.newHTTP2Client.func6
         0     0%   100%     0.50MB  0.91%  google.golang.org/grpc/interop/grpc_testing.file_grpc_testing_control_proto_init
         0     0%   100%     0.50MB  0.91%  google.golang.org/grpc/interop/grpc_testing.init.1
         0     0%   100%     0.50MB   0.9%  google.golang.org/grpc/metadata.NewIncomingContext (inline)
         0     0%   100%     0.50MB  0.91%  google.golang.org/protobuf/internal/filedesc.Builder.Build
         0     0%   100%     0.50MB  0.91%  google.golang.org/protobuf/internal/filetype.Builder.Build
         0     0%   100%     0.50MB  0.91%  google.golang.org/protobuf/reflect/protoregistry.(*Files).RegisterFile
         0     0%   100%     0.50MB  0.91%  google.golang.org/protobuf/reflect/protoregistry.rangeTopLevelDescriptors
         0     0%   100%        1MB  1.80%  io.Copy (inline)
         0     0%   100%        1MB  1.80%  io.CopyN
         0     0%   100%        1MB  1.80%  io.ReadAtLeast
         0     0%   100%        1MB  1.80%  io.ReadFull (inline)
         0     0%   100%        1MB  1.80%  io.copyBuffer
         0     0%   100%     0.50MB   0.9%  net.(*TCPListener).Accept
         0     0%   100%     0.50MB   0.9%  net.(*TCPListener).accept
         0     0%   100%     0.50MB   0.9%  net.(*netFD).accept
         0     0%   100%     0.50MB  0.91%  runtime.doInit
         0     0%   100%     0.50MB  0.91%  runtime.main
         0     0%   100%     0.50MB  0.91%  runtime.mcall
         0     0%   100%     0.50MB  0.91%  runtime.modtimer
         0     0%   100%     8.50MB 15.31%  runtime.newproc.func1
         0     0%   100%     8.50MB 15.31%  runtime.newproc1
         0     0%   100%     0.50MB  0.91%  runtime.park_m
         0     0%   100%     0.50MB  0.91%  runtime.resetForSleep
         0     0%   100%     0.50MB  0.91%  runtime.resettimer (inline)
         0     0%   100%     8.50MB 15.31%  runtime.systemstack
         0     0%   100%     3.03MB  5.46%  sync.(*Pool).Get
         0     0%   100%     2.50MB  4.51%  sync.(*Pool).pin

As you can see, in the unoptimized benchmark run the newBufWriter function allocates 59.04MB, or 54.39% of the total memory. In the optimized run that entry is gone; instead we see "0.53MB 0.96% 91.88% 0.53MB 0.96% google.golang.org/grpc/internal/transport.newFramer.func1", which corresponds to the memory allocated by the sync.Pool that I added in the newFramer function.

I also have some results demonstrating that this change doesn't negatively impact performance in more realistic benchmark setups; I will provide those results a bit later.

RELEASE NOTES:

  • client & server: Add experimental [With]SharedWriteBuffer to improve performance by reducing allocations when sending RPC messages. (Disabled by default.)
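
For reference, once merged the new options can be enabled roughly like this. This is a hedged usage sketch: the address and credentials are placeholders, and the option names follow the release note above.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Client side: opt in to the pooled (shared) write buffer.
	conn, err := grpc.Dial(
		"localhost:50051", // placeholder address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithSharedWriteBuffer(true),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Server side: the matching ServerOption.
	srv := grpc.NewServer(grpc.SharedWriteBuffer(true))
	_ = srv
}
```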

		}
		writeBufferPoolMap[size] = pool
	}
	writeBufferMutex.Unlock()
Contributor Author

@s-matyukevich May 23, 2023


In theory, we could get rid of this mutex and pool map if we stored a sync.Pool per ClientConnection and per server. However, that would require modifying a lot of internal interfaces to pass the pool all the way down to the transport layer, and it would also prevent us from sharing the pool between different ClientConnections and servers, so I decided to take the mutex approach.

As far as I understand, this mutex shouldn't really impact performance, since it is only taken when a new connection is created, which doesn't happen very often with HTTP/2.
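
For context, here is a minimal sketch of the registry being described. The names follow the diff hunks quoted in this thread, but details such as the exact buffer sizing are simplified.

```go
package transport // sketch only; simplified from the hunks quoted in this thread

import "sync"

var (
	writeBufferMutex   sync.Mutex
	writeBufferPoolMap = make(map[int]*sync.Pool)
)

// getWriteBufferPool returns a pool of write buffers of the requested size,
// creating it on first use. Pools are shared by every connection (client or
// server) that is configured with the same buffer size.
func getWriteBufferPool(size int) *sync.Pool {
	writeBufferMutex.Lock()
	defer writeBufferMutex.Unlock()
	pool, ok := writeBufferPoolMap[size]
	if ok {
		return pool
	}
	pool = &sync.Pool{
		New: func() interface{} {
			b := make([]byte, size)
			return &b
		},
	}
	writeBufferPoolMap[size] = pool
	return pool
}
```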

Contributor

When do entries from this map ever get deleted? Can this map grow in an unbounded fashion?

Contributor Author

Right now, never. In theory it can grow unbounded if the app creates multiple servers and/or ClientConnections with different buffer sizes. I can't think of a use case where this would be useful, but it is possible. I'm not sure it is worth adding more complicated logic to protect against this, because my understanding is that the Go runtime will eventually deallocate all memory from unused pools, so we would only be wasting memory on storing pointers to empty pools, which should be negligible in practice.

Still, I can fix this if you prefer:

  • I can add a goroutine that removes unused pools. I'll probably need to store something like a lastModifiedTime for each pool to do that. The downsides are some additional complexity, plus the fact that we need to take the lock in the goroutine.
  • I can store a pool per Server and ClientConnection and propagate it all the way down to the transport layer. The downsides are that we need to extend some internal interfaces, and that we won't be able to reuse pools between different connections that use the same buffer size (which I assume is a very common case). On the other hand, we could get rid of the lock completely.

Do you have any other ideas? Let me know what I should do here.

Contributor

In theory it can grow unbounded if the app creates multiple servers and/or ClientConnections with different buffer sizes. I can't think of a use case where this would be useful, but it is possible.

I agree. I don't expect applications to create multiple grpc.ClientConns or grpc.Servers and pass them different values for the send and receive buffer sizes. But given that we are going to have a dial option and a server option to enable the sync.Pool on the client and server respectively, we could document it there, just so no one complains that a few bytes are leaked.

@dfawley : Thoughts/Objections?

@arvindbr8 self-assigned this May 23, 2023
@s-matyukevich
Contributor Author

I was trying to test whether this change causes any significant CPU/latency regressions, so I ran benchmarks on a dedicated 8-CPU, 32 GB virtual machine. I used #6299 with the following parameters:

go run benchmark/benchmain/main.go -workloads all -kbps 0 -latency 0s -maxConcurrentCalls 50 -reqSizeBytes 1024,102400 -respSizeBytes 1024,102400 -benchtime 1m -connections=50 -sleepBetweenRPCs=0s,10s -resultFile /tmp/master1 -memProfile /tmp/master1-mem -memProfileRate 1024

Here are the results.

@s-matyukevich
Contributor Author

I tested this with a real app and got a 40% memory reduction. Each instance of the app handles ~1.2k connections and has a low volume of requests.
[Screenshot: per-instance memory usage before and after the change]

@dfawley
Member

dfawley commented May 31, 2023

Unfortunately with @arvindbr8 and @easwars out of town we are a bit short staffed right now. Will resume in ~2 weeks if that's okay, thanks!

@s-matyukevich
Contributor Author

Unfortunately with @arvindbr8 and @easwars out of town we are a bit short staffed right now. Will resume in ~2 weeks if that's okay, thanks!

Sounds good! Thanks for the update.

@dfawley added this to the 1.57 Release milestone Jun 2, 2023
@easwars added the Type: Performance label Jun 27, 2023
Contributor

@easwars left a comment

What are your thoughts on making this an opt-in feature, i.e., we could have a dial option which the user would have to set to get this behavior. If the dial option is not set, the user would get the old behavior.

At a later point in time, we could either get rid of the dial option completely and make this the default behavior, or keep the dial option (but make the sync.Pool the default, so the user would have to use the dial option to explicitly disable it).

internal/transport/http_util.go (outdated review thread, resolved)

@s-matyukevich
Contributor Author

What are your thoughts on making this an opt-in feature, i.e., we could have a dial option which the user would have to set to get this behavior. If the dial option is not set, the user would get the old behavior.

At a later point in time, we could either get rid of the dial option completely and make this the default behavior, or keep the dial option (but make the sync.Pool the default, so the user would have to use the dial option to explicitly disable it).

Sure, will do that. If we are not enabling this by default, what do you think about adding a similar fix for the read buffer in the same PR? I can use a single option to enable both.

@easwars
Contributor

easwars commented Jun 27, 2023

Sure, will do that. If we are not enabling this by default, what do you think about adding a similar fix for the read buffer in the same PR? I can use a single option to enable both.

Yes, totally. And if we are going to have it for the server as well, we need a ServerOption as well. Thanks.

Contributor

@easwars left a comment

There is a small vet failure because the docstring doesn't start with the name of the function. Could you please take care of that? Thanks.

Member

@dfawley left a comment

LGTM, thanks! Just a couple minor things before we can merge it.

server.go (review thread, resolved)
dialoptions.go (review thread, resolved)
internal/transport/http_util.go (outdated review thread, resolved)
@dfawley merged commit 9bb44fb into grpc:master Jul 20, 2023
1 check passed
@ukai

ukai commented Sep 8, 2023

https://pkg.go.dev/google.golang.org/grpc#WithSharedWriteBuffer

 If this option is set to true every connection will release the buffer after flushing the data on the wire.

not release, but reuse?

@s-matyukevich
Contributor Author

https://pkg.go.dev/google.golang.org/grpc#WithSharedWriteBuffer

 If this option is set to true every connection will release the buffer after flushing the data on the wire.

not release, but reuse?

release, so it can be reused by other connections.

smira added a commit to smira/talos that referenced this pull request Sep 12, 2023
Fixes siderolabs#7576

See grpc/grpc-go#6309

Signed-off-by: Andrey Smirnov <andrey.smirnov@siderolabs.com>
@whs mentioned this pull request Sep 13, 2023
smira added a commit to smira/talos that referenced this pull request Sep 13, 2023
Fixes siderolabs#7576

See grpc/grpc-go#6309

Signed-off-by: Andrey Smirnov <andrey.smirnov@siderolabs.com>
func getWriteBufferPool(writeBufferSize int) *sync.Pool {
	writeBufferMutex.Lock()
	defer writeBufferMutex.Unlock()
	size := writeBufferSize * 2
Contributor

@raghav-stripe Feb 14, 2024


What is the significance of the * 2 here?

Member

IIRC this matches the legacy behavior, which always doubled the size for reasons nobody knows, but which we don't want to change now because it would affect existing applications.

Contributor

That's fair. Do you mind expanding on the legacy behavior you are describing? It seems that if there is no shared buffer pool, the buffer is allocated at exactly the input size; only in the buffer-pool case do we allocate 2x the size (and this seems to be the PR where the write buffer pool was added)?

Member

You may be right. The old code multiplied by 2 when doing this allocation, so it seems we have already changed the behavior for legacy users:

https://github.com/grpc/grpc-go/blame/d524b409462c601ef3f05a7e1fba19755a337c77/internal/transport/http_util.go#L321

Contributor

Yeah - it just felt somewhat bizarre because this is the very PR that also removed that same * 2 in the default path (see line 321 above), but added the * 2 for the buffer pool path.

Contributor

If you think it's worth updating, I created #6983

@github-actions bot locked as resolved and limited conversation to collaborators Aug 14, 2024
Labels
Status: Requires Reporter Clarification; Type: Performance
Development

Successfully merging this pull request may close these issues.

transport: investigate dynamic buffer sizes for the write buffer
7 participants