pool: track usage of incoming streams #10710

Merged · 2 commits · Jun 7, 2021
5 changes: 2 additions & 3 deletions client/rpc.go
@@ -15,7 +15,6 @@ import (
inmem "github.com/hashicorp/nomad/helper/codec"
"github.com/hashicorp/nomad/helper/pool"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/yamux"
)

// rpcEndpoints holds the RPC endpoints
@@ -282,7 +281,7 @@ func (c *Client) setupClientRpcServer(server *rpc.Server) {
// connection.
func (c *Client) rpcConnListener() {
// Make a channel for new connections.
conns := make(chan *yamux.Session, 4)
conns := make(chan *pool.Conn, 4)
c.connPool.SetConnListener(conns)

for {
@@ -301,7 +300,7 @@

// listenConn is used to listen for connections being made from the server on
// pre-existing connection. This should be called in a goroutine.
func (c *Client) listenConn(s *yamux.Session) {
func (c *Client) listenConn(s *pool.Conn) {
Member:
Tiniest of nitpicks, but with the type being changed to *pool.Conn, naming this argument s doesn't make sense anymore. Maybe rename the arg to p?

(pool.Conn is kind of a weirdly named type in general because it's more like "connection factory" or "connect proxy" given that Accept() returns net.Conn, but it isn't a "connection pool" either as it contains the connection pool. But probably best not to rework the whole thing. 😀 )

Contributor Author:
I'll change it to c. FWIW, pool.Conn is the underlying physical connection; pool.Accept() returns a wrapped yamux.Session, which also implements net.Conn (and which clients expect).

for {
conn, err := s.Accept()
if err != nil {
65 changes: 50 additions & 15 deletions helper/pool/pool.go
@@ -56,12 +56,21 @@ type Conn struct {
clientLock sync.Mutex
}

// markForUse does all the bookkeeping required to ready a connection for use.
// markForUse does all the bookkeeping required to ready a connection for use,
// and ensure that active connections don't get reaped.
func (c *Conn) markForUse() {
c.lastUsed = time.Now()
atomic.AddInt32(&c.refCount, 1)
}

// releaseUse is the complement of `markForUse`, to free up the reference count
func (c *Conn) releaseUse() {
refCount := atomic.AddInt32(&c.refCount, -1)
if refCount == 0 && atomic.LoadInt32(&c.shouldClose) == 1 {
c.Close()
}
}

func (c *Conn) Close() error {
return c.session.Close()
}
@@ -122,6 +131,40 @@ func (c *Conn) returnClient(client *StreamClient) {
}
}

func (c *Conn) IsClosed() bool {
return c.session.IsClosed()
}

func (c *Conn) Accept() (net.Conn, error) {
s, err := c.session.AcceptStream()
if err != nil {
return nil, err
}

c.markForUse()
return &incomingStream{
Stream: s,
parent: c,
}, nil
}

// incomingStream wraps yamux.Stream but frees the underlying yamux.Session
// when closed
type incomingStream struct {
*yamux.Stream

parent *Conn
}

func (s *incomingStream) Close() error {
err := s.Stream.Close()

// always release parent even if error
s.parent.releaseUse()

return err
}

// ConnPool is used to maintain a connection pool to other
// Nomad servers. This is used to reduce the latency of
// RPC requests between servers. It is only used to pool
@@ -157,7 +200,7 @@ type ConnPool struct {

// connListener is used to notify a potential listener of a new connection
// being made.
connListener chan<- *yamux.Session
connListener chan<- *Conn
}

// NewPool is used to make a new connection pool
@@ -220,7 +263,7 @@ func (p *ConnPool) ReloadTLS(tlsWrap tlsutil.RegionWrapper) {

// SetConnListener is used to listen to new connections being made. The
// channel will be closed when the conn pool is closed or a new listener is set.
func (p *ConnPool) SetConnListener(l chan<- *yamux.Session) {
func (p *ConnPool) SetConnListener(l chan<- *Conn) {
p.Lock()
defer p.Unlock()

@@ -276,7 +319,7 @@ func (p *ConnPool) acquire(region string, addr net.Addr, version int) (*Conn, er
// If there is a connection listener, notify them of the new connection.
if p.connListener != nil {
select {
case p.connListener <- c.session:
case p.connListener <- c:
default:
}
}
@@ -386,14 +429,6 @@ func (p *ConnPool) clearConn(conn *Conn) {
}
}

// releaseConn is invoked when we are done with a conn to reduce the ref count
func (p *ConnPool) releaseConn(conn *Conn) {
refCount := atomic.AddInt32(&conn.refCount, -1)
if refCount == 0 && atomic.LoadInt32(&conn.shouldClose) == 1 {
conn.Close()
}
}

// getClient is used to get a usable client for an address and protocol version
func (p *ConnPool) getRPCClient(region string, addr net.Addr, version int) (*Conn, *StreamClient, error) {
retries := 0
@@ -408,7 +443,7 @@
START:
client, err := conn.getRPCClient()
if err != nil {
p.clearConn(conn)
p.releaseConn(conn)
conn.releaseUse()

// Try to redial, possible that the TCP session closed due to timeout
if retries == 0 {
@@ -461,7 +496,7 @@ func (p *ConnPool) RPC(region string, addr net.Addr, version int, method string,
p.clearConn(conn)
}

p.releaseConn(conn)
conn.releaseUse()
Member:
While reviewing this PR I'm noticing we have an existing hidden temporal coupling of conn.releaseUse() to getRPCClient. If someone were to call conn.releaseUse outside of this method without first calling p.clearConn(conn), we can get a value of -1 for the reference count and then not close the connection. The current code is correct but easy for someone to break.

Two suggestions:

  • Move this into a defer conn.releaseUse() right after we check the error from getRPCClient to make the temporal coupling more explicit.
  • Maybe make releaseUse tolerant of misuse by having it close on refCount < 1:
// releaseUse is the complement of `markForUse`, to free up the reference count
func (c *Conn) releaseUse() {
	refCount := atomic.AddInt32(&c.refCount, -1)
	if refCount < 1 && atomic.LoadInt32(&c.shouldClose) == 1 {
		c.Close()	
	}
}

Contributor Author:

I agree it's brittle, I'll move to defer.

I'm unsure about handling negative refCounts. It's unclear how refCount could become negative through re-ordered calls: only conn.releaseUse decrements it. So a negative value indicates a double-release bug, and defensive handling will probably lead to yet more subtle cases of unexpected connection closing, like this one. I wish we could simply panic on negative values so we can find the source of the double release.

Member:

So negative values indicate double-release bugs, and defensive handling will probably lead to yet more subtle cases of unexpected connection closing like this one.

That's a good point. Totally agreed on that.


// If the error is an RPC Coded error
// return the coded error without wrapping
@@ -475,7 +510,7 @@ func (p *ConnPool) RPC(region string, addr net.Addr, version int, method string,

// Done with the connection
conn.returnClient(sc)
p.releaseConn(conn)
conn.releaseUse()
return nil
}

3 changes: 1 addition & 2 deletions helper/pool/pool_test.go
@@ -9,7 +9,6 @@ import (
"github.com/hashicorp/nomad/helper/freeport"
"github.com/hashicorp/nomad/helper/testlog"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/yamux"
"github.com/stretchr/testify/require"
)

@@ -47,7 +46,7 @@ func TestConnPool_ConnListener(t *testing.T) {
pool := newTestPool(t)

// Setup a listener
c := make(chan *yamux.Session, 1)
c := make(chan *Conn, 1)
pool.SetConnListener(c)

// Make an RPC