
drastically reduce allocations in ring buffer implementation #64

Merged · 3 commits · Nov 20, 2021
1 change: 1 addition & 0 deletions bench_test.go
@@ -54,6 +54,7 @@ func BenchmarkSendRecv(b *testing.B) {
 	recvBuf := make([]byte, 512)
 
 	doneCh := make(chan struct{})
+	b.ResetTimer()
Reviewer:
Suggest you also add b.ReportAllocs() here and in BenchmarkSendRecvLarge

Contributor Author:
I am not sure this is necessary; you can pass -benchmem to go test to achieve the same result.

Reviewer:
True, but I'm suggesting it because allocs seem to be of ongoing concern. Up to you.
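For reference, a minimal sketch of the two approaches being discussed (the package clause, benchmark name, and body are placeholders, not the actual code in bench_test.go):

```go
package yamux

import "testing"

// Option 1: opt the benchmark itself into allocation reporting.
// With b.ReportAllocs(), go test prints B/op and allocs/op for this
// benchmark even when -benchmem is not passed on the command line.
func BenchmarkSendRecvSketch(b *testing.B) {
	b.ReportAllocs()
	b.ResetTimer() // exclude setup above from the measurement
	for i := 0; i < b.N; i++ {
		// send/recv work under test would go here
	}
}

// Option 2: leave the benchmark as-is and enable reporting per run:
//   go test -bench=BenchmarkSendRecv -benchmem
```

The trade-off is scope: -benchmem turns on allocation reporting for every benchmark in a run, while b.ReportAllocs() bakes it into the specific benchmarks where allocations are a standing concern.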

 	go func() {
 		defer close(doneCh)
 		defer server.Close()
39 changes: 31 additions & 8 deletions util.go
@@ -68,15 +68,21 @@ type segmentedBuffer struct {
 	cap uint32
 	len uint32
 	bm  sync.Mutex
-	// read position in b[0].
+	// read position in b[bPos].
 	// We must not reslice any of the buffers in b, as we need to put them back into the pool.
 	readPos int
-	b [][]byte
+	// bPos is an index in b slice. If bPos == len(b), it means that buffer is empty.
+	bPos int
+	// b is used as a growable buffer. Each Append adds []byte to the end of b.
+	// If there is no space available at the end of the buffer (len(b) == cap(b)), but it has space
+	// at the beginning (bPos > 0 and at least 1/4 of the buffer is empty), data inside b is shifted to the beginning.
+	// Each Read reads from b[bPos] and increments bPos if b[bPos] was fully read.
+	b [][]byte
 }
 
 // NewSegmentedBuffer allocates a ring buffer.
 func newSegmentedBuffer(initialCapacity uint32) segmentedBuffer {
-	return segmentedBuffer{cap: initialCapacity, b: make([][]byte, 0)}
+	return segmentedBuffer{cap: initialCapacity, b: make([][]byte, 0, 16)}
 }
 
 // Len is the amount of data in the receive buffer.
@@ -109,15 +115,15 @@ func (s *segmentedBuffer) Read(b []byte) (int, error) {
 	s.bm.Lock()
 	defer s.bm.Unlock()
-	if len(s.b) == 0 {
+	if s.bPos == len(s.b) {
 		return 0, io.EOF
 	}
-	data := s.b[0][s.readPos:]
+	data := s.b[s.bPos][s.readPos:]
 	n := copy(b, data)
 	if n == len(data) {
-		pool.Put(s.b[0])
-		s.b[0] = nil
-		s.b = s.b[1:]
+		pool.Put(s.b[s.bPos])
+		s.b[s.bPos] = nil
+		s.bPos++
 		s.readPos = 0
 	} else {
 		s.readPos += n
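A toy trace of the new read cursor may help (a standalone sketch, not library code; the segment contents are made up):

```go
package main

import "fmt"

func main() {
	// Two appended segments; bPos is the index of the next unread one.
	b := [][]byte{[]byte("s0"), []byte("s1")}
	bPos := 0
	for bPos != len(b) { // the emptiness test Read now uses
		fmt.Printf("draining %s\n", b[bPos])
		b[bPos] = nil // fully consumed; the real code also returns it to the pool
		bPos++
	}
	fmt.Println("bPos == len(b): empty, Read would return io.EOF")
}
```

The point of the change is that a fully drained buffer is now expressed as bPos == len(b) rather than len(s.b) == 0, so Read never reslices s.b and the slice's backing array stays available for reuse by Append.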
@@ -152,6 +158,23 @@ func (s *segmentedBuffer) Append(input io.Reader, length uint32) error {
 	if n > 0 {
 		s.len += uint32(n)
 		s.cap -= uint32(n)
+		// s.b has no available space at the end, but has space at the beginning
+		if len(s.b) == cap(s.b) && s.bPos > 0 {
+			if s.bPos == len(s.b) {
+				// the buffer is empty, so just move pos
+				s.bPos = 0
+				s.b = s.b[:0]
+			} else if s.bPos > cap(s.b)/4 {
+				// at least 1/4 of buffer is empty, so shift data to the left to free space at the end
+				copied := copy(s.b, s.b[s.bPos:])
+				// clear references to copied data
+				for i := copied; i < len(s.b); i++ {
+					s.b[i] = nil
+				}
+				s.b = s.b[:copied]
+				s.bPos = 0
+			}
+		}
 		s.b = append(s.b, dst[0:n])
 	}
 	return err
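To see the compaction rule in isolation, here is a standalone sketch outside the library (compact, its signature, and the demo values are illustrative; the real method also handles pooling and the io.Reader):

```go
package main

import "fmt"

// compact mimics the shift that Append performs in this PR: when the
// segment slice is full (len == cap) but segments before bPos are already
// consumed, either reset an empty buffer in place or, if at least a
// quarter of the slots are free, copy the live segments to the front.
func compact(b [][]byte, bPos int) ([][]byte, int) {
	if len(b) != cap(b) || bPos == 0 {
		return b, bPos // room left at the end, or nothing consumed yet
	}
	if bPos == len(b) {
		return b[:0], 0 // everything was read: reuse the slice from the start
	}
	if bPos > cap(b)/4 {
		copied := copy(b, b[bPos:])
		for i := copied; i < len(b); i++ {
			b[i] = nil // drop stale references so the GC can reclaim segments
		}
		return b[:copied], 0
	}
	return b, bPos // full, but not enough free slots to justify shifting
}

func main() {
	b := make([][]byte, 0, 4)
	b = append(b, []byte("a"), []byte("b"), []byte("c"), []byte("d"))
	bPos := 3 // three segments already drained by Read
	b, bPos = compact(b, bPos)
	fmt.Println(len(b), cap(b), bPos) // 1 4 0: three slots free again, no reallocation
}
```

Because the consumed prefix is reclaimed in place, the subsequent append reuses the existing backing array instead of allocating a new one, which is presumably where much of the allocation savings in this PR comes from.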