
Fix incorrect unsafe usage #220

Merged
6 commits merged into etcd-io:master on Jun 15, 2020

Conversation

@jrick (Contributor) commented Apr 28, 2020

After the checkptr fixes in 2fc6815, it was discovered that new issues
were hit in production systems, in particular when a single process
opened and updated multiple separate databases. This indicates that
some bug relating to bad unsafe usage was introduced during that
commit.

This commit combines several attempts at fixing this new issue. For
example, slices are once again created by slicing an array of "max
allocation" elements, but this time with the cap set to the intended
length. This operation is expressly permitted by the Go wiki, so it
should be preferred over type-converting a reflect.SliceHeader.

Fixes #214.
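
As a minimal sketch of the two patterns being discussed (the names maxAlloc, backing, viaSliceHeader, and viaArrayPointer are illustrative, not bbolt's):

package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// Stand-in for memory that lives outside the Go heap (bbolt gets this from mmap).
var backing [64]byte

// Illustrative bound; bbolt uses its own maxAllocSize constant.
const maxAlloc = 1 << 30

// Old pattern: construct a reflect.SliceHeader that never pointed at a real
// slice. go vet flags this as possible misuse, and checkptr dislikes it.
func viaSliceHeader(p unsafe.Pointer, n int) []byte {
	return *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{
		Data: uintptr(p),
		Len:  n,
		Cap:  n,
	}))
}

// New pattern: treat p as a pointer to a large array type, then take a full
// slice expression with the cap pinned to the intended length. No array of
// size maxAlloc is ever allocated; the type only bounds the conversion.
func viaArrayPointer(p unsafe.Pointer, n int) []byte {
	return (*[maxAlloc]byte)(p)[:n:n]
}

func main() {
	p := unsafe.Pointer(&backing[0])
	fmt.Println(len(viaSliceHeader(p, 8)), cap(viaSliceHeader(p, 8)))
	fmt.Println(len(viaArrayPointer(p, 8)), cap(viaArrayPointer(p, 8)))
}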

The following review comments were left on this hunk:

func unsafeByteSlice(base unsafe.Pointer, offset uintptr, i, j int) []byte {
	return (*[maxAllocSize]byte)(unsafeAdd(base, offset))[i:j:j]
}


Note to other reviewers:

The specific wiki page that permits this is: https://github.com/golang/go/wiki/cgo#turning-c-arrays-into-go-slices

These aren't actually C arrays, but they are arrays produced by mmap outside the Go heap, so they can be treated as such.
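
To make that concrete, here is a standalone, Unix-only sketch (the constant maxWords and the anonymous mapping are illustrative choices, not bbolt's) that applies the wiki pattern to mmap'd memory instead of a C array:

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

func main() {
	buf, err := syscall.Mmap(-1, 0, 4096,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(buf)

	// Reinterpret the mmap'd bytes as uint64s. The array type only bounds
	// the conversion; no array of that size is ever allocated.
	const maxWords = 1 << 20
	n := len(buf) / 8
	words := (*[maxWords]uint64)(unsafe.Pointer(&buf[0]))[:n:n]

	words[0] = 0xdeadbeef
	fmt.Printf("%#x %#x\n", buf[0], words[0]) // on little-endian, buf[0] == 0xef
}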

@jrick (Contributor Author)

I had meant to add a comment with this exact URL here and forgot. Will update if desired.

One difference between here and that wiki is that this conversion allows the subslice to begin in the middle of the allocation instead of always at the start, or to subslice from the start of some address after adding an offset. I don't know if either of these is a concern, but I suspect not.


Awesome work! I think including the link (and a small description) as a comment would be extremely helpful.


By the way, the build is failing on make fmt because of the blank comment line from this.

@marcinja

LGTM minus the comment-formatting build failure. The change to remove reflect.SliceHeader looks good and seems correct based on the wiki page in the comment. I also checked each of the other changes that use the new unsafeX helper functions and they look correct. The tests pass locally for me on Linux/x86-64.

I also successfully ran the tests for a project I work on (Sia) with Go 1.14, using this branch for our bolt dependency. I've been running that patched version of Sia for a few hours now with no issues. I'll report back if I run into anything, of course.

@imclaren commented May 3, 2020

Thanks for this! bbolt still works for me on 486 and darwin hardware, but this bug now causes bbolt to segfault on Raspberry Pi ARM (it used to work before I updated to the latest version of Go).

When I try your new branch on a memory-constrained device (Raspberry Pi, ARM), it won't build, because low-memory devices do not have enough memory for arrays of maxAllocSize elements:

page.go:67:11: type [268435455]leafPageElement too large
page.go:82:11: type [268435455]branchPageElement too large

Is there a way of implementing the same fix without creating slices that are too large to fit in memory?

@imclaren commented May 3, 2020

Please also note that this branch builds correctly on the rpi (ARM) if you revert the following two functions in page.go (i.e. re-include the reflect.SliceHeader calls):

func (p *page) leafPageElements() []leafPageElement
func (p *page) branchPageElements() []branchPageElement

@jrick (Contributor Author) commented May 3, 2020

@imclaren It builds for me now when I perform the cross-compile. Dividing the max alloc size by the size of the slice's element type is enough (vs. using a literal 0xFFFFFFF, which older versions of bbolt used, presumably to work around this).

@imclaren commented May 3, 2020

@jrick yes, thanks, this builds for me now and seems to be running without issues for my project with a bolt dependency.

@imclaren

@jrick Thanks again for your work on this. I have been stress-testing this branch, and unfortunately it still gives me out-of-memory errors when we divide the max alloc size by the size of the data type.

Again, bolt runs without problems if we revert the following two functions in page.go (i.e. re-include the reflect.SliceHeader calls):

func (p *page) leafPageElements() []leafPageElement
func (p *page) branchPageElements() []branchPageElement

I think that any solution that refers to maxAllocSize, rather than being limited to the actual size required, will be problematic for low-memory systems.

@tmm1 (Contributor) commented May 19, 2020

unfortunately it still gives me out of memory errors when we divide the max alloc size by the size of the data type.

Before you were getting a compile error? Now a runtime error?

Can you post the full error and backtrace?

@gyuho (Contributor) commented May 19, 2020

@jrick Thanks for the fix.

still gives me out of memory errors when we divide the max alloc size by the size of the data type

@imclaren As @tmm1 asked, could you post more details?

@imclaren commented May 20, 2020

@tmm1 and @gyuho, thanks for your assistance (and thanks again, @jrick):

If we revert to @jrick's #e04f391, then page.go includes the following functions:

// leafPageElements retrieves a list of leaf nodes.
func (p *page) leafPageElements() []leafPageElement {
	if p.count == 0 {
		return nil
	}
	return (*[maxAllocSize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
		unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]
}

// branchPageElements retrieves a list of branch nodes.
func (p *page) branchPageElements() []branchPageElement {
	if p.count == 0 {
		return nil
	}
	return (*[maxAllocSize]branchPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
		unsafe.Sizeof(branchPageElement{}), 0))[:p.count:p.count]
}

On Raspberry Pi (ARM), if I try to build an app that creates or opens a bolt database (i.e. includes the following line):

boltDB, err := bolt.Open("~/bolt.db", 0600, &bolt.Options{Timeout: 1 * time.Second})

Then I get the following build error:

../bbolt/page.go:70:11: type [268435455]leafPageElement too large
../bbolt/page.go:94:11: type [268435455]branchPageElement too large

I believe that this build error occurs because low memory devices do not have enough memory to create arrays of size maxAllocSize.

In response to my comment about this build problem, in #81f2578 @jrick changed the two functions in page.go as follows:

// leafPageElements retrieves a list of leaf nodes.
func (p *page) leafPageElements() []leafPageElement {
	if p.count == 0 {
		return nil
	}
	const maxArraySize = maxAllocSize / unsafe.Sizeof(leafPageElement{})
	return (*[maxArraySize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
		unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]
}

// branchPageElements retrieves a list of branch nodes.
func (p *page) branchPageElements() []branchPageElement {
	if p.count == 0 {
		return nil
	}
	const maxArraySize = maxAllocSize / unsafe.Sizeof(branchPageElement{})
	return (*[maxArraySize]branchPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
		unsafe.Sizeof(branchPageElement{}), 0))[:p.count:p.count]
}

If we use #81f2578 on Raspberry Pi (ARM), then apps that create or open a bolt database (i.e. include the following line) do build:

boltDB, err := bolt.Open("~/bolt.db", 0600, &bolt.Options{Timeout: 1 * time.Second})

However, bolt is still allocating and using memory based on a proportion of maxAllocSize (i.e. the maximum memory available on the rpi) rather than based on the size of the byte slices to be cached.

In other words, allocating memory based on maxAllocSize rather than on the size of the byte slices means that bolt uses a lot more memory than the current bbolt master branch. The rpi runs out of memory (and my apps crash with out-of-memory errors) when I try to cache multiple small byte slices in parallel, for workloads that ran fine using the main branch.

Please note that I have also tested my app by replacing bbolt with badger and using badger's low-memory options. I recognise that badger uses a completely different caching algorithm (an LSM tree), but badger also uses a lot less memory than this branch (i.e. #81f2578) when I try to cache multiple small byte slices in parallel.

@jrick (Contributor Author) commented May 20, 2020

There is something amiss if that array slice is causing an issue. It should not be generating an actual array that large at runtime if the cgo documentation is to be believed.

To be clear, is this problem only seen because of these two slices in page.go, and not in all of the other places that call unsafeByteSlice? Perhaps the compiler only handles this case correctly when working on byte slices, not slices of other types, and we need to do an unsafe conversion from a byte slice to a properly-typed slice.
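
If it helps, here is a purely hypothetical sketch of that kind of conversion (elem, maxElems, and bytesToElems are made-up names; this is not code from this PR):

package main

import (
	"fmt"
	"unsafe"
)

// elem stands in for a wider element type such as a page element.
type elem struct{ k, v uint32 }

// bytesToElems reinterprets a byte slice as a slice of elem, using the same
// pointer-to-array pattern as unsafeByteSlice. It assumes b is suitably
// aligned for elem, which holds for page-aligned mmap'd memory but not for
// arbitrary byte slices.
func bytesToElems(b []byte) []elem {
	const maxElems = 1 << 24 // illustrative bound on the array type
	n := len(b) / int(unsafe.Sizeof(elem{}))
	if n == 0 {
		return nil
	}
	return (*[maxElems]elem)(unsafe.Pointer(&b[0]))[:n:n]
}

func main() {
	b := make([]byte, 32)
	b[0] = 1
	fmt.Println(len(bytesToElems(b)), bytesToElems(b)[0])
}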

@jrick (Contributor Author) commented May 20, 2020

@imclaren, try this patch and let me know if anything improves. Hopefully it helps the compiler generate the code we expect.

diff 81f25783ae43c0699147f7d8b251753ede487a5b /home/jrick/src/bbolt
blob - 334f0ab8c6df82f854d9a6a83929d051ea346ab2
file + page.go
--- page.go
+++ page.go
@@ -65,8 +65,8 @@ func (p *page) leafPageElements() []leafPageElement {
                return nil
        }
        const maxArraySize = maxAllocSize / unsafe.Sizeof(leafPageElement{})
-       return (*[maxArraySize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
-               unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]
+       ptr := unsafeAdd(unsafe.Pointer(p), unsafe.Sizeof(*p))
+       return (*[maxArraySize]leafPageElement)(ptr)[:p.count:p.count]
 }
 
 // branchPageElement retrieves the branch node by index
@@ -81,8 +81,8 @@ func (p *page) branchPageElements() []branchPageElemen
                return nil
        }
        const maxArraySize = maxAllocSize / unsafe.Sizeof(branchPageElement{})
-       return (*[maxArraySize]branchPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
-               unsafe.Sizeof(branchPageElement{}), 0))[:p.count:p.count]
+       ptr := unsafeAdd(unsafe.Pointer(p), unsafe.Sizeof(*p))
+       return (*[maxArraySize]branchPageElement)(ptr)[:p.count:p.count]
 }
 
 // dump writes n bytes of the page to STDERR as hex output.

@jrick (Contributor Author) commented May 20, 2020

I'm looking at the object disassembly of those functions now, and I'm not seeing that patch make any noticeable difference to the compiler output.

Using the current pull request branch:

  page.go:69            0x4633e                 e1a02001                MOVW R1, R2                                     
  page.go:69            0x46342                 e3e0b4ff                MVN $4278190080, R11                            
  page.go:69            0x46346                 e152000b                CMP R11, R2                                     
  page.go:68            0x4634a                 8a00000a                B.HI 0x4637a                                    
  unsafe.go:10          0x4634e                 e2800010                ADD $16, R0, R0                                 
  page.go:68            0x46352                 e58d0014                MOVW R0, 0x14(R13)                              
  page.go:68            0x46356                 e58d2018                MOVW R2, 0x18(R13)                              
  page.go:68            0x4635a                 e58d201c                MOVW R2, 0x1c(R13)                              
  page.go:68            0x4635e                 e49df00c                RET #12                                         

Using the following patch:

diff 81f25783ae43c0699147f7d8b251753ede487a5b /home/jrick/src/bbolt
blob - 334f0ab8c6df82f854d9a6a83929d051ea346ab2
file + page.go
--- page.go
+++ page.go
@@ -65,8 +65,9 @@ func (p *page) leafPageElements() []leafPageElement {
                return nil
        }
        const maxArraySize = maxAllocSize / unsafe.Sizeof(leafPageElement{})
-       return (*[maxArraySize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
-               unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]
+       ptr := unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
+               unsafe.Sizeof(leafPageElement{}), 0)
+       return (*[maxArraySize]leafPageElement)(ptr)[:p.count:p.count]
 }
 
 // branchPageElement retrieves the branch node by index
  page.go:68            0x46346                 00000000                AND.EQ R0, R0, R0                               
  page.go:70            0x4634a                 e1a02001                MOVW R1, R2                                     
  page.go:70            0x4634e                 e3e0b4ff                MVN $4278190080, R11                            
  page.go:70            0x46352                 e152000b                CMP R11, R2                                     
  page.go:70            0x46356                 8a00000a                B.HI 0x46386                                    
  unsafe.go:10          0x4635a                 e2800010                ADD $16, R0, R0                                 
  page.go:70            0x4635e                 e58d0014                MOVW R0, 0x14(R13)                              
  page.go:70            0x46362                 e58d2018                MOVW R2, 0x18(R13)                              
  page.go:70            0x46366                 e58d201c                MOVW R2, 0x1c(R13)                              
  page.go:70            0x4636a                 e49df00c                RET #12                                         

And lastly, further simplifying to replace unsafeIndex with unsafeAdd (since it only needs index 0 after applying the page header offset, the other function is not needed):

diff 81f25783ae43c0699147f7d8b251753ede487a5b /home/jrick/src/bbolt
blob - 334f0ab8c6df82f854d9a6a83929d051ea346ab2
file + page.go
--- page.go
+++ page.go
@@ -65,8 +65,8 @@ func (p *page) leafPageElements() []leafPageElement {
                return nil
        }
        const maxArraySize = maxAllocSize / unsafe.Sizeof(leafPageElement{})
-       return (*[maxArraySize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
-               unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]
+       ptr := unsafeAdd(unsafe.Pointer(p), unsafe.Sizeof(*p))
+       return (*[maxArraySize]leafPageElement)(ptr)[:p.count:p.count]
 }
 
 // branchPageElement retrieves the branch node by index
  page.go:68            0x46346                 00000000                AND.EQ R0, R0, R0                               
  page.go:69            0x4634a                 e1a02001                MOVW R1, R2                                     
  page.go:69            0x4634e                 e3e0b4ff                MVN $4278190080, R11                            
  page.go:69            0x46352                 e152000b                CMP R11, R2                                     
  page.go:69            0x46356                 8a00000a                B.HI 0x46386                                    
  unsafe.go:6           0x4635a                 e2800010                ADD $16, R0, R0                                 
  page.go:69            0x4635e                 e58d0014                MOVW R0, 0x14(R13)                              
  page.go:69            0x46362                 e58d2018                MOVW R2, 0x18(R13)                              
  page.go:69            0x46366                 e58d201c                MOVW R2, 0x1c(R13)                              
  page.go:69            0x4636a                 e49df00c                RET #12                                         

Perhaps someone more well versed in arm assembly can spot something there.

@imclaren

Yes, the build error only occurs for the two functions in page.go. If you revert those two functions (i.e. re-include the reflect.SliceHeader calls), there are no build errors.

My working assumption is that the following creates an array of size maxArraySize and then returns a slice of this array with capacity p.count, and the original array of size maxArraySize is retained (at least for a while before it is possibly garbage collected?):

return (*[maxArraySize]leafPageElement)(unsafeIndex(unsafe.Pointer(p), unsafe.Sizeof(*p),
		unsafe.Sizeof(leafPageElement{}), 0))[:p.count:p.count]

I could be wrong though. I don't know how unsafe calls used to create pointers to arrays work.

@imclaren

In other words, what may be happening is the possible "gotcha" described in https://blog.golang.org/slices-intro

@tmm1 (Contributor) commented May 21, 2020

However, bolt is still allocating and using memory based on a proportion of maxAllocSize (i.e. the maximum memory available on the rpi) rather than based on the size of the byte slices to be cached.

How are you measuring this? It is normal for VSS size to be large. Only RSS really matters, and with memory mapped files it can be hard to calculate actual memory usage.

When you say running out of memory, what exactly is happening? Your golang program should be receiving an error like ENOMEM and should spit out a stack trace of where the error is received.

How big is your bolt database file on disk?

What model rpi are you using, and what does uname -a say?

@tmm1 (Contributor) commented May 21, 2020

it won't build because low memory devices do not have enough memory to create slices of size maxAllocSize:

page.go:67:11: type [268435455]leafPageElement too large
page.go:82:11: type [268435455]branchPageElement too large

I am not able to reproduce this. Can you please provide more details?

$ head -1 go.mod
module go.etcd.io/bbolt

$ git branch
  master
* memfix

$ go version
go version go1.14.3 darwin/amd64

$ GOOS=linux GOARCH=arm GOARM=7 go build
$

@tmm1 (Contributor) commented May 21, 2020

I am not able to reproduce this

Never mind. I had to revert the last commit first:

$ git revert -n 81f25783ae43c0699147f7d8b251753ede487a5b
$ GOOS=linux GOARCH=arm GOARM=7 go build
# go.etcd.io/bbolt
./page.go:67:11: type [268435455]leafPageElement too large
./page.go:82:11: type [268435455]branchPageElement too large

@tmm1 (Contributor) commented May 21, 2020

cc @aclements @ianlancetaylor maybe you know who can shed some light here?

@tmm1 (Contributor) commented May 21, 2020

My working assumption is that the following creates an array of size maxArraySize and then returns a slice of this array with capacity p.count, and the original array of size maxArraySize is retained (at least for a while before it is possibly garbage collected?):

I'm pretty sure this is incorrect. A large array is not being created, rather only a pointer to such an array.
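
A quick way to convince yourself of this (a sketch; the buffer and slice sizes are arbitrary) is to count allocations around the conversion:

package main

import (
	"fmt"
	"testing"
	"unsafe"
)

func main() {
	backing := make([]byte, 4096) // stands in for an mmap'd region
	p := unsafe.Pointer(&backing[0])

	// Convert p to a pointer-to-large-array type and slice it repeatedly;
	// the array type itself is never allocated.
	allocs := testing.AllocsPerRun(1000, func() {
		const maxArr = 1 << 30
		s := (*[maxArr]byte)(p)[:64:64]
		_ = s
	})
	fmt.Println("allocations per conversion:", allocs) // expected: 0
}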

@imclaren

I'm pretty sure this is incorrect. A large array is not being created, rather only a pointer to such an array.

@tmm1 are we saying the same thing in a different way? I assume that returning and retaining a pointer to an array (including by re-slicing a slice) means that the underlying original array cannot be garbage collected. See ‘A possible "gotcha"' as described in https://blog.golang.org/slices-intro

@tmm1 (Contributor) commented May 21, 2020

There is no underlying original array in this case, only a slice view into a memory-mapped file. I don't think that gotcha applies. See https://github.com/golang/go/wiki/cgo#turning-c-arrays-into-go-slices

@tmm1 (Contributor) commented May 21, 2020

If you have an extremely large boltdb file on disk, then I'm guessing what you're running into is the 32-bit address space limitation caused by the VMSPLIT_3G kernel option. More details about your environment and database would confirm or deny this theory, but I have run into it myself in the past running golang code on the rpi.

You can, for example, run pmap <pid> on your process and see what total size is reported at the end. See for instance golang/go#35677.

@imclaren

To be fair, my use case is unusual. I am trying to cache the maximum number of byte slices as quickly as possible within the memory constraints of the rpi.

I benchmarked by varying the size of the byte slices and tracking memory consumption with the main branch of bolt (using an old version of Go, which didn't trigger constant segfaults), with this branch, and with badger using its low-memory options.

This branch used a lot more memory than both the main branch of bolt and badger, even when I varied the size of the byte slices.

@imclaren

Oh, and I tested starting with an empty bolt database.

@jrick (Contributor Author) commented May 21, 2020

Reinterpreting an existing slice as a slice header in order to unsafely modify it seems sane to me, but I would modify the cap before the length. A lot of bad things could happen between the statements if the length were set before the cap.

edit: and the Data pointer before either.

@imclaren

Note that the main branch uses reflect.SliceHeader for these functions in page.go:

// leafPageElements retrieves a list of leaf nodes.
func (p *page) leafPageElements() []leafPageElement {
	if p.count == 0 {
		return nil
	}
	return *(*[]leafPageElement)(unsafe.Pointer(&reflect.SliceHeader{
		Data: uintptr(unsafe.Pointer(p)) + unsafe.Sizeof(*p),
		Len:  int(p.count),
		Cap:  int(p.count),
	}))
}

// branchPageElements retrieves a list of branch nodes.
func (p *page) branchPageElements() []branchPageElement {
	if p.count == 0 {
		return nil
	}
	return *(*[]branchPageElement)(unsafe.Pointer(&reflect.SliceHeader{
		Data: uintptr(unsafe.Pointer(p)) + unsafe.Sizeof(*p),
		Len:  int(p.count),
		Cap:  int(p.count),
	}))
}

@jrick (Contributor Author) commented May 21, 2020

This helper feels like a fairly ergonomic way to use reflect.SliceHeader:

func unsafeSlice(slice unsafe.Pointer, data unsafe.Pointer, len int) {
        s := (*reflect.SliceHeader)(slice)
        s.Data = uintptr(data)
        s.Cap = len
        s.Len = len
}

Then, in page.go for example, it can be used like this:

func (p *page) branchPageElements() []branchPageElement {
        if p.count == 0 {
                return nil
        }
        var elems []branchPageElement
        data := unsafeAdd(unsafe.Pointer(p), unsafe.Sizeof(*p))
        unsafeSlice(unsafe.Pointer(&elems), data, int(p.count))
        return elems
}

This guarantees correct usage of reflect.SliceHeader, since the header must point to an existing, correctly-typed slice, and it can operate on slices of any element type.
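
For instance (a sketch reusing the helper above; demoElem and the 64-byte buffer are illustrative stand-ins for a page's element type and its mmap'd memory), the same helper works unchanged for a non-byte element type:

package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// unsafeSlice as proposed above.
func unsafeSlice(slice unsafe.Pointer, data unsafe.Pointer, len int) {
	s := (*reflect.SliceHeader)(slice)
	s.Data = uintptr(data)
	s.Cap = len
	s.Len = len
}

// demoElem stands in for a typed page element.
type demoElem struct{ pos, size uint32 }

func main() {
	backing := make([]byte, 64) // stands in for mmap'd page memory
	var elems []demoElem
	unsafeSlice(unsafe.Pointer(&elems), unsafe.Pointer(&backing[0]), 4)
	elems[0] = demoElem{pos: 16, size: 8}
	fmt.Println(len(elems), cap(elems), elems[0])
}

Because the caller supplies the destination slice, its element type carries through without needing a per-type array bound.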

@YoyinZyc

cc @jpbetz @liggitt

@tmm1 (Contributor) commented May 30, 2020

Is there anything left to do here? Has anyone stress-tested the latest changes to confirm the original issue is resolved?

@imclaren commented Jun 6, 2020

Hi @jrick and @tmm1, I have been testing these latest changes and they seem to work on the rpi without segfaults or obvious memory leaks.

I note that if you run the new TestManyDBs test created by @jrick, but put 25 MB of random bytes rather than 16-byte random keys, bolt seems to use about 1.5 GB to 2 GB of memory.

For comparison, I created a test based on TestManyDBs that just writes each 25 MB of random bytes to different filesystem files, and that uses about 250 MB to 300 MB of memory. I expect that this is a bolt design limitation, not a bug.

Let me know if you want me to post the code that uses the TestManyDBs test to just save bytes to flat files.

@AkihiroSuda

@gyuho @xiang90 Any chance to move this forward? Thanks 🙏

@gyuho (Contributor) commented Jun 15, 2020

Let me know if you want me to post the code that uses the TestManyDBs test to just save bytes to flat files.

Can you share this, in case other people want to run similar workloads?

gyuho merged commit 232d8fc into etcd-io:master on Jun 15, 2020
@imclaren

@gyuho I have packaged this up as a Go low-memory disk cache, which is available here:

https://github.com/imclaren/calmcache

You can just go get it and run the tests.

jrick deleted the memfix branch on June 15, 2020 at 13:59
andrewpmartinez added a commit to openziti/foundation that referenced this pull request Jun 17, 2020
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
AkihiroSuda added a commit to AkihiroSuda/containerd that referenced this pull request Jun 22, 2020
We had once updated bbolt from v1.3.3 to v1.3.4 in containerd#4134,
but reverted to v1.3.3 in containerd#4156 due to "fatal error: sweep increased
allocation count" (etcd-io/bbolt#214).

The issue was fixed in bbolt v1.3.5 (etcd-io/bbolt#220).

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
fahedouch pushed a commit to fahedouch/containerd that referenced this pull request Aug 7, 2020
We had once updated bbolt from v1.3.3 to v1.3.4 in containerd#4134,
but reverted to v1.3.3 in containerd#4156 due to "fatal error: sweep increased
allocation count" (etcd-io/bbolt#214).

The issue was fixed in bbolt v1.3.5 (etcd-io/bbolt#220).

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
tussennet pushed a commit to tussennet/containerd that referenced this pull request Sep 11, 2020
We had once updated bbolt from v1.3.3 to v1.3.4 in containerd#4134,
but reverted to v1.3.3 in containerd#4156 due to "fatal error: sweep increased
allocation count" (etcd-io/bbolt#214).

The issue was fixed in bbolt v1.3.5 (etcd-io/bbolt#220).

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
AkihiroSuda added a commit to AkihiroSuda/containerd that referenced this pull request Nov 10, 2020
We had once updated bbolt from v1.3.3 to v1.3.4 in containerd#4134,
but reverted to v1.3.3 in containerd#4156 due to "fatal error: sweep increased
allocation count" (etcd-io/bbolt#214).

The issue was fixed in bbolt v1.3.5 (etcd-io/bbolt#220).

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit bebfbab)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
leoluk pushed a commit to monogon-dev/monogon that referenced this pull request Mar 30, 2021
This unbreaks bbolt (as part of containerd) on 1.14+ (see etcd-io/bbolt#201 and
etcd-io/bbolt#220), pulls in my patch to ignore image-defined volumes
(containerd/cri#1504) and gets us some robustness fixes in containerd CNI/CRI integration
(containerd/cri#1405). This also updates K8s at the same time since they share a lot of
dependencies and only updating one is very annoying. On the K8s side we mostly get the standard stream of fixes
plus some patches that are no longer necessary.

One annoyance on the K8s side (with no impact on functionality) is these messages in the logs of various
components:
```
W0714 11:51:26.323590       1 warnings.go:67] policy/v1beta1 PodSecurityPolicy is deprecated in v1.22+, unavailable in v1.25+
```
They are caused by KEP-1635, but there's no explanation of why this gets logged so aggressively, considering that operators
cannot do anything about it. There's no newer version of PodSecurityPolicy, and you are pretty much required to use it if
you use RBAC.

Test Plan: Covered by existing tests

Bug: T753

X-Origin-Diff: phab/D597
GitOrigin-RevId: f6c447da1de037c27646f9ec9f45ebd5d6660ab0
plorenz pushed a commit to openziti/transport that referenced this pull request Mar 30, 2022
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
plorenz pushed a commit to openziti/storage that referenced this pull request Mar 30, 2022
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
plorenz pushed a commit to openziti/identity that referenced this pull request Jun 29, 2022
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
plorenz pushed a commit to openziti/agent that referenced this pull request Jun 29, 2022
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
plorenz pushed a commit to openziti/metrics that referenced this pull request Jun 29, 2022
- need fix from etcd-io/bbolt#214 and
  etcd-io/bbolt#220 for full go 1.14 compat
Successfully merging this pull request may close these issues:

sweep increased allocation count with v1.3.4