
fix bug in pinsets and add a stress test for the scenario #3273

Merged: 4 commits into master from fix/pin-fail, Oct 9, 2016

Conversation

whyrusleeping (Member)

Sometime after ~5000 items accumulate in a pinset, we start to get hash collisions when mapping the 32-bit int space onto an 8-bit integer space. The easy fix is to modulo the hash output down into our final key space before we ever get to that point.

Longer term, I want to see us using the HAMT code for this purpose (though it's not yet ready).

License: MIT
Signed-off-by: Jeromy why@ipfs.io
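The shape of the fix can be sketched as follows. This is illustrative only; `bucketIndex` and its parameters are hypothetical names, not the actual go-ipfs pinset code. The point is that the 32-bit hash is reduced into the 8-bit bucket space (fanout of 256) up front, rather than carried at full width and truncated later:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bucketIndex is a hypothetical stand-in for the pinset key derivation.
// It reduces the 32-bit hash into the 8-bit bucket space immediately,
// as the fix described above does, instead of truncating later.
func bucketIndex(key string, seed uint32) uint8 {
	h := fnv.New32a()
	fmt.Fprintf(h, "%d", seed) // mix in a seed (e.g. recursion depth)
	h.Write([]byte(key))
	return uint8(h.Sum32() % 256) // modulo into the final key space
}

func main() {
	// Every key now lands in a stable bucket 0..255; with the old code
	// the full 32-bit hash was used as the fanout key and only collapsed
	// to 8 bits later, so distinct keys could silently collide then.
	fmt.Println(bucketIndex("QmExampleA", 0))
	fmt.Println(bucketIndex("QmExampleB", 0))
}
```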

@whyrusleeping whyrusleeping added the status/in-progress In progress label Sep 29, 2016
jbenet (Member) commented Sep 29, 2016

Don't merge this before I CR it; I will do so in the next day or two.

whyrusleeping (Member Author)

@jbenet SGTM

jbenet (Member) commented Sep 29, 2016

@whyrusleeping can you explain more about what's going on? We should have a long comment somewhere in the code explaining the algorithm, and then how this change affects it.

whyrusleeping (Member Author)

@jbenet will do

whyrusleeping (Member Author)

@jbenet added a large comment, and a third commit that cleans up the logic around the bugged area a bit to make things a little more obvious and readable

whyrusleeping (Member Author)

@jbenet Can you review this? This is fairly important and it's been 'a day or two'.

Kubuxu (Member) commented Oct 5, 2016

The fix looks solid to me. The bug was simple: the recursive fanout was done with a fanout of 1<<32, and only later was it contracted to a fanout of 256, thus overwriting fanned-out keys with data whenever the lower 8 bits of the hash were the same.
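The collision mechanism described above can be demonstrated with a toy sketch (the two keys here are made-up values, not real pin hashes): two distinct 32-bit fanout keys that agree modulo 256 collapse to the same 8-bit bucket, and the later write clobbers the earlier one.

```go
package main

import "fmt"

func main() {
	// Two distinct 32-bit fanout keys that are equal modulo 256.
	a := uint32(0x00000042)
	b := uint32(0x12345642) // different value, same low 8 bits (0x42)

	buckets := make(map[uint8]string)
	// Old behavior as described: fan out over the 32-bit space, then
	// contract to 256 buckets afterwards. The second write clobbers
	// the first whenever the hashes agree mod 256.
	buckets[uint8(a%256)] = "entry for a"
	buckets[uint8(b%256)] = "entry for b" // overwrites "entry for a"

	fmt.Println(len(buckets)) // 1: one entry silently lost
}
```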

Switched from using a map to an array since the bounds are small and fixed. This should save us some significant time on accesses.

License: MIT
Signed-off-by: Jeromy <why@ipfs.io>
ghost commented Oct 6, 2016

To everybody reading this, please make sure to have backups of your pinsets: ipfs pin ls > pinset.txt

@ghost ghost added kind/bug A bug in existing code (including security flaws) topic/repo Topic repo labels Oct 6, 2016
whyrusleeping (Member Author)

I removed all of the old 'multiset' code that made things much more confusing. I also cleaned up a few different things and added a bunch more comments. I think it's much easier to understand what's going on now.

whyrusleeping (Member Author)

I have a program that is able to find 'lost' hashes if you haven't run a garbage collection yet.
The only downside is that it also reports pins that you've manually removed via ipfs pin rm.

https://github.com/whyrusleeping/ipfs-see-all

I'll be updating the build instructions and providing pre-built binaries to download from dist.ipfs.io shortly.

jbenet (Member) commented Oct 7, 2016

Hey @tv42 -- could you CR this and verify it's right?

tv42 (Contributor) commented Oct 7, 2016

1f853c5 LGTM

whyrusleeping (Member Author)

Doing some more tests: the bug is triggered, very reliably, any time we hit more than 8192 pins.

Kubuxu (Member) commented Oct 7, 2016

8192 is the point at which we start hashing pins; from my analysis you will hit an almost 100% failure rate at 8192+256+1.

Kubuxu (Member) commented Oct 7, 2016

But the estimated length might be higher than 8192 even if the pin count is lower, so we will start hashing earlier; if the estimated length is greater than or equal to 8192, you need just 256+1 pins to trigger the faulty code.
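The 256+1 figure above follows from the pigeonhole principle, and can be checked with a quick sketch (the fanout of 256 is taken from the discussion above; the "hashes" here are just sequential integers standing in for real hash values):

```go
package main

import "fmt"

func main() {
	const fanout = 256 // 8-bit bucket space discussed above
	// Once more than `fanout` distinct keys must be bucketed, at least
	// two of them share a bucket (pigeonhole), so with the buggy
	// contraction an overwriting collision is guaranteed by the 257th key.
	seen := make(map[uint32]bool)
	collided := false
	for i := uint32(0); i <= fanout; i++ { // 257 distinct "hashes"
		k := i % fanout
		if seen[k] {
			collided = true
		}
		seen[k] = true
	}
	fmt.Println(collided) // true
}
```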

whyrusleeping (Member Author)

The tests show a zero percent failure rate up until 8192 pins. After that, it's 100%. (Over 50 runs at each pin count.)


Kubuxu (Member) commented Oct 8, 2016

Yes, as it will then skip this branch: https://github.com/ipfs/go-ipfs/pull/3273/files#diff-15e7154f15253315d2a8ba7e1744d9e7L116 and proceed to split the 8192 pins into buckets.

whyrusleeping (Member Author)

Gonna go ahead and merge this, no sense waiting any longer.

@whyrusleeping whyrusleeping merged commit 391b78a into master Oct 9, 2016
@whyrusleeping whyrusleeping deleted the fix/pin-fail branch October 9, 2016 00:09
@whyrusleeping whyrusleeping removed the status/in-progress In progress label Oct 9, 2016
@ghost ghost mentioned this pull request Dec 23, 2016