Garbage collector deletes data stored in the MFS (which was pinned) #7008
After a fresh start of the ipfs-daemon I cannot remove the one file I identified so far from the MFS.
I am trying to recover from the situation by simply adding all files to the ipfs repo again (with pin=0). Hopefully only the blocks are missing and the metadata is not corrupt.
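To see whether the blocks behind a stuck MFS path are really gone from the local repo (rather than just unreachable over the network), something like the following can be used; this is a rough sketch with placeholder paths, not the exact commands from my script:

```sh
# Resolve the CID that the MFS entry points to (this may hang if the
# root block itself is missing, hence the timeout).
ipfs files stat --hash --timeout=30s /path/to/file

# Check whether that CID is actually present in the local blockstore.
ipfs refs local | grep <CID>
```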
So the issue is 'just' missing blocks, which also leads to unfulfillable requests. After adding all files again without pinning, I could remove the affected file, and I found three other files whose blocks were also missing. I added those from a backup as well and could continue. So the GC does not seem to be safe to use while anything is happening to the MFS; especially worrying for me was that the file was both in the MFS and pinned. Since the files were all pinned, I don't see how this happened in the first place. This still doesn't explain why a file which is in the MFS can lose its blocks while the GC is running.
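The recovery described above boils down to roughly this (placeholder paths, assuming the data still exists in a backup directory):

```sh
# Re-add everything from the backup without pinning, so the missing blocks
# are restored while the cluster pinset stays the single source of pins.
ipfs add --recursive --pin=false /backup/data

# Afterwards the previously stuck entry can be removed from the MFS again.
ipfs files rm /path/to/file
```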
This sounds like a missing lock somewhere. The team is in overdrive right now trying to get #6776 out the door, so the response might be delayed by a week or two.
@ribasushi I don't expect this one to be a priority, since it's just a race condition anyway. Maybe it only happens in my setup and similar ones, but I think it should be reviewed once the first RC is out, just to make sure it's not a widespread issue. :) I commented several times to document my recovery efforts and to capture as much information about this event as possible, not to bump the issue.

Some thoughts on this topic: there was no error, warning, or info message while this happened, nor afterwards while the access was not possible.
I can confirm this bug for this version as well:
I basically have to stop my scripts and add the data back to the repo with pin=0 after each run of the GC to make sure everything is still available to IPFS :/
Probably related to #6113.
Version information:
go-ipfs version: 0.4.23-6ce9a355f
Repo version: 7
System version: amd64/linux
Golang version: go1.14
Description:
I'm using IPFS in a script which updates the local MFS as needed. New files are added with `ipfs files cp /ipfs/<cid> /path/to/file` after `ipfs-cluster-ctl` has added them to the cluster, so the files are pinned locally (by the cluster service) and also stored in the MFS.

Files which should be deleted are removed from the MFS, and I use `ipfs-cluster-ctl` to add an expire timeout of 14 days to the pin. Since I started adding a lot of files to the repo, I decided to let the garbage collector deal with the old content and clean up the repo. The cycle looks roughly like the sketch below.
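A minimal sketch of that add/remove cycle, with placeholder CIDs and paths; the `--expire-in` flag is illustrative for the 14-day timeout and may differ between ipfs-cluster-ctl versions:

```sh
# Add: pin through the cluster first, then reference the content in the MFS.
ipfs-cluster-ctl pin add <cid>
ipfs files cp /ipfs/<cid> /path/to/file

# Remove: drop the MFS reference and let the cluster pin expire after 14 days.
ipfs files rm /path/to/file
ipfs-cluster-ctl pin add --expire-in 336h <cid>

# Periodically, reclaim space from unpinned and expired content.
ipfs repo gc
```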
After the garbage collector completed its work, I can no longer get the hashes or the content of some of the files stored in the MFS. This is unexpected and should not happen (as far as I understand).
- `ipfs files ls /path/to/file/ | grep "filename"` shows that the directory still contains the file when the daemon is freshly started. After a `files stat --hash` on the file, the directory cannot be listed anymore until the daemon is restarted.
- `ipfs-cluster-ctl` shows me the CID and that it is allocated on the local node (and pinned).
- `ipfs dht findprovs <CID>` (the CID taken from `ipfs-cluster-ctl`) returns no result, which explains why I cannot access the file anymore.
- `ipfs pin ls --timeout=120s /ipfs/<CID>` results in a timeout.
- `ipfs repo verify` returns a successful integrity check of the repo.
- IPFS/IPFS-Cluster stores the blocks and the databases on a ZFS filesystem, which reports no integrity errors.