Feature request: ability to prune the old ancient blockchain data #718
Comments
This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 14 days.
Hi, pruning is available as a subcommand of the new bor CLI as of bor v0.3.3. Please refer to this post for more info (specifically the changelog part). Hope this helps. Also, just FYI, we don't support snap sync yet, as it doesn't work out of the box for Polygon mainnet. You might want to run it on a full-sync node.
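For reference, a minimal sketch of invoking the pruning subcommand mentioned above; the data directory path is an assumption, and exact flag names may differ between bor versions, so check `bor snapshot prune-state --help` before running it:

```sh
# Stop the running bor node first, then prune stale state trie data.
# --datadir is assumed to point at your node's data directory; adjust as needed.
bor snapshot prune-state --datadir /var/lib/bor/data
```

Note that, as the next comment points out, this prunes stale state, not the ancient block data stored in the freezer.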
@manav2401 This is different, though: geth/bor's pruning is stale-state pruning, whereas this request is about ancient block pruning. @jsvisa +1, would love to see this (ideally with EIP-4444) implemented in bor.
@petejkim Sorry, this is not fully EIP-4444; it will only prune the old historical data and will not handle the p2p issues.
Yep, which is why I said "ideally"...because I'd like to see it happen. |
Yeah, I'm also looking for that, let's make it happen.
/ping |
Hi @jsvisa, I just have the feeling that this is going to take years to get working.
@kmalloc I understand your concerns, but I want you to know that we will take this up soon, depending on availability. The team is working on some prior commitments.
This issue was closed because it has been stalled for 28 days with no activity.
Rationale
I'm running a new snap-sync node, downloading a fresh snapshot from https://snapshots.matic.today. After syncing finished, I found that 1.1 TB of the 1.6 TB local chaindata is ancient data, which uses far too much disk space.
The old ancient data is useless in most cases, so if we supported ancient data pruning, we could get by with much smaller disks.
Implementation
It seems Binance Smart Chain already supports this feature (merged in #543); maybe we can backport it into go-ethereum.
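For context, BSC exposes ancient block pruning as a `geth snapshot prune-block` subcommand. The flags and paths below are assumptions for illustration and should be verified against that client's `geth snapshot prune-block --help` output before use:

```sh
# Stop the node, then prune old block bodies and receipts from the ancient freezer.
# --block-amount-reserved (how many recent blocks to keep) and the paths are
# assumed values; verify them against your client's --help output.
geth snapshot prune-block \
  --datadir /data/bsc \
  --datadir.ancient /data/bsc/geth/chaindata/ancient \
  --block-amount-reserved 1024
```

A backport for bor would presumably look similar: truncate the freezer tail while keeping enough recent blocks to serve peers and reorgs.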