Go-IPFS 0.5.0 Release #7109
Comments
This seems a little iffy. Functionality that hasn't been thoroughly tested is only safe after it's been thoroughly tested; until then, assuming it's safe just increases the likelihood of issues.
@bonedaddy it's an addition to the experimental …
Dependencies can be pulled in which may have potentially adverse effects. Personally speaking, letting untested functionality be included in releases is a bad practice. For example: what if the dag import/export commands suffer from a bug themselves that would be discovered via testing through the RC process? The long-term effect of not having properly tested functionality go through the RC process is more work dealing with and fixing issues. For example, let's imagine we're at a time when go-ipfs is at v1.0.0; will these practices of including untested functionality in releases continue? If not, why not make changes now that result in better testing and better change management? Nothing is lost, but everything is gained.
@bonedaddy sorry, I now see the confusion. The text should have read …
Ah okay, good stuff 🚀
FYI, companion and ipfs desktop broke because of 1b49047 -- looks like @lidel's on it because the Firefox companion is fixed, but Chrome hasn't gotten a new version. Sounds like this will break a lot of users, so you definitely want to flag it in the release notes, and here, very prominently. It took me a while to figure out what was wrong.
Unfortunately Chrome Web Store is super slow with accepting updates. Chromium users need to wait for the Stable channel update to v2.11.0, or uninstall it and install Beta, which just got approved: v2.11.0.904
I'm getting an HTTP 405 from http://127.0.0.1:5001/api/v0/id
Yes, all calls to the RPC API at /api/v0 on the API port need to be POST now.
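For anyone else hitting this, a quick way to see the difference with cURL (a minimal sketch, assuming the default API address):

```sh
# GET is rejected with 405 Method Not Allowed in 0.5.0.
curl -i "http://127.0.0.1:5001/api/v0/id"

# POST returns the node's identity as before.
curl -X POST "http://127.0.0.1:5001/api/v0/id"
```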
After switching to POSTs all our local tests are passing, including p2p stream tests. Thumbs up!
Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?
Running …
That would make it harder to upgrade. The upgrade will impact finding content added with CIDv1; it will not impact finding content added with CIDv0.
@meeDamian off topic, don't use go 1.14.0/1.14.1. Use either 1.13.x or 1.14.2. Those early go 1.14 releases have several known bugs that will cause go-ipfs to hang. Otherwise, that looks like a bug, probably a bug in the test. Thanks for finding it! Please file a new issue.
Thank you, and apologies for not being clear. That being said, I'll try with explicit latest versions of …
I understand what you mean, but I don't care about compatibility with 0.4.x. When I add files through cluster-ctl, I end up with a lot of CIDs which are v1 anyway. I just want it to work either completely or not at all - the mixture of CIDv1/CIDv0 is hard to debug: if the requested initial CID is a v0, bitswap will make it possible to fetch any further content, while when the initial CID is a v1 it won't work. I just want to avoid using CIDv0 for anything at this time, to make sure that when I save content for the next year, I don't have to remove and re-add it.
Can't we check while building for go 1.14.0/1.14.1 and just abort with an error? 🤔
Open issue: #5230. Please keep questions in this issue on-topic for the go-ipfs 0.5.0 release.
Yes. We'd have to patch bin/check_go_version. Want to submit a patch?
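For illustration, a check along these lines could work (a rough sketch only; the actual bin/check_go_version script and its interface may differ):

```sh
# Reject the Go releases known to hang go-ipfs (sketch, not the real script).
GO_RELEASE="$(go version | awk '{ print $3 }')"   # e.g. "go1.14.1"; 1.14.0 reports as "go1.14"
case "$GO_RELEASE" in
  go1.14|go1.14.1)
    echo "go 1.14.0/1.14.1 have scheduler bugs that can hang go-ipfs; use 1.13.x or >= 1.14.2" >&2
    exit 1
    ;;
esac
```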
Thanks, true, this was going a bit off-topic - sorry! With the current CIDv1 approach we will run into some issues with the gateways. When we update the software on the gateways, they won't be able to find CIDv1 content of 0.4.x clients, which will probably lead to a lot of confusion about the availability of the data - many people use the gateways to check if their data is available. I think it makes sense to add a compatibility fallback to the old CIDv1 format in the DHT requests: if there's no result in a reasonable amount of time, 0.5.0 could generate the old CIDv1 format and query the DHT for that as well. This would open up a transition period for data stored on 0.4.x nodes without adding too much stress to the DHT.
This is not a decision we plan to revisit unless it becomes an active problem for 0.5.0 users. If that happens, we can cut a patch release to turn every DHT query into two DHT queries. However, that would be expensive and we'd very much like to avoid it. In general, the vast majority of users still use CIDv0. Those using CIDv1 must have explicitly opted in, are usually "in the loop", and will likely upgrade immediately. While we could fix new nodes finding content published by old nodes, fixing the reverse would be infeasible. We'd have to publish two provider records for every piece of data, and providing is already slow as it is (although that should get much better after the release).
I'm a bit lost on why this compatibility is broken. If I understand correctly, CIDv1 published by nodes before 0.5.0 will be unable to be discovered by 0.5.0 nodes, but CIDv0 will be? What exactly is causing this issue?
@bonedaddy the new DHT advertises and searches for multihashes instead of CIDs. That way, it doesn't matter if content was advertised with CIDv1 or CIDv0. This change is intended to ease the CIDv0 to CIDv1 migration. For example, without this change, if I try to load content using one CID version while it was advertised under the other, I wouldn't find the providers. We could advertise both CIDv1 and CIDv0 CIDs, but that would double the number of DHT queries.
Ah okay, makes sense. If my node has CIDv1 hashes generated before 0.5.0, and I switch to 0.5.0, that doesn't break anything on the node itself, just makes it less likely that I will find CIDv1 published by nodes prior to 0.5.0, yea?
Exactly. Also, nodes prior to 0.5.0 may have trouble finding CIDv1 content published by our post-0.5.0 node (because this content will be advertised with raw multihashes instead of CIDv1).
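If it helps to see the relationship between the two CID versions and the underlying multihash, the `ipfs cid` helpers can convert between them (a sketch; the placeholders and exact flags are assumptions, so check `ipfs cid --help`):

```sh
# Convert a CIDv0 (Qm...) into its CIDv1 base32 form (bafy...).
# Both encodings wrap the same multihash, which is what the new DHT
# advertises and looks up.
ipfs cid base32 <cidv0>

# Convert a (dag-pb, sha2-256) CIDv1 back down to CIDv0.
ipfs cid format -v 0 <cidv1>
```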
RC3 is now live: https://dist.ipfs.io/go-ipfs/v0.5.0-rc3

We're now in the "fine-tuning" stage and this release should be pretty stable. Please test widely. Since RC2:
CHANGE

We initially planned to bump the DHT protocol's version number to prevent old DHT nodes (those without the 0.5.0 improvements) from polluting the new DHT. Under this plan, new DHT nodes would still respond to requests from old DHT nodes, but they wouldn't make requests to old DHT nodes or route requests to old DHT nodes. We've abandoned this plan.

To pull this off without degrading service for both old and new nodes, we needed 20% of DHT servers to upgrade all at once. Our plan was to simply operate this 20% of the network ourselves. Given the current size of the reachable DHT, this should be feasible. However, while we built a system to do this, it did not perform well enough when tested on the live network for us to be comfortable relying on it to support the weight of the entire network.

Therefore, while this release will use the new DHT, it will not change the DHT protocol version number. In other words, the final release's DHT will behave the same as the DHT in the RCs.

What does this mean for you? When 0.5.0 is released, please upgrade ASAP.
RC4 is now live: https://dist.ipfs.io/go-ipfs/v0.5.0-rc4

Barring any significant issues, this will be the final RC before the release. Please test widely.

Note: While this release has been thoroughly tested, it is a very large release. We don't expect any major bugs, but we're ready to cut a patch release a week after the release if necessary. However, I will be very sad if we need to cut two patch releases, so please test this RC thoroughly. Since RC3:
Filed #7220
Released!
Congrats!
@Stebalien I think this can be unpinned :)
@RubenKelevra Never! (thanks for the reminder)
go-ipfs 0.5.0 Release
Release: https://dist.ipfs.io#go-ipfs
We're happy to announce go-ipfs 0.5.0, ...
🗺 What's left for release
🔦 Highlights
UNDER CONSTRUCTION
This release includes many important changes users should be aware of.
New DHT
This release includes an almost completely rewritten DHT implementation with a new protocol version. From a user's perspective, providing content, finding content, and resolving IPNS records should simply get faster. However, this is a significant (albeit well tested) change and significant changes are always risky, so heads up.
Old v. New
The current DHT suffers from three core issues addressed in this release, covered in the subsections below: undialable nodes, inefficient query logic, and poorly maintained routing tables.
Reachable
We have addressed the problem of undialable nodes by having nodes wait to join the DHT as "server" nodes until they've confirmed that they are reachable from the public internet. Additionally, we've introduced:
Unfortunately, there's a significant downside to this approach: VPNs, offline LANs, etc. where all nodes on the network have private IP addresses and never communicate over the public internet. In this case, none of these nodes would be "publicly reachable".
To address this last point, go-ipfs 0.5.0 will run two DHTs: one for private networks and one for the public internet. That is, every node will participate in a LAN DHT and a public WAN DHT.
RC2 NOTE: All the features not enabled in RC1 have been enabled in RC2.
RC1 NOTE: Several of these features have not been enabled in RC1:
Query Logic
We've fixed the DHT query logic by correctly implementing Kademlia (with a few tweaks). This should significantly speed up:
In both cases, we now continue until we find the closest peers and then stop.
Routing Tables
Finally, we've addressed the poorly maintained routing tables by:
Testing
The DHT rewrite was made possible by our new testing framework, testground, which allows us to spin up multi-thousand node tests with simulated real-world network conditions. With testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.
Refactored Bitswap
This release includes a major bitswap refactor running a new, but backwards compatible, bitswap protocol. We expect these changes to improve performance significantly.
With the refactored bitswap, we expect:
Note: the new bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement, if any.
Provider Record Changes
When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this "providing".
However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID), and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) different "codecs", depending on how we're interpreting the data.
Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.
In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
Warning: this change could impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes prior to go-ipfs 0.5.0, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.
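To see this in practice, you can ask the DHT for providers using either encoding of a CID once both sides run 0.5.0 (a sketch; `<cidv0>` and `<cidv1>` are placeholders for the two encodings of the same content):

```sh
# Both lookups reduce to the same multihash internally, so a 0.5.0 node
# should return the same set of providers for either encoding.
ipfs dht findprovs <cidv0>
ipfs dht findprovs <cidv1>
```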
IPFS/Libp2p Address Format
If you've ever run a command like `ipfs swarm peers`, you've likely seen paths that look like `/ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID`. These paths are not file paths, they're multiaddrs: addresses of peers on the network.

Unfortunately, `/ipfs/Qm...` is also the same path format we use for files. This release changes the multiaddr format from `/ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID` to `/ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID` to make the distinction clear.

What this means for users:

- Because a peer address like `/ipfs/QmSomePeerID` looks like a file path such as `/ipfs/QmSomeFile`, your tool may break if you upgrade this library.
- If you're searching for `/ipfs/...`, you'll need to search for `/p2p/...`.
.Minimum RSA Key Size
Previously, IPFS did not enforce a minimum RSA key size. In this release, we've introduced a minimum 2048 bit RSA key size. IPFS generates 2048 bit RSA keys by default so this shouldn't be an issue for anyone in practice. However, users who explicitly chose a smaller key size will not be able to communicate with new nodes.
Unfortunately, some of the bootstrap peers did intentionally generate 1024 bit RSA keys so they'd have vanity peer addresses (starting with QmSoL for "solar net"). All IPFS nodes should still have peers with >= 2048 bit RSA keys in their bootstrap list, and we've introduced a migration to ensure this.
We implemented this change to follow security best practices and to remove a potential foot-gun. However, in practice, the security impact of allowing insecure RSA keys should have been next to none because IPFS doesn't trust other peers on the network anyway.
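If you manage extra keys yourself and want to stay above the new minimum, generating a compliant RSA key looks roughly like this (a sketch; the key name is a placeholder):

```sh
# 2048-bit RSA is already the default size; spelling it out is just for clarity.
ipfs key gen --type=rsa --size=2048 my-key
```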
Subdomain Gateway
The gateway will redirect from `http://localhost:5001/ipfs/CID/...` to `http://CID.ipfs.localhost:5001/...` by default. This will:

Paths addressing the gateway by IP address (`http://127.0.0.1:5001/ipfs/CID`) will not be altered, as IP addresses can't have subdomains.

Note: cURL doesn't follow redirects by default. To avoid breaking cURL and other clients that don't support redirects, go-ipfs will return the requested file along with the redirect. Browsers will follow the redirect and abort the download while cURL will ignore the redirect and finish the download.
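As a rough illustration of the note above (assuming the ports shown earlier in this section, with `<CID>` as a placeholder for content your node has):

```sh
# Dump the response headers of a normal GET; the Location header shows the
# subdomain form of the URL.
curl -s -D - -o /dev/null "http://localhost:5001/ipfs/<CID>"

# Without -L, curl ignores that redirect and still downloads the content,
# because go-ipfs returns the file body along with the redirect.
curl -s "http://localhost:5001/ipfs/<CID>" -o output.bin
```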
TLS By Default
In this release, we're switching to TLS as the default security transport. This means we'll try to encrypt the connection with TLS before re-trying with SECIO.
Contrary to the announcement in the go-ipfs 0.4.23 release notes, this release does not remove SECIO support; we've kept it to maintain compatibility with js-ipfs.
SECIO Deprecation Notice
SECIO should be considered to be well on the way to deprecation and will be
completely disabled in either the next release (0.6.0, ~mid May) or the one
following that (0.7.0, ~end of June). Before SECIO is disabled, support will be
added for the NOISE transport for compatibility with other IPFS implementations.
QUIC Upgrade
If you've been using the experimental QUIC support, this release upgrades to a new and incompatible version of the QUIC protocol (draft 27). Old and new go-ipfs nodes will still interoperate, but not over the QUIC transport.
We intend to standardize on this draft of the QUIC protocol and enable QUIC by default in the next release if all goes well.
RC2 NOTE: QUIC has been upgraded back to the latest version.
RC1 NOTE: We've temporarily backed out of the new QUIC version because it
currently requires go 1.14 and go 1.14 has some scheduler bugs that go-ipfs can
reliably trigger.
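For reference, the experimental QUIC transport is used by listening on QUIC multiaddrs; a sketch of the config change is below (the exact steps may differ by version, and depending on your version an experimental flag may also be required, so consult docs/experimental-features.md in the go-ipfs repo):

```sh
# Add QUIC listen addresses alongside the TCP ones, then restart the daemon.
ipfs config --json Addresses.Swarm '[
  "/ip4/0.0.0.0/tcp/4001",
  "/ip4/0.0.0.0/udp/4001/quic",
  "/ip6/::/tcp/4001",
  "/ip6/::/udp/4001/quic"
]'
```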
Badger Datastore
In this release, we're calling the badger datastore (enabled at initialization with `ipfs init --profile=badgerds`) stable. However, we're not yet enabling it by default.

The benefit of badger is that adding/fetching data to/from badger is significantly faster than adding/fetching data to/from the default datastore, flatfs. In some tests, adding data to badger is 32x faster than flatfs (in this release).
However,
TL;DR: Use badger if performance is your main requirement, you rarely/never delete anything, and you have some memory to spare.
Systemd Support
For Linux users, this release includes support for two systemd features: socket activation and startup/shutdown notifications. This makes it possible to:
You can find the new systemd units in the go-ipfs repo under misc/systemd.
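Roughly, installing the units looks like this (a sketch; the unit file name here is an assumption, so check misc/systemd for the files that actually ship with the release):

```sh
# Copy the service unit from a go-ipfs checkout and enable it.
sudo cp misc/systemd/ipfs.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now ipfs.service
```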
IPFS API Over Unix Domain Sockets
This release supports exposing the IPFS API over a unix domain socket in the filesystem. To use this feature, run:
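The command itself is missing from this copy of the notes; one plausible way to do it (an assumption, using a hypothetical socket path) is to point the API address at a `/unix` multiaddr and restart the daemon:

```sh
# Serve the RPC API on a unix domain socket instead of a TCP port.
# The socket path here is a hypothetical example.
ipfs config Addresses.API /unix/var/run/ipfs/api.sock
```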
Repo Migration
IPFS uses repo migrations to make structural changes to the "repo" (the config, data storage, etc.) on upgrade.
This release includes two very simple repo migrations: a config migration to ensure that the config contains working bootstrap nodes and a keystore migration to base32 encode all key filenames.
In general, migrations should not require significant manual intervention. However, you should be aware of migrations and plan for them.
- If you use `ipfs update`, `ipfs update` will run the migration for you.
- If you run `ipfs daemon --migrate`, ipfs will migrate your repo for you on start.

Otherwise, if you want more control over the repo migration process, you can manually install and run the repo migration tool.
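In practice the two paths boil down to the commands below (a sketch; the standalone tool is fs-repo-migrations, distributed at dist.ipfs.io):

```sh
# Let the daemon migrate the repo in place on startup.
ipfs daemon --migrate

# Or run the standalone migration tool yourself
# (available from https://dist.ipfs.io/#fs-repo-migrations).
fs-repo-migrations
```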
Bootstrap Peer Changes
AUTOMATIC MIGRATION REQUIRED
The first migration will update the bootstrap peer list to:
- Use the resolvable `/dnsaddr/bootstrap.libp2p.io` addresses.
- Rewrite the address format from `/ipfs/QmPeerID` to `/p2p/QmPeerID`.

We're migrating addresses for a few reasons, including the switch to the new `/p2p/Qm...` address format described above.

Note: This migration won't add the new bootstrap peers to your config if you've explicitly removed the old bootstrap peers. It will also leave custom entries in the list alone. In other words, if you've customized your bootstrap list, this migration won't clobber your changes.
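After the migration runs, you can double-check the result (output will reflect your own config):

```sh
# List the configured bootstrap peers; migrated entries use the
# /dnsaddr/bootstrap.libp2p.io and /p2p/ forms described above.
ipfs bootstrap list
```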
Keystore Changes
AUTOMATIC MIGRATION REQUIRED
Go-IPFS stores additional keys (i.e., all keys other than the "identity" key) in the keystore. You can list these keys with `ipfs key`.

Currently, the keystore stores keys as regular files, named after the key itself. Unfortunately, filename restrictions and case-insensitivity are platform specific. To avoid platform-specific issues, we're base32 encoding all key names and renaming all keys on-disk.
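My reading of this change is that it only renames the files on disk; the key names reported by the CLI should stay the same. You can check with (sketch; key names shown are whatever you created):

```sh
# List keys by name along with their hashes; the migration changes the
# on-disk filenames, not the names reported here.
ipfs key list -l
```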
Changelog
TODO
✅ Release Checklist
For each RC published in each stage:

- `version.go` has been updated

Checklist:

- tests pass (`make test`)
- lint passes (`make test_go_lint`)
- run `./bin/mkreleaselog` to generate a nice starter list
- `repo/version.go` has been updated
- merge the release back into master (`git merge vX.Y.Z`)
- bump `version.go` to `vX.(Y+1).0-dev`

❤️ Contributors
< list generated by bin/mkreleaselog >
Would you like to contribute to the IPFS project and don't know how? Well, there are a few places you can get started:
- Check the issues with the `help wanted` label in the go-ipfs repo

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the `#ipfs` channel on Freenode, which is also accessible through our Matrix bridge.