
Go-IPFS 0.5.0 Release #7109

Closed
55 of 71 tasks
Stebalien opened this issue Apr 7, 2020 · 66 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature

Comments

@Stebalien
Member

Stebalien commented Apr 7, 2020

go-ipfs 0.5.0 Release

Release: https://dist.ipfs.io#go-ipfs

We're happy to announce go-ipfs 0.5.0, ...

🗺 What's left for release

🔦 Highlights

UNDER CONSTRUCTION

This release includes many important changes users should be aware of.

New DHT

This release includes an almost completely rewritten DHT implementation with a new protocol version. From a user's perspective, providing content, finding content, and resolving IPNS records should simply get faster. However, this is a significant (albeit well tested) change and significant changes are always risky, so heads up.

Old v. New

The current DHT suffers from three core issues addressed in this release:

  1. Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of a DHT query time is wasted trying to connect to peers that cannot be reached.
  2. The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
  3. The routing tables are poorly maintained. This can cause a search that should be logarithmic in the size of the network to be linear.
Reachable

We have addressed the problem of undialable nodes by having nodes wait to join the DHT as "server" nodes until they've confirmed that they are reachable from the public internet. Additionally, we've introduced:

  • A new libp2p protocol to push updates to our peers when we start/stop listening on protocols.
  • A libp2p event bus for processing updates like these.
  • A new DHT protocol version. New DHT nodes will not admit old DHT nodes into their routing tables. Old DHT nodes will still be able to issue queries against the new DHT, they just won't be queried or referred by new DHT nodes. This way, old, potentially unreachable nodes with bad routing tables won't pollute the new DHT.

Unfortunately, there's a significant downside to this approach: on VPNs, offline LANs, etc., all nodes on the network have private IP addresses and never communicate over the public internet, so none of them would ever become "publicly reachable".

To address this last point, go-ipfs 0.5.0 will run two DHTs: one for private networks and one for the public internet. That is, every node will participate in a LAN DHT and a public WAN DHT.
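You can inspect both routing tables from the CLI. A minimal sketch, assuming the ipfs stats dht command is available in your build (it accompanies the LAN/WAN split):

> ipfs stats dht    # prints the LAN and WAN DHT routing tables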

RC2 NOTE: All the features not enabled in RC1 have been enabled in RC2.

RC1 NOTE: Several of these features have not been enabled in RC1:

  1. We haven't yet switched the protocol version and will be running the DHT in "compatibility mode" with the old DHT. Once we flip the switch and enable the new protocol version, we will need to ensure that at least 20% of the publicly reachable DHT speaks the new protocol, all at once. The plan is to introduce a large number of "booster" nodes while the network transitions.
  2. We haven't yet introduced the split LAN/WAN DHTs. We're still testing this approach and considering alternatives.
  3. Because we haven't introduced the LAN/WAN DHT split, IPFS nodes running in DHT server mode will continue to run in DHT server mode without waiting to confirm that they're reachable from the public internet. Otherwise, we'd break IPFS nodes running DHTs in VPNs and disconnected LANs.
Query Logic

We've fixed the DHT query logic by correctly implementing Kademlia (with a few tweaks). This should significantly speed up:

  • Publishing IPNS & provider records. We previously continued searching for closer and closer peers to the "target" until we timed out, then we put to the closest peers we found.
  • Resolving IPNS addresses. We previously continued IPNS record searches until we ran out of peers to query, timed out, or found 16 records.

In both cases, we now continue until we find the closest peers, then stop.

Routing Tables

Finally, we've addressed the poorly maintained routing tables by:

  • Reducing the likelihood that the connection manager will kill connections to peers in the routing table.
  • Keeping peers in the routing table, even if we get disconnected from them.
  • Actively and frequently querying the DHT to keep our routing table full.

Testing

The DHT rewrite was made possible by our new testing framework, testground, which allows us to spin up multi-thousand node tests with simulated real-world network conditions. With testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

Refactored Bitswap

This release includes a major bitswap refactor running a new, but backwards compatible, bitswap protocol. We expect these changes to improve performance significantly.

With the refactored bitswap, we expect:

  • Few to no duplicate blocks when fetching data from other nodes speaking the new protocol.
  • Better parallelism when fetching from multiple peers.

Note: the new bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement, if any.

Provider Record Changes

When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this "providing".

However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID) and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) different "codecs" depending on how we're interpreting the data.

Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.

In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
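Both CID versions wrap the same multihash, which is why announcing the bare multihash is enough. For example, converting a CIDv0 to its base32 CIDv1 form changes the CID text but not the multihash inside (a sketch assuming the ipfs cid base32 helper; the CID pair is the one from the gateway discussion further down this thread):

> ipfs cid base32 Qmdfqcz3LN59zqkZp4akTT8hjjx84UoUwFnykmmbi6RnsG
bafybeihdzgwvtbtsbfmrijh77zav4krhbu7h32rmmnrposu6dcvjole244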

Warning: until the rest of the network upgrades, this change can impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes prior to go-ipfs 0.5.0 and vice-versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

IPFS/Libp2p Address Format

If you've ever run a command like ipfs swarm peers, you've likely seen paths that look like /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID. These paths are not file paths: they're multiaddrs, addresses of peers on the network.

Unfortunately, /ipfs/Qm... is also the same path format we use for files. This release changes the multiaddr format from /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID to /ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID to make the distinction clear.

What this means for users:

  • Old-style multiaddrs will still be accepted as inputs to IPFS.
  • If you were using a multiaddr library (go, js, etc.) to name files because /ipfs/QmSomePeerID looks like /ipfs/QmSomeFile, your tool may break if you upgrade this library.
  • If you're manually parsing multiaddrs and are searching for the string /ipfs/..., you'll need to search for /p2p/....
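For example, a hypothetical script that filters swarm addresses by string matching would change like this (the grep one-liners are illustrative, not tools shipped with this release):

# before
> ipfs swarm peers | grep '/ipfs/'
# after: new output uses /p2p/ (old-style addresses are still accepted as input)
> ipfs swarm peers | grep '/p2p/'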

Minimum RSA Key Size

Previously, IPFS did not enforce a minimum RSA key size. In this release, we've introduced a minimum 2048 bit RSA key size. IPFS generates 2048 bit RSA keys by default so this shouldn't be an issue for anyone in practice. However, users who explicitly chose a smaller key size will not be able to communicate with new nodes.
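For instance, a node originally initialized with something like ipfs init -b 1024 will need a new, larger identity key, which unfortunately also means a new peer ID. A minimal sketch, assuming a fresh repo location:

# initialize a new repo with a 2048 bit RSA identity key (the current default)
> IPFS_PATH=/path/to/new/repo ipfs init -b 2048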

Unfortunately, some of the bootstrap peers intentionally generated 1024 bit RSA keys so they'd have vanity peer addresses (starting with QmSoL for "solar net"). All IPFS nodes should already have peers with >= 2048 bit RSA keys in their bootstrap list, but we've introduced a migration to ensure this.

We implemented this change to follow security best practices and to remove a potential foot-gun. However, in practice, the security impact of allowing insecure RSA keys should have been next to none because IPFS doesn't trust other peers on the network anyways.

Subdomain Gateway

The gateway will redirect from http://localhost:5001/ipfs/CID/... to http://CID.ipfs.localhost:5001/... by default. This will:

  • Ensure that every dapp gets its own browser origin.
  • Make it easier to write websites that "just work" with IPFS because absolute paths will now work.

Paths addressing the gateway by IP address (http://127.0.0.1:5001/ipfs/CID) will not be altered as IP addresses can't have subdomains.

Note: cURL doesn't follow redirects by default. To avoid breaking cURL and other clients that don't support redirects, go-ipfs will return the requested file along with the redirect. Browsers will follow the redirect and abort the download while cURL will ignore the redirect and finish the download.
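For example, with any CID you have locally (the <CID> below is a placeholder):

# prints the redirect headers plus the file body; a browser would follow the Location header instead
> curl -i "http://localhost:5001/ipfs/<CID>/index.html"
# follow the redirect like a browser would
> curl -L "http://localhost:5001/ipfs/<CID>/index.html"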

TLS By Default

In this release, we're switching TLS to be the default transport. This means we'll try to encrypt the connection with TLS before re-trying with SECIO.

Contrary to the announcement in the go-ipfs 0.4.23 release notes, this release does not remove SECIO support to maintain compatibility with js-ipfs.

SECIO Deprecation Notice

SECIO should be considered to be well on the way to deprecation and will be
completely disabled in either the next release (0.6.0, ~mid May) or the one
following that (0.7.0, ~end of June). Before SECIO is disabled, support will be
added for the NOISE transport for compatibility with other IPFS implementations.

QUIC Upgrade

If you've been using the experimental QUIC support, this release upgrades to a new and incompatible version of the QUIC protocol (draft 27). Old and new go-ipfs nodes will still interoperate, but not over the QUIC transport.

We intend to standardize on this draft of the QUIC protocol and enable QUIC by default in the next release if all goes well.

RC2 NOTE: QUIC has been upgraded back to the latest version.

RC1 NOTE: We've temporarily backed out of the new QUIC version because it
currently requires go 1.14 and go 1.14 has some scheduler bugs that go-ipfs can
reliably trigger.

Badger Datastore

In this release, we're marking the badger datastore (enabled at initialization with ipfs init --profile=badgerds) as stable. However, we're not yet enabling it by default.

The benefit of badger is that adding/fetching data to/from badger is significantly faster than adding/fetching data to/from the default datastore, flatfs. In some tests, adding data to badger is 32x faster than flatfs (in this release).

However,

  1. Badger is complicated, while flatfs pushes all the complexity down into the filesystem itself. That means that flatfs is only likely to lose your data if your underlying filesystem gets corrupted, while there are more opportunities for badger itself to get corrupted.
  2. Badger can use a lot of memory. In this release, we've tuned badger to use very little (~20MiB) of memory by default. However, it can still produce large (1GiB) spikes in memory usage when garbage collecting.
  3. Badger isn't very aggressive when it comes to garbage collection and we're still investigating ways to get it to more aggressively clean up after itself.

TL;DR: Use badger if performance is your main requirement, you rarely/never delete anything, and you have some memory to spare.
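Note that the profile only applies when initializing a new repo; switching an existing repo over requires converting the datastore (for example with the separate ipfs-ds-convert tool). A minimal sketch for a new node:

# create a new repo backed by badger instead of flatfs
> ipfs init --profile=badgerds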

Systemd Support

For Linux users, this release includes support for two systemd features: socket activation and startup/shutdown notifications. This makes it possible to:

  • Start IPFS on demand on first use.
  • Wait for IPFS to finish starting before starting services that depend on it.

You can find the new systemd units in the go-ipfs repo under misc/systemd.
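A minimal sketch of enabling them (the unit name is illustrative; check misc/systemd for the exact units shipped with your version):

# after copying the units from misc/systemd into /etc/systemd/system:
> sudo systemctl daemon-reload
> sudo systemctl enable --now ipfs.service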

IPFS API Over Unix Domain Sockets

This release supports exposing the IPFS API over a unix domain socket in the filesystem. To use this feature, run:

> ipfs config Addresses.API "/unix/path/to/socket/location"
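Then restart the daemon. Clients that understand unix sockets can talk to the API there; for example, with a curl build that supports --unix-socket (the socket path is the filesystem part of the /unix/... multiaddr above, and remember that API calls must be POSTs):

> curl -X POST --unix-socket /path/to/socket/location "http://unix/api/v0/id"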

Repo Migration

IPFS uses repo migrations to make structural changes to the "repo" (the config, data storage, etc.) on upgrade.

This release includes two very simple repo migrations: a config migration to ensure that the config contains working bootstrap nodes and a keystore migration to base32 encode all key filenames.

In general, migrations should not require significant manual intervention. However, you should be aware of migrations and plan for them.

  • If you update go-ipfs with ipfs update, ipfs update will run the migration for you.
  • If you start the ipfs daemon with ipfs daemon --migrate, ipfs will migrate your repo for you on start.

Otherwise, if you want more control over the repo migration process, you can manually install and run the repo migration tool.
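For example (a sketch; the standalone tool is distributed at https://dist.ipfs.io/#fs-repo-migrations):

# let the daemon migrate the repo on start:
> ipfs daemon --migrate
# or run the standalone migration tool against the repo before starting the new daemon:
> fs-repo-migrations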

Bootstrap Peer Changes

AUTOMATIC MIGRATION REQUIRED

The first migration will update the bootstrap peer list to:

  1. Replace the old bootstrap nodes (ones with peer IDs starting with QmSoL) with new bootstrap nodes (ones with addresses that start with /dnsaddr/bootstrap.libp2p.io).
  2. Rewrite the address format from /ipfs/QmPeerID to /p2p/QmPeerID.

We're migrating addresses for a few reasons:

  1. We're using DNS to address the new bootstrap nodes so we can change the underlying IP addresses as necessary.
  2. The new bootstrap nodes use 2048 bit keys while the old bootstrap nodes use 1024 bit keys.
  3. We're normalizing the address format to /p2p/Qm....

Note: This migration won't add the new bootstrap peers to your config if you've explicitly removed the old bootstrap peers. It will also leave custom entries in the list alone. In other words, if you've customized your bootstrap list, this migration won't clobber your changes.
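After the migration you can verify the result; for example:

> ipfs bootstrap list    # should show /dnsaddr/bootstrap.libp2p.io/p2p/... entries (unless you've customized the list)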

Keystore Changes

AUTOMATIC MIGRATION REQUIRED

Go-IPFS stores additional keys (i.e., all keys other than the "identity" key) in the keystore. You can list these keys with ipfs key list.

Currently, the keystore stores keys as regular files, named after the key itself. Unfortunately, filename restrictions and case-insensitivity are platform specific. To avoid platform specific issues, we're base32 encoding all key names and renaming all keys on-disk.
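The key names you see from the CLI are unaffected by the on-disk rename; for example:

> ipfs key list -l    # names (and key hashes) look the same before and after the migration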

Changelog

TODO

✅ Release Checklist

For each RC published in each stage:

  • version string in version.go has been updated
  • tag commit with vX.Y.Z-rcN
  • upload to dist.ipfs.io
    1. Build: https://github.com/ipfs/distributions#usage.
    2. Pin the resulting release.
    3. Make a PR against ipfs/distributions with the updated versions, including the new hash in the PR comment.
    4. Ask the infra team to update the DNSLink record for dist.ipfs.io to point to the new distribution.
  • cut a pre-release on github and upload the result of the ipfs/distributions build in the previous step.
  • Announce the RC:

Checklist:

  • Stage 0 - Automated Testing
    • Feature freeze. If any "non-trivial" changes (see the footnotes of docs/releases.md for a definition) get added to the release, uncheck all the checkboxes and return to this stage.
    • Automated Testing (already tested in CI) - Ensure that all tests are passing, this includes:
  • Stage 1 - Internal Testing
    • CHANGELOG.md has been updated
    • Network Testing:
      • test lab things - TBD
    • Infrastructure Testing:
      • Deploy new version to a subset of Bootstrappers
      • Deploy new version to a subset of Gateways
      • Deploy new version to a subset of Preload nodes
      • Collect metrics every day. Work with the Infrastructure team to learn of any hiccup
    • IPFS Application Testing - Run the tests of the following applications:
  • Stage 2 - Community Dev Testing
    • Reach out to the IPFS early testers listed in docs/EARLY_TESTERS.md for testing this release (check when no more problems have been reported). If you'd like to be added to this list, please file a PR.
    • Reach out on IRC for beta testers.
    • Run tests available in the following repos with the latest beta (check when all tests pass):
  • Stage 3 - Community Prod Testing
    • Documentation
    • Invite the IPFS early testers to deploy the release to part of their production infrastructure.
    • Invite the wider community through (link to the release issue):
  • Stage 4 - Release
    • Final preparation
    • Publish a Release Blog post (at minimum, a c&p of this release issue with all the highlights, API changes, link to changelog and thank yous)
    • Broadcasting (link to blog post)
  • Post-Release
    • Bump the version in version.go to vX.(Y+1).0-dev.
    • Create an issue using this release issue template for the next release.
    • Make sure any last-minute changelog updates from the blog post make it back into the CHANGELOG.

❤️ Contributors

< list generated by bin/mkreleaselog >

Would you like to contribute to the IPFS project and don't know how? Well, there are a few places you can get started:

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.

@Stebalien Stebalien added the kind/enhancement A net-new feature or improvement to an existing feature label Apr 7, 2020
@Stebalien Stebalien mentioned this issue Apr 7, 2020
@Stebalien Stebalien pinned this issue Apr 7, 2020
@bonedaddy
Contributor

> Merge #6870 (punted till after the RC as these changes should be pretty safe).

This seems a little iffy. Non-thoroughly-tested functionality is only safe after it's thoroughly tested; until then, assuming it's safe just increases the likelihood of issues.

@ribasushi
Contributor

@bonedaddy it's an addition to the experimental ipfs dag command section; it doesn't affect anything in the actual daemon operations. Originally it was re-slated for 0.6, but an interop concern is making this bubble up again.

@bonedaddy
Contributor

> @bonedaddy it's an addition to the experimental ipfs dag command section; it doesn't affect anything in the actual daemon operations. Originally it was re-slated for 0.6, but an interop concern is making this bubble up again.

Dependencies can be pulled in which may have potentially adverse effects. Personally speaking, letting untested functionality be included in releases is a bad practice that shouldn't be done. For example: what if the dag import/export commands suffer from a bug themselves that would be discovered via testing through the RC process? The long term effect of not having functionality go through proper testing in the RC process is more work spent dealing with, and fixing, issues later.

For example: let's imagine we're at a time when go-ipfs is at v1.0.0. Will these practices of including untested functionality in releases continue? If not, why not make changes now that result in better testing and better change management? Nothing is lost, but everything is gained.

@ribasushi
Contributor

@bonedaddy sorry, I now see the confusion. The text should have read not included in RC1. There is another RC coming up towards the end of the week, we just needed to get the DHT parts there as soon as possible to gain more feedback.

@bonedaddy
Contributor

> @bonedaddy sorry, I now see the confusion. The text should have read not included in RC1. There is another RC coming up towards the end of the week, we just needed to get the DHT parts there as soon as possible to gain more feedback.

Ah okay, good stuff 🚀

@jbenet
Member

jbenet commented Apr 7, 2020

FYI, companion and ipfs desktop broke because of 1b49047 -- looks like @lidel's on it because the Firefox companion is fixed, but Chrome hasn't gotten a new version. Sounds like this will break a lot of users, so you'll definitely want to flag it in the release notes, and here, very prominently. It took me a while to figure out what was wrong.

@jbenet
Member

jbenet commented Apr 7, 2020

  • IPFS API Over Unix Domain Sockets
  • /p2p/ multiaddrs

@lidel
Member

lidel commented Apr 7, 2020

Unfortunately, the Chrome Web Store is super slow at accepting updates.

Chromium users need to wait for the Stable channel update to v2.11.0, or uninstall it and install Beta, which just got approved: v2.11.0.904

@ianopolous
Member

ianopolous commented Apr 7, 2020

I'm getting an HTTP 405 from http://127.0.0.1:5001/api/v0/id
We do a GET request. In my opinion it doesn't make sense that this needs to be a POST.
Do all API calls now have to be POSTs? If so, that should be highlighted here as that's a breaking change.
EDIT: After discovering that browsers let random websites make GETs but not POSTs to localhost, this makes sense.

@lidel
Member

lidel commented Apr 7, 2020

Yes, all calls to the RPC API at /api/v0 on the API port need to be POSTs now.
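For example (the endpoint is the one mentioned above, on the default API port):

# a pre-0.5.0 style GET now returns 405:
> curl "http://127.0.0.1:5001/api/v0/id"
# 0.5.0 requires POST:
> curl -X POST "http://127.0.0.1:5001/api/v0/id"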

@ianopolous
Member

After switching to POSTs all our local tests are passing, including p2p stream tests.

Thumbs up!

@RubenKelevra
Contributor

> Warning: until the rest of the network upgrades, this change can impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes prior to go-ipfs 0.5.0 and vice-versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?

@meeDamian

Running TEST_NO_FUSE=1 make test_short fails on t0054 #23 fifo import test:

@Stebalien
Member Author

@RubenKelevra

> Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?

That would make it harder to upgrade. The upgrade will impact finding content added with CIDv1; it will not impact finding content added with CIDv0.

@Stebalien
Member Author

@meeDamian off topic, don't use go 1.14. Use either 1.13.x or 1.14.2. Go 1.14 has several known bugs that will cause go-ipfs to hang.

Otherwise, that looks like a bug, probably a bug in the test. Thanks for finding it! Please file a new issue.

@meeDamian

meeDamian commented Apr 20, 2020

> @meeDamian off topic, don't use go 1.14.

Thank you, and apologies for not being clear. go 1.14 on Dockerhub currently resolves to 1.14.2.

That being said, I'll try with explicit latest versions of 1.13.x and 1.14.x first 🙂.

@RubenKelevra
Contributor

> @RubenKelevra
>
> Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?
>
> That would make it harder to upgrade. The upgrade will impact finding content added with CIDv1; it will not impact finding content added with CIDv0.

I understand what you mean, but I don't care about compatibility with 0.4.x.

When I add files through cluster-ctl, I end up with a lot of CIDs which are v1 anyway.

I just want it to work either completely or not at all - the mixture of CIDv1/0 is hard to debug:

If the requested initial CID is a v0, bitswap will make it possible to fetch any further content, while when the initial CID is v1 it won't work.

I just want to avoid using CIDv0 for anything at this time, just to make sure that when I save content for the next year, I don't have to remove and re-add it.

@RubenKelevra
Contributor

> @meeDamian off topic, don't use go 1.14. Use either 1.13.x or 1.14.2. Go 1.14 has several known bugs that will cause go-ipfs to hang.
>
> Otherwise, that looks like a bug, probably a bug in the test. Thanks for finding it! Please file a new issue.

Can't we check while building for go 1.14.0/1.14.1 and just abort with an error? 🤔

@Stebalien
Member Author

@RubenKelevra

> I understand what you mean, but I don't care about compatibility with 0.4.x.
>
> When I add files through cluster-ctl, I end up with a lot of CIDs which are v1 anyway.
>
> I just want it to work either completely or not at all - the mixture of CIDv1/0 is hard to debug:
>
> If the requested initial CID is a v0, bitswap will make it possible to fetch any further content, while when the initial CID is v1 it won't work.
>
> I just want to avoid using CIDv0 for anything at this time, just to make sure that when I save content for the next year, I don't have to remove and re-add it.

Open issue: #5230

Please keep questions in this issue on-topic for the go-ipfs 0.5.0 release.

> Can't we check while building for go 1.14.0/1.14.1 and just abort with an error? 🤔

Yes. We'd have to patch bin/check_go_version. Want to submit a patch?

@RubenKelevra
Contributor

Thanks, true this was going a bit off-topic - sorry!


With the current CIDv1 approach we will run into some issues with the gateways. When we update the software on the gateways, they won't be able to find v1 content of 0.4.x clients, which will probably lead to a lot of confusion about the availability of the data - many people use the gateways to check if their data is available.

I think it makes sense to add a compatibility fallback to the old v1 format in the DHT requests:

If there's no result in a reasonable amount of time, 0.5.0 could generate the old CIDv1 format and query the DHT for that as well.

This would open up a transition period for data stored on 0.4.x nodes without adding too much stress to the DHT.

@Stebalien
Member Author

This is not a decision we plan to revisit unless it becomes an active problem for 0.5.0 users. If that happens, we can cut a patch release to turn every DHT query into two DHT queries. However, that would be expensive and we'd very much like to avoid it.

In general, the vast majority of users still use CIDv0. Those using CIDv1 must have explicitly opted in, are usually "in the loop", and will likely upgrade immediately.

While we could fix new nodes finding content published by old nodes, fixing the reverse would be infeasible. We'd have to publish two provider records for every piece of data, and providing is already slow as it is (although that should get much better after the release).

@bonedaddy
Contributor

I'm a bit lost on why this compatibility is broken. If I understand correctly, CIDv1 content published by nodes before 0.5.0 will be undiscoverable by 0.5.0 nodes, but CIDv0 content will be discoverable? What exactly is causing this issue?

@Stebalien
Member Author

@bonedaddy the new DHT advertises and searches for multihashes instead of CIDs. That way, it doesn't matter if content was advertised with CIDv1/CIDv0. This change is intended to ease the CIDv0 to CIDv1 migration.

For example, without this change, if I try to load https://localhost:8080/ipfs/Qmdfqcz3LN59zqkZp4akTT8hjjx84UoUwFnykmmbi6RnsG/index.html and I get redirected to https://bafybeihdzgwvtbtsbfmrijh77zav4krhbu7h32rmmnrposu6dcvjole244.ipfs.localhost:8080/index.html (the subdomain gateway with a base32 CIDv1), I might not be able to find the content in the network because it was added with CIDv0 and I'm trying to find it after converting the CID to a v1 CID. With this change, I'll just try to search for the multihash directly (which is equivalent to the CIDv0 version).

We could advertise both CIDv1 and CIDv0 CIDs, but that would double the number of DHT queries.

@bonedaddy
Contributor

Ah okay, makes sense. If my node has CIDv1 hashes generated before 0.5.0 and I switch to 0.5.0, that doesn't break anything on the node itself, it just makes it less likely that I will find CIDv1 content published by nodes prior to 0.5.0, yeah?

@Stebalien
Member Author

Exactly. Also, nodes prior to 0.5.0 may have trouble finding CIDv1 content published by our post 0.5.0 node (because this content will be advertised with raw multihashes instead of CIDv1).

@Stebalien
Member Author

RC3 is now live: https://dist.ipfs.io/go-ipfs/v0.5.0-rc3

We're now in the "fine-tuning" stage and this release should be pretty stable. Please test widely.

Since RC2:

  • Many typo fixes.
  • Merged some improvements to the gateway directory listing template.
  • Performance tweaks in bitswap and the DHT (mostly around lock contention and allocations).
  • More integration tests for the DHT.
  • Fixed redirects to the subdomain gateway for directory listings.
  • Merged some debugging code for QUIC.
  • Updated the WebUI to pull in some bug fixes.
  • Updated flatfs to fix some issues on Windows in the presence of antivirus software.
  • Updated go version to 1.13.10.
  • Avoid adding IPv6 peers to the WAN DHT if their only "public" IP addresses aren't in the public internet IPv6 range.

@Stebalien
Member Author

Stebalien commented Apr 25, 2020

CHANGE

We initially planned to bump the DHT protocol's version number to prevent old DHT nodes (those without the 0.5.0 improvements) from polluting the new DHT. Under this plan, new DHT nodes would still respond to requests from old DHT nodes, but they wouldn't make requests to old DHT nodes or route requests to old DHT nodes.

We've abandoned this plan. To pull this off without degrading service for both old and new nodes, we needed 20% of DHT servers to upgrade all at once. Our plan was to simply operate this 20% of the network ourselves. Given the current size of the reachable DHT, this should be feasible.

However, while we built a system to do this, it did not perform well enough when tested on the live network for us to be comfortable relying on it to support the weight of the entire network.

Therefore, while this release will use the new DHT, it will not change the DHT protocol version number. In other words, the final release's DHT will behave the same as the DHT in the RCs.

What does this mean for you? When 0.5.0 is released, please upgrade ASAP.

  1. This will improve query performance for everyone as 0.5.0 nodes maintain better routing tables and avoid joining the DHT when they're unreachable.
  2. This will significantly reduce the load on the DHT. The new DHT makes 3-4x fewer new connections per request and we'll be able to reduce that even further in the next release if enough of the DHT upgrades.

@Stebalien
Member Author

Stebalien commented Apr 25, 2020

RC4 is now live: https://dist.ipfs.io/go-ipfs/v0.5.0-rc4

Barring any significant issues, this will be the final RC before the release. Please test widely.

Note: While this release has been thoroughly tested, it is a very large release. We don't expect any major bugs, but we're ready to cut a patch release a week after the release if necessary. However, I will be very sad if we need to cut two patch releases so please test this RC thoroughly.

Since RC3:

  • Reduce duplicate blocks in bitswap by increasing some timeouts and fixing the use of sessions in the ipfs pin command.
  • Fix some bugs in the ipfs dht CLI commands.
  • Ensure bitswap cancels are sent when the request is aborted.
  • Optimize some bitswap hot-spots and reduce allocations.
  • Harden use of the libp2p identify protocol to ensure we never "forget" our peers' protocols. This is important in this release because we're using this information to determine whether or not a peer is a member of the DHT.
  • Fix some edge cases where we might not notice that a peer has transitioned to/from a DHT server/client.
  • Avoid forgetting our observed external addresses when no new connections have formed in the last 10 minutes. This has been a mild issue since 2016 but was exacerbated by this release as we now push address updates to our peers when our addresses change. Unfortunately, combined, this meant we'd tell our peers to forget our external addresses (but only if we haven't formed a single new connection in the last 10 minutes).

@pataquets
Contributor

Filed #7220

@Stebalien
Member Author

Released!

@b5
Contributor

b5 commented Apr 28, 2020

Congrats!

@RubenKelevra
Contributor

@Stebalien I think this can be unpinned :)

@Stebalien Stebalien unpinned this issue May 10, 2020
@Stebalien
Member Author

@RubenKelevra Never! (thanks for the reminder)

flyskywhy added a commit to flyskywhy/textiot that referenced this issue Jul 5, 2021