docs: warn users who are falling behind reprovides
Fixes: ipfs#9704
Fixes: ipfs#9702
Fixes: ipfs#9703
Fixes: ipfs#9419
Jorropo committed Jun 6, 2023
1 parent db966fe commit 4ff3ed9
Showing 3 changed files with 67 additions and 58 deletions.
17 changes: 16 additions & 1 deletion docs/changelogs/v0.21.md
@@ -12,6 +12,7 @@
- [Gateway: DAG-CBOR/-JSON previews and improved error pages](#gateway-dag-cbor-json-previews-and-improved-error-pages)
- [Gateway: subdomain redirects are now `text/html`](#gateway-subdomain-redirects-are-now-texthtml)
- [`ipfs dag stat` deduping statistics](#ipfs-dag-stat-deduping-statistics)
- [Accelerated DHT Client is no longer experimental](#accelerated-dht-client-is-no-longer-experimental)
- [📝 Changelog](#-changelog)
- [👨‍👩‍👧‍👦 Contributors](#-contributors)

@@ -76,7 +77,7 @@ for Kubo `v0.21`.

In this release, we improved the HTML templates of our HTTP gateway:

1. You can now preview the contents of a DAG-CBOR and DAG-JSON document from your browser, as well as follow any IPLD Links ([CBOR Tag 42](https://github.com/ipld/cid-cbor/)) contained within them.
2. The HTML directory listings now contain [updated, higher-definition icons](https://user-images.githubusercontent.com/5447088/241224419-5385793a-d3bb-40aa-8cb0-0382b5bc56a0.png).
3. On gateway error, instead of a plain text error message, web browsers will now get a friendly HTML response with more details regarding the problem.

@@ -124,6 +125,20 @@ Ratio: 1.615755

The keys in `ipfs --enc=json dag stat` output changed in a non-breaking way: new keys have been added, while the old keys keep their previous semantics.
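For reference, a minimal sketch of querying the machine-readable output (the placeholder CID and the exact set of new key names are assumptions, not a verbatim listing):

```
# JSON output keeps the pre-existing keys with their old meaning and adds
# new aggregate fields (such as the dedup ratio shown above).
ipfs --enc=json dag stat <root-cid>
```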

#### Accelerated DHT Client is no longer experimental

The [accelerated DHT client](docs/config.md#routingaccelerateddhtclient) is now
the main recommended solution for users who are hosting lots of data.
In exchange for some upfront DHT caching and increased memory usage,
it delivers provide throughput up to 6 million times higher, making much
bigger datasets practical to host.
See [the docs](docs/config.md#routingaccelerateddhtclient) for more info.

The `Experimental.AcceleratedDHTClient` flag has moved to [`Routing.AcceleratedDHTClient`](docs/config.md#routingaccelerateddhtclient).
A config migration has been added to handle this automatically.

A new tracker estimates the provide speed and warns users who are falling
behind and should be using the accelerated DHT client.
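For example, a minimal sketch of opting in under the new key (nodes that had the old experimental flag set are migrated automatically and need no manual step):

```
# Enable the accelerated DHT client under its new, non-experimental key,
# then restart the daemon for the change to take effect.
ipfs config --json Routing.AcceleratedDHTClient true
```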

### 📝 Changelog

### 👨‍👩‍👧‍👦 Contributors
51 changes: 49 additions & 2 deletions docs/config.md
Expand Up @@ -110,6 +110,7 @@ config file at runtime.
- [`Reprovider.Strategy`](#reproviderstrategy)
- [`Routing`](#routing)
- [`Routing.Type`](#routingtype)
- [`Routing.AcceleratedDHTClient`](#routingaccelerateddhtclient)
- [`Routing.Routers`](#routingrouters)
- [`Routing.Routers: Type`](#routingrouters-type)
- [`Routing.Routers: Parameters`](#routingrouters-parameters)
@@ -1348,7 +1349,7 @@ Type: `array[peering]`
### `Reprovider.Interval`

Sets the time between rounds of reproviding local content to the routing
system.

- If unset, it uses the implicit safe default.
- If set to the value `"0"` it will disable content reproviding.
@@ -1423,6 +1424,52 @@ Default: `auto` (DHT + IPNI)

Type: `optionalString` (`null`/missing means the default)


### `Routing.AcceleratedDHTClient`

Enables an alternative DHT client that uses a Full-Routing-Table strategy: every
hour it performs a complete scan of the DHT and records all nodes found.
When a lookup is then performed, instead of going through multiple Kademlia hops
it finds the 20 final nodes directly in the recorded network table.

This trades extra CPU and memory (for the periodic scans and the larger set of
records to keep) for individual operations that complete roughly 10x faster and
provide throughput that is up to 6 million times higher.

This is not compatible with `Routing.Type` `custom`. If you are using composable routers
you can configure this individually on each router.

When it is enabled:
- DHT operations (reads and writes) should complete much faster, giving lower latency.
- The provider will use a keyspace-sweeping mode, allowing it to keep alive CID
  sets that are multiple orders of magnitude bigger.
- The standard Bucket-Routing-Table DHT will still run for the DHT server if a
  mode that enables the DHT server is used. The classical routing table is still
  used to answer other nodes, because getting this right is critical to avoid
  harming the network.
- `ipfs stats dht` will default to showing information about the accelerated DHT
  client (see the sketch after this list).
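A minimal sketch of checking on the client once it is enabled and the daemon is running:

```
# Inspect the accelerated client's routing table; it is populated by the
# initial network scan, which takes roughly the first 5-10 minutes.
ipfs stats dht

# Inspect the provide/reprovide system; this command only works while the
# accelerated DHT client is enabled.
ipfs stats provide
```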

**Caveats:**
1. Running the accelerated client will likely result in more resource consumption (connections, RAM, CPU, bandwidth)
- Users that are limited in the number of parallel connections their machines/networks can perform will likely suffer
- The resource usage is not smooth as the client crawls the network in rounds and reproviding is similarly done in rounds
- Users who previously had a lot of content but were unable to advertise it on the network will see an increase in
egress bandwidth as their nodes start to advertise all of their CIDs into the network. If you have lots of data
entering your node that you don't want to advertise, then consider using [Reprovider Strategies](#reproviderstrategy)
to reduce the number of CIDs that you are reproviding. Similarly, if you are running a node that deals mostly with
short-lived temporary data (e.g. you use a separate node for ingesting data from the one storing and serving it) then
you may benefit from using [Strategic Providing](experimental-features.md#strategic-providing) to prevent advertising
of data that you ultimately will not have.
2. Currently, the DHT is not usable for queries for the first 5-10 minutes of operation as the routing table is being
prepared. This means operations like searching the DHT for particular peers or content will not work initially.
- You can see if the DHT has been initially populated by running `ipfs stats dht`
3. Currently, the accelerated DHT client is not compatible with LAN-based DHTs and will not perform operations against
them

Default: `false`

Type: `bool` (missing means `false`)
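For illustration, a sketch of where this flag lives in the config file (surrounding fields are shown only for context):

```
{
  "Routing": {
    "Type": "auto",
    "AcceleratedDHTClient": true
  }
}
```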

### `Routing.Routers`

**EXPERIMENTAL: `Routing.Routers` configuration may change in future release**
Expand Down Expand Up @@ -1465,7 +1512,7 @@ HTTP:

DHT:
- `"Mode"`: Mode used by the DHT. Possible values: "server", "client", "auto"
- `"AcceleratedDHTClient"`: Set to `true` if you want to use the accelerated DHT client.
- `"PublicIPNetwork"`: Set to `true` to create a `WAN` DHT. Set to `false` to create a `LAN` DHT.

Parallel:
57 changes: 2 additions & 55 deletions docs/experimental-features.md
@@ -513,7 +513,7 @@ ipfs config --json Experimental.StrategicProviding true
- [ ] provide roots
- [ ] provide all
- [ ] provide strategic

## GraphSync

### State
@@ -546,59 +546,6 @@ Stable, enabled by default

[Noise](https://github.com/libp2p/specs/tree/master/noise) libp2p transport based on the [Noise Protocol Framework](https://noiseprotocol.org/noise.html). While TLS remains the default transport in Kubo, Noise is easier to implement and is thus the "interop" transport between IPFS and libp2p implementations.

## Accelerated DHT Client

### In Version

0.9.0

### State

Experimental, default-disabled.

Utilizes an alternative DHT client that searches for and maintains more information about the network
in exchange for being more performant.

When it is enabled:
- DHT operations should complete much faster than with it disabled
- A batching reprovider system will be enabled which takes advantage of some properties of the experimental client to
very efficiently put provider records into the network
- The standard DHT client (and server if enabled) are run alongside the alternative client
- The operations `ipfs stats dht` and `ipfs stats provide` will have different outputs
- `ipfs stats provide` only works when the accelerated DHT client is enabled and shows various statistics regarding
the provider/reprovider system
- `ipfs stats dht` will default to showing information about the new client

**Caveats:**
1. Running the experimental client likely will result in more resource consumption (connections, RAM, CPU, bandwidth)
- Users that are limited in the number of parallel connections their machines/networks can perform will likely suffer
- Currently, the resource usage is not smooth as the client crawls the network in rounds and reproviding is similarly
done in rounds
- Users who previously had a lot of content but were unable to advertise it on the network will see an increase in
egress bandwidth as their nodes start to advertise all of their CIDs into the network. If you have lots of data
entering your node that you don't want to advertise, consider using [Reprovider Strategies](config.md#reproviderstrategy)
to reduce the number of CIDs that you are reproviding. Similarly, if you are running a node that deals mostly with
short-lived temporary data (e.g. you use a separate node for ingesting data from the one storing and serving it) then
you may benefit from using [Strategic Providing](#strategic-providing) to prevent advertising of data that you
ultimately will not have.
2. Currently, the DHT is not usable for queries for the first 5-10 minutes of operation as the routing table is being
prepared. This means operations like searching the DHT for particular peers or content will not work
- You can see if the DHT has been initially populated by running `ipfs stats dht`
3. Currently, the accelerated DHT client is not compatible with LAN-based DHTs and will not perform operations against
them

### How to enable

```
ipfs config --json Experimental.AcceleratedDHTClient true
```

### Road to being a real feature

- [ ] Needs more people to use and report on how well it works
- [ ] Should be usable for queries (even if slower/less efficient) shortly after startup
- [ ] Should be usable with non-WAN DHTs

## Optimistic Provide

### In Version
@@ -640,7 +587,7 @@ than the classic client.
size estimation available the client will transparently fall back to the classic approach.
2. The chosen peers to store the provider records might not be the actual closest ones. Measurements showed that this
is not a problem.
3. The optimistic provide process returns already after 15 out of the 20 provider records were stored with peers. The
reasoning here is that one out of the remaining 5 peers are very likely to time out and delay the whole process. To
limit the number of in-flight async requests there is the second `OptimisticProvideJobsPoolSize` setting. Currently,
this is set to 60. This means that at most 60 parallel background requests are allowed to be in-flight. If this
