This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

ipfs-http-client should warn about limited number of pubsub topic subscriptions #3983

Closed
chrispanag opened this issue Dec 16, 2021 · 6 comments
Assignees
Labels
exp/intermediate Prior experience is likely helpful help wanted Seeking public contribution on this issue kind/enhancement A net-new feature or improvement to an existing feature kind/maybe-in-helia P3 Low: Not priority right now pkg:http-client Issues related only to ipfs-http-client topic/devexp Developer Experience

Comments

@chrispanag

chrispanag commented Dec 16, 2021

TLDR: this is an artificial limit imposed by the number of open socket connections, or by a browser limit; see the explainer and the suggestion to warn users in #3983 (comment)


  • Version:
    ipfs-http-client v0.55.0 (this was also happening with v0.54.2)
    go-ipfs v0.11.0 (this was also happening with v0.9.1)

  • Platform:
    Linux localhost 5.11.0-37-generic #41-Ubuntu SMP Mon Sep 20 16:39:20 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    Darwin Christoss-MBP 21.1.0 Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64 x86_64

    (seems platform irrelevant)

    Node Version: v16.13.1 (LTS)

  • Subsystem:
    ipfs-http-client

Severity: Medium - A non-essential functionality does not work, performance issues, etc.

Description:

When subscribing to more than 6 pubsub topics through an ipfs-http-client instance using ipfs.pubsub.subscribe(<topic>);, the number of topics actually open (as reported by ipfs.pubsub.ls() or by the CLI via ipfs pubsub ls) caps at 6.

Expected Behaviour
When running ipfs pubsub ls, the output should list as many pubsub topics as there were invocations of ipfs.pubsub.subscribe(<unique_topic>);.

Actual Behaviour

The number of open pubsub topics outputted by ipfs pubsub ls caps at 6.

Steps to reproduce the error:

  1. Subscribe to more than 6 pubsub topics through ipfs-http-client
  2. Run ipfs pubsub ls to compare the number of topics open.
const IPFS = require("ipfs-http-client");

const GO_IPFS_HOST = "localhost";
const GO_IPFS_PORT = 5001;

// Constant to change
const NUMBER_OF_TOPICS = 10; // If this is more than 6, then there will be **fewer pubsub topics open** than topics subscribed.

(async () => {
    const ipfs = IPFS.create({ host: GO_IPFS_HOST, port: GO_IPFS_PORT })
    console.log("IPFS is ready");
    const topics = [];
    for (let i = 0; i < NUMBER_OF_TOPICS; i++) {
        // subscribe() expects a message handler; a no-op is enough to reproduce the issue
        await ipfs.pubsub.subscribe(`test-topic-${i}`, () => {});
        topics.push(`test-topic-${i}`);
    }

    console.log("All topics are ready");
    console.log("\n");

    topics.forEach((t) => console.log(t));

    const pubsubTopics = await ipfs.pubsub.ls();

    console.log("\n");
    console.log("Open pubsub topics:")
    console.log(pubsubTopics)

    if (pubsubTopics.length < topics.length) {
        console.log("There are fewer pubsub topics than what we opened! :(")
    } else if (pubsubTopics.length === topics.length) {
        console.log("There are the same number of pubsub topics as the ones we opened! :)")
    } else {
        // THIS WON'T EVER HAPPEN IF THE IPFS NODE IS FRESH
        console.log("There are more pubsub topics than the ones we opened! :( You might need to restart your IPFS node.")
    }
})();

The above code reproduces the issue. You can also clone the following repository and run node test-pubsub-only.js:

https://github.com/chrispanag/orbit-db-pubsub-issue-replication

Additional Information - Observations

The issue seems to be specific to a single ipfs-http-client instance not being able to open more than 6 pubsub subscriptions: when a separate instance is run concurrently, it too can open at most 6 additional pubsub subscriptions.

Additionally, when more than 6 subscription topics are opened through go-ipfs CLI, the behaviour is correct.

Finally, the same thing happens when a 1s delay is added between opening each pubsub topic, so it doesn't seem to be a performance or rate-limiting issue.

@chrispanag chrispanag added the need/triage Needs initial labeling and prioritization label Dec 16, 2021
@achingbrain
Member

This sounds like a duplicate of #3741 - have you read that issue?

@lidel
Member

lidel commented Dec 17, 2021

Folks will get bitten by this in the browser context over and over again.
We should find a way to proactively avoid silent failures / bugs like this.

Will get even worse when enabling ipns-over-pubsub where each name requires listening on a separate topic (we could change the spec to work better in browsers, but it is like that atm).

HTTP/1.1 vs HTTP/2

iiuc this limit of 6 connections is specific to HTTP/1.1.
If the RPC API is exposed over HTTP/2 then everything is multiplexed over a single TCP connection, and the Chromium limit for the number of multiplexed streams is 100 (which is way more manageable).

@chrispanag if you need a quick workaround, put go-ipfs behind a reverse proxy (like nginx) with HTTP/2 set up, and see if that helps.
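As a rough sketch of that workaround (my illustration, not an official config: the hostname, certificate paths, and timeouts are placeholders to adapt), nginx would terminate HTTP/2 towards the browser while talking HTTP/1.1 to go-ipfs:

```nginx
# Serve the RPC API over HTTP/2 (browsers require TLS for h2), so all
# requests are multiplexed over one TCP connection instead of 6.
server {
    listen 443 ssl http2;
    server_name ipfs-api.example.com;              # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/ipfs-api.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/ipfs-api.key;

    location /api/v0/ {
        proxy_pass http://127.0.0.1:5001;          # go-ipfs RPC (HTTP/1.1 upstream)
        proxy_http_version 1.1;
        proxy_buffering off;                       # pubsub responses are long-lived streams
        proxy_read_timeout 1h;                     # keep subscriptions open
    }
}
```

The upstream side can stay HTTP/1.1, since the 6-connection ceiling is a browser-to-origin limit, not a proxy-to-daemon one.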

Ceiling detection?

@achingbrain perhaps js-ipfs running in the browser could run a self-test to progressively determine what the ceiling of the current runtime is, and then throw an error before the ceiling (if present) is reached, instead of failing silently?

To make the error actionable: if the ceiling is <10, we should print a console message suggesting exposing /api/v0 over HTTP/2 to increase the topic limit to ~100.
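A hypothetical sketch of that ceiling detection (this is not an existing ipfs-http-client API; the function name and error text are mine): after each subscribe, compare the subscriptions the client believes it holds against what the node actually reports, and fail loudly instead of silently.

```javascript
// Hedged sketch: detect a silent subscription ceiling by cross-checking
// intended subscriptions against the node's view via pubsub.ls().
async function subscribeWithCeilingCheck (ipfs, topic, intended) {
  await ipfs.pubsub.subscribe(topic, () => {})
  intended.add(topic)

  const open = await ipfs.pubsub.ls()
  if (open.length < intended.size) {
    throw new Error(
      `pubsub subscription ceiling hit at ${open.length} topics; ` +
      'expose /api/v0 over HTTP/2 or pass a larger http.Agent'
    )
  }
}
```

A real implementation would presumably live inside the client rather than wrap it, but the cross-check logic would be the same.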

@lidel lidel changed the title [ipfs-http-client] underlying go-ipfs node subscribes to less pubsub topics than commanded [ipfs-http-client] underlying go-ipfs node subscribes to less pubsub topics than 6 Dec 17, 2021
@chrispanag
Author

This sounds like a duplicate of #3741 - have you read that issue?

No I didn't! Thanks for pointing this out, setting a custom http.Agent solved the issue!


Apart from that, I can only agree with @lidel on "Folks will get bitten by this in the browser context over and over again". Maybe there should be an explicit mention of this in the docs, and also some way to keep it from failing silently.

@lidel thanks for the suggestion, fortunately I'm doing the job in Node.js, so the solution was easy (just increasing the number of open socket connections).

@lidel lidel added exp/intermediate Prior experience is likely helpful help wanted Seeking public contribution on this issue kind/enhancement A net-new feature or improvement to an existing feature pkg:http-client Issues related only to ipfs-http-client topic/devexp Developer Experience P3 Low: Not priority right now and removed need/triage Needs initial labeling and prioritization labels Jan 14, 2022
@lidel lidel changed the title [ipfs-http-client] underlying go-ipfs node subscribes to less pubsub topics than 6 ipfs-http-client: limited number of pubsub topic subscriptions Jan 14, 2022
@lidel lidel changed the title ipfs-http-client: limited number of pubsub topic subscriptions ipfs-http-client should warn about limited number of pubsub topic subscriptions Jan 14, 2022
@RangerMauve
Contributor

I've run up against similar issues when working on exposing hypercore protocol extension messages for pub/sub over HTTP gateways.

The approach I used was to have all of the incoming messages be sent over a single server-sent events connection with the event type set to the gossip topic, and having clients POST to a URL with the topic name (and optionally a peer ID) to broadcast out.

The main benefit over something like a WebSocket protocol is that it's very webby and doesn't require any APIs beyond what's already in the browser.

I've been thinking of integrating something similar into js-ipfs-fetch and the Agregore Browser

@SgtPooki SgtPooki self-assigned this May 17, 2023
@SgtPooki SgtPooki moved this to 🥞 Todo in js-ipfs deprecation May 17, 2023
@SgtPooki SgtPooki moved this from 🥞 Todo to 🛑 Blocked in js-ipfs deprecation May 17, 2023
@SgtPooki
Member

js-ipfs is being deprecated in favor of Helia. You can follow #4336 and read the migration guide.

Please feel free to reopen with any comments by 2023-06-02. We will do a final pass on reopened issues afterward (see #4336).

@achingbrain assigning to you because i'm not sure if we're handling the root issue of this problem with pubsub in helia/libp2p.

@SgtPooki SgtPooki assigned achingbrain and unassigned SgtPooki May 26, 2023
@achingbrain
Member

Please port your app to use https://github.com/ipfs/js-kubo-rpc-client in place of ipfs-http-client - it's a drop-in replacement and is where Kubo fixes and compatibility updates will land in future.

@github-project-automation github-project-automation bot moved this from 🛑 Blocked to ✅ Done in js-ipfs deprecation May 30, 2023