This repository has been archived by the owner on Apr 29, 2020. It is now read-only.

Create a Roadmap with Steps for Building the Interplanetary Test Lab #1

Open
SidHarder opened this issue Feb 22, 2017 · 2 comments

@SidHarder
Collaborator

No description provided.

@haadcode

Continuing the discussion from ipfs/team-mgmt#354, I wanted to provide an additional perspective on what the InterPlanetary Test Lab should do.

I like what @whyrusleeping has described in ipfs/notes#191. It seems very (go/js-)ipfs-specific in that it only runs the ipfs binary, and I'd like to see if we can extend it so that we could run "any" workload in an IPFS test network/cluster/test lab.

I'm obviously looking at this with Orbit in mind: I would love to be able to define a set of tests for Orbit that can be run in the Test Lab. These would include basic integration testing (e.g. making sure 100 clients all connect correctly and are able to send messages to each other), load testing (raw throughput), stability tests (long-running processes), network error testing (what happens when there are network breakdowns or disturbances), etc. A Minimum Viable Test (MVT) for Orbit would be: "start 100 clients, have each join #test-lab, have each client send 1000 messages, one message every 500 ms, and expect each client to receive all messages".
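To make the MVT concrete, here is a minimal sketch of what such a test could look like. It is written against a hypothetical OrbitClient interface (the join/send/onMessage names are assumptions for illustration, not the actual Orbit API):

```typescript
// Sketch of the Orbit MVT. OrbitClient is a hypothetical interface,
// not the real Orbit API.
interface OrbitClient {
  id: string;
  join(channel: string): Promise<void>;
  send(channel: string, text: string): Promise<void>;
  onMessage(handler: (from: string, text: string) => void): void;
}

const CLIENTS = 100;
const MESSAGES = 1000;
const INTERVAL_MS = 500;
const CHANNEL = "#test-lab";

async function runMVT(createClient: () => Promise<OrbitClient>): Promise<boolean> {
  // Start 100 clients.
  const clients = await Promise.all(
    Array.from({ length: CLIENTS }, () => createClient())
  );

  // Count how many messages each client receives, then join the channel.
  const received = new Map<string, number>();
  for (const c of clients) {
    received.set(c.id, 0);
    c.onMessage(() => received.set(c.id, (received.get(c.id) ?? 0) + 1));
    await c.join(CHANNEL);
  }

  // Each client sends 1000 messages, one every 500 ms.
  await Promise.all(
    clients.map(async (c) => {
      for (let i = 0; i < MESSAGES; i++) {
        await c.send(CHANNEL, `msg-${c.id}-${i}`);
        await new Promise((r) => setTimeout(r, INTERVAL_MS));
      }
    })
  );

  // Every client is expected to receive all messages
  // (assumed here to include its own).
  const expected = CLIENTS * MESSAGES;
  return clients.every((c) => received.get(c.id) === expected);
}
```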

I would like to be able to define the test code, but also define which version of IPFS to run in each client, i.e. I'd like to be able to say "these 10 runners should use go-ipfs@0.4.5, these 10 should use go-ipfs@0.4.6 but with a different config, and these 10 should use js-ipfs".
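As a sketch of what that runner specification could look like (the field names here are purely illustrative, not an existing Test Lab format):

```typescript
// Hypothetical runner-group spec for a test package.
interface RunnerGroup {
  count: number;                    // how many runners in this group
  ipfs: string;                     // which implementation and version to run
  config?: Record<string, unknown>; // config overrides for this group
}

const runners: RunnerGroup[] = [
  { count: 10, ipfs: "go-ipfs@0.4.5" },
  { count: 10, ipfs: "go-ipfs@0.4.6", config: { /* different config here */ } },
  { count: 10, ipfs: "js-ipfs" },
];
```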

From a UX perspective, it'd be ideal if my "test package" could be easily built/set up locally, e.g. as a Docker container, a .zip file or something similar, and I could then run "iptl haad@testlab.ipfs.io /path/to/my-tests.package". Running that command would spin up a test network as specified and start running the tests. I would then like to be able to open a UI, see each "runner" in a list and look into what's happening, e.g. the stdout/stderr logs for each client, the configuration that each runner uses, etc. The Test Lab wouldn't need to automatically tell me the metrics/stats/etc. (the output of the tests); I can write that logic in my test packages (code) and decide what kind of output I want to have and how to use/visualize it. I could grab that from stdout (logs), or perhaps my test program can write a file to disk that I can grab later/upload somewhere.

My main reference point here is Spark running on a Mesos cluster that we built at my previous job to run big data computations. A screenshot of Mesos' dashboard. This is not to say we should use Mesos, but rather an example of what a nice UX could look like, and it might be worth the time to look at how Spark does its distributed "jobs" architecture, as we might find some useful ideas there.

The important part in all this, imo, is that we build something that can be used by developers who build systems and apps using/on IPFS, and that we don't limit the test lab to testing only the ipfs binaries.

Hope this helps to build the use cases and specs for what we're aiming for.

I'll be working on defining and writing these types of tests for Orbit in this sprint, so hopefully at the end of the sprint we can run the MVT (as described above) in the Test Lab. Perhaps it's stretching the scope given we only have 2 weeks, but that would be the ideal outcome from my perspective.

@hsanjuan
Member

+1 to @haadcode. Think of ipfs-cluster too. I have written down a few use-cases: ipfs-cluster/ipfs-cluster#12 (comment)

It boils down to (a minimal sketch follows the list):

  • Allow creating a large network of ipfs nodes, plus whatever application we need on top.
  • Allow looping over, or randomly selecting, nodes in that network and running commands on them (i.e. locally).
  • Allow controlling the networking between nodes in that network (e.g. creating artificial bottlenecks or cutting network paths).
  • Measure how long tests take to run and plot the results.
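Here is a minimal sketch of what a client-side handle over such a network could look like, assuming the lab only hands back a hostname list with key-based ssh access and that the nodes run Linux with tc/netem available; all names and commands here are illustrative, not an existing tool:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const sh = promisify(execFile);

// Hypothetical handle over a test cluster, built on plain ssh.
class TestCluster {
  constructor(private hosts: string[]) {}

  // Run a command on one node and return its stdout.
  async run(host: string, cmd: string): Promise<string> {
    const { stdout } = await sh("ssh", [host, cmd]);
    return stdout;
  }

  // Pick a random node, e.g. to issue an ipfs or ipfs-cluster command from it.
  randomHost(): string {
    return this.hosts[Math.floor(Math.random() * this.hosts.length)];
  }

  // Degrade one node's link with Linux tc/netem (artificial bottleneck);
  // assumes sudo access and an eth0 interface on the node.
  async addLatency(host: string, ms: number): Promise<void> {
    await this.run(host, `sudo tc qdisc add dev eth0 root netem delay ${ms}ms`);
  }

  // Time an operation so the results can be collected and plotted later.
  async timed<T>(fn: () => Promise<T>): Promise<{ result: T; seconds: number }> {
    const start = Date.now();
    const result = await fn();
    return { result, seconds: (Date.now() - start) / 1000 };
  }
}
```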

This might work with what was built around Kubernetes (or by improving on that). Ideally, I should not care whether it's Kubernetes below, an AWS VPC, or something in DigitalOcean, as long as something comes up and I get a list of the hostnames in the test cluster and a standardized way to access/control them (which could be ssh).
