
How to run a test


Set up your AWS credentials

  • An AWS account is required
  • Put your credentials in ~/.boto; it should look like this:
[Credentials]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>

See also the boto config documentation. These credentials give the tool access to your AWS account.

  • Test that you have access with a boto command:
$  list_instances --region us-west-1

You'll get an access forbidden message if your credentials are wrong.
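
If you prefer to check from Python instead, here is a minimal sketch using boto 2.x (the same library behind list_instances); it reads the credentials from ~/.boto automatically, and the region simply mirrors the command above.

import boto.ec2

# Connects using the credentials in ~/.boto
conn = boto.ec2.connect_to_region('us-west-1')

# Prints every instance in the region; an auth error here means the
# credentials are wrong or missing.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)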

Note: If multiple users are working on system-testing, they will share the same setup, so coordinate! A fix is planned for 0.2.1.

Security Groups

docker-machine creates a security group of the same name for itself, which is also used for the client instances. It only opens the ports it needs, 22 and 2376 (Docker Remote API), so if you are setting this up in your own account or in a new region, make sure you also open at least 30303 (TCP and UDP), and optionally 8545 if you plan to run transaction tests.

For ElasticSearch, create a security group called elasticsearch and open these ports: 22, 443, 2376, 5000 and 9200. TODO: Fix having to open 9200
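
If you would rather script the group creation than click through the AWS console, the boto 2.x sketch below creates the elasticsearch group and opens the ports listed above; the region and the wide-open 0.0.0.0/0 CIDR are assumptions, so tighten them to your setup.

import boto.ec2

conn = boto.ec2.connect_to_region('us-west-1')
sg = conn.create_security_group('elasticsearch', 'ElasticSearch for system-testing')

# Open each required port to the world (narrow the CIDR if you can)
for port in (22, 443, 2376, 5000, 9200):
    sg.authorize(ip_protocol='tcp', from_port=port, to_port=port, cidr_ip='0.0.0.0/0')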

Install the testing tool

Follow instructions from the readme. See also the complete usage options.

Run a scenario

testing --scenario p2p_connect

It will report whether the tests passed and show details about each client.

To visualize the network topology, run:

python testing/analyze_network.py

(TODO: needs fixing)
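
While analyze_network.py is being fixed, a rough stand-in for the same idea is sketched below; it assumes you have dumped the peer connections to a hypothetical peers.json file mapping each node to its peers, and simply draws the graph with networkx.

import json
import networkx as nx
import matplotlib.pyplot as plt

# peers.json is a hypothetical dump, e.g. {"node0": ["node1", "node2"], ...}
with open('peers.json') as f:
    peers = json.load(f)

graph = nx.Graph()
for node, connections in peers.items():
    for peer in connections:
        graph.add_edge(node, peer)

nx.draw(graph, with_labels=True, node_size=300)
plt.show()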

Optionally, evaluate results on the ElasticSearch instance

./kibana.py

(This does not work when running in a container; instead, simply open the ElasticSearch instance's IP at https://<INSTANCE_IP> in your favorite browser.)
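
If a browser is not handy either, results can be pulled straight from the ElasticSearch HTTP API. This sketch assumes ElasticSearch answers plain HTTP on port 9200 and makes no assumption about index names, so it searches across all of them.

import requests

ES = 'http://<INSTANCE_IP>:9200'  # substitute the ElasticSearch instance's IP

# Search across all indices for a handful of documents
resp = requests.get(ES + '/_search', params={'q': '*', 'size': 5})
resp.raise_for_status()
for hit in resp.json()['hits']['hits']:
    print(hit['_source'])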

Cleaning up

Stopping clients

testing stop [nodename]

Removing client instances

testing rm [nodename]

Removing all instances

testing cleanup