run tests in parallel locally with makefile #1808

Merged: 14 commits, Apr 17, 2023
4 changes: 2 additions & 2 deletions .github/scripts/unit_test_runner.sh
@@ -2,11 +2,11 @@

# used to pass --cov=$path and --cov-append to pytest
if [ "$1" != "" ]; then
pytest "$1" tests/unit/
pytest "$1" tests/unit/ -n auto
status_code=$?
python -m coverage report
else
pytest tests/unit/
pytest tests/unit/ -n auto
status_code=$?
fi

2 changes: 0 additions & 2 deletions .github/workflows/test.yml
@@ -37,8 +37,6 @@ jobs:
- name: Install dependencies
run: |
pip install ".[test]"
solc-select install 0.8.0
solc-select use 0.8.0

- name: Setup node
uses: actions/setup-node@v3
81 changes: 54 additions & 27 deletions CONTRIBUTING.md
@@ -1,15 +1,19 @@
# Contributing to Slither

First, thanks for your interest in contributing to Slither! We welcome and appreciate all contributions, including bug reports, feature suggestions, tutorials/blog posts, and code improvements.

If you're unsure where to start, we recommend our [`good first issue`](https://github.com/crytic/slither/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) and [`help wanted`](https://github.com/crytic/slither/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) issue labels.

## Bug reports and feature suggestions

Bug reports and feature suggestions can be submitted to our issue tracker. For bug reports, attaching the contract that caused the bug will help us in debugging and resolving the issue quickly. If you find a security vulnerability, do not open an issue; email opensource@trailofbits.com instead.

## Questions
Questions can be submitted to the issue tracker, but you may get a faster response if you ask in our [chat room](https://empireslacking.herokuapp.com/) (in the #ethereum channel).

Questions can be submitted to the "Discussions" page, and you may also join our [chat room](https://empireslacking.herokuapp.com/) (in the #ethereum channel).

## Code

Slither uses the pull request contribution model. Please make an account on Github, fork this repo, and submit code contributions via pull request. For more documentation, look [here](https://guides.github.com/activities/forking/).

Some pull request guidelines:
@@ -23,6 +27,7 @@ Some pull request guidelines:
## Directory Structure

Below is a rough outline of slither's design:

```text
.
├── analyses # Provides additional info such as data dependency
@@ -39,50 +44,72 @@ Below is a rough outline of slither's design:
A code walkthrough is available [here](https://www.youtube.com/watch?v=EUl3UlYSluU).

## Development Environment

Instructions for installing a development version of Slither can be found in our [wiki](https://github.com/crytic/slither/wiki/Developer-installation).

To run the unit tests, you need to clone this repository and run `pip install ".[dev]"`.
To run the unit tests, you need to clone this repository and run `make test`. Run a specific test with `make test TESTS=$test_name`. The names of tests can be obtained with `pytest tests --collect-only`.

### Linters

Several linters and security checkers are run on the PRs.

To run them locally in the root dir of the repository:

- `pylint slither tests --rcfile pyproject.toml`
- `black . --config pyproject.toml`
- `make lint`

> Note: these commands only check the code; they do not modify it.

To automatically reformat the code:

- `make reformat`

We use pylint `2.13.4` and black `22.3.0`.

### Detectors tests
### Testing

Slither's test suite is divided into three categories: end-to-end (`tests/e2e`), unit (`tests/unit`), and tools (`tests/tools/`).

How do I know what kind of test(s) to write?

- End-to-end: functionality that requires invoking `Slither` and inspecting some output, such as printers and detectors.
- Unit: additions and modifications to objects should be accompanied by a unit test that defines the expected behavior. Aim to write functions in as pure a way as possible so that they are easier to test (see the sketch after this list).
- Tools: tools built on top of Slither (`slither/tools`) that are not a part of its core functionality.
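
As a rough illustration of the "pure function" point, a unit test for a small hypothetical helper (the function and file name below are invented for this sketch and are not part of Slither) can live in `tests/unit/` and run without compiling any contracts:

```python
# tests/unit/test_version_helpers.py (hypothetical file, for illustration only)

def normalize_solc_version(raw: str) -> str:
    """Strip a leading 'v' and surrounding whitespace from a solc version string."""
    return raw.strip().lstrip("v")


def test_normalize_solc_version():
    # A pure function needs no fixtures, temp files, or compiler installs.
    assert normalize_solc_version(" v0.8.19 ") == "0.8.19"
    assert normalize_solc_version("0.8.19") == "0.8.19"
```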

#### Adding detector tests

For each new detector, at least one regression test must be present.

- Create a test in `tests`
- Update `ALL_TEST` in `tests/test_detectors.py`
- Run `python ./tests/test_detectors.py --generate`. This will generate the json artifacts in `tests/expected_json`. Add the generated files to git.
- If updating an existing detector, identify the respective json artifacts and then delete them, or run `python ./tests/test_detectors.py --overwrite` instead.
- Run `pytest ./tests/test_detectors.py` and check that everything worked.

To see the tests coverage, run `pytest tests/test_detectors.py --cov=slither/detectors --cov-branch --cov-report html`.
To run tests for a specific detector, run `pytest tests/test_detectors.py -k ReentrancyReadBeforeWritten` (the detector's class name is the argument).
To run tests for a specific version, run `pytest tests/test_detectors.py -k 0.7.6`.
The IDs of tests can be inspected using `pytest tests/test_detectors.py --collect-only`.

### Parser tests
- Create a test in `tests/ast-parsing`
- Run `python ./tests/test_ast_parsing.py --compile`. This will compile the artifact in `tests/ast-parsing/compile`. Add the compiled artifact to git.
- Run `python ./tests/test_ast_parsing.py --generate`. This will generate the json artifacts in `tests/ast-parsing/expected_json`. Add the generated files to git.
- Run `pytest ./tests/test_ast_parsing.py` and check that everything worked.

To see the tests coverage, run `pytest tests/test_ast_parsing.py --cov=slither/solc_parsing --cov-branch --cov-report html`
To run tests for a specific test case, run `pytest tests/test_ast_parsing.py -k user_defined_value_type` (the filename is the argument).
To run tests for a specific version, run `pytest tests/test_ast_parsing.py -k 0.8.12`.
To run tests for a specific compiler json format, run `pytest tests/test_ast_parsing.py -k legacy` (can be legacy or compact).
The IDs of tests can be inspected using ``pytest tests/test_ast_parsing.py --collect-only`.
1. Create a test in `tests/e2e/detectors`
2. Update `ALL_TEST` in `tests/e2e/detectors/test_detectors.py`
3. Run `python tests/e2e/detectors/test_detectors.py --generate`. This will generate the json artifacts in `tests/expected_json`. Add the generated files to git. If updating an existing detector, identify the respective json artifacts and delete them, or run `python tests/e2e/detectors/test_detectors.py --overwrite` instead.
4. Run `pytest tests/e2e/detectors/test_detectors.py` and check that everything worked.

> ##### Helpful commands for detector tests
>
> - To see the tests coverage, run `pytest tests/e2e/detectors/test_detectors.py --cov=slither/detectors --cov-branch --cov-report html`.
> - To run tests for a specific detector, run `pytest tests/e2e/detectors/test_detectors.py -k ReentrancyReadBeforeWritten` (the detector's class name is the argument).
> - To run tests for a specific version, run `pytest tests/e2e/detectors/test_detectors.py -k 0.7.6`.
> - The IDs of tests can be inspected using `pytest tests/e2e/detectors/test_detectors.py --collect-only`.

#### Adding parsing tests

1. Create a test in `tests/e2e/solc_parsing/`
2. Run `python tests/e2e/solc_parsing/test_ast_parsing.py --compile`. This will compile the artifact in `tests/e2e/solc_parsing/compile`. Add the compiled artifact to git.
3. Run `python tests/e2e/solc_parsing/test_ast_parsing.py --generate`. This will generate the json artifacts in `tests/e2e/solc_parsing/expected_json`. Add the generated files to git.
4. Run `pytest tests/e2e/solc_parsing/test_ast_parsing.py` and check that everything worked.

> ##### Helpful commands for parsing tests
>
> - To see the tests coverage, run `pytest tests/e2e/solc_parsing/test_ast_parsing.py --cov=slither/solc_parsing --cov-branch --cov-report html`.
> - To run tests for a specific test case, run `pytest tests/e2e/solc_parsing/test_ast_parsing.py -k user_defined_value_type` (the filename is the argument).
> - To run tests for a specific version, run `pytest tests/e2e/solc_parsing/test_ast_parsing.py -k 0.8.12`.
> - To run tests for a specific compiler json format, run `pytest tests/e2e/solc_parsing/test_ast_parsing.py -k legacy` (can be legacy or compact).
> - The IDs of tests can be inspected using `pytest tests/e2e/solc_parsing/test_ast_parsing.py --collect-only`.

### Synchronization with crytic-compile

By default, `slither` follows either the latest version of crytic-compile on pip or `crytic-compile@master` (see the dependencies in [`setup.py`](./setup.py)). If crytic-compile development introduces breaking changes, the process to update `slither` is:

- Update `slither`'s `setup.py` to point to the related crytic-compile branch (a sketch of the dependency line follows this list)
- Create a PR in `slither` and ensure it passes the CI
- Once the development branch is merged in `crytic-compile@master`, ensure `slither` follows the `master` branch
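
For the first step, a sketch of how the `install_requires` entry switches from the released pin to a Git URL for the development branch, much as this PR's own `setup.py` change below does with the `windows-rel-path` branch (`my-feature-branch` is a placeholder, not a real branch name):

```python
# setup.py (sketch): swap the released crytic-compile pin for a development branch.
install_requires = [
    # "crytic-compile>=0.3.1,<0.4.0",  # released-version pin, disabled while testing the branch
    "crytic-compile@git+https://github.com/crytic/crytic-compile.git@my-feature-branch#egg=crytic-compile",
    # ... remaining dependencies unchanged
]
```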
88 changes: 88 additions & 0 deletions Makefile
@@ -0,0 +1,88 @@
SHELL := /bin/bash

PY_MODULE := slither
TEST_MODULE := tests

ALL_PY_SRCS := $(shell find $(PY_MODULE) -name '*.py') \
$(shell find $(TEST_MODULE) -name '*.py')

# Optionally overridden by the user, if they're using a virtual environment manager.
VENV ?= env

# On Windows, venv scripts/shims are under `Scripts` instead of `bin`.
VENV_BIN := $(VENV)/bin
ifeq ($(OS),Windows_NT)
VENV_BIN := $(VENV)/Scripts
endif

# Optionally overridden by the user in the `release` target.
BUMP_ARGS :=

# Optionally overridden by the user in the `test` target.
TESTS :=

# Optionally overridden by the user/CI, to limit the installation to a specific
# subset of development dependencies.
SLITHER_EXTRA := dev

# If the user selects a specific test pattern to run, set `pytest` to fail fast
# and only run tests that match the pattern.
# Otherwise, run all tests and enable coverage assertions, since we expect
# complete test coverage.
ifneq ($(TESTS),)
TEST_ARGS := -x -k $(TESTS)
COV_ARGS :=
else
TEST_ARGS := -n auto
COV_ARGS := # --fail-under 100
endif

.PHONY: all
all:
@echo "Run my targets individually!"

.PHONY: dev
dev: $(VENV)/pyvenv.cfg

.PHONY: run
run: $(VENV)/pyvenv.cfg
@. $(VENV_BIN)/activate && slither $(ARGS)

$(VENV)/pyvenv.cfg: pyproject.toml
# Create our Python 3 virtual environment
python3 -m venv $(VENV)
$(VENV_BIN)/python -m pip install --upgrade pip
$(VENV_BIN)/python -m pip install -e .[$(SLITHER_EXTRA)]

.PHONY: lint
lint: $(VENV)/pyvenv.cfg
. $(VENV_BIN)/activate && \
black --check . && \
pylint $(PY_MODULE) $(TEST_MODULE)
# ruff $(ALL_PY_SRCS) && \
# mypy $(PY_MODULE) &&

.PHONY: reformat
reformat:
. $(VENV_BIN)/activate && \
black .

.PHONY: test tests
test tests: $(VENV)/pyvenv.cfg
. $(VENV_BIN)/activate && \
pytest --cov=$(PY_MODULE) $(T) $(TEST_ARGS) && \
python -m coverage report -m $(COV_ARGS)

.PHONY: doc
doc: $(VENV)/pyvenv.cfg
. $(VENV_BIN)/activate && \
PDOC_ALLOW_EXEC=1 pdoc -o html slither '!slither.tools'

.PHONY: package
package: $(VENV)/pyvenv.cfg
. $(VENV_BIN)/activate && \
python3 -m build

.PHONY: edit
edit:
$(EDITOR) $(ALL_PY_SRCS)
6 changes: 4 additions & 2 deletions setup.py
@@ -15,9 +15,10 @@
"packaging",
"prettytable>=0.7.2",
"pycryptodome>=3.4.6",
"crytic-compile>=0.3.1,<0.4.0",
# "crytic-compile@git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile",
# "crytic-compile>=0.3.1,<0.4.0",
"crytic-compile@git+https://github.com/crytic/crytic-compile.git@windows-rel-path#egg=crytic-compile",
"web3>=6.0.0",
"solc-select@git+https://github.com/crytic/solc-select.git@query-artifact-path#egg=solc-select",
],
extras_require={
"lint": [
@@ -31,6 +32,7 @@
"deepdiff",
"numpy",
"coverage[toml]",
"filelock",
],
"doc": [
"pdoc",
79 changes: 79 additions & 0 deletions tests/conftest.py
@@ -0,0 +1,79 @@
# pylint: disable=redefined-outer-name
import os
from pathlib import Path
import tempfile
import shutil
from contextlib import contextmanager
import pytest
from filelock import FileLock
from solc_select import solc_select
from slither import Slither


def pytest_configure(config):
"""Create a temporary directory for the tests to use."""
if is_master():
config.stash["shared_directory"] = tempfile.mkdtemp()


def pytest_unconfigure(config):
"""Remove the temporary directory after the tests are done."""
if is_master():
shutil.rmtree(config.stash["shared_directory"])


def pytest_configure_node(node):
"""Configure each worker node with the shared directory."""
node.workerinput["shared_directory"] = node.config.stash["shared_directory"]


def is_master():
"""Returns True if the current process is the master process (which does not have a worker id)."""
return os.environ.get("PYTEST_XDIST_WORKER") is None


@pytest.fixture
def shared_directory(request):
"""Returns the shared directory for the current process."""
if is_master():
return request.config.stash["shared_directory"]
return request.config.workerinput["shared_directory"]


@pytest.fixture
def solc_binary_path(shared_directory):
"""
Returns the path to the solc binary for the given version.
If the binary is not installed, it will be installed.
"""

def inner(version):
lock = FileLock(f"{shared_directory}/{version}.lock", timeout=60)
with lock:
if not solc_select.artifact_path(version).exists():
print("Installing solc version", version)
solc_select.install_artifacts([version])
return solc_select.artifact_path(version).as_posix()

return inner


@pytest.fixture
def slither_from_source(solc_binary_path):
@contextmanager
def inner(source_code: str, solc_version: str = "0.8.19"):
"""Yields a Slither instance using source_code string and solc_version.
Creates a temporary file and compiles with solc_version.
"""

fname = ""
try:
with tempfile.NamedTemporaryFile(mode="w", suffix=".sol", delete=False) as f:
fname = f.name
f.write(source_code)
solc_path = solc_binary_path(solc_version)
yield Slither(fname, solc=solc_path)
finally:
Path(fname).unlink()

return inner
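
A minimal sketch of how a test can consume these fixtures (the test name and contract source are invented for illustration; `slither.contracts` and the contract's `name` attribute are used the same way as elsewhere in this PR):

```python
# Hypothetical test using the fixtures above. pytest-xdist workers share a single
# solc install per version thanks to the FileLock inside solc_binary_path.
def test_contract_is_parsed(slither_from_source):
    source = "contract C { function f() public pure returns (uint256) { return 1; } }"
    # slither_from_source writes the source to a temp .sol file, compiles it with
    # the default solc 0.8.19, and yields a Slither instance.
    with slither_from_source(source) as slither:
        assert any(contract.name == "C" for contract in slither.contracts)
```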
13 changes: 6 additions & 7 deletions tests/e2e/compilation/test_resolution.py
@@ -3,7 +3,6 @@

from crytic_compile import CryticCompile
from crytic_compile.platform.solc_standard_json import SolcStandardJson
from solc_select import solc_select

from slither import Slither

@@ -24,8 +23,8 @@ def test_node_modules() -> None:
_run_all_detectors(slither)


def test_contract_name_collisions() -> None:
solc_select.switch_global_version("0.8.0", always_install=True)
def test_contract_name_collision(solc_binary_path) -> None:
solc_path = solc_binary_path("0.8.0")
standard_json = SolcStandardJson()
standard_json.add_source_file(
Path(TEST_DATA_DIR, "test_contract_name_collisions", "a.sol").as_posix()
@@ -34,13 +33,13 @@ def test_contract_name_collisions() -> None:
Path(TEST_DATA_DIR, "test_contract_name_collisions", "b.sol").as_posix()
)

compilation = CryticCompile(standard_json)
compilation = CryticCompile(standard_json, solc=solc_path)
slither = Slither(compilation)

_run_all_detectors(slither)


def test_cycle() -> None:
solc_select.switch_global_version("0.8.0", always_install=True)
slither = Slither(Path(TEST_DATA_DIR, "test_cyclic_import", "a.sol").as_posix())
def test_cycle(solc_binary_path) -> None:
solc_path = solc_binary_path("0.8.0")
slither = Slither(Path(TEST_DATA_DIR, "test_cyclic_import", "a.sol").as_posix(), solc=solc_path)
_run_all_detectors(slither)
6 changes: 4 additions & 2 deletions tests/tools/read-storage/test_read_storage.py
@@ -90,7 +90,9 @@ def deploy_contract(w3, ganache, contract_bin, contract_abi) -> Contract:

# pylint: disable=too-many-locals
@pytest.mark.usefixtures("web3", "ganache")
def test_read_storage(web3, ganache) -> None:
def test_read_storage(web3, ganache, solc_binary_path) -> None:
solc_path = solc_binary_path(version="0.8.10")

assert web3.is_connected()
bin_path = Path(TEST_DATA_DIR, "StorageLayout.bin").as_posix()
abi_path = Path(TEST_DATA_DIR, "StorageLayout.abi").as_posix()
@@ -100,7 +102,7 @@ def test_read_storage(web3, ganache) -> None:
contract.functions.store().transact({"from": ganache.eth_address})
address = contract.address

sl = Slither(Path(TEST_DATA_DIR, "storage_layout-0.8.10.sol").as_posix())
sl = Slither(Path(TEST_DATA_DIR, "storage_layout-0.8.10.sol").as_posix(), solc=solc_path)
contracts = sl.contracts

srs = SlitherReadStorage(contracts, 100)