Add memray to tox.ini. #219

Merged: 5 commits, merged Sep 12, 2023
36 changes: 20 additions & 16 deletions .github/workflows/tests.yml
@@ -4,29 +4,33 @@ on: [push, pull_request]

jobs:
build:

runs-on: ${{ matrix.os }}
strategy:
max-parallel: 4
matrix:
os: [ubuntu-latest, macOS-latest, windows-latest]
python-version: [3.8, 3.9, "3.10"]
python-version: [3.8, 3.9, "3.10", "3.11"]

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}

- name: update pip
run: |
python -m pip install --upgrade pip

- name: update pip
run: |
python -m pip install --upgrade pip
- name: install tox
run: |
python -m pip install tox tox-gh-actions

- name: install tox
run: |
python -m pip install tox tox-gh-actions
- name: run tox
run: |
python -m tox

- name: run tox
run: |
python -m tox
- name: check benchmarks run (only on macOS)
if: matrix.os == 'macOS-latest'
run: |
python -m tox -e benchmark
24 changes: 24 additions & 0 deletions benchmarks/test_support_enumeration.py
@@ -0,0 +1,24 @@
"""
Benchmarks for support enumeration
"""

import numpy as np

from nashpy.algorithms.support_enumeration import support_enumeration


def test_support_enumeration_on_two_by_two_game(benchmark):
A = np.array(((1, -1), (-1, 1)))
eqs = support_enumeration(A, -A)
benchmark(tuple, eqs)

def test_support_enumeration_on_three_by_three_game(benchmark):
A = np.array(((0, 1, -1), (-1, 0, 1), (1, -1, 0)))
eqs = support_enumeration(A, -A)
benchmark(tuple, eqs)

def test_support_enumeration_on_four_by_four_game(benchmark):
A = np.array(((0, 1, -1, 1/4), (-1, 0, 1, 1/4), (1, -1, 0, 1/4), (1/4, 1/4, 1/4, 1/4)))
eqs = support_enumeration(A, -A)
benchmark(tuple, eqs)
1 change: 1 addition & 0 deletions docs/contributing/discussion/index.rst
@@ -20,6 +20,7 @@ Discussion
mypy/index.rst
sphinx/index.rst
doctests/index.rst
pytest-benchmark/index.rst
alex/index.rst
github_actions/index.rst
readthedocs/index.rst
40 changes: 40 additions & 0 deletions docs/contributing/discussion/pytest-benchmark/index.rst
@@ -0,0 +1,40 @@
Writing benchmarks with pytest-benchmark and memray
===================================================

`pytest-benchmark <https://github.com/ionelmc/pytest-benchmark>`_ is a tool that allows you to write benchmarks to be run with pytest.

The :code:`pytest-benchmark` plugin provides a :code:`benchmark` fixture that can be passed to
a test. For example, consider this test:

.. literalinclude:: /../benchmarks/test_support_enumeration.py
:pyobject: test_support_enumeration_on_two_by_two_game

The `benchmark` fixture is a function with signature::

benchmark(<function>, *args, **kwargs)
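
A minimal sketch of a test using the fixture (the function timed here is
illustrative and not part of the Nashpy suite)::

    def test_benchmark_building_a_tuple(benchmark):
        # benchmark calls tuple(range(1000)) repeatedly and records timing statistics
        result = benchmark(tuple, range(1000))
        # the fixture returns the value of the timed call
        assert result[0] == 0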

Running benchmarks
------------------

No special commands are needed to run the benchmarks; they are collected by
pytest like any other tests. For example, if the benchmarks are located in
:code:`benchmarks/` then the following command will run them::

python -m pytest benchmarks
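
A single benchmark can be selected with pytest's usual test selection options;
for example (the :code:`-k` expression here is illustrative)::

    python -m pytest benchmarks -k "two_by_two"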

Comparing benchmarks
--------------------

To save the results of a benchmark run, use::

python -m pytest benchmarks --benchmark-autosave

To compare the results with a saved set of benchmarks::

python -m pytest benchmarks --benchmark-compare
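
Assuming an earlier run was saved with the identifier :code:`0001` (identifiers
are assigned by :code:`--benchmark-autosave`), a comparison can target that
specific run::

    python -m pytest benchmarks --benchmark-compare=0001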

Profiling memory with memray
----------------------------

`pytest-memray <https://pytest-memray.readthedocs.io/en/latest/>`_ is a pytest
plugin that profiles memory use while running a given set of tests.
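
The plugin is enabled by passing :code:`--memray` to pytest; the
:code:`benchmark` tox environment in this pull request does exactly this::

    python -m pytest --memray benchmarks

As a sketch (the threshold and test below are illustrative and not part of this
pull request), the plugin's :code:`limit_memory` marker fails a test whose
allocations exceed a given amount::

    import numpy as np
    import pytest

    from nashpy.algorithms.support_enumeration import support_enumeration


    @pytest.mark.limit_memory("100 MB")
    def test_support_enumeration_memory_use():
        # Fails if enumerating the equilibria allocates more than 100 MB.
        A = np.array(((0, 1, -1), (-1, 0, 1), (1, -1, 0)))
        tuple(support_enumeration(A, -A))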
10 changes: 9 additions & 1 deletion docs/contributing/discussion/tox/index.rst
@@ -28,4 +28,12 @@ This is done thanks to configurations written in :code:`tox.ini`::

[tox]
isolated_build = True
envlist = py38, py39
envlist = py38, py39, py310, py311

Running specific benchmarks
---------------------------

The benchmark environment is configured in the :code:`[testenv:benchmark]`
section of :code:`tox.ini`. This defines a specific set of dependencies and
commands which can be run with::

$ python -m tox -e benchmark
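
For reference, the :code:`[testenv:benchmark]` section added to
:code:`tox.ini` by this pull request is::

    [testenv:benchmark]
    deps =
        pytest
        pytest-memray
        pytest-benchmark
    commands =
        python -m pytest --memray benchmarks --benchmark-autosave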
12 changes: 12 additions & 0 deletions docs/contributing/how-to/how-to-run-benchmarks/index.rst
@@ -0,0 +1,12 @@
.. _how-to-run-benchmarks:

How to run benchmarks
=====================

To install :code:`tox`::

$ python -m pip install tox

To run all benchmarks, run::

$ python -m tox -e benchmark
4 changes: 4 additions & 0 deletions docs/contributing/how-to/how-to-run-tests/index.rst
@@ -16,3 +16,7 @@ If you want to run the tests across a single version of Python::
$ python -m tox -e <version>

where :code:`version` is either :code:`py38` or :code:`py39`.

To run all tests in parallel::

$ python -m tox -p
1 change: 1 addition & 0 deletions docs/contributing/how-to/index.rst
@@ -21,3 +21,4 @@ How to:
how-to-push-changes/index.rst
how-to-open-a-pull-request/index.rst
how-to-format-markdown-files/index.rst
how-to-run-benchmarks/index.rst
18 changes: 16 additions & 2 deletions tox.ini
@@ -1,12 +1,13 @@
[tox]
isolated_build = True
envlist = py38, py39, py310
envlist = py38, py39, py310, py311

[gh-actions]
python =
3.8: py38
3.9: py39
3.10: py310
3.11: py311

[flake8]
per-file-ignores =
@@ -39,6 +40,19 @@ commands =
python -m black --check tests/
python -m mypy --ignore-missing-imports src/nashpy
python -m interrogate -v --ignore-init-method --ignore-init-module --fail-under 100 src/nashpy --exclude src/nashpy/version.py --ignore-magic
python -m pytest --cov=nashpy --cov-fail-under=100 --doctest-glob="*.md" --doctest-glob="*.rst"
python -m pytest tests --cov=nashpy --cov-fail-under=100 --doctest-glob="*.md" --doctest-glob="*.rst"
python -m flake8 src/
python -m flake8 tests/

[testenv:benchmark]
deps =
pytest
pytest-memray
pytest-benchmark
commands =
python -m pytest --memray benchmarks --benchmark-autosave

[testenv:docs]
extras = doc
commands =
sphinx-build docs docs/_build/html -W -b html