Adds a runner script to execute all tests in the source directory (#327)

## Description

This PR adds a script to run all the unit tests (any Python script named
`test_*.py`) and report on the success rate and timing. It also enables
skipping known-breaking tests via `tests_to_skip.py`. The tests can be
run via `./orbit.sh -t`, and the test output is printed both to a
timestamped log file under `tools/logs/` and to the console.
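
For reference, the discovery rule described above amounts to a recursive glob over the source tree; a minimal sketch (the actual implementation lives in `tools/run_all_tests.py` below):

```python
# Sketch: discover every unit test matching the test_*.py naming convention.
from pathlib import Path

test_paths = sorted(str(p) for p in Path("source").resolve().rglob("*test_*.py"))
print(f"Discovered {len(test_paths)} tests")
```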

This can be used for CI/CD, but it can also serve in the meantime as a
way to run every test without invoking each one manually, until we have
a better solution.

A few tests are currently broken; for now, these are added to
`tests_to_skip.py`, and I will open follow-up issues and PRs to fix them
as soon as possible. I also recommend that we make running the tests a
requirement before merging PRs until the CI/CD project is complete. One
way to facilitate this is to have PR authors copy the summary from the
end of the test run into the PR description.

Fixes #309 

## Type of change

- New feature (non-breaking change which adds functionality)

## Screenshots

An example test result summary with the timeout set to 60 seconds,
running on my workstation:

```
===================
Test Result Summary
===================
Total: 39
Passing: 29
Failing: 0
Skipped: 8
Timing Out: 2
Passing Percentage: 94.87%
Total Time Elapsed: 0.0h31.0m52.38s
```
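
For clarity, skipped tests count toward the passing percentage (only failures and timeouts count against it), so the figures above work out as follows:

```python
# The passing percentage treats skipped tests as non-failures:
total, passing, skipped = 39, 29, 8
percentage = (passing + skipped) / total * 100
print(f"{percentage:.2f}%")  # 94.87%
```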

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./orbit.sh --format`
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [ ] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there
jsmith-bdai authored Jan 9, 2024
1 parent 963f304 commit 51ccd99
Showing 5 changed files with 280 additions and 0 deletions.
1 change: 1 addition & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -44,6 +44,7 @@ To upload images to a PR -- simply drag and drop an image while in edit mode and
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have run all the tests with `./orbit.sh --test` and they pass
- [ ] I have updated the changelog and the corresponding version in the extension's `config/extension.toml` file
- [ ] I have added my name to the `CONTRIBUTORS.md` or my name already exists there

1 change: 1 addition & 0 deletions .gitignore
@@ -2,6 +2,7 @@
**/cmake-build*/
**/build*/
**/*.so
**/*.log*

# Omniverse
**/*.dmp
9 changes: 9 additions & 0 deletions orbit.sh
@@ -196,6 +196,7 @@ print_help () {
    echo -e "\t-f, --format     Run pre-commit to format the code and check lints."
    echo -e "\t-p, --python     Run the python executable (python.sh) provided by Isaac Sim."
    echo -e "\t-s, --sim        Run the simulator executable (isaac-sim.sh) provided by Isaac Sim."
    echo -e "\t-t, --test       Run all python unittest tests."
    echo -e "\t-o, --docker     Run the docker container helper script (docker/container.sh)."
    echo -e "\t-v, --vscode     Generate the VSCode settings file from template."
    echo -e "\t-d, --docs       Build the documentation from source using sphinx."
@@ -310,6 +311,14 @@ while [[ $# -gt 0 ]]; do
            # exit neatly
            break
            ;;
        -t|--test)
            # run all tests using the python provided by Isaac Sim
            python_exe=$(extract_python_exe)
            shift # past argument
            ${python_exe} tools/run_all_tests.py "$@"
            # exit neatly
            break
            ;;
        -o|--docker)
            # run the docker container helper script
            docker_script=${ORBIT_PATH}/docker/container.sh
249 changes: 249 additions & 0 deletions tools/run_all_tests.py
@@ -0,0 +1,249 @@
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""A runner script for all the tests within the source directory.

.. code-block:: bash

    ./orbit.sh -p tools/run_all_tests.py

    # for a dry run
    ./orbit.sh -p tools/run_all_tests.py --discover_only

    # for a quiet run
    ./orbit.sh -p tools/run_all_tests.py --quiet

    # for increasing the timeout (default is 600 seconds)
    ./orbit.sh -p tools/run_all_tests.py --timeout 1000
"""

from __future__ import annotations

import argparse
import logging
import os
import subprocess
import time
from datetime import datetime
from pathlib import Path
from prettytable import PrettyTable

# Tests to skip
from tests_to_skip import TESTS_TO_SKIP


def parse_args() -> argparse.Namespace:
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Run all tests under current directory.")
    # add arguments
    parser.add_argument(
        "--skip_tests",
        default=[],
        help="Space separated list of tests to skip in addition to those in tests_to_skip.py.",
        type=str,
        nargs="*",
    )
    parser.add_argument("--discover_only", action="store_true", help="Only discover and print tests, don't run them.")
    parser.add_argument("--quiet", action="store_true", help="Don't print to console, only log to file.")
    parser.add_argument("--timeout", default=600, type=float, help="Timeout for each test in seconds.")
    # parse arguments
    args = parser.parse_args()
    return args


def test_all(
    test_dir: str,
    tests_to_skip: list[str],
    log_path: str,
    timeout: float = 600.0,
    discover_only: bool = False,
    quiet: bool = False,
) -> bool:
    """Run all tests under the given directory.

    Args:
        test_dir: Path to the directory containing the tests.
        tests_to_skip: List of tests to skip.
        log_path: Path to the log file to store the results in.
        timeout: Timeout for each test in seconds. Defaults to 600 seconds (10 minutes).
        discover_only: If True, only discover and print the tests without running them. Defaults to False.
        quiet: If False, print the output of the tests to the terminal console (in addition to the log file).
            Defaults to False.

    Returns:
        True if all un-skipped tests pass or :attr:`discover_only` is True. Otherwise, False.
    """
    # Create the log directory if it doesn't exist
    os.makedirs(os.path.dirname(log_path), exist_ok=True)

    # Add file handler to log to file
    logging_handlers = [logging.FileHandler(log_path)]
    # We also want to print to console
    if not quiet:
        logging_handlers.append(logging.StreamHandler())
    # Set up logger
    logging.basicConfig(level=logging.INFO, format="%(message)s", handlers=logging_handlers)

    # Discover all tests under current directory
    all_test_paths = [str(path) for path in Path(test_dir).resolve().rglob("*test_*.py")]
    skipped_test_paths = []
    test_paths = []
    # Check that all tests to skip are actually in the tests
    for test_to_skip in tests_to_skip:
        for test_path in all_test_paths:
            if test_to_skip in test_path:
                break
        else:
            raise ValueError(f"Test to skip '{test_to_skip}' not found in tests.")
    # Remove tests to skip from the list of tests to run
    if len(tests_to_skip) != 0:
        for test_path in all_test_paths:
            if any(test_to_skip in test_path for test_to_skip in tests_to_skip):
                skipped_test_paths.append(test_path)
            else:
                test_paths.append(test_path)
    else:
        test_paths = all_test_paths

    # Sort test paths so they're always in the same order
    all_test_paths.sort()
    test_paths.sort()
    skipped_test_paths.sort()

    # Print tests to be run
    logging.info("\n" + "=" * 60 + "\n")
    logging.info(f"The following {len(all_test_paths)} tests will be run:")
    for i, test_path in enumerate(all_test_paths):
        logging.info(f"{i + 1:02d}: {test_path}")
    logging.info("\n" + "=" * 60 + "\n")

    logging.info(f"The following {len(skipped_test_paths)} tests are marked to be skipped:")
    for i, test_path in enumerate(skipped_test_paths):
        logging.info(f"{i + 1:02d}: {test_path}")
    logging.info("\n" + "=" * 60 + "\n")

    # Exit if only discovering tests
    if discover_only:
        return True

    results = {}

    # Resolve python executable to use
    orbit_shell_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "orbit.sh")
    # Run each script and store results
    for test_path in test_paths:
        results[test_path] = {}
        before = time.time()
        logging.info("\n" + "-" * 60 + "\n")
        logging.info(f"[INFO] Running '{test_path}'\n")
        try:
            completed_process = subprocess.run(
                ["bash", orbit_shell_path, "-p", test_path], check=True, capture_output=True, timeout=timeout
            )
        except subprocess.TimeoutExpired as e:
            result = "TIMEDOUT"
            stdout = e.stdout
            stderr = e.stderr
        except Exception as e:
            result = "FAILED"
            # Not every exception carries the process output, so fall back to None
            stdout = getattr(e, "stdout", None)
            stderr = getattr(e, "stderr", None)
        else:
            result = "PASSED" if completed_process.returncode == 0 else "FAILED"
            stdout = completed_process.stdout
            stderr = completed_process.stderr

        after = time.time()
        time_elapsed = after - before
        # Decode stdout and stderr and write to file and print to console if desired
        stdout_str = stdout.decode("utf-8") if stdout is not None else ""
        stderr_str = stderr.decode("utf-8") if stderr is not None else ""
        # Write to log file
        logging.info(stdout_str)
        logging.info(stderr_str)
        logging.info(f"[INFO] Time elapsed: {time_elapsed:.2f} s")
        logging.info(f"[INFO] Result '{test_path}': {result}")
        # Collect results
        results[test_path]["time_elapsed"] = time_elapsed
        results[test_path]["result"] = result

    # Calculate the number and percentage of passing tests
    num_tests = len(all_test_paths)
    num_passing = len([test_path for test_path in test_paths if results[test_path]["result"] == "PASSED"])
    num_failing = len([test_path for test_path in test_paths if results[test_path]["result"] == "FAILED"])
    num_timing_out = len([test_path for test_path in test_paths if results[test_path]["result"] == "TIMEDOUT"])
    num_skipped = len(skipped_test_paths)

    if num_tests == 0:
        passing_percentage = 100
    else:
        passing_percentage = (num_passing + num_skipped) / num_tests * 100

    # Print summaries of test results
    summary_str = "\n\n"
    summary_str += "===================\n"
    summary_str += "Test Result Summary\n"
    summary_str += "===================\n"

    summary_str += f"Total: {num_tests}\n"
    summary_str += f"Passing: {num_passing}\n"
    summary_str += f"Failing: {num_failing}\n"
    summary_str += f"Skipped: {num_skipped}\n"
    summary_str += f"Timing Out: {num_timing_out}\n"

    summary_str += f"Passing Percentage: {passing_percentage:.2f}%\n"

    # Print time elapsed in hours, minutes, seconds
    total_time = sum(results[test_path]["time_elapsed"] for test_path in test_paths)

    summary_str += f"Total Time Elapsed: {total_time // 3600}h"
    summary_str += f"{total_time // 60 % 60}m"
    summary_str += f"{total_time % 60:.2f}s"

    summary_str += "\n\n=======================\n"
    summary_str += "Per Test Result Summary\n"
    summary_str += "=======================\n"

    # Construct table of results per test
    per_test_result_table = PrettyTable(field_names=["Test Path", "Result", "Time (s)"])
    per_test_result_table.align["Test Path"] = "l"
    per_test_result_table.align["Time (s)"] = "r"
    for test_path in test_paths:
        per_test_result_table.add_row(
            [test_path, results[test_path]["result"], f"{results[test_path]['time_elapsed']:0.2f}"]
        )

    for test_path in skipped_test_paths:
        per_test_result_table.add_row([test_path, "SKIPPED", "N/A"])

    summary_str += per_test_result_table.get_string()

    # Print summary to console and log file
    logging.info(summary_str)

    # Only count failing and timing out tests towards failure
    return num_failing + num_timing_out == 0


if __name__ == "__main__":
    # parse command line arguments
    args = parse_args()
    # add tests to skip to the list of tests to skip
    tests_to_skip = TESTS_TO_SKIP
    tests_to_skip += args.skip_tests
    # configure test directory (source directory)
    test_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "source")
    # configure logging
    log_file_name = "test_results_" + datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".log"
    log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logs", log_file_name)

    # run all tests
    test_success = test_all(
        test_dir, tests_to_skip, log_path, timeout=args.timeout, discover_only=args.discover_only, quiet=args.quiet
    )
    # update exit status based on all tests passing or not
    if not test_success:
        exit(1)
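
Since the script exits with a non-zero status when any test fails or times out, it can be wired into automation directly. A hypothetical CI snippet (the timeout value here is an assumption):

```python
# Hypothetical CI gate: run the test suite and propagate its exit status.
import subprocess
import sys

proc = subprocess.run(["./orbit.sh", "-p", "tools/run_all_tests.py", "--timeout", "1200"])
sys.exit(proc.returncode)  # non-zero if any test failed or timed out
```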
20 changes: 20 additions & 0 deletions tools/tests_to_skip.py
@@ -0,0 +1,20 @@
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

# The following tests are skipped by run_all_tests.py
TESTS_TO_SKIP = [
    # orbit
    "test_argparser_launch.py",  # app.close issue
    "test_env_var_launch.py",  # app.close issue
    "test_kwarg_launch.py",  # app.close issue
    "compat/sensors/test_camera.py",  # Timing out
    "test_differential_ik.py",  # Failing
    # orbit_tasks
    "test_data_collector.py",  # Failing
    "test_environments.py",  # Failing between 2 environments
    "test_record_video.py",  # Failing
    "test_rsl_rl_wrapper.py",  # Timing out (10 minutes)
    "test_sb3_wrapper.py",  # Timing out (10 minutes)
]

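Note that these entries are matched as substrings of the discovered test paths, so a bare filename skips every test with that name, while a partial path such as `compat/sensors/test_camera.py` pins down a single file. A minimal sketch of the matching (the paths below are hypothetical):

```python
# Sketch of the substring-based skip matching used by run_all_tests.py.
tests_to_skip = ["compat/sensors/test_camera.py"]
all_test_paths = [
    "/orbit/source/extensions/omni.isaac.orbit/test/compat/sensors/test_camera.py",
    "/orbit/source/extensions/omni.isaac.orbit/test/sensors/test_camera.py",
]
skipped = [p for p in all_test_paths if any(s in p for s in tests_to_skip)]
print(skipped)  # only the "compat" variant matches the partial path
```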