Posidonius is an N-body code based on the tidal model used in Mercury-T (Bolmont et al. 2015). It uses a symplectic integrator (WHFast, Rein & Tamayo 2015) to compute the evolution of positions and velocities, combined with a midpoint integrator to calculate the spin evolution in a consistent way. Like Mercury-T, Posidonius takes into account tidal forces, rotational-flattening effects and general relativity corrections. It also includes different evolution models for FGKML stars and gaseous planets.
The N-body code is written in Rust (Blanco-Cuaresma et al. 2017) and a Python package is provided to easily define simulation cases in JSON format, which the Posidonius integrator can read.
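As an illustration of the symplectic family of integrators Posidonius builds on (WHFast itself is a far more sophisticated Wisdom-Holman scheme; this is only the simplest member of the family), a minimal kick-drift-kick leapfrog for a test particle around a central mass can be sketched as:

```python
import math

def leapfrog_kepler(pos, vel, mu, dt, steps):
    """Kick-drift-kick leapfrog for a test particle around a central mass
    with gravitational parameter mu, in the orbital plane.

    Symplectic integrators like this one keep the energy error bounded
    over long integrations instead of letting it drift."""
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        # half kick: apply half a step of gravitational acceleration
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * mu * x / r3
        vy -= 0.5 * dt * mu * y / r3
        # full drift: advance positions with the updated velocities
        x += dt * vx
        y += dt * vy
        # half kick: second half of the acceleration step
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * mu * x / r3
        vy -= 0.5 * dt * mu * y / r3
    return (x, y), (vx, vy)

def energy(pos, vel, mu):
    """Specific orbital energy: kinetic plus potential."""
    x, y = pos
    vx, vy = vel
    return 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)
```

For a circular orbit (mu = 1, r = 1, v = 1) the specific energy is -0.5, and after many steps the leapfrog keeps it close to that value; a non-symplectic method of the same order would accumulate a secular drift.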
- Rust: see rustup
- Python: see anaconda
First, install the N-body simulator by running from the current directory:
cargo install --path . --force
The executable will be copied into $HOME/.cargo/bin/. Then, install the Python package to create cases by running:
curl -O https://www.blancocuaresma.com/s/repository/posidonius/input.tar.gz
tar -zxvf input.tar.gz && rm -f input.tar.gz
pip install .
The --user flag can be used with pip to install the package in your $HOME/.local/lib/python3.*/site-packages/ directory.
Both tools can be uninstalled by executing:
cargo uninstall posidonius
pip uninstall posidonius
Users can design their own simulations with a Python script, which creates a JSON file with the simulation description. This file can be read later on by Posidonius to start the simulation. The code includes several examples in the cases directory, and they can be executed by running:
python cases/Bolmont_et_al_2015/case3.py target/case3.json
python cases/Bolmont_et_al_2015/case4.py target/case4.json
python cases/Bolmont_et_al_2015/case7.py target/case7.json
python cases/example.py target/example.json
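The case scripts above use the posidonius Python package to build the JSON description. Purely as an illustration of the workflow (writing a structured description to JSON), a hand-rolled sketch might look like the following; note that the field names here are hypothetical and NOT the actual schema, which is defined by the posidonius package (see cases/example.py):

```python
import json
import os
import tempfile

# Hypothetical case description: these field names are illustrative only;
# the real schema is produced by the posidonius Python package.
case = {
    "time_step": 0.08,        # days (hypothetical field)
    "time_limit": 365.25e6,   # days (hypothetical field)
    "particles": [
        {"kind": "star", "mass": 0.08},     # masses in solar masses (assumed)
        {"kind": "planet", "mass": 3.0e-6},
    ],
}

# Write the description to a JSON file that an integrator could read back.
path = os.path.join(tempfile.mkdtemp(), "case.json")
with open(path, "w") as f:
    json.dump(case, f, indent=2)
```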
The simulations can be started using JSON files (which describe the initial conditions). When starting a simulation, the recovery and historic snapshot file names must be specified. The former will contain the information needed to resume interrupted simulations, while the latter stores the evolution of the simulation over the years.
posidonius start target/case3.json target/case3.bin target/case3_history.bin
posidonius start target/case4.json target/case4.bin target/case4_history.bin
posidonius start target/case7.json target/case7.bin target/case7_history.bin
posidonius start target/example.json target/example.bin target/example_history.bin
The flag --silent can be added to avoid printing the current year of the simulation. An execution time limit can also be specified with the flag --limit; this can be useful on supercomputers that only allow processes to run for a given amount of real time (not simulation time).
Interrupted simulations can be restored using the recovery snapshot file. The historic snapshot filename must also be specified to continue storing the history of the simulation.
posidonius resume target/case3.bin target/case3_history.bin
posidonius resume target/case4.bin target/case4_history.bin
posidonius resume target/case7.bin target/case7_history.bin
posidonius resume target/example.bin target/example_history.bin
The flag --silent can be added to avoid printing the current year of the simulation. An execution time limit can also be specified with the flag --limit; this can be useful on supercomputers that only allow processes to run for a given amount of real time (not simulation time). If the user wants to change the historic or recovery snapshot periods when resuming a simulation, this can be done with the flags --historic-snapshot-period and --recovery-snapshot-period followed by the new period (in days). The flag --time-limit can be used to change the simulation time limit (e.g., to increase it for a previous short simulation that looks promising).
While a simulation is in progress or when it has ended, the historic snapshot file can be converted to plain text tab-separated files (one per body in the system):
python scripts/raw_history.py target/case3_history.bin
In the same way, the historic snapshot file can be interpreted by transforming positions/velocities to heliocentric coordinates using the most massive body as the reference one, computing keplerian parameters, and generating a plot + plain text tab-separated file with the history of the simulation:
python scripts/explore_history.py target/case3.json target/case3_history.bin
python scripts/explore_history.py target/case4.json target/case4_history.bin
python scripts/explore_history.py target/case7.json target/case7_history.bin
python scripts/explore_history.py target/example.json target/example_history.bin
To explore what possible resonances might be present in the system:
python scripts/explore_timed_resonances.py target/case3_history.bin
python scripts/explore_timed_resonances.py target/case4_history.bin
python scripts/explore_timed_resonances.py target/case7_history.bin
python scripts/explore_timed_resonances.py target/example_history.bin
Finally, to study a given resonance (e.g., 3:2) between planets one and two:
python scripts/explore_single_resonance.py target/case3_history.bin 1 2 3 2
python scripts/explore_single_resonance.py target/case4_history.bin 1 2 3 2
python scripts/explore_single_resonance.py target/case7_history.bin 1 2 3 2
python scripts/explore_single_resonance.py target/example_history.bin 1 2 3 2
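A p:q mean-motion resonance corresponds to an orbital period ratio close to p/q, and by Kepler's third law (P^2 proportional to a^3 for a shared central mass) the ratio follows directly from the semi-major axes. The following quick check is only a first filter, not the method used by explore_single_resonance.py, which works with resonant angles:

```python
def period_ratio(a_inner, a_outer):
    """Outer/inner orbital period ratio from Kepler's third law
    (P^2 proportional to a^3, same central mass for both planets)."""
    return (a_outer / a_inner) ** 1.5

def near_resonance(a_inner, a_outer, p, q, tol=0.01):
    """True if the period ratio lies within a relative tolerance of p/q."""
    return abs(period_ratio(a_inner, a_outer) * q / p - 1.0) < tol
```

For example, an outer planet at a = 1.5**(2/3) times the inner semi-major axis sits exactly at the 3:2 period ratio.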
This section is intended only for developers.
NOTE: Rust floats (f64) follow the IEEE 754 binary64 (double-precision) format, hence only about 16 decimal digits are significant (see page 74, Beginning Rust book).
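Python's float uses the same 64-bit IEEE 754 binary format as Rust's f64, so the precision limit can be checked directly from an interpreter:

```python
import sys

# Decimal digits guaranteed to survive a round trip through a 64-bit float.
print(sys.float_info.dig)       # 15
# Machine epsilon: the relative spacing between floats near 1.0 (2**-52).
print(sys.float_info.epsilon)   # 2.220446049250313e-16
# Classic consequence: 0.1 and 0.2 have no exact binary representation.
print(0.1 + 0.2 == 0.3)         # False
```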
Prepare a python environment:
python3 -m venv venv # or "virtualenv venv" if still using Python 2
source venv/bin/activate
pip install '.[dev]'
Optionally, if you need to build a posidonius wheel and install it, you can do so with:
rm -rf dist/
python -m build --wheel
pip install dist/posidonius-*whl
Run the python tests:
pytest
If the tests fail, it is because the current posidonius version generates JSON files that differ from those of the previous version. The error message will contain the names of the files that differ so that the developer can compare them:
vimdiff posidonius/tests/data/test_tides-enabled_tides/case.json posidonius/tests/data/tmp/test_tides-enabled_tides/case.json
If the new file (in the tmp directory) is the expected one given the changes made to the code, then the case.json files can be deleted and recreated by running the tests again:
find posidonius/tests/data/ -name 'case.json' -delete
pytest
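Besides vimdiff, a small recursive comparison can pinpoint exactly which keys differ between two parsed JSON documents. This is an independent sketch, not part of the posidonius test suite:

```python
import json

def json_diff(a, b, path="$"):
    """Recursively collect the paths at which two parsed JSON values differ."""
    diffs = []
    if isinstance(a, dict) and isinstance(b, dict):
        for key in sorted(set(a) | set(b)):
            if key not in a or key not in b:
                diffs.append(f"{path}.{key} (missing on one side)")
            else:
                diffs.extend(json_diff(a[key], b[key], f"{path}.{key}"))
    elif isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            diffs.append(f"{path} (length {len(a)} vs {len(b)})")
        else:
            for i, (ai, bi) in enumerate(zip(a, b)):
                diffs.extend(json_diff(ai, bi, f"{path}[{i}]"))
    elif a != b:
        diffs.append(f"{path} ({a!r} vs {b!r})")
    return diffs
```

Usage: load both files with json.load and print json_diff(old, new) to get one line per differing path.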
Run the rust tests (the stack size needs to be increased since Kaula was integrated):
RUST_MIN_STACK=33554432 cargo test
If the test results for a case differ from previous executions and the difference is justified and expected given a change in the code, then all the particle_*.json files can be erased and recreated:
find tests/data/ -name 'particle_*.json' -delete
RUST_MIN_STACK=33554432 cargo test
But if the difference is not expected, then the changes made to the code have affected the expected behavior of posidonius, and you should review your changes to identify the reason/bug.
If the results when using a JSON file generated with Rust differ from the results when using a JSON file generated by Python, the first thing to check is whether there are differences between the JSON files:
vimdiff tests/data/test_tides-enabled_tides/case.json posidonius/tests/data/test_tides-enabled_tides/case.json
If it is necessary to recreate all the Rust tests' case.json files, it is enough to delete them all, run the tests again, and then run the cleaning script (it re-formats the JSON files to make them easily comparable to the Python-generated ones):
find tests/data/ -name 'case.json' -delete
RUST_MIN_STACK=33554432 cargo test --no-fail-fast
python scripts/clean_json.py
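Conceptually, the cleaning step amounts to re-serializing each file with a fixed indentation and key order so that semantically equal files become byte-for-byte comparable; the actual scripts/clean_json.py may differ in details. A minimal sketch:

```python
import json

def normalize_json(text):
    """Re-serialize a JSON document with sorted keys and fixed indentation,
    so that formatting differences disappear and only real content
    differences remain visible to diff tools."""
    return json.dumps(json.loads(text), indent=2, sort_keys=True) + "\n"
```

With this normalization, '{"b":1,"a":2}' and '{ "a": 2, "b": 1 }' serialize to identical text.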
If the results still differ, it is necessary to verify the changes you have made to your code to identify the reason/bug.
We can use benchmarks to assess the relative performance of code changes (i.e., statistically significant variations in execution time / speed). First, we need a reference measurement before we make any changes:
rm -rf target/criterion/
cargo bench -- --save-baseline reference
Second, we can apply our changes to the code. And finally, we can run the benchmarks again to compare against the previous reference measurement:
cargo bench -- --baseline reference
If we need to execute only some of the benchmarks, they can be filtered by their name:
cargo bench -- --baseline reference calculate_additional_effects/all
If gnuplot is installed in the system, a report with plots will be generated/updated after each execution:
open target/criterion/report/index.html
Finally, multiple baselines can also be compared with one another:
cargo install critcmp --force
cargo bench -- --save-baseline before calculate_additional_effects/all
cargo bench -- --save-baseline after calculate_additional_effects/all
critcmp before after
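The idea behind comparing baselines can be sketched as comparing a robust statistic of two timing samples; note that Criterion and critcmp perform a much more careful analysis (outlier classification, bootstrap confidence intervals), so this is only a conceptual illustration:

```python
from statistics import median

def compare_baselines(before_ns, after_ns):
    """Relative change between two timing samples (e.g., nanoseconds per
    iteration). Uses the median as a robust location estimate; a negative
    result means the 'after' code is faster."""
    m_before = median(before_ns)
    m_after = median(after_ns)
    return (m_after - m_before) / m_before
```

A result of -0.5 means the new code takes half the time of the reference; +0.1 means a 10% slowdown.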
Using cargo-flamegraph it is possible to generate a flame graph, which makes it easy to visualize which parts of the code take the most execution time (i.e., to identify the most expensive parts of posidonius at runtime). In the flame graph, the main function of posidonius is near the bottom, with the called functions stacked on top.
The flame graph allows the developer to identify parts of the code to be optimized, but properly measuring the impact of the optimization needs to be done with the benchmark tests.
Install cargo-flamegraph and its system dependencies in GNU/Linux:
sudo apt install linux-tools-common linux-tools-generic # only in GNU/Linux (debian-based)
cargo install flamegraph --force
Setup the system to allow performance measurement for non-root users (only for GNU/Linux):
sudo sh -c 'echo -1 >/proc/sys/kernel/perf_event_paranoid'
sudo sysctl -w kernel.perf_event_paranoid=-1
Run an example to build a flame graph (add --root after cargo flamegraph if this is run on a system other than GNU/Linux; it will ask for your user password):
python cases/example.py target/example.json
rm -f target/example*bin
cargo flamegraph --bin posidonius -- start target/example.json target/example.bin target/example_history.bin
The previous command will generate the files perf.data and flamegraph.svg: the former can be inspected with the perf command (only in GNU/Linux), while the latter is better visualized in a browser (which allows easy interaction to explore the graph and expand the function names).