A warm thank you to @zuurw for the free logo design.
Software written for my master's thesis.
This code takes raw neural recordings as input, and generates the figures found in the thesis (and more) as output. In between, it processes the raw recordings, trains neural networks, calculates detection performance metrics, etc.
The name of this Python package, `sharp`, comes from the sharp wave-ripple, the electrical brain motif related to memories and learning that is studied in the thesis and the paper. More specifically, we seek new real-time algorithms that detect sharp wave-ripples earlier online.
Jump to: Installation | Usage
See the Usage section below.
On code documentation: READMEs are provided for most sub-packages, and docstrings are provided for most modules, classes, and methods. Care is taken to organize the code, and to name objects in a logical way.
(Also see the installation Notes below).
Clone this repository to your computer (e.g. to a directory `~/code/sharp`, as in this example):
```
$ git clone git@github.com:tfiers/sharp.git ~/code/sharp
```
The software is written in Python 3.7, and requires recent installations of SciPy and PyTorch. These are most easily installed with the conda package manager. (Install either Anaconda or the much smaller miniconda).
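To quickly check that these scientific dependencies are importable (a minimal, optional sanity check; neither the version printout nor the CUDA check is required by sharp), run in Python:

```python
# Optional sanity check of the SciPy / PyTorch installation.
import scipy
import torch

print("SciPy:  ", scipy.__version__)
print("PyTorch:", torch.__version__)
# False is expected (and fine) with the CPU-only PyTorch build
# mentioned in the installation notes below.
print("CUDA available:", torch.cuda.is_available())
```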
Next, install the required packages that are not publicly available on PyPI:
```
~/code/sharp$ pip install -r requirements.txt
```
This will fetch them automatically from their respective git repositories.
For now, the dependency `fklab-python-core` is closed source, and needs to be downloaded and installed manually. Request access to its git repository by contacting Kloosterman Lab. Clone the repository, enter its directory, and install with `pip install .`. Verify that it is installed correctly by trying `import fklab` in Python. See also the notes below.
Next, install the `sharp` package (and its dependencies that are publicly available on PyPI):

```
~/code/sharp$ pip install -e .
```
Optionally enable tab-autocompletion for `sharp` commands by adding the following line to your `.bashrc`:

```
. ~/code/sharp/sharp/cmdline/enable-autocomplete.sh
```
You can verify whether the installation was successful by trying, on the command line:

```
$ sharp
```

A message starting with `Usage: sharp <options> <command>` should appear.
Another way to test is to run Python and try:

```
import sharp
```
- This installation procedure has been tested on Windows 10 and Ubuntu 16.04.
- If no GPU acceleration is desired, the significantly smaller CPU-only version of PyTorch may be installed (i.e. `pytorch-cpu`, which corresponds to `CUDA = None` on the PyTorch "get-started" page).
- The installation of `fklab-python-core` might fail while trying to build its `radonc` extension (especially on Windows). This extension is not used in sharp, and can be excluded from the install: edit `fklab-python-core/setup.py`, and remove the line `ext_modules = [radon_ext]` in the `setup()` call (see the sketch below).
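For orientation, the relevant part of that `setup.py` might look roughly like the following sketch (the module path and source file name here are assumptions, not the actual contents of `fklab-python-core`):

```python
# Hypothetical sketch of the relevant part of fklab-python-core/setup.py.
from setuptools import Extension, find_packages, setup

radon_ext = Extension("fklab.radon.radonc", sources=["fklab/radon/radonc.c"])

setup(
    name="fklab-python-core",
    packages=find_packages(),
    ext_modules=[radon_ext],  # <- delete this line if radonc fails to build
)
```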
In your terminal, run `sharp config`, passing the name of a new directory in which a run configuration, logs, and (optionally) task output files will be stored:

```
$ sharp config ~/my-sharp-cfg
```
Edit the newly created `config.py` file in this directory and change the settings (such as the location of raw data and output directories) to suit your needs; a hypothetical sketch of such a file is shown after the list below.
- See the config specification for explanations of the different options.
- See the test `config.py` file from this repository for a concrete example of a customized configuration.
- Set `central_server = None` to test without having to start a central server first.
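As a further illustration, here is a hypothetical sketch of a customized `config.py`. Everything in it except `get_tasks` and `central_server` is an illustrative assumption; the config specification and the repository's test `config.py` define the real base class, option names, and task classes.

```python
# Hypothetical config.py sketch -- option and task names are placeholders.

class SharpConfig:  # stand-in for the real base class provided by sharp
    # Locations of the raw recordings and of generated output (names assumed):
    raw_data_dir = "~/data/raw-recordings"
    output_dir = "~/data/sharp-output"

    # Run without a central Luigi scheduler while testing:
    central_server = None

    def get_tasks(self):
        # Return the tasks (typically figure-generating tasks) that
        # `sharp worker` should run; tasks they depend on are run
        # automatically as well.
        return []
```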
When your `config.py` file is ready, run `sharp worker`, passing the name of your config directory:

```
$ sharp worker ~/my-sharp-cfg
```
This will run the tasks specified in the `get_tasks` method of your `config.py` file (these tasks typically generate figures), together with the tasks on which they depend (typically processing raw data, training neural networks, calculating evaluation metrics, ...).
Notes
- `sharp` internally outsources task dependency resolution and scheduling to Luigi (a minimal illustration of Luigi-style task dependencies follows this list).
- On Linux-like operating systems, by default, as many subprocesses as CPUs will be launched, to run tasks in parallel. The number of subprocesses can be set explicitly with the `-n` or `--num-subprocesses` option.
- On Windows, running Luigi tasks in subprocesses is not supported, and `sharp worker` will therefore run only one task at a time. If task parallelization on Windows is desired, run a central Luigi server (see below), and start multiple `sharp worker` processes.
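To show what that dependency resolution looks like in Luigi itself (with made-up task names; these are not sharp's actual task classes), a minimal example:

```python
import luigi


class ProcessRawData(luigi.Task):
    """Hypothetical upstream task (stands in for e.g. raw-data processing)."""

    def output(self):
        return luigi.LocalTarget("processed-recording.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("processed data")


class PlotFigure(luigi.Task):
    """Hypothetical downstream task (stands in for a figure-generating task)."""

    def requires(self):
        # Luigi runs ProcessRawData first if its output does not exist yet;
        # tasks whose output already exists are skipped on later runs.
        return ProcessRawData()

    def output(self):
        return luigi.LocalTarget("figure.svg")

    def run(self):
        with self.output().open("w") as f:
            f.write("<svg></svg>")


if __name__ == "__main__":
    # local_scheduler=True corresponds to running without a central server.
    luigi.build([PlotFigure()], local_scheduler=True)
```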
Sharp can be used without a central Luigi server. However, running such a server provides:
- Visualization of task completion progress, and the task dependency graph;
- A centralized task scheduler, to distribute task execution over multiple nodes in a computing cluster (or over multiple Python processes on Windows).
Sharp provides a utility script, `sharp scheduler start`, to configure and start a central Luigi server as a background (daemon) process. It takes as argument the name of a new directory in which the server logs, task history database, and scheduler PID and state files will be stored:

```
$ sharp scheduler start ~/luigi-scheduler
```
Similarly, `sharp scheduler stop` and `sharp scheduler state` commands are provided.
The Luigi scheduling server can only run as a daemon on Linux-like operating systems. If you want to use the scheduling server on Windows, run it manually, using the `luigid` command without the `--background` option.
On each host of your computing cluster, simply start a `sharp worker ~/my-sharp-cfg` process.
Sharp provides a utility script, `sharp slurm`, to automatically start workers on different nodes, if SLURM is used to manage jobs on the computing cluster:

```
$ sharp slurm ~/my-sharp-cfg --nodes=2
```
This:
- submits a well-behaved Slurm job that runs `sharp worker ~/my-sharp-cfg` on two cluster nodes;
- shows the Slurm job queue and job details.