This script cuts long recordings containing multiple stimulations into single-stimulation clips. It uses the video file and a corresponding stimulation trace stored as a text file (Labscribe) or a binary file (Bonsai).

The script processes all videos found in the input directory that have a txt/csv/bin file with the same name. The latter is read to determine the stimulation onsets. Extracted clips are created in a `{video-name}-cropped` subfolder and are numbered from 0 to the number of stimulations found in the stimulation file.
The stimulation trace files can be either:

- A text/CSV file with two columns: the first is the time and the second the voltage.
- A binary file saved with Bonsai, with three columns: the first is the time steps, the second the blue laser voltage and the third the orange laser voltage.
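To make the two layouts concrete, here is a small sketch of reading a trace and finding stimulation onsets. It is not `videocutter`'s actual implementation: the binary dtype (64-bit floats) and the threshold-crossing detection are assumptions for illustration.

```python
import numpy as np

def read_trace(path, binary, laser="blue", sep=","):
    """Return (time, voltage) arrays from a Labscribe text or Bonsai binary file."""
    if binary:
        # Assumed layout: interleaved float64 triplets (time, blue, orange)
        data = np.fromfile(path, dtype=np.float64).reshape(-1, 3)
        col = 1 if laser == "blue" else 2
        return data[:, 0], data[:, col]
    # Text/CSV: two columns, time then voltage (no header assumed here)
    data = np.loadtxt(path, delimiter=sep)
    return data[:, 0], data[:, 1]

def find_onsets(time, voltage, threshold=1.0):
    """Times where the voltage crosses the threshold upward (rising edges)."""
    high = voltage > threshold
    edges = np.flatnonzero(~high[:-1] & high[1:]) + 1
    return time[edges]
```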
If not already installed, install Miniforge as user, add it to the PATH and make it the default interpreter. For more information, you can check this page. Open a terminal, run `conda init` and restart the terminal.
Create a virtual environment (it can be the same as the one for features-from-dlc if you plan to use that later on):

```
conda create -n ffd python=3.12
```
`videocutter` relies on `ffmpeg` to read and write videos. First, check if `ffmpeg` is installed: open a terminal, type `ffmpeg` and see if it is recognized. If that's the case, jump directly to the next section.
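If you prefer checking from Python rather than the terminal, a small sketch (not part of `videocutter`) that looks up both executables on the `PATH`:

```python
import shutil

# shutil.which returns the full path if the executable is on the PATH,
# or None otherwise -- same result as typing `ffmpeg` in a terminal.
for tool in ("ffmpeg", "ffprobe"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'not found'}")
```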
For Linux, just install it with your distribution's package manager. For Windows:

- You can grab ffmpeg executables on gyan.dev. Choose the latest git master branch essentials build, extract it and store it somewhere relevant on your computer. `somewhere/relevant/ffmpeg-xxx/bin/` is the directory containing the `ffmpeg.exe` and `ffprobe.exe` files. If you're able, add this directory to the `PATH` environment variable; otherwise you will need to specify it with the `--ffmpeg-dir` option when using `videocutter` in the command line, or in the `FFMPEG_DIR` variable from the example script (see the Usage).
- If you have `chocolatey`, `scoop` or `winget` available in the terminal, just `choco/scoop/winget install ffmpeg`. It will pull the latest build from gyan.dev and add it automatically to the `PATH`.
- Alternatively, you can install `ffmpeg` with `conda`. From within the environment in which you'll install and use `videocutter`: `conda install -c conda-forge ffmpeg`. Note that this version is less optimized than the gyan.dev build, thus slower.
Within a virtual environment with Python 3.12, install with `pip` from the terminal:

```
pip install videocutter
```
From a terminal within the virtual environment in which you installed `videocutter`, you can check the default values with:

```
videocutter --help
```
Then, use it like so:

- With text files, with all default values:

```
videocutter /path/to/your/videos
```

- With Bonsai files, extracting the blue laser and using all default values:

```
videocutter /path/to/your/videos blue
```

- With text files, changing the time before and after onset:

```
videocutter /path/to/your/videos --time-before 1 --time-after 2
```

- With binary files, extracting another laser color and changing the video files' extension:

```
videocutter /path/to/your/videos orange --video-ext avi
```

- Using your own `ffmpeg`:

```
videocutter /path/to/your/videos --ffmpeg-dir /path/to/your/ffmpeg/bin
```
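For reference, each extracted clip ultimately comes from an `ffmpeg` invocation. Here is a sketch of how such a call could be assembled around one onset; the `--time-before`/`--time-after` options and the `{video-name}-cropped/` naming follow this README, but the exact flags `videocutter` passes to `ffmpeg` internally may differ:

```python
from pathlib import Path

def clip_command(video, onset, time_before, time_after, index):
    """Build an illustrative ffmpeg command for one single-stimulation clip."""
    start = max(onset - time_before, 0.0)   # clip starts before the onset
    duration = time_before + time_after     # total clip length
    stem = Path(video).with_suffix("")
    outdir = stem.parent / f"{stem.name}-cropped"
    return [
        "ffmpeg", "-y",
        "-ss", f"{start:.3f}",   # seek to clip start (fast when placed before -i)
        "-i", video,
        "-t", f"{duration:.3f}", # clip duration
        "-c", "copy",            # stream copy: no re-encoding, cuts on keyframes
        str(outdir / f"{index}.mp4"),
    ]
```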
Copy the example from `examples/cut_videos.py`, fill in the parameters and run the script.
- The format of txt files exported from Labscribe depends on its version: sometimes the values are separated by commas (`,`), sometimes by tabulations. To be sure, open the file with a text editor and check whether there are "," or big spaces between values on a row. Edit the `SEP` parameter accordingly in the example script, or use the `--sep` argument in the CLI.
- Again with Labscribe, it's unclear when or which versions add a header to this text file (names for each column). Both cases should work: if there are non-numeric values on the first line of the file, it will be considered a header and discarded.
- If the stimulation trace was edited in Labscribe (for example, if you annotated the traces), the exported file might contain a third column and will most likely not work.
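To avoid guessing by eye, the separator and header checks above can be scripted. A small helper (an illustrative sketch, not part of `videocutter`) that inspects the first line of a Labscribe export:

```python
def sniff_trace(path):
    """Guess (separator, has_header) from the first line of a Labscribe export."""
    with open(path) as f:
        first = f.readline()
    # Labscribe uses either commas or tabulations between values
    sep = "," if "," in first else "\t"

    def is_number(tok):
        try:
            float(tok)
            return True
        except ValueError:
            return False

    # Non-numeric tokens on the first line indicate a header row
    has_header = not all(is_number(tok) for tok in first.strip().split(sep) if tok)
    return sep, has_header
```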
`videocutter` has been primarily developed by Guillaume Le Goc in Julien Bouvier's lab at NeuroPSI, with Edwin Gatier's algorithm for trial detection.