Settings.ini
Here we will explain the parameters and configurations that you can set in settings.ini and utils/advanced_settings.ini.
As you might already have guessed, resolution is the width and height of the video stream that will be passed to the neural network. In the case of a RealSense or Generic camera it will directly change the output resolution on the device level, while output from Pylon-controlled cameras will be resized by cv2.resize(frame, (width, height)). (Note: Basler cameras controlled by Pylon can be configured using the Basler PylonViewer software; the configuration is stored on the device after closing it.)
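To make the shape change concrete, here is a minimal pure-Python stand-in for what cv2.resize(frame, (width, height)) does, using nearest-neighbour sampling; the function name and nested-list frame are illustrative only, not DLStream code.

```python
def resize_nearest(frame, width, height):
    """Rough stand-in for cv2.resize(frame, (width, height)):
    nearest-neighbour sampling on a nested-list 'frame' (rows of pixels)."""
    src_h, src_w = len(frame), len(frame[0])
    return [
        [frame[(y * src_h) // height][(x * src_w) // width] for x in range(width)]
        for y in range(height)
    ]

# dummy 4x4 frame with distinguishable pixel values
frame = [[x + 10 * y for x in range(4)] for y in range(4)]
small = resize_nearest(frame, 2, 2)  # output has 'height' rows of 'width' pixels
```

In practice cv2.resize also interpolates (bilinear by default), but the effect on frame dimensions is the same.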
Similar to resolution, the framerate directly alters the output on the device level (RealSense and Generic). Unfortunately, Pylon cameras can only be configured "easily" with the PylonViewer. Note: If you want to achieve the lowest latency with DLStream performance, you might need to increase the camera framerate beyond DLStream's actual performance. This will let you squeeze out those extra milliseconds, but you might end up with an uneven framerate (e.g. 33). Nothing to worry about, though! DLStream saves the time between two frames in the output, so you always have a time reference.
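Since DLStream stores the time between two frames in its output, the effective framerate can be recovered afterwards. A minimal sketch, assuming you have already read those inter-frame intervals (in seconds) into a list; the example values are made up:

```python
# Hypothetical inter-frame intervals in seconds, as read from the output
# (DLStream stores the time between two frames; the exact column name
# depends on your output file and is not shown here).
frame_intervals = [0.033, 0.030, 0.035, 0.031, 0.033]

mean_interval = sum(frame_intervals) / len(frame_intervals)
effective_fps = 1.0 / mean_interval  # average achieved framerate
```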
This is the path to your output folder where the output video and csv file will be stored. The path (string) needs to be specified without " " or ' '.
A number (int), usually between 0 and 10, that specifies the camera that OpenCV should access if multiple cameras are connected. Unfortunately, this number is not directly linked to the USB port number, nor does it necessarily start at 0 and increase so that 3 cameras would be 0, 1 and 2. But before you get angry at the weird source naming: if you only have one camera, DLStream will find it automatically (Yeah!).
Use this to select a video input source.
If STREAMING_SOURCE = camera: A camera connected to the computer will be used (the usual way).
If STREAMING_SOURCE = ipwebcam: A webcam connected via network or internet will be used as input. Make sure to configure this correctly in all other steps.
If STREAMING_SOURCE = video: The video specified in VIDEO_PATH will be used as a simulated input stream with FRAMERATE fps. Very useful to test experiments or triggers on prerecorded footage!
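As an illustration, such a setting could be read with Python's configparser like the sketch below; the section name [Streaming] and the example path are assumptions for this example, not taken from the actual file:

```python
import configparser

# Hypothetical settings.ini fragment; section and key layout are assumptions
# based on the parameters described above.
sample = """
[Streaming]
STREAMING_SOURCE = video
VIDEO_PATH = D:\\experiments\\test_session.avi
FRAMERATE = 30
"""

config = configparser.ConfigParser()
config.read_string(sample)

source = config.get("Streaming", "STREAMING_SOURCE")   # "video"
video_path = config.get("Streaming", "VIDEO_PATH")
fps = config.getint("Streaming", "FRAMERATE")          # 30
```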
This setting will change what pose estimation network source DLStream is expecting. The standard way is DLC; all other options are currently a beta release! See Installation & Testing for further details on integrating other network sources into DLStream! Note: The distinction between maDLC and DLC is whether you have trained a multiple animal network, not whether you are using the newest version of DLC!
MODEL_PATH = The full path to the exported model (DLC-LIVE, DEEPPOSEKIT), the folder of your DLC installation (see DLC_PATH), or the folder of your SLEAP models.
For standard DLC (and maDLC) please continue to use the Deeplabcut/deeplabcut folder within your DLC installation path (not the model folder). For any exported model (DPK, DLC-LIVE etc.) please use the direct path to the model.
For SLEAP enter the path to the folder (SLEAP will autodetect the model file within):
For single instance tracking models: e.g. D:\SLEAP\2animal_diffcolor\models\210210_132803.single_instance.1227
For multiple instance tracking models (separate both folders with a ","): e.g. D:\SLEAP\example_data\models\baseline_model.centroids, D:\SLEAP\example_data\models\baseline_model.topdown
For standard DLC and maDLC it is necessary to specify the name of the model. Additionally, this parameter will be used for benchmarking.
ALL_BODYPARTS = used in DLC-LIVE, DeepPoseKit and SLEAP (for now) to create the posture (has to be in the right order!); if left empty or too short, auto-naming will be enabled in the style bp1, bp2, ...
Current DLC-Live and DPK models do not export body part names, which are necessary for DLStream postures. Therefore the pose estimation will be converted with this parameter. Note that incomplete naming will result in auto-naming (bp1, bp2, ...).
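The fallback behaviour described above could be sketched like this; resolve_bodyparts is a hypothetical helper for illustration, not the actual DLStream function:

```python
def resolve_bodyparts(user_names, n_keypoints):
    """If the user-supplied ALL_BODYPARTS list is empty or too short,
    fall back to auto-naming in the style bp1, bp2, ..."""
    if user_names and len(user_names) >= n_keypoints:
        return list(user_names[:n_keypoints])
    return [f"bp{i + 1}" for i in range(n_keypoints)]

too_short = resolve_bodyparts(["nose", "tail"], 3)        # auto-naming kicks in
complete = resolve_bodyparts(["nose", "neck", "tail"], 3)  # names are kept
```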
This is the path to your deeplabcut folder where the neural network of your choice should be stored. Note that this is not the project folder where the network was trained in the current version of DLC. If you have a DLC version > 1.11 (and there is currently no reason not to), you will need to copy your network into the model folder in your DLC path. This will change once we incorporate DLC-Live, so stay tuned. Again, the full path (string) needs to be specified without " " or ' '.
This is the full name (string) of your DLC-trained neural network within the model folder of your DLC path. Note: If you want to be able to quickly switch between models (without copy+pasting the names all the time), you can use ; to comment out the unused lines like this: ;MODEL = not_used_network. Again, the full name (string) needs to be specified without " " or ' '.
This number is directly linked to the set_up_experiment() function in DeepLabStream.py. If you imported and added your experiment correctly, you can use this setting to pick the experiment that will be started using the GUI's Start Experiment feature.
If you created a config file that utilizes a base experiment, you need to enter "BASE" here. This will allow you to load any experiment directly from the config file by entering its name below. Note that currently the config file needs to stay in the experiments/configs folder!
If you want to use custom experiments, you need to enter "CUSTOM". This will allow you to import experiments from experiments/custom/experiments by their name as before.
If EXP_ORIGIN = CUSTOM: This name directly refers to the Experiment module and selects the experiment that will be run when using DLStream. Make sure you imported and added your experiment correctly (exact name).
If EXP_ORIGIN = BASE: This name directly refers to the config file (.ini) and will load the experiment specified by the parameters set in the file.
You can use this setting to pick the experiment that will be started using the GUI's Start Experiment feature.
This boolean setting will automatically record a video output when you press Start Experiment in the DLStream GUI. If it is set to False, you can start recording at any time by pressing Start recording.
This is the path to a video that you want to use for offline testing in DLStream or with VideoAnalyzer.py (a very useful tool to test your experiments before actually starting them). Again, the full path (string) needs to be specified without " " or ' '.
This boolean setting will force DLStream to use a prerecorded video specified in VIDEO_SOURCE as input and treat it like a camera stream. Note that it will take FRAMERATE and RESOLUTION to change the output framerate and resolution accordingly.
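A rough sketch of how a prerecorded video can be served like a camera stream at FRAMERATE fps; the generator below is illustrative only, not DLStream's implementation (the sleep function is injectable so the example runs instantly):

```python
import time

FRAMERATE = 30  # as set in settings.ini
frame_interval = 1.0 / FRAMERATE  # seconds to wait between frames

def stream_frames(frames, interval=frame_interval, sleep=time.sleep):
    """Yield prerecorded frames paced to a fixed framerate, simulating
    a live camera stream from a video file."""
    for frame in frames:
        yield frame
        sleep(interval)  # pace the stream to FRAMERATE fps

# dummy "frames"; pass a no-op sleep so the demo returns immediately
served = list(stream_frames(["f1", "f2", "f3"], sleep=lambda s: None))
```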
This boolean setting will force DLStream to use a webcam connected to DLStream via network. DLStream will wait for input from that stream even if it is not running, so make sure that the webcam/computer on the other side is streaming. Note that it will take FRAMERATE and RESOLUTION to change the output framerate and resolution accordingly.
You can ignore these settings if you are a regular user.
At its core, DLStream was developed to incorporate multiple streams/cameras in the same framework. Therefore, we developed DLStream to use all available input from a RealSense camera (depth, color and infrared). For all other cameras, this setting is useless. If you are interested in utilizing your RealSense camera, reach out to us.
This is an experimental setting in line with the Stream setting above. It allows the simultaneous analysis of multiple streams. This will slow down DLStream and has not yet been fully incorporated into the experiment/trigger environment. It takes a boolean (True or False) and will allow the camera manager to access multiple cameras. Note that currently only cameras of the same type can be accessed simultaneously (Pylon or RealSense).
Specify the port for the ipwebcam that is used to connect both computers. See SmoothStream for additional details on how to set this up.
For multiple animal experiments, you will need to change this value to the maximum number of animals that your pose estimation will track.
If set to TRUE, the parameters CROP_X and CROP_Y can be used to specify the x and y coordinates for cropping the camera frame to the relevant part of the stream. This can result in increased performance. Example: CROP_X = 0, 50 CROP_Y = 0, 50
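Cropping with CROP_X and CROP_Y boils down to slicing the frame; a minimal sketch using a nested-list frame (illustrative only, not DLStream code):

```python
def crop_frame(frame, crop_x, crop_y):
    """Crop a nested-list frame (rows of pixels) to the region given by
    CROP_X = (x0, x1) and CROP_Y = (y0, y1), as in the example above."""
    x0, x1 = crop_x
    y0, y1 = crop_y
    return [row[x0:x1] for row in frame[y0:y1]]

frame = [[0] * 100 for _ in range(80)]      # dummy frame: 80 rows x 100 columns
roi = crop_frame(frame, (0, 50), (0, 50))    # 50x50 region of interest
```

A smaller region of interest means fewer pixels for pose estimation to process, which is where the performance gain comes from.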
Will flatten any multiple animal pose estimation (SLEAP, maDLC) to generate an instance-free merged skeleton. This allows you to run single animal experiments without customizing them into multiple animal experiments, but is mainly for debugging.
This will split flat skeletons (e.g. from tracking two differently colored animals with single instance models in SLEAP or DLC) into separate instances for use in multiple animal experiments. We recommend using this setting if you want to run multiple animal experiments with such models. However, splitting can only be done in equal parts (each animal getting the same number of body parts) and will fail if additional user-defined labels were set.
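The equal-parts limitation can be illustrated with a small sketch; split_flat_skeleton is a hypothetical helper, not DLStream's implementation:

```python
def split_flat_skeleton(flat, n_animals):
    """Split a flat list of body part labels into equal per-animal parts.
    Raises if the parts cannot be divided evenly, matching the limitation
    described above."""
    if len(flat) % n_animals != 0:
        raise ValueError("Cannot split skeleton into equal parts")
    per_animal = len(flat) // n_animals
    return [flat[i * per_animal:(i + 1) * per_animal] for i in range(n_animals)]

flat = ["nose1", "tail1", "nose2", "tail2"]
instances = split_flat_skeleton(flat, 2)  # two animals, two body parts each
```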
Handles missing body parts (NaN values) by the selected method:
If HANDLE_MISSING is skip: the complete skeleton is removed (default).
If HANDLE_MISSING is null: the missing coordinate is set to 0.0; not recommended for experiments where continuous monitoring of parameters is necessary.
If HANDLE_MISSING is pass: the missing coordinate is left NaN. This is useful to keep identities, but can yield unexpected results down the line if NaN values are not caught.
If HANDLE_MISSING is reset: the whole skeleton is set to NaN. This is useful to keep identities, but can yield unexpected results down the line if NaN values are not caught.
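The four strategies could be sketched as follows for a skeleton given as a {bodypart: (x, y)} dictionary; this illustrates the described behaviour and is not DLStream's actual code:

```python
import math

def handle_missing(skeleton, method):
    """Apply one of the HANDLE_MISSING strategies to a skeleton dict
    of the form {bodypart: (x, y)}, with NaN marking missing coordinates."""
    nan = float("nan")
    has_missing = any(math.isnan(x) or math.isnan(y)
                      for x, y in skeleton.values())
    if not has_missing:
        return skeleton
    if method == "skip":      # remove the whole skeleton (default)
        return None
    if method == "null":      # replace missing coordinates with 0.0
        return {bp: (0.0 if math.isnan(x) else x,
                     0.0 if math.isnan(y) else y)
                for bp, (x, y) in skeleton.items()}
    if method == "pass":      # leave NaN in place
        return skeleton
    if method == "reset":     # set the whole skeleton to NaN
        return {bp: (nan, nan) for bp in skeleton}
    raise ValueError(f"Unknown method: {method}")

skel = {"nose": (10.0, 12.0), "tail": (float("nan"), 40.0)}
```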
Missing skeletons will not be passed to the experiment, while resetting coordinates might lead to false results returned by triggers. Our current default is skipping/removing incomplete skeletons completely, but if you are only interested in a limited number of the tracked body parts, we recommend handling any NaN values during the experiment or trigger.
Will repeat videos in an endless loop (when a video is selected as camera source) until DLStream is manually stopped. If it is set to False, DLStream will stop automatically when the end of the video is reached.
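The looping behaviour can be sketched with a simple generator; this is illustrative only, and max_frames caps the "endless" loop so the demonstration terminates:

```python
def frame_source(frames, repeat=True, max_frames=10):
    """Yield frames from a prerecorded video: loop endlessly when repeat
    is True (capped by max_frames here for demonstration), otherwise stop
    at the end of the video."""
    served = 0
    while True:
        for frame in frames:
            if served >= max_frames:
                return
            yield frame
            served += 1
        if not repeat:  # reached the end of the video, REPEAT_VIDEO = False
            return

looped = list(frame_source(["a", "b", "c"], repeat=True, max_frames=7))
once = list(frame_source(["a", "b", "c"], repeat=False))
```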