Tools usage

Introduction

This wiki page describes the syntax and proper usage of each of the tools that form the Robotic Performance Prediction Framework (RPPF). The tools list is organized in sections according to the directory each tool belongs to; within each section, the tools are listed in alphabetical order.


python

addMITScale.py

Syntax: $ python addMITScale.py <xml_input_path> <xml_output_path>
Usage: this tool adds to each MIT dataset XML file stored in <xml_input_path> the missing scale parameter (which for MIT is 1 pixel = 1 meter) and stores the modified XML files in <xml_output_path>.
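
For reference, a minimal sketch of the scale-insertion step, assuming the scale is stored as a scale element with a value attribute (the element and attribute names are assumptions, not taken from the actual MIT XML schema):

    import os
    import xml.etree.ElementTree as ET

    def add_scale(xml_input_path, xml_output_path, pixels_per_meter=1.0):
        os.makedirs(xml_output_path, exist_ok=True)
        for name in os.listdir(xml_input_path):
            if not name.endswith('.xml'):
                continue
            tree = ET.parse(os.path.join(xml_input_path, name))
            # Append the missing scale information (1 pixel = 1 meter for MIT).
            scale = ET.SubElement(tree.getroot(), 'scale')  # hypothetical element name
            scale.set('value', str(pixels_per_meter))
            tree.write(os.path.join(xml_output_path, name))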

adjust_metric.py

Syntax: $ python adjust_metric.py <runs_outputs_folder>
Usage: this tool filters out, from each run, the redundant poses that occur whenever the robot stops in place, either to "think" or because the exploration has ended.
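
A minimal sketch of the filtering idea, assuming the log contains whitespace-separated pose lines with x and y in the second and third columns (this layout is an assumption):

    def filter_stationary_poses(lines, eps=1e-6):
        filtered, last_xy = [], None
        for line in lines:
            fields = line.split()
            xy = (float(fields[1]), float(fields[2]))  # assumed column layout
            # Keep a pose only if the robot actually moved since the previous kept pose.
            if last_xy is None or abs(xy[0] - last_xy[0]) > eps or abs(xy[1] - last_xy[1]) > eps:
                filtered.append(line)
                last_xy = xy
        return filtered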

adjust_output.py

Syntax: $ python adjust_output.py <runs_outputs_folder>
Usage: this tool removes the last line of the SLAM output log file from each run in <runs_outputs_folder>, in order to avoid parsing errors due to untimely termination of the exploration process.
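
The core of the operation is simply dropping the last, possibly truncated, line of each log; a minimal sketch:

    def drop_last_line(log_path):
        with open(log_path) as f:
            lines = f.readlines()
        # Rewrite the file without its last (possibly incomplete) line.
        with open(log_path, 'w') as f:
            f.writelines(lines[:-1])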

avg.py

Syntax: $ python avg.py <errors.csv>
Usage: this tool aggregates the performance data of the individual runs for each of the datasets contained in the <errors.csv> (see writecsv.py below) and writes a corresponding <avg.csv> file.
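
A minimal sketch of the aggregation, assuming <errors.csv> has a dataset column and numeric per-run error columns (the column names are assumptions):

    import pandas as pd

    errors = pd.read_csv('errors.csv')
    # Average the per-run performance measures of each dataset.
    avg = errors.groupby('dataset').mean(numeric_only=True)
    avg.to_csv('avg.csv')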

compare_images.py

Usage: this module is part of the RPPF autonomous exploration system and is not meant to be invoked manually. It performs a bitmap comparison between two snapshots of the same virtual environment at different time points and computes a metric that describes how much the two are different.
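
One simple metric of this kind is the fraction of pixels that changed between the two snapshots; the sketch below illustrates the idea and is not necessarily the exact metric implemented by compare_images.py:

    import numpy as np
    from PIL import Image

    def image_difference(path_a, path_b):
        a = np.asarray(Image.open(path_a).convert('L'), dtype=np.int16)
        b = np.asarray(Image.open(path_b).convert('L'), dtype=np.int16)
        # Fraction of pixels whose grey level changed (snapshots must have equal size).
        return np.count_nonzero(a != b) / float(a.size)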

csv_to_rosbag.py

Syntax: $ python csv_to_rosbag.py <ground_truth_csv_file> <odometry_csv_file> <laser_csv_file> <output_bag_path>
Usage: this tool merges the ground truth, odometry and laser csv files of the RAWSEEDS Bicocca dataset into a single ROS .bag file, performing the necessary synchronisation adjustments. The resulting file is stored in <output_bag_path> with the same name as the ground truth file and can subsequently be played and inspected with the standard rosbag tool. It is an expansion of previous work by Martin Guenther.
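
For illustration, a sketch of the bag-writing step for the odometry stream only; the CSV column layout, file names and topic name are assumptions, and the real tool also writes the ground truth and laser data with the proper synchronisation:

    import csv
    import rosbag
    import rospy
    from nav_msgs.msg import Odometry

    with rosbag.Bag('output.bag', 'w') as bag, open('odometry.csv') as f:
        for row in csv.reader(f):
            timestamp, x, y = float(row[0]), float(row[1]), float(row[2])  # assumed columns
            t = rospy.Time.from_sec(timestamp)
            msg = Odometry()
            msg.header.stamp = t
            msg.header.frame_id = 'odom'
            msg.pose.pose.position.x = x
            msg.pose.pose.position.y = y
            bag.write('/odom', msg, t)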

generateAll.py

Syntax: $ python generateAll.py <folder_of_individual_output_run> [True|False]
Usage: given the directory of an individual output run, this tool generates all the support files that are necessary for the RPPF to actually train the models and perform predictions. More specifically:

  • If the second (optional) parameter is set to True, it converts the .bag ground truth trajectory data into .log ground truth trajectory data. Otherwise, the .log file is assumed to be already present and the conversion is skipped;
  • It creates a Relations folder and generates both the ordered and randomly sampled relations files;
  • It invokes the Freiburg Metric Evaluator tool on the generated relations, storing the corresponding error files in an Error folder;
  • It plots a trajectories.png file overlaying the ground truth trajectory (in green) with the estimated SLAM trajectory (in red);
  • It uses the last available map snapshot and the freshly computed mean translation error from the randomly sampled relations to generate an overlaid errorMap.png file.

Under normal conditions, there is no need to manually execute this component, as it is automatically invoked by launch.py at the end of each exploration run. However, it can also be executed manually, for instance to compute the relations associated with existing real world datasets (e.g. the RAWSEEDS Bicocca indoor datasets).
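
As an illustration of the trajectories.png step, a minimal sketch of the overlay plot, assuming the .log files contain whitespace-separated poses with x and y in the second and third columns (file names and column layout are assumptions):

    import matplotlib.pyplot as plt

    def load_xy(log_path):
        xs, ys = [], []
        with open(log_path) as f:
            for line in f:
                fields = line.split()
                xs.append(float(fields[1]))
                ys.append(float(fields[2]))
        return xs, ys

    gt_x, gt_y = load_xy('ground_truth.log')
    slam_x, slam_y = load_xy('slam.log')
    plt.plot(gt_x, gt_y, color='green', label='ground truth')
    plt.plot(slam_x, slam_y, color='red', label='SLAM estimate')
    plt.legend()
    plt.savefig('trajectories.png')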

launch.py

Syntax: $ python launch.py <runs_datasets_folder>
Usage: given a directory containing the .world and .png files of the datasets to explore and the necessary .inc files for Stage, it launches an automatic exploration process that performs multiple autonomous explorations for each dataset. By modifying the internal parameters of this file it is possible to set the number of runs to perform for each dataset (num_runs), the amount of time to wait between snapshots (seconds_mapsave), the overall maximum number of snapshots (maxmapsave) and the threshold for the image comparison algorithm below which the exploration process is halted.
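
A sketch of the halting logic driven by these parameters; save_snapshot and image_difference are hypothetical helpers standing in for the actual Stage snapshot and compare_images.py calls, and the numeric values shown are placeholders rather than the defaults used by launch.py:

    import time

    seconds_mapsave = 60   # seconds between map snapshots
    maxmapsave = 40        # maximum number of snapshots per run
    threshold = 0.01       # minimum map change required to keep exploring

    def run_exploration(save_snapshot, image_difference):
        previous = None
        for i in range(maxmapsave):
            time.sleep(seconds_mapsave)
            current = save_snapshot(i)
            # Halt when the map stops changing appreciably between snapshots.
            if previous is not None and image_difference(previous, current) < threshold:
                break
            previous = current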

plot_GT.py

Syntax: $ python plot_GT.py <xml_input_path> <output_path> [True|False] [True|False]
Usage: plots the XML files contained in <xml_input_path> in <output_path>. It supports two optional parameters:

  • If the third parameter is True (default: True), the axes are removed from the plot; otherwise they are kept;
  • If the fourth parameter is True (default: False), the topological room graph is drawn on top of the map; otherwise, it is not drawn.

plot_MIT.py

Syntax: $ python plot_MIT.py <xml_input_path> <output_path> [True|False]
Usage: plots the XML files contained in <xml_input_path> in <output_path>. If the third parameter is True (default: True), the axes are removed from the plot; otherwise they are kept.

quatToEulerGT.py

Syntax: $ python quatToEulerGT.py <original_quaternion_csv_ground_truth_file> <slam_log_file>
Usage: this tool converts the ground truth txt file format used by the vision.in.tum.de datasets into a .log file that can be used with generateAll.py. Not only does it convert the original quaternion-based angle representation into an Euler-based one, it also realigns and rotates the ground truth reference frame to match the one used by the SLAM representation, in order to allow meaningful comparisons. NOTE: this is a highly experimental tool! As we lack a formal and complete definition of the rototranslation required to align the two frames, the value of the angle must be determined either by hand and guesswork or by using the experimental angle estimation routine included in the script; even the latter approach requires manual work, in that it is necessary to specify a pair of matching timestamp lines in the two files to estimate the angle difference.
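
The two core operations are the quaternion-to-yaw conversion and a planar rotation of the ground truth frame by the estimated alignment angle; a minimal sketch (the alignment angle itself must still be determined as described above):

    import math

    def quaternion_to_yaw(qx, qy, qz, qw):
        # Standard yaw extraction from a unit quaternion.
        return math.atan2(2.0 * (qw * qz + qx * qy),
                          1.0 - 2.0 * (qy * qy + qz * qz))

    def rotate_xy(x, y, angle):
        # Rotate a ground truth point into the SLAM reference frame.
        return (x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle))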

rescaler.py

Syntax: $ python rescaler.py <xml_input_path> <xml_output_path> <scale>
Usage: given an XML input path, it rescales all XML files in it by a factor of <scale>, interpreted as a magnification factor on the number of pixels. For instance, if the original XML file has a scale of 1 pixel = 10 centimeters and a line is 200 pixels long, a <scale> of 2 turns the XML scale into 1 pixel = 5 centimeters and doubles the line length to 400 pixels. It is a necessary preprocessing step in order to plot higher resolution images. It works on both "standard" XML files and MIT XML files, provided that the latter have already been preprocessed with the addMITScale.py script.
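
A minimal sketch of the rescaling, assuming pixel coordinates are stored in x/y style attributes (the attribute names are assumptions; the real script handles the full layout schema, including the scale element):

    import xml.etree.ElementTree as ET

    def rescale(xml_in, xml_out, scale):
        tree = ET.parse(xml_in)
        for elem in tree.iter():
            for attr in ('x', 'y', 'x1', 'y1', 'x2', 'y2'):
                if attr in elem.attrib:
                    # Every pixel coordinate grows by the magnification factor.
                    elem.set(attr, str(float(elem.get(attr)) * scale))
        tree.write(xml_out)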

writecsv.py

Syntax: $ python writecsv.py <runs_output_folder>
Usage: given the general runs output folder, i.e. the one containing all the explored datasets in it, it produces a summary csv file containing, for each run of each dataset, its performance measures.
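
A sketch of the traversal and summary writing, assuming each run folder stores its performance measures in an Errors subfolder (the file names and layout are assumptions):

    import csv
    import os

    def write_summary(runs_output_folder, out_csv='summary.csv'):
        with open(out_csv, 'w') as out:
            writer = csv.writer(out)
            writer.writerow(['dataset', 'run', 'errors'])
            for dataset in sorted(os.listdir(runs_output_folder)):
                dataset_dir = os.path.join(runs_output_folder, dataset)
                if not os.path.isdir(dataset_dir):
                    continue
                for run in sorted(os.listdir(dataset_dir)):
                    error_file = os.path.join(dataset_dir, run, 'Errors', 'errors.txt')
                    if os.path.isfile(error_file):
                        with open(error_file) as f:
                            writer.writerow([dataset, run, f.read().strip()])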

writeTextOnImage.py

Syntax: $ python writeTextOnImage.py <folder_of_individual_output_run>
Usage: this module is part of the RPPF autonomous exploration system and is not meant to be invoked manually. It takes the last available map snapshot from the Maps folder within <folder_of_individual_output_run> and overlays it with the value of the mean translation error, as computed by the Freiburg Metric Evaluator on the randomly sampled relations. Under normal conditions, there is no need to manually execute this component, as it is automatically invoked by generateAll.py at the end of its computation. However, it can also be executed manually.
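
A minimal sketch of the overlay step with Pillow; the snapshot selection and text placement are simplified:

    from PIL import Image, ImageDraw

    def overlay_error(map_path, out_path, mean_translation_error):
        image = Image.open(map_path).convert('RGB')
        draw = ImageDraw.Draw(image)
        # Write the mean translation error in the top-left corner of the map.
        draw.text((10, 10), 'mean translation error: %.3f m' % mean_translation_error,
                  fill=(255, 0, 0))
        image.save(out_path)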

xmlToWorld.py

Syntax: $ python xmlToWorld.py <xml_input_folder> <world_output_folder>
Usage: for each XML file inside the specified <xml_input_folder>, this tool produces a corresponding .world file inside <world_output_folder>.


predictor

batch_extract.py

Syntax: $ python batch_extract.py
Usage: this tool reconstructs the layouts of the environments. It takes as input a folder containing the batch of .world and .png files of the environments to be analysed and produces a folder containing a subfolder for every environment, each holding the .xml file with the reconstructed layout as well as some additional debugging images. The parameters of this tool, including (but not limited to) the source and destination folders, are specified in the parameters.py module. Please refer to the thesis of Calabrese and Arcerito for additional information about the effect of the parameters.

analyzer.py

Syntax: $ python analyzer.py <runs_folder> <layouts_folder> <voronoi_folder> <world_folder> <models_folder> <[optional args]>
Usage: this is the analysis tool of the framework. Its task is to analyze the error data of the different runs of the training datasets, correlate it with properties extracted from such datasets and build a prediction model. It requires the following parameters:

  • <runs_folder> is the folder in which all the training datasets and their respective runs are stored. At the top level, it must contain a folder for each dataset included in the training set. Within each dataset's folder, there must be a folder for each run that needs to be analyzed, named using the "run<N>" convention, where <N> is replaced by an increasing positive non-zero integer (e.g. "run1", "run2", "run3", ...);
  • <layouts_folder> is the folder in which the layout information of each training dataset, as extracted by the Layout Extractor, is stored. At the top level, it must contain a folder for each dataset included in the training set. Within each dataset's folder, there must be the .xml file generated by the Layout Extractor. Please note that this folder is not required for the computation of the voronoi travelled distance estimator, but only for the computation of those metrics that are related to the topological (i.e. rooms and their connections) features of the datasets, and may thus be removed in a future revision of the framework;
  • <voronoi_folder> is the folder in which the data related to the voronoi graph of each training dataset, as extracted by the Voronoi Extractor, is stored. At the top level, it must contain two subfolders: one holding the voronoi images and one holding their filtered logical models. The image subfolder must contain a <datasetname_voronoi.png> file for each dataset included in the training set; the filtered subfolder, initially empty, is automatically populated by the analysis tool upon launch with a filtered logical model of each voronoi graph, to speed up subsequent invocations of the tool;
  • <world_folder> is the folder in which the Stage data of each training dataset, as used to perform the Stage simulations, is stored. At the top level, it must contain a <datasetname.world> and a <datasetname.png> file for each dataset included in the training set. This folder is used to establish which datasets should be considered part of the training set;
  • <models_folder> is the output folder in which the tool stores the results of the correlation analyses performed on the training dataset;
  • --n_folds: specifies the k number of folds of k-fold cross validation (default is 5);
  • --laser_range: specifies the maximum range in meters of the virtual laser (default is 30);
  • --laser_fov: specifies the field of view in degrees of the virtual laser (default is 270);
  • --min_rotation_distance: specifies the minimum distance between two nodes for the computation of the travelled rotation (default is 0.5);
  • --linear_regression: uses the linear regression technique for learning;
  • --feature_selection: uses the feature selection technique for learning;
  • --elastic_net: uses the ElasticNet regularized regression technique for learning (this is the default behavior; see the sketch after this list);
  • --predictor: specifies the predictor to be used with the linear regression machine learning technique (default is "overrideAll", which trains all possible predictors);
  • --debug_mode: specifies whether the true trajectory length and true trajectory rotation features should be used. 0 disables them, 1 enables them in addition to the other features, 2 enables them with all other features disabled (default is 0);
  • --voronoi_progress: if enabled, saves a snapshot of the Voronoi graph traversal process for each visited node. Note that this option considerably slows down the computation, and should be used for visualization and debugging purposes only;
  • --voronoi_progress_folder: specifies the location where to save the Voronoi graph traversal snapshots if the Voronoi progress option is enabled.
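
A minimal sketch of the default learning configuration, ElasticNet regression evaluated with k-fold cross validation; the feature/target file names and the loading code are assumptions, since the analysis tool computes the features internally:

    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.model_selection import cross_val_score

    X = np.loadtxt('features.csv', delimiter=',')   # one row per run, one column per feature
    y = np.loadtxt('targets.csv', delimiter=',')    # e.g. mean translation error of each run

    model = ElasticNet(alpha=0.1, l1_ratio=0.5)
    # --n_folds controls the number of folds (default 5).
    scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_absolute_error')
    print('mean CV error: %.3f' % -scores.mean())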

predictor.py

Syntax: $ python predictor.py <layouts_folder> <models_folder> <datasets_to_predict_folder>
Usage: This is the prediction tool of the framework. Its task is to use the models obtained by the analysis tool to predict the error data of the desired test datasets.
It requires the following parameters:

  • <layouts_folder> is the folder in which the layout information of each training dataset, as extracted by the Layout Extractor, is stored;
  • <models_folder> is the input folder from which the tool loads the trained models;
  • <datasets_to_predict_folder> is the folder in which all the data related to the datasets of the test set, for which the tool must perform predictions, is stored. At the top level, it must contain a <datasetname.world> and a <datasetname.png> file for each dataset included in the test set. Additionally, it must contain a voronoi folder, which in turn must contain the same two subfolders (voronoi images and filtered logical models) organized in the same way as the training voronoi folder.
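
A minimal sketch of the prediction step, assuming the trained models are serialized scikit-learn estimators and that the test features have already been extracted (the file names are assumptions):

    import numpy as np
    import joblib

    model = joblib.load('models/translation_error.pkl')
    X_test = np.loadtxt('test_features.csv', delimiter=',')
    # Predict the expected error measure for each test dataset.
    predicted_errors = model.predict(X_test)
    print(predicted_errors)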

/devel/lib/ipa_room_segmentation

Note: this is a temporary path that is only available after compilation. The binaries in this path should be moved into a more appropriate permanent folder once the work on the RPPF is done.

voronoi_extractor

Syntax: $ ./voronoi_extractor <world_folder> <voronoi_folder> true|false
Usage: given a folder containing the .world and .png files of a set of environments, it extracts the corresponding voronoi graphs and stores them in <voronoi_folder>. Note that the <voronoi_folder> passed as an argument must point to the folder in which the analysis tool expects the voronoi images, not the top-level one. The third argument can be either true or false but presently can't be omitted: if true, the voronoi graph is drawn on top of the environment map, whereas if false it is drawn alone in an otherwise clean image (the former is essentially a debug option, while the latter produces the actual image expected by the analysis tool).