Data Storage and Preprocessing Inquiry for 3D Lidar Data in PyTorch #276
Comments
Hi @primacaredataset, there are too many things to unpack here.
I think one key aspect to consider is to store the Ouster data in its raw packet form rather than storing the unpacked PointCloud messages, so consider saving the raw packet topics.
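As a sketch of this suggestion: recording only the raw packet topics instead of the unpacked point clouds keeps the bag far smaller. The topic names below assume the default ouster-ros namespace (`/ouster/lidar_packets` is mentioned later in this thread; `/ouster/imu_packets` is an assumption) — verify against `rostopic list` on your setup.

```shell
# Record only the raw sensor packets (much smaller than the unpacked point clouds).
# Topic names assume the default ouster-ros namespace; verify with `rostopic list`.
rosbag record -O ouster_raw.bag /ouster/lidar_packets /ouster/imu_packets
```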
I haven't explored that myself, so I don't think I could give a good recommendation, but I am pretty sure there are plenty of software projects that have successfully integrated and trained against data from Ouster sensors. We do have one demo/example that shows how to apply a YOLOv5 model to perform tracking using sensor data. This is a 2D example, which probably isn't what you are looking for, but it falls in the same category.
Horizontally, you could limit your beam coverage through the Azimuth Window setting in the sensor configuration. Hope this helps!
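For context, once a scan is assembled as a (channels × azimuth-bins) range image, restricting beams and azimuth after the fact is plain array slicing. A minimal numpy sketch, assuming a 128-beam sensor in a 1024-columns-per-revolution mode (both numbers depend on your sensor model and lidar mode):

```python
import numpy as np

# Hypothetical destaggered range image: 128 beams x 1024 azimuth bins.
CHANNELS, AZIMUTH_BINS = 128, 1024
scan = np.random.rand(CHANNELS, AZIMUTH_BINS).astype(np.float32)

# Keep beams 20..50 (a vertical subset of channels).
beams = scan[20:51, :]

# Keep a 90-degree azimuth window: 0..90 degrees -> the first quarter of the bins.
window = scan[:, : AZIMUTH_BINS * 90 // 360]

print(beams.shape)   # (31, 1024)
print(window.shape)  # (128, 256)
```

Slicing the stored data this way complements the Azimuth Window configuration, which limits what the sensor emits in the first place.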
Hi @Samahu,

Thank you for your comprehensive response. I successfully used the /ouster/lidar_packets topic to save the data, significantly reducing the dataset size. I've completed the saving process and am currently focused on data processing.

Now that I have the saved files, I'm looking to extract the point data from the /ouster/points topic by running the record.launch command. However, I'm uncertain about the exact steps to extract the x, y, z values from the 6.5 million numbers at each time step. Do you have any guidance on how I can extract this data and possibly save it in a .txt or .csv file? I tried to follow the explanations provided by @zuzi-m here: But it seems that the files have changed since then; for example, point_os1.h doesn't exist anymore.

Additionally, regarding extracting a portion of the point cloud, such as a specific number of channels or a particular range of horizontal azimuth, I'm eager to understand how to extract part of the data from these 6.5 million points (the data is saved in rosbag format). For instance, would dividing the 6.5 million points by 360 give the data for azimuth angles between 0 and 1 degree, and dividing by 128 the data for a single channel (or something along those lines)?

Thank you, and best regards.
Once you have saved the raw packets in the bag file, you could extract the point cloud through
This depends on the sensor type that you are using. Please refer to the sensor documentation.
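To illustrate what the x/y/z extraction step looks like in general, here is a hedged numpy sketch of pulling the coordinates out of PointCloud2-style binary data using the field offsets and point step. The 16-byte layout with float32 x/y/z at offsets 0/4/8 below is only a typical example — real code should read the layout from `msg.fields` and `msg.point_step` (in ROS, `sensor_msgs.point_cloud2.read_points` handles this for you):

```python
import numpy as np

# --- Synthetic stand-in for msg.data: 5 points, point_step = 16 bytes,
# x/y/z as little-endian float32 at offsets 0/4/8 (check msg.fields for your layout).
dtype = np.dtype([("x", "<f4"), ("y", "<f4"), ("z", "<f4"), ("pad", "<f4")])
pts = np.zeros(5, dtype=dtype)
pts["x"], pts["y"], pts["z"] = [1, 2, 3, 4, 5], [0.5] * 5, [-1.0] * 5
raw = pts.tobytes()  # this flat byte string is what msg.data holds

# --- Extraction: view the byte buffer as structured records, keep x/y/z.
rec = np.frombuffer(raw, dtype=dtype)
xyz = np.stack([rec["x"], rec["y"], rec["z"]], axis=1)  # shape (N, 3)

# Save as CSV, as asked above.
np.savetxt("cloud.csv", xyz, delimiter=",", header="x,y,z", comments="")
print(xyz.shape)  # (5, 3)
```

The same `frombuffer` view works on a full 131k-point scan; no Python-level loop over the 6.5 million numbers is needed.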
Hi @Samahu,

I have successfully extracted the XYZ data from the /ouster/points topic and saved it in HDF5 format. However, I am facing a challenge: the lidar data arrives at 20 Hz, but because of the point cloud size at each timestamp, saving a frame takes longer than the 50-millisecond interval between messages. Consequently, I am missing a significant portion of the data, with approximately 60% being lost during the saving process. Is there a way to manage this and ensure that I can save all the data without missing any?
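One common fix for this kind of drop is to decouple the subscriber callback from disk I/O: the callback only enqueues the frame (cheap), and a dedicated writer thread drains the queue in the background. Appending to one pre-chunked HDF5 dataset, rather than reopening files per frame, and storing ranges as float16 also help. A minimal stdlib/numpy sketch of the pattern — the HDF5 write is replaced by an in-memory sink here purely for illustration:

```python
import queue
import threading
import numpy as np

q = queue.Queue(maxsize=200)   # bounded: blocks the producer instead of eating RAM
frames_written = []            # stand-in for an h5py dataset append

def writer():
    # Runs in the background; the slow "disk write" happens off the callback path.
    while True:
        frame = q.get()
        if frame is None:      # sentinel -> shut down cleanly
            break
        frames_written.append(frame.copy())  # replace with the real HDF5 write
        q.task_done()

t = threading.Thread(target=writer, daemon=True)
t.start()

def lidar_callback(cloud_xyz: np.ndarray) -> None:
    # This is all the 20 Hz callback does: hand the frame off and return.
    q.put(cloud_xyz)

for _ in range(100):                       # simulate 100 incoming scans
    lidar_callback(np.zeros((1024, 3), dtype=np.float32))

q.put(None)
t.join()
print(len(frames_written))  # 100 -> nothing dropped
```

Alternatively, record the raw packets (as suggested earlier in this thread) and do the heavy conversion offline, where there is no real-time deadline at all.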
Hi,
I have a question regarding the utilization of 3D Lidar data in a PyTorch deep learning framework. I apologize if this question has been asked before; if so, please guide me to the relevant resources.
I recently collected a dataset using the Ouster OS1 through ROS. Each time step yields around 6.3 million data points. Given the 20 Hz sample rate and hours of data collection, managing this volume of data is challenging. My objective is to input these 6.3 million data points at each time step into a deep learning model for a classification task.
My primary concern is how to effectively save the data from the rosbags, including the 6.3 million points per time step, for seamless integration with a deep learning model. Could you recommend an appropriate storage format, and offer any general advice on how to proceed?
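As one concrete storage option (a sketch, not the only right answer): write every scan as a fixed-shape array into a single disk-backed file, so a training loop can index individual frames without loading hours of data into RAM. Chunked HDF5 via h5py works well for this; the numpy-memmap version below shows the same idea with only numpy, assuming fixed-size scans of 128 × 1024 × 3 (toy frame count):

```python
import numpy as np

N_FRAMES, CHANNELS, BINS = 10, 128, 1024   # toy sizes; hours at 20 Hz means N_FRAMES is huge

# Create a disk-backed array; frames are written by index, never all in RAM.
store = np.lib.format.open_memmap(
    "scans.npy", mode="w+", dtype=np.float16, shape=(N_FRAMES, CHANNELS, BINS, 3)
)
for i in range(N_FRAMES):
    store[i] = np.random.rand(CHANNELS, BINS, 3).astype(np.float16)  # one scan
store.flush()

# Later, e.g. inside a PyTorch Dataset.__getitem__, read a single frame cheaply:
loaded = np.load("scans.npy", mmap_mode="r")
frame = np.asarray(loaded[3], dtype=np.float32)
print(frame.shape)  # (128, 1024, 3)
```

Because the loader touches only the requested frame, this plugs straight into a `torch.utils.data.Dataset` whose `__getitem__` returns `loaded[i]`.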
Additionally, I am interested in exploring existing codes or sample GitHub repositories that have applied similar machine learning or deep learning techniques to Ouster Lidar data. Do you have any recommendations in this regard?
Furthermore, I have another question. If I want to consider only a specific subset of the data, such as beams 20 through 50, or a particular range of azimuth angles, is there any existing code that allows me to extract these specific parts from the full 6.3 million data points?
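On extracting an angular sector when the structured range image is not available: given only a flat (N, 3) xyz array, each point's horizontal azimuth can be recovered with `arctan2` and used as a mask. A hedged sketch on synthetic points:

```python
import numpy as np

xyz = np.random.randn(1000, 3).astype(np.float32)  # stand-in for one scan's points

# Horizontal azimuth of every point, in degrees in [0, 360).
az = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 0])) % 360.0

# Keep only points in a 30-60 degree sector.
sector = xyz[(az >= 30.0) & (az < 60.0)]
print(sector.shape[1])  # 3 columns; the row count depends on the data
```

If the scan is still in its (channels × azimuth-bins) structured form, plain slicing of the image (as in the earlier comment about the Azimuth Window) is cheaper than computing angles per point.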
Thank you in advance for your assistance.