Remove lidar code in nuImages #442

Merged: 7 commits, Aug 3, 2020
2 changes: 1 addition & 1 deletion docs/instructions_nuimages.md
@@ -119,7 +119,7 @@ nuImages includes surface classes as well:
| Label | Short Description |
| --- | --- |
| [`flat.driveable_surface`](#1-flatdriveable_surface) | All paved or unpaved surfaces that a car can drive on with no concern of traffic rules. |
- | [`vehicle.ego`](#2-vehicleego) | The vehicle on which the cameras, radar and lidar are mounted, that is sometimes visible at the bottom of the image. |
+ | [`vehicle.ego`](#2-vehicleego) | The vehicle on which the sensors are mounted, that is sometimes visible at the bottom of the image. |

### 1. flat.driveable_surface
![driveable_1](https://www.nuscenes.org/public/images/taxonomy_imgs/nuimages/driveable_1.png)
23 changes: 11 additions & 12 deletions docs/schema_nuimages.md
@@ -24,9 +24,9 @@ attribute {

calibrated_sensor
---------
- Definition of a particular sensor (lidar/camera, but no radar) as calibrated on a particular vehicle.
+ Definition of a particular camera as calibrated on a particular vehicle.
All extrinsic parameters are given with respect to the ego vehicle body frame.
- All camera images come undistorted and rectified.
+ Contrary to nuScenes, all camera images come distorted and unrectified.
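Since nuImages images keep their lens distortion, the pinhole intrinsics stored in `calibrated_sensor` only approximately map camera-frame points to pixels. A minimal numpy sketch of that projection, with an invented `camera_intrinsic` matrix for illustration:

```python
import numpy as np

# Hypothetical 3x3 camera intrinsic matrix, as stored in calibrated_sensor
# (focal lengths and principal point are invented for illustration).
camera_intrinsic = np.array([[1266.4, 0.0, 816.3],
                             [0.0, 1266.4, 491.5],
                             [0.0, 0.0, 1.0]])

def project_point(point_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project a 3D point in the camera frame to pixel coordinates (pinhole model).

    Note: this ignores lens distortion, so on nuImages' distorted images the
    result is only an approximation, especially away from the image center.
    """
    uvw = K @ point_3d          # homogeneous image coordinates
    return uvw[:2] / uvw[2]     # perspective divide

pixel = project_point(np.array([2.0, 1.0, 10.0]), camera_intrinsic)
```

For accurate pixel correspondence on these distorted images, a distortion model would additionally be needed.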
```
calibrated_sensor {
"token": <str> -- Unique record identifier.
@@ -42,7 +42,7 @@ category
---------
Taxonomy of object categories (e.g. vehicle, human).
Subcategories are delineated by a period (e.g. `human.pedestrian.adult`).
- The categories in nuImages are the same as in the nuScenes (w/o lidarseg), plus `flat.driveable_surface`.
+ The categories in nuImages are the same as in nuScenes (w/o lidarseg), plus `flat.driveable_surface`.
```
category {
"token": <str> -- Unique record identifier.
@@ -100,11 +100,11 @@ object_ann {

sample_data
---------
- A sensor data e.g. image or lidar pointcloud. Note that we don't have radar in nuImages.
- Sample_data covers all sensor data, regardless of whether it is a keyframe or not.
- For every keyframe image or lidar, we also include up to 6 past and 6 future sweeps at 2 Hz.
- We can navigate between consecutive lidar or camera sample_datas using the `prev` and `next` pointers.
- Only keyframe (sample) images are annotated.
+ Sample_data contains the images and information about when they were captured.
+ Sample_data covers all images, regardless of whether they are a keyframe or not.
+ Only keyframes are annotated.
+ For every keyframe, we also include up to 6 past and 6 future sweeps at 2 Hz.
+ We can navigate between consecutive images using the `prev` and `next` pointers.
The sample timestamp is inherited from the keyframe camera sample_data timestamp.
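The `prev`/`next` navigation described above can be sketched against a toy in-memory sample_data table (tokens and records invented here; the real SDK resolves tokens with `nuim.get('sample_data', token)`):

```python
# Toy sample_data records keyed by token; 'prev'/'next' chain consecutive
# images of one camera, and '' marks the ends (tokens are invented).
sample_data = {
    'sd0': {'token': 'sd0', 'prev': '',    'next': 'sd1', 'is_key_frame': False},
    'sd1': {'token': 'sd1', 'prev': 'sd0', 'next': 'sd2', 'is_key_frame': True},
    'sd2': {'token': 'sd2', 'prev': 'sd1', 'next': '',    'is_key_frame': False},
}

def get_sample_content(key_token: str) -> list:
    """Collect all sample_datas of a sample by walking the prev and next
    pointers out from the keyframe, returning tokens in chronological order."""
    tokens = [key_token]
    cur = sample_data[key_token]['prev']
    while cur:                      # walk backwards through past sweeps
        tokens.insert(0, cur)
        cur = sample_data[cur]['prev']
    cur = sample_data[key_token]['next']
    while cur:                      # walk forwards through future sweeps
        tokens.append(cur)
        cur = sample_data[cur]['next']
    return tokens

tokens = get_sample_content('sd1')  # → ['sd0', 'sd1', 'sd2']
```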
```
sample_data {
@@ -126,15 +126,14 @@
sample
---------
A sample is an annotated keyframe selected from a large pool of images in a log.
- Every sample has up to 13 lidar sample_datas and 13 camera sample_datas corresponding to it.
- These include the actual lidar and camera keyframe sample_datas, which can be accessed via the `key_*_token` fields.
+ Every sample has up to 13 camera sample_datas corresponding to it.
+ These include the keyframe, which can be accessed via `key_camera_token`.
```
sample {
"token": <str> -- Unique record identifier.
"timestamp": <int> -- Unix time stamp.
"log_token": <str> -- Foreign key pointing to the log.
"key_camera_token": <str> -- Foreign key of the sample_data corresponding to the camera keyframe.
-    "key_lidar_token": <str> -- Foreign key of the sample_data corresponding to the lidar keyframe.
}
```
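Resolving a sample's keyframe image via `key_camera_token` is then a simple chain of token lookups. A dict-backed sketch with invented tokens and filenames (the real SDK uses `nuim.get`):

```python
# Toy tables mirroring the schema above (tokens and filenames are invented).
sample_table = {
    's1': {'token': 's1', 'timestamp': 1532402927, 'log_token': 'log1',
           'key_camera_token': 'sd1'},
}
sample_data_table = {
    'sd1': {'token': 'sd1', 'sample_token': 's1', 'is_key_frame': True,
            'filename': 'samples/CAM_FRONT/img.jpg'},
}

def get(table: dict, token: str) -> dict:
    """Mimic a NuImages.get-style lookup: resolve a token in a table."""
    return table[token]

# Follow the foreign key from the sample to its keyframe sample_data.
sample = get(sample_table, 's1')
key_sd = get(sample_data_table, sample['key_camera_token'])
filename = key_sd['filename']
```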

@@ -146,7 +145,7 @@ A specific sensor type.
sensor {
"token": <str> -- Unique record identifier.
"channel": <str> -- Sensor channel name.
-    "modality": <str> {camera, lidar} -- Sensor modality. Supports category(ies) in brackets.
+    "modality": <str> -- Sensor modality. Always "camera" in nuImages.
}
```

408 changes: 31 additions & 377 deletions python-sdk/nuimages/nuimages.py

Large diffs are not rendered by default.

53 changes: 22 additions & 31 deletions python-sdk/nuimages/scripts/render_images.py
@@ -25,11 +25,8 @@ def render_images(nuim: NuImages,
Note: The images rendered here are keyframes only.
:param nuim: NuImages instance.
:param mode: What to render:
+ "image" for the image without annotations,
"annotated" for the image with annotations,
- "raw" for the image without annotations,
- "dept_dense" for dense depth image,
- "dept_sparse" for sparse depth image,
- "pointcloud" for a birds-eye view of the pointcloud,
"trajectory" for a rendering of the trajectory of the vehicle,
"all" to render all of the above separately.
:param cam_name: Only render images from a particular camera, e.g. "CAM_BACK".
@@ -45,7 +42,7 @@ def render_images(nuim: NuImages,
"""
# Check and convert inputs.
assert out_type in ['image', 'video'], ' Error: Unknown out_type %s!' % out_type
- all_modes = ['annotated', 'image', 'depth_dense', 'depth_sparse', 'pointcloud', 'trajectory']
+ all_modes = ['image', 'annotated', 'trajectory']
assert mode in all_modes + ['all'], 'Error: Unknown mode %s!' % mode
assert not (out_type == 'video' and mode == 'trajectory'), 'Error: Cannot render "trajectory" for videos!'

@@ -75,8 +72,8 @@ def render_images(nuim: NuImages,
sample_tokens_cam = []
for sample_token in sample_tokens:
sample = nuim.get('sample', sample_token)
- sd_token_camera = sample['key_camera_token']
- sensor = nuim.shortcut('sample_data', 'sensor', sd_token_camera)
+ key_camera_token = sample['key_camera_token']
+ sensor = nuim.shortcut('sample_data', 'sensor', key_camera_token)
if sensor['channel'] == cam_name:
sample_tokens_cam.append(sample_token)
sample_tokens = sample_tokens_cam
@@ -103,8 +100,8 @@ def render_images(nuim: NuImages,
sample_tokens_cleaned = []
for sample_token in sample_tokens:
sample = nuim.get('sample', sample_token)
- sd_token_camera = sample['key_camera_token']
- category_names = sd_to_object_cat_names[sd_token_camera]
+ key_camera_token = sample['key_camera_token']
+ category_names = sd_to_object_cat_names[key_camera_token]
if any([c in category_names for c in filter_categories]):
sample_tokens_cleaned.append(sample_token)
sample_tokens = sample_tokens_cleaned
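The category filter in the hunk above keeps a sample if any of its keyframe annotations matches a requested category. A standalone sketch (the token-to-category mapping is invented here; the script derives it from `object_ann`):

```python
# Invented mapping from key_camera_token to the category names annotated in
# that keyframe; in the script this is built from the object_ann table.
sd_to_object_cat_names = {
    'sd1': {'vehicle.car', 'human.pedestrian.adult'},
    'sd2': {'vehicle.truck'},
    'sd3': set(),
}
samples = [{'token': 's1', 'key_camera_token': 'sd1'},
           {'token': 's2', 'key_camera_token': 'sd2'},
           {'token': 's3', 'key_camera_token': 'sd3'}]
filter_categories = ['human.pedestrian.adult']

# Keep samples whose keyframe contains at least one requested category.
kept = [s['token'] for s in samples
        if any(c in sd_to_object_cat_names[s['key_camera_token']]
               for c in filter_categories)]
# kept == ['s1']
```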
@@ -120,42 +117,42 @@ def render_images(nuim: NuImages,
sample = nuim.get('sample', sample_token)
log = nuim.get('log', sample['log_token'])
log_name = log['logfile']
- sd_token_camera = sample['key_camera_token']
- sensor = nuim.shortcut('sample_data', 'sensor', sd_token_camera)
+ key_camera_token = sample['key_camera_token']
+ sensor = nuim.shortcut('sample_data', 'sensor', key_camera_token)
sample_cam_name = sensor['channel']
- sd_tokens_camera = nuim.get_sample_content(sample_token, modality='camera')
+ sd_tokens = nuim.get_sample_content(sample_token)

# We cannot render a video if there are missing camera sample_datas.
- if len(sd_tokens_camera) < 13 and out_type == 'video':
+ if len(sd_tokens) < 13 and out_type == 'video':
continue

for mode in modes:
out_path_prefix = os.path.join(out_dir, '%s_%s_%s_%s' % (log_name, sample_token, sample_cam_name, mode))
if out_type == 'image':
- write_image(nuim, sd_token_camera, mode, '%s.jpg' % out_path_prefix)
+ write_image(nuim, key_camera_token, mode, '%s.jpg' % out_path_prefix)
elif out_type == 'video':
- write_video(nuim, sd_tokens_camera, mode, out_path_prefix, cleanup=cleanup)
+ write_video(nuim, sd_tokens, mode, out_path_prefix, cleanup=cleanup)


def write_video(nuim: NuImages,
- sd_tokens_camera: List[str],
+ sd_tokens: List[str],
mode: str,
out_path_prefix: str,
cleanup: bool = True) -> None:
"""
Render a video by combining all the images of type mode for each sample_data.
:param nuim: NuImages instance.
- :param sd_tokens_camera: All camera sample_data tokens in chronological order.
+ :param sd_tokens: All sample_data tokens in chronological order.
:param mode: The mode - see render_images().
:param out_path_prefix: The file prefix used for the images and video.
:param cleanup: Whether to delete images after rendering the video.
"""
# Loop through each frame to create the video.
out_paths = []
- for i, sd_token_camera in enumerate(sd_tokens_camera):
+ for i, sd_token in enumerate(sd_tokens):
out_path = '%s_%d.jpg' % (out_path_prefix, i)
out_paths.append(out_path)
- write_image(nuim, sd_token_camera, mode, out_path)
+ write_image(nuim, sd_token, mode, out_path)

# Create video.
first_im = cv2.imread(out_paths[0])
@@ -177,27 +174,21 @@ def write_video(nuim: NuImages,
out.release()


- def write_image(nuim: NuImages, sd_token_camera: str, mode: str, out_path: str) -> None:
+ def write_image(nuim: NuImages, sd_token: str, mode: str, out_path: str) -> None:
"""
Render a single image of type mode for the given sample_data.
:param nuim: NuImages instance.
- :param sd_token_camera: The sample_data token of the camera.
+ :param sd_token: The sample_data token.
:param mode: The mode - see render_images().
:param out_path: The file to write the image to.
"""
if mode == 'annotated':
- nuim.render_image(sd_token_camera, annotation_type='all', out_path=out_path)
+ nuim.render_image(sd_token, annotation_type='all', out_path=out_path)
elif mode == 'image':
- nuim.render_image(sd_token_camera, annotation_type='none', out_path=out_path)
- elif mode == 'depth_dense':
- nuim.render_depth_dense(sd_token_camera, out_path=out_path)
- elif mode == 'depth_sparse':
- nuim.render_depth_sparse(sd_token_camera, out_path=out_path)
- elif mode == 'pointcloud':
- nuim.render_pointcloud(sd_token_camera, out_path=out_path)
+ nuim.render_image(sd_token, annotation_type='none', out_path=out_path)
elif mode == 'trajectory':
- sd_camera = nuim.get('sample_data', sd_token_camera)
- nuim.render_trajectory(sd_camera['sample_token'], out_path=out_path)
+ sample_data = nuim.get('sample_data', sd_token)
+ nuim.render_trajectory(sample_data['sample_token'], out_path=out_path)
else:
raise Exception('Error: Unknown mode %s!' % mode)

31 changes: 13 additions & 18 deletions python-sdk/nuimages/tests/test_foreign_keys.py
@@ -115,28 +115,23 @@ def test_prev_next(self) -> None:
Test that the prev and next points in sample_data cover all entries and have the correct ordering.
"""
# Register all sample_datas.
- sample_to_sample_datas = {'camera': defaultdict(lambda: []), 'lidar': defaultdict(lambda: [])}
+ sample_to_sample_datas = defaultdict(lambda: [])
for sample_data in self.nuim.sample_data:
- if sample_data['fileformat'] == 'jpg':
- modality = 'camera'
- else:
- modality = 'lidar'
- sample_to_sample_datas[modality][sample_data['sample_token']].append(sample_data['token'])
+ sample_to_sample_datas[sample_data['sample_token']].append(sample_data['token'])

print('Checking prev-next pointers for completeness and correct ordering...')
for sample in self.nuim.sample:
- for modality in ['camera', 'lidar']:
- # Compare the above sample_datas against those retrieved by using prev and next pointers.
- sd_tokens_pointers = self.nuim.get_sample_content(sample['token'], modality)
- sd_tokens_all = sample_to_sample_datas[modality][sample['token']]
- self.assertTrue(set(sd_tokens_pointers) == set(sd_tokens_all),
- 'Error: Inconsistency in prev/next pointers!')
-
- timestamps = []
- for sd_token in sd_tokens_pointers:
- sample_data = self.nuim.get('sample_data', sd_token)
- timestamps.append(sample_data['timestamp'])
- self.assertTrue(sorted(timestamps) == timestamps, 'Error: Timestamps not properly sorted!')
+ # Compare the above sample_datas against those retrieved by using prev and next pointers.
+ sd_tokens_pointers = self.nuim.get_sample_content(sample['token'])
+ sd_tokens_all = sample_to_sample_datas[sample['token']]
+ self.assertTrue(set(sd_tokens_pointers) == set(sd_tokens_all),
+ 'Error: Inconsistency in prev/next pointers!')
+
+ timestamps = []
+ for sd_token in sd_tokens_pointers:
+ sample_data = self.nuim.get('sample_data', sd_token)
+ timestamps.append(sample_data['timestamp'])
+ self.assertTrue(sorted(timestamps) == timestamps, 'Error: Timestamps not properly sorted!')
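The ordering assertion in this test reduces to comparing the timestamps gathered along the pointer chain with their sorted copy, which can be sketched in isolation (timestamp values invented):

```python
# Timestamps collected in prev/next pointer order (values invented).
timestamps = [1532402927644, 1532402928144, 1532402928644]

def is_chronological(ts: list) -> bool:
    """True if the pointer order agrees with chronological order."""
    return sorted(ts) == ts

ok = is_chronological(timestamps)           # → True
bad = is_chronological(timestamps[::-1])    # → False
```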


if __name__ == '__main__':
11 changes: 0 additions & 11 deletions python-sdk/nuimages/tests/test_overflow_bug.py

This file was deleted.
