bpycv: Computer Vision and Deep Learning Utils for Blender

Contents: Features | Install | Demo | Tips

Figure 1: Rendered instance annotation, RGB image and depth

▮ Features

  • Render annotations for semantic segmentation, instance segmentation and panoptic segmentation
  • Generate 6DoF pose ground truth
  • Render depth ground truth
  • Pre-defined domain randomization (background, light and distractors)
  • Easy installation and demo running
  • Docker support: docker run -v /tmp:/tmp diyer22/bpycv (see Dockerfile)
  • A Python codebase for building synthetic datasets (see example/ycb_demo.py)
  • Conversion to Cityscapes annotation format
  • Easy development and debugging due to:
    • No complicated packaging
    • Use of Blender's native API and calling methods

News: We won 🥈 2nd place in the IROS 2020 Open Cloud Robot Table Organization Challenge (OCRTOC)

▮ Install

bpycv supports Blender 2.9, 3.0 and 3.1+

  1. Download and install Blender from blender.org.

  2. Open the Blender directory in a terminal and run the install script:

# For Windows users: ensure PowerShell has administrator permission

# Ensure pip: equal to running /blender-path/3.xx/python/bin/python3.10 -m ensurepip
./blender -b --python-expr "from subprocess import sys,call;call([sys.executable,'-m','ensurepip'])"
# Update the pip toolchain
./blender -b --python-expr "from subprocess import sys,call;call([sys.executable]+'-m pip install -U pip setuptools wheel'.split())"
# pip install bpycv
./blender -b --python-expr "from subprocess import sys,call;call([sys.executable]+'-m pip install -U bpycv'.split())"
# Check that bpycv is ready
./blender -b -E CYCLES --python-expr "import bpycv,cv2;d=bpycv.render_data();bpycv.tree(d);cv2.imwrite('/tmp/try_bpycv_vis(inst-rgb-depth).jpg', d.vis()[...,::-1])"
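
The final check unpacks to this short script, runnable with ./blender -b -E CYCLES -P check_bpycv.py (check_bpycv.py is a hypothetical file name); a minimal sketch of the same calls:

# check_bpycv.py -- same calls as the one-line check above (hypothetical file name)
import bpycv
import cv2

result = bpycv.render_data()  # render RGB, instance annotation and depth of the current scene
bpycv.tree(result)            # print the nested structure of the result dict
# result.vis() tiles instance/RGB/depth into one preview; [..., ::-1] converts RGB to OpenCV's BGR
cv2.imwrite("/tmp/try_bpycv_vis(inst-rgb-depth).jpg", result.vis()[..., ::-1])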

▮ Demo

1. Fast Instance Segmentation and Depth Demo

Copy and paste this code into Scripting/Text Editor and click the Run Script button (or press Alt+P):

import cv2
import bpy
import bpycv
import random
import numpy as np

# remove all MESH objects
for obj in list(bpy.data.objects):
    if obj.type == "MESH":
        bpy.data.objects.remove(obj)

for index in range(1, 20):
    # create cubes and spheres as instances at random locations
    location = [random.uniform(-2, 2) for _ in range(3)]
    if index % 2:
        bpy.ops.mesh.primitive_cube_add(size=0.5, location=location)
        categories_id = 1
    else:
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=location)
        categories_id = 2
    obj = bpy.context.active_object
    # give each instance a unique inst_id, which is used to generate the instance annotation
    obj["inst_id"] = categories_id * 1000 + index

# render image, instance annotation and depth in one line of code
result = bpycv.render_data()
# result["ycb_meta"] is 6d pose GT

# save result
cv2.imwrite(
    "demo-rgb.jpg", result["image"][..., ::-1]
)  # convert the RGB image to OpenCV's BGR order

# save instance map as 16-bit png
cv2.imwrite("demo-inst.png", np.uint16(result["inst"]))
# the value of each pixel represents the inst_id of the object

# convert depth units from meters to millimeters
depth_in_mm = result["depth"] * 1000
cv2.imwrite("demo-depth.png", np.uint16(depth_in_mm))  # save as 16bit png

# visualize instance mask, RGB and depth for humans
cv2.imwrite("demo-vis(inst_rgb_depth).jpg", result.vis()[..., ::-1])

Open ./demo-vis(inst_rgb_depth).jpg:

demo-vis(inst_rgb_depth)
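
Since the demo encodes inst_id = categories_id * 1000 + index, the category and per-object index can be recovered from the saved 16-bit maps. A minimal sketch of reading them back (file names from the demo above; background pixels are assumed to be 0):

import cv2
import numpy as np

# read back the 16-bit annotation maps written by the demo
inst = cv2.imread("demo-inst.png", cv2.IMREAD_UNCHANGED).astype(np.int32)
depth = cv2.imread("demo-depth.png", cv2.IMREAD_UNCHANGED) / 1000.0  # mm -> m

category_map = inst // 1000  # 1 = cube, 2 = sphere, per the demo's encoding
index_map = inst % 1000      # per-object index within the category

print("inst_ids present:", np.unique(inst))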

2. YCB Demo

mkdir ycb_demo
cd ycb_demo/

# prepare code and example data
git clone https://github.com/DIYer22/bpycv
git clone https://github.com/DIYer22/bpycv_example_data

cd bpycv/example/

blender -b -P ycb_demo.py

cd dataset/vis/
ls .  # the visualization results are saved here
# 0.jpg

Open the visualization result ycb_demo/bpycv/example/dataset/vis/0.jpg:

instance_map | RGB | depth

example/ycb_demo.py includes:

  • Domain randomization for background, light and distractors (from ShapeNet)
  • A codebase for building synthetic datasets based on the YCB dataset

3. 6DoF Pose Demo

Generate and visualize 6DoF pose GT: example/6d_pose_demo.py
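
To peek at the pose ground truth without the full demo, the structure of result["ycb_meta"] can be printed after rendering. A minimal sketch, assuming bpycv.tree accepts the "ycb_meta" sub-dict as it does the full result; the exact layout is best confirmed against 6d_pose_demo.py:

import bpycv

# objects need an "inst_id" custom property, as in the demo above
result = bpycv.render_data()

# "ycb_meta" holds the 6DoF pose ground truth; print its nested structure
bpycv.tree(result["ycb_meta"])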

▮ Tips

Blender may not be able to load .obj or .dae files from the YCB and ShapeNet datasets directly.
It's better to convert and clean them with meshlabserver by running meshlabserver -i raw.obj -o for_blender.obj -m wt
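
To convert many meshes at once, the same meshlabserver command can be scripted. A minimal sketch, assuming meshlabserver is on PATH; the ycb_models/*/raw.obj layout is hypothetical:

import glob
import subprocess

# batch-convert raw meshes into Blender-friendly .obj files (hypothetical layout)
for src in glob.glob("ycb_models/*/raw.obj"):
    dst = src.replace("raw.obj", "for_blender.obj")
    subprocess.run(["meshlabserver", "-i", src, "-o", dst, "-m", "wt"], check=True)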



Suggestions and pull requests are welcome 😊
