CoSTAR

Collaborative System for Task Automation and Recognition

CoSTAR is an end-user interface for authoring robot task plans, developed at Johns Hopkins University. It integrates ROS capabilities for planning, predicate inference, gripper control, and perception for use with the KUKA LBR iiwa and Universal Robots arms, and provides a Behavior Tree based user interface.

Video: CoSTAR Expert User Demonstration

Our goal is to build a system that facilitates end-user instruction of robots to solve a variety of problems. CoSTAR allows users to program robots to perform complex tasks such as sorting, assembly, and more. Tasks are represented as Behavior Trees; a schematic example is shown below. For videos of our system in action, check out the CoSTAR YouTube Channel.
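
As a rough illustration (a schematic, not CoSTAR's actual task syntax; all node names here are hypothetical), a simple pick-and-place task might be expressed as the following Behavior Tree, where a Sequence node runs its children in order and fails as soon as any child fails:

  Sequence: pick-and-place
    DetectObjects
    Sequence: pick
      MoveToGrasp(object)
      CloseGripper
    Sequence: place
      MoveToPlace(target)
      OpenGripper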

To take full advantage of CoSTAR, you will need an RGB-D camera and supported hardware:

  • a KUKA LBR iiwa or Universal Robots UR5
  • a Robotiq 3-finger gripper or 2-finger gripper
  • a da Vinci Research Kit (support is in development)

This is a project by members of the JHU Laboratory for Computational Sensing and Robotics, namely Chris Paxton, Kel Guerin, Andrew Hundt, and Felix Jonathan. If you find this code useful, please cite:

@inproceedings{paxton2017costar,
  title={Co{STAR}: Instructing Collaborative Robots with Behavior Trees and Vision},
  author={Paxton, Chris and Hundt, Andrew and Jonathan, Felix and Guerin, Kelleher and Hager, Gregory D},
  booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
  note={Available as arXiv preprint arXiv:1611.06145},
  year={2017}
}

Interested in contributing? Check out the development guidelines, which are a work in progress.

Installation

Check out installation instructions.

We are also working on experimental install scripts; in the meantime, a typical manual setup looks like the sketch below.
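
A minimal manual install sketch, assuming a standard catkin workspace at ~/catkin_ws and an already-sourced ROS environment (the workspace path is an assumption; the installation instructions above are authoritative):

cd ~/catkin_ws/src
git clone https://github.com/cpaxton/costar_stack.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y   # install system dependencies
catkin_make                                          # build the workspace
source devel/setup.bash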

Tests

Run the IIWA test script:

rosrun costar_bringup iiwa_test.py

This starts Gazebo and moves the arm to a new position. If the test passes, CoSTAR is set up correctly.
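
If the test fails, a few standard ROS diagnostics (not CoSTAR-specific) can help narrow down the problem:

rosnode list                 # confirm the expected nodes came up
rostopic list | grep joint   # check that joint states are being published
rosrun tf view_frames        # write the current TF tree to frames.pdf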

There is a more detailed startup guide.

CoSTAR Packages

For more information on how to collect data for the "block stacking" task, check out the block stacking data collection notes.

  • data collection with an RGB-D camera
    • Object on Table Segmenter: Utility for dataset collection with depth cameras. It provides a simple process for defining which regions of a scene are table, object, robot, etc., and generates files accordingly.
  • locating AR tag markers with a known shape in an image
    • alvar_data_collection: Utilities for defining the black-and-white printed AR tags we use with ar_track_alvar, which provide the positions and orientations of objects in space (see the video above); a usage sketch follows this list.
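
As a rough usage sketch, older versions of the ar_track_alvar individualMarkers node take their configuration as positional arguments (newer releases read the equivalent ROS parameters instead). The marker size, error thresholds, topic names, and output frame below are assumptions to adapt to your camera and printed tags:

# marker size (cm), max new-marker error, max tracking error, image topic, camera info topic, output frame
rosrun ar_track_alvar individualMarkers 4.4 0.08 0.2 /camera/rgb/image_rect_color /camera/rgb/camera_info /camera_link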

Other minor utilities:

  • making changes to robot path planning scenes with MoveIt
    • moveit_collision_environment: Publishes a MoveIt planning scene containing the collision objects and table detected via TF frames defined for those objects; a sketch of how to inspect the resulting scene follows this list.
    • To visualize the current configuration, run roslaunch ur5_moveit_config moveit_rviz.launch
  • handling symmetrical objects
    • object_symmetry_republisher: Takes in object information from perception (for example, sp_segmenter) and outputs poses for the possible symmetries of that object.
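
As a quick sanity check with standard MoveIt and TF tools (not CoSTAR-specific; the topic and frame names below are assumptions that vary by setup), you can confirm that the planning scene and object frames are being published:

rostopic echo /planning_scene -n 1        # print one PlanningScene message, including collision objects
rosrun tf tf_echo base_link ar_marker_0   # report the transform to an example object frame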

Sister repositories

These repositories have been integrated with costar_stack, though they are not all required, depending on your setup. Also see the .travis.yml file in this repository for additional repositories that have been used with costar_stack.

Contact

CoSTAR is maintained by Chris Paxton (cpaxton@jhu.edu).

Other core contributors include:

  • Felix Jonathan
  • Andrew Hundt