This is a Python library for online task discovery with mobile robots. It combines exploration, perception, and a novel data structure based on situational affordances to build an actionable environment representation: the behavior-oriented situational graph. The framework can easily be extended with new behaviors, new situations, and new mission objectives.
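As a rough illustration of the idea, a behavior-oriented situational graph can be modeled as situation nodes annotated with affordances, connected by edges that name the behaviors the robot can execute between them; task discovery then amounts to matching mission objectives against the affordances in the graph. The sketch below is a minimal, hypothetical rendering of that shape; none of the names are this repo's actual API.

```python
# Minimal sketch of a behavior-oriented situational graph.
# All names are hypothetical, not the actual classes in this repo.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Situation:
    """A node: a place or state annotated with what it affords the robot."""
    name: str
    affordances: frozenset[str]  # e.g. {"explore", "visit", "assess_victim"}


@dataclass
class SituationalGraph:
    """Adjacency map; each edge names a behavior executable between situations."""
    edges: dict[Situation, list[tuple[str, Situation]]] = field(default_factory=dict)

    def add_behavior_edge(self, src: Situation, behavior: str, dst: Situation) -> None:
        self.edges.setdefault(src, []).append((behavior, dst))

    def actionable_tasks(self, objective: str) -> list[Situation]:
        """Task discovery: all situations affording a given mission objective."""
        nodes = set(self.edges) | {d for outs in self.edges.values() for _, d in outs}
        return [s for s in nodes if objective in s.affordances]


# A frontier affords exploration; a detected victim affords assessment.
g = SituationalGraph()
start = Situation("waypoint_0", frozenset({"visit"}))
frontier = Situation("frontier_1", frozenset({"explore"}))
victim = Situation("victim_1", frozenset({"assess_victim"}))
g.add_behavior_edge(start, "goto", frontier)
g.add_behavior_edge(frontier, "explore", victim)
print(g.actionable_tasks("assess_victim"))  # [Situation(name='victim_1', ...)]
```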
- requires Python 3.9
- go to the repo folder: `cd <path>/situational-graph`
- install the dependencies: `pip install -r requirements.txt`
- install the repo as a local editable package: `pip install -e .`
- select a demo config in `src/configuration/config.py` and run `__main__.py` (see the sketch after this list).
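For illustration only, the config-selection step might look roughly like this; the class, enum, and field names below are assumptions and will differ from the actual contents of `src/configuration/config.py`:

```python
# Hypothetical sketch of a demo config; not this repo's actual API.
from dataclasses import dataclass
from enum import Enum, auto


class Scenario(Enum):
    SIMULATION = auto()  # run against the simulated environment
    REAL_SPOT = auto()   # run against a real Boston Dynamics Spot


@dataclass
class Config:
    scenario: Scenario = Scenario.SIMULATION
    max_mission_steps: int = 500  # illustrative knob


# Edit this line to pick a demo, then run __main__.py:
cfg = Config(scenario=Scenario.SIMULATION)
```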
- requires a Boston Dynamics Spot
- set your login information in `src/data_providers/real/login.json` (see the example after this list)
- select a demo config in `src/configuration/config.py` and run `__main__.py`.
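The exact keys expected in `login.json` are defined by this repo, but a Spot login file typically carries the robot's address and credentials. The keys and values below are placeholder assumptions (192.168.80.3 is Spot's default access-point address):

```json
{
  "hostname": "192.168.80.3",
  "username": "user",
  "password": "<your-spot-password>"
}
```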
More generally, robots are expected to interact ever more richly with the world, which is why roboticists are no longer content with simply detecting and recognizing objects in images. Instead, what is desired is higher-level understanding of, and reasoning about, complete dynamic 3D scenes. In robotics and related research fields, this study of understanding is often referred to as semantics: what does the world 'mean' to a robot? That question is strongly tied to the question of how to represent such meaning. Part of this twofold challenge is semantic mapping, which lies at the intersection of computer vision, task & motion planning, and simultaneous localization & mapping. My goal is to advance the design principles of semantic mapping so that its integration generalizes across task domains, starting from an analysis of the specific search-and-rescue domain.
This work was supported by TNO and the AIRlab.
- Sunday, Dec 4 @ 20:03: 5426 total
- Tuesday, Dec 6 @ 20:54: 4897 total
- Wednesday, Dec 21 @ 00:08: 4690 total (in src)