System architecture

Current State Overview

The system architecture has been redesigned to use topics, actions, and services as required by each module's functionality.

The system transform tree looks as follows:

Modules

Natural Language Understanding (NLU)

This module is in charge of human-robot interaction (HRI) via speech-to-text, following predefined stories/forms/conversations according to the challenge.

Inputs (text):

  • Commands (stop, go to)
  • Conversation flow

Output (robot actions):

  • say action (response)
  • general action

Process:

  1. NLU (RASA) receives a starting intent that triggers a specific form
  • Ex. “Please get the name of the person”
  2. The form is executed and the action server sends actions to the main engine module.

Note: Each time an utterance is generated, a say action must be sent to the parser/main engine via an HTTP request; a sketch of this flow is shown below.
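The following is a minimal sketch of that flow, assuming a RASA custom action that forwards the utterance to the main engine over HTTP. The endpoint URL, payload layout, and action name are illustrative assumptions, not the actual parser/main engine interface.

```python
# Minimal sketch: a RASA custom action that forwards a "say" action to the
# main engine over HTTP. The endpoint URL and JSON payload layout are
# assumptions for illustration only.
from typing import Any, Dict, List, Text

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

MAIN_ENGINE_URL = "http://localhost:5000/action"  # assumed endpoint


class ActionSay(Action):
    def name(self) -> Text:
        return "action_say"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        utterance = "Please tell me your name"
        # Every generated utterance is also sent to the main engine as a say action.
        requests.post(MAIN_ENGINE_URL, json={"action": "say", "text": utterance})
        dispatcher.utter_message(text=utterance)
        return []
```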

Object recognition

The object recognition module publishes a constant flow of data via a ROS topic. In general, this module only needs to keep track of specific objects (position and orientation).

System proposal:

  • Feedback: publisher node
  • Personalization: define the specific objects to be tracked (via a topic); a sketch of this is shown below
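A minimal sketch of the personalization side follows. The topic name /objects_to_track and the use of a comma-separated std_msgs/String are assumptions for illustration, since that interface is not fixed yet.

```python
#!/usr/bin/env python
# Minimal sketch of the personalization topic: tell the object recognition node
# which objects to track. The topic name /objects_to_track and the plain
# comma-separated std_msgs/String payload are assumptions for illustration only.
import rospy
from std_msgs.msg import String

if __name__ == "__main__":
    rospy.init_node("object_personalization")
    # Latched so that late subscribers still receive the last tracking request.
    pub = rospy.Publisher("/objects_to_track", String, queue_size=1, latch=True)
    rospy.sleep(0.5)  # allow the connection to be established
    pub.publish(String(data="bottle,cup,apple"))
    rospy.spin()
```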

Functionalities:

  • Object detection
  • Object identification
    • Position
    • Pose
  • Manipulation
    • Arm coordination according to object pose
Message/Topic name   Data
/objects_detected    ObjectDetected[] objects

ObjectDetected fields:
  • geometry_msgs/Pose object_pose
  • string id (objname_#)
  • bool in_view
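For reference, a minimal subscriber sketch is shown below. The robot_msgs package and the ObjectDetectedArray wrapper message (with an objects field) are hypothetical names chosen for illustration; only geometry_msgs/Pose is a standard type, and the field layout follows the table above.

```python
#!/usr/bin/env python
# Minimal sketch of a consumer of /objects_detected. The robot_msgs package and
# the ObjectDetectedArray wrapper (with an "objects" array field) are hypothetical.
import rospy
from robot_msgs.msg import ObjectDetectedArray  # hypothetical package/message


def objects_callback(msg):
    for obj in msg.objects:
        if obj.in_view:
            p = obj.object_pose.position
            rospy.loginfo("Object %s at (%.2f, %.2f, %.2f)", obj.id, p.x, p.y, p.z)


if __name__ == "__main__":
    rospy.init_node("object_tracker")
    rospy.Subscriber("/objects_detected", ObjectDetectedArray, objects_callback)
    rospy.spin()
```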

Person recognition

The person recognition module publishes a constant flow of data via a ROS topic. In general, this module only needs to keep track of identified people (position and orientation) and assign them IDs and characteristics.

Functionalities:

  • Recognize pose
    • Position of person in space
  • Person identification
    • Face identification
    • Characteristic identification
      • Clothes
      • Height
      • Age
      • Gender
      • Face
      • Skin color
      • Complexion
Message/Topic name   Data
/people_detected     Person[] people

Person fields:
  • geometry_msgs/Pose person_pose
  • string id (personName_#)
  • int32 id
  • string[] clothes
  • int32 height
  • string age
  • string name
  • bool in_view
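Analogous to the object case, a minimal sketch of querying /people_detected for a specific person is shown below. The robot_msgs package and the PersonArray wrapper message (with a people field) are hypothetical names; the field layout follows the table above.

```python
#!/usr/bin/env python
# Minimal sketch of looking up a specific person on /people_detected.
# robot_msgs and PersonArray (with a "people" array field) are hypothetical names.
import rospy
from robot_msgs.msg import PersonArray  # hypothetical package/message


def people_callback(msg):
    for person in msg.people:
        if person.in_view and person.name == "operator":
            rospy.loginfo("Found %s (age %s, height %d cm)",
                          person.name, person.age, person.height)


if __name__ == "__main__":
    rospy.init_node("person_lookup")
    rospy.Subscriber("/people_detected", PersonArray, people_callback)
    rospy.spin()
```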

Mechanism Control [WIP]

In charge of executing the arm and elevator mechanism control actions; controlled as action servers.

Functionalities:

  • Move the elevator to a given position
  • Position and rotate the TCP (tool center point)
Action        Description
grab_object   Move the arm toward the object at a given position for grasping.
lift_object   Lift the grasped object and secure it without compromising the robot's equilibrium.
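Once the action definitions exist, a client could look roughly like the sketch below. The GrabObjectAction/GrabObjectGoal names and the robot_msgs package are assumptions, since the action servers are still marked as TODO.

```python
#!/usr/bin/env python
# Minimal sketch of sending a grab_object goal. The action name, the
# GrabObjectAction/GrabObjectGoal definitions, and the object_pose goal field
# are assumptions; the real action interfaces are not defined yet.
import actionlib
import rospy
from geometry_msgs.msg import Pose
from robot_msgs.msg import GrabObjectAction, GrabObjectGoal  # hypothetical

if __name__ == "__main__":
    rospy.init_node("grab_object_client")
    client = actionlib.SimpleActionClient("grab_object", GrabObjectAction)
    client.wait_for_server()

    goal = GrabObjectGoal()
    goal.object_pose = Pose()  # target pose, e.g. taken from /objects_detected
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("grab_object finished with state %d", client.get_state())
```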

Action Servers [TODO]

Navigation

In charge of executing the navigation actions; controlled with an action server.

Functionalities:

  • Go to a defined position
  • Navigate through a specified room
  • Approach a certain position

For a more detailed description, please visit the Navigation Action Server wiki page; a minimal goal-sending sketch is shown below.
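As orientation only, the sketch below sends a "go to defined position" goal using the standard move_base interface as a stand-in; the project's own action server (described in the Navigation Action Server page) may expose a different name and goal type.

```python
#!/usr/bin/env python
# Minimal sketch of sending a navigation goal via the standard move_base
# interface, used here as a stand-in for the project's navigation action server.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
    rospy.init_node("go_to_position")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0   # example target in the map frame
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()
```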

More information