FBM1 - “Object Perception”

From RoCKIn Wiki
Revision as of 11:56, 24 June 2015 by BRSU (Talk | contribs) (List of objects and associated frames)

To be updated for the 2015 RoCKIn Competition

List of objects and associated frames

Moved to Section 5.1.2 (Feature Variation) and Section 5.1.3 (Associated Frames).

Benchmark execution

  1. An object of unknown class and unknown instance is placed on a table in front of the robot.
  2. The robot must determine the object's class, its instance within that class, and the object's 2D pose w.r.t. the reference frame specified on the table.
  3. The preceding steps are repeated until time runs out or 10 objects have been processed.

See the Rule Book for further details.
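The execution loop above can be sketched in Python as follows. This is a minimal illustration, not competition code: `acquire_object_data` and `classify` are hypothetical stand-ins for a team's own perception pipeline, and the time limit used here is an assumed placeholder (the Rule Book gives the actual value).

```python
import time

TIME_LIMIT_S = 600.0   # assumed placeholder; see the Rule Book for the real limit
MAX_OBJECTS = 10       # the benchmark stops after 10 processed objects

def acquire_object_data():
    """Hypothetical sensor read: raw data for the object currently on the table."""
    return {"pointcloud": None}

def classify(data):
    """Hypothetical perception step: returns class, instance, and 2D pose (x, y, theta)."""
    return "a", "a3", (0.1, 0.2, 1.23)

def run_benchmark():
    """Process objects until 10 are done or the time limit expires (steps 1-3 above)."""
    results = []
    start = time.monotonic()
    while len(results) < MAX_OBJECTS and time.monotonic() - start < TIME_LIMIT_S:
        data = acquire_object_data()
        obj_class, obj_instance, (x, y, theta) = classify(data)
        results.append({
            "object_class": obj_class,
            "object_name": obj_instance,
            "object_pose2d": {"x": x, "y": y, "theta": theta},
        })
    return results
```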

For each presented object, the robot must produce the result consisting of:

  • object class name [string]
  • object instance name [string]
  • object localization (x [m], y [m], theta [rad])

Example of expected result:

object_class: a
object_name: a3
object_pose2d:
  x: 0.1
  y: 0.2
  theta: 1.23
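A result in this form can be emitted with a few lines of Python. The sketch below uses only the standard library (no PyYAML assumed); `format_result` is a hypothetical helper, and nesting the pose fields under an `object_pose2d` key (matching the topic name below) is an assumption about the intended YAML layout.

```python
def format_result(obj_class, obj_name, x, y, theta):
    """Render one perception result as a YAML-style string matching the example above."""
    return (
        f"object_class: {obj_class}\n"
        f"object_name: {obj_name}\n"
        f"object_pose2d:\n"
        f"  x: {x}\n"
        f"  y: {y}\n"
        f"  theta: {theta}\n"
    )

print(format_result("a", "a3", 0.1, 0.2, 1.23))
```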

List of variables to be logged

The robot is required to log any sensor data used to perform the benchmark (e.g., images, point clouds). The modalities for this are explained in "this document". Only relevant data is expected to be logged (i.e., the point cloud used to classify an object, or more than one if an algorithm requiring multiple point clouds is used). There are no restrictions on the frame rate: for the relevant parts of the benchmark, data can be saved at the rate at which it is acquired or produced. The log may be a rosbag or the corresponding YAML representation, as specified in the document "RoCKIn YAML Data File Specification".

The following are expected ROS topic names and corresponding data types:

  • object_class [std_msgs/String]: the recognized object class
  • object_instance [std_msgs/String]: the recognized object instance
  • object_pose2d [geometry_msgs/Pose2D]: the 2D pose of the recognized object
  • object_pose [geometry_msgs/Pose]: the 3D pose of the recognized object (if available)
  • image [sensor_msgs/Image]: sensorial data used to classify the object
  • pointcloud [sensor_msgs/PointCloud2]: sensorial data used to classify the object
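A simple offline sanity check that a log covers the expected topics can look like the sketch below. The topic names and message types are taken from the list above; `missing_required_topics` is a hypothetical helper, not part of any RoCKIn tooling, and treating `object_pose`, `image`, and `pointcloud` as non-mandatory here reflects the "(if available)" wording and the fact that the sensor data logged depends on which sensors the algorithm actually uses.

```python
# Expected topic names and message types, as listed above.
EXPECTED_TOPICS = {
    "object_class": "std_msgs/String",
    "object_instance": "std_msgs/String",
    "object_pose2d": "geometry_msgs/Pose2D",
    "object_pose": "geometry_msgs/Pose",      # only if a 3D pose is available
    "image": "sensor_msgs/Image",             # sensor data actually used
    "pointcloud": "sensor_msgs/PointCloud2",  # sensor data actually used
}

# Topics every result must provide (assumption based on the required result fields).
REQUIRED_TOPICS = {"object_class", "object_instance", "object_pose2d"}

def missing_required_topics(logged_topics):
    """Return the required topic names absent from an iterable of logged topic names."""
    return sorted(REQUIRED_TOPICS - set(logged_topics))
```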

Important! Calibration parameters for cameras must be saved. The same applies to other sensors (e.g., Kinect) that require calibration, whenever a calibration procedure has been applied instead of using the default values (e.g., those provided by OpenNI).