FBM1 - “Object Perception”

From RoCKIn Wiki
Revision as of 22:26, 13 April 2015 by Rockinadmin (Talk | contribs) (List of objects and associated frames)

To be updated for the 2015 RoCKIn Competition

List of objects and associated frames

NOTE: The location of the associated frame of the box has changed.


Object Classes/Instances:

[a]: Mugs
  [a1]: Large black mug [IKEA Code: 401.439.93]
  [a2]: Small white mug with yellow dots [IKEA Code: 702.348.97]
  [a3]: Coffee mug [IKEA Code: 001.525.50]
  [a4]: Black jug [IKEA Code: 602.936.08]
[b]: Forks and knives
  [b1]: Fork [IKEA Code: 101.858.52]
  [b2]: Knife [IKEA Code: 901.929.62]
[c]: Boxes
  [c1]: Yellow box [IKEA Code: 200.474.50]
  [c2]: Pink box [IKEA Code: 200.474.50]
[d]: Picture frames
  [d1]: Gold-colored frame [IKEA Code: 402.323.95]
  [d2]: Small black picture frame [IKEA Code: 601.674.93]

Associated Frames (pictures with the corresponding representation can be downloaded "here"):

[a] Associated frame on the bottom plane of the mug (see figures "here") with:

  • Z-axis perpendicular to the “table” (surface upon which the object lies), pointed upward and coincident with the axis of symmetry of the mug (without taking into account the mug handle).
  • XY plane on the “table” with x-axis pointing to the mug handle.

[b] Associated frame on the bottom of the knife/fork (see figures "here") with:

  • Z-axis perpendicular to the “table” (surface upon which the object lies), pointed upwards.
  • XY plane on the “table” with x-axis pointing to the top part of the knife/fork.

[c] Associated frame on a top corner of the box (see figures "here") with:

  • Z-axis perpendicular to the “table” (surface upon which the object lies), pointed upwards;
  • XY plane on the “table” with x-axis pointing to the smallest vertex of the box;
  • X and Y axes must be coincident with edges of the object (this determines the selection of the correct corner);
  • Note that the objects are not symmetrical. On one side there are a couple of hinges that teams should identify to remove ambiguities.

[d] Associated frame on a corner of the picture frame plane (see figures "here") with:

  • Z-axis perpendicular to the “table” (surface upon which the object lies), pointed upwards;
  • XY plane on the “table” with x-axis pointing to the smallest vertex of the picture plane;
  • X and Y axes must be coincident with edges of the object (this determines the selection of the correct corner);
  • To remove the ambiguity, teams should identify the RoCKIn logo (pictures for both frames can be downloaded "here").
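All four frame definitions above reduce, for the benchmark result, to a 2D pose (x, y, theta) on the table plane. As a minimal sketch (the helper name and its two-point input are assumptions for illustration, not part of the official specification), theta can be computed from the detected frame origin and any point on the frame's x-axis, both expressed in the table reference frame:

```python
import math

def object_pose_2d(origin_xy, x_axis_point_xy):
    """Compute the benchmark 2D pose (x, y, theta) of an object frame.

    origin_xy:        (x, y) of the object's frame origin in the table
                      reference frame, in metres.
    x_axis_point_xy:  (x, y) of any point on the object frame's x-axis
                      (e.g., towards the mug handle), same frame.

    theta is the angle of the object's x-axis w.r.t. the reference
    x-axis, in radians, in (-pi, pi].
    """
    ox, oy = origin_xy
    px, py = x_axis_point_xy
    theta = math.atan2(py - oy, px - ox)
    return ox, oy, theta

# Example: mug at (0.1, 0.2) with its handle direction at 45 degrees.
x, y, theta = object_pose_2d((0.1, 0.2), (0.3, 0.4))
```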

Benchmark execution

  1. An object of unknown class and unknown instance will be placed on a table in front of the robot.
  2. The robot must determine the object's class, its instance within that class, as well as the 2D pose of the object w.r.t. the reference frame specified on the table.
  3. The preceding steps are repeated until time runs out or 10 objects have been processed.

See the Rule Book for further details.

For each presented object, the robot must produce the result consisting of:

  • object class name [string]
  • object instance name [string]
  • object localization (x [m], y [m], theta [rad])

Example of expected result:

object_class: a
object_name: a3
object_pose:
  x: 0.1
  y: 0.2
  theta: 1.23
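The expected output above can be produced with plain string formatting, so no YAML library is needed. A small sketch (the function name is an assumption; the field names follow the example in this section):

```python
def format_result(object_class, object_name, x, y, theta):
    """Render a benchmark result in the YAML layout shown above."""
    return (
        "object_class: {}\n"
        "object_name: {}\n"
        "object_pose:\n"
        "  x: {}\n"
        "  y: {}\n"
        "  theta: {}\n"
    ).format(object_class, object_name, x, y, theta)

# Reproduces the example result for instance a3 of class a.
print(format_result("a", "a3", 0.1, 0.2, 1.23))
```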

List of variables to be logged

The robot is required to log any sensor data used to perform the benchmark (e.g., images, point clouds). The modalities for this are explained in "this document":http://rm.isr.ist.utl.pt/attachments/625/robot_data.txt. Only relevant data is expected to be logged (e.g., the point cloud used to classify an object, or several point clouds if the algorithm requires them). There is no restriction on the frame rate: for the relevant parts of the benchmark, data can be saved at the rate at which it is acquired or produced. The log may be a rosbag or the corresponding YAML representation, as specified in the document "RoCKIn YAML Data File Specification".

The following are expected ROS topic names and corresponding data types:

  • object_class [std_msgs/String]: the recognized object class
  • object_instance [std_msgs/String]: the recognized object instance
  • object_pose2d [geometry_msgs/Pose2D]: the 2D pose of the recognized object
  • object_pose [geometry_msgs/Pose]: the 3D pose of the recognized object (if available)
  • image [sensor_msgs/Image]: sensorial data used to classify the object
  • pointcloud [sensor_msgs/PointCloud2]: sensorial data used to classify the object
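A sketch of how a team's node might publish on these topics with rospy is below. The helper that assembles the result is kept free of ROS imports so it can be checked without a ROS installation; the node and function names are assumptions, while the topic names and message types come from the list above.

```python
def build_result_messages(object_class, object_instance, x, y, theta):
    """Map one recognition result onto the expected topic payloads.

    Returns a dict keyed by topic name; values are plain Python data so
    this helper works without ROS installed.
    """
    return {
        "object_class": object_class,
        "object_instance": object_instance,
        "object_pose2d": {"x": x, "y": y, "theta": theta},
    }

def publish_result(result):
    """Publish one result on the expected topics (requires a running ROS master)."""
    import rospy  # imported here so the helper above stays ROS-free
    from std_msgs.msg import String
    from geometry_msgs.msg import Pose2D

    rospy.init_node("fbm1_result_publisher", anonymous=True)
    class_pub = rospy.Publisher("object_class", String, queue_size=1)
    inst_pub = rospy.Publisher("object_instance", String, queue_size=1)
    pose_pub = rospy.Publisher("object_pose2d", Pose2D, queue_size=1)
    rospy.sleep(0.5)  # give subscribers time to connect

    class_pub.publish(String(data=result["object_class"]))
    inst_pub.publish(String(data=result["object_instance"]))
    p = result["object_pose2d"]
    pose_pub.publish(Pose2D(x=p["x"], y=p["y"], theta=p["theta"]))
```

The same payloads would also be what ends up in the rosbag or YAML log, so keeping message construction in one place makes the logged and published data consistent.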

+Important!+ Calibration parameters for cameras must be saved. The same applies to other sensors (e.g., Kinect) that require calibration, whenever a calibration procedure has been applied instead of using the default values (e.g., those provided by OpenNI).
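One common way to persist camera calibration in a ROS setup is the YAML layout used by camera_calibration_parsers, sketched below. The numeric values are placeholders for illustration, not a real calibration; teams should save the parameters actually produced by their calibration procedure.

```yaml
image_width: 640
image_height: 480
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [525.0, 0.0, 319.5, 0.0, 525.0, 239.5, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [525.0, 0.0, 319.5, 0.0, 0.0, 525.0, 239.5, 0.0, 0.0, 0.0, 1.0, 0.0]
```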