TBM1 - “Getting to Know My Home”


Robot setup

See Preparing the robot for task benchmarks.

A set of elements which will be rearranged before each run

The following elements will be rearranged before each run:

  • one door connecting two rooms
  • two pieces of furniture
  • three objects

The names of rooms, furniture and object categories will be distributed to the teams during the setup days and will conform to Environment specifications 2.4 and 2.6 of the rulebook.

Format of Semantic Map output by teams

The output provided by the teams is a set of files that must be saved on a USB stick given to the teams before the test. The USB stick will be formatted with the FAT32 file system, and all files must be saved in a folder named after the team (see the example layout after the list below).

The following files will be evaluated:

  1. semantic map file
  2. pictures of objects/furniture
  3. metric map files
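For instance, a team named rockin_team (a hypothetical name) might deliver a folder laid out as follows; the image and map filenames are illustrative:

rockin_team/
    semantic_map.txt
    image_obj33.jpg
    map.png
    map.yaml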


1. semantic map file

This must be a text file named 'semantic_map.txt' containing a set of Prolog-like statements (or facts) in the following form:

predicate(arg_1, ..., arg_n).
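As an illustration, facts of this form can be parsed with a few lines of Python. The sketch below is a simplification: it assumes one fact per line, and the [X, Y, Z] list argument of position() would need special handling, since it contains commas.

import re

# Simplified parser for one-fact-per-line Prolog-like statements,
# e.g. "in(obj33, kitchen)." -> ('in', ['obj33', 'kitchen']).
FACT_RE = re.compile(r'^\s*([a-z]\w*)\((.*)\)\.\s*$')

def parse_fact(line):
    match = FACT_RE.match(line)
    if match is None:
        return None  # not a well-formed fact
    predicate, args = match.group(1), match.group(2)
    return predicate, [a.strip() for a in args.split(',')]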

The following predicates will be considered for evaluation:


  • Predicates about doors and their status

Definition:

type(door_ID, door).
connects(door_ID, room_name, room_name).
isOpen(door_ID, true|false).

Example:

type(door36, door).
type(door126, door).
connects(door36, kitchen, office).
connects(door126, kitchen, bathroom).
isOpen(door36, true).
isOpen(door126, false).
  • Predicates about location of pieces of furniture

Definition:

type(furniture_ID, furniture_name).
in(furniture_ID, room_name).

Example:

type(ch1, kitchen_chair).
in(ch1, living_room).
Note: Only one piece of furniture of each type will be involved in the test and described in the semantic map file.


  • Predicates about position, location and properties of objects

Definition:

type(object_ID, object_category).
in(object_ID, room_name).
on(object_ID, furniture_name).
position(object_ID, [X, Y, Z]).
color(object_ID, color_name).
picture(object_ID, image_filename).

Example:

type(obj33, apple).
in(obj33, kitchen).
on(obj33, kitchen_table).
position(obj33, [3.0, 3.0, 1.0]).
color(obj33, red).
picture(obj33, image_obj33.jpg).


Notes:

  1. object_ID can be any valid identifier (a letter followed by additional letters, digits or underscore '_' characters).
  2. The language is case-insensitive (all lowercase is preferred).
  3. Only the information about the doors/furniture/objects involved in the changes during the test will be evaluated. Additional information about other doors/furniture/objects may be included in the file as well, but it will not be used for determining the score.
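As a quick check of notes 1 and 2, a Python sketch (the identifier pattern mirrors note 1; the lowercasing follows the preference stated in note 2):

import re

# Identifier rule from note 1: one letter followed by letters,
# digits or underscore '_' characters.
ID_RE = re.compile(r'^[a-zA-Z][a-zA-Z0-9_]*$')

def normalize_id(identifier):
    if ID_RE.match(identifier) is None:
        raise ValueError('invalid identifier: %r' % identifier)
    return identifier.lower()  # all lowercase is preferred (note 2)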


2. pictures of objects/furniture

Actual images in a standard image format (JPEG, PNG, BMP, PPM), with the filenames given in the picture predicates of the semantic map. These images will be evaluated by a referee through visual inspection. The object named in the semantic map file must be "reasonably" in the foreground.
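Before handing in the USB stick, it may be worth verifying that every file named in a picture predicate actually exists in the team folder. A minimal sketch, assuming the folder layout described above:

import os
import re

# Matches e.g. "picture(obj33, image_obj33.jpg)." and captures the filename.
PICTURE_RE = re.compile(r'^\s*picture\(\s*\w+\s*,\s*([^)]+?)\s*\)\.\s*$')

def missing_pictures(folder):
    missing = []
    with open(os.path.join(folder, 'semantic_map.txt')) as f:
        for line in f:
            match = PICTURE_RE.match(line)
            if match and not os.path.isfile(os.path.join(folder, match.group(1))):
                missing.append(match.group(1))
    return missing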

3. metric map files

The metric map (possibly acquired before the test) should be included, preferably in ROS format (i.e., a bitmap (PNG/PPM) plus a YAML file), using the global reference system provided during the setup days.

Note: this map will not be evaluated, but it is useful for benchmarking and statistics.
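For reference, a ROS map_server YAML file typically has the following form; the values here are illustrative, and resolution and origin must match the global reference system provided during the setup days:

image: map.png
resolution: 0.05
origin: [0.0, 0.0, 0.0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196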


A complete example of the output semantic map file for the situation described below follows.

Situation: in this run the following changes are executed: the door connecting the kitchen and the hallway is closed, a kitchen chair is moved to the living room, a plant is moved to the hallway, a can of coke is placed on the kitchen table (which is in the kitchen), a box of biscuits (with yellow as its main color) is placed on the coffee table (which is in the living room), and a green apple is placed on the kitchen chair that was moved to the living room.

Output semantic map file describing the changes in the environment:

type(door_1, door).
connects(door_1, kitchen, hallway).
isOpen(door_1, false).

type(kitchen_chair_1, kitchen_chair).
in(kitchen_chair_1, living_room).

type(plant_1, plant).
in(plant_1, hallway).

type(object_1, coke).
in(object_1, kitchen).
on(object_1, kitchen_table).
position(object_1, [3.0, 2.5, 1.0]).
color(object_1, red).
picture(object_1, object_1.jpg).

type(object_2, biscuits).
in(object_2, living_room).
on(object_2, coffee_table).
position(object_2, [11.0, 9.5, 0.5]).
color(object_2, yellow).
picture(object_2, object_2.jpg).

type(object_3, apple).
in(object_3, living_room).
on(object_3, kitchen_chair).
position(object_3, [23.0, 7.5, 0.5]).
color(object_3, green).
picture(object_3, object_3.jpg).


output-checker.py [1] is a Python script to parse and evaluate an output file against a ground truth.

To use it, you need to install PySWIP, a Python interface to SWI-Prolog. Follow the instructions at https://code.google.com/p/pyswip/, which also explain how to install SWI-Prolog. Note: on Windows, it seems better to install the 32-bit versions of the software.

The output of this program is either a description of the syntax errors in the files or the result of the comparison. Note that this is not exactly the program that will be used for the evaluation of the test, but it is useful for determining whether the output is well-formed. For example, this script does not evaluate positions correctly.
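For illustration, once PySWIP and SWI-Prolog are installed, a semantic map file can be loaded and queried with a few lines of Python (a minimal sketch, assuming the file is syntactically valid Prolog):

from pyswip import Prolog

prolog = Prolog()
prolog.consult('semantic_map.txt')  # load the facts

# Query all doors and their open/closed status.
for result in prolog.query('type(D, door), isOpen(D, S)'):
    print(result['D'], 'open:', result['S'])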


List of variables to be logged

The robot is required to log any sensor data used to perform the benchmark (e.g., images, robot pose). The modalities for this are explained in this document: http://rm.isr.ist.utl.pt/attachments/622/robot_data.txt. Only relevant data is expected to be logged (e.g., the point cloud used to recognize an object, or more than one if an algorithm requiring multiple point clouds is used). There are no restrictions on the frame rate: data can be saved, for the relevant parts of the benchmark, at the rate at which they are acquired or produced. The log may be a rosbag or the corresponding YAML representation, as specified in the document "RoCKIn YAML Data File Specification".

The following are the expected ROS topic names and corresponding data types (a minimal logging sketch follows the list):

  • image [sensor_msgs/Image]: sensorial data used to recognize the objects
  • pointcloud [sensor_msgs/PointCloud2]: sensorial data used to recognize the objects
  • pose2d [geometry_msgs/Pose2D]: 2D pose of the robot while moving in the environment, as perceived by the robot
  • pose [geometry_msgs/Pose]: 3D pose of the robot while moving in the environment, as perceived by the robot (if available)
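A minimal logging node, sketched in Python with rospy and the rosbag API; the node and bag file names are illustrative:

import threading
import rospy
import rosbag
from sensor_msgs.msg import Image, PointCloud2
from geometry_msgs.msg import Pose2D, Pose

bag = rosbag.Bag('tbm1_log.bag', 'w')
bag_lock = threading.Lock()  # rospy callbacks run in separate threads

def make_logger(topic):
    # Returns a callback that writes each incoming message to the bag.
    def callback(msg):
        with bag_lock:
            bag.write(topic, msg, rospy.Time.now())
    return callback

rospy.init_node('tbm1_logger')
rospy.Subscriber('image', Image, make_logger('image'))
rospy.Subscriber('pointcloud', PointCloud2, make_logger('pointcloud'))
rospy.Subscriber('pose2d', Pose2D, make_logger('pose2d'))
rospy.Subscriber('pose', Pose, make_logger('pose'))

try:
    rospy.spin()
finally:
    bag.close()

In practice the same effect can be obtained with 'rosbag record image pointcloud pose2d pose'.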

Important! Calibration parameters for cameras must be saved. This must also be done for other sensors (e.g., Kinect) that require calibration, if a calibration procedure has been applied instead of using the default values (e.g., those provided by OpenNI).
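For cameras calibrated with the standard ROS tools, the calibration is typically saved in a YAML file of the following form; the values below are illustrative defaults for a 640x480 sensor:

image_width: 640
image_height: 480
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [525.0, 0.0, 319.5, 0.0, 525.0, 239.5, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [525.0, 0.0, 319.5, 0.0, 0.0, 525.0, 239.5, 0.0, 0.0, 0.0, 1.0, 0.0]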