TBM1 - “Getting to Know My Home”
Names of locations, furniture, and objects used in the test
The following elements will be rearranged before each run:
- one door connecting two rooms
- two pieces of furniture
- three objects
Among the three objects used in the test, two will be chosen by the referee and one may be chosen by the team. The object chosen by the team may be (but is not required to be) the object used in Phase 2.
The following image shows the layout of the apartment with names of rooms and doors.
Names to be used in the output file
Rooms:
hallway living_room kitchen dining_room bedroom
Door IDs:
door_entrance door_exit door_bedroom
Furniture:
chair arm_chair coffee_table kitchen_table dining_table sofa hanger lamp nightstand bed bookshelf plant
Object types:
glass cup vase book dish pillow table_cloth frame box fork can
Color names:
red green blue yellow pink black white gray
Objects with color properties:
book (white, blue, black) box (yellow, pink) cup (white, gray, blue)
The following image contains all the objects mentioned above.
Format of Semantic Map output
The output provided by each team is a set of files that must be saved on a USB stick given to the team before the test. The USB stick will be formatted with the FAT32 file system, and all files should be saved in a folder named after the team.
The following files will be evaluated:
- semantic map file
- pictures of objects/furniture
- metric map files
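For illustration, the folder for a hypothetical team named my_team might be laid out as follows (all file names except semantic_map.txt are examples, not prescribed by the rules):

    my_team/
        semantic_map.txt
        object_1.jpg
        object_2.jpg
        object_3.jpg
        map.png
        map.yaml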
1. semantic map file
This must be a text file named 'semantic_map.txt' containing a set of Prolog-like statements (or facts) in the following form:
predicate(arg_1, ..., arg_n).
The following predicates will be considered for evaluation:
- Predicates about doors and their status
Definition:
type(door_ID, door).
connects(door_ID, room_name, room_name).
isOpen(door_ID, true|false).
Example:
type(door_entrance, door).
connects(door_entrance, outside, hallway).
isOpen(door_entrance, true).
- Predicates about location of pieces of furniture
Definition:
type(furniture_ID, furniture_name).
in(furniture_ID, room_name).
Example:
type(obj111, lamp).
in(obj111, living_room).
Note: Only one piece of furniture of each type will be involved in the test and described in the semantic map file.
- Predicates about position, location and properties of objects
Definition:
type(object_ID, object_category).
in(object_ID, room_name).
on(object_ID, furniture_name).
position(object_ID, [X, Y, Z]).
color(object_ID, color_name).
picture(object_ID, image_filename).
Example:
type(obj33, book).
in(obj33, kitchen).
on(obj33, kitchen_table).
position(obj33, [3.0, 3.0, 1.0]).
color(obj33, white).
picture(obj33, image_obj33.jpg).
Notes:
- object_ID can be any valid identifier (one letter followed by additional letters, digits, or underscore '_' symbols).
- The language is case insensitive (all lowercase is preferred).
- Only information about the doors/furniture/objects involved in the changes during the test will be evaluated. Additional information about other doors/furniture/objects may be included in the file as well, but it will not be used to determine the score.
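As an illustration of this format, the following Python sketch writes the predicates for one object, one fact per line. The helper name and its arguments are assumptions made for this example; only the fact format predicate(arg_1, ..., arg_n). comes from the rules.

    # Minimal sketch: emit semantic-map facts for one object.
    def write_object_facts(f, obj_id, category, room, furniture,
                           pos, color=None, picture=None):
        f.write(f"type({obj_id}, {category}).\n")
        f.write(f"in({obj_id}, {room}).\n")
        f.write(f"on({obj_id}, {furniture}).\n")
        f.write(f"position({obj_id}, [{pos[0]:.1f}, {pos[1]:.1f}, {pos[2]:.1f}]).\n")
        if color is not None:      # only for objects with a color property
            f.write(f"color({obj_id}, {color}).\n")
        if picture is not None:
            f.write(f"picture({obj_id}, {picture}).\n")

    with open("semantic_map.txt", "w") as f:
        write_object_facts(f, "obj33", "book", "kitchen", "kitchen_table",
                           (3.0, 3.0, 1.0), color="white", picture="image_obj33.jpg")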
2. pictures of objects
Actual images in a standard image format (JPEG, PNG, BMP, PPM), named as in the picture predicates of the semantic map. These images will be evaluated by a referee through visual inspection. The object named in the semantic map file must be "reasonably" in the foreground.
3. metric map files
The metric map (possibly acquired before the test) should be included, preferably in ROS format (i.e., a bitmap (PNG/PPM) plus a YAML file), using the global reference system provided during the setup days.
Note: This map will not be evaluated, but it is useful for benchmarking and statistics.
A complete example of output files for the situation described below is given in the following file.
Situation: In this run, the following changes are executed:
- the door connecting the kitchen and the hallway is closed
- a kitchen chair is moved to the living room
- a plant is moved to the hallway
- a can of coke is placed on the kitchen table (which is in the kitchen)
- a box of biscuits (with main color yellow) is placed on the coffee table (which is in the living room)
- a green apple is placed on the kitchen chair that was moved to the living room
Output semantic map file describing these changes in the environment:
type(door_1, door).
connects(door_1, kitchen, hallway).
isOpen(door_1, false).
type(kitchen_chair_1, chair).
in(kitchen_chair_1, living_room).
type(plant_1, plant).
in(plant_1, hallway).
type(object_1, coke).
in(object_1, kitchen).
on(object_1, kitchen_table).
position(object_1, [3.0, 2.5, 1.0]).
color(object_1, red).
picture(object_1, object_1.jpg).
type(object_2, biscuits).
in(object_2, living_room).
on(object_2, coffee_table).
position(object_2, [11.0, 9.5, 0.5]).
color(object_2, yellow).
picture(object_2, object_2.jpg).
type(object_3, apple).
in(object_3, living_room).
on(object_3, kitchen_chair).
position(object_3, [23.0, 7.5, 0.5]).
color(object_3, green).
picture(object_3, object_3.jpg).
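A rough syntax check of such a file can be done with a regular expression. The following Python sketch (assuming one fact per line) enforces only the fact format and the identifier rule given above; it does not check the semantics, and it is not the official checker.

    import re

    # Identifiers: one letter followed by letters, digits, or underscores.
    IDENT = r"[A-Za-z][A-Za-z0-9_]*"
    ATOM = r"[A-Za-z][A-Za-z0-9_.]*"   # arguments may also contain dots (file names)
    LIST = r"\[[^\]]*\]"               # e.g. a [X, Y, Z] position
    FACT = re.compile(rf"^{IDENT}\(\s*(?:{ATOM}|{LIST})"
                      rf"(?:\s*,\s*(?:{ATOM}|{LIST}))*\s*\)\.$")

    def check_syntax(path):
        ok = True
        with open(path) as f:
            for n, line in enumerate(f, start=1):
                line = line.strip()
                if line and not FACT.match(line):
                    print(f"line {n}: malformed fact: {line}")
                    ok = False
        return ok

    check_syntax("semantic_map.txt")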
How to check the syntax of the output file against the semantics of the changes
output-checker.py is a Python script that parses and evaluates an output file against a ground truth. To use it, you need to install PySWIP, a Python interface to SWI-Prolog.
Follow the instructions here, which also explain how to install SWI-Prolog.
Note: On Windows, it seems better to install the 32-bit versions of the software.
The output of this program is either a description of the syntax errors in the files or the result of the comparison. Note that this is not exactly the program that will be used for the evaluation of the test, but it is useful for determining whether the output is well-formed. For example, this script does not evaluate positions.
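For instance, once SWI-Prolog and PySWIP are installed, the facts can also be loaded and queried directly from Python. The following is a minimal sketch, not a replacement for output-checker.py:

    from pyswip import Prolog

    prolog = Prolog()
    # Note: atoms containing dots (e.g. image file names in picture facts)
    # may need single quotes to be read as plain Prolog terms.
    prolog.consult("semantic_map.txt")

    # Which doors are reported as closed?
    for sol in prolog.query("isOpen(Door, false)"):
        print("closed door:", sol["Door"])

    # Where is each object, and on which piece of furniture?
    for sol in prolog.query("type(Obj, Cat), in(Obj, Room), on(Obj, Furn)"):
        print(sol["Obj"], sol["Cat"], "in", sol["Room"], "on", sol["Furn"])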
Which methods are considered "natural" for the interaction between person, robot, and environment
Question: In the rules for TBM1 it is written: "in case of an HRI-based approach, a team member can guide the robot in the environment, e.g. by following a person, and show the changes with only natural interactions (speech and gesture). No input devices are allowed (e.g., touch screens, tablets, mouse, keyboard, etc.)."
However, we feel this is not 100% exact. We would like to ask whether using a laser pointer (as Professor Daniele Nardi shows in his talks about semantic mapping) would be allowed, or just some colorful "stick" held in the operator's hands?
Answer: Using a laser pointer in a natural way to point at objects in the environment is fine. Holding a colored stick is also fine if it is used just to indicate objects. It is not fine to use such tools in a non-natural way. Examples of "non-natural" uses are: 1) using the stick or the laser pointer in such a way as to encode a message or a command to the robot; 2) using the colored stick to touch, or to be very close to, the object, or in any other way that would result in fully driving the robot through colored-blob tracking alone.
Question:
What approaches are allowed for human following? Does it have to be a general approach (e.g., leg detection, body detection), or can the robot follow a visual marker on the operator?
Answer:
Visual markers are limited to what can reasonably be assumed to be worn by a person. For example, if the person has a team logo printed on his/her t-shirt and the robot is able to recognize this logo, then this is fine, whereas using QR codes, RFID tags, blinking lights, or other special items worn only for the purpose of recognizing the person will not be considered "natural". (Please do not design your team logo as a QR code ;-)))
More generally, even though in this test the user is chosen by the team, you should still consider the situation in which the interaction with the robot could have been carried out by anyone else, requiring neither complex instructions about how to interact with the robot nor any wearable non-natural marker.
Clarification about score
To score an achievement, the predicates that must be correctly written in the output file are listed below.
1. Door achievement
isOpen(door_ID, true|false).
2. Furniture achievement
type(furniture_ID, furniture_name).
in(furniture_ID, room_name).
3. Object achievement
type(object_ID, object_category).
in(object_ID, room_name).
on(object_ID, furniture_name).
position(object_ID, [X, Y, Z]).
color(object_ID, color_name).
picture(object_ID, image_filename).
Note: color is evaluated only for objects with a color property.
IMPORTANT: *NEW* Penalizing Behavior
- The object description is missing or wrong in one of the following predicates: position, color, or picture.
This PB applies to each of these predicates; thus, it can result in up to three PBs for each object.
With this new PB, missing or wrong position, color, or picture predicates do not invalidate the Achievement for that object; they just add a Penalized Behavior to the score.
Examples:
1) all the predicates are correctly written in the output file:
type(...) in(...) on(...) position(...) color(...) picture(...)
Score = 1 Achievement
2) the predicates type, in, on are correctly written in the output file,
but one of position, color, or picture is wrong or missing.
Score = 1 Achievement + 1 Penalized Behavior
3) the predicates type, in, on are correctly written in the output file,
but all the predicates position, color, and picture are wrong or missing.
Score = 1 Achievement + 3 Penalized Behaviors
4) any of type, in, or on is wrong or missing (even if position, color, or picture are correct)
Score = no Achievement
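The rule above can be summarized for a single object as in the following Python sketch; it assumes the correctness of each predicate has already been judged, and all names are illustrative.

    def score_object(correct, has_color_property=True):
        # `correct` is the set of predicate names judged correct for this
        # object in the output file, e.g. {"type", "in", "on", "position"}.
        if not {"type", "in", "on"} <= correct:
            return (0, 0)                 # case 4: no Achievement
        evaluated = {"position", "picture"}
        if has_color_property:            # color counts only for colored objects
            evaluated.add("color")
        pbs = len(evaluated - correct)    # one PB per wrong/missing predicate
        return (1, pbs)                   # cases 1-3: 1 Achievement + PBs

    print(score_object({"type", "in", "on", "position", "color", "picture"}))  # (1, 0)
    print(score_object({"type", "in", "on", "position"}))                      # (1, 2)
    print(score_object({"type", "in"}))                                        # (0, 0)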