FAQ Home

General questions

Q: Offline logging: do we really need to log all the data listed on the wiki, even if we do not actually collect/use it?

A: You are asked to log all the data you actually use while running the benchmark. For example, if you do not use an RGB-D camera to detect objects, the point cloud used for identifying the object is not required; however, if you do use it, the point clouds corresponding to the "shots" used for object recognition should be logged. What is listed on the wiki is an example of relevant data teams may use to perform the tasks; we do not require you to log something you don't actually use (e.g., we all expect you to localize, and thus we expect you to provide localization data). We aim at collecting as much information as possible to run offline benchmarking and to provide a set of useful logs for other people performing research in robotics who want to compare their results or develop new algorithms. This is the benchmarking spirit of the RoCKIn project.


Q: Offline logging: can we collect the benchmarking data on the robot's hard disk and then copy it onto the USB stick immediately after the benchmark?

A: We understand the issues that could come from mounting and unmounting the USB stick in a rush during the competition. We allow you to save the data on your hard drive and copy it onto the USB stick immediately after the benchmark. This has to be done immediately after the robot exits the testbed and should take no more than one minute so that we stay on schedule. If you decide to go for this, which we understand is for the benefit of the benchmarking, we suggest testing the procedure so that it goes as smoothly as possible and does not take much time, e.g., by appointing a team member responsible for this operation.


Q: I'm getting an error compiling the RSBB ROS client. I followed all instructions but I get the error @???: not found@.

A: Perhaps you only installed the dependencies after you tried to compile the first time. If so, delete your catkin build directory with something like @rm -rf ~/catkin_ws/build@ and run catkin_make again.


Q: I'm getting an error running the RSBB ROS client: @ERROR: cannot launch node of type [roah_rsbb_comm_ros/comm]: can't locate node [comm] in package [roah_rsbb_comm_ros]@.

A: Did you configure your catkin workspace properly? @source ~/catkin_ws/devel/setup.bash@


Q: In the ROS RSBB client, is the @/roah_rsbb/benchmark/state@ topic used only for the FBMs, or also for the TBMs?

A: For all of them. Most benchmarks have only one goal, but they still follow the state sequence.


Q: In the ROS RSBB client, the @/roah_rsbb/benchmark@ topic uses values from 0 to 6, correct?

A: That is the idea, but *do not use numbers* directly in your code. Use the constants defined in the ROS generated files, for example @RoahRsbb::Benchmark::HSUF@.
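
If your node is written in Python, a minimal sketch of this check could look like the one below. It assumes that the generated @roah_rsbb_comm_ros/Benchmark@ message carries the announced benchmark in a field named @benchmark@ and exposes constants such as @HSUF@; verify the exact field and constant names in the generated message files.

<pre>
#!/usr/bin/env python
# Minimal sketch: compare the announced benchmark against a generated constant.
# Assumptions (verify in the roah_rsbb_comm_ros package): the Benchmark message
# has a 'benchmark' field and defines the constant HSUF.
import rospy
from roah_rsbb_comm_ros.msg import Benchmark

def on_benchmark(msg):
    if msg.benchmark == Benchmark.HSUF:  # never compare against a bare number
        rospy.loginfo("HSUF benchmark announced")

rospy.init_node('benchmark_listener')
rospy.Subscriber('/roah_rsbb/benchmark', Benchmark, on_benchmark)
rospy.spin()
</pre>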


Q: Does the RSBB control all the execution? Does the robot have to be ready for any task at any moment? In the ROS RSBB client, the robot has to listen to @/roah_rsbb/benchmark@ to know the task and then wait for @/roah_rsbb/benchmark/state@ to change to _execute_ to begin the task, correct?

A: That is correct. Note that the state will only become _execute_ after you call @/roah_rsbb/end_prepare@. You can only call it after the state becomes _prepare_. This ping-pong like interaction is designed to keep the robot synchronized with the RSBB even if the wireless communication is very bad.
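
As an illustration, a minimal Python sketch of this handshake might look like the following. It assumes the names provided by the @roah_rsbb_comm_ros@ package: a @BenchmarkState@ message with a @benchmark_state@ field and @PREPARE@/@EXECUTE@ constants, and @/roah_rsbb/end_prepare@ as an empty (@std_srvs/Empty@) service. Check the package documentation for the exact definitions.

<pre>
#!/usr/bin/env python
# Sketch of the prepare/execute handshake with the RSBB.
# Assumptions (verify against roah_rsbb_comm_ros): BenchmarkState has a
# 'benchmark_state' field with PREPARE and EXECUTE constants, and
# /roah_rsbb/end_prepare is a std_srvs/Empty service.
import rospy
from std_srvs.srv import Empty
from roah_rsbb_comm_ros.msg import Benchmark, BenchmarkState

current_benchmark = None

def on_benchmark(msg):
    global current_benchmark
    current_benchmark = msg  # remember which benchmark the RSBB announced

def on_state(msg):
    if msg.benchmark_state == BenchmarkState.PREPARE:
        # Do whatever preparation the task needs (move to the start position, ...)
        # and only then tell the RSBB that the robot is ready.
        rospy.wait_for_service('/roah_rsbb/end_prepare')
        rospy.ServiceProxy('/roah_rsbb/end_prepare', Empty)()
    elif msg.benchmark_state == BenchmarkState.EXECUTE:
        rospy.loginfo("Execute received, starting the task")
        # ... execute the announced benchmark here ...

rospy.init_node('rsbb_handshake')
rospy.Subscriber('/roah_rsbb/benchmark', Benchmark, on_benchmark)
rospy.Subscriber('/roah_rsbb/benchmark/state', BenchmarkState, on_state)
rospy.spin()
</pre>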



Task-related questions

TBM1 - “Getting to Know My Home”

Q: What is the time limit for switching to Phase 2? What is the time limit for the execution of Phase 2?

A: There is only a global time limit for the entire task. How to distribute this time between the two phases is left to the teams. For example, teams can choose to switch to Phase 2 after Phase 1 is finished, or after some predefined time, or upon a request from the user interacting with the robot. In the latter case, only natural interaction is allowed (e.g., speech or gestures). Using the keyboard or the mouse of a laptop on-board or off-board the robot is not considered natural interaction. Using a touch screen with a nice GUI may be considered natural interaction, but this solution should be submitted to the TC during the set-up days of the competition for approval.


Q: In Phase 2 how is the robot supposed to interact with pieces of furniture if the positions of the pieces of furniture are not defined in the semantic map file?

A: The fact that the positions of the pieces of furniture are not registered in the semantic map file does not mean that the robot cannot store this information, acquired during Phase 1, in its own memory. So the task can be executed (even if the information is not present in the semantic map file): the robot has to detect the changed position of the furniture and can therefore memorize any information about it (e.g., its position) that may be needed during Phase 2.


TBM2 - “Welcoming Visitors”

Q: In the rules, it is said: "Whenever a person rings the door bell, the robot can use its own on-board audio system to detect the bell ring(s)." However, according to the package for communicating with the server of the intelligent flat, we can also use the topic provided by this package (thus, we don't have to recognise the bell sound). Is this correct?

A: This is correct. The robot can use either the sound or the software signal or both in order to detect the event.


Q: I'd like to ask about the video intercom. It is clear to us how we will obtain the video stream from the camera to be used for face/clothing recognition, but we didn't find any information about the intercom. Is the robot supposed to communicate via some intercom? How will it obtain the audio stream? How shall the robot "talk" to it? Or is the speech communication supposed to be direct between the person and the robot, using the on-board microphone?

A: There will not be a full video/audio intercom device in this year's competition. There will only be a standard IP camera mounted at the front door to retrieve images of the ringing visitors, so you can recognize the person only by images. But you are allowed to go to the door afterwards and cross-check using speech, for example.


Q: You eliminated speech recognition in this task. My team is detecting people by video, but we put considerable effort into also having speech recognition to confirm that the vision is right. I understand that there might be some technical issues with the intercom. However, excluding all the options for speech recognition seems like a step back.

A: Speech communication is not eliminated; it is only that no such functionality is provided by the intercom device. You are still allowed to go to the door and hold a speech dialog there with the visitor using your on-board speech system. You will not receive any penalties for this.


Q: Can we use a workaround for managing the door handle? For instance, adding a small rope or cloth to the door handle for pulling?

A: Extensions to the environment, in this case adding something to a door handle, will not be allowed. But for TBM2 the robot does not have to open the door using its manipulator. The robot can simply request help from a human: either a referee, a team member, or the visitor.


TBM3 - “Catering for Granny Annie’s Comfort”

Q: We could not find information regarding the specific commands that we need to send to the devices. For instance, what is the command that we need to send to the SMARTIF server to turn on one of the lights?

A: All devices are controlled through the RSBB connection. If you are using ROS, the topics and services are only available during this specific benchmark, as described in the "RoAH RSBB Comm ROS Github docs":https://github.com/joaocgreis/roah_rsbb_comm_ros#catering-for-granny-annies-comfort. If you are not using the ROS package, the relevant fields in the "RobotState proto message":https://github.com/joaocgreis/roah_rsbb_comm/blob/master/proto/RobotState.proto#L63-72 have to be filled. Additionally, there are diagrams representing the communication methods at Robot Setup Home.
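
If you are using the ROS package, a call to one of these device services could look roughly like the sketch below. The service name @/roah_rsbb/devices/switch_1/on@ and the @std_srvs/Empty@ type are only illustrative assumptions here; take the actual device service names and types from the README linked above.

<pre>
#!/usr/bin/env python
# Illustrative sketch only: the service name '/roah_rsbb/devices/switch_1/on'
# and its std_srvs/Empty type are assumptions; use the names documented in the
# roah_rsbb_comm_ros README.
import rospy
from std_srvs.srv import Empty

rospy.init_node('granny_comfort_devices')

# The device services exist only while this benchmark is running, so wait for them.
rospy.wait_for_service('/roah_rsbb/devices/switch_1/on')
turn_on_light = rospy.ServiceProxy('/roah_rsbb/devices/switch_1/on', Empty)
turn_on_light()
</pre>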


Q: "In the segment to bring back an object, is it always just one object or can it be more than one?"

A: Only one object will be asked for.


Q: "In the segment to bring back an object, when delivering the object, how is it supposed to be delivered? Should the robot extend the arm and wait for the granny to grab the object? Or should it release the object close to granny?"

A: The robot should extend the arm to Granny Annie and drop the object.


Q: "Throughout this task, can Granny move around or does she stay in the same place?"

A: Granny Annie will stay in the same place during the whole task.


Q: "Is the time limit for this task only in place for the first segment (comfort providing) or does it also extend for the second segment (bring back an object)? Also, what is the time limit?"

A: There is no time limit for each step. Teams have up to *ten minutes* to complete the three steps of the task and have to repeat the task twice. In addition, they will have two and a half minutes to prepare their setup and another two and a half minutes to clear the arena for the next team, according to the rulebook.


Q: The rulebook mentions that the robot can ask Granny Annie for her position if it is not known. In the ROS RSBB client there is a topic for the selected position, but how do I ask for it?

A: By calling the service @/roah_rsbb/tablet/map@. After doing so, the tablet will display a map asking for Granny's position. If you are not using the ROS client, you must set the @tablet_display_map@ field correctly inside the @RobotState@ proto. There are diagrams representing the communication methods at Robot Setup Home.
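
With the ROS client, such a request might look like the sketch below; the @std_srvs/Empty@ service type is an assumption, so check the service definition shipped with the package.

<pre>
#!/usr/bin/env python
# Sketch of asking for Granny's position via the tablet map (service type
# assumed to be std_srvs/Empty; verify against the roah_rsbb_comm_ros package).
import rospy
from std_srvs.srv import Empty

rospy.init_node('ask_granny_position')
rospy.wait_for_service('/roah_rsbb/tablet/map')
show_map = rospy.ServiceProxy('/roah_rsbb/tablet/map', Empty)
show_map()  # the tablet now displays the map asking for Granny's position
</pre>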


FBM1 - “Object Perception”

Q: How do we signal that we are done with the current task iteration and are waiting for the next object?

A: By completing the current iteration and sending its result. The RSBB will then stop the benchmark if it was the last one, or ask you to prepare for the next one.


Q: Does transmitting the result also serve as a signal that the robot is ready for the next object?

A: Exactly.


Q: Will there be a bidirectional communication with the RSBB in order to let the robot know that a new object will be placed?

A: Yes. The robot is asked to prepare for each goal (move to position if needed, ...). The robot then has to signal that it is prepared. Only then is the object placed. When the object is ready, the robot receives the Execute command.


Q: Are there any datasets about the objects used for FBM1?

A: See http://thewiki.rockinrobotchallenge.eu/index.php?title=Datasets


FBM2 - “Navigation”

Q: Static Object Types: it is said that the items will be lying on the ground. However, can you specify the minimum height of the objects?

A: The minimum size for the static obstacles will be 10x10x10 cm.


Q: Dynamic Object Types: it is said that the movement of people will be unpredictable. However, how are you going to guarantee the "benchmark meaning" (i.e., that each team will have the same conditions, so that we are able to compare our results)?

A: We understand this issue and will, as far as possible, maintain the same competition conditions for all teams. The dynamic object will be a walking person, not a running one, in "reasonable" situations. So we do not expect the robots to be able to cope with nasty people, but with people like Annie. In the trials we will take care to create similar situations for all teams.


Q: Are you going to specify a minimum distance of an object/a person to the waypoint?

A: The minimum distance between the waypoint and the nearest obstacle will be 50 cm.


FBM3 - “Speech Understanding”

Q: RFbox communication isn't mentioned in the FBM3 specification, but it is mentioned in the github repository roah_rsbb_comm_ros in the list of service calls for testing. Could you provide a more detailed specification? What data should be sent/received, and when? Please clarify.

A: In the case of FBM3, the RSBB will only start and stop the benchmark. All data should be recorded as offline data and delivered on the USB stick, as specified in the rulebook.