Methods for Objective Comparison of Results in Intelligent Robotics Research

Sunday, October 1st 8:30 am to 5:40 pm

Time          Presenter                  Title
8:30–8:45     Organizers                 Welcome
8:45–9:15     Jelizaveta Konstantinova   Robotics and long-term cutting-edge innovation
9:15–9:50     Constantinos Cham          Benchmarking motion planning systems
9:50–10:30    Francesco Amigoni          Predicting Benchmarks
10:30–11:00                              Coffee break with discussions
11:00–11:45   Diego Torricelli           Benchmarking Locomotion
11:45–12:30   Tobias Fischer             RoboStack: reproduce and benchmark systems with complex system architectures
12:30–14:00                              Lunch break
14:00–14:30   Jesse Haviland             DGBench: An Open-Source, Reproducible Benchmark for Dynamic Grasping

Award session
14:30–14:50   Plancher et al.            RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for Evaluating Robotics Computing System Performance
14:50–15:10   Mayer et al.               CoBRA: A Composable Benchmark for Robotics Applications
15:10–15:30   Daum et al.                Benchmarking ground truth trajectories with robotic total stations

15:30–16:30                              Coffee Break and Poster Session
16:30–17:00   Enrica Zereik              Reproducible Research on open-source cheap hardware
17:00–17:30   Fabio Bonsignorio          Why Reproducibility and Benchmarking are needed in Intelligent Robotics: Hidden Flaws and Possible Solutions
17:30–17:40   Organizers                 Closing remarks