June 2015: a raceway in California hosts the biggest full-size humanoid robot competition in history, with 26 teams from all over the world. It was the DARPA Robotics Challenge. The goal of the contest was to push the limits of robotics in assisting humans to respond to natural and man-made disasters. Two years earlier, the first part of the contest, the Virtual Robotics Challenge (VRC), had taken place; it consisted of replicating the same set of tasks proposed for the challenge finals, but using cloud-based robot simulation instead of real robots.
The Open Source Robotics Foundation (OSRF), thanks to its open source robotics simulator Gazebo and the ROS (Robot Operating System) framework, was selected to run this virtual contest. Managing the software infrastructure, from the simulator to machine provisioning, was a challenge in itself, and testing played a key role. The talk will review the testing practices that were designed and implemented during the development of the software infrastructure used for the Virtual Robotics Challenge.
How was the testing of a robotics contest in the cloud done? What did we learn about testing software from organizing the VRC? How did using open source software help to organize the VRC?
The scope of the techniques covered ranges from automated testing of the VRC software pieces (the Gazebo simulator; DRCSim, the DRC-specific ROS wrappers and materials; and the CloudSim web provisioning tool) to the manual testing plan. Interesting points to explore in the talk include the questions above.
The goal is to provide the audience with first-hand feedback and conclusions about software testing, and the testing decisions made, in a large, real-world open source robotics software event.
Speakers: Jose Luis Rivero