Forum Discussion on Open-Source Robotic Manipulation and Benchmarking
ROS-Industrial Consortium Americas 2023 Annual Meeting
This workshop was held during the ROS-Industrial Consortium Americas 2023 Annual Meeting on May 25, 2023, in Detroit, Michigan.
Workshop Overview
To support and improve the development and deployment of robotic manipulation systems, the open-source ecosystem should facilitate the development and dissemination of open-source assets (i.e., hardware, software, datasets), benchmarking practices, and the sharing of results. Such a distributed, community-driven venue would enable researchers and developers to share and learn about open-source resources, find tools to utilize them, collaborate on developing systematic robot experimentation methodologies, and disseminate their findings. During this workshop, we facilitated forum discussions around two topics: (1) identifying current gaps that limit the effectiveness of the ecosystem (e.g., hardware access, simulation fidelity, lack of relevant assets), and (2) proposing solutions to the identified issues to improve the state of the ecosystem (e.g., establishing advisory boards, integrating benchmark promotion within ROS, developing streamlined infrastructure). The primary goal was to receive feedback from workshop attendees to drive the development and implementation of new activities for an improved open-source ecosystem.
Key Takeaways
Based on the discussions held at the workshop, the key takeaways are summarized below:
There is a lack of awareness of which benchmarks are available, how to conduct benchmarking, and what constitutes clear success criteria
Acceptable performance thresholds differ between academic research and industrial applications
Due to this difference, benchmarking for industrial applications should be conducted by certified centers, whereas relying on students may be acceptable for academic research
It is difficult for industry to understand and communicate the value of benchmarking
Industry won’t care as much about high-mix, low-volume performance, whereas low-mix, high-volume performance is easier for them to grasp
If benchmarking can support risk assessment, that would be valuable to industry
Need to be able to make the connection between performance on a benchmark and performance in a real-world application
Performance on an abstract artifact needs to translate to dollars saved
Benchmarking reliability is important in evaluating “shrink” in a manufacturing process (i.e., dollars lost during downtime)
Benchmarks need to be able to demonstrate meeting a custom specification
Limit what is benchmarked at first (e.g., motion planning, grasp planning) before moving into application spaces
Standards for collecting data may be needed to effectively compare results if comparison is performed at this level (a sketch of a standardized result record follows this list)
Funding to conduct benchmarking is an issue; a bounty-style system could be utilized
Industry currently has no incentive to publish or share its data
Companies may not contribute to a database of results unless it is behind a paywall, packaged as a consumer-reports-style product, etc.
Government acquisitions will care a lot about comparing a performance benchmark to an established standard
ROS-I EU group user survey results indicate that industrial customers desire the following in a reference robot cell for testing:
The two most popular applications are pick-and-place and assembly
Robot with 1300 mm reach and 10 kg payload; UR, ABB, or Fanuc
Robot-mounted RGB-D and force-torque sensing, plus cell-mounted RGB-D
2-finger or vacuum grippers
CoppeliaSim is a decent simulator for manipulation: https://www.coppeliarobotics.com/
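Since CoppeliaSim came up as a recommended simulator, a minimal sketch of driving a scene from Python via its ZeroMQ remote API is shown below. It assumes CoppeliaSim is running locally with a scene containing an object at the path '/UR5'; the scene contents, object path, and 5-second horizon are placeholder assumptions, not part of any benchmark.

    # Minimal sketch: connect to a locally running CoppeliaSim instance
    # via the ZeroMQ remote API (pip install coppeliasim-zmqremoteapi-client).
    from coppeliasim_zmqremoteapi_client import RemoteAPIClient

    client = RemoteAPIClient()       # connects to localhost:23000 by default
    sim = client.require('sim')      # on older clients, use client.getObject('sim')

    robot = sim.getObject('/UR5')    # placeholder scene path; adjust to your scene
    sim.startSimulation()
    while sim.getSimulationTime() < 5.0:         # observe the scene for ~5 s
        pos = sim.getObjectPosition(robot, -1)   # position in the world frame
        print(f"t={sim.getSimulationTime():.2f}s position={pos}")
    sim.stopSimulation()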
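To make the data-collection takeaway concrete, here is one illustrative (entirely hypothetical) shape for a standardized benchmark result record; if every site reported the same fields, results for component-level benchmarks such as grasp planning could be compared directly. All field names and example values below are assumptions for illustration, not an existing standard.

    # Hypothetical standardized record for reporting benchmark results.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class BenchmarkRecord:
        benchmark: str      # benchmark identifier, e.g., a grasp-planning task
        method: str         # algorithm or system under test
        hardware: str       # robot / gripper / sensor configuration
        trials: int         # number of repeated attempts
        successes: int      # attempts that met the success criterion
        mean_time_s: float  # average execution time per trial, in seconds

        @property
        def success_rate(self) -> float:
            return self.successes / self.trials

    # Example values are made up for illustration only.
    record = BenchmarkRecord("grasp-planning-ycb-v1", "GPD",
                             "UR5 + 2-finger gripper",
                             trials=100, successes=87, mean_time_s=4.2)
    print(json.dumps(asdict(record) | {"success_rate": record.success_rate},
                     indent=2))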
How to Enhance the Open-Source Ecosystem?
For Better Performance Benchmarking in Robotics
For Better Comparison Between Methods
Boosting Use of Datasets and Open-Source Tools
Enhancing Communication Between Researchers
Current Gaps
An online survey gathered feedback from over 100 respondents on the current state of open-source assets and benchmarking resources for robotic manipulation. The respondents who were manipulation researchers (57%) rated the following statements as Never, Rarely, Sometimes, Often, or Frequently; the results below are ordered from highest to lowest frequency (a scoring sketch follows the two lists):
Barriers (highest to lowest frequency)
My research is limited by a lack of relevant comparable benchmarks in the field
My research is limited by current robot simulation capabilities
I face barriers when attempting to integrate open-source assets into my research
My research is limited by a lack of relevant open-source assets in the field
My research is limited by access to robotic hardware
Activity (highest to lowest frequency)
I learn about the availability of new open-source assets
I utilize open-source assets (e.g., YCB Object Set, Cornell Grasp Dataset, GPD) in my research
I benchmark my robotic manipulation research to others in the field
I contribute to open-source for robotic manipulation
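For readers curious how such an ordering can be produced, below is a minimal scoring sketch; the response counts are fabricated for illustration and do not reflect the actual survey data.

    # Map Likert-style frequency labels to numeric scores and order
    # statements by mean score. Counts below are made-up examples,
    # NOT the actual survey results.
    LIKERT = {"Never": 0, "Rarely": 1, "Sometimes": 2, "Often": 3, "Frequently": 4}

    # response counts per statement, in the same order as the LIKERT keys above
    responses = {
        "limited by lack of comparable benchmarks": [5, 10, 20, 15, 10],
        "limited by simulation capabilities": [8, 12, 18, 14, 8],
    }

    def mean_score(counts):
        return sum(s * n for s, n in zip(LIKERT.values(), counts)) / sum(counts)

    for statement, counts in sorted(responses.items(),
                                    key=lambda kv: mean_score(kv[1]),
                                    reverse=True):
        print(f"{mean_score(counts):.2f}  {statement}")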
Future Solutions
Modular Benchmarking Software Pipelines (see the interface sketch after this list)
Distributed Physical Benchmarking Facilities
Online Community Resources
Working Groups and Advocacy
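As one way to picture the first solution above, a minimal sketch of a modular pipeline interface follows. Every class and field name here is hypothetical, intended only to show how swappable stages would let a single component (e.g., the grasp planner) be exchanged and compared in isolation while the rest of the pipeline stays fixed.

    # Hypothetical interface for a modular benchmarking pipeline: each
    # stage sits behind a common protocol, so stages can be swapped
    # independently for component-level comparison.
    from typing import Any, Protocol

    class Stage(Protocol):
        def run(self, data: dict[str, Any]) -> dict[str, Any]: ...

    class Pipeline:
        def __init__(self, *stages: Stage):
            self.stages = stages

        def run(self, data: dict[str, Any]) -> dict[str, Any]:
            for stage in self.stages:    # each stage consumes the prior output
                data = stage.run(data)
            return data

    class FakeGraspPlanner:
        def run(self, data):
            data["grasp"] = "top-down"                  # stand-in for a real planner
            return data

    class FakeMotionPlanner:
        def run(self, data):
            data["trajectory"] = ["q_start", "q_goal"]  # stand-in trajectory
            return data

    print(Pipeline(FakeGraspPlanner(), FakeMotionPlanner()).run({"scene": "ycb"}))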
Facilitators
Adam Norton, University of Massachusetts Lowell
Berk Calli, Worcester Polytechnic Institute
Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Award TI-2229577