Benchmarking Robot Manipulation:
Improving Interoperability and Modularity


A proposed full-day workshop at the IEEE International Conference on Robotics and Automation (ICRA) 2025

Benchmarking robot manipulation capabilities and comparing research solutions is performed either at the component level or holistically. Physical evaluations typically involve the latter, which requires a full robot manipulation system. However, researchers often contribute a single novel software component – such as a grasp planner or perception module – while leveraging multiple open-source products for the rest of the manipulation pipeline. The high dimensionality of a robot manipulation system makes it difficult to determine which factors contributed to the resulting performance and to reproduce experiments. The lack of standards and guidelines on component structures, input/output formats (for pipeline integration), and test and evaluation procedures to ensure compatibility and usability places a significant burden on researchers. This workshop seeks to unite the existing community of users and developers of open-source products in robot manipulation around the development of standards and guidelines to improve interoperability and modularity, such that (1) side-by-side benchmarking of manipulation pipelines and components with truly comparable software configurations can be conducted, (2) experiment reproducibility can be improved, (3) the implementation effort of the complex robot manipulation pipeline can be reduced, and (4) the barrier to entry for new researchers can be lowered.

This workshop is one of several events under the Collaborative Open-source Manipulation Performance Assessment for Robotics Enhancement (COMPARE) Ecosystem project intended to activate the open-source ecosystem for robot manipulation. The COMPARE project seeks to improve the compatibility of all types of open-source products, including software, hardware designs, objects and artifacts, datasets, and benchmarking protocols. This workshop focuses on software components, including perception modules, motion planners, grasp planners, and learning methods (among others). By aligning on methods for interoperability and modularity of these components, we can enable more effective benchmarking of the algorithms driving robot manipulation performance.

As part of COMPARE, several working groups are being established to begin developing standards and guidelines for creating open-source products, conducting benchmarking, and reporting performance results. Each working group (WG) is scoped to a category of open-source product, with individual task groups (TGs) specializing in more specific types of products. Most relevant to this workshop is the Software WG, which includes TGs for Perception, Motion Planners, Grasp Planners, Learning, and Control/Execution Pipelines. This workshop will provide a venue for the leads of each WG/TG to present what has been developed thus far, as a means of stimulating discussion among workshop participants and recruiting additional WG/TG members so that development continues beyond the workshop.

This is the 5th session in a series of conference workshops on improving open-source products for robot manipulation and benchmarking. Prior workshops were held at HRI 2023, ROS-Industrial Consortium Americas 2023, ICRA 2023, and RSS 2023.

Invited Speakers

Jose Avendano Arbelaez

MathWorks

Constantinos Chamzas

Worcester Polytechnic Institute

Bhavana Chandrashekhar

Amazon Robotics

Sergey Levine

UC Berkeley

Stefan Schaal

Intrinsic

You! Consider contributing to this workshop

COMPARE Speakers

Daniel Nakhimovich

Rutgers University

Shambhuraj Mane

Worcester Polytechnic Institute

Ricardo Frumento

University of South Florida

Yifan Zhu

Yale University

Schedule

Contributions

Extended abstracts (2-4 pages) are sought that discuss issues faced, successes achieved, and recommendations for standards/guidelines to improve open-source products and benchmarking, as well as other related topics. Submissions may take the form of position papers, proposals for new efforts, or reports of new results, with the expectation that authors of accepted papers will give a short presentation at the workshop (5-10 minutes), present a poster during coffee breaks, and participate in topic discussions. All papers will be shared on the workshop webpage unless indicated otherwise.

Organizers

Adam Norton

University of Massachusetts Lowell

Holly Yanco

University of Massachusetts Lowell

Kostas Bekris

Rutgers University

Berk Calli

Worcester Polytechnic Institute

Aaron Dollar

Yale University

Yu Sun

University of South Florida

Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Award TI-2346069