Benchmarking Robot Manipulation:
Improving Interoperability and Modularity
A full-day workshop to be held at the Robotics: Science and Systems (RSS) 2025 conference on June 25th, 2025
Relevant COMPARE Slack Channel: #2025-rss-workshop
Benchmarking robot manipulation capabilities and comparing research solutions is performed either at the component level or holistically. Physical evaluations typically involve the latter, which requires a full robot manipulation system. However, researchers often contribute a single novel software component – such as a grasp planner or perception module – while leveraging multiple open-source products for the rest of the manipulation pipeline. The high dimensionality of a robot manipulation system makes it difficult to determine which factors contributed to the resulting performance and to reproduce experiments. The lack of standards and guidelines on component structures, input/output formats (for pipeline integration), and test and evaluation procedures to ensure compatibility and usability places a significant burden on researchers. This workshop seeks to unite the existing community of users and developers of open-source products in robot manipulation toward the development of standards and guidelines that improve interoperability and modularity, such that (1) true side-by-side benchmarking of manipulation pipelines and components can be conducted, (2) experiment reproducibility can be improved, (3) the implementation effort of the complex robot manipulation pipeline is reduced, and (4) the barrier to entry for new researchers is lowered.
This workshop is part of a project to develop an improved open-source and benchmarking ecosystem for robot manipulation. We previously conducted four conference workshops (HRI 2023, ROS-I 2023, ICRA 2023, RSS 2023) to scope the ecosystem; this workshop is one of several activities aimed at now developing and activating the ecosystem. Invited speakers, poster presenters (selected from contributed 2-4 page extended abstracts), and workshop attendees will bring together researchers from the communities that make up the robot manipulation software pipeline (e.g., grasping, motion planning, perception, learning, control) and benchmarking.
Speakers
Yasemin Bekiroğlu
Chalmers University of Technology
[invited, confirmed]
Constantinos Chamzas
Worcester Polytechnic Institute
[invited, confirmed]
Bhavana Chandrashekhar
Amazon Robotics
[invited, confirmed]
Katerina Fragkiadaki
Carnegie Mellon University
[invited, confirmed]
Radhika Gudipati
Advanced Research + Invention Agency
[invited, confirmed]
Sergey Levine
University of California Berkeley
[invited, confirmed]
Stefan Schaal
Intrinsic
[invited, confirmed]
Shambhuraj Mane
Worcester Polytechnic Institute
[COMPARE team]
Daniel Nakhimovich
Rutgers University
[COMPARE team]
Adam Norton
University of Massachusetts Lowell
[COMPARE team]
Tentative Schedule
8:30 Opening and introduction of workshop participants
8:45 COMPARE talk: Adam Norton
Interoperability and Modularity
8:55 Invited talk: Stefan Schaal
9:20 Invited talk: Constantinos Chamzas
9:45 Submitted lightning talks (~3-5)
10:00 Coffee break with posters
10:30 COMPARE talk: Daniel Nakhimovich
10:50 Invited talk: Radhika Gudipati
11:15 Invited talk: Sergey Levine
11:40 Breakout discussions
12:10 Reviewing breakout discussions, compiling results
12:30 Lunch break
Benchmarking Robot Manipulation
14:00 Invited talk: Bhavana Chandrashekhar
14:25 Invited talk: Yasemin Bekiroğlu
14:50 COMPARE talk: Shambhuraj Mane
15:10 Submitted lightning talks (~3-5)
15:30 Coffee break with posters
16:00 Invited talk: Katerina Fragkiadaki
16:20 Invited talk: TBD
16:45 Breakout discussions
17:15 Reviewing breakout discussions, compiling results
17:35 Closing and next steps
17:45 Workshop end
Participation
Participants will be asked to add post-it notes to a poster board highlighting contributions they can make to open-source and benchmarking for robot manipulation research (what they can “give”) and contributions they seek from others to fill gaps in support of their research (what they want to “get”). This exercise will be held during the coffee breaks to prepare for the breakout discussions, facilitate networking and potential matchmaking by the workshop organizers during these periods, and allow us to characterize the feedback received.
During breakout sessions, we will divide into smaller groups to discuss improvements we can make as a community to the interoperability and modularity of the manipulation pipeline to enable more effective benchmarking, around topics such as (1) standards and guidelines on component/pipeline structures (e.g., input/output formats), (2) test and evaluation methods to ensure compatibility and usability, and (3) online resources to connect and support researchers. The workshop organizers will facilitate discussions followed by a review session where each group will provide a high-level summary. We will compile these results into key takeaways and post the workshop outputs on this webpage, similar to what has been done during our previous workshops (e.g., ICRA 2023).
Contributions
Short papers (2-4 pages following RSS guidelines; anonymization is not required) are solicited for presentation as lightning talks followed by poster presentations. The workshop is structured around two main topics:
Interoperability and Modularity: Even though research labs often focus on a specific aspect of robotic manipulation (e.g., perception, planning, learning), they still need to implement the whole software pipeline to conduct experimental analysis. We will discuss how to reduce this development burden via modular coding practices that would enable manipulation pipelines reusable across research contexts.
Benchmarking Robot Manipulation: Protocols and tools that are widely accepted by the community are needed to enable systematic experimental analysis. We will discuss how to enable reliable quantified performance analysis in manipulation research and examine successful examples both in robotics and neighboring fields.
Contributions may be in the form of position papers, proposals for new efforts, or reports of new results, discussing issues faced, successes achieved, and/or analyses of the current landscape of robotic manipulation when implementing open-source assets, improving the modularity of software components, and conducting benchmarking. It is expected that authors of accepted papers will give a lightning talk, present a poster, and participate in topic discussions at the workshop. At least one author of each accepted submission must register for the workshop and attend in person; remote presentation will not be allowed.
March 31, 2025: Call for submissions open
April 25, 2025, 23:59 Anywhere on Earth: Early submission deadline for short papers to ensure decision by RSS 2025 early registration deadline (April 30)
April 28, 2025: Notification of acceptance of early workshop submissions
May 23, 2025, 23:59 Anywhere on Earth: Submission deadline for short papers to ensure decision by RSS 2025 standard registration deadline (June 5)
June 2, 2025: Notification of acceptance for workshop submissions
June 25, 2025: Date of workshop at RSS 2025
Submissions should be e-mailed to compare.ecosystem@gmail.com with the text “[RSS 2025 Workshop]” in the subject line. Authors of accepted submissions are encouraged to upload their papers to arXiv.org; they will also be hosted on a publicly accessible Google Drive folder and linked on the workshop website along with presentation slides and videos of presentations.
All questions regarding the workshop can be e-mailed to compare.ecosystem@gmail.com.
Organizers
Adam Norton
University of Massachusetts Lowell
Holly Yanco
University of Massachusetts Lowell
Kostas Bekris
Rutgers University
Berk Calli
Worcester Polytechnic Institute
Aaron Dollar
Yale University
Yu Sun
University of South Florida