Advancing Robot Manipulation Through Open-Source Ecosystems
2023 IEEE International Conference on Robotics and Automation (ICRA) Workshop
This full-day workshop was held at the IEEE International Conference on Robotics and Automation (ICRA) 2023 on May 29, 2023, in London.
Abstract
Advancement in robot manipulation is limited by a lack of systematic development and benchmarking methodologies, causing inefficiencies and even stagnation. Several assets are available in the robotics literature (e.g., YCB, NIST-ATB), yet an active and effective mechanism to disseminate and use them is lacking, which significantly reduces their impact and utility. This workshop will take a step towards removing the roadblocks to the development and assessment of robot manipulation hardware and software by reviewing, discussing, and laying the groundwork for an open-source ecosystem. The workshop aims to determine the needs and wants of robot manipulation researchers regarding open-source asset development, utilization, and dissemination. As such, the workshop will play a crucial role in identifying the preconditions and requirements for developing an open-source ecosystem that provides physical, digital, instructional, and functional assets for performance benchmarking and comparison. Discussions will include ways to maintain ecosystem activity over time and to identify methods and principles for achieving a sustainable open-source effort. Accordingly, the invited speakers of this workshop include experts who have led well-established, successful open-source efforts (e.g., the Robotarium, ROS-Industrial), along with experimentation experts and developers of newer open-source assets (e.g., NIST-MOAD, the Household Cloth Object Set). The overarching goal is to learn from these successful examples and to open communication channels between new and experienced researchers.
Key Takeaways
Based on the discussions held at the workshop, a set of key takeaways has been summarized and organized into the topics below:
Replicability techniques
Using Docker containers provides a lightweight, virtual-machine-like environment that helps guarantee the correct dependencies are in place
Replicability is undervalued; R-papers exist, but few researchers are submitting them
Pay attention to versioning to support future-proofing
Many performance measurements are hardware-specific, so hardware must be considered on equal footing with software (see the Eurobench project for an example: https://eurobench2020.eu/the-eurobench-platform-a-unified-benchmarking-software-for-remote-testing/)
The context of a test can be replicated for comparison between solutions, but it must be characterized in a standard way to allow for systematic testing as components (e.g., software, hardware, environment) change; see the sketch following this list
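As a minimal sketch of what such a standardized context characterization could look like, the following Python snippet records the software context automatically and asks the experimenter to describe the hardware and environment. The field names, example values, and the benchmark_context.json filename are illustrative assumptions, not a format agreed upon at the workshop.

```python
import json
import platform
import subprocess
from datetime import datetime, timezone


def capture_context(hardware: dict, environment: dict) -> dict:
    """Capture the software/hardware/environment context of a benchmark run.

    The hardware and environment dictionaries are filled in by the
    experimenter (e.g., robot model, gripper, camera, lighting); the
    software side is captured automatically where possible.
    """
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        commit = "unknown"  # not running inside a git checkout

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "software": {
            "python": platform.python_version(),
            "os": platform.platform(),
            "code_commit": commit,
        },
        "hardware": hardware,
        "environment": environment,
    }


if __name__ == "__main__":
    # Example values are purely illustrative.
    context = capture_context(
        hardware={"arm": "UR5e", "gripper": "Robotiq 2F-85", "camera": "RealSense D435"},
        environment={"lighting": "office fluorescent", "table_surface": "plywood"},
    )
    with open("benchmark_context.json", "w") as f:
        json.dump(context, f, indent=2)
```

Saving such a record alongside every set of results makes it possible to tell, after the fact, exactly which component changed between two runs being compared.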
Running another user’s code
Some safety barriers must be in place to constrain what the user’s code is able to do
For manipulation, integrating elements like additional collision avoidance for safety will be necessary
Pre-checks of user code can occur in simulation; these do not need to cover the full task, but can be a reduced version that mostly checks whether the robot leaves the workspace, whether runtime errors occur, etc. (see the sketch after this list)
The quirks of an implementation that will impact transferring robot behaviors from simulation to physical hardware are really only learned through experience and trial and error
Docker containers are safe in the sense that if a user's code breaks out of the software application, it is still confined within the Docker environment rather than the entire computer on which it is being run
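As an illustration of the kind of reduced simulation pre-check discussed above, here is a minimal Python sketch. The workspace bounds and the simulator interface (reset(), step(), end_effector_position()) are assumptions standing in for whatever simulation backend a given facility provides.

```python
import numpy as np

# Axis-aligned workspace bounds in metres (illustrative values).
WORKSPACE_MIN = np.array([-0.5, -0.5, 0.0])
WORKSPACE_MAX = np.array([0.5, 0.5, 0.6])


def precheck(user_policy, simulator, steps: int = 200) -> tuple[bool, str]:
    """Run a reduced version of the task in simulation before granting
    access to the physical robot.

    `user_policy` maps an observation to a command; `simulator` is a
    stand-in for the facility's simulation backend and is assumed to
    expose reset(), step(), and end_effector_position().
    """
    obs = simulator.reset()
    for t in range(steps):
        try:
            command = user_policy(obs)      # user code may raise anything
            obs = simulator.step(command)
        except Exception as exc:            # runtime errors fail the check
            return False, f"runtime error at step {t}: {exc!r}"
        pos = np.asarray(simulator.end_effector_position())
        if np.any(pos < WORKSPACE_MIN) or np.any(pos > WORKSPACE_MAX):
            return False, f"end effector left the workspace at step {t}"
    return True, "pre-check passed"
```

A check like this can run quickly on submission, before any physical robot time is scheduled, and its failure message can be returned to the user automatically.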
Sustainability
Initial funding from federal grants may allow for development while the asset is novel, but continuous funding for maintenance is trickier to secure, although more may be received if additional novel functionality is being added
Experience from efforts such as running the Robotarium suggests that charging users will not work; feedback indicated it would be prohibitive to usage
Modular software pipeline
Such a framework would allow researchers to work on just the individual component their research is concerned with (e.g., grasp planning)
MoveIt is a good framework to use as a baseline for motion planning
You should not be required to open-source your results or code when using the system
The initial build of the software pipeline would still be usable in simulation by those who don't have the required hardware; the resulting code could then be run in a collaborator's facility
The software pipeline may need to be designed to run on a standard set of hardware first in order to demonstrate some functionality before expanding to more hardware
Benchmarking of the assets used to drive the provided components in the pipeline (e.g., perception, motion planning) may be required first in order to understand their impact on performance
The community must agree on a set of performance measures for these packages so we can define metrics for this benchmarking
Researchers with end-to-end solutions (i.e., developing all parts of the pipeline) would likely not be able to utilize the pipeline, but they also shouldn’t need to
That said, any benchmarking protocols should still be runnable both for solutions that use the pipeline and for those that don't
The pipeline would leverage existing open-source packages as its default behaviors; a minimal sketch of such a modular interface follows this list
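To make the idea of a modular pipeline concrete, here is a minimal Python sketch of component interfaces that a researcher could implement selectively. The class names and method signatures are illustrative assumptions, and the default implementation of each stage could wrap an existing open-source package (e.g., a MotionPlanner backed by MoveIt).

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any


@dataclass
class Grasp:
    """Minimal grasp representation: pose as (x, y, z, qx, qy, qz, qw) plus width."""
    pose: tuple
    width: float


class PerceptionModule(ABC):
    @abstractmethod
    def detect_objects(self, rgb, depth) -> list[Any]:
        """Return detected object models/poses from sensor data."""


class GraspPlanner(ABC):
    @abstractmethod
    def plan_grasps(self, objects: list[Any]) -> list[Grasp]:
        """Return candidate grasps, best first."""


class MotionPlanner(ABC):
    @abstractmethod
    def plan_to_grasp(self, grasp: Grasp):
        """Return a trajectory that reaches the grasp pose."""


class ManipulationPipeline:
    """Composes the components; a researcher replaces only the stage they study."""

    def __init__(self, perception: PerceptionModule,
                 grasp_planner: GraspPlanner,
                 motion_planner: MotionPlanner):
        self.perception = perception
        self.grasp_planner = grasp_planner
        self.motion_planner = motion_planner

    def run(self, rgb, depth):
        objects = self.perception.detect_objects(rgb, depth)
        grasps = self.grasp_planner.plan_grasps(objects)
        return self.motion_planner.plan_to_grasp(grasps[0])
```

A grasp-planning researcher, for instance, would subclass only GraspPlanner and reuse the default perception and motion-planning components, so benchmarking isolates the contribution of that single component.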
Open-source datasets
Need more datasets for assembly and production environments
Need more thorough documentation of how to generate datasets
Standardization of the properties of physical objects that are included in a dataset is needed; for example, stiffness is an important characteristic for flexible materials, but it is difficult to measure properly (see the sketch after this list)
Datasets we should include in the proposed open-source ecosystem (OSE):
PartNet-Mobility Dataset for articulated objects: https://sapien.ucsd.edu/browse
ARMBench Dataset of picking failures: http://armbench.s3-website-us-east-1.amazonaws.com/
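As a sketch of what a standardized object-property record could look like in machine-readable form, the following Python dataclass fixes field names and SI units so values remain comparable across datasets. The fields and the example values are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ObjectProperties:
    """A standardized property record for a physical object in a dataset.

    Units follow SI by convention so values are comparable across datasets;
    optional fields stay None when a property was not (or could not be) measured.
    """
    object_id: str
    mass_kg: float
    dimensions_m: tuple[float, float, float]      # bounding box (x, y, z)
    material: str
    stiffness_n_per_m: Optional[float] = None     # hard to measure for flexible items
    friction_coefficient: Optional[float] = None
    mesh_file: Optional[str] = None               # path to CAD/scan mesh, if any
    notes: str = ""


# Illustrative entry for a flexible object whose stiffness was not measured.
towel = ObjectProperties(
    object_id="cloth_towel_01",
    mass_kg=0.12,
    dimensions_m=(0.50, 0.30, 0.004),
    material="cotton",
    stiffness_n_per_m=None,
    notes="Flexible object; stiffness measurement protocol still undefined.",
)
```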
Benchmarking robot manipulation performance
Bottlenecks
Variable hardware
Automated experimentation
Consensus on metrics and protocols
Component vs. holistic evaluations
Consensus on manipulation taxonomy
Truly modular plug-and-play software
Simulation capabilities
Inconsistent configuration reporting
Benchmark applicability
Online venues for sharing results
Sharing failure in addition to success
Assuring data quality
Sustainability of benchmarking efforts
Methods for comparison are needed before usage can be advocated
Incentives
Review criteria: weighting factors to favor research papers or funding proposals that feature benchmarking comparisons and utilization of open-source assets
Industry relevance: benchmarks that are relevant to industry and real-world applicability could lead to additional funding from industry to continue development
Setting performance targets: establishing a desired threshold of performance for researchers to strive for (e.g., a competition with an absolute goal, not just relative ranking between teams)
Overview
Presentations and guided discussion will take place across three categories:
Open-source Hardware and Infrastructure: hardware designs that incorporate commercial off-the-shelf (COTS) materials and rapid-prototyping (e.g., Yale OpenHand), test apparatuses (e.g., ACRV Picking Benchmark), and remotely accessible infrastructure for conducting robot manipulation experiments (e.g., Robotarium).
Open-source Software and Functionality: algorithms and behaviors that drive robot manipulation operations (e.g., MoveIt motion planner) and calibration (e.g., DREAM camera calibration), typically accessible via the Robot Operating System (ROS).
Open-source Benchmarking Protocols and Datasets: task protocols and metrics (e.g., RGMC service and manufacturing tasks) and datasets generated from physical testing (e.g., Cornell Grasp Dataset) or simulations (e.g., Jacquard Dataset).
After each category's presentations and per-presenter Q&A are completed, a guided discussion will be facilitated among the workshop participants. The following topics, among others, will be put forth to motivate these discussions:
Availability: What open-source assets are available in this category? What types of assets are there too many or too few of? How is the availability of these assets promoted, or how should it be?
Composition: What formats or structures of open-source assets in this category are used? What characteristics are they missing and which are unnecessary?
Applicability: Are the open-source assets reviewed in this category applicable to your research? What use cases would they be applicable to? Are there particular domains or applications that would benefit greatly from open-source assets?
Benefits: What are the benefits of having this open-source asset available? How do you use it for your own work? Are there missing features that would provide greater benefit to you or others?
Implementation: What are the barriers to using open-source assets in this category? Are there existing instructions and documentation that assist in implementation, or are these features lacking? What level of support is desired to ease implementation?
Speakers
Sean Wilson
Georgia Institute of Technology
Cindy Grimm
Oregon State University
Felix Widmaier
Max Planck Institute of Intelligent Systems
Lily Baye-Wallace
Southwest Research Institute
Jürgen “Juxi” Leitner
LYRO Robotics
Megan Zimmerman
National Institute of Standards and Technology (NIST)
Irene Garcia-Camacho
Institut de Robòtica i Informàtica Industrial
You!
Consider contributing to this workshop! See the Contributions section below.
Schedule
Invited talks: 30 minutes each (20 presentation + 10 questions)
Short talks: 15 minutes each (12 presentation + 3 questions)
All times given are in London local time (GMT +1)
Introduction
9:00 Opening and introduction of workshop participants
9:10 Open-Source Robotic Manipulation and Benchmarking: Current Gaps and Future Solutions, Holly Yanco [presentation]
Remotely Accessible Infrastructure and Open-source Hardware
9:30 Robotarium, Sean Wilson [presentation]
10:00 REMOTE: Remote Experimentation of Manipulation for Online Test and Evaluation, Cindy Grimm [presentation]
10:30 Coffee break
11:00 The TriFinger Platform and the Real Robot Challenge for Learning Agile Manipulation, Felix Widmaier [presentation]
11:30 Short talk: The Tilburg Dexterous Hand: A Low Cost Research Platform for Everyone, Giacomo Spigler [paper | presentation]
11:45 Discussion: Bottlenecks of Benchmarking and Performance Evaluation
12:30 - 2:00 Lunch break
Open-source Software and Functionality
2:00 ROS-Industrial: Software Architecture for Extensibility, Lily Baye-Wallace [presentation]
2:30 What has Benchmarking ever done for us (in Robotic Manipulation)?, Jürgen “Juxi” Leitner [youtube]
3:00 Short talk: PickSim: A dynamically configurable Gazebo pipeline for robotic manipulation learning, Guillaume Duret [paper | presentation]
3:15 Discussion: Modular Benchmarking Software Pipelines to Streamline Benchmarking
3:45 Coffee break
Open-source Benchmarking Protocols and Datasets
4:15 NIST Manufacturing Object Assemblies Dataset, Megan Zimmerman
4:30 Challenges in Comparing Cloth Manipulation, Irene Garcia-Camacho
5:00 Short talk: An Open-source Recipe for Building Simulated Robot Manipulation Benchmarks, Linghao Chen [paper | presentation | youtube]
5:15 Short talk: ARMBench: Amazon Robotic Manipulation Benchmark Dataset, Manikantan Nambi and Chaitanya Mitash
5:30 Closing and workshop end
Participation
The workshop will be hybrid, with a focus on in-person participation, but a virtual option will be available for remote attendees to watch presentations and participate in discussion. Join the COMPARE project Slack workspace, channel #icra-2023-workshop, to participate in discussions before, during, and after the workshop: https://join.slack.com/t/compare-ecosystem/shared_invite/zt-1nfgdwq4z-_8_PsXVhJ6H1FAZuQizjTA
Contributions
Short papers are sought that discuss issues faced, successes achieved, and/or analyses of the current landscape of robotic manipulation when developing or utilizing open-source assets and conducting benchmarking. Submissions may be in the form of position papers, proposals for new efforts, or reports of new results, 2-4 pages in length, with the expectation that authors of accepted papers will present at the workshop and participate in topic discussions.
Submissions should use the ICRA 2023 format, 2-4 pages in length (excluding references); anonymization is not required. Contributed papers should fit into one or more of the following topics of the workshop: open-source hardware and infrastructure, open-source software and functionality, open-source benchmarking protocols and datasets, the availability of open-source assets, their composition, applicability or lack thereof, benefits of open-source, and barriers to implementation, among others.
All submissions will be reviewed, and authors of accepted papers will be asked to give a 10-minute talk at the workshop. At least one author of each accepted submission must register for the workshop and attend in person; remote presentation will not be allowed.
February 22, 2023: Call for submissions open
March 17, 2023, 23:59 Anywhere on Earth (AoE): Early submission deadline for short papers to ensure decision by ICRA 2023 early registration deadline (April 1)
March 24, 2023: Notification of acceptance of early workshop submissions
April 28, 2023, 23:59 AoE: Submission deadline for short papers
May 5, 2023: Notification of acceptance for workshop submissions
May 29, 2023: Date of workshop at ICRA 2023
Submissions should be e-mailed to adam_norton@uml.edu with the text “[ICRA 2023 Workshop Submission]” in the subject line. Authors of accepted submissions are encouraged to upload their papers to arXiv.org; they will also be hosted on a publicly accessible Google Drive folder and linked on the workshop website along with presentation slides and videos of presentations.
IEEE RAS Technical Committees
This workshop is endorsed by:
The IEEE RAS Technical Committee on Robotic Hands, Grasping & Manipulation (TC RHGM): https://www.ieee-ras.org/robotic-hands-grasping-and-manipulation
The IEEE RAS Technical Committee for Performance Evaluation & Benchmarking of Robotic and Automation Systems (TC PEBRAS): https://www.ieee-ras.org/performance-evaluation
Organizers
Adam Norton, University of Massachusetts Lowell
Holly Yanco, University of Massachusetts Lowell
Berk Calli, Worcester Polytechnic Institute
Aaron Dollar, Yale University
Contact
Please contact Adam Norton with any questions or comments via e-mail: adam_norton@uml.edu
Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Award TI-2229577