Collaborative Open-source Manipulation Performance Assessment for Robotics Enhancement (COMPARE) Ecosystem
Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Awards TI-2346069 and TI-2229577
This page details the background work conducted during COMPARE Phase I (2022–2024) that motivates our work during COMPARE Phase II (2024 onward).
COMPARE Phase I (2022–2024) Survey on Open-Source and Benchmarking for Robotics
We conducted a survey for researchers to provide feedback on the current state of open-source assets and benchmarking resources for robotics, and on potential future activities for improvement. The results as of September 2023, from over 100 online respondents and ~100 workshop participants, are summarized below.
Current Limitations
The barriers most frequently faced are: (1) a lack of relevant, comparable benchmarks, (2) limitations of simulation capabilities, and (3) issues when integrating open-source products. While the proposed OSE does not explicitly seek to improve (2), increasing benchmarking throughout the community will lead to higher demand for better simulation capabilities. The activities that respondents least frequently performed are: (1) contributing to open-source and (2) benchmarking to compare to others in the field. The lack of clear instructions in published benchmarks was highlighted in open responses. During workshop discussions (see links to individual workshop content below) and deep dive meetings, several bottlenecks to conducting quality robot manipulation performance evaluation were identified, summarized as follows:
Variation in available resources across labs for physical robot testing: robots, personnel, tools
Lack of community consensus on metrics, protocols, definitions of robot manipulation taxonomy, data collection methods, and software component structures to enable compatibility, among others
Lack of applicability of benchmarking protocols to real-world tasks, more diversity needed
Publications are inconsistent in reporting the system configurations and benchmarking protocols used
Lack of truly modular software to enable component-level and holistic system evaluations
Lack of incentives, as publications do not require or favor benchmarking, and few other venues are available
The value of benchmarking is not well understood by industry and/or its customers
Recommendations for Improvement
The survey included several potential activities and mechanisms for respondents to rate in terms of benefit. Those rated as highest benefit were: organized repositories of (1) robot manipulation benchmarking results and (2) open-source products for robot manipulation. The remaining items were rated just below those, but all of similar benefit: (3) distributed robot benchmarking centers, (4) dedicated conference tracks and journals for benchmarking, (5) remote access to robot hardware, and (6) review panels to ensure open-source contributions meet established standards. Open response comments further highlighted the need for (6), as well as the benefit of truly modular software to enable improved performance benchmarking. Our community discussions also led to several recommendations for OSE activities to build incentives for the community to conduct quality robot manipulation performance evaluation, summarized as follows:
Advocate for publication review criteria to favor research that includes benchmarking comparisons and utilization of open-source products
Develop mechanisms to ensure industry relevance and applicability of benchmarks, which may also enable transition into industrial products
Set performance targets for benchmarking across the community (e.g., through a competition) by establishing a desired threshold of performance, such as a mean picks-per-hour rate that exceeds the typical human rate, rather than relying only on relative performance comparisons (a minimal sketch of such a target check follows this list)
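To make the target-setting recommendation concrete, here is a minimal Python sketch that computes a mean picks-per-hour metric from timed trials and checks it against a threshold. The trial data and the 400 picks/hour target are assumed placeholder values, not community-established figures.

    # Illustrative only: compute mean picks per hour across benchmark trials
    # and check it against a performance target.
    # The target below is an assumed placeholder, not an established human rate.
    TARGET_PICKS_PER_HOUR = 400.0

    def mean_picks_per_hour(trials):
        """trials: list of (picks_completed, duration_seconds) tuples."""
        rates = [picks / (seconds / 3600.0) for picks, seconds in trials]
        return sum(rates) / len(rates)

    trials = [(55, 600), (62, 600), (48, 600)]  # hypothetical trial data
    rate = mean_picks_per_hour(trials)
    print(f"Mean rate: {rate:.1f} picks/hour")
    print("Meets target" if rate >= TARGET_PICKS_PER_HOUR else "Below target")

Framing targets as absolute thresholds rather than pairwise comparisons lets results from different labs be judged against the same bar, even when the labs never test against each other directly.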
COMPARE Phase I (2022–2024) Workshops
Each linked page features a summary of the findings from that workshop:
Advancing HRI Research and Benchmarking Through Open-Source Ecosystems, Human-Robot Interaction (HRI) 2023, Stockholm, Sweden, March 13, 2023: https://www.robot-manipulation.org/workshops/hri-2023
Forum Discussion on Open-Source Robotic Manipulation and Benchmarking: Current Gaps and Future Solutions, ROS-Industrial Consortium Americas 2023 Annual Meeting, Detroit, Michigan, May 25, 2023: https://www.robot-manipulation.org/workshops/ros-i-2023
Advancing Robot Manipulation Through Open-Source Ecosystems, International Conference on Robotics and Automation (ICRA 2023), London, UK, May 29, 2023: https://www.robot-manipulation.org/workshops/icra-2023
Forum to Develop an Open-Source Ecosystem for Robotic Manipulation, Robotics: Science and Systems (RSS) 2023, Daegu, Republic of Korea, July 10, 2023: https://www.robot-manipulation.org/workshops/rss-2023
COMPARE Phase I (2022–2024) Charter for Strategic Development and Ecosystem Functions
To support the development, benchmarking, and deployment of robot systems, an open-source ecosystem (OSE) is under development through the National Science Foundation (NSF) Pathways to Enable Open-Source Ecosystems (POSE) program.
The OSE will facilitate the development and dissemination of open-source assets for robotic manipulation, i.e., robot hardware, software, and benchmarking practices. It will create a community-driven platform on which researchers and developers can share and learn about these open-source resources, find tools to easily utilize them, collaborate on developing systematic robot experimentation methodologies, and disseminate their findings effectively. As such, the OSE will address the issues slowing the advancement of robot manipulation: the lack of systematic development and benchmarking methodologies, as well as the unique challenges related to physical assets (both equipment and benchmarking tools), which have prevented the robotic manipulation community from utilizing such resources as effectively as other research domains like computer vision.
The OSE will support a wide variety of open-source assets (OSAs) available or to be developed in the future for robot manipulation, including physical object sets and hardware designs, digital datasets and simulations, instructional task protocols and metrics, and functional algorithms and robotic behaviors.
Participants in the OSE will include all stakeholders of these OSAs: developers (those who develop and maintain an OSA), contributors (those who iterate on an existing OSA), and users (those who implement OSAs in their research and development practices).
OSE structure, governance, functions, and activities will be facilitated by a distributed set of OSE participants in various roles (e.g., leadership and participatory positions), diverse in terms of backgrounds, research domains, and developmental maturity, spanning industry, academia, and government entities.
Development of the OSE will be conducted transparently, with continuous input solicited from the community and outputs shared from the initial core development team, through a series of workshops, meetings, surveys, and asynchronous communication via online platforms.
It is intended that the Open Source Security Foundation (OpenSSF) best practices for Free/Libre and Open Source Software (FLOSS) projects be followed for all OSAs utilized within the OSE, although additional consideration will be needed for non-code OSAs, such as physical and instructional assets.
During Phase I of the project (2022–2024), the scope of the OSE shall be investigated and discovered, leading to a proposal for Phase II (2024–2026). During Phase II, the OSE will be formally developed and established through the execution of multiple activities and functions, including, but not limited to, the following:
Establish the COMPARE clearinghouse, a comprehensive and organized repository of OSAs with mechanisms to support contribution of new OSAs by developers, iteration on OSAs by contributors, and implementation of OSAs for experimentation usage (a hypothetical metadata sketch for clearinghouse entries appears after this list).
Establish advisory committees and working groups for reviewing OSA development, implementation, and benchmarking with OSE participants, with groups established for particular applications (e.g., bimanual manipulation, human-robot handovers).
Establish collaborative test facilities for conducting interlaboratory experiments and round robin testing of OSAs and robotic solutions that utilize OSAs to evaluate their robustness and demonstrate replicable performance.
Propose new research conference tracks and journals specific to the OSE topics of interest, including open-source benchmarking and dataset generation.
Provide roadmaps and frameworks for OSA development maturity with supporting templates and processes for effectively reaching each stage.
Provide webinars, tutorials, and instructions for OSE participation as a developer, contributor, or user, covering usage of the previously described assets and activities.
Define OSE-specific standards for OSA structure, deployment, and interoperability, for potential support by standards development organizations (SDOs) like IEEE or ASTM.
Provision OSE participants with a common set of facilities, including robotic equipment, tools, hardware OSAs, software OSAs, and access to remotely accessible supporting infrastructure, as needed for participating organizations, with eligibility of such provisions to be based on a set of established criteria.
Coordinate large-scale events across the OSE such as distributed hackathons, grand challenges, and competitions to ignite targeted, active development of OSAs, robotic capabilities, and/or benchmarking of robot systems.
Identify strategies for a sustainable OSE, i.e., establishing monetary resources (company or government partnerships, membership mechanisms) and human resources (a governance structure) for long-term operation.
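To make the clearinghouse and interoperability items above more concrete, the following Python sketch shows one possible shape for an OSA metadata record; all field names and example values are hypothetical assumptions, not an adopted COMPARE schema.

    # Hypothetical sketch of an OSA metadata record for a clearinghouse entry.
    # Field names and categories are illustrative assumptions, not an adopted schema.
    from dataclasses import dataclass, field

    @dataclass
    class OSARecord:
        name: str          # human-readable asset name
        category: str      # e.g., "object set", "dataset", "protocol", "algorithm"
        version: str       # version of this iteration of the asset
        developers: list   # maintainers of the asset
        license: str       # open-source license identifier (e.g., SPDX)
        url: str           # where the asset is hosted
        benchmarks: list = field(default_factory=list)  # protocols it supports

    # Example entry (all values hypothetical):
    record = OSARecord(
        name="Example Object Set",
        category="object set",
        version="1.0.0",
        developers=["Example Lab"],
        license="CC-BY-4.0",
        url="https://example.org/object-set",
        benchmarks=["example grasping protocol"],
    )
    print(record)

A shared record format of this kind, whatever its final fields, is what would let the clearinghouse index contributions from developers, track iterations by contributors, and point users to the benchmarking protocols an asset supports.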
Contact
If you are interested in contributing to the development of the COMPARE open-source ecosystem or have any questions or comments, please e-mail Adam Norton at adam_norton@uml.edu