[COMPARE] Ecosystem Digest, March 23, 2026


If any COMPARErs are at the European Robotics Forum (ERF) this week in Stavanger, Norway, reach out to me, your Community Facilitator, Adam (pictured below), to meet up!

I presented a poster today on our most recent work toward developing guidelines for modular components in robot grasping and manipulation pipelines for picking in clutter. I’ll also be participating in two workshops tomorrow: WS#60: Modular and Interoperable Robotics Future: Workshop to Revolutionize the European Robotics Ecosystem, and WS#21: Test Before Invest: Robotics Benchmarking, Experiment Reproducibility, Software and Middlewares, Testing and Quality Assurance. Hope to see you there! 🤖


Adam Norton, Community Facilitator, with his COMPARE poster at ERF 2026 in Stavanger, Norway

Looking forward, we’re planning an informal community session at ICRA this year to review future developments for the COMPARE Ecosystem, including expanding to additional robot manipulation focus areas. Based on our feedback and observations thus far, the two prime topics for consideration are (1) multi-fingered dexterous manipulation and (2) deformable object manipulation. Let us know if you’ll be attending ICRA this year and are interested in participating! More information will be available soon. 

Keep the source open and mark that bench! 🤖

Here’s what you may have missed and what’s coming up soon:

New manipulation datasets added to robot-manipulation.org, CVPR workshops have been announced, ERF is this week, and RoboSoft is next month!

💬 = recent discussions in the COMPARE Slack

🤖 = recent additions to robot-manipulation.org


Datasets


🤖 Manipulation Datasets: 10 new entries (21 total):

Dex1B: Articulation [Sim; Point clouds, Robot pose, Action sequences, Depth maps; Articulated Object Manipulation, Grasping; 1,000,000,000 samples; 2 fundamental tasks (Grasping and Articulation) across 6,000+ objects]


Flat’n’Fold [Real; RGB images, Depth images, Point clouds, Action sequences, Robot pose, Robot joint states, Tracker data; Deformable Object Manipulation; 2,099 samples; 2 main tasks (Flattening and Folding) across 44 unique garments in 8 categories]


Galaxea Open-World Dataset [Real; RGB images, Depth images, Robot joint states, Action sequences; Pick-and-Place, General Home/Service Tasks; 100,000 samples; 150+ task categories, 58 operational skills]


HORA (Hand–Object to Robot Action) [Real; RGB images, Depth images, 6D poses, Robot joint states; Pick-and-Place, Human-Robot Handovers, General Home/Service Tasks; 150,000 samples; Covers diverse manipulation tasks derived from multiple public HOI datasets plus custom recordings]


Purpose-driven Robotic Interaction in Scene Manipulation (PRISM) [Sim; RGB images, Depth images, 6D poses; Pick-and-Place, Tool Use; 378,844 samples; 568 unique task categories]


RefSpatial [Real, Sim; RGB images, Depth images, 3D skeleton; Pick-and-Place; 2,500,000 samples; 31 spatial relations (left/right, above/below, front/back, near/far, metric distance, orientation, etc.); single-step and multi-step spatial reasoning (up to 5 steps)]


Robo360 [Real; Video, Audio, Robot joint states; Articulated Object Manipulation, Pick-and-Place; 2,000 samples; Diverse object manipulation tasks across 100+ objects with varying material properties]


RoboCerebra [Sim; RGB images, Video; General Home/Service Tasks; 1,000 samples; 100 task variants across 6 subtask-type categories; 4-15 subtask steps per trajectory]


RoboMIND [Real; RGB images, Depth images, Robot joint states, Action sequences; Articulated Object Manipulation; 107,000 samples; 479 distinct tasks (v1.2); 279 tasks in initial v1.0 release]


RoboVerse [Sim; RGB images, Depth images, Robot joint states, 6D poses, Action sequences; Pick-and-Place, Articulated Object Manipulation; 500,000 samples; 276 task categories; 1,000+ distinct task variants; Open6DOR subset alone has 5,000+ tasks across position, rotation, and 6-DoF tracks]


Events


Workshops

🤖 Workshops: The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2026 takes place in Colorado, USA, June 3-7, 2026, with 6 workshops related to robot manipulation and perception running June 3-4:

Embodied Reasoning in Action: Workshop and Challenge on Embodied Reasoning for Robotic Manipulation

Bridging Vision, Language, and Action: What’s Missing in Actionable Visual Perception for Robotics

ScaleBot: The First Workshop on Scalable Robot Learning Systems

Unified Robotic Vision with Cross-Modal Sensing and Alignment

6th Workshop on 3D Scene Understanding for Vision, Graphics, and Robotics

OpenSUN3D: 6th Workshop on Open-World 3D Scene Understanding with Foundation Models


What’s coming up soon?

ERF 2026: European Robotics Forum (ERF) 2026, March 23 - 27, 2026, Stavanger, Norway


RoboSoft 2026: 9th IEEE-RAS International Conference on Soft Robotics 2026, April 7 - 11, 2026, Kanazawa, Ishikawa, Japan



Subscribe to the Robot Manipulation Events Google Calendar to stay in the loop!


🤖 Have suggestions for open-source products or benchmarking assets that should be added? Submit them here! https://forms.gle/LHrtmDpm82X4qrDk6

🤖 Have suggestions for events we should add? Use this form to let us know! https://forms.gle/PfiSRjcuQnavbPNS9 





--
Adam Norton, Community Facilitator, COMPARE Ecosystem
Improving Open-Source and Benchmarking for Robot Manipulation
Robot-Manipulation.org: Home of the COMPARE Ecosystem
COMPARE Slack: Collaborate with other researchers on Slack
COMPARE Google Group: Join the mailing list to receive announcements in your inbox