If any COMPARErs are at the European Robotics Forum (ERF) this week in Stavanger, Norway, reach out to me, your Community Facilitator, Adam (pictured below), to meet up! I presented a poster today on our most recent work toward developing guidelines for modular components in robot grasping and manipulation pipelines for picking in clutter. I’ll also be participating in two workshops tomorrow: WS#60: Modular and Interoperable Robotics Future: Workshop to Revolutionize the European Robotics Ecosystem, and WS#21: Test Before Invest: Robotics Benchmarking, Experiment Reproducibility, Software and Middlewares, Testing and Quality Assurance. Hope to see you there! 🤖
Adam Norton, Community Facilitator, with his COMPARE poster at ERF 2026 in Stavanger, Norway

Looking forward, we’re planning an informal community session at ICRA this year to review future developments for the COMPARE Ecosystem, including expanding to additional robot manipulation focus areas. Based on our feedback and observations thus far, the two prime topics for consideration are (1) multi-fingered dexterous manipulation and (2) deformable object manipulation. Let us know if you’ll be attending ICRA this year and are interested in participating! More information will be available soon. Keep the source open and mark that bench! 🤖

🤖 Manipulation Datasets: 10 new entries (21 total):

Dex1B: Articulation [Sim; Point clouds, Robot pose, Action sequences, Depth maps; Articulated Object Manipulation, Grasping; 1,000,000,000 samples; 2 fundamental tasks (Grasping and Articulation) across 6,000+ objects]
Flat’n’Fold [Real; RGB images, Depth images, Point clouds, Action sequences, Robot pose, Robot joint states, Tracker data; Deformable Object Manipulation; 2,099 samples; 2 main tasks (Flattening and Folding) across 44 unique garments in 8 categories] 
Galaxea Open-World Dataset [Real; RGB images, Depth images, Robot joint states, Action sequences; Pick-and-Place, General Home/Service Tasks; 100,000 samples; 150+ task categories, 58 operational skills] 
HORA (Hand–Object to Robot Action) [Real; RGB images, Depth images, 6D poses, Robot joint states; Pick-and-Place, Human-Robot Handovers, General Home/Service Tasks; 150,000 samples; Covers diverse manipulation tasks derived from multiple public HOI datasets plus custom recordings] 
Purpose-driven Robotic Interaction in Scene Manipulation (PRISM) [Sim; RGB images, Depth images, 6D poses; Pick-and-Place, Tool Use; 378,844 samples; 568 unique task categories] 
RefSpatial [Real, Sim; RGB images, Depth images, 3D skeleton; Pick-and-Place; 2,500,000 samples; 31 spatial relations (left/right, above/below, front/back, near/far, metric distance, orientation, etc.); single-step and multi-step spatial reasoning (up to 5 steps)] 
Robo360 [Real; Video, Audio, Robot joint states; Articulated Object Manipulation, Pick-and-Place; 2,000 samples; Diverse object manipulation tasks across 100+ objects with varying material properties] 
RoboCerebra [Sim; RGB images, Video; General Home/Service Tasks; 1,000 samples; 100 task variants across 6 subtask-type categories; 4-15 subtask steps per trajectory] 
RoboMIND [Real; RGB images, Depth images, Robot joint states, Action sequences; Articulated Object Manipulation; 107,000 samples; 479 distinct tasks (v1.2); 279 tasks in initial v1.0 release] 
RoboVerse [Sim; RGB images, Depth images, Robot joint states, 6D poses, Action sequences; Pick-and-Place, Articulated Object Manipulation; 500,000 samples; 276 task categories; 1,000+ distinct task variants; Open6DOR subset alone has 5,000+ tasks across position, rotation, and 6-DoF tracks] 