3D Part Segmentation for Inertial Identification

Understanding Objects Through Shape and Structure

From our paper:
The Sum of Its Parts: Visual Part Segmentation for Inertial Parameter Identification
By Philippe Nadeau, Matthew Giamou and Jonathan Kelly
In IEEE International Conference on Robotics and Automation (ICRA) 2023
🔗 Paper 🔗 Dataset 🔗 Code


Understanding Objects Through Shape and Structure

Collaborative robots (cobots) must understand the physical properties of the objects they manipulate to plan effective grasps and motions. However, existing inertial parameter identification methods rely purely on force-torque sensing and require fast, dynamic motions to achieve accurate estimates—motions that are unsafe in human-centric environments.

We introduce a technique that combines visual shape information with force-torque data to improve inertial parameter estimation. Instead of relying solely on high-speed dynamic movements, our method segments objects into distinct homogeneous parts and uses their shape information to constrain the parameter identification problem, making accurate estimation possible even at low, safe speeds.
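To make the reduction concrete, here is the standard formulation from the identification literature (notation is the common convention, not necessarily the paper's exact symbols): the measured wrench is linear in the ten inertial parameters, and part segmentation shrinks the unknowns to one mass per part.

```latex
% Measured wrench w_k at pose k is linear in the ten inertial parameters
% \phi (mass, first mass moment, six unique inertia tensor entries):
w_k = Y(a_k, v_k, \omega_k, g)\,\phi
% With P homogeneous parts of known shape, \phi becomes a weighted sum of
% per-unit-mass part parameter vectors, leaving only P unknown masses:
\phi = \sum_{j=1}^{P} m_j\,\bar{\phi}_j, \qquad m_j \ge 0
```

Because each $\bar{\phi}_j$ is computed from the segmented geometry, the identification problem stays well-conditioned even when the robot moves slowly.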

How It Works

Our key idea is that man-made objects are often composed of a few homogeneous-density parts (e.g., a hammer with a metal head and a wooden handle). We show that 3D part segmentation can reduce the problem of identifying the inertial parameters to that of estimating the mass of each part. We leverage this observation by performing part segmentation before estimating the inertial parameters. Our method consists of four main steps:

  1. 3D Scanning & Reconstruction: An RGB-D camera captures point clouds of the object.
  2. Surface-Based Clustering: A fast, heuristic method groups points into clusters based on local features (e.g., positions, normals, and colors).
  3. Volumetric Shape Segmentation: A hierarchical convex decomposition algorithm further refines the segmentation, ensuring each part is a realistic, physically meaningful component.
  4. Inertial Parameter Estimation: The mass distribution of the object is inferred by solving a simple convex optimization problem.
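The surface-based clustering in step 2 can be illustrated by grouping points in a joint feature space of positions, normals, and colors. This is only a minimal sketch on synthetic data; the paper's heuristic is more involved, and plain k-means stands in here just to demonstrate the feature-space grouping.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

# Illustrative sketch of step 2: describe each point by its position,
# surface normal, and color, then group points with similar features.
rng = np.random.default_rng(1)
n = 200  # points per part

# Synthetic "hammer" cloud: a red head near the origin and a brown handle
# offset along +x, each with a distinct dominant surface normal.
head = np.hstack([
    rng.normal([0.0, 0.0, 0.0], 0.01, (n, 3)),   # positions
    rng.normal([0.0, 0.0, 1.0], 0.05, (n, 3)),   # normals
    rng.normal([0.8, 0.1, 0.1], 0.02, (n, 3)),   # RGB colors
])
handle = np.hstack([
    rng.normal([0.2, 0.0, 0.0], 0.01, (n, 3)),
    rng.normal([0.0, 1.0, 0.0], 0.05, (n, 3)),
    rng.normal([0.5, 0.35, 0.2], 0.02, (n, 3)),
])
features = np.vstack([head, handle])

# Whitening puts position, normal, and color on a comparable scale.
_, labels = kmeans2(whiten(features), 2, minit='++', seed=0)

# Points from the same part end up in the same cluster.
print(len(set(labels[:n])), len(set(labels[n:])))  # 1 1
```

In practice the features come from the RGB-D reconstruction, and the number of clusters is not fixed in advance; the volumetric step then refines this initial grouping.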

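Step 4 can be sketched as a small constrained least-squares problem: once each part's per-unit-mass parameter vector is known from its geometry, only the part masses remain unknown. The sketch below uses synthetic data (a random regressor and made-up part geometry vectors) and nonnegative least squares; it illustrates the reduced problem, not the paper's actual solver.

```python
import numpy as np
from scipy.optimize import nnls

# Each homogeneous part contributes a per-unit-mass inertial parameter
# vector (10 values: mass, first mass moment, six inertia entries),
# computed from its segmented shape. The object's parameters are a
# nonnegative combination phi = sum_j m_j * phi_bar_j, so identification
# reduces to estimating one mass per part.
rng = np.random.default_rng(0)

# Unit-mass parameter vectors for two parts (e.g., hammer head and
# handle), stacked as columns. Values here are synthetic placeholders.
phi_bar = rng.uniform(0.1, 1.0, size=(10, 2))

true_masses = np.array([0.9, 0.3])      # kg, ground truth for the demo
phi_true = phi_bar @ true_masses        # full inertial parameter vector

# Regressor Y maps inertial parameters to wrenches over K sensor poses.
K = 30
Y = rng.normal(size=(6 * K, 10))
wrench = Y @ phi_true + 0.01 * rng.normal(size=6 * K)  # noisy F/T data

# Nonnegative least squares on the reduced (two-unknown) problem.
masses, residual = nnls(Y @ phi_bar, wrench)
print(masses)  # close to [0.9, 0.3]
```

Because only a handful of masses are estimated instead of ten coupled parameters, the problem remains well-posed with the low-excitation data that safe, slow motions provide.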
Key Benefits

Compared to other methods, our approach offers several advantages:

  • Safe, stop-and-go identification: Eliminates the need for unsafe, fast motions while still producing the full set of inertial parameters.
  • More accurate estimates: Shape constraints reduce estimation errors compared to purely force-based methods and enforce physical consistency.
  • Public dataset & open-source code: Includes 20 segmented workshop tools with meshes, point clouds, and inertial parameters. Our dataset is available here and the source code of our method is here.

Experiments

To evaluate our method, we created a dataset of 20 common workshop tools, providing ground-truth meshes, segmented point clouds, and inertial parameters. The dataset enables benchmarking of the complete pipeline: 3D shape reconstruction, part segmentation, and inertial parameter estimation. It also supports the evaluation of new algorithms in simulation, as we did in this project.

[Animation: part segmentations of the workshop tools]

In real-world tests, we deployed an xArm 7 cobot equipped with a RealSense D435 depth camera. The robot scanned a hammer, segmenting its head and handle before estimating its inertial parameters. The entire process was performed using only safe stop-and-go motions.

[Animation: the robot scanning the hammer]

Finally, we validated the accuracy of our method by performing a challenging hammer balancing act, where the robot precisely placed the hammer onto a narrow target—demonstrating that the identified center of mass was highly accurate.

[Animation: the robot balancing the hammer on a narrow target]

Our work shows that part-aware inertial parameter estimation can dramatically improve cobot safety and accuracy, making dynamic object understanding feasible in real-world human-robot collaboration.


If you’re interested in learning more, check out the full paper here, reach out to me directly, or watch this three-minute presentation of the work: