Fascination With Technology: Simulation Makes Robots Smarter

Source: MIT News | Translated by AI | Reading time: 3 min


In our "Fascination with Technology" section, we present designers with impressive projects from research and development every week. Today: how robots can infer an object's physical properties through simulation simply by picking it up.

Thanks to a novel simulation method, robots can, for example, guess the weight of an object. (Image: MIT News, iStock)

A person clearing out an attic can often guess the contents of a box simply by picking it up and shaking it, without seeing what's inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.

They have developed a technique that enables robots to gather information about an object's weight, softness, or contents using only internal sensors, by picking it up and gently shaking it. With their method, which requires no external measuring devices or cameras, the robot can accurately determine parameters such as an object's mass within seconds.

Simulation With Robot And Object Models

The key to their approach is a simulation process that incorporates models of the robot and the object to identify the object's properties while the robot interacts with it. "This idea is very general, and I believe we are just scratching the surface of what a robot can learn this way. My dream is for robots to go out into the world, touch things, and move around their environment to discover the properties of everything they interact with," says Peter Yichen Chen, an MIT postdoc and lead author of a paper on this technique.

Algorithm "Observes" Movement of Robot And Object

The researchers' method relies on proprioception: the ability of a human or robot to sense the movement or position of its own body. A person lifting a dumbbell at the gym, for example, can feel its weight in their wrist and biceps even though they grip it only with their hand. In the same way, a robot can "feel" an object's heaviness through the many joints in its arm.

As the robot lifts an object, the researchers' system collects signals from the robot's joint encoders, which are sensors that capture the rotational position and speed of the joints during movement. To estimate the properties of an object during the interaction between the robot and the object, their system relies on two models: one that simulates the robot and its movement, and another that simulates the object's dynamics.

A human doesn't have super-precise measurements of the joint angles in their fingers or the exact torque they're exerting on an object, but a robot does. We take advantage of these capabilities.

Chao Liu, MIT postdoc


Their algorithm "observes" the movement of the robot and the object during a physical interaction and uses the joint sensor data to work backward and identify the properties of the object. For instance, a heavier object moves more slowly than a lighter one when the robot exerts the same force.
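The intuition that a heavier object responds more slowly to the same force follows directly from Newton's second law, and working backward from it recovers the mass. A minimal sketch of that inverse step, with illustrative numbers that are not taken from the paper:

```python
# Inverse estimation from a single interaction: the robot applies a known
# force and measures the resulting acceleration through its joint encoders.
# By Newton's second law F = m * a, the unknown mass is m = F / a.

def estimate_mass(applied_force_n: float, measured_accel_ms2: float) -> float:
    """Recover an object's mass from a known force and observed acceleration."""
    return applied_force_n / measured_accel_ms2

# The same 10 N push accelerates a heavier object less:
light = estimate_mass(10.0, 5.0)   # 10 N producing 5 m/s^2 -> 2 kg
heavy = estimate_mass(10.0, 2.0)   # 10 N producing 2 m/s^2 -> 5 kg
print(light, heavy)
```

The researchers' actual system solves a richer version of this inversion, using full joint trajectories and simulated dynamics rather than a single force-acceleration pair.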

A precise digital twin of the real world is really important for the success of our method.

Peter Yichen Chen


The Trick: Differentiable Simulations

They use a technique called differentiable simulation, which allows the algorithm to predict how small changes in an object's properties, such as mass or softness, affect the robot's final joint position.

Once the simulation matches the real movements of the robot, the system has identified the correct property. The algorithm can do this in seconds and only needs to see one real trajectory of the robot in motion to perform the calculations.

The technique could also determine properties such as the moment of inertia or the viscosity of a liquid inside a container. And because the algorithm does not require an extensive training dataset, unlike methods that rely on computer vision or external sensors, it is less prone to errors when confronted with unfamiliar environments or new objects.

What the Researchers Still Aim to Achieve

In the future, the researchers want to attempt to combine their method with computer vision to develop an even more powerful multimodal sensor technology. "This work does not aim to replace computer vision. Both methods have their pros and cons. But here we have shown that we can already determine some of these properties without a camera," says Chen. The researchers also want to explore applications for more complex robotic systems, such as soft robots, and more complex objects, such as sloshing liquids or granular media like sand.

Co-authors include MIT postdoc Chao Liu; Pingchuan Ma PhD '25; Jack Eastman MEng '24; Dylan Randle and Yuri Ivanov of Amazon Robotics; and MIT professors of electrical engineering and computer science Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), and Wojciech Matusik, head of the Computational Design and Fabrication Group within CSAIL. The findings will be presented at the International Conference on Robotics and Automation (ICRA).
