CAMBRIDGE, Mass. — Researchers at the Massachusetts Institute of Technology this week announced that they have enabled a soft robotic arm to understand its configuration in 3D space using only motion and position data from its own “sensorized” skin.
Soft robots are constructed from highly compliant materials and are inspired by living organisms. Proponents say they are safer, more adaptable, and more resilient alternatives to traditional rigid robots. But making these deformable robots fully autonomous is challenging because they can move in a virtually infinite number of directions at any given moment. That makes it difficult to train planning and control models.
Traditional methods for achieving autonomous control use large systems of multiple motion-capture cameras that provide the robots with feedback about 3D movement and position. But those systems are impractical for soft robots in real-world applications.
Sifting signals for sensorized orientation
In a paper being published in the journal IEEE Robotics and Automation Letters, the MIT researchers described a system of soft sensors that cover a robot’s body to provide “proprioception” — meaning awareness of motion and position of its body. That feedback runs into a novel deep-learning model that sifts through the noise and captures clear signals to estimate the robot’s 3D configuration.
The researchers validated their system on a soft robotic arm resembling an elephant trunk, which can predict its own position as it autonomously swings around and extends.
The sensors can be fabricated using off-the-shelf materials, so any lab can develop its own sensorized systems, said Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper, along with CSAIL postdoc Cosimo Della Santina.
“We’re sensorizing soft robots to get feedback for control from sensors, not vision systems, using a very easy, rapid method for fabrication,” he said. “We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step toward that type of more sophisticated automated control.”
One future goal is to help make artificial limbs that can more dexterously manipulate objects in the environment.
“Think of your own body: You can close your eyes and reconstruct the world based on feedback from your skin,” said co-author Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We want to design those same capabilities for soft robots.”
Shaping soft robot perception
A longtime goal in soft robotics has been fully integrated body sensors. Traditional, rigid sensors detract from a soft robot body's natural compliance, complicate its design and fabrication, and can cause mechanical failures. Sensors made from soft materials are a more suitable alternative, but they require specialized materials and fabrication methods, making them difficult for many robotics labs to build and integrate into soft robots.
While working in his CSAIL lab one day looking for inspiration for sensor materials, Truby made an interesting connection. “I found these sheets of conductive materials used for electromagnetic interference shielding that you can buy anywhere in rolls,” he said.
These materials have “piezoresistive” properties, changing in electrical resistance when strained. Truby realized they could make effective soft sensors if they were placed on certain spots on the trunk. As the sensor deforms in response to the trunk’s stretching and compressing, its electrical resistance is converted to a specific output voltage. The voltage is then used as a signal correlating to that movement.
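This resistance-to-voltage conversion is commonly done with a voltage-divider readout circuit. The sketch below is illustrative only: the supply voltage, reference resistance, and resistance values are hypothetical, not taken from the paper.

```python
# Illustrative voltage-divider readout for a piezoresistive strain sensor.
# V_SUPPLY and R_REF are hypothetical values, not from the MIT paper.
V_SUPPLY = 5.0    # supply voltage (volts)
R_REF = 1000.0    # fixed reference resistor (ohms)

def readout_voltage(r_sensor: float) -> float:
    """Output voltage measured across the sensor in a simple divider."""
    return V_SUPPLY * r_sensor / (R_REF + r_sensor)

# As strain raises the sensor's resistance, the output voltage rises,
# yielding a signal that correlates with the trunk's deformation.
v_rest = readout_voltage(1000.0)      # 2.5 V with no strain
v_strained = readout_voltage(1500.0)  # 3.0 V under strain
```

In a real system this voltage would be sampled by an analog-to-digital converter and streamed to the controller; here the function simply models the circuit relationship.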
But the material didn’t stretch much, which would limit its use for soft robotics. Inspired by kirigami — a variation of origami that includes making cuts in a material — Truby designed and laser-cut rectangular strips of conductive silicone sheets into various patterns, such as rows of tiny holes or crisscrossing slices like a chain-link fence. That made them far more flexible, stretchable, “and beautiful to look at,” he said.
The researchers’ robotic trunk includes three segments, each with four fluidic actuators for a total of 12 used to move the arm. They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot.
The sensors were attached to the actuators using "plasma bonding," a technique that energizes the surface of a material so it bonds to another material. It takes roughly a couple of hours to shape dozens of sensors, which can then be bonded to the soft robots with a handheld plasma-bonding device.
‘Learning’ configurations
As hypothesized, the sensors did capture the trunk’s general movement. But the signals they gathered were really noisy.
“Essentially, they’re non-ideal sensors in many ways,” Truby said. “But that’s just a common fact of making sensors from soft conductive materials. Higher-performing and more reliable sensors require specialized tools that most robotics labs do not have.”
To estimate the sensorized soft robot's configuration, the researchers built a deep neural network to do most of the heavy lifting, sifting through the noise to capture meaningful feedback signals. They also developed a new model that kinematically describes the soft robot's shape and vastly reduces the number of state variables the network must process.
In experiments, the researchers had the sensorized trunk swing around and extend itself in random configurations over approximately an hour and a half. They used a traditional motion-capture system to gather ground-truth data.
In training, the model analyzed sensor data to predict a configuration and compared its predictions to the ground-truth data collected simultaneously. In doing so, the model "learns" to map signal patterns from its sensors to real-world configurations.
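The supervised setup described above can be sketched in miniature: a model maps 12 sensor readings to a low-dimensional configuration vector, fitted against ground-truth poses. The paper uses a deep neural network; this sketch substitutes a least-squares linear fit on simulated data purely to keep the example short and self-contained, and the configuration dimension is a hypothetical choice.

```python
import numpy as np

# Toy stand-in for the training loop: map 12 sensor voltages to a
# low-dimensional configuration vector, supervised by motion-capture data.
# The paper uses a deep network; a linear least-squares fit keeps this short.
rng = np.random.default_rng(0)

N_SENSORS, N_CONFIG = 12, 6  # 12 sensors; configuration size is hypothetical
true_map = rng.normal(size=(N_SENSORS, N_CONFIG))

# Simulated dataset: sensor readings paired with ground-truth configurations,
# with noise added to mimic the non-ideal soft sensors.
signals = rng.normal(size=(500, N_SENSORS))
ground_truth = signals @ true_map
noisy_signals = signals + 0.1 * rng.normal(size=signals.shape)

# "Training": fit a map from noisy signals to configurations.
weights, *_ = np.linalg.lstsq(noisy_signals, ground_truth, rcond=None)

# The learned map predicts the robot's configuration from new readings.
prediction = rng.normal(size=(1, N_SENSORS)) @ weights
```

The key idea mirrors the article: the estimator never sees the motion-capture system at deployment time; it only needs the skin's sensor signals once the mapping has been learned.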
The results indicated that, for certain steadier configurations, the robot's estimated shape matched the ground truth.
Improving sensorized models
Next, the MIT researchers aim to explore new sensor designs for improved sensitivity and to develop new models and deep-learning methods to reduce the required training for every new sensorized soft robot. They also hope to refine the system to better capture the robot’s full dynamic motions.
Currently, the neural network and sensor skin are not sensitive enough to capture subtle or highly dynamic movements. But, for now, this is an important first step for learning-based approaches to soft robotic control, Truby said.
“Like our soft robots, living systems don’t have to be totally precise,” he said. “Humans are not precise machines, compared to our rigid robotic counterparts, and we do just fine.”
Editor’s Note: This article was republished from MIT News.