MIT’s Digger Finger senses shapes of buried objects

The technology uses tactile sensing to identify objects underground and might one day help disarm land mines or inspect cables.


MIT developed a “Digger Finger” robot that digs through granular material and senses the shapes of buried objects. Photo Credit: MIT

Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers said the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand-new quest. Previously, researchers have used technologies that sense the subterranean from above, such as ground-penetrating radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

“So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” said Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
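
The reconstruction step a GelSight-style sensor relies on can be pictured as a photometric-stereo calculation: each color channel approximates shading under one known light direction, so per-pixel intensities can be inverted for surface normals and then integrated into a height map. The snippet below is only an illustrative sketch under those assumptions, not the authors’ pipeline.

```python
# Illustrative photometric-stereo-style sketch of GelSight-like depth recovery
# (a simplification, not the authors' pipeline). Assumes a Lambertian membrane and
# that each color channel corresponds to one known light direction.
import numpy as np

def normals_from_rgb(img, light_dirs):
    """img: H x W x 3 array in [0, 1]; light_dirs: 3 x 3 matrix of unit light directions."""
    h, w, _ = img.shape
    intensities = img.reshape(-1, 3)                    # one shading value per light/channel
    n = np.linalg.solve(light_dirs, intensities.T).T    # Lambertian model: I = L @ n
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    return n.reshape(h, w, 3)

def depth_from_normals(normals):
    """Integrate surface gradients into a relative height map of the contact area."""
    nz = np.clip(normals[..., 2], 1e-3, None)
    gx, gy = -normals[..., 0] / nz, -normals[..., 1] / nz
    # Crude cumulative integration; practical systems solve a Poisson problem instead.
    return np.cumsum(gx, axis=1) + np.cumsum(gy, axis=0)
```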

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” said Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place, which makes the material difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” said Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

A closeup photograph of the Digger Finger and a diagram of its parts. | Photo Credit: MIT

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel said that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

Adelson said the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” said Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.

Editor’s Note: This article was republished from MIT News.

Algorithm helps robot swarms complete missions with minimal wasted effort


MIT researchers developed an algorithm that coordinates the performance of robot teams for missions like mapping or search-and-rescue in complex, unpredictable environments. | Credit: Jose-Luis Olivares, MIT

Sometimes, one robot isn’t enough.

Consider a search-and-rescue mission to find a hiker lost in the woods. Rescuers might want to deploy a squad of wheeled robots to roam the forest, perhaps with the aid of drones scouring the scene from above. The benefits of a robot team are clear. But orchestrating that team is no simple matter: how do you ensure the robots aren’t duplicating each other’s efforts or wasting energy on convoluted search trajectories?

MIT researchers have designed an algorithm to ensure the fruitful cooperation of information-gathering robot teams. Their approach relies on balancing a tradeoff between data collected and energy expended – which eliminates the chance that a robot might execute a wasteful maneuver to gain just a smidgeon of information. The researchers said this assurance is vital for robot teams’ success in complex, unpredictable environments.

“Our method provides comfort, because we know it will not fail, thanks to the algorithm’s worst-case performance,” said Xiaoyi Cai, a PhD student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

The research will be presented at the IEEE International Conference on Robotics and Automation in May. Cai is the paper’s lead author. His co-authors include Jonathan How, the R.C. Maclaurin Professor of Aeronautics and Astronautics at MIT; Brent Schlotfeldt and George J. Pappas, both of the University of Pennsylvania; and Nikolay Atanasov of the University of California at San Diego.

Robot teams have often relied on one overarching rule for gathering information: The more the merrier. “The assumption has been that it never hurts to collect more information,” said Cai. “If there’s a certain battery life, let’s just use it all to gain as much as possible.” This objective is often executed sequentially — each robot evaluates the situation and plans its trajectory, one after another. It’s a straightforward procedure, and it generally works well when information is the sole objective. But problems arise when energy efficiency becomes a factor.

Fig. 1. Overview of the proposed distributed planning approach for non-monotone information gathering. Robots generate individual candidate trajectories and jointly build a team plan via distributed local search, by repeatedly proposing changes to the collective trajectories.

Cai said the benefits of gathering additional information often diminish over time. For example, if you already have 99 pictures of a forest, it might not be worth sending a robot on a miles-long quest to snap the 100th. “We want to be cognizant of the tradeoff between information and energy,” said Cai. “It’s not always good to have more robots moving around. It can actually be worse when you factor in the energy cost.”

The researchers developed a robot team planning algorithm that optimizes the balance between energy and information. The algorithm’s “objective function,” which determines the value of a robot’s proposed task, accounts for the diminishing benefits of gathering additional information and the rising energy cost. Unlike prior planning methods, it doesn’t just assign tasks to the robots sequentially. “It’s more of a collaborative effort,” said Cai. “The robots come up with the team plan themselves.”

Cai’s method, called Distributed Local Search, is an iterative approach that improves the team’s performance by adding or removing individual robot’s trajectories from the group’s overall plan. First, each robot independently generates a set of potential trajectories it might pursue. Next, each robot proposes its trajectories to the rest of the team. Then the algorithm accepts or rejects each individual’s proposal, depending on whether it increases or decreases the team’s objective function. “We allow the robots to plan their trajectories on their own,” said Cai. “Only when they need to come up with the team plan, we let them negotiate. So, it’s a rather distributed computation.”
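
A minimal sketch of how such an accept-or-reject loop could look, assuming a placeholder objective of information gain minus energy cost; the function names and structure are illustrative, not the authors’ implementation.

```python
# Hedged sketch of a distributed-local-search-style loop (placeholder objective and
# structure, not the authors' implementation). A proposal to add, swap, or drop a
# robot's trajectory is kept only if it raises the team objective.
def team_objective(plan, info_gain, energy_cost):
    # Assumed form: diminishing-returns information value minus energy spent.
    return info_gain(plan) - energy_cost(plan)

def distributed_local_search(robots, candidates, info_gain, energy_cost, rounds=10):
    plan = {}                                           # robot id -> chosen trajectory
    score = lambda p: team_objective(p, info_gain, energy_cost)
    for _ in range(rounds):
        changed = False
        for robot in robots:
            for traj in candidates[robot] + [None]:     # None: remove this robot's trajectory
                proposal = {k: v for k, v in plan.items() if k != robot}
                if traj is not None:
                    proposal[robot] = traj
                if score(proposal) > score(plan):       # accept only if the team improves
                    plan, changed = proposal, True
        if not changed:                                 # no proposal helps: local optimum reached
            break
    return plan
```

Because a change is accepted only when it raises the team objective, the plan can never end up worse than where it started, which is the flavor of worst-case assurance Cai describes.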

Distributed Local Search proved its mettle in computer simulations. The researchers ran their algorithm against competing ones in coordinating a simulated team of 10 robots. While Distributed Local Search took slightly more computation time, it guaranteed successful completion of the robots’ mission, in part by ensuring that no team member got mired in a wasteful expedition for minimal information. “It’s a more expensive method,” said Cai. “But we gain performance.”

The advance could one day help robot teams solve real-world information gathering problems where energy is a finite resource, according to Geoff Hollinger, a roboticist at Oregon State University, who was not involved with the research. “These techniques are applicable where the robot team needs to trade off between sensing quality and energy expenditure. That would include aerial surveillance and ocean monitoring.”

Cai also points to potential applications in mapping and search-and-rescue – activities that rely on efficient data collection. “Improving this underlying capability of information gathering will be quite impactful,” he said. The researchers next plan to test their algorithm on robot teams in the lab, including a mix of drones and wheeled robots.

Editor’s Note: This article was republished from MIT News.

Radio frequency perception helps robot grasp hidden objects


MIT’s RF Grasp system uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. | Photo Credit: MIT

MIT researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Associate Professor Fadel Adib, director of the Signal Kinetics Group, and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech. You can read the paper here (PDF).

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment.

“Perception and picking are two roadblocks in the industry today,” said Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.

But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.

“RF is such a different sensing modality than vision,” said Rodriguez. “It would be a mistake not to explore what RF can do.”

RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist. The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

“The robot has to decide, at each point in time, which of these streams is more important to think about,” said Boroushaki. “It’s not just eye-hand coordination, it’s RF-eye-hand coordination. So, the problem gets very complicated.”

The robot initiates the seek-and-pluck process by pinging the target object’s RF tag for a sense of its whereabouts. “It starts by using RF to focus the attention of vision,” said Adib. “Then you use vision to navigate fine maneuvers.” The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren’s source.
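
One way to picture that hand-off, as a rough sketch rather than the paper’s actual control policy: the RF estimate steers the arm while the object is occluded, and the camera’s estimate takes over, or is weighted more heavily, once the object comes into view. The weighting rule below is an assumption for illustration.

```python
# Rough sketch of the RF-then-vision hand-off (an assumed weighting rule for
# illustration, not the paper's control policy).
def fuse_target(rf_estimate, vision_estimate, distance_to_target):
    """rf_estimate: coarse 3D position from the RFID reader; vision_estimate: refined
    3D position from the wrist camera, or None while the object is still occluded."""
    if vision_estimate is None:
        return rf_estimate                               # occluded: RF guides the reach
    w = min(1.0, 0.2 / max(distance_to_target, 1e-3))    # vision dominates as the arm closes in
    return [w * v + (1 - w) * r for v, r in zip(vision_estimate, rf_estimate)]
```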


With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot’s decision making.

RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to “declutter” its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez said this demonstrates RF Grasp’s “unfair advantage” over robots without penetrative RF sensing. “It has this guidance that other systems simply don’t have.”

RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item’s identity without the need to manipulate the item, expose its barcode, then scan it. “RF has the potential to improve some of those limitations in industry, especially in perception and localization,” said Rodriguez.

Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. “Or you could imagine the robot finding lost items. It’s like a super-Roomba that goes and retrieves my keys, wherever the heck I put them.”

Editor’s Note: This article was republished from MIT News.

Deep learning optimizes sensor placement for soft robots


MIT built a deep learning neural network to aid the design of soft robots, such as these iterations of a robotic elephant. | Photo Credit: MIT

There are some tasks traditional robots – the rigid and metallic kind – simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design.

“The system not only learns a given task, but also how to best design the robot to solve that task,” said Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented at the IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Related: Designing a quadruped controlled & powered by pneumatics

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” said Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” said Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
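
A rough sketch of the culling step under assumed mechanics (the real architecture and importance measure may differ): score each particle’s contribution over a round of training, keep only the most-used fraction, and feed the survivors’ strains into the next round.

```python
# Minimal sketch of the particle-culling idea (assumed mechanics, not the authors'
# architecture): rank particles by how much their strain inputs contributed during a
# training round, then keep only the most-used fraction for the next round.
import numpy as np

def cull_particles(strain_history, usage_scores, keep_fraction=0.8):
    """strain_history: T x P matrix of per-particle strains; usage_scores: length-P
    importance estimates (e.g., mean absolute input gradient, assumed here)."""
    keep = max(1, int(keep_fraction * len(usage_scores)))
    kept_idx = np.argsort(usage_scores)[-keep:]          # most-used particles survive
    return kept_idx, strain_history[:, kept_idx]
```

The particles that survive repeated culling are, in effect, the suggested sensor sites.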

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

Related: Metamaterials could lead to transforming robots

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close.


“Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” said Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg said their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he said. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” said Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” said Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

Editor’s Note: This article was republished from MIT News.

Soft actuator enables smaller, more agile drone design


Insects’ acrobatic traits help them navigate the aerial world, with all of its wind, obstacles and uncertainty. | Credit: Kevin Yufeng Chen

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.

Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.

Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” said Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”

According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen said, for insect-like robots “you need to look for alternatives.”

Related: Skydio 1st U.S. drone maker to reach unicorn status

The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.

Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuator is made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat – fast.

Chen’s soft actuator can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” said Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumblebee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

“Achieving flight with a centimeter-scale robot is always an impressive feat,” said Farrell Helbling, an assistant professor of electrical and computer engineering at Cornell University, who was not involved in the research. “Because of the soft actuators’ inherent compliance, the robot can safely run into obstacles without greatly inhibiting flight. This feature is well-suited for flight in cluttered, dynamic environments and could be very useful for any number of real-world applications.”

Helbling adds that a key step toward those applications will be untethering the robots from a wired power source, which is currently required by the actuators’ high operating voltage. “I’m excited to see how the authors will reduce operating voltage so that they may one day be able to achieve untethered flight in real-world environments.”

Related: Q&A: Ingenuity Mars Helicopter chief engineer Bob Balaram

Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he said. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.

Chen said his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”

Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” said Chen. Sometimes, bigger isn’t better.

‘Robomorphic computing’ aims to quicken robots’ response time


MIT developed ‘robomorphic computing,’ an automated way to design custom hardware to speed up a robot’s operation. | Credit: Jose-Luis Olivares, MIT

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called “robomorphic computing,” uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.

Neuman will present the research at April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.

Related: How Boston Dynamics’ robots learned to dance

There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.

These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.”

Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)

Related: 8 degrees of difficulty for autonomous navigation

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.
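
The payoff is easiest to see in software form. The toy sketch below, which is not MIT’s toolchain, shows how a fixed sparsity pattern derived from a robot’s morphology lets a matrix-vector product touch only the entries that can ever be nonzero; in a custom chip, each of those entries can become a dedicated multiply-accumulate unit.

```python
# Toy software analogue of the idea (not MIT's toolchain): when a robot's morphology
# fixes which matrix entries can ever be nonzero, the multiply can be specialized to
# visit only those entries.
def specialize_matvec(sparsity_pattern, n_rows):
    """sparsity_pattern: list of (row, col) pairs that can be nonzero for this robot."""
    def matvec(matrix, vector):
        out = [0.0] * n_rows
        for r, c in sparsity_pattern:      # zero entries are never touched
            out[r] += matrix[r][c] * vector[c]
        return out
    return matvec
```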

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

“I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”

Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for covid-19 patients or manipulating heavy objects.

“This work is exciting because it shows how specialized circuit designs can be used to accelerate a core component of robot control,” says Robin Deits, a robotics engineer at Boston Dynamics who was not involved in the research. “Software performance is crucial for robotics because the real world never waits around for the robot to finish thinking.” He adds that Neuman’s advance could enable robots to think faster, “unlocking exciting behaviors that previously would be too computationally difficult.”

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”

Editor’s Note: This article was republished from MIT News.

RoboGrammar wants to automate your robot’s design


MIT researchers have automated and optimized robot design with a system called RoboGrammar. The system creates arthropod-inspired robots for traversing a variety of terrains. Pictured are several robot designs generated with RoboGrammar. | Credit: MIT

So, you need a robot that climbs stairs. What shape should that robot be? Should it have two legs, like a person? Or six, like an ant?

Choosing the right shape will be vital for your robot’s ability to traverse a particular terrain. And it’s impossible to build and test every potential form. But now an MIT system makes it possible to simulate them and determine which design works best.

You start by telling the system, called RoboGrammar, which robot parts are lying around your shop — wheels, joints, etc. You also tell it what terrain your robot will need to navigate. And RoboGrammar does the rest, generating an optimized structure and control program for your robot.

The advance could inject a dose of computer-aided creativity into the field. “Robot design is still a very manual process,” said Allan Zhao, the paper’s lead author and a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He described RoboGrammar as “a way to come up with new, more inventive robot designs that could potentially be more effective.”

Zhao will present the paper at the SIGGRAPH Asia conference. His co-authors include PhD student Jie Xu, postdoc Mina Konaković-Luković, postdoc Josephine Hughes, PhD student Andrew Spielberg, and professors Daniela Rus and Wojciech Matusik, all of MIT.

Ground rules of RoboGrammar

Robots are built for a near-endless variety of tasks, yet “they all tend to be very similar in their overall shape and design,” said Zhao. For example, “when you think of building a robot that needs to cross various terrains, you immediately jump to a quadruped,” he added, referring to a four-legged animal like a dog. “We were wondering if that’s really the optimal design.”

Zhao’s team speculated that more innovative design could improve functionality. So they built a computer model for the task – a system that wasn’t unduly influenced by prior convention. And while inventiveness was the goal, Zhao did have to set some ground rules.

Related: Researchers building robot with wheels and legs to traverse any terrain

The universe of possible robot forms is “primarily composed of nonsensical designs,” Zhao wrote in the paper. “If you can just connect the parts in arbitrary ways, you end up with a jumble,” he said. To avoid that, his team developed a “graph grammar” – a set of constraints on the arrangement of a robot’s components. For example, adjoining leg segments should be connected with a joint, not with another leg segment. Such rules ensure each computer-generated design works, at least at a rudimentary level.

Zhao said the rules of his graph grammar were inspired not by other robots but by animals – arthropods in particular. These invertebrates include insects, spiders, and lobsters. As a group, arthropods are an evolutionary success story, accounting for more than 80 percent of known animal species.

“They’re characterized by having a central body with a variable number of segments. Some segments may have legs attached,” said Zhao. “And we noticed that that’s enough to describe not only arthropods but more familiar forms as well,” including quadrupeds. Zhao adopted the arthropod-inspired rules thanks in part to this flexibility, though he did add some mechanical flourishes. For example, he allowed the computer to conjure wheels instead of legs.
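
A toy grammar in that spirit might look like the following; the rules are invented for illustration and are not RoboGrammar’s actual rule set, but they capture the idea that nonterminals expand into component subgraphs while constraints, such as a limb always attaching through a joint, keep every design buildable.

```python
# A toy graph grammar in this spirit (rules invented for illustration, not
# RoboGrammar's actual rule set): nonterminals expand into component subgraphs, and
# every rule keeps the design buildable, e.g. a limb always attaches through a joint.
import random

RULES = {
    "BODY":    [["SEGMENT"], ["SEGMENT", "BODY"]],            # variable number of body segments
    "SEGMENT": [["link"], ["link", "LIMB", "LIMB"]],          # a segment may carry a pair of limbs
    "LIMB":    [["joint", "link"], ["joint", "link", "LIMB"], ["joint", "wheel"]],
}

def expand(symbol, rng=random):
    if symbol not in RULES:                                   # terminal component
        return [symbol]
    parts = []
    for child in rng.choice(RULES[symbol]):
        parts.extend(expand(child, rng))
    return parts

print(expand("BODY"))   # e.g. ['link', 'joint', 'link', 'link', 'joint', 'wheel']
```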

A phalanx of robots

Using Zhao’s graph grammar, RoboGrammar operates in three sequential steps: defining the problem, drawing up possible robotic solutions, then selecting the optimal ones. Problem definition largely falls to the human user, who inputs the set of available robotic components, like motors, legs, and connecting segments. “That’s key to making sure the final robots can actually be built in the real world,” said Zhao. The user also specifies the variety of terrain to be traversed, which can include combinations of elements like steps, flat areas, or slippery surfaces.

With these inputs, RoboGrammar then uses the rules of the graph grammar to design hundreds of thousands of potential robot structures. Some look vaguely like a racecar. Others look like a spider, or a person doing a push-up. “It was pretty inspiring for us to see the variety of designs,” said Zhao. “It definitely shows the expressiveness of the grammar.” But while the grammar can crank out quantity, its designs aren’t always of optimal quality.

Choosing the best robot design requires controlling each robot’s movements and evaluating its function. “Up until now, these robots are just structures,” said Zhao. The controller is the set of instructions that brings those structures to life, governing the movement sequence of the robot’s various motors. The team developed a controller for each robot with an algorithm called Model Predictive Control, which prioritizes rapid forward movement.


The input to the system is a set of base robot components, such as links, joints, and end structures, and at least one terrain, such as stepped terrain or terrain with wall obstacles. RoboGrammar provides a recursive graph grammar to efficiently generate hundreds of thousands of robot structures built with the given components. It then uses Graph Heuristic Search coupled with model predictive control (MPC) to facilitate exploration of the large design space and identify high-performing examples for a given terrain. | Credit: MIT

“The shape and the controller of the robot are deeply intertwined,” said Zhao, “which is why we have to optimize a controller for every given robot individually.” Once each simulated robot is free to move about, the researchers seek high-performing robots with a “graph heuristic search.” This neural network algorithm iteratively samples and evaluates sets of robots, and it learns which designs tend to work better for a given task. “The heuristic function improves over time,” said Zhao, “and the search converges to the optimal robot.”
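
Put schematically, the outer loop might look like the sketch below, where the sampling, scoring, and heuristic-fitting functions are placeholders rather than the paper’s exact algorithm: the heuristic biases which structures get tried, MPC-driven simulation scores each one, and the heuristic is refit on the accumulating results.

```python
# Schematic outer loop of the search described above; the sampling, scoring, and
# fitting functions are placeholders rather than the paper's exact algorithm.
def graph_heuristic_search(sample_design, score_with_mpc, fit_heuristic, iterations=100):
    heuristic, evaluated = None, []
    for _ in range(iterations):
        design = sample_design(heuristic)        # grammar expansion biased toward promising structures
        reward = score_with_mpc(design)          # simulate under MPC, measure forward progress
        evaluated.append((design, reward))
        heuristic = fit_heuristic(evaluated)     # learn which designs tend to work for this terrain
    return max(evaluated, key=lambda pair: pair[1])
```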

This all happens before the human designer ever picks up a screw.

“This work is a crowning achievement in the 25-year quest to automatically design the morphology and control of robots,” said Hod Lipson, a mechanical engineer and computer scientist at Columbia University, who was not involved in the project. “The idea of using shape-grammars has been around for a while, but nowhere has this idea been executed as beautifully as in this work. Once we can get machines to design, make and program robots automatically, all bets are off.”

Zhao intends the system as a spark for human creativity. He described RoboGrammar as a “tool for robot designers to expand the space of robot structures they draw upon.” To show its feasibility, his team plans to build and test some of RoboGrammar’s optimal robots in the real world. Zhao added that the system could be adapted to pursue robotic goals beyond terrain traversing. And he said RoboGrammar could help populate virtual worlds. “Let’s say in a video game you wanted to generate lots of kinds of robots, without an artist having to create each one,” said Zhao. “RoboGrammar would work for that almost immediately.”

One surprising outcome of the project? “Most designs did end up being four-legged in the end,” said Zhao. Perhaps manual robot designers were right to gravitate toward quadrupeds all along. “Maybe there really is something to it.”

Editor’s Note: This article was republished from MIT News.

MIT neural network learns when it shouldn’t be trusted


MIT developed a way for deep learning neural networks to rapidly estimate confidence levels in their output. The advance could enhance safety and efficiency in AI-assisted decision making. | Credit: iStock image edited by MIT News

Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.

They’ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model’s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”

Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

“This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.

Amini will present the research at the NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.

Efficient uncertainty

After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99 percent of the time.” But 99 percent won’t cut it when lives are on the line.

“One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”

Related: Neural network plus motion planning equals more useful robots

Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn’t new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
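
For contrast, a sampling-based estimate in that earlier style might look like the sketch below (Monte Carlo dropout or an ensemble; the components are assumed): uncertainty comes from the spread across many stochastic forward passes, so the cost grows with the number of samples.

```python
# Sampling-based uncertainty in the earlier style (assumed components, for contrast):
# run many stochastic forward passes and treat their spread as uncertainty.
import torch

def sampled_uncertainty(model, x, n_samples=30):
    model.train()                                # keep dropout active so passes differ
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)   # prediction and its sample variance
```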

The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with bulked up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model’s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model’s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
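
One common way to set up such an evidential output, sketched here with the caveat that details may differ from the paper’s exact formulation: the network’s final layer emits the four parameters of a Normal-Inverse-Gamma distribution per output, and the prediction plus both uncertainty terms are read off in a single forward pass, with no sampling.

```python
# Sketch of an evidential regression head in one common formulation (details may
# differ from the paper): the final layer emits Normal-Inverse-Gamma parameters,
# and prediction plus uncertainties come from a single forward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.out = nn.Linear(in_features, 4)             # gamma, nu, alpha, beta

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.out(x).unbind(-1)
        nu = F.softplus(log_nu)                          # > 0
        alpha = F.softplus(log_alpha) + 1.0              # > 1
        beta = F.softplus(log_beta)                      # > 0
        prediction = gamma
        aleatoric = beta / (alpha - 1.0)                 # noise inherent in the data
        epistemic = beta / (nu * (alpha - 1.0))          # uncertainty from limited evidence
        return prediction, aleatoric, epistemic
```

Training pairs a head like this with an evidential loss that rewards accurate, well-calibrated evidence; that loss is omitted here.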


Neural network confidence check

To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.

Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini says.

To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data – completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.

The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle – barely perceptible to the human eye – but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.

Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. “This is done in a novel way that avoids some of the messy aspects of other approaches – e.g. sampling or ensembles – which makes it not only elegant but also computationally more efficient — a winning combination.”

Deep evidential regression could enhance safety in AI-assisted decision making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decision making in risky scenarios like an autonomous vehicle approaching an intersection.

“Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.

Editor’s Note: This article was republished from MIT News.
