Researchers at the Massachusetts Institute of Technology have applied ideas from artificial-intelligence techniques for mitigating traffic congestion to robotic path planning in warehouses. The team has developed a deep-learning model that can decongest robots nearly four times faster than strong random search methods, according to MIT.
A typical automated warehouse could have hundreds of mobile robots running to and from their destinations and trying to avoid crashing into one another. Planning all of these simultaneous movements is a difficult problem. It’s so complex that even the best path-finding algorithms can struggle to keep up, said the university researchers.
The scientists built a deep-learning model that encodes warehouse information, including its robots, planned paths, tasks, and obstacles. The model then uses this information to predict the best areas of the warehouse to decongest and improve overall efficiency.
“We devised a new neural network architecture that is actually suitable for real-time operations at the scale and complexity of these warehouses,” stated Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE) at MIT. “It can encode hundreds of robots in terms of their trajectories, origins, destinations, and relationships with other robots, and it can do this in an efficient manner that reuses computation across groups of robots.”
Wu is also a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).
A divide-and-conquer approach to path planning
The MIT team’s technique was to divide the warehouse robots into groups. These smaller groups can be decongested faster with traditional robot-coordination algorithms than the full set of robots can be handled at once.
This is different from traditional search-based algorithms, which avoid crashes by keeping one robot on its course and replanning the trajectory for the other. These algorithms have an increasingly difficult time coordinating everything as more robots are added.
“Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” said Wu. “That means that every second, a robot is replanned 10 times. So these operations need to be very fast.”
To keep up with these operations, the MIT researchers used machine learning to focus the replanning on the most actionable areas of congestion, where they saw the most room to reduce the robots’ total travel time. This is why they decided to tackle smaller groups of robots at a time.
For example, in a warehouse with 800 robots, the network might cut the warehouse floor into smaller groups that contain 40 robots each. Next, it predicts which of these groups has the most potential to improve the overall solution if a search-based solver were used to coordinate the trajectories of robots in that group.
Once it finds the most promising robot group using a neural network, the system decongests it with a search-based solver. After this, it moves on to the next most promising group.
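To make that loop concrete, here is a minimal Python sketch of the divide-and-conquer iteration described above. The helper functions (`score_group`, `replan_group`) and the simple fixed-size grouping are illustrative placeholders standing in for the paper’s learned scorer and search-based solver; they are assumptions, not MIT’s actual implementation.

```python
import random


def score_group(group, paths):
    # Placeholder for the learned scorer: the real system predicts how much
    # total travel time would improve if this group were replanned.
    return random.random()


def replan_group(group, fixed_paths, paths):
    # Placeholder for the search-based solver: replan only this group's
    # trajectories while treating every other robot's path as a constraint.
    return paths


def decongest(robots, paths, group_size=40, iterations=5):
    """Repeatedly pick the most promising group of robots and replan it."""
    for _ in range(iterations):
        # Partition the robots, e.g. 800 robots into groups of 40.
        groups = [robots[i:i + group_size]
                  for i in range(0, len(robots), group_size)]

        # Pick the group the (learned) scorer ranks as most promising.
        best = max(groups, key=lambda g: score_group(g, paths))

        # Robots outside the chosen group keep their current paths and act
        # as constraints for the solver.
        fixed_paths = {r: paths[r] for r in robots if r not in best}
        paths = replan_group(best, fixed_paths, paths)

    return paths


# Toy usage: 800 robots, empty initial paths.
robots = list(range(800))
paths = decongest(robots, {r: [] for r in robots})
```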
How MIT picked the best robots to start with
The MIT team said its neural network can reason about groups of robots efficiently because it captures complicated relationships that exist between individual robots. For example, it can see that even though one robot may be far away from another initially, their paths could still cross at some point during their trips.
Another advantage the system has is that it streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. This means that in a warehouse with 800 robots, decongesting 40 robots requires holding the other 760 as constraints.
Other approaches require reasoning about all 800 robots once per group in each iteration. The MIT system instead reasons about all 800 robots only once per iteration, across all groups.
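As a rough illustration of that reuse, the toy NumPy sketch below encodes every robot exactly once into a shared embedding and then scores each candidate group of 40 from those shared embeddings, rather than re-encoding all 800 robots once per group. The linear encoder and scoring vector here are arbitrary stand-ins, not the convolution-and-attention network described in the research.

```python
import numpy as np

rng = np.random.default_rng(0)
num_robots, feat_dim, emb_dim = 800, 16, 32

# Per-robot features (origins, destinations, planned paths, etc.).
robot_features = rng.normal(size=(num_robots, feat_dim))
W_encode = rng.normal(size=(feat_dim, emb_dim))  # stand-in encoder weights
w_score = rng.normal(size=emb_dim)               # stand-in scoring weights

# Encode every robot exactly once; this computation is shared by all groups.
embeddings = np.tanh(robot_features @ W_encode)  # shape (800, emb_dim)

# Score each candidate group of 40 by pooling its robots' shared embeddings,
# instead of reasoning about all 800 robots separately for every group.
groups = [np.arange(i, i + 40) for i in range(0, num_robots, 40)]
scores = [float(embeddings[g].mean(axis=0) @ w_score) for g in groups]

best_group = groups[int(np.argmax(scores))]
```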
The team tested this technique in several simulated environments, including some set up like warehouses, some with random obstacles, and even maze-like settings that emulate building interiors. By identifying more effective groups to decongest, the learning-based approach decongests the warehouse up to four times faster than strong, non-learning-based approaches, said MIT.
Even when the researchers factored in the additional computational overhead of running the neural network, its approach still solved the problem 3.5 times faster.
In the future, Wu said she wants to derive simple, rule-based insights from their neural model, since the decisions of the neural network can be opaque and difficult to interpret. Simpler, rule-based methods could also be easier to implement and maintain in actual robotic warehouse settings, she said.
“This approach is based on a novel architecture where convolution and attention mechanisms interact effectively and efficiently,” commented Andrea Lodi, the Andrew H. and Ann R. Tisch Professor at Cornell Tech, who was not involved with this research. “Impressively, this leads to being able to take into account the spatiotemporal component of the constructed paths without the need of problem-specific feature engineering.”
“The results are outstanding: Not only is it possible to improve on state-of-the-art large neighborhood search methods in terms of quality of the solution and speed, but the model [also] generalizes to unseen cases wonderfully,” she said.
In addition to streamlining warehouse operations, the MIT researchers said their approach could be used in other complex planning tasks, like computer chip design or pipe routing in large buildings.
Wu, senior author of a paper on this technique, was joined by lead author Zhongxia Yan, a graduate student in electrical engineering and computer science. The work will be presented at the International Conference on Learning Representations. Their work was supported by Amazon and the MIT Amazon Science Hub.