Academia / Research Archives - The Robot Report
https://www.therobotreport.com/category/research-development/
Robotics news, research and analysis

Project CETI develops robotics to make sperm whale tagging more humane
https://www.therobotreport.com/project-ceti-robotics-make-sperm-whale-tagging-more-humane/
Sun, 14 Apr 2024 12:00:50 +0000

Project CETI is using robotics, machine learning, biology, linguistics, natural language processing, and more to decode whale communications.

The post Project CETI develops robotics to make sperm whale tagging more humane appeared first on The Robot Report.

Project CETI is a nonprofit scientific and conservation initiative that aims to decode whale communications. | Source: Project CETI

Off the idyllic shores of Dominica, a country in the Caribbean, hundreds of sperm whales gather deep in the sea. While their communication sounds like a series of clicks and creaks to the human ear, these whales have unique, regional dialects and even accents. A multidisciplinary group of scientists, led by Project CETI, is using soft robotics, machine learning, biology, linguistics, natural language processing, and more to decode their communications. 

Founded in 2020, Project CETI, or the Cetacean Translation Initiative, is a nonprofit organization dedicated to listening to and translating the communication systems of sperm whales. The team is using specially created tags that latch onto whales and gather information for the team to decode. Getting these tags to stay on the whales, however, is no easy task. 

“One of our core philosophies is we could never break the skin. We can never draw blood. These are just our own, personal guidelines,” David Gruber, the founder and president of Project CETI, told The Robot Report.

“[The tags] have four suction cups on them,” he said. “On one of the suction cups is a heart sensor, so you can get the heart rate of the whale. There’s also three microphones on the front of it, so you hear the whale that it’s on, and you can know the whales that’s around it and in front of it.

“So you’ll be able to know from three different microphones the location of the whales that are speaking around it,” explained Gruber. “There’s a depth sensor in there, so you can actually see when the whale was diving and so you can see the profiles of it going up and down. There’s a temperature sensor. There’s an IMU, and it’s like a gyroscope, so you can know the position of the whale.”
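The tag's three microphones make direction finding possible because a distant click reaches each microphone at a slightly different time. The article doesn't describe Project CETI's actual localization method; the sketch below is a generic time-difference-of-arrival (TDOA) bearing estimate under a 2-D plane-wave assumption, with illustrative geometry and an approximate sound speed for seawater:

```python
import math

SPEED_OF_SOUND_WATER = 1500.0  # m/s, rough figure for seawater

def bearing_from_tdoa(mic_positions, delays, c=SPEED_OF_SOUND_WATER):
    """Estimate a 2-D far-field bearing (radians) from relative arrival times.

    mic_positions: list of (x, y) microphone coordinates in metres.
    delays: delays[i] is the arrival time at mic i minus the arrival time
    at mic 0. Grid-searches the source bearing that best predicts the
    observed delays under a plane-wave (distant source) assumption.
    """
    px0, py0 = mic_positions[0]
    best_angle, best_err = 0.0, float("inf")
    for step in range(3600):  # 0.1-degree grid over a full circle
        theta = math.radians(step / 10.0)
        # Propagation direction: the wave travels FROM the source,
        # so it moves opposite the unit vector toward the source.
        ux, uy = -math.cos(theta), -math.sin(theta)
        err = 0.0
        for (px, py), tau in zip(mic_positions, delays):
            predicted = ((px - px0) * ux + (py - py0) * uy) / c
            err += (predicted - tau) ** 2
        if err < best_err:
            best_angle, best_err = theta, err
    return best_angle
```

A real system would work in 3-D, account for sound speed varying with depth, and estimate the delays themselves by cross-correlating the microphone signals.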




Finding a humane way to tag whales

One of the core principles of Project CETI, according to Gruber, is to use technology to bring people closer to animals. 

“There was a quote by Stephen Hawking in a BBC article, in which he posited that the full development of AI and robotics would lead to the extinction of the human race,” Gruber said. “And we thought, ‘This is ridiculous, why would scientists develop something that would lead to our own extinction?’ And it really inspired us to counter this narrative and be like, ‘How can we make robots that are actually very gentle and increase empathy?’”

“In order to deploy those tags onto whales, what we needed was a form of gentle, stable, reversible adhesion,” Alyssa Hernandez, a functional morphologist, entomologist, and biomechanist on the CETI team, told The Robot Report. “So something that can be attached to the whale, where it would go on and remain on the whale for a long amount of time to collect the data, but still be able to release itself eventually, whether naturally by the movements of the whale, or by our own mechanism of sort of releasing the tag itself.”

This is what led the team to explore bio-inspired techniques of adhesion. In particular, the team settled on studying suction cups that are common in marine creatures. 

“Suction discs are pretty common in aquatic systems,” said Hernandez. “They show up in multiple groups of organisms, fish, cephalopods, and even aquatic insects. And there are variations often on each of these discs in terms of the morphology of these discs, and what elements these discs have.”

Hernandez drew on her biology background to design suction-cup grippers that would work particularly well on sperm whales, which move constantly through the water. This means the suction cups must withstand changing pressures and forces, and stay sealed on a whale’s uneven skin even as it moves.

“In the early days, when we first started this project, the question was, ‘Would the soft robots even survive in the deep sea?’” said Gruber. 

An overview of Project CETI’s mission. | Source: Project CETI

How suction cup shape changes performance

“We often think of suction cups as round, singular material elements, and in biology, that’s not usually the case,” noted Hernandez. “Sometimes these suction disks are sort of elongated or slightly different shaped, and oftentimes they have this sealing rim that helps them keep the suction engaged on rough surfaces.”

Hernandez said the CETI team started off with a standard, circular suction cup. Initially, the researchers tried out multiple materials and combinations of stiff backings and soft rims. Drawing on her biology experience, Hernandez began to experiment with more elongated, ellipse shapes. 

“I often saw [elongated grippers] when I was in museums looking at biological specimens or in the literature, so I wanted to look at an ellipse-shaped cup,” Hernandez said. “So I ended up designing one that was a medium-sized ellipse, and then a thinner ellipse as well. Another general design that I saw was more of this teardrop shape, so smaller at one end and wider at the base.” 

Hernandez said the team also looked at peanut-shaped grippers. In trying these different shapes, she looked for one that would provide increased resistance over the more traditional circular suction cups. 

“We tested [the grippers] on different surfaces of different roughness and different compliance,” recalled Hernandez. “We ended up finding that compared to the standard circle, and variations of ellipses, this medium-sized ellipse performed better under shear conditions.” 

She said the teardrop-shaped gripper also performed well in lab testing. These shapes performed better because, unlike a circle, they don’t have a uniform stiffness throughout the cup, allowing them to bend with the whale as it moves. 

“Now, I’ve modified [the suction cups] a bit to fit our tag that we currently have,” Hernandez said. “So, I have some versions of those cups that are ready to be deployed on the tags.”

Project CETI uses drones to monitor sperm whale movements and to place the tags on the whales. | Source: Project CETI

Project CETI continues iterating

The Project CETI team is actively deploying its tags using a number of methods, including having biologists press them onto whales using long poles, a method called pole tagging, and using drones to press the tags onto the whales. 

Once they’re on the whale, the tags stay on for anywhere from a few hours to a few days. Once they fall off, the CETI team has a mechanism that allows it to track the tags down and pull all of the gathered data off of them. CETI isn’t interested in making tags that stay on the whales long-term: Sperm whales can travel long distances in just a few days, which would make the tags much harder to recover once they fall off. 

The CETI team said it plans to continue iterating on the suction grippers and trying new ways to gently get crucial data from sperm whales. It’s even looking into tags that would be able to slightly crawl to different positions on the whale to gather information about what the whale is eating, Gruber said. The team is also interested in exploring tags that could recharge themselves. 

“We’re always continuing to make things more and more gentle, more and more innovative,” said Gruber. “And putting that theme forward of how can we be almost invisible in this project.”

CMU, NASA JPL collaborate to make EELS snake robot to explore distant oceans
https://www.therobotreport.com/cmu-nasa-jpl-collaborate-make-eels-snake-robot-explore-distant-oceans/
Sat, 13 Apr 2024 12:00:39 +0000

NASA scientists hope to use EELS to search for signs of life in the ocean beneath the icy crust of Saturn's moon Enceladus.

Version 1.0 of the EELS robot during field testing in Alberta, Canada, in September 2023. | Source: NASA/JPL-Caltech

In a collaboration that was 17 years in the making, Carnegie Mellon University, or CMU, researchers worked with NASA’s Jet Propulsion Laboratory to create an autonomous snake-like robot. The Exobiology Extant Life Surveyor, or EELS, is a self-propelled robot. NASA scientists said they hope to use EELS to search for signs of life in the ocean beneath the icy crust of Saturn’s moon Enceladus.

EELS was developed at NASA’s JPL with collaboration from Carnegie Mellon, Arizona State University, and the University of California, San Diego. Howie Choset, CMU’s Kavčić-Moura Professor of Computer Science in the School of Computer Science, Matt Travers, a senior systems scientist at the school’s Robotics Institute (RI), and Andrew Orekhov, a project scientist in the RI, contributed to the project.

The resulting robot can navigate extreme terrains, including ice, sand, rocks, cliff walls, deep craters, underground lava tubes, and glaciers. The CMU team developed the controllers for the robot. In addition, an early prototype used modules developed by HEBI Robotics, a university spinout that Choset founded in 2014. 

“Enceladus is essentially covered with water,” Choset told The Robot Report. “But it’s underneath the rock that forms the moon. In the South Pole, the rock and ice are about 2 km [1.2 mi.] thick, and there are geysers that spit the water out from the underground ocean into space. So, there’s a belief that if you fly a spacecraft to Enceladus, land, and then get into the geysers, you may be able to swim in this extraterrestrial ocean.” 

EELS snake robot built for space applications

“So, we’ve been working on snake robots for a very long time,” Choset said. “And what’s nice about snake robots in general, is they can use their many joints and their slender physique to thread through tightly packed volumes and get to locations that people in machinery otherwise can’t access.”

This makes snake robots good for many applications, including search and rescue, he said. In this case, EELS will use these capabilities to wriggle into cracks in Enceladus’ layer of ice. EELS stands out from other snake robots because of its “wheels.” These wheels look more like corkscrews than traditional wheels, said Choset. 

“When those corkscrews rotate, they kind of penetrate the ice a little bit, but also gives the mechanism the ability to roll forward,” he explained. “So the robot has the ability to propel itself, not only with the snake-like motion but also these corkscrew wheels that allow it to traverse icy surfaces really quickly.” 

Choset said these wheels will help the robot to better move across ice until it can find a crack or geyser hole to crawl into.
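Choset doesn't give the drive equations, but the basic kinematics of a screw-propelled wheel are straightforward: under ideal engagement, one full revolution advances the body by one thread pitch. A minimal sketch of that relationship (the function and its simple slip factor are illustrative, not JPL's model):

```python
import math

def screw_forward_speed(pitch_m, omega_rad_s, slip=0.0):
    """Ideal forward speed of a screw-propelled robot.

    pitch_m: axial advance per full screw revolution, in metres.
    omega_rad_s: screw angular velocity, in rad/s.
    slip: fraction of traction lost on the surface (0 = perfect engagement,
    as on ice the corkscrews can bite into; values near 1 = spinning in place).
    """
    revolutions_per_s = omega_rad_s / (2.0 * math.pi)
    return pitch_m * revolutions_per_s * (1.0 - slip)
```

For example, a 5 cm pitch screw turning at one revolution per second moves the robot 5 cm/s with perfect engagement, and half that if it loses half its traction.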

“The autonomy that we developed is the robot’s ability to get into a tight space, and then use the constraints of that tight space to propel itself forward,” he said. 

But that’s only half of the battle. Once the EELS robot has found its way into one of these holes, it has to be able to swim through Enceladus’ ocean to search for potential signs of life. Choset’s team already had experience building swimming snake robots. 

“We built a variety of snake robots, but the one we most recently built was a swimming one called HUMRS, which stands for ‘Hardened Underwater Modular Robot Snake,'” Choset said. The CMU team was able to apply what it learned while developing HUMRS to this project with NASA JPL. 

Connections bring the right people on board

Choset’s long-held connections within the industry brought him onto the EELS project, along with his expertise in designing snake-like robots. 

“I went to Caltech as a graduate student, and JPL was part of Caltech,” he said. “So, whenever there’s an opportunity to work with JPL, the Jet Propulsion Lab, I jump on it, because it reminds me of my young graduate student days.” 

It wasn’t just the chance to work with JPL that brought Choset on board, however. He was recruited by Rohan Thakkar, a researcher who worked in Choset’s group 17 years ago as a high school student. 

“I think it’s important for people to realize that it’s not just a bunch of engineers getting together to build some mechanism as if they’re reading from a recipe or a cookbook,” Choset said. “Engineering is very important, but I want people to recognize the engineers behind the engineering.”

Choset said that personal connections, like the one between him and his CMU students, are what keeps the industry running. 

Editor’s note: HEBI Robotics will exhibit at Booth 448-12 at the Robotics Summit & Expo, which will be on May 1 and 2 at the Boston Convention and Exhibition Center. Registration is now open.




Massachusetts governor visits MassRobotics to celebrate National Robotics Week
https://www.therobotreport.com/massachusetts-governor-visits-massrobotics-to-celebrate-national-robotics-week/
Mon, 08 Apr 2024 18:48:46 +0000

Massachusetts Gov. Maura Healey also visited a high school robotics team and touted a bill proposing innovation investment.

Massachusetts Gov. Maura Healey (blue jacket, center), with Lt. Gov. Kimberly Driscoll, MassTech CEO Carolyn Kirk, Undersecretary of Economic Foundations Ashley Stolba, and MassRobotics’ team in Boston. Source: Office of the Governor

To kick off National Robotics Week, Massachusetts Governor Maura T. Healey today continued her Mass Leads Road Show with visits to MassRobotics and North Andover High School.

“Massachusetts is proud to be home to one of the lead robotics hubs in the world, and it’s essential that we continue to lengthen this lead through targeted investments like the Mass Leads Act,” said Gov. Healey. “It was great to see the innovative work being done in robotics from high school students in North Andover to cutting-edge startups at MassRobotics.” 

The visits were part of the governor’s Mass Leads Act Road Show, during which she is traveling to communities across the commonwealth to highlight the ways in which her recently proposed economic development bill would grow the state’s economy, support businesses, and attract talent. The bill proposes $25 million for a new Robotics Investment Program that would advance the state’s leadership in the robotics sector through research, commercialization, and training.

MassRobotics supports local innovators

“We are excited to be joined by the governor, lieutenant governor, and staff members to celebrate National Robotics Week,” stated Tom Ryden, executive director of MassRobotics. “Robotics is an important industry in the state, employing over 5,000 people and shipping thousands of robots every month.”

“Massachusetts is truly the hub of robotics and recognized as a world leader,” he added. “With the continued support in the Mass Leads Act, this exciting industry will continue to grow in size and impact throughout the state.”

MassRobotics describes itself as “the largest independent robotics hub dedicated to accelerating innovation and adoption in the field of robotics.” The Boston-based organization recently kicked off Mass Robotics Accelerator, powered by Mass Tech Collaborative, to support 10 startups through an intensive 13-week program.

During their visit, Gov. Healey and Lt. Gov. Kim Driscoll toured a lab space and met some of the startups housed at the facility. They also saw a classroom that is used for STEM (science, technology, engineering, and mathematics) education. 

Lt. Gov. Driscoll and Gov. Healey with CEO Ian Goodine and CTO Ethan Walko, co-founders of Accelerator startup rStream. Source: MassRobotics

See Accelerator startups at Robotics Summit & Expo

MassRobotics will host a pavilion with the startups in its accelerator program at the 2024 Robotics Summit & Expo, which will be on May 1 and 2 at the Boston Convention and Exhibition Center. The startups will exhibit on the show floor and discuss their experiences in a session on Wednesday, May 1, at 4:15 p.m. EDT.

MassRobotics, a strategic partner of WTWH Media, which produces The Robot Report and the Robotics Summit, will also host an Engineering Career Fair and announce its Form & Function Challenge winners. Registration is now open for the Robotics Summit & Expo.




Massachusetts invests in robotics leadership

Earlier this year, the Massachusetts Technology Collaborative (MassTech) launched a new $5 million initiative to boost the robotics sector across the state. The new department, established within the Innovation Institute at MassTech, is focused on supporting robotics research and development, testing, and workforce development.

“The investments proposed in the Mass Leads Act will help Massachusetts secure our leadership in the robotics sector,” said Yvonne Hao, secretary of economic development for Massachusetts. “The proposed robotics capital program at MassTech and reauthorization of the R&D Fund will drive innovation by funding research, commercialization, and training across the state.”

“MassTech’s mission is to create opportunities for growth in the Massachusetts innovation economy, and that definitely includes robotics,” said Carolyn Kirk, CEO of MassTech. “Our Innovation Institute implements a unique model for the state that spurs economic growth — together with industry leaders, academic researchers, and policymakers. MassTech is proud to help drive the competitiveness of tech and innovation through strategic investments and partnerships.”

The Innovation Institute has received state funding. Source: MassTech Collaborative

Governor visits North Andover High School, new Amazon warehouse

In North Andover, Gov. Healey met with the high school robotics club and congratulated the team ahead of the Vex Robotics World Championships, in which it will compete later this month. She saw its robotics projects and presented a proclamation for National Robotics Week.

“There’s a reason why 1 in 4 robotics patents are earned by Massachusetts inventors – it’s because we prioritize giving this industry the tools it needs to thrive,” said Driscoll. “Our administration is excited to continue supporting robotics entrepreneurs, as well as expanding opportunities for students to participate in STEM education and see themselves in a future career like robotics.”

Healey and Driscoll also visited a new Amazon warehouse in North Andover with 4 million sq. ft. of space. It cost $400 million to build and will employ 1,500 people, according to Amazon. The facility will also include thousands of robots, reported The Boston Globe.

“It’s applied robotics,” said Tye Brady, chief technologist at Amazon. “They’re not doing backflips or dances out there on the floor — I love those, I get it — but they’re doing the job of moving goods on time and very reliably.”

Brady will deliver a keynote on Amazon’s robotics strategy on May 1 at the Robotics Summit & Expo.

Massachusetts is committed to continuing its leadership in artificial intelligence and robotics and to retaining more of the talent that comes out of its many educational institutions, the governor told The Robot Report.

Robotics innovation is key to reshoring the $1T apparel manufacturing industry
https://www.therobotreport.com/robotics-innovation-key-reshoring-trillion-dollar-apparel-manufacturing/
Fri, 05 Apr 2024 12:00:31 +0000

Lack of onshore garment manufacturing is both a national security risk and a lost business opportunity. Robotic sewing could be the answer.

Sewing machines were controlled via ROS to synchronize apparel operations with a robot. | Source: ARM Institute

A staggering 97% of the apparel sold and worn in the U.S. is made overseas, according to the American Apparel & Footwear Association. Not only does this mean that the U.S. lost these jobs when apparel manufacturing moved overseas, but it poses a significant risk to our national security, as evidenced by the nation’s struggle to manufacture and obtain personal protective equipment (PPE) at the height of the COVID-19 pandemic.

PPE was rationed for medical professionals in 2020, but even that wasn’t enough. Images went viral of doctors and nurses fashioning their own masks or re-wearing dirty PPE.

Though the pandemic images of PPE scarcity may have faded from recent memory, the security risk remains. Our nation’s inability to produce PPE has implications for natural disasters. In addition, the lack of onshore apparel manufacturing limits our ability to manufacture military uniforms, tents, parachutes, and other supplies needed to support the U.S. military.

Beyond national security, losing the apparel industry to offshore manufacturing also became a lost business opportunity. According to a Manufacturing Perception Report from the Thomas Network, 61% of Americans surveyed claimed they’re more likely to buy products if they are labeled as being made in the U.S. That’s a significant opportunity, particularly when you’re looking at a trillion-dollar industry.

So, what now? How do we begin to re-shore such a massive industry that has now long since been lost to competing nations? The ARM Institute and its members said they believe that the key lies within robotics and automation.

Robotics as an enabler for reshoring

Even prior to the COVID-19 pandemic, the ARM Institute and its member organizations recognized that robotics and AI could be the key to reshoring this industry. Once it realized the need, the institute began funding projects centered on automating the more manual and tedious aspects of apparel manufacturing.

However, this was no small feat. To start, when the industry has looked at automation in the past, it was unable to overcome the difficulties in getting robots to manipulate and handle pliable materials. The ARM Institute-funded Robotic Assembly of Garments Project led by Siemens Technology with Bluewater Defense, Sewbo, and the University of California at Berkeley took an important step in overcoming this barrier.

This project developed a new robotic assembly process that stiffens garment pieces by laminating their fabric with a water-soluble thermoplastic polymer, allowing the robot to handle the previously limp fabric. It then developed a flexible robotic system to assemble fabric pieces into garments.

Traditional sewing machines were controlled via the Robot Operating System (ROS) to synchronize operation with the robot. The polymer used in the stiffening process is easily removed through washing and can be recycled for multiple process cycles.
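The project's ROS interfaces aren't published in this article, but one concrete piece of the synchronization problem is keeping stitch spacing constant while the robot feeds fabric past the needle. A minimal, ROS-free sketch of that calculation (the function name and numbers are hypothetical; in the actual system such a value would travel over ROS messages between the robot controller and the sewing machine):

```python
def stitch_rate_for_feed(feed_speed_mm_s, stitch_length_mm):
    """Stitches per second needed so stitch spacing stays constant
    while the robot drags fabric past the needle at feed_speed_mm_s.
    """
    if stitch_length_mm <= 0:
        raise ValueError("stitch length must be positive")
    return feed_speed_mm_s / stitch_length_mm
```

For instance, fabric fed at 25 mm/s with a 2.5 mm stitch length requires the machine to cycle at 10 stitches per second; if the robot slows down, the published rate drops with it and the seam stays uniform.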

Development didn’t stop there. While the garments project took a huge step towards proving the viability of robotics in clothing manufacturing, it had a higher cycle time than current manual processes.

More ARM Institute projects

This project led to other development. Subsequent projects took lessons from prior ones and improved processes, further demonstrating not only the viability for using robotics for apparel manufacturing, but also the importance of doing so.

More ARM Institute projects centered on robotic sewing have included:

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) funded the Rapid-Response Automated PPE Production in Shipping Containers project through an American Rescue Act grant. This enabled the ARM Institute to work with fellow Manufacturing USA institute AFFOA (Advanced Functional Fabrics of America) and several of its members to scale their projects and use in-house engineering expertise.

Work on this project is under way toward the creation of shipping containers housing robotic production that can easily be deployed where and when PPE is needed.

Momentum for apparel automation continues

While these projects have catalyzed the foundational robotics advancements needed to make apparel manufacturing safer and more productive, continued collaboration between industry, government, and academia is needed to build on this momentum.

The ARM Institute is dedicated to making this possible. “Manufacturing of Garments and Other Textile Goods” will be included as a special topic area in the ARM Institute’s upcoming Technology Project Call.

Beyond impact for consumer goods and national security, reshoring apparel manufacturing also represents opportunity for the U.S. workforce. While offshore operations today depend on manual, ergonomically unfriendly processes in cramped, often dirty settings, the use of robots will make roles in these factories safer, more engaging, and higher-paying.

While robots take on the dull, dirty, and dangerous tasks, human labor can be freed up to work on operating robots and planning robotics integration. Many of these roles will be available through flexible, low-cost training. These are roles that don’t currently exist in the U.S., resulting in increased employment opportunities for U.S. workers.

The ability to re-shore apparel manufacturing is well within reach, and the ARM Institute is dedicated to working with its members to lead this effort through robotics innovations.

Editor’s note: This article was syndicated from The Robot Report’s sibling site Engineering.com.

Dr. Larry Sweet, director of engineering at the ARM Institute, will present a session on “Delivering AI and Machine Learning Enabled Robotics to the Manufacturing and Field Service Operations” at the Robotics Summit & Expo. It will be at 2:45 p.m. EDT on Wednesday, May 1, at the Boston Convention and Exhibition Center.

Sweet will share updates on current ARM Institute projects, technical approaches, best practices, and lessons learned. He will also describe steps to make advanced technology more accessible to manufacturers of all sizes and to facilitate the work of systems integrators. Register now for the event.




Southwest Research Institute to make robot programming more user friendly with SWORD
https://www.therobotreport.com/southwest-research-institute-makes-robot-programming-more-user-friendly-sword/
Sun, 31 Mar 2024 12:07:40 +0000

The Southwest Research Institute offers the SwRI Workbench for Offline Robotics Development for motion-planning applications.

SwRI Workbench for Offline Robotics Development allows manufacturing engineers to independently use complex robotics and simplifies motion planning for seasoned developers. Source: Southwest Research Institute

An industry push for more automation is advancing the Robot Operating System, or ROS, beyond the academic and manufacturing domains into agriculture, automotive, retail, healthcare, and more. Various forecasts project that the open-source advanced robotics market will grow more than 10% annually between 2024 and 2029.

These trends are motivating for robotics engineers at Southwest Research Institute (SwRI) and our colleagues at the ROS-Industrial Consortium and supporting industries. We also recognize that the usability of robotics software is still an impediment to even higher levels of adoption.

Over the years, the ROS-I Consortium has held frequent roadmapping sessions with a wide variety of end users and ROS developers to address ease of use and continuing education. The identified need is a lower barrier of entry for non-programmers (or entry-level developers) to harness the power of tools in the ROS ecosystem, but in a way that aligns with industry adoption of digital thread and Industry 4.0 strategies.

The traditional ROS workflow is software programming-intense, requiring developers deeply familiar with available ROS libraries and tools. Even experienced developers within the ROS-I ecosystem and beyond may spend significant time — days to weeks — on the initial setup and configuration of a ROS application.

Listening to the voice of our own developers, our diverse stakeholders, and consortium members, we heard the need for easier access to the ROS motion-planning tools, while maintaining a tie back to the CAD ecosystem where the products to be worked on are conceived and maintained.




SWORD is a graphical toolkit for robotics developers

The Southwest Research Institute is launching the SwRI Workbench for Offline Robotics Development (SWORD) featuring a graphical toolkit for developing and testing advanced robotic motion-planning applications.

SWORD is implemented as a plugin to the open-source FreeCAD application, allowing users to integrate robotics capabilities into a cross-platform CAD environment. It provides a graphical interface to many powerful motion-planning libraries.

The goal is to bring ROS to a manufacturing/industrial audience in a way that is more approachable and resides in a familiar environment. Most manufacturing engineers are competent with CAD, understand their processes, and often write various forms of programs on process-oriented systems.

SWORD seeks to bring advanced motion-planning capability to this audience, enabling them to set up their systems and take advantage of these more advanced tools in their operational environments. Through the first beta test, the team at SwRI has collected feedback from end users and is nearing the release of the first version of SWORD. SWORD currently offers the capabilities below:

Environment modeling

  • Create workcell model (robot, fixtures, end-of-arm tooling); see Figure 1 below.

    • Use CAD modeling tools or import existing CAD/mesh models

    • Use Convex Hull and Decomposition tools to generate collision geometry

  • Import and export URDF (Unified Robotics Description Format) files

  • Manipulate robot position

    • Joint Sliders to control individual joint positions

    • TCP Dragger to simulate movement using various IK solvers

Figure 1: An example of URDF creation and evaluation in SWORD. Click here to enlarge. Source: Southwest Research Institute
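URDF, the format SWORD imports and exports, is plain XML describing a robot's links and joints. As a rough illustration (the robot, link, and joint names below are invented, not SWORD output), a minimal two-link URDF can be built and sanity-checked with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal URDF: one fixed base link and one rotating link, joined by a
# revolute joint. Names and limits here are illustrative, not from SWORD.
MINIMAL_URDF = """
<robot name="demo_cell">
  <link name="base_link"/>
  <link name="arm_link"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="arm_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.14" upper="3.14" effort="10" velocity="1.0"/>
  </joint>
</robot>
"""

def summarize_urdf(text: str) -> dict:
    """Parse a URDF string and return the robot, link, and joint names."""
    root = ET.fromstring(text)
    return {
        "robot": root.get("name"),
        "links": [l.get("name") for l in root.findall("link")],
        "joints": [j.get("name") for j in root.findall("joint")],
    }

print(summarize_urdf(MINIMAL_URDF))
```

Tools like SWORD layer collision geometry, meshes, and kinematics on top of exactly this kind of description.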

Command language

  • Define robot motion using either Cartesian or Joint waypoints

    • Currently, waypoints must be defined manually, but imported and CAD-generated waypoints are planned for an upcoming release.

  • Specify different move segment types (joint/Cartesian) and motion groups

  • Insert supplementary commands (I/O, delays, etc.)
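The command-language concepts above (ordered waypoints, joint versus Cartesian segments, and supplementary commands such as I/O and delays) can be sketched as a simple data model. All class and field names here are illustrative assumptions, not SWORD's actual API:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Sketch of a SWORD-style command list. These types are assumptions for
# illustration only; SWORD's real command language differs.

@dataclass
class JointMove:
    positions: List[float]       # one target angle per joint, radians

@dataclass
class CartesianMove:
    xyz: Tuple[float, float, float]   # target TCP position, meters
    rpy: Tuple[float, float, float]   # target TCP orientation, radians

@dataclass
class SetIO:
    port: int
    value: bool                  # e.g. open/close end-of-arm tooling

@dataclass
class Delay:
    seconds: float

# A program is an ordered list of move segments and supplementary commands.
program = [
    JointMove(positions=[0.0, -1.57, 1.57, 0.0, 0.0, 0.0]),
    CartesianMove(xyz=(0.5, 0.0, 0.3), rpy=(0.0, 3.14, 0.0)),
    SetIO(port=1, value=True),   # supplementary command: trigger tooling
    Delay(seconds=0.5),          # supplementary command: dwell
    CartesianMove(xyz=(0.5, 0.0, 0.5), rpy=(0.0, 3.14, 0.0)),
]

moves = [c for c in program if isinstance(c, (JointMove, CartesianMove))]
print(f"{len(program)} commands, {len(moves)} motion segments")
```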

Motion planning

  • Generate motion plan using a variety of Tesseract-supported path planners

    • Currently uses default Profiles (configuration) for each planner, but profile editing is planned for an upcoming release.

  • Create custom planning pipelines for application-specific behavior; see Figure 2 below.

  • Compute the Allowed Collision Matrix

    • There is currently no way to review or adjust the results, but this functionality is planned for an upcoming release.

  • Review computed motion trajectory
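A "planning pipeline" is, conceptually, a chain of stages in which each stage transforms the trajectory produced by the one before it. The toy sketch below illustrates the pattern; the stage names and logic are invented for illustration, and SWORD delegates the real work to Tesseract's planners:

```python
# Toy composable planning pipeline, in the spirit of SWORD's
# "custom planning pipelines" (illustrative only).

def interpolate(waypoints, steps=10):
    """Densify a joint-space path by linear interpolation between waypoints."""
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        for i in range(steps):
            t = i / steps
            path.append([x + t * (y - x) for x, y in zip(a, b)])
    path.append(list(waypoints[-1]))
    return path

def check_limits(path, lower=-3.14, upper=3.14):
    """Reject any state outside joint limits; pass the path through otherwise."""
    for state in path:
        if any(not (lower <= q <= upper) for q in state):
            raise ValueError(f"joint limit violated in state {state}")
    return path

def pipeline(waypoints, stages):
    """Run each stage in order, feeding the output of one into the next."""
    result = waypoints
    for stage in stages:
        result = stage(result)
    return result

# Two-joint example: three waypoints densified, then validated.
trajectory = pipeline(
    [[0.0, 0.0], [1.0, -0.5], [2.0, 0.5]],
    stages=[interpolate, check_limits],
)
print(len(trajectory), "states")
```

Swapping, reordering, or adding stages (collision filtering, time parameterization, smoothing) is what makes this structure useful for application-specific behavior.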

SWORD is officially released, and seats are available. You can request a trial version to evaluate whether it is right for your organization. If you are interested in a trial license, want to learn more, or would like a guided tour from SwRI, please contact Jeremy Zoss or Matt Robinson.

Figure 2: Setting up a motion planning pipeline for testing and evaluation in SWORD. Click here to enlarge. Source: Southwest Research Institute

About the author and the Southwest Research Institute

Matthew Robinson is program manager for ROS-Industrial Consortium Americas at the Southwest Research Institute. He was previously research team leader and a graduate fellow at the Edison Welding Institute. Robinson has participated in RoboBusiness Direct and has an M.S.W.E. from The Ohio State University.

Since 1947, the nonprofit SwRI in San Antonio, Texas, has taken a multidisciplinary approach to research and development for government and industry clients.

The post Southwest Research Institute to make robot programming more user friendly with SWORD appeared first on The Robot Report.

]]>
https://www.therobotreport.com/southwest-research-institute-makes-robot-programming-more-user-friendly-sword/feed/ 0
Northeastern University Mars Rover Team wins Winter Canadian International Challenge https://www.therobotreport.com/northeastern-university-mars-rover-team-wins-winter-canadian-international-challenge/ https://www.therobotreport.com/northeastern-university-mars-rover-team-wins-winter-canadian-international-challenge/#respond Wed, 27 Mar 2024 19:58:50 +0000 https://www.therobotreport.com/?p=578286 Northeastern University students won a contest in which four teams' rovers completed tasks in simulated Martian environments.

The post Northeastern University Mars Rover Team wins Winter Canadian International Challenge appeared first on The Robot Report.

]]>
The Northeastern Mars Rover team took home its first gold last month at the inaugural Winter Canadian International Rover Challenge. Photo by Matthew Modoono/Northeastern University

Brooke Chalmers, who studies computer science, and Jason Kobrin, who studies mechanical engineering, work on the Mars Rover in the Richards Hall Makerspace. Credit: Matthew Modoono/Northeastern University

When the student leaders of the Northeastern University Mars Rover Team decided they were going to participate in the inaugural Winter Canadian International Rover Challenge, they thought it would be good practice more than anything else.

They didn’t expect to win the competition. Yet, that’s exactly what happened.

The Northeastern team took home the gold last month, beating McMaster University for the top spot with a score of 237.71 points to McMaster’s 137.13.

“It was pretty huge for us in terms of team morale,” said Brooke Chalmers, a third-year student at Northeastern and the integration lead and software co-lead for the Mars rover group. “It really felt like all the hours that we put in during the prior weeks paid off in a way.”

It’s the first competition win for the six-year-old club, which is composed of students studying computer science, engineering, and life science.

The university team of about 50 students had been hard at work developing and iterating on its latest robotic rover: the Watney, Mark V. 

Coming in at 50 kg (110 lb.), the rover features a 5052 aluminum alloy chassis, six 3D-printed nylon wheels, a robotic arm with end-of-arm tooling (EOAT), a life-detection module for sample collection, and 14 onboard cameras. 

The Canadian competition was broken up into four challenges designed to put students’ rovers through simulated environments similar to tasks a rover might have to complete on Mars’ surface. Each challenge was scored on a 100-point scale.

In the Arm Dexterity Challenge, for example, students were tasked with controlling the rover’s robotic arm to restore power to a campsite. The challenge involved navigating the robot through four control panels where the robotic arm had to press buttons and flip switches, explained Jason Kobrin, a fourth-year student at Northeastern and a mechanical operations co-lead for the Mars rover group.

The robotic arm on the Northeastern team’s Mars Rover. Credit: Matthew Modoono/Northeastern University

Northeastern team redesigns robot arm for strength

Of the four teams taking part, Northeastern scored the highest in the challenge, with 49.49 points.

Kobrin said the team has spent the past year completely redesigning the robot’s arm, which used to be one of the rover’s weak points during previous competitions. It’s now one of the rover’s biggest strengths. The robot arm has six degrees of freedom and can carry loads of up to about 10 kg (22 lb.).

“In order to improve that, we redesigned our arm this year to use better motors and to be easier to control overall,” he said. 

By taking part in these competitions and through regular testing, the team was able to home in on the rover’s shortcomings and improve its capabilities, Kobrin said. By working on the rover, students are also getting the opportunity to improve their own skills.

“Every week, it’s continuous improvement,” he noted. “Whether it’s adding a new portion of software code [or] whether designing a new mount for our cameras, every little improvement makes a huge difference.” 

“For everybody to be able to design and build this robot to function well but also to be able to control it in high-pressure situations and to reach the goals we were seeking to reach, is just really impressive,” added Kobrin. 

The team thought the two-day event, hosted in Niagara Falls, Ontario, would be a great primer to test the machine’s capabilities before taking part in the upcoming annual University Rover Challenge (URC). The URC is the Mars Society’s premier student Mars rover competition, held at the Mars Desert Research Station outside Hanksville, Utah.

The URC competition is old hat for the group, having participated in the challenge in 2019, 2022, and 2023. The competition was canceled in 2020 and 2021 because of the pandemic.

“We went into this competition thinking, ‘OK, we’re going to use this as an opportunity to prepare for URC. We’re going to test stuff to make sure it all works,’” Chalmers said.

The students will be competing at the University Rover Challenge this spring. Photos by Matthew Modoono/Northeastern University

Difficult terrain and team excitement

The team had its best showing during the Winter Transversal Challenge, with a finishing score of 84.72 points. For the challenge, the rover had to roll through treacherous and uneven terrain while avoiding obstacles.

“All the challenges involved some degree of the rover driving around and moving over difficult terrain, but this challenge was focused entirely on that,” said Chalmers. 

With the overall win, Chalmers said she’s hopeful that new members will be excited to join. 

“Most people on the team have been talking about this with their friends and family and talking about what we are doing, which is really cool,” she said. “I know a few of my friends have expressed interest in joining the team since. It’s very exciting to have something to talk about and have something to show for all the effort we put in.”




About the author

Cesareo Contreras is a Northeastern Global News reporter and has covered robotics extensively. This article is reposted with permission.

 

 

The post Northeastern University Mars Rover Team wins Winter Canadian International Challenge appeared first on The Robot Report.

]]>
https://www.therobotreport.com/northeastern-university-mars-rover-team-wins-winter-canadian-international-challenge/feed/ 0
Labor shortages still driving robotics adoption, finds HowToRobot https://www.therobotreport.com/labor-shortages-driving-robotics-adoption-finds-howtorobot/ https://www.therobotreport.com/labor-shortages-driving-robotics-adoption-finds-howtorobot/#respond Thu, 21 Mar 2024 18:32:17 +0000 https://www.therobotreport.com/?p=578230 HowToRobot said its findings confirm that businesses see robots and automation as a supplement to human labor, not a replacement.

The post Labor shortages still driving robotics adoption, finds HowToRobot appeared first on The Robot Report.

]]>

Businesses are adopting automation to mitigate labor shortages and increase productivity, found HowToRobot. Source: Adobe Stock

Businesses are adopting automation not to replace workers but to augment scarce labor, according to data that HowToRobot released yesterday. It found that 80% of respondents said their projects are intended to free employees from manual tasks and move them to other more value-adding activities.

The Denmark-based provider of a global automation market platform said its findings confirm that businesses are seeing robotics and automation as a supplement to human labor, not as a replacement.

“From our daily conversations with manufacturers worldwide, it’s clear that the lack of labor has been the driving factor behind business decisions to automate in 2023,” said Søren Peters, CEO of HowToRobot, in a release.

“By automating the most cumbersome tasks, businesses have been freeing employees to take on other tasks needed to maintain production levels and fulfill customer orders,” he said. “Not one we asked did this because they wanted to lay off people.”

Labor shortages peaked in North America and Europe in the spring of 2022 as the COVID-19 pandemic eased, said HowToRobot. These labor shortages continued at elevated levels in 2023.

In the U.S., manufacturing job opening rates averaged 4.5% in 2023, almost twice the pre-pandemic average of 2.8% from 2013 to 2019, according to the U.S. Bureau of Labor Statistics (BLS).




Labor shortages prevent productivity growth

The second biggest motivation for automating in 2023 was to increase productivity. HowToRobot reported that 70.9% of respondents’ projects had this goal.

Robots can improve productivity by reducing the labor hours needed to create the same output, it said. This enables companies to increase wages, reduce prices, and grow their profits, leading to stronger economic growth.

With high inflation levels over the past three years, productivity has been particularly important for businesses, said Peters.

Labor shortages have also constrained businesses’ ability to expand production capacity with manual labor, further increasing demand for automation, said HowToRobot. In 2023, 60.6% of automation projects sought to increase capacity, making it the third-largest motivation to automate, the company noted.

Hourly compensation in U.S. manufacturing alone grew by 18% between 2019 and 2023, according to BLS data.

“When costs are rising rapidly, what do you do as a business?” said Peters. “Either you cut down or invest in areas that increase your productivity.”

“We are seeing that many of those who had the foresight – and funds – to invest in automation are now coming out on top,” he added. “They are more competitive and can afford to pay their employees better.”

Moving workers to more value-added tasks, increasing productivity, and expanding capacity were the top reasons for adopting automation. | Source: HowToRobot

Product quality, working conditions matter, finds HowToRobot

About a third — 36.2% — of automation projects last year sought to improve product quality and uniformity with automation, making it a moderately important business goal, said HowToRobot.

Manual operations during certain parts of the manufacturing process can result in varying product quality and uniformity, which can lead to more customer claims and resource waste. Peters said that businesses are increasingly focusing on offsetting high input costs.

“We also see a growing awareness about the environmental impact of wasteful processes and how robots and automation can help reduce the ecological footprint of manufacturing operations,” Peters said.

Many businesses also looked for automation to improve their working environments. Last year, 31.5% of automation projects had this goal, said HowToRobot.

For example, businesses can automate demanding tasks that involve repetitive motion, heavy lifting, or hazardous environments. This can free up employees to take on less backbreaking and more meaningful tasks, HowToRobot said.

“It’s becoming clear for a growing number of businesses that investing in employee well-being also involves automation,” Peters said.

Peters has said that businesses shouldn’t wait to plan future robotics investments. According to HowToRobot, the time from starting an automation project to signing an agreement with a vendor can vary from a few weeks to more than a year.

The earlier a business starts the process, the sooner it can reap the benefits of automation, the company said.

 

The post Labor shortages still driving robotics adoption, finds HowToRobot appeared first on The Robot Report.

]]>
https://www.therobotreport.com/labor-shortages-driving-robotics-adoption-finds-howtorobot/feed/ 0
Stanford researchers aim to enhance robots with augmented motors https://www.therobotreport.com/stanford-researchers-aim-to-enhance-robots-with-augmented-motors/ https://www.therobotreport.com/stanford-researchers-aim-to-enhance-robots-with-augmented-motors/#respond Wed, 20 Mar 2024 18:13:37 +0000 https://www.therobotreport.com/?p=578224 Researchers at Stanford have invented a way to augment electric motors to make them more efficient at performing dynamic movements.

The post Stanford researchers aim to enhance robots with augmented motors appeared first on The Robot Report.

]]>

Energy-recycling actuator prototype. | Source: Erez Krimsky

Whether it’s a powered prosthesis to assist a person who has lost a limb or an independent robot navigating the outside world, we are asking machines to perform increasingly complex, dynamic tasks. But the standard electric motor was designed for steady, ongoing activities like running a compressor or spinning a conveyor belt – even updated designs waste a lot of energy when making more complicated movements.

Researchers at Stanford University have invented a way to augment electric motors to make them much more efficient at performing dynamic movements through a new type of actuator, a device that uses energy to make things move. Their actuator, published March 20 in Science Robotics, uses springs and clutches to accomplish a variety of tasks with a fraction of the energy usage of a typical electric motor.

“Rather than wasting lots of electricity to just sit there humming away and generating heat, our actuator uses these clutches to achieve the very high levels of efficiency that we see from electric motors in continuous processes, without giving up on controllability and other features that make electric motors attractive,” said Steve Collins, associate professor of mechanical engineering and senior author of the paper.

Springing into action

The actuator works by harnessing the ability of springs to produce force without using energy – springs resist being stretched out and try to rebound to their natural length when released. When the actuator is, say, lowering something heavy, the researchers can engage the springs so that they stretch, taking some of the load off the motor. Then, by locking the springs in the stretched-out position, that energy can be stored to assist the motor in another task later on.
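A stretched spring banks elastic potential energy E = ½kx², and a locked clutch can hold that energy indefinitely at essentially no electrical cost. A quick worked example (the stiffness and stretch values below are assumed for illustration, not taken from the paper):

```python
# Energy banked by a stretched spring: E = 1/2 * k * x^2.
# The parameter values below are illustrative, not from the Stanford paper.

def spring_energy(k: float, x: float) -> float:
    """Elastic potential energy (J) for stiffness k (N/m) stretched by x (m)."""
    return 0.5 * k * x ** 2

k = 2000.0   # N/m, assumed rubber-spring stiffness
x = 0.10     # m, assumed stretch while lowering a load

stored = spring_energy(k, x)
print(f"Energy stored: {stored:.1f} J")

# Those joules are held by a locked clutch at essentially zero electrical
# cost, then released to assist the motor on a later task.
```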

The key to engaging and disengaging the springs quickly and efficiently is a series of electroadhesive clutches. Each rubber spring is sandwiched between two clutches: one that connects the spring to the joint to assist the motor and one that locks the spring in a stretched position when it’s not being used.

These clutches consist of two electrodes – one attached to the spring and one attached to the frame or motor – that slide smoothly past each other when they aren’t active. To engage a clutch, the researchers apply a large voltage to one of its electrodes. The electrodes are drawn together with an audible click – like a faster, stronger version of the static electricity that makes a balloon stick to the wall after you rub it on carpet. Releasing the spring is as simple as grounding the electrode and dropping its voltage back to zero.
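To first order, an electroadhesive clutch behaves like a parallel-plate capacitor, whose clamping pressure grows with the square of the applied voltage. A back-of-envelope estimate (all geometry and material values below are assumptions for illustration, not the paper's actual clutch design):

```python
# Idealized parallel-plate electroadhesive clamping force:
#   F = eps0 * eps_r * A * V^2 / (2 * d^2)
# All parameter values here are assumptions for illustration.

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def clutch_force(area_m2: float, voltage: float, gap_m: float,
                 eps_r: float = 3.0) -> float:
    """Normal clamping force (N) of an idealized parallel-plate clutch."""
    return EPS0 * eps_r * area_m2 * voltage ** 2 / (2 * gap_m ** 2)

f_300 = clutch_force(area_m2=1e-3, voltage=300.0, gap_m=25e-6)
f_600 = clutch_force(area_m2=1e-3, voltage=600.0, gap_m=25e-6)
print(f"{f_300:.2f} N at 300 V, {f_600:.2f} N at 600 V")

# Doubling the voltage quadruples the clamping force (F ~ V^2), while
# grounding the electrode (V = 0) releases the clutch entirely.
```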

“They’re lightweight, they’re small, they’re really energy efficient, and they can be turned on and off rapidly,” said Erez Krimsky, lead author of the paper, who recently completed his PhD in Collins’ lab. “And if you have lots of clutched springs, it opens up all these exciting possibilities for how you can configure and control them to achieve interesting outcomes.”

The actuator built by Collins and Krimsky has a motor augmented with six identical clutched springs, which can be engaged in any combination. The researchers ran the design through a series of challenging motion tests that included rapid acceleration, changing loads, and smooth, steady movement. At every task, the augmented motor used at least 50% less power than a standard electric motor and, in the best case, reduced power consumption by 97%.
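With six springs that can each be engaged or not, the controller effectively chooses among 2^6 = 64 engagement patterns. A brute-force sketch of that choice is below; the per-spring assist torque is an invented number, and the paper's controller optimizes over far more than this toy search:

```python
from itertools import combinations

# Pick the subset of clutched springs whose combined assist best matches a
# desired torque. Illustrative only: the torque value is assumed, and the
# real controller's optimization is more involved.

SPRING_TORQUE = 0.4   # N*m of assist per engaged spring (assumed)
N_SPRINGS = 6

def best_engagement(target_torque: float):
    """Brute-force the spring subset whose assist is closest to the target."""
    best = None
    for r in range(N_SPRINGS + 1):
        for subset in combinations(range(N_SPRINGS), r):
            assist = len(subset) * SPRING_TORQUE
            err = abs(target_torque - assist)
            if best is None or err < best[0]:
                best = (err, subset)
    return best[1]

engaged = best_engagement(target_torque=1.1)
print(f"Engage springs {engaged} for ~{len(engaged) * SPRING_TORQUE:.1f} N*m")
```

Because the springs are identical, only the count matters here; with springs of different stiffnesses, the same search over subsets would distinguish them.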

Motors that can do more

With significantly more efficient motors, robots could travel further and accomplish more. A robot that can run for a full day, instead of only an hour or two before needing to recharge, has the potential to undertake much more meaningful tasks. And there are plenty of unsafe situations – involving toxic materials, hazardous environments, or other dangers – where we would much prefer to send a robot than risk a person.

“This has implications for assistive devices like prosthetics or exoskeletons as well,” Krimsky said. “If you don’t need to constantly recharge them, they can have a more significant impact for the people that use them.”

Currently, it takes a few minutes for the actuator’s controller to calculate the most efficient way to use the combination of springs to accomplish a brand-new task, but the researchers have plans to shorten that timeframe considerably. They envision a system that can learn from previous tasks, creating a growing database of increasingly efficient movements and using artificial intelligence to intuit how to effectively accomplish something new.

“There are a bunch of little control and design tweaks we’d like to make, but we think that the technology is really at a place where it’s ready for commercial translation,” Collins said. “We’d be excited to try to spin this out from the lab and start a company to begin making these actuators for the robots of the future.”

Collins is a member of Stanford Bio-X, the Wu Tsai Human Performance Alliance, and the Wu Tsai Neurosciences Institute, and a faculty affiliate of the Stanford Institute for Human-Centered Artificial Intelligence.

This work was funded by the National Science Foundation.

Editor’s Note: This article was syndicated from Stanford University’s blog.

The post Stanford researchers aim to enhance robots with augmented motors appeared first on The Robot Report.

]]>
https://www.therobotreport.com/stanford-researchers-aim-to-enhance-robots-with-augmented-motors/feed/ 0
Learn about generative AI’s impact on robotics at the Robotics Summit & Expo https://www.therobotreport.com/learn-about-generative-ai-impact-on-robotics-at-the-robotics-summit-expo/ https://www.therobotreport.com/learn-about-generative-ai-impact-on-robotics-at-the-robotics-summit-expo/#respond Wed, 06 Mar 2024 16:53:12 +0000 https://www.therobotreport.com/?p=578086 Researchers are already using generative AI to make robots faster learners, and a summit panel will discuss how this technology can be applied to robotics at scale. 

The post Learn about generative AI’s impact on robotics at the Robotics Summit & Expo appeared first on The Robot Report.

]]>

In the past year, advances in artificial intelligence dominated the news cycle. OpenAI’s Generative Pre-trained Transformer, or GPT-3.5, gained 100 million users in just two months. And while researchers and developers are already using generative AI to enable robots to learn challenging manipulation tasks more quickly, it’s not yet clear how this technology breakthrough will be applied to robotics at scale.

At the 2024 Robotics Summit & Expo, which will be at the Boston Convention & Exhibition Center on May 1 and 2, a panel will discuss the application of large language models (LLMs) and text-generation applications to robotics. The speakers will also explore how generative AI can benefit robotics design, model training, simulation, control, human-machine interaction, and more.

Generative AI experts to speak

The Robotics Summit & Expo panel will include the following generative AI experts:

  • Sandra Skaff is senior alliances and ecosystem manager at NVIDIA. There, she leads robotics ecosystem development and works with cross-functional teams to support partners and customers in developing their products using NVIDIA’s Isaac platform. Before this role, Skaff was the global lead of academic research engagements including the NVAIL program, where she was responsible for setting strategy and managing engagements with top research labs in the areas of AI, robotics, and data science.
  • Juan Aparicio is the co-founder and CEO of Reshape Automation. He is a robotics and automation enthusiast on a mission to scale and democratize access to robotics technology in manufacturing and beyond. During his career, Aparicio has brought together the worlds of industrial automation, robotics, and AI with his work featured in The New York Times, MIT Tech Review, Wired, Forbes, The Robot Report, and other media outlets. Before founding Reshape Automation, Aparicio worked at various robotics companies, including Rapid Robotics, Ready Robotics, and Siemens.
  • Russ Tedrake is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, the director of the Center for Robotics at the Computer Science and Artificial Intelligence Lab (CSAIL), and the leader of Team MIT’s entry in the DARPA Robotics Challenge. He is also vice president of Robotics Research at the Toyota Research Institute. Tedrake is the recipient of the 2021 Jamieson Teaching Award, the NSF CAREER Award, the MIT Jerome Saltzer Award for undergraduate teaching, the DARPA Young Faculty Award in Mathematics, and the 2012 Ruth and Joel Spira Teaching Award. He was named a Microsoft Research New Faculty Fellow.

More than 5k developers to gather at Robotics Summit

Since 2018, the Robotics Summit & Expo has become the world’s leading robotics development event. This year, it will feature keynotes from Agility Robotics, Amazon, Disney, Medtronic, and Teradyne, and it will draw more than 5,000 attendees from across the robotics ecosystem.

The summit also offers numerous technical sessions and networking opportunities so attendees can learn how to develop the next generation of commercial robots. You can view the complete Robotics Summit agenda here. Speakers are still being added.

In addition, the Robotics Summit & Expo will showcase more than 200 exhibitors, as well as a women in robotics breakfast, a career fair, an engineering theater, a startup showcase, and more!

New to the event is the RBR50 Robotics Innovation Awards Gala. The event will include the chance to hear from the Robot of the Year, Startup of the Year, and Application of the Year winners. A limited number of tickets is available to summit attendees.

The Robotics Summit & Expo will be co-located with DeviceTalks Boston, the premier industry event for medical technology professionals. DeviceTalks attracts engineering and business professionals from a broad range of healthcare and medical technology backgrounds.

It will also be co-located with the Digital Transformation Forum, an inaugural event designed to help manufacturers engage with industry leaders, technology experts, and peers who are navigating the complexities of digital transformation. Participants will gain insights into strategies, emerging technologies, and best practices.

Registration is now open for the 2024 Robotics Summit & Expo. Register by March 8 to take advantage of early-bird pricing.

For information about sponsorship and exhibition opportunities, download the prospectus or contact Colleen Sepich at csepich[AT]wtwhmedia.com.




The post Learn about generative AI’s impact on robotics at the Robotics Summit & Expo appeared first on The Robot Report.

]]>
https://www.therobotreport.com/learn-about-generative-ai-impact-on-robotics-at-the-robotics-summit-expo/feed/ 0
Researchers develop interface for quadriplegics to control robots https://www.therobotreport.com/researchers-develop-interface-for-quadriplegics-to-control-robots/ https://www.therobotreport.com/researchers-develop-interface-for-quadriplegics-to-control-robots/#respond Sat, 02 Mar 2024 21:00:17 +0000 https://www.therobotreport.com/?p=578049 Head-Worn Assistive Device impresses expert evaluator Henry Evans during a trial to control Hello Robot's Stretch mobile manipulator.

The post Researchers develop interface for quadriplegics to control robots appeared first on The Robot Report.

]]>

Carnegie Mellon University researchers lived with Henry and Jane Evans for a week to test their Head-Worn Assistive Teleoperation (HAT) device with Henry, who lost his ability to speak and move his limbs 20 years ago. | Credit: CMU

No one could blame Carnegie Mellon University students Akhil Padmanabha and Janavi Gupta if they were a bit anxious this past August as they traveled to the Bay Area home of Henry and Jane Evans.

The students were about to live with strangers for the next seven days. On top of that, Henry, a person with quadriplegia, would spend the week putting their Head-Worn Assistive Teleoperation (HAT) — an experimental interface to control a mobile robot — to the test.

HAT requires fewer fine motor skills than other interfaces to help people with some form of paralysis or similar motor impairments control a mobile robot and manipulator. It allows users to control a mobile robot via head motion and speech recognition, and versions of the device have featured a hands-free microphone and head-worn sensor.

Padmanabha and Gupta quickly realized that any trepidation they may have felt was misplaced. Henry, who lost the ability to move his limbs and talk after a brain-stem stroke two decades ago, enjoyed using HAT to control the robot by moving his head and in some situations preferred HAT to the computer screen he normally uses.

“We were excited to see it work well in the real world,” said Padmanabha, a Ph.D. student in robotics who leads the HAT research team. “Henry became increasingly proficient in using HAT over the week and gave us lots of valuable feedback.”

During the home trial, the researchers had Henry perform predefined tasks, such as fetching a drink, feeding himself and scratching an itch. Henry directed a robot — Stretch, a commercially available mobile robot outfitted with a pincer-like gripper on its single arm — using HAT to control it.

Daily, Henry performed the so-called blanket+tissue+trash task, which involved moving a blanket off his body, grabbing a tissue and wiping his face with it, and then throwing the tissue away. As the week progressed, Henry could do it faster and faster and with fewer errors.

Henry said he preferred using HAT with a robot for certain tasks rather than depending on a caregiver.

“Definitely scratching itches,” he said. “I would be happy to have it stand next to me all day, ready to do that or hold a towel to my mouth. Also, feeding me soft foods, operating the blinds and doing odd jobs around the room.”

One innovation in particular, software called Driver Assistance that helps align the robot’s gripper with an object the user wants to pick up, was “awesome,” Henry said. Driver Assistance leaves the user in control while it makes the fine adjustments and corrections that can make controlling a robot both tedious and demanding.

“That’s better than anything I have tried for grasping,” Henry said, adding that he would like to see Driver Assistance used for every interface that controls Stretch robots.

Praise from Henry, as well as his suggestions for improving HAT, is no small thing. He has collaborated in multiple research projects, including the development of Stretch, and his expertise is widely admired within the assistive robotics community. He’s even been featured by The Washington Post and last year appeared on the cover of IEEE Spectrum.

Via email, Henry said his incentive for participating in research is simple. “Without technology I would spend each day staring at the ceiling waiting to die,” he said. “To be able to manipulate my environment again according to my will is motivation enough.”

Padmanabha said user-centered or participatory design is important within the assistive device community and requires getting feedback from potential users at every step. Henry’s feedback proved extremely helpful and gave the team new ideas to think about as they move forward.

The HAT researchers will present the results of their study at the ACM/IEEE International Conference on Human-Robot Interaction March 11–15 in Boulder, Colorado.

HAT originated more than two years ago in a project course taught by Zackory Erickson, an assistant professor in the Robotics Institute. The students contacted Henry as part of their customer discovery process. Even then, he was excited about the possibility of using a prototype.

The project showed promise and later was spun out of the class. An early version of HAT was developed and tested in the lab by participants both with and without motor impairments. When it came time to do an in-home case study, Henry seemed the logical person to start with.

During the weeklong study, Padmanabha and Gupta lived in the Evans home around the clock, both for travel convenience and to be able to perform testing whenever Henry was ready. Having strangers in the house 24/7 is typical of the studies Henry’s been involved in and is no big deal for him or Jane.

“We’re both from large families,” he said.

Padmanabha and Gupta, a computer science major, likewise adjusted quickly to the new surroundings and got used to communicating with Henry using a letterboard, a tool that allows Henry to spell out words by looking at or pointing a laser at each letter. The pair even played poker with Henry and Jane, with Henry using Stretch to manipulate his cards.

In the earlier tests, HAT used head movements and voice commands to control a robot. Henry can’t speak, but he can move his left thumb just enough to click a computer mouse. So the team reconfigured HAT for the Evans trial, substituting computer clicks for voice commands as a way to shift between modes that include controlling the movement of the robot base, arm or wrist, or pausing the robot.

“Among people with motor impairments, everyone has different levels of motor function,” Padmanabha said. “Some may have head movement, others may only have speech, others just have clicking capabilities. So it’s important that you allow for customization of your interface.”
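As an illustration of the kind of customizable, click-driven mode switching described here, the sketch below cycles through control modes on a single click while head motion supplies the continuous command. The mode names and the mapping are invented for illustration, not the actual HAT software:

```python
from itertools import cycle

# Hypothetical sketch: one binary input (e.g., a mouse click) cycles
# through control modes, while head motion drives the active mode.
MODES = ["drive_base", "move_arm", "move_wrist", "pause"]

class ModeSwitcher:
    def __init__(self, modes=MODES):
        self._cycle = cycle(modes)
        self.active = next(self._cycle)

    def on_click(self):
        """Advance to the next control mode on each click."""
        self.active = next(self._cycle)
        return self.active

    def command(self, head_pitch, head_yaw):
        """Map head motion to a velocity command for the active mode."""
        if self.active == "pause":
            return {"mode": "pause", "v": 0.0, "w": 0.0}
        return {"mode": self.active, "v": head_pitch, "w": head_yaw}

switcher = ModeSwitcher()          # starts in drive_base
switcher.on_click()                # click switches to move_arm
cmd = switcher.command(0.2, -0.1)  # head motion now drives the arm
```

Because the input is a single discrete event, the same structure works whether the user clicks a mouse, issues a voice command, or uses any other binary signal, which is the customization point the researchers describe.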

Head motions are key to using HAT, which detects head movement using a sensor in a cap, headband or — in Henry’s case — a chin strap.

“People use head gesturing as a way to communicate with each other and I think it’s a natural way of controlling or gesturing to a robot,” Padmanabha said.

A graphical user interface — a computer screen — is more typical for controlling robots. But Gupta said users don’t like using a computer screen to control a robot that is operating around their body.

“It can be scary to have a robot close to your face, trying to feed you or wipe your face,” she said. Many user studies therefore shy away from attempting tasks that come close to the face. But once Henry got used to HAT, he didn’t hesitate to perform such tasks, she added.

A computer screen is available to control Stretch in tasks that are out of the user’s line of sight, such as sending the robot to fetch something from another room. At Henry’s suggestion, the researchers made it possible to use HAT to control a computer cursor with head movements.

In addition to Gupta, Padmanabha and Erickson, the research team includes CMU’s Carmel Majidi, the Clarence H. Adamson Professor of Mechanical Engineering; Douglas Weber, the Akhtar and Bhutta Professor of Mechanical Engineering; and Jehan Yang, a Ph.D. student in mechanical engineering. Also included are Vy Nguyen of Hello Robot, maker of Stretch; and Chen Chen, an undergraduate at Tsinghua University in Beijing, who implemented the Driver Assistance software.

Though Stretch is commercially available, it is still primarily used by researchers, and CMU has 10–15 of them. It’s a simple robot with limited capabilities, but Padmanabha said its approximately $25,000 price tag inspires hope for expanded use of mobile robots.

“We’re getting to the price point where we think robots could be in the home in the near future,” he said.

Henry said Stretch/HAT still needs systemwide debugging and added features before it can be more widely adopted. He thinks that might occur in as little as five years, though that will depend not only on price and features, but also on the choice of market.

“I believe the market for elderly people is larger and more affluent and will therefore develop faster than the market for people with disabilities,” he said.

Editor’s Note: This article was republished from Carnegie Mellon University.

The post Researchers develop interface for quadriplegics to control robots appeared first on The Robot Report.

Punyo is a soft robot from TRI designed for whole-body manipulation https://www.therobotreport.com/punyo-soft-robot-from-tri-designed-for-whole-body-manipulation/ https://www.therobotreport.com/punyo-soft-robot-from-tri-designed-for-whole-body-manipulation/#comments Thu, 29 Feb 2024 21:59:50 +0000 https://www.therobotreport.com/?p=578030 TRI's Punyo humanoid robot can manipulate objects with its whole body, giving it more flexibility when it comes to household tasks.

The post Punyo is a soft robot from TRI designed for whole-body manipulation appeared first on The Robot Report.


While humanoid robots have burst into mainstream attention in the past year, with more and more companies releasing their own models, many operate similarly. The typical humanoid uses arms and grippers to handle objects, and its rigid legs provide a mode of transportation. Researchers at the Toyota Research Institute, or TRI, said they want to take humanoids a step further with the Punyo robot. 

Punyo isn’t a traditional humanoid robot in that it doesn’t yet have legs. So far, TRI‘s team is working with just the torso of a robot and developing manipulation skills. 

“Our mission is to help people with everyday tasks in our homes and elsewhere,” said Alex Alspach, one of TRI’s tech leads for whole-body manipulation, in a video (see above). “Many of these manipulation tasks require more than just our hands and fingers.” 

When humans have to carry a large object, we don’t just use our arms to carry it, he explained. We might lean the object against our chest to lighten the load on our arms and use our backs to push through doors to reach our destination.

Manipulation that uses the whole body is tricky for humanoids, where balance is a major issue. However, the researchers at TRI designed its robot to do just that. 

“Punyo does things differently. Taking advantage of its whole body, it can carry more than it could simply by pressing with outstretched hands,” added Andrew Beaulieu, one of TRI’s tech leads for whole-body manipulation. “Softness, tactile sensing, and the ability to make a lot of contact advantageously allow better object manipulation.” 

TRI said “punyo” is a Japanese word that evokes the image of a cute yet resilient robot. TRI’s stated goal was to create a robot that is soft, interactive, affordable, safe, durable, and capable.


Robot includes soft limbs with internal sensors

Punyo’s hands, arms, and chest are covered with compliant materials and tactile sensors that allow it to feel contact. The soft materials let the robot’s body conform to the objects it’s manipulating.

Underneath this soft exterior are two “hard” robot arms, a torso frame, and a waist actuator. TRI said it aimed to combine the precision of a traditional robot with the compliance, impact resistance, and sensing simplicity of soft robotic systems.

Punyo’s arms are entirely covered in air-filled bladders, or bubbles. Each bubble connects via a tube to a pressure sensor, which can feel forces applied to the bubble’s outer surface.

Each bubble can also be individually pressurized to a desired stiffness, adding around 5 cm of compliance to the surface of the robot’s arms. 

Instead of traditional grippers, Punyo has “paws” made up of a single high-friction latex bubble with a camera inside. The team printed the inside of these bubbles with a dot pattern. The camera watches for deformities in this pattern to estimate forces. 

Left: Under Punyo’s sleeves are air-filled bubbles, air tubes, and pressure sensors that add compliance and tactile sensing to the arms. Right: Closeup of a pair of arm bubbles. | Source: Toyota Research Institute
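As a rough illustration of the pressure-based sensing described above, a bubble’s pressure rise multiplied by an effective contact area gives a crude force estimate. The function and constants below are invented for illustration, not TRI’s calibration:

```python
def bubble_force(pressure_pa, baseline_pa, contact_area_m2=0.002):
    """Estimate normal contact force on an air-filled bubble.

    A deformed bubble raises its internal pressure; multiplying the
    pressure rise by an effective contact area gives a rough estimate
    (F = dP * A). The constants here are illustrative, not TRI's.
    """
    delta_p = max(pressure_pa - baseline_pa, 0.0)  # ignore suction
    return delta_p * contact_area_m2

# Example: a 500 Pa rise over a 20 cm^2 effective area, roughly 1 N.
force = bubble_force(101_825.0, 101_325.0)
```

A single scalar pressure reading per bubble is what keeps this sensing scheme simple compared with dense tactile arrays.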

Punyo learns to use full-body manipulation

Punyo learned contact-rich policies using two methods: diffusion policy and example-guided reinforcement learning. TRI announced its diffusion policy method last year. With this method, the robot uses human demonstrations to learn robust sensorimotor policies for hard-to-model tasks.

Example-guided reinforcement learning is a method that requires tasks to be modeled in simulation and uses a small set of demonstrations to guide the robot’s exploration. TRI said it uses this kind of learning to achieve robust manipulation policies for tasks it can model in simulation.

When the robot can see demonstrations of these tasks, it can learn them more efficiently. It also gives the TRI team more room to influence the style of motion the robot uses to achieve the task.

The team uses adversarial motion priors (AMP), which are traditionally used for stylizing computer-animated characters, to incorporate human motion imitation into its reinforcement pipeline. 

Reinforcement learning does require the team to model tasks in simulation for training. To do this, TRI used a model-based planner for demonstrations instead of teleoperation. It called this process “plan-guided reinforcement learning.”

TRI claimed that using a planner makes longer-horizon tasks that are difficult to teleoperate possible. The team can also automatically generate any number of demonstrations, reducing its pipeline’s dependence on human input. This moves TRI closer to scaling up the number of tasks that Punyo can handle. 
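The plan-guided idea above can be sketched roughly as reward shaping around planner-generated demonstrations. Everything below, from the function names to the distance-based bonus and its weight, is an invented illustration, not TRI’s implementation:

```python
import random

def plan_demo(task_seed, horizon=5):
    """Stand-in for a model-based planner producing a demonstration
    trajectory in simulation (purely illustrative, not TRI's planner)."""
    rng = random.Random(task_seed)
    return [(rng.random(), rng.random()) for _ in range(horizon)]

def imitation_bonus(action, demo_action, scale=1.0):
    """Shaping term that rewards staying close to the planner's demo."""
    dist = sum((a - d) ** 2 for a, d in zip(action, demo_action)) ** 0.5
    return scale * (1.0 - min(dist, 1.0))

def shaped_reward(task_reward, action, demo_action, w_demo=0.5):
    """Task reward plus a demo-guided bonus, in the spirit of
    example-guided RL (the weighting is invented for illustration)."""
    return task_reward + w_demo * imitation_bonus(action, demo_action)

demo = plan_demo("lift_box")              # demos generated automatically
r = shaped_reward(1.0, demo[0], demo[0])  # perfect imitation: bonus is maximal
```

Because the planner, not a teleoperator, produces the demonstrations, new tasks can be added by generating more seeds rather than recruiting more human input, which mirrors the scaling argument in the text.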

Researchers debut TWIN lower limb exoskeleton https://www.therobotreport.com/researchers-debut-twin-lower-limb-exoskeleton/ https://www.therobotreport.com/researchers-debut-twin-lower-limb-exoskeleton/#respond Thu, 29 Feb 2024 18:52:21 +0000 https://www.therobotreport.com/?p=578028 TWIN is a motorized exoskeleton designed to be customized to enhance the physical abilities of each wearer.

The post Researchers debut TWIN lower limb exoskeleton appeared first on The Robot Report.


TWIN is made of lightweight materials, and it has modular components for usability and transportation. | Source: IIT

Italian researchers last week announced the TWIN powered exoskeleton for lower limbs. They said they have designed it to be easier to wear than competing exoskeletons on the market.

TWIN is the product of a collaboration between Rehab Technologies IIT – INAIL, the joint laboratory between the Istituto Italiano di Tecnologia (IIT– Italian Institute of Technology) in Genoa, and the Prosthetic Center of INAIL, the prosthetic unit of the National Institute for Insurance against Accidents at Work in Bologna. IIT’s Matteo Laffranchi coordinated the project, which began in 2013 with the goal of developing innovative, high-tech, cost-effective systems for patients with physical impairments.

The researchers presented TWIN during a press conference at the Museum of Science and Technology in Milan. Two patients who had been involved in testing the system demonstrated the exoskeleton.

One of the patients was Alex Santucci, who had worked with technicians and researchers throughout the project’s design phase and who participated in clinical experiments as a tester (see photo below). The clinical experiments took place at the Prosthetic Center of INAIL in Vigorso di Budrio, the Montecatone Rehabilitation Institute in Imola, and Villa Beretta in Costa Masnaga (LC).

Alex Santucci demonstrated the TWIN lower-limb exoskeleton. | Source: IIT-Istituto Italiano di Tecnologia

Exoskeleton designed for portability, ease of use

TWIN is intended to enhance the physical abilities of the wearer, according to the research organizations. While the exoskeleton is just a prototype now, the collaborators said they hope to bring it to market soon. 

The researchers designed TWIN to allow individuals with reduced or even absent motor abilities in their lower limbs to maintain an upright position. This can include people with complete spinal cord injuries.

The exoskeleton is not self-balancing, but it can enable wearers to walk with the assistance of crutches or walkers and to stand up and sit down, said the researchers in a release.

The researchers said TWIN has two unique features that make it stand out. The first is that it’s made of lightweight materials such as aluminum alloy. The second is that the system contains modular components to further facilitate usability and transportability. 

The structure of TWIN is adjustable based on the patient’s physical characteristics. It can be adjusted using telescopic links placed at the level of the femur and the tibia. The team also made ankle and foot support available in various sizes. This is so TWIN can adapt to the ergonomics of the user, whether they are male or female, young or adult. 

The TWIN motors actuate the knee and hip joints, imposing a fully configurable movement pattern on the patient’s limbs in terms of step length, step type, and walking speed. The battery lasts approximately four hours and requires an hour to recharge.

TWIN features three operating modes that adapt to the patient. The exoskeleton evaluates the degree of motor deficit of the person wearing it, particularly their ability to perform autonomous walking. The exoskeleton’s operating modes are:

  • Walk mode: The team designed this for patients with absent motor function. In this mode, the exoskeleton imposes a walking pattern according to programmed parameters. 
  • Retrain mode: This mode is for patients with partial impairment of lower-limb motor function who are capable of more or less autonomous movement but face difficulties in some phases of the step. In this case, the exoskeleton supports the patient’s movement with varying intensity, directing them toward an optimal reference trajectory.
  • TwinCare mode: The team designed this mode for patients with partial motor impairment differentiated between the two limbs. In this case, one leg might be healthy and able to move autonomously, while the other requires assistance in some phases of the step. 
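As a rough illustration of how such modes might modulate assistance at a single joint, here is a hedged sketch. The gains, values, and logic are invented for illustration, not IIT’s controller:

```python
def assist_torque(mode, theta_ref, theta, leg_impaired=True, k_full=40.0):
    """Illustrative assist-as-needed rule for one joint, loosely
    mirroring TWIN's three modes (gains and logic invented, not IIT's).

    - walk:     full trajectory tracking for absent motor function
    - retrain:  partial support toward a reference trajectory
    - twincare: assist only the impaired leg
    """
    error = theta_ref - theta  # joint-angle tracking error, radians
    if mode == "walk":
        return k_full * error
    if mode == "retrain":
        return 0.5 * k_full * error  # support, don't take over
    if mode == "twincare":
        return k_full * error if leg_impaired else 0.0
    raise ValueError(f"unknown mode: {mode}")

torque = assist_torque("retrain", theta_ref=1.0, theta=0.5)  # partial assist
```

The design point is that the same tracking controller serves all three modes; only the assistance gain, and which leg it applies to, changes with the patient’s level of impairment.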

What comes next for TWIN?

An operator, such as a physiotherapist in a rehabilitation clinic, can control TWIN using a specific Android app installed on the included tablet. The graphical interface allows for controlling the exoskeleton in the execution of various programmed activities, setting the kinematic parameters of movement, and choosing between different step execution modes.

In addition to rehabilitation clinics during physiotherapy sessions, TWIN can be worn daily, even just for a few hours. Assuming the upright position brings significant benefits in terms of musculoskeletal, circulatory, psychological, and digestive system functionality for wheelchair users, said the developers.

They added that their next goal for TWIN is CE marking. They are working with a partner on European certification, which will be followed by the industrialization process. Once TWIN is on the market, it can be used by patients in need, reintegrating severely injured workers into social and work environments, said the researchers.


MIT AI model promises to simplify path planning in warehouses https://www.therobotreport.com/mit-ai-model-promises-to-simplify-path-planning-in-warehouses/ https://www.therobotreport.com/mit-ai-model-promises-to-simplify-path-planning-in-warehouses/#respond Wed, 28 Feb 2024 21:37:59 +0000 https://www.therobotreport.com/?p=578018 MIT AI experts have applied a deep-learning model that can decongest robots nearly four times faster than typical strong random search methods. 

The post MIT AI model promises to simplify path planning in warehouses appeared first on The Robot Report.

MIT researchers have applied AI for traffic mitigation to managing multiple warehouse robots. | Source: Adobe Stock

Researchers at the Massachusetts Institute of Technology have applied ideas from the use of artificial intelligence to mitigate traffic congestion to tackle robotic path planning in warehouses. The team has developed a deep-learning model that can decongest robots nearly four times faster than typical strong random search methods, according to MIT. 

A typical automated warehouse could have hundreds of mobile robots running to and from their destinations and trying to avoid crashing into one another. Planning all of these simultaneous movements is a difficult problem. It’s so complex that even the best path-finding algorithms can struggle to keep up, said the university researchers.

The scientists built a deep-learning model that encodes warehouse information, including its robots, planned paths, tasks, and obstacles. The model then uses this information to predict the best areas of the warehouse to decongest and improve overall efficiency. 

“We devised a new neural network architecture that is actually suitable for real-time operations at the scale and complexity of these warehouses,” stated Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE) at MIT. “It can encode hundreds of robots in terms of their trajectories, origins, destinations, and relationships with other robots, and it can do this in an efficient manner that reuses computation across groups of robots.”

Wu is also a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

A divide-and-conquer approach to path planning

The MIT team’s technique for the deep-learning model was to divide the warehouse robots into groups. These smaller groups can be decongested faster with traditional robot-coordination algorithms than the entire fleet can be at once. 

This is different from traditional search-based algorithms, which avoid crashes by keeping one robot on its course and replanning the trajectory for the other. These algorithms have an increasingly difficult time coordinating everything as more robots are added. 

“Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” said Wu. “That means that every second, a robot is replanned 10 times. So these operations need to be very fast.”

To keep up with these operations, the MIT researchers used machine learning to focus the replanning on the most actionable areas of congestion. Here, the researchers saw the most room for improvement when it came to total travel time of robots. This is why they decided to tackle smaller groups of robots at the same time. 

For example, in a warehouse with 800 robots, the network might cut the warehouse floor into smaller groups that contain 40 robots each. Next, it predicts which of these groups has the most potential to improve the overall solution if a search-based solver were used to coordinate the trajectories of the robots in that group. 

Once it finds the most promising robot group using a neural network, the system decongests it with a search-based solver. After this, it moves on to the next most promising group.
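The select-and-solve loop described above can be sketched as follows. The function names and the toy congestion model are stand-ins for illustration, not the MIT system:

```python
def decongest(robots, score_group, solve_group, group_size=40, rounds=3):
    """Illustrative divide-and-conquer loop (names and the toy model
    below are stand-ins, not the MIT system).

    1. Partition the robots into fixed-size groups.
    2. Let a learned model score each group's potential improvement.
    3. Run a search-based solver on the most promising group; repeat.
    """
    groups = [robots[i:i + group_size]
              for i in range(0, len(robots), group_size)]
    for _ in range(rounds):
        best = max(groups, key=score_group)  # neural network's pick
        solve_group(best)  # replan this group; the rest stay fixed
    return robots

# Toy usage: each robot carries a congestion score, and the "solver"
# simply halves the scores of the group it replans.
congestion = {0: 5.0, 1: 1.0, 2: 9.0, 3: 2.0}

def score(group):
    return sum(congestion[r] for r in group)

def solve(group):
    for r in group:
        congestion[r] /= 2

decongest([0, 1, 2, 3], score, solve, group_size=2, rounds=2)
```

Note how the loop re-scores all groups each round: after the most congested group is replanned, a different group can become the most promising target, which is the behavior the article describes.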


How MIT picked the best robots to start with

The MIT team said its neural network can reason about groups of robots efficiently because it captures complicated relationships that exist between individual robots. For example, it can see that even though one robot may be far away from another initially, their paths could still cross at some point during their trips. 

Another advantage the system has is that it streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. This means that in a warehouse with 800 robots, decongesting 40 robots requires holding the other 760 as constraints. 

Other approaches require reasoning about all 800 robots once per group in each iteration. The MIT system instead reasons about the 800 robots only once per iteration, across all groups. 

The team tested this technique in several simulated environments, including some set up like warehouses, some with random obstacles, and even maze-like settings that emulate building interiors. By identifying more effective groups to decongest, the learning-based approach decongests the warehouse up to four times faster than strong, non-learning-based approaches, said MIT.

Even when the researchers factored in the additional computational overhead of running the neural network, its approach still solved the problem 3.5 times faster. 

In the future, Wu said she wants to derive simple, rule-based insights from their neural model, since the decisions of the neural network can be opaque and difficult to interpret. Simpler, rule-based methods could also be easier to implement and maintain in actual robotic warehouse settings, she said.

“This approach is based on a novel architecture where convolution and attention mechanisms interact effectively and efficiently,” commented Andrea Lodi, the Andrew H. and Ann R. Tisch Professor at Cornell Tech, who was not involved with this research. “Impressively, this leads to being able to take into account the spatiotemporal component of the constructed paths without the need of problem-specific feature engineering.”

“The results are outstanding: Not only is it possible to improve on state-of-the-art large neighborhood search methods in terms of quality of the solution and speed, but the model [also] generalizes to unseen cases wonderfully,” she said.

In addition to streamlining warehouse operations, the MIT researchers said their approach could be used in other complex planning tasks, like computer chip design or pipe routing in large buildings. 

Wu, senior author of a paper on this technique, was joined by lead author Zhongxia Yan, a graduate student in electrical engineering and computer science. The work will be presented at the International Conference on Learning Representations. Their work was supported by Amazon and the MIT Amazon Science Hub.

New computer model could help robots collect Moon dust https://www.therobotreport.com/new-computer-model-could-help-robots-collect-moon-dust/ https://www.therobotreport.com/new-computer-model-could-help-robots-collect-moon-dust/#respond Sat, 24 Feb 2024 17:21:49 +0000 https://www.therobotreport.com/?p=577982 Robots have emerged as a method to collect regolith due to their lower risks and costs compared to human spaceflight.

The post New computer model could help robots collect Moon dust appeared first on The Robot Report.

The same experiments were set up in both simulation and reality to see if the virtual regolith behaved realistically. This test looked at how small (16 g) samples of material flowed through narrow funnels. | Credit: Joe Louca

Researchers claim a new computer model mimics Moon dust so well that it could lead to smoother and safer Lunar robot teleoperations. The tool, developed by researchers at the University of Bristol and based at the Bristol Robotics Laboratory, could be used to train astronauts ahead of Lunar missions.

Working with their industry partner, Thales Alenia Space in the UK, which has a specific interest in creating working robotic systems for space applications, the team investigated a virtual version of regolith, another name for Moon dust.

Lunar regolith is of particular interest for the upcoming Lunar exploration missions planned over the next decade. From it, scientists can potentially extract valuable resources such as oxygen, rocket fuel or construction materials, to support a long-term presence on the Moon.

To collect regolith, remotely operated robots emerge as a practical choice due to their lower risks and costs compared to human spaceflight. However, operating robots over these large distances introduces large delays into the system, which make them more difficult to control.

Now that the team knows this simulation behaves similarly to reality, they can use it to mirror operating a robot on the Moon. This approach allows operators to control the robot without delays, providing a smoother and more efficient experience. You can learn more by reading the technical paper here.

Lead author Joe Louca, based in Bristol’s School of Engineering Mathematics and Technology explained: “Think of it like a realistic video game set on the Moon – we want to make sure the virtual version of moon dust behaves just like the actual thing, so that if we are using it to control a robot on the Moon, then it will behave as we expect.

“This model is accurate, scalable, and lightweight, so can be used to support upcoming lunar exploration missions.”

This study followed from previous work of the team, which found that expert robot operators want to train on their systems with gradually increasing risk and realism. That means starting in a simulation and building up to using physical mock-ups, before moving on to using the actual system. An accurate simulation model is crucial for training and developing the operator’s trust in the system.

While some especially accurate models of Moon dust had previously been developed, these are so detailed that they require a lot of computational time, making them too slow to control a robot smoothly. Researchers from DLR (German Aerospace Centre) tackled this challenge by developing a virtual model of regolith that considers its density, stickiness, and friction, as well as the Moon’s reduced gravity. Their model is of interest to the space industry because it is light on computational resources and hence can run in real time. However, it works best with small quantities of Moon dust.

The Bristol team aimed first to extend the model so it could handle more regolith while staying lightweight enough to run in real time, and then to verify it experimentally.
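As a toy illustration of the kind of lightweight, real-time particle model being described, here is a minimal sketch for a single grain. The parameters and the physics simplifications are invented for illustration, not the DLR or Bristol model:

```python
LUNAR_G = 1.62  # m/s^2, about one-sixth of Earth's gravity

def step_particle(pos, vel, dt=0.01, friction=0.8, cohesion=0.05):
    """One explicit Euler step for a single regolith grain (toy model).

    Gravity pulls the grain down; a velocity-proportional friction term
    and a small constant cohesion drag stand in for the sticky,
    frictional behavior of Moon dust described in the article.
    """
    x, y = pos
    vx, vy = vel
    vy -= LUNAR_G * dt                 # reduced lunar gravity
    vx *= (1.0 - friction * dt)        # frictional damping
    vy *= (1.0 - friction * dt)
    speed = (vx * vx + vy * vy) ** 0.5
    if speed > 0.0:                    # cohesion resists any motion
        drag = min(cohesion * dt, speed)
        vx -= drag * vx / speed
        vy -= drag * vy / speed
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = step_particle((0.0, 0.0), (0.0, 0.0))  # grain at rest starts to fall
```

Keeping the per-grain update this cheap is what makes real-time operation plausible; the heavy part of a full model is the grain-to-grain contact handling, which this sketch omits.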

Joe Louca added: “Our primary focus throughout this project was on enhancing the user experience for operators of these systems – how could we make their job easier?

“We began with the original virtual regolith model developed by DLR, and modified it to make it more scalable.

“Then, we conducted a series of experiments – half in a simulated environment, half in the real world – to measure whether the virtual moon dust behaved the same as its real-world counterpart.”

As this model of regolith is promising for being accurate, scalable and lightweight enough to be used in real-time, the team will next investigate whether it can be used when operating robots to collect regolith.

They also plan to investigate whether a similar system could be developed to simulate Martian soil, which could be of benefit for future exploration missions, or to train scientists to handle material from the highly anticipated Mars Sample Return mission.

Editor’s Note: This article was republished from the University of Bristol.

New Wyss project aims to control exosuit with brain signals https://www.therobotreport.com/new-wyss-project-aims-to-control-exosuit-with-brain-signals/ https://www.therobotreport.com/new-wyss-project-aims-to-control-exosuit-with-brain-signals/#respond Wed, 21 Feb 2024 21:23:43 +0000 https://www.therobotreport.com/?p=577945 The Wyss Center's Synapsuit project aims to develop high-performance algorithms that decode complex brain signals.

The post New Wyss project aims to control exosuit with brain signals appeared first on The Robot Report.

Researchers at the Wyss Center work on the Synapsuit exosuit project. | Source: Wyss Center

Researchers at the Wyss Center have an ongoing project to develop AI algorithms that use brain signals to control a lightweight exosuit.

The Synapsuit project aims to develop high-performance algorithms that decode complex brain signals. In turn, these signals control a lightweight, soft, wearable exosuit that supports arm and hand movement in real-time. The Wyss team collaborates with local and international partners on this project aimed at accelerating neuro-rehabilitation.

“Neuroscience is rapidly merging with AI, allowing us to discover important patterns hidden inside seemingly chaotic brain signals,” said Dr. Kyuhwa Lee, principal investigator, Wyss Center. “Using cutting-edge machine learning approaches, we aim to translate movement intentions into action for people living with movement disorders following spinal cord injury and stroke.”

Wyss plans to continue partnering to explore “new standards of neuro-AI technologies.” The group aims to assist people living with severe upper-limb motor disabilities to produce arm and hand movements. To achieve this, the team plans to gather large amounts of clinical data using flexible, high-density ECoG electrodes and develop new AI algorithms to decode the movement intentions of people experiencing motor disability.

More about the Wyss Center exosuit project

The components that make up the Synapsuit. | Source: Wyss Center

One partner in the exosuit project is Neurosoft Bioelectronics, a brain-computer interface (BCI) technology maker.

“At Neurosoft Bioelectronics, we are committed to pushing the boundaries of BCI technology,” said CEO Dr. Nicolas Vachicouras. “Our cutting-edge, soft implantable electrodes offer a novel way to record signals from previously unexplored brain regions. By integrating these electrodes into the Synapsuit project, we aim to significantly improve the decoding of movement intentions, thereby taking a critical step towards restoring functional mobility to those who need it most.”

Combining the algorithms with a brain-controlled exosuit could accelerate neuro-rehab methods, Wyss says, by supporting the movement of those suffering from stroke and spinal cord injury.

The team records brain signals using soft, foldable, and flexible electrodes that conform to neural tissue. The signals are then fed into the neuro-AI decoder, which sends a command to the fully flexible, soft exosuit. The exosuit applies an electrical current through transcutaneous neurostimulation, controlling the muscles that directly move the arm and hand.

Combined with a special material called an electrostatic clutch (ES-clutch), the exosuit allows the arm and hand to hold a posture on demand without causing fatigue.
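The signal path described above, from brain signals in, to decoded intent, to stimulation plus clutch state out, can be sketched as a toy pipeline. All names, thresholds, and current levels below are invented illustrations, not the Synapsuit design:

```python
def decode_intent(ecog_window):
    """Stand-in for the neuro-AI decoder: map a window of ECoG features
    to a movement intention (simple thresholding, purely illustrative)."""
    mean_power = sum(ecog_window) / len(ecog_window)
    return "grasp" if mean_power > 0.5 else "rest"

def stimulation_command(intent):
    """Translate a decoded intention into an exosuit command combining
    transcutaneous stimulation and the ES-clutch (channel names and
    current levels are invented, not the Synapsuit spec)."""
    if intent == "grasp":
        return {"channel": "finger_flexors", "current_ma": 12.0,
                "clutch": "hold"}
    return {"channel": None, "current_ma": 0.0, "clutch": "release"}

# Brain signal in -> decoded intent -> stimulation + clutch state out.
cmd = stimulation_command(decode_intent([0.7, 0.8, 0.6]))
```

Separating decoding from actuation in this way mirrors the article’s architecture: the AI decoder can be retrained on new clinical data without changing how stimulation and the clutch are driven.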

“We want to develop a highly usable, practical exosuit that can be used in daily life by people living with motor disability,” said Dr. Yun-Jae Won, principal investigator, Korea Electronics Technology Institute.

Editor’s Note: This article was syndicated from The Robot Report’s sister site Medical Design & Outsourcing. 

https://www.therobotreport.com/new-wyss-project-aims-to-control-exosuit-with-brain-signals/feed/ 0