Sensors / Sensing Systems Archives - The Robot Report

Project CETI develops robotics to make sperm whale tagging more humane

Project CETI is using robotics, machine learning, biology, linguistics, natural language processing, and more to decode whale communications.


Project CETI is a nonprofit scientific and conservation initiative that aims to decode whale communications. | Source: Project CETI

Off the idyllic shores of Dominica, a country in the Caribbean, hundreds of sperm whales gather deep in the sea. While their communication sounds like a series of clicks and creaks to the human ear, these whales have unique, regional dialects and even accents. A multidisciplinary group of scientists, led by Project CETI, is using soft robotics, machine learning, biology, linguistics, natural language processing, and more to decode their communications. 

Founded in 2020, Project CETI, or the Cetacean Translation Initiative, is a nonprofit organization dedicated to listening to and translating the communication systems of sperm whales. The team is using specially created tags that latch onto whales and gather information for the team to decode. Getting these tags to stay on the whales, however, is no easy task. 

“One of our core philosophies is we could never break the skin. We can never draw blood. These are just our own, personal guidelines,” David Gruber, the founder and president of Project CETI, told The Robot Report.

“[The tags] have four suction cups on them,” he said. “On one of the suction cups is a heart sensor, so you can get the heart rate of the whale. There are also three microphones on the front of it, so you can hear the whale that it’s on, and you can know the whales that are around it and in front of it.

“So you’ll be able to know from three different microphones the location of the whales that are speaking around it,” explained Gruber. “There’s a depth sensor in there, so you can actually see when the whale was diving and so you can see the profiles of it going up and down. There’s a temperature sensor. There’s an IMU, and it’s like a gyroscope, so you can know the position of the whale.”
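As a rough illustration of what the depth sensor's readings translate into, the sketch below converts pressure samples into an approximate dive profile using the hydrostatic relation. The seawater density and the sample readings are assumptions for illustration; this is not Project CETI's own software.

```python
# Rough sketch (not Project CETI's code): convert a tag's absolute pressure
# readings into an approximate dive-depth profile using the hydrostatic
# relation depth = (P - P_atm) / (rho * g).

SEAWATER_DENSITY = 1025.0          # kg/m^3, typical seawater (assumed)
GRAVITY = 9.81                     # m/s^2
ATMOSPHERIC_PRESSURE = 101_325.0   # Pa at the surface

def pressure_to_depth(pressure_pa: float) -> float:
    """Estimate depth in meters from an absolute pressure reading in pascals."""
    return (pressure_pa - ATMOSPHERIC_PRESSURE) / (SEAWATER_DENSITY * GRAVITY)

# Hypothetical samples from the tag's pressure sensor, in pascals.
samples = [101_325.0, 600_000.0, 3_100_000.0, 8_200_000.0]
print([round(pressure_to_depth(p), 1) for p in samples])
# -> [0.0, 49.6, 298.2, 805.4] meters, a crude dive profile
```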




Finding a humane way to tag whales

One of the core principles of Project CETI, according to Gruber, is to use technology to bring people closer to animals. 

“There was a quote by Stephen Hawking in a BBC article, in which he posited that the full development of AI and robotics would lead to the extinction of the human race,” Gruber said. “And we thought, ‘This is ridiculous, why would scientists develop something that would lead to our own extinction?’ And it really inspired us to counter this narrative and be like, ‘How can we make robots that are actually very gentle and increase empathy?’”

“In order to deploy those tags onto whales, what we needed was a form of gentle, stable, reversible adhesion,” Alyssa Hernandez, a functional morphologist, entomologist, and biomechanist on the CETI team, told The Robot Report. “So something that can be attached to the whale, where it would go on and remain on the whale for a long amount of time to collect the data, but still be able to release itself eventually, whether naturally by the movements of the whale, or by our own mechanism of sort of releasing the tag itself.”

This is what led the team to explore bio-inspired techniques of adhesion. In particular, the team settled on studying suction cups that are common in marine creatures. 

“Suction discs are pretty common in aquatic systems,” said Hernandez. “They show up in multiple groups of organisms, fish, cephalopods, and even aquatic insects. And there are variations often on each of these discs in terms of the morphology of these discs, and what elements these discs have.”

Hernandez drew on her biology background to design suction-cup grippers that would work particularly well on sperm whales, which are constantly moving through the water. That means the suction cups must withstand changing pressures and forces and stay attached to a whale’s uneven skin even as it moves.

“In the early days, when we first started this project, the question was, ‘Would the soft robots even survive in the deep sea?’” said Gruber. 


An overview of Project CETI’s mission. | Source: Project CETI

How suction cup shape changes performance

“We often think of suction cups as round, singular material elements, and in biology, that’s not usually the case,” noted Hernandez. “Sometimes these suction disks are sort of elongated or slightly different shaped, and oftentimes they have this sealing rim that helps them keep the suction engaged on rough surfaces.”

Hernandez said the CETI team started off with a standard, circular suction cup. Initially, the researchers tried out multiple materials and combinations of stiff backings and soft rims. Drawing on her biology experience, Hernandez began to experiment with more elongated, ellipse shapes. 

“I often saw [elongated grippers] when I was in museums looking at biological specimens or in the literature, so I wanted to look at an ellipse-shaped cup,” Hernandez said. “So I ended up designing one that was a medium-sized ellipse, and then a thinner ellipse as well. Another general design that I saw was more of this teardrop shape, so smaller at one end and wider at the base.” 

Hernandez said the team also looked at peanut-shaped grippers. In trying these different shapes, she looked for one that would provide increased resistance over the more traditional circular suction cups.

“We tested [the grippers] on different surfaces of different roughness and different compliance,” recalled Hernandez. “We ended up finding that compared to the standard circle, and variations of ellipses, this medium-sized ellipse performed better under shear conditions.” 

She said the teardrop-shaped gripper also performed well in lab testing. These shapes performed better because, unlike a circle, they don’t have a uniform stiffness throughout the cup, allowing them to bend with the whale as it moves. 

“Now, I’ve modified [the suction cups] a bit to fit our tag that we currently have,” Hernandez said. “So, I have some versions of those cups that are ready to be deployed on the tags.”


Project CETI uses drones to monitor sperm whale movements and to place the tags on the whales. | Source: Project CETI

Project CETI continues iterating

The Project CETI team is actively deploying its tags using a number of methods, including having biologists press them onto whales using long poles, a method called pole tagging, and using drones to press the tags onto the whales. 

Once the tags are on a whale, they stay attached for anywhere from a few hours to a few days. After they fall off, the CETI team has a mechanism that allows it to track the tags down and pull all of the gathered data off of them. CETI isn’t interested in making tags that can stay on the whales long-term, because sperm whales can travel long distances in just a few days, which could hinder the team’s ability to track the tags down once they fall off.

The CETI team said it plans to continue iterating on the suction grippers and trying new ways to gently get crucial data from sperm whales. It’s even looking into tags that would be able to slightly crawl to different positions on the whale to gather information about what the whale is eating, Gruber said. The team is also interested in exploring tags that could recharge themselves. 

“We’re always continuing to make things more and more gentle, more and more innovative,” said Gruber. “And putting that theme forward of how can we be almost invisible in this project.”

NEURA and Omron Robotics partner to offer cognitive factory automation

NEURA Robotics and Omron Robotics and Safety Technologies say their strategic alliance will make cognitive systems 'plug and play.'


NEURA has developed cognitive robots in a variety of form factors. Source: NEURA Robotics

Talk about combining robotics and artificial intelligence is all the rage, but some convergence is already maturing. NEURA Robotics GmbH and Omron Robotics and Safety Technologies Inc. today announced a strategic partnership to introduce “cognitive robotics” into manufacturing.

“By pooling our sensor and AI technologies and expertise into an ultimate platform approach, we will significantly shape the future of the manufacturing industry and set new standards,” stated David Reger, founder and CEO of NEURA Robotics.

Reger founded the company in 2019 with the intention of combining sensors, AI, and robotics components into a platform for app development similar to that of smartphones. The “NEURAverse” offers flexibility and cost efficiency in automation, according to the company.

“Unlike traditional industrial robots, cognitive robots have the ability to learn from their environment, make decisions autonomously, and adapt to dynamic production scenarios,” said Metzingen, Germany-based NEURA. “This opens new application possibilities including intricate assembly tasks, detailed quality inspections, and adaptive material handling processes.”

Omron has sensor, channel expertise

“We see NEURA’s cognitive technologies as a compelling growth opportunity for industrial robotics,” added Olivier Welker, president and CEO of Omron Robotics and Safety Technologies. “By combining NEURA’s innovative solutions with Omron’s global reach and automation portfolio, we will provide customers new ways to increase safety, productivity, and flexibility in their operations.”

Pleasanton, Calif.-based Omron Robotics is a subsidiary of OMRON Corp. focusing on automation and safety sensing. It designs and manufactures industrial, collaborative, and mobile robots for various industries.

“We’ve known Omron for quite some time, and even before I started NEURA, we had talked about collaborating,” Reger told The Robot Report. “They’ve tested our products, and we’ve worked together on how to benefit both sides.”

“We have the cognitive platform, and they’re one of the biggest sensor, controllers, and safety systems providers,” he added. “This collaboration will integrate our cognitive abilities and NEURAverse with their sensors for a plug-and-play solution, which everyone is working toward.”


Omron Robotics’ Olivier Welker and NEURA’s David Reger celebrate their partnership. Source: NEURA

Collaboration has ‘no limits’

When asked whether NEURA and Omron Robotics’ partnership is mainly focused on market access, Reger replied, “It’s not just the sales channel … there are no really big limits. From both sides, there will be add-ons.”

Rather than see each other as competitors, NEURA and Omron Robotics are working to make robots easier to use, he explained.

“As a billion-dollar company, it could have told our startup what it wanted, but Omron is different,” said Reger. “I felt we got a lot of respect from Olivier and everyone in that organization. It won’t be a one-sided thing; it will be just ‘Let’s help each other do something great.’ That’s what we’re feeling every day since we’ve been working together. Now we can start talking about it.”

NEURA has also been looking at mobile manipulation and humanoid robots, but adding capabilities to industrial automation is the “low-hanging fruit, where small changes can have a huge effect,” said Reger. “A lot of things for humanoids have not yet been solved.”

“I would love to just work on household robots, but the best way to get there is to use the synergy between industrial robotics and the household market,” he noted. “Our MAiRA, for example, is a cognitive robot able to scan an environment and from an idle state pick any known or unknown objects.”


MAiRA cognitive robot on MAV mobile base. Source: NEURA Robotics

Ease of use drives NEURA strategy

NEURA and Omron Robotics promise to make robots easier to use, helping overall adoption, Reger said.

“A big warehouse company out of the U.S. is claiming that it’s already using more than 1 million robots, but at the same time, I’m sure they’d love to use many more robots,” he said. “It’s also in the transformation from a niche market into a mass market. We see that’s currently only possible if you somehow control the environment.”

“It’s not just putting all the sensors inside the robot, which we were first to do, and saying, ‘OK, now we’re able to interact with a human and also pick objects,'” said Reger. “Imagine there are external sensors, but how do you calibrate them? To make everything plug and play, you need new interfaces, which means collaboration with big players like Omron that provide a lot of sensors for the automation market.”

NEURA has developed its own sensors and explored the balance of putting processing in the cloud versus the edge. To make its platform as popular with developers as that of Apple, however, the company needs the support of partners like Omron, he said.

Reger also mentioned NEURA’s partnership with Kawasaki, announced last year, in which Kawasaki offers the LARA CL series cobot with its portfolio. “Both collaborations are incredibly important for NEURA and will soon make sense to everyone,” he said.

NEURA to be at Robotics Summit & Expo

Reger will be presenting a session on “Developing Cognitive Robotics Systems” at 2:45 p.m. EDT on Wednesday, May 1, Day 1 of the Robotics Summit & Expo. The event will be at the Boston Convention and Exhibition Center, and registration is now open.

“I’ll be talking about making robots cognitive to enable AI to be useful to humanity instead of competing with us,” he said. “AI is making great steps, but if you look at what it’s doing, like drawing pictures or writing stories — these are things that I’d love to do but don’t have the time for. But if I ask, let’s say, AI to take out the garbage or show it a picture of garbage, it can tell me how to do it, but it’s simply not able to do something about it yet.”

NEURA is watching humanoid development but is focusing on integrating cognitive robotics with sensing and wearables as it expands in the U.S., said Reger. The company is planning for facilities in Detroit, Boston, and elsewhere, and it is looking for leadership team members as well as application developers and engineers.

“We don’t just want a sales office, but also production in the U.S.,” he said. “We have 220 people in Germany — I just welcomed 15 new people who joined NEURA — and are starting to build our U.S. team. In the past several months, we’ve gone with only European and American investors, and we’re looking at the Japanese market. The U.S. is now open to innovation, and it’s an exciting time for us to come.”




Stealthy startup Mendaera is developing a fist-sized medical robot with Dr. Fred Moll’s support

Mendaera is working on medical technology that combines robotics, AI, and real-time imaging in a compact device.


Editor’s Note: This article was syndicated from The Robot Report’s sister site Medical Design & Outsourcing.

The veil is starting to lift on medical robotics startup Mendaera Inc. as it exits stealth mode and heads toward regulatory submission, with a design freeze on its first system and verification and validation imminent.

Two former Auris Health leaders co-founded the San Mateo, Calif.-based company. Mendaera also has financial support from Dr. Fred Moll, the Auris and Intuitive Surgical co-founder who is known as “the father of robotic surgery.”

“Among the innovators in the field, Mendaera’s efforts to make robotics commonplace earlier in the healthcare continuum are unique and can potentially change the future of care delivery,” stated Moll in a release.

But Mendaera isn’t a surgical robotics developer. Instead, it said it is working on technology that combines robotics, artificial intelligence, and real-time imaging in a compact device “no bigger than your fist” for procedures including percutaneous instruments.


Mendaera co-founder and CEO Josh DeFonzo. | Source: Mendaera

Josh DeFonzo, co-founder and CEO of Mendaera, offered new details about his startup’s technology and goals in an exclusive interview, as he announced the acquisition of operating room telepresence technology that Avail Medsystems developed.

Avail, which shut down last year, was founded by former Intuitive Surgical and Shockwave Medical leader Daniel Hawkins, who’s now CEO at MRI automation software startup Vista.ai.

“We’re a very different form factor of robot that focuses on what I’ll describe as gateway procedures,” DeFonzo said. “It’s a different category of robots that we don’t believe the market has seen before [as] we’re designing and developing it.”

Those procedures include vascular access for delivery of devices or therapeutic agents; access to organs for surgical or diagnostics purposes; and pain management procedures such as regional anesthesia, neuraxial blocks, and chronic pain management. DeFonzo declined to go into much detail about specific procedures because the product is still in the development stage.

“The procedures that we are going after are those procedures that involve essentially a needle or a needle-like device and real-time imaging, and as such, there are specific procedures that we think the technology will perform very well at,” he said. “However, the technology is also designed to be able to address any suite of procedures that use those two common denominators: real-time imaging and a percutaneous instrument.”

“And the reason that’s an important point to make is that oftentimes, when you are a specialist who performs these procedures, you don’t perform just one,” added DeFonzo. “You perform a number of procedures: central venous catheters [CVCs], peripherally inserted central catheter [PICC] lines, regional anesthetic blocks that are in the interscalene area or axial blocks. The technology is really designed to enable specialists — of whom there are many — the ability to perform these procedures more consistently with a dramatically lower learning curve.”




Mendaera marks progress to date

Preclinical testing has shown the technology has improved accuracy and efficiency in comparison with freehand techniques, regardless of the individual’s skill level, asserted DeFonzo. User research spanned around 1,000 different healthcare providers ranging from emergency medicine and interventional radiology to licensed medical doctors, nurse practitioners, and physician assistants.

“It seems to be very stable across user types,” he said. “So whether somebody is a novice, of intermediate skill level, or advanced, the robot is a great leveler in terms of being able to provide consistent outcomes.”

“Whereas when you look at the same techniques performed freehand, the data generally tracks with what you would expect: lesser skilled people are less accurate; more experienced people are more accurate,” DeFonzo noted. “But even in that most skilled category, we do find that the robot makes a fairly remarkable improvement on accuracy and timeliness of intervention.”

Last year, the startup expanded into a production facility to accommodate growth and volume manufacturing for the product’s launch and said its system will be powered by handheld ultrasound developer Butterfly Network’s Ultrasound-on-Chip technology.

Butterfly Network won FDA clearance in 2017 for the Butterfly iQ for iPhone. | Source: Butterfly Network

Mendaera’s aim is to eventually deploy these systems “to the absolute edge of healthcare,” starting with hospitals, ambulatory surgical centers and other procedural settings, said DeFonzo. The company will then push to alternative care sites and primary care clinics as evidence builds to support the technology.

“The entire mission for the company is to ensure essentially that high-quality intervention is afforded to every patient at every care center at every encounter,” he said. “We want to be able to push that as far to the edge of healthcare as possible, and that’s certainly something we aim to do over time, but it’s not our starting point explicitly.”

“As a practical starting point, however, we do see ourselves working in the operating room, in the interventional radiology suite, and likely in cath labs to facilitate these gateway procedures, the access that is afforded adjacent to a larger intervention,” DeFonzo acknowledged.

Mendaera said it expects to submit its system to the U.S. Food and Drug Administration for review through the 510(k) pathway by the end of 2024 with the goal of offering the product clinically in 2025.

“What we really want to do with this technology is make sure that we’re leveraging not just technological trends, but really important forces in the space — robotics, imaging and AI — to dramatically improve access to care,” said DeFonzo. “Whether you’re talking about something as basic as a vascular access procedure or something as complex as transplant surgery or neurosurgery, we need to leverage technology to improve patient experience.”

“We need to leverage technology to help hospitals become more financially sustainable, ultimately improving the healthcare system as we do it,” he said. “So our vision was to utilize technology to provide solutions that aggregate across many millions, if not tens and hundreds of millions, of procedures to make a ubiquitous technology that really helps benefit our healthcare system.”

Mendaera’s research and development group will work with employees from Avail on how to best add the telepresence technology to the mix.

“We see a lot of power in what the Avail team has built,” DeFonzo said. “Bringing that alongside robotic technology, our imaging partnerships and AI, we think that we’ve got a really good opportunity to digitize to a further extent not only expertise in the form of the robot, but [also] clinical judgment, like how do you ensure that the right clinician and his or her input is present ahead of technologies like artificial intelligence that hopefully augment all users in an even more scalable way.”

Bota Systems launches upgraded force-torque sensor for small cobots

Bota Systems' latest multi-axis sensor provides a sensitivity level three to five times higher than the current SensONE sensor.


The SensONE T5 force-torque sensor on a UR collaborative robot. | Source: Bota Systems

Bota Systems AG has launched the SensONE T5, a high-sensitivity version of its SensONE multi-axis force-torque sensor. The company said its latest sensor provides a sensitivity level of 0.05 N / 0.002 Nm, which is three to five times higher than its predecessor.

Zurich-based Bota Systems said it built the SensONE T5 for collaborative robots with small payloads of up to 11.02 lb. (5 kg). The compact and lightweight sensor offers optimal sensitivity for small robots, according to the company.

“This new force-torque sensor’s excellent sensitivity opens up exciting new possibilities for collaborative small-payload robots, which are used for performing highly sensitive applications,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems, in a release. “The SensONE T5 will find its niche in end-of-line quality testing of small parts, such as buttons in electronics, as well as precision assembly of highly detailed, delicate tasks, such as the routing and installation of electric cables into cabinets.”




SensONE T5 designed for ease of integration

A robotic force-torque sensor is a device that simultaneously measures the force and torque applied to a surface. The measured output signals are used for real-time feedback control, enabling cobots to perform challenging human-machine interaction tasks, explained Bota Systems.

It added that the sensor most commonly used for such robotic applications is a six-axis force-torque sensor, which measures the force and torque on all three axes. Bota Systems said it designed its latest system for challenging applications.
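As a simplified illustration of the kind of real-time feedback loop such a sensor enables, the sketch below nudges a robot's commanded velocity based on the measured contact force. The read_wrench and send_velocity callables are hypothetical placeholders for whatever robot driver is actually in use, and the gain is arbitrary; this does not represent Bota Systems' software or APIs.

```python
# Minimal admittance-style force-feedback loop driven by a six-axis
# force-torque reading. Illustrative only; the I/O functions are stand-ins.
import time

TARGET_FORCE_Z = 5.0           # N, desired contact force along the tool z-axis
GAIN = 0.002                   # m/s of velocity correction per N of error (assumed)
SAMPLE_PERIOD = 1.0 / 2000.0   # s, matching a 2,000 Hz sensor update rate

def control_step(read_wrench, send_velocity):
    """One cycle: read the wrench, correct the z-velocity command."""
    fx, fy, fz, tx, ty, tz = read_wrench()   # forces in N, torques in Nm
    error = TARGET_FORCE_Z - fz              # positive error -> push harder
    send_velocity(z=GAIN * error)            # small correction each cycle

def run(read_wrench, send_velocity, cycles=2000):
    for _ in range(cycles):
        control_step(read_wrench, send_velocity)
        time.sleep(SAMPLE_PERIOD)
```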

The SensONE T5 comes in a dustproof, water-resistant, and compact package. The company claimed that it is easy to integrate into a robotic arm and requires no mounting adapter.

Temperature drift on the sensor is negligible, and the new sensor provides accuracy within 2% at a sampling rate of up to 2,000 Hz, said Bota. The sensor is available with two communications options: serial USB/RS422 and EtherCAT. It comes with smooth TwinCAT, URCap, ROS, LabVIEW, and MATLAB software integration, according to the company.


The SensONE T5 force-torque sensor can be integrated into robotic arms without a mounting adapter. | Source: Bota Systems

See Bota Systems at the Robotics Summit & Expo

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic. In October 2023, the company added NEXT Robotics to its distributor network.

NEXT is now its official distributor for the German-speaking countries of Germany, Austria, and Switzerland. That same month, Bota Systems raised $2.5 million in seed funding. 

Marathon Venture Capital led the round, along with participation from angel investors. Bota Systems said it plans to use the funding to grow its team to address increasing demand by leading research labs and manufacturing companies. It also plans to accelerate its product roadmap.

To learn more about Bota Systems, visit it at Booth 315 at the Robotics Summit & Expo, which will be held on May 1 and 2 in Boston.

This will be the largest Robotics Summit ever. It will include more than 200 exhibitors, various networking opportunities, a women in robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.

AMD announces Embedded+ architecture to accelerate edge AI

AMD Embedded+ combines embedded processors with adaptive systems on chips to shorten edge AI time to market.


The new AMD Embedded+ architecture for high-performance compute. Source: Advanced Micro Devices

Robots and other smart devices need to process sensor data with a minimum of delay. Advanced Micro Devices Inc. today launched AMD Embedded+, a new computing architecture that combines AMD Ryzen Embedded processors with Versal adaptive systems on chips, or SoCs. The single integrated board is scalable and power-efficient and can accelerate time to market for original design manufacturer, or ODM, partners, said the company.

“In automated systems, sensor data has diminishing value with time and must operate on the freshest information possible to enable the lowest-latency, deterministic response,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD, in a release.

“In industrial and medical applications, many decisions need to happen in milliseconds,” he noted. “Embedded+ maximizes the value of partner and customer data with energy efficiency and performant computing that enables them to focus in turn on addressing their customer and market needs.”

For more than 50 years, AMD said it has innovated in high-performance computing, graphics, and visualization technologies. The Santa Clara, Calif.-based company claimed that Fortune 500 businesses, billions of people, and research institutions around the world rely on its technology daily.

In the two years since it acquired Xilinx, AMD said it has seen increasing demand for AI in industrial/manufacturing, medical/surgical, smart-city infrastructure, and automotive markets. Not only can Embedded+ support video codecs and AI inferencing, but the combination of Ryzen and Versal can enable real-time control of robot arms, Khona said.

“Diverse sensor data is relied upon more than ever before, across applications,” said Khona in a press briefing last week. “The question is how to get sensor data from autonomous systems into a PC if it isn’t on a USB or some consumer interface.”




AMD Embedded+ paves a path to sensor fusion 

“The market for bringing processing closer to the sensor is growing rapidly,” said Khona. The use cases for embedded AI are growing, with the machine vision market growing to $600 million and sensor data analysis to $1.4 billion by 2028, he explained.

“AMD makes the path to sensor fusion, AI inferencing, industrial networking, control, and visualization simpler with this architecture and ODM partner products,” Khona said. He described the single motherboard as usable with multiple types of sensors, allowing for offloaded processing and situational awareness.

AMD said it has validated the Embedded+ integrated compute platform to help ODM customers reduce qualification and build times without needing to expend additional hardware or research and development resources. The architecture enables the use of a common software platform to develop designs with low power, small form factors, and long lifecycles for medical, industrial, and automotive applications, it said.

The company asserted that Embedded+ is the first architecture to combine AMD x86 compute with integrated graphics and programmable I/O hardware for critical AI-inferencing and sensor-fusion applications. “Adaptive computing excels in deterministic, low-latency processing, whereas AI Engines improve high performance-per-watt inferencing,” said AMD.

Ryzen Embedded processors, which contain high-performance Zen cores and Radeon graphics, also offer rendering and display options for an enhanced 4K multimedia experience. In addition, they include a built-in video codec for 4K H.264/H.265 encode and decode.

The combination of low-latency processing and high performance-per-watt inferencing enables high performance for tasks such as integrating adaptive computing in real time with flexible I/O, AI Engines for inferencing, and AMD Radeon graphics, said AMD.

It added that the new system combines the best of each technology. Embedded+ enables 10GigE vision and CoaXPress connectivity to cameras via SFP+, said AMD, and image pre-processing occurs at pixel clock rates. This is especially important for mobile robot navigation, said Khona.

Sapphire delivers first Embedded+ ODM system

Embedded+ also allows system designers to choose from an ecosystem of ODM board offerings based on the architecture, said AMD. They can use it to scale their product portfolios to deliver performance and power profiles best suited to customers’ target applications, it asserted.

Sapphire Technology has built the first ODM system with the Embedded+ architecture, the Sapphire Edge+ VPR-4616-MB, a low-power Mini-ITX form factor motherboard. It offers the full suite of capabilities in as little as 30 W of power by using the Ryzen Embedded R2314 processor and Versal AI Edge VE2302 Adaptive SoC.

The Sapphire Edge+ VPR-4616-MB is also available in a full system, including memory, storage, power supply, and chassis. Versal is a programmable network on a chip that can be tuned for power or performance, said AMD. With Ryzen, it provides programmable logic for sensor fusion and real-time controls, it explained.

“By working with a compute architecture that is validated and reliable, we’re able to focus our resources to bolster other aspects of our products, shortening time to market and reducing R&D costs,” said Adrian Thompson, senior vice president of global marketing at Sapphire Technology. “Embedded+ is an excellent, streamlined platform for building solutions with leading performance and features.”

The Embedded+ qualified VPR-4616-MB from Sapphire Technology is now available for purchase.

KettyBot Pro will provide personalized customer service, says Pudu Robotics

KettyBot Pro's new features include a larger screen for personalized advertising, cameras for navigation, and smart tray inspection.


KettyBot Pro is designed for multiple functions. Source: Pudu Robotics

Pudu Technology Co. today launched KettyBot Pro, the newest generation of its delivery and reception robot. The service robot is designed to address labor shortages in the retail and restaurant industries and enhance customer engagement, said the company.

“In addition to delivering food and returning items, KettyBot can attract, greet, and guide customers in dynamic environments while generating advertising revenue, reducing overhead, and enhancing the in-store experience,” stated Shenzhen, China-based Pudu.

“We hear from various businesses that it’s hard to maintain adequate service levels due to staff being overwhelmed and stretched thin,” said Felix Zhang, founder and CEO of Pudu Robotics, in a release. “Robots like KettyBot Pro lend a helping hand by collaborating with human staff, improving their lives by taking care of monotonous tasks so that they can focus on more value-added services like enhancing customer experience. And people love that you can talk to it.”




KettyBot Pro designed to step up service

KettyBot Pro can enhance the customer experience with artificial intelligence-enabled voice interaction, said Pudu Robotics. The mobile robot also has autonomous path planning.

The company said the latest addition to its fleet of commercial service robots includes the following new features:

  • Passability upgrade: A new RGBD depth camera — with an ultra-wide angle that boosts the robot’s ability to detect and avoid objects — reduces KettyBot’s minimum clearance from 55 to 52 cm (21.6 to 20.4 in.) under ideal conditions. This allows the robot to navigate through narrow passageways and operate in busy dining rooms and stores.
  • Smart tray inspection: Pudu claimed that this functionality is “a first in the industry.” The robot uses a fisheye camera above the tray to detect the presence or absence of objects on the tray. Once a customer has picked up their meal, the vision system will automatically recognize the completion of the task and proceed to the next one without the need for manual intervention.
  • Customization for customers: The integration with PUDU Open Platform allows users to personalize KettyBot Pro’s expressions, voice, and content for easy operation and the creation of differentiated services. In a themed restaurant, the KettyBot Pro can display expressions or play lines associated with relevant characters as it delivers meals. It can also provide personalized welcome messages and greeting services, such as birthday services in star-rated hotels.
  • Mobile advertising display: Through the PUDU Merchant Management Platform, businesses can flexibly edit personalized advertisements, marketing videos, and more. Equipped with an 18.5 in. (38.1 cm) large screen, the KettyBot Pro offers new ways to promote menu updates and market products for restaurant and retail clients.
  • New color schemes: The KettyBot is now available in “Pure Black” in addition to the white and yellow, or the yellow and black, color scheme of the original model. Pudu said this variety will better meet the aesthetic preferences of customers in different industries across global markets. For instance, high-end hotels and business venues regard Pure Black as the premium choice, it said.

Pudu Robotics builds for growth

Founded in 2016, Pudu Robotics said it has shipped nearly 70,000 units in more than 60 countries. Since KettyBot’s launch in 2021, global brands such as KFC, MediaMarkt, Pizza Hut, and Walmart have successfully deployed the robot in high-traffic environments. These companies use the robot to deliver orders, market menu items and products, and welcome guests, said Pudu.

With growing healthcare needs and advances in artificial intelligence, the U.S. service robotics market is poised to grow this year, Zhang told The Robot Report.

Pudu Robotics — which reached $100 million in revenue in 2022 — is building two new factories near Shanghai that it said will triple the company’s annual capacity and help it meet global demand.

The role of ToF sensors in mobile robots

Time-of-flight or ToF sensors provide mobile robots with precise navigation, low-light performance, and high frame rates for a range of applications.


ToF sensors provide 3D information about the world around a mobile robot, supplying important data to the robot’s perception algorithms. | Credit: E-con Systems

In the ever-evolving world of robotics, the seamless integration of technologies promises to revolutionize how humans interact with machines. An example of transformative innovation, the emergence of time-of-flight or ToF sensors is crucial in enabling mobile robots to better perceive the world around them.

ToF sensors serve a similar purpose to lidar in that both are used to create depth maps. However, the key distinction lies in ToF cameras’ ability to provide depth images that can be processed faster, and they can be built into systems for various applications.

This maximizes the utility of ToF technology in robotics. It has the potential to benefit industries reliant on precise navigation and interaction.

Why mobile robots need 3D vision

Historically, RGB cameras were the primary sensor for industrial robots, capturing 2D images based on color information in a scene. These 2D cameras have been used for decades in industrial settings to guide robot arms in pick-and-pack applications.

Such 2D RGB cameras always require a camera-to-arm calibration sequence to map scene data to the robot’s world coordinate system. 2D cameras are unable to gauge distances without this calibration sequence, thus making them unusable as sensors for obstacle avoidance and guidance.
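As a simple illustration of what that calibration provides, the sketch below applies a camera-to-robot-base transform to a point detected by the camera so the arm can act on it in its own coordinate system. The transform values are made-up placeholders, not the result of any real calibration.

```python
# Applying a (hypothetical) hand-eye calibration result: a 4x4 homogeneous
# transform that maps points from the camera frame to the robot base frame.
import numpy as np

T_base_camera = np.array([
    [0.0, -1.0, 0.0, 0.40],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.55],
    [0.0,  0.0, 0.0, 1.00],
])  # placeholder values for illustration

def camera_to_base(point_camera_m):
    """Map a 3D point from camera coordinates to robot base coordinates."""
    p = np.append(np.asarray(point_camera_m, dtype=float), 1.0)  # homogeneous
    return (T_base_camera @ p)[:3]

print(camera_to_base([0.05, -0.02, 0.60]))  # a point the arm can now move to
```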

Autonomous mobile robots (AMRs) must accurately perceive the changing world around them to avoid obstacles and build a world map while remaining localized within that map. Time-of-flight sensors have been in existence since the late 1970s and have evolved to become one of the leading technologies for extracting depth data. It was natural to adopt ToF sensors to guide AMRs around their environments.

Lidar was adopted as one of the early types of ToF sensors to enable AMRs to sense the world around them. Lidar bounces a laser light pulse off of surfaces and measures the distance from the sensor to the surface.

However, the first lidar sensors could only perceive a slice of the world around the robot using the flight path of a single laser line. These lidar units were typically positioned between 4 and 12 in. above the ground, and they could only see things that broke through that plane of light.

The next generation of AMRs began to employ 3D stereo RGB cameras that provide 3D depth information. These sensors use two stereo-mounted RGB cameras and a “light dot projector” that enables the camera array to accurately view the projected light on the scene in front of the camera.

Companies such as Photoneo and Intel RealSense were two of the early 3D RGB camera developers in this market. These cameras initially enabled industrial applications such as identifying and picking individual items from bins.

Until the advent of these sensors, bin picking was known as a “holy grail” application, one which the vision guidance community knew would be difficult to solve.

The camera landscape evolves

A salient feature of ToF cameras is their low-light performance, which prioritizes human-eye safety. A 6 m (19.6 ft.) range in far mode facilitates optimal people and object detection, while the close-range mode excels in volume measurement and quality inspection.

The cameras return the data in the form of a “point cloud.” On-camera processing capability mitigates computational overhead and is potentially useful for applications like warehouse robots, service robots, robotic arms, autonomous guided vehicles (AGVs), people-counting systems, 3D face recognition for anti-spoofing, and patient care and monitoring.

Time-of-flight technology is significantly more affordable than other 3D-depth range-scanning technologies like structured-light camera/projector systems.

For instance, ToF sensors facilitate the autonomous movement of outdoor delivery robots by precisely measuring depth in real time. This versatile application of ToF cameras in robotics promises to serve industries reliant on precise navigation and interaction.

How ToF sensors take perception a step further

A fundamental difference between time-of-flight and RGB cameras is their ability to perceive depth. RGB cameras capture images based on color information, whereas ToF cameras measure the time taken for light to bounce off an object and return, yielding direct depth measurements.

ToF sensors capture data to generate detailed 3D maps of their surroundings, endowing mobile robots with an added dimension of depth perception.
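As a back-of-the-envelope illustration of the underlying math, the sketch below first converts a measured round-trip light time into a distance, then back-projects a single depth pixel into a 3D point using assumed pinhole intrinsics. Real ToF cameras do this on dedicated hardware with calibrated parameters.

```python
# Two core ToF steps: round-trip time -> distance, then depth pixel -> 3D point.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(round_trip_seconds: float) -> float:
    """Light travels out and back, so the distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(tof_to_distance(20e-9))  # a ~20 ns round trip is roughly 3 m

# Pinhole back-projection of one depth pixel (u, v, depth) into (x, y, z).
FX, FY = 500.0, 500.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed 640x480 sensor)

def pixel_to_point(u: int, v: int, depth_m: float):
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

print(pixel_to_point(400, 300, 2.0))  # -> (0.32, 0.24, 2.0)
```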

Furthermore, stereo vision technology has also evolved. Using an IR pattern projector, it illuminates the scene and compares disparities of stereo images from two 2D sensors – ensuring superior low-light performance.

In comparison, ToF cameras use a sensor, a lighting unit, and a depth-processing unit. This allows AMRs to have full depth-perception capabilities out of the box without further calibration.

One key advantage of ToF cameras is that they extract 3D images at high frame rates and can rapidly separate the foreground from the background. They can also function in both well-lit and dark conditions through the use of active lighting components.
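A toy example of that background/foreground split, using a made-up depth image and an arbitrary threshold; everything closer than the threshold is treated as foreground:

```python
import numpy as np

depth_m = np.array([
    [3.1, 3.0, 3.1, 3.0],
    [3.0, 1.2, 1.1, 3.1],
    [3.1, 1.1, 1.0, 3.0],
])  # a tiny 3x4 "depth image" in meters (illustrative values)

FOREGROUND_THRESHOLD_M = 2.0
foreground_mask = depth_m < FOREGROUND_THRESHOLD_M

print(foreground_mask.astype(int))
# [[0 0 0 0]
#  [0 1 1 0]
#  [0 1 1 0]]
```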

In summary, compared with RGB cameras, ToF cameras can operate in low-light applications and without the need for calibration. ToF camera units can also be more affordable than stereo RGB cameras or most lidar units.

One downside for ToF cameras is that they must be used in isolation, as their emitters can confuse nearby cameras. ToF cameras also cannot be used in overly bright environments because the ambient light can wash out the emitted light source.


A ToF sensor is simply a sensor that uses the time of flight of light to measure depth and distance. | Credit: E-con Systems

Applications of ToF sensors

ToF cameras are enabling multiple AMR/AGV applications in warehouses. These cameras give warehouse operations depth-perception intelligence that enables robots to see the world around them and make critical decisions with accuracy, convenience, and speed. This includes functionalities such as:

  • Localization: This helps AMRs identify positions by scanning the surroundings to create a map and match the information collected to known data
  • Mapping: It creates a map by using the transit time of the light reflected from the target object with the SLAM (simultaneous localization and mapping) algorithm
  • Navigation: It moves the robot from Point A to Point B on a known map

With ToF technology, AMRs can understand their environment in 3D before deciding the path to be taken to avoid obstacles. 

Finally, there’s odometry, the process of estimating the change in a mobile robot’s position over time by analyzing data from motion sensors. ToF technology has shown that it can be fused with other sensors to improve the accuracy of AMRs.
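As a minimal sketch of that fusion idea, the example below nudges a drifting wheel-odometry pose toward an absolute fix from a hypothetical ToF-based localizer using a simple complementary blend. Production AMRs typically use proper filters (for example, an extended Kalman filter), and the numbers here are purely illustrative.

```python
BLEND = 0.05  # fraction of trust placed in the absolute correction each update

def fuse(odometry_pose, localizer_pose, blend=BLEND):
    """Pull the odometry estimate slightly toward the localizer's pose."""
    return tuple(o + blend * (l - o) for o, l in zip(odometry_pose, localizer_pose))

pose = (0.0, 0.0, 0.0)            # x (m), y (m), heading (rad)
wheel_delta = (0.10, 0.00, 0.01)  # motion reported by wheel encoders this step

# Dead-reckon, then correct with a (hypothetical) ToF/SLAM localization fix.
pose = tuple(p + d for p, d in zip(pose, wheel_delta))
pose = fuse(pose, localizer_pose=(0.09, 0.02, 0.00))
print(pose)  # close to the odometry estimate, gently pulled toward the fix
```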

About the author

Maharajan Veerabahu has more than two decades of experience in embedded software and product development, and he is a co-founder and vice president of product development services at e-con Systems, a prominent OEM camera product and design services company. Veerabahu is also a co-founder of VisAi Labs, a computer vision and AI R&D unit that provides vision AI-based solutions for their camera customers.

Gecko Robotics, Rho Impact study how robots and AI could improve sustainability

Automated inspection of critical infrastructure can provide data to help multiple industries, report Gecko Robotics and Rho Impact.


Robotics, data, and AI promise to help both sustainability and profitability. Source: Gecko Robotics

Robots and artificial intelligence have a significant role to play in maintaining crumbling infrastructure — and they could help bring about a zero-carbon economy while they’re at it.

That’s the conclusion of a new report by Gecko Robotics and Rho Impact, which studied how these technologies could reduce the environmental impact of critical infrastructure by bringing them into the digital world.

The potential is massive: The report claimed that digitizing carbon-intensive infrastructure could reduce emissions by a whopping 853 million metric tons (MMT) of CO2 annually. This is the equivalent of taking almost two-thirds of the gas-powered vehicles in the U.S. off the road, according to Gecko Robotics and Rho Impact.
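A rough sanity check of that comparison, using two outside assumptions rather than numbers from the report (the EPA's commonly cited figure of about 4.6 metric tons of CO2 per typical passenger vehicle per year, and roughly 280 million registered U.S. vehicles):

```python
# Back-of-the-envelope check of the "two-thirds of U.S. vehicles" equivalence.
annual_reduction_t = 853_000_000   # 853 MMT CO2e, from the report
co2_per_vehicle_t = 4.6            # metric tons per vehicle per year (assumed)
us_vehicles = 280_000_000          # registered U.S. vehicles (assumed)

equivalent_vehicles = annual_reduction_t / co2_per_vehicle_t
share = equivalent_vehicles / us_vehicles
print(f"{equivalent_vehicles:,.0f} vehicles, about {share:.0%} of the fleet")
# -> roughly 185 million vehicles, or about 66% of the fleet
```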

Gecko identifies five areas for digital transformation

The report looked at five different sectors to demonstrate how digital transformation technologies could help create efficiencies and reduce emissions.

Oil and gas pipelines: Using robots for early detection of corrosion and other damage on components could reduce fugitive emissions, the unintended discharge of gases. AI-powered preventative maintenance programs can help avoid pipeline failure, resulting in a possible 556 MMT CO2e reduction in fugitive emissions, according to the companies.

Pulp and paper industry: Digitizing physical assets in this industry could help prevent corrosion of components. Robotics can be used to identify and address corrosion in paper mill tanks and pressure vessels, and AI can be used to make paper mill boiler operations more efficient.

Not only could digitalization reduce annual emissions by 46 MMT CO2e, per the report, but it could also result in a 6% improvement in emissions efficiency.

Maritime transportation: Digitalization could reduce greenhouse gas emissions by optimizing loads and detecting leaks on large ships, which can be up to 70% more efficient than smaller ones.

The report suggested that robots could inspect these large vessels more efficiently, reducing their time in the repair dock. Shippers could deploy AI to optimize loads. As a result, 11 MMT of CO2e emissions could be prevented by making the most efficient vessels more available.

Bridge infrastructure: Deploying robots to collect inspection and maintenance data, and using AI to analyze it and predict outcomes, could help reduce the time bridges are partially or fully closed for maintenance and repair, said Gecko Robotics and Rho Impact.

Digitalizing the inspection process would generate better data on bridges and could help reduce traffic-related emissions by 10 MMT CO2e, the report claimed.

Data key to addressing climate change, says report

The report concluded that it all comes down to data. Bringing major carbon-emitting industries into the digital world requires a comprehensive and detailed understanding of their infrastructure.

But manual inspection methods can result in limited data that doesn’t adequately identify the critical defects in infrastructural assets. Those assets don’t get the maintenance they need, leading to a shortened lifespan and premature replacement—which can have significant and avoidable business and environmental impacts. 

By contrast, robotic inspections enable operators to collect comprehensive and detailed data, allowing them to prioritize maintenance and repair work that helps make their operations more efficient and extends their lifetime.

Deploying robots and AI as part of a digital transformation strategy makes the task of collecting and gaining insight from that critical data easier than ever. Not only could these technologies help industry meet the challenge of global warming, but they could also help boost their bottom lines.

About the author

Matthew Greenwood is a freelance writer for Engineering.com, a sibling site to The Robot Report. He has a background in strategic communications. He writes about technology, manufacturing, and aerospace.

Ansys, NVIDIA team up to test autonomous vehicle sensors

Integration will enable engineers to simulate accurate camera, lidar, radar and thermal camera sensors to train autonomous vehicles.


Ansys AVxcelerate Sensors will be able to generate simulated sensor data based on scenario-based road conditions made within NVIDIA DRIVE Sim. Credit: Ansys

Ansys AVxcelerate Sensors, the simulation giant’s autonomous vehicle (AV) sensor modeling and testing software, is now available within NVIDIA DRIVE Sim, a scenario-based simulator for developing and testing automotive AI. By integrating these technologies, engineers can simulate camera, lidar, radar, and thermal camera sensor data to help train and validate advanced driver-assistance systems (ADAS) and AV systems.

NVIDIA DRIVE Sim is built on NVIDIA Omniverse, an industrial digitization platform for Universal Scene Description (OpenUSD) applications.

To meet safety standards, the AI within ADAS and AV systems must be trained and tested on millions of edge cases on billions of miles of roads. It’s not possible to do all these tests in the real world within a reasonable budget or amount of time. As a result, engineers need to use simulated environments to safely test at scale. With the latest integration of Ansys and NVIDIA technology, sensor and software performance can be tested in a digital world to meet these safety requirements.

In other words, engineers will be able to predict the performance of AV sensors, such as camera, lidar, radar, and thermal camera sensors, using Ansys AVxcelerate Sensors’ simulations, which gather inputs from digital worlds created in NVIDIA DRIVE Sim.




“Perception is crucial for AV systems, and it requires validation through real-world data for the AI to make smart, safe decisions,” said Walt Hearn, senior vice president of worldwide sales and customer excellence at Ansys. “Combining Ansys AVxcelerate Sensors with NVIDIA DRIVE Sim, powered by Omniverse, provides a rich playground for developers to test and validate critical environmental interactions without limitations, paving the way for OEMs to accelerate AV technology development.”

In other autonomous vehicle news, Waymo recently announced it will begin testing its robotaxis on highways in Phoenix without a human safety driver. The rides will be available only to company employees, and their guests, to start. Waymo’s vehicles have been allowed on highways but required a human safety driver in the front seat to handle any issues.

Kodiak Robotics introduced at CES 2024 its sixth-generation, driverless-ready semi truck. The company said its self-driving truck is designed for scaled deployment and builds on five years of real-world testing. This testing included 5,000 loads carried over more than 2.5 million miles. The Mountain View, Calif.-based company said it will use the new truck for its own driverless operations on roadways between Dallas and Houston this year.

Top 10 robots seen at CES 2024
https://www.therobotreport.com/top-10-robots-seen-at-ces-2024/
Sat, 13 Jan 2024 | A quick look at some of the most noteworthy robots at the 2024 CES show in Las Vegas.


LAS VEGAS — CES 2024 featured a wide range of emerging technologies, from fitness devices and videogames to autonomous vehicles. But robots always have a significant presence in the sprawling exhibit halls.

The Robot Report visited numerous booths in the Eureka Park section of CES, as well as in the other focused sections of the event. Here are some highlights from this week’s event:

Mobinn climbs stairway toward success

Mobinn is a Korean startup focused on last-mile autonomous delivery vehicles. While the concept of last-mile delivery isn’t new, Mobinn demonstrated an innovative wheeled robot that can climb stairs.

The robot can go up and down stairs thanks to its compliant wheels. A self-leveling box on top of the robot keeps the cargo level, so your drinks and food don’t spill out of their containers.
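Mobinn has not published how the self-leveling box works, but keeping a payload level on stairs is typically a small feedback loop on the pitch angle reported by an IMU. The sketch below assumes a hypothetical single-axis tilt actuator and uses made-up gains; it illustrates the general idea rather than Mobinn’s design.

# Minimal sketch of a self-leveling payload controller (an assumed approach;
# Mobinn has not published its design). A PID loop drives a hypothetical tilt
# actuator to cancel the pitch angle reported by an IMU while climbing stairs.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def level_step(imu_pitch_deg, controller, dt=0.01):
    # Target is 0 deg: command the actuator to counteract the measured body pitch.
    return controller.update(0.0 - imu_pitch_deg, dt)

pid = PID(kp=2.0, ki=0.1, kd=0.05)          # made-up gains for illustration
print(level_step(imu_pitch_deg=12.0, controller=pid))  # robot pitched up on a step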

Glidance guides people with vision impairments


The latest prototype of the Glidance device is fully functional for public demos and includes a camera in the handle and radar sensors in the base. | Credit: The Robot Report

RoboBusiness PitchFire 2023 winner Glidance also won a CES 2024 Innovation Award and showed the latest functional prototype of its Glide device in its booth. The robotic Glide guides sight-impaired individuals much as a guide dog does.

The startup is designing Glide to be affordable and easy to learn how to use when it starts shipping later this year. I tried Glide firsthand (while closing my eyes). The experience was incredible, and I can only imagine how promising this technology would be for an individual with sight loss.

The team at Glidance mentioned to me that celebrity Stevie Wonder came to the Glidance booth for a demo of the product during CES.

Unitree H1 humanoid steals the show

There were two full-size humanoid robots at CES 2024.

Kepler Robotics had a stationary model of the new Kepler Forerunner K1 in its Eureka Park booth. The robot includes 14 linear-axis motors and 14 rotary-axis motors. Unfortunately, the company was unable to give live demos of the Forerunner.


The Unitree H1 humanoid robot uses sensors in its “head” to perceive the world around the robot as it navigates and avoids obstacles. | Credit: The Robot Report

The internet influencer darling of CES has to be the Unitree H1 humanoid, and the company was giving nearly continuous live demos of the H1 at its booth.

Kudos to the Unitree marketing team for its now-infamous “kick the robot” videos that have been shared on social media over the past six months. In the videos, H1 appears to be a solid humanoid platform with respectable balance and agility.

However, as a longtime robotics industry insider and experienced robotics applications engineer, I thought the Unitree H1 product demos at CES 2024 were cringe-worthy, as the Unitree demo team walked the H1 robot into crowds of “internet tech influencers” with their cameras ablaze.

The 150 lb. (68 kg) robot danced with the public inches away. A single tripping incident would have sent the robot tumbling into an innocent bystander and made instant headlines. It would have been a public relations disaster and a setback for the industry.

However, there’s no denying that the H1 was a crowd favorite at CES 2024, and the company and its robot received a lot of news media attention. 

Hyundai displays future mobility tech at CES 2024

Hyundai got my vote as one of the leading mobility and robotics companies at CES 2024. It is the parent company of Boston Dynamics, but at CES 2024, the Spot and Stretch robots played minor roles in Hyundai’s story.

The company had multiple large-scale booths showing autonomy concepts for the future, including autonomous mobility for both humans and freight, as well as a look at the future of autonomous construction vehicles. Unfortunately, I didn’t get to witness either of the live mobility demonstrations, but the Hyundai Construction Xite concept tractor was an impressive incarnation of autonomous construction designs.


Hyundai presented a concept for the future of autonomous construction equipment with the display of the Construction Xite tractor. (Editor’s note: For scale, the bucket arm is over 10 ft. tall.) | Credit: The Robot Report

AV24 rolls into the showroom

The Indy Autonomous Challenge (IAC) had an impressive booth in the automotive hall of CES 2024, surrounded by well-known brand names. On display was a fully functional version of the newest AV24 autonomous racecar, showing off the integration of an entirely new autonomy stack in the vehicle.

The IAC has partnered with many of the leading automotive technology companies to embed the latest lidar, radar, vision, and GPS sensors within the vehicle. 

dSpace announced an extended partnership with the IAC that will deliver digital twins of each university team’s vehicle, along with digital twins of each of the race tracks. In turn, these will enable the teams to train their AI drivers completely in simulation and then port the AI models and drive code directly to the physical race cars.

In addition, some sanctioned sim races are possible later this year, said the IAC organizers.

Embodied AI displays updated Moxie

The latest generation of Moxie by Embodied AI was on display in Amazon’s booth at CES. Embodied recently announced new tutoring functionality with the latest software release for Moxie, and it demonstrated the software at the expo.

Amazon had a separate expo suite that featured all of the physical Amazon consumer and smart home products (Amazon Astro was noticeably absent from the display). Moxie entertained the gathered crowds as it demonstrated its interactivity.

Fingervision measures gripper force

My “Unknown Discovery” award of CES 2024 goes to a young Japanese startup called Fingervision. It was a serendipitous find: the company builds tiny cameras into the gripper fingers of an industrial robot.

The cameras provide feedback on the grip force and “slippage” of an item held in the gripper. This is accomplished by imaging the area where the fingers touch an object through a transparent fingertip surface, hence the origin of the company name.

The company has deployed its first grippers into an application where robots are picking up fried chicken nuggets and packaging them.
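Fingervision’s pipeline isn’t described in more detail here, but slip detection with a camera behind a transparent fingertip is commonly framed as tracking how the contact image moves between frames. The following sketch takes that assumed approach with OpenCV dense optical flow, synthetic stand-in frames, and a placeholder threshold; it is not the company’s code.

# Sketch of camera-based slip detection of the kind a vision-in-finger gripper
# could use (an assumed approach, not Fingervision's published code): estimate
# how the contact-area image moves between frames with dense optical flow, and
# flag slip when that motion exceeds a placeholder threshold.
import cv2
import numpy as np

def slip_score(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())  # mean pixel motion per frame

# Synthetic stand-in frames: a smooth random texture shifted sideways by 3 pixels.
rng = np.random.default_rng(0)
prev = cv2.GaussianBlur((rng.random((120, 160)) * 255).astype(np.uint8), (9, 9), 0)
curr = np.roll(prev, 3, axis=1)

SLIP_THRESHOLD_PX = 1.0  # placeholder; a real system would tune this per gripper
score = slip_score(prev, curr)
print(f"mean contact motion: {score:.2f} px/frame",
      "-> slip" if score > SLIP_THRESHOLD_PX else "-> stable grasp")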


Honorable mentions from CES 2024

Gatik keeps on trucking — autonomously

Gatik showed the third generation of its on-road autonomous truck. The company has made its mark on autonomous logistics by deploying driving algorithms that plan paths so that the vehicle only makes right-hand turns, avoiding more complex and dangerous left-hand turns.
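Gatik’s planner is proprietary, but the right-turns-only idea can be illustrated with a tiny graph search in which every maneuver is costed and left turns carry a prohibitive penalty. The intersections, headings, and cost values below are invented for illustration.

# Toy route planner that avoids left turns (illustrative only; Gatik's actual
# planner is not public). States are (intersection, heading); left turns carry
# a prohibitive penalty, so the search prefers straights and right turns.
import heapq

# Hypothetical four-intersection loop; edges map a node to {heading: next_node}.
GRID = {"A": {"E": "B"}, "B": {"S": "C"}, "C": {"W": "D"}, "D": {"N": "A"}}
LEFT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}

def turn_cost(prev_heading, new_heading):
    if prev_heading == new_heading:
        return 1.0          # go straight
    if LEFT_OF[prev_heading] == new_heading:
        return 1000.0       # heavily penalize left-hand turns
    return 1.5              # right-hand turn, slightly longer

def plan_cost(start, goal, start_heading):
    queue, seen = [(0.0, start, start_heading)], set()
    while queue:
        cost, node, heading = heapq.heappop(queue)
        if node == goal:
            return cost
        if (node, heading) in seen:
            continue
        seen.add((node, heading))
        for new_heading, nxt in GRID[node].items():
            heapq.heappush(queue, (cost + turn_cost(heading, new_heading), nxt, new_heading))
    return float("inf")

print(plan_cost("A", "C", start_heading="E"))  # reaches C using straights and right turns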

Gatik first demonstrated fully driverless, freight-only commercial deliveries on the middle mile with Walmart in 2021. Shortly after, it executed the first fully driverless deployment for Canada’s largest retailer, Loblaw.

The company also announced a partnership with Goodyear to develop “Smart Tires” that give the autonomous driver real-time data about tire condition, helping the vehicle maintain traction and control.

Bobcat Rogue X2 gets ready to move the earth

At CES 2024, Bobcat showed off an autonomous concept prototype, the Bobcat Rogue X2. The all-electric, autonomous robot is designed for material movement and ground-altering operations at construction, mining, and agriculture sites.

The design prototype of the Rogue X2 at CES had wheels rather than tracks, but manually driven Bobcats can be equipped with tracks, so a production version of the Rogue could have similar configurations.

Ottonomy IO partners with Harbor Lockers

Through a new partnership with Harbor Lockers, the latest generation of Ottobot can now be configured with a payload of Harbor Lockers. This includes the Harbor Locker physical locker infrastructure, as well as the Harbor Locker application interface.

This is the first time that Ottonomy has partnered with a third-party vendor to extend its autonomous last-mile delivery solution.

Lawn-mowing robots arrive in North America

CES is one of the world’s biggest consumer electronics shows. While The Robot Report doesn’t typically cover consumer robotics, it is notable that lawn-mowing robots were ubiquitous at CES this year, with a dozen vendors showing their autonomous systems.

The European market for consumer robotic lawnmowers is already mature, but the North American market is in the early stages of adoption. Without testing all of the different lawn-mowing robots, it’s difficult to determine the market leaders, but the two most promising solutions I saw at the show were the new Yarbo Lawn Mower and the latest generation of Navimow from Segway Robotics.


Indy Autonomous Challenge announces new racecar and additional races
https://www.therobotreport.com/indy-autonomous-challenge-announces-new-racecar-additional-races/
Wed, 10 Jan 2024 | The Indy Autonomous Challenge announced a completely new sensor and compute architecture for the AV24 racecar.


The IAC has revised the sensors and compute in the AV24 racecar. Source: Indy Autonomous Challenge

The Indy Autonomous Challenge, or IAC, made two major announcements at CES 2024 this week. The first was that the IAC plans to present four autonomous racecar events in 2024, and the second was an updated technology stack.

The first event of the year is the IAC@CES, which takes place tomorrow at the Las Vegas Motor Speedway. The Robot Report will be in attendance to cover this event later this week.

More Indy Autonomous Challenge races to come

The IAC will also participate for the second year in a row in the Milano Monza Open-Air Motor Show, from June 16 to 18 in Milan, Italy. Last year, the event marked the debut of autonomous road racing for the IAC race cars.

Unlike other oval track-based races, the Milan Monza event challenges the university teams to develop their AI drivers for a road course. It is arguably one of the most famous road-racing venues in the world and exposes the IAC to a global racing audience, said event organizers.

The third event in 2024 will be from July 11 to 14 at the Goodwood Festival of Speed in the U.K. Described as “motorsport’s ultimate summer garden party,” the festival features the treacherous Goodwood hill climb.

This year, the IAC race cars will attempt the hill climb while setting new autonomous speed records. At last year’s event, the course was captured digitally, and the university teams are using that data to train their AI drivers.

Finally, IAC will return this year to the famous Indy Motor Speedway on Sept. 6, where it all started back in October 2021. The event expects to set new speed records and enable more university teams to qualify for head-to-head racing at the event.

Tech stack gets updates for the AV24

The other big news from IAC this week is the launch of a new generation of autonomous racecar, called the AV24. The original race platform, the AV21, has aged since its launch at the first race.

Winning university teams PoliMOVE and TUM have set multiple speed records over the past three years, pushing the AV21 to its sensor and computing limits. The platform has also suffered from maintenance and troubleshooting issues, especially the fragility of its wiring harnesses. Harness problems have plagued many of the teams as they prepared for prior competitions.

In response, the IAC team went through the sensor, networking, and compute stack and re-engineered an entirely new platform that should enable the university teams to continue to push the limits of speed and control while testing and developing cutting-edge AI driver algorithms. AV24 does not include any changes in the race car chassis, the engine, or the physical dimensions of the vehicle.

Here’s a look at what’s new in the AV24 technology stack.


The new IAC AV24 race car includes all new sensors and compute architecture. | Credit: IAC

Most notably, the AV24 now includes split braking controls that manage braking on all four wheels of the vehicle separately, essentially giving the AI drivers more control of the vehicle than is humanly possible.
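The article doesn’t describe the control law behind the split braking system, but the benefit of independent brake channels can be sketched as a simple allocation problem: bias force toward the more heavily loaded front axle under deceleration, and left or right to damp unwanted yaw. The model and numbers below are made up and are not the IAC’s controller.

# Illustrative per-wheel brake allocation (made-up model, not the IAC's control
# law). Independent brake channels let a controller bias force toward the more
# heavily loaded front axle under deceleration and left or right to damp yaw,
# something a single brake circuit cannot do.
def allocate_brakes(total_force_n, decel_g, yaw_error_rad_s):
    front_share = min(0.9, 0.45 + 0.05 * decel_g)                 # crude load-transfer term
    lateral_bias = max(-0.2, min(0.2, 0.1 * yaw_error_rad_s))     # crude yaw-damping term
    per_axle = {"front": total_force_n * front_share,
                "rear": total_force_n * (1.0 - front_share)}
    return {f"{axle}_{side}": force * (0.5 + sign * lateral_bias)
            for axle, force in per_axle.items()
            for side, sign in (("left", 1), ("right", -1))}

print(allocate_brakes(total_force_n=12000, decel_g=1.8, yaw_error_rad_s=0.3))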

“The IAC event has succeeded beyond our wildest dreams,” said Paul Mitchell, co-founder and CEO of the Indy Autonomous Challenge. “We originally thought it would be a one-and-done challenge, but the event has thrived, so it was time to go back to the drawing board and deploy a new technology stack leveraging the best technology from our event partners.”

Butterfly Network to bring Ultrasound-on-Chip tech to surgical robotics
https://www.therobotreport.com/butterfly-network-bring-ultrasound-chip-tech-surgical-robotics/
Fri, 22 Dec 2023 | Butterfly Network and Mendaera are jointly developing and commercializing a robot with Butterfly’s Ultrasound-on-Chip technology.


The original Ultrasound-on-Chip technology. | Source: Butterfly Network

Butterfly Network Inc. announced this week that it has partnered with Mendaera to commercialize a new surgical robotic system. It will combine Butterfly’s portable ultrasound sensors and software with Mendaera’s robot technology, said the companies.

“Mendaera’s robotic system is perfectly suited to leverage Butterfly’s proprietary Ultrasound-on-Chip by benefiting from the wide array of ultrasonic sensing applications that only our chip can offer,” said Darius Shahida, chief strategy officer of Butterfly Network, in a release. “We are excited to welcome the Mendaera team as a ‘Powered by Butterfly’ partner and believe our joint solution will expand Butterfly’s reach and clinical impact into the interventional space.”

In 2011, Dr. Jonathan Rothberg founded Butterfly Network, which is listed on the New York Stock Exchange through a business combination with Longview Acquisition Corp. The company said its mission is “to democratize medical imaging and contribute to the aspiration of global health equity, making high-quality ultrasound affordable, easy-to-use, globally accessible, and intelligently connected, including for the 4.7 billion people around the world lacking access to ultrasound.”

Butterfly Network, Mendaera aim to democratize robot surgery

Butterfly Network claimed that its Butterfly iQ+ system is “the world’s first handheld, single-probe, whole-body ultrasound system using semiconductor technology.” The Burlington, Mass.–based company develops handheld ultrasound technology that has been deployed in Ukraine, and it recently announced a foray into brain-computer interfaces.

San Mateo, Calif.-based Mendaera has developed a platform that it said combines robotics, real-time imaging, artificial intelligence, and connectivity to enable intervention at scale. The company recently completed the research and design process for its system and raised $24 million in Series A funding in August.

Mendaera’s collaborative robot is compatible with Butterfly’s ultrasound device and is connected by the Butterfly Garden software development kit (SDK). The partners said they aim to create a system capable of improving precision and consistency for image-guided, needle-based interventions.

Butterfly said the new category of robotics could increase access to high-quality interventional treatment. The companies said they expect to submit the new system to the U.S. Food and Drug Administration by 2025. Upon commercialization, Mendaera and Butterfly have agreed to a revenue share for every unit sold.

Josh DeFonzo, Mendaera co-founder and CEO, called the decision to work with Butterfly Network “a clear choice.” He said the programmable platform could make ultrasonic imaging and intervention ubiquitous.

Editor’s note: This article was syndicated from MassDevice, a sibling site to The Robot Report.

Persee N1 3D camera module from Orbbec uses NVIDIA Jetson
https://www.therobotreport.com/persee-n1-3d-camera-module-from-orbbec-uses-nvidia-jetson/
Thu, 21 Dec 2023 | Orbbec's Persee N1 combines a stereo-vision 3D camera and a computer based on NVIDIA Jetson for accurate and reliable data.


Orbbec’s Persee N1, which currently retails for $499.99. | Source: Orbbec

Orbbec Inc. has released the Persee N1, which it claimed is “an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform.” The company said its latest product delivers accurate and reliable data for indoor and semi-outdoor operations.

The Persee N1 is equipped with industry-standard interfaces for useful accessories and data connections, said Orbbec. The Troy, Mich.-based company added that the camera module also gives developers access to the benefits of the Ubuntu OS and Open Computer Vision (OpenCV) libraries. 

According to industry reports, an estimated 89% of all embedded vision projects use OpenCV. Orbbec said that this integration marks the beginning of a deeper collaboration between it and the Open Source Vision Foundation, the nonprofit that operates OpenCV.

“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mallick, CEO of OpenCV, in a release. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”

Persee N1 ready for edge AI and robotics

By delivering accurate data, Persee N1 is suitable for robotics, retail, healthcare, dimensioning, and interactive gaming applications, said Orbbec. 

The Persee N1 is designed to be easy to set up using the Orbbec software development kit (SDK) and Ubuntu-based software environment, the company explained. It includes a Gemini 2 camera, based on active stereo IR technology, as well as Orbbec’s custom ASIC for high-quality, in-camera depth processing.

It also includes the NVIDIA Jetson platform for edge AI and robotics. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner, deepening its relationship with NVIDIA.

“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” stated Amit Banerjee, head of platform and partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale, cloud-based commercial deployments.”

The Persee N1 has HDMI and multiple USB ports for easy connections to a monitor and keyboard, said Orbbec. The USB ports also allow for data transfer, and the camera module has a Power over Ethernet (PoE) port for combined data and power connections. It also features MicroSD and M.2 slots for expandable storage.


Orbbec strengthens partnerships

With the release of Persee N1, Orbbec said it is strengthening its relationships with NVIDIA and OpenCV, both huge players in the robotics space. 

In August, Orbbec released a product line in collaboration with Microsoft. The companies based this suite of products on Microsoft’s indirect time-of-flight (iToF) depth-sensing technology, which Microsoft brought to market with the HoloLens 2.

The cameras combine Microsoft’s iToF with Orbbec’s high-precision depth camera design and in-house manufacturing capabilities. 

Earlier this year, Orbbec also released a 3D camera SDK Programming Guide that uses ChatGPT. The guide allows developers to create applications and sample code by talking to ChatGPT.

Sanctuary AI secures IP assets to advance touch, grasping in general-purpose robots
https://www.therobotreport.com/sanctuary-ai-secures-ip-assets-advancing-touch-grasping-general-purpose-robots/
Wed, 20 Dec 2023 | In addition to Sanctuary AI's internal developments, IP assets from Giant.AI and Tangible Research have accelerated progress on its roadmap.


Sanctuary asserts that robotic manipulation including tactile sensing is critical to the success of humanoids. | Credit: Sanctuary AI

Sanctuary AI, which is developing general-purpose humanoid robots, has announced the recent acquisition of intellectual property, or IP, adding to its asset portfolio of touch and grasping technologies.

The Vancouver, Canada-based company said it expects this IP to play a pivotal role in its ambitious roadmap for the construction of general-purpose robots. According to Sanctuary AI, the integration of vision systems and touch sensors, which offer tactile feedback, plays a pivotal role in the realization of embodied artificial general intelligence (AGI).

It has already secured patents for numerous technologies developed both internally and through strategic acquisitions from external sources. The company acquired the latest assets from Giant.AI Inc. and Tangible Research.

Sanctuary AI is one of several robotics companies developing humanoid robots. The company unveiled the Phoenix humanoid robot in May 2023, when it publicly demonstrated its sixth-generation unit. This was also the first generation of humanoids from Sanctuary to feature bipedal locomotion.


Sanctuary AI takes touch-oriented approach

Sanctuary AI said it has taken a different approach from its competitors, starting with an intense investment in grasping capabilities combined with hand-eye coordination of human-analog hands and arms.

Geordie Rose, co-founder and CEO of Sanctuary, was one of three executives from humanoid robotics companies to speak at RoboBusiness 2023’s keynote on the “State of Humanoid Robotics Development.” He described the importance of humanoids being able to do real work by manipulating any object that they might encounter.

This philosophy is a cornerstone to Sanctuary’s development roadmap, and Rose said it is essential to the success of humanoid robots in the future. It is also key to the company’s acquisition plan.


Grasping is a cornerstone in Sanctuary’s product development roadmap, as shown here in this screenshot from a recent Sanctuary USPTO patent. | Credit: USPTO

Surveying the landscape on the way to AGI

Rose told The Robot Report that he believes that “the humanoid competitive landscape is bisecting into two general theses:”

  1. Single-purpose bipedal robots to move totes and boxes in retail, warehousing, and logistics
  2. General-purpose robots developing generalized software control systems for robots with hands that can act across multiple use cases and industries

Bipedal robots have been around for several decades, yet no significant supplier has matured or commercialized the technology despite having the necessary resources, Rose said, citing Honda, Boston Dynamics, and Toyota as examples.

Rose added that the “technology gap” for general-purpose humanoids is related to dexterous manipulation and grasping, which his company is developing and for which it has obtained patents. 

“Replicating human-like touch is potentially more important than vision when it comes to grasping and manipulation in unstructured environments,” said Jeremy Fishel, principal researcher at Sanctuary AI and founder of Tangible Research. “It has been an effort many years in the making to meet the complex blend of performance, features, and durability to achieve general-purpose dexterity.”

Sanctuary claimed that the best way to build the world’s first AGI is to build software for controlling sophisticated robots with humanlike senses (vision, hearing, proprioception, and touch), actions (movement, speech), and goals (completing work tasks).


RoboBusiness 2023 featured a keynote panel with speakers from three leading humanoid manufacturers. Seated left to right: moderator Mike Oitzman | Jonathan Hurst, chief robot officer of Agility Robotics | Geordie Rose, CEO of Sanctuary | Nick Paine, CTO of Apptronik.

IP portfolio around grasping grows

Sanctuary AI’s new IP assets expand on a growing patent portfolio that already protects several key grasping technologies for both non-humanoid and humanoid robots, including visual servoing, real-time simulation of the grasping process, and mapping between visual and haptic data. All of these are key to enabling any robot that must interact with and manipulate objects in unstructured or dynamic environments.
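Visual servoing, one of the techniques named in that portfolio, has a standard textbook formulation worth sketching. In image-based visual servoing, the camera or end-effector velocity is computed from the error between observed and desired image features, v = -gain * pinv(L) * e. The NumPy sketch below is that generic control law, not Sanctuary’s patented implementation.

# Textbook image-based visual servoing (IBVS) step, shown generically because
# the patents name visual servoing but do not disclose Sanctuary's code. The
# velocity command is v = -gain * pinv(L) @ e, where e is the feature error in
# the image and L is the stacked interaction matrix of the tracked points.
import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 interaction matrix for a normalized image point (x, y) at depth Z.
    return np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
                     [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x]])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Stack the point features and return a 6-DoF camera-frame velocity command.
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Example: drive two observed points toward their desired image positions.
v = ibvs_velocity(features=[(0.10, 0.05), (-0.08, 0.12)],
                  desired=[(0.00, 0.00), (0.00, 0.10)],
                  depths=[0.5, 0.5])
print(v)  # [vx, vy, vz, wx, wy, wz]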

“In dynamic and unstructured environments, coordination between touch and vision is an absolute necessity,” said Rose. “We spent over a year performing industry-wide analysis before acquiring Jeremy’s team. Beyond the functional sensitivity, the technology is designed to be simulateable, enabling us to fast-track our AI model development.” 

According to Rose, “Sanctuary AI is focused on creating the world’s first human-like intelligence in general-purpose robots that will help address the massive labor shortages that organizations are facing around the world. This is a civilization-scale initiative that requires long-term planning and prioritization.”

“Our strategy is unique in that the prioritized focus is on the highest value part of the value chain, which is our clear focus on hand dexterity, fine manipulation, and touch,” he noted. “We believe hands, or more specifically grasping and manipulation, are the key pathway to applying real-world AI to the labor market, given that more than 98% of all work requires the dexterity of the human hand.”

“The acquisition of Tangible Research, the purchase of Giant.AI’s entire patent portfolio, along with our own independent activity, further deepens our IP and expertise in this critical area,” Rose explained.


The Phoenix humanoid robot has taken a deliberate and careful path to market, based on an IP portfolio to support Sanctuary’s product roadmap. | Credit: USPTO

Sanctuary AI patents show multi-purpose robot progress

You can learn a lot about a company’s technical trajectory by looking closely at its IP portfolio. Sanctuary AI’s patents from the past few years include “software-compensated robotics” (USPTO US 11312012 B2), which uses recurrent neural networks and image processing to control the operation and/or movement of an end effector.

A patent for “systems, devices, and methods for grasping by multi-purpose robots” (USPTO 11717963 B2) describes the training and operation of semi-autonomous robots to complete different work objectives.

Finally, the most cryptic of this group of patents is “haptic photogrammetry in robots and methods for operating the same” (USPTO US 11717974 B1), which describes methods for operating robots based on environment models that include haptic data.

The market for humanoids has made notable progress in 2023, with plenty of product announcements. Agility Robotics offers one of the most mature systems on the market and has announced publicly that it is testing its robots in both Amazon warehouses and at GXO Logistics.

You can see Phoenix do things like placing items in a plastic bag, stacking blocks, and more as part of Sanctuary AI’s “Robots Doing Stuff” series on its YouTube channel.

NVIDIA Jetson supports Zipline drone deliveries, as Omniverse enables Amazon digital twins
https://www.therobotreport.com/nvidia-jetson-supports-zipline-drone-deliveries-as-omniverse-enables-amazon-digital-twins/
Tue, 19 Dec 2023 | NVIDIA technologies are helping supply chains add new levels of automation, as seen in its work with Adobe, Amazon, and Zipline.


NVIDIA Jetson Xavier NX processes sensor inputs for the P1 delivery drone. Source: Zipline

Robotics, simulation, and artificial intelligence are providing new capabilities for supply chain automation. For example, Zipline International Inc. drone deliveries and Amazon Robotics digital twins for package handling demonstrate how NVIDIA Corp. technologies can enable industrial applications.

“You can pick the right place for your algorithms to run to make sure you’re getting the most out of the hardware and the power that you are putting into the system,” said A.J. Frantz, navigation lead at Zipline, in a case study.

NVIDIA claimed that its Jetson Orin modules can perform up to 275 trillion operations per second (TOPS) to provide mission-critical computing for autonomous systems in everything from delivery services and agriculture to mining and undersea exploration. The Santa Clara, Calif.-based company added that Jetson’s energy efficiency can help businesses electrify their vehicles and reduce carbon emissions to meet sustainability goals.

Zipline drones rely on Jetson Xavier NX to avoid obstacles

Founded in 2011, Zipline said it has completed more than 800,000 deliveries of food, medication, and more in seven countries. The San Francisco-based company said its drones have flown over 55 million miles using the NVIDIA Jetson edge AI platform for autonomous navigation and landings.

Zipline, which raised $330 million in April at a valuation of $4.2 billion, is a member of the NVIDIA Inception program, in which startups can get technology support. The company’s Platform One, or P1, drone uses Jetson Xavier NX system-on-module (SOM) to process sensor inputs.

“The NVIDIA Jetson module in the wing is part of what delivers our acoustic detection and avoidance system, so it allows us to listen for other aircraft in the airspace around us and plot trajectories that avoid any conflict,” Frantz explained.

Zipline’s fixed-wing drones can fly out more than 55 miles (88.5 km) at 70 mph (112.6 kph) from several distribution centers and then return. Capable of hauling up to 4 lb. (1.8 kg) of cargo, they fly autonomously and release packages at their destinations by parachute.


P2 hybrid drone includes Jetson Orin NX for sensor fusion, safety

Zipline’s Platform Two, or P2, hybrid drone can fly fast on fixed-wing flights, as well as hover. It can carry 8 lb. (3.6 kg) of cargo for 10 miles (16 km), as well as a droid that can be lowered on a tether to precisely place deliveries. It’s intended for use in dense, urban environments.

The P2 uses two Jetson Orin NX modules. One powers the sensor fusion system that helps the aircraft understand its environment. The other is in the droid for redundancy and safety.

Zipline claimed that its drones, nicknamed “Zips,” can deliver items 7x faster than ground vehicles. It boasted that it completes one delivery every 70 seconds globally.

“Our aircraft fly at 70 miles per hour, as the crow flies, so no traffic, no waiting at lights — we’re talking minutes here in terms of delivery times,” said Joseph Mardall, head of engineering at Zipline. “Single-digit minutes are common for deliveries, so it’s faster than any alternative.”
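Those delivery-time claims line up with simple arithmetic on the figures quoted above: at 70 mph, the P2’s 10-mile service radius works out to under nine minutes of straight-line flight, and even the P1’s full 55-mile range is under an hour one way.

# Quick check of the delivery-time claims using the figures quoted above
# (straight-line cruise only; ignores takeoff, winds, and the tethered drop).
def flight_minutes(distance_miles, speed_mph=70.0):
    return distance_miles / speed_mph * 60.0

print(f"P2, 10-mile radius: {flight_minutes(10):.1f} min")   # about 8.6 minutes
print(f"P1, 55-mile range:  {flight_minutes(55):.1f} min")   # about 47 minutes one way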

In addition to transporting pizza, vitamins, and medications, Zipline works with Walmart, restaurant chain Sweetgreen, Michigan Medicine, MultiCare Health Systems, Intermountain Health, and the government of Rwanda, among others. It delivers to more than 4,000 hospitals and health centers.

Amazon uses Omniverse, Adobe Substance 3D for realistic packages

For warehouse robots to be able to handle a wide range of packages, they need to be trained on massive but realistic data sets, according to Amazon Robotics.

“The increasing importance of AI and synthetic data to run simulation models comes with new challenges,” noted Adobe Inc. in a blog post. “One of these challenges is the creation of massive amounts of 3D assets to train AI perception programs in large-scale, real-time simulations.”

Amazon Robotics turned to Adobe Substance 3D, Universal Scene Description (USD), and NVIDIA Omniverse to develop random but realistic 3D environments and thousands of digital twins of packages for training AI models.


NVIDIA Omniverse integrates with Adobe Substance 3D to generate realistic package models for training robots. Source: Adobe

NVIDIA Omniverse allows simulations to be modified, shared

“The Virtual Systems Team collaborates on a wide range of projects, encompassing both extensive solution-level simulations and individual workstation emulators as part of larger solutions,” explained Hunter Liu, technical artist at Amazon Robotics.

“To describe the 3D worlds required for these simulations, the team utilizes USD,” he said. “One of the team’s primary focuses lies in generating synthetic data for training machine learning models used in intelligent robotic perception programs.”

The team uses Houdini for procedural mesh generation and Substance 3D Designer for texture generation and loading virtual boxes into Omniverse, added Haining Cao, a texturing artist at Amazon Robotics.

The team has developed multiple workflows to represent the vast variety of packages that Amazon handles. It has gone from generating two assets per hour to 300, said Liu.

“To introduce further variations, we utilize PDG (Procedural Dependency Graph) within Houdini,” he noted. “PDG enables us to efficiently batch process multiple variations, transforming the Illustrator files into distinct meshes and textures.”

After the synthetic data is generated and published to Omniverse, the Adobe-NVIDIA integration enables Amazon’s team to change parameters to, for example, simulate worn cardboard. The team can also use Python to trigger randomized values and collaborate on the data within Omniverse, said Liu.
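Amazon’s actual scripts aren’t shown, but the “trigger randomized values” step is standard domain randomization: sample package dimensions, wear, labels, and other parameters for each asset so the perception models never see the same box twice. The parameter names in this sketch are hypothetical, and it does not call the Omniverse, Houdini, or Substance 3D APIs.

# Generic domain-randomization sketch for synthetic package assets. Parameter
# names are hypothetical; this does not call the Omniverse, Houdini, or
# Substance 3D APIs -- it just shows the "trigger randomized values" idea.
import random

def random_package_params(seed=None):
    rng = random.Random(seed)
    return {
        "size_cm": [round(rng.uniform(10, 60), 1) for _ in range(3)],   # L, W, H
        "cardboard_wear": round(rng.uniform(0.0, 1.0), 2),              # 0 = pristine, 1 = heavily worn
        "label_offset_px": (rng.randint(-20, 20), rng.randint(-20, 20)),
        "tape_variant": rng.choice(["clear", "brown", "branded"]),
    }

# Generate a small batch of package variations for a training scene.
for i in range(5):
    print(random_package_params(seed=i))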

In addition, Substance 3D includes features for creating “intricate and detailed textures while maintaining flexibility, efficiency, and compatibility with other software tools,” he said. Simulation-specific extensions bundled with NVIDIA Isaac Sim allow for further generation of synthetic data and live simulations using robotic manipulators, lidar, and other sensors, Liu added.
