Human Robot Interaction / Haptics Archives - The Robot Report
https://www.therobotreport.com/category/design-development/haptics/
Robotics news, research and analysis

ASTM International names new president, continues robotics standards work
https://www.therobotreport.com/astm-international-names-new-president-continues-robotics-standards-work/
Mon, 25 Mar 2024

ASTM International, which develops standards for robots and other technologies, named Andrew G. Kireta Jr. as its president.


ASTM offers visibility into global standards development. Source: ASTM International

ASTM International today announced the appointment of Andrew G. Kireta Jr. as its new president, effective May 1, 2024. It said his background in standards development and familiarity with the organization position him to lead future growth and innovation.

Kireta will succeed Katharine Morgan, who served in the role since 2017 and will retire after a “distinguished 40-year career with ASTM.” 

“We are thrilled to welcome Andy as president of ASTM International,” stated Bill Griese, 2024 chair of ASTM’s board of directors. “Andy has spent years supporting ASTM International in a variety of volunteer roles and is exceptionally well-suited to lead the organization forward.”

“He brings a strong commitment to ASTM’s mission, values, and membership,” Griese added. “Kathie’s dedication and engagement have made it possible for us to find the right leader for ASTM’s future, and we are delighted she will help to ensure a smooth transition as Andy assumes the role in May.”


Kireta brings executive experience

Kireta is president and CEO of the Copper Development Association. He has been with that not-for-profit trade association since 1992, serving the past two decades in an executive management capacity.


Andy Kireta, president, ASTM International. Source: LinkedIn

In addition, ASTM International noted that Kireta has been an ASTM member since 1998. He joined the organization’s board of directors in 2014, serving as chair of the audit and finance committee in 2017, vice chair in 2018 and 2019, and chair of the board in 2020. Kireta also previously served as vice chair and chair of the board of SEI International, an ASTM affiliate.

“I am honored and excited to serve as the new president of ASTM International,” said Kireta. “I have great respect for ASTM’s mission, staff, members, and partners, and I am humbled to lead an organization that has made such a meaningful impact on industry and society over its 125-year history. I am eager to work with the ASTM community to build upon that success as we advance our mission of helping our world work better.”

Learn about ASTM International robot standards

ASTM International said it is committed to serving global societal needs and improving public health and safety, consumer confidence, and overall quality of life. The West Conshohocken, Pa.-based organization has 35,000 members worldwide, representing over 90 industry sectors, who work to develop and refine more than 12,900 technical standards.

As robots expand from factories into other environments, safety and reliability have become increasingly important. ASTM has been developing standards for robotic grasping and manipulation, legged robots, assembly robots, vision guidance for bin picking, and additive manufacturing in construction.

The F45 Committee on Robotics, Automation, and Autonomous Systems is working to develop standard terminology, practices, classifications, guides, test methods, and specifications applicable to these systems.

Adam Norton, associate director of the NERVE Center at the University of Massachusetts Lowell, will present a session on “ASTM Standards for Robotics and Autonomous Systems” at 1:30 p.m. ET on Thursday, May 2, at the Robotics Summit & Expo in Boston.

He will provide an overview of the committee’s activities, as well as open a discussion to gather industry feedback on recommendations for future standards to ensure alignment with both developer and user needs. Registration is now open for the event.

“We integrate consensus standards – developed with our international membership of volunteer technical experts – and innovative services to improve lives … helping our world work better,” ASTM said.

Researchers develop interface for quadriplegics to control robots
https://www.therobotreport.com/researchers-develop-interface-for-quadriplegics-to-control-robots/
Sat, 02 Mar 2024

Head-Worn Assistive Device impresses expert evaluator Henry Evans during a trial to control Hello Robot’s Stretch mobile manipulator.


Carnegie Mellon University researchers lived with Henry and Jane Evans for a week to test their Head-Worn Assistive Teleoperation (HAT) device with Henry, who lost his ability to speak and move his limbs 20 years ago. | Credit: CMU

No one could blame Carnegie Mellon University students Akhil Padmanabha and Janavi Gupta if they were a bit anxious this past August as they traveled to the Bay Area home of Henry and Jane Evans.

The students were about to live with strangers for the next seven days. On top of that, Henry, a person with quadriplegia, would spend the week putting their Head-Worn Assistive Teleoperation (HAT) — an experimental interface to control a mobile robot — to the test.

HAT requires fewer fine motor skills than other interfaces, helping people with some form of paralysis or similar motor impairments control a mobile robot and manipulator. It allows users to command the robot via head motion and speech recognition, and versions of the device have featured a hands-free microphone and a head-worn sensor.

Padmanabha and Gupta quickly realized that any trepidation they may have felt was misplaced. Henry, who lost the ability to move his limbs and talk after a brain-stem stroke two decades ago, enjoyed using HAT to control the robot by moving his head and in some situations preferred HAT to the computer screen he normally uses.

“We were excited to see it work well in the real world,” said Padmanabha, a Ph.D. student in robotics who leads the HAT research team. “Henry became increasingly proficient in using HAT over the week and gave us lots of valuable feedback.”

During the home trial, the researchers had Henry perform predefined tasks, such as fetching a drink, feeding himself and scratching an itch. Henry used HAT to direct a robot — Stretch, a commercially available mobile manipulator outfitted with a pincer-like gripper on its single arm.

Daily, Henry performed the so-called blanket+tissue+trash task, which involved moving a blanket off his body, grabbing a tissue and wiping his face with it, and then throwing the tissue away. As the week progressed, Henry could do it faster and faster and with fewer errors.

Henry said he preferred using HAT with a robot for certain tasks rather than depending on a caregiver.

“Definitely scratching itches,” he said. “I would be happy to have it stand next to me all day, ready to do that or hold a towel to my mouth. Also, feeding me soft foods, operating the blinds and doing odd jobs around the room.”

One innovation in particular, software called Driver Assistance that helps align the robot’s gripper with an object the user wants to pick up, was “awesome,” Henry said. Driver Assistance leaves the user in control while it makes the fine adjustments and corrections that can make controlling a robot both tedious and demanding.

“That’s better than anything I have tried for grasping,” Henry said, adding that he would like to see Driver Assistance used for every interface that controls Stretch robots.
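
The article does not publish the Driver Assistance code, but the shared-control idea it describes can be sketched simply: the user’s coarse velocity command is blended with a small, capped correction that nudges the gripper toward the detected object. The function name and all gains below are hypothetical, intended only to illustrate the concept in Python.

```python
import numpy as np

def blend_command(user_vel, gripper_pos, object_pos, k_assist=0.5, max_assist=0.05):
    """Blend a user's velocity command with a small autonomous correction.

    user_vel    -- (3,) velocity the user is commanding via the interface, m/s
    gripper_pos -- (3,) current gripper position in the robot frame, m
    object_pos  -- (3,) estimated position of the target object, m
    k_assist    -- proportional gain on the alignment error (illustrative value)
    max_assist  -- cap on the assistive correction so the user stays in charge, m/s
    """
    error = np.asarray(object_pos) - np.asarray(gripper_pos)
    assist = k_assist * error
    # Limit the assistance so it only fine-tunes, never overrides, the user's intent.
    norm = np.linalg.norm(assist)
    if norm > max_assist:
        assist = assist * (max_assist / norm)
    return np.asarray(user_vel) + assist

# Example: the user drives the gripper forward while assistance trims a small offset.
cmd = blend_command(user_vel=[0.03, 0.0, 0.0],
                    gripper_pos=[0.40, 0.02, 0.75],
                    object_pos=[0.45, 0.00, 0.75])
print(cmd)
```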

Praise from Henry, as well as his suggestions for improving HAT, is no small thing. He has collaborated in multiple research projects, including the development of Stretch, and his expertise is widely admired within the assistive robotics community. He’s even been featured by The Washington Post and last year appeared on the cover of IEEE Spectrum.

Via email, Henry said his incentive for participating in research is simple. “Without technology I would spend each day staring at the ceiling waiting to die,” he said. “To be able to manipulate my environment again according to my will is motivation enough.”

Padmanabha said user-centered or participatory design is important within the assistive device community and requires getting feedback from potential users at every step. Henry’s feedback proved extremely helpful and gave the team new ideas to think about as they move forward.

The HAT researchers will present the results of their study at the ACM/IEEE International Conference on Human-Robot Interaction March 11–15 in Boulder, Colorado.

HAT originated more than two years ago in a project course taught by Zackory Erickson, an assistant professor in the Robotics Institute. The students contacted Henry as part of their customer discovery process. Even then, he was excited about the possibility of using a prototype.

The project showed promise and later was spun out of the class. An early version of HAT was developed and tested in the lab by participants both with and without motor impairments. When it came time to do an in-home case study, Henry seemed the logical person to start with.

During the weeklong study, Padmanabha and Gupta lived in the Evans home around the clock, both for travel convenience and to be able to perform testing whenever Henry was ready. Having strangers in the house 24/7 is typical of the studies Henry’s been involved in and is no big deal for him or Jane.

“We’re both from large families,” he said.

Padmanabha and Gupta, a computer science major, likewise adjusted quickly to the new surroundings and got used to communicating with Henry using a letterboard, a tool that allows Henry to spell out words by looking at or pointing a laser at each letter. The pair even played poker with Henry and Jane, with Henry using Stretch to manipulate his cards.

In the earlier tests, HAT used head movements and voice commands to control a robot. Henry can’t speak, but he can move his left thumb just enough to click a computer mouse. So the team reconfigured HAT for the Evans trial, substituting computer clicks for voice commands as a way to shift between modes that include controlling the movement of the robot base, arm or wrist, or pausing the robot.

“Among people with motor impairments, everyone has different levels of motor function,” Padmanabha said. “Some may have head movement, others may only have speech, others just have clicking capabilities. So it’s important that you allow for customization of your interface.”

Head motions are key to using HAT, which detects head movement using a sensor in a cap, headband or — in Henry’s case — a chin strap.

“People use head gesturing as a way to communicate with each other and I think it’s a natural way of controlling or gesturing to a robot,” Padmanabha said.
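
As a rough illustration of the interaction pattern described above, the sketch below maps head pitch and yaw to velocity commands for whichever subsystem the current mode selects, with a click cycling through base, arm, wrist, and pause. The class and parameters are hypothetical; the actual HAT software is not shown in the article.

```python
from dataclasses import dataclass

MODES = ["base", "arm", "wrist", "pause"]

@dataclass
class HeadPose:
    pitch: float  # nod, radians
    yaw: float    # turn, radians

class ModalTeleop:
    """Cycle control modes with a click and map head motion to velocity commands."""

    def __init__(self, gain=0.5, deadband=0.05):
        self.mode_idx = 0
        self.gain = gain          # head motion (rad) -> command units (illustrative)
        self.deadband = deadband  # ignore small, unintentional head movements

    def on_click(self):
        """A mouse click (or other configurable trigger) advances to the next mode."""
        self.mode_idx = (self.mode_idx + 1) % len(MODES)
        return MODES[self.mode_idx]

    def command(self, head: HeadPose):
        mode = MODES[self.mode_idx]
        if mode == "pause":
            return mode, (0.0, 0.0)
        pitch = 0.0 if abs(head.pitch) < self.deadband else head.pitch
        yaw = 0.0 if abs(head.yaw) < self.deadband else head.yaw
        # e.g., for "base": forward/back from pitch, rotation from yaw
        return mode, (self.gain * pitch, self.gain * yaw)

teleop = ModalTeleop()
print(teleop.command(HeadPose(pitch=0.2, yaw=-0.1)))  # drive the base
teleop.on_click()                                      # switch to arm control
print(teleop.command(HeadPose(pitch=0.2, yaw=0.0)))
```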

A graphical user interface — a computer screen — is more typical for controlling robots. But Gupta said users don’t like using a computer screen to control a robot that is operating around their body.

“It can be scary to have a robot close to your face, trying to feed you or wipe your face,” she said. Many user studies therefore shy away from attempting tasks that come close to the face. But once Henry got used to HAT, he didn’t hesitate to perform such tasks, she added.

A computer screen is available to control Stretch in tasks that are out of the user’s line of sight, such as sending the robot to fetch something from another room. At Henry’s suggestion, the researchers made it possible to use HAT to control a computer cursor with head movements.

In addition to Gupta, Padmanabha and Erickson, the research team includes CMU’s Carmel Majidi, the Clarence H. Adamson Professor of Mechanical Engineering; Douglas Weber, the Akhtar and Bhutta Professor of Mechanical Engineering; and Jehan Yang, a Ph.D. student in mechanical engineering. Also included are Vy Nguyen of Hello Robot, maker of Stretch; and Chen Chen, an undergraduate at Tsinghua University in Beijing, who implemented the Driver Assistance software.

Though Stretch is commercially available, it is still primarily used by researchers, and CMU has 10–15 of them. It’s a simple robot with limited capabilities, but Padmanabha said its price tag of approximately $25,000 inspires hope for expanded use of mobile robots.

“We’re getting to the price point where we think robots could be in the home in the near future,” he said.

Henry said Stretch/HAT still needs systemwide debugging and added features before it is more widely adopted. He thinks that might occur in as little as five years, though that will depend not only on price and features, but the choice of market.

“I believe the market for elderly people is larger and more affluent and will therefore develop faster than the market for people with disabilities,” he said.

Editor’s Note: This article was republished from Carnegie Mellon University.

Punyo is a soft robot from TRI designed for whole-body manipulation
https://www.therobotreport.com/punyo-soft-robot-from-tri-designed-for-whole-body-manipulation/
Thu, 29 Feb 2024

TRI’s Punyo humanoid robot can manipulate objects with its whole body, giving it more flexibility when it comes to household tasks.


While humanoid robots have burst into mainstream attention in the past year, with more and more companies releasing their own models, many operate similarly. The typical humanoid uses arms and grippers to handle objects, while its rigid legs provide a mode of transportation. Researchers at the Toyota Research Institute, or TRI, said they want to take humanoids a step further with the Punyo robot.

Punyo isn’t a traditional humanoid robot in that it doesn’t yet have legs. So far, TRI’s team is working with just the torso of a robot and developing manipulation skills.

“Our mission is to help people with everyday tasks in our homes and elsewhere,” said Alex Alspach, one of TRI’s tech leads for whole-body manipulation, in a video (see above). “Many of these manipulation tasks require more than just our hands and fingers.” 

When humans have to carry a large object, we don’t just use our arms to carry it, he explained. We might lean the object against our chest to lighten the load on our arms and use our backs to push through doors to reach our destination.

Manipulation that uses the whole body is tricky for humanoids, where balance is a major issue. However, the researchers at TRI designed their robot to do just that.

“Punyo does things differently. Taking advantage of its whole body, it can carry more than it could simply by pressing with outstretched hands,” added Andrew Beaulieu, one of TRI’s tech leads for whole-body manipulation. “Softness, tactile sensing, and the ability to make a lot of contact advantageously allow better object manipulation.” 

TRI said that “punyo” is a Japanese word that evokes the image of something cute yet resilient. TRI’s stated goal was to create a robot that is soft, interactive, affordable, safe, durable, and capable.


Robot includes soft limbs with internal sensors

Punyo’s hands, arms, and chest are covered with compliant materials and tactile sensors that allow it to feel contact. The soft materials allow the robot’s body to conform with the objects it’s manipulating.

Underneath this soft exterior are two “hard” robot arms, a torso frame, and a waist actuator. TRI said it aimed to combine the precision of a traditional robot with the compliance, impact resistance, and sensing simplicity of soft robotic systems.

Punyo’s arms are entirely covered in air-filled bladders, or bubbles. Each bubble connects via a tube to a pressure sensor, which can feel forces applied to the bubble’s outer surface.

Each bubble can also be individually pressurized to a desired stiffness, and together the bubbles add around 5 cm (2 in.) of compliance to the surface of the robot’s arms.
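
One way to see how a pressure reading becomes a touch signal: assuming a roughly constant effective contact area, the force on a bubble scales with the rise in its internal pressure (F ≈ ΔP × A). The sketch below is only an illustration of that idea, not TRI’s calibration.

```python
def contact_force(pressure_pa, rest_pressure_pa, contact_area_m2=0.002):
    """Rough contact-force estimate from a single bubble's pressure rise.

    Assumes force ~ (pressure rise) x (effective contact area); a real system
    would calibrate this mapping per bubble. All values here are illustrative.
    """
    delta_p = max(pressure_pa - rest_pressure_pa, 0.0)
    return delta_p * contact_area_m2

# A 500 Pa rise over roughly 20 cm^2 of contact suggests about 1 N of force.
print(contact_force(pressure_pa=101_825.0, rest_pressure_pa=101_325.0))
```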

Instead of traditional grippers, Punyo has “paws,” each made up of a single high-friction latex bubble with a camera inside. The team printed the inside of these bubbles with a dot pattern, and the camera watches for deformations of this pattern to estimate forces.
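
A hedged sketch of the camera-based idea: track the printed dots between frames with optical flow and treat their displacement as a proxy for applied force after calibration. This uses standard OpenCV calls and is not TRI’s actual pipeline.

```python
import cv2
import numpy as np

def dot_displacement(prev_gray, gray):
    """Track dot features between two frames of the in-bubble camera.

    Returns the mean (dx, dy) shear displacement and the mean displacement
    magnitude, which can serve as rough proxies for tangential and normal
    loading after calibration. Parameters are illustrative.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2), 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    if len(flow) == 0:
        return np.zeros(2), 0.0
    return flow.mean(axis=0), float(np.linalg.norm(flow, axis=1).mean())
```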


Left: Under Punyo’s sleeves are bubbles, air tubes, and pressure sensors that add compliance and tactile sensing to the arms. Right: Closeup of a pair of arm bubbles. | Source: Toyota Research Institute

Punyo learns to use full-body manipulation

Punyo learned contact-rich policies using two methods: diffusion policy and example-guided reinforcement learning. TRI announced its diffusion policy method last year. With this method, the robot uses human demonstrations to learn robust sensorimotor policies for hard-to-model tasks.

Example-guided reinforcement learning requires tasks to be modeled in simulation, along with a small set of demonstrations to guide the robot’s exploration. TRI said it uses this kind of learning to achieve robust manipulation policies for tasks it can model in simulation.

When the robot can see demonstrations of these tasks, it can learn them more efficiently. This also gives the TRI team more room to influence the style of motion the robot uses to achieve the task.

The team uses adversarial motion priors (AMP), which are traditionally used for stylizing computer-animated characters, to incorporate human motion imitation into its reinforcement pipeline. 
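
As a minimal sketch of how a motion-style term can be folded into reinforcement learning, the snippet below combines a task reward with a style reward derived from a discriminator score, following one common AMP-style formulation. The weight and logit values are illustrative, not TRI’s exact setup.

```python
import numpy as np

def amp_style_reward(disc_logit):
    """Style reward from an AMP-style discriminator output (logit).

    The reward grows as the discriminator becomes more convinced that the
    transition looks like the reference human motion.
    """
    p = 1.0 / (1.0 + np.exp(-disc_logit))        # probability "looks human-like"
    return -np.log(np.clip(1.0 - p, 1e-6, 1.0))  # one common AMP-style form

def combined_reward(task_reward, disc_logit, style_weight=0.3):
    """Blend the task objective with the motion-style term (weight is illustrative)."""
    return task_reward + style_weight * amp_style_reward(disc_logit)

print(combined_reward(task_reward=1.0, disc_logit=2.0))
```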

Reinforcement learning does require the team to model tasks in simulation for training. To do this, TRI used a model-based planner for demonstrations instead of teleoperation. It called this process “plan-guided reinforcement learning.”

TRI claimed that using a planner makes possible longer-horizon tasks that are difficult to teleoperate. The team can also automatically generate any number of demonstrations, reducing its pipeline’s dependence on human input. This moves TRI closer to scaling up the number of tasks that Punyo can handle.

Stretch 3 from Hello Robot designed for open-source mobile manipulation
https://www.therobotreport.com/stretch-3-mobile-manipulator-hello-robot-designed-open-source-development/
Thu, 15 Feb 2024

Stretch 3 is a portable and lightweight platform for robotics developers and could lead to household applications.


Hello Robot today launched the third edition of its Stretch mobile manipulator robot. The company described Stretch 3 as a refinement over the previous edition, which was popular as a research platform. Hello Robot said it has improved the manufacturability and the usability of the robot.

New features in Stretch 3 include a rotating 3D camera at the top of the mast, designed for perception functions and observing the environment around the robot. Another key feature is a more robust DexWrist 3 gripper, which now includes a built-in 3D camera to enable visual servoing of the gripper fingers.
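
To illustrate what a wrist camera enables, here is a minimal image-based visual-servoing step: a proportional controller drives the pixel error between the detected object and the image center toward zero. The gains and the pixel-to-velocity mapping are hypothetical, not Hello Robot’s implementation.

```python
def visual_servo_step(target_px, image_size=(640, 480), gain=0.002, tol_px=5):
    """One image-based visual-servoing step for a wrist-mounted camera.

    target_px -- (u, v) pixel location of the detected object in the image.
    Returns an (x, y) gripper velocity command that re-centers the target,
    or (0.0, 0.0) once the target sits within tol_px of the image center.
    The gain and the pixel-to-velocity mapping are illustrative only.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_x, err_y = target_px[0] - cx, target_px[1] - cy
    if abs(err_x) < tol_px and abs(err_y) < tol_px:
        return 0.0, 0.0
    return -gain * err_x, -gain * err_y

# Example: an object detected at pixel (400, 260) in a 640x480 image.
print(visual_servo_step((400, 260)))
```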

The wrist is equipped with a quick-change feature that enables the gripper to be quickly swapped out for specialized end effectors or even an iPad (as seen in the video above).


Stretch 3 includes several updates, including a quick-change wrist, a wrist-mounted camera, and strengthened materials. | Credit: Hello Robot

Hello Robot serves growing open-source community

Stretch 3 empowers a growing community of developers to create a future in which friendly robots fold laundry, feed pets, support older adults, and enhance life in new ways, according to Hello Robot. If there is a “secret sauce” to the go-to-market plan for Stretch, it has to be the vibrant research community that has grown to support the platform.

“With Stretch 3, we are taking a real step towards a future with home robots,” said Dr. Aaron Edsinger, co-founder and CEO of the company. “We designed Stretch 3 to help our community leverage recent advances in AI.”

Charlie Kemp, co-founder and chief technology officer of Hello Robot, was a professor at the Georgia Institute of Technology and brought the research credibility and connections that fueled the initial development of Stretch. His robotics laboratory at Georgia Tech deployed the first few versions of the robot and put it through its paces as a research platform.

Stretch 3 specs

  • Payload: 2 kg (4.4 lb.)
  • Weight: 24.5 kg (54 lb.)
  • Size: 33 x 34 x 141 cm (13 x 13.4 x 55.5 in.)
  • Runtime: Two to five hours
  • Software development kit: ROS 2 and Python
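
Since the SDK is built on ROS 2 and Python, a developer might command the base with a few lines of rclpy. The sketch below assumes a standard Twist velocity topic; the actual topic name depends on the Stretch ROS 2 driver configuration and is only a placeholder here.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class NudgeBase(Node):
    """Publish a brief forward velocity to the mobile base.

    Assumes the driver exposes a Twist topic named '/stretch/cmd_vel';
    check the robot's ROS 2 driver for the real topic name.
    """

    def __init__(self):
        super().__init__("nudge_base")
        self.pub = self.create_publisher(Twist, "/stretch/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.step)

    def step(self):
        msg = Twist()
        msg.linear.x = 0.05  # slow forward creep, m/s
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = NudgeBase()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```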

Stretch 3 includes a rotating 3D camera at the top of its mast, enabling perception of the surrounding area and AI-based motion. | Credit: Hello Robot

‘App store’ approach could extend mobile manipulation

These robots will need applications for versatile uses. Hello Robot’s open platform has attracted innovators from across the world, including Fortune 500 companies, top-tier research labs, and universities in over 14 countries. Members of its developer community regularly release open code, data, models, publications, and educational materials, accelerating progress toward a future with household robots.

Edsinger told The Robot Report that he envisions an online “app store” for Stretch where the community can share new skills that users can download and install onto the robot. Each user could then customize the robot with the desired capabilities for their unique needs.

“Thanks to advances in AI, robots like Stretch are developing faster than expected,” said Edsinger. “A robot autonomously doing laundry was once considered a long-term ‘grand challenge’ but is now within reach.”

Stretch 3 could help with household chores

During a recent visit to Hello Robot headquarters in Martinez, Calif., I had the opportunity to observe V Nguyen, an occupational therapist at Hello Robot, as she demonstrated Stretch 3 to an end user with disabilities. I asked the individual how he might envision using the robot.

The most important goal of this end user is to regain some agency and independence with some of the most basic in-home tasks. These include retrieving a pair of pants from the floor or even getting help dressing in the morning.

The user cited other tasks like opening and closing a deadbolt on the front door, or removing a hot dish from the microwave. Stretch offers the potential of improving the daily lives of numerous people while enabling them to maintain their independence.

In January 2023, Hello Robot earned a $2.5 million grant from the National Institutes of Health to help commercialize its mobile manipulator technology.

Stretch 3 is priced at $24,950 and is available now on Hello Robot’s website for researchers, educators, developers, and enthusiasts.


Stretch 3 is portable, lightweight, and designed from the ground up to work around people. | Credit: Hello Robot

KettyBot Pro will provide personalized customer service, says Pudu Robotics
https://www.therobotreport.com/kettybot-pro-will-provide-personalized-customer-service-says-pudu-robotics/
Wed, 31 Jan 2024

KettyBot Pro’s new features include a larger screen for personalized advertising, cameras for navigation, and smart tray inspection.


KettyBot Pro is designed for multiple functions. Source: Pudu Robotics

Pudu Technology Co. today launched KettyBot Pro, the newest generation of its delivery and reception robot. The service robot is designed to address labor shortages in the retail and restaurant industries and enhance customer engagement, said the company.

“In addition to delivering food and returning items, KettyBot can attract, greet, and guide customers in dynamic environments while generating advertising revenue, reducing overhead, and enhancing the in-store experience,” stated Shenzhen, China-based Pudu.

“We hear from various businesses that it’s hard to maintain adequate service levels due to staff being overwhelmed and stretched thin,” said Felix Zhang, founder and CEO of Pudu Robotics, in a release. “Robots like KettyBot Pro lend a helping hand by collaborating with human staff, improving their lives by taking care of monotonous tasks so that they can focus on more value-added services like enhancing customer experience. And people love that you can talk to it.”


KettyBot Pro designed to step up service

KettyBot Pro can enhance the customer experience with artificial intelligence-enabled voice interaction, said Pudu Robotics. The mobile robot also has autonomous path planning.

The company said the latest addition to its fleet of commercial service robots includes the following new features:

  • Passability upgrade: A new RGBD depth camera — with an ultra-wide angle that boosts the robot’s ability to detect and avoid objects — reduces KettyBot’s minimum clearance from 55 to 52 cm (21.6 to 20.4 in.) under ideal conditions. This allows the robot to navigate through narrow passageways and operate in busy dining rooms and stores.
  • Smart tray inspection: Pudu claimed that this functionality is “a first in the industry.” The robot uses a fisheye camera above the tray to detect the presence or absence of objects on the tray. Once a customer has picked up their meal, the vision system will automatically recognize the completion of the task and proceed to the next one without the need for manual intervention (see the sketch after this list).
  • Customization for customers: The integration with PUDU Open Platform allows users to personalize KettyBot Pro’s expressions, voice, and content for easy operation and the creation of differentiated services. In a themed restaurant, the KettyBot Pro can display expressions or play lines associated with relevant characters as it delivers meals. It can also provide personalized welcome messages and greeting services, such as birthday services in star-rated hotels.
  • Mobile advertising display: Through the PUDU Merchant Management Platform, businesses can flexibly edit personalized advertisements, marketing videos, and more. Equipped with a large 18.5 in. (47 cm) screen, the KettyBot Pro offers new ways to promote menu updates and market products for restaurant and retail clients.
  • New color schemes: The KettyBot is now available in “Pure Black” in addition to the white-and-yellow or yellow-and-black color schemes of the original model. Pudu said this variety will better meet the aesthetic preferences of customers in different industries across global markets. For instance, high-end hotels and business venues regard Pure Black as the premium choice, it said.
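
Pudu has not published how smart tray inspection works, but a simple version of the idea can be sketched with frame differencing against an empty-tray reference image. The thresholds below are illustrative and would need tuning for lighting, tray texture, and the fisheye lens distortion.

```python
import cv2
import numpy as np

def tray_is_empty(frame_bgr, empty_ref_bgr, diff_thresh=30, occupancy_ratio=0.02):
    """Compare the current tray image against an empty-tray reference.

    Returns True when the fraction of significantly changed pixels falls
    below occupancy_ratio, i.e., the customer has taken their items.
    """
    cur = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    ref = cv2.cvtColor(empty_ref_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cv2.GaussianBlur(cur, (5, 5), 0),
                       cv2.GaussianBlur(ref, (5, 5), 0))
    changed = np.count_nonzero(diff > diff_thresh)
    return changed / diff.size < occupancy_ratio
```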

Pudu Robotics builds for growth

Founded in 2016, Pudu Robotics said it has shipped nearly 70,000 units in more than 60 countries. Since KettyBot’s launch in 2021, global brands such as KFC, MediaMarkt, Pizza Hut, and Walmart have successfully deployed the robot in high-traffic environments. These companies use it to deliver orders, market menu items and products, and welcome guests, said Pudu.

With growing healthcare needs and advances in artificial intelligence, the U.S. service robotics market is poised to grow this year, Zhang told The Robot Report.

Pudu Robotics — which reached $100 million in revenue in 2022 — is building two new factories near Shanghai that it said will triple the company’s annual capacity and help it meet global demand.

Nala Robotics incorporates generative AI into restaurant robot recipes
https://www.therobotreport.com/nala-robotics-incorporates-generative-ai-into-restaurant-robot-recipes/
Mon, 29 Jan 2024

Nala Robotics said it is looking at how AI can help robots create recipes with the ingredients at hand, reducing food waste.


Nala provides automation for food bowls and other meals. Source: Nala Robotics

Since the public debut of generative artificial intelligence and large language models in late 2022, robotics developers have been working to take advantage of the latest AI capabilities. Nala Robotics Inc. said that ChatGPT enables its autonomous chefs to prepare almost any recipe.

Generative AI and robots can help restaurants and commercial kitchens save money, as well as address labor turnover and shortages, according to Ajay Sunkara, founder and CEO of Nala Robotics.

“I started the company six years ago, and our automation of commercial kitchens went through different phases of development during the pandemic,” Sunkara told The Robot Report. “Nala started with the intention of making food consistently, but hygiene and labor shortages changed our priorities. Then there was the emergence of generative AI.”


Nala Robotics pivots post-pandemic

“We built our system to address the issues the industry is facing, as well as the technology innovations emerging in our path today,” added Sunkara. “Nala runs one of the only robotic commercial kitchens in the U.S., in Naperville, Ill. It has been operating for more than 25 months.”

The Chicago-based company sells The Wingman robotic fryer, the Nala Chef automated kitchen, and the Spotless robot for loading and unloading dishwashers. It also provides systems that can assemble sandwiches, food bowls, and pizzas.

“We’ve pivoted in the past few years when we learned of the need for end-to-end solutions,” Sunkara noted. “Most previous innovations in food robotics can handle one task or area, but with hygiene concerns during the COVID-19 pandemic, the industry needed machines to handle everything from ingredients to delivery.”

“The second aspect or priority we had to change was the cost impact,” he added. “When inflation was low before the pandemic, robotics was a distant arena for traditional commercial restaurants. Now, any small or midsize business can afford to look at robotics because of wage growth. In California, the minimum wage is $20, which made robotics a more affordable alternative to help ease pressure on restaurant owners.”

While the majority of Nala Robotics’ customers are larger chains, adding robotics is more involved for them because of the need to customize systems to their processes. The company is working on pilots with both larger and smaller customers, said Sunkara.

AI addresses need for kitchen skills

A new employee or a robot now requires about the same amount of time to learn a range of ingredients and how to do things such as build a sandwich and add condiments, asserted Sunkara. 

“A large chain has more throughput than a [delivery-only] cloud kitchen, but robots can help, whether it’s 20 units or less,” he said. “We’ve experimented with machine learning for a long time and have data showing significant results with our models.”

Sunkara said that a key application for AI is in building new recipes.

“For example, fusion restaurants such as our kitchen can tell ChatGPT the ingredients we have for a given day — tomatoes, pumpkins, etc.,” he said. “The AI can come back with a potential recipe. That’s an area where humans have not gone before.”
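
A hedged sketch of how a kitchen system might turn the day’s ingredients into a recipe request for a language model. The prompt wording and the stubbed model call are assumptions for illustration; the article does not describe Nala’s actual integration.

```python
def build_recipe_prompt(ingredients, cuisine_hint="fusion", servings=4):
    """Assemble a recipe-generation prompt from today's available ingredients."""
    listing = ", ".join(sorted(ingredients))
    return (
        f"You are a chef in a {cuisine_hint} commercial kitchen. "
        f"Using only these ingredients on hand: {listing}. "
        f"Propose one recipe for {servings} servings with step-by-step "
        f"instructions and approximate quantities, minimizing waste."
    )

def generate_recipe(ingredients, llm_call):
    """llm_call is a placeholder for whatever chat-completion client is in use."""
    return llm_call(build_recipe_prompt(ingredients))

# Example of the prompt that would be sent to the model:
print(build_recipe_prompt(["tomatoes", "pumpkin", "paneer", "basmati rice"]))
```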

Optimizing ingredient use can help reduce food waste, said Sunkara. What’s the best mix of automation and human oversight?

“It depends on the application,” he replied. “The majority of preparation of cut vegetables and frozen food has been automated for some time now, but not everything is cost-effective for a commercial kitchen or a small restaurant to automate. We have to be strategic about where we apply automation for cost savings.”

“It’s a matter of paying a worker for eight hours versus a robot for 24 hours, but there’s the utilization rate and payback time,” Sunkara said. “We differ from our competition in our approach to maintenance, and most of our systems are built for end-to-end use.”

“For example, our frying system can not only put fries or wings in oil and take them out; it can adjust the count of wings or weight, measure the temperature of the oil, sauce the wings, and clean the utensils and packaging,” he claimed. “This is where you’ll see a significant impact — you need to save a full labor hour, not a half or one-quarter hour of labor.”

Nala Robotics is one of the first food technology companies to integrate with AI for such multi-tasking, said Sunkara. It offers its systems through direct sales, rental, lease, and a robotics-as-a-service (RaaS) model. In RaaS, a customer uses Nala’s infrastructure and pays by the dish.

Automating the future of work and home tasks

Nala Robotics is also actively exploring generative AI for human-machine interaction (HMI) and for robots to self-correct and improve efficiency on their own, said Sunkara.

“There is a danger of overexuberance in AI,” he acknowledged. “The past few years have seen a lot of machine learning development, and AI is an extension of those models. But it’s more like the software industry, where the dot-com era went really fast, and the market wasn’t ready to absorb those changes.”

“With robotics, the whole industry has to work together to be accepted,” Sunkara added. “Automation helped as people got more accustomed to remote work during the pandemic, and they’re now ready for AI.”

While relatively little money is currently being invested into household robotics, Sunkara said he believes that the potential market is “huge” if they could do everyday tasks.

“First, our goal is to get into commercial environments, where it was hard to show restaurant owners the potential of robots,” said Sunkara. “Whatever experience we’re gaining can eventually be utilized in at-home tasks.”

Pudu Robotics CEO predicts that service robot market will expand
https://www.therobotreport.com/pudu-ceo-predicts-service-robot-market-to-expand/
Sat, 27 Jan 2024

Pudu Robotics, a leading service robot exporter in China, says that demand and applications are likely to expand globally.


Parkhotel employees in Eisenstadt, Austria, celebrate the arrival of service robots. Source: Pudu Robotics

Commercial service robots are more common in East Asia than elsewhere, but the rest of the world could catch up in 2024, according to Pudu Technology Co. The Shenzhen, China-based company claimed that it is China’s top exporter of such robots.

“If 2023 was the year of GenAI, I believe 2024 will be the year of the robot,” stated Felix Zhang, founder and CEO of Pudu Robotics. “While humanoid robots and food-making robots grabbed headlines in 2023, the untold story is that it’s the humble service robot — robots that skillfully deliver items and clean floors, often in high-traffic areas — that are actually ready to scale in 2024.”

Last year, Pudu Robotics said it deployed robots across 600 cities in 60 countries. The company also partnered with SoftBank and Nippon Otis Elevators, opened its autonomous mobile robot (AMR) management platform to developers, and won Red Dot and iF Design awards. In addition, it raised more than $15 million in Series C3 funding.

Zhang discussed Pudu’s current offerings and his outlook for this year with The Robot Report:

Service robots to take on more healthcare roles

You have predicted more robots in hospitals and senior living facilities. Does Pudu offer robotics specifically for elder care?

Zhang: Pudu Robotics offers several robots that are deployed in senior living facilities to assist facility staff and residents in their day-to-day tasks and improve the emotional well-being of the elderly. The robots include the BellaBot and KettyBot, two models of delivery robots that can serve food or medicine, assist with returning items, and in some cases even interact with residents.

In addition, although it’s not designed to interact with residents, the PUDU CC1 cleaning robot can help keep senior living facilities tidy, as it is designed to scrub, sweep, vacuum and mop in care homes and other commercial settings. These capabilities automate menial tasks for overwhelmed workers and set the standard for hygiene in autonomous cleaning.

For example, a chain elderly care institution in Hong Kong, which operates 12 nursing homes with 1,600 beds, has adopted CC1 for cleaning the internal environment, reducing the workload of the staff.

PUDU’s robots are especially timely, as more than 1 in 6 Americans are now 65 years or older, and life expectancy continues to grow, thanks to medical advancements. Our aging population is contributing to a major healthcare staffing crisis.

The next 12 months will see the increased adoption of robots in healthcare, as short-staffed senior-living facilities employ the technology to complete tasks. They can monitor daily routines, provide reminders for medication schedules, detect changes in body temperature, and warn medical professionals and families of any abnormalities. 

Robots can also provide emotional support for the elderly, and robot-assisted living will become a crucial asset for the growing elderly population. Loneliness is a common problem for many older people; robots provide company and can engage in activities such as communication, storytelling, and playing music.

The traditional way of caring for the elderly often falls short of meeting all of their needs, and robots are able to fill the gaps.

Pudu addresses global markets, economic headwinds

How is the U.S. market for service robots growing in comparison with other regions?

Zhang: The global market for service robots is soaring, and the U.S. is beginning to catch up with its peers in Asia. According to the International Federation of Robotics [IFR], a non-profit industry association, sales of robots used in the service industry grew by 37% worldwide in 2022.

In 2024, the U.S. market is expected to generate the most revenue in the service robot industry, but regions like Japan are leading the way in development and adoption of the technology. In many developing countries, the service industry is hobbled by ever-mounting challenges in hiring workers.

In response, Pudu Robotics has engaged in a massive expansion beyond the borders of its home market since 2020, achieving rapid growth in shipments. Pudu leads the global market as China’s No. 1 service robot exporter, and cumulative global shipments are over 70,000 units. 

In the U.S., employers facing staff shortages have turned to commercial service robots to provide relief for their remaining workers. Quick-service restaurants [QSRs], for example, expect 51% of tasks to be automated by 2025, while full-service restaurants expect to automate 27% of tasks. Service robots are automating menial tasks, improving overall efficiency, and preventing burnout among their human colleagues. 

While the challenge of labor shortages is universal, how will robotics adoption overcome current economic headwinds?

Zhang: Currently, there are 4 million more open jobs than there are available workers in the U.S. to fill them. As society’s tolerance and acceptance of new technology grows, robots will plug this hole.

U.S. restaurants are a prime example. Owners face a “perfect storm” of an aging population, soaring child-care costs that shrink the pool of available workers, and a pandemic that pointed many workers towards more stable careers.

While economic headwinds may cause some delay, the world is still turning towards an increasingly automated future. Robots are the long-term solution for massive problems facing several industries.

Integration and AI to make robots more useful

From hospitality to healthcare and retail, which areas have the most demand? How much integration will be necessary?

Zhang: We’re seeing the biggest increase in demand for service robots from the restaurant industry, followed by hospitality, healthcare, and retail, in that order. To meet that global demand, Pudu Robotics is building two new factories near Shanghai that will triple the company’s annual capacity.

Businesses across all four categories are finding it hard to maintain adequate service levels due to staff being stretched thin. Integration will happen across these industries, as they all are dealing with the effects of the labor shortage. Businesses will still need human workers, but robots can supplement them and improve efficiency.

While large language models (LLMs) are improving human-machine interactions, how will they be instantiated in robots rather than on tablets and phones?

Zhang: Large language models can effectively enhance human-robot interaction, particularly in semantic understanding. Take, for instance, a robot serving as a shopping guide in a supermarket.

Previously, to find a specific brand of electric toothbrushes, customers needed to navigate through “personal care,” then “toothbrushes,” followed by “electric toothbrushes,” and finally the brand. With significant improvements in voice recognition and semantic understanding, it’s now possible to locate the item directly through a single command. 
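
As a rough sketch of that single-command lookup, the snippet below matches a spoken request against catalog entries. A deployed system would use an LLM or embedding model for semantic matching; simple string similarity stands in here so the example is self-contained, and the catalog is invented.

```python
from difflib import SequenceMatcher

CATALOG = {
    "oral-b io series 9 electric toothbrush": "Aisle 7, Shelf B",
    "philips sonicare 4100 electric toothbrush": "Aisle 7, Shelf C",
    "bamboo manual toothbrush": "Aisle 7, Shelf A",
}

def locate(query, catalog=CATALOG):
    """Match a spoken request to the closest catalog entry.

    String similarity is only a stand-in for real semantic understanding.
    """
    best = max(catalog,
               key=lambda name: SequenceMatcher(None, query.lower(), name).ratio())
    return best, catalog[best]

print(locate("where are the Sonicare electric toothbrushes?"))
```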

LLMs are highly beneficial for advancing end-to-end algorithms in modules such as positioning, navigation, and perception, significantly enhancing the efficiency of their evolution toward global optimization. Robot and LLM integration is already under way, but manufacturers need to ensure that the generated content aligns with human values and safety standards, while also ensuring that robots can reliably and responsibly interact with the real world.

In 2024, these models will be used effectively in robots — as well as tablets and phones — as robots with AI voice interactions and eye-catching displays make use of the advancing technology.


Intuition Robotics brings generative AI capabilities to ElliQ 3
https://www.therobotreport.com/intuition-robotics-brings-generative-ai-capabilities-to-elliq-3/
Tue, 23 Jan 2024

ElliQ 3’s hardware improvements pave the way for its software upgrades, which center around integrating generative AI capabilities.


According to data from New York State, Intuition Robotics’ ElliQ has resulted in a 95% reduction in loneliness amongst users. | Source: Intuition Robotics

Intuition Robotics Ltd. has unveiled the latest edition of its AI companion for older adults, ElliQ 3. The Palo Alto, Calif.-based company said that the social robot has been proven to benefit seniors’ health, social connectedness, and independence.

The latest version of ElliQ is integrated with generative AI to bring the technology into seniors’ everyday lives, said Intuition Robotics. The company added that its $25 million in funding from August 2023 enabled its software and hardware updates.

Intuition Robotics said ElliQ’s new capabilities will help expand the availability and accessibility of the robot. They are intended to help the robot further enhance the independence and wellness of older adults and decrease feelings of loneliness. 

“It’s astounding to see that the first people to live with and build long-term relationships with an AI are individuals in their 80s and 90s,” said Dor Skuler, co-founder and CEO of Intuition Robotics, in a release.

“Through this relationship, ElliQ is proving to be highly effective in reducing older adults’ sense of loneliness, improving health and independence, and increasing social connectedness,” he added. “The launch of ElliQ 3.0 allows us to reach more older adults and expand partnerships with government services for the aging and the healthcare ecosystem while offering valuable insights and context.”


Hardware upgrades prepare ElliQ 3 for scale

Intuition Robotics said it geared many of ElliQ 3’s hardware upgrades toward scaling quickly. The company plans to scale up its manufacturing to meet the growing demand for the robot. Intuition Robotics collaborated on ElliQ 3’s updated design with Yves Behar’s design studio, Fuseproject.

The newest version of ElliQ is 1.3 lb. (0.58 kg) lighter than older versions and has a 36% smaller footprint. A lighter robot means that older adults will have an easier time handling it. 

ElliQ 3 comes about a year after ElliQ 2.0. Some of ElliQ 3’s other hardware upgrades include:

  • An upgraded system architecture that leverages an octa-core system on chip (SoC) and a built-in, dual-core AI processing unit (APU), both powered by MediaTek 
  • 33% more RAM
  • Twice the amount of computing power and memory
  • A fully integrated screen, which can improve customer experience and system resilience

“By bringing our products and cutting-edge AI technology to ElliQ, we’re helping people in their 70s, 80s, and 90s experience the benefits of the latest innovations in computing,” stated Adam King, vice president and general manager of the Client Computing Business at MediaTek. “Our advanced processing and connectivity makes it easier for those using ElliQ to experience a more in-depth level of companionship.”


ElliQ 3 includes an integrated screen, making it more durable and easier to use. | Source: Intuition Robotics

ElliQ 3 brings generative AI to aging populations

ElliQ 3’s hardware improvements pave the way for its software upgrades, which center around integrating generative AI capabilities. Large language models (LLMs) drive these new capabilities, which Intuition Robotics claimed can extend and enrich ElliQ’s conversations with older adults. 

Generative AI provides context for many conversations, allowing ElliQ 3 users to discuss a large number of topics more naturally, said Intuition. It has also integrated LLM technology with its Relationship Orchestration Engine. 

The Relationship Orchestration Engine makes real-time decisions regarding actions, scripted conversations, and, now, generative AI conversations. With LLM technology integrated, ElliQ can now fuse both content and memory, according to Intuition Robotics.

ElliQ 3 understands, classifies, and remembers information from scripted and open-ended conversations, explained the company. It also takes in non-spoken actions and choices through other modalities. This allows the robot to refer to this information in future conversations. 

Any relevant information ElliQ gathers is maintained in a user profile, so the robot can follow up on new conversations, suggestions, and activities. ElliQ can use these capabilities to strengthen its relationship with the user or to promote social connectedness, Intuition Robotics said. 

Generative AI also helps ElliQ 3 do more activities with its users, said the company. The robot can paint or write poems with a user, which can contribute to the user’s cognitive wellness and creativity, it said. ElliQ 3 also features synchronized events, starting with Bingo, in which customers can participate in real time with others.

“Today’s older adults are harnessing the power of AI to transform their lives and embrace new technologies that bring companionship, knowledge, and connection into their daily routines,” said Rick Robinson, vice president and general manager of AgeTech Collaborative from AARP. “As we continue to combat the epidemic of loneliness among seniors, solutions like ElliQ play a vital role, offering not just innovation, but also a lifeline to a brighter, more connected future where our aging population can enjoy the full spectrum of opportunities that the Digital Age has to offer.” 

To ensure safety, Intuition Robotics said it has developed and deployed “guardrail” mechanisms that automatically monitor and mediate conversations in real time. This helps the robot to better control the flow of conversation, decide when a context switch is appropriate, and avoid AI “hallucinations” or inappropriate responses. 
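
A minimal sketch of the decision flow described above: prefer a matching scripted interaction, fall back to a generative reply, record the exchange in a simple user-profile “memory,” and pass the output through a guardrail check before it is spoken. Every component here is a stand-in, not Intuition Robotics’ Relationship Orchestration Engine.

```python
BLOCKLIST = {"medical diagnosis", "financial advice"}  # illustrative guardrail topics

def guardrail_ok(text):
    """Very rough stand-in for real-time content mediation."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKLIST)

def orchestrate(user_utterance, profile, scripted_intents, generate_reply):
    """Pick a scripted flow when one matches, else fall back to generative AI.

    profile          -- dict acting as long-term memory of facts about the user
    scripted_intents -- mapping of trigger keyword -> canned response
    generate_reply   -- placeholder for the LLM-backed conversation engine
    """
    lowered = user_utterance.lower()
    for trigger, response in scripted_intents.items():
        if trigger in lowered:
            reply = response.format(name=profile.get("name", "there"))
            break
    else:
        reply = generate_reply(user_utterance, profile)

    profile.setdefault("history", []).append(user_utterance)  # remember context
    return reply if guardrail_ok(reply) else "Let's talk about that with your care team."

profile = {"name": "Steve"}
scripted = {"bingo": "Time for bingo, {name}! Shall I start today's game?"}
print(orchestrate("Can we play bingo?", profile, scripted, lambda u, p: "Sure!"))
```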

Maintaining the things users already love about ElliQ

While ElliQ 3 comes with a host of new capabilities, Intuition Robotics said the newest version still maintains ElliQ’s personality. Through a combination of training, prompting, and scripting by ElliQ’s Character Design Team, the robot maintains its characteristic “empathy, curiosity, and humor,” it said. 

Intuition Robotics asserted that this pre-built personality enables users “to benefit from the infinite possibilities of AI,” while the company can still create a safe space for a vulnerable population. 

The company says that ElliQ is available to hundreds of thousands of older adults as a fully subsidized service. It’s available through government agencies, non-profit organizations, Medicaid Managed Care Organization providers, and healthcare payers. 

Its partners include the New York State Office for the Aging, Inclusa (a Humana company), and the Area Agency on Aging of Broward County, as well as newer partners such as the Olympic Area Agency on Aging and Ypsilanti Meals on Wheels.

Asensus places first Senhance surgical robot for pediatrics in Japan
https://www.therobotreport.com/asensus-places-first-pediatric-surgical-robot-japan/
Tue, 16 Jan 2024

Asensus Surgical announced the fourth hospital in Japan to adopt its Senhance system, this one dedicated to pediatric procedures.


Senhance is designed to keep time and cost per procedure down. Source: Asensus Surgical

Asensus Surgical Inc. today announced that Nagoya University Hospital in Japan agreed last month to lease its Senhance surgical robot. This is the first pediatric installation in Japan and the fourth globally for Senhance in 2023, said the company.

“The Senhance System is specifically equipped to meet the demands of pediatric surgery, and we are excited to work with Nagoya University Hospital,” said Anthony Fernando, president and CEO of Asensus Surgical, in a release.

“With its specialized instrumentation and advanced clinical intelligence, the system offers a unique advantage for pediatric patients, reducing invasiveness and increasing precision in a way that sets it apart,” he said. “Our experience in Europe and the U.S. has shown success in various pediatric procedures, and we’re eager to extend these benefits in Japan.”

Asensus develops Senhance, LUNA to improve outcomes

Asensus Surgical claimed that its combination of machine vision, augmented intelligence, and deep-learning capabilities could improve healthcare outcomes. The company designed Senhance for use in general laparoscopic and laparoscopic gynecological procedures.

It won clearance from the U.S. Food and Drug Administration (FDA) in 2017. Since then, Asensus has secured expanded indications, deals with Google and Nvidia, and hospital placements around the world.


Nagoya University Hospital, Japan, initiated a Senhance Surgical System dedicated to pediatric procedures. Source: Asensus Surgical

Senhance became the first digital laparoscopic surgery system for children when the FDA cleared its pediatric indication in March, according to Asensus.

While the system encountered a recall because of “unintended movement,” Senhance is currently available in the U.S., EU, Japan, Russia, and other countries.

Asensus added that it has built on digital laparoscopy and the Senhance Surgical System to develop LUNA, a next-generation robot whose Intelligent Surgical Unit is designed to increase surgeon control and reduce surgical variability. It recently conducted an in vivo lab evaluation of LUNA.

Fourth deployment in Japan aimed at children

The Nagoya University Hospital deployment marked the continued expansion for Asensus in Japan. Over the past year or so, the company has placed Senhance systems at Kashiwa and Kitakyushu General Hospital. In addition, Saiseikai Shiga Hospital in Ritto, Japan, agreed to lease a surgical robot.

“The Senhance System provides a valuable solution for pediatric surgery,” stated Dr. Hiroo Uchida of the Department of Pediatric Surgery at Nagoya University Hospital. “Designed with smaller patients in mind, the reusable 3 mm [0.11 in.] instruments offer a distinct advantage.”

“Having our experience in laparoscopic surgery, we find the system very adaptable, such as instinctive camera control and haptic feedback with crucial safety features,” he said. “In addition, we believe the system offers economic value. This represents a significant step forward aligning with our goal of providing the best care for our young patients.”

Editor’s note: This article first appeared on MassDevice, a sibling site to The Robot Report.

Ddog project at MIT connects brain-computer interface with Spot robot
https://www.therobotreport.com/ddog-mit-project-connects-brain-computer-interface-spot-robot/ | Wed, 03 Jan 2024
Project Ddog aims to turn a Boston Dynamics Spot quadruped into a basic communicator for those with physical challenges such as ALS.


An MIT research team, led by Nataliya Kos’myna, recently published a paper about its Ddog project. It aims to turn a Boston Dynamics Spot quadruped into a basic communicator for people with physical challenges such as ALS, cerebral palsy, and spinal cord injuries. 

The project's brain-computer interface (BCI) includes AttentivU, a pair of wireless glasses with sensors embedded into the frames. These sensors can measure a person's electroencephalogram (EEG), or brain activity, and electrooculogram (EOG), or eye movements. 

This research builds on the university‘s Brain Switch, a real-time, closed-loop BCI that allows users to communicate nonverbally and in real time with a caretaker. Kos’myna’s Ddog project extends the application using the same tech stack and infrastructure as Brain Switch. 
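
To make that closed-loop idea concrete, here is a minimal Python sketch of how glasses-mounted EEG and EOG readings might be reduced to a single nonverbal "switch" event. The sensor driver, thresholds, and window size are illustrative assumptions, not details of AttentivU or Brain Switch.

```python
import random
from collections import deque

# Hypothetical values; a real BCI calibrates these per user and per session.
EOG_SWITCH_THRESHOLD = 150.0   # microvolts, illustrative only
WINDOW = 50                    # samples in the smoothing window


def read_sample():
    """Stand-in for a glasses-mounted sensor driver; returns (eeg_uv, eog_uv)."""
    return random.gauss(0, 20), random.gauss(0, 40)


def detect_selection(num_samples=1000):
    """Return True if a sustained EOG spike (a deliberate 'switch') is detected."""
    window = deque(maxlen=WINDOW)
    for _ in range(num_samples):
        _eeg, eog = read_sample()
        window.append(abs(eog))
        if len(window) == WINDOW and sum(window) / WINDOW > EOG_SWITCH_THRESHOLD:
            return True
    return False


if __name__ == "__main__":
    print("Selection detected:", detect_selection())
```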

Spot could fetch items for users 

There are 30,000 people living with ALS (amyotrophic lateral sclerosis) in the U.S. today, and an estimated 5,000 new cases are diagnosed each year, according to the National Organization for Rare Disorders. In addition, about 1 million Americans are living with cerebral palsy, according to the Cerebral Palsy Guide. 

Many of these people already have or will eventually lose their ability to walk, get themselves dressed, speak, write, or even breathe. While aids for communication do exist, most are eye-gaze devices that allow users to communicate using a computer. There aren’t many systems that allow the user to interact with the world around them. 

Ddog’s biggest advantage is its mobility. Spot is fully autonomous. This means that when given simple instructions, it can carry them out without intervention.

Spot is also highly mobile. Its four legs mean that it can go almost anywhere a human can, including up and down slopes and stairs. The robot's arm accessory allows it to perform tasks like delivering groceries, moving a chair, or bringing a book or toy to the user. 

The MIT system runs on just two iPhones and a pair of glasses. It doesn’t require sticky electrodes or backpacks, making it much more accessible for everyday use than other aids, said the team.

How Ddog works

The first thing Spot must do when working with a new user in a new environment is create a 3D map of the world it is working within. Next, the first iPhone will prompt the user by asking what they want to do next, and the user will answer by simply thinking of what they want. 

The second iPhone runs the local navigation map, controls Spot’s arm, and augments Spot’s lidar with the iPhone’s lidar data. The two iPhones communicate with each other to track Spot’s progress in completing tasks.

The MIT team designed the system to work fully offline or online. The online version uses a more advanced, better fine-tuned set of machine learning models. 
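
The division of labor between the two phones can be pictured as a simple message-passing loop: one phone turns decoded intents into tasks, and the other executes them against its local map. The class names, waypoints, and intent labels below are assumptions made for illustration, not the MIT team's actual code.

```python
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class NavigationPhone:
    """Second phone: local map, lidar fusion, and arm control (all simulated here)."""
    waypoints: dict = field(default_factory=lambda: {"kitchen": (2.0, 1.5), "shelf": (0.5, 3.0)})

    def execute(self, task: str) -> str:
        target = self.waypoints.get(task)
        if target is None:
            return f"unknown task '{task}'"
        # In a real system, this would send navigation and arm commands to Spot.
        return f"navigating Spot to {task} at {target} and retrieving the item"


def prompt_phone_intents():
    """First phone: yields intents decoded from the BCI (hard-coded for the sketch)."""
    yield from ["kitchen", "shelf", "rest"]


def run_ddog_like_loop():
    nav = NavigationPhone()
    progress = Queue()   # the two phones share task progress with each other
    for intent in prompt_phone_intents():
        if intent == "rest":
            progress.put("user chose to rest; Spot holds position")
            continue
        progress.put(nav.execute(intent))
    while not progress.empty():
        print(progress.get())


if __name__ == "__main__":
    run_ddog_like_loop()
```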

Ddog overview.

An overview of the Project Ddog system. | Source: MIT

Glide to work with people with blindness to navigate the world
https://www.therobotreport.com/glide-works-with-people-with-blindness-navigate-world-says-glidance-ceo/ | Sun, 31 Dec 2023
Glidance co-founder Amos Miller discusses the development and commercialization of the Glide device for aiding people with vision impairment.

a blind woman crossing the street in a crosswalk, guided by Glidance Glide device

The mission for Glide is to provide independence and agency to sight-impaired individuals. | Credit: Glidance

Glidance Inc. has been developing Glide, a robotic walking aid for people with vision impairments. The Seattle-based company’s device has an ergonomic handle, and its sensors are designed to help users avoid obstacles, find waypoints on maps, and stop at stairs and elevators.

In October 2023, Glidance, which is a resident member of MassRobotics, won the RoboBusiness Pitchfire startup competition. The Consumer Technology Association (CTA) has also recognized the company with an innovation award, and it plans to demonstrate Glide at CES 2024 from Jan. 9 to 12 in Las Vegas.

The Robot Report recently spoke with Amos Miller, founder and CEO of Glidance, about the development of Glide and plans for commercialization.

Tell us what you are building.

Miller: At Glidance, our mission is to revolutionize independent mobility for people with sight loss. And I don’t use the word “revolutionize” lightly.

We are doing that with a new self-guided mobility aid called “Glide” that uses AI and sensors to guide a person, show them the way, help them avoid obstacles, make them aware of what’s around them, and bring back their independence and their ability to get around with confidence.

What was your inspiration for this product, and how did you come up with the concept of giving independence back to people with vision impairments?

Miller: I lost my sight in my 20s as a result of a genetic condition called retinitis pigmentosa. I lost my sight gradually while I was finishing my computer science degree and starting my career in high tech.

By the age of 30, I had lost all useful sight. I have lived in Israel, the U.K., Singapore, and now in the U.S. I have lived my entire adult life with sight loss. Everywhere I go, I have to deal with independent mobility every day of my life. 

I am a guide-dog user and I can also use a cane, but I’m a terrible cane user. I’ve always appreciated the guide dog as an assistant. But a dog doesn’t help if you don’t know the layout of a train station, and you have to wait 30 minutes for somebody to come meet you and guide you to your train. Those are the types of challenges that I’ve always had to deal with daily. 

Why weren’t you a good cane user? What are some of the problems that you had trying to use a cane?

Miller: The cane is an amazing technology. It has been around for thousands of years. Today, it is by far the most used assistive technology, and people can buy it for 25 bucks. 

To use a cane effectively, you “shoreline” to feel an obstacle and get around that obstacle. Shoreline means that as you’re walking along a sidewalk, you tap along the edge of the building or the edge of the road so that you can keep a straight line. But you still have to be extremely well-oriented as to where you are within a town or a building. 

You have to take all the signals around you too, using all of your senses, to know where you are along the street. Mentally, you are feeling for the next landmark — it could be a tree, it could be a bush, it could be a lamppost. That requires a lot of concentration and a lot of skill. This skill can be developed, but it takes time to develop.

I use a guide dog, which is a very different guiding solution, but I still need my orientation. With a guide dog, the dog guides you through the world. So it’s probably closer to what the Glide does.

I would say that a lot of blind people would consider Glidance to be a little bit like an electronic guide dog. From a behavioral perspective, it has similarities in the way that it guides.

A real guide dog sees an obstacle ahead of time and takes you around it. And that’s exactly how Glidance works. Glide sees the obstacle and takes you around. The result is a much lower cognitive load and allows you to listen to your e-mails or talk on the phone while you move with Glide.


The evolution of Glide

How did Glide evolve from the initial concept, which pulled the user through the world, to the final version?

Miller: Initially, we explored putting a motor on the wheels, as that was the natural place to start. If you work with a sighted person who’s guiding you, there are two ways of doing that.

One is that you touch their elbow or the other is the unwelcome way, in that they grab your hand. If they grab your hand, they are now pulling you, and you feel a total loss of agency. You’re now just a trolley, along for the ride. 

The preferred way for sighted guiding is that I touch the elbow of the sighted person, and they walk. But I still determine speed, I determine the steps and the angles that I move my body in. 

When you have a robot that pulls you around, you lose that agency immediately. I have tried other robots, and when the robot is completely autonomous and pulls you around, you lose all sense of agency. It just doesn’t feel right. 

By removing the motors from the wheels, we made the device very light, so the user simply nudges it forward. The moment they start pushing it forward, the wheels start to servo and steer the way. But all the agency and control remain with the user. They don't have to move knobs up and down to control the speed of the robot; when they want to stop, they stop.

From an experience perspective, our users love that. It also reduces the complexity of the system, reduces the weight of the unit, doesn’t require big batteries, and lowers the overall cost.  

Can you take us through the obstacle-avoidance system? What are some of the sensors that you’re using? 

Miller: We have local and global planning onboard the system. We have a variety of sensors that are at the mobile unit level and another set at the bottom of the unit just above the wheels.

If there's something in the way, the sensors will detect it, and the wheels will start to steer you around it. The user is controlling the speed, and Glide knows how fast you're going. 
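
As a rough illustration of that split (the user supplies the pace, the device supplies only the steering), here is a hypothetical proportional-steering rule in Python. The distances, gains, and sensor model are invented for the sketch and are not Glidance's control software.

```python
import math


def steering_correction(obstacle_bearing_deg, obstacle_range_m, max_steer_deg=25.0):
    """Steer away from the nearest detected obstacle; closer obstacles get larger corrections.

    obstacle_bearing_deg: bearing of the obstacle relative to heading (positive = right).
    obstacle_range_m: distance to the obstacle in meters.
    """
    if obstacle_range_m > 2.0:                              # far enough away: hold course
        return 0.0
    urgency = (2.0 - obstacle_range_m) / 2.0                # grows from 0 to 1 as it nears
    direction = -math.copysign(1.0, obstacle_bearing_deg)   # steer away from the obstacle
    return direction * urgency * max_steer_deg


# The user sets the pace by pushing; the device only chooses the wheel angle.
for rng in (1.8, 1.2, 0.6):
    print(f"obstacle at {rng} m -> steer {steering_correction(15.0, rng):.1f} deg")
```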

How are the haptics giving the user feedback about pacing and other elements that it perceives?

Miller: The goal is for the robot to communicate with the user so that the user stops before an obstacle. Because the robot is not pulling the user along, it can indicate to the user through haptics and audio on the handle.

For example, it double-taps on the handle to indicate to the user to slow down.
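
Those handle signals amount to a small vocabulary of cues. A hypothetical mapping might look like the sketch below; only the double-tap cue for slowing down comes from the interview, and the other cue names and trigger conditions are assumptions.

```python
from enum import Enum


class HandleCue(Enum):
    DOUBLE_TAP = "slow down"             # mentioned in the interview
    LONG_BUZZ = "stop: obstacle ahead"   # illustrative
    SINGLE_TAP = "waypoint reached"      # illustrative


def choose_cue(speed_mps, obstacle_range_m, at_waypoint):
    """Pick at most one cue per control cycle (the priorities here are assumptions)."""
    if obstacle_range_m < 0.5:
        return HandleCue.LONG_BUZZ
    if speed_mps > 1.4 and obstacle_range_m < 2.0:
        return HandleCue.DOUBLE_TAP
    if at_waypoint:
        return HandleCue.SINGLE_TAP
    return None


print(choose_cue(speed_mps=1.6, obstacle_range_m=1.5, at_waypoint=False))
```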

How does learning to use a guide dog compare to the experience learning to use Glide?

Miller: A guide dog is amazing, but you do have to go to a residential program and train with a new dog. It takes weeks of training to trust the animal and learn to work as a team. It takes a lot of effort, plus you have to replace your dog every five to six years, which is again another upheaval in your life. 

We know that there are 7.3 million people with significant or total sight loss in the U.S., but there are only 10,000 new guide dogs available in any given year. I think dogs will continue to be part of the fabric of independent mobility for years to come.

At the same time, 99.9% of blind people will never have the benefit of a guide dog. 

A new Glide user can learn to use the solution in a couple of hours. For an individual who loses their sight late in life, which is now an emerging trend, a solution like Glide may be the fastest and simplest method to return a sense of independence to that individual. This is the opportunity for Glide.

Glidance gets ready for market

Will Glidance integrate Glide with GPS, Google Maps, or Apple Maps to use navigation instructions?

Miller: We will have mapping capabilities in Glide, but I don’t plan for Glide to be a navigation aid. I don’t want to build my own navigation app.

But it will work with existing navigation apps. So if you set a destination for a restaurant on Google, its walking directions are sent to Glide to use as waypoints. Glide will do the local planning to those waypoints along the way and get you to that restaurant.

We also plan to integrate apps like the Target app, where you create your shopping list on the Target app, and the Target app tells you where you are in the store and where the product is.

Glide has cameras and wheel odometry, and all the necessary sensors to do SLAM [simultaneous localization and mapping] in the store. So Glide could pair up with a Target app and help you get to that shelf in the store. 
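
The handoff Miller describes, in which an external app supplies coarse waypoints and Glide does the local planning between them, can be sketched as a simple waypoint follower. The coordinate format, tolerance, and local_plan stub below are assumptions for illustration; Glidance has not published this interface.

```python
from math import hypot


def local_plan(current, waypoint):
    """Stub for Glide's onboard local planner (obstacle-aware in the real device)."""
    return f"guide user from {current} toward {waypoint}"


def follow_route(start, waypoints, reach_tol_m=3.0):
    """Consume coarse waypoints (e.g., from a walking-directions app) one at a time."""
    position = start
    for wp in waypoints:
        while hypot(wp[0] - position[0], wp[1] - position[1]) > reach_tol_m:
            print(local_plan(position, wp))
            position = wp   # in reality, updated continuously from odometry and SLAM
    print("arrived at destination")


# Coordinates are made up; a navigation app would supply these in its own units.
follow_route(start=(0, 0), waypoints=[(0, 25), (30, 25), (30, 60)])
```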

Can you share the price point for the production solution?

Miller: This solution must be affordable. It’s not going to be $25 like a cane, but we are aiming at the price range of a cellphone subscription.

You’ll start with a basic subscription, depending on the level of features that you want. The basic features will fit the needs of new users, and world travelers can enhance the product by turning on additional features to meet their needs.

We are also working with the VA [U.S. Department of Veterans Affairs] and with insurance companies to make sure that anyone can get the device. We expect to start our beta program in the spring of 2024.

Where can folks find you at CES 2024?

Miller: The CTA Foundation gave us an award for a free booth at Eureka Park.

Editor’s note: This interview was transcribed by https://otter.ai and edited for clarity. You can listen to the full interview with Amos Miller in this episode of The Robot Report Podcast:

Sanctuary AI secures IP assets to advance touch, grasping in general-purpose robots
https://www.therobotreport.com/sanctuary-ai-secures-ip-assets-advancing-touch-grasping-general-purpose-robots/ | Wed, 20 Dec 2023
In addition to Sanctuary AI's internal developments, IP assets from Giant.AI and Tangible Research have accelerated progress on its roadmap.

close up of the hand of the Sanctuary Phoenix robot.

Sanctuary asserts that robotic manipulation including tactile sensing is critical to the success of humanoids. | Credit: Sanctuary AI

Sanctuary AI, which is developing general-purpose humanoid robots, has announced the recent acquisition of intellectual property, or IP, adding to its asset portfolio of touch and grasping technologies.

The Vancouver, Canada-based company said it expects this IP to play a pivotal role in its ambitious roadmap for the construction of general-purpose robots. According to Sanctuary AI, the integration of vision systems and touch sensors, which offer tactile feedback, is central to the realization of embodied artificial general intelligence (AGI).

It has already secured patents for numerous technologies developed both internally and through strategic acquisitions from external sources. The company acquired the latest assets from Giant.AI Inc. and Tangible Research.

Sanctuary AI is one of several robotics companies developing humanoid robots. The company unveiled the Phoenix humanoid robot in May 2023, when it publicly demonstrated its sixth-generation unit. This was also the first generation of humanoids from Sanctuary to feature bipedal locomotion.


Sanctuary AI takes touch-oriented approach

Sanctuary AI said it has taken a different approach from its competitors, starting with an intense investment in grasping capabilities combined with hand-eye coordination of human-analog hands and arms. 

Geordie Rose, co-founder and CEO of Sanctuary, was one of three executives from humanoid robotics companies to speak at RoboBusiness 2023’s keynote on the “State of Humanoid Robotics Development.” He described the importance of humanoids being able to do real work by manipulating any object that they might encounter.

This philosophy is a cornerstone to Sanctuary’s development roadmap, and Rose said it is essential to the success of humanoid robots in the future. It is also key to the company’s acquisition plan.

screenshot of recent Sanctuary patent illustrating functional hands.

Grasping is a cornerstone in Sanctuary’s product development roadmap, as shown here in this screenshot from a recent Sanctuary USPTO patent. | Credit: USPTO

Surveying the landscape on the way to AGI

Rose told The Robot Report that he believes that “the humanoid competitive landscape is bisecting into two general theses:”

  1. Single-purpose bipedal robots to move totes and boxes in retail, warehousing, and logistics
  2. General-purpose robots developing generalized software control systems for robots with hands that can act across multiple use cases and industries

Bipedal robots have been around for several decades, yet no significant supplier has matured or commercialized the technology despite having the necessary resources, Rose said, citing Honda, Boston Dynamics, and Toyota as examples.

Rose added that the “technology gap” for general-purpose humanoids is related to dexterous manipulation and grasping, which his company is developing and for which it has obtained patents. 

“Replicating human-like touch is potentially more important than vision when it comes to grasping and manipulation in unstructured environments,” said Jeremy Fishel, principal researcher at Sanctuary AI and founder of Tangible Research. “It has been an effort many years in the making to meet the complex blend of performance, features, and durability to achieve general-purpose dexterity.”

Sanctuary claimed that the best way to build the world’s first AGI is to build software for controlling sophisticated robots with humanlike senses (vision, hearing, proprioception, and touch), actions (movement, speech), and goals (completing work tasks).

keynote panel on stage at the RoboBusiness 2023 event.

RoboBusiness 2023 featured a keynote panel with speakers from three leading humanoid manufacturers. Seated left to right: moderator Mike Oitzman | Jonathan Hurst, chief robot officer of Agility Robotics | Geordie Rose, CEO of Sanctuary | Nick Paine, CTO of Apptronik.

IP portfolio around grasping grows

Sanctuary AI’s new IP assets expand on a growing patent portfolio that already protects several key grasping technologies for both non-humanoid and humanoid robots, including visual servoing, real-time simulation of the grasping process, and mapping between visual and haptic data. All of these are key to enabling any robot that must interact with and manipulate objects in unstructured or dynamic environments.

“In dynamic and unstructured environments, coordination between touch and vision is an absolute necessity,” said Rose. “We spent over a year performing industry-wide analysis before acquiring Jeremy’s team. Beyond the functional sensitivity, the technology is designed to be simulateable, enabling us to fast-track our AI model development.” 

According to Rose, “Sanctuary AI is focused on creating the world’s first human-like intelligence in general-purpose robots that will help address the massive labor shortages that organizations are facing around the world. This is a civilization-scale initiative that requires long-term planning and prioritization.”

“Our strategy is unique in that the prioritized focus is on the highest value part of the value chain, which is our clear focus on hand dexterity, fine manipulation, and touch,” he noted. “We believe hands, or more specifically grasping and manipulation, are the key pathway to applying real-world AI to the labor market, given that more than 98% of all work requires the dexterity of the human hand.”

“The acquisition of Tangible Research, the purchase of Giant.AI’s entire patent portfolio, along with our own independent activity, further deepens our IP and expertise in this critical area,” Rose explained.

screenshot of the Sanctuary humanoid robot from US patent.

The Phoenix humanoid robot has taken a deliberate and careful path to market, based on an IP portfolio to support Sanctuary’s product roadmap. | Credit: USPTO

Sanctuary AI patents show multi-purpose robot progress

You can learn a lot about a company’s technical trajectory by looking closely at its IP portfolio. Sanctuary AI’s patents from the past few years include “software-compensated robotics” (USPTO US 11312012 B2), which uses recurrent neural networks and image processing to control the operation and/or movement of an end effector.

A patent for “systems, devices, and methods for grasping by multi-purpose robots” (USPTO 11717963 B2) describes the training and operation of semi-autonomous robots to complete different work objectives.

Finally, the most cryptic of this group of patents is “haptic photogrammetry in robots and methods for operating the same” (USPTO US 11717974 B1), which describes methods for operating robots based on environment models including haptic data.
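
To give a flavor of what mapping between visual and haptic data can look like in practice, here is a toy example that fuses a vision-based grasp score with fingertip pressure readings to decide whether to regrasp. It is a generic illustration of visuo-tactile fusion with invented thresholds, not Sanctuary AI's patented method.

```python
from dataclasses import dataclass


@dataclass
class GraspObservation:
    visual_confidence: float   # 0..1, from an image-based grasp predictor
    finger_pressures: list     # normalized tactile readings, one per fingertip


def grasp_is_stable(obs: GraspObservation, min_contacts=2, pressure_floor=0.15):
    """Toy fusion rule: trust vision only if enough fingertips report real contact."""
    contacts = sum(1 for p in obs.finger_pressures if p > pressure_floor)
    tactile_score = contacts / max(len(obs.finger_pressures), 1)
    fused = 0.5 * obs.visual_confidence + 0.5 * tactile_score
    return contacts >= min_contacts and fused > 0.6


# Vision says the grasp looks good, but only one fingertip actually feels the object.
obs = GraspObservation(visual_confidence=0.9, finger_pressures=[0.4, 0.05, 0.02])
print("stable grasp" if grasp_is_stable(obs) else "regrasp needed")
```

In this toy case, vision alone would accept the grasp, but the tactile readings veto it; that interplay between seeing and feeling is the kind of capability the patents describe at a far higher level of sophistication.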

The market for humanoids has made notable progress in 2023, with plenty of product announcements. Agility Robotics offers one of the most mature systems on the market and has announced publicly that it is testing its robots in both Amazon warehouses and at GXO Logistics.

You can see Phoenix do things like placing items in a plastic bag, stacking blocks, and more as part of Sanctuary AI’s “Robots Doing Stuff” series on its YouTube channel.

RoboBusiness Pitchfire winner Glidance helping sight-impaired individuals regain independence
https://www.therobotreport.com/robobusiness-pitchfire-winner-glidance-helping-sight-impaired-individuals-regain-their-independence/ | Fri, 15 Dec 2023
An in-depth conversation with Amos Miller, CEO and cofounder of Glidance, the 2023 RoboBusiness Pitchfire winner.


In this episode, we learn about the innovative solution from Glidance, the 2023 RoboBusiness Pitchfire winner.


Glidance CEO Amos Miller (center) was surrounded by the RoboBusiness Pitchfire judges after winning the event.

Cohosts Steve Crowe and Mike Oitzman sit down with Glidance co-founder and CEO Amos Miller to learn all about the genesis of the company and its physical guidance robot. The device guides sight-impaired individuals by perceiving the world around them and leading them along a safe path to their destination. It is easy to use and quick to learn, and Amos promises it will restore independence to sight-impaired users. Amos lost his sight to retinitis pigmentosa in his 20s.

Glidance is a member of MassRobotics and launched in the shared workspace in January 2023. As Amos describes in the podcast, the company started with an idea and quickly prototyped “Wizard of Oz” style. “It’s inspiring to have startups like Glidance in our community,” said Joyce Sidopoulos, chief of operations at MassRobotics. “Their technology is revolutionizing accessibility and is a testament to the positive impact technology and robotics can have on people’s lives and our society.”

an illustration of an individual using the Glidance solution.

Glide has a small form factor and is designed to have a price tag similar to a cell phone. | Credit: Glidance

While guide dogs are a tremendous solution and companion for blind individuals, unfortunately, the need far outstrips the number of available guide dogs in any given year. The Glidance robot has the opportunity to help blind individuals regain independence and agency, especially those who lose their sight late in life. The Glide unit goes into beta testing in the middle of 2024 and promises to be as affordable as a new cell phone.

If you would like to learn more, go to the Glidance website: https://glidance.io/

The company will also be exhibiting at CES 2024, in the Eureka Park exhibit hall.

 

Episode timeline

18:36  Interview with Amos Miller, CEO and co-founder of Glidance

News of the week

  •  Meet the artist training Spot robots to make their own art
    • A live webcam of an art exhibit by Agnieszka Pilat, called Heterobota, is on display at the National Gallery of Victoria’s Triennial Show in Melbourne, Australia. The robots are programmed to understand a range of commands, and they will act autonomously to execute them in whatever order is desired. Each of Pilat’s robots is programmed with a different personality and a different role to play in the exhibit. She described them as a “nascent moment in technology,” with emerging personalities mimicking how organisms become specialized over time.
  • Tesla demonstrates Optimus Gen 2 dexterity, recalls 2M vehicles
    • Tesla released a video showing the improving capabilities of its Optimus humanoid robot, but it faces safety scrutiny over Autopilot.
    • BAD NEWS: 
      • The company is recalling more than 2 million vehicles as the National Highway Traffic Safety Administration (NHTSA) continues to investigate safety problems around its Autopilot system.
      • When Tesla first announced the feature, some drivers recorded themselves with their hands off the steering wheel, and critics have asserted that the company didn’t clearly state the risks of relying too heavily on Autopilot.
      • In February, Tesla issued a voluntary recall of 363,000 Model S, Model 3, Model X and Model Y vehicles.
      • The NHTSA has reviewed 956 crashes in which Autopilot was allegedly in use, reported The Wall Street Journal. The agency expressed concern about the software and its use rather than Tesla’s reliance on vision over lidar. The Washington Post said that at least eight incidents resulted in fatalities or serious injuries.

SRI International designs XRGo to protect pharmaceutical workers through teleoperation
https://www.therobotreport.com/sri-international-designs-xrgo-to-protect-pharmaceutical-workers-through-teleoperation/ | Tue, 12 Dec 2023
SRI International says its XRGo teleoperation capability could help ensure pharmaceutical quality and open up jobs to a wider range of candidates.

SRI International is developing the XRGo telemanipulation robot

SRI’s XRGo telemanipulation software working with a Staubli robot arm. Source: SRI International

In pharmaceutical manufacturing, every time people enter a clean room to address a manufacturing error, they could disrupt the sterile environment. SRI International has developed the XRGo telemanipulation software, which it said could allow staffers to remotely control a third-party robotic arm.

XRGo provides fine controls so an operator can conduct tasks such as adjusting misaligned test tubes or performing routine maintenance without ever needing to enter the room, said SRI. This also protects the pharmaceuticals from contamination, the company said.
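
One common way telemanipulation systems achieve that kind of fine control is to scale and clamp the operator's hand motion before it reaches the arm, with a clutch for repositioning. The sketch below illustrates the general idea with made-up parameters; SRI has not published XRGo's control interface, so none of this should be read as its actual API.

```python
def teleop_step(operator_delta_mm, clutch_pressed, scale=0.25, max_step_mm=2.0):
    """Map one increment of operator hand motion to a smaller, bounded arm motion.

    operator_delta_mm: (dx, dy, dz) measured from the VR controller since the last step.
    clutch_pressed: when False, the operator can reposition without moving the arm.
    """
    if not clutch_pressed:
        return (0.0, 0.0, 0.0)
    scaled = [scale * d for d in operator_delta_mm]
    # Clamp each axis so a sudden hand jerk cannot produce a large arm motion.
    return tuple(max(-max_step_mm, min(max_step_mm, d)) for d in scaled)


# A 12 mm hand movement becomes a clamped 2 mm arm movement: finer, steadier motion.
print(teleop_step((12.0, 3.0, -1.0), clutch_pressed=True))
```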

“Automated processes require human oversight and occasional intervention,” wrote Bill Rusitzky, vice president of business development at SRI, in a blog post.

“By allowing people to do this from outside the sterile area, we avoid having to endanger the pharmaceuticals or, in some cases, throw them out,” he added. “As much as possible, we want to remove the interaction between people and the pharmaceutical product. The Annex 1 regulation in Europe is just the start of what governments will start to require.”


XRGo could open up pharmaceutical jobs

Remote manipulation could also help protect pharmaceutical workers, noted SRI. Some products, such as the radioactive chemicals used in chemotherapy, are hazardous.

“Removing people from those situations is critical and has the added long-term benefit of opening these jobs up to people who were not physically capable of performing them in person,” Rusitzky said. “The pharmaceutical companies that we’re working with are confident that this is something that the industry needs to do, and they’re excited about XRGo’s potential in this space.”

Menlo Park, Calif.-based SRI, which has developed wearable and teleoperated systems for a variety of uses, recently rebranded to reflect its heritage of scientific and technological innovation.

SRI VP discusses telemanipulation development

How long have you been working on XRGo for minimally invasive surgery?

Rusitzky: SRI has been working in the field of robotics for decades. Some may remember Shakey the robot from the late 1960s. We also spun out Intuitive Surgical in the late ’90s.

XRGo continues that expertise in the field.

Does XRGo use standard augmented reality, virtual reality, and control hardware? 

Rusitzky: Yes, we use standard off-the-shelf VR headsets. Today, we use the Meta/Facebook VR headset.

What robots does the teleoperation software work with? Are they your own or off-the-shelf models?

Rusitzky: XRGo supports SRI‘s Taurus and Staubli, and we are working on Denso, Omron, and others. The system is designed to work with third-party robot arms.

Did SRI need U.S. Food and Drug Administration (FDA) approval for this technology? When do you expect such approvals?

Rusitzky: We are working with pharma companies to test use cases. Once the use cases have been fully tested, we will work with the pharma companies to obtain any FDA approvals that are required.

Will you license it, as you did the da Vinci technology to Intuitive Surgical?

Rusitzky: We are working on the right business model. We expect to license XRGo.

What price point are you aiming for with the system, and who are the target customers? When will you pursue other potential markets?

Rusitzky: We aren’t at liberty to publicly share a price at this point.

Targets for now [include] pharma, but we can also see this working wherever humans can’t physically go, such as bomb disposal, food processing, and high-risk energy maintenance in offshore oil.

Are customers using the proof of concept, or are you still refining it and waiting for approvals? 

Rusitzky: Yes, we are working with one of the largest pharma companies.

ISO and ASTM define standard for additive manufacturing in construction
https://www.therobotreport.com/iso-astm-define-additive-manufacturing-construction-standard/ | Fri, 08 Dec 2023
ISO and ASTM have defined an additive construction standard in what could be the first of joint efforts around robotics in that industry.

Additive manufacturing for construction gets safety standards from ISO.

ISO and ASTM have jointly defined standards for additive manufacturing in construction. Credit: Adobe Stock

Robotic arms and additive manufacturing are changing how materials are handled and how buildings are constructed. However, as robots enter new environments and take on new tasks, the need grows for developers, integrators, and end users to be aware of quality and safety requirements. The International Organization for Standardization, or ISO, has posted documents to help them meet those requirements.

Standard defines safe design for industrial end effectors

One relevant industrial standard, ISO/TR 20218-1:2018, provides guidance on the safe design and integration of end effectors for robots. It also supplements ISO 10218-2:2011 on how to integrate robots.

The document covers collaborative robot applications, where robots share workspace with people.

“In such collaborative applications, the end-effector design is of major importance, particularly characteristics such as shapes, surfaces and application function (e.g. clamping forces, residual material generation, temperature),” wrote ISO.

It emphasized the importance of conducting safety assessments. Even when robots are marketed as collaborative, their payloads or motion may not be.

Safety incidents are rare, but interested parties should do their due diligence when developing and deploying robots, noted Aaron Prather, director of robotics and autonomous systems at ASTM International.

ISO and ASTM publish first joint standard for AC

Announced this week, ISO/ASTM 52939:2023 specifies qualification principles for structural and infrastructure elements in additive manufacturing for construction. It provides criteria for additive construction (AC) processes, quality, and factors for system operations, as well as processes on a site.

The new standard applies to all additive manufacturing technologies in building and construction of load bearing and non-load bearing structures, as well as structural elements for residential and commercial applications. It does not cover metals, material properties, operational safety, packaging of equipment and materials, or guidelines for operating specific robots.

ISO/ASTM 52939:2023 is the first jointly published standard from ISO and ASTM International, said Prather. Standards bodies typically focus on different technologies, applications, and industries, but global cooperation can improve worker safety, product quality, and regulatory environments, he noted.

“This addresses moving from traditional construction standards and bridging over to additive,” Prather told The Robot Report. “[It could be] the first step on many to come.”

“This standard also sets the basis for the coming construction robot standards that are in the works across numerous organizations,” he added.

The standard is voluntary, and builders must follow local and regional requirements, noted ISO. In October, ASTM International announced a roadmap for digitalization of the construction industry, sponsored by the National Institute of Standards and Technology (NIST).


Additive manufacturing changing construction 

Several companies have demonstrated the potential of additive manufacturing for production-grade elements and 3D-printed buildings, including ABB, Branch Technology, HS2, ICON Technology, Mighty Buildings, and SQ4D.

These systems often combine concrete extruders with gantry robots or industrial robot arms. There has also been research into using drones for repair of difficult-to-reach structures, and NASA conducted a 3D-printed habitat challenge in 2019.

Possible advantages include stronger and unique architectures, less waste of materials, and even reduction in carbon emissions, according to research reports from Palgrave Macmillan and Frontiers Media. The global market for 3D printing systems in construction is modest but could increase from $13.38 million in 2023 to $22.63 million by 2030 at a compound annual growth rate (CAGR) of 7.8%, predicted Virtue Market Research.
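
That forecast is easy to sanity-check: compounding the 2023 figure at 7.8% per year for seven years should land near the 2030 figure. A quick check, assuming simple annual compounding:

```python
start_2023_musd = 13.38
cagr = 0.078
years = 2030 - 2023   # seven years of compounding

projected_2030 = start_2023_musd * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030:.2f} million")   # about $22.6 million
```

The result lands within a few hundredths of the cited $22.63 million figure.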
