Microprocessors / SoCs Archives - The Robot Report

BlackBerry and AMD partner to reduce latency in robotics

BlackBerry and Advanced Micro Devices said they plan to address the need for 'hard' real-time capabilities in robotics-focused hardware.


AMD’s Kria K26 SOM will power the hardware with the BlackBerry QNX SDP. | Source: AMD

BlackBerry Ltd. announced at Embedded World this week that it is collaborating with Advanced Micro Devices Inc. The partners said they want to enable next-generation robotics by reducing latency and jitter and delivering “repeatable determinism.”

The companies said they will jointly “address the critical need for ‘hard’ real-time capabilities in robotics-focused hardware.” BlackBerry and AMD plan to release an affordable system-on-module (SOM) platform that delivers enhanced performance, reliability, and scalability for robotic systems in industrial and healthcare applications.

This platform will combine BlackBerry’s QNX expertise in real-time foundational software and the QNX Software Development Platform (SDP) with heterogeneous hardware powered by the AMD Kria K26 SOM, which features both Arm cores and FPGA (field-programmable gate array) logic.

“With the QNX Software Development Platform, customers can start development quickly on the AMD Kria KR260 Starter Kit and seamlessly scale to other higher-performance AMD platforms as their needs evolve,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD.

“Combining the industry-leading strengths of AMD and QNX will provide a foundation platform that opens new doors for innovation and takes the future of robotics technology well beyond the constraints experienced until now,” he said.

BlackBerry, AMD provide capabilities with less latency

With Kria, an Arm subsystem can power the advanced capabilities of the QNX microkernel real-time operating system (RTOS), said Advanced Micro Devices and BlackBerry. It can do this while allowing users to run low-latency, deterministic functions on the programmable logic of the AMD Kria KR260 robotics starter kit.

This combination enables sensor fusion, high-performance data processing, real-time control, industrial networking, and reduced latency in robotics applications, said the companies.
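
The “repeatable determinism” the partners describe is usually quantified as scheduling jitter: how far each cycle of a periodic control task drifts from its deadline. As a rough, hypothetical illustration (not BlackBerry or AMD code), the C++ sketch below measures worst-case wake-up jitter for a 1 kHz control loop on any POSIX-style OS such as QNX; a hard real-time design would tie the loop to the RTOS scheduler or, as described above, push it into FPGA logic entirely.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using namespace std::chrono;
    constexpr auto period = milliseconds(1);        // 1 kHz control loop
    auto next = steady_clock::now() + period;
    nanoseconds worst{0};

    for (int i = 0; i < 5000; ++i) {
        std::this_thread::sleep_until(next);        // wake for this cycle
        auto late = steady_clock::now() - next;     // how late did we wake?
        worst = std::max(worst, duration_cast<nanoseconds>(late));
        // control_step();  // would read sensors, compute, command actuators
        next += period;     // fixed schedule, so lateness never accumulates
    }
    std::printf("worst-case wake-up jitter: %lld ns\n",
                static_cast<long long>(worst.count()));
}
```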

They added that customers can benefit from integration and optimization of software and hardware components. This results in streamlined development processes and accelerated time to market for robotics innovations, said AMD and BlackBerry. 

“An integrated solution by BlackBerry QNX through our collaboration with AMD will provide an integrated software-hardware foundation offering real-time performance, low latency, and determinism to ensure that critical robotic tasks are executed with the same level of precision and responsiveness every single time,” said Grant Courville, vice president of product and strategy at BlackBerry QNX.

“These are crucial attributes for industries carrying out finely tuned operations, such as the fast-growing industries of autonomous mobile robots and surgical robotics,” he added. “Together with AMD, we are committed to driving technological advancements that address some of these most complex challenges and transform the future of the robotics industry.”

The integrated system is now available to customers.

See AMD at Robotics Summit & Expo

For more than 50 years, Advanced Micro Devices has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

AMD recently released the Embedded+ architecture for edge AI, the Spartan UltraScale+ FPGA family, and Versal Gen 2 for AI and edge processing.

Kosta Sidopoulos, a product engineer at AMD, will be speaking at the Robotics Summit & Expo, which takes place May 1 and 2 at the Boston Convention and Exhibition Center. His talk on “Enabling Next-Gen AI Robotics” will delve into the unique features and capabilities of AMD’s AI-enabled products. It will highlight their adaptability and scalability for diverse robotics applications.

Registration is now open for the Robotics Summit & Expo, which will feature more than 70 speakers, 200 exhibitors, and up to 5,000 attendees, as well as numerous networking opportunities.


AMD releases Versal Gen 2 to improve support for embedded AI, edge processing

The first devices in the AMD Versal Series Gen 2 target high efficiency for their AI Engines, and Subaru is one of the first customers.


The AMD Versal AI Edge and Prime Gen 2 are next-gen SoCs. Source: Advanced Micro Devices

To enable more artificial intelligence on edge devices such as robots, hardware vendors are adding to their processor portfolios. Advanced Micro Devices Inc. today announced the expansion of its adaptive system on chip, or SoC, line with the new AMD Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2.

“The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems,” stated Salil Raje, senior vice president and general manager of the Adaptive and Embedded Computing Group at AMD.

“Based on over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest-generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low end to high end,” he added.

For more than 50 years, AMD said it has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

Versal Gen 2 addresses three phases of accelerated AI

Advanced Micro Devices said the Gen 2 systems put preprocessing, AI inference, and postprocessing on a single device to deliver accelerated AI. This provides the optimal mix of accelerated AI to meet the complex processing needs of real-world embedded systems, it asserted.

  • Preprocessing: The new systems include FPGA (field-programmable gate array) logic fabric for real-time preprocessing; flexible connections to a wide range of sensors; and implementation of high-throughput, low-latency data-processing pipelines.
  • AI inference: AMD said it provides an array of vector processors in the form of next-generation AI Engines for efficient inference.
  • Postprocessing: Arm CPU cores provide the power needed for complex decision-making and control for safety-critical applications, said AMD.

“This single-chip intelligence can eliminate the need to build multi-chip processing solutions, resulting in smaller, more efficient embedded AI systems with the potential for shorter time to market,” the company said.
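
To make the three-phase split concrete, here is a schematic C++ sketch of the dataflow AMD describes. The function bodies are placeholders for what would really be an FPGA pipeline, AI Engine kernels, and Arm application code; none of the names below are AMD APIs.

```cpp
#include <array>
#include <cstdio>

using Frame  = std::array<float, 4>;   // stand-in for raw sensor data
using Tensor = std::array<float, 4>;   // stand-in for model input/output

// Phase 1, FPGA fabric: filter/format sensor data at line rate.
Frame preprocess(const Frame& raw) { return raw; }

// Phase 2, AI Engines: run the trained model on the prepared data.
Tensor infer(const Frame& f) { return f; }

// Phase 3, Arm cores: turn inference output into a decision.
bool postprocess(const Tensor& t) { return t[0] > 0.5f; }

int main() {
    Frame raw{0.9f, 0.1f, 0.4f, 0.7f};
    // All three phases run on one Versal device instead of three chips.
    bool act = postprocess(infer(preprocess(raw)));
    std::printf("decision: %s\n", act ? "act" : "wait");
}
```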


AMD builds to maximize power and compute

AMD said its latest systems offer up to 10x more scalar compute compared with the first generation, so the devices can more efficiently handle sensor processing and complex scalar workloads. The Versal Prime Gen 2 devices include new hard IP for high-throughput video processing, including up to 8K multi-channel workflows.

This makes the scalable portfolio suitable for applications such as ultra-high-definition (UHD) video streaming and recording, industrial PCs, and flight computers, according to the company.

In addition, the new SoCs include new AI Engines that AMD claimed will deliver three times the TOPS (trillions of operations per second) per watt of the first-generation Versal AI Edge Series devices.

“Balancing performance, power, [and] area, together with advanced functional safety and security, Versal Series Gen 2 devices deliver new capabilities and features,” said AMD. It added that they “enable the design of high-performance, edge-optimized products for the automotive, aerospace and defense, industrial, vision, healthcare, broadcast, and pro AV [professional audio/video] markets.”

“Single-chip intelligence for embedded systems will enable pervasive AI, including robotics … smart city, cloud and AI, and the digital home,” said Manuel Uhm, director for Versal marketing at AMD, in a press briefing. “All will need to be accelerated.”

The Versal Prime Gen 2 is designed for high-throughput applications such as video processing. Source: AMD

Versal powers Subaru’s ADAS vision system

Subaru Corp. is using AMD’s adaptive SoC technology in current vehicles equipped with its EyeSight advanced driver-assistance system (ADAS). EyeSight is integrated into certain car models to enable advanced safety features including adaptive cruise control, lane-keep assist, and pre-collision braking.

“Subaru has selected Versal AI Edge Series Gen 2 to deliver the next generation of automotive AI performance and safety for future EyeSight-equipped vehicles,” said Satoshi Katahira. He is general manager of the Advanced Integration System Department and ADAS Development Department, Engineering Division, at Subaru.

“Versal AI Edge Gen 2 devices are designed to provide the AI inference performance, ultra-low latency, and functional safety capabilities required to put cutting-edge AI-based safety features in the hands of drivers,” he added.

Vivado and Vitis part of developer toolkits

AMD said its Vivado Design Suite tools and libraries can help boost productivity and streamline hardware design cycles, offering fast compile times and enhanced-quality results. The company said the Vitis Unified Software Platform “enables embedded software, signal processing, and AI design development at users’ preferred levels of abstraction, with no FPGA experience needed.”

Earlier this year, AMD released the Embedded+ architecture for accelerated edge AI, as well as the Spartan UltraScale+ FPGA family for edge processing.

Early-access documentation for Versal Series Gen 2 is now available, along with first-generation Versal evaluation kits and design tools. AMD said it expects Gen 2 silicon samples to be available in the first half of 2025, followed by evaluation kits and system-on-module samples in mid-2025, and production silicon in late 2025.

Top 10 robotics news stories of March 2024

From events like MODEX and GTC to new product launches, there was no shortage of robotics news to cover in March 2024.

March 2024 was a non-stop month for the robotics industry. From events such as MODEX and GTC to exciting new deployments and product launches, there was no shortage of news to cover. 

Here are the top 10 most popular stories on The Robot Report this past month. Subscribe to The Robot Report Newsletter or listen to The Robot Report Podcast to stay updated on the latest technology developments.


10. Robotics Engineering Career Fair to connect candidates, employers at Robotics Summit

The career fair will draw from the general robotics and artificial intelligence community, as well as from attendees at the Robotics Summit & Expo. Past co-located career fairs have drawn more than 800 candidates, and MassRobotics said it expects even more people at the Boston Convention and Exhibition Center this year. Read More


9. SMC adds grippers for cobots from Universal Robots

SMC recently introduced a series of electric grippers designed to be used with collaborative robot arms from Universal Robots. Available in basic and longitudinal types, SMC said the LEHR series can be adapted to different industrial environments like narrow spaces. Read More


8. Anyware Robotics announces new add-on for Pixmo unloading robots

Anyware Robotics announced in March 2024 an add-on for its Pixmo robot for truck and container unloading. The patent-pending accessory includes a vertical lift with a conveyor belt that is attached to Pixmo between the robot and the boxes to be unloaded. Read More


7. Accenture invests in humanoid maker Sanctuary AI in March 2024

In its Technology Vision 2024 report, Accenture said 95% of the executives it surveyed agreed that “making technology more human will massively expand the opportunities of every industry.” Well, Accenture put its money where its mouth is. Accenture Ventures announced a strategic investment in Sanctuary AI, one of the companies developing humanoid robots. Read More


6. Cambrian Robotics obtains seed funding to provide vision for complex tasks

Machine vision startup Cambrian Robotics Ltd. has raised $3.5 million in seed+ funding. The company said it plans to use the investment to continue developing its AI platform to enable robot arms “to surpass human capabilities in complex vision-based tasks across a variety of industries.” Read More


5. Mobile Industrial Robots launches MiR1200 autonomous pallet jack

Autonomous mobile robots (AMRs) are among the systems benefitting from the latest advances in AI. Mobile Industrial Robots at LogiMAT in March 2024 launched the MiR1200 Pallet Jack, which it said uses 3D vision and AI to identify pallets for pickup and delivery “with unprecedented precision.” Read More


4. Reshape Automation aims to reduce barriers of robotics adoption

Companies in North America bought 31,159 robots in 2023. That’s a 30% decrease from 2022. And that’s not sitting well with robotics industry veteran Juan Aparicio. After working at Siemens for a decade and stops at Ready Robotics and Rapid Robotics, Aparicio hopes his new startup Reshape Automation can chip away at this problem. Read More


3. Mercedes-Benz testing Apollo humanoid

Apptronik announced that leading automotive brand Mercedes-Benz is testing its Apollo humanoid robot. As part of the agreement, Apptronik and Mercedes-Benz will collaborate on identifying applications for Apollo in automotive settings. Read More


2. NVIDIA announces new robotics products at GTC 2024

The NVIDIA GTC 2024 keynote kicked off like a rock concert in San Jose, Calif. More than 15,000 attendees filled the SAP Arena in anticipation of CEO Jensen Huang’s annual presentation of the latest product news from NVIDIA. He discussed the new Blackwell platform, improvements in simulation and AI, and all the humanoid robot developers using the company’s technology. Read More


1. Schneider Electric unveils new Lexium cobots at MODEX 2024

In Atlanta, Schneider Electric announced the release of two new collaborative robots: the Lexium RL 3 and RL 12, as well as the Lexium RL 18 model coming later this year. From single-axis machines to high-performance, multi-axis cobots, the Lexium line enables high-speed motion and control of up to 130 axes from one processor, said the company. It added that this enables precise positioning to help solve manufacturers’ production, flexibility, and sustainability challenges. Read More

 

Delta Electronics demonstrates digital twin, power systems at GTC

Delta Electronics has developed digital twins with NVIDIA for designing and managing industrial automation and AI data centers.


Delta exhibited its data center and other technologies at NVIDIA GTC 2024. Source: Delta Electronics

SAN JOSE, Calif. — Artificial intelligence and robotics both devour power, but simulation, next-generation processors, and good product design can mitigate the draw. At NVIDIA Corp.’s GTC event last week, Delta Electronics Inc. demonstrated how its digital twin platform, developed on NVIDIA Omniverse, can help enhance smart manufacturing capabilities.

“We’ve partnered with NVIDIA on energy-efficient designs to support AI,” Franziskus Gehle, general manager of the Power Solutions business unit at Delta, told The Robot Report. “We’ve co-developed 5.5 kW designs for 98% efficiency.”

The Taipei, Taiwan-based company explained how its technologies can benefit industrial automation and warehouse operations. Delta also showed its ORV3 AI server infrastructure product and DC converters and other technologies designed to support graphics processing unit (GPU) operations.


Delta designs simulation to manage automation

Founded in 1971, Delta Electronics said it is a global leader in switching power supplies and thermal management products. The company’s portfolio includes systems for industrial automation, building automation, telecommunications power, data center infrastructure, electric vehicle charging, renewable energy, and energy storage and display.

Delta added that its energy-efficient products can support sustainable development. The company has sales offices, research and development centers, and factories at nearly 200 locations around the world. It provides articulated robot arms, SCARA robots, and robot controllers with integrated servo drives.

“Since 1995, Delta has supplied automation components, and it now offers a full product line,” said Claire Ou, senior principal for strategic marketing in the Power and System business group at Delta. “We’ve used NVIDIA simulation for our customers and ourselves, for machine tools and semiconductors.”

“Because Delta has a lot of factories around the world, it’s best to do test runs to fine-tune our hardware and software before implementation,” she told The Robot Report. “Our solutions can monitor and manage warehouses and factories for maximum productivity.”

Delta has also developed its own standalone simulation software alongside NVIDIA Omniverse, and it can integrate data from both. In the past, automation designers, manufacturers, and users worked with different tools, but customers are now optimistic about easier collaboration, said Ou.

“In 2012, Industry 4.0 was about digitalizing manufacturing,” she noted. “Since then, our management and monitoring systems have been integrated into global factories. We’re also working with data for construction and smart buildings.”

NVIDIA partners for digital twins to manage power

“We are honored to be the only power and thermal management solutions provider at NVIDIA GTC 2024, where we will showcase the NVIDIA Omniverse-powered digital twin we have developed, which underscores our superior expertise in next-generation electronics manufacturing,” stated Mark Ko, vice chairman of Delta Electronics. “We look forward to helping transcend the boundaries of energy efficiency in the AI realm using the latest technologies.”

Delta has deployed its power management technology to leading cloud solution providers (CSPs) and AI developers such as Meta (parent of Facebook), Microsoft, and Amazon Web Services, noted Gehle.

“Our customers have doubled their power requirements in the past six months rather than in years,” he said. “All of their road maps anticipate a significant increase in power demand, so they need management in place for next-generation GPUs and power-hungry generative AI.”

“We used digital twins and Omniverse to design and pre-qualify our products worldwide,” Gehle explained. “It’s important that our data center plans are aligned with those of our customers.”

At GTC, Delta presented an integrated Open Rack Version 3 (ORV3) system for AI server infrastructure with server power supplies boasting energy efficiency as high as 97.5%. It also included SD-WAN, Common Redundant Power Supply Units (CRPS) with 54Vdc output, ORV3 18kW/33kW HPR Power Shelves, a Battery Backup Unit (BBU), a Mini UPS, and a liquid cooling system.
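
As a back-of-envelope check of what such efficiency figures mean at rack scale, the short calculation below applies the quoted 97.5% to one 33 kW shelf; the arithmetic is ours, not Delta's.

```cpp
// Power drawn and heat dissipated by a 33 kW shelf at 97.5% efficiency.
#include <cstdio>

int main() {
    double out_kw = 33.0;                 // shelf output, per the article
    double eff    = 0.975;                // quoted peak efficiency
    double in_kw  = out_kw / eff;         // power drawn upstream
    std::printf("input %.2f kW, dissipated %.0f W as heat\n",
                in_kw, (in_kw - out_kw) * 1000.0);
}
```

At this scale, every fraction of a percent of efficiency is hundreds of watts that the cooling system no longer has to remove.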

In addition, the company showed its portfolio of DC/DC converters, power chokes, and 3D Vapor Chambers for GPU operations.

“The new era of AI-powered manufacturing is marked by digital twins and synthetic data, which can enhance efficiency and productivity before actual production begins,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, in a release.

“By developing its digital platform on NVIDIA Omniverse, Delta can virtually link specific production lines and aggregate data from a diverse range of equipment and systems to create a digital twin of its operations,” he said. “And with NVIDIA Isaac Sim, it can generate synthetic data to train its computer models to achieve 90% accuracy.”

NVIDIA announces new robotics products at GTC 2024

NVIDIA CEO Jensen Huang wowed the crowd in San Jose with the company's latest processor, AI, and simulation product announcements.


NVIDIA CEO Jensen Huang ended his GTC 2024 keynote backed by life-size images of the various humanoids in development that are powered by the Jetson Orin computer. | Credit: Eugene Demaitre

SAN JOSE, Calif. — The NVIDIA GTC 2024 keynote kicked off like a rock concert yesterday at the SAP Arena. More than 15,000 attendees filled the arena in anticipation of CEO Jensen Huang’s annual presentation of the latest product news from NVIDIA.

To build the excitement, the waiting crowd was mesmerized by an interactive, real-time generative art display running live on the main stage screen, driven by the prompts of artist Refik Anadol.

New foundation for humanoid robotics

The big news from the robotics side of the house is that NVIDIA launched a new general-purpose foundation model for humanoid robots called Project GR00T. This new model is designed to bring robotics and embodied AI together while enabling the robots to understand natural language and emulate movements by observing human actions.

Project GR00T training model. | Credit: NVIDIA

GR00T stands for “Generalist Robot 00 Technology,” and with the race for humanoid robotics heating up, this new technology is intended to help accelerate development. GR00T is a large multimodal model (LMM) providing robotics developers with a generative AI platform to begin the implementation of large language models (LLMs).

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Huang. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

GR00T uses the new Jetson Thor

As part of its robotics announcements, NVIDIA unveiled Jetson Thor for humanoid robots, based on the NVIDIA Thor system-on-a-chip (SoC). Significant upgrades to the NVIDIA Isaac robotics platform include generative AI foundation models and tools for simulation and AI workflow infrastructure.

The Thor SoC includes a next-generation GPU based on NVIDIA Blackwell architecture with a transformer engine delivering 800 teraflops of 8-bit floating-point AI performance. With an integrated functional safety processor, a high-performance CPU cluster, and 100GB of Ethernet bandwidth, it can simplify design and integration efforts, claimed the company.

Project GR00T, a general-purpose multimodal foundation model for humanoids, enables robots to learn different skills. | Credit: NVIDIA

NVIDIA showed humanoids in development with its technologies from companies including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics.

“We are at an inflection point in history, with human-centric robots like Digit poised to change labor forever,” said Jonathan Hurst, co-founder and chief robot officer at Agility Robotics. “Modern AI will accelerate development, paving the way for robots like Digit to help people in all aspects of daily life.”

“We’re excited to partner with NVIDIA to invest in the computing, simulation tools, machine learning environments, and other necessary infrastructure to enable the dream of robots being a part of daily life,” he said.

NVIDIA updates Isaac simulation platform

The Isaac tools that GR00T uses are capable of creating new foundation models for any robot embodiment in any environment, according to NVIDIA. Among these tools are Isaac Lab for reinforcement learning, and OSMO, a compute orchestration service.

Embodied AI models require massive amounts of real and synthetic data. The new Isaac Lab is a GPU-accelerated, lightweight, performance-optimized application built on Isaac Sim for running thousands of parallel simulations for robot learning.
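
The throughput gain comes from batching: every environment advances through the same operations each tick, so the inner loop maps naturally onto one GPU-wide kernel. The toy CPU sketch below shows only that shape; it is not Isaac Lab's actual (Python-based) API, and real training adds physics, observations, and randomized environments.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int kEnvs = 4096;                        // environments stepped in lockstep
    std::vector<double> state(kEnvs, 1.0), ret(kEnvs, 0.0);

    for (int step = 0; step < 100; ++step) {
        for (int e = 0; e < kEnvs; ++e) {          // one GPU-wide batch op in practice
            double action = -0.5 * state[e];       // stand-in policy: damp the state
            state[e] += action;                    // stand-in dynamics
            ret[e]   += -state[e] * state[e];      // stand-in reward: stay near zero
        }
    }
    double mean = 0.0;
    for (double r : ret) mean += r;
    std::printf("mean return across %d envs: %.4f\n", kEnvs, mean / kEnvs);
}
```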

NVIDIA software — Omniverse, Metropolis, Isaac, and cuOpt — combine to create an “AI gym” where robots and AI agents can work out and be evaluated in complex industrial spaces. | Credit: NVIDIA

To scale robot development workloads across heterogeneous compute, OSMO coordinates the data generation, model training, and software/hardware-in-the-loop workflows across distributed environments.

NVIDIA also announced Isaac Manipulator and Isaac Perceptor — a collection of pretrained robotics models, libraries, and reference hardware.

Isaac Manipulator offers dexterity and modular AI capabilities for robotic arms, with a robust collection of foundation models and GPU-accelerated libraries. It can accelerate path planning by up to 80x, and zero-shot perception increases efficiency and throughput, enabling developers to automate a greater number of new robotic tasks, said NVIDIA.

Among early ecosystem partners are Franka Robotics, PickNik Robotics, READY Robotics, Solomon, Universal Robots, a Teradyne company, and Yaskawa.

Isaac Perceptor provides multi-camera, 3D surround-vision capabilities, which are increasingly being used in autonomous mobile robots (AMRs) adopted in manufacturing and fulfillment operations to improve efficiency and worker safety. NVIDIA listed companies such as ArcBest, BYD, and KION Group as partners.


‘Simulation first’ is the new mantra for NVIDIA

A simulation-first approach is ushering in the next phase of automation. Real-time AI is now a reality in manufacturing, factory logistics, and robotics. These environments are complex, often involving hundreds or thousands of moving parts. Until now, it was a monumental task to simulate all of these moving parts.

NVIDIA has combined software such as Omniverse, Metropolis, Isaac, and cuOpt to create an “AI gym” where robots and AI agents can work out and be evaluated in complex industrial spaces.

Huang demonstrated a digital twin of a 100,000-sq.-ft. warehouse — built using the NVIDIA Omniverse platform for developing and connecting OpenUSD applications — operating as a simulation environment for dozens of digital workers and multiple AMRs, vision AI agents, and sensors.

Each mobile robot, running the NVIDIA Isaac Perceptor multi-sensor stack, can process visual information from six sensors, all simulated in the digital twin.

Image depicting an AMR and a manipulator working together to enable AI-based automation in a warehouse powered by NVIDIA Isaac. | Credit: NVIDIA

At the same time, the NVIDIA Metropolis platform for vision AI can create a single centralized map of worker activity across the entire warehouse, fusing data from 100 simulated ceiling-mounted camera streams with multi-camera tracking. This centralized occupancy map can help inform optimal AMR routes calculated by the NVIDIA cuOpt engine for solving complex routing problems.

cuOpt, an optimization AI microservice, solves complex routing problems with multiple constraints using GPU-accelerated evolutionary algorithms.

All of this happens in real-time, while Isaac Mission Control coordinates the entire fleet using map data and route graphs from cuOpt to send and execute AMR commands.
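
A heavily simplified way to picture the occupancy-aware routing step: score each candidate route by its length plus a congestion penalty drawn from the fused camera map, then pick the cheapest. The C++ toy below is our stand-in for the idea, not the cuOpt API, which solves far larger problems with GPU-accelerated solvers.

```cpp
#include <cstdio>
#include <vector>

struct Route { std::vector<int> cells; double length_m; };

// Busier map cells cost more to drive through.
double congestion(const std::vector<double>& occupancy, const Route& r) {
    double c = 0.0;
    for (int cell : r.cells) c += occupancy[cell];
    return c;
}

int pickRoute(const std::vector<Route>& candidates,
              const std::vector<double>& occupancy, double penalty_m) {
    int best = 0;
    double bestCost = 1e300;
    for (size_t i = 0; i < candidates.size(); ++i) {
        double cost = candidates[i].length_m
                    + penalty_m * congestion(occupancy, candidates[i]);
        if (cost < bestCost) { bestCost = cost; best = (int)i; }
    }
    return best;
}

int main() {
    std::vector<double> occupancy = {0.1, 0.8, 0.0, 0.2};  // from camera fusion
    std::vector<Route> routes = {{{0, 1}, 42.0}, {{2, 3}, 55.0}};
    // The shorter route crosses a busy aisle, so the longer one wins.
    std::printf("chosen route: %d\n", pickRoute(routes, occupancy, 30.0));
}
```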

NVIDIA DRIVE Thor for robotaxis

The company also announced NVIDIA DRIVE Thor, which now supersedes NVIDIA DRIVE Orin as a SoC for autonomous driving applications.

Multiple autonomous vehicles are using NVIDIA architectures, including robotaxis and autonomous delivery vehicles from companies including Nuro, Xpeng, WeRide, Plus, and BYD.

AMD unveils Spartan UltraScale+ FPGA family for edge processing

AMD said the latest addition to its portfolio of FPGAs and adaptive SoCs delivers cost- and power-efficient performance.


The Spartan UltraScale+ FPGA is designed to provide cost- and energy-efficient compute. | Source: AMD

As robots and sensors proliferate, the need for robust compute has increased. Advanced Micro Devices Inc. yesterday announced its AMD Spartan UltraScale+ FPGA family. The company said the latest addition to its portfolio of field-programmable gate arrays, or FPGAs, and adaptive systems on chips, or SoCs, delivers cost and power-efficient performance for a wide range of I/O-intensive applications at the edge.

“For over 25 years, the Spartan FPGA family has helped power some of humanity’s finest achievements, from lifesaving automated defibrillators to the CERN particle accelerator advancing the boundaries of human knowledge,” stated Kirk Saban, corporate vice president of the Adaptive and Embedded Computing Group at AMD.

“Building on proven 16-nm technology, the Spartan UltraScale+ family’s enhanced security and features, common design tools, and long product lifecycles further strengthen our market-leading FPGA portfolio and underscore our commitment to delivering cost-optimized products for customers,” he added.

AMD claimed that its Spartan UltraScale+ devices offer a high I/O-to-logic-cell ratio among FPGAs built on 28 nm and smaller process technology. The Santa Clara, Calif.-based company said they consume as much as 30% less total power than its previous generation. The FPGAs also include the most robust set of security features in the cost-optimized portfolio, it asserted.

AMD optimizes Spartan UltraScale+ for the edge

The high I/O counts and flexible interfaces of the new Spartan UltraScale+ FPGAs enable them to efficiently interface with multiple devices or systems, said AMD. The company said this will help address “the explosion of sensors and connected devices” such as robots. 

“Spartan UltraScale+ is primarily targeted for robot actuators, joint control, and camera sensors,” Rob Bauer, senior manager of cost-optimized silicon marketing at AMD, told The Robot Report. “IoT [Internet of Things] devices are growing 2.3X from 2022 to 2028, according to the FPGA Market Global Forecast. There’s a need for supply chain stability and longevity.”

“The high programmable I/O count enables interfacing with a very wide range of sensors, and that in combination with programmable logic allows sensor processing and control in a low-latency, deterministic, and real-time manner,” he explained. “Programmable I/O is made up of a combination of 3.3V HDIO, HPIO, and the new high-performance XP5IO capable of supporting 3.2G MIPI D-PHY.”

The FPGAs offer up to 572 I/Os and voltage support up to 3.3V. They enable any-to-any connectivity for edge sensing and control applications.
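
For a sense of how such programmable I/O is typically used deterministically, the sketch below polls a sensor through a memory-mapped register block, a common pattern in embedded C++. The register layout is hypothetical, not taken from Spartan UltraScale+ documentation.

```cpp
#include <cstdint>
#include <cstdio>

// Register block an FPGA design might expose; the layout is illustrative.
struct SensorRegs {
    volatile std::uint32_t status;   // bit 0: sample ready
    volatile std::uint32_t data;     // latest sample
};

std::uint32_t readSample(SensorRegs* regs) {
    // Spin until the fabric flags a fresh sample. With no OS in the path,
    // the latency from "ready" to "read" stays bounded and repeatable.
    while ((regs->status & 0x1u) == 0) {}
    return regs->data;
}

int main() {
    // On hardware this struct would be mapped at the physical address chosen
    // in the board design; a local stand-in keeps the sketch runnable anywhere.
    SensorRegs fake{0x1u, 42u};
    std::printf("sample: %u\n", readSample(&fake));
}
```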

AMD said its devices feature the “proven” 16 nm fabric and support for a wide array of packaging options, starting as small as 10x10 mm. These provide high I/O density in a compact footprint.

In addition, the company said its portfolio provides the scalability to start with cost-optimized FPGAs and continue through to midrange and high-end products. It estimated that the Spartan UltraScale+ reduces power consumption by 30% in comparison with its 28 nm Artix 7 family by using 16 nm FinFET technology and hardened connectivity. 

“Generational power improvement is up to 30%. This is already significant, as there could be multiple such devices used in a robot today that can be upgraded with lower-power, newer-generation devices,” Bauer said. “Additionally, as these devices are then expected to enable the nervous system of the robot by interfacing and putting out data between the sensors and the controller, which can now be done at a better overall power efficiency up to 60%.”

These devices are the first AMD UltraScale+ FPGAs with a hardened LPDDR5 memory controller and PCIe Gen4 x8 support, providing both power efficiency and future-ready capabilities for customers, said AMD. 


Spartan UltraScale+ includes several security features

AMD said its new devices’ security features include:

  • IP protection: Support for post-quantum cryptography (PQC) with NIST-approved algorithms offers state-of-the-art IP protection against evolving cyberattacks and threats. A physical unclonable function provides each device with a unique fingerprint for added security.
  • Tampering prevention: PPK/SPK key support helps manage obsolete or compromised security keys, while differential power analysis helps protect against side-channel attacks. The devices contain a permanent tamper penalty to further protect against misuse.
  • Uptime maximization: Enhanced single-event upset performance helps fast and secure configuration with increased reliability for customers, said AMD.

“We have many features in addition to PQC to enable secure authentication in post-quantum age,” Bauer said. “Spartan UltraScale+ devices are able to meet many of the requirements listed in IEC 62443, as it offers a long list of security features such as PUF, hardware root of trust, true random-number generator, AES-GCM-256, eFUSE, soft error mitigation, security monitor, DPA counter measures, temperature and voltage monitoring, tamper logging, JTAG monitoring, and more.”

Robotics and generative AI are contributing to chipset demand, according to Omdia, which estimated that the global market for dedicated SoCs could reach $866 million by 2028.

AMD said its entire portfolio of FPGAs and adaptive SoCs is supported by the AMD Vivado Design Suite and Vitis Unified Software Platform. This allows hardware and software designers to use “a single design cockpit from design to verification” to maximize the productivity benefits of these tools, it said.

The Spartan UltraScale+ FPGA sampling and evaluation kits will be available in the first half of 2025, according to AMD. Documentation is available now, and tools support started with AMD Vivado Design Suite in the fourth quarter of 2024.

Locus Lock promises to protect autonomous systems from GPS spoofing

Locus Lock has developed software-defined radio to overcome GPS spoofing for more secure autonomous navigation.


Locus Lock is designing RF systems to provide navigational security. Source: Locus Lock

Flying back from Miami last week, I put my life in the hands of two strangers, just because they wore gold epaulets. These commercial pilots, in turn, trusted their onboard computers to safely navigate the passengers home. The computers accessed satellite data from the Global Positioning System to set the course.

This chain of command is very fragile. The International Air Transport Association (IATA) reported last month an increased level of GPS spoofing and signal jamming since the outbreak of the wars in Ukraine and Israel. This poses the threat of catastrophe to aviators everywhere.

For example, last September, OPS Group reported that a European flight en route to Dubai almost entered into Iranian airspace without clearance. In 2020, Iran shot down an uncleared passenger aircraft that entered its territory. This has made the major airlines, avionics manufacturers, and NATO militaries and governments scramble to find solutions.

Navigational errors can be very dangerous for commercial aircraft. Source: OPS Group

Locus Lock founder came out of drone R&D

At ff Venture Capital, we recognize that GPS spoofing and jamming are fundamental problems holding back aerial, terrestrial, and marine autonomous systems. This investment thesis is grounded in a simple belief: the deployment of cost-effective uncrewed systems requires the trust of human operators, who can’t afford to question the data.

When machines go awry, so does the industry. Just ask Cruise! This conviction led us to invest in Locus Lock. The company said it is taking an innovative software approach to GNSS signal processing using radio frequency, at a fraction of the cost of comparable hardware sold by military contractors.

Last week, I sat down with Locus Lock founder Hailey Nichols, a former University of Texas researcher in the school’s Radionavigation Laboratory. UT’s Lab is best known for its work with SpaceX and Starlink.

Nichols explained her transition from academic to founder: “I was always enthralled with the idea of aerospace and studied at MIT, where I was obsessed with the control and robotic side of aerospace. After I graduated, I worked at Aurora Flight Sciences, which is a subsidiary of Boeing, and I was a UAV software engineer.”

At Aurora, Nichols focused on integrating suites of sensors such as lidar, GPS, radar, and computer vision for uncrewed aerial vehicles (UAVs). However, she quickly became frustrated with the costs and quality of the sensors.

“They were physically heavy [and] power-intensive, and it made it quite hard for engineers to integrate,” she recalled. “This problem frustrated me so much that I went back to grad school to study it further, and I joined a lab down at the University of Texas.”

In Austin, the roboticist saw a different approach to sensor data, using software for signal processing.

“The radio navigation lab was very highly specialized in signal processing, specifically bringing in advanced software algorithms and robust estimation techniques forward to sensor technology,” explained Nichols. “This enabled more precise, secure, and reliable data, like positioning, navigation, and timing.”

Her epiphany came when she saw the market demand for the lab’s GNSS receiver from the U.S. Department of Defense and commercial partners after Locus Lock published research on autonomous vehicles accurately navigating urban canyons.

Navigating urban canyons is a challenge for conventional satellite-based systems. Source: Quora

Reliable navigation needed for dual-use applications

Today, Locus Lock is ready to market its product more widely for dual-use applications across the spectrum of autonomy for commercial and defense use cases.

“Current GPS receivers often fail in what’s called ‘urban multipath,'” said Nichols. “This is where there’s building interference and shrouding of the sky can cause positioning errors. This can be problematic for autonomous cars, drones, and outdoor robotics that need access to centimeter-level positioning to make safe and informed decisions about where they are on the road or in the sky.”

The RF engineer continued: “Our other applicable industry is defense tech. With the rise of the Ukraine conflict and the Israel conflict in the Middle East, we’ve seen a massive amount of deliberate interference. So bad actors that are either spoofing or jamming, causing major outages or disruptions in GPS positioning.”

Locus Lock addresses this problem by enabling its GPS processing suite as a software solution, and unlike hardware, it’s affordable and extremely flexible.

“The ability to be backward-compatible and future-proof where we can constantly update and evolve our GPS processing suite to evolving attack vectors ensures that our customers are given the most cutting-edge and up-to-date processing techniques to enable centimeter-level positioning globally,” added Nichols.

“So our GNSS receivers are software-defined radio [SDR] with a specialized variant of inertially aided RTK [real-time kinematics],” she said, claiming that it provides a differentiator from competing products. “What that means is we’re doing some advanced sensor-fusion techniques with GNSS signals in addition to inertial navigation to ensure that, even in these pockets of urban canyons where you may not have access to GNSS signals … the GPS receiver [will] still provide centimeter-level positioning.”
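
One common way to combine inertial aiding with protection against bad fixes is innovation gating: dead-reckon between GNSS updates, then reject any fix that lands implausibly far from the inertial prediction. The 1-D C++ toy below illustrates that idea only; Locus Lock's actual receiver fuses carrier-phase RTK measurements in a far richer estimator.

```cpp
#include <cmath>
#include <cstdio>

struct Fused { double pos = 0.0, vel = 0.0; };

void imuPredict(Fused& s, double accel, double dt) {
    s.pos += s.vel * dt + 0.5 * accel * dt * dt;   // dead-reckon between fixes
    s.vel += accel * dt;
}

void gnssUpdate(Fused& s, double fix, double gate_m, double gain) {
    double innovation = fix - s.pos;
    if (std::fabs(innovation) > gate_m) {
        std::printf("fix rejected (%.1f m exceeds gate)\n", innovation);
        return;                                    // suspected spoof or multipath
    }
    s.pos += gain * innovation;                    // blend the trusted fix in
}

int main() {
    Fused s{0.0, 1.0};                             // moving at 1 m/s
    for (int t = 0; t < 10; ++t) imuPredict(s, 0.0, 0.1);
    gnssUpdate(s, 1.05, 5.0, 0.5);                 // plausible fix: accepted
    gnssUpdate(s, 250.0, 5.0, 0.5);                // spoofed fix: gated out
    std::printf("position: %.2f m\n", s.pos);
}
```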

As Nichols boasted, Locus Lock is an enabler of “next generation autonomous mobility.”

Locus Lock looks to affordable centimeter-level accuracy

While traditional GPS components cost around $40,000, Locus Lock said its proprietary software and a 2-in. board cost around $2,000. Today, centimeter accuracy is inaccessible to most robot companies because most suppliers of robust hardware are military contractors, including L3Harris Technologies, BAE Systems, Northrop Grumman, and Elbit Systems.

“We’ve specifically made sure to cater our solution towards more low-cost environments that can proliferate mass-market autonomy and robotics into the ecosystem,” stated Nichols.

Locus Lock puts its software on a 2-in. board. Source: Oliver Mitchell

Nichols added that Locus Lock’s GNSS receiver is able to pull in data from global and regional satellite constellations.

“[This gives] us more access to any signals in the sky at any given time,” said the startup founder. “Diversity is also increasingly important in next-generation GPS receivers because it allows the device to evade jammed or afflicted channels.”

Grand View Research estimated that the SDR market will climb to nearly $50 billion by 2030. As uncrewed systems proliferate, Locus Lock’s price point should also come down, asserted Nichols.

“And while there are some companies that have progressed their autonomy stacks to be quite high, they haven’t gotten their prices down to make sense in a mass-market scenario,” she said. “And so it’s crucial to enable this next generation of autonomous mobility at large to not compromise on performance but to be able to provide this at an affordable price. Locus Lock is providing high-end performance at a much lower price point.”

Nichols even predicted that the company could eventually get product to under $1,000, if not less, with more adoption.

Global software-defined radio market. Source: Grand View Research

Tesla Optimus takes steps toward more mobile systems

Yesterday, Tesla published on X the latest video of its Optimus humanoid moving fluidly at an incredible gait for a robot. Pitchbook recently predicted that this could be a breakout period for humanoids, with 84 leading companies now having raised over $4.6 billion.

At the same time, the prospect of such advanced machines being hijacked via GPS spoofing into the service of terrorists, cybercriminals, or hostile governments is very real and horrifying. Thankfully, Nichols and her team are working with the Army Futures Command.

“A lot of this work has been done in spoofing and jamming — not only detection, but also mitigation,” she said. “We detect the type of RF environment that we are operating in to mitigate it and inform that end user with the situational awareness that is needed to assess ongoing attacks.”

“In addition, we can iterate much faster and bring in world-class experts on security and encryption to ensure that we protect secure military signals as much as possible,” said Nichols. “Our software can find assured reception that is demanded by these increasingly expensive and important assets that the military needs to protect.”

In ffVC’s view, our newest portfolio company is mission-critical to operating drones, robots, and other autonomous vessels safely, affordably, and securely in an increasingly dangerous world.

AMD announces Embedded+ architecture to accelerate edge AI

AMD Embedded+ combines embedded processors with adaptive systems on chips to shorten edge AI time to market.


The new AMD Embedded+ architecture for high-performance compute. Source: Advanced Micro Devices

Robots and other smart devices need to process sensor data with a minimum of delay. Advanced Micro Devices Inc. today launched AMD Embedded+, a new computing architecture that combines AMD Ryzen Embedded processors with Versal adaptive systems on chips, or SoCs. The single integrated board is scalable and power-efficient and can accelerate time to market for original design manufacturer, or ODM, partners, said the company.

“In automated systems, sensor data has diminishing value with time and must operate on the freshest information possible to enable the lowest-latency, deterministic response,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD, in a release.

“In industrial and medical applications, many decisions need to happen in milliseconds,” he noted. “Embedded+ maximizes the value of partner and customer data with energy efficiency and performant computing that enables them to focus in turn on addressing their customer and market needs.”
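
One simple way to act on that observation in software is to timestamp every sample at capture and refuse to act on anything older than the loop's latency budget. The hypothetical C++ sketch below shows the pattern; it is not an AMD API, and the 10 ms budget merely echoes the millisecond-scale decisions Khona cites.

```cpp
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

struct Sample { double value; Clock::time_point captured; };

// Compare the sample's age against the control loop's latency budget.
bool freshEnough(const Sample& s, std::chrono::milliseconds budget) {
    return Clock::now() - s.captured <= budget;
}

int main() {
    Sample s{3.14, Clock::now()};
    if (freshEnough(s, std::chrono::milliseconds(10)))
        std::printf("acting on value %.2f\n", s.value);
    else
        std::printf("sample stale; waiting for fresh data\n");
}
```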

For more than 50 years, AMD said it has innovated in high-performance computing, graphics, and visualization technologies. The Santa Clara, Calif.-based company claimed that Fortune 500 businesses, billions of people, and research institutions around the world rely on its technology daily.

In the two years since it acquired Xilinx, AMD said it has seen increasing demand for AI in industrial/manufacturing, medical/surgical, smart-city infrastructure, and automotive markets. Not only can Embedded+ support video codecs and AI inferencing, but the combination of Ryzen and Versal can enable real-time control of robot arms, Khona said.

“Diverse sensor data is relied upon more than ever before, across applications,” said Khona in a press briefing last week. “The question is how to get sensor data from autonomous systems into a PC if it isn’t on a USB or some consumer interface.”


AMD Embedded+ paves a path to sensor fusion 

“The market for bringing processing closer to the sensor is growing rapidly,” said Khona. The use cases for embedded AI are expanding, with the machine vision market expected to grow to $600 million and sensor data analysis to $1.4 billion by 2028, he explained.

“AMD makes the path to sensor fusion, AI inferencing, industrial networking, control, and visualization simpler with this architecture and ODM partner products,” Khona said. He described the single motherboard as usable with multiple types of sensors, allowing for offloaded processing and situational awareness.

AMD said it has validated the Embedded+ integrated compute platform to help ODM customers reduce qualification and build times without needing to expend additional hardware or research and development resources. The architecture enables the use of a common software platform to develop designs with low power, small form factors, and long lifecycles for medical, industrial, and automotive applications, it said.

The company asserted that Embedded+ is the first architecture to combine AMD x86 compute with integrated graphics and programmable I/O hardware for critical AI-inferencing and sensor-fusion applications. “Adaptive computing excels in deterministic, low-latency processing, whereas AI Engines improve high performance-per-watt inferencing,” said AMD.

Ryzen Embedded processors, which contain high-performance Zen cores and Radeon graphics, also offer rendering and display options for an enhanced 4K multimedia experience. In addition, they include a built-in video codec for 4K H.264/H.265 encode and decode.

The combination of low-latency processing and high performance-per-watt inferencing enables high performance for tasks such as integrating adaptive computing in real time with flexible I/O, AI Engines for inferencing, and AMD Radeon graphics, said AMD.

It added that the new system combines the best of each technology. Embedded+ enables 10GigE vision and CoaXPress connectivity to cameras via SFP+, said AMD, and image preprocessing occurs at pixel clock rates. This is especially important for mobile robot navigation, said Khona.

Sapphire delivers first Embedded+ ODM system

Embedded+ also allows system designers to choose from an ecosystem of ODM board offerings based on the architecture, said AMD. They can use it to scale their product portfolios to deliver performance and power profiles best suited to customers’ target applications, it asserted.

Sapphire Technology has built the first ODM system with the Embedded+ architecture, the Sapphire Edge+ VPR-4616-MB, a low-power Mini-ITX form factor motherboard. It offers the full suite of capabilities in as little as 30 W of power by using the Ryzen Embedded R2314 processor and the Versal AI Edge VE2302 adaptive SoC.

The Sapphire Edge+ VPR-4616-MB is also available in a full system, including memory, storage, power supply, and chassis. Versal is a programmable network on a chip that can be tuned for power or performance, said AMD. With Ryzen, it provides programmable logic for sensor fusion and real-time controls, it explained.

“By working with a compute architecture that is validated and reliable, we’re able to focus our resources to bolster other aspects of our products, shortening time to market and reducing R&D costs,” said Adrian Thompson, senior vice president of global marketing at Sapphire Technology. “Embedded+ is an excellent, streamlined platform for building solutions with leading performance and features.”

The Embedded+ qualified VPR-4616-MB from Sapphire Technology is now available for purchase.

Indy Autonomous Challenge announces new racecar and additional races

The Indy Autonomous Challenge announced a completely new sensor and compute architecture for the AV24 racecar.


The IAC has revised the sensors and compute in the AV24 racecar. Source: Indy Autonomous Challenge

The Indy Autonomous Challenge, or IAC, made two major announcements at CES 2024 this week. The first was that the IAC plans to present four autonomous racecar events in 2024, and the second was an updated technology stack.

The first event of the year is the IAC@CES, which takes place tomorrow at the Las Vegas Motor Speedway. The Robot Report will be in attendance to cover this event later this week.

More Indy Autonomous Challenge races to come

The IAC will also participate for the second year in a row at the Milano Monza Open-Air Motor Show from June 16 to 18 in Milan, Italy. Last year, the challenge debuted autonomous road racing with the IAC autonomous race cars for the first time.

Unlike the series' oval-track races, the Milan Monza event challenges the university teams to develop their AI drivers for a road course. Monza is arguably one of the most famous road-racing venues in the world and exposes the IAC to a global racing audience, said event organizers.

The third event in 2024 will be from July 11 to 14 at the Goodwood Festival of Speed in the U.K. Described as “motorsport’s ultimate summer garden party,” the festival features the treacherous Goodwood hill climb.

This year, the IAC race cars will attempt the hill climb while setting new autonomous speed records. At last year’s event, the course was captured digitally, and the university teams are using that data to train their AI drivers.

Finally, the IAC will return this year to the famous Indianapolis Motor Speedway on Sept. 6, where it all started back in October 2021. Organizers expect to set new speed records and enable more university teams to qualify for head-to-head racing.

Tech stack gets updates for the AV24

The other big news from IAC this week is the launch of a new generation of autonomous racecar, called the AV24. The original race platform, the AV21, has aged since its launch at the first race.

Winning university teams PoliMOVE and TUM have set multiple speed records over the past three years, pushing the AV21 to its sensor and computing limits. The platform has also suffered from maintenance and troubleshooting issues, especially the fragility of the wiring harnesses. Some of the harness problems plagued many of the teams as they prepared for prior competitions.

In response, the IAC team re-engineered the entire sensor, networking, and compute stack into a new platform that should enable the university teams to continue to push the limits of speed and control while testing and developing cutting-edge AI driver algorithms. The AV24 does not change the racecar's chassis, engine, or physical dimensions.

Here’s a look at what’s new in the AV24 technology stack.


The new IAC AV24 racecar includes an all-new sensor and compute architecture. | Credit: IAC

Most notably, the AV24 now includes split braking controls that allow it to manage braking on each of the vehicle's four wheels separately, giving the AI drivers finer control than is humanly possible.

“The IAC event has succeeded beyond our wildest dreams,” said Paul Mitchell, co-founder and CEO of the Indy Autonomous Challenge. “We originally thought it would be a one-and-done challenge, but the event has thrived, so it was time to go back to the drawing board and deploy a new technology stack leveraging the best technology from our event partners.”

Eyeonic Vision System Mini unveiled by SiLC Technologies at CES 2024
https://www.therobotreport.com/silc-launches-eyeonic-mini-ces-2024/
Tue, 09 Jan 2024
SiLC says its new Eyeonic Mini AI machine vision system provides sub-millimeter resolution at a significantly reduced size.


The Eyeonic Vision System Mini is designed to be compact and power-efficient. Source: SiLC Technologies

SiLC Technologies Inc. today at CES launched its Eyeonic Vision System Mini, which combines a full multi-channel frequency-modulated continuous wave (FMCW) lidar on a single silicon photonics chip with an integrated FMCW lidar system-on-chip (SoC). The Eyeonic Mini “sets a new industry benchmark in precision,” said the Monrovia, Calif.-based company.

“Our FMCW lidar platform aims to enable a highly versatile and scalable platform to address the needs of many applications,” said Dr. Mehdi Asghari, CEO of SiLC Technologies, in a release.

“At CES this year, we’re demonstrating our long-range vision capabilities of over 2 km [1.2 mi.],” he added. “With the Eyeonic Mini, we’re showcasing our high precision at shorter distances. Our FMCW lidar solutions, at short or long distances, bring superior vision to machines to truly enable the next generation of AI based automation.”

Founded in 2018, SiLC Technologies said its 4D+ Eyeonic lidar chip integrates all the photonics functions needed to enable a coherent vision sensor. The company added that the system offers a small footprint and addresses the need for low power consumption and cost, making it suitable for robotics, autonomous vehicles, biometrics, security, and industrial automation.

In November 2023, SiLC raised $25 million to expand production of its Eyeonic Vision System.

Eyeonic Mini uses Surya SoC for precision

To be useful, robots need powerful, compact, and scalable vision that is unaffected by complex or unpredictable environments and conditions, as well as by interference from other systems, asserted SiLC Technologies. Sensors must also provide motion, velocity, polarization, and precision, capabilities that the company said make FMCW superior to existing time-of-flight (ToF) systems.

FMCW technology enables newer imaging systems to directly capture images for AI, factory robots, home security, autonomous vehicles, and perimeter security applications, said SiLC.
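To make the per-point velocity claim concrete: in a triangular-chirp FMCW system, range and radial velocity fall out of the up- and down-ramp beat frequencies. Below is a minimal textbook sketch in Python; the relations are standard, but the parameter values are invented for illustration and are not SiLC's specifications.

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, bandwidth, ramp_time, wavelength):
    """Textbook triangular-chirp FMCW: recover range and radial velocity
    from the beat frequencies of the up-ramp and down-ramp."""
    f_range = (f_up + f_down) / 2.0    # range-induced beat frequency, Hz
    f_doppler = (f_down - f_up) / 2.0  # Doppler shift, Hz (sign is convention)
    rng = C * f_range * ramp_time / (2.0 * bandwidth)  # meters
    vel = f_doppler * wavelength / 2.0                 # m/s, + = approaching
    return rng, vel

# Illustrative numbers only: a 4 GHz chirp over 10 us at 1550 nm.
r, v = fmcw_range_velocity(f_up=25.37e6, f_down=27.95e6,
                           bandwidth=4e9, ramp_time=10e-6,
                           wavelength=1550e-9)
print(f"range ~ {r:.1f} m, radial velocity ~ {v:.2f} m/s")  # ~10.0 m, ~1.0 m/s
```

A ToF sensor would have to differentiate successive range measurements to estimate that same velocity, which is why coherent detection gets instantaneous motion data per point.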

The Eyeonic Mini uses what it described as “the industry’s first purpose-built” digital lidar processor SoC, the iND83301 or “Surya” developed by indie Semiconductor. As a result, the company said, it can deliver “an order of magnitude greater precision than existing technologies while being one-third the size of last year’s pioneering model.”

“The Eyeonic Mini represents the next evolution of our close collaboration with SiLC. The combination of our two unique technologies has created an industry-leading solution in performance, size, cost, and power,” said Chet Babla, senior vice president for strategic marketing at indie Semiconductor. “This creates a strong foundation for our partnership to grow and address multiple markets, including industrial automation and automotive.”

With Surya, a four-channel FMCW lidar chip provides robots with sub-millimeter depth precision from distances exceeding 10 m (32.8 ft.), said SiLC. This is useful for warehouse automation and machine vision applications, it noted.

Dexterity uses sensors for truck loading, unloading

For instance, said SiLC Technologies, AI-driven palletizing robots equipped with the Eyeonic Mini can view and interact with pallets, optimize package placement, and efficiently and safely load them onto trucks. With more than 13 million commercial trucks in the U.S., this technology promises to significantly boost efficiency in loading and unloading processes, the company said.

Dexterity Inc. said it is working to give robots the intelligence to see, move, touch, think and learn, freeing human workers for other warehouse and logistics tasks. The Redwood City, Calif.-based company is incorporating SiLC’s technology into its autonomy platform.

“At Dexterity, we focus on AI, machine learning, and robotic intelligence to make warehouses more productive, efficient and safe,” said CEO Samir Menon. “We are excited to partner with SiLC to unlock lidar for the robotics and logistics markets.”

“Their technology is a revolution in depth sensing and will enable easier and faster adoption of warehouse automation and robotic truck load and unload,” he said.

At CES this week in Las Vegas, SiLC Technologies is demonstrating the new Eyeonic Mini in private meetings at the Westgate Hotel. For more information or to schedule an appointment, e-mail SiLC at contact@SiLC.com.

NVIDIA Jetson supports Zipline drone deliveries, as Omniverse enables Amazon digital twins
https://www.therobotreport.com/nvidia-jetson-supports-zipline-drone-deliveries-as-omniverse-enables-amazon-digital-twins/
Tue, 19 Dec 2023
NVIDIA technologies are helping supply chains add new levels of automation, as seen in its work with Adobe, Amazon, and Zipline.


NVIDIA Jetson Xavier NX processes sensor inputs for the P1 delivery drone. Source: Zipline

Robotics, simulation, and artificial intelligence are providing new capabilities for supply chain automation. For example, Zipline International Inc. drone deliveries and Amazon Robotics digital twins for package handling demonstrate how NVIDIA Corp. technologies can enable industrial applications.

“You can pick the right place for your algorithms to run to make sure you’re getting the most out of the hardware and the power that you are putting into the system,” said A.J. Frantz, navigation lead at Zipline, in a case study.

NVIDIA claimed that its Jetson Orin modules can perform up to 275 trillion operations per second (TOPS) to provide mission-critical computing for autonomous systems in everything from delivery services and agriculture to mining and undersea exploration. The Santa Clara, Calif.-based company added that Jetson’s energy efficiency can help businesses electrify their vehicles and reduce carbon emissions to meet sustainability goals.

Zipline drones rely on Jetson Xavier NX to avoid obstacles

Founded in 2011, Zipline said it has completed more than 800,000 deliveries of food, medication, and more in seven countries. The San Francisco-based company said its drones have flown over 55 million miles using the NVIDIA Jetson edge AI platform for autonomous navigation and landings.

Zipline, which raised $330 million in April at a valuation of $4.2 billion, is a member of the NVIDIA Inception program, in which startups can get technology support. The company’s Platform One, or P1, drone uses the Jetson Xavier NX system-on-module (SOM) to process sensor inputs.

“The NVIDIA Jetson module in the wing is part of what delivers our acoustic detection and avoidance system, so it allows us to listen for other aircraft in the airspace around us and plot trajectories that avoid any conflict,” Frantz explained.

Zipline’s fixed-wing drones can fly out more than 55 miles (88.5 km) from its distribution centers at 70 mph (112.6 kph) and then return. Capable of hauling up to 4 lb. (1.8 kg) of cargo, they fly autonomously and release packages at their destinations by parachute.


P2 hybrid drone includes Jetson Orin NX for sensor fusion, safety

Zipline’s Platform Two, or P2, hybrid drone can fly fast on fixed-wing flights, as well as hover. It can carry 8 lb. (3.6 kg) of cargo for 10 miles (16 km), as well as a droid that can be lowered on a tether to precisely place deliveries. It’s intended for use in dense, urban environments.

The P2 uses two Jetson Orin NX modules. One handles sensor fusion so the drone can understand its environment. The other is in the droid for redundancy and safety.

Zipline claimed that its drones, nicknamed “Zips,” can deliver items 7x faster than ground vehicles. It boasted that it completes one delivery every 70 seconds globally.

“Our aircraft fly at 70 miles per hour, as the crow flies, so no traffic, no waiting at lights — we’re talking minutes here in terms of delivery times,” said Joseph Mardall, head of engineering at Zipline. “Single-digit minutes are common for deliveries, so it’s faster than any alternative.”
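As a back-of-the-envelope check on those claims, straight-line flight time at the quoted cruise speed works out as follows. This rough sketch ignores launch, descent, and wind:

```python
def flight_minutes(distance_miles: float, cruise_mph: float = 70.0) -> float:
    """Straight-line flight time at cruise speed, ignoring overhead."""
    return distance_miles / cruise_mph * 60.0

for miles in (2, 5, 10):
    print(f"{miles:>2} mi -> {flight_minutes(miles):.1f} min")
# 2 mi -> 1.7 min, 5 mi -> 4.3 min, 10 mi -> 8.6 min
```

Even at the P2's full 10-mile delivery radius, the airborne leg stays in single-digit minutes, consistent with Mardall's figures.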

In addition to transporting pizza, vitamins, and medications, Zipline works with Walmart, restaurant chain Sweetgreen, Michigan Medicine, MultiCare Health Systems, Intermountain Health, and the government of Rwanda, among others. It delivers to more than 4,000 hospitals and health centers.

Amazon uses Omniverse, Adobe Substance 3D for realistic packages

For warehouse robots to be able to handle a wide range of packages, they need to be trained on massive but realistic data sets, according to Amazon Robotics.

“The increasing importance of AI and synthetic data to run simulation models comes with new challenges,” noted Adobe Inc. in a blog post. “One of these challenges is the creation of massive amounts of 3D assets to train AI perception programs in large-scale, real-time simulations.”

Amazon Robotics turned to Adobe Substance 3D, Universal Scene Description (USD), and NVIDIA Omniverse to develop random but realistic 3D environments and thousands of digital twins of packages for training AI models.


NVIDIA Omniverse integrates with Adobe Substance 3D to generate realistic package models for training robots. Source: Adobe

NVIDIA Omniverse allows simulations to be modified, shared

“The Virtual Systems Team collaborates on a wide range of projects, encompassing both extensive solution-level simulations and individual workstation emulators as part of larger solutions,” explained Hunter Liu, technical artist at Amazon Robotics.

“To describe the 3D worlds required for these simulations, the team utilizes USD,” he said. “One of the team’s primary focuses lies in generating synthetic data for training machine learning models used in intelligent robotic perception programs.”

The team uses Houdini for procedural mesh generation and Substance 3D Designer for texture generation before loading the virtual boxes into Omniverse, added Haining Cao, a texturing artist at Amazon Robotics.

The team has developed multiple workflows to represent the vast variety of packages that Amazon handles. It has gone from generating two assets per hour to 300, said Liu.

“To introduce further variations, we utilize PDG (Procedural Dependency Graph) within Houdini,” he noted. “PDG enables us to efficiently batch process multiple variations, transforming the Illustrator files into distinct meshes and textures.”

After generating the synthetic data and publishing the results to Omniverse, the Adobe-NVIDIA integration enables Amazon’s team to change parameters to, for example, simulate worn cardboard. The team can also use Python to trigger randomized values and collaborate on the data within Omniverse, said Liu.
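Amazon has not published its scripts, but the Pixar USD Python API that underpins Omniverse makes this kind of parameter randomization straightforward. Here is a minimal sketch of the pattern; the stage file and prim path are hypothetical, invented for illustration:

```python
import random
from pxr import Usd, UsdGeom, Gf

# Hypothetical stage and prim paths, for illustration only.
stage = Usd.Stage.Open("package_scene.usd")
prim = stage.GetPrimAtPath("/World/Package_001")

# Randomize the package pose so each generated scene differs.
xform = UsdGeom.Xformable(prim)
xform.ClearXformOpOrder()  # start from a clean transform stack
xform.AddTranslateOp().Set(Gf.Vec3d(random.uniform(-0.5, 0.5),
                                    random.uniform(-0.5, 0.5),
                                    0.0))
xform.AddRotateZOp().Set(random.uniform(0.0, 360.0))  # degrees

stage.Save()
```

Looping a script like this over thousands of prims and texture variants is what turns a handful of hand-built assets into a training-scale synthetic data set.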

In addition, Substance 3D includes features for creating “intricate and detailed textures while maintaining flexibility, efficiency, and compatibility with other software tools,” he said. Simulation-specific extensions bundled with NVIDIA Isaac Sim allow for further generation of synthetic data and live simulations using robotic manipulators, lidar, and other sensors, Liu added.

Acceleration Robotics launches ROBOTCORE ROS 2 hardware
https://www.therobotreport.com/acceleration-robotics-launches-robotcore-ros-2-hardware/
Fri, 01 Dec 2023
Acceleration Robotics introduced RobotCore ROS 2, a time-sensitive networking solution delivering speeds thousands of times faster than current standards.


A mobile robot and a mobile manipulator operate side by side in a warehouse. | Credit: Adobe Stock

One emerging protocol for real-time, deterministic communication in robotics is time-sensitive networking, or TSN. The key advantage of TSN is that it uses standard Ethernet networking cables, hubs, and switches.

When architecting a control system, roboticists and control engineers no longer need to manufacture custom cabling for communication between a robot controller and the drives. Networked architecture enables a more modular design for a robotic system.

Networked control architecture goes back decades

Adept Technology was one of the early industrial robot companies to market and sell a modular and networked controller and motion control architecture. Adept used FireWire (IEEE 1394) as the communications method for its multi-axis linear motion controllers. It replaced traditional control architecture with networked controller, motion control, and vision processing modules that communicated over FireWire.

I was the product manager for the Adept Technology controller product line and an IEEE committee member for the standardization of the FireWire IEEE 1394 protocol as a bus for real-time control communication at the heart of the Adept SmartAxis product line.

So much has happened since 2002. Processor speed and capability have increased while physical dimensions have shrunk, and power consumption has been optimized for power-constrained, battery-operated applications.

The explosive growth of the Internet and demand for ever-higher-bandwidth communication created a foundation for improving every type of application that depends on networked interaction. The concepts of cloud- and edge-based processing opened the floodgates for entirely new business models across the business and IT ecosystem.

In the world of robotics, the Robot Operating System (ROS) evolved to become an open-source platform widely adopted by university labs, robotic startups, and mature robotics companies.

ROS is now in its second generation, and ROS 2 was redesigned from the ground up to meet the needs of production systems, providing the necessary core software and control components to accelerate robotics development.  
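For readers who haven't used the framework, a minimal ROS 2 node in Python follows the standard rclpy pattern shown below; the topic name and publish rate here are arbitrary choices for illustration.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class MinimalPublisher(Node):
    """Publishes a string message twice a second."""

    def __init__(self):
        super().__init__('minimal_publisher')
        self.pub = self.create_publisher(String, 'chatter', 10)  # queue depth 10
        self.timer = self.create_timer(0.5, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'hello from ROS 2'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(MinimalPublisher())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Every publish in a node like this traverses the ROS 2 client library (RCL), the middleware abstraction (RMW), and a DDS implementation before reaching the wire, which is exactly the software path Acceleration Robotics set out to move into hardware.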

Acceleration Robotics releases high-speed isochronous networking 

Acceleration Robotics has now extended the concept of real-time isochronous network communication pioneered by companies like Adept Technology with the release of ROBOTCORE ROS 2 and ROBOTCORE RTPS. This marks a significant milestone in robotics networking: a hardware implementation (prototyped on an FPGA) that allows robots to exchange information in less than 2.5 microseconds.

These products are intended to revolutionize robotics and ROS 2 networking communications by delivering speeds that are 62x to thousands of times faster than current standards.

”We present a review and classification of the various communication standards relevant to the field, as well as an introduction to the typical problems with traditional switched Ethernet networks,” said Víctor Mayoral Vilches, founder, chairman, and CTO of Acceleration Robotics.

“We discuss some of the TSN features that are important for deterministic communications and test one of the shaping mechanisms, the time-aware shaper, in an example robotic scenario,” he added. “Our research suggests that TSN will eventually replace many real-time industrial solutions. The outcome should create a unified landscape of physically inter-operable robots and components.”
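The idea behind the time-aware shaper (IEEE 802.1Qbv) is easy to sketch: a repeating cycle is divided into exclusive transmission windows per traffic class, so deterministic traffic never contends with bulk traffic. The window sizes in this Python sketch are invented for illustration, not taken from the paper:

```python
# Sketch of an IEEE 802.1Qbv-style gate schedule: a fixed cycle is split into
# exclusive windows so control traffic never contends with bulk traffic.
CYCLE_NS = 1_000_000  # 1 ms control cycle (illustrative)

schedule = [
    ("control",     200_000),  # gate open only for the real-time class
    ("best_effort", 800_000),  # gate open for everything else
]

assert sum(width for _, width in schedule) == CYCLE_NS

def gate_open_at(t_ns: int) -> str:
    """Return which traffic class may transmit at absolute time t_ns."""
    offset = t_ns % CYCLE_NS
    for name, width in schedule:
        if offset < width:
            return name
        offset -= width
    raise AssertionError("unreachable: schedule covers the full cycle")

print(gate_open_at(150_000))   # control
print(gate_open_at(500_000))   # best_effort
```

Because every switch on the network follows the same synchronized schedule, a control packet sent inside its window arrives with bounded, repeatable latency regardless of competing traffic.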

Regarding the current state of the market, Vilches went on to say, “Right now, we’ve completed projects and collaborations with AMD, Intel, and Microchip. Our work with them continues, but we’re expanding in 2024 further and into case studies. Essentially, we will be focusing more on bringing these technologies into real industrial use cases and have real impact.”

“We’re closing deals with end-users mostly in Spain, the Basque Country, and focused on big industrial robot automation setups wherein robots need to move faster and thereby compute/communicate faster,” he said.

Vilches was involved in the TSN technical effort and pioneered much of the initial work in robotics, including a paper from 2018 that is still widely cited by silicon vendors today. According to Vilches, the team drew on much of the initial work and concepts from the FireWire working group as inspiration for this expansion.


Round-trip average network latency breakdown. ROBOTCORE ROS 2 enhances the networking architecture of robotic systems and tackles a common criticism of ROS 2: its latency overhead over the DDS communication middleware. It does so by building a hardware implementation of the core ROS 2 abstraction layers (RCL, RMW) and by establishing direct hardware data paths with the underlying DDS middleware. This removes the latency overhead of ROS 2 over DDS for speed and absolute determinism. | Credit: Acceleration Robotics

When asked how much the speed improvement in baseline networking has enabled the extension of TSN to Ethernet, Vilches said: “First, networking links (data link layers) nowadays are rather impressive and empower data exchange within very few nanoseconds. We’re getting very impressive results with some of the modern FPGA solutions we are using now. This is especially true for 10G, 100G, and above Ethernet links, but also applies to wireless alternatives as well as deterministic endeavors (such as TSN with 10G NICs, etc.).”

“Second, [we did a] complete rewrite of the networking layers into hardware,” he explained. “It took about five years to study properly the bottlenecks in the robotics stack in alignment with popular approaches (e.g. ROS) and propose a solution.”

Vilches wrote a bit about this context in a recent blog post. An Acceleration Robotics team identified the bottlenecks and discussed the need for a complete redesign of the underlying networking layers, known as the ROS 2 underlayers, to achieve low-latency isochronous communications. This involved ensuring that every layer of the Open Systems Interconnection (OSI) stack could provide such capabilities.


ROBOTCORE ROS 2 delivers absolute determinism via hardware when combined with ROBOTCORE RTPS and ROBOTCORE UDP/IP. Compared to software-based solutions, it ensures that the communication latency is lower and that it remains the same regardless of the load of the system. This is crucial for real-time robotics and for solving major communication bottlenecks. | Credit: Acceleration Robotics
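For comparison with the chart, a software baseline is easy to measure by timestamping a message and subtracting on receipt. A rough rclpy loopback sketch follows; absolute numbers will vary widely with the DDS vendor, QoS settings, and system load:

```python
import time
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int64  # message carries the send time in nanoseconds

class LoopbackLatency(Node):
    """Publishes timestamps to a topic and subscribes to the same topic,
    measuring the local publish-to-receive latency through the ROS 2 stack."""

    def __init__(self):
        super().__init__('loopback_latency')
        self.pub = self.create_publisher(Int64, 'latency_test', 10)
        self.sub = self.create_subscription(Int64, 'latency_test',
                                            self.on_msg, 10)
        self.timer = self.create_timer(0.1, self.send)

    def send(self):
        msg = Int64()
        msg.data = time.perf_counter_ns()
        self.pub.publish(msg)

    def on_msg(self, msg):
        dt_us = (time.perf_counter_ns() - msg.data) / 1000.0
        self.get_logger().info(f'pub->sub latency: {dt_us:.1f} us')

def main():
    rclpy.init()
    rclpy.spin(LoopbackLatency())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

On a loaded general-purpose CPU, latencies measured this way typically land in the tens to hundreds of microseconds and jitter with system load, which is the gap the sub-2.5-microsecond hardware path is meant to close.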

One of the mistakes the team observed in past approaches is that the de facto industry strategy for meeting timing deadlines is laborious, empirical, case-by-case tuning of the system. They concluded that this “CPU whack-a-mole” approach is unsustainable for real-time robotic systems and makes them hard to scale.

The team completely rewrote the robot networking stack, including ROS 2, and implemented it in hardware. The resulting design was prototyped and implemented into FPGAs. 

Vilches and a group of academics and industry leaders set out to create a methodology to measure robotics and control system performance. Acceleration Robotics published the RobotPerf documentation earlier this year.

ROBOTCORE ROS 2 implements the ROS 2 robotics framework in hardware for unprecedented network interface speed and efficiency. This robot core (IP core) uses FPGA technology to boost ROS 2 communication speed, which accelerates data processing, reduces latency, and improves robotic system synchronization. ROBOTCORE ROS 2 powers the future of accelerated robotics networking by sending or receiving packets in less than 2.5 microseconds, 62x faster than CPU-based software.

ROBOTCORE offers versatility for applications

The ROBOTCORE ROS 2 is versatile and widely compatible, making it a viable solution for scenarios where speed and reliability are non-negotiable. These applications include:

  • Industrial automation: Streamlines communication in manufacturing and assembly lines for enhanced operational efficiency.
  • Remote operation: Offers smooth and responsive control in teleoperation systems, crucial for precision tasks.
  • Autonomous vehicles: Ensures rapid data exchange essential for the real-time decision-making of autonomous driving systems.
  • Research and development: Provides a reliable platform for developing and testing next-generation robotic technologies.

AMD launches Kria K24 SOM and starter kit for industrial and commercial applications
https://www.therobotreport.com/amd-launches-kria-k24-som-and-starter-kit-for-industrial-and-commercial-applications/
Tue, 19 Sep 2023
K24 SOM and KD240 Kit enable the design of power-efficient, production-ready solutions for motor control and digital signal processing applications with a fast time to market.

AMD (NASDAQ: AMD) today announced AMD Kria K24 System-on-Module (SOM) and KD240 Drives Starter Kit, the latest additions to the Kria portfolio of adaptive SOMs and developer kits. AMD Kria K24 SOM offers power-efficient compute in a small form factor and targets cost-sensitive industrial and commercial edge applications. Advanced InFO (Integrated Fan-Out) packaging makes the K24 half the size of a credit card while using half the power[1] of the larger, connector-compatible Kria K26 SOM.

The AMD Kria KR260 was a 2023 RBR50 innovation award winner for the integration of a complete robotics control foundation solution.

The K24 SOM provides high determinism and low latency for powering electric drives and motor controllers used in compute-intensive digital signal processing (DSP) applications at the edge. Key applications include electric motor systems, robotics for factory automation, power generation, public transportation such as elevators and trains, surgical robotics and medical equipment like MRI beds, and EV charging stations.
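The motor-control workloads named here revolve around a few tight numeric loops. As a representative example, the Clarke and Park transforms at the heart of field-oriented control (FOC) are sketched below; this is generic textbook math, not AMD's library code:

```python
import math

def clarke(i_a: float, i_b: float) -> tuple[float, float]:
    """Clarke transform: balanced three-phase currents (i_c implied by
    i_a + i_b + i_c = 0) to the stationary alpha-beta frame."""
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha: float, i_beta: float, theta: float) -> tuple[float, float]:
    """Park transform: alpha-beta frame to the rotor-aligned d-q frame,
    given the electrical rotor angle theta (radians)."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# One control-loop iteration with made-up phase currents and rotor angle.
i_d, i_q = park(*clarke(i_a=1.0, i_b=-0.5), theta=math.radians(30.0))
print(f"i_d={i_d:.3f}, i_q={i_q:.3f}")
```

A production drive runs this math, plus PI regulators and an inverse transform, tens of thousands of times per second, which is why deterministic, low-latency hardware is the selling point here.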

Coupled with the KD240 Drives Starter Kit, an out-of-the-box-ready motor control-based development platform, the products offer a seamless path to production deployment with the K24 SOM. Users can quickly be up and running, speeding time to market for motor control and DSP applications without requiring FPGA programming expertise.


The AMD Kria K24 SOM is a great base solution for OEMs to embed within larger solutions. | Credit: AMD

“The AMD Kria K24 SOM and KD240 development platform build on the breakthrough design experience introduced by the Kria SOM portfolio, offering solutions for robotics, control, vision AI and DSP applications,” said Hanneke Krekels, corporate vice president, Core Vertical Markets, AMD. “System architects must meet growing demands for performance and power efficiency while keeping expenses down. The K24 SOM delivers high performance-per-watt in a small form factor and houses the core components of an embedded processing system on a single production-ready board for a fast time to market.”

Many factories have hundreds of motors powering robotics that drive assembly lines and other equipment. It is estimated that around 70% of the total global electricity use by the industrial sector is tied to electric motors and motor-driven systems[2]. As such, even a 1% improvement in the efficiency of a drive system can have a significant positive impact on operational expenses and the environment.
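The arithmetic behind that claim is simple. With illustrative numbers for a single plant (all inputs are invented for the example, not measured values):

```python
# Back-of-the-envelope: what a 1% drive-efficiency gain is worth per site.
site_motor_kwh_per_year = 10_000_000   # assumed annual motor energy use
price_per_kwh = 0.12                   # assumed USD per kWh
efficiency_gain = 0.01                 # the 1% improvement cited above

kwh_saved = site_motor_kwh_per_year * efficiency_gain
print(f"{kwh_saved:,.0f} kWh/yr saved ~ ${kwh_saved * price_per_kwh:,.0f}/yr")
# 100,000 kWh/yr saved ~ $12,000/yr
```

Scaled across hundreds of motors and many sites, small per-drive gains compound quickly.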

“The AMD Kria SOM portfolio has helped make robust hardware for robotics and industrial edge applications available to the masses and we’re excited to see the portfolio extended with the new K24 SOM and KD240 Starter Kit,” said Greg Needel, CEO of Rev Robotics. “With Kria SOMs we’re able to simplify development of even advanced control loop algorithms, adapt to changing software and hardware requirements, and build really cool things for both commercial and STEM educational customers.”

Simplified DSP Development and Accelerated Design Cycles

The K24 SOM features a custom-built Zynq UltraScale+ MPSoC device, and the supporting KD240 starter kit is a sub-$400 FPGA-based motor control kit. By letting developers begin at a more evolved point in the design cycle, the KD240 is more accessible to entry-level developers than other processor-based control kits.

The K24 SOM comes qualified for use in industrial environments with support for more design flows than any generation before it. That includes familiar design tools like MATLAB Simulink and languages like Python, with extensive ecosystem support through the PYNQ framework. Ubuntu and Docker are also supported. Software developers can also use the AMD Vitis™ motor control libraries while maintaining support for traditional development flows.
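With PYNQ, Python code loads an FPGA bitstream (an "overlay") and reads or writes its registers directly. The sketch below shows the general pattern; the overlay file, IP name, and register offsets are hypothetical, not taken from AMD's kit:

```python
from pynq import Overlay

# Hypothetical overlay and IP core names, for illustration only.
overlay = Overlay("foc_motor_control.bit")  # programs the FPGA fabric
motor_ctrl = overlay.foc_ctrl_0             # memory-mapped control IP

SPEED_SETPOINT_REG = 0x10  # hypothetical register offsets
SPEED_FEEDBACK_REG = 0x18

motor_ctrl.write(SPEED_SETPOINT_REG, 1500)  # target speed, rpm
print("measured:", motor_ctrl.read(SPEED_FEEDBACK_REG), "rpm")
```

The appeal is that the hard real-time loop lives in the fabric, while setpoints, monitoring, and tuning stay in ordinary Python.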

With the launch of the Kria K26 SOM, AMD introduced the first App Store for edge applications. By introducing the KD240 Starter Kit, AMD is now the first to offer pre-built motor control apps, allowing users to create power-efficient industrial solutions that are reliable, available, and equipped with advanced security features.

The KD240 is supported by an optional Motor Accessory Pack (MACCP), with additional motor kits available in the future that can be purchased separately for an enhanced ramp-up experience for developers.

Access to a Family of Scalable SOMs

Kria SOMs allow developers to skip the substantial design efforts around the selected silicon device and instead focus on providing differentiated, value-added features. 

Connector compatibility enables easy migration between the K24 and K26 SOM without changing boards, allowing system architects to balance power, performance, size and cost for energy-efficient systems.

K24 SOMs are offered in both commercial and industrial versions and are built for 10-year industrial lifecycles. In addition to support for expanded temperature ranges, the industrial-grade SOM includes ECC-protected LPDDR4 memory for high-reliability systems.

The K24 SOM (commercial and industrial versions) and KD240 Drives Starter Kit are available to order now via direct order and worldwide channel distributors. The K24 commercial version is shipping today, and the industrial version is expected to ship in Q4.  

AMD, the AMD Arrow logo, Kria, Vitis and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

[1] Based on AMD internal analysis in August 2023, comparing the dimensions of the Kria K24 SOM versus the Kria K26 SOM. Power consumption was measured by AMD Labs in August 2023, using the xmutil platform utility tool running a FOC sensor-based bitstream on the K24 SOM and a Smart Camera bitstream on the K26 SOM. Results may vary. (SOM-002)

[2] Source: Energy Efficiency 2022, International Energy Agency, December 2022

indie and SiLC partner to deliver coherent detection-based LiDAR
https://www.therobotreport.com/indie-and-silc-partner-to-deliver-coherent-detection-based-lidar/
Tue, 08 Aug 2023
FMCW LiDAR fuses indie's Surya™ SoC with SiLC's Eyeonic™ Vision Sensor for 10x performance and cost benefits, integrating photonics.


The FMCW LiDAR platform integrates indie’s Surya SoC and SiLC’s Eyeonic Vision Sensor, enabling the industry’s most compact and highest-performance coherent vision system.

indie Semiconductor, an Autotech solutions innovator, and SiLC Technologies Inc. (SiLC), a leader in silicon photonics innovation, have entered a light detection and ranging (LiDAR) partnership that enables coherent detection-based LiDAR platforms for next-generation sensing applications, including driver assistance, autonomous mobility, robotics, and industrial automation. This partnership will deliver fully integrated vision system platforms deploying frequency modulated continuous wave (FMCW) detection, redefining benchmarks for rapidly emerging LiDAR applications.

FMCW-based LiDAR delivers multiple real-world benefits compared to direct detection-based Time of Flight (TOF) solutions, including long-range with high precision, interference immunity, per-point instantaneous velocity and motion measurement. This ground-breaking partnership combines award-winning products from indie and SiLC into reference platforms that enable an order of magnitude improvement in sensing performance, manufacturability, power consumption, form factor and cost relative to competing systems.

“indie is excited to partner with SiLC to bring the processing innovation from Surya to FMCW LiDAR, offering a breakthrough reference design,” said Chet Babla, senior vice president of strategic marketing at indie Semiconductor. “By combining the software-defined high-performance – but low power – analog and digital processing and system control capabilities of Surya, coupled with SiLC’s Eyeonic vision solution, system integrators and OEMs are enabled with 4D FMCW imaging for mass market deployment into multiple applications.”

Ralf Muenster, vice president of business development and marketing at SiLC added, “We are excited to partner with indie to bring industry-leading FMCW-based LiDAR platforms to market. Our state-of-the-art FMCW LiDAR sensor features the highest integration, resolution, precision, and longest range of any other competing approach while remaining the only commercially available solution to offer polarization information.”

Driven by strong market demand, reference platforms featuring Surya and Eyeonic have already been deployed. They are being evaluated by select lead automotive, tier one and industrial OEMs, and both indie and SiLC are actively developing new reference platforms to showcase the scalability and flexibility of their combined technologies.

Inuitive announces NU4100 IC robotics processor
https://www.therobotreport.com/inuitive-announces-nu4100-ic-robotics-processor/
Mon, 26 Sep 2022
Inuitive, a Vision-on-Chip processors company, announced the launch of its NU4100, an expansion of its Vision and AI IC portfolio.


Inuitive’s NU4100 IC can be used for robotics, drones, VR and edge-AI applications. | Source: Inuitive

Inuitive, Ltd., a Vision-on-Chip processors company, announced the launch of its NU4100, an expansion of its Vision and AI IC portfolio. Based on Inuitive’s unique architecture and advanced 12nm process technology, the NU4100 IC supports integrated dual-channel 4K ISP, enhanced AI processing and depth sensing in a single-chip, low-power design, setting the new industry standard for Edge-AI performance.

The NU4100 is the second generation of the NU4x00 series of products. The NU4x00 series is ideal for robotics, drones, VR, and edge-AI applications that demand multiple sensor aggregation, processing, packing and streaming. It is specifically designed for robots and other applications that must sense and analyze the environment using three, six or more cameras, as they make real-time actionable decisions based on that input.

“Robot designers demand higher resolutions, an ever-increasing number of channels, and high-performing, enhanced AI and VSLAM capabilities,” Shlomo Gadot, Inuitive’s CEO, said. “The NU4100 addition to the Vision-on-Chip series of processors is a true revolution, based on all integrated vision capabilities, combined in a single, complete-mission computer chip. The integrated dual-camera ISP provides much-needed flexibility without having to add more components, which, in turn, require additional processing power at a higher price point.”

Mr. Gadot also said, “Inuitive is committed to bringing the most advanced technology to the market. NU4500, the next processor in our roadmap, is planned for tape-out in Q1 2023 with an additional eight Arm A55 cores, more than double the AI compute power, and an H.265 and H.264 video encoder and decoder, and is to be the ultimate single-chip solution for robotics applications.”

The NU4100 supports multi-camera designs and can simultaneously process and stream two imager channels of up to 12MP, or 4K resolution, each at 60 frames per second (fps), while running advanced AI networks. This IC enhances the level of integration for products using Inuitive technology and speeds AI processing by 2x to 4x while consuming 20% less power than Inuitive’s first generation.
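Those streaming figures imply substantial pixel bandwidth. A quick estimate follows, assuming 2 bytes per pixel, since the announcement does not state the bit depth:

```python
# Rough pixel-bandwidth estimate for the quoted NU4100 streaming spec.
pixels_per_frame = 12e6  # 12MP per frame
fps = 60                 # frames per second per channel
channels = 2             # simultaneous imager channels
bytes_per_pixel = 2      # assumed (e.g., a 16-bit container); not stated

bandwidth = pixels_per_frame * fps * channels * bytes_per_pixel
print(f"~{bandwidth / 1e9:.1f} GB/s of raw pixel data")  # ~2.9 GB/s
```

Sustaining on the order of 3 GB/s of camera data alongside AI inference is the kind of load that motivates a dedicated vision processor rather than a general-purpose CPU.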

The new NU4100 has already been adopted by consumer electronics and metaverse industry leaders, which have selected it for their products over alternatives. Customer products powered by the NU4100 will be available starting in Q1 2023.

“Robots are increasingly reliant on vision processors. Their ability to perceive and understand the environment is fundamental to achieving a higher level of robot autonomy,” Dor Zepeniuk, CTO and VP of Product at Inuitive, said. “Processing streams of input from multiple cameras expand the robot’s independence and flexibility, while the integrated dual-channel 4K ISP improves the system’s capabilities. Both, in turn, serve the end goal of designing powerful products that are lower on cost.”

Main features and capabilities of the new NU4100 include:

  • Proprietary Inuitive Depth Vision Accelerators (IDVA):
    • High-throughput, low-latency, depth-from-stereo HW engine
    • SLAM HW Accelerators
    • General purpose Imaging/Vision engines
  • Dual camera ISP unit – up to 12MP per video stream
  • Dual-core Vision-DSP with 384GOPs – optimized for computer vision functions
  • Efficient AI Engine with 3.2TOPs processing power for DNN
  • ARM Cortex-A5 CPU running Linux OS
  • Connectivity for up to 6 Camera devices
  • Fast interfaces – USB3.0, MIPI CSI/DSI – Rx & Tx, LPDDR4 and more

The high-resolution and advanced AI processing provided by the new IC can benefit many other Edge-AI applications. Applications such as Industry 4.0 facilities can leverage the high Edge-AI performance and image resolution for improved process control and a higher level of automation. Likewise, drones can use the ISP and Neural Network-Based vision effects, such as low-light enhancement, to autonomously operate in both dark and lit environments.

NU4100 samples are already available, and the IC will be ready for mass production by January 2023.
