Sensors, machine vision, and feedback for robotic designs
https://www.therobotreport.com/category/technologies/cameras-imaging-vision/

Teledyne FLIR IIS announces new Bumblebee X stereo vision camera
https://www.therobotreport.com/teledyne-flir-iis-announces-new-bumblebee-x-stereo-vision-camera/
Tue, 16 Apr 2024

Bumblebee X is a new GigE-powered stereo imaging solution that delivers high accuracy and low latency for robotic guidance and pick-and-place applications.

Bumblebee X is a new GigE-powered stereo imaging solution that delivers high accuracy and low latency for robotic guidance and pick-and-place applications. | Credit: Teledyne FLIR

Teledyne FLIR IIS (Integrated Imaging Solutions) today announced the new Bumblebee X series – an advanced stereo-depth vision solution optimized for multiple applications. The imaging device is a comprehensive industrial-grade (IP67) stereo vision solution with onboard processing to build successful systems for warehouse automation, robotics guidance, and logistics.

The Bumblebee X 5GigE meets the essential need for a comprehensive, real-time stereo vision solution, the Wilsonville, Ore.-based company says. With the wide-baseline model, customers can test and deploy depth-sensing systems that work at ranges of up to 20 meters.

The Teledyne FLIR Bumblebee X camera is packaged in an IP67 enclosure and is ready for industrial use cases. | Credit: Teledyne FLIR

Available in three configurations

The new camera is available in three different configurations, which are identical except for the field of view (FOV) of the camera lens. Teledyne designed the camera to operate accurately across varying distances. The low latency and GigE networking make it ideal for real-time applications such as autonomous mobile robots, automated guided vehicles, pick-and-place, bin picking, and palletization, the company said.

“We’re thrilled to announce the release of Bumblebee X, a new comprehensive solution for tackling complex depth sensing challenges with ease,” said Sadiq Panjwani, General Manager at Teledyne FLIR IIS. “Our team’s extensive stereo vision expertise and careful attention to customer insights have informed the design of the hardware, software, and processing at the core of Bumblebee X. With high accuracy across a large range of distances, this solution is perfect for factories and warehouses.”

Specifications

This table compares the specs for the three different configurations of the Bumblebee X camera; see the Teledyne FLIR website for the full, current specifications. | Credit: Teledyne FLIR

Key features include:

  • Factory-calibrated 9.4 in. (24 cm) baseline stereo vision with 3 MP sensors for high-accuracy, low-latency real-time applications
  • IP67 industrial-rated vision system with ordering options for color or monochrome sensors, different fields of view, and 1GigE or 5GigE PoE
  • Onboard processing to output a depth map and color data for point cloud conversion and colorization (a generic conversion sketch follows this list)
  • Ability to trigger an external pattern projector and synchronize multiple systems together for more precise 3D depth information
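The depth-plus-color output described above maps naturally onto a standard pinhole back-projection. The following is a generic sketch, not Teledyne FLIR's SDK: the camera intrinsics (fx, fy, cx, cy) and a registered RGB image are assumed inputs that a stereo depth camera typically provides.

```python
import numpy as np

def depth_to_colored_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into XYZ points and attach per-pixel color."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx              # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0           # discard pixels with no depth return
    return points[valid], colors[valid]
```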

Teledyne FLIR maintains a software library with articles, example code, and support for Windows, Linux, and the Robot Operating System (ROS). Order requests will be accepted starting at the end of Q2 2024.
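For developers working from the ROS side, the first integration step is usually a minimal subscriber like the sketch below. This assumes a ROS 2 environment; the depth topic name is a placeholder, since the actual name is defined by the camera's ROS driver.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class DepthListener(Node):
    """Minimal ROS 2 node that logs incoming depth frames from a stereo camera."""

    def __init__(self):
        super().__init__("depth_listener")
        # "/camera/depth/image_raw" is a placeholder topic name.
        self.create_subscription(Image, "/camera/depth/image_raw", self.on_depth, 10)

    def on_depth(self, msg: Image) -> None:
        self.get_logger().info(f"depth frame {msg.width}x{msg.height} ({msg.encoding})")


def main():
    rclpy.init()
    rclpy.spin(DepthListener())


if __name__ == "__main__":
    main()
```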

Micropsi Industries’ MIRAI 2 offers faster deployment and scalability
https://www.therobotreport.com/micropsi-industries-mirai-2-offers-faster-deployment-and-scalability/
Wed, 20 Mar 2024

MIRAI 2 comes with five new features that Micropsi Industries says enhance manufacturers’ ability to reliably solve automation tasks.

MIRAI 2 is the latest generation of Micropsi Industries’ AI-vision software. | Source: Micropsi Industries

Micropsi Industries today announced MIRAI 2, the latest generation of its AI-vision software for robotic automation. MIRAI 2 comes with five new features that the company says enhance manufacturers’ ability to reliably solve automation tasks with variance in position, shape, color, lighting, or background. 

The Berlin, Germany-based company says MIRAI 2 offers users even greater reliability, easier and faster deployment, and robot-fleet scalability. MIRAI 2 is available immediately. 

“MIRAI 2 is all about scale: It’s MIRAI for more powerful robots, larger fleets of robots, and tougher physical environments, and it brings more tools to prepare for changes in the environment,” Ronnie Vuine, founder of Micropsi Industries and responsible for product development, said in a release. “We’ve let our most demanding automotive OEM customers drive the requirements for this version without sacrificing the simplicity of the product. It still wraps immensely powerful machine learning in a package that delivers quick and predictable success and is at home in the engineering environment it’s being deployed in.”

5 new functions available with MIRAI 2

MIRAI is an advanced AI-vision software system that enables robots to dynamically respond to varying conditions within the factory environment. Micropsi Industries highlighted five new functions available with MIRAI 2:

  • Robot skill-sharing: This new function allows users to share skills between multiple robots at the same site or elsewhere. If conditions at the sites — such as lighting and background — are identical, little or no additional training is needed when adding installations. The company says it can also handle small differences in conditions by recording data from multiple installations into a single, robust skill.
  • Semi-automatic data recording: Semi-automatic training allows users to record episodes of data for skills without having to hand-guide the robot. Micropsi Industries said this feature reduces the workload on users and increases the quality of recorded data. Additionally, MIRAI can now automatically record all relevant data. Users only need to prepare the training situations and corresponding robot target poses.
  • No F/T sensor: Users can train and run skills without connecting a force/torque sensor. The company says this reduces costs, simplifies tool geometry and cabling setup, and makes skill applications more robust and easier to train overall. 
  • Abnormal condition detection: MIRAI can now be configured to stop skills when unexpected conditions are encountered, allowing users to handle these exceptions in their robot program or alert a human operator (a hypothetical handling sketch follows this list).
  • Industrial PC: The MIRAI software can now be run on a selection of industrial-grade hardware for higher dependability in rough factory conditions.
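To make the abnormal-condition feature concrete, here is a purely hypothetical sketch of how a robot program might branch when a vision-guided skill stops itself; none of the names below are Micropsi Industries' actual API.

```python
# Hypothetical illustration only -- not Micropsi Industries' API.
def run_skill_with_fallback(skill, alert_operator):
    """Run a vision-guided skill and branch if it aborts on an unexpected condition."""
    result = skill.run()                  # blocks until the skill finishes or stops itself
    if result.status == "ABNORMAL_CONDITION":
        alert_operator(result.detail)     # e.g., raise a flag on the cell's HMI
        return False                      # caller can retreat to a safe pose or retry
    return result.status == "OK"
```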

MIRAI 2 detects unexpected workspace situations

MIRAI can pick up on variances in position, shape, color, lighting, and background. It can operate with real factory data without the need for CAD data, controlled light, visual feature predefinition, or extensive knowledge of computer vision. 

MIRAI 2 offers customers improved reliability thanks to its ability to detect unexpected workspace situations. The system has a new, automated way to collect training data and the option to run the software on industrial-grade PCs. This results in higher dependability in rough factory conditions.

MIRAI 2’s new features assist in recording the required data for training robots, which means that training the system is easier and faster. Additionally, the system comes equipped with MIRAI skills, which are trained guidelines that tell robots how to behave when performing a desired action. These can now be easily and quickly shared with an entire fleet of robots. 

“By integrating new features and capabilities into our offerings, we can address the unique challenges faced by these industries even more effectively,” Gary Jackson, recently appointed CEO of Micropsi Industries, said in a release. “Recognizing the complexities of implementing advanced AI in robotic systems, we’ve assembled expert teams that combine our in-house talent with select system integration partners to ensure that our customers’ projects are supported successfully, no matter how complex the requirements.”

Slamcore Aware provides visual spatial intelligence for intralogistics fleets
https://www.therobotreport.com/slamcore-aware-provides-visual-spatial-intelligence-for-intralogistics-fleets/
Mon, 11 Mar 2024

Slamcore Aware combines the Slamcore SDK with industrial-grade hardware to provide robot-like localization for manually driven vehicles.

Slamcore Aware identifies people and other vehicles for enhanced safety and efficiency. Source: Slamcore

Slamcore Aware is designed to be simple and quick to commission. Source: Slamcore

Just as advanced driver-assist systems, or ADAS, mark progress toward autonomous vehicles, so too can spatial intelligence assist manually driven vehicles in factories and warehouses. At MODEX today, Slamcore Ltd. launched Slamcore Aware, which it said can improve the accuracy, robustness, and scalability of 3D localization data for tracking intralogistics vehicles.

“Prospective customers tell us that they are looking for a fast-to-deploy and scalable method that will provide the location data they desperately need to optimize warehouse and factory intralogistics for speed and safety,” stated Owen Nicholson, CEO of Slamcore. “Slamcore Aware marks a significant leap forward in intralogistics management, bringing the power of visual spatial awareness to almost any vehicle in a way that is scalable and can cope with the highly dynamic and complex environments inside today’s factories and warehouses.”

Robots and autonomous machines need to efficiently locate themselves, plus map and understand their surroundings in real time, according to Slamcore. The London-based company said its hardware and software can help developers and manufacturers with simultaneous localization and mapping (SLAM).

Slamcore asserted that its spatial intelligence software is accurate, robust, and computationally efficient. It works “out of the box” with standard sensors and can be tuned for a wide range of custom sensors or compute, accelerating time to market, said the company.

Slamcore Aware brings AMR accuracy to vehicles

Slamcore Aware collects and processes visual data to provide rich, real-time information on the exact position and orientation of manually driven vehicles, said Slamcore. Unlike existing systems, the new product can scale easily across large, complex, and ever-changing industrial sites, the company claimed.

Slamcore Aware combines the Slamcore software development kit (SDK) with industrial-grade hardware, providing a unified approach for fast installation on intralogistics vehicles and integration with new and existing Real Time Location Systems (RTLS).

It incorporates AI to perceive and classify people and other vehicles, said Slamcore. RTLS applications can use this enhanced data to significantly improve efficiency and safety of operations, it noted.

The new product brings SLAM technologies developed for autonomous mobile robots (AMRs) to manual vehicles, providing estimation of location and orientation of important assets with centimeter-scale precision, said the company.

With stereo cameras and advanced algorithms, the Slamcore Aware module can automatically calculate the location of the vehicle it is fitted to and then create a map of a facility as the vehicle moves around. It can note changes to layout and the position of vehicles, goods, and people, even in highly dynamic environments, Slamcore said.




‘Inside-out’ approach offers scalability

Existing navigation systems require the installation of receiver antennas across facilities to provide “line-of-sight” connectivity, said Slamcore. However, they become more expensive as facilities scale, with large or complex sites needing hundreds of antennas to track even a handful of vehicles.

Even with this expensive infrastructure, coverage is often unreliable, reducing the effectiveness of RTLS and warehouse robots, Slamcore said. The company said Slamcore Aware addresses these industry pain points.

The system takes an “inside-out” approach that scales in line with the number of vehicles deployed, regardless of the areas they must cover or the complexity of internal layouts. As new vehicles are added to the fleet, an additional module can be simply fitted to each one so that every vehicle automatically and continuously determines its location wherever it is across the whole site, said Slamcore in a release.

Visual spatial intelligence data is processed at the edge, onboard the vehicle, explained the company. Position and orientation data is shared via a lightweight and flexible application programming interface (API) for use in nearly any route-planning, analytics, and optimization platform without compromising performance, it said.
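As an illustration of how such pose data might be consumed, the sketch below parses a hypothetical JSON pose message into a flat record a route-planning or RTLS platform could ingest. The payload shape is an assumption, not Slamcore's documented schema.

```python
import json

# Hypothetical payload -- Slamcore's actual API schema may differ.
sample = ('{"vehicle_id": "forklift-07", "timestamp": 1710158400.25, '
          '"position": {"x": 41.20, "y": 7.85, "z": 0.0}, '
          '"orientation": {"yaw_deg": 92.4}}')

def to_rtls_record(message: str) -> dict:
    """Flatten a pose message for downstream route-planning or analytics tools."""
    pose = json.loads(message)
    return {
        "id": pose["vehicle_id"],
        "t": pose["timestamp"],
        "x_m": pose["position"]["x"],
        "y_m": pose["position"]["y"],
        "heading_deg": pose["orientation"]["yaw_deg"],
    }

print(to_rtls_record(sample))
```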

Slamcore is offering Slamcore Aware to facility operators, fleet management and intralogistics specialists, systems integrators, and other RTLS specialists. The company is exhibiting at MODEX in Atlanta for the first time this week at Booth A13918. It will also be at LogiMAT in Stuttgart, Germany.

 

RIOS Intelligent Machines raises Series B funding, starts rolling out Mission Control
https://www.therobotreport.com/rios-intelligent-machines-raises-series-b-funding-starts-rolls-out-mission-control/
Fri, 08 Mar 2024

RIOS has gotten investment from Yamaha and others to continue developing machine vision-driven robotics for manufacturers.

RIOS works with NVIDIA Isaac Sim and serves the wood-products industry. Source: RIOS Intelligent Machines

RIOS Intelligent Machines Inc. this week announced that it has raised $13 million in Series B funding, co-led by Yamaha Motor Corp. and IAG Capital Partners. The company said it plans to use the investment to develop and offer artificial intelligence and vision-driven robotics, starting with a product for the lumber and plywood-handling sector.

Menlo Park, Calif.-based RIOS said its systems can enhance production efficiency and control. The company focuses on three industrial segments: wood products, beverage distribution, and packaged food products.

RIOS works with NVIDIA Omniverse on factory simulations. It has also launched its Mission Control Center, which uses machine vision and AI to help manufacturers improve quality and efficiency.

RIOS offers visibility to manufacturers

“Customers in manufacturing want a better way to introspect their production — ‘Why did this part of the line go down?'” said Clinton Smith, co-founder and CEO of RIOS. “But incumbent tools have not been getting glowing reviews. Our standoff vision system eliminates a lot of that because our vision and AI are more robust.”

The mission-control product started as an internal tool and is now being rolled out to select customers, Smith told The Robot Report. “We’ve observed that customers want fine-grained control of processes, but there are a lot of inefficiencies, even at larger factories in the U.S.”

Manufacturers that already work with tight tolerances, such as in aerospace or electronics, already have well-defined processes, he noted. But companies with high SKU turnover volumes, such as with seasonal variations, often find it difficult to rely on a third party’s AI, added Smith.

“Mission Control is a centralized platform that provides a visual way to visualize processes and to start to interact with our robotics,” he explained. “We want operators to identify what to work on and what metrics to count for throughput and ROI [return on investment], but if there’s an error on the data side, it can be a pain to go back to the database.”

Smith shared the example of a bottlecap tracker. In typical machine learning, this requires a lot of data to be annotated before training models and then looking at the results.

With RIOS Mission Control, operators can monitor a process and select a counting zone. They can simply draw a box around a feature to be annotated, and the system will automatically detect and draw comparisons, he said.

“You place a system over the conveyor, pick an item, and you’re done,” said Smith. “It’s not just counting objects. For example, our wood products customers want to know where there are knots in boards to cut around. It could also be used in kitting applications.”
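The counting-zone idea can be sketched generically: given bounding boxes from any object detector, count those whose centers fall inside the operator-drawn zone. This is an illustration of the concept, not RIOS's implementation.

```python
def count_in_zone(detections, zone):
    """Count detections whose center falls inside an operator-drawn zone.

    Both detections and the zone are (x_min, y_min, x_max, y_max) pixel boxes.
    """
    zx0, zy0, zx1, zy1 = zone
    count = 0
    for x0, y0, x1, y1 in detections:
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        if zx0 <= cx <= zx1 and zy0 <= cy <= zy1:
            count += 1
    return count

# Two detections, one inside a zone drawn over the conveyor.
print(count_in_zone([(10, 10, 20, 20), (200, 40, 220, 60)], zone=(0, 0, 100, 100)))  # -> 1
```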

RIOS is releasing the feature in phases and is working on object manipulation. Smith said the company is also integrating the new feature with its tooling. In addition, RIOS is in discussions with customers, which can use its own or their existing cameras for Mission Control.

Investors express confidence in automation approach

Yamaha has been an investor in RIOS Intelligent Machines since 2020. The vehicle maker said it has more than doubled its investment in RIOS, demonstrating its confidence in the company’s automation technologies and business strategy.

IAG Capital Partners is a private investment group in Charleston, S.C. The firm invests in early-stage companies and partners with innovators to build manufacturing companies. Dennis Sacha, partner at IAG, will be joining the RIOS board of directors.

“RIOS’s full production vision — from automation to quality assurance to process improvement to digital twinning — and deep understanding of production needs positions them well in the world of manufacturing,” said Sacha, who led jet engine and P-3 production for six years during his career in the U.S. Navy.

In addition, RIOS announced nearly full participation from its existing investors, including Series A lead investor, Main Sequence, which doubled its pro-rata investment. RIOS will be participating in MODEX, GTC, and Automate.




Cambrian Robotics obtains seed funding to provide vision for complex tasks
https://www.therobotreport.com/cambrian-robotics-obtains-seed-funding-to-provide-vision-for-complex-tasks/
Fri, 08 Mar 2024

Cambrian will use the funding to continue its mission of giving industrial robots human-level capabilities for complex tasks.


Cambrian is developing machine vision to give industrial robots new capabilities. Source: Cambrian Robotics

Machine vision startup Cambrian Robotics Ltd. this week announced that it has raised $3.5 million in seed+ funding. The company said it plans to use the investment to continue developing its artificial intelligence platform to enable robot arms “to surpass human capabilities in complex vision-based tasks across a variety of industries.”

Cambrian Robotics said its technology “empowers [users] to automate a broad range of tasks, particularly those in advanced manufacturing and quality assurance that demand high precision and accuracy within dynamic environments.” The London-based company has offices in Augsburg, Germany, and the U.S.

Cambrian noted that its executive team, led by CEO Miika Satori, has over 50 years of combined experience in AI and robotics. Joao Seabra, chief technology officer, is an award-winning roboticist, and Dr. Alexandre Borghi, head of AI, previously led research teams at a $3 billion AI chip startup.

“We are incredibly excited about the possibilities that our recent fundraising opens up,” said Satori. “Our primary goals are to enhance the scalability of the product and strengthen our sales and operations in our main target markets.”

“In addition, we are bringing new AI-vision-based skills to robot arms, further pushing boundaries in the field of robotics,” he added. “We are equally thrilled to begin collaborating with our new investors, whose support is pivotal in driving these advancements forward.”




Cambrian Robotics vision already in use

Cambrian Robotics claimed that its AI-driven vision software and camera hardware enables existing robots to automate complex tasks that were previously only possible with manual methods. It said its systems enable robots to execute intricate assembly processes, bin picking, kitting, and pick-and-place operations “with unmatched accuracy in any lighting condition — a true breakthrough compared to current industry-leading AI vision capabilities.”

In addition, Cambrian can be installed in about half a day, works with all major industrial and collaborative robots, and can pick microparts precisely in less than 200 ms, said the company. Cambrian claimed that its technology is unique in that it can pick a wide range of parts, including transparent, plastic, and shiny metal ones.

Appliance manufacturers globally have deployed Cambrian to monitor quality assurance and catch manufacturing defects that were previously invisible to the human eye, the company said. Cambrian is testing and deploying its vision systems with leading manufacturers including Toyota, Audi, Suzuki, Kao, and Electrolux.

“Although in our factories we have a high level of automation, we still have an important quantity of flexible components and manual processes, which add variability,” said Jaume Soriano, an industrial engineer at Electrolux Group. “Cambrian helps us keep moving toward a more automated manufacturing reality while being able to deal with variable scenarios.”

Cybernetix Ventures leads investment

Cybernetix Ventures and KST Invest GmbH led Cambrian Robotics’ seed funding, with participation from Yamaha Motor Ventures and Digital Media Professionals (DMP).

“Machine vision is a crowded space, but Cambrian has strong differentiation with its unique ability to identify small and transparent items with proprietary visual AI software,” said Fady Saad, founder and general partner of Cybernetix, who will join Cambrian’s board of directors. “Miika and his exceptional team have also managed to bring the product to market with active revenue from top brands.”

Boston-based Cybernetix Ventures is a venture capital firm investing in early-stage robotics, automation, and industrial AI startups. It offers its expertise to companies poised to make major impacts in sectors including advanced manufacturing; logistics and warehousing; architecture, engineering, and construction; and healthcare/medical devices.

KST Invest is a private fund established by one of the owner families of a leading German industrial automation firm. The fund has the objective to invest in robotics and advanced manufacturing among other themes. “Innovation is the livelihood of any business in industrial automation, specifically the combination of vision and robotics,” it said.

Cambrian is also backed by ff Venture Capital (ffVC), which invested in the company’s seed round. ffVC initially seeded Cambrian after the startup graduated from its accelerator, AI Nexus Lab, in partnership with New York University’s Tandon School of Engineering in Brooklyn.

Cambrian is already working with major manufacturers. Source: Cambrian Robotics

Pleora adds RapidPIX lossless compression technology
https://www.therobotreport.com/pleora-adds-rapidpix-lossless-compression-technology/
Wed, 21 Feb 2024

Pleora Technologies said RapidPIX meets the low-latency and reliability demands of machine vision and medical imaging applications.


Pleora Technologies’ iPORT NTx-Mini-LC with RapidPIX compression. | Source: Pleora Technologies

Pleora Technologies has introduced its patented RapidPIX lossless compression technology. The company said RapidPIX can increase data throughput by almost 70% while meeting the low-latency and reliability demands of machine vision applications.

The Kanata, Ontario, Canada-based company said RapidPIX is initially available on Pleora’s new iPORT NTx-Mini-LC platform, which provides a compression-enabled drop-in upgrade of the NTX-Mini embedded interface.

“System designers have been asking us for ways to increase resolution and frame rates over existing Ethernet infrastructure for machine vision applications, without compromising on latency or image data integrity that Pleora is known for,” Jonathan Hou, president of Pleora Technologies, said. “With RapidPIX we’re meeting this demand.

“Pleora’s patented compression technique delivers bandwidth advantages that increase performance without impacting the data quality required for accurate processing in critical applications. As an immediate advantage, designers can cost-effectively increase data throughput while retaining existing installed infrastructure. While boosting performance our compression technology helps further conserve valuable resources, including power consumption, to reduce system costs.”

Pleora: added compression has many benefits

Pleora said that with added compression, engineers can deploy the iPORT NTx-Mini-LC to support low latency transmission of GigE Vision-compliant packets at more than 1.5 Gbps throughput rates over existing 1 Gb Ethernet infrastructure. 
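A rough back-of-envelope check, assuming the lossless encoder shrinks image data to about 59% of its original size (consistent with the "almost 70%" throughput claim above), shows how a 1 Gb Ethernet link can carry more than 1.5 Gbps of uncompressed-equivalent pixel data; the exact ratio depends on image content.

```python
# Back-of-envelope estimate, not Pleora's published math.
link_rate_gbps = 1.0
compression_ratio = 1.0 / 1.7          # assumed compressed/original size
effective_throughput_gbps = link_rate_gbps / compression_ratio
print(f"effective throughput ~ {effective_throughput_gbps:.2f} Gbps")  # ~ 1.70 Gbps
```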

With RapidPIX, systems feed imaging data into the RapidPIX encoder. The encoder then analyzes the imaging data against compression profiles and selects the best approach based on the application requirements. Pleora said latency is less than two lines, or approximately 0.022 milliseconds, when deployed on a system operating at 1024×1024 resolution and Mono8 pixel format with two taps at 40 MHz. The company says users can further reduce latency depending on the number of taps and the pixel format.

Pleora said the lossless compression system also minimizes the amount of data transmitted over the network, which reduces power consumption. Additionally, the mathematically lossless compression technology supports multi-tap and multi-component configurations.

To speed time to market, Pleora offers the iPORT NTx-Mini-LC with RapidPIX Development Kit. The company said this kit helps manufacturers develop system or camera prototypes and proofs of concept easily and rapidly, often without undertaking hardware development.

The role of ToF sensors in mobile robots
https://www.therobotreport.com/the-role-of-tof-sensors-in-mobile-robots/
Tue, 23 Jan 2024

Time-of-flight or ToF sensors provide mobile robots with precise navigation, low-light performance, and high frame rates for a range of applications.


ToF sensors provide 3D information about the world around a mobile robot, supplying important data to the robot’s perception algorithms. | Credit: E-con Systems

In the ever-evolving world of robotics, the seamless integration of technologies promises to revolutionize how humans interact with machines. One transformative innovation is the emergence of time-of-flight (ToF) sensors, which are crucial in enabling mobile robots to better perceive the world around them.

ToF sensors serve a similar purpose to lidar in that both technologies produce depth maps of a scene. The key distinction lies in ToF cameras’ ability to provide depth images that can be processed faster, and they can be built into systems for various applications.

This maximizes the utility of ToF technology in robotics. It has the potential to benefit industries reliant on precise navigation and interaction.

Why mobile robots need 3D vision

Historically, RGB cameras were the primary sensor for industrial robots, capturing 2D images based on color information in a scene. These 2D cameras have been used for decades in industrial settings to guide robot arms in pick-and-pack applications.

Such 2D RGB cameras always require a camera-to-arm calibration sequence to map scene data into the robot’s world coordinate system. Without this calibration, 2D cameras cannot gauge distances, making them unusable as sensors for obstacle avoidance and guidance.
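The calibration step amounts to estimating a rigid transform from the camera frame to the robot's base frame. A minimal, vendor-neutral sketch of how that transform is applied:

```python
import numpy as np

def camera_point_to_base(T_base_cam: np.ndarray, p_cam_xyz) -> np.ndarray:
    """Map a point measured in the camera frame into the robot base frame
    using the 4x4 homogeneous transform produced by camera-to-arm calibration."""
    p = np.append(np.asarray(p_cam_xyz, dtype=float), 1.0)   # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# Example: camera mounted 0.5 m above the base, looking straight down.
T = np.array([[1.0,  0.0,  0.0, 0.0],
              [0.0, -1.0,  0.0, 0.0],
              [0.0,  0.0, -1.0, 0.5],
              [0.0,  0.0,  0.0, 1.0]])
print(camera_point_to_base(T, [0.1, 0.2, 0.3]))  # -> [0.1, -0.2, 0.2]
```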

Autonomous mobile robots (AMRs) must accurately perceive the changing world around them to avoid obstacles and build a world map while remaining localized within that map. Time-of-flight sensors have been in existence since the late 1970s and have evolved to become one of the leading technologies for extracting depth data. It was natural to adopt ToF sensors to guide AMRs around their environments.

Lidar was adopted as one of the early types of ToF sensors to enable AMRs to sense the world around them. Lidar bounces a laser light pulse off of surfaces and measures the distance from the sensor to the surface.

However, the first lidar sensors could only perceive a slice of the world around the robot using the flight path of a single laser line. These lidar units were typically positioned between 4 and 12 in. above the ground, and they could only see things that broke through that plane of light.

The next generation of AMRs began to employ 3D stereo RGB cameras that provide 3D depth information. These sensors use two stereo-mounted RGB cameras and a “light dot projector” that enables the camera array to accurately view the projected light on the scene in front of the camera.

Companies such as Photoneo and Intel RealSense were two of the early 3D RGB camera developers in this market. These cameras initially enabled industrial applications such as identifying and picking individual items from bins.

Until the advent of these sensors, bin picking was known as a “holy grail” application, one which the vision guidance community knew would be difficult to solve.

The camera landscape evolves

A salient feature of newer ToF cameras is their low-light performance, which prioritizes human-eye safety. The 6 m (19.6 ft.) range in far mode facilitates optimal people and object detection, while the close-range mode excels in volume measurement and quality inspection.

The cameras return the data in the form of a “point cloud.” On-camera processing capability mitigates computational overhead and is potentially useful for applications like warehouse robots, service robots, robotic arms, autonomous guided vehicles (AGVs), people-counting systems, 3D face recognition for anti-spoofing, and patient care and monitoring.

Time-of-flight technology is significantly more affordable than other 3D-depth range-scanning technologies like structured-light camera/projector systems.

For instance, ToF sensors facilitate the autonomous movement of outdoor delivery robots by precisely measuring depth in real time. This versatile application of ToF cameras in robotics promises to serve industries reliant on precise navigation and interaction.

How ToF sensors take perception a step further

A fundamental difference between time-of-flight and RGB cameras is their ability to perceive depth. RGB cameras capture images based on color information, whereas ToF cameras measure the time taken for light to bounce off an object and return, enabling intricate depth perception.
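The underlying relation is simple: distance is half the round-trip time multiplied by the speed of light. A quick worked example:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """distance = (speed of light x round-trip time) / 2"""
    return C * round_trip_time_s / 2.0

# A return detected ~6.67 nanoseconds after emission puts the surface ~1 m away.
print(f"{tof_distance_m(6.67e-9):.3f} m")  # ~ 1.000 m
```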

ToF sensors capture data to generate intricate 3D maps of surroundings with unparalleled precision, thus endowing mobile robots with an added dimension of depth perception.

Furthermore, stereo vision technology has also evolved. Using an IR pattern projector, it illuminates the scene and compares disparities of stereo images from two 2D sensors – ensuring superior low-light performance.

In comparison, ToF cameras use a sensor, a lighting unit, and a depth-processing unit. This allows AMRs to have full depth-perception capabilities out of the box without further calibration.

One key advantage of ToF cameras is that they extract 3D images at high frame rates, with rapid separation of background and foreground. They can also function in both bright and dark conditions through the use of active lighting components.
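Because every pixel carries a range value, the foreground/background split can be as simple as a per-frame depth threshold, as in this minimal sketch:

```python
import numpy as np

def foreground_mask(depth_m: np.ndarray, max_range_m: float = 1.5) -> np.ndarray:
    """Mark pixels closer than max_range_m as foreground; zero depth means no return."""
    valid = depth_m > 0
    return valid & (depth_m < max_range_m)   # boolean mask, one entry per pixel
```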

In summary, compared with RGB cameras, ToF cameras can operate in low-light applications and without the need for calibration. ToF camera units can also be more affordable than stereo RGB cameras or most lidar units.

One downside for ToF cameras is that they must be used in isolation, as their emitters can confuse nearby cameras. ToF cameras also cannot be used in overly bright environments because the ambient light can wash out the emitted light source.

A ToF sensor measures depth and distance using the time of flight of emitted light. | Credit: E-con Systems

Applications of ToF sensors

ToF cameras are enabling multiple AMR/AGV applications in warehouses. These cameras provide warehouse operations with depth perception intelligence that enables robots to see the world around them. This data enables the robots to make critical business decisions with accuracy, convenience, and speed. These include functionalities such as:

  • Localization: This helps AMRs identify their position by scanning the surroundings to create a map and matching the information collected against known data
  • Mapping: This creates a map using the transit time of the light reflected from the target object, together with a SLAM (simultaneous localization and mapping) algorithm
  • Navigation: This allows the robot to move from Point A to Point B on a known map

With ToF technology, AMRs can understand their environment in 3D before deciding the path to be taken to avoid obstacles. 

Finally, there is odometry, the process of estimating the change in a mobile robot’s position over time by analyzing data from motion sensors. ToF technology has shown that it can be fused with other sensors to improve the accuracy of AMRs.
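As a simple illustration of that kind of fusion (not any vendor's specific algorithm), a weighted blend of a wheel-odometry position estimate with a ToF/SLAM-derived estimate might look like this:

```python
def fuse_position(odom_xy, tof_xy, alpha=0.8):
    """Weighted blend: alpha weights the ToF/SLAM estimate, (1 - alpha) the wheel odometry."""
    return (alpha * tof_xy[0] + (1 - alpha) * odom_xy[0],
            alpha * tof_xy[1] + (1 - alpha) * odom_xy[1])

print(fuse_position((10.0, 5.0), (10.4, 5.2)))  # ~ (10.32, 5.16)
```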

About the author

Maharajan Veerabahu has more than two decades of experience in embedded software and product development, and he is a co-founder and vice president of product development services at e-con Systems, a prominent OEM camera product and design services company. Veerabahu is also a co-founder of VisAi Labs, a computer vision and AI R&D unit that provides vision AI-based solutions for their camera customers.

KEF Robotics takes a modular approach to aircraft navigation and autonomy
https://www.therobotreport.com/kef-robotics-takes-modular-approach-aircraft-navigation-autonomy/
Thu, 18 Jan 2024

KEF Robotics says its vision software works with different hardware and software to enable drones to navigate in GPS-denied environments.


Tailwind provides visual navigation to drones in GPS-denied environments. Source: KEF Robotics

While autopilots have helped fly aircraft for nearly a century, recent improvements in computer vision and autonomy promise to bring more software-based capabilities onboard. KEF Robotics Inc. has been developing technologies to increase aircraft safety, reliability, and range.

Founded in 2018, KEF said it provides algorithms that use camera data to enable autonomous flight across a variety of platforms and use cases. The Pittsburgh-based company works with designers to integrate these autonomy features into their aircraft.

“Our company’s mantra is to provide visual autonomy capabilities with any camera, any drone, or any computer,” said Eric Amoroso, co-founder and chief technology officer of KEF Robotics. “Being flexible and deployable to drones changes the integration from days to hours, as well as providing safe, reliable navigation,” he told The Robot Report.

“Think of us as an alternative to GPS,” said Olga Pogoda, chief operating officer at KEF Robotics. “The situation in Ukraine shows the difficulty of operating without GPS and true autonomy on the aircraft.”

KEF Robotics enables aircraft to operate without signals

“We founded KEF while entering a Lockheed Martin competition, which whittled 200 teams down to nine,” recalled Amoroso. “The drones had to be autonomous, which was a perfect test case for modular, third-party software.”

Since then, KEF Robotics has worked with the Defense Threat Reduction Agency (DTRA), which uses drones with multiple sensors to search for weapons of mass destruction.

The company said its Tailwind visual navigation software can use stereo cameras for hazard detection and avoidance and that it uses machine learning to localize objects and complete missions. This is particularly important for defense and security missions.

“Our long-term goal is to allow an aircraft to complete complex missions with a button push,” said Pogoda. “An operator can provide an overhead image of a building and a general direction.”

“Then, the autonomous aircraft can take off, fly to a location, and conduct a search pattern,” she explained. “It can reroute based on hazards on the way, and then it can take pictures or readings and come back to an operator without transmitting any signals.”

KEF Robotics said Tailwind, which can work at night and on long-range, GPS-denied flights, is in testing and on its way to availability. The software has been validated at speeds up to 100 mph and provides closed-loop autonomous operations with drift rates of 2% of the distance traveled. It has not yet been qualified for extreme weather or conditions such as dust, fog, or smoke.
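For scale, a 2% drift rate implies roughly 20 m of accumulated position error over a 1 km GPS-denied flight, before any external correction.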

Integration important for modular approach

As with other autonomous systems such as cars, a technology stack with layers of capabilities from different, specialized providers is evolving for aircraft and drones.

“We’re seeing an interesting economic trend in purchasing aircraft — manufacturers are focusing on producing aircraft, and autonomy software is complex,” Pogoda noted. “More companies just want to build the aircraft with open interfaces to allow their customers to add capabilities after the initial delivery.”

To facilitate a more rapid integration of advanced autonomy, the U.S. Department of Defense’s Modular Open Systems Approach (MOSA) is an initiative intended to save money, enable faster and easier equipment upgrades, and improve system interoperability. KEF Robotics is following this approach.

“MOSA says that everything should be open architecture, and the industry must create tools for everything to work together,” said Pogoda.

KEF Robotics has won Small Business Innovation Research (SBIR) grants to advance its technology. How does modular software figure in?

“The Defense Innovation Unit started pushing the MOSA philosophy that companies like KEF Robotics are embracing to rapidly integrate and innovate UAS [uncrewed aircraft system] technology,” Amoroso said. “We specialize in providing plug-and-play visual perception — technology that is expensive and challenging to develop if you’re also designing novel UAS. With MOSA, drone builders can let KEF Robotics focus on the reliability and performance challenges of visual perception while selling a product with state-of-the-art autonomy.”

“The conflict in Ukraine showed the crippling impacts of widespread GPS jamming and the utility of low-cost UAS,” he added. “Only through MOSA do we believe that we can circumvent these threats affordably and at scale.”

“KEF offers two forms of our solution,” said Amoroso. “One is for those interested in GPS-denied navigation, collision avoidance, and target localization. It’s a hardware-based payload that includes systems to communicate with an autopilot.”

“There’s also a software-only deployment for drones that may already have such hardware onboard,” he added. “We follow the MOSA philosophy for deploying our software, along with others’ software and camera drivers.”

For example, KEF Robotics’ software can take measurements and localize them, and a third-party architecture can do custom object detection to spot smokestacks, Amoroso said.

KEF Robotics collaborates with Auterion, Duality AI

“Before KEF came along, there was already a great community working on GPS-denied navigation, including Auterion and Cloud Ground Control,” said Amoroso. “But we had teams and companies coming to us saying, ‘How can we get vision navigation to be plug and play?'”

In June 2023, Auterion Government Solutions partnered with KEF Robotics to combine AuterionOS with Tailwind for robots and autonomous systems.

“We have a great relationship with Auterion, which sees the same core needs for ease of integration and reliability,” Amoroso said. “We offer an instantiation of vision-based navigation, but we want to set it up for new players to slot in and offer their solutions more easily, such as a lidar-based state estimator.”

“We started chatting early last year about how we’d work with Auterion Enterprise PX4,” he noted. “Auterion wanted to see a GPS-denied demonstration with its own UAVs, and within 18 hours, we got our system running with autopilot in a closed loop. We’re still doing demos with them and are interested in getting our software working with Auterion’s Skynode.”

In November, KEF Robotics said it was working with Duality AI’s Falcon digital-twin integration program to develop autonomy software for a tethered uncrewed aircraft system (TeUAS) under a U.S. Army SBIR contract. Falcon can simulate different environments and drone configurations.

“It can simulate challenging scenarios like cluttered forests to test our software and drones,” said Amoroso. “This is similar to how simulation can help autonomous vehicles augment safety, with the benefit of being able to deploy different camera configurations and software.”

Why decoupling software and hardware makes sense 

How does KEF divide tasks between its systems and those of its partners? “The industry has already aggregated around some standards, but there are always customizations involved to meet a customer’s needs,” replied Amoroso.

“Some customers will say it’s OK to plug in our navigational messaging, and others prefer a companion computer that can monitor measurements or guidance commands to verify or support their own planning,” he said. “It’s important to be flexible, to understand early on what the interfaces are, and to do drone demos to show that we can still execute a mission even if we don’t have full control of position or velocity.”

“But the advantage is, by decoupling autonomy from specific hardware, we can generalize our approach and rapidly integrate on a new platform,” Amoroso said. “If an aircraft has an open design, we can integrate our complex software in less than a day, start flying, and then progress to a tighter integration at a customer’s request.”

KEF Robotics is currently focused on defense applications, with a multi-aircraft demonstration of Tailwind for the Army planned for September 2024.

KEF has designed its autonomy software to be hardware-agnostic. Source: KEF Robotics

GRIT Vision System applies AI to Kane Robotics’ cobot weld grinding
https://www.therobotreport.com/grit-vision-system-applies-ai-kane-robotics-cobot-weld-grinding/
Thu, 11 Jan 2024

Kane Robotics has developed the GRIT Vision System to improve cobot material removal and finishing for customers such as Paul Mueller Co.


A worker manipulates a Kane cobot to instruct it. Source: Kane Robotics

Kane Robotics Inc. has combined artificial intelligence with visual sensors to enable its collaborative robot to automatically track and grind weld seams with high accuracy and speed. The company said its GRIT Vision System applies computer vision to the GRIT cobot for material-removal tasks such as sanding, grinding, and polishing in manufacturing.

Launched in 2023, the GRIT robot works alongside humans to perform labor-intensive finishing for any size and type of manufacturer, stated Kane Robotics in a release. Though initially designed for material removal in the aerospace industry, the robot can be configured for metalworking, woodworking, and other types of manufacturing, explained the Austin, Texas-based company.

Kane Robotics cited the case of Paul Mueller Co., which sought a more efficient way to grind welds on large steel tanks. The Springfield, Mo.-based stainless steel equipment manufacturer also wanted to reduce fatigue-related injuries and improve working conditions.

The GRIT Vision System demonstrated its skill in live object detection and adaptive recalibration, allowing Paul Mueller to grind different-sized tank shells and various types of weld seams, said Kane Robotics.

Engineer explains the GRIT Vision System

Dr. Arlo Caine, a consulting engineer at Kane Robotics and a member of the company’s advisory board, helped design the GRIT Vision System. He is an expert in robot programming, mechanical product design, collaborative robotics, machine learning, and computer vision.

Caine is also a professor in the Department of Mathematics and Statistics and associate chair and faculty fellow of the Center for Excellence in Math and Science Teaching at California State Polytechnic University – Pomona. He replied to the following questions from The Robot Report about the GRIT Vision System.

We’ve seen a lot of interest in the past few years around applying machine vision and cobots to welding and finishing. How is Kane Robotics’ approach different from others?

Caine: The GRIT Vision System includes a camera integrated with Kane’s cobot arm and proprietary AI software. The AI uses the camera’s images to “think” as it directs the cobot to follow the weld seam. When weld seams are imperfect, the AI’s automatic steering tracks the uneven pattern and redirects the robotic arm accordingly.

Kane engineers teach the AI to recognize a variety of welds prior to installation. Through software updates, the vision system learns to detect variations in the welds and improve grinding accuracy.

This varies from other cobot welding systems because the Kane AI vision system “sees” the path for the grinding tool to follow, even as the weld seam disappears. Most vision systems can see and react to objects, but those objects don’t disappear, as does a weld seam.

Kane’s vision system overcomes this problem by learning the various stages of each particular weld-grinding process so the cobot recognizes when to continue or stop grinding. The cobot world hasn’t seen this before.

Kane’s proprietary AI software reports on a robotic welding operation. Source: Kane Robotics

How does the GRIT robot know when a job is completed?

Caine: The cobot does the dull and dirty work of holding the grinder, and the vision system handles the monotony of tracking large seams for long periods. But the human operator still selects how hard to push for the given abrasive, how fast to move along the seam, and how many passes to make to achieve the required finish. A human operator makes the final judgment about when the grinding job is “complete.”

What constitutes “done” means different things to different customers and different applications. For Kane customer Paul Mueller Co., the applications were so numerous and varied that teaching GRIT fully was out of scope for the bid.

Kane taught the system to do the basic work required and left the management to the human operator. This relates back to Kane’s philosophy of simplicity and automating only what is most crucial to help humans do their jobs better.

Kane cobot keeps humans in the loop

While many people think that automation is about robots replacing human workers entirely, why is keeping a human in the loop “indispensable” for these manufacturing tasks?

Caine: Vision systems use machine learning to correct cobots’ movements in near real time. But manufactured parts are rarely exact replicas of their perfect CAD models and precise cobot movements, so human judgment is still needed. Human operators ultimately determine how to best grind the welds.

Human operators decide how to position the tool, what abrasive to use, and when to meet finish specifications. After the GRIT Vision System takes control of the cobot, a live custom interface allows the human operator to assess and adjust as needed.

It’s important to highlight that our system is truly collaborative. The cobot does the tedious and monotonous work, and the skilled operator shapes the finish.

Do you work only with FANUC‘s CRX-10iA/L cobot arm? Was any additional integration necessary?

Caine: The basic operations of the vision system are:

  1. The vision system detects the weld
  2. A guidance algorithm computes robot movements to follow the weld, and
  3. A real-time control program commands the robot controller to make the robotic arm react quickly.

Operations 1 and 2 are robot-independent. Kane has implemented Operation 3 for all FANUC CRX models, not just the CRX-10iA/L. We are in the process of developing Operation 3 for Universal Robots’ arms.
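A purely hypothetical sketch of how those three operations could fit together in a closed loop; all object and method names below are placeholders, not Kane Robotics' code.

```python
import time

def seam_following_loop(camera, detector, controller, rate_hz=30):
    """Detect the seam (1), compute a guidance correction (2), command the robot (3)."""
    period = 1.0 / rate_hz
    while controller.grinding_enabled():
        frame = camera.grab()
        seam_px = detector.locate_seam(frame)            # Operation 1: robot-independent
        if seam_px is None:
            controller.hold_position()                   # seam lost or fully ground away
        else:
            offset_px = seam_px - frame.shape[1] / 2     # Operation 2: robot-independent
            controller.jog_lateral(-0.0001 * offset_px)  # Operation 3: robot-specific
        time.sleep(period)
```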

Each project will have different integrations according to the needs of the job, whether different sizes or styles of grinders or different grinding media.

The Paul Mueller project integrated a 2HP pneumatic belt grinder. The tool can be configured with various contact arms and wheels to use a variety of different belts — 1 to 2 in. wide, 30 in. long — with different abrasive qualities, from coarse to fine to blending.

Kane’s AI program is robust enough to accommodate the use of a variety of tools with the same vision system.

AI learns the welding process

Can you describe in a bit more detail how AI helps with object detection and understanding disappearing weld seams?

Caine: Kane’s proprietary AI software discerns the weld seam from the surrounding material based on dozens of frames per second captured by the GRIT Vision System’s camera. The AI uses a real-time object-detection model to visually identify the weld seam.

Camera and grinding end-of-arm tooling with Grit cobot. Source: Kane Robotics

The vision system learns the various stages of a particular weld-grinding process so the cobot will recognize when to stop or continue grinding. This capability is still in development, as Kane collects more data from customers on what all the different phases of weld grinding “down to finish” look like.

In each phase of the assigned work, the AI makes many small decisions per second to accomplish its assigned task, but it doesn’t have the executive authority to sequence those tasks or be responsible for quality control … yet.

Did Kane Robotics work with Paul Mueller Co. in the development of the GRIT system?

Caine: Yes, we partnered with Paul Mueller on this development and used the GRIT Vision System and its proprietary AI for the first time in a commercial application with their team.

The Paul Mueller team was able to assess the new product in real time and offer suggestions for adjustments and improvements. They continue to relay data to us as they test and use the system, which further teaches GRIT to understand what a finished product should look like.

What feedback did you get, and how did you address it?

Caine: Paul Mueller has been pleased with the results of the GRIT Vision System’s weld grinding. They are still in the testing and training phase, but they have expressed satisfaction with the design, the intuitive nature of the AI interface and the overall performance of the system.

Paul Mueller suggested that Kane include an option in the AI interface for the operator to specify the liftoff distance before and after grinding, increasing the ability of the system to conduct grinding around obstacles extending from the surface of the tank — such as fittings, lifting lugs, manways, etc.

Paul Mueller also wanted a more powerful tool that could turn a larger range of belts to handle both larger and smaller seams. Due to the payload limitations of the CRX-10iA/L, we went with the 2HP belt grinder to get the most power we could find for the weight.

During the design process, we met with the lead grinding technician at Paul Mueller to make sure our tool design was robust enough to implement their processes. As a result, we built a system that their operators find useful and intuitive to operate.

Kane offers GRIT Vision System for applications beyond welding

For future applications of AI vision, do you have plans to work with a specific partner or task next?

Caine: Weld grinding is an exciting new space for cobot vision systems. Kane is ready to work with welders and manufacturing teams to employ the GRIT Vision System for grinding all types of welds in multiple industries.

But the GRIT Vision System is not only applicable to weld grinding; we also plan to offer it for other types of operations in various industries, including sanding composite parts in aerospace assembly, polishing metal pieces in automotive manufacturing, sanding wood in furniture-making and related industries, and other material-removal applications.

Eyeonic Vision System Mini unveiled by SiLC Technologies at CES 2024 https://www.therobotreport.com/silc-launches-eyeonic-mini-ces-2024/ https://www.therobotreport.com/silc-launches-eyeonic-mini-ces-2024/#respond Tue, 09 Jan 2024 15:00:13 +0000 https://www.therobotreport.com/?p=577373 SiLC says its new Eyeonic Mini AI machine vision system provides sub-millimeter resolution at a significantly reduced size.

Eyeonic Vision System Mini from SiLC Technologies

The Eyeonic Vision System Mini is designed to be compact and power-efficient. Source: SiLC Technologies

SiLC Technologies Inc. today at CES launched its Eyeonic Vision System Mini, which combines a full, multi-channel frequency-modulated continuous wave (FMCW) lidar on a single silicon photonic chip with an integrated FMCW lidar system-on-chip (SoC). The Eyeonic Mini “sets a new industry benchmark in precision,” said the Monrovia, Calif.-based company.

“Our FMCW lidar platform aims to enable a highly versatile and scalable platform to address the needs of many applications,” said Dr. Mehdi Asghari, CEO of SiLC Technologies, in a release.

“At CES this year, we’re demonstrating our long-range vision capabilities of over 2 km [1.2 mi.],” he added. “With the Eyeonic Mini, we’re showcasing our high precision at shorter distances. Our FMCW lidar solutions, at short or long distances, bring superior vision to machines to truly enable the next generation of AI based automation.”

Founded in 2018, SiLC Technologies said its 4D+ Eyeonic lidar chip integrates all the photonics functions needed to enable a coherent vision sensor. The company added that the system offers a small footprint and addresses the need for low power consumption and cost, making it suitable for robotics, autonomous vehicles, biometrics, security, and industrial automation.

In November 2023, SiLC raised $25 million to expand production of its Eyeonic Vision System.

Eyeonic Mini uses Surya SoC for precision

To be useful, robots need powerful, compact, and scalable vision that won’t be affected by complex or unpredictable environments, changing conditions, or interference from other systems, asserted SiLC Technologies. Sensors must also provide motion, velocity, polarization, and precision, capabilities that the company said make FMCW superior to existing time-of-flight (ToF) systems.
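SiLC does not disclose its signal-processing chain, but the textbook FMCW relationship shows why a single chirped measurement yields both range and velocity, unlike a simple time-of-flight pulse. A minimal sketch with illustrative numbers:

```python
# Generic triangular-chirp FMCW relationships (illustrative numbers only;
# this is textbook math, not SiLC's actual signal-processing chain).
C = 3.0e8              # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, bandwidth, chirp_time, carrier_freq):
    """Recover range (m) and radial velocity (m/s) from up/down-chirp beat tones."""
    slope = bandwidth / chirp_time              # chirp slope, Hz per second
    f_range = (f_beat_up + f_beat_down) / 2.0   # range-only component
    f_doppler = (f_beat_down - f_beat_up) / 2.0 # Doppler component
    rng = C * f_range / (2.0 * slope)           # R = c * f_r / (2 * S)
    vel = C * f_doppler / (2.0 * carrier_freq)  # v = lambda * f_d / 2
    return rng, vel

# Example: a 4 GHz optical sweep over 10 us at a 1550 nm carrier (~193 THz).
rng, vel = fmcw_range_velocity(
    f_beat_up=5.16e6, f_beat_down=5.24e6,
    bandwidth=4.0e9, chirp_time=10e-6, carrier_freq=193.4e12,
)
print(f"range ~ {rng:.3f} m, radial velocity ~ {vel:.3f} m/s")
```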

FMCW technology enables newer imaging systems to directly capture images for AI, factory robots, home security, autonomous vehicles, and perimeter security applications, said SiLC.

The Eyeonic Mini uses what it described as “the industry’s first purpose-built” digital lidar processor SoC, the iND83301 or “Surya” developed by indie Semiconductor. As a result, the company said, it can deliver “an order of magnitude greater precision than existing technologies while being one-third the size of last year’s pioneering model.”

“The Eyeonic Mini represents the next evolution of our close collaboration with SiLC. The combination of our two unique technologies has created an industry-leading solution in performance, size, cost, and power,” said Chet Babla, senior vice president for strategic marketing at indie Semiconductor. “This creates a strong foundation for our partnership to grow and address multiple markets, including industrial automation and automotive.”

With Surya, a four-channel FMCW lidar chip provides robots with sub-millimeter depth precision from distances exceeding 10 m (32.8 ft.), said SiLC. This is useful for warehouse automation and machine vision applications, it noted.

Dexterity uses sensors for truck loading, unloading

For instance, said SiLC Technologies, AI-driven palletizing robots equipped with the Eyeonic Mini can view and interact with pallets, optimize package placement, and efficiently and safely load them onto trucks. With more than 13 million commercial trucks in the U.S., this technology promises to significantly boost efficiency in loading and unloading processes, the company said.

Dexterity Inc. said it is working to give robots the intelligence to see, move, touch, think and learn, freeing human workers for other warehouse and logistics tasks. The Redwood City, Calif.-based company is incorporating SiLC’s technology into its autonomy platform.

“At Dexterity, we focus on AI, machine learning, and robotic intelligence to make warehouses more productive, efficient and safe,” said CEO Samir Menon. “We are excited to partner with SiLC to unlock lidar for the robotics and logistics markets.”

“Their technology is a revolution in depth sensing and will enable easier and faster adoption of warehouse automation and robotic truck load and unload,” he said.

At CES this week in Las Vegas, SiLC Technologies is demonstrating the new Eyeonic Mini in private meetings at the Westgate Hotel. For more information or to schedule an appointment, e-mail SiLC at contact@SiLC.com.

Micropsi Industries’ new CEO talks about AI plus robotics for better production https://www.therobotreport.com/micropsi-industries-new-ceo-talks-about-ai-plus-robotics-better-production/ https://www.therobotreport.com/micropsi-industries-new-ceo-talks-about-ai-plus-robotics-better-production/#respond Fri, 05 Jan 2024 19:36:24 +0000 https://www.therobotreport.com/?p=577340 Micropsi is exploring how to put its vision AI and data collection to work for manufacturing and more industries, says new CEO Gary Jackson.

Gary Jackson, CEO, Micropsi

New CEO Gary Jackson (above) is working with Micropsi founder Ronny Vuine. Source: Micropsi

Last month, Micropsi Industries appointed Gary Jackson as CEO, allowing founder Ronny Vuine to focus on product innovation and serving customers. The company’s MIRAI vision system uses artificial intelligence to control industrial and collaborative robot arms in real time.

Jackson has more than 30 years of executive experience and was previously CEO of video analytics provider Drishti, which was recently acquired. He has led software companies such as Vantive, Ounce Labs, Shunra, and Zeekit, said Micropsi.

“I’ve been working in enterprise business-to-business software for my entire career,” Jackson told The Robot Report. “What I bring to the table is, No. 1, an understanding of how to organize and manage a software business. No. 2, I would say my DNA and my training is more on the sales and go-to-market side of things.”

“My most recent company was doing automated video analyses of human processes in manufacturing,” he recalled. “We would look at a cycle and the human interaction with tools and parts in the environment and break it down into its component parts. And then we’d identify where there were anomalies in the process, the parts, or the tools used and send alerts to management in real time.”

Jackson looks to apply more AI to manufacturing

With his experience with machine learning and quality assurance, Jackson said he understands the roles that robots and AI can play in production.

“I’ve been studying manufacturing floors and production lines for years, and I have certainly observed situations where the robots’ part in the overall process was defined in a particular way that interacted with the humans,” he said. “Drishti was installed at Deloitte’s smart factory in Wichita, Kan. It’s a spectacular facility that’s not just a test site or a STEM [science, technology, engineering, and mathematics] kit for students; it’s a full factory line, with something like 11 workstations with robotic and manual assembly.”

Micropsi has offices in Berlin and San Francisco. The company said MIRAI enables robots to learn from humans and respond directly to sensor information so they can deal with variances and cost-effectively operate in dynamic environments.

“During my time at Drishti, there were many, many use cases with robotics that our customers asked us to help with where analyzing video wasn’t sufficient to solve a problem,” said Jackson. “For instance, one had a robotic arm using a gas sensor that had difficulty determining if there were leaks in the line behind a refrigerator or some other unit. The reason was that these tubes could be very different in location from one unit to the next.”

“It was the area of the largest variance, and I saw that Micropsi had solved that problem for one of its customers,” he noted. “Micropsi was able to deal with the variability … and I had never seen a production environment where a robot could actually follow the curve of whatever anomaly was going on in that assembly process.”

Delving more deeply into data to demonstrate value

Micropsi also plans to share more of the data captured by cameras, said Jackson. It is exploring how to feed that data to command-and-control systems and the dashboards that factory personnel look at daily for key performance indicators (KPIs).

“I know that the data is massively valuable,” he said. “Now, we’re empowered to assemble and display it for integration with other systems. That’s all to be determined.”

Micropsi said that it already works with leading automotive and electronics manufacturers and that it expects to continue growing in the U.S. Jackson asserted that delivering value to customers is more important than making a quick sale.

“I’m not just interested in revenue,” he said. “In fact, my first message to my sales team was, ‘I don’t want a single dollar from any one of your customers unless I’m satisfied that we can actually solve the problem.’ And I will have a 95% customer success rate because that’s just the way I operate.”


Micropsi looks to new use cases, market growth

“Now, as I look forward with Micropsi, we can extend what has been done already into use cases that we haven’t even touched yet, just by expanding the capability of the AI,” Jackson said. “So for instance, we are going to look heavily at things like anomaly detection and picking and not just assembly. These are things that we can do since we have the robotic arm and the camera in place.”

“There are things that we can do with the AI that we haven’t even explored yet,” he added. “We’re at just the tip of the iceberg.”

Jackson also said he expects the economic constraints of the past year to loosen up in 2024 and 2025 in response to ongoing supply chain and labor challenges. Despite all of the recent hype around generative AI, Micropsi needs to do more to promote its unique use of vision AI for manufacturing, he said.

“The list of companies trying to do what Micropsi has done is very small,” said Jackson. “My goal is to have wins within my first 90 days that we can show and measure the difference AI is going to make. From that win, we’ll plan the next one and the next. It’s really the only way to not only remain sticky with the customer, but also to help advance the industry.”

NVIDIA picks 6 noteworthy autonomous systems of 2023 https://www.therobotreport.com/nvidia-picks-6-noteworthy-autonomous-systems-2023/ https://www.therobotreport.com/nvidia-picks-6-noteworthy-autonomous-systems-2023/#respond Sat, 23 Dec 2023 14:00:45 +0000 https://www.therobotreport.com/?p=568983 NVIDIA picks robots that showed special prowess -- swimming, diving, gripping, seeing, strolling and flying -- through 2023.

Images of NVIDIA's roundup of cool robots in 2023.

Top row, from left to right: the Ella smart stroller, Soft Robotics’ food packer, and the TM25S. Bottom row: Salidrone, M4, and Zipline’s delivery drone. | Source: NVIDIA

Outside the glare of the klieg lights that ChatGPT commanded this past year, a troupe of autonomous machines nudged forward the frontiers of robotics, according to NVIDIA.

Here are six that showed promise, swimming, diving, gripping, seeing, strolling and flying through 2023.
 

Ella smart stroller makes a splash at CES

Ella — a smart stroller from Glüxkind Technologies, a startup in Vancouver, Canada — kicked off the year when it was named an honoree in the CES 2023 Innovation Awards.

The canny carriage uses computer vision running on the NVIDIA Jetson edge AI platform to follow parents. Its AI-powered abilities, like smart braking and a rock-my-baby mode, captured the attention of media outlets like Good Morning America and The Times of London as well as an NVIDIA AI Podcast interview with its husband-and-wife cofounders.

A member of NVIDIA Inception, a free program for cutting-edge startups, Glüxkind was one of seven companies with NVIDIA-powered products recognized at the Las Vegas event in January. They included:

  • John Deere for its fully autonomous tractor
  • AGRIST for its robot that automatically harvests bell peppers
  • Inception member Skydio for its drone that can fly at a set distance and height without manual intervention
  • Neubility, another Inception member, for its self-driving delivery robot
  • Seoul Robotics, a partner in the NVIDIA Metropolis vision AI software, for its Level 5 Control Tower that can turn standard vehicles into self-driving cars
  • WHILL for its one-person vehicle that automatically guides a user inside places like airports or hospitals

mGripAI dexterously packs food

Bedford, Mass.-based Inception member Soft Robotics introduced its mGripAI system to an $8 trillion food industry hungry for automation. It combines 3D vision and AI to grasp delicate items such as chicken wings, attracting investors that include Tyson Foods and Johnsonville.

Soft Robotics uses the NVIDIA Omniverse platform and NVIDIA Isaac Sim robotics simulator to create 3D renderings of chicken parts on conveyor belts or in bins. With help from AI and the ray-tracing capabilities of NVIDIA RTX technology, the robot gripper can handle as many as 100 picks per minute, even under glare or changing light conditions.

“We’re all in on Omniverse and Isaac Sim, and that’s been working great for us,” David Weatherwax, senior director of software engineering at Soft Robotics, said in a January interview.

TM25S provides a keen eye in the factory

In a very different example of industrial digitalization, electronics manufacturer Quanta is inspecting the quality of its products using the TM25S, an AI-enabled robot from its subsidiary, Techman Robot.

Using Omniverse, Techman built a digital twin of the inspection robot — as well as the product to be inspected — in Isaac Sim. Programming the robot in simulation reduced time spent on the task by over 70%, compared with programming manually on the real robot.

Then, with optimization tools in Isaac Sim, Techman explored a massive number of program options in parallel on NVIDIA GPUs. The end result, shown in the video below, was an efficient solution that reduced the cycle time of each inspection by 20%.

Saildrone takes to the seas for data science

Saildrone, another Inception startup in Alameda, Calif., created uncrewed watercraft that can cost-effectively gather data for science, fisheries, weather forecasting and more.

NVIDIA Jetson modules process data streams from their sensors, some with help from NVIDIA Metropolis vision AI software such as NVIDIA DeepStream, a development kit for intelligent video analytics.

The video below shows how three of Saildrone’s smart sailboats are helping evaluate ocean health around the Hawaiian Islands.

Caltech M4 sets its sights on Mars

The next stop for one autonomous vehicle may be the red planet.

Caltech’s Multi-Modal Mobility Morphobot, or M4, can configure itself to walk, fly, or drive at speeds up to 40 mph (see video below). An M42 version is now in development at NASA as a Mars rover candidate and has attracted interest for other uses such as reconnaissance in fire zones.

Since releasing a paper on it in Nature Communications, the team has been inundated with proposals for the shape-shifting drone built on the NVIDIA Jetson platform.

Zipline delivery drones fly high

The year ended on a high note with Zipline announcing that its delivery drones flew more than 55 million miles and made more than 800,000 deliveries since the company’s start in 2011. The San Francisco-based company said it now completes one delivery every 70 seconds, globally.

That’s a major milestone for the Inception startup, the field it’s helping pioneer, and the customers who can receive everything from pizza to vitamins up to seven times faster than by truck.

Zipline’s latest drone uses two Jetson Orin NX modules. It can carry 8 lb. of cargo for 10 miles at up to 70 mph to deliver packages in single-digit minutes while reducing carbon emissions 97% in comparison with gasoline-based delivery vehicles.
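As a quick sanity check on those figures, a full 10-mi. leg flown at 70 mph takes roughly 8.5 minutes, which is consistent with the single-digit-minute delivery claim.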

NVIDIA notes maker machines that inspire and amuse

Individual makers designed two autonomous vehicles this year worth special mentions.

Cool Jetson-based robot of 2023: Goran Vuksic with his AI-powered droid. | Source: NVIDIA

Kabilan KB, a robotics developer and student in Coimbatore, India, built an autonomous wheelchair using Jetson to run computer vision models that find and navigate a path to a user’s desired destination. The undergrad at the Karunya Institute of Technology and Sciences aspires to one day launch a robotics startup.

Finally, an engineering manager in Copenhagen who’s a self-described Star Wars fanatic designed an AI-powered droid based on an NVIDIA Jetson Orin Nano Developer Kit. Goran Vuksic shared his step-by-step technical guide, so others can build their own sci-fi companions.

More than 6,500 companies and 1.2 million developers — as well as a community of makers and enthusiasts — use the NVIDIA Jetson and Isaac platforms for edge AI and robotics.

To get a look at where autonomous machines will go next, see what’s coming at CES in 2024.

Editor’s note: This blog was reposted with permission from NVIDIA.

Persee N1 3D camera module from Orbbec uses NVIDIA Jetson https://www.therobotreport.com/persee-n1-3d-camera-module-from-orbbec-uses-nvidia-jetson/ https://www.therobotreport.com/persee-n1-3d-camera-module-from-orbbec-uses-nvidia-jetson/#respond Thu, 21 Dec 2023 14:50:45 +0000 https://www.therobotreport.com/?p=568957 Orbbec's Persee N1 combines a stereo-vision 3D camera and a computer based on NVIDIA Jetson for accurate and reliable data. 

Persee N1.

Orbbec’s Persee N1, which currently retails for $499.99. | Source: Orbbec

Orbbec Inc. has released the Persee N1, which it claimed is “an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform.” The company said its latest product delivers accurate and reliable data for indoor and semi-outdoor operations.

The Persee N1 is equipped with industry-standard interfaces for useful accessories and data connections, said Orbbec. The Troy, Mich.-based company added that the camera module also gives developers access to the benefits of the Ubuntu OS and Open Computer Vision (OpenCV) libraries. 

According to industry reports, an estimated 89% of all embedded vision projects use OpenCV. Orbbec said that this integration marks the beginning of a deeper collaboration between it and the Open Source Vision Foundation, the nonprofit that operates OpenCV.

“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mallick, CEO of OpenCV, in a release. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”
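Orbbec's own SDK samples are not reproduced here, but the kind of OpenCV post-processing this partnership enables can be sketched briefly: the snippet below colorizes a 16-bit depth map for display, using a synthetic frame in place of one delivered by the camera so it runs anywhere.

```python
# Hedged illustration: in practice the depth frame would come from the Orbbec
# SDK running on the Persee N1; a synthetic 16-bit frame stands in here so the
# snippet is self-contained. This is not Orbbec sample code.
import cv2
import numpy as np

def colorize_depth(depth_mm: np.ndarray, max_range_mm: int = 5000) -> np.ndarray:
    """Map a 16-bit depth image (millimeters) to an 8-bit false-color image."""
    clipped = np.clip(depth_mm, 0, max_range_mm).astype(np.float32)
    depth_8u = (255.0 * clipped / max_range_mm).astype(np.uint8)
    return cv2.applyColorMap(depth_8u, cv2.COLORMAP_JET)

# Synthetic stand-in: a tilted plane ranging from ~0.5 m to ~3 m.
h, w = 480, 640
depth = np.tile(np.linspace(500, 3000, w, dtype=np.uint16), (h, 1))
color = colorize_depth(depth)
cv2.imwrite("depth_preview.png", color)  # or cv2.imshow(...) in a desktop session
```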

Persee N1 ready for edge AI and robotics

By delivering accurate data, Persee N1 is suitable for robotics, retail, healthcare, dimensioning, and interactive gaming applications, said Orbbec. 

The Persee N1 is designed to be easy to set up using the Orbbec software development kit (SDK) and Ubuntu-based software environment, the company explained. It includes a Gemini 2 camera, based on active stereo IR technology, as well as Orbbec’s custom ASIC for high-quality, in-camera depth processing.

It also includes the NVIDIA Jetson platform for edge AI and robotics. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner, deepening its relationship with NVIDIA

“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” stated Amit Banerjee, head of platform and partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale, cloud-based commercial deployments.”

The Persee N1 has HDMI and multiple USB ports for easy connections to a monitor and keyboard, said Orbbec. The USB ports can also carry data, and the camera module has a Power over Ethernet (PoE) port for combined data and power connections. It also features MicroSD and M.2 slots for expandable storage.


Orbbec strengthens partnerships

With the release of Persee N1, Orbbec said it is strengthening its relationships with NVIDIA and OpenCV, both huge players in the robotics space. 

In August, Orbbec released a product line developed in collaboration with Microsoft. The companies based this suite of products on Microsoft’s indirect time-of-flight (iToF) depth-sensing technology, which Microsoft brought to market with the HoloLens 2.

The cameras combine Microsoft’s iToF with Orbbec’s high-precision depth camera design and in-house manufacturing capabilities. 

Earlier this year, Orbbec also released a 3D camera SDK Programming Guide that uses ChatGPT. The guide allows developers to create their applications and sample code by talking to ChatGPT.

Trimble provides real-time, centimeter-level accuracy for Sabanto autonomous tractors https://www.therobotreport.com/trimble-provides-real-time-centimeter-level-accuracy-for-sabanto-autonomous-tractors/ https://www.therobotreport.com/trimble-provides-real-time-centimeter-level-accuracy-for-sabanto-autonomous-tractors/#respond Fri, 15 Dec 2023 14:46:45 +0000 https://www.therobotreport.com/?p=568890 Sabanto has integrated GNSS receivers and corrections service from Trimble into its systems to make positioning more precise.

Sabanto offers low-cost retrofits to make machinery autonomous, plus Trimble's positioning technology.

Sabanto offers retrofits to make machinery autonomous, plus positioning technology from Trimble. | Credit: Sabanto

Farmers can take advantage of increasingly autonomous systems to increase productivity, minimize downtime, and alleviate worker shortages. Trimble Inc. announced that it has integrated its high-accuracy positioning technology with Sabanto Inc.’s autonomous tractor fleet.

Westminster, Colo.-based Trimble said its BX992 Dual Antenna GNSS receiver and satellite-delivered CenterPoint RTX corrections service can provide centimeter-level L-Band corrections to Itasca, Ill.-based Sabanto’s systems nearly anywhere in the world.

This combination can help farmers maintain reliability, minimize input costs, and make full use of autonomous vehicles, said the companies.

Trimble provides positioning data for a tractor in a field, shown here with overlay graphics illustrating a path in the field.

Core positioning, modeling, connectivity, and data analytics technologies connect the digital and physical worlds to improve productivity, quality, safety, transparency, and sustainability. | Credit: Trimble

Sabanto and Trimble work to accelerate ag adoption

“In 2022, Trimble Ventures announced an investment with Sabanto focused on autonomous workflows in farming applications,” said Finlay Wood, general manager for off-road autonomy at Trimble, in a release. “This announcement underscores our goal to invest in early and growth-stage companies that are accelerating innovation, digital transformation and sustainability in the industries Trimble serves.”

“It’s exciting to witness how Trimble’s technology and our Trimble Ventures relationship can accelerate the adoption of autonomy in the agriculture industry, as evidenced by this next phase of our collaboration with Sabanto,” he added.

In addition to RTX corrections, the company said it will offer correction stream-switching, enabling farmers to automatically switch from IP to satellite to provide the best signal in every environment.
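Trimble has not described how the switching is implemented; in behavior, though, it amounts to a failover policy that prefers the IP-delivered stream while it is fresh and falls back to the satellite-delivered stream otherwise. A minimal sketch, with hypothetical names and thresholds:

```python
# Illustrative failover policy between an IP (e.g., NTRIP) correction stream and
# a satellite-delivered stream. Names, ages, and thresholds are assumptions for
# the sketch, not Trimble's implementation.
import time
from dataclasses import dataclass

@dataclass
class CorrectionStream:
    name: str
    last_message_at: float  # UNIX timestamp of the most recent correction packet

    def age(self) -> float:
        return time.time() - self.last_message_at

def pick_stream(ip: CorrectionStream, satellite: CorrectionStream,
                max_age_s: float = 10.0) -> CorrectionStream:
    """Prefer the IP stream when fresh; otherwise fall back to satellite."""
    if ip.age() <= max_age_s:
        return ip
    return satellite

now = time.time()
ip = CorrectionStream("NTRIP over cellular", last_message_at=now - 45)   # stale
sat = CorrectionStream("CenterPoint RTX via L-band", last_message_at=now - 2)
print(f"using corrections from: {pick_stream(ip, sat).name}")
```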

“With a customer base in agriculture, as well as municipalities and airports with a remote environment, our customers are often in areas without reliable cell service,” said Craig Rupp, founder and CEO of Sabanto. “A reliable correction signal is extremely important to keeping our autonomous machinery working around the clock.”

“It’s exciting to see our two businesses find synergies to improve ag autonomy, delivering a better experience for our farmers through continuous connectivity, regardless of the environment,” he said.

The global market for autonomous tractors could expand from $2.4 billion in 2023 to $7.1 billion by 2028 at a compound annual growth rate (CAGR) of 24%, according to Markets and Markets. The research firm said drivers include increasing demand for efficiency and sustainability, as well as maturing software and Internet of Things (IoT) technologies.

Flexxbotics reduces changeover time to 10 minutes at SpiTrex Orthopedics https://www.therobotreport.com/flexxbotics-reduces-changeover-time-10-minutes-spitrex-orthopedics/ https://www.therobotreport.com/flexxbotics-reduces-changeover-time-10-minutes-spitrex-orthopedics/#respond Thu, 07 Dec 2023 14:00:57 +0000 https://www.therobotreport.com/?p=568778 Flexxbotics' systems enable robots to communicate directly with SpiTrex's FOBA laser marking systems and change jobs in real time.

Flexxbotics and SpiTrex partnership graphic.

SpiTrex Orthopedics selected Flexxbotics systems to work in its FOBA laser marking work cells. | Source: Flexxbotics

Flexxbotics announced today that SpiTrex Orthopedics, a global medical device contract manufacturer, has chosen its systems for robot-driven manufacturing with autonomous process control. The company’s systems will work in SpiTrex’s FOBA laser marking work cells. 

Flexxbotics said its systems enable robots to communicate directly with the FOBA laser marking machines and change jobs in real time within sequence. This direct communication can reduce changeover time at SpiTrex to just 10 minutes, the company claimed.

Before they started using Flexxbotics’ technology, SpiTrex engineers could spend an hour or more setting up a workcell, twice a week, whenever they needed to run a new job.

“The autonomous changeover process, coupled with the closed feedback loop functionality, enables us to produce extremely high-tolerance parts through continuous flow which reduces the overall lead time by 20+%,” said Brett Gopal, SpiTrex Orthopedics senior vice president of operations, in a release. “Flexxbotics directly improves our throughput and ROC [return on capital], which in turn increases profitability.”

With the new technology, SpiTrex’s robots can also directly connect with the vision system verifying the laser marking on each part. The robots then autonomously sort each part based on the results of that vision verification.

Flexxbotics said that by directly connecting robots to vision systems, the company can ensure closed-loop quality for higher yields. 

“We are proud to work with SpiTrex as they robot-enable manufacturing operations in their smart factories,” stated Tyler Bouchard, co-founder and CEO of Flexxbotics.

“We understand the necessity for the highest levels of precision while increasing output in advanced machining operations using robotics, particularly in sectors like medical, defense, aerospace and automotive,” he added. “We believe that autonomous manufacturing cannot be achieved without autonomous process control which is why we are focused on robot-machine orchestration.”


Why SpiTrex chose Flexxbotics

There were a number of factors that SpiTrex considered when it chose Flexxbotics. First, the company said it was attracted to Flexxbotics’ ability to autonomously configure a robot and the FOBA laser marker for each job that will run.

SpiTrex also liked its ability to perform closed-loop, in-line inspection with COGNEX’s vision system, which directs the robot to sort parts based on the FOBA marking results and conformance to critical characteristics.

In addition, Flexxbotics offers open connectivity and interoperability between robots, FOBA laser markers, and COGNEX vision sensors, along with existing IT business systems.

SpiTrex was interested in the company’s interchangeable carousels with universal switching processes, which enable quick changeover for faster set-up times. The manufacturer also landed on Flexxbotics because it provides a direct feed of work cell operations and inspection data into its quality repository.

Altogether, SpiTrex said that Flexxbotics’ system provided enough flexibility to start with initial work cells, quickly get success, and then scale to additional FOBA laser marking work cells factory-wide. 

“Flexxbotics is the only robot machine tending software solution we found capable of delivering the precision, cycle-time and closed-loop compliance required,” Gopal said. “We are impressed with Flexxbotics’ autonomous process control using robots, and the ability to close-the-loop by alerting upstream and downstream workcells of quality problems based on automated inspection results, which is quite unique.”

Inside the implementation

Bouchard told The Robot Report that SpiTrex didn’t need to make any alterations to its process to implement Flexxbotics’ systems. The company simply needed to add a new in-feed fixture for part presentation to the robots, he explained.

“The project is ongoing with an incremental work cell-by-work cell rollout to all 10 FOBA work cells to reduce disruption,” Bouchard said. “Each work cell takes a business week for initial installation, followed by a production ramp-up optimization period that can range between two to four weeks, depending on the variety and complexity of the parts being processed.”

At SpiTrex’s facility, Flexxbotics’ system directs robots to use COGNEX camera images to determine the pass/fail status of each part. If the system detects non-conformance issues that need to be corrected, it sends alerts with the images to pre- and post-process work cells.
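Flexxbotics has not published its orchestration code, but the closed loop described above reduces to a small dispatch routine: inspect, sort, and alert the neighboring work cells on a failure. The sketch below uses hypothetical names and stand-in callbacks to show that flow.

```python
# Hypothetical sketch of the closed-loop sort-and-alert flow described above;
# class, function, and cell names are illustrative, not Flexxbotics' actual API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InspectionResult:
    part_id: str
    passed: bool
    image_path: str   # saved camera image used for the pass/fail decision

@dataclass
class WorkCellAlerts:
    """Stand-in for messaging to upstream/downstream work cells."""
    sent: list = field(default_factory=list)

    def notify(self, cell: str, result: InspectionResult) -> None:
        self.sent.append((cell, result.part_id, result.image_path))

def sort_part(result: InspectionResult,
              place_good: Callable[[str], None],
              place_reject: Callable[[str], None],
              alerts: WorkCellAlerts) -> None:
    if result.passed:
        place_good(result.part_id)
        return
    place_reject(result.part_id)
    # Close the loop: warn the cells before and after this one.
    for cell in ("upstream-machining", "downstream-packaging"):
        alerts.notify(cell, result)

alerts = WorkCellAlerts()
sort_part(InspectionResult("P-0042", False, "/tmp/P-0042.png"),
          place_good=lambda p: print(f"{p} -> good bin"),
          place_reject=lambda p: print(f"{p} -> reject bin"),
          alerts=alerts)
print(alerts.sent)
```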

“Human supervision is minimal, with a 10:1 machine-to-operator ratio, where personnel simply stage part lots and then input key variables such as radius, length, and diameter of the parts,” Bouchard said.
