Tech Companies Designing Chips to Aid in Robot Navigation


Cabe Atwell
5 years ago · Robotics

There was a large turnout at this year's IEEE ISSCC (International Solid-State Circuits Conference), with big-name tech companies showing off their latest ICs and SoCs. A notable trend stood out among a few of those companies and academic institutions: new platforms designed to aid robot autonomy and navigation, including path planning and robotic coordination.

Intel was on hand to detail a new low-power robot SoC (in 22nm CMOS) for its battery-powered Mini-Bots. The small robots are part of the company's collaborative multi-robot system for applications such as search and rescue. The secret to their ability lies in an integrated camera, LIDAR, and audio sensors, paired with a low-power custom SoC that handles nearly every aspect of autonomy and navigation.

The SoC was designed to handle tasks such as sensor data fusion, localization and mapping, multi-robot collaborative intelligent decision-making, object detection and recognition, collision avoidance, path planning, and motion control.
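The article doesn't disclose how Intel implements any of these tasks on the SoC, but the sensor-fusion step at the top of that list has a simple textbook core. As a rough illustration only, here is a minimal one-dimensional Kalman-style update that blends two noisy range readings; the sensor values and variances are invented for the example:

```python
def fuse(est, est_var, meas, meas_var):
    """Fuse a prior estimate with a new measurement (1-D Kalman update).
    Each source is weighted by the inverse of its variance."""
    k = est_var / (est_var + meas_var)   # Kalman gain
    fused = est + k * (meas - est)       # corrected estimate
    fused_var = (1 - k) * est_var        # uncertainty shrinks after fusion
    return fused, fused_var

# Hypothetical readings: wheel-odometry range vs. LIDAR range to a wall (meters)
odom, odom_var = 2.10, 0.25    # odometry drifts, so high variance
lidar, lidar_var = 1.95, 0.04  # LIDAR is more precise, so low variance

pos, var = fuse(odom, odom_var, lidar, lidar_var)
print(f"fused position: {pos:.3f} m (variance {var:.3f})")
```

The fused estimate lands much closer to the LIDAR reading, exactly because its variance is smaller; a real multi-sensor robot runs this kind of update continuously across many state variables.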

Electrical and computer engineers from the University of Minnesota's College of Science and Engineering have designed an in-memory-computing graph ASIC that uses "wavefront expansion," created for path planning and navigation.

The chip is a 40 x 40 array built on a different kind of logic, in which values are encoded by how long a signal takes to propagate through the gates. The elements of the array serve as the vertices of a graph, and the connections between them as its edges. Programming those elements lets the chip model different terrain, including hills, valleys, and obstacles a robot must traverse, enabling it to navigate efficiently in nearly any environment.
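The Minnesota chip computes this in hardware through propagation delays, but the underlying "wavefront expansion" idea is easy to see in software: flood-fill cost-to-goal values outward from the goal cell, then walk downhill from the robot's position. A minimal sketch, with a terrain grid and costs invented for illustration:

```python
from collections import deque

def wavefront(grid, goal):
    """Wavefront expansion: flood-fill cost-to-goal over a 2-D grid.
    grid[r][c] is the cost of entering a cell (None = impassable obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    frontier = deque([goal])
    while frontier:                      # the expanding "wave"
        r, c = frontier.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = dist[r][c] + grid[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    frontier.append((nr, nc))
    return dist

def descend(dist, start):
    """Follow steepest descent of the cost field from start (must be
    reachable) back to the goal."""
    path, (r, c) = [start], start
    while dist[r][c] > 0:
        r, c = min(((nr, nc) for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= nr < len(dist) and 0 <= nc < len(dist[0])),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path

# 1 = flat ground, 3 = steep "hill", None = obstacle
terrain = [[1, 1,    1, 1],
           [1, None, 3, 1],
           [1, None, 1, 1],
           [1, 1,    1, 1]]
cost = wavefront(terrain, goal=(0, 3))
print(descend(cost, start=(3, 0)))
```

The returned path skirts the obstacle column and avoids the high-cost "hill" cell, which is the same behavior the time-domain array produces when terrain is programmed into its edge delays.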

Engineers from the University of Michigan have developed a high-performance, energy-efficient 6D vision processor for autonomous MAV (Micro Air Vehicle) navigation. The processor is the first ASIC to accelerate real-time optical-flow 3D coordinate and 3D motion computation. Rather than relying on SLAM, the platform uses stereo cameras and a semi-global matching (SGM) algorithm to achieve autonomous navigation.
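Full SGM aggregates pixel-matching costs along several scanline directions, which is too involved to show briefly, but the core operation it refines is simple window-based stereo matching followed by triangulation. A sketch of that simpler baseline, with made-up camera parameters:

```python
import numpy as np

def disparity(left, right, max_d=16, win=2):
    """Naive winner-take-all stereo: for each pixel in the left image, find
    the horizontal shift into the right image that minimizes the
    sum-of-absolute-differences over a small window. Real SGM additionally
    smooths these costs along multiple image directions."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win + max_d, w - win):
            patch = left[y-win:y+win+1, x-win:x+win+1]
            costs = [np.abs(patch - right[y-win:y+win+1,
                                          x-d-win:x-d+win+1]).sum()
                     for d in range(max_d)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Depth from disparity: Z = f * B / d, with focal length f in pixels,
# stereo baseline B in meters, disparity d in pixels. Numbers are invented.
f_px, baseline_m, d_px = 525.0, 0.12, 9.0
print(f"depth: {f_px * baseline_m / d_px:.2f} m")
```

Nearby objects produce large disparities and far objects small ones, which is why a wider baseline or longer focal length extends the usable depth range.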

The 6D vision processor comprises three distinct parts: a convolutional neural network that identifies landmarks in images taken by the cameras, an accelerator that matches those landmarks between images, and a stage that refines the MAV's trajectory across multiple image frames. The chip crunches the data from that stereoscopic video stream (1920 x 1080 at 25 fps) to achieve autonomous navigation.
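The article doesn't say how the trajectory stage works internally, but the "6D" in the name (3D position plus 3D orientation) points at a standard problem: recovering rigid motion from matched landmarks between frames. As a hedged sketch of that core computation, here is the classic Kabsch algorithm, verified on synthetic landmark data:

```python
import numpy as np

def rigid_motion(p_prev, p_curr):
    """Estimate the rotation R and translation t mapping 3-D landmarks in
    the previous frame onto the current frame (Kabsch algorithm).
    Rows of p_prev and p_curr are matched landmark coordinates."""
    mu_p, mu_c = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - mu_p).T @ (p_curr - mu_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_c - R @ mu_p
    return R, t  # 3-D rotation + 3-D translation = the "6D" motion

# Synthetic check: rotate landmarks 10 degrees about z and shift them
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([0.5, -0.2, 0.1])
pts = np.random.rand(20, 3)
R_est, t_est = rigid_motion(pts, pts @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

A production visual-odometry pipeline wraps an estimator like this in outlier rejection and runs it per frame pair, accumulating the motions into a trajectory.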

These are just a few of the developments companies and universities have been working on to achieve greater navigational efficiency for robotic and drone applications, and it will be interesting to see where these achievements lead in next-generation platforms.
