Tiny AI Devices Invade the Maker Movement

Rex St. John
Machine Learning & AI

The tiny white orb-shaped robot scuttled across the carpet, its flashing headlamp illuminating the floor before it. “It can recognize faces,” said the hardware engineer in charge of implementing the compact internals of the Xpider. “We are working on a software layer to make it easy to drag-and-drop commands using the onboard neural network engine.”

As I have toured the world attending hardware conferences and Maker Faires, the theme of giving small edge devices more intelligence has grown stronger with each passing month. More and more inventive developers and teams are working to solve the next big problem facing the Maker Movement: how to make AI-enabled hardware and perceptual computing gadgets usable by a wider audience of developers.

The Acceleration of AI-focused IoT Hardware

It is no secret that AI is among the top growth areas in technology today. The white-hot expansion of interest in Deep Learning has driven an explosion of industry investment into software such as Facebook’s Caffe2 and Google’s TensorFlow frameworks, resulting in tens of thousands of new job openings, according to a recent Gartner report.

The industry has also moved aggressively to bring down the costs of key technologies associated with computer vision and robotics: compact 3D cameras, low-cost lidar, and integrated robotics platforms like the new TurtleBot 3 series from OSRF and Robotis.

Only a few years ago, building a robot capable of mapping and navigating an indoor environment, manipulating objects, and recognizing obstacles would have cost developers thousands of dollars. The TurtleBot 3 Burger costs around $550 and delivers a wide range of functionality that was once available only at a significantly higher price.

As a result, the intense competition to accelerate machine learning and perception tasks has now penetrated well into the realm of the underlying hardware, encouraging major players like Google, Facebook and Microsoft to delve into custom silicon designs of their own. These efforts are slowly trickling out to the Maker Movement… but there is still work to be done.

Big Silicon Players Jumping in Head First

While most of the hardware innovation currently supporting AI is happening in the realm of “Big Iron” (servers processing intensive workloads), we are now seeing the effects of this competition increasingly trickle down to a wider audience. A great example is the NVIDIA Jetson series, which has achieved a notable degree of success, helped along by NVIDIA’s mature CUDA toolchain.

Intel has undertaken its own efforts with the purchases of companies like Movidius and the recent release of the $79 Fathom Neural Compute Stick. Meanwhile, Qualcomm, a major Intel competitor, has been busy optimizing the Hexagon DSP accelerator (which ships inside the Snapdragon 835 SoC) for Deep Learning tasks.
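For a sense of how a maker actually programs against one of these accelerators, here is a rough sketch of running a pre-compiled network on the Neural Compute Stick using the first-generation NCSDK Python bindings (mvnc). Treat it as an assumption-laden illustration rather than a reference: the exact class and method names varied between SDK releases, and the graph file name, input size, and placeholder image below are not taken from any official example.

```python
# Rough sketch (assumptions: NCSDK v1 "mvnc" Python bindings, a pre-compiled
# "graph" file produced by the SDK's compiler, and a float16 input frame).
# Method names may differ between SDK releases.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()          # find attached Compute Sticks
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("graph", "rb") as f:             # network compiled for the stick
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

image = np.zeros((224, 224, 3), dtype=np.float16)  # placeholder input frame
graph.LoadTensor(image, "user object")     # queue inference on the stick
output, _ = graph.GetResult()              # retrieve class probabilities
print("top class index:", int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```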

Finally, Arm (my employer) recently announced a software toolkit containing numerous optimizations for Deep Learning and computer vision called the Compute Library. The Compute Library is especially interesting because approximately 17.7 billion Arm-based processors were sold in 2016, meaning the surface area for low-cost Deep Learning on embedded devices is now much broader.

While much progress is being made to bring down costs, the Maker Movement has a much more stringent set of usability and accessibility requirements that must be met before these technologies can see broad adoption. Simply put: these tools must become even easier and cheaper before they really take off with makers and tinkerers… and that is exactly what is happening in the market!

AI Hardware and Software Must Be Packaged as a Solution

One barrier slowing adoption of AI and perception technology by the Maker mass market is that AI hardware and software must come packaged as an integrated solution rather than a loose collection of components.

Instead of needing just an Arduino, developers now need an entire robot, drone or integrated camera solution to even get started with AI tinkering. Such solutions require device makers to think very carefully about how to package advanced AI and computer vision algorithms so that they are simple and easy to use.

The Xpider is one example. It pairs an inexpensive (~$130) robotic chassis with a camera and some additional sensors, plus an IDE that lets developers tap into its computer vision and learning capabilities quickly.

Another company working to lower the barriers to entry for advanced robotics is Calgary-based EZ-Robot, which has put significant effort into producing a modular robotics system with high-speed, low-latency cameras that can be trained to recognize and track objects in a very short period of time.

Still other examples of efforts to lower barriers to entry include the JeVois platform, successfully crowdfunded last year by founder Laurent Itti. The JeVois is an integrated camera plus low-power compute solution that ships with a wide variety of Linux kernel optimizations and specialized camera integration, allowing makers to plug the device directly into a Raspberry Pi or Arduino via USB. This lets makers begin using phenomenally advanced computer vision algorithms without needing a PhD.

The cost? Less than $49.99. That’s not bad at all.
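To make that concrete, here is a minimal sketch of how a host such as a Raspberry Pi might consume a JeVois-style smart camera: the device enumerates as an ordinary USB webcam for video plus a serial-over-USB port for text detection messages. The device index, serial path and baud rate below are assumptions for a typical Linux host, not values taken from the JeVois documentation.

```python
# Minimal sketch: read processed video from a JeVois-style smart camera that
# enumerates as a standard USB webcam, and listen for detection messages on
# its serial-over-USB port. Device index, serial path and baud rate are
# assumptions; adjust for your setup.
import cv2        # pip install opencv-python
import serial     # pip install pyserial

cap = cv2.VideoCapture(0)                                  # camera as UVC webcam
ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)   # serial-over-USB link

while True:
    ok, frame = cap.read()             # processed frames (with overlays)
    if not ok:
        break
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        print("detection message:", line)   # e.g. object or marker reports
    cv2.imshow("jevois", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
ser.close()
cv2.destroyAllWindows()
```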

Other options for getting started with computer vision include Intel’s RealSense cameras and the competing sensors produced by Orbbec3D, a low-cost alternative.
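As a quick illustration of what working with a depth camera looks like, here is a minimal sketch that reads a single depth frame from a RealSense camera via the librealsense2 Python bindings (pyrealsense2). The stream resolution and frame rate are arbitrary choices, and older RealSense models used an earlier SDK, so consider this a hedged sketch rather than a reference.

```python
# Minimal sketch: query the distance at the image center from a RealSense
# depth camera using the pyrealsense2 bindings (assumes a librealsense2-era
# device is attached; resolution and frame rate are arbitrary).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()        # block until a frameset arrives
    depth = frames.get_depth_frame()
    meters = depth.get_distance(320, 240)      # depth at the center pixel
    print(f"distance at image center: {meters:.2f} m")
finally:
    pipeline.stop()
```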

The Need for Open-Source Training Data

One final note: training hardware gadgets to perform AI-specific tasks may in some cases require high-quality, freely available, open-source training data. Efforts are underway to collect such data for vertical-specific applications like autonomous vehicles, but more everyday developer tasks, such as teaching a device to tell dogs from cats or to recognize Chinese characters, may prove challenging without that data being available in open-source form.
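To illustrate why that data matters, here is a minimal sketch of the kind of training run an open dogs-versus-cats dataset would enable, written with TensorFlow’s Keras API. The directory layout, image size, network shape and epoch count are all illustrative assumptions rather than a recommended recipe.

```python
# Minimal sketch: train a tiny dogs-vs-cats classifier with TensorFlow/Keras.
# Assumes an open dataset unpacked as data/train/dogs/*.jpg and
# data/train/cats/*.jpg; all hyperparameters are illustrative.
import tensorflow as tf

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # two classes: dog, cat
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # a real run would also hold out validation data
```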
