Getting Started with the Intel Neural Compute Stick 2 and the Raspberry Pi

Getting started with Intel’s Movidius hardware

Alasdair Allan

Three years ago now, a startup called Movidius launched what was the world’s first deep learning processor on a USB stick. Based around their Myriad 2 Vision Processing Unit (VPU), the Fathom Neural Compute Stick was the first of its kind. But then the company was bought by Intel and, apart from a brief update at CES, both they and the stick disappeared from view. It wasn’t until the following year that Intel launched a re-branded version of the stick.

Another year passed, and Intel unveiled the second-generation stick, making use of the Myriad X VPU. But, unlike the new Google Coral USB Accelerator, which shipped with Raspberry Pi support out of the box, as late as November last year there wasn’t any support for using it with non-x86_64 architectures.

That changed in December, with software support and documentation finally being released on how to use the stick with Raspbian, although initial reports suggested that the process wasn’t particularly user friendly.

ℹ️ Information This walkthrough will also work when setting up and using the original Movidius Neural Compute Stick with the Raspberry Pi without changes.
Opening the Box

The stick arrives in a small unassuming blue box. Inside the box is the Neural Compute Stick itself, and a small leaflet pointing you at the instructions.

At 73 × 27 × 14 mm, and weighing in at 35g, the second-generation Neural Compute Stick probably won’t be mistaken for a USB flash drive, at least not one from this decade. That might not seem important until you realise that the stick is so large it tends to block nearby ports or, with some computers, be hard to use at all. You may need to pick up a USB hub to use it.

Setting Up Your Raspberry Pi

Unlike Google’s new Coral Dev Board, which needs a lot of setup work done before you can get started, there isn’t really a lot to do here. Grab a Raspberry Pi, a power supply, a USB cable, and a micro SD card, and you’re ready.

If you’ve used the Raspberry Pi for anything before, it’s probably a good idea to install a fresh version of the operating system and work from a clean slate. Go ahead and download the latest release of Raspbian Stretch (with desktop) and then set up your Raspberry Pi.

Unless you’re using wired networking, or have a display and keyboard attached to the Raspberry Pi, at a minimum you’ll need to put the Raspberry Pi on to your wireless network and enable SSH. You might want to go ahead and enable VNC as well, as it will prove useful later.
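If you’re setting the card up entirely headless you can do both of those things before the board boots for the first time. On Raspbian Stretch, an empty file called ssh on the boot partition enables SSH, while a wpa_supplicant.conf file on the same partition gets copied into place on first boot. With the card mounted on your laptop (here I’m assuming a Mac, where the partition shows up at /Volumes/boot) create the empty file,

% touch /Volumes/boot/ssh

and then create /Volumes/boot/wpa_supplicant.conf with your own network details,

# Replace the country code, SSID, and passphrase with your own details
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="your-network-name"
    psk="your-network-password"
}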

Once you’ve set up your Raspberry Pi go ahead and power it on, and then open up a Terminal window on your laptop and SSH into the Raspberry Pi.

% ssh pi@raspberrypi.local

Once you’ve logged in you might want to change the hostname to something less generic using the raspi-config tool, to let you tell it apart from all the other Raspberry Pi boards on your network. I chose neural2.
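If you’d rather not wade through the raspi-config menus, the hostname can also be changed by hand. Here’s a minimal sketch, assuming the board still has the stock raspberrypi hostname; it rewrites the name in /etc/hostname and /etc/hosts and then reboots,

$ sudo sed -i 's/raspberrypi/neural2/' /etc/hostname /etc/hosts
$ sudo reboot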

Powering Your Raspberry Pi

The more modern Raspberry Pi boards, and especially the latest model, the Raspberry Pi 3, Model B+, need a USB power supply that will provide +5V consistently at 2 to 2.5A. Depending on what peripherals the board needs to support, that can be a problem.

Typically the Raspberry Pi uses between 500 and 1,000mA, depending on the current state of the board. However, attaching a monitor to the HDMI port uses 50mA, adding a camera module requires 250mA, and keyboards and mice can take as little as 100mA or well over 1,000mA depending on the model, with the Neural Compute Stick itself requiring at least 500mA. Add that up, and a board with a camera module and the stick attached can easily be drawing 1,750mA or more, uncomfortably close to the limit of a 2A supply.

However, I’ve found that most USB chargers will tend to under-supply the Raspberry Pi, and as a result the board will register a low-power condition and start to throttle the CPU speed. If things get worse, the board may suffer brownouts and start to randomly, or repeatedly, reboot.

If you have a monitor attached to your Raspberry Pi board, this is the point where you’ll see a yellow lightning bolt in the top right-hand corner of your screen. However, if you’re running your Raspberry Pi headless, you can still check from the command line using vcgencmd.

$ vcgencmd get_throttled

However, the output is in the form of binary flags, and is therefore more than somewhat impenetrable. Fortunately it’s not that hard to put together a script to parse the output of the command and get something that’s a bit more human-readable.
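I’m not going to walk through that script line by line, but a minimal sketch of what a throttled.sh could look like is shown below. The bit positions come from the Raspberry Pi firmware documentation, and the rest is an illustration rather than the exact script whose output you see below.

#!/bin/sh
# Decode the flag bits returned by vcgencmd get_throttled.
# Bit 0: under-voltage now        Bit 16: under-voltage has occurred
# Bit 1: frequency capped now     Bit 17: frequency capping has occurred
# Bit 2: throttled now            Bit 18: throttling has occurred

STATUS=$(vcgencmd get_throttled | cut -d= -f2)
echo "Status: $STATUS"

flag () {
   if [ $(( STATUS & $1 )) -ne 0 ]; then echo "YES"; else echo "NO"; fi
}

echo "Undervolted:"
echo "   Now: $(flag 0x1)"
echo "   Run: $(flag 0x10000)"
echo "Throttled:"
echo "   Now: $(flag 0x4)"
echo "   Run: $(flag 0x40000)"
echo "Frequency Capped:"
echo "   Now: $(flag 0x2)"
echo "   Run: $(flag 0x20000)"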

$ sh ./throttled.sh
Status: 0x50005
Undervolted:
   Now: YES
   Run: YES
Throttled:
   Now: YES
   Run: YES
Frequency Capped:
   Now: NO
   Run: NO
$

If the script reports that the board is under-volted, you should probably replace your power supply with a more suitable one before proceeding. Unless you’re really sure about your own power supply, I’d recommend you pick up the official Raspberry Pi USB power supply.

The official supply has been designed to consistently provide +5.1V despite rapid fluctuations in current draw. It also has an attached micro USB cable, which means that you don’t accidentally use a poor quality cable — something that can really be an issue.

Those fluctuations in demand are something that happens a lot when you’re using peripherals with the Raspberry Pi, and something that other supplies — designed to provide consistent current for charging cellphones — usually don’t cope with all that well.

Installing the Software

You’re now ready to install the software needed to support the Neural Compute Stick. You should ignore the instructions the helpful leaflet in the box pointed you at; they’re aimed at x86_64 computers and aren’t going to be useful here.

Instead, Intel has provided alternative instructions, and we’re going to base ourselves around those. Go ahead and grab the OpenVINO toolkit,

$ wget https://download.01.org/opencv/2019/openvinotoolkit/l_openvino_toolkit_raspbi_p_2019.1.094.tgz
$ tar -zxvf l_openvino_toolkit_raspbi_p_2019.1.094.tgz

and then modify the setup script to reflect the installation path,

$ sed -i "s|<INSTALLDIR>|$(pwd)/inference_engine_vpu_arm|" inference_engine_vpu_arm/bin/setupvars.sh

before appending the setup script to the end of your .bashrc file.

$ echo "source inference_engine_vpu_arm/bin/setupvars.sh" >> .bashrc
$ source .bashrc
[setupvars.sh] OpenVINO environment initialized
$

In the latest release, which would be OpenVINO 2019 R1, there is a problem with the PYTHONPATH configuration, so you also need to append a slight addition to the path at the bottom of your .bashrc file to fix the problem.

export PYTHONPATH="${PYTHONPATH}:/home/pi/inference_engine_vpu_arm/python/python3.5/armv7l"

Then run the rules script to install new udev rules so that your Raspberry Pi can recognise the Neural Compute Stick when you plug it in.

$ sudo usermod -a -G users "$(whoami)"
$ sh inference_engine_vpu_arm/install_dependencies/install_NCS_udev_rules.sh
Update udev rules so that the toolkit can communicate with your neural compute stick
[install_NCS_udev_rules.sh] udev rules installed
$

You should go ahead and log out of the Raspberry Pi, and back in again, so that all these changes can take effect. Then plug in the Neural Compute Stick.

Checking dmesg you should see something a lot like this at the bottom,

[ 1491.382860] usb 1-1.2: new high-speed USB device number 5 using dwc_otg
[ 1491.513491] usb 1-1.2: New USB device found, idVendor=03e7, idProduct=2485
[ 1491.513504] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 1491.513513] usb 1-1.2: Product: Movidius MyriadX
[ 1491.513522] usb 1-1.2: Manufacturer: Movidius Ltd.
[ 1491.513530] usb 1-1.2: SerialNumber: 03e72485

If you don’t see similar messages then the stick hasn’t been recognised. Try rebooting your Raspberry Pi and check again,

$ dmesg | grep Movidius
[    2.062235] usb 1-1.2: Product: Movidius MyriadX
[    2.062244] usb 1-1.2: Manufacturer: Movidius Ltd.
$

and you should see that the stick has been detected.

Running Your First Machine Learning Model

Unlike the newer Edge TPU-based Coral hardware from Google, or NVIDIA’s new Jetson Nano, the out-of-the-box experience with the Neural Compute Stick is built around C++ rather than Python. However, we’re lacking some of the tools we need to build the sample code, so first of all we need to install cmake.

$ sudo apt-get install cmake

Then we can go ahead and build one of the pre-trained Face Detection demos.

$ cd inference_engine_vpu_arm/deployment_tools/inference_engine/samples
$ mkdir build
$ cd build
$ cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a"
$ make -j2 object_detection_sample_ssd
   .
   .
   .
[100%] Linking CXX executable ../armv7l/Release/object_detection_sample_ssd
[100%] Built target object_detection_sample_ssd
$

Unlike the full Intel distribution, the Raspberry Pi version of the toolkit lacks both the pre-trained model file and the associated XML file describing the model topology. So before we can run the demo we also need to download both of those files.

$ wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
$ wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.xml

We’ll also need an image to run the face detection demo on. I grabbed an image I had lying around of me, taken at CES earlier in the year, and copied it into my home directory on the Raspberry Pi using scp from my laptop.
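From the laptop that’s a one-liner; here I’m assuming the image is called me.jpg, as in the command further down, and that you renamed your board neural2 earlier,

% scp me.jpg pi@neural2.local: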

Now we can run our demo,

$ ./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i ~/me.jpg
[ INFO ] InferenceEngine:
API version ............ 1.4
Build .................. 19154
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/pi/me.jpg
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. 19154
Description ....... myriadPlugin
[ INFO ] Loading network files:
face-detection-adas-0001.xml
face-detection-adas-0001.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ WARNING ] Image is resized from (960, 960) to (672, 384)
[ INFO ] Batch size is 1
[ INFO ] Start inference (1 iterations)
[ INFO ] Processing output blobs
[0,1] element, prob = 1    (410.391,63.5742)-(525.469,225.703) batch id : 0 WILL BE PRINTED!
   .
   .
   .
[ INFO ] Image out_0.bmp created!
total inference time: 155.233
Average running time of one iteration: 155.233 ms
Throughput: 6.44194 FPS
[ INFO ] Execution successful
$

As the image is processed you’ll see a number of detections. In my image there is one very definite detection of a face, with a probability of ‘1’, and then over a hundred other detections, all with probabilities of less than ‘0.03’, which were deemed by the code to be not significant.

An output BMP file will be automatically generated, with a bounding box drawn around each of the significant detections. At this point we at least know that everything works.

But looking at the C++ code of the object_detection_sample_ssd demo, in the inference_engine/samples/object_detection_sample_ssd directory, we can see that things are a lot more low level than those of us used to dealing with machine learning from higher-level languages will be comfortable with. So we’re going to need to get the Python wrappers working.

Adding a Camera

We’re going to make use of the Raspberry Pi camera module for our next demo so before we get hands on with the Python wrappers, let’s make sure we have our camera installed and working.

To attach the camera module to your Raspberry Pi, turn the camera module over so it’s face down and pull the black latch outward. Then slide the ribbon cable under the latch with the blue strip facing towards you. The ribbon cable should slide smoothly beneath it, and you shouldn’t have to force it. Then push the black latch back in to secure the cable in place.

If your Raspberry Pi is powered on and running, you’ll need to power it down before attaching the camera module. In your SSH session you should go ahead and power down the board using the shutdown command to bring it to a clean halt.

$ sudo shutdown -h now

Unplug the power cable and then pull the black latch of the board’s camera connector, located just to the right of the 3.5mm jack and the Ethernet socket, upwards. Follow the same procedure as for the camera module; this time the blue strip should face towards the Ethernet jack. Afterwards power the board back up and log back in via SSH.

Now you’ve got the camera physically connected, you’ll need to enable it. You can use the raspi-config utility to do that.

$ sudo raspi-config

Scroll down and select “Interfacing Options,” and then select “Camera” from the next menu. Hit “Yes” when prompted, and then “Finish” to quit out of the configuration tool. Select “Yes” when asked whether you want to reboot.

You can check that the camera is working by using the raspistill command.

$ raspistill -o testshot.jpg

This will leave a file called testshot.jpg in the home directory; you can use scp to copy it from the Raspberry Pi back to your laptop.
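From the laptop, and again assuming the board was renamed neural2 earlier, that looks something like this,

% scp pi@neural2.local:testshot.jpg .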

While we could take still images with the camera and feed them to our model by hand, if you have a monitor attached there is some code included in the software development kit that you can run to demonstrate real-time inferencing on top of a video feed from the camera.

The script will need to access the camera from Python and, out of the box, the picamera Python module may not be installed on your Raspberry Pi. So before running the demo code we should go ahead and install it.

$ sudo apt-get install python3-picamera

We’re also going to want a Video4Linux (V4L) compatible device, so you should load the appropriate BCM2835 kernel module,

$ sudo modprobe bcm2835-v4l2

to create a V4L compatible /dev/video0 device. You’ll probably also want to append the BCM2835 module to /etc/modules,

$ sudo -i
# echo 'bcm2835-v4l2' >> /etc/modules
# exit
$

so that the module is loaded on reboot.

Running Machine Learning in Python

The Python wrappers for the Intel OpenVINO toolkit require NumPy and OpenCV support, so before we do anything else, let’s go ahead and install both of those packages for Python 3.5,

$ sudo apt-get install python3-numpy
$ pip3 install opencv-python

and install some additional libraries needed by the code.

$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libavcodec-dev
$ sudo apt-get install libavformat-dev
$ sudo apt-get install libswscale-dev

We’ll be using a modified version of the object_detection_demo_ssd_async.py demo script. While I managed to get it to mostly run, it had problems opening the camera stream, so I had to kludge things a little bit to make them work.

Open the script in your favourite editor,

$ cd ~/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/python_sample/object_detection_demo_ssd_async
$ vi object_detection_demo_ssd_async.py

then modify lines 83 through to 85,

    if args.input == 'cam':
        input_stream = 0
    else:

to have the full path to our V4L device.

    if args.input == 'cam':
        input_stream = '/dev/video0'
    else:

First of all, this will stop the script hanging with a pipeline error, and secondly it avoids the infamous "Trying to dispose element pipeline0, but it is in PAUSED instead of the NULL state" error you get when trying to force quit the script after it hangs.

We can reuse the same model files we downloaded for our earlier example. So, with our modified script in place, we can go ahead and set it going,

$ cd ~/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/python_sample/object_detection_demo_ssd_async
$ python3 ./object_detection_demo_ssd_async.py -m ../../build/face-detection-adas-0001.xml  -i cam -d MYRIAD -pt 0.5

If all goes well you should see a window open up on your desktop with a video feed from the Pi Camera Module, with real time inferencing overlaid on top.

This will work just fine on the primary display for your Raspberry Pi, so if you have a monitor attached the window should just pop open.

If you’re running headless the easiest thing is to enable VNC, and connect to your Raspberry Pi that way. Bear in mind that if you don’t have a monitor connected you’ll need to set a default resolution using raspi-config, as the default display is just 720×480 pixels in size. Go to “Advanced Options” and then “Resolution” and select a display size that’ll fit on your local machine’s display.

If you’re connected to the Raspberry Pi from your local machine it is possible to get the window to display on your local desktop, so long as you have an X Server running and have enabled X11 forwarding.

$ ssh -XY pi@neural2.local
password:

Otherwise the script will close with a “cannot open display” error.

Summary

Despite getting there first, or more likely because of it, I haven’t been that impressed with the standard of documentation around the Intel Neural Compute Stick 2. There are multiple versions of the documentation on the developer site, and they mostly all contradict each other in subtle ways. There are also a number of, albeit minor, errors in the Intel demo code, and at least for the Raspberry Pi, the setup scripts that install pre-compiled models seem to be missing from the distribution entirely.

Compared with the documentation around Google’s new Coral hardware, or NVIDIA’s new Jetson Nano, the getting started instructions for the Compute Stick are not developer friendly. The Python wrappers also seem to be at a lower abstraction level than those shipping with the Coral hardware.

Finally, unlike Google’s new Coral hardware, which has an online compiler to convert your own models to the correct format, you’ll need to install the full OpenVINO toolkit on another x86-based Linux box if you want to convert your own models for use with the Compute Stick on the Raspberry Pi.
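For completeness, once you do have the full toolkit installed on an x86 Linux box, a conversion with the Model Optimizer looks roughly like the following. Treat it as a sketch rather than a recipe: I’m assuming a frozen TensorFlow model called frozen_inference_graph.pb and the default 2019 R1 install location, and the exact flags will depend on your model. The --data_type FP16 flag is the important one, as the Myriad hardware only runs FP16 models.

$ cd /opt/intel/openvino/deployment_tools/model_optimizer
$ python3 mo_tf.py --input_model ~/frozen_inference_graph.pb --data_type FP16 --output_dir ~/converted_model

The resulting .xml and .bin files can then be copied over to the Raspberry Pi and passed to the inference engine in the same way as the face detection model we used earlier.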

Overall the developer experience on the Raspberry Pi just feels like a cut-down version of the full OpenVINO toolkit thrown together, which is somewhat disappointing.

Alasdair Allan
Scientist, author, hacker, maker, and journalist. Building, breaking, and writing. For hire. You can reach me at 📫 alasdair@babilim.co.uk.