
Vision Processing on a Raspberry Pi


As of 2017, GRIP provides an easy way to do vision processing on a coprocessor. There are many advantages to processing on a coprocessor rather than on the roboRIO, but the main one is:

### Speed

The roboRIO's processor is slow, and it is already busy controlling the robot. A coprocessor such as a Raspberry Pi can focus entirely on vision processing, which allows for a higher frame rate.

This page will help anyone who wishes to do vision processing on a Raspberry Pi using OpenCV/GRIP. For this tutorial, I will assume that you have:

  • A Raspberry Pi, preferably a Raspberry Pi 3 (or whatever model is newest at the time of reading)

  • A USB camera -- a Microsoft Lifecam HD 3000 will be used in this tutorial

  • A computer with the GRIP GUI installed. Information on how to set this up can be found here

## Setting up the Raspberry Pi

You can use any OS that you like; however, I would suggest Raspbian Lite. It takes up very little space, it's easy to configure, and it's fairly minimal. If you have experience with Linux, Arch would probably be a little faster, but not noticeably so.

There are plenty of resources available for setting up a Raspberry Pi with Raspbian.

Once you have the OS installed, you may need to enable SSH before you can access the Pi remotely. Hook the Pi up to a monitor and keyboard, and log in with user `pi` and password `raspberry`. Run `sudo raspi-config`, go to Advanced Options, and enable SSH. If you want, you can also go to Hostname and change it to something more specific.

### Networking

If you are using a Raspberry Pi 3, there is a Wi-Fi chip built in. This lets you connect to the internet over Wi-Fi while also connecting to the robot's own network over Ethernet. Again, there are plenty of guides for setting this up, so I won't cover it here. However, by default Ethernet has a higher routing priority than Wi-Fi. This means that if both interfaces are up, all traffic will attempt to go through the Ethernet gateway (the robot's network, which has no internet access). We need to fix this.

Edit `/etc/network/interfaces` with your favorite text editor (`nano` and `vi` are installed by default; I would suggest `nano` if you are not experienced with console text editors). The full command is `sudo nano /etc/network/interfaces`.

Add `up route add default gw GATEWAY` to the bottom of this file, where `GATEWAY` is the gateway for `wlan0` as reported by the command `route -n`.
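As a rough illustration (the existing contents of this file vary by Raspbian version, and `192.168.1.1` below is just a placeholder for your actual `wlan0` gateway), the end of the file might look something like this after the edit:

```
# /etc/network/interfaces (excerpt) -- your wlan0 stanza may differ
allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    # the line added above; it attaches to the wlan0 stanza, so traffic
    # bound for the internet goes out over Wi-Fi instead of Ethernet
    up route add default gw 192.168.1.1
```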

Restart your Pi, and you should be able to SSH in from either your Wi-Fi network or the robot's own network. You should also be able to reach the outside internet.

### OpenCV

TODO: integrate instructions for this into this page. For now, just follow this guide.

By now, you should have a Raspberry Pi configured for SSH access and able to run OpenCV in Python (either 2.7 or 3.x; either works fine).
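A quick way to confirm this is a small smoke-test script run over SSH. This is a minimal sketch, assuming the Lifecam shows up as the first video device (`/dev/video0`):

```python
# smoke_test.py -- verify OpenCV is installed and the USB camera is readable
import cv2

print("OpenCV version:", cv2.__version__)

cap = cv2.VideoCapture(0)   # 0 = first USB camera (/dev/video0)
ok, frame = cap.read()
cap.release()

if ok:
    print("Captured a frame:", frame.shape)  # e.g. (480, 640, 3)
else:
    print("Could not read from the camera -- check the USB connection")
```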

## Working in GRIP

Now that you have the Pi set up, it is time to begin work on vision processing. Open the GRIP program on another computer and follow this guide to get the basics. However, instead of masking and finding the blob, we want to find and filter contours. Contours are the best option for tracking objects, as they give easily usable data to other code. To generate helpful contours, end your pipeline with find contours -> filter contours.
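To give a feel for what those two steps do under the hood, here is a minimal sketch in plain OpenCV (GRIP's generated code will differ in structure and thresholds; the green HSV range and `sample.jpg` here are just placeholders):

```python
# Rough equivalent of a threshold -> find contours -> filter contours pipeline
import cv2
import numpy as np

frame = cv2.imread("sample.jpg")                  # hypothetical test image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([80, 255, 255]))

# findContours returns 2 values in OpenCV 2.x/4.x and 3 in 3.x;
# indexing from the end works in all versions
result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = result[-2]

# "filter contours": keep only blobs big enough to plausibly be the target
filtered = [c for c in contours if cv2.contourArea(c) > 100.0]
print("kept", len(filtered), "of", len(contours), "contours")
```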

Once you feel that your filtering is fairly decent, proceed. Don't spend too much time on this the first time around; it takes a while to get a feel for how GRIP operates, and the filtering will behave a bit differently on the Pi itself. Go to Tools -> Generate Code, and choose Python.

Take this file, and make note of the class name and the file name.
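You will use those names when driving the pipeline from your own script on the Pi. As a sketch, assuming the generated file is `grip_pipeline.py` and the class is `GripPipeline` (substitute the names you noted above):

```python
# run_pipeline.py -- feed camera frames through the GRIP-generated pipeline
import cv2
from grip_pipeline import GripPipeline   # the generated file/class you noted

pipeline = GripPipeline()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    pipeline.process(frame)
    # generated GRIP code stores each step's result on the pipeline object;
    # for a pipeline ending in filter contours, this is filter_contours_output
    contours = pipeline.filter_contours_output
    print("found", len(contours), "contours")

cap.release()
```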
