Darkman9333 - 3 days ago
Java Question

Using OpenCV on raspberry pi for vision tracking FRC

I'm a senior in high school and currently a programmer for my robotics team. This year we plan on doing some vision processing/tracking to automatically find the goal and align the robot with it. We program our robot in Java and are part of FRC (the FIRST Robotics Competition). We're having some trouble with the standard way of getting vision tracking to work, using RoboRealm, so I had the thought of using a Raspberry Pi as a co-processor solely for vision tracking. I've done a little research into what to use, and it appears OpenCV is the best option. I have little experience coding on the Raspberry Pi, but a basic understanding of Python. I was thinking of having the Raspberry Pi do all the tracking of the goal (which has retro-reflective tape along its outer edge) and somehow send that result through the roboRIO (the standard FRC on-board processor) to my Java code, which would then tell the robot to turn further left or right depending on how far off from the target we are. I'm just curious if this is in the realm of doability for a beginner programmer such as myself. Any feedback would be great!

Thanks!

Answer

Everything you've said sounds very doable using contour features. You can use a bounding rectangle/circle, etc. to extract center-of-mass (COM) coordinates of your goal. At that point you can do simple thresholding like you said: if the COM is to the left, move left, and vice versa.
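For example, here's a rough Python/OpenCV sketch of that contour step, assuming you already have a binary mask where the goal shows up white (the file name and pixel tolerance are just placeholders to tune for your camera):

```python
import cv2

# Load a pre-thresholded binary image of the goal (white goal on black background).
# "goal_mask.png" is a placeholder for whatever test image you produce.
mask = cv2.imread("goal_mask.png", cv2.IMREAD_GRAYSCALE)
frame_width = mask.shape[1]

# Find all white blobs; [-2] keeps this working across OpenCV versions,
# which return different numbers of values from findContours.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

if contours:
    # Assume the largest blob is the goal
    goal = max(contours, key=cv2.contourArea)

    # A bounding rectangle gives a cheap estimate of the goal's center
    x, y, w, h = cv2.boundingRect(goal)
    center_x = x + w // 2

    offset = center_x - frame_width // 2  # negative = goal is left of center
    if offset < -20:        # pixel tolerance, tune for your setup
        print("turn left")
    elif offset > 20:
        print("turn right")
    else:
        print("aligned")
else:
    print("goal not found")
```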

The biggest issue will be reliably locating the goal; if you've never done CV before, it's easy to underestimate the difficulty of this task. My advice is to make the goal as apparent as possible. Since it's reflective, perhaps you can illuminate it to make it stand out more? Maybe shine IR (infrared) from the robot and use an IR filter on the camera. You could also do this with any regular light in the visible spectrum. Once you've created adequate contrast between the goal and the background, you could do simple thresholding, or possibly template matching (though that's much slower, and it won't work if the goal is at an angle or skewed).
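If you do try template matching, a minimal sketch looks like the following; the file names and confidence cutoff are placeholders, and as noted above this approach struggles with rotation and scale changes:

```python
import cv2

# Grayscale camera frame and a cropped grayscale image of the goal to search for.
# Both file names are placeholders for your own test images.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("goal_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the frame and score how well it matches at each position
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# max_loc is the top-left corner of the best match; only trust strong matches
if max_val > 0.7:  # arbitrary confidence cutoff, tune for your images
    h, w = template.shape
    center_x = max_loc[0] + w // 2
    print("goal center x:", center_x)
else:
    print("no confident match")
```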

I hope I've given you some ideas. Good luck!

EDIT
In your comment you mentioned your target is green, which can simplify your problem. I'm not sure how much you know about CV, but images come in RGB format: each pixel has a red, green, and blue component. If you are looking for green, it might be nice to split the channels and use only the green channel of the image for thresholding (see the sketch after the reading list below). The OpenCV site has GREAT tutorials for getting started. I would highly recommend you (and anyone else on your team) take a look. I'd recommend you read:

  1. Gui Features in OpenCV
    a. Images
    b. Videos
  2. Core Operations - Basic Operations on Images
  3. Image Processing (this is the big one)
    a. Image Thresholding
    b. Smoothing (almost every image in every CV algorithm is smoothed during pre-processing)
    c. Morphological Transformations (may help clean up the image after thresholding)
    d. Contours (this is where you get coordinates)
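To make that concrete, here's a rough sketch of the pipeline those tutorial sections add up to, assuming a test image of a green, illuminated goal (the file name and the threshold cutoff are placeholders you'd tune):

```python
import cv2

# Load a test image of the goal; "goal.png" is a placeholder for your own still
img = cv2.imread("goal.png")

# OpenCV loads images in BGR order, so split and keep only the green channel
b, g, r = cv2.split(img)

# Smooth a little to reduce noise before thresholding
g = cv2.GaussianBlur(g, (5, 5), 0)

# Keep only pixels that are strongly green; 200 is a placeholder cutoff to tune
_, mask = cv2.threshold(g, 200, 255, cv2.THRESH_BINARY)

# Morphological opening removes small speckles left over after thresholding
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Contours of the cleaned mask give you the goal's coordinates, as above
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
print("blobs found:", len(contours))
```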

Another tip: during algorithm development, work with still images. Take a few different pictures of your goal from perspectives the robot would likely encounter, and do all your testing and development on those. Once you have a high confidence level, move to video. Even then, I would start with offline video (a recording you capture, not real time); if you find issues, it's easy to reproduce them (just go back to the troublesome time-stamp in the video and tweak your algorithm). Then finally do it with online (real-time) video.
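If it helps, that stills-then-offline-video workflow might look something like this (the file names are placeholders, and `detect_goal` is a stand-in for your actual pipeline):

```python
import cv2

def detect_goal(image):
    # Placeholder for your actual pipeline (threshold -> clean -> contours)
    return image is not None

# Stage 1: develop against a handful of still images
for name in ["goal_near.png", "goal_far.png", "goal_angled.png"]:
    img = cv2.imread(name)
    print(name, "goal found:", detect_goal(img))

# Stage 2: replay a recorded video the same way before going live
cap = cv2.VideoCapture("practice_run.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the recording
    detect_goal(frame)  # run the same pipeline on every recorded frame
cap.release()
```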

Last piece of advice: even if your ultimate goal is to run on the RPi, feel free to test your CV algorithm on any computer you have. If you use a laptop most of the time, throw OpenCV on that; the main difference when porting to the RPi will be the way you address the RPi camera module. If you are still at the early stages, using stills and offline video, this won't make a difference. But that's just my opinion; I know I have trouble dragging out my Pi to code when I'm on my Windows laptop all day. I'm far more likely to work on code using my everyday PC.
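For reference, that camera-addressing difference is roughly this (a sketch only; the picamera route is one common option for the Pi camera module, and a laptop webcam goes through `cv2.VideoCapture`):

```python
import cv2

# On a laptop: a regular webcam (or the Pi camera with the V4L2 driver loaded)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

# On the Raspberry Pi camera module, one common route is the picamera library:
#   from picamera import PiCamera
#   from picamera.array import PiRGBArray
#   camera = PiCamera()
#   raw = PiRGBArray(camera)
#   camera.capture(raw, format="bgr")
#   frame = raw.array   # a NumPy array you can pass straight into OpenCV
```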
