Pi Wars Week -29: OpenCV tools
Right now my 3D printer is working overtime on prototype body shell pieces, so while it does that, how about a trip back to a blog entry I wrote about preparing for vision-based autonomous challenges in a previous contest?
Last year I wrote two very helpful utilities to aid in tuning our target detection, which definitely contributed to our success in the Over The Rainbow challenge. The first was a white balancer: to reliably detect certain colours, the camera needs to know the colour of the light it's taking pictures in, and the best way to do that is to show it something white and tell it "this is white". With this done, the robot should be able to detect a given colour whether it's in a dimly lit shed or under the competition arena's LED floodlights. If you don't white balance the camera, it will try to do so automatically, but the result is usually unpredictable and prone to being thrown off by other things in its field of vision.
We wrote a simple script that would take a picture and adjust the camera's sensitivity to red and blue light (its "gains") until the R, G and B values of pixels in the central part of the image were at about the same level. We would hold a piece of white card in front of the camera and run the script, so it could adjust its levels until the white card looked white, and then save those values to a text file.
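The adjustment loop boils down to nudging each gain up or down depending on whether its channel reads brighter or darker than green. Here's a minimal sketch of that logic with a toy stand-in for the camera; `adjust_gains`, `measure` and the step/tolerance values are my invention for illustration, not the original script:

```python
def adjust_gains(red_gain, blue_gain, r_mean, g_mean, b_mean,
                 step=0.02, tol=2):
    """Nudge the red/blue gains so the red and blue channel means
    move toward the green channel mean (i.e. white looks white)."""
    if r_mean > g_mean + tol:
        red_gain -= step
    elif r_mean < g_mean - tol:
        red_gain += step
    if b_mean > g_mean + tol:
        blue_gain -= step
    elif b_mean < g_mean - tol:
        blue_gain += step
    return red_gain, blue_gain

# Toy camera model: each channel mean scales linearly with its gain.
# (Real sensors don't behave this neatly, but it shows the loop converging.)
def measure(red_gain, blue_gain):
    return 180 * red_gain, 150, 160 * blue_gain

red_gain, blue_gain = 1.0, 1.0
for _ in range(100):
    r, g, b = measure(red_gain, blue_gain)
    new = adjust_gains(red_gain, blue_gain, r, g, b)
    if new == (red_gain, blue_gain):
        break  # within tolerance - the card looks white
    red_gain, blue_gain = new

# Save the result so challenge runs can reuse it after a reboot.
with open("rbgains.txt", "w") as f:
    f.write("%f\n%f\n" % (red_gain, blue_gain))
```

On the real robot, `measure` would be replaced by capturing a frame with the card filling the centre of the image and averaging the R, G and B values there.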
Click to embiggen: What the camera sees under a warm white bulb, and the same scene after white balancing
The rainbow challenge program would then read in these values and use them to set the camera's gains, so we could be sure it was properly adjusted to the lighting.
```python
picam.awb_mode = 'off'  # don't auto white balance
with open("rbgains.txt") as f:
    content = f.readlines()
content = [x.strip() for x in content]
redgain = float(content[0])
bluegain = float(content[1])
picam.awb_gains = (redgain, bluegain)
```
Having a separate script for finding the white balance and saving the results meant we could white balance it once and remember the values across multiple attempts at a challenge, even if we had to restart the Pi in between.
Tune and tune again
The second tool was used to establish the actual colour values that the robot was looking for in the Rainbow challenge. It would take a picture, look for regions within a certain range of colours intended to match the target balls, and draw a box around them. The limits of the colour range could be adjusted with keyboard commands and the script would re-evaluate the matching regions, letting us quickly home in on the smallest range of colour values that would recognise a given target but eliminate similarly-coloured objects. The final values would then be printed out so we could copy them into the autonomous challenge code.
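The matching step at the heart of that tool is a simple per-channel range test, which OpenCV provides as `cv2.inRange`. Here's a sketch of the idea in plain NumPy; the HSV bounds and pixel values are made up for illustration:

```python
import numpy as np

def mask_in_range(hsv, lower, upper):
    """Return a boolean mask of pixels whose H, S and V all fall
    inside [lower, upper] - the same test cv2.inRange performs."""
    lower = np.array(lower)
    upper = np.array(upper)
    return np.all((hsv >= lower) & (hsv <= upper), axis=-1)

# Tiny 2x2 "image": two reddish pixels, two background pixels.
hsv = np.array([[[5, 200, 180], [90, 40, 60]],
                [[170, 30, 220], [8, 180, 150]]], dtype=np.uint8)

# Hypothetical starting range for a red ball; the tool's keyboard
# commands would widen or narrow these bounds one step at a time.
lower, upper = (0, 120, 100), (15, 255, 255)
mask = mask_in_range(hsv, lower, upper)  # True where a pixel matches
```

In the real tool the mask would then be passed to `cv2.findContours` to get the candidate regions to draw boxes around.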
But how do we know which area is the actual target?
The script also worked out the area, extent and aspect of the shape. The challenge that year was to find coloured balls, which appear as circles to the camera. Deciding whether a patch of colour looked like a ball involved drawing the smallest rectangle which could fit around it and measuring two properties. Firstly, the extent: the fraction of the rectangle's area which the target area filled. A circle will occupy pi/4, or about 78.5%, of the smallest square around it, so we looked for an extent close to 0.785. The second measure was the aspect: the ratio of the rectangle's width to its height, which would be close to 1.0 for a circular shape.
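The pi/4 figure falls straight out of the geometry: a circle of radius r has area pi*r², while its smallest bounding square has area (2r)² = 4r², so the ratio is pi/4 ≈ 0.785 regardless of the ball's size or distance. A quick check:

```python
import math

r = 1.0
circle_area = math.pi * r ** 2      # area of the ball's silhouette
square_area = (2 * r) ** 2          # smallest bounding square
extent = circle_area / square_area  # == pi / 4, independent of r
```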
0.756 extent and 0.957 aspect - that's definitely a ball
```python
area = cv2.contourArea(cnt)
x, y, w, h = cv2.boundingRect(cnt)
rect_area = w * h
extent = float(area) / rect_area
aspect = float(w) / h
print("extent = " + str(extent))
print("aspect = " + str(aspect))
# centroid_x, centroid_y come from the contour's moments
cv2.circle(img, (centroid_x, centroid_y), 3, (0, 255, 0), 1)
if 0.65 < extent < 0.9 and 0.85 < aspect < 1.15:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
else:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 1)
```
If a region met both of those criteria, the bounding box would be green; if not, it was red. This was definitely to help in tuning our challenge code, and not at all an excuse for me to make a Robocop-style target finding HUD...
How the Pi sees us... probably