Pi Wars Week -23: Inner Vision
As promised, this week I've started on vision code for the autonomous challenges. With ideal timing, Neil on the "friendly competing" Team Dangle posted a blog about their vision code, which pointed me towards a sensible strategy. I started off with my usual approach to CV tasks, which is "edge detect, find lines, realise there's no sensible robust algorithm for turning a collection of line segments into an idea of where the robot is, stop". Neil's post tallied quite well with the concepts in Brian Starkey's talk from the Pi Wars mini conference, which was to reduce the task of finding the line we're following to finding the line-ish bit in each of a number of horizontal slices. I won't get into his example code, but suffice it to say I picked up the theme and ran with it.
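As a rough sketch of that slicing idea (this is my own illustration, assuming a greyscale NumPy image with a bright line on a dark background, and not Brian's or Neil's actual code):

```python
import numpy as np

def line_positions(gray, n_slices=8):
    """Estimate the line's x position in each horizontal slice of a
    greyscale image as the brightest column in that slice.
    Works for a light line on a dark background."""
    h, w = gray.shape
    positions = []
    for i in range(n_slices):
        band = gray[i * h // n_slices:(i + 1) * h // n_slices, :]
        # Sum down the columns so noise in any single row matters less
        column_brightness = band.sum(axis=0)
        positions.append(int(np.argmax(column_brightness)))
    return positions
```

Stringing those per-slice positions together gives a crude polyline for where the line sits in the frame, which everything below builds on.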
Firstly, taking inspiration from Brian the Younger, I ran the blurred image through an edge detector. This produces a mostly grey image, with a lighter stripe on the black->white transition on the left hand side of the line, and a darker stripe on the right hand side. This means I can look for the darkest and brightest pixels in each slice, which should mark the two edges of the line. That gives me two points that should sit either side of the centre of the line, and hopefully be more precise, as it's using the sharp edges of the line rather than the entire bright middle section.
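In code, the edge-based version of the trick might look like this. It's a sketch: I'm using a simple horizontal difference (`np.diff`) as a stand-in for whatever edge detector the robot really runs, and the function names are my own.

```python
import numpy as np

def edge_centre(row):
    """Estimate the line centre in one row of a greyscale image.
    The horizontal gradient is large and positive at the dark->bright
    transition (left edge of a bright line) and large and negative at
    the bright->dark transition (right edge); the midpoint of those
    two extremes sits on the centre of the line."""
    grad = np.diff(row.astype(np.int32))
    left = int(np.argmax(grad))   # sharpest dark->bright change
    right = int(np.argmin(grad))  # sharpest bright->dark change
    return (left + right) / 2
```

Because the extremes come from single sharp transitions rather than the whole bright band, the estimate shouldn't wander when the middle of the line is unevenly lit.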
Finding the brightest point in the edge detect is equivalent to finding the point with the sharpest colour change in the original image
Secondly, finding a straight line is all well and good, but like last year's straight-ish line challenge, the Lava Palaver course isn't actually straight. I wanted a way to figure out at least how far away we are from the next corner, so we can slow down into the turns like a proper racing driver, and ideally predict which way it's going to turn so we can prepare our racing line.
My initial thought was to look at the gradient between each pair of successive points on the line, and break the values I got into clusters.
Math is hard, let's go ~~shopping~~ find a more robust algorithm
There are two problems with this: clustering (done properly) is Hard Maths, and the gradient between point pairs in the line can differ quite a lot when the distances are only a few pixels. To try and make the algorithm a little more robust, I instead took the gradient between the first and fourth points, then checked the gradient for each successive point from the first until they started to deviate from this value. This gives a fairly safe approximation of the group of points that fall in a straight line from the bottom of the image. Which way the gradient starts to change at the top tells me which way the next corner will turn.
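The gradient-deviation idea above can be sketched like this. To keep it simple I've reduced each slice to just an x position (slices are evenly spaced vertically, so the "gradient" is x change per slice); the tolerance value and the return format are my own inventions, not tuned numbers from the robot.

```python
def find_corner(xs, tol=2.0):
    """xs: line x position in each slice, bottom of the image first.
    Take the slope between the first and fourth points as a reference,
    then walk upwards checking the slope from the first point to each
    successive point; once it deviates by more than `tol`, we've left
    the straight section. The sign of the deviation says which way the
    corner bends (x increases to the right of the image)."""
    reference = (xs[3] - xs[0]) / 3
    for i in range(4, len(xs)):
        deviation = (xs[i] - xs[0]) / i - reference
        if abs(deviation) > tol:
            # Corner starts around slice i
            return i, ('right' if deviation > 0 else 'left')
    return None  # straight as far as we can see
```

Averaging the slope over several slices like this is what makes it tolerant of the pixel-level jitter that broke the point-pair version.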
Visual representation of my algorithm
We'll see what happens when I let this algorithm loose and it's actually driving the robot - I have no idea whether it will go insane as soon as it hits the first corner and the line at the bottom of the screen stops being straight.