So, I started thinking about line detection to improve my graphics compression. There are lots of methods out there, but most involve inferring the geometry of an actual picture. All I'm really looking for is a way to find simple, one-pixel-wide lines on a 1-bit color image.
Basically, my thoughts boiled down to "how to find a Bresenham line," and from that, "how to reverse engineer the Bresenham line algorithm to do detection."
I haven't figured that out, though.
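For reference, here's the forward direction: the standard integer-only Bresenham rasterizer that a detector would have to invert. This is a generic Python sketch, not tied to any particular platform:

```python
def bresenham(x0, y0, x1, y1):
    """Return the list of pixels on a Bresenham line from (x0, y0) to (x1, y1).

    Works for any octant: sx/sy pick the step direction, and the error term
    decides at each step whether to move in x, in y, or both.
    """
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return points
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```

"Reverse engineering" this for detection would mean checking whether a run of lit pixels is consistent with some error term stepping like the one above.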
The checking part is probably easy. You could examine the 3x3 neighborhood centered on the pixel you want to check, then step toward lit neighbors, tracing candidate lines as you go. You could also rule out certain directions along the way, e.g. the direction you just came from, or any direction that wouldn't fit the slope of the line computed so far.
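That 3x3 walk could be sketched roughly like this. `walk_stroke` is a hypothetical name, `pixels` is assumed to be a set of lit (x, y) coordinates, and this greedy version just takes the first unvisited neighbor, without the slope-based direction pruning yet:

```python
# All eight neighbor offsets of a 3x3 window around a pixel.
NEIGHBORS = [(-1, -1), (0, -1), (1, -1),
             (-1,  0),          (1,  0),
             (-1,  1), (0,  1), (1,  1)]

def walk_stroke(pixels, start):
    """Follow a 1-pixel-wide stroke from `start`, never revisiting a pixel
    (which also prevents stepping back the way we came)."""
    path = [start]
    visited = {start}
    x, y = start
    while True:
        candidates = [(x + dx, y + dy) for dx, dy in NEIGHBORS
                      if (x + dx, y + dy) in pixels
                      and (x + dx, y + dy) not in visited]
        if not candidates:
            return path
        x, y = candidates[0]  # greedy choice; slope filtering would go here
        visited.add((x, y))
        path.append((x, y))
```

For an actual detector you'd then test whether the collected path is straight, e.g. by fitting a slope or by comparing it against a rasterized segment between its endpoints.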
I don't know. Maybe there's another (better) way? The Hough transform method looks promising, but I have no idea how to implement it at all.
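In case it helps, the core of the Hough transform is just a voting loop: every lit pixel votes for every (angle, distance) pair of a line that could pass through it, and collinear pixels pile their votes into the same bin, so peaks in the accumulator correspond to lines. A minimal Python sketch (no peak detection or segment extraction, and `hough_lines` is a made-up name):

```python
import math

def hough_lines(pixels, n_theta=180):
    """Vote each lit (x, y) pixel into a (theta_index, rho) accumulator.

    A line is parameterized as rho = x*cos(theta) + y*sin(theta); each pixel
    votes once per sampled angle. Peaks (high vote counts) indicate lines.
    """
    acc = {}  # (theta_index, rounded rho) -> vote count
    for x, y in pixels:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc
```

Note this finds infinite lines, not segments; to get segments you'd still have to walk along each detected line and find where the lit pixels start and stop.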
One really sloppy way I can think of: copy the source image to another GRP, find a lit pixel, then draw line segments from that starting pixel out to neighboring pixels, confirming each time that the drawn line covers only pixels that are lit in the source image. Copy the GRP over again, repeat for the next nearest pixel, and keep going until either there are no more neighbors or the line covers a spot that wasn't a lit pixel in the source. This is probably way too slow to be convenient, though.
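The same idea could be done in memory without copying the GRP at all: rasterize the candidate segment yourself and test whether every pixel it covers is lit in the source. A hypothetical greedy version that finds the farthest endpoint reachable from a start pixel (helper names are made up):

```python
def line_pixels(x0, y0, x1, y1):
    """Minimal Bresenham rasterizer, used only to test candidate segments."""
    pts = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return pts
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def longest_segment_from(pixels, x0, y0):
    """Among all lit pixels, return the endpoint giving the longest segment
    from (x0, y0) whose rasterization covers only lit pixels."""
    best = (x0, y0)
    best_len = 1
    for (x1, y1) in pixels:
        seg = line_pixels(x0, y0, x1, y1)
        if len(seg) > best_len and all(p in pixels for p in seg):
            best, best_len = (x1, y1), len(seg)
    return best
```

This is still quadratic-ish per start pixel, but checking membership in a set is far cheaper than redrawing and re-copying a whole graphics page each attempt.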
Does anyone have any smart ideas?