Simple line detection
So, I started thinking about line detection to improve my graphics compression. There are lots of methods out there, but most involve inferring the geometry of an actual picture. All I'm really looking for is a way to find simple, one-pixel-wide lines on a 1-bit color image.
Basically, my thoughts boiled down to "how to find a Bresenham line," and from that, "how to reverse engineer the Bresenham line algorithm to do detection."
I haven't figured that out, though.
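For reference, the standard integer Bresenham loop is short; here's a minimal all-octant version (a Python sketch, not SB code). "Detection" then amounts to asking whether a blob of lit pixels is exactly the output of this for some pair of endpoints:

```python
def bresenham(x0, y0, x1, y1):
    """Yield the pixels of a Bresenham line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term
    while True:
        yield (x0, y0)
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:  # step in x
            err += dy
            x0 += sx
        if e2 <= dx:  # step in y
            err += dx
            y0 += sy
```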
The checking part is probably easy. You could look at 3x3 portions of the image centered on the pixel you want to check and move in the direction of neighboring pixels, checking for lines. You could also ignore certain directions, e.g. the direction you came from, or directions that wouldn't fit the slope of the line computed so far.
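A rough sketch of that walk (Python, assuming the image is given as a set of lit (x, y) tuples; the came-from pruning is handled by a visited set, and the slope-consistency check is left as an extension):

```python
# The eight neighbor offsets around a pixel.
NEIGHBORS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
             (1, 0), (-1, 1), (0, 1), (1, 1)]

def trace_from(pixels, start):
    """Follow lit 8-neighbors from `start`, never revisiting a pixel;
    returns the chain of pixels walked."""
    chain, visited, cur = [start], {start}, start
    while True:
        step = None
        for dx, dy in NEIGHBORS:
            cand = (cur[0] + dx, cur[1] + dy)
            if cand in pixels and cand not in visited:
                step = cand
                break
        if step is None:
            return chain
        chain.append(step)
        visited.add(step)
        cur = step
```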
I don't know. Maybe there's another (better) way? The Hough transform method looks promising, but I have no idea how to implement it at all.
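For what it's worth, the accumulator form of the Hough transform is only a few lines. In this Python sketch (the angle resolution and vote threshold are arbitrary values of mine), every lit pixel votes for all the (theta, rho) bins it could belong to, and heavily voted bins come back as candidate lines of the form rho = x*cos(theta) + y*sin(theta):

```python
import math
from collections import defaultdict

def hough_lines(pixels, n_angles=180, threshold=20):
    """Return (theta, rho) pairs whose bins collected enough votes."""
    acc = defaultdict(int)
    for x, y in pixels:
        for i in range(n_angles):
            theta = math.pi * i / n_angles
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(i, rho)] += 1  # this pixel votes for that line
    return [(math.pi * i / n_angles, rho)
            for (i, rho), votes in acc.items() if votes >= threshold]
```

Note it yields line parameters, not endpoints; you'd still scan along each candidate line to find where the lit pixels start and stop.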
One really sloppy way I can think of is to copy the source image to another GRP, find a pixel, draw line segments from that starting pixel to neighboring pixels, and confirm whether the drawn line covers only pixels from the source image. Copy the GRP over again, repeat for the next nearest pixel, and continue until there are either no more neighbors or the line covers a spot that wasn't a pixel in the source. This is probably way too slow to be convenient, though.
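The GRP copy might be avoidable, though: with a pixel generator like the Bresenham sketch above, a candidate segment can be tested directly against the set of source pixels:

```python
def is_exact_line(pixels, p0, p1):
    """True if the Bresenham segment p0 -> p1 lands only on lit
    source pixels (reuses the bresenham() generator above)."""
    return all(p in pixels for p in bresenham(*p0, *p1))
```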
Does anyone have any smart ideas?
When you find an endpoint, you can traverse back up the line and, if you're lucky, end up at the line's opposite endpoint. If not, you can travel as far as you can to get a good approximation of the direction of the opposite endpoint; then, given a list of endpoints, you can find the one that best fits the approximated angle.
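A small sketch of that matching step (Python; assumes `chain` is a traced run of at least two pixels and `endpoints` is the list of known endpoints, both names mine):

```python
import math

def match_endpoint(chain, endpoints):
    """Estimate the heading of a partial trace, then pick the endpoint
    whose bearing from the trace's start best fits that heading."""
    (x0, y0), (x1, y1) = chain[0], chain[-1]
    heading = math.atan2(y1 - y0, x1 - x0)
    def angle_error(p):
        d = math.atan2(p[1] - y0, p[0] - x0) - heading
        return abs((d + math.pi) % (2 * math.pi) - math.pi)  # wrap to [-pi, pi]
    return min(endpoints, key=angle_error)
```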
A couple of edge cases for you to consider:
- A polyline: one line continues where another ends.
- Intersecting lines: two or more lines meet midway along a line.
- Large blocks of solid color: how would they be handled?
- A curved line.
... you can travel as far as you can to get a good approximation of the direction of the opposite endpoint ...

A problem with that, though: say you don't get the whole line on the screen. Say the user swiped an eraser through one of the endpoints. You then no longer have all the information needed to confirm whether it's a line or not. Unless the "stair pattern" [there's a recognizable pattern in the number of pixels per step of the line] repeats itself again further down, that portion at the top of the line would probably be unrecognizable.
... I think the problem is you are going about this from the wrong direction. ...

I could be, but at this point I think it's highly unlikely. I posted a thread about my dilemma before, but what I'm actually trying to do is create a fast and cheap compression method: "fast" in execution time during both compression and decompression, and "cheap" in that the resulting data are small (compared to SB's teeny tiny RAM). Why do I need it fast and cheap? Because I'm making animation software, and it's going to need appreciably high framerates.

Drawing tools involve complexities such as random values, and there's no telling how long one can spend on a single frame of animation (have you ever tried your hand at animation? It's a love-hate sort of thing. Thinking about animating something fills me with equal parts dread and elation), so storing the actual input information and tool types used is not feasible at all.

I definitely need this kind of compression, which I actually already have. My current compression is really simple: it looks for solid rectangles and straight or diagonal lines of constant color, and stores the compressed endpoints in an array. I can whittle a 320x240 frame of little complexity down to an array of a few thousand elements, and decompression of frames can reach framerates upwards of 50 fps. Shit hits the fan when the frames get as complex as, say, lots of one-pixel-wide draft drawings. Being able to recognize those scribbles of one-pixel-wide lines as actual start-point/end-point lines, instead of breaking them up into rows and columns of pixels, would work wonders for me in both speed and cheapness.

Edit: also, I think the ability to detect those lines would help the algorithm detect larger filled polygon shapes on the screen and break them down more efficiently.
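For concreteness, the row pass of a scheme like that might look as follows (a Python sketch of the idea, not the actual SB routine; the function name and tuple layout are my own):

```python
def encode_rows(image):
    """Collapse each maximal horizontal run of one color into a
    (y, x_start, x_end, color) tuple; `image` is a list of rows."""
    segments = []
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            start, color = x, row[x]
            while x < len(row) and row[x] == color:
                x += 1
            segments.append((y, start, x - 1, color))
    return segments
```

A scribble of one-pixel-wide diagonal lines defeats this: every pixel becomes its own run, which is exactly the blow-up the line detection is meant to avoid.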
Yeah, store the drawing data from the beginning as start and end points with a stroke width and color. Quantize the user's raw touch input using some time quantum that doesn't destroy quality, and then, from the list of points (which define line segments), combine 'collinear' points (within some threshold). You could let the user set the collinear-slope threshold through a "Smoothing/Correction" interface. This method won't work on curves, and idk how curve-fitting works, but you end up with essentially two points for a reasonably smooth line.
When displaying the image you just interpret those segments as GLINE instructions.
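A sketch of that collinear merge (Python; the threshold semantics are my assumption — here an intermediate point is dropped when it lies within `tol` pixels of the line through its last kept neighbor and its successor):

```python
def simplify(points, tol=1.0):
    """Greedily drop points that are nearly collinear with their
    last kept neighbor and their successor."""
    if len(points) < 3:
        return points[:]
    out = [points[0]]
    for i in range(1, len(points) - 1):
        (ax, ay), (bx, by), (cx, cy) = out[-1], points[i], points[i + 1]
        # Perpendicular distance of b from the line through a and c.
        num = abs((cx - ax) * (ay - by) - (ax - bx) * (cy - ay))
        den = ((cx - ax) ** 2 + (cy - ay) ** 2) ** 0.5 or 1.0
        if num / den > tol:
            out.append(points[i])
    out.append(points[-1])
    return out
```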
Well, if you are trying to make animations, I think it would probably be a good idea to copy Adobe Flash and have pre-fab items: basically, drawing commands for a character or part of a character, where you just keep track of where to draw the part and with what modifiers each frame. For the modifiers I am thinking some nice matrix manipulation for things like translate, rotate, and scale. I don't know if SmileBasic can do that in real time, but moving around a pre-made object should take a whole lot less space than specifying how to draw it each time. This really might be too big a project for SmileBasic, honestly. Thousands of lines on a touch screen is not particularly fun to type in.
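In matrix terms, each pre-fab part would carry one small transform per frame, something like this (Python sketch; whether SB can evaluate it fast enough in real time is exactly the open question):

```python
import math

def affine(tx, ty, angle, scale):
    """Build a 2x3 matrix: scale, then rotate by `angle` radians,
    then translate by (tx, ty)."""
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    return [[c, -s, tx],
            [s,  c, ty]]

def apply_affine(m, points):
    """Transform a list of (x, y) vertices by the 2x3 matrix."""
    return [(m[0][0] * x + m[0][1] * y + m[0][2],
             m[1][0] * x + m[1][1] * y + m[1][2]) for x, y in points]
```

Storing a handful of numbers per part per frame is far cheaper than re-encoding the part's pixels every frame.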
Another idea is to simply make it in a different program on the PC, then convert the output into a SmileBasic-friendly format. Stream it over with Petit Modem, then have a player application on SmileBasic. It is much easier to make an editing GUI on the PC.
The reason why I'm making it on SB is that I like the platform. When the DSi was released a long while back, it came preinstalled with an application called Flipnote, which was a low-power, hand-drawn animation app. As a kid, I became addicted to it and came to really love the DS platform for art programs in general.
Line detection would help me optimize my current algorithm greatly, but I actually don't need it to pull off appreciably high framerates, and the more I look into it, the more I believe that vanilla line detection is near impossible (unless, of course, you store user input, which doesn't help with breaking down polygons).
I still haven't reverse engineered the Bresenham line algorithm, but it seems to me that SB uses some other method to draw lines, because I'm noticing odd line artifacts in special cases. Also, things like line endpoints and midsections can be erased or drawn over, making this harder to handle in either case, stored user input or not.
That approach might work very well for drafting, and Flipnote lacked a good drafting system, so I might make it a separate kind of feature in my program (not to appease your ideas; I'm basically making this solely for myself lol). Like a separate layer that doesn't show in the animation and is used for making sketches and drafts.
But for the actual frames themselves, I don't want to get into the mess of mixing and handling the two approaches simultaneously, especially for the single edge case of one-pixel-wide lines. I'm keeping this as close to Flipnote as possible because that's what's most familiar to me, so it's solely bitmap.
Looked up Flipnote a bit; it sounds like there were a couple of frame types: a brand-new image, and an image type that only encodes differences from the previous frame (good if you just have a character whose mouth moves while the rest of the frame doesn't). From there you could probably do some sort of run-length encoding or pattern encoding and get fairly small frame sizes. If you are doing black and white only, you could probably pack 32 pixels into an int.
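A sketch of the difference-frame plus bit-packing idea (Python; the helper names are mine, frames are lists of 0/1 rows, and 1 marks a changed pixel):

```python
def diff_frame(prev, cur):
    """XOR two 1-bit frames into a change mask (1 = pixel changed)."""
    return [[a ^ b for a, b in zip(pr, cr)] for pr, cr in zip(prev, cur)]

def pack_row(row):
    """Pack 0/1 pixels into ints, 32 pixels per word; a 320-wide row
    becomes ten words."""
    words = []
    for i in range(0, len(row), 32):
        word = 0
        for bit in row[i:i + 32]:
            word = (word << 1) | bit
        words.append(word)
    return words
```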
Yeah, Flipnote checks for differences between frames. That's how it compresses its frames so well and how it keeps its file sizes so small (999 complex frames in mere kilobytes). I've thought a little about how such a system would work in SB, but I've never actually prototyped anything. Come to think of it, it might be seriously worth looking into again.
So, actually, that last method I proposed won't work at all, because that's not how the line algorithm works. Dead in the water.

Edge detection exists, using Fourier-transform approaches as well as the Gaussian method, but edge detection doesn't do line detection; i.e., it doesn't spit out the endpoints of a line in an image should one exist. It just makes the image marginally easier to work with.