So, actually, that last method I proposed won't work at all because that's not how the line algorithm works. Dead in the water.
Edge detection exists, using Fourier-transform techniques as well as Gaussian-based methods, but edge detection isn't line detection; i.e., it won't spit out the endpoints of a line in an image if one exists. It just makes the image marginally easier to work with.
I think it should be possible to find the end of a line by checking each black pixel and counting its black neighbors: if there's exactly one, you've found a pixel at the end of a line. If a line's end touches another line, though, that will interfere.
When you find an endpoint, you can traverse back up the line and, if you're lucky, end up at the line's opposite endpoint. If not, you can travel as far as you can to get a good approximation of the direction of the opposite endpoint; then, given a list of endpoints, you can find the one that best fits the approximated angle.
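A rough sketch of both steps in Python, assuming the frame is a 2D list of 0/1 ink values (every name here is made up for illustration, nothing SmileBasic-specific):

NEIGHBORS = [(-1,-1),(0,-1),(1,-1),(-1,0),(1,0),(-1,1),(0,1),(1,1)]

def inked_neighbors(img, x, y):
    w, h = len(img[0]), len(img)
    return [(x+dx, y+dy) for dx, dy in NEIGHBORS
            if 0 <= x+dx < w and 0 <= y+dy < h and img[y+dy][x+dx]]

def find_endpoints(img):
    # A pixel with exactly one inked neighbor is a line end.
    return [(x, y)
            for y, row in enumerate(img)
            for x, v in enumerate(row)
            if v and len(inked_neighbors(img, x, y)) == 1]

def walk_from(img, x, y):
    # Follow the line from an endpoint until we run out of fresh
    # pixels; the path's last point (or its overall angle) is what
    # you'd use to match up the opposite endpoint.
    path, seen = [(x, y)], {(x, y)}
    while True:
        nxt = [p for p in inked_neighbors(img, *path[-1]) if p not in seen]
        if len(nxt) != 1:          # dead end or a junction: stop
            return path
        seen.add(nxt[0])
        path.append(nxt[0])

If the walk dies at a junction instead of an endpoint, the angle between the first and last points of the returned path gives the direction estimate to match against the endpoint list.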
Yeah, store the drawing data from the beginning as start and end points with a stroke width and color. Quantize the user's raw touch input with some time quantum that doesn't destroy quality, and then, from the resulting list of points (which define line segments), combine 'collinear' points within some threshold. You could let the user set the collinearity threshold through a "Smoothing/Correction" interface. This method won't work on curves, and idk how curve-fitting works, but you end up with essentially two points for a reasonably smooth line.
When displaying the image, you just interpret those segments as GLINE instructions.
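A sketch of the collapsing step in Python (eps here is the user-settable smoothing threshold, and draw_line is a hypothetical stand-in for GLINE):

import math

def simplify(points, eps=0.05):
    # Collapse runs of roughly-collinear points: keep a middle point
    # only if the stroke direction changes by more than eps there.
    # (A real version should normalize the angle difference to [-pi, pi].)
    if len(points) < 3:
        return points[:]
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        if abs(angle(prev, cur) - angle(cur, nxt)) > eps:
            out.append(cur)
    out.append(points[-1])
    return out

def render(points, draw_line):
    # Playback: each consecutive pair becomes one line-draw call
    # (GLINE on SmileBasic; draw_line is a stand-in).
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        draw_line(x0, y0, x1, y1)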
Well, if you're trying to make animations, I think it would probably be a good idea to copy Adobe Flash and have pre-fab items: basically, stored drawing commands for a character or part of a character, where each frame you just track where to draw the part and with what modifiers. For modifiers I'm thinking some nice matrix manipulation for things like translate, rotate, and scale. Don't know if SmileBasic can do that in real time, but moving around a pre-made object should take a whole lot less space than specifying how to draw it every time. This really might be too big a project for SmileBasic, honestly. Thousands of lines on a touch screen is not particularly fun to type in.
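The usual way to store those modifiers is one small affine matrix per part per frame; a minimal sketch in Python (the names and the arm outline are invented for illustration):

import math

def affine(tx=0.0, ty=0.0, angle=0.0, scale=1.0):
    # Build a 2x3 matrix combining scale, rotation, and translation.
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    return [[c, -s, tx],
            [s,  c, ty]]

def apply(m, pts):
    # Transform every point of a pre-fab part by the matrix.
    return [(m[0][0]*x + m[0][1]*y + m[0][2],
             m[1][0]*x + m[1][1]*y + m[1][2]) for x, y in pts]

# Per frame you'd store only (part_id, tx, ty, angle, scale)
# instead of re-recording every stroke of the part.
arm = [(0, 0), (0, 12), (3, 20)]
frame_pose = affine(tx=100, ty=60, angle=math.radians(15))
print(apply(frame_pose, arm))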
Another idea is to simply make it in a different program on the PC, then convert the output into a SmileBasic-friendly format. Stream it over with Petit Modem, then have a player application on SmileBasic. It's much easier to make an editing GUI on the PC.

The reason I'm making it on SB is that I like the platform. When the DSi released a long while back, it came preinstalled with an application called Flipnote, a low-power, hand-drawn animation app. As a kid, I became addicted to it and came to really love the DS platform for art programs in general.
Line detection would help me optimize my current algorithm greatly, but I don't actually need it to pull off high framerates to an appreciable degree, and the more I look into it, the more I believe that vanilla line detection is near impossible (unless, of course, you store user input, which doesn't help for breaking down polygons).
I still haven't reverse engineered the Bresenham line algorithm, but it seems to me that SB uses some other method to draw lines, because I'm noticing odd artifacts in special cases. Also, things like line endpoints and midsections can be erased or drawn over, which makes them harder to handle either way, stored user input or not.
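For comparison, this is the textbook integer Bresenham (Python); if SB's output differs from this on those special cases, that would back up the idea that it draws lines some other way:

def bresenham(x0, y0, x1, y1):
    # Classic integer-only Bresenham; yields every pixel on the line,
    # handling all octants via the sign and error terms.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy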
eraser paths are just more lines but in the background color ww
That approach might work very well for drafting, and Flipnote lacked a good drafting system, so I might make it a separate kind of feature in my program (not to appease your ideas; I'm basically making this solely for myself lol). Like, a separate layer used for sketches and drafts that doesn't show in the animation.
But for the actual frames themselves, I don't want to get into the mess of mixing and handling the two approaches simultaneously, especially for the single edge case of one-pixel-wide lines. I'm keeping this as close to Flipnote as possible because that's what's most familiar to me, so it's bitmap only.
Looked up Flipnote a bit; it sounds like there were a couple of frame types: a brand-new image, and a type that only encodes differences from the previous frame (good if you just have a character whose mouth moves while the rest of the frame doesn't). From there you could probably do some sort of run-length encoding or pattern encoding and get fairly small frame sizes. If you're doing black and white only, you could probably pack 32 pixels into an int.
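A sketch of the packing-plus-diff idea in Python (frames as 2D lists of 0/1 pixels; all names invented):

def pack_rows(frame):
    # Pack 32 one-bit pixels into each int.
    packed = []
    for row in frame:
        for i in range(0, len(row), 32):
            word = 0
            for bit, px in enumerate(row[i:i+32]):
                word |= px << bit
            packed.append(word)
    return packed

def diff_frame(prev, cur):
    # XOR packed words: zero words mean "unchanged", which the
    # run-length pass below then collapses.
    return [a ^ b for a, b in zip(prev, cur)]

def rle(words):
    # (count, value) run-length encoding; long runs of zero words
    # from a static background shrink to a single pair.
    out = []
    for w in words:
        if out and out[-1][1] == w:
            out[-1][0] += 1
        else:
            out.append([1, w])
    return out

Decoding is symmetric: expand the runs, XOR the words onto the previous frame's packed words, and unpack.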
Yeah, Flipnote checks for differences between frames. That's how it compresses frames so well and keeps its file sizes so small (999 complex frames in mere kilobytes). I've thought a little about how such a system would work in SB, but I've never actually prototyped anything. Come to think of it, it might be seriously worth looking into again.