Programming help wanted (for hire)

MZ952:
I'm in the middle of developing a mediocre animation app for SB3 (and SB4, but my current focus is SB3). The app will function a lot like Flipnote for the DSi and 3DS. I'm a big fan of Flipnote and I've always looked forward to building an app in its likeness. I'm building it mainly for my own selfish purposes, but I'll be cleaning it up with pretty GUIs and stable functionality and releasing it for others to use if they wish. The app is mostly there: I've nailed down most of the essential functionality (multiple layers, fast frame compression/expansion for the ~8 MB memory constraint, decent playback frame rates, a proper undo system, etc.), and what I'm left with is connecting the pieces into a coherent whole.

What I haven't approached yet is a simple project export system: a system to take what was drawn and animated in SmileBASIC and bring it to PC as sequential frames for further editing and post-processing. Because I'll be sharing this app around (both with those who are and those who aren't familiar with SmileBASIC), I'd like a system that can work for everyone. That is, a method that doesn't require extra hardware, hacked systems, or hours of PetitModem symphonies.

I think that method may lie in screenshots. I envision an export process where the user is prompted to screenshot a series of "QR code-like" images displayed on the screen. The user would grab those screenshots off their SD card, drop them into a simple executable on Windows, and the executable would process the data encoded in the screenshots and spit out a folder of sequential, properly-labeled .pngs for use in other applications. Given the nature of my program (pixel-art-esque frames with a limited color palette), I think this is entirely feasible. (Actually, for an export process that anybody could quickly and easily grasp, I think it's my *only* choice.)

This is where I need your help. I'd call myself a good SB programmer, but I know precisely squat about: getting pixel data from .jpegs (3DS screenshots are lossy .jpegs); writing simple, distributable executables for Windows; and creating .png files. Basically, I don't know any language other than SmileBASIC, and I think it would take me a year or more just to learn the basics of these things and get such an executable ready for use. I'm offering anyone with the skills for this 100 USD (via PayPal; amount is totally negotiable) to help me build such an export tool for my program.

I think making this thing will take a couple of steps. First, we have to determine what kind of pixel data we can recover from the minimally-compressed .jpeg that comes out of SB3. I've heard from a few people that one bit per pixel is the likely case, and based on how JPEG compression works I think that's accurate. (JPEG applies a discrete cosine transform to 8x8 pixel blocks and throws away the less noticeable detail, preserving brightness information much better than color. If we draw only pure white and pure black pixels, I think we can recover a completely preserved bit stream by rounding each pixel's brightness to 0 or 255, whichever is nearer. My few tests in SB3 seem to support this; a sketch of the idea is below.) If this turns out not to be the case, then we may have to use 1 bit per 2 pixels, or 1 bit per 4 pixels, etc. Second, we have to determine how to format and losslessly compress the data to minimize the number of screenshots required.

I think I have this part figured out (although I have more testing to do). From my current model, the worst-case scenario of transmitting ~8 MB of animation data (which translates to several thousand roughly drawn frames) should yield a little under a few hundred 1-bit-per-pixel screenshots. (Yikes! But then again, that's several thousand frames.) The average animation (a couple hundred frames) would likely yield a few dozen screenshots, and small animations should fit inside a dozen or fewer. These may be liberal estimates. It all hinges on the outcome of the first step.
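A minimal sketch of that thresholding step in Java, using only the standard ImageIO library. The file name, the 1-bit-per-pixel assumption, and the row-major bit order are illustrative guesses, not a spec:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.BitSet;

// Sketch: recover a 1-bit-per-pixel stream from a lossy 3DS screenshot
// by rounding each pixel's brightness toward 0 or 255, as proposed above.
// "shot_000.jpg" and the bit ordering are assumptions for illustration.
public class ScreenshotDecoder {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("shot_000.jpg"));
        BitSet bits = new BitSet(img.getWidth() * img.getHeight());
        int i = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                // average the channels; JPEG damages color more than brightness
                int luma = (((rgb >> 16) & 255) + ((rgb >> 8) & 255) + (rgb & 255)) / 3;
                bits.set(i++, luma >= 128); // nearer to 255 -> 1, nearer to 0 -> 0
            }
        }
        // ...reassemble bytes from `bits`, decompress, then emit PNGs, e.g.
        // ImageIO.write(frame, "png", new File("frame_0001.png"));
    }
}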

If you're interested or want to see how the development of the app is going, I've set up a discord here: https://discord.gg/KbgZsKs

Sounds interesting, I definitely wouldn't mind trying to write a program to convert data to animation. What do you have so far for output data - is it just plans right now?

Hmm... I think this would make a nice contest, as opposed to choosing one... you do you.

... What do you have so far for output data - is it just plans right now?
You mean, what are the constituents of the data being exported? I'm still rounding out all the angles, but: each frame has 4 layers plus a translucent foreground "shade layer"; each layer may use 8 drawing colors that support translucency; and each layer as a whole can be both "color filtered" and made more or less translucent. So there's color manipulation and alpha compositing involved in processing a frame. (I can give a much better explanation than that of what's actually going on, though, lol.) Frame dimensions may be up to 512x240 px on the 3DS (the Switch version will probably use 512x512), and each frame has a back page color, which can be any color not bound to the palette. (To be honest, the only reason the color and other limitations exist is that I'm a firm believer that limitations breed creativity. I think that's why Flipnote succeeded where other apps on the same platforms, like Inchworm Animation, did not.)
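To pin down the pieces named above, here is one hypothetical way that per-frame structure could be laid out; every identifier here is invented for illustration and is not the app's actual format:

// Hypothetical data model for one frame, per the description above.
class Frame {
    static final int LAYERS = 4;
    int backPageColor;                    // any RGB color, not bound to the palette
    Layer[] layers = new Layer[LAYERS];   // 4 drawing layers
    Layer shadeLayer;                     // translucent foreground "shade layer"
}

class Layer {
    byte[] pixels;                        // palette indices, up to 512x240 of them
    int[] palette = new int[8];           // 8 drawing colors, alpha included
    int colorFilter;                      // whole-layer "color filter"
    float opacity;                        // whole-layer translucency
}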

Hmm... I think this would make a nice contest, as opposed to choosing one... you do you.
I don't think a contest would be a good idea. I'm looking to work directly with someone to create this thing. I think making it will require a decent amount of experimentation, and I don't think many people would want to do that in a contest format without the promise of being paid for their trouble when it's all done.

If you just want the art, you could make a special single-step mode and use the screen-grab function with it. It sounds like that's something you could already do. A series of screen captures should also dump into a video editor nicely. As for QR codes, someone had a QR code generator over on the fuzearena forums; maybe you could port that over. I don't think their QR codes held much data, but it would be a huge head start.

Yeah, that was always an option. But the screenshots are very poor quality, not something I as an animator would want to do post-processing with, especially with pixel art involved. But yeah, if nothing else, lol.
... As for QR codes, someone had a QR code generator over on the fuzearena forums; maybe you could port that over. I don't think their QR codes held much data, but it would be a huge head start.
By "QR code-like," I really meant only in appearance (and in some very stretched sense of "QR," at that). My only trouble with this whole thing is that I don't know how to program outside the environment of SB. If I knew some variant of C well enough to stroll through JPEG images, collect pixel data, and create proper PNGs (or some other lossless format), I probably would've built the thing myself. Generating the data to screenshot shouldn't be any trouble whatsoever.

Hold tight. I'm working on something right now for JPEG-to-pixel conversion; it'll be done by 10 PM Pacific. I'm also working on a nice JavaFX-based user interface where you can drag files to the window and have them converted (after selecting a destination folder). It's a WIP. Edit: I decided to test it out at 9 PM and ran into issues with JavaFX not being included in Java 11. I'll deal with it tomorrow.
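For reference, the drag-to-window part might look something like this in JavaFX. This is a sketch, not the poster's actual WIP; note that since Java 11, JavaFX ships separately as the OpenJFX modules (presumably the issue hit above), so it needs e.g. --module-path /path/to/javafx-sdk/lib --add-modules javafx.controls at launch:

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.input.TransferMode;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;
import java.io.File;

// Sketch of a drag-and-drop converter window.
public class DropWindow extends Application {
    @Override public void start(Stage stage) {
        Label label = new Label("Drop screenshots here");
        Scene scene = new Scene(new StackPane(label), 400, 240);
        scene.setOnDragOver(e -> {
            if (e.getDragboard().hasFiles()) e.acceptTransferModes(TransferMode.COPY);
            e.consume();
        });
        scene.setOnDragDropped(e -> {
            for (File f : e.getDragboard().getFiles())
                System.out.println("Would convert: " + f);  // conversion goes here
            e.setDropCompleted(true);
            e.consume();
        });
        stage.setScene(scene);
        stage.show();
    }
}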

I think I can help :) PetitModem, while normally used for its modem capabilities, has a screenshot option that allows you to take screenshots of "data" and send them to your PC to decode via the PetitModem PC application (and then decompress it if needed). You could have your program convert the current frame to the PZG format via the LZSS library, and then copy the code PetitModem uses to encode data into your program. Then you should be set. Let me know if you need help with this. The only downsides are that the final PNG will be 512x512 in size, so the image will need to be cropped, and that this will only work for the 3DS right now, unless we can convert the LZSS library and the decoding code.

... PetitModem, while normally used for its modem capabilities, has a screenshot option that allows you to take screenshots of "data" and send them to your PC to decode via the PetitModem PC application (and then decompress it if needed). ...
I don't think MZ952 is doubting the effectiveness of PetitModem; I think they're concerned with how difficult it is to use. It would take quite a while to transfer the 8 MB+ of data using PetitModem, whereas with screenshots you just drag them into a program. Not to mention you need cables and a Windows PC for PetitModem. "I'd like such a system that can work for everyone. That is, a method that doesn't require extra hardware, hacked systems, or hours of PetitModem symphonies."

... I don't think MZ952 is doubting the effectiveness of PetitModem; I think they're concerned with how difficult it is to use. ...
That's my point. PetitModem also has a feature for transferring via screenshots.
PetitModem, while normally used for its modem capabilities, has a screenshot option that allows you to take screenshots of "data" and send them to your PC to decode via the PetitModem PC application (and then decompress it if needed).
This is the first thing I said! Did you read my post?

... That's my point. PetitModem also has a feature for transferring via screenshots. ... This is the first thing I said! Did you read my post?
Yeah. For some odd reason, I thought you were proposing converting the data to a 512x512 DAT (GRP), compressing it, and sending it over the audio line with PetitModem, like faking a GRP. I'd never heard of the screenshot feature.

... That's my point. PetitModem also has a feature for transferring via screenshots.
I've never heard of such a feature, huh. (It would make sense, though. By my rough calculations, transferring data via screenshots is faster, bits-per-second-wise, than via the audio modem: 1 MB in roughly 12 minutes, depending on how long you can put up with hitting the screenshot button, versus 55 minutes for the same 1 MB.) If that's the case, then that solves half my problem. The other half is having a PC program that can interpret the data and spit out the frames. Edit: the reason I hadn't heard of it is that I've apparently been using an earlier version of PetitModem. The page hasn't been updated with the newest version.
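Rough numbers behind that estimate, assuming the 400x240 top screen at 1 bit per pixel and one screenshot every ~8 seconds:

400 x 240 px x 1 bit      = 96,000 bits = 12,000 bytes per screenshot
1 MB / 12,000 bytes       ~ 88 screenshots
88 screenshots in ~12 min ~ 11,700 bits/s
vs. the audio modem: 1 MB in 55 min ~ 2,500 bits/s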

Snail, 12Me21, and others did research on color compression in an attempt to improve Amihart's program archiver, and I think they found that you could use 5-bit grayscale reliably. Each pixel would store 5 bits instead of 1: about 60 KB per image. Not bad.
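Assuming the 400x240 top screen, the arithmetic on that figure:

400 x 240 px x 5 bits = 480,000 bits = 60,000 bytes ~ 60 KB per screenshot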

... That's my point. PetitModem also has a feature for transferring via screenshots.
... I've never heard of such a feature, huh. ... The other half is having a PC program that can interpret the data and spit out the frames. ...
Yeah, the feature's only available in the latest builds. Here's the key for the latest one (includes the LZSS library): NJEQK3A4. Latest PC app: http://rei.to/PetitModemPC1.4.1.zip As for the other issue, you might be able to just use the PetitModem PC app to decode the data, and its companion LZSS app to decompress the saved graphics. The resulting image will be 512x512, so you'll need to have it cropped, but maybe this could be used to our advantage (storing multiple frames on one graphics page to be cropped into different images).

... As for the other issue, you might be able to just use the PetitModem PC app to decode the data, and its companion LZSS app to decompress the saved graphics. The resulting image will be 512x512, so you'll need to have it cropped, but maybe this could be used to our advantage (storing multiple frames on one graphics page to be cropped into different images).
I already punched the numbers on that. If it takes ~20 seconds for LZSS to compress a frame and there are 200 frames to compress, it'll take a little over an hour to complete (20 s x 200 = 4,000 s, about 67 minutes). I just can't have that; it's no better than sending the animation file as-is over the audio modem. The animation data is already under a compression scheme in the save file, something comparable to LZ (although LZ is typically better). What I'm leaning towards is using this new PetitModem feature to do the file transfer from 3DS to PC, and then having some as-yet-unbuilt program decode the transferred data and turn it into frames. That should be much, much faster, and it cuts half the work out of this endeavor. I haven't tested it yet, but just from exploring the code on the website, it looks like it encodes 16 bits into 4 horizontal pixels? If that's correct (which it may not be; I haven't explored the code that thoroughly), that puts it at 4 bits per pixel, which, if--
... and I think they found that you could use 5-bit grayscale reliably. Each pixel would store 5 bits instead of 1: about 60 KB per image. Not bad.
--holds true, then perhaps the creator of PetitModem relied more on their wonderful LZSS compression in the screenshot method rather than exploring avenues for stuffing more bits into each pixel. Maybe the screenshot method could be made better?

I just thought of something. All the compression solutions that have been proposed so far rely on compressing individual frames, using something like LZSS. But what if we used frame redundancy instead? Given two sequential frames of an animation, it's highly unlikely that all the pixels will be different. We can use this to our advantage by developing a video format that only stores the changed pixels. I saw this in a Tom Scott video a while ago and just realized its applicability. EDIT: The first sentence may have been misleading. I realize that it's not exactly what we're doing, but the rest is nonetheless applicable.

... But what if we used frame redundancy instead? Given two sequential frames of an animation, it's highly unlikely that all the pixels will be different. We can use this to our advantage by developing a video format that only stores the changed pixels. ...
Yeah, that's exactly what I'm doing. I haven't fully rounded it out yet, but I've got a compression scheme called GS4 that compares multiple frames for changes between them.

Imagine that each frame has a unit thickness of 1 and we stack the frames atop one another in display order. The algorithm searches for rectangular prisms of like color through the 3D structure of the piled frames, and stores the two coordinates (x0,y0,z0; x1,y1,z1) needed to describe each prism.

To extract the frames after compression: the output data is sorted by z-precedence, so prisms beginning at z0 (frame 0) are listed first. We look at only the x and y components and draw each prism's rectangle on the screen. Beforehand, we create one list per z unit (per frame): if a prism's z-endpoint is 0, we store its 2D coordinates in list 0; if it's, say, 22, we stow them in list 22. After we've drawn all the 2D rectangles for z0, we move to z1. Before drawing any z1 rectangles, we run through the z0 list and redraw the rectangles stored there with transparency (those prisms started at 0 and ended at 0). After erasing all rectangles whose z-endpoints were on the previous frame, we begin drawing the prisms that start at z1. Rinse and repeat. (A rough sketch of this decode loop is below.)

The algorithm has 3 degrees of freedom when deciding which prisms to store. It judges by volume, but gives extra weight to prisms that extend farther along the z-axis; this minimizes the number of drawing commands that need to be executed when animating through a series of frames. It works well when there is a limited color palette. My original plan was to apply some variant of LZW to the output, draw the binary on the screen, screenshot it, and have a PC application decode it all.
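A rough Java sketch of that decode loop, under the assumptions stated above (input sorted by starting frame z0; the Prism type, canvas size, and fill helper are invented for illustration):

import java.util.ArrayList;
import java.util.List;

// One prism of like-colored pixels spanning frames z0..z1 inclusive.
record Prism(int x0, int y0, int z0, int x1, int y1, int z1, int color) {}

class Gs4Decoder {
    static final int W = 512, H = 240, TRANSPARENT = 0;

    // prisms must be sorted by z0, as the post describes.
    static int[][] decode(List<Prism> prisms, int frameCount) {
        int[] canvas = new int[W * H];
        int[][] frames = new int[frameCount][];
        List<List<Prism>> endingAt = new ArrayList<>();
        for (int z = 0; z < frameCount; z++) endingAt.add(new ArrayList<>());
        int next = 0;
        for (int z = 0; z < frameCount; z++) {
            // erase rectangles whose prisms stopped on the previous frame
            if (z > 0) for (Prism p : endingAt.get(z - 1)) fill(canvas, p, TRANSPARENT);
            // draw rectangles for prisms that begin on this frame
            while (next < prisms.size() && prisms.get(next).z0() == z) {
                Prism p = prisms.get(next++);
                fill(canvas, p, p.color());
                endingAt.get(p.z1()).add(p);   // remember when to erase it
            }
            frames[z] = canvas.clone();        // frame z is now complete
        }
        return frames;
    }

    static void fill(int[] canvas, Prism p, int color) {
        for (int y = p.y0(); y <= p.y1(); y++)
            for (int x = p.x0(); x <= p.x1(); x++)
                canvas[y * W + x] = color;
    }
}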

No offense, but that seems like a horrible attempt to reinvent the wheel. In a lot of scenarios, that system would use more space than a raw bitmapped image. Going by the fact that your code searches for like-colored (please define) areas, what would happen if I had black-and-white dithered (pseudo-grayscale) art where very few adjacent pixels share a color? To describe an individual pixel in a 400 by 240 image, you'd be using 17 bits per pixel (plus the color, so 20 for a Flipnote-paletted program). With bitmaps, you'd only need 3 bits per pixel. If you stored the second coordinate as an offset rather than an absolute coordinate, then I guess you could increase the density a little.

My system would be something like this (sketched in code below):
1. For the first frame of the animation, store the actual image. Nothing fancy, just a bitmap.
2. For each later frame, the first bit is a "break-even" bit. If the uncompressed frame would use less data than the compressed version (which can happen), the flag is set to 1 and the raw frame is stored.
3. Once we get to a frame that is actually compressed (flag set to 0), we analyze the next bit. If it's 1, the pixel is a continuation and uses the same color as the previous frame's pixel; since it doesn't need to redefine the color, the rest of the bits are omitted.
4. If the bit is 0, the next three bits are actual color data. Flipnote uses 6 colors, so we can store the color in 3 bits and use a color LUT to get the actual RGB value.
5. Rinse and repeat.

It could be improved with frame references, so a frame could reference an arbitrary frame's color data rather than just the preceding frame's. We could also use something like RLE for long runs of repeating pixels, to avoid writing the repeat flag so often; instead, we'd say something like "the next 20 pixels are all repeats" (saving 15 bits).

I think my system is better for a few reasons:
1. Worst case (random pixels, no repeats), my method uses 20% of what yours uses.
2. The efficiency of my method increases linearly with color depth.
3. Your method's efficiency decreases as the screen resolution increases, since it needs more bits to store each rectangle's coordinates.

EDIT: I just realized it's 17 bits per coordinate. In reality, each pixel would use 71 bits! So worst case, my system would use 5% of what yours uses.
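A rough sketch of steps 2-4 of that format as a Java encoder. The RLE refinement is left out, all names are invented, and the bit layout is one guess at the description above; pixel values are assumed to be palette indices 0-7:

import java.util.BitSet;

// Sketch of the proposed delta format: a per-frame break-even bit, then
// per pixel either a 1 (repeat previous frame's pixel) or a 0 followed
// by a 3-bit palette index.
class DeltaEncoder {
    static BitSet encodeFrame(int[] prev, int[] cur) {
        BitSet out = new BitSet();
        int pos = 1;                       // bit 0 is the break-even flag (0 here)
        for (int i = 0; i < cur.length; i++) {
            if (cur[i] == prev[i]) {
                out.set(pos++, true);      // 1 = same as previous frame's pixel
            } else {
                pos++;                     // 0 flag (BitSet bits default to 0)
                for (int b = 2; b >= 0; b--)
                    out.set(pos++, ((cur[i] >> b) & 1) != 0);
            }
        }
        if (pos - 1 >= cur.length * 3) {
            // delta is no smaller than raw: set the break-even flag and
            // store the plain 3-bit-per-pixel frame instead
            BitSet raw = new BitSet();
            raw.set(0, true);
            int p = 1;
            for (int px : cur)
                for (int b = 2; b >= 0; b--)
                    raw.set(p++, ((px >> b) & 1) != 0);
            return raw;
        }
        return out;
    }
}

The break-even fallback caps the worst case at 3 bits per pixel plus one flag bit per frame, which is where the "my method uses a fixed small fraction of yours" comparison comes from.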