[Challenge] Make a costume pixel lossless compressor/decompressor

I wanted to send and store a few Snap! costumes on a server, and wanted to reduce the number of bytes used while still preserving all the image information.

But then I thought - this could make a good challenge for others, to come up with algorithms and code and see what can be achieved using just Snap! and no JavaScript/primitive calls.

So the challenge is - can you reduce a costume to a smaller number of bytes and then recreate it perfectly from the compressed data?

For example, the standard Alonzo image is 90 pixels wide by 120 pixels tall.

This translates into a 4-column (R, G, B, A) list, 10,800 rows in length.

So the uncompressed size is 43,200 individual byte values.
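
In Python terms (just to double-check that arithmetic, assuming 4 bytes per RGBA pixel):

```python
# Size of the uncompressed Alonzo pixel data,
# assuming one RGBA quadruple (4 bytes) per pixel.
width, height = 90, 120
pixels = width * height    # 10,800 rows in the 4-column list
raw_bytes = pixels * 4     # 43,200 individual byte values
print(pixels, raw_bytes)   # 10800 43200
```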

How much can you reduce it to?

FAQ

  1. FTAOD (for the avoidance of doubt): the "competition" element of the challenge is to be applied to the standard Alonzo bit-mapped image :slight_smile:

I'd appreciate it if older Snappers didn't rush in with solutions that compress down to 20 bytes :slight_smile:

I don't know how to do it, but I assume it would be possible to turn the image into the bunches of numbers in rows and columns that you get in Picross games.

Yes - the PIXELS OF COSTUME reporter does that.

It gives a list of the red, green, blue and transparency values of each dot.

It doesn't give them in an X & Y format, for speed purposes.
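
For anyone following along outside Snap!, here's a rough Python model of that flat list (an illustrative sketch - `pixel_at` is a made-up helper, and I'm assuming row-major order, i.e. left to right, top to bottom):

```python
# Hypothetical model: the costume's pixels as a flat list of
# (R, G, B, A) tuples in row-major order.
def pixel_at(pixels, width, x, y):
    """Recover the pixel at column x, row y (both 0-based)."""
    return pixels[y * width + x]

# A tiny 2x2 "costume": red, green, blue, transparent.
demo = [(255, 0, 0, 255), (0, 255, 0, 255),
        (0, 0, 255, 255), (0, 0, 0, 0)]
print(pixel_at(demo, 2, 1, 1))   # (0, 0, 0, 0) - the bottom-right pixel
```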

That's not how Picross works - the numbers correspond to how many adjacent pixels there are in that row/column.

Yes - it's a bit different in Snap!
Page 79 of the manual explains things a bit

The most basic RLE encoding gives me 2049 items.
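
For reference, a minimal sketch of that kind of basic RLE in Python (pixels as (R, G, B, A) tuples, each run stored as a [pixel, count] pair), with the matching decoder to show it's lossless:

```python
def rle_encode(pixels):
    """Collapse runs of identical pixels into [pixel, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([p, 1])   # start a new run
    return runs

def rle_decode(runs):
    """Expand [pixel, count] pairs back into the original pixel list."""
    return [p for p, n in runs for _ in range(n)]

pixels = [(0, 0, 0, 0)] * 5 + [(255, 255, 255, 255)] * 3
runs = rle_encode(pixels)
assert rle_decode(runs) == pixels   # lossless round trip
print(len(runs))                    # 2 runs for this toy list
```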

That would have to be something like merely keeping the costume’s name, which I don’t think is what you are looking for. :smirk:
Seriously, any (general-purpose) algorithm reducing the amount of data to less than 5 kbytes is going to be excellent, IMO.

BTW, Snap! itself apparently employs data compression: although every pixel is characterized by 4 bytes, adding one Alonzo costume to a project increases its memory usage by only about 11 kbytes (approximately 1 byte per pixel).

Great initiative!

How many unique run lengths? (I found 70). When combined with another lossless compression method, 3.5 - 4k is probably the best attainable result. But who knows what someone will invent.

2049 items of {{R,G,B,A}, chunk length}.
75 unique RGBA values
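
Those two numbers suggest an obvious next squeeze (a back-of-the-envelope sketch, not anyone's actual entry): with only 75 unique RGBA values, each colour fits in a one-byte palette index, so a run can be stored as just an index byte and a length byte:

```python
def palette_rle_encode(pixels):
    """RLE where each run stores a one-byte palette index instead of
    four colour bytes. Assumes <= 256 unique colours; runs longer than
    255 pixels are simply split into several runs."""
    palette = sorted(set(pixels))
    index = {colour: i for i, colour in enumerate(palette)}
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == index[p] and runs[-1][1] < 255:
            runs[-1][1] += 1
        else:
            runs.append([index[p], 1])
    return palette, runs

def palette_rle_decode(palette, runs):
    return [palette[i] for i, n in runs for _ in range(n)]

pixels = [(0, 0, 0, 0)] * 300 + [(255, 255, 255, 255)] * 100
palette, runs = palette_rle_encode(pixels)
assert palette_rle_decode(palette, runs) == pixels
```

Back of the envelope: roughly 2,049 runs at 2 bytes each, plus a 75-entry x 4-byte palette, lands around 4.4 kbytes before any further entropy coding - in the same ballpark as the 3.5 - 4k estimate above.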

It just loads the image as a PNG file, so Snap! isn't really doing any special compression. Audio, on the other hand, Snap! decompresses and stores as a WAV file (or at least that's what I get when I export it).

PNG does lossless compression, too. Whether it counts as "special" is a matter of taste - or definition.

Do the competition rules allow solutions that work only for Alonzo? Or does the solution have to work on arbitrary picture files? Alonzo (like all cartoon characters) is easy to compress because (as dardoro suggests) you get lots of runs of identical colors.

The challenge is to write a working lossless compressor/decompressor.

Any image will do, but I'm just using Alonzo as the benchmark so we can have a simple, definitive "winner".

And I was throwing it out for some younger Snappers to maybe look at, hence my follow-up

:slight_smile:

Yeah I read that. I'm not planning to compete; I just want to clarify what counts as a solution.

So in practice the challenge focuses on compressing cartoon characters, and that seems like a sensible limitation: challenging but doable.

I think this misses the point of the question. Is a solution good enough if it only works for Alonzo? So I could enter
[untitled script pic]
and then have ITEM OF variants that check for the word "Alonzo" as the input, and if so, read the desired pixel from the disk file, carefully not reading the entire costume into memory?

As a more realistic example, we've been talking about run length encoding, which is a great compression technique for cartoon costumes but not so good for photo costumes (or watercolor ones such as the ones from Meghan Taylor). Is that acceptable?

(Sorry for being lawyerish about this, but I believe in resolving ambiguities before entries are judged, rather than after.)

I think Simon is trying to spark some interest in a small but constructive algorithmic problem. Something like an Advent of Code Lite for Snappers.

So I referred to RLE encoding as a simple and easy-to-follow solution that can be further expanded.

And it's great for cartoons, as I said. But for photos it's likely to make the picture twice as big, every pixel combined with a run length of 1. So, is that accepted as a solution? I think Simon was sort of hinting at that, and you made it very explicit, but I just want (still!) a yes or no about whether it qualifies as a solution. You clearly say yes, but it's Simon who has to make it official. :~)
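
A quick way to see the worst case (a toy demo with random noise standing in for a photo):

```python
import random

random.seed(1)
# Noise-like "photo": 1,000 random RGBA pixels.
noise = [tuple(random.randrange(256) for _ in range(4))
         for _ in range(1000)]
# A new run starts wherever a pixel differs from its predecessor.
runs = 1 + sum(1 for a, b in zip(noise, noise[1:]) if a != b)
print(runs)   # almost certainly 1000: one run per pixel, pure overhead
```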

It's as Dariusz says

Not intended to be a competition as such - more about just implementing compression/decompression methods using Snap!

Just inspired by the fact that we can't easily convert a costume image into a compressed PNG inside Snap!, so why not make our own compressors :slight_smile:

I'm not into "serious" (widely used) compression software, but I assume that for every object one or more algorithms are selected from an array of candidates, depending on their respective results when applied to that particular object. The selected algorithm(s) are then recorded in the compressed file's header, so at decompression time it's clear which decompression algorithm to use. Is it like that?
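
Roughly, yes - at least for container formats that support several methods. A hedged Python sketch of the idea (the one-byte tags are made up for illustration; `zlib` and "store raw" stand in for the array of methods):

```python
import zlib

RAW, DEFLATE = b"\x00", b"\x01"   # made-up one-byte method tags

def compress_best(data: bytes) -> bytes:
    """Try each candidate method, keep whichever result is smallest,
    and prefix a tag byte so the decompressor knows what was used."""
    candidates = [RAW + data,
                  DEFLATE + zlib.compress(data, 9)]
    return min(candidates, key=len)

def decompress(blob: bytes) -> bytes:
    tag, payload = blob[:1], blob[1:]
    return payload if tag == RAW else zlib.decompress(payload)

data = b"\x00" * 10_000           # highly compressible test data
packed = compress_best(data)
assert decompress(packed) == data
print(len(packed))                # far fewer than 10,000 bytes
```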

TL;DR: nothing wrong with a specialized algorithm.