[Challenge] Make a costume pixel lossless compressor/decompressor

Can you provide an example of exporting data to a server / importing data from a server?

What is transferred? A list? 43,200 individual numbers? A string (CSV)?

I was using MQTT to save costumes on a broker.
So this sends the data from the Alonzo costume as a list of individual byte values (0-255), length 43,202, including 1 byte for width and 1 for height.

I should really encode the width/height into 2 bytes each, as a costume can be the size of the stage.

And to save 2 bytes :slight_smile: we could send only the width and derive the height from the length of the data.
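A minimal sketch of that header idea in Python (for illustration only - the thread's actual implementation is in Snap! blocks, and the helper names here are mine): width and height packed as two bytes each, big-endian, so dimensions up to 65,535 fit.

```python
def pack_header(width, height):
    # Two bytes per dimension, most significant byte first.
    return [width >> 8, width & 0xFF, height >> 8, height & 0xFF]

def unpack_header(data):
    # Returns (width, height, remaining pixel bytes).
    width = (data[0] << 8) | data[1]
    height = (data[2] << 8) | data[3]
    return width, height, data[4:]
```

With the "send only the width" trick, the height would instead be `len(pixel_bytes) // (4 * width)` for RGBA data.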


You're overthinking here :slight_smile:

Yes - I think so :slight_smile:

For now, I'm at this point: 4398 bytes + 2 bytes for width and height (like Dardoro).

Original pixels: 43,800 bytes,
so 4398 is a 1 : 0.10 ratio (pretty good)

i know i can do better... i have a new idea...

Oops, it doesn't work...

My new idea: has anyone tried LZW compression in Snap!?
Maybe @dardoro?

Idk if it's a better way to achieve this challenge...

Not really, 43800 BYTES.

To get meaningful metrics, I think the compression goal should be the size of the flattened list of bytes (< 256), plus a matching decompressor. That will be most effective and directly usable with the MQTT lib.

A bit of a murky explanation...

Snap lists have built-in boundaries but to transmit data over the network we need some "serialization".
If all values are < 256, and rows have the same, known length, data can be sent as a flattened list of bytes.

The result of the "RLE compact" can be expressed as a fixed chunk record, {R, G, B, A, chunk length} => 4 B + 1..2 bytes for the length. For small chunks (< 256), the total size will be 2050 x 5 ~ 10 kB. For bigger chunks, up to 65,535 (2 B), the total size is 2050 x 6 ~ 12 kB.
So the compression ratio for the byte-encoded data is 12 : 44.
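A sketch of that fixed 5-byte record scheme in Python (illustration of the idea only, with 1-byte run lengths, so runs longer than 255 are split):

```python
def rle_encode(pixels):
    # pixels: list of (r, g, b, a) tuples.
    # Emits 5-byte records {R, G, B, A, run length}, run length 1..255.
    out = []
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i] and run < 255:
            run += 1
        out.extend([*pixels[i], run])
        i += run
    return out

def rle_decode(data):
    pixels = []
    for j in range(0, len(data), 5):
        r, g, b, a, run = data[j:j + 5]
        pixels.extend([(r, g, b, a)] * run)
    return pixels
```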

If an image has fewer than 256 colors, {RGBA} can be palettized and expressed as a single byte.
The total size will be 2050 x 3 = 6150 B + 74 x 4 B (palette).
For Alonzo's costume with small chunks, the total size will be 2050 x 2 = 4100 B + palette.
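The palettization step could look like this (a Python sketch of the idea, not the project's Snap! code; valid only when the image has fewer than 256 distinct colors):

```python
def palettize(pixels):
    # pixels: list of (r, g, b, a) tuples.
    # Returns (palette, indices): each pixel becomes a single-byte
    # index into the palette of distinct colors.
    palette, lookup, indices = [], {}, []
    for p in pixels:
        if p not in lookup:
            lookup[p] = len(palette)
            palette.append(p)
        indices.append(lookup[p])
    return palette, indices

def depalettize(palette, indices):
    return [palette[i] for i in indices]
```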


LZW uses a dictionary of 4096 multibyte sequences. For relatively small images, it may be overkill.
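For reference, a minimal LZW over byte values might look like this in Python (a textbook sketch, not anyone's Snap! implementation): the dictionary starts with the 256 single bytes and grows to the 4096 entries mentioned above.

```python
def lzw_encode(data):
    # data: list of byte values; returns a list of codes (0..4095).
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            if len(dictionary) < 4096:          # cap at 4096 entries
                dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # Handle the code that refers to the entry being built right now.
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.extend(entry)
        if len(dictionary) < 4096:
            dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return list(out)
```

Note that each code needs 12 bits on the wire, so the code list still has to be bit-packed to realize the gain.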

Sorry, my error... I'll correct my post...

Do you have another idea that could improve the compression?

What about Huffman coding?

RLE isn't efficient with this background:
Brick Wall 2

RLE ratio: 1 : 1.24 (it increases the size!)
RLE-by-region ratio: 1 : 0.8

Yes :frowning:

Photo-realistic images don't normally compress well with just RLE

Did you try some compression algorithm?

No - not tried an image like that

What are your results with the Alonzo costume?

I just experimented a bit before; I thought it would be a good challenge for people who've not played with different compression/decompression algorithms.

[image: compress script pic (17)]

Exported as a *.png, this image is 370 kB. That's only slightly above a 50% reduction, even for a dedicated image compressor.

RLE yields only orphaned regions (length < 5) :frowning:

For camera images, the alpha channel can be neglected - a 25% reduction is possible.
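That alpha-dropping step is a one-liner in Python terms (a sketch of the idea; the helper name is mine):

```python
def drop_alpha(rgba_bytes):
    # Keep only R, G, B of every 4-byte RGBA pixel: a 25% size
    # reduction when the alpha channel carries no information.
    return [b for i, b in enumerate(rgba_bytes) if i % 4 != 3]
```

The decompressor would re-insert a constant alpha of 255 after every third byte.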

The distribution of the bytes is slightly skewed, so Huffman compression may be the way to go.
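A sketch of how such a Huffman code table could be built in Python (illustration only, using the standard greedy two-smallest-nodes merge; not code from the project). A skewed byte distribution gives the frequent symbols shorter codes:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Build a Huffman code table: symbol -> bit string.
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tie-breaker, partial code table).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

def huffman_encode(data, codes):
    # Concatenated bit string (would be bit-packed for real transmission).
    return "".join(codes[b] for b in data)
```

The code table (or the symbol frequencies) must be sent along with the bits so the receiver can rebuild the same tree.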

@cymplecy, can you move this topic to the Share Your Projects category
(so it won't automatically close after 1 month)?

I have done that

But it's not needed - if you ever want to post in a closed thread, just email Brian and he'll re-open it for you :slight_smile:


Maybe. That's not a general rule; you've just only ever had good reasons. ;~)