Adding colors: what's the math?

I'm experimenting with "Media Computation with Costumes" (ref. manual, ch. H, 1st section).
I would like to know what formulas are used to calculate pixel values when one stamps partly transparent costumes onto each other.
To make it more specific: which pixel values (RGBA) does the script below yield as a function of parameters r1 .. a2?

[image: untitled script pic (31)]

I haven't found it in the manual, and I assume only @jens or perhaps @bh knows the answer.
Thank you in advance!

i'm pretty sure snap uses straight alpha, not premultiplied.
you can check by seeing whether fully transparent white shows up while fully transparent black doesn't. if transparent white is visible, it's premultiplied.
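the difference between the two representations can be sketched in a few lines of python (a toy model with float channels in 0..1, not snap's or the browser's actual internals):

```python
def to_premultiplied(r, g, b, a):
    """Convert straight (non-premultiplied) RGBA to premultiplied RGBA.
    Channels are floats in 0..1; alpha itself is unchanged."""
    return (r * a, g * a, b * a, a)

# Straight alpha stores the full color; alpha sits alongside it.
straight = (1.0, 0.0, 0.0, 0.5)         # half-transparent red
premult = to_premultiplied(*straight)    # (0.5, 0.0, 0.0, 0.5)

# The telltale case: fully transparent white. Straight alpha still
# carries the white RGB (hidden but present); premultiplying collapses
# it to all zeros, indistinguishable from transparent black.
assert to_premultiplied(1.0, 1.0, 1.0, 0.0) == (0.0, 0.0, 0.0, 0.0)
```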

I'd suspect that even if there were a definite theoretical answer, the JS canvas anti-aliasing that goes on when things are stamped and converted to pen trails might make the answer moot

Just my guess :slight_smile:

there is an answer, because otherwise the computer wouldn't be able to do it.
the antialiasing can also be calculated (just some extra math to the alpha based on the shape before compositing) but i don't think that's what they want here

Alpha compositing is not implemented pixel-by-pixel in Snap! itself; it relies on parts of the canvas API (without reference to any particular standard). As a result, it should be considered "implementation specific" and may depend on the underlying operating system or drivers.

Layers are added (conceptually) one at a time, so imagine we have only two layers, a background and an overlay. This lets me imagine that the background layer doesn't have an alpha value; there's nothing behind it.

So, convert the top layer's alpha to a fraction: 255 -> 1.0 and so on. Assign the background layer an alpha fraction of 1 minus the top layer's alpha. Then multiply the RGB values by the fractions and add the results.
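That simplified recipe (top layer weighted by its alpha fraction, opaque background weighted by one minus that) can be sketched as follows; this is a toy model under the stated opaque-background assumption, not Snap!'s actual code:

```python
def blend_over_opaque(top_rgb, top_a, bg_rgb):
    """Blend a translucent top pixel onto an opaque background.
    top_a is a fraction: byte 255 maps to 1.0, 128 to about 0.5, etc."""
    return tuple(t * top_a + b * (1.0 - top_a)
                 for t, b in zip(top_rgb, bg_rgb))

# Half-transparent red over white gives pink:
print(blend_over_opaque((255, 0, 0), 0.5, (255, 255, 255)))
# -> (255.0, 127.5, 127.5)
```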

It's probably a bit more complicated, since the second layer has an alpha of its own, which must also be taken into account. Would the Wikipedia link from post 2 provide a decent approximation of the algorithm used? (I'm a bit worried by @dardoro's reference to the Canvas API)

The background of my question: I'm attempting to speed up my remix of @joecooldoo's version of Conway's Game of Life; the idea surfaced that I might have Snap!'s rather fast graphics subsystem do much of the calculation.

the wikipedia link is almost certainly the exact algorithm. there's no reason to use any other algorithm, it would just be inaccurate. this is far too simple of an operation to consider implementation specific.
if different computers or software did it differently, we wouldn't be able to look at any transparent images and see the same result.
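for reference, the straight-alpha "over" formula from that wikipedia page looks like this as a minimal python sketch (float channels in 0..1; the textbook operator, not a claim about the browser's exact source):

```python
def over(top, bottom):
    """Porter-Duff 'over' for straight (non-premultiplied) RGBA tuples,
    channels as floats in 0..1."""
    (tr, tg, tb, ta), (br, bg, bb, ba) = top, bottom
    a_out = ta + ba * (1.0 - ta)
    if a_out == 0.0:
        return (0.0, 0.0, 0.0, 0.0)   # fully transparent result
    def blend(t, b):
        return (t * ta + b * ba * (1.0 - ta)) / a_out
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a_out)

# Half-transparent red over opaque white: opaque pink.
print(over((1.0, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0, 1.0)))
# -> (1.0, 0.5, 0.5, 1.0)

# Half-transparent red over half-transparent green: a 75%-opaque mix.
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 1.0, 0.0, 0.5)))
```

note that when the bottom layer is opaque (ba = 1), a_out is 1 and the formula reduces to the simpler weighted sum described earlier in the thread.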

On the first iteration, the bottom layer is the background color, and it really doesn't mean anything to assign it a transparency, because there's nothing underneath it.

On the second iteration, the bottom layer is the result of combining the original bottommost layers, and now that combination is the background for the next higher layer.

But I could be wrong; I've been wrong before.

they're grabbing the pen layer, the pen layer is a transparent image, it doesn't include the background

Well, then, behind the pen layer is a background, even if plain white, and that has to be opaque no matter what its alpha channel says. No?

the original post is about the pen trails, which don't depend on the background and are separate from it. the pen trails layer itself still has transparency: it's one transparent image stamped on top of another. it doesn't include the background.

(1) What "other"? I don't understand this sentence.

(2) Okay, so, first combine the pen trails (with transparency) and the background (without) to make a new image without transparency, to use as the background for adding the next layer. Oh wait, I think I see what you mean: when you stamp a sprite costume onto the pen trails, you have a first class picture that has to have transparency values. It's the "first class picture" part that's the problem, because you're not just changing what's on the screen right now; you're keeping it around and might use it later over a different background.
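The alpha part of that story can be stated on its own: when one translucent layer is stamped over another, the result keeps an alpha given by the "over" operator, so it can later be placed over any background. A tiny sketch (fractions in 0..1, illustrative only):

```python
def out_alpha(a_top, a_bottom):
    """Resulting coverage when a translucent layer is stamped over
    another (the alpha channel of the 'over' operator)."""
    return a_top + a_bottom * (1.0 - a_top)

assert out_alpha(0.5, 0.5) == 0.75  # two half-transparent layers stay 25% see-through
assert out_alpha(1.0, 0.5) == 1.0   # an opaque stamp makes the pixel opaque
assert out_alpha(0.0, 0.5) == 0.5   # a fully transparent stamp changes nothing
```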

I'll have to think more about this, you're right. (Or I guess I could ask Wikipedia.) (How did people used to live without Wikipedia?)

Whatever the algorithm is, I'm pretty sure it's not in Snap!, but in the browser. I could be wrong though.

it's in the browser, and i sent the algorithm in the first post.