I have two (same-sized) images, let's call them X and Y, that I’d like to combine into a new image, as if image Y were underneath, with image X put on top (so X covering Y, or X/Y for short).
How do I combine these images?
(I wonder how Snap! does it, and if I can replicate that in a custom block)
Both images are defined by RGBA-pixel maps: items 1, 2, 3 are Red, Green, Blue values (0-255); item 4 is opacity (0 = transparent, 255 = opaque).
Some requirements I think are reasonable:
If either image is fully opaque, the combination will be fully opaque, too.
a. If the top image is fully opaque, the combined properties will be those of the top image.
b. If the top image is fully transparent, the combined properties will be those of the bottom image.
A combination's opacity must at least be the largest of the component opacities.
Each color value of the combination must be in the range from the lowest to the highest of the corresponding component values (inclusive).
All values of a combination must be within the range 0 - 255.
If three images are combined (X/Y/Z), it doesn't matter how they are grouped: X / (Y/Z) = (X/Y) / Z.
The order of the images does matter: X/Y ≠ Y/X.
If an image is fully transparent, it doesn’t exist.
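To make these requirements concrete, here is a Python sketch of a hypothetical per-pixel `combine` function, with some of the requirements written out as assertions. The compositing rule used (the standard "over" operator with straight alpha) is an assumption on my part, not necessarily the one you'll end up with:

```python
def combine(top, bottom):
    """Hypothetical per-pixel overlay (top over bottom).
    Pixels are [R, G, B, A] lists, values 0-255, straight alpha."""
    at, ab = top[3] / 255, bottom[3] / 255
    ao = at + ab * (1 - at)                    # combined opacity
    if ao == 0:                                # both layers fully transparent
        return [0, 0, 0, 0]
    rgb = [(t * at + b * ab * (1 - at)) / ao   # opacity-weighted average
           for t, b in zip(top[:3], bottom[:3])]
    return [round(c) for c in rgb] + [round(ao * 255)]

# 1a: a fully opaque top image wins completely
assert combine([10, 20, 30, 255], [99, 99, 99, 128]) == [10, 20, 30, 255]
# 1b: a fully transparent top image leaves the bottom unchanged
assert combine([99, 99, 99, 0], [10, 20, 30, 128]) == [10, 20, 30, 128]
# 2: combined opacity is at least the largest component opacity
assert combine([0, 0, 0, 100], [0, 0, 0, 200])[3] >= 200
```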
While writing this topic I've come up with a trial solution (see below), but I'm pretty sure it's not perfect, as it violates my second-to-last requirement.
My trial solution
To keep things simple I'm going to describe a set of rules on a pixel scale.
And for opacity I am substituting A' = A / 255
The way new costume and pixels of work is that, for whatever reason, they report every pixel's RGBA from the top left, going rightwards and then down one row until they reach the bottom. Not as a 3D list (rows of pixels), though, but as one huge 2D list in which each item is the four RGBA values of a single pixel.
That's why you have to input the width and height manually: the block has to do the row wrapping itself.
Why it isn't using 3D lists I have no idea.
Thankfully this means you might be able to just use append to add the pixel data on. But I'm not sure how the actual wrapping algorithm works; it might mean you have to calculate where to insert each line of costume pixel data, and that would be very difficult.
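Assuming pixels of really does report one flat, row-major list of RGBA quadruples, the "wrapping" is just index arithmetic; a Python sketch of that assumption:

```python
def to_rows(pixels, width):
    """Regroup a flat, row-major list of RGBA quadruples into rows."""
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def pixel_at(pixels, width, x, y):
    """RGBA of the pixel at column x, row y (0-based)."""
    return pixels[y * width + x]

# a 3x2 "image": six RGBA quadruples, left-to-right, top-to-bottom
img = [[i, i, i, 255] for i in range(6)]
assert pixel_at(img, 3, 2, 1) == [5, 5, 5, 255]   # last pixel of row 1
assert len(to_rows(img, 3)) == 2                  # two rows of three pixels
```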
Be that as it may, in the meantime I've been thinking about a better solution myself, too - and I think I've found one that does meet my requirements. I now take the opacity of the lower layer into account when calculating the RGB values, while compensating for the combined opacity:
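In per-pixel terms (with values scaled to 0..1, like A' above), the rule described here might be sketched in Python as follows - my interpretation, not the actual Snap! script:

```python
def over(top, bottom):
    """'top over bottom' with straight alpha: weight each layer's RGB
    by its effective opacity, then divide by the combined opacity so
    colors aren't artificially darkened. Pixels are (r, g, b, a), 0..1."""
    at, ab = top[3], bottom[3]
    ao = at + ab * (1 - at)            # combined opacity
    if ao == 0:                        # both layers fully transparent
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple((ct * at + cb * ab * (1 - at)) / ao
                for ct, cb in zip(top[:3], bottom[:3]))
    return rgb + (ao,)

# two half-transparent layers of the same color keep that color
assert over((1, 1, 1, 0.5), (1, 1, 1, 0.5))[:3] == (1.0, 1.0, 1.0)
```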
You mean a weighted average, don't you? In the extreme case, for example, if the top image is opaque, the bottom one doesn't contribute at all.
I believe the right way to think about this is recursively. Let's say you have a pile of N images.
If N=1, the result is that one image.
Otherwise, for each pixel, focus your attention on the top component. The result pixel will be the weighted sum of the top component and what's under the top component, i.e., all but the top component. "Weighted sum" in this context means
(top RGB)*(top opacity as fraction) + (all but top RGB)*(1 - top opacity)
Right? If the top opacity is 1, then only the top RGB contributes. If the top opacity is 0, then the top RGB doesn't contribute at all. If it's 50%, then the two RGB values are averaged. Etc.
Now, what's that "all but top RGB"? Glad you asked. It's a recursive call on layers 2 to N.
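As a Python sketch (layers listed top-first, values 0..1; note that the weighted-sum rule implicitly treats whatever ends up at the bottom as opaque, e.g. the canvas):

```python
def flatten(layers):
    """Recursively composite a pile of layers (top first).
    Each layer is (r, g, b, alpha) with values in 0..1."""
    if len(layers) == 1:
        return layers[0][:3]           # base case: the result is that one image
    top = layers[0]
    rest = flatten(layers[1:])         # recursive call on layers 2 to N
    a = top[3]
    # weighted sum: top RGB * opacity + all-but-top RGB * (1 - opacity)
    return tuple(t * a + r * (1 - a) for t, r in zip(top[:3], rest))

# 50%-opaque black over opaque white: the two are averaged
assert flatten([(0, 0, 0, 0.5), (1, 1, 1, 1)]) == (0.5, 0.5, 0.5)
```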
Sorry for responding to myself, but this just occurred to me: It's not obvious to me that overlaying is associative, as you claim. Suppose we have A/B/C and they all have opacity 75%. If you take A/(B/C) (which I'm arguing is the right thing) then A contributes 75% to the final color, while C contributes 25% of 25% = 1/16 ≈ 6%. If you take (A/B)/C, then C contributes 25% to the total, while A contributes 75% of 75% = 9/16 ≈ 56%. (B's contribution is left as an exercise for the reader. So is the contribution of the background.)
I agree your approach will work, if all layers are available. The underlying condition is that the bottom image is fully opaque. This may be the (black or white) canvas itself.
I was also thinking of cases where you'd want to combine two or more images that are not fully opaque, so as to create a new pre-synthesized image for general use. In that case the combination won't be fully opaque either - and that's where my algorithm comes in.
I like your suggestion to make it recursive, plus I vectorized it so as to process images, not single pixels:
I disagree. The opacity of A/B is 93.75%, so C will contribute nowhere near 25% to the final color.
Think of A, B and C as three partly transparent overhead sheets (for our junior audience: these were drawings on polyester… well, nvm). If you were to glue together A and B (with invisible and fully transparent miracle glue), and cover C with it, would that look any different from A over glued-together B&C?
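A quick numerical check of the overhead-sheets argument: with the full straight-alpha "over" operator (the one that divides by the combined opacity), grouping doesn't change the result, at least for these three 75%-opaque layers (Python sketch):

```python
def over(top, bottom):
    """Straight-alpha 'over': pixels are (r, g, b, a), values 0..1."""
    at, ab = top[3], bottom[3]
    ao = at + ab * (1 - at)
    if ao == 0:
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple((ct * at + cb * ab * (1 - at)) / ao
                for ct, cb in zip(top[:3], bottom[:3]))
    return rgb + (ao,)

A = (1.0, 0.0, 0.0, 0.75)     # three 75%-opaque layers
B = (0.0, 1.0, 0.0, 0.75)
C = (0.0, 0.0, 1.0, 0.75)
left  = over(over(A, B), C)   # (A/B)/C
right = over(A, over(B, C))   # A/(B/C)
# equal up to floating-point rounding
assert all(abs(l - r) < 1e-12 for l, r in zip(left, right))
```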
Graphics elements are blended into the elements already rendered on the canvas using simple alpha compositing, in which the resulting color and opacity at any given pixel on the canvas is the result of the following formulas (all color values use premultiplied alpha):
Er, Eg, Eb - Element color value
Ea - Element alpha value
Cr, Cg, Cb - Canvas color value (before blending)
Ca - Canvas alpha value (before blending)
Cr', Cg', Cb' - Canvas color value (after blending)
Ca' - Canvas alpha value (after blending)
Ca' = 1 - (1 - Ea) × (1 - Ca)
Cr' = (1 - Ea) × Cr + Er (and likewise for Cg' and Cb')
Thank you for searching (and finding) relevant reference documents!
However, IMAO these "official" formulas are inconsistent with common sense, e.g.:
Suppose both source and backdrop color values are the same (let's assume: 1), and both alpha values are 0.5. From the formula it follows that the composite color value will be 1 × 0.5 + 1 × 0.5 × (1 - 0.5) = 0.75.
From the same paragraph in the reference document: αo = αs + αb x (1 - αs).
So in this case: the composite alpha value will be 0.5 + 0.5 x (1 - 0.5) = 0.75.
Now suppose this 2-composite is used as the new "source" against another backdrop with the same color value (= 1) that is fully opaque (alpha = 1).
The resulting 3-composite will have a color value of 0.75 x 0.75 + 1 x 1 x (1 - 0.75) = 0.8125.
This doesn't seem consistent with the notion that the backdrop (and therefore the combination) is fully opaque, and all layers have the same color value of 1.
In contrast, my formula (post #5) makes a color correction by including the composite alpha value as a divisor in the composite color value (0.75 / 0.75 = 1).
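The arithmetic can be replayed in a few lines of Python (colors in 0..1; `c3_corrected` applies the division by the composite alpha described here):

```python
# composite of two layers per the reference: Cs*as + Cb*ab*(1 - as)
cs = cb = 1.0
a_s = a_b = 0.5
c2 = cs * a_s + cb * a_b * (1 - a_s)   # 2-composite color value
a2 = a_s + a_b * (1 - a_s)             # 2-composite alpha value
assert c2 == 0.75 and a2 == 0.75

# use the 2-composite as source over an opaque backdrop of color 1
c3 = c2 * a2 + 1.0 * 1.0 * (1 - a2)
assert c3 == 0.8125                    # not 1, as common sense would suggest

# dividing by the composite alpha first removes the anomaly
c3_corrected = (c2 / a2) * a2 + 1.0 * 1.0 * (1 - a2)
assert c3_corrected == 1.0
```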
The second reference has comparable issues. Only if the backdrop alpha is less than one, of course, but apparently the formulas are supposed to have taken that into account already.
I'm probably overlooking something ... I just don't see what.
That looks good. Apparently Snap!'s built-in ghost and overlay mechanisms are OK.
Still I wonder about the formulas from your post #17. Perhaps the composite color they report is to be interpreted as the effective screen color of a combination of "source" and "backdrop" against an opaque black background. Whereas my formulas are about the combination of multiple images without necessarily a black opaque background.
Interestingly (though perhaps slightly off-topic) I found that when a sprite is being dragged over an off-stage area (e.g. (-300, 0), slightly left of the stage), the screen RGBA will be influenced - except if the sprite is the one with the script that tests for the RGBA values:
(screen color (-300, 0) is a variable whose reporter is displayed).
On topic again: you'll find that the (screen) colors as influenced by a sprite's costume in the white-background off-stage areas are very different from those on the black-background stage. Which I think demonstrates my point about the formulas (post #17 vs. post #5) describing different entities.