How can I combine images?

I have two (same-sized) images, let's call them X and Y, that I’d like to combine into a new image, as if image Y were underneath, with image X put on top (so X covering Y, or X/Y for short).

How do I combine these images?
(I wonder how Snap! does it, and if I can replicate that in a custom block)

Both images are defined by RGBA pixel maps: items 1, 2, 3 are the Red, Green, Blue values (0-255); item 4 is the opacity (0 = transparent, 255 = opaque).

Some requirements I think are reasonable

  1. If either image is fully opaque, the combination will be fully opaque, too.
    a. If the top image is fully opaque, the combined properties will be those of the top image.
    b. If the top image is fully transparent, the combined properties will be those of the bottom image.
  2. A combination's opacity must at least be the largest of the component opacities.
  3. Each color value of the combination must be in the range from the lowest to the highest of the corresponding component values (inclusive).
  4. All values of a combination must be within the range 0 - 255.
  5. If three images are combined (X/Y/Z), it doesn't matter how they are grouped: X / (Y/Z) = (X/Y) / Z.
  6. The order of the images does matter: X/Y ≠ Y/X.
  7. If an image is fully transparent, it doesn’t exist.

While writing this topic I've come up with a trial solution (see below), but I'm pretty sure it's not perfect, as it violates my second-to-last requirement.

My trial solution
To keep things simple I'm going to describe a set of rules on a pixel scale.
And for opacity I am substituting A' = A / 255

A'_(X/Y) = A'_X + (1 - A'_X) × A'_Y
RGB_(X/Y) = A'_X × RGB_X + (1 - A'_X) × RGB_Y
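
To make the rule concrete, here's a minimal per-pixel sketch in Python (my own transcription of the formulas above, not the actual Snap! blocks; pixels are [R, G, B, A] lists with values 0-255):

```python
def over_trial(top, bottom):
    """Trial rule: composite one RGBA pixel of X (top) over one of Y (bottom)."""
    a_top = top[3] / 255                    # A'_X
    a_bot = bottom[3] / 255                 # A'_Y
    a_out = a_top + (1 - a_top) * a_bot     # A'_(X/Y)
    rgb = [a_top * cx + (1 - a_top) * cy    # RGB_(X/Y)
           for cx, cy in zip(top[:3], bottom[:3])]
    return rgb + [a_out * 255]
```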

Demonstration of the trial solution violating my second-to-last requirement

Color xp script pic

Does anyone have suggestions for improvement?

Link to the project

hmmm, I usually simply average the pixels...

And why do you need/want these rules?

The way the new costume and pixels of blocks work is, for whatever reason, they return every pixel's RGBA from the top left going rightwards, then down one row at a time, until they reach the bottom. But not as a 3D list with each item being a row of pixels: it's one huge 2D list in which each item is the RGBA values of a single pixel.
That's why you have to input the width and height manually: the block has to do the wrapping itself.

Why it doesn't use 3D lists I have no idea.

Thankfully this means you might be able to just use append to add the list data on. But I'm not sure how the actual algorithm for the wrapping works; it might mean you have to calculate where to add each line of costume pixel data, and that would be very difficult.
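
As an illustration of that flat layout (a sketch only, assuming pixels is the row-major list of [R, G, B, A] items reported by pixels of and width is the costume width):

```python
def to_rows(pixels, width):
    """Group the flat, row-major pixel list into rows (a '3D' list)."""
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def to_flat(rows):
    """Flatten the rows back into the flat form that new costume expects."""
    return [pixel for row in rows for pixel in row]
```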

Well, yeah … that's probably the fastest solution. I'm not sure if it meets my "Requirements", though.

I want to develop a graphical interface (for easy data entry even on tablets). And I want to know how stuff works.

@cookieclickerer33: idk either.

However that may be, meanwhile I’ve been thinking about a better solution myself, too - and I think I found one that does meet my requirements. I’m now taking into account the opacity of the lower layer in calculating the RGB values, while compensating for the combined opacity:

A'_(X/Y) = A'_X + (1 - A'_X) × A'_Y (unchanged)
RGB_(X/Y) = ( A'_X × RGB_X + (1 - A'_X) × A'_Y × RGB_Y ) / A'_(X/Y)
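
In Python terms the improved per-pixel rule might look roughly like this (my own sketch of the formulas above; the actual Snap! code is in the script pic below):

```python
def over(top, bottom):
    """Improved rule: composite RGBA pixel X (top) over Y (bottom), values 0-255."""
    a_top = top[3] / 255                        # A'_X
    a_bot = bottom[3] / 255                     # A'_Y
    a_out = a_top + (1 - a_top) * a_bot         # A'_(X/Y)
    if a_out == 0:                              # both layers fully transparent
        return [0, 0, 0, 0]
    rgb = [(a_top * cx + (1 - a_top) * a_bot * cy) / a_out   # RGB_(X/Y)
           for cx, cy in zip(top[:3], bottom[:3])]
    return rgb + [a_out * 255]
```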

Code

Test against second-to-last requirement

Over-stamping costumes with "ghost" > 0 on the stage does not meet your requirements?

Perhaps. Haven't tried that yet. Thanks for the suggestion.
BTW is that what @jens calls "average the pixels"?

You mean a weighted average, don't you? In the extreme case, for example, if the top image is opaque, the bottom one doesn't contribute at all.

I believe the right way to think about this is recursively. Let's say you have a pile of N images.

If N=1, the result is that one image.

Otherwise, for each pixel, focus your attention on the top component. The result pixel will be the weighted sum of the top component and what's under the top component, i.e., all but the top component. "Weighted sum" in this context means
(top RGB)*(top opacity as fraction) + (all but top RGB)*(1 - top opacity)

Right? If the top opacity is 1, then only the top RGB contributes. If the top opacity is 0, then the top RGB doesn't contribute at all. If it's 50%, then the two RGB values are averaged. Etc.

Now, what's that "all but top RGB"? Glad you asked. It's a recursive call on layers 2 to N.
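
A rough per-pixel sketch of that recursion in Python (my reading of the description; layers[0] is the top layer, each layer an [R, G, B, A] pixel with values 0-255, and the alpha line uses the alpha rule from the earlier posts, which this description leaves open):

```python
def composite(layers):
    """Recursively composite a pile of RGBA pixels, top layer first."""
    if len(layers) == 1:
        return layers[0]                          # N = 1: the result is that image
    top = layers[0]
    rest = composite(layers[1:])                  # "all but the top component"
    a = top[3] / 255                              # top opacity as a fraction
    rgb = [a * t + (1 - a) * r                    # the weighted sum described above
           for t, r in zip(top[:3], rest[:3])]
    alpha = top[3] + (1 - a) * rest[3]            # alpha rule as in the earlier formulas
    return rgb + [alpha]
```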

this is what I call averaging pixels, just literally, you know:

https://snap.berkeley.edu/snap/snap.html#present:Username=jens&ProjectName=double%20exposure

Oh, I think the task is to implement/simulate the merging of pixels that have different opacities.

Cool project though!

I would like to ask why this is, as you're really one of the only people who would know.

(a) speed, and (b) that's the way JS Canvas stores bitmaps, so we don't have to do any reshaping.

Now it's very easy for a Snap! user to convert it to a 3D list.

And back

So I don't see a reason for it to be changed.

Sorry for responding to myself, but this just occurred to me: It's not obvious to me that overlaying is associative, as you claim. Suppose we have A/B/C and they all have opacity 75%. If you take A/(B/C) (which I'm arguing is the right thing) then A contributes 75% to the final color, while C contributes 25% of 25% = 1/16 ≈ 6%. If you take (A/B)/C, then C contributes 25% to the total, while A contributes 75% of 75% = 9/16 ≈ 56%. (B's contribution is left as an exercise for the reader. So is the contribution of the background.)

I agree your approach will work, if all layers are available. The underlying condition is that the bottom image is fully opaque. This may be the (black or white) canvas itself.

I was also thinking of cases where you'd want to combine two or more images that are not fully opaque, so as to create a new pre-synthesized image for general use. In that case the combination won't be fully opaque either. And then my algorithm comes in.

I like your suggestion to make it recursive, plus I vectorized it so as to process images, not single pixels:

With 3 pixel maps of 480 × 360 on top of each other, processing takes about 0.4 seconds (on a 2017 iPad Pro).
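
This isn't the actual Snap! script, but the same vectorized computation could be sketched in NumPy roughly like this (my own sketch; images are H × W × 4 float arrays with values 0-255, top layer first):

```python
import numpy as np

def over_image(top, bottom):
    """Composite image 'top' over image 'bottom' using the normalized rule."""
    a_top = top[..., 3:] / 255.0
    a_bot = bottom[..., 3:] / 255.0
    a_out = a_top + (1 - a_top) * a_bot
    denom = np.where(a_out > 0, a_out, 1)        # avoid division by zero
    rgb = (a_top * top[..., :3] + (1 - a_top) * a_bot * bottom[..., :3]) / denom
    return np.concatenate([rgb, a_out * 255.0], axis=-1)

def composite_stack(layers):
    """Fold a list of images (top first) into a single image."""
    result = layers[-1]
    for layer in reversed(layers[:-1]):
        result = over_image(layer, result)
    return result
```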

I disagree. The opacity of A/B is 93.75%, so C will contribute nowhere near 25% to the final color.

Think of A, B and C as three partly transparent overhead sheets (for our junior audience: these were drawings on polyester… well, nvm). If you were to glue together A and B (with invisible and fully transparent miracle glue), and cover C with it, would that look any different from A over glued-together B&C?
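
For what it's worth, a quick numeric check of the formulas from post #5 (my own sketch, with colors and alphas as fractions 0-1) gives the same result for both groupings in this 75% example:

```python
def over(top, bottom):
    """Normalized 'over' for (color, alpha) pairs, values 0-1."""
    ct, at = top
    cb, ab = bottom
    a = at + (1 - at) * ab
    c = (at * ct + (1 - at) * ab * cb) / a if a > 0 else 0
    return (c, a)

A, B, C = (1.0, 0.75), (0.5, 0.75), (0.0, 0.75)
print(over(A, over(B, C)))   # A/(B/C) -> (0.857142..., 0.984375)
print(over(over(A, B), C))   # (A/B)/C -> the same
```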

For reference, the base spec: Compositing and Blending Level 1

5.1. Simple alpha compositing

The formula for simple alpha compositing is

co = Cs x αs + Cb x αb x (1 - αs)

Where

  • co: the premultiplied pixel value after compositing
  • Cs: the color value of the source graphic element being composited
  • αs: the alpha value of the source graphic element being composited
  • Cb: the color value of the backdrop
  • αb: the alpha value of the backdrop
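
In Python terms (my own transcription of the spec's formula, with all values as fractions 0-1; note that co is a premultiplied value, so dividing by the composite alpha gives the straight color back):

```python
def simple_alpha_compositing(Cs, As, Cb, Ab):
    """W3C simple alpha compositing: returns (premultiplied color co, composite alpha ao)."""
    co = Cs * As + Cb * Ab * (1 - As)   # premultiplied composite color
    ao = As + Ab * (1 - As)             # composite alpha, from the same section of the spec
    return co, ao

co, ao = simple_alpha_compositing(1.0, 0.5, 1.0, 0.5)
print(co, ao, co / ao)   # 0.75 0.75 1.0
```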

The browser SVG standard: Clipping, Masking and Compositing – SVG 1.1 (Second Edition)

14.2 Simple alpha compositing

Graphics elements are blended into the elements already rendered on the canvas using simple alpha compositing, in which the resulting color and opacity at any given pixel on the canvas is the result of the following formulas (all color values use premultiplied alpha):

  • Er, Eg, Eb - Element color value
  • Ea - Element alpha value
  • Cr, Cg, Cb - Canvas color value (before blending)
  • Ca - Canvas alpha value (before blending)
  • Cr', Cg', Cb' - Canvas color value (after blending)
  • Ca' - Canvas alpha value (after blending)

Ca' = 1 - (1 - Ea) * (1 - Ca)
Cr' = (1 - Ea) * Cr + Er
Cg' = (1 - Ea) * Cg + Eg
Cb' = (1 - Ea) * Cb + Eb

Thank you for searching (and finding) relevant reference documents!

However IMAO these "official" formulas are inconsistent with common sense, e.g.:

Suppose both source and backdrop color values are the same (let's assume: 1), and both alpha values are 0.5. From the formula it follows that the composite color value will be 1 x 0.5 + 1 x 0.5 x (1 - 0.5) = 0.75.
From the same paragraph in the reference document: αo = αs + αb x (1 - αs).
So in this case: the composite alpha value will be 0.5 + 0.5 x (1 - 0.5) = 0.75.

Now suppose this 2-composite is used as the new "source" against another backdrop with the same color value (= 1) that is fully opaque (alpha = 1).
The resulting 3-composite will have a color value of 0.75 x 0.75 + 1 x 1 x (1 - 0.75) = 0.8125.
This doesn't seem consistent with the notion that the backdrop (and therefore the combination) is fully opaque, and all layers have the same color value of 1.

In contrast, my formula (post #5) makes a color correction by including the composite alpha value as a divisor in the composite color value (0.75 / 0.75 = 1).

The second reference has comparable issues. Only if the backdrop alpha is less than one, of course, but apparently the formulas are supposed to have taken that into account already.

I'm probably overlooking something :smirk: ... I just don't see what.

There is a simple test to see how the built-in alpha compositing/blending works:

Space to sample the pixel's RGBA

That looks good. Apparently Snap!'s built-in ghost and overlay mechanisms are OK.

Still I wonder about the formulas from your post #17. Perhaps the composite color they report is to be interpreted as the effective screen color of a combination of "source" and "backdrop" against an opaque black background. Whereas my formulas are about the combination of multiple images without necessarily a black opaque background.

Interestingly (though perhaps slightly off-topic) I found that when a sprite is being dragged over an off-stage area (e.g. (-300, 0), slightly left of the stage), the screen RGBA will be influenced, except if the sprite is the one with the script that tests for the RGBA values: RGBA Blending script pic
(screen color (-300, 0) is a variable of which a reporter is displayed).

A remix of @dardoro's project with an extra reporter. Turn the full screen mode off, and enable "Flat design".

On topic again: you'll find that the (screen) colors as influenced by a sprite's costume at the white background off-stage areas are very different from those at the black background stage. Which I think demonstrates my point on the formulas (post #17 vs. #5) being about different entities.