When did this become a feature?

I was trying to make a thing to represent a ring with JSON, and I noticed this:

[script pic: the new 𝚺 block]

When was this added? I don't remember seeing this two days ago.

And how is it different from this?
[script pic: the SUM block]

Jens is experimenting with matrix convolutions for media computation. (It all goes over my head.)

The difference between 𝚺 and SUM is that the former works on lists of lists of lists... of numbers, like [script pic]. It's not clear that it needs to be a primitive. As always, don't count on anything in dev actually becoming official.

Yikes! So the first one works even with arbitrarily deep nesting of lists, but the second one doesn't? That's crazy!

That was on the main site, version 9.something.

Oh! I guess he decided, then...

Yes, but it's still experimental, sorta, sorry! I need it to be in the current main for a bunch of experiments that cannot wait until v10, but that feature might still go away again. The idea behind 𝚺 is to support convolutions of any data structure regardless of rank and dimensions. Convolutions involve item-wise multiplication, which Snap! already supports for data of arbitrary ranks and dimensions, but the actual "folding" part, the summation, currently isn't supported, and that's what this tries to address. Yeah, I know, it's easy to write 𝚺 yourself in Snap!, so maybe it doesn't need to be a primitive.
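In text form, the distinction might look roughly like this (a hedged Python sketch; `deep_sum` and `shallow_sum` are illustrative names, not the actual Snap! primitives):

```python
# Roughly what 𝚺 does: fold every number, however deeply nested.
def deep_sum(data):
    if isinstance(data, list):
        return sum(deep_sum(item) for item in data)
    return data  # a leaf (a number)

# Roughly what SUM does on a matrix: item-wise addition of the
# top-level items, leaving the inner structure alone.
def shallow_sum(rows):
    result = rows[0]
    for row in rows[1:]:
        result = [a + b for a, b in zip(result, row)]
    return result

data = [[1, 2, 3], [4, 5, 6]]
print(deep_sum(data))     # 21 -- one scalar, whatever the nesting
print(shallow_sum(data))  # [5, 7, 9] -- one sum per column
```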

This was added in version 9.2.14 of Snap!

I'm pretty thick-skinned, especially when it comes to comments from children, but actually the behavior of SUM on arrays is rather carefully thought out, not just by us but by a longish series of computer scientists starting with Kenneth Iverson (mid-'60s) and a much longer series of mathematicians (1800s).

The canonical example, for us, is the bitmap of a costume. (The entire costume also includes its X and Y dimensions.) A bitmap is a 3-D array of numbers, but it's much more fruitfully considered as a 2-D array of pixels, where each pixel is an RGBA color vector. Let's say you want to turn the costume into grayscale, maintaining the lightness of each pixel. So, for each pixel you have to compute (R+G+B)/3 and use that as the new R, G, B. Now, suppose you've already computed

[script pic: selecting just the R, G, B values of each pixel]

to get rid of the Alpha value.

The next step isn't to add up all 3XY remaining numbers in one grand total! No, you want to add the three numbers R+G+B of each pixel, ending up with an X by Y array of sums.

Disappointingly, you can't just say

[script pic: SUM applied directly to the bitmap]

because SUM sums the first dimension of the array, and you want to sum the last dimension. If this were real APL, you'd be able to say +/bitmap[;;1,2,3] for what SUM does, or +/[3]bitmap[;;1,2,3] for what we need here (specifying to sum the third dimension). But instead, after loading the APL library, I ended up saying

[script pic: the workaround using TRANSPOSE]
I expect Jens will point out a much easier way I didn't think of.

(I'm leaving out the steps of dividing by 3 and reconstructing the bitmap including alpha, to keep focus on the idea of non-symmetric dimensions.)

TL;DR: Given a multi-dimensional array, it's not always the right thing to think of it as an array of atoms (such as numbers) and squish it down to a single value.
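To play with that distinction outside Snap!, here's a minimal sketch using NumPy as a stand-in (an illustration of the axis idea only, not of Snap!'s implementation):

```python
import numpy as np

bitmap = np.arange(24).reshape(2, 3, 4)   # 2 rows x 3 columns x RGBA
rgb = bitmap[:, :, :3]                    # keep R, G, B; drop alpha

print(rgb.sum(axis=0))   # what SUM does: collapses the first dimension
print(rgb.sum(axis=2))   # what we want: per-pixel R+G+B, a 2x3 array
```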

[script pic: sum_of_clr_channels]

(This also removes the alpha channel, but you'd still have to divide the result by 3 to get the actual grayscale values.)

Even more elegantly - and better - you can use the NTSC formula to directly compute grayscales:

This formula applies adjusted weights to the color channels and at the same time gets rid of alpha, because in Snap!, instead of raising an error when data dimensions don't match, we ignore the overshooting ones, which is really convenient.
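In text form, the weighting might look like this (a sketch assuming the usual NTSC luma coefficients 0.299/0.587/0.114; the `zip` trick here stands in for Snap!'s ignoring of the overshooting alpha value):

```python
# NTSC luma weights for R, G, B; the pixel's 4th (alpha) value is
# silently dropped because zip stops at the shorter list.
weights = [0.299, 0.587, 0.114]

def luma(pixel):
    return sum(w * c for w, c in zip(weights, pixel))

print(luma((200, 100, 50, 255)))  # 124.2 -- one gray value per pixel
```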

Note that you can now also reshape the result to have the same height and width as the original costume (not the bitmap),

and use it directly as a costume:

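A tiny sketch of that reshape step (illustrative Python; `reshape` here is a hypothetical helper, not the Snap! block):

```python
# Turn the flat list of per-pixel gray values back into
# height rows of width values each.
def reshape(flat, width):
    return [flat[i:i + width] for i in range(0, len(flat), width)]

print(reshape([1, 2, 3, 4, 5, 6], 3))  # [[1, 2, 3], [4, 5, 6]]
```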
and then you can apply matrix-kernel convolutions

for context-sensitive graphic effects, such as outline:

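Here's a hedged text-form sketch of such a convolution (illustrative Python; the Laplacian-style kernel is a common choice for outlines, and edge handling is skipped for brevity):

```python
# Outline-style kernel: responds where brightness changes.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def convolve(image, kernel):
    h, w, k = len(image), len(image[0]), len(kernel)
    out = []
    for y in range(h - k + 1):
        row = []
        for x in range(w - k + 1):
            # item-wise multiply the window by the kernel,
            # then fold with a sum -- the 𝚺 part
            acc = 0
            for j in range(k):
                for i in range(k):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
print(convolve(image, kernel))  # nonzero wherever the window spans an edge
```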
Now, the kicker - for me - is that the same idea also works not just for matrix convolutions, but also for vector convolutions. We can see these effects (above) when convolving pixel matrices, and we can hear them when convolving sound sample vectors:

and we can use the same block. I might have already mentioned this elsewhere, but this is my current plan to introduce students to the concept of artificial neural networks, because they work the exact same way. So, here's the "bigger picture" of this stupid little Sigma thing. In the final version of the convolution block I'm considering turning the sum-of-multiplication part into a general dyadic dot-multiply reporter. Then we can just use Rosenblatt's perceptron rule directly.
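To make that "same block" idea concrete in text form, here's a hedged Python sketch (names, the smoothing kernel, and the step activation are illustrative assumptions, not the actual Snap! blocks): a vector convolution is a sliding sum-of-products, and Rosenblatt's perceptron is the very same sum-of-products followed by a threshold.

```python
# Sliding sum-of-products over a vector of sound samples
# (a crude smoothing kernel; purely illustrative).
samples = [0, 3, 6, 3, 0, -3, -6, -3]
kernel = [1/3, 1/3, 1/3]

smoothed = [
    sum(s * w for s, w in zip(samples[i:i + len(kernel)], kernel))
    for i in range(len(samples) - len(kernel) + 1)
]
print(smoothed)  # each output sample averages a sliding window

# The same sum-of-products ("dot-multiply") plus a threshold is a perceptron.
def dot(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def perceptron(inputs, weights, bias):
    return 1 if dot(inputs, weights) + bias > 0 else 0

# Rosenblatt's rule: nudge each weight by the error times its input.
def learn(inputs, weights, bias, target, rate=0.1):
    error = target - perceptron(inputs, weights, bias)
    weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights, bias + rate * error
```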

I'm so confused. I tried this:


where the first cell has (1 2 3 4).
Going through your algorithm step by step, COLUMNS gives

with the same four-item first cell. In other words, it exchanges the top two dimensions, leaving the third dimension (the pixel vector) alone. This is why I used TRANSPOSE instead of COLUMNS: to get the pixel vector to the top dimension:

where the first cell is now (1 13): the Red (so to speak) of the corner of the first row and the corner of the second row.

But, back to your algorithm, the next step gives


not changing anything, in my small example, because there were already only three rows. It selects rows, not RGB from RGBA.

Finally, SUM sums the top dimension (the rows), so in my example the first number is 1+5+9, not 1+2+3 as desired:

So, how come it works for you?

dunno what your transpose block does, but I suspect it does an actual matrix transposition of the 3D(!) structure, not just the columns... (?)

The bitmap we're getting when querying the pixels is not a 3D one, just a 2D one, similar to this:

now we can take the first 3 columns to get rid of the 4th one:

and then take their sum:

and we get a vector, a list of scalars. It's this list that we can then reshape to the height and width of the original costume. Only at this step, not before.
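Putting those steps together in text form (a sketch assuming a flat list of RGBA 4-vectors, as described above; the names are illustrative):

```python
# A 2x2 costume's pixels, flattened: one RGBA vector per pixel.
pixels = [(10, 20, 30, 255), (40, 50, 60, 255),
          (70, 80, 90, 255), (5, 5, 5, 255)]

columns = list(zip(*pixels))   # 4 columns: all R's, all G's, all B's, all A's
rgb_columns = columns[:3]      # take the first 3 columns; alpha is gone
sums = [r + g + b for r, g, b in zip(*rgb_columns)]  # one scalar per pixel
print(sums)                    # [60, 150, 240, 15] -- a flat vector

width = 2                      # only now reshape to the costume's dimensions
grid = [sums[i:i + width] for i in range(0, len(sums), width)]
print(grid)                    # [[60, 150], [240, 15]]
```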

Oh. Duh. I keep forgetting that!

...What makes you say that?

Uhh, wow, that's a long history, but when I said "That's crazy!", you didn't read the whole post. I had a question there as well,

This one.

The context was "I'm pretty thick-skinned..." I can get upset when insulted by adults.

No, I got that. To me the most natural thing when confronted with a deep list of lists is to think of it as a tree, and to focus on the leaves, so my instinct is to add all the numbers wherever they're found. But I brought up the example of averaging the R, G, and B of a pixel to convert an image to grayscale because you don't want to add all the numbers; you just want to add the three numbers of each individual pixel. So it's perfectly appropriate to have two tools, one for each purpose. I think maybe sometimes we could do a better job of naming things to make it clearer which way a particular tool operates.

[script pic: clipping mask]

converts an image to greyscale pretty fast

Yeah, that's what Jens did, except that your version includes the alpha values, so you should divide by four, and even then it will be on the bright side.

I just removed the alpha. Wasn't that hard. I have a project with a couple of image processing filters.