Sigh.
Folks, you aren't missing anything important. Really. I didn't write those minor release notes for consumption by the general public; they're addressed to those collaborating in the technical salt-mines of some very specific ongoing work. The reason to classify it as a "minor" release instead of a "patch" is purely technical. While I'm proud to keep up a fast pace in developing Snap!, the relevance of individual changes and novelties to end-users is often next to nil.
Background:
We're on a somewhat long-term research journey into a media-computation driven approach to teaching AI. We started this a while (3 years) ago with our "Grand Gestures" activity, last year we explored Eliza-like chatbots, and we're currently in the process of publishing materials for teaching about Next-Token Prediction Systems like ChatGPT. For next year (2025) we've promised to deliver a unit / course / quarry about neural networks, and in between everything else on my plate that's what I'm looking into every now and then.
Better Matrix Kernel Convolution Support
My current plan is to segue into neural networks not, like everybody else does, with perceptrons, but with image filters, because both share the idea of matrix convolution, a special case of matrix multiplication, which has been one of the powerful ideas behind the whole hyperblocks / linear algebra endeavor in recent years (but that might change, which is why I hate talking about it, especially in public at this early stage). It turns out that I needed to slightly tweak how Snap!'s hyperblocks zip data of different ranks in order to handle certain edge cases of convolving a 2D matrix that's really a 3D one, because it also has color channels. That's all. See, nothing spectacular, I told you, right? You're unlikely to notice anything in your projects, I promise. That's the whole "secret" behind "better matrix kernel convolution support".
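If you prefer text to blocks, here's a rough Python sketch of that edge case, purely for illustration (it's not how Snap! implements it internally): a single 2D kernel slides over an image whose pixels each carry color channels, so the same weight has to be applied to every channel of each neighboring pixel.

```python
def convolve_rgb(image, kernel):
    """image: H x W table of [r, g, b] pixels; kernel: k x k table of weights."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    off = k // 2
    result = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = [0.0, 0.0, 0.0]                          # one sum per color channel
            for ky in range(k):
                for kx in range(k):
                    yy = min(max(y + ky - off, 0), h - 1)  # clamp at the borders
                    xx = min(max(x + kx - off, 0), w - 1)
                    weight = kernel[ky][kx]
                    pixel = image[yy][xx]
                    for c in range(3):                     # same 2D weight, all channels
                        acc[c] += weight * pixel[c]
            row.append(acc)
        result.append(row)
    return result
```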
Pixel Matrices
Since this particular class of graphic effects ("matrix filters") relies on taking into account each pixel's neighbors, arranging them into a table whose dimensions mirror the costume's dimensions often makes more sense than treating them as a single stream of data. We can easily use reshape to rearrange the pixels into such a table, and it's also fun to inspect such a table, especially when it represents a grayscale costume without color channels:
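In case it helps to see the idea outside of Snap!, here's a tiny Python stand-in for that use of reshape; the numbers are made up, and a real costume's pixel list would of course come from the Pixels library:

```python
def reshape(flat_pixels, height, width):
    """Turn a flat stream of pixels into a height x width table (row by row)."""
    assert len(flat_pixels) == height * width
    return [flat_pixels[row * width:(row + 1) * width] for row in range(height)]

# a tiny 2 x 3 grayscale "costume": one brightness value per pixel
gray = [10, 20, 30, 40, 50, 60]
print(reshape(gray, 2, 3))   # [[10, 20, 30], [40, 50, 60]]
```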
Graphic Filter Effects Tools in the Pixels Library
As an introduction to matrix kernel convolution I've added a few blocks to the Pixels library that let us experiment with some pre-fabricated filter effects, like "outline":
One idea is that learners can edit these blocks to find out how they're made, and try inventing their own custom effects, like this one:
Again, the idea is that shifting a smaller matrix over each pixel of a larger one to compute a weighted average can enhance certain features, which is also the idea (with lots of handwaving) behind computing a so-called "feature map" from an input matrix in a neural network.
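To make that weighted-average step concrete, here is one position of the sliding window worked out in plain Python; the kernel shown is a common edge-detecting one used only as an illustration, not necessarily the exact weights behind the library's "outline" effect:

```python
neighborhood = [[10, 10, 10],     # one pixel's 3 x 3 neighborhood (brightness values)
                [10, 50, 10],
                [10, 10, 10]]

kernel = [[-1, -1, -1],           # a common edge-detecting kernel, illustration only
          [-1,  8, -1],
          [-1, -1, -1]]

# line the kernel up with the neighborhood, multiply element-wise, and sum
value = sum(kernel[r][c] * neighborhood[r][c]
            for r in range(3) for c in range(3))
print(value)   # 320: the bright center pixel stands out against its darker neighbors
```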
Support for THIS "inputs" Selector Inside Custom Block Definitions
The kernel reporter in the above example just creates a little 3x3 table. I've decided to offer this as a custom block so learners can more easily copy example filters they find in online resources, of which there are many, rather than having to stitch them together using nested list reporters. It's just a quick ramp-up, nothing more. When I wrote this little helper block I didn't want to manually arrange all the inputs in the definition, so I felt it was easier and quicker to add "THIS inputs" support to custom blocks, which now lets me write it like this:
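For readers who prefer text, here is a rough Python stand-in for that helper; the *args trick plays roughly the role that "THIS inputs" plays in the block definition, handing the block all of its inputs as one list instead of naming each slot separately:

```python
def kernel(*weights):
    """Arrange nine numbers into a 3 x 3 table, like the little helper block."""
    assert len(weights) == 9, "expected the nine weights of a 3 x 3 kernel"
    w = list(weights)
    return [w[0:3], w[3:6], w[6:9]]

print(kernel(-1, -1, -1,
             -1,  8, -1,
             -1, -1, -1))
# [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
```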
Again, this is just a very minor detail, something that has been on my list for some time and that I've finally found a reason to get around to implementing. In general, such meta-programming wouldn't be considered great style, but here it feels less bad, I think. It's not big news and not an earth-shaking new feature, just minor maintenance, nothing to drool over.
Hyperized ITEM OF
When I made the "convolve matrix" reporter I was looking for a "hyper" way to do it (i.e. without map), and it turned out that letting the item of reporter handle a list of multi-dimensional requests was really helpful, so I've added that. Basically it lets us stash several requests and run them all at once, like this:
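Since the block itself is a picture, here's only a rough Python analogue of the idea (not Snap!'s implementation): each request is a path of indices into a nested list, and a whole list of such requests gets answered in one call:

```python
def item_of(requests, data):
    """requests: a list of index paths (1-based, like Snap!); data: a nested list."""
    def pick(path, value):
        for index in path:
            value = value[index - 1]      # Snap! lists count from 1
        return value
    return [pick(path, data) for path in requests]

matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

# fetch three elements of the matrix with a single call
print(item_of([[1, 1], [2, 3], [3, 2]], matrix))   # [1, 6, 8]
```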
I know this is kinda getting over the top, so don't feel bad if you don't immediately see why we might want this. At this point we're already deep inside the technical bowels of the optimized "convolve matrix" block, and you don't really need to worry about how that is done. Bottom line: it's ugly and also not written for end-user consumption. In the context of introducing image filters I just want learners to use the filters and play with inventing their own, not to think about how to write matrix convolution themselves. (In case you wonder: if I were to teach you how to write them yourself I'd use 2 map reporters and a custom convolution ("*") operator.) Anyway, here are the gory innards of that block:
See, you don't wanna know, told ya
That's all, I hope it satisfies your curiosity. Cheers!