New Snap! release 9.2.0


  • New Features:
    • (better) matrix-kernel convolution support, automatic zero-padding
    • new graphic filter effects tools in the pixels library
    • support for THIS "inputs" selector inside custom block definitions
  • Notable Changes:
    • hyperized ITEM OF
    • tweaked hyperDyadic() to zip matching atoms based on comparing their dimensions backwards (as in NumPy)
    • the SWITCH TO COSTUME command now accepts pixel matrices wider than 4 columns, with or without color channels
    • the NEW COSTUME reporter now accepts pixel matrices wider than 4 columns, with or without color channels, if its dimension inputs are left blank or zero
    • playing back a list of numbers as sound now uses the host device's sample rate once the microphone has been initialized, otherwise 44.1 kHz (as before)
    • MQTT library update, thanks, Simon!
  • Notable Fixes:
    • fixed a RESHAPE edge case when passing in a single zero dimension
    • made sure ITEM OF returns data matching the shape specified by the query struct (automatic zero-padding)
    • fixed speech balloons inside ASK menus
    • added safeguard against accidentally querying too many list dimensions (e.g. when forgetting to transpose a convolution)

What is this a release of?

Snap! itself

Oh okay

Thanks for informing us!

Just to note (in case someone else tries making a small costume) - the costume matrix must be at least 5 pixels wide for the new feature to work

2DMatrixofCostume script pic (2)

Please allow me to play the Devil’s Advocate :wink:

I had to look that up in Wikipedia. Why is there not a shred of documentation (ref.manual, on-line help) for this function?

I assume it supports …

… which is quite useful in itself (just undocumented).

Good, so we don’t have to use metaprogramming any more for that in some higher-order functions.
Even so: all of THIS is still undocumented (no inline help, and none of the 468 occurrences of “this” in the ref. manual refer to this function).

I remember ITEM OF was already hyperized before. How is it different now?

BTW (very much off-topic)

I never look up earlier versions of Snap! (too lazy), so I have to dig into my own memory. Even now I’m wondering if I’m right in writing the above sentence … Imagine living in a country where the government only tolerates one truth, and deliberately, and forcefully, bends history to its will ... after a few decades, most people really don't know anything but the official story anymore.

Is HYPERDYADIC a hidden function?

As a kind of mirror image, I’m looking forward to a version of () OF COSTUME () that will report a matrix of rows-columns-rgb(a).

Yeah, it was hyperized, and has been for a while. I'm also wondering what the changes were, and why it matters to put it in the release notes again.

the next update of Snap! should have a CONTAINS BROADCAST block.

What would that block do?
Perhaps you mean a block that will broadcast a message which contains data?

Perhaps they mean it is now hyperized in more than 1 dimension, like:

Yeah, except I get that exact result in 9.1.0 (ok, technically my snap mod, but I haven't updated it yet), and I even tested it in snap 8.


Folks, you don't miss anything important. Really. I didn't write those minor release notes for consumption by the general public; they're addressed to those collaborating in the technical salt-mines of some very specific ongoing work. The reason to classify it as a "minor" release instead of a "patch" is purely technical. While I'm proud to keep up a fast pace in developing Snap!, the relevance of individual changes and novelties is often next to nil for end-users.


We're on a somewhat long-term research journey into a media-computation driven approach to teaching AI. We started this a while ago (3 years) with our "Grand Gestures" activity; last year we explored Eliza-like chatbots, and we are currently in the process of publishing materials for teaching about Next-Token Prediction Systems like ChatGPT. For next year (2025) we've promised to deliver a unit / course / quarry about neural networks, and in between everything else on my plate that's what I'm looking into every now and then.

Better Matrix Kernel Convolution Support

My current plan is to segue into neural networks not, like everybody else, with perceptrons, but with image filters, because both share the idea of matrix convolution, a special case of matrix multiplication, which has been one of the powerful ideas behind the whole hyperblocks / linear algebra endeavor in recent years. (But that might change, which is why I hate talking about it, especially in public at this early stage.) It turns out that I needed to slightly tweak how Snap!'s hyperblocks zip data of different ranks in order to handle certain edge cases of convolving a 2D matrix that's really a 3D one because it also has color channels. That's all. See, nothing spectacular, I told you, right? You're unlikely to notice anything in your projects, I promise. That's the whole "secret" behind "better matrix kernel convolution support" :slight_smile: .
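For readers who want a concrete picture of "zipping data of different ranks based on comparing their dimensions backwards": this is what NumPy calls broadcasting, and a small sketch shows the edge case described above, a "2D" image that is really 3D because of its color channels being paired with lower-rank data. (This is an illustration of the NumPy rule the release notes reference, not Snap!'s actual implementation.)

```python
import numpy as np

# A "2D" costume that is really 3D: height x width x color channels.
image = np.ones((4, 4, 3))          # rank 3
gains = np.array([0.5, 0.8, 1.0])   # rank 1: one factor per channel

# NumPy (and, per the release notes, Snap!'s hyperDyadic) pairs
# dimensions from the right: (4, 4, 3) zipped with (3,) matches the
# trailing 3 (the color channels), so each channel gets its own gain.
scaled = image * gains
print(scaled.shape)   # (4, 4, 3)
print(scaled[0, 0])   # [0.5 0.8 1. ]
```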

Pixel Matrices

Since this particular class of graphic effects ("matrix filters") relies on taking into account each pixel's neighbors, arranging them into a table whose dimensions mirror the costume's dimensions often makes more sense than treating them as a single stream of data. We can easily use reshape to rearrange the pixels into such a table, and it's also fun to inspect such a table, especially when it represents a grayscale costume without color channels:
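The reshape idea translates directly outside Snap! as well; here is a minimal NumPy sketch of turning a flat stream of grayscale pixel values into a table whose dimensions mirror the costume's, so that each pixel's neighbors sit next to it:

```python
import numpy as np

# A grayscale costume arrives as a flat stream of brightness values.
pixels = np.arange(12)           # 12 pixels: 0, 1, ..., 11

# Reshaping into (rows, columns) mirrors the costume's geometry.
table = pixels.reshape(3, 4)     # 3 rows x 4 columns
print(table)
# [[ 0  1  2  3]
#  [ 4  5  6  7]
#  [ 8  9 10 11]]
```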

Graphic Filter Effects Tools in the Pixels Library

As an introduction to matrix kernel convolution I've added a few blocks to the Pixels library that will let us experiment with some pre-fabricated filter effects, like "outline":

One idea is that learners can edit these blocks to find out how they're made, and try inventing their own custom effects, like this one:

Again, the idea is that shifting a smaller matrix over each pixel of a larger one to compute a weighted average can enhance certain features, which is also, with lots of handwaving, the idea behind computing a so-called "feature map" from an input matrix in a neural network.
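The "shifting a smaller matrix over a larger one" move can be sketched in a few lines of NumPy. The kernel below is a common edge-detection ("outline") kernel from the image-processing literature; Snap!'s own Pixels-library blocks may use different kernels and padding, so treat this as an illustration of the technique, not of the library's code:

```python
import numpy as np

# Slide a 3x3 kernel over every interior pixel and take the weighted
# sum of its 3x3 neighborhood -- the core move behind both image
# filters and convolutional feature maps.
def convolve(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1     # no padding: output shrinks
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

# A classic "outline" kernel: weights sum to zero, so flat regions
# vanish and only edges survive.
outline = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]])

flat = np.full((5, 5), 7.0)        # a featureless region...
print(convolve(flat, outline))     # ...yields all zeros: no edges
```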

Support for THIS "inputs" Selector Inside Custom Block Definitions

The kernel reporter in the above example just creates a little 3x3 table. I've decided to offer this as a custom block so learners can more easily copy example filters they find in online resources, of which there are many, rather than having to stitch them together using nested list reporters. It's just a quick ramp-up, nothing more. When I wrote this little helper block I didn't want to manually arrange all the inputs in the definition, so I felt it was easier and quicker to add "THIS inputs" support to custom blocks, which now lets me write it like this:
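A rough text-language analogue of what THIS "inputs" buys here: instead of wiring nine named inputs into nested lists by hand, the block can grab whatever inputs it received as one list and reshape it. The `kernel` function below is a hypothetical stand-in for the Snap! reporter, not its actual definition:

```python
import numpy as np

# Hypothetical analogue of the kernel reporter: collect all the
# block's inputs generically (as THIS "inputs" does in Snap!) and
# arrange them into a 3x3 table.
def kernel(*inputs):
    return np.array(inputs).reshape(3, 3)

print(kernel(-1, -1, -1,
             -1,  8, -1,
             -1, -1, -1))
```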

Again, this is just a very minor detail, something that has been on my list for some time and that I've finally found a reason to implement. In general, such metaprogramming wouldn't be considered great style, but here it feels less bad, I think. It's not big news and no earth-shaking new feature, just minor maintenance, nothing to drool over :slight_smile: .

Hyperized ITEM OF

When I made the "convolve matrix" reporter I was looking for a "hyper" way to do it (i.e. without map), and it turns out that letting the item of reporter handle a list of multi-dimensional requests was really helpful, so I've added that. Basically it lets us stash several requests and run them at once, like this:

I know this is kinda getting over the top, and don't feel bad if you don't immediately realize why we might want this. Now we're already kind of too deep inside the technical bowels of the optimized "convolve matrix" block, and you don't really need to worry about how that is done. Bottom line: It's ugly and also not written for end-user consumption. In the context of introducing image filters I just want learners to use the filters and play with inventing their own ones, not to think about how to write matrix convolution themselves. (In case you wonder, if I were to teach you how to write them yourself I'd use 2 map reporters and a custom convolution ("*") operator). Anyway, here are the gory innards of that block:
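For those curious what "stashing several requests and running them at once" looks like in more familiar terms: NumPy's integer-array indexing does something similar, answering a whole batch of (row, column) queries in one call. This is only a loose analogue of the hyperized ITEM OF, not Snap!'s internals:

```python
import numpy as np

table = np.arange(16).reshape(4, 4)

# Batch several multi-dimensional "item of" requests and answer them
# all at once: each (row, column) pair below is one query.
rows = np.array([0, 1, 3])
cols = np.array([2, 0, 3])
print(table[rows, cols])   # [ 2  4 15]
```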

See, you don't wanna know, told ya :slight_smile:

That's all, I hope it quenches your curiosity. Cheers!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.