Memory footprint

For a few days I had to use a not-so-recent PC (but a decent one, with a 2GHz i7 and 8GB of RAM), and I realized that I cannot switch from BloP (my extension) to Snap after opening a BloP language, that is, a programming language built by creating several Snap custom blocks. When I try to do it, Chrome crashes. I noticed that, whereas opening a 120MB Snap project with no custom blocks brings the "memory footprint" in the Chrome task manager to about 300MB, opening a 200KB BloP language with about 80 custom blocks makes the footprint go up to 2500MB. So, when I try to switch from BloP to Snap (which would reopen the project in the BloP IDE), the Chrome tab crashes.

Is there something inherently "heavy" in the creation of custom blocks in Snap, or is this just a problem with my extension?

This is really a question for @Jens, but I'm going to guess that when you built BloP on a large computer, the browser gave you a large memory limit and you didn't use it up, so JS never ran its garbage collector. A quick web search suggests that you can't force a garbage collection in JS (a design flaw imho), but maybe you can tell your browser to set a lower limit on the memory a page can allocate -- that's beyond my level of expertise. So, try exporting the 80 custom blocks, then start a fresh project on your small computer and import the blocks. See what size the new project is when you save it.

Thanks Brian. As you suggested, I made some more experiments, this time with Snap, and I realized that:

  • creating a single custom block with 1 parameter whose definition contains 256 SAY blocks (embedding the parameter in each of the 256 SAY blocks), the memory footprint goes from 236MB (after loading Snap) to 412MB, an increase of about 180MB
  • duplicating the 256-SAY-blocks definition in the script area, the memory footprint goes from 412MB to 523MB, an increase of about 110MB. So, there is not much difference
  • loading one of the largest BloP custom blocks (whose definition is about 350 blocks) increases the footprint by about 490MB. So, the order of magnitude is about the same

(Each time I waited a few seconds, in order to give the garbage collector a chance to reduce the footprint; this reduced it by 5 to 40MB.)
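For anyone who wants to reproduce these measurements from the DevTools console instead of the task manager, a minimal sketch using Chrome's non-standard performance.memory API might look like this (the values are coarse, but enough to see the trend):

```js
// Minimal heap-delta sketch, using Chrome's non-standard
// performance.memory API (not available in all browsers).
function heapMB() {
    return Math.round(performance.memory.usedJSHeapSize / (1024 * 1024));
}

var before = heapMB();
// ... create the custom block / duplicate the definition here ...
setTimeout(function () {
    // wait a few seconds so the garbage collector gets a chance to run
    console.log('heap grew by about ' + (heapMB() - before) + ' MB');
}, 5000);
```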

I guess the only way to reduce the footprint for now, and so avoid Chrome crashing, is to optimize the definitions of my custom blocks by reducing the number of blocks inside each definition (something I can do, as I reused the same code many times).

I would be very interested in knowing which elements in Snap 5.3.7 require the most memory. I think I will wait for a few hints from @jens.

I'm creating reduced block versions of both standard and non-standard programming languages by programming them with Snap. To make the environment "safe" (that is, to prevent students from impairing the IDE), I created a Snap extension called BloP that makes it possible to reduce the elements in the Snap IDE and to lock them. So, for example, for my reduced version of C/C++ I created about 80 blocks. Some of them are very simple (but even loading those increases the memory footprint by about 4-5MB) and some are instead very complex (for example, the block that takes care of calling a function, which increases the footprint by about 490MB).

I will share the new BloP as soon as the main parts are fixed; I'm currently still moving BloP's code to Snap 5.3.7.

Unfortunately, when I load another project that requires a lot of memory, both Snap and BloP crash. I verified this by loading twice either a 120MB project in Snap (with no custom blocks) or a 300KB project in BloP (with 80 custom blocks). The first load works as expected; the second one causes the crash.

I tried setting the stage to null just before loading the new project, but the JavaScript garbage collector refuses to make the memory available. I guess this is a problem that cannot be solved, except by reducing the size of the project.
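(What I tried looks roughly like this; a sketch, assuming `ide` is the current IDE_Morph. Presumably the collector can't reclaim anything as long as other references to the old project survive:)

```js
// Sketch of the attempt, assuming 'ide' is the current IDE_Morph.
// Nulling one reference is not enough: the garbage collector only
// frees a Morph once *every* path to it is gone.
ide.stage = null; // drops this one reference...
// ...but e.g. ide.sprites, ide.currentSprite, the corral and any
// cached block canvases may still keep the old project reachable.
```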

Okay, this is now sounding more like a bug report, so could you please tell us what OS, what browser, what browser version you're using?

Also, please try limiting the browser's memory allocation (or just do the experiment on your old small computer) and see whether it dies or just slows down.

The old computer is a Windows 7 PC with 8GB of RAM, running Chrome 78.0.3904.108 (official build, 64-bit). When I load the 125MB project the first time, the memory footprint of the Chrome tab where it is running goes up to 2.5GB. The second time, it crashes.

On the newer computer (Windows 10, 16GB, Chrome 78.0.3904.108, official build, 64-bit) the problem is not present at all. Indeed, when I load the 125MB project the first time, the memory footprint goes up to 900MB; the second time it increases to 1.4GB, and then it stays around 1.5GB for every further load.

Searching on the net, I didn't find any way to manage JS memory limits. What may be interesting is that I have

window.performance.memory.jsHeapSizeLimit

set to about 2.4GB on the old PC, and to 4.3GB on the newer one.
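(Read from the DevTools console; performance.memory is Chrome-only and non-standard:)

```js
// Chrome-only, non-standard API; all values are in bytes.
var m = window.performance.memory;
console.log('limit:', Math.round(m.jsHeapSizeLimit / 1e6), 'MB');
console.log('total:', Math.round(m.totalJSHeapSize / 1e6), 'MB');
console.log('used :', Math.round(m.usedJSHeapSize / 1e6), 'MB');
```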

Try setting it to however big a fresh Snap! is plus 200MB, then load the project. What I'm hoping is that the load will be very slow, but will work, after a bazillion garbage collections.

I tried changing this limit in both the Chrome debugger and Snap's scripts, but it is always reset to 2.4GB.
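(In other words, something like this from the console has no effect; the property behaves as read-only and the assignment is silently ignored:)

```js
// The property behaves as read-only; the assignment is ignored.
performance.memory.jsHeapSizeLimit = 500 * 1024 * 1024;
console.log(performance.memory.jsHeapSizeLimit); // still ~2.4GB
```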

OK I'm out of suggestions, we need @jens.

I dunno. Memory is handled by JavaScript, not by me. I do know that I'm creating a bazillion canvas elements, one for each Morph, and am caching these. This does mean that a lot of blocks take up a lot of resources, which is one of the reasons why projects with a lot of blocks, e.g. in libraries, can be more memory intensive than those that import a lot of pictures or sounds.

I've talked about this often and in detail, most recently at the Snap Conference this fall in Heidelberg: Snap! tries to accomplish a delicate balance of supporting old and cheap hardware (such as laptops running Windows XP and Chromebooks, because typically that's what many kids will be given), and creating an experience of general purpose-ness and extendability.

Pre-rendering all Morphs when a project loads takes up a lot of time and resources, but it gives everybody roughly the same user-experience afterwards, regardless of the power of their computer. Projects - mostly - run at the same frame rate and the UI looks and feels - mostly - the same, for example when switching between sprites or when dragging blocks and scripts around.

One downside of caching every Morph is that it takes up a lot of resources, including memory. It's the very essence of pre-computing stuff and caching to assume that accessing memory is faster than re-computing and re-rendering something. On faster computers we could save a lot of memory and loading time by rendering things "just in time". On older and less powerful computers, however, projects would have a lot more "lag" and random pauses in them.
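In pseudo-JavaScript, the two strategies look roughly like this (a toy sketch, not Snap's actual code; render() stands in for the expensive drawing work):

```js
// Toy sketch of the trade-off, not Snap's actual code.
// Eager: pay memory and loading time up front, then just blit.
function EagerMorph() {
    this.image = render(this); // cached canvas, one per Morph
}
EagerMorph.prototype.draw = function (ctx) {
    ctx.drawImage(this.image, 0, 0); // fast on any hardware
};

// Lazy: hidden Morphs cost nothing until they are first shown...
function LazyMorph() {
    this.image = null;
}
LazyMorph.prototype.draw = function (ctx) {
    if (!this.image) {
        this.image = render(this); // ...but this can cause mid-run lag
    }
    ctx.drawImage(this.image, 0, 0);
};
```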

So, yes, I'm assuming that Snap's memory footprint must be enormous by design, especially when a lot of blocks are involved. And hiding blocks and scripts inside custom block definitions doesn't hide those Morphs from memory.

So, if I understand it correctly, a solution for my extension would be to defer the rendering of custom block definitions so that they are rendered only when I switch back to Snap, right?

that would be a possible solution, yes!

Excellent Jens! Thanks a lot for taking the time to explain to me how block rendering works in Snap.

As long as we don't yet have full introspection in Snap, not rendering custom block definitions until we want to edit or see them is something I'm seriously considering, to speed up project loading times. But, see, even as I'm writing this down you can notice the trade-offs between general-purpose-ness and optimization :slight_smile:

Snap's loading time, even if a bit long, is not unreasonable. I verified that for the average Snap project (much smaller than my 125MB projects) the memory footprint is not so high, and a smooth experience is a value that I won't disregard.

I completed my attempt to reduce the number of blocks in the definitions of my custom blocks by creating further custom blocks for all similar sequences in the code. My feeling is that the total number of blocks has been reduced a lot, but this didn't reduce the memory footprint much.

So I would like to try deferring the rendering of custom block definitions. @jens, could you please give me a few hints on where in the code I should look in order to avoid creating canvases for the internal definitions of custom blocks? Thanks in advance.

Rendering is governed in Morphic. Every Morph creates its canvas when it is initialized and calls drawNew(). You'll probably have to override this part for blocks...
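A deferred variant might look roughly like this (an untested sketch; drawNew() and this.image are real Morphic conventions, while the deferRendering flag is invented for the example):

```js
// Untested sketch: skip the expensive bitmap for deferred blocks.
// drawNew() and this.image are Morphic conventions; the
// 'deferRendering' flag is made up for this example.
var originalDrawNew = BlockMorph.prototype.drawNew;

BlockMorph.prototype.drawNew = function () {
    if (this.deferRendering) {
        // keep a cheap 1x1 placeholder instead of the full canvas
        this.image = newCanvas(new Point(1, 1));
        return;
    }
    originalDrawNew.call(this);
};
```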

:call_me_hand:t2:

I had never looked at how blocks are actually rendered, so this part of Snap is pretty new to me.

In order to understand which attributes are relevant, I looked at how the script area is created when a custom block is edited.

Is the fullCopy function the one relevant to creating copies of the cached canvases? I found the body.expression path in the definition, which looks meaningful. Inside this path I see cachedFullImage, which seems like it could be the relevant one, but it is null.
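(This is how I'm poking at it from the console so far; a sketch, assuming the definition is a global one reachable from ide.stage.globalBlocks:)

```js
// Console sketch, assuming 'ide' is the IDE_Morph and the custom
// block definition is global.
var def = ide.stage.globalBlocks[0]; // a CustomBlockDefinition
var top = def.body.expression;       // top block of the definition
console.log(top.cachedFullImage);    // null until something caches it
console.log(top.fullImage());        // forces a full render of the script
```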