Parallelization library (what am I missing?)

I stumbled upon the (somewhat basic) Parallelization library, and wondered if its two blocks' definitions could be simplified. E.g.:

Parallellization library update script pic (1)

... might be rewritten as:

Parallellization library update script pic (2)

... without changing its functional behaviour:

Parallellization library update script pic (3)

Parallellization library update script pic (4)

I decided to go one step further. (The launch (run (action)) construct may look unnecessarily complicated; I need it, though, for optimization of the other block within the library. Below is just a simplified example illustrating my point.)

Parallellization library update script pic (5)

but now the functional behaviour becomes erratic:

Parallellization library update script pic (6)

Does anyone understand what happens?


The action must be evaluated right at the launch phase, not captured "live" in the closure/context.
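The distinction can be sketched in Python (purely an analogy for the Snap! behaviour; the names `deferred_live` and `deferred_bound` are mine): a "live" closure reads the variable only when it finally runs, while a default-argument binding evaluates it at "launch" time.

```python
# Illustrative sketch (not Snap! code) of "live" capture vs. evaluation
# at launch time.

deferred_live = []
deferred_bound = []

for i in range(3):
    # "Live" capture: the lambda reads i when it is eventually called.
    deferred_live.append(lambda: i)
    # Evaluated at launch: the default argument freezes i's current value.
    deferred_bound.append(lambda i=i: i)

print([f() for f in deferred_live])   # all see the final value: [2, 2, 2]
print([f() for f in deferred_bound])  # each sees its launch-time value: [0, 1, 2]
```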

and for DO IN PARALLEL AND WAIT, you can

EDIT: why isn't this working?

It seems to be slightly overengineered :wink:
This works for me...
untitled script pic - 2024-10-08T200955.855

Yet another intricacy of Snap!'s environment management :smirk:
Thanks for pointing it out!

So my final version of do in parallel and wait has become:

I don’t think it can be made any shorter or simpler (Occam’s razor of computing).

Moreover, I do think Snap! could do with at least one more inter-process synchronization mechanism (semaphores), even though Snap!'s threads are claimed to be “not really asynchronous” - in extreme cases they are, IMAO.
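For readers unfamiliar with the mechanism being suggested, here is a counting semaphore in Python (an illustration only; `worker`, `peak`, etc. are my own names, and Python's `threading` is genuinely preemptive, unlike Snap!'s scheduler): it lets at most two of ten concurrent workers into a guarded section at a time.

```python
import threading
import time

# Illustrative sketch (not Snap! code): a counting semaphore capping
# how many of ten concurrent workers may be "inside" at once.
sem = threading.Semaphore(2)
lock = threading.Lock()
active = 0   # workers currently inside the guarded section
peak = 0     # highest number of simultaneous workers observed

def worker():
    global active, peak
    with sem:                 # blocks while two workers are already inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)      # simulate some work
        with lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert 1 <= peak <= 2         # the semaphore capped concurrency at 2
```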

Why run with inputs?
Also, I think there should be a warp block so Snap! does not waste time by yielding after each action is launched.

OK, but what is the problem?

Like @dardoro has pointed out, if you use with inputs, the action (or item) is connected with its “environment” (variables) at the time the relevant script is launched. I’m not sure as to exactly what happens otherwise, but it’s not what you and I want.
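A Python analogy of the "with inputs" binding (names are mine, and Python threads are only a stand-in for Snap! processes): passing `item` as an explicit thread argument evaluates it at launch time, so each worker receives the value that was current when its thread was created, instead of reading a shared variable later.

```python
import threading

# Illustrative sketch (not Snap! code): binding the input at launch time
# by passing it as an argument, rather than capturing a shared variable.
results = []
lock = threading.Lock()

def worker(value):
    # value was evaluated when the thread was created ("with inputs")
    with lock:
        results.append(value)

threads = []
for item in ["a", "b", "c"]:
    t = threading.Thread(target=worker, args=(item,))  # item evaluated here
    threads.append(t)
    t.start()

for t in threads:
    t.join()

# Order of completion may vary, but each worker got its own launch-time value.
assert sorted(results) == ["a", "b", "c"]
```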

A warp block is not strictly necessary, and should therefore be avoided. In the general case, the number of parallel processes to be started may be large, and the scheduler may be hindered in trying to allocate some time to each of the processes, including pre-existing ones. Another application of Occam’s razor in computing.

  1. The "action" variable instance is shared by every ring/lambda.
  2. "Launch" is "lazy" and evaluation of the variable is delayed
    untitled script pic - 2024-10-09T001336.699

A new thread is "launched" as suspended, added to the tail of the queue, and started at the next cycle (after yield).
untitled script pic - 2024-10-09T002502.213
If you block yield with "warp", all threads start at once, after the yield (effectively after two yields):
untitled script pic - 2024-10-09T003821.001
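The queueing behaviour described above can be mimicked with a toy round-robin scheduler in Python (an analogy under the stated assumptions, not Snap!'s actual engine; `launch`, `worker`, etc. are my names): `launch` only enqueues a suspended task, so by the time the workers first run, the shared `action` variable already holds its final value.

```python
from collections import deque

# Toy cooperative round-robin scheduler, assuming the semantics described
# above: launch() only enqueues a suspended task; it first runs after the
# launching script next yields.
ready = deque()
log = []

def launch(gen):
    ready.append(gen)           # suspended; not started here

def scheduler(main):
    ready.append(main)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run the task until it yields
            ready.append(task)  # yielded: back to the tail of the queue
        except StopIteration:
            pass                # task finished

action = None

def worker():
    log.append(action)          # reads the *current* value of action
    yield

def main():
    global action
    for value in (1, 2, 3):
        action = value
        launch(worker())        # enqueued only; worker hasn't run yet
        # no yield inside the loop (as within "warp"): the loop finishes first
    yield                       # first yield: now the queued workers run

scheduler(main())
print(log)  # every worker sees the last value: [3, 3, 3]
```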

The Snap! UI's "liveness" makes things even worse: your script's result bubble may be altered post-mortem. The same script as above, but a just-in-time "snapshot" of the variable tells you the truth...
untitled script pic - 2024-10-09T010819.006

But given enough "yield" cycles:
untitled script pic - 2024-10-09T005845.486

It's quite obvious once you get the grasp of it, but at first glance...
Sorry for the rather rough English.

I see it now - thanks again for explaining. Though Snap! may look simple, below the surface it’s a whole different story. This matter is definitely going to be beyond the grasp of, say, UCB’s undergraduate arts students taking an introductory CS course. Then again, they may write code involving parallel processes, but (fortunately) do not necessarily need to understand the inner workings of all enabling library blocks. :smirk:
