Parallelization library (what am I missing?)

I stumbled upon the (somewhat basic) Parallelization library, and wondered if its two blocks' definitions could be simplified. E.g.:

... might be rewritten as:

... without changing its functional behaviour:

I decided to go one step further. (The launch (run (action)) construct may look unnecessarily complicated; I need it, though, for optimizing the other block within the library. Below is just a simplified example illustrating my point.)

but now the functional behaviour becomes erratic:

Does anyone understand what happens?


The action must be evaluated right at the launch phase, not captured "live" in the closure/context.
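As a rough textual analogue (Python with ordinary threads, not Snap! itself; the names are mine), this is what "live" capture does: every ring closes over the same shared variable, so its body reads whatever value the variable holds when the thread finally gets to run.

```python
import threading

results = []
threads = []

# "Live" capture: each lambda closes over the loop variable i instead of
# copying its current value, so the bodies all read i later, after the loop.
for i in range(3):
    action = lambda: results.append(i)
    threads.append(threading.Thread(target=action))

for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)   # typically [2, 2, 2]: every thread saw the final value of i
```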

and for DO IN PARALLEL AND WAIT, you can

EDIT: why isn't this working?

It seems to be slightly overengineered :wink:
This works for me...

Yet another intricacy of Snap!'s environment management :smirk:
Thanks for pointing it out!

So my final version of do in parallel and wait has become:

I don’t think it can be made any shorter or simpler (Occam’s razor of computing).
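Since the block itself is only visible in the screenshot, here is a minimal plain-Python sketch of the pattern it implements, under the assumption that each action is a zero-argument ring: launch every action as its own process, then wait until all of them have finished.

```python
import threading

def do_in_parallel_and_wait(actions):
    # Launch one thread per action, then block until every one has finished.
    # A plain-Python analogue of the library block, not the block itself.
    threads = [threading.Thread(target=action) for action in actions]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # "and wait": return only when every action is done

do_in_parallel_and_wait([lambda: print("first"),
                         lambda: print("second"),
                         lambda: print("third")])
```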

Moreover, I do think Snap! could do with at least one more inter-process synchronization mechanism (semaphores), even though Snap!'s threads are claimed to be “not really asynchronous”; in extreme cases they are, IMAO.
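For reference, this is the kind of primitive I mean: a counting semaphore that limits how many of the launched scripts may be inside a section at once (Python's threading.Semaphore, purely as an illustration of the concept, not a proposal for Snap!'s API).

```python
import threading, time

slots = threading.Semaphore(2)        # at most two workers in the section at once

def worker(name):
    with slots:                       # "wait" on the semaphore
        print(name, "entered")
        time.sleep(0.1)               # pretend to do some work
        print(name, "leaving")        # the slot is released on leaving the block

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b", "c", "d")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```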

why run with inputs?
also, I think there should be a warp block so Snap! does not waste time by yielding after each action is launched.

ok, but what is the problem?

As @dardoro has pointed out, if you use with inputs, the action (or item) is bound to its “environment” (variables) at the time the relevant script is launched. I’m not sure exactly what happens otherwise, but it’s not what you and I want.
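In the same Python analogy as above (again just an illustration, not the library's actual mechanism), the with inputs variant corresponds to passing the current value in as an explicit argument, so it is evaluated once at launch time rather than looked up later through the shared variable.

```python
import threading

results = []

# Pass the value as an input: it is evaluated when the thread is created,
# so each thread keeps its own copy instead of sharing the loop variable.
threads = [threading.Thread(target=lambda v: results.append(v), args=(i,))
           for i in range(3)]

for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))   # [0, 1, 2] -- each thread got the value it was launched with
```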

A warp block is not strictly necessary, and should therefore be avoided. In the general case, the number of parallel processes to be started may be large, and warping would hinder the scheduler in trying to allocate some time to each of the processes, including pre-existing ones. Another application of Occam’s razor in computing.

  1. The "action" variable instance is shared by every ring/lambda.
  2. "Launch" is "lazy" and evaluation of the variable is delayed

A new thread is "launched" as suspended, added to the tail of the queue, and started at the next cycle (after yield).


If you block the yield with "warp", all threads start at once after the yield (effectively after two yields).

The Snap UI's "liveness" makes things even worse. Your script's result bubble may be altered post-mortem. The same script as above but a just-in-time "snapshot" of the variable tell you the truth...

But given enough "yield" cycles...

It's quite obvious once you get the grasp of it, but at first glance...
Sorry for the rather rough English.

I see it now - thanks again for explaining. Though Snap! may look simple, below the surface it’s a whole different story. This matter is definitely going to be beyond the grasp of, say, UCB’s undergraduate arts students taking an introductory CS course. Then again, they may write code involving parallel processes, but (fortunately) do not necessarily need to understand the inner workings of all enabling library blocks. :smirk:
