I made A BETTER stream

Like this one?

Why would you ever need it?

The second one is the stream; the blocks I made as a replacement are a lot faster and can pick specific indexes.

Running the project takes a long time, which is why I asked.

Some projects, like my MIDI player, use lists of upwards of 50,000 items per song and need a variable they can pull from without slowing down other blocks. While lists are fast, they take up all the resources and more even when they're not being used.

Read this: Streams library 2.0 development, part 1 - #108 by bh

If you have an amount of data that's large, but not too large to fit in the browser's memory, plain old lists (which are usually implemented as arrays in Snap!) are just as fast, finding the $$n$$th item in constant time.

If you build a Snap! list using IN FRONT OF, so you get a linked list rather than an array, then finding the $$n$$th item takes time proportional to $$n$$, whereas your data structure will take (at best) time $$\sqrt n$$, which is indeed a big improvement.
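Snap! blocks aren't text, but the complexity argument can be sketched in Python (these are minimal hypothetical implementations for illustration, not Snap!'s actual internals):

```python
# Sketch: lookup cost in an array-backed list vs. a cons-style
# linked list built like Snap!'s IN FRONT OF.

def in_front_of(item, rest):
    """A cons cell: a (head, tail) pair, like IN FRONT OF."""
    return (item, rest)

def nth_linked(lst, n):
    """Finding item n (1-based) in a linked list walks n-1 links: O(n)."""
    for _ in range(n - 1):
        lst = lst[1]
    return lst[0]

# Build the list 1..10000 both ways.
arr = list(range(1, 10001))        # array: item n found in O(1)
linked = None
for x in reversed(arr):
    linked = in_front_of(x, linked)

assert arr[4999] == 5000                  # constant-time index
assert nth_linked(linked, 5000) == 5000   # walks 4999 links first
```

A chunked structure sits between the two: it jumps to the right chunk, then scans within it, which is where the $$\sqrt n$$ bound comes from.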

Streams are good in special situations: The data won't fit in memory; you really only need the first few items of a huge list; or there are actually infinitely many values, such as the set of all prime numbers, which won't fit in a regular list, nor in your version, but can (in a certain sense) fit in a stream.
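The "infinitely many values" case can be illustrated with a Python generator, which is lazy the way a stream is: only the items actually requested ever get computed. (This is an illustration of the concept, not how the Snap! Streams library is implemented.)

```python
from itertools import islice

def primes():
    """Yield the primes one at a time, forever (naive trial division)."""
    found = []
    candidate = 2
    while True:
        # candidate is prime if no previously found prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

# The whole set of primes can't fit in a list, but we can still
# take the first few items of the "infinite" stream on demand:
first_ten = list(islice(primes(), 10))
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```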

The stream library is annoyingly slow. We're working on it.

I hadn’t noticed this post yet. I wonder if the ideas behind it can help tackle issues with the implementation of the streams concept in Snap!. Would @the_lucky13 be so kind as to explain what their blocks do?

BTW I think fetching a single item from anywhere within a stream is not a frequently used operation (though, of course, faster is better). What really matters is whether you can process newly disposable or newly requested data in an efficient way. The mechanism of the current Streams library is both rather slow and - perhaps even more importantly - runs into memory trouble after, say, a few hundred thousand frames - so a fundamentally better streams mechanism must probably work iteratively under the hood, not recursively.

Are the recursive calls tail calls? If so, they should already work as if written iteratively. If not, perhaps we can rewrite them as tail calls. See section 1.2.1.

Do you mean Snap! already offers tail call optimization support?

Yes, of course! Snap! is a Scheme.


On my platform, this crashes:

This doesn’t:
tail call optimization? script pic 2

(with a = 1234567, b = 0)

[Snap! Build Your Own Blocks]

That's the problem.

Interesting. For me, this:

works for a=1234567 but crashes for a=12345678. Let me look into it...

EDIT: Even this:

works for me, and the dev mode untitled script pic (4) claims the stack doesn't grow. But untitled script pic (5) grows and grows, for all recursive-procedure versions and for your iterative one. Needs more research...

EDIT 2: Even just this:
untitled script pic (6)
makes FRAMES grow and grow, so I guess it's counting all frames that have been allocated rather than how many are still allocated.

So … the Snap! interpreter does attempt tail call optimization, but it’s not very effective, more like “tail call improvement”?
I wonder if this can be solved within Snap!, at least eventually.
And if not, there may be a way around it at the application (or library) level; that’s going to look quite a bit different from the current blocks though, I guess.

It's a bug in Snap! related to the THIS CALLER feature. Jens is working on it as we speak. Thanks for reporting it!

Great! If the bug has been identified and Jens is working on it, it's probably going to be solved sooner or later.

Probably sooner. It's rare for Jens to take a long time to fix a bug. :~)

I’m looking forward to it!

So I haven't been on in a while, but the hyperise lib uses a compression value for how many items are in an index; that makes it a lot faster. It uses a CALL function to do operations on those indexes. LISTIFY makes a hyper into a list, and HYPERISE makes the hyper from the list. Also, the add and delete functions are very slow right now.
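If I'm reading this right, it sounds like a chunked list: items are stored in fixed-size chunks (the "compression value"), so finding index $$n$$ means jumping straight to the right chunk instead of walking the whole list. A hypothetical Python sketch of that idea (this is a guess at the design, not the actual hyperise implementation):

```python
CHUNK = 100  # hypothetical "compression value": items per chunk

def hyperise(items, chunk=CHUNK):
    """Split a flat list into fixed-size chunks."""
    return [items[i:i + chunk] for i in range(0, len(items), chunk)]

def listify(hyper):
    """Flatten the chunks back into one plain list."""
    return [x for c in hyper for x in c]

def item_at(hyper, n, chunk=CHUNK):
    """1-based indexing, Snap!-style: pick the chunk, then the offset."""
    i = n - 1
    return hyper[i // chunk][i % chunk]

data = hyperise(list(range(1, 1001)))
assert item_at(data, 345) == 345
assert listify(data) == list(range(1, 1001))
```

Insert and delete are slow in this scheme because they have to shift items across chunk boundaries, which may be why the add and delete functions lag behind.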