Lazy evaluation in Snap!

I made a project that attempts to reproduce Haskell's lazy evaluation semantics in Snap!: https://snap.berkeley.edu/snapsource/snap.html#present:Username=theaspiringhacker&ProjectName=lazy-snap

However, I'm not sure whether I replicated function application correctly. I force the thunk in function position to get the function to apply, but I believe that Haskell only forces thunks when pattern matching.
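For comparison, in Haskell itself it's the pattern match (reduction to weak head normal form) that demands a thunk, not the application; a minimal sketch:

```haskell
-- Application by itself does not force the argument thunk.
ignoreArg :: a -> Int
ignoreArg _ = 42

-- Matching on the pair constructor forces the pair to weak head normal form,
-- but its components stay unevaluated until something matches on them in turn.
firstOf :: (a, b) -> a
firstOf (x, _) = x

main :: IO ()
main = do
  print (ignoreArg (undefined :: String))           -- fine: never demanded
  print (firstOf (1 :: Int, undefined :: String))   -- fine: only (,) is forced
  -- print (firstOf (undefined :: (Int, String)))   -- would crash: the match
  --                                                -- forces the pair thunk
```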


Nice. Look at my streams library for an alternative implementation of lazy evaluation, following SICP rather than Haskell.

Of course, to make it perfect you need to be able to overlay all the primitives with ones that return a thunk, except for SAY, which has to repeatedly force thunks until it reaches a non-thunk. That means you need to be able to distinguish thunks from other functions, e.g., by putting a pointer to (list [* THUNK *]) in front of them (using a singleton list rather than a function so you can ensure that the first item is EQ to the tag, not just =). Sorry if you already know all that. :slight_smile:
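Here is a rough sketch of that "force until it's not a thunk" loop, written in Haskell rather than Snap! (in a typed sketch the identity-comparable tag naturally becomes a data constructor):

```haskell
-- A Snap!-like dynamically typed value space, modeled as a sum type. The
-- Thunk constructor plays the role of the EQ-comparable tag that marks a
-- suspended computation.
data Value
  = Thunk (() -> Value)   -- suspended computation
  | Num Double
  | Str String

-- What SAY needs: keep forcing until the result is no longer a thunk.
force :: Value -> Value
force (Thunk suspended) = force (suspended ())
force v                 = v

example :: Value
example = force (Thunk (\() -> Thunk (\() -> Num 42)))   -- Num 42
```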

My FFI function primIO is supposed to wrap a primitive Snap! block. See the definitions of sayIO and sayTimedIO, which wrap say _ and say _ for _ secs, respectively.

I don't need to repeatedly force thunks. Rather, I wrote primitives such as function application and case analysis that take thunks and force them, so that when one forces the top-level thunk, there is a "chain reaction" that causes the appropriate inner thunks to get forced in the course of evaluating the outer thunk.
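That demand-driven propagation is what Haskell itself does natively; a small illustration (not my Snap! encoding) of how forcing the outer result pulls on only the inner thunks it needs:

```haskell
squares :: [Integer]
squares = map (\n -> n * n) [1 ..]   -- an infinite, lazily built list

main :: IO ()
main = print (take 3 squares)        -- prints [1,4,9]; only the first three
                                     -- cells and squares are ever forced
```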

My thunk design is based on Haskell's STG machine. The thunk has a pointer to some code to execute; when that code gets executed, the pointer gets re-pointed to dummy code that simply returns the result. See page 11 of https://www.microsoft.com/en-us/research/wp-content/uploads/1992/04/spineless-tagless-gmachine.pdf. The streams library uses what the paper calls the "cell model," with a flag and a cell to store the thunk's result.
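A rough Haskell model of the two designs, using an IORef as the mutable pointer (a sketch of the idea, not the STG machine or the actual Snap! implementation):

```haskell
import Data.IORef

-- Self-updating model (STG-style): a thunk is a mutable pointer to the code
-- to run. Forcing runs that code, then overwrites the pointer with "dummy
-- code" that simply returns the result.
newtype Thunk a = Thunk (IORef (IO a))

mkThunk :: IO a -> IO (Thunk a)
mkThunk run = Thunk <$> newIORef run

forceThunk :: Thunk a -> IO a
forceThunk (Thunk ref) = do
  run    <- readIORef ref
  result <- run
  writeIORef ref (pure result)   -- re-point at code that just returns the result
  pure result

-- Cell model (as in the streams library): keep the code alongside a cell that
-- records whether the result has been computed yet.
data CellThunk a = CellThunk
  { cellRef  :: IORef (Maybe a)   -- Nothing until the first force
  , cellCode :: IO a
  }

mkCellThunk :: IO a -> IO (CellThunk a)
mkCellThunk run = CellThunk <$> newIORef Nothing <*> pure run

forceCell :: CellThunk a -> IO a
forceCell t = do
  cached <- readIORef (cellRef t)
  case cached of
    Just result -> pure result
    Nothing     -> do
      result <- cellCode t
      writeIORef (cellRef t) (Just result)
      pure result
```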

Yes, that's important for efficiency, to avoid repeated evaluation of the underlying expression -- although it also means that if you thunkify something with side effects, the result might not be what you expect if the side effect was supposed to happen repeatedly.
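A small demonstration of that pitfall in Haskell itself, using Debug.Trace as a stand-in for the side effect (a sketch; exact behavior can shift under optimization):

```haskell
import Debug.Trace (trace)

-- 'shared' is a single thunk; after the first force it is updated with its
-- result, so the trace message appears once even though the value is used
-- twice. If the "side effect" was meant to happen once per use, memoization
-- has silently changed the program's behavior.
shared :: Int
shared = trace "evaluating shared" (2 + 3)

main :: IO ()
main = print (shared + shared)   -- the trace fires once; prints 10
```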

For repeated evaluation, see the difference between call-by-name and call-by-need. The point of Haskell's monadic IO and referential transparency is that the programmer need not care about when, or how many times, something is evaluated when reasoning about the program's effects. The program is literally a pure lambda calculus expression to be normalized.
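To make the distinction concrete, here is a small Haskell sketch that models the two strategies by passing the suspended computation explicitly (the names squareByName and squareByNeed are just for illustration):

```haskell
-- Call-by-name: the suspended computation is re-run at every use.
squareByName :: IO Integer -> IO Integer
squareByName suspended = do
  a <- suspended          -- evaluated here ...
  b <- suspended          -- ... and evaluated again here
  pure (a * b)

-- Call-by-need: the suspension is run once and the result is shared.
squareByNeed :: IO Integer -> IO Integer
squareByNeed suspended = do
  a <- suspended          -- evaluated exactly once
  pure (a * a)

main :: IO ()
main = do
  x <- squareByName (putStrLn "running" >> pure 6)   -- prints "running" twice
  y <- squareByNeed (putStrLn "running" >> pure 6)   -- prints "running" once
  print (x, y)                                       -- prints (36,36)
```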

Note that the Church-Rosser theorem doesn't buy you as much in Haskell as you might hope: confluence only says a term has at most one normal form, not that every reduction strategy reaches it, so strict versus lazy evaluation does in fact matter when a program may diverge. Some people regard _|_ (the "value" a diverging expression is said to evaluate to) as a side effect.
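The standard illustration of why the strategy matters once divergence is possible (a minimal sketch in plain Haskell):

```haskell
-- Under lazy evaluation this terminates: 'const' never demands its second
-- argument, so the diverging thunk is never entered.
terminates :: Int
terminates = const 1 loop
  where
    loop :: Int
    loop = loop   -- diverges if it is ever forced

main :: IO ()
main = print terminates   -- prints 1; a strict strategy would evaluate the
                          -- argument first and never get this far
```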

In contrast, in Idris (at least in its total fragment) every program normalizes, so there Church-Rosser really does mean the evaluation strategy can't change the result.

Okay, you win, you know more about this topic than I do. I'll shut up now... :slight_smile:
