I hope this makes sense with the code, since I can't attach images. I was looking at the definition of the combine block, and the recursive definition it shows makes it look like:
(combine (list 1, 2, 3) ( _ ^ _ ))
would be evaluated as
(1 ^ (2 ^ 3))
but in fact it's evaluated as
((1 ^ 2) ^ 3)
Does anyone have any insight into this? I'm referring to the definition that I see when I click 'edit' on the combine block. I'm guessing that it's not actually executing the given code, but that the primitive calls JavaScript from somewhere else; still, it seems odd that the definition in the blocks wouldn't have the same behavior.
I tried pulling out the primitive block so combine would run the block code underneath, and I got different behavior from when I ran it with the primitive.
The block evaluates the first two items together, so it would first do ((1) ^ (2)); then it takes the result of that and evaluates it with the third item, so it becomes (((1) ^ (2)) ^ (3)); then it repeats.
Or at least that's what the JavaScript definition does. And yes, you were right about the primitive block running a JavaScript function. However, if you click the switch on the block, then click OK, the block will then run the definition you can see (which is really handy for comparing discrepancies like this).
After testing, you are right: the Snap! definition combines in a different order.
When I use the block as is to report (combine (list 2, 3, 4) ( _ ^ _ )), it returns 4096 (which is 8^4), but when I use the block definition that's inside the edit window for combine, I get a much, much bigger number (likely because it's 2^81).
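A quick arithmetic check of the two orders (sketched in TypeScript, since blocks can't be pasted here):

```ts
// Left-to-right folding, which is what the primitive appears to do:
console.log((2 ** 3) ** 4); // 4096

// Right-to-left folding, which is what the recursive definition in the
// block editor does: 3 ** 4 = 81, then 2 ** 81.
console.log(2 ** (3 ** 4)); // 2.4178516392292583e+24, i.e. 2 ** 81
```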
It sounds like the other poster is saying the JavaScript the block usually runs is a different algorithm from the block code I can see, but I don't know where to find the JavaScript definition, so I wasn't sure if there was something I was missing. Thank you to both of you for chiming in. This is my first time using combine, and I was confused when it evaluated in the opposite direction.
combine in Snap! neither implies nor guarantees any kind of "direction"; it is not a shorthand notation for any imperative loop but a higher-order function (an abstraction). This is really important, pedagogically. Combine simply "combines" the elements of a list using a dyadic function. It's pretty clear that you should expect commutative / associative operations to result in what you'd expect, but not any others.
If it's the "definition" you see when you edit the primitive, then please note that there's a primitive block in there, and that the fallback code can, of course, do something else. Again, the order of operations in a HOF, especially in combine, is not guaranteed.
I feel like having different results is not a very good thing, since, first, it can confuse users like the OP, and second, it may make a program run differently if you accidentally edited the block. I feel like the definition should report the same result as the JavaScript definition.
We should definitely have an official policy about this. My preference is the opposite of ego-lay_atman-bay's: I would like the Snap! version to be the simplest possible code that captures the essential idea.
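In text form, that idea might look something like this (a rough sketch, not the actual blocks; empty-list handling omitted):

```ts
// Fold-right sketch of the essential idea: combine the first item with
// the result of combining the rest of the list.
function combine<T>(items: T[], fn: (a: T, b: T) => T): T {
  if (items.length === 1) return items[0];
  return fn(items[0], combine(items.slice(1), fn));
}

combine([2, 3, 4], (a, b) => a ** b); // 2 ** (3 ** 4) = 2 ** 81
```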
If the goal were actually to implement Snap! in Snap! (so, making the Boolean input to PRIMITIVE default to False) then yeah, COMBINE-in-Snap! should fold the same way as COMBINE-in-JS, even though we make no promise about folding order. But if the goal is pedagogic, then the simpler the better. (But also we should prefer functional to imperative code, as in the MAP example above.)
I acknowledge that a third possible goal would be for the Snap! code to supplement the manual and the help screens as reference documentation for the primitive. I guess that would be useful for the more obscure blocks, such as DISTRIBUTION OF, but for the quotidian blocks such as FOR and MAP, any user who knows enough to look inside the definition and read the code they find there already knows how those blocks work.
I know Jens was initially inspired to build this feature by Smalltalk, which actually does run the Smalltalk code in its primitive block definitions, if I'm understanding correctly. But Smalltalk code runs a lot faster than Snap! code, and so I don't think we're ever actually going to do that. (I mean, maybe someday there'll be a compiler, but I think that would work against liveliness, so maybe not.)
So I think the goal is pedagogic, in two senses. One is simply to impress upon users that Snap! is a real programming language, powerful enough to implement the Snap! primitives. For that purpose, it doesn't matter so much how we resolve this policy question. But the other is to impress upon users the idea that they, even they in their newbieness, could define an awful lot of quotidian Snap! themselves--that there's no magic involved.
Another pedagogic virtue of stripped-down versions is that for not-so-newbie users we could have curriculum that poses exercises such as "add the value/index/list feature to MAP."
But I offer this opinion modestly; I definitely see that there are arguments for other points of view.
"It's pretty clear that you should expect commutative / associative operations to result in what you'd expect, but not any others."
I don't think it's that easy to determine what's going to be 'clear' or 'expected' for the people who are using Snap!, especially since everyone comes in with their own perspectives and experiences. What may be clear or the obvious expectation to you may be very surprising to someone who is coming from a very different background than you have.
The issue here was that the behavior of the block, when given a non-associative function, was not what you get when you trace the Snap! code inside the definition. When you pass in something you're not supposed to pass in, unexpected or undesired things can happen. However, I would expect that when I opened up the block and went step by step through the Snap! code that I saw, that I would get the same answer that the block gave to me.
My goal in posting was to try to figure out what was going on, since I didn't know why the block code definition would give different answers than the block. You have cleared that up, and I understand where you are coming from in showing the Snap! definition you have, so thank you.
The main concern I have with the way it currently stands, from a pedagogic perspective, is this: we encourage students to engage in inquiry and use the resources at their disposal to learn more about how everything works, but the code given will create an incorrect mental model of what's happening under the hood. It will also undermine the message that computers do what they are programmed to do, not what we 'want' them to do.
Answering 'why did the block return this value?' with 'just don't put in that function' squashes the curiosity we're trying to encourage. It also misses a chance to show students that you can still trace code when you do something 'wrong', and the ability to do so is essential for effective debugging.
Part of understanding abstraction is being able to move between layers of abstraction and understand how they interact with each other. Having the blocks operate differently from the given definition, even when students are doing things they're 'not supposed' to do, makes it harder for them to make sense of how abstraction works in a complex system.
Yes, I get that. But any abstraction (such as a programming language) hides aspects of how the system "really" works. For example, some of the complexity of the actual Snap! interpreter code is there to defend against the fact that browsers may interrupt the running of the Snap! environment at any time, and then not continue running Snap! from the same point, but instead start it over. We definitely don't want our users to be thinking about that! We abstract it away. (In particular, we don't rely on the JS stack, but instead build our own stacks in the heap.)
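A toy sketch of that last idea (illustrative only, not Snap!'s actual interpreter code): all evaluation state lives in an ordinary heap object, so execution can stop after any step and later resume exactly where it left off.

```ts
// All state for a running computation is kept in the heap, not on the
// host language's call stack.
interface Process {
  pending: number[];   // work still to be done
  accumulator: number; // results so far
}

// Perform one small unit of work; returns false when finished.
function step(p: Process): boolean {
  if (p.pending.length === 0) return false;
  p.accumulator += p.pending.shift()!;
  return true; // safe point: the browser may take control here
}

const proc: Process = { pending: [1, 2, 3], accumulator: 0 };
while (step(proc)) { /* could yield to the browser between steps */ }
console.log(proc.accumulator); // 6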
So the question of whether any particular behavior should be above or below the abstraction barrier is an independent question, not derivable from a general principle about abstraction.
In this particular case, I would argue that languages whose designers want to enable users to apply their version of COMBINE to non-associative operations generally provide explicit fold-left and fold-right primitives, rather than providing only one of them and specifying how it folds. I think one could make an argument that we should provide fold-left and fold-right in addition to the existing COMBINE (not instead of, because we don't want users to think about folding order).
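In textual terms, such a pair might look like this (names hypothetical; the point is that the folding order becomes part of each block's contract):

```ts
// Two explicit primitives whose names promise a folding order.
const foldLeft = <T>(xs: T[], f: (a: T, b: T) => T): T => xs.reduce(f);
const foldRight = <T>(xs: T[], f: (a: T, b: T) => T): T =>
  xs.reduceRight((acc, cur) => f(cur, acc));

foldLeft([2, 3, 4], (a, b) => a ** b);  // (2 ** 3) ** 4 = 4096
foldRight([2, 3, 4], (a, b) => a ** b); // 2 ** (3 ** 4) = 2 ** 81
```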
Another situation in which a similar issue arises is floating point computation. Every so often we get a question on the forum about why a sum such as 0.2 + 0.4 reports 0.6000000000000001 instead of 0.6.
When it comes up, we answer it, but I argue that we should display floating point values by determining the value, within the range represented by a given floating point bit pattern, that has the smallest number of nonzero decimal digits, and displaying that numeral -- in this case, 0.6. (This would not, of course, affect the internal floating point representation.) So, that's another case in which I would argue for hiding an issue about how the computer "really" works. There are only, like, six people in the world who really understand floating point, and I don't aspire to add our users to their number.
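One cheap approximation of that display rule (a sketch of the idea, not the exact algorithm proposed): render with at most 15 significant digits and let round-tripping strip the trailing zeros, which collapses artifacts like 0.6000000000000001 back to 0.6.

```ts
// Display-only change: the internal float is untouched.
function displayFloat(x: number): string {
  return String(Number(x.toPrecision(15)));
}

console.log(String(0.2 + 0.4));       // "0.6000000000000001"
console.log(displayFloat(0.2 + 0.4)); // "0.6"
```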
On the other hand, I (not speaking for Jens or anyone else here) do want our users to understand recursive functions, which are both things of beauty and at the heart of the building blocks of reality. This goal of mine is problematic because iterative, array-based methods are super optimized by JavaScript in browsers, whereas recursive functions rely on our simple-minded implementation of linked lists. That's why the higher order functions, which were built in user code in BYOB, became primitives in Snap!, at the cost of losing them as programming exercises for students. The "blocks all the way down" feature is in part an effort to have our cake and eat it too about this.
By the way, about "computers do what they are programmed to do, not what we 'want' them to do," I would argue that this is only partly a law of nature, and also partly the result of weaker-than-necessary programming tools. (And yes, I understand that the Halting Theorem is part of the law-of-nature part!) Already you can often ask ChatGPT to write the code for you, although it isn't perfect and maybe never will be. I hate that peanut-butter-and-jelly exercise that all other programming teachers love, because it suggests that the computer is malevolent, so you have to program very defensively. Our tools just don't yet match our expectations.
I agree with your point about abstraction always hiding details. My concern is more with the difference between saying the details don't matter and providing details that are incorrect.
I think what it comes down to for me is that the Snap! definitions were conveying an incorrect model of how the block worked (or at least a model without good predictive value for edge cases). One could argue that the point of abstraction is that you don't need a model of how the block works in order to use it effectively, and I agree. However, I do think that if a model is provided, it should be a correct model (i.e., one with good predictive value), or at least as correct as it can be within the given constraints. Given the nature of the underlying JavaScript and the desire to showcase recursion, I can see why the current definition was chosen, but I weigh the competing priorities differently, which is totally fine!
I'm trying not to get too carried away in my responses, because the discussion is giving me a lot to think about in how we give people a low floor to a complex system without constraining their choices/pathway to the point that they have no agency in the learning process, and I could probably go on for way too long about it. If there is a better forum for this type of discussion, I'm happy to move my responses there.
One thing this got me thinking about was the usefulness of learners differentiating between the goal, algorithm, code, and execution of a program. You mentioned earlier that the given Snap! code is intended to convey the 'essential idea' (which I interpreted as 'goal' in the aforementioned framework) of the block in the simplest way possible. I'm still trying to sort out my thoughts, but my gut instinct is that giving a particular code example, especially uncommented, is not going to be that useful for conveying the goal or essential idea of a program, as students would need to make the leap from code to algorithm to goal. (I do, however, strongly agree with your point that students should see that these blocks can be constructed in Snap! and are not 'magic'.)
I also dislike the 'peanut butter and jelly'/ 'brush your teeth' / 'make a cup of tea' set of exercises for the same reason. I think people just like it because it's funny and not because it has any particular learning value.
Again, thank you so much for taking the time to share all of your insights into the philosophy behind Snap!.
actually... no! In this case Snap! does exactly what Smalltalk does / used to do: it defers primitives to run as primitives (in Smalltalk they used to be written in assembly language, and later, in Squeak, in C), and it offers readable and executable fallback code that you can choose to run instead, albeit much slower. Snap! (because of modern JS) actually runs a lot faster than Smalltalk used to run back in the day. Gosh, when I was working on Smalltalk at IBM in the mid '90s, we figured it was totally acceptable for a program (we didn't call them "apps" back then) to require up to 5 minutes to fully launch (we used to call that "warming up the pointers"). And even then, evaluating a HOF like select (keep in Snap!), collect (map in Snap!), or inject (combine in Snap!) was slow but oh-so-elegant :).
Ah, you're pushing one of my buttons. Commenting code is almost always a bad idea. Program documentation is a good idea, but it shouldn't be at the granularity of lines of code. We document the overall purpose of a block in its help screen, but line-by-line comments are usually just stupid.
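For instance (a made-up example, rendered as text):

```ts
let score = 0;

// add 1 to score   <- the comment merely restates the code
score = score + 1;

// whereas a descriptive name needs no comment at all:
let correctAnswerCount = 0;
correctAnswerCount = correctAnswerCount + 1;
```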
and if they're not stupid, it means the programmer is being clever and instead of commenting the code should rewrite it to be self-documenting. This is especially true in a block language, in which there is no annoyance cost to using long, multi-word, descriptive variable names. (And also no need for hideous camel case, by the way.) I always tell my students that any time I find myself tempted to comment my code I rewrite it instead.
(Longer explanation: A program is a tree structure, with big modules (input, processing, output is the traditional example), smaller submodules, individual procedures, and then lines of code. In a perfect world, the program documentation would have the same tree structure, and the most important parts of the documentation would be at the higher levels of the tree. In this world, though, programmers don't have the patience to do all that, and so instead we get documentation at the leaves of the tree, i.e., comments in the code. A better choice would be for the programmer to write a document separate from the code, explaining the important chunks of the program and also the data structures used.)
PS One of the few virtues of the horrible World Wide Web is that these days we do get tree-structured documentation of programming languages, if not of other software.
PPS For custom blocks, users can put a comment on the hat block of the script, which is used as the block's help screen. But that comment should be outward-looking, explaining how you're meant to use the block, rather than the sort of preconditions/postconditions inward-looking comment that those other programming teachers love.
The problem with that whole assumption is that it flies in the face of how the human brain works. I want you to go find code from 5, 10, 15... every five years back until your first block of code, and tell me how long it takes to figure out the "self-documenting" intent of each block.
Brains change. Frequently. It's called learning, and this idea that code is perfect and undilutable is almost as bad as abstraction.
The reason all that old software had fantastic manuals is that it had technical writers, and those technical writers generally weren't the people writing the code; but for a whole heap of bad reasons they eventually got phased out, which caused the mess we have today.
"In this world, though, programmers don't have the patience to do all that."
Nah, they don't get trained as technical writers; the assumption is that anyone who writes code is a technical writer, and it's a bad one.
If technical writing was taught, we wouldn't be having this discussion.
With the big caveat that comments are in the standards (three years before functions!), so in many ways this is a moot point for me...
I agree with your general point about commenting code, although I do think that your example of line-by-line comments is a bit of a straw man. I believe that well-written code should be readable by both a human and a computer, and abstraction should be leveraged to make code more readable.
I've found that in the classroom, comments are often used more as a scaffolding technique. I'm not saying that comments are the only solution to this problem, or that they have to be the kind that particularly grieves you, but I do think that if the goal of that code is to give the essential idea of what the block does, it needs something more so that novice programmers can connect the code to the essential idea. I will admit, I didn't think through exactly what those comments might be, or whether comments were the best approach here; I just pulled out comments as one of the most common ways to scaffold that process.
For reference, my perspective comes from using this within the context of BJC, so recursion hasn't been introduced yet (Abstraction: Making Computers Do Math). I know it's not really fair to come at this with such a limited point of view, but this is where the issue came up for me. To selfishly refocus on that issue, for me the central question isn't about comments versus no comments. Given that the code is there, the questions for me are:
1. Is the hope that students can understand how the given code relates to the essential idea / general behavior of the block?
2. If so, is it expected that the relationship will be clear to students when they read the given code?
3. If not, what tools do students have at their disposal to make that connection?
In reference to the goal of "they, even they in their newbieness, could define an awful lot of quotidian Snap! themselves", I will go with 'yes' to the first question. Based on my experiences, I will go with 'no' to the second.
The thing is, I think that they could understand an iterative/imperative 'reduce'-style solution to this without too much trouble, based on what they've learned, because it is closer to how a human would go about the problem. At the same time, in my heart of hearts, I think that recursion is the 'better' solution, and I want to present that, too. Maybe part of this is a conflict of values between the CSP framework and other things. I feel a lot of tension between what CSP asks us to focus on and a more functions-first, Scheme-style intro class.
Their first draft, 15 years ago or whenever it was, was even worse. I had to fight to get them to admit that functions exist. And to agree that their readers would give the "looping" point to recursive code. And to MAP.
In that case, you'll note that the curriculum explicitly says "use COMBINE only with these eight functions" (2.4.3). So I feel justified in saying that if a student asks about folding order, the right answer is "use COMBINE only with associative operations." :~)