I agree that that's more block languagy. I'm struggling to love the distinction between constants and variables, like the CONST declaration in Those Other Languages, although otoh in the Prolog context it really isn't like T.O.L., in which a "constant" is really just a variable (living with the other variables in memory) with a straitjacket. In fact I'm wondering if you shouldn't have one for variables. Ideally when you drag out the upvar your copy would have a question mark prepended, but that would require a mod to Snap!.
You didn't invent this nomenclature, but I'm wondering why Prolog treats the relation as a different kind of thing from its args. It's treated uniformly in unification, right? You can use a variable in the functor position, to ask "what relation holds between liz and bill?"
The exercise with identifier blocks is a mere experiment for now; I’m not sure this approach is turning out beneficial on balance. I tried the ? reporter before, but (as you wrote) the upvar name won’t automatically be prefixed with “?”, so the “?” needs to be part of the upvar name proper, and the generating block then looks odd.
As for the difference Prolog makes between functor and arguments: I agree it seems unnecessary, so the Amalog interpreter doesn’t, really:
In Amalog, the only real difference between item 1 (the functor) and the following arguments is that a Snap! script may be used as the functor, with the final argument (= item last) as the reported result of the script (function / predicate) having been applied to the other arguments; if the script is a command, the result is “true” by definition.
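In Python terms, roughly (eval_goal, unify, and the “?name” variable convention here are illustrative assumptions, not the actual Amalog code):

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(a, b, env):
    a, b = env.get(a, a), env.get(b, b)    # walk terms through current bindings
    if a == b:
        return env
    if is_var(a):
        return {**env, a: b}
    if is_var(b):
        return {**env, b: a}
    return None                            # mismatch: unification fails

def eval_goal(goal, env):
    functor, *args = goal
    if callable(functor):                  # a script used in functor position
        *inputs, out = args
        result = functor(*[env.get(t, t) for t in inputs])
        if result is None:                 # a command: result is "true" by definition
            result = True
        return unify(out, result, env)     # the last item gets the reported result
    # otherwise: resolve the goal against the clause database as usual
    raise NotImplementedError

print(eval_goal([lambda x, y: x + y, 2, 3, "?sum"], {}))   # {'?sum': 5}
```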
There are many optimizations that constant functors enable. And for a particular task, if one wanted variable relations, every clause could be (relation ?relation ?arg-1 ... ?arg-n).
Furthermore, constant relations make it easy to interpret logic programs as first-order logic (really the Horn clause subset).
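For instance, a clause such as grandparent(X,Z) :- parent(X,Y), parent(Y,Z). reads directly as the universally quantified implication ∀x,y,z. parent(x,y) ∧ parent(y,z) → grandparent(x,z); with a variable in the relation position there is no such first-order reading, since quantifying over relations is second-order.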
Yes. But can't you recognize the common special case of a constant functor and do the optimizations? IIRC SICP does that; its databases of assertions and rules are indexed on the functor, if constant, so you can quickly find relevant facts and rules, but you can also use a variable and search the entire database.
I mean, don't you ever think "Julia is some sort of cousin of mine, but I forget exactly what kind" and then wouldn't you like to follow that with "Let me ask Prolog"? (I used to give implementing "nth cousin k times removed" in logic programming as an exam question, so I have a warm spot in my heart for those obscure relations.)
I hate the name "functor," by the way, because it only encourages students to try to do composition of functions:
grandparent(X, Y) :- parent(X, parent(Y)).  % wrong!
Why aren't they called "relations"?
Although SICP has optimizations for constant ones below the abstraction barrier, in its user-level documentation it treats assertions, rules, and queries as arbitrary token strings, so you can say
(assert! (Brian likes potstickers))
and ask questions such as (?who likes potstickers) and (Brian likes ?what).
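That style is easy to mimic; here's a tiny Python sketch of matching such token-list queries against a database (match, query, and DB are illustrative names, not SICP's Scheme code). Note that a variable works in the relation position too:

```python
DB = [("Brian", "likes", "potstickers"),
      ("liz", "parent", "bill")]

def is_var(tok):
    return tok.startswith("?")

def match(pattern, assertion, env):
    if len(pattern) != len(assertion):
        return None
    for p, a in zip(pattern, assertion):
        p = env.get(p, p)                 # use an earlier binding if there is one
        if is_var(p):
            env = {**env, p: a}           # bind the variable to this token
        elif p != a:
            return None                   # constant tokens must match exactly
    return env

def query(pattern):
    return [e for a in DB if (e := match(pattern, a, {})) is not None]

print(query(("?who", "likes", "potstickers")))   # [{'?who': 'Brian'}]
print(query(("Brian", "likes", "?what")))        # [{'?what': 'potstickers'}]
print(query(("liz", "?rel", "bill")))            # [{'?rel': 'parent'}]
```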
P.S. That would also be more Snap!ly, because we have Smalltalk-style title text interspersed with arguments.
P.P.S. And anyway I can never remember the order of arguments to a relation such as PARENT, and it'd be easier for me if the relation were
The most obvious is that a goal need only be matched against the heads of those clauses with the same predicate, via either a quick table lookup or a pointer to those clauses.
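A minimal sketch of that lookup in Python (the clause layout and names are assumptions for illustration): constant functors get a table entry, while a variable in the functor position falls back to scanning every clause.

```python
from collections import defaultdict

clauses = []                    # every clause, for variable-functor queries
index = defaultdict(list)      # constant functor -> its clauses

def add_clause(head, body=()):
    clause = (head, body)
    clauses.append(clause)
    functor = head[0]
    if not functor.startswith("?"):      # constant functor: index it
        index[functor].append(clause)

def candidates(goal):
    functor = goal[0]
    if functor.startswith("?"):          # variable relation: search everything
        return clauses
    return index[functor]                # quick table lookup

add_clause(("parent", "liz", "bill"))
add_clause(("likes", "Brian", "potstickers"))
print(candidates(("parent", "?x", "?y")))        # only the parent clause
print(len(candidates(("?rel", "liz", "bill"))))  # 2: both clauses searched
```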
For many logic programs one can understand them both procedurally and declaratively. Sometimes instead of debugging a predicate procedurally one looks at the code and asks "Is this saying something true?"
Why not allow any number of left arguments, if you're going to distinguish the relation as an upvar?
But in a perfect world I could say
?list items from ?a to ?b are ?result
and you'd make a relation named _ items from _ to _ are _. But the user would refer to it the way I wrote it above. I know, upvars would have to be more flexible.
Because people have figured out how to evaluate the Horn clause subset of first-order logic very efficiently. And I'm pretty sure higher-order logic is computationally expensive to evaluate.
Yeah. I remember AI people at both MIT and Stanford working on that sort of thing in my even more ignorant youth. But still, one should be able to use easy methods on easy problems and still have hard methods for hard problems. (Not so much conceptually hard, I think, as just exponential time.)
There’s practical reasoning behind that: dropping a reporter block onto a double arrow ( ◁▷ ) that is followed by at least one slot on the same line is rather difficult, or close to impossible (granted, one may make a copy of the block, paste that onto the arrow, and subsequently delete the original). Therefore I only use multi-inputs at the end of a line; if necessary I’ll insert a line break, which IMO isn’t suitable here. OTOH, it’s not very likely that anyone is going to use an input list here. So I’ll reconsider.
In order to make a reasonably efficient logic programming interpreter, I’ll need to implement memoization. I edited some code I used earlier for memoizing recursive operations:
When I use this for storing solutions to a goal (“relation” will be item 1 of the arguments), I need to make sure that the number of arguments of that type of relation is always the same, for this algorithm to work. So if a relation’s number of arguments is variable, I may have to insert the arity as an extra argument for memoization.
The function uses the following simple lookup function:
You can have the value part of each kv-pair be itself a pair consisting of a value and a sub-dictionary. Then, if there are no more levels you report the value part, and if there are more levels you do the recursive call on the sub-dictionary part. (Pretty sure this is in Chapter 3. ;~P)
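A sketch of that structure in Python (the names are mine): each key maps to a (value, sub-dictionary) pair, so a lookup can stop at any depth, and key sequences of different lengths, e.g. relations of different arities, never collide.

```python
def lookup(table, keys):
    value, subdict = table.get(keys[0], (None, None))
    if len(keys) == 1:
        return value                      # no more levels: report the value part
    if subdict is None:
        return None                       # key prefix not present
    return lookup(subdict, keys[1:])      # recur on the sub-dictionary part

def insert(table, keys, value):
    old_value, subdict = table.get(keys[0], (None, {}))
    if len(keys) == 1:
        table[keys[0]] = (value, subdict)
    else:
        table[keys[0]] = (old_value, subdict)
        insert(subdict, keys[1:], value)

memo = {}
insert(memo, ("parent", "liz"), ["bill"])
insert(memo, ("parent", "liz", "bill"), True)   # same prefix, different depth
print(lookup(memo, ("parent", "liz")))          # ['bill']
print(lookup(memo, ("parent", "liz", "bill")))  # True
```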