Logic programming library development log

Why a chick in an egg?

The item is being spawned (like in a game). I searched for “spawn” on an emoji webpage, and this was their no. 1 result. Do you like it?

Oh. I see. That makes sense!

This alternative I like, too:

Logic programming DEVLAB script pic 3

But first I’m going to read all 72 (!) pages of SICP section 4.4 now :flushed:

I agree that that's more block-languagy. I'm struggling to love the distinction between constants and variables, like the CONST declaration in Those Other Languages, although otoh in the Prolog context it really isn't like T.O.L., in which a "constant" is really just a variable (living with the other variables in memory) with a straitjacket. In fact I'm wondering if you shouldn't have an untitled script pic (2) one for variables. Ideally, when you drag out the upvar, your copy would have a question mark prepended, but that would require a mod to Snap!.

You didn't invent this nomenclature, but I'm wondering why Prolog treats the relation as a different kind of thing from its args. It's treated uniformly in unification, right? You can use a variable in the functor position to ask "what relation holds between liz and bill?"
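For illustration, here is a minimal unification sketch in Python (all names here are assumptions for the sketch, not the Amalog or SICP code) showing that if goals are just tuples, the functor position is unified like any other term, so a variable there works fine:

```python
def is_var(t):
    """Variables are strings starting with '?'."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):   # the functor is just element 0 of the tuple
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# "What relation holds between liz and bill?"
fact = ("mother", "liz", "bill")
query = ("?relation", "liz", "bill")
print(unify(query, fact, {}))   # {'?relation': 'mother'}
```

Nothing special is needed for the functor: the tuple walk binds `?relation` exactly as it would bind an argument variable.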

The exercise with identifier blocks is a mere experiment for now - I’m not sure this approach is turning out to be beneficial on balance. I tried the ? reporter before, but (as you wrote) the upvar name will not automatically be prefixed with “?”, so the “?” needs to be part of the upvar name proper: Logic programming DEVLAB script pic 5; thus the generating block would be like: Logic programming DEVLAB script pic 4, looking odd.

:smirk:

As for the difference Prolog makes between functor and arguments: I agree it seems unnecessary, so the Amalog interpreter doesn’t, really:

Indeed one can ask for various relations between “arguments”:

One step further, though this final step is mostly cosmetic, would be to simply define a goal as a tuple:

In Amalog, the only real difference between item 1 (the functor) and the following arguments is that a Snap! script may be used as the functor. In that case the final argument (= item last) is the reported result of the script (function / predicate) applied to the other arguments - unless the script is a command, in which case the result is “true” by definition.

Logic programming DEVLAB script pic 9

Don't forget to handle (?relation ?person bill). :~)

There are many optimizations that constant functors enable. And for a particular task, if one wanted variable relations, then every clause could be (relation ?relation ?arg-1 ... ?arg-n).
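The transformation described above is mechanical; a tiny Python sketch (tuple representation and the `relation` wrapper name are assumptions for illustration):

```python
def wrap(clause):
    """Rewrite a clause so its functor is the constant 'relation' and the
    original relation name becomes the first argument, making the relation
    queryable with a variable while keeping functors constant."""
    return ("relation",) + clause

print(wrap(("parent", "liz", "bill")))
# ('relation', 'parent', 'liz', 'bill')
```

A goal like (relation ?relation liz bill) then asks for the relation without ever putting a variable in functor position.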

Furthermore, constant relations make it easy to interpret logic programs as first-order logic (really the Horn clause subset).

Which relatives is bill having arguments with? :wink: I'm afraid that's a subject for juicier fora.

Interesting. Such as ... ?

That looks a bit overdone for most practical purposes. OTOH it's probably easy to transform a database to enable it.

What practical advantages does that bring?

Yes. But can't you recognize the common special case of a constant functor and do the optimizations? IIRC SICP does that; its databases of assertions and rules are indexed on the functor, if constant, so you can quickly find relevant facts and rules, but also you can use a variable and search the entire database.

I mean, don't you ever think "Julia is some sort of cousin of mine, but I forget exactly what kind" and then wouldn't you like to follow that with "Let me ask Prolog"? (I used to give implementing "nth cousin k times removed" in logic programming as an exam question, so I have a warm spot in my heart for those obscure relations.)

I hate the name "functor," by the way, because it only encourages students to try to do composition of functions:

grandparent(x, y) :- parent(x, parent(y)).  % wrong!

Why aren't they called "relations"?

Although SICP has optimizations for constant ones below the abstraction barrier, in its user-level documentation it treats assertions, rules, and queries as arbitrary token strings, so you can say

(assert! (Brian likes potstickers))

and ask questions such as (?who likes potstickers) and (Brian likes ?what).

P.S. That would also be more Snap!ly, because we have Smalltalk-style title text interspersed with arguments.

P.P.S. And anyway I can never remember the order of arguments to a relation such as PARENT, and it'd be easier for me if the relation were

(parent of ?x is ?y)

I don’t like it either - to me it sounds like a very strict bureaucrat from former East Germany.

Interesting line of thought. Perhaps even something like ... ?

Logic programming DEVLAB script pic (8)

Logic programming DEVLAB script pic (6)

This enables us to style a fact / goal / head as:

  • a postfix relation of 1 argument
  • an infix relation with 2 arguments
  • an infix relation with 1 leading argument, and any number of trailing arguments

In each case, the relation would always be item 1 of the block's output.

A separate block would support calling Snap! functions.

The most obvious is that a goal need only be matched with the heads of those clauses with the same predicate. Either a quick table lookup or a pointer to those clauses.
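A sketch of that indexing scheme in Python (the class and method names are assumptions, not SICP's actual code): clauses with a constant functor go into a table keyed on the functor, and a goal with a variable functor falls back to scanning the whole database.

```python
from collections import defaultdict

class Database:
    def __init__(self):
        self.by_functor = defaultdict(list)  # constant functor -> clauses
        self.all_clauses = []                # fallback for variable functors

    def assert_clause(self, clause):
        self.all_clauses.append(clause)
        functor = clause[0]
        if not functor.startswith("?"):      # index only constant functors
            self.by_functor[functor].append(clause)

    def candidates(self, goal):
        """Clauses worth trying to unify with the goal."""
        functor = goal[0]
        if functor.startswith("?"):          # variable functor: full scan
            return self.all_clauses
        return self.by_functor[functor]      # quick table lookup

db = Database()
db.assert_clause(("parent", "liz", "bill"))
db.assert_clause(("likes", "Brian", "potstickers"))
print(db.candidates(("parent", "?x", "?y")))        # only the parent clause
print(len(db.candidates(("?rel", "liz", "bill"))))  # 2: full scan
```

This keeps the common constant-functor case fast without forbidding goals like (?relation ?person bill).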

For many logic programs one can understand them both procedurally and declaratively. Sometimes instead of debugging a predicate procedurally one looks at the code and asks "Is this saying something true?"

But why does the logic have to be first order to do that? Mathematicians evaluate the truth of higher order relations all the time.

Why not allow any number of left arguments, if you're going to distinguish the relation as an upvar?

But in a perfect world I could say

?list items from ?a to ?b are ?result

and you'd make a relation named _ items from _ to _ are _. But the user would refer to it the way I wrote it above. I know, upvars would have to be more flexible.

Because people have figured out how to evaluate the Horn clause subset of first-order logic very efficiently. And I'm pretty sure higher-order logic is computationally expensive to evaluate.

Yeah. I remember AI people at both MIT and Stanford working on that sort of thing in my even more ignorant youth. But still, one should be able to use easy methods on easy problems and still have hard methods for hard problems. (Not so much conceptually hard, I think, as just exponential time.)

OK. I was going to look into that option anyway.

There’s practical reasoning behind that: dropping a reporter block onto a double arrow ( ◁▷ ) that is followed by at least one slot in the same line is rather difficult, or close to impossible (granted, one may make a copy of the block, paste that onto the arrow, and subsequently delete the original). Therefore I only use multi-inputs at the end of a line - if necessary I’ll insert a line break, which IMO isn’t suitable here. OTOH, it’s not very likely that anyone is going to use an input list here. So I’ll reconsider.

… back to SICP :wink:

In order to make a reasonably efficient logic programming interpreter, I’ll need to implement memoization. I edited some code I used earlier for memoizing recursive operations:


(corrected version; the original contained a bug)

When I use this for storing solutions to a goal (“relation” will be item 1 of the arguments), I need to make sure that the number of arguments for that type of relation is always the same, for this algorithm to work. So if a type of relation has a variable number of arguments, I may have to insert the arity as an extra argument for memoization.

The function uses the following simple lookup function:

I would have preferred to use item (key) of (table) - unfortunately that doesn’t work for numbers, so I made this one up.

You can have the value part of each kv-pair be itself a pair consisting of a value and a sub-dictionary. Then, if there are no more levels you report the value part, and if there are more levels you do the recursive call on the sub-dictionary part. (Pretty sure this is in Chapter 3. ;~P)
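For illustration, a minimal Python sketch of this nested-dictionary scheme (function names and the tuple-of-keys interface are assumptions, not the Snap! code): each key maps to a [value, sub-dictionary] pair; with no keys left we report the value part, otherwise we recurse into the sub-dictionary.

```python
def trie_insert(keys, value, table):
    """Store value under a tuple of keys in a nested dictionary."""
    entry = table.setdefault(keys[0], [None, {}])
    if len(keys) == 1:
        entry[0] = value                      # last level: set the value part
    else:
        trie_insert(keys[1:], value, entry[1])  # recurse into sub-dictionary

def trie_lookup(keys, table):
    """Look up a tuple of keys; None if absent."""
    if keys[0] not in table:
        return None
    value, subtable = table[keys[0]]
    if len(keys) == 1:
        return value
    return trie_lookup(keys[1:], subtable)

memo = {}
trie_insert(("grandparent", "liz", "bill"), True, memo)
print(trie_lookup(("grandparent", "liz", "bill"), memo))  # True
print(trie_lookup(("grandparent", "liz", "ann"), memo))   # None
```

Because each node carries both a value slot and a sub-dictionary, a key sequence can also be a prefix of a longer one, which is what makes variable-arity relations workable in this structure.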

I searched SICP for multilevel memoization, but didn’t find it. Pretty sure that what you suggest is exactly what my code does, though.