Logic programming library development log

Circa 1981, John McCarthy visited the Uppsala Programming Methodology and AI Lab (UPMAIL) for a week. He wanted to understand Prolog and, of course, started implementing it in Lisp. Since I had been researching logic programming for over a year at that point, he would often ask me questions about Prolog. But when he asked about "cut", I both knew it and didn't know it: I thought it was so ugly that I was busy designing alternatives (like lazy bags of solutions). Still, I was embarrassed that I couldn't answer his questions about cut very well.

P.S. I was creating LM-Prolog (GitHub - thezerobit/lm-prolog: LM-PROLOG from http://www.sics.se/~matsc/lm-prolog.tar.gz) - Prolog for the Lisp Machine - and I'm pretty sure I never implemented cut (or, if I did, it says something that I don't remember it).

Some approaches I’m considering (both for efficiency and so as to avoid infinite loops):

  • (temporarily) storing intermediate solutions as facts;
  • stopping the database search once a goal without variables has been proved;
  • organizing the database into categories of facts vs. rules (facts first), by arity, and using trees;
  • keeping track of previous steps, and making sure they aren’t repeated.

I wonder if a “cut” will be necessary at all once these features have been implemented.
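The third idea above - indexing the database by predicate and arity, with facts kept ahead of rules - could look something like this sketch (a toy Python illustration; all names here are made up, not taken from the actual interpreter):

```python
# Hypothetical sketch: a clause database indexed by (predicate, arity),
# storing facts separately from rules so facts are always tried first.

class Database:
    def __init__(self):
        self.facts = {}   # (name, arity) -> list of argument tuples
        self.rules = {}   # (name, arity) -> list of (head, body) pairs

    def add_fact(self, name, *args):
        self.facts.setdefault((name, len(args)), []).append(args)

    def add_rule(self, head, body):
        name, args = head[0], head[1:]
        self.rules.setdefault((name, len(args)), []).append((head, body))

    def candidates(self, name, arity):
        # Facts first, then rules: ground answers are found before
        # any rule body has to be expanded.
        key = (name, arity)
        for f in self.facts.get(key, []):
            yield ('fact', f)
        for r in self.rules.get(key, []):
            yield ('rule', r)

db = Database()
db.add_fact('parent', 'alice', 'bob')
db.add_fact('parent', 'alice', 'carol')
db.add_rule(('sibling', '?a', '?b'),
            [('parent', '?p', '?a'), ('parent', '?p', '?b')])

print(list(db.candidates('parent', 2)))
```

Looking up candidates for a goal then only scans clauses of the matching name and arity, instead of the whole database.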

What are you hinting at?

A Prolog top-level command line typically shows the first solution, with an interface for asking for the next one or aborting the search. The search stops when the set of subgoals is empty.

Lots of interesting examples work fine without cut.

I suppose the search will only be aborted once a solution has been found for the top-level goal - so until that milestone the search will be exhaustive at deeper levels (?)

(At least, I think that's what Brian is referring to.)

Huh. Doesn't that mean that there are SOME things @qw23 could make that could NOT be done because of this?

Perhaps that’s what @bh was hinting at, IDK. We may still hear it from the :horse:’s mouth.

However that may be, just now I added a mechanism that will take care of infinite :loop:s as much as reasonably possible. It remembers all goals “under search” - if it encounters one of them again, it will return the empty list¹. This way it will not try to answer a question by trying to answer that same question.

Another mechanism I added is memoization. Solutions to goals, once found, are remembered¹. This will save processing time, especially when doing complicated searches.

Note 1: I’m aware in the current software version the mechanisms are somewhat primitive and will not work with parallel processes. This will be fixed eventually.
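Both mechanisms can be sketched together in a few lines. This is a deliberately simplified, propositional Python illustration (ground goals only, no unification; the rule set and names are made up); as in the interpreter, the empty list denotes failure:

```python
# Sketch of two mechanisms: (1) a goal already "under search" fails
# immediately, breaking cycles; (2) proved goals are memoized.

RULES = {
    'mortal': [['human']],
    'human': [['mortal']],        # circular with the rule above
    'greek': [[]],                # a fact: empty body
    'philosopher': [['greek'], ['human']],
}

memo = {}

def solve(goal, in_progress=frozenset()):
    if goal in memo:
        return memo[goal]         # solution remembered from earlier
    if goal in in_progress:
        return []                 # empty list = failure; cycle broken
    for body in RULES.get(goal, []):
        # all() over an empty body succeeds, so facts prove themselves
        if all(solve(sub, in_progress | {goal}) for sub in body):
            memo[goal] = [goal]   # remember the found solution
            return [goal]
    return []

print(solve('philosopher'))   # succeeds via the 'greek' fact
print(solve('mortal'))        # fails: only a circular derivation exists
```

Without the `in_progress` check, the mortal/human pair would recurse forever; with it, the circular branch simply fails and other branches can still succeed.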

The way I think of it is as a depth-first search. When a solution is found and displayed to the user we are still deep in the computation. If the user indicates they want the next solution it is as if the current goal failed and backtracking happens as normal.
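In Python terms, a generator captures this nicely: the suspended state of the depth-first search is kept alive between answers, and asking for the next solution simply resumes it (a toy example, not the actual interpreter):

```python
# Each next() call acts like the user typing ";" at a Prolog prompt:
# the current answer is "failed" and the search resumes where it left off.

def solutions():
    # depth-first enumeration of pairs whose sum is 5
    for x in range(6):
        for y in range(6):
            if x + y == 5:
                yield (x, y)   # suspend mid-search, deep in the loops

search = solutions()
print(next(search))   # (0, 5) - first solution; the search is paused
print(next(search))   # (1, 4) - resumed backtracking finds the next one
```

The point is that no extra machinery is needed: the paused loops *are* the saved state of the search.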

I updated the project; it's now at version 1.3. I had discovered a nasty error that was impeding more complicated searches.

The Prolog-ish¹ interpreter can find Fibonacci numbers now, more or less in linear time:

  • I tried fib (20); it took my low-end PC less than 4 seconds (with Turbo mode engaged);
  • fib (100) was calculated in about 17 seconds.

Impressive? Well, not for a well-designed memoization scheme², but for a basic logic programming interpreter it's not bad at all, even if I do say so myself. :smirk:

Link: see post #19.

Note 1: I may rename it "Amalog", myself being an amateur creator, not a pro.
Note 2: Even in Snap!, fib (2000) may be calculated recursively, with full precision (Bignums library), within 1 second.
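For what it's worth, the comparison in Note 2 is easy to reproduce in Python, which also has arbitrary-precision integers: with memoization, fib of 2000 becomes an effectively linear chain of cached calls (this is a generic sketch, not the Snap! or Amalog code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # naive tree recursion, tamed entirely by the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

for i in range(2001):
    fib(i)   # warm the cache bottom-up so no call recurses deeply

print(fib(20))              # 6765
print(len(str(fib(2000))))  # fib(2000) has 418 digits
```

Without the cache the same definition takes exponential time; with it, each fib(i) is computed exactly once.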


Depth-first search: that sounds quite complicated from an implementation point of view. I don't think I will want to do that. Or am I missing a point?

Indeed. You can't make a perfect infinite loop detector, even if you have quantum computers or whatever big new thing they invent next.

Is that the right thing? Or should it throw a failure? (These are real questions, not Teacher questions, i.e., I don't know the answer.)

Provided that you un-remember them when a failure is thrown!

Whaddya mean, "even in Snap! "? :~P

Yah. When you do a tree-recursive call (might happen more than once for one parent call, e.g., the two recursive calls in FIB), you are doing a depth-first search of the implicit tree of calls. That tree isn't an explicit data structure in your code; it consists of a bunch of stack frames in the interpreter as your program runs. All the work is done by those recursive calls. So, you don't have to do anything to implement this depth-first search.
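This can be made visible by instrumenting a tree-recursive fib (a small Python sketch): the order in which calls are entered is exactly a depth-first, leftmost-first traversal of the implicit call tree, and nothing in the code manages that traversal explicitly:

```python
# Record the order in which recursive calls are entered: it is a
# depth-first walk of the implicit call tree, done by the stack alone.

trace = []

def fib(n):
    trace.append(n)   # note the moment this node of the tree is visited
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(4)
print(trace)   # [4, 3, 2, 1, 0, 1, 2, 1, 0] - leftmost branch first
```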

Now, if you wanted a breadth-first search, then you'd have to do some explicit work to make it happen.
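By contrast, visiting that same implicit fib call tree breadth-first really does take explicit bookkeeping - a queue of pending nodes (again a toy Python sketch):

```python
from collections import deque

def bfs_order(n):
    # Visit the implicit fib call tree level by level: the queue is
    # the "explicit work" that depth-first recursion gets for free.
    order, queue = [], deque([n])
    while queue:
        k = queue.popleft()
        order.append(k)
        if k >= 2:                 # internal node: two child calls
            queue.extend([k - 1, k - 2])
    return order

print(bfs_order(4))   # [4, 3, 2, 2, 1, 1, 0, 1, 0] - level by level
```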

True, but that doesn't mean one shouldn't try. I suspect that one can detect non-termination for all naturally occurring programs (i.e. ones that humans construct for any purpose other than to defeat the detector). But this might be very very hard to do in general.

If someone is going to try to calculate fib (n) as fib (n+2) - fib (n+1), though logically correct ... I'm afraid they are on their own. :smirk:
For now, I'm happy with detecting the 80% or so of unintentional yet more or less trivial infinite loops, such as the one caused by:

sibling (?a, ?b) :- sibling (?b, ?a)

I'm not sure whether I found a good-enough mechanism for this, though, i.e. one that will prevent most unintentional infinite loops, while not rejecting correct solutions.

In my implementation (as in the example MacLisp tiny Prolog interpreter of @toontalk's former colleague (?)) an empty list denotes failure.

Do you mean: if the fact or rule that the memory was based on, is retracted? I didn't implement any retract mechanism (yet).
If otherwise: please explain.

:wink:

I can see the latter. What I fail to see is how the process (executing my program) will know immediately that the first solution has been found. In my current implementation it won't know there has been a hit until returning to the root of the tree, as there may be additional conditions at any intermediate level. Or perhaps an extra argument must be passed to each call within the process, indicating whether all conditions at each level have been satisfied. (?)

Please explain: are you talking about any oofs this could lead to, or not?

Yes, please do.

Suppose someone wanting to calculate fib (3) using this unusual (though mathematically correct) algorithm:

  1. fib (3) = fib (5) - fib (4)
  2. fib (4) = fib (6) - fib (5)
  3. fib (5) = fib (7) - fib (6)
    (etc.)

They'd be busy forever, wouldn't they?
But then, trying to calculate fib (n) like this is so stupid that we'd have to assume it was intentional - so why build in a feature to prevent it?

True, I thought you were talking about numbers that weren't (x ≥ 0).

I hadn’t thought of that. Negative Fibonacci numbers don’t have a physical meaning, but neither do positive ones, really (the whole rabbit story was, ultimately, nothing but a fabrication). What I find fascinating is how the Fibonacci series approaches an exponential curve as the argument moves towards positive infinity, and oscillates for negative arguments.

Huh, I thought it was the same as the positive ones, but with a negative sign on every other one. Then again, that's probably what you mean by "oscillates". Of course, what about non-integer Fibonacci numbers?

AFAIK they’re undefined … but perhaps we can move back to this topic’s subject. Did you try writing a logic program to be run by my interpreter?

I think there was a project that plotted the output in the complex plane, so they're not really UNDEFINED, per se.

No, I don't even know how.

Sure. How about a sudoku solver?