Logic programming library development log

That would be a sensible limitation, I agree.

I hadn't thought of that yet. But OK, your suggested coping strategy sounds like a plan. It's going to require lots of formula manipulation ... recursive as hell, I guess. (love it)

Nice! Does its implementation involve streams? :wink:

Only conceptually. It was specially programmed instead of using a stream abstraction. Maybe someone wants to try this with the new streams library?

It's not just a good idea, it's the law. :~)

If you read that Wikipedia article, the whole notion of computable numbers is pretty austere if viewed as an actual representation of specific numbers. Basically, numbers are represented as programs, but that notation really doesn't help you know even approximately what the number is. If you know that 𝜋 is about 3.14, or even that it's between 3 and 4, that gives you a sense of the numerical relationship between the diameter and the circumference of a circle. If you look at (but don't run) a program to compute 𝜋, for all you know, it could be 300 or 30,000.
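
To make that concrete, here's a little Scheme sketch (mine, not from the article): a procedure that denotes one perfectly specific number, yet whose text tells you essentially nothing about its magnitude.

(define (mystery n)
  ;; sums the first n terms of an alternating series, then scales by 4;
  ;; nothing in the source says the limit is anywhere near 3
  (let loop ((k 0) (sum 0.0))
    (if (= k n)
        (* 4 sum)
        (loop (+ k 1)
              (+ sum (/ (expt -1 k) (+ (* 2 k) 1)))))))

;; (mystery 1000) => 3.14059...  -- it's the Leibniz series for pi,
;; but you'd only learn that by running it (or doing the math).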

And in particular, by looking at programs, you can't tell which of two numbers is bigger. Quick, which is bigger, 𝜋 or √10? Even those representations, let alone programs, don't tell you. But if you look at the latter in floating point, ≈ 3.162, you can see instantly that it's bigger than 𝜋.
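
(In any Scheme, for instance, the floating-point comparison is immediate:)

(sqrt 10)       ; => 3.1622776601683795
(* 4 (atan 1))  ; => 3.141592653589793, i.e. pi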

As a way for mathematicians to develop intuitions about what it means for a number to be computable, representing numbers as computer programs is indeed very useful and straightforward. But as a way to develop intuitions about any particular number, I'll take floating point, thank you.


As the article explains, you can't usefully think about the computation of a real number as cranking out digits one by one, because half the time your representation will be inaccurate--worse than floating point! Inaccurate is worse than imprecise.

3          accurate
3.1        accurate
3.14       accurate
3.141      inaccurate, should be 3.142
3.1415     inaccurate, should be 3.1416
3.14159    accurate
3.141592   inaccurate, should be 3.141593
3.1415926  inaccurate, should be 3.1415927
and so on.
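
If you want to reproduce those rows yourself, here's a quick Scheme check (pi hard-coded to enough digits; chop and near are my names, not standard ones):

(define pi 3.14159265358979)
(define (chop n) (/ (floor (* pi (expt 10 n))) (expt 10.0 n)))  ; truncate to n decimals
(define (near n) (/ (round (* pi (expt 10 n))) (expt 10.0 n)))  ; round to n decimals

;; (chop 3) => 3.141  but  (near 3) => 3.142  -- the first inaccurate row
;; (chop 6) => 3.141592  but  (near 6) => 3.141593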

This is why the actual representations used in the theory involve bracketing the desired number between a smaller rational and a larger one.
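
In Scheme, the bracketing idea might look like this (a sketch; pi-trunc is a hard-coded stand-in for a real program that computes 𝜋, and for a positive number its k-digit truncation is automatically a lower bound):

(define (pi-trunc k)   ; stand-in: pi truncated to k decimals
  (/ (floor (* 3.14159265358979 (expt 10 k))) (expt 10 k)))

(define (bracket truncated-to k)
  ;; lower bound = the truncation; upper bound = one unit of the last digit more
  (let ((lo (truncated-to k)))
    (cons lo (+ lo (expt 10 (- k))))))

;; (bracket pi-trunc 3) => (3.141 . 3.142)  -- the true value is pinned in between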

EDIT: It's also why IEEE floating point requires the computing hardware to maintain two more bits of precision than the number of bits visible to users in the computer's memory. One extra bit isn't enough, but two are.

It seems to me that the problem with computing pi is due to base 10; base 2 should be fine. I found this in the Wikipedia article and I don't understand what it means by "binary decimal".

But I guess most people don't want to see the digits of pi in binary. Something like 11.00100100001111110110101

No, even in binary, chopping off the first n bits will give an inaccurate answer half the time, for the same reason as in decimal. If you chop off your 𝜋-in-binary at 11.00, that's inaccurate; it should be rounded to 11.01 (closer to 3¼ than to 3).
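
(Quick arithmetic check in Scheme:)

(abs (- 3.14159265 3))     ; => 0.14159265  -- distance from 11.00
(abs (- 3.14159265 3.25))  ; => 0.10840735  -- distance from 11.01, which is closer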

As for "binary decimal," they mean base-2 but including the fraction part. I.e., they mean floating point. :~) Well, the "floating" part means it'd be 0.1100100100001111110011111×2². "Binary decimal" means what you wrote.

Would that be the Indiana 𝜋 bill? :wink:

As for the discussion on precision in working with irrational numbers (or computable numbers, if you prefer):

  • FWIW: I agree with bh that base doesn’t matter, fundamentally;
  • Whether e.g. 3.141 or 3.142 is a valid 3rd decimal representation of 𝜋
    I consider a matter of semantics. The first (3.141) would be a lower bound approximation, or - equivalently - a truncated version of the infinite stream of numerals representing π in base 10. The second representation (3.142) would be π rounded to the nearest 3rd decimal.
  • more interesting, IMO, is how to compare two irrationals whose values are exactly specified, e.g. (bh’s example): 𝜋 vs. √10.
    A first approach might be analytical, with a limited number of queries, and three possible findings: true, false, and undecided.
    The second approach, if the first one reports “undecided”, would have to be numerical. Assuming upper bounds on the errors can be calculated for each approximation, and given the definition of computable numbers, different numbers can always (?) be told apart in finite time; however, for two numbers that are actually the same this may not always be provable in finite time; thus some sort of search limit must be put in place, and “undecided” will still be a possible result (see the sketch below).
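
Here's roughly what I have in mind for that numerical approach, as a Scheme sketch (every name here is made up; a "number" is a procedure that, given k, returns a pair of rationals bracketing the value to within 10^-k):

(define (compare< a b limit)
  ;; #t if a < b, #f if b < a, 'undecided if the brackets still
  ;; overlap when the search limit is reached
  (let loop ((k 1))
    (if (> k limit)
        'undecided
        (let ((ia (a k)) (ib (b k)))
          (cond ((< (cdr ia) (car ib)) #t)    ; a's bracket entirely below b's
                ((< (cdr ib) (car ia)) #f)    ; b's bracket entirely below a's
                (else (loop (+ k 1))))))))    ; overlap: refine and retry

(define (bracket-around x)                    ; demo brackets built from a float
  (lambda (k)
    (let ((eps (expt 10.0 (- k))))
      (cons (- x eps) (+ x eps)))))

;; (compare< (bracket-around 3.14159265) (bracket-around 3.16227766) 20) => #t
;; (compare< (bracket-around 3.14159265) (bracket-around 3.14159265) 20) => undecided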

I’m not a mathematician by training, so correct me if I’m wrong.

(a) "Semantics" means "meaning." So, "a matter of semantics" doesn't mean what you think it means. (This is not just me being an old fogey complaining about changes in language. I think using "semantics" to mean "syntax" (yes; think about it) really makes it hard for us to know what we're talking about!)

(b) Round() [why doesn't markup turn that into a big circle? :~/] and floor() are not pretty-much-equivalent representations of a number. Floor() is a very useful function, but as a representation of a number it's just plain wrong. Any finite representation of an irrational number is bound to be imprecise, but within the limitation of its precision, it should be accurate: it should be as close to the actual value as any such representation can be. That describes round(), but not floor().

One more thing we can agree on. :smirk:
Oops, I had overlooked “not”

I'm thinking about how irrational numbers should be displayed. Consider the following.

It would seem that if the print function, when displaying N digits, "peeked" at the (N+1)st digit, rounded, and displayed the N digits, it would always be accurate for that representation (i.e., that number of digits). It might seem odd to see the last digit change sometimes as one "zoomed in", but that's OK, right?
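
Something like this, say in Scheme (pi-digits is a made-up stand-in for a computable number that hands out truncated digits):

(define (pi-digits k)   ; pi truncated to k decimals, exactly; valid for k <= 14 here
  (/ (floor (/ 314159265358979 (expt 10 (- 14 k)))) (expt 10 k)))

(define (show-rounded digits-of n)
  ;; fetch one digit more than we display, then round it away
  (exact->inexact
   (/ (round (* (digits-of (+ n 1)) (expt 10 n)))
      (expt 10 n))))

;; (show-rounded pi-digits 4) => 3.1416, where plain truncation gives 3.1415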

Alternatively it could display the upper and lower bounds or the average of the bounds followed by the appropriate "plus or minus" expression.

Sorry about bringing up binary - I was confused.

I think that's right. I'm trying to remember why you have to keep two extra bits in your floating point unit, but that's not for displaying a number; it's for arithmetic operations on two inputs, so who knows. (I mean, I know who knows, but it isn't me.)

Even preceding digits might change when you zoom in, depending on the number being approximated (e.g. 30.99982something will be approximated as 3?, 31, 31.0, 31.00, 31.000, 30.9998, …). You can, of course, wait until all of the unveiled digits correspond with the closest n-digit approximation of the number; with this example that would imply skipping 31, 31.0, 31.00, and 31.000 - so jumping from 3? to 30.9998.
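
For instance (Scheme, with nearest as my ad-hoc name for "round to n decimals"):

(define x 30.99982)
(define (nearest n) (/ (round (* x (expt 10 n))) (expt 10.0 n)))

;; (nearest 0) => 31.0    (nearest 3) => 31.0 (displayed as 31.000)
;; (nearest 4) => 30.9998 -- zooming in flips the leading digits from 31 to 30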

Elaborating on my argument of "semantics" (warning: pseudocode!):

IF ((you consider "zooming in" as a process of unveiling an infinite tape of digits that was always there in the first place) OR ((your output device could be a printer) AND (you want to consistently show one extra digit at a time))) THEN the lower bound approach is the only option.

Umm yeah I guess, although I don't get the part about the printer, which also can't print the entire infinite tape. But my takeaway from all that is that the infinite tape of digits is a flawed metaphor, mainly because it encourages confusing numbers with numerals. ("Digits" aren't properties of a number.) Maybe it has to be two tapes, with the range of values at n-digit precision?

But our concerns are different. I'm not really thinking about visualization; I'm thinking only about accuracy. Probably Tufte could figure out how to satisfy all our constraints.

Of course it can't; the tape is imaginary, like an infinite stream. My point is a printer can't erase what's already been printed.

Indeed I've assumed @toontalk's point - it's not really mine, I've just been thinking along with him - is about visualisation. But then I may be wrong.

I'm unclear what accuracy is about. I am assuming that @qw23's earlier comment is correct:

Assuming upper bounds on the errors can be calculated for each approximation, and given the definition of computable numbers, different numbers can always (?) be told apart in finite time; however, for two numbers that are actually the same this may not always be provable in finite time; thus some sort of search limit must be put in place, and “undecided” will still be a possible result.

What do we want to do with computable numbers? (1) compare them, (2) display them, and (3) operate on them (which I'm guessing is straightforward). How does accuracy fit in, except when visualizing them?

Oh. I'm not thinking about computable numbers per se at all. By definition, all the numbers we show users are computable. My concern is that computable numbers are first of all numbers; they have specific values. And I want not to lie to users, not even a little bit; whatever representation we show users should be the closest we can get, at that precision, to the underlying numeric value. So, users should never see 3.1415 as a representation of 𝜋 to five digits.

All this zooming-in stuff is not my department. ("Once the rockets are up...") Just, looking out for little lies.

I coded all of SICP section 4.4 in Snap! I'm currently struggling with where to start and how to make it work with a sensible data structure (I'm a bit intimidated by 67 blocks of code). I think I'd better not implement full Prolog, but rather Datalog with a few extensions.

Didn't you already implement Prolog?

I think, as far as taking over the world is concerned, people will better understand what you're about if the queries, assertions, and rules look like Prolog syntax. But, you know, having logic programming will be such a big win compared with not having logic programming that I'm not fussy. I just want to make sure that the successful variable bindings are available to the program that calls your library, so it's a real integration. (And we should talk about loading and saving databases.)
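
Just to illustrate what I mean by "available to the program that calls your library" (a made-up representation, not a spec for your library): in Scheme terms, a query could hand back the successful bindings as a list of frames, each frame an association list.

;; say the query was (parent ?x alice) and two rules matched:
(define sample-result
  '(((?x . bob))
    ((?x . carol))))

;; the caller can then pick the bindings apart itself:
(map (lambda (frame) (cdr (assq '?x frame))) sample-result)
;; => (bob carol)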

What I built was (coincidentally) actually more like Datalog, a subset of Prolog.
And I don’t mean to use Datalog’s usual bottom-up evaluation model - nothing wrong with Prolog’s top-down query model IMO.
What I really mean is I’m not striving for a full Prolog implementation - perhaps eventually, in a second release.

Gotcha. Awesome.

An anomaly from SICP, section 4.4.4.5 (it occurs twice):

(define (fetch-assertions pattern FRAME)
  (if (use-index? pattern)
      (get-indexed-assertions pattern)
      (get-all-assertions)))

(define (fetch-rules pattern FRAME)
  (if (use-index? pattern)
      (get-indexed-rules pattern)
      (get-all-rules)))

Now, am I missing something, or is frame a superfluous argument?