Bignums/random need rounding

I don't know whether Bignum is considered part of Snap! itself, but...

When I pass the report of RANDOM directly into a Bignum calculation, the result is 0. But when I round the report of RANDOM before passing it into the same calculation, it works. Press the green flag to reproduce.

Also, can the size of speech/thought bubbles be controlled? Bignums overflow to the sides and can't be seen; click the sprite to see correct calculations that don't fit in the speech bubbles.

Looks like it could be the random block that is the issue


On a whim, I tried wrapping the inputs to RANDOM in the Bignum library's identity function, hoping that if RANDOM's inputs were forced to be Bignums, it would cooperate. But no joy.

Also, the bounds I'm putting into RANDOM are not bignums; the problem reproduces when passing random(2, 10) as an input to the Bignum calculation. It seems like that should work OK, up to MAX_INT32 or 64 or whatever is in play in Snap!.

Adding @djdolphin to the conversation.

RANDOM with bignums on should report an integer, and it seems to do so. ROUND, however, seems to report a float when given a bignum input. I think this is a bug in the bignum library (not converting ROUND to bignum mode). But it means that for actually big numbers, at least, the result of ROUND will be truncated to however many digits floats have these days. If I replace the ROUND in your project with FLOOR or CEILING, which do handle bignums correctly, your algorithm still produces 0 as the answer.
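The truncation described can be seen with plain JS doubles. A hedged illustration (this is not the library's actual code, just the underlying float behavior):

```javascript
// Forcing a genuinely big integer through a 64-bit double keeps only
// about 15-17 significant digits, so the low-order digits are lost.
const exact = 123456789012345678901234567890n; // a 30-digit bignum
const viaDouble = BigInt(Number(exact));       // round-trip through a float
console.log(viaDouble === exact);              // false: precision was lost
```

This is why a ROUND that drops to native floats silently corrupts actually-big numbers even though small ones survive.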

All this makes me diffidently suggest that something in your algorithm relies on the limited precision of floating point, and 0 really is the right answer for exact integers. Because the only bug I can find in bignums, namely ROUND, is the one that gives you the answer you're expecting.

How diffident!!

I'm pretty sure the bug/unjustified assumption is not in my code, but I'm open to the possibility. Let me work on a simpler script that makes the issue more transparent...

OK, check out this simpler script, with a recursive a^n function that assumes n is a power of 2.
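The project itself isn't reproduced here, but a recursive power function of the shape described, sketched in JavaScript (the name is invented for illustration), might look like:

```javascript
// a^n by repeated squaring, assuming n is a power of 2 (n >= 1).
function powPow2(a, n) {
  if (n === 1) return a;
  const half = powPow2(a, n / 2);  // n/2 is exact since n is a power of 2
  return half * half;
}

console.log(powPow2(2, 8));  // 256
```

The point of the simplification is that each recursive step only squares, so any float contamination in the input propagates straight through to the result.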

RANDOM with bignums on should report an integer, and it seems to do so. ROUND, however, seems to report a float when given a bignum input.

@cymplecy demonstrated above that RANDOM returns floats with Bignum inputs.

Also note that I get problems when I give small ints to RANDOM and then pass that small random number as an input to my Bignum calculation.

When I ROUND the small random before passing it on, the problem is solved. So that seems to locate the problem in RANDOM.

OK, it appears there are several problems here:

  1. RANDOM returns an inexact Scheme number, which makes all the following calculations happen with floating point numbers. FLOOR and CEILING also report inexact numbers when the input is inexact, so those also produce weird outputs. But if you replace ROUND with SCHEME NUMBER [exact] OF, you'll see the calculations work as expected.
  2. For some reason, when the inexact Scheme numbers get too big, calculations are returning 0 instead of Infinity.
  3. Since there's no bignum version of the ROUND block, it's returning a regular JS floating point number. When one of these is encountered, the number is automatically coerced to an exact Scheme number, making the calculations work as expected.
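The exact/inexact contagion described in these points can be sketched roughly like this (all names here are invented for illustration; the real library is schemeNumber.js):

```javascript
// Toy model of exactness contagion: an exact number holds a BigInt,
// an inexact one holds a JS double.
function makeExact(n)   { return { exact: true,  value: BigInt(n) }; }
function makeInexact(n) { return { exact: false, value: n }; }

// Multiplication: a single inexact operand makes the result inexact,
// forcing the whole computation into floating point.
function mul(a, b) {
  if (a.exact && b.exact) return { exact: true, value: a.value * b.value };
  const av = a.exact ? Number(a.value) : a.value;
  const bv = b.exact ? Number(b.value) : b.value;
  return { exact: false, value: av * bv };
}

console.log(mul(makeExact(3), makeExact(4)));    // exact 12n
console.log(mul(makeExact(3), makeInexact(0.5))); // inexact 1.5
```

That's why one inexact RANDOM result is enough to poison every later step of the calculation.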

OK, that kind of makes sense. So is there a bug to be fixed, or (now that your answer is on the web) is it documented and thus a 'feature'?

The RANDOM and ROUND blocks definitely should not behave this way.

Calculations with big, floating point numbers returning 0 seems to be the fault of the Scheme numbers library, and I'm not sure if that's intended or not. But I'd hope not, since it doesn't make much sense...

Well, at this point the behavior is noted and reproduced; I'll leave it in y'all's capable hands to hopefully someday get around to fixing it. For now the workaround of ROUND gets me where I need to be, and I realize a fix will depend on a volunteer. If the bug is still around in a couple of years, maybe even I'll dig in and try to fix it.

Oh, Dylan will fix it. (He hasn't realized yet that when he's 47 years old he'll still be getting bug reports about bignums -- don't tell him!)

A quick read through schemeNumber.js seems to indicate that the R6RS imperial colonizers mucked with the numeric tower specification, so it's probably their fault.

But, a "big, floating point number" is just an IEEE double, right? It's not a special Scheme thing? So I don't understand where the 0 comes from. I mean, it shouldn't really be running library code for the actual arithmetic.

The problem that started all this discussion, I think, is that the library considers "3" (including the quotes) exact, but considers 3 inexact, because it declares all native numbers inexact. I think that's a mistake, at least for our purposes. Native integers should be marked exact. Do you agree, @djdolphin? They're trying to have a contract loophole in case that integer turns out to be the result of an overflow or something.

Hey, I'll be happy if this library somehow survives 30 years of technology churn.

Not sure. It's just a wrapper around a native IEEE-754 double-precision float, but some of the arithmetic operations may be custom.

I already overrode the library's default behavior with native JS types. Both "3" and 3 are coerced to exact numbers in order to handle native numbers from blocks like COSTUME #, etc.

If you have BIGNUMS enabled, the only reason you may encounter floating point arithmetic instead of exact arithmetic is if you get a hold of an inexact Scheme number. The only way to get one is if you explicitly ask for it with the inexact function, or if there's a mistake in the library as in RANDOM.

So a float isn't a "Scheme number"? That seems weird; in Scheme, Scheme numbers include floats, e.g., what you get from (sqrt 2).

Well, there's a regular JS float and a Scheme number version of it. I believe the "Scheme" version is just the JS object {_d: <put a JS float here>} with a prototype that adds some special methods. Normally the Scheme numbers library would treat the JS and "Scheme" versions the same, but I overrode that behavior to make it try to turn JS floats into exact numbers where possible.

A lot of primitives return JS floats, but you should usually only get a "Scheme" float if you ask for one. A single floating point operand contaminates whatever calculations you're doing, giving you a floating point result, so I think quietly turning JS floats into exact numbers is the most reasonable behavior. It makes them play nice with the other, mostly exact numbers.
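The coercion described could be sketched like this (the field name `_d` comes from the discussion above; everything else is invented for illustration, not the library's actual code):

```javascript
// Plain JS numbers with an integral value get promoted to exact integers;
// numbers with a fractional part become a "Scheme" float wrapper.
function toSchemeNumber(x) {
  if (typeof x === "number") {
    if (Number.isInteger(x)) return { exact: true, value: BigInt(x) };
    return { _d: x };  // non-integral: stays a (wrapped) float
  }
  return x;            // already a Scheme number; pass through
}

console.log(toSchemeNumber(3));    // exact 3n
console.log(toSchemeNumber(2.5));  // { _d: 2.5 }
```

Under this scheme a block like COSTUME # can report the JS float 1 and still participate in exact arithmetic.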

Does that make sense? If it's possible to get an exact answer (i.e., the answer is rational), then the library should try to do the computation in exact rationals in the first place, rather than taking a float as being exact.

Because floats have a fixed width, there are only finitely many of them. (I suppose you could have extra-long floats of arbitrary precision; maybe that's what R6 did. But even so, there'd only be countably many of them.) Since they are meant to represent the real numbers, of which there are uncountably many, it follows that every float must represent an infinite number of real numbers, namely, the ones in a range [min,max] where min and max are the smallest and largest real numbers to which this is the closest float. You can imagine the endpoints as being halfway between adjacent float values, although actually I think it's the geometric mean rather than the arithmetic mean.
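The point that every float stands for a whole range of reals is easy to demonstrate with plain JS doubles:

```javascript
// Above 2^53, the gap between adjacent doubles exceeds 1, so adjacent
// integers collapse onto the same float value.
const big = 2 ** 53;
console.log(big === big + 1);  // true: 2^53 + 1 rounds back to 2^53
```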

So for example, let's say you take the square root of a rational number. The answer will be rational if and only if numerator and denominator (in lowest terms) are both perfect squares. So the square root function can detect that case, and do the arithmetic in exact rationals, never generating a float. The same theory, basically, works for fractional exponents in general. I'm not saying you have to write this code; it should already be in the Scheme number library.
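A sketch of that idea in JavaScript, using BigInt for exactness (this assumes the input fraction is already in lowest terms, and is not the Scheme number library's actual code):

```javascript
// Integer square root via Newton's method on BigInts.
function isqrt(n) {
  if (n < 2n) return n;
  let x = n, y = (x + 1n) / 2n;
  while (y < x) { x = y; y = (x + n / x) / 2n; }
  return x;
}

// sqrt(p/q) is rational exactly when p and q (in lowest terms) are both
// perfect squares. Returns [num, den] in that case, or null, in which
// case a library would fall back to an inexact (float) result.
function exactSqrt(p, q) {
  const sp = isqrt(p), sq = isqrt(q);
  return (sp * sp === p && sq * sq === q) ? [sp, sq] : null;
}

console.log(exactSqrt(9n, 4n));  // [3n, 2n]
console.log(exactSqrt(2n, 1n));  // null: sqrt(2) is irrational
```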

So the library is right that if you're looking at a float, even if its value seems to be an integer, it should be labelled inexact.

It only takes integer-valued floats as being exact, and I think that does make sense. Floats with an integer value are visually indistinguishable from exact integers in Snap!. So if I get a float with the value 1 from the COSTUME number block, there's no reason it should force whatever calculations I do with it to use floating point arithmetic.

If you have a JS float that actually has a decimal point, on the other hand, that remains a float (though it's turned into a "Scheme" float).

EDIT: There's the alternative of overriding every primitive whose range is integers so that it returns an exact Scheme number instead of a JS float, but I don't think that's reasonable. Jens would hate it, and it would be way harder to maintain.

You already asked me to write that, and I already did. :slight_smile: It's not a part of the library, for some reason.

Oh. Well, if COSTUME reports a float, that's the problem right there. Is there some reason it doesn't report an integer?

JS only has integers

[edit] I meant floats, of course! (dunno where my brain is today, sorry!)