Is it dependent on the underlying OS / browser?
I'm sure that means something to you but it means nothing to me
Mainly it depends on your computer hardware. These days most new computers are 64 bits wide. So the biggest positive integer is (2^63)-1 and the smallest negative integer is -(2^63). (The asymmetry is because 0 takes up one of the positive slots.)
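For concreteness, the two's-complement bounds described above can be checked in any language with 64-bit integers; here is a quick Python sketch (Python integers are unbounded, so both values display exactly):

```python
# Signed 64-bit integer range: one extra value on the negative side
# because zero occupies one of the non-negative slots.
max_i64 = 2**63 - 1
min_i64 = -(2**63)
print(max_i64)  # 9223372036854775807
print(min_i64)  # -9223372036854775808
```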
The reason I'm asking is that I'm doing an Advent of Code puzzle and this one deals with large numbers
It's been running for a while, and a variable that I'm incrementing by 853 each iteration is now (occasionally) showing decimal points in the watcher
It's only at
which is well short of 2^63 (9223372036854776000)
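As an aside, 9223372036854776000 is not 2^63 exactly (that is 9223372036854775808); it is the shortest decimal string that rounds back to the same IEEE 754 double, which is how JavaScript prints it. A Python sketch, since Python floats are the same 64-bit doubles:

```python
# 2^63 is a power of two, so it is exactly representable as a double,
# but printed values are shortest round-trip decimals, not exact digits.
exact = 2**63                                # 9223372036854775808
print(float(9223372036854776000) == exact)   # True: both decode to the same double
```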
and it's running on a Windows 7 64-bit OS
Is it maybe a watcher issue?
If I just click on the variable reporter, it gives me 11026580822
Ugh, this sounds like a roundoff error. Are you using powers or logs? Or trig functions?
I've reduced my script to this, which shows the same issue
(takes about 30 secs on my machine before it starts showing up)
[edit 07:46 31Dec20 Just tried it out on a Raspberry Pi 4 (32 bit OS) and started showing the issue around same value of result [/edit]
But you are adding a huge number to a much smaller number. (The name "largest" is sort of misleading in this context!) I would have thought that the difference still isn't great enough for roundoff error to be possible, but I guess I'm wrong.
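The roundoff under discussion kicks in once a value passes 2^53, where doubles can no longer represent every integer. A minimal Python demonstration (Python floats are IEEE 754 doubles, like JavaScript numbers):

```python
# Doubles carry 52 fraction bits, so integers are exact only up to 2^53.
big = float(2**53)          # 9007199254740992.0, the last "safe" power of two
print(big + 1 == big)       # True: the +1 is rounded away entirely
print((big + 853) % 2)      # 0.0: results can only land on even integers here
```

Above 2^53 the gap between adjacent doubles is 2, so repeatedly adding a step like 853 starts landing between representable values.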
Oh well, it's been a long time since I knew anything at all about architecture, really.
Having played around, I think it might just be an issue with the watcher, as the variable itself always seems to report an integer, and when I convert it to a string using JOIN, there is no decimal point in it.
Interesting. I'm sure Jens will instantly know why the watcher has a different idea.
In my testing I was able to get numbers as large as 10^1024 with no noticeable roundoff error.
I used this script to generate the number
and when scrolling through I did not see any missing digits or trailing 0's, which I would expect would happen if there was roundoff error.
However, this is not really a number, simply a string of digits. Because Snap! does not have forced types, it is nearly impossible to distinguish a string of digits stored as a string from a collection of digits stored as a number.
However, with the number set to , performing seemed to result in the number being truncated, without any decimal points being kept.
Even more confusingly, performing does not remove any digits, but shifts the (display) formatting to scientific notation.
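The switch to scientific notation matches JavaScript's display rule: numbers at or above 10^21 print in exponential form even though the stored value is unchanged. Python's float repr behaves similarly at large magnitudes, so the effect can be sketched there:

```python
n = float(10**21)     # a 22-digit number, stored as a double
print(n)              # displayed in scientific notation: 1e+21
print(n - 1 == n)     # True: 1 is far below the rounding step at this size
```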
Could this be related to the way that Snap! does not have strict types, and this number was generated using ? This could explain why Snap! can store numbers 1000 digits long, but loses accuracy when divided, as it must be converted to a number rather than a string of digits. This could also be why the number is displayed raw at first, but when 1 is subtracted from it, Snap! converts the string to a number, which it realizes is over the 21-digit limit and shows in scientific notation. However, this is just speculation and could be wrong. Further testing is definitely needed.
Eh? You had 1024 digits, and now you have 16. (Actually it's more like 15½ decimal digits, because the last one might be off a little. It's 52 bits exactly.)
Internally, Snap!'s implementation distinguishes numbers from strings. The range of JOIN is strings; the range of arithmetic operations is numbers. The reason it doesn't seem like that to users is that the domains of these functions are extended; arithmetic operations accept a string of digits as a number, and string operations accept a number as a string of digits.
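That coercion is why the earlier 10^1024 experiment looked lossless: JOIN works on strings, so every digit survives, but the first arithmetic operation forces the value into a 64-bit double. A rough Python analogue of the distinction (not Snap!'s actual implementation):

```python
digits = "1" + "0" * 30        # a 31-character string of digits: exact as text
as_text = digits + "5"         # string operation: every digit still intact
as_number = float(digits) + 5  # arithmetic coerces to a double first
print(len(as_text))            # 32: nothing lost while it stays a string
print(as_number)               # 1e+30: the +5 vanishes below ~15-16 digits
```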
Sorry, I forgot to clarify that after the first example, 'number' is only set to a 22 digit number.
This all makes a lot more sense now. Thanks for clearing this up!
But it's still only 15½ after you do arithmetic on it.