Right, how dare he consider the possibility that you might have known something he didn't, instead of just assuming you had no idea what you were doing and correcting you!
No, just wondering. I have arguments with colleagues all the time about the number/numeral distinction in the context of conversion to and from binary, but here you're pretty much working entirely in the domain of numbers.
In that case, 0.1 would be a better choice because it doesn't bring up the issue of floating point roundoff. But what if the input is something.9? Wouldn't you then get integer something+1?
I will demonstrate with integers, but it's the same logic with decimals...
If you pick a random number between 1 and 3, you can get 1, 2, or 3, right?
If you pick a random number between (1+1) and (3+1), you can get 2, 3, or 4. Then you subtract 1 from the result, and you get 1, 2, or 3.
If you add .1 instead of 1, the original random block will return a decimal: if you pick a random number between (1+0.1) and (3+0.1), you can get anything from 1.1 to 3.1. Then you subtract 0.1 from the result, and you get a decimal number between 1 and 3.
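Here's a minimal sketch of that trick in Python, assuming a Scratch-like pick-random block that returns an integer when both bounds are integers and a real number otherwise (the function names are mine, not from the thread):

```python
import random

def pick_random_block(a, b):
    # Stand-in for a Scratch-like "pick random" block: two integer
    # bounds give an integer result; otherwise the result is a real.
    if float(a).is_integer() and float(b).is_integer():
        return random.randint(int(a), int(b))
    return random.uniform(a, b)

def pick_random_real(n1, n2, delta=0.1):
    # The offset trick: shift both bounds by delta so the block sees
    # non-integer bounds (and so returns a real), then shift back.
    return pick_random_block(n1 + delta, n2 + delta) - delta
```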
If you're programming a computerized casino, 1 error in 26M is too many. (And if you're programming the missiles carrying the nuclear bombs, it's WAY too many!)
Are we talking CS math or real-life human math? Because in CS math, no, that's a floating-point number (between 0 and 1). In real-life human math, 1.0000(...) is an integer, because 1.0000(...) = 1.000 = 1.00 = 1.0 = 1, and 1 is an integer.
I think we are talking at cross purposes. The error I had in mind isn't calling an integer a decimal; it's that your algorithm will (I think) give the wrong answer if the delta (0.1 or 0.001 or whatever) plus the decimal fraction part of the candidate random number (as in 27.9 or 3.999) adds up exactly to 1. The integer part of the sum will then be one more than it should be.
Touché!
I think I understand: if I add/subtract 0.1, the decimal part of n1 AND n2 shouldn't be .9, because if they are, I end up calling the pick-random function with two integers, and it returns an integer instead of a real.
For example, I can't call my function with 1.9 and 2.9 (when I add/subtract .1): it always returns 1.9 or 2.9 instead of a real. See the check below.
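A quick check of that failure case, reusing the hypothetical pick_random_real sketch from above:

```python
# With n1 = 1.9, n2 = 2.9, delta = 0.1 the shifted bounds are 2.0 and
# 3.0: both integers, so the block returns 2 or 3, and after the
# subtraction the only possible outputs are 1.9 and 2.9.
results = {round(pick_random_real(1.9, 2.9, delta=0.1), 1) for _ in range(1000)}
print(results)  # {1.9, 2.9}: never a real strictly in between
```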
The old function:
I modified the function slightly:
(this is why I don't choose .1 to add/subtract:)
(I don't have this problem with .5!)
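For reference, here's a sketch of what the .5 variant could look like, reusing pick_random_block from the earlier sketch (my own rendering, not the actual blocks from the post):

```python
def pick_random_real_v2(n1, n2, delta=0.5):
    # Same trick with a 0.5 offset: bounds ending in .9 are safe now,
    # since e.g. 2.9 + 0.5 = 3.4 is not an integer. (By the thread's
    # earlier reasoning, bounds ending in .5 would still collide.)
    return pick_random_block(n1 + delta, n2 + delta) - delta

print(pick_random_real_v2(1.9, 2.9))  # a real number between 1.9 and 2.9
```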