# Block workarounds (later became :lambda: calculus)

So's he! But he has a special interest in functional programming. He likes to program in Haskell.

Yeah, I couldn't figure out what it was trying to do, with so many inputs! Were you trying to do this:

yep but the if then else has to be replaced w/ the ?: block

YAY!

OK fine so why did you find it hard? Do you not really trust recursive functions?

No but I got an error:

so I made it w/ a range

OHHHH! I get it.

The problem is in your ?: function. It breaks everything that depends on it; your subtraction didn't work for me, even though its code is correct. I blush that I didn't actually try it before. :~(

You correctly declare the second and third inputs to be of type Any (Unevaluated) because otherwise a recursion will just keep going when it hits its base case. But, having done that, in order to report the value of the chosen branch, you have to CALL whichever branch you choose:

It's that outer (pale color) CALL that you forgot. Do that and everything works.

PS In a true functional language, all evaluation of inputs would be delayed until needed with no extra effort.
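The delayed-evaluation idea can be sketched in Python, with zero-argument lambdas playing the role of the Unevaluated inputs (the names `if_else` and the thunks are mine, not Snap! blocks):

```python
def if_else(condition, then_thunk, else_thunk):
    # Both branches arrive unevaluated, as zero-argument functions
    # ("thunks"); only the chosen branch is actually called.
    if condition:
        return then_thunk()
    else:
        return else_thunk()

def factorial(n):
    # Without the thunks, the recursive branch would be evaluated
    # even when n == 0, and the recursion would never bottom out.
    return if_else(n == 0,
                   lambda: 1,
                   lambda: n * factorial(n - 1))

print(factorial(5))   # 120
```

Calling the chosen thunk at the end is exactly the outer CALL described above.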

So, I have completed the divmod function!!! (with a correct ?: block)
(@theaspiringhacker bh is calling you here)

Hey, great work. If you want, you can invent positive and negative integers, then rationals. And then you could try to do reals, but that may require more math than you know yet.

Why not use decimal instead of unary?
Lists of numbers!

Lists of digits, you mean? Sure, you could do that, but the arithmetic operators would be way more complicated, with carrying to the next digit place and all that. Place value in decimal is designed to optimize human beings counting on their fingers, not to optimize anything relevant to computers. (There's an argument that memory use is optimized by base 3, which sounds weird but it's the closest integer to e, and it's base e that's theoretically best in the particular way this argument uses.) Church numerals are optimized for proving theorems about computability. Binary, as used in computers, is optimized for being implemented using transistors (or, before that, vacuum tubes), which are bistable in their input-output behavior. (There are two flat regions, one around zero and one around whatever the maximum output the particular transistor is designed for, which in computers typically means five volts. A flat region means that the output doesn't change when the input changes a little bit.)

yep

why?

Yes, as they are the `repeat () times` builtin

So maybe we should continue that.
After all, JavaScript Numbers are C doubles: just 64 bits of digits, exponent, and sign flag.
Computers can handle them built in, so we can handle them too.
No worries, I will make a converter
(really just multiplying them by 2^64 and converting them to Church numerals…
well, that's impractical, bc lambda calculus functions all work at O(n) speed and building a big Church numeral will take for-ever)

They are built in to the IEEE-something standard
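The standard is IEEE 754: a 64-bit double is one sign bit, 11 exponent bits, and 52 fraction bits. A quick sketch that pulls those fields apart (`double_fields` is my name for it):

```python
import struct

def double_fields(x):
    # Reinterpret the 64 bits of an IEEE 754 double as an integer,
    # then slice out the sign, exponent, and fraction fields.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

# 1.5 is 1.1 in binary: biased exponent 1023, top fraction bit set.
print(double_fields(1.5))   # (0, 1023, 2251799813685248)
```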

Do you mean fractions?

    def gcd(a, b):
        # Euclid's algorithm
        while a % b != 0:
            a, b = b, a % b
        return b

    class Fraction:
        def __init__(self, num, den):
            self.num = num
            self.den = den
            if self.den < 0:  # normalize so the denominator is positive
                self.num = -num
                self.den = -den
            self.val = self.num / self.den

        def __str__(self):
            return str(self.num) + '/' + str(self.den)

        def __int__(self):
            return int(self.val)

        def __chr__(self):
            return chr(int(self.val))

        def __add__(self, sf):
            tp1 = self.num * sf.den + sf.num * self.den
            tp2 = self.den * sf.den
            return Fraction(tp1 // gcd(tp1, tp2), tp2 // gcd(tp1, tp2))

        def __sub__(self, sf):
            return self + Fraction(sf.num * -1, sf.den)

        def __mul__(self, sf):
            return Fraction(sf.num * self.num, sf.den * self.den)

        def __truediv__(self, sf):
            return Fraction(sf.den * self.num, sf.num * self.den)

        def __eq__(self, sf):
            return self.num * sf.den == sf.num * self.den
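(For what it's worth, Python's standard library already ships this class as `fractions.Fraction`, which also reduces automatically:)

```python
from fractions import Fraction

a = Fraction(1, 2)
b = Fraction(1, 3)
print(a + b)            # 5/6
print(a * b)            # 1/6
print(Fraction(2, 4))   # 1/2, reduced automatically
```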


while we have classes too:

Streams
I will try to port the streams library here

Base 3:

Okay, so, let's say you want to put your house number (from your address) on the door of the house. So you go to the hardware store and buy digits with adhesive on one side and sparkly-reflective stuff on the other side, to attach to the door.

Now, let's say you'd like to be able to represent any number up to ten thousand (or any other arbitrary limit) without another trip to the hardware store. How many digits do you have to buy? Answer: If using base 10, you need four of each of the ten possible digits, roughly, total of 40. (That's not the exact answer because we don't put leading zeros in house numbers.) If using base 2, lg(10000)≈13.28, so you need 14 of each of the two possible digits, total of 28. What about base 3? Log₃(10000)≈8.38, so you need nine each of three digits, total of 27. Just slightly better than base 2.

Have you learned differential calculus? You can write a formula for the number of digits you have to buy as a function of the base, and then you can differentiate the formula and set that to zero to find the base with the minimum possible cost, and it turns out to be e. Since e is between 2 and 3, but closer to 3, that's the minimum practical (integer) base.
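That cost function can be sketched in Python (a rough model of the sticker-buying argument; `digit_cost` is my name, and I take 9999 as the largest house number so base 10 comes out to four places):

```python
def digits_needed(base, limit):
    # How many digit places it takes to write every number up to `limit`.
    places = 0
    while limit:
        places += 1
        limit //= base
    return places

def digit_cost(base, limit=9999):
    # One full set of `base` digit stickers per digit place.
    return base * digits_needed(base, limit)

for b in range(2, 11):
    print(b, digit_cost(b))
# Base 3 wins with 27 stickers; differentiating the continuous version
# b * ln(N) / ln(b) and setting it to zero gives ln(b) = 1, i.e. b = e.
```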

Continue what? Using binary? Of course, when it comes to building actual computer hardware, there are many reasons for binary besides transistors, e.g., arithmetic function circuits are built out of Boolean function circuits, and true/false is a binary set.

But in lambda calculus, remember, we're optimizing for proving theorems. And if you want to prove theorems about rational numbers, what you want is a pair of integers, each of which is represented as a Church numeral.
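A sketch of that representation, with Python lambdas standing in for λ-terms (`to_int` is just a viewer for testing, not part of the calculus):

```python
# Church numerals: n means "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)   # viewer only

# Church pairs: a pair is a function awaiting a selector.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
first = lambda p: p(lambda a: lambda b: a)
second = lambda p: p(lambda a: lambda b: b)

three = succ(succ(succ(zero)))
two = succ(succ(zero))
three_halves = pair(three)(two)   # the rational 3/2 as a pair of numerals

print(to_int(first(three_halves)), to_int(second(three_halves)))   # 3 2
```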

Real numbers are a lot trickier than I think you realize. Streams, yes, but streams of what? Digits, left to right? If so, the promise in your stream has to know how to compute the next digit. That's a little tricky. No, a lot tricky, except in easy cases such as square roots of integers.

But the fundamental problem is much deeper than the tactical difficulty. Even if your computer has an infinite amount of memory, so you can compute any positive integer as a Church numeral, it still can't compute any real number, not even close, because the real numbers are non-denumerable. See this thread for that discussion. If you think it through, you'll see that all of the numbers representable in any fixed-width format such as IEEE floating point are rational; you can't even represent a single irrational number in floating point.
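A concrete demo of that last point: ask Python for the exact value of the double closest to 0.1, and you get a rational with a power-of-two denominator.

```python
from fractions import Fraction

# Every IEEE double is exactly some rational number; "0.1" is really:
print(Fraction(0.1))   # 3602879701896397/36028797018963968  (denominator 2**55)
```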

But if you do want to represent a real number as a stream, you'll be better off making a stream of approximations rather than a stream of digits. Two streams, actually, one approaching the value from below and one approaching it from above. That set of two streams is called a Dedekind cut and it's one of the standard ways to define a real number.
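The two approximation streams can be sketched as a Python generator, using bisection to bracket √2 from below and above (`sqrt_bounds` is my name; a finite slice of the two Dedekind-cut streams):

```python
def sqrt_bounds(n, steps=40):
    # Yield (lower, upper) pairs that bracket sqrt(n) ever more tightly.
    # The invariant: lo*lo <= n < hi*hi at every step.
    lo, hi = 0.0, max(float(n), 1.0)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
        yield lo, hi

for lo, hi in list(sqrt_bounds(2))[-3:]:
    print(lo, hi)   # successive brackets closing in on 1.41421356...
```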

One reason that approximate values are better than digits is that digits are a property of the base as well as of the number, whereas the approximate values are numbers, not tied to a particular representation. So maybe two consecutive approximations differ only in the fifth digit down from where the previous two differed, but you don't know that until you compute one or two moves ahead of the digit you're going to emit next. Or another pair of consecutive approximations differ in the same digit as the previous pair. (For example, your sequence of approximations goes ... 3.10, 3.1275, 3.1348, 3.14000, etc.)

But, again, the main reason is that each term in a sequence of approximations has a clear relationship to the desired value (namely, being near it), whereas it's hard to come up with an explanatory theory about why the 200th digit of pi should be one thing rather than another.

What's that?
According to the following things

It looks like some max-min problem (an extremum problem)

So I said that

Ok

Then I may do:
π = 3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
well, you can round it to any number of digits

It's a way to study the instantaneous rate of change of some function. And yes, one use of it is that the rate of change will be zero at a maximum or minimum, supposing the function is continuous and differentiable. (The rate of change is positive on one side of the min or max, and negative on the other side. So if the rate of change is continuous, there has to be a point at which the rate is zero, and that's the min or max.)

No, you can't round it to the 1000th digit.

OK, so now I know it, and I find that I use it a lot; I just didn't know that's what it's called.

That's right, but you won't want to do that either.
3.141592653589793238462643383279502884197169399375105820974944592307
is enough for any complex use, like calculating how to make spaceships dock without colliding.

You're still mixing two questions: (1) how to handle the practical needs of practical computation, and (2) how to think about what's theoretically possible and how to handle the reasoning needs of mathematical proof. Yes, totally, IEEE floating point (invented by a Berkeley professor, by the way) is the right way to do practical computation. But it's not the right way to reason about what can and can't be represented. We know that your approximation to pi isn't pi because it has a finite number of (nonzero) digits, and so it's rational.

Oh.
BTW, my list blocks don't work.
oh, they work now

hello?

There didn't seem to be anything for me to answer in your previous message. ¯\_(ツ)_/¯

ok

When I want to calculate how many atoms are in a gram of graphite...