The type-theoretic view on subtyping of the different sets of numbers?

@bharvey wrote,

The acid test is whether your language considers 3.0 to be an integer. The only right answer to that question is "yes."

He was criticizing type systems where 3.0 : float but not 3.0 : int. That got me thinking: when we learn math, we think of the real numbers as a superset of the rationals, which are a superset of the integers, which are a superset of the naturals, and so on. However, in mathematically oriented programming languages, the sets of numbers have no subtyping relationship, because from a formal point of view they are different constructions. One defines each of them separately, like the naturals here:

data Nat : Type where
    Z : Nat          -- zero
    S : Nat -> Nat   -- successor

Then, one defines integers, rationals, and so on.

According to Integer - Wikipedia, one can define the integers as pairs of natural numbers together with an equivalence relation: (a, b) ~ (c, d) holds when a + d = b + c. The article adds that automated theorem provers might use other definitions, and agda-stdlib/src/Data/Integer/Base.agda at f2744a829ea7dbef68aa15e7c549485c112ddfb7 · agda/agda-stdlib · GitHub shows that Agda defines an integer as either a "positive" Nat or the "negative" of one more than a Nat.
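
To make that concrete, here's a sketch of the same idea in Haskell rather than Agda (Int' and NegSuc are my names for illustration, not the library's):

data Nat = Z | S Nat

-- agda-stdlib-style integers: a nonnegative Nat, or the negation of a
-- successor; NegSuc n denotes -(n + 1), so zero has a unique representation
data Int' = Pos Nat
          | NegSuc Nat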

According to https://en.wikipedia.org/wiki/Rational_number#Formal_construction, one can define the rationals as pairs of integers together with an equivalence relation: (a, b) ~ (c, d) holds when a * d = b * c.
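
As a quick sketch in Haskell (my own, assuming the pair construction above), with the equivalence relation serving as equality:

-- rationals as pairs of integers; the equivalence relation is the Eq instance
newtype Rat = Rat (Integer, Integer)   -- (numerator, denominator), denominator nonzero

instance Eq Rat where
    Rat (a, b) == Rat (c, d) = a * d == c * b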

We like to think that 3 as an integer and 3 as a rational are the same 3, but with these formal constructions, (S (S (S Z)), Z) : Int is different from ((S (S (S Z)), Z), (S Z, Z)) : Rat. It wouldn't make sense to write ((S (S (S Z)), Z), (S Z, Z)) : Int.

We give a semantic meaning to the rationals that leads us to consider them a superset of the integers. Indeed, there exists a function from Int to Rat that realizes this semantic meaning. However, there also exists an injective function from Bool to Char, yet we don't consider Bool to be a subset of Char. Bool and Char are different, and likewise, Int and Rat are different.
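
Concretely, that function is just the injection sending n to n/1; a minimal sketch, with Rat as the pair representation from above:

newtype Rat = Rat (Integer, Integer)

-- the semantic injection: the integer n is read as the rational n/1
toRat :: Integer -> Rat
toRat n = Rat (n, 1)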

With upcasting, maybe one can have an implicit "conversion" function. For downcasting, however, such a function would be partial (a no-no for good programming style as well as for proof soundness). My main point is that the belief that the rationals are a superset of the integers, such that some rational numbers should have the type "integer", isn't logical. Whether we are dealing with machine arithmetic or formal mathematical definitions, the different sets of numbers have different representations, and those should correspond to different types.

In set theory or type theory, how often does one make the distinction between 3 the integer and 3 the rational, and when would one consider the rationals to be a superset of the integers?

Oh, pooh. You can just formally treat everything as a Conway number (or a pair of them, if you want Complex numbers). (I love Conway numbers. What a tour de force! All the reals, all the infinitesimals, all the transfinites (cardinal and ordinal), and then some! All just from sets.)

If someone doing formal language theory uses a formalization that makes 3.0 a different animal from 3, it's not because the mathematics requires that. It's because the someone is exposing machine hardware to the user. If a language includes exact rationals (as it should), and a different language doesn't, then yes, you have to use a different type system to describe the two, but you mustn't pretend that there's anything mathematical about the difference. The type system is merely documenting a misfeature of the second language.

The rationals just are a superset of the integers, however defined and however represented in a computer.

Mind you, the case is mathematically different for the reals. Strictly speaking there are no irrational numbers inside a computer, because it has only a finite set of bits. By being clever about the representation, I'm sure you could get the algebraic numbers to be representable, e.g., by representing a number as the coefficients of a polynomial of which it is a root, plus an index to distinguish multiple roots (e.g., counterclockwise from the positive real axis). But I can see how you might want a theoretical handle on the limitations of possible values, the way they wanted a theory of the straightedge-and-compass constructible numbers.
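
Something like this, as a hypothetical sketch (in Haskell, names mine):

-- an algebraic number as the polynomial it is a root of, plus an index
-- distinguishing multiple roots (counterclockwise from the positive real axis)
data Algebraic = Algebraic
    { coefficients :: [Rational]   -- polynomial coefficients, constant term first
    , rootIndex    :: Int          -- which of the polynomial's roots is meant
    }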

P.S. Or, conversely, you could include countably many irrationals in your representation using extension fields, e.g., Q(π).

What if you display fractions as 1.5(3) [meaning 1.5333333333333...] for 46/30?

Sure, I guess that would be fine (although this is the 21st century, and computer displays can show overbars on the repeated digits, which is the standard notation). But I suspect that it's easier to do arithmetic on a pair of integers than on a repeating (or even finite) decimal.

I said display because repeating decimals are even harder than normal numbers to switch between bases.
It looks like this then:
1.53
1.5<span style="text-decoration: overline">3</span>

Hmm I guess it's not the 21st Century in the forum software. :slight_smile:

(Well, for formal language theory, the difference is usually that you want to be able to do induction on your nats using whatever structural-induction primitives your theory is built on; the same inductive proofs shouldn't work for rationals or reals.)
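
A sketch of what that looks like in Haskell: definitions (and proofs) by structural recursion follow Nat's constructors, and there is no analogous "next rational" constructor to recurse on.

data Nat = Z | S Nat

plus :: Nat -> Nat -> Nat
plus Z     m = m              -- base case
plus (S n) m = S (plus n m)   -- inductive step on a structurally smaller value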

Why do language theorists have to prove theorems about integers? Haven't the number theorists taken care of all that?

I don't know about number theory, but programming languages offer an exact way of writing proofs. In math, your proofs can be handwavy, but in programming, your programs must be precise so that a computer can execute them. When using proof assistants, your proofs are your programs.

As the programming language is based on a lambda calculus, it has no primitive notion of number. That's good; the core calculus should be minimal. The programmer defines the various numbers as data types so that others can use them. It's no different from, say, SICP's chapter covering the rational and complex number libraries, except that the programmer must also define the naturals, integers, and reals. (Or they might be compiler primitives for efficiency, but exposed to the programmer as mathematical definitions.) Of course, the programming language can manipulate proofs, so naturally, the library will include proofs of various properties of the numbers and their operations.

The benefit of writing proofs in programming languages is that the typechecker verifies them.

It's not as much proving theorems about integers as it is proving theorems with integers.

Lemma: Given an integer n, Selection Sort correctly sorts lists of length n.
Proof: By induction on n.
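
For instance, a sketch in Haskell (illustrative, not from any particular proof development): the recursion decreases the length by one on each call, which is exactly the induction on n that the proof performs.

import Data.List (delete)

-- each recursive call sorts a list one element shorter, mirroring the
-- proof's induction on the length n
selectionSort :: Ord a => [a] -> [a]
selectionSort [] = []                                -- n = 0
selectionSort xs = m : selectionSort (delete m xs)   -- n = k + 1
    where m = minimum xs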

But the whole point of that SICP section is that at the end of the day, the user just thinks in terms of numbers, and it has to just work. And in Scheme (integer? 3.0) is true.

Umm. I don't think you should be so quick to dismiss the 3000 years of mathematicians developing rigorous proof, without which there wouldn't be any computers.

Then, IMO, it would be better to interpret integer? as a predicate that checks a dynamic property, not as something that checks a type.

I could define some kind of Num interface or typeclass that includes an integer? operation, and different number types could overload the function.

  • For a natural number, integer? would always return true.
  • For an integer, integer? would always return true.
  • For a rational number, integer? would return true when the denominator is 1 after reducing to simplest form.

However, an inhabitant of the rational-number type wouldn't inhabit the integer type even if integer? returns true when applied to it.

I didn't mean to insult number theorists. What I meant was that there's value in formally specifying your proofs in a manner that is checkable by machine.

Sure. Although of course there have to be true statements in the realm of integers that your proof checker can't prove, and we can never be sure that your proof system is even consistent. :slight_smile:

Yes, exactly! And it has to work for 3.0, too.

I think that I probably misunderstood what you meant when you made that statement about how 3.0 should be an integer under a good type system. You seem to have an integer? predicate in mind. I have no objections to such a predicate. However, to me, that isn't what types are. Types are a static construct, not something that you check at runtime.

If a rational number is a pair of integers, then it doesn't make sense for rational numbers to be a supertype of integers. It wouldn't make sense to be able to use an integer everywhere that a rational number (AKA a pair of integers) is used. It also wouldn't make sense to be able to use rational numbers that represent integral quantities where integers are expected. That is so even in languages without a type checker; doing (+ (cons 1 1) (cons 1 1)) doesn't make sense and results in a type error. (cons 1 1), even though we may semantically interpret it as the rational number 1/1, is not computationally interchangeable with the integer 1.

So, 3.0 (or 3/1, or 6/2) should not have the type of integer.

In a statically typed language, we can express all this as

type rat = {
    num : int;    (* numerator *)
    denom : int;  (* denominator; must be nonzero *)
}

(In OCaml syntax)

However, nothing stops one from making the different number types implement a numeric interface or typeclass, allowing for function overloading.

class MaybeIntegral a where
    isInt :: a -> Bool      -- does this value denote an integer?
    toInt :: a -> Maybe Int -- the downcast, made total with Maybe

instance MaybeIntegral Int where
    isInt = const True      -- every Int is integral
    toInt = Just

(in Haskell syntax)
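
And, to cover the rational case from my list above, a sketch of the corresponding instance (using Data.Ratio, which keeps rationals in lowest terms, so comparing the denominator with 1 suffices):

import Data.Ratio (denominator, numerator)

instance MaybeIntegral Rational where
    isInt q = denominator q == 1
    toInt q | denominator q == 1 = Just (fromInteger (numerator q))
            | otherwise          = Nothing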

So, there is nothing preventing you from having integer? in a statically typed language, even though rational numbers and integers are different types. integer? is orthogonal to types.

I guess what I'm saying is that a type system in which 3.0 isn't an integer is a wrong, horrible type system. If for some reason type systems have to make this horrible mistake, then type systems are worse than useless, because they only encourage people to think that 3.0 isn't an integer -- that the stupid weaknesses of computers have to be presented to the user as representing some kind of truth. The stupid weaknesses of computers should be hidden behind a rock-solid abstraction barrier.

And yes, I know that Snap! is halfhearted about this. At least (3.0 = 3) returns True, but there are contexts that won't accept 3.0 here and there. I want us to present the full Scheme numeric tower by default, but Jens is worried about slowing things down.

As for exact rationals, of course I wouldn't say (cons 1 1) to make one; I'd say (make-rational 1 1), which would probably be implemented as

(cons rational-tag (cons numer denom))

where

(define rational-tag (cons 'rational '()))

(so that another (cons 'rational '()) won't be EQ? to it). And yes, I expect + to accept exact rationals!

I know, I'm attacking the whole guiding spirit of your life. I'm sorry. If you want to play with type systems just as formal systems, like, I don't know, Hilbert spaces or topologies, that's fine. Just don't teach programmers that there's something called an integer and 3.0 isn't one, because that's dead wrong.

I couldn't disagree more. Computers should act like computers, be expected to act like computers, and do computer things. If we have one definition of how things work for high-level programming, and another for low-level programming, that further alienates high-level programmers from low-level stuff and how computers work, and that's never a good thing. Software is already getting too heavy as it is.

Of course some abstractions should be used to improve readability and intuitiveness, but they should be evaluated on whether they're actually useful. A "rock-solid abstraction barrier" with the sole purpose of making computer math more like "regular" math without any benefit to real-world usage is neither necessary nor helpful.


Indeed this is a very fundamental disagreement between us. Back before there was computer science, computers not only acted like computers -- they're going to do that regardless of what we think -- but did so in public! This is like smoking in public: not quite as disgusting, but disgusting in the same manner. That's how it was when I learned to program. Every day there was some new obviously false thing we were supposed to learn; even before "3.0 isn't an integer" there was the immortal X=X+1. That's the sort of thing that chases away the non-white-or-Asian-males from CS.

But now there's computer science, the entire purpose of which is to force computers to behave in a civilized way in public, just like smokers (here in Berkeley anyway). Computer science taught us that in real life, types aren't disjoint, and are multidimensional. So, 3.0 is an Integer, and a Rational, and a Real, and a Complex, and perhaps also an Exact, depending on the computation that led to it. Oh, and it's a Word, too, just a special word whose "letters" are digits.

So, too, today the computer screen (or the phone screen) displays a bunch of widgets, and the user doesn't have to think about pixels and layers and anti-aliasing and all that. No, each widget is a thing, and you can move it around on your desk the same as you can move things on your real desk. And if you move it over a wastebasket, it's not just the widget that disappears; the real (or more real anyway) thing represented by that widget disappears too.

Well, the proof checker isn't proving anything; it merely checks the proof that the user writes. The mathematician is still the one writing the proof. And, proof assistant or no proof assistant, there will always be true statements that the mathematician cannot prove (unless the logical system isn't sophisticated enough to encode the natural numbers).

Similarly, the fact that one can't prove the proof system's own consistency holds in any logic with natural numbers, regardless of whether the mathematician uses a proof assistant to check the proof.

So, I don't think that this argument makes sense as a reason why proof checkers are more fallible than mathematicians who have worked without them; rather, it describes the incompleteness of consistent formal logics in general. Unless I misunderstood what you were trying to say?

(I assume that you are referencing Gödel's Incompleteness Theorems.)

I agree that 3.0 is not an integer, but my argument is more general than the one about machine representation. My point is that the representations of integers versus rationals or floats, whether in hardware or not, are going to be different, because one must do different things to compute with them. The algorithm for adding two rational numbers is different from the algorithm for adding two natural numbers. And, in the constructivist view, proofs are all about telling other mathematicians how to compute something.
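
A sketch in Haskell of what I mean: addition on the two representations is computed by genuinely different algorithms.

data Nat = Z | S Nat

-- natural addition: peel successors off the first argument
addNat :: Nat -> Nat -> Nat
addNat Z     m = m
addNat (S n) m = S (addNat n m)

-- rational addition on (numerator, denominator) pairs:
-- cross-multiply, add, then multiply the denominators
addRat :: (Integer, Integer) -> (Integer, Integer) -> (Integer, Integer)
addRat (a, b) (c, d) = (a * d + c * b, b * d)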

However, I think that the goal of making programming languages more like math is admirable. I feel that what you're advocating for is intentional "leaky abstraction."