The type-theoretic view on subtyping of the different sets of numbers?

Yes, of course. All I meant is that the proof checker doesn't magically undo the limitations of proof.

Look, we have different endpoints in our thinking because we are starting in different places, and that's fine. For proof purposes, mathematicians invent all sorts of formalisms that you wouldn't use for real work, e.g., Turing machines.

But my question is, what should we present to users (a category in which I'm including application programmers and Snap! users). And that's the context in which I insist that 3.0 is an integer and is equal to 3.

There was a time, back in the age of huge computers, when computer manufacturers made two kinds of computer, "scientific" and "business." The former did computations in binary, while the latter used (binary coded) decimal arithmetic. Why on earth did they think that made sense? Because they thought that what was presented to the user had to be the same as what was happening in the computer hardware!

Today we understand that that's silly -- that we can present numbers to users in a notation different from that used by the hardware. (The actual concern was that dollars and cents involve a decimal point, but we don't want to divide something by three and have the user see $1.3333333333. They didn't get that you can have the computer do exact integer arithmetic on cents, not dollars, and do the conversion to dollars for printing at the last moment.) My argument is that telling a user that 3.0 is different from 3 is just as silly. Making users declare variables to be int or float is especially silly.
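
To make the cents idea concrete, here's a rough sketch in Java (the class and method names are just for illustration):

```java
// Keep money as an exact integer number of cents; convert to a
// dollars-and-cents string only at the last moment, for printing.
public class Money {
    static String asDollars(long cents) {
        return String.format("$%d.%02d", cents / 100, Math.abs(cents % 100));
    }

    public static void main(String[] args) {
        long price = 400;          // $4.00, stored exactly as 400 cents
        long share = price / 3;    // exact integer division: 133 cents
        System.out.println(asDollars(share));   // prints $1.33
        System.out.println(asDollars(price));   // prints $4.00
    }
}
```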

These are both great examples of things that people seem to complain about but don't really matter in the long run. Sure, someone who sees a traditional equality sign used as an assignment statement (or expression) will be confused... for the first two minutes. It turns out humans are intelligent creatures who can distinguish between the mathematical and the computer world. Some languages have "fixed" this issue by using a different symbol to mean assignment, but in the long run nobody really cares, because we're smart enough to handle = meaning two different things in two different contexts.
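
For the record, here are the two readings side by side in Java (just an illustration):

```java
int x = 3;                  // `=` is assignment: x now holds 3
x = x + 1;                  // fine as code, nonsense as a math equation
boolean isFour = (x == 4);  // `==` asks the mathematical question; true here
```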

On the other hand, something like garbage collection makes programming a whole lot easier for a lot of people. People want to use languages with garbage collection because they don't want to have to deal with memory management, which can overcomplicate things when you just want to get something done. However, it has resulted in many professional developers not knowing how to manage memory effectively, and in modern programs having memory issues. This isn't to say that garbage collection is bad - it's an effective way of speeding up software development, and the tradeoff of disconnecting programmers from the computer is worth it in this case. However, the "is it worth it" question should be asked for each possibly meaningless abstraction added to make programming more like the real world. In the "3.0 is not an integer" case, in my opinion, it isn't worth it. Making numeric types represent mathematical classes of numbers, instead of the actual kinds of numbers stored in memory, doesn't add much and might even be confusing with some type systems (adding two floating-points and not knowing whether you'll get an integer or a floating-point might not be fun).


I do too, but in professional applications most programmers don't need their programming languages to be more like math at the expense of disconnection from the computer. Not having programming languages perfectly reflect mathematical rules is not always (or even usually) "wrong" and "horrible".

What can I say, except that I don't think this accurately reflects the history, which is that (1) people make = vs == mistakes all the time, even people who theoretically know the difference cold; (2) there was that idiot who made a one-question test, built around the asymmetry of = as assignment, for predicting who would survive a computer science class; (3) our field has traditionally been restricted to white and Asian males in part because programming seems to require believing things that don't make sense.

I think you are thinking of the performance of people who have survived learning to program and therefore have jobs as professional programmers. Those are a small percentage of the population and an even smaller percentage of our target audience; we are in the business of supporting nontraditional CS students' learning.

Garbage collection is indeed great, but not really relevant to this discussion; neither GC nor non-GC systems require you to believe things that aren't true. (Well, unless you count "human beings can be good at memory management.")

Yes, wrong and horrible. If those bad languages called their two number types "fixed point" and "floating point," then they wouldn't be asking people to believe falsehoods, and we could then discuss whether it's helpful or harmful to require programmers to know about internal number representations. But to call a type "integer" and then say that 3.0 doesn't belong to it is wrong and horrible. In the argument between Alice and Humpty Dumpty, we're supposed to be on Alice's side.

I remember reading an article that pointed out the different kinds of equality in mathematics. I can't find it, but I did find this: equality in nLab. Even in math, we use the equals sign for different purposes, even if we don't realize it. For example, there's definitional equality and propositional equality. The == from C-style languages is really a predicate.
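
To make those distinctions concrete, here's a minimal sketch in Lean 4 (my own toy examples, not from the nLab article):

```lean
-- Definitional equality: `n + 0` computes to `n` by the definition of
-- addition, so the proof is just `rfl` (reflexivity).
example (n : Nat) : n + 0 = n := rfl

-- Propositional equality: `0 + n = n` does not compute; it has to be
-- proved (by induction, packaged here as a library lemma).
example (n : Nat) : 0 + n = n := Nat.zero_add n

-- And a Boolean equality test, like C's ==, is just a predicate that
-- returns a value rather than asserting a proposition.
#eval (3 == 3)   -- true
#eval (3 == 4)   -- false
```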

Why do you think that one's (in)ability to understand the meaning of the equals sign in programming is related to race and a cause of racial inequality (pun not intended)? :confused:

But they are all equivalence relations (i.e., reflexive, symmetric, and transitive).
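
In symbols, again as a small Lean 4 sketch using the built-in Eq:

```lean
variable {α : Type} {a b c : α}

example : a = a := Eq.refl a                                  -- reflexive
example (h : a = b) : b = a := Eq.symm h                      -- symmetric
example (h₁ : a = b) (h₂ : b = c) : a = c := Eq.trans h₁ h₂   -- transitive
```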

Oy, this is a long conversation. The most important part: I do not think that there is any inherent/genetic/insuperable reason for any race to have trouble programming. The reason this question comes up at all is the very strong empirical fact that in the US, at least, computer programmers are virtually all white and Asian males. Nobody likes this! Which makes it all the harder to explain. Also, although computer science is a particularly extreme case, white and Asian males dominate many fields. Professors are largely white and Asian males, although there has been some progress on this in recent years. Even in some non-technical fields there are racial and sexual disparities. (But it should be noted that women have been outperforming men in school for a while, other than in CS.)

It's also clear that in some ways these problems are the result of social conditions. Women and minorities are paid less for the same work. Minorities are more likely to be sent to prison for the same offense. (To save typing, hereafter by "minority" I mean non-Asian. And yes, I recognize that "Asian" is an overbroad category, and the US experience of Koreans and Vietnamese are quite different.) And starting around age 10, girls are taught they're destined for non-technical fields.

The question for our group (BJC and Snap!) is how we can improve the situation in computer science. There has been a lot of progress in the last decade, largely due to the vision of Jan Cuny, a Program Officer at the National Science Foundation, who got the College Board to create the new AP CS Principles exam and provided funding for curriculum development and teacher preparation efforts. In schools that have adopted CS Principles, we do get close to 50% girls signing up for the class; minority enrollment isn't quite where it should be (measured against each school's minority population), but is making progress. But we are not, so far, retaining female and minority students through the CS curriculum.

Some programs have been trying to use computer applications that appeal directly to underrepresented groups. One well-known example uses recursive fractals to represent cornrows and related hair-weaving techniques. But there's a limit to how far that approach can be taken, and in any case, being as I am a white male with a preference for mathematical abstraction over real stuff, I don't have much to contribute in that direction.

But we can also try to eliminate things that pose unnecessary obstacles for any beginner. Especially if the beginner has felt marginalized all through school and has low self-confidence as a result, small obstacles may not feel small.

That's why a blocks language is useful in high school. For the Scratch target age range, just knowing what the words mean and how to spell them is an obstacle, and a drag-and-drop programming interface is obviously beneficial. By high school, some kids are totally ready for a text-based programming language, which includes having the skill of touch-typing as well as a good vocabulary, but others aren't. The ones who aren't have typically had poor elementary schools and parents who couldn't afford to spend a lot of time reading to them in the first few years of life. And so they are disproportionately minority.

The research on the effect of = for assignment on learning isn't open and shut, but there's some evidence that it gets in the way of learning, and there's certainly no evidence in favor of it. But it seems clear to me that the fact that it's wrong can't make it any easier to learn. As for integers and reals, as I said, if you want to say that 3.0 isn't a fixed-point number, that at least wouldn't be wrong. But it would still be an obstacle to learning. There's no reason for beginning programmers to have to worry about different flavors of number.

Tl;dr: Things that are hard for beginners are extra hard for beginners who also feel left out at school in the first place.


I like beginner languages with good abstractions that take us away from scary computer implementation details.

It's honestly painful every time to explain to beginner programmers, for example, why in Java (I can't understand why anyone would ever "teach" such a terrible, horrible, no good, very bad language at all*) they can't just use the == operator on capital-I Integer, or even why capital-I Integer needs to exist in the first place.
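
To spell out that == pain point, a small sketch (this is standard Java boxing behavior; implementations may cache more values than the minimum, hence the hedging):

```java
public class BoxedEquality {
    public static void main(String[] args) {
        Integer a = 1000, b = 1000;
        System.out.println(a == b);       // usually false: == compares the
                                          // two box objects, not the values
        System.out.println(a.equals(b));  // true: compares the numeric value

        Integer c = 100, d = 100;
        System.out.println(c == d);       // true, but only because values in
                                          // -128..127 come from a shared cache
    }
}
```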

The point being, if you're an experienced low-level systems programmer and you absolutely need your floating-point arithmetic to be fast, then go ahead and use real floating-point operations. But for everyone else, abstractions like transparent arbitrary-precision rationals serve to eliminate useless tripping points for beginning programmers so they can focus on the important things about computing instead of "why is for(double i = 0; i < 1; i += 1/n) a bad idea again?"
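
To make that loop example concrete, a quick sketch of why it bites:

```java
// Two ways the loop goes wrong:
//  1) If n is an int, 1/n is integer division; for n > 1 the step is 0
//     and the loop never terminates.
//  2) Even with 1.0/n, one third has no exact binary representation, so
//     the rounding error compounds and the loop runs an extra time.
public class LoopDemo {
    public static void main(String[] args) {
        int n = 3;
        int count = 0;
        for (double i = 0; i < 1; i += 1.0 / n) {
            count++;
        }
        System.out.println(count);  // prints 4, not the 3 you might expect
    }
}
```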

* This is a lie. It's known that people teach Java because companies need their e n t e r p r i s e - s c a l e Java software maintained (instead of, for example, it being a useful beginner language)

As for the Java Integers, they are for things like

List<Integer> data;

where you cannot do

List<int> data;

They are also useful for

Number num;

Now num can store any boxed number: Byte, Short, Integer, Long, Float, or Double.
All capital.
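
Putting that together, a minimal compilable sketch (the class name Boxes is just for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class Boxes {
    public static void main(String[] args) {
        // The element type has to be the boxed Integer (List<int> won't
        // compile); autoboxing converts back and forth with int.
        List<Integer> data = new ArrayList<>();
        data.add(3);               // int 3 is boxed to Integer.valueOf(3)
        int first = data.get(0);   // and unboxed back to a primitive

        // Number is the common supertype of all the boxed numeric types.
        Number num = Integer.valueOf(3);
        num = Double.valueOf(3.0);
        System.out.println(first + " " + num);   // prints: 3 3.0
    }
}
```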

... which is exactly what's going to help bring people into computer science, right? :stuck_out_tongue:


I can't tell why the bottom shouldn't be possible, but it isn't.

Right, and did you feel physical pain while typing out this post? Congrats, that's how I feel :stuck_out_tongue:

(It's comparable to whenever I hear students at my university complain that <scheme-based teaching language in the intro cs course> is a waste of time and they hate it)

If it makes you feel better, every year I hear from one or two former students who say "I hated 61A [SICP] at the time, but now I'm out in industry and I find I use those ideas all the time."