Some Questions About Older Computers

Hello.

I have some questions about older computers, and I know that Brian has some experience with them. (I apologize if this isn't the right place to ask, but Stack Exchange is blocked for me.)

  1. Why did older computers use 9-bit bytes?
    1. Why did they switch to 8-bit bytes?
  2. Was core memory annoying to work with, seeing as how it wipes the bit whenever you read it?
  3. What did the startup process look like?
  4. How did you connect a TTY to a computer?
  5. Was the fact that you could look at the entire history by reading the paper printed by a TTY helpful?
  1. Byte size is a choice downstream from word size. In my youth, the main scientific computers, the IBM 7090/7094 and the DEC PDP-6/PDP-10, both had 36-bit words. So one way to divide that into smaller units was four 9-bit bytes. But it was also common to see six 6-bit bytes, which is why all those ancient (or simulated-ancient, in the movies) computer printouts had only upper case letters. The IBM ones had machine instructions that knew about 6-bit bytes. The DEC ones had machine instructions that were happy with any byte size; in particular, ASCII text was represented as 7-bit bytes, five to a word, with one bit unused. (Of course programmers found ways to squeeze extra information into that bit.) There's a little sketch of that packing after these answers.

    Later machines were designed with power-of-two word sizes, and so 8-bit bytes were a natural consequence.

  2. This is a matter of abstraction. The actual cores were surrounded by electronics that regenerated the data if necessary. The programmer never saw a refresh problem. Anyway, at least cores retain their value when you turn the computer off (or the power fails). Solid state memory has to be refreshed all the time!

  3. Every computer model is different. The 7090/7094 had a load button that would read one punch card (20 36-bit words) into memory and run the program in it. Typically that loader card would continue reading punch cards with more of the loader, which would then load the OS from magnetic tape. The DEC machines had an architecture with 16 word-size registers built into the processor, but those registers were also the first 16 words of the main memory. (So, e.g., "copy this register to that register" was done by the same instruction that said "load this memory address into that register.") On the PDP-6, though, there were also 16 words of core with those addresses that were ordinarily unused, but when you pushed the load button on the console, the machine entered a mode in which those core words were used instead of the registers, and the machine started running instructions at address 0. (The first execution of an instruction at an address >15 went back to using the regular registers.) The PDP-10 didn't use the shadow registers for loading, though; I forget what it did.

  4. Look up "RS-232."

  5. Ah, you are pulling at the end of a long thread. When paper terminals (often but not always actual Teletypes™) were common, people wrote software around the affordances of paper, and then yes, you often had to physically look back to earlier in the history. When the first display terminals came out, they had to use the same software that was already written for TTYs (and so they were called "glass TTYs"), and indeed it was hard to edit on a glass TTY using a text editor written for paper. It took a long time for people to understand that you could write editing software specifically for displays!

    In 1980, when I was ordering our new PDP-11 for my high school, I was discussing how it would be configured with a group of kids. They had all had their introduction to computing on the (lesser) computer at the local junior high school (this was before the name "middle school" was invented), which had a paper-based text editor but some glass TTYs, and it was really super painful trying to edit with them, so all the kids vehemently insisted on paper terminals. But luckily I had grown up at the MIT and Stanford AI Labs, where we had display-oriented text editors, which are just way better in every way than scrolling back through rolls of paper. So on this subject I just said "wait until you see a real operating system," and of course I was right; once they found out about display text editors nobody wanted paper.
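
A rough sketch of the 7-bit packing mentioned in answer 1, in Python (which obviously didn't exist back then): five 7-bit characters per 36-bit word, left-justified, with the low-order bit left over. The exact bit layout here is an assumption for illustration, not gospel.

```python
# Sketch of packing five 7-bit ASCII characters into one 36-bit word,
# left-justified, with the low-order bit of the word left over as a
# spare flag bit (the layout details are assumed for illustration).

def pack_ascii_word(chars):
    """Pack up to five characters into a 36-bit integer."""
    assert len(chars) <= 5
    word = 0
    for c in chars:
        code = ord(c)
        assert code < 128          # 7-bit ASCII only
        word = (word << 7) | code  # shift previous characters left, append this one
    word <<= 7 * (5 - len(chars))  # left-justify if fewer than five characters
    return word << 1               # bit 35 (the low-order bit) stays spare

def unpack_ascii_word(word):
    """Recover the characters from a 36-bit word."""
    word >>= 1                     # drop the spare low-order bit
    codes = [(word >> (7 * i)) & 0x7F for i in range(4, -1, -1)]
    return ''.join(chr(c) for c in codes if c)

print(oct(pack_ascii_word("HELLO")))                  # the word, in octal
print(unpack_ascii_word(pack_ascii_word("HELLO")))    # -> HELLO
```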

Thank you for answering my questions. I'll look up "RS-232".

These are just a bunch of questions I've been thinking about.

You're welcome!

Wow, RS-232 has had a varied history!

I still have this in my wallet:

not because I ever need to remember the RS-232 pinout any more, but because the other side of the card is the ASCII code. :~)

Wow!

That makes sense. :-D

Oh, one more question.

How were punch cards read?

Do you mean, how does the physical card reader work, or what the encoding into punches is?

You'd think the card readers would be optical, shining lights through the 80x12 possible hole positions in the card and using photocells on the other side to detect the light. There were such machines, but the ones I remember used springy metal contacts on one side of the card making contact with a grounded metal plate on the other side. I say "you'd think" because this mechanical connection gave rise to jams that could damage the card reader and rip the card itself to shreds. (Immature people made cards with all 80x12 possible holes punched, which were almost guaranteed to wreck the card reader when read.)

Back in the days when pretty much all computers came from IBM, they were leased, not sold, and they came with an accessory: a human being, the "customer engineer," who was an IBM employee who spent his (sorry, this is the '50s we're talking about) time at the customer's offices. If the card reader jammed while reading your card deck, you weren't allowed to try to fix it yourself; you had to go find the CE to fix it. So this happened to me (age 14 maybe?) one day, and I dutifully found the CE, who came in, looked at my card stuck halfway into the card reader, looked at me, glared, said "If I ever catch you doing this, I'll kill you," grabbed the card and yanked it back out. I think the card itself wasn't even damaged, let alone the card reader, so they weren't all that delicate. But it was definitely possible to ruin those spring switches that way, kind of like backing your car over those evil vehicle diodes with the spring-loaded mini-spikes that would puncture your tires if you went over them in the wrong direction.

Okay, on to the encoding of information on the card. There were two encodings, for text and binary data. A card had 80 columns by 12 rows. Recall that the 7090/7094 had a 36-bit word. So for binary data, each row could fit two such words, in columns 1-36 and 37-72. IIRC all the values in 1-36 came before all the ones in 37-72. Columns 73-80 were unused in this mode, and were typically punched with a sequence number so that you could sort the deck after spilling it on the floor.
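
To make the binary layout concrete, here is one way to model it in Python. The column layout comes from the description above; the row order and the bit order within a word are guesses for illustration.

```python
# Sketch of reading one binary card: 12 rows x 80 columns of holes,
# two 36-bit words per row (columns 1-36 and 37-72), with the
# column-1-36 words collected before the column-37-72 words.
# Row ordering and bit ordering within a word are assumptions.

def words_from_binary_card(card):
    """card: 12 rows, each a list of 80 ints (1 = hole punched)."""
    assert len(card) == 12 and all(len(row) == 80 for row in card)

    def bits_to_word(bits):
        # Treat the leftmost column of the field as the high-order bit.
        word = 0
        for bit in bits:
            word = (word << 1) | bit
        return word

    left_words  = [bits_to_word(row[0:36])  for row in card]  # columns 1-36
    right_words = [bits_to_word(row[36:72]) for row in card]  # columns 37-72
    # Columns 73-80 held the sequence number; ignored here.
    return left_words + right_words   # 24 words per card

# Example: a card with only column 1 of the first row punched.
card = [[0] * 80 for _ in range(12)]
card[0][0] = 1
print(oct(words_from_binary_card(card)[0]))   # 0o400000000000: top bit set
```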

For text, each column held one character. The rows were labelled, from top to bottom: +, -, and then 0-9.

Riddle: How did they bury Thomas J. Watson? Answer: Nine edge in, face down. (This is how you describe how to orient a deck of cards when placing it in the card reader.)

So. The space character was represented by a column with no punches. A column with a single punch represented, no surprise, +, -, or a digit 0-9. Two-punch columns represented letters of the alphabet, but it couldn't be any two punches; it would be a digit 1-9 along with +, -, or 0. That's just enough to represent the 27 letters of the alphabet:

ABCDEFGHIJKLMNOPQR/STUVWXYZ

Yes, that's right, the 19th letter of the alphabet is slash, in between R and S.

So, that's it, except that you could also combine +, -, or 0 with one of the combinations 7-3, 7-4, 8-3 or 8-4 to generate twelve more punctuation characters, but I don't think anyone memorized those.

By convention, columns 73-80 were reserved for sequence numbers for text cards, too, so a line of text had 72 useful characters.

It's really scary that I can't remember what I did yesterday but I can remember the punch card codes from 65 years ago.

Weren't those called "lace cards"?

Okay. Nice and simple.

How did you have access to a computer when you were a teen? (If you don't mind me asking.)

Heh. That's about what they are.

Was Watson the CEO? President? of IBM at the time?
If so, that's a pretty good joke. :-)

So, how did combining +, -, and 0 work to select a letter?

I would have guessed its position would be after Z, but it makes sense there too.

What do they say? Repetition builds memorization?


Seriously, thanks for answering my questions. I love learning about older computing, and this thread (and your expertise) have been really helpful.

The KA10 model had hardware read-in. You first selected a boot device using seven toggle switches, then pushed the READ IN switch. The processor then sent a signal on the I/O bus to read one word from the boot device. That word was used (as a BLKI pointer) to read more data from the device, which was then executed.
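
For anyone wondering what "used as a BLKI pointer" amounts to, here is a toy Python simulation of that convention (left half = negative word count, right half = starting address minus one). It's a sketch of the idea, not an exact model of the KA10.

```python
# Toy simulation of the read-in convention: the first word from the
# boot device is a "BLKI pointer" whose left half is the negative of
# the word count and whose right half is the transfer address minus
# one.  Each transfer bumps both halves and deposits the next device
# word, until the count runs out.  (A sketch, not a spec.)

HALF = 1 << 18          # the halves of a 36-bit word are 18 bits each

def split(word):
    return word >> 18, word & (HALF - 1)        # (left half, right half)

def join(left, right):
    return ((left % HALF) << 18) | (right % HALF)

def read_in(device_words, memory):
    """device_words: an iterable of 36-bit words from the boot device."""
    dev = iter(device_words)
    count, addr = split(next(dev))              # the BLKI pointer word
    while count != 0:                           # left half counts up toward zero
        count = (count + 1) % HALF
        addr = (addr + 1) % HALF
        memory[addr] = next(dev)                # deposit the next word
    # When the count runs out, execution starts somewhere in the words
    # just loaded (the exact rule is the part this sketch glosses over).
    return addr

# Example: load 3 words starting at address 0o100.
memory = {}
read_in([join(-3, 0o100 - 1), 0o111, 0o222, 0o333], memory)
print({oct(a): oct(w) for a, w in memory.items()})
```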

Yeah.

I grew up in New York. Columbia Univ. runs a Saturday program for high schoolers called the Science Honors Program, in which I mainly took math courses (real college math courses! Lots of fun.) but also took a programming class in which we got to run Fortran programs on a 7094. It was really frustrating, though, because we'd hand in a program deck one Saturday and then a week later we'd get back an error message about a missing comma or something. But then also Henry Mullish at NYU mentored a whole bunch of us who got to program their 7094, mostly in assembly language at that point.

But the story about the jammed card happened at the Univ. of Rhode Island, where my family spent a summer. I took an intro EE course (DC circuits) and also got to hang out at their computer center programming their IBM 1410, a "business" (as opposed to "scientific") computer whose hardware did arithmetic in decimal rather than in binary.

The president of IBM then was Thomas J. Watson, Jr. The joke was about his father, the former president.

+1 was A, +2 B, ... +9 I, -1 J, ... -9 R, 01 /, 02 S, ... 09 Z.
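
That mapping is easier to see as a table. Here's the whole text code as a little Python dictionary, reconstructed from the rules above (nothing official about it):

```python
# The 7090-era text card code, reconstructed from the rules above.
# A column is a set of punched rows; rows are labelled +, -, 0, 1, ... 9.

card_code = {frozenset(): ' '}          # no punches = space

# Single punches: +, -, and the digits represent themselves.
for row in '+-0123456789':
    card_code[frozenset(row)] = row

# A zone punch (+, -, or 0) plus a digit punch 1-9 gives the 27 "letters",
# slash included between R and S.
for zone, letters in [('+', 'ABCDEFGHI'),
                      ('-', 'JKLMNOPQR'),
                      ('0', '/STUVWXYZ')]:
    for digit, letter in zip('123456789', letters):
        card_code[frozenset(zone + digit)] = letter

def decode_column(punches):
    """punches: a string of row labels, e.g. '+1', '-9', or ''."""
    return card_code[frozenset(punches)]

print(decode_column('+1'), decode_column('-9'),
      decode_column('01'), decode_column('09'))   # -> A R / Z
```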

None of it makes sense at all! We can say something like "the character code of the letter, minus 64" to get the position of a letter in the alphabet, but they had that stupid slash in the way.

Oh right! Thanks!

We're getting SMB 1985 here!

Oh, cool!

That sounds like fun!

Wow!

How did that work, doing arithmetic in decimal?

Oh, yeah. I guess they just wanted it to be the first 0 letter. :-(

That makes more sense!

Glad we've come a long way since then, because now the only thing that gets my CompSci teacher mad is when students come over to her with code, ask her to help dissect and fix it, and it turns out that they just forgot a semicolon. She says that it "wasted her time".

One question - do you remember when higher-level languages such as FORTRAN and COBOL came out? How were computers trained to read these different languages other than assembly code? Or did IBM literally have to make a new machine to run FORTRAN and COBOL, and everyone who used it had to learn FORTRAN and COBOL...

They had compiler programs, same as now. (The code was written on punch cards, but it's the same thing.)

Huh. Interesting.

It was awesome. I don't think I've ever learned as much as I did the two years I was in SHP. First time I was really challenged mathematically. And, you had to pass a test to get in, but once you were in, you were in forever. No grades, no jumping through hoops, learning for fun.

And Henry at NYU was amazing too. They treated us like adults, gave us real work to do that they actually needed done. These experiences are what made me a teacher, and made me the kind of teacher I am.

How does it work adding in binary? Each decimal digit was represented as four bits, using only ten out of sixteen possible bit patterns. An eleventh bit pattern was a minus sign iirc.
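
To make the digit-by-digit idea concrete, here is a toy decimal adder in Python that keeps each digit in its own four-bit-sized field and carries whenever a digit sum goes over 9. The real 1410 hardware of course didn't look like this; it's just an illustration.

```python
# Toy decimal (BCD-style) addition: each decimal digit lives in its own
# 4-bit field, using only ten of the sixteen possible patterns.  Adding
# is done digit by digit with a carry, rather than converting the whole
# number to binary first.

def bcd_encode(n, digits):
    """Split n into `digits` decimal digits (most significant first)."""
    return [(n // 10**i) % 10 for i in range(digits - 1, -1, -1)]

def bcd_add(a, b):
    """Add two equal-length digit lists, digit by digit with carry."""
    assert len(a) == len(b)
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):   # low-order digits first
        carry, digit = divmod(da + db + carry, 10) # carry out if the sum >= 10
        result.append(digit)
    return list(reversed(result)), carry

x = bcd_encode(758, 4)      # [0, 7, 5, 8]
y = bcd_encode(467, 4)      # [0, 4, 6, 7]
print(bcd_add(x, y))        # -> ([1, 2, 2, 5], 0), i.e. 1225
```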

He wasn't mad at me; I'd done everything right by getting him to fix it. He was just raising a hypothetical situation in which he would get mad. :~)

I used to have a rule, teaching high school, that I wouldn't look at buggy code that had a variable named X. You had to give the variables good names before I'd read the code. The great thing was that pretty often that would fix the bug without me doing anything else!

The first compiler had to be written in Assembler. Once they had a working compiler, they'd rewrite it in its own language, e.g., a Fortran compiler in Fortran. And then to install Fortran on another computer architecture, they'd write a cross compiler, a program that ran on the first computer but compiled code to run on the second computer. And finally they'd use that to compile the original compiler for the new machine.

Oh. I see.

Weird. 5 bit patterns were unused.