Why did APL use strange symbols?

Why couldn't APL just use plaintext or actual mathematical symbols?

They wanted each command to only be one char. I don't know why 0xE2 0x8A 0xA2 is better than 0x69 0x66, but they made a decision.
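(For the record, those bytes check out: 0xE2 0x8A 0xA2 is the three-byte UTF-8 encoding of the APL glyph ⊢, while 0x69 0x66 is just the two ASCII bytes of "if". A quick Python check, if you want to see it:)

    # UTF-8 bytes of an APL glyph vs. the ASCII bytes of a keyword
    print("⊢".encode("utf-8").hex(" "))   # e2 8a a2  (three bytes)
    print("if".encode("ascii").hex(" "))  # 69 66     (two bytes)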

"Actual mathematical symbols" was the goal, and many of the APL symbols are in fact what mathematicians use, such as ! for factorial, ⌊ for floor, × (rather than *) for times, ⍲ for nand, and so on. What APL did change was that the function symbol always comes before the operand if monadic, so !5 rather than 5!, ⌊3.14 rather than ⌊3.14⌋, and so on. Iverson was mainly a mathematician, not a programmer, and he wanted a language that would feel comfortable to mathematicians. He had to make some compromises because the computer interfaces of the day were typewriters with a single monospace font, nothing like subscripts or superscripts or overbars for square root. (As it is, the weird character set was what kept APL from taking over the world, because you had to invest in special IBM-built terminals to use it.)

For the array operations, which mostly don't exist in math notation, he tried to extend the same general idea. So if you want to reshape an array, you use ⍴ (the Greek letter rho) as a mnemonic for Reshape.
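NumPy offers the same operation with the name spelled out, which makes the analogy easy to see (a sketch, assuming the standard numpy package):

    import numpy as np

    # APL: 3 4⍴⍳12 builds a 3-by-4 matrix from the first 12 integers.
    # NumPy spells rho out as a method name (and counts from 0, not 1):
    print(np.arange(12).reshape(3, 4))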

One advantage of the single-character names is that, for a mathematician, the natural way to build complicated programs out of primitive operations is composition of functions rather than a sequence of instructions; in most languages that composition quickly leads to very long lines. For a tiny example, compare these two ways to find the average of the values in a list:
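First, the sequence-of-instructions way, sketched here in Python (the loop itself is the point, not the particular language):

    L = [1, 2, 3, 4]          # some list of numbers
    total = 0
    for x in L:               # accumulate the sum one element at a time
        total = total + x
    average = total / len(L)  # 2.5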


vs. the APL version, (+/L)÷⍴L, where +/L sums the list and ⍴L is its length.

Why not ·? Is it because APL is matrix oriented? Even then, matrix multiplication is usually expressed through juxtaposition, and there is no settled symbol.

I dunno, but neither of those is in ASCII. @spaceelephant's kinda snarky comment about three-byte vs. two-byte Unicode sequences misses the point that APL predates Unicode by roughly three decades. So there was a very sharp distinction between the languages that restricted themselves to the 128 ASCII characters, which include only about 30 punctuation characters, and APL, which tried to be familiar to mathematicians rather than to computer engineers. (To this day, despite Unicode, standard computer keyboards include only the ASCII characters.)

Maybe Iverson chose × for multiplication rather than · for fear that the latter might be confused with the decimal point? Of course × could be confused with the letter x, but maybe that's why APL used italic letters. Both symbols are used in linear algebra: vectors have a "dot product" and a "cross product."
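(You can count ASCII's punctuation yourself; Python's string module keeps the list:)

    import string

    # Every punctuation/symbol character in ASCII
    print(string.punctuation)       # !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
    print(len(string.punctuation))  # 32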