Oh, right, I'm an idiot, we use UNICODE OF to check the case. So <, =, > are all consistent and life is good. Good because words are ordered in dictionary order, which is the right thing, rather than Unicode order.
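The difference between the two orderings is easy to demonstrate outside Snap!, too. A quick Python sketch (the word list is just made up for illustration):

```python
words = ["apple", "Banana", "cherry"]

# Raw Unicode (code-point) order: every capital letter sorts before
# every lowercase letter, so "Banana" jumps to the front.
print(sorted(words))                # ['Banana', 'apple', 'cherry']

# Dictionary order: compare the words case-insensitively, so
# capitalization doesn't scramble the list.
print(sorted(words, key=str.lower)) # ['apple', 'Banana', 'cherry']
```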
I'm sure that some HCI grad student has researched what people expect. My instinct is that you're too experienced a programmer to think like a regular person. So is Ken. My bet is that if you first ask "Are 'spaghetti' and 'Spaghetti' the same word?" and then ask "Are 'turkey' and 'Turkey' the same word?" you'll get a yes, but if you first ask "Are 'frankfurter' and 'Frankfurt' the same word?" then the answer about turkey will be no. If I'm right, then the turkey example gives us no advice about case sensitivity, imho.
As for "removes data," I'm all for removing data! For example, I think
should show 1.5. (I'm pretty sure I remember some version of Scheme having a primitive that reports the value in a given range with the fewest decimal digits.) (Just to be clear, I'm talking only about the printform; this wouldn't entail a change to the floating point representation.)
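In case the idea isn't clear, here's a sketch of such a primitive in Python. The name `simplest_decimal` and the interval endpoints are my own invention, not from any actual Scheme; it just tries 0, 1, 2, ... decimal places until some multiple of 10^-d lands inside the range:

```python
from math import ceil, floor

def simplest_decimal(lo, hi):
    """Report the number in [lo, hi] with the fewest decimal digits.

    For each candidate number of decimal places d, there's a d-digit
    decimal in the range exactly when some integer n satisfies
    lo * 10**d <= n <= hi * 10**d.
    """
    d = 0
    while True:
        scale = 10 ** d
        n = ceil(lo * scale)           # smallest d-digit candidate
        if n <= floor(hi * scale):     # it fits inside the range
            return n / scale
        d += 1

print(simplest_decimal(1.4999999999, 1.5000000001))  # -> 1.5
```

So a value that prints as 1.4999999999 could instead display as 1.5, the shortest decimal within some tolerance of the stored float, without touching the representation itself.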
On one's cell phone, the default mode loses tons of data by autocorrecting the keyboard, although there's a (hard to find) non-autocorrect mode.