Imagine if the letter Q had been left out of Unicode's Latin alphabet. The argument against it is that it can be written with a capital O combined with a comma. (That's going to play hell with naive sorting algorithms, of course, but oh well.) Oh, and also imagine your name is Quentin.
But the letter wasn't left out of Unicode; it's actually typed in the article. It's just internally represented as multiple codepoints, much like one part of my name (é) may be.
Frankly, this is irrelevant to the actual problem, which is the input system, and which has nothing to do with Unicode. Nothing prevents a single key from typing multiple codepoints at once.
> It's just internally represented as multiple codepoints
And in fact it is not, and even in the article it is U+09CE. One codepoint. If his input method irks him, he's as free to tweak it as I am to switch to Dvorak.
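To make the distinction concrete, here's a small sketch using Python's standard `unicodedata` module: é really can be either one precomposed codepoint or a base letter plus a combining accent, while the Bengali character from the article (Khanda Ta, ৎ) is a single codepoint, U+09CE.

```python
import unicodedata

# "é" has two equivalent encodings: precomposed (NFC) and decomposed (NFD).
nfc = unicodedata.normalize("NFC", "e\u0301")  # base 'e' + combining acute -> U+00E9
nfd = unicodedata.normalize("NFD", "\u00e9")   # U+00E9 -> 'e' + U+0301

print(len(nfc))  # 1 codepoint
print(len(nfd))  # 2 codepoints

# Bengali Khanda Ta, by contrast, is a single codepoint in Unicode:
print(hex(ord("ৎ")))  # 0x9ce
```

Both forms of é render identically and compare equal after normalization; the multi-keystroke input method is a separate layer entirely.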
Also folks, there's no "CJK unification" project. It's Han unification. Han characters are Han characters, just like Latin characters are Latin characters. Just because German has ß and Danish has Ø doesn't mean A isn't a Latin character but, say, a German or a Danish one. Not to get all Ayn Rand-y, but A is A is U+0041 in all Western European/Latin alphabets. It makes sense for 中国 and 日本 to have the same encoding in Chinese and Japanese.
I hate to say it, but I think the author's objections seem to stem from his lack of understanding of character encoding issues. I don't know Bengali at all and so I will try to refrain from commenting on it, but I do speak and read Japanese fluently and Han Unification is a very, very good thing. Can you imagine the absolute hell you would have to go through trying to determine if place names were the same if they used different code points for identical characters -- just because of geopolitical origins?
Yes, there are some frustrating issues -- it has been historically difficult to set priorities for fonts in X and since Chinese fonts tend to have more glyphs, you often wind up with Chinese glyphs when you wanted Japanese glyphs. But this is not an encoding issue. Localization/Internationalization is really difficult. Making a separate code point for every glyph is not going to change that for the better.
I feel that way too. The distinction between codepoint, glyph, grapheme, character, (...) is not an easy one, and that's what he seems to be stumbling over. Unicode concerns itself with only some of these issues; many of the others are about rendering (the job of the font) or input.
Combining characters are not just used for Bengali though. E.g. umlauted letters in European languages can also be expressed using combining characters, and implementations need to deal with those when sorting.
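A minimal illustration of that point: the same German word can arrive in precomposed or decomposed form, and naive byte/string comparison treats them as different. Normalizing to one form before comparing or sorting is the usual first step (full locale-aware collation would need something like ICU, which this sketch deliberately doesn't cover).

```python
import unicodedata

# 'über' written two ways: 'u' + combining diaeresis vs precomposed 'ü'
words = ["u\u0308ber", "\u00fcber"]

# Naive comparison sees two different strings:
print(words[0] == words[1])  # False

# Normalizing both to NFC makes them compare (and sort) as equal:
norm = [unicodedata.normalize("NFC", w) for w in words]
print(norm[0] == norm[1])  # True
```

This is exactly why sorting implementations have to deal with combining characters even for plain European-language text.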
> Imagine if the letter Q had been left out of Unicode's Latin alphabet.
To properly write my European last name I have to press between 2 and 4 different simultaneous keys, depending on the system. Han unification is beyond misguided, but combining characters are not the problem.
Han unification as a whole is misguided? I'll grant you that some characters which were unified probably shouldn't have been, and maybe some that should have been weren't, but what's the argument for the whole thing being misguided?
Should Norwegian A and English A be different Unicode code points just because Norwegian also has Ø, proving that it is a different writing system? You may want to debate whether i and ı should be the same letter (they aren't), but most letters in the Turkish alphabet are the same as the letters in the English alphabet.
Well, the Turkish i/ı/İ/I is, I think, exactly the example I would have come up with of characters that look the same as i/I but should have their own code points, just like Cyrillic characters have their own code points despite looking like Latin characters.
Absolutely. So i/ı/İ/I do have their own codepoints. But the rest of the letters, which are the same, don't. Just like Han unification. Letters which are the same are the same, and those which are not are not, even if they look pretty close.
The thing is that the Turkish "i" and "I" don't have their own codepoints; they are the same ones as Latin "i" and "I", when they should have been their own codepoints representing the same glyphs. That way going from I to ı and from i to İ wouldn't be a locale-dependent problem.
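You can see the locale-dependence directly: Python's built-in case mapping (like most default case mappings) is locale-independent, so it applies the shared Latin i/I rule rather than the Turkish one. This is only a sketch of the problem; correct Turkish casing needs locale-tailored rules (e.g. via a library such as PyICU), not the default `str.lower()`/`str.upper()`.

```python
# Default (locale-independent) case mapping uses the shared Latin rule:
print("I".lower())   # 'i'  -- but Turkish expects 'ı' (dotless i)
print("i".upper())   # 'I'  -- but Turkish expects 'İ' (dotted capital I)

# The dotted/dotless variants do exist as distinct codepoints...
print(hex(ord("ı")))  # 0x131, LATIN SMALL LETTER DOTLESS I
print(hex(ord("İ")))  # 0x130, LATIN CAPITAL LETTER I WITH DOT ABOVE
# ...but plain i/I are shared with other Latin alphabets, which is
# exactly why Turkish casing has to be handled per-locale.
```

Had dotted i and dotless ı each come in their own four-way pair, case conversion could have been a simple codepoint mapping instead of a locale switch.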
When Chinese linguists came up with hanyu pinyin, they specifically wanted to pick Latin characters (1) for Chinese phonetics, so that Chinese phonetic writing could use what we'd call the "white men's writing system".
Now, they did use the letter Q for the sound tɕʰ that was formerly often romanized as "ch". It is not really a "k" as Q is in English.
Are people now saying that hanyu pinyin should use a different encoding from English, because it would be more "respectable" for non-English languages to have their own code points even if the character has the same roots and appearance? That is absolutely pointless. The whole idea of using Q for tɕʰ is that you can use the same letter, same coding, same symbol as in English.
(1) OK they did add ü to the mix, although that is usually only used in romanization in linguistics or textbooks, and regular pinyin just replaces it with u.
My first choice as theoretical Quentin wouldn't be "how can I frame this accidental, perhaps even flagrantly disrespectful omission as antiprogressive and dissect the credentials, experience, and ethnicity of the people who made the mistake via culture essay," it would probably be "where do I issue a pull request to fix this mistake or in what way can I help?"
Maybe that's just me. I look forward to the future where any mistake not involving a straight white Anglo-Saxon man or his customs can be built up as antiprogressive agenda, and the best advocacy is taking the people who made them down rather than fixing the problem that is the, you know, problem.
(As an aside, imagine my surprise to see a Model View Culture link on HN given how much MVC absolutely hates and criticizes HN, including a weekly "worst of HN" comment dissection.)
Anyone can propose the addition of a new character to Unicode. It doesn't take $18,000 as some people think. You just need to convince the Unicode Consortium that it makes sense (preferably with solid evidence on use of the character). The process is discussed at: http://unicode.org/pending/proposals.html
I have a proposal of my own in the works to add a character to Unicode, so I'll see how it goes. There's a discussion of how someone successfully got the power symbol added to Unicode at https://github.com/jloughry/Unicode so take a look if you're thinking of proposing a character.