Well, I vote for Han unification of #Unicode, and I rather think that more Chinese characters should have been unified (e.g., 高 & 髙, 產 & 産, 內 & 内). šŸ¤·

#ę¼¢å­— #hanzi #hanja #kanji

If you believe that Chinese characters in #Chinese, #Korean, and #Japanese should all be divided into language-specific codes, then it is logical that the Latin characters in English, French, Italian, and German should all be divided into language-specific codes as well.

#Unicode


They don't need to be split up. You could have "高" and "髙" as adjacent codepoints in a single unified CJK plane.
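(As an aside, a quick illustration of my own, not part of the thread: the two forms of "high" do in fact already sit at adjacent code points in the CJK Unified Ideographs block, which a couple of lines of Python can confirm.)

```python
# Inspect the code points of the two forms of "high".
for ch in "高髙":
    print(f"U+{ord(ch):04X}  {ch}")

# 高 is U+9AD8 and 髙 (the "ladder" form) is U+9AD9: adjacent,
# but nonetheless separately encoded rather than unified.
assert ord("髙") - ord("高") == 1
```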

No, believe me, 高 and 髙 are the same character, with the same reading and the same meaning in Chinese, Korean, and Japanese alike.


And yet, people have (sometimes strong) language-specific preferences for how the character should be written.

And so you end up having to create and distribute separate fonts for the different CJK languages anyway.

Not sure how that is an improvement over being able to define a single CJK font that encompasses the usage preferences of all its users.

I believe that preference, held by some people mainly in Japan, was wrong from the beginning, i.e., from the creation of JIS X 0208, which predates Unicode.


We could have a debate about descriptivism versus prescriptivism and so on - can a language area be "wrong" about its own use of language - but setting that aside, the fact of the matter is that people in practice disagree about whether the characters are interchangeable. And that makes them not unified.

If I'm wrong, then I'm sure China will be perfectly fine with standardizing on the Japanese way of writing them for all international use. They're the same after all.

I don't believe that simplified characters should be merged with the original characters (e.g., 體 & 体). I just want to say that it would be nicer if these characters with small stylistic differences (e.g., 高 & 髙, åƹ & åƾ) were unified from the beginning.
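(Another small illustration of my own: none of these pairs are related by Unicode normalization either - each member is an independently encoded character, so any unification would have had to happen in the encoding itself, not in later text processing.)

```python
import unicodedata

# Pairs from the thread: a stylistic variant pair and a
# simplified/traditional pair.
pairs = [("高", "髙"), ("體", "体")]

for a, b in pairs:
    folded = unicodedata.normalize("NFKC", a) == unicodedata.normalize("NFKC", b)
    print(f"U+{ord(a):04X} vs U+{ord(b):04X}: folded by NFKC? {folded}")

# Neither pair is canonically or compatibility-equivalent, so even the
# strongest standard normalization form leaves them distinct.
```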

Well, I also agree that we can't change reality in either direction.

I think that's really the point: to at least some users, those differences aren't small.

Here in Japan there's another, related issue where some family and place names were/are traditionally written with variant characters. That worked fine when everything was written by hand, but those variants got left out when defining print types and fonts, leaving a lot of people frustrated about it.

And yes, it is what it is. We're human; nothing ever ends up 100% clean and logical :)

Yes, I know that some Japanese people are picky about the kanji forms of their surnames, but I believe that pickiness arose because JIS X 0208, when it was first defined, assigned separate code points to some stylistic differences. The reason I believe that: if you learn Chinese calligraphy, you'll find far more stylistic variation in Chinese characters than that, and people don't get picky about the variants that aren't encoded in JIS X 0208 or Unicode.

You can see it the other way: the reason Japan encoded these differences is because people felt strongly about them.

Well, probably not. The JIS X 0208 standard has been moving toward re-merging code points that were split too finely. They were split without much thought in the first place.

en.wikipedia.org/wiki/JIS_X_02


@hongminhee @jannem I agree that those JIS variants seem like too much when it comes to information exchange. My understanding is that JIS X 0208 included those variants for names because it was expected to be the character set the Japanese government would use internally. That is no longer true; the government now uses its own character set (colloquially called 住åŸŗę–‡å­—, the "Juki characters") that encodes even more variants. But it is only for their internal use, and it's not supported by the vast majority of computer systems in Japan.
