
Why did the creators of Windows and Linux choose different ways to support Unicode?


As far as I know, Linux chose UTF-8 for its backward compatibility, whereas Windows added completely new API functions for UTF-16 (the ones ending in "W"). Could these decisions have been different? Which one proved better?
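
For illustration, the API split being described looks roughly like this; a Windows-only sketch using the real MessageBoxA/MessageBoxW pair (POSIX, by contrast, keeps its single char*-based functions and simply lets them carry UTF-8 bytes):

    #include <windows.h>

    /* Nearly every Win32 call exists in two variants: an "A" version
     * (char*, interpreted in the current locale code page) and a "W"
     * version (wchar_t*, always UTF-16). */
    int main(void)
    {
        /* Narrow/ANSI variant. */
        MessageBoxA(NULL, "Hello", "MessageBoxA (char, code page)", MB_OK);

        /* Wide variant: UTF-16 strings. */
        MessageBoxW(NULL, L"Hello", L"MessageBoxW (wchar_t, UTF-16)", MB_OK);
        return 0;
    }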


UTF-16 is pretty much a loss, the worst of both worlds. It is neither compact (for the typical case of ASCII characters), nor does it map each code unit to a character. This hasn't really bitten anyone too badly yet, since characters outside the Basic Multilingual Plane are still rarely used, but it sure is ugly.
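
A minimal C11 sketch (assuming a compiler that provides <uchar.h>) of the "one code unit is not one character" point:

    #include <stdio.h>
    #include <uchar.h>

    /* U+1F600 lies outside the Basic Multilingual Plane, so UTF-16 cannot
     * store it in a single 16-bit code unit: it becomes the surrogate
     * pair D83D DE00. */
    int main(void)
    {
        char16_t emoji[] = u"\U0001F600";   /* one character... */

        /* ...but two code units before the terminator. */
        printf("code units: %zu\n", sizeof emoji / sizeof emoji[0] - 1);
        printf("surrogate pair: %04X %04X\n",
               (unsigned)emoji[0], (unsigned)emoji[1]);
        return 0;
    }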

POSIX (Linux et al.) has some wide-character ("w") APIs too, based on the wchar_t type. On platforms other than Windows this typically corresponds to UTF-32 rather than UTF-16, which is nice for easy string manipulation but incredibly bloated.
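
A small sketch of the bloat; the numbers in the comments assume a typical glibc system where wchar_t is 4 bytes (on Windows it is 2):

    #include <stdio.h>
    #include <wchar.h>

    /* On Linux/glibc wchar_t is 32 bits, so even pure ASCII text
     * quadruples in size the moment it is widened. */
    int main(void)
    {
        char    narrow[] = "hello";
        wchar_t wide[]   = L"hello";

        printf("sizeof(wchar_t) = %zu\n", sizeof(wchar_t));
        printf("\"hello\"  : %zu bytes\n", sizeof narrow);  /* 6, incl. NUL */
        printf("L\"hello\" : %zu bytes\n", sizeof wide);    /* 24 on glibc */
        return 0;
    }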

But in-memory APIs aren't really that important. What causes much more difficulty is file storage and on-the-wire protocols, where data is exchanged between applications with different charset traditions.

Here, compactness beats ease of indexing; UTF-8 has clearly proven to be the best format for this by far, and Windows's poor support of UTF-8 causes real difficulties. Windows is the last modern operating system to still have locale-specific default encodings; everyone else has moved to UTF-8 by default.
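
One concrete example of the difficulty: because the narrow ("A") APIs use the locale code page rather than UTF-8, Windows code that receives UTF-8 from a file or the network typically has to convert it to UTF-16 before calling any W function. A hedged sketch using the real MultiByteToWideChar call:

    #include <windows.h>

    /* UTF-8 arrives from the outside world; the W APIs want UTF-16,
     * so every boundary crossing needs a conversion like this. */
    int main(void)
    {
        /* "Grüße" written as explicit UTF-8 bytes, to avoid depending on
         * the compiler's source/execution character set. */
        const char *utf8 = "Gr\xC3\xBC\xC3\x9F" "e";

        /* First call with a NULL buffer returns the required length
         * in wchar_t units, including the terminator. */
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);

        wchar_t wide[64] = {0};
        if (len > 0 && len <= 64)
            MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, len);

        /* Only now can the text be passed to a W API. */
        MessageBoxW(NULL, wide, L"UTF-8 converted to UTF-16", MB_OK);
        return 0;
    }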

Whilst I seriously hope Microsoft will reconsider this for future versions, as it causes huge and unnecessary problems even within the Windows-only world, it's understandable how it happened.

The thinking back in the old days when WinNT was being designed was that UCS-2 was it for Unicode. There wasn't going to be anything outside the 16-bit character range. Everyone would use UCS-2 in-memory and naturally it would be easiest to save this content directly from memory. This is why Windows called that format “Unicode”, and still to this day calls UTF-16LE just “Unicode” in UI like save boxes, despite it being totally misleading.

UTF-8 wasn't even standardised until Unicode 2.0 (along with the extended character range and the surrogates that made UTF-16 what it is today). By then Microsoft were on to WinNT4, at which point it was far too late to change strategy. In short, Microsoft were unlucky to be designing a new OS from scratch around the time when Unicode was in its infancy.


Windows chose to support Unicode with UTF-16 and the attendant ASCII/Unicode function pairs way, way back in the early '90s (Windows NT 3.1 came out in 1993), before Linux had any notion of Unicode support.

Linux has been able to learn from the best practices built up on Windows and other Unicode-capable platforms.

Many people would agree today that UTF-8 is the better encoding for size reasons, unless you know you will be dealing almost exclusively with characters (such as most CJK) that need three bytes in UTF-8 but only two in UTF-16, in which case UTF-16 is more space-efficient.
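
A rough C11 illustration of that trade-off, using explicit byte values so the result does not depend on the compiler's execution character set:

    #include <stdio.h>
    #include <string.h>
    #include <uchar.h>

    int main(void)
    {
        /* "日本語" (three CJK characters), as explicit UTF-8 bytes. */
        const char *utf8_cjk    = "\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E";
        char16_t    utf16_cjk[] = u"\u65E5\u672C\u8A9E";

        printf("CJK   : UTF-8 %zu bytes, UTF-16 %zu bytes\n",
               strlen(utf8_cjk),
               sizeof utf16_cjk - sizeof utf16_cjk[0]);     /* 9 vs 6  */

        printf("ASCII : UTF-8 %zu bytes, UTF-16 %zu bytes\n",
               strlen("hello"),
               sizeof(u"hello") - sizeof(char16_t));        /* 5 vs 10 */
        return 0;
    }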
