
Disagree. At least in the context of Unix utilities portable to Windows. We are NOT going to be forking those to use wchar_t on Windows and char on Unix -that's a non-starter- and we're also not going to be switching to wchar_t on both because wchar_t is a second-class citizen on Unix.

Using UTF-8 with the "A" Windows APIs is the only reasonable solution, and Microsoft needs to commit to that.

> - The wide APIs accept and/or produce invalid UTF-16 in some places (like filesystems). There's no corresponding UTF-8 for invalid UTF-16. Meaning there are cases that lead to loss of information and that you simply cannot handle.

This is also true on Unix systems as to `char`. Yes, that means there will be loss of information regarding paths that have garbage in them. And again, if you want to write code for Windows _and_ Unix, using wchar_t won't spare you this loss on Unix. So you're damned if you do and damned if you don't, so just accept this loss and say "don't do that".

> - You have no control over all the DLLs loaded in your process. If a user DLL loads that can't handle UTF-8 narrow APIs, you're just praying it won't break.

In some cases you do have such control, but if some DLL unknown to you uses "W" APIs then... it doesn't matter, because if it's unknown to you then you're not interacting with it, or if you are interacting with it via another DLL that is known to you, then it's that DLL's responsibility to convert between char and wchar_t as needed. I.e., this is not your problem -- I get that other people's bugs have a way of becoming your problem, but strictly speaking it's their problem, not yours.

> - Some APIs simply don't have narrow versions. Like CommandLineToArgvW() or GetFileInformationByHandleEx() (e.g., FILE_NAME_INFO). You will not avoid wide APIs by doing this if you need to use enough of the APIs; you're just going to have to perform conversions that have dubious semantics anyway (see point #1 above).

True, but these can be wrapped with code that converts as needed. From a portability point of view that's a lot better than forking your entire codebase into Windows and Unix versions.
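
For a concrete flavor of what that wrapping looks like, here's a rough C sketch (the helper name utf8_argv is made up and error handling is minimal): it hands the rest of a portable program a UTF-8 argv even though the underlying API is CommandLineToArgvW().

    #include <windows.h>
    #include <shellapi.h>   /* CommandLineToArgvW; link with shell32.lib */
    #include <stdlib.h>

    /* Hypothetical helper: returns a NULL-terminated, malloc'd array of
       UTF-8 argument strings (or NULL on failure).  The caller owns it. */
    static char **utf8_argv(int *argc)
    {
        wchar_t **wargv = CommandLineToArgvW(GetCommandLineW(), argc);
        if (wargv == NULL)
            return NULL;

        char **argv = calloc((size_t)*argc + 1, sizeof(char *));
        for (int i = 0; argv != NULL && i < *argc; i++) {
            int n = WideCharToMultiByte(CP_UTF8, 0, wargv[i], -1,
                                        NULL, 0, NULL, NULL);
            argv[i] = malloc((size_t)n);
            if (argv[i] != NULL)
                WideCharToMultiByte(CP_UTF8, 0, wargv[i], -1,
                                    argv[i], n, NULL, NULL);
        }
        LocalFree(wargv);
        return argv;
    }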

> - Compatibility with previous Windows versions, obviously.

Sigh. At some point people (companies, contractors/consultants, ...) need to put their feet down and tell the U.S. government to upgrade their ancient Windows systems.

> - Performance

The performance difference between UTF-8 and UTF-16 is in the noise, and it depends greatly on context. But it doesn't matter. UTF-8 could be invariably slower than UTF-16 and it would still be better to move Windows code to UTF-8 than to move Unix to UTF-16 or lose portability between Windows and Unix.

In case you and others had not noticed, Linux has a huge share of the server market while Windows has a huge share of the laptop market, which means that giving up on portability is not an option.

The advice we give developers here has to include advice for developers who have to write and maintain code that is meant to be portable to Windows and Unix. Sure, if you're talking strictly to Windows-only devs, the advice you give is fine, but if their code later needs porting to Unix they'll be sad.

The reality is that UTF-8 is superior to UTF-16. UTF-8 has won. There are just a few UTF-16 holdouts: Windows and JavaScript/ECMAScript. Even Java has moved to UTF-8 as its default charset. And even Microsoft seems to be heading in the direction of making UTF-8 a first-class citizen on Windows.



> This is also true on Unix systems as to `char`. Yes, that means there will be loss of information regarding paths that have garbage in them. And again, if you want to write code for Windows _and_ Unix, using wchar_t won't spare you this loss on Unix. So you're damned if you do and damned if you don't, so just accept this loss and say "don't do that".

The problem is that you can't round-trip all filenames. CP_UTF8 doesn't solve that; it only pretends to. For a full solution you need to use the W functions and then convert between WTF-16 and WTF-8 yourself.
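
To make the loss concrete, a minimal sketch: an unpaired surrogate is a perfectly legal NTFS filename character but has no strict UTF-8 encoding, so a strict conversion fails and a lenient one silently replaces it (this uses the documented WC_ERR_INVALID_CHARS flag; it is not something CP_UTF8 mode does for you automatically):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* An unpaired high surrogate: allowed in an NTFS filename,
           but not representable as (strict) UTF-8. */
        const wchar_t name[] = { 0xD800, L'a', 0 };
        char buf[16];

        if (WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS,
                                name, -1, buf, sizeof buf, NULL, NULL) == 0)
            printf("lossless conversion failed: error %lu\n", GetLastError());
        /* Without WC_ERR_INVALID_CHARS the surrogate becomes U+FFFD,
           and the original filename can no longer be reconstructed. */
        return 0;
    }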


Hard disagree:

> At least in the context of Unix utilities portable to Windows. We are NOT going to be forking those to use wchar_t on Windows and char on Unix -that's a non-starter- and we're also not going to be switching to wchar_t on both because wchar_t is a second-class citizen on Unix.

Those aren't the only options. You (or someone) could also write your own compatibility layers for the APIs that avoid some of the problems I mentioned (e.g., by producing errors on inconvertible characters, by being compatible with former Windows versions, by not affecting other DLLs in your process, etc.)
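
"Producing errors on inconvertible characters" is mostly a matter of passing the strict flags; a tiny sketch (helper name made up) of the kind of conversion such a layer would use instead of silently substituting U+FFFD:

    #include <windows.h>

    /* Hypothetical strict helper: UTF-8 -> UTF-16.  Returns the number of
       wchar_t written, or 0 on invalid input (GetLastError() then reports
       ERROR_NO_UNICODE_TRANSLATION) rather than mangling the string. */
    static int to_wide_strict(const char *utf8, wchar_t *out, int out_cch)
    {
        return MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                   utf8, -1, out, out_cch);
    }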

Or you could e.g. get upstream to start caring about their users on other platforms, and play ball.

> This is also true on Unix systems as to `char`. Yes, that means there will be loss of information regarding paths that have garbage in them. And again, if you want to write code for Windows _and_ Unix, using wchar_t won't spare you this loss on Unix.

Er, no. First, if you're actually writing portable code, TCHAR is the solution, not wchar_t. Second, if you can't fork others' code, at the very least you can produce errors to avoid silent bugs (see above). And finally, "this problem also exists with char" is just wrong. In a lot of cases the problem doesn't exist as long as you're using the same representation and avoiding lossy conversion, whatever the data type is. If (say) the file path is invalid UTF, and you save it somewhere and reuse it, or pass it to some program and then have it passed back to you, you won't encounter any issues -- the data is whatever it was. The issues only come up with lossy conversions in any direction.

> if some DLL unknown to you uses "W" APIs then.. it doesn't matter because if it's unknown to you then you're not interacting with it, or if you are interacting with it via another DLL

I don't think you're understanding the problem here. Interaction is not part of the picture at all. You might not be loading the DLL yourself at all. DLLs get loaded by the OS and user for all sorts of reasons (antiviruses, shell extensions, etc.) and they easily run in the background without anything else in the process "knowing" anything about them at all. Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.

> Sigh. At some point people (companies, contractors/consultants, ...) need to put their feet down and tell the U.S. government to upgrade their ancient Windows systems.

USG? Ancient? These are systems less than 10 years old. We're not talking floppy-controlled nukes here.

> The performance difference between UTF-8 and UTF-16 is in the noise, and it depends greatly on context.

"Depends greatly on the context" kinda makes my point. It can turn a zero-copy program into single- or double-copy. Generally not a showstopper by any means, but it sure as heck can impact some programs. And if that program is a DLL people use - well now you can't work around. (Yes, there's a reason I listed this last. But there's a reason I listed it at all.)

> The reality is that UTF-8 is superior to UTF-16. UTF-8 has won.

The reality is Windows isn't UTF-16 and *nix isn't UTF-8, which was the crux of most of my points.


> Er, no. First, if you're actually writing portable code, TCHAR is the solution, not wchar_t.

TCHAR is a Microsoftism; it's NOT portable at all.


I didn't mean "portable" in the same sense you're using it. Maybe "cross-platform", if you will. Or insert whatever word you want that would get my point across.


> Those aren't the only options. You (or someone) could also write your own compatibility layers for the APIs that avoid some of the problems I mentioned (e.g., by producing errors on inconvertible characters, by being compatible with former Windows versions, by not affecting other DLLs in your process, etc.)

That's akin to writing a partial C library. If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.

> Or you could e.g. get upstream to start caring about their users on other platforms, and play ball.

The upstream is often not paid for this. Even if they get a PR, if the PR makes their code harder to work on they might reject it.

Microsoft has to make UTF-8 a first-class citizen.

> I don't think you're understanding the problem here. Interaction is not part of the picture at all. You might not be loading the DLL yourself at all. DLLs get loaded by the OS and user for all sorts of reasons (antiviruses, shell extensions, etc.) and they easily run in the background without anything else in the process "knowing" anything about them at all. Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.

You mean changing the codepage for use with the "A" functions? Any DLL that does that must go on the bonfire. There's a special place in Hell for developers who build such DLLs.

> "Depends greatly on the context" kinda makes my point. It can turn a zero-copy program into single- or double-copy. Generally not a showstopper by any means, but it sure as heck can impact some programs. And if that program is a DLL people use - well now you can't work around. (Yes, there's a reason I listed this last. But there's a reason I listed it at all.)

I'm assuming you're referring to having to re-encode at certain boundaries. But note that nothing in Windows forces or even encourages you to use UTF-16 for bulk data.

> The reality is Windows isn't UTF-16 and nix isn't UTF-8, which was the crux of most of my points.

Windows clearly prefers UTF-16, and its filesystems generally use just-wchar-strings for filenames on disk (they don't have to though). Unix clearly prefers UTF-8, and its filesystems generally use just-char-strings on disk.


>> Those aren't the only options. You (or someone) could also write your own compatibility layers for the APIs that avoid some of the problems I mentioned (e.g., by producing errors on inconvertible characters, by being compatible with former Windows versions, by not affecting other DLLs in your process, etc.)

> That's akin to writing a partial C library. If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.

I found out about activeCodePage thanks to the developers of those compatibility layers documenting the option and recommending it over their own solutions.
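
For anyone following along: whether the manifest's activeCodePage opt-in actually took effect is easy to verify at runtime (this assumes Windows 10 1903+ and a manifest that requests UTF-8):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* With <activeCodePage>UTF-8</activeCodePage> in the application
           manifest, the process ANSI code page should be 65001 (CP_UTF8). */
        if (GetACP() == CP_UTF8)
            printf("ACP is UTF-8; the -A functions accept UTF-8 here\n");
        else
            printf("ACP is %u; the -A functions use the legacy code page\n",
                   GetACP());
        return 0;
    }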

> The upstream is often not paid for this. Even if they get a PR, if the PR makes their code harder to work on they might reject it

The project I work on is an MFC application dating from the 9x/early-XP era and abandoned for 15 years. Before I touched it, it had no Unicode support at all. I'm definitely not being paid to work on it, let alone to put in the effort to convert everything to UTF-16 when the tide seems to be going the other direction.

> Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.

Programs much, much, much more popular than mine, written by the largest companies in the world, and many programs you likely use as a developer on Windows, set activeCodePage to UTF-8. Not to mention the advice in the article to set it globally for all applications (and it implies it's already the default in some locales). Those DLLs will be upgraded, removed, or replaced.


Forget it, you ain't gonna get the Linux-centric open-source community to really care about Windows (or other non-POSIX-like OSes, of which there are almost none today). The others have to give in and accommodate their ways if those others want to use their code.

And since Windows-centric developers, when porting their apps to Linux, are generally willing to accommodate Linux-specific idiosyncrasies (that's what porting is about, after all) if they care about that platform enough, the dynamic will generally stay the same: people porting from Windows to Linux will keep making compatibility shims, and people porting from Linux to Windows will keep telling you "build it with MinGW or just run it in WSL2, idgaf".


> That's akin to writing a partial C library.

Not really. It's just writing an encoding layer for the APIs. For most APIs it doesn't actually matter what they're doing at all; you don't have to actually care what their behaviors are. In fact you could probably write compiler tooling to automatically analyze the APIs and generate code for most functions so you don't have to do this manually.
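
Roughly what one such per-API shim looks like (a sketch with a made-up name, a fixed-size buffer, and no long-path handling): the wrapper never needs to understand what the API does, it only converts the string parameter and forwards the rest.

    #include <windows.h>

    /* Hypothetical shim: a UTF-8 flavored CreateFile.  Only the path is
       converted (strictly); every other argument is forwarded verbatim. */
    static HANDLE CreateFileU8(const char *path_utf8, DWORD access, DWORD share,
                               LPSECURITY_ATTRIBUTES sa, DWORD creation,
                               DWORD flags, HANDLE template_file)
    {
        wchar_t wpath[MAX_PATH];
        if (MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                path_utf8, -1, wpath, MAX_PATH) == 0)
            return INVALID_HANDLE_VALUE;
        return CreateFileW(wpath, access, share, sa, creation, flags,
                           template_file);
    }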

> If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.

"Well enough" as in, with all the warts I'm pointing out? Their current solution is all-or-nothing for the whole process. They haven't provided a module-by-module solution and I don't expect them to. They haven't provided a way to avoid information loss and I don't expect them to.

> You mean changing the codepage for use with the "A" functions? Any DLL that does that must go on the bonfire. There's a special place in Hell for developers who build such DLLs.

"Changing" the code page? No, I'm just saying any DLL that calls FooA() without realizing FooA() can now accept UTF-8 could easily break. You're just praying that they don't.

> I'm assuming you're referring to having to re-encode at certain boundaries. But note that nothing in Windows forces or even encourages you to use UTF-16 for bulk data.

Nothing? How do you say this with such confidence? What about, say, IDWriteFactory::CreateTextLayout(const wchar_t*) (to give just one random example)?

And literally everything that interacts with other apps/libraries/etc. that use Unicode (which at least includes the OS itself) will have to encode/decode. Like the console, clipboard, or WM_GETTEXT, or whatever.

The whole underlying system is based on 16-bit code units. You're going to get a performance hit in some places, it's just unavoidable. And performance isn't just throughput, it's also latency.

> Windows clearly prefers UTF-16, and its filesystems generally use just-wchar-strings for filenames on disk (they don't have to though). Unix clearly prefers UTF-8, and its filesystems generally use just-char-strings on disk.

Yes, and you completely missed the point. I was replying to your claim that "UTF-8 has won" over UTF-16. I was pointing out that what you have here is neither UTF-8 on one side nor UTF-16 on the other. Arguing over which one "won" makes no sense when neither is what you're actually dealing with, and you're hitting information loss during conversions. If you were actually dealing with UTF-16 and UTF-8, that would be a very different story.



