> there is no guarantee `char` is 8 bits, nor that it represents text, or even a particular encoding.
True, but sizeof(char) is defined to be 1. In section 7.6.2.5:
"The result of sizeof applied to any of the narrow character types is 1"
In fact, char and associated types are the only types in the standard where the size is not implementation-defined.
So the only way that a C++ implementation can conform to the standard and have a char type that is not 8 bits is if the size of a byte is not 8 bits. There are historical systems that meet that constraint but no modern systems that I am aware of.
char8_t also isn't guaranteed to be 8 bits: sizeof(char8_t) == 1, the same as char. On a platform where char is 16 bits, char8_t will be 16 bits as well.
The C++ standard explicitly says that it has the same size, signedness, and alignment as unsigned char, but it's a distinct type. So it's pretty useless, and badly named.
Wouldn't it be rather the case that char8_t just wouldn't exist on that platform? At least that's the case with the uintN_t types, they are just not available everywhere. If you want something that is always available you need to use uintN_least_t or uintN_fast_t.
It is pretty consistent. It is part of the C Standard and a feature meant to make string handling better, it would be crazy if it wasn't a complete clusterfuck.
> There's no guarantee char8_t is 8 bits either, it's only guaranteed to be at least 8 bits.
Have you read the standard? It says: "The result of sizeof applied to any of the narrow character types is 1." Here, "narrow character types" means char and char8_t. So technically they aren't guaranteed to be 8 bits, but they are guaranteed to be one byte.
Well, platforms with CHAR_BIT != 8. In C and C++, char (and therefore a byte) is at least 8 bits, not exactly 8 bits. POSIX does force CHAR_BIT == 8. I think the only place you still see this is embedded, and even there only on DSP- or ASIC-like devices. So in practice most code will break on those platforms, and they are very rare, but they are still technically supported by the C and C++ standards. Similarly, C still supported non-two's-complement architectures until 2023.
That's where the standard should come in and say something like "starting with C++26, char is always 1 byte and signed, and std::string is always UTF-8." Done, Unicode fixed in C++.
But instead we get this mess. I guess it's because there's too much Microsoft in the standard and they are the only ones not having UTF-8 everywhere in Windows yet.
Of course it can be made UTF-8. Just add a codepoints_size() method and other helpers.
But it isn't really needed anyway: I'm using it for UTF-8 (with helper functions for the 1% of cases where I need codepoints) and it works fine. With C++20, though, it's starting to get annoying because I have to reinterpret_cast to the useless u8 versions.
First, because of existing constraints like mutability through direct buffer access, a hypothetical codepoints_size() would require recomputation on each call, which would be prohibitively expensive, particularly because std::string is virtually unbounded in length.
Second, there is also no way to be able to guarantee that a string encodes valid UTF-8, it could just be whatever.
You can still just use std::string to store validly encoded UTF-8, you just have to be a little bit careful. And functions like codepoints_size() are pretty fringe -- unless you're doing specialized Unicode transformations, it's more typical to just treat strings as opaque byte slices in a typical C++ application.
I never said always. Just add some new methods for which it has to be UTF-8. All current functions that need an encoding (e.g. text IO) also switch to UTF-8.
Of course you could still save arbitrary binary data in it.
Is there a single esoteric DSP in active use that supports C++20? This is the umpteenth time I've seen DSPs brought up in casual conversations about C/C++ standards, so I did a little digging:
Aside from that, from what I can tell, those esoteric architectures are being phased out in favor of running DSP workloads on Cortex-M, which is just ARM.
I'd love it if someone who was more familiar with DSP workloads would chime in, but it really does seem that trying to be the language for all possible and potential architectures might not be the right play for C++ in 202x.
Besides, it's not like those old standards or compilers are going anywhere.
Cadence DSPs have a C++17-compatible compiler and will get C++20 soon; new CEVA cores do as well (both are Clang-based).
TI C7x is still C++14 (C6000 is an ancient core, yet it still got C++14 support, as you mentioned).
AFAIR the Cadence ASIP generator will give you a C++17 toolchain, and C++20 is on the roadmap, but I'm not 100% sure.
But for those devices you use a limited subset of language features, and you would be better off not linking the C++ stdlib, or even the C stdlib at all (so junior developers don't have room to do stupid things ;))
How common is it to use Green Hills compilers for those DSP targets? I was under the impression that their bread was buttered by more-familiar-looking embedded targets, and more recently ARM Cortex.
Dunno! My last project there was to add support for one of the TI DSPs, but as I said, that's decades past now.
Anyway, I think there are two takeaways:
1. There probably do exist non-8-bit-byte architectures targeted by compilers that provide support for at-least-somewhat-recent C++ versions
2. Such cases are certainly rare
Where that leaves things, in terms of what the C++ standard should specify, I don't know. IIRC JF Bastien or one of the other Apple folks who drove things like "two's complement is the only integer representation C++ supports" tried to push for "bytes are 8 bits" and got shot down?
Judging by the lack of modern C++ in these crufty embedded compilers, maybe modern C++ is throwing too much good effort after bad. C++03 isn't going away, and it's not like these compilers always stuck to the standard anyway in terms of runtime type information, exceptions, and full template support.
Besides, I would argue that the selling point of C++ wasn't portability per se, but the fact that it was largely compatible with existing C codebases. It was embrace, extend, extinguish in language form.
> Judging by the lack of modern C++ in these crufty embedded compilers,
Being conservative with features and deliberately not implementing them are two different things. Some embedded compilers go through certification to be allowed for producing mission-critical software. Chasing features is prohibitively expensive, for no obvious benefit. I'd bet that in the 2030s most embedded compilers will support C++14 or even 17. Good enough for me.
> Being conservative with features and deliberately not implementing them are two different things.
There is no version of the C++ standard that lacks features like exceptions, RTTI, and fully functional templates.
If the compiler isn't implementing all of a particular standard then it's not standard C++. If an implementation has no interest in standard C++, why give those implementations a seat at the table in the first place? Those implementations can continue on with their C++ fork without mandating requirements to anyone else.
> Then they will diverge too much, like it happened with countless number of other languages, like Lisp.
Forgive me if I am unconvinced that the existence of DSP-friendly dialects of C++ will cause the kinds of language fracturing that befell Lisp.
DSP workloads are relatively rare compared to the other kinds of workloads C++ is tasked with, and even in those instances a lot of DSP work is starting to be done on more traditional architectures like ARM Cortex-M.
A cursory Chromium code search does not find anything outside third_party/ forcing either signed or unsigned char.
I suspect if I dug into the archives, I'd find a discussion on cxx@ with some comments about how doing this would result in some esoteric risk. If I was still on the Chrome team I'd go looking and see if it made sense to reraise the issue now; I know we had at least one stable branch security bug this caused.
Related: in C at least (C++ standards are tl;dr), type names like `int32_t` are not required to exist. Most uses, in portable code, should be `int_least32_t`, which is required.
FWIW, I really like the way C# has approached this need... most usage is exposed via an attribute declaration (`DllImport`) for P/Invoke. Contrast that with, say, JNI in Java or even the Go syntax. The only thing that might be a significant improvement would be an array/vector of lookup names for the library on the system, given how specific versions are often tagged in Linux vs Windows.
People can be strange or bizarre if they want to, but they have to understand it means some people won't like them, especially if their shtick is deliberately making people uncomfortable and being annoying.
> I do not understand the desire for everybody else in the world to act exactly like you. Variety is the spice of life.
I don't want people to act exactly like me. I greatly appreciate the existence of people different from me with differing points of view and differing nations with differing cultures. This doesn't mean I have to like one specific archetype that I feel acts obnoxiously.
The author quite literally mentions that part of their motivation to do things is to make people want them to stop, not to mention the deliberate and conscious choice to write the article in lowercase.
It's also natural to be uncomfortable because of the various references to sexual fetishes throughout the article.
in the sense of "writing a brainfuck compiler in ed," not in making them so uncomfortable they beg for release. plus, "feminization" is not a fetish, at least in the sense of making rustc say "i love you;" that feels incredibly uncharitable.
i was being charitable, not obtuse. a great number of my closest friends are trans; no element of their experience as i observe it fetishizes the very concept of transition, and those who've spoken to me about it are quite opposed to the "pornification" (as opposed to even sexualization) of trans people (particularly women) by the community itself, and others. if you're at all curious, i thought [0] was pretty informative.
all that to say, trans people (or anyone) shouldn't need to qualify their position (or very lighthearted, energetic opinion piece) with some genericizing disclaimer as to their identity, intents, etc., on the very basis of their identity. live and let live (i.e. fuck off)
"feminizing" doesn't refer to a sexual fetish, it just means making something more feminine. Do you assume that something being feminine is automatically sexualized and fetishistic?
"Feminizing" doesn't inherently refer to a sexual fetish but context matters. I invite you to examine the article more in depth, look at the chatroom conversations and then come to your own conclusion.
Almost every even half-decent CPU made in the last decade does have TPM 2.0, albeit for some strange reason OEMs used to ship with it disabled. You may be able to turn it on in the BIOS.
This is a massive pet peeve of mine as well. As far as I'm aware, there's not a single consumer CPU listed in the Windows 11 compatibility list that doesn't have built-in TPM 2.0.
That study only says that most Americans think they interact with AI at least a few times a week (it doesn't say how, or whether it's intentional). And it also says the vast majority feel they have little or no control over whether AI is used in their lives.
For example, someone getting a Google search result containing an AI response is technically interacting with AI but not necessarily making use of its response or even wanting to see it in the first place. Or perhaps someone suspects their insurance premiums were decided by AI (whether that's true or not). Or customer service that requires you to go through a chat bot before you get real service.
Which can be trivially mapped to directories for aliasing. Just like Linux.
Windows NT and UNIX are much more similar than many people realize; Windows NT just has a giant pile of DOS/Win9x compatibility baked on top, hiding how great the core kernel design actually is.
In the end, if you think about it, the Win32 subsystem running on top of the NT kernel is pretty much the same concept as Wine running on Unix. That's why Wine is not an emulator. And neither is XP when it runs old Win9x-era Win32 binaries.
They're using slide rule users as a stand-in for serious mathematicians, as opposed to people who incidentally use mathematics. It makes some sense in historical context but becomes a bit anachronistic after the invention of electronic calculators.