Consider a simple bit mask operation; assume 8-bit ints for the sake of brevity:
Prior to C99, assuming you use the mentioned typedef for bool:
bool a = someInt & 0x02;
'a' will be 0x02 if the bit is set.
In C99, bool is a macro for _Bool (from <stdbool.h>), and if the flag is set, the above code will result in 'a' being 0x01, because C99 requires conversion to _Bool to yield 1 for any nonzero value.
To get the same 0-or-1 behavior prior to C99, you can add !!, e.g.:
bool a = !!(someInt & 0x02); // 'a' is now 0x01 when the bit is set.
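To see the difference side by side, here's a minimal sketch (the old_bool typedef and the variable names are just placeholders for illustration):

    #include <stdio.h>
    #include <stdbool.h>            /* C99: bool is a macro for _Bool */

    typedef unsigned char old_bool; /* a typical pre-C99 typedef */

    int main(void)
    {
        int someInt = 0x02;

        old_bool a = someInt & 0x02;     /* keeps the raw mask bit: 0x02 */
        bool     b = someInt & 0x02;     /* C99: any nonzero value becomes 1 */
        old_bool c = !!(someInt & 0x02); /* !! normalizes to 0 or 1 either way */

        printf("a=0x%02x b=0x%02x c=0x%02x\n", a, b, c); /* a=0x02 b=0x01 c=0x01 */
        return 0;
    }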
Most, if not all, of the problems people are describing in this discussion come down to trying to assign something that isn't a boolean value to a variable of boolean type, and expecting it to do something sensible. I don't see how this is ever going to have a happy ending.
If you had written
bool a = (someInt & 0x02) == 0x02;
or something similarly clear and unambiguous, nothing odd would happen, even in C.
(Edit: OK, that's not strictly true, because of operator precedence. It's never made sense to me that the integer arithmetic operators have higher precedence than the comparisons but the bitwise logical operators have lower precedence, so if you remove the parentheses above, the resulting code doesn't do what you'd expect. I suppose that's because I'm looking at the problem as if comparison operators returned a proper boolean value rather than an integer, and the ordering we've wound up with in C dates from a historical oddity about 40 years ago.)
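For the record, here's the trap those parentheses are guarding against (a sketch; someInt is carried over from the earlier example):

    /* Without parentheses, == binds tighter than &, so this... */
    bool a = someInt & 0x02 == 0x02;

    /* ...actually parses as the following, which tests bit 0 instead: */
    bool b = someInt & (0x02 == 0x02);  /* i.e. someInt & 1 */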
The underlying problem with booleans in C99, as Linus and others have been saying, is that the language doesn't actually enforce basic type safety, so cases like your first example
bool a = someInt & 0x02
that should result in a type error are allowed through, and with odd results: how does it make any sense for a boolean variable to have an integer value like 0x01 or 0x02?
Then programmers who relied on such odd results wind up writing horrific code like your second example
bool a = !!(someInt & 0x02)
where fudge factors build on top of distortions to make the old hacks work.
And then we wonder why in 2013 we still have widely used, essential software that is riddled with security flaws and crash bugs. :-(
The reason that & and | have lower precedence than the comparison operators is indeed historical - it's because the earliest versions of the language didn't have the logical boolean operators && and ||, so the bitwise & and | operators stood in for them. The lower precedence meant that you could write:
if (x == 1 & y == 2) {
...and have it do what you meant. This became a bit of a wart when the && and || operators were introduced (still well before ANSI standardisation), but it was considered that changing it would have broken too much existing code.
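So the wart survives to this day: && took over the logical-test role, but & kept its comparison-losing precedence. A quick sketch (x, y, and flags are made-up variables):

    /* What the early-C idiom becomes once && exists: */
    if (x == 1 && y == 2) { /* ... */ }

    /* But & kept its old low precedence, so bit tests still need parentheses: */
    if ((flags & 0x02) == 0x02) { /* ... */ }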
but this - and other examples here - seem unfair, in that they are blaming _Bool for working correctly (consistently, logically) in cases where people were previously doing dumb things.
if you rely on something called "bool" being 0x02 you're going to have a bad time. that's hardly C99's fault.
your last line of code is what i would write, effectively, if i needed to compare booleans. it seems to me that _Bool is an improvement because, pre-C99, if i forgot the !! dance somewhere, i likely had a bug. with _Bool things just work.
(disclaimer, as with other reply here - still trying to get a grasp on this, so may be saying something stupid myself).
I don't really disagree, but Linus's point, which I agree with, is that C99's implementation is 'better' but still pretty bad: 'bool' is still a mask for 'int', and as a result arrays of bools aren't what they should be (bit arrays), directly serializing a bool still sends out an entire int of data, etc. The 'improvement' in C99 isn't worth the broken code that will result from the subtle differences.
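You can see the space cost directly; a minimal sketch (the exact sizes are implementation-defined, one byte per bool is merely typical):

    #include <stdio.h>
    #include <stdbool.h>

    int main(void)
    {
        bool flags[64];              /* typically one byte per element */
        unsigned char bits[64 / 8];  /* a hand-rolled bit array: 8 bytes */

        printf("sizeof(bool)  = %zu\n", sizeof(bool));  /* commonly 1 */
        printf("sizeof(flags) = %zu\n", sizeof flags);  /* commonly 64, not 8 */
        printf("sizeof(bits)  = %zu\n", sizeof bits);   /* 8 */
        return 0;
    }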
It's also worth considering in context that a lot of the code which will run into problems with these small differences is low-level OS/driver code that often deals with a lot of bit flags and bit manipulation in general. When you're trying to fit a web server into 3800 _bytes_ of RAM on an 8-bit microcontroller, 'doing dumb things' becomes 'being inventive'.
'a' will be 1 or 0, not 1 << 5. You don't get this behavior with a normal int.
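For example (a sketch; flags is a placeholder):

    int  flags = 1 << 5;        /* bit 5 set */
    bool a = flags & (1 << 5);  /* a is 1, not 0x20 */
    int  b = flags & (1 << 5);  /* b keeps the raw 0x20 */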
MSVC also has a warning about some of this behavior [1], with a nonsense performance subtext. I don't think there's a GCC equivalent.