The problem with unsigned is the ridiculous underflow behavior. Unsigned integers are inherently unsafe without underflow checking at runtime. When you use a signed integer for a value that should be positive, seeing a negative number tells you everything you need to know: the program is wrong and will most likely crash soon. But when you use unsigned to mean "a number that can never be negative," you can't expect people to never underflow it, even by just one, because it silently wraps around to a huge positive number. There is a semantic gap between what the type means and what invariants it actually guarantees at runtime, which is why having no unsigned integers at all is better for most code, unless the unsigned integer is fully underflow-checked at runtime.
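To make the gap concrete, here's a minimal C sketch (assuming the usual 32-bit int) contrasting the two failure modes described above:

```c
#include <stdio.h>

int main(void) {
    unsigned int items = 0;
    int signed_items = 0;

    items -= 1;        /* wraps: no error, just a huge "valid" value     */
    signed_items -= 1; /* goes to -1: obviously wrong, easy to assert on */

    printf("unsigned: %u\n", items);        /* 4294967295 on 32-bit int */
    printf("signed:   %d\n", signed_items); /* -1                       */
    return 0;
}
```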
I think you are confused. Unsigned arithmetic is modular. Unsigned integers are both positive and negative.
0b1111_1111 is congruent to both -1 and 255 (mod 256). We simply choose 255 as the principal value for coercions, but you could equally well choose -1.
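Same bit pattern, two readings; a small C illustration (the signed reinterpretation assumes the usual two's-complement representation):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bits = 0xFF;               /* 0b1111_1111 */
    int8_t  as_signed = (int8_t)bits;  /* same bits, signed reading */

    printf("as unsigned:     %u\n", (unsigned)bits);      /* 255 */
    printf("as signed:       %d\n", as_signed);           /* -1 on two's-complement machines */
    printf("255 + 1 mod 256: %u\n", (unsigned)(uint8_t)(bits + 1)); /* 0: arithmetic is mod 256 */
    return 0;
}
```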
My favorite example of this was the major outage at a Swedish futures exchange in 2012 where the system wound up treating an unsigned order quantity as "negative" and placed an order for 4 billion futures contracts: https://www.reuters.com/article/markets-sweden-bug-idUSL5E8M...
> unless the unsigned integer is fully underflow-checked at runtime.
Sure. That doesn't mean the datatype wouldn't be nice to have even with such checks. There are languages out there that fully check over- and underflow. Java could too.
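For a sense of what such checks look like in practice, here's a hedged C sketch using the GCC/Clang overflow builtins (not standard C; the checked_sub wrapper is just a hypothetical helper, not anyone's actual API):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical wrapper: subtract or abort, instead of silently wrapping. */
static unsigned checked_sub(unsigned a, unsigned b) {
    unsigned result;
    if (__builtin_sub_overflow(a, b, &result)) {  /* GCC/Clang builtin */
        fprintf(stderr, "underflow: %u - %u\n", a, b);
        abort();
    }
    return result;
}

int main(void) {
    printf("%u\n", checked_sub(5, 3)); /* 2 */
    printf("%u\n", checked_sub(0, 1)); /* aborts instead of yielding 4294967295 */
    return 0;
}
```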