jviotti, the reason BOOL was not changed to bool sooner was that it would break ABI compatibility. So on Mac, for example, they had to wait until the x86_64-to-arm64 transition to finally do it.
Talking about the hardware here. Yep, Server.app is a shell of what it could be. But I mean the rackmount macs — I was asking GP why they didn't consider those servers ;-)
Not GP, but because they aren't marketed as servers and don't have features you'd commonly expect in a server. Video and audio production heavily use rack-mount gear, so having a rack-mounted version of the Mac Pro makes perfect sense.
Many servers still don't use DKIM, since it requires a server/software update. But SPF is super easy, requiring only DNS changes. You could probably do as you suggest and look only at SPF.
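For anyone unfamiliar, an SPF policy is just a single TXT record on the domain. A minimal example (hypothetical domain and RFC 5737 documentation address, not a recommendation for any real zone):

```
example.com.  3600  IN  TXT  "v=spf1 mx ip4:203.0.113.10 -all"
```

That says mail from example.com may come from its MX hosts or 203.0.113.10, and everything else should hard-fail. No mail server software changes needed, which is why SPF adoption runs ahead of DKIM.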
Actually, the ][+ was still 40-column, uppercase-only. I remember wiring a pin off my keyboard controller card to the paddle button input and adding a replacement character ROM to get lowercase support.
It was the //e and //c that had built-in lowercase support.
We had an Apple II+ with the limitations you describe. However, we purchased and installed the 80-column card, which provided both 80 columns and lowercase characters; that's what I think the parent comment is suggesting.
I remember adding 16KB or 32KB to get to a total of 48KB of RAM with a card that piggybacked onto the existing chips. Was that the same upgrade that enabled lowercase letters?
I mostly remember that the RAM upgrade changed the gauges in MS Flight Simulator from octagons into rounder circles.
Yeah, it did. There was also a Z80 card so you could run CP/M. With CP/M I could run Fortran and C on my ][+; it was a lot easier to get pirated CP/M software than original Apple Pascal or Fortran.
No, that's not correct. The II+ was a II with Applesoft BASIC instead of Woz's Integer BASIC, and the ability to automatically boot from disk at power on. That's it.
Lowercase and 80 columns were available from third-party vendors for both the II and the II+.
Worth pointing out (for those interested) that Applesoft BASIC added floating-point support and was written by Microsoft.
BASIC and the monitor were stored on ROM chips on the motherboard. The II and II+ were basically the same computer with a different set of BASIC ROM chips installed.
Edit: It is actually possible to have both sets of ROM chips installed if you use a Firmware card. Very handy!
We see this at Raygun with our Crash Reporting and Real User Monitoring products (measuring errors and end-user experience in general). Power users are overwhelmingly in product roles, or very senior/executive developers.
Very low engagement from QA. I'm not sure if they just have a way they like to work, or if they think that as long as they exist, no bugs make it to prod (don't laugh: plenty of our Twitter adverts get comments from folks thinking you don't need any monitoring if you have a tester on the team or if a dev writes unit tests).
Understand how the end user experiences your software. That's the actual source of truth.
memset_s was added to C11 in an optional annex, and my understanding is that there are zero platforms that actually implement it. (Microsoft implemented an early draft of Annex K that doesn't actually include memset_s.)
Most libcs added an insecure version of memset_s that performs only the compiler barrier discussed above, not a memory barrier, which is needed on hardware vulnerable to Spectre-class attacks. The default memset should at least include the compiler barrier. But unfortunately you cannot talk with libc maintainers about security; too much arrogance. Thanks to this Red Hat article for supporting the user base.
You can use my safeclib, which implements the Annex K extensions.