Real estate investors are not typically developers. They are also generally opposed to the things YIMBYs support since, as mentioned, those policies would cut into their profits. YIMBYs, though not a monolith, largely support rent control and public housing. They also support private development. The goal is to reduce housing costs, and a variety of means are necessary.
Article 1, Section 9 specifically prevented Congress from banning the importation of slaves before 1808. The three-fifths compromise (Article 1, Section 2) also addressed slavery. The word never appears, but there are plenty of euphemistic references.
This is actually an implantable artificial kidney. There are other projects like the one you describe. https://pharm.ucsf.edu/kidney/device/faq has some helpful info about this.
This would basically replace dialysis if they are able to achieve the numbers they quote (GFR of 20-30). Occasionally having a minor surgery is likely much safer and more affordable than dialysis.
GC is a memory management technique with tradeoffs like all the others.
GC has many different implementations, with widely varying properties. For example, the JVM itself currently supports at least 3 different GC implementations. There are also different types of GC; in a generational garbage collector, for example, objects are grouped into two or three generations based on how many GC cycles they have survived, and each generation is collected on its own schedule. The shortest collections in those systems are usually a couple of milliseconds, while the longest can take many seconds.
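To make that concrete, here's a rough sketch you can run yourself (the class name, heap size, and logging flags below are just illustrative choices on my part, and the exact log output varies by JDK version and collector). It churns through short-lived allocations so the young generation gets collected frequently, while a small set of survivors eventually gets promoted to the old generation:

    // Run with GC logging enabled, e.g.:
    //   java -XX:+UseG1GC -Xmx256m -Xlog:gc GcDemo
    import java.util.ArrayList;
    import java.util.List;

    public class GcDemo {
        public static void main(String[] args) {
            List<byte[]> longLived = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) {
                // Short-lived garbage: dies young and gets reclaimed by
                // frequent, cheap young-generation collections.
                byte[] temp = new byte[1024];
                temp[0] = 1;

                // A small fraction survives long enough to be promoted to
                // the old generation, which is collected far less often.
                if (i % 1000 == 0) {
                    longLived.add(new byte[1024]);
                }
            }
            System.out.println("survivors kept alive: " + longLived.size());
        }
    }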
GC isn't always a problem. If your application isn't latency sensitive, it's not a big deal, though if you tune your network timeouts too low, even something that isn't really latency sensitive can run into trouble when GC pauses cause connections to time out. Even for a latency-sensitive application, it can be OK as long as the "stop the world" pauses (pauses that halt program execution entirely) are short.
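As a toy illustration of that timeout interaction (the 500ms "pause" and 100ms timeout below are made-up numbers, and the sleep is only standing in for a real stop-the-world pause), this sketch shows how a pause on the serving side surfaces as a timeout on the client:

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class PauseTimeoutDemo {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(0); // any free port

            Thread serverThread = new Thread(() -> {
                try (Socket conn = server.accept()) {
                    Thread.sleep(500); // pretend a 500ms stop-the-world pause hit here
                    OutputStream out = conn.getOutputStream();
                    out.write('!');
                    out.flush();
                } catch (Exception e) {
                    // ignore for the sketch
                }
            });
            serverThread.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setSoTimeout(100); // read timeout tuned below the pause length
                client.getInputStream().read();
                System.out.println("got a response in time");
            } catch (SocketTimeoutException e) {
                System.out.println("timed out: looks like a network problem, but it's really just the pause");
            }

            serverThread.join();
            server.close();
        }
    }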
One reason you'll see people say GC is bad is exactly those latency-sensitive applications. For example, I previously worked on distributed datastores where low-latency responses were critical. If our 99th percentile response times jumped above, say, 250ms, customers would call our support line in massive numbers. These datastores ran on the JVM, where at the time G1GC was the state-of-the-art low-latency GC. If the systems were overloaded or had badly tuned GC parameters, GC times could easily spike into the seconds range.
Other considerations are GC throughput and CPU usage. GC systems can use a lot of CPU; that's often the tradeoff you'll see in these low-latency GC implementations. GC can also put a cap on memory throughput. The question tends to be: how much memory can the collector process, at what CPU cost, and with how much stop-the-world time?
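If you want to see that accounting on a live JVM, the standard management beans expose per-collector collection counts and cumulative collection time. The sketch below is mine (the allocation loop is only there to make the counters move), and the collector names you'll see depend on which GC the JVM is running:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.ArrayList;
    import java.util.List;

    public class GcStats {
        public static void main(String[] args) {
            // Generate some allocation pressure so the GC counters move.
            List<int[]> keep = new ArrayList<>();
            for (int i = 0; i < 500_000; i++) {
                int[] chunk = new int[256];
                if (i % 5000 == 0) keep.add(chunk);
            }

            long totalMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // getCollectionTime() is cumulative milliseconds (-1 if not reported).
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                totalMs += Math.max(0, gc.getCollectionTime());
            }
            long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
            System.out.printf("GC time: %d ms out of %d ms uptime%n", totalMs, uptimeMs);
            System.out.println("retained chunks: " + keep.size());
        }
    }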
People loathe Apple for locking down their own access to their computer, not for security measures. Signed app distribution solves an entirely separate problem: making sure you get the app unmodified. The same goes for sandboxing; Apple does have it, but mostly that's not what people complain about.
You can have signed apps and sandboxing without Apple being the sole arbiter of what you install. For example, Linux has multiple options available to solve this: signed repos combined with SELinux, flatpak, and snap all solve the same problem.
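As a sketch of the integrity half of this (the part signing solves, independent of who runs the store or repo), here's a self-contained Java example. It generates a throwaway key pair in-process purely to keep the example runnable; in reality the private key stays with the publisher or repo and only the public key ships to users:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignedArtifactDemo {
        public static void main(String[] args) throws Exception {
            byte[] artifact = "pretend this is an app package".getBytes(StandardCharsets.UTF_8);

            // Publisher side: sign the artifact with the private key.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair keys = kpg.generateKeyPair();

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keys.getPrivate());
            signer.update(artifact);
            byte[] sig = signer.sign();

            // User side: verify with the public key; no central gatekeeper needed.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(artifact);
            System.out.println("untouched artifact verifies: " + verifier.verify(sig));

            // A tampered copy fails verification.
            byte[] tampered = artifact.clone();
            tampered[0] ^= 1;
            verifier.initVerify(keys.getPublic());
            verifier.update(tampered);
            System.out.println("tampered artifact verifies: " + verifier.verify(sig));
        }
    }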
I have the same laptop and this is also my fourth upgrade on it. Really excellent.
It is impressive how much the upgrade process has improved over the years. Years ago you could either risk updating your repo files and running a yum update, or just do a full reinstall. Then they built fedup, a command-line tool that made it easier but still required several different commands. Now it's driven through the GUI and, in my opinion, a better experience than macOS upgrades.
Once started, the upgrade process is pretty much the same as a GUI update: download all the needed packages, check that there are no conflicts with what is already installed, then reboot into a minimal systemd target and run the update there.
Doing it like this makes sure that packaging conflicts, network connectivity problems, or unrelated processes running on the machine can't break your system.
No experience, but I did a thorough read-through of the docs. One thing to keep in mind about ClickHouse is that its replication guarantees aren't very strong. From the docs: "There are no quorum writes. You can't write data with confirmation that it was received by more than one replica."
That's pretty troubling, but at least they're open about it. That said, their performance claims are pretty spectacular, and it seems solidly engineered. Further, if you're not planning on using replication it certainly seems interesting. I'd be curious to hear about someone's production experience as well, since the list of companies running it seems rather thin.
Yes, replication in ClickHouse is asynchronous by default. For the intended use cases (OLAP queries aggregating data across many rows), data that is a few seconds stale is usually okay. In a serious production deployment you absolutely should enable replication; otherwise you risk losing all your data, not just the last couple of seconds of inserts.
That said, sometimes synchronous replication is necessary despite the latency penalty that comes with it. This feature is actually implemented but not yet considered ready for prime time.
We have several years of production experience with ClickHouse (as the DBMS powering Yandex.Metrica, the second-largest web analytics system in the world). If you have questions, just ask.