FF is plenty competitive on the technical and feature front. Its market share is not a reflection of technical merit.
What's more, next to Linux itself it is maybe the only case I can see where a major piece of user facing software is kept competitive with the Apple/Google/MS tools.
LibreOffice or Nextcloud are technically far further behind Office and Google's online offerings.
Which raises the question: who else is in a position to do this?
At first glance, Moz with Firefox + a suite of self-hosted team and productivity stuff that works well in Firefox would make a ton of sense...
Worse, it's riddled with spyware, and is merely a honeypot for security-aware people who aren't paranoid enough to actually check any of the claims. Like those VPNs from YouTube ads that use your IP to give AI companies residential proxies; same kind of scam.
Spin up Wireshark and take a look at Firefox's network activity. Try to shut the browser up. It won't work.
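If you want to try this yourself, a minimal sketch using tshark (Wireshark's CLI); the interface and the DNS-only filter are assumptions, adjust for your setup:

```shell
# Log every DNS query made while Firefox sits on a blank tab.
# "-i any" captures on all interfaces; needs capture privileges
# (root or membership in the wireshark group).
sudo tshark -i any -f "udp port 53" -T fields -e dns.qry.name
```

Whatever hostnames show up with an idle browser and telemetry toggled off in the UI is exactly the traffic being complained about here; judge for yourself.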
Even if they weren't a Google proxy company, they would lose to standards committees infested by Google, and would have to play the "good luck catching up" game by constantly supporting new versions of JS, APIs and CSS features that nobody needs (except that Google's YouTube will use them to stop you from using an adblocker).
FF is governed by ex-Oracle managers at the moment, singing Google's song. Don't anthropomorphize your lawnmower.
"Ubuntu will apply security updates automatically, without user interaction. This is done via the unattended-upgrades package, which is installed by default."
Right, but it's a minor annoyance; get rid of it with:

sudo apt-get remove --purge unattended-upgrades

(It doesn't trigger removal of anything else, and you'll enjoy 420 kB of additional disk space.)
OTOH the real issue with Ubuntu is snap(d). Snap packages definitely do auto-update. You may want to uninstall the whole snap system; it's (still?) perfectly possible, if a little bit convoluted, because some infamous packages ship only as snaps: firefox, thunderbird, chromium, or e.g. certbot on servers.
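The uninstall sequence, roughly (a sketch; the snap names below are typical examples from a stock desktop install, so check `snap list` on your own machine first):

```shell
# See what's actually installed as snaps.
snap list

# Remove application snaps before their bases (order matters).
sudo snap remove --purge firefox
sudo snap remove --purge snap-store
sudo snap remove --purge gtk-common-themes
sudo snap remove --purge core22
sudo snap remove --purge bare

# Then remove snapd itself and stop apt from pulling it back in.
sudo apt-get remove --purge snapd
sudo apt-mark hold snapd
```

You can then install Firefox as a regular deb (e.g. from Mozilla's own apt repository) instead of the snap.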
Or just use Debian, or any snap-free fork for that matter.
Eh... by the numbers, mainstream physics is not HEP, and definitely not HEP theory, and there are plenty of serious physicists somewhat critical of the field, and more so of the way it has presented itself over the last decades.
And while I disagree with some of the criticisms and some of the style of the critics, it's not like you get an honest appraisal from Greene (and Witten).
Maybe you haven't been paying much attention in this space. Google found empirically that error density even in _unsafe_ Rust is still much lower than in C/C++. And only a small portion of the code is unsafe. So per LOC, Rust has orders of magnitude fewer errors than C/C++ in real-world Android development. And these are not small sample sizes. By now more code is being written in Rust than in C++ at Google.
But don't take my word for it, you can hear about the benefits of Rust directly from GKH:
www.youtube.com/watch?v=HX0GH-YJbGw
There really isn't a good faith argument here. You can make mistakes in Rust? No one denies that. There is more C code so of course there are more mistakes in C code than in Rust? Complete red herring.
I would expect that the largest factor is cultural, and of course it's possible to inculcate safety culture in a team working on a C or C++ codebase, but it seems to me that we've shown it's actually easier to import the culture with a language which supports it.
Essentially Weak Sapir–Whorf but for programming languages rather than natural languages. Which is such a common idea that it's the subject of a Turing Award speech. Because the code you read and write in Rust usually has these desirable safety properties, that's how you tend to end up thinking about the problems expressed in that code. You could think this way in C or C++, but the provided tooling and standard libraries don't support that way of using them so well.
I also think that the largest factor is cultural. But my conclusion from this is not that one should import it with a new language while pretending similar results can't be achieved otherwise. That just gives an excuse to stop caring about the existing code, which I suspect is one reason some parts of the industry like Rust ("nobody can expect us to care about the legacy code; nothing can be done until it is rewritten in Rust").
Of course highly correct C code is possible [1]. But Ada makes it easier. Rust makes it easier. You can write anything in any language; that is _not_ the argument. How could you plausibly advocate for a culture that invests a lot of effort [1] into making code correct, and not also advocate for tools and languages that make it easier to check important aspects of correctness? A craftsman is responsible for his tools. Using subpar tools on the theory that sufficient knowledge, skill and an appropriate culture can overcome their shortcomings is absurd.
Rust is also often not the right tool. I looked at it fairly deeply some years ago for my team to transition away from Python/C hybrids, but settled on a fully garbage-collected language in the end. That was definitely the right choice for us.
The thing is: there always was a strong theoretical case that Rust should improve software quality (and not just because of the lifetime system). The only reasonable counterpoint was that this was theory, and large-scale experience was missing. Maybe in high-quality code bases the mental overhead of using Rust would outweigh the theoretical guarantees, and the kinds of mistakes it prevents are already caught by C/C++ tooling anyway?
The (in recent years) rapid adoption of Rust in industry clearly shows that this is not the case.
What about qmail? No one runs qmail, and no one is writing new C in that kind of insanely hyperconservative style using only world-class security experts.
And it still wasn't enough. qmail has seen RCEs [0, 1] because DJB didn't consider integer and buffer overflows in-scope for the application.
Perhaps because qmail is an anomaly, not Android? To remain relatively bug-free, a sizeable C project seems to require a small team and iron discipline. Unix MTAs are actually pretty good examples. With qmail, for a long time, it was just DJB. Postfix has also fared well, and (AFAIK) has a very small team. Both have been architected to religiously check error conditions and avoid the standard library for structure manipulation.
Android is probably more representative of large C (or C++) projects one may encounter in the wild.
So you can't, and if a "dumbass" like me can understand the importance of empirical evidence but you can't, maybe read up on rational thinking instead of lashing out emotionally.
I find this highly annoying. Here we've had very tasty wheat-based slices that can serve the same purpose as sliced salami/meats on bread, and didn't try to mimic anything in particular. But they disappeared from the shelves while the stuff branded as Vegan Salami seemingly does well.
I guess for casual buyers having a familiar reference point is just crucial.
The crusade against gluten probably did it. Tofu lives as un-refrigerated grey blobs and tempeh never even made it to the shelf, probably because of hormone-disrupting soybeans. But hyper-engineered single cell meat? Now that’ll sell.
This is really frustrating to me, it's hard to find seitan outside of Chinese shaokao (BBQ skewers) restaurants. There's a local brand of wheat-meat that even runs a deli that's pretty good, but people are so afraid of gluten.
> In case the sarcasm isn’t clear, it’s better to leave the warts. But it is also worthwhile to recognise that in terms of effectiveness for driving system change, signage and warnings are on the bottom of the tier list. We should not be surprised when they don’t work.
I thought I had read it. :) I thought the three `* * *` at the bottom was indicating I was about to start reading suggestions for the next article. So definitely a "Woosh" moment for me :D
Do you have any actual evidence that this result can be viewed as an instance of CH?
The networks on the measure theory side and on the algorithmic side are not the same. They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
The correspondence outlined is also extremely subtle. Measurable colorings are related to speed of consensus.
You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
It seems to me you are mischaracterizing CH here as well:
> A proof that your network has some property is an algorithm to construct an instance of appropriate type from your object.
A proof that a certain network has some property is an algorithm that constructs an instance of an appropriate type that expresses this fact from the axioms you're working from.
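For concreteness, the textbook form of that correspondence, as a Lean sketch (purely illustrative, and unrelated to the coloring result under discussion):

```lean
-- Curry–Howard in miniature: the proposition A ∧ B → B ∧ A is a type,
-- and a proof of it is literally a program of that type — here, a
-- function that swaps the two components of a pair.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```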
> You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
This is the crux of the proof, as I understand it: to classify network coloring measurability, I classify algorithms that color the network.
Which I can do because there’s a correspondence between network colorings (in graph theory) and algorithms that color networks (in CS). Which I’m arguing is an instance of CH: they’re equivalent things, so classifying either is classifying both.
> They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
[…] Measurable colorings are related to speed of consensus.
Yes — this is why the work is technically impressive, because proving the intuition from above works when extending to the infinite case is non-trivial. But that doesn’t change that fundamentally we’re porting an algorithm. I’m impressed by the approach to dealing with labels in the uncountable context which allows the technique to work for these objects — but that doesn’t mean I’m surprised such a thing could work. Or that work on network colorings (in CS) turned out to have techniques useful for network colorings (in math).
> It seems to me you are mischaracterizing CH
You then go on to make some quibble about my phrasing that, contrary to your claim about mischaracterizing, doesn’t conflict with what you wrote.
Edit: removing last paragraph; didn’t realize new author.
CH as I understand it has nothing to do with this. As an example that illustrates why, consider the simple infinite coloring discussed in the article that uses the axiom of choice. You could not write an algorithm that actually performs this coloring (because of the axiom of choice, and because it requires uncountably many actions). What CH says is that a proof of the statement "all such graphs can be colored" corresponds to a program, a finite object built from the axioms, even though the colorings themselves cannot be carried out by any computation.
What CH does not allow you to do is turn an existence proof (a coloring exists) into a constructive proof (a means to actually construct such a coloring). In fact, this is generally not true. Mathematical statements correspond to computations in a much more subtle and indirect way than that.
Honestly, I get the impression that you have a very superficial understanding of the topics at hand, but I am far from an expert myself. If you really know a way to see this as an instance of CH I would be very intrigued to learn about it.
Non-political almost always means "accept the social status quo I am used to".
I think there is a reasonable argument that the default for a community with technical goals should be to accept social status quo conventions unless they conflict with the community's technical goals. But if the social default is "girls don't code and queers should hide", there is a reasonable counterargument that these conflict with the goal of making the technology (and community) available to everyone.
> But if the social default is "girls don't code and queers should hide"
Queers should hide definitely isn't any social default unless the code is exclusively developed in Gaza. "Do what you like but please stick to technical considerations" isn't "you need to hide".
You seem to have read that as literally "hide if you are queer". What I meant to encompass was also "hide that you're queer". And that's absolutely a norm that exists, and that can perfectly well hide behind "stick to technical considerations". If you say "I prefer to be referred to by these pronouns" then the reply "please stick to technical considerations" is not neutral. If someone would point out @acronym_XYZ is actually a woman when someone referred to her as he, nobody would reply with "please stick to technical considerations". "Please stick to technical considerations" as a response to any behavior that is outside some societal default is not enforcing a technical concern, it's enforcing the societal default, including when there are no technical reasons to do so.
That doesn't mean that "Please stick to technical considerations" or quite simply "that's off-topic" are never valid. They very much are. It simply means that they don't provide some clean clear demarcation line around which to organize a technical community.
> If you say "I prefer to be referred to by these pronouns" then the reply "please stick to technical considerations" is not neutral.
There's no single societal norm across worldwide open source projects; that's just you treating a diverse set of groups as a monolith. In this particular instance, if you're just a GitHub handle, then being a woman, or wishing to be called "she", doesn't require hiding that you're "queer", but it also doesn't require mentioning it. Just say "actually it's 'she'" and you're done.
I thought you merely misread my comment. But it really seems you are very determined to pretend that there are no reasons for people to hide that they are queer, which, given the current political climate, is just bizarre to me.
I assumed you were talking about laws because the social default in places like the south of the US is definitely that visibly queer people should hide.