"Why not add X feature? If people don't want to use X, they just don't, and there are basically 0 downsides."
In theory this is true. If the compiler is decent, compile times and analysis shouldn't really be affected. Maybe libraries will use X, but otherwise they'd just hand-roll X anyway.
But in practice developers misuse features, so adding a feature actually leads to worse code. It also raises the learning curve, since you now have to decide whether to use the new feature or just re-implement it with old ones. See: C++ and over-engineered Haskell. So each feature has a "learnability cost", and you should only add features that are useful enough to outweigh it.
But most features actually are useful, at least for particular types of programs. It's much harder to write an asynchronous program without some form of async; it's much harder to write a program like a video game without objects. This may be controversial, but I really don't like Go and Elm (very simple languages) because I feel like I have to write so much boilerplate vs. other languages where I could just use an advanced feature. And this boilerplate isn't just hard to create, it's hard to maintain, because small changes require rewriting a lot of code.
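For the async point, here's a tiny Kotlin sketch (fetchUser/fetchOrders are made-up stand-ins, not a real API) of what "some form of async" buys you: the callback version re-nests with every dependent step, while the suspend version reads top to bottom.

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical API, callback style: each dependent step nests one level deeper.
fun fetchUser(id: Int, cb: (String) -> Unit) = cb("user$id")
fun fetchOrders(user: String, cb: (List<String>) -> Unit) = cb(listOf("$user-order-1"))

fun callbackStyle() {
    fetchUser(1) { user ->
        fetchOrders(user) { orders ->
            println(orders) // error handling would add yet another branch per level
        }
    }
}

// The same flow with language-level async: reads straight down, and a
// change in the middle doesn't force re-nesting everything below it.
suspend fun fetchUserAsync(id: Int): String = "user$id"
suspend fun fetchOrdersAsync(user: String): List<String> = listOf("$user-order-1")

fun main() = runBlocking {
    callbackStyle()
    val user = fetchUserAsync(1)
    println(fetchOrdersAsync(user))
}
```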
So ultimately language designers need to balance the number of features against expressiveness: the goal is a small set of simple but powerful features that keeps the language easy to learn yet really expressive. And different languages suit different people. Personally I like working with Java and Kotlin and Swift (the middle languages in the author's meme) because I can establish coding conventions and stick to them; C++ and Haskell are too complicated, and it's harder to figure out and stick to the "ideal" conventions.
All features are useful. That's table stakes. But usefulness is insufficient to warrant inclusion. How does a feature interact with all existing features? Are there ambiguities? Are there conflicts? A language is not a grab-bag of capabilities, it's a single cohesive thing that requires design and thought.
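One concrete example of that kind of interaction, sketched in Kotlin: smart casts and mutable properties are each perfectly reasonable features, but combined, the compiler has to reject code that looks obviously safe (Config here is a made-up class):

```kotlin
class Config {
    var name: String? = null // mutable nullable property (hypothetical class)
}

fun greet(c: Config) {
    if (c.name != null) {
        // println(c.name.length)  // rejected: smart cast to String is impossible,
        // because 'name' is mutable and could change between the check and the use.
        val n = c.name             // the idiom: capture into an immutable local
        if (n != null) println(n.length)
    }
}

fun main() = greet(Config().apply { name = "world" }) // prints 5
```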
> But in practice developers misuse features, so adding a feature actually leads to worse code.
Is that really a problem on the language's side, though? Devs are capable of misusing any feature, even extremely basic ones that almost every language has (variable names, for instance (although I'm laughing in FORTH)). Code standards and code reviews are necessary tools in the first place because it doesn't matter what language you give a programmer - they're perfectly capable of constructing a monstrosity in it.
I argue that preventing programmers from doing dumb things with well-designed language features (hygienic Scheme macros, say, as opposed to raw C pointers) is a social and/or organizational problem, and it's better to solve it at that level than to try to solve it (inadequately) at a technical level.
("I keep dereferencing null pointers", on the other hand, is an example of a technical problem that can be solved on the technical level with better language design)
> Is that really a problem on the language's side, though?
Yes. For a language to be good in practice, you need to look at what developers actually do, not at how a perfectly rational developer would use the language.
> But in practice developers misuse features, so adding a feature actually leads to worse code.
I have found the opposite to be true. Missing features often lead to what one would call "design patterns". When the language adds official support for the problem you're working around with such a pattern, the code becomes clearer.
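A classic instance, sketched in Kotlin with a made-up Shape hierarchy: the Visitor pattern exists largely to fake exhaustive case analysis, and sealed types plus when dissolve it into a plain expression.

```kotlin
import kotlin.math.PI

// Hypothetical Shape hierarchy. Without sealed types you'd reach for the
// Visitor pattern: an accept() on every class plus one visitor interface
// per operation, just to get checked exhaustive dispatch.
sealed class Shape
data class Circle(val r: Double) : Shape()
data class Rect(val w: Double, val h: Double) : Shape()

// With language support the "pattern" collapses into an expression,
// and the compiler verifies that every case is covered.
fun area(s: Shape): Double = when (s) {
    is Circle -> PI * s.r * s.r
    is Rect   -> s.w * s.h
}

fun main() {
    println(area(Circle(1.0)))    // 3.141592653589793
    println(area(Rect(2.0, 3.0))) // 6.0
}
```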