A related phenomenon: developers who generate a ton of LOC, create new services willy-nilly, and adopt new technologies to cure their boredom have the greatest impact on coding norms in an organization.
Developers who solve problems with less code and less complexity, and who check whether libraries already in use have the functionality they need before adopting new ones, are less visible in the codebase, so it isn't obvious that others should see them as leaders and role models.
One thing I dislike is superfluous code under the guise of best practices or defensive programming. Here's an opinion of mine I've found to be controversial: In JavaScript, I don't like to use ===/!== unless I'm actually operating on operands that can be multiple types.
Hard disagree. Many more bugs are caused by the type coercion of '==' than by the "verbosity" of a single additional character. It's also way easier to read code when you don't have to worry about the type coercion possibilities of JS's double equals operator. Type coercion should be opt-in, not the default, if there are good reasons for it existing at all, since it inherently makes things more complicated.
Imagine how absurd this position would be if there was an equivalent with the less/greater than operators...
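For anyone following along, here are a few of the standard coercion results that cause those bugs. This is plain JavaScript semantics, not code from any particular codebase:

```javascript
// Standard '==' results that tend to surprise readers; results in comments.
console.log(0 == '');           // true  ('' coerces to 0)
console.log(0 == '0');          // true  ('0' coerces to 0)
console.log('' == '0');         // false (two strings compare as strings)
console.log(null == undefined); // true
console.log(null == 0);         // false (null is only loosely equal to undefined)
console.log([] == 0);           // true  ([] -> '' -> 0)
console.log(false == '0');      // true  (both coerce to 0)

// With '===' every comparison above is simply false, and there is no
// coercion table to keep in your head:
console.log(0 === '');          // false
console.log(0 === '0');         // false
```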
I think it's controversial because you have a really good point but your example kind of goes against your point rather than supporting it.
'===' is a single operator in JS, just as '==' is. It's only superfluous in terms of character count (which barely matters at all), as opposed to token count or actual complexity.
I'd argue that in the important metrics, '==' is actually more superfluous, in that it has a broader scope of functionality than '==='. When you use '==' to do an equality check you're essentially invoking something that does the check, but also does a bunch of implicit type conversion and checking between types, none of which you actually care about at all.
I'm also not a fan of 'defensive programming', I think people fuck a lot of perfectly good code up trying to read the tea leaves about how other devs are going to poorly interact with it in the future. But using '===' is like saying "I just want to check equality". It's not adding something for the sake of defense (other than the single character), it's removing something useless for the sake of removing something useless. Which is a good thing for maintainability and bug prevention yada yada, but it also feels nice to do as a dev.
That's why you see very few people arguing against '===' as a best practice, because incentives are mostly aligned in this case. It's not really like a lot of the other examples where something is seen as a 'best practice' in spite of being something that devs hate doing.
What happens if the code changes and something you thought was restricted to a single type, isn't anymore? Do you have to go through and update every equality check?
Hmm, just curious why that would make it better. It seems hard to justify skipping '===', since the cost of using it is so low (an extra character, and maybe some momentary confusion), while skipping it can have a big downside if the unexpected happens.
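To make the concern about types changing concrete, here is the kind of silent drift involved; the variable and values are made up for illustration:

```javascript
// Hypothetical example: a field that used to be a number starts arriving
// as a string (say, after a form or API change elsewhere in the code).
const quantity = '0'; // previously this was the number 0

if (quantity == 0) {
  // Still runs, because '0' coerces to 0: the check silently keeps "passing"
  // even though the type changed underneath it.
}

if (quantity === 0) {
  // No longer runs, which at least surfaces the type change instead of
  // hiding it behind coercion.
}
```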
Isn't generating a ton of LOC considered bad practice? If my pull request is over 1K LOC, won't the review most likely take longer, the cycle time increase, and the change itself be more bug-prone?
> Obviously the above examples are exaggerated hyperboles and reality is somewhere in the middle.
Are they? Is it?
I'm starting to think this actually describes most of professional accomplishment and career, at least in software. People can't see negative space, so to speak, and they don't bother to imagine it.
Write or debug something in a fraction of the time another engineer would need? Well, we can't observe both universes, so it must have just been easy. Make something super stable and easily understandable that doesn't require any attention or maintenance and becomes crucial to a company's success? Without noise, people will just forget about it or assume it was straightforward.
The space of things that don't happen is infinitely bigger than the outcomes we end up seeing. This makes a lot of our judgments completely off base.
Agree. The cynical optimum seems to be to artificially create drama by exaggerating the difficulties, always looking busy, solving the problem way ahead of schedule but only delivering at the last minute, and telling everybody how hard the team worked to save the day. Then convince non-tech management that you need a bigger budget/more people to handle future projects, because obviously what you are doing is super hard and we can’t rely on heroic sacrifices forever.
As a side effect, this will also make you super reliable in front of management, always delivering when promised. So if you don't exaggerate with your requests, it might even be a win-win situation.
Yep. A friend of mine said he used to work for a company where team A always multiplied their schedules by 3, and team B worked as fast as possible to deliver. The result was that management loved team A because they always delivered on time, and considered team B to be unreliable/not to be trusted/less competent than team A. The reality, of course, is that the two teams were equally competent. It’s all about how you handle management expectations.
Interestingly I see a form of #2 in the Clojure community whenever a newer Clojurist asks if a stable library is still maintained. The answer is usually yes, but it hasn't needed an update for a while. Sometimes even I have to double check.
I’m thinking that a stable library should do an x.x.X release periodically that might go no further than bumping the version number (and maybe fixing typos in the docs), just to make it clear it’s not abandoned.
What if the library has been abandoned, but its cronjob is still perpetually running somewhere on a free cloud server plan? Or what if a cool SaaS appears that auto-bumps the version number every X days, with a free plan for open-source GitHub repos?
A counter that counts the number of projects using the library could be useful too. If 20,000 projects use it, 300 more than last month, it's clearly alive even if there hasn't been a new feature or bug fix in a year.
I'm sorry, but the management scenarios seem completely backwards. Maybe it's my background (service gamedev and web SaaS-like apps), but shipping a feature is only the beginning — you have to test, measure, modify, repeat. If you write bad code, this loop gets long and expensive, your team won't reach its financial targets, and the project will be closed. But if you manage to do that quickly and without spending too much time on bugs — that directly translates into earning more money and being able to grow the team.
I don’t know if these scenarios are so backwards. As the article hints, it can be hard to distinguish code of high quality from what I’ll call code of ‘high effort [but low quality]’, especially for management.
As someone else pointed out: People have trouble seeing negative space.
As I described, there's no need for anybody to make that distinction. Good code is good because it helps achieve business objectives better. The free market is all that's required for it to win in the end.
However, a big problem is the opacity of engineering, especially software engineering. It can be hard to tell the quality of code from the outside, and the less technical you are, the harder it is to see. There are signs of course, such as the number of bugs, but it can be hard to tell the difference between style and quality.
“Heroes” who are incompetent but somehow still manage to save the day by working all-nighters and sacrificing everything to counter their incompetence get a lot more attention and interest from non-tech managers than competent, smart developers who solve the same problem on time with no drama and go home to their families at 5PM. However, I still prefer not to be a “Hero” :) My job is not my life.
An interesting thought. I keep finding it amusing how I learn a bit more about the buggy and vulnerable software via a security announcements mailing list, while hearing nothing about the software that is more secure.
One of the comments to the linked article also mentions the principal-agent problem, though not by name; an interesting and tricky one, and likely indeed can happen in a software development setting.
I'm not sure I agree with the premise. Most of the time when I see someone not following best practices, it's because they're unaware of them and just do the simplest/most obvious thing, which happens to be bad for not-so-obvious reasons. Or they are aware of the best practice but decide not to follow it because of reasons x, y, and z. It's very rare, however, that I see someone do the wrong thing because someone told them it was the right thing (at least in the programming field).
Perhaps this is just an artifact of my own personal experience though, all my professional experience has been at dedicated software shops with highly competent engineering teams. I realize that's not the case everywhere.
Does anyone have any specific examples of worst practices explicitly being disseminated under the guise of best practices? Is this something that you tend to see happen only within organizations, or across the internet as a whole?
The requirement was to ingest a CSV, transform it, and put it in a database somewhere. The solution was three microservices.
The first read the data piece by piece (unit of work) and passed it to the second via HTTP. This was slow so it was threaded.
The second provided an abstract class which was extended to facilitate string transformations. Each was accompanied by a unit test. Similarly, the result was passed along to the third service via HTTP. This was also threaded.
The third put the data into a third-party queue which wrote into a database.
It took nearly half a day for the process to complete. I rewrote it as an SQL script which read the CSV into the database directly, used SQL functions for the transformations, and then stored the result. It took a few minutes to complete. But this wasn't a best practice or popular, and what would the team work on?
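For the curious, the single-script version fits on one page. A minimal sketch, assuming Node.js with the better-sqlite3 package; the file name, table, and transformation are hypothetical stand-ins since the original stack isn't specified, and the CSV parsing here assumes no header row and no quoted commas:

```javascript
// Hypothetical sketch of the "one script, no services" approach.
// Assumes: Node.js, the better-sqlite3 package, a two-column CSV with no
// header row and no quoted commas. A real job would lean on the database's
// own CSV import and SQL functions, as described above.
const fs = require('fs');
const Database = require('better-sqlite3');

const db = new Database('ingest.db');
db.exec(`
  CREATE TABLE IF NOT EXISTS records (
    id   TEXT PRIMARY KEY,
    name TEXT
  )
`);

const insert = db.prepare('INSERT OR REPLACE INTO records (id, name) VALUES (?, ?)');

// One transaction for the whole file keeps the load fast and atomic.
const loadAll = db.transaction((rows) => {
  for (const [id, name] of rows) {
    // The "transformation" step, inline instead of behind an HTTP hop.
    insert.run(id.trim(), name.trim().toUpperCase());
  }
});

const rows = fs.readFileSync('input.csv', 'utf8')
  .split('\n')
  .filter((line) => line.trim().length > 0)
  .map((line) => line.split(','));

loadAll(rows);
```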
> It's very rare, however, that I see someone do the wrong thing because someone told them it was the right thing (at least in the programming field).
> any specific examples of worst practices explicitly being disseminated under the guise of best practices?
OOP.
The actor model envisioned by Alan Kay was promising, but the brand we saw in Java and C++ in the late '90s and early noughties was pretty bad. There's a reason we moved away from inheritance in favour of composition, a reason why OOP languages took a cue from statically typed FP languages and grew generics 20 years later than them, a reason why they eventually grew first-class functions… and don't get me started on performance-sensitive software that could have benefited from a bit of data orientation.
Don't get me wrong, classes have their uses. Grouping related data together is very handy, as well as the namespacing. And sometimes, even inheritance has its uses. But as a whole it's just the wrong way to look at things. A program's job is to move & transform data, and instead of focusing on that data OOP encourages you to think of an inevitably contrived model of the world.
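To illustrate the "focus on the data" point with a toy example (nothing here comes from a real codebase), the same behaviour can live in plain functions over plain data instead of a class hierarchy:

```javascript
// Toy illustration: the data is plain objects, and behaviour is a function
// over that data rather than a method on a Shape -> Circle -> ... hierarchy.
const shapes = [
  { kind: 'circle', radius: 2 },
  { kind: 'rect', width: 3, height: 4 },
];

function area(shape) {
  switch (shape.kind) {
    case 'circle': return Math.PI * shape.radius ** 2;
    case 'rect':   return shape.width * shape.height;
    default:       throw new Error(`unknown shape kind: ${shape.kind}`);
  }
}

// "Moving & transforming data" is then just ordinary function composition.
const totalArea = shapes.map(area).reduce((sum, a) => sum + a, 0);
console.log(totalArea);
```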
I've heard people say, "React is fast. It has an intermediate layer to optimize changes to the HTML. You get much better performance with it." Of course, we know that effectively manipulating the DOM via plain JavaScript is the best you'll get. (And you can achieve great performance with well written jQuery too.)
Agree. I did a web application where all views were defined in the HTML file, all disabled except for the current view. The app would then switch views by simply enabling/disabling views as needed. It was super fast. And keeping the views up-to-date was easy. The app simply updated all views when the data changed => the visible view and all non-visible views would always be up-to-date without needing any special handling or virtual DOMs. Super simple. Super fast.
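Something like the following sketch, with made-up element ids and markup; the point is that there is no diffing layer, just toggling the hidden attribute and updating every view whenever the data changes:

```javascript
// Sketch of the pattern described above; ids and markup are hypothetical.
// Every view already exists in the HTML, only one is visible at a time.
function showView(id) {
  document.querySelectorAll('.view').forEach((el) => {
    el.hidden = (el.id !== id);
  });
}

// When the data changes, update every view, visible or not, so switching
// views never needs a special refresh step (and no virtual DOM).
function render(state) {
  document.querySelector('#summary-view .count').textContent = String(state.items.length);
  document.querySelector('#list-view ul').innerHTML =
    state.items.map((item) => `<li>${item}</li>`).join('');
}

render({ items: ['a', 'b', 'c'] });
showView('summary-view');
```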
> Of course, we know that effectively manipulating the DOM via plain JavaScript is the best you'll get.
With an infinite amount of hand-tuning by experts, yes. In practice, to know which parts of the DOM to update based on what changed, and to maintain that mapping correctly as your system evolves, you have to either use React or use/build something equivalent to it.
No it doesn’t. Software doesn’t need to bloat to the point of unusability. We just tend toward that because we don’t deeply understand what role a particular piece of software should play.
The point of using Stack Overflow questions as a metric is very intriguing. I never thought about the negative feedback loop of popular libraries just being buggy.
That's not even the largest bias. Stack Overflow ranking is mostly defined by a software's popularity divided by the quality of its documentation. The quality of the software itself is far away in relevance.
Sqlite is also an outlier in the open source world in that it has an extremely decent funding model (basically premium support). This has enabled continued work on it for 20+ years by the same author and a small team of people he hired for this work. Just the sheer lifespan of the project makes sure it generates the buzz-over-time that TFA mentions as the lifeblood of software projects.
SQLite also might not be open source because it's not possible to contribute to it - the authors don't accept patches. (You can show them example patches, but they'll rewrite them.)
- "We made the code twice as fast with this clever optimization" - Translation: we solved the issue badly with bad technology, thus making the simplest operations gratingly slow, our performance is 100x worse than the theoretical best possible. Now we made it only 50x worse, a triumphant achievement. This will still allow us to make several more triumphant gains, and end up with finally with something that's only 5x slower than best case.
- We used a brand-name, hip library for a very specific use case - In order to solve the issue quickly, while avoiding having to understand the problem domain, we outsourced one of our core differentiators to some npm package. We made the first release in a quarter of the time compared to doing it properly. Management was happy, because progress was fast. It kinda sucks, and there's no way to improve on it without actually investing the time to build it ourselves. Which we won't do, but we will replace library X with its successor in a few years, writing a blog post about the stupidity of X's implementors and praising library Y. This is a lazier parallel of the previous point, with the actual development competence outsourced.
Corollary 2 reminds me a bit of software like djbdns and qmail, which were relatively simple and worked incredibly well for their time.
I managed a number of production systems with both and it always worked beautifully.
That said, licensing confusion always made it difficult for anyone to repackage and distribute it (until 2007) and djb was not known for being easy to work with. So that probably also hurt adoption.
In the web world, and probably other paradigms too, there is no such thing as 1.0 and done. The code will eventually need to change because of new browser features / removed browser features / browsers introducing bugs. In that world a library that has had no updates in 3 years is a risk.
Sometimes, however, your environment is stable enough that something that works really can be left alone. One extreme case is a purely computational C library with zero dependencies, like a crypto library or a PNG parser.
Excellent point! As you imply, bad code compounds the need for job openings:
* bad code requires more maintainers than good code
* it's hard to find people to work on it — partly because the code is bad, and partly because whatever process _led to_ the bad code is probably still around
The same thing can be extended, to an extent, to any product: you must always improve, or no one will know that you even exist. MS Windows, MS Office, and recently Twitter with its NFT profile pictures are some of the things where developers seem to desperately cram in an improvement. What if you have the perfect platform? You become forgotten.
In my own experience it’s just code which is easy to reuse that becomes viral. I don’t think ease-of-use means the underlying workings are good though.
Most likely I think it’s just easier to write a good API for code that isn’t handling edge cases properly.