Not directly related, but an anecdote: well before AI, I was talking to a Portfolio Solutions Manager or something from JP Morgan. He was an MD at the firm and very full of himself. He told me, "You guys, your job is....you just Google search your problem and copy paste a solution, right?". What I found hilarious is that he also told me, "The quants, I hate that they keep their C++ code secret. I opened up the executable in Notepad to read it and it was just gibberish". Lesson: people with grave incompetence at programming feel completely competent to judge what programming is and should be.
My own tangential gripe (a bit related to yours, though): the factory work began when Agile crept into the workplace. Additionally, lint, unit tests, code reviews... all this crap just piled on, making programming worse still.
It stopped being fun to code around that point. Too many i's to dot to make management happy.
If you give up on unit tests and code review then the code is "yours" instead of "ours" and your coworkers will not want to collaborate on it with you.
However, this has to be substantive code review by technical peers who actually care.
Unit tests also need to be valued as integral to the implementation task. The author writes the unit tests; it helps guide the thought process. You should not offload unit tests to an intern as "scutwork".
If your code is sloppy, a stylistic mess, and unreviewed, then I am going to put it behind an interface as best I can, refer to it as "legacy", rely on you for bugfixes (I'm not touching that stinking pile), and will probably try to rally people behind a replacement.
In my experience that did not happen. I've been lucky perhaps to always work with engineers I trusted.
And frankly, giving an engineer ownership of code ("it's yours") has, also in my experience, been an excellent way to create "pride of ownership". No one wants to have that "stinking pile".
We used to do a kind of unit testing in place. (We called it param checking.)
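Roughly like this, to give a made-up Python sketch (the function and the checks are invented for illustration):

    def apply_payment(balance, amount):
        # "param checking": validate the inputs right where they're used,
        # instead of relying on a separate test suite to catch bad calls
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise ValueError("amount exceeds balance")
        return balance - amount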
In my experience unit tests have been simply a questionable yardstick management uses to feel at ease shipping code.
"98% code coverage with unit tests? Sounds good. It must be 98% bug-free — ship it."
Not that anyone ever said exactly that, but that's essentially what is going on.
Code reviews seem to bring out the code-nazi types. Code reviews then break any goodwill between team members.
I preferred when I would go to a co-worker's office, talk through an issue, and come up with a plan to solve the problem. We trusted each other to execute.
When code reviews became a requirement it seemed to suck the joy out of being a team.
Too frequently a code review would turn into "here's how I would have implemented it, therefore you're wrong; rewrite it this way; code does not pass."
Was that a shitty code reviewer? Maybe. But that's just human nature: the kind of behavior that code "gatekeeping" invites.
Everywhere I've worked, I've found that the entire organization pays lip service to unit tests and code reviews, but then sets timelines so short and workloads so high that real tests and reviews become genuinely impossible.
Can echo that: it's pure lip service. The deadlines are arbitrary and change often, and nobody cares about quality beyond "can it appear to work plausibly at a demo". There are notable exceptions (nobody wants to be the next Knight Capital), but a lot of work is not seen as that critical, so off to the next ticket it is.
TDD, while a good idea, is never used, because it is highly unnatural when the stakes are low, which they are most of the time. And when the stakes are high enough, you'd want to hire a real QA engineer to write all the tests anyway, including the unit tests.
Code review is another sacred process that seems too good not to have, but many teams use it as a "we care about quality" stamp when in fact they do not: it gets used for nitpicking code style (important, but not the whole reason to have CR, and there are tools for this) and for issuing comments like "LGTM" while approving whatever arrives at the pull request anyway.
I've not yet seen code review implemented well anywhere I have worked. It's not really considered "real work" (it may result in zero lines of code change), and it takes time to properly read through code and figure out where the weaknesses might be. I just end up being forced to skim-read for anything obvious and merge, because there is not enough time to review the code properly.
As a manager, code review has two benefits that typically matter to me: (a) cost: it's cheaper to fix a defect that hasn't shipped (reading tests for missing cases is a useful review, in my experience); (b) bus factor: making sure someone else has a passing familiarity with the code. And there are some ancillary (and somewhat performative) benefits, like compliance: your ISO 27001 and SOC 2 change control processes likely require a review.
It's hard, though, to keep code reviews from turning into style and architecture reviews. Code reviewing for style is subjective. (And if someone on the team regularly produces very poor quality code, code review isn't the vehicle for fixing that.) Code reviewing for architecture is expensive; settle on a design before producing production-ready code.
My $0.02 from the other side of the manager/programmer fence.
ISO 27001's change management process requires that [you have and execute a change management policy requiring that] changes are conducted as planned, that changes are evaluated for impact, and that they are authorized. In my experience, auditors will accept peer review as a component of your change management procedure that meaningfully contributes to meeting these requirements.
"All changes are reviewed by a subject matter expert who verifies that the change meets the planned activity as described in the associated issue/ticket. Changes are not deployed to production environments until authorized by a subject matter expert after review. An independent reviewer evaluates changes for production impact before the change is deployed..."
If you are doing code review already, might as well leverage it here.
Code review where I've worked seems, in practice, to be either rubber-stamping or back-scratching. Never once have I felt the need for it. If people are unsure about a change, they usually ask.
If teams care about each other’s code, they ought to collaborate on its design and implementation from the start. I’ve come to see code reviews (as a gate at the end of some cycle) as an abdication of responsibility and the worst possible way to achieve alignment and high quality. Any team that gets to the end of a feature without confidence that it can be immediately rolled out to create value for users has a fundamentally flawed process.
> they ought to collaborate on its design and implementation from the start
That's exactly right. After said process, it comes down to trusting your coworkers to execute capably. And if you don't think a coworker is capable, say so (or, if they're junior, more prudently hand them the simpler tasks; perhaps code review behind their back and let it go if the code is "valid", even if it is not the Best Way™ in your opinion).
Only if there aren't QA boards with quality KPIs to fulfill. Many code reviews are time wasted on ceremonies that feed reviewers' egos about the one true way to deliver software.
I usually give up, stop arguing about why my approach is actually better than what the gatekeepers suggest, and redo my code; less time wasted.
Writing good tests is an art. It's hard. It takes a deep understanding of _how_ the system is implemented, what should be tested, and what should be left alone.
Coverage results don't mean much. Takes some experience to know how easy it is to introduce a major bug with 100% test coverage.
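To make that concrete, here's a minimal, made-up Python sketch where a single test yields 100% line coverage yet misses a major bug:

    def average(values):
        # bug: blows up with ZeroDivisionError on an empty list
        return sum(values) / len(values)

    def test_average():
        # this one test executes every line, so a coverage tool reports
        # 100% -- yet the empty-list case is never exercised
        assert average([2, 4, 6]) == 4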
Tests are supposed to tell you if a piece of code works as it should. But I have found no good way of judging how well a test suite actually works. You somehow need tests for tests and to version the test suite.
An overemphasis on testing also makes the code very brittle and a pain to work with. Simple refactorings and text changes require dozens of tests to be fixed; library changes break things in weird ways.
Unless I know the system being tested, I take no interest in tests.
There are clever, hacky ways to test systems that will never pass the "100% coverage" requirement and are a joy to work with. But they're the exception.
The point about coverage results is an important one to understand. Something I like to say when discussing this with other folks is that while high code coverage does not tell you that you have a good test suite, low code coverage does tell you that you have a poor one. It's one metric amongst many that should be used to measure your code quality; it's not the be-all and end-all.
Code coverage is a bad metric either way. As soon as it gets mentioned anywhere, an MBA manager wants it as close to 100% as possible, and Goodhart's law kicks in.
It's synonymous with LOC. Don't bring it up anywhere.
Tests that you write in order to contribute to a robust test suite are good.
Tests that are written to comply with a policy that requires that all components must have a unit test, and that test must be green, could be good. Often, they are just more bullshit from the bullshit factory that is piling up and drowning the product, the workers, the management, and anyone else who comes too close.
I feel that it’s still correct to call both of these things tests, because in isolation, they do the same thing. It’s the structure they’re embedded in that is different.
Because builds are gated by test coverage, people write tests for coverage and not for functionality. I'd say a good portion of the inherited tests I've run into wouldn't catch anything meaningfully breaking in the function being tested.
Your issue then is with targeting a metric (coverage), not with unit tests. Good unit tests can be so useful. I've got a project currently that can't be run locally because of some dependencies, and coding against unit tests means I get to iterate at a reasonable speed without needing to run all the code remotely.
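For instance, a rough sketch of that pattern in Python (the rate service and function names are made up; unittest.mock.patch is the standard-library tool I mean):

    from unittest.mock import patch
    import urllib.request

    def fetch_rate(currency):
        # in production this hits a remote service I can't run locally
        url = f"https://rates.example.com/{currency}"
        with urllib.request.urlopen(url) as resp:
            return float(resp.read())

    def convert(amount, currency):
        return round(amount * fetch_rate(currency), 2)

    def test_convert_applies_rate():
        # stub out the remote call so the logic under test runs entirely locally
        with patch(f"{__name__}.fetch_rate", return_value=1.1):
            assert convert(10, "EUR") == 11.0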
I spent 3 years getting a Ruby codebase to 100% branch coverage, running locally in a few minutes (I wasn't just looking at coverage; I was also profiling for slow tests). Found a few bugs, of course, from having to read through the code so carefully. The value was having a layer of defence while refactoring: if some unrelated test failed, it implied you'd missed some impact of your change. It also helped people avoid making changes to an area of code with no testing, since existing tests act as docs (which execute, so they won't go stale as easily) and make it easier to write new tests building on existing ones.
This codebase was quick to deploy at Microsoft; we'd roll out every week, compared to other projects that took months to roll out with a tangled release pipeline.
Anyway, I left for a startup and most of this fast-moving team dissolved, so the Ruby codebase has been cast aside in favor of projects with tangled release pipelines.
From my perspective it's not "tests" but this reaction. There's nothing wrong with tests, but there certainly is a cost to them: are you getting a positive ROI? Has the system been perverted to focus on tests rather than on tests supporting quality? Are tests used to justify all sorts of unrelated actions or inaction? Now repeat this exercise with 100 or 1000 other perfectly valid concepts that can help destroy the very thing you are trying to accomplish.
Ha, this sounds like my work. I've developed and evolved a set of Java apps that integrate our core banking system with a few tens of other internal apps.
In a decade and a half, we had very few issues, all easy to handle, and the app has its own clustering via Hazelcast, so it's pretty robust with minimal resources. Simply nothing business could point a finger at and complain about. Since it was mostly just me, it was a pretty low-cost solution that could be bent to literally any requirement pretty quickly.
Come 2025, it's now part of an agile team and agile efforts; it all runs on OpenShift, which adds nothing good but a lot of limitations, and we waste maybe 0.5-1 man-days each week just on various agile meetings which add zero velocity or efficiency. In fact we are much slower (not only due to agile; the technology landscape has become more and more hostile to literally any change, and friction for anything is massive compared to a decade ago, and there is nothing I can do about that).
I understand being risk averse against new unknown stuff, but something that proved its worth over 15 years?
Well, it ain't my money being spent needlessly; I don't care, and I find life fulfillment completely outside of work (the only healthy approach for devs in places like banking megacorps). But smart or effective it ain't.
A lot of people never learned how, and now they just avoid doing it whenever possible.
It's really frustrating; I'm all for some bit of code not needing a test, but that should be because the code genuinely doesn't need a unit test. Breaking unit testing, not knowing how to fix it, and then removing all the tests is not a good reason.
Testing in production happens. It is, for example, the best practice at SpaceX and at Tesla (FSD, Robotaxi, Unboxed designs, etc.), and I think these people sleep very well.
Yes, of course, some rockets may explode (almost 10 soon), or some people may have accidents, but that's OK from their perspective.
I never found linting or writing unit tests to be particularly un-fun, but I generally really really value correctness in my code, and both of those things tend to help on that front.
I used to work in aerospace R&D. The number of times I heard some variant of "it's just software" used to disregard a safety-critical concern was mind-boggling. My favorite is a high-level person equating it to writing directions on a napkin.
It doesn't help that most "tech visionaries", or people considered tech bros these days, more often come from an accounting or legal background than anything technical. They are widely perceived as authorities but come without the expertise. This is why it's so perplexing for the techies when the industry gets caught up in some ridiculous hype cycle, apparently neglecting the physical realities.