I've noticed what I'd term a "post-modern" sensibility towards code quality, where some of the often cited tenets of good code are being critiqued or questioned. Stuff like yes, you can have super large functions or super large files. Or yes, you can have repetitive code. Turns out a lot of this stuff doesn't impact readability or in fact is more readable than the alternative. I'd much rather read one function top to bottom than have to jump into 7 nested function calls that are only called once. And I certainly don't need the levels of abstraction that design patterns or Haskell typeclasses demand.
There's a real danger in a well-intentioned clean code advocate. First, because there are no universal code principles. Maybe "have it work", but even then, a lot of value has been gotten out of code that doesn't work. And second, because it's very easy to go from a good guideline to a dogma. All you need is a few people to misinterpret your guidelines. Heck, a lot of people haven't even read the guidelines. They just parrot what someone else paraphrased on Twitter.
One opinion that seems to have completely changed since I first learned is about having a single `return` in my function. It was hammered into me when I first learned that you should only ever have one exit in your code; anything else is bad practice.
Fast forward about 11 years, and people realize that having a million nested if statements and matching else blocks is actually absurdly unreadable; much better to just return early, avoid nesting, and break the "only one exit" pattern.
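A contrived C++ sketch of the contrast (the function and its limits are invented for illustration):

    #include <string>

    // Nested style: the happy path ends up buried at the deepest level.
    std::string greetNested(const std::string& name) {
        if (!name.empty()) {
            if (name.size() <= 64) {
                return "hello, " + name;
            } else {
                return "error: name too long";
            }
        } else {
            return "error: empty name";
        }
    }

    // Early-return style: each check handles one failure and exits.
    std::string greetGuarded(const std::string& name) {
        if (name.empty()) return "error: empty name";
        if (name.size() > 64) return "error: name too long";
        return "hello, " + name;  // happy path stays at the left margin
    }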
The term for that is a "guard". Swift took it literally and introduced guard statements which force you to terminate the containing scope (return, throw, continue, break) or call a function that is marked as never returning. If control flow can ~~enter~~ exit the body of the guard statement without hitting one of those, it's a compiler error.
EDIT: Exit the body of the guard statement, not enter.
I get you’re going for the non-standard markdown strike through, but that looks confusing and calls attention to something you meant to remove. You can (well, could) remove the word when editing.
It is possibly the one Markdown feature I'd like HN to have. I use it all the time on Reddit to correct my comments without making replies look like nonsense.
I think that dates back to assembly, where you have to clean up a stack frame before returning.
Especially when the cleanup code depends on the number of locals, having only a single place where you allocate space for locals and a single place where you revert that helps a lot.
It also can help in languages that don’t help you run cleanup code (closing files, freeing memory) at function exit. Change the code and forget to update one of your early exits, and you have a bug. If that exit is rare, that may ship and may become a vulnerability.
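A contrived C-style sketch of that failure mode (names and paths invented; the leak is the point):

    #include <cstdio>

    // Single-exit style exists partly to protect cleanup like this.
    bool copyFirstByte(const char* src, const char* dst) {
        std::FILE* in = std::fopen(src, "rb");
        if (!in) return false;
        std::FILE* out = std::fopen(dst, "wb");
        if (!out) return false;      // BUG: this early exit forgets fclose(in)
        int c = std::fgetc(in);
        if (c != EOF) std::fputc(c, out);
        std::fclose(out);
        std::fclose(in);             // a "single exit" would funnel through here
        return true;
    }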
Additionally, something that is forgotten now that structured programming won so hard, and so little assembly is hand-written, is that it's possible to return to different places. As in, in case A return to address F, but in case B return to address G.
At my first programming job they were religious about the single `return' and followed a pattern that was bizarre and luckily I've not seen since--using a `retVal' variable declared at the top.
    int foo(int bar) {
        int retVal = -1;
        if (bar == 1) {
            retVal = 1;
        }
        if ((retVal % 2) == 0) {
            retVal = 2;
        }
        return retVal;
    }
Also, putting code inside a do ... while( false ) loop purely so you can use break as a sneaky goto to avoid deeply nested conditionals while technically adhering to the single return rule.
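For anyone who hasn't seen it, the pattern looks roughly like this (a contrived sketch):

    // break acts as a sneaky goto-to-the-end, and the function still
    // technically has its single return at the bottom.
    int process(int value) {
        int status = -1;
        do {
            if (value < 0) break;    // bail out, skip straight to the return
            if (value > 100) break;
            // ... the actual work, spared two levels of nesting ...
            status = value * 2;
        } while (false);             // the "loop" body executes exactly once
        return status;
    }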
I think when you're starting to use control statements in bizarre ways like that, it's a good indication that maybe that's a case where breaking the "rules" is the best thing.
It might have improved in the last few years, but as of 2018, MSVC would not do named return value optimization (NRVO) for the case where all branches returned the same local variable, in my experience. There had to be just the one final return statement in order for NRVO to kick in.
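The shapes in question look roughly like this (a sketch; whether NRVO actually fires depends on the compiler and version):

    #include <vector>

    struct Big { std::vector<int> data; };

    // One named local, one final return: the classic NRVO-friendly shape.
    // The compiler can construct `b` directly in the caller's storage.
    Big makeSingleReturn(bool flag) {
        Big b;
        b.data.assign(flag ? 1000 : 10, 42);
        return b;
    }

    // The same named local returned from multiple branches. Semantically
    // identical, but per the experience above, older MSVC would only
    // elide the copy for the single-return form.
    Big makeMultiReturn(bool flag) {
        Big b;
        if (flag) {
            b.data.assign(1000, 42);
            return b;
        }
        b.data.assign(10, 42);
        return b;
    }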
One of my biggest pet peeves in the code I'm currently working on.
It leads to multiple multi-line ternaries just to keep a single return.
Single return made sense when we had to de-allocate resources manually, but for some years now most languages have had memory-safe, scope-based resource management (C++11, for instance) that frees resources correctly regardless of whether you have a single return or several.
I don't believe that keeping those inherited "best practices" from the past will help us develop modern code.
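A minimal C++11 sketch of that point (a unique_ptr with a custom deleter standing in for any RAII type): the cleanup is tied to scope, so every return path is covered.

    #include <cstdio>
    #include <memory>

    bool firstByteIsZero(const char* path) {
        // The deleter closes the file whenever `file` leaves scope.
        std::unique_ptr<std::FILE, int (*)(std::FILE*)>
            file(std::fopen(path, "rb"), std::fclose);
        if (!file) return false;     // nothing opened, nothing to close
        int c = std::fgetc(file.get());
        if (c == EOF) return false;  // early return: still closed by the deleter
        return c == 0;               // no manual fclose on any path
    }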
> It was hammered into me when I first learned that you should only ever have one exit in your code; anything else is bad practice.
Yes. This rule was an overreaction to a pervasive problem back in the day.
But, much like the use of "goto"s, there are times where your code is more readable, maintainable, and efficient if you ignore the prohibition. The trick is to know when that's the case.
I see it more as a tick-tock pendulum swing. A principle is neglected, then comes roaring back because of the glaring need for it. It becomes overused, fetishized, and popularized in warped ways. Then the "accepted design principle considered harmful" articles appear, and the pendulum starts to swing back the other way.
What we have to keep in mind is that the needs of whatever codebase we're working on might be out of sync with fashion. For example, I've been relishing the recent push-back against DRY, because I think it's over-applied in damaging ways, but the main codebase I've been working on for the last year needs more application of DRY, not less.
> And I certainly don't need the levels of abstraction that design patterns or Haskell typeclasses demand.
I think it's fair to say that AbstractSingletonProxyFactoryBean[0] is something of an abomination, but it's important to avoid throwing the baby out with the bathwater. Some of the GoF patterns like Command, Observer, Visitor, and Iterator are literally all over the place, and you have to actively go out of your way to avoid writing them.
Likewise, Haskell type classes aren't something you're supposed to be implementing yourself all over the place in your application. They're very high bang-for-buck things you implement in libraries; the equivalent is implementing iterators for your collections, or AutoCloseable on resource-like Java classes.
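The C++ analogue of that point is defining begin()/end() once so everything range-based works with your type (a toy example; the type is invented):

    #include <cstdio>

    // A toy fixed-size collection. Implementing the iterator "interface"
    // once, in one place, plugs it into range-for and the algorithms library.
    struct Triple {
        int values[3];
        const int* begin() const { return values; }
        const int* end() const { return values + 3; }
    };

    int main() {
        Triple t{{1, 2, 3}};
        for (int v : t) std::printf("%d\n", v);  // works because begin/end exist
    }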
> I'd much rather read one function top to bottom than have to jump into 7 nested function calls that are only called once. And I certainly don't need the levels of abstraction that design patterns or Haskell typeclasses demand.
I’d say it’s pretty subjective. I’m the opposite. I can’t read functions that are a hundred lines long with a dozen bound variables and half a dozen branches and early exits. That’s not easy to read and it requires keeping a notepad with you to figure out the control flow.
I much prefer short definitions with clear logic. Leave the semantics to the edges of the program where it matters. It’s easier to reason about using substitution and algebraic manipulation.
Don’t get me wrong though, I write my fair share of C code and low level drivers. For that though I tend to use a higher level language to work out the logic.
My business isn't writing code. That's the boring part. It's solving the problems that's interesting.
But that’s the funny thing. Regardless we could both solve the same problems in our own ways and more often than not it will be good enough.
After twenty years and hoping for another twenty… I guess I’ve learned that what matters is that you can get along with your team and read each other’s code. Styles come and go and don’t matter that much.
> That’s not easy to read and it requires keeping a notepad with you to figure out the control flow.
It does require a notepad as well when a five-page-long function gets smeared across three “services”, five files, and tens of functions, each doing a couple lines of code.
> Leave the semantics to the edges of the program where it matters
Agreed. But when I meet this highly-structured code (and get confused), it’s usually split not across semantics, but across some random undocumented ideas about reuse, undeclared abstractions, highly indirect flow control, etc. And all of that named as if it was a most bizarre naming contest.
So that instead of a little hairy idea you get a convoluted irrational multi-plane horror to deal with.
Yes, I understand. In highly procedural code the unprincipled combination of functions results in layers of indirection which do not make the code any easier to understand.
It requires discipline to achieve what I call here, principled combination: that the combination of functions follow certain laws like they do in mathematics. In many programming languages it requires discipline because the type systems and the programming languages are not constructed in such a way as to uphold these laws for you. You, the programmer, have to enforce these laws on your code.
That is why, when I'm writing something in C, I will some times prototype my design in a higher-level language first and translate that into C code. The higher level language is more principled and helps me catch errors in the design and development stage before I've made a mess of things.
I don't always do it that way but it tends to work the best when I can get it to.
Yeah, I used to do that (not always) too. Either prototypes or high-level design documents full of pseudocode and interoperating modules. But what kills the whole idea is that our tools do not support such structures in any way. They are completely blind to this.
People praise typed languages, but honestly I’m much more okay with dynamic typing than with the lack of beams and pillars for design, to do any of it. It’s very easy to lose yourself for an hour and start creating a mess in what you have planned for weeks, simply because your attention was elsewhere, there was a deadline or new goals were incompatible with current ideas and you had no clue how to marry them correctly (nor courage to begin) within a timeframe available.
Add a couple of developers who aren't you and/or switch to another project for a month or two, and it's a recipe for a mudball. I used to believe in all that; not anymore. Maybe a bold claim, but most projects aren't even hard enough to require it. I remember my own past experience and can tell that all that division into layers, concerns, etc. was not a requirement nor an enhancement. I was simply ticking boxes in a "how clever do you feel this week" form. Days of work to spare five pages of clear, concrete instructions which could have been typed and tested in a few hours and shipped the next evening. Much more importantly, I could re-enter that context in a few minutes after a year off, in contrast to any to-be-mudball.
I'm with you re. short functions. Setting boundaries around code, i.e. scope, and giving that behaviour a well-defined signature (i.e. a descriptive name, what it needs, and what it gives back) makes it easier to reason about the overall functionality. Giant functions reduce my confidence that any given change isn't going to introduce a problem.
I like to imagine each execution path through a function as a piece of string. The fewer strings, the fewer kinks, and the less string overall, the easier it is for my monkey brain to handle most of the time. Yes this is 'cyclomatic complexity', but strings are nicer visually :)
> And second, because it's very easy to go from a good guideline to a dogma. All you need is a few people to misinterpret your guidelines. Heck, a lot of people haven't even read the guidelines. They just parrot what someone else paraphrased on Twitter.
Another issue that's never explicitly mentioned is the vastly different calibers of programmers and engineers in the field.
I'm pretty sure John Carmack, Linus Torvalds, or Donald Knuth's definition of "good code" is quite different from $BODY_SHOP_RESSOURCE_1389's.
> I'd much rather read one function top to bottom than have to jump into 7 nested function calls that are only called once.
I actually would not. I do not want to have to read every detail of each condition or formula on the first pass. I want to see the high-level gist of what is supposed to happen - the high-level algorithm. Plus, I like when variables have clearly limited scopes.
I personally take a more pragmatic approach, though perhaps it's "post-modern". How well-architected, tested, groomed, etc. my code tends to be is proportional to the confidence the business has in the problem space.
If I have a detailed, firm spec that I know won't change at all, I'd spend some time planning and then build a DRYed up, abstracted, well-tested, etc solution. I know the solution is going to be sticking around for a while, more or less, so it makes sense to devote the effort to build a solid foundation.
If the thing I'm building is squishy, looking for heavy user feedback, will likely be iterated on quickly (i.e., very agile manifesto agile), I won't spend the overhead time for those things. There's a nonzero chance the code ends up in the trashcan, and the most important thing is to ship something and get feedback from real customers. You later go back and clean it up, based on the growing confidence in the permanence of that feature.
We have a tendency sometimes to prematurely abstract pieces of code.
I am okay with copy-pasting code but after the second or third copy-paste, you should probably at least consider asking, “Should I abstract this into its own function?”
One way this is formulated is the "Rule of 3 for Abstractions":
One copy of something is YAGNI. You aren't going to need it (an abstraction).
Two copies of something are coincidence. It's fine to copy and paste and leave it that way.
Three copies of something are finally a pattern. At least three copies is when you start to really see what sort of abstraction you need to handle the pattern.
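A toy sketch of the third copy revealing the shape of the abstraction (all names invented):

    #include <algorithm>

    // After the same clamp-and-scale logic showed up a third time,
    // the abstraction finally became obvious:
    int normalize(int value, int lo, int hi, int scale) {
        return std::clamp(value, lo, hi) * scale;
    }

    int main() {
        int health  = normalize(150, 0, 100, 1);  // was copy #1
        int mana    = normalize(-20, 0, 50, 2);   // was copy #2
        int stamina = normalize(75, 10, 90, 3);   // copy #3 triggered the refactor
        return (health + mana + stamina > 0) ? 0 : 1;
    }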
> I'd much rather read one function top to bottom than have to jump into 7 nested function calls that are only called once.
Well, yeah, coz that's worse.
It decreases code cohesion (especially when those functions are stuck in files far away), to little appreciable benefit.
> First, because there are no universal code principles
I think there are and good developers have a spidey sense about what they are and will usually agree but we've yet to culturally agree on what they are as an industry.
Moreover, some literature on the topic (e.g. Robert Martin) is very, very wrong.
For some people, aesthetic and philosophical considerations trump usefulness, functionality, productivity, ease of coding, and ease of reading and understanding code.
They'd rather have clean, SOLID, DRY, design-patterned code than easy, fast, productive code.
More of a legacy code thing, but in my experience a lot of devs will race towards labelling legacy code as "bad" code because it doesn't meet some superficial standard. If I had a dollar for every time I heard this from a dev as they start discovery on some new feature or necessary refactor, only for them to come back and say they realised there's a good reason the legacy code is the way it is, I'd be rich. I was certainly guilty of this too when I was a junior/mid.
Oh goodness, I've been programming for 30 years and I'm still in this picture and I hate it. I have to fight the temptation to judgement all the time when I read other people's code.
Their code uses too many classes? Bad code! It's overly factorized? Under-factorized? Too many dependencies? Too much reinvention of the wheel? All these things are obviously evidence that whoever wrote this code is inexperienced (and they probably have deep character flaws).
I'm sure other people feel exactly the same way about my code, too. It's frustrating and dumb.
> I have to fight the temptation to judgement all the time when I read other people's code.
Me too.
But...
Every so often it happens that I read bad code, with functions that are too long and complex or too short, or with misleading variable names, or a way too clever class structure, or whatever; cursing the person who wrote that code
...
only to find out it was me who wrote that code.
It makes me put things in a different perspective. I try to judge not too harshly.
I don’t know if that’s really something you can ever grow out of, honestly… And I feel the same way.
But if it’s any consolation, it’s pretty much the same with any cognitive bias: you can never totally eliminate them, but you can make progress, and the first step is just knowing about it! :)
I work with 3 programmers who have worked together for 30+ years (probably a bit rare in any profession, especially tech), and if you talk to any 2 of them separately, they'll talk shit about the 3rd's code. I doubt they'd be surprised if I told them that.
They probably talk shit about the others' code to their faces. After 30 years, they are probably like siblings. All the politeness and sugarcoating is gone. Say how you feel and move on. They are also probably intimately familiar with each other's preferences and they likely just don't care.
Good/bad code has changed radically over time, and there’s still a bunch of diverse opinions on it. It also means different things in different ecosystems. At the end of the day, the easiest way to deal with it is to find a social circle that shares your opinions.
That’s the only situation where you get enough direct, useful feedback over time on whether something is good or not.
This is actually better because you're bringing the person back to the forefront instead of trying to judge code as objectively good or not.
FWIW, my learning velocity has typically been the highest when I’ve put myself outside of my comfort zone, in experienced teams.
This could mean working with (competent) people who think differently, or working with a different language (my most formative was Clojure, as someone who had never even seen a lisp before), etc.
Initially I bend over backwards in near-total deference, and over the course of a few months I start to feel like I’ve gained perspective. It’s the only way I’ve found to not carry the baggage of “my way” into new situations.
On the flipside, if I'm learning a new paradigm all my new code feels like "Good code".
For example, recently learning RxJS and writing a complete mess of state spread across my app feels good because it's "declarative" compared to the ugly previous "imperative" code.
I think part of it is the reward of the learning process itself, but I can see it being an addictive cycle.
I find I also often have the opposite problem. I'll encounter code that is actually poorly written and hard to understand/modify, but by the time I've put the effort into working out what's going on it's hard to properly remember just how much work it was to get to that point of understanding and so I don't actually fix it.
Then again, I also encounter people who use all sorts of patterns and abstractions that they learned in university, the code is actually an overengineered mess, but they refuse to believe there's a problem because they followed all of the patterns!
After all how could code be bad if it follows a pattern?
But has anyone ever looked at a program that was structured after "clean architecture" principles as outlined by good ol' Uncle Bob and thought "yeah, this was a great idea!"? The answer is no.
I agree with everyone else that you want to go easy on people, but then you get code like this:
    public class C {
        int x;
        public C(int x) {
            int x;
            this.x = x;
        }
    }
and you waste an hour trying to work out why C(5) is not working as expected because god knows who would do that? (summarised from a real life example)
I saw something similar where a colleague (a long time ago) realised that someone had overloaded string assignment in the C++ code he was debugging to do something "special" depending on the value of the string.
> they realised there's a good reason the legacy code is the way it is. I was certainly guilty of this too when I was a junior/mid.
It's one thing for a junior dev to do this. But try working with a bunch of amateurs that somehow got promoted to lead because they kind of worked with some technology at their prior job. It's amazing watching a company replace domain expertise with superficial framework-of-the-day knowledge. Especially knowing that these devs will move on in a year escaping the consequences of their actions.
I think it's also because you don't have the context of the full project. For example, a project I worked on had a lot of tacked on functions because every couple of years, we'd get pulled in on a contract to update the software. So you'd go in, complain about the software, realize why someone did it the way they did, and then add another chunk to it. You didn't like how it was, but you understood why it was the way it was.
Maybe a livestream is not the best place to articulate one's thoughts on software design philosophy well, and to give the most charitable interpretation: bad code can be distinguished from terrible code if one can easily see how to improve it.
I think it would be unwise to suggest bad code is good without qualifying that 'good' here just means 'better than terrible', which is my more literal interpretation of the tweet.
I'm also not fond of the idea that bad code is not necessarily bad because of xyz circumstance that led to the bad code. Can't we just call a spade a spade? We all have written, or will continue to write, code which is less than ideal; it seems like too much ego is attached to the code and we're gonna start down the All Code Is Beautiful path. Sometimes you just gotta write garbage, but garbage is garbage at the end of the day.
In my experience, there are no absolute rules for “good code”. There is a lot of good advice out there, but ultimately the best predictors of good code are 1) experience, and; 2) caring.
I’ve been in extremely frustrating situations where horrible horrible code gets steamrolled into production systems because of <arguments>. However, in nearly every single case that wasn’t a well-intentioned junior, the bad code was almost never the result of following bad advice: it was the result of the engineer not caring.
Start with caring and empathy, and always worry about how “nice” your code is. That’s how it’ll get better.
His argument, summarized, is that ostensibly "bad" code is not as bad as it could be, because that implies you can readily tell what it does, immediately see issues with it, and know how to improve it. Possibly, this code might not fully deserve the label "bad". It's _far_ better than code that's completely impenetrable.
Reminds me of the Bjarne Stroustrup quote: "There are only two kinds of languages: the ones people complain about and the ones nobody uses".
I'm not so convinced that "bad code is better than awful code" is a very useful observation. Kind of strikes me like "well I have stage 3 cancer, but at least it's not stage 4"; you wouldn't be wrong, but you still have cancer.
I think maybe "treatable" vs "untreatable/terminal" could be a better metaphor. Both obviously still suck and are expensive and painful and time-consuming, but the difference seems significant.
Agreed, I like that. Seems more useful/actionable than 'good' or 'bad'. So can we then say the difference between a rookie and a seasoned developer is just whether they say code is 'unsalvageable' or whether it's 'bad'? We solved it - coming soon to an interview process near you.
Tsoding is a really impressive (though cranky in an almost stereotypically Russian way) coding livestreamer. I've really enjoyed his voyages around Haskell, Rust, and other esoteric pursuits over the years: https://tsoding.github.io/
* You can't express `T` where `null` is forbidden in the type system, so you get NullPointerException everywhere and defensive null checks.
* You express a sum type as a product type because your language does not have sum types.
* Your language doesn't have first-class multiple return values (or tuples), so you return extra values via out parameters or thread-local variables such as `errno`.
* Your language doesn't have exceptions (or algebraic effects) and can't do IO, so you have monad transformers.
* Your language doesn't have set-theoretic types, so you need hacks like `thiserror`.
* Your language doesn't have stackful coroutines and can't infer async IO for you, so you have `async/await` spam, or callback hell, or "mono's".
* Your language doesn't have exhaustiveness checks (or pattern matching), so you need a fallthrough case check on switch statements (see the sketch after this list).
* Your language doesn't have algebraic effects, so you need to pass context everywhere.
I know someone will reply about Java's null annotation checking options, so here is one of them: https://github.com/uber/NullAway.
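On the exhaustiveness bullet above, a C++ sketch: with no default case, -Wswitch-style warnings can flag an unhandled enumerator, but adding a default silences that check, which is why you end up writing the manual fallthrough guard instead.

    #include <cstdlib>

    enum class Color { Red, Green, Blue };

    const char* name(Color c) {
        switch (c) {               // no default: the compiler can warn if a
            case Color::Red:   return "red";    // new enumerator goes unhandled
            case Color::Green: return "green";
            case Color::Blue:  return "blue";
        }
        std::abort();              // the manual "can't happen" fallthrough check
    }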
I'm not sure if it's a trend, but I am sure that I like to see people being vocal about highly readable codebases. I have yet to see this mindset in the workplaces I've been at.
I would love to see a future where folks write code without thinking about how "scalable" or "performant" their code is and instead focus on how easy it would be to change the behaviour of existing code.
This one drives me nuts. We aren't running this web app on a Pentium 2. Code complexity is vastly more important than the complexity of running this algo when the N is 12.
Because it starts with "it doesn't matter because N is 12" and it ends up with websites such as reddit which are extremely slow and go offline every single day.
You may not be running your site on a Pentium 2, but shitty performance adds up quickly and computing power is not infinite.
Problem is even if your manager's manager understands the value of it, his manager's manager doesn't, so time spent on it is seen as wasteful or "doing nothing" versus duct-taping things together for some poor sap to have to refactor ("for free") under pressure while implementing more features non-technical PMs or managers want ASAP.
You have to be skilled at office politics or make sacrifices to keep code quality high. Usually not worth it. You get yelled at less if you externalize all the costs to future suckers, instead of trying to reduce those costs upfront.
I literally just had this exact meeting yesterday.
So much basic maintenance had been skipped that a suite of apps was costing too much money to migrate to a new platform. Instead of asking for more budget for that app, the dev manager suggested using the money from an upcoming project for an unrelated app to finish off this one!
It's a literal Ponzi scheme, using funds from newcomers to pay off the technical debts of previous projects.
Which of course never ends. Now the next project will go over budget, leading to rushed work, corner cutting, and drawing down on the project after that, which in turn will be a mess, etc...
It's fascinating to watch this kind of stuff unfold as an outsider.
I realize that my comment doesn’t scale and only works for a minority of people but…
Just try really hard to not work for shortsighted people or bad companies. There are tons of great workplaces out there that will treat you like a king and pay you very well. Keep looking, these “almost too good to be true” companies exist. Vote with your feet at every opportunity.
FWIW, my experience has been that companies using less popular languages with a good reputation (Haskell, Clojure, lisp, OCaml...) are really, really awesome places to work and learn.
Certainly something I imagine to be true, I just haven't been fortunate enough to get deeper in that direction. My first contact with programming logic was The Little Schemer though, and I'm incredibly glad for that.
> instead focus on how easy it would be to change the behaviour of existing code.
This has been my principal design consideration for a while now. Not "the code should be short!" or "the code should be elegant!" but "it should be easy to change" because your requirements are always going to change. It just so happens that the latter usually means things like single responsibility and loose coupling that bring about the former as a matter of course.
Curious about the correlation between "scalable and performant" and "over time and over budget".
Of course scalable and performant is nice. I have also been on teams that delivered absolutely nothing because the KPIs were more important than the product.
This is why devs should choose a language like golang. It is readable because, generics aside, you don't get to do much with the language, and it also runs very fast.
Good and bad are such boring descriptions for code. It tells me nothing about it. Readability is incredibly subjective and it needs a point of comparison.
What properties of the code need improvement or demonstrate what others should aspire to? -- Getting someone to really articulate that is quite difficult.
> Readability is incredibly subjective and it needs a point of comparison
Readability definitely has objective measures. One that I find especially interesting goes by the term "cognitive complexity" and measures, in part, comprehensibility. A 2003 article in the Canadian Journal of Electrical and Computer Engineering titled "A new measure of software complexity based on cognitive weights" defines it as "the degree of difficulty or relative time and effort required for comprehending a given piece of software modelled by a number of BCS [basic control structures]". In 2018, "Cognitive complexity: an overview and evaluation", in Proceedings of the 2018 International Conference on Technical Debt, gave a three-part set of criteria for evaluating complexity.
Note that this is not the same as cyclomatic complexity, a measure best suited to gauging the effort needed to adequately test a system.
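A sketch of why the two measures differ: both functions below contain the same three decision points (identical cyclomatic complexity), but nesting inflates the second one's cognitive-complexity score, matching the intuition that it is harder to read.

    // Three decisions, flat: reads top to bottom.
    int categorizeFlat(int n) {
        if (n < 0)   return -1;
        if (n == 0)  return 0;
        if (n < 100) return 1;
        return 2;
    }

    // Three decisions, nested: same cyclomatic complexity, but each level
    // of nesting adds to the cognitive-complexity score (and the reader's load).
    int categorizeNested(int n) {
        if (n >= 0) {
            if (n != 0) {
                if (n < 100) return 1;
                return 2;
            }
            return 0;
        }
        return -1;
    }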
Good and bad code only matters if someone is looking.
If you assume nobody is looking, you will often write bad code. If you assume that someone is looking, possibly including Future You, you will aim to write good code.
This is very true. IME it’s really an empathy problem.
- “It passes the linter so don’t give me any advice on how it can be improved”
- “YAGNI”
While both of the above could be good counter-arguments in some contexts, I’ve seen them used disproportionately often by set-in-their-ways senior engineers who don’t care anymore or who forget that we write code for other humans, not for machines.
There was an "OMG Bad Code!!" scene toward the end of the Silicon Valley series that I caught.
Many people were ragging on the founder|coder for writing a clean snippet of linear search when he obviously should have done better and used a binary search (or some other algo)... ergo "Bad Code!!".
What was missing from that was any context about the volume of expected data per "need to search this" call, etc.
It's entirely possible that every instance was to quickly search an already-primed cache of data, and that optimal performance on the target hardware was to minimise branching and linearly rip through a small chunk of data in an already-loaded page.
Or, maybe not.
Point being, context and bigger picture plays a part here also.
In any case, you shouldn't use explicit iteration to solve this problem without a good reason. Every language should have both linear and binary search as part of its standard library.
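In C++, for instance, both are a single call (a sketch):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v{3, 9, 14, 27, 41};  // sorted, as binary_search requires
        // Linear search: no ordering precondition, O(n).
        bool foundLinear = std::find(v.begin(), v.end(), 14) != v.end();
        // Binary search: requires sorted input, O(log n).
        bool foundBinary = std::binary_search(v.begin(), v.end(), 14);
        return (foundLinear && foundBinary) ? 0 : 1;
    }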
Cheers for the detail - I only dimly recall it in passing as I'm not much of a TV series consumer.
I agree that you shouldn't especially choose one algorithm over another without reason and I feel comfortable choosing a linear scan of short sections of cached data in pipelined situations where branching overheads are steep.
Was the case in the TV episode such a scenario?
I have no knowledge of that myself.
Uhhh not sure why you were downvoted to oblivion. I can easily see other people disagreeing with you, and I’m not saying I agree, but what you said was not unreasonable.
New accounts whose first comment includes a link tend to get filtered and marked dead without user interaction, it's part of the automated mod system. They basically get caught up in the spam filter, I vouch for them a lot when (like the one you vouched) they are actually contributing.
Some of the most impactful code I've written has been absolutely trash. Like, really, really bad. By almost all objective measures, terrible. Except for the results it helped bring about. By that one measure, it was excellent, damn near perfect code.
Likely because if the effect of "bad" code is positive enough, you have incentive to just let it keep on being "bad" and not touching it, so you can keep reaping the benefits.
Also, if you're on a tight timeline for a critical function, you may want to push ugly/inelegant/dense/"bad" code just to get the function in prod a day sooner. This doesn't mean the code isn't "bad," it just means that the business needs outweigh the ugliness.
I wrote some really trashy code in what was supposed to be a "proof of concept" where I had about 7 days to write something that would be shown to the executive committee of the very large company I was working for at the time.
Then I was told the CEO and CFO wanted a preview in 3 days....
That was 12 years ago, still in production as far as I know!
[The functionality was supposed to be replaced with something built in or integrated with the new ERP system.... but that project crashed and burned].
Oh man I have some bash case statements that fit this description. Could be improved, but it's tested with every commit on every PR and by George it really holds the room together.
I've seen snippets of the back-end code of a public cloud, and it is full of multi-page "switch" statements where they have hard-coded lookup tables for what SKUs support what features.
It's ugly, it's a maintenance headache, but it does function, and very robustly too. There are no API calls, no microservices, no performance issues, etc...
I agree with the comments in the video, since often I find myself dealing with opaque code. Having code that's opaque makes it hard to make sense of what it does, when it does what it does, and what happens when it fails. Having code that's obviously less than optimal but still understandable is far better, as the OP mentions in the video. At least with bad code that's understandable, you can sometimes improve it if the underlying requirements and dependencies aren't too deep (otherwise you're gonna break something, and it's better to just leave it as bad or ugly code).
3. “I don’t understand what the code is doing, and have no practical way to tell if a change would break something, because the application integrates with bespoke distributed systems that only run at the customer site, and also there are N customers with N different environments having different integrations, and what’s worse their respective test environments, for those that have one, don’t match their prod environments.”
Yeah but sometimes bad code is just bad code. Sometimes what it is doing is not obviously wrong, but ends up being wrong, but it's so bad that it's not even doing the wrong thing right, so when you "fix" it you discover it should never have been written in the first place.
In other words: my definition of X, which I interpret as Y and which is not what is generally understood by X, is bad. Therefore, I propose alternative practice Z, which is better than my Y (and also happens to be what people generally understand to be defined as X in the first place).
For code to do "all of the above", it has to change. In which case, bad code does not get the job done because it's highly resistant to change and gets more resistant as time goes on.
This is extremely stupid. And programming "influencers" are also stupid.
Code can be bad on many levels. It can be bad on the level of some one liner where someone appends to a list in javascript by using the length as an index. It can also be bad on the level of being an impenetrable wall of shit that is impossible to understand.
And just because I don't understand how something works doesn't mean I don't know why it's bad. I know why it's bad because I know it can be done simply, but the person that implemented it decided to write their own terrible and over-engineered design.