This puts "just get it working" as the first priority: don't worry about quality, just make it. Then, and only once you have something working, do you focus on quality. That pass is about getting the code into something reasonable that would pass a review (e.g., architecturally sound). Finally, do an optimization pass.
This is the process I follow for PRs and projects alike. Sometimes you can mix all the steps into a single commit, if you understand the problem and solution domains well. But if you don't, you'll likely have to split it up.
Depending on how low-level your code is, this... may not work out in those terms.
In other words, I’d say that if you actually want good software—and that includes making sure its speed falls within a reasonable factor of the napkin-math theoretical maximum achievable on the platform—your three steps can easily constitute three entire rewrites or at least substantial refactors. You might well need to rearchitect if the “working well” version has multiple small loops split by domain-level concern when the hardware really wants a single large one, or if you’re doing a lot of pointer-chasing and need to flatten the whole thing into a single buffer in preorder, or if your interface assumes per-byte ops where SIMD can be applied.
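To make the "flatten the whole thing into a single buffer in preorder" point concrete, here's a minimal sketch (the types, field names, and layout are my own illustration, not anything from the comment above): the pointer-chasing version follows a heap pointer per child, while the flattened version stores the same tree contiguously in preorder, so a whole-tree reduction becomes one linear scan the compiler can vectorize.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Step-2-style "working well" version: clean and obvious, but every child visit is a
// pointer chase to somewhere else on the heap.
struct Node {
    std::uint32_t value;
    std::vector<Node*> children;
};

std::uint64_t sum_tree(const Node* n) {
    std::uint64_t total = n->value;
    for (const Node* c : n->children) total += sum_tree(c);
    return total;
}

// Step-3-style rearchitecture: the same tree stored as one contiguous buffer in preorder.
// subtree_size lets a traversal skip or descend into subtrees without any pointers;
// a whole-tree sum doesn't even need it and becomes a plain, vectorizable loop.
struct FlatNode {
    std::uint32_t value;
    std::uint32_t subtree_size;  // number of FlatNodes in this subtree, including itself
};

std::uint64_t sum_flat(const std::vector<FlatNode>& nodes) {
    std::uint64_t total = 0;
    for (const FlatNode& n : nodes) total += n.value;
    return total;
}

int main() {
    // Pointer-chasing version: a root with two leaf children.
    Node leaf1{1, {}}, leaf2{2, {}};
    Node root{3, {&leaf1, &leaf2}};

    // The same tree flattened in preorder: root first, then each subtree contiguously.
    std::vector<FlatNode> flat = {{3, 3}, {1, 1}, {2, 1}};

    std::cout << sum_tree(&root) << " == " << sum_flat(flat) << '\n';  // prints "6 == 6"
}
```

Note that getting from the first form to the second is exactly the kind of change that touches the interface, not just the hot loop.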
This is not a condemnation of the strategy, mind you. Crap code is valuable and I wish I were better at it. I just disagree that the transition from step 2 to step 3 can be described as an optimization pass. If that’s what you limit yourself to, you’ll quite likely be forced to leave at least an order of magnitude’s worth of performance on the table.
And yes, most consumer software is very much not good by that definition.
(For instance, I’m expecting that the Ladybird devs will be able to get their browser to work well for daily tasks—which I would count a tremendous achievement—but I’m not optimistic about it then becoming any faster than the state of the art even ten or fifteen years ago.)
Some optimization problems require an entire PhD dissertation and a research budget to actually solve, so some algorithms demand far more effort at this step than is reasonable for most products. As mentioned, sometimes you can combine all of these into one step -- when you know the domains well.
Sometimes, it might even be completely separate people working on each step... separated by time and space.
In any case, most software generally stops at (2) simply because any effort toward (3) isn't worth it -- for example, there's very little point in spending two weeks optimizing a report generation job that runs in the middle of the night, once a month. At some point there may be, but usually not anytime soon.
This somewhat depends on how big a program/application you are making.
Again, this is something I see bite enterprise-style applications quite often, since they can be pushed out piecemeal: you get things like the datastore/input APIs/UI to the customer quickly, then over the next months things like reporting, auditing, and fine-grained access controls get put in, and suddenly you find yourself stuck working around major issues where a little bit of up-front thinking about the later steps would have saved you a lot of heartache.
This is where 'knowing the domain' lets you put a ton of stuff in all at once. If you have no clue what you're doing, you have to learn the lesson you're talking about. As long as you can avoid joining teams that haven't learned this lesson (and others like it), you'll be fine.
I once joined a team that knew they were going to do translations at some point ... and the way they decided to "prepare" for it was absolutely nonsensical. It was clear none of them had ever done translations before, so when it came time to actually do the work, it was a disaster. They had told product/sales "it was ready", but it didn't actually work -- and couldn't ever actually work. It required redesigning half the architecture and months of effort across a whole team to get it working. Even then, some aspects were completely untranslatable and took an additional 6-8 months of refactoring.
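To illustrate why parts of a UI can end up "completely untranslatable" (a hypothetical sketch, not their actual code): text assembled by concatenation hardwires English word order into the program itself, so no string table can fix it afterwards, whereas a whole-sentence message with positional placeholders lets a translator reorder the sentence freely.

```cpp
// Requires C++20 for <format>. The message strings and translations are made up for
// illustration; they are not from the codebase described above.
#include <format>
#include <iostream>
#include <string>

int main() {
    int shown = 5, total = 120;

    // Concatenation bakes English word order into the code; many languages need these
    // numbers in different positions, so this can never be translated correctly.
    std::string bad = "Showing " + std::to_string(shown) + " of "
                    + std::to_string(total) + " results";

    // A whole-sentence message with positional placeholders, looked up at runtime
    // (hence std::vformat rather than std::format), can be reordered per language.
    std::string msg_en = "Showing {0} of {1} results";
    std::string msg_de = "{0} von {1} Ergebnissen werden angezeigt";
    std::string good = std::vformat(msg_de, std::make_format_args(shown, total));

    std::cout << bad << '\n' << good << '\n';
}
```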
So, another lesson is not to try to engineer for something before your goal is actually to "get it working". If you don't need it yet, it's probably better to wait until you do.
This is fantastic, but how do you communicate this within your organization to peers, and not let the pace of the organization interfere? For example, I can see many teams stopping after step 1.
And in some cases, there isn't a reason to continue to step 2 or 3. Software generally has a shelf-life. Most businesses write code that should be rewritten every 5-10 years, but there's that kernel of code that _never_ changes... that's the code that really needs steps 2 and 3. The rest probably only runs occasionally and doesn't necessarily need to be extensively tested or fast.
Agreed. Having worked the range from boring backend systems to performance-critical embedded systems, only a few areas are worth optimizing, and we always let data inform where to invest additional time.
I much prefer a code base that is readable and straightforward (maybe at the expense of some missed perf gains) over code that is highly performant but hard to follow/too clever.
1. Get it working.
2. Get it working well.
3. Get it working fast.