
For sure. I'd argue to write the "stupid" code to get started, get that momentum going. The sooner you are writing code, the sooner you are making your concept real, and finding the flaws in your mental model for what you're solving.

I used to try to think ahead, plan ahead and "architect". Then I realized that simply "getting something on paper" corrects many of the assumptions I had in my head. A colleague pushed me to "get something working" and iterate from there, and it completely changed how I build software. Even if that initial version is "stupid" and hack-ish!



I think this is mostly true, but also I’d highlight the necessity of having a mental model, and iterating.

I think it is common for a programmer to just start programming without coming up with any model, and just try to solve the problem by adding code on top of code.

There are also many programmers who go with their first “working” implementation, and never iterate.

These days, I think the pendulum has swung too far away from thinking about the program and maybe mapping it out a bit on paper before writing code.


My philosophy:

1. Get it working.

2. Get it working well.

3. Get it working fast.

This puts "just get it working" as the first priority. Don't worry about quality yet; just make it. Then, and only once you have something working, focus on quality. This means getting the code into something reasonable that would pass a review (e.g., architecturally sound). Finally, do an optimization pass.

This is the process I follow for PRs and projects alike. Sometimes you can mix all the steps into a single commit, if you understand the problem and solution domains well. But if you don't, you'll likely have to split it up.


> Finally, do an optimization pass.

Depending on how low-level your code is, this... may not work out in those terms.

In other words, I’d say that if you actually want good software—and that includes making sure its speed falls within a reasonable factor of the napkin-math theoretical maximum achievable on the platform—your three steps can easily constitute three entire rewrites or at least substantial refactors. You might well need to rearchitect if the “working well” version has multiple small loops split by domain-level concern when the hardware really wants a single large one, or if you’re doing a lot of pointer-chasing and need to flatten the whole thing into a single buffer in preorder, or if your interface assumes per-byte ops where SIMD can be applied.
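
To make the pointer-chasing case concrete, here is a minimal C sketch (types and names invented for illustration) contrasting a step-2 tree of heap-allocated nodes with a step-3 rearchitecture that flattens the same values into one contiguous buffer in preorder:

    #include <stddef.h>

    /* Step-2 shape: every node is its own allocation, so traversal
     * chases pointers and stalls on cache misses. */
    typedef struct Node {
        float value;
        struct Node *left, *right;
    } Node;

    float sum_tree(const Node *n) {
        if (!n) return 0.0f;
        return n->value + sum_tree(n->left) + sum_tree(n->right);
    }

    /* Step-3 shape: the same values stored preorder in one flat buffer.
     * Traversal becomes a linear scan the compiler can auto-vectorize. */
    typedef struct {
        float *values;   /* node values, in preorder */
        size_t count;
    } FlatTree;

    float sum_flat(const FlatTree *t) {
        float total = 0.0f;
        for (size_t i = 0; i < t->count; i++)
            total += t->values[i];
        return total;
    }

Note that sum_flat is not an "optimization pass" over sum_tree: the data layout, and every interface that touches it, had to change.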

This is not a condemnation of the strategy, mind you. Crap code is valuable and I wish I were better at it. I just disagree that the transition from step 2 to step 3 can be described as an optimization pass. If that’s what you limit yourself to, you’ll quite likely be forced to leave at least an order of magnitude’s worth of performance on the table.

And yes, most consumer software is very much not good by that definition.

(For instance, I’m expecting that the Ladybird devs will be able to get their browser to work well for daily tasks—which I would count a tremendous achievement—but I’m not optimistic about it then becoming any faster than the state of the art even ten or fifteen years ago.)


Some optimization problems require an entire PhD dissertation and a research budget to actually solve, so some algorithms demand far more effort here than is reasonable for most products. As mentioned, sometimes you can combine all of these into one step -- when you know the domains well.

Sometimes, it might even be completely separate people working on each step... separated by time and space.

In any case, most software generally stops at (2), simply because (3) isn't worth the effort -- for example, there's very little point in spending two weeks optimizing a report that runs once a month in the middle of the night. At some point there may be, but usually not anytime soon.


This should be carved in stone on every campus computer science building.

https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast


This somewhat depends on how big a program/application you are making.

Again, this is something I've seen bite enterprise-style applications quite often: they can be pushed out piecemeal, so you get things like the datastore, input APIs, and UI to the customer quickly; then over the next months reporting, auditing, and fine-grained access controls get put in, and suddenly you find yourself stuck working around major issues where a little up-front thinking about the later steps would have saved you a lot of heartache.


This is where 'knowing the domain' lets you put a ton of stuff in all at once. If you have no clue what you're doing, you have to learn the lesson you're talking about. As long as you can avoid joining teams that haven't learned this lesson (and others like it), you'll be fine.

I once joined a team where they knew they were going to do translations at some point ... and the way they decided to "prepare" for it was absolutely nonsensical. It was clear none of them had ever done translations before, so when it came time to actually do the work, it was a disaster. They had told product/sales "it was ready", but it didn't actually work -- and couldn't ever actually work. It required redesigning half the architecture and months of effort across a whole team to get it working. Even then, some aspects remained completely untranslatable and took an additional 6-8 months of refactoring.

So, another lesson is to not try to engineer for something unless your goal is to actually "get it working". If you don't need it yet, it is probably better to wait until you do.


Slight variation:

- Make it work

- Make it right

- Make it fast


This is fantastic, but how do you communicate this within your organization to peers, and not allow the pace of the organization to interfere? For example, I can see many teams stopping after step 1.


And in some cases, there isn't a reason to continue to step 2 or 3. Software generally has a shelf-life. Most businesses write code that should be rewritten every 5-10 years, but there's that kernel of code that _never_ changes... that's the code that really needs steps 2 and 3. The rest probably only runs occasionally and doesn't need to be exhaustively tested or especially fast.


Agreed. Having worked on everything from boring backend systems to performance-critical embedded systems, only a few areas are worth optimizing, and we always let data inform where to invest additional time.

I much prefer a code base that is readable and straightforward (maybe at the expense of some missed perf gains) over code that is highly performant but hard to follow/too clever.


You write a tech debt ticket and move on.

I've used a similar mantra of "make it work, make it pretty, make it fast" for two decades.

I think I've had to get to step 3 once and that was because the specs went from "one device" to "20 devices and two factories" after step 1 :D


> These days, I think the pendulum has swung too far away from thinking about the program and maybe mapping it out a bit on paper before writing code.

Sometimes I think about code structure like a sudoku, where you have to eliminate one of two possibilities by following through what would happen. Writing the code is (to me) like placing the provisional numbers and finding where you have a conflict. I simply cannot do it by holding state in my head (i.e., without making marks on the paper).

It could definitely be a limitation of mine rather than something generally true.


Totally agree. Iteration is key. Mapping things out on paper after you've written the code can also be illuminating. Analysis and design don't imply one-and-done Architect -> Implement waterfall methods.


Knowing hard requirements up front can be critical to building the right thing. It's scary how many "temporary" things get built on top of and stuck in production. Obviously loose coupling / clear interfaces can help a lot with this.

But an easy example: "just build the single-player version" (of an application) can be worse than just eating your vegetables. It can be very difficult to tack on multiplayer, as opposed to building for it up front.


I once retrofitted a computer racing game/sim from single-player to multi-player.

I thought it was a masterpiece of abusing the C pre-processor: all variables used for player physics, game state, inputs, and position outputs to the graphics pipeline were guarded with macros so that, as the (overwhelmingly) single-player titles continued to be developed, the code would remain clean for the two titles we hoped to ship with split-screen support.

All the state was wrapped in ss_access() macros (“split screen access”) and compiled to plain variables for single-player titles, but with the variable name changed so writing plain access code wouldn’t compile.
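
As a rough reconstruction of the idea (all names here are invented; this is a sketch of the technique, not the original code):

    /* Split-screen builds index per-player state by the current player;
     * single-player builds compile the macro down to a plain, renamed
     * variable, so any unguarded access to the old name fails to compile. */
    #ifdef SPLIT_SCREEN
    #define MAX_PLAYERS 2
    extern int current_player;          /* set per simulated viewport */
    float ss_speed[MAX_PLAYERS];        /* per-player physics state */
    #define ss_access(var) (ss_##var[current_player])
    #else
    float ss_speed_sp;                  /* renamed: plain `speed` won't compile */
    #define ss_access(var) (ss_##var##_sp)
    #endif

    void apply_throttle(float dt) {
        /* expands to ss_speed[current_player] or ss_speed_sp */
        ss_access(speed) += 3.0f * dt;
    }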

I was proud of the technical use/abuse of macros. I was not proud that I'd done a lot of work and imposed a tax on the other teams, all for a feature that producers wanted but that we never shipped: in the end, one console title was cancelled (Saturn) and the other (mine) shipped single-player only (PlayStation).


Pain aside, this sounds like an absolute blast.


Even more of a blast was this anecdote from the same effort:

https://news.ycombinator.com/item?id=33963859


That's a great point, and I feel like it is relevant for a lot more than games.

We should definitely have a plan before we start, and sketch out the broad strokes both in design and in actual code as a starting point. For smaller things it's fine to just start hacking away, but when we're designing an entire application, I think the right way to approach it is to plan it out and then solve the big problems first. Like multiplayer.

They don't have to be completely solved (it's an iterative process), but they should be part of the process from the beginning.

An example from my own work: I took over an app two other developers had started. The plan was to synchronize data from a third party into our own db, but they hadn't done that; they had just used the third-party API directly. I don't know why. So when they left and I took over, I ended up deleting/refactoring everything, because everything was built around this third-party API, and there were a whole bunch of problems related to that and to the way they were using the third party's data structures directly rather than shaping the data the way we wanted it. The frontend took 30-60+ seconds to load a page because it was making something like 7 serialized requests, each waiting for a response before sending the next, and the backend did the same thing.
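
The serialized-request pattern is worth calling out on its own. A minimal sketch (hypothetical fetch_page stub, C with pthreads) of the difference between issuing N independent requests one at a time and issuing them concurrently:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_REQUESTS 7

    /* stand-in for a blocking call to the third-party API */
    static void *fetch_page(void *arg) {
        int id = *(int *)arg;
        sleep(1);                        /* pretend each request takes ~1s */
        printf("request %d done\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_REQUESTS];
        int ids[NUM_REQUESTS];

        /* Calling fetch_page() in a plain loop would pay ~7s of latency;
         * launching all requests before joining costs roughly the
         * latency of the slowest single request. */
        for (int i = 0; i < NUM_REQUESTS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, fetch_page, &ids[i]);
        }
        for (int i = 0; i < NUM_REQUESTS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }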

Now it loads instantly, but it did require that I basically tear out everything they'd done and rewrite most of the system from scratch.


In many projects it's impossible to know the requirements up front, or they are very vague.

Business requirements != programming requirements/features.

Very often both the business requirements and the programming requirements change a lot, since unless you have already written this one thing, in the exact form you are making it now, you will NEVER get it right the first time.


The problem is people don't adapt properly. If the business requirements change so much that they invalidate your previous work, then you need to redo the work. But in reality people just duct-tape a bunch of workarounds together and you end up with a frankensystem that doesn't do anything right.

It is possible to build systems that adapt to change: by decoupling and avoiding cross-cutting concerns, you can make big sweeping changes quite easily in a well-designed system. It's just that most developers are bad at software development; they make a horrible mess and then keep making it worse while blaming deadlines, management, etc.


You are both right and that's why so many projects are over budget or even fail miserably.


This is why I hate software engineering as a profession.

You're going to write the "stupid code" to get things out the door, get promoted and move on to another job, and then some future engineer has to come along and fix the mess you made.

But management and the rest of the org won't understand why those future engineers are having such a hard time, why there's so much tech debt, and why any substantial improvements require major rework and refactoring.

So the people writing the stupid code get promoted and look good, but the people who have to deal with the mess end up looking bad.


Sure, that sucks. You know what else sucks? The engineer who does nothing but foundation building and then is surprised that reality doesn't align with their meticulously laid out assumptions.


An engineer that does nothing but foundations can still be a damn good geotechnical engineer.

A foundation that isn't useful to build atop is just a shitty foundation. Everyone is taking it for granted that building a good foundation is impossible if you haven't built a shitty foundation for the same building first, but that's not the only way to do things.


The analogy is strained. Software is closer to a food recipe than a building. Trying to make a 3-layer strawberry banana cake with pineapple frosting? You are going to have to bake something and taste it to see if your recipe is any good. Then make some adjustments and bake some more.


Is the argument here that a skilled chef has no better way to make good food than unguided trial and error? That's obviously not true, as the abundance of "random ingredient" cooking challenges will attest.


I mean, you can write the stupid code to get something working, and not submit the code until you've iterated on it.


> I used to try to think ahead, plan ahead and "architect"

Depends on what you do. If you build a network protocol, you'd better architect it carefully before you start building upon it (and maybe others will build upon it, too).

The question is: "if I get this wrong, how much impact does it have?". Getting the API of a core service wrong will have a lot of impact, while writing a small mobile app won't affect anything other than itself.

But the thing is, if you think about that before you start iterating on your small app, then you've already taken an architectural decision :-).


> I'd argue to write the "stupid" code to get started, get that momentum going.

Yes and no. Depending on how dependent you become on that first iteration, you might drown an entire project or startup in technical debt.

You should only ever just jump in if:

A) it's a one off for some quick results or a demo or whatever

B) it's easy enough to throw away and nobody will try to ship it and make you maintain it

That said, having so much friction and analysis paralysis that you never ship is also no good.


or C): you cultivate a culture of continuous rewriting to match updated requirements and understanding as you code. So, so many people have never learned that, but once you do reach that state, it is very liberating, as there will be no more sacred ducks.

That said, it takes quite a bit of practice to become good enough at refactoring to actually work that way.


Yeah, I think it's actually a great skill to be comfortable not getting attached to your code, and to be open to refactoring/rearchitecting. In fact, if you have this as a common expectation, you may get really good at writing easily maintainable code. I have started putting fewer and fewer "clever optimizations" into my code, instead opting for ease of maintenance and for making it easy for new team members to onboard and start contributing. It depends on the size of the project/team (and the priorities therein), but it helps me later too, when I have to change functionality in something I wrote anywhere from 6-48 months ago :)


You should always have an architecture in mind. But it should be appropriate for the scale and complexity of your application _right now_, as opposed to what you imagine it will be in five years. Let it evolve, but always have it.


I frequently do both. It takes longer but leads to great overall architecture. I write a functional thing from scratch to understand the requirements and constraints. It grows organically and the architecture is bad. Once I understand the product better, I think deeply about a better architecture before basically rewriting from scratch. I sometimes need several iterations on the most complex products.


This is where experience matters. The more experience you have, the less stupid the code is, more often than not. Not because you aren't testing your concepts as fast, but because your tooling has improved.

Basically: do you have a good foundation to build from? With more experience, you can build a better foundation.


With a working prototype you get to test the specification and the users, not just the code itself.


This is also why I'm not a fan of the "software architect" that doesn't write code, or at least not the code that they've architected.


There's a phrase that's become a bit of a mantra in adjacent circles:

"Just make it exist first. You can make it good later."



