Hacker News

There's something exhilarating about pushing through to some "everything works like I think it should" point, and you can often get there without doing things the conscientious, diligent, methodical "right" way, and it's only getting easier. At the point where everything works, if it's not just a toy or experiment, you definitely have to go back and understand everything. There will be a ton to fix, and it might take longer that way than doing it right the first time would have.

I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.





If you're hanging your features off a well trodden framework or engine this seems fine.

If frameworks don't make sense for what you're doing, though, and you're now relying on your LLM to write the core design of your codebase... it will fall apart long before you reach "it's basically working".

The more nuanced the interactions in your code, the worse it'll do.


"It was a non-trivial project, and I had to be paying attention to what the agent was doing"

There is a big difference between vibe coding and LLM assisted coding, and the poster above seems to be aware of it.


Vibe coding is the name for LLM assisted coding, whether you like it or not.

Not all LLM assisted coding is vibe coding. Vibe coding goes something like: throw it a prompt, then repeat "it works" or "it doesn't work" or "it works, but I want feature XYZ, also" or "ok, but make it better."

Vibe implies missing knowledge or experience - LLMs are a great equalizer, on the level of handguns or industrialization. Everyone can use them, some can use them well, and the value of using them well is enormous.

A real SWE is going to say "ok, now build a new function doing XYZ," and their agents.md will contain their entire project spec and baseline prompt framework, with things like "document in the specified standard, create the unit test, document how the test integrates with the existing features, create a follow-up note on any potential interference with existing functionality, and update the todo.md" and so forth: all the specific instructions, structure, subtasks, and boring project management most of us wouldn't even know to think of. Doing it right takes a lot of setup and an understanding of how software projects should be run, using the LLMs for the work they excel at and not for the work they suck at.
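A minimal sketch of what such an agents.md might look like; the filenames and rules here are hypothetical, just illustrating the structure described above:

```markdown
# AGENTS.md (hypothetical sketch)

## Project context
- Read SPEC.md for the full project specification before changing anything.

## For every new function or feature
1. Document it in the specified documentation standard.
2. Create a unit test, and document how the test integrates with existing features.
3. Write a follow-up note on any potential interference with existing functionality.
4. Update todo.md.
```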

I only know the binary "that works" or not, for most things I've vibe coded. It'd be nice to have the breadth of knowledge and skills to do things right, but I also feel like it'd take a lot of the magic out of it too, lol.


While that was the original intent when Andrej Karpathy coined the term, it's now simply synonymous with LLM assisted coding. Like many previously pejorative terms, it's now become ubiquitous and lost its original meaning.

I disagree, "vibe" characterizes how the LLM is being used, not that AI is being used at all. The vibe part means it's not rigorous. Using an LLM to autocomplete a line in an otherwise traditionally coded project would not be considered "vibe" coding, despite being AI assisted, because the programmer can easily read and verify the line is as correct as if they had typed it character by character.

That's the etymology, but it's now entered the general lexicon and that distinction has been lost.

I don't think you can make an argument the definition is settled based on popular usage when the replies to your comment show there's still plenty of contention about what it means. It hasn't entered the general lexicon yet because most people have never heard it and could only guess at what it means. Not even my students agree on what vibe coding means and they're the ones who do it most. And regardless, terms of art can still have technical meanings distinct from the general lexicon (like "theory", or the example you used "hacker", which means something different around here).

One means (or used to mean) actually checking the LLM's output; the other means trying again until the output does what you want.

Given that the models will attempt to check their own work with almost the identical verification that a human engineer would, it's hard to say humans aren't implicitly checking by relying on those shared verification methods (e.g. "let me run the tests", "let me try to run the application with specific arguments to test whether the behavior works").
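As a toy illustration of that shared verification: a hypothetical project function and the check that either a human or an agent would run against it in exactly the same way (all names here are made up):

```python
def slugify(title: str) -> str:
    """Convert a title to a URL slug (hypothetical project function)."""
    return "-".join(title.lower().split())


def test_slugify():
    # The check itself is identical whether a person or an agent runs it:
    # execute the code and compare observed behavior to expectations.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Vibe   Coding  ") == "vibe-coding"


test_slugify()
print("all checks passed")
```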

> Given that the models will attempt to check their own work with almost the identical verification that a human engineer would

That's not the case at all though. The LLM doesn't have a mental model of what the expected final result is, so how could it possibly verify that?

It has a description in text format of what the engineer thinks he wants. The text format is inherently limited and lossy and the engineer is unlikely to be perfect at expressing his expectations in any case.


That's the original context of the Andrej Karpathy comment, but it's just synonymous with LLM assisted coding now.

Not yet, but the more people insist, the more it will be. But what is your proposal for differentiating between just prompting without looking at the code versus merely using an LLM to generate code?

I'm not in favour of the definition, but like _hacker_, the battle is already lost.

> I'm not a professional SWE

It was already obvious from your first paragraph - in that context even the sentence "everything works like I think it should" makes absolute sense, because it fits perfectly with the limited understanding of a non-engineer - from your POV, it indeed all works perfectly, API secrets in the frontend and 5 levels of JSON transformation on the backend side be damned, right ;) Yay, vibe-coding for everyone - even if it takes longer than programming the conventional way, who cares, right?
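For readers wondering what "API secrets in the frontend" means in practice: a key baked into code that ships to the client can be read by anyone, while the conventional fix keeps it server-side only. A minimal Python sketch, with a hypothetical OPENWEATHER_KEY as the secret:

```python
import os

# Anti-pattern: a secret hardcoded into code that ships to the client.
# Anyone can pull it out of the bundle or page source.
LEAKED_KEY = "sk-live-abc123"  # hypothetical, illustrative only

# Safer pattern: the secret lives only on the server, loaded from the
# environment; the frontend calls your backend, which then makes the
# third-party API request on its behalf.
def get_api_key() -> str:
    key = os.environ.get("OPENWEATHER_KEY")
    if key is None:
        raise RuntimeError("OPENWEATHER_KEY is not set on the server")
    return key
```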


It sounds more like you just made an overly simplistic interpretation of their statement, "everything works like I think it should," since it's clear from their post that they recognize the difference between some basic level of "working" and a well-engineered system.

Hopefully you aren't discouraged by this, observationist, pretty clear hansmayer is just taking potshots. Your first paragraph could very well have been written by a professional SWE who understood what level of robustness was required given the constraints of the specific scenario in which the software was being developed.


By your response, it really seems like you read their first sentence as advocating for vibe coding, but I think they were saying something more to the effect of: "While it's exciting to reach those milestones more quickly and frequently, the easier it becomes to reach a point where everything seems to work on the surface, the easier it also becomes to bypass the elegant, properly designed, intimately internalized detail that writing things manually forces on you. So when it comes time to troubleshoot, the same people have to confront those rapidly constructed systems with less confidence, and the maintenance burden later may be much greater than it otherwise would be."

Which to me, as a professional SWE, seems like a very engineer thing to think about, if I've read both of your comments correctly.


Exactly - I know enough to know what I don't know, since I've been able to interact with professionals, and went down the path of programming far enough to know I didn't want to do it. I've also gotten good at enough things to know the pattern of "be really shitty at doing things until you're not bad, and eventually be ok, and if you work your ass off, someday you'll actually be good at it".

The neat thing about vibe coding is knowing that I'm shitty at actual coding and achieving things in hours that would likely have taken me months to learn to do the right way, or weeks to hack together with other people's bubblegum and duct tape. I'd have to put in a couple years to get to the "OK" level of professional programming, and I feel glad I didn't. Lucky, even.


>> even if it takes longer than programming the conventional way, who cares, right?

Longer than writing code from scratch, with no templates or frameworks? Longer than testing and deploying manually?

Even eight years ago when I left full-stack development, nobody was building anything from scratch, without any templates.

Serious question - are there still people at large companies who build things the conventional way? Or even startups? I was berated a decade ago for building just a static site from scratch, so I'm curious whether people are still out there doing this.


What do you mean by "the conventional way"?

I was referencing OP's statement.

"conventional programming"

Key Characteristics of Conventional Programming:

Manual Code Writing

- Developers write detailed instructions in a programming language (e.g., Java, C++, Python) to tell the computer exactly what to do.

- Every logic, condition, and flow is explicitly coded.

Imperative Approach

- Focuses on how to achieve a result step by step. Example: Writing loops and conditionals to process data rather than using built-in abstractions or declarative statements.

High Technical Skill Requirement

- Requires understanding of syntax, algorithms, data structures, and debugging. No visual drag-and-drop or automation tools—everything is coded manually.

Longer Development Cycles

- Building applications from scratch without pre-built templates or AI assistance. Testing and deployment are also manual and time-intensive.

Traditional Tools

- IDEs (Integrated Development Environments) like Eclipse or Visual Studio. Version control systems like Git for collaboration.
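The "imperative approach" point above can be made concrete with a small Python contrast (illustrative only):

```python
# Imperative: spell out step by step *how* to compute the result,
# with explicit loops and conditionals.
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total


# Declarative-leaning: state *what* you want using built-in abstractions.
def sum_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)


print(sum_even_squares_imperative([1, 2, 3, 4]))   # 20
print(sum_even_squares_declarative([1, 2, 3, 4]))  # 20
```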


>> I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.

> It was already obvious from your first paragraph - in that context even the sentence "everything works like I think it should" makes absolute sense, because it fits perfectly with the limited understanding of a non-engineer - from your POV, it indeed all works perfectly, API secrets in the frontend and 5 levels of JSON transformation on the backend side be damned, right ;)

I mean, he qualified it, right? Sounds like he knew exactly what he was getting :-/


I've noticed that as well. I don't memorize every single syntax error, but when I use agents to help code I learn why they fail and how to correct them. The same way I would imagine a teacher learns the best way to teach their students.

AI is allowing a lot of "non SWEs" to speedrun the failed project lifecycle.

The exuberance of rapid early-stage development is mirrored by the despair of late-stage realizations that you've painted yourself into a corner, you don't understand enough about the code or the problem domain to move forward at all, and your AI coding assistant can't help either because the program is too large for it to reason about fully.

AI lets you make all the classic engineering project mistakes faster.



