
I wonder what role moderating duplicate questions plays. The more time passes, the more existing data there is and the less need for new questions. If duplicate questions are moderated, will they disappear from these charts? Is this decline actually logical?

In 2020 there was a new CEO and a moderator council was formed: https://stackoverflow.blog/2020/01/21/scripting-the-future-o...


Because oil seems to be the main discussion topic, it is fair to assume that there will be no fair elections or independent decision-making power anytime soon.

Let’s do the same for Trump. Same base idea, right?

"If they didn't have double-standards, they wouldn't have any standards at all."

Instead, sites add Gemini integrations, which are targeted based on prompts. When you pay enough, Gemini recommends your shop and the AI buys the stuff for the target audience.

For a prototype, yes, but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.

It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C# Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production ready.

The code is as good as or even better than what I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
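
For a sense of what that kind of regression check can look like, here is a minimal Playwright smoke-test sketch. It's in TypeScript for illustration (the suite described above is for a C# Blazor app), and the URL, product name, and labels are hypothetical:

    import { test, expect } from '@playwright/test';

    // Hypothetical smoke test: the URL, product name, and labels are made up
    // for illustration; they stand in for whatever the shop actually exposes.
    test('visitor can add a product to the cart', async ({ page }) => {
      await page.goto('https://example-shop.test/products/sample-item');
      await page.getByRole('button', { name: 'Add to cart' }).click();
      await page.getByRole('link', { name: 'Cart' }).click();
      await expect(page.getByText('sample-item')).toBeVisible();
    });

A handful of end-to-end checks like this, run in CI, is usually enough to catch the "things silently broke over time" class of regression.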

I didn't have to mess with any of the HTML/CSS, which is usually what makes me give up on my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.

3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.

Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from these overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.

The future is both really exciting and scary.


I wish. I have all the rules, skill files, and constraints in place, and yet Claude 4.5 Sonnet continues to do strange things beyond a medium scale.

But it does save me time in many other aspects, so I can't complain.


I find that restricting it to very small modules that are clearly separated works well. It does sometimes do weird things, but I'm there to correct it with my experience.

I just wish I could have competent enough local LLMs and not rely on a company.


The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed, would you spend that to run them? (And then have to upgrade every handful of years.)

Nope, I wouldn't. I wish for competent local LLMs that don't require a supercomputer at home to run. One can dream!

Use Opus only, or use GPT 5.2 Codex High (with 5.2 Pro as oracle and for spec work)

Yes of course. That's the one I meant to write.

You can be as specific as you want with an LLM; you can literally tell it to do “clean code” or use a DI framework or whatever and it’ll do it. Is it still work? Yes. But once you start using them you’ll realize how much of the code you actually write is safely in the realm of boilerplate, and that the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you’ll notice another optimization along the way … but most of the time it’s boring work an agent can do 10x faster than you.
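
As a concrete (and entirely hypothetical) illustration of the "abstract common functionality into a function and then update all call sites" chore, here is the kind of mechanical edit meant, sketched in TypeScript; withRetry, fetchUser, and fetchOrders are made-up names:

    // The duplicated retry loop that used to be pasted at each call site
    // gets pulled into one helper...
    async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let attempt = 0; attempt < attempts; attempt++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
          // simple exponential backoff between attempts
          await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100));
        }
      }
      throw lastError;
    }

    // ...and every call site is rewritten to use it. Stubs stand in for the
    // real functions so the sketch is self-contained.
    const fetchUser = async (id: string) => ({ id, name: 'example' });
    const fetchOrders = async (id: string) => [{ id, total: 42 }];

    async function main() {
      const user = await withRetry(() => fetchUser('u1'));
      const orders = await withRetry(() => fetchOrders('u1'));
      console.log(user, orders);
    }

    main();

Tedious to do by hand across dozens of call sites, but exactly the shape of work an agent churns through quickly.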


> the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

This right here in your very own comment is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days until they can treat YOU as the boilerplate and farm YOU out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.

This means painful change for a great many people. Appeals by analogy to historical changes like motorised vehicles etc. miss the QUALITATIVE change occurring this time.

Many HN users may point to Jevons paradox; I would like to point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter, and safety, until of course THAT day when he decides he isn't.


Sadly for SWEs, I doubt Jevons paradox applies to software, or at least not in the way they hope it does. That paradox implies that there are software projects on the shelf with a decent return on investment (ROI) that aren't taken up because of a lack of resources (money, space, production capacity, or otherwise). Unlike with physical goods, the only resources usually lacking now are money and people, which means the only way for more software to be built is for lower-value projects to stack up.

AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects, which can therefore justify high salaries, would have been done anyway.


My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are not interested in and do not care about good design and code quality.

What is good design and code quality?

If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when it will not be possible and it will need a rewrite. Just like software created by top-paid developers from the best universities.)

As long as LLM-written code keeps new bugs to the same level as hand-written code, I think LLMs writing code is much superior, simply because of the speed with which it allows us to implement features.

We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.


In light of OpenAI confessing to shareholders that there’s no there there (being shocked by and then using Anthropic’s MCP, being shocked by and then using Anthropic’s Skills, opening up a hosted dev platform to milk my awesome LLM business ideas, and now revealing that inline ads à la Google are their best idea so far to, you know, make money…), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.

Standard distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…

AirFryers are great household tabletop appliances that help people cook, faster and more easily than ever before, extraordinary dishes their ovens normally couldn’t. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.

Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.

The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…


Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately only the results matter and not much else. I'm personally very sorry things are this way. What I can change I know not.

Not sure it's the gotcha you want it to be. What you said is true by definition. That is, vibe coding is defined as not caring about code. Not to be confused with LLM-assisted coding.

I care about product quality. If "good design" and "code quality" can't be perceived in the product they don't matter.

I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.


Perhaps for the inexperienced or timid. Code quality is whether it compiles, and design is whether it performs to spec. Does properly formatted code matter when you no longer have to read it?

Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.

It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.

Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.

Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.


> Does properly formatted code matter when you no longer have to read it?

That is exactly the moment when you cannot say anything about the code and cannot fix a single line yourself.


I don't agree. I looked at most of the code the AI wrote in my project, and I have a good idea of how it is architected because I actively planned it. If I have a bug in my orders, I know I have to go to the orders service. Then it's not much harder than reading the code my coworkers write at my daily job.

Parent comment implied that they don’t plan to read the code at all in the long term.

At this point, realistically, do you read assembly or libraries anymore?

Years ago it was: Programmer -> Code -> Compile -> Runtime. Now the Programmer is divided into two entities:

Intention/Prompt Engineer -> AI -> Code -> Compile -> Runtime.

We have entered the 'sudo make me a sandwich' world where computers are now doing our bidding via voice and intent. Despite knowing how low-level device drivers work, I do not care how a file is stored, in what format, or on what medium. I just want .open and .write to work as expected on a working instruction set.

Those who can dive deep into software and hardware problems will retain their jobs or find work doing that which AI cannot. The days of requiring an army of six-figure polyglots have passed. As for the ability to do production- or kernel-level work, that is a matter of time.


Imagine if those trillions were spent on research and healthcare instead.


You cannot guarantee who is holding the pager at the moment of the explosion.


You can have a reasonable expectation that secure military pagers are only going to be used by soldiers. Given how few collateral deaths there were, this was a reasonable assumption.


You can’t pay to use it for commercial purposes?


> Database leaks happen all the time

The point is to use unique passwords. If there is a leak, hopefully it is detected and then it is appropriate to change the password.


Sure, if you use unique passwords, then changing passwords isn't as useful. Yet we shouldn't judge a security policy based on the existence or absence of other policies.

What you are judging then is a whole set of policies, which is a bit too controlling. You will most often not have absolute control over the users' policy set; all you can do is suggest policies, which may or may not be adopted, and you can't rely on their strict adoption.

A similar case is the empirical efficacy of birth control. In practice, the effectiveness of abstinence-based methods is lower than that of condoms. While theoretically abstinence-based birth control would be better, who cares what the rates are in theory? The actual success rates are what matter.


The business model of Orion+ would likely take a hit in the long run.

