There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in.

Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.



> Because LLMs make it that much faster to develop software

I feel as though "facts" such as this are presented to me all the time on HN, but in my everyday job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push back against.

I know, I know: "they just don't know how to use LLMs the right way!!!" But all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile, the ones who never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down.

If we can continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs who continued to devote some of their free time to coding in languages like Gleam, and who continued to maintain and sharpen their ability to understand and reason about code.


This last week:

* One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.

* One developer has an LLM making his PRs. He slurped up my unfinished branch, PR'd it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.

* And I forgot to turn off LLM autocomplete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog-standard IntelliSense did what I needed in seconds once I turned LLM autocomplete off.

LLMs sometimes save me some time. But overall they've wasted so much of it that the savings have not yet offset the losses.


The first two cases indicate that you have some gaps in your change-management process: you need strict requirements for pull requests and CI/CD checks.


> One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.

So the LLM was not told how to run the tests? Without that they cannot know if what they did works, and they are a bit like humans: they try something, and then they need to check whether it does the right thing. Without a test cycle you definitely don't get a lot out of LLMs.
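
To make that concrete, here's a minimal sketch of the cycle I mean, assuming a pytest suite; apply_edit is a hypothetical callback that asks the model for a patch, not any particular agent's API:

    import subprocess

    def edit_test_loop(apply_edit, max_attempts=5):
        # Toy change-test cycle: apply a model-proposed edit, run the
        # test suite, and feed failures back until the tests pass.
        feedback = None
        for _ in range(max_attempts):
            apply_edit(feedback)  # hypothetical: asks the model for a patch
            result = subprocess.run(["pytest", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return True  # tests pass; a human still reviews the diff
            feedback = result.stdout + result.stderr  # failures go back in
        return False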


You guys always find a way to say "you can be an LLM maximalist too, you just skipped a step."

The bigger story here is not that they forgot to tell the LLM to run tests; it's that agentic use has been so normalized and overhyped that an entire PR was attempted without any QA. Even if you're personally against this, this is how most people talk about agents online.

You don't always have the privilege of working on a project with tests, and rarely are they so thorough that they catch everything. Blindly trusting LLM output without QA or review shouldn't be normalized.


Who is normalizing merging ANYTHING, LLM-generated or human-generated, without QA or review?

You should be reviewing everything that touches your codebase regardless of source.


A LOT of people, if you're paying attention. Why do you think that happened at their company?

It's not hard to find comments from people vibe coding apps without understanding the code, even apps handling sensitive data. And it's not hard to find comments saying agents can run by themselves.

I mean people are arguing AGI is already here. What do you mean who is normalizing this?


I fully believe there are misguided leaders advocating for "increasing velocity" or "productivity" or whatever, but the technical leaders should be pushing back. You can't make a ship go faster by removing the hull.

And if you want to try... well you get what you get!

But again, no one who is serious about their business and serious about building useful products is doing this.


> But again, no one who is serious about their business and serious about building useful products is doing this.

While this is potentially true for software companies, there are many companies for which software or even technology in general is not a core competency. They are very serious about their very useful products. They also have some, er, interesting ideas about what LLMs allow them to accomplish.


I am not saying you should be an LLM maximalist at all. I am just saying LLMs need to have a change-test cycle, like humans, in order to be effective. But it looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet.

> But it looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet

Listen, you can engage with the comment or ignore everything but the first sentence and throw out personal insults. If you don't want to sound like a shill, don't write like one.

When you're telling people the problem is that the LLM did not have tests, you're saying "Yeah, I know you caught it spitting out random unrelated crap, but if you'd just let it verify whether it was crap, maybe it would get it right after a dozen tries." Does that not seem like a horribly ineffectual way to output code? Maybe that's how some people write code, but I evaluate myself with tests to see if I accidentally broke something elsewhere, not because I have no idea what I'm even writing to begin with.

You wrote

> Without that they cannot know if what they did works, and they are a bit like humans

They are exactly not like humans this way. LLMs break code by not writing valid code to begin with. Humans break code by forgetting an obscure business rule they heard about 6 months ago. People work on very successful projects without tests all the time. It's not my preference, but tests are non-exhaustive and no replacement for a human that knows what they're doing. And the tests are meaningless without that human writing them.

So your response to that comment, pushing them further down the path of agentic code doing everything for them, smacks of maximalism, yes.


You need to seek medical help. LLM is not your enemy. I am not your enemy. The world is not against you.

I agree with everything you wrote.

You are overlooking a blind spot that is increasingly becoming a weakness for devs: you assume that businesses care that their software actually works. It sounds crazy from the dev side, but they really don't. As long as cash keeps hitting accounts, the MBAs in charge do not care how it gets there, and the program to find that out requires only one simple, unmistakable algorithm: money in minus money out.

Evidence:

Spreadsheets. These DSL-lite tools are almost universally known to be riddled with bugs. Yet the world literally runs on them.

Lowest-bidder outsourcing. It's well known that low-cost outsourcing produces non-functional or failed projects, or projects that limp along for years with nonstop bug-stomping. Yet business is booming.

This only works in a very rich empire that is in the collapse/looting phase. Which we are in and will not change. See: History.


I wish I could just ship 99% AI generated code and never have to check anything.

Where is everyone working where they can just ship broken code all the time?

I use LLMs for hours, every single day, and yes, sometimes they output trash. That's why the bottleneck is checking the solutions and iterating on them.

All the best engineers I know, the ones managing 3-4 client projects at once, are using LLMs nonstop and producing 3-4x their normal output. That doesn't mean LLMs are one-shotting their problems.


Using AI to write good code faster is hard work.

I once toured a dairy farm that had been a pioneer test site for Lasix. Like all good hippies, everyone I knew shunned additives. This farmer claimed that Lasix wasn't a cheat because it only worked on really healthy cows. Best practices, and then add Lasix.

I nearly dropped out of Harvard's mathematics PhD program. Sticking around and finishing a thesis was the hardest thing I've ever done. It didn't take smarts. It took being the kind of person who doesn't die on a mountain.

There's a legendary Philadelphia cook who does pop-up meals, and keeps talking about the restaurant he plans to open. Professional chefs roll their eyes; being a good cook is a small part of the enterprise of engineering a successful restaurant.

(These are three stool legs. Neurodivergents have an advantage using AI. A stool is more stable when its legs are further apart. AI is an association engine. Humans find my sense of analogy tedious, but spreading out analogies defines more accurate planes in AI's association space. One doesn't simply "tell AI what to do".)

Learning how to use AI effectively was the hardest thing I've done recently: many brutal months of experimentation and test projects in a dozen languages. One maintains several levels of planning, as if a corporate CTO. One tears apart all code in many iterations of code review. Just as a genius manager makes best use of flawed human talent, one learns to make best use of flawed AI talent.

My guess is that programmers who write bad code with AI were already writing bad code before AI.

Best practices, and then add AI.


> but LLMs can only learn languages that programmers write a sufficient amount of code in

I wrote my own language, and LLMs have been able to work with it at a good level for over a year. I don't do anything special to enable that - I just front-load some key examples of the syntax before giving the task. I don't need to explain concepts like iteration.
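
To give a flavor of what front-loading looks like (the mini-language and snippet below are invented for illustration, not my actual language):

    # Invented mini-language, purely for illustration.
    SYNTAX_EXAMPLES = """\
    -- mylang: bind with ':=', blocks close with 'end'
    total := 0
    for x in [1, 2, 3] do
        total := total + x
    end
    """

    def build_prompt(task: str) -> str:
        # Prepend key syntax examples so the model picks up the language
        # in-context instead of needing it in its training data.
        return ("You are writing code in mylang. Example syntax:\n"
                + SYNTAX_EXAMPLES
                + "\nTask: " + task)

    print(build_prompt("sum the squares of a list"))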

Also, LLMs can work with languages with unconventional paradigms - kdb comes up fairly often in my world (an array language, and also written right to left).


LLMs still struggle with Lisp parens, though.


I think most people struggle to one-shot Lisp parens. Visual guides or structured editing are sorta necessary. LLMs don't have that kind of UI (yet?)
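
That said, a cheap structural check covers a lot of it; here's a toy sketch (not any editor's actual implementation) of the kind of validation that could sit between the model and the REPL:

    def parens_balanced(src: str) -> bool:
        # Toy structural check for Lisp-ish source: every '(' must close,
        # and nothing may close early. Ignores strings and comments.
        depth = 0
        for ch in src:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False  # closed a paren that was never opened
        return depth == 0

    assert parens_balanced("(defun square (x) (* x x))")
    assert not parens_balanced("(defun square (x) (* x x)")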


I bet LLMs create their own version of the Jevons paradox.

More trial and error because each trial is cheap; in the end, less typing but hardly faster end results.


I don't think this is actually true. LLMs have an impressive ability to transfer knowledge between domains, so it only makes sense that this would apply to programming languages too, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

If this does turn out to be a problem, it is not hard to apply to new languages the same RLHF infrastructure that's used to get LLMs to write syntactically correct code that accomplishes goals in existing ones.
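
As a toy illustration of why that kind of signal is cheap to stand up once a parser exists (Python's own parser standing in here for the new language's):

    import ast

    def syntax_reward(sample: str) -> float:
        # Toy reward for RL on code: 1.0 if the sample parses, else 0.0.
        # A real pipeline would also run tests, linters, and so on.
        try:
            ast.parse(sample)
            return 1.0
        except SyntaxError:
            return 0.0

    print(syntax_reward("def f(x): return x + 1"))  # 1.0
    print(syntax_reward("def f(x) return x + 1"))   # 0.0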


> LLMs have an impressive ability to transfer knowledge between domains, so it only makes sense that this would apply to programming languages too, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer".

Personal anecdote: Claude stopped writing Java-like Elixir only sometime around summer this year (Elixir is 13 years old), and it is still incapable of writing "modern HEEx," which changed some of the templating syntax in Phoenix almost two years ago.


You raise such an interesting point!

But consider: as LLMs get better and approach AGI, you won't need a corpus, only a specification.

In this way, AI may enable more languages, not fewer.



