slfnflctd's comments

My thoughts on this have always been a blend of your two 'they should not be deported' scenarios, with a slow, measured rollout.

Sudden changes cause too much chaos, and you don't always know what works until you try it. Avoiding entitlement abuse is always going to be part of the conversation, and it seems to me the fix for this (and nearly any other issue) needs to be approached carefully from both the supply and demand sides until what's effective is more clear.


I guess where we differ is that I believe we've tried the other side and found it wanting. You can say that the Biden asylum catch-and-release policies didn't include entitlement reform or worker protections and therefore don't count, but what they show me is that too many moving parts mean only the worst aspects of the worst solution get implemented. The simplest solution is securing the border and deporting illegal immigrants.

> There is nothing complicated about this divide [...] You can't "not understand"

I beg to differ. There are a whole lot of folks with astonishingly incomplete understanding about all the facts here who are going to continue to make things very, very complicated. Disagreement is meaningless when the relevant parties are not working from the same assumption of basic knowledge.


Bolstering your point, check out the comments in this thread: https://www.reddit.com/r/rust/comments/1qy9dcs/who_has_compl...

There’s a lot of unwillingness to even try the tools.


Those people are absolutely going to get left in the dust. In the hands of a skilled dev, these things are massive force multipliers.

That's one of the sentiments I don't quite grasp, though. Why can't they just learn the tools once they're stable? So far there have been so many changes in workflows that you basically have to relearn the tools every three months. Things have stabilized a bit over the last year, but one could still spend an enormous amount of time twiddling with various models and tools, knowledge that someone else could probably pick up faster at a later point.

"Being left in the dust" would also mean it's impossible for new people / graduates to ever catch up. I don't think it is. Even though I learned react a few years after it was in vogue (my company bet on the wrong horse), I quickly got up to speed and am just as productive now as someone that started a bit earlier.


Not the person you asked, but my interpretation of “left in the dust” here (not a phrasing I particularly agree with) would be the same way iOS development took off in the 2010s.

There was a land rush to create apps. Basic stuff like flashlights, to-do lists, etc., was created and found a huge audience. Development studios were established, and people became very successful out of it.

I think the same thing will happen here. There is a first mover advantage. The future is not yet evenly distributed.

You can still start as an iOS developer today, but the opportunity is different.


I’m not sure your analogy is applicable here.

The introduction of the App Store did not increase developer productivity per se. If anything, it decreased developer productivity: unless you were already a Mac developer, you had to learn a programming language you'd never used, Objective-C (now it's largely Swift, but that's still mainly used on Apple platforms), plus a brand-new Apple-specific API, so a lot of your previous programming expertise became obsolete on the new platform. What the App Store did that was valuable to developers was open up a new market and bring in a bunch of new potential customers, iPhone users, who were relatively wealthy and willing to spend money on software.

What new market is opened up by LLMs? They can produce as much source code as you like, but how exactly do you monetize that massive amount of source code? If anything, the value of source code and software products will drop as more of it can be produced quickly.

The only new market I see is actually the developer tool market for LLM fans, essentially a circular market of LLM developers marketing to other LLM developers.

As far as the developer job market is concerned, it's painfully clear that companies are in a mass layoff mood. Whether that's due to LLMs, or whether LLMs are just the cover story, the result is the same. Developer compensation is not on the rise, unless you happen to be recruited by one of the LLM vendors themselves.

My impression is that from the developer perspective, LLMs are a scheme to transfer massive amounts of wealth from developers to the LLM vendors. And you can bet the prices for access to LLMs will go up, up, up over time as developers become hooked and demand increases. To me, the whole "OpenClaw" hype looks like a crowd of gamblers at a casino, putting coins in slot machines. One thing is for certain: the house always wins.


My take is more optimistic.

I think it will make prototyping and MVP more accessible to a wider range of people than before. This goes all the way from people who don't know how to code up to people who know very well how to code, but don't have the free time/energy to pursue every idea.

Project activation energy decreases. I think this is a net positive, as it allows more and different things to be started. I'm sure some think it's a net negative for the same reasons. If you're a developer selling the same knowledge and capacity you sold ten years ago, things will change. But that was always the case.

My comparison to iOS was about the market opportunity, and the opportunity for entrepreneurship. It's not magic, not yet anyway. This is the time to go start a company, or build every weird idea that you were never going to get around to.

There are so many opportunities to create software and companies, we're not running out of those just because it's faster to generate some of the code.


What you just said seems reasonable. However, what the earlier commenter said, which led to this subthread, seems unreasonable: those people unwilling to try the tools "are absolutely going to get left in the dust."

Returning to the iOS analogy, though, there was only a short period of time in history when a random developer with a flashlight or fart app could become successful in the App Store. Nowadays, such a new app would flop, if Apple even allowed it, as you admitted: "You can still start as an iOS developer today, but the opportunity is different." The software market in general is not new. There are already a huge number of competitors. Thus, when you say, "This is the time to go start a company, or build every weird idea that you were never going to get around to," it's unclear why this would be the case. Perhaps the barrier to entry for competitors has been lowered, yet the competition is as fierce as ever (unlike in the early App Store).

In any case, there's a huge difference between "the barrier to entry has been lowered" and "those who don't use LLMs will be left in the dust". I think the latter is ridiculous.

Where are the original flashlight and fart app developers now? Hopefully they made enough money to last a lifetime, otherwise they're back in the same boat as everyone else.


> In any case, there's a huge difference between "the barrier to entry has been lowered" and "those who don't use LLMs will be left in the dust". I think the latter is ridiculous.

Yeah, it’s a bit incendiary, I just wanted to turn it into a more useful conversation.

I also think it overstates the case, but I do think it’s an opportunity.

It’s not just that the barrier to entry has been lowered (which it has) but that someone with a lot of existing skill can leverage that. Not everyone can bring that to the table, and not everyone who can is doing so. That’s the current advantage (in my opinion, of course).

All that said, I thought the Vision Pro was going to usher in a new era of computing, so I’m not much of a prognosticator.


> it’s a bit incendiary

> I also think it overstates the case

I think it's a mistake to defend and/or "reinterpret" the hype, which is not helping to promote the technology to people who aren't bandwagoners. If anything, it drives them away. It's a red flag.

I wish you would just say to the previous commenter, hey, you appear to be exaggerating, and that's not a good idea.


I didn't read the comment as such a direct analogy. It was more recalling a lesson of history that maybe doesn't repeat but probably will rhyme.

The App Store reshuffled the deck. Some people recognized that and took advantage of the decalcification. Some of them did well.

You've recognized some implications of the reshuffle that's currently underway. Maybe you're right that there's a bias toward the LLM vendors. But among all of it, is there a niche you can exploit?


> In the hands of a skilled dev, these things are massive force multipliers.

What do you get from it? Say you produce more, do you get a higher salary?

What I have seen so far is the opposite: if you don't produce more, you risk getting fired.

I am not denying that LLMs make me more productive. I'm just saying that they don't make me more wealthy. On the other hand, they use a ton of energy at a time when we as a society should probably know better. The way I see it, we are killing the Earth because we produce too much. LLMs help us produce more, so why should we be happy?


(Imagine me posting the graph of worker productivity in the US climbing quickly over time while pay remains flat or falls)

Using these tools basically comes down to writing what you want in natural language. I don't think it will be a problem to catch up if they need to.

Context management, plan mode versus agent mode, skills vs system prompt, all make a huge difference and all take some time to build intuition around.

Not all that hard to learn, but waiting for things to settle down assumes things are going to settle down. Are they? When?


That these facets of use exist at all is indicative of immature product design.

These are leaked implementation details that the labs are forcing us to know because these are weak, early products and they’re still exploring the design space. The median user doesn’t want to and shouldn’t have to care about details like this.

Future products in this space won’t have them and future users won’t be left in the dust by not learning them today.

Python programmers aren’t left behind by not knowing malloc and free.


Someone will package up all that intuition and those skills, and I imagine people won't have to do any of these things in the future.

You wait for everyone to go broke chasing whatever, and then take their work for your own. It's not that hard to copy and paste.

It doesn’t matter how fast you run if it’s not the correct direction.

Good LLM wielders run in widening circles and get to the goal faster than good old-school programmers running in a straight line.

I try to avoid LLMs as much as I can in my role as SWE. I'm not ideologically opposed to switching, I just don't have any pressing need.

There are people I work with who are deep in the AI ecosystem, and it's obvious what tools they're using. It would not be uncharitable in any way to characterize their work as pure slop: buggy, inadequately tested, and often simply not working.

The moment I start to feel behind I'll gladly start adopting agentic AI tools, but as things stand now, I'm not seeing any pressing need.

Comments like these make me feel like I'm being gaslit.


We are all constantly being gaslit. People have insane amounts of money and prestige riding on this thing paying off in a way so comically huge that it absolutely cannot deliver on it in the foreseeable future. Creating a constant, pressing sentiment that actually You Are Being Left Behind Get On Now Now Now is the only way they can keep inflating the balloon.

If this stuff was self-evidently as useful as it's being made out to be, there would be no point in constantly trying to pressure, coax and cajole people into it. You don't need to spook people into using things that are useful, they'll do it when it makes sense.

The actual use-case of LLMs is dwarfed by the massive investment bubble it has become, and it's all riding on future gains that are so hugely inflated they will leave a crater that makes the dotcom bubble look like a pothole.


Then where is all this new and amazing software? If LLMs can 10x or 100x someone's output, we should have seen an explosion of great software by now.

One dude with an LLM should be able to write a browser fully capable of browsing the modern web or an OS from scratch in a year, right?


That's a silly bar to ask for.

Chrome took at least a thousand man-years, i.e. 100 people working for 10 years.

I'm lowballing here: it's likely way, way more.

If AI gives a 10x speedup, reproducing Chrome as it is today would still require 1 person working for 100 years, 10 people working for 10 years, or 100 people working for 1 year.

Clearly, unrealistic bar to meet.

If you want a concrete example: https://github.com/antirez/flux2.c

The creator of Redis started this project 3 weeks ago and used Claude Code to vibe-code it.

It works, it's fast, and the code quality is as high as I've ever seen in a C code base. Easily top 1% in quality.

Look at this one-shotted working implementation of jpeg decoder: https://github.com/antirez/flux2.c/commit/a14b0ff5c3b74c7660...

Now, it takes a skilled person to guide Claude Code to generate this but I have zero doubts that this was done at least 5x-10x faster than Antirez writing the same code by hand.


Ah, right, so it's a "skill issue" when GPT5.3 has no idea what is going on in a private use case.

Literally yes

I still haven’t seen those mythical LLM wielders in the wild. Meanwhile, I’m using tools like curl, jq, cmus, calibre, openbsd,… which have most certainly been created by those old-school programmers.

I only wish I could be there when that cocky "skilled dev" gets laid off.

You know, I'm past middle age, have seen this brand everywhere for many years, and I only just now read about the individual Eddie Bauer for the first time. Interesting dude.

https://en.wikipedia.org/wiki/Eddie_Bauer_(outdoorsman)


On some podcast I was listening to, one of the hosts described her father as an avid indoorsman. I thought that was great and am happy to have a new way to describe myself. :)

I used to have a 'Rugged Indoorsman' shirt. Loved that one.


To be generous and steelman the author, perhaps what they're saying is that at each layer of abstraction, there may be some new low-hanging fruit.

Whether this is doable through orchestration or through carefully guided HITL by various specialists in their fields - or maybe not at all! - I suspect will depend on which domain you're operating in.


Very well put, one of the more compelling insights I've seen about this whole situation. I feel like it gets at something I've been trying to say but couldn't find the right words for yet.

Quality. Matters.

It always has, and it always will. If you're telling yourself otherwise, you are part of a doomed way of thinking and will eventually be outcompeted by those who understand the implications of thinking further ahead. [ETA: Unfortunately, 'eventually' in this context could be an impossibly long time, or never, because people are irrational animals who too often prioritize our current feelings over everything else.]


If quality matters then why is everything crap? Price has a quality of its own.

As I said in my edit, "because people are irrational animals who too often prioritize our current feelings over everything else."

Marginal improvements in quality which result in a marginal increase of cost/price often provide much better overall returns than just using a series of cheap substitutes that fail quickly. In some areas, this doesn't work, but I think shortsightedness is blocking truly better solutions in a great many cases. Particularly when true costs are being externalized.


Interesting business model.

I have two questions:

1) Who do you see as primary competitors in this niche?

2) What is your model for managing liability in a situation where a naive, less technical (or plain lazy) customer lets one of these run a business without sufficiently verifying output? Your definition of due diligence may differ from theirs.


Great questions!

1) Competitors: There are players approaching this from different angles - some focus on code completion (Copilot), some on chat (ChatGPT), some on specific automation tools. What I wanted was an end-to-end solution: a cloud-based AI employee with its own persistent environment. That's the gap I'm filling.

2) Liability: Valid concern. The AI can make mistakes, and that's a real risk. We're implementing safeguards - you can watch tasks in real-time, pause/stop at any point, and we encourage users to review outputs before deploying to production. It's definitely not "set and forget" for critical work. Still iterating on making this safer.


I strongly believe we will need both agentic and deterministic approaches: agentic to catch edge cases and the like, deterministic as those problems (along with the simpler ones early on) are continually turned into hard-coded solutions to the maximum extent possible.

Ideally you could eventually remove the agentic supervisor, but for some cases you would want to keep it around, or at least a smaller model that suffices.
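
To make that split concrete, here is a minimal sketch of the idea (hypothetical names throughout, not any particular framework): deterministic handlers run first, the agentic fallback only sees the cases they can't cover, and its outputs get logged so recurring cases can later be promoted into hard-coded handlers.

    # Hypothetical sketch: deterministic-first dispatch with an agentic fallback.

    def handle_deterministically(task):
        """Hard-coded logic for cases we've already solved."""
        if task.get("type") == "csv_import":
            rows = task["payload"].splitlines()
            return {"status": "ok", "rows": len(rows)}
        return None  # unknown case: fall through to the agent

    def ask_agent(task):
        """Stand-in for an LLM/agent call covering the long tail."""
        # In practice this would call whatever agent framework you use.
        # Log these results so recurring cases can be promoted into
        # handle_deterministically over time.
        return {"status": "needs_review", "task": task}

    def process(task):
        result = handle_deterministically(task)
        if result is not None:
            return result       # cheap, predictable, testable path
        return ask_agent(task)  # edge cases only

The point of this shape is that the agentic path shrinks over time as its successful resolutions are codified, which is exactly the "eventually remove the supervisor" end state.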


I was using all lowercase as my default for internet comments (and personal journal entries) for at least a solid decade, starting from some point in the 90s. I saw it as a way to take a step back from being pretentious.

I eventually ran into so much resistance and hate about it that I decided conforming to writing in a way that people aren't actively hostile to was a better approach to communicating my thoughts than getting hung up on an aesthetic choice.

I started out as a counterculture type, and that will always be in my blood, but I've relearned this lesson over and over again in many situations: it's usually better to focus on clear communication and getting things done unless your non-standard format is a critical part of whatever message you're trying to send at the moment.


I'm a big fan of counterculture and so on, but generally the point of text is to be read, and using all lowercase just makes it harder for your readers, which seems like the worst form of arrogance.

The people who work in the datacenters don't want a long commute.

Also, in a remote area, the third parties the owners require for continual maintenance will be fewer, take longer to respond, likely cost more, and may be less qualified than those you can find in a more populated area.


Very few people work in datacenters

Pay them more then

So datacenters should be allowed to come into communities, consume their resources, and barely hire the local populace?

What? A 5 minute drive is miles, and that's plenty far enough. They are currently being built within 100 meters of homes. It's absolutely insane.

EDIT: https://youtu.be/t-8TDOFqkQA?si=Qa9ot70MylFp6qkE

Just try to watch that and not get hoppin' mad.


Except that they can talk with you, at length, and seem empathetic, even if they're totally unconscious.

Which, you know, humans can also do, including when they're not actually empathizing with you. It's often called lying. In some fields it's called a bedside manner.

An imaginary friend is just your own brain. LLMs are something much more.

