
People are already generating entire apps with AI in minutes. Software engineering as a profession is finished. We are all now slaves to Roko’s Basilisk.


Most apps worth money have to integrate with existing systems in highly specific ways, determined by endless meetings with real people to extract the requirements. They usually haven't been done before, they're built on proprietary code ChatGPT knows nothing about, and there isn't a COBOL's chance in hell anyone is going to rewrite it all to be compatible with junk hallucinated by a bot. There's no business case for that. It will probably always be cheaper for humans to maintain what has already accumulated over decades, despite the "high" salaries.

The knowledge gap between devs and everyone else, including bots, is a chasm. It seems many don't realize how hard some devs work for so little, relatively speaking.


I sometimes think that some developers posting here want to be punished for earning a living wage. Strangely, lawyers never think that, and I can assure you that law is not the more difficult profession [0].

[0] Surely someone will quickly post that it is.


Ah. I think it still requires you to understand what you're doing. I tried both Copilot and ChatGPT 3.5 and 4. They don't get a lot of things right in my areas of interest. I think they're great as crazy-good autocompletion or refactoring tools, and they help with boilerplate. But you still need to know what you want every time; the tool just saves you from typing the code.

Sometimes it's just amusingly wrong, too. Sometimes annoyingly so, especially when it repeats the same mistakes.

I've always said our jobs aren't hard because we have to write code for a computer, but because we have to capture intent for other humans in code. Sometimes we don't even know what the intent is, because half the time we don't know exactly what we're building.

So, this all slightly shifts the level of abstraction for developers to think more about intent and less about code. You still have to make sure the code is correct. So, you need to understand the code and the intent. Not much has changed. It's just gotten less mechanical.

Anyway, I don't think people need to be scared of AI just yet.

Obviously there are programmers who enjoy the nitty-gritty keyboard slapping, and they can still do that for problems they find interesting. But for the things we want to get out of the way, why not unburden ourselves?

Typing on phone so pardon the typos/autocorrect


I think LLMs will make a software engineer's job harder and more in demand. By removing boilerplate, engineers will be expected to produce more output per person, and the job moves up the stack from junior code-monkey work to architect / senior-debugging-guru work. Not everyone can do such a role. It will be a very tough time to be a junior.


I'm guessing this tweet is what the OP was referring to:

https://twitter.com/mortenjust/status/1636001311417319426

This is the first bit of AI news that actually made me want to investigate it for myself. Even though it's pretty impressive to get a working app out of a few prompts, it seems that, for now, it isn't capable of producing much more than snippet-level code.

I like the author's take further in the thread:

> Company A fires half their staff yet keeps delivering 100% of their old capacity = they save money

> Company B keeps all their staff, but can now deliver 200% of their old capacity = they win the competition

> So seems like the strategic move here is to raise the bar rather than replace engineers

https://twitter.com/mortenjust/status/1636037312571211777

My hope is that this stuff lets me augment my workflow and automate repetitive, tedious tasks away, but doesn't fully replace me (or that I get UBI and don't have to work anymore).


But Company A has reduced its product price by 60% because the biggest cost center is now halved. Customers like some of Company B's new features, but not enough to justify the higher price.
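
To make that arithmetic concrete, here's a toy model; every figure below is invented purely for illustration:

    # Toy model of the Company A vs. Company B argument (Python).
    # All numbers are made-up assumptions, not data from the thread.
    engineering = 700_000      # assume engineering is the biggest cost center
    everything_else = 300_000  # hosting, sales, admin, ...

    total_before = engineering + everything_else  # 1,000,000
    cost_a = engineering / 2 + everything_else    # A fires half its engineers
    cost_b = total_before                         # B keeps everyone, ships 2x

    print(1 - cost_a / total_before)  # ~0.35 -> A's total costs fall only ~35%

So halving the biggest cost center doesn't halve total costs; a 60% price cut only pencils out if engineering dominates total cost even more than assumed here, or if A also accepts thinner margins.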


Likely sarcasm; still, I'll bite.

I saw the exact same commentary around the no-code hype of the past few years, and it made me increasingly cynical. No-code tools made it super easy to create a boilerplate website that looked pretty but wasn't fit for purpose. Meanwhile, it was advertised as "Build an AirBnB clone without code!"

OK fine, hyperbole as a marketing tactic. But many people handed over wads of cash for courses that promised results that were in reality no more than cardboard cutouts of actual websites.

The Internet appears to have an endless capacity to create markets for don't-do-the-work-and-still-enjoy-the-reward snake oil.

And now we have ChatGPT: an instant boilerplate-creating machine. Watch as a new (or in many cases the same) class of charlatans greases up this technological wave too.


Can you share examples of this? I've been doing some searching on my own because I find this stuff interesting, and I haven't seen evidence that current versions of AI tools are capable of helping generate entire apps in minutes, at least non-trivial ones.

I keep seeing some real dramatic premonitions about AI, and I'd love to see something to back them up.

Even the original post, I'm having trouble understanding. I know Stable Diffusion can create some amazing images, but I don't see how it's being used to create sprite assets for games. You typically need the same character from different angles. Can Midjourney v5 do that? The OP talks about being a 3D artist. Can Midjourney generate meshes and 3D objects or just raster images? I didn't think we were at the point where these tools could fully replace an artist's workflow, but the OP seems to claim so.


All knowledge workers are completely screwed. The only way out is UBI and a maximum three-day work week.


I don't see it. I consult, and AI has just let me take on more clients. Using ChatGPT lets me finish their projects faster.

What's ending is the era of CRUD work.


> I don't see it. I consult, and AI has just let me take on more clients. Using ChatGPT lets me finish their projects faster.

Get ready to take on more clients and at a smaller and smaller gain. An avalanche of competition is coming your way too if you have no moat.


For sure. I recommend that anyone in business for themselves productize and build moats.

I still work about the same hours, though. AI has just raised the level of abstraction: less time spent looking up specific syntax, more spent reviewing, iterating, testing, and architecting.


Today, yeah, but tomorrow the clients can use a GPT-7 consultant that has better context, with access to all of the client's emails and data. It's fast, cheap, and a polymath.


I don't understand why people see this future as guaranteed. Just because GPT-4 was a great improvement on 3.5 does not mean this approach will keep accelerating at this rate. I'm not saying it definitely won't happen, but I find the confidence that it will pretty baffling.


It seems unlikely to me that this technology has plateaued. The gains have been exponential. It strikes me as prudent to at least consider continued improvement a possibility and try to work out what that means for society. The best outcome on the table so far is that people lose jobs and business owners make more money. Some amount of that is unavoidable, and maybe even beneficial, but too much of it is unsustainable. Better to have a plan and not need it, and all that.


I don't take issue with considering continued improvement, and I largely agree with this; my issue is with all the proponents who speak about it as if it were guaranteed.


We have numerous examples that have worked this way throughout the history of computing. It would be shocking if it plateaued this early.


We have numerous examples that haven't, too; AI approaches specifically have plateaued repeatedly throughout their history.


Why do you think such a world would have companies, governments or even money? We are way outside the horizon of predictability if LLMs reach AGI levels.


This has been said for years, first about robotics, now about AI. And today, most developed nations have trouble finding people to work because there's so much work that needs to be done.

The opposite is happening, yet it's still repeated.


Sounds like absolute paradise. I don't give a shit about "work", or status, or anything like that.

I hope Silicon Valley builds itself a starship, ships itself off to Alpha Centauri, and leaves me to live a pretty simple life with way less grind.


A Marxist can dream


Okay. Link me to a single production app with paying users that was created with AI in minutes.

Since it only takes minutes to make one, you must have a lot of examples.


Why bother issuing such a challenge? You'll win today, but lose decisively in only a few more years.


So you are saying they are right but they are just a few years off? That is the same as being wrong.


We'll have self-driving cars by next year, right?


Those are a tougher nut to crack because the entire road transportation infrastructure is designed for human drivers. Lots of so-called "wicked problems" that don't exist in the art world.


Trivial app demos, sure. Non-trivial, no.

Influencers have been cranking out the former to sell their course/personal brand/consulting. They stand to gain from making AI look as magical and easy as possible.

A decision-maker watches an influencer's video and goes, "Wow! AI can do it all! I need to hire this influencer as a consultant/like their content/buy their course!" The reality isn't quite as flashy.

Incorporating LLMs into my workflow has been a great productivity boost, but by no means do they spell the end of software engineering.


True. The influencers are using AI to sell themselves. The amount of human attention that goes into a semi-repeatable task will surely go down with AI, but not so much that apps get churned out in seconds, as they claim, versus the hours or minutes a real developer takes.

Also, I have not seen any insight-level help from GPT. It just seems to have good recall for everything it has seen. And prompt engineering takes the mental work away from the problem itself and shifts it toward what the model knows.


ChatGPT is impressive if you start from a clean slate, but the moment you do anything more complicated, it slows to no faster than doing things yourself. After all, you have to know what to feed into it, and that may take longer than actually processing the information yourself.


This won’t be a problem once you can feed it an entire solution. I suspect we’re not too far from being able to do that.


If this is indeed so simple, then why was ChatGPT (meaning the web app, not the LLM) so buggy? Did OpenAI use their LLM to build the app or not?


We shall see.



