
AI, at least in its current form, is not so much replacing human expertise as it is augmenting and redistributing it.


Yep. And that's the real value add that is happening right now.

HN concentrates on the hype but ignores the massive growth in startups that are applying commoditized foundational models to specific domains and applications.

Early-stage investments are made with a 5-7 year timeline in mind (either for later-stage funding if successful, or acquisition if less successful).

People also seem to ignore the fact that foundational models are on track to be commoditized over the next 5-7 years, which decreases the overall power of foundational ML companies: applications become the key differentiator, and domain experience is hard to build (look at how it took Google 15 years to finally get on track in cloud computing).


I notice that a lot of people seem to focus only on the things that AI can't do or the cases where it breaks, and seem unwilling or unable to focus on the things it can do.

The reality is that both things are important. It is necessary to know the limitations of AI (and keep up with them as they change), to avoid getting yourself in trouble, but if you ignore the things that AI can do (which are many, and constantly increasing), you are leaving a ton of value on the table.


> I notice that a lot of people seem to focus only on the things that AI can't do or the cases where it breaks, and seem unwilling or unable to focus on the things it can do.

I might be one of these people, but in my opinion, one should not concentrate on the things it can do in isolation, but on how many of the things where an AI might be of help to you fall into each bucket:

- it does work

- it only "can" do it in a very broken way

- it can't do it at all

At least for the things that I am interested in an AI doing for me, the record is rather bad.


Just because AI doesn’t work for you doesn’t mean it doesn’t work for other people. Ozempic may have no effect on you, or even be harmful, but it’s a godsend for many others. Acknowledge that, instead of blindly insisting on your own use cases. It’s fine to resist the hype, but it’s foolish to be willfully ignorant.


How do you define "can do"? Would answering 9 out of 10 questions correctly for a given type of question (like giving directions from a map) mean it "can do" it, or that it "can't"?

Considering it works in so many cases, I think it is natural to point out the examples where it does not work, to better understand the limits.

Not to mention that, practically, I have not seen anything proving that it will always "be able" to do something. Yes, it works most of the time for many things, but it's important to remember it can (randomly?) fail, and we don't seem to be able to fix that (humans fail too, but having computers fail randomly is something new). Other software, say a numerical solver or a compiler, is more stable and predictable (and when it doesn't work, there is a clear bug fix that can be implemented).
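
To put numbers on the "fail randomly" point: even a high per-question success rate compounds badly across a multi-step task. A toy sketch in Python (the fixed 90% success rate and the independence of steps are made-up assumptions, not properties any real model guarantees):

    # Toy model: a tool answers each question correctly with p = 0.9,
    # independently. How often does an n-step chain finish with no error?
    p = 0.9
    for n in (1, 5, 10, 20, 50):
        print(f"{n:>2} steps: P(all correct) = {p ** n:.3f}")
    #  1 steps: P(all correct) = 0.900
    # 10 steps: P(all correct) = 0.349
    # 50 steps: P(all correct) = 0.005

A compiler, by contrast, is effectively p = 1.0 for in-spec input, which is why its failures feel categorically different: they are reproducible bugs, not dice rolls.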


Yep! Nuance is critical, and sadly it feels like nuance is dying on HN.


This very discussion feels nuanced, so I don't share your sentiment.


It would be nice to have more examples. Without specifics, “massive growth in startups” isn’t easily distinguishable from hype.

A trend towards domain-specific tools makes sense, though.


DevTools/Configuration Management and Automated SOC (security operations center) are two fairly significant examples.


Am I the only one unimpressed by the dev tool situation? Debugging and verifying the generated code is more work than simply writing it.

I'm much more impressed with the advances in computer vision and image generation.

Either way, what are the startups that I should be looking at?


And even when the output is perfect, it may be that the tool is helping you write the same thing a hundred times instead of abstracting it into a better library or helper function.

Search/Replace as a service.
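
A contrived sketch of the difference (the dict names and the normalization rule are invented for illustration):

    # The pattern a code generator will happily repeat a hundred times:
    raw = {"name": " Ada ", "email": "ADA@example.com", "city": " London "}
    user = {}
    user["name"] = raw.get("name", "").strip().lower()
    user["email"] = raw.get("email", "").strip().lower()
    user["city"] = raw.get("city", "").strip().lower()

    # The helper a human would abstract it into instead:
    def copy_normalized(dst, src, keys):
        # Copy each key from src to dst, trimmed and lowercased.
        for key in keys:
            dst[key] = src.get(key, "").strip().lower()

    copy_normalized(user, raw, ["name", "email", "city"])

The first half is "Search/Replace as a service"; the second half is the library the codebase actually needed.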


Those are more like broad categories than examples of startups, though.


Same with consultancy. There is a huge amount of automation that can be done with current-gen LLMs, as long as you keep their shortcomings in mind. The "stochastic parrot" crowd seems like an overcorrection to the hype bros.
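
In practice, "keep their shortcomings in mind" means wrapping the model in deterministic validation instead of trusting raw output. A minimal sketch, where call_llm() is a hypothetical stand-in for whatever client you actually use:

    import json

    def call_llm(prompt: str) -> str:
        ...  # hypothetical stand-in for a real LLM client call

    def extract_invoice(text: str, max_retries: int = 3) -> dict:
        # Ask for structured output, validate it, retry on garbage,
        # and fail loudly instead of silently passing bad data along.
        prompt = f"Return JSON with keys 'vendor' and 'total' for:\n{text}"
        for _ in range(max_retries):
            try:
                data = json.loads(call_llm(prompt))
            except (json.JSONDecodeError, TypeError):
                continue  # malformed output: retry
            if (isinstance(data, dict)
                    and isinstance(data.get("vendor"), str)
                    and isinstance(data.get("total"), (int, float))):
                return data
        raise ValueError("LLM output failed validation; route to a human")

The LLM does the fuzzy extraction; plain old code decides whether to trust it.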


It's because the kind of person who understands nuance isn't the kind of person to post in HN flame wars.

The industry is still in its infancy right now, and stuff can change in 3-5 years.

Heck, 5 years ago models at the scale of GPT-4o were considered unrealistic, and funding in the AI/ML space was drying up as money flowed to crypto and cybersecurity instead. Yet look at the industry today.

We're still very early and there are a lot of opportunities that are going to be discovered or are in the process of being discovered.


GPT-4o is unrealistic at scale. OpenAI isn't making a profit running it.


...and then being blown up when the AI company integrates their idea.


Not exactly.

At least in the cybersecurity space, most startups have 3-5 year plans to build their own foundational models and/or work with foundational model companies to not directly compete with each other.

Furthermore, GTM (go-to-market) is relationship- and solution-driven, and an "everything" company has a difficult time sympathizing with or understanding GTM on a sector-by-sector basis.

Instead, foundational ML companies like OpenAI have worked to give seed/pre-seed funding to startups applying foundational models per domain.


OpenAI/Microsoft are building a $100B+ datacenter for foundation models and pitching ideas for $1T+. Compute is the primary bottleneck; startup competitors will not be physically possible.


Yes, it should really be called collective intelligence, not artificial intelligence.



