
The current AIs powered by LLMs are intended to "talk/think like ordinary humans do".

There may exist practical applications for such AIs that have economic value, but doing highly innovative things is not among them.

Doing highly innovative things rather means subverting the current state of the art in a very clever way. If you think of people who have this property, you will likely immediately think of some ingenious smartass who is nearly always right, but insanely annoying to the people around him because of this know-it-all attitude.

Would such an AI be possible to create? I don't know, but let's assume it is.

What should be obvious is that developing such an AI would require entirely different techniques, but let's again assume that this problem has been solved.

What would a business model for such an AI look like? You clearly could not sell API access to it, since such an AI would be far too demanding in the learning requirements for its users (discussion partners, if implemented as a chatbot). Look in the mirror: how many post-graduate-level textbooks on some scientific topic (in particular math or physics) did you read in the last few months?

So, such an AI would only make sense in the basement of some big corporation or three-letter agency, where the AI is commanded by insanely brainy users who have undergone years of training to develop the intellectual capacity and breadth of knowledge needed to actually understand a glimpse of the AI's ideas. This glimpse then "trickles down" into innovations whose true origin no one has the slightest idea about (they simply fell into someone's lap).



The machines invented so far haven't done "highly innovative things" all by themselves, and yet people doing innovative things often find their machines useful. I expect organizations consisting of both humans and machines will still be pretty important for a while.


> The machines invented so far haven't done "highly innovative things" all by themselves, and yet people doing innovative things often find their machines useful.

This statement is close to tautological. The central test is rather whether people who do "highly innovative things" become more productive at them, and not merely because the AI removes some yak shaving from their work.

Otherwise, you could simply argue that people who do "highly innovative things" also find

- a housecleaner

- a personal secretary

- using a word processing program instead of a typewriter

- a washing machine

- an automatic dishwasher

- ...

to be useful.


But they do find them useful. Personal computers were a pretty important invention, too. More recently, the web and "smart" phones (which aren't actually smart) resulted in major changes to organizations. We work differently now.

I'm not actually sure what argument you're making, though? It seems like you're saying that only a certain kind of technological innovation would count for some purpose, but I don't know what purpose you're interested in.


> I'm not actually sure what argument you're making, though?

You can read my argument at https://news.ycombinator.com/item?id=36937368

In my reply to your comment (https://news.ycombinator.com/item?id=36937760), I argued that the fact that people doing innovative things often find their machines useful says nothing about whether AIs are capable of doing innovative things.


Yes, I read that, but it's unclear what your assumptions are or what you value. I guess having AI that is "highly innovative" in the same way that some people can be innovative is something you value, but it's not that clear to me why that's important.


It’s the exact opposite trope in my experience. The ingenious people that are always right are invariably courteous, polite and a pleasure to be around. Those a little bit lower on the intelligence rung are usually the ones that feel the need to be contrarians and generally disruptive to “prove” their intelligence.


> The ingenious people that are always right are invariably courteous, polite and a pleasure to be around.

My experience in both academia and business was/is very different: to those who are insanely competent, you typically had (in some sense) to "prove" by diligence and intelligence that you are "worthy" of their time.

I can understand that attitude really well: otherwise, these people would waste an insane amount of their time and get nowhere.


Arrogant smartasses often put more effort into having their genius recognized than they do in being correct, and that often... works. Loud and aggressive people take as much credit as they can and they make sure everyone knows, so they generally appear more competent than the rest.

Sometimes they really are outstanding on their own (e.g. Linus Torvalds), but just as often, it turns out that yes, they are highly competent, but the true insights actually come from their underlings.


Or perhaps those traits are unrelated?

Smart people can be assholes, they can also be excellent humans.



