Hacker News

>This is a GREAT example of the (not so) subtle mistakes AI will make in image generation, or code creation, or your future knee surgery.

The mistake is in the prompting (not enough information). The AI did the best it could.

"What's the biggest known planet" "Jupiter" "NO I MEANT IN THE UNIVERSE!"



It doesn't affect your point, but since the IAU's definition is insane, exoplanets technically aren't planets, and Jupiter is the largest planet in the universe.


I suppose it was too much to hope that chatbots could be trained to avoid pointless pedantry.


They've been trained on every web forum on the Internet. How could it be possible for them to avoid that?


asking "x-most known y" and not expecting a global answer is odd


Every answer concerning planets is global.



No, this is squarely on the AI. A human would know what you mean without specific instructions.


Seems like you're making a judgment based on your own experience, but as another commenter pointed out, that assumption was wrong. Plenty of us would ask for confirmation, because people are too flawed to trust. Humans double- and triple-check, especially under high-stakes conditions (surgery).

Heck, humans are so flawed that they'll put things in the wrong eye socket even while knowing full well where they should go - something a computer literally couldn't do.


Why on earth would the fallback, when a prompt is underspecified, be to do something no human expects?


“People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors.


Extremely talented, studied, hard-working humans perform complex tasks all the time, and never with a 100% success rate over all time.

In other examples, almost every single person has had the experience of saying, "turn right", "oh I meant left sorry, I knew it was right too, I don't know why I said left". Even the most sophisticated humans have made this error. A computer would never.

Humans are deeply flawed: even after pre-selection, they require expensive training to perform complex tasks, and never at a perfect success rate.


Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom.

So it will become more and more obvious that AI and HI are different entities altogether, sharing only a subset of communication protocols, as some comments here already implicitly suggest.


If the instructions were actually specific, e.g. "Put a blackberry in its right eye socket," then yes, most humans would know what that meant. But the instructions were not that specific: "in the right eye socket."


Or be even more explicit: "Put a strawberry in the person's right eye socket."


If you asked me right now what the biggest known planet was, I'd think Jupiter. I'd assume you were talking about our solar system ("known" here implying there might be more planets out in the distant reaches).


I would be amused to see you test this theory with 100 men on the street


I would not; I would clarify. And I think I'm a human.


Yeah, just like humans always know what you mean.


But different humans would interpret what you meant differently. Some would have understood it the same way the AI did.



