Hacker News | hashmush's comments

Agree, I did a double take on this too.

Values of the same type can be sorted if an order is defined on the type.

It's also strange to contrast "random values" with "integers". You can generate random integers, and they can be sorted (depending on what "sorting" means here, though).
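A minimal Python sketch of that point (a generic illustration, not code from the thread): random integers sort fine because int defines a total order, while a type without one cannot be sorted.

```python
import random

# Random integers can be sorted: int defines a total order.
values = [random.randint(0, 100) for _ in range(5)]
print(sorted(values))  # ascending order, whatever was generated

# A type with no defined order (e.g. complex) cannot be sorted:
try:
    sorted([1 + 2j, 3 + 4j])
except TypeError:
    print("complex values define no '<', so there is nothing to sort by")
```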


Ehm, like in Vietnam's neighbors Laos (ພາສາລາວ) and Cambodia (ខ្មែរ)? Sure, Vietnamese used to be written (a long time ago) in its own version of the Chinese script, I'll give you that. But most languages in the region do not use a script derived from Chinese.


> But also, does that mean that you have to pay a bank transfer fee every time you buy anything?

No, not at all. The Swish rails are free to users. But I've never had to pay any transfer fees for domestic transfers anyway. They are just much slower than using Swish (instant transfers) and much more clunky (bank account number etc. vs. phone number/QR-code).


I dunno, it seems you have figured it out too, probably before LLMs?

I'd say all speakers of all languages have figured it out and your statement is quite confusing, at least to me.


Yes, of course we've implicitly learned those rules, but we have not been able to articulate them fully à la Chomsky.

Somehow, LLMs have those rules stored within a finite set of weights.

https://slator.com/how-large-language-models-prove-chomsky-w...


We all make grammar mistakes, but I've yet to see the main LLMs make any.


As much as I'm also annoyed by that phrase, is it really any different from:

- I had to Google it...

- According to a StackOverflow answer...

- Person X told me about this nice trick...

- etc.

Stating your sources should surely not be a bad thing, no?


It is not about stating a source; the bad thing is treating ChatGPT as an authoritative source, as if it were a subject matter expert.


But is "I asked ChatGPT" assigning any authority to it? I use precisely that sentence as shorthand for "I didn't know, looked it up in the most convenient way, and it sounded plausible enough to pass on".


In my own experience, the vast majority of people using this phrase ARE using it as a source of authority. People will ask me about things I am an actual expert in, and then when they don’t like my response, hit me with the ol’ “well, I asked chatGPT and it said…”


I think you are misunderstanding them. I also frequently cite ChatGPT, as a way to accurately convey my source, not as a way to claim it as authoritative.


I have interrogated it in those cases. I was not misunderstanding.


I think you are in the minority of people who use that phrase.


It's a social-media-level of fact checking, that is to say, you feel something is right but have no clue if it actually is. If you had a better source for a fact, you'd quote that source rather than the LLM.

Just do the research, and you don't have to qualify it. "GPT said that Don Knuth said..." Just verify that Don said it, and report the real fact! And if something turns out to be too difficult to fact check, that's still valuable information.


In general, those phrases point to the person's understanding being shallow. So far, when someone says "GPT said...", it is a new low in understanding: there is no further article they googled, no second StackOverflow answer with a different take; it is the end of the conversation.


Well, it is not, but the three "sources" you mention are not worth much either, much like ChatGPT.


SO at least has reputation scores and people vote on answers. An answer with 5000 upvotes, written by someone with high karma, is probably legit.


>but the three "sources" you mention are not worth much either, much like ChatGPT.

I don't think I've ever seen anyone lambasted for citing StackOverflow as a source. At worst, they're chastised for not reading the comments, but that's nowhere near as much pushback as LLMs get.


From what I’ve seen, Stack Overflow answers are much more reliable than LLMs.

Also, using Stack Overflow correctly requires more critical thinking. You have to determine whether any given question-and-answer is actually relevant to your problem, rather than just pasting in your code and seeing what the LLM says. Requiring more work is not inherently a good thing, but it does mean that if you’re citing Stack Overflow, you probably have a somewhat better understanding of whatever you’re citing it for than if you cited an LLM.


I have personally always been kind of against using StackOverflow as a sole source for things. It is very often a good pointer, but it's always a good idea to cross-check with primary sources. Otherwise you get all sorts of interesting surprises, like that Razer Synapse + Docker for Windows debacle. Not to mention that you are technically not allowed to just copy-paste stuff from SO.


    > Not to mention that you are technically not allowed to just copy-paste stuff from SO.
Sure you can. Over the last ten years, I have probably copied at least 100 snippets of code from StackOverflow into my corporate code base (and included a link to the original code). The stuff that was published before the AI-slop generation started is unbeatable as a source of code snippets. I am a developer for internal CRUD apps, so we don't care about licenses (except AGPL, due to FUD from the legal & compliance teams). Anything goes, because we do not distribute our software externally.


I mean, if all they did was regurgitate a SO post wholesale without checking its correctness or applicability, and the answer was in fact not correct or applicable, they would probably get equally lambasted.

If anything, SO having verified answers helps its credibility slightly compared to LLMs, which are all known to regularly hallucinate (see: literally this post).


...isn't that exactly why someone states that?

"Hey, I didn't study this, I found it on Google. Take it with a grain of salt, as it came from the internet" has been shortened to "I googled it and...", which is now evolving into "Hey, I asked ChatGPT, and..."


All three of those should be followed by "...and I checked it to see if it was a sufficient solution to X..." or words to that effect.


The complaint isn't about stating the source. The complaint is about asking for advice, then ignoring that advice. If one asks how to do something, gets a reply, then responds to that reply with "but Google says...", that's just as rude.


It's a "source" that cannot be reproduced or actually referenced in any way.

And all the other examples will have a chain of "upstream" references, data and discussion.

I suppose you can use those same phrases to reference things without that: random "summaries" without references or research, "expert opinion" from someone without any experience in the sector, opinion pieces from similarly reputation-less people, etc. But I'd say those are equally worthless as references as "According to GPT...", and should be treated similarly.


It depends on whether they are just repeating things without understanding, or whether they actually understand. My issue with people who say "I asked GPT" is that they often do not have any understanding themselves.

Copying and pasting from ChatGPT has the same consequences as copying and pasting from StackOverflow, which is to say you're now on the hook for supporting code in production that you don't understand.


We cannot blame the tools for how they are used by those wielding them.

I can use ChatGPT to teach me and help me understand a topic, or I can use it to give me an answer that I just copy-paste without double-checking.

It just shows how much you care about the topic at hand, no?


If you used ChatGPT to teach you the topic, you'd write your own words.

Starting the answer with "I asked ChatGPT and it said..." almost 100% means the poster did not double-check.

(This is the same with other systems: If you say, "According to Google...", then you are admitting you don't know much about this topic. This can occasionally be useful, but most of the time it's just annoying...)


How do you know that ChatGPT is teaching you about the topic? It doesn't know what is right or what is wrong.


It can consult any source on any topic; ChatGPT is only as good a teacher as the pupil's ability to ask the right questions, if you ask me.


I like to ask AI systems sports trivia. It's something low-stakes, easy-to-check, and for which there's a ton of good clean data out there.

It sucks at sports trivia. It will confidently return information that is straight up wrong [1]. This should be a walk in the park for an LLM, but it fails spectacularly at it. How is this useful for learning at all?

[1] https://news.ycombinator.com/item?id=43669364


But just because it's wrong about sports trivia doesn't mean it's wrong about anything else! /s [0]

[0] https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect


It may well consult any source about the topic, or it may simply make something up.

If you don't know anything about the subject area, how do you know if you are asking the right questions?


LLM fans never seem very comfortable answering the question "How do you know it's correct?"


I'm a moderate fan of LLMs.

I will ask for all claims to be backed with cited evidence. And then, I check those.

In other cases, like code generation, I ask for a test harness to be written, and I test.

For foreign-language translation (High German to English), I ask for a sentence-by-sentence comparison in the syntax of a diff.


We can absolutely blame the people selling and marketing those tools.


Yeah, marketing has always seemed to me like a misnomer, or doublespeak for legalized lying.

All marketing departments are trying to manipulate you into buying their thing; it should be illegal.

But just testing out this new stuff and seeing what's useful for you (or not) is usually the way.


This subthread was about blaming people, not the tool.


my bad, I had just woken up!


I see nobody here blaming tools and not people!


The first two bullet points give you an array of answers/comments that help you cross-check (also, I'm a freak: even on SO, I generally click on the posted documentation links).


> You have a 50/50 chance of getting it right.

What is the right answer? Doesn't it depend on the DB? Postgres at least shows rows ordered by last updated time (simplified, I know).

I would be fine if it was "... near the top or bottom" though.

(Or maybe this comment is the correct answer?)
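To make the point concrete, here's a small sketch (my own illustration, using SQLite purely because it's self-contained; the thread's point applies to any engine): without ORDER BY, the row order is whatever the engine happens to produce, while with ORDER BY it is guaranteed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(3, "c"), (1, "a"), (2, "b")])

# Without ORDER BY, row order is an implementation detail of the engine:
rows = conn.execute("SELECT id FROM t").fetchall()

# With ORDER BY, the order is guaranteed by the SQL standard:
ordered = conn.execute("SELECT id FROM t ORDER BY id").fetchall()
print(ordered)  # [(1,), (2,), (3,)]
```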


Sorry, I wrote this in a hurry. Of course I would have included an ORDER BY clause.


The one without that clause was still fun to think about, so no harm done!


You know what? I've noticed the same thing with eating snacks etc., if you don't buy any, you won't eat any. It's amazing!


Doesn't work that way for me. Not having snacks in the house reduces the average amount of snacks to a minimum.

However, it sometimes causes cravings, which result in binges.


Uhm? The site doesn't show what week it is...? It's currently week 9 of 2025, but the site shows W7 of Q1. (Maybe that's what you meant? Searching for the current week within the quarter?)


oh, I was being subtle since everyone was asking for a feature request.


Sweden has a sensible solution to this (I'm sure others do too). When you register a name, you specify which part is the tilltalsnamn (lit. "name of address", i.e. the name you go by). In your case, the names would be disambiguated as Joe *Frank* Smith and *Joe* Frank Smith.

Not all systems use that piece of information, but most do.


Yes. I'm Swedish and American TSA/airport security not understanding this is why I mentioned it.


It absolutely helps. It tells everyone that the car is on!

Anecdote: coming from a country where this is mandatory and visiting a country where it's not, I almost got run over because I assumed a car was parked when I glanced left before crossing the road.

Of course, this might not prove that one setup or the other is safer, but it did show me how often I subconsciously use headlights as an indicator of off (=> stationary => safe) vs. on (=> potentially moving => potentially a "threat").

