
In my experience, web search often tanks the quality of the output.

I don't know if it's because of context clogging or because the model can't tell a high-quality source from garbage.

I've defaulted to web search off and turn it on via the tools menu as needed.



Web search often tanks the quality of MY output these days too. Context clogging seems a reasonable description of what I experience when I try to use the normal web.


THIS. I do my best work after a long vigorous walk and contemplation, while listening to Bach and sipping espresso. (Not exaggerating much.) If I go on HN or Slack or ClickUp or work email, context is slammed and I cannot do /clear fast enough. Even looking up something quick on the web or in an LLM dirties the context.


I feel the same. LLMs using web search ironically seem to have less thoughtful output. Part of the reason for using LLMs is to explore somewhat novel ideas. With web search it aligns too strongly to the results rather than the overall request, making it a slow search engine.


That makes sense. They're doing their interpretation on the fly, for one thing. For another, even though they can now pull in data that's 10 months more recent than their cutoff, they don't have any of the intervening information. That's gotta make it tough.


Web search is super important for frameworks that are not (sufficiently?) in the training data. o3 often pulls info from Swift forums to find and fix obscure Swift concurrency issues for me.


In my experience none of the frontier models I tried (o3, Opus 4, Gemini 2.5 Pro) was able to solve Swift concurrency issues, with or without web search. At least not sufficiently for Swift 6 language mode. They don’t seem to have a mental model of the whole concept and how things (actors, isolation, Tasks) need to play together.


> They don’t seem to have a mental model of the whole concept and how things (actors, isolation, Tasks) need to play together.

to be fair, does anyone ¯\_(ツ)_/¯


This. It’s a bunch of rules you need to juggle in your head.
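For what it's worth, here's a tiny sketch (my own illustration, not from anyone upthread) of the kind of rules being juggled under Swift 6 strict concurrency: mutable state lives on an actor, cross-actor calls must be awaited, and anything crossing a concurrency boundary must be Sendable.

    // Illustrative only. Mutable state is confined to the actor.
    actor ScoreBoard {
        private var scores: [String: Int] = [:]

        func add(_ points: Int, to player: String) {
            scores[player, default: 0] += points
        }

        func total(for player: String) -> Int {
            scores[player] ?? 0
        }
    }

    @main
    struct Demo {
        static func main() async {
            let board = ScoreBoard()

            // Tasks may run concurrently, but every mutation of `scores`
            // is serialized on the actor, so Swift 6 accepts this without
            // data-race diagnostics.
            await withTaskGroup(of: Void.self) { group in
                for _ in 0..<100 {
                    group.addTask {
                        await board.add(1, to: "alice") // cross-actor call: await required
                    }
                }
            }

            print(await board.total(for: "alice")) // 100
        }
    }

Each rule is simple on its own; the juggling comes from applying all of them at once across actors, Tasks, and Sendable checks in a real codebase.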


I haven't tried ChatGPT web search, but my experience with Claude web search is very good. It's actually what sold me and made me start using LLMs as part of my day to day. The citations they leave (I assume ChatGPT does the same) are killer for making sure I'm not being BSd on certain points.


How often do you actually check the citations? They seem to confidently cite things, but then what they say is often different from what the source actually has.


It depends on the question. I was having a casual chat with my dad and we wondered how Apple's revenue was split amongst products; it was just for the sake of the conversation, so I didn't check.

On the other hand, I got an overview of Postgres RLS and I checked the majority of those citations since those answers were going to be critical.


That’s interesting. I use the API and there are zero citations with Claude, ChatGPT, and Gemini. Only Kagi Assistant gives me some, which is why I prefer it when researching facts.

What software do you use? The native Claude app? What subscription do you have?


Claude directly (web and mobile) with the Pro ($20) subscription.

I found it very similar to Kagi Assistant (which I also use).


Kagi really helps with this. They built a good search engine first, then wired it up to AI stuff.


I also find that it gets way more snarky. The internet brings that taint with it.


Completely opposite experience here (with Claude). Most of my googling is now done through Claude; it can find, digest, and compile information much quicker and better than I'd do myself. Without web search you're basically asking an LLM to pull facts out of its ass. Good luck trusting those results.



