Hacker News | new | past | comments | ask | show | jobs | submit | pton_xd's comments | login

Not to be an alarmist, but... your body is covered with bacteria, viruses, and fungi; literally crawling with microbes, inside and out.

When you smell poop, it's actually shit particles in the air... lol. Sometimes bacteria come along for the ride too? Garbage, rotting flesh... yeah.

Technically, gas != tiny solid particles.

So, no, you’re not inhaling actual particles.


> Apart from being all AI-written

I think nearly 100% of blog posts are run through an LLM now. The author was lazy and went with the default LLM "tone," so the typical LLM grammar and turns of phrase are readily apparent.

It's really disheartening, as I read blogs to get a perspective on what another human thinks and how their thought process works. If I wanted to chat with an LLM, I'd open a new tab.



I never use an LLM for blog posts. Seems like you needed to hear more people telling you that.

Only amateurs and scammers use LLMs for writing.


So I guess this talent hire + tech license is the new way to acquire startups? Character.ai, Windsurf, and now Groq.

Any employees of those companies lurking here? I'm curious how the morale is now.


It's just fatigue from seeing the same people and themes repeatedly, non-stop, for the last X months on the site. Eventually you'd expect some tired reactions.

Better this than the 300th React.js bloatware of the year.

> "Just Eat Less" is roughly the way to lose weight

Maybe the messaging should be "eat healthier"? How many obese people cook for themselves and eat exclusively from the outer aisles of the grocery store (fruits, vegetables, seafood, meat, eggs, dairy)?

I could be wrong, but I have to imagine the average obese person has a terrible diet. Portion control won't work at that point; you're already doomed to fail.

To be fair, most people have a terrible diet, it's just that some lucky individuals have the metabolism to overcome it. It seems like those people are increasingly the exception and a bad benchmark for how humans should eat.


Differences in metabolism are very seldom the real reason. The people who claim they have a "slow" or "fast" metabolism can't back that up with actual RMR test results. They're just bad at estimating how many calories they actually consume. This can go both ways.

> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

It's interesting that Terence Tao just released his own blog post stating that they're best viewed as stochastic generators. True, he's not an AI researcher, but it does sound like he's using AI frequently with some success.

"viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems" [0].

[0] https://mathstodon.xyz/@tao/115722360006034040


I get the impression that folks who have a strong negative reaction to the phrase "stochastic parrot" tend to do so because they interpret it literally or analogously (revealed in their arguments against it), when it is most useful as a metaphor.

(And, in some cases, a desire to deny the people and perspectives from which the phrase originated.)


What happened recently is that all the serious AI researchers who were on the stochastic parrot side changed their point of view, but, incredibly, people without a deep understanding of such matters, previously exposed to such arguments, are lagging behind and still repeat arguments that the people who popularized them would not repeat again.

Today there is no top AI scientist who will tell you LLMs are just stochastic parrots.


You seem to think the debate is settled, but that’s far from true. It’s oddly controlling to attempt to discredit any opposition to this viewpoint. There’s plenty of research supporting the stochastic view of these models, such as Apple’s “Illusion” papers. Tao is also a highly respected researcher, and has worked with these models at a very high level - his viewpoint has merit as well.

The stochastic parrot framing makes some assumptions, one of them being that LLMs generate from minimal input prompts, like "tell me about Transformers" or "draw a cute dog". But when the input provides substantial entropy or novelty, the output will not look like any training data. And longer sessions with multiple rounds of messages also drift out of distribution (OOD). The model is doing work outside its training distribution.

It's like saying pianos are not creative because they don't make music. Well, yes, you have to play the keys to hear the music, and transformers are no exception. You need to put in your unique magic input to get something new and useful.


Now that you’re here, what do you mean by “scientific hints” in your first paragraph?

This is what seemingly every app does. They add 15 different categories for notifications / emails / whatever, and then make you turn off each one individually. Then they periodically remove / add new categories, enabled by default. Completely abusive behavior.

Want to unsubscribe from this email? Ok, you can do it in one click, but we have 16 categories of emails we send you, so you'll still get the other 15! It's a dark pattern for sure.

And by unsubscribing, you just gave us a signal that you are active.

They're sad they can't point that particular marketing hose at you anymore, but they appreciate you confirming your validity as a lead they'll sell to data brokers.

1.3076744e+12 -1 is a lot of categories to click.

1,307,674,368,000

[ ] 231,846,239,211 “Messages related to wetland fauna migratory patterns and their impact on commodity spice markets, also Pepsi advertising”

e+ is such an unintuitive decimal representation system. going in blindly, it's completely non-obvious what "e" stands for; surely "d" would make far more sense. also, the namespace for "e" is plenty filled up as is, and, most of all, "+12" implies 12 additional digits, not 12 digits after the point

Google's choice to use it for calculation results despite having essentially no restriction on text space always annoyed me. I think this is the first time I've seen a human using it


Nothing to do with Google.

(Apologies if this is pedantic, but:)

The letter "e" (for "exponent") has meant "multiplied by ten to the power of" since the dawn of computing (Fortran!), when it was impossible to display or type a superscripted exponent.

In computing, we all got used to it because there was no other available notation (this is also why we use * for multiplication, / for division, etc). And it's intuitive enough, if you already know scientific notation, which Fortran programmers did.

Scientific notation goes back even further, and is used when the magnitude of the number exceeds the available significant digits.

E.g., Avogadro's number is 6.02214076 × 10²³. In school we usually used it as 6.022 × 10²³, which is easier to work with and was more appropriate for our classroom precision. In E notation, that'd be 6.022E23.

1.3076744e+12 is 1.3076744 × 10¹². The plus sign marks a positive exponent; it has nothing to do with addition. You could argue that the plus sign is redundant, but the explicit sign can be helpful when working with numbers whose exponents swing between positive and negative.
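For instance, in Python (most languages accept the same notation), E-notation literals and the `e` format type behave exactly as described above:

```python
# E notation is parsed natively: 1.3076744e+12 means 1.3076744 × 10^12.
n = 1.3076744e+12
print(n)  # 1307674400000.0

# Avogadro's number, written with a positive exponent (no '+' needed on input).
avogadro = 6.02214076e23
print(f"{avogadro:e}")  # 6.022141e+23 (default precision is 6 digits)

# Formatting always emits an explicit sign: '+' for positive exponents,
# '-' for negative ones.
print(f"{0.000123:e}")  # 1.230000e-04
```

Note that on output Python always includes the `+`, which is the redundancy-for-clarity trade-off mentioned above.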


The use of E notation for scientific notation dates back to 1956: https://en.wikipedia.org/wiki/Scientific_notation#:~:text=wh...

It’s also pretty common on scientific and graphing calculators; the first time I saw it was in junior school in maths.


And if you just add them to your spam filter, it won't even work reliably, because they deliberately shift around the domains and subdomains they send from every so often.

I just use a unique address for each service. Any email that gets leaked or is getting unsubscribe resistant spam is added to /etc/postfix/denied_recipients :)

Appending "+label" to the username part of an email address is legal, and the mail will still be delivered to the username's mailbox.

Doesn’t sound like a very fun hobby, TBH.

Not the OP, but I find great joy in looking through who sends me spam (based on the unique email used to sign up for each service).

I think it scratches a similar itch to putting up a game camera to see what sort of vermin are running around in your back yard.


You inevitably catch LexisNexis shitting in your herb garden and leaving squirrel carcasses lying about…

Luckily they don't seem to shift the addresses they send to, so if you own the domain you use for email, you can make dedicated addresses for each service you sign up for. Then filter based on the `to:` field.

this is where LLMs could actually help: create spam filters where an LLM parses the message and denies it if it looks close enough. but then again, hallucinations would be kind of terrible.

I agree this would be a good use of an LLM (assuming that it was running locally). I wouldn't put one in charge of deleting my messages, but I could see one being used to assign a score to messages and based on that score moving them out of my inbox into various folders for review.

I'd be really interested to see a comparison between LLM spam scoring and a traditional spam scoring algorithm because an LLM is essentially a spam generator. Can that be used to make a better spam detector?
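A minimal sketch of the score-and-route idea from above, where nothing gets deleted outright. `score_spam` is a hypothetical stand-in for whatever local model or classifier assigns the score, and the folder names and thresholds are made up:

```python
# Route messages into folders by spam score instead of deleting them.
# score_spam is any callable returning a value in [0, 1];
# the thresholds below are arbitrary illustrations.
def route(message: str, score_spam) -> str:
    score = score_spam(message)
    if score >= 0.9:
        return "Junk/Review"      # almost certainly spam, but kept for review
    if score >= 0.5:
        return "Inbox/Uncertain"  # borderline, surfaced for a human decision
    return "Inbox"

# Toy classifiers for illustration only.
print(route("FREE PRIZE click now!!!", lambda m: 0.95))  # Junk/Review
print(route("Lunch tomorrow?", lambda m: 0.02))          # Inbox
```

Keeping the LLM in an advisory, score-only role like this also limits the damage a hallucination can do.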

Same can be achieved with a catch-all domain and a sub for every service you use. Costs $13/year. Extra protection: if you lose access to your email provider, you still have access to future emails.

Thanks again for unsubscribing! This is your weekly reminder that you are still unsubscribed. As usual, we've included a little bonus for you to enjoy at the end of this unsubscribe-reminder e-mail: a complementary full edition of this week's newsletter!

Yep. Had that happen with the United app a few weeks ago. Unsolicited spam sent via push notification to my phone. Turns out they added a bunch of notification settings, all of course defaulting to on.

Turned them all off except for trip updates that day.

Best part is: yesterday I received yet another unsolicited spam push message. With all the settings turned off.

So these companies will effectively require you to use their app to use their service, then refuse to respect their own privacy settings.


I've taken to "Archiving" apps like this on my Android phone. When I need it, I can un-archive it to use it. Keeps the list of things trying to get my attention a little bit smaller.

I just hellban every app from sending any notifications, except for a select few. Apps get like a one strike policy on notification spam. If they send a single notification I didn't want, I disable their ability to send notifications at all.

Also all notifications/etc are silent, except for alarms, pages, phone calls, and specific named people's texts.

Everything else... no. YouTube was the worst offender before for me.


> YouTube was the worst offender before for me.

Uber. Hands down. I'm using it a lot less since they started sending ads on the same notification channel as my ride updates.


Another technique for me is to avoid apps like Instagram, Facebook and YouTube. I run them all through mobile Firefox with uBlock Origin and custom block scripts that block sponsored posts and shorts. This combines well with having YouTube's history turned off, which prevents the algorithmic suggestions.

I give apps a one strike policy on notification spam. If they do it at all, I'm uninstalling it until I actually need to use it next (if I can't find an alternative). And the same goes for getting in my way to beg for a review on the app store: that's a shortcut to getting a one-star rating.

The main exception to this is the notification spam from Google asking me to rate call quality after every damn call. I don't have my phone rooted, so I can't turn off that category of notification.


This is the way. You get one chance, app. If you send me an unwanted notification, you're done. You have to almost treat these apps as attackers.

Why even give most apps even one chance? For almost every app I have zero interest in ever getting a notification from. I see no reason to give them an opportunity to annoy me even once.

Honestly because I won't remember to go into the settings page and disable it. When a notification comes in, there's a quick route to disable forever, otherwise I have to go preemptively digging

Why do you even need the United app? They have a website.

This is why whenever you try to do anything significant on a web site with a phone, they tell you to "Download our app". Detection is very good now. Slack can see right through desktop mode, cheater, and will redirect you to the app regardless.

Never had that issue on Vanadium browser, or Brave or even Firefox. I personally refuse to download an app if there is a website for the same. For a long time I was even using door dash in browser.

Why use a website at all, then? United has a reservations 800 number and you can print your boarding pass at the airport.

I get the sarcasm, but it's comparing apples to oranges. Calling a number and talking to people is vastly different from clicking some buttons on your phone. App and website have almost the same user interface, just different ways to get to that interface. Calling the number is a totally different interface.

Often there is an extra fee to use the 800 number. (I'm not sure about United, but some places do that)

Boarding pass. For the airline apps, it probably is a good assumption that most people want to get a notification that their flight is delayed, or started boarding, etc..

They don't advertise it, but you can often add the Apple Wallet pass from the website. And it actually sends you flight change notifications too.

Unfortunately the Apple wallet boarding pass is often out of date with any gate changes. The app will update immediately.

Sending ad notifications is a recent trend. Normally Apple's guidelines don't allow it, but they know that Apple can't make much of a fuss with all the regulatory pressure.

It's the enshittification of the notification system: the apps are already filled with ads, and now they're making you open the app or splashing things in your face.


When I get email like that, I mark it as spam. That trains the spam filters to remove their marketing email from everyone's inbox. I see it as a community service.

That behavior is what finally got me off Facebook a while back.

Edit: And something similar with Windows now that I think about it; there was a privacy setting which would appear to work till you re-entered that menu. Saving the setting didn't actually persist it, and the default was not consumer-friendly.


LinkedIn does the same thing re emails, notifications, etc that they send. I think I turned off notifications that connections had achieved new high scores in games they play on LinkedIn. Absurd.



LinkedIn is one of the most useless apps ever. I have trashed it countless times, but I do use it now and then to keep up with companies and respond to a few solicitations. There is almost never anything of value in my feed, between the fake jobs and the low-value self-promotional AI-written posts. Who even reads this? Not even mentioning the political and pseudo-activist posts. And this happens despite systematically marking all of these posts irrelevant or "inappropriate for LinkedIn". This app is beyond repair. Uninstalling.

“House Project Managers”

I especially like how they add it to the bottom of a widget with hidden scrollbars, just to make it totally missable that they added them at all!

I have to say, it was fun while it lasted! Couldn't really have asked for a more rewarding hobby and career.

Prompting an AI just doesn't have the same feeling, unfortunately.


It depends. I've been working on a series of large, gnarly refactors at work, and the process has involved writing a fairly long, hand-crafted spec/policy document. The big advantage of Opus has been that the spec is now machine-executable: I repeatedly fed it into the LLM and saw what it did on some test cases. That sped up experimentation and prototyping tremendously, and it also surfaced a lot of ambiguities in the policy document that were helpful to address.

The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.

The whole process was really fun! It's not really vibe coding at that point, really (I continue to be relatively unimpressed at vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.


For me it's the opposite. I do have a good feeling for what I want to achieve, but translating this into program code and testing it has always caused me outright physical pain (and in the case of C++ I really hate it). I've been programming since age 10. Almost 40 years. And it feels like liberation.

It brings the "what to build" question front and center, while "how to build it" has become much, much easier and more productive.


Indeed. I still use AI for my side projects, but strictly limit to discussion only, no code. Otherwise what is the point? The good thing about programming is, unlike playing chess, there is no real "win/lose" in the scenario so I won't feel discouraged even if AI can do all the work by itself.

Same thing for science. I don't mind if AI could solve all those problems, as long as they can teach me. Those problems are already "solved" by the universe anyway.


Even the discussion side has been pretty meh in my mind. I was looking into a bug in a codebase filled with Claude output and, for funsies, decided to ask Claude about it. It basically generated a "This thing here could be a problem, but there is manual validation for it" response, and when I looked, that manual validation was nowhere to be found.

There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.

I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it and then submit for someone else to approve.


I don't discuss actual code with ChatGPT, just concepts. Like "if I have an issue and my algo looks like this, how can I debug it effectively in gdb?", or "how do I reduce lock contention if I have to satisfy A/B/...".

Maybe it's just because my side projects are fairly elementary.

And I agree that AI is pretty good at code review, especially if the code contains complex business logic.


So the brain is a mathematical artifact that operates independently from time? It just happens to be implemented using physics? Somehow I doubt it.


The brain follows the laws of physics. The laws of physics can be closely approximated by mathematical models. Thus, the brain can be closely approximated by mathematical models.


> And they can't even write a single proper function with modern c++ templating stuff for example.

That's just not true. ChatGPT 4 could explain template concepts lucidly but would always bungle the implementation. Recent models are generally very strong at generating templated code, even if it's fairly complex.

If you really get out into the weeds with things like ADL edge cases or static initialization issues they'll still go off the rails and start suggesting nonsense though.

