Hacker News | new | past | comments | ask | show | jobs | submit | ertgbnm's comments | login

non-scientific but every young person I know has had covid at least twice.


I am not a physician, but I expect that decades hence we will see the health effects of repeated Covid infections. I'm guessing specifically around cardio health and dementia risk.


And how many young people do you know who have had COVID-induced myocarditis?


Seems like the problem should be pretty easy to figure out. Just need to wait ~5 gigayears and see which model is right. I'm personally hoping for deceleration so that we have more total visitable volume.

I'll set a reminder to check back at that time to see who was right.


Oh, I'm not going to care about visitable volume.

With 5 gigayears to work with I'm going to move a few star systems over, break down all the matter orbiting the star into a Dyson sphere made of computronium, and simulate visiting any world I could possibly ever want to.


This is one reason to assume we'll never meet any aliens. They're just simulating whatever all day long in their Dyson Goonspheres.


I just pictured someone getting a message to check which model was right from an ancestor 20 giga generations ago!


!remindme 20,000,000,000 years


In addition, making money off the software that others develop and sell on the app store doesn't make Apple more of a software company, it makes them a middle man.


IMO a middle man means you are in between 2 other services, taking a cut off the top. In this instance, Apple not only created and curates the App Store, but also invented the concept. In this case they are definitely not a middle man; they are a software company selling access to their software to developers.


Unrelated but my current AI text flag is the use of "It's not X. It's Y."

It's become so repetitive recently. Examples from this post alone:

1. "This isn't about AI. The quality crisis started years before ChatGPT existed."

2. "The degradation isn't gradual—it's exponential."

3. "These aren't feature requirements. They're memory leaks that nobody bothered to fix."

4. "This wasn't sophisticated. This was Computer Science 101 error handling that nobody implemented."

5. "This isn't an investment. It's capitulation."

6. "senior developers don't emerge from thin air. They grow from juniors who:"

7. "The solution isn't complex. It's just uncomfortable."

Currently this rhetorical device is like nails on a chalkboard for me.

Anyway, this isn't a critique of your point. It's pedantry from me. :)
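The construction is regular enough that a crude heuristic catches most of the examples above. A minimal sketch, purely illustrative (the pattern and function name are my own, and it will happily flag human writing that uses the same rhetoric):

```python
import re

# Crude heuristic for the "It's not X. It's Y." construction: a negated
# copula clause, then a sentence break (or em dash), then an affirmative
# copula clause. Illustrative only -- expect false positives.
NOT_X_ITS_Y = re.compile(
    r"\b(?:is|are|was|were)(?:n'?t|\s+not)\b"    # isn't / aren't / is not ...
    r"[^.!?\n]{0,60}"                            # rest of the first clause
    r"[.!?\u2014]\s*"                            # sentence break or em dash
    r"(?:it|they|this|that)(?:'s|'re|\s+(?:is|are|was|were))\b",
    re.IGNORECASE,
)

def flag_not_x_its_y(text: str) -> bool:
    """Return True if text contains an 'It's not X. It's Y.' construction."""
    return bool(NOT_X_ITS_Y.search(text))
```

At best this flags candidates for a human to eyeball; it can't prove anything about how the text was produced.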


Yeah, I bounced hard off the article at #5. My AI detector was slow warming up but kicked on at:

"Today’s real chain: React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways."

Like, yes, those are all technologies, and I can imagine an app + service backend that might use all of them, but the "links" in the chain don't always make sense next to each other and I don't think a human would write this. Read literally, it implies someone deploying an electron app using Kubernetes for god knows why.

If you really wanted to communicate a client-server architecture, you'd list the API gateway as the link between the server-side stuff and the electron app (also you'd probably put electron upstream of chromium).


Every time I see a big chain of technologies as "bad", all I want to do is add more and see where the good/bad separation should be placed. So:

API gateways -> Java servers -> JVM -> C/C++ -> Assembly -> machine code -> microprocessors -> integrated circuits -> refined silicon -> electronics -> refined metals -> cast metallurgy -> iron tools -> copper tools -> stone tools

Anyway, my take is that everything after copper is evil and should be banished.


The fun part is where the chain diverges into separate branches for digital logic and the mechanics of constructing circuits.


Yeah, the beginning of the article could reasonably pass for a "rage blog", but the end almost reads like an Axios article, with nothing but bullet points and "clever" headlines, in a weirdly formulaic way.

Also, what's the deal with all the "The <overly clever noun phrase>" headlines?

Smells a lot like AI.


  Also, what's the deal with all the "The <overly clever noun phrase>" headlines?
Adding "The" to the beginning of an opinion is a mental manipulation tactic that makes the reader believe it is a well-known established fact that everyone else knows.


I've been seeing this pattern of text crop up in many places. On LinkedIn, much of my feed is filled with posts of short sentences that follow this exact pattern. What's more, even the replies are obviously AI-generated.


I hear people talk like this on the phone. The one I hear a lot is: "It's not about X, it's about Y1. It's about Y2. It's about Y3." Where Y is usually something humanizing.


Proving or disproving intent is hard, in court trials often taking days of witness testimony and jury deliberation.

These hot-take/title patterns "X is about Y1" are exploiting the difficulty of disproving them.

I often see it in the pattern of "Behavior of Group I Dislike is About Bad Motive."


I agree. LinkedIn is completely dominated by AI slop replies to AI slop posts these days.

It's almost worse than Twitter.


"Formulaic headlines considered harmful"


That is a sharp observation and you are absolutely right to point it out! We are here not to consume, but to critically evaluate, not to skim, but to understand.

Would you like me to generate a chart that shows how humans have adopted AI-speak over time?


Now look, you.


Yeah I can see this can be an irritating rhetorical device. It implies that the reader already has a certain judgement or explanation, and makes a straw man in order to turn around and then argue against it, throwing nuance out the window.


It's irritating on purpose, for engagement.


It’s not nuance, it’s intellectual dishonesty.


I see what you did there :-)


I've always talked like this, I'm sure others do too.


It’s becoming exhausting to avoid all of these commonly used phrases!

I don’t use LLMs.


> It’s becoming exhausting to avoid all of these commonly used phrases!

That's not the only price society pays. It makes sense for us to develop the heuristics to detect AI, but the implication of doing so has its own cost.

It started out as people avoiding the use of em-dash in order to avoid being mistaken for being AI, for example.

Now in the case of OP's observation, it will pressure real humans to not use the format that's normally used to fight against a previous form of coercion. A tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that, "it's not X1 we should be arguing about, but X2." Combined with OP's revelation, it is now harder to call out BS. That sure is convenient for capital interests.

I wonder what's next.


As long as we can swear more, we’ll be ok.

X1 is bullshit to argue about, it’s about X2.

Since the models are so censored and “proper” in their grammar, you can pretty easily stand out


I've found swearing to be a pretty decent heuristic for whether I'm talking to an actual person or not. Either it'll remain a decent heuristic or we'll get some Malcolm Tucker-esque LLMs out of it!


I actually liked the em dash, but I've stopped using it. It'll probably be safe to use again soon, once AI gets trained not to use it.


Write it as three dashes instead---that's not what an LLM would do and anyone looking close enough will see the difference.


Unfortunately the time each cycle takes from start to finish is measured in months or years, and it's an endless game of whack-a-mole.


Path forward

Accept that we're going to see more and more of these, to the point that it's pointless to point them out.


I'm happy to go further, I think the majority of the post is slop that reads like a bunch of tweets stitched together.

And what's with that diagram at the start? What's the axis on that graph? The "symptoms" of the "collapse" are listed as "Calculator", "Replit AI" and "AI Code". What?

Later in the post, we see the phrase "our research found". Is the author referring to the credulous citations of other content mill pieces? Is that research?

Our collective standard for quality of writing should be higher. Just as experienced devs have the good "taste" to curate LLM output, inexperienced writers cannot expect LLMs to write well for them.


100%. I think it's irritating, because it's a cheap way to create drama out of nowhere.


It’s not drama. It’s attention-seeking dressed up as emotion.

/s


It's hilarious just how much of a witchhunt on AI is kicked off from a bunch of vague heuristics.

Writing this as someone who likes using em dash and now I have to watch that habit because everyone is obsessed with sniffing out AI.

Cliched writing is definitely bad. I guess I should be happy that we are smashing them one way or another.


I don't want to take the time writing up a cogent response to an article someone didn't bother taking the time to write. With this particular article, there were a couple of points I wanted to respond to, before I realized there was no human mind behind them.

I've always liked the HN community because it facilitates an intelligent exchange of ideas. I've learned a lot trawling the comments on this site. I don't want to see the energy of human discourse being sucked up and wasted on the output of ChatGPT. Aggressively flagging this stuff is a sort of immune response for the community.


I wonder how it's different from when a colleague sends you an LLM PR.


You get paid

And it's the employer's problem, if you don't react too emotionally.


I have thought like this since the slop wave hit, but hadn't yet put it into words. Gratified to see someone else do it


It's not a witchhunt for AI. It's a witchhunt for bad writing.


LLMs like to repeat the same pattern over and over again. That's my text flag.


It seems like everyone is getting too worked up about AI generated text. Yes, it's bad, but bad writing has existed forever. We don't see most of the older stuff because it disappears (thankfully) into oblivion and you are left with the works of Chaucer and Shakespeare.


> It seems like everyone is getting too worked up about AI generated text. Yes, it's bad, but bad writing has existed forever. We don't see most of the older stuff because it disappears (thankfully) into oblivion and you are left with the works of Chaucer and Shakespeare.

You're missing the point. In the past bad writing was just bad writing, and it was typically easy to detect. Now the main contribution of AI is bad writing that can masquerade as good writing, be produced in industrial-scale quantities, and flood all the channels. That's a much different thing.

IMHO the main achievement of LLMs will be to destroy. It'll consume utterly massive quantities of resources to basically undermine processes and technologies that once created a huge amount of value (e.g. using the internet for wide-scale conversation).

I mean, schools are going back to handwritten essays, for Christ's sake.


> You're missing the point. In the past bad writing was just bad writing, and it was typically easy to detect.

If AI generated text were well written, would it matter to you? Is it bad to use Grammarly?

I don't see anything inherently wrong with using AI tools to write, as long as writers take the responsibility to ensure the final result is good. Fighting against use of LLMs seems like a fool's errand at this point. Personally I've been using Google Translate for years to help with writing in German, little knowing at the time that it was using transformers under the covers. [0] I'm pretty sure my correspondents would have thanked me had they known. Same applies for text written in English by non-native speakers.

[0] https://arxiv.org/abs/1706.03762

edit: fixed typo. Just proof this is not an LLM.


> If AI generated text were well written, would it matter to you?

Yes, of course.

1) I don't want to waste my time with slop pumped out with a mindless process by someone who doesn't give a shit. That includes turning half an idea into a full essay of bullshit.

2) You have to distinguish between "good writing" and (let's call it) "smooth text construction." One of the big problems with LLMs is they can be used to generate slop that lacks many of the tells you could previously use to quickly determine that you're reading garbage. It's still garbage, just harder to spot, so you waste more time.

> I don't see anything inherently wrong with using AI tools to write, as long as writers take the responsibility to ensure the final result is good.

Yeah, but what about the writers who don't? That's what I'm talking about. These tools benefit the bad actors far more than the ones who are trying to do things properly.

> Personally I've been using Google Translate for years to help with writing in German, little knowing at the time that it was using transformers under the covers. [0] I'm pretty sure my correspondents would have thanked me had they known.

Honestly, I think Google Translate is a lot harder to misuse than an LLM chatbot. These things aren't all the same.


I understand your argument, but the distinctions you are making seem really hard to uphold. Adapting to LLMs means we'll adopt new standards for quality or more likely re-emphasize old ones like assigning trust to specific authorities.

If you read something from Simon Willison, it's generally worth reading. [0] (Actually pretty great a lot of the time.) Everything else is the literary equivalent of spam calls. Maybe it's time to stop answering the phone?

[0] https://simonwillison.net/


> Adapting to LLMs means we'll adopt new standards for quality or more likely re-emphasize old ones like assigning trust to specific authorities.

I think we're in violent agreement, I just have a less sanguine attitude towards it. LLMs will "undermine processes and technologies that once created a huge amount of value" (to quote myself above). We'll adapt to that, as in life goes on, but major things will be lost.


This isn’t just an insightful comment. It’s profound.

I’d like to delve into the crucial topic of whether AI generated slop is respectful to the innovative entrepreneurs of Hacker News. If they won’t assert the value of their time, who will?

In this digital age, can we not expect writers to just keep it brief? Or heck, just share the prompt, which is almost certainly shorter than the output and includes 100% of the information they intend to share?

Or is true 21st century digital transformation driven by the dialectical tension between AI generators and AI summarizers?


> can we not expect writers to just keep it brief?

In essentially every situation, you cannot expect readers to read.


Agreed, but at least someone took time and effort to write it before; there were limits on the quantity produced. AI will simply release a flood of this stuff. Social media did something similar by removing all barriers to "Enquirer-esque" misinformation - Weekly World News "Bat Child Found in Cave".

In the '70s we had environmental pollution; the 2000s will be defined by the fight against social pollution.


The problem isn't the bad writing; it's that it's vapid slop trying to consume my very limited time and attention.


The most grating title/text pattern for me is:

1. Stop []!


Numbered lists are an AI smell.


I know a lot of real people who use numbered lists in their writing. That's not a decisive feature. Using an emoji for the bullet point, though, is where you definitely need to stop.


They were always low effort, and therefore pretty bad even before AI. But now they are practically no effort and 100x worse.


Congratulations, you're one of this month's lucky 10,000!


This may be the "reason" they use, but I doubt they have done any testing to show that it provides any level of protection rather than just making their app less usable. Sounding like a good reason doesn't make it a good reason.


On Windows, CSVs automatically open in Excel through File Explorer. Almost all normal businesses use Windows, so the OP's claim is pretty reasonable.


Depends on the country/locale - I just generate them with semicolons to enable easy opening
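For context: in locales where the comma is the decimal separator (much of Europe), Excel expects semicolon-delimited "CSV" files by default. A minimal sketch with Python's csv module, assuming that's the audience you're generating files for (the filename and data are made up):

```python
import csv

# In many European locales Excel uses ',' as the decimal separator,
# so it expects ';' as the field delimiter when opening "CSV" files.
rows = [
    ["product", "price"],
    ["widget", "3,50"],  # comma as the decimal separator
]

# utf-8-sig writes a BOM so Excel detects the encoding correctly;
# newline="" lets the csv module control line endings itself.
with open("report.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f, delimiter=";")
    writer.writerows(rows)
```

The tradeoff is that such files are no longer RFC 4180 comma-separated values, which is exactly the sibling comment's objection.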


We're discussing CSVs. You are discussing SemicolonSVs.


I do wish SCSV were a thing.


Everyone knows Greenland is the real proponent of the Mercator projection. Using the unfair influence of their massive perceived size on the map to unfairly inflate their importance in global politics and economics. It's time to put Greenland back in its place.


I am now wondering if Trump's Greenland obsession is because he thinks it's very bigly, due to Mercator (it appears larger than the US in a Mercator projection).


Maybe they should put Roosevelt's massive 50-inch globe back in the Oval Office.

Assuming there's a place for more Big Balls in federal government?


You're not the only one who has wondered this: https://www.newsweek.com/mercator-projection-greenland-donal...


> "I love maps. And I always said: 'Look at the size of this. It's massive. That should be part of the United States.'"

Oh, huh, I was joking, but actually based on this he probably _does_ think that (unless he was looking at equal-area maps, which is unlikely).


He wants "the rare earths"


He may generate useless tokens but boy can he generate ALOT of tokens.



He? I know some Gemmas and it's distinctly a female name; is Gemma a boy's name where you're from?


I don't really gender LLMs in my head in general. I guess Gemma is a female name. I only gendered it in the joke because I think it makes it funnier, especially since it's just "a little guy". I know they are giving gendered names to these models now but I think it's a bit weird to gender when interacting with them.


Doesn’t the “M” in “Gemma 3 270M” Stand for “male”?

Also: https://en.wikipedia.org/wiki/Gemma_Frisius


Not sure if that’s a serious question but it stands for “million”. As compared to 1B+ models, where the B stands for “billion” parameters.


Perhaps the poster was referring to Simon, not Gemma.


> ALOT

'Alot' is not a word. (I made this mistake a lot, too.)


The only problem I have with articles like this is that they frame what Trump is doing with an air of intention. Most readers will obviously think that becoming mercantilist is a bad idea. However, readers will also walk away with the notion that these actions have all been done with a grand plan in mind.

The reality is much more stupid, though. Recent policies have been made without any plan to begin with. There is no driving philosophy of "we must create modern mercantilism" with the resulting policies forming a coherent plan to bring about that change. Instead, actions are being taken based on the split-second decisions of a moron who lacks the most basic understanding of economics, much less mercantilism.

This is a decidedly worse world than one in which the plan is simply a bad one. A bad plan would at least be coherent and something that our allies could predict and make their own plans around. Nevertheless, I think the thesis is broadly correct as being the outcome of recent actions.


I don't think they're gracing him with an air of intention. I think they're just looking at what he's doing and finding the closest model that makes reasonably accurate predictions about the future.


I don't think so. Recent actions are not delivering an industrial policy, or promoting research. They are not addressing the distribution of "wealth and power" which is possibly more important than its magnitude.


True. And in a few years we'll find out how much Trump has actually affected the nation's trajectory. His trade policy is an absolute 180 from GOP orthodoxy and members of his own party have mostly said "we'll do what the president wants" and not "we think this is a great idea". I'd wager a lot of this policy gets undone at the first opportunity.


> I'd wager a lot of this policy gets undone at the first opportunity.

You're likely right, but it likely won't happen for about 3.5 years during which time much damage will be done.


If it doesn't happen during these 3.5 years, it won't happen for decades. Do you really expect this Grand Old Pedophile party to cede power willingly, now that they own all four branches of government?


Trump doesn't have to have intention.

The trend of the US pulling out of the "global order" has already been happening. It's just that Trump took something that was happening gradually and, through his policy button-mashing, accelerated it dramatically.


It's literally a billion dollar plus release. I get more scrutiny on my presentations to groups of 10 people.


I take a strange comfort in still spotting AI typos. Makes it obvious their shiny new "toy" isn't ready to replace professionals.

They talk about using this to help families facing a cancer diagnosis -- literal life or death! -- and we're supposed to trust a machine that can't even spot a few simple typos? Ha.

The lack of human proofreading says more about their values than their capabilities. They don't want oversight -- especially not from human professionals.


Cynically, the AI is ready to replace professionals in areas where the stakeholders don't care too much. They can offer the services cheaper, and that is all that matters to their customers. Were it not so, companies like Tata wouldn't have any customers. The phenomenon of "cheap Chinese junk" would not exist, because no retailer would order it produced.

So, brace yourselves, we'll see more of this in production :(


Does something where you don't care about quality this much need doing at all?


Well, the world will split into those who care, and fields where precision is crucial, and the rest. Occasional mistakes are tolerable but systematic bullshit is a bit too much for me.


This separation (always a spectrum, not a split) already exists for a long time. Bouts of systemic bullshit occur every now and then, known as "bubbles" (as in dotcom bubble, mortgage bubble, etc) or "crises" (such as "reproducibility crisis", etc). Smaller waves rise and fall all the time, in the form of various scams (from the ancient tulip mania to Ponzi to Madoff to ICOs, etc).

It seems like large amounts of people, including people at high-up positions, tend to believe bullshit, as long as it makes them feel comfortable. This leads to various irrational business fashions and technological fads, to say nothing of political movements.

So yes, another wave of fashion, another miracle that works "as everybody knows" would fit right in. It's sad because bubbles inevitably burst, and that may slow down or even destroy some of the good parts, the real advances that ML is bringing.

