
> Sure, we have 50 years of research proving that children are blank slates

No, we don't.

> I find it extremely difficult to believe that we'd be born equal

This is not what the article author claims or ever claimed.


> It’s not that surprising that many successful people seem to be strong fans of heritability, or more broadly, of the idea that metrics like IQ point to some sort of “universal independent” metric of value. To do otherwise requires living one’s life in cognitive dissonance; how could they be worthy of such riches while others struggle to just pay the bills? Surely success and intelligence is just an inborn thing, and thus inevitable and unchangeable. There’s nothing they can do, and it was always going to end up that way. Inevitability erases any feelings of guilt or shame.

I've never understood the idea that winning the genetic lottery somehow makes a person more "deserving" or "worthy" than another. To me, the whole idea of "meritocracy" is a moral abomination.


How do you understand meritocracy? It seems natural that those who do valuable things get rewarded a lot.

Ideally everyone would get the same chances to do valuable things, but that's not how the world is set up. Unfortunately.

However, trying to change that must be done with care, as it's easy to increase injustice (looking at most communist systems).


> It seems natural that those who do valuable things get rewarded a lot.

I'm not fond of the term "rewarded." I understand how prices are determined by supply and demand in economics. Obviously in the labor market, some skill that is in high demand and/or short supply will bring a high price. However, economics is largely amoral. The economic system is not an ethical system to reward the worthy and punish the unworthy, just a method of distributing resources.

There's both an uncontroversial and a controversial interpretation of "meritocracy." Uncontroversially, those who are best qualified for a job should do that job, especially for life-and-death jobs like in medicine. This is how the argument usually starts, with the uncontroversial interpretation, but then it slyly shifts to the controversial interpretation, that certain people "deserve" more money than others, often a lot more money, due to their qualifications. And while we may want economic incentives for the most qualified people to pursue certain jobs, overall it doesn't appear to me that the economic incentives align with societal benefit. For example, we massively reward professional athletes and entertainers much more than doctors and nurses.

Ultimately, the controversial notion of meritocracy is used to justify enormous disparities of wealth, where a few people have so much money that they can buy politicians and elections, whereas others are so poor that they have trouble affording the basics like food, shelter, and medical care. And supposedly that's all based on "merit", which I think is crap.


> The question never was about whether or not genetic differences contribute to the spread of intellectual talent—they obviously do. The question always was about the “interesting place” Paul Graham talked about, the meaningful space between genetic potential and actual achievement, and whether or not it really existed. And, at 30% or 50%, this place surely exists.


The author of this piece totally ignored that heritability is only part of the genetic lottery.


"Heritability", strictly construed (as is the case in every study establishing heritability numbers) isn't necessarily a description of a "genetic lottery" at all. Plenty of things are highly heritable and not at all genetically determined, and the converse is also true!

What do you mean?


That heritability doesn't cover all genetic factors. E.g., out of 100% of IQ variation, 50% might be inherited, but that doesn't mean the rest is nurture, right? There can still be a huge genetic-lottery component. E.g., isn't heritability about the mean of the genetic effects, while there's also the rest of the distribution (std. dev.)?
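
To put my question in symbols (the standard additive decomposition from quantitative genetics, in case I'm garbling the terms):

    % Phenotypic variance splits into genetic and environmental parts:
    \mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E)
    % Heritability is the fraction of variance attributable to genetics:
    h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}

So h^2 = 0.5 says half the variation across a population tracks genetic differences, but a ratio of variances alone doesn't tell you where any individual lands in the distribution, which is the "lottery" part I'm asking about.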


Still have no idea what distinction you are making.

> That sentence smells like AI writing, so who knows what the author actually thinks.

The author has been a professional writer since long before LLMs were invented: https://hey.paris/books-and-events/books/

LLMs were trained on books like the ones written by the author, which is why AI writing "smells" like professional writing. The reason that AI is notorious for using em dashes, for example, is that professional authors use em dashes, whereas amateur writers tend not to use em dashes.

It's becoming absurd that we're now accusing professional writers of being AI.


I didn't mention em dashes anywhere in my comment!

If this isn't AI writing, why say "The “New Account” Trap" followed by the further sub-headers "The Legal Catch", "The Technical Trap", "The Developer Risk"... I have done a lot of copyreading in my life, and humans simply didn't write this way prior to recent years.


> humans simply didn’t write this way prior to recent years.

Aren’t LLMs evidence that humans did write this way? They’re literally trained to copy humans on vast swaths of human-written content. What evidence do you have to back up your claim?


Decades of experience reading blog posts and newspaper articles. They simply never contained this many section headers or bolded phrases after bullet points, and especially not of the "The [awkward noun phrase]" format heavily favored by LLMs.


So what would explain why AI writes a certain way, when there is no mechanism for it, and when the way AI works is to favor what humans do? LLM training includes many more writing samples than you’ve ever seen. Maybe you have a biased sample, or maybe you’re misremembering? The article’s style is called an outline; we were taught in school to write the way the author did.


Why did LLMs add tons of emoji to everything for a while, and then dial back on it more recently?

The problem is they were trained on everything, yet the common style for a blog post previously differed from the common style of a technical book, which differed from the common style of a throwaway Reddit post, etc.

There's a weird baseline assumption of AI outputting "good" or "professional" style, but this simply isn't the case. Good writing doesn't repeat the same basic phrasing for every section header, and insert tons of unnecessary headers in the first place.


Yes, training data is a plausible answer to your own question there, as well as mine above. And that explanation does not support your claims that AI is writing differently than humans, it only suggests training sets vary.

Repeating your thesis three times in slightly different words was taught in school. Using outline style and headings to make your points clear was taught in school. People have been writing like this for a long time.

If your argument depends on your subjective idea of “good writing”, that may explain why you think AI & blog styles are changing; they are changing. That still doesn’t suggest that LLMs veer from what they see.

All that aside, as other people have mentioned already, whether someone is using AI is irrelevant, and believing you can detect it and accusing people of using AI is quickly becoming a lazy trope, and often incorrect to boot.


You’re pointlessly derailing a conversation with a claim you can’t support and that isn’t relevant even if true.

Regardless of whether AI wrote that line, he published it, and we can safely assume it is what he thinks.


[flagged]


I don’t think you even know what you’re arguing about anymore. You claimed that what the author wrote wasn’t what the author thinks. As evidence you provided weak arguments about other parts of it being AI-written and made an appeal to your own authority. It doesn’t matter whether AI wrote that line, he wrote it himself, a ghost writer wrote it, or a billion monkeys wrote it. He published it as his own work, and you can act as if he thinks it even if you don’t otherwise trust him or the article.


Ah, I see the confusion: you're still focusing entirely on this one "this isn't just x; it's y" line. I was mostly talking about the piece as a whole, for pretty much everything other than the first sentence of my first comment above. Sincere apologies if I didn't state that clearly.


LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.


[flagged]


Those are different levels of abstraction. LLMs can say false things, but the overall structure and style is, at this point, generally correct (if repetitive/boring at times). Same with image gen. They can get the general structure and vibe pretty well, but inspecting the individual "facts" like number of fingers may reveal problems.


That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.


[flagged]


> AI may output certain things at a vastly different rate than it appears in the training data

That’s a subjective statement, but generally speaking, not true. If it were, LLMs would produce unintelligible text & images. The way neural networks function is fundamentally to produce data that is statistically similar to the training data. Context, prompts, and training data are what drive the style. Whatever trends you believe you’re seeing in AI can be explained by context, prompts, and training data, and isn’t an inherent part of AI.

Extra fingers are known as hallucination, so if you mean a different phenomenon, then nobody knows what you're talking about, and your analogy to fingers doesn't work. In the case of images, the tokens are pixels, while in the case of LLMs, the tokens are approximately syllables. Finger hallucinations reflect a lack of larger structural understanding, but they statistically mimic the inputs and are not examples of frequency differences.
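
As a toy illustration of "statistically similar to the training data" (my own sketch in Python, not how any production model actually works): even the simplest bigram model emits each next token in proportion to how often it followed the previous token in training.

    import random
    from collections import Counter, defaultdict

    def train_bigram(tokens):
        # Count how often each token follows each other token.
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def sample_next(counts, prev):
        # Sample the next token proportionally to training frequency.
        followers = counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    corpus = "the cat sat on the mat and the cat slept".split()
    model = train_bigram(corpus)
    # "the" was followed by "cat" twice and "mat" once in training,
    # so this prints "cat" roughly 2/3 of the time.
    print(sample_next(model, "the"))

A real LLM is vastly more sophisticated, but the training objective pushes in the same direction: match the statistics of the text it saw.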


This is a bad faith argument and you know it.


> I didn't mention em dashes anywhere in my comment!

I know. I just mentioned them as another silly but common reason why people unjustly accuse professional writers of being AI.

> I have done a lot of copyreading in my life and humans simply didn't write this way prior to recent years.

What would you have written instead?


Most of those section headers and bolded bullet-point summary phrases should simply be removed. That's why I described them as superfluous.

In cases where it makes sense to divide an article into sections, the phrasing should be varied so that they aren't mostly of the same format ("The Blahbity Blah", in the case of what AI commonly spews out).

This is fairly basic writing advice!

To be clear, I'm not accusing his books of being written like this or using AI. I'm simply responding to the writing style of this article. For me, it reduces the trustworthiness of the claims in the article, especially combined with the key missing detail of why/how exactly such a large gift card was being purchased.


> To be clear, I'm not accusing his books as being written like this or using AI. I'm simply responding to the writing style of this article.

It's unlikely that the article had the benefit of professional, external editing, unlike the books. Moreover, it's likely that this article was written in a relatively short amount of time, so maybe give the author a break if it's not formatted the way you would prefer as a copyeditor? I think you're just nitpicking here. It's a blog post, not a book.

Look at the last line of the article: "No permission granted to any AI/LLM/ML-powered system (or similar)." The author has also written several previous articles that appear to be anti-AI: https://hey.paris/posts/govai/ https://hey.paris/posts/cba/ https://hey.paris/posts/genai/

So again, I think it's ridiculous to claim that the article was written by AI.


It's a difference of opinion and that's fine. But I'll just say, notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.


> notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.

The post https://hey.paris/posts/cba/ has five bold "And..." headers, which is even worse than "The..." headers.

Would AI do that? The more plausible explanation is that the writer just has a somewhat annoying blogging style, or lack of style.


To me those "And..." headers read as intentional repetition to drive home a point. That isn't bad writing in my opinion. Notice each header varies the syntax/phrasing there. They aren't like "And [adjective] [noun]".

We're clearly not going to agree here, but I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".


> I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".

No, I'm going to try very hard to forget that I ever engaged in this discussion. I think your evidence is minimal at best, your argument self-contradictory at worst. The issue is not even whether you and I agree but whether it's justifiable to make a public accusation of AI authorship. Unless there's an open-and-shut case—which is definitely not the case here—it's best to err on the side of not making such accusations, and I think this approach is recommended by the HN guidelines.

I would also note that your empirical claim is inaccurate. A number of the headers are just "The [noun]". In fact, there's a correspondence between the headers and subheaders, where the subheaders follow the pattern of the main header:

> The Situation • The Trigger • The Consequence • The Damage

> The "New Account" Trap • The Legal Catch • The Technical Trap • The Developer Risk

This correspondence could be considered evidence of intention, a human mind behind the words, perhaps even a clever mind.

By the way, the liberal use of headers and subheaders may feel superfluous to you, but it's reminiscent of textbook writing, which is the author's specialty.


[flagged]


> please don't make it out like a throwaway "AI bad" argument.

The issue isn't whether AI is good or bad or neither or both. The issue is whether the author used AI or not. And you were actually the one who suggested that the author's alleged use of AI made the article less trustworthy. The only reason you mentioned it was to malign the author; you would never say, for example, "The author obviously used a spellchecker, which affects how trustworthy I find the article."

> If you think this is good writing then you're welcome to your opinion

I didn't say it's good writing. To the contrary, I said, "the writer just has a somewhat annoying blogging style, or lack of style."

The debate was never about the author's style but rather about the author's identity, i.e., human or machine.

> Textbooks don't contain section headers every few paragraphs.

Of course they do. I just pulled some off my shelves to look.

Not all textbooks do, but some definitely do.


I said it affects how trustworthy I find the article, when considered in combination with other aspects of this situation that don't add up to me.

After going through my technical bookshelf I can't find a single example that follows this header/bullet style. And meanwhile I have seen countless posts that are known to be AI-assisted which do.

Apparently we exist in different realities, and are never going to agree on this, so there is no point in discussing further.


> Textbooks don’t contain section headers every few paragraphs.

Yes they absolutely do. What are you even talking about?


> I know. I just mentioned them as another silly but common reason why people unjustly accuse professional writers of being AI.

The difference is that using em dashes is good, whereas the cringe headings should die in a fire whether they’re written by an LLM or a human.


Heuristics are nice but must be reviewed when confronted with actual counterexamples.

If this is a published author known to have written books before LLMs, why automatically decide "humans don't write like this"? He's human, and he does write like this!


The author is reputable, just look at the rest of their website.

Your accusation on the other hand is based on far-fetched speculation.


My writing from 5+ years ago was accused of being AI-generated by laymen because I used Markdown and emojis and dared to use headers for different sections in my articles.

It's kind of weird realizing you write like generic ChatGPT. I've felt the need to put human errors, less markup, etc. into stuff I write now.


> I've felt the need to put human errors, less markup, etc. into stuff I write now.

Don't give in to the nitwits!


The operating system does not. The App Store does, and unfortunately on iOS the App Store is the only way to download apps not included with the OS.


The second sentence is not true in the EU and Japan, two regions that actually care about end users.

> While I agree that entering a dark alley shouldn't result in ill effects, if ill effects happen in said dark alley it is still worth the discussion to remind people to stay out of dark alleys in today's day and age (or until the root problem, whatever it is, is improved).

This is not a dark alley. It's the main street. It's the world we live in. iPhone has more than half the market share in the US and well over a billion users worldwide. Moreover, Apple, Google, and Microsoft collectively monopolize consumer operating systems on both mobile and desktop. Try going into a retail store and buying a computing device that is not running iOS, Android, macOS, or Windows. That's the reality for most people.

The dark alleys are the non-mainstream options that hardly anyone knows about.


To further stretch the analogy: the main street is now full of potholes, sinkholes, and even landmines. The root problem is that, in exchange for convenience, we as a society have ceded too much power to these large businesses and we are now paying the price for it. We have bought the proverbial monorail [1] and now we are stuck with it.

[1] https://www.youtube.com/watch?v=taJ4MFCxiuo


> The root problem is that, in exchange for convenience, we as a society have ceded too much power to these large businesses and we are now paying the price for it.

I don't know why some people have made "convenience" into a dirty word. Almost everything we do is for convenience. You could live in a remote log cabin with no electricity and grow/hunt your own food, separating yourself from most of society, but that wouldn't be convenient or pleasant.

Individual consumers have very little power over the market. There's a collective action problem, which is why governments and regulation exist... or should exist. The way I see it, the root problem is a massive failure by (corrupt) governments to protect consumer rights.


How do governments become corrupt in the first place though, if they don't start that way? It's collective action problems all the way down.

Perhaps the root problem is that we've blown too far past Dunbar's number to be able to deal with the societies we live in. All of these systems we've contrived to mitigate the trust problem are full of holes.

As for convenience, that carries a tradeoff. All of the technology and all of the revolutions we've had (agricultural, industrial, information technology) have come with these tradeoffs. Even the log cabin has downsides compared to the nomadic hunter-gatherer lifestyle.


> How do governments become corrupt in the first place though, if they don't start that way?

I think the US government did start that way. Maybe not "corrupt" as such, but the United States was founded by plutocrats and was clearly designed to protect the minority of plutocrats against mass democracy.

> Even the log cabin has downsides compared to the nomadic hunter-gatherer lifestyle.

Yes, but I'd say the nomadic hunter-gatherer lifestyle has even greater downsides, and our current state of convenience is in many ways a vast improvement over the precarious existence of our distant ancestors.


> - Force prominent disclosure of refund policies. Epic Games doesn't allow them for IAP. Apple does.

Apple has no official App Store refund policy, either for IAP or for upfront paid apps. I've already looked for one. There's of course a form to request a refund, but refunds are entirely at Apple's discretion, for any reason or no reason, and Apple often exercises its discretion to refuse refunds.


I have never had Apple refuse a refund, and I’ve had an iTunes account since 2003.


> I have never had Apple refuse a refund

Good for you, but you're only one user out of more than a billion.

> I’ve had an iTunes account since 2003

I'm not sure how that's relevant, because the App Store opened in 2008. Also, Apple had a different CEO at the time.


The App Store was built on iTunes and used the same backend. The refund process hasn’t changed since then. Funnily enough, before the App Store you could buy Apple-curated apps for your iPod.

Have you heard reports of Apple not granting refunds?


> The App Store was built on iTunes and used the same backend. The refund process hasn’t changed since then.

I'm not talking about the technical process. Like I already said, "There's of course a form to request a refund".

> Have you heard reports of Apple not granting refunds?

Yes, many. Indeed, I've heard it from my own customers, as I'm an App Store developer myself.


> I would think making sure outside payment links aren’t scams will be more expensive than that because checking that once isn’t sufficient.

According to the ruling on page 42, "(c) Apple should receive no commission for the security and privacy features it offers to external links, and its calculation of its necessary costs for external links should not include the cost associated with the security and privacy features it offers with its IAP"


> Apple should receive no commission for the security and privacy features it offers to external links

I'm not versed in legalese, so maybe I misunderstand. Isn't it reasonable that Apple receives money for a service they provide, that costs money to run?


The case is really about the opposite: "what payment related services is Apple allowed to force people to use (and therefore pay for)". The court concluded that excludes both the payment service itself as well as the validation of the security of external payment services used in its place.


A service to whom? Protecting users is a service to users, not to developers. This is a selling point of iPhone, and thus Apple receives money from users when they pay for the iPhone.

Think about it this way: totally free apps with no IAP get reviewed by Apple too, and there's no charge to the developer except the $99 Apple Developer Program membership fee, which Epic already pays too.


> Think about it this way: totally free apps with no IAP get reviewed by Apple too, and there's no charge to the developer except the $99 Apple Developer Program membership fee

Yearly fee. And about $500 a year in hardware depreciation, because you can reasonably develop for Apple _only_ on Apple hardware.

This is _way_ more than Microsoft has ever charged, btw.


Protecting users is absolutely in the best interest of developers.


Forcing developers to go through Apple's arbitrary, capricious, slow review process is absolutely not in the best interest of developers.


I think this quote may speak to the question:

> The brain’s general object-recognition machinery is at the same level of abstractness as the language network. It’s not so different from some higher-level visual areas such as the inferotemporal cortex storing bits of object shapes, or the fusiform face area storing a basic face template.

In other words, it sounds like the brain may start with the same basic methods of pattern matching for many different contexts, but then different areas of the brain specialize in looking for patterns in specific contexts such as vision or language.

This seems to align with the research of Jenny Saffran, for example, who has studied how babies recognize language, arguing that this is largely statistical pattern matching.
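
A rough sketch of the statistical cue at issue (my own toy Python illustration, not Saffran's actual methodology): infants appear to track how predictable each syllable is from the previous one, and word boundaries tend to fall where that predictability dips.

    import random
    from collections import Counter, defaultdict

    def transitional_probabilities(stream):
        # Estimate P(next syllable | previous syllable) from a stream.
        counts = defaultdict(Counter)
        for prev, nxt in zip(stream, stream[1:]):
            counts[prev][nxt] += 1
        return {prev: {nxt: c / sum(f.values()) for nxt, c in f.items()}
                for prev, f in counts.items()}

    # Three made-up two-syllable "words" heard in random order, with no
    # pauses between them, mimicking a continuous speech stream.
    words = [["pre", "ty"], ["ba", "by"], ["ti", "ger"]]
    stream = [syl for _ in range(300) for syl in random.choice(words)]
    tp = transitional_probabilities(stream)

    print(tp["pre"].get("ty", 0))  # ~1.0: within a word, highly predictable
    print(tp["ty"].get("ba", 0))   # ~0.33: across a word boundary, a dip

That dip is the segmentation cue: in Saffran's studies, even 8-month-olds behaved as if they were sensitive to exactly this kind of statistic.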


Some of her research on this topic is covered in the Netflix series Babies, Season 1, Episode 4, "First Words."


I wouldn't read too much into the LLM analogy. The interview is disappointingly short and filled with a bunch of unnecessarily tall photographs. The interviewer, the one who brought up LLMs and ChatGPT and who has a history of writing AI articles (https://www.quantamagazine.org/authors/john-pavlus/), almost seemed to have an agenda to contextualize the research in this way. In general, except in a hostile context such as politics, interviewees tend to be agreeable and cooperative with interviewers, which means that interviews can be steered in a predetermined way, probably for clickbait here.

In any case, there's a key disanalogy:

> Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess).


The disanalogy you quote might actually be the key insight. What if language operates at two levels, like Kahneman's System 1/2?

Level 1: Nearly autonomic — pattern-matched language that acts directly on the nervous system. Evidence: how insults land before you "process" them, how fluent speakers produce speech faster than conscious deliberation allows, and the entire body of work on hypnotic suggestion, which relies on language bypassing conscious evaluation entirely.

Level 2: The conscious formulation you describe — the translator between perception and meaning.

LLMs might be decent models of Level 1 but have nothing corresponding to Level 2. Fedorenko's "glorified parser" could be the Level 1 system.


> LLMs might be decent models of Level 1

I don't think so. Fast speakers and hypnotized people are still clearly conscious and "at home" inside, vastly more "human" than any LLM. Deliberation and evaluation imply thinking before you speak but do not imply that you can't otherwise think while you speak.


The body of knowledge on Ericksonian hypnotherapy is pretty clear that the effect of language on Level 1 is orthogonal to, and sometimes even opposed to, conscious processes.

I became interested after being medically hypnotized for kidney stone pain. As the hypnotist spoke, I was consciously thinking: "this is dumb, it will never work." And yet it did.

That's exactly your point — I was fully conscious and "at home" the whole time, yet something was processing and acting on the language independently. The question is whether that something shares any computational properties with LLMs, not whether the whole system does.


"It's exactly your point — I was fully conscious and "at home" the whole time, yet something was processing and acting on the language independently."

It's unclear what you're referring to here. You were conscious, and you wanted to think the thought "this is dumb, it will never work," and you thought that. What was the independent process?


The hypnotism worked. There was an unconscious process at work whose relationship to the words was completely different from my conscious reaction.


I think you're creating a false dichotomy between meta-thinking and mere reflex, when in fact most conscious thinking is neither of those.

My understanding is that a hypnotized person is very focused on the hypnotist and suggestible but can otherwise carry on a relatively normal conversation with them. And certainly an unhypnotized chattering person is still conscious, aware of the context as well as the subject of their speech. You may find the speech dull and tedious, may even call it "mindless" as an insult, yet it's honestly impossible to dispute that there's an active human mind at work.


I don't think we're far apart. My claim isn't that Level 1 is "mere reflex". It's that language can produce effects at a level that operates independently of (and sometimes in opposition to) conscious evaluation. The hypnosis example is just a clean demonstration of that separation.

Whether LLMs are useful models for studying that level is an empirical question. They're not conscious, but they do learn statistical regularities in language structure, which may be exactly what Level 1 is optimized for.


> language can produce effects at a level that operates independently of (and sometimes in opposition to) conscious evaluation

I don't think this is a particularly interesting claim if "conscious evaluation" is understood so strictly that it excludes an ordinary motormouth.

