
Only difficult because the criteria are misaligned. We diagnose schoolchildren more consistently because we subject them to strict, measured criteria (school), and can point to the data (grades/homework) as objective evidence.

Why do we care so much about objective evidence? Because of prohibition. Prescribing stimulants isn't illegal because it is difficult to diagnose ADHD. It's difficult to diagnose ADHD for the very same reason it's illegal to prescribe stimulants: our society values prohibition of drugs over actual healthcare. An ADHD diagnosis implies a compromise of prohibition, so our society has structured the means to that diagnosis accordingly.

Experts in the field estimate a very high incidence of undiagnosed ADHD in adults. During the height of the COVID-19 pandemic, telehealth services were made significantly more available, which led to a huge spike in adult ADHD diagnoses. Instead of reacting by making healthcare more accessible to people with ADHD, our society backslid: decrying telehealth providers as "pill mills" and generating a medication shortage out of thin air.


Very well said. I think it's hard for people to understand that ADHD is simultaneously overdiagnosed and underdiagnosed.

I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".

It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.

These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.


Absolutely!

The entire narrative of "cheating" is a giant misdirect. People don't actually care about cheating, they care about fun. If a player is making the game less fun, it does not matter how.

The real problem is that ~10 years ago major game studios decided to monopolize server hosting. This means that the responsibility of moderation is now in their hands. The only way this problem can ever be resolved is by giving the authority to moderate servers back to players. Until then, the responsibility to moderate will be unmet, no matter how fascist and authoritarian game studios become. Fascism cannot guarantee fun!


> fascist

How are game studios "fascist"?


People are still playing Battlefield 4 (2013) on user-hosted servers. Right now.

The only way that "around the world" can be relevant is ping, and the best way to manage ping is by sorting a list of servers by ping.
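
That sorting is trivial once you have latency numbers. A rough TypeScript sketch, using an HTTP round trip as a stand-in for however the client actually measures ping (real server browsers use dedicated query protocols, but the principle is the same):

    interface ServerInfo {
      name: string;
      url: string;
      pingMs?: number;
    }

    // Approximate latency with one HTTP round trip.
    async function measurePing(url: string): Promise<number> {
      const t0 = performance.now();
      await fetch(url, { method: "HEAD", cache: "no-store" });
      return performance.now() - t0;
    }

    // Measure every server, then sort ascending by ping.
    async function sortByPing(servers: ServerInfo[]): Promise<ServerInfo[]> {
      for (const s of servers) {
        s.pingMs = await measurePing(s.url);
      }
      return [...servers].sort(
        (a, b) => (a.pingMs ?? Infinity) - (b.pingMs ?? Infinity),
      );
    }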

Cheating is an arms race that no one needs to participate in. Moderation was a perfectly good workaround until major game studios decided to monopolize server hosting.


What, 2000 players? 5000?

Moderating that game is a task multiple orders of magnitude smaller than moderating major titles.

No Battlefield game is even in the top 100 of esports earnings.


My point is that player-moderation scales, while corporate moderation does not. The fact that there are more players on corporate moderated servers only makes this reality more significant.


The reality of what exists today shows that the opposite is true.

Pulling out a few games with fewer than 5,000 players globally isn't remotely comparable to having hundreds of thousands of players daily.

It's amplified even more for games that have a global ranking system and a professional competitive scene.


Studios could release the server files when the game is EOL…


I wouldn't measure anticheat success by esports earnings.


It's clearly one significant measure. What do you think is going to happen to tournament money if every other tournament has a cheater? How many esports fans want to go play League after watching Faker decimate another team if they have cheaters in their match every other day?

What it tells you most of all is popularity and incentive to cheat. Cast a big enough net and you'll inevitably find cheaters. The bigger the net, the more cheaters you'll collect.


Don't worry about cookies or bother using a VPN, because... you are being tracked anyway? What's the point of including such a defeatist stance?

> the real world across industry, academia, and government.

Gotcha, so no one here gives a shit about privacy. They only care about avoiding the inconveniences of fraud and leaked secrets.

Use a password manager and a feature-complete adblocker (uBlock Origin on Firefox). Send messages over end-to-end encrypted channels. Use a VPN along with your adblocker and some kind of cookie/browser-ID isolation if you don't want your traffic stalked.


BTW, I really would like to have a way to partially clear cookies – i.e., I don't want to be signed out of Gmail, and maybe not out of the Mechanic's Bank of Alaska or Amazon or Netflix, but most other things could go. I don't think this is easy in Chrome, Safari, or other mainstream browsers, is it?

Yes, yes, I do know that Big Ad can mostly stitch together some proxy profile of me anyway, but it would be blurrier.


Firefox has a great feature for this: multi-account containers. The UI is trash, but it's usable.
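
If containers feel like overkill, the selective clearing itself is also scriptable: a WebExtension with the "cookies" permission can wipe everything outside an allowlist. A minimal sketch (the KEEP domains are just examples, not recommendations):

    // Background script; requires the "cookies" permission.
    const KEEP = ["google.com", "amazon.com", "netflix.com"];

    async function clearAllButAllowlisted(): Promise<void> {
      const cookies = await browser.cookies.getAll({});
      for (const c of cookies) {
        const host = c.domain.replace(/^\./, "");
        if (KEEP.some(d => host === d || host.endsWith("." + d))) continue;
        // Reconstruct a URL the cookie matches so remove() can find it.
        await browser.cookies.remove({
          url: `${c.secure ? "https" : "http"}://${host}${c.path}`,
          name: c.name,
          storeId: c.storeId,
        });
      }
    }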


Just use a separate browser profile for your critical accounts.


Doesn't this leak info when a link clicked in, say, Gmail opens in another profile? Most URLs have pretty long extra strings in them that I assume are effectively cookie-equivalent?


What if a 3-day-old human knew how to walk? I don't think that would look any different, because they physically can't do it anyway.

The first couple years of human development completely change the structure of the body. Walking is only possible after a significant amount of that process has happened, and the body keeps developing even after you learn how to walk.

A three-minute-old horse is both structurally and mentally prepared to run. A three-year-old horse will be taller and heavier, but not structurally different enough to change what walking is to its brain.

What a horse can never do as well as a human is learn a completely new behavior. Our brains are unmatched for flexibility in learning. Infant humans don't need to be born with the knowledge or the structure for walking. Both can develop together over time because our brains are able to develop new behavior.

The mystery here is the difference between a horse thinking "legs go" and a human thinking "legs that are just ready to hold me up, do what I see other people do, and don't fall over". We only have a vague linguistic model to express our understanding of the underlying complexity.


Seems like they are really jumping to conclusions here.

> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are some serious problems lurking in the narrative here.

Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
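
Here is that timing artifact reduced to a toy, not the study's actual model: a detector whose training windows span the labeled onset will report its first hit at the start of the window, i.e. before the label, by construction:

    // Toy template matcher. If `template` was fit on windows that include
    // samples *before* the labeled onset, the first threshold crossing
    // necessarily lands before the label. No precognition required.
    function firstDetection(
      signal: number[],
      template: number[],
      threshold: number,
    ): number {
      for (let t = 0; t + template.length <= signal.length; t++) {
        let score = 0;
        for (let i = 0; i < template.length; i++) {
          score += signal[t + i] * template[i];
        }
        if (score >= threshold) return t; // reported at the window start
      }
      return -1; // never crossed the threshold
    }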

> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are two overconfident assumptions at play here:

1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.

2. Whatever brain patterns happened before this arbitrary moment are relevant to the patient's intention.

We're supposed to resolve the contradiction here by accepting both: the first assumption is correct, and the second assumption is also correct, so the second somehow doesn't invalidate the first. How? Because the window before the "conscious attempt" gets a special name: "precognition"... Tautological nonsense.

Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:

> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

So the model is not responding to her intention. That's supposed to support your hypothesis how?

---

These are exactly the kind of narrative problems I expect any "AI" research to be buried in. How did we get here? I'll give you a hint:

> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.

This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.

By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:

LLMs are able to perform logical deduction. They solve riddles and math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".

This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
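
Mechanically, the whole "decision" is one weighted coin flip over the model's output distribution. A minimal sketch (the logits would come from the trained network):

    // Softmax over logits, then sample one index by weighted chance.
    function sampleToken(logits: number[]): number {
      const max = Math.max(...logits); // subtract max for numerical stability
      const exps = logits.map(l => Math.exp(l - max));
      const total = exps.reduce((a, b) => a + b, 0);
      let r = Math.random() * total;
      for (let i = 0; i < exps.length; i++) {
        r -= exps[i];
        if (r <= 0) return i;
      }
      return exps.length - 1; // guard against floating-point drift
    }

There is no deduction anywhere in that loop, only a learned distribution and a random draw.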

Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.


How could you be so naive?

This article is as absurdly biased as it could be! Of course they provided a quoted response from GrapheneOS devs: that's the only appeal to credibility they have.

A truly responsible journalist would explain to their audience what is actually at stake, not simply spout every available position as if they were all equivalent.


The only problem with that train of thought is that you are advocating a lower standard. Backdoors are not a superior option in any circumstance whatsoever.

The standard of conduct we need (and are failing) to hold politicians and cops to is actual security and responsibility. Some of the most powerful politicians in the world are leaking private conversations, and no one is holding them accountable. Police are paying private corporations (notably Flock) to build giant monolithic datasets from stalking private citizens, yet neither party is held to any standard whatsoever.


> Human reaction time is around 200ms

Even if you are talking about the entire loop, that sounds pretty high. Maybe if it's moving your hands in reaction to an unexpected stimulus at your feet...

We can tell the difference between 60fps (~16ms per frame) and 120fps (~8ms per frame). Anything slower than that is a noticeable amount of waiting.

It does get complicated, though. What if the information is presented immediately, then animated? Well, that's where a complete measurement of reaction time would be relevant.

Even so, as you pointed out, we often predict what we will be doing in advance, and can perform a sequence of learned actions much more quickly. If there is a delay imposed before you can perform an action, then you must learn the delay, too. That learning process involves making mistakes (attempting the action before the animation is over), which is extra frustrating, considering how unnecessary it is.


https://humanbenchmark.com/tests/reactiontime

You'll probably see around 200ms. Not saying that's the relevant number in this discussion, but that's probably where the number comes from.


On mobile, I consistently get just under 400ms. I suspect using a mouse would get me closer to 200ms, since I would be resting my finger on the button.

So yes, total reaction time is generally quite long, but most of that time is spent performing "action".

That site would be more interesting if it provided a second interface where you do something predictable, like match a repeating beat.

