Hacker News | k4j8's comments

The author disagrees with Harvard psychologist Ellen Langer, whom Levitt interviewed on his podcast, People I Mostly Admire, which is part of the Freakonomics Radio Network but is not the main show, Freakonomics. The author thinks Levitt should have been more critical of his guest. Perhaps, but this is a podcast, not peer review.

In my opinion, Levitt didn't even say he agreed with Langer, although he did compliment her work.

Disclaimer: I'm a huge fan of all the Freakonomics shows. I appreciate the author pointing out some opposing views and think the post is well-written, although exaggerated and overly emotional.


> The author disagrees with Harvard psychologist Ellen Langer,

He might, he might not. What he definitely does think is that there have been several in-depth critiques of Langer's work, and that it does the listener of Freakonomics a disservice by apparently not taking them into account in any way (certainly not mentioning them).

The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".


> What he definitely does think is that there have been several in-depth critiques of Langer's work

And one of those in-depth critiques, which is linked to in the post, is by the author of the article himself.

> The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".

This seems like a distinction that's not really worth making. The author of the post is a statistician, and he's published a detailed critique (see https://news.ycombinator.com/item?id=41974050 ) that says that Langer did the stats wrong. So sure, he is saying "the experimental design, sample size and statistical analysis do not support the claims Langer is making", which seems equivalent to "she is wrong" when you're a statistician.


Gelman leaves the door reasonably ajar on the possibility that Langer is right about effects in the world, but firmly closes it on the possibility that the statistical analysis Langer presents supports this belief.


Well, we'll just have to reasonably disagree on the final interpretation, then. I will say that, from reading this closing section of Gelman's paper, it's about as harsh a condemnation as I've ever seen in an academic paper - he essentially says it's not science, just something masquerading as science. Written from one academic to another, that's basically the equivalent of "you're full of shit":

> 4.4. Statistical and conceptual problems go together

> We have focused our inquiry on the Aungle and Langer (2023) paper, which, despite the evident care that went into it, has many problems that we have often seen elsewhere in the human sciences: weak theory, noisy data, a data structure necessitating a complicated statistical analysis that was done wrong, uncontrolled researcher degrees of freedom, lack of preregistration or replication, and an uncritical reliance on a literature that also has all these problems.

> Any one or two of these problems would raise a concern, but we argue that it is no coincidence that they all have happened together in one paper, and, as we noted earlier, this was by no means the only example we could have chosen to illustrate these issues. Weak theory often goes with noisy data: it is hard to know how to collect relevant data to test a theory that is not well specified. Such studies often have a scattershot flavor with many different predictors and outcomes being measured in the hope that something will come up, thus yielding difficult data structures requiring complicated analyses with many researcher degrees of freedom. When underlying effects are small and highly variable, direct replications are often unsuccessful, leading to literatures that are full of unreplicated studies that continue to get cited without qualification. This seems to be a particular problem with claims about the potentially beneficial effects of emotional states on physical health outcomes; indeed, one of us found enough material for an entire Ph.D. dissertation on this topic (N. J. L. Brown, 2019).

> Finally, all of this occurs in the context of what we believe is a sincere and highly motivated research program. The work being done in this literature can feel like science: a continual refinement of hypotheses in light of data, theory, and previous knowledge. It is through a combination of statistics (recognizing the biases and uncertainty in estimates in the context of variation and selection effects) and reality checks (including direct replications) that we have learned that this work, which looks and feels so much like science, can be missing some crucial components. This is why we believe there is general value in the effort taken in the present article to look carefully at the details of what went wrong in this one study and in the literature on which it is based.


This one break area at work had some cookies sitting out one time for people to grab. That was 6 months ago, but I still check every time I pass it...


Keeping your credit frozen permanently is a great idea. Some of the credit agencies even encourage this with features such as temporarily unfreezing your credit for a few days or weeks, after which it automatically returns to the frozen state.


Before switching to Linux for gaming, I recommend checking your favorite games on https://www.protondb.com/, an unofficial database of Proton compatibility reports.


I've had some success with sites such as Facebook Groups and Meetup. They host groups that organize events to connect strangers over a shared interest, in my case casual hiking and board games.


My homebrew solution uses comments, either surrounding the machine-specific code or inline with it, to mark which machines that code applies to. The program then comments or uncomments the code appropriately as it is backed up and restored, roughly like the sketch below.
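For illustration only, here's a minimal Python sketch of that comment/uncomment idea. The `#[hostname]` tag and the shell-style `#` comment character are assumptions made up for this example, not filetailor's actual syntax or code:

    import socket

    # Assumed inline tag format for this sketch (not filetailor's real syntax):
    #   alias ll='ls -la --color'  #[laptop]
    MARKER = "#["

    def restore_line(line: str, host: str) -> str:
        """Comment or uncomment one tagged line for the current machine."""
        if MARKER not in line:
            return line                        # untagged lines pass through unchanged
        code, tag = line.rsplit(MARKER, 1)
        target = tag.rstrip().rstrip("]")      # e.g. "laptop"
        stripped = code.strip()
        if target == host and stripped.startswith("#"):
            stripped = stripped.lstrip("#").lstrip()   # uncomment for this machine
        elif target != host and not stripped.startswith("#"):
            stripped = "# " + stripped                 # comment out for other machines
        return f"{stripped}  {MARKER}{target}]\n"

    def restore_file(path: str) -> None:
        """Rewrite a restored dotfile so only this machine's tagged lines are active."""
        host = socket.gethostname()
        with open(path) as f:
            lines = [restore_line(line, host) for line in f]
        with open(path, "w") as f:
            f.writelines(lines)

Backup would run a similar pass in the opposite direction (or simply copy the file as-is, since the tags carry the machine-specific information), and a real tool would need to handle different comment characters per file type, which this sketch hard-codes as `#`.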

What do you think of this approach? Would that avoid the complexity-hiding problem?


I'd have to see it to have much of an opinion. Have a GitHub link, by chance?


See the example usage via the link below.

https://github.com/k4j8/filetailor#example-usage

The program works great for my use case, but I don't have a CS background and this was my first big project, so I'm sure it's full of bugs and poor coding practices.


I didn't discover Chezmoi until seeing this thread (sigh). I developed a tool, filetailor, with an almost identical goal (dotfile management while accounting for differences across machines). It uses Python and YAML, but from what I can tell is similar in concept to Chezmoi.

https://github.com/k4j8/filetailor

One thing I like about filetailor that I didn't see in Chezmoi is the ability to surround code with a comment specifying which machines it should be commented or uncommented for. It's easier than templates in some situations.

It works great for me, but there are probably tons of bugs that would show up when it's used by someone other than me. I don't have a CS background, and this was my first big hobby project.

