
Valid critique, but it addresses a problem above the ML layer, at the human layer. :)

That said, your comment carries an implication worth drawing out: in which fields can we trust the data if incentives are poor?

For instance, many Alzheimer's papers were undermined after journalists unmasked foundational research as academic fraud. Which conclusions are reliable and which are questionable? Who should decide? Can we design model architectures and training to grapple with this messy reality?

These are hard questions.

ML/AI should help shield future generations of scientists from poor incentives by maximizing experimental transparency and reproducibility.

Apt quote from Supreme Court Justice Louis Brandeis: "Sunlight is the best disinfectant."



Not an answer, but a contributory idea: meta-analysis. There are plenty of strong meta-analyses out there, and one thing they tend to do is weight each paper by its methodological rigour along with its relevance to the combined question being analyzed. Could we use this weighting explicitly in the training process? A sketch follows below.
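A minimal sketch of what that might look like, assuming each training example carries a hypothetical per-paper rigor score in [0, 1] (e.g., produced by a meta-analytic quality assessment); the model, optimizer, and the score itself are placeholders, not any specific system:

    # Sketch: scale each example's loss by a rigor score, so examples
    # from methodologically weaker papers contribute less to gradients.
    import torch
    import torch.nn as nn

    def rigor_weighted_step(model, optimizer, inputs, targets, rigor):
        """One training step with per-example losses scaled by rigor.

        rigor: tensor of shape (batch,), values in [0, 1], hypothetical
        quality scores attached to each example's source paper.
        """
        optimizer.zero_grad()
        logits = model(inputs)
        # reduction="none" keeps one loss value per example
        per_example = nn.functional.cross_entropy(
            logits, targets, reduction="none"
        )
        # Down-weight low-rigor sources before averaging
        loss = (rigor * per_example).mean()
        loss.backward()
        optimizer.step()
        return loss.item()

The same idea could also be applied at the sampling stage instead, drawing batches in proportion to rigor rather than scaling the loss.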


Thanks. This is helpful. Looking forward to more of your thoughts.

Some nuance:

What happens when the methods are outdated/biased? We highlight a potential case in breast cancer in one of our papers.

Worse, who decides?

To reiterate, this isn’t to discourage the idea. The idea is good and worth considering, but it doesn’t (yet) escape the core issue of when something becomes a “fact.”



