
As I said, my Bayesian theory is rusty, but there are no "frequentist properties" of an estimator. Frequentist inference is inference -- it doesn't make guarantees about the underlying quantity being approximated; it provides guarantees about the approximation itself.

The key here is that Bayesian and frequentist procedures provide different sorts of guarantees. Frequentists provide guarantees that hold across the whole set of parameters \theta that could have generated the data X, while Bayesians place a prior on a single \theta (this might come from an "expert") and simply optimize the expectation conditioned on the data. Neither is "wrong," but in the case of the bootstrap, the result is calibrated in a way that Bayesian inference simply never will be (if it were, it would be frequentist).
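The calibration claim can be checked empirically: draw many datasets, build a bootstrap interval from each, and count how often it covers the truth. A minimal sketch in Python with numpy (the 95% nominal level, sample sizes, and normal data are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 5.0
n, n_boot, n_trials = 50, 500, 300

covered = 0
for _ in range(n_trials):
    x = rng.normal(true_mean, 2.0, size=n)
    # Bootstrap: resample x with replacement and recompute the mean each time.
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = x[idx].mean(axis=1)
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    covered += (lo <= true_mean <= hi)

# Empirical coverage should land near the nominal 95% -- that is what
# "calibrated" means here: a repeated-sampling guarantee about the procedure.
print(f"empirical coverage: {covered / n_trials:.3f}")
```

The guarantee is a statement about the interval-producing procedure over repeated samples, not about any single interval.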

EDIT: Regarding your second question, actually I think it's not more interesting. A classifier is a type of estimator, so all of the general frequentist guarantees still apply to decision-making.



Frequentist statistics is about determining the repeated-sampling properties of a procedure/statistic/estimator. It's about evaluation, not estimation. "Optimizing \theta" or whatever you're envisioning is just one possible procedure you might be interested in evaluating. You can use the repeated-sampling properties of your procedure to do frequentist inference, or to evaluate other properties like unbiasedness, consistency, risk, etc. Typically the goal is to find procedures that have "good" frequentist (repeated-sampling) properties. Most Bayesian-inclined statisticians would tend to argue that many frequentist properties are not important to applied data analysis or optimal decision making.



