Hacker News

Generally, from what I've read about SETI@home, these systems work by running the same calculations on multiple computers. It's still possible to fool the system, but it gets harder the smaller the fraction of computers on the network you control (assuming everyone else runs an honest computer).
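A minimal sketch of that redundancy idea (names here are illustrative, not from the actual SETI@home/BOINC code): send the same work unit to several peers and accept the majority answer, so a cheater only wins when they control most of the replicas for that unit.

```python
from collections import Counter

def accept_result(replica_results):
    # Accept a result only if a strict majority of replicas agree on it.
    value, votes = Counter(replica_results).most_common(1)[0]
    if votes > len(replica_results) / 2:
        return value
    return None  # no majority: reissue the work unit to fresh peers

assert accept_result([42, 42, 7]) == 42   # one dishonest replica is outvoted
assert accept_result([1, 2, 3]) is None   # no agreement, result rejected
```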


In the case of neural network training, the cost of verifying that a gradient submitted by a peer reduces the cost function should be significantly less than the cost of generating that gradient, so you wouldn't even need to burn 2x effort to catch cheaters.
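A toy sketch of why verification is cheaper, using linear regression as a stand-in (all names and the learning rate are my own illustration): the verifier only needs two loss evaluations (forward passes), while the peer had to compute the full gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=64)

def loss(w):
    # Mean squared error: one "forward pass".
    r = X @ w - y
    return float(np.mean(r * r))

def gradient(w):
    # What an honest peer computes; the expensive part in a real network.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def verify(w, g, lr=0.1):
    # Verifier's cheap check: did applying the submitted gradient
    # actually reduce the loss? Costs two loss evaluations, no backprop.
    return loss(w - lr * g) < loss(w)

w = np.zeros(10)
honest = gradient(w)
assert verify(w, honest)  # an honest gradient passes the check
```

A random junk vector would usually fail this check, though as the follow-up comment points out, passing it doesn't prove the gradient is honest.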


Is it possible to submit a falsified gradient that still reduces the cost, just less than the true gradient would, while manipulating the network's behavior in a chosen direction?

Like, say, if one used a different label for some of the images in the batch when computing the gradient, while still using the correct labels for most of them?
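A sketch of that attack on a toy logistic regression (setup and names are my own, not from any real protocol): flip a small fraction of the batch labels before computing the gradient. For a small step size, the poisoned gradient typically still reduces the honest loss, so a naive "did the loss go down?" check accepts it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # honest labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    # Cross-entropy against the *honest* labels.
    p = sigmoid(X @ w)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def gradient(w, labels):
    return X.T @ (sigmoid(X @ w) - labels) / n

y_poisoned = y.copy()
y_poisoned[:6] = 1.0 - y_poisoned[:6]   # attacker flips ~10% of the batch

w = np.zeros(d)
lr = 0.1
g_honest = gradient(w, y)
g_poisoned = gradient(w, y_poisoned)

# Both gradients pass a naive "loss decreased" verification:
assert loss(w - lr * g_honest) < loss(w)
assert loss(w - lr * g_poisoned) < loss(w)
```

The poisoned update still points mostly downhill because 90% of the batch is honest, yet it systematically nudges the model on the flipped examples.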


Subsequent gradient updates from honest peers would probably wipe out the manipulation.




