
I agree that discrimination would be a lot easier to objectively prove after the fact, but it would also be far easier for it to occur in the first place, since many hiring managers would blindly "trust the AI" without a second thought.


From my experience working on projects where we trained models, the first attempt is usually obviously and completely broken, and it takes a lot of iteration to get to a decent state. "Trust the AI" is not a phrase anyone involved would utter. It's more like: trust that it is wrong for any edge case we haven't discovered yet. Can we constrain the possibility space any more?
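
As a concrete sketch of what I mean by constraining the possibility space (the model, feature names, and rules below are all invented for illustration): instead of acting on a raw model score, wrap it in hard validation rules so inputs outside the region we actually tested get routed to a human instead of trusted.

    # Hypothetical sketch: wrap an untrusted model score in hard constraints.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Candidate:
        years_experience: float
        skills_matched: int

    def model_score(c: Candidate) -> float:
        # Stand-in for a trained model; in reality a learned function.
        return 0.1 * c.years_experience + 0.2 * c.skills_matched

    def constrained_score(c: Candidate) -> Optional[float]:
        # Reject inputs outside the range the model was validated on,
        # rather than trusting its extrapolation.
        if not (0 <= c.years_experience <= 50) or c.skills_matched < 0:
            return None  # outside the tested region: route to human review
        # Clamp the output to the range the downstream process expects.
        return min(max(model_score(c), 0.0), 1.0)

    print(constrained_score(Candidate(5, 3)))    # 1.0 (raw 1.1, clamped)
    print(constrained_score(Candidate(120, 3)))  # None -> human review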


Most hiring managers wouldn't make it to the end of the phrase "constrain the possibility space".


"Trust the AI" could mean uploading a resume to a website and getting a "candidate score" from somebody else's model.

Because I'll tell you, there are millions of landlords, and they blindly trust FICO when screening candidates. Maybe not as the only signal, but they do trust it without testing it for edge cases.
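
To make that concrete: at that level, "trust the AI" is a few lines of code whose failure modes the decision-maker never sees. This is a hypothetical sketch; the endpoint, field names, and cutoff are all invented:

    # Hypothetical sketch of blind trust in a third-party scoring service.
    import requests

    def screen(resume_pdf: bytes) -> bool:
        resp = requests.post(
            "https://scores.example.com/v1/candidate",    # made-up endpoint
            files={"resume": ("resume.pdf", resume_pdf)},
        )
        score = resp.json()["candidate_score"]  # opaque number from an unknown model
        return score >= 0.7                     # hard cutoff, never tested for edge cases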


It definitely could be so, particularly in these early days when frameworks and best practices are very immature. Inasmuch as you think this is likely, I suspect you should favor regulation of algorithmic processes over voluntary industry best practices.



