
I was reading about your company and really liked your post about benchmarking bot detection systems (https://research.roundtable.ai/bot-benchmarking/).

I find myself wondering why Google isn't doing as good a job as you guys. In particular, I'm wondering if they've been somewhat complacent about the problem because they don't see their captchas as performing particularly poorly, and whether, once AI agents really take off and become prevalent, Google will buckle down and leverage its troves of data to build a new captcha even better than yours. Or do you guys have an additional secret sauce?

That kind of leads me to my other question: I assume your secret sauce is the cognitive-science, Stroop-esque approach to bot detection (which I think is brilliant in the abstract). But I'm curious how that scales. Do you go one by one through each website you have a partnership with and figure out how a human would likely use it, or do you rely on general techniques (besides the obvious ones like typing speed and mouse movements, which Google could almost certainly implement too if it decided to buckle down; a rough sketch of what I mean is below)? Can you scale some broad Stroop task across websites?
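
For concreteness, here's roughly what I mean by those "obvious" behavioral signals. This is just a toy sketch in TypeScript, all names are mine and hypothetical, not anything from your product:

    // Toy sketch of the "obvious" behavioral signals mentioned above
    // (mouse movement dynamics). Names are hypothetical; a real
    // detector would feed many such features into a trained model.

    interface PointerSample { x: number; y: number; t: number }

    const samples: PointerSample[] = [];

    document.addEventListener("mousemove", (e) => {
      samples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
    });

    // Humans produce jittery, curved trajectories with variable speed;
    // naive bots tend toward straight lines at near-constant velocity.
    function velocityVariance(points: PointerSample[]): number {
      const speeds: number[] = [];
      for (let i = 1; i < points.length; i++) {
        const dx = points[i].x - points[i - 1].x;
        const dy = points[i].y - points[i - 1].y;
        const dt = points[i].t - points[i - 1].t || 1; // avoid divide-by-zero
        speeds.push(Math.hypot(dx, dy) / dt);
      }
      if (speeds.length === 0) return 0;
      const mean = speeds.reduce((a, b) => a + b, 0) / speeds.length;
      return speeds.reduce((a, b) => a + (b - mean) ** 2, 0) / speeds.length;
    }

    // Crude decision rule: suspiciously uniform motion looks scripted.
    function looksScripted(points: PointerSample[]): boolean {
      return points.length > 20 && velocityVariance(points) < 1e-6;
    }

That kind of thing seems trivially replicable by anyone with Google's data, which is why the Stroop angle interests me more.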

By the way, I'm asking all of this out of admiration. You guys are doing really clever work!
