This was discovered automatically. That's one benefit of a machine learning-based system: you feed it a lot of data, tell it when fraud actually occurred, and it adapts its rules to predict fraud accordingly.
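To sketch the idea (a toy example, not Sift's actual pipeline; the features, labels, and numbers here are all made up): train a classifier on historical transactions labeled fraud/not-fraud, and it learns which patterns matter on its own.

```python
# Toy sketch: fit a classifier on labeled historical transactions,
# then score a new transaction. Features and data are invented.
from sklearn.ensemble import RandomForestClassifier

# Each row: [local_hour_of_order, minutes_since_signup, order_amount_usd]
X_train = [
    [3, 5, 250.0],       # 3am order, 5 min after signup, $250
    [14, 10000, 40.0],   # afternoon order from an established account, $40
    [2, 8, 900.0],
    [11, 52000, 25.0],
]
y_train = [1, 0, 1, 0]   # 1 = confirmed fraud (e.g. via chargeback), 0 = legit

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability that a new order is fraudulent
new_order = [[3, 6, 480.0]]
print(model.predict_proba(new_order)[0][1])
```

The point is that nobody hand-writes the "2-4am" rule; the model picks it up from the labeled examples.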
Not quite... an order is more likely fraudulent when it's placed at 2-4am local time. Local time for the fraudster (in Vietnam or wherever else), not local time for the site they're defrauding.
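To make the distinction concrete (a minimal sketch with hypothetical timestamps and zones), the "local hour" feature is derived from the customer's time zone, not the site's:

```python
# Same UTC order time, two very different "local hours".
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

order_utc = datetime(2013, 9, 12, 19, 30, tzinfo=timezone.utc)

site_hour = order_utc.astimezone(ZoneInfo("America/Los_Angeles")).hour   # 12 (12:30pm)
customer_hour = order_utc.astimezone(ZoneInfo("Asia/Ho_Chi_Minh")).hour  # 2 (2:30am)

# 2am in the customer's time zone falls in the risky 2-4am window,
# even though it's midday where the merchant is.
print(site_hour, customer_hour)
```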
E-commerce fraud takes many forms. Three main types impact merchants: payment fraud, new account fraud, and account takeover. We recently described all three in a Sift blog post: http://ow.ly/oPrS2.
In the stolen card scenario you describe (a type of payment fraud), the e-commerce merchant is actually liable. In other words, if that card is reported stolen by the cardholder after the goods are shipped out, the merchant loses the revenue. This is because online credit card transactions are categorized as "card not present" transactions, since the merchant can't be as certain the actual cardholder made the purchase. If the transaction had occurred in a physical store, the card company would be liable.
We use the time zone of the customer rather than of the website for scoring riskiness. Sorry if that wasn't clear.
As for customers including birth years in their emails, you're correct that this would be counter to the general trend. However, a fraud detection system like Sift has many data points on which to score a customer. Hopefully, the person would otherwise look benign and thus not have a very high overall score.
These results are averaged across many different types of e-commerce companies. So you're correct that a particular company shouldn't necessarily set up a rule to flag transactions occurring 3-10 mins from account creation.
At Sift Science, a user is flagged based on a combination of many different factors. So while a transaction 3-10 minutes after signup is associated with increased risk of fraud, a user typically has to match many different patterns to be flagged as an overall risk.
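As a toy illustration of why one pattern alone isn't enough (invented signals, weights, and threshold; not Sift's actual scoring), each risky pattern contributes to an overall score and only the combination crosses the flagging threshold:

```python
# Hypothetical weak signals, each with a made-up weight.
signals = {
    "order_3_to_10_min_after_signup":     (True, 0.25),
    "local_hour_between_2_and_4am":       (False, 0.30),
    "email_has_digits_but_no_birth_year": (True, 0.15),
    "shipping_billing_mismatch":          (False, 0.30),
}

score = sum(weight for fired, weight in signals.values() if fired)
FLAG_THRESHOLD = 0.6

print(f"risk score: {score:.2f}, flagged: {score >= FLAG_THRESHOLD}")
# Here two matching patterns (0.40) aren't enough on their own;
# a user matching several patterns at once would cross the threshold.
```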
That list is actually supposed to show the people they're looking for as far as I can tell (since you can expand the photos for more info and ways to contact the police). If you have a list of most wanted and 90% of them are minorities, what do you do? Do you propose a politically correct list where you can only list an equal number of people of each ethnicity and each gender?
They're only processing $10M of transactions/month, so even if they were to start charging fees, that wouldn't translate into much revenue. Agree that this is a great exit for the team.