Per the official numbers, Sweden has the highest rate of rape in the world.
Without knowing exactly what they considered homicide and what they considered a firearm-related homicide, these numbers are meaningless. Only a few years ago the CDC released a large study in which they chose to define a female forcing a male to penetrate as not being rape, resulting in very different rape statistics than if they had used the more common definition of 'non-consensual sex' or similar.
Great work guys. Lob's a great service and I wish you all the best.
Certainly rings a lot of bells here at Pwinty as we both launched about the same time.
Our first ever order arrived sooner than expected and was fulfilled by me running down to the local photo store with a memory stick, and then over to the post office. We then spent a few months with a commercial photo printer living in our dining room. We'd drape a blanket over it when guests came over for dinner, but it had a habit of whirring to life at the most inopportune times.
My wife tolerated this wonderfully for a good few months until we had a better alternative!
We're using GoCardless at Pwinty to bill our UK-based merchants - it works really smoothly, and is much cheaper than collecting via debit card or PayPal as we have to do in the rest of the world.
Best of luck to them- especially after hearing they're rolling out to Europe as well- exciting times!
Unfortunately the working time directive for doctors in the UK is also averaged over a longer time period, so my wife has just come off an 80+ hour week of night shifts.
Maybe I'm missing something, but I can't see anywhere the valuations the companies are raising at (e.g. I can see that Company X wants to raise $490,000, but not how much of the business that's for).
Surely this is question number 1 when you want to invest in a company?
EDIT - Found it once I clicked the "Apply to invest" button. Maybe it should be a bit more up front though.
Although TDD does give you more confidence to make changes to your code in future, I think you're overlooking a major benefit, which is allowing swifter code/test/debug cycles.
So rather than having to make a change, run up the app in my browser, and manually test that the latest bit of code is working, I can test individual functions just by running my unit tests.
Of course this doesn't obviate the need for browser based testing too - but it can reduce the amount of it you need.
TDD allows me to produce working code faster by helping me notice and fix my errors sooner. If you're an excellent coder, maybe you don't need this- but I certainly do!
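To give a concrete example of what I mean (the function and the pricing rules below are completely made up), this is the kind of check I can run in a second or two instead of reloading the app in the browser:

    # test_shipping.py - hypothetical example; the function name and the
    # pricing rules are invented purely to illustrate the quick cycle.
    import unittest

    def shipping_cost(weight_kg, express=False):
        """Flat rate plus a per-kg charge; doubled for express (made-up rules)."""
        base = 2.50 + 0.80 * weight_kg
        return base * 2 if express else base

    class ShippingCostTests(unittest.TestCase):
        def test_standard(self):
            self.assertAlmostEqual(shipping_cost(1.0), 3.30)

        def test_express_doubles_the_price(self):
            self.assertAlmostEqual(shipping_cost(1.0, express=True), 6.60)

    if __name__ == "__main__":
        unittest.main()

Running that after each small change tells me immediately if I've broken something, long before I get anywhere near the browser.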
Interesting - in the UK, Volvo drivers tend to be thought of as left-wing, cautious, careful, slow. The brand is also associated with the left-leaning newspaper 'The Guardian', as well as muesli and wearing socks with sandals!
Hmm, in the U.S., they seem to have aspects of both these stereotypes (at least at one time; I haven't lived in the U.S. in many years): cautious, slow, sandal-wearing, but also self-centered and kind of clueless, blithely mowing down pedestrians despite their avowed support for all living beings... ><
p.s. I've actually known some Volvo owners in the UK, and they really did read the Guardian!
p.p.s. I'm totally left wing, wear socks with my sandals, and eat tons of muesli... oO;
Surely hosting yourself exposes you to just as much risk, if not more? A problem in the datacentre where you're co-lo'd, or one of your servers blows up?
I think people not trusting the cloud is similar to how people feel safer driving their cars than taking a plane. The stats say the plane's safer, but people prefer being in control. People like the idea of being in control of their servers, even if that means there are hundreds of extra things that can go wrong compared to a cloud provider.
Cloud provider outages also get a lot more publicity, as LOTS of sites go down at once. Hardly anyone notices when service X, which self-hosts, goes down for a few hours...
This is true of every data center I've worked with. Also network providers: everyone has downtime, and sometimes you learn the hard way that, despite what's written into your contract, someone took the cheap way out and ran your “redundant” fiber through the same conduit a backhoe just tore up.
We're coloed across three datacenters spanning the US (one might be in TO I think) and if a datacenter were to go down, we have a hot backup that's no more than 12 hours stale.
The only real manual maintenance that we've got is a rolling reimaging of servers based on whatever's in version control, which usually takes a few hours twice a year, but we'd probably do that if we were in the cloud anyway.
When you can script away 90% of your system administration tasks, hosting in the cloud doesn't really make a ton of sense.
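To give a rough idea of what that scripting looks like (this is only a sketch - the hostnames, the load balancer commands and the reimage script are all hypothetical), the rolling reimage I mentioned is basically a loop like this:

    #!/usr/bin/env python3
    # Rolling reimage sketch. Hostnames, the "drain"/"undrain" commands and
    # reimage-from-git.sh are placeholders; the point is that this kind of
    # maintenance is a short script, not a weekend of manual work.
    import subprocess
    import time

    HOSTS = ["web1.example.com", "web2.example.com", "web3.example.com"]

    def run(host, cmd):
        # Run a command on a remote host over ssh; raise if it fails.
        subprocess.run(["ssh", host, cmd], check=True)

    for host in HOSTS:
        run("lb.example.com", f"drain {host}")                # take it out of rotation
        run(host, "sudo /usr/local/bin/reimage-from-git.sh")  # rebuild from version control
        run(host, "sudo reboot")
        time.sleep(300)                                       # give it time to come back
        run(host, "curl -fsS http://localhost/healthz")       # sanity check before re-adding
        run("lb.example.com", f"undrain {host}")

One server at a time, so the cluster stays up the whole way through.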
But a DNS-based failover is still going to take an hour or so to propagate, right (given that a lot of browsers/proxies/DNS servers don't respect TTL very well at all)? And then you end up with a system with stale data, and the mess of trying to reconcile it when your other system comes back up.
I'd take an hour long Appengine outage once a year over that anytime!
Your name server or stub resolver is what respects DNS TTL, not your browser or proxy. Everyone - including people hosting on AWS - needs to be able to fail over DNS, if the AWS IP you're using is in a zone that just went down, for example.
Any time you have an outage you need to contact your service provider to get an estimate of downtime. If they can't give you one, assume it'll take forever and cut the DNS over. The worst case is some of your users will start to come back online slowly. If you don't cut over, the worst case is all your users are down until whenever the service provider fixes it, and you get to tell your users "we're waiting for someone else to deal with it", which won't make them very happy.
12 hour stale data sounds kind of long to me. 4 hours sounds more reasonable.
I've seen plenty of crappy ISP DNS servers ignore TTL values and cache DNS entries for many hours longer than they're supposed to. Unfortunately, it's all too common.
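If you want to see that for yourself, compare what different resolvers hand back for the same record - a quick sketch using the dnspython library (the domain and the resolver IPs are just placeholders):

    # Compare the remaining TTL that different resolvers report for one record.
    # Requires dnspython (pip install dnspython). Domain and IPs are examples only.
    import dns.resolver

    DOMAIN = "example.com"
    RESOLVERS = {
        "Google": "8.8.8.8",
        "Some ISP": "203.0.113.53",
    }

    for name, ip in RESOLVERS.items():
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [ip]
        answer = r.resolve(DOMAIN, "A")
        # answer.rrset.ttl is the remaining TTL the resolver reports; on a
        # well-behaved cache it counts down from the zone's TTL and never exceeds it.
        print(f"{name}: {answer[0].address} (TTL {answer.rrset.ttl}s)")

If a resolver is still handing back the old address long after the TTL should have expired, that's exactly the kind of cache that drags out a DNS failover.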
In that case, what is the ratio of "time spent doing ops-related tasks" vs "time spent developing new features" in your company? Please offer an honest evaluation. Everything has a cost; I'm genuinely curious about data points other than my own.
Today, maybe, assuming a calm ocean and no scaling issues. But I don't believe you spent an hour a week setting up your three data centers, backups, failover procedure, etc.
The "safety" of the cloud is about two things: 1. trusting your service provider, and 2. redundancy.
You have to trust your cloud provider. They control everything you do. If their security isn't bulletproof, you're screwed. If their SAN's firmware isn't upgraded properly to deal with some performance issue, you're screwed. If their developers fuck up the API and you can't modify your instances, you're screwed. You have to put complete faith in a secret infrastructure run for hundreds of thousands of clients so there's no customer relationship to speak of.
That's just the “trust” issue. Then there's the issue of actual redundancy. It's completely possible for a cloud provider to have a network-wide outage. There will be no redundancy, because their entire system is built to operate in unison; one change affects everything.
Running it yourself means you know how secure it is, how robust the procedures are, and you can build in real redundancy and real disaster recovery. Do people build themselves bulletproof services like this? Usually not. But if you cared to, you could.
It really depends on how many servers you have and how good your sysadmins are. If you have 1 server in a cheap colo, then yes, the cloud is probably better. If you have a GOOD hosting provider and build out a cluster that's redundant at every tier, you can easily beat the major cloud providers' uptime.
We run about 11 client clusters on ~250 servers across 3 data centers in the US and Europe. Each of our clients' uptime is very, very close to 100%, and we've NEVER lost everything, even for 1 second.