"...but nobody in the business of deploying production Posgresql should ever have been using the various models of desktop SSD that were tested." this is one of the reasons why this article was written. To show you how to test, to see what you should use and what you shouldn't in production. When we did these tests we already knew we shouldn't use those SSDs, we mostly did it just for the sake of testing, to see if the tests we wanted to use make any sense and to show others how to do it.
Also, I know it may sound funny to use these SSDs in production, but when we started with our game Top Eleven we didn't know that. When you have 5 people working on a game, you don't have the time (or the resources) to think about which type of SSD to use; you order a server with an SSD, you get an SSD (you usually don't even have a choice when you rent), and you use it. This is still how it works with almost any server rental company.
Ok, you didn't know something that's common knowledge in the industry - that's reasonable. I'm sure there are plenty of things I don't know that I should. But there are only three people in our company and we spend quite a bit of time worrying about SSD reliability because our business depends on it :)
I don't know what type of business you're in, but in gaming, if one server breaks it's not the end of the world; nobody is going to die or lose money over it :P
Yeah, I know these are old drives. We did this testing about a year and a half ago, and when we started renting servers more than 5 years ago we didn't think about which SSDs were in them. At that time we didn't have the option to choose, nor the time to think about the implications of different SSD models, so we used what we got. Later on, when we grew and started having problems, we started investigating which models we should use. The main reason for writing this article is to make people aware that they should test their drives, and to show them how to do it.
What was the reasoning behind not testing the Samsung with cache and barriers both "On"? Is it so much slower that it's not worth testing? Shouldn't barriers let the disk's cache "know better" how to organize writes and still be faster than running with the cache turned off?
The "disk cache" is a disk hardware option (how it uses its own RAM), if I understood, and the barriers are just an option of the FS behavior (software). I'd expect that the performance penalty to the former is much higher than to the later?
The reason is that the performance penalty for barriers On + disk cache On was much higher than for barriers Off + disk cache Off, at least that's how it looked on our test system. Of course, that could be due to the fact that we are using a RAID controller with 1GB of cache. If you are not using RAID, "On On" would be a valid test.
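In case it helps anyone reproduce this, here's a minimal sketch (not our actual test harness) of how you could cycle through the disk cache / barrier combinations before each benchmark run. It assumes Linux, ext4, the hdparm utility, and root access; the device path, mount point, and benchmark command (fio with a placeholder job file) are just examples, so adjust them for your own setup.

    #!/usr/bin/env python3
    # Sketch: cycle through disk-cache / barrier combinations and run a
    # benchmark for each one. Device, mount point, and benchmark command
    # are placeholders -- not the setup from the article.

    import subprocess

    DEVICE = "/dev/sdb"                # hypothetical test drive
    MOUNTPOINT = "/mnt/test"           # hypothetical mount point
    BENCHMARK = ["fio", "my-job.fio"]  # placeholder benchmark command

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def set_disk_cache(enabled):
        # hdparm -W1 enables the drive's volatile write cache, -W0 disables it.
        run(["hdparm", "-W1" if enabled else "-W0", DEVICE])

    def mount_with_barriers(enabled):
        # ext4 accepts barrier=1 / barrier=0 as a mount option.
        run(["mount", "-o", "barrier={}".format(1 if enabled else 0),
             DEVICE, MOUNTPOINT])

    for cache in (True, False):
        for barriers in (True, False):
            set_disk_cache(cache)
            mount_with_barriers(barriers)
            print("--- disk cache {}, barriers {} ---".format(
                "On" if cache else "Off", "On" if barriers else "Off"))
            run(BENCHMARK)
            run(["umount", MOUNTPOINT])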