
There's no one cutoff or threshold at which hard drives die. SSDs have already made hard drives smaller than about 320GB completely uneconomical on a pure $/GB basis, and have killed off high-RPM hard drives. SSDs will continue to displace hard drives anywhere that performance or battery life matters.

You can break down drive costs somewhat into the fixed costs (SSD controller, or hard drive spindle motor and actuators) and the costs that vary with capacity (NAND or platters+heads). The fixed costs tend to be lower for SSDs (or at least SATA SSDs), and the variable costs are higher for SSDs because adding NAND is more expensive than another platter.
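
As a minimal sketch of that fixed-plus-variable cost model (every dollar figure below is a made-up illustrative assumption, not a real bill-of-materials number), the crossover capacity is just where the two cost lines intersect:

```python
# Toy fixed-plus-variable cost model for drives. All numbers are
# illustrative assumptions, not real bill-of-materials figures.

def drive_cost(fixed, per_tb, capacity_tb):
    """Total cost = fixed platform cost + capacity-dependent cost."""
    return fixed + per_tb * capacity_tb

# Hypothetical: SSDs have the lower fixed cost (cheap controller),
# HDDs have the lower variable cost (platters beat NAND per bit).
SSD_FIXED, SSD_PER_TB = 10.0, 80.0
HDD_FIXED, HDD_PER_TB = 30.0, 20.0

# Below this capacity the SSD is cheaper outright; as NAND prices
# fall (SSD_PER_TB drops), the crossover moves upward.
crossover_tb = (HDD_FIXED - SSD_FIXED) / (SSD_PER_TB - HDD_PER_TB)
print(f"SSD is cheaper below ~{crossover_tb:.2f} TB")

for tb in (0.25, 0.5, 1, 2, 4, 8):
    ssd = drive_cost(SSD_FIXED, SSD_PER_TB, tb)
    hdd = drive_cost(HDD_FIXED, HDD_PER_TB, tb)
    print(f"{tb:5.2f} TB: SSD ${ssd:7.2f}  HDD ${hdd:7.2f}")
```

With these made-up numbers the crossover lands around 0.33 TB, which happens to line up with the ~320GB figure above; the point is the shape of the model, not the exact values.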

Hard drives smaller than several TB are no longer getting new technology (e.g. you won't find a 2-platter helium drive), so whenever NAND gets cheaper, the threshold capacity below which hard drives don't make sense moves upward.

For the near future, hard drive manufacturers have a clear path to outrun the capacities available from cheap SSDs. NAND flash gets you more bits per mm^2 than a hard drive platter, but platters are far cheaper per mm^2—enough to also be cheaper per bit. I think that relationship will still be true by the time hard drives are using bit-patterned media and 3D NAND is at several hundred layers.
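
To make the per-mm^2 vs. per-bit distinction concrete, here is the same arithmetic with hypothetical numbers (both the densities and the per-area costs are assumptions for illustration only):

```python
# Cost per bit = (cost per mm^2) / (bits per mm^2).
# NAND can win on density yet lose on cost per bit if the platter's
# per-area cost advantage is bigger than its density deficit.
# All numbers below are illustrative assumptions.

nand_gbit_per_mm2    = 5.0     # hypothetical areal density
platter_gbit_per_mm2 = 1.5     # lower density...
nand_cost_per_mm2    = 0.020   # ...but NAND wafer area is pricey
platter_cost_per_mm2 = 0.001   # platters+heads are cheap per mm^2

nand_cost_per_gbit    = nand_cost_per_mm2 / nand_gbit_per_mm2
platter_cost_per_gbit = platter_cost_per_mm2 / platter_gbit_per_mm2

print(f"NAND:    ${nand_cost_per_gbit:.5f}/Gbit")
print(f"Platter: ${platter_cost_per_gbit:.5f}/Gbit")
# Here NAND is ~3.3x denser, but the platter is 20x cheaper per
# mm^2, so the platter still wins per bit by ~6x.
```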

Ultimately, it might make more sense to ask when we will see SSDs having taken over the former hard drive market, with hard drives having moved entirely into the market traditionally occupied by tape.



What’s interesting is that we’re seeing the end of the two-year Moore’s Law doubling cadence at the same time. We’re nearing the physical limits of photolithography as we move to EUV light sources: go much shorter in wavelength and the high per-photon energy brings inherent shot noise, on top of device-level quantum effects. So we might never quite reach the point where SSDs totally take over. Also, even tape seems to be defending its niche from hard drives.
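
The shot-noise point falls straight out of photon energy: for a fixed exposure dose, a shorter wavelength means fewer, more energetic photons, and the relative count fluctuation scales as 1/sqrt(N). A quick back-of-the-envelope (the dose and feature size are illustrative assumptions):

```python
import math

# Photon energy E = hc / wavelength; 1239.84 eV*nm is hc in handy units.
HC_EV_NM = 1239.84
E_CHARGE = 1.602e-19  # joules per eV

def photons_per_area(dose_mj_per_cm2, wavelength_nm):
    """Photons per nm^2 delivered by a given exposure dose."""
    e_photon_j = (HC_EV_NM / wavelength_nm) * E_CHARGE
    dose_j_per_nm2 = dose_mj_per_cm2 * 1e-3 / 1e14  # 1 cm^2 = 1e14 nm^2
    return dose_j_per_nm2 / e_photon_j

# Illustrative assumptions: 30 mJ/cm^2 dose, (15 nm)^2 feature "pixel".
for name, wl in (("ArF 193 nm", 193.0), ("EUV 13.5 nm", 13.5)):
    n = photons_per_area(30, wl) * 15**2
    print(f"{name}: ~{n:.0f} photons/pixel, "
          f"shot noise ~{100 / math.sqrt(n):.1f}%")
# EUV photons carry ~14x the energy, so the same dose delivers ~14x
# fewer photons and ~3.8x the relative shot noise on each feature.
```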

Also, higher-RPM hard drives are still a thing and are holding their own to some degree. The higher RPM helps with rebuild times and reduces the impact of the lower random IOPS. And conventional hard drives do not have the write-endurance limitations that (especially cheaper) SSDs have, although SSD endurance has improved over time.

So I think we’ll just see a continuation of the three-tiered system of storage for many years to come, but with hard drives increasingly disappearing into the cloud and away from consumer devices. SSDs for most things, hard drives for bulk server/cloud storage, and tape still for cold, long-term-stable storage.

I think we’ve already seen a plateau in storage cost reduction, since SSDs are still not cheaper per TB than HDDs. I think we’ll put more effort into efficient storage management in the future, as we can no longer simply rely on capacity doubling every couple of years.


> have killed off high-RPM hard drives

Depends: 15k RPM drives are indeed gone, but 10k RPM drives are still sold and used.


Sold and used, sure. Lots of dead-end enterprise hardware stays in service and officially still available long after it stops making sense. Long validation cycles, etc.

As far as I can tell, WD's 10k RPM drives are discontinued and no longer listed on their site. Seagate lists 10k RPM drives up to 2.4TB and 266MB/s, with a 16GB flash cache. Looking on CDW, it's more expensive than a 3.84TB QLC drive. It uses more power at idle than a QLC SATA SSD under load. I can only imagine a few workloads where the 10k RPM drive would be preferable to the QLC SSD, and I'm not sure the 10k RPM drive would have better TCO than 7200 RPM drives for such uses.
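
Roughly how that comparison pencils out over a service life (the prices and wattages here are assumptions loosely in the ballpark of the figures above, not quotes):

```python
# Rough 5-year cost sketch: 10k RPM HDD vs. QLC SATA SSD.
# All prices and power figures are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365
YEARS = 5
USD_PER_KWH = 0.12  # assumed electricity price

drives = {
    # name: (capacity_tb, unit_price_usd, avg_watts)
    "10k RPM 2.4TB HDD":   (2.4,  450.0, 8.0),
    "QLC 3.84TB SATA SSD": (3.84, 350.0, 3.0),
}

for name, (tb, price, watts) in drives.items():
    energy_kwh = watts * HOURS_PER_YEAR * YEARS / 1000
    total = price + energy_kwh * USD_PER_KWH
    print(f"{name}: ${total:.0f} over {YEARS}y "
          f"(${total / tb:.0f}/TB incl. power)")
```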

Are there any situations that you think still call for 10k RPM drives to be selected, rather than merely kept around due to inertia?


When you don't want to, or can't, use an SSD due to price or reliability concerns.

1) A high-throughput MX/message broker server that is set up once and runs for the next 10 years without needing a drive replacement.

2) A ZFS SLOG (or any similar scratchpad/transaction log) handling 24/7 intensive writes.

In both cases, similarly reliable SSDs (enterprise SSDs) would cost more and require more frequent replacement.
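
A rough way to sanity-check the endurance claim (the sustained write rate and the TBW ratings are illustrative assumptions):

```python
# How often would an SSD need replacing under sustained writes?
# TBW ratings and the write rate are illustrative assumptions.

write_mb_per_s = 50  # assumed sustained 24/7 log traffic
tb_written_per_year = write_mb_per_s * 86400 * 365 / 1e6

ssds = {
    "cheap consumer SSD":       600,   # assumed TBW rating
    "enterprise write-focused": 8000,  # assumed TBW rating
}

print(f"Workload writes ~{tb_written_per_year:.0f} TB/year")
for name, tbw in ssds.items():
    print(f"{name}: worn out in ~{tbw / tb_written_per_year:.1f} years")
# The HDD has no comparable write wear-out mechanism, which is the
# point above: matching that endurance in flash costs real money.
```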


> hard drives having moved entirely into the market traditionally occupied by tape

It'll be interesting to see whether the hyperscale clouds decide to self-manage more HDD functions and dumb down the devices ($), or leave that to the HDD manufacturers ($$).

I imagine there are savings to be had stripping memory & controllers out of drives, when you're deploying in large groups anyway. Similar to what was done with networking kit.

Or maybe this already happens? More so than "RAID-edition" drives.


Hard drives already present a fairly minimal abstraction over the underlying media. For most drives, it can't get any simpler unless the host system software wants to get bogged down with media- and vendor-specific details about things like error correction.

For drives using Shingled Magnetic Recording (SMR), the storage protocols have already been extended to present a zoned storage model, so that drives don't have to be responsible for the huge read-modify-write operations necessary to make SMR behave like a traditional block storage device. I suspect these Host-Managed SMR drives are not equipped with the larger caches found on consumer drive-managed SMR hard drives.
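
To give a feel for what host-managed zoned storage pushes onto the host, here is a toy model of the zone/write-pointer rules (a simplification for illustration, not the actual ZBC/ZAC command set):

```python
# Toy model of host-managed SMR zones: each zone has a write pointer,
# writes must land exactly at it, and rewinding requires a zone reset.
# A simplification of ZBC/ZAC semantics, for illustration only.

class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0
        self.data = [None] * size_blocks

    def write(self, lba, block):
        # Sequential-only: the host must track the write pointer itself
        # instead of relying on drive firmware to read-modify-write.
        if lba != self.write_pointer:
            raise IOError(f"unaligned write: lba={lba}, "
                          f"wp={self.write_pointer}")
        self.data[lba] = block
        self.write_pointer += 1

    def reset(self):
        # Overwriting anything means resetting the whole zone first.
        self.write_pointer = 0
        self.data = [None] * self.size

zone = Zone(size_blocks=4)
zone.write(0, b"a")
zone.write(1, b"b")
try:
    zone.write(0, b"c")  # in-place overwrite: rejected
except IOError as e:
    print("drive says:", e)
zone.reset()             # the only way to reclaim the zone
zone.write(0, b"c")
```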


Depending on what you mean by "dumb down", Seagate has its Kinetic stuff presenting an object API, which could be viewed as simpler, but arguably the drive is doing more.



