
We know the rate of growth of hard drive sizes while keeping the cost equal. We know the same for SSDs.

Did anybody calculate the point where it will not be economical anymore to build hard drives? 1 year? 5 years? 10 years?



There's no one cutoff or threshold at which hard drives die. SSDs have already made hard drives smaller than about 320GB completely uneconomical on a pure $/GB basis, and have killed off high-RPM hard drives. SSDs will continue to displace hard drives anywhere that performance or battery life matters.

You can break down drive costs somewhat into the fixed costs (SSD controller, or hard drive spindle motor and actuators) and the costs that vary with capacity (NAND or platters+heads). The fixed costs tend to be lower for SSDs (or at least SATA SSDs), and the variable costs are higher for SSDs because adding NAND is more expensive than another platter.
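
As a rough sketch of that split (all dollar figures below are made-up assumptions, not real bill-of-materials numbers), you can see where the $/TB crossover lands:

    # Hypothetical cost model: drive price = fixed cost + per-TB cost * capacity.
    # The dollar figures here are illustrative assumptions only.
    def cost_per_tb(fixed, per_tb, capacity_tb):
        return (fixed + per_tb * capacity_tb) / capacity_tb

    # Assumed: SSDs have a low fixed cost (controller) but expensive capacity
    # (NAND); hard drives have a high fixed cost (motor, actuators, enclosure)
    # but cheap capacity (platters + heads).
    SSD_FIXED, SSD_PER_TB = 5.0, 70.0
    HDD_FIXED, HDD_PER_TB = 35.0, 12.0

    for cap in (0.25, 0.5, 1, 2, 4, 8):
        ssd = cost_per_tb(SSD_FIXED, SSD_PER_TB, cap)
        hdd = cost_per_tb(HDD_FIXED, HDD_PER_TB, cap)
        print(f"{cap:5.2f} TB: SSD ${ssd:6.2f}/TB  HDD ${hdd:6.2f}/TB  "
              f"-> {'SSD' if ssd < hdd else 'HDD'} cheaper")

With these assumed numbers the crossover sits around half a terabyte; in reality it keeps creeping upward as NAND gets cheaper.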

Hard drives smaller than several TB are no longer getting new technology (eg. you won't find a 2-platter helium drive), so whenever NAND gets cheaper the threshold capacity below which hard drives don't make sense moves upward.

For the near future, hard drive manufacturers have a clear path to outrun the capacities available from cheap SSDs. NAND flash gets you more bits per mm^2 than a hard drive platter, but platters are far cheaper per mm^2—enough to also be cheaper per bit. I think that relationship will still be true by the time hard drives are using bit-patterned media and 3D NAND is at several hundred layers.
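
To make that per-area vs. per-bit relationship concrete, here is the same arithmetic with purely assumed, order-of-magnitude numbers:

    # Assumed figures for illustration: NAND packs more bits per mm^2, but
    # platter media costs so much less per mm^2 that it still wins per bit.
    nand_gbit_per_mm2, nand_cost_per_mm2 = 8.0, 0.08         # assumed
    platter_gbit_per_mm2, platter_cost_per_mm2 = 1.5, 0.001  # assumed

    print("NAND:    $%.4f per Gbit" % (nand_cost_per_mm2 / nand_gbit_per_mm2))
    print("Platter: $%.4f per Gbit" % (platter_cost_per_mm2 / platter_gbit_per_mm2))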

Ultimately, it might make more sense to ask when we will see SSDs having taken over the former hard drive market with hard drives having moved entirely into the market traditionally occupied by tape.


What’s interesting is that we’re seeing the end of the 2-year Moore’s Law doubling rate at the same time (we’re nearing the limits of the physics of photolithography as we move toward EUV light sources... go to much shorter wavelengths and you get inherent shot noise from the high photon energies, on top of device-level quantum effects), so we might never quite reach the point where SSDs totally take over. Also, even tape seems to be defending its niche from hard drives.

Also, higher RPM hard drives are still a thing and are holding their own to some degree. The higher RPM can help with rebuild times and to reduce the impact of the lower random IOPS. Also, conventional hard drives do not have the write limitations that (especially cheaper) SSDs have, although that has improved over time.

So I think we’ll just see a continuation of the three-tiered system of storage for many years to come, but with hard drives increasingly disappearing into the cloud and away from consumer devices. SSDs for most things, hard drives for bulk server/cloud storage, and tape still for cold, long-term-stable storage.

I think we’ve already seen a plateau in storage cost reduction, since SSDs are still not cheaper per TB than HDDs. I think we’ll put more effort into being efficient with storage management in the future, as we can no longer simply rely on storage capacity doubling every couple of years.


> have killed off high-RPM hard drives

Depends; 15k RPM drives are indeed gone, but 10k RPM drives are still sold and used.


Sold and used, sure. Lots of dead-end enterprise hardware stays in service and officially still available long after it stops making sense. Long validation cycles, etc.

As far as I can tell, WD's 10k RPM drives are discontinued and no longer listed on their site. Seagate lists 10k RPM drives up to 2.4TB and 266MB/s, with a 16GB flash cache. Looking on CDW, it's more expensive than a 3.84TB QLC drive. It uses more power at idle than a QLC SATA SSD under load. I can only imagine a few workloads where the 10k RPM drive would be preferable to the QLC SSD, and I'm not sure the 10k RPM drive would have better TCO than 7200 RPM drives for such uses.

Are there any situations that you think still call for 10k RPM drives to be selected, rather than merely kept around due to inertia?


When you can't or don't want to use an SSD due to price or reliability concerns.

1) A high-throughput MX/message-broker server that gets set up and runs for the next 10 years without needing a drive replacement.

2) A ZFS SLOG (or any similar scratchpad/transaction log) under 24/7 intensive writes.

In both cases, similarly reliable SSDs (enterprise SSDs) would cost more and require more frequent replacements.
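
To put rough numbers on the endurance side of that trade-off (the TBW ratings and the write load below are assumptions, not specs for any particular drive):

    # Hypothetical endurance arithmetic: years until the rated write endurance
    # (TBW) is exhausted at a sustained write rate. HDDs carry no such rating;
    # their lifetime is dominated by mechanical wear instead.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def years_to_wear_out(rated_tbw, write_mb_per_s):
        bytes_per_year = write_mb_per_s * 1e6 * SECONDS_PER_YEAR
        return rated_tbw * 1e12 / bytes_per_year

    # Assumed ratings under a sustained 50 MB/s log-style write load.
    for name, tbw in [("consumer SSD, 600 TBW", 600),
                      ("write-intensive enterprise SSD, 8000 TBW", 8000)]:
        print(f"{name}: ~{years_to_wear_out(tbw, 50):.1f} years at 50 MB/s sustained")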


> hard drives having moved entirely into the market traditionally occupied by tape

It'll be interesting to see whether the hyperscale clouds decide to self-manage more HDD functions and dumb down the devices ($), or leave that to the HDD manufacturers ($$).

I imagine there's some savings to be had stripping memory & controllers out of drives, when you're deploying in large groups anyway. Similar to what was done with networking kit.

Or maybe this already happens? Moreso than "RAID-edition" drives.


Hard drives already present a fairly minimal abstraction over the underlying media. For most drives, it can't get any simpler unless the host system software wants to get bogged down with media and vendor-specific details about things like error correction.

For drives using Shingled Magnetic Recording (SMR), the storage protocols have already been extended to present a zoned storage model, so that drives don't have to be responsible for the huge read-modify-write operations necessary to make SMR behave like a traditional block storage device. I suspect these Host-Managed SMR drives are not equipped with the larger caches found on consumer drive-managed SMR hard drives.
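
A minimal sketch of what that zoned model looks like from the host's side (the zone size is hypothetical; the real interface is the ZBC/ZAC command set, but the semantics are roughly this):

    # Sketch of a host-managed zone: writes must land at the write pointer,
    # and space is only reclaimed by resetting the entire zone.
    class Zone:
        def __init__(self, start_lba, size_blocks):
            self.start = start_lba
            self.size = size_blocks
            self.write_pointer = start_lba  # next writable block

        def write(self, lba, num_blocks):
            if lba != self.write_pointer:
                raise ValueError("writes must be sequential at the write pointer")
            if self.write_pointer + num_blocks > self.start + self.size:
                raise ValueError("write would overflow the zone")
            self.write_pointer += num_blocks

        def reset(self):
            # The only way to make earlier blocks writable again is to
            # discard the whole zone's contents.
            self.write_pointer = self.start

    zone = Zone(start_lba=0, size_blocks=65536)  # hypothetical 256 MiB zone of 4 KiB blocks
    zone.write(0, 128)    # OK: starts at the write pointer
    zone.write(128, 128)  # OK: continues sequentially
    # zone.write(0, 1)    # would raise: random overwrites are now the host's problem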


Depending on what you mean by "dumb down", Seagate has their Kinetic stuff presenting an object API, which could be viewed as simpler, but arguably the drive is doing more.


I've read that unpowered SSDs do not retain data for long periods of time.

I've also been told that we have the technology to get data OFF of hard drives in cases of catastrophic failure. We don't have that capability with SSDs.

So for archiving, I think hard drives should stay a long time.

I think hard drives will be the "tape drives" of the future, relying on capacity more than random i/o speeds.


As someone who services client SANs, I could not disagree harder. I cannot count how many times I have seen a site suffer a power-off and then half the spinners don't power back up, even though the health checks were previously green checkmarks across the board. Most folks plan their RAID for 1 or 2 failures, not 6!

If you want to archive at rest, Blu-ray/DVD or tape in good climate-controlled storage is the only way to do it. Your HDD will spontaneously die, simply because it has moving parts, well before a comparable SSD reaches its write limit.


I agree on the idea of BD/DVD/tape in a well-kept environment for good long term storage. When storing on optical media you'll want to be sure to get an archive-grade storage medium like M-DISC.

https://en.wikipedia.org/wiki/M-DISC


I've had DVD archives become unreadable a few years later and stopped using them. Is your argument that drives that can read them will get better faster than the media degrades?


> in cases of catastrophic failure

I'm not sure it would be economical for bulk/cloud storage. It might be cheaper to just have geo-diverse redundant storage such that a failed drive can be rebuilt anew from redundant copies of the same data.

I'd be interested to see if optical technologies find a niche; they seem the most stable for long-term storage, e.g. M-DISC.

see also: https://arstechnica.com/gadgets/2019/11/microsofts-project-s...


There are physical limitations. An electron trap has a minimum size, so new approaches will have to be discovered to keep making leaps and bounds. I don't think it's reasonable to assume that both magnetic and electric storage can scale toward infinite density.


Read/write speeds will be the limit.

How many years would it take to write 120TB to it?

A 120TB flash array can be written in minutes with enough upstream bandwidth, and takes just one rack.

Tape, on the other hand, has always been speed-limited, and is just fine in its niche with that limitation.


Doesn't take a whole rack. You can literally buy 4 x 30TB 2.5" SSDs and have 120TB.


If you want to be able to read or write the full 120TB in a few minutes instead of 4+ hours, you need to use SSD smaller than 30TB.
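
Back-of-the-envelope, assuming roughly 2 GB/s of sustained write bandwidth per drive (a made-up but plausible per-drive figure; real drives and interfaces vary):

    # Time to fill 120 TB when the data is striped evenly across N drives.
    TOTAL_TB = 120
    PER_DRIVE_GB_PER_S = 2.0  # assumed sustained write bandwidth per drive

    for n_drives, cap_tb in [(4, 30), (15, 8), (30, 4), (60, 2)]:
        seconds = (TOTAL_TB * 1000 / n_drives) / PER_DRIVE_GB_PER_S
        print(f"{n_drives:3d} x {cap_tb:2d} TB drives: {seconds / 3600:5.2f} h "
              f"({seconds / 60:6.1f} min)")

Four 30TB drives land right around that 4-hour mark; getting into "minutes" territory takes many more, smaller drives in parallel.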


Unlike disks flash has the nice advantage that it's not really constrained by form factor though. With a server full of E1.L drives you get strong performance and large storage in a dense space. Of course that's eye-wateringly expensive today but I think the combination of these factors will be what displaces disks rather than purely $$$/TB.


Will take some time...

The PM1643 16TB SSD is about 7 times as expensive as a 16TB disk (€2,600 vs. €360).

The PM1643 actually uses more than twice as much power in use (reading or writing) as a spinning disk. Idling, they both use about the same 5W.


Is that twice the power at full bandwidth? If so, the SSD is still more efficient, because those reads/writes will finish far more than twice as fast, and the SSD can go back to idling.
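
In other words, energy per byte transferred is what matters. With assumed ballpark numbers:

    # Energy per GB moved = active power * time to move 1 GB.
    # All figures below are assumptions for illustration.
    def joules_per_gb(watts, mb_per_s):
        return watts * (1000 / mb_per_s)

    hdd = joules_per_gb(9.5, 250)    # assumed: ~9.5 W at ~250 MB/s sequential
    ssd = joules_per_gb(19.0, 2000)  # assumed: twice the power, 8x the throughput
    print(f"HDD: ~{hdd:.0f} J/GB, SSD: ~{ssd:.0f} J/GB")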


I guess that is active power usage, but considering how much faster SSDs are compared to HDDs, I doubt that scales with the amount transferred (especially for random reads/writes). So what would the power usage be when the total over the active working time is counted?


How much more read/write performance do you get for twice the energy usage? A performance per watt comparison would be interesting here.


Samsung quotes 148 MB/s per watt for the PM1643, for sequential transfers. Hard drive performance tops out around 270 MB/s and draws something less than ~9.5W, so the SSD is 3-5x more efficient for sequential transfers. For random IO, it's several orders of magnitude difference in performance and efficiency.
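
Plugging those figures in (the hard drive wattage here is my assumed ballpark, not a datasheet number):

    # Sequential performance per watt from the figures above.
    ssd_mb_per_watt = 148               # Samsung's quoted figure for the PM1643
    hdd_mb_per_s, hdd_watts = 270, 9.5  # assumed ballpark for a fast 3.5" HDD
    hdd_mb_per_watt = hdd_mb_per_s / hdd_watts
    print(f"HDD: ~{hdd_mb_per_watt:.0f} MB/s per watt")
    print(f"PM1643: ~{ssd_mb_per_watt / hdd_mb_per_watt:.1f}x better, sequential")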

The most power-efficient consumer SSDs are an order of magnitude more efficient for sequential transfers than the Samsung PM1643. Eg: https://www.anandtech.com/bench/SSD18/2460


Depends on use. Queued random access 4kiB block read IOPS can be 3 orders of magnitude different. (120 IOPS vs 500k IOPS for example)



