acranox's comments | Hacker News

For a lot of items I’ve looked into, Newegg and Micro Center had comparable prices. Several small items were cheaper at Micro Center, presumably because Newegg marked up the prices of low-cost items to cover the free shipping.


Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5 W each, while an SSD is closer to 5 mW. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2 TB or 4 TB SSDs is still big, but not as bad as at the 8 TB level.
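
A quick sanity check on the "half the power" claim (a sketch; the ~10 W idle figure for the rest of the system is my assumption, not from the comment above):

    # Share of idle power going to HDDs in a small NAS.
    # Assumption: board, RAM, and PSU overhead idle around 10 W.
    N_DISKS, HDD_IDLE_W, SYSTEM_IDLE_W = 4, 5, 10
    hdd_w = N_DISKS * HDD_IDLE_W
    print(hdd_w / (hdd_w + SYSTEM_IDLE_W))  # 0.67 -> two thirds of idle draw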


Going from 5 W to milliwatts is huge, but at the same time, the total cost of ownership over 3-5 years (depending on the cost of the hardware) may not pay off a 5 W spread, especially with SSD premiums.
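
Back-of-the-envelope numbers (a sketch; the $0.15/kWh rate and 24/7 uptime are my assumptions, not figures from the thread):

    # Electricity cost of a 5 W idle-power spread, per drive.
    WATT_SPREAD = 5          # extra idle watts, HDD vs. SSD
    PRICE_PER_KWH = 0.15     # USD, assumed residential rate
    HOURS_PER_YEAR = 24 * 365

    for years in (3, 5):
        kwh = WATT_SPREAD * HOURS_PER_YEAR * years / 1000
        print(f"{years} years: {kwh:.0f} kWh, ${kwh * PRICE_PER_KWH:.2f}")

That works out to roughly $20-$33 per drive over 3-5 years, usually well under the SSD price premium at the larger capacities.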

When it comes to self-hosted servers, for example, using tiny computers as servers often gets you massive power savings that do make a difference compared to buying off-lease rack-mount servers that can idle in the hundreds of watts.


Such power claims are problematic: you're not letting the HDDs spin down, for instance, and you're not crediting the fact that an SSD may easily dissipate more power than an HDD under load. (In this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)


There are a lot of "never let your drive spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities, sadly.

Even the lower-tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin-downs, granted, but it gives an idea of the longevity).
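
For scale, that rating takes decades to exhaust even under an aggressive idle timeout (a sketch; one full cycle every 20 minutes is my assumed worst case):

    # Years to burn through a 600k load/unload cycle rating.
    RATED_CYCLES = 600_000
    CYCLES_PER_DAY = 24 * 60 // 20  # one cycle every 20 minutes = 72/day
    print(RATED_CYCLES / (CYCLES_PER_DAY * 365))  # ~22.8 years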


Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...


Here is someone who had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...

There are many similar articles.


I wonder if they were just hit with the bathtub curve?

Or perhaps the fact that my IronWolf drives are 5400 rpm rather than 7200 rpm means they're still going strong after 4 years of spinning down after 20 minutes, with no issues.

Or maybe I'm just insanely lucky? Before I moved my desktop machine to 100% SSD, I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years, though, before upgrading for more space.


I wonder if it has to do with the type of HDD. The red NAS drives may not like being spun down as much. I spin down my drives and have not had a problem, except for one drive after 10 years of continuous running, but I use consumer desktop drives, which probably expect to be cycled a lot more than NAS drives do.


I experimented with spin-downs, but the fact is, many applications need to write to disk several times per minute. Because of this, I only use SSDs now. Archived files are moved to the cloud. I think Google Drive is one of the best alternatives out there, as it has true data streaming built into the macOS and Windows clients. It feels like an external hard drive.


Letting HDDs spin down is generally not advisable in a NAS, unless perhaps you access it really rarely.


Spin down isn't as problematic today. It really depends on your setup and usage.

If the stuff you access often can be cached to SSDs, you rarely touch the HDDs. Depending on your file system and operating system, only the drives that are actually in use need to spin up. If you have multiple drive arrays with media, some of it won't be accessed as often.

In an enterprise setting it generally doesn't make sense. In a home environment, you generally don't access the data that often. Automatic downloads and seeding change that.


Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...

(see above, same question)


It's probably decades-old anecdata from people who recommissioned old drives that had been on the shelf for many years. The theory is that the grease on the spindle dries up and seizes the platters.


I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.


Did you consider ZFS with L2ARC? The extra caching device might make this possible...


That's not how L2ARC works. It's not how the ZIL SLOG works, either.

If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.

An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.

A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
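
A toy model of that read-path ordering (illustrative only; the names here are made up for the sketch, not ZFS APIs):

    # Read path sketch: OS cache -> ARC -> L2ARC -> disk cache -> disk.
    def read_block(key, os_cache, arc, l2arc, disk_cache, disk):
        for name, tier in (("OS cache", os_cache), ("ARC", arc),
                           ("L2ARC", l2arc), ("disk cache", disk_cache)):
            if tier is not None and key in tier:
                return tier[key], name  # served without touching the platters
        return disk[key], "disk read"   # last resort: a real read

    # read_block("b1", {}, {"b1": b"data"}, None, {}, {"b1": b"data"})
    # -> (b"data", "ARC"); the L2ARC and disk are never consulted.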


BTW, what I mean is that even with my attempts to limit activity, there seems to be enough network activity to wake these drives pretty much continuously.




https://www.bostonglobe.com/2023/06/15/business/subaru-buyer...

“Subaru and another automaker, Kia, have been especially aggressive in resisting the law. While other companies are counting on a long-running federal lawsuit to overturn the statute, Kia and Subaru opted to shut off the features in their vehicles that are covered by the law.”


So in Massachusetts my car doesn't send any telemetry to the manufacturer or random dealerships, and can't receive remote signals to start, stop, brake, or turn on its own? Sounds like a pretty good win. Also, it's rich hearing about security concerns coming from Kia at the moment.


There’s always Backblaze’s SSD stats. https://www.backblaze.com/blog/ssd-edition-2022-drive-stats-...


Good luck finding the same models.

They use them as boot/system drives without a big load. And when you have more than 500 drives, some will die just by pure luck (or lack of it): mishandling, static charge, etc.


The Atlantic had a good article about this last summer. https://www.theatlantic.com/magazine/archive/2022/07/pennsyl...


Even worse was one of mine last year that needed Flash. Apparently we neglected to update it. I can handle ancient Java, but trying to get Flash set up was going to be futile, so I just went to the data center.


That was an old Cisco server, right? I think those models still require Flash even if they're fully updated, and people have to use VMs with Flash installed to access the BMC.


Yep. I think you may be right, it’s EOL and probably doesn’t have any more updates available. I have a VM for when I need old Java, but I was going to need an older VM to run Flash, and that just wasn’t how I wanted to spend my time. :D


If this topic interests you, the “Last Seen” podcast will be worth listening to. https://www.wbur.org/inside/2018/07/19/wbur-and-the-boston-g...


Sparkleshare does something kind of similar. It uses git as the backend to automatically sync directories across a few computers. https://www.sparkleshare.org/
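
The core loop is simple enough to sketch (a hypothetical polling version; Sparkleshare itself reacts to filesystem events, and the repo path below is made up):

    # Git-backed folder sync sketch: commit local changes, then pull/push.
    import subprocess, time

    def sync(repo):
        def git(*args):
            return subprocess.run(["git", "-C", repo, *args],
                                  capture_output=True, text=True)
        git("add", "-A")
        if git("status", "--porcelain").stdout.strip():
            git("commit", "-m", "autocommit")  # commit only if dirty
        git("pull", "--rebase")                # take peers' changes first
        git("push")                            # publish ours

    while True:
        sync("/path/to/shared/folder")
        time.sleep(30)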

