First off, I’d normally ask this question on a datahoarding forum, but this one is way more active than those and I’m sure there’s considerable overlap.
So I have a Synology DS218+ that I got in 2020. It's a 6-year-old model by now, but only 4 years into its service. There's absolutely no reason to believe it'll start failing anytime soon, and it's been completely reliable. I'm just succession planning.
I’m looking forward to my next NAS, wondering if I should get the new version of the same model again (whenever that is) or expand to a 4 bay.
The drives are 14 TB shucked Easystores, for what it's worth, and not even half full.
What are your thoughts?
The NAS itself will likely outlive the drives inside; that's just the nature of things. Hard drive failures follow a sort of bathtub curve: most drives fail either almost immediately or after a few tens of thousands of hours of runtime. Other factors include the drives running too hot, the number of hard power events, and vibration.
Lots of info on drive failure can be found on Backblaze's Drive Stats page. I know you have shucked drives; those are likely white-label WD Red drives, which are close to the 14 TB WD drives Backblaze uses.
Yeah they’re reds. Is there a way I can check how many hours they have on them? 10,000 is just over a year. They’re a couple years old now.
I’m not too concerned about them failing; I can afford to replace one without notice, and they’re mirrored. And backed up onto some other Easystores.
I believe Synology DSM has a feature for this. Try the Storage Manager app; it should show you the SMART info.
smartctl
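If you'd rather check from the command line (e.g. over SSH), power-on hours is SMART attribute 9, and its raw value is the last field of the line `smartctl -A /dev/sdX` prints for it. A minimal sketch of pulling the number out, using a made-up sample line (the attribute layout is real smartctl output format, but the hour count here is hypothetical):

```python
# Hypothetical line from `smartctl -A /dev/sda` output for attribute 9;
# the raw value (last field) is the drive's power-on hour count.
sample = "  9 Power_On_Hours  0x0032  075  075  000  Old_age  Always  -  18342"

hours = int(sample.split()[-1])  # last whitespace-separated field
print(hours)                     # 18342
print(round(hours / 8760, 1))    # rough years of continuous runtime (8760 h/year)
```

On a real system you'd feed the output of `subprocess.run(["smartctl", "-A", "/dev/sda"], ...)` through the same parsing; the device path depends on your setup.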
But 10,000 seems on the low side. I have 4 Toshiba 10 TB datacenter disks with 40k hours and expect them to do at least 80k, though you can have bad luck and one fails prematurely.
If it's within warranty you can get it replaced; if not, tough luck.
Always have stuff protected in RAID/ZFS and backed up if you value the data, or if you don't want a weekend ruined because you now have to reinstall everything.
And with big disks, consider having extra redundancy, since another drive might hit a bit error while you're restoring the failed one. (Check the statistical error rates in the disk's datasheet.)
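To put a rough number on that bit-error point: datasheets for consumer drives commonly quote something like < 1 unrecoverable read error (URE) per 10^14 bits read. A back-of-the-envelope sketch of what that figure implies for rereading a full 14 TB drive during a rebuild, assuming that datasheet rate (check your own drive's spec; enterprise drives are often rated 10x better):

```python
import math

capacity_bits = 14e12 * 8   # 14 TB expressed in bits
ure_rate = 1e-14            # assumed datasheet figure: errors per bit read

expected_errors = capacity_bits * ure_rate       # ~1.12 expected UREs per full read
p_at_least_one = 1 - math.exp(-expected_errors)  # Poisson approximation

print(f"{p_at_least_one:.0%}")  # roughly 67% under these assumptions
```

This is a worst-case reading of the spec (real drives often do much better than the quoted rate), but it's why people suggest two-disk redundancy or a verified backup alongside big-drive arrays.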
I wouldn’t start worrying until 50k+ hours.
There should be a way to view SMART info and that includes an hour count.
That info can be found in the SMART data for the drives. But I didn't mean 10,000 hours, more like 50,000+.
I’ve got a 12 TB Seagate Desktop Expansion, which contains a Seagate IronWolf drive. According to the link you shared, I’d better look for a backup drive asap.
Edit: the ones in the Backblaze reference are all Exos models, but I still have no profound trust in Seagate.
Mine aren’t even on the list :(
Yes, according to their historical data, Seagate drives appear to sit on the higher end of failure rates. I’ve also experienced it myself: my Seagate drives have almost always failed before my WD drives.