First off, I’d normally ask this question on a datahoarding forum, but this one is way more active than those and I’m sure there’s considerable overlap.

I have a Synology DS218+ that I got in 2020, so it's a 6-year-old model by now but only 4 years into its service. There's absolutely no reason to believe it'll start failing anytime soon; it's been completely reliable. I'm just succession planning.

I'm looking ahead to my next NAS and wondering whether I should get the new version of the same model (whenever that comes out) or expand to a 4-bay.

The drives are 14 TB shucked WD Easystores, for what it's worth, and not even half full.

What are your thoughts?

  • GlitzyArmrest@lemmy.world · 11 months ago

    The NAS itself will likely outlive the drives inside; that's just the nature of things. Hard drives follow a sort of curve when it comes to failure: most fail either almost immediately or after a few tens of thousands of hours of run time. Other factors include the drives running too hot, the number of hard power events, and vibration.

    Lots of info on drive failure can be found on Backblaze's Drive Stats page. Since you have shucked drives, they're likely white-label WD Red drives, which are close to the 14TB WD drives Backblaze uses.

    • akilou@sh.itjust.works (OP) · 11 months ago

      Yeah, they're Reds. Is there a way I can check how many hours they have on them? 10,000 is just over a year, and they're a couple of years old now.

      I'm not too concerned about them failing; I can afford to replace one without notice, and they're mirrored, plus backed up to some other Easystores.

      • subtext@lemmy.world · 11 months ago

        I believe Synology DSM has a feature for this. Try the Storage Manager app; it should show you SMART info.

      • GlitzyArmrest@lemmy.world · 11 months ago

        That info can be found in the SMART data for the drives. But I didn't mean 10,000 hours, more like 50,000+.

      • SomeoneSomewhere@lemmy.nz · 11 months ago

        I wouldn’t start worrying until 50k+ hours.

        There should be a way to view SMART info, which includes an hour count.

      • Nyfure@kbin.social · 11 months ago

        smartctl
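
        If you'd rather script that than eyeball the raw output, here's a minimal sketch using smartctl's JSON mode (assuming smartmontools 7.0+ and an ATA drive at /dev/sda; the device path is hypothetical, and on a Synology you'd run this over SSH):

        ```python
        # Minimal sketch: pull the power-on-hours SMART attribute via smartctl.
        # Assumes smartmontools >= 7.0 (for --json) and an ATA drive at /dev/sda.
        # Note: smartctl's exit status is a bitmask that can be non-zero even on
        # a successful read, so we parse stdout rather than check the return code.
        import json
        import subprocess

        out = subprocess.run(
            ["smartctl", "-A", "--json", "/dev/sda"],
            capture_output=True, text=True,
        ).stdout

        table = json.loads(out)["ata_smart_attributes"]["table"]
        hours = next(a for a in table if a["name"] == "Power_On_Hours")
        print(f"Power-on hours: {hours['raw']['value']}")
        ```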

        But 10,000 seems on the low side; I have 4 datacenter Toshiba 10TB disks with 40k hours and expect them to do at least 80k. You can have bad luck and one fails prematurely, though. If it's within warranty you can get it replaced; if not, tough luck.

        Always have your data protected with RAID/ZFS and backed up if you value it or don't want a weekend ruined by a reinstall. And with big disks, consider having extra redundancy, as another drive might hit a bit error while you're rebuilding from the failed one (check the statistical error rates in the disk's datasheet); a rough sketch of that rebuild risk follows below.
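
        For a feel of that rebuild math, here's a rough back-of-the-envelope sketch. The 1-in-1e14-bits URE rate is the typical consumer datasheet spec, and real drives usually do considerably better, so treat the result as a worst case:

        ```python
        # Rough odds of hitting at least one unrecoverable read error (URE)
        # while reading an entire 14TB drive back during a mirror rebuild.
        # The URE rate is the typical consumer datasheet spec (1 per 1e14 bits);
        # real-world rates are usually much lower, so this is pessimistic.
        drive_tb = 14
        bits_read = drive_tb * 1e12 * 8   # total bits read during the rebuild
        ure_rate = 1e-14                  # datasheet errors per bit read

        p_clean = (1 - ure_rate) ** bits_read
        print(f"P(rebuild completes with no URE): {p_clean:.1%}")      # ~32.6%
        print(f"P(at least one URE):              {1 - p_clean:.1%}")  # ~67.4%
        ```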

    • passepartout@feddit.de · 11 months ago

      I've got a 12TB Seagate Desktop Expansion which contains a Seagate IronWolf drive. According to the link you shared, I'd better look for a backup drive ASAP.

      Edit: the ones in the Backblaze reference are all Exos models, but I still have no profound trust in Seagate.

      • GlitzyArmrest@lemmy.world · 11 months ago

        Yes, according to their historical data, Seagate drives appear to be on the higher side of failure rates. I've also experienced it myself: my Seagate drives have almost always failed before my WD drives.

  • Bourff@lemmy.world · 11 months ago

    Imho there's no reason to change or upgrade if your current setup works and you're satisfied with it. Keep your money; you'll see what the market has to offer when you actually need it.

  • Prok@lemmy.world · 11 months ago

    My Synology NAS was running for 6+ years before I replaced it last year, and the only reason I replaced it was to upgrade the hardware so it could act more like a home server running some more demanding services.

    I've since given the NAS away to a friend who is still running it… As always, back up your data just in case, but I wouldn't expect the hardware to crap out on you too soon.

    • Red@reddthat.com · 11 months ago

      I still have my DS1812 as well, which I bought for ~1200 when it came out in 2012/2013.

      It only runs NFS/SMB storage services, and it's still an amazing unit. It has been through 7 house moves, 2 complete failures, and about 4 RAID rebuilds.

      Considering it's 2024 now and it's been running for nearly 12 years, it's the reason I recommend paying out the arse for Synology hardware, even if it is overpriced. I still get security patches, and I got a recent (2 years ago?) OS upgrade. It can still run the occasional Docker container for when I need to grab the latest ISOs or run rclone for backups.

      If I bought a new unit, I'd be happy with it for another 10+ years, no doubt, as long as I bought as much RAM as possible to put in it, because the 3GB of RAM in this unit is what really kills the functionality, aside from the now-slow CPU.

      • Nyarlathotep@lemmy.dbzer0.com · 11 months ago

        I have an 1813+ and it’s also been a champ. Unless the computer inside it dies, I will continue to use it indefinitely.

        However, I have offloaded all server duties other than storage to other devices. I don't ask my Syno to run Plex or any other services besides DNS. As long as those SMB shares stay up, it's doing what it needs to do. And gigabit will be fast enough for a long time to come.

    • jasep@lemmy.world · 11 months ago

      Same here. Last year I upgraded from a DS214+, and it was still running great. The only reason I upgraded to the DS220+ was so I could run Docker containers.

      I sold it for $200, which means I ran it for 9 years at about $57 a year (CAD). I'm hoping to get even better bang for the buck with the new unit.

  • mipadaitu@lemmy.world · 11 months ago

    I had a DS212j for about 10 years before I replaced it, and it was still working just fine, so I sold it on eBay. It just couldn't keep up with the transcoding Plex setup I was using it for. Heck, for 7 of those years it was running on a shelf in my garage, getting covered in dust and spiderwebs.

    I imagine a + model will last even longer than that.

  • Ashy@lemmy.wtf · 11 months ago

    I just recently upgraded from my 2-bay NAS, simply because I ran out of storage and attaching more drives via USB just seemed silly at this point (I was already at 5).

    I now have a 12-bay DS2422+ with 6x 20TB drives, and I very much expect the NAS to last past 10 years. HDDs can be added and replaced if you have RAID set up, which isn't very feasible in a 2-bay NAS.

  • metaStatic@kbin.social · 11 months ago

    shucked

    Oh, you are dancing with the devil. I'm not sure there's a way to check actual SMART data in Synology's OS, but I would be very interested in those logs.

    I've found over the years that the second I think about backing up, a drive is about to fail.

    I would upgrade to a 4-bay and invest in actual NAS drives. (And I will personally be looking for 10GbE LAN, but this isn't homelab.)

    • BarbecueCowboy@kbin.social · 11 months ago

      People have tested them long-term at this point. Outside of a few rare exceptions, there's no noticeable difference in reliability between shucked drives and 'normal' drives. They're the same stock, just rebranded, and they have to be cheaper because they're marketed primarily at retail consumers as opposed to enthusiast/enterprise buyers who are willing to pay more.

    • Davel23@kbin.social · 11 months ago

      There’s nothing wrong with shucked drives, and they are frequently relabelled NAS drives anyway.

        • ShepherdPie@midwest.social · 11 months ago

          Packaging a drive for sale in an external enclosure doesn’t make it any more prone to failure compared to one that wasn’t.

          • metaStatic@kbin.social · 11 months ago

            Except you don't know what you're buying.

            The fact that it's typically cheaper than buying the naked drive should tell you everything you need to know about the risk involved.

            • GlitzyArmrest@lemmy.world · 11 months ago

              This is misinformation; I have always known what drives to expect when shucking. Not only that, but you can tell what drive is inside just by plugging the enclosure in and checking before you shuck it. I've shucked over 16 drives so far, and all were exactly as expected.

              The WD drives are white-label, but they're WD Reds. They're cheaper because they're consumer-facing, no more, no less. Have you been bitten by shucking in the past? I'm confused why else you'd say it's a risk. The only risk involved is warranty-related.

            • ShortN0te@lemmy.ml · 11 months ago

              There isn't even any proof from independent media that specially certified drives have a longer lifespan. You can see it when you compare OEM prices for different drives: quite often, datacenter-labeled drives are more expensive than the prosumer drives, because consumers are idiots and buy into marketing.

              There are other problems with shucking, like warranty, but a reliability dice roll certainly isn't one of them.

            • hedgehog@ttrpg.network · 11 months ago

              You have an idea of what you're buying, and you know what you have once you've shucked it. The worst-case scenario is that it's not what you expected, it isn't suited to your use case, you can't find another use for it, and you can't return it… but it's not like anyone is forcing you to add an unsuitable drive to your setup.

            • Nollij@sopuli.xyz · 11 months ago

              That the market buying internal drives is generally willing to pay more for the product than the people buying an external drive? The cost of the parts (AKA the bill of materials, or BOM) is only a small part of what determines the price on the shelf.

              The fact that WD has a whole thing about refusing to honor the warranty (likely in violation of the Magnuson-Moss Warranty Act) should tell you what you really need to know.

  • Afx@lemm.ee · 11 months ago

    Still running a DS210+ I bought second-hand about 8 years ago… It hosts a website and downloads torrents… not much else. I think it's about time I upgraded.

  • rufus@discuss.tchncs.de · 11 months ago

    I'd say 6-12 years, maybe including about 1 hard disk failing along the way; I forget what the mean time to failure is for a hard disk. And in a decade I'll probably have all the disks filled to the brim, my usage pattern will have changed, and a new unit will have 10x the network speed, 4x the storage, and be way faster in every respect.

  • Nollij@sopuli.xyz · 11 months ago

    What do you mean by “last”? I know it’s a common term, but when you dig deeper, you’ll see why it doesn’t really make sense. For this discussion, I’m assuming you mean “How long until I need to buy a newer model?”

    First, consider the reasons you might have for buying a newer model. The first is hardware failure. The second is obsolescence: the device can no longer keep up with newer needs, such as speed, capacity, or interface. The third is losing security updates and vendor support.

    The last one is easy enough to check on the vendor's product lifecycle page; I'll assume this isn't what you're concerned about. Up next is obsolescence. Obviously it meets your needs today, but only you can predict your future needs. Maybe it's fine for a single 1080p* stream today, and that's all you use it for; it will continue to serve that purpose forever. But if your household grows and suddenly you need 3x 4K streams, it might not keep up. Or maybe you'll only need that single 1080p stream for the next 20 years. Maybe you'll hit drive capacity limits, or maybe you won't. We can't answer any of that for you.

    That leaves hardware failure. But electronics don't wear out (mechanical drives do, to an extent, but you asked about the NAS). They don't really have an expected lifespan in the same way as a car battery or an appliance. Instead, they have a failure rate: XX% fail in a given time frame. Even if we assume a bathtub curve (which is a very bold assumption), the point where failures climb is going to be very unclear. The odds are actually very good that it will keep working well beyond that; see the toy numbers sketched below.
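
    To put toy numbers on "failure rate": under a constant annualized failure rate, survival simply compounds year over year. The 1.5% AFR below is an illustrative round figure, not a measured one:

    ```python
    # Toy illustration: under a constant annualized failure rate (AFR),
    # electronics don't hit a wall at some age; the survival odds just
    # compound. The 1.5% AFR is a made-up round number for illustration.
    afr = 0.015
    for years in (5, 10, 15):
        survival = (1 - afr) ** years
        print(f"{years:>2} yr: {survival:.0%} chance still running")
    ```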

    Also of note, very few electronics fail before they are obsolete.

    *Technically it's about bitrate, but let's ignore that detail for simplicity. We'll assume that 4K uses 4x as much space as 1080p.

    TL;DR: It could fail at any moment from the day it was manufactured, or it could outlast all of us. Prepare for that scenario with a decent backup strategy, but don’t actually replace it until needed.

  • mbirth@lemmy.mbirth.uk · 11 months ago

    I bought a Synology DS415+ back in December 2014, so it just turned 9, and it's still kicking (even with the C2000 fix).

    Although Synology has stopped delivering updates, I'll keep it as long as it does what I need it to. However, my next device will be a TerraMaster that I'll install OMV on; you can't get a NAS with a custom OS in a smaller form factor.

  • yeehaw@lemmy.ca · 11 months ago

    I built my 10ish TB (usable, after RAIDZ2) system in 2015. I did some drive swaps, but I think the problem may actually have been a shoddy power cable, and the disks may have been fine.

  • ShortN0te@lemmy.ml · 11 months ago

    The NAS will most likely outlive the software support and, by far, the HDDs you put in it.